Many present-day machine vision tasks involve real-time video stream processing at rates on the order of (25 frames / sec) x (640 x 480 pixels / frame) x (24 bit / pixel) = 184 Megabit / sec and higher. Reasonable estimates show that only a very limited number of operations (even quite primitive ones) may be applied to each pixel of every video frame if the required processing rate is to be achieved. Hence, applying algorithms that rely heavily on formal methods (e.g. various iterative approaches, transformations in a multidimensional feature space, etc.) to the whole video stream becomes unaffordable. A practical workaround is the <i>cascade approach</i>. From an architectural viewpoint, cascade processing consists of several predefined stages. A video frame passes through them from the zero stage (the original frame) to the final stage (the processing results). Some stages may be skipped, and the whole processing may be canceled at any stage. Passing between two sequential stages can be viewed as applying some operation to the information that remains to be processed. The keystone of the cascade approach is designing an optimal sequence in which the simplest operations precede the more complex ones, so that the processing mechanism becomes essentially non-uniform and non-linear in terms of processing rates: the bulk of useless data is discriminated in the quick initial stages, while the further analysis applies smart algorithms to a comparatively low data flux. The presentation demonstrates a practical application of the cascade approach to the task of recognizing airplanes' tail identification numbers.
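The staging scheme described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual pipeline: the stage names, the intensity threshold of 128, and the activity criterion are all hypothetical, chosen only to show how cheap early stages cancel processing before an expensive final stage is ever reached.

```python
def stage_motion_filter(frame):
    """Cheap initial stage: discard frames with too little activity.

    `frame` is a flat list of pixel intensities (hypothetical
    representation). Returning None cancels the whole cascade.
    """
    activity = sum(1 for p in frame if p > 128)  # threshold 128 is an assumption
    return frame if activity >= 4 else None


def stage_extract_region(frame):
    """Intermediate stage: keep only the bright region of interest,
    shrinking the data flux passed to later, smarter stages."""
    return [p for p in frame if p > 128]


def stage_expensive_analysis(region):
    """Final 'smart' stage: runs only on the small surviving data.
    Here it just computes the mean intensity as a stand-in for a
    genuinely expensive algorithm."""
    return sum(region) / len(region)


def run_cascade(frame, stages):
    """Pass data through the stages in order; any stage may cancel
    further processing by returning None."""
    data = frame
    for stage in stages:
        data = stage(data)
        if data is None:  # processing canceled at this stage
            return None
    return data


stages = [stage_motion_filter, stage_extract_region, stage_expensive_analysis]
```

In this sketch the bulk of frames never reach `stage_expensive_analysis`: a mostly dark frame is rejected by the first, cheapest test, which is exactly the non-uniform cost profile the cascade approach aims for.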