Recent research has focused on the development of mobile vision systems and algorithms suitable for very-large-scale integration (VLSI) implementation. Such systems can be used in a wide range of applications. We propose a novel field-programmable gate array (FPGA)-based architecture for early vision. The central idea is to account for two perceptual aspects of visual tasks inspired by biological vision systems: shape and color. To this end, we propose an original approach based on a system implemented in an FPGA connected to a CMOS imager. The proposed methodology for analyzing and optimizing the algorithm implementation under resource constraints makes it possible to implement the algorithm on a single FPGA chip. To prove the concept, the system was implemented and tested on an autonomous mobile platform. The implementation framework also enables direct realization of the algorithm in an application-specific integrated circuit (ASIC).