Fuzzy logic has recently been promoted by many researchers for the design of navigation algorithms for mobile robots. The approach fits well within a behavior-based autonomous systems framework: common-sense rules can be formulated naturally to create rule-based navigation algorithms, and conflicts between behaviors can be resolved by assigning weights to the rules in the rule base. The applicability of these techniques has been demonstrated for robots using sensors such as ultrasonic and infrared detectors. However, the implementation issues surrounding vision-based, fuzzy-logic navigation algorithms do not appear, as yet, to have been fully explored. The salient features that must be extracted from an image for recognition or collision avoidance are highly application dependent, yet the needs of an autonomous mobile vehicle cannot be known fully a priori. Similarly, the understanding of a vision-generated image based on geometric models of the observed objects has an important role to play; however, these issues have not yet been addressed in, or incorporated into, the fuzzy-logic algorithms that have been proposed for navigational control. This paper addresses these issues and proposes a framework intended to clarify the implementation of navigation algorithms for mobile robots that use vision sensors and fuzzy logic for map building, target location, and collision avoidance. The scope for application of this approach is demonstrated.
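The weighted rule-base mechanism mentioned above can be illustrated with a minimal sketch. The behaviors, membership thresholds, and function names below are illustrative assumptions, not taken from this paper: two behaviors (goal seeking and obstacle avoidance) each propose a steering command, and a distance-based fuzzy weight resolves their conflict by a weighted average.

```python
def falling(x, lo, hi):
    """Fuzzy membership: 1 below lo, 0 above hi, linear ramp between.
    Used here to express how strongly an obstacle counts as 'near'."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def fuzzy_steer(goal_bearing, obstacle_bearing, obstacle_dist):
    """Resolve the goal-seeking / obstacle-avoidance conflict by
    weighting each behavior's steering proposal (angles in degrees,
    distance in metres; thresholds 0.5 m and 2.0 m are assumptions)."""
    # Rule strength for avoidance grows as the obstacle gets near.
    w_avoid = falling(obstacle_dist, 0.5, 2.0)
    # Goal seeking dominates when the obstacle is far.
    w_goal = 1.0 - w_avoid
    steer_goal = goal_bearing          # turn toward the goal
    steer_avoid = -obstacle_bearing    # turn away from the obstacle
    # Weighted-average defuzzification of the competing commands.
    return (w_goal * steer_goal + w_avoid * steer_avoid) / (w_goal + w_avoid)
```

With a distant obstacle the goal-seeking rule dominates and the robot steers toward the goal bearing; as the obstacle closes to within the assumed 'near' range, the avoidance rule's weight grows and the blended command swings away from the obstacle.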