Chameleonization occurs when a self-learning autonomous mobile robotic (SLAMR) system’s active vision scans the surface on which it is perched, causing the exoskeleton to change colors and exhibit a chameleon effect. Intelligent agents able to adapt to their environment and exhibit key survivability characteristics would owe this ability largely to active vision, which allows the agent to scan its surroundings and adapt as needed in order to avoid detection. The SLAMR system’s exoskeleton would change based on the surface on which it is perched; this is known as the “chameleon effect,” not in the common sense of the term, but in the techno-bio-inspired sense addressed in our previous paper. Active vision utilizing stereoscopic color sensing would enable the intelligent agent to scan an object in close proximity, determine its color scheme, and match it, allowing the agent to blend with its environment. Through its optical capabilities, the SLAMR system would be able to further determine its position, taking into account the spatial and temporal correlation and the spatial frequency content of neighboring structures, further ensuring successful background blending. The complex visual tasks of identifying objects through edge detection, image filtering, and feature extraction are essential for an intelligent agent to gain additional knowledge of its environmental surroundings.
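The color-matching step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the stereoscopic sensor yields a patch of RGB pixel tuples, and the function names (`dominant_color`, `match_exoskeleton`) and the quantization parameter are hypothetical.

```python
from collections import Counter

def dominant_color(pixels, quantize=32):
    """Return the most common quantized RGB color in a sampled surface patch.

    `pixels` is a list of (r, g, b) tuples from the agent's color sensor;
    `quantize` coarsens values so near-identical shades fall into the same
    bucket (an illustrative parameter, not from the paper).
    """
    buckets = Counter(
        (r // quantize * quantize, g // quantize * quantize, b // quantize * quantize)
        for r, g, b in pixels
    )
    return buckets.most_common(1)[0][0]

def match_exoskeleton(pixels):
    """Set the exoskeleton's target color to the dominant surface color."""
    return {"exoskeleton_rgb": dominant_color(pixels)}
```

In practice the dominant color would be computed per region rather than per patch, so that spatial frequency content (e.g. bark texture versus smooth metal) could also be reproduced, as the passage above suggests.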
Autonomous bimodal microsystems exhibiting survivability behaviors and characteristics are able to adapt
dynamically in any given environment. Equipped with a background-blending exoskeleton, such a system has the
capability to stealthily detect and observe a self-chosen viewing area while exercising a measurable form of
self-preservation by either flying or crawling away from a potential adversary. In this capacity the robotic agent
activates a walk-fly algorithm, which uses a built-in multi-sensor processing and navigation subsystem
for visual guidance and selection of the best walk-fly path trajectory to evade capture or annihilation. The research detailed in this
paper describes the theoretical walk-fly algorithm, which broadens the scope of spatial and temporal learning,
locomotion, and navigational performance based on the optical flow signals necessary for flight dynamics and walking
stability. By observing a fly’s travel and avoidance behaviors, and drawing on the reverse-bioengineering
research efforts of others, we were able to conceptualize an algorithm that works in conjunction with decision-making
functions, sensory processing, and sensorimotor integration. Our findings suggest that this highly complex
decentralized algorithm promotes stable in-flight and terrain travel, making it highly suitable for non-aggressive
micro platforms supporting search and rescue (SAR) and chemical and explosive detection (CED)
purposes; a necessity in turbulent, non-violent structured or unstructured environments.
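The walk-fly mode selection driven by optical flow can be illustrated with a toy decision rule. This is a sketch under stated assumptions, not the paper's algorithm: it assumes a single normalized "flow expansion" signal summarizing how fast a looming object grows in the visual field, and the threshold values are invented for illustration.

```python
def choose_locomotion(flow_expansion, fly_threshold=0.8, walk_threshold=0.3):
    """Pick an evasion mode from a normalized optical-flow expansion signal.

    `flow_expansion` approximates looming: 0.0 means a static scene,
    1.0 means imminent collision with an approaching object. The
    thresholds are illustrative assumptions, not values from the paper.
    """
    if flow_expansion >= fly_threshold:
        return "fly"   # fast-approaching threat: take to the air
    if flow_expansion >= walk_threshold:
        return "walk"  # slower threat: crawl away while staying concealed
    return "hold"      # no threat: remain perched and background-blended
```

A real system would fuse many such signals across the visual field (as a fly's decentralized sensorimotor circuits do) rather than thresholding a single scalar, but the rule captures the escalation from holding position, to walking, to flight that the algorithm is meant to produce.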