In a robotised warehouse, as in any place where robots move autonomously, a major issue is the localization and detection of human operators when they enter the robots' work area. This paper introduces a wearable human localization system for large warehouses that utilizes the preinstalled infrastructure used for localization of automated guided vehicles (AGVs). A monocular down-looking camera detects ground nodes, identifies them, and computes the absolute position of the human, allowing safe cooperation and coexistence of humans and AGVs in the same workspace. A virtual safety area is set up around the human operator, and any AGV entering this area is immediately stopped. To avoid triggering an emergency stop due to the short distance between robots and human operators, the robots' trajectories have to be modified so that they do not interfere with the human. The purpose of this paper is to demonstrate an absolute visual localization method that works in the challenging environment of an automated warehouse, with low light intensity and a constantly changing environment, using solely a monocular camera placed on the human body.
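The virtual safety area described above can be sketched as a simple proximity check between the operator's localized position and each AGV. This is only a minimal illustration; the function name, the list-based AGV interface, and the safety radius value are assumptions, not the paper's implementation.

```python
import math

def agvs_to_stop(human_xy, agv_positions, safety_radius_m=2.0):
    """Return indices of AGVs inside the virtual safety area around the operator.

    human_xy        -- (x, y) absolute position from the wearable localization system
    agv_positions   -- list of (x, y) AGV positions (hypothetical interface)
    safety_radius_m -- assumed radius of the safety area; the paper fixes no value
    """
    hx, hy = human_xy
    return [i for i, (x, y) in enumerate(agv_positions)
            if math.hypot(x - hx, y - hy) <= safety_radius_m]

# Example: AGVs 0 and 2 are within 2 m of the operator and would be stopped.
stopped = agvs_to_stop((0.0, 0.0), [(1.0, 0.0), (5.0, 5.0), (0.0, 1.5)])
```

In practice such a check would run continuously, with the AGV fleet controller replanning trajectories to keep vehicles outside the area rather than relying on emergency stops.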
Acquiring knowledge about the operating environment is among the most challenging problems that autonomous mobile robots solve. The quality of the resulting model depends on the number and type of sensors used and on the precision with which the robot knows its position in the environment. The occupancy grid is among the most common low-level models of the environment, as it is considered a highly robust approach for fusing noisy data and data from different kinds of sensors. This paper primarily introduces a novel method for building an occupancy grid from a monocular color camera, including its automatic calibration. The other part of the work describes a method for fusing the camera data with data from a sonar rangefinder. The presented methods were experimentally verified on an indoor experimental robot at the Czech Technical University facilities.
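The sensor fusion mentioned above is commonly done per grid cell in log-odds form, since the same Bayesian update applies regardless of whether an occupancy probability comes from the camera or the sonar. The following is a generic sketch of that standard update rule, not necessarily the paper's exact fusion method; the inverse-sensor probabilities are assumed inputs.

```python
import math

def logodds(p):
    """Convert a probability to log-odds representation."""
    return math.log(p / (1.0 - p))

def fuse_cell(prior_logodds, sensor_p):
    """Bayesian log-odds update of one occupancy-grid cell.

    l_new = l_old + log(p / (1 - p)), where sensor_p is the occupancy
    probability reported by the inverse sensor model (camera or sonar).
    """
    return prior_logodds + logodds(sensor_p)

def to_prob(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Start from an unknown cell (p = 0.5, i.e. log-odds 0), then fuse a
# camera reading (p = 0.7) and a sonar reading (p = 0.8).
l = fuse_cell(fuse_cell(0.0, 0.7), 0.8)
p = to_prob(l)  # agreeing sensors push the cell toward "occupied"
```

Because the update is additive and sensor-agnostic, the occupancy grid acts as a convenient common layer for heterogeneous sensors, which is the property the abstract refers to.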