The use of industrial robots for measuring and testing is becoming increasingly significant as a component of flexible production. In the early stages of their development, robots were used mainly for monotonous and repetitive tasks such as handling and spot welding. Thanks to improvements in working precision as well as in control and regulation technologies, it is possible today to employ robots as flexible, sensor-assisted and even "intelligent" tools for measuring and testing. As a result, however, much higher accuracy is demanded of the robots used for such purposes. In addition, robot measurement and acceptance test requirements have become more exacting. The present paper is based on recommendations developed through the cooperative work of the Association of German Engineers (VDI/GMA); the relevant working group is entitled "Industrial Robots - Measurement and Inspection", and the author is its chairman. Apart from the technical equipment involved, the use of industrial robots for measuring purposes also calls for the devising and programming of appropriate measuring strategies. In this context, the planning and implementation of measuring projects have to be discussed along with software reliability and on-line/off-line programming strategies. Four different uses of robots for measuring and testing are presented and illustrated by examples.
A system is described which is based on a unique weld image sensor design which integrates the optical system into the weld end effector to produce the so-called "coaxial view" of the weld zone. The resulting weld image is processed by a flexible, table driven vision processing system which can be adapted to detect a variety of features and feature relationships. Provision is made for interactive "teaching" of image features for generation of table parameters from test welds. A table driven control program allows various vision control strategies to be invoked. The main result of the system is a level of emulation of the capability of the expert welder or welding operator, essential to successful precision welding robotization.
A case is made for the use of regular hexagonal sampling systems in robot vision applications. With such a sampling technique, neighbouring pixels reside in equidistant shells surrounding the central pixel and this leads to the simpler design and faster processing of local operators and to a reduced image storage requirement. Connectivity within the image is easily defined and the aliasing associated with vertical lines in the hexagonal system is not a problem for robot vision. With modern processors only a minimal time penalty is incurred in reporting results in a rectangular coordinate system, and a comparison between equivalent processing times for hexagonal and rectangular systems implemented on a popular processor has shown savings in excess of 40% for hexagonal edge detection operators. Little modification is required to TV frame grabber hardware to enable hexagonal digitisation.
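The equidistance property claimed for the hexagonal neighbourhood can be verified numerically. The following is a minimal sketch, assuming axial hex coordinates and a unit lattice pitch; the function and constant names are illustrative, not from the paper:

```python
import math

# Axial-coordinate offsets (q, r) of the six neighbours in a hexagonal lattice.
HEX_NEIGHBOURS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_to_cartesian(q, r, pitch=1.0):
    """Map axial hex coordinates to rectangular (x, y) coordinates."""
    x = pitch * (q + r / 2.0)
    y = pitch * (math.sqrt(3.0) / 2.0) * r
    return x, y

# All six hexagonal neighbours of a pixel lie on one equidistant shell ...
dists = [math.hypot(*hex_to_cartesian(dq, dr)) for dq, dr in HEX_NEIGHBOURS]
assert all(abs(d - 1.0) < 1e-9 for d in dists)

# ... whereas the rectangular 8-neighbourhood mixes distances 1 and sqrt(2).
rect = {math.hypot(dx, dy)
        for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)}
```

The single-shell neighbourhood is what makes local operators simpler to design on the hexagonal grid, and `hex_to_cartesian` illustrates the cheap conversion used to report results in rectangular coordinates.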
An industrial vision system capable of recognizing non-overlapping parts is presented. The system operates in a moderately unconstrained environment in terms of lighting and object surface characteristics. Typical images exhibit non-homogeneous object surfaces, transparency, shadows and specular reflections. The input image is mapped into a ternary image corresponding to the brightness of each pixel relative to the estimated background. A bounding rectangle is fitted to each segmented object by aligning its sides parallel to the principal axis of the object. The shape recognition system uses features extracted from the projections of each segmented object onto the vertical and horizontal axes of the bounding rectangle, and from the projections of the skeleton of the segmented object. The classification scheme is chosen so that perfect segmentation is not a requirement of the system. Two unconnected object configurations can be recognized. Results are shown for an object domain in which different versions of objects within the same class exhibit different shape and brightness features, and objects in different classes exhibit resembling features. For 37 test images of tools, some with multiple tools in the image, the vision system successfully classified each tool into one of six classes.
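The ternary mapping step admits a direct sketch, assuming a single global background estimate with a tolerance band; the paper's actual background-estimation procedure is not reproduced here:

```python
def to_ternary(image, background, tol):
    """Map each grey level to -1 (darker than the estimated background),
    0 (background), or +1 (brighter), given a tolerance band."""
    return [[-1 if p < background - tol else (1 if p > background + tol else 0)
             for p in row]
            for row in image]

# A dark pixel, a background pixel, and a bright pixel in one row.
ternary = to_ternary([[10, 50, 90]], background=50, tol=10)
```

Segmentation then proceeds on the ternary image, so moderate lighting variation only shifts pixels within the background band rather than corrupting object regions.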
A model-based three-dimensional (3-D) vision system is introduced. Range data of objects in a scene are recovered by passive trinocular stereo. Two depth maps are independently produced from two pairs of stereo images, and 3-D points are obtained by merging the two depth maps. The scene description method is as follows. First, short segments are formed from the 3-D points that satisfy a proximity condition, namely that both the 3-D distance between two points and the 2-D distance between the corresponding 2-D points on an edge image be below their respective thresholds. Next, scene features are extracted by connecting the short segments successively under proximity and direction conditions. The representations of 3-D objects are built with a solid model based on surface boundary representations; the model extends a conventional solid model in its geometrical representations. Shape matching is performed by a hypothesize-and-verify method: first, two prominent scene features are matched to the stored model features, and the matching is then verified with the other scene features. The results of matching are used to determine the location and orientation of the object in 3-D space. Experimental results with a complex object are shown.
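The proximity condition for forming short segments can be sketched as a single predicate; the point formats and threshold values below are assumptions for illustration:

```python
import math

def proximate(p3a, p3b, p2a, p2b, t3, t2):
    """True when two 3-D points may be joined into a short segment:
    both their 3-D distance and the 2-D distance between the
    corresponding points on the edge image must fall below the
    respective thresholds t3 and t2."""
    return math.dist(p3a, p3b) < t3 and math.dist(p2a, p2b) < t2
```

Requiring both distances to be small prevents points that happen to be near in 3-D but belong to unrelated image edges from being chained into one segment.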
In the last few years, there has been a growing interest in the areas of telemanipulation, task planning, and, in a more general sense, in efficient interfaces to robot systems. In this context, telerobotics studies how control can be shared between a human operator and an intelligent robot controller. This research investigates the problems associated with manipulation and, to a certain extent, programming errors in a shared operator/computer control of a robot system. The principle is to trace all actions, at run time, to provide on-line detection and recovery of errors. A world model is constructed and maintained for the purpose of predicting the effects of actions and signaling errors when the actual outcome of an action differs from its required effect. Default reasoning is used extensively to speed up processing and compensate for the high cost of sensing. After a task planner has dealt with the general organisation of the program, the system presented here has the responsibility of coping with variations of the real world to attain the desired goal with the given plan. A test case, overhead power line maintenance, demonstrates the functioning of the system and, although the work is based on this particular context, the scheme described comprises a generic "substrate" which deals with common basic robot actions such as move and grasp and is supplemented by task and environment specific knowledge such as which parts can be mated, sizes, and weights. This part of the system is static for a given task and a good portion of it, the substrate, is valid for a wide range of tasks.
At the core of a robotics system is the ability to acquire, integrate, and interpret multisensory data to generate appropriate actions in order to perform a given task. In this paper, we show the feasibility of a realistic autonomous manipulation task using multisensory information. The robotic system used here is a six-degree-of-freedom industrial robot to which we added a number of sensors (vision, range, sound, proximity, touch, force, and torque) to enhance its inspection and manipulation capabilities. The cooperative use of sensors and the manipulation capabilities of this robotic system are shown through an experiment involving a task panel that includes a number of indicators (such as analog and digital meters) and controls (such as valves and push-buttons). The control of the robot, sensory data acquisition, processing, and interpretation, as well as global task supervision, are performed by a dedicated VAX 11/785 computer using high-level languages. By integrating vision, range, proximity, touch, force, and torque sensors, many manipulation experiments are accomplished autonomously and successfully. The hardware and software modularity of this robotic system, the generic nature of its building blocks, as well as its reconfigurability make it an ideal tool for industrial multisensor robotic experimentation.
Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing a higher-level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray-scale images and can be viewed as a model-based system. It includes general-purpose image analysis modules as well as special-purpose, task-dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.
A large class of man-made objects are characterized by a hierarchical composition consisting of geometric features, special markings and kinematics of movable parts. This hierarchy provides a natural algorithmic control structure that successively constrains the image processing, defining at each level the image region of interest and the significant patterns to be identified. A demonstration of this principle is provided in this paper using Task Panel images. The Task Panel has several movable doors, panels and latches. The model-based vision system determines the "state" of the Task Panel from a single input image. Furthermore, the vision system is integrated with a robot planning system that controls a T-3 746 manipulator as it performs simple tasks such as opening and closing doors on the Task Panel.
The nonuniqueness of perceiving hidden lines from a single line drawing of a solid is illustrated by examples. We assume the solids are trihedral polyhedra without holes and the drawing has been labeled. Our problem is to determine the gradients of the visible and hidden faces as well as to hypothesize the topology of the hidden part. We make the plausible assumption that no hidden face, whose boundary is completely hidden, exists. Under this assumption, we show that the number of possible hidden graphs is finite and is a Catalan number. We then report four search trees for enumerating all possible hidden graphs. The trees, except the first one, are minimal in the sense that no two nodes of the trees represent identical subgraphs. The entropy of a hidden subgraph is then defined. The entropy of an embedding of a hidden subgraph is modeled as the variety of the exterior angles at vertices. This formulation allows both past experience and context to be incorporated in a statistical manner. We then report a minimum-entropy beam search to find nonunique solutions in order of their naturalness. Finally, we propose a surface construction paradigm based on this theory and the shape-from-contour heuristics.
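The Catalan count of possible hidden graphs can be checked against the standard recurrence; this is a minimal sketch of the number sequence itself, not of the paper's search trees:

```python
def catalan(n):
    """n-th Catalan number via C_0 = 1, C_{k+1} = C_k * 2(2k+1) / (k+2)."""
    c = 1
    for k in range(n):
        c = c * 2 * (2 * k + 1) // (k + 2)
    return c

# The first few values: 1, 1, 2, 5, 14, 42, ...
counts = [catalan(n) for n in range(6)]
```

The rapid growth of this sequence is what motivates the paper's minimal search trees and the entropy-ordered beam search: exhaustive enumeration of hidden graphs quickly becomes expensive.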
Manufacturing industries have greatly emphasized the need to "integrate" various manufacturing functions (Design, Planning, and Business Operations) into a unified and well-coordinated system, so as to increase productivity. In order to achieve this goal, the various CAD/CAM systems must share a common engineering and manufacturing knowledge base. We propose a Knowledge Based Manufacturing System (KBMS) that will help the Manufacturing Engineer (ME) to directly model the workcell environment. The system consists of two main modules: the Workcell Modelling Facility and the Task Planner. The Workcell Modelling Facility helps the ME create workcell models using the workcell components, product parts, and manufacturing operations contained in a pre-defined knowledge base. The system also allows the manufacturing engineer to add information to the existing knowledge base schemas. The Task Planner accesses these knowledge bases to generate a network of proposed actions from a given production goal. Integration of the proposed KBMS with a Geometric Modelling System will provide the ME with a tool to perform off-line animation of the manufacturing process in a particular workcell model. A prototype KBMS is currently being implemented at the University of Florida using the Object-oriented Semantic Association Model (OSAM*) as the underlying data model for the knowledge bases. OSAM* provides the object-oriented features of inheritance and encapsulation of data, as well as the ability to represent complex relationships between object classes in semantic nets.
A variety of problems in automated packaging and processing seem ready for expert robotic solutions. Such problems as automated palletizing, bin-picking, automated storage and retrieval, automated kitting of parts for assembly, and automated warehousing are currently being considered. The use of expert robots, which consist of specialized computer programs, manipulators and integrated sensors, has been demonstrated with robot checkers, peg games, etc. Actual solutions for automated palletizing, pit-carb basket loading, etc. have also been developed for industrial applications at our Center. The generic concepts arising from this research will be described, unsolved problems discussed, and some important tools demonstrated. The significance of this work lies in its broad application to a host of generic industrial problems where it can improve quality, reduce waste, and eliminate human injuries.
Intelligent Robotic Systems are a special type of Intelligent Machines. When modeled based on the theory of Intelligent Controls, they are composed of three interactive levels, namely organization, coordination, and execution, ordered according to the Principle of Increasing Intelligence with Decreasing Precision. Expert System techniques are used to design an Intelligent Robotic System Organizer with a dynamic Knowledge Base and an interactive Inference Engine. Task plans are formulated using either or both of a Probabilistic Approach and a Forward Chaining Methodology, depending on pertinent information associated with a specific requested job. The Intelligent Robotic System Organizer is implemented and tested on a prototype system operating in an uncertain environment. An evaluation of the performance of the prototype system is conducted based upon the probability of generating a successful task sequence versus the number of trials taken by the organizer.
A goal of Distributed Artificial Intelligence (DAI) has been the development of heuristics for problem-solving by logically distributed components (agents). The roles of organizational structure, communication and planning in addressing the central issue of coherence are discussed in the context of representative DAI simulation systems. Despite the range of DAI research, few organizing principles have emerged. We attribute this lack to a reliance on human models of cooperative processes. As the effectiveness of the models has broken down, improvements have come through incremental, compensatory changes, rather than through the development of new models. We argue for the importance of a higher level view of distributed problem-solving.
This paper addresses error recovery as a planning problem. A discussion of previous research in planning systems shows that plan generation has been emphasized, with some inclusion of error recovery as part of the plan. This paper extends this work to provide execution monitoring and plan generation for error recovery. A causal reasoning model is presented. During task plan generation, the model generates a network of nodes consisting of activities and states, called a causal net of activities and states. The same network is used to monitor the execution of the task plan. Since the process is modelled on its cause-and-effect relationships, the difficulties in detecting a failure and classifying it can be reduced. Once the failure has been properly classified, the system generates the appropriate recovery plan based on knowledge about the relationships between the state representing the failure and the recovery task. Using this model, the process of embedding the recovery-task knowledge into the planning system can be done in conjunction with the process of building the task knowledge base. A simple manufacturing application example is presented.
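The monitoring role of such a causal net, comparing the observed state after each activity against its expected effect and mapping a mismatch to a recovery task, can be sketched as below. All activity, state and recovery names are invented for illustration and are not the paper's data structures:

```python
# A minimal causal-net sketch: each activity node records the state it is
# expected to cause and the recovery task associated with its failure.
causal_net = {
    "pick_part": {"expects": "part_in_gripper", "recovery": "retry_pick"},
    "move_part": {"expects": "part_above_fixture", "recovery": "replan_path"},
    "insert":    {"expects": "part_seated", "recovery": "realign_and_insert"},
}

def monitor(activity, observed_state):
    """Return None on success, or the recovery task on a detected failure."""
    node = causal_net[activity]
    if observed_state == node["expects"]:
        return None
    return node["recovery"]
```

Because detection and classification both reduce to comparing an observed state with the expected one, the recovery knowledge can indeed be attached to the net at the same time the task knowledge base is built.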
The advancements in machine vision technology have been substantial in recent years with the introduction of faster processors and improvements in sensor technology. A depth map can be obtained with both direct and indirect methods. The former recover depth directly from ranging devices; the latter recover 3-D information by means of shape-from-X techniques and stereopsis. Our idea consists of integrating information from two different sources: local shading analysis and stereo vision. At present this alternative method has been tested with satisfactory results on conventional hardware, but it is impractical in terms of computing time. The use of advanced parallel hardware is certainly suitable for achieving real-time response, but its cost is not justified for some application fields where response time is not very critical. An alternative choice can fall on low-cost and simple architectures that allow a configuration to achieve the required speed/cost ratio for a particular vision application by using a combination of standard modules. In this paper our method for depth recovery is analyzed in order to identify the steps critical for computing time. These are expressed in terms of computations suitable for standard and special-purpose modules.
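A per-pixel integration of the two depth sources can be sketched as a confidence-weighted combination. The weighting scheme below is an assumption for illustration, not the paper's actual integration method:

```python
def fuse_depth(stereo, shading, conf):
    """Per-pixel fusion of two depth estimates: trust the stereo value in
    proportion to its confidence (conf in [0, 1]) and fall back on the
    shading-based estimate elsewhere. Illustrative weighting only."""
    return [[c * s + (1.0 - c) * h
             for s, h, c in zip(srow, hrow, crow)]
            for srow, hrow, crow in zip(stereo, shading, conf)]

# One row: a low-confidence stereo pixel blends with shading; a
# high-confidence pixel keeps the stereo value.
fused = fuse_depth([[1.0, 4.0]], [[3.0, 2.0]], [[0.5, 1.0]])
```

The per-pixel independence of this step is also what makes it a natural candidate for the standard parallel modules discussed above.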
This paper describes the overall design of a robot audio system we have realized. The system enables robots to understand three kinds of sentences input by speech, limited to robot-relevant situations: declarative sentences, imperative sentences and questions. Furthermore, we present the implementation strategy for its robot-plan subroutine and its speech recognition subroutine.
The most suitable programming level is certainly the one closest to the human way of reasoning, which corresponds to the task level in robotics. Unfortunately, the automatic solution of a large number of complex problems is required at this level. Starting from this consideration, we opted for the design of a more reliable robot programming language (the Robot Motion Structured Language) at a slightly lower level, generally referred to as the "object level", which could gradually grow upward towards the task level as more experience was acquired in automatic problem solving. The main advantages of RMSL lie in its independence from work-cell devices and its reduced, though complete, instruction set, which enables inexperienced users to write programs in a compact form. At present, our programming language does not include multi-robot cooperation and sensor handling, though it can address a wide range of applications.
In this paper we show that learning schemes can be utilized to generate robot grasping points. These learning schemes can be based on the geometrical similarity of objects and the functional similarity of tasks. This approach will drastically increase the speed of the search process and enrich the system's knowledge base. A Neural Network model that acquires data from a solid modeling database is suggested. This model combines the completeness of information provided by solid modeling with the uncertainty encountered in the grouping process to perform geometrical classification of objects.
The selection of material handling equipment for different manufacturing components is largely dependent on the characteristics of the component to be manipulated. The tasks of designing or selecting material handling equipment usually depend on the experience of the engineer and the equipment available. In this research, the group technology concept is applied to record and organize material handling information. Expressions related to material handling, such as the weight, size and configuration of the component, are included in a general-purpose group technology classification and coding system. Components can be grouped into part families according to their material handling properties. Material handling equipment can then be designed for a group of components or selected based on the similarities of a group of parts. In addition, a multi-objective clustering method, based on goal programming theory, is utilized for more effective information searching. This approach assists the engineer in designing material handling equipment or selecting available equipment. Industrial application shows that this approach reduces the design time cycle for material handling equipment and increases the utilization of available facilities.
This paper discusses the application of a heuristic technique for stacking regular and irregularly shaped objects in the same container or on the same pallet. The computer representation of any object is based on the recursive octree method, where each unit volume element is a voxel. The choice of the space taken by an object of any shape within the volume is then made through the heuristic approach. The heuristic technique developed is an evaluation function that compares all the available spaces based on weighting factors and threshold levels. The parameters used are shape, space available, contents of the object, and dimensions. The goal is to choose the most feasible available space every time an object is ready to be stacked. The heuristic algorithm is implemented within a knowledge-based system to control a flexible material handling cell. The cell generally comprises a material handling robot, a conveyance system that brings the objects, distributed randomly, to the cell, a vision system to identify the objects and verify the stacking procedure, and a computer to control and initiate the decision-making process for stacking objects of all shapes in the same volume.
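The evaluation function can be sketched as a weighted score with a rejection threshold. The factor names, weights and threshold below are illustrative assumptions, not the paper's actual parameters:

```python
# Assumed weighting factors over normalized space-quality measures in [0, 1].
WEIGHTS = {"fit": 0.5, "support": 0.3, "accessibility": 0.2}
THRESHOLD = 0.6  # spaces scoring below this level are rejected outright

def score(space):
    """Weighted sum of the quality measures of one candidate space."""
    return sum(WEIGHTS[k] * space[k] for k in WEIGHTS)

def choose_space(spaces):
    """Return the best-scoring admissible space, or None if none qualifies."""
    admissible = [s for s in spaces if score(s) >= THRESHOLD]
    return max(admissible, key=score, default=None)

spaces = [
    {"fit": 0.9, "support": 0.8, "accessibility": 0.7},
    {"fit": 0.3, "support": 0.2, "accessibility": 0.1},
]
best = choose_space(spaces)
```

Evaluating every available space against the same weighted criteria is what lets the cell pick "the most feasible available space" each time an object arrives, regardless of its shape.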
Special attention must be given to the planning of robotic assembly. The objective is to assemble the product correctly the first time. A look-ahead procedure is therefore required. This paper describes a computer aided system for planning robotic grasping strategies for small rotational parts.
In this paper, I present an approach to representing plans that make on-line decisions about resource allocation. An on-line decision is the evaluation of a conditional expression involving sensory information as the plan is being executed. I use a plan representation called RS [10,11,12] that has been especially designed for the domain of robot programming, and in particular, for the problem of on-line decisions. The resource allocation example is based on the robot assembly cell architecture outlined by Venkataraman and Lyons [16]. I begin by setting forth a definition of on-line decision making and some arguments as to why this form of decision making is important and useful. To set the context for the resource allocation example, I take some care in categorizing the types of on-line decision making and the approaches adopted by other workers so far. In particular, I justify a plan-based approach to the study of on-line decision making. From that, the focus shifts to one type of decision making: on-line allocation of robot resources to task plans. Robot resources are the physical manipulators (grippers, wrists, arms, feeders, etc.) that are available to carry out the task. I formulate the assembly cell architecture of Venkataraman and Lyons [16] as an RS plan schema, and show how the on-line allocation specified in that architecture can be implemented. Finally, I show how the on-line allocation of logical resources, that is, a physical resource plus some model information, can be used as a non-traditional approach to some problems in robot task planning.
In this paper we focus on the problems encountered in the precision assembly of electronic components on printed circuit boards (PCBs). A pilot assembly cell has been constructed which is provided with sensory integration and sensory data processing capabilities. Various sensors, such as vision and small fiber-optic sensors distributed at suitable places in the system, have been used for environmental detection and monitoring. Two principles have been used to solve the assembly problem: dividing the problem into subproblems using an object-oriented structure, and applying goal-driven, hierarchically divided assembly operations. The task performance of the assembly system is divided into the following main blocks: Knowledge base, Assembly planning, Blackboard, and Task control and monitoring. The knowledge base comprises the assembly data generated for the PCB during the CAD phase, e.g. hole positions and types of components to be assembled. The CAD data are complemented with descriptions of components and environment and with data for assembly operations. During the design phase of the assembly, detailed primitive operations and the assembly sequence are generated. The primitive operations include handling of the sensor information connected to movement and action commands. The PCB-based assembly sequence, component data and state of the assembly phase are maintained utilizing the blackboard principle. Frames are used in the system to describe peripheral devices, sensors, assembly components, and actions of the robotic cell.
An intelligent, sensor-based robot assembly system is discussed. It includes two algorithms which have already been developed, concerning component scheduling and the interpretation of 3-D visual information used to construct an on-line model of the robot's workspace.
This research explores case-based reasoning for robotic assembly cell diagnosis. The case-based reasoning approach to cell diagnosis differs from the case-based reasoning approach to general diagnosis problems, which have enough past cases or examples. Since failure cases of a robotic assembly cell are not generally available in the early stage of cell operation, cell failures must be artificially generated from the design information of the cell structure and assembly sequence; the analysis of the generated cell failures is then used for cell diagnosis. The case representation, case management and the search for the appropriate case in the case database are studied with an example of a robotic assembly cell. The failure cases are hierarchically distributed and multiply indexed within the hierarchical causal model of the robotic assembly cell. The diagnostic performance gradually increases as the diagnostic case database grows.
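The multiple indexing of failure cases can be sketched as a symptom-to-case index with ranked retrieval. The case names and symptoms below are invented for illustration and are not the paper's actual cases:

```python
# Sketch of multiply-indexed failure cases: each case is reachable
# through any of its symptom keys.
cases = {
    "C1": {"symptoms": {"gripper_open", "part_dropped"},
           "diagnosis": "gripper fault"},
    "C2": {"symptoms": {"part_dropped", "misalignment"},
           "diagnosis": "fixture fault"},
}

# Build the inverted index from symptom to the cases it points at.
index = {}
for name, case in cases.items():
    for s in case["symptoms"]:
        index.setdefault(s, set()).add(name)

def retrieve(observed):
    """Return case names ranked by the number of matching symptom indices."""
    hits = {}
    for s in observed:
        for name in index.get(s, ()):
            hits[name] = hits.get(name, 0) + 1
    return sorted(hits, key=hits.get, reverse=True)
```

Because each case is reachable through several indices, a partially observed failure still retrieves candidate cases, and the ranking improves naturally as the case database grows.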