The elements of an open architecture robot control system developed using Matlab/Simulink and a real-time system are described. It can control almost any robotic system (serial or parallel) with up to six axes, whereas commercial robot controllers are typically designed for serial kinematic systems and can hardly be adapted to control robots with parallel structures.
The described open architecture robot control, programmed in Matlab/Simulink and ANSI-C, is a modular system. To adapt the control to a new robotic structure, it is necessary only to add the transformation algorithms, position control algorithms, inputs and outputs, and machine-specific error states to the pre-programmed modules of the system.
These modules are programmed using Simulink elements, extended by special functions of the real-time system and so-called S-Functions written in C. New functionality can be implemented easily by adding new modules and connecting them to the existing system. A pre-designed graphical user interface provides most of the input buttons and display information needed for a robot control. Graphical buttons or displays can be added and connected to the required Matlab/Simulink signals by drag and drop. An application example of a parallel robot demonstrates the functionality of the control.
To enable lightly staffed or fully autonomous machining operations, both the condition of the cutter and the health of the machine tool system must be known. In this paper, the health of the spindle positioning drive (Z axis) on a Proteo D/94 precision machining center is investigated using time, frequency and time-frequency techniques. The focus is a cogging phenomenon produced when the DC servomotor brushes stick as a result of poor design. This incipient fault reduces the accuracy and controllability of the machine tool and ultimately leads to total drive failure. It is therefore important to determine the fault signature of the drive so that corrective action can be taken before failure occurs, permanently damaging both the motor and the workpiece. The vibratory signatures of both a healthy and a faulty spindle during translation are analyzed. It is shown that a spindle under fault conditions behaves differently from a healthy one, and that time and time-frequency domain methods provide useful information on the status of the system. This paper lays the groundwork for the development of a future machine condition monitoring system that can be easily retrofitted to any machine tool system.
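The time-domain side of such monitoring can be illustrated with a minimal sketch, assuming a sampled vibration signal and a known healthy-baseline RMS level; the function names, window length and 3x threshold below are illustrative assumptions, not values from the paper:

```python
# Sketch: time-domain health indicator for a drive axis. A sliding RMS
# window flags energy bursts such as those produced by brush cogging.
import math

def sliding_rms(signal, window):
    """Root-mean-square energy over a sliding window."""
    out = []
    for i in range(len(signal) - window + 1):
        chunk = signal[i:i + window]
        out.append(math.sqrt(sum(x * x for x in chunk) / window))
    return out

def exceeds_baseline(signal, window, baseline, factor=3.0):
    """True if any windowed RMS exceeds `factor` times the healthy baseline."""
    return any(r > factor * baseline for r in sliding_rms(signal, window))
```

A frequency or time-frequency indicator (e.g. a short-time Fourier transform) would be layered on the same windowing idea.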
Indirect optical methods of mechanical actuation exploit the ability of high-intensity light sources to generate heat and thereby influence the thermal properties of gases, fluids or solids. Optical actuators that utilize this photo-thermal effect to create structural displacement often produce very large power-to-weight ratios. This paper describes the basic concept and operation of two optically driven micro-mechanisms that use the shape memory effect of 50/50 nickel-titanium (NiTi) material to generate the desired force and displacement. Shape memory alloys (SMAs) such as NiTi exhibit reproducible phase transformation effects when undergoing repetitive heating and cooling cycles. Heating a pre-loaded NiTi wire or foil above ambient temperature causes the material to undergo a martensite-to-austenite phase transformation and to move an attached load a distance of approximately 4% of the overall length. The reduction in length can be recovered by cooling the SMA material back to the original temperature. The number of times the NiTi material can exhibit this shape memory effect depends on the amount of strain and, consequently, on the total distance through which the actuating material is displaced. The proposed devices use a focused high-intensity light source to provide both the energy and the control signal needed to activate a simple wire-shaped SMA element in a micro-cantilever beam and an SMA thin film in a diaphragm micropump.
Smart sensors and their networking technology, when applied in a manufacturing environment for monitoring, diagnostics, control, and data/information collection, could dwarf all the advances made so far by the manufacturing community through traditional sensors. Smart sensors can significantly improve automation and reliability through high sensitivity, self-calibration and compensation of non-linearity, low-power operation, digitally pre-processed output, self-checking and diagnostic modes, and compatibility with computers and other subsystem blocks. There is a huge gulf between the existing models of manufacturing systems and the computational models required to correctly characterize manufacturing systems integrated with smart sensor networks. This paper proposes a multi-agent model for the S2IM system. The agent characteristics and the expected model behavior are presented.
Micro-production is meeting new challenges due to the continuing miniaturization of modern products and the increasing variety of emerging hybrid microsystems, which are mainly produced manually. For small lot production, teleoperated micro-assembly systems offer new perspectives in improving manual assembly processes. By using smart sensor information, teleoperated systems enable an operator to feel physically present in a distant environment. In contrast to conventional sensor applications, smart sensors are encapsulated and intelligent sensor modules with integrated functions for data processing, status monitoring and autonomous dynamic parameter adaptation. To investigate the correlation between smart sensor data and immersion, a teleoperated micro-assembly system has been developed. To achieve a close-to-reality impression and to improve the dexterity of the operator, several smart sensor modules, including virtual sensors and shared sensor components, are integrated into the system. If required, sensor signals are enhanced and transformed into other modalities in order to control the micro-assembly system more intuitively. Due to flexibility requirements, all sensors are adaptable to new environments. Visual supervision is achieved through a precise optical system. All sensor components have been tested within an international teleoperation scenario consisting of a local operator in Munich, Germany, and a distant operator in Pittsburgh, USA.
Laser trackers are precision measurement devices often used to measure parts too large for conventional Coordinate Measuring Machines (CMMs). Multiple laser trackers can be used simultaneously to increase the number of part features viewable and therefore available for measurement. Each laser tracker has its own coordinate system that is linked to the others through the measurement of common points. The process of registration uses these common points to bring all measurement data into a Common Coordinate System (CCS). Provided all measurements are in a CCS, any localized part feature measured by more than one laser tracker can benefit from sensor fusion. This process improves the measurement accuracy of a feature location by using the error information associated with each laser tracker. This paper describes the application of sensor fusion and registration algorithms to metrology. Testing of the registration and fusion algorithms is performed using an API laser tracker 2. The algorithms are being commercially implemented in the Maya Metrix Build!IT software.
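The fusion step can be sketched as follows, assuming the measurements are already registered into the CCS and each tracker's error is characterized by a scalar variance; this generic inverse-variance weighting is an illustration of the idea, not the Build!IT implementation:

```python
# Sketch: inverse-variance fusion of one feature position measured by two
# trackers. The lower-variance (more accurate) tracker receives more weight,
# and the fused estimate has lower variance than either input.

def fuse(p1, var1, p2, var2):
    """Fuse two 3-D point estimates of the same feature."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = tuple((w1 * a + w2 * b) / (w1 + w2) for a, b in zip(p1, p2))
    fused_var = 1.0 / (w1 + w2)  # always smaller than min(var1, var2)
    return fused, fused_var
```

With more than two trackers the same weighting extends by summing over all inverse variances.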
Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Because of these two key factors, increased technical complexity and fewer human resources, the people who continue to work in the factory are finding it ever more difficult to deal with issues that involve the production line's sophisticated machine vision equipment. An image processing technology is now available that enables a system to match an operator's subjectivity: a hardware-based implementation of a neural network enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.
The quality of a workpiece is usually related to machining precision. To improve machining accuracy, geometric adaptive control is adopted in conventional CNC (Computer Numerical Control) systems: adjusting machining variables in real time can compensate for errors caused by varying machining conditions. This paper briefly introduces the geometric adaptive control system and proposes a new path generator architecture for CNCs, suitable for real-time error correction.
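The real-time correction idea can be sketched in one control cycle, assuming a measured geometric deviation is available each cycle; the function name, the simple additive compensation and the gain are illustrative assumptions, not the proposed architecture:

```python
# Sketch: one cycle of real-time error correction in a path generator.
# The measured deviation, scaled by a compensation gain, is subtracted
# from the nominal setpoint so the tool converges back to the contour.

def correct(setpoint, error, gain=1.0):
    """Return the compensated setpoint for the current control cycle."""
    return tuple(s - gain * e for s, e in zip(setpoint, error))
```

In a real controller this step would run inside the interpolation loop, with the error supplied by in-process measurement.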
A polarized-laser phase-shifting method is adopted in this single-frequency laser interferometer to overcome the “zero drift” of laser intensity in conventional single-frequency laser interferometers. Measurement stability and repeatability are improved by means of a common optical path arrangement, and the resolution of the interferometer is improved by an optical-path-difference doubling technique. The main factors affecting length measurement precision are analyzed and calculated experimentally, and the results obtained are presented.
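For reference, the standard displacement-from-phase relation for a single-frequency interferometer, and its modification under optical-path-difference doubling, can be sketched as follows (generic symbols, not necessarily the paper's notation):

```latex
d = \frac{\lambda}{2}\,\frac{\Delta\varphi}{2\pi}
\qquad\longrightarrow\qquad
d = \frac{\lambda}{4}\,\frac{\Delta\varphi}{2\pi}
\quad\text{(with optical-path-difference doubling)}
```

Halving the scale factor per phase increment is what doubles the resolution of the instrument.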
Fixtures are designed to accurately locate and secure a part during machining operations so that the part can be manufactured to design specifications. To reduce the design costs associated with fixturing, various computer-aided fixture design (CAFD) methods have been developed over the years to assist the fixture designer. One approach is case-based reasoning (CBR), in which relevant design experience is retrieved from a design library and adapted to provide a new fixture design solution. Indexing design cases is a critical issue in any CBR approach, and CBR systems can suffer from an inability to distinguish between cases if indexing is inadequate. This paper presents a CAFD methodology that takes a rigorous approach to defining the indexing attributes, adopting Axiomatic Design “Functional Requirement” decomposition. Thus, a design requirement is decomposed into functional requirements, physical solutions are retrieved and adapted for each individual requirement, and the design is then re-constituted to form a complete fixture design. Furthermore, adaptability is used as the basis on which designs are retrieved, in place of the usual attribute-similarity approach, which can sometimes return a case that is difficult or impossible to adapt.
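Retrieval by adaptability rather than similarity can be sketched as follows, assuming each library case carries a per-functional-requirement estimate of adaptation cost; the case structure, cost values and penalty are illustrative assumptions, not the paper's data model:

```python
# Sketch: CBR retrieval ranked by estimated adaptation cost. A case that is
# superficially similar but cannot satisfy a requirement is penalized so
# heavily that an easily adaptable case wins instead.

def retrieve(cases, requirements, unsupported_cost=1000.0):
    """Return the case with the lowest total adaptation cost over the
    decomposed functional requirements."""
    def total_cost(case):
        return sum(case["adapt_cost"].get(fr, unsupported_cost)
                   for fr in requirements)
    return min(cases, key=total_cost)
```

Under attribute similarity, a near-but-unadaptable case could rank first; costing adaptation directly avoids that failure mode.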
A detailed procedure is given for developing and implementing a risk analysis to protect the operations of an enterprise from all types of threats. The procedure includes the performance of asset analysis and threat analysis, the analysis of the annual loss expectancy, the identification of security measures, and the evaluation of these measures.
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain appears to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and separation of figure from ground, are special kinds of network transformations: they convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification and analogy with higher-level model-based reasoning in a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on these principles, an Image/Video Understanding system can convert images into knowledge models and resolve uncertainty and ambiguity, enabling intelligent computer vision systems for design and manufacturing.
In this paper, a new methodology for multi-view stereovision inspection is presented, based on a binocular 3-D vision inspection system. We demonstrate that this methodology can improve the accuracy of image measurement through an adjustment calculation applied to pictures obtained by rotating the measured object; depth information is retrieved by computing the parallax of corresponding feature points, the so-called homologous points, between two pictures. Camera calibration data and actual measurement results are provided at the end of the paper.
We experimentally investigated the man-machine interface using the Micro-LabSat developed by NASDA and launched in 2002. The velocity of ground operators' eye movements was used to analyze their mental condition. Eye movement was observed using an Eye Mark Recorder. In this paper, we describe the results of our evaluation of the man-machine interface.
Extensive research has been conducted on intelligent time series forecasting, including many variations on the use of neural networks. However, investigation of model adequacy over time, after the training process is completed, remains to be fully explored. In this paper we demonstrate how a smoothed-error tracking-signal test can be incorporated into a neuro-fuzzy model to monitor the forecasting process and serve as a statistical measure for keeping the forecasting model up to date. The proposed monitoring procedure is effective in detecting nonrandom changes due to model inadequacy, lack of unbiasedness in the estimation of model parameters, or deviations from existing patterns. This detection device will result in improved forecast accuracy in the long run. An example data set is used to demonstrate the application of the proposed method.
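A smoothed-error tracking signal in the spirit described above (Trigg's signal is one classical form) can be sketched as follows; the smoothing constant and control limit are illustrative assumptions, not the paper's settings:

```python
# Sketch: smoothed-error tracking signal for forecast monitoring.
# Both the signed error and the absolute error are exponentially smoothed;
# their ratio stays near zero for unbiased forecasts and approaches +/-1
# when the errors become systematically one-sided.

def tracking_signal(errors, alpha=0.1, limit=0.5):
    """Return a per-period flag: True where |smoothed error| / smoothed
    absolute error exceeds the control limit."""
    e_s, mad = 0.0, 1e-9  # smoothed error, smoothed mean absolute deviation
    flags = []
    for e in errors:
        e_s = alpha * e + (1 - alpha) * e_s
        mad = alpha * abs(e) + (1 - alpha) * mad
        flags.append(abs(e_s) / mad > limit)
    return flags
```

Note the ratio is unreliable for the first few periods until the smoothers warm up; a practical monitor would ignore that start-up window.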
Returnable transport packaging plays an important role in facilitating the transfer of large volumes of products in a closed-loop distribution network. To make effective use of returnable transport packaging, vehicle dispatching strategies are crucial: with an appropriate strategy, for example, a fast turnover time and a short waiting time for packaging dispatch can be achieved. However, several factors directly influence vehicle dispatching strategies, including fluctuations in arrival demand, the availability of serving vehicles, and the geographic proximity of the facility to customer locations. In this study, the authors investigate the effect of these factors on vehicle dispatching strategies for transport packaging using a simulation modeling approach. This paper reports the different performance outcomes obtained through various test cases.
Database mining, widely known as knowledge discovery and data mining (KDD), has attracted a lot of attention in recent years. With the rapid growth of databases in commercial, industrial, administrative and other applications, it is necessary and interesting to extract knowledge automatically from huge amounts of data. Almost all organizations are generating data and information at an unprecedented rate, and they need to derive useful information from this data. Data mining is the extraction of non-trivial, previously unknown and potentially useful patterns, trends, dependencies and correlations, known as association rules, among data values in large databases.
In the last ten to fifteen years, data mining has spread from one company to another to help them understand more about customers' perceptions of quality and responsiveness, and to distinguish the customers they want from those they do not. A credit-card company found that customers who complete their applications in pencil rather than pen are more likely to default. There is a program that identifies callers by purchase history: the bigger the spender, the quicker the call is answered. If you feel your call is being answered in the order in which it was received, think again.
Many algorithms assume that data is static in nature and mine the rules and relations in that data. But for a dynamic database, e.g. in most manufacturing industries, the rules and relations developed among the variables/items no longer hold true. A simple approach is to re-mine the associations among the variables after every fixed period of time, but how long that period should be is itself a question to be answered. A further problem with static data mining is that some relationships of interest from one period to the next may be lost after a new set of data is used. To reflect the effect of the new data and the current status of the association rules, where some strong rules may become weak and vice versa, an efficient algorithm is needed that adapts to the current patterns and associations.
Some work has been done on developing association rules for incremental databases, but to the best of the author's knowledge no work has addressed periodic cause-and-effect analysis for online association rules in manufacturing industries. The present research attempts to answer these questions and to develop an algorithm that can display association rules online, find the periodic patterns in the data, and detect the root cause of the problem.
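The goal of maintaining rule strength online, rather than re-mining from scratch, can be illustrated with a minimal sketch that keeps running counts for pairwise rules; the class name and pairwise restriction are illustrative assumptions, not the proposed algorithm:

```python
# Sketch: incrementally maintained support counts for pairwise association
# rules. Each new transaction is folded into the running counts, so a rule's
# confidence can be re-evaluated at any time without rescanning the data.
from collections import Counter
from itertools import combinations

class OnlineRules:
    def __init__(self):
        self.n = 0                    # transactions seen so far
        self.item_count = Counter()   # support counts for single items
        self.pair_count = Counter()   # support counts for item pairs

    def add(self, transaction):
        """Fold one new transaction into the running counts."""
        items = sorted(set(transaction))
        self.n += 1
        self.item_count.update(items)
        self.pair_count.update(combinations(items, 2))

    def confidence(self, a, b):
        """Confidence of rule a -> b under the data seen so far."""
        pair = self.pair_count[tuple(sorted((a, b)))]
        return pair / self.item_count[a] if self.item_count[a] else 0.0
```

Because the counts update per transaction, a rule whose confidence decays over time is visible immediately, which is exactly the drift the static approach misses.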
Telepresence and teleoperation permit sensing and interacting with a remote and potentially hazardous environment without the difficulty of getting there, being there, and then returning safely. Previous telepresence demonstrations have employed only a single remote device or vehicle which, if it experiences difficulty, may require human intervention for rescue, or be abandoned if the rescue is too hazardous. Deploying multiple remote devices or vehicles opens the opportunity for interaction that improves the chances of mission success. With a sufficiently large number of remote devices or vehicles, whose interaction is conveyed over high-speed internet links, a large body of simultaneous remote users can result. Imposing an access fee structure can make such an enterprise economically self-supporting when conducted on a sufficiently large scale, with levels of interaction ranging from active participant to active viewer to passive viewer, each carrying a corresponding access fee. Experiences in achieving group telepresence among a small fleet of teleoperated vehicles are discussed, as are simple solutions to complex issues of inter-vehicle awareness. A general economic model is presented for a large-scale "telepresence safari" that is economically self-supporting. The potential for large-scale Lunar telepresence is also discussed.
This paper compares ship-dismantling processes in India and the U.S. The information for India was collected during an informal visit to the ship-dismantling sites in Alang, India; the information for the U.S. was obtained from the MARAD report. For a 10,000-ton passenger ship, the Indian contractor makes a profit of about 24%, compared with a loss of about 15% in the U.S. The U.S. loss is primarily due to high labor costs, compliance with safety and health regulations, and the lack of a market for used components and scrap metal.
This paper describes a database tool for Dismantling of Obsolete Vessels (DOVE). DOVE 1.0 consists of three databases: a) the Obsolete Vessels Database (OVD), b) the Metals and Alloys Database (MAD), and c) the Cutting Technology Database (CTD). The OVD provides information on ship name, type, year built, number, status, light displacement, length, beam, changes made, dead weight, number of propellers, propulsion type, and vessel location. The MAD provides information on several metals and alloys, and the CTD contains information on cutting technologies, decontamination technologies, and waste processing methodologies. DOVE 1.0 runs on an IBM-compatible personal computer and was implemented in Visual Basic 6.0 using Microsoft Access as the database.
Taking a multi-resolution approach, this research proposes an effective algorithm for aligning a pair of scans obtained by scanning an object's surface from two adjacent views. The algorithm first encases each scan in the pair with an array of cubes of equal, fixed size. For each scan, a surrogate scan is created from the centroids of the cubes that encase the scan. The Gaussian curvatures of points across the surrogate scan pair are compared to find surrogate corresponding points: if the difference between the Gaussian curvatures of any two points on the surrogate scan pair is less than a predetermined threshold, those two points are accepted as a pair of surrogate corresponding points. The rotation and translation between the surrogate scan pair are determined from a set of surrogate corresponding points, and the same rotation and translation are used to align the original scan pair. The resulting registration (or alignment) error is computed to check the accuracy of the scan alignment. When the registration error becomes acceptably small, the algorithm terminates; otherwise the process continues with cubes of smaller and smaller size. At each finer resolution, the search space for finding surrogate corresponding points is restricted to the neighborhoods of the surrogate points found at the preceding coarser level. As the resolution becomes finer and finer, the surrogate corresponding points converge to the true corresponding points on the original scans. This approach offers three main benefits: it improves the chances of finding the true corresponding points on the scans, minimizes the adverse effects of noise in the scans, and reduces the computational load for finding the corresponding points.
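The surrogate-scan construction at one resolution level can be sketched as follows, assuming a scan is a list of (x, y, z) points; the function name and the use of axis-aligned cubes keyed by integer indices are illustrative assumptions:

```python
# Sketch: build the surrogate scan for one resolution level by binning the
# scan points into axis-aligned cubes and replacing each occupied cube with
# the centroid of its points. Halving cube_size yields the next, finer level.
from collections import defaultdict

def surrogate_scan(points, cube_size):
    """Return one centroid per occupied cube; the centroids form the
    coarse surrogate of the scan."""
    bins = defaultdict(list)
    for p in points:
        key = tuple(int(c // cube_size) for c in p)
        bins[key].append(p)
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in bins.values()]
```

Each iteration of the full algorithm would match surrogate points by Gaussian curvature, estimate the rigid transform from the matches, apply it to the original scan, and repeat with the cube size halved until the registration error is acceptably small.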