Many robotic applications in space will require vision systems to identify objects and determine their distance and orientation. Current vision systems that identify objects by analyzing shape, texture, or motion require the power of special-purpose hardware and are subject to many difficulties as the object parameters change with distance and orientation. In this paper, these difficulties are addressed through the development of a special target label whose characteristics supply all the information needed to determine the identity, range, orientation, and geometrical characteristics of the object. The scene segmentation algorithms used in the system are tuned to recognize target labels as distinct objects. Given that the geometry of the labels is known a priori and is constant, an inverse perspective transform can then be applied to determine the range and orientation of the target label from the camera. A bar code within the label provides an index into a database describing the object to which the label is attached. The database can provide a complete description of the object, including approach and grasp locations, along with a complete CAD description of the object. The system runs on an inexpensive desktop computer and promises to make many robot vision tasks both practical and affordable.
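As a minimal sketch of the range and orientation recovery, assuming a pinhole camera and a square label of known side length (this simple foreshortening model and the function name are illustrative, not the paper's full inverse perspective transform):

```python
import numpy as np

def estimate_range_and_tilt(corners_px, label_side_m, focal_px):
    """Estimate range and tilt of a square target label of known size.

    corners_px: 4x2 pixel coordinates of the label corners (TL, TR, BR, BL).
    Assumes a pinhole camera; tilt about one axis foreshortens one pair
    of edges while leaving the other pair nearly unchanged.
    """
    c = np.asarray(corners_px, dtype=float)
    top = np.linalg.norm(c[1] - c[0])    # projected top edge (pixels)
    left = np.linalg.norm(c[3] - c[0])   # projected left edge (pixels)
    longest = max(top, left)             # edge least affected by tilt
    range_m = focal_px * label_side_m / longest          # pinhole similar triangles
    tilt_rad = np.arccos(np.clip(min(top, left) / longest, -1.0, 1.0))
    return range_m, tilt_rad
```

For a full 6-DOF pose, all four corner correspondences would be fed to a perspective-n-point solver instead; the sketch above only shows why known, constant label geometry makes the problem well-posed.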
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.* To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
The Capaciflector, a capacitive proximity sensor, is being developed by NASA for collision avoidance. The Capaciflector provides a single output value, a measure of the shift in its base frequency, that is characteristic of an external object in the sensor's field of view. An attempt is made to use the Capaciflector for imaging at an operating range of 1 to 2 inches. By arranging sensors in a grid pattern and electronically activating the sensors over the grid one at a time, an image characteristic of the object is obtained. The article describes the Capaciflector experimental imaging system and preliminary results obtained with it.
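The scan-one-sensor-at-a-time scheme can be sketched as follows, where `read_sensor` stands in for a hypothetical driver call that activates and reads a single grid element (activating only one sensor at a time avoids mutual coupling between neighboring elements):

```python
def acquire_image(read_sensor, rows, cols):
    """Scan a grid of proximity sensors one at a time into an image.

    read_sensor(r, c) is a hypothetical driver call returning the
    frequency-shift value of the sensor at grid position (r, c).
    """
    image = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            image[r][c] = read_sensor(r, c)  # activate, read, deactivate
    return image
```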
The ability of a telerobotic manipulator to operate in confined spaces while avoiding unwanted collisions is enhanced by the accurate sensing of its proximate environment. To achieve the fidelity required for precise manipulator control, a proportional proximity sensor system with a sufficiently large measurement envelope is required. Current proximity sensors provide a binary indication of the presence of obstacles within a small envelope with coarse or no proportional measurement of their location. A proportional proximity sensor system configured as a Frequency Modulated Continuous Wave (FMCW) Coherent Laser Radar (CLR) using a semiconductor laser as the energy source is described and analyzed. The source and reflected energies mix coherently to generate a radio frequency (RF) signal whose frequency is proportional to the range. The system is tested for accuracy, range, depth of range, speed, and sensitivity and the results are presented. Techniques to derive orientation information and an application to telerobotic control are also described.
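The range relation underlying such a system can be sketched as follows, assuming a linear frequency sweep (this is the standard FMCW identity, not specific to the paper's hardware):

```python
C = 299_792_458.0  # speed of light (m/s)

def fmcw_range(f_beat_hz, sweep_bw_hz, sweep_period_s):
    """Range from an FMCW beat frequency.

    For a linear sweep of bandwidth B over period T, a target at range R
    delays the return by 2R/c, producing a beat frequency
    f = 2*R*B/(c*T); inverting gives R = f*c*T/(2*B).
    """
    return f_beat_hz * C * sweep_period_s / (2.0 * sweep_bw_hz)
```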
A new Pseudo Random Noise (PN) modulated CW diode laser radar system is being developed for real-time ranging of targets at both close and long distances (greater than 10 km) to satisfy a wide range of applications, from robotics to future space applications. Results from computer modeling and statistical analysis, along with some preliminary data obtained from a prototype system, are presented. The received signal is averaged for a short time to recover the target response function. It is found that even with uncooperative targets, based on the design parameters used (200-mW laser and 20-cm receiver), accurate ranging is possible up to about 15 km, beyond which the signal-to-noise ratio (SNR) becomes too small for real-time analog detection.
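The correlation processing behind PN ranging can be sketched as follows (an illustrative digital simulation of the principle; the prototype's analog detection chain differs):

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def pn_range(pn_code, received, chip_time_s):
    """Recover target range by circularly correlating the transmitted
    PN code against the received signal; the lag of the correlation
    peak gives the round-trip delay in chips, hence the range."""
    code = np.asarray(pn_code, float) * 2 - 1  # map {0,1} -> {-1,+1}
    rx = np.asarray(received, float) * 2 - 1
    # Circular cross-correlation via the FFT correlation theorem.
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code))).real
    lag = int(np.argmax(corr))                 # delay in chips
    return lag * chip_time_s * C / 2.0         # one-way range (m)
```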
A novel technique for automatic elevation model computation is introduced. The technique employs a binocular camera system and an algorithm termed hierarchical feature vector matching to derive an elevation model, as well as to compute the interframe correspondences for tracking. It is argued that this algorithm unifies the procedures of range estimation (i.e., stereo correspondence), tracking (i.e., interframe correspondence), and recognition (input/model correspondence). This technique is demonstrated using a physical model of the Mars surface and a binocular camera system with seven geometrical degrees of freedom. This system provides a tool to generate realistic test imagery for the mock-up of a spacecraft approaching the landing site. The trajectory of the spacecraft can be predefined and is controlled by a computer interfaced to seven motorized positioners. Several experiments defined to estimate the accuracy of the computer vision system are reported.
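The final triangulation step behind such an elevation model can be sketched under the standard rectified stereo model (the paper's hierarchical feature vector matching supplies the correspondences; this function is only the textbook depth relation):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from binocular disparity for a rectified stereo pair:
    Z = f * b / d, with focal length f in pixels, baseline b in meters,
    and disparity d in pixels."""
    return focal_px * baseline_m / disparity_px
```

Per-pixel depths computed this way, minus a ground-plane reference, yield the elevation model.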
The Flight Telerobotic Servicer (FTS) was developed to enhance and provide a safe alternative to human presence in space. The first step for this system was a precursor development test flight (DTF-1) on the Space Shuttle. DTF-1 was to be a pathfinder for manned flight safety of robotic systems. The broad objectives of this mission were three-fold: flight validation of the telerobotic manipulator (design, control algorithms, man/machine interfaces, safety); demonstration of dexterous manipulator capabilities on specific building block tasks; and correlation of manipulator performance in space with ground predictions. The DTF-1 system is comprised of a payload bay element (7-DOF manipulator with controllers, end-of-arm gripper and camera, telerobot body with head cameras and electronics module, task panel, and MPESS truss) and an aft flight deck element (force-reflecting hand controller, crew restraint, command and display panel and monitors). The approach used to develop the DTF-1 hardware, software, and operations involved flight qualification of components from commercial, military, space, and R&D programs (hand controller, end-of-arm tooling, force/torque transducer) and the development of the telerobotic system for space applications. The system is capable of teleoperation and autonomous control (advancing the state of the art); is reliable (two-fault tolerant); and is safe (man-rated). Benefits from the development flight included space validation of critical telerobotic technologies and resolution of significant safety issues relating to telerobotic operations in the Shuttle bay or in the vicinity of other space assets. This paper discusses the lessons learned and technology evolution that stemmed from developing and integrating a dexterous robot into a manned system, the Space Shuttle. Particular emphasis is placed on the safety and reliability requirements for a man-rated system, as these are the critical factors which drive the overall system architecture.
Other topics focused on include: task requirements and operational concepts for servicing and maintenance of space platforms; origins of technology for dexterous robotic systems; issues associated with space qualification of components; and development of the industrial base to support space robotics.
Space Station Freedom (SSF) will be assembled in the 1995 to 2000 time period, when permanently manned capability (PMC) will be achieved. During the build phase and after PMC, the Mobile Servicing System (MSS) will be used as a tool to assist the crew in the assembly and in all maintenance aspects of SSF. Operation of the MSS will be executed and controlled by the on-orbit crew, thereby drawing on limited crew time and resources. The current plan specifies that the MSS will not be operable when crew are not present. Simulations have been carried out to quantify the maintenance workload expected over the life of SSF. These simulations predict a peak in maintenance demand occurring even before PMC is achieved. The MSS is key to executing those maintenance tasks, and as a result, the demands on MSS crew resources will likely exceed availability, thereby creating a backlog of maintenance actions and negatively impacting SSF effectiveness. Ground operated telerobotics (GOT), the operation of the MSS from the ground, is proposed as an approach to reducing the anticipated maintenance backlog and to reducing crew workload when the MSS is executing simple or repetitive tasks. GOT would be implemented in a phased approach, both in the type of activity carried out and in the point of control, which would gradually pass from the on-orbit crew to ground personnel. The benefits of GOT are expressed in terms of reduced on-orbit crew workload, greater availability of the MSS during the post-PMC period, and the ability to significantly reduce or even eliminate any maintenance action backlog. The benefits section compares GOT with crew operation timelines and identifies other benefits of GOT. Critical factors such as safety, space-ground communication latency, simulation, operations planning, and design considerations are reviewed.
The Space Station Remote Manipulator System (SSRMS) is an element of the Mobile Servicing System (MSS), which is Canada's contribution to Space Station Freedom. The SSRMS is a large, flexible, mechanical arm with seven degrees of freedom, measuring approximately 17.6 meters when fully extended, and is intended to manipulate a wide range of payload masses up to and including the Shuttle orbiter. A project is proposed to address the robotics evaluation and characterization (REACH) of the SSRMS in orbit. The objectives of this project are to establish the characteristics of the robotics parameters, including structural dynamics parameters, over the SSRMS work volume. In the collection and analysis of data, extensive use is to be made of the MSS baseline instrumentation, namely, joint resolvers, force/moment sensors, motor currents, and especially the video system. A measurement systems testbed (MST) is planned to define, test, and validate the REACH system concept, the SSRMS measurement systems capabilities, calibration methods for the measurement systems, data processing algorithms, the in-orbit experiments/tests, and the in-orbit operational scenarios and configurations. This paper describes the development of the MST and discusses its current status. The MST will be a full-scale mock-up of the SSRMS and will have joint resolvers and a similar complement of video cameras. It will also have independent measurement systems in order to validate the measurements and parameter identification processes. Though the MST arm will not possess powered joints for manipulation, it will be geometrically similar. Like many large space structures, the SSRMS is unable to support itself on the ground; consequently, it is proposed to suspend the MST arm from a soft spring suspension mechanism. A unique feature of the MST will be its ability to emulate the three-dimensional kinematics of the SSRMS. In addition, some important dynamics properties will be simulated, such as the first one or two modal frequencies. To facilitate manual reconfiguration of the MST, the supporting mechanism for the arm will be equipped with seven mechanical degrees of freedom.
The growing insight into the complexity and cost of in-orbit operations of future space missions strengthens the belief that a significant amount of automation will be needed to operate the orbital laboratories in a safe, efficient, and economic way. Thus, Automation & Robotics (A&R) technology is vital for unmanned exploration missions to comets and planets. While part of the space worksite may be structured, the space environment is generally unstructured. By `structured,' we mean environments that are designed and engineered to somehow `cooperate' with the machine. In addition, the structured part of the space worksite may be damaged or in an unknown condition. This lack of structure, as well as the non-repetitive nature of the tasks, requires constant adaptation to the space environment by the robot. This is the motivation for increased space robot autonomy. However, complete autonomy is still beyond the scope of today's state of the art in the case of a system executing a complete mission in a hazardous environment such as space. A systematic approach for the development of A&R technologies will reduce the lead times and costs of facilities for recurrent basic tasks. A space robotic workcell (SRW) is a collection of robots, sensors, and other industrial equipment grouped in a cooperative environment to perform various complex tasks in space. Due to their distributed nature, the control and programming of SRWs is often a difficult task. The issues involved in designing a real-time teleprogrammable SRW system that performs intervention tasks at remote unstructured sites are summarized. The concept of `remotely operated autonomous robots' (i.e., robots teleprogrammed and telesupervised at the task level while at a space worksite) is also developed via telepresence for the human-machine interface and voice/speech programming. This paper assesses the role that teleprogramming may have in furthering the automation capabilities of space teleoperated robotic devices. Finally, programming results for our SRW system are presented.
This paper presents the design of a knowledge-based task planning system for the special purpose dexterous manipulator (SPDM). The SPDM is a component of Canada's contribution to the International Space Station Project, the Mobile Servicing System (MSS), which will assist in assembling, servicing, and maintaining the Space Station Freedom through the use of advanced robot systems. A general description of a supervisory control system for the SPDM is presented. The supervisory control system includes space based and ground control systems. Knowledge-based task planning is performed within the ground control system. The ground control system includes a world model, a task decomposition sub-system, and an operator interface. The functionality and interconnections of these sub-systems are explained in detail in the paper.
Robotic systems deployed in space must exhibit flexibility. In particular, an intelligent robotic agent should not have to be reprogrammed for each of the various tasks it may face during the course of its lifetime. However, pre-programming knowledge for all of the possible tasks that may be needed is extremely difficult. Therefore, a powerful notion is that of an instructible agent, one which is able to receive task-level instructions and advice from a human advisor. An agent must do more than simply memorize the instructions it is given (this would amount to programming). Rather, after mapping instructions into task constructs that it can reason with, it must determine each instruction's proper scope of applicability. In this paper, we examine the characteristics of instruction, and the characteristics of agents, that affect learning from instruction. We find that in addition to a myriad of linguistic concerns, both the situatedness of the instructions (their placement within the ongoing execution of tasks) and the prior domain knowledge of the agent have an impact on what can be learned.
A robotic assistant for extra-vehicular activity in space must deal with a complex, constantly changing environment. Classical planning architectures are ineffective for such tasks, but modern reactive systems are a plausible alternative. In addition, the robot must be capable of effective real-time response to the needs of human astronauts. Writing and debugging robot programs will not be possible under actual work conditions; astronaut-assistant dialogues must, therefore, take place in natural language. Unfortunately, the generally nonrepresentationalist nature of reactive systems makes generative natural language interfaces impossible. In this paper, we present an overview of the Dynamic Predictive Memory architecture for robotic assistants. This architecture is an extension of the Direct Memory Access Parsing (DMAP) model of language understanding, in which the data structures and algorithms associated with the reactive action package (RAP) execution system are represented in the uniform memory format of the system. This allows natural language reference to take place coincident with the reactive execution of plans. The result is a reactive system with which human users can interact in natural language.
There are many types of tasks in space where operations with robotics can play a significant role, including: (1) Tasks that are dangerous, boring, fatiguing for persons [extravehicular activity (EVA) crewmembers]; (2) Tasks where a division of labor between EVA crewmembers and robotic equipment is desirable. Current notions involve a succession of robotic capabilities: (1) Teleoperations (where the robotic system is controlled remotely to the level of maneuvers); (2) Telerobotics [where the robotic system can carry out a set of maneuvers on its own, with full-time monitoring from an EVA or intravehicular activity (IVA) crewmember]; (3) Supervised autonomy (with control and monitoring functions on the part of persons provided on a less intense basis) with occasional traded control or shared control. Of these, only the first can be considered state of the art for space applications. In considering how to achieve shared control and autonomous capability, there is a tendency to invoke terms like `cognition,' `perception,' `learning,' etc., thereby constituting wish lists of `what is needed.' By way of contrast, the thrust of this paper is to outline an approach whereby robotic systems become as `person-like' as possible to achieve needed capabilities. This approach makes use of formulations in the Person Concept, pioneered by one of the present authors, Dr. Peter G. Ossorio. These include: (1) The state of affairs (SA) system; (2) The intentional action (IA) system.
Telerobotic systems (TRSs) and shared teleautonomous systems result from the integration of multiple sophisticated modules. Procedures used in systems integration design decision-making of such systems are frequently ad hoc compared to more quantitative and systematic methods employed elsewhere in engineering. Experimental findings associated with verification and validation (V&V) are often application-specific. Furthermore, models and measurement strategies do not exist which allow prediction of overall TRS performance in a given task based on knowledge of the performance characteristics of individual subsystems. This paper introduces the use of general systems performance theory (GSPT), developed by the senior author to help resolve similar problems in human performance, as a basis for: (1) measurement of overall TRS performance (viewing all system components, including the operator, as a single entity); (2) task decomposition; (3) development of a generic TRS model; and (4) the characterization of performance of subsystems comprising the generic model. GSPT uses a resource construct to model performance and resource economic principles to govern the interface of systems to tasks. It provides a comprehensive modeling/measurement strategy applicable to complex systems including both human and artificial components. Application is presented in the context of a distributed telerobotics network (Universities Space Automation and Robotics Consortium) as a testbed. Insight into the design of test protocols which elicit application-independent data (i.e., multi-purpose or reusable) is described. Although the work is motivated by space automation and robotics challenges, it is considered to be applicable to telerobotic systems in general.
The Goddard robotics effort is alive and well, even after the sad demise of the flight telerobotic servicer (FTS). We have released a technical report on the design and implementation of a flexible robot control system architecture, and are in the process of prototyping it. The architecture is based on our experience in implementing NASA Standard Reference Model Architecture (NASREM) and the prototype FTS control system in Ada. We have successfully demonstrated the integration of the capaciflector proximity sensor for docking and berthing. We can even still do state of the art force reflecting teleoperation, thanks to adding a sixth motor to the Kraft hand controllers. We look forward to transferring some of this technology to other NASA centers.
The servicing aid tool (SAT) is a teleoperated manipulation system designed for use on the NSTS Orbiter. The system will assist EVA servicing of spacecraft such as the Hubble Space Telescope and Explorer platform. SAT components are spaceflight adaptations of existing ground-based designs from Robotics Research and Schilling Development. Fairchild Space is providing the control electronics, safety system, and flight integration. The manipulator consists of a 6-DOF slave arm mounted on a 1-DOF positioning link in the payload bay. The slave arm is controlled via a highly similar, 6-DOF, force-reflecting master arm. Each slave arm joint receives position commands from the corresponding master arm joint; torque commands are reflected to each master joint based on the current state of the slave joint and the master/slave relationship. Scaled and indexed control will be accommodated, as will various features to ensure safe operation. The paper focuses on the development of the safety system, and operations for the demonstration and servicing missions.
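An illustrative single-joint sketch of this position-forward, force-reflecting scheme follows (the function, gains, and scaling factor are hypothetical, not SAT flight values):

```python
def bilateral_step(q_master, q_slave, k_p, k_f, scale=1.0):
    """One control tick for a scaled, force-reflecting master-slave joint.

    The master joint position (scaled) becomes the slave position
    command; the slave's tracking error, proportional to the resistance
    it meets, is reflected back to the master as a torque command.
    Returns (slave position command, master reflected torque).
    """
    q_cmd = scale * q_master                  # master position -> slave command
    error = q_cmd - q_slave                   # slave tracking error
    tau_master = -k_f * k_p * error / scale   # reflect resistance to the master
    return q_cmd, tau_master
```

When the slave tracks perfectly, the reflected torque is zero; contact with the environment stalls the slave, grows the error, and pushes back on the operator's hand.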
The Automated Structures Assembly Laboratory is a unique facility at NASA Langley Research Center used to investigate the robotic assembly of truss structures. Two special-purpose end-effectors have been used to assemble 102 truss members and 12 panels into an 8-meter diameter structure. One end-effector is dedicated to truss member insertion, while a second end-effector is used to install panels. Until recently, the robot motions required to construct the structure were developed iteratively using the facility hardware. Recent work at Langley has resulted in a compact machine vision system capable of providing position information relative to targets on the structure. Use of the vision system to guide the robot from an approach point 10 to 18 inches from the structure, offsetting model inaccuracies, permits robot motion based on calculated points as a first step toward use of preplanned paths from an automated path planner. This paper presents recent work at Langley highlighting the application of the machine vision system during truss member insertion.
Human body models are geometric structures which may be ultimately controlled by kinematically manipulating their joints, but for animation, it is desirable to control them in terms of task-level goals. We address a fundamental problem in achieving task-level postural goals: controlling massively redundant degrees of freedom. We reduce the degrees of freedom by introducing significant control points and vectors, e.g., the pelvis forward vector, palm up vector, and torso up vector. This reduced set of parameters is used to enumerate primitive motions and the motion dependencies among them, and thus to select from a small set of alternative postures (e.g., bend versus squat to lower shoulder height). A plan for a given goal is found by incrementally constructing a goal/constraint set based on the given goal, motion dependencies, collision avoidance requirements, and discovered failures. Global postures satisfying a given goal/constraint set are determined with the help of incremental mental simulation which uses a robust inverse kinematics algorithm. The contributions of the present work are: (1) there is no need to specify beforehand the final goal configuration, which is unrealistic for the human body, and (2) the degrees-of-freedom problem becomes easier by representing body configurations in terms of `lumped' control parameters, that is, control points and vectors.
The application of neural networks to learning nonlinear functions for use in filters and control systems is discussed. Several examples characterize the quality of learning that takes place and the performance of the neural network in the presence of noisy inputs. A final example compares a neural network-based filter to an optimal Kalman filter for a simple quadratic system.
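A toy version of such a network, a one-hidden-layer tanh net trained by plain gradient descent to fit a simple nonlinear function, can be sketched as follows (all names and hyperparameters here are illustrative; the paper's networks and training details may differ):

```python
import numpy as np

def train_nn(x, y, hidden=16, lr=0.05, epochs=3000, seed=0):
    """Fit a one-hidden-layer tanh network to samples of a nonlinear
    scalar function by full-batch gradient descent on mean squared error.
    Returns a callable approximating the learned function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    X = x.reshape(-1, 1); Y = y.reshape(-1, 1); n = len(X)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)        # hidden activations
        P = H @ W2 + b2                 # network output
        dP = 2 * (P - Y) / n            # d(MSE)/d(output)
        dH = (dP @ W2.T) * (1 - H**2)   # backprop through tanh
        W2 -= lr * H.T @ dP; b2 -= lr * dP.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
    return lambda t: (np.tanh(t.reshape(-1, 1) @ W1 + b1) @ W2 + b2).ravel()
```

Training the same network on noisy samples, then comparing its residuals against a Kalman filter's, mirrors the comparison described in the abstract.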
We present a unified formulation for the inverse kinematics of redundant arms, based on a special formulation of the null space of the Jacobian. By extending (appropriately re-scaling) previously used null space parameterizations, we obtain, in a unified fashion, the manipulability measure, the null space projector, and the particular solutions for the joint velocities. We obtain the minimum-norm pseudo-inverse solution as a projection from any particular solution, and the method provides an intuitive visualization of the self-motion. The result is a computationally efficient, consistent approach to computing redundant robot inverse kinematics.
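The generic pseudo-inverse-plus-null-space solution the abstract builds on can be sketched as follows (this is the standard formulation, qdot = J+ xdot + (I - J+ J) z, not the paper's special null-space parameterization):

```python
import numpy as np

def redundant_ik(J, xdot, z=None):
    """Joint velocities for a redundant arm.

    Returns the minimum-norm pseudo-inverse solution J+ @ xdot, plus an
    optional self-motion term: z projected into the null space of J, so
    it changes the arm's internal posture without moving the end effector.
    """
    Jp = np.linalg.pinv(J)          # Moore-Penrose pseudo-inverse
    n = J.shape[1]                  # number of joints
    qdot = Jp @ xdot                # minimum-norm particular solution
    if z is not None:
        qdot += (np.eye(n) - Jp @ J) @ z   # null-space projector applied to z
    return qdot
```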
Since 1986 NASA has been developing a telerobotic system as a part of the Flight Telerobotic Servicer (FTS) Project at Goddard Space Flight Center. The project was formed to meet the national objectives of identifying and developing technologies for automation and robotics. The overall approach is to adapt current teleoperational and robotic technologies into a lightweight, dexterous telerobotic device that could operate efficiently and safely in space and that would evolve into an autonomous space robot. The concept behind this device is that it (1) operate in space, a much less structured and more hostile environment than industrial robots normally operate in and (2) perform varied dexterous tasks which increase in complexity with time. The design must also allow for growth and increased capabilities as new technologies become available. These top-level system goals significantly influenced system design, architecture, controls implementation, and manipulator packaging design. If the FTS is to be considered as a credible tool for work in space, its fundamental building blocks must be tested in space. An early development test flight (DTF-1) was conceived to fly as an attached payload on the Shuttle in order to validate the FTS hardware design. While the funding for the FTS was eliminated in September 1991, the DTF-1 system design has been completed with major flight hardware elements in different stages of fabrication and qualification. Safety was a design driver for the DTF-1. System safety engineering was implemented with the system safety requirements and design criteria established by NASA's National Space Transportation System (NSTS) Program and defined in the Safety Policy and Requirements for Payloads Using the Space Transportation System (NSTS 1700.7B). Satisfying these safety requirements presented significant challenges to the system designers.
In an effort to capture some of the knowledge gained from the program, this paper gives an overview of the DTF-1 mission, describes the system design, and describes the safety requirements and safety features that were incorporated. It also presents the `lessons' that were learned during the design and early development stages.
Most robot controllers today employ a single processor architecture. As robot control requirements become more complex, these serial controllers have difficulty providing the desired response time. Additionally, with robots being used in environments that are hazardous or inaccessible to humans, fault-tolerant robotic systems are particularly desirable. A uniprocessor control architecture cannot offer tolerance of processor faults. Use of multiple processors for robot control offers two advantages over single processor systems. Parallel control provides a faster response, which in turn allows a finer granularity of control. Processor fault tolerance is also made possible by the existence of multiple processors. There is a trade-off between performance and the level of fault tolerance provided. This paper describes a shared memory multiprocessor robot controller that is capable of providing high performance and processor fault tolerance. We evaluate the performance of this controller, and demonstrate how performance and processor fault tolerance can be balanced in a cost-effective manner.
The principal advantage of mobile robots is that they are able to go to specific locations to perform useful tasks rather than have the tasks brought to them. It is important, therefore, that the robot be able to reach desired locations efficiently and reliably. A mobile robot whose environment extends significantly beyond its sensory horizon must maintain a representation of the environment, a map, in order to attain these efficiency and reliability requirements. We believe that qualitative mapping methods provide useful and robust representation schemes and that such maps may be used to direct the actions of a reactively controlled robot. In this paper we describe our experience in employing qualitative maps to direct, through the selection of desired control strategies, a reactive-behavior based robot. This mapping capability represents the development of one aspect of a successful deliberative/reactive hybrid control architecture.
Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided `docking' problems. It differs from other eye-in-hand visual servoing problems such as tracking in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly `intelligent,' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that when a lower-level routine fails, the next-higher routine will be called and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
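The two information-reduction steps named above, region-of-interest windowing and feature motion prediction, can be sketched as follows; the toy image, window size, and first-order velocity model are illustrative assumptions:

```python
def predict_center(prev, velocity):
    """First-order feature motion prediction: next = previous + velocity."""
    return (prev[0] + velocity[0], prev[1] + velocity[1])

def roi_window(image, center, half=1):
    """Crop a region-of-interest window around the predicted feature location,
    clipped to the image bounds, so later stages scan far fewer pixels."""
    r0 = max(center[0] - half, 0)
    c0 = max(center[1] - half, 0)
    return [row[c0:center[1] + half + 1]
            for row in image[r0:center[0] + half + 1]]

# Toy 6x6 image; a tracked feature drifting one pixel down and right per frame.
image = [[10 * r + c for c in range(6)] for r in range(6)]
center = predict_center((2, 2), (1, 1))
window = roi_window(image, center, half=1)
```

When the stringent assumption fails, i.e., the feature is not inside the predicted window, a higher hierarchy level would rescan a larger region with a slower, more reliable method.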
This paper describes a real-time multi-tasking kernel called behavior-based robot executive (BRET) that incorporates subsumption architecture-style communication facilities. The result is a portable, flexible system for building control software for behavior-based autonomous robots, which operate well in dynamic environments, including extraterrestrial areas. Unlike `pure' subsumption architecture implementations, this system does not exclude the use of traditional programming methodologies or impose artificial structures on the robot control system. Thus, traditional methods such as real-time data processing, searching or map making, as well as teleoperated commands can be integrated with the behavior-based modules. Two autonomous rovers named Ripley I and Ripley II have been built to test the software system. They were designed to use modified off-the-shelf hardware components in order to reduce construction and debugging time. Their control systems are described at the end of this paper.
A mobile system for space shuttle servicing, the Tessellator, has been configured and designed, and is currently being built and integrated. Robot tasks include chemical injection and inspection of the shuttle's thermal protection system. This paper outlines tasks, rationale, and facility requirements for the development of this system. A detailed look at the mobile system and manipulator follows, covering mechanics, electronics, and software. Salient features of the mobile robot include omnidirectionality, high reach, high stiffness, and accuracy, with safety and self-reliance integral to all aspects of the design. The robot system is shown to meet task, facility, and NASA requirements in its design, resulting in unprecedented specifications for a mobile-manipulation system.
Refurbishing the thermal-protection tiles on a space shuttle before each mission is a lengthy and labor-intensive process. A mobile robot is being developed (described elsewhere) to perform two of the required maintenance operations on the bottom side of the shuttle: (1) injection of a hydrophobic fluid, to prevent tiles from absorbing water, and (2) visual inspection, to detect anomalous tile conditions. Both operations depend on precise positioning of the robot end effector with respect to each tile. We describe our method for precise visual registration. The technique first detects the edges of the tile (whose approximate shape and dimensions are given from CAD data) and then uses correspondence between visual features in the post- and pre-flight images to improve the registration accuracy. Results on actual tile images are presented.
This paper summarizes work by Rockwell International's Space Systems Division's Robotics Group at Downey, California. The work is part of a NASA-led team effort to automate Space Shuttle tile rewaterproofing in the Orbiter Processing Facility (OPF) at the Kennedy Space Center (KSC) and the ferry facility at the Ames-Dryden Flight Research Facility. Rockwell's effort focuses on the rewaterproofing end-effector, whose function is to inject hazardous dimethylethyloxysilane into thousands of ceramic tiles on the underside of the orbiter after each flight. The paper has five sections. First, it presents background on the present manual process. Second, end-effector requirements are presented, including safety and interface control. Third, a design is presented for the five end-effector systems: positioning, delivery, containment, data management, and command and control. Fourth, end-effector testing and integrating to the total system are described. Lastly, future applications for this technology are discussed.
The description, analysis, and experimental results of a method for identifying possible defects on high temperature reusable surface insulation (HRSI) of the Orbiter thermal protection system (TPS) are presented. Currently, a visual postflight inspection of Orbiter TPS is conducted to detect and classify defects as part of the Orbiter maintenance flow. The objective of the method is to automate the detection of defects by identifying anomalies between preflight and postflight images of TPS components. The initial version is intended to detect and label gross (greater than 0.1 inches in the smallest dimension) anomalies on HRSI components for subsequent classification by a human inspector. The approach is a modified Golden Template technique where the preflight image of a tile serves as the template against which the postflight image of the tile is compared. Candidate anomalies are selected as a result of the comparison and processed to identify true anomalies. The processing methods are developed and discussed, and the results of testing on actual and simulated tile images are presented. Solutions to the problems of brightness and spatial normalization, timely execution, and minimization of false positives are also discussed.
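A minimal version of the Golden Template comparison, with crude mean-brightness normalization, might look like this; the 4x4 `tile,' its gray levels, and the threshold are invented for illustration:

```python
def normalize_brightness(img):
    """Scale an image so its mean gray level is 1.0 (a crude photometric
    normalization between pre- and post-flight lighting conditions)."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return [[p / mean for p in row] for row in img]

def golden_template_anomalies(pre, post, threshold=0.5):
    """Flag pixel positions whose normalized pre/post difference exceeds
    the threshold; these become candidate anomalies for later processing."""
    pre_n, post_n = normalize_brightness(pre), normalize_brightness(post)
    return [(r, c)
            for r, row in enumerate(pre_n)
            for c, p in enumerate(row)
            if abs(post_n[r][c] - p) > threshold]

# Toy tile: the post-flight image is a uniformly darker copy of the
# pre-flight template except for one gouged pixel.
pre = [[100] * 4 for _ in range(4)]
post = [[80] * 4 for _ in range(4)]
post[2][1] = 10                           # simulated defect
anomalies = golden_template_anomalies(pre, post)
```

The uniform darkening is absorbed by the normalization, so only the simulated defect survives the threshold; a real system would add spatial registration and false-positive filtering, as the abstract notes.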
Maintaining and supporting an aircraft fleet, in a climate of reduced manpower and financial resources, dictates effective utilization of robotics and automation technologies. To help develop a winning robotics and automation program, the Air Force Logistics Command created the Robotics and Automation Center of Excellence (RACE). RACE is a command-wide focal point and an organic source of expertise that assists the Air Logistic Center (ALC) product directorates in improving process productivity through the judicious insertion of robotics and automation technologies. RACE is a champion for pulling emerging technologies into the aircraft logistic centers. One of those technology pulls is shared control. Small batch sizes, feature uncertainty, and varying work loads conspire to make classic industrial robotic solutions impractical. One can view ALC process problems in the context of space robotics without the time delay. The ALCs will benefit greatly from the implementation of a common architecture that supports a range of control actions from fully autonomous to teleoperated. Working with national laboratories and private industry, we hope to transition shared control technology to the depot floor. This paper provides an overview of the RACE internal initiatives and customer support, with particular emphasis on production processes that will benefit from shared control technology.
Robots are being used successfully in factory automation; however, recently there has been some success in building robots which can operate in field environments, where the domain is less predictable. New perception and control techniques have been developed which allow a robot to accomplish its mission while dealing with natural changes in both land and underwater environments. Unfortunately, efforts in this area have resulted in many one-of-a-kind robots, limited to research laboratories or carefully delimited field task arenas. A user who would like to apply robotic technology to a particular field problem must basically start from scratch. The problem is that the robotic technology (i.e., the hardware and software) which might apply to the user's domain exists in a diverse array of formats and configurations. For end-user robots to become a reality, an effort to standardize some aspects of the robotic technology must be made, in much the same way that personal computer technology is becoming standardized. Presently, a person can buy a computer and then acquire hardware and software extensions which simply `plug in' and provide the user with the required utility without the user having to understand the inner workings of the pieces of the system. This technology even employs standardized interface specifications so the user is presented with a familiar interaction paradigm. This paper outlines some system requirements (hardware and software) and a preliminary design for end-user robots for field environments, drawing parallels to the trends in the personal computer market. The general conclusion is that the appropriate components as well as an integrating architecture are already available, making development of out-of-the-box, turnkey robots for a certain range of commonly required tasks a potential reality.
Robotic systems for space operations will require a combination of teleoperation, closely supervised autonomy, and loosely supervised autonomy. They may involve multiple robots, multiple controlling sites, and long communication delays. We have constructed a distributed telerobotics system as a framework for studying these problems. Our system is based on a modular interconnection scheme which allows the components of either manual or autonomous control systems to communicate and share information. It uses a wide area network to connect robots and operators at several different sites. This presentation describes the structure of our system, the components used in our configurations, and results of some of our teleoperation experiments.
Monolithic robots are poorly suited to the broad requirements and uncertainties of space automation applications. Weight limitations prohibit the selection of many robots, each capable of a few tasks. Building one generic robot limits automation to that robot's narrow application spectrum. A better approach is to fly a set of standardized components that can be reconfigured as required by the immediate needs on the Space Station, the Lunar surface, or beyond. This set of modular building blocks will weigh less than the family of equivalent monolithic machines, offer changeouts of broken components, and widen the spectrum of tasks that automation can address in space. Further advantages of the modular design philosophy include reduced mean time to repair, reduced operator training, and reduced system cost. While a number of robotic joint and link modules have been developed within the community (7 joints and 12 links at UT alone), there has yet to be an agreement on the standardized interfaces that other industries have exploited. The goal for this project was to design a modular robot standard that allows advanced controllers to communicate with each of the modules, verifying their positions and mounting orientations within the robot, while simultaneously offering a quick release capability to the operator. Two new link modules and one new joint module were designed to support this standard, and their development is reported. The design has proven merits, including light weight, high stiffness, on-module data storage, extra wire capacity, and unique assembly verification capabilities.
We propose a rapid prototyping environment for robotic systems, based on tenets of modularity, reconfigurability and extendibility that may help build robot systems `faster, better, and cheaper.' Given a task specification, (e.g., repair brake assembly), the user browses through a library of building blocks that include both hardware and software components. Software advisors or critics recommend how blocks may be `snapped' together to speedily construct alternative ways to satisfy task requirements. Mechanisms to allow `swapping' competing modules for comparative test and evaluation studies are also included in the prototyping environment. After some iterations, a stable configuration or `wiring diagram' emerges. This customized version of the general prototyping environment still contains all the hooks needed to incorporate future improvements in component technologies and to obviate unplanned obsolescence. The prototyping environment so described is relevant for both interactive robot programming (telerobotics) and iterative robot system development (prototyping).
Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system comprised of a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed at the mechanical engineering department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head tracking camera system moves stereo cameras mounted on a three degree of freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence, and teleoperations for space.
One promise of telerobotics is the ability to interact in environments that are distant (e.g., deep sea or deep space), dangerous (e.g., nuclear, chemical, or biological environments), or inaccessible by humans for political or legal reasons. A key component to such interactions are sophisticated human-computer interfaces that can replicate sufficient information about a local environment to permit remote navigation and manipulation. This environment replication can, in part, be provided by technologies such as virtual reality. In addition, however, telerobotic interfaces may need to enhance human-machine interaction to assist users in task performance, for example, governing motion or manipulation controls to avoid obstacles or to restrict interaction with certain objects (e.g., avoiding contact with a live mine or a deep sea treasure). Thus, effective interactions within remote environments require intelligent virtual interfaces to telerobotic devices. In part to address this problem, MITRE is investigating virtual reality architectures that will enable enhanced interaction within virtual environments. Key components to intelligent virtual interfaces include spoken language processing, gesture recognition algorithms, and more generally, task recognition. In addition, these interfaces will eventually have to take into account properties of the user, the task, and discourse context to be more adaptive to the current situation at hand. While our work has not yet investigated the connection of virtual interfaces to external robotic devices, we have begun developing the key components for intelligent virtual interfaces for information and training systems.
The National Aeronautics and Space Administration's Reduced Gravity Program (RGP) offers opportunities for experimentation in gravities of less than one-g. The Extravehicular Activity Helper/Retriever (EVAHR) robot project of the Automation and Robotics Division (A&RD) at the Lyndon B. Johnson Space Center in Houston, Texas, is undertaking a task that will culminate in a series of tests in simulated zero-g using this facility. A subset of the final robot hardware consisting of a three-dimensional laser mapper, a Robotics Research 807 arm, a Jameson JH-5 hand, and the appropriate interconnect hardware/software will be used. This equipment will be flown on the RGP's KC-135 aircraft. This aircraft will fly a series of parabolas creating the effect of zero-g. During the periods of zero-g, a number of objects will be released in front of the fixed base robot hardware in both static and dynamic configurations. The system will then inspect the object, determine the object's pose, plan a grasp strategy, and execute the grasp. This must all be accomplished in approximately 27 seconds of zero-g.
This paper presents a multi-view based method for estimating poses of free floating/rotating objects to support robotic manipulation in space. The multi-view based method consists of two-stage processing, namely off-line processing for building a multi-view database of each model, and on-line processing for real-time pose estimation. The multi-view database is composed of feature vectors from the range images of multiple views of the model. The feature vectors are organized into a KD tree for fast spatial indexing. At run-time, a small number of candidate poses are extracted from the KD tree via efficient feature matching, and are verified and refined using an optimization procedure to obtain the estimated pose, which may be further refined by a Kalman filter.
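The off-line database and run-time candidate extraction can be sketched as below. A linear scan stands in for the paper's KD tree (same nearest-neighbor answer, just slower), and the model names, view labels, and 3-component feature vectors are hypothetical:

```python
import math

def build_view_database(models):
    """Flatten {model: {view_pose: feature_vector}} into a searchable list.
    (The paper organizes these vectors into a KD tree for fast spatial
    indexing; a linear scan keeps this sketch dependency-free.)"""
    return [(name, pose, vec)
            for name, views in models.items()
            for pose, vec in views.items()]

def candidate_poses(db, query, k=2):
    """Return the k database entries whose feature vectors are nearest the
    features measured from the run-time range image; these candidates would
    then be verified and refined by an optimization procedure."""
    return sorted(db, key=lambda entry: math.dist(entry[2], query))[:k]

# Hypothetical feature vectors for two models at a few sampled views.
models = {
    "strut": {"roll0": (0.9, 0.1, 0.3), "roll90": (0.7, 0.4, 0.2)},
    "box":   {"face":  (0.2, 0.8, 0.6), "edge":  (0.1, 0.9, 0.5)},
}
db = build_view_database(models)
best = candidate_poses(db, query=(0.85, 0.15, 0.28), k=2)
```

In the real pipeline the top candidates seed pose refinement rather than being accepted directly, and a Kalman filter may smooth the estimates over time.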
This paper presents a method for adaptively tracking an object given a sequence of range images for a free flying space robot. The tracker assigns unique identifiers to each new object that it detects, updates the object position and velocity information each frame, feeds position information to and later utilizes estimates from a Translational State Estimator, and resolves ambiguities due to object occlusion. The tracker must also remove objects from the world model, as well as successfully identify body parts in images. The tracker uses rough object position and size information to correspond objects frame to frame until the Translational State Estimator (TSE) has enough data to provide accurate position information. Objects that are occluded by other objects are over-segmented and individual sub-blobs are assigned to their respective objects based on proximity, or by correspondence with previously identified sub-blobs. Relative range, TSE accuracy, and priority information are maintained for each object, and analysis varies depending on these properties. The proposed system has been extensively tested on simulated range images using a simulator for the EVA Retriever robot.
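The frame-to-frame correspondence by proximity might be sketched as a greedy nearest-neighbor association with a gating radius; the track positions, detections, and gate value below are illustrative, not from the paper:

```python
import math

def correspond(tracks, detections, gate=2.0):
    """Greedy nearest-neighbor data association: match each detection to the
    closest existing track within the gate radius; unmatched detections
    become new tracks with fresh unique identifiers."""
    next_id = max(tracks, default=0) + 1
    assigned = {}
    free = dict(tracks)                   # id -> last known (x, y) position
    for det in detections:
        best_id, best_d = None, gate
        for tid, pos in free.items():
            d = math.dist(pos, det)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:               # no track close enough: new object
            best_id, next_id = next_id, next_id + 1
        else:
            free.pop(best_id)             # each track matches at most once
        assigned[best_id] = det
    return assigned

tracks = {1: (0.0, 0.0), 2: (5.0, 5.0)}
detections = [(0.4, 0.1), (5.2, 4.9), (9.0, 9.0)]
updated = correspond(tracks, detections)
```

The third detection falls outside every gate and is registered under a new identifier; once the TSE converges, its predicted positions would replace the raw last-known positions used here.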
This paper presents a method of providing continuous and current attitude estimates of freely rotating objects to support the grasping tasks of a flying autonomous space robot. The rotational state estimator is subdivided into two sections: the rotational dynamics equations which propagate an object's attitude, angular velocity, and angular acceleration (rotational state) in time and an extended Kalman filter which produces an estimate of the error between an object's true rotational state and its estimated rotational state. In simulation tests, the rotational state estimator is periodically provided with noisy, dated object attitude measurements from computer vision. The difference between the noisy object attitude measurement and its associated attitude estimate is the input to the extended Kalman filter. Each object attitude measurement is dated when it is received by the rotational state estimator because of necessary computer vision processing time. Using this dated attitude measurement the rotational state estimator revises its current estimate of the object's attitude to guide the robot. Results of simulation tests, showing the accuracy of the attitude estimate as different parameters vary, are presented.
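One way to picture folding a dated measurement into the current estimate, under heavy simplifying assumptions (a scalar attitude, constant angular rate, and a fixed gain standing in for the extended Kalman filter):

```python
def propagate(state, dt):
    """Constant-rate rotational dynamics: advance attitude by omega * dt."""
    theta, omega = state
    return (theta + omega * dt, omega)

def delayed_update(history, z_theta, latency_steps, gain=0.5):
    """Fold a dated attitude measurement into the current estimate: form the
    innovation against the historical state the measurement refers to, then
    apply the correction to the present state. Because the attitude error
    propagates unchanged under constant-rate dynamics, the same correction
    is valid at 'now'."""
    past_theta = history[-1 - latency_steps][0]
    error = z_theta - past_theta          # innovation at measurement time
    theta_now, omega_now = history[-1]
    return (theta_now + gain * error, omega_now)

dt = 1.0
truth = [(0.1 * k, 0.1) for k in range(10)]   # object spinning at 0.1 rad/step
# Estimator starts with a 0.4 rad attitude error but the correct rate.
history = [(0.4, 0.1)]
for _ in range(9):
    history.append(propagate(history[-1], dt))
# A vision measurement of the attitude 3 steps ago arrives now.
est_now = delayed_update(history, z_theta=truth[-4][0], latency_steps=3)
```

The dated measurement halves the accumulated attitude error even though it describes the past; a full implementation would estimate the gain and covariance with the EKF rather than fix them.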
The Laboratory for Space Teleoperation and Robotics is developing a neutrally buoyant robot for research into the automatic and teleoperated (remote human) control of unmanned robotic vehicles for use in space. The goal of this project is to develop a remote robot with maneuverability and dexterity comparable to that of a space-suited astronaut with a manned maneuvering unit, able to assume many of the tasks currently planned for astronauts during extravehicular activity (EVA). Such a robot would avoid the great expense and hazards associated with human EVA and would make far less expensive scientific and industrial exploitation of orbit possible. Both autonomous and teleoperated control experiments require the vehicle to control its position and orientation automatically. The laboratory has developed a real-time vision-based navigation and control system for its underwater space robot simulator, the Submersible for Telerobotic and Astronautical Research (STAR). The system, implemented with standard, inexpensive computer hardware, has excellent performance and robustness characteristics for a variety of applications, including automatic station-keeping and large controlled maneuvers. Experimental results are presented indicating the precision, accuracy, and robustness to disturbances of the vision-based control system. The study demonstrates the feasibility of vision-based control and navigation for remote robots and provides a foundation for developing a system for general space robot tasks. The complex vision sensing problem is reduced through linearization to a simple algorithm, fast enough to be incorporated into a real-time vehicle control system. Vision sensing is structured to detect small changes in vehicle position and orientation from a nominal positional state relative to a target scene.
The system uses a constant, linear inversion matrix to measure the vehicle positional state from the locations of navigation features in an image. This paper includes a description of the underwater vehicle's vision-based navigation and control system and applications of vision-based navigation and control for free-flying space robots. Experimental results from underwater tests of STAR's vision system are also presented.
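The constant linear inversion described above can be sketched as follows: near the nominal state, feature locations vary approximately linearly with small vehicle displacements, so a pseudoinverse of the sensitivity (Jacobian) matrix, computed once, maps feature shifts back to pose shifts. This is a planar 3-DOF illustration with invented landmarks and a made-up `features` projection, not STAR's actual measurement model.

```python
import numpy as np

def features(pose, landmarks):
    """Express landmark positions in the vehicle frame for pose = (x, y, heading).
    Stands in for the image-feature measurement; illustrative only."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    out = []
    for lx, ly in landmarks:
        dx, dy = lx - x, ly - y
        out += [c * dx + s * dy, -s * dx + c * dy]
    return np.array(out)

landmarks = [(5.0, 0.0), (5.0, 2.0), (4.0, -1.0)]   # assumed target scene
nominal = np.zeros(3)
f0 = features(nominal, landmarks)

# Numerical Jacobian at the nominal state; inverted once and held constant.
eps = 1e-6
J = np.column_stack([
    (features(nominal + eps * e, landmarks) - f0) / eps
    for e in np.eye(3)
])
M = np.linalg.pinv(J)                    # the constant inversion matrix

true_delta = np.array([0.05, -0.03, 0.02])          # small displacement
est_delta = M @ (features(true_delta, landmarks) - f0)
# est_delta recovers true_delta to within the linearization error.
```

Because `M` is fixed, each control cycle costs only one small matrix-vector product, which is what makes the approach fast enough for real-time vehicle control.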
Cooperative Intelligent Robotics in Space Exploration
A planetary rover, like a spacecraft, must be fully self-contained. Once launched, a rover can receive only information from its designers and, if solar powered, power from the sun. As the distance from Earth increases and the demands for power on the rover increase, there is a serious tradeoff between communication and computation. Both of these subsystems are power hungry, and either can be the major driver of the rover's power subsystem and, therefore, of the rover's minimum mass and size. This paper discusses this situation in more detail and presents software techniques that can reduce the requirements on both communication and computation, allowing the overall robot mass to be greatly reduced.
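The communication/computation tradeoff above can be made concrete with back-of-the-envelope arithmetic: spending a little onboard computation to reduce data volume can save far more downlink energy than the processing costs. Every figure below is a made-up placeholder, not a rover specification.

```python
# One raw 8 MB image, in bits.
RAW_BITS = 8 * 1024**2 * 8

TX_J_PER_BIT = 1e-4      # assumed deep-space downlink energy per bit
CPU_J_PER_BIT = 1e-6     # assumed onboard processing energy per input bit
COMPRESSION = 20         # assumed data reduction from onboard analysis

# Option A: transmit everything raw.
send_raw = RAW_BITS * TX_J_PER_BIT

# Option B: process onboard, transmit only the reduced result.
process_then_send = RAW_BITS * CPU_J_PER_BIT + (RAW_BITS / COMPRESSION) * TX_J_PER_BIT
```

With these placeholder numbers, option B uses roughly an order of magnitude less total energy, which is the kind of software-driven saving the paper argues can shrink the power subsystem and hence the rover.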
Significant effort is being applied to defining NASA's Space Exploration Initiative (SEI). The current thrust, at least in the early stages, is to perform the mission in a manner that is both cost effective and low risk. The first manned-mission element is the development of the first lunar outpost (FLO). As currently envisioned, this is a manned mission with little or no robotics content, yet there are a number of places where robotic systems would be expected to be used. Robotics appears to have been excluded because there is no systems engineering approach demonstrating that robotic capability is a low-risk, achievable mission element. This paper first reviews the robotic potential for these missions and some of the challenges they appear to present. We then draw parallels to other engineering domains where systems engineering methodologies are well developed. Finally, we begin to relate such metrics to robotic systems engineering. It is the authors' hope that significant quantities of existing research can be put into this framework; we look forward to hearing about it.
The Lyndon B. Johnson Space Center (JSC) Automation and Robotics Division has developed a simulation tool for evaluating the feasibility and effectiveness of proposed candidate mission concepts, such as the first lunar outpost (FLO) design reference missions. The tool helps define requirements for such missions at the beginning of the definition activities and can be applied iteratively thereafter on a short-cycle basis as the mission definition changes. The simulation addresses the amount and capabilities of automation and robotics; the number and skills of crew; the amount of engineering work, science, and in-space materials utilization to be accomplished; the impact of, and provision for, maintenance and repair; and flight schedules and manifests. It accounts for the supply and demand of resources needed to accomplish tasks and for the elapsed time in carrying out mission processes and tasks.
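The resource and elapsed-time accounting described above can be sketched as a simple supply-and-demand loop over mission tasks. The task list, resource names, and figures below are all illustrative, not drawn from the JSC tool.

```python
def simulate(tasks, supplies):
    """Run tasks in order, consuming resources and accumulating elapsed time.
    A task whose resource demands exceed remaining supply is blocked."""
    elapsed = 0.0
    completed, blocked = [], []
    for task in tasks:
        needs = task["needs"]
        if all(supplies.get(r, 0) >= amt for r, amt in needs.items()):
            for r, amt in needs.items():
                supplies[r] -= amt
            elapsed += task["hours"]
            completed.append(task["name"])
        else:
            blocked.append(task["name"])
    return elapsed, completed, blocked

tasks = [
    {"name": "deploy habitat", "hours": 6.0, "needs": {"crew_hours": 12, "power_kwh": 30}},
    {"name": "site survey", "hours": 4.0, "needs": {"crew_hours": 8, "power_kwh": 5}},
    {"name": "regolith processing", "hours": 8.0, "needs": {"crew_hours": 4, "power_kwh": 100}},
]
supplies = {"crew_hours": 24, "power_kwh": 50}
elapsed, done, blocked = simulate(tasks, supplies)
# "regolith processing" is blocked for lack of power; 10 hours elapse.
```

Iterating such a run as mission parameters change (crew size, robotics capability, manifests) mirrors the short-cycle use of the tool described above.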
An anticipated goal of Mars surface exploration missions will be to survey and sample surface rock formations that appear scientifically interesting. In such a mission, a planetary rover would navigate close to a selected sampling site, and the remote operator would use a manipulator mounted on the rover to perform a sampling operation. Techniques for accomplishing the manipulation required by the sampling components of such a mission have been developed and are presented. We discuss the implementation of a system for controlling a seven-degree-of-freedom Puma manipulator, equipped with a special rock gripper and mounted on a planetary rover prototype, to perform the sampling operation. Control is achieved by remote teleoperation. This paper discusses the real-time force control and supervisory control aspects of the rover manipulation system. Integration of the Puma manipulator with the existing distributed computer architecture is also discussed. The work described is a contribution toward the coordinated manipulation and mobility necessary for a Mars sample acquisition and return scenario.
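A basic form of the real-time force control mentioned above is a force-regulation loop: the commanded tool position is nudged along the contact normal until the sensed contact force reaches a setpoint, letting the gripper seat firmly against a rock of unknown stiffness and location. The loop below is a generic sketch of that idea; the gains, the spring contact model, and the setpoint are invented, not the rover system's actual control law.

```python
def force_servo(f_desired, stiffness, kf=0.0005, steps=200):
    """Integral force regulation against a spring-like surface.
    The surface sits at an unknown offset (0.5 here, assumed); the
    commanded displacement x advances until sensed force reaches f_desired."""
    x = 0.0
    contact = 0.5
    f = 0.0
    for _ in range(steps):
        f = max(0.0, stiffness * (x - contact))   # sensed contact force
        x += kf * (f_desired - f)                 # integral force correction
    return x, f

x, f = force_servo(f_desired=10.0, stiffness=1000.0)
# The loop pushes ~10 mm past first contact until f settles at 10 N.
```

Stability of such a loop depends on the product of the gain and the environment stiffness (here 0.0005 × 1000 = 0.5 per step), which is one reason contact with hard rock makes real-time force control demanding.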
This paper presents some aspects of software design for telerobotic tasks. Because of its specific characteristics (such as communication delays), the space domain is an interesting field with particular requirements. First, we apply to telerobotic tasks an important body of work on methodology for programming robotized tasks, developed with other French laboratories (C.N.R.S., GDR Automatique, Cooperation Homme/Machine, and CEA). We then propose a blackboard-based approach to building specific architectures, both on-board and remote, and have designed and developed a hybrid distributed blackboard system based on a parallel blackboard model. The two models presented (for static decomposition of a mission and for dynamic decision making) satisfy the same software engineering requirements, such as genericity. They adopt a functional approach and emphasize autonomy as a dynamic decision-making criterion.
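The blackboard approach described above can be sketched minimally: independent knowledge sources watch a shared blackboard, post contributions when their trigger condition holds, and a scheduler fires them until no source has anything left to add. The knowledge sources and entries below are invented for illustration; the paper's hybrid distributed system runs many such blackboards in parallel.

```python
class Blackboard:
    """Shared data store that knowledge sources read and write."""
    def __init__(self):
        self.data = {}

def plan_motion(bb):
    """Knowledge source: posts a trajectory once a goal appears."""
    if "goal" in bb.data and "trajectory" not in bb.data:
        bb.data["trajectory"] = f"path-to-{bb.data['goal']}"
        return True
    return False

def check_safety(bb):
    """Knowledge source: approves a posted trajectory."""
    if "trajectory" in bb.data and "approved" not in bb.data:
        bb.data["approved"] = True
        return True
    return False

def run(bb, sources):
    """Scheduler: fire sources until the blackboard reaches quiescence."""
    progress = True
    while progress:
        progress = any(src(bb) for src in sources)

bb = Blackboard()
bb.data["goal"] = "airlock"
run(bb, [check_safety, plan_motion])
# Sources fire opportunistically regardless of listing order:
# plan_motion triggers first, then check_safety.
```

The appeal for telerobotics is that on-board and remote sources need no direct coupling; each reacts only to the blackboard's state, which tolerates the delays characteristic of the space domain.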