Teleoperated robotic technology has great potential to deliver healthcare in ways that go beyond in-person encounters, allowing specialists to remotely perform surgery, diagnosis, and nursing. Most teleoperated procedures rely heavily on visual feedback from cameras to observe the scene. Ideally, the camera position and orientation would be adapted to the task and circumstances; in practice, however, view adjustment requires the operator either to devote concentration to camera control or to pause the task during the adjustment. There is therefore a demand for a more intuitive telepresence method to improve the efficiency and performance of remote operation. This paper proposes a hands-free approach to controlling the camera view for an improved telepresence experience. The system comprises an RGBD camera mounted on a robotic arm and a motion-tracking virtual reality (VR) head-mounted display (HMD) that maps the human head and upper-body motion to the robotic arm for immersive teleoperation. Based on this setup, an Augmented Head Motion Mapping (AHMM) mode is introduced. In this mode, the user can choose to control the camera either by following the head motion directly or by pivoting about a remote center of motion (RCM) at the target location, thereby expanding the reachable visual field. In a user study with seven subjects, we evaluated the proposed method against conventional methods in terms of the reachable visual field, control intuitiveness, and task efficiency. The possibility of further enlarging the reachable visual field by introducing a motion scaling factor is investigated through simulation. The results demonstrate that an operator using the proposed system can examine a larger area on a given object within a similar amount of time and with limited training.
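The two view-control modes and the motion scaling factor mentioned above can be illustrated with a minimal sketch. This is not the paper's actual mapping; the function name, the yaw-only orientation model, and the default scale value are all hypothetical, chosen only to show how a direct head-following mode and an RCM mode at a target point might differ:

```python
import math

def map_head_to_camera(head_pos, head_yaw, mode, scale=1.5, rcm_target=(0.0, 0.0, 0.0)):
    """Map a tracked head pose to a camera pose (illustrative sketch only).

    head_pos:   (x, y, z) head translation from the tracking origin, in meters
    head_yaw:   head yaw angle in radians (orientation simplified to yaw only)
    mode:       "direct" follows the head; "rcm" keeps the camera aimed at a target
    scale:      motion scaling factor amplifying head translation
    rcm_target: remote-center-of-motion target point in the camera workspace
    """
    # In both modes the camera translation follows the head translation,
    # amplified by the scaling factor to enlarge the reachable visual field.
    cam_pos = tuple(scale * p for p in head_pos)
    if mode == "direct":
        # Direct mode: the camera orientation simply copies the head orientation.
        cam_yaw = head_yaw
    elif mode == "rcm":
        # RCM mode: the camera orbits the target, so its yaw is recomputed
        # to keep the optical axis pointed at the target location.
        dx = rcm_target[0] - cam_pos[0]
        dz = rcm_target[2] - cam_pos[2]
        cam_yaw = math.atan2(dx, dz)
    else:
        raise ValueError(f"unknown mode: {mode}")
    return cam_pos, cam_yaw
```

With scale > 1, a small head motion produces a larger camera motion, which is one way a scaling factor can expand the reachable visual field at the cost of finer control resolution.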
Medical ultrasound is extensively used to define tissue textures and characterize lesions, and it is the modality of choice for the detection and follow-up assessment of thyroid diseases. Classical medical ultrasound procedures are performed manually by a trained operator with a hand-held ultrasound probe. These procedures impose a high physical and cognitive burden and yield clinical results that are highly operator-dependent, frequently diminishing trust in the accuracy of ultrasound imaging data in repeated assessments. Robotic ultrasound, on the other hand, is an emerging paradigm that integrates a robotic arm with an ultrasound probe. It achieves automated or semi-automated ultrasound scanning by controlling the scanning trajectory, the region of interest, and the contact force, making the scans more informative and more comparable across subsequent examinations over a long time span. In this work, we present a technique that allows operators to reliably reproduce comparable ultrasound images by combining predefined trajectory execution with real-time force feedback control. The platform features a 7-axis robotic arm capable of 6-DoF force-torque sensing and a linear-array ultrasound probe. The forces and torques measured at the probe are used to adaptively modify the predefined trajectory during autonomously performed examinations, and the accuracy of the probe-phantom interaction force is evaluated. In parallel, by processing and combining ultrasound B-mode images with the probe's spatial information, structural features can be extracted from the scanned volume through a 3D scan. Validation was performed on a tissue-mimicking phantom containing thyroid features, and we demonstrate high image registration accuracy between multiple trials.
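The idea of adaptively modifying a predefined trajectory from measured contact forces can be sketched as a simple proportional, admittance-style correction along the contact normal. This is not the paper's controller; the function name, the force target, and the gain are hypothetical placeholder values used purely for illustration:

```python
def adapt_trajectory(planned_depths, measured_forces, f_target=5.0, gain=0.002):
    """Shift each planned probe depth (m) along the surface normal in proportion
    to the contact-force error (N), so the probe maintains a roughly constant
    contact force during the scan. Illustrative sketch only.

    planned_depths:  per-waypoint probe depths from the predefined trajectory
    measured_forces: normal contact forces measured at the corresponding waypoints
    f_target:        desired contact force (hypothetical value)
    gain:            admittance gain in m/N (hypothetical value)
    """
    # Too little force (f < f_target) pushes the probe deeper (positive shift);
    # too much force retracts it. A pure-proportional law like this is the
    # simplest case; a real controller would also bound the correction.
    return [z + gain * (f_target - f) for z, f in zip(planned_depths, measured_forces)]
```

Keeping the contact force near a constant target is one way to make images acquired in separate sessions comparable, since tissue deformation under the probe then stays similar between trials.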