Cephalometric analysis is the study of the dental and skeletal relationships in the head, and it is used as an assessment and planning tool for improved orthodontic treatment of a patient. Conventional cephalometric analysis identifies bony and soft-tissue landmarks in 2D cephalometric radiographs in order to diagnose facial features and abnormalities prior to treatment, or to evaluate the progress of treatment. Recent studies in orthodontics indicate that there are persistent inaccuracies and inconsistencies in results obtained with conventional 2D cephalometric analysis. Plane geometry is inherently inappropriate for analyzing anatomical volumes and their growth; only a 3D analysis can capture the three-dimensional anatomical maxillofacial complex, which requires computing inertia systems for individual or groups of digitally segmented teeth from an image volume of the patient's head. To support 3D cephalometric analysis, this paper proposes a system for semi-automatically segmenting teeth from a cone beam computed tomography (CBCT) volume, with two distinct features: an intelligent user-input interface for automatic background-seed generation, and a graphics processing unit (GPU) acceleration mechanism for three-dimensional GrowCut volume segmentation. With the proposed system, 15 novice users who segmented a randomly sampled tooth set achieved a satisfying average Dice score of 0.92. The average GrowCut processing time is around one second per tooth, excluding user interaction time.
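The GrowCut step at the core of the system is a label/strength cellular automaton. The following is a minimal, unaccelerated NumPy sketch of the 3D update rule under common assumptions (6-connected neighborhood, linear attack function); the paper's version runs on the GPU, and all names here are illustrative:

```python
import numpy as np

def growcut_3d(volume, labels, strengths, max_iter=50):
    """Naive 3D GrowCut: seeded labels spread to neighbors whenever the
    attacking cell's strength, attenuated by intensity difference, exceeds
    the defender's strength. labels==0 means unlabeled."""
    vol = volume.astype(np.float64)
    max_diff = (vol.max() - vol.min()) or 1.0  # guard against flat volumes
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    lab = labels.copy()
    st = strengths.astype(np.float64).copy()
    Z, Y, X = vol.shape
    for _ in range(max_iter):
        changed = False
        new_lab, new_st = lab.copy(), st.copy()
        for z in range(Z):
            for y in range(Y):
                for x in range(X):
                    for dz, dy, dx in offsets:
                        nz, ny, nx = z + dz, y + dy, x + dx
                        if 0 <= nz < Z and 0 <= ny < Y and 0 <= nx < X and lab[nz, ny, nx] != 0:
                            # attack attenuated by intensity dissimilarity
                            g = 1.0 - abs(vol[z, y, x] - vol[nz, ny, nx]) / max_diff
                            attack = g * st[nz, ny, nx]
                            if attack > new_st[z, y, x]:
                                new_st[z, y, x] = attack
                                new_lab[z, y, x] = lab[nz, ny, nx]
                                changed = True
        lab, st = new_lab, new_st
        if not changed:  # converged
            break
    return lab
```

The per-voxel independence of the attack rule is what makes the algorithm a natural fit for GPU parallelization.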
This paper proposes a method for false-positive reduction in mammography computer aided detection (CAD) systems by
detecting a linear structure (LS) in individual microcalcification (MCC) cluster candidates, which primarily involves
three steps. First, it applies a modified RANSAC algorithm to a region of interest (ROI) that encloses an MCC cluster
candidate to find an LS. Second, a peak-to-peak ratio of two orthogonal integral curves (named the RANSAC feature) is
computed based on the results of the first step. Last, the computed RANSAC feature is used, together with other MCC
cancer features, in a neural network for MCC classification, and the results are compared with classification without
the RANSAC feature. One thousand (1000) cases were used to train the classifiers, and 671 cases were used for testing.
The comparison shows a significant improvement in the reduction of linear-structure-associated false-positive readings
(up to about 40% FP reduction).
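The RANSAC feature, a peak-to-peak ratio of two orthogonal integral (projection) curves, might be sketched as follows. This assumes the ROI has already been rotated so the candidate LS lies horizontally; the function name and normalization are illustrative, not the paper's:

```python
import numpy as np

def integral_curve_ratio(roi):
    """Peak-to-peak ratio of two orthogonal integral (projection) curves.
    With the candidate LS lying horizontally, a genuine LS produces a sharp
    peak in the per-row curve but leaves the per-column curve nearly flat,
    so the ratio is large only when an LS is present."""
    row_curve = roi.sum(axis=1)  # integrates along the (horizontal) LS
    col_curve = roi.sum(axis=0)  # integrates across the LS
    return float(np.ptp(row_curve)) / (float(np.ptp(col_curve)) + 1e-9)
```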
This research addresses the problem of determining the location of a pulmonary nodule in a radiograph with the aid of a
pre-existing computed tomographic (CT) scan. The nodule is segmented in the radiograph using a level set segmentation
method that incorporates characteristics of the nodule in a digitally reconstructed radiograph (DRR) that is calculated
from the CT scan. The segmentation method includes two new level set energy terms. The contrast energy seeks to
increase the contrast of the segmented region relative to its surroundings. The gradient direction convergence energy is
minimized when the intensity gradient direction in the region converges to a point. The segmentation method was tested
on 23 pulmonary nodules from 20 cases for which both a radiographic image and a CT scan were collected. The mean
nodule effective diameter is 22.5 mm. The smallest nodule has an effective diameter of 12.0 mm and the largest an
effective diameter of 48.1 mm. Nodule position uncertainty was simulated by randomly offsetting the true nodule center
from an aim point. The segmented region is initialized to a circle centered at the aim point with a radius that is equal to
the effective radius of the nodule plus a 10.0 mm margin. When the segmented region that is produced by the proposed
method is used to localize the nodule, the average reduction in nodule-position uncertainty is 46%. The relevance of this
method to the detection of radiotherapy targets at the time of treatment is discussed.
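The two energy terms might look roughly like the following NumPy sketch; the exact functionals in the paper may differ, and these discrete forms are only illustrative:

```python
import numpy as np

def contrast_energy(image, mask):
    """Negative squared difference of mean intensity inside vs. outside the
    segmented region: minimizing it increases the region's contrast
    relative to its surroundings."""
    inside, outside = image[mask], image[~mask]
    if inside.size == 0 or outside.size == 0:
        return 0.0
    return -float((inside.mean() - outside.mean()) ** 2)

def convergence_energy(image, mask):
    """Negative mean cosine between the intensity gradient and the direction
    toward the region centroid: minimized when gradient directions in the
    region converge to a point, as for a roughly circular nodule."""
    gy, gx = np.gradient(image.astype(float))
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    g = np.stack([gy[ys, xs], gx[ys, xs]], axis=1)  # gradient at each pixel
    d = np.stack([cy - ys, cx - xs], axis=1)        # direction to centroid
    gn, dn = np.linalg.norm(g, axis=1), np.linalg.norm(d, axis=1)
    valid = (gn > 1e-9) & (dn > 1e-9)
    if not valid.any():
        return 0.0
    cos = (g[valid] * d[valid]).sum(axis=1) / (gn[valid] * dn[valid])
    return -float(cos.mean())
```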
This paper proposes a method for linear-structure (LS) verification in mammography computer-aided detection (CAD)
systems that aims at reducing post-classification microcalcification (MCC) false-positives (FPs). It is an MCC cluster-driven
method that verifies linear structures with a small rotatable band that is centered on a given MCC cluster
candidate. The classification status of an MCC cluster candidate is changed if its association with a linear structure is
confirmed through LS verification. Four features are extracted from the rotatable band
in the gradient-magnitude and Hough parameter spaces. The LS verification process applies cascade rules to the
extracted features to determine if an MCC cluster candidate resides in a linear structure area. The efficiency and efficacy
of the proposed method are demonstrated with results obtained by applying the LS verification method to over one
hundred cancer cases and over one thousand normal cases.
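The cascade-rule step can be sketched generically: an ordered list of predicates over the extracted features, where any single failure rejects the LS hypothesis. The feature names and thresholds below are illustrative assumptions, not the paper's:

```python
def cascade_verify(features, rules):
    """Apply ordered cascade rules to an MCC cluster candidate's features;
    any failing rule rejects the linear-structure hypothesis immediately."""
    for name, predicate in rules:
        if not predicate(features.get(name, 0.0)):
            return False
    return True

# Illustrative rules over gradient-magnitude and Hough-parameter-space features
RULES = [
    ("hough_peak_height", lambda v: v >= 12.0),    # strong collinear vote
    ("hough_peak_isolation", lambda v: v >= 2.0),  # peak stands out from runner-up
    ("grad_mag_mean", lambda v: v >= 0.15),        # sufficient edge strength in band
    ("grad_dir_coherence", lambda v: v >= 0.6),    # gradients roughly parallel
]
```

Cheap rules placed early let most candidates be rejected before the more expensive features are consulted.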
Patient setup error is one of the major causes of tumor position uncertainty in radiotherapy for extracranial targets,
which can result in a decreased radiation dose to the tumor and an increased dose to the normal tissues. Therefore, it
is a common practice to verify the patient setup accuracy by comparing portal images with a digitally reconstructed
radiograph (DRR) reference image. This paper proposes a practical method of portal image and DRR fusion for
patient setup verification. Because of the mean intensity difference between the inside and outside of the actual
radiation region in the portal image, the fusion is performed by registering the contents inside or outside of the
actual radiation region in the portal image to the corresponding contents extracted from the DRR image. The fusion
can also be performed statistically, by applying two separate image registration processes to the inside and outside
of the actual radiation region. To segment the
image registration contents, automatic and semiautomatic region delineation schemes are employed that aim at
minimizing the user's operational burden while maximizing the use of human intelligence. To achieve accurate and
fast delineation, this paper proposes adaptive weighting in the conventional level-set contour-finding algorithm for
the automatic delineation scheme, and adaptive banding in the conventional Intelligent Scissors algorithm for the
semiautomatic delineation scheme.
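The idea of restricting registration to one side of the radiation region can be illustrated with a minimal sketch: an exhaustive 2D translation search that maximizes normalized cross-correlation computed only over a mask. The paper's actual registration method may differ; this only demonstrates region-restricted similarity:

```python
import numpy as np

def masked_translation_register(fixed, moving, mask, max_shift=5):
    """Exhaustive translation search maximizing normalized cross-correlation,
    with the similarity computed only over the pixels selected by `mask`
    (e.g., inside, or outside, the actual radiation region)."""
    ys, xs = np.nonzero(mask)
    f = fixed[ys, xs].astype(float)
    f = (f - f.mean()) / (f.std() + 1e-9)  # zero-mean, unit-variance
    H, W = moving.shape
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            yy, xx = ys + dy, xs + dx
            valid = (yy >= 0) & (yy < H) & (xx >= 0) & (xx < W)
            if valid.sum() < 10:  # too little overlap to score reliably
                continue
            m = moving[yy[valid], xx[valid]].astype(float)
            m = (m - m.mean()) / (m.std() + 1e-9)
            score = float((f[valid] * m).mean())
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

Running the same search once with the inside mask and once with the outside mask gives the two separate registrations mentioned above.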
This paper reports on the current status of the multimodal user supervised interface and intelligent control (MUSIIC) project, which is working toward an intelligent assistive telemanipulative system for people with motor disabilities. Our MUSIIC strategy overcomes the limitations of previous approaches by integrating a multimodal RUI (robot user interface) with a semi-autonomous reactive planner, allowing users with severe motor disabilities to manipulate objects in an unstructured domain. The multimodal user interface is a speech- and deictic (pointing) gesture-based control that guides the operation of a semi-autonomous planner controlling the assistive telerobot. MUSIIC uses a vision system to determine the three-dimensional shape, pose, and color of objects and surfaces in the environment, as well as an object-oriented knowledge base and planning system that superimposes information about common objects in the three-dimensional world. This approach allows users to identify objects and tasks via a multimodal user interface that interprets their deictic gestures and a restricted, natural-language-like speech input. The multimodal interface eliminates the need for general-purpose object recognition by binding the user's speech and gesture input to a locus in the domain of interest. The underlying knowledge-driven planner combines information obtained from the user, the stereo-vision mechanism, and the knowledge bases to adapt previously learned plans to perform new tasks and to manipulate objects newly introduced into the workspace. The result is a flexible and intelligent telemanipulative system that functions as an assistive robot for people with motor disabilities.
The goal of our research is the development of an assistive telerobotic system that integrates human-computer interaction with reactive planning. The system is intended to operate in an unstructured environment, rather than in a structured workcell, allowing the user considerable freedom and flexibility in terms of control and operating ease. Our approach is based on the assumption that while the user's world is unstructured, the objects within it are reasonably predictable. We reflect this arrangement by providing a means of determining the superquadric shape representation of the scene, together with an object-oriented knowledge base and reactive planner that superimposes information about common objects in the world. A multimodal user interface interprets deictic gesture and speech inputs with the goal of identifying the object that is of interest to the user. The multimodal interface performs a critical disambiguation function by binding the spoken words to a locus in the physical workspace. The spoken input also supplants the need for general-purpose object recognition: 3D shape information is augmented by the user's spoken word, which may also invoke the appropriate inheritance of object properties using the adopted hierarchical object-oriented representation scheme. The underlying planning mechanism results in a reactive, intelligent, and `instructible' telerobot. We describe our approach to an intelligent assistive telerobotic system (MUSIIC) for unstructured environments: speech-deictic gesture control integrated with a knowledge-driven reactive planner and a stereo-vision system.
People with disabilities such as quadriplegia can use mouth-sticks and head-sticks as extension devices to perform desired manipulations. These extensions provide extended proprioception, which allows users to directly feel forces and other perceptual cues, such as texture, present at the tip of the mouth-stick. Such devices are effective for two principal reasons: their close contact with the user's tactile and proprioceptive sensing abilities, and the fact that they tend to be lightweight and very stiff and can thus convey tactile and kinesthetic information with high bandwidth. Unfortunately, traditional mouth-sticks and head-sticks are limited in workspace and in the mechanical power that can be transferred because of user mobility and strength limitations. We describe an alternative implementation of the head-stick device using the idea of a virtual head-stick: a head-controlled bilateral force-reflecting telerobot. In this system the end-effector of the slave robot moves as if it were at the tip of an imaginary extension of the user's head. The design goal is for the system to have the same intuitive operation and extended proprioception as a regular mouth-stick effector, but with an augmented workspace volume and mechanical power. Input is through a specially modified six-DOF master robot (a PerForce™ hand-controller) whose joints can be back-driven to apply forces at the user's head. Manipulation tasks in the environment are performed by a six-degree-of-freedom slave robot (the Zebra-ZERO™) with a built-in force sensor. We describe the prototype hardware/software implementation of the system, the control system design, safety and disability issues, and initial evaluation tasks.
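The kinematic core of the virtual head-stick, the slave tip tracking an imaginary extension of the head, reduces to composing the measured head pose with a fixed offset along the head's forward axis. A minimal sketch (the axis convention and names are assumptions, not the system's actual conventions):

```python
import numpy as np

def virtual_stick_tip(p_head, R_head, length):
    """World-frame position of the tip of an imaginary stick of the given
    length along the head's forward axis (assumed here to be local +z).
    The slave end-effector is commanded to this point; forces sensed at the
    slave are reflected back through the master along the same mapping."""
    return np.asarray(p_head, float) + np.asarray(R_head, float) @ np.array([0.0, 0.0, length])
```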
A person with limited arm and hand function could benefit from technology based on teleoperation principles, particularly where the mechanism provides proprioceptive-like information to the operator, giving an idea of the forces encountered in the environment and the positions of the slave robot. A test-bed system is being prepared to evaluate the potential for adapting telemanipulator technology to the needs of people with high-level spinal cord injury. This test-bed uses a kinematically dissimilar master and slave pair and will be adaptable to a variety of disabilities. The master will be head-controlled and, when combined with auxiliary functions, will provide the degrees of freedom necessary to attempt any task. In the simplest case, this mapping could be direct, with the slave amplifying the person's movements and forces. It is unrealistic, however, to expect that the operator will have the range of head movement required for accurate operation of the slave over the full workspace. Nor is it likely that the person will be able to achieve simultaneous and independent control of the six or more degrees of freedom required to move the slave. Thus a set of more general mappings must be available that can be chosen to relate to the intrinsic coordinates of the task. The software structure that implements the control of this master-slave system is based on two IBM PC computers linked via Ethernet.