To provide an accurate surface defect inspection method and to bring robust, automated delineation of image regions of interest (ROIs) to the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The presented method and the devised system are applicable mainly to surface quality inspection of strips, billets, and slabs. In this work we exploit the complementary advantages of two common machine vision (MV) systems: line-array CCD traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of the ROI can be placed adaptively: the upper and lower approximation sets define the ROI, and the boundary region is then delineated by the RFC region-competition classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for CC-slab surface defect inspection, allowing automatic AI algorithms and powerful ROI delineation strategies to be applied in the MV inspection field.
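The fuzzy-rough seeding idea above can be sketched in miniature. This is a hypothetical illustration, not the paper's actual formulation: `rough_seeds`, `rfc_label`, the homogeneity-based affinity, and the thresholds are all my own simplifications. Pixels in the lower approximation (certain object) become object seeds, pixels outside the upper approximation become background seeds, and the remaining boundary region is resolved by relative fuzzy connectedness competition.

```python
from heapq import heappush, heappop

def rough_seeds(membership, obj_t=0.8, bg_t=0.2):
    """Lower approximation (mu >= obj_t) -> object seeds;
    complement of the upper approximation (mu <= bg_t) -> background seeds;
    everything in between is the boundary region left to RFC competition."""
    obj, bg = set(), set()
    for pixel, mu in membership.items():
        if mu >= obj_t:
            obj.add(pixel)
        elif mu <= bg_t:
            bg.add(pixel)
    return obj, bg

def rfc_label(image, seeds_by_label):
    """Relative fuzzy connectedness: each pixel is claimed by the seed set
    that reaches it with the strongest path, where a path's strength is the
    minimum affinity along it (Dijkstra-like max-min propagation)."""
    rows, cols = len(image), len(image[0])
    def affinity(p, q):
        # simple homogeneity-based affinity in [0, 1]
        return 1.0 - abs(image[p[0]][p[1]] - image[q[0]][q[1]])
    best, label, heap = {}, {}, []
    for lab, seeds in seeds_by_label.items():
        for s in seeds:
            best[s], label[s] = 1.0, lab
            heappush(heap, (-1.0, s, lab))
    while heap:
        neg, p, lab = heappop(heap)
        strength = -neg
        if strength < best.get(p, 0.0):
            continue  # stale heap entry
        r, c = p
        for q in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= q[0] < rows and 0 <= q[1] < cols:
                s2 = min(strength, affinity(p, q))
                if s2 > best.get(q, 0.0):
                    best[q], label[q] = s2, lab
                    heappush(heap, (-s2, q, lab))
    return label
```

On a toy two-region image, the boundary pixels left unresolved by the rough approximations are assigned by whichever seed set reaches them through the more homogeneous path.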
A general body-wide automatic anatomy recognition (AAR) methodology was proposed in our previous work based on hierarchical fuzzy models of multitudes of objects which was not tied to any specific organ system, body region, or image modality. That work revealed the challenges encountered in modeling, recognizing, and delineating sparse objects throughout the body (compared to their non-sparse counterparts) if the models are based on the object's exact geometric representations. The challenges stem mainly from the variation in sparse objects in their shape, topology, geographic layout, and relationship to other objects. That led to the idea of modeling sparse objects not from the precise geometric representations of their samples but by using a properly designed optimal super form. This paper presents the underlying improved methodology which includes 5 steps: (a) Collecting image data from a specific population group G and body region B and delineating in these images the objects in B to be modeled; (b) Building a super form, S-form, for each object O in B; (c) Refining the S-form of O to construct an optimal (minimal) super form, S*-form, which constitutes the (fuzzy) model of O; (d) Recognizing objects in B using the S*-form; (e) Defining confounding and background objects in each S*-form for each object and performing optimal delineation. Our evaluations based on 50 3D computed tomography (CT) image sets in the thorax on four sparse objects indicate that substantially improved performance (FPVF~2%, FNVF~10%, and success where the previous approach failed) can be achieved using the new approach.
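The covering idea behind the S-form in step (b) can be sketched as follows. This is a deliberately minimal stand-in under stated assumptions: masks are already co-aligned binary arrays, the S-form is taken as their voxelwise union, and the fuzzy membership is the fraction of samples covering each voxel. The actual S*-form refinement of step (c), which makes the super form optimal (minimal), is more involved and is not shown.

```python
def super_form(masks):
    """S-form sketch: a set guaranteed to cover every training sample of an
    object, here simply the voxelwise union of co-aligned binary masks."""
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[max(m[r][c] for m in masks) for c in range(cols)]
            for r in range(rows)]

def fuzzy_model(masks):
    """Fuzzy membership inside the S-form: the fraction of training samples
    covering each voxel (1 in the common core, lower toward the fringe)."""
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[sum(m[r][c] for m in masks) / n for c in range(cols)]
            for r in range(rows)]
```

For two overlapping 1x3 samples, the union covers all three voxels while the membership peaks at 1.0 only where the samples agree.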
With the rapid growth of positron emission tomography/computed tomography (PET/CT)-based medical applications, body-wide anatomy recognition on whole-body PET/CT images becomes crucial for quantifying body-wide disease burden. This, however, is a challenging and seldom-studied problem, due to the unclear anatomy reference frame and low spatial resolution of PET images as well as the low contrast and spatial resolution of the associated low-dose CT images. We previously developed an automatic anatomy recognition (AAR) system whose applicability was demonstrated on diagnostic computed tomography (CT) and magnetic resonance (MR) images in different body regions on 35 objects. The aim of the present work is to investigate strategies for adapting the previous AAR system to low-dose CT and PET images toward automated body-wide disease quantification. Our adaptation of the previous AAR methodology to PET/CT images in this paper focuses on 16 objects in three body regions – thorax, abdomen, and pelvis – and consists of the following steps: collecting whole-body PET/CT images from existing patient image databases, delineating all objects in these images, modifying the previous hierarchical models built from diagnostic CT images to account for differences in appearance in low-dose CT and PET images, automatically locating objects in these images following the object hierarchy, and evaluating performance. Our preliminary evaluations indicate that the performance of the AAR approach on low-dose CT images achieves object localization accuracy within about 2 voxels, which is comparable to the accuracies achieved on diagnostic contrast-enhanced CT images. Object recognition on low-dose CT images from PET/CT examinations without requiring diagnostic contrast-enhanced CT seems feasible.
To make Quantitative Radiology a reality in routine radiological practice, computerized automatic anatomy recognition (AAR) becomes essential. Previously, we presented a fuzzy object modeling strategy for AAR. This paper presents several advances in this project including streamlined definition of open-ended anatomic objects, extension to multiple imaging modalities, and demonstration of the same AAR approach on multiple body regions. The AAR approach consists of the following steps: (a) Collecting image data for each population group G and body region B. (b) Delineating in these images the objects in B to be modeled. (c) Building Fuzzy Object Models (FOMs) for B. (d) Recognizing individual objects in a given image of B by using the models. (e) Delineating the recognized objects. (f) Implementing the computationally intensive steps in a graphics processing unit (GPU). Image data are collected for B and G from our existing patient image database. Fuzzy models for the individual objects are built and assembled into a model of B as per a chosen hierarchy of the objects in B. A global recognition strategy is used to determine the pose of the objects within a given image I following the hierarchy. The recognized pose is utilized to delineate the objects, also hierarchically. Based on three body regions tested utilizing both computed tomography (CT) and magnetic resonance (MR) imagery, recognition accuracy for non-sparse objects has been found to be generally sufficient (3 to 11 mm, or 2-3 voxels) to yield delineation false positive (FP) and true positive (TP) values of < 5% and ≥ 90%, respectively. The sparse objects require further work to improve their recognition accuracy.
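The hierarchical recognition of step (d) can be sketched structurally. This is a hypothetical simplification: the actual method performs a model-guided pose search, whereas `recognize_hierarchy` below only shows how a recognized parent pose constrains its children via learned mean parent-to-child offsets, following the object hierarchy top-down.

```python
def recognize_hierarchy(hierarchy, root, root_pose, offsets):
    """Hierarchical recognition sketch: predict each object's pose from its
    parent's recognized pose plus a mean offset (a hypothetical stand-in
    for the pose search that a real AAR system would run per object)."""
    poses = {root: root_pose}
    stack = [root]
    while stack:
        parent = stack.pop()
        for child in hierarchy.get(parent, []):
            px, py, pz = poses[parent]
            ox, oy, oz = offsets[child]
            poses[child] = (px + ox, py + oy, pz + oz)
            stack.append(child)
    return poses
```

The design point is that errors stay local: an object's initial pose estimate depends only on its ancestors in the chosen hierarchy, which is why the choice of hierarchy matters for recognition accuracy.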
We study the problem of automatic delineation of an anatomic object in an image, where the object is solely
identified by its anatomic prior. We form such priors in the form of fuzzy models to facilitate the segmentation of
images acquired via different imaging modalities (like CT, MRI, or PET), in which the recorded image properties
are usually different. Our main interest is in delineating different body organs in medical images for automatic
anatomy recognition (AAR).
The AAR system we are developing consists of three main components: (C1) building body-wide groupwise
fuzzy anatomic models; (C2) recognizing the body organs geographically and then delineating them by employing
the models; (C3) generating quantitative descriptions. This paper focuses on (C2) and presents a unified approach
for model-based segmentation within which several different strategies can be formulated, ranging from model-based
hard/fuzzy thresholding to model-based graph cut, fuzzy connectedness, and random walker methods and
algorithms. This is an important theoretical advance.
The presented experiments clearly demonstrate that a fully automatic segmentation system based on fuzzy
models can indeed provide reliable segmentations. However, these experiments utilize only the
simplest versions of the methodology presented in the theoretical part of the paper. The full experimental
evaluation of the methodology is still a work in progress.
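The simplest strategy in the unified framework, model-based thresholding, can be sketched as follows. This is an illustrative assumption on my part, not the paper's exact rule: an image-derived object likelihood is combined multiplicatively with the fuzzy-model membership, and the product is thresholded, so a bright pixel far from where the model expects the object is still rejected.

```python
def model_based_threshold(image, model_mu, likelihood, tau=0.25):
    """Model-based hard thresholding sketch: a pixel is labeled object only
    if the product of its image-based likelihood and its fuzzy-model
    membership clears the threshold tau (hypothetical combination rule)."""
    return [[1 if likelihood(v) * model_mu[r][c] >= tau else 0
             for c, v in enumerate(row)]
            for r, row in enumerate(image)]
```

In the toy case below, a pixel with high intensity but near-zero model membership is excluded, which is exactly the behavior that distinguishes model-based thresholding from plain intensity thresholding.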