A novel algorithm for the analysis and classification of breast abnormalities in digital mammography based on a deep convolutional neural network is proposed. Simplified neural network architectures such as MobileNetV2, InceptionResNetV2, Xception, and ResNetV2 are studied intensively for this task. To improve the accuracy of detection and classification of breast abnormalities on real data, an efficient training algorithm based on an augmentation technique is suggested. The performance of the proposed algorithm on real data is discussed and compared with that of state-of-the-art algorithms.
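As an illustration only (not the authors' implementation), the sketch below shows what such a training setup could look like in PyTorch: a pretrained MobileNetV2 with its classifier head replaced for a two-class task, fed by a heavier-than-default augmentation pipeline. The library choice, the two-class head, and all hyperparameter values are assumptions.

import torch
import torch.nn as nn
from torchvision import models, transforms

# Augmentations of the kind typically applied to mammogram patches:
# flips, small rotations, mild intensity jitter (values are assumed).
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone with the head replaced for two classes
# (e.g. benign vs. malignant); an illustrative configuration only.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.last_channel, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of augmented patches."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()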
Proc. SPIE 11510, Applications of Digital Image Processing XLIII
KEYWORDS: 3D image reconstruction, Cameras, Sensors, Clouds, 3D modeling, Image registration, Reconstruction algorithms, 3D image processing, RGB color model
In this paper, we propose a new algorithm for dense 3D object reconstruction using an RGB-D sensor at a high frame rate. To obtain a dense shape recovery of a 3D object, an efficient merging of the current and incoming point clouds aligned with the Iterative Closest Point (ICP) algorithm is suggested. As a result, incoming frames are aligned to the dense 3D model. The accuracy of the proposed 3D object reconstruction algorithm on real data is compared with that of state-of-the-art reconstruction algorithms.
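A minimal sketch of this kind of frame-to-model integration, assuming Open3D for point cloud handling (the library, the function name integrate_frame, the voxel size, and the ICP threshold are all assumptions, not the authors' code):

import numpy as np
import open3d as o3d

VOXEL = 0.005          # merge resolution in metres (assumed)
MAX_CORR_DIST = 0.02   # ICP correspondence threshold in metres (assumed)

def integrate_frame(model, frame, init=np.eye(4)):
    """Align an incoming point cloud to the dense model and merge it."""
    frame = frame.voxel_down_sample(VOXEL)
    frame.estimate_normals()
    model.estimate_normals()
    # Point-to-plane ICP registers the incoming frame against the model.
    result = o3d.pipelines.registration.registration_icp(
        frame, model, MAX_CORR_DIST, init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    frame.transform(result.transformation)
    # Merge and re-downsample so the dense model stays bounded in size.
    merged = (model + frame).voxel_down_sample(VOXEL)
    return merged, result.transformation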
In this paper, we address 3D face recognition with deep convolutional neural networks in autonomous mobile systems, where the large size of the neural models and the extremely high computational complexity of the classification procedures, caused by the large network depth, are limiting factors. To solve this problem, we use compression and pruning algorithms. Since these algorithms decrease the recognition accuracy, we propose an efficient retraining of the compressed models so that their recognition accuracy approaches that of very large modern neural networks. The performance of the proposed neural models with compression and pruning is compared in terms of face recognition accuracy and compression rate.
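A minimal sketch of pruning followed by retraining, assuming PyTorch's torch.nn.utils.prune and a 30% unstructured sparsity level (both assumptions; the specific compression and pruning algorithms of the paper are not reproduced here):

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_and_retrain(model, train_loader, amount=0.3, epochs=3, lr=1e-4):
    # Apply L1-magnitude unstructured pruning to every Conv2d layer.
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)

    # Retrain so the remaining weights compensate for the removed ones.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            criterion(model(images), labels).backward()
            optimizer.step()

    # Make the pruning permanent by folding the masks into the weights.
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.remove(module, "weight")
    return model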
In this paper, we first estimate the accuracy of 3D facial surface reconstruction from real RGB-D depth maps using various depth filtering algorithms. Next, a new 3D face recognition algorithm based on a deep convolutional neural network is proposed. With the help of 3D face augmentation techniques, different facial expressions are synthesized from a single 3D face scan and used for network training. The performance of the proposed algorithm is compared with that of common 3D face recognition algorithms in terms of 3D face recognition metrics and processing time.
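The expression synthesis itself requires a deformable face model and is not reproduced here; the sketch below only illustrates the simpler idea of expanding a training set from a single 3D scan by random pose and noise perturbations (the function name and all parameter values are assumptions):

import numpy as np

def augment_scan(points, n_variants=10, max_angle_deg=15.0, noise_std=0.001):
    """Generate perturbed copies of an (N, 3) face point cloud."""
    rng = np.random.default_rng(0)
    variants = []
    for _ in range(n_variants):
        # Random small rotation about the vertical (yaw) axis.
        a = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
        R = np.array([[ np.cos(a), 0.0, np.sin(a)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(a), 0.0, np.cos(a)]])
        jitter = rng.normal(scale=noise_std, size=points.shape)
        variants.append(points @ R.T + jitter)
    return variants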
In this paper, we propose an algorithm for the detection of local features in depth maps. The local features can be used to determine characteristic points for Iterative Closest Point (ICP) algorithms. The proposed algorithm employs a novel cascade mechanism that can be applied to several 3D keypoint detection algorithms. Computer simulation and experimental results obtained with the proposed algorithm on real-life scenes are presented and compared with those of state-of-the-art algorithms in terms of detection efficiency, accuracy, and processing speed. The results show an improvement in the accuracy of 3D object reconstruction when the proposed detector is followed by ICP algorithms.
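A minimal sketch of a two-stage cascade on a depth map, illustrating the general idea rather than the paper's detector (the thresholds, window size, and variance-based second stage are assumptions):

import numpy as np

def cascade_keypoints(depth, grad_thresh=0.01, win=3, top_k=500):
    """Return (row, col) keypoint candidates from an (H, W) depth map."""
    # Stage 1: a cheap depth-gradient test rejects flat regions.
    gy, gx = np.gradient(depth)
    candidates = np.argwhere(np.hypot(gx, gy) > grad_thresh)

    # Stage 2: a costlier local score (depth variance in a window),
    # computed only for pixels that survived stage 1.
    h, w = depth.shape
    scores = np.empty(len(candidates))
    for i, (r, c) in enumerate(candidates):
        r0, r1 = max(r - win, 0), min(r + win + 1, h)
        c0, c1 = max(c - win, 0), min(c + win + 1, w)
        scores[i] = depth[r0:r1, c0:c1].var()
    order = np.argsort(scores)[::-1][:top_k]
    return candidates[order]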
Proc. SPIE 10752, Applications of Digital Image Processing XLI
KEYWORDS: Data modeling, Digital filtering, Denoising, Clouds, 3D modeling, Image filtering, Reconstruction algorithms, Nonlinear filtering, Magnetorheological finishing, RGB color model
In this paper, we estimate the accuracy of 3D object reconstruction using depth filtering and data from an RGB-D sensor. Depth filtering algorithms perform inpainting and upsampling of defective depth maps from an RGB-D sensor. To improve the accuracy of 3D object reconstruction, an efficient and fast depth filtering method is designed. Various depth filtering methods are tested and compared with respect to reconstruction accuracy on real data. The presented results show an improvement in the accuracy of 3D object reconstruction when depth filtering is applied to the RGB-D sensor data.
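A minimal sketch of depth inpainting and upsampling, assuming SciPy and OpenCV (nearest-neighbour hole filling, bilateral smoothing, and bilinear upsampling are illustrative choices, not the filtering method evaluated in the paper):

import numpy as np
import cv2
from scipy import ndimage

def filter_depth(depth, upscale=2):
    """depth: (H, W) float32 depth in metres, 0 where the sensor failed."""
    # Inpainting: give each invalid pixel the value of its nearest valid one.
    invalid = depth <= 0
    idx = ndimage.distance_transform_edt(
        invalid, return_distances=False, return_indices=True)
    filled = depth[tuple(idx)]

    # Edge-preserving smoothing of the filled depth map.
    smoothed = cv2.bilateralFilter(filled.astype(np.float32),
                                   d=5, sigmaColor=0.05, sigmaSpace=5.0)

    # Upsampling to the target resolution.
    h, w = depth.shape
    return cv2.resize(smoothed, (w * upscale, h * upscale),
                      interpolation=cv2.INTER_LINEAR)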
Proc. SPIE 10752, Applications of Digital Image Processing XLI
KEYWORDS: Detection and tracking algorithms, Cameras, Sensors, Calibration, Clouds, 3D modeling, Reconstruction algorithms, Biological research, Sensor calibration, RGB color model
In this paper, we reconstruct the 3D shape of an object using multiple Kinect sensors. First, we capture RGB-D data and estimate the intrinsic parameters of each Kinect sensor. Second, a calibration procedure is used to obtain an initial rough estimate of the sensor poses. Next, the extrinsic parameters are estimated using the initial rigid transformation matrix as the starting point of the Iterative Closest Point (ICP) algorithm. Finally, the calibrated data from the Kinect sensors are fused. Experimental reconstruction results obtained with Kinect V2 sensors are presented and analyzed in terms of reconstruction accuracy.
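A minimal sketch of the extrinsic refinement and fusion steps, assuming Open3D (the function name fuse_kinects, the voxel size, the ICP threshold, and the choice of the first sensor as reference are assumptions):

import numpy as np
import open3d as o3d

MAX_CORR_DIST = 0.03  # ICP correspondence threshold in metres (assumed)

def fuse_kinects(clouds, init_poses, voxel=0.005):
    """clouds: list of o3d.geometry.PointCloud, one per Kinect;
    init_poses: rough 4x4 poses of each sensor w.r.t. clouds[0]."""
    reference = clouds[0].voxel_down_sample(voxel)
    fused = reference
    for cloud, init in zip(clouds[1:], init_poses[1:]):
        src = cloud.voxel_down_sample(voxel)
        # Refine the rough calibration pose with point-to-point ICP.
        result = o3d.pipelines.registration.registration_icp(
            src, reference, MAX_CORR_DIST, init,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        src.transform(result.transformation)
        fused += src
    return fused.voxel_down_sample(voxel)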