Purpose: Automatically detecting knee orientation from scout scans with high speed and accuracy is essential to an efficient MR knee imaging workflow. Although traditional image-processing methods such as rigid image registration and object detection are potentially viable solutions, they are sensitive to image noise, to missing features caused by metal implants, and to anatomical variability in knee size and tissue composition. Method: In this study, a segmentation-based approach was proposed to calculate a 3-D transformation matrix defining 3-D knee orientation from low-resolution MR scout scans. Specifically, a 3-D U-Net was used to segment a plane parallel to the knee meniscus plane and to reconstruct the plane normal as the first vector (v1) of the 3-D transformation matrix. A separate 3-D U-Net model was then trained to segment a plane perpendicular to the meniscus and to reconstruct its normal as v2. A linear 3-D transformation matrix was then obtained for each of the 14 testing subjects, whose scans had initially been manually rotated by small (group S) or large (group L) angles. Angle-corrected images were also visually compared against their corresponding ground truth. Results: The average v1 and v2 errors in group S were 5.62° and 5.12°, respectively, versus 6.65° and 8.25°, respectively, in group L. The standard deviations of the v1 and v2 errors were 2.51° and 2.84° in group S and 5.65° and 7.65° in group L. The Dice similarity coefficients (DSC) of the reconstructed v1 and v2 planes were 0.78 and 0.70 for group S and 0.71 and 0.65 for group L. The qualitative assessment further showed consistent knee representation after correction, even for knees with heavy distortion and fatty tissue.
Conclusion: Initial results suggest that our approach has the potential to automatically correct the small knee rotations commonly seen in clinical settings, and that it remains robust under a stress test on knees that appear heavily distorted and that contain anatomical structures (e.g., fatty tissue) absent from the training data set.
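The two predicted plane normals only define a valid rotation once they are made exactly orthogonal and completed with a third axis. A minimal NumPy sketch of one way to assemble such a matrix via Gram-Schmidt orthonormalization (the function name, axis ordering, and example vectors are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def rotation_from_plane_normals(v1, v2):
    """Build a 3-D rotation matrix from two approximately orthogonal
    plane normals (e.g., v1 from the meniscus-parallel plane, v2 from
    the perpendicular plane) using Gram-Schmidt orthonormalization."""
    e1 = v1 / np.linalg.norm(v1)
    # Remove the component of v2 along e1 so the two axes are exactly orthogonal
    v2_orth = v2 - np.dot(v2, e1) * e1
    e2 = v2_orth / np.linalg.norm(v2_orth)
    # Cross product completes a right-handed orthonormal frame
    e3 = np.cross(e1, e2)
    return np.stack([e1, e2, e3], axis=1)  # columns are the new axes

# Example: slightly tilted normals, as a network might predict
R = rotation_from_plane_normals(np.array([0.0, 0.1, 1.0]),
                                np.array([1.0, 0.0, 0.05]))
```

The resulting matrix is orthonormal (R.T @ R is the identity), so applying R (or its inverse) to voxel coordinates performs the angle correction without shearing.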
Volumetric texture analysis is an important task in the medical imaging domain and is widely used for characterizing tissues and tumors in medical volumes. Local binary pattern (LBP)-based texture descriptors are quite successful at characterizing texture information in 2D images. Unfortunately, the number of binary patterns grows exponentially with the number of bits in the LBP, so its straightforward extension to the 3D domain yields an extremely large number of bit patterns, many of which may not be relevant for subsequent tasks such as classification. In this work we present an efficient extension of LBP to 3D data using a decision tree. The leaves of this tree represent texture words whose binary patterns are encoded by the path followed from the root to the leaf. Once trained, the tree is used to build a histogram, in bag-of-words fashion, that serves as a texture descriptor for the whole volumetric image. For training, each voxel is converted into a 3D LBP pattern and assigned the label of its corresponding volumetric image; these labeled patterns are then used to construct the decision tree in a supervised fashion. The leaves of the resulting tree serve as the texture vocabulary for downstream learning tasks. The proposed texture descriptor achieved state-of-the-art classification results on the RFAI database. We further showed its efficacy on an MR knee protocol classification task, where we obtained near-perfect results. The proposed algorithm is extremely efficient, computing the texture descriptor of a typical MRI image in less than 100 milliseconds.
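The pipeline above can be sketched end to end: each interior voxel becomes a 26-bit binary pattern (one comparison per 3-D neighbor), a supervised decision tree is fit on the labeled patterns, and the tree's leaf indices act as texture words binned into a normalized histogram. This is a toy illustration under stated assumptions (scikit-learn's `DecisionTreeClassifier` as the tree, a fixed neighbor ordering, random synthetic volumes), not the authors' code:

```python
import numpy as np
from itertools import product
from sklearn.tree import DecisionTreeClassifier

def lbp3d_patterns(vol):
    """Map each interior voxel to a 26-dim binary vector:
    bit i = 1 if neighbor i >= center voxel."""
    offs = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]
    center = vol[1:-1, 1:-1, 1:-1]
    feats = [(vol[1 + dz:vol.shape[0] - 1 + dz,
                  1 + dy:vol.shape[1] - 1 + dy,
                  1 + dx:vol.shape[2] - 1 + dx] >= center).ravel()
             for dz, dy, dx in offs]
    return np.stack(feats, axis=1).astype(np.uint8)

# Toy data: two texture "classes" (random noise vs. smooth gradient)
rng = np.random.default_rng(0)
vols = [rng.random((8, 8, 8)) for _ in range(2)] + \
       [np.linspace(0, 1, 512).reshape(8, 8, 8) for _ in range(2)]
labels = [0, 0, 1, 1]

# Fit the tree on (voxel pattern, volume label) pairs; a leaf cap bounds
# the vocabulary size, avoiding the 2**26 raw patterns
X = np.vstack([lbp3d_patterns(v) for v in vols])
y = np.concatenate([[l] * lbp3d_patterns(v).shape[0]
                    for l, v in zip(labels, vols)])
tree = DecisionTreeClassifier(max_leaf_nodes=32, random_state=0).fit(X, y)

def descriptor(vol, tree):
    """Bag-of-words histogram over the tree's leaf ids for one volume."""
    leaves = tree.apply(lbp3d_patterns(vol))
    hist = np.bincount(leaves, minlength=tree.tree_.node_count).astype(float)
    return hist / hist.sum()
```

The `max_leaf_nodes` cap is what keeps the descriptor compact: instead of the full exponential pattern space, only the data-driven leaves that discriminate between classes become words.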
Automatically detecting anatomy orientation is an important task in medical image analysis. Specifically, the ability to automatically detect the coarse orientation of structures is useful for minimizing the effort of fine/accurate orientation detection algorithms, initializing non-rigid deformable registration algorithms, or aligning models to target structures in model-based segmentation algorithms. In this work, we present a deep convolutional neural network (DCNN)-based method for fast and robust detection of coarse structure orientation, i.e., the hemisphere in which the principal axis of a structure lies. That is, our algorithm predicts whether the principal orientation of a structure is in the northern or southern hemisphere, which we will refer to as UP and DOWN, respectively, in the remainder of this manuscript. The only assumption of our method is that the entire structure is located within the scan's field-of-view (FOV). To solve the problem efficiently in 3D space, we formulated it as a multi-planar 2D deep learning problem. In the training stage, a large number of coronal-sagittal slice pairs are constructed as 2-channel images to train a DCNN to classify whether a scan is UP or DOWN. During testing, we randomly sample a small number of coronal-sagittal 2-channel images and pass them through our trained network; the coarse structure orientation is then determined by majority voting. We tested our method on 114 elbow MR scans. Experimental results suggest that only five 2-channel images are sufficient to achieve a high success rate of 97.39%. Our method is also extremely fast, taking approximately 50 milliseconds per 3D MR scan, and is insensitive to the location of the structure in the FOV.
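The test-time procedure (sample a few coronal-sagittal pairs, classify each as a 2-channel image, take a majority vote) can be sketched as follows. Here `classify_pair` is a hypothetical stand-in for the trained DCNN, and the slice-axis conventions and cropping are assumptions for illustration only:

```python
import numpy as np

def predict_orientation(volume, classify_pair, n_samples=5, rng=None):
    """Coarse UP/DOWN prediction by majority vote over randomly sampled
    coronal-sagittal 2-channel slice pairs.

    volume        : 3-D array (depth, height, width) -- assumed axis order
    classify_pair : callable mapping a (2, H, W) array to 0 (DOWN) or 1 (UP);
                    stands in for the trained DCNN
    """
    if rng is None:
        rng = np.random.default_rng()
    D, H, W = volume.shape
    votes = []
    for _ in range(n_samples):
        cor = volume[:, rng.integers(H), :]   # random coronal slice
        sag = volume[:, :, rng.integers(W)]   # random sagittal slice
        # Crop both slices to a common size so they stack as 2 channels
        h = min(cor.shape[0], sag.shape[0])
        w = min(cor.shape[1], sag.shape[1])
        pair = np.stack([cor[:h, :w], sag[:h, :w]])
        votes.append(classify_pair(pair))
    return int(np.sum(votes) > n_samples / 2)  # 1 = UP, 0 = DOWN
```

Using an odd `n_samples` (such as the five pairs found sufficient in the experiments) avoids tied votes, and misclassified individual slices are outvoted as long as the per-slice classifier is better than chance.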