The automatic analysis of whole slide images (WSIs) of stained histopathology tissue sections plays a crucial role in the discovery of predictive biomarkers in the field of immuno-oncology by enabling the quantification of the phenotypic information contained in the tissue sections. The automatic detection of cells and nuclei, while being one of the major steps of such analysis, remains a difficult problem because of the low visual differentiation of highly pleomorphic and densely cluttered objects and the diversity of tissue appearance between slides. The key idea of this work is to take advantage of well-differentiated objects in each slide to learn about the appearance of the tissue, and in particular about the appearance of poorly differentiated objects. We detect well-differentiated objects on an automatically selected set of representative regions, learn slide-specific visual context models, and finally use the resulting posterior maps to perform the final detection steps on the whole slide. The accuracy of the method is demonstrated against manual annotations on a set of differently stained images.
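The idea of bootstrapping from confident detections can be illustrated with a minimal sketch: fit per-class intensity statistics on a few confidently labelled pixels, then score every pixel of the slide with the resulting posterior. This is a deliberately simplified, intensity-only toy; the slide-specific visual context models in the abstract are richer, and all names below are illustrative.

```python
import numpy as np

def posterior_map(image, fg_mask, bg_mask):
    """Score every pixel using 1-D Gaussian intensity models fitted on
    confidently labelled foreground/background pixels (toy model only)."""
    fg, bg = image[fg_mask], image[bg_mask]

    def gauss(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    p_fg = gauss(image, fg.mean(), fg.std() + 1e-6)
    p_bg = gauss(image, bg.mean(), bg.std() + 1e-6)
    # Posterior probability of the foreground class (equal priors assumed).
    return p_fg / (p_fg + p_bg + 1e-12)

# Toy slide: dark nuclei (~0.2) on bright tissue (~0.8).
rng = np.random.default_rng(0)
img = 0.8 + 0.05 * rng.standard_normal((64, 64))
img[20:30, 20:30] = 0.2 + 0.05 * rng.standard_normal((10, 10))

fg_seed = np.zeros_like(img, bool); fg_seed[22:28, 22:28] = True  # well-differentiated objects
bg_seed = np.zeros_like(img, bool); bg_seed[:10, :10] = True      # clear background

post = posterior_map(img, fg_seed, bg_seed)
detected = post > 0.5
```

Thresholding the posterior map then yields candidate detections over the whole slide, including objects whose appearance alone would be ambiguous.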
Motivation: The existing visualization of the Camera Augmented Mobile C-arm (CamC) system lacks depth cues and presents anatomical information in a way that can confuse surgeons. Methods: We propose a method that segments anatomical information from the X-ray image and then augments it onto the video images. To provide depth cues, pixels in the video images are classified into skin and object classes, and the anatomical information from the X-ray is overlaid only on pixels with a high probability of belonging to the skin class. Results: We tested our algorithm by displaying the new visualization to 2 expert surgeons and 1 medical student during three surgical workflow steps of the interlocking of intramedullary nail procedure, namely: skin incision, center punching, and drilling. Via a survey questionnaire, they were asked to assess the new visualization against the current alpha-blending overlay image displayed by CamC. All participants (100%) agreed that occlusion handling and instrument tip detection were immediately improved with our technique. When asked whether our visualization could replace the existing alpha-blending overlay during interlocking procedures, all participants recommended its immediate integration for the navigation and guidance of the procedure. Conclusion: Current alpha-blending visualizations lack proper depth cues and can be a source of confusion for surgeons when performing surgery. Our visualization concept shows great potential in alleviating occlusion and facilitating clinician understanding during specific workflow steps of the intramedullary nailing procedure.
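The core rendering step described above, blending the X-ray overlay only where a pixel is likely skin, can be sketched as follows. This is an illustrative sketch under assumed inputs (float images in [0, 1] and a per-pixel skin-probability map from some classifier); the function and parameter names are not from the paper.

```python
import numpy as np

def selective_overlay(video, xray, p_skin, alpha=0.5, threshold=0.5):
    """Alpha-blend the X-ray onto the video only where the pixel is
    likely skin; elsewhere keep the plain video frame, so instruments
    and hands stay fully visible and occlusion cues are preserved."""
    mask = p_skin > threshold
    out = video.copy()
    out[mask] = (1 - alpha) * video[mask] + alpha * xray[mask]
    return out

# Toy frames: uniform grey video, darker X-ray, top half classified as skin.
video = np.full((4, 4), 0.6)
xray = np.full((4, 4), 0.2)
p_skin = np.zeros((4, 4)); p_skin[:2] = 0.9

blended = selective_overlay(video, xray, p_skin)
```

In the skin region the output is the usual alpha blend; over instruments (low skin probability) the video is left untouched, which is what restores the depth ordering between tools and anatomy.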
Although Virtual Histology (VH) is the in-vivo gold standard for atherosclerotic plaque characterization in IVUS images, it suffers from poor longitudinal resolution due to ECG gating. In this paper, we propose an image-based approach to overcome this limitation. Since each tissue type has different echogenic characteristics, it exhibits different local frequency components in IVUS images. Using the Redundant Wavelet Packet Transform (RWPT), IVUS images are decomposed into multiple sub-band images. To encode the textural statistics of each resulting image, run-length features are extracted from the neighborhood centered on each pixel. To obtain the best discrimination power from these features, relevant sub-bands are selected using the Local Discriminant Bases (LDB) algorithm in combination with Fisher's criterion. A structure of weighted multi-class SVMs classifies the extracted feature vectors into three tissue classes, namely fibro-fatty, necrotic core, and dense calcified tissue. Results show the superiority of our approach, with an overall accuracy of 72%, compared to methods based on Local Binary Patterns and co-occurrence matrices, which give accuracy rates of 70% and 71%, respectively.
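The sub-band selection step can be illustrated with a minimal sketch of Fisher's criterion: score each candidate feature by the ratio of between-class to within-class spread and keep the top-scoring ones. This is a simplified stand-in, the actual LDB algorithm prunes a wavelet-packet tree rather than ranking a flat feature list, and the two-class form below is only for illustration.

```python
import numpy as np

def fisher_ratio(feature, labels):
    """Fisher's criterion for one scalar feature over two classes:
    (difference of class means)^2 / (sum of class variances)."""
    a, b = feature[labels == 0], feature[labels == 1]
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

def select_subbands(features, labels, k):
    """Rank candidate per-sub-band features by Fisher ratio and keep
    the k most discriminative ones."""
    scores = np.array([fisher_ratio(f, labels) for f in features.T])
    return np.argsort(scores)[::-1][:k]

# Synthetic example: feature 0 separates the classes, features 1-2 are noise.
rng = np.random.default_rng(1)
labels = np.repeat([0, 1], 50)
f0 = np.concatenate([rng.normal(0, 1, 50), rng.normal(4, 1, 50)])
noise = rng.normal(0, 1, (100, 2))
X = np.column_stack([f0, noise])

best = select_subbands(X, labels, k=1)
```

Only the selected sub-bands then feed the run-length feature extraction and the SVM classifier, which keeps the feature vectors compact.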
Medical imaging is essential in the diagnosis of atherosclerosis. In this paper, we propose the semi-automatic matching of two promising and complementary intravascular imaging techniques, Intravascular Ultrasound (IVUS) and Optical Coherence Tomography (OCT), with the ultimate goal of producing hybrid images with increased diagnostic value for assessing arterial health. If no ECG gating has been performed on the IVUS and OCT pullbacks, there is typically an anatomical shuffle (displacement in time and space) in the image sequences due to catheter motion in the artery during the cardiac cycle, which makes direct 3D registration impossible. Therefore, the goal of our work is to semi-automatically detect the corresponding images in both modalities as a preprocessing step for the fusion. Our method is based on characterizing the lumen shape with a set of Gabor jet features. We also introduce correction terms based on the approximate position of each slice in the artery. We then train support vector machines on these features to recognize the correspondences. Experimental results demonstrate the usefulness of our approach, which achieves up to 95% matching accuracy on our data.
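The Gabor jet descriptor used for the lumen shape can be sketched as follows: a jet is the vector of magnitude responses of a small bank of Gabor filters (several frequencies and orientations) at one image location, and two jets can be compared with a normalized dot product. This is an illustrative sketch only; the filter-bank parameters and the similarity measure below are assumptions, not the paper's exact configuration.

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma=3.0):
    """Complex Gabor kernel: an oriented complex sinusoid modulated
    by an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def gabor_jet(patch, freqs=(0.1, 0.2), n_theta=4):
    """Jet at the patch center: magnitudes of the patch's inner
    products with every kernel in the bank."""
    return np.array([np.abs(np.sum(patch * gabor_kernel(patch.shape[0], f, t)))
                     for f in freqs
                     for t in np.linspace(0, np.pi, n_theta, endpoint=False)])

def jet_similarity(j1, j2):
    """Cosine similarity between two jets."""
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))

rng = np.random.default_rng(2)
patch = rng.standard_normal((15, 15))
same = jet_similarity(gabor_jet(patch), gabor_jet(patch))
diff = jet_similarity(gabor_jet(patch), gabor_jet(rng.standard_normal((15, 15))))
```

Jets computed at corresponding lumen positions in IVUS and OCT frames (plus the position-based correction terms) then form the feature vectors fed to the SVMs.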