Patients’ responses to a drug differ at the cellular level. Here, we present an image-based cell phenotypic feature quantification method for predicting the responses of patient-derived glioblastoma cells to a particular drug. We used high-content imaging to characterize patient-derived cancer cells. A 3D spheroid culture resembles the in vivo environment more closely than a 2D adherent culture does and allows the characteristics of cellular aggregates to be observed; however, it makes analysis at the individual-cell level more challenging. In this paper, we demonstrate image-based phenotypic screening of the nuclei of patient-derived cancer cells. We first stitched the images of each well of a 384-well plate acquired under the same conditions. We then used intensity information to detect the colonies. Nuclear intensity and morphological characteristics were used to segment individual nuclei. Next, we calculated the position of each nucleus, which reveals the spatial pattern of the cells in the well environment. Finally, we compared the results obtained from 3D spheroid cultures with those from 2D adherent cultures of cells from the same patient treated with the same drugs. This technique could be applied to image-based phenotypic screening of cells to determine a patient’s response to a drug.
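As a rough illustration of the nucleus segmentation and position steps described above, the sketch below thresholds a nuclear-stain image with Otsu's method and reports one centroid per connected nucleus. The abstract does not publish its pipeline, so the functions and parameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage as ndi

def otsu_threshold(img, bins=256):
    """Intensity cutoff that maximizes between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                             # background pixel count
    w1 = w0[-1] - w0                                 # foreground pixel count
    m = np.cumsum(hist * centers)
    m0 = m / np.where(w0 == 0, 1, w0)                # background mean intensity
    m1 = (m[-1] - m) / np.where(w1 == 0, 1, w1)      # foreground mean intensity
    return centers[np.argmax(w0 * w1 * (m0 - m1) ** 2)]

def nucleus_positions(img):
    """Segment bright nuclei and return one (row, col) centroid per nucleus."""
    mask = img > otsu_threshold(img)
    labels, n = ndi.label(mask)
    return ndi.center_of_mass(mask, labels, range(1, n + 1))
```

In practice the centroid list would then feed the spatial-pattern analysis (e.g., nearest-neighbor statistics across the stitched well image).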
We present a cell image quantification method for image-based drug response prediction from patient-derived glioblastoma cells. Drug response differs between individuals at the cellular level; therefore, quantifying patient-derived cell phenotypes is important for drug response prediction. We used fluorescence microscopy to characterize patient-derived 3D cancer spheroids. A 3D cell culture simulates the in vivo environment more closely than a 2D adherent culture and thus allows more accurate cell analysis; it also permits assessment of cellular aggregates. Cohesion is an important feature of cancer cells. In this paper, we demonstrate image-based quantification of cellular area, fluorescence intensity, and cohesion. To this end, we first performed image stitching to create a single image of each well of the plate under the same conditions. This image shows colonies of various sizes and shapes. To detect the colonies automatically, we used an intensity-based classification algorithm and measured the morphological features of each cancer cell colony. Next, we calculated the spatial correlation of the colony locations, which reflects the cell density within the same well environment. Finally, we compared these features between drug-treated and untreated cells. This technique could potentially be applied to drug screening and to quantifying the effects of drugs.
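A minimal sketch of the stitching step is shown below, assuming the well is imaged as a grid of non-overlapping fields of identical size (row-major acquisition order). Real stitching would additionally register and blend overlapping tile borders; this simplification is an assumption for illustration.

```python
import numpy as np

def stitch_well(tiles, rows, cols):
    """Montage same-sized field images (row-major order) into one well image.
    Assumes the fields tile the well without overlap; a full stitcher would
    also register and blend overlapping borders."""
    h, w = tiles[0].shape
    out = np.zeros((rows * h, cols * w), dtype=tiles[0].dtype)
    for i, tile in enumerate(tiles):
        r, c = divmod(i, cols)
        out[r * h:(r + 1) * h, c * w:(c + 1) * w] = tile
    return out
```

The stitched well image is then the input for colony detection and density measurement.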
Cancer cell morphology is closely related to phenotype and activity. These characteristics are important for drug-response prediction in personalized cancer therapeutics. We used multi-channel fluorescence microscopy images to analyze the morphology of highly cohesive cancer cells. First, we detected individual nucleus regions in single-channel images using an advanced simple linear iterative clustering (SLIC) method. The center points of the nucleus regions were used as seeds for a Voronoi diagram to extract spatial-arrangement features from the cell images. Human cancer cell populations form irregularly shaped aggregates, which makes their detection more difficult; we overcame this problem by identifying individual cells with an image-based shape descriptor. Finally, we analyzed the correlation between cell agglutination and cell shape.
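The Voronoi step can be sketched as follows: using the nucleus centers as seeds, the number of Voronoi neighbors of each cell is one simple spatial-arrangement feature (densely packed cells border many neighbors). The feature choice here is an assumption for illustration, not the paper's exact descriptor.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_neighbor_counts(points):
    """Number of Voronoi neighbors per seed point. Each Voronoi ridge
    separates exactly two neighboring seeds, so counting ridges per seed
    gives a simple local-crowding feature."""
    vor = Voronoi(np.asarray(points, dtype=float))
    counts = np.zeros(len(points), dtype=int)
    for p, q in vor.ridge_points:   # (p, q) = indices of the two seeds a ridge separates
        counts[p] += 1
        counts[q] += 1
    return counts
```

Such per-cell neighbor counts could then be correlated with the shape descriptor to study agglutination versus shape.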
3D microscopy images contain enormous amounts of data, rendering 3D microscopy image processing on a central processing unit (CPU) time-consuming and laborious. A common workaround is to crop a small region of interest (ROI) from the input image. Although this reduces cost and time, it has drawbacks at the image processing level: the selected ROI depends strongly on the user, and information from the original image is lost. To mitigate these problems, we developed a 3D microscopy image processing tool that runs on a graphics processing unit (GPU). Our tool provides various efficient automatic thresholding methods for intensity-based segmentation of 3D microscopy images, and users can select which algorithm to apply. The tool also visualizes the segmented volume data and supports scaling, translation, and similar view manipulations via keyboard and mouse. However, fast visualization alone does not give biologists the information they need; quantitative data must be extracted from the images. We therefore label the segmented 3D objects in each 3D microscopy image and compute quantitative information for each labeled object, which can be used as classification features. A user can select an object to analyze, and our tool displays the selected object in a new window so that it can be inspected in more detail. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
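The labeling-and-quantification step can be illustrated on the CPU with a short sketch: threshold a 3D stack, label connected components, and report each object's voxel count. This is a minimal CPU reference for what the described tool does on the GPU; the threshold and connectivity are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def quantify_3d_objects(volume, threshold):
    """Intensity-threshold a 3-D stack, label connected components, and
    return the labeled volume plus the voxel count of each object."""
    mask = volume > threshold
    labels, n = ndi.label(mask)                        # 6-connected by default
    sizes = ndi.sum(mask, labels, range(1, n + 1))     # voxels per labeled object
    return labels, np.asarray(sizes)
```

Per-object voxel counts (and, with spacing metadata, physical volumes) are exactly the kind of quantitative feature the tool exposes for classification.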
Tumor cell morphology is closely related to its invasiveness characteristics and migratory behaviors. An invasive tumor
cell has a highly irregular shape, whereas a spherical cell is non-metastatic. Thus, quantitative analysis of cell features is
crucial to determine tumor malignancy or to test the efficacy of anticancer treatment. We use phase-contrast microscopy
to analyze single-cell morphology and to monitor its changes because it enables long-term observation of living
cells without the photobleaching and phototoxicity common in fluorescence microscopy. Despite this
advantage, phase-contrast microscopy has image-level drawbacks, such as local light effects and contrast
interference rings.
Thus, we first applied a local filter to compensate for non-uniform illumination. Then, we used intensity distribution
information to detect the cell boundary. In phase-contrast microscopy images, the cell normally appears as a dark region
surrounded by a bright halo. As the halo artifact around the cell body is minimal and has an asymmetric diffusion pattern,
we calculated the cross-sectional plane that intersected the center of each cell and was orthogonal to the first principal
axis. Then, we extracted the dark cell region using a level set method. However, dense populations of cultured cells still
rendered single-cell analysis difficult. Finally, we measured roundness and size to classify tumor cells into malignant and
benign groups. We validated segmentation accuracy by comparing our findings with manually obtained results.
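The abstract does not specify its roundness measure; a common choice is circularity, 4πA/P², which equals 1 for a perfect circle and drops toward 0 for irregular outlines. The sketch below uses that measure, with an illustrative (not validated) cutoff.

```python
import numpy as np

def circularity(area, perimeter):
    """4*pi*A / P^2: exactly 1 for a perfect circle, smaller for irregular shapes."""
    return 4.0 * np.pi * area / perimeter ** 2

def classify_cell(area, perimeter, cutoff=0.75):
    """Label a cell 'malignant' when its outline is irregular (low circularity).
    The 0.75 cutoff is an illustrative assumption, not a validated threshold."""
    return "malignant" if circularity(area, perimeter) < cutoff else "benign"
```

For reference, a square has circularity π/4 ≈ 0.785, so only shapes noticeably more irregular than a square fall below this example cutoff.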
Since the morphology of tumor cells is a good indicator of their invasiveness, we used time-lapse phase-contrast
microscopy to examine the morphology of tumor cells. This technique enables long-term observation of the activity of
live cells without the photobleaching and phototoxicity that are common in fluorescence microscopy.
However, it does have certain drawbacks in terms of imaging. Therefore, we first corrected for non-uniform illumination
artifacts and then used intensity distribution information to detect the cell boundary. In a phase-contrast microscopy image,
a cell normally appears as a dark region surrounded by a bright halo ring. Because the halo artifact is minimal around the cell
body and has a non-symmetric diffusion pattern, we calculated a cross-sectional plane that intersects the center of each cell
and is orthogonal to the first principal axis. Then, we extracted the dark cell region by analyzing its intensity profile curve,
treating local bright peaks as halo areas. Finally, we examined cell morphology to classify tumor cells as malignant or benign.
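One simple way to realize the illumination-correction step is flat-field style background subtraction: estimate the slowly varying illumination with a large mean filter and subtract it. The filter type and window size below are assumptions, not the paper's stated method.

```python
import numpy as np
from scipy import ndimage as ndi

def correct_illumination(img, window=51):
    """Estimate the slowly varying illumination field with a large mean filter
    and subtract it, flattening the image background. The window size should
    be much larger than a cell; 51 px is an illustrative assumption."""
    background = ndi.uniform_filter(img.astype(float), size=window)
    return img - background
```

After this correction, a global intensity criterion for the dark cell body becomes meaningful across the whole field of view.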
Measuring artery thickness can reveal potential blockages in blood flow. Automating parts of the measurement
could improve both accuracy and repeatability, leading to more consistent treatment. We developed a process for
extracting the artery boundaries from an ultrasound image and then consistently measuring the artery thickness. The measurements show excellent agreement between computer-calculated values and expert measurements.
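Once the wall boundaries are segmented, the thickness measurement itself reduces to a per-column distance between the upper and lower boundary, scaled by the pixel spacing. The sketch below assumes a binary wall mask and a known pixel size; both are illustrative assumptions about the paper's setup.

```python
import numpy as np

def wall_thickness_mm(wall_mask, pixel_mm):
    """Per-column thickness of a segmented wall band in an ultrasound image:
    the distance (in mm) between the first and last wall pixel of each
    column that contains any wall pixels."""
    cols = np.flatnonzero(wall_mask.any(axis=0))
    top = wall_mask.argmax(axis=0)[cols]                                   # first wall row
    bottom = wall_mask.shape[0] - 1 - wall_mask[::-1].argmax(axis=0)[cols] # last wall row
    return (bottom - top + 1) * pixel_mm
```

Averaging the per-column values gives a single thickness estimate that can be compared against an expert's caliper measurement.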
In this paper, we introduce a new diagnostic tool for observing the carotid artery based on ultrasound volume data. The main components of the developed tool and the algorithms it applies are explained. One of the main components, a semi-automatic segmentation method, includes an effective speckle-reducing filter and an automatic ROI tracking scheme. Furthermore, we present a reconstruction method that is effective for the Y-shaped carotid artery and a navigation path generation method based on interpolation of the medial points of the ROI. To support objective diagnosis, we provide an automatic method for measuring the artery’s diameter. To demonstrate the usefulness of the developed tool, we constructed a 3D model of the carotid artery of a 34-year-old subject and automatically measured its diameter.
Accurate estimation of ventricular volume and motion is very important for cardiac diagnosis and treatment planning. Physicians typically calculate ventricular volume from a few slice images using simplified equations. Such methods are generally limited by assumptions about ventricular shape, particularly when the ventricle is distorted by ischemia or infarction. Physicians also estimate ventricular motion by sequentially examining slice and phase images, but this does not always yield reproducible results. In this paper, we present an efficient double time-varying deformable model to estimate left ventricular volume and mass more accurately and to analyze endocardial and epicardial wall motions separately. At each time step, the model first finds global rigid motions and then tracks local non-rigid motions based on 3D point sets extracted from the surface of the myocardium and classified into endocardial or epicardial walls. The reconstructed endocardial and epicardial walls are visualized together or separately, and their motions are quantitatively analyzed. Results of application to gated SPECT images are given. Using the presented model, physicians can estimate the volume change over the cardiac cycle more easily and accurately, and evaluate ventricular motions reproducibly and objectively.
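Once the walls are reconstructed, volume and mass follow directly from the voxels they enclose. The sketch below computes cavity volume from a binary voxel mask and voxel spacing, and mass using the commonly assumed myocardial tissue density of about 1.05 g/mL; the function names and this density value are assumptions for illustration.

```python
import numpy as np

def lv_volume_ml(mask, spacing_mm):
    """Ventricular volume from a binary voxel mask and per-axis voxel
    spacing in mm: voxel count * voxel volume, converted mm^3 -> mL."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

def myocardial_mass_g(wall_mask, spacing_mm, density_g_per_ml=1.05):
    """Myocardial mass, assuming the usual ~1.05 g/mL tissue density."""
    return lv_volume_ml(wall_mask, spacing_mm) * density_g_per_ml
```

Evaluating the cavity volume at each phase of the cycle yields the volume-time curve from which ejection fraction is derived.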
Because of the structural complexity of bone, it is difficult to diagnose bone injuries and diseases and to plan their treatment. In this paper, we designed and implemented a telediagnosis system for orthopedic deformity analysis based on 3D medical imaging. To define the interosseous relationships of each bone and to evaluate a deformity noninvasively, the system produces volumetric images by spatially reconstructing the planar images and supports deformity analysis by measuring distances, areas, volumes, and angles among the bones. The reconstructed volumetric images can be freely manipulated, through translation, scaling, rotation, and so on, to simulate surgical operations. Our system integrates three main components: a server, clients, and a communication subsystem. It comprises three main functional modules: an information control manager for event and message processing between client and server, a surgical simulation manager for visualizing and manipulating individual bones, and a medical database manager for patient information. The system also provides a user-friendly graphical user interface and supports simultaneous use by multiple users.
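Of the measurements listed, the angle between bones is the least obvious to compute; a standard approach is the angle between the two bone-axis direction vectors. The sketch below is a generic implementation of that geometric step, with axis vectors assumed to come from landmarks picked in the volumetric image.

```python
import numpy as np

def angle_deg(u, v):
    """Angle in degrees between two direction vectors, e.g. bone axes
    estimated from landmark points in the volumetric image. Clipping
    guards arccos against floating-point values just outside [-1, 1]."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

Distances, areas, and volumes follow similarly from landmark coordinates and voxel counts scaled by the image spacing.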