Positron emission tomography (PET) using fluorodeoxyglucose (18F-FDG) is commonly used in the assessment of breast lesions by computing voxel-wise standardized uptake value (SUV) maps. Simple metrics derived from ensemble properties of SUVs within each identified breast lesion are routinely used for disease diagnosis. The maximum SUV within the lesion (SUVmax) is the most popular of these metrics. However, these simple metrics are known to be error-prone and susceptible to image noise. Finding reliable SUV map-based features that correlate with established molecular phenotypes of breast cancer (viz. estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2) expression) would enable non-invasive disease management. This study investigated 36 SUV features based on first- and second-order statistics, local histograms and texture of segmented lesions to predict ER and PR expression in 51 breast cancer patients. True ER and PR expression was obtained via immunohistochemistry (IHC) of tissue samples from each lesion. A supervised learning framework based on an adaptive boosting-support vector machine (AdaBoost-SVM) was used to select a subset of features to classify breast lesions into distinct phenotypes. Performance of the trained multi-feature classifier was compared against the baseline single-feature SUVmax classifier using receiver operating characteristic (ROC) curves. Results show that texture features encoding local lesion homogeneity, extracted from gray-level co-occurrence matrices, are the strongest discriminators of lesion ER expression. In particular, classifiers including these features increased prediction accuracy from 0.75 (baseline) to 0.82 and the area under the ROC curve from 0.64 (baseline) to 0.75.
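The homogeneity features highlighted above come from gray-level co-occurrence matrices (GLCMs). As a minimal sketch of how such a feature is computed, the snippet below builds a symmetric, normalized GLCM from an integer-quantized SUV map and evaluates the standard homogeneity (inverse difference moment) statistic; the function names and the single-offset choice are illustrative, not the study's implementation.

```python
def glcm(image, levels, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix.

    image: 2D list of integer gray levels in [0, levels);
    (dx, dy): pixel offset defining the co-occurring pair.
    """
    P = [[0.0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                i, j = image[y][x], image[ny][nx]
                P[i][j] += 1  # count the pair in both directions (symmetric)
                P[j][i] += 1
    total = sum(sum(row) for row in P)
    return [[v / total for v in row] for row in P]

def homogeneity(P):
    # Inverse difference moment: close to 1 when co-occurring
    # gray levels are similar (locally homogeneous lesion),
    # lower when neighboring levels differ sharply.
    n = len(P)
    return sum(P[i][j] / (1 + (i - j) ** 2)
               for i in range(n) for j in range(n))
```

A uniform patch yields homogeneity 1.0, while a checkerboard of alternating levels scores lower, which is the sense in which this statistic separates homogeneous from heterogeneous uptake patterns.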
Radiation therapy (RT) plays an essential role in the management of cancers. The precision of treatment delivery in chest and abdominal cancers is often impeded by respiration-induced tumor positional variations, which are accounted for by using larger therapeutic margins around the tumor volume, leading to sub-optimal treatment delivery and risk to healthy tissue. Real-time tracking of tumor motion during RT would help reduce these unnecessary margins and benefit cancer patients by allowing the treatment volume to closely match the positional variation of the tumor volume over time. In this work, we propose a fast approach that transfers pre-estimated target (e.g. tumor) motion, extracted from ultrasound (US) image sequences in a training stage (e.g. before RT), to online data in real time (e.g. acquired during RT). The method is based on extracting feature points of the target object, obtaining a low-dimensional description of the feature motion through slow feature analysis, and finding the most similar image frame in the training data to estimate the current/online object location. The approach is evaluated on two 2D + time and one 3D + time US acquisitions. The locations of six annotated fiducials are used for designing experiments and validating tracking accuracy. The average fiducial distance between the expert's annotation and the location extracted from our indexed training frame is 1.9±0.5mm. Adding a fast template matching procedure within a small search range reduces the distance to 1.4±0.4mm. The tracking time per frame is on the order of milliseconds, which is below the frame acquisition time.
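The two core steps above (slow feature analysis of the feature-point trajectories, then nearest-frame indexing at test time) can be sketched as follows. This is a minimal NumPy illustration under the standard SFA formulation (whiten the features, then keep the directions with the smallest temporal-difference variance); the function names are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def slow_feature_analysis(X, n_components):
    """X: (T, d) time series of feature vectors.

    Returns the slowly varying components of X and the
    projection applied after centering and whitening.
    """
    Xc = X - X.mean(axis=0)
    # Whiten: rotate and scale so the data covariance is the identity.
    cov = Xc.T @ Xc / len(Xc)
    w, V = np.linalg.eigh(cov)
    keep = w > 1e-10
    W = V[:, keep] / np.sqrt(w[keep])
    Z = Xc @ W
    # In whitened space, the slowest directions are those with the
    # smallest variance of the temporal differences.
    dZ = np.diff(Z, axis=0)
    dcov = dZ.T @ dZ / len(dZ)
    _, dV = np.linalg.eigh(dcov)          # eigenvalues ascending
    P = dV[:, :n_components]              # slowest directions first
    return Z @ P, W @ P

def index_nearest_frame(train_feats, query_feat):
    """Index of the training frame closest to the query in feature space."""
    d = np.linalg.norm(train_feats - query_feat, axis=1)
    return int(np.argmin(d))
```

At run time, each online frame would be mapped into the same low-dimensional space (using the training-stage mean and projection) and matched against the stored training features, so the lookup cost stays small enough for millisecond-scale per-frame tracking.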