Automated image analysis of thin blood smear slides can assist with the early diagnosis of many diseases. Automated detection and segmentation of red blood cells (RBCs) are prerequisites for any subsequent quantitative high-throughput screening analysis, since manual characterization of the cells is a time-consuming and error-prone task. Overlapping cell regions pose considerable challenges to detection and segmentation techniques. We propose a novel algorithm that can successfully detect and segment overlapping cells in microscopic images of stained thin blood smears. The algorithm consists of three steps. In the first step, the input image is binarized to obtain the binary mask of the image. The second step accomplishes reliable cell center localization using adaptive mean-shift clustering; we employ a novel technique to choose an appropriate bandwidth for the mean-shift algorithm. In the third step, each cell is segmented by estimating its boundary with a Gradient Vector Flow (GVF) driven snake algorithm. We compare our experimental results with the state-of-the-art and evaluate the cell segmentation results against those produced manually. The method is systematically tested on a dataset acquired at the Chittagong Medical College Hospital in Bangladesh. The overall evaluation of the proposed cell segmentation method, based on one-to-one cell matching on the aforementioned dataset, resulted in 98% precision, 93% recall, and a 95% F1-score.
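The mean-shift cell-center localization in the second step can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the cell-interior pixel coordinates have already been extracted from the binary mask, and it uses a flat kernel with a fixed, hand-chosen bandwidth rather than the paper's adaptive bandwidth selection.

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=50, merge_tol=0.1):
    """Locate density modes (candidate cell centers) among 2-D points.

    Flat-kernel mean shift: every point is iteratively moved to the mean
    of all points lying within `bandwidth` of it; converged modes closer
    than `merge_tol` are merged into a single center.
    """
    modes = points.astype(float).copy()
    for _ in range(n_iter):
        for i, p in enumerate(modes):
            dist = np.linalg.norm(points - p, axis=1)
            neighbors = points[dist <= bandwidth]
            modes[i] = neighbors.mean(axis=0)
    # Merge modes that converged to (nearly) the same location.
    centers = []
    for m in modes:
        if not any(np.linalg.norm(m - c) < merge_tol for c in centers):
            centers.append(m)
    return np.array(centers)

# Toy demo: six points forming two well-separated blobs,
# standing in for the interior pixels of two cells.
pts = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5],
                [10.0, 10.0], [10.5, 10.0], [10.0, 10.5]])
centers = mean_shift(pts, bandwidth=3.0)  # one center per blob
```

On real smear images the bandwidth choice is critical (too small fragments cells, too large merges overlapping ones), which is why the paper's adaptive bandwidth selection matters.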
Current video tracking systems often employ a rich set of intensity, edge, texture, shape, and object-level features combined with descriptors for appearance modeling. This approach increases tracker robustness but is computationally expensive for real-time applications, and localization accuracy can be adversely affected by including distracting features in the feature fusion or object classification processes. This paper explores offline feature subset selection using a filter-based evaluation approach for video tracking, to reduce the dimensionality of the feature space and to discover relevant, representative lower-dimensional subspaces for online tracking. We compare the performance of the exhaustive FOCUS algorithm to the sequential heuristic SFFS, SFS, and RELIEF feature selection methods. Experiments show that offline feature selection reduces computational complexity, improves feature fusion, and is expected to translate into better online tracking performance. Overall, SFFS and SFS perform very well, close to the optimum determined by FOCUS, but RELIEF does not work as well for feature selection in the context of appearance-based object tracking.
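As an illustration of the kind of search the sequential methods perform, here is a minimal sequential forward selection (SFS) sketch driven by a filter criterion (mean absolute Pearson correlation with the target). The criterion and the toy data are placeholders for illustration only, not the feature-evaluation functions or tracking features used in the paper.

```python
import numpy as np

def corr_score(Xs, y):
    # Filter criterion (assumed here for illustration): mean absolute
    # Pearson correlation of each feature column with the target.
    # No classifier is trained, which is what makes this a filter method.
    return np.mean([abs(np.corrcoef(Xs[:, j], y)[0, 1])
                    for j in range(Xs.shape[1])])

def sfs(X, y, n_select, score):
    """Greedy sequential forward selection.

    Starting from the empty set, repeatedly add the single feature whose
    inclusion maximizes the filter criterion `score`.
    """
    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < n_select:
        best_f, best_s = None, -np.inf
        for f in remaining:
            s = score(X[:, selected + [f]], y)
            if s > best_s:
                best_f, best_s = f, s
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Toy demo: feature 0 tracks the target, feature 1 is weak noise,
# feature 2 is perfectly anti-correlated (still informative).
y = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([y, [1.0, 0.0, 1.0, 0.0], -y])
chosen = sfs(X, y, n_select=2, score=corr_score)  # picks features 0 and 2
```

SFFS extends this greedy loop with conditional backward steps that can discard a previously added feature, which is why it can get closer to the exhaustive FOCUS optimum than plain SFS at modest extra cost.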