Biomedical imaging, when combined with digital image analysis, enables quantitative morphological and physiological characterization of biological structures. Recent fluorescence microscopy techniques can collect hundreds of focal-plane images from deep tissue volumes, enabling the characterization of three-dimensional (3-D) biological structures at subcellular resolution. Automatic analysis methods are required to obtain quantitative, objective, and reproducible measurements of biological quantities. However, these images typically contain artifacts such as poor edge detail, nonuniform brightness, and distortions that vary along different axes, all of which complicate automatic image analysis. Another challenge arises from “multitarget labeling,” in which a single probe labels multiple biological entities in the acquired images. We present a “jelly filling” method for the segmentation of 3-D biological images containing multitarget labeling. Intuitively, our iterative segmentation method fills disjoint tubule regions of an image with a jelly-like fluid, which aids the detection of components that are “floating” within the labeled jelly. Experimental results show that our proposed method is effective in segmenting important biological quantities.
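The filling intuition above can be illustrated with a minimal 2-D sketch: flood-fill each disjoint fillable region of a binary mask and assign it its own label, roughly analogous to pouring jelly into separate tubule lumens. The 2-D setting, 4-connectivity, and function name are illustrative assumptions; the actual method is 3-D and iterative.

```python
from collections import deque
import numpy as np

def fill_jelly(mask):
    """Label each disjoint fillable region (True pixels) of a 2-D binary
    mask by flood filling, a rough analogue of filling tubule lumens with
    a jelly-like fluid. Simplified sketch, not the paper's 3-D algorithm."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # pixel already belongs to a filled region
        current += 1  # start filling a new disjoint region
        queue = deque([start])
        labels[start] = current
        while queue:
            r, c = queue.popleft()
            # Spread the "jelly" to 4-connected fillable neighbors.
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels, current
```

Components whose labels differ from the surrounding jelly region could then be flagged as “floating” objects inside that region.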
Video surveillance systems are of great value for public safety. With an exponential increase in the number of cameras, videos obtained from surveillance systems are often archived for forensic purposes. Many automatic methods have been proposed for video analytics tasks such as anomaly detection and human activity recognition. However, such methods face significant challenges due to object occlusions, shadows, and scene illumination changes. In recent years, crowdsourcing has become an effective tool that uses human intelligence to perform tasks that are challenging for machines. In this paper, we present an intelligent crowdsourcing system for the forensic analysis of surveillance video, including video recorded as part of search-and-rescue missions and large-scale investigation tasks. We describe a method that enhances crowdsourcing by incorporating human detection, re-identification, and tracking. At the core of our system, we use a hierarchical pyramid model to distinguish crowd members based on their ability, experience, and performance record. Our proposed system operates autonomously and produces a final crowdsourcing output consisting of a set of video segments detailing the events of interest as one storyline.
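The hierarchical pyramid model is described only at a high level in the abstract; the sketch below shows one plausible way crowd members could be placed into pyramid levels from their ability and experience. The tier names, thresholds, and `pyramid_tier` function are all hypothetical illustrations, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    accuracy: float   # fraction of past annotations judged correct (ability)
    tasks_done: int   # number of completed tasks (experience)

# Pyramid levels, checked top-down: (level name, min accuracy, min tasks).
# These cutoffs are invented for illustration only.
TIERS = (("expert", 0.90, 50),
         ("reviewer", 0.75, 20),
         ("novice", 0.00, 0))

def pyramid_tier(worker, tiers=TIERS):
    """Assign a worker to the highest pyramid level whose ability and
    experience requirements the worker meets (hypothetical criteria)."""
    for name, min_acc, min_tasks in tiers:
        if worker.accuracy >= min_acc and worker.tasks_done >= min_tasks:
            return name
    return tiers[-1][0]
```

Higher tiers could then be routed the harder tasks or asked to verify annotations produced lower in the pyramid.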
Video surveillance systems are of great value for preventing threats and identifying and investigating criminal activities. Manual analysis of the huge amount of video data collected from many cameras over long periods of time is often impractical, and automatic detection methods can struggle when the video contains many objects with complex motion and occlusions. Crowdsourcing has been proposed as an effective way to harness human intelligence for such tasks. Our system provides a platform for annotating surveillance video in an organized and controlled way, with tools for monitoring a surveillance deployment such as training modules, roles and labels, and task management. The system can be used in a real-time streaming mode to detect potential threats, or as an investigative tool to analyze past events. Annotators review the video content assigned to them and annotate suspicious activity or criminal acts. First responders can then view the collective annotations and receive email alerts about newly reported incidents. They can also track annotators’ training performance, manage their activities, and reward their success. By providing this system, the process of video analysis is made more efficient.
Video sharing platforms and social networks have grown rapidly over the past few years. The rapid increase in the amount of video content introduces many challenges for copyright-violation detection and for video search and retrieval. Generating and matching content-based video signatures, or fingerprints, is an effective method for detecting copies or “near-duplicate” videos. A video signature should remain robust when common signal processing operations alter the features used to generate it. Recent work has focused on generating video signatures in the uncompressed domain. However, decompression is a computationally intensive operation, so for large video databases it becomes advantageous to create robust signatures directly from the compressed domain. The High Efficiency Video Coding (HEVC) standard has recently been ratified as the latest video coding standard, and widespread adoption is anticipated. We propose a method in which a content-based video signature is generated directly from the HEVC-coded bitstream, using motion vectors from the bitstream as features. A robust hashing function based on projection onto random matrices generates the hashing bits, and a sequence of these bits serves as the signature for the video. Our experimental results show that our proposed method generates signatures robust to common signal processing operations such as resolution scaling, brightness scaling, and compression.
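One common way to realize hashing by projection onto random matrices, as the abstract invokes, is sign-based random projection, sketched below. The feature values, bit length, and function name are illustrative assumptions; the exact construction in the paper may differ.

```python
import numpy as np

def signature_bits(features, num_bits, seed=0):
    """Hash a feature vector into bits by projecting it onto the rows of a
    random Gaussian matrix and keeping only the signs. A common robust-hashing
    sketch; not necessarily the paper's exact construction."""
    rng = np.random.default_rng(seed)  # shared seed so all frames use the same matrix
    proj = rng.standard_normal((num_bits, len(features)))
    return (proj @ features >= 0).astype(np.uint8)

# Toy per-frame feature: e.g., aggregated motion-vector components.
frame_feature = np.array([1.2, -0.5, 3.0, 0.1])
bits = signature_bits(frame_feature, num_bits=16)
```

Concatenating such bit vectors over frames yields the video signature, and the Hamming distance between signatures indicates near-duplicate content. Note that because each bit keeps only the sign of a projection, the bits are unchanged by positive scaling of the feature vector, which suggests one source of robustness to magnitude-changing operations.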