We investigated an ultrafast laser-based x-ray (ULX) source as an attractive alternative to the microfocal x-ray tube used in micro-CT systems. The laser pulse duration was in the 30 fs - 200 fs range and the repetition rate in the 10 Hz - 1 kHz range. A number of solid targets, including Ge, Mo, Rh, Ag, Sn, Ba, La and Nd, with matching filters were used. We optimized conditions for x-ray generation and measured x-ray spectra, conversion efficiency (from laser light to x-rays), x-ray fluence, effective x-ray focal spot size, spatial resolution, contrast resolution and radiation dose. Good-quality projection images of small animals in single- and dual-energy mode were obtained. The ULX source generates narrow x-ray spectra consisting mainly of characteristic lines that can easily be tailored (by changing the laser target) to the imaging task, e.g. to maximize contrast while minimizing radiation dose. The x-ray fluence can exceed that produced by a conventional microfocal tube with a 10 μm focal spot, allowing faster scans with very high spatial resolution. Changing the laser target, and thus matching the characteristic emission lines to the investigated animal's thickness and composition, can be done quickly and easily. Using narrow emission lines for imaging, instead of broad bremsstrahlung, offers superior dose utilization and limits beam-hardening effects. Employing two narrow emission lines, above and below the absorption edge of a contrast agent, in quick succession allows dual-energy-subtraction micro-CT imaging with a contrast medium; dual-energy subtraction is not practical with a microfocal tube. Compact, robust ultrafast lasers are commercially available, and their characteristics are rapidly improving. We plan to construct a prototype in vivo ultrafast laser-based micro-CT system.
Hard x-ray (8-100 keV) spectral emission from plasma produced by femtosecond laser-solid target interactions, and the Kα x-ray conversion efficiency, have been studied as functions of laser intensity (10<sup>17</sup> W/cm<sup>2</sup> - 10<sup>19</sup> W/cm<sup>2</sup>), pulse duration (70 fs - 400 fs), laser pulse fluence and laser wavelength (800 nm and 400 nm). The Ag Kα x-ray conversion efficiency produced by a laser pulse at 800 nm with an intensity <i>I = 4x10<sup>18</sup> W/cm<sup>2</sup></i> can reach <i>2x10<sup>-5</sup></i>. We discuss the scaling laws of the Kα conversion efficiency as a function of the laser parameters. We found that the Kα x-ray conversion efficiency depends more strongly on laser fluence than on pulse duration or laser intensity. The conversion efficiency has a similar value at <i>I ~ 1x10<sup>18</sup> W/cm<sup>2</sup></i> for a high-contrast laser pulse at 400 nm and for a low-contrast laser pulse at 800 nm, but in the first case it follows a steeper scaling law. Consequently, the use of 400 nm laser pulses could be an effective way to optimize Kα x-ray emission via vacuum heating mechanisms.
Characteristic Kα emission from Mo, Ag and La targets irradiated by 60 fs, 600 mJ, 10 Hz Ti:Sapphire laser pulses at 10<sup>17</sup> W/cm<sup>2</sup> - 10<sup>19</sup> W/cm<sup>2</sup> can potentially be used in x-ray mammography. We have investigated x-ray spectra created by this novel x-ray source in this context. All the obtained spectra exhibited dominant narrow emission lines with only a small fraction of the x-ray emission in bremsstrahlung. Such spectra might be very useful in mammography and might improve contrast and dose utilization compared to a conventional mammographic x-ray tube. The effective focal spot size was of the order of 50 μm, i.e. significantly smaller than in conventional mammography. In contrast to conventional mammography, the effective x-ray focal spot size and the effective dose remained constant across the field of view. The Kα conversion efficiency, from laser light to x-rays, was optimized, and values as high as 2 x 10<sup>-5</sup> have been obtained.
Indexing is an important aspect of video database management. Video indexing involves the analysis of video sequences, which is a computationally intensive process; nevertheless, effective management of digital video requires robust indexing techniques. The purpose of our proposed video segmentation is twofold. First, we develop an algorithm that identifies camera shot boundaries, based on a combination of color histograms and a block-based technique. Next, each temporal segment is represented by a color reference frame, which specifies shot similarities and is used in the constitution of scenes. Experimental results on a variety of videos selected from the corpus of the French Audiovisual National Institute demonstrate the effectiveness of shot detection, content characterization of shots and scene constitution.
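The color-histogram part of such a shot-boundary detector can be illustrated with a minimal sketch. This is not the authors' exact algorithm (their block-based refinement and the concrete distance measure are not specified in the abstract); the bin count and threshold here are illustrative assumptions.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """Normalized, quantized color histogram of an RGB frame (H x W x 3 uint8)."""
    h, _ = np.histogramdd(frame.reshape(-1, 3),
                          bins=(bins, bins, bins), range=((0, 256),) * 3)
    return h.ravel() / h.sum()

def detect_cuts(frames, threshold=0.3):
    """Flag a shot boundary wherever consecutive frame histograms
    differ by more than `threshold` (illustrative value)."""
    cuts = []
    prev = color_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = color_histogram(frame)
        # Half the L1 (bin-wise) distance lies in [0, 1]
        if np.abs(cur - prev).sum() / 2 > threshold:
            cuts.append(i)
        prev = cur
    return cuts
```

A sudden change in color distribution between frames i-1 and i is then reported as a cut at index i.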
Object and camera movements are important clues that can be used for better video and image understanding in content-based image and video indexing. In this paper, we propose a new general technique for object and 3D camera movement detection based on 2D hints extracted from 2D images. Our approach consists of first extracting 2D hints based on object contours and calculating their derivatives of different orders. We then apply our pattern matching method to obtain object movement vectors and, with the help of 3D projection theory, derive a description of the camera movement in 3D space. Our ongoing work shows that 2D and 3D hints combined with movement vectors can lead to a 3D scene description. Some experimental evidence is also provided.
Scene segmentation within a video is an important issue for easy and fast content-based access to on-line video databases. In this paper, we introduce the classification of shots into exterior and interior shots as a new clue for performing automatic scene segmentation within a video. Indeed, whether a shot takes place outside or inside is a crucial piece of information in film grammar. Based on the luminance intensity variation caused by natural and artificial light, our method detects and distinguishes exterior and interior shots. Our technique follows two steps. First, the luminance intensity of every image pixel is calculated by applying a linear image transformation from the CIE 3D color space RGB to the NTSC 3D perceptual chromaticity coordinates YIQ; we then analyze the maximum and minimum luminance intensity values over a mosaic version of the image, leading to a classification into interior and exterior shots. Experiments conducted so far show that our technique achieves a successful classification rate of up to 95 percent. Our ongoing work shows that the method can also distinguish day and night lighting within images. These techniques are being used for automatic scene generation within a video.
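The two steps above can be sketched as follows. The Y (luma) coefficients are the standard NTSC values of the RGB-to-YIQ transform; the mosaic block size and the decision threshold are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def luminance(frame):
    """Y (luma) channel of the linear RGB -> YIQ (NTSC) transform."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def classify_shot_frame(frame, block=8, threshold=200.0):
    """Classify a frame as 'exterior' or 'interior' from the brightest block
    of a mosaic (block-averaged) luminance image.  `block` and `threshold`
    are hypothetical choices for illustration only."""
    y = luminance(frame.astype(float))
    h, w = y.shape
    # Mosaic: average luminance over non-overlapping block x block tiles
    mosaic = y[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return 'exterior' if mosaic.max() > threshold else 'interior'
```

The intuition is that natural daylight tends to produce much brighter luminance maxima than artificial indoor lighting.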
Currently, most shot detection methods proposed in the literature are based on well-chosen static thresholds, on which the quality of the result largely depends. In this paper, we present a method for dynamic threshold selection based on clustering a set of N points on a comparison curve, which we use for comparing characteristic features across images in a video sequence to detect shots. In this method, we recursively choose N successive values from the curve. Then, using a clustering method, we partition this set into two parts: larger values in E1 and smaller values in E2. We model the form of the curve as bimodal and seek a threshold in the valley area between the two modes. Using the above clustering analysis, we apply the color histogram (CH) and double Hough transformation (DHT) methods reported in our previous work to 90 minutes of video sequence. The experimental results show that dynamic-threshold-based methods improve on static-threshold-based ones, reducing false and missed detections, and that dynamic-threshold-based DHT is more robust than dynamic-threshold-based CH.
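The E1/E2 partition described above can be sketched with a simple two-means iteration; the abstract does not name the exact clustering algorithm, so this is one plausible realization, with the midpoint of the two cluster means standing in for the "valley" threshold.

```python
def dynamic_threshold(values, iters=10):
    """Partition a window of N curve values into large (E1) and small (E2)
    clusters by iterated two-means, returning the midpoint of the cluster
    means as the threshold in the valley between the two modes.
    (Sketch only: the paper's exact clustering method is unspecified.)"""
    t = (min(values) + max(values)) / 2.0
    for _ in range(iters):
        e1 = [v for v in values if v > t]   # larger values
        e2 = [v for v in values if v <= t]  # smaller values
        if not e1 or not e2:
            break
        t = (sum(e1) / len(e1) + sum(e2) / len(e2)) / 2.0
    return t
```

Sliding this over successive windows of N inter-frame difference values yields a threshold that adapts to local curve statistics instead of a single static value.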
Currently, most content-based image retrieval methods are based on global features such as histograms. Few methods have considered spatial information for indexing and querying. In this paper, we present an efficient multi-dimensional spatial indexing method based on Peano key ordering of the spatial locality of regions. The Peano order gives a direct mapping between an integer and its corresponding element in the multi-dimensional space. The position in the ordering of each region in an image can be determined simply by interleaving the bits of the x and y coordinates of the region. In our method, global features of the query image, such as color histograms, are first used to eliminate non-similar images in the database. The query is then decomposed into a quadtree in order to extract characteristics, for instance predominant colors, associated with each square. This spatial information is identified by a list of Peano keys, which constitutes a spatial signature of the query image. This spatial signature is then searched for in the candidate images. For a given candidate image, each Peano key of the signature precisely indicates the spatial region whose characteristics are compared to those associated with the Peano key. The main advantages of our method are twofold: first, its generality, since it allows spatial information to be associated with any kind of image characteristic; second, its efficiency, because there is no need to pre-extract characteristics from the images in the database.
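The bit-interleaving step is concrete enough to sketch directly. The following computes the key for a region at coordinates (x, y) by interleaving their bits, as described above (this ordering is also commonly known as the Z-order or Morton code):

```python
def peano_key(x, y, bits=16):
    """Interleave the bits of x and y into one integer key:
    bit i of x goes to bit 2i, bit i of y goes to bit 2i+1."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key
```

Because nearby (x, y) cells share high-order bits, regions that are close in the image tend to receive numerically close keys, which is what makes the key list usable as a spatial signature.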
In this paper, we present a new method for video sequence segmentation which can be used in video indexing applications. Our approach uses the image content as indices for segmentation. As in most video sequences, the images contain 3D hints. To detect these indices efficiently, we develop a two-step Hough transformation (HT). The first HT finds all lines contained in the video image. The second, based on the theory of projective geometry, gives the possible focus of expansion (FOE) point. Once we have all possible FOE positions, a simple comparison of these positions reveals differences between video sequences. This method is robust not only for images of well-structured objects in the scene, such as buildings, roads and other man-made entities, but also for scenes containing flower fields or other aligned natural objects. The results of the approach are shown in the final part of the paper.
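The second step, recovering an FOE from detected lines, amounts to finding the point where the lines (nearly) intersect. A minimal sketch, assuming lines are given in the Hough normal form x·cos θ + y·sin θ = ρ produced by the first step, estimates that point by least squares; the abstract's own second HT formulation is not detailed, so this is an illustrative stand-in:

```python
import numpy as np

def foe_from_lines(lines):
    """Least-squares intersection of lines given as (rho, theta) pairs in
    Hough normal form x*cos(theta) + y*sin(theta) = rho.  For lines
    converging toward a focus of expansion, this estimates the FOE."""
    A = np.array([[np.cos(t), np.sin(t)] for _, t in lines])
    b = np.array([r for r, _ in lines])
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # (x, y)
```

With more than two lines the system is overdetermined, so noisy line estimates are averaged out rather than propagated into the FOE position.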
A video is a multimedia document structured into scenes and shots. Scenes are lists of consecutive shots characterized by common visual and audio features. Shots are sets of consecutive frames separated by cuts, which can easily be recognized by existing techniques. Video segmentation into scenes is a new and open problem. It is needed for scene retrieval, especially in authoring and interactive video applications. We propose a new approach to video segmentation into scenes, which is based on several media and takes the film syntax into account. We characterize a scene by a similarity between the color histograms of the current shot and of one of the most recent previous shots. Similarity between a frame of the current shot and a frame of a previous shot may indicate the presence of alternate shots, which belong to the same scene. Other techniques based on projective geometry, which enable detection of camera movement, are presented in a companion paper. We recognize the speakers of a scene by AR vector model techniques, such as the one proposed by some of the authors in the Orphee system, implemented at Laforia. However, the speaker recognition problem is much more difficult when applied to video CD-I, due to several transition types and various types of noise. We present experimental results based on this approach. Detection of alternate shots is efficient, but speaker recognition needs improvement.