Circular targets have important applications in vision-based pose estimation by virtue of their robustness to occlusion and noise and their easy recognition in images. This paper focuses on the pose estimation problem when the projection of the circle center is known, and proposes a pose calculation method based on 1D homography. First, in normalized image coordinates, a homography matrix is determined from three points: the projection of the circle center and the two intersections of a line through the projected center with the projected ellipse. Based on the geometric properties of the homography matrix, a linear method for computing the circular pose from multiple homography matrices is presented. Second, the reprojection errors of the image points are taken as the objective function to refine the linear solution. Finally, the proposed method is compared with two existing pose estimation methods in experiments. The results show that the proposed method is slightly superior to the existing methods in noise robustness and has clear advantages in long-range pose estimation.
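For illustration, the core linear step, estimating a 1D (2×2) homography up to scale from three point correspondences on a line, can be sketched as follows; the function and variable names are illustrative and not taken from the paper:

```python
import numpy as np

def homography_1d(src, dst):
    """Estimate a 2x2 1D homography H (up to scale) such that
    dst_i ~ H @ [src_i, 1] in homogeneous line coordinates,
    from three point correspondences along a line."""
    A = []
    for u, v in zip(src, dst):
        # constraint: v*(h21*u + h22) - (h11*u + h12) = 0
        A.append([-u, -1.0, v * u, v])
    # null vector of the 3x4 system gives H up to scale
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(2, 2)

def apply(H, u):
    """Apply a 1D homography to a scalar coordinate u."""
    p = H @ np.array([u, 1.0])
    return p[0] / p[1]

# synthetic check against a known ground-truth homography
H_true = np.array([[2.0, 1.0], [0.5, 1.0]])
src = [0.0, 1.0, 2.0]
dst = [apply(H_true, u) for u in src]
H_est = homography_1d(src, dst)
```

Since a 1D homography has three degrees of freedom, the three correspondences (the projected center and the two ellipse intersections) determine it exactly in the noise-free case.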
<p>We propose a model to effectively detect salient objects in various videos; the proposed framework [spatiotemporal saliency and coherency (STSC)] consists of two modules that capture spatiotemporal saliency and temporal coherency information in the superpixel domain, respectively. We first extract straightforward gradient contrasts (such as the color gradient and motion gradient) as low-level features to compute high-level spatiotemporal gradient features, and the spatiotemporal saliency is obtained by computing the average weighted geodesic distance among these features. The temporal coherency, measured by the motion entropy, is then used to eliminate false foreground superpixels that result from inaccurate optical flow and confusable appearance. Finally, the two discriminative video saliency indicators are combined to identify the salient regions. Extensive quantitative and qualitative experiments on four public datasets (FBMS, DAVIS, SegtrackV2, and ViSal) demonstrate the superiority of the proposed method over current state-of-the-art methods.</p>
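As an illustration of the temporal coherency cue, the motion entropy of a region can be computed from quantized optical-flow directions; this is a minimal sketch under an assumed bin count, not the paper's exact formulation:

```python
import numpy as np

def motion_entropy(flow, n_bins=8):
    """Shannon entropy of quantized optical-flow directions inside a
    region (e.g. a superpixel).  Low entropy indicates coherent motion;
    high entropy flags inconsistent flow, a cue for false foreground.
    `flow` is an (N, 2) array of (dx, dy) vectors."""
    angles = np.arctan2(flow[:, 1], flow[:, 0])                 # in [-pi, pi]
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    p = np.bincount(bins, minlength=n_bins) / len(bins)
    p = p[p > 0]                                                # drop empty bins
    return float(-(p * np.log2(p)).sum())

coherent = np.tile([1.0, 0.0], (100, 1))                 # all vectors aligned
scattered = np.random.default_rng(0).normal(size=(100, 2))  # random directions
```

A superpixel whose flow vectors all point the same way scores zero entropy, while randomly oriented flow approaches the maximum log2(n_bins).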
In cross-view action recognition, a persistent challenge is that action representations lose their transferability when the feature space changes across views. To address this problem, a cross-view action recognition approach using a bilayer discriminative model is proposed. We first extract key poses to capture the essence of each action sequence and represent each key pose by a bag of visual words (BoVW) in a single view. We then construct a bipartite graph between the heterogeneous poses and apply multipartitioning to cocluster the view-dependent visual words, yielding a cross-view BoVW feature that is more discriminative in the presence of view changes. The novelty is a bilayer classifier consisting of a support vector machine (SVM) at the frame level and a hidden Markov model (HMM) at the sequence level, which makes up for the loss of temporal information when a BoVW represents the whole action sequence. Finally, dynamic time warping (DTW) is used as a pruning algorithm to reduce the number of nodes searched along the Viterbi path. Extensive experiments on two well-known multiview action datasets, IXMAS and N-UCLA, and a detailed comparison with existing view-invariant action recognition techniques indicate that the proposed method performs well in both accuracy and efficiency.
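The DTW pruning step relies on the classic dynamic-time-warping recurrence; the following is a minimal sketch for 1D feature sequences (illustrative only, not the paper's implementation):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1D sequences.
    D[i, j] = cost(i, j) + min of the three predecessor cells; the
    final cell holds the best warped alignment cost.  In the paper's
    setting such a score can cheaply rank candidate sequences before
    the more expensive Viterbi search."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return float(D[n, m])
```

Because DTW tolerates local time warping, a repeated frame (e.g. a held pose) does not inflate the distance, which is why it suits variable-speed action sequences.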
Illumination design, which redistributes the spatial energy distribution of a light source, is a key technique in lighting applications. However, there is still no effective illumination design method for removing chromatic dispersion. We present an achromatic lens design that enhances the efficiency and illumination uniformity of a white light-emitting diode (LED) using a diffractive optical element (DOE). We employ the chromatic aberration value (in degrees) to quantify the degree of chromatic dispersion in illumination systems. Monte Carlo ray-tracing simulations indicate that, compared with a traditional collimator, the chromatic dispersion of the modified achromatic collimator decreases significantly from 0.5 deg to 0.1 deg for an LED chip size of 1.0 mm × 1.0 mm, with a simulated efficiency of 90.73%. Moreover, for different corrected wavelengths, we compared the chromatic aberration values as the pupil percentage changes. The achromatic collimator provides an effective way to realize white LED illumination with low chromatic dispersion, high efficiency, and good uniformity.
The development of transparent materials is closely tied to optoelectronic technology and plays an increasingly important role in many fields. Transparent materials are widely used not only in optical lenses, optical elements, fiber gratings, and optoelectronics, but also in building materials, pharmaceutical vessels, aircraft windshields, and everyday eyewear. To address the problem of measuring the refractive index of transparent optical materials, we propose using the polychromatic confocal method. This article describes the principle of the polychromatic confocal method for measuring the refractive index of glass and sketches the optical system and its optimization. We then establish the measurement model of the refractive index and set up the experimental system, with which the refractive index of the glass is calibrated. To account for errors in the experimental process, the measurement formula is compensated using the experimental data. Taking quartz glass as an example, the measurement accuracy of the refractive index is ±1.8×10<sup>-5</sup>. The method is practical and accurate, and it is especially suitable for non-contact measurement scenarios with modest environmental requirements; the ambient conditions of an ordinary glass production line are fully adequate. The method is also insensitive to the color of the measured object, so both white and colored glasses can be measured.
As most countries face a growing population of seniors, automatic detection of abnormal behaviors has become a promising goal for vision systems operating in supportive home environments. In this paper, we investigate a novel approach for detecting falls, which are frequently observed in the motions of elderly people, using a panoramic camera mounted on the ceiling. We employ and modify a combination of two features representing fall events, optical flow and human shape variation, which allows fall detection to proceed from coarse to fine. In the preprocessing step, we analyze the raw video data to extract the meaningful motion region. We then design an energy function representing the phase and magnitude of the optical flow vectors for coarse detection in the temporal domain, where information entropy is adopted as the abnormality coefficient to estimate the consistency of motion directions. Once the optical flow becomes abnormal, a shape context descriptor is introduced for template matching in the fine detection stage; here we propose a novel shape matching descriptor that improves rotation invariance over the traditional shape context while retaining its tolerance to most shape distortions. Our method is evaluated on a panorama-view fall detection database including fall events and confounding events. We demonstrate effective performance and low computational cost under challenging conditions, encouraging the potential use of a vision-based system to provide safety and security in the homes of the elderly.
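For reference, the traditional shape context on which the proposed descriptor builds is a log-polar histogram of the positions of all contour points relative to a reference point; a minimal sketch follows (angles here are absolute, so this baseline is not yet rotation invariant, and the bin counts are assumptions):

```python
import numpy as np

def shape_context(points, ref_idx, n_r=5, n_theta=12):
    """Log-polar histogram of all other contour points relative to
    points[ref_idx] -- the classic shape-context descriptor.  Radii are
    normalized by their mean for scale invariance; a rotation-invariant
    variant would measure angles relative to the local tangent."""
    pts = np.asarray(points, dtype=float)
    rel = np.delete(pts, ref_idx, axis=0) - pts[ref_idx]
    r = np.hypot(rel[:, 0], rel[:, 1])
    theta = np.mod(np.arctan2(rel[:, 1], rel[:, 0]), 2 * np.pi)
    r = r / r.mean()                                   # scale invariance
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1)
    r_bin = np.clip(np.digitize(r, r_edges) - 1, 0, n_r - 1)
    t_bin = np.minimum((theta / (2 * np.pi) * n_theta).astype(int),
                       n_theta - 1)
    hist = np.zeros((n_r, n_theta))
    np.add.at(hist, (r_bin, t_bin), 1)                 # accumulate counts
    return hist / hist.sum()                           # normalized histogram

h = shape_context([[0, 0], [1, 0], [0, 1], [1, 1]], ref_idx=0)
```

Matching two shapes then reduces to comparing these normalized histograms (e.g. with a chi-squared cost), which tolerates moderate shape distortion.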
In traditional optics, focusing must be done before exposure. A light field camera, however, makes it possible to take a photograph first and focus afterward. The principle of a light field camera draws on several fundamental theories of optical imaging, including pinhole imaging, depth of focus, digital refocusing, and synthetic aperture imaging. It is easier for students to understand these theories by studying the theory and experiments of a light field camera, which also involve relevant optical knowledge in image acquisition and processing. In this paper, we discuss the similarities and differences in optical properties among the pinhole camera, the convex lens, and the light field camera. Our intention is to make these optical theories easier for students to understand in our teaching work.
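Digital refocusing, one of the theories mentioned above, can be demonstrated to students with a simple shift-and-add sketch over sub-aperture views; the array layout and the parameter `alpha` are assumptions for illustration, not a specific camera's processing pipeline:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add digital refocusing sketch.  `lightfield` has shape
    (U, V, H, W): one (H, W) sub-aperture view per aperture position
    (u, v).  Each view is shifted in proportion to its offset from the
    aperture center and the views are averaged; `alpha` selects the
    synthetic focal plane (alpha = 0 leaves the views unshifted)."""
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - (U - 1) / 2)))
            dv = int(round(alpha * (v - (V - 1) / 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# a constant scene refocuses to the same constant at any alpha
lf = np.ones((3, 3, 4, 4))
img = refocus(lf, 1.0)
```

Averaging the unshifted views (`alpha = 0`) corresponds to the capture-time focal plane; scene points at other depths come into focus as `alpha` varies, which is exactly the "shoot first, focus later" property.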
The infrared scene projector (IRSP) is essential and standard equipment for training and testing IR threat detection systems, including missile warning systems and hostile fire indicators. The digital micromirror device (DMD), one of the scene generators used in IRSPs, offers several attractive features, including high spatial resolution, high frame rates, no dead pixels, and excellent uniformity. In this paper, we propose a new structure for a DMD-based IRSP. We use a field lens and a mirror as a separator to achieve a wide field-of-view optical system design. Since the field lens is part of the illumination path as well as the projection path, it brings several challenges to the optical system design. We analyze the design method in detail and describe the test equipment and facilities we developed.
A mid-wave infrared (MWIR) and long-wave infrared (LWIR) two-band scene simulation system is a kind of testing equipment used for infrared two-band imaging seekers. It must not only cover the working wavebands but also satisfy the essential requirement that the infrared radiation characteristics correspond to the real scene. Previous infrared scene simulation systems based on a single digital micromirror device (DMD) do not take the large difference between target and background radiation into account and cannot modulate the two-band beams separately. Consequently, a single-DMD system cannot accurately reproduce the thermal scene model built by the upper computer and is of limited practical use. To solve this problem, we design a dual-DMD, dual-channel, co-aperture, compact infrared two-band scene simulation system. The operating principle of the system is introduced in detail, and the energy transfer process of the hardware-in-the-loop simulation experiment is analyzed. We also derive an equation for the signal-to-noise ratio of the infrared detector in the seeker, which guides the overall system design. The general design scheme of the system is given, including the creation of the infrared scene model, overall control, optical-mechanical structure design, and image registration. By analyzing and comparing past designs, we discuss the arrangement of the optical engine framework in the system. Following the working principle and overall design, we summarize the key techniques of the system.
The f-theta lens is an important unit in selective laser melting (SLM) manufacturing, but a dual-wavelength f-theta lens has not previously been used in SLM. Here, we present the design of an f-theta lens that satisfies SLM requirements with coaxial 532 nm and 1030–1080 nm laser beams. It is composed of three spherical lenses. The focal spots for the 532 nm and 1030–1080 nm lasers are smaller than 35 μm and 70 μm, respectively, meeting the demands of high-precision SLM. Chromatic aberration would separate the two laser focal spots in the scanning plane, so chromatic aberration correction is central to our design. The lateral color of the designed f-theta lens is less than 11 μm within the 150 mm × 150 mm scan area, which meets the application requirements of dual-wavelength selective laser melting.
With the development of photoelectric detection technology, machine vision is increasingly used in industry. This paper introduces an automotive lamp tester calibrator measuring system whose core is a CCD image sampling system. It presents the measuring principle of the optical axial angle and light intensity, and verifies the linear relationship between the calibrator's spot illuminance and the image-plane illuminance, providing an important specification for the CCD imaging system. Image processing in MATLAB yields the spot's geometric center and average gray level. By least-squares fitting of the measured data, we obtain the regression equation relating illuminance to gray level. The errors of the experimental measurement results are analyzed, and the combined standard uncertainty and the error sources of the optical axial angle are given. The average measuring accuracy of the optical axial angle is within 40′′. The whole testing process uses digital means instead of manual judgment, giving higher accuracy and better repeatability than comparable measuring systems.
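The least-squares calibration step can be sketched as follows, using hypothetical illuminance/gray-level data for illustration (the actual calibration data and MATLAB implementation are in the paper; this Python sketch only shows the fitting and inversion):

```python
import numpy as np

# Hypothetical calibration samples: spot illuminance (lux) vs. the mean
# gray level measured on the CCD image plane.
illuminance = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
gray = np.array([21.0, 39.5, 61.0, 79.0, 101.0])

# Least-squares line: gray = k * illuminance + b
k, b = np.polyfit(illuminance, gray, 1)

def illuminance_from_gray(g):
    """Invert the regression to recover illuminance from a gray level."""
    return (g - b) / k
```

Once `k` and `b` are calibrated, the system reads intensity purely from pixel gray levels, which is what replaces the manual photometric judgment mentioned above.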