Acquiring comprehensive data about a recorded scene is an important task for video analysis of ongoing processes. IR images provide additional characteristics that are not visible in the optical range. IR sensors can operate in the near or far range, which makes it possible to see objects in the dark or to estimate their temperature. In the latter case, object boundaries are blurred and difficult to correlate with the familiar optical range, so data obtained by a pair of cameras are combined. In this paper, we propose an algorithm for stitching IR images guided by data obtained in the optical range. The approach includes parallel analysis stages: saliency-map computation; boundary and keypoint detection; bit-depth reduction with border preservation; image matching; and data filtering with restoration of sharp object boundaries. As examples, we present results of combining data obtained under poor lighting conditions and results of combining television images.
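Below is a minimal sketch of the registration idea only, not the authors' pipeline: keypoints are detected on edge maps of the optical and IR frames, matched, and used to warp the IR data into the optical coordinate system. The function name, detector choice, and thresholds are illustrative assumptions; OpenCV and NumPy are assumed.

```python
# Illustrative sketch only: align an IR frame to an optical frame via keypoints
# detected on edge maps, then warp the IR data into the optical coordinate system.
import cv2
import numpy as np

def register_ir_to_optical(optical_gray, ir_gray):
    """Both inputs are 8-bit grayscale images of the same scene."""
    # Edge maps give the two modalities comparable structures to match on.
    opt_edges = cv2.Canny(optical_gray, 50, 150)
    ir_edges = cv2.Canny(ir_gray, 50, 150)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_opt, des_opt = orb.detectAndCompute(opt_edges, None)
    kp_ir, des_ir = orb.detectAndCompute(ir_edges, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ir, des_opt), key=lambda m: m.distance)[:200]

    # Homography mapping IR coordinates into the optical frame.
    src = np.float32([kp_ir[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_opt[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = optical_gray.shape
    return cv2.warpPerspective(ir_gray, H, (w, h))
```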
The paper presents a novel visual quality metric for lossy compressed video quality assessment. A high degree of correlation with subjective quality estimates is achieved by using a convolutional neural network trained on a large number of video-sequence/subjective-score pairs. We demonstrate how the predicted no-reference quality metric correlates with opinion scores from a human observer study. Results are reported on the EVVQ dataset together with a comparison against existing approaches.
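As an illustration only (the paper's actual architecture and training data are not reproduced here), a frame-level CNN regressor of this kind can be sketched in PyTorch; a per-video score would then be obtained by pooling per-frame predictions.

```python
# Illustrative sketch: a small CNN that maps a video frame to a scalar quality score.
# Layer sizes are assumptions, not the network described in the paper.
import torch
import torch.nn as nn

class FrameQualityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # predicted MOS-like score

    def forward(self, x):             # x: (N, 3, H, W) frames
        return self.head(self.features(x).flatten(1))

# A per-video score can be taken as the mean of per-frame predictions.
```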
Visualization and analysis of medical data is an active research direction. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, and other fields. Initial data processing is a major step towards obtaining a good diagnostic result. The paper considers an approach to image filtering that preserves object borders. The proposed algorithm is based on sequential data processing. At the first stage, local areas are determined using threshold processing together with the classical ICI (intersection of confidence intervals) rule. The second stage applies a method based on two criteria, namely the L2 norm and the first-order square difference. To preserve object boundaries, the transition boundary and its local neighborhood are processed by a filtering algorithm with fixed coefficients. As examples, reconstructed images from CT, X-ray, and microbiological studies are shown. The test images demonstrate the effectiveness of the proposed algorithm and its applicability to many medical imaging applications.
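The sketch below is a simplified stand-in for the border-preserving behaviour described above, not the paper's algorithm: pixels inside flat regions are averaged over a local window, while original values are kept on detected borders. The function name, threshold, and window size are illustrative; SciPy and NumPy are assumed.

```python
# Simplified stand-in: smooth flat regions with a local window, keep borders intact.
import numpy as np
from scipy.ndimage import uniform_filter, sobel

def border_preserving_filter(img, edge_thresh=40.0, win=3):
    img = img.astype(float)
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    borders = grad > edge_thresh               # crude border mask (threshold stage)
    smoothed = uniform_filter(img, size=win)   # local-window estimate
    return np.where(borders, img, smoothed)    # keep borders, smooth the rest
```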
Analog microscopes are currently in wide use in medicine, animal husbandry, monitoring of technological objects, oceanography, agriculture, and other fields. Automatic methods are preferred because they greatly reduce the manual work involved. Stepper motors move the microscope slide and adjust the focus in semi-automatic or automatic mode, while images of microbiological objects are transferred from the microscope eyepiece to the computer screen. Scene analysis locates regions with pronounced abnormalities to focus the specialist's attention. This paper considers a method for stitching microbiological images obtained from a semi-automatic microscope. The method preserves the boundaries of objects located in the capture area of the optical system. Object search is based on analysis of the data within the camera field of view, and we propose to use a neural network for boundary detection. The stitching seam is determined from the analysis of object borders. For autofocus, we use the criterion of minimum thickness of object boundary lines, applied to the object located on the focal axis of the camera. For objects shifted relative to the focal axis, we apply border recovery and a projective transform. Several examples considered in this paper demonstrate the effectiveness of the proposed approach on test images.
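For illustration only: the paper's autofocus criterion is the minimum thickness of object boundary lines; the sketch below uses the common variance-of-Laplacian sharpness measure as a stand-in to pick the best-focused frame from a focus stack. OpenCV and NumPy are assumed.

```python
# Illustrative stand-in for the autofocus step: choose the sharpest frame in a stack.
import cv2
import numpy as np

def best_focus_index(frames):
    """frames: list of grayscale images taken at different focus positions."""
    sharpness = [cv2.Laplacian(f, cv2.CV_64F).var() for f in frames]
    return int(np.argmax(sharpness))
```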
Decision-making systems are currently widespread. These systems are based on the analysis of video sequences as well as additional data such as volume, change in size, the behavior of a single object or a group of objects, temperature gradient, the presence of local areas with strong differences, and others. Security and control systems are the main areas of application. Noise in the images strongly influences subsequent processing and decision making. This paper considers the problem of primary signal processing for image denoising and deblurring of multispectral data. The additional information from multispectral channels can improve the efficiency of object classification. We use a method of combining information about objects obtained by cameras in different frequency bands. To denoise the data and restore blurred edges, we apply a method based on simultaneous minimization of the L2 norm and the first-order square difference of the sequence of estimates. In case of information loss, we apply an approach based on interpolation of data taken from the analysis of objects located in other areas and from information obtained by the multispectral camera. The effectiveness of the proposed approach is shown on a set of test images.
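A minimal sketch of the channel-combination idea, under the assumption (not stated in the abstract) that the channels are already co-registered: each spectral band is weighted by its local gradient energy so that bands carrying more object detail contribute more to the fused estimate. SciPy and NumPy are assumed; the weighting scheme is illustrative.

```python
# Illustrative fusion of co-registered multispectral channels by local detail energy.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def fuse_channels(channels, sigma=2.0):
    """channels: list of co-registered 2-D arrays from different spectral bands."""
    stack, weights = [], []
    for ch in channels:
        ch = ch.astype(float)
        energy = sobel(ch, axis=0) ** 2 + sobel(ch, axis=1) ** 2
        weights.append(gaussian_filter(energy, sigma) + 1e-6)  # smoothed detail map
        stack.append(ch)
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)              # normalize per pixel
    return (weights * np.stack(stack)).sum(axis=0)
```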
Proc. SPIE 10221, Mobile Multimedia/Image Processing, Security, and Applications, 2017
KEYWORDS: Image processing algorithms and systems, Signal to noise ratio, Computer aided diagnosis and therapy, Data modeling, Tissues, Magnetic resonance imaging, Image segmentation, Bone, Medical imaging, 3D magnetic resonance imaging
Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) images can be a very useful computer-aided diagnosis (CAD) tool in clinical routine. Accurate automatic extraction of a 3D component from MRI images is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D slice and the complex surrounding anatomical structures. Our objective is to develop a segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm based on a modified active contour method to extract parts of bones from MRI data sets. The proposed method demonstrates good accuracy in comparison with existing segmentation approaches on real MRI data.
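As a baseline illustration of the contour-evolution idea only (not the proposed modified active contour), a single MRI slice can be segmented with the morphological Chan-Vese active contour from scikit-image; parameters are illustrative.

```python
# Baseline illustration: morphological Chan-Vese segmentation of one 2-D MRI slice.
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_slice(slice_img, iterations=200):
    """slice_img: 2-D float array, one MRI slice normalized to [0, 1]."""
    return morphological_chan_vese(slice_img, iterations,
                                   init_level_set='checkerboard', smoothing=2)

# A 3-D mask can then be assembled slice by slice:
# mask = np.stack([segment_slice(s) for s in volume])
```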
In this paper we propose an algorithm for stitching medical images into a single image. The algorithm is designed for stitching medical X-ray images, microscopic images of biological particles, medical microscopic images, and others. Such images can improve diagnostic accuracy and quality in minimally invasive studies (e.g., laparoscopy, ophthalmology, and others). The proposed algorithm is based on the following steps: searching for and selecting areas with overlapping boundaries; keypoint and feature detection; preliminary stitching and transformation to reduce visible distortion; searching for a single unified border in the overlap area; brightness, contrast, and white-balance conversion; and superimposition into one image. Experimental results demonstrate the effectiveness of the proposed method in the task of image stitching.
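For illustration of the later pipeline stages only (brightness conversion and superimposition), the helpers below sketch how two already-aligned overlapping images could be brightness-matched and feather-blended; the function names and the simple gain model are assumptions, not the authors' code.

```python
# Illustrative helpers: brightness matching and feather blending of aligned images.
import numpy as np

def match_brightness(base, moving, overlap_mask):
    """Scale `moving` so its mean brightness in the overlap matches `base`."""
    gain = base[overlap_mask].mean() / (moving[overlap_mask].mean() + 1e-6)
    return np.clip(moving * gain, 0, 255)

def feather_blend(base, moving, alpha):
    """alpha: per-pixel weight in [0, 1] that ramps across the overlap region."""
    return alpha * base + (1.0 - alpha) * moving
```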
Content-based image retrieval systems have many applications in the modern world. The most important one is image search by query image or by semantic description. Approaches to this problem are employed in personal photo-collection management systems, web-scale image search engines, medical systems, etc. Automatic analysis of large unlabeled image datasets is virtually impossible without a satisfactory image-retrieval technique. This is the main reason why this kind of automatic image processing has attracted so much attention in recent years. Despite substantial progress in the field, semantically meaningful image retrieval remains a challenging task. The main issue is the demand to provide reliable results in a short amount of time. This paper addresses the problem with a novel technique for simultaneous learning of global image features and binary hash codes. Our approach maps a pixel-based image representation to a hash-value space while trying to preserve as much of the semantic image content as possible. We use deep learning to generate image descriptions with the properties of similarity preservation and statistical independence. The main advantage of our approach over existing ones is the ability to fine-tune the retrieval procedure for a very specific application, which allows us to provide better results than general techniques. The presented framework for data-dependent image hashing is based on the use of two kinds of neural networks: a convolutional neural network for image description and an autoencoder for mapping features to hash space. Experimental results confirm that our approach shows promising results in comparison to other state-of-the-art methods.
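A minimal sketch of the feature-to-hash stage under stated assumptions (layer sizes and code length are illustrative; the CNN that produces the input descriptors is omitted): an autoencoder whose bottleneck activations are thresholded to obtain binary hash codes.

```python
# Illustrative feature autoencoder: the tanh bottleneck is thresholded to binary codes.
import torch
import torch.nn as nn

class HashAutoencoder(nn.Module):
    def __init__(self, feat_dim=2048, bits=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(),
                                     nn.Linear(512, bits), nn.Tanh())
        self.decoder = nn.Sequential(nn.Linear(bits, 512), nn.ReLU(),
                                     nn.Linear(512, feat_dim))

    def forward(self, x):
        code = self.encoder(x)                         # real-valued codes in (-1, 1)
        return self.decoder(code), code                # reconstruction + codes

    def hash(self, x):
        return (self.encoder(x) > 0).to(torch.uint8)   # binary hash for retrieval
```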
A new image denoising method is proposed in this paper. We consider an optimization problem whose objective function is a linear combination of two criteria, namely the L2 norm and the first-order square difference. The method is parametric, so by choosing the parameters we can adapt the criteria of the objective function. The denoising algorithm consists of the following steps: 1) multiple denoising estimates are found on local areas of the image; 2) image edges are determined; 3) the parameters of the method are fixed and denoised estimates of the local area are found; 4) the local window is moved to the next position (local windows overlap) in order to produce the final estimate. A proper choice of the parameters of the introduced method is discussed. A comparative analysis of the new denoising method with existing ones is performed on a set of test images.
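A minimal sketch of the objective only, with illustrative parameters: minimize J(u) = ||u - y||^2 + lam * (sum of squared first-order differences of u) over a local patch y by plain gradient descent. The closed-form and windowing details of the paper are not reproduced.

```python
# Illustrative minimization of an L2 data term plus a first-order square-difference term.
import numpy as np

def denoise_patch(y, lam=2.0, steps=300):
    """y: noisy 2-D patch (float). Returns the smoothed estimate."""
    y = y.astype(float)
    u = y.copy()
    lr = 1.0 / (2.0 + 16.0 * lam)       # conservative step size for stable descent
    for _ in range(steps):
        dx = np.diff(u, axis=0)         # vertical first-order differences
        dy = np.diff(u, axis=1)         # horizontal first-order differences
        grad = 2.0 * (u - y)            # gradient of the L2 data term
        grad[:-1, :] -= 2.0 * lam * dx  # gradient of the difference term
        grad[1:, :] += 2.0 * lam * dx
        grad[:, :-1] -= 2.0 * lam * dy
        grad[:, 1:] += 2.0 * lam * dy
        u -= lr * grad
    return u
```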
This work studies a computationally simple method of saliency map calculation. Research in this field has received increasing interest because of the need to run complex techniques on portable devices. A saliency map makes it possible to increase the speed of many subsequent algorithms and reduce their computational complexity. The proposed method of saliency map detection is based on analysis in both the image and frequency domains. Several test images from the Kodak dataset with different levels of detail demonstrate the effectiveness of the proposed approach. We present experiments showing that the proposed method provides better results than the Salience Toolbox framework in terms of accuracy and speed.
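As an illustration of frequency-domain saliency in general (in the spirit of the spectral-residual approach, not necessarily the proposed method), a compact implementation might look like the following; sizes and smoothing parameters are assumptions.

```python
# Illustrative frequency-domain saliency map (spectral-residual style).
import cv2
import numpy as np

def spectral_residual_saliency(gray, size=64):
    small = cv2.resize(gray, (size, size)).astype(float)
    f = np.fft.fft2(small)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - cv2.blur(log_amp, (3, 3))     # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
    return cv2.resize(sal / sal.max(), (gray.shape[1], gray.shape[0]))
```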
In this paper we present a method for the functional analysis of the human heart based on electrocardiography (ECG) signals. The approach uses the apparatus of analytic and differential geometry together with correlation and regression analysis. The ECG contains information on the current condition of the cardiovascular system as well as on pathological changes in the heart. Mathematical processing of heart rate variability yields a large set of mathematical and statistical characteristics. These heart-rate characteristics are used in research problems to study the physiological changes that determine functional changes in an individual. The proposed method is implemented for modern Android- and iOS-based mobile devices.
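For illustration only, two standard heart-rate-variability statistics (SDNN and RMSSD) computed from a sequence of RR intervals are sketched below; these are common examples of such characteristics, not necessarily the exact set used in the paper.

```python
# Illustrative HRV statistics from RR intervals (milliseconds).
import numpy as np

def hrv_features(rr_ms):
    """rr_ms: 1-D array of RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                        # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # short-term variability
    return {"SDNN": sdnn, "RMSSD": rmssd, "mean_HR": 60000.0 / rr.mean()}
```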
Image mosaicing is the act of combining two or more images and is used in many applications in computer vision, image processing, and computer graphics. It aims to combine images such that no obstructive boundaries exist around overlapped regions and to create a mosaic image that exhibits as little distortion as possible relative to the original images. Most existing algorithms are computationally complex and do not always produce good results when the stitched images differ in scale, lighting, viewpoint, and other factors. In this paper we consider an algorithm that increases the processing speed when stitching high-resolution images. We reduce the computational complexity by using edge-image analysis and a saliency map on highly detailed areas. On the detected areas, the rotation angles, scaling factors, color-correction coefficients, and transformation matrix are determined. We define keypoints using the SURF detector and reject false correspondences based on correlation analysis. The proposed algorithm makes it possible to combine images taken from free viewpoints with different color balances, shutter times, and scales. We perform a comparative study and show that, statistically, the new algorithm delivers good-quality results compared to existing approaches.
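A minimal sketch of the keypoint stage only (SURF requires opencv-contrib-python with non-free modules enabled; thresholds, patch size, and the simple patch-correlation check are illustrative assumptions, not the authors' procedure): detect SURF keypoints, match descriptors, and drop matches whose surrounding patches correlate poorly.

```python
# Illustrative SURF matching with a simple correlation-based rejection of false matches.
import cv2
import numpy as np

def surf_matches_with_correlation(img1, img2, patch=11, min_corr=0.6):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

    def patch_at(img, pt):
        x, y, r = int(round(pt[0])), int(round(pt[1])), patch // 2
        if x < r or y < r or x + r >= img.shape[1] or y + r >= img.shape[0]:
            return None                                  # too close to the border
        return img[y - r:y + r + 1, x - r:x + r + 1].astype(float)

    kept = []
    for m in matches:
        p1 = patch_at(img1, kp1[m.queryIdx].pt)
        p2 = patch_at(img2, kp2[m.trainIdx].pt)
        if p1 is None or p2 is None:
            continue
        corr = np.corrcoef(p1.ravel(), p2.ravel())[0, 1]  # patch correlation check
        if np.isfinite(corr) and corr >= min_corr:
            kept.append(m)
    return kp1, kp2, kept
```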