Abstract: In recent years, Big Data has become an increasingly prominent issue as both the volume of data and the velocity at which it is produced grow exponentially. By 2020 the amount of data being stored is estimated to reach 44 zettabytes, and currently over 31 terabytes of data are generated every second. Algorithms and applications must be able to scale effectively to the volume of data being generated. One application designed to work effectively and efficiently with Big Data is IBM’s Skylark, part of DARPA’s XDATA program, an open-source catalog of tools for dealing with Big Data. Skylark, or Sketching-based Matrix Computations for Machine Learning, is a library of functions designed to reduce the complexity of large-scale matrix problems that also implements kernel-based machine learning tasks. Sketching reduces the dimensionality of matrices through randomization, compressing them while preserving key properties and thereby speeding up computations. Matrix sketches can be used to find accurate solutions to computations in less time, or to summarize data by identifying important rows and columns. In this paper, we investigate the effectiveness of sketched matrix computations using IBM’s Skylark versus non-sketched computations. We judge effectiveness on two factors: computational complexity and validity of outputs. Initial results from testing with smaller matrices are promising, showing that Skylark achieves a considerable reduction ratio while still performing matrix computations accurately.
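The sketch-and-solve idea described above can be illustrated with a minimal NumPy sketch of sketched least squares: a tall matrix is compressed with a random Gaussian projection, and the much smaller sketched problem is solved instead. This is a generic stand-in for illustration only, not Skylark's API; Skylark supports many sketch types beyond the plain Gaussian one used here, and all sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tall least-squares problem: n rows, d unknowns, sketch size k << n.
n, d, k = 5000, 20, 200
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Gaussian sketching matrix: S @ A has only k rows but approximately
# preserves the geometry of A's column space.
S = rng.standard_normal((k, n)) / np.sqrt(k)

# Solve the small sketched problem and the full problem for comparison.
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

# Relative error of the sketched solution; typically well below 1.
print(np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```

The sketched solve works on a 200 x 20 system instead of 5000 x 20, which is where the speedup comes from at scale.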
When several low-resolution images are taken of the same scene, they often contain aliasing and differ by subpixel
shifts, capturing the scene with different focuses. Super-resolution imaging is a technique that can be used to construct
high-resolution imagery from these low-resolution images. By combining images, high-frequency components are
amplified while blurring and artifacting are removed. Super-resolution reconstruction techniques include methods such as the
Non-Uniform Interpolation Approach, which is low resource and allows for real-time applications, or the Frequency
Domain Approach. These methods make use of aliasing in low-resolution images as well as the shifting property of the
Fourier transform. Problems arise with both approaches, however, such as the limited types of blurred images that can be used or the
creation of non-optimal reconstructions. Many methods of super-resolution imaging use the Fourier transform or wavelets, but
the field is still evolving for other wavelet techniques such as the Dual-Tree Discrete Wavelet Transform (DTDWT) or the
Double-Density Discrete Wavelet Transform (DDDWT). In this paper, we propose a super-resolution method using these
wavelet transformations for use in generating higher resolution imagery. We evaluate the performance and validity of our
algorithm using several metrics, including the Spearman Rank Order Correlation Coefficient (SROCC), Pearson’s Linear
Correlation Coefficient (PLCC), Structural Similarity Index Metric (SSIM), Root Mean Square Error (RMSE), and Peak
Signal-to-Noise Ratio (PSNR). Initial results are promising, indicating that extensions of the wavelet transformations produce
a more robust high-resolution image when compared to traditional methods.
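Two of the evaluation metrics named above, RMSE and PSNR, can be sketched directly in NumPy for 8-bit grayscale images. The tiny synthetic images below are illustrative placeholders, not data from the study.

```python
import numpy as np

def rmse(ref, img):
    """Root Mean Square Error between a reference and a test image."""
    return np.sqrt(np.mean((ref.astype(float) - img.astype(float)) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB, assuming 8-bit pixel range."""
    e = rmse(ref, img)
    return np.inf if e == 0 else 20 * np.log10(peak / e)

# Toy example: an 8x8 flat image with a single pixel perturbed by 10.
ref = np.full((8, 8), 128, dtype=np.uint8)
img = ref.copy()
img[0, 0] = 138

print(round(rmse(ref, img), 3))  # → 1.25
print(round(psnr(ref, img), 2))  # → 46.19
```

SROCC, PLCC, and SSIM require either paired quality scores or local window statistics and are omitted here for brevity.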
Algorithm selection is paramount in determining how to implement a process. When the results can be computed
directly, an algorithm that reduces computational complexity is selected. When the results are less binary, choosing the
proper implementation can be difficult, and the effect of different pieces of the algorithm on the final result can be
hard to quantify. In this research, we propose using a statistical analysis tool known as the General Linear Hypothesis to find
the effect of different pieces of an algorithm implementation on the end result. This will be done with transform-based
image fusion techniques. This study will weigh the effect of different transforms, fusion techniques, and evaluation metrics
on the resulting images. We will find the best no-reference metric for image fusion algorithm selection and test this method
on multiple types of image sets. This assessment will provide a valuable tool for algorithm selection to augment current
techniques when results are not binary.
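The General Linear Hypothesis test mentioned above can be sketched as a standard F-statistic for a linear restriction R·b = r on an ordinary least-squares model y = X·b. The implementation and the synthetic regression below are illustrative assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

def glh_f_test(X, y, R, r):
    """F-statistic for the general linear hypothesis R @ b = r in OLS."""
    n, p = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    sigma2 = resid @ resid / (n - p)          # residual variance estimate
    Rb_r = R @ b - r                          # restriction discrepancy
    cov = R @ np.linalg.inv(X.T @ X) @ R.T    # scaled covariance of R @ b
    q = R.shape[0]                            # number of restrictions
    return (Rb_r @ np.linalg.solve(cov, Rb_r) / q) / sigma2

# Synthetic model with two equal slope effects (both 2.0).
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.standard_normal((100, 2))])
y = X @ np.array([1.0, 2.0, 2.0]) + 0.1 * rng.standard_normal(100)

# H0: the two slope coefficients have equal effect (b1 - b2 = 0).
R = np.array([[0.0, 1.0, -1.0]])
F = glh_f_test(X, y, R, np.zeros(1))
print(F)  # small F -> no evidence against equal effects
```

In the study's setting, the model terms would encode transform, fusion rule, and metric choices, and the restriction matrix R would isolate the contribution being weighed.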
Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way image fusion can be particularly useful is in fusing imagery data from multiple levels of focus. Different focus levels create different visual qualities in different regions of the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires the evaluation of the fused images to determine whether they have properly fused the focused regions of each image. Many no-reference metrics, such as information theory based, image feature based, and structural similarity based metrics, have been developed to accomplish such comparisons. However, accurate assessment of visual quality is hard to scale, which requires the validation of these metrics for different types of applications. To this end, human perception based validation methods have been developed, particularly those using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis and spatial frequency (SF) metrics are consistent with image quality and peak signal-to-noise ratio (PSNR).
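The validation idea above can be sketched with a rank-based AUC computation: given a no-reference metric's scores for fused images that human observers labeled good (1) or poor (0), the AUC of the ROC curve measures how well the metric ranks good fusions above poor ones. The labels and scores below are made-up placeholders for demonstration.

```python
import numpy as np

def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic (assumes no tied scores)."""
    labels = np.asarray(labels, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)   # 1-based ranks
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # Normalize the U statistic to [0, 1]: 1.0 means perfect ranking.
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]   # one good/poor inversion
print(round(auc(labels, scores), 3))       # → 0.889
```

An AUC near 1.0 indicates the metric's scores agree with human judgments; near 0.5 indicates the metric is no better than chance for that application.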
Automated image fusion has a wide range of applications across a multitude of fields, such as biomedical diagnostics, night vision, and target recognition. Automation in the field of image fusion is difficult because there are many types of imagery data that can be fused using different multi-resolution transforms. The different image fusion transforms provide coefficients for image fusion, creating a large number of possibilities. This paper seeks to understand how automation could be conceived for selecting the multi-resolution transform for different applications, starting in the multi-focus and multi-modal image sub-domains. The study analyzes the effectiveness of each transform within each sub-domain and identifies one or two transforms that are most effective for image fusion. The transform techniques are compared comprehensively to find a correlation between the fusion input characteristics and the optimal transform. The assessment is completed through the use of no-reference image fusion metrics, including information theory based, image feature based, and structural similarity based methods.
Multi-focus image fusion is becoming increasingly prevalent, as there is a strong initiative to maximize visual information in a single image by fusing the salient data from multiple images for visualization. This allows an analyst to make decisions based on a larger amount of information in a more efficient manner because multiple images need not be cross-referenced. The contourlet transform has proven to be an effective multi-resolution transform for both denoising and image fusion through its ability to pick up directional and anisotropic properties while being designed to decompose the discrete two-dimensional domain. Many studies have been done to develop and validate algorithms for wavelet image fusion, but the contourlet has not been as thoroughly studied. When contourlet coefficients are substituted for wavelet coefficients in image fusion algorithms, the result is contourlet image fusion. There are a multitude of methods for fusing these coefficients together, and the results demonstrate that there is an opportunity for fusing coefficients together in the contourlet domain for multi-focus images. This paper compares the algorithms using a variety of no-reference image fusion metrics, including information theory based, image feature based, and structural similarity based assessments, to select the image fusion method.
There is a strong initiative to maximize visual information in a single image for viewing by fusing the salient data from multiple images. Many multi-focus imaging systems exist that would be able to provide better image data if these images are fused together. A fused image would allow an analyst to make decisions based on a single image rather than cross-referencing multiple images. The bandelet transform has proven to be an effective multi-resolution transform for both denoising and image fusion through its ability to calculate geometric flow in localized regions and decompose the image based on an orthogonal basis in the direction of the flow. Many studies have been done to develop and validate algorithms for wavelet image fusion, but the bandelet has not been well investigated. This study seeks to investigate the use of bandelet coefficients versus wavelet coefficients in modified versions of image fusion algorithms. There are many different methods for fusing these coefficients together for multi-focus and multi-modal images, such as the simple average, absolute min and max, Principal Component Analysis (PCA), and a weighted average. This paper compares the image fusion methods with a variety of no-reference image fusion metrics, including information theory based, image feature based, and structural similarity based assessments.
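Three of the coefficient fusion rules named above (simple average, absolute max, and weighted average) can be sketched in a few lines of NumPy. In practice these rules would operate on bandelet or wavelet subband coefficients; the small arrays here are placeholders, and the PCA-weighted rule is omitted for brevity.

```python
import numpy as np

def fuse_average(c1, c2):
    """Simple average of two coefficient arrays."""
    return (c1 + c2) / 2.0

def fuse_abs_max(c1, c2):
    """Keep, at each position, the coefficient with larger magnitude,
    preserving the sharper (more salient) detail from either source."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

def fuse_weighted(c1, c2, w=0.7):
    """Weighted average; w would normally be derived from the data."""
    return w * c1 + (1 - w) * c2

# Placeholder "subband coefficients" from two source images.
c1 = np.array([[3.0, -0.5], [0.2, 4.0]])
c2 = np.array([[-1.0, 2.0], [0.1, -5.0]])
print(fuse_abs_max(c1, c2))   # picks 3.0, 2.0, 0.2, -5.0
```

A full fusion pipeline would apply a rule like these to each subband of the forward transform of both inputs and then invert the transform to obtain the fused image.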