22 August 2014 Experimental research of methods for clustering and selecting image fragments using spatial invariant equivalent models
Author Affiliations +
Proceedings Volume 9286, Second International Conference on Applications of Optics and Photonics; 928650 (2014) https://doi.org/10.1117/12.2066068
Event: Second International Conference on Applications of Optics and Photonics, 2014, Aveiro, Portugal
Abstract
In this paper, we show that nonlinear spatial equivalency functions based on continuous-logic equivalence (nonequivalence) operations have better discriminatory properties for comparing images. Using equivalent models of multi-port neural networks and associative memory, including matrix-matrix and matrix-tensor models with adaptive weighted correlation (multi-port neural-network auto-associative and hetero-associative memory, MP NN AAM and HAM) and the architectures proposed on their basis, we show how these models and architectures can be modified for space-invariant associative recognition and high-performance parallel clustering of images. We consider possible implementations of 2D image classifiers and of devices that partition image fragments into clusters, together with their architectures. The main base unit of such architectures is a matrix-matrix or matrix-tensor equivalentor, which can be implemented on the basis of two traditional correlators. We show that classifiers based on the equivalency paradigm and on optoelectronic architectures with space-time integration and parallel-serial 2D image processing offer advantages such as increased memory capacity (more than ten times the number of neurons) and high performance in different modes. We present the results of modeling associative recognition and restoration of images of significant dimension (128x128, 610x340) and show that these models can recognize images with a significant percentage (20-30%) of damaged pixels. The experimental results show that such models can be successfully used for auto- and hetero-associative pattern recognition. We present simulation results of using these modifications for clustering, together with learning models and algorithms for cluster analysis of specific images and their division into categories.
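The abstract does not reproduce the exact form of the equivalence operation; a minimal sketch, assuming the common continuous-logic definition eq(a, b) = max(min(a, b), min(1 - a, 1 - b)) on normalized pixel values in [0, 1], shows how an equivalence-based similarity between two images could be computed. The function names `equivalence` and `normalized_equivalence` are hypothetical, not taken from the paper:

```python
import numpy as np

def equivalence(a, b):
    """Element-wise continuous-logic equivalence of values in [0, 1]
    (assumed form: eq(a, b) = (a AND b) OR (NOT a AND NOT b), with
    min as AND and max as OR)."""
    return np.maximum(np.minimum(a, b), np.minimum(1.0 - a, 1.0 - b))

def normalized_equivalence(img1, img2):
    """Scalar similarity in [0, 1]: mean element-wise equivalence of
    two equally sized images normalized to [0, 1]."""
    return float(equivalence(img1, img2).mean())
```

For binary images this measure is 1.0 for identical images and 0.0 for exact inverses, which illustrates the sharper discrimination the paper attributes to equivalence operations compared with plain correlation.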
We show an example of dividing image fragments, letters, and graphics into clusters, with simultaneous formation of output weighted spatial images allocated to each cluster. We also show results of other modeling experiments with images of large dimension, such as clustering the fragments (blocks of 7x7, 3x3, 15x15, and other sizes) of 610x340-element images into 8 clusters. We show that it is the use of nonlinear processing and nonlinear functions that improves the quality of classification and image recognition. We offer criteria for evaluating the quality of pattern clustering with such MP NN AAM. It is shown that the learning time of the proposed multi-port neural-network classifier/categorizer-clustering structures (MP NN C) based on the equivalency paradigm decreases by orders of magnitude due to their multi-port nature and can, in some cases, amount to just a few epochs. Other experimental data are also presented.
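The block-clustering procedure described above can be sketched as follows. This is an assumption-laden stand-in for the paper's MP NN classifier: it splits a normalized grayscale image into square blocks and iteratively assigns each block to the cluster template with the highest mean continuous-logic equivalence (eq(a, b) = max(min(a, b), min(1 - a, 1 - b)) is an assumed form), updating templates as cluster means. The function `cluster_blocks` and its parameters are illustrative, not from the paper:

```python
import numpy as np

def equivalence(a, b):
    # Assumed continuous-logic equivalence on values in [0, 1].
    return np.maximum(np.minimum(a, b), np.minimum(1.0 - a, 1.0 - b))

def cluster_blocks(image, block=7, n_clusters=8, n_iter=20, seed=0):
    """Partition a [0, 1] grayscale image into block x block fragments
    and group them into n_clusters clusters by maximum mean equivalence
    to iteratively refined cluster templates."""
    h, w = image.shape
    h -= h % block  # crop so the image tiles exactly
    w -= w % block
    blocks = (image[:h, :w]
              .reshape(h // block, block, w // block, block)
              .transpose(0, 2, 1, 3)
              .reshape(-1, block * block))
    rng = np.random.default_rng(seed)
    # Initialize templates from randomly chosen blocks.
    templates = blocks[rng.choice(len(blocks), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Similarity of every block to every template: (n_blocks, n_clusters).
        sim = equivalence(blocks[:, None, :], templates[None, :, :]).mean(-1)
        labels = sim.argmax(1)
        for k in range(n_clusters):
            if np.any(labels == k):
                templates[k] = blocks[labels == k].mean(0)
    return labels, templates
```

For the paper's setting, the same routine would be called with `block=7` (or 3, 15, ...) and `n_clusters=8` on a 610x340 image; the returned templates play the role of the weighted spatial images formed for each cluster.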
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Vladimir G. Krasilenko, Alexander A. Lazarev, and Diana V. Nikitovich "Experimental research of methods for clustering and selecting image fragments using spatial invariant equivalent models", Proc. SPIE 9286, Second International Conference on Applications of Optics and Photonics, 928650 (22 August 2014); https://doi.org/10.1117/12.2066068
PROCEEDINGS
14 PAGES

