Linear classifiers based on computation over the real numbers R (e.g., with the operations of addition and
multiplication), denoted by (R, +, ×), have been studied extensively in the pattern recognition literature. However,
a different approach to pattern classification uses the operations of addition, maximum, and minimum over the
reals, in the algebra (R, +, max, min). These pattern classifiers, based on lattice algebra, have been shown to exhibit superior
information storage capacity, fast training and short convergence times, high pattern classification accuracy, and low
computational cost. Such attributes are not always found, for example, in classical neural nets based on the linear inner
product. In a special type of lattice associative memory (LAM), called a dendritic LAM or DLAM, it is possible to
achieve noise-tolerant pattern classification by varying the design of noise or error acceptance bounds.
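As a minimal sketch of how computation in (R, +, max, min) replaces the linear inner product, the standard lattice auto-associative memory construction can be written as follows. This is an illustration of the general LAM recall scheme, not the specific DLAM design of this paper; the names build_min_memory and maxplus_recall are illustrative.

```python
import numpy as np

def build_min_memory(X):
    """Lattice (min) auto-associative memory for the columns of X.

    Entry w_ij = min over stored patterns p of (X[i, p] - X[j, p]),
    using only subtraction and minimum -- operations of (R, +, max, min).
    """
    n, k = X.shape
    W = np.full((n, n), np.inf)
    for p in range(k):
        col = X[:, p]
        W = np.minimum(W, col[:, None] - col[None, :])
    return W

def maxplus_recall(W, x):
    """Max-plus 'product': (W [+] x)_i = max_j (w_ij + x_j)."""
    return np.max(W + x[None, :], axis=1)

# Store two patterns as the columns of X; stored patterns are
# recalled exactly by the max-plus product with W.
X = np.array([[1.0, 3.0],
              [2.0, 1.0]])
W = build_min_memory(X)
```

Note that training here is a single pass of entrywise minima over the stored patterns, which is one reason lattice memories exhibit the fast training and low computational cost mentioned above.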
This paper presents theory and algorithmic approaches for the computation of noise-tolerant lattice associative
memories (LAMs) under a variety of input constraints. Of particular interest is the classification of nonergodic data in
noise regimes with time-varying statistics. DLAMs, which are a specialization of LAMs derived from concepts of
biological neural networks, have successfully been applied to pattern classification from hyperspectral remote sensing
data, as well as spatial object recognition from digital imagery. The authors' recent research on the development of
DLAMs is surveyed, with experimental results that demonstrate utility for a wide variety of pattern classification
applications. Performance results are presented in terms of measured computational cost, noise tolerance, classification
accuracy, and throughput for a variety of input data and noise levels.