This study examined whether perceptual learning at early levels of visual processing facilitates learning at higher levels of processing. Specifically, it asked whether training the motion pathways by practicing left-right movement discrimination, as found previously, would improve the reading skills of inefficient readers significantly more than another computer game (a word discrimination game) or the reading program offered by the school. This controlled validation study found that practicing left-right movement discrimination for 5-10 minutes twice a week for 15 weeks doubled reading fluency and significantly improved all reading skills by more than one grade level, whereas inefficient readers in the control groups barely improved on these skills. In contrast to previous studies of perceptual learning, these experiments show that perceptual learning of direction discrimination significantly improved reading skills determined at higher levels of cognitive processing, thereby generalizing to a new task. The deficits in reading performance and attentional focus experienced by a person who struggles when reading are suggested to result from an information overload, caused by timing deficits in the direction-selectivity network proposed by Russell De Valois et al. (2000), that resolves following practice on direction discrimination. This study found that practicing direction discrimination rapidly transitions the inefficient 7-year-old reader into an efficient reader.
A computational visual system (CVS) has been developed that segments objects in natural scenes using algorithms and filtering elements similar to those used by people. The filtering elements of the CVS are based on neural networks elucidated by physiological and anatomical studies; its algorithms are based on data from psychophysical studies. The CVS classifies different types of patterns, based on object shape, texture, position in the visual field, and amount of motion parallax in subsequent scenes, without any a priori models. When analyzing 3D scenes, psychophysical and physiological evidence indicates that people construct an object-based perception, one that is event-driven. The object-based representation being modeled focuses on the object formation found in the dorsal cortical pathway, which is used to locate an object in 3D space; therefore, the interaction between the eye-head movement system and the pattern recognition system is modeled. Both global scene attributes, used to reveal objects masked by shadows and to improve object segmentation, and local object attributes, defined by the boundary of contrast differences between an object and its background, are modeled. The importance of using paired odd- and even-symmetric detectors to form the boundary and analyze the texture of an object is emphasized. This information is used to construct a viewer-centered, object-based map of the scene built from multiple object attributes. Algorithms that weight the different object attributes used to discriminate objects instantiate computational networks combining both competitive and cooperative interactions.
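Paired odd- and even-symmetric detectors of the kind described above are commonly modeled as a quadrature pair of Gabor filters, whose squared responses sum to a phase-invariant "local energy" that peaks at object boundaries. The sketch below illustrates that general idea in 1D; the kernel size, frequency, and bandwidth are illustrative assumptions, not the parameters of the CVS itself.

```python
import numpy as np

def gabor_pair(size=33, freq=0.15, sigma=5.0):
    """Return an even (cosine) and odd (sine) Gabor kernel in quadrature."""
    x = np.arange(size) - size // 2
    envelope = np.exp(-x**2 / (2 * sigma**2))
    even = envelope * np.cos(2 * np.pi * freq * x)  # symmetric: responds to bars/lines
    odd = envelope * np.sin(2 * np.pi * freq * x)   # antisymmetric: responds to edges
    even -= even.mean()  # zero-mean so uniform luminance gives no response
    return even, odd

def local_energy(signal, even, odd):
    """Phase-invariant boundary measure: sum of squared quadrature responses."""
    r_even = np.convolve(signal, even, mode="same")
    r_odd = np.convolve(signal, odd, mode="same")
    return r_even**2 + r_odd**2

# A luminance step edge: dark background on the left, brighter object on the right.
signal = np.concatenate([np.zeros(100), np.ones(100)])
even, odd = gabor_pair()
energy = local_energy(signal, even, odd)
boundary = int(np.argmax(energy))  # energy peaks near the object boundary (index ~100)
print(boundary)
```

Because the even and odd kernels are 90 degrees out of phase, their summed squared responses do not depend on whether the underlying feature is an edge or a bar, which is why such pairs are useful both for forming boundaries and for analyzing texture.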
As people age, so do their photoreceptors. If the visual system has been exposed to sufficient UV radiation combined with other precursors for age-related maculopathies (ARM), then a large number of photoreceptors in central vision stop functioning when the person reaches their late sixties and early seventies. The visual system contains channels tuned to different bands of spatial frequencies, each approximately one octave wide. In low-vision observers with ARM, the loss of central vision eliminates the channels sensitive to spatial frequencies above 8 to 10 cyc/deg; therefore, for ARM observers, normal text must be magnified to be readable. I have developed image enhancement filters that compensate for the low-vision observer's losses in contrast sensitivity to intermediate and high spatial frequencies. These filters automatically enhance the text displayed on closed-circuit TVs (CCTVs), rendering it in shades of gray that are more easily perceived than black-and-white text. The filters work by boosting the amplitude of the less visible intermediate spatial frequencies more than that of the lower spatial frequencies. Not only do these image enhancement filters reduce the magnification needed for reading by up to 70%, they also increase reading speed two- to four-fold. A short summary of this research is presented.
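The abstract does not give the filter shapes, but the general mechanism described, amplifying intermediate spatial frequencies more than low ones, can be sketched as a Fourier-domain gain function. The snippet below is a minimal illustration assuming a Gaussian gain bump; the `peak`, `width`, and `gain` values are hypothetical, not the published filter parameters.

```python
import numpy as np

def boost_intermediate_frequencies(image, peak=0.25, width=0.1, gain=3.0):
    """Amplify intermediate spatial frequencies more than low ones.

    peak and width are in cycles/pixel; gain is the extra amplification
    at the peak frequency. All three values are illustrative assumptions.
    """
    fy = np.fft.fftfreq(image.shape[0])
    fx = np.fft.fftfreq(image.shape[1])
    radius = np.sqrt(fy[:, None]**2 + fx[None, :]**2)  # radial spatial frequency
    # Gain of 1 everywhere, plus a Gaussian bump centered on intermediate frequencies.
    transfer = 1.0 + gain * np.exp(-((radius - peak)**2) / (2 * width**2))
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * transfer))

# Toy "text" image: a dark letter stroke on a light gray background.
img = np.full((64, 64), 0.8)
img[20:44, 30:34] = 0.2
out = boost_intermediate_frequencies(img)
# Band-boosting sharp strokes produces overshoot/undershoot at their edges,
# i.e. higher local contrast where the reader needs it.
print(out.std() > img.std())
```

Because the gain never falls below 1, low spatial frequencies (and overall luminance) pass through nearly unchanged while the boosted band increases edge contrast, which is the behavior the abstract attributes to the enhancement filters.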