Sequence learning with recurrent networks: analysis of internal representations
Abstract
The recognition and learning of temporal sequences is fundamental to cognitive processing. Several recurrent networks attempt to encode past history through feedback connections from "context units." However, the internal representations formed by these networks are not well understood. In this paper, we use cluster analysis to interpret the hidden unit encodings formed when a network with context units is trained to recognize strings from a finite state machine. If the number of hidden units is small, the network forms fuzzy representations of the underlying machine states. With more hidden units, different representations may evolve for alternative paths to the same state. Thus, appropriate network size is indicated by the complexity of the underlying finite state machine. The analysis of internal representations can be used to model an unknown system based on observation of its output sequences.
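The pipeline the abstract describes can be sketched in miniature: generate strings from a small finite state machine, feed them through an Elman-style network whose hidden activations are copied back as context units, and cluster the resulting hidden-state vectors to see whether they group by machine state. The FSM transition table, network sizes, and single-linkage clustering below are illustrative assumptions for the sketch, not details taken from the paper (which does not specify its training procedure here; this sketch uses an untrained forward pass purely to show the analysis machinery).

```python
import numpy as np

# Hypothetical 3-state finite state machine over symbols {a, b};
# the transition table is an assumption for illustration.
FSM = {0: {'a': 1, 'b': 0}, 1: {'a': 2, 'b': 0}, 2: {'a': 2, 'b': 1}}
SYMS = ['a', 'b']

def fsm_string(rng, length):
    """Generate a random symbol string and the FSM states it visits."""
    s, syms, states = 0, [], []
    for _ in range(length):
        c = SYMS[rng.integers(len(SYMS))]
        syms.append(c)
        s = FSM[s][c]
        states.append(s)
    return syms, states

def one_hot(c):
    v = np.zeros(len(SYMS))
    v[SYMS.index(c)] = 1.0
    return v

class ElmanNet:
    """Elman-style recurrent net: the hidden layer's previous activations
    serve as 'context units' fed in alongside the next input symbol."""
    def __init__(self, n_in, n_hid, rng):
        self.W_in = rng.normal(0.0, 0.5, (n_hid, n_in))
        self.W_ctx = rng.normal(0.0, 0.5, (n_hid, n_hid))
        self.b = np.zeros(n_hid)
        self.h = np.zeros(n_hid)   # context units, initially zero

    def step(self, x):
        self.h = np.tanh(self.W_in @ x + self.W_ctx @ self.h + self.b)
        return self.h

def cluster_states(vectors, k):
    """Naive single-linkage agglomerative clustering into k clusters,
    standing in for the paper's cluster analysis of hidden encodings."""
    clusters = [[i] for i in range(len(vectors))]
    while len(clusters) > k:
        best, pair = np.inf, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(np.linalg.norm(vectors[a] - vectors[b])
                        for a in clusters[i] for b in clusters[j])
                if d < best:
                    best, pair = d, (i, j)
        i, j = pair
        clusters[i] += clusters.pop(j)
    return clusters

rng = np.random.default_rng(0)
net = ElmanNet(n_in=len(SYMS), n_hid=8, rng=rng)
syms, states = fsm_string(rng, 40)
hidden = np.array([net.step(one_hot(c)).copy() for c in syms])
# Ideally one cluster per underlying FSM state:
clusters = cluster_states(hidden, k=3)
```

In a full experiment the network would first be trained to accept or reject strings; the claim under study is that, after training, the clusters recovered this way align with the machine's states when the hidden layer is small, and split into path-dependent sub-clusters when it is large.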
© 1992 Society of Photo-Optical Instrumentation Engineers (SPIE).
Joydeep Ghosh, Vijay Karamcheti, "Sequence learning with recurrent networks: analysis of internal representations", Proc. SPIE 1710, Science of Artificial Neural Networks, (1 July 1992); doi: 10.1117/12.140112; https://doi.org/10.1117/12.140112
Proceedings paper, 12 pages.