The author describes a novel neural-network design for lossless data compression. The proposed network builds an efficient dictionary by storing two symbols in each neuron and interconnecting those neurons that match consecutive input strings. After a phase of experience-based competitive learning, input strings can be matched by winning neurons in the network. Variable-length codes then encode the location of the first neuron, the possible interconnections, and the number of matched neurons to achieve compression. For input strings that fail to match, a literal code is constructed, consisting of an overhead code that identifies the length of the literal code followed by the original codes of the unmatched strings. Extensive experiments show that the proposed network achieves very competitive compression performance in comparison with several typical existing data compression algorithms. This work also opens a new area for the application of neural networks to lossless data compression, where massively parallel processing and powerful learning capability can be exploited to develop high-performance algorithms and new techniques.
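The encode/decode flow summarized above can be sketched as follows. This is a simplified illustration under stated assumptions, not the paper's actual method: the competitive-learning step is replaced by first-seen dictionary insertion, the variable-length and overhead codes are replaced by plain Python tuples, and all function names and the token layout are hypothetical.

```python
# Illustrative sketch only: each dictionary entry (a "neuron") stores two
# symbols; runs of consecutive known pairs are emitted as a match token
# recording the chain of entry indices (standing in for the first-neuron
# location plus interconnections); unknown pairs are emitted as literals.

def compress(data: bytes) -> list:
    pairs = {}            # (byte, byte) -> entry index: two symbols per entry
    tokens = []
    i = 0
    while i + 1 < len(data):
        pair = (data[i], data[i + 1])
        if pair in pairs:
            # Extend the match across consecutive known pairs and record
            # the chain of entry indices (the "interconnections").
            chain = []
            while i + 1 < len(data) and (data[i], data[i + 1]) in pairs:
                chain.append(pairs[(data[i], data[i + 1])])
                i += 2
            tokens.append(("match", chain))
        else:
            pairs[pair] = len(pairs)               # learn a new entry
            tokens.append(("literal", bytes(pair)))
            i += 2
    if i < len(data):                              # trailing odd byte
        tokens.append(("literal", data[i:]))
    return tokens

def decompress(tokens: list) -> bytes:
    table = []            # index -> two-symbol entry, rebuilt from literals
    out = bytearray()
    for kind, payload in tokens:
        if kind == "match":
            for idx in payload:
                out += table[idx]
        else:
            out += payload
            if len(payload) == 2:                  # mirror the encoder's learning
                table.append(payload)
    return bytes(out)
```

A real implementation would replace the tuples with the paper's variable-length bit codes, and the literal token with the overhead code giving the literal's length; the sketch only shows the dictionary-matching structure, for example `decompress(compress(b"abababcabab"))` reproduces the input.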