The Superconducting Super Collider (SSC), an $8.25 billion Department of Energy-funded high energy physics laboratory currently under construction in Ellis County south of Dallas, Texas, will be the world's largest proton accelerator, with a main ring 54 miles in circumference and a beam energy of 20 TeV. When completed in 1999, the laboratory will house several thousand support staff and researchers, all interacting with one another and with the collider through a networked distributed computing environment. It is estimated that during operation the collider's detectors alone will generate over 5 terabytes of data per day, in addition to the many terabytes required for the laboratory's ongoing operations. Currently, the network is a mix of copper and multimode fiber optic technology using async, ISDN, T1, Ethernet, and FDDI to support administrative computing, engineering and design, simulation, and video conferencing over an extensive local and wide area network. As completion of the collider approaches and high energy physics experiments begin, the network's role will become ever more crucial, with bandwidth demands at an all-time high. To meet these demands, the existing network will migrate to a predominantly single-mode fiber optic system using higher-speed technologies such as T3, FDDI follow-on, SMDS, SONET, and serial HIPPI to support the additional needs of data acquisition, control systems, and environmental control and safety systems. Throughout the design and implementation of the network, several themes persist: "the network is the system," bandwidth requirements are increasing, solutions must be standards based, and fiber optics will prevail.
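The 5-terabyte-per-day detector figure can be put in perspective with a back-of-envelope calculation. The sketch below is illustrative only: it assumes decimal terabytes (10^12 bytes) and a uniform rate over 24 hours, and the link speeds are standard nominal figures for the technologies named above, not SSC measurements.

```python
# Back-of-envelope: sustained bandwidth implied by 5 TB/day of detector data.
# Assumes decimal terabytes (10^12 bytes) and a uniform 24-hour rate.

TB = 10**12
detector_bytes_per_day = 5 * TB
seconds_per_day = 24 * 60 * 60

avg_bps = detector_bytes_per_day * 8 / seconds_per_day  # bits per second
print(f"Sustained detector rate: {avg_bps / 1e6:.0f} Mbit/s")

# Nominal rates (Mbit/s) of link technologies in the existing network mix.
links_mbps = {
    "T1": 1.544,
    "Ethernet": 10,
    "T3": 44.736,
    "FDDI": 100,
}
for name, rate in links_mbps.items():
    needed = avg_bps / (rate * 1e6)
    print(f"{name:8s} {rate:7.3f} Mbit/s -> "
          f"{needed:.0f} such links for detector data alone")
```

Averaged over a full day, the detectors alone imply roughly 463 Mbit/s of sustained traffic, several times the capacity of any single link in the existing mix, which illustrates why the migration targets technologies such as SONET and serial HIPPI.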