The efficient exploitation of the coming 1 GB/sec optical interconnects will require a radical redesign of the communication protocols and programming models used for distributed systems. The dominant programming models, such as message passing, are complex enough to require extensive software involvement for copying data, matching received messages with posted buffers, and storing received data. This processing must use relatively slow accesses to non-cacheable memory structures. Consequently, inter-processor latency, relative to the byte transmission time, is effectively increasing, and faster interconnects will exacerbate this problem.

The lowest-latency communication over 1 GB/sec LANs will require communication protocols and programming models that can be implemented entirely in special-purpose hardware. True shared-memory programming models allow this, but they are infeasible for workstations that also function as stand-alone machines, and they are extremely difficult to optimize across many processors.

A proposed programming model for distributed systems, termed the 'shared array architecture' (SAA), combines the simple and deterministic communication of shared memory with the node independence and optimizability of message passing. In the SAA model, each workstation/process in the user's parallel job maintains a 'shared array' of blocks of memory. These blocks map directly to transmission packets and are directly read/write accessible by the other processes in the job. A corresponding array of 'tag' values describes the status of the shared memory locations, and simple hardware mechanisms assure protection of non-shared data. Some mechanism of this kind, which closely ties the user interface to the communication software and hardware, will be necessary to exploit the capabilities of coming high-bandwidth transmission technology.
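The block-and-tag mechanism described above can be sketched in software. The following C fragment is a minimal, purely illustrative model: the names (`saa_write`, `saa_read`, the block and array sizes, and the two tag values) are assumptions, not part of the proposal, and the remote network deposit that real SAA hardware would perform is simulated here by a local copy.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical SAA parameters: each block is sized to match one
   transmission packet. All names here are illustrative. */
#define SAA_BLOCK_SIZE 1024
#define SAA_NBLOCKS    64

/* Tag values describing the status of each shared block. */
typedef enum { TAG_EMPTY, TAG_FULL } saa_tag_t;

typedef struct {
    uint8_t   block[SAA_NBLOCKS][SAA_BLOCK_SIZE]; /* packet-sized blocks */
    saa_tag_t tag[SAA_NBLOCKS];                   /* per-block status    */
} saa_array_t;

/* A writer deposits one packet-sized block and marks it FULL.  In the
   proposed architecture this would be done by the network interface
   hardware on packet arrival; here it is a local memcpy. */
int saa_write(saa_array_t *a, int i, const uint8_t *src, size_t len)
{
    if (i < 0 || i >= SAA_NBLOCKS || len > SAA_BLOCK_SIZE)
        return -1;                        /* protection check */
    memcpy(a->block[i], src, len);
    a->tag[i] = TAG_FULL;                 /* publish: data is ready */
    return 0;
}

/* A reader consumes a block only when its tag says FULL, then resets
   the tag so the slot can be reused. */
int saa_read(saa_array_t *a, int i, uint8_t *dst, size_t len)
{
    if (i < 0 || i >= SAA_NBLOCKS || len > SAA_BLOCK_SIZE)
        return -1;
    if (a->tag[i] != TAG_FULL)
        return -1;                        /* nothing to read yet */
    memcpy(dst, a->block[i], len);
    a->tag[i] = TAG_EMPTY;
    return 0;
}
```

Because a block and its tag are plain memory locations, the matching and buffering work that message-passing software must perform collapses, in this model, to a tag check and a copy, which is what makes an all-hardware implementation plausible.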