Many types of multichannel systems, such as time-, wavelength-, space-, and code-division-multiplexed systems, are discussed in this paper. There have been enormous research strides over the past several years, and some of the more important advances are mentioned. Due to the overview nature of this paper, the main body will discuss general concepts and main issues whereas specific systems and trends will only be mentioned briefly near the end. This approach for the written paper will complement the oral presentation which will focus on the exciting and emerging "hot" topics in the general area of Multichannel Optical Communication Systems.
Ubiquitous access to large quantities of information content is one of the main benefits anticipated from the Global Information Infrastructure (GII). One of several major factors in achieving this objective is the cost and performance of the distributed storage systems which will act as the repositories for this information. This article presents an overview of the major components and requirements of storage systems for information services on the GII.
Optical fiber networks may one day offer potential capacities exceeding 10 terabits/sec. This paper describes present gigabit network techniques for distributed computing, as illustrated by the CASA gigabit testbed, and then explores future all-optical network architectures that offer increased capacity, a level of service better matched to a given application, high fault tolerance, and dynamic reconfigurability.
There is rapidly increasing demand for very high performance shared access to distributed data, for multiprocessors, networked workstation clusters, distributed databases, industrial data acquisition and control systems, etc.
The objective is to satisfy this demand at the lowest long-term cost. This paper first considers the general properties that an appropriate system architecture should have. A new architectural model, Local-Area Multiprocessor, is introduced.
These properties are then considered in more detail, and practical design decisions are made, illustrated by the evolution of the ANSI/IEEE standard Scalable Coherent Interface (SCI) as it addressed these issues.
Finally, the current status of the various SCI follow-on and support projects is reported.
New federal standards for the protection of sensitive data now make it possible to ensure the authenticity, integrity and confidentiality of digital products, and non-repudiation of digital telecommunications.
Under review and comment since 1991, the new Federal standards were confirmed this year and provide standard means for the protection of voice and data communications from accidental and wilful abuse. The standards are initially tailored to protect only ‘sensitive-but-unclassified’ (SBU) data in compliance with the Computer Security Act of 1987. These data represent the majority of transactions in electronic commerce, including sensitive procurement information, trade secrets, financial data, product definitions, and company-proprietary information classified as ‘intellectual property.’ Harmonization of the new standards with international requirements is in progress.
In the United States, the confirmation of the basic standards marks the beginning of a long-range program to assure discretionary and mandatory access controls to digital resources. Upwards compatibility into the classified domain with multi-level security is a core requirement of the National Information Infrastructure.
In this report we review the powerful capabilities of standard Public-Key-Cryptology, the availability of commercial and Federal products for data protection, and make recommendations for their cost-effective use to assure reliable telecommunications and process controls.
The introduction of Asynchronous Transfer Mode (ATM) and other high speed networking protocols has increased the demand for higher speed transmission facilities in data communications, telecommunications and video networks. Fiber optics is well suited as an ATM transmission medium due to its inherent bandwidth and throughput capabilities. Although ATM is the current logical method of routing high speed transmissions, it is still subject to the properties of its physical transmission medium. This paper highlights issues associated with the management of fiber optic networks in an ATM environment and how new optical transmission products can improve network utilization and reliability in the areas of disaster recovery, testing, and network management.
The development of a practical technique for monolithically integrating optoelectronic components on a semiconductor chip is necessary in order to unleash the full capacity of the optical fiber based information superhighway. This paper establishes the criteria for photonic integration, reviews the progress achieved by several techniques presently being actively studied (in particular, selective area regrowth, selective area growth, and selective area quantum well intermixing), and critiques their near and long term futures. Since practicality implies low cost which, in turn, implies simplicity in growth and processing, initial photonic integrated circuits (PIC) will likely involve a tradeoff between manufacturability and performance. Thus, elementary PICs fabricated using the simplest technologies will emerge in the marketplace first. In the longer term, as growth and processing are brought under better control, more sophisticated PICs using complex integration technologies will also become available.
The Advanced Telecommunications Program at Lawrence Livermore National Laboratory, in collaboration with Pacific Bell, is developing an experimental high speed, four wavelength, protocol independent optical link for evaluating wide area networking interconnection schemes and the use of fiber amplifiers. Lawrence Livermore National Laboratory, as a super-user, supercomputer, and super-application site, is anticipating the future bandwidth and protocol requirements necessary to connect to other such sites as well as to connect to remote sited control centers and experiments. In this paper we discuss our vision of the future of Wide Area Networking and describe the plans for the wavelength division multiplexed link between Livermore and the University of California at Berkeley.
In this paper, we describe a multimedia system architecture consisting of: (1) an information management subsystem, (2) a storage subsystem, and (3) a network subsystem. Whereas the information management subsystem provides means for identifying the set of multimedia objects that may be pertinent to a client’s query, the storage subsystem ensures that multimedia objects are efficiently stored and retrieved from secondary storage devices. The network subsystem, on the other hand, guarantees timely delivery of the multimedia objects accessed by the storage subsystem to each of the client sites. The main goal of this paper is to identify and discuss the research issues involved in designing each of these three subsystems.
Multimedia systems integrate text, audio, video, graphics, and other media and allow them to be utilized in a combined and interactive manner. Using this exciting and rapidly developing technology, multimedia applications can provide extensive benefits in a variety of arenas, including research, education, medicine, and commerce. While there are many commercial multimedia development packages, the easy and fast creation of a useful, full-featured multimedia document is not yet a straightforward task.
This paper addresses issues in the development of multimedia documents, ranging from user-interface tools that manipulate multimedia documents to multimedia communication technologies such as compression, digital video editing and information retrieval. It outlines the basic steps in the multimedia authoring process and some of the requirements that need to be met by multimedia development environments. It also presents the role of video, an essential component of multimedia systems and the role of programming in digital video editing. A model is described for remote access of distributed video. The paper concludes with a discussion of future research directions and new uses of multimedia documents.
Due to the emergence of various multimedia applications, significant research effort has recently been devoted to video data retrieval in parallel disk arrays. We examine in this paper various issues related to video data retrieval in parallel disk arrays and survey several recent studies. The issue of load balancing is discussed first, and various approaches to replicating hot videos are considered. Then we study the issue of disk scheduling and examine a general scheduling method to minimize the buffer requirement of the video server. In addition, the issue of batching viewers, an effective way to increase the server's throughput, is investigated. Finally, the means to support VCR-like functions in a disk-array-based video server are considered. We expect that the problem of how to cost-effectively handle video data in parallel disk arrays will be of growing importance.
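The batching idea above can be sketched as follows. This is a minimal illustrative model, not code from any of the surveyed systems; the batching-window length and video identifiers are assumptions made for the example.

```python
# Sketch of viewer batching in a video server: requests for the same
# video arriving within one batching window share a single stream,
# which raises the server's effective throughput.

def batch_requests(requests, window):
    """Group (arrival_time, video_id) requests into shared streams.

    A new stream starts at the first request after the previous batch
    window for that video has closed; later requests arriving within
    `window` time units join the existing stream.
    """
    streams = []       # list of (start_time, video_id, viewer_count)
    open_stream = {}   # video_id -> index of its most recent stream
    for t, vid in sorted(requests):
        idx = open_stream.get(vid)
        if idx is not None and t - streams[idx][0] <= window:
            start, v, n = streams[idx]
            streams[idx] = (start, v, n + 1)   # viewer joins the batch
        else:
            open_stream[vid] = len(streams)    # open a new stream
            streams.append((t, vid, 1))
    return streams

# Five requests for two videos; with a 10-second window the server
# needs only three streams instead of five.
reqs = [(0, "A"), (3, "A"), (8, "A"), (12, "B"), (15, "A")]
print(batch_requests(reqs, window=10))
```

The same bookkeeping extends naturally to the hot-video replication discussed above: the per-video stream counts identify which titles are worth replicating across disks.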
A variety of multimedia and video services have been proposed and investigated, including services such as video-on-demand, distance learning, home shopping, and telecommuting. These services tend to rely on high-data-rate communications and most have a corresponding need for a large amount of storage with high data rates and short access times. For some services, it has been predicted that the cost of storage will be significant compared to the cost of switching and transmission in a broadband network. This paper discusses architectures of a variety of multimedia and video services, with an emphasis on the relationship between technological considerations of the storage hierarchy to support these services and service architectures.
Advances in high-speed networking and the convergence of operating systems on UNIX and its variants now allow the resources of the four NSF Supercomputer Centers to be treated by remote users, for many purposes, as a single distributed resource. This article will explore the genesis of the concept and why it is a precursor to the National Information Infrastructure.
Continued evolution of consumer broadband services such as digital video and digital multimedia has placed renewed emphasis on the need for network solutions to the broadband connectivity challenge. Although still important to architectural planners, connection oriented broadband services based on ISDN concepts must now compete with a wider array of broadcast and highly asymmetrical services for bandwidth on the network. For network operators, the business imperative is to identify and execute a network rebuild plan that will meet the capacity and flexibility needs of these services and compete with the inevitable alternate paths into the home. This paper focuses on some of the key issues facing broadband network planners as they search for the best architecture to meet the business and operations goals in their segment of the market. It will be apparent that no single optimum solution exists for all deployment scenarios, emphasizing the need for flexible and modular sources (such as servers) and network interfaces (such as set tops) which preserve the value of content, the ultimate driver in this round of network revolution.
An alternative Cable TV architectural approach to that of the presently planned Fiber to the Feeder (FTF) architecture is described for switched video-on-demand applications. This approach, which builds on digital technologies used for non-switched digital services, also enables an economic, smooth and scalable introduction of broadband multimedia services into evolving Cable TV distribution networks. The approach also addresses the present and future issues of interference ingress, privacy and security, power consumption, upstream protocols, testing and maintenance. Both modernization and new-build configurations using this approach are included.
The role of digital libraries in building a National Information Infrastructure is considered. The need for intelligent system capabilities is articulated, and various intelligent agents that carry out these requirements are described.
The Internet is rapidly becoming a household word, a synonym for the "information superhighway" and a part of the National Information Infrastructure (NII) already in place today. When reviewing the educational use of the Internet, one must consider the target user, potential purpose, strengths and limitations of the technology, current applications, and the potential for improvement. Originating in academia, the technology evolved among the computing elite, and encouraging the ordinary computer user to adopt the system may prove frustrating and premature. Inventors and innovators in computer science developed networking and telecommunications technology for educational use, yet today's educators have become accustomed to computer technology developed for the single user. New Internet users are immersed in technical detail, applications that capitalize on Internet capability are still in the developmental phase, and extensive use of the current Internet may lead to unacceptably slow response times. A "killer" Internet application has yet to emerge. This paper provides a brief overview of the technology and focuses on the areas that create problems. Improvement opportunities lie in developing an easy, uniform interface that enhances work while hiding the underlying technology.
We examine the entire imaging chain, from acquisition to interpretation and storage, and indicate the opportunities for applying state-of-the-art techniques throughout that chain. Implications for quality and cost of care, including the impact on what is considered to be the standard of care, are discussed. Attention is drawn to the complementary roles played by large databases on one hand, and powerful comparative and quantitative methods on the other. The relevance of well-designed user interfaces, and of psychovisual considerations, is indicated, and their interaction with various compression algorithms is discussed. The benefit to diagnosis of knowledge-based approaches is considered.
The NYNEX Media Broadband Service Trials in Boston examined the use of several multiple media applications from healthcare in conjunction with high speed fiber optic networks. As part of these trials, NYNEX developed a network-based software technology that simplifies and coordinates the delivery of complex voice, data, image, and video information. This permits two or more users to interact and collaborate with one another while sharing, displaying, and manipulating various media types. Different medical applications were trialed at four of Boston's major hospitals, ranging from teleradiology (which tested the quality of the diagnostic images and the need to collaborate) to telecardiology (which displayed diagnostic quality digital movies played in synchrony). These trials allowed NYNEX to uniquely witness the needs and opportunities in the healthcare community for broadband communications with the necessary control capabilities and simplified user interface. As a result of the success of the initial trials, NYNEX has created a new business unit, Media Communications Services (MCS), to deliver a service offering based on this capability. New England Medical Center, as one of the initial trial sites, was chosen as a beta trial candidate, and wanted to further its previous work in telecardiology as well as telepsychiatry applications. Initial and subsequent deployments have been completed, and medical use is in progress.
At $1T and growing, the American medical establishment may become one of the largest initial customers for emerging information technologies. This paper reviews basic information access modalities in medicine, including the requirements each modality imposes on the use of information technology and the choice of multimedia technology. Application developers must understand the medical user to offer multimedia applications that address actual needs. We also claim that most medical applications must be compatible with existing paradigms, since the main problem with educating medical users is finding time in their schedules. Finally, as an illustration, the paper discusses some of the issues associated with using the narrowband voice channel found in many medical applications to navigate through and access medical data scripted using a format proposed by the Multimedia and Hypermedia Expert Group (MHEG).
The nineties are witnessing an evolution in the area of Local Area Networks (LANs), along with advances in computing power, disk drive capacity, and disk transfer rate. The increased demand for high-bandwidth connectivity calls for newer approaches to networking and peripheral sharing. There is at present a plethora of LAN standards in development, each vying for at least a piece of the burgeoning datacom and telecom market. The purpose of this paper is to present some of the emerging high-speed networking standards, their intended applications, and their benefits. The standards discussed include Fibre Channel, Asynchronous Transfer Mode (ATM), and Fast Ethernet.
This paper documents the architecture and prototype development of the Tera ATM LAN project at Carnegie Mellon University. The Tera ATM LAN testbed connects hundreds of workstations in the Electrical and Computer Engineering Department via an ATM-based network. The Tera network architecture consists of multiple switched Ethernet clusters interconnected using an ATM switch that is optimized for LAN traffic. A switched Ethernet cluster consists of the MATER network interface, sixteen connected Ethernet networks and a single port of the ATM switch. The ATM switch is based on the CMU Tera architecture. The Tera architecture, optimized for local area networks, incorporates a scalable nonblocking switching element with hybrid cell queues. Cells are queued first in a global first-in first-out queue that is shared by all switch inputs and then in output queues that are dedicated to individual switch outputs. The shared input queue design is scalable since it is based on a Banyan network and N FIFO memories.
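The hybrid queuing scheme described above can be sketched as follows. This is a simplified functional model of the two-stage queuing discipline, not the CMU hardware design; the port count and cell labels are illustrative assumptions.

```python
# Simplified model of hybrid cell queuing: every cell first enters one
# global FIFO shared by all switch inputs, then is moved into a queue
# dedicated to its output port.
from collections import deque

class HybridQueueSwitch:
    def __init__(self, num_outputs):
        self.shared_fifo = deque()                          # global input-side FIFO
        self.output_queues = [deque() for _ in range(num_outputs)]

    def arrive(self, cell, out_port):
        # All inputs feed the single shared queue.
        self.shared_fifo.append((cell, out_port))

    def switch_cycle(self):
        # Move the head-of-line cell from the shared FIFO to its
        # dedicated output queue (the nonblocking fabric step).
        if self.shared_fifo:
            cell, port = self.shared_fifo.popleft()
            self.output_queues[port].append(cell)

    def transmit(self, port):
        # Each output line drains its own queue independently.
        return self.output_queues[port].popleft() if self.output_queues[port] else None

sw = HybridQueueSwitch(num_outputs=4)
sw.arrive("cell-1", out_port=2)
sw.arrive("cell-2", out_port=2)
sw.switch_cycle()
sw.switch_cycle()
print(sw.transmit(2))   # cells leave port 2 in arrival order
```

The shared first stage is what makes the design scale: input-side buffering is pooled across all ports rather than replicated per input.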
In this paper, we introduce the Illinois Pulsar-based Optical Interconnect (iPOINT) testbed, and present performance results obtained for the FPGA prototype switch in a working environment consisting of an optical network of Sun SPARCStations and other local and wide-area ATM switches.
The term multimedia implies the combination of many different forms of information, including computer graphics, text, video, and audio, along with data distribution mechanisms and storage systems that can provide such data with real-time or interactive response. Until recently, almost all systems were limited in their ability to manipulate video data.
A broadband communication infrastructure (over 150 megabits per second), deployed almost everywhere outside the third world within 20 years, is a common planning assumption of governments, communication carriers, and information providers. The "structure" of this infrastructure has been variously projected as being that of the telephone network, the cable system, or the Internet. An argument is made that the telephone model, with features borrowed from the other two, will prevail. This model is used to project broad features of printing, publishing, and advertising. In support of this projection, printing is modeled by purpose: a document is printed either to archive it, to give it to someone else, or to use it (read, mark up, take along, etc.). In the broadband future, only the last is sustainable. Publishing is modeled as a four-stage chain of commerce from creator to buyer. The progress of both the document and its chain of payments is considered today and in the broadband scenario. Finally, advertising today and tomorrow is modeled as a 2x2x2 cube. One dimension contrasts the "notify/inform" and "persuade" aspects of advertising; another contrasts the consumer's role as passive recipient vs. active controller of what s/he hears and sees; the third views the institution of advertising as reflecting or setting societal values.
Video on demand (VOD) is expected to be the first of many Video Dial Tone (VDT) services that will bring broadband connections to residential customers. Significant research is being undertaken to identify cost effective broadband access technologies. However, much less effort has been expended on video servers and backbone architectures. In this publication we highlight what we believe are the issues relevant to obtaining a critical understanding of the VDT architecture. These issues are shared between the video server and its storage elements, and the placement of key architectural elements (transport, storage and switching) within the network. In particular, it will be argued that establishing a balance between centralized and distributed storage of movies plays a key role in optimizing the backbone network. It also places new requirements on server architecture. Intuitively, storing copies of popular movies close to the users (e.g. in the central office) reduces demand on the network, while hauling movies from a central storage location (e.g. the VIP site) potentially increases utilization of storage resources and hence the number of movies in the system and the resulting cost of storage.
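The centralized-versus-distributed tradeoff above can be framed as a simple break-even comparison. The model and all cost figures below are illustrative assumptions for exposition, not numbers from the paper.

```python
# Illustrative break-even model for movie placement: cache a copy near
# the users (e.g. in the central office) when the expected cost of
# hauling it repeatedly over the backbone from central storage exceeds
# the cost of holding a local copy. All cost figures are hypothetical.

def should_cache_locally(expected_views_per_day,
                         transport_cost_per_view,
                         daily_storage_cost):
    """Return True when local (distributed) storage is cheaper."""
    backbone_cost = expected_views_per_day * transport_cost_per_view
    return backbone_cost > daily_storage_cost

# A popular movie viewed 200 times a day easily justifies a local copy;
# an unpopular one viewed twice a day does not.
print(should_cache_locally(200, 0.05, 3.00))  # True
print(should_cache_locally(2, 0.05, 3.00))    # False
```

Run over a popularity distribution, this kind of rule reproduces the intuition in the abstract: hot titles migrate outward toward the users, while the long tail stays pooled at the central site where storage is better utilized.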
From communication and publishing behemoths to mom-and-pop makers of CD-ROMs, from Michael Ovitz to firstname.lastname@example.org, purveyors and packagers of bits are racing for grubstakes and homesteads in the digital wilderness. This paper surveys some of the current trends with specific examples.
Despite the many challenges facing widespread adoption of the Integrated Services Digital Network, prospects for its deployment look better than ever. Driving the standard toward implementation are a broad range of customer requirements. These include higher-speed data transmissions, better quality voice transmissions, as well as new applications for video and voice spawned by the personal computer industry. This paper will chronicle the history of ISDN, examine its current status as defined by evolving customer needs, and take a brief look at future developments.
Laser communications can play an important role in future hybrid network architectures of orbital and terrestrial systems, allowing rapid access to multimedia information services. Satellite communications will be an important segment upon which these future hybrid network architectures will be based. Recent technological advances in the supporting technologies have now enabled laser communication subsystems to support the anticipated high-performance characteristics needed by future networks. In this paper we describe the advantages of spaceborne optical communications for intersatellite links and the application to the emerging Information Infrastructures.
The changing environment of communications is bringing the realization that network traffic will be uniform at some fundamental level. When this occurs, the public network (the "information superhighway") will carry voice, video and data equally well. Underlying this assumption is the idea that the video information is of a digital nature. Because full-motion video requires raw digital speeds from 90 Mb/s to 2 Gb/s, depending upon the type and quality of video signal, these digital representations tend to be reduced (compressed) by eliminating redundant information. This compressed video information (now requiring about 6 Mb/s) will be merged with data and voice traffic in the Asynchronous Transfer Mode network. That is to say, the basic information for communications will be placed into equal-length packets or cells and be identical at this cell level.
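The figures above translate directly into ATM cell rates. An ATM cell is a fixed 53 bytes, of which 48 bytes are payload; the back-of-the-envelope calculation below assumes the 6 Mb/s compressed-video figure quoted in the text and ignores adaptation-layer overhead.

```python
# ATM cell rate for a 6 Mb/s compressed video stream (ignoring AAL overhead).
CELL_BYTES = 53          # fixed ATM cell size
PAYLOAD_BYTES = 48       # payload per cell (5-byte header)

video_rate_bps = 6_000_000                         # compressed video, from the text
payload_bits_per_cell = PAYLOAD_BYTES * 8          # 384 payload bits per cell

cells_per_second = video_rate_bps / payload_bits_per_cell
line_rate_bps = cells_per_second * CELL_BYTES * 8  # rate including cell headers

print(f"{cells_per_second:.0f} cells/s")           # 15625 cells/s
print(f"{line_rate_bps/1e6:.3f} Mb/s on the wire") # 6.625 Mb/s with headers
```

The roughly 10% header overhead is the price of the fixed cell format that lets voice, video, and data traffic become indistinguishable at the cell level.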
Although the visual aspect is important for communication between humans, the most common form of communication for separated humans, telephony, has no visual component. In recent years, advances in video compression have made video telephony possible from specialized rooms, but until recently it remained expensive and inconvenient. Decreasing costs and technical advances are now bringing video telephony into the office in both digital and analog form. Video telephony, coupled with collaborative computing, now provides a more complete technical capability for office-to-office communications. The near future should make video telephony common and productive.