While several methods of automatic video segmentation for the identification of shot transitions have been proposed, they have not been systematically compared. We examine several segmentation techniques across different types of videos. Each of these techniques defines a measure of dissimilarity between successive frames, which is then compared to a threshold; dissimilarity values exceeding the threshold identify shot transitions. The techniques are compared in terms of the percentage of correct and false identifications for various thresholds, their sensitivity to the threshold value, their performance across different types of video, their ability to identify complicated transition effects, and their requirements for computational resources. Finally, the definition of an a priori set of values for the threshold parameter is also examined. Most techniques can identify over 90% of the real shot transitions but produce a high percentage of false positives. Reducing the false positives was a major challenge, and we introduce a local filtering technique that proved fairly effective.
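As a concrete illustration of the threshold-plus-local-filter approach the abstract describes, here is a minimal sketch using a gray-level histogram difference as the dissimilarity measure; the specific measure, window size, and the 3x local factor are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def histogram_dissimilarity(frame_a, frame_b, bins=64):
    """Bin-wise absolute histogram difference between two grayscale frames,
    normalized to [0, 1]. One common dissimilarity measure among several."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 255))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 255))
    return np.abs(ha - hb).sum() / (2.0 * frame_a.size)

def detect_cuts(frames, threshold=0.3, window=5):
    """Flag a shot transition where the dissimilarity both exceeds the
    global threshold and stands out against a local window of values
    (a simple local filter to suppress false positives)."""
    d = [histogram_dissimilarity(frames[i], frames[i + 1])
         for i in range(len(frames) - 1)]
    cuts = []
    for i, di in enumerate(d):
        lo, hi = max(0, i - window), min(len(d), i + window + 1)
        neighborhood = d[lo:i] + d[i + 1:hi]
        local_mean = sum(neighborhood) / max(len(neighborhood), 1)
        if di > threshold and di > 3.0 * local_mean:
            cuts.append(i)  # cut between frame i and i+1
    return cuts
```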
Many commercial and academic disciplines need digital video libraries. Such libraries must provide the user with the opportunity to manipulate the material non-linearly. This paper discusses digital video libraries and various non-linear services these libraries will provide. The architectural components of such libraries are discussed and a model of user access is developed. Results from caching experiments utilizing that model are reported for several usage patterns and four cache organizations.
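The abstract does not spell out the access model or the four cache organizations, so the following is only a minimal stand-in: a whole-title LRU cache driven by a Zipf-like popularity distribution, a common assumption in video-library caching studies.

```python
import random
from collections import OrderedDict

def simulate_lru(num_videos=500, cache_slots=50, requests=10000, zipf_s=1.1):
    """Toy cache experiment: title popularity follows a Zipf-like law;
    the cache holds whole titles and evicts the least recently used."""
    weights = [1.0 / (rank ** zipf_s) for rank in range(1, num_videos + 1)]
    cache = OrderedDict()
    hits = 0
    for _ in range(requests):
        video = random.choices(range(num_videos), weights=weights)[0]
        if video in cache:
            hits += 1
            cache.move_to_end(video)       # refresh recency
        else:
            if len(cache) >= cache_slots:
                cache.popitem(last=False)  # evict least recently used
            cache[video] = True
    return hits / requests

print(f"hit rate: {simulate_lru():.2%}")
```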
Content-based retrieval techniques can be characterized in several ways: by the manner in which image data are indexed, by the level of specificity/generality of the query and response of the system, by the type of query (e.g., iconic or symbolic), and by the kind of information used (intrinsic image features or attached information such as text). The method described in this paper automatically indexes images in the database, and is intended to retrieve specific objects by image query based on inherent image content. Our method is actually quite similar to object recognition except that instead of searching a single image for a given object, an entire database of images is examined. The approach uses linear phase coefficient composite (LPCC) filters to encode and match queries consisting of multiple images (e.g., representative views of an object of interest) against multiple images in the database simultaneously. Retrieval is a two-step process that first isolates those portions of the database containing images that match the query, and then identifies the specific images. Our use of LPCC filters exploits phase information to retrieve specific images that match the query from the database. The results from the experiments suggest that phase information can be used to index and retrieve multiple images from a database in parallel, and that large numbers of operations can be performed simultaneously using a complex number representation. In one experiment well over 100 real correlations were effectively performed by a single complex correlation. Problems encountered in processing video data are discussed.
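The full LPCC construction encodes many images with distinct phase factors; the sketch below shows only the simplest two-for-one case of the underlying idea, packing two real "database" signals into the real and imaginary parts of one complex array so that a single complex correlation yields both real correlations exactly. Signal sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
template = rng.standard_normal(256)                         # real query
x = np.roll(template, 40) + 0.1 * rng.standard_normal(256)  # contains a match
y = rng.standard_normal(256)                                # no match

# Pack two real signals into one complex array and run a single
# complex circular correlation against the real template.
z = x + 1j * y
c = np.fft.ifft(np.fft.fft(z) * np.conj(np.fft.fft(template)))

# Because x, y, and the template are real, the two correlations
# separate exactly into the real and imaginary parts.
corr_x, corr_y = c.real, c.imag
print("peak for x at lag", np.argmax(corr_x))  # ~40: match located
print("peak strength for y:", corr_y.max())    # no pronounced peak
```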
In this paper we examine a content-based method to download/record digital video from networks to client stations and home VCRs. The method examined is an alternative to the conventional time-based method used for recording analogue video. Various approaches to probing the video content and to triggering the VCR operations are considered, including frame signature matching, program barcode matching, preloaded pattern searching, and annotation signal searching in a hypermedia environment. Preliminary performance studies are conducted to provide some insights into this approach.
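A rough sketch of the frame-signature-matching style of triggering, under assumed details: the signature here is a coarse grid of tile means, and matching is a mean absolute difference against stored start/stop signatures. The paper's actual signature and tolerance scheme may differ.

```python
import numpy as np

def frame_signature(frame, grid=8):
    """Coarse signature: mean intensity over a grid x grid tiling.
    A stand-in for whatever signature the scheme actually uses."""
    h, w = frame.shape
    tiles = frame[: h - h % grid, : w - w % grid].reshape(
        grid, h // grid, grid, w // grid)
    return tiles.mean(axis=(1, 3)).ravel()

def record_controller(frames, start_sig, stop_sig, tol=8.0):
    """Yield (frame_index, recording?) decisions: start recording when a
    frame matches the start signature, stop on the stop signature."""
    recording = False
    for i, frame in enumerate(frames):
        sig = frame_signature(frame)
        if not recording and np.abs(sig - start_sig).mean() < tol:
            recording = True
        elif recording and np.abs(sig - stop_sig).mean() < tol:
            recording = False
        yield i, recording
```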
This paper describes an experimental system (WeatherDigest) for automatic conversion of TV weather forecasts to HTML documents. This application is presented as an example of a larger system dealing with media understanding, representation and dissemination. An object model for media representation and processing is described, and WeatherDigest is presented in terms of this object model. The concepts explored in WeatherDigest are then generalized to a repository of multimedia information (media server). This server can handle requests from clients with different requirements, allowing them to retrieve the same information in multiple formats.
Today's typical multimedia computer consists of a fast CPU and multimedia peripherals such as attached sound and video support hardware. In such a machine, the low-level audio and video handling may be performed by these peripherals, while any significant processing on these media requires the intimate involvement of the CPU, main memory system and the operating system. Given that these components are not typically designed to meet the real-time requirements of continuous media, there are a number of constraints on the nature of multimedia applications that can be executed. To get around these constraints, demanding multimedia applications are typically offloaded to attached peripheral compute engines which are customized to execute them. This approach requires dedicating resources to specific applications and suffers from the problem that such applications are not flexibly integrated with the main system. We posit that new software and hardware technologies are needed to truly integrate real-time multimedia processing capabilities within a multimedia computer. The design of these technologies must be based on a thorough understanding of the timing and real-time computing requirements of various types of continuous media. Formal mechanisms are needed to express and analyze these requirements in order for multimedia operating systems to efficiently and correctly schedule the use of computing and communication resources. Indeed, such mechanisms may be used to guide the specification and design of hardware and software processing components. In this paper, we present a framework for specifying the timing properties and computational requirements of continuous media. We discuss the usefulness of this framework in the context of some common audio and video data formats. We also discuss the impact of this framework on the design of resource scheduling mechanisms. Finally, we offer some insights on designing audio/video processing engines based on real-time requirements.
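To make the timing requirements concrete, the snippet below derives periodic processing deadlines for two common continuous-media formats from their standard rate constants; the (period, work-per-period) pairing is a simplification of the paper's formal framework, not its actual notation.

```python
# Periodic timing parameters for two common continuous-media streams.
streams = {
    # name: (units per second, payload units per processing period)
    "NTSC video frames": (30000 / 1001, 1),  # 29.97 frames/s
    "MPEG-1 Layer II audio": (44100, 1152),  # samples/s, samples per frame
}

for name, (rate, units_per_period) in streams.items():
    period_ms = 1000.0 * units_per_period / rate
    print(f"{name}: one unit of work every {period_ms:.2f} ms "
          f"(miss it and the stream glitches)")
# -> video: 33.37 ms; audio: 26.12 ms. The scheduler must meet both.
```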
In this paper, we discuss the design of a high-performance hardware implementation architecture for JPEG/MPEG encoders using hybrid technologies: analog optics and digital VLSI. A major computational cost of the JPEG/MPEG standards is the 2D discrete cosine transform. We design a powerful, highly parallel optical computing technique to perform the cosine transform and use VLSI for the remaining control-intensive functions. By combining the best features of optics and VLSI, this approach significantly reduces the cell count and improves performance.
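For reference, the transform in question is the standard 2D DCT-II applied to 8x8 blocks in JPEG/MPEG; its separability into row and column 1-D transforms is what makes a parallel optical stage a natural fit:

```latex
% 2D DCT-II over an 8x8 block, as used by JPEG/MPEG:
F(u,v) = \frac{C(u)\,C(v)}{4}
         \sum_{x=0}^{7}\sum_{y=0}^{7} f(x,y)\,
         \cos\!\frac{(2x+1)u\pi}{16}\,
         \cos\!\frac{(2y+1)v\pi}{16},
\qquad
C(k) = \begin{cases} 1/\sqrt{2}, & k = 0 \\ 1, & k \ne 0 \end{cases}
% Separability: the double sum factors into 8 row DCTs followed by
% 8 column DCTs, so the heavy inner products can run in parallel.
```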
In this paper we present the hardware realization of an audio decoder conforming to ISO/IEC 11172-3 (MPEG-1 Audio). The aim of this development was the implementation of an MPEG-1 Audio Layer 1/2 decoder core that can be used as a standalone solution as well as in combination with other functionality (e.g. a video decoder) integrated on one chip. To meet these requirements, it is essential to achieve low power consumption and to minimize the demand for chip area. In the described system, optimization was carried out on the algorithmic level as well as on the architectural level. On the algorithmic level, the number of multiplications per audio sample was reduced by about 50% compared to the solution presented in the standard, by modifying the polyphase filterbank. On the architectural level, the system exploits parallelism and resource sharing to minimize the number of computation units and the size of memory. The decoding sequence is divided into several processes which run in parallel. In this way, the computation unit can be time-multiplexed among the different processes and runs almost continuously. Communication between the processes is realized with shared memory. A technique to minimize the lifetime of data in memory through a kind of virtual addressing is presented. By using these optimization techniques, the developed core has a gate count of only about 35 k (including on-chip memory) and can run at a clock rate of 16 MHz, which results in low power consumption.
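For scale, here is the textbook multiply count for the direct-form synthesis filterbank from the standard's reference description; the abstract does not give the modified filterbank's exact counts, so the 50% figure is applied as stated.

```python
# Back-of-the-envelope multiply counts per 32 decoded audio samples for
# the MPEG-1 synthesis filterbank (direct form from the standard).
matrixing = 64 * 32   # V[i] = sum_k N[i][k] * S[k]: 64x32 cosine matrix
windowing = 512       # one multiply per tap of the 512-tap window

per_sample_direct = (matrixing + windowing) / 32
print(f"direct form: {per_sample_direct:.0f} multiplies per output sample")
print(f"~50% reduction: about {per_sample_direct / 2:.0f} per sample")
```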
Pfinder is a real-time system for tracking and interpretation of people. It runs on a standard SGI Indy computer, and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to segment a person from a background scene, and implements heuristics which can find and track people's head and hands in a wide range of viewing conditions. Pfinder produces a real-time representation of a user useful for applications such as wireless interfaces, video databases, and low-bandwidth coding, without cumbersome wires or attached sensors.
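A much-simplified sketch of the statistical segmentation step: each pixel is assigned to the nearest Gaussian class (background scene or a body-part blob) by Mahalanobis distance. Pfinder's actual models also use shape and adapt over time; this shows only the color-classification core, with made-up class parameters.

```python
import numpy as np

def mahalanobis_sq(pixels, mean, cov):
    """Squared Mahalanobis distance of N x 3 color pixels to one
    Gaussian class model (mean: 3-vector, cov: 3x3)."""
    diff = pixels - mean
    return np.einsum("ni,ij,nj->n", diff, np.linalg.inv(cov), diff)

def classify(pixels, classes):
    """Assign each pixel to the nearest class by Mahalanobis distance:
    index 0 = background, 1.. = person 'blobs'."""
    d = np.stack([mahalanobis_sq(pixels, m, c) for m, c in classes])
    return d.argmin(axis=0)

# Toy demo: grey background vs. a skin-tone blob (illustrative numbers).
bg = (np.array([120., 120., 120.]), 400 * np.eye(3))
skin = (np.array([180., 140., 120.]), 250 * np.eye(3))
px = np.array([[118., 121., 119.], [178., 143., 118.]])
print(classify(px, [bg, skin]))  # -> [0 1]
```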
Adelson and Wang have proposed a video compression algorithm in which moving images are decomposed into layers whose motion can be represented by a six-parameter affine transformation. Such a representation not only adds compression efficiency but also enables modification and automated scene understanding. We have previously proposed a data-flow computational framework for a flexible video decoder capable of handling both 'traditional' transform-based algorithms and model-based descriptions of scenes. In this paper, we explain how such an architecture can decode Adelson and Wang's algorithm, and present performance results for several variations of the algorithm on a data-flow video processing system.
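The layer motion model referred to is the standard six-parameter affine map:

```latex
% Each point (x, y) in a layer maps frame-to-frame as
\begin{pmatrix} x' \\ y' \end{pmatrix}
=
\begin{pmatrix} a_1 & a_2 \\ a_4 & a_5 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
+
\begin{pmatrix} a_3 \\ a_6 \end{pmatrix}
% The six parameters capture translation, rotation, scale, and shear of
% the whole layer, so a layer's motion costs six numbers per frame
% regardless of the layer's size.
```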
Video Server Architectures and Applications in Large Commercial Media Delivery Systems
This paper presents a view of a large-scale interactive television system (ITVS) based on a hierarchical architecture of intelligent store/forward nodes. The described ITVS functions within the purview of local knowledge and distributed control, which the author argues is the reasonable approach to managing the complexity of such systems. The major data handling themes are outlined and a case is made for the use of segmented burst transmissions. A feature of such an ITVS is the asymmetrical bandwidth requirements for upstream demand (the 'back channel') and downstream fulfillment flows. This leads to the identification of an important operational characteristic of such systems, which is summarized as the Demand Conservation Principle. The application of this conservation principle is shown to support the development of analytical performance models under an achievable set of assumptions for practical systems. An approach to the development of such models is presented using the ability to collapse large vertical partitions of such systems into a generalized demand cascade. This material is based on Intergraphics Associates TN-94031, an unpublished technical note with the same title, which is one of a series of technical notes on the design, performance analysis, and modeling of interactive television systems.
Selecting the best network for a given cable or telephone company provider is not as obvious as it appears. The cost and performance trades between Hybrid Fiber Coax (HFC), Fiber to the Curb (FTTC) and Asymmetric Digital Subscriber Line (ADSL) networks lead to very different choices based on the existing plant and the expected interactive subscriber usage model. This paper presents some of the issues and trades that drive network selection. The majority of the Interactive Television trials currently underway or planned are based on HFC networks. As a throw-away market trial or a short-term strategic incursion into a cable market, HFC may make sense. In the long run, if interactive services see high demand, HFC costs per node and an ever-shrinking neighborhood node size to service large numbers of subscribers make FTTC appear attractive. For example, thirty-three 64-QAM modulators are required to fill the 550 MHz to 750 MHz spectrum with compressed video streams in 6 MHz channels. This large amount of hardware at each node drives not only initial build-out costs, but operations and maintenance costs as well. FTTC, with its potential for digitally switching large amounts of bandwidth to a given home, offers the potential to grow with the interactive subscriber base at lower downstream cost. Integrated telephony on these networks appears to be an afterthought for most of the networks being selected at the present time. The major players seem to be videocentric and include telephony as a simple add-on later. This may be a reasonable viewpoint for the telephone companies that plan to leave their existing phone networks untouched. However, a phone company planning a network upgrade or a cable company jumping into the telephony business needs to carefully weigh the cost and performance issues of the various network choices. Each network type provides varying capability in both upstream and downstream bandwidth for voice channels. The noise characteristics vary as well, and cellular-grade quality will not be tolerated by the home or business consumer. The network choices are not simple or obvious. Careful consideration of the cost and performance trades, along with cable or telephone company strategic plans, is required to ensure that the best network is selected.
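The modulator count quoted in the abstract is simple spectrum arithmetic, worked out below; the payload and per-stream rates used to extend it to stream counts are typical values assumed for illustration, not figures from the paper.

```python
# The abstract's 33-modulator figure, plus the stream arithmetic behind it.
spectrum_mhz = 750 - 550   # downstream digital band
channel_mhz = 6            # one 64-QAM channel
channels = spectrum_mhz // channel_mhz
print(channels, "channels -> that many modulators per node")  # 33

payload_mbps = 27          # usable rate of 64-QAM in 6 MHz (approximate)
stream_mbps = 4            # one compressed video stream (assumed)
print(channels * (payload_mbps // stream_mbps), "streams per node")
```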
Recent advances in networks, storage and video delivery systems are about to make commercial deployment of interactive multimedia services over digital television networks a reality. The emerging components individually have the potential to satisfy the technical requirements in the near future. However, no single vendor is offering a complete end-to-end, commercially deployable and scalable interactive multimedia application system over digital/analog television systems. Integrating a large set of maturing sub-assemblies and interactive multimedia applications is a major task in deploying such systems. Here we deal with integration issues, requirements and trade-offs in building delivery platforms and applications for interactive television services. Such integration efforts must overcome a lack of standards, and deal with unpredictable development cycles and quality problems of leading-edge technology. There are also the conflicting goals of optimizing systems for video delivery while enabling highly interactive distributed applications. It is becoming possible to deliver continuous video streams from specific sources, but it is difficult and expensive to provide the ability to rapidly switch among multiple sources of video and data. Finally, there is the ever-present challenge of integrating and deploying expensive systems whose scalability and extensibility are limited, while ensuring some resiliency in the face of inevitable changes. This proceedings version of the paper is an extended abstract.
For high-end real-time video services, a prioritized transmission (PT) scheme should match the video information distribution of the high-priority and low-priority bitstreams generated by a layered source coder. This paper studies two PT schemes: coupled-concatenated transmission (CCT) and coupled-interlaced transmission (CIT). Simulation results reveal that, given the same amount of network resources, the CIT scheme outperforms both the CCT scheme and the non-layered scheme in terms of delivered video quality and network resource utilization.
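The abstract does not define the two schemes beyond their names, so the toy below only illustrates the orderings the names suggest: CCT sends the high-priority block before the low-priority block, while CIT interlaces the two so high-priority cells are spread evenly in time.

```python
# 'H' = high-priority (base layer) cell, 'L' = low-priority (enhancement).
# The real schemes operate on network cells with resource reservations;
# this shows only the transmission ordering.
high = ["H"] * 4
low = ["L"] * 4

cct = high + low                                    # coupled-concatenated
cit = [c for pair in zip(high, low) for c in pair]  # coupled-interlaced

print("CCT:", "".join(cct))  # HHHHLLLL: all base-layer cells up front
print("CIT:", "".join(cit))  # HLHLHLHL: priorities spread evenly in time
```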
The goal of Video-On-Demand (VOD) service is to allow users to access arbitrary video sequences at arbitrary times across wide-area networks. Because of the large data volumes associated with real-time video, the communications and I/O bandwidth requirements pose an extremely difficult challenge to computer systems designers. This paper focuses specifically on the I/O subsystem design for large-scale video servers. We describe a periodic broadcasting approach and its associated data layout algorithms to cost-effectively address the I/O bandwidth problems associated with large-scale VOD servers. The paper also presents techniques to support a limited form of VCR-like trick plays, such as fast forward and reverse, under the proposed framework.
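A minimal sketch in the spirit of periodic broadcasting (the paper's actual data layout algorithms may differ): the video is cut into segments of geometrically increasing length, each repeated forever on its own channel, so the worst-case start-up wait is just the first segment's duration while later segments are long enough to be caught as the earlier ones play out.

```python
def segment_lengths(video_minutes, channels, ratio=2.0):
    """Split a video into `channels` segments whose lengths grow by
    `ratio`, summing to the full running time (geometric series)."""
    first = video_minutes * (ratio - 1) / (ratio ** channels - 1)
    return [first * ratio ** i for i in range(channels)]

segs = segment_lengths(video_minutes=120, channels=6)
print([f"{s:.1f}" for s in segs])                 # durations in minutes
print(f"worst-case start-up wait: {segs[0]:.1f} min")  # ~1.9 min
```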
Although the focus within the broadband communications framework today is mainly on data communications for business customers and asymmetric services for residential customers (e.g. VoD), interest in new services is growing. This paper considers broadband multimedia symmetric services for residential customers and provides examples of their possible use. Four network architectures for supporting such services are then illustrated and compared in order to highlight their main strengths and weaknesses.
As the enabling technologies for broadband services become mature and affordable, new multimedia applications are emerging and being implemented by computing and telecommunication vendors. A particularly attractive application in this fast-growing area is interactive video services, which are being trialed and developed by a number of product and service providers. However, a number of technical challenges have to be resolved before the service can be viable. In this paper, we report some results of our recent studies in three specific areas regarding the design of large-scale interactive consumer video services: server design issues, transport and distribution network design issues, and signaling and interactive response issues. We focus on techniques for building a scalable and reliable media delivery system capable of providing flexible, interactive access to multimedia content for a large number of users.
The build-out of residential interactive networks presents application developers and content providers with new and exciting distribution channels. With this opportunity comes the challenge of delivering compelling multimedia services within a severely resource-constrained environment. Initially, these networks will provide unidirectional broadband data streaming capabilities with limited bi-directional control channels. For economic reasons, the associated customer premises equipment will consist of simple set-top boxes (STBs) containing only a few megabytes of memory and no secondary storage. Given these constraints, the challenge for application developers is to construct a service delivery framework capable of efficiently utilizing powerful upstream computing resources while minimizing latencies due to the intervening network. A client/server software architecture, originally developed for enterprise networks, is the basis of this framework. Combined with distributed computing concepts, a properly partitioned system can provide a very capable interactive network platform. This discussion focuses on the client side of such a system. It explores the use of interpretive runtime environments for STBs and suggests methods for expanding client functionality through object aggregation and encapsulation.
SBC Communications Inc. (SBC) is engaged in a full service broadband trial in Richardson, Texas. This trial brings together a unique combination of technologies. The all-digital broadband network implements fiber to the curb (FTTC), providing high-bandwidth service to the trial participants. The fiber network carries telephone service and full service video and multimedia applications. The high bandwidth afforded by fiber also supports upstream video services and other highly interactive applications. We've all heard the old saw "Content is King." In building and integrating media systems there is a new monarch, software, and it poses perhaps the most challenging aspect of the process. Knowing that a network with nothing to play upon it will not satisfy the needs of our customers, SBC is leveraging the advanced, object-oriented capabilities in the Microsoft architecture to yield an environment that eases the work for developers. By creating an environment that lowers costs and speeds development time, SBC hopes to enable and encourage its media partners to place larger numbers of titles on our network. SBC further hopes this environment will shorten and improve the quality control process and will significantly lower integration costs.
Keywords: software architectures, systems integration, interactive television, multimedia, media systems
The richness and diversity of information available over the Internet, its size, convenience of access, and its dynamic growth will create new ways to offer better education opportunities in medicine. The Internet will especially benefit the medical training process, which is expensive and requires continuous updating. Use of the Internet will lower delivery costs and make medical information available to all potential users. On the other hand, technology alone is not enough, since medical information must be trusted and new policies must be developed to support these capabilities. In general, we must deal with issues of liability, remuneration for educational and professional services, and general issues of ethics associated with the patient-physician relationship in a complicated environment created by a mix of managed and private care combined with modern information technology. In this paper we focus only on the need to create, manage and operate an open system over the Internet, or similar low-cost, easy-access networks, for the purpose of medical education. Finally, using business analysis, we argue why the medical education infrastructure needs an information broker: a third-party organization that helps users access the information and publishers display their titles. The first section outlines recent trends in medical education. In the second section, we discuss transfusion medicine requirements. In the third section we provide a summary of the American Red Cross (ARC) transfusion audit system and discuss the relevance of the assumptions used in this system to other areas of medicine. In the fourth section we describe the overall system architecture and discuss key components. The fifth section covers business issues associated with medical education systems and with the potential role of the ARC in particular. The last section provides a summary of findings.
Data embedding is a new steganographic method for combining digital information sets. This paper describes the data embedding method and gives examples of its application using software written in the C programming language. Sandford and Handel produced a computer program (BMPEMBED, Ver. 1.51, written for the IBM PC/AT or compatible, MS/DOS Ver. 3.3 or later) that implements data embedding in an application for digital imagery. Information is embedded into, and extracted from, Truecolor or color-palette images in Microsoft bitmap (BMP) format. Hiding data in the noise component of a host, by means of an algorithm that modifies or replaces the noise bits, is termed 'steganography.' Data embedding differs markedly from conventional steganography, because it uses the noise component of the host to insert information with few or no modifications to the host data values or their statistical properties. Consequently, the entropy of the host data is affected little by using data embedding to add information. The data embedding method applies to host data compressed with transform, or 'lossy,' compression algorithms, for example ones based on the discrete cosine transform and wavelet functions. Analysis of the host noise generates a key required for embedding and extracting the auxiliary data from the combined data. The key is stored easily in the combined data. Images without the key cannot be processed to extract the embedded information. To provide security for the embedded data, one can remove the key from the combined data and manage it separately. The image key can be encrypted and stored in the combined data or transmitted separately as a ciphertext much smaller in size than the embedded data. The key size is typically ten to one hundred bytes, and it is derived from the original host data by an analysis algorithm.
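The following is a much-simplified reading of the pair-based embedding idea, with assumed names and parameters: the 'key' is a table of intensity-value pairs whose histogram frequencies are close enough to treat as noise-interchangeable, and embedding chooses between pair members per data bit, leaving the host's statistics nearly unchanged. The actual analysis and key format in BMPEMBED are more involved.

```python
import numpy as np

def build_key(host, max_pairs=64, tol=0.05):
    """Derive a toy embedding key from the host's histogram: pairs of
    adjacent intensity values whose frequencies differ by less than
    `tol` are treated as statistically interchangeable noise pairs."""
    hist = np.bincount(host.ravel(), minlength=256).astype(float)
    pairs, v = [], 0
    while v < 255 and len(pairs) < max_pairs:
        a, b = hist[v], hist[v + 1]
        if a > 0 and b > 0 and abs(a - b) / max(a, b) < tol:
            pairs.append((v, v + 1))
            v += 2  # never reuse a value in two pairs
        else:
            v += 1
    return pairs

def embed(host, bits, pairs):
    """Embed one bit per occurrence of a pair value: bit 0 -> low member,
    bit 1 -> high member. Because pair frequencies were nearly equal,
    the histogram (and hence the host's statistics) barely changes."""
    out, i = host.copy().ravel(), 0
    lut = {v: p for p in pairs for v in p}
    for idx, val in enumerate(out):
        if i >= len(bits):
            break
        if val in lut:
            lo, hi = lut[val]
            out[idx] = hi if bits[i] else lo
            i += 1
    return out.reshape(host.shape)
```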
Broadband Integrated Services Digital Network (BISDN) communication will involve the integration of high-speed data, voice, and video functionality delivered via technology similar to Asynchronous Transfer Mode (ATM) switching and Synchronous Optical Network (SONET) optical transmission systems. Customers of BISDN services may need a variety of data authenticity and privacy assurances. Cryptographic methods can be used to assure authenticity and privacy, but are hard to scale for implementation at high speed. The incorporation of these methods into computer networks can severely impact functionality, reliability, and performance.
To be competitive in today's globally connected marketplace, a company must ensure that its internal network security methodologies and supporting policies are current and reflect an overall understanding of today's technology and its resultant threats. Further, an integrated approach to information security should ensure that new ways of sharing information and doing business are accommodated, such as electronic commerce, high-speed public broadband network services, and the federally sponsored National Information Infrastructure. There are many challenges, and success is determined by the establishment of a solid and firm baseline security architecture that accommodates today's external connectivity requirements, provides transitional solutions that integrate with evolving and dynamic technologies, and ultimately acknowledges both the strategic and tactical goals of an evolving network security architecture and firewall system. This paper explores the evolution of external network connectivity requirements, the associated challenges, and the subsequent development and evolution of firewall security systems. It makes the assumption that a firewall is a set of integrated and interoperable components coming together to form a 'system,' and must be designed, implemented and managed as such. A progressive firewall model is used to illustrate the evolution of firewall systems from earlier models utilizing separate physical networks to today's multi-component firewall systems enabling secure heterogeneous and multi-protocol interfaces.
This paper describes why encryption was selected by Lockheed Martin Missiles & Space as the means for securing ATM networks. The ATM encryption testing program is part of an ATM network trial provided by Pacific Bell under the California Research Education Network (CalREN). The problem being addressed is the threat to data security which results when changing from a packet-switched network infrastructure to a circuit-switched ATM network backbone. As organizations move to high-speed cell-based networks, there is a breakdown in the traditional security model designed to protect packet-switched data networks from external attacks. This is due to the fact that most data security firewalls filter IP packets, restricting inbound and outbound protocols, e.g. ftp. ATM networks, based on cell switching over virtual circuits, do not support this method of restricting access, since the protocol information is not carried by each cell. ATM switches set up multiple virtual connections, so there is no longer a single point of entry into the internal network. The problem is further complicated by the fact that ATM networks support high-speed multimedia applications, including real-time video and video teleconferencing, which are incompatible with packet-switched networks. The ability to restrict access to Lockheed Martin networks in support of both unclassified and classified communications is required before ATM network technology can be fully deployed. The Lockheed Martin CalREN ATM testbed provides the opportunity to test ATM encryption prototypes with actual applications, to assess the viability of ATM encryption methodologies prior to installing large-scale ATM networks. Two prototype ATM encryptors are being tested: (1) 'MILKBUSH,' a prototype encryptor developed by NSA for transmission of government classified data over ATM networks, and (2) a prototype ATM encryptor developed by Sandia National Labs in New Mexico for the encryption of proprietary data.
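The basic constraint any ATM encryptor must respect is that the 5-byte cell header stays in the clear so switches can still route the cell, while the 48-byte payload is encrypted using per-virtual-circuit state. The sketch below illustrates only that structure; the hash-based keystream is a toy stand-in for an approved cipher and is not what MILKBUSH or the Sandia prototype actually do.

```python
import hashlib

CELL, HEADER = 53, 5  # ATM cell: 5-byte header + 48-byte payload

def keystream(vc_key: bytes, cell_index: int) -> bytes:
    """Toy counter-mode keystream per virtual circuit (per-VC key plus
    a cell counter). For illustration only; not cryptographically sound."""
    block = hashlib.sha256(vc_key + cell_index.to_bytes(8, "big")).digest()
    return (block + hashlib.sha256(block).digest())[:48]

def encrypt_cell(cell: bytes, vc_key: bytes, cell_index: int) -> bytes:
    """Encrypt only the 48-byte payload; the header must stay in the
    clear so switches can route the cell."""
    assert len(cell) == CELL
    ks = keystream(vc_key, cell_index)
    payload = bytes(p ^ k for p, k in zip(cell[HEADER:], ks))
    return cell[:HEADER] + payload

cell = bytes(range(53))
enc = encrypt_cell(cell, b"vc-42-key", 0)
assert enc[:HEADER] == cell[:HEADER]                # header untouched
assert encrypt_cell(enc, b"vc-42-key", 0) == cell   # XOR: same op decrypts
```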
Communications over wide area networks are becoming a commonly used form of commercial interaction. Internet Protocol (IP) transfer of data packets has facilitated world-wide connectivity of employees who use computers as a daily tool. There is rapidly growing interest in Asynchronous Transfer Mode (ATM) networking as a means of increasing service, while reducing cost-per-bit for transmission. ATM can simultaneously carry IP data packets along with other data forms such as voice and video. In the commercial world, which has become aware of the need for security on its data transmissions and is just beginning to acquire products individually for IP, or voice, or video data security, there now arises the potentially unifying transfer mechanism of ATM, and with it the prospect of a seemingly more unified ATM security process as well. As first applications of ATM security are emerging, we enter an era of complex requirements and issues that must be addressed if these security products are to be genuinely used. This paper will begin with a brief introduction to ATM encryption as a process. However, its primary focus will be upon the issues and technical challenges that face potential developers of ATM encryption products. This is useful because the type of encryption a designer selects and the details of its implementation are influenced by the threat to be addressed as well as the communication environment. These decisions will be influenced particularly heavily for ATM by the initial investment required to develop an encryptor. A summary of the state-of-the-art will be provided in conclusion.