This paper discusses the distinguishing and advantageous attributes of optically addressed memory technologies and surveys application arenas for optical memory. Among these are present opportunities and near-term niches, as well as speculative arenas that await further optical memory technology development, improved interface technologies, maturation of nascent applications, or changing computational architectures. Of course, memory applications transcend computing; the requirements of other insertion opportunities are also outlined.
We are developing a high-density, high-speed optical memory using rare-earth-doped hole-burning materials. These materials are theoretically capable of achieving storage densities of 1000 Gbits/cm³ at input/output (I/O) rates of several gigabits per second. One remarkable attribute of this storage concept is that both the temporal and spatial information encoded on a laser beam can be stored; consequently, digital data can be recorded serially as a data or packet stream as well as holographically in page format. During the past year we achieved a breakthrough in demonstrating random-access holographic data storage at high frame rates. Five hundred holograms were stored and retrieved with very good fidelity at 30 Hz (video rate). Each holographic image, with 512 × 488 pixels, could be randomly accessed during the storage and retrieval process. This is the highest frame rate demonstrated for any optical technique. The breakthrough was achieved through the invention of a memory architecture that allows multiple holographic frames to be stored without any mechanical beam scanning. In this new architecture, multiplexing of 500 holograms was achieved by stepping the laser frequency over a range covered by high-speed acousto-optic modulators (AOMs).
We have achieved a surface density of 10 bits/µm² (6.5 Gbits/in²) with an experimental holographic storage setup, using DuPont's 100-µm-thick photopolymer as the recording medium. Its performance characteristics in terms of access rate and signal-to-noise ratio are described. Furthermore, a simple holographic 3D disk system with high surface density (10 bits/µm² using a 100-µm-thick recording material) and an architecture similar to that of compact disks is shown.
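The two quoted densities are the same figure in different units; the conversion can be checked directly:

```python
# Sanity check: convert the reported areal density of 10 bits/µm²
# to Gbits/in² (1 inch = 25,400 µm).
UM_PER_INCH = 25_400
bits_per_um2 = 10
bits_per_in2 = bits_per_um2 * UM_PER_INCH**2   # µm² per in² = 25,400²
gbits_per_in2 = bits_per_in2 / 1e9
print(f"{gbits_per_in2:.2f} Gbits/in^2")       # ≈ 6.45, matching the quoted 6.5
```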
The rapid advance of image-dependent information processing and entertainment applications has accelerated the need for data storage solutions that offer high capacity and high data transfer rates while maintaining low system and media costs. Volume optical memories based on two-photon-absorption-induced photochromism enable random-access writing and erasure of individual bits, lines, planes, or sets of planes within the volume of the memory media. The three-dimensional nature of the storage enables high storage capacities (theoretically 10¹² bits/cm³) and parallel readout for high data transfer rates (1-100 Gb/s). The customizable nature of the media (dye-doped plastic) and the efficient fluorescent mechanism of the memory readout promise cost-effective system and media solutions. Characterization experiments for erasable and read-only media have been performed, and system experiments for automated recording and portable read-only memories have been designed and constructed.
We describe an integrated disk and file design, called IIS, and present algorithms for processing operations using this design. The operations include sequential and random retrieval, range queries, insertion, deletion, and get next record. We analyze the disk space requirements of IIS files, and the primary memory requirements. We analyze the time requirements for the various operations and give results for some examples. We show that IIS files provide 1-access random retrieval with a small transfer time, very fast sequential retrieval, efficient insertion and deletion, good storage utilization, and low primary memory requirements. We show that the IIS design is potentially superior to other popular file organizations for a wide range of realistic applications using modern technology.
Kodiak is a new member of StorageTek's Online+ family of cached direct access storage devices (DASD). Kodiak attaches to large-scale IBM 370/390-architecture and compatible computers, provides exceptional performance and capacity, and is fully fault tolerant. This paper first outlines the goals we had for developing Kodiak, then explains the hardware and software architecture with emphasis on how the architecture implements those goals.
This paper considers an information technology that allows personal computer users to create personal databases that are sufficiently complete and can be quickly refreshed through TV communication channels. The advantage of this technology over that used in computer networks is its absolute protection from unauthorized access. An architecture for external computer memory is suggested, comprising subscriber and archive memory subsystems and providing a considerable reduction of the mean access time over the entire volume of information stored in the personal databases. Cylindrical optical information carriers are shown to be the most promising for mass storage systems, since they make multilayer recording with a great number of registering layers possible.
Standard disk array fault-tolerance solutions such as RAID 3 and RAID 5 are unattractive for video retrieval workloads. RAID 3 organizations have dedicated parity disks which are not utilized during fault-free operation. When used for video workloads, RAID 3 also suffers from decreased performance due to the effects of its fine-grained striping. RAID 5 organizations perform well under fault-free conditions, but their performance under failure is poor: the load on the surviving disks in an affected parity group doubles, and these disks become a bottleneck that may result in the loss of as many as half of the streams. We describe a data organization which performs as well as RAID 5 under fault-free conditions, while performing much better than RAID 5 under failure. Improved performance under failure is achieved by distributing the reconstruction load more evenly. Our new approach offers certain advantages over existing approaches based on balanced incomplete block designs (BIBDs): contiguity is maintained under reconstruction; a distinction is made between the reconstruction of real-time video data and less critical check data; and a relaxed balance requirement yields solutions that are not feasible with BIBDs.
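The load arithmetic behind the "doubling" claim can be sketched as follows. The model below is a generic declustered layout, not necessarily the paper's organization, and the numbers (20 disks, parity groups of 5) are hypothetical:

```python
# Hedged sketch of post-failure read load per surviving disk, comparing
# RAID 5 with a generic declustered layout.  1.0 = fault-free load.
def surviving_disk_load(n_disks, group_size, declustered):
    """Relative read load on the busiest surviving disk after one failure."""
    if not declustered:
        # RAID 5: every read of the failed disk becomes group_size-1 reads,
        # all landing on the same group_size-1 survivors -> their load doubles.
        return 1.0 + 1.0
    # Declustered: the group_size-1 reconstruction reads per failed block
    # are spread uniformly over all n_disks-1 survivors.
    return 1.0 + (group_size - 1) / (n_disks - 1)

print(surviving_disk_load(20, 5, declustered=False))  # 2.0
print(surviving_disk_load(20, 5, declustered=True))   # ~1.21
```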
Recent technological advances have made multimedia on-demand servers feasible. Two challenging tasks in such systems are: (1) satisfying the real-time requirement for continuous delivery of objects at specified bandwidths and (2) efficiently servicing multiple clients simultaneously. To accomplish these tasks and realize economies of scale associated with servicing a large user population, the multimedia server can require a large disk subsystem. Although a single disk is fairly reliable, a large disk farm can have an unacceptably high probability of disk failure. Further, due to the real-time constraint, the reliability and availability requirements of multimedia systems are very stringent. In this paper we present techniques for providing reliability in multidisk video-on-demand storage systems.
A challenging task when designing a video server is to support the performance criteria of its target application, i.e., both its desired number of simultaneous displays and the waiting tolerance of a display. With a multi-disk video server, the placement of data has a significant impact on the overall performance of the system. This study quantifies the tradeoffs associated with alternative organizations of data across multiple disks. We describe a planner that configures a system to: (1) support the performance criteria of an application, and (2) minimize system cost. The experimental results demonstrate the superiority and flexibility of using the planner to configure a system.
On-demand video servers based on hierarchical storage systems offer high-capacity, low-cost video storage. In such a system, video files are stored at the tertiary level and transferred to the secondary level for display. We have studied the architecture and performance of a hierarchical storage system for an on-demand video server. Our objectives are to understand its performance characteristics and to design such a server to meet specific application requirements. The secondary level is characterized by its bandwidth and storage capacity, while the tertiary level is characterized by its number of drives, drive bandwidth, and exchange latency. The performance measure we mainly consider is user delay. We show that, given a certain delay requirement, non-uniform video popularity can reduce the secondary storage requirement tremendously compared with the uniform-popularity case. We also found that secondary bandwidth, secondary storage capacity, and tertiary bandwidth can generally be traded against each other to achieve the same average delay performance; however, there is a limit to such trade-offs. Furthermore, our study indicates that increasing a system resource (e.g., secondary bandwidth, secondary storage capacity, or tertiary bandwidth) does not always lead to better performance. Therefore, balancing the various system resources is very important when designing a server based on a hierarchical storage system. Finally, we provide methodologies for designing a server to a given delay requirement, taking current storage technologies into account.
In this paper, we present two disk failure recovery methods that utilize the inherent characteristics of video streams to ensure that the user-invoked on-the-fly failure recovery process does not impose any significant load on the disk array. Whereas the first approach utilizes the sequential nature of video playback to reduce the overhead of the on-the-fly recovery process, the second exploits the inherent redundancies present in video streams to facilitate efficient failure recovery. For the latter approach, we also present a disk array architecture that enhances the scalability of multimedia servers by: (1) integrating the recovery process with the decompression of video streams, thereby distributing reconstruction across the clients; and (2) supporting graceful degradation in the quality of recovered images as the number of disk failures increases. We compare and contrast our methods with conventional disk failure recovery schemes.
With the advent of multimedia computing, there is an emerging need for systems that can support digital continuous media without requiring special adaptation logic on the part of application programs and that can be implemented on existing network infrastructures. In this paper the system architecture of the Stony Brook video server (SBVS) is described. To guarantee real-time end-to-end performance, SBVS uses a real-time network access protocol, RETHER, that uses existing Ethernet hardware. SBVS tightly integrates the bandwidth guarantee mechanisms between network transport and disk I/O. SBVS's stream-by-stream disk scheduling scheme optimizes the effective disk bandwidth without incurring scheduling overhead every cycle. In addition, SBVS implements multi-resolution video coding to reduce network and I/O bandwidth demands in normal viewing mode, while supporting fast forward/backward without requiring extra bandwidth. To demonstrate the feasibility of the proposed architecture, we have implemented the first prototype, SBVS-1, which can support five concurrent video streams on an EISA PC. To our knowledge, this is the first video server that provides an end-to-end performance guarantee from the server's disks to each user's display over standard Ethernet. In this paper, we describe the implementation details of integrating network and I/O bandwidth guarantee mechanisms, and the performance measurements that drive and/or validate our design decisions.
After a brief history of tape in audio, video, and data storage applications, trends and projections for areal density and cost are given. The enabling technologies required in heads, media, transports, and channels to realize these projections are described. The rate of progress in areal density is projected to continue at historical rates or faster for the foreseeable future. Technology tradeoffs between rotary and linear serpentine tape drive technology are discussed. Continued advancement in magnetoresistive head technology is projected to bring serpentine tape recording systems to an areal density on a par with rotary systems. This paper is a synopsis of the authors' update of the National Storage Industry Consortium (NSIC) roadmap for magnetic tape recording technology, prepared in 1994.
Various numerical and experimental techniques used in the study of the head/tape interface over the last twenty years are discussed. The main equations governing the air bearing formation between the tape and the head, the deflection of the tape under pressure, and the contact behavior between tape asperities and the surface of the magnetic head are presented. Historical and modern methods of solving this system of equations are presented, focusing on finite difference and finite element techniques. The numerical models are verified by experimental measurements using various techniques of interferometry.
A review of the performance requirements for the writing components of high density tape recording heads is given in the context of multichannel linear tape head arrays. A brief tutorial of head operation is given. This is followed by a discussion of pole materials and head array architectures. Technical advances which have led to increases in data rate and recording density are identified.
This paper is intended to give an overview of the signal processing required for digital magnetic tape drives. It discusses modulation codes, write equalization, read equalization, and finally a Viterbi (maximum-likelihood) detector used with a (1-D) equalizer, in contrast to a traditional peak detector.
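The (1-D) partial-response target lends itself to a two-state Viterbi detector, since the channel memory is a single bit. The following is a generic textbook sketch, not the drives' actual implementation; the bit count, noise level, and seed are arbitrary:

```python
import numpy as np

def viterbi_dicode(y):
    """ML detection of bits x[k] in {0,1} sent over the partial-response
    'dicode' channel y[k] = x[k] - x[k-1], assuming x[-1] = 0.
    Trellis state = previous bit, so only two states are needed."""
    n = len(y)
    INF = float("inf")
    cost = [0.0, INF]                      # start in state 0 (x[-1] = 0)
    back = np.zeros((n, 2), dtype=int)     # survivor pointers
    for k in range(n):
        new_cost = [INF, INF]
        for prev in (0, 1):
            for cur in (0, 1):
                # branch label is the noiseless channel output cur - prev
                m = cost[prev] + (y[k] - (cur - prev)) ** 2
                if m < new_cost[cur]:
                    new_cost[cur] = m
                    back[k, cur] = prev
        cost = new_cost
    state = 0 if cost[0] <= cost[1] else 1  # cheaper final state
    bits = []
    for k in range(n - 1, -1, -1):          # trace back the survivor path
        bits.append(state)
        state = back[k, state]
    return bits[::-1]

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 200)
y = np.diff(np.concatenate(([0], x))) + 0.2 * rng.standard_normal(200)
x_hat = viterbi_dicode(y)
print(sum(a != b for a, b in zip(x, x_hat)))  # typically zero at this SNR
```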
Read heads for use with magnetic tape have in the past been inductive magnetic devices. More recently, however, there has been a migration to magnetoresistive flux-sensing devices, primarily using the anisotropic magnetoresistance (MR) effect in thin permalloy stripes. High-end digital tape storage systems currently use arrays of MR read heads for parallel-track readback of magnetic transitions on tape. With rapidly increasing track and bit densities, the higher output afforded by MR sensors and the ability to readily form them into multi-track arrays have made them the device of choice for tape read heads in digital magnetic recording. Both inductive and MR read heads are reviewed here, with discussion of the various types and design issues and an emphasis on MR heads. In the near future it is envisaged that the recently discovered giant magnetoresistance (GMR) phenomenon will be implemented in read heads in both tape and disk applications for very high areal density magnetic storage.
As bit and track densities increase, more complex detection schemes must be devised to combat the nonlinearities inherent in high-density magnetic-recording channels. With an accurate model we can quantify the departure of the channel from linear superposition, leading to optimized detectors providing improved system performance. Two distinct models have been developed. The first attempts to account for many nonlinear effects created by the physical properties of the medium; the other tries to combine all of these effects into a simple nonlinear function. Both models show excellent agreement using parameters derived from experimental signals. Simpler detection schemes result from the second model, while the first, more complex, model ensures preservation of generality. Apart from devising nonlinear detectors, we can also generate pseudo-random signals with enhanced nonlinear distortions, which may be used to test the robustness of various detection schemes. In this paper we present the results of parameter extraction and nonlinear distortion for several magnetoresistive tape-recording channels at a wide range of densities.
As magnetic recording linear densities increase, more bits are packed per inch along the individual tracks. The resulting high linear densities produce read signals that overlap each other; this is called intersymbol interference (ISI). The distortions in read signal shape due to ISI can be corrected by a linear electronic filter on the read side, called a read equalizer. The read equalizer function is not unlike an audio equalizer that enhances audio performance at certain frequencies, or an optical lens that improves visual clarity by correcting for distortions of the human eye. This paper outlines an organized approach to the design of read equalizers used in high-density digital magnetic recording on either tape or disks. This approach is also applicable to optical disk or tape applications, and in general to any digital data communication system.
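One common way to design such a linear equalizer is by least squares: choose FIR taps so that the cascade of channel and equalizer best matches a desired target response. This is an illustrative sketch, not the paper's specific procedure; the channel pulse, target, and tap count are hypothetical numbers:

```python
import numpy as np

def design_equalizer(channel, target, n_taps, delay):
    """Least-squares FIR equalizer: find taps w such that the overall
    response conv(channel, w) best matches `target` placed at `delay`."""
    L = len(channel) + n_taps - 1
    C = np.zeros((L, n_taps))              # convolution matrix: column j
    for j in range(n_taps):                # is the channel shifted by j taps
        C[j:j + len(channel), j] = channel
    d = np.zeros(L)
    d[delay:delay + len(target)] = target  # desired overall response
    w, *_ = np.linalg.lstsq(C, d, rcond=None)
    return w

# Hypothetical ISI-spread readback pulse, equalized toward a
# (1-D)-style two-sample partial-response target.
h = np.array([0.2, 1.0, 0.6, 0.2])
w = design_equalizer(h, target=[1.0, -1.0], n_taps=15, delay=5)
print(np.round(np.convolve(h, w), 2))
```

The equalized response concentrates its energy in the two target samples; residual ISI shrinks as the tap count grows.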
By combining modified hard disk heads/suspensions, advances in thin metal particle media, media stabilization, and servo technologies, 3.5-inch floppy disks have recently jumped from the ubiquitous 1.44 MB floppies to the 100 MB Zip™ disks. This paper details the technological improvements that have gone into increasing the storage capacity of the floppy disk drive, and how the same improvements pushed data throughput to greater than 1 MB/sec and seek times down to 29 ms.
The effect of temperature and humidity on the tribology of the head/disk interface is studied using constant speed drag testing on unlubricated disks and disks lubricated with AM2001. Both lubricated and unlubricated disks were found to survive a larger number of revolutions before failure for intermediate levels of humidity (40% RH). At elevated temperature the lifetime of the disks was found to improve at all humidity levels. The effect of high temperatures and high humidity on the tribology of phosphazene lubricated disks is also studied.
As the spacing between the magnetic transducer and media decreases in hard disk drives, one approaches the regime of constant contact between the head and disk. In this regime, conventional measures of the head/disk interface such as 'takeoff velocity' and 'fly height' become less important. Instead, the 'contact force' between the head and the disk is a more relevant parameter to evaluate the performance and reliability of the interface. In this paper, a new contact force measurement technique that uses the acoustic emission (AE) from the interface is introduced. The contact force is modeled as a series of continuous collisions that cause the slider to vibrate at its resonant frequencies. These vibrations generate an AE signal, the magnitude of which is proportional to the contact force. The Read-Rite tripad slider, which is a contact recording head, is used for the measurements. Some intuitive expectations from contact force measurements are presented as validation of the technique. Specifically, it is shown that contact force decreases with increasing disk velocity, that the contact force varies inversely with the flying height measured on a glass disk, and that the contact force decreases with burnishing of the interface.
Before detecting signals from magnetic storage devices, we must derive a clock signal via a timing-recovery technique. We concentrate on decision-directed timing-recovery schemes that operate at baud rate. Magnetic recording readback signals suffer from intersymbol interference and noise; this study also considers the possibly asymmetric pulse shapes caused by magnetoresistive readback heads. We show performance results based on baud-rate sampling and introduce new techniques to combat pulse asymmetry.
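A classic example of a decision-directed, baud-rate scheme of this kind is the Mueller and Müller timing-error detector. The abstract does not name a specific detector, so the following is only an illustrative sketch of the idea:

```python
def mm_timing_errors(y, a):
    """Mueller & Müller baud-rate timing-error detector:
    e[k] = a[k-1]*y[k] - a[k]*y[k-1], one estimate per baud interval.
    y: baud-spaced samples; a: the corresponding +/-1 symbol decisions.
    The sign of e[k] indicates the direction of the sampling-phase error."""
    return [a[k - 1] * y[k] - a[k] * y[k - 1] for k in range(1, len(y))]

# On perfectly sampled, ISI-free +/-1 data (y identical to the decisions)
# the detector output is zero, so the timing loop holds its phase.
a = [1, -1, -1, 1, 1, -1]
print(mm_timing_errors(a, a))  # [0, 0, 0, 0, 0]
```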
A novel, very high performance digital optical tape recorder is described. Linear tape motion at 4.2 meters per second and simultaneous writing of about 80 parallel bit tracks, at a density of three bits per micron per track, yield a raw data rate of 1,000 megabits per second, sufficient for a user data rate of 100 megabytes per second with error correction. A track-to-track spacing of one micron gives a data capacity of one terabyte (1,000 GB) in a single '3480'-style tape cartridge shell. A single beam from a frequency-doubled, laser-diode-pumped, solid-state (2X-LDP-SS) laser is split into a multiplicity of like beams, each of which is then independently modulated at 12.5 MHz for recording.
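The quoted aggregate rate follows directly from the per-track numbers:

```python
# Check of the quoted figures: 80 tracks, each written at
# 4.2 m/s tape speed x 3 bits/µm linear density.
tracks = 80
speed_um_per_s = 4.2e6            # 4.2 m/s expressed in µm/s
density_bits_per_um = 3
raw_mbit_s = tracks * speed_um_per_s * density_bits_per_um / 1e6
print(raw_mbit_s)                 # 1008.0 Mbit/s, i.e. ~1,000 Mbit/s as stated
```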
As data storage densities continue to grow, high-performance tape transports are needed for fine positioning and transport of the media over the read and write heads. This paper presents a survey of digital tape transport servo systems for four popular types of drives: (1) reel-to-reel drives; (2) single-cartridge drives; (3) helical scanning drives; and (4) belt-driven drives. Concepts behind the velocity, tension, and tracking control employed in production drives are discussed, and references to pertinent research are cited. Techniques used to measure error in the parameters under servo control are presented.