Among the many advances in computer technology in recent years, one significant development has been the use of networks to share resources among multiple computer systems. Despite considerable successes on this front, limitations have become apparent in the current generation of network-based resource-sharing technology. The recent introduction of gigabit-per-second local area networking and high-performance storage device technologies now provides the basis for a new class of networked system. Our recent efforts have involved realizing a system that supports resource sharing among computer systems in a very high performance network-based environment. Developments range from the integration of HIPPI-based networked systems, to new protocols for data transport, to the incorporation of the UniTree system as the coordinating entity for system-wide storage and file management. This paper presents an overview of considerations for this style of network-based resource sharing.
The grand challenges of science and industry that are driving computing and communications have created corresponding challenges in information storage and retrieval. An industry-led collaborative project has been organized to investigate technology for storage systems that will be the future repositories of national information assets. Industry participants are IBM Federal Systems Company, Ampex Recording Systems Corporation, General Atomics DISCOS Division, IBM ADSTAR, Maximum Strategy Corporation, Network Systems Corporation, and Zitel Corporation. Industry members of the collaborative project are funding their own participation. Lawrence Livermore National Laboratory, through its National Energy Research Supercomputer Center (NERSC), will participate in the project as the operational site and provider of applications. The expected result is the creation of a National Storage Laboratory to serve as a prototype and demonstration facility. It is expected that this prototype will represent a significant advance in the technology for distributed storage systems capable of handling gigabyte-class files at gigabit-per-second data rates. Specifically, the collaboration expects to make significant advances in hardware, software, and systems technology in four areas of need: (1) network-attached high-performance storage; (2) multiple, dynamic, distributed storage hierarchies; (3) layered access to storage system services; and (4) storage system management.
As the fileserver and mass storage marketing manager for Convex, I have talked with more than 150 customers about their fileserving and mass storage needs. At the same time, Convex has gained considerable experience and feedback from customers who are using these new technologies. This paper summarizes some of the trends, technologies, and problems that are emerging, along with the experience we have gained in supporting these customers.
Mass memory systems based on rewritable optical disk media are expected to play an important role in meeting the data system requirements of future NASA spaceflight missions. NASA has established a program to develop a high-performance (high-rate, large-capacity) optical disk recorder intended for use aboard unmanned Earth-orbiting platforms. An expandable, adaptable system concept is proposed based on disk drive modules and a modular controller. Drive performance goals are 10 gigabyte capacity, 300 megabit per second transfer rate, a corrected bit error rate of 10^-12, and 150 millisecond access time. This performance is achieved by writing eight data tracks in parallel on both sides of a 14 inch optical disk using two independent heads. System goals are 160 gigabyte capacity, a 1.2 gigabit per second data rate with concurrent I/O, 250 millisecond access time, and a two to five year operating life on orbit. The system can be configured to meet various applications; this versatility is provided by the controller, which handles command processing, multiple drive synchronization, data buffering, basic file management, error processing, and status reporting. Technology developments, design concepts, current status (including a computer model of the system and a controller breadboard), and future plans for the drive and controller are presented.
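As a rough check of how the stated system goals scale from the drive goals, consider the arithmetic below. The performance figures are taken from the abstract; the derived module counts (16 drives, 4 parallel streams) are our illustrative reading, not a configuration stated in the paper.

    # Back-of-the-envelope scaling from drive goals to system goals.
    DRIVE_CAPACITY_GB = 10       # per-drive capacity goal
    DRIVE_RATE_MBIT_S = 300      # per-drive transfer rate goal
    SYSTEM_CAPACITY_GB = 160     # system capacity goal
    SYSTEM_RATE_MBIT_S = 1200    # 1.2 gigabit/s system goal with concurrent I/O

    drives_for_capacity = SYSTEM_CAPACITY_GB / DRIVE_CAPACITY_GB   # -> 16 drives
    concurrent_streams = SYSTEM_RATE_MBIT_S / DRIVE_RATE_MBIT_S    # -> 4 parallel streams
    print(drives_for_capacity, concurrent_streams)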
Multimedia and image-oriented use of computers is placing new requirements on mass storage. In attempting to address these needs, holographic storage devices are being considered because of their inherently image-based nature. Holographic storage is not a single storage device but an entire class of storage. This paper lists the distinguishing features of this storage class and presents a taxonomy of its various forms. These forms are then assessed with respect to emerging applications such as high-resolution image databases, video storage, scientific data capture, interactive animation, and multimedia library distribution.
The ability to access very large data sets in seconds rather than hours or days is key to an efficient analysis procedure. We have developed a high-performance robotic storage and retrieval system, plus an innovative data management technique, which provides this capability and is now in operation at user sites. Our EMASS™ (E-Systems modular automated storage system) achieves this high performance with a combination of state-of-the-art technologies: helical-scan digital recorders, each having a sustained transfer rate of 15 megabytes/second and a burst transfer rate of 20 megabytes/second, are used to store data, and many recorders can be accessed concurrently to achieve the desired transfer rate. Commercially available, out-of-box D2 high-density magnetic tape is used for archiving data; this 19 mm tape is available in three cassette sizes with capacities of 25, 75, and 165 gigabytes, respectively. Cassettes are stored in modular robotic systems whose capacities can be expanded from a few terabytes to thousands of terabytes. EMASS software controls all aspects of data management, including commanding the robots to move cassettes between their storage locations and the recorders. Performance is matched to each user's needs by tailoring the number of recorders, the size and configuration of the robotic archive, and the number of robots for the user's environment.
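To make the tailoring concrete, a minimal sizing sketch follows. The sustained rate and cassette capacities come from the abstract; the target workload numbers are hypothetical.

    import math

    SUSTAINED_MB_S = 15                      # per-recorder sustained rate (from abstract)
    CASSETTE_GB = {"small": 25, "medium": 75, "large": 165}

    target_rate_mb_s = 60                    # hypothetical aggregate requirement
    archive_tb = 500                         # hypothetical archive size

    recorders = math.ceil(target_rate_mb_s / SUSTAINED_MB_S)          # -> 4 recorders
    cassettes = math.ceil(archive_tb * 1000 / CASSETTE_GB["large"])   # -> 3031 cassettes
    print(recorders, cassettes)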
Seong Ki Mun, Matthew T. Freedman M.D., Anthony Gelish, Robert E. de Treville, Monet R. Sheehy, Mark Hansen, Mac Hill, Elisabeth Zacharia, Michael J. Sullivan, et al.
An image management and communications (IMAC) network, also known as a picture archiving and communication system (PACS), consists of (1) digital image acquisition, (2) image review stations, (3) image storage devices and image reading workstations, and (4) communication capability. When these subsystems are integrated over a high-speed communication technology, the possibilities for improving the timeliness and quality of diagnostic services within a hospital or at remote clinical sites are numerous. A teleradiology system uses essentially the same hardware configuration together with a long-distance communication capability. Functional characteristics of the components are highlighted. Many medical imaging systems are already in digital form; these digital images constitute approximately 30% of the total volume of images produced in a radiology department. The remaining 70% include conventional x-ray films of the chest, skeleton, abdomen, and GI tract. Unless a method is developed for handling these conventional film images, global improvement in productivity in image management and radiology service throughout a hospital cannot be achieved. Currently, there are two methods of producing digital information representing these conventional analog images for IMAC: film digitizers that scan the conventional films, and computed radiography (CR), which captures x-ray images using a storage phosphor plate that is subsequently scanned by a laser beam.
Among the applications for high-bandwidth systems are medical picture archiving and communication systems (PACS). Viewed in the light of developments anticipated ten years ago, when these systems were first discussed, progress has been disappointingly slow. However, with the recognition that teleradiology, digital archives, and computed radiography can be regarded as PAC subsystems, and that systems confined to a single modality can be regarded as mini-PACS, it becomes clear that development of PAC systems has made remarkable progress. Growing from a modest size today, the U.S. market for PAC systems and subsystems is likely to exceed $600 million by the end of the decade, a market of the same magnitude as those for CT, MRI, and catheterization lab equipment. This paper discusses the forces that have stimulated the growth of PAC systems (among them, the opportunity to provide better service to patients and referring physicians, the chance to expand the effective service area, and a possible solution to the ubiquitous problem of lost films) as well as the impediments that have retarded growth (among them, cost and technical limitations, especially of radiologists' workstations). A review of these forces allows prediction of likely developments in the 1990s.
This paper describes a project involving the creation of an electronic archive of x-ray images and the development of geographically dispersed workstations that access the image store, retrieve the image files over the Internet, and allow viewers to display, manipulate, enhance, and read the images.
Since 1983, the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) have been engaged in developing standards related to medical imaging. This alliance of users and manufacturers was formed to meet the needs of the medical imaging community as its use of digital imaging technology increased. The development of electronic picture archiving and communications systems (PACS), which could connect a number of medical imaging devices together in a network, led to the need for a standard interface and data structure for use on imaging equipment. Since medical image files tend to be very large and include much text information along with the image, the need for a fast, flexible, and extensible standard was quickly established. The ACR-NEMA Digital Imaging and Communications Standards Committee developed a standard that met these needs. The standard (ACR-NEMA 300-1988) was first published in 1985, revised in 1988, and is increasingly available from equipment manufacturers. The current work of the ACR-NEMA Committee has been to extend the standard to incorporate direct network connection features, building on standards work done by the International Organization for Standardization in its Open Systems Interconnection series. This new standard, called Digital Imaging and Communications in Medicine (DICOM), follows an object-oriented design methodology and makes use of as many existing internationally accepted standards as possible. This paper gives a brief overview of the requirements for communications standards in medical imaging, a history of the ACR-NEMA effort and what it has produced, and a description of the DICOM standard.
Transmission of medical image data is subject to stringent requirements because of the large size of the images that are used. An appropriate compression algorithm could greatly reduce archival requirements and the time and cost of transmission. Investigation into compression algorithms has shown that the lossless variety gives a compression of about three to one. An alternative algorithm, such as a full-frame discrete cosine transform followed by lossless encoding, yields more compression but incurs a cost in terms of a deterioration of image quality. We wish to determine the threshold of compression, using this algorithm, beyond which significant deterioration of diagnostic quality is observed. Contrast-detail analysis has been used to describe aspects of the total imaging-observer performance of computed tomography, digital radiography, and standard film-screen systems. A radiographic phantom made of plastic and containing holes of varying diameter and depth is used to create appropriate images. These images are used to conduct a threshold visibility experiment with the participation of several human observers. The result is expressed in a contrast-detail curve. We report on the application of contrast-detail analysis to phosphor plate or computed radiography images in order to document the deterioration caused by compression.
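A minimal sketch of the lossy pipeline under study appears below: a full-frame discrete cosine transform, coarse quantization (the lossy step whose threshold is sought), then lossless coding. zlib stands in for the lossless encoder and the test image is synthetic; neither is the study's actual implementation.

    import zlib
    import numpy as np
    from scipy.fftpack import dct

    def compress(image, q_step):
        """Full-frame 2D DCT, quantize (the lossy step), then lossless coding."""
        coeffs = dct(dct(image, axis=0, norm="ortho"), axis=1, norm="ortho")
        quantized = np.round(coeffs / q_step).astype(np.int32)
        return zlib.compress(quantized.tobytes())

    image = np.random.default_rng(0).integers(0, 1024, (256, 256)).astype(float)
    raw_bytes = image.astype(np.uint16).nbytes
    for q in (1.0, 4.0, 16.0):     # coarser steps give more compression, less fidelity
        print(q, round(raw_bytes / len(compress(image, q)), 1))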
A number of approaches have been proposed for supporting high-bandwidth, time-dependent multimedia data in a general-purpose computing environment. Much of this work assumes the availability of ample resources such as CPU performance and bus, I/O, and communication bandwidth. However, many multimedia applications exhibit large variations in instantaneous data presentation requirements (e.g., a dynamic range on the order of 100,000). A statistical scheduling approach effectively smooths these variations and therefore makes more applications viable. The result is more efficient use of available bandwidth and the enabling of applications with large short-term bandwidth requirements, such as simultaneous video and still-image retrieval. Statistical scheduling of multimedia traffic relies on accurate characterization or guarantee of channel bandwidth and delay. If guaranteed channel characteristics are not upheld due to spurious channel overload, buffer overflow and underflow can occur at the destination; the result is the loss of established source-destination synchronization and the introduction of intermedia skew. In this paper we present an overview of a proposed synchronization mechanism to limit the effects of such anomalous behavior. The mechanism monitors buffer levels on a per-frame basis to detect impending underflow and overflow and regulates the destination playout rate; intermedia skew is controlled by a similar control algorithm. This mechanism is used in conjunction with a statistical source scheduling approach to provide an overall multimedia transmission and resynchronization system supporting graceful service degradation.
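The buffer-watermark idea can be sketched in a few lines; the thresholds and rate adjustments below are illustrative, not the paper's actual control law.

    def regulate_playout(buffer_level, low=4, high=12, nominal_fps=30.0):
        """Check the buffer once per frame and steer the playout rate."""
        if buffer_level <= low:        # impending underflow: slow playout
            return nominal_fps * 0.9
        if buffer_level >= high:       # impending overflow: speed playout
            return nominal_fps * 1.1
        return nominal_fps             # within watermarks: nominal rate

    for level in (2, 8, 14):
        print(level, regulate_playout(level))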
The integration of multimedia into heterogeneous computer applications exposes the need for context-sensitive multimedia objects that can adapt to the fluctuating resources and needs of the application. The typical model of a multimedia application is to give the application access to a library of static multimedia objects, with system management of the presentation limited to synchronization. We contend that multimedia objects should be dynamic and adaptable to their application environments. At the very least, multimedia objects should be scalable in terms of resource usage, e.g., the use of screen space, network bandwidth, and computational resources. Multimedia objects should also be able to alter their modes of representation in response to the changing needs of the user. Finally, multimedia objects should be capable of altering their content to fit the preferences of the user and the context of the presentation. We present a multimedia distribution system called O, which enables the creation of dynamic multimedia objects for distribution over a network. These objects are "self-aware," in that they can be programmed with the behaviors necessary to respond to a changing presentation environment. O is featured as the multimedia distribution tool in both a personalized information retrieval application and a mapping application.
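As a sketch of the kind of interface this argues for, a scalable object might look like the following; the class and method names are ours, not O's actual API.

    class AdaptiveMediaObject:
        """A media object that degrades gracefully as resources shrink."""

        def __init__(self, representations):
            # representations: {mode_name: required_bandwidth_kbps}
            self.representations = representations
            self.mode = max(representations, key=representations.get)

        def on_environment_change(self, available_kbps):
            """Switch to the richest representation the bandwidth allows."""
            feasible = {m: b for m, b in self.representations.items()
                        if b <= available_kbps}
            self.mode = max(feasible, key=feasible.get) if feasible else None

    obj = AdaptiveMediaObject({"video": 1500, "slideshow": 200, "text": 10})
    obj.on_environment_change(300)
    print(obj.mode)    # -> "slideshow"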
The Library of Congress's National Library Service for the Blind and Physically Handicapped serves a readership of about 750,000 patrons with "talking books" and magazines on specially formatted cassettes and flexible phonograph records. Looking toward the future, our technology assessment and research program has focused on digital systems as the most likely successor to today's methods. Digital technology offers unprecedented opportunities to explore new distribution methods, such as central or regional archiving with network connectivity. It also allows for experiments in new patron interfacing using innovative strategies such as speech recognition with voice prompting. In this paper we present a brief description of the existing service, a proposed configuration for the next generation of talking-book machines, and a patron profile. We discuss the challenges and opportunities that would be presented by the experimental introduction of multimedia digital technology to our unique patron population. We solicit comments and recommendations from the research community.
Timely delivery of multimedia data is one of the key requirements in a distributed multimedia system. In a token ring network, this can be achieved through proper control of channel access. FDDI uses a timed-token MAC protocol to control the token rotation time. During network initialization, a protocol parameter called the target token rotation time (TTRT) is determined; its value indicates the expected token rotation time. Each node is assigned a fraction of the TTRT, known as its synchronous bandwidth, for transmitting its synchronous messages, which may be deadline constrained. In this paper, we study methods for setting protocol parameters such as the TTRT and the synchronous bandwidths, with the goal of satisfying message deadlines. In particular, we discuss the use of the normalized proportional scheme for allocating synchronous bandwidth. We derive a condition on the utilization factor for ensuring timely delivery of all synchronous messages, and show that 33% utilization is the tight lower bound, regardless of the number of nodes, message lengths, periods, etc. Based on this scheme, we address the proper selection of the TTRT and the trade-off between buffer size and the capability of meeting message deadlines.
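Under the normalized proportional scheme, a node with a synchronous stream of C_i time units of traffic every period P_i has utilization U_i = C_i / P_i and receives synchronous bandwidth H_i = (U_i / U) * (TTRT - tau), where U is the total utilization and tau the per-rotation protocol overhead. A minimal sketch with illustrative numbers:

    def allocate(streams, ttrt, overhead):
        """Normalized proportional synchronous-bandwidth allocation.
        streams: list of (C_i, P_i) pairs, all in the same time unit."""
        total_u = sum(c / p for c, p in streams)
        usable = ttrt - overhead
        return [(c / p) / total_u * usable for c, p in streams]

    streams = [(2.0, 40.0), (1.0, 20.0), (3.0, 60.0)]   # (C_i, P_i) in ms
    ttrt, overhead = 8.0, 0.5                           # ms, illustrative

    total_u = sum(c / p for c, p in streams)            # 0.15, i.e. 15%
    print([round(h, 3) for h in allocate(streams, ttrt, overhead)])
    print("within the 33% bound" if total_u <= 1 / 3 else "no deadline guarantee")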
In this paper, we present a management system for a multi-media information network. The system consists of a main control system for monitoring the behavior of the terminal devices, a multi-media database management system to maintain the relationships among data files, a script generating system that produces script files to control the behavior of the terminal devices, and a preprocessing system to compose images with Chinese fonts. The information system has been evaluated through a survey of 562 users. The statistical results show that the information network is very popular and that the use of the management system has improved the performance of the information network.
We have developed a set of interactive tools for collecting, annotating, and analyzing group communication sessions. These tools have been used to model group meetings enacted on our computer-based video conferencing system as well as single-location meetings. The purpose of this work is to support the analysis of group meetings held over computer-based video conferencing systems. The resulting analysis can be used for various purposes, including creating meeting summaries, identifying communication patterns, facilitating group communication, and suggesting agendas for follow-on meetings. The current system is used for off-line annotation and analysis of communication sessions involving various parallel media tracks, including the video and audio components for each participant, the text transcription of the meeting, and the various documents and media forms referenced during the session. In this paper we review these tools and describe an architecture for employing these techniques to give real-time feedback to a communication session. Real-time feedback could include suggestions for materials and individuals to include in the current meeting, changes of topic, and problem-solving strategies.
High-Bandwidth Applications in Science and Engineering
Presenting engineering and scientific information in the courtroom is challenging. Quite often the data are voluminous and therefore difficult to digest by engineering experts, let alone a lay judge, lawyer, or jury. This paper discusses computer visualization techniques designed to provide the court with methods of communicating data in visual formats, allowing a more accurate understanding of complicated concepts and results. Examples are presented that include accident reconstructions, technical concept illustration, and engineering data visualization. Also presented is the design of an electronic courtroom that facilitates the display and communication of information to the courtroom.
Today's powerful computer workstations enable multimodal nondestructive testing (NDT) to be used for such practical applications as detecting and evaluating defects in structures. Radiography (x-ray) and ultrasonics (UT) are examples of two different nondestructive tests, or modalities, which measure characteristics of materials and structures without affecting them. Traditionally, NDT produced an analog result, such as an image on x-ray film, which was difficult to review, interpret, and store. Newer, more powerful digital NDT techniques, such as industrial x-ray computed tomography (CT), produce digital output that is readily amenable to computerized analysis and storage. Computers are now available with sufficient memory and performance to support interactive processing of digital NDT data sets, which can easily exceed 100 megabytes, and numerous data sets can be stored on small, inexpensive tape cassettes. Failure Analysis Associates, Inc. (FaAA) has developed software-based techniques for using NDT to identify defects in structures. These techniques are also used to visualize the NDT data and to analyze the structural integrity of parts containing NDT-detected defects. FaAA's approach employs state-of-the-art scientific visualization and computer workstation technology. For some types of materials, such as advanced composites, data from different NDT modalities are needed to locate different types of defects. Applications of this technology include assessment of impact damage in composite aerospace structures, investigation of failed assemblies, and evaluation of metallic casting defects.
Virgo is an experiment carried out by INFN (Italy) and CNRS (France) to detect gravitational waves using a long-baseline (3 x 3 km) interferometer. It requires a strong integration process among the various subsystems for the functional coupling of the real-time control, database, communication, and visualization aspects of the project. In this paper we present the system architecture and illustrate some of the subsystems, with particular emphasis on the data retrieval and processing used in the monitoring and data storage subsystems.
This paper describes an optical music recognition system composed of a database and three interdependent processes: a recognizer, an editor, and a learner. Given a scanned image of a musical score, the recognizer locates, separates, and classifies symbols into musically meaningful categories. This classification is based on the k-nearest-neighbor method, using a subset of the database that contains features of symbols classified in previous recognition sessions. Output of the recognizer is corrected by a musically trained human operator using a music notation editor, which provides both visual and high-quality audio feedback. Editorial corrections made by the operator are passed to the learner, which then adds the newly acquired data to the database. The learner's main task, however, involves selecting a subset of the database and reweighting the importance of the features to improve accuracy and speed in subsequent sessions. Good preliminary results have been obtained with everything from professionally engraved scores to handwritten manuscripts.
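A minimal sketch of this classification step follows, with illustrative features and weights; the system's actual feature set and distance metric are not specified here.

    import numpy as np
    from collections import Counter

    def classify(sample, database, labels, weights, k=3):
        """Weighted k-nearest-neighbor vote over the feature database."""
        diffs = (database - sample) * weights       # learner reweights features
        dists = np.sqrt((diffs ** 2).sum(axis=1))   # weighted Euclidean distance
        nearest = np.argsort(dists)[:k]
        return Counter(labels[i] for i in nearest).most_common(1)[0][0]

    database = np.array([[1.0, 0.2], [0.9, 0.3], [0.1, 0.8], [0.2, 0.9]])
    labels = ["note-head", "note-head", "rest", "rest"]
    weights = np.array([1.0, 1.0])                  # adjusted between sessions
    print(classify(np.array([0.15, 0.85]), database, labels, weights))  # -> "rest"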
The Center for Art and Media Technology is dedicated to art and its relationship to new media. The Center supports music as well as graphic art, and it will also house museums. The Center will be fully operational by the middle of 1996. The audio network will interconnect five recording studios and a large theater with three control rooms; with the additional facilities, the total reaches 40 interconnected rooms. In quality and versatility, the network can be compared, to some extent, to that of a broadcast building. Traditional networking techniques involve many kilometers of high-quality audio cable and bulky automated patch-bays, yet we want even more freedom in the way the rooms are interconnected. Digital audio and computer network technology are promising. Although digital audio technology is spreading, the size of totally digital systems is still limited. Fiber optics and large-capacity optical disks offer attractive alternatives to traditional techniques (cabling, multitrack recorders, sound archives, routing). The digital audio standards are evolving from point-to-point communication to network communication. A 1 Gbit/s network could be the backbone of a solution.
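A rough capacity check suggests why 1 Gbit/s is attractive as an audio backbone; the sample format below is our assumption of typical studio-quality audio, not a figure from the paper.

    SAMPLE_RATE_HZ = 48_000
    BITS_PER_SAMPLE = 24
    LINK_BITS_S = 1_000_000_000

    channel_bits_s = SAMPLE_RATE_HZ * BITS_PER_SAMPLE   # 1.152 Mbit/s per channel
    channels = LINK_BITS_S // channel_bits_s            # -> 868 channels, before overhead
    print(channels)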
Digital interactive media augment interactive computing with video, audio, computer graphics, and text, allowing multimedia presentations to be individually and dynamically tailored to the user. Multimedia, and particularly continuous media, pose interesting problems for system designers, including latency and synchronization. These problems are especially evident when multimedia data are remote and must be accessed via networks. Latency and synchronization issues are discussed, and an integrated system, Tactus, is described. Tactus facilitates the implementation of interactive multimedia computer programs by managing latency and synchronization in the framework of an object-oriented graphical user interface toolkit.
The NeXT Computer system software release 3.0 includes a new Mach device driver that controls the sound playback and recording hardware. Also new are several software objects in the sound kit that provide direct access to the driver. Included are objects that represent the hardware devices themselves, and objects that implement streams of sound data flowing to and from the devices. The real-time mixing and amplitude scaling features of the new driver are brought out to the programmer through these objects. This paper describes the new objects and provides a short code example.
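The sound kit itself is Objective-C; the Python pseudocode below only sketches the pattern the abstract describes, a device object plus stream objects carrying amplitude-scaled sound data to it. All names here are hypothetical stand-ins, not the actual sound kit API.

    class SoundDevice:
        """Stands in for the object representing the playback hardware."""
        def __init__(self):
            self.streams = []

        def mix_and_play(self):
            # The real driver mixes all active streams in real time.
            for stream in self.streams:
                stream.emit()

    class SoundStream:
        """Stands in for a stream of sound data flowing to the device."""
        def __init__(self, device, gain=1.0):
            self.buffers, self.gain = [], gain     # gain = amplitude scaling
            device.streams.append(self)

        def enqueue(self, samples):
            self.buffers.append([s * self.gain for s in samples])

        def emit(self):
            while self.buffers:
                self.buffers.pop(0)    # hand each scaled buffer to hardware

    device = SoundDevice()
    stream = SoundStream(device, gain=0.5)    # play at half amplitude
    stream.enqueue([0.1, 0.2, 0.3])
    device.mix_and_play()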
American education is on the verge of a revolution. The revolution is focused on involving students in real-world tasks and in practicing communities, both geographical and topical, where these tasks are done. It is characterized by a new recognition of the social and community nature of learning, and it requires providing students and teachers with access to the people, information, and computers that form the rich variety of existing and emerging education resources. We describe the manner in which current and next-generation communications technologies support this approach and can provide the ubiquitous connectivity it implies. We describe the key characteristics of the needed technological innovations.
Project-enhanced science learning (PESL) provides students with opportunities for "cognitive apprenticeships" in authentic scientific inquiry, using computers for data collection and analysis. Student teams work on projects with teacher guidance to develop and apply their understanding of science concepts and skills. We are applying advanced computing and communications technologies to augment and transform PESL at a distance (beyond the boundaries of the individual school), which today is limited to asynchronous, text-only networking and is unsuitable for collaborative science learning involving shared access to multimedia resources such as data, graphs, tables, pictures, and audio-video communication. Our work creates user technology (a Collaborative Science Workbench providing PESL design support and shared synchronous document views, program, and data access; a Science Learning Resource Directory for easy access to resources, including two-way video links to collaborators, mentors, museum exhibits, and media-rich resources such as scientific visualization graphics) and refines enabling technologies (audiovisual and shared-data telephony, networking) for this PESL niche. We characterize participation scenarios for using these resources and discuss national networked access to science education expertise.
Narrow bandwidth networks have provided limited capabilities for the transmission of expressive communication that is present in face-to-face communication, e.g., facial expressions and gestures, as well as affective information that is present in audio/visual media, e.g., animated graphics and stereophonic music. An understanding of the structure and functions of expressive communication in face-to-face communication and audio/visual media can inform the development of new multi-media applications in broadband networks. At the same time, a review of existing knowledge suggests that there is a need for considerable research if the rich potential of expressive communication in these new settings is to be fully developed.
While research and experience show many advantages to incorporating computer technology into secondary school mathematics instruction, less than 5 percent of the nation's teachers are actively using computers in their classrooms. This is the case even though mathematics teachers in grades 7-12 are often familiar with computer technology and have computers available to them in their schools. The implementation bottleneck is in-service teacher training, and there are few models of effective implementation available for teachers to emulate. Stevens Institute of Technology has been active since 1988 in research and development efforts to bring computers into classroom use. We have found that teachers need to see examples of classroom experience with hardware and software, and they need assistance as they experiment with applications of software and the development of lesson plans. High-bandwidth technology can greatly facilitate teacher training in this area through transmission of video documentaries, software discussions, teleconferencing, peer interactions, classroom observations, etc. We discuss the experience that Stevens has had with face-to-face teacher training as well as with satellite-based teleconferencing using one-way video and two-way audio. Included are reviews of analyses of this project by researchers from the Educational Testing Service, Princeton University, and the Bank Street School of Education.
The golden opportunities represented by multimedia systems have been recognized by many. The risk and cost involved in developing the products and the markets have led to a bonanza of unlikely consortia of strange bedfellows. The premier promoter of personal computing systems, Apple Computer, has joined forces with the dominant supplier of corporate computing, IBM, to form a multimedia technology joint venture called Kaleida. The consumer electronics world's leading promoter of free trade, Sony, has joined forces with the leader of Europe's protectionist companies, Philips, to create a consumer multimedia standard called CD-I. While still paying lip service to CD-I, Sony and Philips now appear to be going their separate ways. The software world's most profitable and fastest-growing firm, Microsoft, has entered into alliances with each and every multimedia competitor, creating a mishmash of product classes and de facto standards. The battle for multimedia standards is being fought on all fronts: on standards committees, in corporate strategic marketing meetings, within industry associations, in computer retail stores, and on the streets. Early attempts to set proprietary de facto standards were fought back, but the proprietary efforts continue with renewed vigor. Standards committees were, as always, slow to define specifications, but the official standards are now known and being implemented ... but the proprietary efforts continue with renewed vigor. Ultimately, the buyers will decide -- like it or not. Success by the efforts to establish proprietary de facto standards could prove to be a boon to the highly creative and inventive U.S. firms, but at the cost of higher prices for consumers and slower market growth. Success by the official standards could bring lower prices for consumers and faster market growth, but force the higher-wage, higher-overhead U.S. firms to compete on a level playing field. As is always the case, you can't have your cake and eat it too.