The emergence of high-speed networking for multimedia will have the effect of turning the computer screen into a window on a very large information space. As this information space increases in size and complexity, providing users with easy and intuitive means of accessing information will become increasingly important. Providing access to large amounts of text has been the focus of work for hundreds of years and has resulted in the evolution of a set of standards, from the Dewey Decimal System for libraries to the recently proposed ANSI standards for representing information on-line: KIF (Knowledge Interchange Format) and CGs (Conceptual Graphs). Certain problems remain unsolved by these efforts, though: how to let users know the contents of the information space, so that they can decide whether or not they want to search it in the first place; how to facilitate browsing; and, more specifically, how to facilitate visual browsing. These issues are particularly important for users in educational contexts and have been the focus of much of our recent work. In this paper we discuss some of the solutions we have prototyped: specifically, visual means, visual browsers, and visual definitional sequences.
Digital video is now available to the designers of interactive computer applications, and the integration of video with existing user-interface techniques is being explored. Examples of this integration include video widgets: user-interface components rendered with video information. The main issue in implementing video widgets is how to combine real-time video processing, such as chroma-keying, with existing user-interface software. This paper presents three implementation strategies and compares them in terms of extensibility, portability, and ease of programming.
With the release of IBM's preemptive multitasking OS/2 2.1 operating system, a 32-bit multimedia environment supporting digital video and audio is now available for widespread commercial use. While digital audio architectures and subsystems are becoming common, digital video architectures are relatively new, and the OS/2 2.1 implementation is one of the first to come standard with an operating system. This paper gives an overview of the general architecture of OS/2 2.1 multimedia, followed by a focused description of the architectural aspects unique to digital video.
In this paper, the design model and implementation of an interactive PC system for creation and presentation of continuous media (CM) streams will be described. An object-oriented approach was used for defining the high-level abstractions of CM streams as well as the abstractions for grouping related streams. The synchronization of multiple streams was performed by another abstraction known as the logical time system. The system was implemented in C++ on a personal computer running under MS-DOS and is toolkit independent. The main contribution of this work is the integration of different abstractions in a single object-oriented model which allows flexible interactive manipulation of CM streams.
Source-based dithering is a set of techniques designed to maximize the performance of real-time networked digital video systems that encode and decode video entirely in software. Usually frame-grabber hardware presents frames in a 24-bit-per-pixel (bpp) format. However, most hosts are equipped only with one- or eight-bit-deep displays, so the color depth of the video must be reduced at some point. If the encoder reduces the color depth, the bandwidth required to carry the video on the network is lowered by a factor of 24 or 3, respectively, and the computational load is lightened on the receiving hosts. The color depth reduction algorithm must be efficient, since the resulting frame rate, and thus the degree to which the illusion of motion is preserved, depends on how quickly a pixel can be processed. We use dithering algorithms chosen for efficiency and a contrast enhancement algorithm to improve image quality.
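The abstract does not name the specific dithering algorithms used; a common efficiency-oriented choice is ordered (Bayer) dithering, which needs no per-pixel error state. The sketch below is an illustration of that family under assumed parameters (a 4x4 Bayer matrix, one 8-bit channel reduced to 4 levels), not the paper's actual method.

```python
# Ordered dithering sketch: quantize an 8-bit grayscale channel to a few
# levels using a 4x4 Bayer threshold matrix. Each pixel is processed in
# constant time with no inter-pixel state, matching the abstract's
# requirement that per-pixel cost dominate the achievable frame rate.
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_channel(pixels, width, levels=4):
    """pixels: flat list of 0-255 values; returns quantized values."""
    step = 255.0 / (levels - 1)
    out = []
    for i, p in enumerate(pixels):
        x, y = i % width, i // width
        # position-dependent threshold in [0, 1)
        t = (BAYER4[y % 4][x % 4] + 0.5) / 16.0
        q = int(p / step + t)          # round up or down per threshold
        out.append(min(levels - 1, q) * int(step))
    return out

# A flat mid-gray patch comes out as a spatial mix of two quantized
# levels whose average approximates the original intensity.
row = dither_channel([60] * 16, width=4)
```

An encoder-side reduction like this is what lowers the network bandwidth by the factors quoted above: the dithered output needs only log2(levels) bits per channel instead of 8.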
This paper describes how the painting of decompressed digital video data has been altered from the original highly efficient rectangle to a completely programmable shape: any pixel of the output shape can be painted with any pixel from the decompressed picture. In addition, because the system paints all pixels of the output shape each frame, the actual shape can be moved and changed while decompression proceeds. Shape files are precompiled into a run-length format (described in the paper), and this format is manipulated into a screen-pixel-based data structure before the movie is played. There is no limit (apart from available processor power) on the complexity of the shape: it can be disconnected, concave, etc., though it can only contain pixels from the current decompressed picture. Using a 25 MHz ARM3 (Acorn Archimedes) or ARM610 processor, the system is capable of decompressing a 25-frame-per-second file and painting it onto a 3D model of an acorn (two parabolic surfaces for the `nut' and `cup', plus two cylinders for the `stalk') with the movie wrapped round each separate surface, the whole thing rotating as the movie is played.
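The paper's run-length shape format is not specified in this abstract; the following sketch shows the general idea of precompiling a shape mask into runs and painting through them. The identity mapping from shape pixels to source pixels is a simplification (the real system allows an arbitrary mapping), and all names here are illustrative.

```python
# Precompile a flat 0/1 shape mask into (start, length) runs, then paint
# decompressed source pixels into a framebuffer through those runs.

def compile_runs(mask):
    """mask: flat list of 0/1 per screen pixel. Returns (start, length)
    runs covering the set pixels, so painting touches only shape pixels."""
    runs, i, n = [], 0, len(mask)
    while i < n:
        if mask[i]:
            start = i
            while i < n and mask[i]:
                i += 1
            runs.append((start, i - start))
        else:
            i += 1
    return runs

def paint(screen, runs, source):
    """Copy source pixels into the screen through the precompiled runs.
    Here each shape pixel takes the source pixel at the same index."""
    for start, length in runs:
        screen[start:start + length] = source[start:start + length]

mask = [0, 1, 1, 0, 1, 1, 1, 0]
runs = compile_runs(mask)        # two runs: (1, 2) and (4, 3)
```

Because the runs are fixed per frame, slicing replaces per-pixel branching in the inner loop, which is the kind of saving that makes arbitrary shapes affordable on a 25 MHz processor.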
With support from four NSF awards we aim to develop a prototype digital library in computer science and apply it to improve undergraduate education. First, Project Envision, `A User-Centered Database from the Computer Science Literature,' 1991-94, deals with translation, coding standards including SGML, retrieval/previewing/presentation/browsing/linking, human-computer interaction, and construction of a partial archive using text and multimedia materials provided by ACM. Second, `Interactive Learning with a Digital Library in Computer Science,' 1993-96, supported by NSF and ACM with additional assistance from other publishers, focuses on improving learning through delivery of materials from the archive. Third, `Networked Multimedia File System with HyTime,' funded by NSF through the SUCCEED coalition, considers networking support for distributed multimedia applications and the use of HyTime for description of such applications. Fourth, equipment support comes from the Information Access Laboratory allotment of the `Interactive Accessibility: Breaking Barriers to the Power of Computing' grant funded by NSF for 1993-98. In this paper we report on plans and work with digital video relating to these projects. In particular we focus on our analysis of the requirements for a multimedia digital library in computer science and our experience with MPEG as it applies to that library.
This paper describes our perception of current developments in networking, telecommunication, and multimedia technology, from which we have taken a constructive view. From this standpoint, we devised a client-server architecture that veils servers from their customers, reflecting our conviction that network and location independence for server access is a coming trend. To test the workability of this architecture we constructed an on-line KARAOKE service on an existing CVS (Chinese Videotex System), and it works well. We are now working on a prototype multimedia service network, a miniature client-server realization of our proposal. A specially designed protocol is described: through this protocol a one-to-many connection can be set up, and to serve multimedia applications, new connections can be established within a basic connection. Each continuous medium may thus have its own connection without being interrupted by other media, at least from the application's point of view. The constructive view we advance is not itself a framework, but it is tantamount to one: it builds systems as assemblies of methods, techniques, designs, and ideas, with greater flexibility and availability.
In this paper, a prototype system for dial-up remote access to an image database is proposed. As a videotex system, it comprises an Information Customer, an Information Provider, a Communication Server, the Public Switched Telephone Network, and a database server containing an image database. Because the database includes natural color images, a high-resolution visual medium is available and many applications become possible. Currently, a color image with a resolution of 400 by 400 pixels can be accessed in about 25 seconds using JPEG compression and a high-speed modem. The system can be employed in many applications, such as home shopping and remote education, and it can serve as a pioneer system for providing teleservices on the Integrated Services Digital Network.
Video frames are usually compressed before their transmission through a network. Due to this compression, some frames will be more important than others when the images are reconstructed at the receiver node, i.e., frames are not independent of one another. Although it is desirable to keep the miss rate of these important frames as low as possible, it is not always appropriate to give them higher priority than other real-time traffic, since the performance guarantees of real-time channels could otherwise be compromised. We propose a scheme which can provide performance guarantees to all real-time traffic and also improve the overall frame reconstruction rate of interdependent video frame streams without compromising those guarantees. Basically, we use a statistical real-time channel to deliver each video stream and dynamically extend the due date of a frame as long as the frame is still useful after its previously assigned due date. Channels originating from the same source node are multiplexed so that the overall network utilization can be improved. We have simulated the proposed scheme using MPEG-coded frames of the movie Star Wars, and the scheme is shown to effectively improve the overall frame reconstruction rate, successfully reduce the link capacity necessary for an MPEG channel from the worst-case level (needed for a channel of the same quality without channel multiplexing) to an average level, and at the same time provide performance guarantees.
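The due-date extension idea can be made concrete with a small sketch. The frame types and usefulness lifetimes below are assumptions for illustration (an I-frame is referenced until the next I-frame, a B-frame is referenced by nothing), not parameters from the paper.

```python
# Sketch of dynamic due-date extension: a late frame is kept alive as long
# as it is still useful to frames that depend on it. Lifetimes are in
# frame periods and are illustrative values only.
USEFUL_LIFETIME = {"I": 9, "P": 3, "B": 0}

def effective_deadline(frame_type, due_date):
    """Extend a frame's due date by how long later frames still need it."""
    return due_date + USEFUL_LIFETIME[frame_type]

def deliverable(frame_type, due_date, arrival):
    """A frame still contributes to reconstruction if it arrives before
    its extended due date."""
    return arrival <= effective_deadline(frame_type, due_date)

# A reference I-frame arriving 4 periods late is still worth delivering,
# while an equally late B-frame is not.
late_i = deliverable("I", due_date=10, arrival=14)
late_b = deliverable("B", due_date=10, arrival=14)
```

Delivering late reference frames is what raises the reconstruction rate: every dependent frame that arrives on time becomes decodable again.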
Delayed conferencing is a way to conduct people-to-people communication based on multimedia mail systems. An active multimedia system for delayed conferencing uses the concept of active media by enclosing active knowledge in messages. Messages having active knowledge will automatically react to certain events to perform operations for generating a timely response and improving operational efficiency. Moreover, each message also possesses a hypergraph structure for organizing its contents. In this paper, an active multimedia system is presented to illustrate its usefulness in delayed conferencing. The knowledge representation and the execution model are described for generating and executing active knowledge. Finally, we discuss the current approach and future work.
Personal Telepresence is an interactive multimedia tool that allows individuals or groups to meet affordably with remotely located individuals or groups, from their desktop, as if they were all in the same location. A Personal Telepresence workstation would include telephony, computer, desktop videoconferencing, groupware, and graphics capability on a single platform. The user interface presented will allow natural, face-to-face interaction between all those involved in a `virtual' meeting, classroom, office, or manufacturing problem-solving session. Files could be opened and placed on a virtual `conference table' where changes could be made interactively by any or all of the `meeting' participants. `Copies' of the files can be made, `stapled' together, and given to each of the attendees. The desktop would include a `whiteboard' for brainstorming sessions and a `projector screen' to display movies, video mail, and/or the results of a simulation program. This paper discusses desktop collaboration needs and the Personal Telepresence project at LLNL.
Current efforts in multimedia communications emphasize network infrastructure research and development in areas such as coding, compression, multicast, high-performance transport protocols, service interfaces, and resource management. These efforts intend to support a general service for efficient point-to-point and multipoint communications. However, other services, which were overlooked in the past, also demand network infrastructure support. For example, no specific way is currently proposed to perform conference services such as scheduling, announcement, and discovery. In the case of conference scheduling, the network resources and conference-related information need to be set up ahead of time. This requires scheduling support from the lower-layer functions and protocols, including connection/configuration management, resource management, and their associated protocols. This issue has not been discussed seriously or included in any of the currently proposed solutions. In this paper, we concentrate mainly on how to support the multimedia conference scheduling service in the network protocol layers. A novel scheme, which separates the connection phases, is proposed as the foundation to support scheduling ability and scalability. The impact of scheduling on the API, reservation protocols, and resource management is also addressed.
In this paper, we argue that significant performance benefits can accrue if integrated networks implement application-specific mechanisms that account for the diversities in media compression schemes. Towards this end, we propose a simple, yet effective, strategy called Frame Induced Packet Discarding (FIPD), in which, upon detection of the loss of a threshold number (determined by an application's video encoding scheme) of packets belonging to a video frame, the network attempts to discard all the remaining packets of that frame. In order to analytically quantify the performance of FIPD so as to obtain the fractional frame losses that can be guaranteed to video channels, we develop a finite-state, discrete-time Markov chain model of the FIPD strategy. The fractional frame loss thus computed can serve as the criterion for admission control at the network. Performance evaluations demonstrate the utility of the FIPD strategy.
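The discard rule itself is simple enough to sketch directly. The packet representation below (a stream of `(frame_id, lost)` events) is an assumption for illustration; the paper's Markov chain analysis of the resulting fractional frame loss is not reproduced here.

```python
# Minimal FIPD sketch: once `threshold` packets of a frame are lost, the
# network discards all the frame's remaining packets rather than carry
# data the decoder can no longer use. The threshold is set per channel
# from the application's video encoding scheme.

def fipd_filter(packets, threshold):
    """packets: (frame_id, lost) pairs in arrival order.
    Returns the frame_ids of packets actually forwarded."""
    losses = {}          # frame_id -> packets lost so far
    discard = set()      # frames whose remaining packets are dropped
    forwarded = []
    for frame_id, lost in packets:
        if frame_id in discard:
            continue     # frame already given up on: free the bandwidth
        if lost:
            losses[frame_id] = losses.get(frame_id, 0) + 1
            if losses[frame_id] >= threshold:
                discard.add(frame_id)
        else:
            forwarded.append(frame_id)
    return forwarded
```

With threshold 2, a frame that loses two packets has all its later packets suppressed, while a frame with a single loss passes through intact; the freed capacity is what improves the loss guarantees available to other channels.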
A software decoder for MPEG-1 video was integrated into a continuous media playback system that supports synchronized playing of audio and video data stored on a file server. The MPEG-1 video playback system supports forward and backward play at variable speeds and random positioning. Sending- and receiving-side heuristics are described that adapt to frame drops due to network load and the available decoding capability of the client workstation. A series of experiments shows that the playback system adds a small overhead to the stand-alone software decoder and that playback is smooth when all frames or very few frames can be decoded. Between these extremes, the system behaves reasonably but can still be improved.
The MPEG (Moving Picture Experts Group) video coding standard has emerged to facilitate the fast growth of full-motion compression on digital storage media and in digital communication. As new applications arise, the problems posed by noisy channels need to be solved, and several error resilience techniques have been proposed to address them. However, MPEG compressed video contains data elements within the picture header which are absolutely crucial to decoding: without them, no decoding can be accomplished. Previously proposed error resilience techniques can only handle this kind of loss by replacing the whole frame with the previously decoded frame. In this paper, an error concealment strategy is proposed for the case of losing picture-header information during transmission of the compressed MPEG bitstream. The proposal is motivated by the fact that not all bits within the video bitstream are equally important. The basic idea of the strategy is the redundant picture-header concept, which allows redundant transmission of these very sensitive data within the MPEG-2 video headers. The strategy must be supported by an appropriate ATM-type transport structure: the redundant picture header is assigned to a different cell from the original picture header, protecting against picture-header loss and thereby improving performance over noisy channels.
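The cell-level separation can be illustrated with a toy packetizer. The cell layout below is a gross simplification (real MPEG-2 over ATM uses AAL segmentation and defined header syntax), and the placement of the redundant copy is an arbitrary choice for the sketch.

```python
# Toy sketch of the redundant picture-header idea: carry the header twice,
# in different cells, so that losing either cell still leaves the decoder
# a usable header instead of forcing whole-frame replacement.

def packetize(header, payload_cells):
    """Place the original header in the first cell and a redundant copy
    in a later, different cell, so one cell loss cannot remove both."""
    cells = [("header", header)] + [("data", c) for c in payload_cells]
    cells.insert(len(cells) // 2 + 1, ("redundant_header", header))
    return cells

def recover_header(received_cells):
    """Decoder side: use whichever header copy survived; None means both
    copies were lost and the decoder must fall back to frame replacement."""
    for kind, body in received_cells:
        if kind in ("header", "redundant_header"):
            return body
    return None

cells = packetize({"frame": 7}, ["c0", "c1", "c2"])
# Simulate losing the cell carrying the original header:
survivors = [c for c in cells if c[0] != "header"]
```

The point of assigning the two copies to different cells is exactly this: a single cell loss, the common ATM failure mode, can no longer make the whole frame undecodable.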
A rate control algorithm is proposed for the delivery of digital video traffic on local area networks that results in uniform video quality during normal traffic conditions and graceful degradation during periods of congestion. The algorithm controls a dual rate-control-mode MPEG-2 encoder, which operates in either variable bit-rate (VBR) or jointly rate-controlled VBR mode.
This paper examines the requirements for a large multimedia storage system, such as one for a movie-on-demand service. It presents the architecture for a storage system based on these requirements. The paper then presents a scheme for the organization, placement, and retrieval of multimedia data from the server. The data organization and retrieval schemes are analyzed and it is shown that their performance is superior to other schemes found in the literature.
A multiple-access server for moving picture information (MAMI) has been developed that allows multiple users to access moving picture information stored in the disk of the server concurrently without interfering with each other. This paper presents the structure of MAMI and the scheduling algorithm used on it.
This paper is on the design and implementation of GRAMS (Gopher-style Real-time ATM Multimedia Services), a multimedia system designed for a star-configuration ATM LAN with a central server providing multimedia services to multiple users in real time. It is also the initial step toward, and an essential part of, a distributed multimedia system over wider-area networks.
Multimedia information systems are an essential component of several multimedia applications, ranging from media entertainment to on-line library systems. However, the bandwidth available on traditional workstation-based I/O systems is not sufficient to support hundreds of multimedia information streams. In this paper, we discuss our approach to building a storage subsystem in a multimedia server which can deliver variable-bit-rate audio/video data streams to several hundred clients simultaneously over a high-speed broadband network. Our approach is to distribute the bandwidth requirement across multiple disks in a disk-array architecture. We address issues of multimedia data layout on this disk subsystem.
This paper presents a framework for reasoning about the timing correctness of multimedia data streams supported on a shared, serially-reusable server. A real-time scheduling approach is used to guarantee the timing requirements for multimedia applications such as video-on-demand and multimedia presentations. This framework incorporates the use of scheduling models, which are defined as abstractions that can be used to reason about timing correctness on physical resources. The scheduling models in this paper can be applied to multimedia systems which use periodic tasks to retrieve data from a disk. With their use, the multimedia system designer can reason a priori about the throughput, capacity, and schedulability of a system. The models enable the real-time system architect to quickly explore the system design space, to establish and maintain a firm performance baseline, to optimize system configuration parameters, and to explore the impact of new technologies. As an example of the application of this framework, a new disk policy called T-scan is developed here. T-scan is applied to a multimedia task set and the analytical results are presented. Additionally, the performance of various disk policies is compared using the framework.
Dynamic QoS yields better network utilization by deliberately degrading the quality of existing continuous media connections so that new user requests can be accommodated. This distributed multimedia approach involves dynamically controlling multimedia parameters such as color depth, frame rate, and audio sampling rate. It is obvious that a degradation of the physical characteristics of a real-time service will result in perceived differences in video acceptability; less obvious is the relationship between video message importance and video degradation. Through experiments it is possible to gain insight into what types of distributed video applications will be more susceptible to a degradation in QoS. This paper examines the effects that dropping the frame rate of a video window has on user perception. Findings support the fundamental premise that frame-rate reduction leads to progressively lower acceptance ratings, which erode with each stepwise decrease in the experimental frame rates. We describe the results in terms of network bandwidth and distributed video applications. Dynamic QoS is by no means an intermediate step in the realization of unconstrained multimedia on-demand services: adaptive algorithms will continually be required, since there is no upper bound on the complexity of user requirements.
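The bandwidth side of frame-rate degradation is simple arithmetic; the sketch below uses assumed stream parameters (320x240, 8 bpp, uncompressed) purely to illustrate the trade-off the experiments quantify perceptually.

```python
# Back-of-the-envelope relation between frame rate and bandwidth for an
# uncompressed stream. Resolution and color depth are assumed values,
# not figures from the study.

def stream_bandwidth_bps(width, height, bits_per_pixel, fps):
    """Raw bit rate of an uncompressed video window."""
    return width * height * bits_per_pixel * fps

full = stream_bandwidth_bps(320, 240, 8, 30)    # normal 30 fps window
half = stream_bandwidth_bps(320, 240, 8, 15)    # degraded to 15 fps

# Halving the frame rate frees exactly half the bandwidth, which the
# network can reassign to a newly admitted connection.
freed = full - half
```

The experiments in the paper supply the other half of the trade-off: how much user acceptance each such stepwise reduction costs.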
In this article we describe the results of a study to determine the utility of compressed video techniques for medical diagnostic image consultation. This study was designed to assess the feasibility of decreasing bandwidth requirements from 90 megabits per second (Mbits/sec) to T-1 rates (1.5 Mbits/sec) using state-of-the-art coders/decoders with the most modern data compression algorithms. Vendors provided T-1, H.261-standard video codec devices that were CCITT Px64 compliant, with a selected data rate of 1.5 Mbits/sec and motion video resolution set to F-CIF at 30 frames/sec. The T-3 (45 Mbits/sec) codecs used pulse code modulation as the transmission format, with a line code format of B3ZS. The tests quantified the performance of the codecs by comparing the input test images to the output images after compression by the vendors' codecs. Evaluations were conducted utilizing common National Television Systems Committee industry performance standards, as well as the Society of Motion Picture and Television Engineers Recommended Practice RP-133 for medical diagnostic imaging. Results indicated that the T-1 video codecs available at the time of the tests could not provide either the contrast or the spatial resolution necessary for diagnostic medical image consultation.
High-speed, public-network telecommunication services are becoming critical to the health-care sector. We are examining the use of Asynchronous Transfer Mode (ATM) transmission of digital video to support remote real-time diagnostic procedures (e.g. fluoroscopy and ultrasound). Remote visualization may eliminate the need for a physician on-site, thus reducing personnel (physician) costs while providing access to remote physician specialists. A key requirement of cost-effective video transmission and storage is data compression. JPEG compression provides reconstructed images of high quality, inexpensive codecs are now available, and the compressed data streams may be easily transmitted via ATM networks based on DS-3 transport. We present a telemedicine model, describe a preliminary experimental protocol, and discuss psychovisual assessment of data from fluoroscopy and ultrasound examinations. Early results (monochrome medical video compressed at approximately 1 bit/pixel) provide a basis for examining deployment issues and costs.
The BERKOM MultiMedia Transport System is designed for communication between distributed multimedia applications on top of broadband ISDN networks. At its core are the ST-II protocol on the network layer and an extended version of the XTP protocol, called XTP-Lite, on the transport layer. ST-II provides mechanisms for fast packet forwarding, multiple-target connections, and bandwidth management. XTP-Lite supports negotiation of QoS parameters, rate control, and flexible error handling.
Efficient mapping of connection-oriented protocols onto ATM requires identifying the protocol elements and associating them with the corresponding ATM elements. In this context, the functions provided by connection-oriented protocols are structured like the B-ISDN reference model. The control of higher-layer (transport or network) protocol connections is done using procedures and parameters that are currently defined in the ATM control plane. On that basis, the simultaneous establishment of an ATM connection and the corresponding higher-layer protocol connection is presented. The Quality of Service (QoS) parameters of the connection-oriented protocol are identified and associated with the corresponding QoS parameters of the ATM layer. The possibilities for handling mismatched parameters are examined and evaluated. Some observations from a real system (the BERKOM Transport System) concerning simultaneous connection establishment and the efficient mapping of QoS parameters onto ATM performance parameters are given. Finally, additional requirements on the connection-oriented protocols and the ATM signalling protocol, needed to fully support simultaneous connection establishment and QoS mapping, are discussed.
Real-time applications require not only high bandwidth, low access delay, and high reliability (low loss rates), but also guarantees on the upper or lower bounds of all three. Guarantees, in addition to allowing real-time applications to be notified of their exact Quality of Service (QoS), actually improve the QoS that can be provided. The client's QoS varies with system and network load and may be adjusted based on the total offered load and the available bandwidth and delay characteristics of the network. These requirements for delay and bandwidth guarantees have stimulated the need for new network architectures whose protocols provide the underlying support for guarantees on medium access delay and bandwidth. 100Base-VG is a data-link-layer 100 Mbps LAN protocol that uses a preemptive round-robin scheme to provide system connectivity to a workgroup. In this paper, we analyze the performance of 100Base-VG using a simple analytical and simulation model. Expressions for network utilization, throughput, and access delay are derived. From these, we show that 100Base-VG provides the basic characteristics required of a real-time network architecture at the data link layer to support guarantees for delay, delay jitter, and throughput.
There are two main characteristics that distinguish lossless digital image browsing from ordinary video applications: first, digital images used in medical imaging and scientific visualization usually cannot tolerate any compromise in image quality; second, unlike live or playback video, browsing implies varying playback rates. With these differences, it is no longer possible to increase the browsing speed by dropping alternate frames or by increasing compression ratios as is done in video applications. From the network's point of view, image browsing applications send data at different rates during different periods of time. These types of applications require guaranteed performance service in terms of bandwidth, end-to-end delay, and jitter; however, their performance requirements change during the lifetime of the application. Most existing solutions that support guaranteed performance services require resource reservation on a per-connection basis, and the amount of resources reserved during the lifetime of the connection is usually fixed. The static nature of this type of resource reservation does not easily accommodate the dynamics inherent in applications like lossless image browsing. In this paper, we propose a new abstraction called Dynamic Connection Management (DCM) within the framework of guaranteed performance communications. The DCM scheme provides the network with the ability to modify the performance parameters of any active connection subject to a modification contract. We describe the DCM scheme, validate the design with simulation experiments, and present a prototype implementation within the context of the Sequoia 2000 project.
This article reviews the current hardware acceleration techniques employed for digital video capture/compression and playback/decompression, and discusses future implementations. We discuss current capture systems that employ either dedicated silicon or a high-throughput processor to provide real-time capture and compression, and compare the advantages offered by each approach. Similarly, video decompression acceleration can take the form of dedicated silicon (e.g., MPEG, JPEG) or be implemented using a dedicated high-throughput processor (e.g., DVI). Programmable devices allow new algorithmic implementations to be introduced without replacing the hardware system, but may not offer the performance of dedicated devices. Decompression and display hardware subsystems also provide real-time functionality such as color conversion, dithering, dynamic scaling with interpolation filters, and other blitter functions. Finally, we discuss the process of mixing an independent display layer with the existing system's graphics output for display, and present new designs that combine the functionality of digital video with the system graphics controller to provide a totally integrated solution.
In a companion paper we describe a Micro Channel adapter card that can perform real-time JPEG (Joint Photographic Experts Group) compression of a 640 by 480 24-bit image within 1/30th of a second. Since this corresponds to NTSC video rates at considerably good perceptual quality, the system can be used for real-time capture and manipulation of continuously fed video. To facilitate capturing the compressed video on a storage medium, an IBM bus-master SCSI adapter with cache is utilized. The efficiency of the data transfer mechanism is considerably improved using the System Control Block architecture, an extension to Micro Channel bus masters. Experimental results show that the overall system can sustain compressed data rates of about 1.5 MBytes/second, with sporadic peaks to about 1.8 MBytes/second depending on the image sequence content. We also describe mechanisms to access the compressed data very efficiently through special file formats. This in turn permits the creation of simpler sequence editors. Another advantage of the special file format is easy control of forward, backward, and slow-motion playback. The proposed method can be extended to the design of a video compression subsystem for a variety of personal computing systems.
The feasibility and performance issues of various types of digital video on personal systems are investigated. We distinguish digital video systems according to compression/decompression hardware, media storage types, and output mixing schemes, and examine them individually. Potential bottlenecks such as I/O throughput, CPU cycles, and system bus bandwidth are investigated for different types of video applications. Performance information is collected to provide design guidance, specifically for CD-ROM-based, network-based, and disk-based digital video applications. Finally, we investigate the design alternatives for software-only digital video, and the trade-offs between memory-to-VRAM transfer rate, CPU cycles for (de)compression, compression quality, and disk I/O throughput.
Video display systems require large amounts of high-bandwidth memory to implement multi-window display-processing functions for full-motion video images (e.g. multi-source gen-lock, variable scaling, scan conversion, etc.) and high-resolution animated graphics (e.g. alpha-channeling, block moving, multi-windowing, etc.). In this paper, low-cost memory architectures are presented that efficiently share memory among the different video and graphics functions in a multi-window full-motion video and graphics display system. The major problem associated with shared display memory systems, the I/O bottleneck, is eliminated by using a segmented memory, I/O buffers, and a communication network that routes concurrent streams of video and graphics data between I/O devices, buffers, and memory segments. Further reduction of memory is obtained by encoding the window overlay priorities with 2D run lengths instead of overlay codes for every pixel on the screen. This approach also reduces the real-time performance requirements for the controllers of the architecture. Finally, the paper describes an algorithm that schedules video/graphics access to the segments of the display memory such that bus contention is minimized. The resulting `free time' can be used by a graphics processor to perform fast updates in the display memory.
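The overlay-priority encoding can be sketched per scanline. This is a 1D simplification of the paper's 2D run lengths (which would also share runs across scanlines), and the window IDs are illustrative.

```python
# Encode per-pixel window overlay priorities as (window_id, run_length)
# pairs per scanline instead of one overlay code per pixel: storage drops
# from one code per pixel to one pair per visible window span.

def encode_scanline(priorities):
    """priorities: one window ID per pixel of a scanline."""
    runs = []
    for p in priorities:
        if runs and runs[-1][0] == p:
            runs[-1] = (p, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((p, 1))               # a new window span begins
    return runs

def decode_scanline(runs):
    """Expand the runs back to per-pixel window IDs."""
    out = []
    for window_id, length in runs:
        out.extend([window_id] * length)
    return out

# Three overlapping windows on a 10-pixel scanline collapse to 4 runs.
line = [0, 0, 0, 2, 2, 2, 2, 1, 1, 0]
runs = encode_scanline(line)
```

Because the controller only switches sources at run boundaries rather than testing every pixel, the encoding also reduces the real-time work per scanline, as the abstract notes.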
In this paper we present the architecture of a multimedia ISDN-PC, providing advanced video compression/decompression as well as video manipulation facilities at affordable cost. The system provides sophisticated tools for PC-based video presentation and video communication. The modular and flexible architecture consists of hardware modules for video compression and real-time video manipulation.
The design of hybrid transmission algorithms for the multiplexing of voice and data over a common digital channel is of interest to various communication networks, including cellular radio and high-speed topologies. In environments where the characteristics of the voice and data traffic may vary dynamically, the issue is the deployment of Hybrid Multiplexing Algorithms (HMAs) which satisfy the constraints imposed by the voice traffic, while simultaneously attaining high channel utilization and inducing low implementation overhead. In this work, we propose, evaluate, and compare two HMAs: a semidynamic one and a dynamic one. The former induces lower implementation overhead than the latter, but it is applicable only to environments where the rate of the voice traffic varies relatively slowly and its statistics are parametrically known. The semidynamic HMA induces frame structures, where the capacity allocation per frame, for voice versus data traffic, is dictated by a superimposed traffic monitoring algorithm. The dynamic HMA, on the other hand, assigns each channel slot to voice versus data packets dynamically; it requires no statistical knowledge about the voice traffic, at the expense of significantly increased implementation overhead.
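The core of the dynamic HMA, as described, is per-slot assignment with voice taking precedence over data. The sketch below is a hedged illustration of that idea only; the queue contents, slot count, and `assign_slots` interface are invented, and the actual protocol's signaling and overhead accounting are not modeled.

```python
# Hypothetical sketch of dynamic per-slot assignment: each channel slot
# goes to a queued voice packet if one exists, otherwise to a data packet,
# otherwise it idles. Queues and slot counts are invented examples.
from collections import deque

def assign_slots(num_slots, voice_queue, data_queue):
    """Return the per-slot schedule, giving voice packets strict priority."""
    schedule = []
    for _ in range(num_slots):
        if voice_queue:
            schedule.append(('voice', voice_queue.popleft()))
        elif data_queue:
            schedule.append(('data', data_queue.popleft()))
        else:
            schedule.append(('idle', None))
    return schedule

voice = deque(['v1', 'v2'])
data = deque(['d1', 'd2', 'd3'])
print(assign_slots(4, voice, data))
# [('voice', 'v1'), ('voice', 'v2'), ('data', 'd1'), ('data', 'd2')]
```

In contrast, the semidynamic HMA would fix the voice/data split for a whole frame of slots in advance, trading this per-slot responsiveness for lower signaling overhead.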
An interactive system designed for conversing with other parties over an Ethernet LAN via multimedia presentation is proposed. Our Multimedia Chatting System integrates several media services, such as still images, text, pen writing, voice, and slow-motion video, into a practical chatting system. A prototype subsystem implementing this idea is currently under development using the NetBIOS communication interface and the Microsoft Windows environment.
Self-stabilizing protocols for the distributed control of fiber-optic networks are presented. The protocols tolerate any number and kind of initial faults and handle ongoing changes in the status of links. The new protocols improve upon previous protocols in their stabilization time, in their utilization of limited switch bandwidth, and in avoiding the use of (unbounded) sequence numbers.
JBIGW is a document archiving and retrieval system based on the ISO/IEC 11544 / CCITT T.82 encoding standard. This new lossless coding standard is much more suitable for image databases than the currently used CCITT Group IV standard, since its additional capabilities include image database browsing, input/output device independence, and roughly 50% better compression. The JBIGW bi-level image decoding, encoding, and presentation program uses the Microsoft Windows version 3.1 graphical user interface software platform. The application supports the standard Windows MDI user interface in order to decode, present, and process black-and-white images that have previously been encoded using the JBIG standard. The entire program was developed using the Borland C++ version 3.1 compiler for MS Windows, with code generation targeting the 80386 microprocessor in order to achieve maximum decoding speed. The system in its current software implementation can decode images at speeds of around 9600 bits/sec.
Transmission and playback of digital audio and video via TCP/IP has been successfully demonstrated using typical low-end UNIX workstations. In addition, attempts have been made to increase the frame rate by sending only the sections of a frame that varied the most.
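Sending only the most-changed sections of a frame amounts to simple block-based change detection. The sketch below is an illustration of that general idea, not the authors' code: the block size, threshold, and list-of-lists frame representation are all invented for the example.

```python
# Hypothetical sketch of changed-region selection: split each frame into
# fixed-size blocks and report the blocks whose summed absolute pixel
# difference from the previous frame exceeds a threshold. Only those
# blocks would then be transmitted.

def changed_blocks(prev, curr, block, threshold):
    """Yield (row, col) origins of blocks that changed more than threshold."""
    rows, cols = len(curr), len(curr[0])
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            diff = sum(abs(curr[i][j] - prev[i][j])
                       for i in range(r, min(r + block, rows))
                       for j in range(c, min(c + block, cols)))
            if diff > threshold:
                yield (r, c)

prev = [[0] * 8 for _ in range(8)]
curr = [row[:] for row in prev]
curr[0][0] = 255                      # only the top-left block changed
print(list(changed_blocks(prev, curr, block=4, threshold=10)))  # [(0, 0)]
```

The receiver patches the listed blocks into its last decoded frame, so bandwidth scales with scene motion rather than with frame size.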
Storing multimedia text, speech, or images in personal computers now requires very large storage facilities. Data compression eases the problem, but all algorithms based on Shannon's information theory will distort the data with increased compression. Autosophy, an emerging science of `self-assembling structures', provides a new mathematical theory of `learning' and a new `information theory'. `Lossless' data compression is achieved by storing data in mathematically omnidimensional hyperspace. Such algorithms are already used in disc file compression and V.42bis modems. Speech can be compressed using similar methods. `Lossless' autosophy image compression has been implemented and tested on an IBM PC (486), confirming the algorithms and theoretical predictions of the new `information theory'. Computer graphics frames or television images are disassembled into `known' fragments for storage in an omnidimensional hyperspace library. Each unique fragment is stored only once. Each image frame is converted into a single output code which is later used for image retrieval. The hyperspace image library is stored on a disc. Experimental data confirm that hyperspace storage is independent of image size, resolution, or frame rate, depending solely on `novelty' or `movement' within the images. The new algorithms promise dramatic improvements in all multimedia data storage.
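The claim that storage grows only with `novelty' can be illustrated with a plain dictionary-based fragment library. This is a loose sketch of the store-each-unique-fragment-once idea only, not the Autosophy hyperspace algorithm; the fragment size and frame contents are invented examples.

```python
# Loose illustration: split a frame into fixed-size fragments, store each
# unique fragment once in a shared library, and represent the frame as a
# list of references into that library. The library grows only when a
# frame contains fragments it has not seen before.

def add_frame(library, frame, fragment_size=4):
    """Register a frame's fragments in `library` and return its reference list."""
    refs = []
    for i in range(0, len(frame), fragment_size):
        frag = tuple(frame[i:i + fragment_size])
        if frag not in library:
            library[frag] = len(library)   # new fragment: assign next code
        refs.append(library[frag])
    return refs

library = {}
frame1 = [1, 1, 1, 1, 2, 2, 2, 2]
frame2 = [1, 1, 1, 1, 3, 3, 3, 3]   # only the second half is novel
print(add_frame(library, frame1))    # [0, 1]
print(add_frame(library, frame2))    # [0, 2] -- library grew by one fragment
```

Repeated or static image regions cost nothing beyond their reference codes, which is the sense in which storage depends on movement within the images rather than on their size.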
Lakes is an architecture for collaborative working developed to support a wide range of collaborative applications of the `same time/different place' variety across different platforms and communication media. When applications wish to share data in such an environment, connections must be established and channels created to provide the necessary communication links. As new nodes join or leave the conference, new connections must be made or broken. If an application is using several distinct channels for different types of data, the management and control of these channels can be a complex task. This paper presents an overview of the Lakes architecture and focuses on two particular features which have been designed to reduce the burden of connection and channel management on the application programmer: (1) the ability of one particular Lakes application, the call manager, to provide flexible connection management on behalf of other applications; and (2) the facility to request that channels be automatically created and destroyed as applications are shared into or unshared from calls.