Pitch division lithography (PDL) with a photobase generator (PBG) allows printing of grating images with twice the spatial frequency (half the pitch) of the mask. The proof of concept was published in a previous paper and has been demonstrated by others. Forty-five nm half-pitch (HP) patterns were produced using a 90 nm HP mask, but the images showed line edge roughness (LER) that did not meet requirements. Efforts have been made to understand and improve the LER in this process, and the challenges of achieving low-LER, well-performing pitch division are summarized. Simulations and analysis showed that successful pitch division requires an optical image that is uniform in the z direction. Two-stage PBGs were designed to enhance resist chemical contrast. New pitch division resists with polymer-bound PAGs and PBGs, as well as various other PBGs, were tested. This paper focuses on analysis of the LER problems and on efforts to improve patterning performance in pitch division lithography.

Multimedia systems are required to provide proper synchronization of various components for intelligible presentation.
However, it is challenging to accommodate the heterogeneity of different media characteristics. Audio-video
synchronization is, for instance, required for presenting video chunks with audio frames where video chunk size is
generally large and variable while audio frame size is small and fixed. This audio-video synchronization problem has been widely studied in the literature; it involves properly defining and preserving the temporal relationship between audio and video. Moreover, it is important to take into account processing complexity, since the
computational resources and processing power on embedded platforms, such as cell phones and other handheld devices,
are very limited. In this paper, we present the implementation of three audio-video synchronization methods on an
embedded system. We discuss the performance as well as the advantages and disadvantages of each of these techniques.
Based on our evaluation, we reason why one of the presented techniques is superior to the other two.
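The three evaluated methods are not detailed in this abstract, but a common building block of such techniques can be illustrated. The following is a minimal sketch of a timestamp-based approach in which audio acts as the master clock; this is an illustrative assumption, not necessarily one of the three methods compared in the paper, and the function name and 40 ms threshold are our own choices.

```python
# Hypothetical sketch of timestamp-based audio-video synchronization.
# Audio is treated as the master clock; each video frame is presented,
# held back, or dropped depending on how far its presentation timestamp
# (PTS) drifts from the current audio clock. The threshold is illustrative.

SYNC_THRESHOLD = 0.040  # seconds; drift within this window counts as "in sync"

def sync_action(video_pts, audio_clock, threshold=SYNC_THRESHOLD):
    """Return 'present', 'wait', or 'drop' for one video frame."""
    drift = video_pts - audio_clock
    if drift > threshold:        # frame is ahead of the audio: hold it back
        return "wait"
    if drift < -threshold:       # frame is behind the audio: skip it
        return "drop"
    return "present"
```

A drop/repeat policy like this keeps per-frame processing cost constant, which matters on the resource-limited embedded platforms discussed above.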

TCP is one of the most widely used transport protocols for video streaming. However, the rate variability of TCP makes it difficult to provide good video quality. To accommodate this variability, video streaming applications require receiver-side buffering. In current practice, however, there are no systematic guidelines for provisioning the receiver buffer, and smooth playout is ensured through over-provisioning. In this work, we are interested in memory-constrained applications, where it is important to determine the right receiver buffer size to ensure a prescribed video quality. To that end, we characterize video streaming over TCP in a systematic and quantitative manner. We first model a video streaming system analytically and derive an expression for the receiver buffer requirement from the model. Our analysis shows that the receiver buffer requirement is determined by the network characteristics and the desired video quality. Experimental results validate our model and demonstrate that the derived receiver buffer requirement achieves the desired video quality.
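The paper's analytical expression is not reproduced in this abstract, but the underlying idea of sizing a buffer against throughput variability can be sketched. The example below is an illustrative assumption, not the paper's model: given a hypothetical per-second TCP throughput trace and a constant video bitrate, it computes the smallest amount of pre-buffered data that avoids a playout underrun. The function name and trace values are ours.

```python
# Illustrative sketch (not the paper's model): compute the minimum initial
# receiver buffer, in bits, so that constant-bitrate playback never stalls
# over a given throughput trace. Playback drains `bitrate` bits per second.

def min_receiver_buffer(throughput_trace, bitrate):
    """Return the minimum pre-buffered bits that avoids underrun."""
    buffered = 0        # net bits accumulated relative to the start
    worst_deficit = 0   # most negative value `buffered` ever reaches
    for received in throughput_trace:
        buffered += received - bitrate       # net change this second
        worst_deficit = min(worst_deficit, buffered)
    return -worst_deficit                    # deficit to cover up front

# Example: throughput dips below a 1 Mb/s bitrate in the middle.
trace = [1_200_000, 800_000, 700_000, 1_500_000]  # bits received per second
print(min_receiver_buffer(trace, 1_000_000))      # -> 300000
```

Consistent with the analysis above, the required buffer grows with the depth of throughput shortfalls (a network characteristic) and with the playback bitrate (a proxy for desired video quality).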