The rapid advent of Silicon Photonics presents many challenges for test and packaging. Here we concisely review SiP device attributes that differ significantly from classical photonic configurations, with a view to the future beyond current, connectivity-oriented silicon photonics developments, looking ahead to such endeavors as all-optical computing and quantum computing. The necessity for nano-precision alignment of optical elements in test and packaging operations quickly emerges as the unfilled need. We review the industrial test and packaging solutions developed during the 1997-2001 photonics boom to address the needs of that era's devices, and map their gaps against the new SiP device classes. Finally, we review recent state-of-the-art advances in the field that address these gaps.
We present a comparison of classical and recently developed communications interfacing technologies relevant to
scanned imaging. We adopt an applications perspective, with a focus on interfacing techniques as enablers for enhanced
resolution, speed, stability, information density or similar benefits. A wealth of such applications has emerged, ranging
from nanoscale-stabilized force microscopy yielding a 100× resolution improvement by leveraging the latest
interfacing capabilities, to novel approaches in analog interfacing which improve data density and DAC resolution by
several orders of magnitude. Our intent is to provide tools to understand, select and implement advanced interfacing to
take applications to the next level.
We have entered an era in which new interfacing techniques are enablers, in their own right, for novel imaging
techniques. For example, clever leveraging of new interfacing technologies has yielded nanoscale stabilization and
atomic-force microscopy (AFM) resolution enhancement.
To assist in choosing and implementing interfacing strategies that maximize performance and enable new capabilities,
we review available interfaces such as USB2, GPIB and Ethernet against the specific needs of positioning for the
scanned-imaging community. We spotlight recent developments such as LabVIEW FPGA, which allows non-specialists
to quickly devise custom logic and interfaces of unprecedentedly high performance and parallelism. Notable
applications are reviewed, including a clever amalgamation of AFM and optical tweezers and a picometer-scale-accuracy
interferometer devised for ultrafine positioning validation. We note the Serial Peripheral Interface (SPI),
emerging as a high-speed/low-latency instrumentation interface. The utility of instrument-specific parallel (PIO) and
TTL sync/trigger (DIO) interfaces is also discussed. Requirements of tracking and autofocus are reviewed against the
time-critical needs of typical applications (to avoid, for example, photobleaching), as exemplified in recent capabilities
for fast acquisition of focus with bumpless transition between optical and electronic position control. A novel
planarization approach is reviewed, providing a nanoscale-accurate datum plane over mesoscale scan areas without scanline
flattening. Finally, not to be overlooked is the original real-time interface: analog I/O, with novel capabilities
introduced in recent months. Here additional developments are discussed, including a resolution-enhancing technique
for analog voltage generation and a useful combination of high-speed block-mode and single-point data acquisitions.
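The resolution-enhancing technique for analog voltage generation mentioned above is proprietary, but its underlying principle, gaining sub-LSB resolution by rapidly dithering between adjacent DAC codes so that a band-limited load sees their time-average, can be sketched as follows. All parameters (full scale, bit depth, sequence length) are hypothetical, and this is an illustrative sketch of the general dithering idea, not the commercial algorithm:

```python
# Sketch: enhancing effective DAC resolution by toggling between
# adjacent codes. Assumes an ideal N-bit DAC and a load whose
# bandwidth is far below the dither rate, so it averages the output.
# Illustrative only -- not the proprietary implementation.

def dither_sequence(target, full_scale=10.0, bits=16, length=256):
    """Return (codes, lsb): a sequence of DAC codes whose time-average
    approximates `target` volts to well under one LSB."""
    lsb = full_scale / (2**bits - 1)      # volts per code step
    base = int(target / lsb)              # code just below the target
    frac = target / lsb - base            # sub-LSB remainder, 0..1
    high = round(frac * length)           # samples spent on base + 1
    return [base + 1] * high + [base] * (length - high), lsb

codes, lsb = dither_sequence(1.23456)
avg = sum(codes) / len(codes) * lsb       # what the averaging load sees
```

The average output lands within `lsb/length` of the target, an effective gain of roughly log2(length) = 8 extra bits here, at the cost of requiring the load to filter out the dither frequency.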
In this paper, we discuss the characteristics of a six-axis, micro-scale nanopositioner and steps that have been taken
to adapt it for use in aligning and manipulating micro-optics. This device, the microHexFlex, is designed to possess
motion and force characteristics that enable it to align or manipulate small optical elements such as waveguides,
diode lasers, lenses, fibers, etc. More specifically, a microHexFlex with a 3 mm diameter footprint has been shown to
have a quasi-static range of 7 μm × 13 μm × 8 μm and 0.9° × 0.8° × 1.4°. Simulations show that the microHexFlex is
capable of exerting quasi-static forces of approximately 20 mN and 2.7 mN along the in-plane and out-of-plane directions, respectively.
We discuss how the dynamic performance and resolution of the microHexFlex have been augmented using Input
Shaping™ and HyperBit control, respectively. This enables the microHexFlex to rapidly and accurately control
position to within 10 nm when operating at 100 Hz. The microHexFlex may be manufactured using deep reactive ion
etching (DRIE) at a cost of less than $2 per device.
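Input Shaping suppresses residual vibration by convolving the raw command with a short train of impulses tuned to the structure's resonance. The textbook two-impulse Zero Vibration (ZV) shaper can be sketched as below; the 1 kHz frequency and 1% damping ratio are hypothetical example values, not measured microHexFlex parameters:

```python
import math

def zv_shaper(f_hz, zeta):
    """Two-impulse Zero Vibration shaper for a mode at f_hz (Hz) with
    damping ratio zeta. Returns (amplitudes, times)."""
    wd = 2 * math.pi * f_hz * math.sqrt(1 - zeta**2)   # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))
    A = [1 / (1 + K), K / (1 + K)]        # impulse amplitudes (sum to 1)
    T = [0.0, math.pi / wd]               # spaced a half damped period apart
    return A, T

def shape(command, dt, f_hz, zeta):
    """Convolve a sampled command with the ZV impulse train."""
    A, T = zv_shaper(f_hz, zeta)
    out = [0.0] * (len(command) + int(T[1] / dt) + 1)
    for a, t in zip(A, T):
        k = int(round(t / dt))            # impulse delay in samples
        for i, c in enumerate(command):
            out[i + k] += a * c
    return out

# Example: shape a unit step for a hypothetical 1 kHz, 1%-damped mode.
A, T = zv_shaper(1000.0, 0.01)
step = [1.0] * 200
shaped = shape(step, 1e-5, 1000.0, 0.01)  # two-step staircase command
```

The shaped command reaches the same final value as the raw step, but the second impulse arrives exactly when it cancels the oscillation launched by the first, which is why the move settles without ringing.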
The active silicon microstructures known as Micro-Electromechanical Systems (MEMS) are improving many existing technologies through simplification and cost reduction. Industries as diverse as telecommunications, computing, projection displays, automotive safety, defense and biotechnology have already capitalized on MEMS technology. As MEMS devices grow in sophistication and complexity, the familiar pressures to further reduce costs and
increase performance grow for those who design and manufacture them and for the engineers who specify them for their end applications.
One example is MEMS optical switches, which have evolved from simple, bistable on/off elements to microscopic, freely-positionable beam-steering optics. These can be actuated to discrete angular positions or to continuously variable angular states through applied command signals. Unfortunately, elaborate closed-loop actuation schemes are often required to stabilize the actuation. Furthermore, preventing one actuated micro-element from vibrationally
cross-coupling with its neighbors is another reason costly closed-loop approaches are thought to be necessary.
The Laser Doppler Vibrometer (LDV) is a valuable tool for MEMS characterization that provides non-contact, real-time measurements of velocity and/or displacement response. The LDV is a proven technology for production metrology to determine the dynamical behavior of MEMS elements, which can be a sensitive indicator of manufacturing variables such as film thickness, etch depth, feature tolerances, handling damage and particulate contamination. LDV measurements are also
important for characterizing the actuation dynamics of MEMS elements for implementation of a patented controls technique called Input Shaping®, which we show here can virtually eliminate the vibratory resonant response of MEMS elements even when subjected to the most severe actuation profiles.
In this paper, we demonstrate the use of the LDV to determine how the application of this compact, efficient algorithm can improve the performance of both open- and closed-loop MEMS devices, eliminating the need for costly closed-loop approaches. This can greatly reduce the complexity and cost of MEMS design and manufacture while improving yield.
Significant advancements in controls engineering have recently been commercialized with application to micromachining as well as to automated alignment, precision machining, tracking and active optics:
1. Cost effective, industrial-class implementations of Momentum Compensation (also known as Frahm Damping) provide low-order cancellation of inertial inputs to supporting structures and are of particular applicability to structures with low natural resonance frequencies;
2. Input Shaping, a patented digital controls algorithm developed at the Massachusetts Institute of Technology, provides effective cancellation of structural resonances in arbitrary actuation;
3. Input Preshaping, a technique realized in both a priori and self-learning implementations, substantially eliminates following errors in repetitive actuation.
Simultaneous advancements in the minimization of multi-axis positioner workpiece mass via parallel kinematics technologies have increased the native bandwidth of the positioners used in these throughput-intensive applications.
The author reviews applications of each of these, alone and together, in a comprehensive overview of the state of the art of high-bandwidth positioning techniques for micropatterning and micromachining.
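The self-learning form of Input Preshaping resembles iterative learning control: the following error measured on one repetition corrects the command for the next. A toy sketch of that idea on a hypothetical first-order lagging plant (plant model, gain and trajectory are all illustrative assumptions, not the commercial implementation):

```python
# Toy self-learning preshaping: iteratively refine the command so a
# simple lagging plant tracks a repetitive trajectory. The plant,
# learning gain and reference are hypothetical.

def plant(u, a=0.2):
    """First-order lag: y[k] = (1 - a)*y[k-1] + a*u[k]."""
    y, out = 0.0, []
    for uk in u:
        y = (1 - a) * y + a * uk
        out.append(y)
    return out

def learn(ref, iterations=50, gain=0.8):
    """Repeat the move, adding a fraction of each repetition's
    following error to the next repetition's command."""
    u = list(ref)                          # start from the raw reference
    for _ in range(iterations):
        y = plant(u)
        u = [uk + gain * (rk - yk) for uk, rk, yk in zip(u, ref, y)]
    return max(abs(rk - yk) for rk, yk in zip(ref, plant(u)))

ref = [min(k / 50.0, 1.0) for k in range(200)]   # ramp-and-hold move
raw_err = max(abs(rk - yk) for rk, yk in zip(ref, plant(ref)))
final_err = learn(ref)       # following error after learning
```

After learning, the command is effectively pre-distorted (it leads the reference) so the lagging plant lands on the desired trajectory, which is the sense in which repetitive following errors are substantially eliminated.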
In the glory days of photonics, exponentially growing demand for photonic devices brought exponentially growing competition, with new ventures commencing deliveries seemingly weekly. Suddenly the industry was faced with a commodity marketplace well before a commodity cost structure was in place. Economic issues like cost, scalability and yield, call it all "Photonomics", now drive the industry. Automation and throughput optimization are obvious answers, but until now, suitable modular tools had not been introduced. Available solutions were barely compatible with typical transverse alignment tolerances and could not automate angular alignments of collimated devices and arrays. And settling physics served as the insoluble bottleneck to throughput and resolution advancement in packaging, characterization and fabrication processes. The industry has addressed these needs in several ways, ranging from special configurations of catalog motion devices to integrated microrobots based on a novel mini-hexapod configuration. This intriguing approach allows tip/tilt alignments to be automated about any point in space, such as a beam waist, a focal point, the cleaved face of a fiber, or the optical axis of a waveguide: ideal for MEMS packaging automation and array alignment. Meanwhile, patented new low-cost settling-enhancement technology has been applied in applications ranging from air-bearing long-travel stages to subnanometer-resolution piezo positioners to advance resolution and process cycle times in sensitive applications such as optical coupling characterization and fiber Bragg grating generation. Background, examples and metrics are discussed, providing an up-to-date industry overview of available solutions.
We consider a single node which multiplexes a large number of
traffic sources. We are concerned with the amount of buffer and
bandwidth that should be allocated to a class of i.i.d. on/off
fluid flows. We impose a maximum overflow probability on the
class, and assume that the aggregate flow can be modelled using
effective bandwidth. Unlike previous approaches which assume that
the total buffer allocated to the class is either constant or
linearly proportional to the number of sources, we determine the
minimum cost allocation given a cost per unit of each resource. We
find that the optimal bandwidth allocation above the mean rate and
the optimal buffer allocation are both proportional to the square
root of the number of sources. Correspondingly, we find that the
excess cost incurred by a fixed or linear buffer allocation is
proportional both to the square of the percentage difference
between the assumed and actual number of sources and to the
square root of the number of sources.
Models are developed to analyze the throughput of ARQ protocols, such as Go Back N and Selective Repeat, and protocols without ARQ. Forward error correction is added to these models to study the interactions between these mechanisms.
In systems where sending FEC has a negligible effect on the channel loss probability, the goodput of streams increases. Reducing the data throughput and including error-correction packets, thus keeping the data rate perceived by the channel constant, has advantages in the Go Back N protocol when the channel loss rate is above a certain range. Furthermore, these models form a starting point from which to study more complicated models such as TCP.
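The flavor of this trade-off can be seen with textbook efficiency approximations (not the paper's full models): Selective Repeat resends only lost packets, Go Back N resends roughly a window per loss, and an erasure FEC code trades raw data rate for a lower residual loss rate. The window size, block sizes and loss rates below are hypothetical, and treating the FEC residual as an effective packet-loss rate in Go Back N is a deliberate simplification:

```python
from math import comb

def sr_eff(p):
    """Selective Repeat: only lost packets are resent."""
    return 1 - p

def gbn_eff(p, w):
    """Go Back N: a loss forces ~w packets to be resent
    (textbook approximation, w = packets per round trip)."""
    return (1 - p) / (1 - p + w * p)

def fec_residual(p, n, k):
    """Residual block-loss probability with n data + k parity packets:
    the block is recoverable iff at most k of the n + k are lost."""
    m = n + k
    return 1 - sum(comb(m, j) * p**j * (1 - p)**(m - j)
                   for j in range(k + 1))

def gbn_goodput_fec(p, w, n, k):
    """Goodput when k parity packets ride along with n data packets,
    keeping the rate seen by the channel constant (data share n/(n+k));
    the residual loss is used as an effective loss rate -- a rough cut."""
    return (n / (n + k)) * gbn_eff(fec_residual(p, n, k), w)
```

For example, with a window of 8 and an 8+2 erasure code, FEC raises Go Back N goodput at a 5% loss rate but lowers it at 0.5%, consistent with the threshold behavior described above.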
Proc. SPIE. 4531, Modeling and Design of Wireless Networks
We consider power control for voice users in a wideband CDMA wireless network. We investigate admission control policies that base a new-call admission decision not only upon available capacity but also upon the required downlink transmit power and upon the user's willingness to pay. We assume that each voice user has a utility function describing the user's willingness to pay as a function of downlink SINR. The network is assumed to seek to maximize either the total utility of all users or the total revenue generated from all users. We present a numerical study of a single cell, displaying the optimal power allocation to each user as a function of the geographical distribution of users for a selection of different utility-function distributions. We demonstrate how prices per code and per unit transmitted power can be used to achieve the optimal power allocation in a distributed fashion, and how these prices vary with system load.
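The kind of optimization described can be sketched with a standard special case: logarithmic utilities of SINR and a total downlink power budget, for which the optimum is classic water-filling. The channel gains and budget below are hypothetical, and this is a generic convex-allocation sketch, not the paper's exact model:

```python
# Toy single-cell downlink allocation: maximize sum of log(1 + SINR_i)
# subject to a total transmit-power budget, with SINR_i = p_i / ig_i
# where ig_i = noise_i / gain_i. The optimum is water-filling; the
# Lagrange multiplier found below plays the role of the per-unit
# power price discussed in the abstract. Parameters are hypothetical.

def waterfill(inv_gains, p_total, tol=1e-10):
    """inv_gains[i] = noise_i / gain_i; returns the optimal powers
    p_i = max(0, level - inv_gains[i]) summing to p_total."""
    lo, hi = 0.0, max(inv_gains) + p_total
    while hi - lo > tol:                 # bisect on the water level
        level = (lo + hi) / 2
        used = sum(max(0.0, level - ig) for ig in inv_gains)
        if used > p_total:
            hi = level
        else:
            lo = level
    return [max(0.0, lo - ig) for ig in inv_gains]

# Users farther from the base station have larger noise/gain ratios.
inv_gains = [0.1, 0.5, 2.0]        # hypothetical noise/gain per user
powers = waterfill(inv_gains, p_total=3.0)
```

Note that a sufficiently disadvantaged user can receive zero power at the optimum, mirroring the admission-control behavior in which a call is refused when the required downlink power is not worth its price.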