Sensors with long lifetimes are ideal for infrastructure monitoring, yet miniaturized sensor systems can store only
small amounts of energy. Prior work has increased sensor lifetime through the reduction of supply voltage,
necessitating voltage conversion from storage elements such as batteries. Sensor lifetime can be further extended by
harvesting from solar, vibrational, or thermal energy. Since harvested energy is sporadic, it must be detected and stored.
Harvesting sources do not provide voltage levels suitable for charging secondary power sources, necessitating DC-DC upconversion.
We demonstrate an 8.75 mm³ sensor system with a near-threshold ARM microcontroller, a custom 3.3 fW/bit
SRAM, two 1 mm² solar cells, a thin-film Li-ion battery, and an integrated power management unit. The 7.7 μW system
enters a 550 pW data-retentive sleep state between measurements and harvests solar energy to enable energy autonomy.
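To make the duty-cycling arithmetic concrete, the sketch below computes the average power draw from the figures quoted
above; the measurement duration, measurement interval, and per-area harvesting rate are illustrative assumptions only,
not values from the abstract.

```python
# Back-of-the-envelope duty-cycle budget for the sensor system described above.
ACTIVE_POWER_W = 7.7e-6      # system power while active (from the abstract)
SLEEP_POWER_W = 550e-12      # data-retentive sleep power (from the abstract)

t_active_s = 0.1             # assumed time spent per measurement
t_period_s = 600.0           # assumed one measurement every ten minutes

duty = t_active_s / t_period_s
avg_power_w = duty * ACTIVE_POWER_W + (1 - duty) * SLEEP_POWER_W
print(f"average power: {avg_power_w * 1e9:.2f} nW")          # ~1.8 nW

# Energy autonomy requires harvesting to cover the average draw. Assume,
# purely for illustration, ~1 nW/mm^2 of usable indoor solar power.
harvest_w = 2 * 1.0 * 1e-9   # two 1 mm^2 cells
print(f"harvesting covers the draw: {harvest_w >= avg_power_w}")
```

With these (hypothetical) numbers the sleep power, not the active power, dominates the budget, which is why a sub-nW
sleep state is the key enabler of autonomy.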
Our receiver and transmitter architectures benefit from a design strategy that employs mixed-signal and digital circuit
schemes that perform well in advanced CMOS integrated circuit technologies. A prototype transmitter implemented in
0.13μm CMOS satisfies the requirements for ZigBee but consumes far less power than state-of-the-art designs.
This paper explores the recent advances in circuit structures and design methodologies that have enabled ultra-low power
sensing platforms and opened up a host of new applications. Central to this theme is the development of Near Threshold
Computing (NTC) as a viable design space for low power sensing platforms. In this paradigm, the system's supply
voltage is approximately equal to the threshold voltage of its transistors. Operating in this "near-threshold" region
provides much of the energy savings previously demonstrated for subthreshold operation while offering more favorable
performance and variability characteristics. This makes NTC applicable to a broad range of power-constrained
computing segments including energy constrained sensing platforms. This paper explores the barriers to the adoption of
NTC and describes current work aimed at overcoming these obstacles in the circuit design space.
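The energy argument behind NTC can be illustrated with a first-order model. In the sketch below, the alpha-power delay
law and every parameter value are textbook-style assumptions rather than figures from the paper; the point is only
that total energy per operation is minimized at a supply somewhat above the threshold voltage.

```python
import numpy as np

VTH, VDD_NOM, ALPHA = 0.35, 1.2, 1.5   # assumed device parameters
LEAK_FRACTION = 0.05                    # assumed leakage/dynamic energy ratio at nominal Vdd

def rel_delay(v):
    """Alpha-power-law delay, normalized to the nominal supply."""
    d = lambda x: x / (x - VTH) ** ALPHA
    return d(v) / d(VDD_NOM)

def rel_energy(v):
    """Energy per operation relative to nominal dynamic energy (dynamic + leakage)."""
    e_dyn = (v / VDD_NOM) ** 2                              # dynamic energy ~ C*Vdd^2
    e_leak = LEAK_FRACTION * (v / VDD_NOM) * rel_delay(v)   # leakage accrues over the longer cycle
    return e_dyn + e_leak

vdds = np.linspace(VTH + 0.05, VDD_NOM, 500)
v_opt = vdds[np.argmin([rel_energy(v) for v in vdds])]
print(f"minimum-energy supply ~ {v_opt:.2f} V (Vth = {VTH} V)")
```

With these parameters the optimum lands near 0.5 V: subthreshold operation would save a little more dynamic energy but
pay heavily in leakage-over-time and variability, which is exactly the trade the near-threshold region balances.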
Bridges are an important societal resource used to carry vehicular traffic within a transportation network. As such, the
economic impact of the failure of a bridge is high; the recent failure of the I-35W Bridge in Minnesota (2007) serves as a
poignant example. Structural health monitoring (SHM) systems can be adopted to detect and quantify structural
degradation and damage in an affordable and real-time manner. This paper presents a detailed overview of a multi-tiered
architecture for the design of a low power wireless monitoring system for large and complex infrastructure systems. The
monitoring system architecture employs two wireless sensor nodes, each with unique functional features and varying
power demand. At the lowest tier of the system architecture is the ultra-low power Phoenix wireless sensor node whose
design has been optimized to draw minimal power during standby. These ultra-low-power nodes are configured to
communicate their measurements to a more functionally rich wireless sensor node, Narada, residing on the second
tier of the monitoring system architecture. While the Narada wireless sensor node offers more memory, greater
processing power, and longer communication ranges, it also consumes more power during operation. Radio frequency (RF)
and mechanical vibration power harvesting is integrated with the wireless sensor nodes to allow them to operate
autonomously for long periods of time (e.g., years). Elements of the proposed two-tiered monitoring system
architecture are validated on an operational long-span suspension bridge.
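A minimal sketch of the two-tier collection pattern follows. The node names come from the text; the buffering and
hand-off details are invented purely to illustrate the split between a low-power sampling tier and a resource-rich
gateway tier.

```python
from dataclasses import dataclass, field

@dataclass
class PhoenixNode:
    """Tier 1: ultra-low-power sampler; keeps its radio off between bursts."""
    node_id: int
    buffer: list = field(default_factory=list)

    def sample(self, reading: float):
        self.buffer.append(reading)                  # accumulate locally, radio off

    def flush(self, gateway: "NaradaNode"):
        gateway.receive(self.node_id, self.buffer)   # one short radio burst
        self.buffer = []

@dataclass
class NaradaNode:
    """Tier 2: more memory/CPU/range; aggregates and processes measurements."""
    inbox: dict = field(default_factory=dict)

    def receive(self, node_id: int, data: list):
        self.inbox.setdefault(node_id, []).extend(data)

gateway = NaradaNode()
node = PhoenixNode(node_id=7)
for r in (0.12, 0.15, 0.11):
    node.sample(r)
node.flush(gateway)
print(gateway.inbox)   # {7: [0.12, 0.15, 0.11]}
```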
The long-term deterioration of large-scale infrastructure systems is a critical national problem that, if left unchecked,
could lead to catastrophes similar in magnitude to the collapse of the I-35W Bridge. Fortunately, the past decade has
witnessed the emergence of a variety of sensing technologies from many engineering disciplines, including the
civil, mechanical and electrical engineering fields. This paper provides a detailed overview of an emerging set of sensor
technologies that can be effectively used for health management of large-scale infrastructure systems. In particular, the
novel sensing technologies are integrated to offer a comprehensive monitoring system that fundamentally addresses the
limitations associated with current monitoring systems (for example, indirect damage sensing, cost, data inundation and
lack of decision making tools). Self-sensing materials are proposed for distributed, direct sensing of specific damage
events common to civil structures such as cracking and corrosion. Data from self-sensing materials, as well as from
more traditional sensors, are collected using ultra low-power wireless sensors powered by a variety of power harvesting
devices fabricated using microelectromechanical systems (MEMS). Data collected by the wireless sensors are then
seamlessly streamed across the internet and integrated with a database upon which finite element models can be
autonomously updated. Results of life-cycle and damage-detection analyses of raw and processed sensor data are streamed
into a decision toolbox that will aid infrastructure owners in their decision making.
With the increased need for low power applications, designers are being forced to employ circuit optimization
methods that make tradeoffs between performance and power. In this paper, we propose a novel transistor-level
optimization method. Instead of drawing the transistor channel as a perfect rectangle, this method involves
reshaping the channel to create an optimized device that is superior in both delay and leakage to the original
device. The method exploits the unequal drive and leakage current distributions across the transistor channel to find
an optimal non-rectangular shape for the channel. In this work we apply this technique to circuit-level leakage
reduction. By replacing every transistor in a circuit with its optimally shaped counterpart, we achieve 5% savings in
leakage on average for a set of benchmark circuits, with no delay penalty. This improvement requires no additional
circuit optimization iterations, and the method fits readily into existing design flows.
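A minimal numerical caricature of this idea follows; the per-slice current weights and the toy leakage model are
invented, standing in only for the unequal drive and leakage distributions the paper exploits. The channel is sliced
along its width, each slice may deviate slightly from the nominal gate length, and we search for the lowest-leakage
shape that preserves total drive current.

```python
from itertools import product
import math

L0, DL, LAMBDA = 60.0, 5.0, 8.0              # nominal gate length, length step, leakage decay (nm)
drive_w = [0.8, 1.0, 1.2, 1.2, 1.0, 0.8]     # assumed drive-current weight per width slice
leak_w  = [1.5, 1.0, 0.7, 0.7, 1.0, 1.5]     # assumed leakage weight (edges leak more)

def drive(lengths):                          # drive current of a slice ~ weight / length
    return sum(w / L for w, L in zip(drive_w, lengths))

def leak(lengths):                           # leakage falls off ~exponentially with length
    return sum(w * math.exp(-(L - L0) / LAMBDA) for w, L in zip(leak_w, lengths))

base = [L0] * len(drive_w)
base_drive = drive(base)
best = min((s for s in product((L0 - DL, L0, L0 + DL), repeat=len(drive_w))
            if drive(s) >= base_drive), key=leak)
print("optimized slice lengths (nm):", best)
print(f"leakage saving: {100 * (1 - leak(best) / leak(base)):.1f}%")
```

The search lengthens the leaky, weakly driving edge slices and shortens the strongly driving center slices to recover
drive current, yielding a non-rectangular channel that dominates the rectangular one.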
With continued aggressive process scaling in the subwavelength lithographic regime, resolution enhancement techniques
(RETs) such as optical proximity correction (OPC) are an integral part of the design-to-mask flow. OPC adds complex
features to the layout, resulting in mask data volume explosion and increased mask costs. Traditionally, the mask flow
has suffered from a lack of design information, such that all features (whether critical or noncritical) are treated
equally by RET insertion. We develop a novel minimum cost of correction (MinCorr) methodology to determine the level
of correction of each layout feature, such that prescribed parametric yield is attained with minimum RET cost. This
flow is implemented with model-based OPC explicitly driven by timing constraints. We apply a
mathematical-programming-based slack budgeting algorithm to determine the OPC level for all polysilicon gate
geometries. Designs adopting this methodology achieve up to 20% Manufacturing Electron Beam Exposure System (MEBES)
data volume reduction and 39% OPC run-time improvement.
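The slack-budgeting step can be caricatured as follows. The OPC levels, mask costs, residual delay penalties, and
per-gate slacks below are all hypothetical, and the paper's mathematical-programming formulation budgets slack along
full timing paths rather than gate by gate as this greedy sketch does.

```python
# (OPC level, mask-cost units, residual delay penalty in ps) -- all hypothetical
OPC_LEVELS = [("none", 1.0, 12.0), ("medium", 3.0, 5.0), ("aggressive", 8.0, 1.0)]

gate_slack_ps = {"U1": 2.0, "U2": 6.0, "U3": 30.0}       # hypothetical per-gate slacks

def cheapest_level(slack_ps):
    feasible = [lvl for lvl in OPC_LEVELS if lvl[2] <= slack_ps]
    return min(feasible, key=lambda lvl: lvl[1])         # lowest mask cost that meets timing

for gate, slack in gate_slack_ps.items():
    level, cost, penalty = cheapest_level(slack)
    print(f"{gate}: slack {slack:4.1f} ps -> {level} OPC (cost {cost}, penalty {penalty} ps)")
```

Timing-critical gates (tight slack) receive aggressive, expensive correction, while gates with ample slack tolerate
cheap or no correction, which is the source of the mask data and run-time savings.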
A methodology for layout verification and optimization based on flexible design rules is provided. This methodology
is based on image-parameter-determined flexible design rules (FDRs), in contrast with restrictive design
rules (RDRs), and enables fine-grained optimization of designs in the yield-performance space. Conventional
design rules are developed based on experimental data obtained from design, fabrication and measurements of a
set of test structures. They are generated at an early stage of process development and used as guidelines for later
IC layouts. These design rules (DRs) serve to guarantee a high functional yield of the fabricated design. Since
smaller areas are preferred in integrated circuit design due to the corresponding higher speed and lower cost, most
design rules focus on minimum resolvable dimensions.
Focus is one of the major sources of linewidth variation. CD variation caused by defocus is largely systematic after the layout is finished. In particular, dense lines "smile" through focus while isolated lines "frown" in typical Bossung plots. This well-defined systematic behavior of focus-dependent CD variation allows us to develop a self-compensating design methodology.
In this work, we propose a novel design methodology that allows explicit compensation of focus-dependent CD variation,
either within a cell (self-compensated cells) or across cells in a critical path (self-compensated design). By creating
iso and dense variants for each library cell, we can achieve designs that are more robust to focus variation.
Optimization with a mixture of iso and dense cell variants is possible both for area and leakage power, with the latter
providing an interesting complement to existing leakage reduction techniques such as dual-Vth. We implement both
heuristic and Mixed-Integer Linear Programming (MILP) solution methods to address this optimization, and experimentally
compare their results. Our results indicate that designing with a self-compensated cell library incurs ~12% area
penalty and ~6% leakage increase over original layouts while compensating for focus-dependent CD variation (i.e., the
design meets timing constraints across a large range of focus variation). We observe ~27% area penalty and ~7% leakage
increase at the worst-case defocus condition using only single-pitch cells. The area penalty of circuits after using
either the heuristic or MILP optimization approach is reduced to ~3% while maintaining timing. We also apply our
optimizations to leakage, which traditionally shows very large variability due to its exponential relationship with
gate CD. We conclude that a mixed iso/dense library combined with a sensitivity-based optimization approach yields much
better area/timing/leakage tradeoffs than using a self-compensated cell library alone. Self-compensated design shows an
average of 25% leakage reduction at the worst defocus condition for the benchmark designs that we have studied.
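A toy version of the variant-selection problem follows, with invented numbers (the paper solves real libraries with a
heuristic and with MILP): each cell on a path may take an iso or dense variant whose delay shifts in the opposite
direction through focus, and we search for the minimum-area mix that meets timing at both the nominal and the
worst-defocus condition.

```python
from itertools import product

# variant: (area units, nominal delay in ps, delay shift at worst defocus in ps) -- invented
VARIANTS = {"iso": (1.0, 10.0, +1.5), "dense": (1.3, 10.0, -1.2)}
N_CELLS, T_MAX = 8, 85.0            # hypothetical path length and timing budget (ps)

def path(choice):
    area = sum(VARIANTS[c][0] for c in choice)
    d_nom = sum(VARIANTS[c][1] for c in choice)
    d_foc = d_nom + sum(VARIANTS[c][2] for c in choice)   # worst-defocus path delay
    return area, d_nom, d_foc

feasible = [c for c in product(VARIANTS, repeat=N_CELLS)
            if max(path(c)[1], path(c)[2]) <= T_MAX]      # meet timing at both corners
best = min(feasible, key=lambda c: path(c)[0])            # minimize area
print("variant mix:", best, "area:", path(best)[0])
```

With these numbers neither an all-iso nor an all-dense path is optimal: the opposite-signed shifts partially cancel,
and the cheapest feasible solution is a mix, which is the self-compensation effect in miniature.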
Current ORC and LRC tools are not connected to design in any way. They are pure shape-based functions. A
wafer-shape-based power and performance signoff is desirable for RET validation as well as for "closest-to-silicon"
analysis. The printed images (generated by lithography simulation) are not restricted to simple rectilinear geometries.
Other sources, such as line edge roughness (LER), produce similar irregularities. For instance, the silicon image of a
transistor may not be a perfect rectangle, as is assumed by all current circuit analysis tools. Existing tools and
device models cannot handle complicated non-rectilinear geometries.
In this paper, we present a novel technique to model non-uniform, non-rectilinear gates as equivalent perfect rectangle gates so that they can be analyzed by SPICE-like circuit analysis tools. The effect of threshold voltage variation along the width of the device is shown to be significant and is modeled accurately. Taking this effect into account, we find the current density at every point along the device and integrate it to obtain the total current. The current thus calculated is used to obtain the effective length for the equivalent rectangular device. We show that this method is much more accurate than previously proposed approaches which neglect the location dependence of the threshold voltage.
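The integration step can be illustrated with a generic Vth roll-off model and placeholder constants (the paper uses a
calibrated, location-dependent model): evaluate the drive-current density at each point along the width of a
non-rectilinear gate, integrate it, and solve for the uniform rectangular gate length that conducts the same total
current.

```python
import numpy as np
from scipy.optimize import brentq

VDD, ALPHA = 1.2, 1.3                       # assumed supply and alpha-power exponent
VTH0, DVTH, L_ROLL = 0.35, 0.08, 15.0       # assumed Vth roll-off parameters (V, V, nm)

def vth(L):
    return VTH0 - DVTH * np.exp(-L / L_ROLL)      # shorter local gate length -> lower Vth

def j(L):
    return (VDD - vth(L)) ** ALPHA / L            # per-unit-width drive-current density

# Non-rectilinear gate: length varies along the width (e.g., OPC residue, LER).
x = np.linspace(0.0, 100.0, 1001)                 # width coordinate (nm)
L_profile = 60.0 + 6.0 * np.sin(2 * np.pi * x / 100.0)

j_vals = j(L_profile)                             # trapezoidal integration of current density
I_total = float(np.sum(0.5 * (j_vals[1:] + j_vals[:-1]) * np.diff(x)))
W = x[-1] - x[0]
L_eq = brentq(lambda L: j(L) * W - I_total, 40.0, 80.0)
print(f"equivalent rectangular gate length: {L_eq:.2f} nm")
```

Because both the 1/L dependence and the Vth roll-off make short regions conduct disproportionately, the equivalent
length comes out below the average drawn length, which a simple geometric average would miss.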
For current and upcoming technology nodes (90, 65, 45 nm and beyond) one of the fundamental enablers of Moore's Law is the use of Resolution Enhancement Techniques (RET) in optical lithography. While RETs allow for continuing reduction in integrated circuits’ critical dimensions (CD), layout distortions are introduced as an undesired consequence due to proximity effects. Complex and costly Optical Proximity Correction (OPC) is then deployed to compensate for CD variations and loss of pattern fidelity, in an effort to improve yield. This, together with other sources for CD variations, causes the actual on-silicon chip performance to be quite different from sign-off expectations.
In current design optimization methodologies, process variation modeling, aimed at providing guardbands for performance
analysis, is based on "worst-case scenarios" (corner cases) and yields overly pessimistic simulation results, making
design targets unnecessarily difficult to meet. Assumptions of CD distributions in Monte Carlo simulations, and in
statistical timing analysis in general, can be made more rigorous by considering realistic systematic and random
contributions to the overall process variation.
A novel methodology is presented in this paper for extracting residual OPC errors from a placed and routed full chip layout and for deriving actual (i.e., calibrated to silicon) CD values, to be used in timing analysis and speed path characterization. The implementation of this automated flow is achieved through a combination of tagging critical gates, post-OPC layout back-annotation, and selective extraction from the global circuit netlist. This approach improves upon traditional design flow practices where ideal (i.e., drawn) CD values are employed, which leads to poor performance predictability of the as-fabricated design.
With this more accurate timing analysis, we are able to highlight the necessity of a post-OPC verification embedded design flow by showing substantial differences in the silicon-based timing simulations, both in terms of a significant reordering of speed path criticality and a 36.4% increase in worst-case slack. Extensions of this methodology to multi-layer extraction and timing characterization are also proposed. The paper concludes by showing how the methodology implemented in this flow also provides a general design for manufacturability (DFM) tool template. In particular, by passing design intent to process/OPC engineers, selective OPC can be applied to improve CD variation control based on gates' functions such as critical gates and matching transistors. Furthermore, back-annotated process-based data can be used during early stages of circuit design verification and optimization, driving tradeoffs when significant variability is unavoidable.
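A schematic rendering of the back-annotation step follows, with hypothetical paths, residual-error values, and a linear
delay sensitivity standing in for the real extraction and timing flow; it shows how replacing drawn CDs with post-OPC
CDs can reorder speed-path criticality.

```python
K_DELAY = 0.4   # assumed delay sensitivity (ps per nm of residual CD error)

# path -> (path delay with drawn CDs in ps, residual OPC error per tagged gate in nm);
# positive error means the gate prints longer than drawn, i.e., slower
paths = {
    "pathA": (508.0, [+2.0, +1.5, +2.5]),
    "pathB": (510.0, [-1.0, -0.5]),
}

def annotated_delay(drawn_ps, cd_errors_nm):
    return drawn_ps + K_DELAY * sum(cd_errors_nm)        # shift delay by extracted CD error

for p in sorted(paths, key=lambda p: annotated_delay(*paths[p]), reverse=True):
    print(f"{p}: {annotated_delay(*paths[p]):.1f} ps (drawn-CD timing: {paths[p][0]:.1f} ps)")
```

In this toy example pathA, subcritical under drawn CDs, overtakes pathB once residual OPC errors are annotated,
mirroring the speed-path reordering reported above.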
Today's design-manufacturing interfaces support only minimal information exchange. Lack of information on either side
leads to underperformance due to excessive guardbanding, and to increased mask cost and turnaround time due to
over-correction. In this work we present techniques that simultaneously utilize design and manufacturing information to
improve mask quality and reduce mask cost.
As minimum feature sizes continue to shrink, patterned features have become significantly smaller than the wavelength of light used in optical lithography. As a result, the requirement for dimensional variation control, especially in critical dimension (CD) 3σ, has become more stringent. To meet these requirements, resolution
enhancement techniques (RET) such as optical proximity correction (OPC) and phase shift mask (PSM) technology are applied. These approaches result in a substantial increase in mask costs and make the cost of ownership (COO) a key parameter in the comparison of lithography technologies. No concept of function is injected into the mask flow; that is, current OPC techniques are oblivious to the design intent. The entire layout is corrected uniformly with the same effort. We propose a novel minimum cost of correction (MinCorr)
methodology to determine the level of correction for each layout feature such that prescribed parametric yield is attained. We highlight potential solutions to the MinCorr problem and give a simple mapping to traditional performance optimization. We conclude with experimental results showing the RET costs that can be saved
while attaining a desired level of parametric yield.