Since the mid-1960s, when Kumar Patel invented the 9-10 μm CO<sub>2</sub> laser and the 5 μm CO laser, the CO<sub>2</sub> laser has enjoyed tremendous commercial success, while the CO laser has played essentially no role. Until recently, reliable, cost-effective, room-temperature CO laser sources with long sealed-off lifetimes did not exist. With the release of Coherent’s CO laser family, CO lasers are now commercially available with performance comparable to that of CO<sub>2</sub> lasers.
Because certain materials have different absorption coefficients at 5 μm and 9-10 μm, the light-material interaction is wavelength dependent. In addition, a 5 μm beam can be focused to a tighter spot, and for the same spot size it has a longer depth of focus than a 10 μm beam. This matters in the processing of glass and ceramics, where 10 μm radiation is absorbed near the surface while 5 μm radiation is deposited into the bulk material and does not rely solely on thermal diffusion from the surface. Leveraging this difference, Coherent and other organizations have conducted experiments comparing CO<sub>2</sub> and CO laser processing of glasses and ceramics. The results show that the CO laser provides processing and performance advantages in this important materials processing market.
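The scaling behind the two focusing claims can be checked with standard Gaussian-beam formulas. This is a sketch assuming a diffraction-limited beam with M² = 1; the lens focal length and input beam diameter are illustrative values, not parameters from the paper:

```python
import math

def spot_diameter(wavelength, focal_length, beam_diameter, m2=1.0):
    """Diffraction-limited focused spot diameter: d = 4 M^2 lambda f / (pi D)."""
    return 4 * m2 * wavelength * focal_length / (math.pi * beam_diameter)

def rayleigh_range(waist_radius, wavelength):
    """Rayleigh range z_R = pi w0^2 / lambda; depth of focus is ~2 z_R."""
    return math.pi * waist_radius ** 2 / wavelength

# Same optics (illustrative 100 mm lens, 10 mm input beam), two wavelengths:
d_co = spot_diameter(5.0e-6, 0.100, 0.010)    # ~64 um spot at 5 um (CO)
d_co2 = spot_diameter(10.6e-6, 0.100, 0.010)  # ~135 um spot at 10.6 um (CO2)
print(d_co2 / d_co)   # ~2.1: spot size scales linearly with wavelength

# Same focused spot (w0 = 50 um): the 5 um beam has ~2x the depth of focus,
# since z_R scales as 1/lambda for a fixed waist.
print(rayleigh_range(50e-6, 5.0e-6) / rayleigh_range(50e-6, 10.6e-6))  # ~2.1
```

Both ratios reduce to the wavelength ratio 10.6/5 ≈ 2.1, which is why the 5 μm source wins on both spot size and depth of focus simultaneously.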
In this paper, we present the performance characteristics of commercially available CO lasers compared to the equivalent CO<sub>2</sub> laser. Application test results include: straight and curved cutting of various glasses, hole drilling, ceramic scribing, and the emerging area of 3-D glass printing. Both successes and remaining challenges are discussed.
Silica and silica-doped high quality factor (Q) optical resonators have demonstrated ultra-low-threshold lasers based on numerous mechanisms (e.g., rare-earth dopants, Raman). To date, the key focus has been on maintaining a high Q, as that determines the lasing threshold and linewidth. However, equally important criteria are lasing efficiency and wavelength, and these parameters are governed by the material, not the cavity Q. Therefore, to fully address this challenge, it is necessary to develop new materials. We have synthesized a suite of silica and polymeric materials with nanoparticle and rare-earth dopants to enable the development of microcavity lasers with emission from the near-IR to the UV. Additionally, the efficiencies and thresholds of many of these devices surpass previous work. Specifically, the silica sol-gel lasers are co- and tri-doped with metal nanoparticles (e.g., Ti, Al) and rare-earth materials (e.g., Yb, Nd, Tm) and are fabricated using conventional micro/nanofabrication methods. The intercalation of the metal in the silica matrix reduces the clustering of the rare-earth ions and reduces the phonon energy of the glass, improving efficiency and overall device performance. Additionally, the silica Raman gain coefficient is enhanced by the inclusion of the metal nanoparticles, which results in a lower-threshold, higher-efficiency silica Raman laser. Finally, we have synthesized several polymer films doped with metal (e.g., Au, Ag) nanoparticles and deposited them on the surface of our microcavity devices. By pumping at the plasmonic resonant wavelength of the particles, we are able to achieve plasmonic-enhanced upconversion lasing.
High optical field intensities build up inside microtoroids owing to their ultra-high quality factors, making them an ideal platform for plasmonic-photonic interactions with noble metals and a suitable pump source for microlasers. In this work, a microlaser based on hybrid silica microtoroids coated with gold nanorods is theoretically modeled and experimentally demonstrated. Theoretically, we used 3-D COMSOL Multiphysics to model the interaction between the optical mode of the microtoroid and the surface plasmon resonance of the gold nanorods, both on and off resonance. To thoroughly study the role that the polymer layer plays in the plasmonic laser system, we performed a series of finite element method simulations in which the polymer layer thickness and refractive index were varied, and their effect on the plasmonic resonance was quantified. Experimentally, we demonstrated visible lasing at 575 nm from hybrid microtoroids with a 30 μW threshold and an approximately 1 nm linewidth. We also varied the gold nanorod concentration on the microtoroid surface and studied its effect on the quality factor and threshold power in order to find the optimum concentration for lasing.
An adaptive watermarking method based on the discrete wavelet transform is proposed in this paper. To achieve good imperceptibility and robustness, the energy of sub-blocks and the texture of the carrier image are used to determine where the watermark is embedded, and the watermark embedding strength is determined by the noise visibility function. Experimental results show that the method is effective.
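A minimal, non-blind sketch of DWT-domain embedding in this spirit is shown below. It uses a one-level Haar transform and a simple coefficient-magnitude heuristic as a crude stand-in for the paper's sub-block energy/texture analysis and noise visibility function, which the abstract does not specify:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar DWT: returns the LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = (LL + LH + HL + HH) / 2
    img[0::2, 1::2] = (LL - LH + HL - HH) / 2
    img[1::2, 0::2] = (LL + LH - HL - HH) / 2
    img[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return img

def embed(img, bits, alpha=2.0):
    """Add +/- alpha to the largest-magnitude HL coefficients, one per bit.
    Large HL coefficients mark textured regions, where changes are less
    visible -- the adaptive-embedding idea in miniature."""
    LL, LH, HL, HH = haar2d(img)
    flat = HL.ravel()                         # view: edits write through to HL
    idx = np.argsort(np.abs(flat))[::-1][:len(bits)]
    flat[idx] += alpha * (2 * np.asarray(bits) - 1)
    return ihaar2d(LL, LH, HL, HH), idx

def extract(img_wm, img_orig, idx):
    """Non-blind extraction: sign of the HL-band difference at idx."""
    diff = haar2d(img_wm)[2].ravel()[idx] - haar2d(img_orig)[2].ravel()[idx]
    return (diff > 0).astype(int)
```

Note that this detector needs the original image (non-blind) and skips 8-bit quantization of the watermarked image; a practical scheme would also spread each bit over many coefficients for robustness.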
RISC and DSP, the two main architectures, each have their own features. The main idea of RISC is “simple is fast”: acting as a controller, RISC is built on a load/store structure, a register-register instruction set architecture (ISA), general-purpose registers, and caches. DSP, on the other hand, is designed for signal processing and emphasizes high-bandwidth data access and fast computation; it is built on a register-memory ISA, diverse addressing modes, data address generators, multiplier-accumulators, and on-chip RAM. As embedded systems grow rapidly, no single-core architecture, neither pure RISC nor pure DSP, can meet their needs anymore, so combination is necessary. There are two kinds of combination: dual core, or a single core in which the RISC core and DSP core are merged into one with shared resources and a unified ISA. A 32-bit media processor named MediaDSP3201 (MD32 for short) is a new member of this single-core family. In this paper, the MD32 design is introduced, concentrating on the ISA design and pipeline design, both of which are central to the architecture. Compatibility runs through the whole design: the ISA includes features from both RISC and DSP ISAs, and the pipeline is tailored to fit the designed ISA as well as possible. MD32 was fabricated by TSMC, working on first silicon, in spring 2004. Application programs running on it show that the design is successful and the chip is suitable for embedded system applications.
Proc. SPIE. 5683, Embedded Processors for Multimedia and Communications II
KEYWORDS: Digital signal processing, Detection and tracking algorithms, Computer programming, Signal processing, Multimedia, Electronics engineering, Algorithm development, Electronic imaging, Cerium, Information science
Register allocation is an important part of an optimizing compiler. Register allocation via graph coloring was first implemented by Chaitin and his colleagues and later improved by Briggs and others. By abstracting register allocation to graph coloring, the allocation process is simplified. Because the number of physical registers is limited, coloring of the interference graph cannot succeed for every node, and the uncolored nodes must be spilled. Almost all allocation methods obey one assumption: when a register is allocated to a variable v, it cannot be used by other variables until v’s live range ends, even if v is not used for a long time. This can waste register resources. The authors relax this restriction under certain conditions and make some improvements. In this method, one register can be mapped to two or more interfering live ranges at the same time if they satisfy certain requirements. An operation named merge is defined that arranges for two interfering nodes to occupy the same register at some cost. Thus, registers can be used more effectively and the cost of memory accesses can be reduced greatly.
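The classical Chaitin-style simplify-and-spill loop that this improvement builds on can be sketched as follows. Only the baseline allocator is shown; the merge operation itself depends on liveness details the abstract does not give:

```python
def color(interference, k):
    """Chaitin-style graph coloring.
    interference: dict mapping node -> set of interfering nodes.
    k: number of physical registers.
    Returns (coloring, spilled)."""
    graph = {n: set(ns) for n, ns in interference.items()}
    stack, spilled = [], []
    work = set(graph)
    while work:
        # Simplify: any node with fewer than k live neighbors is
        # guaranteed colorable, so push it and remove it from the graph.
        cand = next((n for n in work if len(graph[n] & work) < k), None)
        if cand is None:
            # No trivially colorable node: pessimistically spill the
            # node with the highest remaining degree.
            cand = max(work, key=lambda n: len(graph[n] & work))
            spilled.append(cand)
            work.remove(cand)
            continue
        stack.append(cand)
        work.remove(cand)
    # Select: pop nodes and give each the lowest color unused by neighbors.
    coloring = {}
    for n in reversed(stack):
        used = {coloring[m] for m in graph[n] if m in coloring}
        coloring[n] = min(c for c in range(k) if c not in used)
    return coloring, spilled
```

For a triangle of mutually interfering live ranges, `k=3` colors all three nodes distinctly, while `k=2` forces one spill; the paper's merge operation targets exactly such over-constrained cases by letting two interfering ranges share a register under extra conditions.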
Register files (RFs) are widely used in recent DSP and media processors, and choosing a reasonable RF configuration is very important in processor design, helping to reduce chip area, power consumption, and architectural complexity. DSP and media processors make many direct accesses to memory, which reduces register accesses. Another way to reduce RF access frequency is bypassing, or forwarding, exposed through a software mechanism: a following instruction can use the result produced by the previous one directly through the bypass logic rather than through the RF, decreasing the RF access frequency further. According to the experimental results, this new RF configuration not only satisfies the requirements of traditional media processors but is also well suited to media processors with very long instruction word (VLIW) architectures.
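The effect of such a software-exposed bypass on RF traffic can be illustrated with a toy model. This sketch assumes a one-deep bypass latch; the instruction tuples and register names are illustrative, not the actual ISA:

```python
def rf_reads(program, bypass=True):
    """Count register-file read ports exercised by a straight-line program.
    Each instruction is (dest, src1, src2). With bypass enabled, a source
    produced by the immediately preceding instruction is taken from the
    bypass latch instead of the register file."""
    reads = 0
    prev_dest = None
    for dest, *srcs in program:
        for s in srcs:
            if not (bypass and s == prev_dest):
                reads += 1
        prev_dest = dest
    return reads

prog = [("r1", "r2", "r3"),   # r1 = r2 op r3
        ("r4", "r1", "r5"),   # r1 just produced -> comes from bypass
        ("r6", "r4", "r1")]   # r4 bypassed; r1 is now two back, needs RF
print(rf_reads(prog, bypass=False), rf_reads(prog, bypass=True))  # 6 4
```

Even this three-instruction chain cuts RF reads by a third, which is the kind of saving the abstract attributes to software-visible forwarding.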
KEYWORDS: Digital signal processing, Lithium, Detection and tracking algorithms, Embedded systems, Nanoimprint lithography, Electronics engineering, Algorithm development, Cerium, Information science, Standards development
A C compiler is a basic tool for most embedded systems programmers: it is the tool by which the ideas and algorithms in an application (expressed as C source code) are transformed into machine code executable by the target processor. Our research was to develop an optimizing C compiler for a specific 16-bit DSP. As one of the most important parts of the compiler, the code generator's efficiency and performance directly affect the resulting target assembly code. Thus, to improve the performance of the compiler, we constructed an efficient code generator based on RTL, the intermediate language used in GNU CC. The code generator accepts RTL as its main input, takes advantage of features specific to RTL and to the DSP's architecture, and generates compact assembly code for the DSP. In this paper, the features of RTL are first briefly introduced. Then the basic principles of constructing the code generator are presented in detail, followed by a discussion of its architecture, including syntax tree construction and reconstruction, basic RTL instruction extraction, behavior description at the RTL level, and instruction description at the assembly level. The optimization strategies used in the code generator to produce compact assembly code are also given. Finally, we conclude that the C compiler using this code generator achieves the high efficiency we expected.
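The flavor of pattern matching over an RTL-like tree in such a code generator can be sketched as follows. The tuple encoding, the single fused pattern, and the mnemonics are hypothetical illustrations, not GNU CC's actual RTL format or the DSP's real instruction set:

```python
import itertools

def select(expr, code, fresh):
    """Tile an RTL-like tuple tree into toy assembly, preferring a fused
    multiply-accumulate when the DSP's MAC pattern matches.
    expr: a register name (str) or (op, lhs, rhs) tuple.
    code: output list of assembly strings.
    fresh: generator of fresh temporary register names."""
    if isinstance(expr, str):              # leaf: already in a register
        return expr
    op, lhs, rhs = expr
    # Pattern: (plus X (mult Y Z)) -> one MAC instruction instead of mul+add
    if op == "plus" and isinstance(rhs, tuple) and rhs[0] == "mult":
        a = select(lhs, code, fresh)
        b = select(rhs[1], code, fresh)
        c = select(rhs[2], code, fresh)
        dst = next(fresh)
        code.append(f"mac {dst}, {a}, {b}, {c}")
        return dst
    # Default tiling: emit one two-operand instruction per interior node.
    a = select(lhs, code, fresh)
    b = select(rhs, code, fresh)
    dst = next(fresh)
    code.append(f"{'add' if op == 'plus' else 'mul'} {dst}, {a}, {b}")
    return dst

code = []
fresh = (f"t{i}" for i in itertools.count())
select(("plus", "r0", ("mult", "r1", "r2")), code, fresh)
print(code)   # the fused pattern fires: ['mac t0, r0, r1, r2']
```

When the multiply appears on the other side of the addition, the pattern misses and the default tiling emits a `mul` followed by an `add`; a real generator would also try the commuted form, which is one example of the compactness-oriented strategies the paper describes.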