Photoelectrowetting on dielectric surfaces can be used to drive liquid droplets along reconfigurable paths on a microfluidic chip using controlled optical signals. These electrostatically activated surfaces along the desired path eliminate the need for precision-molded channels and discrete functional components such as microvalves and micropumps. The photoelectrowetting effect exploits the droplet's surface tension to maintain its volume during transport, while the photoelectric properties of the substrate surface are used to induce reversible fluidic flow. The active light-driven substrate is structured from graphene-doped zinc oxide (ZnO-G) films deposited on ITO-coated glass. This substrate is coated on the ZnO-G side with a ruthenium-based dye (N719) to maximize its light absorption. The light triggers two forces that transport the droplet along the substrate. The first arises from the induced hydrophobicity gradient formed across the droplet's contact area with the substrate surface. Exposing the ZnO-G film to a broad-spectrum white light source alters the surface's electric potential, which changes the droplet's contact angle and the associated hydrophobicity. Once the hydrophobicity gradient is generated, the droplet moves toward the wetting zone. The second force is also created by the optical input: the absorbed light generates a photoelectric potential that produces a piezoelectric effect in the ZnO-G film. This light-triggered piezoelectric behavior can be used to generate erasable microchannels that guide droplet movement through a microfluidic chip. Preliminary experiments are performed to investigate the photoelectric potential of light-activated ZnO-G films.
Large-area polydimethylsiloxane (PDMS) flexible optical light guide sheets can be used to create a variety of passive light harvesting and illumination systems for wearable technology, advanced indoor lighting, non-planar solar light collectors, customized signature lighting, and enhanced safety illumination for motorized vehicles. These thin, optically transparent, micro-patterned polymer sheets can be draped over a flat or arbitrarily curved surface. The light guiding behavior of the optical light guides depends on the geometry and spatial distribution of the micro-optical structures, the thickness and shape of the flexible sheet, the refractive indices of the constituent layers, and the wavelength of the incident light. A scalable fabrication method that combines soft lithography, closed thin-cavity molding, partial curing, and centrifugal casting is described in this paper for building thin, large-area, multi-layered PDMS optical light guide sheets. The proposed fabrication methodology enables the embedding of internal micro-optical structures (MOSs) in the monolithic PDMS light guide by building the optical system layer by layer. Each PDMS layer in the optical light guide can have the same, or a slightly different, index of refraction, permitting total internal reflection within the optical sheet. The individual molded layers may be defect-free or micro-patterned with microlens or reflecting micro-features. In addition, the bond between adjacent layers is ensured because each layer is only partially cured before the next functional layer is added. To illustrate the scalable layer-by-layer fabrication method, a three-layer mechanically flexible illuminator with an embedded LED strip is constructed and demonstrated.
Optically transparent electrodes (OTEs) are used in bioelectronics, touch screens, visual displays, and photovoltaic cells. Although the conductive coating for these electrodes is often composed of indium tin oxide (ITO), indium is a very expensive material and thin ITO films are relatively brittle compared to conductive polymer or graphene thin films. An alternative highly conductive, optically transparent thin film based on a graphene (G) and silver-nanoprism (AgNP) dispersion is introduced in this paper. The aqueous G ink is first synthesized using carboxymethyl cellulose (CMC) as a stabilizing agent. Silver (Ag) nanoprisms are then prepared separately by a simple thermal process involving the reduction of silver nitrate by sodium borohydride. These Ag nanoprisms are only a few nanometers thick but have relatively large surface areas (>1000 nm²). As a consequence, the nanoprisms provide more efficient injection of free carriers into the G layer. The concentrated G-AgNP dispersions are then deposited on optically transparent glass and polyimide substrates using an inkjet printer with an HP6602A print head. After printing, these optically thin films can be thermally treated to further increase electrical conductivity. Thermal treatment decomposes the CMC, which frees elemental carbon from the polymer chain and, simultaneously, causes the film to become hydrophobic. Preliminary experiments demonstrate that the G-AgNP films on glass substrates exhibit high conductivity at 70% transparency (550 nm). Additional tests on the G-AgNP thin films printed on polyimide substrates show mechanical stability under bending with minimal reduction in electrical conductivity or optical transparency.
Soft-lithography techniques can be used to fabricate mechanically flexible polydimethylsiloxane (PDMS) optical waveguide sheets that act as large-area light collectors (concentrators) and illuminators (diffusers). The performance and efficiency of these optical sheets are determined by the position and geometry of the micro-optical features embedded in the sheet or imprinted on its surface, the thickness and shape of the waveguide, the core and cladding refractive indices, and the wavelength of the incident light source. The critical design-for-manufacturability parameters are discussed and a scalable method of fabricating multi-layered PDMS optical waveguides is introduced. To illustrate the concepts, a prototype waveguide sheet that acts as a combined light collector and illumination panel is fabricated and tested. The region of the waveguide sheet that acts as the light collector consists of two superimposed PDMS layers with slightly different indices of refraction. The top layer is patterned with micro-lenses that focus the incident light rays onto micro-wedge features that act as reflectors on the bottom of the second layer and, through total internal reflection, redirect the light rays to the light diffuser region of the waveguide sheet. The bottom face of the diffuser PDMS layer is patterned with angled triangular wedge micro-features that project the light out of the waveguide sheet, forming an illuminating pattern. The proposed fabrication technique utilizes precision-machined polymethylmethacrylate (PMMA) moulds with negative-patterned PDMS inserts that transfer the desired micro-optical features onto the moulded waveguide.
The controlled guidance of light rays through a mechanically flexible, large-area polymer optical waveguide sheet is investigated using Zemax OpticStudio software. The geometry and spatial distribution of the micro-optical features patterned on the waveguide sheet determine whether the surface acts as a light concentrator or diffuser. To illustrate the concept, incident light is collected over a large center area and then transmitted to the border, where it is emitted through an illumination window covered by an array of photo-cells. The efficiencies of the collecting and illuminating regions of the hybrid PDMS collector-diffuser waveguide sheet are discussed. Initial analysis of the waveguide design demonstrates an ideal efficiency of over 90% for the concentrating region of the waveguide and over 80% for the diffusing region. The Zemax simulation of the ideal hybrid concentrator-diffuser design exhibited an efficiency of up to 75%. However, this efficiency decreased significantly when the waveguide's performance as a flexible sheet was examined. The design modifications necessary to mitigate these efficiency losses are discussed, and future work will focus on analyzing and optimizing the waveguide design for performance as a fully flexible concentrator-diffuser membrane.
Optically transparent electrodes are a key component in a variety of products including bioelectronics, touch screens, flexible displays, low-emissivity windows, and photovoltaic cells. Although highly conductive indium tin oxide (ITO) films are often used in these electrode applications, the raw material is very expensive and the electrodes often fracture when mechanically stressed. An alternative low-cost material for inkjet printing transparent electrodes on glass and flexible polymer substrates is described in this paper. The water-based ink is created by using a hydrophilic cellulose derivative, carboxymethyl cellulose (CMC), to help suspend the naturally hydrophobic graphene (G) sheets in a solvent composed of 70% DI water and 30% 2-butoxyethanol. The CMC chain has hydrophobic and hydrophilic functional sites that allow adsorption on the G sheets and, therefore, permit the graphene to be stabilized in water by electrostatic and steric forces. Once deposited on the functionalized substrate, the electrical conductivity of the printed films can be "tuned" by decomposing the cellulose stabilizer through thermal reduction. The entire electrode can be thermally reduced in an oven, or portions of the electrode can be thermally modified using a laser annealing process. The thermal process can reduce the sheet resistance of G-CMC films to < 100 Ω/sq. Experimental studies show that the optical transmittance and sheet resistance of the G-CMC conductive electrode are dependent on the film thickness (i.e., the number of superimposed printed layers). The printed electrodes have also been doped with AuCl3 to increase electrical conductivity without significantly increasing film thickness and, thereby, maintain high optical transparency.
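To make the thickness dependence concrete, the trade-off can be sketched with a deliberately simple model (our own illustration, not the paper's measured data): each superimposed printed layer is treated as an identical parallel conductor, and the transmittance follows Beer-Lambert attenuation.

```python
# Hedged illustrative model: n identical printed layers act as parallel
# conductors (sheet resistance scales as 1/n) while transmittance compounds
# as T_1**n. The single-layer values below are assumed, not measured.
def sheet_resistance(r1_ohm_sq: float, n_layers: int) -> float:
    """Sheet resistance of n identical stacked layers in parallel."""
    return r1_ohm_sq / n_layers

def transmittance(t1: float, n_layers: int) -> float:
    """Optical transmittance of n stacked layers (Beer-Lambert)."""
    return t1 ** n_layers

for n in range(1, 5):
    print(n, sheet_resistance(300.0, n), round(transmittance(0.90, n), 3))
```

Under these assumptions, each added layer lowers the resistance at the cost of compounding optical loss, which is the trade-off the AuCl3 doping is intended to sidestep.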
Laser microprocessing technologies offer an important tool for fulfilling the needs of many industrial sectors. In particular, there is growing interest in applying these processes in manufacturing areas such as automotive parts fabrication, printable electronics, and solar energy panels. The technology is driven primarily by our understanding of the fundamental laser-material interaction, by process control strategies, and by the significant fabrication experience accumulated over the past few years. The wide range of available operating parameters, including power, pulse width, beam quality, high repetition rates, and precise control of energy deposition through programmable pulse-shaping technologies, enables pre-defined material removal, selective scribing of individual layers within a stacked multi-layer thin-film structure, texturing of material surfaces, and the precise introduction of heat into the material to modify its characteristic properties. In this research, results in the areas of laser surface texturing of metals for added hydrodynamic lubricity to reduce friction, processing of inkjet-printed graphene oxide for flexible printed electronic circuit fabrication, and scribing of multi-layer thin films for the development of photovoltaic CuInGaSe2 (CIGS) interconnects for solar panel devices will be discussed.
Non-conductive graphene-oxide (GO) inks can be synthesized from inexpensive graphite powders and deposited on
functionalized flexible substrates using inkjet printing technology. Once deposited, the electrical conductivity of the GO
film can be restored through laser assisted thermal reduction. Unfortunately, the inkjet nozzle diameter (~40μm) places
a limit on the printed feature size. In contrast, a tightly focused femtosecond pulsed laser can create precise micro
features with dimensions in the order of 2 to 3 μm. The smallest feature size produced by laser microfabrication is a
function of the laser beam diameter, power level, feed rate, material characteristics and spatial resolution of the micropositioning
system. Laser micromachining can also remove excess GO film material adjacent to the electrode traces and
passive electronic components. Excess material removal is essential for creating stable oxygen-reduced graphene-oxide
(rGO) printed circuits because electron buildup along the feature edges will alter the conductivity of the non-functional
film. A study of the impact of laser ablation on the GO film and the substrate is performed using a 775nm, 120fs
pulsed laser. The average laser power was 25mW at a spot size of ~ 5μm, and the feed rate was 1000-1500mm/min.
Several simple microtraces were fabricated and characterized in terms of electrical resistance and surface topology.
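From the quoted power and spot size, the average intensity at the focal spot can be estimated with a quick back-of-envelope calculation (the uniform circular spot is our simplifying assumption; peak fluence would additionally require the pulse repetition rate, which the abstract does not give):

```python
import math

power_w = 25e-3          # average laser power, 25 mW (from the abstract)
spot_diameter_cm = 5e-4  # ~5 um spot size (from the abstract)

area_cm2 = math.pi * (spot_diameter_cm / 2) ** 2
avg_intensity = power_w / area_cm2  # average intensity, W/cm^2
print(f"{avg_intensity:.3e} W/cm^2")  # on the order of 1.3e5 W/cm^2
```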
Flexible electronic circuitry is an emerging technology that will significantly impact the future of healthcare and
medicine, food safety inspection, environmental monitoring, and public security. Recent advances in drop-on-demand
printing technology and electrically conductive inks have enabled simple electronic circuits to be fabricated on
mechanically flexible polymers, paper, and bioresorbable silk. Research has shown that graphene, and its derivative
formulations, can be used to create low-cost electrically conductive inks. Graphene is a one atom thick two-dimensional
layer composed of carbon atoms arranged in a hexagonal lattice forming a material with very high fracture strength, high
Young’s Modulus, and low electrical resistance. Non-conductive graphene-oxide (GO) inks can also be synthesized
from inexpensive graphite powders. Once deposited on the flexible substrate the electrical conductivity of the printed
GO microcircuit traces can be restored through thermal reduction. In this paper, a femtosecond laser with a wavelength
of 775nm and pulse width of 120fs is used to transform the non-conductive printed GO film into electrically conductive
oxygen reduced graphene-oxide (rGO) passive electronic components by the process of laser assisted thermal reduction.
The heat-affected zone produced during the process was minimized by the ultrashort duration of the femtosecond laser pulses. The degree
of conductivity exhibited by the microstructure is directly related to the laser power level and exposure time. Although
rGO films have higher resistances than pristine graphene, the ability to inkjet print capacitive elements and modify local
resistive properties provides for a new method of fabricating sensor microcircuits on a variety of substrate surfaces.
Edge-lit light guide panels (LGPs) with micropatterned surfaces represent a new technology for developing small- and medium-sized illumination sources for applications such as automotive lighting, residential lighting, and advertising displays. The shape, density, and spatial distribution of the micro-optical structures (MOSs) imprinted on the transparent LGP must be selected to achieve high brightness and uniform luminance over the active surface. We examine how round-tip cylindrical MOSs fabricated by precision micromilling can be used to create patterned surfaces on low-cost transparent polymethyl-methacrylate substrates for high-intensity illumination applications. The impact of varying the number, pitch, spatial distribution, and depth of the optical microstructures on lighting performance is initially investigated using LightTools™ simulation software. To illustrate the microfabrication process, several 100×100×6 mm³ LGP prototypes are constructed and tested. The prototypes include an "optimized" array of MOSs that exhibits near-uniform illumination (approximately 89%) across the active light-emitting surface. Although the average illumination was 7.3% less than the value predicted by numerical simulation, the prototype demonstrates how LGPs can be created using micromilling operations. Customized MOS arrays with a bright rectangular pattern near the center of the panel and a sequence of MOSs that illuminate a predefined logo are also presented.
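The abstract quotes approximately 89% uniformity without defining the metric; one common convention for LGPs, assumed here purely for illustration, is the minimum-to-average luminance ratio over a grid of sample points:

```python
# Hedged sketch: the sample values are hypothetical luminance readings
# (cd/m^2) from a 9-point grid, not measurements from the paper.
def luminance_uniformity(samples: list[float]) -> float:
    """Minimum-to-average luminance ratio over the sampled points."""
    return min(samples) / (sum(samples) / len(samples))

grid = [410, 435, 420, 442, 450, 438, 415, 430, 425]
print(round(luminance_uniformity(grid), 3))  # -> 0.955
```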
Bioelectronics involves interfacing functional biomolecules or living cells with electronic circuitry. Recent advances in
electrically conductive inks and inkjet printing technologies have enabled bioelectronic devices to be fabricated on
mechanically flexible polymers, paper and silk. In this research, non-conductive graphene-oxide (GO) inks are
synthesized from inexpensive graphite powders. Once printed on the flexible substrate the electrical conductivity of the micro-circuitry can be restored through thermal reduction. Laser irradiation is one method being investigated for
transforming the high resistance printed GO film into conductive oxygen reduced graphene-oxide (rGO). Direct laser
writing is a precision fabrication process that enables the imprinting of conductive and resistive micro-features on the
GO film. The mechanically flexible rGO microcircuits can be further biofunctionalized using molecular self-assembly
techniques. Opportunities and challenges in exploiting these emerging technologies for developing biosensors and
bioelectronic circuits are briefly discussed.
Recent advances in materials engineering have enabled photovoltaic (PV) cells to be fabricated from solid state semiconductors,
photosensitive organic dyes, and photoactive proteins. One type of organic PV cell is based on the natural
light-harvesting protein bacteriorhodopsin (bR) found in the plasma membrane of a salt-marsh archaebacterium. When
exposed to sunlight, each bR molecule acts as a simple proton pump which transports hydrogen ions from the
cytoplasmic to the extracellular side through a transmembrane ion channel. Two types of bR-PV cells comprised of
photosensitive dry and aqueous (wet) bR thin films are described in this paper. The self-assembled monolayer of
oriented purple membrane (PM) patches from the bR protein is created on a bio-functionalized gold (Au) surface using a
biotin molecular recognition technique. The dry bR monolayer is covered with an optically transparent Indium Tin
Oxide (ITO) electrode to complete the dry bR-PV device. In contrast, the aqueous bR-PV cell is created by
immobilizing the bR monolayer on an Au-coated porous substrate and then inserting the assembly between two micro-reservoirs
filled with KCl solutions. Platinum wire probes are then inserted in the opposing liquid reservoirs near the
porous bR monolayer. The dry bR-PV cell generated a photo-electric response of 9.73 mV/cm², while the aqueous bR-PV
produced 41.7 mV/cm² and 33.3 μA/cm². Although the generated voltages appear small, they may be sufficient to
power various microelectromechanical systems (MEMS) and microfluidic devices.
Disposable microfluidic systems are used to avoid sample contamination in a variety of medical and environmental
monitoring applications. A contactless hot intrusion (HI) process for fabricating reusable polymer micromolds with near
"optical quality" surface finishes is described in this paper. A metallic hot intrusion mask with the desired microchannels
and related passive components is first machined using a tightly focused beam from a diode-pumped solid-state (DPSS)
laser. The polymer mold master is then created by pressing the 2D metallic mask onto a polymethylmethacrylate
(PMMA) substrate. Since it is a contactless fabrication process the resultant 3D micro-reliefs have near optical quality
surface finishes. Unfortunately, the desired micro-relief dimensions (height and width) are not easily related to the hot
intrusion process parameters of pressure, temperature, and time exposure profile. A finite element model is introduced
to assist the manufacturing engineer in predicting the behavior of the PMMA substrate material as it deforms under heat
and pressure during micromold manufacture. The model assumes that thermoplastics like PMMA become "rubber
like" when heated to a temperature slightly above the glass transition temperature. By controlling the material
temperature and maintaining its malleable state, it is possible to use the stress-strain relationship to predict the profile
dimensions of the imprinted microfeature. Examples of curved microchannels fabricated using PMMA mold masters are
presented to illustrate the proposed methodology and verify the finite element model. In addition, the non-contact
formation of the micro-reliefs simplifies the demolding process and helps to preserve the high quality surface finishes.
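As a greatly simplified one-dimensional stand-in for the finite element model, the rubber-like behavior above the glass transition temperature could be sketched as a linear stress-strain estimate (the modulus, pressure, and linear form below are our illustrative assumptions, not the paper's FEM):

```python
# Hedged 1-D sketch: above Tg, PMMA is treated as rubber-like with a low
# rubbery modulus E_r; a uniform pressure p then produces an engineering
# strain eps = p / E_r, and the relief height is approximated as eps times
# the substrate thickness. All numeric values below are hypothetical.
def relief_height_um(pressure_mpa: float, rubbery_modulus_mpa: float,
                     substrate_thickness_um: float) -> float:
    strain = pressure_mpa / rubbery_modulus_mpa
    return strain * substrate_thickness_um

print(relief_height_um(0.05, 2.0, 2000.0))  # -> 50.0 (um)
```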
Laser micro-polishing (LμP) is a new laser-based microfabrication technology for improving surface quality during a
finishing operation and for producing parts and surfaces with near-optical surface quality. The LμP process uses low
power laser energy to melt a thin layer of material on the previously machined surface. The polishing effect is achieved
as the molten material in the laser-material interaction zone flows from the elevated regions to the local minimum due to
surface tension. This flow of molten material then forms a thin ultra-smooth layer on the top surface. LμP is a
complex thermodynamic process in which the melting, flow, and redistribution of molten material are significantly
influenced by a variety of process parameters related to the laser, the travel motions and the material. The goal of this
study is to analyze the impact of initial surface parameters on the final surface quality. Ball-end micromilling was used
for preparing initial surface of samples from H13 tool steel that were polished using a Q-switched Nd:YAG laser. The
height and width of the micromilled scallops (waviness) were identified as the dominant parameters affecting the quality of the
LμPed surface. By adjusting process parameters, the Ra value of a surface, having a waviness period of 33 μm and a
peak-to-valley value of 5.9 μm, was reduced from 499 nm to 301 nm, improving the final surface quality by 39.7%.
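The quoted improvement can be checked directly from the reported Ra values:

```python
# Verify the 39.7% figure from the reported before/after Ra values.
ra_before_nm, ra_after_nm = 499.0, 301.0
improvement_pct = (ra_before_nm - ra_after_nm) / ra_before_nm * 100
print(round(improvement_pct, 1))  # -> 39.7, matching the abstract
```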
Edge-lit backlighting has been used extensively for a variety of small and medium-sized liquid crystal displays (LCDs).
The shape, density and spatial distribution pattern of the micro-optical elements imprinted on the surface of the flat
light-guide panel (LGP) are often "optimized" to improve the overall brightness and luminance uniformity. A similar
concept can be used to develop interior convenience lighting panels and exterior tail lamps for automotive applications.
However, costly diffusive sheeting and brightness enhancement films are not considered for these applications
because absolute luminance uniformity and the minimization of Moiré fringe effects are not significant factors in
assessing the quality of automotive lighting. A new design concept that involves micromilling cylindrical micro-optical
elements on optically transparent plastic substrates is described in this paper. The variable parameter that controls
illumination over the active regions of the panel is the depth of the individual cylindrical micro-optical elements.
LightTools™ is the optical simulation tool used to explore how changing the micro-optical element depth can alter the
local and global luminance. Numerical simulation and microfabrication experiments are performed on several
(100 mm × 100 mm × 6 mm) polymethylmethacrylate (PMMA) test samples in order to verify the illumination behavior.
A range scan of a building's interior typically produces an immense cloud of colorized three-dimensional data representing diverse surfaces, from simple planes to complex objects. To create a virtual reality model of the pre-existing room, it is necessary to segment the data into meaningful clusters. Unfortunately, segmentation algorithms based solely on surface curvature have difficulty handling such diverse interior geometries, occluded boundaries, and closely placed objects with similar curvature properties. The proposed two-stage hierarchical clustering algorithm overcomes many of these challenges by exploiting the registered color and spatial information simultaneously. Large planar regions are initially identified using constraints that combine color (hue) and a measure of local planarity called the planar alignment factor. This stage assigns 72 to 84% of the sampled points to clusters representing flat surfaces such as walls, ceilings, or floors. The significantly reduced set of remaining points is then clustered using local surface-normal and hue-deviation information. A local density-driven investigation distance (fixed density distance) is used for normal computation and cluster expansion. The methodology is tested on colorized range data of a typical room interior. The combined approach enabled the successful segmentation of planar and complex geometries in both dense and sparse data regions.
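A minimal sketch of the stage-one planarity test is given below; the eigenvalue-based formulation and the pass threshold are our assumptions, since the abstract does not spell out how the planar alignment factor is computed:

```python
import numpy as np

def planar_alignment_factor(neighbors: np.ndarray) -> float:
    """Planarity score from the local covariance: the smallest eigenvalue is
    near zero for a flat neighborhood, so the score approaches 1."""
    cov = np.cov(neighbors.T)
    w = np.linalg.eigvalsh(cov)  # eigenvalues in ascending order
    return 1.0 - w[0] / max(w.sum(), 1e-12)

# A noisy flat patch scores near 1, so it would pass a stage-one planar test.
rng = np.random.default_rng(0)
flat = np.c_[rng.uniform(0, 1, (200, 2)), rng.normal(0, 1e-3, 200)]
print(planar_alignment_factor(flat) > 0.99)  # -> True
```

In the full algorithm this score would be combined with a hue constraint before a point is assigned to a planar cluster; the remaining points would go to the normal- and hue-based second stage.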
A light driven microvalve activated by a thin organic photoelectric film that controls the expansion and shrinkage of a
pH sensitive HEMA-AA hydrogel actuator is described in this paper. The self-assembled monolayer of oriented
bacteriorhodopsin (bR) purple membrane patches are immobilized on a porous bio-functionalized gold (Au) surface
using a biotin molecular recognition technique. When exposed to visible light, each bR molecule in the monolayer acts
as a simple proton pump which transports hydrogen ions from the cytoplasmic to the extracellular side through a
transmembrane ion channel that connects both sides of the membrane. The flow of ions from the photon activated bR
changes the pH value of the ionic solution that surrounds the gel microactuator. The chargeable polymeric network
undergoes a measurable geometric change when the pH of the ionic solution is shifted to the phase transition point
pKa. The fabrication of the thin bR film and photo-responsive hybrid hydrogel are discussed. Preliminary experiments
show that the 13nm self-assembled photoelectric layer can generate approximately 1.3mV/(mW·cm²) when exposed to
an 18mW, 568nm light source. The photo-voltage produced by the monolayer is believed to be sufficient to change the
pH of the surrounding ionic solution from its neutral state and trigger the swelling of the gel. Several design issues that
need to be resolved before a fully functional light-driven microvalve can be created are identified and discussed.
Bioelectronic photosensor arrays are hybrid devices where light-sensitive biological molecules are interfaced with
microelectronic circuitry. In this paper, a mechanically bendable multi-pixel photosensor array that exploits the light
transduction properties of thin bacteriorhodopsin (bR) films is described. The photosensitive protein is immobilized on
a flexible plastic substrate coated with a patterned indium-tin-oxide (ITO) microelectrode array. The thin bR film
responds to light intensities over a wide spectral range with a peak response at 568nm. The photovoltage generated by
the thin bR film remains approximately linear for a variety of wavelengths and over the light power range of 200μW to
12mW. By fabricating patterned photosensor arrays on bendable plastic substrates, it is possible to develop a variety of
specialized non-planar imaging technologies. The design and development of a prototype cylindrical bR sensor array for
a panoramic camera that detects movement in a wide 180° field-of-view is briefly described. Several key design
challenges are identified and discussed.
The rapid fabrication of polymeric mold masters by laser micromachining and hot-intrusion permits the low cost
manufacture of microfluidic devices with near optical quality surface finishes. A metallic hot intrusion mask with the
desired microfeatures is first machined by laser and then used to produce the mold master by pressing the mask onto a
polymethylmethacrylate (PMMA) substrate under applied heat and pressure. A thorough understanding of the physical
phenomenon is required to produce features with high dimensional accuracy. A neural network approach to modeling the
relationship among microchannel height (H), width (W), and the intrusion process parameters of pressure and temperature is
described in this paper. Experimentally acquired data are used to both train and test the neural network for parameter selection.
Analysis of the preliminary results shows that the modeling methodology can predict suitable parameters
within 6% error.
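A toy version of such a network is sketched below; the architecture, training loop, and synthetic data are our own stand-ins for the experimental measurements, which the abstract does not tabulate:

```python
import numpy as np

# Tiny MLP mapping normalized (pressure, temperature) -> (H, W), trained by
# full-batch gradient descent on a synthetic (hypothetical) mapping.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 2))           # normalized (pressure, temperature)
Y = np.c_[0.6 * X[:, 0] + 0.3 * X[:, 1],  # synthetic height H
          0.2 * X[:, 0] + 0.7 * X[:, 1]]  # synthetic width W

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 2)); b2 = np.zeros(2)
lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - Y                        # gradient of 0.5 * MSE
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
print(mse < 1e-2)  # the toy network recovers the synthetic mapping
```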
The transformation of a surface mesh from one form to another requires information about object geometry and node
topology. Establishing a valid correspondence between the mesh nodes of the two bounding objects is critical for
smooth shape deformation. The complexity of the task is increased if the meshes are originally created from separate sets
of measured surface data. The shape transformation technique described in this paper utilizes a self-organizing feature
map (SOFM), with a fixed number of nodes and known spherical topology, to fit a tessellated surface mesh around the
reference data set. The nodal mesh is then allowed to gradually deform and assume the underlying geometry of the
target data set. The mesh deformation is achieved through an unsupervised learning algorithm that iteratively modifies
the location of nodes based on randomly selected coordinate points from the target surface. Furthermore, regional shape
changes occur because the algorithm adjusts the location of nearest neighboring nodes in the evolving mesh. The
correspondence between the neighboring nodes in the two bounding shapes is maintained during the intermediate stages
of the shape interpolation process. The algorithm's performance is illustrated using scanned surface data from several physical objects.
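The core SOFM update can be sketched as follows; the ring topology, learning rate, and Gaussian neighborhood are simplifications of the spherical-topology map described above:

```python
import numpy as np

def sofm_step(nodes: np.ndarray, target: np.ndarray,
              lr: float = 0.2, radius: int = 2) -> None:
    """Pull the best-matching node and its ring neighbors toward the target."""
    best = int(np.argmin(((nodes - target) ** 2).sum(axis=1)))
    n = len(nodes)
    for offset in range(-radius, radius + 1):
        j = (best + offset) % n                     # ring neighborhood index
        influence = np.exp(-offset ** 2 / (2 * radius ** 2))
        nodes[j] += lr * influence * (target - nodes[j])

# Deform a random net toward points sampled from a unit sphere (the "target").
rng = np.random.default_rng(2)
nodes = rng.uniform(-1, 1, (32, 3))
for _ in range(5000):
    p = rng.normal(size=3)
    p /= np.linalg.norm(p)
    sofm_step(nodes, p)
radii = np.linalg.norm(nodes, axis=1)
print(0.7 < radii.mean() < 1.1)  # nodes migrate toward the spherical surface
```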
A method for rapid fabrication of mold masters for soft-molding of polydimethylsiloxane (PDMS) microfluidic devices is successfully developed and tested. The method involves laser micromachining and a hot-intrusion process, and produces mold masters from polymethylmethacrylate (PMMA) substrates. A metallic mask with microchannel line features of various widths (25 to 200 µm) is initially created by laser micromachining a 75-µm-thick brass sheet. Under the hot-intrusion process, a 2-mm-thick solid PMMA substrate is then heated and molded under pressure to force the softened material through the shaped microfeatures in the mask. The height of the extruded microrelief is determined by the pressure, temperature, and time profile of the hot-intrusion process. A mathematical model that characterizes the rapid fabrication process and enables the operator to select appropriate process parameters is described. The derived empirical model is based on experimental observations in which extruded microrelief heights were varied from 5 to 75 µm with aspect ratios from 0.1 to 0.46, and radii of the extruded profile from 12 to 270 µm. The model describes the relationship between the key process parameters and the extruded heights of the microreliefs, and provides the operator with simple guidelines for selecting those parameters. An example of a PDMS microfluidic device replicated using the rapid micromold fabrication methodology is presented to illustrate the quality of the resulting microchannel features.
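The abstract does not state the functional form of the empirical model; as an illustration of how such a model could be fitted, the sketch below regresses a power law h = a·p^b in log-log space on hypothetical height-versus-pressure data (both the form and the numbers are our assumptions):

```python
import numpy as np

pressure_mpa = np.array([0.2, 0.4, 0.8, 1.6])  # hypothetical pressures
height_um = np.array([5.0, 12.0, 30.0, 74.0])  # hypothetical relief heights

# Linear least squares on (log p, log h) yields the exponent b and log(a).
b, log_a = np.polyfit(np.log(pressure_mpa), np.log(height_um), 1)
a = np.exp(log_a)
predicted_um = a * pressure_mpa ** b
print(np.allclose(predicted_um, height_um, rtol=0.1))  # fits within ~10%
```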
Glass is often used as a substrate material for developing microfluidic chips because it is hydrophilic (attracts and holds moisture), chemically inert, stable over time, optically clear, non-porous, and can be fabricated at low cost. However, the size and geometry of the various components, flow channels, and fluid reservoirs are all fixed on the substrate material at the time of microfabrication. Recent advances in the development of a light-driven microactuator for actively changing the size and geometry of microfeatures, based on a photo-responsive hydrogel, are described in this paper. Each discrete microactuator in the platform structure is a bi-layered hydrogel that exploits the ionic nature of the pH-sensitive polymer blend of polyethylenimine (PEI) and poly(vinyl alcohol) (PVA), and the proton-pumping ability of the retinal protein bacteriorhodopsin (bR). When irradiated by a light source with a peak response of 568 nm, the bR molecules in the (bR-PVA) layer undergo a complex photocycle that causes protons to be pumped into the adjoining pH-sensitive (PEI-PVA) layer. The diffusion of similarly charged ions through the second actuating layer generates electrostatic repulsive and attractive forces which alter the osmotic pressure within the cross-linked polymer network. Depending upon the type of electrostatic forces generated, the pH-sensitive hydrogel layer will swell or, alternatively, collapse. The fabrication of the (bR-PVA)-(PEI-PVA) hydrogel microactuator is described and the experimental results from preliminary tests are presented. The application of the light-sensitive hydrogels to developing a reconfigurable microchip platform is briefly discussed.
The performance of wide field-of-view (FOV) and omni-directional sensors is often limited by the complex optics used to project three-dimensional world points onto the planar surface of a charge-coupled device (CCD) or CMOS array. Recent advances in the design and development of a spherical imaging system that exploits the fast photoelectric signals generated by dried bacteriorhodopsin (bR) films are described in this paper. The bendable, lightweight and durable bR-based photocell array is manufactured on an indium-tin-oxide (ITO) coated plastic film using an electrophoretic sedimentation (EPS) technique. The effective sensing area of each pixel in the preliminary prototype is 2x2 mm², separated by 1mm and arranged in a 4x4 array. When exposed to light, the differential response characteristic is attributed to charge displacement and recombination within the bR molecule, as well as loading effects of the attached amplifier. The peak spectral response occurs at 568nm and is linear over the tested light power range of 200μW to 12mW. The response remains linear at the other tested wavelengths, but at reduced signal amplitude. Excess material between the bR sensing elements can be cut from the plastic substrate to increase structure flexibility and permit the array of photodetectors to be wrapped around the exterior, or adhered to the interior, of a sphere.
Recent improvements in computer graphics, three-dimensional digitization and virtual reality tools have enabled
archaeologists to capture and preserve ancient relics recovered from excavated sites by creating virtual representations of
the original artefacts. The digital copies offer an accurate and enhanced visual representation of the physical object. The
process of reconstructing an artefact from damaged pieces by virtual assemblage and clay sculpting is summarized in
this paper. Surface models of the digitized fragments are first created and then manipulated in a virtual reality (VR)
environment using simple force feedback tools. The haptic device provides tactile cues that assist the user with the
assembly process and with introducing soft virtual clay into the resultant assemblage for complete 3D reconstruction. Since
reconstruction is performed within a VR environment, the joining or "gluing" of separate damaged fragments will permit
the scientist to investigate alternative relic configurations. Results from a preliminary experiment are presented to
illustrate the virtual assemblage procedure used to reconstruct fragmented or broken objects.
One technological challenge in microfluidic system design has been controlling the directional flow of minute amounts of fluid through various narrow channels. Stimuli-responsive polymers can be used as micro control devices such as valves because these materials significantly change their volumetric properties in response to small environmental changes in pH, temperature, solvent composition, or electric field. In this paper, a bi-layered hydrogel structure is introduced as a light activated microactuator. The first layer of the device is a light sensitive polymer network containing poly(vinyl alcohol) (PVA) and the retinal protein bacteriorhodopsin (bR). The second layer is a blend of PVA hydrogel and a pH sensitive polymer, polyethylenimine (PEI). When exposed to a light source with a peak response at 568 nm, the bR molecules in the first layer undergo a multistage photocycle that causes protons to be pumped into the surrounding medium. The diffusion of these similarly charged ions through the adjoining pH responsive hydrogel generates electrostatic repulsive and attractive forces which alter the osmotic pressure within the cross-linked polymer network. Depending upon the type of electrostatic forces generated, the pH sensitive hydrogel layer will swell or, alternatively, collapse. The multi-layered structure can be fabricated and inserted into the microchannel. The expanding volume of the actuating hydrogel is used to regulate flow or control leakage. Preliminary experiments on a 625 mm³ optical actuating device are presented to identify key response characteristics and illustrate the mechanism for actuation.
A bendable photocell array that exploits bioelectronic photoreceptors based on bacteriorhodopsin (bR) is described in this paper. Fabricating such a sensor array on a flexible plastic substrate introduces a new design approach that enables lightweight and durable non-planar sensing devices to be created with curved or spherical geometries. In this research, purple membrane patches obtained from wild-type bR are deposited onto a polyethylene terephthalate (PET) substrate coated with a patterned ITO layer using an electrophoretic sedimentation (EPS) technique. The current prototype consists of a flexible 4x4 pixel array and an amplification circuit that magnifies the small electrical signal arising from the charge displacement and recombination within the dried bR film. Each individual pixel is a 2mm x 2mm square separated by a 1mm distance between neighboring elements. The measured photoelectric response of an individual pixel is approximately linear over the light power range between 200μW and 12mW. These bR photocells respond primarily to visible light with a spectral peak response at 568nm. The response times of the photoelectric signals can reach the microsecond range. Preliminary tests have demonstrated that the photoresponse characteristics are maintained while the flexible substrate is deformed up to a 10mm bending radius. Unfortunately, dried bR photocells are inherently susceptible to electrical noise because of their extremely high film resistance, necessitating the use of a noise-filtering amplifier. The image processing capabilities of bR are demonstrated in a motion detection application. Specifically, Reichardt's delay-and-correlate algorithm is implemented and used to detect both the speed and direction of a moving light spot.
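The delay-and-correlate scheme can be sketched as a minimal elementary motion detector operating on two neighboring pixels: each arm multiplies the delayed signal of one photoreceptor with the undelayed signal of its neighbor, and subtracting the mirrored arm yields a signed output whose sign encodes direction. The fixed-lag delay and the Gaussian test signals below are illustrative assumptions, not the paper's implementation (a first-order low-pass filter is the more common delay model).

```python
import numpy as np

def reichardt_detector(left, right, delay=3):
    """Elementary Reichardt motion detector for two pixel signals.
    Returns a signed correlation signal: positive for left-to-right
    motion, negative for right-to-left motion."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    # Simple fixed-lag delay of each photoreceptor signal.
    d_left = np.concatenate([np.zeros(delay), left[:-delay]])
    d_right = np.concatenate([np.zeros(delay), right[:-delay]])
    # Correlate each delayed signal with the undelayed neighbour and
    # subtract the mirrored arm.
    return d_left * right - d_right * left

# A light spot moving left-to-right: the right pixel sees the same
# pulse three samples later than the left pixel.
t = np.arange(40)
pulse = np.exp(-0.5 * ((t - 15) / 2.0) ** 2)
shifted = np.exp(-0.5 * ((t - 18) / 2.0) ** 2)
out = reichardt_detector(pulse, shifted, delay=3)
print(out.sum() > 0)  # positive net response: left-to-right motion
```

The magnitude of the summed response also falls off as the spot speed departs from the delay tuning, which is how the detector can estimate speed as well as direction.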
The time and frequency response behavior of a new class of photodetectors based on a light-sensitive protein, known as bacteriorhodopsin (bR), is described. Each bR-based detector consists of an indium tin oxide (ITO) electrode/bR thin film/ITO electrode structure. The responses of the photodetector to square-wave and transient pulse illumination are simulated using an equivalent resistor-capacitor (RC) circuit and experimentally observed. The investigative study demonstrates that the physical dimensions of the sensor surface, the load resistance and capacitance, and the illumination conditions all have an impact on the transient response and gain-bandwidth characteristics. It is observed that changing the sensing area of the detector only affects the amplitude of the response, but not the bandwidth. Increasing the load resistance produces a higher gain, but reduces bandwidth. Increasing the load capacitance has the effect of dramatically reducing both gain and bandwidth. The observations and conclusions derived from this research provide design guidelines for developing hybrid photoelectric sensors and imaging arrays using bacteriorhodopsin thin films.
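The gain-bandwidth trade-off with load resistance can be illustrated with a first-order lumped RC sketch: a photocurrent step drives a differential spike v(t) = (i·R)·exp(-t/RC) across the load, so a larger R raises the peak (gain) while lowering the -3 dB cutoff 1/(2πRC). The 1 µA photocurrent and component values are assumptions for illustration; the paper's full equivalent circuit may contain additional elements.

```python
import numpy as np

def edge_response(t, R, C, i_photo=1e-6):
    """Differential spike at a square-wave illumination edge for a
    simplified RC model: v(t) = (i_photo * R) * exp(-t / (R * C)).
    i_photo is an assumed 1 uA photocurrent step."""
    return i_photo * R * np.exp(-t / (R * C))

C = 1e-9                                   # assumed load capacitance, 1 nF
t = np.linspace(0.0, 1e-3, 1000)
for R in (1e5, 1e6):                       # two candidate load resistances
    peak_mV = edge_response(t, R, C)[0] * 1e3
    bw_kHz = 1.0 / (2 * np.pi * R * C) / 1e3   # -3 dB bandwidth
    print(f"R = {R:.0e} ohm: peak = {peak_mV:.0f} mV, bandwidth = {bw_kHz:.2f} kHz")
```

Raising R from 100 kΩ to 1 MΩ multiplies the peak by ten while dividing the bandwidth by ten, matching the qualitative trend reported in the abstract.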
Accurate measurement and thorough documentation of excavated artifacts are essential tasks of archaeological fieldwork. The on-site recording and long-term preservation of fragile evidence can be improved using 3D spatial data acquisition and computer-aided modeling technologies. Once the artifact is digitized and its geometry created in a virtual environment, the scientist can manipulate the pieces to develop a "realistic" reconstruction of the object without physically handling or gluing the fragments. The ARCHAEO-SCAN system is a flexible, affordable 3D coordinate data acquisition and geometric modeling system for acquiring surface and shape information of small to medium sized artifacts and bone fragments. The shape measurement system is being developed to enable the field archaeologist to manually sweep the non-contact sensor head across the relic or artifact surface. A series of unique data acquisition, processing, registration and surface reconstruction algorithms are then used to integrate 3D coordinate information from multiple views into a single reference frame. A novel technique for automatically creating a hexahedral mesh of the recovered fragments is presented. The 3D model acquisition system is designed to operate from a standard laptop with minimal additional hardware and proprietary software support. The captured shape data can be pre-processed and displayed on site, stored digitally on a CD, or transmitted via the Internet to the researcher's home institution.
A micromanipulator based on the photo-thermal bending effect experienced by a beveled optical fiber is described in this paper. The micromanipulator design incorporates four fingers, two bendable fibers for actively grasping small objects and two stationary fibers to provide structural support while holding the object. Each finger is a 1mm diameter acrylic optic fiber with a 25mm beveled edge near the tip. The beveled edge is coated with a thin layer of black paint where the thickness has a measurable impact on the amount of tip deflection. A light beam, from a 150W halogen illuminator, is directed into the fixed end of the sculpted optic fiber causing the tip at the free end to deflect by approximately 50 microns. Several experiments are conducted to demonstrate that this simple microgripper is able to grasp, hold, and release a variety of small metal screws and ball bearings. Finite element analysis is used to further investigate the physical properties of the optical actuator. The theoretical deflections are slightly greater than the experimentally observed values. The FEM analysis is also used to estimate the maximum force (~ 0.7mN) generated at the actuator tip during deflection.
Bacteriorhodopsin (bR) thin films have been investigated in recent years as a viable biomaterial for constructing micro- or nanoscale optical devices. During illumination, the bR molecules in the thin film undergo a photocycle that is accompanied by proton transport from the cytoplasmic side to the extracellular side of the cell membrane. The photoelectric response induced by the charge displacement can be influenced by both the wavelength and intensity of the impinging light sources. A photocell based on the photoelectric properties of a thin bR film is described in this paper. The bR-based photocell is built as a sandwich-structured device with an ITO (indium tin oxide) electrode/bR film/ITO electrode configuration. The photocell is fabricated by depositing the oriented bR film onto the grounded ITO electrode. The cytoplasmic side of the bR membrane is attached to the ITO conductive surface and the extracellular side is placed in contact with the second ITO electrode that provides the signal input to the instrumentation circuit. A polyester thin film was used as the spacer separating the two ITO electrodes. The size of the active area of the photocell is about 10×10 mm. A HeNe laser coupled with an acousto-optical scanning system is used as the light source. Experimental results confirm that the photoelectric response generated by the bR-photocell prototype is durable, stable, and highly sensitive to changes in light intensity. The sensitivity of the proposed signal transducer is 10.25mV/mW. The wavelength dependence of the photoelectric responses is similar to the optical absorption spectrum of the bR membrane.
Indirect optical methods of mechanical actuation exploit the ability of high intensity light sources to generate heat, and thereby influence the thermal properties of gases, fluids or solids. Optical actuators that utilize this photo-thermal effect for creating structural displacement often produce very large power-to-weight ratios. This paper describes the basic concept and operation of two optically driven micro-mechanisms that use the shape memory effect of 50/50 nickel-titanium (NiTi) material to generate the desired force and displacement. Shape memory alloys (SMA) such as NiTi exhibit reproducible phase transformation effects when undergoing repetitive heating and cooling cycles. Increasing the temperature above the ambient conditions of a pre-loaded NiTi wire or foil will cause the material element to undergo a martensite-to-austenite phase transformation and move the position of an attached load a distance of approximately 4% of the overall length. The reduction in length can be recovered by cooling the SMA material back to the original temperature. The number of times the NiTi material can exhibit this shape memory effect is dependent upon the amount of strain and, consequently, the total distance through which the actuating material is displaced. The proposed devices use a focused high-intensity light source to provide both the energy and control signal needed to activate a simple wire-shaped SMA element in a microcantilever beam and an SMA thin film in a diaphragm micropump.
Optical controllers exploit lightwave technologies to implement different control strategies. It is possible to replace many of the electrical and mechanical components found in traditional linear and nonlinear controllers with optical analogues that increase the speed of signal processing or enhance system sensitivity. These optical sensors, switches, communication links, and actuators are largely immune from electromagnetic interference, exhibit low signal attenuation, provide secure flow of information, and are safe in hazardous or explosive environments. In addition, the energy in a light beam is one of the easiest forms of energy to shape and transmit through free space. The fundamental characteristics of several "control-by-light" systems are discussed in this paper. The proposed control system utilizes an acousto-optic deflector (AOD) to change the direction of the reshaped laser beam in response to the feedback error signal. The deflected beam strikes an array of photodetectors where each discrete detector represents a specific control action. One- and two-dimensional detector array configurations are explored for control. Although the controller designs can be implemented on optical breadboards using off-the-shelf optical devices, recent advances in nanotechnology would allow similar micro-scale optical controllers to be fabricated at low cost.
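The detector-array scheme amounts to a quantized control law: the AOD deflection angle is proportional to the error signal, and whichever detector the beam lands on selects a discrete control action. The sketch below is an illustrative software model of that mapping; the number of detectors, error range, and action values are assumptions, not the paper's hardware parameters.

```python
def optical_controller_output(error, actions, e_max=1.0):
    """Model of a one-dimensional 'control-by-light' stage: the beam
    deflection is proportional to the error, and the photodetector it
    strikes (index into `actions`) determines the control output."""
    n = len(actions)
    # Map error in [-e_max, e_max] to a detector index 0..n-1,
    # clamping errors that would deflect the beam off the array.
    idx = int((error + e_max) / (2 * e_max) * n)
    idx = min(max(idx, 0), n - 1)
    return actions[idx]

# Five detectors implementing a coarse proportional control law.
actions = [-1.0, -0.5, 0.0, 0.5, 1.0]
print(optical_controller_output(-0.8, actions))  # -> -1.0
print(optical_controller_output(0.05, actions))  # -> 0.0
```

A two-dimensional detector array extends the same idea to two error inputs, one per deflection axis.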
Light activated optical circuits have several key advantages over conventional electronics because they are free from the electrical current losses, resistive heat dissipation, and friction forces that greatly diminish system performance and efficiency. The effects of current leakage and power loss are also crucial design constraints in developing micro-electromechanical systems (MEMS) technology. An essential device for creating viable micro-optical circuitry is a robust photonic transistor that can act as a small signal switch and amplifier. The proposed photonic transistor is based on the complementary suppression-modulated light transmission properties of thin bacteriorhodopsin (bR) films. The light transmission properties exhibited by the thin film are controlled using the variable wavelength and intensity of the impinging light sources. The light transmission properties of the bR film are illustrated using a mathematical model for the two-state photoreaction system. The two-state model represents the longest lifetime in the bR photocycle, the largest change in absorption maxima, and high photochemical stability. The optical response is proportional to changes in the light transmission properties of the biomaterial, which therefore represents a viable material for creating optoelectronic devices.
Multiple off-the-shelf cameras can be configured to simultaneously provide redundant data, complementary information, and fast processing through sensor parallelism. The redundancy in the captured data can increase the accuracy of scene interpretation and improve system reliability by reducing the overall uncertainty associated with feature classification. Complementary information extracted from several cameras allows novel features in the environment to be identified that are normally impossible to detect with an individual CCD camera or range scanner. An unsolved problem in using multiple cameras for part identification or fault detection is associating the image features captured by one camera with that from another camera, or the same camera at a different point in time. In this paper, a spherical self-organizing feature map (SOFM) is used to combine and correlate both redundant and complementary features extracted from the images acquired by a multiple camera system. An important feature of the proposed technique is that the spherical SOFM develops a topologically ordered representation of the feature vectors derived from a high-dimensional input space. The unsupervised learning algorithm exploits hidden redundancies in the data set and ensures that 'similar' feature vectors will be assigned to cluster units that lie in identifiable neighborhoods on the spherical lattice. To illustrate the proposed methodology, a spherical SOFM that classifies the feature vectors acquired by a trinocular camera system is described.
An experimental shape measurement system that does not require peripheral sensors to track the position and orientation of the sensor-head is described in this paper. The prototype consists principally of a multiple-line light projector and a CCD camera. The light projection unit uses a low-power diode laser with a single line-generator and several dichroic cube beamsplitters. This simple hardware configuration creates three parallel line profiles with unique intensity values due to the transmission and reflection properties of the constituent beamsplitters. To eliminate background information, a bandpass filter with a peak response near the wavelength of the laser source is placed over the camera lens. The CCD
camera acquires images of distorted light patterns as the sensor-head is swept across the object surface. The coordinate points of the parallel profiles in each view are recovered relative to the sensor head using a nonlinear image-to-object coordinate calibration technique. Where partial overlap exists between adjacent views, a multi-view registration algorithm can be applied. In the proposed registration method, the geometry of the surface in each view is approximated prior to computation of the sensor transformation. These synthetic surfaces are used to establish the corresponding features in adjacent views that are necessary to compute the translation and rotation parameters of the sensor-head. The potential of the surface-measurement method is demonstrated using narrow overlapping views.
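Once corresponding features in adjacent views have been established, the sensor-head translation and rotation can be computed in closed form. A standard way to do this is the Kabsch/SVD least-squares rigid transform between the two corresponding point sets; the sketch below uses synthetic noise-free correspondences and is a generic illustration of this step, not necessarily the exact solver used in the paper.

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid transform (Kabsch/SVD) mapping point set A
    onto the corresponding point set B: returns R, t with B ~ A R^T + t."""
    cA, cB = A.mean(0), B.mean(0)
    H = (A - cA).T @ (B - cB)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper (reflection) solution if it occurs.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cB - R @ cA
    return R, t

# Synthetic corresponding points from two adjacent sensor-head poses.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
B = A @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(A, B)
print(np.allclose(R, R_true))  # rotation recovered exactly (no noise)
```

With real, noisy correspondences the same computation returns the least-squares estimate, which is why establishing good correspondences on the approximated surfaces is the critical step.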
Industrial lasers are used extensively in modern manufacturing for a variety of applications because these tools provide a highly focused energy source that can be easily transmitted and manipulated for micro-machining. The quantity of material removed and the roughness of the finished surface are a function of the crater geometry formed by a laser pulse with specific energy (power). Laser micro-machining is, however, a complex nonlinear process with numerous stochastic parameters related to the laser apparatus and the material specimen. Consequently, the operator must manually set the process control parameters by trial and error. This paper describes how an artificial neural network can be used to create a nonlinear model of the laser material-removal process in order to automate micro-machining tasks. The multi-layered neural network predicts the pulse energy needed to create a crater of specific depth and average diameter. Laser pulses of different energy levels are impinged on the surface of the test material in order to investigate the effect of pulse energy on the resulting crater geometry and volume of material removed. Experimentally acquired data from several sample materials are used to train and test the network's performance. The key system inputs for the modeler are mean depth of crater and mean diameter of crater, and the system outputs are pulse energy, variance of depth and variance of diameter. The preliminary study using the experimentally acquired data demonstrates that the proposed network can simulate the behavior of the physical process to a high degree of accuracy. Future work involves investigating the effect of different input parameters on the output behavior of the process in hopes that the process performance, and the final product quality, can be improved.
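The multi-layered network described above can be sketched as a small feedforward model trained by gradient descent. The network size, the synthetic monotonic depth/diameter-to-energy relationship, and the single (rather than three) output are illustrative assumptions standing in for the paper's experimental data and architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the experimental data: pulse energy grows
# monotonically with mean crater depth and diameter (all normalized).
X = rng.uniform(0.0, 1.0, size=(200, 2))          # [mean depth, mean diameter]
y = (0.6 * X[:, :1] + 0.4 * X[:, 1:2]) ** 1.5     # "pulse energy"

# One hidden layer of tanh units, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # predicted pulse energy
    err = pred - y
    # Backpropagation of the mean squared error.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(f"training MSE: {mse:.5f}")
```

In the paper's setting the trained weights would be queried in the same forward-pass fashion to select the pulse energy for a desired crater depth and diameter.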
Recognition of free-form objects is a difficult task in a variety of engineering applications such as reverse engineering and product inspection. Most recognition systems can handle polyhedral objects that are defined by a set of primitives such as vertices, edges, or planar faces. However, free-form shapes have curved surfaces and often lack identifiable markers such as corners or sharp discontinuities. This paper presents a novel approach to creating structured representations of free-form surfaces that can be used for object recognition. The proposed method maps the three-dimensional coordinate data acquired by a range sensor onto a spherical self-organizing feature map (SOFM). The adaptation algorithm of the SOFM develops a topological order to the measured coordinate data such that connected nodes on the spherical map represent neighboring points on the object surface. Features are then extracted at each node of the SOFM. The feature vector is computed using a simple function that relates the node's positional vector to each of its neighboring nodes, within a circular area of one unit radius, in the SOFM. The feature vectors are used to establish a correspondence between the spherical map generated by an unknown free-form shape and the maps of all the reference models. Any two free-form shapes can be matched for recognition purposes by registering the spherical SOFMs and determining the minimum registration error. This approach enables the unknown object to be in an arbitrary orientation. An experimental study is presented in order to demonstrate the effectiveness of this approach. The spatial coordinate data of a human foot and a toy in the shape of a pelican are used for illustrative purposes.
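The SOFM adaptation step can be sketched with a minimal self-organizing map trained on 3D coordinate data. For brevity the sketch uses a 1D ring lattice as a stand-in for the paper's spherical lattice, and the learning-rate and neighborhood schedules are generic assumptions; the principle is the same, i.e. neighboring units come to represent neighboring surface points.

```python
import numpy as np

def train_sofm(points, n_units=32, epochs=30, seed=0):
    """Minimal ring-lattice SOFM trained on 3D coordinate data.
    Each sample pulls its best matching unit (BMU) and, more weakly,
    the BMU's lattice neighbours toward the sample."""
    rng = np.random.default_rng(seed)
    W = points[rng.choice(len(points), n_units)].copy()
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs) + 0.01          # decaying learning rate
        sigma = max(n_units / 4 * (1 - epoch / epochs), 1.0)  # shrinking neighbourhood
        for x in points[rng.permutation(len(points))]:
            bmu = np.argmin(((W - x) ** 2).sum(1))
            ring_d = np.abs(np.arange(n_units) - bmu)
            ring_d = np.minimum(ring_d, n_units - ring_d)  # wrap-around distance
            h = np.exp(-(ring_d ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)
    return W

# Points sampled from a unit circle in 3D: after adaptation the unit
# weights should lie on (near) the same circle, in topological order.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
W = train_sofm(pts)
print(np.abs(np.linalg.norm(W, axis=1) - 1).max())  # small: units sit near the data
```

In the paper's method a node's feature vector is then derived from its position relative to its lattice neighbors, which is what makes the representation pose-insensitive.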
An optical transducer based on the light modulated transmission properties of bacteriorhodopsin (bR) film is described in this paper. The bR protein molecules undergo a complex photocycle when absorbing light energy that is characterized by several measurable states. The most relevant states in the photocycle for this application are the initial B state (λmax= 570 nm) and the longest lived M intermediate state (λmax= 410 nm). If a yellow light source with a wavelength of approximately 570 nm and a second deep blue source at 410 nm illuminate the same region of the thin bR film, the two beams will mutually suppress the optical transmission properties of the thin film and reduce the intensity of the light output. The suppression-modulated transmission mechanism of the bR polymeric film is, therefore, controlled by the intensity and wavelength of the two light sources. Based on this simple mechanism, a number of different protein-based optical devices have been proposed in the literature for optical signal and information processing. The focus of this research is to exploit the light transmission properties of the bR film to develop efficient optical transducers that can be easily interfaced with micro-electro-mechanical systems for mechatronic applications. The proposed transducer design is activated by an external light source and free from electrical noise. Illustrations of how thin bR film can be used for the modulation of light intensity, optical switches, and logic gates are presented.
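The mutual-suppression mechanism can be illustrated with a two-state (B ↔ M) rate model: yellow light drives B → M, while blue light and thermal relaxation drive M → B. Since the M state does not absorb at 570 nm, the M population sets the yellow transmission, and adding blue light suppresses it. The rate constants below are illustrative assumptions, not measured values.

```python
def br_two_state(I_yellow, I_blue, k_bm=1.0, k_mb=1.0, k_relax=0.1,
                 dt=0.01, steps=5000):
    """Euler integration of the two-state B <-> M photoreaction model.
    I_yellow drives B -> M; I_blue and thermal relaxation drive M -> B.
    Returns the steady-state fraction of molecules in the M state."""
    M = 0.0
    for _ in range(steps):
        B = 1.0 - M
        dM = (k_bm * I_yellow * B - (k_mb * I_blue + k_relax) * M) * dt
        M += dM
    return M

# Yellow transmission tracks the M population, so blue illumination
# (repopulating B) reduces it.
m_yellow_only = br_two_state(I_yellow=1.0, I_blue=0.0)
m_both = br_two_state(I_yellow=1.0, I_blue=1.0)
print(m_yellow_only > m_both)  # True: blue light suppresses transmission
```

This intensity-and-wavelength control of the film's transmission is the mechanism behind the proposed optical switches and logic gates.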
An important task in reverse engineering and computer-aided design applications is to create a mathematical model of surface geometry based on coordinate measurements. A two-step technique that fits parametric surfaces to partial or whole human body measurements for free-form surface reconstruction is described in this paper. The first step of the proposed technique employs a self-organizing feature map to adaptively parameterize non-uniformly spaced coordinate points. The second step uses a Bernstein Basis Function (BBF) network to fit a deformable Bezier surface to the parameterized data. Once the adaptation phase is complete, the weights of the BBF network can be utilized by a variety of commercially available geometric modeling and CAD/CAM packages for shape reconstruction. An experimental study is presented to demonstrate the effectiveness of the BBF network for generating smooth Bezier surfaces of complex anatomical shapes.
A biosensor telemetry system for the on-line remote monitoring of toxic sites is described in this paper. The device is a self-contained field measurement system that employs immobilized luminescent Vibrio fisheri bacteria to detect airborne contaminants. The presence of toxic chemicals in the air will lead to a measurable decrease in the intensity of light produced by the bacteria population. Both cellular and environmental factors control the level of bioluminescence exhibited by the bacteria. The biological sensing element is placed inside a miniature airflow chamber that houses a light-to-frequency transducer, power supply, and radio-frequency (RF) transmitter to convert the intensity of bioluminescence exhibited by the bacteria population into a radio signal that is picked up by an RF receiver at a safe location. The miniature biosensor can be transported to the site under investigation on either a terrestrial or airborne robotic vehicle. Furthermore, numerous spatially distributed biosensors can be used to map both the extent and the rate of change in the dispersion of the hazardous contaminants over a large geographical area.
The demand to provide greater flexibility in surface-geometry measurement of free-form objects has led to range sensors which are permitted continuous free motion. However, most devices employ mechanical linkages instrumented with position sensors, or optical or magnetic tracking sensors, to determine the position and orientation of the moving range-sensor head. These additional sensors limit the working volume, impede the free movement of the range sensor, or restrict the materials allowed in the work environment. This paper presents a method of surface-geometry measurement by a laser-camera range-sensor head, which is permitted continuous motion in space, without the need to track its position and orientation by additional sensors.
This paper presents a technique for reconstructing smooth closed Bezier surfaces from coordinate measurements based on a Bernstein Basis Function (BBF) network. While various neural networks, such as the backpropagation network and radial basis function networks, have been effective in functional approximation and surface fitting, these networks produce system-dependent solutions that are not easily transferable to commercially available design software. The BBF network has an advantage over other networks by directly employing the same Bernstein polynomial basis functions that are used in describing Bezier surfaces. The BBF network is capable of implementing a close approximation to any continuous nonlinear mapping by forming a linear combination of nonlinear Bernstein polynomial basis functions. Changing the number of basis neurons in the network architecture is equivalent to modifying the degree of the Bernstein polynomials. Complex smooth surfaces can be reconstructed by using several simultaneously updated networks, each corresponding to a separate surface patch. A smooth transition between adjacent Bezier surface patches can be achieved by imposing additional positional C0 and tangential C1 continuity constraints on the weights during the adaptation process. Once adapted, the final weights of the networks correspond to the control points of the Bezier surface, and can therefore be used directly in commercial CAD software packages that utilize parametric modelers.
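The correspondence between network weights and control points follows directly from how a tensor-product Bezier surface is evaluated: each output is a linear combination of the control points weighted by Bernstein basis values. A minimal evaluation sketch (generic Bezier mathematics, not the paper's training code):

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t) = C(n,i) t^i (1-t)^(n-i)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_surface_point(P, u, v):
    """Evaluate a tensor-product Bezier surface at (u, v).
    P has shape (n+1, m+1, 3): the control points, i.e. the adapted
    weights of the BBF network."""
    n, m = P.shape[0] - 1, P.shape[1] - 1
    Bu = np.array([bernstein(n, i, u) for i in range(n + 1)])
    Bv = np.array([bernstein(m, j, v) for j in range(m + 1)])
    return np.einsum('i,j,ijk->k', Bu, Bv, P)

# A flat 3x3 control grid over the unit square: the surface reproduces
# the plane z = 0, e.g. the centre maps to (0.5, 0.5, 0).
P = np.zeros((3, 3, 3))
P[..., 0], P[..., 1] = np.meshgrid(np.linspace(0, 1, 3),
                                   np.linspace(0, 1, 3), indexing='ij')
print(bezier_surface_point(P, 0.5, 0.5))
```

Because the basis functions are fixed, adapting the network reduces to a linear-in-the-weights fit, which is why the trained weights transfer directly into CAD packages as control points.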
The rapid detection of toxic contaminants released into the air by chemical processing facilities is a high priority for many manufacturers. This paper describes a novel biosensor for the remote monitoring of toxic sites. The proposed biosensor is a measurement system that employs immobilized luminescent Vibrio fisheri bacteria to detect airborne contaminants. The presence of toxic chemicals will lead to a detectable decrease in the intensity of light produced by the bacteria. Both cellular and environmental factors control the bioluminescence of these bacteria. Important design factors are the appropriate cell growth media, environmental toxicity, oxygen and cell concentrations. The luminescent bacteria are immobilized on polyvinyl alcohol (PVA) gels and placed inside a specially constructed, miniature flow cell which houses a transducer, power source, and transmitter to convert the light signal information into radio frequencies that are picked up by a receiver at a remote location. The biosensor prototype is designed to function either as a single unit mounted on an exploratory robot or numerous units spatially distributed throughout a contaminated environment for remote sensing applications.
Multiple off-the-shelf cameras can be configured to simultaneously provide a large variety of part features that are impossible to capture with a single CCD camera or range scanner. One unsolved problem in using several cameras for passive shape recognition is that of multi-view registration. Registration is the process of associating the feature vectors extracted from the image captured by one camera view with those from another view. This paper describes an unsupervised clustering algorithm used to associate redundant and complementary features extracted from different views of a 3D object for part identification and inspection. The unsupervised learning algorithm ensures that 'similar' feature vectors will be assigned to cluster units that lie in close spatial proximity in a 3D feature map. The technique reduces the dimensionality of the input by exploiting hidden redundancies in the training data. During the inspection phase, novel features activate a number of cluster units that have weights similar to the applied training input. If the sum-of-square error between the input and the weights of the cluster unit with the strongest response is greater than a predefined tolerance, then the part is rejected. A simulation study is presented to illustrate how the proposed multi-sensor fusion technique can be applied to identifying parts for inspection.
The determination of point correspondences between range images is used in computer vision for range image registration and object recognition. The use of a spin image as a feature for matching has had considerable success in object recognition. However, in registration, refinement by iterative methods has been required. This paper presents a method of determining point correspondences from the surface geometry in a local region surrounding each point. The technique is developed for range images which have little movement between viewpoints, and which consist of only a few profiles each. The method involves fitting surface patches to the surfaces of the two successive views, creating spin-image features at a few points of each patch in one view, and determining the best match of features on the previous reference view using a localized interpolating search. The sets of corresponding points of the two successive range views are then used directly to compute the registration transformation between views. This computation effectively refines the correspondences by minimizing the residual errors. The technique is demonstrated using a pair of synthetic range views, derived from a range image of an object with a free-form surface.
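A spin image at an oriented point (p, n) maps every neighboring point to two pose-invariant cylindrical coordinates, α (radial distance from the surface-normal line) and β (signed height along the normal), and accumulates them into a 2D histogram. The sketch below is the standard construction on a random point cloud; the bin size and image dimensions are illustrative assumptions.

```python
import numpy as np

def spin_image(points, p, n, bin_size=0.1, size=8):
    """Spin image at oriented point (p, n): each neighbour maps to
    (alpha, beta) cylindrical coordinates about the normal line and is
    binned into a size x size histogram."""
    n = n / np.linalg.norm(n)
    d = points - p
    beta = d @ n                                          # height along the normal
    alpha = np.sqrt(np.maximum((d * d).sum(1) - beta**2, 0.0))
    img = np.zeros((size, size))
    i = np.floor(beta / bin_size + size / 2).astype(int)  # centre beta = 0
    j = np.floor(alpha / bin_size).astype(int)
    ok = (i >= 0) & (i < size) & (j >= 0) & (j < size)
    np.add.at(img, (i[ok], j[ok]), 1)
    return img

# Spin images are pose-invariant: rotating the whole cloud (and the
# basis point and normal with it) leaves the histogram unchanged.
rng = np.random.default_rng(1)
pts = rng.normal(size=(500, 3))
p, nrm = pts[0], np.array([0.0, 0.0, 1.0])
A = spin_image(pts, p, nrm)
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
B = spin_image(pts @ R.T, p @ R.T, R @ nrm)
print(np.array_equal(A, B))  # expected True (up to bin-edge rounding)
```

This viewpoint invariance is what allows spin-image features computed on one view to be searched for on the previous reference view.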
The synergistic use of data acquired from different sensors will enable autonomous manufacturing equipment to make faster and more intelligent decisions about the current status of the workspace. Multisensor data fusion deals with mathematical and statistical issues arising from the combination of different sources of sensory information into a single representational format. A fundamental problem in data fusion is associating the data captured by one sensor with that from another sensor or the same sensor at a different point in time. This paper describes a non-statistical unsupervised hierarchical clustering algorithm used to associate the complementary feature vectors extracted from different data sets. Each level in the hierarchy consists of one or more self-organizing feature maps that contain a small number of cluster units based on the combined feature set derived from the original data. The unsupervised learning algorithm ensures that 'similar' feature vectors will be assigned to cluster units that lie in close spatial proximity in the feature map. If the sum-of-square error for the feature vectors associated with a cluster unit is greater than a predefined tolerance, then those vectors are used to create another feature map at the next level of the hierarchy. This growing procedure enables the feature set to control the number of cluster units generated. The hierarchical structure provides an efficient mechanism to deal with uncertainties in correct classification. Experimental studies are presented in order to illustrate the robustness of this technique.
Parallel approaches for packing arbitrary 3D objects into fixed volumes are characterized by rearranging all of the parts simultaneously and evaluating the results. The practical application of each proposed approach to real world problems has been hindered by the computational time required to find a solution or over simplifications made to reduce the time required. A serial approach is proposed in this paper that reduces the complexity of the problem domain by packing each object one at a time as `best as possible', thus more closely emulating the way a human might arrange items in the trunk of a car. This technique has enabled the implementation of an efficient packing algorithm that is not limited by working with the object's bounding boxes or by a restricted set of permissible orientations. Preliminary tests demonstrate that the technique reduces computational times, on average, by a factor of 19 or more compared to an existing technique. Furthermore, the new approach is guaranteed to produce a viable packing arrangement for a subset of the parts even if every part cannot possibly be accommodated in the available volume, a typical situation found in rapid prototyping service bureaus. The same cannot be said for existing parallel packing algorithm implementations.
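The one-at-a-time strategy can be sketched with a deliberately simplified greedy placer. Note the simplifications: the sketch uses axis-aligned boxes on a unit grid and ignores orientations, whereas the paper's algorithm is explicitly not limited to bounding boxes or restricted orientations. The point illustrated is the serial "best as possible" order and the guaranteed viable partial packing.

```python
from itertools import product

def pack_one_at_a_time(parts, bin_dims):
    """Greedy serial placement: largest parts first, lowest feasible position first.
    Parts are axis-aligned (w, d, h) boxes on a unit grid -- a strong
    simplification of the actual geometry-based algorithm."""
    W, D, H = bin_dims
    occupied = set()                        # unit cells already filled
    placed, rejected = [], []
    for part in sorted(parts, key=lambda p: p[0] * p[1] * p[2], reverse=True):
        w, d, h = part
        for z, y, x in product(range(H - h + 1), range(D - d + 1), range(W - w + 1)):
            cells = {(x + i, y + j, z + k)
                     for i in range(w) for j in range(d) for k in range(h)}
            if not cells & occupied:        # first feasible (lowest) position wins
                occupied |= cells
                placed.append((part, (x, y, z)))
                break
        else:
            rejected.append(part)           # the rest still form a viable packing
    return placed, rejected

placed, rejected = pack_one_at_a_time([(2, 2, 2), (1, 1, 1), (3, 3, 3)],
                                      bin_dims=(3, 3, 3))
# the 3x3x3 part fills the bin; the smaller parts are rejected but the
# placements already made remain a usable arrangement
```

Even when not everything fits, the algorithm returns the parts it did place, mirroring the guarantee claimed for the serial approach.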
Registration of conventional range images from multiple viewpoints has generally relied on redundant information from 10,000-100,000 points per image. Continuous scanning by laser-camera sensors without viewpoint knowledge requires the ability to register and integrate narrow range views in close sequence, in order to minimize redundant data acquisition, permit high acquisition speed and reduce viewpoint planning. This paper presents a method of registering and integrating narrow and spatiotemporally-dense range views, which consist of only three profiles each, without sensor pose information.
Many computer vision, computer graphics and computer-aided design applications require mathematical models of existing objects to be generated from measured surface points. The geometric model of a complex surface can be created by joining numerous low-order bi-parametric surface patches, and adjusting the control parameters such that the constituent patches meet seamlessly at their common boundaries. In this paper a two-layer neural network, called the Bernstein Basis Function (BBF) network, is proposed for computing the control points of a defining polygon net that will generate a Bezier surface that 'best' approximates the data in a local segmented region. Complex surfaces are reconstructed by using several simultaneously updated networks, each corresponding to a separate surface patch. A smooth transition between the adjacent Bezier surface patches is achieved by imposing additional positional and tangential continuity constraints on the weights during the adaptation process. This method is illustrated by adaptively stitching together several patches to form a smooth surface.
Commercially available orthopaedic implants used to replace a fractured or damaged radial head in the elbow are limited because the simplified axisymmetric design only approximates the normal bone anatomy. An implant that more closely approximates the normal anatomy of the radial head is likely to be superior to those of standard shapes and sizes. This paper provides a description of how reverse engineering technology is being used to replicate the geometry of the radial head from computed tomography imagery. Reverse engineering is the process of generating accurate 3D CAD models of free-form surfaces from measured coordinate data. In this application, shape information of the bone is extracted from CT images, translated into global coordinates, and transferred to a CAD software package in order to generate a solid model of the radial head region. The solid model is formed by creating contours from edge points, lofting these contours, and then joining the lofted contours. The tool-path for machining the implant device on a computer numerically controlled milling machine is generated from the solid model. The results of an experiment are presented in order to demonstrate the effectiveness of this approach to reverse engineering and manufacturing radial head replacements.
An initial step in goal-oriented dynamic vision is tracking a nonstationary object, or target, and maintaining its position in the center of the field-of-view for detailed analysis. Any image analysis performed by a dynamic vision system must be able to clearly distinguish between the image flow generated by the changing position of the camera and by the movement of potential targets. Many image-based motion analysis techniques are, however, unable to deal effectively with the complexities of dynamic vision because they attempt to calculate true velocities and accurately reconstruct 3D depth information from spatial and temporal gradients. An alternative pattern classification technique has been developed for qualitatively identifying regions in the image plane which most likely correspond to moving targets. This approach is based on the notion that all projected velocities arising from a camera moving through a rigid environment will lie along a line in the local velocity space. Each point on this constraint line maps to a circle that represents all corresponding velocities that are parallel to the direction of the spatial gradient. If the camera motion is known, then the gradient-parallel velocity vectors associated with an independently moving target are unlikely to fall in the region arising from the union of all the circles generated by the points along the constraint line. Imprecise or approximate knowledge of the camera motion can be utilized if the projected velocities associated with the constraint line are modeled as radial fuzzy sets with supports in the local velocity space. Homogeneous regions in the image plane that violate these camera velocity constraints become possible fixation points for advanced tracking and detailed scene analysis.
Reverse engineering is the process of generating accurate three-dimensional CAD models from measured surface data. The coordinate data is segmented and then approximated by numerous parametric surface patches for an economized CAD representation. Most parametric surface fitting techniques manipulate large non-square matrices in order to interpolate all points. Furthermore, the interpolation process often generates high-order polynomials that produce undesirable oscillations on the reconstructed surface. The Bernstein basis function (BBF) network is an adaptive approach to surface approximation that enables a Bezier surface to be reconstructed from measured data with a pre-determined degree of accuracy. The BBF network is a two-layer architecture that performs a weighted summation of Bernstein polynomial basis functions. Modifying the number of basis neurons is equivalent to changing the degree of the Bernstein polynomials. An increase in the number of neurons will improve surface approximation, however, too many neurons will greatly diminish the network's ability to correctly interpolate the surface between the measured points. The weights of the network represent the control points of the defining polygon net used to generate the desired Bezier surface. The location of the weights are determined by a least-mean square (LMS) learning algorithm. Once the learning phase is complete, the weights can be used as control points for surface reconstruction by any CAD/CAM system that utilizes parametric modeling techniques.
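The BBF network's weight update can be sketched as a delta rule over tensor-product Bernstein basis activations, where the learned weights are the Bezier control points. The sketch below is an illustrative reconstruction under stated assumptions: a normalized LMS step, an arbitrary learning rate and epoch count, and a simple plane as the measured surface; none of these values come from the paper.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_i^n(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bbf_fit(samples, degree=3, lr=0.5, epochs=500):
    """Normalized-LMS estimate of the (degree+1)^2 control points of a Bezier
    patch from height samples ((u, v), z) with u, v in [0, 1]."""
    n = degree
    P = np.zeros((n + 1, n + 1))                      # network weights = control points
    for _ in range(epochs):
        for (u, v), z in samples:
            basis = np.outer([bernstein(n, i, u) for i in range(n + 1)],
                             [bernstein(n, j, v) for j in range(n + 1)])
            err = z - np.sum(basis * P)               # network output vs. measurement
            P += lr * err * basis / np.sum(basis**2)  # normalized delta rule
    return P

# heights measured from the plane z = u + v, which a cubic patch represents exactly
samples = [((u / 4, v / 4), u / 4 + v / 4) for u in range(5) for v in range(5)]
P = bbf_fit(samples)
approx = sum(bernstein(3, i, 0.5) * bernstein(3, j, 0.5) * P[i, j]
             for i in range(4) for j in range(4))     # patch height at (0.5, 0.5)
```

After training, the 4x4 weight array P can be handed to any CAD system as the defining polygon net of the reconstructed Bezier patch.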
A neural network approach that automatically maps measured 2D image coordinates to 3D object coordinates for shape reconstruction is described. The appropriately trained radial-basis function network eliminates the need for rigorous calibration procedures. The training and test data are obtained by capturing successive images of the intersection points between a projected light line and horizontal strips on a calibration bar. Once trained, the 3D object space coordinates that correspond to an illuminated pixel in the image plane are determined from the neural network. In addition, the generalization capabilities of the neural network enable the intermediate points to be interpolated. An experimental study is presented in order to demonstrate the effectiveness of this approach to 3D measurement and reconstruction.
Rapid detection of independently moving objects by a moving camera system is essential for automatic target recognition (ATR). The image analysis performed by the ATR system must be able to clearly distinguish between the image flow generated by the changing position of the camera and the movement of potential targets. In this paper, a qualitative motion detection algorithm that can deal with imprecise knowledge of camera movement is described. This algorithm is based on the notion that the true velocity at any point on an image, arising from a camera moving through a rigid environment, will lie on a 1D locus in the v_x-v_y velocity space. Each point on this line maps to a constraint circle that represents all components of the true velocity that are parallel to the direction of the spatial gray-scale gradient. If the camera motion is known, then an independently moving target can be detected because the corresponding gradient-parallel components of velocity are unlikely to fall in the constraint region arising from the union of all the circles generated by the points along the 1D locus. The algorithm is made more robust by modeling the projected camera velocities as radial fuzzy sets with supports in the 2D velocity space. Approximate knowledge of the translational and rotational components of camera motion can be used to define the parameters of the corresponding fuzzy constraint region. In terms of detecting independently moving targets, the algorithm tags the gradient-parallel velocity vectors that violate this fuzzy constraint on camera motion. An estimate of the true velocity is computed only at the pixel locations that violate the constraint. To illustrate this approach, a simulation study involving a translating camera system and an independently moving target is presented.
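The fuzzy constraint test can be sketched numerically: each sampled point v0 on the camera-motion locus contributes the circle of gradient-parallel components (center v0/2, radius |v0|/2, passing through the origin), and a radial fuzzy membership decays with distance from the union of those circles. The constraint-line samples, spread parameter, and test velocities below are all illustrative assumptions.

```python
import numpy as np

def constraint_membership(v_meas, line_pts, spread=0.1):
    """Fuzzy degree to which a measured gradient-parallel velocity is explainable
    by camera motion.  line_pts samples the camera-motion constraint line; each
    sample v0 maps to the circle with center v0/2 and radius |v0|/2."""
    centers = line_pts / 2.0
    radii = np.linalg.norm(line_pts, axis=1) / 2.0
    # distance from the measurement to the nearest constraint circle
    dist = np.abs(np.linalg.norm(v_meas - centers, axis=1) - radii)
    # radial fuzzy set: full membership on a circle, decaying with distance
    return float(np.exp(-(dist.min() / spread) ** 2))

# constraint line for a purely translating camera (sampled along v_x)
line = np.column_stack([np.linspace(0.5, 2.0, 50), np.zeros(50)])

v_camera = np.array([0.5, 0.5])   # lies on the circle for v0 near (1, 0)
v_target = np.array([-1.0, 0.5])  # violates the camera-motion constraint
# high membership -> consistent with camera motion; low -> tag as moving target
```

Pixels whose memberships fall below a chosen cutoff would be the ones tagged for a full velocity estimate.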
Reverse engineering is the process of generating accurate 3D CAD models of manufactured parts from measured coordinate data. The 3D coordinate data can be acquired from non-contact laser scanning machines or contact coordinate-measuring machines. Prior to creating the CAD model, it is necessary to segment the dense data into regions that are free of any sharp changes in the surface shapes. These segmented regions are then fitted with parametric surface patches for an economized CAD representation. In this paper, a hybrid basis function neural network is proposed for segmenting dense depth data. The first three layers of the network perform the coarse segmentation task by clustering surface features and classifying them as one of eight primitive surface types. The features correspond to the mean curvature (H) and Gaussian curvature (K) of the measured 3D surface. Each surface type image is further partitioned into isolated regions by a series of competitive feedback networks that perform opening and closing morphological operations. Once segmented, each region is parameterized and the associated depth data is approximated by a Bezier surface patch. The corresponding control points are used to reconstruct the parametric surface patch in a typical CAD system.
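The eight primitive surface types follow from the signs of H and K, which can be shown in a few lines. The classification table itself is standard; the epsilon threshold for treating a curvature as zero is an illustrative assumption.

```python
import numpy as np

def hk_surface_type(H, K, eps=1e-6):
    """Classify a surface point into one of the eight primitive surface types
    from the signs of mean (H) and Gaussian (K) curvature."""
    types = {(-1, 1): "peak",  (-1, 0): "ridge",  (-1, -1): "saddle ridge",
             (0, 0): "flat",   (0, -1): "minimal surface",
             (1, 1): "pit",    (1, 0): "valley",  (1, -1): "saddle valley"}
    sh = int(np.sign(H)) if abs(H) > eps else 0
    sk = int(np.sign(K)) if abs(K) > eps else 0
    # (H = 0, K > 0) cannot occur, since K <= H^2 for a real surface
    return types.get((sh, sk), "impossible (H=0, K>0)")

# with the sign convention assumed here, a convex bump has H < 0 and K > 0
print(hk_surface_type(-0.5, 0.25))   # peak
print(hk_surface_type(0.0, 0.0))     # flat
```

Applying this per pixel to H and K images produces the coarse surface-type maps that the competitive morphological layers then clean up.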
The ability to rapidly detect moving objects while dynamically exploring a work environment is an essential characteristic of any active vision system. However, many of the proposed computer vision paradigms are unable to efficiently deal with the complexities of real world situations because they employ algorithms that attempt to accurately reconstruct structure-from-motion. An alternative view is to employ algorithms that only compute the minimal amount of information necessary to solve the task at hand. One method of qualitatively detecting independently moving objects by a moving camera (or observer) is based on the notion that the projected velocity of any point on a spherical image is constrained to lie on a one-dimensional locus in a local 2-D velocity space. The velocities along this locus, called a constraint ray, correspond to the rotational and translational motion of the observer. If the observer motion is known a priori, then any object moving independently through the rigid 3-D environment will exhibit a projected velocity that does not fall on this locus. As a result, the independently moving object can be detected using a clustering algorithm. In this paper, a hybrid neural network architecture is proposed for discriminating between flow velocities that are caused by camera movement and by object motion. The computing architecture is essentially a two stage process. In the first stage, a self-organizing neural network is used to learn the constraint parameters associated with typical observer movements by moving the camera apparatus through a stationary environment. Once the observer movements have been adequately learned by the self-organizing neural network, the corresponding synaptic weight values are used to program a modified radial basis function (RBF) network. During the second stage, the RBF network architecture acts as a constraint region classifier by employing clustering strategies to label incomplete motion field information (i.e., the velocity component that is parallel to the spatial gradient).
The primary task of machine vision is to utilize a variety of techniques to segment a digital image into meaningful regions, extract the corresponding edges, compute the various features (e.g., area, centroids) and primitives (e.g., lines, corners, curves) that exist in the image, and finally develop some decision rules or grammar structures for interpreting the image content. In conventional vision systems, the operations performed involve making crisp (yes or no) decisions about the regions, features, primitives, regional relationships and overall scene interpretation. However, various degrees of uncertainty exist at each and every stage of the vision system process because these decisions are often based on data that is inexact or ambiguous in nature. Much of the incertitude in the image information can be interpreted in terms of either grayness ambiguity (deciding on the intensity of a pixel) or spatial ambiguity (deciding on the shape and geometry of the regions within the image). Fuzzy morphology is a mathematical tool developed to deal with imprecise or ambiguous information that arises during a subjective evaluation process such as scene interpretation. This mathematical approach transforms a gray scale image into a two-dimensional array of fuzzy singletons called a fuzzy image. The value of each fuzzy singleton reflects the degree to which the pixel possesses some property such as brightness, edgeness, redness, or surface uniformity (i.e., texture). A variety of morphological operations can be performed on the singletons in order to modify the ambiguity associated with the desired property. For the efficient shape representation of objects in a scene, a thinning algorithm for fuzzy images is proposed in this paper. Once the object shape has been thinned to a skeleton-like representation, curve descriptors can be used to transform the generalized shape into a coded form. In essence, this thinning algorithm is used to reduce, or compress, the structural shape information of a vaguely defined object into simplified features for a rule-based description of the object shape.
A gray-tone image taken of a real scene will contain inherent ambiguities due to light dispersion on the physical surfaces. The neighboring pixels may have very different intensity values and yet they may represent the same surface region. A fuzzy set theoretic approach for representing, processing, and quantitatively evaluating the information in gray-tone images is presented in this paper. The gray-tone digital image is mapped into a two-dimensional array of singletons called a fuzzy image. The value of each fuzzy singleton represents the degree to which a pixel intensity can be associated with some vaguely defined visual property γ. For illustrative purposes, the visual properties related to the notion of a uniform surface are investigated. The inherent ambiguity in the surface information can be modified by performing a variety of fuzzy mathematical operations on the singletons. Once the fuzzy image processing operations are completed, the modified fuzzy image can be converted back to a gray-tone image representation. The ambiguity associated with the processed fuzzy image is quantitatively evaluated by measuring the uncertainty present both before and after processing. Computer simulations are presented to illustrate some of these notions.
An innovative design of a dynamic neural network architecture that is able to first learn and then utilize fuzzy-like `IF-THEN' rules is presented in this paper. Each fuzzy neuron in the network represents a compositional rule of inference that defines the relationship between a particular premise and the corresponding consequence. The neural network first determines the similarity between a neural input (a discretely sampled fuzzy set) and the feedforward synaptic weights (accumulated knowledge-base). The `best' definition of the input is selected by competition arising from the dense feedback between the neurons. A satisfactory conclusion is reconstructed in weighted feedforward outputs from the `winning' neuron. The knowledge-base is updated by an unsupervised learning algorithm that adapts the feedforward weights assigned to both the neural inputs and outputs. An example of how this dynamic neural network can be used to perform fuzzy-like inference rules for the navigation of an autonomous vehicle through an unstructured environment is used to illustrate these notions.
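The compositional rule of inference at the heart of each fuzzy neuron can be shown concretely with the classical sup-min composition. The universes, membership values, and rule below are toy assumptions; the point is only the B' = A' o R mechanism, not the network's competitive dynamics or learning.

```python
import numpy as np

def fuzzy_inference(premise, rule_in, rule_out):
    """Compositional rule of inference: B' = A' o R, with the relation R built
    from a single IF-THEN rule using the min implication."""
    R = np.minimum.outer(rule_in, rule_out)                  # R(x, y) = min(A(x), B(y))
    return np.max(np.minimum(premise[:, None], R), axis=0)   # sup-min composition

# discretely sampled fuzzy sets over small universes (toy memberships)
near = np.array([1.0, 0.7, 0.3, 0.0])           # IF the obstacle is near ...
slow = np.array([1.0, 0.6, 0.1])                # ... THEN the speed is slow
fairly_near = np.array([0.8, 1.0, 0.5, 0.1])    # observed input A'
conclusion = fuzzy_inference(fairly_near, near, slow)
```

The resulting conclusion is itself a discretely sampled fuzzy set, which in the network would be reconstructed from the winning neuron's feedforward output weights.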
The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
A gray tone image taken of a real scene will contain inherent ambiguities due to light dispersion on the physical surfaces. The neighboring pixels may have very different intensity values and yet represent the same surface region. In this paper, a fuzzy set theoretic approach to representing, processing, and quantitatively evaluating the ambiguity in gray tone images is presented. The gray tone digital image is mapped into a two-dimensional array of singletons called a fuzzy image. The value of each fuzzy singleton reflects the degree to which the intensity of the corresponding pixel is similar to the neighboring pixel intensities. The inherent ambiguity in the surface information can be modified by performing a variety of fuzzy mathematical operations on the singletons. Once the fuzzy image processing operations are complete, the modified fuzzy image can be converted back to a gray tone image representation. The ambiguity associated with the processed fuzzy image is quantitatively evaluated by measuring the uncertainty present both before and after processing. Computer simulations are presented in order to illustrate some of these notions.
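The mapping from a gray-tone image to fuzzy singletons, and the before/after ambiguity measurement, can be sketched as follows. The membership function (similarity to the 3x3 neighbourhood mean), its spread constant, and the use of the linear index of fuzziness are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def to_fuzzy_image(gray, k=0.2):
    """Map each pixel to a singleton: the degree to which its intensity is
    similar to its 3x3 neighbourhood (one reading of 'uniform surface')."""
    g = gray.astype(float) / 255.0
    padded = np.pad(g, 1, mode='edge')
    mu = np.zeros_like(g)
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            nbhd = padded[i:i + 3, j:j + 3]
            mu[i, j] = np.exp(-np.abs(g[i, j] - nbhd.mean()) / k)
    return mu

def linear_fuzziness(mu):
    """Linear index of fuzziness: 0 for a crisp image, 1 when mu = 0.5 everywhere."""
    return float(2.0 * np.minimum(mu, 1 - mu).mean())

uniform = np.full((8, 8), 128, dtype=np.uint8)    # one flat surface region
noisy = np.random.default_rng(2).integers(0, 256, (8, 8)).astype(np.uint8)
f_uniform = linear_fuzziness(to_fuzzy_image(uniform))
f_noisy = linear_fuzziness(to_fuzzy_image(noisy))
# a flat region is unambiguously 'uniform'; noise leaves the property ambiguous
```

Comparing the index before and after a fuzzy processing step quantifies how much ambiguity the operation removed, in the spirit of the evaluation described above.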
Populations of cortical nerve cells that selectively learn from external stimuli are described in this paper. Numerous neural populations are interconnected within a spatially distributed neural activity field. Each population is assumed to possess a multi-modal distribution of neural thresholds which enable it to exhibit one or more state attractors for any given stimulus input. Each stable attractor represents a potential memory. The memory function of the field corresponds to the numerous attractors, or potential memories, generated after the removal of the external stimulus pattern. Massive numbers of attractors are inherent in the field at the onset of learning. The selective learning process involves enlarging the basin around the attractor selected by a given stimulus. A computer simulation involving three sets of stimuli is used to illustrate some of these notions.
Our innate ability to process and interpret large volumes of poorly defined visual data, in essence to perceive visual information, enables us to function effectively in a continually changing complex world. As knowledge engineers, it would be highly desirable to incorporate such flexibility into artificial systems. Fuzzy logic is a mathematical tool created to help synthesize complex systems and decision processes that must deal with imprecise or ambiguous information. In terms of vision, this ambiguity arises from the meanings attached to the sensor inputs and the rules used to describe the relationship between the various informative visual attributes. Notions that pertain to vision perception such as fuzzy images, fuzzy mathematical operators and fuzzy inference procedures are outlined in this paper.
An artificial vision system with spatio-chromatic channels is proposed. A dynamic neural network is used to process the spatial and chromatic information of a scene. The spatio-chromatic information is transmitted into two channels for processing. This separation allows accurate spatial and chromatic analysis of the visual input. For both channels, models based on the biology of the visual system are used. Spatial channel responses simulate, e.g., enhanced edges and subjective contours. Chromatic channel output is shown to correspond to the color characteristics found in the spectral color tests and in the literature of the physiology of color vision. The ultimate goal of the project is to find a biologically motivated model for an intelligent image sensor. In this report we describe potential candidates for processing both spatial and chromatic information.
A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi-functional vision sensor that performs a variety of information processing operations on time-varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.
A multi-task neuro-vision processor which performs a variety of information processing operations associated with the early stages of biological vision is presented. The network architecture of this neuro-vision processor, called the positive-negative (PN) neural processor, is loosely based on the neural activity fields exhibited by thalamic and cortical nervous tissue layers. The computational operation performed by the processor arises from the strength of the recurrent feedback among the numerous positive and negative neural computing units. By adjusting the feedback connections it is possible to generate diverse dynamic behavior that may be used for short-term visual memory (STVM), spatio-temporal filtering (STF), and pulse frequency modulation (PFM). The information attributes that are to be processed may be regulated by modifying the feedforward connections from the signal space to the neural processor.
A multi-task dynamic neural network that can be programmed for storing, processing, and encoding spatio-temporal visual information is presented in this paper. This dynamic neural network, called the PN-network, is comprised of numerous densely interconnected neural subpopulations which reside in one of the two coupled sublayers P or N. The subpopulations in the P-sublayer transmit an excitatory or a positive influence onto all interconnected units whereas the subpopulations in the N-sublayer transmit an inhibitory or negative influence. The dynamical activity generated by each subpopulation is given by a nonlinear first-order system. By varying the coupling strength between these different subpopulations it is possible to generate three distinct modes of dynamical behavior useful for performing vision related tasks. It is postulated that the PN-network can function as a basic programmable processor for novel vision machine systems.
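The nonlinear first-order subpopulation dynamics can be sketched with a single coupled P/N pair integrated by Euler's method. The sigmoid threshold, coupling weights, and stimulus levels below are toy values chosen so that one behavioral mode, short-term visual memory, is visible; they are not the paper's parameters, and the full network couples many such pairs across a 2-D field.

```python
import numpy as np

def sigmoid(x, theta=4.0):
    """Saturating response of a subpopulation to its net input."""
    return 1.0 / (1.0 + np.exp(-(x - theta)))

def simulate(stimulus_levels, w_pp=12.0, w_pn=2.0, w_np=2.0, dt=0.05, steps=200):
    """Euler-integrate one coupled P/N subpopulation pair, letting the activity
    settle at each stimulus level in turn; returns the settled P activity."""
    p = n = 0.0
    settled = []
    for s in stimulus_levels:
        for _ in range(steps):
            dp = -p + sigmoid(w_pp * p - w_pn * n + s)   # excitatory P-sublayer unit
            dn = -n + sigmoid(w_np * p)                  # inhibitory N-sublayer unit
            p, n = p + dt * dp, n + dt * dn
        settled.append(p)
    return settled

# stimulus pulse, then removal: with strong recurrent excitation the P activity
# persists after the input is withdrawn (a short-term visual memory mode)
trace = simulate([0.0, 6.0, 0.0])
```

Weakening the recurrent excitation or strengthening the inhibitory coupling moves the pair out of the bistable memory regime, which is the kind of mode switching the programmable coupling strengths provide.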
A dynamic neural network with neural computing units that exhibit hysteresis phenomena is proposed as a mechanism for visual memory. The neural network, named the PN-processor, is loosely based on a mathematical theory proposed by Wilson and Cowan to describe the functional dynamics of cortical nervous tissue. The individual neural computing units of the network are programmed to exhibit localized hysteresis phenomena. This neural network structure is capable of storing visual information without physical changes to its synaptic connections. External stimuli move the network's neural activity around a high-dimensional phase space of state attractors until the overall response is stabilized. Once stabilized, the response remains unperturbed by weak or familiar stimuli and is changed only by a sufficiently strong new input. In this paper we briefly describe several aspects of this type of visual memory.
SC255: Opto-Mechatronic Systems: Techniques and Applications
Optical and photonic devices are being incorporated into a variety of “smart” engineered products and processes because these lightwave technologies provide components for high precision, rapid data processing, flexible circuits, and circuit miniaturization. Optomechatronics focuses on the tools and technologies needed to create intelligent systems from optical and optoelectronic sensors and actuators, flexible fiber optic and lightwave communication, smart machine vision systems, reconfigurable structures, and embedded control. Course participants will develop skills and knowledge necessary to adopt an interdisciplinary and integrated approach to optomechatronic design. We will provide attendees with an overview of the basic concepts necessary to effectively combine optical, electrical, control, and mechanical technologies and will review the fundamentals of lightwave technology and optical systems. Emphasis will be placed on design methods for integrating optical sensing, actuation, and control. We will review several practical case-studies involving optomechatronic products and processes. System performance will be analyzed and benefits/limitations will be discussed.
This course provides an introduction to optical actuator technologies that enable light to be transformed into mechanical displacements or forces. The optically driven actuators can be interfaced with optical fibres and integrated optics to create “control-by-light” systems. Both indirect and direct methods of optical actuation will be examined. Throughout the course the emphasis will be placed on fundamental principles and system design. The design opportunities for utilizing optical actuation and control are illustrated by several innovative systems for optical micro-flow control, micro-pumping, micromanipulation, and lightwave propulsion.