The method of optical centroid measurement (OCM) has been shown to exhibit spatial super-resolution with enhancements at the Heisenberg limit in plane-wave interference experiments. In this work, the OCM method is used for the first time in an imaging setting where actual object features are observed. The OCM result is rederived in a near-field imaging formalism for a general imaging system and in a full 2-D treatment. Analogies to coherent and incoherent imaging are shown. Moreover, coherent OCM imaging is experimentally implemented for photon number N = 2, using an experimental setup that generates the desired entangled two-photon state containing the super-resolved image. This state is then imaged by an imaging system with a finite resolution defined by its point spread function (PSF). The centroid measurement of the two-photon states then delivers an image in which the width of the PSF is reduced by a factor of 2, corresponding to the Heisenberg limit.
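The accumulation step of such a centroid measurement can be sketched as follows (hypothetical event format and array geometry; the sketch only illustrates the histogramming of coincidence events, not the quantum enhancement itself, which resides in the entangled state):

```python
import numpy as np

def centroid_image(coincidences, n_bins=128, extent=1.0):
    """Accumulate the two-photon centroid image from coincidence events.

    coincidences: array of shape (M, 2, 2) holding, for each of M events,
    the (x, y) positions of the two detected photons on the sensor.
    """
    events = np.asarray(coincidences, dtype=float)
    centroids = events.mean(axis=1)  # ((x1+x2)/2, (y1+y2)/2) per event
    img, _, _ = np.histogram2d(
        centroids[:, 0], centroids[:, 1],
        bins=n_bins, range=[[-extent, extent], [-extent, extent]])
    return img

# toy usage: photon pairs detected around the same object point,
# each blurred by the finite PSF of the imaging system
rng = np.random.default_rng(0)
pairs = rng.normal(0.0, 0.2, size=(10_000, 2, 2))
image = centroid_image(pairs)
```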
In the experiment, the object is illuminated by a continuous-wave pump laser centred at 405 nm with an output power of 30 mW. A 4-f lens system images the object to the state preparation output plane. In the far-field plane between the lenses, a 5 mm long periodically poled KTiOPO4 non-linear crystal generates photon pairs at 810 nm by type-0, frequency-degenerate and collinear spontaneous parametric down-conversion. This generated OCM state is then imaged by a single lens and detected in coincidence by a fully digital 2-D sensor array with single-photon sensitivity and per-pixel sub-nanosecond time resolution.
The OCM state is spectrally filtered at 810 nm. Its imaging capability is compared to that of classical light sources, including spatially coherent, monochromatic illumination at 405 nm and 810 nm, as well as spatially incoherent light at 810 nm. The former is implemented using collimated lasers; the latter is a thermal light source spectrally filtered at 810 nm. The PSFs of the different light sources are compared at low numerical aperture by imaging a focal point of 25 μm Gaussian waist radius.
Quantum imaging uses entangled photons to overcome the limits of a classical-light apparatus in terms of image quality, beating the standard shot-noise limit and exceeding the Abbe diffraction limit for resolution. In today's experiments, the spatial properties of entangled photons are recorded by means of complex and slow setups that include either the motorized scanning of single-pixel single-photon detectors, such as Photo-Multiplier Tubes (PMT) or Silicon Photo-Multipliers (SiPM), or the use of low frame rate intensified CCD cameras. CMOS arrays of Single Photon Avalanche Diodes (SPAD) represent a relatively recent technology that may lead to simpler setups and faster acquisition. They are spatially and time-resolved single-photon detectors, i.e. they can provide the position within the array and the time of arrival of every detected photon with <100 ps resolution. SUPERTWIN is a European H2020 project aiming at developing the technological building blocks (emitter, detector and system) for a new, all solid-state quantum microscope system exploiting entangled photons to overcome the Rayleigh limit, targeting a resolution of 40 nm. This work provides the measurement results of the second-order cross-correlation function relative to a flux of entangled photon pairs acquired with a fully digital 8×16-pixel SPAD array in CMOS technology. The limitations for application in quantum optics of the employed architecture and of other solutions in the literature will be analyzed, with emphasis on crosstalk. Then, the specifications for a dedicated detector will be given, paving the way for future implementations of 100 kpixel Quantum Image Sensors.
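The second-order cross-correlation between two pixels of such an array can be estimated from their timestamp streams roughly as sketched below (a minimal illustration with assumed data format, not the project's processing chain). Note that, as the abstract stresses, a zero-delay peak can also be mimicked by crosstalk between neighbouring SPADs:

```python
import numpy as np

def cross_correlation_histogram(t_a, t_b, bin_width, max_delay):
    """Histogram of arrival-time differences (t_b - t_a) between two pixels.

    t_a, t_b: time-sorted photon timestamps (same unit) from pixels A and B.
    Returns (delay bin centers, counts); a peak near zero delay indicates
    correlated photon pairs -- or, in a SPAD array, crosstalk.
    """
    t_a, t_b = np.asarray(t_a, float), np.asarray(t_b, float)
    edges = np.arange(-max_delay, max_delay + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1, dtype=int)
    j0 = 0
    for ta in t_a:
        # advance to the first B timestamp inside the correlation window
        while j0 < len(t_b) and t_b[j0] <= ta - max_delay:
            j0 += 1
        j = j0
        while j < len(t_b) and t_b[j] < ta + max_delay:
            counts[np.searchsorted(edges, t_b[j] - ta, side='right') - 1] += 1
            j += 1
    return 0.5 * (edges[:-1] + edges[1:]), counts
```

The two-pointer sweep keeps the cost near-linear in the number of timestamps, which matters for the high count rates a full array produces.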
The present work focuses on the description of a SPAD-based pixel suitable for random bit extraction. Compared to the state of the art, the proposed approach aims at improving the performance of the random generator under photon-flux variations. Thanks to the adopted methodology, the entropy of the output remains almost constant over a wide range of operating conditions. The principle has been validated through simulations and implemented in a compact pixel, suitable for array implementation.
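One generic way to make extracted bits insensitive to the mean photon flux (sketched below for illustration; this is a textbook comparison scheme, not necessarily the pixel circuit described above) is to compare consecutive inter-arrival intervals of the photon stream, since P(dt1 < dt2) = 1/2 for i.i.d. exponential intervals at any rate:

```python
import numpy as np

def flux_insensitive_bits(timestamps):
    """Extract bits from photon arrival times by comparing consecutive
    inter-arrival intervals; the bit bias is independent of the photon
    rate for a Poisson stream, so the output entropy stays constant
    as the flux varies."""
    dt = np.diff(np.asarray(timestamps, dtype=float))
    # pair up intervals; drop any trailing unpaired interval and exact ties
    n = len(dt) // 2 * 2
    a, b = dt[0:n:2], dt[1:n:2]
    keep = a != b
    return (a[keep] < b[keep]).astype(np.uint8)

# toy usage: simulated Poisson arrivals with unit mean interval
rng = np.random.default_rng(1)
arrivals = np.cumsum(rng.exponential(1.0, size=100_001))
bits = flux_insensitive_bits(arrivals)
```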
This work describes a novel color pixel topology that converts the three chromatic components from the standard RGB space into the normalized r-g chromaticity space. This conversion is implemented with high dynamic range and no dc power consumption, and the auto-exposure capability of the sensor ensures the capture of a high-quality chromatic signal, even in the presence of very bright illuminants or in near darkness. The pixel is intended to become the basic building block of a CMOS color vision sensor, targeted at ultra-low-power applications for mobile devices, such as human-machine interfaces, gesture recognition, and face detection. The experiments show significant improvements of the proposed pixel with respect to standard cameras in terms of energy saving and data-acquisition accuracy. An application to skin color-based description is presented.
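The RGB-to-chromaticity conversion the pixel performs in the analog domain can be written out numerically as follows (a reference sketch of the mapping itself, not of the circuit):

```python
import numpy as np

def rgb_to_rg_chromaticity(rgb, eps=1e-12):
    """Normalized r-g chromaticity: r = R/(R+G+B), g = G/(R+G+B).

    The normalization discards overall intensity, so the chromaticity of a
    surface is (ideally) stable under changes of illumination level -- the
    property that makes the space attractive for skin-color description.
    `eps` guards against division by zero in fully dark pixels.
    """
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb[..., :2] / np.maximum(s, eps)

# a bright and a dim version of the same color share one chromaticity
bright = rgb_to_rg_chromaticity([200, 100, 50])
dim = rgb_to_rg_chromaticity([20, 10, 5])
```

Only r and g are kept because b = 1 - r - g carries no extra information.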
The SPADnet FP7 European project is aimed at a new generation of fully digital, scalable and networked photonic components to enable large area image sensors, with the primary target of gamma-ray and coincidence detection in (Time-of-Flight) Positron Emission Tomography (PET). SPADnet relies on standard CMOS technology, therefore allowing for MRI compatibility. SPADnet innovates in several areas of PET systems, from optical coupling to single-photon sensor architectures, from intelligent ring networks to reconstruction algorithms. It is built around a natively digital, intelligent SPAD (Single-Photon Avalanche Diode)-based sensor device which comprises an array of 8×16 pixels, each composed of 4 mini-SiPMs with in situ time-to-digital conversion, a multi-ring network to filter, carry, and process data produced by the sensors at 2 Gbps, and a 130 nm CMOS process enabling mass production of photonic modules that are optically interfaced to scintillator crystals. A few tens of sensor devices are tightly abutted on a single PCB to form a so-called sensor tile, thanks to TSV (Through Silicon Via) connections to their backside (replacing conventional wire bonding). The sensor tile is in turn interfaced to an FPGA-based PCB on its back. The resulting photonic module acts as an autonomous sensing and computing unit, individually detecting gamma photons as well as thermal and Compton events. It determines in real time basic information for each scintillation event, such as exact time of arrival, position and energy, and communicates it to its peers in the field of view. Coincidence detection therefore occurs directly in the ring itself, in a deferred and distributed manner to ensure scalability. The selected true coincidence events are then collected by a snooper module, from which they are transferred to an external reconstruction computer using Gigabit Ethernet.
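The coincidence selection logic can be sketched as follows. In SPADnet this runs distributed in the ring network; the sketch below shows the same pairing rule as a centralized sweep over time-sorted events, with a hypothetical event format:

```python
def find_coincidences(events, window):
    """Pair gamma events whose timestamps differ by less than `window`.

    events: iterable of (timestamp, module_id, energy) tuples
            (hypothetical format for illustration).
    Two events form a coincidence candidate when they fall within the
    time window and come from two distinct modules.
    """
    events = sorted(events)                     # sort by timestamp
    pairs = []
    for i, (t1, m1, e1) in enumerate(events):
        for t2, m2, e2 in events[i + 1:]:
            if t2 - t1 >= window:
                break                           # sorted: no later match
            if m2 != m1:
                pairs.append(((t1, m1, e1), (t2, m2, e2)))
    return pairs
```

A real system would additionally gate on energy (around the 511 keV photopeak) to reject scattered events before pairing.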
The design, simulation results and experimental characterization of a compact analog readout circuit for photon counting applications are presented in this paper. Two linear test arrays of 40 pixels with 25 μm pixel pitch have been fabricated in a 0.15 μm CMOS technology. Each pixel of the array consists of a Single-Photon Avalanche Diode (SPAD), a quenching circuit, a time-gating circuit and an analog counter. Each input pulse corresponding to a SPAD avalanche event is converted to a step in the output voltage. Along with compactness, the circuit was designed targeting low power consumption, good output linearity and sub-nanosecond timing resolution. The circuit features 8.6% pixel output non-uniformity and 1.1% non-linearity. The gating circuit provides a sub-nanosecond window of 0.95 ns at FWHM. Consisting of a small number of transistors and occupying only 238 μm², this approach is suitable for the design of SPAD-based image sensors with high spatial resolution.
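The non-linearity figure of such a step-per-avalanche counter can be evaluated as the peak deviation of the transfer curve from a best-fit line, expressed as a percentage of the full output swing (one common definition; the paper's exact metric may differ, and the transfer curve below is purely hypothetical):

```python
import numpy as np

def nonlinearity_percent(counts, v_out):
    """Peak deviation of the counter transfer curve (output voltage vs.
    number of counted avalanches) from a least-squares line, as a
    percentage of the full output swing."""
    counts = np.asarray(counts, float)
    v_out = np.asarray(v_out, float)
    slope, offset = np.polyfit(counts, v_out, 1)
    residual = v_out - (slope * counts + offset)
    return 100.0 * np.max(np.abs(residual)) / (v_out.max() - v_out.min())

# hypothetical transfer curve: ~2 mV per avalanche with mild compression
# as the output node approaches its voltage headroom
n = np.arange(0, 256)
v = 2e-3 * n * (1 - 0.08 * n / 255)
nl = nonlinearity_percent(n, v)
```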