Although most cameras are now built to be used alongside machine learning algorithms, image quality requirements still derive from human perception. To redefine key performance indicators (KPIs) for machine vision, optical designs are tested and optimized before fabrication using differentiable simulation methods and gradient backpropagation to jointly train an optical design and a neural network. Although this helps to design optical systems for improved machine learning performance, it remains unstable and computationally expensive to model complex compound optics such as wide-angle cameras. We focus on optimizing the distortion profile of ultra wide-angle designs, as it constitutes the main KPI during the optical design. Along the way, we highlight the benefits of controlling the distortion profile of such systems, as well as the challenges related to using learning-based methods for optical design.
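The joint-optimization idea above can be illustrated on a toy scale. The sketch below is purely illustrative (none of it comes from the paper's pipeline): it fits a single radial distortion coefficient to a target distortion profile by gradient descent, using finite differences as a stand-in for true gradient backpropagation through a differentiable lens simulator.

```python
import numpy as np

def distort(r, k):
    """Toy radial distortion model: r_d = r * (1 + k * r^2)."""
    return r * (1.0 + k * r**2)

r = np.linspace(0.0, 1.0, 50)       # normalized field heights
target = distort(r, k=-0.15)        # target distortion profile (assumed given)

def loss(k):
    # mean squared deviation from the target profile
    return np.mean((distort(r, k) - target) ** 2)

k, lr, eps = 0.0, 1.0, 1e-6
for _ in range(200):
    # finite-difference gradient, standing in for autodiff backpropagation
    grad = (loss(k + eps) - loss(k - eps)) / (2 * eps)
    k -= lr * grad

print(round(k, 3))  # converges toward the target coefficient -0.15
```

In a real differentiable-optics pipeline, the scalar loss would instead be a downstream neural network's task loss, and the gradient would flow through ray tracing rather than a closed-form distortion polynomial.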
Miniaturization, wide fields of view, compactness, and low-light performance are required in automotive and mobile devices. While conventional design techniques are limited, we introduce methods for designing wide-angle lenses using freeform surfaces, illustrated by designs showing improved performance.
We present a wide-angle design simulation to predict how its aberrations impact neural network performance. Our PSF models are optimized for computational efficiency while maintaining accurate predictions, making them a powerful tool to support optical design.
Automated navigation of Unmanned Aircraft Systems (UAS) in a broad range of illumination scenarios requires improved, real-time depth estimation and long-distance obstacle detection. We present our lightweight ultra wide-angle camera optimized for low-light illumination (down to < 1 lux) mounted on a drone and compare its optical performance with other modules found on the market. We also capture images from the drone in flight, test them on monocular depth estimation neural networks, and show that our camera module is suitable for low-light navigation.
Data-driven approaches have proven to be very efficient in many vision tasks, and they are now used for optical parameter optimization in application-specific camera designs. Methods such as neural networks are used to estimate camera performance indicators related to the point spread function, such as the root mean square (RMS) spot size, from optical parameters. Such procedures help to understand the connections between optical characteristics and push optical design expertise beyond its limits. We investigate these approaches to model the interaction between the distortion of wide-angle designs and their RMS spot size, which is not explained by aberration theory. Specifically, we test off-the-shelf data-driven methods to determine under which conditions we can establish a model able to predict the variations of the RMS spot size along the field of view from the distortion function, even in the absence of a mathematical model. Although current methods focus on building accurate models that are often only usable for very specific designs composed of a few elements, we present a methodology focusing on more complex and realistic wide-angle designs.
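A minimal sketch of the kind of data-driven fit described above, on fully synthetic data (the coefficients, the functional form of the "RMS spot size," and the noise level are all assumptions for illustration; real values would come from ray-tracing exports):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: each design is summarized by three radial
# distortion coefficients; the "RMS spot size" is a made-up quadratic
# function of them plus noise, standing in for ray-traced values.
n = 2000
K = rng.uniform(-0.3, 0.3, size=(n, 3))            # k1, k2, k3
rms = 2.0 + 5.0 * K[:, 0]**2 + 3.0 * K[:, 1]**2 + rng.normal(0, 0.02, n)

# Off-the-shelf baseline: ordinary least squares on quadratic features.
X = np.column_stack([np.ones(n), K, K**2])         # bias, linear, squared terms
w, *_ = np.linalg.lstsq(X[:1500], rms[:1500], rcond=None)

# Evaluate on the 500 held-out designs.
pred = X[1500:] @ w
resid = rms[1500:] - pred
r2 = 1 - np.sum(resid**2) / np.sum((rms[1500:] - rms[1500:].mean())**2)
print(f"held-out R^2 = {r2:.3f}")
```

The paper's setting is harder: there, no closed-form relation between distortion and spot size is known, which is precisely why flexible learned models are tested in place of a fixed feature basis like the one above.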
Data-driven methods to assist lens design have recently begun to emerge, in particular in the form of lens design extrapolation to find starting points (lenses and freeform reflective systems). We propose a journey through the years to better understand why AI was first applied to the starting-point problem and where the field is heading. In this talk, we will explore the most recent applications of deep neural networks (DNNs) in optical and lens design. We will also show working examples and discuss future directions.
The next generation of sUAS (small Unmanned Aircraft Systems) for automated navigation will have to perform in challenging conditions: bad weather, high and low temperatures, and from dusk to dawn. This paper presents experimental results from a new wide-angle vision camera module specially optimized for low-light conditions. We present the optical characteristics of this system as well as experimental results obtained for different sense-and-avoid functionalities. We also show preliminary results using our camera module's images on neural networks for different scene understanding tasks.
As more and more cameras are used for machine perception, the optical design process still relies on key indicators such as the point spread function (PSF) and modulation transfer function (MTF), based on aberration minimization. This process has proven efficient for human vision but is not tailored for machine perception. Given a specific computer vision task, it is not always necessary to target the same key performance indicators (KPIs) as when images are visualized by humans. Moreover, image quality may change over a camera's lifespan, for example with the appearance of defocus. It is crucial to be able to determine how this kind of degradation can affect a computer vision task. In this work, we study the impact of defocus on 2D object identification and show that, for a certain design, it is not impacted by image degradation below a certain threshold. We also demonstrate that this threshold is higher for lower f-numbers, which makes such designs better candidates.
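The shape of such a sweep can be sketched as follows. This is an illustration only: defocus is approximated by a Gaussian blur of increasing width, and the modulation of a sinusoidal test pattern serves as a stand-in metric; the paper itself evaluates a 2D object-identification network, and none of the numbers below come from it.

```python
import numpy as np

def gaussian_kernel(sigma, size=31):
    """1D Gaussian approximation of a defocus blur kernel."""
    ax = np.arange(size) - size // 2
    k = np.exp(-ax**2 / (2 * sigma**2))
    return k / k.sum()

x = np.arange(64)
pattern = 0.5 + 0.5 * np.sin(2 * np.pi * x / 8)   # grating, 8-pixel period

mods = []
for sigma in [0.5, 1.0, 2.0, 4.0]:
    k = gaussian_kernel(sigma)
    # circular FFT-based convolution of the pattern with the blur kernel
    blurred = np.real(np.fft.ifft(np.fft.fft(pattern) *
                                  np.fft.fft(k, n=pattern.size)))
    modulation = blurred.max() - blurred.min()    # proxy quality metric
    mods.append(modulation)
    print(f"sigma={sigma}: modulation {modulation:.2f}")
```

The study's point is that a task metric (detection accuracy) can stay flat over part of such a degradation sweep even while an optical metric like the one above falls monotonically.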
The optical design process consists of minimizing aberrations using optimization methods. It relies on key performance indicators (KPIs), such as the point spread function (PSF), modulation transfer function (MTF), relative illumination (RI), and spot sizes, that depend on lens element aberrations. Their target values need to be defined, either for human or machine perception, at an early stage of the design, which can be complex for challenging designs such as those with an extended field of view. We developed an optical and imaging simulation pipeline able to render the effects of complex optical designs and the image sensor on an initial aberration-free image. Using PSF and sensor target information extracted from ray-tracing software, the algorithm accurately renders off-axis aberrations with a Zernike polynomial representation, combined with noise contributions and relative illumination. The obtained image faithfully represents the performance of an optical system from the optics to the sensor, and we can then study the impact of introducing additional aberrations.
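One stage of such a pipeline can be sketched with the standard pupil-function method (an assumed reconstruction, not the authors' code): a wavefront is built from a few Zernike terms, and the PSF is obtained as the squared magnitude of the Fourier transform of the pupil function, then normalized for use as a convolution kernel.

```python
import numpy as np

N = 128
y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
aperture = (rho <= 1.0).astype(float)          # circular pupil mask

# Wavefront in waves from two Zernike terms (coefficients are illustrative):
#   defocus      Z(2, 0) = sqrt(3) * (2*rho^2 - 1)
#   astigmatism  Z(2, 2) = sqrt(6) * rho^2 * cos(2*theta)  (off-axis flavor)
wavefront = (0.3 * np.sqrt(3) * (2 * rho**2 - 1)
             + 0.2 * np.sqrt(6) * rho**2 * np.cos(2 * theta))

# Pupil function and its PSF (squared magnitude of the Fourier transform).
pupil = aperture * np.exp(2j * np.pi * wavefront)
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.sum()                               # energy-normalized kernel

print(f"PSF peak energy fraction: {psf.max():.4f}")
```

In a full pipeline, one such field-dependent kernel would be computed per image region, convolved with the aberration-free input, then scaled by relative illumination and combined with a sensor noise model.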
Data-driven approaches have proven very efficient in many vision tasks and are now used for optical parameter optimization in application-specific camera design. A neural network is trained to estimate images or image quality indicators from the optical characteristics. The complexity and entanglement of such optical parameters raise new challenges, which we investigate in the case of wide-angle systems. We highlight them by establishing a data-driven prediction model of the RMS spot size from the distortion, using mathematical or AI-based methods.
The new generation of sUAS (small Unmanned Aircraft Systems) aims to extend the range of scenarios in which sense-and-avoid functionality and autonomous operation can be used. For navigation cameras, a wide field of view increases coverage of the drone's surroundings, enabling ideal flight paths, optimal dynamic route planning, and full situational awareness. The first part of this paper discusses the trade-off space for camera hardware solutions to improve vision performance. Severe constraints on size and weight, a situation common to all sUAS components, compete with low-light capabilities and pixel resolution. The second part explores the benefits and impacts of specific wide-angle lens designs and of wide-angle image rectification (dewarping) on deep-learning methods. We show that distortion can be used to bring more information from the scene and how this extra information can increase the accuracy of learning-based computer vision algorithms. Finally, we present a study that estimates the link between optical design criteria degradation (MTF) and neural network accuracy in the context of wide-angle lenses, showing that a higher MTF is not always linked to better results, thus helping to set better design targets for navigation lenses.
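The rectification (dewarping) step mentioned above can be sketched as an inverse mapping, assuming an equidistant fisheye model r = f·θ (the focal lengths, sizes, and nearest-neighbor sampling here are illustrative choices, not the paper's):

```python
import numpy as np

def dewarp(fisheye, f=100.0, out_size=200, out_f=80.0):
    """Rebuild a rectilinear view from an equidistant fisheye image."""
    h, w = fisheye.shape
    cy, cx = h / 2, w / 2                       # assumed optical center
    j, i = np.mgrid[0:out_size, 0:out_size]
    # Ray direction for each rectilinear output pixel.
    u = (i - out_size / 2) / out_f
    v = (j - out_size / 2) / out_f
    theta = np.arctan(np.hypot(u, v))           # angle from the optical axis
    phi = np.arctan2(v, u)                      # azimuth around the axis
    r = f * theta                               # equidistant projection r = f*theta
    # Nearest-neighbor lookup back into the fisheye image.
    src_x = np.clip(np.round(cx + r * np.cos(phi)).astype(int), 0, w - 1)
    src_y = np.clip(np.round(cy + r * np.sin(phi)).astype(int), 0, h - 1)
    return fisheye[src_y, src_x]

fisheye = np.random.default_rng(0).random((256, 256))   # stand-in image
rect = dewarp(fisheye)
print(rect.shape)
```

Note that such resampling stretches the periphery, which is one reason the paper examines whether networks do better on the raw distorted image, where peripheral pixels carry denser scene information, than on the rectified one.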