Event-based imaging polarimeter

Abstract. The design and initial experimental results from an event-based, rotating-polarizer, imaging polarimeter are presented. The speed of division-of-time imaging polarimeters is traditionally constrained by the frame rate of the focal plane sensor and the need to sample several polarization angles during rotation. The asynchronous event reporting of event-based cameras (EBCs) removes these constraints but introduces challenges. The Stokes vectors used to describe polarization are based on irradiance, but EBCs are inherently change-detection systems that struggle to estimate irradiance. Two methods of estimating the polarization state without first recovering irradiance images are presented and compared. The DAVIS 346 sensor records events simultaneously with conventional frame-based images, enabling direct comparison of event-based polarization estimates to traditional techniques.


Introduction
Event-based cameras (EBCs) use in-pixel analog processing to detect changes in the photocurrent. When a pixel sees a change that exceeds a user-defined threshold, the pixel generates a signal that is used by a peripheral circuit to create an event report. Each pixel generates events independently of other pixels, with no fixed frame rate or integration time. This asynchronous operation mode offers many advantages including low latency, reduced power draw and data bandwidth, and high dynamic range. 1 These benefits do come at a cost, of course. One issue is that EBCs struggle to estimate the scene irradiance, particularly for static portions of the scene. Algorithms to recover a conventional image from the event stream have been an area of active research. 2,3 The active pixel sensor (APS) circuit in the DAVIS camera series is the most robust solution to this problem, but it introduces another set of issues. 4,5 Another possible solution is to induce a controlled scene change. Anyone who has tapped on their EBC to see the scene while adjusting focus and pointing has already used this concept. Here, we propose the use of a rotating polarizer to induce changes. We expect the number of events generated in each pixel to depend on the degree of polarization in the scene as well as the polarizer rotation rate.
Polarimetric images have many applications in remote sensing. [6][7][8][9] The polarization state of each pixel is computed from multiple observations. With a rotating polarizer, the observations are made at different times, a configuration known as a division-of-time polarimeter. (Schott 10 presents a good comparison of other types.) Division-of-time instruments are generally considered the simplest way to make polarization measurements but are subject to artifacts caused by changes in the scene between observations. 11 This can be mitigated by reducing the overall measurement time, which suggests the low latency of EBC pixels may be advantageous.
The proposed device is similar to one demonstrated by Lu et al., 12 which showed that manmade objects (which are in general more strongly polarizing than natural surfaces) created more events. In this paper, we attempt to quantify the polarization and instrument performance more systematically.
An alternative to division-of-time-based polarimetry is division-of-focal-plane. In this scheme, all measurements are made simultaneously, with different pixels measuring light of different polarizations. Haessig et al. 13 have demonstrated an event-based polarimetric imager of this type by placing polarizing filters directly on the pixels similar to RGB color imaging. The polarization state at each pixel is computed from surrounding pixels by a deep neural network. They used a rotating polarizer to generate a changing scene rather than as part of the sensor. By changing the rotation speed of their target, they demonstrate a clear benefit for event-based sensors for fast-moving targets. In this work, we present and compare different analytic methods for a temporally-modulated signal rather than spatial demosaicing.

Polarization
The polarization state of incident light can be quantified using the Stokes vector notation 14

S = [S0, S1, S2, S3]^T = [E_H + E_V, E_H − E_V, E_{+45} − E_{−45}, E_R − E_L]^T, (1)

where E_x is the irradiance in polarization state x. The subscripts correspond to the horizontal, vertical, +45 deg, −45 deg, right-hand circular, and left-hand circular polarization states. The change in polarization due to reflection or transmission can be described using Mueller matrices, which are 4 × 4 matrices in which element M_ij describes how input light in state j couples into output state i, so that

S_out = M S_in. (2)

The Mueller matrix 15 for an ideal linear polarizer with a transmission axis at an angle θ relative to the x axis is

M(θ) = (1/2) [ 1        cos 2θ          sin 2θ          0
               cos 2θ   cos² 2θ         sin 2θ cos 2θ   0
               sin 2θ   sin 2θ cos 2θ   sin² 2θ         0
               0        0               0               0 ]. (3)

Equations (2) and (3) predict the polarization state of the light entering the camera, but the camera is only sensitive to the total energy (i.e., the S0 component). The expected irradiance is therefore

E(θ) = (1/2)(S0 + S1 cos 2θ + S2 sin 2θ). (4)

Notice that this system is completely insensitive to circular polarization because M_03 = 0 for all θ. In many remote sensing applications, it is common to neglect S3 and only measure the linear polarizations. This choice is made partly out of convenience-linear polarizations are simpler to measure-and partly because elliptical polarization is relatively uncommon in natural scenes. Henceforth, we will use the shortened (linear-only) notation, which omits S3 and M_i3. In this case, the polarization state is also often described using the degree of linear polarization (DoLP) and angle of polarization (AoP), defined as

D = sqrt(S1² + S2²) / S0,   ψ = (1/2) arctan(S2 / S1). (5)

In this notation, we can rewrite the Stokes vector as S = S0 [1, D cos 2ψ, D sin 2ψ]^T. 16 Combining this with Eq.
(4) yields

E(θ) = (S0/2)[1 + D cos(2θ − 2ψ)]. (6)

For completely polarized light where D = 1, this equation reduces to the familiar form of Malus' law. 17 The irradiance can be measured at multiple different rotation angles. We can use Eq. (4) to construct a system of equations

Ẽ = gMS, (7)

where Ẽ = [E_1, E_2, …, E_n]^T, E_j represents the irradiance observed while the polarizer is at angle θ_j, and g is a scalar that accumulates all the constant multiplicative factors such as optics transmission and digital gain. Each row of the matrix M is simply the top row of the Mueller matrix for the corresponding polarizer angle. We can get a least-squared-error estimate of the polarization state from Eq. (7) by using the Moore-Penrose pseudoinverse

Ŝ = (1/g) M⁺ Ẽ. (8)

If we consider a single rotation of the polarizer divided evenly into n samples, the matrix M is well-conditioned for all n ≥ 6. The condition number decreases monotonically with increasing n, so increasing the number of samples should only improve estimates. This result highlights one issue with division-of-time polarimeters-the need for a scene that is static over the time needed to collect multiple measurements. If the input changes during the collection of the n samples, Eq. (8) is no longer correct. With conventional focal plane array (FPA) sensors, the goal of measuring more samples for accuracy conflicts with the frame rate and exposure time limitations of the FPA. Even with fast framing cameras, motion of the polarizer during integration will reduce the accuracy and sensitivity of the system. 13 The use of an EBC frees us from the frame rate constraints, although the refractory period of the pixels still limits the effective bandwidth of the system. 18
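As an illustrative sketch (our own, not the original processing pipeline), the measurement model of Eq. (7) and the pseudoinverse estimate of Eq. (8) can be written in a few lines of Python with NumPy; the function names and sample values below are assumptions:

```python
import numpy as np

def measurement_matrix(thetas):
    """Rows are the top row of the linear-polarizer Mueller matrix,
    Eq. (3), with the S3 column omitted (linear-only notation)."""
    thetas = np.asarray(thetas, dtype=float)
    return 0.5 * np.column_stack(
        [np.ones_like(thetas), np.cos(2 * thetas), np.sin(2 * thetas)])

def estimate_stokes(E, thetas, g=1.0):
    """Least-squared-error Stokes estimate via the Moore-Penrose
    pseudoinverse, Eq. (8)."""
    return np.linalg.pinv(measurement_matrix(thetas)) @ np.asarray(E) / g

# Forward-simulate one rotation sampled at n = 12 angles, then invert.
thetas = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)
S_true = np.array([1.0, 0.6, 0.3])               # partially polarized input
E_obs = measurement_matrix(thetas) @ S_true      # Eq. (7) with g = 1
S_hat = estimate_stokes(E_obs, thetas)
dolp = np.hypot(S_hat[1], S_hat[2]) / S_hat[0]   # D from Eq. (5)
aop = 0.5 * np.arctan2(S_hat[2], S_hat[1])       # psi from Eq. (5)
```

With noise-free inputs the inversion recovers the input Stokes vector exactly, since the three columns of M are linearly independent for these sample angles.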

Event Rate
The Stokes vector approach assumes we are measuring intensity, but event-based sensors do not directly report intensity. The pixel circuit reports changes in the log of the photocurrent. Using Lichtsteiner's model, 18 the event rate n can be approximated as

n(t) = (1/Θ) (d/dt) ln E(t), (9)

where Θ is the comparator threshold. This continuous function obviously cannot exactly predict the discrete event output but is a helpful model for the moment. Substituting Eq. (6) into Eq. (9), we expect the event rate for a rotating polarizer to be

n(t) = −(2ω/Θ) sin 2θ0 / (b + cos 2θ0), (10)

where θ0 is the angle between the polarizer axis and the polarization of the incident light, or

θ0 = ωt + ϕ − ψ.

The constant b is a bias term proportional to 1∕D. In real measurements, b will also include any other constant offset such as detector bias or leakage through the polarizer. For small values of b, n behaves like tan 2θ0 with a regularization term to avoid singularities where the cosine goes to zero. For b ≫ 1, the denominator is approximately constant and the event rate approaches a sine function as shown in Fig. 1. Values of b ≤ 1 cause singularities in Eq. (10) that no real camera can produce; limitations such as refractory period and buffer size will always produce event rates that look like the b > 1 curves in Fig. 1. This equation predicts a zero-mean event rate, which can only make sense if we use signed values for events to account for polarity.
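The qualitative behavior shown in Fig. 1 can be reproduced directly from Eq. (10); the short sketch below is our own illustration, and the parameter values (ω, Θ, b) are arbitrary assumptions:

```python
import numpy as np

def event_rate(t, omega=5.0, b=1.5, Theta=0.2, phi=0.0, psi=0.0):
    """Continuous event-rate model of Eq. (10) for a rotating polarizer.
    b is the bias term (1/D plus offsets); Theta is the comparator
    threshold; theta0 follows the definition in the text."""
    theta0 = omega * t + phi - psi
    return -(2 * omega / Theta) * np.sin(2 * theta0) / (b + np.cos(2 * theta0))

# One full rotation of the polarizer
t = np.linspace(0.0, 2 * np.pi / 5.0, 4096, endpoint=False)
n_low_b = event_rate(t, b=1.1)     # strongly polarized: spiky, tan-like
n_high_b = event_rate(t, b=50.0)   # weakly polarized: nearly sinusoidal
```

Over a full rotation the modeled rate averages to zero (positive and negative events balance), and the peak rate grows sharply as b approaches 1, consistent with the curves in Fig. 1.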
In the work below, all events are assigned a value of ±1 unless otherwise noted.
The phase term in the definition of θ 0 above is determined by the AoP relative to the polarizer axis at time t ¼ 0. The experiments conducted so far do not have any absolute time reference, making ϕ arbitrary.
The cumulative sum of events (considering events to have value ±1 to indicate polarity) is the discrete analog of computing the integral of a derivative, so that

N(t) = Σ_i p_i ≈ (1/Θ)[ln E(t) − ln E(0)], (11)

where p_i = ±1 is the polarity of the i'th event at the pixel. We can therefore recover an estimate of the photocurrent as

Î(t) = E(0) exp[Θ N(t)]. (12)

The experiments described below were intended to verify these relationships and show that an event-based sensor can be used to recover estimates of the polarization state.
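Equations (11) and (12) can be checked with a self-contained simulation of an idealized pixel (a sketch under the continuous model above, not the DAVIS pixel circuit); the threshold and signal parameters are illustrative assumptions:

```python
import numpy as np

# Idealized pixel: irradiance from Eq. (6); an event fires whenever the
# log irradiance moves by the threshold Theta from its level at the last
# event. Parameter values here are illustrative assumptions.
Theta = 0.05                      # comparator threshold (log units)
omega, D, S0 = 5.0, 0.8, 1.0      # rotation rate (rad/s), DoLP, intensity
t = np.linspace(0.0, 2 * np.pi / omega, 20000)
E_sig = 0.5 * S0 * (1 + D * np.cos(2 * omega * t))   # Eq. (6), phi = psi = 0

logE = np.log(E_sig)
ref = logE[0]                     # log level at the most recent event
count = 0                         # running signed event count, N(t)
N = np.zeros_like(logE)
for i, v in enumerate(logE):
    while v - ref >= Theta:       # positive (brightening) events
        count += 1
        ref += Theta
    while ref - v >= Theta:       # negative (darkening) events
        count -= 1
        ref -= Theta
    N[i] = count

I_hat = E_sig[0] * np.exp(Theta * N)   # Eq. (12)
```

The reconstruction error is bounded by the threshold: |ln Î − ln E| < Θ, so the relative error stays below e^Θ − 1 ≈ 5% here, and the signed count returns to zero after a full rotation of the polarizer.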

Instrument
We assembled two event-based imaging polarimeters using a Prophesee EVK and a DAVIS 346, each imaging through a 1-in. diameter thin-film polarizer (Thorlabs LPVIS100) mounted in an Elliptec rotation stage (Thorlabs ELL14K). The results presented here all use the DAVIS 346 camera because the APS imaging mode enables a comparison between conventional and event-based polarization analysis. The rotating polarizer was mounted about 2 mm in front of the lens (Edmund Optics 67714; 16 mm, f/1.4). This gap allowed some light leakage around the polarizer, which is expected to contribute to the bias term b. Figure 2 shows the prototype. The camera and rotation stage are mounted to a custom-printed plastic mount.

Highly Polarized Target
The camera was pointed at a Samsung quantum-dot light-emitting diode (QLED) monitor as an initial test to verify the equations above. The monitor is strongly polarized in the horizontal direction. Figure 3 shows the values reported from a single pixel as the polarizer rotates through ∼540 deg. The APS video, the event rate n, and the event count N all show good agreement with the theory above [Eqs. (6), (10), and (11), respectively].

Fig. 1: The theoretical event rate (normalized to unit variance for display scaling) in response to horizontal polarization over one full rotation of the polarizer, plotted for different values of the bias b. As the bias increases, the event rate becomes more harmonic, and the phase offset of the event rate relative to Malus's law irradiance (dashed black line) approaches π.

Figure 4 shows the results of using
Eq. (8) to compute the Stokes vector for each pixel in the scene. The hue in this visualization indicates AoP, with the saturation controlled by DoLP, so that white is unpolarized, pale pastels are weakly polarized, and bright colors are strongly polarized. 19 We would expect the entire QLED region to be strongly polarized with the same AoP, while the wall above the monitor should be unpolarized or weakly polarized. Based on the equations above, all pixels observing the QLED screen should show the same AoP regardless of brightness. Figure 4 shows general agreement with this, although the darker letters are still noticeable in the AoP images. The calculated AoP for the QLED pixels is 0.505 ± 0.035 rad when using event data, compared with 0.405 ± 0.031 rad for APS data (mean ± standard deviation). The 0.1-rad difference is due in part to the time delay between the first event and the first frame. In this case, the delay was about 17.4 ms and the polarizer rotation rate was ∼5 rad/s, which leads to about 0.09 rad of offset in the polarizer angle between the two datasets.
Note that AoP in this data includes an unknown (but constant) offset because we cannot directly measure θ(t). We estimate it from the motor speed as θ0 = ωt + ϕ − ψ, where ϕ is a phase offset due to the polarizer angle at time t = 0. The phase and the incident AoP should be constant for any pixel in the scene, but we cannot differentiate one from the other in this data without some a priori information about the scene. In this example, we know the QLED pixels are horizontally polarized, so we could subtract the mean angle to correct this offset.

Complex Scene
The QLED screen was helpful for confirming the theory using a source with a high degree of polarization in a known state. The method was also applied to a more complex indoor scene. The scene includes a textbook, a small plant, a mug, a plastic stapler, and a steel tumbler. Figure 5 shows a side-by-side comparison of images computed from the two different data streams, with APS imagery on the left and dynamic vision sensor (DVS) event data on the right. The polarization images show general agreement, but the S0 image recovered from events is poor. This result is not surprising, given the challenge of measuring static images with EBCs. As the polarizer rotates, the unpolarized component described by S0 produces no changes and therefore no events. While Eq. (8) implies we can estimate S0 from the event stream, we are not measuring any information related to S0. This ambiguity is fundamentally tied to the constant of integration in Eq. (11).
The polarization data for Fig. 5 were computed using Eq. (8). Alternatively, the fit coefficients from Eq. (11) could be used as estimates of the phase and magnitude of the oscillation, which can be related to AoP and DoLP, respectively. We could also apply Fourier analysis to find (at each pixel) the magnitude and phase of the oscillations at the polarizer rotation frequency. In experiments so far, we find those methods to be less robust than the matrix inverse.
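The Fourier alternative can be sketched as complex demodulation at twice the rotation frequency, since the irradiance of Eq. (6) oscillates at 2ω. The sketch below is our own illustration, assuming a clean irradiance signal, a known rotation rate, and sampling over an integer number of rotations:

```python
import numpy as np

def dolp_aop_demod(signal, t, omega):
    """Estimate DoLP and AoP by projecting a per-pixel irradiance signal
    onto exp(-2j*omega*t). For Eq. (6), the complex amplitude at 2*omega
    has magnitude S0*D/4 and phase -2*psi, while the mean is S0/2."""
    signal = np.asarray(signal, dtype=float)
    t = np.asarray(t, dtype=float)
    z = np.mean(signal * np.exp(-2j * omega * t))  # 2*omega component
    dolp = 2.0 * np.abs(z) / np.mean(signal)
    aop = -0.5 * np.angle(z)
    return dolp, aop

# Synthetic pixel: S0 = 1, D = 0.5, psi = 0.3 rad (illustrative values)
omega = 5.0
t = np.linspace(0.0, 2 * np.pi / omega, 4096, endpoint=False)
E_pix = 0.5 * (1 + 0.5 * np.cos(2 * (omega * t - 0.3)))
D_hat, psi_hat = dolp_aop_demod(E_pix, t, omega)
```

Applying the same projection to N(t) instead of irradiance yields log-domain analogs of the magnitude and phase, which is one reason we found this route less robust than the matrix inverse in practice.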

Analysis
When using Eq. (8), we found the results are more robust and consistent when applying the equation to N(t), which is proportional to log intensity, rather than directly to the intensity from Eq. (12). Because each pixel can generate at most one event per timestep, we estimate the event rate as the inverse of the time between events, which results in a noisy estimate of n(t). The noise can be reduced by increasing the time step and also by counting events in a small neighborhood around each pixel (usually 3 × 3 or 5 × 5 pixels).
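The rate and neighborhood estimates described above can be sketched as follows (our illustration; the data layout and function names are assumptions, not the actual pipeline):

```python
import numpy as np

def rate_from_events(times, polarities):
    """Noisy instantaneous-rate estimate for one pixel: signed inverse of
    the interval since the previous event. `times` must be sorted."""
    times = np.asarray(times, dtype=float)
    pol = np.asarray(polarities, dtype=float)
    dt = np.diff(times)
    return times[1:], pol[1:] / dt    # one rate sample per event

def neighborhood_counts(count_img, k=3):
    """Sum event counts over a k x k neighborhood of each pixel (zero
    padding), the spatial pooling used to reduce noise."""
    img = np.asarray(count_img, dtype=float)
    p = k // 2
    padded = np.pad(img, p)
    out = np.zeros_like(img)
    H, W = img.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + H, dx:dx + W]
    return out
```

For a uniform event train the rate estimate is exact; for real data it fluctuates strongly, which is why the spatial pooling and coarser time steps help.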
In our experiments so far, the value of S0 resulting from Eq. (8) has many negative values, which is an obviously nonphysical result. The negative and near-zero values of S0 then also corrupt estimates of DoLP. To correct this, we shifted the distribution of S0 to be strictly positive,

S0′ = S0 − min(S0) + ϵ,

where ϵ is a regularization factor to avoid division by small noisy numbers when computing DoLP. In practice, we find that using S0′ = |S0| + ϵ is also effective. The choice of regularization constant has a large effect on S0 estimates. While it is easy to get plausible results by tuning ϵ, we hope to develop a data-driven way to estimate correct values of ϵ. Ultimately, this uncertainty arises from the constant of integration in Eq. (11). Future work will include a set of experiments to see if this constant is repeatable from one collection to the next.
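A minimal sketch of the regularized DoLP computation, assuming the per-image shift described above (the function name and array layout are our own):

```python
import numpy as np

def dolp_regularized(S0, S1, S2, eps=1.0):
    """DoLP computed against the shifted, strictly positive
    S0' = S0 - min(S0) + eps. The alternative |S0| + eps mentioned
    in the text behaves similarly."""
    S0 = np.asarray(S0, dtype=float)
    S0p = S0 - S0.min() + eps
    return np.sqrt(np.asarray(S1) ** 2 + np.asarray(S2) ** 2) / S0p
```

Note the shift couples every pixel's DoLP to the single most negative S0 in the image, which is one way the choice of ϵ (and of shift versus absolute value) propagates into the estimates.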
The APS circuit of the DAVIS camera allows us to develop "truth" estimates of the input state to verify the equations presented above. Figure 6 shows one example of this. Equation (10) predicts that the event rate should be proportional to DoLP (through b). The DoLP at each pixel was computed by applying Eq. (8) to the APS frames. The vertical axis of Fig. 6 shows a count of events (ignoring polarity) on each pixel as the polarizer rotated through 3π radians at the motor's maximum rotation rate. The fit (blue dashed line) in Fig. 6 is based on summing unsigned events as predicted by Eq. (10), or

N_tot(D) = a_0 Σ_j |sin 2θ_j| / (a_1/D + a_2 + cos 2θ_j) Δθ, (13)

where Δθ is the angular sample spacing, and the fit coefficients a_i were chosen to minimize the Euclidean distance between Eq. (13) and the median points, with distances weighted by population at each value of N_tot. The fit line in Fig. 6 uses a_0 = 3.15, a_1 = 1.57, and a_2 = −0.68, which gives a fit R² = 0.87. As with the event data, our image-based DoLP calculations include a regularization term in the denominator. Even though the APS image values are strictly positive, the S0 values computed from Eq. (8) can produce small or negative values. The values plotted in Fig. 6 used ϵ = 1. As an alternative, we can also estimate DoLP as the contrast in pixel intensity as the polarizer rotates

DoLP = [max(E) − min(E)] / [max(E) + min(E)]. (14)

The matrix inversion should be more accurate, since it is a least-squared-error solution using all the data, whereas Eq. (14) uses only two values for each pixel. In the current datasets, we find the two estimates are well correlated (R² = 0.93), but the contrast estimate is much faster to compute.
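The contrast estimate of Eq. (14) is straightforward to vectorize over a stack of frames; a minimal sketch (the array layout is an assumption):

```python
import numpy as np

def dolp_contrast(E):
    """Eq. (14): DoLP as the modulation contrast over the rotation.
    E has shape (n_angles, ...) -- one irradiance sample per polarizer
    angle, so the max/min are taken along the angle axis."""
    E = np.asarray(E, dtype=float)
    Emax, Emin = E.max(axis=0), E.min(axis=0)
    return (Emax - Emin) / (Emax + Emin)

# Densely sampled Eq. (6) with S0 = 1, D = 0.7 recovers DoLP ~= 0.7
thetas = np.linspace(0.0, np.pi, 1000)
E_stack = 0.5 * (1 + 0.7 * np.cos(2 * thetas))
d_hat = dolp_contrast(E_stack)
```

Because it uses only the two extreme samples per pixel, this estimate needs dense angular sampling to catch the true extrema, but it involves no matrix inversion and no regularization of S0.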
To summarize, we have four ways to estimate DoLP: two from images and two from events. These are summarized in Table 1. The first three methods (all except the bottom-right plot below) require a regularization term in the denominator to reduce the effects of small, noisy values. For a fair comparison, all three plots used ϵ = 1, though there is no a priori reason for that choice.

Results and Discussion
The method applied here is effective at recovering polarization information from a scene using an EBC. The DoLP and AoP images are useful for augmenting contrast between objects but are not yet sufficiently accurate for quantitative polarimetry. Future experiments will focus on removing or estimating some of the unknown parameters that limit the accuracy of the current work, such as the value of the threshold Θ. The theory presented above suggests the possibility of recovering an accurate intensity (S0) image via Eq. (8), but experimental results have been disappointing: the accuracy of the S0 images reconstructed this way is poor thus far, which aligns with the conventional wisdom about EBCs.
The Elliptec motor used for these experiments has several advantages (notably high speed and low cost) but imposes some limitations as well. For characterizing and calibrating the instrument, we require more precise angle data. Also, the motor is not intended to be operated continuously, so each rotation is a separate event in terms of the processing and analysis pipeline.
The algorithm has not yet been optimized and does not run in real time. Estimating the DoLP and AoP for a full image currently takes over a minute. Because each pixel is treated independently, we expect vectorization and parallelization to speed it up significantly. The slow algorithm obviously detracts from the promise of EBCs to speed up data collection, but the algorithm is very immature at this point. Using the event count at each pixel as a proxy for DoLP is much faster (∼20 ms) and is more closely correlated to image-based DoLP calculations. The matrix calculation is still necessary for estimating AoP, which is not affected by the uncertainty in S0.
These initial experiments show that the device has some promise for applications where polarization is used to enhance contrast or object detection. It may be less suited to applications that require precise polarization measurements. Future work will emphasize collecting data in more controlled conditions and over a broader range of conditions to determine which of the findings here are repeatable and robust.

Data, Materials, and Code Availability
The code and data are property of the U.S. Air Force. Contact the authors to arrange access.