On-orbit demonstration of a linear variable band-pass filter-based miniaturized hyperspectral camera for CubeSats
Open Access | Published 29 October 2024
Abstract

Hyperspectral cameras are useful Earth observation sensors with many practical applications in agriculture and forest remote sensing for detecting and classifying the state of objects. Recently, many CubeSats have been launched into low Earth orbit. These satellites offer certain advantages, such as low cost and short development times, and their practical applications are continuously increasing. Installing a hyperspectral camera on CubeSats would significantly increase their observation frequency. However, to fit on a CubeSat, observation instruments must be compact and weigh only a few hundred grams. To address this challenge, we developed a 35-g miniaturized hyperspectral camera by combining a linear variable band-pass filter and an image sensor. This camera was installed on two CubeSats launched into orbit in 2023 and 2024. The on-orbit experiments conducted with these CubeSats were successful, and valuable hyperspectral data were obtained.

1. Introduction

Hyperspectral cameras are useful Earth observation devices with many practical applications, including agriculture, forest, and ocean remote sensing; they can detect and classify the condition of targets by simultaneously acquiring images and spectral information.1–4 Although there are examples of and development plans for hyperspectral cameras on large satellites5–9 and the International Space Station,10 such cases are limited, and their effectiveness has not been fully verified. Desirable characteristics of hyperspectral remote sensing include full Earth coverage and increased observation frequency. This is especially true in agricultural remote sensing, for which regular and frequent observations are essential for efficient crop management. However, when only a single satellite is used, the observation frequency is low (approximately once every 2 weeks). A way to acquire more frequent observations would be to develop a CubeSat constellation equipped with hyperspectral cameras.

Generally, spaceborne hyperspectral cameras require large telescopes to gather sufficient light for spectroscopic observations. Recently, image sensors have become more sensitive, enabling hyperspectral remote sensing even with smaller telescopes. In addition, plans are being developed to construct micro- and nanosatellites equipped with hyperspectral cameras.11–14

A 1U CubeSat is a standardized cube-shaped satellite with dimensions of 10×10×11 cm, typically weighing less than 1.33 kg.15–17 As the mass production and practical applications of CubeSats rapidly advance, the number of organizations launching and operating them worldwide is also increasing. Furthermore, the development of multi-mission CubeSats, which can accommodate multiple mission payloads for various purposes within a single satellite, is on the rise. Multi-mission CubeSats can be shared by multiple users, thereby reducing the development cost per organization and increasing launch frequency and mass production.18–20 Installing a hyperspectral camera on a CubeSat would potentially enhance the frequency of observations. However, installing multiple instruments on a CubeSat requires each instrument to be light and compact (weighing only a few hundred grams). Examples of small hyperspectral cameras include HYPSO-1 (Ref. 21) from the Norwegian University of Science and Technology and HyperScout-1 (Ref. 22) from Cosine Remote Sensing B.V. However, both cameras weigh more than 1.5 kg, which makes them difficult to install on multi-mission CubeSats. Recently, a method to miniaturize hyperspectral cameras using a linear variable band-pass filter (LVBPF) has been proposed.23–26 Although no on-orbit demonstration of an LVBPF-based hyperspectral camera has been reported, its incorporation on CubeSats holds great potential.

The spectral resolution of an LVBPF-based hyperspectral camera is limited by the bandwidth of the filter, the F-number of the lens, and the distance between the filter and the focal plane, and the effective spectral resolution is further limited by spectral cross-talk between neighboring pixels.24 Therefore, improving the spectral resolution with existing LVBPFs is considered difficult. However, this technology enables significant miniaturization for installation on small platforms such as CubeSats; it makes the spectral characteristics of the ground surface easy to measure and expands the range of uses for hyperspectral cameras. For example, a CubeSat equipped with an LVBPF-based hyperspectral camera could measure the coarse spectral characteristics of the ground surface, detect an anomaly or feature in the acquired hyperspectral data, and then trigger imaging by another high-resolution multispectral satellite. Another possible use would be to roughly identify a characteristic spectral band and then image that band with a multispectral satellite equipped with a liquid crystal tunable filter that can select a specific wavelength.13

In this study, an LVBPF-based spaceborne miniaturized hyperspectral camera was successfully developed. The camera was installed on two 3U-CubeSats (OPTIMAL-1 and TIRSAT) launched into orbit in 2023 and 2024 for on-orbit demonstration. This paper reports an overview of the LVBPF-based hyperspectral cameras, the imaging method, and the on-orbit results for each of these CubeSats.

2. Development of a Hyperspectral Camera Equipped with a Linear Variable Band-Pass Filter

2.1. Imaging Method of the LVBPF-Based Hyperspectral Camera

A traditional hyperspectral camera primarily consists of a telescope and a spectrometer with a slit, as shown in Fig. 1. This device operates using a push-broom scanning mechanism,27 acquiring the spectrum at each pixel location by scanning the ground surface line by line.28,29 The spectrometer consists of a collimator, a dispersion element, and an objective lens; the dispersion element is typically a prism or a grating. Because of the many optical components required, the traditional hyperspectral camera is difficult to miniaturize. A way to bypass this limitation is to use an LVBPF, a band-pass filter whose transmitted wavelength changes linearly with position. This way, spatial and spectral information can be acquired using a single, easy-to-miniaturize filter. The configuration of an LVBPF-based hyperspectral camera is shown in Fig. 2. Traditional grating-based hyperspectral cameras are characterized by high spectral resolution; however, because the incident light is divided into different diffraction orders, the amount of light reaching the image sensor is reduced. By contrast, the sensitivity of an LVBPF-based hyperspectral camera mainly depends on the transmittance of the filter, so these cameras have high sensitivity. Nevertheless, as with the traditional type, push-broom observations are also required for LVBPF-based hyperspectral cameras. The spectral images acquired through the LVBPF have different transmission wavelengths in the along-track direction; thus, spectral images must be captured at different positions using push-broom observations to acquire an image at the same wavelength. These spectral images are then combined to generate the hyperspectral data.

Fig. 1 Traditional hyperspectral observation method.

Fig. 2 LVBPF-based hyperspectral observation method.
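
To make the reconstruction described above concrete, the following sketch (illustrative Python, not the flight software) assembles a hypercube from a stack of LVBPF frames under simplifying assumptions: a rigid scene, a known constant integer along-track shift between consecutive frames, and a known detector row-to-band mapping. All function and variable names here are ours, not from the paper.

```python
import numpy as np

def assemble_hypercube(frames, shift_px, row_to_band):
    """Assemble hyperspectral data from LVBPF push-broom frames.

    frames      : list of 2D arrays (rows x cols); each detector row sees
                  a different wavelength because of the LVBPF.
    shift_px    : along-track ground shift between consecutive frames, in
                  pixels (assumed constant and known here; in practice it
                  must be estimated from the platform motion).
    row_to_band : integer array mapping detector row -> spectral band.
    """
    n_frames = len(frames)
    n_rows, n_cols = frames[0].shape
    n_bands = int(row_to_band.max()) + 1
    # Ground extent covered by the whole sequence, in pixels.
    n_lines = n_rows + shift_px * (n_frames - 1)
    cube = np.zeros((n_lines, n_cols, n_bands))
    hits = np.zeros((n_lines, 1, n_bands))  # for averaging overlaps

    for k, frame in enumerate(frames):
        offset = k * shift_px  # ground line of this frame's first row
        for r in range(n_rows):
            b = row_to_band[r]
            cube[offset + r, :, b] += frame[r, :]
            hits[offset + r, 0, b] += 1

    return cube / np.maximum(hits, 1)  # average where rows overlap
```

Averaging the overlapping contributions stands in for the image synthesis step; Sec. 2.2.3 describes the feature-matching registration actually used when the platform motion is not uniform.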

2.2. Development of the LVBPF-Based Hyperspectral Camera and Ground Demonstration

2.2.1. Design and characteristics of the developed hyperspectral camera

Figure 3 shows the developed CubeSat LVBPF-based hyperspectral camera. The camera is small (approximately 3 cm in length, including the lens and image sensor). Two types of cameras were developed in this study for installation on the CubeSats (Table 1). Both cameras have the same appearance but different filters and image sensors. Because these cameras were intended to be installed as a sub-mission of a multi-mission CubeSat, they use a small lens with coarse resolution. Table 2 lists the specifications of each filter. The type A LVBPF was used for an initial experiment; therefore, a readily available off-the-shelf filter with a long, thin shape was used. By contrast, the type B LVBPF was cut to a size matching the image sensor. The type A camera has a smaller pixel size in the image sensor, resulting in finer spatial and spectral resolution than the type B camera. However, the type B camera has an advantage in its wavelength range, as it covers both the visible and near-infrared spectra, whereas the type A camera is limited to the visible range. The structure and detailed views of each hyperspectral camera are shown in Fig. 4. This figure shows the structure of the type B camera; the type A camera is identical except for the filter size. The filter is not deposited onto the image sensor but is fixed mechanically. With mechanical fixation, however, the gap between the filter and the image sensor widens, which increases spectral cross-talk: light of adjacent wavelengths leaks into neighboring pixels and the wavelengths mix. Commercially available image sensors come with a cover glass attached; therefore, the cover glass was removed, and the structure was designed to place the filter as close as possible to the image sensor. Treating the spectral resolution in this configuration as the spectral width corresponding to the footprint of the converging beam on the filter, the spectral resolution is the product of the filter dispersion and the beam width, where the beam width is the filter-to-focal-plane distance divided by the F-number. The F-number of the telescope lens is F/2.5, and the distance between the filter and the focal plane is 0.67 mm; these are the same for both cameras. For the type A camera, the dispersion of the LVBPF is 42.1 nm/mm; thus, the spectral resolution is approximately 11.3 nm. For type B, with a dispersion of 67.7 nm/mm, the spectral resolution is approximately 18.1 nm. However, this is merely a simplified calculation in which the half bandwidth of the filter itself is ignored. The measured spectral resolutions of both cameras are given in Sec. 2.2.2.
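
Written out, this simplified model (which, as noted, ignores the filter's intrinsic half bandwidth) is

```latex
\Delta\lambda \approx D\,\frac{d}{N},
\qquad
\Delta\lambda_{\mathrm{A}} \approx 42.1\,\tfrac{\mathrm{nm}}{\mathrm{mm}}
  \times \frac{0.67\,\mathrm{mm}}{2.5} \approx 11.3\,\mathrm{nm},
\qquad
\Delta\lambda_{\mathrm{B}} \approx 67.7\,\tfrac{\mathrm{nm}}{\mathrm{mm}}
  \times \frac{0.67\,\mathrm{mm}}{2.5} \approx 18.1\,\mathrm{nm},
```

where D is the filter dispersion, d is the filter-to-focal-plane distance, and N is the lens F-number; d/N approximates the diameter of the converging beam footprint on the filter.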

Fig. 3 Overview of the developed hyperspectral camera.

Table 1 Specifications of the developed hyperspectral cameras.

Item | Type A | Type B
Size | 3.6×3.6×2.4 cm | 3.6×3.6×2.4 cm
Weight | 35 g | 35 g
Ground sampling distance (altitude of 500 km) | 138 m/pixel | 450 m/pixel
Swath | 66 km | 460 km
Available wavelength range | 435 to 655 nm | 400 to 770 nm
Spectral sampling distance | 5 nm | 5 nm
Spectral resolution | 14.1 nm | 18.5 nm
Number of bands | 45 | 75
Focal length of telescope lens | 8 mm | 8 mm
F-number of telescope lens | F/2.5 | F/2.5
Valid pixel area | 2560×480 pixels | 1280×1024 pixels
Pixel size | 2.2 μm | 5.3 μm
Dynamic range | 8 bit | 8 bit
Satellite | OPTIMAL-1 | TIRSAT

Table 2 Specifications of the LVBPFs (condition: incident angle = 0 deg).

Item | Type A | Type B
Size | 8.9×1.05 mm | 10.1×8 mm
Thickness | 1.0 mm | 0.5 mm
Valid wavelength range | 380 to 750 nm | 380 to 850 nm
Dispersion | 42.1 nm/mm | 67.7 nm/mm
Peak transmission | 60% | 65%
Spectral blocking property | No information | <1%
Half bandwidth | No information | 15 nm at 430 nm; 20.6 nm at 780 nm
Substrate | Fused silica | Optical glass

Fig. 4 Structure of the hyperspectral type B camera.

The LVBPF-based hyperspectral camera offers flexibility: its spectral resolution and wavelength range can easily be modified simply by changing the filter, image sensor, or telescope lens. To avoid relying on the attitude control accuracy of the CubeSat during the demonstration experiment, the ground sampling distance was coarsened to provide a wide field of view, although the native ground sampling distance of both cameras was already coarser than 100 m.

2.2.2. Pre-flight performance

The performance of the developed hyperspectral camera types A and B was evaluated as a ground test. The imaging performance of a hyperspectral camera is expected to depend on the wavelength, and the point-spread function (PSF) of each camera engineering model was measured with each wavelength. The PSF of each camera was measured using a collimator with a pinhole chart and band-pass filters of 400, 550, 600, 650, and 700 nm. Figures 5 and 6 show the PSFs of each camera with each wavelength. Based on the results obtained, each plot was approximated with a normal distribution function to derive the standard deviation (σ). As a result, the PSFs of the type A camera had an (σ) of 1.8 pixels at 450 nm, 1.7 pixels at 550 nm, and 1.6 pixels at 650 nm, whereas the type B camera had a PSF with an (σ) of 1.8 pixels at 450 nm, 1.7 pixels at 550 nm, 1.7 pixels at 650 nm, and 2.1 pixels at 700 nm. The result shows that the imaging performance in the visible range has no wavelength dependency. By contrast, there is some wavelength dependency in the near-infrared range of the type B. However, the difference is only 0.4  pixels, which is sufficient for imaging performance across the entire wavelength range. The wavelength dependency of imaging performance is due to the optical design, and it is considered that there is no difference between the engineering model and the flight model; therefore, the PSFs of the flight model were measured only for green wavelengths (550 nm).
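
As an illustration of how σ can be derived from such measurements, the following sketch fits a 1D Gaussian to a PSF cut using SciPy; the function names and initial-guess values are our own assumptions, not details from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    """1D Gaussian model for the measured point-spread profile."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def psf_sigma(profile):
    """Fit a normal distribution to a 1D PSF cut; return sigma in pixels.

    profile : 1D array of pixel intensities across the pinhole image.
    """
    x = np.arange(profile.size)
    p0 = [profile.max() - profile.min(),   # amplitude guess
          float(np.argmax(profile)),       # center guess
          1.5,                             # sigma guess (pixels)
          float(profile.min())]            # background offset guess
    popt, _ = curve_fit(gaussian, x, profile, p0=p0)
    return abs(popt[2])
```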

Fig. 5 PSF of the hyperspectral type A camera at each wavelength: (a) 450 nm, (b) 550 nm, and (c) 650 nm.

Fig. 6 PSF of the hyperspectral type B camera at each wavelength: (a) 450 nm, (b) 550 nm, (c) 650 nm, and (d) 700 nm.

The PSFs of each camera flight model were measured in ground tests using a collimator with a pinhole chart and green light. Both cameras were subjected to environmental tests, including random vibration, thermal cycling, and thermal vacuum tests. Figure 7 shows the PSFs measured before and after the environmental tests. The thermal cycling and thermal vacuum tests were performed over a range of −10°C to +50°C. The random vibration test was performed with the camera mounted on each satellite under the conditions of the intended launch vehicle. Before the environmental tests, the PSFs of both the type A and type B cameras had a standard deviation (σ) of 1.4 pixels. After the environmental tests, the type A camera had a PSF σ of 1.7 pixels, and the type B camera had 1.6 pixels. The difference in PSF before and after the environmental tests was thus 0.2 to 0.3 pixels, with both cameras maintaining sufficient imaging quality.

Fig. 7 PSF of the hyperspectral type A and type B camera flight models.

Figure 8 shows the full width at half maximum (FWHM), evaluated as the spectral resolution, at 450, 500, 600, and 700 nm (the type A camera did not include 700 nm).

Fig. 8 Spectral resolution of the hyperspectral type A and type B camera flight models.

The horizontal axis is the pixel position on the image sensor, converted to wavelength using the spectral calibration results; the conversion factor is 0.09 nm/pixel for type A and 0.336 nm/pixel for type B. The evaluation was conducted by attaching a band-pass filter with an FWHM of 10 nm to an integrating sphere illuminated by a halogen lamp. For the type A camera, the spectral resolution was 10.4 nm at 450 nm, 12.1 nm at 500 nm, and 20.7 nm at 600 nm. For the type B camera, the spectral resolution was 19.8 nm at 450 nm, 23.7 nm at 500 nm, 36.4 nm at 600 nm, and 47.5 nm at 700 nm. As shown in Table 1, the design values of the spectral resolution are 14.1 nm for the type A camera and 18.5 nm for the type B camera. At short wavelengths, the measured and calculated results agree, but the spectral resolution degrades as the wavelength increases. This is likely because the half bandwidth of the LVBPF is wavelength dependent, as shown in the filter specifications in Table 2, and is wider on the long-wavelength side. Table 2 lists the half bandwidth of the type B filter; the half bandwidth of the type A filter is assumed to be similar. As the half bandwidth increases, the range of incidence angles that can be transmitted increases. Furthermore, the PSF is coarser on the near-infrared side. These combined effects likely coarsen the spectral resolution on the long-wavelength side of the image sensor. For these reasons, the influence of spectral cross-talk between neighboring pixels becomes large, especially on the long-wavelength side, the spectral purity is diluted, and the effective number of bands is reduced.
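
For illustration, a minimal sketch of such an FWHM evaluation on a measured line profile might look as follows. The crossing-point method and names are our assumptions (the paper does not specify the numerical procedure), and sub-pixel interpolation and deconvolution of the 10-nm source filter are omitted.

```python
import numpy as np

def fwhm_nm(profile, nm_per_pixel):
    """Estimate the FWHM of a measured spectral line profile.

    profile      : 1D array, camera response vs. detector pixel position.
    nm_per_pixel : spectral calibration factor (0.09 for type A and
                   0.336 for type B, per the text above).
    """
    prof = profile - profile.min()        # remove background offset
    half = prof.max() / 2.0
    above = np.where(prof >= half)[0]     # pixels above half maximum
    # Width between first and last half-maximum crossings, in pixels,
    # converted to wavelength units.
    return (above[-1] - above[0]) * nm_per_pixel
```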

To ensure the success of the CubeSat demonstration experiment in this research, the camera adopted a fast F/2.5 lens so that images of the Earth's surface could be reliably captured. As a result, the width of the light beam passing through the filter becomes larger, and the spectral resolution becomes coarser. The spectral resolution of LVBPF-based hyperspectral cameras was thus found to depend strongly on the F-number of the lens; this trade-off should be considered in future work to find an optimal design.

2.2.3. Demonstration using a drone

To evaluate the developed type B LVBPF-based hyperspectral camera, an aerial photography experiment was performed using a drone (a DJI Phantom 4). The purpose of this experiment was to confirm that images and spectral information can be effectively acquired using the LVBPF-based hyperspectral camera. The experiment was performed at a soybean farm, where spectral information on soy, weeds, and soil was acquired. The specifications of the type B camera are given in Table 1. The drone was flown at an altitude of 15 m, and images were taken with a ground sampling distance of approximately 1 cm/pixel. The ground surface is scanned in the direction of the drone's movement, and the overlapping parts of the images are synthesized to generate spectral images, which then form the hyperspectral data. However, because of the vibrations that occur when imaging from a drone, combining images at equal time intervals did not produce a successful synthesis. Therefore, the overlap coordinates were estimated using image feature point matching and homography transformation, and the images were synthesized to generate the hyperspectral data, as sketched below. In the CubeSat imaging experiments described in Sec. 3, the satellite's attitude was similarly unstable, so the same synthesis method was used to generate the hyperspectral data.
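
A minimal sketch of this registration step, assuming ORB features and OpenCV (the paper specifies feature matching and homography estimation but not the detector or library), is given below.

```python
import cv2
import numpy as np

def register_frames(ref_img, new_img):
    """Estimate the homography aligning new_img to ref_img.

    ref_img, new_img : single-channel (grayscale) images of overlapping
    ground areas; ORB is an assumed choice of feature detector.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(ref_img, None)
    kp2, des2 = orb.detectAndCompute(new_img, None)

    # Brute-force Hamming matching with cross-checking for robustness.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the new frame into the reference geometry so that rows of
    # equal wavelength can be stacked into the hyperspectral cube.
    return cv2.warpPerspective(new_img, H, ref_img.shape[1::-1])
```

Each warped frame then contributes its rows to the wavelength bands determined by the LVBPF, as in the assembly sketch of Sec. 2.1.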

Figure 9 shows an RGB composite color image and spectral information extracted from the acquired hyperspectral data. The results of classifying soy and weeds from the hyperspectral data using the Spectral Angle Mapper (SAM) method30 are summarized in Fig. 10. As shown in Figs. 10(a) and 10(b), comparing the classification outcome with visual inspection confirms that weeds can be classified. Next, the classification accuracy was calculated. Because the weeds were scattered in small patches, an accurate annotated mask could not be created for them; therefore, only the classification accuracy for soybeans was calculated, yielding 76.8%. This accuracy was computed by comparing the annotated mask shown in Fig. 10(c) with the classification result in Fig. 10(a) and dividing the number of correctly classified pixels by the total number of pixels in the annotated mask. The annotated mask was created manually from the high-resolution aerial photograph with a ground sampling distance of approximately 0.2 cm/pixel shown in Fig. 10(d). These results show that the developed LVBPF-based hyperspectral camera has sufficient performance to acquire spectral information for agricultural remote sensing applications.
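
The SAM classification and the accuracy computation described above can be sketched as follows; this is a simplified implementation, and the angle threshold and the handling of reference spectra are our assumptions.

```python
import numpy as np

def spectral_angle(cube, reference):
    """Per-pixel spectral angle (radians) between cube spectra and a
    reference spectrum, as used in the Spectral Angle Mapper (SAM)."""
    dot = np.einsum('ijk,k->ij', cube, reference)
    norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(reference)
    return np.arccos(np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0))

def sam_classify(cube, references, max_angle=0.15):
    """Assign each pixel to the reference with the smallest angle;
    pixels above max_angle (an assumed threshold) stay unclassified (-1)."""
    angles = np.stack([spectral_angle(cube, r) for r in references])
    labels = np.argmin(angles, axis=0)
    labels[np.min(angles, axis=0) > max_angle] = -1
    return labels

def class_accuracy(labels, mask, class_id):
    """Fraction of annotated-mask pixels classified as class_id,
    mirroring the soybean accuracy computation described above."""
    return np.mean(labels[mask] == class_id)
```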

Fig. 9 Composite image consisting of 474, 564, and 614 nm and respective spectral information (2022-08-08 11:27:23 JST, captured by drone).

Fig. 10 Classification results for the soybean farm: (a) classification results for soy and weeds, (b) comparison of the weed classification result with visual inspection, (c) manually annotated mask for the soy region, and (d) high-resolution color aerial photograph.

3. On-orbit Results of 3U-CubeSat "OPTIMAL-1"

3.1. Details of the OPTIMAL-1 Satellite

OPTIMAL-1 is a 3U-CubeSat (100×100×340 mm, weighing 3.9 kg) developed by ArkEdge Space Inc. in collaboration with the University of Fukui and based on the TRICOM-2 satellite platform.31,32 The exterior view and specifications of OPTIMAL-1 are shown in Fig. 11 and Table 3, respectively. This satellite serves multiple purposes, including demonstrating novel components, storing and forwarding collected data, and conducting Earth observations using color and hyperspectral cameras. OPTIMAL-1 was deployed from the International Space Station on January 6, 2023, and successfully entered Earth orbit at an altitude of approximately 400 km and an inclination of 51.6 deg.33 Two weeks after deployment, the basic functionality of the satellite bus was confirmed, and mission operations began. The satellite completed its missions and reentered the atmosphere on August 22, 2023, at the end of its operations.

Fig. 11 Exterior view of OPTIMAL-1 and location of the hyperspectral camera.

Table 3 Specifications of OPTIMAL-1.

Item | Specification
Size | 100×100×340 mm
Weight | 3.9 kg
Attitude determination and control subsystem | Geomagnetic sensor, gyroscope, sun sensors (×3), global positioning system (GPS) receiver, magnetic torquer, reaction wheel
Power | Solar array: 4 body-mounted panels; maximum power generation: 8 W; typical power consumption: 2.6 W; battery: 5.8 Ah, nominal 8 V
Communication | Telemetry/command: S-band; command uplink: 4 kbps; telemetry downlink: 4 to 64 kbps
Controller for hyperspectral camera | Raspberry Pi Zero; camera interface: USB 2.0
Initial orbit | Altitude: 410×417 km; eccentricity: 0.0005361; inclination: 51.642 deg

3.2. On-orbit Observation Results

A type A LVBPF-based hyperspectral camera was installed on this satellite and tested in orbit. As previously mentioned, the hyperspectral camera scans the ground surface using push-broom observations. Unfortunately, owing to an attitude control failure, OPTIMAL-1's hyperspectral camera was not able to identify the nadir point and stably scan the ground surface. Nevertheless, we processed the data acquired while the attitude control parameters were being adjusted and successfully obtained the partial hyperspectral data shown in Fig. 12, in which the black areas in each spectral range represent defective areas. The land-area imagery that could be acquired was limited to spectral images from 435 to 505 nm. Figure 13 shows composite images created from the 465, 485, and 505 nm wavelengths and the extracted spectral information. Although the performance could not be evaluated at all wavelengths covered by this hyperspectral camera, the minimum necessary functions, such as imaging and spectroscopy, were verified using OPTIMAL-1.

Fig. 12 Partial hyperspectral data acquired on orbit (2023-01-27 05:17:37 UTC, captured by OPTIMAL-1).

Fig. 13 Composite image consisting of 465, 485, and 505 nm and respective spectral information at locations A, B, and C (2023-01-27 05:17:37 UTC, captured by OPTIMAL-1).

4. On-orbit Results of 3U-CubeSat "TIRSAT"

4.1. TIRSAT Details

TIRSAT is a 3U-CubeSat developed mainly by SEIREN Co., Ltd. under the management of Japan Space Systems as a project commissioned by the Ministry of Economy, Trade and Industry of Japan. The satellite measures 117×117×381 mm and weighs 4.97 kg. TIRSAT is equipped with a bolometer-type camera whose main purposes are to measure changes in heat on the Earth's surface in the thermal infrared, detect heat sources such as factories, and estimate their operating status. The hyperspectral camera under test was installed as an extra payload. The exterior view and specifications of TIRSAT are presented in Fig. 14 and Table 4, respectively. The satellite was successfully launched into a sun-synchronous sub-recurrent orbit at an altitude of approximately 680 km on February 17, 2024, by Japan's H3 launch vehicle.34 After orbit injection, the basic functions of the satellite bus were confirmed, and mission operations began.

Fig. 14 Exterior view of TIRSAT and location of the hyperspectral camera.

Table 4 Specifications of TIRSAT.

Item | Specification
Size | 117×117×381 mm
Weight | 4.97 kg
Attitude determination and control subsystem | 3-axis stabilization using a geomagnetic sensor, gyroscope, sun sensors (×3), GPS receiver, magnetic torquer, and reaction wheel
Power | Solar array: 4 deployable panels and 4 body-mounted panels; maximum power generation: 20 W; typical power consumption: 10 W; battery: 5.8 Ah, nominal 8 V
Communication | Telemetry/command: S-band; command uplink: 4 kbps; telemetry downlink: 4 to 64 kbps; mission data downlink: X-band, 5/10 Mbps
Controller for hyperspectral camera | Raspberry Pi Compute Module 3; camera interface: USB 2.0
Orbit | Sun-synchronous sub-recurrent orbit; altitude: approximately 680 km; inclination: 98 deg

4.2. On-orbit Observation Results

The type B LVBPF hyperspectral camera was installed on TIRSAT and tested in orbit. TIRSAT successfully pointed at the nadir, scanned the ground surface, and acquired images using the hyperspectral camera. The experimental data described below were acquired near South Korea and the Goto Islands on March 29, 2024. Raw images were acquired every second for 200 s, successfully generating the hyperspectral data shown in Fig. 15. This figure shows a composite image consisting of 482.6, 502.6, and 522.6 nm and the spectral information extracted in each area. The hyperspectral camera was able to acquire spectral information from all the ocean areas (e.g., locations B and C), whereas only visible wavelengths could be extracted from the land areas (e.g., location A) because of the large amount of cloud cover and the limited land area. Overall, the initial on-orbit demonstration of the type B LVBPF hyperspectral camera installed on TIRSAT was considered successful.

Fig. 15 Composite image consisting of 482.6, 502.6, and 522.6 nm and respective spectral information at locations A, B, and C (2024-03-29 01:22:16 UTC, captured by TIRSAT).

TIRSAT has continued regular observations since the initial operation. Because the hyperspectral camera was installed on TIRSAT as a sub-mission, it does not make many observations; nevertheless, it has acquired observation data, as shown in Figs. 16 and 17. Figure 16 shows data taken near Lake Gregory in Australia; the graph shows that spectral information was obtained for lakes (e.g., location A) and bare land (e.g., locations B and C). Figure 17 shows data taken over Japan: location B is an ocean area, locations A and C are vegetated areas, and location D is an urban area. Various types of spectral information have been successfully acquired. Our future plans include further adjusting the imaging parameters, continuing the demonstration, performing radiometric calibration, and deriving spectral reflectance.

Fig. 16 Composite image consisting of 482.6, 567.6, and 617.6 nm and respective spectral information at locations A, B, and C (2024-07-19 01:45:12 UTC, captured over Australia by TIRSAT).

Fig. 17 Composite image consisting of 457.6, 552.6, and 637.6 nm and respective spectral information at locations A, B, C, and D (2024-08-01 01:48:12 UTC, captured over Japan by TIRSAT).

5. Conclusions

This study described the results of an on-orbit demonstration of an LVBPF-based hyperspectral camera installed on 3U-CubeSats.

In this research, miniaturized hyperspectral cameras roughly 3 cm in size were developed and installed on two CubeSats. The developed cameras underwent pre-flight tests; their imaging performance was evaluated before and after space environment testing, and the results showed that they could withstand the space environment. In addition, an aerial photography experiment using a drone was performed at a soybean farm to evaluate the developed camera. The results confirmed that soybeans and weeds can be classified using the hyperspectral data, demonstrating that the developed hyperspectral camera has sufficient performance to acquire spectral information.

The developed LVBPF-based hyperspectral cameras installed on the two CubeSats, OPTIMAL-1 and TIRSAT, successfully acquired hyperspectral data in orbit. The results demonstrate that this hyperspectral camera can be installed as a mission module on various types of CubeSats. Although it was tested on 3U-CubeSats, the hyperspectral camera developed in this study is small enough to be installed on 2U- and 1U-CubeSats. In addition, the spectral characteristics of an LVBPF-based hyperspectral camera can easily be modified by replacing the filter, and the filter's high optical transmittance leaves room to improve spatial resolution. In the future, we plan to exploit these advantages to develop various types of hyperspectral cameras, including high-resolution cameras.

Disclosures

The author declares no conflicts of interest associated with this paper.

Code and Data Availability

The data supporting the findings of this study are available from the corresponding author upon request. However, the satellite images will be made available after permission is granted by the satellite ownership organization.

Acknowledgments

This work was partially supported (specifically the development and on-orbit operation of OPTIMAL-1) by the Ministry of Economy, Trade and Industry of Japan (METI) (Grant No. JPJ006617). Part of the development and on-orbit operation of TIRSAT was supported by the METI. The authors thank ArkEdge Space, Inc., Japan Space Systems, and SEIREN Co. Ltd. for their cooperation in the development and on-orbit operation of the satellites. We would like to thank Editage for English language editing.

References

1. P. S. Thenkabail, J. G. Lyon and A. Huete, Hyperspectral Remote Sensing of Vegetation, CRC Press (2011).
2. B. Lu et al., "Recent advances of hyperspectral imaging technology and applications in agriculture," Remote Sens. 12(16), 2659 (2020). https://doi.org/10.3390/rs12162659
3. J. Transon et al., "Survey of hyperspectral Earth observation applications from space in the Sentinel-2 context," Remote Sens. 10(2), 157 (2018). https://doi.org/10.3390/rs10020157
4. R. L. Lucke et al., "Hyperspectral imager for the coastal ocean: instrument description and first images," Appl. Opt. 50, 1501–1516 (2011). https://doi.org/10.1364/AO.50.001501
5. J. S. Pearlman et al., "Hyperion, a space-based imaging spectrometer," IEEE Trans. Geosci. Remote Sens. 41(6), 1160–1173 (2003). https://doi.org/10.1109/TGRS.2003.815018
6. R. Loizzo et al., "PRISMA: the Italian hyperspectral mission," in IGARSS 2018 IEEE Int. Geosci. and Remote Sens. Symp. (2018). https://doi.org/10.1109/IGARSS.2018.8518512
7. EnMAP at the Earth Observation Center (EOC) of DLR, https://www.enmap.org/
8. L. Guanter et al., "The EnMAP spaceborne imaging spectroscopy mission for Earth observation," Remote Sens. 7(7), 8830–8857 (2015). https://doi.org/10.3390/rs70708830
9. Q. Shen-En, "Hyperspectral satellites, evolution, and development history," IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 14, 7032–7056 (2021). https://doi.org/10.1109/JSTARS.2021.3090256
10. A. Iwasaki et al., "Prelaunch status of Hyperspectral Imager Suite (HISUI)," in IGARSS 2019 IEEE Int. Geosci. and Remote Sens. Symp. (2019). https://doi.org/10.1109/IGARSS.2019.8898660
11. S. G. Ungar et al., "Overview of the Earth Observing-1 (EO-1) mission," IEEE Trans. Geosci. Remote Sens. 41, 1149–1159 (2003). https://doi.org/10.1109/TGRS.2003.815999
12. M. Esposito et al., "In-orbit demonstration of artificial intelligence applied to hyperspectral and thermal sensing from space," Proc. SPIE 11131, 111310C (2019). https://doi.org/10.1117/12.2532262
13. Y. Yatsu et al., "PETREL: platform for extra and terrestrial remote examination with LCTF," in 35th Annu. AIAA/USU Conf. Small Satellites, SSC21-VI-04 (2021).
14. T. Tikka, J. Makynen and M. Shimoni, "Hyperfield—hyperspectral small satellites for improving life on Earth," in IEEE Aerosp. Conf. (2023). https://doi.org/10.1109/AERO55745.2023.10115806
15. H. Heidt et al., "CubeSat: a new generation of picosatellite for education and industry low-cost space experimentation," in 14th Annu. AIAA/USU Conf. on Small Satellites (2000).
16. R. Nugent et al., "The CubeSat: the picosatellite standard for research and education," in Proc. AIAA Space 2008 Conf. and Exhibit (2008). https://doi.org/10.2514/6.2008-7734
17. S. Nakasuka et al., "Evolution from education to practical use in University of Tokyo's nano-satellite activities," Acta Astronaut. 66, 1099–1105 (2010). https://doi.org/10.1016/j.actaastro.2009.09.029
18. C. R. Boshuizen et al., "Results from the Planet Labs Flock constellation," in Proc. 28th AIAA/USU Conf. Small Satellites (2014).
19. E. E. Areda et al., "Development of innovative CubeSat platform for mass production," Appl. Sci. 12(18), 9087 (2022). https://doi.org/10.3390/app12189087
20. S. DelPozzo and C. Williams, 2020 Nano/Microsatellite Market Forecast, 10th ed., SpaceWorks Enterprises, Inc. (2020).
21. S. Bakken et al., "HYPSO-1 CubeSat: first images and in-orbit characterization," Remote Sens. 15(3), 755 (2023). https://doi.org/10.3390/rs15030755
22. M. Esposito et al., "Demonstration in space of a smart hyperspectral imager for nanosatellites," in 32nd Annu. AIAA/USU Conf. on Small Satellites, SSC18-I-07 (2018).
23. A. M. Mika, "Linear-wedge spectrometer," Proc. SPIE 1298 (1990). https://doi.org/10.1117/12.21343
24. S. Song et al., "Low-cost hyper-spectral imaging system using a linear variable bandpass filter for agritech applications," Appl. Opt. 59(5), A167–A175 (2020). https://doi.org/10.1364/AO.378269
25. M. Dami et al., "Ultra compact spectrometer using linear variable filters," in Int. Conf. Space Opt. 2010 (2010). https://doi.org/10.1117/12.2309265
26. T. D. Rahmlow, Jr., W. Cote and R. Johnson, Jr., "Hyperspectral imaging using a linear variable filter (LVF) based ultra-compact camera," Proc. SPIE 11287, 1128715 (2020). https://doi.org/10.1117/12.2546709
27. S. Kaiser et al., "Compact prism spectrometer of pushbroom type for hyperspectral imaging," Proc. SPIE 7100, 710014 (2008). https://doi.org/10.1117/12.797177
28. P. Mouroulis and R. O. Green, "Review of high fidelity imaging spectrometer design for remote sensing," Opt. Eng. 57(4), 040901 (2018). https://doi.org/10.1117/1.OE.57.4.040901
29. D. Labate et al., "The PRISMA payload optomechanical design: a high performance instrument for a new hyperspectral mission," Acta Astronaut. 65(9), 1429–1436 (2009). https://doi.org/10.1016/j.actaastro.2009.03.077
30. Boardman, SIPS User's Guide: Spectral Image Processing System, 88, Center for the Study of Earth from Space, Boulder (1992).
31. Y. Aoyanagi et al., "Design of 3U-CubeSat bus based on TRICOM experience to improve versatility and easiness of AIT," Trans. Jpn. Soc. Aeronaut. Space Sci., Aerosp. Technol. Jpn. 19(2), 252–258 (2021). https://doi.org/10.2322/tastj.19.252
32. Q. Verspieren et al., "Store-and-forward 3U CubeSat project TRICOM and its utilization for development and education: the cases of TRICOM-1R and JPRWASAT," Trans. Jpn. Soc. Aeronaut. Space Sci. 63(5), 206–211 (2020). https://doi.org/10.2322/tjsass.63.206
33. "ArkEdge Space successfully injected the nano-satellite 'OPTIMAL-1' into orbit from the International Space Station," https://arkedgespace.com/en/news/2023-02-03_optimal-1
34. "SEIREN's nano-satellite 'TIRSAT' successfully injected into orbit," https://www.seiren.com/news/2024-02-1901.html

Biography

Yoshihide Aoyanagi is a project associate professor at the University of Fukui, Japan. He received his PhD in engineering from the Hokkaido Institute of Technology in 2011. He focuses on the research and development of spaceborne optical sensors such as hyperspectral cameras, as well as versatile CubeSats and autonomous observation systems.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Yoshihide Aoyanagi "On-orbit demonstration of a linear variable band-pass filter-based miniaturized hyperspectral camera for CubeSats," Journal of Applied Remote Sensing 18(4), 044512 (29 October 2024). https://doi.org/10.1117/1.JRS.18.044512
Received: 24 July 2024; Accepted: 4 October 2024; Published: 29 October 2024
KEYWORDS: Cameras, Tunable filters, Optical filters, Bandpass filters, Satellites, Spectral resolution, Linear filtering
