Object detection is a critical task in computer vision, with applications ranging from robotics to surveillance. Traditional RGB-based methods often struggle in low-light, high-speed, or high-dynamic-range scenarios, producing blurred or low-contrast images. In this paper, we present a novel algorithmic approach that fuses event data from event cameras with RGB images to improve object detection performance in real time. Unlike traditional frame-based cameras, event cameras offer high temporal resolution and a wide dynamic range, capturing intensity changes asynchronously at the pixel level. Our method leverages the complementary strengths of event data and RGB images to reconstruct blurred images while retaining the contrast information of the original data. We propose an algorithmic pipeline that first fuses event data with RGB images and then applies a reconstruction step to generate enhanced images suitable for object detection. Because the pipeline does not rely on deep learning, it is computationally efficient and well suited to real-time applications. To validate the effectiveness of our approach, we benchmark its object detection performance against the popular YOLO family of detectors. Moreover, we report real-time metrics to demonstrate the practicality of our method in time-sensitive applications.
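
The abstract does not specify the fusion and reconstruction model, so the sketch below is purely illustrative: it shows one classical, learning-free way to combine a blurry frame with event data, in the style of an event-based double-integral deblurring, where the latent sharp image is recovered by dividing the blurry frame by the exposure-averaged exponential of the integrated event polarities. The event tuple layout `(t, x, y, polarity)`, the contrast threshold `c`, and all function names are assumptions for this sketch, not the paper's actual method.

```python
import numpy as np

def integrate_events(events, shape, t_ref, t):
    """Sum signed event polarities per pixel between t_ref and t."""
    E = np.zeros(shape, dtype=np.float64)
    lo, hi = (t_ref, t) if t >= t_ref else (t, t_ref)
    sign = 1.0 if t >= t_ref else -1.0
    for ts, x, y, p in events:       # assumed event format: (t, x, y, polarity in {-1, +1})
        if lo <= ts < hi:
            E[y, x] += sign * p
    return E

def edi_deblur(blurry, events, t_start, t_end, c=0.2, n_samples=30):
    """Recover a latent sharp image from a blurry grayscale frame plus events
    (double-integral style sketch); c is an assumed contrast threshold."""
    t_ref = 0.5 * (t_start + t_end)  # latent image referenced to mid-exposure
    denom = np.zeros_like(blurry, dtype=np.float64)
    for t in np.linspace(t_start, t_end, n_samples):
        E = integrate_events(events, blurry.shape, t_ref, t)
        denom += np.exp(c * E)       # brightness ratio between time t and t_ref
    denom /= n_samples               # exposure-averaged denominator
    latent = blurry.astype(np.float64) / np.maximum(denom, 1e-6)
    return np.clip(latent, 0.0, 255.0).astype(np.uint8)
```

In a pipeline of this shape, the deblurred frame would then be passed to a standard detector (e.g., YOLO) in place of the raw blurry frame; the division-based reconstruction keeps the per-pixel contrast of the original exposure rather than re-synthesizing intensities.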