The initial steps of many computer vision algorithms are local feature extraction and matching. However, when recognizing objects in images with complex backgrounds, this approach has a weakness: keypoints may be detected not only on the object of interest but also in the background, which leads to redundant computation and can cause mismatches. In this paper, we propose a keypoint filtering method for the classification and localization of ID documents in the wild. A lightweight deep learning model classifies keypoints into "document" and "background" classes, after which the background keypoints are discarded. Experimental results show that adding the proposed filtering step gives an average speedup of 3.14% on the entire MIDV-500 dataset and 14.77% on MIDV-2020, while the speedup on target images with complex backgrounds reaches 81%.
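To make the filtering step concrete, below is a minimal Python sketch of the idea: score a patch around each detected keypoint and keep only those predicted as "document". This is an illustration, not the authors' implementation: `score_keypoints` is a hypothetical placeholder for the lightweight model, and the patch size, score threshold, and file path are assumed example values.

```python
import cv2
import numpy as np

def score_keypoints(patches: np.ndarray) -> np.ndarray:
    """Placeholder for a lightweight model returning P(document) per patch.
    A real classifier would run here; random scores keep the sketch runnable."""
    return np.random.rand(len(patches))

def filter_keypoints(image, keypoints, patch_size=32, threshold=0.5):
    """Keep only keypoints whose surrounding patch scores as 'document'."""
    half = patch_size // 2
    # Reflect-pad so patches near the border have the full size.
    padded = cv2.copyMakeBorder(image, half, half, half, half, cv2.BORDER_REFLECT)
    patches = []
    for kp in keypoints:
        x = int(round(kp.pt[0])) + half
        y = int(round(kp.pt[1])) + half
        patches.append(padded[y - half:y + half, x - half:x + half])
    scores = score_keypoints(np.asarray(patches))
    return [kp for kp, s in zip(keypoints, scores) if s >= threshold]

# "frame.png" is a placeholder input path.
image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create()
keypoints = orb.detect(image, None)
document_keypoints = filter_keypoints(image, keypoints, threshold=0.5)
# Descriptors are computed and matched only for the surviving keypoints,
# which is where the reported speedup would come from.
keypoints, descriptors = orb.compute(image, document_keypoints)
```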
U-Net-like architectures are widely used for document image binarization. However, despite their good binarization quality, they have high computational complexity, which greatly limits their use on mobile and embedded devices. The performance bottleneck of U-Net architectures lies in the first encoder layers and the last decoder layers, which operate on high-resolution input and contain the largest number of operations. Motivated by this, we propose a new Threshold U-Net model: instead of predicting the final binarized image, Threshold U-Net predicts a low-resolution adaptive threshold map, which is then used to binarize the input image. The proposed architecture naturally combines the idea of classical algorithms that compute a binarization threshold for each image region with a deep learning model that has a large receptive field and contextual understanding. On the DIBCO-2017 dataset, Threshold U-Net demonstrates binarization quality on historical documents comparable to U-Net. At the same time, depending on the resolution of the threshold map, Threshold U-Net is up to 2 times faster, requires up to 26% less RAM, and has up to 10% fewer parameters.
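The sketch below illustrates how a predicted low-resolution threshold map could be applied: upsample the map to the input resolution and binarize by pixelwise comparison. The `threshold_unet` function here is a stand-in (a local-mean heuristic in the spirit of classical adaptive thresholding), not the paper's network, and the downscale factor of 8 is an assumed example.

```python
import cv2
import numpy as np

def threshold_unet(image: np.ndarray, scale: int = 8) -> np.ndarray:
    """Placeholder for the model: returns a (H/scale, W/scale) threshold map.
    Faked here with a smoothed local mean to keep the sketch runnable."""
    h, w = image.shape
    small = cv2.resize(image, (w // scale, h // scale),
                       interpolation=cv2.INTER_AREA)
    return cv2.GaussianBlur(small.astype(np.float32), (5, 5), 0)

def binarize_with_threshold_map(image: np.ndarray, t_map: np.ndarray) -> np.ndarray:
    """Upsample the threshold map to input resolution and compare pixelwise."""
    h, w = image.shape
    full = cv2.resize(t_map, (w, h), interpolation=cv2.INTER_LINEAR)
    return (image.astype(np.float32) > full).astype(np.uint8) * 255

# "page.png" is a placeholder input path.
image = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
t_map = threshold_unet(image, scale=8)
binary = binarize_with_threshold_map(image, t_map)
cv2.imwrite("page_bin.png", binary)
```

Because the network only has to produce the small map, the expensive high-resolution encoder and decoder layers can be avoided; the final comparison is a cheap elementwise operation.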