We propose an architecture that automatically detects widgets in mobile screenshots using visual cues alone. Although traditional object detection methods perform well on common objects in natural scene images, they struggle with screenshot images that have complex widget layouts. We therefore propose the region-based Widget Detection Network (WDN), which exploits the regularities of screenshot images as regularizations. First, we design a scale-aware attention structure that makes the backbone network sensitive to widget scales, so that the salient features of regions of interest can be captured. Second, we propose a horizontal region generation strategy that fully exploits the aligned arrangement of widgets by generating all region candidates in a horizontal line at once. Finally, we employ a variant of online hard example mining to alleviate the problem of imbalanced samples, which explicitly restricts the ratio of foreground to background to achieve better balance. We conduct experiments on a proposed benchmark dataset. The quantitative results and qualitative analysis on this benchmark show that WDN achieves impressive performance, outperforming common object detection methods on the widget detection task.
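The hard-example-mining variant described above can be sketched as follows. This is a minimal illustration of the general idea, not the paper's implementation: given per-candidate losses and foreground labels, it keeps all (scarce) foreground candidates and caps the number of background candidates at a fixed ratio, selecting the hardest ones. The function name, the `bg_fg_ratio=3` default, and the NumPy formulation are all illustrative assumptions.

```python
import numpy as np

def ohem_with_ratio(losses, is_foreground, bg_fg_ratio=3, min_fg=1):
    """Select training samples while capping the background:foreground ratio.

    losses        : per-candidate loss values (higher = harder example).
    is_foreground : boolean mask marking foreground candidates.
    bg_fg_ratio   : maximum number of background samples per foreground sample
                    (illustrative default; the actual ratio is a design choice).
    Returns the sorted indices of the candidates kept for the training step.
    """
    losses = np.asarray(losses, dtype=float)
    is_foreground = np.asarray(is_foreground, dtype=bool)

    fg_idx = np.flatnonzero(is_foreground)
    bg_idx = np.flatnonzero(~is_foreground)

    # Keep every foreground candidate; they are rare relative to background.
    n_fg = max(len(fg_idx), min_fg)
    # Cap the background count at the chosen ratio, taking the hardest ones
    # (largest loss) so the model still sees informative negatives.
    n_bg = min(len(bg_idx), bg_fg_ratio * n_fg)
    hardest_bg = bg_idx[np.argsort(losses[bg_idx])[::-1][:n_bg]]

    return np.sort(np.concatenate([fg_idx, hardest_bg]))
```

For example, with one foreground candidate and a ratio of 2, the selector keeps that candidate plus the two highest-loss background candidates, discarding the easy negatives that would otherwise dominate the gradient.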