Bottom-up attention based on C1 features of HMAX model
21 November 2012
This paper presents a novel bottom-up attention model based only on the C1 features of the HMAX model, which is efficient and consistent. Although similar orientation-based features are commonly used by most bottom-up attention models, we adopt different activation and combination approaches to obtain the final attention map. We compare two operations for activation and combination, MAX and SUM, and argue that they are often complementary. We then argue that for a general object recognition system the traditional evaluation rule, agreement with human fixations, is inappropriate. We suggest new evaluation rules and approaches for bottom-up attention models, which focus on the information unloss rate and useful rate relative to a labeled attention area. We formally define the unloss rate and useful rate, and present an efficient algorithm to compute them from the labeled and output attention areas. We also discard the center-surround assumption commonly adopted by bottom-up attention models. Compared with GBVS under the suggested evaluation rules and approaches on complex street scenes, our model shows excellent performance.
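The abstract contrasts MAX and SUM as combination operations over orientation feature maps, and proposes overlap-based metrics against a labeled attention area. The exact definitions are not given in the abstract, so the sketch below is a minimal illustration under assumptions: `combine` pools a stack of feature maps with either operation, and `unloss_rate` / `useful_rate` are assumed to be recall-like and precision-like overlap ratios between the output and labeled attention masks. All function names and definitions here are hypothetical, not taken from the paper.

```python
import numpy as np

def combine(maps, mode="max"):
    """Combine orientation feature maps with MAX or SUM pooling.

    `maps` is a list of equally shaped 2-D arrays (e.g. C1-like
    orientation responses); the two modes mirror the MAX/SUM
    operations contrasted in the abstract.
    """
    stack = np.stack(maps)
    return stack.max(axis=0) if mode == "max" else stack.sum(axis=0)

def unloss_rate(output_mask, labeled_mask):
    """Assumed definition: fraction of the labeled attention area
    that is retained (not lost) in the output attention area."""
    inter = np.logical_and(output_mask, labeled_mask).sum()
    return inter / labeled_mask.sum()

def useful_rate(output_mask, labeled_mask):
    """Assumed definition: fraction of the output attention area
    that actually overlaps the labeled attention area."""
    inter = np.logical_and(output_mask, labeled_mask).sum()
    return inter / output_mask.sum()
```

Under these assumed definitions the two rates trade off in the usual recall/precision way: enlarging the output attention area can only raise the unloss rate while typically lowering the useful rate.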
© (2012) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Huapeng Yu, Zhiyong Xu, Chengyu Fu, Yafei Wang, "Bottom-up attention based on C1 features of HMAX model", Proc. SPIE 8558, Optoelectronic Imaging and Multimedia Technology II, 85580W (21 November 2012); https://doi.org/10.1117/12.999263


