This paper presents a novel bottom-up attention model based only on the C1 features of the HMAX model, which is efficient and consistent. Although similar orientation-based features are used by most bottom-up attention models, we adopt different activation and combination approaches to obtain the final saliency map. We compare two operations for activation and combination, MAX and SUM, and argue that they are often complementary. We then argue that for a general object-recognition system the traditional evaluation rule, agreement with human fixations, is inappropriate. Instead, we propose new evaluation rules and approaches for bottom-up attention models that focus on the information unloss rate and the useful rate relative to a labeled attention area. We formally define the unloss rate and the useful rate, and present an efficient algorithm to compute them from the labeled and output attention areas. We also discard the center-surround assumption commonly adopted by bottom-up attention models. Compared with GBVS under the proposed evaluation rules and approaches on complex street scenes, our model shows excellent performance.
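The MAX and SUM combination operations mentioned above can be illustrated with a minimal NumPy sketch. This is not the paper's exact pipeline: the random orientation maps and the min-max normalization are placeholder assumptions used only to show how the two operations differ.

```python
import numpy as np

def combine_feature_maps(maps, op="sum"):
    """Combine a list of orientation feature maps into one saliency map.

    op="sum": pixel-wise sum, favors locations supported by many orientations.
    op="max": pixel-wise max, keeps only the strongest orientation response.
    """
    stack = np.stack(maps, axis=0)
    if op == "sum":
        combined = stack.sum(axis=0)
    elif op == "max":
        combined = stack.max(axis=0)
    else:
        raise ValueError("op must be 'sum' or 'max'")
    # Normalize to [0, 1] so the two operations are comparable.
    rng = combined.max() - combined.min()
    return (combined - combined.min()) / rng if rng > 0 else combined

# Toy example: four orientation maps (e.g., C1 responses at 0/45/90/135 deg).
gen = np.random.default_rng(0)
maps = [gen.random((8, 8)) for _ in range(4)]
s_sum = combine_feature_maps(maps, "sum")
s_max = combine_feature_maps(maps, "max")
```

The sketch makes the complementarity plausible: SUM rewards locations where many orientations respond weakly, while MAX rewards a single strong response, so the two maps can highlight different regions.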
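Since the formal definitions of the unloss rate and the useful rate appear later in the paper, the sketch below assumes one plausible reading on binary attention masks: the unloss rate as the fraction of the labeled attention area covered by the output area (recall-like), and the useful rate as the fraction of the output area that falls inside the labeled area (precision-like). Both definitions are assumptions for illustration only.

```python
import numpy as np

def unloss_and_useful_rate(labeled, output):
    """Compute assumed unloss and useful rates from binary attention masks.

    labeled: boolean mask of the labeled (ground-truth) attention area.
    output:  boolean mask of the model's output attention area.

    Assumed definitions (recall-like / precision-like):
      unloss rate = |labeled & output| / |labeled|
      useful rate = |labeled & output| / |output|
    """
    labeled = labeled.astype(bool)
    output = output.astype(bool)
    overlap = np.logical_and(labeled, output).sum()
    unloss = overlap / labeled.sum() if labeled.any() else 0.0
    useful = overlap / output.sum() if output.any() else 0.0
    return float(unloss), float(useful)

# Toy masks: a 4x4 labeled block and a 4x4 output block shifted by 2 columns.
labeled = np.zeros((8, 8), dtype=bool); labeled[0:4, 0:4] = True
output = np.zeros((8, 8), dtype=bool); output[0:4, 2:6] = True
u, v = unloss_and_useful_rate(labeled, output)  # overlap = 8 of 16 pixels each
```

Under these assumed definitions, the toy masks give an unloss rate of 0.5 and a useful rate of 0.5; both can be computed in a single pass over the masks, consistent with the efficiency claim in the abstract.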