Surveillance applications require either transmitting video streams to a high-performance remote server or performing on-site processing. In both cases, image analysis techniques support situational-awareness tasks such as detection, classification, and tracking. Nowadays, neural networks are widely used as data-driven machine learning methods, but the hardware required to deploy state-of-the-art neural network models implies massive and invasive installations. Embedded devices, running with more modest settings, are a possible alternative, so it is important to understand which trade-off is better for this surveillance application. When a new camera angle leads to an unknown scenario, detection and classification deteriorate in terms of precision and recall. To address this problem, we analyze and predict the angles in different scenarios using homography techniques. We propose including additional data sets and labeled images from different angles; however, more images do not always mean better detection. We build a model via transfer learning, using images from scenarios whose angles are similar to the real detection environments. We compare the performance of different implementations of an automated counter on an embedded device with a GPU against a server performing the same task. Results show that an edge-computing implementation is feasible with good performance. Possible solutions involve regulating the bit rate and skipping a fixed number of frames (skip-n-frames) to raise the system's frames per second. The results show that, using both an embedded device and a server, it is possible to perform real-time object detection and counting, with precision values of around 92% on the embedded device and 93% on the server.
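The skip-n-frames strategy mentioned above can be sketched as follows. This is a minimal illustration under our own assumptions (the function name and interface are hypothetical, not the paper's implementation): one frame is passed to the detector, then the next n frames are dropped, trading temporal resolution for a higher effective processing rate.

```python
def skip_n_frames(frames, n):
    """Yield one frame out of every (n + 1), dropping the rest.

    Reducing the number of frames fed to the detector raises the
    effective frames-per-second the pipeline can sustain, at the
    cost of coarser temporal sampling.
    """
    for i, frame in enumerate(frames):
        if i % (n + 1) == 0:
            yield frame

# Example with synthetic frame indices: n = 2 keeps every third frame.
kept = list(skip_n_frames(range(9), 2))
# kept == [0, 3, 6]
```

In a real pipeline the input would be a live decoded video stream and the yielded frames would be handed to the object detector; the sampling logic itself is unchanged.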