The paper describes a vision for dependable application of machine learning-based inferencing on resource-constrained edge devices. The high computational demands of sophisticated deep learning techniques impose a prohibitive burden, in terms of both energy consumption and sustainable processing throughput, on such resource-constrained edge devices (e.g., audio or video sensors). To overcome these limitations, we propose a “cognitive edge” paradigm, whereby (a) an edge device first autonomously uses statistical analysis to identify potential collaborative IoT nodes, and (b) the IoT nodes then share intermediate state in real time to improve their individual execution of machine intelligence tasks. We provide an example of such collaborative inferencing for an exemplar network of video sensors, showing how such collaboration can significantly improve accuracy, reduce latency, and decrease communication bandwidth compared to non-collaborative baselines. We also identify various challenges in realizing such a cognitive edge, including the need to ensure that the inferencing tasks do not degrade catastrophically in the presence of malfunctioning peer devices. We then introduce the soon-to-be-deployed Cognitive IoT testbed at SMU, explaining the features that enable empirical testing of novel edge-based ML algorithms.
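The first step of the proposed paradigm, a node statistically identifying candidate collaborators, could take many forms; the abstract does not specify the exact test. As a minimal illustrative sketch (all names, thresholds, and data here are hypothetical, not from the paper), a node might correlate its own recent observation stream against those advertised by nearby devices and keep only strongly correlated peers:

```python
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length observation streams."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def select_peers(own_stream, candidate_streams, min_corr=0.8):
    """Return IDs of candidates whose observations correlate strongly
    with this node's, marking them as potential collaborators.
    (Hypothetical peer-selection rule for illustration only.)"""
    return [node_id for node_id, stream in candidate_streams.items()
            if pearson(own_stream, stream) >= min_corr]

# Hypothetical object-count time series from nearby video sensors
own = [3, 5, 8, 12, 9, 4]
candidates = {
    "cam_B": [4, 6, 9, 11, 10, 5],   # overlapping field of view
    "cam_C": [10, 2, 7, 1, 8, 3],    # unrelated scene
}
print(select_peers(own, candidates))  # → ['cam_B']
```

Once peers are selected this way, step (b) of the paradigm, sharing intermediate state such as detection counts or feature maps, would operate only over this filtered set, bounding the communication cost.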
Singapore’s “smart city” agenda is driving the government to provide public access to a broader variety of urban informatics sources, such as images from traffic cameras and information about buses servicing different bus stops. Such informatics data serves as a probe of evolving conditions at different spatiotemporal scales. This paper explores how such multi-modal informatics data can be used to establish the normal operating conditions at different city locations, and how appropriate outlier-based analysis techniques can then identify anomalous events at these selected locations. We introduce the overall architecture of sociophysical analytics, where such infrastructural data sources can be combined with social media analytics to not only detect such anomalous events, but also localize and explain them. Using the annual Formula-1 race as our candidate event, we demonstrate a key difference between the discriminative capabilities of different sensing modes: while social media streams provide discriminative signals during or prior to the occurrence of such an event, urban informatics data can often reveal patterns with higher persistence, extending both before and after the event. In particular, we demonstrate how combining data from (i) publicly available Tweets, (ii) crowd levels aboard buses, and (iii) traffic cameras can help identify the Formula-1-driven anomalies across different spatiotemporal boundaries.
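The baseline-then-outlier pipeline described above can be sketched concisely. The abstract does not name the specific outlier test used, so the z-score rule, the threshold, and the crowding figures below are illustrative assumptions only:

```python
import statistics

def is_anomalous(history, current, z_threshold=3.0):
    """Flag the current reading as anomalous if it deviates from the
    historical baseline by more than z_threshold standard deviations.
    (Illustrative z-score test; the paper's actual technique may differ.)"""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > z_threshold

# Hypothetical hourly bus-crowding counts at one stop (baseline week)
baseline = [40, 42, 38, 41, 39, 43, 40, 41]
print(is_anomalous(baseline, 95))  # race-weekend surge → True
print(is_anomalous(baseline, 42))  # typical load → False
```

In the multi-modal setting the paper describes, a detector like this would run per location and per sensing mode (Tweet volume, bus crowding, camera-derived traffic density), with the cross-modal fusion serving to localize and explain the flagged anomalies.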