This paper describes the design and implementation of Cascades, a scalable, flexible, and composable middleware platform for multi-modal sensor networking applications. The middleware lets application writers use pre-packaged routines as well as incorporate their own application-tailored code when necessary. As sensor systems become more diverse in both hardware and sensing modalities, such systems support will become critical. Furthermore, the systems software must be not only flexible but also efficient and high-performing. The experiments in this paper compare and contrast several possible implementations based upon testbed measurements on embedded devices, and show that such a system can indeed be constructed.
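To make the composability idea concrete, the following is a minimal sketch, not the Cascades API, of how pre-packaged routines and application-tailored code might share a common stage interface and be chained into a processing pipeline. All class and parameter names (Stage, Average, Threshold, Pipeline) are illustrative assumptions.

```python
# Minimal sketch (hypothetical, not the Cascades API): composing pre-packaged
# and application-supplied stages into one sensor-data pipeline.

class Stage:
    """Common interface shared by pre-packaged routines and user code."""
    def process(self, sample):
        raise NotImplementedError

class Average(Stage):
    """Example pre-packaged routine: running average over a fixed window."""
    def __init__(self, window=4):
        self.window, self.buf = window, []
    def process(self, sample):
        self.buf = (self.buf + [sample])[-self.window:]
        return sum(self.buf) / len(self.buf)

class Threshold(Stage):
    """Example application-tailored stage: forward only readings above a cutoff."""
    def __init__(self, cutoff):
        self.cutoff = cutoff
    def process(self, sample):
        return sample if sample >= self.cutoff else None

class Pipeline:
    """Cascade of stages; a stage returning None drops the sample."""
    def __init__(self, stages):
        self.stages = stages
    def push(self, sample):
        for stage in self.stages:
            sample = stage.process(sample)
            if sample is None:
                return None
        return sample

# Usage: smooth temperature readings, then report only values above 30.0.
pipe = Pipeline([Average(window=4), Threshold(cutoff=30.0)])
for reading in [28.0, 29.5, 31.0, 33.5]:
    out = pipe.push(reading)
    if out is not None:
        print(out)
```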
This paper presents the architectural trade-offs in supporting fine-grained multi-resolution video over a wide range of resolutions. In the future, video streaming systems will have to support video adaptation over an extremely large range of display requirements (e.g., 90x60 to 1920x1080). While several techniques have been proposed for multi-resolution video adaptation, also known as spatial scalability, they have focused mainly on a limited range of spatial resolutions. In this paper, we examine the ability of current techniques to support wide-range spatial scalability. Based upon experiments with real video, we propose an architecture that can support wide-range adaptation more effectively. Our results indicate that multiple encodings, each with limited spatial adaptation, provide the best trade-off between efficient coding and the ability to adapt the stream to various resolutions.
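A small sketch may help illustrate the proposed organization: several base encodings, each covering only a limited spatial-adaptation range, with the client (or server) selecting the encoding that reaches a requested resolution with the least downscaling. The encoding ladder and the maximum per-encoding downscale factor below are assumptions for illustration, not values from the paper.

```python
# Minimal sketch (assumed ladder, not from the paper): choose the base encoding
# whose limited spatial-adaptation range covers the requested resolution.

# Hypothetical ladder of independently coded base resolutions (width, height).
ENCODINGS = [
    (1920, 1080),
    (960, 540),
    (480, 270),
    (240, 135),
]
MAX_DOWNSCALE = 2.0  # assumed limit on spatial adaptation within one encoding

def pick_encoding(target_w, target_h):
    """Return the base encoding that reaches the target with the least downscaling."""
    candidates = []
    for w, h in ENCODINGS:
        scale = max(w / target_w, h / target_h)
        if 1.0 <= scale <= MAX_DOWNSCALE:       # target lies within this encoding's range
            candidates.append(((w, h), scale))
    if not candidates:
        # Fall back to the nearest base resolution if no range covers the target.
        return min(ENCODINGS, key=lambda r: abs(r[0] - target_w))
    # Least downscaling means the fewest wasted bits and the least decoding work.
    return min(candidates, key=lambda c: c[1])[0]

print(pick_encoding(1280, 720))   # served from the 1920x1080 encoding
print(pick_encoding(160, 90))     # served from the 240x135 encoding
```

The design choice this illustrates is the trade-off named in the abstract: a single encoding with very wide spatial scalability pays a coding-efficiency penalty at resolutions far from its native size, while a handful of encodings with narrow adaptation ranges keeps each operating point close to an efficiently coded base.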