Efficient Multimodal Sensing Systems

Adaptive audio-visual sensor fusion for energy-efficient embedded sensing

This project develops an energy-efficient multimodal sensing system that adaptively fuses audio-visual inputs to optimize data acquisition and reduce computational overhead. The key idea is to dynamically suppress redundant sensor streams instead of processing every input continuously.


Some highlights of this project are:

  • Designed an adaptive sensor fusion framework that dynamically activates or suppresses audio-visual streams based on contextual relevance.
  • Reduced redundant computation and improved energy efficiency by filtering out low-value data points.
  • Evaluated the framework on embedded platforms to quantify reductions in inference latency and energy consumption for continuous sensing workloads.
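The gating idea behind the highlights above can be sketched as a simple policy: a cheap, always-on audio check decides when to wake the expensive visual pipeline, with a short hysteresis window to avoid rapid toggling. This is a minimal illustrative sketch, not the project's actual framework; the class name, threshold, and hold-window parameters are all assumptions.

```python
def audio_energy(frame):
    """Short-time energy of an audio frame (the cheap, always-on check)."""
    return sum(x * x for x in frame) / len(frame)

class AdaptiveGate:
    """Illustrative gate: run the expensive visual stream only while
    cheap audio evidence suggests the scene is worth analyzing.

    The threshold and hold_frames values are hypothetical defaults,
    not taken from the project.
    """

    def __init__(self, threshold=0.01, hold_frames=5):
        self.threshold = threshold      # audio energy needed to wake vision
        self.hold_frames = hold_frames  # keep vision on briefly after activity
        self._hold = 0                  # remaining frames of hysteresis

    def should_run_vision(self, audio_frame):
        if audio_energy(audio_frame) >= self.threshold:
            self._hold = self.hold_frames  # activity: refresh the hold window
        elif self._hold > 0:
            self._hold -= 1                # silence: let the window decay
        return self._hold > 0

# Example: vision stays off during silence, wakes on activity,
# and shuts down again after the hold window expires.
gate = AdaptiveGate(threshold=0.01, hold_frames=2)
quiet = [0.0] * 160
loud = [1.0] * 160
decisions = [gate.should_run_vision(f) for f in (quiet, loud, quiet, quiet)]
```

Skipping the visual pipeline on frames where the gate returns False is what removes the redundant computation; only the lightweight energy check runs continuously.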

Advisor: Prof. Prashant Shenoy, Laboratory for Advanced System Software, UMass Amherst.


Figure: Internet of Things, Public domain, via Wikimedia Commons.