Markov-Optimal Sensing Policy For User State Estimation In Mobile Devices

IPSN '10: The 9th International Conference on Information Processing in Sensor Networks, Stockholm, Sweden, April 2010

Abstract
Mobile-device-based human-centric sensing and user state recognition provide rich contextual information for various mobile applications and services. However, continuously capturing this contextual information consumes a significant amount of energy and quickly drains the mobile device battery. In this paper, we propose a computationally efficient algorithm to obtain the optimal sensor sampling policy under the assumption that user state transitions are Markovian. This Markov-optimal policy minimizes the user state estimation error while satisfying a given energy consumption budget. We first compare the Markov-optimal policy with uniform periodic sensing for Markovian user state transitions and show that the improvement obtained depends on the underlying state transition probabilities. We then apply the algorithm to two different sets of real experimental traces, pertaining to user motion changes and inter-user contacts, and show that the Markov-optimal policy yields an approximately 20% improvement over the naive uniform sensing policy.
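To make the trade-off in the abstract concrete, the following sketch computes the expected state-estimation error of the uniform periodic sensing baseline on a hypothetical two-state Markov chain (e.g., "still" vs. "moving"). The transition matrix and the error model (between samples, the estimator holds the last observed state) are illustrative assumptions, not taken from the paper; the paper's Markov-optimal policy itself is not reproduced here.

```python
import numpy as np

# Hypothetical two-state user-state Markov chain ("still" vs "moving").
# These transition probabilities are made up for illustration only.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def uniform_policy_error(P, period, start_state=0):
    """Expected fraction of misestimated time steps when sampling every
    `period` steps and holding the last observed state in between.

    At offset t after a sample taken in `start_state`, the hold-last
    estimate is wrong with probability 1 - (P^t)[start_state, start_state].
    """
    errs = []
    Pt = np.eye(len(P))  # P^0
    for t in range(period):
        errs.append(1.0 - Pt[start_state, start_state])
        Pt = Pt @ P  # advance to P^(t+1)
    return float(np.mean(errs))

# Longer sampling periods save energy but raise the estimation error,
# which is exactly the error/energy trade-off the paper optimizes.
for k in (1, 2, 5):
    print(k, round(uniform_policy_error(P, k), 3))
```

A Markov-optimal policy improves on this baseline by scheduling the next sample based on the last observed state and the transition probabilities, rather than using one fixed period.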
Keywords
Energy efficiency, Mobile sensing, Optimal sampling policy, Markovian user state