14.1 A 126.1mW real-time natural UI/UX processor with embedded deep-learning core for low-power smart glasses.
ISSCC (2016)
Abstract
Wearable head-mounted display (HMD) smart devices are emerging as a smartphone substitute due to their ease of use and suitability for advanced applications, such as gaming and augmented reality (AR) [1-2]. Most current HMD systems suffer from: 1) a lack of rich user interfaces, 2) short battery life, and 3) heavy weight. Although current HMDs (e.g., Google Glass) use a touch panel and voice commands as the interface, such interfaces are merely smartphone extensions and are not optimized for HMDs. Recently, gaze was proposed as an HMD user interface [2], but gaze cannot realize a natural user interface and experience (UI/UX), due to its limited interactivity and lengthy gaze-calibration time (several minutes). In this paper, as shown in Fig. 14.1.1, gesture and speech recognition are proposed as a natural UI/UX, based on two pre-processing pipelines: 1) speech pre-processing, consisting of 2-channel independent component analysis (ICA), speech selection, and noise cancellation; and 2) gesture pre-processing, consisting of depth/color-map generation, hand detection, hand segmentation, and noise cancellation. This paper presents a low-power natural UI/UX processor with an embedded deep-learning core (NINEX) to provide wearable AR for HMD users without calibration. Moreover, it provides higher recognition accuracy than previous work [3].
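The 2-channel ICA step in the speech pipeline separates the wearer's voice from background interference captured by two microphones. The paper implements this in dedicated hardware; purely as a conceptual illustration, the sketch below shows one classical software formulation of 2-channel ICA (centering, whitening via a closed-form 2x2 eigendecomposition, then a grid search for the rotation that maximizes non-Gaussianity as measured by kurtosis). All function names and algorithmic details here are assumptions for illustration, not the chip's actual design.

```python
import math

def separate_two_channels(x1, x2):
    """Minimal 2-channel ICA sketch (NOT the NINEX hardware algorithm):
    center, whiten, then rotate to maximize total |excess kurtosis|."""
    n = len(x1)
    # Center each channel.
    m1, m2 = sum(x1) / n, sum(x2) / n
    c1 = [v - m1 for v in x1]
    c2 = [v - m2 for v in x2]
    # 2x2 covariance matrix entries.
    c11 = sum(v * v for v in c1) / n
    c22 = sum(v * v for v in c2) / n
    c12 = sum(a * b for a, b in zip(c1, c2)) / n
    # Closed-form eigendecomposition of the symmetric 2x2 covariance.
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    if abs(c12) > 1e-12:
        v1, v2 = (l1 - c22, c12), (l2 - c22, c12)
    else:
        v1, v2 = (1.0, 0.0), (0.0, 1.0)
    def _unit(v):
        s = math.hypot(v[0], v[1])
        return (v[0] / s, v[1] / s)
    v1, v2 = _unit(v1), _unit(v2)
    # Whitening: z = diag(1/sqrt(l)) * V^T * c, so cov(z) = I.
    s1 = 1.0 / math.sqrt(max(l1, 1e-12))
    s2 = 1.0 / math.sqrt(max(l2, 1e-12))
    z1 = [s1 * (v1[0] * a + v1[1] * b) for a, b in zip(c1, c2)]
    z2 = [s2 * (v2[0] * a + v2[1] * b) for a, b in zip(c1, c2)]
    # Excess kurtosis of a zero-mean signal.
    def _kurt(y):
        m2 = sum(v * v for v in y) / n
        m4 = sum(v ** 4 for v in y) / n
        return m4 / (m2 * m2) - 3.0
    # Grid-search the rotation angle in [0, pi/2) at 0.5-degree steps.
    best_score, best_th = -1.0, 0.0
    for k in range(180):
        th = k * math.pi / 360.0
        ct, st = math.cos(th), math.sin(th)
        y1 = [ct * a + st * b for a, b in zip(z1, z2)]
        y2 = [-st * a + ct * b for a, b in zip(z1, z2)]
        score = abs(_kurt(y1)) + abs(_kurt(y2))
        if score > best_score:
            best_score, best_th = score, th
    ct, st = math.cos(best_th), math.sin(best_th)
    y1 = [ct * a + st * b for a, b in zip(z1, z2)]
    y2 = [-st * a + ct * b for a, b in zip(z1, z2)]
    return y1, y2  # separated sources, up to permutation and sign
```

After separation, a speech-selection stage (as the paper's pipeline describes) would pick the channel corresponding to the wearer's voice before noise cancellation and recognition.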
Keywords
CMOS integrated circuits, SRAM chips, helmet-mounted displays, pattern recognition, user interfaces, 8-metal CMOS technology, SRAM, embedded deep-learning core, equivalent gates, frequency 200 MHz, gesture, head-mounted display users, low-power natural UI/UX processor, low-power smart glasses, power 126.1 mW, power efficiency, real-time natural UI/UX processor, size 65 nm, speech recognition, voltage 1.2 V, wearable augmented reality