
A 1 TOPS/W Analog Deep Machine-Learning Engine with Floating-Gate Storage in 0.13 μm CMOS

IEEE Journal of Solid-State Circuits (2015)

Abstract
An analog implementation of a deep machine-learning system for efficient feature extraction is presented in this work. It features online unsupervised trainability and non-volatile floating-gate analog storage. It utilizes a massively parallel reconfigurable current-mode analog architecture to realize efficient computation, and leverages algorithm-level feedback to provide robustness to circuit imperfections in analog signal processing. A 3-layer, 7-node analog deep machine-learning engine was fabricated in a 0.13 μm standard CMOS process, occupying 0.36 mm² of active area. At a processing speed of 8300 input vectors per second, it consumes 11.4 μW from a 3 V supply, achieving a peak energy efficiency of 1×10¹² operations per second per watt. Measurements demonstrate real-time cluster analysis and feature extraction for pattern recognition with 8-fold dimension reduction, at an accuracy comparable to a floating-point software simulation baseline.
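The headline figures can be cross-checked with a quick back-of-the-envelope calculation. This is only a sketch: it assumes the 1 TOPS/W peak efficiency is reached at the stated 11.4 μW, 8300-vectors-per-second operating point, which the abstract does not explicitly claim.

```python
# Back-of-the-envelope check of the reported figures (assumption: peak
# efficiency holds at the stated operating point).
power_w = 11.4e-6          # 11.4 uW supply power
vectors_per_s = 8300       # input-vector throughput
peak_eff_ops_per_j = 1e12  # 1 TOPS/W = 1e12 operations per joule

ops_per_s = peak_eff_ops_per_j * power_w        # implied throughput in ops/s
ops_per_vector = ops_per_s / vectors_per_s      # implied ops per input vector
energy_per_vector_j = power_w / vectors_per_s   # energy spent per input vector

print(f"{ops_per_s:.3g} ops/s, "
      f"{ops_per_vector:.0f} ops/vector, "
      f"{energy_per_vector_j * 1e9:.2f} nJ/vector")
```

Under that assumption the chip would perform on the order of 10⁷ operations per second, roughly 1.4k operations per input vector, at about 1.4 nJ of energy per vector.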
Key words
Analog signal processing, current-mode arithmetic, deep machine learning, floating gate, neuromorphic engineering, translinear circuits