
Programmable Dictionary Code Compression for Instruction Stream Energy Efficiency.

2020 IEEE 38th International Conference on Computer Design (ICCD 2020)

Tampere University

Cited 3 | Views 12
Abstract
We propose a novel instruction compression scheme based on fine-grained programmable dictionaries. At its core is a compile-time region-based control-flow analysis that selectively updates the dictionary contents at runtime, minimizing the update overhead while maximizing the beneficial use of the dictionary slots. Unlike previous work, our approach selects the regions of instructions to compress at compile time and changes the dictionary contents in a fine-grained manner at runtime, with the primary goal of reducing the energy footprint of the processor instruction stream. The proposed instruction compression scheme is evaluated using RISC-V as an example instruction set architecture, and the energy savings are compared against an instruction scratchpad and a filter cache as the next-level storage. The method reduces instruction stream energy consumption by up to 21% and by 5.5% on average compared to the RISC-V C extension, with a 1% runtime overhead and a negligible hardware overhead. The previous state-of-the-art programmable dictionary compression method provides a slightly better compression ratio, but induces about 30% runtime overhead.
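The abstract describes the scheme only at a high level. As a rough illustration of the underlying dictionary-compression principle (not the paper's actual algorithm), the Python sketch below fills a fixed-size dictionary with the most frequent instruction words and replaces each occurrence with a short index; all parameters and names here (DICT_SLOTS, INDEX_BITS, the 1-bit hit/miss tag) are assumptions for illustration.

```python
# A minimal sketch of generic dictionary-based code compression, assuming
# a 256-entry dictionary, 8-bit index codewords, and a 1-bit hit/miss tag.
# All names and parameters here are illustrative, not taken from the paper.
from collections import Counter

DICT_SLOTS = 256   # assumed number of dictionary entries
INSN_BITS = 32     # width of an uncompressed RISC-V instruction word
INDEX_BITS = 8     # codeword width for a dictionary hit (log2(DICT_SLOTS))
TAG_BITS = 1       # assumed hit/miss flag prepended to every codeword

def build_dictionary(insns):
    """Fill the dictionary with the most frequent instruction words."""
    return [word for word, _ in Counter(insns).most_common(DICT_SLOTS)]

def compress(insns, dictionary):
    """Encode each instruction as (hit, payload): a slot index or a raw word."""
    index = {word: slot for slot, word in enumerate(dictionary)}
    return [(word in index, index.get(word, word)) for word in insns]

def compressed_bits(encoded):
    """Size of the encoded stream: short index on a hit, raw word on a miss."""
    return sum(TAG_BITS + (INDEX_BITS if hit else INSN_BITS)
               for hit, _ in encoded)
```

Under these assumptions, a stream in which 90% of fetches hit the dictionary costs roughly 0.9 × 9 + 0.1 × 33 ≈ 11.4 bits per instruction instead of 32, which is where the savings in the fetch path come from.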
Key words
code compression, energy efficiency, instruction stream, instruction fetch, energy optimization, dictionary compression
Chat Paper

Key points: This paper proposes an instruction compression scheme based on fine-grained programmable dictionaries, aiming to reduce the energy footprint of the processor instruction stream. A compile-time region-based control-flow analysis selectively updates the dictionary contents to minimize the update overhead while maximizing the beneficial use of the dictionary slots.

Methods: The scheme selects the instruction regions to compress at compile time and changes the dictionary contents in a fine-grained manner at runtime, with the primary goal of reducing the energy consumption of the processor instruction stream; a sketch of this update idea follows below.

Experiments: The method is evaluated using RISC-V as an example instruction set architecture, with energy savings compared against an instruction scratchpad and a filter cache as the next-level storage. Compared to the RISC-V C extension, it reduces instruction stream energy consumption by up to 21% and by 5.5% on average, while keeping the runtime overhead at 1% and the hardware overhead negligible. The previous state-of-the-art programmable dictionary compression method offers a slightly better compression ratio, but incurs about 30% runtime overhead.
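As a hedged illustration of the fine-grained update idea summarized above, the sketch below assumes that each compile-time region carries its own dictionary image and that only the slots whose contents differ are rewritten at a region boundary; the helper names (dictionary_updates, update_overhead_bits) and bit widths are illustrative assumptions, not the paper's encoding.

```python
# A minimal sketch of the fine-grained dictionary update idea, assuming each
# compile-time region carries its own dictionary image and only the slots
# whose contents differ are rewritten at a region boundary. The helper names
# and bit widths are illustrative assumptions, not the paper's encoding.
def dictionary_updates(prev_dict, next_dict):
    """Yield (slot, new_word) for every slot that must change."""
    for slot, (old, new) in enumerate(zip(prev_dict, next_dict)):
        if old != new:
            yield slot, new

def update_overhead_bits(region_dicts, slot_bits=8, word_bits=32):
    """Total bits spent on dictionary-update operations across regions."""
    total = 0
    for prev, nxt in zip(region_dicts, region_dicts[1:]):
        total += sum(slot_bits + word_bits
                     for _ in dictionary_updates(prev, nxt))
    return total
```

Rewriting only the differing slots, rather than reloading the whole dictionary at every boundary, is what keeps the update overhead small when consecutive regions share most of their dictionary contents.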