
TSTC: Two-Level Sparsity Tensor Core Enabling Both Algorithm Flexibility and Hardware Efficiency

2023 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)

Abstract
The tensor cores in modern GPUs deliver significant performance improvements for matrix multiplication, the primary operation in deep learning. However, existing hardware architectures struggle with the unstructured sparsity found in deep learning workloads, which forces a trade-off between algorithm flexibility and hardware efficiency. The previous tensor core architecture requires matrices to be pruned into 2:4 sparse patterns, leading to algorithm inflexibility. Customized accelerators introduce extra hardware structures (e.g., interconnection networks for dynamic data routing or buffers for avoiding data conflicts) to handle unstructured sparse matrices, leading to hardware inefficiency. To resolve this tension between algorithm flexibility and hardware efficiency, we propose the Two-level Sparsity Tensor Core (TSTC) in this paper. The key observation behind TSTC is that the unstructured sparsity which enables algorithm flexibility can be maintained at the coarse-grained level, while the structured sparsity which enables hardware efficiency can be enforced at the fine-grained level. For algorithm flexibility, we propose the Flexible Sparse Block (FSB) pattern, which divides an unstructured sparse matrix into fine-grained blocks with different structured sparsity. As a result, FSB achieves up to a 7.29x speedup compared with other formats. For hardware efficiency, we propose the Dynamic Extendible Reduction Network (DERN), which supports reductions over different structured sparsity levels by only extending the data width of the standard reduction network, without introducing extra interconnections or buffers. DERN enables TSTC to achieve 7.19x more energy savings at a similar speed. We also propose a complete flow that automatically deploys different sparse deep learning algorithms to TSTC. Extensive experiments show that TSTC achieves 1.24x–7.69x speedup and 3.68x–4.17x energy savings over the tensor core and the SOTA customized accelerator.
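To make the FSB idea from the abstract concrete, the sketch below partitions an unstructured sparse weight matrix into fine-grained 1x4 blocks and records the N:4 structured sparsity level each block satisfies. The block width, the N:M choice, and the function name fsb_partition are illustrative assumptions for this sketch, not the paper's actual format or API.

```python
import numpy as np

# Hypothetical sketch of the Flexible Sparse Block (FSB) idea: an unstructured
# sparse matrix is split into fine-grained blocks, and each block is labelled
# with the structured N:M sparsity it satisfies. Block width and all names
# here are assumptions for illustration, not the paper's actual definitions.

def fsb_partition(weight: np.ndarray, m: int = 4):
    """Partition each row into 1 x m blocks and record the number of
    nonzeros per block, i.e. the N in the block's N:M sparsity level."""
    rows, cols = weight.shape
    assert cols % m == 0, "columns must be a multiple of the block width"
    blocks = weight.reshape(rows, cols // m, m)
    # Nonzero count per block determines its structured sparsity level.
    nnz_per_block = np.count_nonzero(blocks, axis=2)
    return blocks, nnz_per_block

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((4, 16))
    w[rng.random(w.shape) < 0.6] = 0.0          # unstructured sparsity
    _, levels = fsb_partition(w, m=4)
    # Each entry gives the nonzero count of a 1x4 block (its N:4 level),
    # so blocks sharing a level could be scheduled onto the same
    # structured-sparse datapath configuration.
    print(levels)
```

The point of such a partition is that flexibility is preserved at the coarse level (any nonzero layout is allowed across blocks), while each individual block falls into one of a few structured patterns that a fixed datapath, such as the DERN described in the abstract, can handle efficiently.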
Key words
Sparse Matrix-Matrix Multiplication, Neural Networks, Graphics Processing Units