
VIDGCN: Embracing input data diversity with a configurable graph convolutional network accelerator

Hao Ming, Tingting Pan, Dong Chen, Chencheng Ye, Haikun Liu, Liting Tang, Xiaofei Liao, Hai Jin

Journal of Systems Architecture (2023)

Abstract
Hardware-accelerated inference is a promising solution for exploiting graph convolutional networks (GCNs) in latency-sensitive applications. Existing accelerators overlook an important barrier to widespread adoption: the input data (i.e., weighted graphs) of GCN inference diverge in scale and sparsity, so accelerators optimized for one class of graphs lose efficiency on others. This paper presents a reconfigurable GCN inference accelerator, VIDGCN, that switches between all possible GCN inference computation schemes to realize timely inference for any input graph. VIDGCN incorporates an analytical performance model and a reconfigurable hardware design. The performance model allows users to find the optimal computation scheme for any given input graph. The hardware design reuses all computation units under all computation schemes, differing only in how data is distributed to the units. Evaluation on seven real-world graphs shows that VIDGCN outperforms the state-of-the-art accelerator, SGCNAX, by 1.79x, and consistently yields the ideal number of memory accesses.
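The core idea of scheme selection can be illustrated with a well-known example from GCN computation: a GCN layer computes A·X·W (adjacency A, features X, weights W), and the cheaper multiplication order depends on the input's sparsity and dimensions. The sketch below uses a simple FLOP model to pick an order; this is an illustrative analogue, not VIDGCN's actual performance model, and the function names are hypothetical.

```python
import numpy as np
import scipy.sparse as sp

def flops_a_xw(n, f, c, nnz_a):
    # A @ (X @ W): dense X @ W costs n*f*c MACs,
    # then sparse A times a dense (n, c) matrix costs nnz(A)*c MACs.
    return n * f * c + nnz_a * c

def flops_ax_w(n, f, c, nnz_a):
    # (A @ X) @ W: sparse A times dense (n, f) costs nnz(A)*f MACs,
    # then the dense (n, f) result times W costs n*f*c MACs.
    return nnz_a * f + n * f * c

def best_order(A, X, W):
    """Pick the multiplication order with fewer estimated MACs.

    A: scipy sparse (n, n) adjacency; X: dense (n, f); W: dense (f, c).
    Returns the layer output and a label for the chosen scheme.
    """
    n, f = X.shape
    c = W.shape[1]
    if flops_a_xw(n, f, c, A.nnz) <= flops_ax_w(n, f, c, A.nnz):
        return A @ (X @ W), "A(XW)"
    return (A @ X) @ W, "(AX)W"
```

Since the two orders differ only in the nnz(A)*c versus nnz(A)*f terms, A·(X·W) wins whenever the output width c is smaller than the feature width f, which is typical for GCN hidden layers. VIDGCN's model additionally accounts for hardware-level factors such as memory traffic, which this FLOP count ignores.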
Key words
Graph convolutional network (GCN), ASIC-based accelerator, Sparse-dense matrix multiplication