Accelerating String-Key Learned Index Structures via Memoization-based Incremental Training
arXiv (2024)
Abstract
Learned indexes use machine learning models to learn the mappings between
keys and their corresponding positions in key-value indexes. These indexes use
the mapping information as training data. Learned indexes require frequent
retrainings of their models to incorporate the changes introduced by update
queries. To efficiently retrain the models, existing learned index systems
often harness a linear algebraic QR factorization technique that performs
matrix decomposition. This factorization approach processes all key-position
pairs during each retraining, resulting in compute operations that grow
linearly with the total number of keys and their lengths. Consequently, the
retrainings create a severe performance bottleneck, especially for
variable-length string keys, while the retrainings are crucial for maintaining
high prediction accuracy and in turn, ensuring low query service latency.
To address this performance problem, we develop an algorithm-hardware
co-designed string-key learned index system, dubbed SIA. In designing SIA, we
leverage a unique algorithmic property of the matrix decomposition-based
training method. Exploiting the property, we develop a memoization-based
incremental training scheme, which only requires computation over updated keys,
while decomposition results of non-updated keys from previous computations can
be reused. We further enhance SIA to offload a portion of this training process
to an FPGA accelerator to not only relieve CPU resources for serving index
queries (i.e., inference), but also accelerate the training itself. Our
evaluation shows that, compared to ALEX, LIPP, and SIndex, state-of-the-art
learned index systems, SIA-accelerated learned indexes offer 2.6x and 3.4x
higher throughput on two real-world benchmark suites, YCSB and the Twitter
cache trace, respectively.
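The memoization idea in the abstract rests on a standard property of QR-based least squares: once A = QR has been computed for the existing key-position pairs, the triangular factor R (together with Qᵀy) summarizes those rows exactly, so a retraining over old plus new keys only needs to factorize R stacked on top of the new rows. The sketch below is illustrative only, assuming a plain linear model and NumPy; the function names and interfaces are hypothetical and do not come from SIA's actual implementation.

```python
import numpy as np

def initial_fit(A, y):
    # Full QR-based least-squares fit over all key-position pairs.
    # Returns the memoized factors (R, Q^T y) plus the model weights.
    Q, R = np.linalg.qr(A, mode="reduced")
    qty = Q.T @ y
    theta = np.linalg.solve(R, qty)
    return R, qty, theta

def incremental_fit(R, qty, A_new, y_new):
    # Memoized retraining (hypothetical sketch): reuse R and Q^T y from
    # the previous decomposition, so only the updated keys' rows are
    # processed.  [R; A_new] has the same normal equations as the full
    # stacked data matrix, because R^T R = A^T A and R^T qty = A^T y.
    stacked = np.vstack([R, A_new])
    rhs = np.concatenate([qty, y_new])
    Q2, R2 = np.linalg.qr(stacked, mode="reduced")
    qty2 = Q2.T @ rhs
    theta = np.linalg.solve(R2, qty2)
    return R2, qty2, theta
```

The payoff is that `incremental_fit` factorizes a matrix with only `d + m` rows (`d` model features, `m` new keys) instead of all `n + m` keys, which is what makes retraining cost proportional to the update volume rather than the total key count.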