LOCR: Location-Guided Transformer for Optical Character Recognition
arXiv (2024)
Abstract
Academic documents are packed with texts, equations, tables, and figures,
requiring comprehensive understanding for accurate Optical Character
Recognition (OCR). While end-to-end OCR methods offer improved accuracy over
layout-based approaches, they often grapple with significant repetition issues,
especially with complex layouts in Out-Of-Domain (OOD) documents. To tackle this
issue, we propose LOCR, a model that integrates location guiding into the
transformer architecture during autoregression. We train the model on a dataset
comprising over 77M text-location pairs from 125K academic document pages,
including bounding boxes for words, tables, and mathematical symbols. LOCR
adeptly handles various formatting elements and generates content in Markdown
language. It outperforms all existing methods in our test set constructed from
arXiv, as measured by edit distance, BLEU, METEOR and F-measure. LOCR also
reduces repetition frequency from 4.4% in our arXiv test set and from 13.2% in
OOD marketing documents. Additionally, LOCR features an interactive OCR mode,
facilitating the generation of complex documents through a few location prompts
from humans.