Unsupervised Word Segmentation with Bi-directional Neural Language Model

arXiv (2023)

Cited by 4
Abstract
We propose an unsupervised word segmentation model in which, for each unlabelled sentence, the learning objective is to maximize the generation probability of the sentence marginalized over all of its possible segmentations. This generation probability can be factorized recursively into the likelihood of each possible segment given its context. To capture both long- and short-term dependencies, we propose a bi-directional neural language model to better extract the features of a segment's context. We also develop two decoding algorithms that combine the context features from both directions to produce the final segmentation at inference time, which helps resolve word-boundary ambiguities. Experimental results show that our context-sensitive unsupervised segmentation model achieves state-of-the-art performance under different evaluation settings on various Chinese datasets, and comparable results for Thai.
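As a rough illustration of the recursive factorization described in the abstract, the sketch below shows a character-level forward dynamic program that marginalizes over all segmentations, plus a one-directional Viterbi decode. The names sentence_log_marginal, viterbi_segment, seg_logprob, and max_len are illustrative stand-ins, not from the paper, and the toy lexicon scorer merely substitutes for the paper's bi-directional neural language model (which also combines features from both directions at decoding time).

```python
import math

def logsumexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def sentence_log_marginal(chars, seg_logprob, max_len=4):
    """Forward DP: log P(chars) summed over all segmentations into
    words of length <= max_len.  seg_logprob(segment, context) is
    assumed to return log P(segment | left context), e.g. from a
    neural language model."""
    n = len(chars)
    alpha = [-math.inf] * (n + 1)
    alpha[0] = 0.0  # the empty prefix is generated with probability 1
    for t in range(1, n + 1):
        for k in range(1, min(max_len, t) + 1):
            # segment chars[t-k:t] generated after prefix chars[:t-k]
            score = alpha[t - k] + seg_logprob(chars[t - k:t], chars[:t - k])
            alpha[t] = logsumexp(alpha[t], score)
    return alpha[n]

def viterbi_segment(chars, seg_logprob, max_len=4):
    """Recover the single best segmentation (left-to-right decode only)."""
    n = len(chars)
    best = [-math.inf] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for t in range(1, n + 1):
        for k in range(1, min(max_len, t) + 1):
            score = best[t - k] + seg_logprob(chars[t - k:t], chars[:t - k])
            if score > best[t]:
                best[t], back[t] = score, t - k
    # Follow back-pointers to cut the sentence into segments.
    cuts, t = [], n
    while t > 0:
        cuts.append((back[t], t))
        t = back[t]
    return [chars[i:j] for i, j in reversed(cuts)]

# Toy usage: a tiny lexicon stands in for the learned language model.
lexicon = {"我们", "喜欢", "自然", "语言", "处理"}
def toy_logprob(segment, context):
    return math.log(0.5) if segment in lexicon else math.log(0.01)

print(viterbi_segment("我们喜欢自然语言处理", toy_logprob))
# -> ['我们', '喜欢', '自然', '语言', '处理']
```

In the full model, seg_logprob would come from context features computed by forward and backward recurrent language models rather than a fixed lexicon, but the dynamic program over segmentations has the same shape.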
Keywords
Unsupervised word segmentation, bi-directional neural language model, recurrent neural networks, context-sensitive segmentation, decoding algorithms