%0 Journal Article %T Summary of Research Methods on Pre-Training Models of Natural Language Processing %A Yu Xiao %A Zhezhi Jin %J Open Access Library Journal %V 8 %N 7 %P 1-7 %@ 2333-9721 %D 2021 %I Open Access Library %R 10.4236/oalib.1107602 %X In recent years, deep learning has been widely adopted and developed, and pre-training models have seen increasingly broad use in natural language processing tasks. Whether for sentence extraction or text sentiment analysis, the pre-training model plays a very important role. Unsupervised pre-training on a large-scale corpus has proven to be an effective way to improve models. This article summarizes existing pre-training models, reviews the improvements and processing methods of the more recent pre-training models, and concludes with the challenges and prospects facing current pre-training models. %K Natural Language Processing %K Pre-Training Model %K Language Model %K Self-Training Model %U http://www.oalib.com/paper/6758071