Summary of Research Methods on Pre-Training Models of Natural Language Processing

DOI: 10.4236/oalib.1107602, PP. 1-7

Subject Areas: Statistics

Keywords: Natural Language Processing, Pre-Training Model, Language Model, Self-Training Model

Abstract

In recent years, deep learning technology has seen wide adoption and rapid development, and pre-training models are now used throughout natural language processing. Whether the task is sentence extraction or text sentiment analysis, the pre-training model plays a very important role. Unsupervised pre-training on a large-scale corpus has proven to be an effective way to initialize such models. This article summarizes the existing pre-training models, organizes the improvements and processing methods of the more recent ones, and concludes with the challenges and prospects facing current pre-training models.
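
As a minimal illustration of the workflow the abstract describes, the sketch below loads a pre-trained encoder and runs one fine-tuning step for sentiment classification. It is not taken from the paper: the Hugging Face transformers library, the bert-base-uncased checkpoint, the toy examples, and the hyperparameters are all illustrative assumptions.

# Minimal sketch (not from the paper): fine-tuning a pre-trained BERT encoder
# for binary sentiment classification with the Hugging Face transformers library.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # assumed publicly available checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy labeled examples standing in for a real sentiment corpus.
texts = ["The film was wonderful.", "The service was terrible."]
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative

# Tokenize and run one fine-tuning step on top of the pre-trained weights.
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # cross-entropy loss is computed internally
outputs.loss.backward()
optimizer.step()

# After fine-tuning, the same model predicts sentiment for new text.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("A truly great paper.", return_tensors="pt")).logits
print(logits.argmax(dim=-1).item())  # 1 -> positive, 0 -> negative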

Cite this paper

Xiao, Y. and Jin, Z. (2021). Summary of Research Methods on Pre-Training Models of Natural Language Processing. Open Access Library Journal, 8, e7602. doi: http://dx.doi.org/10.4236/oalib.1107602.

