Analysis of Advantages and Disadvantages of Large Model Empowering the University General Computer Curriculum and Proposed Countermeasures

DOI: 10.12677/ces.2025.133192, PP. 329-337

Keywords: Large Model, Curriculum Reform, Analysis of Advantages and Disadvantages, Countermeasures


Abstract:

As the digital and intelligent transformation of education accelerates, large model technology has permeated every stage of teaching and learning, profoundly reshaping how teachers teach and how students learn, and bringing both new opportunities and new challenges to course instruction. How to fully leverage the strengths of large models so that they effectively empower course delivery and improve teaching outcomes has become an important topic in education. Focusing on the teaching of general university computer courses, this article analyzes the current application of large model technology in curriculum design and implementation as well as in students' learning processes, and proposes targeted strategies and measures aimed at driving innovative development and quality improvement in course teaching.

