OALib Journal
ISSN: 2333-9721

2016

An Automatic Scoring Method of Student Programs Using Multi-Feature Analysis for Massive Open Online Courses

DOI: 10.7652/xjtuxb201610010

Keywords: massive open online courses, automatic evaluation, multi-feature analysis, abstract syntax tree, similarity computation


Abstract:

An automatic scoring method based on multi-feature analysis is proposed to address the problem that massive open online course (MOOC) environments host large numbers of C/C++ programming learners while existing automatic scoring techniques have low accuracy. Prompt information in a submitted program is first removed by a preprocessing compiler. Lexical analysis and abstract syntax tree (AST) methods are then used to extract multiple features from the submitted program and from the standard template program, and the similarities of these features are calculated. Depending on whether the program compiles successfully, one of two strategies is applied to comprehensively analyze the multi-feature similarities and score the program automatically. The multi-feature similarities include the running-result similarity over multiple test cases, the similarity of the features extracted from the AST, and the source-code similarity. If the program fails to compile, both the AST feature similarity and the source-code similarity are analyzed. Experimental results show that, compared with a dynamic test method based only on test-case results and a traditional static analysis method, the average accuracy of the proposed method increases by 18.48% and 14.17%, respectively. The automatically generated scores are highly correlated with manual scores, and no manual assistance is required to ensure their accuracy. The proposed method is suitable for teaching in massive open online courses.
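The two-strategy combination described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the cosine measure for comparing extracted feature vectors, the weight values, and the function names are all illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors,
    e.g. counts of AST node types extracted from the student
    program and from the standard template program."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def score(compiled, test_pass_ratio, ast_sim, code_sim, full_marks=100):
    """Combine the feature similarities with one of two strategies,
    chosen by whether the submission compiled. Weights are illustrative."""
    if compiled:
        # Compiled programs: weight test-case running results most heavily,
        # supplemented by AST feature similarity.
        s = 0.7 * test_pass_ratio + 0.3 * ast_sim
    else:
        # Compilation failed: no test-case results exist, so fall back to
        # static analysis of AST feature and source-code similarity.
        s = 0.6 * ast_sim + 0.4 * code_sim
    return round(full_marks * s, 1)
```

For example, a submission that compiles, passes 80% of the test cases, and has AST feature similarity 0.9 would receive `score(True, 0.8, 0.9, 0.0)` = 83.0 under these assumed weights.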

