This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
LI Wanjun, ZHAO Yun, JIA Wenfeng, ZHAO Yushan
DOI: 10.17265/1539-8072/2021.10.001
Shandong University, Weihai, China
This paper explores the reliability of Online Automatic Scoring (OAS) by comparing OAS with Teacher Scoring (TS), and further demonstrates the feasibility of integrating the two scoring methods. Pearson correlation analysis of the two sets of scores for 115 compositions, conducted in SPSS, shows a correlation of 0.83 between OAS and TS, indicating that OAS is relatively reliable in scoring students’ compositions. After the second stage of the TS experiment, the questionnaire results show that students generally accept OAS and have a clear understanding of the advantages and disadvantages of the two scoring methods. Combined with student interviews, the study concludes that OAS is reliable and that integrating the two scoring methods will yield better results.
online automatic scoring, teacher scoring, integration
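For readers who want to reproduce a correlation of this kind, the sketch below shows how a Pearson coefficient between two sets of composition scores can be computed. The study itself used SPSS; here scipy is used as an equivalent, and the score lists are hypothetical placeholders rather than the study’s data.

```python
# Minimal sketch: computing the Pearson correlation between OAS and TS scores.
# The paper's analysis was done in SPSS; scipy.stats.pearsonr yields the same
# statistic. The score values below are hypothetical, not the study's data.
from scipy.stats import pearsonr

oas_scores = [78.0, 85.5, 69.0, 92.0, 74.5]  # Online Automatic Scoring results
ts_scores = [80.0, 83.0, 72.0, 90.0, 70.0]   # Teacher Scoring results

r, p_value = pearsonr(oas_scores, ts_scores)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```

With 115 paired scores, a coefficient of 0.83 indicates a strong positive relationship between the two scoring methods.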