AN AUTOMATED SCORING APPROACH FOR ESSAY QUESTIONS

Authors

  • Ahmed ALZAHRANI
  • Abdulkareem ALZAHRANI
  • Fawaz ALARFAJ
  • Khalid ALMOHAMMADI
  • Malek ALRASHIDI

Keywords:

Automated essay scoring, Project Essay Grade, e-pedagogy and e-assessment

Abstract

The automated scoring or evaluation of written student responses has been, and remains, a topic of great interest to both education and natural language processing (NLP) researchers. Motivated by the difficulties teachers face when marking open essay questions, the development of automatic scoring methods has recently received much attention. In this paper, we develop and compare a number of NLP techniques that accomplish this task. The baseline for this study is based on a vector space model (VSM): after normalisation, the baseline system represents each essay as a vector and calculates its score as the cosine similarity between that vector and the vector of the model answer. This baseline is then compared with an improved model that takes the document structure into account. To evaluate our system, we used real essays submitted for a computer science course. Each essay was independently scored by two teachers, whose marks we used as our gold standard, and the systems' scores were compared with both. Greater weight was given in the evaluation to cases where the two human assessors were in agreement. The systems' results show high and promising performance.
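The baseline idea described in the abstract can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a minimal illustration of the VSM-plus-cosine-similarity approach, assuming TF-IDF weighting and the scikit-learn library, neither of which is specified in the paper. The score_essay function, the maximum mark, and the example texts are hypothetical.

```python
# Minimal sketch of scoring an essay against a model answer with a vector
# space model and cosine similarity (TF-IDF weighting is an assumption).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def score_essay(model_answer: str, student_essay: str, max_score: float = 10.0) -> float:
    """Scale the cosine similarity between the two texts to a mark out of max_score."""
    # Fit the vectoriser on both texts; a real system might fit on a larger corpus.
    vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
    vectors = vectorizer.fit_transform([model_answer, student_essay])
    similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
    return similarity * max_score


# Example usage with hypothetical texts:
model = "A stack is a last-in, first-out data structure supporting push and pop."
essay = "Stacks store elements so the last element pushed is the first popped."
print(f"Predicted score: {score_essay(model, essay):.2f}")
```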

Published

2014-09-01

How to Cite

ALZAHRANI, A., ALZAHRANI, A., ALARFAJ, F., ALMOHAMMADI, K., & ALRASHIDI, M. (2014). AN AUTOMATED SCORING APPROACH FOR ESSAY QUESTIONS. The Eurasia Proceedings of Educational and Social Sciences, 1, 232–236. Retrieved from https://epess.net/index.php/epess/article/view/36
