Obmaaq: Ontology-Based Model for Automated Assessment of Short-Answer Questions

2023 First International Conference on Advances in Electrical, Electronics and Computational Intelligence (ICAEECI), 2023

Abstract
Automated scoring of short answers is becoming increasingly important as more classes move online in the wake of COVID-19 outbreaks. Over the past 50 years, a variety of techniques have been applied to assess short answers automatically, across different question types and domains. However, no interoperable tools have been developed to support the automated assessment of short answers for common modules shared across educational institutions. In this paper, we present a novel system called OBMAAQ, which grades short answers to questions designed for the "remember" category of Bloom's revised taxonomy. The proposed system uses publicly available domain ontologies to grade short answers in different subject areas. The method is distinctive in that it requires no model answers: it relies solely on the domain ontologies, applying an ontology-based pruning technique to grade students' responses. The goal is to let teachers at different educational institutions use the same approach when automatically assessing short answers for common modules. Two publicly available domain ontologies, in the fields of security and computer networks, were used to validate students' answers. Experimental results on both datasets are promising, although performance falls below the set thresholds. This paper discusses the lessons learned and makes recommendations for future work based on the OBMAAQ model.
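The abstract's core idea — grading against a pruned domain ontology rather than a model answer — can be illustrated with a minimal sketch. The ontology below is an invented toy fragment, and the pruning and scoring logic is an assumption for illustration only; the abstract does not publish OBMAAQ's actual algorithm or data structures.

```python
# Toy sketch of ontology-based short-answer grading without model answers.
# The ontology (a dict mapping each concept to related terms) and the
# overlap-based score are illustrative assumptions, not OBMAAQ's method.

def prune_ontology(ontology, question_concept):
    """Prune the ontology to the neighborhood of the queried concept."""
    return {question_concept} | ontology.get(question_concept, set())

def grade_answer(ontology, question_concept, answer, threshold=0.5):
    """Score an answer by its term overlap with the pruned ontology."""
    relevant = prune_ontology(ontology, question_concept)
    tokens = {t.strip(".,;").lower() for t in answer.split()}
    matched = tokens & {c.lower() for c in relevant}
    score = len(matched) / len(relevant)
    return score, score >= threshold

# Invented security-ontology fragment for demonstration.
ontology = {"firewall": {"packet", "filtering", "rules", "traffic"}}

score, passed = grade_answer(
    ontology, "firewall",
    "A firewall filters network traffic using rules",
)
```

Here the pruning step keeps only the concept the question targets plus its immediate neighborhood, so unrelated parts of a large public ontology cannot inflate or dilute the score; a real system would also need synonym handling and multi-word term matching.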
Keywords
Ontology, Text mining, Assessment, Automated marking, Short answer