Show simple item record

Content-based methods in peer assessment of open-response questions to grade students as authors and as graders

dc.contributor.author: Luaces Rodríguez, Óscar
dc.contributor.author: Díez Peláez, Jorge
dc.contributor.author: Alonso-Betanzos, Amparo
dc.contributor.author: Troncoso, Alicia
dc.contributor.author: Bahamonde Rionda, Antonio
dc.date.accessioned: 2017-01-19T10:11:35Z
dc.date.available: 2017-01-19T10:11:35Z
dc.date.issued: 2017-02
dc.identifier.citation: Knowledge-Based Systems, 117, p. 79–87 (2017); doi:10.1016/j.knosys.2016.06.024
dc.identifier.issn: 0950-7051
dc.identifier.uri: http://hdl.handle.net/10651/39345
dc.description.abstract: Massive Open Online Courses (MOOCs) use different types of assignments in order to evaluate student knowledge. Multiple-choice tests are particularly apt given the possibility for automatic assessment of large numbers of assignments. However, certain skills require open responses that cannot be assessed automatically, yet their evaluation by instructors or teaching assistants is unfeasible given the large number of students. A potentially effective solution is peer assessment, whereby students grade the answers of other students. However, to avoid bias due to inexperience, such grades must be filtered. We describe a factorization approach to grading, as a scalable method capable of dealing with very high volumes of data. Our method is also capable of representing open-response content using a vector space model of the answers. Since reliable peer assessment requires students to make coherent assessments, students can be motivated by their assessments reflecting not only their own answers but also their efforts as graders. The method described is able to tackle both these aspects simultaneously. Finally, for a real-world university setting in Spain, we compared grades obtained by our method and grades awarded by university instructors, with results indicating a notable improvement from using a content-based approach. There was no evidence that instructor grading would have led to more accurate grading outcomes than the assessment produced by our models.
dc.description.sponsorship: This research was supported in part by the Spanish Ministerio de Economía y Competitividad (grants TIN2011-23558, TIN2012-37954, TIN2014-55894-C2-2-R, TIN2015-65069-C2-1-R, TIN2015-65069-C2-2-R), the Junta de Andalucía (grant P12-TIC-1728) and the Xunta de Galicia (grant GRC2014/035), all, in turn, partially funded by FEDER. We would also like to thank the students from the University of A Coruña, Pablo de Olavide University and University of Oviedo who participated in this research.
dc.format.extent: p. 79–87
dc.language.iso: eng
dc.publisher: Elsevier
dc.relation.ispartof: Knowledge-Based Systems, 117
dc.rights: © 2017 Elsevier
dc.rights: CC Attribution - NonCommercial - NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Peer assessment
dc.subject: Factorization
dc.subject: Preference learning
dc.subject: Grading graders
dc.subject: MOOCs
dc.title: Content-based methods in peer assessment of open-response questions to grade students as authors and as graders
dc.type: journal article
dc.identifier.doi: 10.1016/j.knosys.2016.06.024
dc.relation.projectID: MEC/TIN2011-23558, TIN2012-37954, TIN2014-55894-C2-2-R, TIN2015-65069-C2-1-R, TIN2015-65069-C2-2-R
dc.relation.projectID: Junta de Andalucía/P12-TIC-1728
dc.relation.projectID: Xunta de Galicia/GRC2014-03
dc.relation.publisherversion: http://dx.doi.org/10.1016/j.knosys.2016.06.024
dc.rights.accessRights: open access
dc.type.hasVersion: AM


Files in this item


This item appears in the following collection(s)


This item is subject to a Creative Commons license