Year: 2019
Publication venue: The International Learning Analytics and Knowledge Conference (LAK19), Tempe, AZ, USA

Article abstract

This study examines the fidelity of ranking and rating scales in the context of online peer review and assessment. Using Monte Carlo simulation, we demonstrated that rating scales outperform ranking scales in revealing the relative "true" latent quality of the peer-assessed artifacts via the observed aggregate peer assessment scores. Our analysis focused on a simple, single-round peer assessment process and took into account peer assessment network topology, network size, the number of assessments per artifact, and the correlation statistic used. This methodology allows us to separate the effects of the structural components of peer assessment from cognitive effects.

CCS CONCEPTS
• Information systems~Similarity measures • Applied computing~Education~E-learning

KEYWORDS
Peer assessment, peer review, peer evaluation, ranking, rating, scales, reliability, validity

ACM Reference Format:
D. Babik, S. Stevens, and A. Waters. 2019. Comparison of Ranking and Rating Scales in Online Peer Assessment: Simulation Approach. In The International Learning Analytics and Knowledge Conference (LAK19), March 2019, Tempe, AZ, USA. ACM, New York, NY, USA. 5 pages. https://doi.org/10.1145/3303772.3303820
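The kind of Monte Carlo comparison the abstract describes can be sketched in miniature: draw latent "true" qualities, let each simulated reviewer observe them with noise, aggregate either the noisy ratings (which retain magnitude) or the induced rankings (which keep only order), and correlate each aggregate with the truth. This is an illustrative toy model only, not the authors' simulation; it ignores network topology, assumes every reviewer assesses every artifact, and all parameter names (`n_artifacts`, `n_reviewers`, `noise`) are invented for the sketch.

```python
import random


def ranks(xs):
    # Rank positions, 1 = smallest; ties are negligible with continuous noise.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r


def spearman(a, b):
    # Spearman rank correlation (no-ties formula).
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))


def simulate(n_artifacts=30, n_reviewers=5, noise=0.5, seed=0):
    rng = random.Random(seed)
    true_q = [rng.gauss(0, 1) for _ in range(n_artifacts)]
    rating_totals = [0.0] * n_artifacts
    ranking_totals = [0.0] * n_artifacts
    for _ in range(n_reviewers):
        # Each reviewer perceives quality with independent Gaussian error.
        perceived = [q + rng.gauss(0, noise) for q in true_q]
        for i, p in enumerate(perceived):
            rating_totals[i] += p        # rating scale: magnitude kept
        for i, ri in enumerate(ranks(perceived)):
            ranking_totals[i] += ri      # ranking scale: only order kept
    # How well does each aggregate recover the true quality ordering?
    return (spearman(true_q, rating_totals),
            spearman(true_q, ranking_totals))
```

With moderate noise and several reviewers per artifact, both aggregates track the latent ordering closely; the paper's contribution is quantifying when and by how much the two scale types diverge under varying network size, topology, and assessment counts.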

Keywords

Simulation, Monte Carlo method, Network topology, Aggregate data

Characteristics

level
step
environment
target