This page visualizes grading-team performance using Crowdmark at the University of Toronto during Fall 2014.
- The area of each bubble is proportional to the number of pages uploaded.
- Grading velocity is computed from the Fall 2014 Crowdmark timing data using the algorithm described below.
- Exams with multiple versions in the same class are shown here separately.
- Other information, including the number of days taken to return the exam, is shown in a hover tooltip.
This page was built by adapting this example by Mike Bostock: http://bl.ocks.org/mbostock/3887118
When a grader adds a score, Crowdmark records a "created at" timestamp. When a facilitator later changes that score, Crowdmark records an "updated at" timestamp. The algorithm for calculating time spent grading ignores "updated at" entirely and is based exclusively on "created at" times. Suppose a grader or facilitator produces a sequence of "created at" times t_1, t_2, t_3, ..., t_N. If t_{n+1} - t_n < THRESHOLD, we assume the grader was grading continuously during the interval [t_n, t_{n+1}]. If the gap exceeds THRESHOLD, we assume the grader stopped grading. The total time spent grading is the sum of the lengths of the intervals interpreted as continuous marking. The current THRESHOLD is 10 minutes. Should we change the threshold time?
Caveats: If a grader spends 11 minutes reviewing a page, enters a score, then spends another 11 minutes reviewing the next page and enters a second score, the algorithm reports 0 minutes spent grading. Activities such as matching and reviewing scores do not currently contribute to time spent grading.
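The threshold algorithm above can be sketched in a few lines. This is a minimal illustration, not Crowdmark's actual implementation; the function name and the assumption that timestamps belong to a single grader are ours.

```python
from datetime import datetime, timedelta

# Matches the current 10-minute threshold described above.
THRESHOLD = timedelta(minutes=10)

def time_spent_grading(created_at_times):
    """Sum the gaps between consecutive "created at" times that fall
    within THRESHOLD; larger gaps are treated as breaks in grading."""
    times = sorted(created_at_times)
    total = timedelta(0)
    for earlier, later in zip(times, times[1:]):
        gap = later - earlier
        if gap < THRESHOLD:
            # Gap is short enough: assume continuous grading over [earlier, later].
            total += gap
        # Otherwise the grader is assumed to have stopped; the gap adds nothing.
    return total
```

For example, scores created at 12:00, 12:05, 12:30, and 12:33 yield 5 + 3 = 8 minutes, since the 25-minute gap counts as a break. The caveat above also falls out directly: two scores 11 minutes apart contribute 0 minutes.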