My current approach to assessment in specs grading (Casting Out Nines in exile)

This post is really an extended answer to a question that came up on Twitter today among some of my standards-based grading people: how do you balance being thorough about assessing student work on learning objectives, on the one hand, against not being crushed by the grading workload on the other?

.@RobertTalbert Using SBG in my calc classes this term, mostly liking it. However, grading load has become unsustainable (~60 Ss). Advice?

— Spencer Bagley (@sbagley) October 13, 2015

SBG is based on having a list of micro-scale, atomic learning objectives for the course, and students progress through the course by providing evidence that they have mastered each objective. Some of these lists can contain dozens of learning objectives. Multiply that by 60+ students and you have a lot of grading to do. At least, that's the question -- does SBG have to come at this much expense?

I haven't been using "pure" SBG in my classes (precisely because of my concerns over the increase in the grading load) but I consider specifications grading to be a form of SBG, and here's what student assessment on learning objectives currently looks like for me.

First of all, one general category of assessment is something I call Learning Community, which I discussed here. This is not going to be part of the discussion now because Learning Community items aren't about content standards -- they are about things like preparing for class, working with others, and incentivizing getting your hands dirty through practice and presentations. Students contribute to the Learning Community by completing Guided Practice pre-class activities, bringing in their Homework A to discuss and correct, and presenting at the board. These are important things but not "standards" as such; this is all formative assessment.

When it comes to assessing standards, it might be helpful to trace the brief history of how I've done this.

This time last year when I was first doing specs grading, I had a list of learning objectives for each class that went down to the atomic level of things students should know. Here's the list for Discrete Structures 2, for instance. Each learning objective had its own mini-assessment that consisted of 1--3 tasks that assessed student mastery of that one objective. At intervals throughout the semester -- something like 5--6 times -- we'd set aside time for students to come and take these mini-assessments, which were graded Pass/No Pass. If you got No Pass on one of these, you could retake a new version of it at the next assessment period in class. Students' grades in the course were partially dependent on how many learning objectives they Passed.

So, this was quite close to a "pure" SBG approach. The students and I liked the specificity of this system -- we all knew exactly what each student knew and didn't know, and each Pass/No Pass mark mapped to exactly one learning objective. The downside should be obvious: It was a crap ton of grading, and also a crap ton of writing since I ended up having to write 6--8 versions of assessments for around 50 total learning objectives. The logistics of assessment were awkward as well. And, as I've noted elsewhere, there was no reassessment component, so a student could spuriously Pass a learning objective and never be rechecked on it. Building in reassessment would have made a bad grading load worse. So this is exactly the sort of thing that Spencer was trying to get away from in his tweet above.

Over the summer, in my online Calculus course, I switched to something more like this model by George McNulty [PDF]. In this model I didn't have "standards" for students to meet but rather 16 different kinds of problems that students should be able to work. Here is a sample final exam with instances of all 16 of those problems. I could map those problems onto actual standards if I wanted to -- Problem 6 is Compute the derivative of functions involving trigonometric functions, Problem 9 is Find the slope at a point on a curve defined as an implicit function, and so on. Students took a series of midterm exams during the semester that were constructed such that the first one had Problems 0--3 on it, the second one had Problems 0--6, the third had Problems 0--11, and so on. Each problem was graded Pass/No Pass, and again if you didn't pass a problem on one go-around, you would just retake another version at the next one.
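To make the bookkeeping concrete, here is a minimal Python sketch of how one might track Pass/No Pass marks under this cumulative-exam scheme. All the names here are my own illustration, not anything from the actual course, and the later cutoffs are inferred from the fact that there are 16 problems (so, presumably, Problems 0--15).

```python
# A minimal sketch (hypothetical names) of the McNulty-style cumulative
# exam model: midterm k covers Problems 0..MIDTERM_CUTOFFS[k], each problem
# is graded Pass/No Pass, and a retake simply overwrites the earlier mark.

MIDTERM_CUTOFFS = [3, 6, 11, 15]  # midterm 1: Problems 0-3; midterm 2: 0-6; ...

def update_record(record, midterm_results):
    """Merge one midterm's Pass/No Pass marks into a student's record.

    record          -- dict mapping problem number -> True (Pass) / False
    midterm_results -- marks from the latest midterm; the most recent
                       attempt at each problem is what counts
    """
    record.update(midterm_results)
    return record

def problems_to_retake(record, exam_index):
    """Problems the student has not yet Passed among those on this exam."""
    covered = range(MIDTERM_CUTOFFS[exam_index] + 1)
    return [p for p in covered if not record.get(p, False)]

# Example: a student passes Problems 0-2 on midterm 1 but misses Problem 3,
# then retakes a new version of Problem 3 on midterm 2.
record = {}
update_record(record, {0: True, 1: True, 2: True, 3: False})
print(problems_to_retake(record, 0))   # [3]
update_record(record, {3: True, 4: True, 5: False, 6: True})
print(problems_to_retake(record, 1))   # [5]
```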

What made this approach different was that I was clustering some related standards together. For example, each instance of Problem 7 asked students to find the derivative of a function using the Chain Rule, find the slope of a tangent line to the graph of a composite function at a point, find an instantaneous rate of change in a composite function at a point, and find the derivative value of a composite function expressed as a table or as a graph. If I were doing SBG this would be four different standards. Here, it's four standards clustered into the one problem, and the idea is that if you can show that you know what you are doing on most of that problem, you've Passed.

This approach worked pretty well too, and it dramatically cut down on the amount of grading and assessment-writing I had to do since the standards were clustered together inside a Problem. The biggest downside was that it lacked specificity and raised questions (to me at least) about rigor. A student could Pass problem 7 for instance without having a completely solid grasp of how to work with the Chain Rule when the functions are given as graphs or tables, by showing excellent work on the other three parts of that problem. I was never fully OK with this. Also, again there was no reassessment process.

This time around, I decided to cluster the learning objectives even further. Each course I teach (Discrete Structures 1 and Discrete Structures 2) has five topics. Students can certify on each topic at either Level 1 or Level 2. They do this through a combination of timed skills tests and untimed homework problems. Certifying at Level 1 means Passing a skills test (the "Level 1 assessment") on basic concepts and 2 out of 5 homework problems. Certifying at Level 2 means Passing a skills test (the "Level 2" assessment) on more advanced concepts and 3 out of 5 homework problems. (The homework problems are pretty hard, often involving either programming or proving theorems.) Here are a sample Level 1 assessment and a sample Level 2 assessment. Each problem on the assessment roughly maps to a learning objective. There are 4--8 learning objectives being assessed on each one. And as always, if you don't Pass an assessment, you take a new version of it later.
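As a quick illustration of the certification rules just described, here is a minimal Python sketch. The function and variable names are hypothetical, and I'm assuming each level's skills test stands on its own (the post doesn't say whether Level 2 also requires the Level 1 test).

```python
# A minimal sketch (hypothetical names) of the per-topic certification rules:
#   Level 1 = Pass the Level 1 skills test + 2 of 5 homework problems
#   Level 2 = Pass the Level 2 skills test + 3 of 5 homework problems
# Assumption: each level's skills test is evaluated independently.

def certification_level(passed_level1_test, passed_level2_test, homework_passed):
    """Return the highest level (0, 1, or 2) certified for one topic.

    passed_level1_test, passed_level2_test -- bool, skills test results
    homework_passed -- how many of the 5 homework problems were Passed
    """
    if passed_level2_test and homework_passed >= 3:
        return 2
    if passed_level1_test and homework_passed >= 2:
        return 1
    return 0

# Example: a student who passed both skills tests but only 2 homework
# problems certifies at Level 1 on this topic; a third homework Pass
# would bump them to Level 2.
print(certification_level(True, True, 2))   # 1
print(certification_level(True, True, 3))   # 2
```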

So far with this system, I've liked the fact that there is not so much writing to do -- I am not making up separate assessments for each learning objective but one page of 3--5 problems that cover 4--8 learning objectives. Logistically it's a lot easier to distribute and collect. It's easier for students to keep track of what they've Passed or Not Passed. It's easier to grade, too, since I don't have to keep track of so much paper. And I've built in a reassessment portion of the grade where students will take shortened versions of the assessments again during the last week of classes.

So, this system has worked fine in some ways. It has been less work than either of the previous two approaches, since each assessment covers more material and therefore I don't have to look at each individual atomic objective and decide whether students have shown enough evidence of mastery. But after living with it for seven weeks, I don't like the results as much as I did in either of the other two iterations. And that is because:

  • Students can still Pass an assessment without showing evidence of having mastered all the objectives. With 4--8 problems on each assessment, the Pass threshold works out to something like 3 out of 4, or 6 out of 8, problems correct. And in some cases I am not OK with that. There are certain objectives I want all students to meet. In the previous two runs of specs grading these were spelled out (the "CORE-M" objectives from last year or the "Core problems" from the summer), but not here.
  • It looks and feels too much like a traditional test, and students treat it like one -- they panic, they get grade-conscious, and they don't take to heart my advice that they should only take those assessments for which they feel ready. I am trying to get away from traditional testing, and this seems like a step backwards.
  • The assessment grades don't say exactly what you do or do not know. Getting a Pass on Sets Level 1 means that you know stuff about basic set theory; I'd like something that's more descriptive.

In the next round of specs grading next semester, I am going to go back to more of the "old" way of doing things: more explicit standards, and more frequent but smaller assessments that check smaller clusters of objectives. For example, rather than having "Sets Level 1", I might have a smaller assessment on the objective Represent a set using both roster notation and set-builder notation, another one on Determine the elements in a union, intersection, Cartesian product, or set difference, and so on. These are not "atomic" goals -- each of them could be broken down further -- but they are small enough to be focused while big enough to contain more than just one idea. Because they are small and focused, I'll be able to have a fine-tuned sense of what students know and don't know; because each one assesses a medium-sized grouping of objectives, it should be less work than last year's class with three dozen objectives and multiple versions of assessments for each one.
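For readers outside discrete math, here is a quick Python illustration of what an objective like Determine the elements in a union, intersection, Cartesian product, or set difference is asking students to do. The particular sets are made-up examples, not from any actual assessment.

```python
# Made-up example sets illustrating the four set operations in the objective.
from itertools import product

A = {1, 2, 3}
B = {3, 4}

print(A | B)               # union: {1, 2, 3, 4}
print(A & B)               # intersection: {3}
print(A - B)               # set difference: {1, 2}
print(set(product(A, B)))  # Cartesian product: all 6 ordered pairs, e.g. (1, 3)
```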
