I need to collect and share some thoughts on my experience so far with standards-based grading in both introductory calculus-based physics and introductory astronomy. I don’t know that everything, or anything, here will make much sense, but it will help me sort some things out. In doing so, I may see things I’ve not yet noticed.
One decision I made last week was to discontinue using the word “mastery” as a description of what I want students to achieve. Many astronomical and physical concepts took thousands of years to accept. Can I realistically expect students to get to that same level of intellectual confidence in one semester? For some things, like straightforward numerical calculations, yes. For other things, like understanding the chain of evidence that led to ideas like the structure of our solar system, I don’t think so. I’m still struggling with this question. Nevertheless, beginning next semester I have decided to replace “mastery” with “proficiency” because I think that more accurately reflects what I want students to achieve, at least with the more sophisticated concepts and ideas.
I suppose the most challenging thing for me has been deciding whether or not the standards we’ve used are rigorous enough. Do they focus on the most important aspects of the course? Yes, I think so. Formulating standards really forced me to think about what I wanted students to take away from the course. I can honestly say that I’d been addressing this issue for years before I adopted SBG, so I feel I have a good handle on what I want the takeaways to be. That doesn’t mean they never change, though. I treat course planning as always being subject to modification, as long as that modification leads to improvements for students. Are there too many standards? Yes; at first I had far too many. As of this semester, I see that my standards focus on two basic things: things I call “skills” and things that require deeper thought. In physics, for example, calculating a particle’s momentum, energy, or angular momentum (relative to something else) is a skill. It’s an important skill, one that requires error-free proficiency, and certainly one that is essential for later big-picture issues (e.g. conservation of momentum), but does it warrant being an individual standard on its own? At first I decided that it did, but this semester I changed my mind. It is now part of a much more coarsely grained standard.
This brings up another question in my mind, namely whether or not there could be different sets of standards. I can envision having one set of “training” standards whose purpose is to ease students into this strange new way of assessment, with goals that are more easily attainable early in the course. These standards may address “skills” as described above and may serve as a gateway to future standards. Staying with the example of calculating a particle’s momentum, how many attempts are necessary to demonstrate proficiency? Is one successful attempt sufficient? Should I require, say, three consecutive successful attempts before awarding permanent proficiency? Should I mandate that after, say, two unsuccessful attempts on the same question for a given standard, a new question must be used? I rather like that idea because it prevents the gaming of one particular question. By the way, some students seemed surprised to discover that the questions I use to check for proficiency on a given standard will be different from one attempt to another. Somewhere along the line they were led to think that the same questions/assessments would be used over and over. I don’t know how or why that happened, but I need to make sure that doesn’t crop up again.
When students request individual assessments, am I treating all students the same in judging their proficiency? Am I subconsciously holding some to higher or lower criteria for some reason? This is always in the back of my mind.
Is it okay for some standards to be more difficult to achieve than others? I think it’s probably okay for standards to get progressively more challenging as a course unfolds. In fact, I’m almost convinced that this is a necessity. To make this work, I need to make sure the questions/assessments I use to judge proficiency grow more challenging during the course. I have a large collection (hundreds) of astronomy and critical thinking science questions that I developed while working on the LCTTA materials. I need to go through these questions and see how, if at all, they reflect the course standards. These questions were developed over several years before I moved to standards-based grading, so it is possible that some of them need to be modified.
Is it okay for students to collaborate during assessments? I conclude that it is. My main reason for disallowing collaboration would be to discourage cheating, but how would I recognize cheating? I think if two or more students turn in identical responses to an assessment, and I mean identical down to the letter, then I am conditioned to suspect cheating. One way I guard against this is to immediately turn the assessment into an oral one. I begin asking each person with the same response different targeted questions, giving them an opportunity to demonstrate that they didn’t just copy someone else’s thoughts in writing. This has led me to see the power of oral assessments in general. I don’t know that requiring some assessments to be oral is practical, but I would like to experiment more with this idea because I suspect the confidence with which one explains reasoning is directly proportional to one’s understanding of that reasoning, but I may be wrong.

As I write this in my office, I have one student outside in the hallway attempting an assessment. That student is accessing Google, hoping to find something useful. I’ve tried all semester to convince my astronomy students that Google isn’t of much use to them in this course because we don’t focus on “Googleable” (is that a word?) things. I provide them with numerous activities with the expectation that they complete them outside of class. Many, okay most, of them do not, and do not realize until the semester’s end nears that I wasn’t kidding; to my knowledge, not a single student has noticed that some of the assessment questions appeared in those activities. I don’t know how much nagging should be necessary at this level (freshman/sophomore undergraduate) to get this point across. Some colleagues tell me that because this is a community college I should nag more than I would need to at a four-year college, but I don’t accept this.
Our transfer courses are supposed to hold our students to the same expectations they would face at a four-year school. At what point do I say that this is not my problem as an instructor, but a problem of time management and organization for the students? I don’t have a good answer to that question.
What if, in the course of assessment, a student displays deficient writing skills? What should I do? Should I not award proficiency for this reason alone? Should I overlook atrocious grammar? Should I overlook consistent misspelling of common words (e.g. “celestrial” instead of “celestial”)? Should I overlook the inability to write a coherent, complete sentence? My response thus far has been to try to correct these problems within the context of my own courses. I tell students that it’s not enough to understand something; it’s also necessary to effectively and correctly communicate understanding, both orally and in writing. Word choice matters. Clarity matters.
As an aside, while I’ve been writing this post off and on over the past three hours I’ve been working with one student in particular who cannot read or write at a college level. Three hours! Is that justified? In these days of slashed budgets and loss of positions, I’m told that such efforts are essential. I don’t know…would YOU spend three hours with a student who clearly isn’t prepared to be in your environment? It’s not that I mind doing it, because doing so does allow for some actual learning to take place, but I obviously cannot do this with every single student.
Unfortunately, I must also address certain questions that are taboo in my environment. I’m not allowed to ask how students who can’t read and write at a high school level get into college courses. I’m not allowed to ask about the negative impact of a 20+ contact hour teaching load on my ability to spend as much time on assessment as I probably should, given the amount of paperwork it can generate. I’m not allowed to cite students’ refusal to do work outside of class as having anything to do with my performance. In fact, there seems to be an asymmetry: students can cite my deficiencies as a reason for their underperformance, but I’m not allowed to cite their deficiencies as a reason for my (lack of) effectiveness or for their inability to succeed. I’m supposed to pretend that students who can’t read or write at grade level can be successful in my environment. I’m not sure how to deal with these issues, so I’ll just keep pretending they don’t exist while knowing that they really do.
Another thing I have to consider is my grade distribution. My classes are capped at twenty-four, and while it’s rare for the calculus-based physics course to fill, it’s not uncommon for two out of three sections of first-semester astronomy to fill in the fall. I come under scrutiny if too many students earn an A and if too few do. My chair seems to think that an A should be difficult to obtain yet also thinks that physics “is just F=ma,” so I’m not sure what to make of this. I feel that while some standards should probably be more challenging to tackle, proficiency should be attainable. I just can’t accept setting standards for which students have no hope of demonstrating proficiency. On the other hand, am I obligated to take into account things like the insufficient reading or writing proficiency that seems present in many of our students? I don’t know.
On the first day of class, I show students my definitions of teaching, learning, and taking a class. Am I succeeding in adhering to these definitions? Am I right to do so?
I’ve raised a lot of questions. Some are relatively straightforward to answer, but others will take time to figure out.
This question may very well be beyond the scope of a traditional introductory calculus-based physics course, but given the recent trend toward early introduction of computational physics with curricula like Matter & Interactions, it may be within the scope of a reformed course.
In classical physics, finding a particle’s trajectory under the influence of a force requires solving a second-order differential equation relating the particle’s acceleration to the force it experiences. Explain how using the particle’s momentum, rather than acceleration, changes the process of solving for the particle’s trajectory. Be sure to give at least one advantage and at least one disadvantage of using momentum.
The question may not be clearly worded so I invite feedback. Go!
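To make the contrast concrete, here is a minimal numerical sketch of the momentum-based approach in the spirit of the Matter & Interactions update form (momentum update p → p + F Δt, then position update r → r + (p/m) Δt). The scenario, a simple projectile, along with the mass, time step, and initial conditions, are all illustrative assumptions on my part, not part of the question itself.

```python
# Sketch: integrating a particle's motion via the momentum principle
# (dp/dt = F) instead of explicitly forming a = F/m.
# All numbers here are arbitrary, chosen only for illustration.

m = 0.5              # mass (kg)
g = 9.8              # gravitational field strength (N/kg)
dt = 0.01            # time step (s)

r = [0.0, 10.0]      # position (m): launched horizontally from height 10 m
p = [m * 3.0, 0.0]   # momentum (kg·m/s): initial horizontal speed 3 m/s

t = 0.0
while r[1] > 0.0:
    F = [0.0, -m * g]                               # net force: gravity only
    p = [p[i] + F[i] * dt for i in range(2)]        # momentum update: p += F dt
    r = [r[i] + (p[i] / m) * dt for i in range(2)]  # position update: r += (p/m) dt
    t += dt
```

One advantage suggested by this form is that the update dp/dt = F survives unchanged in special relativity, whereas a = F/m does not; one disadvantage is that the velocity needed for the position update must be recovered from p at every step.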
This post is long overdue, and I’m sorry for that. Life and work got busy late in the semester. I have so many ideas for more posts in this thread that I doubt I’ll ever be able to write them all up, but I will try.
In this post, I focus on an interesting idea for which I found inspiration in a book called The Shaggy Steed of Physics by David Oliver, a book I consider a little gem. Here is the question.
In introductory physics, a particle’s position and velocity are usually the first two vector quantities encountered. Using appropriate symbols for both, find as many unique combinations of position and velocity as you can, and comment on the physical significance of each combination. Your commentary should consist of at least one complete sentence and at most one paragraph for each combination you find.
This can be taken as an open ended question or a specific number of quantities can be required. I choose the former as I think it would foster deeper thinking. What do you think?
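For anyone who wants to see what some of these combinations look like numerically, here is a minimal sketch; the particular vectors are arbitrary illustrative values, and the list of combinations is by no means exhaustive.

```python
# A few combinations of position r and velocity v a student might list,
# evaluated for one purely illustrative state of motion.
import math

r = (3.0, 0.0, 0.0)   # position (m)
v = (0.0, 4.0, 0.0)   # velocity (m/s)

# r·v — proportional to the rate of change of |r|^2; it vanishes when the
# distance from the origin is momentarily constant, as in circular motion.
dot = sum(ri * vi for ri, vi in zip(r, v))

# r×v — the angular momentum per unit mass (specific angular momentum).
cross = (r[1] * v[2] - r[2] * v[1],
         r[2] * v[0] - r[0] * v[2],
         r[0] * v[1] - r[1] * v[0])

speed = math.sqrt(sum(vi ** 2 for vi in v))   # |v|, the speed
dist = math.sqrt(sum(ri ** 2 for ri in r))    # |r|, distance from the origin
```

For the state above, r·v = 0 (the motion is momentarily perpendicular to the position vector), and r×v points out of the plane of r and v, exactly the kind of physical commentary the question asks students to supply.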