Higher Education
A short video clip on why assessments should prioritize student learning improvement over routine reporting
Hear how data and impact should be at the center of every assessment practice to enhance instruction and learning experiences.
Catherine Wehlburg:
We should not be assessing because we have to turn a report in to our assessment director or our vice president or our accreditor or whatever; we should be doing assessment because we are improving student learning.
Now if we do that, we can turn that report in and it will essentially write itself at that point, because we’ve got the data, we’ve got the results, we’ve got all of those kinds of things. But I certainly have seen, and I know all of you have seen, instances where an assessment report is being done because it has to be. And there’s really no looking at whether it’s actually having an impact. So if the report is due on this date, I’m just going to take last year’s, change some dates, and put it in, because now I’ve checked it off the list.
And so we really have to look at: are we assessing because it’s easy? And the answer is no. We shouldn’t be just using something because it’s easy. We need to make sure: is this giving us what we want? And there’s a question that Yvonne has posed, that there is confusion around the word assessment. And I think that is absolutely true. I joke sometimes that assessment is one of the longest four-letter words out there, because people don’t like hearing about it. They often don’t like talking about it, and it means something very different to different people depending on context. So when we think about assessment, are we talking about assessing the students so that we’re giving the student feedback? Sometimes, yes, that’s what we mean by that. Or are we talking about it as more program-level assessment, where we’re gathering information from the students to decide whether what we are providing is working or not? And that’s more of that student learning outcomes or program-level assessment.
And then I think sometimes we also will talk about assessment as a diagnostic tool: assessing whether or not someone has a learning difference or a medical disorder, something you go in and get assessed for to see where things stand. So I think there’s a lot of confusion around the word assessment. And I will say that I often will conflate some of these and add to the confusion. It’s certainly not intentional, but I think that we need to ask the question of what the purpose of assessment is. Is it that we are giving feedback to students, in which case that’s the focus? Or are we gathering information from students to see, across students, what in general is being learned? And then that becomes more of that program-level assessment.
So I think that’s a great question to ask, and one of the things that we need to keep looking at as we think about better designing our assessment strategies, because we’ve got to know what the purpose is so that we can know how we want to design what it is we’re doing. So what is the learning goal? What do we want students to be able to do as a result of our teaching, or our presentation of curriculum, or any of that? So what’s the point? What’s the learning goal?
And then, how are we measuring it, and does it make sense? Which, again, is that validity question. So if I’m measuring critical thinking and I’m doing it by a multiple-choice test, that should make some people go: maybe that’s not the best way to measure critical thinking. And I’ve looked at a lot of program assessment reports that say, oh, critical thinking is very important to us. And then you look and see how they’re measuring it, and they’re not really measuring critical thinking; they’re measuring something else, because they happen to have that test in place.
So we’ve got to ask: what’s the learning goal? How are we measuring it, and does that make sense? And if so, how are we going to use the results? If all we’re going to do is put them in a report and not really use them, then we’re not using our data. So we need to really think about: this is helpful, we need to change something; or this is helpful because it’s working and we don’t need to change something. But we need to be open to having that discussion and that conversation, because depending on the answers, that may change the way you assess the next time. You may decide, this is a terrible tool to use. It didn’t get at what we wanted at all. And that’s a great finding, because that gives you a focus on what needs to change.