Higher Education
A webinar featuring Catherine Wehlburg, Ph.D., Interim President at Athens State University
View this insightful discussion with Catherine Wehlburg, Ph.D., Interim President at Athens State University, on designing assessment strategies that foster authentic learning experiences and prepare graduates for success in the evolving economy.
Jenny Gordon:
Let’s begin our session today. Hello everybody and welcome to today’s webinar. We are really thrilled that you’ve joined us and we hope that you enjoy our discussion and that you’re able to walk away with insightful suggestions for authentic assessment and work readiness. And we may even touch on AI at some point through the discussion today. Of course, we will.
We hope that you’ll be further inspired to make a positive impact on your students’ careers, and we love it when our guests and colleagues are able to take tangible things away from these webinars. I’m sure that with our guest today, you’ll be able to do just that.
My name’s Jenny Gordon. I have the very lucky job of introducing our guest today. I’m the vice president of international markets here at GoReact, and for those of you who might not be familiar with GoReact, we are a skills mastery platform powered by video assessment and feedback. GoReact is used in further and higher education, on campus and in remote and hybrid situations, all around the world. We support remote training and coaching organizations, and we work with schools, educators and corporate partners in the US and around the globe.
I am really happy to be joined today by a very special guest and today’s presenter, Dr. Catherine Wehlburg. Hi Catherine.
Catherine Wehlburg:
Hello everybody. Good morning and good afternoon. Great to be here.
Jenny Gordon:
Thank you, Catherine. You currently serve as interim president at Athens State University, and prior to your current role, you were the provost and vice president for academic affairs. Over more than 30 years in higher education, you have served in several different roles: tenured full professor, department chair, associate dean, dean, associate provost and provost. And in each of these roles, you’ve been focused on the assessment of learning, enhancing student learning and strategic decision making.
You have published many articles, books and chapters and have always had a focus on educating the whole student as an essential role of higher education. You’ve been recognized for your work in student learning outcomes and assessment by being elected president of the Association for the Assessment of Learning in Higher Education and receiving its outstanding achievement award. You are the editor in chief of New Directions for Teaching and Learning and have also served on several editorial boards for higher education journals. You received your PhD from the University of Florida in educational psychology. Wow, how lucky we are today to welcome such an esteemed guest. Can’t wait to hear from you.
Catherine Wehlburg:
Well, thank you. You’re very kind and it just means I’m old and I’ve been doing this a long time. So that’s really the takeaway from that. But thank you very much.
Jenny Gordon:
Well, the wealth of experience that you bring is what we’re all here to listen to, and I can’t wait to hand over to you, Catherine, and to have our discussion. But before I do that, just a couple of housekeeping points that my team makes sure I let everybody know about.
Today’s event will last about 45 minutes, which will include around half an hour of presentation and discussion, and then there’ll be time for questions and answers at the end. But if you would like to ask any questions throughout, you can use the Q&A section of Zoom and leave your questions there, and we will try to answer them as we’re having our discussion.
We’re also recording the presentation today, so if you have to hop off before the end, it’s not a problem. We’ll be sharing the recording with you all and for anybody else who couldn’t join. And let’s make it as fun and as interactive as we can.
So without further ado, I will hand over to Catherine to begin today’s presentation. Welcome, Catherine.
Catherine Wehlburg:
Well, thank you. Thank you very much. Talking about how we assess learning and why we assess learning is something I enjoy doing all the time, because I think that in higher education it is so easy to get focused on what we’re doing, which takes our focus away from whether what we’re doing is actually working. Assessment gives us the opportunity to really focus on why we’re doing what we’re doing and the impact it is having. And so I’m so excited to talk about this idea of how we are going to adapt assessments in an ever-changing higher education landscape. On this slide you can see my email address, and I would love for any of you to reach out to me if you’ve got other comments or questions after this is completed, because I think this can be just the start of a conversation.
So I am so looking forward to that.
Jenny Gordon:
Thank you very much.
Catherine Wehlburg:
One of the first things I wanted to start with, and with this comic up here on the slide, is that it used to be that in higher education, many of us and many of my colleagues over the years kind of wanted accreditation bodies and governmental agencies to just leave us alone. We’re doing a good job, we know what we’re doing, we know better than you about what we’re doing, so go away and leave us alone so we can do the job that we’re doing. And unfortunately, or maybe fortunately, that’s simply not the case anymore. It is not enough to just say lots of people think we’re doing a good job. We actually have to show and demonstrate what we’re doing to our students; our students need to show that to us; and we need to show it to potential students, to our communities, to our accreditors and to our governmental agencies. All of that needs to be done, and by using data that we can gather through assessment, we are much better able to do it.
This is a quote from a colleague, Ted Marchese. He and I worked together for about a year when I was with the American Association for Higher Education (AAHE), and he would say, “Assessment per se guarantees nothing by way of improvement, no more than a thermometer cures a fever.” I think that encapsulates some of what happens when we talk about assessment and how we are assessing things.
So let’s use that thermometer idea. I have a daughter; she is now 24 years old, healthy and happy and in graduate school working on her master’s, and all is well. But when she was a baby, like most infants, she would occasionally run a fever. And I’ll just start by saying I am a good mom, so note that. When she would have a fever, what would I do? I would do the mom hand on the forehead and know that she was running a little bit of a temperature. And if she felt warm, I would pull out a thermometer and check her temperature.
If she had a fever, I would do something with the data I got from the thermometer. I wouldn’t just go, “Huh, 104, oh, well that’s interesting.” Obviously I would do something quickly if it was 104. I would give her appropriate medication or give her a cool bath, but I would do something with the data that I got, and then I would test again. I would take her temperature again and see if it was better. If it was better, I would keep doing what I was doing until we were back to where it needed to be.
If it wasn’t, I would change and do something different, which might eventually mean calling an ambulance or taking her to a hospital if it got to that point. The idea is that gathering data in and of itself isn’t enough. We have to actually act on the data that we receive. And I think that’s one of the reasons I have always paid attention to this particular statement of Ted’s, because it really puts things into perspective about why it’s important to gather data on learning and use it to do something. Sometimes that’s giving feedback to students. Sometimes that’s giving feedback to ourselves about what we need to change in how we’re teaching. And sometimes it’s having larger conversations across the program or across the institution. But getting data and having data is not enough; we have to actually be able to do something with that data.
Jenny Gordon:
And I guess that the way in which we can get that data now is changing and it’s evolving and that’s certainly part of this conversation today. But I also think it’s one of the challenges that many educators have because there’s so many different points at which you can get data or ask for data. And then I know that we’ll hopefully move on to the validity of that data as well when it comes to the use of technology and potentially AI.
Catherine Wehlburg:
Absolutely.
Jenny Gordon:
You’re going to introduce your approach to and thinking on assessment today, in an ever-changing landscape, as you’ve already pointed out. But for you, somebody so esteemed and experienced in the industry, what would you say are the things you’re hearing about most when it comes to assessment strategy? What do you think are the current global trends driving many of us to reconsider our approach to assessment strategy?
Catherine Wehlburg:
Well, I think that it is changing, and we have to be able to be flexible. The things that we might have used before don’t necessarily still work in the ways that they did. We are presenting a lot of our content remotely now. We used to present information to students before class almost always through a textbook; now we sometimes flip that classroom and provide information through some online mechanism, whether that’s a video or other kinds of content. And so we are using a lot of different tools that can help us.
Now, every tool can be misused. And so one of the things we have to think about is: what is this tool for? Are we using it appropriately, and how might it be misused? I think that gets to some of the questions about validity that you were talking about, that any tool can be misused. There are many examples in higher education where we have taken some kind of test, for example, or some type of survey that was designed for one purpose, used it for something completely different, and assumed, incorrectly, that that’s fine to do. We have to be very careful about that. And so I think one of the things we can really do is ask certain questions. What is the purpose of assessment? What are we trying to do with this?
And so I put up a few of my answers to this question of what’s the purpose of assessment? And much of that is very much specific to a learning situation. And again, whether that’s a course or class period, an exam, an overall course or an overall program, that might change some of these questions a little bit. But in general, the purpose of assessment for me is improving learning by giving feedback and that feedback can impact the student.
So a student writes a paper and we give feedback on it. The whole point of giving that feedback, or at least when I do it, is so the student can revise and improve, and that improves learning. So that kind of formative assessment process happens regularly and on an ongoing basis.
I also think that part of the purpose of assessment is finding out what works and what doesn’t. So if I’m teaching something or if I’m presenting a class and I find out that students aren’t learning anything from that, what I did didn’t work. So that means I need to change what didn’t work because why would we want to waste time and money on things that don’t work? So we need to find out what works and keep doing it or what doesn’t work so we can change it.
So assessment can also impact the pedagogy that we are using in a class or in a program or overall. And so that’s something that we can also see as a purpose of assessment. And then of course, looking at a course or a program level assessment is going to impact the curriculum. We may find that… Well, to give you an example, at a previous institution where I was, we had an assessment plan that was for an economics department. And one of the things that the faculty found out when they would look at the senior projects and senior presentations of the students is that they didn’t really have enough knowledge of methodology to create the type of capstone that the faculty were expecting. And so what they did is they went back and they looked at the curriculum and they moved the methodology class to earlier in the sequence of courses that students were taking and added parts to several courses along the way so students were kind of reinforcing the knowledge they had learned in that methodology course.
And then after a couple of years, because it took a while for the students to go through that process, they were able to look again and see that the student projects were much better because they had now more understanding of that methodology piece. So assessment can help us to impact the curriculum that we are designing for students.
So a lot of the things that we need to think about go into assessment and asking what is the purpose of this particular activity or this particular plan becomes really important.
Jenny Gordon:
Absolutely. And I think that leads us on to thinking about the ways that we can then design our assessments based on the purpose and what it is we’re looking to get out of it. But here’s a challenging question for you, perhaps not for you, but generally for the industry, do you think we accept ways of assessing students because they’re easy to assess using that particular methodology and they’re easy to mark? Are there often better ways to assess students that might be more time-consuming and take more marking time or feedback time and therefore we tend not to use them because we’re already resource poor, we’re already challenged to do everything we need to do as educators?
So I wonder when we think about the ideal design, how much do we have to bring our minds back to always thinking about the best solution, not necessarily the easiest solution to get to that best design as well?
Catherine Wehlburg:
Right. And I think you’ve identified a really important point: we should not, as many do, be assessing because we have to turn a report in to our assessment director or our vice president or our accreditor or whatever. We should be doing assessment because we are improving student learning.
Now, if we do that, we can turn that report in and it will essentially write itself at that point, because we’ve got the data, we’ve got the results, we’ve got all of those kinds of things. But I certainly have seen, and I know all of you have seen, instances where an assessment report is being done because it has to be, and there is really no looking at whether it is actually having an impact. The report is due on this date, so I’m just going to take last year’s, change some dates and put it in, because now I’ve checked it off the list.
And so we really have to ask: are we assessing because it’s easy? And the answer is no. We shouldn’t be using something just because it’s easy; we need to make sure it’s giving us what we want. And there’s a question that Yvonne has posed about confusion around the word assessment, and I think that is absolutely true. I joke sometimes that assessment is one of the longest four-letter words out there, because people don’t like hearing about it, they often don’t like talking about it, and it means something very different to different people depending on context. So when we think about assessment, are we talking about assessing the students so that we’re giving the student feedback? Sometimes, yes, that’s what we mean. Or are we talking about it as more of a program level assessment, where we’re gathering information from the students to decide whether what we are providing is working or not? And that’s more of that student learning outcomes or program level assessment.
And then I think sometimes we also talk about assessment as a diagnostic tool, as when you’re assessing whether or not someone has a learning difference or a medical disorder: you go in and get assessed to see where things stand. So I think there’s a lot of confusion around the word assessment. And I will say that I often conflate some of these and add to the confusion, certainly not intentionally. But I think we need to ask that question of what the purpose of assessment is. Is it that we are giving feedback to students, in which case that’s the focus? Or are we gathering information from students to see, across students, what in general is being learned? And then that becomes more of that program level assessment.
So I think that’s a great question to ask, and one of the things we need to keep looking at as we think about better designing our assessment strategies, because we’ve got to know what the purpose is so that we can know how we want to design what it is we’re doing. So what is the learning goal? What do we want students to be able to do as a result of our teaching or our presentation of the curriculum? What’s the point? What’s the learning goal?
And then how are we measuring it and does it make sense? Which again is that validity question? So if I’m measuring critical thinking and I’m doing it by a multiple choice test, that should make some people go, maybe that’s not the best way to measure critical thinking. And I’ve looked at a lot of program assessment reports that say, oh, critical thinking is very important to us. And then you look and see how they’re measuring it and they’re not really measuring critical thinking, they’re measuring something else because they happen to have that test in place.
So we’ve got to ask, what’s the learning goal? How are we measuring it and does that make sense? And if so, how are we going to use the results? If all we’re going to do is put them in a report and not really use them, then we’re not using our data. So we need to really think about, this is helpful, we need to change something, or this is helpful because it’s working and we don’t need to change something, but we need to be open to having that discussion and that conversation because depending on the answers, that may change the way you assess the next time. You may decide, this is a terrible tool to use. It didn’t get it at all what we wanted. And that’s a great finding because that gives you a focus on what needs to change.
Jenny Gordon:
Absolutely. I think these are great tips, and I’m sure they’re already at the fingertips of our colleagues, but isn’t it lovely to come back sometimes and remember the basics in a world where we are being thrown different questions and challenges all the time. Not least, of course, we have to mention AI, because it’s not just hitting the assessment side of things. It’s hitting all aspects of teaching and learning, all aspects of home life, and everything else. What is your current… Well, I should ask, how are you seeing AI impact assessment so far, Catherine?
Catherine Wehlburg:
Well, I think that we’ve been using AI a lot longer than people have been calling it AI, really. And so it can absolutely be a tool. I know some of my faculty here at Athens State are using AI as part of the assignment, instructing the students to use it as a tool to go out and develop a report or a paper or a presentation with AI as a piece of that. And so AI then becomes a tool.
So if we say to them, go use ChatGPT or some other tool and see what comes of it, how can we then use that as part of the instruction?
Now, there’s pushback on that. Some faculty just don’t want to use it at all; they want to make sure that the student work is the student’s own work. And I think it’s very important that, no matter what, the student work should be the student’s own work. But I think that we can use AI as a tool as part of our assignments. So we’re already using it.
Sometimes we can use AI in grading student work, where we have rubrics or other evaluation tools that we may have created and AI can apply them for us. I think that helps us do our work faster, if it’s an appropriate rubric with appropriate elements. So I think that can be something that is very useful. We also use-
Jenny Gordon:
We talked a little bit… Sorry to interrupt.
Catherine Wehlburg:
Yeah, no.
Jenny Gordon:
We talked about that briefly the other day: in a world where we’re teacher short, if you like, educator short, experience short for the skills of today, let alone the skills of tomorrow that we need to train and educate for, we want to allow our experts to really focus on the quality of their teaching and the quality of their feedback to develop students. If that is the goal, then how can we use AI to support that initial feedback process? Why shouldn’t students be able to get some initial feedback on their presentation or their submission, whatever it might be, and get an indication of how they’re doing ahead of coming to you for that really high quality feedback that then helps them develop?
And I mean, we talked about this as well. I’m a mum like you, hopefully a good one like you, and my kids are all at different stages of doing different things. One question my daughter had the other day: she’d been given a science fair project, and she knew what she wanted to do, but she couldn’t work out how to ask the right question. So she used AI, through ChatGPT, to help her pose the right question. And in that entire process, she independently developed how she wanted to do something before coming to me to ask for help on how to actually do the experiment. So I feel strongly that there’s a role for it, but it’s not taking the place of our educators and experts; it’s actually enabling them to give the focus and quality that they already want to, but are perhaps stretched to deliver.
Catherine Wehlburg:
And that augmentation that AI can provide is, I think, why we need to think very carefully about how we want to use it. I mean, that ship has sailed. We are using AI, and we’re using it in lots of ways. So how do we want to use it, and how do we want to make sure that it’s being used in ways that make sense educationally and are in line with what we are teaching? I think that’s the question. How your daughter used it, I think, is a wonderful example. I know many faculty have said to students, “Well, run it through spell check and Grammarly before you submit it so that I don’t have to look at any grammar or spelling errors. Fix that yourself.” Well, that’s AI.
And so we are already using a lot of AI tools, and now we have more options, and that certainly helps us. It helps us with computerized testing, with pre-tests and post-tests. One of the things I used to do in some of my classes was to ask, or require, students to take a pretest that they had to pass before they came to class, because I wanted them to do the reading. I had them do that online; it was multiple choice, and they could take it as many times as they wanted. The point was that they would learn something before they came to class. That was using computerized testing, and it helped me focus on more of the details when they did come to class, because we could do more of that.
Jenny Gordon:
Have you talked to your team about setting policies for the use of AI at your university? How contentious has that been so far? Are you there with a policy? Where are you on that journey?
Catherine Wehlburg:
So I’ve seen several at other institutions, and they’ve ranged from, no, you can’t use it at all, which is maybe a little bit, I don’t know, overly dramatic, because we already are using it in a lot of ways, and measuring whether or not students use it can be difficult. Actually, on my campus we just had a faculty-led discussion about the brave new world of AI. I had a religion faculty member who talked about the ethics of it, computer science faculty talking about the mechanisms and all of that, and other faculty chiming in with how they might use it and how they would not want to use it, kind of as a precursor to developing some kind of policy that would sit under our student honor code, alongside how we deal with plagiarism.
And I think that on any campus, AI is changing practically daily, and any policy we have is going to have to be revised as we move forward. I see a question came in on the Q&A: “Can I say more about using AI to apply rubrics?” I think there are a lot of ways to do that, and there are software examples where we can develop a rubric and have that rubric scored by AI, which can then provide us feedback and the ability to have more discussion on that. But I know that’s some of what you all do on a pretty regular basis.
Jenny Gordon:
Certainly. From a GoReact point of view, we’re working with, very closely actually, hand in hand with our partners at universities and colleges to understand what they want from AI so that it’s not challenging to these policies that we’re talking about, so that it does enhance all of the experience that you are talking about, so that it adds value to that learning process as well. And the slow adoption of it in some product sets allows that kind of change management process to take place.
And I think that different policies, different organizations, will have different approaches to it. From a software perspective, or a learning technology perspective, it’s very important that we listen to that feedback about what people want to do. But certainly, Valerie, to your question, it would be very interesting for us to ask anybody else on the meeting today whether they’re using AI to apply rubrics and what sort of solutions they’ve found. We’ll surely answer those as we go.
Catherine Wehlburg:
And I think it provides us with the ability to have a tool. Back in the day before we had much AI at all used in the classroom, we’d hire graduate students to do this kind of work. And so now we can actually have our grad students and some of our student assistants do other things so that we can use AI to help with that.
Jenny Gordon:
Absolutely.
Catherine Wehlburg:
And I think that also is helpful with adaptive learning, so we can create processes, using what we used to call machine learning, where a student answers some questions, and if they get one wrong, the system has them review more of that material before they’re allowed to move on to the next one. With some content, we can use that adaptive learning, which is very helpful.
And then we’re also using AI for predictive analytics. We’re able to identify at-risk students early on, based on information that they come into the university with, but also on things that they are doing now. We’re crunching a whole bunch of big data sets, which AI can do very, very easily once we set the system up, and that can help us identify at-risk students so that we can give them the support they need earlier rather than later.
Jenny Gordon:
And that’s a really good example of how powerful it can be for the improvement of students ultimately and their experiences. Thank you. We could talk about AI all day. Let’s look a little bit around skills and work readiness. And there’s a widespread movement at the minute to have much more focus on skills and making sure that students have had the opportunity to learn and hone their skills and their practical abilities alongside their knowledge ahead of going out into the world of work. And from your perspective as a university leader, are you leading any changes at Athens State when it comes to this trend of re-looking at skills?
Catherine Wehlburg:
Absolutely. And some of that is coming because much of our funding comes from our state, and our state is looking at job skills and wanting us to ensure that our students, when they leave, are ready to jump into a job or a career. There is certainly a balance here, and I personally struggle with this on a regular basis: what’s the purpose of college? Is it learning, or is it to get a job? And the answer is of course, “Yes.” So we’ve got to figure out how to balance the focus on skills and work readiness with bigger picture things like critical thinking and problem solving, which are absolutely part of work readiness and skills but are not usually thought of in that way. Often when we think of skills, we think of very small pieces of behavior that someone has learned how to do, and if you learn enough skills, you can do more with them, and they build into a larger base of behaviors.
And I know as a faculty member, I have always been a little bit leery of this focus on skills because I like to think that I’m teaching learning and I’m teaching understanding, and I’m teaching those kinds of things that are not necessarily easily measurable behaviors, but at the same time, I can only see that learning has happened through a behavior or a skill. I mean, I could ask students, “Have you learned things?” And they say, “Yes.” And I say, “Oh, well, then you get an A.” I mean no one would do that because we want to see that that learning is happening. And so we want to see it in a behavior, whether that’s answering a question on a test or completing a computer program that fits the specs of the industry that’s asking for that coding or whatever. But we want to see that behavior. So finding that balance of making sure that the skills and behaviors we’re looking for are part of our larger, sometimes not easily measured bigger picture kinds of things like critical thinking, problem solving-
Jenny Gordon:
Those sort of transferable durable skills. Yes.
Catherine Wehlburg:
Exactly. And so that means we need to really think about what are we defining as these behaviors so they can be measured and are they actually aligned with our larger mission and purpose as an institution? And that can be done, but it’s not necessarily easy to do.
Jenny Gordon:
No. No. And I think that this comes down to the big question around how you assess skills as well as knowledge, whether they be durable, transferable skills that we’ve talked about or discipline specific capabilities that one needs to demonstrate ahead of graduation and then be proficient in the moment they graduate and walk into their first job. We, as an organization, talk a lot about the role of video in that process and the role of a student or a learner being able to capture something and having that assessed either by themselves in the self-reflective process, by a peer and by others, and to really be able to demonstrate something alongside their understanding and their knowledge.
But there are many other ways of doing it. We talked the other day about the rise again of apprenticeships and the alternative routes to the workplace, and about getting that great connection with experienced professionals who can teach on the job and assess as you go. So there are lots of different ways that skill assessment can happen. Are there any specific things you could think of that you’re doing at Athens State when it comes to the assessment of skills?
Catherine Wehlburg:
So there are things we do at the end point, looking at what students have when they are graduating from us. But I also want to put in a plug for prior learning assessment and how that really looks at the skills that students are bringing in with them and whether or not those align with any of our courses.
We have done a lot with using prior learning assessments to give appropriate credit to students so that they… We have a lot of students coming into Athens State who are adult learners who may have certifications or have gone through other kinds of work experiences that have given them a lot of knowledge. So if we can translate that, crosswalk that into academic credit, that’s going to get them closer to completing their degree in meaningful ways. And part of the way to do that is to look at the skills that the student is showing.
Jenny Gordon:
Thank you. I just want to remind colleagues that are listening, if you have any questions, please do pop them in the Q&A. I would invite Catherine to, I guess, summarize what we’ve really been talking about today: the future of assessment. And as you’re doing that, perhaps we might get a couple more questions before we finish today.
Catherine Wehlburg:
Sure. Yeah. So I think the future of assessment is what the past of assessment has been, which is focused on learning, even though we may do it in a different way. But that means we need to set aspirational learning goals and outcomes, and we need to do that collegially so that faculty and staff and students are all part of that conversation.
We also need to remember that assessment is a process. It is not a report. And this kind of goes back to Yvonne’s question earlier about the confusion around the word. People talk about, “I’m done with assessment,” meaning they’ve done the report, but we’re never done with assessment. That’s good and bad, I guess. But it is a process, and it’s something that takes a while to do. And if we use it right, we can use assessment to guide what happens and the directions that we’re going in, because it will help us know what’s working, so we can keep that, and what’s not working, so we can change that.
And that means that we have to be open to this idea of failure. And what I mean by that is not meeting goals or outcomes, because when we ask an assessment question, “Has a student learned this?” and the answer is no, that is a very valuable answer because it means we have to do something different. That means we have to rethink how we’re teaching it, or we need to provide additional training or knowledge or skills or resources.
So when I look at assessment reports that have been done across a program or an institution and I see that, oh, everything’s fine, everybody’s meeting every outcome, then I don’t think that’s an aspirational enough goal because no institution or program is perfect. And we need to be able to be open to the idea of asking difficult questions about our students’ learning so that we know where we want to put our limited resources into improving or changing or modifying, whether that’s curriculum or the need for a new lab or a need for a new faculty position or whatever, we should be able to look at learning outcomes data to help guide those decisions.
So that means assessment is about learning, but it’s also about transformation. And so assessing learning should be transformative. And I know that may sound really way out there, because we don’t often think about assessment as being a transformative process, but it really can be. If we ask the right questions and make it a cyclical process that is ongoing and [inaudible 00:42:55] dialogue that we have on campus about student learning, then it can be transformative, because it’s something that really is going to be able to make a difference in how we operate as universities and how we contribute to social change and economic mobility.
So it is about transformation, and when you look at the root of the word assessment, it actually means to sit beside. And I think that’s something for us to keep in mind: that when we are assessing, whether that’s a student or a program or an institution, we’re using that as a way to sit beside the data we’re collecting and think about it and talk about it and determine what the next best step is in that overall process.
And so from that perspective, assessment is transformational. It is some of the most important work that I think we can do on any campus. And I think everyone is involved in assessment, not just people who have that in their titles, but we are all involved in improving and enhancing student learning. And we can do this by taking learning data and sitting beside it and looking at it and talking about it and letting it help to inform the next and the next steps that we’re taking.
Jenny Gordon:
I couldn’t agree more, and I’d not heard that that was the root of the word, so I’m always learning and I’m thrilled to do so. And we are so grateful for you joining us today. And as one of our guests pointed out, what an informative session we’ve just had with you. So thank you, Catherine, so much. I don’t believe we have any other questions at the minute. Just to answer that previous question: yes, this will be recorded, and yes, you will receive a copy of the recording, and we will also include Catherine’s email address again that she left on her slides, just in case you want to contact her as well.
So thank you again, Catherine, so much for joining us, and thank you to all of our attendees as well who’ve joined us from all over the world. I hope you have a great rest of the day. It’s been lovely to have your questions.
If you want to continue the discussion about assessment and AI, we have our ReAction conference coming up in just a few weeks. It’s our biggest conference of the year. The link to that is in the chat right now. It’s a free conference, and it is full of excellent content by world-leading educators, letting us know their views and opinions on the state of affairs when it comes to skill development, assessment, and all those good things. I hope you’ll be able to join us for that. Like I said, the link is in the chat. Catherine, thank you again. It’s been such a pleasure, and I hope to see you really soon.
Catherine Wehlburg:
Thank you so much, and thanks for inviting me.
Jenny Gordon:
Pleasure.