Higher Education
See how using the EAT (Equity, Agency, and Transparency) Framework can help you work with students to make assessments more student-centered and beneficial to learning.
About the Session
In the journey to becoming an independent lifelong learner, the development of self-regulated learning skills is fundamental. Assessment, and the feedback associated with it, has a powerful role to play in this process. However, this requires us to revise our approaches: from designing assessments ‘of’ learning to assessments ‘for’, or ‘as’, learning. By embedding assessment as a core part of the learning process, we can guide students towards the self-critical skills they need to be independent learners. The ‘EAT’ (‘Equity, Agency and Transparency’) Framework is a research-based framework that summarises the characteristics of an effective assessment. We can use the EAT Framework to help us work in partnership with students to enhance our assessments and make them more student-centred and beneficial to learning.
About the Presenter
Steve Rutherford is a Professor of Bioscience Education at Cardiff University in Wales, UK, and is the institutional Academic Lead for Professional Development in Education. Steve holds a prestigious National Teaching Fellowship and a PhD in Biology, as well as a Master’s degree and a Doctorate in Education. His educational research focuses on the development of self-regulated learning during the transition to, and through, university, and on the development of ‘personal learning networks’ between learners. Steve is the lead partner in the ‘EAT-Erasmus Project’, a substantial three-year European Union-funded project between institutions in the UK, Spain, Portugal and Belgium, developing resources to support assessment and feedback.
Hello, and thank you all for joining us for ReAction today. I’m Matthew Short, client success manager here with GoReact, and I’m looking forward to today’s presentation. This session will be led by Steve Rutherford, who is joining us from Cardiff University in Wales and will be presenting on using the EAT framework to enhance the impact of assessment on the development of self-regulation.
As we’re getting started, feel free to introduce yourselves to your fellow audience members, share your thoughts, or share resources in the chat window. Make sure you change your message settings to Everyone to share with the entirety of the group. If you have any questions for our presenter, please use the Q&A window within Zoom.
If you see a question in there you’d like answered, click the thumbs up button to upvote it. We will answer as many questions as we can during our roundtable portion of this particular session immediately following Steve’s presentation. So at this time, I’d like to turn the floor over to Steve Rutherford to start his presentation.
STEVE RUTHERFORD: Brilliant. Thank you, Matthew.
So hello everybody, and thank you for coming along to this session.
I hope it will be useful to you, and I hope it will fit in with the themes of the rest of the conference.
So my name is Steve Rutherford. I’m from Cardiff University in the UK, in Wales. I’ll show you exactly where I am in a minute.
But I’m also talking on behalf of a project that I lead called the EAT-Erasmus Project, which is about developing resources for assessment and curriculum design across a series of different European partners. And we have partners in the UK, in Belgium, in Portugal, and in Spain as well.
So if you’re on a computer or have a phone ready, I will be doing some interactive things, and there are also a few links that I’ll share as QR codes as we go along. So the very first of those, I’ve popped this link in the chat. But as we go through the workshop, it will be really interesting and useful for us as a project to hear the sorts of things that you feel you would like support on in terms of assessment and feedback activities.
So that QR code there, and then I’ve put the link in the chat to give you a chance to add some thoughts towards that as we go along. And that QR code is going to loiter for a couple of slides. But first, I just thought it’d be interesting to see what sort of things people are involved with, whether you’re an active educator at the moment or someone who’s been in education.
I thought I’d just start off by having a quick little share of some ideas of the sorts of assessments that you’re either involved in delivering or supporting or have designed or have experienced within your educational setting.
So you can either go to that QR code for the phone or to www.menti.com
and then use that particular code there, which hopefully still works.
Or to this particular link, which again I’ve put in the chat, so you might need to scroll up a little bit and find that. I’ll pop it in again.
And it’s really just to see the sorts of things that people are involved in, the sorts of variety of assessments, because that’s the one thing I’m going to talk about as part of this session. So I’ll just pop over to that Mentimeter and see what we have.
So we’ve got some things coming through, which is lovely. Thank you very much.
Woo-hoo! Brilliant.
Now, it’s nice to see there’s no one uber thing coming out of there. So yeah, online presentations and quizzes. Exams, yeah. Clinical assessments, yes. So practical assessments, assessments where you’re actually looking at a particular type of skill. Presentations and quizzes seem to be the things that are coming to the fore in this little word cloud.
Now, what’s really, really nice there is there’s a wide variety of different assessments.
There are some places where I’ve delivered a talk similar to this where you tend to see just exam and essays popping up.
So it’s wonderful to see a nice variety of things there. Brilliant. That’s lovely. I’ll keep that going, and we might come back to that slightly later.
Fantastic. Thank you.
So what I wanted to talk to you about today is embedded in something that’s really, really close to my heart, and a major research interest of mine: how students develop as independent learners as they go through their courses. So I’m based at Cardiff University, which is in the south of Wales in the UK, as you can see here at the bottom left.
And it’s a fairly big university by British standards. We have 30,000 students. And I’m in the School of Biosciences, and we have about 800 or so students, mostly bioscience students but we also teach some medical and dental pre-clinical stuff as well.
I’m also someone who–
I lead the team that is involved in academic development, professional development for lecturers and teachers in the University.
And we just developed a whole load of courses to do that. So I’m really embedded in the idea of trying to develop learning and teaching.
And in September 2020, I was very lucky to have funding for a major initiative between a series of different institutions in Europe to look at assessment practices and to develop appropriate and effective techniques for assessment. And I’ll pop the links to the project in the chat as well.
And that’s based around something called the EAT framework, which I’ll talk about at more length during this talk. But it’s a collaborative exercise between a series of different universities. And what we’re really interested in is developing ways of enhancing assessment, and in particular student-centered assessment that enhances student independence, critical thinking, and self-regulation as well.
So we as a project aim to develop a suite of practical resources and to develop and share good practice across international boundaries. Because one thing that’s really, really clear is that individual sectors of higher education have different approaches to assessment based on the culture, based on the government regulations or the local regulations, or the lack thereof. And so we’re interested in developing an extended idea of good practice.
So there’s a series of outputs I’ll talk about slightly later. We’re developing a series of resources that help educators think about ways of actively promoting self-regulation in students. We’re also developing a series of case studies of how to use this framework, so people can look and see the different ways that you can engage with it, and also a series of training resources and a recognition framework as well.
So there’s a little QR code there, and I put the link in the chat of a site to go to. If you’re interested in keeping in touch about it, do please drop us a line. And if you’re interested in piloting the framework, I can get in touch with you as well.
So I’m getting closer to what the heck this framework actually is. But the key thing about it is something I’m very passionate about is rethinking the role of assessment in the learning process. So for many, many years, assessment has been primarily assessment of learning. You teach the student something, and then you assess them to see whether they’ve learned it or not or whether they grasp the concepts or whether they can do that particular skill.
More recently, work has been done to develop the idea of assessment for learning. And that’s assessment that helps drive the learning experience. It helps students engage with their learning better. It helps them verify whether or not they’ve understood things for themselves rather than for some sort of external measure.
But in particular, I’m really interested in the idea of assessment as learning, that assessment is an integral part of the learning process. It’s something that students and learners do that helps them learn, that develops their learning, that can teach things as part of an assessment process.
And so how can we find ways of putting students at the center of assessment as a learning experience, as well as something to test their knowledge? And in particular, the idea that assessment is driving the development of skills: not just knowledge and understanding, but capabilities, the ability to do things and to regulate themselves.
And that helps in that transition from a pedagogic environment to an andragogic environment.
I use the word andragogy because that’s the most common one in the literature. Arguably, anthrogogy is better because andragogy technically means adult male education rather than adult education. But it’s generally taken to mean adult-focused or independent education.
So how do we get students to go from a pedagogic environment which is very teacher-led, it’s very passive on the part of the learner, to an andragogic environment or an anthrogogic environment where the learner is at the center of that and the learner takes control of their own learning? So how can we get assessment to help in that process?
And I talked about the idea of self-regulated learning, and there’s various models of that. But there’s generally three key factors that are part of any model of self-regulation in learning as a process. You’ve got–
this is a model by Monique Boekaerts which I particularly like because it’s presented as a series of concentric rings.
So at the very center, the very core of self-regulated or independent learning, is the ability to choose cognitive strategies, to be able to identify ways in which you can learn. So when we’re supporting students, we need to encourage them to reflect on ways in which they learn and to see whether they are effective or not.
Then, in order to do that, you need to have the metacognitive approach of being able to reflect on whether your learning approach is working or not. So you need to be able to identify strategies that work and strategies that don’t. Outside of that is the motivational domain or the affective domain, so the ability to choose effective goals, to identify resources that you need, and to regulate yourself as a learner.
I rather arrogantly argue in research that I’ve done that there’s a fourth dimension, and that’s the regulation of other people: the interaction of other people in your learning and how you regulate those social interactions. And so encouraging students to think about the social aspect of their learning. And interactions between students and staff, and between students and students, are really, really fundamental to that.
So I tend to refer to it often as student-mediated learning rather than self-regulated learning, but that’s a whole nother webinar.
So I would argue that the ideal graduate position, especially regarding assessment for any student, would be that they were able to take ownership of evaluating themselves once they leave university. So they shouldn’t need academics to say, that’s good, that’s bad, this needs more work, that’s a really effective way of saying something. They should be able to identify the strengths and weaknesses of their own work as they go along.
So how do we empower them to be able to do that? Now, a key element of that is reflection and thinking about what they do. And so things like GoReact are actually really, really good for that as a reflective tool. But we can help them with how we structure things as well. And the EAT framework is really a good example of how we can do that.
So what is this thing I’ve been talking about for the last 10 minutes? So EAT stands for Equity, Agency, and Transparency in assessment. Equity means we need to make assessments that are equally valid and equally accessible to all our students, ensuring that a student with a particular type of background or disability or outlook is not disadvantaged or advantaged within our assessments.
Agency means that students have an active role in that assessment process. And transparency means that we’re clear to students what they need to do and what the assessment is there for. And the EAT framework, which was developed by a colleague of mine, Carol Evans–
Professor Carol Evans from Southampton University initially–
is about taking those ideas and putting them together into an integrated framework that lets you see all of the key elements of a good assessment and how they link in with each other.
This is, to me, a really important quote from Carol’s initial publication about the EAT framework. And I’ll share a link on how you can get a copy of that later on. I like this phrase very much because it asks, how do students–
how do we get students to co-own their programs with the lecturer so they see themselves as part of that experience, not passive recipients of a product but active members of that?
And in particular regarding assessment, I like that very last thing, is how do we get them to be part of the assessment process and not see assessment as something done to them?
I think the way we phrase assessment in a lot of areas of education puts the student almost in the position of a victim of assessment. It’s something they have to endure. It’s something done to them. It’s something unpleasant.
We need to get ourselves into a space where, actually, students want to do an assessment, where it helps them learn, where they get feedback, where they get positive reinforcement about their progress as part of that activity. So how do we change the conversation on that? So core concepts with the EAT framework are really centered around student partnership, so involving students in assessment design, involving students in curriculum design, making students the center of the assessment.
So it’s something that’s designed to support the students’ development, not something that’s designed to just create a mark for your particular course.
And getting students in partnership to improve and develop those assessments.
And these three things at the bottom are really core concepts. So on the left we’ve got agentic engagement, so getting the students actively involved in the things that we’re encouraging them to do in their learning and the assessments that we’re developing.
On the right-hand side, we’ve got assessment literacy, so getting the students–
making sure they understand what assessment is about and what assessment can do for them. And then in the center, again, that real core concept of self-regulation, the idea of encouraging the students to be able to or empowering the students to be able to regulate themselves and assess themselves.
I thought I’d do another little interactive thing.
And let me just move my Mentimeter onto another question.
And it’s lovely, lovely word cloud there, so thank you very, very much. What I thought would be interesting to do is–
I’ll hide those results to start with–
is again thinking of assessments in which you’re involved, either ones that you deliver or ones you’ve designed or ones you’ve experienced or ones you support.
And to what extent have you, in the past, seen students involved in the development of those, either individual assessments or assessments at the course or module level (they’re called different things in different countries), or the program level, such as the degree level or the year level, or the faculty or school or department level, or the university or institutional level?
Just interested to see what people’s experiences are in terms of to what extent have students or are students involved in the decision making processes in those environments.
Now, it should be a slider from Entirely developed by Staff on the left to Entirely developed by Students on the right. I’d be surprised if most things are on the right.
Most things tend to be towards the left. Let’s have a quick look to see if–
and the link to this is in the chat if you’re interested in joining in.
So we’ve had a few people, six or seven so far.
Just give it a little moment more.
Do keep submitting ideas. Let’s have a look [INAUDIBLE]
Oh, OK. So you’ll be glad to know you are entirely normal. [LAUGHS]
So most people when you ask this question, their experiences are very much that assessments are typically designed by staff with very, very little input from students, very little engagement with students from that as an approach.
I’m just going to nip on quickly to a second question which develops that a little further. And what you tend to see is there’s a little bit more student interaction at the individual level, and less at the higher levels. And sometimes there’s a bit of involvement at the institutional level if students are involved in developing policies. But in most cases, it’s very much towards the left.
I wonder if people would also like to answer this question.
So based on your opinion, to what extent do you think students should be involved in that?
It’ll be an interesting question to ask. So I’ll just give it a couple of moments while we get some people involved in that.
A few ideas.
Now, there is no right answer to this, by the way. [LAUGHS]
I’ve got a few people.
That’s interesting.
So people are saying that, actually, they think they should be a bit more involved. It’s interesting that no one’s really right there on the right-hand side, that students should be dictating everything, which is absolutely fair I think.
But it’s generally towards the middle of there should be an even involvement between students and staff in that as an approach.
It’s really lovely. Thank you. Very, very interesting. And we can maybe talk about that in the discussions later. So the EAT framework really is looking at ways in which we can involve students either as active participants in designing and changing things or just engaging them so they understand things better.
But the core concept is getting them at the center of that as a process.
So the EAT framework itself, as I say, was designed by or developed by Professor Carol Evans.
And she did it from a meta-analysis of several thousand–
I think she did about 4,000 papers in real detail and then about 50,000 papers in less detail to identify the key factors that make a successful or effective assessment. So it’s a research-based, research-led framework.
And what Carol was able to do was distill those ideas down to three core areas with four dimensions in each of those key areas. Those areas are assessment literacy, the understanding of what assessment is about and what assessment can and should do. Assessment feedback, so the provision of guidance and advice relating to the outputs of an assessment.
And assessment design, so designing assessments that are fit for purpose and have a good, sound basis to them. Now, those link together into quite a nice little network, and she’s relatively arbitrarily given them numbers. This looks like a really, really complicated diagram, but I’ll take you through it in a bit more depth.
There’s four dimensions for each of those particular areas, and each of them speaks to a particular type of idea that makes for an effective student-centered or learner-centered assessment.
The reason it’s in the web format I’ll come to in a minute, because that’s one of the ways in which you can use the framework in a practical sense.
How am I doing for time? Oh, dear.
So assessment literacy: those are four key ideas. The first one is, do you clarify what a good outcome looks like? So do you clarify to the students what you want them to do, what a good mark in this assessment would look like, and what your marking criteria are? Are you guiding the students through those criteria so they know what it is that you’re looking for as an assessor?
The second one is clarifying how different assessments within a course or different assessments within a program fit together. So if you are doing a course and there are a series of assessments or a series of quizzes or a series of activities they do, how does one build on the other? Clarifying student entitlement is really interesting because we don’t often say to students exactly what access to us they have as part of the assessment.
We might say we have office hours at a certain time, but do we clarify whether they can get comments on a draft or where that support actually happens during that process? In some cases, it’s necessary to say to the students, I’m not an infinite resource. I cannot give you 20 different comments on drafts. So where do you want that effort put?
Do you want me to comment–
really guide you more to start with? Do you want me to give you guidance on a draft? You want me to give you feedback at the end of the process? You can’t have all of those.
Which is the most useful to you? Get the students actively involved in making that decision.
Then, finally, the idea of clarifying the requirements of the discipline.
So how do these assessments fit in with what that student is going to be? So if they’re on an accountancy degree or a medical degree or a history degree or a philosophy degree or a biology degree, how do these assessments fit in with what they’re going to do at the end of their degree? Or if it’s a degree that they just tend to go into, I don’t know, management or a good job, how do we build that in?
Assessment feedback is the guidance we give to students, and the key thing with that is how we get the students to be independent. So providing accessible feedback is key. So do the students understand the guidance we’re giving them?
Are we making comments that are meaningful to the students, that motivate the students? Or is it just a series of ticks and crosses and question marks and underlining? Do we actually guide them?
Do we provide early opportunities for the students to change their behavior as part of that feedback? So if we just give them feedback on a very, very final exam at the end of the course, is that really helping them change their learning or develop their learning? Whereas if we give them feedback on something early on, they can identify that they’re either doing well or not doing well and change their behavior accordingly.
The last two are really, really important. Do we prepare students for a meaningful dialogue with us and with each other about that assessment and about the outcomes of that assessment? Do we encourage them to share the feedback that they give–
they gain or to talk about those ideas? And finally, are we providing feedback that encourages the students to be able to critique themselves so by the time they leave university they don’t need us to do it for them?
The final ones are assessment design. So do we have robust procedures for marking, for moderating marks, for checking the quality of the marking? And do we tell the students about those? Do they know what’s going to happen to their work after they submit it?
Are we testing and developing authentic skills as part of their assessment? So are we developing things that they’re actually going to use once they leave university?
Are we ensuring that students have an equal chance of competing in that assessment regardless of their background, regardless of any disabilities they have or their socioeconomic background? And do we have a process in place to evaluate and support that development as we go through? So have we got a system where we can involve students and colleagues in evaluating whether the assessments we give are fit for purpose?
So that’s those 12 dimensions. I could go into them in a lot more detail, but I don’t have time.
But how do you actually use that in practice? So you can use EAT in a number of different ways. You can either use it to make really big changes or really, really small, subtle changes. For example, one approach for the big changes is to use it as a diagnostic tool.
So take all 12 of those dimensions and see to what extent you’re doing that with the students. Get the students involved in that as well. I’ll show you how to do that in a minute.
Or you could take one particular dimension, so for example what does good look like, and evaluate that across your department or faculty or school. And all your assessments, you’re going to look at that one dimension to see to what extent you’re doing that.
Or you can use it to manage students’ expectations about assessment and feedback. Some colleagues of mine suggested you could use those 12 dimensions as headings within a course guide, covering the key things about each of the assessments that students do, so that those ideas are reinforced as you go along.
The key thing is engaging students as partners within that process. So get the students involved in suggesting ideas on how to develop, suggesting ways in which you can improve or change or things that would be useful to them.
So for example, using that as a diagnostic tool, an approach you can do is to use this little spider diagram and rate yourself on each of those dimensions on a scale of 1 to 5, 1 being don’t really do it that much, 5 being, yep, absolutely top dollar with that. It’s absolutely perfect.
Then there’s a student version. Get the students to do it as well, and get them to see whether they agree whether something’s effective, something’s less effective. And you’ll be able to compare those.
And then you can see–
I refer to it as you can see where the bodies are buried. So you can see the areas that are problems.
So for example, on this little diagram here, we’ve got what constitutes good. The staff think they’re doing pretty well with that. The students, not so much. So that’s an area you might want to talk to the students about more so that they understand what a good outcome of the assessment might be.
Conversely, you might be kicking yourself for no reason. You might say that, actually, we don’t really clarify what a good entitlement for the students is, but the students are perfectly happy with that. They know exactly what they can expect and what they can’t.
And so that’s something you don’t need to worry about. So you can use it as a way of finding out what you need to focus on.
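To make that staff-versus-student comparison concrete, here is a minimal sketch in Python. The dimension names and ratings below are invented for illustration (the actual EAT diagnostic covers all twelve dimensions); the idea is simply to rate each dimension 1 to 5 for both groups and surface the biggest disagreements as areas to discuss.

```python
# Hypothetical EAT-style diagnostic: 1-5 ratings per dimension,
# for staff and for students, on a shared set of dimensions.
STAFF = {
    "what does good look like": 4,   # staff think this is well covered
    "student entitlement": 2,        # staff think this is under-explained
    "accessible feedback": 3,
}
STUDENTS = {
    "what does good look like": 2,   # students disagree: an area to discuss
    "student entitlement": 4,        # students are happy: no need to worry
    "accessible feedback": 3,
}

def rating_gaps(staff, students):
    """Return (dimension, staff - student) pairs, largest disagreement first."""
    gaps = {dim: staff[dim] - students[dim] for dim in staff}
    return sorted(gaps.items(), key=lambda kv: abs(kv[1]), reverse=True)

for dim, gap in rating_gaps(STAFF, STUDENTS):
    if gap > 0:
        note = "staff rate this higher: worth discussing with students"
    elif gap < 0:
        note = "students rate this higher: may need less attention"
    else:
        note = "staff and students agree"
    print(f"{dim}: {note}")
```

In practice you would collect the student ratings anonymously (for example via a quick survey) and average them before comparing, but the gap-ranking step is the same.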
As part of the EAT framework, there’s a whole load of questions for each of those different dimensions that you can ask yourself as you’re designing an assessment and as you’re designing a course or as you’re revising an assessment. You can also use it for making small, subtle changes. So it might prompt you to make a small, little change in the guidelines that you give the students for an assessment, something you clarify a little bit more.
Get the students to identify things that they feel they need a little bit more guidance on.
Could give you a chance to discuss what entitlements the students would like as support for that particular assessment.
It could mean something that you focus on as a series of activities within a series of classes. Again, engaging students in that as an activity is really, really key.
So for example, you could focus on one particular dimension and design some activities that you work with the students for doing that. And I’ve got a good example of that in a moment.
But using that web, you can see that these things are integrated with each other. So if you focus on defining what a good outcome of an assessment looks like, that is, developing criteria with the students and getting them to understand those, then in order to explain that you’d also need to clarify the requirements of the discipline: why that outcome is necessary.
You might need to think about how you design your feedback in order to meet those criteria and get the students to think about those criteria. So all of these different things are interconnected with each other. And if you want to focus on those, again, there’s a series of activities that you can think of using for your students, either as activities you can do as staff members or activities you can get the students to be involved in.
For example, this is for defining what a good outcome looks like. Get the students to produce model answers themselves. Get the students to mark some work that you’ve identified or you’ve written or you’ve used in the previous year. And here’s just some examples of how people at my university have done that.
So within my school, we’ve used that wheel approach to look at year 1 freshman assessments to see where the problem areas are, so we know what to focus on to improve them. In a course that I run in year 2, it was identified that students had some group work activities, but they didn’t really know how to work as a group. So I developed a whole load of activities that trained the students how to work as members of a group.
Here in English literature–
this is the thing I said I would mention–
what they decided to do was focus on the marking criteria. And what they did was, at the beginning of each class, they spent five minutes focusing on what an aspect of those marking criteria meant. So week 1 was critical analysis. Week 2 was use of evidence.
Week 3 was presentation. Week 4 was development of an argument, and so on.
So there’s a range of different ways in which you can use this as a framework. And it doesn’t have to be big, Earth-shattering changes. It can be just changing how you phrase something or how you talk to the students about it. But getting the students involved in telling you what you need to do and what they need to understand is a really powerful approach.
So if you want more details about the EAT framework, go to eatframework.com. That’s Carol’s website. She’s got a whole load of different resources and things there.
But also, we’ve got some resources that are coming out of our project. So we’ve got a guide on self-regulation, which is a bit dense. It’s a bit turgid, so we’re breaking it up into a series of other, smaller, more accessible resources because it’s about 85 pages of an awful lot of quite intangible theory at the moment.
That’s also got a series of little diagnostic tools you can use to look at the self-regulatory approaches you have in your teaching.
We’re also developing a series of case studies, so that people can go to a bank of examples of how others have used the framework in different ways. And we’ll have a few of those up on our YouTube site within a few days, because we’ve got some examples that have just been processed. I got an email about that about half an hour ago.
And if you’d be interested in piloting this approach, we’d love to work with you. So it can be anything from a single assessment to something you do at a faculty level, ways in which you change your teaching or change how students interact with you. A number of different ways in which you can do it. And we’re also developing a series of training resources and webinars, and eventually there’ll be a MOOC as well for people to do to guide them through these sorts of approaches.
Thank you very much. I think I will pause there.
So if you’d like to be involved or just kept updated with this as a project, then there’s a link there on the left. And again, if you can think of any resources you think would be useful, then do please use that link on the right to let us know, because the more examples we have of what people need, the more we can create things for people to use.
But the key thing I think with all of these is developing ways in which the students can get involved and be active partners and see themselves as active partners in their learning, especially regarding assessment, and getting them to think about their assessment and think about how they develop. And so it’s nice to be part of this GoReact conference because I think that’s a really useful tool that encourages students to think about what they’ve done and to think about the ways in which they are developing as a whole.
Thank you very much. I’ll stop sharing there and just have a quick look at some things in the chat. I’ve just seen several participants are asking for PDFs of the documents you referenced.
If you’d like to drop your name in that link on the left, then I can send you some of those resources. But also, as we’re talking I’ll see if I can link to a shared drive to give you a link to those. Thank you very much.
MATTHEW SHORT: Excellent. Steve, thank you so much for the–
STEVE RUTHERFORD: [INAUDIBLE]
any questions.
MATTHEW SHORT: Yeah. And Steve, thank you so much initially just for going through your work. That information was extremely helpful. And for folks in the audience, if you have any questions for Steve that you’d like to pose, we’ve got the Q&A tool within Zoom that you can drop questions into. If you’re up for it, we can also promote folks to panelists to ask questions aloud to Steve, based on his presentation or any of the research he’s been leading here.
And while we give folks a minute or two to prime the pump and think of any questions they might have for you, Steve, as you were talking about engaging students in the assessment process, I was curious, from your perspective, whether through the research or anecdotally, how well do you find that students are equipped or ready to discuss how they should be involved in assessments, what support resources they need?
You talked about earlier people feel like an assessment is something due to them–
done to them.
How do you find when the role is reversed or they’re given the opportunity–
do they dive into it? Is there initial building or work you need to do with them to get them in the mind frame for that? And I’ll stop rambling to give you a chance to respond. [LAUGHS]
STEVE RUTHERFORD: No. No, that’s a very, very good question.
And my answer would be, generally, not very well. But I think that’s more our fault than theirs, because in a lot of senses, when they come to us, certainly in the UK–
I’ll be interested to see if it’s similar in the States, but I have worked in the States, so my suspicion is it would be even more so–
they’re not at all equipped to be active partners in that, because they’ve had a very passive experience at high school and, often, a very passive experience when they come to university.
And that’s, I think, one of the big problems we have in higher education, is we get the students in and we say to them, right, what we want from you in higher education is to be thoughtful, critical, analytical, challenging, to come up with really good ideas. Now shut up, sit down, and I’m going to talk to you for 50 minutes. So I think we need to change how we engage students from the outset.
And if you do involve students in those sorts of discussions, then they do get more confidence in participating in that. But certainly, what you will find is if you do try to engage students actively in this sort of a process, you tend to only get about 20% or 30% of them, regardless of the class size, being actively involved in it. Whether that’s to do with confidence, whether that’s to do with engagement, I don’t know.
But you can certainly encourage that by, from the word go, making it clear that you are listening to the students and you will develop things as they want that extra support and that extra guidance. So making it clear to them that you’re open to them being part of that discussion is a really, really key point, I think.
And it’s a slow process. You have to build up that confidence within them gradually.
MATTHEW SHORT: Yeah, and I imagine that makes it–
it’s got to be a programmatic–
like, it’s a part of your continuing ongoing education. It can’t be just one course you get that opportunity, and everybody else is, oh, I don’t get that opportunity to contribute, which makes sense.
STEVE RUTHERFORD: Yeah, absolutely. Yeah.
It does work at a very local level, but it’s much more powerful if it’s something that’s a bigger thing for students to be involved with.
So actively having students as change champions within a school or within a faculty, really, really powerful thing to do. You don’t have to say yes to everything they come up with, but they do come up with some really good ideas sometimes, and it’d be foolish not to listen.
MATTHEW SHORT: Absolutely. And it looks like we might have a question in the Q&A window from Rosanne Nunnery, Steve, if you want to take a gander at that.
STEVE RUTHERFORD: Yeah.
Sorry. I’m looking at the wrong thing.
So Rosanne has asked, since most assessments at university level are completed prior to a semester start, how would you encourage engagement regarding assessments after the semester begins? I was thinking of being open to adjusting a final assessment as the semester unfolds and I get student feedback.
Yeah, absolutely. That’s wonderful.
So yes, you can start with things and saying, right, this is how I’m going to run this particular assessment.
But I think a good thing to do right when you run that assessment that you’ve designed before the beginning of the course is saying, right, have a look at those guidelines. What other information do you need? And let me know.
And you could do that anonymously. You could do that by getting them to drop a post-it note into a bucket or something.
There’s a range of different ways in which you could do that. So getting them for a start to think they’ve got an active part in shaping the guidelines of this particular assessment, then gradually building them in more and more.
Then you could say to them, so what sort of support do you feel you need for that? And don’t just ask, what do you want? Because they’ll say, everything.
So say, I have x amount of hours a week that I can dedicate to this assessment. Where do you want me to spend that time?
Or x amount across the length of the course. Where do you want me to spend that time?
But then, yeah, actively getting them involved in designing that final assessment is a really, really powerful thing to do. You can get them to–
you could shape that assessment. But you could get them, for example, to take some marking criteria that you’ve designed and rewrite them in language that’s accessible to them. You could get them to design their own criteria.
You say, right, this is the outcome I want.
You’re an accounting student, so you need to be able to do this by the end of this course. So what sort of criteria do you think are going to be important with that? And work with them to help design those criteria. Or get them to think of a particular element within that where they can share ideas, or get them working with each other to look at–
get them to peer-evaluate some drafts that they’ve written of an assessment.
Those are really, really powerful things to do. Get the students talking to each other about the assessment and feeding off each other’s ideas.
So yeah, I’d advocate starting small with something you have pre-prepared, but then actively getting them involved in designing the thing that’s the end point, because they’ll have much more ownership of that. They’ll be much more engaged with it.
And probably, you’ll get a better outcome as a result of that. I can’t guarantee it, but usually that happens.
Absolutely.
There’s a question in the chat from Vera, a specific question about the diagram and plotting the student and staff responses: it looks great, how did you get it, which tool do you use to visualize it like this?

So we are actually working on trying to get an electronic version of that, something you can send out, an app type of idea.
At the moment, I just do it with a piece of paper. So I get the students to–
I’ll hand out bits of paper–
it’s not particularly high tech–
to the students and get them to just draw that. It’s also a really good professional development activity to do with staff.
I do that in the professional development courses that I run. I get staff to think of an assessment they got, grade it on those 12 points, and then talk about it to someone else.
And then you can collect those in and either crunch the numbers or just look generally at what scores high and what scores low. You could just create a little survey for it, 12 questions. The limitation of that is it often helps for them to see how these things link together, so seeing that wheel really, really helps.
So what you might want to do is put the wheel up on a screen and then get students to do an electronic–
a Google form or a Microsoft form or some sort of survey thing which makes crunching the numbers a bit easier.
But we are trying to develop something that’s a little bit more of an interactive tool.
Cool. Questions.
Looks like we’ve got a question from Anne in the ch–
Q&A, excuse me.
STEVE RUTHERFORD: Yeah.
So Anne’s asked, I use the Youth Program Quality Assessment to guide observations of after-school programs in rural Alaska. Awesome. Our goal is to get program staff to score the assessments and provide feedback to each other.
Our students are adult staff members. Do you have strategies for helping them share those observations and collaborate with each other to improve their programs using GoReact?
I’m not an expert with GoReact, so Matthew is probably better at answering that than I am. But I think anything that involves them talking to each other and critiquing those sorts of ideas is a really, really powerful thing to do. So getting them to evaluate either the feedback they’re giving or see the student’s responses to the feedback they’re giving or getting them to explain the rationale behind why they’ve graded something in a particular way is a really, really powerful thing to do.
And so as I said, one thing I tend to do with professional development courses is I get the staff to pick an assessment that they’ve run or that they’ve designed, grade it on those 12 points themselves, and then discuss it. Pick one thing they’ve graded well and talk about that to someone else and explain why they think they’ve graded it well, and one thing that they graded low and explain why they think they’ve graded it low and what they plan to do about that.
And so anything that you can get people vocalizing their thought process is really powerful. And my impression of GoReact is that it does exactly that. It encourages people to think about and vocalize what it is that they’re doing.
There’s a question. So you’re actually speaking about the SLOs you previously created.
Yeah. So you can adapt your assessments to learning outcomes that you’ve–
I think that’s what you mean by SLOs–
that you’ve previously designed. You don’t need to completely rethink everything. You don’t need to throw the baby out with the bathwater, so to speak.
In a lot of cases, when we’re using this as a framework, it’s just making very, very small changes to how you talk about things with the students.
It can be as much–
as little as changing or adding a paragraph to a course handbook or an assessment guideline. It could be just reshaping or rewording your assessment criteria in a way that students understand better, or explaining to them the difference between good, excellent, exceptional, and outstanding within your marking criteria.
So it doesn’t have to be things that are radically changing learning outcomes. But it can be. It can be, but it doesn’t have to be.
MATTHEW SHORT: Excellent.
STEVE RUTHERFORD: Yeah. So Alison’s put, with more detail for the students to understand. And that’s how my colleagues in the English language and literature department used the framework. They said, right, we want our students to understand what we want from them in this particular assessment, which was an essay assessment.
So they said, right, do the students–
these were freshman students. Do they understand what critical analysis means?
And so they had a five-minute discussion at the beginning of the class. What does critical analysis mean? They worked with the students to get a shared understanding of what that term meant and what a good element of critical analysis would look like or not. Then, next week they looked at use of an argument and they talked about that for five minutes.
And so the students built it up gradually–
it wasn’t doing it all in one go, where the students get cognitive overload and remember 2% of it. Do it in small little stages.
And getting them to think about those criteria and think about how they would work, getting them to look at other students’ work or previous students’ work and to grade them themselves so they understand what it is that you’re looking for. Really, really good ways of engaging the students with that.
And let’s see.
Nothing currently open in the Q&A window, and see some chat messages.
So folks, we still have about 11 minutes before the end of our session, so still time if you want to ask any questions in the Q&A. Or again, if you want to volunteer to be a panelist, if you want to raise your hand, you can ask those questions aloud.
And Steve, another question I had as you were going through–
you mentioned an important part of this framework or process is to incorporate aspects of self-reflection for the students and also incorporating peer feedback into that process. In your experience, are there approaches or ways that you have found are successful in incorporating or reinforcing those?
With a video skill assessment product, I feel like for younger learners, the thought of themselves being recorded and having somebody pick them apart is a little nerve-wracking, a little anxiety-inducing. In your experience, do you find ways of easing students into that process to make them comfortable with either self-analysis or receiving peer feedback? Or are there aspects where you’re handling or addressing that as part of the framework as a whole?
STEVE RUTHERFORD: I think the key is to start small. So don’t expect students to be able to–
there are three dimensions to this. If you say to students, you’re going to get critiqued on this, that’s a terrifying experience. So that’s one thing. Another is that you can’t expect students, or any learner, to take a huge amount of feedback on board at one time.
So too much feedback is often as bad as too little. So modifying that.
You also need to train people in giving feedback, so in giving constructive feedback and identifying things that are useful.
So my approach is usually to start small.
So with something like this, a video type of environment, maybe get the learner to say one thing they would like some feedback on. Is it my presentation style? Is it the speed I talk at? Is it how I designed my slides?
Is it–
do I gesture a lot? Which I do.
And just focus on that one particular thing. Get some feedback on that. Then, also, talk to the people about ways in which you give feedback.
So it’s supposed to be encouraging. It’s supposed to be constructive. You’ve got to be professional. Those sorts of ideas. And then starting very small with one particular thing, and that gives someone the experience of doing a presentation, of getting over that fear of standing up and talking to people.
But then they get that one key thing that they can work on. Then, the next time, maybe make it three things that they want some feedback on, again something that they think of themselves. Give them the choice of what things they want critiqued. So that way, if there is something that they think they’re really uncertain about or really frightened about, they can avoid that until they’re a little bit more confident.
And then you can work up to a process where it’s open season and say, right, let us know anything. But again, you can frame that in a number of different ways. So there’s the idea of the feedback five or the feedback seven, of three things you did well, three things to improve on, one big thing to remember, good or bad, for next time.
So again, giving people that framework in which to work is a key thing. So I think something like GoReact has that sort of facility within it to help people give the feedback as well as receive it. So again, starting small and building people up so that they gain that confidence, but also training people who are giving the feedback as well.
Absolutely. Yeah, I think for our product specifically, a lot of folks focus in on the ability for students to record themselves and analyze their performance or craft. But there are other aspects where instructors can provide video recordings to allow students that opportunity to demonstrate and/or potentially build those feedback commentary skills that are necessary to support a program or a framework like this, which can be very powerful as well. [INAUDIBLE]
I think the key is keep giving them that safe space and the safe space to make mistakes.
And also to get them to understand that we, as more seasoned professionals, aren’t perfect as well.
I finished a lecture about three, four weeks ago which just went completely to pot. [LAUGHS]
I just went to the [INAUDIBLE]
just said to the students, look, I’m going to stop now because I’m not helping you. And thankfully, it was only five minutes from the end.
I said, well, I’ll record the last bit as a video thing. But also, again, one of the most salutary experiences I had is just watching yourself on video and seeing the little quirks you have. So I move my hands around a lot. I talk out of one side of my mouth.
When you see me physically, I stand with my left hand in my pocket all the time for reasons I don’t understand. Weirdly, my grandfather used to do that as well, so it’s probably genetic.
[LAUGHTER]
But giving people the chance to do that and saying, what do you think? Pick one thing you think you did well, one thing you think you would like to improve on, and focus on that. But the benefit of peer feedback is it trains someone to be able to critique themselves, and that’s the key thing people often forget with peer assessment and peer feedback.
The purpose of it is you’re training someone with the analytical skills to be able to do that to themselves.
The feedback they give to someone else is largely irrelevant. What you’re trying to do is encourage them to think about what that person has done that you think was good or less good or that you would do differently. That’s the key idea behind it.
MATTHEW SHORT: Certainly.
And let’s see. I think Vera has a question in the chat. Do you have, by any chance, a good article for training students to receive feedback, so any resources or research or guides that you think would help in that vein?
I can’t think of one offhand.
But essentially, anything by David [INAUDIBLE]
or Naomi Winstone–
I’ll put their names in the chat–
is a good thing.
I’ll dig some out and put them in that shared folder that I’ve just put a link to up in the chat further up, because there are some. I can’t think of one offhand, but there certainly are several out there.
MATTHEW SHORT: Excellent. So we’ve got just about three more minutes here, so probably time for maybe one more question at least. If anybody has any lingering questions, or if anybody is ready to volunteer to come live with Steve and me and put on your video and ask us a question directly, we’re happy to do that as well. But any other questions for Steve as we approach the end of today’s session?
Lovely link put in the chat there. Thank you. Thank you, Joseph.
Perfect. So as we give folks a minute here in case they do have any lingering questions they’re typing into the chat window, again, definitely want to take the opportunity to thank Steve so much for taking a portion of his day. I know we’re getting into the evening hours, I imagine, over in Wales for you, so probably dinnertime for you [INAUDIBLE]
Half past six in the UK. I should just say, I was conscious as I was talking that my dog’s head kept poking out behind my head, but that’s what was behind me.
[LAUGHTER]
Fantastic.
But definitely taking the opportunity to thank you, Steve, for sharing so much here today with your presentation.
STEVE RUTHERFORD: Fantastic. Thank you.
Also, thank you for answering some of our panelists’ questions. And for those that still have time, in about 15 minutes we’re transitioning into the more GoReact product-specific session. So if you’re interested in learning more about those, the GoReact link–
or the Zoom links, I should say, should be available to you from our event registration pages and whatnot. But thank you, Steve, again so much for your time.
STEVE RUTHERFORD: Thank you.
Panelists, thank you so much for taking a little bit of time to listen in here today, and hope you enjoy the rest of our conference here.
STEVE RUTHERFORD: Brilliant. Thank you. Have a good conference.
MATTHEW SHORT: Bye, Steve.
STEVE RUTHERFORD: And thanks for the invitation. I really appreciate it. Thank you.
Take care.