
Mastering Assignment Makeovers to Enrich Instruction

On-demand webinar on revamping assignments and assessments to drive meaningful learning

Special guest Derek Bruff, author and educator, shares how to remake your assignments, not by simply banning AI tools, but by ensuring assignments require students to demonstrate the thinking and skills they will need after graduation.


Jenny Gordon:

Hi everybody. Hello. Welcome to our webinar today. We’re thrilled that you’re able to join us, and we hope you enjoy this presentation and walk away with some fresh ideas and the right tools to give your assignments a makeover and make them more authentic. I’m Jenny Gordon and I’m the vice president of international markets here at GoReact. I’ve got the lucky job of introducing our main guest today; I’ll come to that in just a second. For those of you who might not be familiar with GoReact, we’re a competency-based video assessment and feedback solution, used primarily across campuses in the US and around the globe, supporting students in developing their skills and competencies in many different disciplines. I’m really happy to be joined today by our special guest and today’s presenter, Dr. Derek Bruff. Good morning, Derek.

Many of you will recognize Dr. Bruff, who is an educator, higher ed consultant, and the author of two books. Before I hand over to Dr. Bruff, just a few friendly reminders about today’s webinar and how things run here. Today’s event will last about 45 minutes: about 30 minutes of presentation and 15 minutes of Q&A at the end. We are recording the presentation, so if you have to hop off before we finish, or if you want to share this recording with colleagues, we will email you the recording along with instructions for sharing it with those who weren’t able to join today.

Please do submit questions throughout this session. To submit questions to Dr. Bruff, you can use the Q&A, and we’ll answer as many of those questions as we can at the end of the session. You’ll also see a chat function at the bottom, and you can use that chat to introduce yourself, as many of you are already doing, thank you, to tell us where you’re from and who you’re with, or to share links to relevant resources with colleagues or other attendees. So anything that you want to add in the chat there, please feel free to do so. So without further ado, and I think we’ve welcomed many, many colleagues now, Derek, I’ll hand over to you to introduce yourself and to begin today’s presentation.

Derek Bruff:

Great, thank you, Jenny. I’m glad to be here. Excited to chat with folks about teaching in this time of artificial intelligence. Very excited to be here today. We will go ahead and jump in. I should say Jenny gave a little bit of background on me; I was also the director of the Center for Teaching at Vanderbilt University for over a decade, and there I got to work with faculty in all kinds of disciplines, helping them develop foundational teaching skills and explore new ideas in teaching and learning. One of my favorite areas to explore is educational technology. It’s the topic of both of my books, and we’ll be talking a lot about teaching with technology today. It’s also meant that I’ve had a busy year since ChatGPT came out last fall, on November 30th, 2022. You’re probably aware of ChatGPT and its brethren in artificial intelligence.

These are generative AI tools that are trained on big sections of the internet and know how to put words together in fluent ways. They aren’t really answer machines so much as machines that put one word after another in ways that are somewhat predictable. So if you ask it what the phrase “the good, the bad, and the ugly” refers to, it will give you a nice answer. But it is really just linking up words in sensible ways to try to make fluent prose. And so one of the things that people have noticed about these tools is that they tend to speak with great confidence, and I often compare them to the character of Cliff Clavin from the sitcom Cheers back in the-

Speaker 3:

Hey, Derek, I’m going to interrupt you real quick. Can you go into presenter mode so that everyone can see it full screen?

Derek Bruff:

Presenter mode, what am I doing wrong? On PowerPoint?

Speaker 3:

Yeah.

Derek Bruff:

Did I share the wrong screen?

Speaker 3:

Yeah.

Derek Bruff:

Yes, that’s what happened. Let’s try that again. There we go.

Speaker 3:

Perfect. Thank you.

Derek Bruff:

All right, I swear I’ve done Zoom before. So anyway, Cliff Clavin would often speak with great authority about things he did not know anything about, and I find that ChatGPT often has a similar flavor to it. I like to say that tools like ChatGPT speak but don’t think; they don’t actually know things, but they are good with words. And I would say that higher education has been thinking a lot about these generative AI models over the last 10 months or so, in part because they came out late in the fall semester, which was a terrible time to have to grapple with new tools like this, but also because this type of question and response is something that we use a lot in higher education. A lot of our assessments and assignments involve this type of writing, and ChatGPT seems like something that can do that writing, perhaps on behalf of our students.
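To make that “one word after another” idea concrete, here is a toy sketch in Python. This is a deliberate simplification, not how ChatGPT is actually built; real systems use large neural networks trained on enormous corpora, but the generation loop is the same: predict a plausible next word, append it, repeat.

```python
# Toy next-word generator: learn which words follow which from a tiny
# corpus, then produce text by repeatedly sampling a plausible next word.
import random
from collections import defaultdict

corpus = "the good the bad and the ugly is a classic western film".split()

# Record which words have been observed following each word.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Produce text by repeatedly picking an observed next word."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break  # no observed continuation, so stop
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the bad and the ugly is a classic western"
```

Nothing in this sketch knows what any of the words mean; it only knows what tends to follow what. That is the sense in which these tools speak but don’t think, just at a vastly larger scale.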

So what does this mean? Well, a few big picture comments, and then we’ll drill down on one thread in particular: we might need to change our learning objectives because of generative AI. So I’ll share a brief story from about 10 years ago, when a different tool came out called WolframAlpha. This was a computational engine. It actually does know things; it can compute in lots of ways. I was teaching a linear algebra course at the time, and having this tool available meant that there was a certain kind of computation that my students would have to do frequently in this linear algebra course that the tool could actually do more quickly, more efficiently, and more accurately than my students. And so I changed my assessments, I changed my tests. I used to give students one question where they would have to model a situation with a particular mathematical tool, do a long computation, and then interpret the answers. WolframAlpha could do that computation for them, and so I could de-emphasize that piece of what I was teaching.

I could actually give them multiple questions on an exam that asked them to do the hard part, the modeling part, and let the tool take on the computation. And so I changed my learning objectives pretty significantly because of this tool, and we’re starting to see similar things with generative AI. So for instance, in the realm of computer science and teaching coding, these tools are pretty good at writing basic code. I found a white paper by Brett Becker and colleagues arguing that this minimally suggests a shift in emphasis towards code reading and code evaluation rather than code generation. And I’m starting to see that with some of my colleagues at the University of Mississippi in computer science as they start to rethink what their introductory programming courses are going to look like.

We might use AI tools to enhance student learning, even towards traditional goals. And so my metaphor here is from when I was learning photography. That’s a picture of me out on a bird walk with some friends and family; that’s me with the zoom lens there. When I was learning photography, there were a lot of conceptual elements to that skill that I needed to learn. I needed to learn about focus and depth of field, composition and lighting, but I also needed to learn the buttons and knobs and panels on my camera. There was a kind of technical learning that went along with the conceptual learning, and I would argue that the one informed the other. So as I was learning to take photos, learning what all the knobs and buttons did helped me try out different types of photographs so that I could learn the concepts better. And I think there may be some potential for generative AI to serve a similar purpose in some of our courses.

So for instance, I talked recently with Guy Krueger, who is in the writing and rhetoric department at the University of Mississippi. He was on a panel we had last week at Mississippi. And he’s teaching writing, and this is one of his quotes about the impact of AI on his teaching. “Since we’ve been using AI, I have had several students prompt the AI to write an introduction or a few sentences just to give them something to look at beyond the blank screen. Sometimes they keep all or part of what the AI gives them, sometimes they don’t like it and they start to rework it, in effect beginning to write their own introductions.” And he talked about students who really kind of freeze up at that blank page when they’re starting the writing process.

And he used to tell them, “Look, just don’t start with the introduction. Start with one of your body paragraphs. That may be a way to get in.” And now he’s seeing this AI provide a different way for some of these students to get started, not to do the writing for them, but to help them get over that writer’s block and start to write their own introduction. But the focus today is going to be on what I’ve been calling assignment makeovers. We might need to revise some assignments in light of these new AI tools. The kinds of things that AI is good at and bad at are going to affect how our students encounter our assignments, and I think it’s important that we be exploring that and aware of that so that we can maybe make some useful modifications.

So before I jump into that, I have a poll for you. I’m a little curious to know where you are right now in terms of use of AI. So yep, Abby’s got the poll up and running. How would you describe your policy on generative AI this fall? Are you a red light, are you prohibiting the use of AI in your courses? Are you a yellow light, are you permitting it with some limitations? Are you a green light, you’re all in and ready to see what it can do? Or are you a flashing light and you’re not quite sure yet what your policy will be? Give you a second to vote on that.

Abby, whenever you think we’re ready, you can share the results. All right. Okay, so not a lot of red lights, which is interesting. I ran a similar poll at Mississippi last week and I got a similar result. Maybe the red lights don’t come to workshops like this, or maybe, like some colleagues, you felt like you can’t be a red light. There’s just no way to kind of police this meaningfully. But lots of yellow lights and green lights in the room, but also a fair number of flashing lights who are still figuring things out, which is totally fair. This is a fast moving subject, and I don’t think we often have to move this fast in higher education, certainly in 2020 we did, but more typically we don’t have to. So lots to explore here. Let’s look at some assignments here. So one of the things that I did a month or two ago was I took out a couple of my old assignments and tried to kind of see how I would need to revise them for this fall semester if I were going to offer them in light of these AI tools.

And so one of the assignments was for a writing seminar that I used to teach, a course on cryptography, a course about codes and ciphers. It has a mathematical component but also kind of a history component, certainly some writing and also current events. It’s a really fun course to teach. And so I had an expository essay that asked students to find a code or cipher from history that we hadn’t already explored in the course, and to tell its story in 1,000 to 1,500 words. Where did it come from? How was it used? What impact did it have? But also how did it work mechanically?

So there were two main goals for this assignment. One was storytelling: how can you tell a compelling story about this little slice of history? But there was also a technical communication piece. I wanted to see if students could explain the mathematics behind some of these codes or ciphers in writing. And a little bit of research as well. This wasn’t a full-on research paper, but they did have to find some credible sources. And so I went to ChatGPT and I said, “Well, what if I just ask it to write this paper for me?” One of the ironies is that I try to be really transparent with my students about the assignments and what I’m expecting from them. And so because of that, it was pretty easy to copy and paste my assignment instructions into ChatGPT, and it gave me a response here, which I screenshotted.

It was not good; this was not a good essay. It was very by the numbers. It picked a topic that was actually not a good topic. The Enigma machine is a fantastic topic in general, but we had already talked about it in our course, and so it was not fair game for this paper assignment. But more importantly, this essay was really dull and very formulaic. I scored ChatGPT’s essay with my grading rubric, and it earned a D minus. So not great. But I thought, what if I were a student who either really wanted to get out of this assignment or maybe just wanted to use these tools effectively? How would I keep using generative AI to try to help me on this assignment? What else could I do? Well, the next step was the easiest: I needed a different paper topic because the Enigma machine was not appropriate. And so I asked ChatGPT to identify some other codes or ciphers from history, and it did a pretty good job of generating a list of options, many of which would’ve been appropriate for this essay assignment. So that was good to know.

I picked one of them, the Japanese Purple cipher, which was fair game for this assignment. And I went to a different tool called Elicit. Elicit is going through a lot of changes right now, but as of a month ago, I was able to put in a question like, what is known about the origin, use, influence, and mechanics of the Japanese Purple cipher? What it does is actually look for scholarly sources that might answer that question. And again, these tools are good with words, so they’re able to take my question, which is expressed in words, and try to find papers or book chapters that might answer it. It gives me citation information about these resources, but then also, in case I get tired of reading those long abstracts, it will summarize the abstract for me, which I think is kind of comical. It will also do other things to try to pull out different features of these papers.

So I view it as an alternative to a Google Scholar search or a library database search in terms of finding resources. And I looked at this and I said, “Okay, some of these are pretty good.” Now, I’ve been in this field for a while. I know where to find good sources. My students do not. So my students would need to do some digging into these suggested sources, but I saw a few here with some potential that my enterprising student might want to use. So then I went to Bing Chat, which is another generative AI tool, attached to the search engine no one used before last year, Bing. And I asked it the same question: what do we know about this cipher? And it gave me a nice summary of different sources. It’s important to know that Bing, as a search engine with a chatbot, will search the internet, find live sources, and summarize them for you.

Again, the summaries are generally pretty good; these tools are good with words. And it gave me some sources that I could follow. These sources were not as strong, because it’s just searching the general web, not scholarly sources, and so several of them would need to be interrogated. It also, as you’ll see at the bottom, gave me other things I might ask it about this topic, which I thought was an interesting exploration tool. So then I went back to ChatGPT and said, “Let’s take another approach.” Instead of just giving it the assignment description, I asked it to tell me an exciting story about the breaking of the Japanese Purple cipher. It did much better at this. This was a much better essay. It lacked the kind of technical communication that it needed to have, but the storytelling was very strong.

And so where I landed was that a student who wanted to use these tools thoughtfully could mix and match output from different tools and probably write a pretty decent paper. If they brought some of the technical communication from the first draft and the storytelling from the second draft, and then built in some references and used those sources thoughtfully, they could probably do a pretty good job on the final paper. So I asked myself, what are my makeover decisions here? What would I have to change about this assignment given all of this? One is that I would decide to teach writing with AI. So I’m not going to try to avoid the use of these tools. I think they’re going to be everywhere. They’re coming to Google Docs and Microsoft Word, and so I need to help students learn to use them well.

There may be some courses out there where you’re going to prohibit the use of these tools. I talked with a philosophy professor recently who was doing a graduate-level course, and her students were basically doing almost publishable-level work. They weren’t using AI. But she had a small class of advanced students who knew why they were there, and she was able to tell them specifically, “This is why we’re not using AI, because I need you to develop these skills for your profession.” But for my first-year writing seminar, I’m going to use the tools. As a class activity, we’re going to collaboratively evaluate that first ChatGPT essay draft and use it as a way to talk about what’s not good about this kind of writing. We’re going to spend some class time on structured peer review. So I have some activities where students are going to give each other feedback about their storytelling and technical communication choices. So even if the tools are doing some ghostwriting for the students, they’re still going to have to have some discussions about what rhetorical choices to make in their writing.

We’re going to use those source suggestions to teach about working with sources. This is something I would do anyway, but Bing Chat just gave me some terrible sources to evaluate, and so I want to capitalize on that and use it as a way to help students judge the quality and credibility of sources. I’m going to ask students to document and reflect on their use of AI. I often have students do a kind of writer’s statement or designer’s statement alongside a project, and so I’m going to want them to talk to me about AI there. I want to keep those lines of communication open with my students. And I’m going to look at that rubric and see if there are any categories I don’t need to spend points on anymore. There may be some stuff around grammar or fluent prose that just isn’t worth giving points to anymore, because I can assume that students will use an AI tool to clean up their language like that.

Now, these are my choices; they don’t have to be your choices. But I do think there’s a process here that’s really useful. So know what your assignment is and why it makes sense for this course. Make sure you’re clear on the specific learning objectives you have for this assignment; I talked about technical communication, storytelling, and use of sources. What are your learning objectives? How might students make use of AI tools? That was the big part of my exploration, and I think it will be the big part of yours: seeing what the tools are able to do right now that may be helpful or disruptive for your students. And then it’s about matching: how can AI undercut the goals of this assignment, and how can I mitigate that? If students are depending on ChatGPT to do their storytelling for them, they’re not going to develop those skills themselves, so I might need to shift a little bit to focus on those goals in some other way. But also, how might AI enhance the assignment, and where would students need help figuring that out? I love the idea of using these tools to help students find better sources, so that’s something I can leverage in my work. And then finally, focus on the process. How could you make this assignment more meaningful for students or support them more in the work?

All right, so let me, I’m going to skip a few slides here. Sorry for the flashing. I want to move forward to this little activity. This is what I’m calling a ready-set-go activity. So in the text chat, what I would like you to do is respond to this question, but don’t hit enter until I say go. All right, so type your response, but don’t hit enter. So my main question is what changes might you make to your assignments in light of generative AI? If you’re not sure though, which you may not be, what questions do you have about how AI might be used on your assignments? So take about 30 seconds and compose your response, but don’t hit enter until I say go.

So the questions are on the slide. What changes might you make to your assignments in light of generative AI? Or if you’re not sure, what questions do you have about how AI might be used on your assignments? All right, ready, set, go, hit enter. And now we’ll take a minute and look at some of these responses. So Kirsten says, “Make assignments more process focused rather than the end product with some in-class work.” I think that’s important for these bigger assignments: to spend class time having students engage in parts of the assignment so that you can help, guide, and shape the work, and so that they see how the assignment is connected to the in-class time. I think that’s important. I think the more the assignment feels like out-of-class busy work, the more likely students are to lean too heavily on these AI tools.

So there are a couple of questions about Elicit, that tool. It actually uses Semantic Scholar, which is a database of scholarly articles. It’s trained very differently from ChatGPT, which is trained on basically the whole internet; Elicit is trained on actual databases of scholarly articles. And so that’s one thing that’s important to know: these tools work in very different ways, and it may take some digging to figure out exactly how each tool works and whether or not you should believe what it gives you. ChatGPT is much more likely to make up a source entirely, especially if you use the free version. The paid version of ChatGPT does have plugins that allow it to search the internet, and so it is less likely to make things up, but it still will make up some stuff. Elicit works very differently. It’s a natural language interface to a database of vetted material.

One quick example: Wendy’s, the fast-food restaurant, is apparently working on a drive-through chatbot that would take your order. That sounds kind of terrible until you think about how often drive-throughs get your order wrong anyway; I think the industry standard is pretty easy to beat. But the point is that it’s using that natural language interface to some other system. And so I think we’re going to see a lot of uses of AI there, where we may have a little more confidence in what it’s giving us because we have a better understanding of the system it’s interfacing with.
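As a rough illustration of that pattern, here is a toy sketch, entirely hypothetical, of a natural language interface to a constrained system. A real deployment like that drive-through chatbot would use a language model for the parsing step; the point is that every result is validated against a known menu, so the output is easier to trust than free-form prose.

```python
# Toy "natural language interface to another system": free-form text in,
# validated structured order out. A crude keyword matcher stands in for
# the language model; what matters is that every recognized item is
# checked against a known menu, so downstream steps can trust the result.
MENU = {"burger": 5.99, "fries": 2.49, "soda": 1.79}

def parse_order(utterance: str) -> dict:
    """Naively map a spoken order onto menu items (quantities ignored)."""
    order = {}
    for raw in utterance.lower().split():
        word = raw.strip(",.!?")
        # Try the word as-is, then a naive singular form.
        for candidate in (word, word.rstrip("s")):
            if candidate in MENU:
                order[candidate] = order.get(candidate, 0) + 1
                break
    return order

order = parse_order("Two burgers, fries and a soda please")
total = sum(MENU[item] * qty for item, qty in order.items())
print(order, f"${total:.2f}")  # {'burger': 1, 'fries': 1, 'soda': 1} $10.27
```

The same structure shows up in tools like Elicit: natural language on the front end, a vetted database on the back end.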

And one more comment. I see Carol, hi, Carol, mentions, “How can I use AI to make my assignments more culturally responsive?” I had someone on my podcast recently, Garret Westlake, who took a surprising angle on this. He was doing some brainstorming with a group of faculty, and he found that even among a fairly diverse set of faculty, there was still some groupthink happening. A group of faculty tend to be more educated than average and have other things in common; they can be homogeneous in some ways, even though they may be diverse in other ways. And so he was using ChatGPT to introduce some ideas that hadn’t occurred to the group.

And I don’t usually think of ChatGPT as adding cultural diversity to a conversation. It’s trained on the internet and has tons of biases that are problematic, but that’s something to explore. I can also imagine the flip side of that: having your students critically analyze the output of one of these tools to look for things like cultural biases and other kinds of bias. One of the nice things about having students critique the output of ChatGPT is that no one gets their feelings hurt when you’re rude to a robot; students will often engage in a more rigorous critique because it’s not a peer that they’re critiquing.

All right, you guys have lots of comments. I’m going to have to read some of these later because we need to move on. Oh, I see there’s some Elicit exploration happening now. That’s great. I want to share a few ideas here as you start to rethink, well, what do we do? How can we make these assignments a little more authentic? And this is from an initiative I led at Vanderbilt University, called the Students as Producers Initiative, and we were looking for ways to help faculty engage their students, not just as consumers of information, but as producers of knowledge. And so we looked at a lot of different ways this can happen in different disciplines, different approaches. One common element was giving students open-ended problems. If the answers to the questions are all known, then the assignment does feel a little bit more like busy work. More authentic assignments are more open-ended.

And so for instance, I talked with Steve Baskauf in the biology department. That’s Steve in the upper right there. He was doing a biology lab where students were working in groups on actual research projects. Not groundbreaking research, but he worked with a team of grad students to develop some open-ended research questions where the answers were not known, and the students had to design an experiment to test certain things. And I went to the poster session for this course and talked to the students, and almost every one of them said at some point, “All of our little amoebas,” or whatever they were, “all the creatures died, the plants died, the insects died.” Their experiments almost always failed the first time, but they had a chance in the semester to revise those experiments. And I think that’s an important learning opportunity.

Steve said, “There are always a fair number of failures, but that’s what happens when you tackle open-ended questions.” You do have to focus a little bit more on the process, but you have the opportunity to have students learn from failure. In a different discipline entirely, Betsey Robinson in the history of art at Vanderbilt was having students do digital restorations. So this is a photo of the frieze, some of the sculpture on the Greek Parthenon, and as you can see, some of the sculpture is in better shape than other parts. The assignment was to create a 3D rendering of this sculpture and then fill in the gaps. This is a screenshot of one student’s work, and she had to figure out what would’ve been there. What do we know about the rest of this sculpture, about the techniques and methods of the time?

It was a very open-ended question and it had some elements of creativity, but she also needed to make research-based informed choices as she was doing her artistic rendering of this. And I’m going to skip this video, but she talks a lot about how she learned and what she learned through this process. Now my PowerPoint is running away from me. So another element that I think is often important is having a sense of audience for students, who are they producing this work for? I took this picture in Boston a few years ago, and this street performer was doing some really cool acrobatics for a real audience here, and I imagine he had to hang out in the gym a lot, kind of practicing all of these different moves he did, but he didn’t just stay in the gym. He actually went out to an audience and got some oohs and ahs and some financial contributions.

I think we often have our students in the gym all the time, where they’re only practicing and never getting to perform, in a sense, for real audiences. And so it could be other student audiences. I found this textbook by Brian Lower at Ohio State, who is having his students write parts of the textbook for future students, which I think is a really interesting model and a much more authentic approach to this type of writing. Or in my own class, I do a podcast assignment. My old essay assignment has turned into a podcast assignment, where students get to tap into their interests and do the storytelling and technical communication in another form, but for potential audiences out there. I tell my students each episode gets hundreds of downloads; there will be people out there listening to your work.

Randy Bass and Heidi Elmendorf call this social pedagogies. “We define social pedagogies as design approaches for teaching and learning that engage students with an authentic audience other than the teacher, where the representation of knowledge for an audience is absolutely central to the construction of knowledge in a course.” There’s something about representing your knowledge, communicating in a way that someone else can understand that helps you develop that knowledge to begin with. And as you develop that knowledge, you’re better at representing it. And so there’s a really nice feedback loop with audience here.

The last element that we identified in these authentic assignments is connecting with student interests. This is a photo of my stepson. A few years ago at Christmas he got a new Lego set, and I was amazed at his ability to focus on one task for three hours in a row because he was really interested in building this Lego set. Now, we can’t always get that same type of interest from our students, but often we can find ways to connect with them. So at Vanderbilt, I talked with Susan Kevra, who was teaching a course on the sociology of food. As a kind of introductory assignment, she had her students create a narrated screencast, with some images and some of their own words, about food that was important to them where they grew up. And it was a really interesting way to get to know other students and to see what foods they talked about. This kind of multimedia element I think was also helpful, because they were able to talk about and share pictures of their food and make some personal connections to the course, which can sometimes seem somewhat abstract.

And then, okay, I will share this video because it’s one of my favorites. Wrong button. Let me stop sharing and reshare with the sound on. Screen two. This is from an earth and environmental sciences class taught by Larisa DeSantis, where she asked her students to find their own ways to represent what they were learning, and this young man had a particular-

(singing)

I’ll stop there; it goes on for a little bit longer. This gentleman was a member of the Vanderbilt University award-winning men’s a cappella group, and so he brought the musical skills to this, but he had to find a song.

Speaker 3:

Hey, Derek? Sorry, can you reshare your slides the other way?

Derek Bruff:

Sorry.

Speaker 3:

No worries. Thank you.

Derek Bruff:

My screen one and screen two are in the opposite order.

Speaker 3:

Perfect, thanks.

Derek Bruff:

He was a member of the men’s a cappella group; they won some awards that year. He brought the musical skills to this project, but he had to find a song whose lyrics he could adapt to talk about the Miocene epoch. And Larisa tells me his science was 100% accurate. I love this example because if she had gone with a more traditional assignment for this course, we wouldn’t have gotten to see that, and he wouldn’t have found a way to connect his personal and professional skills to the course topic at hand. Because he did, he created something really cool, and I love being surprised by our students when we open the door for this type of connection and surprise.

(singing)

That’s enough. Now my computer is just annoying. I’m going to hand it over to questions now. That’s my next step. So Jenny, do you have some questions you want to send me from the audience?

Jenny Gordon:

Yeah, I absolutely do, Derek, thank you. We’ve just been thoroughly enjoying the presentation. And there are a lot of people asking if they can get a link to that video specifically after this session. Is that going to be possible? Because everybody loves it.

Derek Bruff:

Oh, sure. Yeah, yeah, absolutely.

Jenny Gordon:

Brilliant. Thank you. So we do have some questions. There’s been loads of interactivity in the chat, both in the Q&A and colleagues talking to each other, which is brilliant, lots of different things, but there are a couple of themes that I’ve picked out. And by the way, if there are any other questions you want to ask, pop them into the Q&A now. First of all, just to focus on the cost of these tools. Now, this is something that comes up for me a lot. I’m lucky, I live in the UK and I work with universities, colleges, and schools around the world, and one of the big questions I get in developing markets and developing economies is about the cost of these tools. Some are free, some are paid for, and the cost varies. How do we ensure that those who can’t afford to invest in specific AI tools aren’t disadvantaged in their learning?

Derek Bruff:

Sure, that’s a good question. I think of, for instance, just ChatGPT, which is perhaps the most famous of these tools. It does have a free version, which is great. I don’t know about access to that free version in different countries; there may be some countries where access hasn’t been permitted. But the paid version is way better, actually. It’s a more up-to-date language model, it does much better with words, and it has these plugins that allow it to search the internet and do mathematical computations. The paid version is significantly better, and it’s $20 a month US, which is not nothing. And so there are some equity issues out there. I don’t have a great answer to that, but I do think this is the kind of thing that institutions have a role in solving. There was a time when we had computer labs on lots of our campuses because computers were hard to access, and so institutions would invest in central resources that would allow for certain types of technology use.

And so I think we’re in perhaps a similar case. You may think about your school or your department: is there a way that you can find some funds to provide access? And I want to be intentional about this. Think about what role these tools are going to play in your courses. How are they going to line up with your learning objectives? How are you preparing students for the workplace, where they may actually be asked to use these tools on a regular basis? I think if you’re intentional, you can probably make a case to someone at your institution who might be able to provide some level of access to these tools, some shared accounts, some central accounts. So that’s my answer. It is something to be mindful of, though. I think it’s a really important question.

Jenny Gordon:

I’m seeing that as policy is debated and set at the institutional level, even the government level in some countries, there will be new models for investment at the organizational or institutional level as well, but it’s not going to happen overnight. So I do agree that providing access as much as possible through institutions is an important thing-

Derek Bruff:

And to look for free options where there are free options. There may be tools that are good enough for what you need them to do that are inexpensive or cost nothing at all.

Jenny Gordon:

We always end up asking about cheating when it comes to using AI, and authenticity in assessment can be interpreted in different ways. Is the assessment itself authentic for what we’re assessing? Is the answer that we’re getting authentic? Some of the questions we’ve had around cheating: do we consider using AI working smarter, not harder, or is it cheating? Because there’s obviously a gray area and a line between the two. But how can you be sure? What are your recommendations for ensuring that what you’re assessing is the original work of a student, and what do you consider crossing the line into cheating when it comes to assessment?

Derek Bruff:

That’s the million-dollar question. It’s really hard because … let me step back just a minute. I find that there are people in professions now who are using these tools regularly to increase their efficiency, to write the easy stuff so they can focus on the hard stuff. There are actual use cases for these tools in a variety of contexts. I do find, though, that, for instance, my wife is a marketing manager, and she’s using these tools to write marketing copy, but everything that comes out, she has to look at and evaluate and decide: is this something we can actually use? What do I need to change here? Her expertise in writing that kind of text is still vitally important to using the tool. As an expert, she might be able to use the tool to create some expert-level work, but if she were a novice just taking the output for what it is, there could be some trouble there.

And the challenge is, our students are novices. And so the question is, how do we help them develop the expertise, with or without the tools? I think that we’re going to need to develop the expertise with the tools. I think any discipline that uses words is going to be affected by these tools, and we need to understand how they work, where they can be useful, and where they’re problematic. So that’s my response: let’s teach the tools. There are AI detectors out there that don’t work very well and cause all kinds of problems with false accusations, so I think that’s a problem. I think that’s one of the reasons we didn’t have a lot of red lights in the room: it’s really hard to police this kind of thing. Part of what’s happening is that these generative AI technologies are shedding light on certain types of assessments we regularly use that were not very interesting or powerful to begin with, and now the tools can do them for us.

And so I’ll share one more quote here; it’s on my last slide. This is from Ted Underwood, who does a lot of work on AI and linguistics. “We will also need to devise new kinds of questions for advanced students, questions that are hard to answer even with AI assistance because no one knows what the answer is yet. These are assignments of a more demanding kind than we have typically handed undergrads, but that’s the point. Some things are actually easier now and colleges may have to stretch students further in order to challenge them.” That’s kind of a hard thing to hear, but I think it’s important. These technologies are changing what’s expected of our students when they graduate, and we need to be responsive to that. I think, honestly, the instructors who are going to have the hardest time are the ones teaching fully asynchronous online courses.

Those have traditionally depended so much on writing assignments, small writing assignments, and it’s just so tempting for students to shortcut those with these tools. So those are the courses that I think need perhaps some of the biggest course redesign. I want to acknowledge that this is a faculty labor issue. We don’t have time to redesign all of our courses today; this is going to take time. And so, anyway, I wish I had a magic wand I could wave to make this problem go away. But it’s really a problem that’s been there for a long time: how do we engage our students in really authentic, deep learning? It’s hard.

Jenny Gordon:

Because obviously, being a video company, the whole foundation of GoReact is video, we’re seeing the use of video feedback and assessment, whether it be formative or summative, becoming more common. Rather than, as you say, completely redesigning a course, you redesign an assessment, keeping the same assessment objective, as you’ve talked about today, but changing the tools you use to do that assessment to get a better outcome. Surely this has got to lead to a better outcome; you don’t want to just change everything to end up with the same outcome. As you said, we don’t yet know the questions we’ve got to ask the students or even the direction in which they’re going, but surely all of this has got to lead to better outcomes in the end as well.

Derek Bruff:

That’s the goal, yeah. And I think moving to audio and video makes a lot of sense actually. Now at some point, AI tools will be able to fake their way through audio and video as well, but to the extent that we’re trying to make human connections with our students, we just have to find the tools that help us do that. And the more we can focus on those relationships, on the motivations students have for being in a particular course or pursuing a particular degree, the more process and structure we can build around these types of assignments, I think the more students are going to benefit from it. I mean, that’s good teaching to begin with, it’s just more important now than it used to be.

Jenny Gordon:

Yeah, we’ve got this sort of catalyst for change now, although it’s been on our minds for a while. I’ve got a very, very specific question that the colleagues in the chat might answer as we wrap up towards the end of this session. It’s about the best AI app that gives mathematical equations in the right format, using special symbols and characters instead of regular text. Derek, you might not know that, but our colleagues in the chat might. Do you have any recommendations for math equation AI, or should we ask our colleagues to answer that within the chat?

Derek Bruff:

I should know this. There’s one that’s been around a little while where you can just basically point your phone at some handwritten mathematics and it will try to answer that for you.

Jenny Gordon:

Brilliant.

Derek Bruff:

Surely someone in the chat can remember the name of that app for me.

Jenny Gordon:

There has been such an amazing amount of interaction in the chat, and one of the things I love doing when we’re running these sessions is to look at that chat. A lot of people are saying, “Can we take the chat as a resource?” We will find some way to share what’s in that chat with you when we package this session up with some of Derek’s content and the recording as well. Last call for any final questions before we finish today’s session and thank Derek for his brilliant presentation. I can’t see any coming in just at the minute. Obviously, it doesn’t have to stop here. We can continue to talk to each other through various different channels, and we’ll send out lots of different contact details at the end and with the session recording as well.

So Derek, thank you so much for your excellent presentation. You provided so much valuable information and, I think, tools for our colleagues to take away and actually start to implement. So thank you very much. I’d encourage you all to discover more ways to add authenticity to your assessment, whether that be assessing authentically or ensuring that the tools we’re using are authentic in themselves. You can see the suggestions we’re making around authentic assessment strategy by visiting the GoReact Authentic Assessment Hub. And I’m hopeful that Abby will be sharing, yep, a QR code just now. Take a little screenshot or a picture of that QR code and you can access our authentic assessment guide and lots and lots of different tools and content that we hope will be helpful to you. Thank you to all of our attendees for joining us and making it really interactive, and we hope to see you again at a future GoReact webinar. Have a great rest of your week. Derek, thanks again. Goodbye everybody.