Higher Education

AI in Writing & Literature: Gains & Pitfalls

See how writing and literature educators have been using AI—what works and what doesn’t

One of the primary concerns raised about generative AI is its misuse in writing situations.  However, there are numerous ways that generative AI can be a benefit for students. Let’s discuss some of the ways in which higher education writing and literature educators have been using AI. What went well? What went wrong?

PRESENTERS & TRANSCRIPT

Presenter:

Dr. Valerie Guyant is an Associate Professor in the Department of English at Montana State University – Northern.  She earned her master's in the Teaching of English, with an emphasis in Renaissance female authors, from the University of Wisconsin–Stevens Point and her PhD from Northern Illinois University, where her dissertation was an examination of female sexuality in vampire literature.  She has also earned graduate certificates in Women’s Studies from Northern Illinois University and in Native American Studies from Montana State University in Bozeman.  Her research is focused on areas of speculative fiction, adaptation, and popular media, such as film adaptations, fairy tales, and serial killers in popular culture.  Valerie has published research in The Many Lives of the Twilight Zone: Essays on the TV and Film Franchise, The Many Lives of the Purge: Essays on the Horror Franchise, The Many Lives of the Evil Dead: Essays on the Cult Film Franchise, Adaptation Before Cinema, The Explicator, The WISCON Chronicles, The Companion to Victorian Popular Fiction, FEMSPEC: A Feminist Speculative Fiction Journal, and London’s East End. Valerie is also an associate editor for The Watchung Review, Dialogue: The Interdisciplinary Journal of Popular Culture and Pedagogy, and Essence and Critique: Journal of Literature and Drama Studies.

Transcript:

Valerie Guyant:

I was looking back at my ChatGPT history (yes, I actually have a history; I’m not sure whether I should be proud or sad about that) and realized that I have been using it for about a year. That means I started at least playing with it fairly early on, when it first went super viral, and that I have used it both for my own benefit and in the classroom.

So when the proposals for GoReact came through, I thought it might be very interesting to have a conversation with others about use of AI and when it is or isn’t actually a useful tool versus the bane of our existence, which is what I hear a lot of other English professors talking about. We seem to be very split between it is horrible and it can be useful. So I wanted to talk about some of the ways that those are both true. I am going to share a slideshow with you. It’s going to take me just a brief second to get it pulled up because technology is fun, but if you would give me just one second.

Yay. Viewing it as a pretty slideshow. So I know that the title is not this, but I was having a little bit of fun with it because I’m not really talking about the uses of AI in art at all, which I know is an entirely different conversation that we could have, and instead I’m very specifically looking at it in reading, writing, and literature classrooms.

Before I get started talking, I wanted to do a very quick poll just to gauge some of the possible interest and opinions of people in here. So I believe that… There. So the first one is please, very short answer. You can do even single word answers of, what’s your reaction when you hear the term AI?

And then Karina was going to share a little bit of those results with us, correct?

Karina:

Valerie, I’m sorry. It seems like it is not wanting to share right now.

Valerie Guyant:

Oh, okay. That happens. It happens.

Karina:

So we can try the next poll.

Valerie Guyant:

Sure, let’s do that. This first one was really just designed to see what other people are thinking about. I know that when I first heard about AI, my first reaction was, well, my students are never going to actually write their own things again. And I have since moved pretty far on the spectrum into ways that I find it useful in classes. But we can go on to the second question. I’m fine with that.

Great. So as you can see, roughly 80% of us pretty much agreed that it’s a maybe. Some people say yes or no very distinctly. And one of the reasons I asked this is because I have actually had a really fun, long-running disagreement with one of our communications professors on campus about the ways in which use of AI might be academic misconduct in certain situations, but not necessarily automatically. And my argument about plagiarism was based on the wording of the plagiarism policy on our campus, which says that plagiarism has to be the use of another person’s work presented as your own. I was really just being facetious when I told him, “But AI isn’t a person,” but that “maybe” is actually what most people tend to say, because specifically, it can be a useful tool if it is properly cited and if it is engaged with in a way that the professor is aware of.

The way I explain the academic misconduct component, which is the biggest problem, to my students is this: if you ask a friend for some advice on what to research, or for their opinion on something, and you use it as a jumping-off point, then you wouldn’t necessarily consider that academic misconduct. But if you asked your friend to write your paper for you, obviously it is. And I don’t know whether referring to AI as their friend is the greatest idea in the world, but it certainly seems to resonate with them.

Exactly. As Christopher said, AI shouldn’t be used as a crutch, but it can be used as a way to help you focus your own ideas.

I haven’t played with Perplexity yet. I will have to try that one. And there are so many of them now. But for the most part, I saw one really good piece of research that said that ChatGPT specifically had reached 100 million users more quickly than any other app, even quicker than TikTok, which, as we probably all know if we work with students, they use a lot.

So my final poll question to lead us into the rest of the discussion is whether you use AI in any way in your classrooms.

And we have a fairly even split, which, from all of the research I’ve done, holds across a pretty wide range of different professors I’ve talked to, at least across the nation, as far as the split between people who do and don’t use it.

So one of the reasons I ask that is because obviously there are some very real reasons to be concerned about using AI. The first is one that people mentioned already in the chat: its accessibility may in fact make students become so overly reliant on it that they don’t engage, they don’t do their own work, they don’t learn the materials. It could lead to them simply repeating things that are already out on the web, which is a very real concern. Then there’s plagiarism, as I mentioned, and not just inadvertent plagiarism but misuse of AI in general, and then ethical concerns. Because if you are using AI, generally speaking, that AI is obviously scraping a range of different materials to give you information, and where that information comes from raises some ethical concerns.

I was just catching up a teeny bit on some of the chat. But those particular concerns, those are things that most definitely we can work with our students and help them understand the ethics, help them understand that if they’re going to use the materials at all, they need to actually avoid plagiarizing them.

The next set is a little bit deeper level, and that is that the quality and accuracy of AI-generated materials can often be questionable. Everybody… well, I am assuming most of you have heard about hallucinations, which is the term used for when AI says something exists that doesn’t, and accuracy and detail might be lacking. AI itself is of course pulling information that may already have bias in it and therefore can perpetuate bias. In addition, the prompts we use can actually introduce more bias into the results we get back. And of course there is the loss of a human component. If our students are writing, they should be thinking about interacting with their audience, and AI-generated material isn’t designed to do that.

I figured that it was more useful to start with the concerns we have, so that’s why I was beginning here. There’s also a concern about unequal access: not everyone has access to the same tools. And there’s skill development: AI can be used for writing tasks, but that might hinder our students from actually learning how to do those essential writing and critical thinking tasks. And then there’s one that, I will admit, I hadn’t thought about until I started really digging into it: most of the different AIs that can be used for writing (there are a couple that don’t) may retain any data from our use of them, including not only when we used it, what we searched, or what we typed in as prompts, but also any details that we included in our writing prompts, and they continue to use that material. If students are doing any sort of writing that has personal information in it, we certainly should make them aware of that.

So before we move on, are there any concerns other than those that I mentioned that people have about the use of AI?

How to properly cite the use of artificial intelligence? I have seen a couple of different versions of this. I believe the MLA has actually come out with a specific recommendation for how to cite it, but that is if you’re using a quote within your materials rather than a significant amount. I have also seen a couple of professors who include details in their syllabus stating that if you used AI to help generate material, you have to include what prompts you used, what results you got, and the extent of what is AI generated, up to and including telling students that they have to highlight the AI-generated portions. I do not know whether APA has come out with a citation format.

Oh, they have. Awesome. Thank you. I do far more MLA myself, so I hadn’t used it. So both MLA and APA have made recommendations for how to properly cite it, but again, this is if students are using something small within the context of a larger document, much like you would cite a website, rather than using it as a beginning point or drawing on it for a significant amount of the content. So obviously citations themselves are an issue.

So this was a particular website from montana.edu (I’m from Montana) that talked about how we as professors can generate good AI prompts if we’re going to actually show our students how to do it. Sorry, I was just reading the comment. So yes, you can’t actually go back and find the exact same thing again. Because AI is dynamic, much like Wikipedia, it shouldn’t be considered an authoritative source, and I certainly agree: I wouldn’t consider either of them authoritative sources. I would consider them good places to start from if you are looking for general information. But I agree that is a healthy concern, in my opinion.

So I wanted to actually show you a skill development example that I have used in my Writing 101 classes. In the world of Google, as I said, students struggle with starting other forms of research, with using databases, with understanding how to break down a research question for these larger databases. And my lovely librarian husband showed me this use of it. This was specifically using ChatGPT, and the prompt we typed in was, “Please identify search facets for this research question: does participation in youth sports reduce recidivism rates?” I have a lot of criminal justice students who take my writing classes, so that’s where this one originally came from. And interestingly, you can see how it actually gives you different facets: demographics, country, type of sport, type of offenses. It’s actually giving you ways in which you could narrow your focus as a student to do better research, which might be an interesting use for other people. But of course this doesn’t really help with using a database.

So the next thing I asked it to do was to specify what facets I should use to search databases specifically, and it gave me specific keyword searches and how to use date ranges. Some of this I obviously tell them all the time myself, but that first part, the keywords, is valuable. They quite often have issues coming up with keywords, because once they have a question, they’ll have difficulty identifying additional ways to phrase the same question.

So the reason I’m pointing this out is that it’s not actually asking AI to create something that they will then actually cite in a paper or that they will use instead of writing their own paper, it’s using it as a useful tool.

Some other possibilities: one is having students create a prompt where they ask the AI to write something, including citations, and then do their own research to validate those citations, to look through them, to identify whether they’re good citations or whether they exist at all. I’ve actually used the second one that I’m talking about here in a class I teach in writing for stage and screen, so they’re doing actual playwriting, and many of my students have never written a play before. Some of them have never even read a play before, and so I have them start by simply playing with introducing a prompt into AI and then trying to continue to refocus it.

One of the things that is interesting is you then find out, for instance, that ChatGPT specifically will not write a short story of any sort that includes violence; it will actually give you an error message. But the reason I have them do it is that it gives them the format, and then they have to continue to add to it, adjust it, refine it, give it more depth and that personal component. They’re not allowed to turn it in to me until it is less than 20% the original ChatGPT version, which actually seems to work really well, because I’m giving them permission to use it but then trusting them to continue to make adjustments.

Certainly you can use ChatGPT as a beginning research tool, basically to scrub the internet to identify possible sources. So for instance, I have a student who is working on a book about different flowers and what they mean and different times they have appeared in literature, and she was having problems finding a variety of places where daisies showed up in books because there are a lot of them, but also because she doesn’t have that depth of knowledge of all of the literature that I may have read or any one of you may have read. And so I recommended that she actually ask ChatGPT or another AI for a list, and then she has to go actually look at that source for the quotes that she wants to use. So she can’t just use ChatGPT and not continue to move forward, but it’s a good beginning for her.

Then this last one that I share here is one that one of my colleagues has used a number of times, and that is she has the students read a short article in class, and then she has them ask AI to write a summary of that article and then go through and critique the actual summary that AI created for what it’s missing, what it misinterpreted, and what it just should have included more of.

So those are obviously only a few possibilities, but one of the things that really needs to be emphasized is, as several people in the chat have said, students themselves are definitely going to use any sort of resource that they have, just like students use phones when they shouldn’t, or started using calculators when their professor told them to do long math by hand. Some students will always use the newest technology to cheat, and some students, with careful guidance, can use those tools to actually increase their understanding or to find materials that they might otherwise struggle to identify.

And I don’t have a specific quote of it, mostly because it was on Facebook and I couldn’t find it afterwards, but there was someone in one of the groups that I follow who discussed the ways in which ChatGPT specifically, and generative AI in general, can help people bridge gaps in understanding so that they can continue to succeed. He was specifically discussing the ways in which language itself has a certain controlling element to how it is supposed to be used, especially in an academic setting. And students whose preparedness for college is not where we would like it to be can sometimes use these generative AI tools as a way of increasing their likelihood of succeeding.

For instance, if someone is in a difficult class, and right this minute the one that pops to my mind is anatomy and physiology, but any course where the instructor is asking them to read content with vocabulary that they may not have, they certainly can ask generative AI for an additional summary of the content as a beginning point.

Sorry, I’m looking at the comment here that, “One has to have foundational knowledge to write effective prompts and to discern valid information from AI hallucinations.” So I guess one of the things that I’m trying to emphasize is that, at least in a beginning writing class, I would highly recommend that we actually help our students develop that foundational knowledge to write effective prompts and to identify AI hallucinations. Just as we have them read articles or write in specific styles in order to develop foundational knowledge, helping them develop that foundation is an important component.

So I would really like to open it up to any questions that people might have or specific examples that you may have of types of AI usage that have worked well in your classroom.

Sorry, I’m scrolling through the chat a little bit, and I noticed this comment about, “It’s interesting to think about the difference between citing a source versus citing a process,” and most definitely, I agree. I don’t actually have my students cite process, but I do know other people who include it in their syllabus. I do, however, have them cite the source itself if they continue to include a significant quote from having done generative AI.

Oh, so earlier I had mentioned that I’ve seen a few people who in their syllabus, they specify you need to include what prompt you asked AI and what process you went through to use it and what percentage of that material you continue to use. So that’s what I meant by process.

Okay. “The use of AI to create case studies for teaching and comparing across platforms along with verifying the information.” Yeah, that’s an excellent example. I have not actually had them do case studies yet because I teach primarily literature courses instead.

“How do you navigate potential plagiarism conversations with students?” One of the things that we talk about is the difference between plagiarism and academic misconduct, and one of the things I do (you don’t have to, but I do) is bring up the policy that is on our website and discuss the ways in which using AI does and does not fit within the different categories. Having something else create an entire essay for you and simply turning it in is just like paying someone to write a paper for you; it’s still academic misconduct, it’s cheating, and therefore you shouldn’t do it. But getting ideas from it is similar to searching through a bunch of different websites to look at different pieces of information.

Yes, having step-by-step guides to help students; for instance, having them ask generative AI to create an outline and then expand that outline personally, because it helps them organize.

My first experience with AI was having someone ask it to make a comparison between the graphic novel of The Crow and the movie, and the AI presented a comparison between the graphic novel and the TV show. So it didn’t include any of the information that they were supposed to actually be talking about, so I completely sympathize with the hierarchical systems of kitchen management being actually about a home kitchen instead and not recognizing any of what was said in it.

And that is actually one of the things that I work with my students a lot on: understanding that you have to work with, not rely on, AI if you’re going to use it at all. You can use it for ideas, for organization, for information, but you shouldn’t rely on it to simply produce an essay for you. Introductions, especially in long essay formats, can be something freshmen struggle with. So I’ve had them use generative AI to craft three different ways of introducing a topic, then look at the ways those introductions might be a good place to start a paper from and what they like or dislike about them. And often what they end up doing is taking bits and pieces from each of them to craft something that’s truer to what they wanted to talk about in the first place.

So earlier somebody asked about using different social annotation tools for engagement around reading, and I’ve used Perusall but not Hypothesis. They work really well for students to see what other people in the class are thinking about or thought was important in a particular piece. I’ve also used, for video, not for reading, a similar tool. It’s called VideoAnt, that you can actually input a short video and then you can add annotations and everybody can see those annotations in real time. So that can also be a potential tool.

Did anyone else have specific examples? I’m going to use some of these because I haven’t done them before, but again, really the biggest thing that I personally want to emphasize is that using generative AI tools in these smaller ways actually helps students see not only their usefulness, but also their potential problems and what they’re not useful for. And certainly I have had my students start with a topic, have AI create a 250-word discussion of it, and then ask them, “Where in what was produced would you like more information? Where do you think an example would be useful?” Because generative AI doesn’t tend to use examples unless you specifically tell it to, students often won’t have done that, and that gives them an additional way to look at what writing should have. Another way to engage with that component is to ask, “Where do you wish it said more? Now how about you add that information?” That helps with them expanding upon an idea rather than relying upon a result.

Did anyone else have anything that they would like to share or specific questions they would like to open up to the rest of the group?

Matthew Short:

I’ll give just another moment here to see if any other questions pop up in the chat. I don’t believe I see any in the Q&A window at this point, but if anybody does… Oh, there was a comment from Christopher.

Valerie Guyant:

Yeah, Diana Hacker’s Rules for Writers. I also used to use that, and sometimes Strunk and White’s The Elements of Style. And I still do; although I don’t use the entire book, I still reference parts of them and sometimes compare the rules they lay out specifically to things that students, or the generative AI, have produced. I have not yet tried to take some of Diana Hacker’s rules and ask generative AI to follow them, but that’s something I might play with this summer after classes are done, because I think it might be an interesting exercise in seeing how it adapts its output.

Matthew Short:

All right. So at this point, I’m not seeing any other questions popping up in the chat feature, so we’ll start to wrap things up.

I would like to start off by saying thank you to Dr. Guyant for today’s presentation to kickstart our session. I’d also really like to thank all the attendees in the chat who have been sharing their experiences, resources, lots of great content within there that has been helpful, and I’ve got some bookmarks thrown up on my screen to check out later on. We appreciate everybody’s active engagement in Dr. Guyant’s session here today.