Higher Education

AI in Higher Ed: Navigating Opportunities & Challenges

Panel discussion on how educators view the opportunities and challenges of integrating AI, and its impact on the future of work and skills

Watch this panel discussion to hear various perspectives on the intersection of AI and higher education. Explore how other educators are tapping into the opportunities and navigating the challenges AI has presented, and its impact on the future of work and the skills students need for success.

PRESENTERS & TRANSCRIPT

Presenters:

Sandra Frempong, Ph.D., Administrator, Blassys Academy

Valerie Guyant, Associate Professor, Montana State University – Northern

Dr. Valerie Guyant is an Associate Professor in the Department of English at Montana State University – Northern. She earned her Master’s in the Teaching of English, with an emphasis in Renaissance female authors, from the University of Wisconsin–Stevens Point and her PhD from Northern Illinois University, where her dissertation was an examination of female sexuality in vampire literature. She has also earned graduate certificates in Women’s Studies from Northern Illinois University and in Native American Studies from Montana State University in Bozeman. Her research focuses on areas of speculative fiction, adaptation, and popular media, such as film adaptations, fairy tales, and serial killers in popular culture. Valerie has published research in The Many Lives of the Twilight Zone: Essays on the TV and Film Franchise, The Many Lives of the Purge: Essays on the Horror Franchise, The Many Lives of the Evil Dead: Essays on the Cult Film Franchise, Adaptation Before Cinema, The Explicator, The WISCON Chronicles, The Companion to Victorian Popular Fiction, FEMSPEC: A Feminist Speculative Fiction Journal, and London’s East End. Valerie is also an associate editor for The Wachtung Review, Dialogue: The Interdisciplinary Journal of Popular Culture and Pedagogy, and Essence and Critique: Journal of Literature and Drama Studies.

Julie Hollenbeck, Radiation Therapy Program Director, Clinical Assistant Professor, University of Michigan-Flint 

Julie Hollenbeck, M.Ed., R.T.(T), is the current Radiation Therapy Program Director and a Clinical Assistant Professor in the Department of Public Health and Health Sciences at the University of Michigan-Flint. Her primary research interests lie in the areas of interprofessional education and enhancing critical thinking among students. Continuing to actively engage with contemporary educational challenges, Julie contributed to a critical issues forum on the intersection of artificial intelligence and critical thinking in higher education. Her practical insights reflect her dedication to integrating new technologies and methodologies into her teaching and research to better prepare students for the evolving healthcare landscape.

Deanna Carr-West, Clinical Assistant Professor, University of Michigan-Flint

Deanna Carr-West is a Radiation Therapy Clinical Coordinator and a Clinical Assistant Professor in the Public Health and Health Science Department at the University of Michigan-Flint. Her research focus includes interprofessional education and critical thinking.

Alicia Plant, Associate Academic Dean, Beulah Heights University

Alicia is a college dean and higher ed leader specializing in digital learning. She led a successful pandemic response, migrating her university to a new Learning Management System. She regularly presents at academic conferences on digital learning and researches AI’s impact on academic dishonesty. She is an expert in organizational leadership, workflow creation, managing academic departments, institutional effectiveness, and accreditation/university compliance.

Transcript:

Jenny Gordon:

I want to start by asking our colleagues who are joining us, the participants on our Zoom call today, a quick poll question, and my colleagues are going to make this poll available for you to vote on now. The question asks, do you use AI in any of your teaching processes? So while you are answering that, we’ll start chatting to our panel and find out how they are potentially using AI in their teaching processes already too. So our panel represents many different subject areas in higher education, everything from accountancy and finance, English literature, Native American studies, public health, health science, digital learning, and academic management. So let’s start by asking you, how would each of you sum up your approach to or philosophy of AI? Valerie, can we start with you? Nope. Valerie’s not joined us at the moment, but she might come back in just a second. Alicia.

Alicia Plant:

Sure, sure. Happy to. Just yesterday I read an article from Inside Higher Education that gave some interesting data from 2,100 public participants. They asked the question, who do you trust with AI? And we in higher education edged out other industries. So 49% voted that they trust higher education to handle AI most responsibly. We tied with the healthcare industry, and coming up a little bit behind us was law enforcement. So I want to tie my approach to AI to that: I approach AI first responsibly, as best as possible, using the data available, and also ethically. I prioritize the ethics of engagement and the responsibility of it over the innovation piece. I also approach it through the data. I stay connected to the credible data that comes to us from our higher education publications as well as some of the techies. I’m a big fan of Google Cloud Next; the CEO constantly publishes, and just on Tuesday he published a great article related to generative AI use and how it’s advancing in our industries. So those would be my approaches to AI.

Jenny Gordon:

Thank you. You’re right, you’re including some big words there and things that we’re all thinking about, around ethics and the like. Julie, how would you add to that?

Julie Hollenbeck:

I think that was a wonderful response and I really can relate to it. I think maybe to add just a little bit, my philosophy is evolving and it’s changing. It almost seems daily right now as we’re learning more about it. So keeping an open mind as to the benefits and the challenges, and I think it’s great that people are looking at higher ed as a leader in sort of demonstrating how we should appropriately use AI. And so I think for me, trying to keep a very supportive environment for my students to try it, but then also helping to guide them ethically, technically, and practically on how to use it responsibly is kind of part of my philosophy.

Jenny Gordon:

And Deanna, would you say you have similar thoughts around that as well?

Deanna Carr-West:

Yeah, very much. Julie and I are fortunate to be working kind of hand in hand as we navigate this AI thing, and I really piggyback off of that. The ethics and the responsibility are key in all of this, but also in educating the students, how to validate the information and making sure that they’re doing the work in addition to AI assistance.

Jenny Gordon:

Yeah, absolutely. And the poll response has just come in from the participants on the call: 38% of them say that they are using AI in their teaching processes so far, and 63% are not yet. And I’m sure we’ll get to lots of reasons why that might be. Sandra, where are your colleagues on the issue of AI? Have you seen any trends, either that you’ve set or that you’re beginning to observe, throughout faculty and throughout your colleagues?

Sandra Frempong:

Well, I am in accounting, and accounting is the language of business, and business is about making money. Money and ethics just do not mix. Sorry, but that’s the truth, because we always want more for less, right? And so whenever you have money and information pertaining to money, and especially if you look at a lot of accounting scandals, any type of scandal, there’s always money at the center of the motive. When people look into why it happened, there’s something tied to money. So for AI, it’s extremely important to be teaching the students about what to use and how to use it, but to emphasize how it could also be misused in terms of presenting to clients to make things look good. In accounting, there’s a joke that you have two books, the one that you cook and the one that is done. Okay. It depends on who you’re trying to impress.

If you’re trying to impress the IRS, the book will look like you’re poor because you don’t want to pay taxes. But when you want to impress the investors, then the book is always good. You’ve been doing fantastically well. You may not have the money to show it, but it’s in there. So [inaudible 00:06:37] the software being used, the approach, and also keeping abreast of what’s going on within the industry is extremely important, especially for those of us in the accounting field. You constantly have to know the software out there, the changes being made, and the rules being applied to ensure that what we’re teaching the student will actually help them to develop the work and present financial information fairly and correctly and not misuse the AI that they’re using to mislead.

Jenny Gordon:

Absolutely, and I think we’re going to come on to questions around sort of the policies that you may or may not have in place, or at least things to be considering when it comes to setting policy with the use of AI in education in just a second. Valerie, I’m so glad that you could join us. I lost you for a minute there, but I did introduce you and I wanted to ask you how you would sum up your approach to or philosophy of AI.

Valerie Guyant:

My philosophy of AI is, I guess, two sides of the same coin: if it’s used incorrectly or improperly, you will never learn a thing, but if you use it well in a classroom, or at least in my classrooms because I teach writing, you can actually use it to help students learn instead. So I teach at a university with a lot of under-prepared students. They may never have had to create an outline or write a proper essay, they may never have even seen one, and they certainly don’t often know how to take notes. So showing them the ways in which AI can help them do those things actually gets them up to a more level playing field.

Jenny Gordon:

I see. And I think, again, we could go down the rabbit hole of talking about leveling up and the opportunities that AI does bring for students who haven’t had the exposure that better educated individuals have had. And as I mentioned, I’m here in the UK. I’m lucky enough to work all over the world, and we can see that AI is definitely bringing parts of education closer to those who haven’t had that exposure before as well. There’s lots and lots of benefit to it. And looking at broader opportunities in association with it, Sandra, what do you think are the direct implications of AI for the future success of students?

Sandra Frempong:

In accounting, students have to be aware of what’s going on, and I would place responsibility on us, the educators, to be sure that we’re feeding them the right information. They cannot be lax; they cannot pick and choose what they need to learn in AI because things are changing. Just when you’ve mastered how to do your banking online, somebody is looking at it and saying, oh, really? You’re still using that? You can just click and just wipe something and just scan a barcode and you can get the same information, so you look obsolete. So the implication, of course, definitely will be keeping up, and that can be very, very stressful, but it’s doable, and it’s important to affiliate our students with organizations such as the CPA Academy, where they’re familiar with courses, and to stay close with practitioners to help them navigate what to expect when they graduate.

Jenny Gordon:

Julie and Deanna, you were involved in the critical issues forum on the intersection of AI and sort of critical thinking in higher ed. What have you changed in your teaching to make sure that critical thinking continues to happen?

Julie Hollenbeck:

Yeah, that’s a wonderful question. And when I first started realizing my students were using this, that kind of scared me a little bit because I thought, oh no, how do I know if this is generative AI producing this work or the students themselves? So I think the first thing that Deanna and I discussed was that we do a lot of knowledge-based lecturing in our classes with many little projects and active learning, but we use papers, research papers, and presentations to sort of figure out, okay, they know, but do they really understand? And we decided at that moment to sort of table papers for a moment as we were learning how to handle and how to navigate this, and we brought things back into the classroom.

So we started using more presentations. In fact, this has been the week of presentations for our students, for all their classes, but also more frequent feedback loops with those presentations. So we touch base with them during each step, in more bite-sized pieces, so we really know that they’re doing the work. And then also, if you are presenting and then you’re needing to ask or answer questions, you really need to understand those concepts. And so that’s kind of how we’ve started to navigate it. I don’t know, Deanna, if you want to add to that.

Deanna Carr-West:

Yeah, and I think one of the fun things that has come from this too is the touch point that Julie referred to. Students can’t just show up and turn in a paper at the end and kind of check that off of the list. When we come together at our checkpoints, we’re talking about where are they at and asking each other to engage with each other’s projects and provide feedback and kind of helping them critically think it through and how they could develop certain aspects of their paper to improve it. So it’s really fostered the critical thinking, not only on their project, but also engaging them with others as they work through their presentations also. So I think that that’s been very helpful.

Jenny Gordon:

I know when we spoke last week, we talked about authenticity of assessment to ensure that that critical thinking opportunity, for example, can continue to happen. And not to go off track too much, but AI is great; it’s kind of been the catalyst for us to really consider whether or not some of the assessment methods we already had in place were appropriate or authentic in terms of testing the right way in the first place. And I think that that has really allowed us to think again around things like some of the transferable skills, durable skills like critical thinking. How do we assess those? Can we do that in a more authentic way? How can we use technology to do that?

That doesn’t necessarily relate to AI, but it is forcing us to think about really innovative ways to do that again, that might then lead onto finding AI solutions that can then support the development of it later. Alicia, you’ve been involved in your university’s response to AI and you’ve had to approach it from different subjects. What was the faculty worried about and what was the reality of how students were using AI in your experience?

Alicia Plant:

Initially, academic dishonesty. I think it was a panic, another layer to police, so to speak. So I think that was the first response, but we quickly were able to adjust with our research scientists, who gave us a way to detect early on whether or not a student indeed was using generative AI in developing their papers. So it was very helpful that those products were advancing right along with the generative AI models. That helped. I would say that was the initial response, the academic dishonesty piece. Since then, we’ve conducted additional studies, and in the most recent one, I think the concern now moves toward faculty. A little more than half of our faculty who participated in the study noted that they feel that the challenge in the classroom with AI now is the lack of faculty development for preparing to assist students with AI.

Is there a real model? Are there models for training and development to help them in the classrooms? And nearly 100% of those who responded find that the lack of faculty development, or the lack of a standardized model or something of the sort, would be the challenge in the classroom moving forward. And I think for students, it tended to be that at first it was, hey, exploration day for them, of course, and they knew about it. There are studies that show that students knew about it way ahead of time, before faculty did, and they were abreast of it, and we knew that they would be. So at first there was curiosity, there was practice, there was engagement, but we quickly responded in helping those students adjust: hey, this doesn’t change the academic expectation here. We were already expecting academic integrity and we will continue to expect that. So we had to make the shift and help students understand this doesn’t change the quality of work that we’re expecting from our students here.

Jenny Gordon:

Anybody else have any points to add to that?

Valerie Guyant:

Yeah, actually, I recently discovered that when students use Grammarly to help with sentence construction or vocabulary, because they’re looking for that extra, more scholarly-sounding level, even if the ideas were originally theirs, putting it through Grammarly or a similar product will also get it tagged as being AI-created, and then you have to have real conversations with them about what it is they actually did. Because there’s a huge difference, at least in my opinion when I’m grading, between somebody who wrote an essay and then got some generative AI help with elevating it and someone who just let ChatGPT write an essay for them. Just another layer to think about.

Jenny Gordon:

Absolutely. This sort of area and this kind of questioning leads us on to thinking about policy, and the policy that you either have at faculty or whole-institution level. I want to ask a poll of our participants before we as a panel discuss policy: does your institution have an AI policy in place? Let us know. Perhaps it’s something that’s being worked on. It’d be great to see what the answers are to that. So we talked about faculty development, Alicia, and the requirements for that, standardized models, academic integrity, and that does lead us on to policy. Policy can be a guiding star, but policy can also slow down innovation. So when it comes to policy, either at your faculty or department or institution level, where are each of you on that journey? Sandra?

Sandra Frempong:

I was just going to follow up on Alicia’s comment, and she’s right: students, they’re younger and they use these tools more than we do. They talk to each other, they have friends, they have networks. So by the time faculty is able to pick up on what’s going on, they’re already like, really? You just woke up? It’s always more like that. So sometimes it’s not even good to engage them, but I like the policy of, hey, we’re aware that you know all this. Okay. And you may be using it, but we’re still expecting this. It’s not going to change our quality expectations. So academic dishonesty is an issue.

What’s central, though, when you talk about policy: you can have policy from here to eternity. Are you going to implement that policy? And when you have faculty complaining about a student, are you going to back that faculty up? Because if you’re not going to do that, then that policy is not going to be effective. I remember a case where a student, who didn’t even use AI, just submitted an assignment, padded it a little bit, and then resubmitted it for another class. It’s almost like, hey, why use a different pot? I already used it to cook this, use it for something else. So they submitted the same assignment two or three times for three different courses, and a faculty member picked up on that and said to give a zero. Well, with no support from the department, even with the policy that you have within the syllabus, nothing is going to happen. So it depends on leadership.

Jenny Gordon:

Absolutely. Right. Where are you with regards to policy in Michigan, Deanna?

Deanna Carr-West:

We currently do not have a departmental policy in place. It’s really left up to us as instructors as to how we want to implement it or utilize it. So I mean, we’ve really focused on talking to the students just about honesty and integrity when they’re doing their assignments. Our students have to take a board examination to be a certified radiation therapist. And so we’ve had that conversation of, this is information that you need. And the students actually all came to the same conclusion that they would be cheating themselves by not putting in the study time and actually completing the work and learning the information that they need to learn. So I feel like we’re in a little bit of a unique situation in our particular field for that.

Julie Hollenbeck:

Yeah,-

Jenny Gordon:

Sorry. Julie, go.

Julie Hollenbeck:

I would just kind of piggyback off that by saying I don’t think the university as a whole has a policy. I think generally we have fortunately had a lot of help from our online educational department with workshops and tools, and we have language for syllabi, but I think they’re kind of allowing faculty members to make those decisions for each of their classes at this point.

Jenny Gordon:

And the poll there suggests that 40% of our colleagues have something in the works. About 25% don’t have a policy at the minute, but 35% do. Any advice when it comes to looking at and setting policy from any of our panelists, anything that you would advise or suggest colleagues think about as they consider their policy for AI?

Alicia Plant:

Yeah, I would say get feedback from the community that you are a part of. It’s important to understand how they feel about it, their perceptions of AI. Because what I find in the research is that it varies along a continuum. There are so many different responses. There are those who want there to be a structured policy, standardization across the board, but then there are others who feel they need the autonomy in their classrooms to manage it their own way. So I would say get feedback from those you’re working with to find out a little bit more; as opposed to just forcing the policy, find out their perceptions of AI before setting official policy. You may find that it will vary.

Valerie Guyant:

I would agree with that wholeheartedly. I have a few colleagues who simply want our provost and our administration to issue an edict, and there are several of us, myself included, who are actively using AI in the classroom, and we don’t want an edict that says students can’t ever use it because we’re actually using it in ways that are helping our students. So we’re actually having that exact conversation on our campus right now about how to balance those two needs from different faculty.

Jenny Gordon:

And I guess sensibly we’d engage students’ feedback as well, right? Because like you just said, they’re already using it. A great anecdote we talked about on the call last week was my wonderful father, who says, “Oh, we’ll never use AI,” whilst he’s watching his predetermined television programmes on Netflix while looking at his banking on his iPad, which has logged everything for him based on AI. So I think that engaging with those who have already streaked ahead of us in using things, just because that’s what they do in the systems that they’re accessing, is a really beneficial thing to do as well. On that note, then, let’s set another poll question for our colleagues online. And let’s ask, how many of you have students using AI already? That would be a good question to ask. We’ll ask that and see what their responses are. And while we’re asking that question, I would like to know, have you put any boundaries around the use of AI with students? Alicia?

Alicia Plant:

Absolutely. 100%. Yes. I say that in jest, but yes, of course. And what we refer back to is just the standards of integrity with learning. We are developing critical thinkers who are asked to produce some original work. So when doing that, we cannot go to ChatGPT or any other, and I know Chat is the most popular, but there are others, of course, and we can’t just go and say, “Hey, write my paper.” So we have to engage properly with any use. If an instructor decides to use AI, as Valerie has decided, and I think she’s been managing it well, I think that’s suitable. I absolutely do. But we do have to set the guidelines for how to use it. And I think that depends upon the instructors and the university that is engaging it.

But those guidelines always to me go back to our standards, our standards of engagement in learning, that we expect integrity, we expect academic honesty from our students as they engage any model. Plagiarism is still intact. If you go to Chat, it’s just plagiarizing. If you go to Gemini, you’re just plagiarizing. So reminding them of the policies that were already in place, I think it’s the best way. It doesn’t change anything. But now we have another layer of something that could help us or something that could hinder our development as critical thinkers. So we have to find ways to leverage the communities of persons like Valerie and others who want to engage chat or a generative AI in the classroom.

Jenny Gordon:

Any other feedback on that? I think,-

Julie Hollenbeck:

I agree with Alicia completely. Deanna and I got to participate in a workshop and we had a faculty member from the math department and he said, “We’ve been doing this for a long time because calculators came on board and they could all of a sudden do everything. And we’ve already had this conversation about a tool that could be harmful or helpful.” And I agree. I think again, as educators, I feel like it’s our responsibility to implement responsible generative AI use in our classrooms and teaching our students how to do that.

Jenny Gordon:

Absolutely. My daughter is taking part in a science fair, which as you know, is just either the make or break of family relationships sometimes. However, she came up with a great idea to assess which chemical compounds made the best slime, but she didn’t know how to ask the question. She didn’t know how to pose the challenge that she wanted to solve. And we used ChatGPT to form a question that made sense for her, so to form something that she could actually answer. And that for me was a really good way of using ChatGPT. She’s nine, she’d never heard of it before. I can guarantee she’ll be using a lot more or other systems similar as she gets older. But that for me was a great way of using something that really enhanced that whole experience and will potentially enhance the output as well.

So those tools, and we’ll come onto your favorite tools in a minute when we ask about that. The poll results: 61% of our colleagues say that their students are using AI, 33% say their students are using it but won’t admit it, and 6% say that they’re not yet. Very interesting. So when it comes to the boundaries and the use of AI, when we’re making that clear to stakeholders, whether that be students, whether that be colleagues, whether it be leaders, what are the main disadvantages of AI and where do you think we need to be most careful, thinking about areas that we might not have talked about so far? Sandra?

Sandra Frempong:

Well, with accounting, we’re between a rock and a hard place because we promote AI to simplify some of the bookkeeping work. And so when teaching, we’re imparting to our students how to incorporate AI to generate financial statements. But anybody can, I mean, you can use any software to spit out a financial statement; you have to understand what the numbers mean. Okay. You have to know what the financial statements are and you have to be able to interpret them when you’re dealing with clients. These clients are relying on this. So if students are relying too much on software and other AI tools to prepare financial statements, then really, it’s not even about creativity, it’s about understanding. Okay, you can do creative accounting, but that is not a joke. You’ve heard of a lot of people who have gone to jail.

So the way we emphasize the ethics in accounting in terms of the usage of software and generating financial information is: look, when you talk about money, you’re getting to the heart of people here. Okay. So you cannot give misleading information. So if you don’t understand how to interpret the financial statement because you’ve been relying too much on using software, then there’s a problem. And so that’s some of the things that we’ve always had to contend with, because on one hand we’re saying, use QuickBooks, use ChatGPT, use this and use that. But then when it comes time to interpret the financial statement or explain to the client, then students don’t really understand how to accurately present the data and,-

Jenny Gordon:

Yeah, the fundamentals.

Sandra Frempong:

Yeah. The fundamentals of accounting. So that’s one disadvantage, that there’s a tendency to rely too much on it. And so the way I’ve taught my courses is to actually have students present financial information. I actually have them create companies, and then they will apply all the accounting concepts and methods and then present it at the end of the semester, okay, to explain how the concepts will apply and be used within the units of the organization. So that way I know that they actually understand the concepts and are not just relying on AI to do the work.

Jenny Gordon:

And I imagine Deanna and Julie, that there’s some similar disadvantages in the health sector around that over reliance as well.

Deanna Carr-West:

AI is actually used quite a bit in our field as practicing radiation therapists. And so we do rely on it, for instance, for our image analysis when we’re looking at CT scans and X-rays. But at the same time, we need our students to be able to look at them and understand what they’re looking at, how they’re aligning that image and the patient appropriately for treatment. So that’s just one example. It’s really going high-tech, even with the treatment machines adapting treatment on the fly. So AI is very much embedded in what we do in the medical field.

Jenny Gordon:

And I think, in the same way you mentioned it there, you still need your students to be able to interpret some of that as well. And I think that comes down the same [inaudible 00:34:20] track as Sandra’s, in the over-reliance on the output, and then the fundamentals, let’s not forget those fundamentals. What about the disadvantages in English, Valerie, and on the writing side?

Valerie Guyant:

Well, I mean, the very first and easiest disadvantage is that most generative AI, if you ask it to produce an essay, is very superficial. The level to which it gives information doesn’t have any nuance or any creative thinking to it because it’s simply scrubbing for information from the internet. But what it’s good for is that beginning information. And what it can be really good for is exactly what you were talking about with your daughter, which is, I have an idea, but I don’t know how to articulate it, or I have a research question already, but I don’t know what to ask the databases, or any number of those things.

One of the things that I actively do is craft prompts that require them to use what we’ve talked about in class in the prompt. So they have to actually understand to a higher level in order to get a good answer to whatever it is I’m asking. So just asking a generative AI won’t have that level of nuanced discussion. But in a lot of ways, I look at it this way: we spent years with English teachers, I had some of my own, who told you, for instance, The Scarlet Letter only means this one thing. Generative AI will do the same thing. It will tell you what people say it means, but it won’t allow you to think about what it could mean. That’s what you need your own brain for.

Jenny Gordon:

Absolutely. I’ll just ponder on that for a moment. I’d like to ask if anybody has any questions, if any of our participants that are listening to this brilliant discussion have any questions, please do drop them into the Q&A and we’ll try and answer them in the last few minutes of our discussion. And if we can’t do that today, we’ll try and answer them afterwards. So we’ve talked about lots of different tools today that are potentially used by yourselves, or that you’ve seen your students use, or you’ve heard talked about. And I’d love to ask our audience as well. What are the current favorite AI tools for whatever reason? What would you say are your most favorite tools that are in the AI space at the minute? Alicia, let’s start with you.

Alicia Plant:

Sure. So I do a lot of presentations, so I love this new one called ClassPoint. ClassPoint is an AI-empowered PowerPoint tool. It really works very neatly, and it’s so new that the president, the CEO, will actually email you when you sign up asking you how you plan to use it and all that. So what’s neat about it is that your PowerPoint slides become more interactive in that you can, using the AI-empowered technology on each slide, ask instant questions right on the spot as you’re presenting to your audience. So it makes it more dynamic, more engaging. So I really love it.

I also like I Gist, which is a new one as well; it’s an AI-empowered workspace. So that’s what it’s going to look like in the future, that our workspaces, it’s kind of like single sign-on where we have all of these workspaces, all of our applications in one place, but it’s going to be AI-powered, meaning that we have an AI assistant helping us keep our calendar and things like that. Those things are really neat and help automate those administrative processes, which will make me more available to my students. So those are the things I really love.

Jenny Gordon:

I’m scribbling these names down.

Alicia Plant:

I Gist, ClassPoint.

Jenny Gordon:

Thank you. Sandra, what about you?

Sandra Frempong:

Sorry. Not so much for teaching, but the tools that we have, they’re constantly changing. The one that I’ve always loved is the AI tax research tool to teach our students, because that is the one area that touches everyone; I don’t know that everyone owns a business, but we all pay taxes. And so that is the one area that I love to use in terms of tax research, so that they can see firsthand, online and in real time, when proceedings are being reported, and how to use that when preparing taxes and answering questions regarding scenarios. So that’s really the one that I favor the most.

Jenny Gordon:

Brilliant. And we’re being asked to list these, so we will do that. Absolutely. Valerie, what about you?

Valerie Guyant:

I have a new favorite because Alicia just told me about it. But I use ChatGPT a lot with my students because it’s easily accessible and very comfortable for them. And because it’s kind of on the lower end of the learning curve, it’s easy for me to use in a classroom.

Jenny Gordon:

Julie?

Julie Hollenbeck:

Valerie, I use ChatGPT all the time, and I just upgraded myself to 4.0. And so now I’m kind of playing with that as well. UM has their own GPT system that’s housed underneath the university. I use that quite a bit. And Deanna and I are just starting to work on a special project where we’re building a bot for our program that will help foster critical thinking skills with our particular material in the program, and that’s been kind of fun to play with too.

Jenny Gordon:

Amazing. Deanna, do you have any answers to add?

Deanna Carr-West:

Yeah. I mean, honestly, ChatGPT has been awesome in regards to creating case study patients. I mean, it can whip them out in a millisecond, and I actually just had my students do that as they created their own case studies. And it’s very, very helpful, because when ChatGPT creates a full-on case study, it’s full of holes, it’s like Swiss cheese, and so part of that project was for them to go in and fill in the gaps, and they had to have a baseline knowledge to be able to question, wait, I’m missing this, this needs to be in there, and that sort of thing. But at the same time, I use it quite a bit to create those situations for them to critically think through what they would do for particular patients. Super helpful.

Jenny Gordon:

Thank you.