In higher education, the real question about artificial intelligence isn’t whether students will use it—it’s whether they’ll know how to use it well. Without clear guidance, AI can just as easily undermine learning as it can accelerate it. That’s why building AI literacy has become one of the most urgent priorities for colleges and universities.
In a recent webinar, Kate Grovergrys and Tina Rettler-Pagel, two faculty members at Madison College, shared a framework for moving faculty and students from passive awareness of AI to active, intentional use. Their approach encourages instructors to experiment with AI themselves, adapt strategies to the realities of their own disciplines, and weave ethical considerations into every stage of learning. The result is a practical, sustainable way to prepare students to navigate AI in both academic settings and beyond.
Grovergrys and Rettler-Pagel frame AI readiness as a progression. First comes AI literacy — understanding what AI is, how it works, and where it has limitations, bias, or ethical concerns. Only after building that foundation should students move toward AI fluency, the ability to apply AI creatively, strategically, and ethically in real-world contexts.

This distinction matters because AI can either strengthen or undermine learning. Used well, it can accelerate skills, boost creativity, and save time. Used poorly, it can distort understanding or replace essential skill-building altogether. Helping students tell the difference is essential for academic success, and for preparing them to think critically in an AI-powered world.
One of the most practical takeaways from the webinar was that instructors need to use AI themselves before they can effectively guide students. Grovergrys and Rettler-Pagel outlined a three-step approach: experiment with AI firsthand, adapt what works to the realities of your own discipline, and build ethical considerations into every stage of learning.
This isn’t about replacing human teaching—it’s about modeling thoughtful use so students see AI as a partner in learning, not a shortcut to avoid the work.
Many colleges are drafting institution-wide AI policies, but Grovergrys and Rettler-Pagel advocate for a complementary, department-level approach. Faculty know the skills and contexts their students need, so they’re best positioned to define how AI should be used in their courses.
Begin with a workshop on AI basics, followed by semester-long experimentation in small faculty groups. These groups should then reconvene to share what worked, what didn’t, and how to refine their approach before embedding AI guidelines and activities at the course or program level. This keeps implementation collaborative and rooted in real teaching experience, rather than dictated from above.
When AI shows up in student work, the default reaction is often suspicion. But detection tools are unreliable, and focusing solely on catching misuse doesn’t address the root issue. Instead, teach students specific, discipline-relevant ways to use AI productively and design assignments that require engagement with real-world events, in-person interviews, or in-class collaboration. By clearly defining when and how AI can be used, and by modeling that transparency, faculty can turn potential misuse into a learning opportunity.
When it comes to building AI literacy, Grovergrys and Rettler-Pagel recommend starting with a general-purpose large language model to learn the fundamentals of prompting and evaluating AI output; they shared several of their go-to models during the session.
Once you’ve mastered the basics, you can branch out to purpose-built tools for teaching and learning, like Brisk, MagicSchool, NotebookLM, or Canva with AI features.
If you missed the live webinar, watch the full recording here.