Higher Education

Bridging the Gap: Integrating AI & Student Intelligence for Knowledge Co-Creation in Higher Education

Discover how AI is transforming higher ed and how to balance innovation with integrity. Gain insights on digital literacy, ethical use, and student engagement.

Higher Education is undergoing a profound technological transformation, with Artificial Intelligence (AI) reshaping teaching, learning, and knowledge creation processes. While various AI tools have already influenced academic practices, the introduction of ChatGPT has accelerated this shift, offering new opportunities for personalized learning and academic support. However, despite its benefits, concerns persist among academics and other stakeholders regarding academic integrity and the overall quality of education. This session explores the crucial role of emotional intelligence, subject matter expertise, ethical and legal awareness, critical thinking, creativity, and AI literacy in fostering a responsible, authentic, and comprehensive hybrid knowledge co-creation process. By integrating these elements, institutions can leverage AI not only to enhance students’ engagement in the knowledge creation process but also to uphold academic integrity and assessment quality. This session highlights the importance of digital literacy, pedagogical design, and institutional support in maximizing AI’s potential in higher education, and provides actionable insights to balance technological innovation with academic integrity.

PRESENTERS & TRANSCRIPT

PRESENTER

Andrews Agya Yalley

Andrews Agya Yalley is a Senior Lecturer in Business and Programme Director for the MSc Digital Marketing Programme at the London campus of York St John University. With over a decade of experience in both industry and higher education in different continents, his research focuses on social and sustainability marketing, consumer behaviour, and operational and quality management in higher education and services. He has published widely in top-tier journals and worked on consultancy projects with major organizations, underscoring his practical expertise.

TRANSCRIPT

Matthew Short:

I’d like to now introduce our presenter today, Dr. Andrews Yalley, who is a senior lecturer in business and a program director for the MSc Digital Marketing Program at the London campus of York St John University. Dr. Yalley is an avid traveler and also enjoys scuba diving. Thank you for presenting today, Dr. Yalley, and I’ll turn the floor over to you to begin your presentation.

Andrews Agya Yalley:

Okay, thank you, Matthew, for your nice introduction. And as he said, I’m Andrews Yalley, and the topic is “Bridging the Gap,” which is about integrating AI and human intelligence for co-creating hybrid intelligence in higher education. For those who attended the keynote, I think Jim emphasized some of the strengths and limitations associated with AI, and that’s something I’ll pick up on in my presentation. So I thank Jim for bringing those insights into his presentation as well. Okay, so for this session, I’ll quickly give you some background introduction to the topic. Then I will take you straight to how we went about collecting data to identify those skills, and also talk about the implications of those skills, and conclude. So let’s just begin. As we are all aware, technology is changing the ball game in every industry, and higher education is no exception.

So as a result, edtech technologies, particularly AI tools such as Grammarly, ChatGPT, and recently Gemini, have emerged, and all these technologies are shaping how knowledge is created in higher education. So the traditional perception, where knowledge used to be one-dimensional, is being revolutionized into a two-way or multi-dimensional knowledge creation. Again, knowledge traditionally used to reside within human intelligence. Then it went on to be stored in books. Now it’s evolving, and the same goes for how knowledge is retrieved and disseminated, but our emphasis will focus even more on how knowledge is being created.

AI has transformed this ball game. Now, the introduction of AI in our education has had different responses and debates from different stakeholders in academia; some have been positive, some negative, and others mixed in terms of how this technology influences teaching and learning. Now, on the positive front, particularly from students, AI has transformed how they learn and prepare for assessment. So among the positives, it has personalized their learning, in the sense that students are able to get specific information that relates to what they are seeking and what they want to learn. And in a way, it has also created an interactive environment where they can interact simultaneously with the AI and receive responses. And in that way there is some form of active learning, because of that interactiveness in the learning environment. Students are also able to get real-time feedback.

Traditionally, there is a delay loop in communication. When students send a question to a lecturer or tutor by email, it takes a while to get feedback. But with AI, students are able to get instantaneous feedback on what they are looking for. And again, it has become a virtual assistant, in the sense that wherever you are, 24/7, any day, any minute, they are able to access this technology, and it’s a game changer for students. They see it as positively impacting their learning. On the academic front, several concerns have been raised by academics, vice chancellors, and other stakeholders, and most importantly, there is the issue of academic integrity. How do we ensure that the work students are producing is their own?

And this is becoming an emerging concern in almost all higher education institutions. As a result, some New York institutions have even blocked the use of ChatGPT because of academic integrity issues. And as academics, we’ve experienced increasing use of AI in writing assignments, where students claim ownership of those works. Again, the issue of assessment quality is becoming another concern. How do academics respond in making assessments AI-proof? Some are resorting to the use of face-to-face presentations, but that itself also has its own limitations. But assessment quality is becoming an issue. And again, other stakeholders have criticized AI for biases, particularly gender and racial stereotyping.

And although recent improvements are minimizing some of these biases, some of these biases still exist. Again, there is the issue of misinformation, and anybody who has used ChatGPT or Gemini will accept the fact that the platform warns users that it might provide information that is not accurate. And in other instances, there is this idea of AI hallucination, where it produces results that do not exist. I have experimented with my students: sometimes I give them activities in class and ask them to search for articles and references, and sometimes they produce references that do not exist. So these limitations have been highlighted. And more importantly, there is the idea that if we are not careful, AI will take over human intelligence.

And this is becoming a very critical issue; humanists propose that it has ethical and legal ramifications for society. As a result, these concerns have hindered the use of ChatGPT and other AI technologies in academia. Then we have the centrists, whose view is currently emerging, and they on one hand see the benefits of AI. They see the potential benefit of AI in terms of the assistance it provides, its real-time ability to manage huge chunks of data, and its efficiency. And they also understand the benefits of human intelligence, such as being able to exercise social and emotional intelligence.

Its adaptability to emergent trends, its ability to contextualize information within a particular setting. And from Jim’s keynote, he even noted the ability to feel emotions, a perspective AI sometimes misses. So that centrist view is about how we balance the benefits and minimize the limitations, which relate to academic malpractice and other quality issues. And these are based on some key theories which are very relevant when it comes to the balanced, or centrist, perspective. One of the key theories is sociotechnical theory. The theory posits that technology can work with other entities. So artificial intelligence can work with other entities, and those other entities can be humans. So the humans bring the social aspect and the technology brings the technological perspective. So they have the view that it is feasible for AI to co-exist with human intelligence.

Other theories might be actor-network theory, and again we have knowledge creation theory, and they all emphasize how knowledge can be created by different co-creators or different entities. Now let me throw more light on the idea of knowledge creation, and that will bring more reality to what we are discussing in terms of how we can balance AI with human intelligence to co-create hybrid intelligence or knowledge. Now, traditionally, knowledge has been one-dimensional, with active academic engagement and passive student engagement. So students become mere recipients of knowledge. However, the emerging knowledge creation perspective emphasizes that knowledge can be co-created, and it recognizes students as active participants in knowledge creation, particularly in higher education.

And the theory that underpins this is dual-process theory, which recognizes that knowledge can emerge from tacit and explicit knowledge. So there are two forms of knowledge that can co-exist to create what we call hybrid knowledge. We have the tacit, which is more human-centric and relies on human intelligence to create knowledge. Then we have the explicit, which relies on technological knowledge, particularly with AI. And through the integration of these two forms of knowledge, hybrid knowledge emerges. So a lot of theories have actually positioned that; key among them are sociotechnical theory and complementarity theory, which weigh the advantages and disadvantages of both sources of knowledge to co-create what we term hybrid knowledge. Now, what is the problem?

Ever since this idea has been championed and AI has become key within academia, a lot of research has been undertaken around AI adoption, in terms of how students adopt. So theories of technology adoption have been applied here. However, a critical question that has been neglected is the preparedness of students. It is easy to adopt a technology, but how you can use it effectively is another matter. And most literature has neglected this in academia, particularly from the student side; limited literature has explored how students can co-create knowledge with AI. And it becomes a concern when a lot of students are adopting the technology, but their preparedness to use it remains unexplored. Thereby, several scholars, particularly Deman, have called for research on student readiness for co-creating authentic hybrid knowledge with forms of artificial intelligence, particularly ChatGPT.

Now, the objective of this research was to explore the key skills and competencies that students require to effectively co-create authentic hybrid knowledge with AI, and thereby what prepares them for that co-creation. Before we dive into that, I would like to do a quick test with you. And it’s not actually a test, just to get your understanding, particularly if you are dealing with students, of how prepared they are to work with AI. So please, the QR code is here. Also, those who want to use the web can use the web link here and the code here to access the Mentimeter. And please let me know your thoughts. I’ll give you about three minutes to complete this task. It’s just one question, and once you complete that, I’ll share the results on the screen. So please, the QR code is here.

Matthew Short:

All right. And at this point, I hope folks can see my web browser showing the results of the poll question. Looks like out of 20 results, the plurality selected not very prepared, at 45%; unsure at 20%; somewhat prepared, 25%; very well prepared, 10%.

Andrews Agya Yalley:

Okay, thank you, Matthew, for that. Okay, so I think you can stop sharing now. Yep. Okay, so I’ll go back. So from the results, it’s quite clear that most students are not ready and not prepared to use ChatGPT and other AI technologies. And that presents us, as academics and other higher education administrators, with a lot of challenges. And I can personally tell you, students are not even confident admitting to using ChatGPT. I’ve tried it in several classes that I teach. You ask them, do they use ChatGPT? They’ll say no, until I raise my hand to say, yes, I use ChatGPT. Then you’ll see some reluctantly trying to raise their hands. And that tells you the unpreparedness and the misconceptions within academia when it comes to using ChatGPT. So we acted on this challenge, particularly on students’ unpreparedness to use the technology, and we can’t deny the fact that industry has embraced this technology.

From hotels and marketing straight down to project management, this technology has been embraced, so how do we develop the skills so that students become prepared to use it? As a result, we undertook some structured interviews, about 10 interviews with academics. And whilst it would have been better to choose students, we thought academics would have more perspective; in particular, when students are adopting this technology, they don’t really understand the co-creation aspect, but academics can have more insight. So we chose academics from diverse backgrounds who have experience using ChatGPT. We went ahead to undertake in-depth interviews, conducted remotely with academics across the world. And we used thematic analysis to identify the factors that are critical for the successful adoption and use of ChatGPT and other AI technologies. What were the findings? Let me just share the findings with you. One of the key findings we identified was emotional intelligence.

So the majority of academics, about 80% of the sample, that is eight out of 10, recognized emotional intelligence as a very key skill when using AI, and it relates to the ability to understand and manage emotions. And within this context, how do students enhance the AI output? Critically, are students able to identify sensitive issues within AI output and be very careful how they present it? So they identified emotional intelligence as a very critical skill that students require as they scrutinize AI output, particularly given the issues I highlighted earlier of gender and racial stereotypical content when using AI. One of the participants indicated that AI is limited by emotions, and to produce responsible knowledge, human emotion is important to scrutinize all AI output. So this underlines the idea that emotional intelligence is one of the key skills that students are expected to develop as they undertake co-creation. And it goes beyond that: within employment as well, emotional intelligence becomes a key skill. Another critical skill that was identified was AI literacy.

The majority of students are using AI, but actually they have no idea how it works. And for somebody to work effectively with a technology, they should have background knowledge of how the technology works. They should understand what AI is, what the limitations and strengths of AI are, and, more importantly, prompt writing skills, because the output of AI is all about the quality of the prompt. Again, 60% of the participants, that is six out of the 10, highlighted AI literacy as a very critical skill when engaging with AI. One participant indicated that without being competent in using AI, it is impossible to generate any meaningful output. So this demonstrates that without competence, particularly with prompt writing skills, the quality of the output won’t be meaningful.

Another key finding was subject matter knowledge. And this relates to the concepts, theories, and principles that underpin a particular subject area, and its impact extends even to prompt writing skills. Once students understand the subject knowledge, they are able to write meaningful prompts, and they are able to interpret AI output from a subject-specific perspective. Thus subject expertise enhances AI search as well as the contextualization of its output. A participant indicated: how can you collaborate with AI in creating new knowledge without the co-creator’s understanding of the subject area? It is essential that you understand the subject area before you can have any fruitful interaction with the AI.

So this underlines the fact that without understanding the subject area from which one is seeking knowledge, creating very relevant or contextualized information is very difficult. That’s emphasizing that students need to understand the key principles, concepts, and theories that underpin the subject area they are searching within. Another finding was ethical awareness. Students should have some understanding and appreciation of the ethical issues relating to the use of AI, particularly biases, issues of privacy, academic integrity, and patents: if we use an image from AI, who owns it? These are key questions students need to demonstrate some awareness of. Do you own the output of an AI, or do you acknowledge the use of AI? So ethical awareness ensures that there is responsible use of AI. One participant indicated that making moral decisions is the strongest trait of humans as compared to technology, thus recognizing that humans can make more moral decisions than AI, and that to co-create knowledge that is morally sound, students need to possess and demonstrate knowledge of the ethical issues relating to the subject matter and the technology. So this participant goes back to the subject matter knowledge area: it is not just about the technology’s ethical issues; the students should also understand the ethical issues within that study domain. Thus emphasizing the importance of ethical awareness when it comes to the use of AI.

Another issue, which is almost similar to ethics but different, is legal awareness, and that relates to intellectual property and also the issues of academic integrity. And most higher education institutions are coming up with AI policies in terms of how students, and even academics, use AI. Journals are also taking this up, asking authors to give credit where they use AI and to indicate that they have used it. Again, 60% stressed that legal awareness is very important. Thus legal awareness ensures that there is compliance. So where an institution has an AI policy, are you compliant with that policy? Again, are you using it responsibly?

Have you checked the sources of the information that you are getting? Because ChatGPT and Gemini have clearly indicated that they are not responsible for false information that the platform can generate. So here, how are you being responsible in the use of AI? One participant commented: students, much like artists and researchers, need to comply with intellectual property laws and academic integrity policies. They must ensure that knowledge co-generated with AI demonstrates legal rigor. So here he or she emphasized the rigor of the AI output: is it legally rigorous? Because failure to comply with some of the legislation might lead to court action.

And again, when it comes to academic integrity, it might be disciplinary action from a university. Thus, students having legal awareness of the use of AI becomes a very critical factor. Again, critical thinking; and critical thinking has been very much grounded at the postgraduate level. One of the key differentiators between undergraduate and postgraduate is critical thinking. However, with the use of AI, it is emphasized that critical thinking should be a skill that every student has, and this is the ability to analyze and validate AI output. Recently I asked AI to do some tasks for me: I asked it to generate a reference list, and it created a reference list even in my name, for a paper which I have no idea where it came from.

And this has been termed AI hallucination. So at times AI dreams up things that do not exist. Hence, when using AI output, we need to validate the reliability and validity of those outputs, to ensure that the output is accurate and relevant. And to put more weight on it, 80% of the participants identified critical thinking as very fundamental when using AI, in filtering out inaccuracies in AI output. So one participant commented: AI provides information that requires further scrutiny, and critical thinking skill is paramount in creating authentic and diversified knowledge, thus emphasizing the importance of critical thinking in scrutinizing AI output. And this, together with emotional intelligence, was found to be the strongest skill that students need to have. And finally, creativity.

Again, with AI, it’s garbage in, garbage out, and it has been predicted that there will be a time when most output that comes from AI will be a lot of garbage, because as people put in false information, AI picks it up and regenerates that information. Now, how do we bring creativity into using AI? The creativity can extend from how we generate our keywords when we are searching: what are the prompts? Are we being creative in the prompts that we are writing? Again, when we are scrutinizing AI output, how creative or innovative are we in scrutinizing those outputs and adapting them to our context? So here the issues of contextualization and customization become very critical when it comes to addressing the limitations associated with AI. Again, 70% of participants emphasized creativity as very important when it comes to personalizing and contextualizing AI output.

An excerpt from one of the participants was: AI knowledge and output generally lack contextualization and customization. It is the student’s creativity that can contextualize and customize the knowledge produced. So this emphasizes that creativity is another bedrock when it comes to using AI; students’ creative skills become important. Now the question is, how do we ensure these skills are attained in academia? How do we ensure these skills are developed before students use these technologies? And this is the main emphasis of this presentation, so I’ll quickly move on to that. For emotional intelligence, it is important that students are trained. So academics need to incorporate emotional intelligence into curriculum development: through workshops, reflective exercises, and peer collaboration, students are trained to develop this intelligence. And they should encourage discussion of the ethical and emotional dimensions of AI-generated content; for instance, an academic can bring AI output into class for students to discuss.

Looking at the limitations associated with it, in ethical and emotional terms. From a policy perspective, this digital literacy needs to be integrated into almost all curricula. So policy makers, particularly accreditation bodies, need to ensure that EI, emotional intelligence, is integrated into curricula. And again, funding needs to be provided for exploring how emotional intelligence can influence human-AI collaboration. Again, when it comes to AI literacy, higher education should offer AI literacy courses, from prompt engineering to the responsible use of AI; students need to be taught so that they can use AI responsibly and prompt AI appropriately. And this can also be done through workshops where students interact with AI models. Similarly, AI literacy needs to be established as a mandatory component of the higher education curriculum, and policy makers need to ensure that it is.

Subject matter knowledge is another key skill. Here, practitioners need to ensure that students are diving deep into the conceptual understanding of the subject, and by doing so they develop the skills necessary to engage with AI. Again, prompt writing training needs to be delivered, particularly in the subject-specific area: what are the keywords that, when you are prompting AI, you can use in a particular subject area? And again, policy makers should promote initiatives that integrate AI into subject-based curricula. So for each specific subject area, what AI-specific tools are necessary? For ethical awareness, practitioners should incorporate ethics training in all AI-related courses, focusing on bias, discrimination, and responsible use. For some, this can also come under AI literacy, but they need to ensure these are incorporated into all training they offer students. And this can also be approached through debate and case study analysis of ethical issues, as they can develop students’ AI ethical awareness through debates and case studies. For policy makers, they should set policies on the ethical use of AI.

There should be a specific policy in every institution on the ethical use of AI, and there should also be a regulatory framework that ensures AI use in higher education upholds the highest standards. So in every institution there are policies and procedures, and there is the need for AI to have specific policies and procedures to ensure its use upholds the highest standards.

Matthew Short:

And Dr.—

Andrews Agya Yalley:

For legal awareness... Yes, I’m running out of time. I—

Matthew Short:

Guess I was going to say, yeah, I was going to say we will definitely share your PowerPoint with the audience today, so they’re able to follow up on the session. I hate to rush you, but if there are a few parting thoughts within the next minute or so, we do want to let folks transition to other sessions as time allows here.

Andrews Agya Yalley:

Okay, so I’ll just try to round up quickly. Yeah. So similarly, for legal awareness, this needs to be integrated into the curriculum, and for practitioners, again, the question is what policies and procedures are being developed by policy makers within those institutions. The same goes for critical thinking. So let me move on to my summary; I think this summarizes it. When it comes to the curriculum, how do we ensure that these skills are integrated into it? So, for all the key skills we identified, when developing curricula and assessment strategies, how do we embed these skills into them? And again, faculty need training, because most academics do not have the skills necessary to understand AI and embed it in the curriculum. So there is a need to train the faculty on that. And there can also be academia-industry collaboration, where there is a clear understanding of what industry expects in terms of the use of AI.

And based on that, they can integrate it into the curriculum and develop students’ skills, and where possible offer students internship opportunities to go into industry and learn some of these skills. For policy makers, there should be regulation on the use of AI. Again, AI should be promoted as a core literacy in every higher education institution, and there should be opportunities for funding research related to the use of AI and the specific skills we have identified in co-creating hybrid intelligence in higher education. More importantly, they should ensure there is equitable access to AI, as some students from vulnerable groups are currently denied access to the software or platforms; they should ensure the equitable accessibility of AI to every student regardless of their background. For future research, there is the need to undertake longitudinal studies to understand how these factors interact together, particularly on AI literacy and human-AI collaboration. And again, we should understand disciplinary variation. Most of my sample comes from the business discipline; however, when it comes to the sciences, it might be different. So there is the need to study disciplinary variation when it comes to AI-human co-creation. So on that note, I will end, and if there are any questions, I can answer them. Thank you, Matthew. Okay.