
Since the emergence of generative AI (or genAI), most notably ChatGPT in 2022, its sudden popularity in universities and educational environments has become almost impossible to ignore. But how do students actually use these tools for their studies? And what do professors have to say about this unsolicited arrival in their classrooms and curriculums? Observations and conversations with students and professors here at Waseda University’s School of International Liberal Studies surfaced multiple perspectives, and several urgent points, about how students and professors could better use artificial intelligence. Before unpacking those observations, however, it is important to have a basic understanding of what generative AI tools are and how they function.
Generative AI is typically built on large language models, or LLMs. These machine learning models are trained on enormous datasets and generate output intended to fit each prompt they are given. A key point about these models is that they perform natural language understanding (NLU) and natural language processing (NLP). NLU and NLP shape what is derived from the data into a human-like response, so that it can be understood by any public user, like students. It’s easy to be deceived by these human-like responses into assuming the information they produce is unquestionably trustworthy, and this may explain why some students rely so heavily on genAI in place of their own writing and analytical skills.
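To make the idea concrete, here is a minimal sketch in Python of the core principle behind these models: predicting a likely continuation from patterns in training data. This toy example only counts which word follows which in a tiny made-up corpus; real LLMs like ChatGPT use neural networks trained on billions of examples, but the underlying intuition is similar.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word tends to follow which word
# in a small training corpus, then predict the most likely next word.
# (Illustrative only -- real LLMs are vastly larger and neural.)
corpus = (
    "students use ai to study . "
    "students use ai to write . "
    "professors use ai to teach ."
).split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("use"))       # "ai" -- "ai" follows "use" every time
print(predict_next("students"))  # "use"
```

Notice that the model never "understands" the sentences; it only reproduces statistical patterns, which is one reason a fluent, human-like answer is not automatically a correct one.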
So, how much of students’ work is generated by artificial intelligence, and how much of it is actually their own? There seem to be two main ways students use genAI: a vast majority take advantage of tools like ChatGPT as a substitute for absorbing and applying what they are taught in class, while some use it to deepen their own understanding, prompting for alternative methods or key points related to a topic they have already thought through themselves. But why would such a large proportion of students choose to replace their own efforts to learn? What makes relying on AI to think for them so tempting?
In conversations with professors and students at Waseda, a pattern emerged: today’s generation of students faces a growing mismatch between what they have to do and the time they actually have to do it. Students’ schedules overflow with extracurriculars, clubs, part-time jobs, and internships, all while they are full-time students. It can feel as if there simply isn’t enough time to only pay attention in class when you could be applying for jobs or finishing other assignments at the same time. Relying on genAI to replace some of their thinking or effort gives students one less responsibility and less need to multitask. But is this really effective?
Most students, and professors, will likely argue that it is effective, as long as genAI is used properly. That is, rather than using it to replace students’ thinking, using it to guide their critical thinking may be a better method. But many students give in to the temptation of copying what the models produce straight into their papers without fact-checking or editing the output. These are the concerns many professors have about students using genAI. Because students are rarely taught how to properly use ChatGPT or similar models, some professors are hesitant to permit them in class. There are some professors, however, who encourage students to use AI. One professor, in particular, encourages its use in their class for two main reasons. First, as a teacher, this professor believes they have some responsibility to educate students on the proper way to use generative AI, especially because its use is only increasing. And second, for their class subject specifically, genAI can actually strengthen students’ learning by helping them understand the processes behind their answers. In other words, this professor prioritizes how students solve problems over what their solutions are, and by encouraging genAI use, students can find more insightful ways to have those processes explained to them.
Even with professors like these, however, current educational curriculums continue to be lapped by genAI’s growth. Are students still truly learning in class, and are current examinations still relevant for showcasing what they’ve learned? Another professor at Waseda University said they believe current curriculums are outdated because of this surge of genAI. They are even considering returning to what some would call a more analog teaching style, in which students undergo in-person oral examinations and participate in discussion-based lectures to prove their understanding of class content without the interference of artificial intelligence. How wide will the gap between genAI’s progress and teaching styles become before students fall in?
With the growing use of genAI models, students’ inescapably crammed schedules, and the mounting strain on professors’ teaching, one factor seems to connect it all: time. As internet capabilities change at an unimaginably fast pace, taking advantage of what’s right at our fingertips is slowly starting to backfire, unreasonably raising expectations of how much students can take on and how impactfully professors can teach. At the pace these technological shifts are outracing curriculums, the risk that education will be forced to undergo major changes only continues to rise.