OPINION

Educating Beyond the Bots

The current discourse about artificial intelligence not only reflects a narrow view of education; it also romanticizes, or raises alarm about, new technologies while treating students as dishonest by default.
By Shyam Sharma


“It has saved me 50 hours on a coding project,” one of my students whispered to me in class recently. He was using the artificial intelligence tool ChatGPT for a web project. His classmates were writing feedback on his reading response for the day, testing a rubric they had collectively generated for how to effectively summarize and respond to an academic text.


The class also reviewed ChatGPT’s version of the rubric and agreed that there is some value in “giving it a look in the learning process.” But they had decided that their own brain muscles must be developed by grappling with the process of reading and summarizing, synthesizing and analyzing, and learning to take intellectual positions, often through an emotionally felt experience. Those muscles couldn’t be developed, the class concluded, by simply looking at content gathered by a bot from the internet, however good that content was. When the class finished writing, they shared their often brutal assessments of the volunteer writer’s response to the reading. The class learned by practicing, not by asking for an answer.


Beyond the classroom, however, the discourse about artificial intelligence tools “doing writing” has not yet become as nuanced as it is among my college students. “The college essay is dead,” declared Stephen Marche in The Atlantic recently. This argument rests on a serious but common misunderstanding: mistaking a means of education for an end. The essay embodies a complex process and experience that teach many useful skills; it is not a simple product.


But that misunderstanding is just the tip of the iceberg. The current discourse about artificial intelligence not only reflects a shrunken view of education. It also constantly romanticizes, or raises alarm about, new technologies influencing education. And, most saddening for educators like me, it treats students as dishonest by default.


Broaden the view of education


If we focus on writing as a process and a vehicle for learning, it is fine to kill the essay as a mere product. It is great if bot-generated texts serve certain purposes. Past generations used templates for letters and memos, not to mention forms to fill out. New generations will adapt to even more content they didn’t write.


What bots should not replace is the need for us to grow and use our own minds and conscience, to judge when we can or should use a bot and how and why. Teachers must teach students how to use language based on contextual, nuanced, and sensitive understanding of the world. Students must learn to think for themselves, with and without using bots. 


One simple approach to using AI tools in class is to have students start either by checking what the tools suggest or by doing the writing or thinking themselves first. Either way, teachers can then have students compare the AI-assisted and unassisted versions of their work, thinking independently about the differences. Next, students should figure out how to cite any AI-produced text or ideas they borrow into their work. Finally, in writing or in a class discussion, students should critically reflect on the use of AI: the what, how, and why. I call this the C/C3C approach: check with AI or compose yourself first, then compare the two versions, then cite anything borrowed from AI, and always critically reflect (in writing or discussion). By learning to use the tools and reflecting on the technical, ethical, and other important issues involved, students best prepare for both effective and conscientious use of AI in their lives and careers. Students should learn to use AI to save time and energy, expand knowledge and perspective, and magnify their efforts and skills. But they must not bypass learning, and they must be mindful of the ethical and political issues involved.


Don’t reject or romanticize technology


The other common reaction either rejects or romanticizes new technology: the pendulum swings between technology as the maker of utopia and technology as the harbinger of dystopia. Extreme reactions dominate our discourses about technology, with little in the middle. On the rejection side, Markham Heid argued in The Washington Post that teachers can “foil ChatGPT” by having students write “handwritten essays.” Heid suggests that we run from technology if it undermines something we value. He recommends that teachers go back to internet-free writing or even handwriting, listing their benefits.


But escape is not a solution. Nor is some tech hero going to arise and “save” us all from the ills of chatbots and cheating. We must engage with disruptions of the status quo, harnessing the affordances of new tools for individual and social good.


The other extreme is romanticizing technology. “Has AI reached the point where a software program can do better work than you?” asks the title of an NPR radio interview. It implies that we are competing with technology, that technology will win, and that there is nothing we can do against an unstoppable force. The guest, a UPenn business professor, discusses how he uses ChatGPT to automate as much of his job as he can: he has the bot generate syllabi, assignments, assessment rubrics, and lectures. AI is “going to replace all of us,” he says.


The tendency to romanticize new technology also undermines our understanding of it. Bots may save time, but can bot-based materials and methods help educators prepare students to understand and think, create and communicate, lead and manage effectively? What ethical and professional values will bot-dependent teaching convey? Why not instead design learning that tests, questions, and helps improve the technology? How does human work, communication, knowledge, and relationship compare with the bots’ equivalents? Language bots generate texts based on plausible patterns drawn from the internet, and those texts can be misleading or dangerous, however extensive and well curated the underlying corpus. So why not focus on where the bots fail and why, and on when human agency and conscience should intervene and how?


Teach with trust


Public discourse about education remains skewed for a third reason: the widespread belief that students cheat whenever they can. That belief is offensive.


Students cheat mostly when they don’t find an assignment worth the time and effort, are not motivated, or don’t have the skills. All of these factors are within the purview of good teaching. Even the adamantly dishonest few deserve to be educated about the why, what, and how of the assignments they are asked to do. Only a hopeless moralist could view disengagement as more acceptable than dishonesty.


The only cure for distrust and dishonesty is to ensure that students are motivated to do their own work. Students who appreciate the educational goals behind writing-intensive assignments are eager to use AI tools to generate topics and themes, to ask and answer questions, and to spark their own critical and creative thinking. They deserve assignments, and credit, for learning how to use emerging tools to get work done and how to judge those tools and the implications of their design, use, and misuse. AI is going to be embedded in the everyday tools we use, such as word processors and communication devices. It is time to teach students how to help address the dangers that new and powerful technologies pose to people and social systems, such as when police or governments, doctors or drivers, corporations or individuals cause harm by ignoring the technologies’ faults. Students should learn to use AI tools to generate content and to assess it, to brainstorm ideas and to explore them further. If anything, AI is exposing the need for a human touch in our communication. It is calling for trust in teaching.


Yes, college professors who don’t eagerly teach and inspire students are more likely to be “victims” of rapidly advancing natural language processing tools. ChatGPT could be the new colluding cousin, now only a mouse click away, at no cost and with less potential for embarrassment. But ghostwriters and paper mills, patchworked papers and talking-point essays have been around for a long time. All past “technologies” of academic dishonesty should already have woken up every professor, or displaced them. Effective educators give students credit for the process, the experience, and the skills of researching and reading, evaluating sources and synthesizing ideas, developing and sharing their own intellectual positions, citing and engaging sources, and addressing and advancing complex perspectives.


New technologies aggravate old problems, for sure. But they also help to solve them, along with new ones. Artificial intelligence certainly poses increasing risks to education and other domains of society. That is all the more reason for educators to encourage students to use AI to brainstorm; to gather information and compare it with more methodically found and analyzed library sources; to find faults and gaps in AI-generated answers to their questions; and to analyze and seek to understand how the AI works and what practical and ethical dangers may be involved in using or relying on it.


Students are quick to assess the value and risks, the uses and abuses, of groundbreaking technologies. Educators should be too.
