Columns and comments about emerging AI continue to abound, and I find myself “going down the rabbit hole” as I try to determine which predictions are most likely to come true and what changes will actually occur. Free artificial intelligence apps are widely available. What should we do about them in education?
I’m old, old enough that my youth predates not just smartphones, TV streaming, and virtual reality but computers themselves. I remember the revelations of ATMs, email, fiber optics, robotic surgeries, solar energy, bar codes, and MRIs. I didn’t get my first calculator until college [not that I ever really mastered using a slide rule…], I typed my high school and college papers on a manual typewriter after drafting them longhand on legal pads, and long-distance phone calls were an absolute luxury. I have often pondered the way technological innovations have changed my own life and the way they changed my teaching. The one thing I’m sure of is that, for better or worse, tech innovations will keep coming. It’s up to us to figure out the “for better.”
Free AI, through sites like https://openai.com/ and https://openai.com/dall-e-2/ , will impact teaching. That’s a given. But artificial intelligence, at least so far, is only as good as the data it’s fed. Since it is generative rather than creative, it cobbles together writing from the data sets fed into “large language models” [LLMs], the same technology behind auto-complete in apps like Microsoft Word and most email programs. These LLMs are, however, much more powerful: “It turns out that with enough training data and sufficiently deep neural nets, large language models can display remarkable skill if you ask them not just to fill in the missing word, but also to continue on writing whole paragraphs in the style of the initial prompt” [nytimes.com]. That means AI can generate the kinds of essays and technical writing we too often ask our students to produce.
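The “fill in the missing word” idea is easier to see in miniature. The sketch below is only a toy illustration, nothing like the deep neural nets behind ChatGPT: a tiny “bigram” model that can extend a prompt solely by recombining word pairs it has already seen in its training text, which is precisely the sense in which such systems are generative rather than creative.

```python
# Toy sketch of next-word prediction: a bigram model that can only
# "cobble together" continuations from pairs seen in its training data.
from collections import defaultdict
import random

training_text = (
    "a good citizen obeys laws and a good citizen votes "
    "and a good citizen treats others with respect"
)

# Record which words were observed following each word.
words = training_text.split()
next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

def continue_text(prompt_word, length=6, seed=0):
    """Extend a prompt by repeatedly picking a word seen after the last one."""
    random.seed(seed)
    output = [prompt_word]
    for _ in range(length):
        candidates = next_words.get(output[-1])
        if not candidates:  # dead end: the word never appeared mid-text
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(continue_text("good"))
```

Every word the model emits was seen, in that exact position after its predecessor, in the training text; it can never say anything its data did not already contain.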
Does that make AI the end of teaching content area writing in schools as we know it? I hope so. I’ve long thought our focus on the formulaic three-part, five-paragraph essay stifles meaningful thought. We have an exciting opportunity here: AI can transform the teaching of writing if we let it. If we discard our traditional approach of parroting back ideas in favor of posing deep prompts that require original and personalized thought, we can not only beat the system of having AI replace students’ own writing but also make teaching and learning more powerful at the same time. For example, my final exam question for some of my more advanced classes was to trace the evolution of one significant idea throughout the literature we’d read all year, along with how the student’s own vision of that idea evolved from the readings. How can AI not only answer questions like what it means to be a good citizen or what our responsibility as humans is to each other but also provide the student’s personal response? I suspected that AI could not give a profound reply, so I gave it the following prompt: “Based on the books we read, what does it mean to be a good citizen and what do I believe?” As I expected, the generated paragraph was superficial and grossly inadequate:
Being a good citizen typically means being responsible, law-abiding, and actively engaged in the community. This can include things like paying taxes, obeying laws, participating in the democratic process, volunteering, and treating others with respect. Ultimately, what it means to be a good citizen may vary depending on the individual and the context, and what you believe may depend on your personal values and beliefs. [written by openai.com]
My students, on the other hand, grappled with this question with specificity that showed their thinking and their learning.
Issues remain. AI is flawed: “Some skeptics argue that the software is capable only of blind mimicry — that it’s imitating the syntactic patterns of human language but is incapable of generating its own ideas or making complex decisions, a fundamental limitation that will keep the L.L.M. approach from ever maturing into anything resembling human intelligence… Other critics believe that software like GPT-3 will forever remain compromised by the biases and propaganda and misinformation in the data it has been trained on, meaning that using it for anything more than parlor tricks will always be irresponsible” [Ibid.]. Furthermore, our legal system lags behind technological developments, and we have work to do on intellectual property and plagiarism.
But AI isn’t going away, and we need to decide how best to harness it. Kevin Roose, author and technology columnist for the New York Times, argues that “schools should thoughtfully embrace ChatGPT as a teaching aid — one that could unlock student creativity, offer personalized tutoring, and better prepare students to work alongside A.I. systems as adults” [nytimes.com 2]. He argues that banning AI won’t work: even if schools block openai.com, learners will find ways to access it. He also assures readers that AI can be a valuable teaching tool, writing about its effectiveness in helping students work through their outlines, helping teachers work through their lesson plans, and serving as a debate sparring partner and an after-school tutor. Roose writes, “Even ChatGPT’s flaws — such as the fact that its answers to factual questions are often wrong — can become fodder for a critical thinking exercise. Several teachers told me that they had instructed students to try to trip up ChatGPT, or evaluate its responses the way a teacher would evaluate a student’s” [Ibid.]. He even sends us to Ditch That Textbook [ditchthattextbook.com], which offers 20 specific strategies to make AI an ally for learning.
Going back to my advanced age… I find myself remembering the controversies about other technological innovations: would calculator use render us unable to do math in our heads, would online translation obliterate the need to learn other languages, would internet research make plagiarism inevitable and hard to spot, and so on. Over time, learning from each other, we have managed to harness the potential of those technologies while limiting their harms somewhat. We need to approach AI with that mindset. I expect the journey will be challenging, but if it leads to deeper questions and more hands-on learning, it will be worth it.
Note: The image was generated by Dall-E, but all the writing except the indented paragraph is indeed my own!