Befriending AI

Columns and comments about emerging AI continue to abound, and I find myself “going down the rabbit hole” as I try to determine which predictions are most likely and what changes will occur. Free artificial intelligence apps are widely available. What do we do about them for education?

I’m old, old enough that my youth predates not just smart phones, TV streaming, and virtual reality, but computers themselves. I remember the revelations of ATMs, email, fiber optics, robotic surgeries, solar energy, bar codes, and MRIs. I didn’t get my first calculator until college [not that I ever really mastered using a slide rule…], I typed my high school and college papers on a manual typewriter after drafting them longhand on legal pads, and long-distance phone calls were an absolute luxury. I have often pondered the way technological innovations have changed my own life and the way they changed my teaching. The one thing I’m sure of is that, for better or worse, tech innovations will continue. It’s up to us to figure out the “for better.”

Free AI, through sites like https://openai.com/ and https://openai.com/dall-e-2/, will impact teaching. That’s a given. But artificial intelligence, at least so far, is only as good as the data it’s fed. Since it is generative rather than creative, it cobbles together writing from the data sets fed into “large language models” [LLMs], the same kind of technology behind auto-complete in apps like Microsoft Word and most email apps. These LLMs are, however, much more powerful: “It turns out that with enough training data and sufficiently deep neural nets, large language models can display remarkable skill if you ask them not just to fill in the missing word, but also to continue on writing whole paragraphs in the style of the initial prompt” [nytimes.com]. That means AI can generate the kinds of essays and technical writing we too often ask our students to produce.
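The “fill in the missing word” idea can be seen in miniature with a toy word-frequency model. This is only an illustrative sketch, nothing like a real LLM (which uses deep neural networks trained on vast data sets, not simple word counts), but it shows the basic principle of predicting the most likely next word from training text:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word tends to follow which in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A tiny, made-up "training corpus" for illustration only.
corpus = "the good citizen obeys the law and the good citizen votes"
model = train_bigrams(corpus)
```

Here `predict_next(model, "good")` returns `"citizen"`, because that is the word most often seen after “good” in the training text. An LLM does something conceptually similar at enormously greater scale and sophistication, which is why its output so closely mirrors the writing it was trained on.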

Does that make AI the end of teaching content area writing in schools as we know it? I hope so. I’ve long thought our focus on the formulaic three-part, five-paragraph essay stifles meaningful thought. We have an exciting opportunity here: AI can transform the teaching of writing if we let it. If we discard our traditional approach of parroting back ideas in favor of posing deep prompts that require original and personalized thought, we can not only beat the system of having AI replace students’ own writing, but we can make teaching and learning more powerful at the same time. For example, my final exam question for some of my more advanced classes was to trace the evolution of one significant idea throughout the literature we’d read all year, along with how the student’s own vision of that idea evolved from the readings. How can AI not only answer questions like what it means to be a good citizen or what our responsibility as humans is to each other but also provide the student’s personal response? I suspected that AI could not give a profound reply, so I gave it the following prompt: “Based on the books we read, what does it mean to be a good citizen and what do I believe?” As I expected, the generated paragraph was superficial and grossly inadequate:

Being a good citizen typically means being responsible, law-abiding, and actively engaged in the community. This can include things like paying taxes, obeying laws, participating in the democratic process, volunteering, and treating others with respect. Ultimately, what it means to be a good citizen may vary depending on the individual and the context, and what you believe may depend on your personal values and beliefs. [written by openai.com]

My students, on the other hand, grappled with this question with specificity that showed their thinking and their learning.

Issues remain. AI is flawed: “Some skeptics argue that the software is capable only of blind mimicry — that it’s imitating the syntactic patterns of human language but is incapable of generating its own ideas or making complex decisions, a fundamental limitation that will keep the L.L.M. approach from ever maturing into anything resembling human intelligence… Other critics believe that software like GPT-3 will forever remain compromised by the biases and propaganda and misinformation in the data it has been trained on, meaning that using it for anything more than parlor tricks will always be irresponsible” [Ibid.]. Furthermore, our legal system lags behind technological developments, and we have work to do about intellectual property and plagiarism.

But AI isn’t going away, and we need to decide how best to harness it. Kevin Roose, author and technology columnist for the New York Times, argues that “schools should thoughtfully embrace ChatGPT as a teaching aid — one that could unlock student creativity, offer personalized tutoring, and better prepare students to work alongside A.I. systems as adults” [nytimes.com 2]. He argues that banning AI won’t work: even if schools block openai.com, learners will find ways to access it. He also assures readers that AI can be a valuable teaching tool, writing about its effectiveness in helping students work through their outlines, helping teachers work through their lesson plans, and serving as a debate sparring partner and an after-school tutor. Roose writes, “Even ChatGPT’s flaws — such as the fact that its answers to factual questions are often wrong — can become fodder for a critical thinking exercise. Several teachers told me that they had instructed students to try to trip up ChatGPT, or evaluate its responses the way a teacher would evaluate a student’s” [Ibid.]. He even sends us to Ditch That Textbook [ditchthattextbook.com], which offers 20 specific strategies to make AI an ally for learning.

Going back to my advanced age… I find myself remembering the controversy about other technological innovations: would calculator use render us unable to do math in our heads, would online translation obliterate the need to learn other languages, would internet research make plagiarism inevitable and hard to spot, etc. Over time, learning from each other, we have managed to harness the potential of other technologies while limiting their harms somewhat. We need to approach AI with that mindset. I expect the journey will be challenging, but if it leads to deeper questions and more hands-on learning, it will be worth it.

Note: The image was generated by Dall-E, but all the writing except the indented paragraph is indeed my own!

AI Is Here!

AI [Artificial Intelligence] has the potential to impact teaching writing in a number of ways. For example, AI-powered writing assistants can help students improve their writing skills by providing real-time feedback and suggestions on grammar, spelling, and style. AI-powered tools can also help teachers grade written assignments more quickly and accurately, freeing up time for other activities such as providing individualized feedback to students. Additionally, AI can be used to create personalized learning plans for students, helping them to progress at their own pace and improve their writing skills more effectively.

While AI has the potential to greatly improve the teaching of English, there are also some potential dangers to consider. For example, relying too heavily on AI-powered tools could lead to students becoming overly reliant on technology and losing their ability to think and write independently. Additionally, AI tools are only as good as the data they are trained on, so if they are not trained on a diverse range of writing styles and voices, they may not be able to accurately assess or improve students’ writing. Finally, the use of AI in education raises concerns about privacy and the potential for bias in the technology. It is important for educators and policy makers to carefully consider these potential dangers and take steps to mitigate them when using AI in the teaching of English.

Bet you thought I wrote those first two paragraphs. Nope! I asked a free artificial intelligence tool [https://chat.openai.com/chat] to write them. The prompt for the first was “how will ai impact teaching writing” and the second, “the dangers of ai for teaching English.” Both paragraphs are well written and informative. How many students would offer a clearer and more effective expression of the issues?

What does this mean for education? Because each generated block of text and image is unique, plagiarism becomes impossible to identify through an internet search. Students can turn in class essays and college application essays written by AI, and no one would know the difference. What do schools do to adjust to this new reality? Making all writing happen within the classroom period penalizes students who take time with their writing process and prevents true polishing of written drafts. We need to rethink how we teach and evaluate writing.

Artificial intelligence now can also generate unique, free, uncopyrighted images with Dall-E. To make Dall-E work, researchers feed images to a computer, matching them with word descriptors. AI is trained to make connections and then use them to produce new images, but the images are only as good as the input and training. Researchers then overlay “noise” in the form of visual static and teach AI to remove the noise to reveal a clear image; AI then generates new images based on the words used as input. Bias from the internet can show up in results, “such as a prompt for a ‘CEO’ that produces only white men” [washpost.com]. Dall-E generated the image accompanying this blog after I requested “a water color image of a computer screen with AI generated images” [https://labs.openai.com]. As a blogger, I no longer have to pay to use images or worry about avoiding copyright issues. Where, though, does that leave visual artists?
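The overlay-noise-then-remove-it idea can be illustrated with a tiny toy example. This sketch is a deliberate simplification: real image generators like Dall-E use trained neural networks to reverse the noise, not the simple averaging stand-in used here, and the four numbers below stand in for an image’s pixels:

```python
import random

random.seed(0)

# A tiny grayscale "image": four pixel values between 0 and 1.
clean_image = [0.1, 0.9, 0.5, 0.3]

def add_noise(image, amount=0.2):
    """Overlay random static on every pixel (the 'noising' step)."""
    return [px + random.uniform(-amount, amount) for px in image]

def denoise_by_averaging(noisy_copies):
    """A stand-in 'denoiser': average many noisy versions pixel by pixel.
    Real systems learn this reversal with a neural network instead."""
    n = len(noisy_copies)
    return [sum(copy[i] for copy in noisy_copies) / n
            for i in range(len(noisy_copies[0]))]

noisy = [add_noise(clean_image) for _ in range(500)]
recovered = denoise_by_averaging(noisy)
```

Because the random static cancels out on average, the recovered values land very close to the original pixels. The point of the illustration is only that a clear image can be pulled back out of static; the generative step comes from learning that reversal well enough to “denoise” pure static into a brand-new image matching a text prompt.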

Wael Abd-Almageed, a professor at the University of Southern California’s school of engineering, warns that “Once the line between truth and fake is eroded, everything will become fake. We will not be able to believe anything” [washpost.com 2]. Such image generation poses clear risks. “Each evolution of image technology has introduced potential harms alongside increased efficiency. Photoshop enabled precision editing and enhancement of photos, but also served to distort body images, especially among girls, studies show” [Ibid.]. AI can generate the kind of “deep fakes” that have been misused politically. Companies that offer and use AI have struggled to build in safeguards, but the technology is progressing more quickly than those efforts. AI researcher Maarten Sap said “asking whether OpenAI acted responsibly was the wrong question. ‘There’s just a severe lack of legislation that limits the negative or harmful usage of technology. The United States is just really behind on that stuff’” [Ibid.].

Hand wringing won’t help. Teachers need to focus on process more than output. A New Zealand education technology expert warns that AI will transform teaching: “it’s going to require an awful lot of work from teachers and institutions. Teaching that looks at a completed product only – an essay for example – is finished” [the spinoff]. And, since “AI doesn’t know or care about truth” [Ibid.], educators will need to find strategies to evaluate for truth and teach students those strategies.

Whether we like the advent of AI or fear it, we need to figure out how best to work with it. AI does provide opportunities. For example, “teachers could generate a set of AI ‘essays’ on a topic, then set students (individually or in groups) to critique them and write their own better version” [Ibid.]. AI may even prompt long overdue changes in pedagogy. Perhaps we will finally focus on “better ways to assess for learning, such as constructive feedback, peer assessment, teachback” [Ibid.].

This is all fairly new to education. In the summer of 2020, OpenAI began offering limited access to a new program called Generative Pre-Trained Transformer 3 [GPT-3]. Even if “GPT-3 seems to be manipulating higher-order concepts and putting them into new combinations, rather than just mimicking patterns of text” [nytimes.com], we face a sea change in communication and education. We are behind the 8 ball in thinking about how to adapt to and even thrive in this new world order, and how to regulate it fairly and effectively. But we must, for computers’ ability to generate text and images is already here and growing. AI is here now.