
AI [Artificial Intelligence] has the potential to impact teaching writing in a number of ways. For example, AI-powered writing assistants can help students improve their writing skills by providing real-time feedback and suggestions on grammar, spelling, and style. AI-powered tools can also help teachers grade written assignments more quickly and accurately, freeing up time for other activities such as providing individualized feedback to students. Additionally, AI can be used to create personalized learning plans for students, helping them to progress at their own pace and improve their writing skills more effectively.
While AI has the potential to greatly improve the teaching of English, there are also some potential dangers to consider. For example, relying too heavily on AI-powered tools could lead to students becoming overly reliant on technology and losing their ability to think and write independently. Additionally, AI tools are only as good as the data they are trained on, so if they are not trained on a diverse range of writing styles and voices, they may not be able to accurately assess or improve students’ writing. Finally, the use of AI in education raises concerns about privacy and the potential for bias in the technology. It is important for educators and policy makers to carefully consider these potential dangers and take steps to mitigate them when using AI in the teaching of English.
Bet you thought I wrote those first two paragraphs. Nope! I asked OpenAI’s freely available chatbot, ChatGPT [https://chat.openai.com/chat], to write them. The prompt for the first was “how will ai impact teaching writing” and for the second, “the dangers of ai for teaching English.” Both paragraphs are well written and informative. How many students could offer a clearer, more effective expression of the issues?
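(For the technically curious: I used the public chat page, but the same request can be scripted. The sketch below is only an illustration, assuming OpenAI’s Python library and an API key; the model name is my guess, and the wording of the answers will differ every time.)

```python
# A sketch only: the paragraphs above came from the chat.openai.com web page,
# but the same prompts can be sent through OpenAI's Python library (v1.x).
# Assumes an OPENAI_API_KEY environment variable; the model name is a guess,
# and the generated text will differ from run to run.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "how will ai impact teaching writing",
    "the dangers of ai for teaching English",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("-" * 40)
```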
What does this mean for education? Because each generated block of text or image is unique, plagiarism becomes impossible to identify through an internet search. Students can turn in class essays, or submit college application essays, written by AI, and no one would know the difference. How do schools adjust to this new reality? Making all writing happen within the classroom period penalizes students who take time with their writing process and prevents true polishing of drafts. We need to rethink how we teach and evaluate writing.
Artificial intelligence can now also generate unique, free, uncopyrighted images with DALL-E. To build DALL-E, researchers feed images to a computer, pairing each one with word descriptors; the AI is trained to make those connections and then use them to produce new images, so the results are only as good as the input and training. “Bias found on the internet can show up in results, such as a prompt for a ‘CEO’ that produces only white men” [washpost.com]. Researchers also overlay “noise,” a kind of visual static, and teach the AI to remove it to reveal a clear image; the AI then generates new images from the words it is given as a prompt. DALL-E generated the image accompanying this blog after I requested “a water color image of a computer screen with AI generated images” [https://labs.openai.com]. As a blogger, I no longer have to pay to use images or worry about avoiding copyright issues. Where, though, does that leave visual artists?
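(For readers who want a rough picture of what “teaching the AI to remove the noise” means, here is a toy sketch. It is not DALL-E and involves no text prompts at all: it takes clean number patterns standing in for images, buries them in random static, fits a simple statistical model to undo the static, and then applies that model to a new noisy example.)

```python
# Toy sketch of the denoising idea described above -- not a real diffusion
# model, and there is no text prompt involved. The "images" are 1-D arrays,
# and the "model" is ordinary least squares predicting the added noise.
import numpy as np

rng = np.random.default_rng(0)

# 500 tiny "images", each 64 pixels: a smooth pattern at random strengths
pattern = np.sin(np.linspace(0, 3 * np.pi, 64))
clean = pattern * rng.uniform(0.5, 1.5, (500, 1))

# Step 1: overlay "noise" -- the visual static
noise = rng.normal(0, 0.5, clean.shape)
noisy = clean + noise

# Step 2: teach a model to spot the noise hiding in a noisy image
W, *_ = np.linalg.lstsq(noisy, noise, rcond=None)

# Step 3: on a brand-new noisy image, estimate the noise and remove it
test_noisy = pattern + rng.normal(0, 0.5, 64)
denoised = test_noisy - test_noisy @ W

print("average error before denoising:", np.abs(test_noisy - pattern).mean())
print("average error after denoising: ", np.abs(denoised - pattern).mean())
```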
Wael Abd-Almageed, a professor at the University of Southern California’s school of engineering, warns that “Once the line between truth and fake is eroded, everything will become fake. We will not be able to believe anything” [washpost.com2]. Such image generation poses clear risks. “Each evolution of image technology has introduced potential harms alongside increased efficiency. Photoshop enabled precision editing and enhancement of photos, but also served to distort body images, especially among girls, studies show” [Ibid.]. AI can generate the kind of “deep fakes” that have been misused politically. Companies that offer and use AI have struggled to build in safeguards, but the technology is progressing more quickly than those efforts. AI researcher Maarten Sap said “asking whether OpenAI acted responsibly was the wrong question. ‘There’s just a severe lack of legislation that limits the negative or harmful usage of technology. The United States is just really behind on that stuff’” [Ibid.].
Hand-wringing won’t help. Teachers need to focus on process more than output. A New Zealand education technology expert warns that AI will transform teaching: “it’s going to require an awful lot of work from teachers and institutions. Teaching that looks at a completed product only – an essay for example – is finished” [The Spinoff]. And, since “AI doesn’t know or care about truth” [Ibid.], educators will need strategies for evaluating writing for truthfulness, and they will need to teach students those strategies.
Whether we welcome the advent of AI or fear it, we need to figure out how best to work with it. AI does provide opportunities. For example, “teachers could generate a set of AI ‘essays’ on a topic, then set students (individually or in groups) to critique them and write their own better version” [Ibid.]. AI may even prompt long-overdue changes in pedagogy. Perhaps we will finally focus on “better ways to assess for learning, such as constructive feedback, peer assessment, teachback” [Ibid.].
This is all fairly new to education. In the summer of 2020, OpenAI began offering limited access to a new program called Generative Pre-trained Transformer 3 [GPT-3]. If, as the Times suggests, “GPT-3 seems to be manipulating higher-order concepts and putting them into new combinations, rather than just mimicking patterns of text” [nytimes.com], then we face a sea change in communication and education. We are behind the eight ball in thinking about how to adapt to, and even thrive in, this new world order, and how to regulate it fairly and effectively. But we must, because computers’ ability to generate text and images keeps growing. AI is here now.