Partnering with AI

Image courtesy of Dall-E

Educators around the country recognize that AI is ushering in an unavoidable transformation. Those who fear this transformation wring their hands and try to block AI, “[b]ut the barricade has fallen. Tools like ChatGPT aren’t going anywhere; they’re only going to improve, and barring some major regulatory intervention, this particular form of machine intelligence is now a fixture of our society” (nytimes.com). The “breakneck pace of AI developments suggests that humans could never outrun it,” so we need to learn how to embrace AI and use it wisely. Educational technology researcher Mike Sharples, of the UK’s The Open University, says transformers like GPT-3 are set to disrupt education. Teachers will have to change the way they teach. “As educators, if we are setting students assignments that can be answered by AI, are we really helping students learn?” he asks (thespinoff.co.nz).

Education faces a critical choice now: we can fight an inevitable shift, or we can learn to use that shift to improve teaching and learning. The first approach is doomed, the second overdue. The pressure of AI should force educators to develop deeper questioning and thinking approaches.

We already know about efforts to defeat AI that won’t work. Last December Markham Heid, a health and science writer, called for handwritten essays to “beat AI.” He claimed, “The dump-and-edit method isn’t necessarily an inferior way to produce quality writing. But in many ways, it is less challenging for the brain — and challenging the brain is central to education itself” (washingtonpost.com). While writing by hand has a different neurological impact than keyboarding, one that may be useful, it also has significant drawbacks: it slows the process for fast keyboarders who cannot write by hand as fast as they think [a major issue for me], it creates potential legibility problems for the teacher reading the work, and it makes significant revision harder. And handwritten essays would have to be completed during class time, to ensure no use of AI, which would shorten any writing opportunity.

Nor can we avoid “cheating with AI” by turning to technology. Tools to detect the use of AI and prevent cheating “aren’t reliably accurate, and it’s relatively easy to fool them by changing a few words, or using a different A.I. program to paraphrase certain passages” (nytimes.com).

From Kevin Roose, a technology columnist: “Instead of starting an endless game of whack-a-mole against an ever-expanding army of A.I. chatbots, here’s a suggestion: For the rest of the academic year, schools should treat ChatGPT the way they treat calculators — allowing it for some assignments, but not others, and assuming that unless students are being supervised in person with their devices stashed away, they’re probably using one” (Ibid.). This approach, though, fails to address writing done outside the classroom. Should we simply succumb to AI, or should we consider how best to make out-of-class writing enhanced by AI rather than replaced by it?

Mike Sharples used GPT-3 to urge educators to “rethink teaching and assessment” in light of the technology, so that we might make it a teaching assistant and a tool for creativity instead of a cheating resource (theatlantic.com). Paul Fyfe, an English professor who teaches a “Data and the Human” course, went further, asking students to “cheat” by writing an assignment with AI and then reflecting on “how the experiment tested or changed their ideas about writing, AI or humanness.” He argues that students who refine their awareness of artificial prose may also be better equipped to recognize what he calls “synthetic disinformation” in the wild. Students in his experiment, for example, discovered plausible-sounding false statements and quotes from nonexistent experts in the essays they produced with the help of AI (insidehighered.com).

Peter Greene, who writes about K-12 policies and practices, posits that “Authentic assignments grow out of classroom discussion and debate. When an English class studies a particularly rich work of literature, the focus and emphasis will grow out of the class itself, leading naturally to ideas for essays about the work. The discussion becomes one of the texts being considered, and it’s a text the software has no access to.” He also suggests using local concerns, current events, and real issues in the school community; such topics are not only challenging for algorithms to fake, but they also tend to be “richer and more rewarding.” Research papers that use primary sources and live interviews are another option. (forbes.com)

If ChatGPT kills certain types of writing, like formulaic five-paragraph essays and typical college admission essays, will that really be a loss? Only if we fail to replace those performative types of writing with deeper, more meaningful kinds of writing. For example, Greene suggests using ChatGPT as a prompt tester. If teachers feed their prompts to the chatbot and it produces an essay they would consider well-written, then “that prompt should be refined, reworked, or simply scrapped… if you have come up with an assignment that can be satisfactorily completed by computer software, why bother assigning it to a human being?” (forbes.com2)
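
Greene’s prompt test is easy to automate. Here is a minimal sketch, assuming OpenAI’s Python SDK (v1.x) and an OPENAI_API_KEY environment variable; the model name and sample prompts are illustrative stand-ins, and the teacher still does the actual judging by reading the output:

```python
# A minimal sketch of Greene's "prompt tester": send each essay prompt
# to a chatbot and read what comes back. If the machine's essay would
# earn a good grade, the prompt needs refining, reworking, or scrapping.
# Assumes the openai package (v1.x) and an OPENAI_API_KEY variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical prompts a teacher might want to test.
prompts = [
    "Compare and contrast two 19th-century short stories on gender.",
    "Explain the symbolism of the green light in The Great Gatsby.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\n{response.choices[0].message.content}\n")
```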

What other concrete strategies will make AI a helpful partner in education?

  • Create outlines: Cherie Shields, a high school English teacher in Oregon, had students in one of her classes use ChatGPT to create outlines for their essays comparing and contrasting two 19th-century short stories that touch on themes of gender and mental health. Students evaluated the outlines and then used their revised versions to write their essays longhand. She said this approach “had not only deepened students’ understanding of the stories” but also “taught them about interacting with A.I. models, and how to coax a helpful response out of one” (nytimes.com).
  • Focus on process as well as product: New Zealand education technology expert Stephen Marshall, of Victoria University of Wellington, warns: “Teaching that looks at a completed product only – an essay for example – is finished” (thespinoff.co.nz).
  • Use AI to learn to edit and verify instead of regurgitating: Ben Thompson, who writes Stratechery, a publication analyzing the strategy and business side of technology and media as well as the impact of technology on society, suggests a radical approach: schools should adopt a software suite that tracks AI use and should challenge students to use that suite to generate their answers to a given prompt: “every answer that is generated is recorded so that teachers can instantly ascertain that students didn’t use a different system.” He predicts that “the system will frequently give the wrong answers (and not just on accident — wrong answers will be often pushed out on purpose); the real skill in the homework assignment will be in verifying the answers the system churns out — learning how to be a verifier and an editor, instead of a regurgitator.” Wouldn’t that help develop critical twenty-first century skills for an AI-dominated world? (stratechery.com) A minimal sketch of such an answer-logging system appears after this list.
  • Evaluation and critical thinking: “Several teachers…instructed students to try to trip up ChatGPT, or evaluate its responses the way a teacher would evaluate a student’s” (nytimes.com). Krista Fancher’s student loaded a social entrepreneurship project from the previous year and “asked chat gpt to find everything wrong with the solution. It did. He used the list of flaws to redesign the project and built a new prototype designed to connect grandparents and their grandchildren.” (ditchthattextbook.com).
  • Problem-solving and synthesis: AI can help students create projects in which themes and elements are connected in non-linear fashion. One teacher annually checked her seniors’ understanding of Paradise Lost by having them put John Milton on trial before local lawyers, asking if he had successfully justified the ways of God to man. (forbes.com)
  • Teacher planning: use AI to
    • write personalized lesson plans for each student
    • generate ideas for classroom activities
    • serve as an after-hours tutor or a debate sparring partner
    • serve as a tool for English language learners to improve their basic writing skills.
  • AI-applied rubrics: Ronak Shah gave his science fair rubric to ChatGPT and had students submit their work for feedback that would have taken him hours to provide himself. He and his students found the feedback helpful: “it offered tweaks to improve replicability and validity. It complimented innovative and unique ideas. In fact, it summarized all of its feedback with lots of ‘glow and grow’ phrasing” (edweek.org).
  • Challenge students to best ChatGPT: Shah also gave ChatGPT questions from his science test and then gave the machine-generated answers to students. He challenged them to improve on the machine’s answers, and “Students were offended at the notion that a robot could be smarter than they are and worked collaboratively to find any way to strengthen the otherwise very strong responses” (Ibid.).
  • Ronak Shah recommends these changes:
    • “First, validate the world students actually live in and question rigid attachments to pedagogy that don’t fit the world they’ll inherit. As teachers, it is our responsibility to open ourselves up to the challenges students will have to face. If we focus our time and energy on that, we’ll be able to do it better. It’s OK to let go of the rest.
    • “Second, change the relationship among students, teachers, and technology… Challenge the students to form an alliance with you, to create content and express knowledge better than a generative AI tool like ChatGPT.
    • “Third, we have to change the way we assess students and the role those assessments play in school accountability. Our assessments are mostly designed to test student thinking on items that are easy to ask and measure on a test. But just because they’re easy to measure doesn’t mean we’re measuring the right things.
    • “Let’s move toward a future where teachers and assessments focus on collaborative, real-world performance rather than answers to narrow skill or fact questions. And let’s embrace ChatGPT and other AI software to help us get there” (Ibid.).
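
Thompson’s recorded-answer suite does not exist as a product, but its core mechanism, logging every generated answer so a teacher can verify that a submission came from the sanctioned system, is simple. Here is a toy sketch under those assumptions; the generate() function is a stand-in for a real model call, and the deliberate-error injection mirrors his suggestion that wrong answers be pushed out on purpose:

```python
# Toy sketch of Thompson's idea: a school-sanctioned generation tool
# that logs every answer it produces, so teachers can later confirm a
# student's submission came from this system and not another one.
import hashlib
import json
import random
import time

LOG_PATH = "generation_log.jsonl"

def generate(prompt):
    # Stand-in for a real language-model call.
    answer = f"A generated answer to: {prompt}"
    if random.random() < 0.3:  # sometimes inject an error on purpose
        answer += " (This answer contains a deliberate mistake.)"
    return answer

def generate_and_log(student_id, prompt):
    answer = generate(prompt)
    record = {
        "student": student_id,
        "prompt": prompt,
        "answer_hash": hashlib.sha256(answer.encode()).hexdigest(),
        "time": time.time(),
    }
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps(record) + "\n")
    return answer

# A teacher can hash a submitted answer and look for it in the log.
print(generate_and_log("student42", "Explain photosynthesis."))
```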

In May 2023, the United States Office of Educational Technology published Artificial Intelligence and the Future of Teaching and Learning, a thorough if somewhat academic exploration with seven recommendations:

  • Emphasize Humans in the Loop
  • Align AI Models to a Shared Vision for Education
  • Design Using Modern Learning Principles
  • Prioritize Strengthening Trust
  • Inform and Involve Educators
  • Focus R&D on Addressing Context and Enhancing Trust and Safety
  • Develop Education-Specific Guidelines and Guardrails (tech.ed.gov)

This committee jargon is unlikely to drive coordinated and meaningful change. Neither individual school districts nor teachers themselves have the capacity or resources to make such global changes. We need a national approach.

Trailblazing teachers are publishing and sharing ways to use AI – check out “Ditch that Textbook” for excellent examples (ditchthattextbook.com). That’s a great start, but it’s not enough. The pace of AI advancement may seem terrifying, but fear won’t slow it down. We need a coordinated national response to AI’s impacts across the board; in education, that means coordinated national professional learning about AI for educators. AI can destroy or transform education. It’s up to us to fight for a valuable and long overdue transformation, one that will not only convert AI from an enemy into a partner but will also push us to provide the deeper learning opportunities and currently needed skills that we have yet to deliver. The time is now, if not yesterday!

AI Revisited

Image courtesy of Dall-E

The tsunami of information, fear-mongering, and arguing about the impact of Artificial Intelligence has swamped my mailbox and muddied my thinking. There’s just so much that I’m taking a different tack to cover more ground. Here’s a bulleted list for you.

  • Industry is racing ahead of academia: Until 2014, most significant machine learning models were released by academia. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia. “Building state-of-the-art AI systems increasingly requires large amounts of data, compute, and money, resources that industry actors inherently possess in greater amounts compared to nonprofits and academia” [stanford.edu].
  • According to the AIAAIC database, which tracks incidents related to the ethical misuse of AI, the number of AI incidents and controversies has increased 26-fold since 2012 [Ibid.].
  • While the proportion of companies adopting AI has plateaued in recent years, it has more than doubled since 2017, and the companies that have adopted AI continue to pull ahead [Ibid.].
  • “Princeton University computer science professor Arvind Narayanan has called ChatGPT a ‘bulls— generator.’ While their responses often sound authoritative, the models lack reliable mechanisms for verifying the things they say. Users have posted numerous examples of the tools fumbling basic factual questions or even fabricating falsehoods, complete with realistic details and fake citations.” Just look at the supposed case against a law professor, in which ChatGPT cited a non-existent Washington Post article accusing him of sexual harassment that never occurred [washingtonpost.com].
  • Will Oremus warns, “The bad news is that anxiety at the pace of change also might be warranted — not because AI will outsmart humans, but because humans are already using AI to outsmart, exploit, and shortchange each other in ways that existing institutions aren’t prepared for. And the more AI is regarded as powerful, the greater the risk people and corporations will entrust it with tasks that it’s ill-equipped to take on… OpenAI is now leading a headlong race, tech giants are axing their ethicists and, in any case, the horse may have already left the barn” [washingtonpost.com2].
  • PCMag.com reports that half of Americans can’t distinguish between AI and human writing, warning that it will only get worse as AI tools continue to improve [pcmag.com].
  • Last fall the Biden White House unveiled a Blueprint for an AI Bill of Rights to protect users, but it’s voluntary and has no teeth [whitehouse.gov].
  • Italy has blocked ChatGPT as of early April [nytimes.com].
  • “A group of prominent artificial intelligence researchers is calling on the European Union to expand its proposed rules for the technology to expressly target tools like ChatGPT, arguing in a new brief that such a move could ‘set the regulatory tone’ globally” [washingtonpost.com3].
  • Just this past week, Meta unveiled a powerful new AI tool that can identify individual items within images, allowing it to generate masks for any object in any image or video, even objects and image types it had not encountered during training [techxplore.com].
  • Also just this past week, a New York start-up was reported to be among a group of companies using generative AI to build systems that produce short videos based on a few words typed into a computer [nytimes.com2].
  • This month Sen. Michael F. Bennet (D-Colorado) tweeted, “The use of AI is growing — without any required safeguards to protect our kids, prevent false information, or preserve privacy. The development of AI audits and assessments can’t come soon enough” [washingtonpost.com4].
  • On March 29, 2023, “more than a thousand tech leaders and researchers…signed an open letter calling for a six-month pause in developing the most powerful AI systems” [npr.org].

But can you stop a moving train going full speed ahead?

I don’t think so. Clearly we’re in the midst of a transformative upheaval that will change society and the roles within it profoundly. And we are unprepared.

What do we do? We need a major national and international initiative to bring together the best thinkers in fields that include technology and ethics; they must develop standards and a vision to help ensure that AI becomes a force more for good than for ill. This call to action is urgent.

As educators, we need to stop wringing our hands and move on to the work of deciding how to work with AI. Prohibitions fail. What do we do to use this tool well? What do we do to minimize its potential for harm? Urgent work to be done, long overdue.

Befriending AI

Columns and comments about emerging AI continue to abound, and I find myself “going down the rabbit hole” as I try to determine which predictions are most likely and what changes will occur. Free artificial intelligence apps are widely available. What do we do about them for education?

I’m old, old enough that my youth predates computers, smart phones, TV streaming, and virtual reality. I remember the revelations of ATMs, email, fiber optics, robotic surgeries, solar energy, bar codes, and MRIs. I didn’t get my first calculator until college [not that I ever really mastered using a slide rule…], I typed my high school and college papers on a manual typewriter after drafting them longhand on legal pads, and long-distance phone calls were an absolute luxury. I have often pondered the way technological innovations have changed my own life, and the way they changed my teaching. The one thing I’m sure of is that, for better or worse, tech innovations will continue. It’s up to us to figure out the “for better.”

Free AI, through sites like https://openai.com/ and https://openai.com/dall-e-2/, will impact teaching. That’s a given. But artificial intelligence, at least so far, is only as good as the data it’s fed. Since it is generative rather than creative, it cobbles together writing from the data sets that have been fed into “large language models” [LLMs], the same technology that powers auto-complete in apps like Microsoft Word and most email programs. These LLMs are, however, much more powerful: “It turns out that with enough training data and sufficiently deep neural nets, large language models can display remarkable skill if you ask them not just to fill in the missing word, but also to continue on writing whole paragraphs in the style of the initial prompt” [nytimes.com]. That means AI can generate the kinds of essays and technical writing we too often ask our students to produce.
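
To see what “continue on writing” looks like in practice, here is a minimal sketch using the small open-source GPT-2 model via Hugging Face’s transformers library; GPT-2 is an illustrative stand-in for the far larger models discussed in this piece:

```python
# A minimal sketch of the "continue the prompt" behavior described
# above: give the model the start of a sentence and it extends the
# text in the same style, one predicted word piece at a time.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The five-paragraph essay persists in schools because"
result = generator(prompt, max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])  # the prompt plus its continuation
```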

Does that make AI the end of teaching content-area writing in schools as we know it? I hope so. I’ve long thought our focus on the formulaic three-part, five-paragraph essay stifles meaningful thought. We have an exciting opportunity here: AI can transform the teaching of writing if we let it. If we discard our traditional approach of parroting back ideas in favor of posing deep prompts that require original and personalized thought, we can not only prevent AI from replacing students’ own writing but also make teaching and learning more powerful at the same time. For example, my final exam question for some of my more advanced classes was to trace the evolution of one significant idea throughout the literature we’d read all year, along with how the student’s own vision of that idea had evolved through the readings. How can AI not only answer questions like what it means to be a good citizen or what our responsibility as humans is to each other but also provide the student’s personal response? I suspected that AI could not give a profound reply, so I tried the following prompt: “Based on the books we read, what does it mean to be a good citizen and what do I believe?” As I expected, the generated paragraph was superficial and grossly inadequate:

Being a good citizen typically means being responsible, law-abiding, and actively engaged in the community. This can include things like paying taxes, obeying laws, participating in the democratic process, volunteering, and treating others with respect. Ultimately, what it means to be a good citizen may vary depending on the individual and the context, and what you believe may depend on your personal values and beliefs. [written by openai.com]

My students, on the other hand, grappled with this question with specificity that showed their thinking and their learning.

Issues remain. AI is flawed: “Some skeptics argue that the software is capable only of blind mimicry — that it’s imitating the syntactic patterns of human language but is incapable of generating its own ideas or making complex decisions, a fundamental limitation that will keep the L.L.M. approach from ever maturing into anything resembling human intelligence… Other critics believe that software like GPT-3 will forever remain compromised by the biases and propaganda and misinformation in the data it has been trained on, meaning that using it for anything more than parlor tricks will always be irresponsible” [Ibid.]. Furthermore, our legal system lags behind technological developments, and we have work to do on intellectual property and plagiarism.

But AI isn’t going away, and we need to decide how best to harness it. Kevin Roose, author and technology columnist for the New York Times, argues that “schools should thoughtfully embrace ChatGPT as a teaching aid — one that could unlock student creativity, offer personalized tutoring, and better prepare students to work alongside A.I. systems as adults” [nytimes.com2]. He argues that banning AI won’t work: even if schools block openai.com, learners will find ways to access it. He also assures readers that AI can be a valuable teaching tool, writing about its effectiveness in helping students work through their outlines, helping teachers work through their lesson plans, and serving as a debate sparring partner and an after-school tutor. Roose writes, “Even ChatGPT’s flaws — such as the fact that its answers to factual questions are often wrong — can become fodder for a critical thinking exercise. Several teachers told me that they had instructed students to try to trip up ChatGPT, or evaluate its responses the way a teacher would evaluate a student’s” [Ibid.]. He even sends us to Ditch that Textbook [ditchthattextbook.com], which offers 20 specific strategies to make AI an ally for learning.

Going back to my advanced age… I find myself remembering the controversies about other technological innovations: would calculator use render us unable to do math in our heads, would online translation obliterate the need to learn other languages, would internet research make plagiarism inevitable and hard to spot, and so on. Over time, learning from each other, we have managed to harness the potential of those technologies while somewhat limiting their harm. We need to approach AI with that mindset. I expect the journey will be challenging, but if it leads to deeper questions and more hands-on learning, it will be worth it.

Note: The image was generated by Dall-E, but all the writing except the indented paragraph is indeed my own!

AI Is Here!

AI [Artificial Intelligence] has the potential to impact teaching writing in a number of ways. For example, AI-powered writing assistants can help students improve their writing skills by providing real-time feedback and suggestions on grammar, spelling, and style. AI-powered tools can also help teachers grade written assignments more quickly and accurately, freeing up time for other activities such as providing individualized feedback to students. Additionally, AI can be used to create personalized learning plans for students, helping them to progress at their own pace and improve their writing skills more effectively.

While AI has the potential to greatly improve the teaching of English, there are also some potential dangers to consider. For example, relying too heavily on AI-powered tools could lead to students becoming overly reliant on technology and losing their ability to think and write independently. Additionally, AI tools are only as good as the data they are trained on, so if they are not trained on a diverse range of writing styles and voices, they may not be able to accurately assess or improve students’ writing. Finally, the use of AI in education raises concerns about privacy and the potential for bias in the technology. It is important for educators and policy makers to carefully consider these potential dangers and take steps to mitigate them when using AI in the teaching of English.

Bet you thought I wrote those first two paragraphs. Nope! I asked a freely available artificial intelligence tool [https://chat.openai.com/chat] to write them. The prompt for the first was “how will ai impact teaching writing” and for the second, “the dangers of ai for teaching English.” Both paragraphs are well written and informative. How many students would offer a clearer and more effective expression of the issues?

What does this mean for education? Because each generated text and image is unique, plagiarism becomes impossible to identify through an internet search. Students can turn in class essays and college application essays written by AI, and no one would know the difference. What do schools do to adjust to this new reality? Making all writing happen within the class period penalizes students who take time with their writing process and prevents true polishing of drafts. We need to rethink how we teach and evaluate writing.

Artificial intelligence can now also generate unique, free, uncopyrighted images with Dall-E. To make Dall-E work, researchers feed images to a computer, matching them with word descriptors. The AI is trained to make connections and then use them to produce new images, but the images are only as good as the input and training. “Bias found on the internet can show up in results, such as a prompt for a ‘CEO’ that produces only white men” [washpost.com]. Researchers then overlay “noise,” in the form of visual static, and teach the AI to remove that noise to reveal a clear image; the AI generates the images based on the words used as input. Dall-E generated the image accompanying this blog after I requested “a water color image of a computer screen with AI generated images” [https://labs.openai.com]. As a blogger, I no longer have to pay to use images or worry about avoiding copyright issues. Where, though, does that leave visual artists?
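
To make the noise-removal idea concrete, here is a toy sketch of that denoising loop; the predict_noise function is a random placeholder where a real system like Dall-E would call a trained neural network conditioned on the text prompt:

```python
# Toy sketch of the denoising loop described above: begin with pure
# visual static and repeatedly subtract the noise a model predicts,
# gradually revealing an image. A real system's noise predictor is a
# trained network guided by the text prompt; this one is a stand-in.
import numpy as np

rng = np.random.default_rng(0)

def predict_noise(image, prompt, step):
    # Placeholder: a trained model would predict the actual noise here.
    return rng.normal(0.0, 0.02, size=image.shape)

image = rng.normal(0.0, 1.0, size=(64, 64, 3))  # pure static
prompt = "a water color image of a computer screen"
for step in reversed(range(50)):  # peel the noise away step by step
    image = image - predict_noise(image, prompt, step)
```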

Wael Abd-Almageed, a professor at the University of Southern California’s school of engineering, warns that “Once the line between truth and fake is eroded, everything will become fake. We will not be able to believe anything” [washpost.com2]. Such image generation poses clear risks. “Each evolution of image technology has introduced potential harms alongside increased efficiency. Photoshop enabled precision editing and enhancement of photos, but also served to distort body images, especially among girls, studies show” [Ibid.]. AI can generate the kind of “deep fakes” that have already been misused politically. Companies that offer and use AI have struggled to build in safeguards, but the technology is progressing faster than those efforts. AI researcher Maarten Sap said asking whether OpenAI acted responsibly was the wrong question: “There’s just a severe lack of legislation that limits the negative or harmful usage of technology. The United States is just really behind on that stuff” [Ibid.].

Hand wringing won’t help. Teachers need to focus on process more than output. New Zealand education technology expert Stephen Marshall warns that AI will transform teaching: “it’s going to require an awful lot of work from teachers and institutions. Teaching that looks at a completed product only – an essay for example – is finished” [thespinoff.co.nz]. And, since “AI doesn’t know or care about truth” [Ibid.], educators will need to find strategies to evaluate for truth and teach students those strategies.

Whether we like the advent of AI or fear it, we need to figure out how best to work with it. AI does provide opportunities. For example, “teachers could generate a set of AI ‘essays’ on a topic, then set students (individually or in groups) to critique them and write their own better version” [Ibid.]. AI may even prompt long overdue changes in pedagogy. Perhaps we will finally focus on “better ways to assess for learning, such as constructive feedback, peer assessment, teachback” [Ibid.].

This is all fairly new to education. In the summer of 2020, OpenAI began offering limited access to a new program called Generative Pre-Trained Transformer 3 [GPT-3]. Even if “GPT-3 seems to be manipulating higher-order concepts and putting them into new combinations, rather than just mimicking patterns of text” [nytimes.com], we face a sea change in communication and education. We are behind the eight ball in thinking about how to adapt to, and even thrive in, this new world order and how to regulate it fairly and effectively. But we must, for computers’ ability to generate text and images is growing continuously. AI is here now.