
Image courtesy of DALL-E
Educators around the country recognize that AI is ushering in an unavoidable transformation. Those who fear this transformation wring their hands and try to block AI, “[b]ut the barricade has fallen. Tools like ChatGPT aren’t going anywhere; they’re only going to improve, and barring some major regulatory intervention, this particular form of machine intelligence is now a fixture of our society” (nytimes.com). The “breakneck pace of AI developments suggests that humans could never outrun it,” so we need to learn how to embrace AI and use it wisely. Educational technology researcher Mike Sharples, of the UK’s The Open University, says transformers like GPT-3 are set to disrupt education. Teachers will have to change the way they teach. “As educators, if we are setting students assignments that can be answered by AI, are we really helping students learn?” he asks (thespinoff.co.nz).
Education now faces a critical choice: we can fight an inevitable shift, or we can learn to use that shift to improve teaching and learning. The first approach is doomed; the second is overdue. The pressure of AI should force educators to develop deeper questioning and thinking approaches.
We already know about efforts to defeat AI that won’t work. Last December Markham Heid, a health and science writer, called for handwritten essays to “beat AI.” He claimed, “The dump-and-edit method isn’t necessarily an inferior way to produce quality writing. But in many ways, it is less challenging for the brain — and challenging the brain is central to education itself” (washingtonpost.com). While writing by hand engages the brain differently than keyboarding, in ways that may be useful, it also has significant drawbacks: it slows down fast keyboarders who cannot write as fast as they think [a major issue for me], it can create legibility problems for the teacher reading the work, and it makes significant revision much harder. And handwritten essays would have to be completed during class time, to ensure no use of AI, which would shorten any writing opportunity.
Nor can we avoid “cheating with AI” by turning to technology. Tools to detect the use of AI and prevent cheating “aren’t reliably accurate, and it’s relatively easy to fool them by changing a few words, or using a different A.I. program to paraphrase certain passages” (nytimes.com).
From Kevin Roose, a technology columnist: “Instead of starting an endless game of whack-a-mole against an ever-expanding army of A.I. chatbots, here’s a suggestion: For the rest of the academic year, schools should treat ChatGPT the way they treat calculators — allowing it for some assignments, but not others, and assuming that unless students are being supervised in person with their devices stashed away, they’re probably using one” (Ibid.). This approach, though, fails to address writing outside the classroom. Should we simply succumb to AI, or consider how best to make writing outside the classroom enhanced by AI rather than replaced by it?
Mike Sharples used GPT-3 to urge educators to “rethink teaching and assessment” in light of the technology, so that we might make it a teaching assistant and a tool for creativity instead of a cheating resource (theatlantic.com). Paul Fyfe, English professor and instructor in a “Data and the Human” course, went further, asking students to “cheat” by writing an assignment with AI and then reflecting on “how the experiment tested or changed their ideas about writing, AI or humanness.” He argues that students who refine their awareness of artificial prose may also be better equipped to recognize what Fyfe calls “synthetic disinformation” in the wild: “Students in his experiment, for example, discovered plausible-sounding false statements and quotes from nonexistent experts in the essays they produced with the help of AI” (insidehighered.com).
Peter Greene, a writer about K-12 policies and practices, posits that “Authentic assignments grow out of classroom discussion and debate. When an English class studies a particularly rich work of literature, the focus and emphasis will grow out of the class itself, leading naturally to ideas for essays about the work. The discussion becomes one of the texts being considered, and it’s a text the software has no access to.” He also suggests using local concerns, current events, and real issues in the school community; such topics are not only challenging for algorithms to fake, but they also tend to be “richer and more rewarding.” Research papers that use primary sources and live interviews are another option. (forbes.com)
If ChatGPT kills certain types of writing, like formulaic five-paragraph essays and typical college admission essays, will that really be a loss? Only if we fail to replace those performative types of writing with deeper, more meaningful kinds of writing. For example, Greene suggests using ChatGPT as a prompt tester. If teachers feed their prompts to the chatbot and it produces an essay they would consider well-written, then “that prompt should be refined, reworked, or simply scrapped… if you have come up with an assignment that can be satisfactorily completed by computer software, why bother assigning it to a human being?” (forbes.com)
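Greene’s prompt-testing idea can even be semi-automated. Here is a minimal sketch in Python; the quality heuristic and the injectable `generate` callable are illustrative assumptions, not anything Greene describes, and a teacher’s own read of the output remains the real test:

```python
# Sketch: run an essay prompt through a chatbot and flag prompts the
# model answers too well. The "generate" callable is injected so any
# model or API can be plugged in; the quality check below is a crude,
# illustrative stand-in for a teacher's own judgment.

def ai_answers_well(essay: str, min_words: int = 250) -> bool:
    """Very rough proxy for 'well-written': long enough and organized
    into multiple paragraphs."""
    words = essay.split()
    paragraphs = [p for p in essay.split("\n\n") if p.strip()]
    return len(words) >= min_words and len(paragraphs) >= 3

def vet_prompt(prompt: str, generate) -> str:
    """Return advice for a prompt, given a generate(prompt) -> str
    callable (e.g. a thin wrapper around a chatbot API)."""
    essay = generate(prompt)
    if ai_answers_well(essay):
        return "rework or scrap: the chatbot handles this prompt easily"
    return "keep: the chatbot's answer falls short of the bar"
```

In practice, `generate` would wrap a real chatbot call; keeping it injectable leaves the sketch self-contained and model-agnostic.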
What other concrete strategies will make AI a helpful partner in education?
- Create outlines: Cherie Shields, a high school English teacher in Oregon, had students in one of her classes use ChatGPT to create outlines for their essays comparing and contrasting two 19th-century short stories that touch on themes of gender and mental health. Students evaluated the outlines and then used their revised versions to write their essays longhand. She said this approach “had not only deepened students’ understanding of the stories” but also “taught them about interacting with A.I. models, and how to coax a helpful response out of one” (nytimes.com).
- Focus on process as well as product: New Zealand education technology expert Stephen Marshall, from Victoria University of Wellington: “Teaching that looks at a completed product only – an essay for example – is finished” (thespinoff.co.nz).
- Use AI to learn to edit and verify instead of regurgitating: Ben Thompson, who writes Stratechery, an analysis of the strategy and business of technology and media and of technology’s impact on society, suggests a radical approach: schools should adopt a software suite that tracks AI use and challenge students to use that suite to generate their answers to a given prompt: “every answer that is generated is recorded so that teachers can instantly ascertain that students didn’t use a different system.” He predicts that “the system will frequently give the wrong answers (and not just on accident — wrong answers will be often pushed out on purpose); the real skill in the homework assignment will be in verifying the answers the system churns out — learning how to be a verifier and an editor, instead of a regurgitator.” Wouldn’t that help develop critical twenty-first-century skills for an AI-dominated world? (stratechery.com)
- Evaluation and critical thinking: “Several teachers…instructed students to try to trip up ChatGPT, or evaluate its responses the way a teacher would evaluate a student’s” (nytimes.com). Krista Fancher’s student loaded a social entrepreneurship project from the previous year and “asked chat gpt to find everything wrong with the solution. It did. He used the list of flaws to redesign the project and built a new prototype designed to connect grandparents and their grandchildren” (ditchthattextbook.com).
- Problem-solving and synthesis: AI can help students create projects in which themes and elements are connected in non-linear fashion. One teacher annually checked her seniors’ understanding of Paradise Lost by having them put John Milton on trial before local lawyers, asking if he had successfully justified the ways of God to man. (forbes.com)
- Teacher planning: use AI to
- write personalized lesson plans for each student
- generate ideas for classroom activities
- serve as an after-hours tutor or debate sparring partner
- serve as a tool for English language learners to improve their basic writing skills.
- AI-applied rubrics: Ronak Shah gave his science fair rubric to ChatGPT and had students submit their work for feedback that would otherwise have taken him hours to provide. He and his students found the feedback helpful: “it offered tweaks to improve replicability and validity. It complimented innovative and unique ideas. In fact, it summarized all of its feedback with lots of ‘glow and grow’ phrasing” (edweek.org).
- Challenge students to best ChatGPT: Shah also gave ChatGPT questions from his science test and then gave the machine-generated answers to students. He challenged them to improve on the machine’s answers, and “Students were offended at the notion that a robot could be smarter than they are and worked collaboratively to find any way to strengthen the otherwise very strong responses” (Ibid.).
- Ronak Shah recommends these changes:
- “First, validate the world students actually live in and question rigid attachments to pedagogy that don’t fit the world they’ll inherit. As teachers, it is our responsibility to open ourselves up to the challenges students will have to face. If we focus our time and energy on that, we’ll be able to do it better. It’s OK to let go of the rest.
- “Second, change the relationship among students, teachers, and technology… Challenge the students to form an alliance with you, to create content and express knowledge better than a generative AI tool like ChatGPT.
- “Third, we have to change the way we assess students and the role those assessments play in school accountability. Our assessments are mostly designed to test student thinking on items that are easy to ask and measure on a test. But just because they’re easy to measure doesn’t mean we’re measuring the right things.
- “Let’s move toward a future where teachers and assessments focus on collaborative, real-world performance rather than answers to narrow skill or fact questions. And let’s embrace ChatGPT and other AI software to help us get there” (Ibid.).
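Shah’s rubric-feedback workflow described above is straightforward to script. A minimal sketch, with the rubric excerpt, prompt wording, and commented-out model call as illustrative placeholders rather than Shah’s actual setup:

```python
# Sketch of Shah's workflow: pair a teacher's rubric with a student
# submission and request "glow and grow" feedback from a chat model.
# The rubric text and prompt wording below are illustrative assumptions.

RUBRIC = """Science fair rubric (excerpt):
1. Testable question and hypothesis
2. Replicable procedure
3. Valid analysis of results"""

def build_feedback_request(rubric: str, submission: str) -> list:
    """Build chat messages; the returned list fits any
    chat-completions-style API."""
    return [
        {"role": "system",
         "content": "You are a science teacher. Grade against the rubric "
                    "and give 'glow and grow' feedback: strengths first, "
                    "then concrete improvements to replicability and "
                    "validity."},
        {"role": "user",
         "content": f"Rubric:\n{rubric}\n\nStudent submission:\n{submission}"},
    ]

# With a real API key, the messages might be sent like this (untested):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-4o-mini",
#     messages=build_feedback_request(RUBRIC, student_work))
```

Separating message construction from the API call keeps the sketch self-contained and lets a teacher swap in whatever model their school has vetted.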
In May 2023, the United States Office of Educational Technology published Artificial Intelligence and the Future of Teaching and Learning, a thorough if somewhat academic exploration with seven recommendations:
- Emphasize Humans in the Loop
- Align AI Models to a Shared Vision for Education
- Design Using Modern Learning Principles
- Prioritize Strengthening Trust
- Inform and Involve Educators
- Focus R&D on Addressing Context and Enhancing Trust and Safety
- Develop Education-Specific Guidelines and Guardrails (tech.ed.gov)
This committee jargon is unlikely to drive coordinated and meaningful change. Neither individual school districts nor teachers themselves have the capacity and resources to make such global changes. We need a national approach.
Trailblazing teachers are publishing ways to use AI and sharing their ideas – check out “Ditch That Textbook” for excellent examples (ditchthattextbook.com). That’s a great start, but it’s not enough. The pace of AI advancement may seem terrifying, but fear won’t slow it down. We need a coordinated national response to AI’s impacts across the board, and in education we need a coordinated national approach to professional learning about AI for educators. AI can destroy education or transform it. It’s up to us to fight for a valuable and long overdue transformation, one that not only converts AI from an enemy into a partner but also pushes us to provide the deeper learning opportunities and updated skills we have yet to deliver. The time is now, if not yesterday!