AI Revisited

Image courtesy of DALL-E

The tsunami of information, fear-mongering, and argument about the impact of Artificial Intelligence has swamped my mailbox and muddied my thinking. There’s so much that I’m taking a different tack to cover more ground. Here’s a bulleted list for you.

  • Industry is racing ahead of academia: Until 2014, most significant machine learning models were released by academia. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia. “Building state-of-the-art AI systems increasingly requires large amounts of data, compute, and money, resources that industry actors inherently possess in greater amounts compared to nonprofits and academia” [stanford.edu].
  • According to the AIAAIC database, which tracks incidents of ethical misuse of AI, the number of AI misuse incidents and controversies has increased 26-fold since 2012 [Ibid.].
  • While the proportion of companies adopting AI has plateaued in recent years, it has still more than doubled since 2017, and the companies that have adopted AI continue to pull ahead of their peers [Ibid.].
  • “Princeton University computer science professor Arvind Narayanan has called ChatGPT a ‘bulls— generator.’ While their responses often sound authoritative, the models lack reliable mechanisms for verifying the things they say. Users have posted numerous examples of the tools fumbling basic factual questions or even fabricating falsehoods, complete with realistic details and fake citations.” Just look at the case of the law professor whom ChatGPT falsely accused of sexual harassment, citing a non-existent Washington Post article [washingtonpost.com].
  • Will Oremus warns, “The bad news is that anxiety at the pace of change also might be warranted — not because AI will outsmart humans, but because humans are already using AI to outsmart, exploit, and shortchange each other in ways that existing institutions aren’t prepared for. And the more AI is regarded as powerful, the greater the risk people and corporations will entrust it with tasks that it’s ill-equipped to take on… OpenAI is now leading a headlong race, tech giants are axing their ethicists and, in any case, the horse may have already left the barn” [washingtonpost.com2].
  • PCMag.com reports that half of Americans can’t distinguish between AI-generated and human writing, and warns that the confusion will only deepen as AI tools improve [pcmag.com].
  • Last fall the Biden White House unveiled its Blueprint for an AI Bill of Rights to protect users, but the framework is voluntary and has no teeth [whitehouse.gov].
  • Italy blocked ChatGPT in early April, citing data-privacy concerns [nytimes.com].
  • “A group of prominent artificial intelligence researchers is calling on the European Union to expand its proposed rules for the technology to expressly target tools like ChatGPT, arguing in a new brief that such a move could ‘set the regulatory tone’ globally” [washingtonpost.com3].
  • Just this past week, Meta unveiled a powerful new AI tool that can identify individual items within images, generating masks for any object in any image or video, including objects and image types it had not encountered during training [techxplore.com].
  • Also just this past week, a start-up in New York was reported to be among a group of companies using generative AI to build systems that produce short videos from a few typed words [nytimes.com2].
  • This month Sen. Michael F. Bennet (D-Colorado) tweeted, “The use of AI is growing — without any required safeguards to protect our kids, prevent false information, or preserve privacy. The development of AI audits and assessments can’t come soon enough” [washingtonpost.com4].
  • On March 29, 2023, “more than a thousand tech leaders and researchers…signed an open letter calling for a six-month pause in developing the most powerful AI systems” [npr.org].

But can you stop a train running at full speed?

I don’t think so. Clearly we’re in the midst of a transformative upheaval that will profoundly change society and the roles within it. And we are unprepared.

What do we do? We need a major national and international initiative that brings together the best thinkers from fields including technology and ethics to develop the standards and vision that will help ensure AI becomes a force more for good than ill. The call to action is urgent.

As educators, we need to stop wringing our hands and get on with the work of deciding how to work with AI. Prohibitions fail. How do we use this tool well? How do we minimize its potential for harm? This is urgent work, long overdue.
