Apocalyptic Scorecards

IEEE Spectrum recently published an AI “apocalypse” scorecard amid the current hype around large language models. “The AI Apocalypse: A Scorecard: How Worried Are Top AI Experts about the Threat Posed by Large Language Models Like GPT-4?” summarizes the perspectives of 22 AI “luminaries” on two questions: (1) whether today’s LLMs are a sign that artificial general intelligence (i.e., human-like intelligence) is likely; and (2) whether such an intelligence would “cause civilizational disaster.”

Here is a tally of the results:

  1. AGI? 14 scored no, 8 scored yes
  2. Civilizational disaster? 12 scored no, 4 scored yes, 6 scored maybe

I just published a book that attempts to broaden how we think and speak about the apocalyptic imagination. Due to the popularity of certain apocalyptic works, “apocalypse” often refers to the end of reality as we know it. More broadly (and historically) understood, an apocalypse can uncover our hopes as well as our fears. (I explain this and provide an overview of the book in its introduction, which is subtitled “Imagined and Real AI.”)

After exploring a number of concepts in the book, such as attention, agency, augmentation, and ethics, I introduce a rather different type of apocalyptic scorecard in the fifth chapter. There, I pose a set of questions that may help us assess real as well as imagined AI:

  1. Reflective attention: What ultimate hopes and goals are identified? Are these sufficiently critical, multicultural, and participatory? Does the AI ecosystem provide the conditions for cultivating constant critical reflection on and refinement of these, individually and collectively?
  2. Structural agency: What advantages of collective action are used to realize shared goals? Are the AI structures and systems designed to support these ends continuously curated to ensure they enhance rather than inhibit human agency?
  3. Knowledge augmentation: Are people growing in knowledge and seeking greater wisdom? Do AI systems support this growth?
  4. Ethical foundation: Do the AI systems advance political, economic, and social justice and peace?
  5. Reformation: What formative practices accompany AI systems to shape individual and collective attention and agency with, against, and beyond these systems? When AI systems do fail, how may they be rejected, reformed, or resisted?

The last chapter uses this scorecard to evaluate realistic and imagined AI futures depicted in AI 2041: Ten Visions for Our Future, by Kai-Fu Lee and Chen Qiufan, and in Max Tegmark’s Life 3.0: Being Human in the Age of Artificial Intelligence.

I was agnostic about AGI when I wrote this book a year ago, but we do seem to be coming closer to something like it. I am not concerned about existential risk (i.e., the elimination of our species or civilization). I agree with many others who say there are plenty of real risks that need to be addressed now if we want to improve the quality of our lives and world. A robust apocalyptic imagination—and scorecard—can help us realize better futures.