AI and the Transformation of the Library: Exploring New Information Processes and Practices

In a recent post, I argued that libraries have an important role in creating a better information environment for human as well as artificial agents. That post touched on the long history of libraries and information technologies, as well as current concerns related to generative AI from the perspective of information ethics. In this post, I explore various ways we can expect AI to transform library processes and practices. Given the need for proactive design and intentional use of AI within and beyond libraries, I focus on the importance of information practices.

AI and the Information Lifecycle

The configuration of libraries has always been linked with the lifecycle of information: information is created, transmitted, and used to create more information. Within this cycle, libraries perform specialized functions—such as collecting, organizing, preserving, and mediating access to information—so that information may be used and generate new information.

Libraries have taken on different functions within the information lifecycle throughout history. Some years ago, I argued that complexities associated with digital materials caused libraries to reposition themselves within this cycle: they have shifted closer to the point of the creation of information to ensure immediate and long-term access to it. With recent advances in AI—especially generative AI—library functions within the cycle are changing dramatically as well. Automating more information processes that were previously performed by humans will require proactive and ethical design, as well as ongoing oversight. To enable us to use AI intentionally and wisely, we need to develop new information practices for both information professionals and users of information.

AI and Information Practices

Information practices consist of skills that enable participation in our increasingly complex information environment effectively and ethically. In After Virtue (Notre Dame, 2007), Alasdair MacIntyre argues that practices depend on and can cultivate virtues. A practice, according to MacIntyre, is:

  • a coherent and complex combination of skills;
  • a socially established and cooperative activity;
  • dedicated to securing moral goods internal to it;
  • embedded in a moral narrative or framework about the goods and ends involved; and
  • part of a shared moral tradition, with standards of excellence upheld by other expert practitioners and supporting institutions.

Using the virtues Shannon Vallor identifies in Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford, 2016)—which she argues are the virtues most crucial for flourishing in our current technosocial condition—the relationship between information skills, ethics, and virtues can be aligned as follows:

| Information Practices: Skills | Information Practices: Ethics | Information Virtues |
| --- | --- | --- |
| Reflect on intentions, the nature of information, and information needs | Attention; Equity, Diversity, and Inclusion | Self-Control; Courage; Perspective; Technomoral Wisdom |
| Discover, interpret, critique, manage, and synthesize information | Authenticity; Access; Privacy; Security | Honesty; Humility; Empathy; Flexibility |
| Use information ethically and effectively | Agency; Intellectual Property; Community and Citizenship | Justice; Care; Civility; Magnanimity |

Artificial and Human Agency

As we allocate more of our work and agency to AI, we need to be attentive not only to how automated processes are designed and managed but also to the creation and cultivation of related information practices—which are also formative practices.

Here is a high-level framework showing how new automated processes could be balanced with human (in)formation practices that are ethical and virtuous:

| Information Function | Automated Processes | (In)Formation Practices |
| --- | --- | --- |
| Selection | Creation of new materials to collect; selection of materials for use | Disciplined, courageous, and wise reflection on information needed for selection and use (e.g., research); discerning selection of diverse and inclusive resources |
| Mediation | Classification and description of materials; analysis of collection materials; discovery and research assistance (e.g., conversational AIs); use analysis of resources, services, and spaces | Equitable, safe, and secure access to resources, services, and spaces; honest, humble, and charitable critiques of information sources and networks; just, caring, and civil synthesizing and sharing of information |

All of this requires much more development and specificity, but I think it points to the need for us to focus more on information practices as we continue to automate more information processes.

Apocalyptic Scorecards

IEEE Spectrum recently published an AI “apocalypse” scorecard related to current hype associated with large language models. “The AI Apocalypse: A Scorecard: How Worried Are Top AI Experts about the Threat Posed by Large Language Models Like GPT-4?” summarizes the perspectives of 22 AI “luminaries” on two questions: (1) whether today’s LLMs are a sign that artificial general intelligence (i.e., human-like intelligence) is likely; and (2) whether such an intelligence would “cause civilizational disaster.”

Here is a tally of the results:

  1. AGI? 14 scored no, 8 yes
  2. Civilizational disaster? 12 scored no, 4 yes, 6 maybe

I just published a book that attempts to broaden how we think and speak about the apocalyptic imagination. Due to the popularity of certain apocalyptic works, “apocalypse” often refers to the end of reality as we know it. More broadly (and historically) understood, an apocalypse can uncover our hopes as well as our fears. (I explain this and provide an overview of the book in its introduction, which is subtitled “Imagined and Real AI.”)

After exploring a number of concepts such as attention, agency, augmentation, and ethics in the book, I introduce a rather different type of apocalyptic scorecard in the fifth chapter. In this scorecard, I ask a set of questions that may help us assess real as well as imagined AI:

  1. Reflective attention: What ultimate hopes and goals are identified? Are these sufficiently critical, multicultural, and participatory? Does the AI ecosystem provide the conditions for cultivating constant critical reflection on and refinement of these, individually and collectively?
  2. Structural agency: What advantages of collective action are used to realize shared goals? Are the AI structures and systems designed to support these ends continuously curated to ensure they enhance rather than inhibit human agency?
  3. Knowledge augmentation: Are people growing in knowledge and seeking greater wisdom? Do AI systems support this growth?
  4. Ethical foundation: Do the AI systems advance political, economic, and social justice and peace?
  5. Reformation: What formative practices accompany AI systems to shape individual and collective attention and agency with, against, and beyond these systems? When AI systems do fail, how may they be rejected, reformed, or resisted?

The last chapter uses this scorecard to evaluate realistic and imagined AI futures depicted in AI 2041: Ten Visions for Our Future, by Kai-Fu Lee and Chen Qiufan, and in Max Tegmark’s Life 3.0: Being Human in the Age of Artificial Intelligence.

I was agnostic about AGI when I wrote this book a year ago, but we do seem to be coming closer to something similar to it. I am not concerned about existential risk (i.e., the elimination of our species or civilization). I agree with many others who say there are plenty of real risks that need to be addressed now if we want to improve the quality of our lives and world. A robust apocalyptic imagination—and scorecard—can help us realize better futures.

9.5 Theses about Technology in the Midst of a Pandemic and Protests

The COVID-19 pandemic, the related infodemic, the digital transformation these have accelerated, and the social and especially racial inequities these have highlighted have caused me to rethink, reorder, and refine my 9.5 theses about technology. (I share some thoughts about the most significant changes here.)

Here is the current version of these, including many updated links to relevant posts:

1. We are living through a unique and transformative moment in history. New digital and networked information and communication technologies, increasingly powered by autonomous and intelligent systems, are profoundly and irrevocably changing our lives and world.

2. Our present revelatory or apocalyptic moment uncovers old patterns of injustice and alerts us to how technologies create asymmetries of power that exacerbate old and create new social inequities. With digital technologies, we see these in divides related to access, literacy, and wisdom.

3. Technology has been with us—and defining what it means to be a human—from the beginning. Technology had a significant role in human evolution, enabling us to become human and more human.

4. Technologies are neither inevitable nor neutral. We design and use them, creating and engaging with affordances that both enable and limit our agency. In a period of technological disenchantment, we are awakening to the responsibilities of designers and users.

5. There is cause for hope for our technological future. To move beyond naïve optimism, we need new narratives and new eschatologies that look beyond utopian and dystopian visions and are truly apocalyptic.

6. We are digitally naïve. Individually and collectively, we need to reflect on how we are shaping new information and communication technologies and how they are shaping us, and we need to close the current ethical gap between our intentions and actions. We need to become digitally literate as well as digitally wise agents.

7. Attention management is the greatest challenge facing us individually and culturally. We need to cultivate active and receptive forms of attention and upgrade our formative practices along with our material technologies. Instead of digital withdrawal or rejection, we should pursue our appropriate digital vocation.

8. There is a new digital dimension to reality, blending, enveloping, and transforming our physical lives and world.

9. Our lives are characterized by a digital device paradigm. We interact with surface layers of technology supported by invisible substructures and surrounding environments of surveillance.

9.5. We should be humbled by our finitude and history of corruption. Innovation should be balanced with curation, acknowledging that appropriate limits are ambiguous.

Here is a presentation on these I prepared for one of my classes:

9.5 Theses about Technology

In 2017, during the quincentennial of Martin Luther’s 95 theses and the Protestant Reformation, I drafted 9.5 theses about our present technological moment.

Since then, during a period that has largely seemed to make the case for a digital reformation, I have continued to refine these theses through discussions with colleagues, teaching, and writing.

Here are my 9.5 theses in their present form. I’d be happy for a disputation with anyone interested in one! You’re welcome to comment here or on any of the linked posts below.

1. Technology has been with us—and defining what it means to be a human—from the beginning. Technology had a significant role in human evolution, enabling us to become human and more human.

2. Technologies are neither inevitable nor neutral. We design and use them, creating affordances that enable and limit our agency. During this time of tech backlashes, we are awakening to the responsibilities of both designers and users.

3. We are living through a unique and transformative moment in history. New digital and networked information and communication technologies, powered by autonomous and intelligent systems, are profoundly and irrevocably changing our lives and world.

4. Attention management is the greatest challenge facing us individually and culturally. We need to upgrade our formative technologies along with our material technologies. This includes cultivating active and receptive forms of attention as well as new practical wisdom and formative practices. Digital withdrawal is not an option.

5. There is a new digital dimension to reality, blending with and enhancing our embodied lives and physical world.

6. Our lives are characterized by a digital device paradigm. We interact with surface layers of technology supported by invisible substructures and surrounding environments of surveillance.

7. New technologies create new asymmetries of power and social inequities—digital divides related to access, literacy, and wisdom.

8. We are digitally naïve. Individually and collectively, we need to reflect on how we are shaping our technologies, how they are shaping us, and close the current ethical gap between our intentions and actions. We need to become digitally literate and digitally wise.

9. There is cause for hope for our technological future. To move beyond naïve optimism, we need greater narratives that look beyond utopian and dystopian visions and are truly apocalyptic.

9.5. Innovation should be balanced with curation, and human creation should be constrained. This is something of a half-thesis, because appropriate balance and limits are ambiguous.

M.M.XVIII
Updated October 2019

Most recent (2020) version available at Digital Wisdom.

Scholarly Trajectories

I am, primarily, an academic administrator. I enjoy my work very much, but it means that scholarly activities typically fall into the category of “other duties as not assigned.”

My earliest research focused on the history of books and libraries. But as my responsibilities and the digital age progressed, I became more interested in the future of libraries, the increasingly digital dimension of our lives and world, and ethical issues related to new information and communication technologies.

At present, I find my research interests developing in two different—but not entirely unrelated—directions.

One direction concerns the future of libraries: What is the role of this ancient institution in a digital information environment? Or, as McLuhan put the question back in the 1960s, what is the future of this “old figure in a new ground”?

The other direction concerns technology and ethics. Due to my sense of vocation and institutional context—i.e., leading and teaching at a faith-based university—my approach to this topic is theological. While a theological approach to technology is not unrelated to the future of the library, it does result in a divergent scholarly trajectory.

Earlier this year, with a couple of my colleagues, I started blogging at Patheos about technology and theology. The blog is called Digital Wisdom.

I’ve not grasped the art of blogging yet, but I’ve come to the realization that what I’d like to post would be of potential interest to two distinct audiences. So I’ve set up this blog on my own site for posts related to library futures. I’ll continue to post more theologically oriented posts about information and technology at Digital Wisdom.