Image: Agricultural robot and autonomous vehicle working on a smart farm, illustrating future 5G technology in smart agriculture. istockphoto.com/Kinwun
What Does AI Mean for the Church and Society?
It has been said that AI is the natural next step for humanity’s cosmic evolution. How should the church respond to society's growing dependence on technology?
Recently, Danish artist Jeppe Lange released an unforgettable film. Only 13 minutes long, the project has gained attention for containing 10,000 images shown in rapid succession, linked together by artificial intelligence (AI). It begins and ends with images of the universe, but in between lies a dizzying array of shapes, colors, people and objects that one tries to connect to one another at the speed of the film. Soon, however, one falls into an almost meditative state, appreciating the beauty of the world as captured by AI.
In an interview last spring following the film’s debut at the Copenhagen film festival, Lange commented on his aim of making a sci-fi documentary of how the world had turned into one single organism that uses data from security cameras, microphones and other devices to connect in an “almost divine consciousness.” “In my visual research I stumbled upon this method that in a very simple manner illustrates the associations of a contemporary AI – thereby also offers a glimpse into this awakening consciousness,” Lange says in the interview. This peek into AI-vision is set to change the way many in the arts view AI, which is already used by artists with mixed results and even some prizes.
To be clear, some have even theorized that AI—specifically Google’s LaMDA—had recently attained consciousness, though those claims were quickly debunked. The declaration filled some with excitement and others with dread about what it could mean for humanity’s future. But what does AI mean for our perception of ourselves, our humanity and our faith life? My belief is that all scholarship on faith and technology to date will need to be reimagined to include the implications of our reliance on AI in culture, business and daily life. That is how fundamental a change this technology is to what our future will look and feel like in the coming decades.
Understanding AI and its presence in our lives
If you think you have successfully avoided having anything to do with these issues, think again. I guarantee that you rely on AI multiple times each day, whether you are doing a Google search, asking a question of Siri or another virtual assistant on your phone, or responding to a text from your bank saying it has detected potentially fraudulent activity.
AI has been here for quite some time, and thanks to the ease it has brought to our lives, it shows no sign of going away or being replaced by anything other than improved AI. Things really picked up around 2017, when cloud computing power and storage reached a key inflection point: data could inform AI in new ways, and AI in turn could help us process all that new data. In fact, in my life as a financial journalist I have interviewed several hedge fund firms that have made phenomenal use of AI and machine learning technology.
When it comes to understanding AI, it is crucial to realize that there are opposing viewpoints among the researchers and practitioners themselves. One mindset is what physicist Max Tegmark has labeled “digital utopianism” in his book “Life 3.0: Being Human in the Age of Artificial Intelligence.” Tegmark uses Google’s Larry Page as an example, who has written, “Digital life is the natural and desirable next step in the cosmic evolution—let our digital minds be free and the outcome is certain to be good.”
At the other end of the spectrum is what Tegmark has dubbed the “beneficial AI movement.” These researchers say that Artificial General Intelligence (AGI) is possible in this century and that AI safety research is a must in order to ensure good outcomes. Unsurprisingly, this is where Tegmark falls: he calls for key concerns to be voiced and urges researchers not to fear being perceived as alarmist technophobes. Narrow AI, by the way, is the stuff we have gotten used to thinking of as just new technology: self-driving cars and IBM’s Watson. AGI, by contrast, is an artificial intelligence that mimics human cognition so well we cannot tell whether we are dealing with another human or a machine. Admit it—you’ve waited for a snarky comeback from Siri for years now!
For Tegmark, the key to determining whether Siri is merely wired to be snarky or is deliberately giving you wrong directions so that you drive off a cliff is ensuring that the goals of the AI align with our goals as humans. But the fear of machines turning evil isn’t the worry for experts; it is the machine’s level of competence that is the preeminent concern of those in the beneficial AI movement.
In light of concerns over AI being used in financial systems, weapons and biotechnology research, some are especially advocating for AI safety research. The problem is that such research may lag behind, failing to keep ahead of the advancement that brings us to the singularity: the point in time when technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.
The Church’s Response to AI
It seems we have been here before, struggling to form the right questions before we know the outcome of implementing our leading discoveries. One cannot help but think of the debate over genetic engineering and Dolly the sheep. Researchers today, dealing with far more complex gene-editing techniques than were used in the 1990s, are required by their institutions to have boards review their research, and those boards in turn may employ bioethicists.
Do AI engineers need tech-ethicists as well to make sure they act in the interest of humanity? And more importantly, how should the church respond to society’s growing dependence on artificial intelligence? Does AI have the potential to divide us further, as the algorithms of social media have? Or will it lead to life-saving innovations that are equitably available to all and prevent further fracturing of society? The response often starts at our seminaries. In July, the Center for Theology and the Natural Sciences (CTNS) at the Graduate Theological Union in Berkeley, Calif., kicked off a major grant from the John Templeton Foundation to study AI and its ethical implications.
“Virtuous AI? Artificial Intelligence, Cultural Evolution, and Virtue” is the title of a project that will seek to answer a number of questions, such as: How and to what extent will AI influence the evolution of human culture and virtue? Can AI assist humans in the acquisition of virtue? Is AI itself capable of virtue? At CTNS, Ted Peters has already worked in this area with his 2019 book, “AI and IA: Utopia or Extinction?”, which asks the basic question of whether AI advancement furthers human flourishing or extinguishes it. Another Lutheran theologian, Phil Hefner, examines the challenges of AI in his recently released book, “Human Becoming in an Age of Science, Technology, and Faith.”
In the final chapter, shared exclusively this past May in Covalence, Hefner explores extensively our increasing reliance on robots. He writes: “Unless we recognize that we are creating human life, even in forms we normally consider to be insentient, we will think of these creations only as tools. In doing so we hide from the real challenge and its implications. There comes a point where a tool, by becoming more complex, becomes a functioning form of life. Hiding behind the idea of ‘tool’ enables us to avoid responsibility and be ignorant of the gravity of our creating.”
The church cannot remain silent on this key societal change. It will mean leaning on those AI researchers whose concerns need to be heard, and it is well worth leaders’ time to learn what is around the corner rather than respond only when it is on our doorstep, as was the case when Dolly let out her first ‘baa.’ Theologians and AI scientists need to be in dialogue, learning from one another how best to make use of this technology and guide its impact on generations to come.
About the author
Susan Barreto is an author with a long-time interest in religion and science. She currently edits Covalence, the Lutheran Alliance for Faith, Science and Technology’s online magazine. She has written articles for The Lutheran, the Zygon Center for Religion and Science newsletter and Covalence. In addition to her work on the steering committee, Susan is a board member of the Center for Advanced Study of Religion and Science, the supporting organization for the Zygon Center and the Zygon Journal. After recently moving from Chicago to Urbana, she also works with the Rheticus group, which hosts ongoing discussions on science and faith at the Lutheran Campus Center at the University of Illinois at Urbana-Champaign. She co-wrote Our Bodies Are Selves with Dr. Philip Hefner and Dr. Ann Pederson. Susan’s interest in science and God’s creation can be traced back to growing up on a farm in Northwest Illinois.