
Uniquely Unique | Technology

Part Four in the Uniquely Unique mini-series. We explore our use of technology, from the simplest tools up to recent advances in artificial intelligence, to see what role it has played in our human identity.



Description

Maybe you’ve noticed that we humans are the only creatures making podcasts. That’s at least partially because we’re the only creatures that have developed the tools to make it happen—microphones and compressors, computers and word processors. But technology encompasses a lot more than just machines with microchips. In this episode we explore our use of technology, from the simplest tools up to recent advances in artificial intelligence, to see what role it has played in our development and in our identity as members of the human species.

In this new Language of God mini-series—Uniquely Unique—Jim is joined by our producer Colin for a deep dive into these questions and more. The quest? To try to come to a better understanding of what it means to be human, to bear the image of God. Along the way, you’ll hear from a variety of experts from a wide range of disciplines, drawing on biology, history, anthropology, philosophy, theology and more to try to make sense of our human identity.

The quote from Rosalind Picard was from episode 65: Rosalind Picard | Flourishing in the Age of Computers
The quote from Amy Crouch was from episode 70: Amy & Andy Crouch | Finding the Off Switch

Subscribe to the podcast


Transcript

Part One

Stump: 

Welcome to Language of God. I’m Jim Stump and I’m back here with Colin in our fourth episode in our series, Uniquely Unique. If you’re just tuning in now, you might want to go back and start from the beginning because we’ve been building on a lot of ideas. 

Hoogerwerf: 

For three episodes now we’ve been trying to answer this question ‘what does it mean to be human?’ In episode two, when we spoke to biologists, we were left feeling that we are more similar to other creatures than different. But in the last episode we started to find some places where humans really do seem to stand apart. We ended that episode by talking about culture and about the incredible pace and extent of culture in human societies, where an idea can spread across the globe in minutes. An idea can change the future of our species and of all species on earth.

Stump: 

David Lahti gave us two kinds of culture, both of which we’re involved in constantly and to a great depth. The first kind is copying and imitation of behaviors that can change over time, like with language.

Lahti: 

So the other kind of culture in non-human animals is that a previous way of doing something functional is improved upon in some way, and the improvement gets copied. So that’s technology. 

Hoogerwerf: 

This kind of technology is very rare in animals and limited, as far as we have seen, to food gathering. 

Stump: 

But in humans, technology is obviously not rare. And it’s limited only by the laws of physics and our imaginations. And in this instance, when we look at tool use and technology, that separation we started to see between humans and other creatures starts to feel pretty vast. 

McHargue: 

There’s very few questions that you can get me to be a human exceptionalist on any more. 

Hoogerwerf: 

This is Mike McHargue, more commonly known as Science Mike. 

McHargue: 

You know, when we look across the animal kingdom, there’s so many behaviors we think of as kind of uniquely human. And then you’ll figure out that animals actually maybe beat us at some things we think we’re really good at. In particular, I think of one study in which they trained pigeons to evaluate art. And they found out that pigeons are better at picking out counterfeit art than art critics and professional art dealers and stuff like that. You know, the diversity of life on this planet, and the skills animals have in surviving, is phenomenal. And when we look at tool usage, you know, you look at the fact that eusocial insects, like ants, farm, we would look at certain cephalopods using pretty advanced tool sets and training other members of their species to use tools and things that we think are uniquely human. But when we get to tools, there’s a couple of things I think that define human tool usage as distinct from and kind of superior to other species on this planet. 

Hoogerwerf: 

Is there something about this capacity that is at the very heart of what makes us human? Could this be the answer that we’ve been looking for, that what makes us human is that we are the creature that engages our world through the use of tools and technology? 

Stump: 

Before we jump to that conclusion, it should be said that there’s a long history of people emphatically saying “no” to this, who thought that technology might even be the biggest threat to our being human. 

Hoogerwerf: 

And there are times when it does feel like maybe it has gone too far, aren’t there? We’ve even started to mix our technology with our biology and our technology has led to some pretty terrible things. 

Stump: 

So far in this series, as we’ve tried to figure out what humans are—this ground-up approach—we’ve done so, at least partially, by saying what humans are not. By differentiating ourselves from the animals, we’ve been able to see what we are. But when we bring technology into the mix, we might have to add another category here, differentiating ourselves from technology. As our technology advances and as technology and biology begin to mix even more, when we ask “what does it mean to be human” we will need to explore the line between human and computer. 

Hoogerwerf: 

So that’s a bit of where we’re going. But first, let’s start by looking at how far we’ve come with technology. What is technology exactly? What counts and what doesn’t?

Stump: 

Technology has been a part of the human story for a long time. Some of the earliest stone tools that have been found date back about 2.6 million years.

Hoogerwerf: 

Remember when we were sitting in Rick Potts’ office? 

Stump: 

One of our very earliest podcast interviews.

Hoogerwerf: 

After the interview was over he brought us to his desk and handed us a stone hand axe, which he said was the oldest human artifact in the whole Smithsonian collection.

Stump: 

Yeah, it was a 2-million-year-old Oldowan chopper, and essentially it was just a stone used for bashing things. If it had been lying on the ground somewhere, I never would have noticed it as anything but a stone. Not quite what we think of as a tool.

Hoogerwerf: 

Yeah, pretty simple. And that kind of tool use is not confined to humans. Some animals also use rocks or sticks to get at food that they otherwise couldn’t. So something else had to happen. 

Stump: 

Well by about 200,000 years ago there’s a real acceleration in tool making. We start to see pointed stones tied to sticks which can be thrown, and other tools used to prepare food, and then you get tools for making clothing, and vessels for carrying water and food. As this technology developed, it had reciprocal effects on us. We usually think of technology as something which we create and manipulate. And that’s true, but it’s only half the story. The other half is that who we are today is a direct outcome of the tools and technology we developed in the past.

McHargue: 

I think from an evolutionary biology perspective, that’s a pretty easy to defend idea, that tools and tool-making have shaped our development.

Hoogerwerf: 

When you look at modern humans and compare them to one of our anatomically modern ancestors that didn’t have technology, Mike pointed out some really interesting differences. Our bite is weaker, we are weaker overall, we have less muscle mass, less hair…

McHargue: 

And when you compare those changes, we’ve actually seen those kinds of development in other species as well. And that happens whenever species are domesticated. And so there are some experts who believe that tools and toolmaking have basically caused humans to self-domesticate and for the features we find in domesticated animals to become prevalent in our species, which is not a bad thing. I don’t say that as in some ways dismissing modern people. We have shaped the world in such a way using tools, where the little brute strength we had to begin with became even less necessary. And we’ve had a greater emphasis on particular types of intelligence. We’ve invested more deeply in cognitive intelligence and social intelligence than our immediate predecessors have.

Stump: 

There have been lots of studies recently showing some more focused ways technology changes us. The advent of written language affects human memory in significant ways. I saw this first-hand when my wife and I lived in West Africa for a time and had regular interaction with people who couldn’t read or write, but had an amazing ability to remember the details of intricate spoken instructions. And I’m old enough to recall the days when you could remember a bunch of other people’s phone numbers. Now I only remember one other person’s number. We have outsourced the task of remembering such things to written documents and devices.

Hoogerwerf: 

Yeah, and our constant use of GPS seems to have changed the way we spatially orient ourselves in the world. 

Stump: 

My kids think it is unbelievable that we used to unfold maps in the car to navigate to places, and I think it’s strange that they don’t seem to have much of a mental picture of places and where they are in relation to each other. I’m not sure they could make it home if their phone died.

Hoogerwerf: 

OK boomer. You might need to give us millennials a little more credit than that. 

Stump: 

Technically, Generation X, thank you very much.

Hoogerwerf: 

Ok, moving on. In another example, there is some evidence to show that the increase in Cesarean sections for childbirth, a medical technology which has only been in practical use for a couple hundred years, has allowed women with narrower birth canals to survive childbirth and pass on those genes, so that the average size of the birth canal in women has gotten smaller. Despite your wistfulness for maps, these changes aren’t necessarily bad things, especially medical technology that has limited human suffering and death.

Stump:  

According to David’s definition, technology is anything in which a way of doing something functional is improved upon in some way and then gets copied. Many of these basic tools are so ubiquitous in our lives now that we barely think of them as technology. Instead we think of technology as something with a microchip inside of it, something that plugs into the wall or has a battery.

Stump: 

It seems pretty obvious here that our use of technology, while not limited to humans, takes us back to the redwood vs. rose bush comparison we’ve made before, when you look at what kind of technology exists in non-humans. It might be true that some primates use rocks to open nuts, but we’re sitting here recording our voices into microphones from different cities while typing and reading from the same text document and getting pings and dings from people around the world sending messages with information and requests. This is a very different kind of thing.

Hoogerwerf: 

And even Science Mike, who, I think a little like me, is cautious about heaping special status on humans, agreed that human tool making really is something different.

McHargue: 

So there are other species that will design and customize tools for a purpose on this planet, including, obviously chimpanzees, but other animals as well. But our ability to use a tool, and then based on the experience of using the tool, iterate the way in which the tool was designed in a single generation is very exceptional and unusual. That would apply to early anthropologically modern humans, that would apply to some of our close hominid relatives as well. And then when you get the Homo sapiens specifically, where we really kind of leveled that up was tool sets or tool chains. In other words, making tools that make more advanced tools is kind of the place where Homo sapiens has differentiated themselves from every other species on the planet. We don’t even— Think about the word ‘tool’ in a manufacturing context. Tooling is the molding you make to design an application specific instrument or tool and no other animal does that. And that’s what allows us to industrialize to produce things at incredible scale, and to so intentionally design a physical object to dramatically amplify our effort in a specific task. 

Hoogerwerf: 

Technology is so deeply ingrained into our lives that it’s hard to imagine what it would be like to be a human without technology. And I want to ask, could we be us without technology? We could strip away computer chips and that would be a very different life for most of us on this planet. But technology extends a long way beyond microchips. 

McHargue: 

The use of technology may be the most fundamental aspect of what makes us human. And wrestling with the implications of being a technologically oriented species has been the epic journey of our existence on Earth. You know, without technology there would be no farming, without technology there would be no Bible, without technology there would be no art. So we should always remember, technology is fundamentally a part of who we are. And how we use technology, I think, determines the merit of what it means to be human and what we mean to the rest of life.

Stump: 

It’s pretty hard to imagine a human kind of existence without the tools we have today, but obviously there were humans around before there were microchips. Strip away all our tools, the computers and smart devices, automobiles, and the plow, and even simple tools, and we become what our ancestors would have been 2.6 million years ago: an intelligent animal making our way in the world. There definitely wouldn’t be as many of us, we wouldn’t live as long, and we wouldn’t have spread to the far corners of the globe.

Hoogerwerf: 

This brings us to a place we’ve been before, doesn’t it? To a group of creatures who are anatomically similar to us and yet so different, in this case without any form of tools. And we have asked, is this still human? Can we be human without any form of tools?

Stump: 

And the answer to this question might be one we’re starting to get used to, which goes something like this: Technology is probably part of what makes us human, and it contributes to our identity, but I don’t think it’s the whole answer to what makes us human.

Hoogerwerf: 

Right. Our biology, our culture and language, have all brought us to this current time period in which technology is a major part of our existence, but probably not the single factor that makes us human. Without technology it does get hard to think about being human in the same way, but if technology were to suddenly disappear, well, of course, you and I wouldn’t be communicating any longer because our internet connection would drop, but in that instant we would still both be human, wouldn’t we? 

Stump: 

Seems like it. 

Hoogerwerf: 

So maybe it’s better to say that technology is an outcome of our humanness rather than a necessity for it? 

Stump: 

Maybe. What do you mean?

Hoogerwerf: 

Well I guess the tension I arrive at is between a real need for tools, really being convinced that tools are important for us to be human but also knowing that the times I have felt most human are probably times when I was outdoors, in wild places, far from cell service, far from screens. 

Stump: 

It sounds like you’re hinting at the fact that maybe some technology is important, even necessary for our humanity but other technology might be detrimental to it. I think this is a really good question which gets at a change at some point in the development of technologies. Is there something different about how digital technology affects us? Or is this just our generation’s technology that we have to cope with and learn to live with the way earlier generations had to change and adapt to what it was like living with automobiles or with books or even with walls?

McHargue: 

I think the “or” there might be an “and.” Like so number one, yes. Previous generations of humans have accommodated dramatic change. I think you can make an argument that someone born in the late 1800s has seen, you know, more change in their lifetime, than someone born in the late 1900s would have, fundamental change, a reordering of the world. And that reordering mainly came with industrialization, not digitization. So previous generations have dealt with fundamental changes to society based on technology. And there is something unique about our tools that are digital, our digital technology. And that is the modifiability and the extensibility of digital technology. When you build a clock, using gears, it is always going to be a clock, it will tell the time forever. When you build a hammer or chisel, those things are going to be what they are, or an axe or even, you know, something more complex. A locomotive is a very complex machine, but it will always kind of pull loads along a track. And that’s what it’s going to do. The innovative nature of digital technology is twofold. Number one, a digital device can become whatever the software running on it makes it become. So it can be a television, it can be a gaming device, it can help you solve problems that involve computation and build really, really elaborate models. And then we’ve extended our digital technology to the point that it facilitates very, very rapid communication and very high fidelity. And so I think the fact that digital technology can change itself so quickly using code and allow people across the globe to communicate more quickly than has ever been possible in human history does represent a fundamentally different impact of this technology on human society than anything that came before.

Stump: 

When we’re talking about the difference between technology today and technology, say, 150 years ago, one of the biggest changes comes in the form of what is ambiguously referred to as artificial intelligence. 

Hoogerwerf: 

Yeah. AI has become a big-time buzzword, so much so that I’m not sure I even know what it means anymore. I was only 14 when the movie A.I. came out, about a robotic kid who wants to become real. I can’t say that I remember much about the movie except that it was very long. But that’s the kind of AI that I still think about…the kind that has continued to be the subject of movies and TV shows…very human looking robots that make their own decisions about the world—and usually seem to turn on the humans.

Stump: 

I remember that movie only marginally better than you, and particularly the scene where humans got a kick out of brutally destroying the machines in a gladiator kind of setting. And unfortunately that has quite the ring of truth to it. In a previous podcast episode Rosalind Picard from MIT was telling us that in experiments with people interacting with AI, they became increasingly cruel to the machines the more human-like their responses were.

Picard:

For example, in a robotic baby doll, if people strung it up by its toes and it screamed, more people would actually string it up by its toes, people actually enjoy torturing this little baby robot doll, little baby girl. It’s sick, you know, the things this brings out in human nature sometimes.

Hoogerwerf: 

Yeah, the movies might just be science fiction but there is obviously some truth to the idea that interactions between humans and computers are going to be complicated.

Stump: 

In another aspect of that, about a year ago I read the novel by Ian McEwan called Machines Like Me. It’s a revisionist history in which artificial intelligence was developed during the 1980s and Alan Turing was still alive to be a character in the story. There are a few of these self-aware robots that look just like humans, and some of the predictable responses ensue. But more penetrating was McEwan’s consideration of how the machines themselves might respond to becoming conscious. And, spoiler alert, they have an existential crisis and aren’t sure they even want to be alive.

Hoogerwerf: 

Yikes.

Stump: 

I think all those kinds of stories are fascinating, but it doesn’t appear that that kind of thinking, conscious robot is going to materialize any time soon. The dates predicted for the robot uprising in Terminator and Blade Runner haven’t quite panned out.

Hoogerwerf: 

So what do we really mean, today, when we say artificial intelligence?

McHargue: 

An artificial intelligence is a machine consciousness that solves problems. When I talk about consciousness, I would kind of follow the physicist, Dr. Kaku’s model for consciousness, where I would understand that at a very basic level, a consciousness is a feedback loop that interacts with its environment. By that definition, a thermostat would be conscious at a very basic level.

Stump: 

But a thermostat is obviously not what we’re trying to get at here. It helps to break this definition down into some more specific categories. 

McHargue: 

So artificial intelligence researchers and specialists kind of delineate two different notions of what AI is. You have artificial specialized intelligence, which like Google would qualify, you know, Google search engine. Very, very, very, very brilliant at a specific task. Something like AlphaGo zero, which is designed to, you know, learn to play games with a fixed rule set and visibility into the entire game board. That would be a specific intelligence. And when you look at ASIs, we can already wildly exceed human intelligence with ASIs, artificial specialized intelligence. But when people think of AI, they think of something else, which is an AGI, an artificial general intelligence: something that can solve problems in multiple ways, something that might be self-aware, something that brings the intense insights and modeling abilities of AI to any problem set in front of it. And of course, in my opinion, nothing like that exists today. And nothing’s really on the drawing board either. 

Stump: 

That point is important to remember. We are nowhere close to creating the kind of artificial intelligence that we worry might start to rival human intelligence. Artificial intelligence, as we know it today, does beat us at many things, but it is still a very different kind of intelligence.

Hoogerwerf: 

I think it’s also tempting to think about human brains as essentially the ideal model for what we’re trying to do with AI. We’ve already talked about the brain a bit back in our biology episode, but there we were still only comparing our brains to those of other animals. But you can’t really say that a brain is just a more advanced digital processor, can you? Could we one day build a human brain out of other materials?

McHargue: 

Our brains do not do what computers do at all. Our brains are absolutely computing devices, don’t get me wrong, but they’re not digital computing devices. You can’t cleanly break out the difference between like storage and memory and processing in a neural network. Neural networks are self organizing. They encounter stimuli, and then sort of self organize in response to that stimuli. And even though now we can build digital structures that emulate the functioning of neurons, the level of complexity in neural networks that exists in the biological space is wildly beyond what we can emulate in digital technology today. 

[musical interlude]

BioLogos: 

Hey Language of God listeners. If you enjoy the conversations you hear on the podcast, we just wanted to let you know about our website, biologos.org, which has articles, videos, personal stories, and curated resources for pastors, students, and educators. And we’ve recently launched a new animated video series called Insights. These short videos tell stories and explore many of the questions at the heart of the faith and science conversation. You can find them at biologos.org/insights or there’s a link in the shownotes. All right, back to the show!

Part Two

Stump: 

OK, maybe we don’t need to belabor the point of wondering whether artificial intelligence in the form of conscious robots could really be human. But there is another related question, which is about the integration of computer technology into us biological humans. What does that do to our humanity?

Hoogerwerf: 

Way back at the beginning of this series we said there might be two different ways of approaching our main question, what does it mean to be human? Using David Lahti’s language we’ve been talking about a ground-up approach—meaning, looking at the actual parts.

Lahti: 

From the earth-up direction, what we are as an organism, how we got here, what makes us interesting and different from other things that have come up from the dirt. 

Stump: 

Yes. And the second is a heavens-down approach.

Hoogerwerf: 

We haven’t done as much of that. 

Stump: 

No. And I think there’s an interesting conversation about the balance of those two approaches to any question we ask and how we might tend toward one of the approaches more than the other in our society. But I do think we have come to a point in this series where we need to start exploring from the heavens down. It’s pretty clear that our technology is something that makes us unique compared to other creatures on this planet. I think the case to say that it is uniquely unique is a pretty good one too. But when we start asking these questions about comparing ourselves to computers and about integrating technology with our own bodies, we are asking questions that need to have some understanding of the purpose of a human. And for that we will need help from a theologian. 

Bretherton: 

Great to be with you. I’m Luke Bretherton. And I’m the Robert E. Cushman Distinguished Professor of Moral and Political Theology at Duke University.

Hoogerwerf: 

Before we hear from Luke, let’s just quickly go back to where we left off with Science Mike.

Stump: 

So he was making the point that our brains are really not like computers…

McHargue: 

Our brains do not do what computers do at all.

Hoogerwerf: 

And Luke agrees with this, but he comes at it from a different angle. 

Bretherton: 

I think there’s an assumption often lying behind the question that somehow AI, as a way in which computers, machines, technology, mimic or echo in some way human consciousness, thereby becomes a rival to the human. And underlying that is an assumption that somehow human consciousness is what makes us human. And that the focus of human consciousness is our ability to kind of reason and will, in intentional ways. I think we need to unpack that quite a lot. And there’s a broader view and I think it’s a deeply theological view. And we have to root it in a robust theological anthropology that says we’re psychosomatic whole, or we’re bio-spiritual creatures, who actually, our bodies, our sweat glands, our taste buds, are as much a part of our consciousness as our nervous system. Our bowels—and in Scripture, you know, the bowels play a great part in how we come to know the world well. In modern parlance, we’ll often talk about it in terms of knowing something in your heart—but this symbolic language points to how the body is involved in knowing just as much as the mind. 

Stump: 

“Psychosomatic whole” is a fancy term for saying that our minds and bodies are inseparable. There are some in the Christian tradition who have this view that the real person is some immaterial thing, and there’s a real theological debate about life after death and just what it is about us that survives. But it is the pagan Greek philosopher Plato who said the body is a prison for the soul and that our goal should be to break free of that bodily prison. Descartes reinforced this idea by suggesting that our bodies are just machines, that it is the mind that is the true me. But those ideas are pretty hard to get out of Scripture, which values the material created world and looks forward to a bodily resurrection — not some disembodied existence floating on a cloud.

Hoogerwerf: 

And even if there’s not some separate, immaterial part of us, it’s also a problem to think of our brains simply as mechanical devices.

Bretherton: 

In a world and culture, which increasingly valorizes the machine, valorizes calculation as the premier or primary form of way of knowing and knowledge, and confuses knowledge with calculation, then I think there’s this problem where humans begin to mimic and echo and imagine and narrate themselves—imagination and narration being two deeply human things—but we, we begin to imagine and narrate ourselves in machine like terms. And we’ve seen this very much at work in, I think, a huge swath of technological developments in technology and science, which tend to imagine and narrate the human brain and its operation as like a computer, or as like a machine and we begin then to think of all things around us, including the human body, as mechanistic, as merely mechanistic. And I think that’s very troubling and worrying, extremely prevalent, and I think, increasingly dangerous, because it leads to the ways we fundamentally dehumanize ourselves and begin to treat each other like machines.

Hoogerwerf: 

But it is still tempting to see the body as a machine. Science has helped us to recognize many of the parts and how they work and it makes it even easier for us to think of ourselves as a set of cogs and gears that turn and result in what we are. We’ve been doing some form of this kind of thinking throughout the series, looking to our biology in hopes of finding some mechanism that we can say is responsible for what we are. But there is a way that humans navigate the world which can’t be done merely by the kinds of calculations that computers do.  

Bretherton: 

People often talk about critical thinking and, you know, rationality and these kinds of stuff and can get a bit sneery when I say what about, you know, wisdom, and truth as categories to think about and think with. I think categories like wisdom and truth, actually aren’t reducible to calculation, you’re not going to want to mimic a machine or think about how humans are and relate together and navigate the world if you think it’s important to humans to develop wisdom. Because machines aren’t wise, they can’t develop wisdom, you’re not going to—wisdom isn’t reducible to a calculation. Truth isn’t that kind of thing. 

Stump: 

But what happens if we begin to integrate more and more computer technology into our bodies? Is there a point at which a human body could become so integrated with technology that it is more machine than anything else?

Hoogerwerf: 

So this is the question of transhumanism?

Stump: 

Right.

Hoogerwerf: 

Can you give us a definition of transhumanism? 

Stump: 

That’s going to be tricky to pin down, as different groups emphasize different aspects of this, but generally speaking, transhumanists are looking to enhance or even transcend the human condition through technologies that can extend and improve our physical and cognitive abilities.

Hoogerwerf: 

Enhancement seems to be the key there, but we’ve already done that to some degree, haven’t we? We’ve radically extended lifespans from just a few centuries ago, right?

Stump: 

There’s an important distinction in these conversations between remedying our natural abilities and enhancing them. And while the average lifespan has drastically increased through medicines and diet and less brutality toward each other, it’s not so clear that that has enhanced our natural lifespan — the oldest people back then still made it to about the same ages that the oldest people do today.

Hoogerwerf: 

But there is a fair amount of talk in the science news these days about reversing the aging process in cells. That would be an enhancement.

Stump: 

Yes it would. And it’s fair to ask what the human condition would be if people were able to live without aging, so that no one would die of natural causes.

Hoogerwerf: 

And that distinction between remedy and enhancement seems to get pretty blurry when we talk about genetically engineering our babies so they don’t have certain diseases, or so that they’re a little bit smarter or taller or better looking.

Stump: 

These are very thorny ethical problems and they are right around the corner. We’re going to have to know how to make these decisions long before we’ll need to deal with the fully conscious robots from the movies.

Hoogerwerf: 

And there’s also the question of more fully integrating technology into us. I’ve had my Apple Watch for a year now. It has helped me to track my exercise in a way that I never could’ve before and I think it has really benefited my health. But sometimes I imagine a similar technology that could send roots into my arm and tell me so much more…exactly what kinds of nutrients I’ve taken in, what I need more of, less of, what infections are brewing before I’m aware of them. And I can imagine those technological roots eventually spreading through my body, like another set of arteries, and all for the betterment of my health and happiness. But then I stop and wonder, is that really good for me?

Stump: 

Here’s where we need to ask again about the purpose of the human from a theological perspective. I don’t think it is simply to keep this order of things going. I’m all for improving things while we’re here, but I’m afraid I don’t have faith in humanity that we can transform ourselves into what we were intended to be. We’ve developed lots of great technologies that have relieved suffering and perhaps brought happiness, but there’s almost always a shadow side to these too, that they can be used for evil purposes as easily as good ones.

Hoogerwerf: 

Yeah, so any one form of technology could potentially be used for good or for evil. It’s not necessarily the technology itself. We’ve done a few podcast episodes about technology in the past—one with Amy and Andy Crouch and another with Rosalind Picard—and in both of those conversations we learned that it’s not quite right to think of technology as neutral either. It does something to us. 

Crouch:

The problem is, anything that is designed by people or by God will not be neutral. Because the very act of creating something and designing something, in that act, you have to come up with a whole bunch of answers to questions about like the purpose of human behavior, the purpose of your own creation. And really of what it means to be a person. 

Stump: 

So when we consider our relationship with technology, it’s not quite so simple as thinking whether we’re going to employ it for good purposes or bad purposes. If we’re putting technology into our bodies, we’re not just asking whether we use it to keep us healthier, or to control other people. We should be asking whether subjecting ourselves to that kind of electronic manipulation itself might do something to us.

Hoogerwerf:  

We mentioned earlier the outsourcing of some of our abilities, for things like remembering phone numbers. It’s one thing to outsource my knowledge of how far I ran to my Apple Watch, but what happens if we attempt to outsource higher cognitive functions like ethical decision making? People right now are working on self-driving cars and how the technology will decide what to do when an oncoming car swerves into its lane. Will it turn into the tree on the side of the road, causing more harm to its passengers, or onto the sidewalk, where the passengers may be safe but there may be pedestrians?

Stump: 

And the point Luke was making was that we can’t reduce these kinds of decisions to an algorithm of some kind. We can’t outsource wisdom to computer code and think that we’ll still be human.

Hoogerwerf: 

So what should the Christian role be in relation to developing technologies? Do we fully embrace them, or do we reject them and try to recapture some purer humanity? Luke thinks neither of these is the proper Christian response.

Bretherton: 

The problem is, I think, we’re in a moment when we tend to think about change either in a kind of forward linear terms, and you know, what is good is what’s new, what’s in the future, we need to progress away from the past. Or we flip that, and again there’s strong ancient precedent, it’s re-formation, getting back to basics, re-naissance, a rebirth, and a kind of Golden Age-ism. And we need to go back to really be true to ourselves. And in theological terms, I think neither of these is right; we think about it in terms of baptism. In baptism, we both recover a self that’s been lost because of sin and idolatry, we recover who we are created to be in Christ, and we’re born again. There’s a fundamental rupture, as we enter into a self that is given, eschatologically given, from the kingdom coming to be. And so as Christians, we should have a sense of change as always recovery and revolution, always reconnecting and rupture.

Stump: 

I like that we’re connecting our understanding of technology more explicitly to Christian theology here. Mike tries to do that too.

McHargue: 

I mean do you care if I get, like, pretty, deeply religious in my language here?

Stump: 

You’re talking to BioLogos here, so that’s fine. 

McHargue: 

I think technology represents the very image of God in us. When we kind of look at this amazing creation story in Genesis of God saying, “Let us create them in our own image.” Well, what in the world does that mean? Well, in the biblical portrayal of God, you have this cosmic entity, like this incredible, all-powerful creator, who creates with intention. Who makes things and then says “you know, that was good. Yeah, that was good to make that. That was good to design something with intention.” That is the way humans use technology. And I think, at the heart of the biblical narrative, is this wrestling with what we do with that agency that we see play out in our use of technology. Do we make things that are good? Or do we not? And so that’s why I’m so drawn to wrestling with humans and technology and what we do with it because as a Christian, I see that as one of the fundamental notes on which our faith story rests. We have the image of God, we have the capacity to create with intention. Are we going to create peace? Or are we going to create something else?

Hoogerwerf: 

Well that leads us very well into our next episode, when we look back at history and see the times when our attempt to identify what makes us uniquely unique has not led to peace but has caused great destruction and suffering. 

Stump: 

David Lahti said something that is really insightful about human nature, and unfortunately not quite so cheery and optimistic.

Lahti: 

All organisms, besides humans, naturally follow the dictates of their Creator and live according to their ultimate purpose. We are the only ones who are able to deviate from that.

Stump: 

Hm, on that note…

Hoogerwerf: 

See you next week.

Credits

BioLogos

Language of God is produced by BioLogos. It has been funded in part by the John Templeton Foundation and more than 300 individuals who donated to our crowdfunding campaign. Language of God is produced and mixed by Colin Hoogerwerf. That’s me. Our theme song is by Breakmaster Cylinder. We are produced out of the remote workspaces and homes of BioLogos staff in Grand Rapids, Michigan.

If you have questions or want to join in a conversation about this episode find a link in the show notes for the BioLogos forum. Find more episodes of Language of God on your favorite podcast app or at our website, biologos.org, where you will also find tons of great articles and resources on faith and science. Thanks for listening. 


Featured guests

David Lahti

David C. Lahti is an Associate Professor of Biology at Queens College, City University of New York, where he runs a Behavior & Evolution laboratory focusing mainly on learned behavior in birds and humans. Prof. Lahti received a BS in biology and history from Gordon College. He received a PhD in moral philosophy and the philosophy of biology at the Whitefield Institute, Oxford, for a study of the contributions science can and cannot make to an understanding of the foundations of morality. He then received a PhD in ecology and evolutionary biology at the University of Michigan for a study of rapid evolution in an introduced bird. He has been a Darwin Fellow at the University of Massachusetts and a Kirschstein NRSA Research Fellow with the National Institutes of Health, where he studied the development and evolution of bird song. His current research projects include rapid trait evolution following species introduction, cultural evolution in humans and animals, and the evolution of our capacity for morality and religion.

Mike McHargue

Mike McHargue is a science expert and film & TV consultant for clients including Marvel Studios. His recent notable projects include WandaVision, Loki, and Dr. Strange in the Multiverse of Madness. He is also a best-selling author and podcaster loved by millions and a co-founder of Quantum Spin Studios.

Luke Bretherton

Luke Bretherton is Robert E. Cushman Distinguished Professor of Moral and Political Theology and senior fellow of the Kenan Institute for Ethics at Duke University. Before joining the Duke faculty in 2012, he was reader in Theology & Politics and convener of the Faith & Public Policy Forum at King’s College London. His latest book is Christ and the Common Life: Political Theology and the Case for Democracy (Eerdmans, 2019).

Rosalind Picard

Rosalind Picard is founder and director of the Affective Computing Research Group at the MIT Media Lab and founding faculty chair of MIT's Mind+Hand+Heart Initiative. She has co-founded Affectiva, Inc. providing emotion AI technology, and Empatica, Inc. creating sensors and analytics to improve health.

Amy Crouch

Amy Crouch is a student at Cornell University studying linguistics, English, and anything else she can fit into her schedule.


Join the conversation on the BioLogos forum

At BioLogos, “gracious dialogue” means demonstrating the grace of Christ as we dialogue together about the tough issues of science and faith.
