
Rosalind Picard | Flourishing in the Age of Computers

Rosalind Picard's work developing computers with emotional intelligence has highlighted the amazing capacity of humans.



Description

Sometimes it seems that the gap between humans and computers is growing increasingly small. But as scientists have worked to develop intelligent computers, they have usually ignored emotions. Rosalind Picard has spent a career developing technology that can read and replicate human emotion, and has had a hand in technology that has led to a great deal of human flourishing and even saved lives. But her work has also highlighted the huge gap that still exists between humans and computers, how little we actually know about ourselves, and what amazing creatures we are.

  • Originally aired on December 10, 2020
  • With 
    Jim Stump


Transcript

Picard:

As we build computers in our image, and we see what they can and can’t do, we see more and more that we don’t fully understand what we are and who we are, and that there may be, you know, a lot more. And there really is a rational basis for saying that just the natural, material stuff that we can build and shape into something like us is not enough. There is something more. And I think that’s very exciting about this technology: it causes us to think very humbly about not just the limits of our knowledge, but the limits of our tools in science, and the limits of our philosophy.

My name is Rosalind Picard, and I’m a professor at the MIT Media Lab, and I’m also a co-founder of the company Empatica, with an E, and there I serve as chief scientist and chairman of the board.

Stump:

Welcome to Language of God. I’m Jim Stump. 

This year has been one where many of us have become deeply reliant on technology, much of which uses some sort of artificial intelligence. This very podcast has remained possible through the use of remote recording software and robotic online transcription services. Our worship services now rely on Zoom and Facebook. And our health and livelihoods have become reliant on the technology involved in developing a vaccine for COVID.

Rosalind Picard is a computer scientist at MIT and has been working on developing the kind of technology that she hopes will contribute to human flourishing on many levels. But one of the requirements, in her mind, is that computers are able to read and replicate human emotion. It might sound scary to some people but, as you’ll hear, the fears are based mostly on science fiction accounts.   

Rosalind is also a Christian and her faith has inspired much of her work, which looks to relieve human suffering. We talk about that but then also about when technology works against human flourishing.  

Let’s get to the conversation. 

Interview Part One

Stump:

Rosalind, welcome to the podcast. Thanks so much for joining us today.

Picard:

Thanks, Jim. It’s a pleasure to be here with you.

Stump:

Good. Well, you’re a professor and researcher at MIT, in computers. And I like to start with a bit of background. So were you always interested in computers, or where did that come from?

Picard:

No, I was not. As a child, my mom was a teacher and an artist. And my father was a Navy pilot. And I was interested in a lot of different things. In fact, I really didn’t even like science as a kid in school.

Stump: 

So how did this develop, then?

Picard:

Yeah, it was a long process. I knew I was good at math, and I was just a good student, and really, I think in high school I got, kind of, pointed toward the possibility of engineering because I was curious about how things worked. And in college at Georgia Tech, I became especially interested in technology: how computers work, could I actually understand everything going on inside them, and build ones maybe that imitated the human brain?

Stump:

So the capabilities of computers back then were a little bit different. What were those first machines you were working with?

Picard:

Oh, the very first one I saw belonged to the dad of a friend of mine, a friend I used to go do some art projects with. Her dad had giant computers that were used in financial computation. And they would spit out sheets with numbers on them, and games and all, and honestly, I didn’t think it was that interesting. I think he thought it was really interesting, but I would rather go paint little ceramic bunny rabbits.

Stump:

And given how far computing power and technology have come today, when you were first working on those and thinking about modeling the human brain and all of that, did you think that by the time computers could do what they do now, we’d have full artificial intelligence or conscious machines or anything? What’s been the trajectory of that?

Picard:

Well, we are far from conscious machines and from—  You know, what AI is today, or artificial intelligence, is so overhyped. It’s almost an embarrassment. Even many scientists are speaking in ways we should know better than to speak. You know, it’s not like in the movies where, you know, it’s essentially anything a human wants can be brought forth by a machine. That happens in art. That is not what’s happening in the real world. However, there is powerful stuff happening with machines, and some of it is awesome. And, you know we can talk more about that if you’d like.

Stump:

We will, in just a little bit. That’s, I think, going to be the main topic of our conversation here today. But as we are a podcast about science and faith, let me go back again and ask about that second term a little bit. Were you raised in a religious community or environment?

Picard:

I was raised in a family that really never showed anything religious during my childhood. We did not go to church even on Christmas and Easter, as they say, the C and E Christians, we weren’t even that. We did have a Christmas tree, but there was no talk about God or Jesus. I was given a King James Version Bible, the kind with the gold leaf, as a child, and it looked like something holy that I really shouldn’t open or touch. So, you know, I never read it. It wasn’t until I was a young teen and a family I babysat for started to challenge me to think about what I believed, that I started to actually change my mind about faith. At that point, though, I really had declared myself to be an atheist. All the religious stuff I’d seen around me I considered kind of anti-intellectual. And, you know, if you believe that stuff, you couldn’t really be a rational person, I thought.

Stump: 

So how did that change things for you then? What were the things, I guess, that compelled you to take faith more seriously?

Picard:

I think being challenged to read the Bible probably was the biggest effect on me. These neighbors kept inviting me to church. And I kept feigning stomach aches and stuff to get out of it. It only worked like a couple of weeks, especially since the neighbor was a doctor. So that idea backfired pretty quickly. But what they finally said was, “hey, it doesn’t matter so much whether or not you go to church, it matters what you believe. Have you read the Bible?” And I realized it was the best-selling book of all time. And I had not read it. And I thought of myself as, you know, this highly intellectual grade school student, well, a middle school student at that point, so I thought, okay, I should have read it. And they challenged me to start by reading Proverbs, which was a great place to set me down, because the first lines of Proverbs are just filled with—all of Proverbs is filled with—wisdom. And I could not maintain my intellectual arrogance while reading Proverbs. I realized I had a lot to learn. And it was very humbling. And from there I decided to read the whole Bible, and that changed my life.

Stump:

So after you got out of the book of Proverbs, did you come across passages where you said, “Now come on. This can’t really fit with the way we understand the world to be now?”

Picard:

Well, I came across, certainly, a lot of things I didn’t understand and a lot of challenges to things I did understand. However, I have never come across anything that challenges my scientific understanding.

Stump:  

So let me push into that just a little bit further here, because I’m really interested in points of connection between your faith and your scientific work. And let me preface that question, I guess, by noting that in some areas of science there’s this kind of superficial conflict, maybe between science and faith, say, human origins, where it might be claimed that what science says stands in conflict with what the Bible says. But then at BioLogos, we think if you push beyond that superficial level, there’s actually this deep harmony between science and faith. In your field—computer science, computer technology—it might be thought that at the superficial level, there just isn’t any contact at all between science and faith, like in math, you’re going to get the same answers whether you’re a Christian or an atheist or a Buddhist, right? But is there a deeper level to your work where it does matter that you’re a Christian, where your work itself is actually influenced by your faith?

Picard:

Yeah, there is. And it’s, as you’re suggesting, right, it’s not in the math, per se, or in the computation or the logic of how the computer works, but it is in the guiding forces that we employ when we decide what to build, and what powers to give it, and who to build it for, in particular. It’s interesting, in the world recently, there’s so much more emphasis on justice, which is fabulous to see. And when we look deeply at what’s been done in, you know, in many areas—in medicine, and technology—across a lot of the great innovations of the last century, it’s not justly distributed. And unless you intentionally seek that up front, you know, I think the evidence shows it doesn’t happen on its own. And as a Christian, I feel challenged to look for opportunities to do more to rectify the injustice that just happens naturally if we don’t seek to, kind of, level things out more equitably.

Stump:

Good. Well, let’s dig in a little bit more then to some of your work and what it is that you’re doing and how you are working toward justice in that way. So, beginning with Affective Computing, this is one of your main areas of work. Not effective with an E but affective with an A. So what’s your elevator pitch for what affective computing is and why it’s important?

Picard:

Well, hopefully the affective is nicely confused with effective. The original idea was that most computer interactions were frustrating, they were annoying; they were designed largely by engineers to, you know, achieve some well-defined objective task that usually had essentially no consideration for human feelings in the loop. And consequently, you know, there’s just a lot of computer rage, right? Things that were just so annoying. Take, for example, this old little software agent, this paperclip that Microsoft deployed that was designed to make things friendly, thinking people would like a little face and a voice and a little smiling thing. And–

Stump:

What was that called?

Picard:

It was called Clippy.

Stump:

Clippy, yeah.

Picard: 

And Clippy actually had brilliant AI behind it. It had some of the most sophisticated machine learning, it could see that you were probably writing a letter, and it would, you know, dance out on your screen and say, “oh, it looks like you’re writing a letter, let me help you.” And people got so mad that there would be pictures online of Clippy hanging by a noose. [Jim laughs] And there were people who actually shot their computer, you know, several times through the hard drive. And there was a story of a chef in New York who threw his computer in a deep fat fryer, he was so mad. So computer rage was a real thing. And I mean, it still is in a lot of places. And we were thinking, how could we make computers more intelligent? And what engineers were focusing on mostly was, did it accurately know what you were doing, like writing a letter. And I said, “you know, I think there’s another aspect of intelligence that we’ve been completely ignoring.” And that’s emotional intelligence. That’s the intelligence that even your dog has, when it sees you come home and sees you look really stressed and upset, versus you look really happy and playful. In one case, the dog puts its ears back and its tail down, and looks empathetic. And in the other case, the dog wags its tail and looks happy. And always the dog looks happy to see you. And when you take those nonverbals, and that acknowledging of human feelings, into an interaction, the interaction goes more smoothly. So I proposed that computers that are trying to be intelligent also need emotional intelligence. And then I also showed a whole bunch of ways, computationally, that that could be achieved.

Stump:

So the book you wrote that was called Affective Computing, where you made that argument, was published 20 years ago, if I’m not mistaken.

Picard: 

Yeah, the time has flown. 

Stump:

What’s been the progress in this field since then?

Picard:

Oh, my goodness, progress has been tremendous. It started a little slowly, because a lot of people thought it was nuts. But it has absolutely exploded in work over the last five to ten years. The progress includes camera-based systems that can interpret all the movements on your face and, in context, start to see whether that smile might mean that you’re frustrated, with a snarky, jerky smile, or a very slowly-building, rounded, happy smile that involves the whole face, and perhaps also your head bouncing up with joy; looking beyond the face at the entire set of gestures and movements from a person in context, and interpreting if they’re a pleased or upset customer, and how, then, to incorporate that into an interaction so that the customer has a better experience. So there’s a lot of progress reading faces, and a lot also now reading voices, tone of voice: if you sound pleased, curious, bored, irritated, angry. And, you know, again, all of that needs to be interpreted carefully in context. And it’s not perfect, just like people aren’t, but it’s able to do a lot now.

Stump:

So this isn’t just a matter of raw computing power somehow, for computers to understand our emotional states, right? It’s not just–

Picard:

No, no.

Stump:

So what is—and I realize we’re getting into philosophical territory here, of what’s the difference between how computers process this information versus how we process information? And why can’t we simulate that, exactly?

Picard:

Yeah, I think the biggest problem is we don’t know how it works in people. We do not understand how people recognize emotions. And one of the ways we think it works, or one of the things we think helps people who are better at recognizing emotions actually succeed, is we think that people sometimes contagiously get emotions. When you’re, say, listening to somebody describing what’s going on, sometimes you contagiously get infected, if you will, with their anxiety or their stress, without even intending to. And then something in you feels that, and starts to interpret that. And so it’s not simply a logical set of observations and reasoning about them that infers somebody else’s stress. It’s also that you are a human being who’s capable of experiencing something very similar to what they’re going through, if not the same thing—it’s not always going to be the same thing. And that substrate lets you better understand it. And computers do not have that substrate; they do not have feelings, they do not have the ability to feel and map somebody else’s feelings onto what they feel. The best they can do is have a little computational configuration of logic, with some words attached to it, that they try to match to what’s going on.

Stump:

So even if you could get that logic module to simulate the kind of emotional responses, you’re saying there’s still something different here?

Picard:

Yes, there’s something different. And there’s also just a big gap in our human understanding of how all of this works. And one of the things we learn over and over in trying to build smart computers is just enormous appreciation for how humans work, how our brains work, how our feelings work. We are absolutely mind-blowingly amazing. And we have barely scratched the surface of understanding what’s going on inside ourselves.

Stump:

I can anticipate, though, some difficulties for how we react to computers as they get better and better—robots, say. You were talking about the rage people had at Clippy earlier. If it gets to the point where, when I bang on my keyboard when I’m frustrated, the computer responds with “ouch, stop that, you’re hurting me!” and, you know, it sounds believable, do I have some duty or ethical obligation then to treat it differently? Or are you just saying, nope!

Picard:

No, interesting question. And let’s say, suppose it’s not your computer, but one of these humanoid-appearing robots. People have been building robots that look like their wife, or like their daughter. And then they demonstrate at the engineering conference some of the behaviors. For example, one engineer demonstrated slapping his wife-robot across the face. Now, would you—you know, that just causes a visceral aversion in all of us, right? Horror. And what should the robot do? Should the robot just ignore it? Because if—  It was shown, for example, with a robotic baby doll that if people strung it up by its toes and it screamed, more people would actually string it up by its toes. People actually enjoy torturing this little baby robot doll, little baby girl. It’s sick, you know, the things this brings out in human nature sometimes. So what they decided to do with that first My Real Baby doll was make it so that if people misbehaved with it, it would just shut down. It would not behave in these ways that gratify these evil impulses in some people. Similarly, if a humanoid robot has an ability to respond when you hit it in a certain way, is that something we want to choose, to, you know, imitate a human response? And, you know, emit sounds of pain or objection? Or should it just shut down? And also, one could imagine maybe both alternatives being reasonable in different circumstances. Maybe you’re trying to train somebody who has violent tendencies to start to feel for other people or something, right? Could a robot be helpful in that? There are all kinds of new, call them…opportunities. They’re already here in some of these things, you know, teleoperating these robots, right? You’re gonna have a human operating the robot’s response, and adapting it on the fly to what is deemed to be most therapeutic for helping somebody with a particular problem.

[musical interlude]

BioLogos:

Hey Language of God listeners. If you enjoy the conversations you hear on the podcast, we just wanted to let you know about our website, biologos.org, which has articles, videos, book reviews, and other resources for pastors, students, and educators. We also have an active online forum. We discuss each podcast episode, but it goes far beyond that, with lots of open discussions on all kinds of topics related to science and faith. Find it all at biologos.org.

Interview Part Two

Stump:

So I want to get to asking you about some of the more practical applications you’ve worked on with regard to this. But let me push just a little further into artificial intelligence. You’ve brought that term up a couple of times, and it gets used, as you’ve noted, in lots of different ways, and perhaps irresponsibly. You say that we’re a long ways off from this, right? What is it that computers can’t really do very well, that we do?

Picard:

Yeah, and let me distinguish between what computers can’t do today, versus statements of “they will never be able to do X.” Because it’s very hard to say something is impossible in the future. It’s simply that it’s not possible now with what we know now. And with what we know now about computers and lots of different forms of them, and even extrapolating enormous computational ability of them, they do not have feelings or consciousness. They do not have, quote unquote, a mind of their own, like some people like to say metaphorically, even in popular media these days. They actually really do not think in the sense that people think, even though we commonly use human-like terms like thinking, learning, believing, understanding, when describing the appearance of what they’re doing.

Stump:

Okay, so then, tell us what these computers can do really well, perhaps even better than we can. And maybe do that by sharing some of the technology that you’ve been involved in helping to develop, some of the applications, then.

Picard:

Yeah, and by the way, Jim, you’re really good at sounding skeptical. [laughter] I don’t know if people have told you that. I’m just noticing, you know, the tone of your voice. And by the way, computers are not good at reading skepticism. They’re not good at reading between the lines. They’re not good at sarcasm, or joking, or humor. These are very complicated things, even for people to describe how they happen. And computers are still far from all that. There are people working on giving them these abilities. Okay, so what are computers really good at? Computers are very good, when you have a whole lot of data, at identifying patterns in that data that can be mapped to what I’ll call labels: what people want a computer to say about those patterns. So for example, we might give computers a whole bunch of data showing different kinds of melanoma—skin cancer—and the computer doesn’t learn one pattern for all of it, it learns large sets of patterns of what those melanomas look like. And then when you show it a new picture of somebody’s skin, it is not perfect, but getting better with more data, at seeing whether that looks like a melanoma or not.

Stump:

Okay. So apply that then to some of the other applications that you’ve been working on and developing that detect different kinds of patterns.

Picard:

So we have been focused on detecting patterns that look like human affective states—is this a human that looks happy, frustrated, annoyed, you know, good states, bad states, interested states, bored states for student learners—and also now, increasingly, health states. We’re especially focused, lately, on states of whether or not a person is having a neurological event or a mental health event. Specifically, in neurology, we’ve been detecting whether or not people are having the most life-threatening kinds of seizures, and then the AI, in a wearable form, alerts somebody to come be with them at that time, because death rates are significantly lower if a person has somebody there at the time of a grand mal seizure. We’re also focused on mental health states. We are now able to use an AI, in conjunction with a doctor, that improves the ability to track if a person is depressed, getting better, or getting worse. And we can now do that on a finer-grained basis in between doctor’s visits, and recommend behaviors that might help that person get well.

Stump:

So what’s the wearable technology look like in this case? Is this just one of the apps on my Apple watch that will be able to do these things? Or does it have to have a lot more sensors than that?

Picard:

The Apple Watch is lacking one of the key sensors that is useful in our research. Deep in our brain, when the parts of our brain that are involved especially in memory, attention, and emotion are activated, and particular parts that are involved in processing fear and anxiety, it causes a skin-conductance response: an electrical change in our sweat gland activity, well, actually an activation of our sweat gland activity that shows up as an electrical change in our skin. And with the addition of a sensor in the wearable that can measure those electrical changes in the skin, we can pick up some very interesting changes related to that neurological activity. So we’re using that in our seizure detection algorithms and in the mental health tracking algorithms. So those are sensors that are custom, that we developed at the MIT Media Lab, that are now commercially available through Empatica. They are not yet in consumer devices.

Stump:

So when it’s monitoring for seizures like this, can this device tell when somebody is about to have a seizure? Or is it just once it’s started that it alerts somebody? What does this look like in practice, then, for somebody who’s wearing this device?

Picard:

Yeah, the Empatica Embrace device that is FDA-cleared for monitoring for the most dangerous kinds of seizures detects when the seizure is happening. And it turns out the most dangerous time is right after a seizure, when it might appear that the seizure has ended and the person is holding still; hopefully, there’s somebody there to reposition them and make sure there’s nothing in their airway. And during those minutes afterwards, that’s when it’s possible for somebody to stop breathing. We think the seizure, even though it looks like it’s ended on the outside, could actually be spreading deep in the brain in a way that, maybe the person’s not moving, they’re holding still, but it can attack a part of the brain that can turn off your breathing in those minutes following the seizure. So it’s really important that somebody be there during that time to stimulate you and possibly provide first aid.

Stump:

And without violating confidentiality or anything, have there been instances where you know that this device you and your lab have been working on has saved a person’s life?

Picard:

We have heard lots and lots and lots of cases where people said the device alerted them, they got there, and the person was found not breathing, and when they turned them on their side, or flipped them over, they went from blue to pink and started breathing again. It absolutely makes my skin conductance go through the roof, hearing these stories. Now, again, the device is not saving the person’s life. The device is detecting this event that’s potentially dangerous and summoning a person, and it’s the human being, it’s the human plus the AI, that makes the difference. It’s a human getting there in time to give this first aid, to reposition a person, to stimulate them so that they go from not breathing to breathing. And in some cases to do more than that. I should say too that it’s not perfect. You know, a battery could die, wireless could fail to communicate properly, the phone alert could go to voicemail. Unfortunately, we’ve also seen, tragically, some cases where, you know, the person, for example, thought they only needed to wear this at night, and then they died during the day, or they thought it was going to their caregiver and their caregiver’s phone was off. So there are limitations. The human-plus-AI system, I think, is much better than just the human system right now. But it’s still got room for error.

Stump:

Is the mental health application, then, as far along as the seizure one is here?

Picard:

The mental health work is really just getting started. And it’s a lot more complicated; the events are not as discrete as seizures. Actually, it’s also been kind of paused. Well, it’s still going on, but it’s a little slowed right now because of this global pandemic. And that has caused us to start a whole ‘nother effort, which we had been doing in influenza, very slowly in the background. And then when the coronavirus pandemic hit, all of our in-person studies suddenly couldn’t be done, so we shifted to doing COVID-19 studies. And now actually, Empatica has been funded by HHS and the US Army and is doing studies around the US and in the UK, using a wearable plus AI to detect physiological changes that happen when you’re first exposed to the virus, before you even come down with any symptoms. And we’re doing gold-standard, daily PCR testing, so that we can test every day if you might have a significant amount of virus in your system. And that’s allowing us to build a new kind of AI, so that in the future, I hope, you’ll be able to check your watch and think, you know, “hey, I feel fine today. I’m going to go visit Grandma,” who might be high risk. But it says, well, you may feel fine, but actually you have an 80% chance of fighting a viral infection right now, even though you’re asymptomatic; I suggest you stay away from Grandma and get a test. Because it can take days from first exposure to developing symptoms, people usually have this period of several days where they may actually be contagious to others without even realizing that they’re sick.

Stump:

There’s a TV commercial out now for Apple Watches and all the things that they can monitor, right? Besides the fitness sorts of things that we started with—sleep patterns, even EKG. You know, it gets us wondering where this is all headed, and do we ever get to the point where there’s too much information that I’m trying to monitor all the time? Is there ever a point where it causes me more stress to think about all of the different things that could be going wrong in my body right now? Where is this all headed, eventually?

Picard:

Yeah, I’m really glad you’re asking that. Because society really needs to have a lot more conversations about what we want with the power of this data. When you’re sharing continuously tracked mental health data with a trusted psychiatrist or trusted friend, then you know that they really just want to use it to help you. When a large consumer company that sells you, you know, 20 different services is tracking it, then you have to ask, you know, well, how are they going to use this to sell you different things in the music space? Or how are they going to use this to influence payments? Or, you know, if you’re in a country like China, where the government owns the back-end data access to everything, what is the president of China going to do with your mental health data and your physical health data? And your everything-you-spent-money-on data, and where you go to church data or don’t go to church data, and tracking everything about you? I don’t know if you’ve followed the Google Location Tracking. You know, it sends little updates. I’ve been entertained by this because since the pandemic, my behavior has changed dramatically.

Stump:

It’s not very interesting, those monthly reports of where you have been.

Picard:

I really have only been to five places during the pandemic. They know which grocery stores. Now I’m spending my Sunday mornings at home, on Zoom. So it’s very powerful, what you can glean from that data. And I think we need to be asking exactly what you asked and looking very carefully at who we are giving access to this data and what their plans are for these data. And not just give the open-ended “you can have my data and do whatever you want with it,” which is what these big companies have right now.

Stump:  

There’s a periodical from your institution that I look at occasionally—The MIT Tech Review. And just last week, I had clicked on an article and saw that the online version of the MIT Tech Review uses the tagline, “because technology is never really neutral.” So in a similar vein, I’ve heard and even used the line myself that technology itself is neither positive nor negative, but neither is it neutral. And I think these kinds of taglines help to counter the popularly held belief, maybe, that technological innovations can be used by good people for good purposes and by bad people for evil purposes, and that’s the end of the story. But what do you think further about what our interaction with computer technology is actually doing to us, to the way we live, to this prospect of human flourishing?

Picard:

Yeah, I like that tagline in general, because it makes people stop and think about “how is technology biasing us? How is it making our lives better or worse?” There was a wonderful study out of U Penn that looked, for example, at social media use. While most studies are just looking at correlation—you know, so they’re confounded by, is A causing B, or is B causing A, or is something else causing them both, or whatever—this study did the gold standard of causality. It’s called a randomized controlled trial. They randomly assigned undergrad students—they were all, you know, iPhone users, and this is U Penn, Ivy League—to either use Facebook, Snapchat, and Instagram the way you normally do, or to limit yourself to 10 minutes a day for three weeks. And after three weeks, they looked at a lot of different pre- and post- measures to see what changed. And what they found was that the group that limited to 10 minutes a day showed clinically significant reductions in loneliness and depression. And those who used it as usual did not show reductions in loneliness and depression. And, mind you, these things are rampant, and they’re growing, you know, actually across society now, with the pandemic.

Stump:

So there’s something very ironic about this, right? That the social networks we are increasingly a part of have the exact opposite effect on us.

Picard:

Yeah, now, I want to be careful not to paint it all as bad, right? Some people do use these social media in good ways, right, to connect with someone. But by and large, our youth are, you know, operating with a fear of missing out, with anxiety, with comparisons of likes: “why didn’t I get many likes on that?” They are often looking for more than just baby pictures, you know, and that is a problem. And they’re also growing up with this. There’s an opportunity cost, right: they’re using this one thing, which means they’re not spending time on another thing. They’re not spending time face-to-face, getting to know complicated fellow teenagers and young adults, dealing with the day-to-day reality of not looking your best, not looking like how you post yourself on Facebook. And, you know, not just looking at the highlights and the wonderful moments, but the majority of life, which is much richer than that.

Stump:

So let me turn the skepticism on here again. Is one way of responding to this, though, to say, “now come on, technological innovation—isn’t this something that every generation has worried about?” Didn’t tools radically change the way we lived back, you know, hundreds of thousands of years ago? Didn’t the printing press radically change the way we interact, and automobiles? Is computer technology somehow different from these other technological innovations in what it does to us and how it affects our lives?

Picard:

Yeah, it’s similar and it’s different. It’s similar in that, yes, people will tend to overreact to each new thing. And there will be naysayers and Luddites, and people who want to throw it all out. And it’s also the case that there are real changes. You know, paper was invented and was a new technology, and when people could suddenly write things down much more easily, people did not have to remember things as much. And our brains probably did change. If we could have scanned brains back then versus now, they probably would look different, although they would look different in a lot of ways anyway, right, because so many changes have happened, in lifestyle mostly. We find, however, that the language we’re using today about computers is drawn from language we use about people, and we’re making computers in our image. We’re making computers that look like people. And you know, like the daughter-robot, or the wife-robot, or, you know, the imitation-of-you robot that some of my colleagues dream of sending around the world so they don’t have to travel so much. And the one that right now wouldn’t have to worry about the virus if it traveled, right, if we all send our robots off to meetings, our avatars, to collect and bring back the interesting tidbits. So right now, I think it’s a bit different, because it’s being designed to imitate humans and it’s being designed in our image. And that raises new questions beyond just augmenting our abilities or augmenting our memories or augmenting our strength and our muscles, as technology has always augmented.

Stump:

Do you reflect on this as a Christian? You’re almost alluding to theological language there, of us making computers “in our image,” as we Christians talk about being made in the image of God. Does this augmentation, this integration of computer technology into our lives further and further—even pushing into, say, elements of transhumanism—does this fundamentally make us something other than human? Does it move us beyond what we’re really supposed to be, in this Christian sense of no, God made us this way, and we ought not mess with that so much?

Picard:

I won’t use the word transhuman because that’s got a bunch of evolving definitions right now. But I think it does call into question the materialistic view. You know, when people say we are nothing but, and they fill in the blank after that—there’s just not evidence for that. As we build computers in our image, and we see what they can and can’t do, we see more and more that we don’t fully understand what we are and who we are, and that there may be, you know, a lot more. And there really is a rational basis for saying that just the natural material, the stuff that we can build and shape into something like us, is not enough. There is something more. And I think that’s very exciting about this technology: it causes us to think very humbly about not just the limits of our knowledge, but the limits of our tools in science, and the limits of our philosophy. And I think for a person who does believe in God, they find this exciting. I find this exciting and completely compatible with a faith that allows that science, while incredibly powerful and valuable—and I now love it, even though as a kid I hated it—is not all there is, right? It is itself subject to something much greater.

Stump:

From your perspective, from where you sit in this field, what are the promises and the perils of computer technology for making this a better world, for greater human flourishing? Maybe I’ll ask it like this: as you survey the field and see trends, and see trajectories where it may be going, what is it that causes you concern? And the flip side of that: what is it that gives you hope, with regard to these advancing technologies and how they’ll affect our lives?

Picard: 

I think my biggest concern just goes back to human arrogance and greed and our own myopia, where people so often are driven more by, you know, how to make money in a business, or control other people, or other parts of human nature that don’t focus on what is really good for most people. They focus very selfishly on what is good for a very small number. And that scares me, because a small number of people with those kinds of emphases in power can use technology to bring about a lot of control over other people. And I think that could be very harmful. My hope is in things that help human nature rise above itself, and believe in and show love for others. That shift from being so self-focused—which, by the way, is also a recipe for bad wellbeing—to being much more other-focused. And when we remember that each person is, from a Christian standpoint, made in God’s image, and worthy of love, and that all people have equal worth, we can then be inspired to show this love and care for everybody, and not just get wrapped up in making some amazing piece of technology for one’s own benefit. But really stepping back and asking: what’s best for this large group of people? What’s best for others? Where is the real need, not just something that, you know, beefs up my resume or helps the people paying for my work make even more money? So I think we all have to step back and say, you know, “where are these needs in society?” and try to reorient the great power of technology to those needs. That gives me hope: when people talk to each other about those needs, and listen to each other, and show love for each other, and then turn the power of technology to addressing real-world problems.

Stump:

Well, may it be so. Thanks so much, Rosalind, for talking to us today.

Picard:  

Thanks, Jim.

Credits:

BioLogos:

Language of God is produced by BioLogos. It has been funded in part by the John Templeton Foundation and more than 300 individuals who donated to our crowdfunding campaign. Language of God is produced and mixed by Colin Hoogerwerf. That’s me. Our theme song is by Breakmaster Cylinder. We are produced out of the remote workspaces and the homes of BioLogos staff in Grand Rapids, Michigan.

If you have questions or want to join in a conversation about this episode find a link in the show notes for the BioLogos forum. Find more episodes of Language of God on your favorite podcast app or at our website, biologos.org, where you will also find tons of great articles and resources on faith and science. Thanks for listening. 


Featured guest

Rosalind Picard

Rosalind Picard is founder and director of the Affective Computing Research Group at the MIT Media Lab and founding faculty chair of MIT's Mind+Hand+Heart Initiative. She co-founded Affectiva, Inc., which provides emotion AI technology, and Empatica, Inc., which creates sensors and analytics to improve health.
