Joanna Ng | Data, Truth, & AI
Joanna discusses some of the risks that come from putting too much trust in computers and artificial intelligence.

This image was created with the assistance of DALL·E 2
Description
Joanna Ng has worked on many projects which have been turned into tools we use every day. In this episode she talks about the journey to becoming a Master Inventor and some of the highlights of her career, and then discusses some of the risks that come from putting too much trust in computers and artificial intelligence.
- Originally aired on November 02, 2023
- With Jim Stump
Transcript
Ng:
Data is a capture of reality at a point in time. So a lot of the current AI we see in the news is using internet data, which is data of humans who are sinful, with wrong, imperfect behavior, that we use to build models and predict things, which is very dangerous.
My name is Joanna Ng, I’m currently the founder of a small startup AI company called Devarim Design.
Stump:
Hi, everybody. Welcome to Language of God. I’m your host, Jim Stump. Joanna Ng is a computer scientist. She’s worked on many projects which have turned into things you and I use on a daily basis without so much as a thought of what went into the design and implementation of them. For example, the simple act of buying an item over the internet. It’s clear from talking to Joanna that she lives in a world that few of us will ever understand completely, even though we are constantly using and benefiting from the work that she does. When telling us about some of her work, it’s hard to stay away from technical jargon, and it might be the case that there’s just no good way of explaining some of these things to a lay audience like us. To help with the terminology, we put together a little glossary of a few concepts that come up, with, what else, the help of ChatGPT.
There’s an irony there, I can’t help but ponder. Well, anyway, you can find that in the show notes. In the first part of the conversation, Joanna tells a bit of her story of her natural proclivity for computer science, her work with IBM and how she came to be a master inventor. That leads us into conversation about some of the risks and pitfalls of technology, especially when we mistake the information we get from a limited set of data for a wider truth about the world. Stick with us through some of the technical bits, because there’s some important wisdom here about how people of faith might think about and respond to a world that’s increasingly infused with technology.
Let’s get to the conversation.
Interview Part One
Stump:
Well, Joanna Ng, welcome to the podcast. Thanks so much for joining us.
Ng:
My pleasure.
Stump:
So you first came onto our radar a couple of years ago in a Christianity Today article, 12 Christian Women in Science You Should Know. So we took their advice and we’re trying to get to know you. So tell us a bit about yourself, incorporating the Christian part of that, the science part of that, even the woman part of that, if you’d like.
Ng:
Okay. So I was born in Hong Kong, and left Hong Kong after high school to come here for university.
Stump:
And “here” is Canada, right? Not the United States.
Ng:
Here in Canada, yes. Yeah, here in Canada for university. I became a Christian in the first year of university, and that was when God interrupted my life.
Stump:
Tell us a little bit about that. What was going on and what was the interruption?
Ng:
So I had aspired to be a psychologist ever since I was in grade eight. I read every book about psychology. And then when I got into university, the first year was general study, and at the end of the first year, I was supposed to submit a form to declare my major for second year onward. So, a no-brainer, I filled in the major as psychology, because that was my childhood dream. But by then, I was already a Christian. I was a new Christian, newly baptized. And when I filled in the form according to my aspirations since childhood, there was an unexplainable unrest, a lack of peace that came from nowhere. I had my parents’ approval, everything was fine. Practically, in reality, there was nothing that could stop me from majoring in psychology and pursuing a career there. But I had absolutely zero peace, and I couldn’t sleep and I couldn’t eat, and I didn’t know why. But I knew enough that it could come from God.
So I went back and prayed about it, and I said, “What’s going on?” And that went on for a few days, and then I cried out to God and said, “Look, I don’t have a plan B. So if you want to interrupt me, you need to open my eyes to see what path you want for me. I surrender my career to you; it’s just that I see no plan B.” So then, I was willing to surrender 100%, not knowing what the answer was. Once I surrendered, the Lord opened up my eyes. In the first year, because I was late in registering for classes, I had taken all the computer courses as my optional courses. In Canada, we call them bird courses, like the French took French and the Chinese took Chinese, to bump up the GPA.
So I took all the computer science courses as my bird courses, and I treated them like bird courses. I put in very little effort and focused on my psychology classes. I didn’t know the computer courses were supposed to be hard, because with very little effort, I did really, really well. And then I felt the Spirit was saying, “Are you blind? Did you not see how your other classmates struggled despite all their effort, while to you it came easy? That is how I wired you.” So by faith, I filled in majoring in computer science. And all of a sudden, there was a tremendous peace, so supernatural, from head to toe. And I didn’t know what I was signing up for, other than naively thinking C programming sounds easy because it’s C; I didn’t know there was a lot more to it.
And that’s how I got into the technology field. But because it was so divinely started, I knew God had a purpose to put me there. And then the rest is history, because doors just opened, from having a career at IBM, and not just a programming job, but working on groundbreaking technology, project by project by project, eventually—
Stump:
Yeah, I want to ask you about some of those. Stay in college here and university for a little bit though. How common was it for women to be in the computer science department at that time?
Ng:
Very uncommon. It was a shock for me. I went to an all-girls kindergarten and graduated with the same 40 girls in high school, and then came a sudden jump into computer science, where I was in the minority. But the training in the girls’ school gave me enough confidence to ignore the boys. So that confidence helped in surviving being the only girl. Well, not really, it was about 15% in my class.
Stump:
15%, huh.
Ng:
Yeah. It’s just surprising that even to this day, that percentage is not any higher.
Stump:
Do you have any guesses for why that is? What’s the explanation for that?
Ng:
Okay, so there is a benefit to coming from a girls’ school, because we do things for the right reasons, ignoring the boys’ influence. After I came out of that, I think some girls would scale back so as not to intimidate the male ego. I never had that problem, but that was one of the reasons given. And also, I think some of the communication between boys and girls can be quite intimidating to girls. For example, I learned that from my husband. Boys discuss things as a matter of fact; they couldn’t care less about how you feel about it. So when we discuss science, especially in software development, where we talk about design reviews a lot, some of the comments were very direct, and they could be earth-shattering. If we don’t learn to understand the male communication style, which is more fact-based, less feelings-based, then some girls are turned off or feel very intimidated by that tone of talking, which can sometimes come across as condescending.
And I think that battle of learning the different communication styles has been prevalent throughout my career. I mentor a lot of females in the industry. I mentor a lot of females in my company. Mostly, that became the core issue: learn the different communication styles, and don’t take it personally. Don’t go home and cry about it, because it’s not about you, it’s about what they say.
Stump:
Okay. So did you join IBM then right after university? Did you go straight there?
Ng:
I actually worked for Honeywell for about a year, and Honeywell was a hardware company. So I was—
Stump:
They make thermostats, right?
Ng:
Yeah. Yes. So I was the only software developer there, producing graphics software code to be burned onto the EEPROM, which is the hardware of their thermostat. But they treated me, the only software programmer and a female, like other staff, like a secretary. So I knew I didn’t want to stay in a hardware company. I wanted to be in a software company, because I wanted to see my peers, I wanted to be challenged in software development, and that’s when I joined IBM.
Stump:
Okay. So what was your first job or your first assignment at IBM? What were you doing?
Ng:
I was a compiler writer.
Stump:
So tell us what that means.
Ng:
Okay, so I was a compiler writer. I was regularly sent to Yorktown Research to transfer compiler software code back to the Toronto lab. And I didn’t know it was supposed to be hard. Sometimes naive is good. So basically, I was in charge of the parser, the LALR parser that tokenizes a stream of text, decides what is a syntax error in a programming language according to a finite state machine, and then generates machine code for the backend to optimize.
Stump:
Okay. There’s a lot of technical jargon in there, that many people won’t understand. And I wonder if it’s even possible to explain to us, okay, so what exactly were you doing then? What was happening? A parser and—
Ng:
Yeah, so basically I transferred the technology called a finite state machine. We can treat it like a black box. Basically, that black box is able to process programming code, which is text, and tell you what the errors are in the syntax of a programming language.
Stump:
Okay. So there’s one group of people somewhere that are inputting text into this.
Ng:
The programmer.
Stump:
The programmer is putting in text. And somehow, as this gets compiled, there are errors that come out, and you are responsible for finding those errors? Is that safe to say?
Ng:
I was responsible for parsing the programming language being input, and turning it into intermediary machine code, which is turned into the actual zeros and ones for the machine.
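For readers who want a concrete picture, here is a minimal sketch, assuming nothing about IBM’s actual compiler code, of the kind of finite-state tokenizing the front end of a parser does: it scans a stream of program text, classifies each piece, and flags any character its states cannot accept as a syntax error.

```python
# A minimal, illustrative finite-state tokenizer -- the "black box" idea
# described above, not IBM's actual LALR parser, which is far more elaborate.
import re

# Each token kind is a pattern; together they define the machine's states.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),           # integer literals
    ("IDENT",  r"[A-Za-z_]\w*"),  # identifiers and keywords
    ("OP",     r"[+\-*/=;()]"),   # single-character operators
    ("SKIP",   r"\s+"),           # whitespace: consumed, not emitted
]
PATTERN = "|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC)

def tokenize(source: str):
    """Turn a stream of text into (kind, value) tokens, or raise a syntax error."""
    pos = 0
    while pos < len(source):
        m = re.match(PATTERN, source[pos:])
        if not m:
            # No state accepts this character: a syntax (lexical) error.
            raise SyntaxError(f"Unexpected character {source[pos]!r} at position {pos}")
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())
        pos += m.end()

print(list(tokenize("x = 42 + y;")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y'), ('OP', ';')]
```

A real compiler front end would then parse these tokens against the language grammar and hand intermediate code to the backend, as Joanna describes.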
Stump:
Okay. Keep describing some of your career then at IBM, and how that led into you and some of your inventions and what you were working on.
Ng:
So after a while, I had learned everything I needed to learn about compilers, which was a very good technology foundation for me, because I learned the power of abstraction: the same parser would generate the C compiler, the C++ compiler, the Java compiler. I saw the beauty of abstraction, and that gave me a very solid foundation, especially for AI. Then I needed more challenge. At the time, the internet was new, so I moved on to doing Java server work, and then eventually got into online commerce, which was a new horizon that gave birth to a lot of my patents. When my career started, I did not aim at being an inventor. I just enjoyed the work, because I liked the challenge. And then in the e-commerce days, it started with my boss asking, “I wonder if we can display a query from a database through HTML, so that the HTML is not static.” And that was the birth of e-commerce in IBM.
Stump:
So I assume these are things that are being widely used today every time we buy something online?
Ng:
Yeah. Yeah.
Stump:
And so eventually, in your career at IBM, you were designated a master inventor? What does it take to get that title?
Ng:
A master inventor means that you are prolific in filing patents, you know how to spot ideas that have never been done, and you are able to express them in patent lingo to claim that this is an intellectual property that is yours. Intellectual property is like physical property: you know where to put the fence to claim this is yours. So in IP, there is a way of comparing ideas, whether one is a repeat of a different iteration of the same idea, or whether it is an idea that has never been pursued. And to be a master inventor, you have to have at least two granted patents with lots of filings.
I became a master inventor after I got my 10th patent granted. And one of the obligations of a master inventor is to mentor aspiring inventors within IBM. So annually, I gave a patent mentorship talk at the software group level as well as locally in the Toronto lab. And I usually rounded up aspiring inventors to get their hands dirty, to work on the details of a patent so they could get the hang of it.
Stump:
Yeah. Well, good. Of course, what I really want to talk about here is artificial intelligence. And I’m curious to hear then how your career started to intersect with that, because for somebody who is not deep in the weeds of the technology of computer science, I don’t see yet how your career of doing patents related to e-commerce would lead you to intersect with development of artificial intelligence. So maybe give us a little bit more of that side of your career, and then we’ll push deeper into artificial intelligence today.
Ng:
So after e-commerce, people realized that I’m an inventor, even though I don’t call myself that. So I worked with the software group technology office to do innovation. And shortly after that, I was asked to be the head of research of IBM Canada, working with the inventors in the Canada Lab, as well as professors across Canadian universities and some US universities. I had a seven-year tenure in that role. That is when I got into AI, and that was way before AI was even a hype. Actually, AI has been around since the 1950s. I still remember taking AI courses in university. It was not a hype then. And so in my research capacity, which was really my sweet spot because I could define the research agenda, I worked very closely with Watson, and worked with a few universities to do very initial AI projects. That was very eye-opening.
I’ll give you one that is very classic. I can’t quote organizations, because it’s a failed project, so it cannot be officially quoted. We worked with a US university that had access to a US veterans’ health database. We were trying to observe the common patterns: is there a model we can use to predict things? And that is how I learned that some of the AI presumptions are wrong. One of the AI presumptions is that the data is the ground truth. How I immediately found out that the data is not the ground truth was by looking at the veterans’ database, at their blood pressure. 85% to 90% of them had blood pressure over 140, which in real truth means you have high blood pressure. But if you use that data to build a model, and really assume the data is the ground truth, then if you come to that model and say, “Oh, I have 145 blood pressure,” it will say, “Yes, you’re fine,” because that’s the majority of the data.
Stump:
Okay. Okay. So this is really interesting. So the data, artificial intelligence models were saying, “Because this is what most everybody has, this is the normal thing, this is what—”
Ng:
Yes, that the ground truth is in the data, which is not true. That’s how I found out that data is a capture of reality at a point in time. So a lot of the current AI we see in the news is using internet data, which is data of humans who are sinful, with wrong, imperfect behavior, that we use to build models and predict things, which is very dangerous.
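To make the pitfall concrete, here is a hedged sketch with made-up numbers, not the actual veterans data: a naive model that treats its skewed training set as the ground truth will call a reading of 145 “normal,” while the external medical reference says otherwise.

```python
# Illustrative sketch with synthetic numbers, NOT the actual veterans database.
# A naive model that treats its skewed training data as the ground truth.
import statistics

# Suppose, as described above, most training records are over 140 systolic.
training_systolic = [150, 160, 145, 155, 148, 152, 170, 143, 149, 120]

def data_driven_label(reading: int) -> str:
    """Call a reading 'normal' if it is close to what the data considers typical."""
    mean = statistics.mean(training_systolic)   # ~149: a skewed notion of "normal"
    sd = statistics.stdev(training_systolic)
    return "normal" if abs(reading - mean) <= 2 * sd else "abnormal"

def clinical_reference(reading: int) -> str:
    """External ground truth: over 140 systolic is high blood pressure."""
    return "high blood pressure" if reading > 140 else "normal"

print(data_driven_label(145))   # 'normal' -- the skewed data says so
print(clinical_reference(145))  # 'high blood pressure' -- the real truth
```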
Stump:
Yeah. Okay, let’s dig a little deeper into this if we can. So you mentioned there’s a long history of development of this back to the ’50s. Alan Turing is where most people look to start talking about this. And I don’t want to dwell too long on the history of this development, but are there some major developments or breakthroughs that you would point to, that got us to where we are today, and then we’ll talk about where we are today in a little bit?
Ng:
So AI has been around for a while. The reason why we see an abrupt AI acceleration is based on three factors. Number one, we never had the volume of data that we see today. And the acceleration of the volume of data is due to the internet. I think the estimate for 2020 is that the internet produced about 64.2 zettabytes of data. And it will only keep growing, because with the addition of the Internet of Things, it has a compound annual growth rate of 23%. So data is needed, and that volume of data accelerates the development of AI. So that’s number one.
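As a quick back-of-the-envelope illustration of what a 23% compound annual rate does to those numbers (the projection below is our own arithmetic from her two figures, not an official estimate):

```python
# Back-of-envelope projection from the figures quoted above:
# 64.2 zettabytes in 2020, growing at a 23% compound annual rate.
# Illustrative arithmetic only; real estimates vary by source.
base_zb = 64.2   # zettabytes generated in 2020
cagr = 0.23      # 23% compound annual growth rate

for year in (2021, 2023, 2025):
    projected = base_zb * (1 + cagr) ** (year - 2020)
    print(f"{year}: ~{projected:.0f} ZB")

# 2021: ~79 ZB
# 2023: ~119 ZB
# 2025: ~181 ZB
```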
Stump:
Okay, number one.
Ng:
Number two, there is a drastic increase in computational power. It’s estimated that computing capacity is increasing seven times year over year. And that growth is important, because it’s very compute-intensive to process all the data. ChatGPT was possible because of that, and they heavily used NVIDIA chips, which give them a lot of computation power. So that’s the second reason. The third reason is that the data gives birth to the tech giants, who have unlimited wealth. And that creates a very interesting phenomenon. It’s been estimated that in 2021 alone, the five big tech companies, Apple, Amazon, Google, Facebook/Meta, and Microsoft, together had a combined revenue of 1.4 trillion dollars. So it’s important for us to understand the context of the AI acceleration. Point number one, the data that these big tech companies use mostly comes from user data, from the internet. There’s no fair trade of data. Every user has at least 10,000 data points, if you have ever been online.
Stump:
10,000?
Ng:
At least.
Stump:
You know 10,000 things about me because I’ve been online?
Ng:
Because you have been online. Either you liked something, or commented on something, or you posted something, you bought something. Your data is out there. You don’t know about it. There is certain usage without your consent, and it’s like you’re using my blood, but you’re not telling me, and I don’t even know you’re sucking my blood. So a lot of the major developments, including ChatGPT, wouldn’t be possible without that volume of data. But we have to understand where that data comes from. It comes from you and me.
Stump:
Yeah. So there are privacy concerns there, for sure. But is there also, what you were mentioning earlier, about the ground truth, that it’s just using data that’s been produced by other people, and you may or may not know if it’s really true?
Ng:
Absolutely. So for example, there are biases in the data. People who are less well off don’t have money to buy a smartphone or a computer to use the internet, so they are never represented. People of color, the visible minorities, the poor, they are not represented. Globally, the data mostly comes from well-off countries, like North America. And we have to also understand that the data represents the status quo. It doesn’t represent the ideal that we want to get to. So the language is discriminatory, misogynistic. And those are the data sources that built the large language model of ChatGPT.
Stump:
Yeah. So just like the blood pressure example you gave, that’s what it takes as the norm. If that’s what the data is that’s there, that’s what’s normal, that’s what’s true in its estimation?
Ng:
Yes. And there is one thing I need to mention. A major part of my IBM career was to take research and commercialize it into a product that ordinary citizens of the world would use by putting their hands on it. And there is a very strict discipline for when you turn research into a product. I’m not going to go into a lecture on how that’s done, because that itself could be an hour or two worth of discussion. But the key thing is that when we productize, we are releasing a piece of technology that graduated from research to the citizens of the world. And there are product liabilities. I remember every time IBM shipped a product, we sat down with the IBM lawyers and asked, “When you release this product, what can you claim? What can you highlight that would not be covered in the usage?”
And therefore there would be liability if there is a defect, or if there is a misrepresentation, or if there is a failure to warn about danger. What can we warranty? It’s a very legal term, meaning that if I’m not doing what I claimed I would do, I breached the warranty and there are financial consequences, or I am responsible for fixing defects within what I claim the product can do, and I can end up in a lawsuit for overrepresenting or misrepresenting, or failing to warn you about the danger. The reason why I mention that is that there is a very irresponsible AI commercialization that frees the tech companies from product liability, which is societally irresponsible.
I’ll name an example. OpenAI released ChatGPT, refuses to disclose their data sources even to this day, refuses to let their data be subject to audit for all the biases and all that stuff that we talked about, and they released it as a beta so that they can opt out of product liability. And we should not let this happen, because if AI is commercialized without product liability, then no one can be held accountable for the societal costs.
Stump:
So it sounds like, and I’ve heard people give this analogy before, that there should have been something like clinical trials like we do with drug tests before they’re released to the public, to see are there unintended consequences or side effects that we’re not aware of. Is that even possible for technology like this? Can you do it?
Ng:
Absolutely. Absolutely. Okay, so this is not new. Years ago, IBM Canada acquired a company called Algorithmics. They had been using AI to do financial investment for years. Now, Canada is very heavily regulated in the financial sector. That’s why, when the market crashed in 2008, Canada was largely intact. When that software was released, there were well-grounded rules that are legislated. For example, for you to use that piece of software, IBM had to disclose the algorithm, the model, and the data that we used. And if you know all that, “These are the sources we use for our calculations, this is the algorithmic approach we use, and here’s the algorithm,” and you lose money by following the prediction, you cannot sue IBM.
But if you find out that there is some falsehood in the source of data or in the algorithm that caused you to lose money, IBM is liable for that loss. So in the financial sector, we understand, because we are talking about money. So we know how to govern. But that kind of governing should apply to all other sectors. Just because a teenager being taken down a path of depression is not about money doesn’t mean it’s not consequential.
Stump:
Right. Yeah, that’s kind of what I had in the back of my mind when I was asking about unintended consequences or side effects. If social media had been tested through clinical trials like this for a couple of years, would we have said, “Oh, this isn’t very good for us. We probably shouldn’t release this to the public”?
Ng:
Right. And now on that point, we know that if it is not AI, let’s say it is film technology, which is another technology I have followed: the first film was introduced in the late 1800s, and the best entertainment for the family at the time was 40 seconds, black and white, with no voice, and then we evolved. So if we substitute the film industry for AI, we understand that if the film industry had not been legislated, we would see 12-year-olds exposed to porn. Why don’t we do the same for social media and for AI? Why do we let the psychological-operative algorithms, designed to get people addicted to a particular topic, run unguarded on social media, still working to this day, without guardrails, without protecting our next generation?
[musical interlude]
Interview Part Two
Stump:
So there are leaders in the tech field, and even in AI, that have been at least publicly saying there should be some regulation, some government oversight of this. Is that for real? Will that happen, do you think?
Ng:
It depends on the country. For the legislation that I’ve seen, Canada, for example, is working towards a law where, in the future, if that law is passed, you cannot commercialize an AI offering without disclosing your sources, without letting your data be audited. But that is in the future. And that is not enough. We are not protecting our next generation enough. We are not guarding against misinformation enough. There is a philosophy that content is no one’s responsibility. That is bogus. There should be legislation to sue people who put out false information. There should be guardrails to stop algorithms designed to get people hooked, because we might be psychologically manipulated by social media without knowing it, as adults. For a teenager or a child, it is even worse.
So I have seen regulations about AI. I think Europe is quite ahead, and Canada also has some substantial ones, but those are more at the technology level. There is not enough at the content level, at the algorithmic level, on the intent of psychological addiction and manipulation. That legislation doesn’t exist. As for the US, I think Biden, in June, gathered the big tech companies to talk about AI, but it was left as a voluntary effort, which doesn’t work. Who would voluntarily walk away from billions of revenue annually for the common good?
Stump:
Mm-hmm. Well, I’d like to hear you talk a little bit more about where this might be going. Of course, when something like this makes it into the popular press, there’s going to be a lot of hype. And here in the last year, since ChatGPT has really exploded into the public awareness, there’s been a lot of hype both on the positive and negative, where optimistically, some people will claim that AI is going to end disease and poverty, and on the pessimistic side, that AI is going to destroy humanity. And very famously, we have lots of the creators of AI signing the declaration about being nervous of what it could do. Could you cut through some of that hype for us, if you would? What are the realistic expectations of both benefits and risks of AI in the not-too-distant future?
Ng:
Okay. So the existential narrative of AI is very disappointing, especially when it is made by scientists within the field, because it is distracting. It distracts us from the real problems that we need to face. Though there are risks, if there is ever an existential threat from AI, it is man’s own doing. Say, for example, you allowed a machine model to automatically press the button for a nuclear bomb, and we just die without even knowing. That’s man’s doing. So I go against promoting AI as a sentient being, because it is a cheap opt-out for man. And so I strongly, strongly go against it.
We know there are driverless cars in the making, and insurance companies keep saying, “Well, who am I going to sue if an accident happens?” Behind every single technology, there is an entity, a company, a group of people responsible. So do not promote AI as a sentient being, and demote the human as the responsible entity, and let the world end, because we would be ending it ourselves. So I don’t buy that. But there are potential risks of AI. For example, with generative AI, people with a conspiracy theory agenda can pump out content that is misinformation and let the models pick it up, compared to news media, who might have less money and cannot promote the truth as widely as the others promote the lies.
So the exponential generation of fake content would further feed the deep divide, because for the first time, because of social media, because of the psychological-operation algorithmic approach on the server side, it became a reality that we do not have a commonly shared reality. Back in the day, we watched the news together with our parents. We all got the same facts. Now, we might have different opinions on how to solve a problem, like economic policy, some wanting more control, some less control, and that’s fine, those are political opinions. But we can discuss that based on the same shared version of the facts. This is no longer true. If you are on the deep end of a certain narrative of, did the 2020 election happen fairly, then they keep feeding you your version, so that you and I no longer share the same view of what actually happened. And this can become a political threat, and it would incapacitate the mechanism of democracy.
Stump:
So let me give an example that I’ve heard, or at least a variation on an example I’ve heard, and let you weigh in as whether this is a realistic threat or not. So if we gave an AI a more general goal, to say increase my financial portfolio. And so the threat isn’t… We’re not going to give it access to push the button of nuclear weapons, right?
Ng:
No.
Stump:
But if we give it a goal like that and it has access to social media, as you’re just saying, it doesn’t seem inconceivable to me that it could figure out, “Oh, if I put out these messages on social media and get this candidate elected, that will help the financial portfolio of a big company,” or, “If I started a revolution in a small African republic somewhere, it might help this mining company do better.” So as you’re saying, that’s not incredibly farfetched, is it, that these—
Ng:
No. No. No. I’ll tell you what actually happened. Have you heard about the Cambridge Analytica?
Stump:
Oh, yeah.
Ng:
They used Facebook data to build a model of the persuadables. So if you were very adamant in supporting the Democrats, you got scoped out. If you were very adamant in supporting the Republicans, you got scoped out. So they built a model and said, “Here are the characteristics of the persuadables.” And once they built that model, they knew who those people were, and they targeted fake news at them, depending on who paid, who was their client, and told the narrative so that this persuadable group would be more inclined to vote for the client that paid them. That is a fact. And the Parsons professor went to Europe to ask for his personal data, because in Europe, GDPR enforces that if a user asks to get their data back, they can. And Cambridge Analytica would rather plead guilty than give the data to this Parsons professor, who to this day has not received the data.
So it’s just telling you how dangerous it is. And I don’t think we appreciate the danger of the superpower of the five tech companies. Look at Elon Musk: he can decide Ukraine can have this technology and therefore they can win the war. If there were a situation where the top CEOs of the top tech companies got together, just the five or six of them, they could decide who your next president is in 2024.
Stump:
Wow. Okay. How about the other side? Are there any optimistic views you have of what AI could do for us and for humanity if it’s used well in the future?
Ng:
Yes. That’s why I’m frustrated with the existential narrative, because it distracts us from using AI to bless. There are a lot of ways that AI can bless. One of the projects that I worked on in my days with the US university, I’ll just give that as an example. Today, the oncologist prescribes drugs for a certain type of cancer, say breast cancer or lung cancer, based on what they know about the drugs. And what they know about a drug might come from just whichever marketing guy showed up from the pharmaceutical company. In the long run, with AI, with your genotype, with enough data, we can say: if you are in this age group, if you are this gender, and if this is your genotype, then among the six types of drugs to treat lung cancer, the data evidence shows that if you pick drug B, it will give you the biggest chance of survival. Those things can happen, and those are the good things.
And even for companies, there is a lot of low-hanging fruit where they can benefit from AI, including seeing what the model of your most ideal customer is, what the model of their satisfaction is. It can do a business a lot of good. And for dementia, it can do a lot of good by compensating for or augmenting the mental capacity that people are losing. That, to a certain percentage, not fully, can be compensated by AI.
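A hedged sketch of the kind of evidence-based lookup she is describing, with entirely hypothetical drugs, patient profiles, and survival rates invented for illustration:

```python
# Entirely hypothetical data, invented for illustration: picking among
# candidate drugs by the observed evidence for patients with matching features.

# (age_group, sex, genotype) -> {drug: observed survival rate}
EVIDENCE = {
    ("50-59", "F", "EGFR+"): {"drug_A": 0.52, "drug_B": 0.71, "drug_C": 0.48},
    ("50-59", "F", "EGFR-"): {"drug_A": 0.61, "drug_B": 0.55, "drug_C": 0.58},
}

def recommend(age_group: str, sex: str, genotype: str) -> str:
    """Return the drug with the highest observed survival for this profile."""
    outcomes = EVIDENCE[(age_group, sex, genotype)]
    return max(outcomes, key=outcomes.get)

print(recommend("50-59", "F", "EGFR+"))   # 'drug_B'
```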
Stump:
Just in closing here, you’ve done this a little bit already, but more explicitly maybe, I’d like to get you to reflect particularly on AI from the perspective of a Christian, and what we as Christians can bring to this conversation, to the developments, to the direction that things are going. We’re creators. I think that reflects something of the Creator in us, and you’ve created a lot of things. Is there a limit on what we should create, such that even if we have the technical expertise to create something, we shouldn’t go further down that path? Or should we say no, because we have the ability, we can create these things, and we just have to have guardrails to make sure they’re used correctly? Where does your Christian faith play into how you answer those kinds of questions?
Ng:
So there are two groups of people I would speak to, which prompted a few pages that I wrote yesterday. The number one is the church in general. I sense a shared global fear about AI among the church. And that has to stop, because it doesn’t position the church, the body of Christ, to be the salt and light in the world. We do not need to fear, because it’s all under the Lord’s control. How do I know that? In Daniel 12:4, Daniel was given a prophecy that at the end of time, knowledge will increase. And this, I’m sure, includes AI. It talks about people going to and fro, which is like flights, and then knowledge will increase. So we know nothing is outside of the Lord’s control. So that fear has to go, because only without that fear can we take our position to be the light and the salt. And what does that mean? As a church, we are here to call out the deception, and to shine the light. And every Christian has an Esther moment today, because every question-
Stump:
Esther. You’re saying Esther moment?
Ng:
Yeah, Esther moment. We are made for such a time as this.
Stump:
Such a time as this, right.
Ng:
Yes, we are made for such a time as this. It is a given: the AI era, we are already in it. So this is our Esther moment. And as Esther, what do we do? As a citizen of the world, as a Christian, we need to get literate on AI. So AI literacy is something I promote a lot. It’s like, in the days of driving horses, someone invented the car. So we need to have driving literacy to live in that era, rather than live in fear of cars. It doesn’t make sense otherwise. In the same way, as citizens, as Christians, for the church, we are supposed to show the light. The first thing we do is AI literacy. Once we know enough, so that ignorance doesn’t feed fear, then knowledge informs us to act. And therefore the church needs to be more vocal in enforcing the legislation that I talked about earlier in this hour, to stop the deception from exponentially multiplying using AI and these digital platforms. So that’s number one. But we cannot stand in that position to enforce God’s will if we don’t know enough.
The second group of people I speak to is people like me. Number one, I believe that, like 100 years ago, God called Jesus-loving people to build great schools. So you see Harvard, Princeton, Brown, the whole nine yards, and even my girls’ school in Hong Kong was founded by Jesus-loving Christians, and they still stand to this day. I believe this is a time when God is calling Jesus-loving people to lead the way, to build digital platforms that reflect his values. The current digital platform is a platform of exploitation, a platform of deception, a platform of manipulation, but it doesn’t have to be this way. People kept driving stick-shift cars and complained about them every day, until the automatic transmission was invented.
So I believe that God is calling those whom He divinely ordained to be in the field of tech, in the field of AI, to lead the way, to build something as an alternative that is more reflective of his values. And so I call for more scholarships for STEM, to recruit more Jesus-loving, Spirit-led technologists, scientists, and investors into the field, so that we can build a brighter platform that brings more light and takes away the darkness.
Stump:
Good. Well, just in closing then, tell us a little bit about your company, Devarim Design and what you’re doing in that regard to try to lead the way.
Ng:
Okay. I can’t talk about it too much, only in a general sense, because there are enough companies out there to eat me up. But I’m working on two things. One is what I call Axiom AI, which is different from Explainable AI. Explainable AI is 100% working on data, and so Explainable AI is still confined by the data. I’m working on Axiom AI, which balances out statistical AI, which is machine learning, with symbolic AI, which is the ground truth, so that there is a balance. So something like, “Oh, I have 140 blood pressure,” and being told that’s normal, could be offset by saying, “No, you’re not normal. I don’t care if 100% of that data subset says this is not high pressure; from an axiom point of view, you are still not normal.” So I’m working on something like that.
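Since she keeps the details private, here is only a hedged sketch of the general idea as she describes it, not Devarim Design’s actual method: a symbolic layer of axioms, known ground truth, reviews what a data-driven model predicts and overrules it when the two contradict.

```python
# A hedged sketch of the "axiom" idea described above -- NOT Devarim Design's
# actual method, which is not public. A symbolic rule layer reviews a
# data-driven prediction and overrules it when it contradicts known ground truth.

def data_driven_model(systolic: int) -> str:
    """Stand-in for a model trained on skewed data (most records over 140)."""
    return "normal" if 130 <= systolic <= 170 else "abnormal"

# Axioms: externally established facts, independent of the training data.
AXIOMS = [
    # (condition, verdict): if the condition holds, the verdict is fixed.
    (lambda s: s >= 140, "high blood pressure"),
    (lambda s: s < 90,   "low blood pressure"),
]

def axiom_checked(systolic: int) -> str:
    """Let the axioms veto the statistical prediction."""
    for holds, verdict in AXIOMS:
        if holds(systolic):
            return verdict          # ground truth overrides the data
    return data_driven_model(systolic)

print(data_driven_model(145))  # 'normal' -- confined by the skewed data
print(axiom_checked(145))      # 'high blood pressure' -- the axiom wins
```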
Stump:
So it doesn’t take the data as the ground truth then. It recognizes—
Ng:
Yeah, with a reference. And I’m also working on an augmented cognition, which is AI for me. So remember I talked about the automatic transmission. For me, the automatic transmission that would improve on the stick shift starts with an infrastructure where every user of the internet can keep their own data. It’s at a very, very baby step right now. But the vision is that there will be a day… I’ve been thinking: we live on the internet today. What do I want my children’s and my grandchildren’s digital platform to be like? I want to see them being able to live in a world where they own their own data, and they can willfully contribute their own data, either anonymously or not, to a cause of their choice.
So for example, if someone wants to have an AI project to build a model to understand the teenagers of this generation, then, “Oh yeah, it’s a good project. I want to contribute my data in an anonymous way.” Or if someone wants to work on building a model to increase longevity, then I’m willing to contribute my health data in an anonymous way to help that cause. So I give blood by my choice, rather than you sucking my blood without me even knowing about it.
Stump:
Well, good. I hope that comes into being, and that the big tech companies don’t gobble you up too soon.
Ng:
I’m trying. I’m very low profile right now, as you notice.
Stump:
Well, Joanna, it’s been a pleasure to talk to you, and I hope we might do this again sometime as this technology continues to accelerate so rapidly, and we want to get a perspective again. But for now, we thank you so much for sharing with us.
Ng:
Happy to share. Yeah, we need to get our voice out there. AI literacy for the public is the first step.
Stump:
All right, very good. Well, blessings to you and on your work.
Ng:
Okay, thank you.
Credits
BioLogos:
Language of God is produced by BioLogos. It has been funded in part by the Fetzer Institute. Fetzer supports a movement of organizations who are applying spiritual solutions to society’s toughest problems. Get involved at fetzer.org; and by the John Templeton Foundation, which funds research and catalyzes conversations that inspire people with awe and wonder. And BioLogos is also supported by individual donors and listeners like you, who contribute to BioLogos.
Language of God is produced and mixed by Colin Hoogerwerf, that’s me. Our theme song is by Breakmaster Cylinder. BioLogos offices are located in Grand Rapids, Michigan in the Grand River Watershed. If you have questions, or want to join in a conversation about this episode, find the link in the show notes for the BioLogos forum, or visit our website, biologos.org, where you’ll find articles, videos, and other resources on faith and science. Thanks for listening.
Glossary of Terms from the Episode:
Ground truth: The information or data that acts as a reference point against which we can measure the performance of computer programs or algorithms.
Compiler: A special computer program that turns the code that programmers write into something a computer can understand and run. It’s like a translator between humans and computers.
Parsing: Parsing in computer science is like grammar-checking a sentence. It looks at the code to make sure all the parts are in the right order and make sense together, so the computer can understand what to do.
Black box: A system or device where you can see what goes in and what comes out, but you don’t know exactly how it works on the inside.
Bootstrap: The initial push that gets a computer or program running so it can do more complicated tasks on its own. Just like you need that first push to start pedaling a bike, a computer needs a bootstrap to get going.
Featured guest

Joanna Ng
Joanna Ng is first and foremost a disciple of Christ, whose faith informs her as an inventor and a technologist. A former IBMer, she has 49 patents granted to her name, attained the accreditation of IBM Master Inventor, and served on two IBM patent review boards, educating and mentoring aspiring inventors. Joanna held a seven-year tenure as the Head of Research and Director of the Centre for Advanced Studies at IBM Canada; published two computer science academic books, The Smart Internet (2010) and The Personal Web (2013), as well as 20+ peer-reviewed academic papers. After she left IBM, she started Devarim Design for the advancement of AI, focusing on applying AI to augmented intelligence.
Join the conversation on the BioLogos forum
At BioLogos, “gracious dialogue” means demonstrating the grace of Christ as we dialogue together about the tough issues of science and faith.