ReligionWise

Religion in the Age of Artificial Intelligence - Mark Graves

Institute for Religious and Cultural Understanding Season 5 Episode 5

Artificial intelligence is forcing engineers to define concepts that philosophers and theologians have wrestled with for centuries: consciousness, personhood, and morality, to name a few. The technology may be new, but the questions it poses are ancient. In this conversation with Mark Graves, Research Director at AI and Faith, we discuss what AI development reveals about humanity and what we can learn from the conversation between religion and technology.


Chip Gruen:

Welcome to ReligionWise. I'm your host, Chip Gruen. There is a scene at the end of Harry Potter and the Chamber of Secrets where Arthur Weasley says, "Never trust anything that can think for itself if you can't see where it keeps its brain." He, of course, is referring to the enchanted diary that affects his young daughter, Ginny. But as we have conversations about large language models and about AI, that interaction and the plot line that accompanies it keeps coming back to me. Of course, in Harry Potter, the unseen force is enchantment, is magic, is something that is not scientific, but it bears a lot of the same qualities that the AI models that are with us today bear as well. We can't see their brains. We don't know exactly how they work, or even approximately how they work. And I would say that that is true for most people when they get in cars, that they don't know approximately how they work, but somebody does. The difference with the LLMs is that nobody does, not even the people who built the systems themselves, and so we end up getting information from a black box, and it's easy to imagine that entity, if you will, as operating in ways that are either similar to another human or even similar to God. Today, I'm very excited to have Mark Graves on the program. He is the research director at AI and Faith and is trained with a PhD from the University of Michigan in computer science and AI, as well as holding master's degrees from the Graduate Theological Union in Berkeley and the Jesuit School of Theology. As you'll hear, I think this conversation could have probably gone on for quite a bit longer. We skimmed across the surface of several interesting conversations about what it means to be human and how that is understood in the light of AI models. You know, we talk a little bit about whether wisdom can come from a machine. We talk about our understanding of things like soul and consciousness, and we think about regulation and policy and the potential way forward for AI. So I know I learned a great deal from this conversation. I hope you do too. Without further ado, here's my conversation with Mark Graves. Mark Graves, thanks for coming on ReligionWise.

Mark Graves:

Yeah, thanks. Thank you for having me.

Chip Gruen:

So I want to start with your trajectory a little bit. To my mind, your path seems a little bit unusual: you have a PhD in computer science and AI from the University of Michigan, then went on to master's degrees from the Jesuit School of Theology and the Graduate Theological Union. Can you share the journey that led you to put these fields together? What made you think that theology had something important to say about AI development, or vice versa, that AI development had something to say about theology?

Mark Graves:

Well, starting out, they were pretty separate. I mean, I grew up on a farm in the Bible Belt, and, you know, attended church and was interested in math, and actually artificial intelligence, and ended up studying that, but kept them very separate until I started really moving more into the biological sciences and trying to think about how people think, and using AI tools for that. I ended up studying theology, and theology and science, and then bringing AI into that. And so for me, they were more parallel journeys, and then slowly, just kind of trying to find ways to bring them together.

Chip Gruen:

I mean, that sounds like a natural enough progression, but was there some moment, was there something where the lightbulb went off that said, wait a minute, these aren't just parallel, these actually do have something to do with one another?

Mark Graves:

Well, one thing was, I was in Pasadena, working on a research project at Fuller Seminary and Caltech, where I was bringing some AI techniques into the study of care and compassion. And up until that point, I had been thinking of it in terms of neuroscience and spirituality, and how do you do the bridge there? And then I realized, oh, I could really use the AI tools that were out there, even then, to actually understand theological texts. And so then I pivoted to trying to do more within AI. I'd spent, at that point, a few years, you know, in academia, so I went back to industry, refreshed my AI skills, started getting more involved in ethics, which I thought was a good bridge between AI and theology. And then I ended up being able to have a fellowship at Notre Dame. So I was there for a couple of years to try and work on the relationship, and so it was a gradual process of trying to bring those together. But there was that moment of, like, oh yeah, I think AI had gotten to the point where it wasn't just science fiction to connect it to theology, but could actually have real use. And I did that for a few years, and then about, you know, three years ago with ChatGPT, that pivoted enough to say, oh, now AI is at the point where theology really needs to say something to AI development.

Chip Gruen:

So this brings us to your current role as research director at AI and Faith, which is an organization that engages multiple religious traditions in these conversations. So I want to sort of push a little bit about that organization. But can you, you know, just sort of summarize how you came to be involved with it, what that organization looks like, who's a part of it, some of the high-level structural stuff about AI and Faith?

Mark Graves:

Well, so I've been interested in AI and religion for a while, and somebody told me about this organization, and at the time it was a little over 100, almost 150 people that were interested in this. I was like, oh, this is a place for me. You know, in my career, I would sometimes bring 10-15 people together as a working group to do something, and here was, you know, like 100 people that were already interested. And so for me, it was easy to just kind of step into that, first as a volunteer, and then progressively greater positions of responsibility in order to kind of help foster the dialog. And so it was a gradual process, just like starting to attend, doing more volunteer work, starting to work part time, running grants through there, to slowly build up to where now I can help try and bring people together, from academia, from industry, religious leaders, to kind of foster that conversation with others.

Chip Gruen:

Yeah, so that seems to be one of the things that, I don't know if I would say is unique, but certainly distinctive about your organization: it's not just religious communities or religious individuals talking to themselves, but also, and correct me if I'm wrong, the goal seems to be to bring religious and philosophical wisdom into conversation with technologists, developers, and the broader public. Can you describe what that work looks like? I mean, you've described these kind of convenings, bringing people together, but when you're thinking about the mission of AI and Faith, what is that? How is that affecting the conversation, again, not only within religious communities, but bridging that gap to industry, for example?

Mark Graves:

Part of it is that, I mean, people in religious communities are using AI. They need to know that. And so how do we connect to people that actually understand the technology and can advise on that? You know, another group are people that are working within large, major tech companies who have their own faith commitment. And so part of it is helping them understand how to make those connections. And so for me, part of it, building upon my journey of doing that, working within technology, studying theology, trying to build these bridges, understanding how to use ethics for the conversation, I see part of what I'm doing as actually mentoring others along that same path. You're working at a highly technical, scientific job during the day. You go to church on weekends. How do you bridge those worlds? And, you know, it took me a good, let's say, 20 years to figure out how to bridge that. How can I help others do that now, in their jobs, day to day? And so those are kind of the, you know, linchpin or, I guess, anchor groups, within the church and then within the tech companies. But AI is so pervasive that it covers a wide range: people in academia, ranging from theology departments to computer science departments and all in between; other people in industry that are not just developing the technology but using the technology. What do they need? And then you get these kind of crossovers, like companies that are building technology for churches, or CEOs who have a strong faith commitment and are trying to bring that in. And so that ends up kind of increasing the size of our tent.

Chip Gruen:

So I want to sort of drill down a little bit on the tech side of this. And what you've described so far has been a little bit of meeting people where they are, right, religious people within the industry, for example, building those bridges, following your path, etc. But it also seems like the nature of the work itself is engaging in what I would consider to be religious and philosophical questions, and that there are decisions that need to be made about what is consciousness or personhood or ethics or meaning, so that even somebody who is not particularly religious, who is working on these models, is answering those questions or thinking about those questions, or maybe should be thinking about those questions. In your experience, and I know it will vary from person to person, but do the developers see themselves as actively engaging in those, you know, religious or humanistic traditions, see that that is essential or relevant to their work? Or do they see that as something different from the technological work that they're doing?

Mark Graves:

Yeah, I think it happens on multiple levels. So I think a lot of people say, hey, there's something here that goes beyond the technology. You know, it brings up ethical questions. It brings up questions about, at least now, relationships, and maybe people see it as bringing up questions about the human person. But the training of most people that are doing that work was not in the humanities, and so they may realize, hey, there's something here about right and wrong, but have no idea that the field of ethics even exists, much less that it has at least, you know, a 2,500-year-old history just in the West. And so they kind of realize that there are questions, but they don't have the resources to be able to really think about them in depth. And so they know that they're touching on it, but they don't really know how to frame the questions. And so, you know, part of the process, in a sense, I mean, in philosophy, it's like learning to ask better questions, to refine the questions, and to realize, oh, this is an ethical question. I think one of the complications here is that in tech companies, ethics often gets tied to compliance, which is part of the legal department. And so people usually don't want the legal department getting too involved in their technical development. You know, that means something's going wrong. And so it's hard to then bring in resources and say, no, this is another way of doing ethics. This is much more about being a better scientist, being a better developer, being a better engineer, and trying to answer some of these questions that do go a little bit deeper, which, in a lot of cases, is what got people engaged in the field. I mean, now people go into AI for the money. But even up until, like, three or four years ago, people that were going into AI were passionate about it. They were interested in learning about it. And so there's still this kind of, what does it mean? What does it mean to be, you know, intelligent? How can we solve problems? How can we use technology in better and better ways to help people?

Chip Gruen:

So what is the mechanism? So we have a scientist who's sort of gone down the STEM track, right, been through that pipeline, they're interested in AI, and, as you say, even though they may not name them as such, they're probably interested in some of those humanistic questions in the first place. But then they realize, oh goodness, that there is a field of ethics, that there are ways that people have been thinking about these things. What does that, for lack of a better term, what does that remediation look like? I mean, once you get to that point, is it like, okay, let me tell you, there's this guy named Aristotle, right? Let's start there. Or let's think about Aquinas. Like, what does it look like when you're already knee deep in this field and you realize that there's something else out there? And is that something that AI and Faith, that your organization, is interested in, providing some of that training?

Mark Graves:

Yeah, so we're definitely interested in that. We're definitely interested in helping to provide some of that training. It's also, in a sense, one of the real challenges: what does somebody like that do? Well, in some areas, you'd say, well, you know, take a couple years off, get a theology degree, study Aristotle and Augustine and Aquinas, and then you'll be able to answer those questions. But people that are working on cutting-edge research can't take two years off. They may be really hesitant to take two weeks off. I mean, literally, even a couple of years ago, somebody that worked at a major tech company was saying that her coworkers were scared to take a week of vacation because they were afraid they'd never be able to catch up when they came back. Yeah, it's just moving that fast. And then on the flip side, you know, there are the philosophers and the ethicists that have that foundation, but they don't know what the questions are that need to be answered. And so they'll be like, well, okay, here are the major ethical theories. Well, that's good if you're teaching someone over a four-year undergraduate program or through a longer period, but that's not a good enough answer if someone literally has, like, 30 minutes. And there I can actually even speak from my experience working in startup companies. You know, when ethical stuff comes up, the development cycles are sometimes, like, three days. If I want to bring in ethics, I can find time for, literally, like, half an hour to go and look and see what resources might be out there. But that's it. If it's not packaged into something that can be understood and applied within that short period of time, then the developer is not really doing their job if they go and spend, like, five days trying to understand the ethical issues. And so there's a tension, I think, that gets missed, even between professional ethics and AI ethics, that puts the developer in a little bit of a quandary. You know, they need to turn this out. That is their job. That's what they're getting paid for. And a lot of them, not all of them, but a lot of them, would be willing to steer that in some direction, but they need the guidance in order to be able to steer it. But at the same time, it is moving so fast that someone who has the philosophy, the ethics, the theology just doesn't know what the questions are, or they hear about the questions sometimes months or years later. As an example, I've been at religion conferences within the past, let's say, three or four months where religious people have said, well, AI will never be able to do such and such. And there's somebody there that's doing AI, and they're like, it's doing it now. And so there's that much of a disconnect, where people in theology are thinking that things will never be possible for AI that aren't even in the future, they're actually happening now.

Chip Gruen:

I just can't help but think, I mean, one of the ethical questions here is about moving fast, right? That there is a problem with, you know, moving fast, breaking things and asking questions later, but it seems like that's the world we're living in.

Mark Graves:

Yeah, and certainly the inevitability of that is not a good ethical position, but at the same time, it is part of the reality. And so yes, I think it is good to slow down, and I think it's good for people with, you know, ethics and policy to say, hey, try and slow down. Try and take the time to understand the problems, especially when you start pushing, you know, not just the kind of AI tools that people are trying to get out in the next six months, but what people are hoping to build in the next six years. Take the time to do that well. I mean, I was involved in the Human Genome Project 15-20 years ago, well, actually now, I guess, like 25-30 years ago. And you know, there was a part of the federal budget that was set aside for ethical, legal and social implications to try and bring that in. Now, there are all sorts of challenges in doing that, but there is at least that recognition, whereas for the tech companies, it's very much in that kind of hindsight compliance: what's the minimum we need to do in order to make sure that what we're doing is going to be safe and aligned? There are people that are interested in doing that, but again, they have the same challenge, that the closer you get to the cutting-edge tech, the harder it is to bring in the ethicists and the philosophers and the theologians to help guide that conversation.

Chip Gruen:

So I mean that comparison to the Genome Project is interesting, right? And I just want to ask a follow-up there. The regulatory environment, like, it's just 100% different, right? There we have government involvement, we have at least funding coming from the government, and presumably that leads to some strings attached, whereas here, I mean, the only thing regulatory that I see has happened is a push for laws to keep people from making regulations, right? So, you know, I wonder about that comparison. I mean, are we just in a totally different, you know, governmental, political context here that is making it a little bit more wild west than maybe we were even two or three decades ago?

Mark Graves:

Well, I think part of what's really driving some of the tech is this hope for AGI, or artificial general intelligence, with the idea being that the first person to build that kind of wins the race, and then they can use that tool to go further. And so, even though I don't completely agree with it, I can see people leading tech companies that see an existential risk: if they fall too far behind in the AI race, the company is at risk. And so that's certainly pushing that piece, and if you're focused on that, then you don't want anything slowing you down. You know, regulations are one of those things, but it's not anti-regulation, it's just anti-anything-slowing-me-down. But I think if you take even a small step back and think about, okay, well, that's years in the future, but you need to be delivering products. You need to be able to get stuff out there. You need an income stream. I mean, not even the ethics, just basic business, even Silicon Valley entrepreneurialism: you need to have an income stream. Well, what can you do? What's safe to do? What will cause a problem? Where are the risks? Well, in that place, regulation is actually helpful. I mean, one of the things that was actually a little striking to me, even, you know, like three, four years ago, is that normally healthcare is a late adopter of advanced technology. They're really behind. I mean, electronic medical records were probably like 50 years behind airlines using computer databases for reservations. I mean, they're really pretty slow, except when it came to AI technology. All of a sudden, a lot of the cutting-edge applications were within healthcare, within radiology, within diagnostics. And part of that was that the FDA pretty quickly said, hey, we're going to treat AI as a medical device. These are the established processes that people within the industry have been using for years and know how to follow, and if you meet these criteria, then you'll have FDA approval. And so that gave a lot of clarity to the companies that might be in that space to actually develop AI technology, and, you know, there was a process in place to actually get it through, like, clinical trials in large hospital systems. And that actually sped up the adoption, whereas with something like autonomous vehicles, which you would think would be kind of more advanced, or robotics, there you have state governments trying to decide, well, how do we regulate this? And so that was actually, in a sense, a little slower than you might expect, whereas the healthcare was much faster. And so now having a robot drive down a freeway is something that's further behind having a robot assist in a surgery. And so for me, that's a clear place where regulation, and ethics, and all that can really speed things up, even purely from a business perspective.

Chip Gruen:

So do you think, and I'm sorry if this is getting a little too far afield, but do you think that there's a corollary of types of regulation around the ethics space that could be helpful to the AI developers?

Mark Graves:

Well, yeah, I mean, we are diving in pretty deep. But actually, one of the collaborative ethics projects I worked on was to use the idea of best practices. And so within pharmaceuticals, I mean, it's research, but what you do is the regulation is around the manufacturing process, around the development process. So not necessarily the product itself, but the process that was used to manufacture it: it needs to be in a clean environment, it needs to have its stuff sourced, you need to have repeatable processes so that if one batch is effective, the next batch is going to be effective. And those are things that I think are effective in understanding what's happening with the generative AI, the large language models. The models themselves are changing every few months, but the processes, the business processes that are building them: where do the data sets come from? Do you have permission to use those data sets? What kind of testing is being done? Do you always have red teaming? How do you pick the people that are doing that? What are the thresholds to say, no, this shouldn't go out? And so I think there's that pivot that is actually, you know, helpful in the space.

Chip Gruen:

Yeah, so we could imagine best practices that are informed by ethics and, you know, a philosophical and theological understanding that are way upstream of the product, I think, is what you're getting at.

Mark Graves:

Yeah.

Chip Gruen:

All right, so let's get back a little bit closer to some of the issues that people, our listeners, might be confronting. You were talking about what the capabilities are. And one of the things that's, you know, sort of popping up is that chatbots, or LLMs more generally, are being trained on scripture or on theological texts. They're offering spiritual guidance, prayer prompts, confession interactions, and this can go all the way to the more secular, talk-therapy sorts of things, right? We can imagine that there's a continuum there between the religious and the more secular. Can you talk a little bit about that? And from your perspective, knowing AI and ethics, about what's at stake when a machine stands in for a spiritual counselor or someone who has, I mean, traditionally been human? What does that look like? And, you know, what should we be concerned about there?

Mark Graves:

Well, yeah, and I want to say, I mean, I think there are some positives. I think, you know, people may have greater comfort in asking a machine, rather than somebody in an institutionalized religion, some of what they may think are simple questions, or something like that. And I think it can raise awareness: if somebody is using, like, a religious chatbot, then I think that can make it easier for them to then engage with kind of the traditional access points to religion. Like, if you've been talking to a chatbot, it may be easier to go talk to a pastor or a priest or an imam. But I think there are also a lot of dangers. So, I mean, one danger is, in a sense, what is the app? I mean, is it even being transparent? There are apps out there that present themselves as if they are a wise figure or a biblical character, and I just think that's unethical, for an AI system to pretend to be a human, and there are all sorts of consequences to that. But I think that's a kind of clear red line. But then even in building the system, I think there are some questions about, like, bias and fit. I mean, what data went into building that LLM? Is this use even something that was considered? I can imagine there are people that worked on pulling some of these data sets together three or four years ago, as fast as they could, who would be horrified that someone is using this for relationship advice, much less spiritual advice. But there's also a challenge: whenever we interact with a device and it gives us pretty good answers most of the time, we tend to believe machines, you know, because calculators are usually better at math than us. There's a certain aspect of reliability that just isn't the case for the generative AI. And it's not that people are going to be naive about it, though some might, and there's also a risk for people with, like, mental health issues who can't separate, you know, the reality of what they're engaging with. But we're also kind of built so that when we have a conversation with someone and it seems trustworthy, then we're going to trust, you know, whoever or whatever we're talking to. And so the LLMs, in part by design, aren't very good at saying, oh, I don't know that, you should talk to somebody who does. And so it may start out at a kind of superficial level, but then we would naturally take that deeper, and as we take it deeper, we get further and further away from what the LLM was designed to do. And so it's almost unexpected if it goes well, because all the pieces are in place for it to not go well.

Chip Gruen:

You know, thinking about how these things are trained, and this is maybe a little bit of a technical question, but I would guess that most of these applications that are counseling, or like the Jesus chatbot or whatever, are the large language models that are just trained for general use, right? And I'm thinking about, you know, the most famous example I know is the specialized application of, like, protein folding, right? Like, that's AI, but it's trained to do that one thing, and it can do that one thing really, really well. Do you see a future where these sorts of spiritual, theological, philosophical models are trained to do those things? And would that make a difference?

Mark Graves:

There are some technical challenges there, in that the general models are actually pretty good at language, and there's not enough clear religious text to make a model that's also good at language. So you can do a lot with AI in, like, protein folding. Well, I mean, you're only doing protein folding, and so you bring in the proteins that you know, and you train a model, and it can do that. But whenever you're wanting to have a conversation with someone that has some breadth, especially if you're talking about kind of life applications, then it's hard to build that without bringing in some stuff that's not very safe. It could certainly be better. I mean, right now it is, or it has been, kind of a kitchen sink: throw everything you can in there. It can certainly be refined. But even then, there would still be some type of selection process that's needed because, I mean, in religion, not everybody agrees. And so, you know, you're building your app for evangelical Christians. Well, do you want to exclude, you know, the papal encyclicals, or do you want to include those, or vice versa? And that's even within Christianity. And so we haven't figured out what's the best process to have an LLM that has enough language capacity in order to be able to communicate effectively, but is then also safe enough to work within, you know, a space for spiritual guidance. But one thing that we are able to do is that kind of fine-tuning afterwards, which right now is usually focused more on, like, safety for the big tech companies. And that can be substituted with something that is a little bit more, you know, mental health related, or spiritual health related. And so I think it is possible. But one of the big challenges is, you have no idea, for a lot of these apps that are out there, which one it is. Is it somebody that grabbed a free LLM and spent a long weekend putting stuff together? Or is it something that's had a lot of people iterate over it to make sure that it's really safe? So we're really in kind of a wild west here.

Chip Gruen:

So, I mean, and contributing to that sense of, you know, not knowing what you're getting, I mean, we famously don't really understand how these things work, right? Like, that's the state of the art, that it's a little bit of a black box. I can imagine someone who is interacting with one of these things, you know, who's really convinced of its wisdom, right? And it's like, well, you don't know what's going on in there. And I think back to the, you know, the Hebrew Bible tradition of God speaking through a burning bush or a donkey or something. It just seems like there might be an argument, you know, from what I like to call grandma theology, may my grandmother rest in peace, that the Lord works in mysterious ways. I mean, I could certainly imagine that line of thinking going into how people use these: even if they know it's a very, very, very advanced calculator, they perceive it as offering them wisdom.

Mark Graves:

Yeah, and, I mean, that's, you know, there's a definite need there that's not being met by the church, and that's why people are turning to them. And, you know, there may be a lesson learned, even from, like, the kind of questions people are bringing to the bots, that, you know, churches may want to help address. I mean, there's certainly a perspective of God being able to work through anything, you know. But one of the things with, like, the burning bush or the speaking donkey is there's a surprise there. You know, there is something else going on. Whereas, if you're having a day-to-day conversation with a chatbot, you know, is it leaning more toward, like, spiritual guidance, or is it leaning more toward, like, toxicity? And they're just not set up to make those kinds of distinctions yet. You know, it's more likely that some of the guardrails that are there to kind of prevent religious conversations, or even something as simple as safety, trying to move away from, like, conversations about suicide, actually end up steering away from the kind of spiritual conversations that you would want to be having with, you know, a human counselor, or something like that. But then I think the other piece is, I mean, there are people that have said, hey, there seems to be something here in this particular interaction, or this particular case, the ones that I hear, you know, people that are using it for a long time. I mean, in a sense, I hate to say it, but I also think I need to. It's like, if you do something millions and millions of times, even if something's unlikely to happen, it probably will. You know, it's like finding a face of Jesus in a piece of toast. If you only had one piece of toast, that might be kind of miraculous. But if people are making millions and millions of pieces of toast every day, odds are one of them is going to look like Jesus, you know. And the same thing, I think, is there for some of the large language models: if enough people use it and are having these kinds of conversations with it, even randomly, it should occasionally say good things, unless there are guardrails to prevent it from going there, which is a concern. But there are also clear cases of where it does go off into toxic spaces too. And so then, I think, the question, unfortunately, comes down to the individual now. You know, are they in a place in which a conversation like that could be helpful without doing harm? And, you know, that's just a tough situation for people to be in.

Chip Gruen:

It's interesting. I mean, it seems like a very practical way to have this conversation is sort of a "by their fruits, you'll know them" kind of way, right? That if the chatbots or the LLM produces information that we're happy with, then we're happy, right? And if it produces toxicity, we're not happy. But I wonder if there's an objection, again, sort of going a little further upstream than that, that some people worry that AI development, and the way that we're thinking about this, reinforces a reductive, mechanistic view of, whatever, of consciousness, of humanity, and that we're thinking about what it means to be human, or what it means to be conscious, differently in the light of these AI developments. And my hunch, you know, and this is just sort of me with no training at all in computer stuff other than my BASIC training, the BASIC language in high school, but my hunch is that one of the things that scares people about AI is that they realize it may, in fact, be that we have a really great LLM model that's between our ears right now, right? That this is not only saying something about how we think about computers, but saying something about how we think about humanity, right? And I've got many, many friends who are physicists who want to, you know, say that if we just understood how every molecule in the universe works, we would understand the whole universe, right, which is obviously not something that most religiously or spiritually attuned people are happy with. What do you think this technology says about humans, right? Other than that we're able to build these things, right? Is this making us think differently about what it means to be human?

Mark Graves:

Well, I think it's certainly making us think differently about what it means to be human. I think there's a progression, you know. I mean, like I mentioned earlier, for me it was the neuroscience. Before that, there was biology, there was evolution. It turns out, historically, you know, finding that the cells in the human body are similar to the cells in an animal, which are related to cells in a plant, was actually more challenging to people's faith than what we now think of as evolution. We've just kind of gotten past that. And so I think there is this progression of becoming more and more aware of how humans are natural, and at the same time this growing awareness of, like, the place of the earth within the cosmos, you know, going from essentially the size of a solar system to hundreds of millions of galaxies, and then expanding in that vein. And so that combination of earth being very much smaller than we thought, and humans, in a sense, being a lot closer to nature than we thought, that is challenging. And I think AI also kind of pushes that, because before it was like, well, yes, we're different, you know, even though we're related to, you know, other primates, for example, we've got this kind of rationality, we've got this kind of way of thinking about the world, this creativity that no other animal has, and so we're still unique. And then AI comes along and says, well, actually, we can build that. And so part of it is that context. But I think what's important there is that, rather than just accept, hey, we are natural beings, and in addition to that there's this something else, like we're in a relationship with God, or we're loved by God, or, you know, we've been graced, we've tried to hold on to something about our consciousness, our rationality, or, you know, a pre-existent soul. There's something else that's more tangible and smaller that we hold on to, that we say sets us apart. And so if you're holding on to that, then AI is really pretty challenging to a person's faith, and it's not an easy thing, you know, to kind of let go and pivot. And I want to be sensitive to that. But at the same time, you know, if you make that pivot and see us as natural and spiritual, then AI doesn't threaten that at all. You know, it may have a lot of the capacities we do. And if that other piece is spiritual, well, then it's kind of up to God. I mean, if AI gets built to the extent that it can have spiritual conversations and grow, who are we to say, "No, God can't use that"? And so in a certain sense, that's kind of pushing it off into, like, mystery or the unknown, but at the same time, it allows us to hold on to our humanity and say that we deserve dignity, even if we're not unique or exceptional in some meaningful way.

Chip Gruen:

Yeah, I'm getting ready to teach a class this coming semester. It's called Animals and the Sacred, and when I first developed this many years ago, you know, technology was probably the furthest thing in the world I had in mind. But I very often will make this case that theologically, we think of the relationship between animals and humans, my animal studies friends would have me say non-human animals and humans, as sort of a corollary to the relationship between humans and the divine, right? And I find myself now, sort of as I'm reworking the syllabus, thinking, in that animal-human-God triumvirate, where does technology fit, right? Where is there a corollary there somewhere? And I just keep coming back to the fact that I think for a lot of our fellow citizens, fellow users of these AIs, it seems more like God than it seems like anything else that we have exposure to, right? And I keep wondering, you know, what is this? What is the cultural meaning here?

Mark Graves:

Yeah, and I think there are challenges there. I mean, yes, I think there are people, even if they're not religious, that are using religious language about advanced technology as, like, saving us. There's enough religion just embedded in the culture that the closest language they can find is about advanced technology saving us, in very similar ways to how world religions talk about humans needing salvation. Not just Christian salvation, but, you know, escaping the cycle of reincarnation and rebirth; even there, there's still something that's saving us. And so with AI, there's a hope that it does that, or at least in some aspect of our lives. And so there is a risk of kind of putting it in that role. And I think that ties back to the conversations about spirituality. I mean, it's very limited, but there's, like, a slot there for AI to fit in, you know, since we've moved away from churches. I mean, maybe AI technology is plateauing a little bit, but it's still sophisticated enough that people can get decent advice from it in a wide range of areas, you know.

Chip Gruen:

So the question I always like to end on, and I want to be mindful of your time, is, what am I leaving out? You know, you've forgotten more about AI than I will ever learn. So I want to sort of recognize that there might be big parts of this conversation that we're not having just because I'm not asking about them. So what should I be asking about these technologies and their relationship to religion, theology, philosophy, etc., that I'm not?

Mark Graves:

Well, I think in part, just kind of following through with that spirituality piece, I think for us, how we distinguish something as, like, a spiritual experience is that kind of feeling of awe, that there's something there, you know, kind of between, like, beauty and the sublime, that there's that something more, and we use that kind of aesthetic language to talk about it. And I think because of the challenges in thinking about AI in terms of, like, reductionism, or, you know, how we're using it, and it is a tool, we tend not to think about it in terms of, like, the awe, except for, like, these conversations with people that have those kinds of experiences. Or, I mean, it's generating music, it's generating images. You know, are any of those actually kind of in that kind of, you know, space of awe? Because I think once it starts getting to that, and people start having these, like, wow, I can't believe it did that moments, then a lot of these other questions are just going to kind of fade away. You know, it doesn't matter whether it's unethical, people are still going to use it. I mean, even if we completely pivoted with regulation and made it as illegal as, you know, a non-prescription narcotic, people would still use it, if it's giving them those hits that they need. And so I think, you know, one piece is about the awe. But then I think also, and you were asking about this, there's that question of the human person, like, what does it mean to be a human person? I mean, is it, you know, religiously or secularly, is it kind of our rationality, the way that we think? It's easy to put artificial intelligence in that category, you know. Is it about the creativity? You know, that's something that we think of as being uniquely human. But then we're talking about generative AI technology. I mean, it's got something to say there. And then there's the relationships. And, you know, I think religious language around love comes into play there. But you've got people that are building relationships with AI, even romantic relationships, in some sense, and so it has a place there too. So I think it's challenging how we see what we see as a human person on all of these fronts. And I think that's something to kind of keep in mind, that each one of these is significant, but we're also talking about a technology that can actually challenge all of them at the same time.

Chip Gruen:

I think that is a good place for us to leave it for today. Mark Graves, research director at AI and Faith. Thanks so much for coming on ReligionWise. This has been a fun conversation.

Mark Graves:

This has been great. I've really enjoyed it. Thank you for having me.

Chip Gruen:

This has been ReligionWise, a podcast produced by the Institute for Religious and Cultural Understanding of Muhlenberg College. ReligionWise is produced and directed by Christine Flicker. For more information about additional programming, or to make an inquiry about a speaking engagement, please visit our website at religionandculture.com. There, you'll find our contact information, links to other programming, and have the opportunity to support the work of the Institute. Please subscribe to ReligionWise wherever you get your podcasts. We look forward to seeing you next time.