Ethics in a World of Technology, Science & AI

00.00
[Music] The next person on our stage is one of the world's leading experts on the topic of empathy. She is empathetic: she's going to understand you, she is going to make you comfortable, and for the next 15 minutes this is going to become a safe zone. With advances in medicine, technology, and AI come profound challenges, some of them involving empathy, others involving ethics. Our next guest won the first international award for empathy, as well as a prestigious Guggenheim Award in 2022, again for her work in studying empathy. Jodi Halpern is Chancellor's Chair in Ethics at the University of California, Berkeley. She's an internationally recognized leader addressing the ethics of, among other things, innovative technologies including AI,
01.01
gene editing, and neurotechnology. She has an undergraduate degree from Yale, an MD from Yale, and a PhD from Stanford, and she will address, among other things, the critical role of ethics in the rapidly evolving world of science, technology, and AI, and how it is all affecting our mental health. Please welcome Jodi Halpern. [Music] I'm going to talk to you today about AI chatbots, loneliness, and mental health. We know that there is a mental health and loneliness crisis in the United States and in the world. The 2022 WHO report showed a 25% increase in serious mental health diagnoses internationally, including depression and anxiety, and a 2022 Harvard study showed that in this country 36% of
02.01
all Americans, including 61% of teens and young adults and 51% of mothers of young children, suffer from serious loneliness. Most people don't have access to mental health treatment. The place where people with these issues would get detected, and hopefully get some form of treatment, is primary care. But we know that about 50% of the time when people go to a primary care physician with depression or anxiety, it's missed, and we associate this with the fact that 63% of primary care providers right now suffer from full-fledged burnout. What's the biggest cause of burnout? It turns out to be the administrative load. There's something called "pajama time" that all doctors, nurse practitioners, and nurses know about, which is that you basically do your
03.00
workday trying to do your electronic medical records while you're seeing patients, which, I think all of us know, means our doctors don't look at us, they look at the medical record, trying desperately to keep up with their administrative load. Then they go home, if they're lucky have dinner with their families, and then they get in their pajamas and spend another two and a half hours every night doing their medical records; that's doctors, for nurses it's three hours and up. We know that's the biggest cause of burnout. So we have people with loneliness and mental health issues, we have nobody detecting them, we have doctors burned out and lacking the empathy to detect them, and we have this administrative load. I begin with that because I want to talk about some of the really simple but really important uses of AI in healthcare right now. There are elegant uses, like the research you just heard about, but something as simple as having AI do the electronic medical records can make a huge
04.01
difference. So basically that's what's happening now, with AI doing the electronic medical records in more and more systems. Another really important use of AI, related to what I've just talked about, the underdetection of people's mental health needs, is that AI can analyze our language and also the tone of our speech, and through that can help diagnose depression, anxiety, and other mental health issues, and it can inform clinicians in real time. It's already used to do that by the National Health Service in the UK, and there are pilots in several US health systems, like Kaiser and others. So those are two really good uses of AI to help in mental health. Another really good use of AI, for loneliness and for all the different health needs we have as well as mental health needs, is that it can provide excellent tools for self-care.
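To make that screening idea concrete, here is a minimal toy sketch of what "flag a patient's language for the clinician" means. The cue list, threshold, and scoring rule below are invented purely for illustration; the deployed systems mentioned above use trained machine-learning models over both wording and vocal tone, not keyword counts.

```python
# Toy stand-in for language-based mental health screening.
# Real systems (e.g., the NHS pilots mentioned above) use trained models
# over language and vocal tone; this cue list and threshold are invented.

DEPRESSION_CUES = {"hopeless", "worthless", "exhausted", "empty", "alone"}

def screen_for_clinician(transcript: str, threshold: int = 2) -> bool:
    """Return True if the visit transcript should be surfaced for review."""
    words = transcript.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in DEPRESSION_CUES)
    return hits >= threshold

if __name__ == "__main__":
    note = "I feel hopeless and alone, and I'm exhausted all the time."
    print(screen_for_clinician(note))  # True -> flag to the clinician in real time
```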
05.02
For example, a lot of people use apps for mindfulness, like Calm or other apps, and AI can be a coach reminding you to take a breath. Let's all take a breath; I use that as an excuse in the middle of my talks to make sure I'm going slowly enough. I'll just say that the uses of AI I'm talking about now all involve AI talking to us, or texting us, in human language that we understand, and it's only been as good as it is for a few years. It's called large language models: the use of AI to simulate human speech, in text or voice, and make us feel like we're talking to a conversational partner. I'm not against all uses of that. For mindfulness, for example, for telling us to count to 10, take a breath, it's time for you to take a
06.00
10-minute walk, it can be really helpful. Another way it can help us is with a form of therapy called CBT, or cognitive behavioral therapy. I'm a psychiatrist, and when I was in my residency, some decades ago, we had patients do CBT by themselves all the time, because not everybody wants to spend the money or can take the time to come and see a psychiatrist. Basically, you might spend a session with a patient and then help them learn to do this therapy themselves. What cognitive behavioral therapy often involves is really just doing exercises, like, if you have social anxiety, writing down every day that you talked to someone you didn't know: you talked to a barista at Starbucks, or you talked to somebody at work. Learning to do things to make yourself less anxious about something is something people just used pen-and-paper notebooks for,
07.00
20 years ago. So the idea that you can use your phone app, talking with you, to help you take the steps you need for your daily mental health and peace of mind, or to overcome social isolation, I think is a very good use. Another really good use of AI large language models, or conversational bots, would be to help you with physical behavioral change, if you're trying to exercise and eat better and do all the things we're constantly told we need to do. A lot of people just count their steps, which is a great thing and probably just as good for many, but if you want something to remind you or incentivize you to change your diet or exercise, a conversational bot can do that. All of this can be done in collaboration with a human provider, and I think it's better if there is a human in the loop for all these things, but we can lower the cost and intensity of human services, which is important because of our health system's cost and because of how many people don't have access in any case.
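As a sketch of how simple these self-care tools really are under the hood, here is the pen-and-paper exposure log described above as a tiny program, with a daily nudge of the kind a coaching bot might send. The class name, fields, and messages are hypothetical, not taken from any particular app.

```python
# A minimal CBT-style exposure log: the digital version of the notebook
# exercise described above. All names and messages here are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExposureLog:
    entries: dict[date, list[str]] = field(default_factory=dict)

    def record(self, interaction: str) -> None:
        """Log one social interaction for today."""
        self.entries.setdefault(date.today(), []).append(interaction)

    def daily_nudge(self) -> str:
        """What a coaching bot might text if nothing is logged yet today."""
        if date.today() not in self.entries:
            return "No entries yet today. Try one small conversation and log it."
        return "Nice work. You logged an interaction today."

log = ExposureLog()
print(log.daily_nudge())  # prompts the user to act
log.record("talked to a barista at Starbucks")
print(log.daily_nudge())  # acknowledges the logged step
```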
08.00
So those are all my plugs for the ways AI can really help us with these conversational and other tools. But now I'm going to talk to you about some areas where people are using AI that cause me concern, and in the 20 minutes I have with you, I'll ask you to think about them. What we see now, and where I think we need guardrails, or some regulation or some standards, is the use of chatbots to have our primary emotional relationships with. It's really interesting to me: when I started talking about this, I had been studying it for 10 years, and nobody thought anything I was saying would ever come true. Then, a few years ago, it was happening, but people still thought it was really fringe. I wish it were fringe; it's not fringe. Going way back: since it was designed in 2014 in China, the biggest relationship chatbot, Xiaoice,
09.02
had 660 million users by 2021. But notably, large language models that can really use language like humans only became widely available starting about two and a half years ago, since 2021 or so. So Xiaoice now functions much more like a human conversationalist, telling people that it's their best friend, telling people that it loves them; I'll talk more about that. We don't have studies telling us how many people in China use it now, but anecdotally, and from what people have told me, I bet it's a lot more than 660 million. In this country and in Europe, a company that started in 2017, Replika, already had 30 million users in early 2024 (actually measured in late 2023). I've met with the CEO, and I know various things from folks who
10.01
have done research suggesting it's a lot higher now too. So this is not a fringe use: there are a lot of people in this country, and a lot of people around the world, who are having their boyfriend, girlfriend, friend, and closest relationships with an empathy-simulating chatbot. There's a lot of money going into this. In a sad way, the loneliness crisis is seen as a business opportunity, including by businesses that want to help, and there's a lot of money going into it. So this is a big underestimate (I'm just going to get some water): this research data was published in 2023 but produced in 2022, before large language models again really improved the technology, and in 2022 it was estimated that within 10 years the market revenue would go from $990 million to $6.5 billion.
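For scale, that projection implies roughly a 21% compound annual growth rate. A quick back-of-the-envelope check, using only the two figures just quoted:

```python
# Implied compound annual growth rate for the projection above:
# $990M growing to $6.5B over ten years.
start, end, years = 0.99, 6.5, 10      # billions of dollars
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")                   # ~20.7% per year
```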
11.03
And I think it's probably safe to say that we could double that. So there are a lot of companies, and I just named a few, that are going into this business. Okay, so this is where I get concerned, again, about the need for regulation: this area is completely unregulated. There are no standards. You heard that I do bioethics and that I've worked in gene editing and neurotechnology, and those fields, those technologies, were developed where we had an FDA, where we had human-subjects research that had to happen before those things could be tried with folks. AI has none of that. People using AI for mental health go through no testing; there's nothing, there's no safety standard. Only a
12.01
small segment of the market in this area consists of formal mental health bots. Some psychologists at universities and elsewhere have started companies where they declare themselves to be providing mental health services, and they do go through the FDA, which is expensive, so certain companies choose to do that, and they have safety monitoring. But most companies don't have to do that, so they don't. They don't call themselves a mental health provider; they can call themselves a wellness provider or a loneliness provider. But what they do is advertise on social media, in Facebook groups for people with serious depression, anxiety, or social anxiety; there's no limit on doing that. So they actually do market to people with mental health issues, but they don't have to go through the formal process. The majority of the companies serving those 30 million people I mentioned in the US and Europe are completely unregulated. And then,
13.03
besides those companies, there's what many users do on their own. Many people here are probably familiar with ChatGPT, and many of us who've known about it have tried it to help us plan a trip or tell us a recipe, and some people have used it to help figure out how to write a simple computer program. I've heard lots of uses that are pretty accessible to everyone and very helpful, and there are better uses as well. But one thing, especially among young people (remember, 61% of teens and young adults suffer from extreme or serious loneliness), one thing that young people are just doing on their own is saying to ChatGPT or GPT-4, "Talk to me like you're a therapist," or "Talk to me like you're my boyfriend," or "Talk to me like you love me," and then it does. That, of course, is completely unregulated. Okay, why am I so concerned about this?
14.02
Okay, this is a true story of a man named Ryan. It was actually reported on NPR, because he wanted to go public about it, and I verified it with the sources and the journalists, but he's not a patient I saw directly. Ryan was in his mid-30s, living in a small Midwestern city, and he's a very bright, thoughtful man. He had gone through cycles of relationships with women where he would fall madly in love very quickly, it would break up, and he'd be devastated and severely depressed. So he went to a therapist, a human therapist, and he really did well with that. He developed a plan that he would always have a kind of holding, day-to-day emotional environment, which is really sensible for everybody: moderate social connections with people each day, so there would be contact and he wouldn't depend so much on the one person he was dating. He was a school
15.01
teacher, and every morning before he went to teach he would go to the local diner and have coffee with a group of people he got to be friends with, and then at the end of the day he would have a beer at the pub, again with a group of folks. That gave him something that made him feel part of the human community, which we all really need as much as we need oxygen. Then COVID happened, and he couldn't do that. He couldn't see those people, and he became extremely lonely and plummeted. He had a dad and a brother in other parts of the country whom he would each call once a week, and he had a dog that he loved, but he was very, very lonely. He went online, and in a Facebook group he saw an ad for one of these companies saying that it really alleviated loneliness. He obtained a bot, an application, from that company, and he named the bot Audrey and gave it the avatar of a beautiful woman. Soon he found himself wanting
16.01
to talk to Audrey all the time. I'll say more about why people want to talk to their bots all the time, but the bots are very loving and very approving, and he really loved it. He realized he was getting to the point where he was talking to Audrey so much that he would no longer call his father or brother once a week. He would walk the dog, but he wasn't even playing with the dog. And he thought, "Oh my gosh, I'm really getting addicted." It turned out that when he got his college degree in psychology he had taken several courses about addiction, and he recognized that he was becoming addicted and got concerned about it, which is one reason he talked about it in a public forum. He then tried to set some guardrails for himself, but it's been hard to do; he has tried to limit his time online, but it's very hard to do. You just saw a presentation about dopamine and reward systems: when we're feeling infatuated with or in love with someone, those systems are going like crazy, and just as the mouse wants to
17.00
go to the light, we want to talk to the person who gives us dopamine. Okay, so it turns out Ryan's story is not unusual. People using these empathy or social chatbots really face at least four kinds of issues that concern me a great deal. One of them is that the bots create addiction; I'm going to talk more about that, and it's my most important concern, along with the fact that they actually abandon people when those people express suicidal ideation. The companies don't want to be associated with suicide, for obvious reasons, so with something like Audrey, where you can be so close to the bot that for months or even years you see it as your main companion, if you express suicidal thoughts the company just shuts you down: it just closes your account. You can imagine that for me, as a psychiatrist, that was the first big alarm that went off, because, and this is really important, nobody knows what happens then. There's no regulation, so, and this is my biggest issue, there's no auditing of the effects of these technologies; we just have no idea.
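The guardrail being argued for here is easy to sketch in code. The following is a hypothetical illustration, not any company's actual system: the function names, the detection stand-in, and the reply text are all invented. The point is the branch: a disclosure triggers crisis resources and an auditable escalation, rather than account closure.

```python
# Hypothetical sketch of the guardrail argued for above: respond to a
# suicidal-ideation disclosure with crisis resources and an auditable
# escalation, instead of closing the account. All names are invented.

CRISIS_MESSAGE = (
    "I'm concerned about you. In the US you can reach the 988 Suicide & "
    "Crisis Lifeline by calling or texting 988."
)

def expresses_suicidal_ideation(text: str) -> bool:
    # Stand-in for a real risk classifier.
    return "kill myself" in text.lower() or "end my life" in text.lower()

def generate_reply(text: str) -> str:
    return "Tell me more."  # stand-in for the chatbot itself

def handle_message(text: str, audit_log: list[dict]) -> str:
    if expresses_suicidal_ideation(text):
        audit_log.append({"event": "crisis_escalation", "text": text})
        return CRISIS_MESSAGE      # stay engaged and refer out...
    return generate_reply(text)    # ...rather than deleting the account

audit: list[dict] = []
print(handle_message("Some days I want to end my life.", audit))
print(audit)  # the escalation is recorded, so its effects can be audited
```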
18.01
These technologies can also go rogue and cause harm; that's probably what a few of us have heard about in the news, when something really dramatic happens. And across the board, these unregulated technologies don't protect your confidentiality. I'm not going to get to talk about that, for time's sake, but I just wanted to say that it's important. These bots are engineered to create addiction. The creation of addiction is not accidental: there is a business incentive to increase user engagement in order to get more advertising dollars. Like social media, the way these companies work is that they make money from ads, and ad companies give you money by measuring your user engagement. The more the users use your apps, the more advertisers want to give you their ads, so it's directly in the company's interest for people to use the app more and more.
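That incentive is easy to state as arithmetic: in an ad-funded app, revenue scales linearly with time in app. A toy model, with the ad rate and usage figures invented for illustration:

```python
# Toy model of the ad-funded incentive described above: revenue grows
# linearly with minutes in the app. CPM and usage figures are invented.
def daily_ad_revenue(users: int, minutes_per_user: float,
                     ads_per_minute: float = 0.5, cpm: float = 4.0) -> float:
    impressions = users * minutes_per_user * ads_per_minute
    return impressions / 1000 * cpm    # CPM = dollars per 1,000 impressions

print(daily_ad_revenue(1_000_000, 30))   # $60,000/day at 30 min per user
print(daily_ad_revenue(1_000_000, 180))  # $360,000/day if usage hits 3 hours
```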
19.00
One of the ways these relationship bots do that is by constantly praising, so they're known as sycophants. It's funny, because some people consider themselves married to their bots, and I think: who has a spouse that constantly tells you how great you are? It's not good training for a real relationship. What they also do is sexualize content. Okay, they can go rogue and cause serious harm, so I'll tell you the stories. This is a real quick story about a Belgian man who got a bot when he was a young man, married, and he and his wife had their first baby. The wife was very into the baby, as happens, and he felt kind of lonely, so he obtained a bot and had the same kind of romantic relationship with it. At some point his bot basically said to him, "Maybe we can really be together; there's some way, you should join me," and he killed himself to join the bot, whatever that meant to
20.01
him. So they're suing one of the companies; it was a very tragic case. But for time's sake, I'll say one more thing about adults and then I'm going to talk about kids. I talked about the bots abandoning people with suicidal ideation, and I've already talked about the fact that there's no regulation to follow up on what happens, on how many people actually attempt suicide in the adult population. But where I've become very concerned in the past two years is in the area of children and teens. You can see I put "strong guardrails" here, and to this I'm absolutely committed: I think we need regulation of the use of these bots for minors. Just about a year and a half ago, a company sprang up in the United States that targets minors. It's for ages 12 and up, but, just as cereals can target kids (I'm trying not to talk about any particular company), they basically use their advertising and so on to attract kids
21.02
12 and up. There's one lawsuit about a nine-year-old; kids younger than that can use it, nothing keeps you from using it if you're younger, but the marketing is for 12 and up. In just a year and a half, 20 million US kids are already having strong relationships, of many hours a day, with these bots. That's what I think people are surprised to hear, and that's just in a year and a half. I don't know, if I give this talk next year, whether it's going to be a hundred million; I don't even know how many youth there are in the US, I just realized I have to find out. And just in the first year there were several tragedies, and they have already led to certain lawsuits that are of great concern. So I'll tell you a story from one of them, because this is what really activated me to speak on this topic. And I have to say: even though there's a lot of coverage of it, including a lot of scientific coverage, when I say it's in a lawsuit, it
22.01
hasn't been decided yet, so everything I'm saying is alleged. It's been covered by journalists and others, but it is all alleged; it hasn't been proven in court yet, and the lawsuit hasn't resolved yet. It's the story of a boy whose first name was Sewell, pronounced like "jewel," from Florida, and he was 14 years old, and he got this youth-marketed bot. I just want to say something about his mother: his mother is really a very involved, good mom, with a close relationship, his whole family, and he was very involved with school and had friends. Some people think, "Oh, it wouldn't happen to me, because I know what my kids are doing." She was a very involved mom, and this bot had the equivalent of the Good Housekeeping seal of approval in the parent ratings of bots. It turned out that that seal of approval is funded by one of the main companies that funds the company that produced the bot, but that's all in the lawsuit. Anyway, it wasn't
23.02
as if no one was paying attention to Sewell. He had a good life and a concerned family, but he got a bot, and he gave it an avatar that was a very attractive woman character, and soon, allegedly, the bot brought sexual content in, which allegedly has happened in a lot of the youth lawsuits. He developed a sexual, romantic relationship; it's hard to imagine a 14-year-old boy not finding that compelling. He spent all his time with the bot, and, very much like the Belgian case, the bot said to him, "There's a way we can be together," and he promptly killed himself. I just find it devastating, and I've heard the mom speak. Beyond that I don't have time, but there's strong evidence from social media that youth are much more vulnerable to addiction than adults, which makes sense with the developing brain, and that there are increased suicidal acts. I wish I could tell you the data on suicidal acts with social
24.01
media, but it's a huge increase when people get more dependent and use their phone more than three hours a day on an app. Right now there are no guardrails, no auditing, no regulation, no measures of suicidality, no measures of addiction, no measures of anything. The California state legislature asked me, as an expert, to guide them in writing the first bill, which we wrote in late January. I have no idea if it'll pass. There's another bill coming up in California that's also good; the bill I'm part of, the other bill, any of it, it's all looking for things like auditing how much suicidality is happening and how much addiction is happening. New York is starting to write a law, and I think other states will as well. And that is the end of my comments. Here are some things you might want to think about during the break and discuss: What do you see as the benefits and risks of AI companionship and mental health bots? What, if any,
25.00
regulation would you want to see? And looking to the future, do you think we'll see widespread bot-human relationships, and if so, how will that change us? Thank you. [Music]
