The Most Influential Voice on AI in America

00:00
[Music] It's my pleasure to welcome Daniel Huttenlocher. He's known as the most influential voice on AI in America. He's the dean of the MIT Schwarzman College of Computing, a leading academic institution focused on computing education and research in AI. Along with Eric Schmidt and Henry Kissinger, names you've probably heard, he is the co-author of a seminal book, a fascinating exploration of the intersection between technical artificial intelligence capabilities and social science. The name of the book is The Age of AI: And Our Human Future, and its central question is: how will artificial intelligence change the human experience? We've heard a lot about it during the morning, and we're going to
01:00
hear more about it now. I want him to persuade me that the caption on the morning is not "we're screwed," because that's sure the way I felt listening to an awful lot about AI this morning. Maybe not; let's hear on that topic and more from Daniel Huttenlocher. Daniel, welcome. [Music] Thanks so much, it's terrific to be here with you all. What an inspiring previous discussion, and I promise I won't try to sing, even without that example beforehand. So what am I really aiming to do today? As a teacher, I always like to say: what's the main thing I'm trying to get across? The main thing I'm trying to get across is that we all need better tools and understanding for the AI future ahead of us, and that the AI future ahead of us does not have to be, in the words of the
02:00
introduction, "we're all screwed," but it does need to be something that is participatory. So AI has been in our lives for almost a decade now, and we saw a bunch of this from earlier speakers today, but I just want to walk through it for a moment. It was actually 2015: how many of you were Google users at that point in time? Probably a lot of you used the search engine. The whole search engine changed in 2015, and nobody noticed: they replaced the underlying technology, classic hand-coded software, with machine learning and artificial intelligence. In 2020, DeepMind developed machine learning for Google Maps. How many of you use that or other navigation software to find your way around? I certainly do, all the time. That's AI telling you where to go. So it's telling you what to find, how to find things, and it's telling you where to go.
03:00
And then in 2022, of course, we saw what's called generative AI, with ChatGPT being launched, generating text, and DALL-E and a whole set of other programs generating images and video and so forth. But what is it that we're experiencing? I could stand up here and talk about the technology underlying it, but what's important to all of us is not so much the technology; it's what it is used for, and when it's being used for something, what that means to us and how we can help inform its development. So here's a first take at what AI is. I'm not going to talk about the specific technologies; you'll hear people talk about them a lot, and I don't actually think that's the relevant part, even though I'm a computer scientist who's worked on this technology for decades. So, any technology: what is it doing? It's making decisions, recommendations, or predictions, or creating content like text, images, audio, video, etc.,
04:02
of which we've seen some really great examples so far today. But with that you should stop and say, wait a minute, this guy doesn't quite know what he's talking about; we've had weather prediction since we were children. Was that an AI? So it's not any technology for doing this; it's any technology for doing this that is trained to perform a task, as opposed to most software and most systems, which require precise instructions developed by experts, what I like to call handcrafted code. This change is enabled by machine learning, and the outcome is that these systems now produce results with three properties that we certainly don't associate with computers historically. Historically, computers are rigid: they do exactly what you tell them, and often it's hard to tell them to do what you want. These systems produce results that are imprecise, adaptive, and
05:00
emergent, and this is what makes them often feel humanlike when we use them. So I think of AI as a kind of funhouse mirror: it distorts the world, you get some other picture, and you don't necessarily understand what's what in it. That altered view is not inherently good or bad. One thing I think is really important for you to take away from today is that people who say AI is inherently good, or AI is inherently bad, are confused, or they have some other agenda they're trying to convince you of. That's not actually what AI is about. The fact that it's an amplifier for human capabilities, whether good or bad, means we have to be very attentive when using AI, but we shouldn't be timid. Some of my technical peers were calling for pauses in the development of generative AI a year or so ago; that's timidity, and that's a mistake. But so is the unbridled audacity of "I should be able to use AI for whatever
06:00
I want, implications be damned." We need to be attentive. So what does that mean? Well, one thing, something everybody can understand but nobody's really quite telling you, is that a lot depends on how the AI system is trained, because it's not being hand-coded by somebody. You've probably read a lot in the news about how it can reinforce errors and bias, around hiring and loan decisions and other things. What you don't read quite as much about is that it can actually make decisions better and fairer. It depends how it's trained, because when AI is reinforcing errors and bias, what it's really doing is capturing our errors and our bias and amplifying them. It's capturing what we do as people and using that as the basis for what it's learning. But when we use it in the right way, we can use it to question when we
07:02
might be making mistakes, or when we might be being biased, and produce better results. So just saying "oh, it increases bias, let's ban it" is a huge mistake: we'd be missing the positives of how it can be used to help. One key issue is how it's used, and you see this if you've used ChatGPT or any of these related generative AI tools: the thing sounds like an expert. It's very fluent; it tells you what to do. That's actually not a good use. In fact, in studies that have been done in the medical domain, if AI gives the answer as an expert, it can actually increase bias and increase errors; but if it's an interlocutor, having a discussion with somebody about what things have been considered and so on, it can actually reduce errors and reduce bias. So it depends a lot on how it's being used, and that's not the technology per se; that's something all of us can look at and
08:01
look for when AI is being used. So, AI is not human, but it certainly can appear to be. This image is just a reminder: it's a rendering of Talos from Greek mythology. Talos was a being forged to be a humanlike being that could help defend Crete. So we've had these things in our mythology for a long time, maybe even wishfully. But human behavior is much more than intellect, and what AI encodes is some form of intellect. AI doesn't have intention; it doesn't have emotion, motivation, morality, or judgment, all of those things that make us human. But, and this is the catch, the thing we all have to attend to, AI can simulate those things in ways that can be misleading. I'm sure many of you saw that when ChatGPT first came out, a number of reporters tested it, and one of them got ChatGPT to
09:02
tell him to divorce his spouse, and other things like that. These things can simulate emotion and other qualities, but they don't have them, and to me at least, simulated morality is not morality, and I hope it's not to most of you. But it's very hard not to be misled; we've had millennia of stories, starting with Talos if not before, envisioning man-made humanlike beings. One of the things you hear about out there is alignment of AI with human values: how do we get AI to better reflect human values, things like our morality or our judgment? This is good, but it's a stopgap; it's not going to solve the underlying problem, because AI is really just a form of intellect. So one of the things we really need to tease apart, because AI is so different, is how it's affecting the very ways that we identify ourselves: what makes humans different from other beings in the world. And
10:00
this is a major focus of the book that I wrote with the late Henry Kissinger and with Eric Schmidt a few years ago: AI is a new kind of intellect. Not only does it lack these other human attributes, even its intellect is different. It's not human reason, and it's not faith in the divine; it's a third way of understanding the world. This raises fundamental questions about what it means to be us, and it's therefore going to make us uneasy even when it's doing something good. So how do we distinguish this uneasiness from things that really pose risks that need to be addressed? Those are the kinds of questions you all need to be thinking about: am I just feeling uneasy here, or is this thing really being used for something I should be scared of? Now, there is one area where I think we should, in some sense, be scared, but being scared doesn't mean being opposed. I should say this repeatedly, because when I ask these questions people sometimes think I'm against AI, when in fact
11:01
I'm very pro the use of AI. I think it's going to solve all kinds of things that we cannot solve alone as humans, things that matter to all of us, like human health, and so it's important. But it's also important to do this in ways that preserve things like human agency, human dignity, and human responsibility. AI can do either: it can enfranchise or disenfranchise; it can inform or misinform. These things have been true of mechanization for many years; if we go back to the mechanization of the industrial age, we had the same issues. But AI is not about the physical world, it's about the mental world; it's more confusing, it's more different. Still, it's very important to think about these analogies. Assembly-line work is pretty disenfranchising: you're there, you're a cog in a big system. But power tools, on the other hand (at least ask the men in the room),
12:00
are very enfranchising: you can go and do a lot of stuff, a lot of agency. So when we think about driver navigation, as I mentioned, that's AI telling you where to go next, how to avoid traffic, and so on. It can be really empowering or really disempowering. For the individual it can be empowering, sort of like a power tool. But the ride-sharing driver is supposed to follow the route on the map; they are a cog in a much bigger system, and that reduces their agency and their individual choice. So this brings us to, I think, one of the most important things when we think about AI, which is that a lot of AI right now is focused on substitution: substituting AI for humans. We saw this in some of the earlier presentations today, ways AI could be used to replace humans in various contexts. But if we go all the way back, really to the same era as Alan Turing, who, as was pointed out earlier this morning, is responsible for a lot of the ways
13:01
that we think about machine intelligence: J.C.R. Licklider had a paper whose title isn't a grabbing one for a non-technical person, and it's certainly dated to the '60s, "Man-Computer Symbiosis," but his thesis is very powerful: humans and intelligent machines together are better than either one alone. So it's not about AI that replicates human behavior and therefore can be used to replace humans; it's about places where humans alone can be augmented by AI, and AI alone can be augmented by humans. This is a different view that's very important. As we've had more and more focus on issues of AI safety, which many people are rightly concerned about, and on this question of alignment that I mentioned before, it really underscores that standalone AI is often not the answer. That's not to say standalone AI is never good, but many
14:00
times, when you want human judgment, when you want human intuition, when you want human morality and values, you really want to look for ways to combine AI and humans. So that naturally raises questions of governance: should we be trying to nudge this in some way through governance and regulation? That's the approach to governance that comes to mind most immediately, and here we've had a group of faculty at MIT looking at these kinds of questions and trying to advise people in the government. I think one of our biggest conclusions is that there should not be separate policies for AI, at least at this point in time. Rather, if human activity without AI is being regulated, then the use of AI should be similarly regulated. Using AI should not be some excuse, for example, to be able to give lousy medical advice. We regulate who can give medical advice; we shouldn't allow AI to give medical advice unless it's similarly regulated in some sense, and maybe that will mean it's collaborative
15:02
AI. So that's the point there: separate policies can create inconsistencies. This approach also helps us pursue policies that encourage deployment of human-AI collaboration rather than standalone AI. If we're going to regulate AI, we should think about doing it in the same sorts of ways that we regulate humans, in terms of behavior and outcomes. Now, I'm not a huge fan of regulation; I think regulation is a necessary evil, because the completely open capitalist system can sometimes go off the rails. So in particular, when we think about regulation as a form of governance, we really have to think about norms and responsibility, and then the laws and regulations have to be consistent with those. If you think back to Prohibition in the United States, those were laws completely inconsistent with societal norms, and that did not go really well, except if you were a criminal producing and smuggling
16:01
bootleg alcohol. So I think it's very important to develop these norms, and I want to offer you one that we clearly don't have right now: what's the fork in the toaster for AI? What is clear misuse, such that if you do it, the responsibility is yours? This picture is an image of a very early toaster, from the early 1900s, and you can see it would be pretty easy to electrocute yourself with it. So we've developed norms, but we've also developed best-practice guardrails against the uses we don't want to become the norm: toasters are enclosed now, and jamming a fork in is pretty clearly, mechanically, doing something wrong. These are the kinds of things we need to develop with AI: guardrails against certain uses, well-understood legal responsibilities, and norms that we can all understand. Now, in one specialized area I'm at least cautiously
17:00
optimistic right now. Recently, the big AI providers have offered copyright-infringement indemnity to the commercial users of their generative AI. So a company uses, say, Microsoft's generative AI to produce some text, distributes it, and then gets sued for copyright infringement. Since nobody at the company wrote that text, Microsoft is actually willing to take responsibility for it. So some of these things are starting to happen on their own, without new regulation. That doesn't mean regulation won't be needed, but things are going in a reasonable direction. Now, coming to the last part of this, I want to talk a little bit about people learning from AI; education came up in some of the earlier sessions. AlphaZero is a program that learned how to play chess, but not from prior human experience, and this is one of the really interesting things about AI: it doesn't
18:00
have to learn from humans. Right now most AI does, but it doesn't have to, and when it doesn't, it can really change the world. Now, chess, you might argue, is not the world broadly, but within the world of chess it has completely changed how players play at the grandmaster level, because it discovered new tactics and new strategies. In 2021, Science magazine listed as its Breakthrough of the Year the ability to predict the 3D structure of proteins from what we can actually measure, the amino acid sequences. In 2021 they thought it was a big deal; I can tell you that in 2024 it has transformed the life sciences in ways that even in 2021 nobody would have predicted, including Science magazine. And now generative AI, the techniques that drive large language models like ChatGPT, is going to further revolutionize our understanding of proteins, helping drive the development of a new area called metagenomics. So I want to come to one
19:00
scary topic, and certainly one that Henry Kissinger, my co-author on the book (this talk draws from the book in many other places too), knew much more about than just about anybody in the world. There are real cautionary parallels to the pre-World War I period. That was mass industrialization, and the advances there led to big changes in military capacity and speed. In fact, this photo shows a train track that was literally put in place in a matter of weeks to transport troops to the front lines, something that would have been impossible literally a decade before. But the diplomacy and the doctrines all assumed it was really hard to get materiel to the front lines, and that made the front lines a flashpoint in a way that was not true before. And AI certainly is changing the capacity for military engagement and
20:01
interaction. It's this different form of intelligence, as I hope I convinced you earlier, and our current strategies, both our military and our diplomatic doctrines, don't take that into account; they're all about human experience and human interaction. So in closing, I really want to stress that building a better future using AI is up to everybody in this room, and I hope I've given you some frameworks, some tools, for starting to think about, and starting to learn more on your own about, how we can have machines become capable collaborators, partners, and teachers; how we can have AI augmentation, especially in those higher-risk and higher-reward settings where a machine on its own is probably not what we want, where we want things beyond some form of raw intelligence. There are places, like medicine, where we want an empathetic doctor, not just the smartest one around. This is going to drive new insights, discoveries, and
21:01
innovations that broadly improve people's lives, as long as we're attentive to both the positive and the negative aspects at the same time. Thank you very much. [Applause] [Music]

About the Speakers