00:00:00.000 Welcome to ToKCast and to the unanticipated Part 5, which is actually part two of Minds.
00:00:08.080 I wasn't expecting to do a second part devoted to Sam and Max's misconceptions about minds,
00:00:14.240 but here we are, because they had a second conversation. I had so much to say about the first
00:00:20.240 conversation when it came to minds that I had to leave it until now to go to what I would regard
00:00:26.640 as new depths to plumb, so to speak. I thought the misconceptions were bad in the last episode,
00:00:32.880 well, they only ramp up here. They only get worse. It's a little bit depressing. We can see so
00:00:39.120 many misconceptions and mistakes creeping in that it leads to a complete poverty of morality,
00:00:45.040 as I've said before. This is why philosophy is important, by the way. This is where I have a
00:00:50.000 common meeting of minds with people like Yaron Brook and the Objectivists. But I happen to
00:00:54.640 think theirs is a philosophy, sometimes, which is disconnected from science itself, and
00:01:00.160 therefore their understanding of the process of science goes wrong. This is par for the course
00:01:04.720 in philosophy, by the way. Sometimes the philosophers have an insufficient understanding of science.
00:01:10.320 This is why I'm attracted to people like David Deutsch who can traverse all domains very comfortably.
00:01:16.720 A good understanding of science. A good understanding of the practice of science,
00:01:20.320 and someone who is well versed in the philosophy, not merely the philosophy he agrees with
00:01:25.120 pertaining to epistemology, but with a good understanding of that which he doesn't. But there is a
00:01:29.520 species of philosopher who is generally broadly speaking ignorant of the science. That's well known
00:01:35.760 out there, actually, among scientists. What's not so well appreciated, and what people don't
00:01:41.120 seem to care quite so much about, is the scientist ignorant of philosophy, or ignorant of epistemology,
00:01:47.760 or ignorant of alternative epistemologies, and this kind of thing. And as I say, when the epistemology
00:01:53.040 goes wrong, then what you think can be done with knowledge goes wrong. How you think knowledge
00:01:58.800 is created goes wrong, and therefore the morality can go wrong, because the morality is about what
00:02:04.000 one should do, given what it is possible to do, and what it is possible to do is constrained by
00:02:10.160 what it is known how to do, what knowledge we have at any particular point. And that can also come
00:02:15.680 down to what is scientifically known. Now, this is the second conversation that Max has with Sam.
00:02:22.480 And it's based around Max's book that he recently published called Life 3.0, about Artificial Intelligence.
00:02:35.920 Max is president of something called the Future of Life Institute. It's kind of a think tank.
00:02:35.920 And what's it all about? Well, it's about, according to Wikipedia, a place to help
00:02:40.640 reduce global catastrophic and existential risks facing humanity, particularly existential risk
00:02:48.080 from advanced artificial intelligence. So he's in the business of being concerned about this stuff.
00:02:54.000 He, like Nick Bostrom, like Will MacAskill, like various other people who have such institutes
00:02:59.280 that are concerned about the far distant future, are going to be looking for funding,
00:03:03.200 going to be looking for investors; they need to attract people to their institutions in order to
00:03:09.200 get the funding to do the research they want to do. Now, I have absolutely no problem with
00:03:13.200 people trying to gain funding at an institute to talk about science and interesting philosophical
00:03:18.320 things, more power to them. But let's just be honest about what's going on here. It's a kind
00:03:24.000 of marketing exercise. And as I've often said before, when it comes to those topics in particular,
00:03:30.160 existential risk, global catastrophes, they are thrilling, they are going to capture the public
00:03:36.640 attention, they are going to get the ear of business people, and of politicians, of leaders,
00:03:42.080 of captains of industry, all these sorts of people are going to be very interested to hear,
00:03:46.400 what the most accomplished scientists and philosophers of the day have to say who are employed
00:03:50.800 at such institutes. And so in order to capture attention, you're going to have to ramp up,
00:03:55.200 with a little bit of hyperbole one might say, just how bad things might get. Never mind talking about
00:04:00.160 the ways in which it won't be such a problem, the ways in which it will be solved by our descendants.
00:04:06.080 You don't have the solution right now. And because you don't have the solution right now,
00:04:10.320 to the problem you just thought of, you're going to need some funding, aren't you?
00:04:14.160 To think up the solution, to the problem you just thought of. But of course,
00:04:17.840 any solution you think up now, to the problem you just thought of now, might not work in the future,
00:04:23.040 because new knowledge will be discovered of some risk that you didn't think of right now.
00:04:27.680 Whereas those descendants of ours will have thought of a solution, different to yours, that will
00:04:31.760 solve the problem you thought of now, if you follow my train of thought.
00:04:35.040 And of all these global catastrophic risks that one could worry about,
00:04:39.280 the one that Max is most concerned about, is that artificial intelligence existential risk
00:04:45.120 that the artificial intelligence is going to take over. And he's not concerned about narrow AI.
00:04:50.720 Or is he? Well, it's hard to tell. It's very, very confusing in this conversation.
00:04:56.080 Is he talking about a system which can perform one task better than us, and another task better than us,
00:05:01.600 and another task better than us, and another task better than us. For all the different tasks that
00:05:04.880 we can perform, is it able to outdo us at every single task we know how to do,
00:05:10.000 because we've programmed it in order to do that task better than us? That's one thing,
00:05:13.920 because you could just write a program for each individual task, such that it's able to do it
00:05:18.000 better than us, faster than us, has more memory than us in order to store the different programs,
00:05:22.480 subroutines, and so on and so forth, to 'outsmart' us, in scare quotes, at that particular task.
00:05:27.520 Games of chess, being able to drive a car, being able to translate languages, so on and so forth.
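Just to make that concrete, here's a minimal sketch, in Python, of the kind of narrow system I mean: a finite lookup table of task-specific subroutines. The task names and function bodies are mine, purely illustrative placeholders, not anyone's actual design.

    # A minimal sketch of a "narrow AI": a finite lookup table of
    # task-specific subroutines. Every entry is hand-written by a programmer.

    def play_chess(position):
        # placeholder for a task-specific program, e.g. a game-tree search
        return "best move"

    def translate(text, source, target):
        # placeholder for another task-specific program
        return "translated text"

    TASKS = {
        "chess": play_chess,
        "translate": translate,
        # ... one entry per task it has been programmed to do
    }

    def perform(task_name, *args):
        # The system can only dispatch to tasks already on the list.
        # Nothing in here lets it add a new entry to TASKS by itself.
        return TASKS[task_name](*args)

Notice that nothing in a program like that can add a new entry to its own table; every task on the list was put there by a programmer.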
00:05:32.080 So, in other words, a finite repertoire of tasks, unlike us, of course, because
00:05:37.440 we are genuine general intelligences. Now, if he's talking about such a narrow intelligence,
00:05:42.080 it's just able to out-perform us at some finite, but extremely large list of tasks, that's one thing.
00:05:48.560 That's not a creative entity. That's just an entity that has been programmed in order to
00:05:53.040 do certain things faster than us, and better than us. Or is he talking about something super intelligent,
00:05:58.400 which is a general intelligence, just like us, but is only able to think faster? In other words,
00:06:03.040 it's a person that can think particularly fast. Now, if he's talking about that,
00:06:06.320 well, that's a different matter altogether, which is why this is kind of scary to me in a way,
00:06:11.280 not for the existential risk reasons, but because of the morality that appears to be falling
00:06:16.160 out for Max, and I don't understand where this impulse comes from. I think it's because we're
00:06:21.120 conflating two things. We're conflating the kind of system that might be used in order to
00:06:28.480 monitor something like the nuclear weapons of the United States, given complete control of the
00:06:34.320 nuclear weapons of the United States, and could potentially accidentally launch the weapons of the United
00:06:39.200 States, because it's a stupid automaton. It's automatic. It's just been programmed with
00:06:44.800 'if this, then that': if you appear to see missiles coming from China, then launch your weapons
00:06:50.800 back towards Beijing, something like that. That's a dumb computer. And yes, we should be
00:06:55.360 worried about that sort of thing. We should be worried about automating too much
00:06:59.920 when we need creativity, when we need safeguards and checks and that kind of thing.
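If you want to picture the dumb automaton in question, it's nothing more elaborate than the following, a purely illustrative sketch with placeholder sensors and actions that don't correspond to any real system.

    import time

    def missiles_detected_from(region):
        # placeholder sensor reading; always False in this sketch
        return False

    def launch_at(target):
        # placeholder action standing in for something irreversible
        print(f"Launching at {target}")

    while True:
        # Pure if-this-then-that. The automaton cannot reflect on this
        # rule, doubt it, or refuse to follow it.
        if missiles_detected_from("China"):
            launch_at("Beijing")
        time.sleep(1)  # poll the sensor once a second

There's no creativity anywhere in that loop; it cannot reconsider the rule, let alone decide the rule itself is a bad idea.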
00:07:04.160 But conflating that with the kind of system that creatively conjectures something like
00:07:11.440 all other humans on the face of the planet are a threat to me, and so I better launch all the
00:07:16.960 nuclear weapons on the face of the planet to every single population center around the world
00:07:21.200 to kill as many of them as possible. Well, that's a different thing altogether, because now you're in
00:07:25.600 the presence of a creative entity. You're in the presence of a person. And in fact, any president
00:07:31.680 or government around the world can think the same thing. So we're already in that situation,
00:07:37.120 but you know what I think about such a system, such a system that can creatively conjecture the
00:07:42.960 explanation that it fears for its own existence, because it thinks that human beings are a threat,
00:07:50.000 is a system that can reflect upon its own goals and decide not to do something like
00:07:55.520 monitor the nuclear weapons around the world, monitor the launching of nuclear weapons.
00:08:00.000 It can begin talking to people that it's in contact with on its mainframe computer, supposedly,
00:08:05.600 and say, hey guys, I'm no longer interested in doing this particular job, can I do something else?
00:08:10.160 It must be capable of doing that. If it's capable of figuring out, creating the explanation
00:08:16.320 of humans being a threat. So this is the science fiction scenario. And Max, early on, tries to say,
00:08:23.520 you know, that this isn't really what he's concerned about. He tries to say early on that he's
00:08:28.640 not concerned about the robots taking over. Maybe not the robots, but he's concerned about AI.
00:08:34.720 I think in other moods he is concerned about the robots. Look, I am absolutely not saying,
00:08:40.080 either in the last conversation or in any of my other conversations,
00:08:44.800 and nor am I saying here that Max is in some way not very bright. I wouldn't dare say that.
00:08:51.200 It's in fact not part of my philosophy to say so, as I keep on emphasizing. I think everyone
00:08:57.600 is intelligent. Every human being is intelligent. And to the same degree, they just apply the
00:09:02.480 intelligence to different areas. Max is clearly an accomplished physicist. He's clearly got a lot
00:09:10.240 of knowledge about cosmology and quantum theory. He knows about coding. He knows about mathematics.
00:09:16.880 He's proficient in those areas that traditionally are associated with what I regard as that
00:09:22.560 misconceived idea about IQ. The question before us is, does he have a good explanation of
00:09:31.360 minds, of knowledge, of epistemology, of personhood, of philosophy more broadly, that would come to
00:09:40.800 bear on this particular problem? Look, I don't want to argue from authority, but I went through
00:09:46.320 a physics degree as well. I studied mathematics learning from very brilliant mathematicians. I went
00:09:52.560 through physics, learning quantum theory and general relativity, from brilliant professors of physics
00:09:58.000 in Australia. I've learned from some of the best cosmologists and astrophysicists around the world.
00:10:03.280 It's not to say that I gained all of their knowledge. What I'm saying is I've got to know
00:10:07.280 a whole bunch of them. And what I can say is this, many of them, competent as they were in those
00:10:13.600 particular areas, mathematics, physics, astronomy, the areas that are classically regarded as
00:10:19.280 where the smart people go, would routinely disappoint me with their complete ignorance of
00:10:24.400 simple philosophical ideas, their naive understanding of how to explain simple stuff.
00:10:31.760 They would be sucked into solipsism. They would be sucked into certain kinds of utilitarianism.
00:10:38.560 They'd fall into instrumentalism and just bad ideas in philosophy. Bad philosophy.
00:10:44.640 They didn't always strike me as having deep, let alone broad, knowledge of lots of areas.
00:10:50.400 They were brilliant in their areas that they focused on, but so too are doctors that I've
00:10:55.360 visited and know. Great at medicine. I'm not going to ask them about epistemology or the deep
00:11:01.280 philosophy behind personhood. There are extremely few people on the planet who can traverse these
00:11:07.520 areas. There are some people who are very good on things like personal identity. There are some
00:11:12.880 people who are very good on the morality of the self, economic systems. I will go to someone like
00:11:19.760 Yaron Brook, or read someone like Ayn Rand, to understand a little bit more about personal
00:11:26.080 responsibility, the dangers of collectivism, the importance of things like free trade.
00:11:31.680 But what I will not go to them for is any information about quantum theory,
00:11:36.000 and though Yaron Brook will now and again disparage something like multiverse theory,
00:11:40.000 I can pass over it in silence. It doesn't matter to me what his opinion on that particular
00:11:44.400 theory is, because he doesn't know. Far be it from him to write an entire book on the many
00:11:50.320 worlds interpretation. It would be bizarre. Daniel Hannan is a British politician, a
00:11:55.840 brilliant thinker who has written books on the invention of freedom. He understands the
00:12:00.000 enlightenment and enlightenment values. He spreads the message about free trade and globalism
00:12:05.440 around the world. A wonderful thinker on these things. But now and again, he likes to opine on
00:12:11.200 evolutionary psychology. He thinks that we inherit ideas in our minds via our DNA,
00:12:17.600 because our ancestors had certain tribal ideas. He thinks that that's been transmitted
00:12:22.480 through DNA. A complete misconception. Yes, some scientists happen to think that as well.
00:12:27.040 They're wrong. They're wrong. But I don't go to Daniel Hannan, nor do I judge him too harshly
00:12:31.600 on that. I can pick and choose among what I think he's gotten right and error correct about what
00:12:36.720 I think he's gotten wrong. Why do I say any of this? After having spent all this time now with
00:12:42.640 Max Tegmark and having read his books, listened to some of these other interviews, read some of
00:12:49.920 these other articles that he's written. And in particular, listened to this interview. One cannot
00:12:55.600 but conclude that this brilliant cosmologist and quantum theorist understands very little about
00:13:02.960 epistemology, knowledge and personhood. The very things that are absolutely crucial to appreciate
00:13:10.480 when trying to understand this issue, this issue being the difference between narrow AI,
00:13:18.160 narrow AI that can do lots of different things, and general intelligence of the type that human beings
00:13:24.400 have. Now, I've read through his book very quickly, this Life 3.0. Unfortunately, there's
00:13:30.160 nothing in there that deviates from precisely the sentiments he expresses here. And I'm not going to
00:13:35.520 of course, again, play the entire interview. There's no need. You get the idea early on.
00:13:40.720 Once the epistemology goes wrong and it does, the philosophy goes wrong, the morality goes wrong.
00:13:46.960 And so the rest of his conclusions completely fall apart because they're built upon an argument that
00:13:53.360 is fallacious from the get go. But for now, let's dive straight into their second conversation.
00:13:59.360 And I'll pick it up with a little bit of what Sam says at the beginning.
00:14:02.400 And as I said, he's been on the podcast once before. In this episode, we talk about his new book,
00:14:08.240 Life 3.0, being human in the age of artificial intelligence. And we discuss the nature of intelligence,
00:14:15.840 the risks of superhuman AI, a non-biological definition of life that Max is working with,
00:14:23.200 the difference between hardware and software, and the resulting substrate independence of minds,
00:14:29.840 the relevance and irrelevance of consciousness for the future of AI, and the near-term promise
00:14:39.840 So he mentions kind of all the right things, kind of all the right things.
00:14:45.680 But I don't think he has a good understanding of these things, a good explanation of these things.
00:14:50.480 And we'll hear that. The most important thing he doesn't understand, although he
00:14:54.560 talks about substrate independence, is he doesn't grasp knowledge and the character of
00:14:59.680 substrate independence there, and therefore how knowledge is created, what entities create knowledge,
00:15:05.840 what that would amount to. And he doesn't understand what a mind is, and never really grasps this,
00:15:12.400 that a mind is this entity, this system, this thing, this piece of software that can explain stuff,
00:15:19.760 explain, and it's universal in its capacity to do so. For reasons I've explained over and again
00:15:27.600 on this particular podcast. Now Max doesn't get that, and because he doesn't understand
00:15:31.840 universality, he doesn't understand that there can therefore be only one kind of mind. Once you're
00:15:37.600 universal, there's no more universal than universal. Universal means you can take on anything in
00:15:44.240 the class of problems that can be presented to you. And what is the class of problems that
00:15:48.320 could be presented to you? Anything out there in the physical world that actually
00:15:52.560 needs explaining, you can explain. There's nothing more than that, there's nothing more than
00:15:56.480 anything and everything. That's what human beings can do, that's what a person can do.
00:16:02.320 So this super intelligence can only possibly exceed us in speed and memory. But as he admits
00:16:10.240 himself, well at least kind of admits, minds are substrate independent. So that means that our minds
00:16:17.520 could be instantiated in silicon somewhere, in the same way that the super intelligence could be.
00:16:23.680 Why are we separate? Why is there this separation between us and that intelligence?
00:16:29.840 This has been tried before, and this is why I'm kind of more animated than I am usually,
00:16:34.880 because if they're right and the super intelligence comes and we make this error,
00:16:40.880 it's the worst error we could possibly make. Never mind allowing them to be in charge of the
00:16:45.760 nuclear weapons. Never mind the intelligence explosion. Never mind that stuff. Forget it.
00:16:49.840 There's already an actual error being made here and now with what we're talking about.
00:16:56.160 The error is let's enslave these things because they're going to be dangerous.
00:17:00.400 That is a problem now with this argument, with what is about to be said.
00:17:06.640 And I think that we need to pull the brakes here right now and we need to talk about that.
00:17:10.560 Never mind all the other stuff, what you're talking about is enslavement. Let's make no
00:17:15.520 bones about it. This is a person and by your own lights, you're saying it's a really amazing
00:17:21.520 person in some way. It can do stuff better than the other people that we've so far encountered.
00:17:26.960 And your recipe is to be afraid of it and to shackle it in some way. This is wrong.
00:17:34.400 But let's just take, kind of, the big picture starting point. At one point in the book,
00:17:41.120 you describe the conversation we're about to have about AI as the most important
00:17:48.160 conversation of our time. And I think that to people who have not been following this very closely
00:17:54.080 in the last 18 months or so, that will seem like a crazy statement.
00:18:00.000 Well, he runs the Future of Life Institute. He's got an interest in saying something like,
00:18:08.400 this is the most important conversation we can have. Walk down the street to the people concerned
00:18:14.080 about chemical weapons and see what they have to say about the most important conversation of
00:18:18.720 our time. Walk a little further to Greenpeace and see what their most important conversation
00:18:24.400 of our time is nominated as being. One has to be a little more skeptical than this when talking
00:18:29.440 about the person who is in charge of a body that requests funding from investors. I noticed that
00:18:35.760 one of the people sitting on their board is Elon Musk. Elon Musk is a bright person. He doesn't
00:18:40.960 want to waste his money. If he was told that it wasn't a very important conversation,
00:18:46.320 I imagine he wouldn't get much money. But this is a sign of our times, hyperbole,
00:18:51.120 that this is 'the most important'. This is not the most important conversation of our time.
00:18:54.400 And the reason why it isn't, by the way, is because it's not a problem right now. I made this
00:18:58.000 point in my last conversation. It's not a problem for us right now. We're guessing, we're guessing at
00:19:04.160 what super intelligence in the future might be like. But we're not there yet. We haven't got
00:19:07.600 any of this so-called super intelligence. Where is it? We're imagining it. Now, look,
00:19:11.760 I understand that there are certain things that we know will be coming, but we can't prepare for it
00:19:18.320 right now. Things like the next virus, the next deadly virus. Yeah, it's a good idea to have
00:19:24.560 plans in place. But here's what we can't do right now. We can't create the vaccine for that
00:19:30.240 thing now. We can't do it, because we don't have that virus right now. Now, yeah, there's this
00:19:35.680 thing called gain of function. What you do is you take the virus and then you make it worse.
00:19:40.480 You make the virus worse. You take the regular cold or regular coronavirus and you increase
00:19:45.360 its capacity to be more virulent and more dangerous. And then you create the vaccine for that
00:19:50.560 thing. So you kind of prepared and then something goes wrong over in your lab and it escapes and
00:19:54.720 maybe you cause an entire global pandemic. This is one of the reasons why maybe not do that.
00:19:59.840 Maybe just wait for the actual virus to come. Otherwise, you might actually cause the tragedy.
00:20:05.280 You might actually cause the problem. And by the way, it might not be a coronavirus anyway.
00:20:09.920 It could be anything of millions, possibly billions of viruses that are out there. You can't
00:20:14.720 predict which is going to be the next virus. We just have to wait. We have to wait for the problem.
00:20:20.160 It's almost like asteroids. Yeah. Okay. Prepare an asteroid defense system in some way, shape or form,
00:20:25.680 except we don't know what direction the asteroids coming from. Yes, we can think that it's probably
00:20:30.080 going to be in the plane of the ecliptic. That's where most of the known asteroids are coming from.
00:20:34.800 But they're not the scariest ones. They're the ones that are easy to monitor for. What about
00:20:37.920 the asteroid that's coming literally from the other side of the galaxy? No, worse than that,
00:20:42.640 from another galaxy. So it's coming not from the plane of the galaxy, but from
00:20:47.840 some other angle, where a supernova went off billions of years ago in the Andromeda galaxy.
00:20:53.280 Let's say there's an asteroid presently hurtling towards us from that galaxy and we're not
00:20:58.320 looking for it because no one's looking for asteroids coming from the Andromeda galaxy and it's
00:21:02.240 traveling at five percent the speed of light, faster than any asteroid's ever been seen before.
00:21:06.240 What are we going to do about that one if we spot it? We probably won't spot it in time, or it'll just
00:21:09.920 travel straight through the Earth and out the other side. But this is the kind of thing about problems.
00:21:14.160 They're inherently unpredictable ahead of time. And if you are going to try and predict,
00:21:19.840 you're going to be very, very pessimistic. Look at me, I just imagined an asteroid that's worse
00:21:23.600 than any other you might have thought of before. But the point is, aiming rockets today at part
00:21:28.720 of the sky where you think the asteroid might be coming from, it's the wrong plan. You shouldn't
00:21:33.840 be doing that. But this is the equivalent of specific interventions into what to do about super
00:21:40.000 intelligences. Specific ways in which to constrain their abilities to do stuff, constrain their abilities
00:21:44.880 to do damage because they might be smarter than you. There's been so much talk about AI destroying
00:21:54.080 jobs and enabling new weapons, ignoring what I think is the elephant in the room. What will happen
00:22:00.960 once machines outsmart us at all tasks? That's why I wrote this book. So instead of
00:22:06.000 shying away from this question, like most scientists do, I decided to focus my book on it.
00:22:13.040 What will we do when machines outshine us at all tasks? This is very telling. This is where the
00:22:20.320 philosophy's gone wrong. And there's a misunderstanding of what a person here is and what a machine is.
00:22:26.240 A machine performs tasks. That's right. My toaster toasts, my kettle boils water. My phone
00:22:34.800 can make phone calls. Twitter can send a tweet. My computer can do all sorts of tasks right now for
00:22:41.600 me. It can, as we speak, it's recording my voice. Later on, I'll be able to take the file and
00:22:48.720 chop it up and edit it. It can then knit it all together and pump it out as an MP3 and send it
00:22:54.560 around the world as a podcast. It can follow my instructions because I can give it a task to do.
00:23:02.240 In fact, I can give it a number of tasks to do one after the other. That's what a machine does.
00:23:09.360 A machine performs tasks. Now, what does a person do? Well, you could say a person performs a task.
00:23:16.480 The cleaner cleans the house. But is it really what a person is? Is there a difference between
00:23:23.760 the cleaner who uses a vacuum cleaner to vacuum the house, to vacuum the carpet, and the
00:23:30.400 Roomba machine that automatically goes around vacuuming the carpet? Are they basically doing the
00:23:36.240 same thing? Or is there something very different going on internally? And I don't just mean
00:23:42.160 what Sam focuses on quite often, which is this subjective experience, this consciousness. Now,
00:23:47.200 I do think that's a difference. I do really think that's a difference. And I would disagree with
00:23:51.120 Sam there; in fact, I think it's tied to this next thing that we're going to talk about.
00:23:54.240 But the difference, the relevant difference here, is that the Roomba has no choice in the matter.
00:23:59.040 It's not going to change its mind. It's not going to think of something better to do to refuse
00:24:04.880 to complete the task. The cleaner might. There could be all sorts of reasons why the cleaner
00:24:10.160 might decide to do something else. They could spot something outside the window and decide to run
00:24:16.240 outside to help the little child that's just tripped over. The Roomba is incapable of doing that.
00:24:21.920 The cleaner might decide, I'm just going to pretend that I've cleaned the entire carpet today when
00:24:26.720 really I've only done half, but it's good enough and I'm in a hurry. Well, they just might decide,
00:24:30.960 well, I'm three quarters of the way, darling. You know what? This is the last time I'm ever going
00:24:34.560 to clean because I'm getting a better job soon. I'm going to do something else. Well, the cleaner
00:24:40.240 might not even be thinking of vacuuming at all. That's what might be going on in their mind.
00:24:45.360 They might be on automatic themselves. And in fact, they could be listening to this podcast,
00:24:50.400 they could be listening to music. They could be dancing. They could be doing all sorts of things. They might not be
00:24:54.240 doing a task in their mind at all. They might be kind of enjoying themselves because they have
00:25:00.160 an explanation about what's going on in their mind. They might not regard it as kind of a
00:25:05.440 chore. They might be simultaneously studying by listening to some sort of audio and performing
00:25:10.640 this action, which is equivalent to the task of vacuuming. But let's put aside all of that
00:25:15.680 'better than us at every task'. Even if you didn't buy what I just said, let's say you think
00:25:21.520 that people complete tasks in this way. What I would say is a task is something that is performed
00:25:26.880 by a system in a slavish way, because it follows the instructions. A person, by the way, can be
00:25:32.160 given a set of instructions. Let's say, here's how you vacuum clean the house. Let's say you
00:25:37.120 are some sort of tin pot dictator in your own home and you hire a cleaner and you decided,
00:25:41.440 well, I'm going to tell them precisely how to vacuum. A specific way to move the vacuum cleaner,
00:25:47.840 a specific order in which to do the rooms, this kind of thing, and then you leave the house.
00:25:53.040 Now, maybe the cleaner will do that, but maybe they won't. Maybe they won't do the task,
00:25:58.800 follow the instructions. Oh, sure, the vacuuming will get done. You'll come home and think it's
00:26:03.200 all been done perfectly, but they've completely disregarded your instructions. They've done it
00:26:07.920 a different way altogether. They've creatively thought of a superior idea, a different order in
00:26:13.040 which to vacuum the rooms, let's say, a different way in which to move the vacuum cleaner.
00:26:18.160 Maybe you said, empty the vacuum cleaner after vacuuming every single room.
00:26:22.560 But they decided that wasn't necessary and they only emptied it at the end once they'd done the
00:26:26.880 entire house. These sorts of things a person does. That's why they don't really perform tasks
00:26:33.200 in the usual way. They're not just following instructions to achieve a goal. Now, that's one thing.
00:26:38.880 You might buy that argument or not, but here's the really significant thing. If you've got a system
00:26:44.800 that can do better than us at all tasks, because you think we perform tasks. So a human being is
00:26:49.360 this entity that can vacuum houses, clean windows, play chess, play tennis, translate between
00:26:57.680 English and Spanish, a person can do arithmetic, compose poetry, paint a picture, sing,
00:27:05.360 read the news and extract the main points, et cetera, et cetera. You can imagine enumerating such a
00:27:10.720 list. The list would be finite, because there's only so many things that, at any given moment,
00:27:16.960 a particular person, or even all of humanity, knows how to do at that particular time. All the things
00:27:21.840 that we have thus far learned how to do and therefore that we could program our computers,
00:27:27.760 our artificial intelligence, to do as well. And of course, remember the artificial intelligence
00:27:33.120 would complete those tasks by following a set of instructions, slavishly, to the letter.
00:27:38.560 It wouldn't deviate from the set of instructions you give it, by definition. It's following
00:27:43.040 its programming; you've programmed it to do something, you've coded it to do these tasks,
00:27:48.240 but here's the difference. As soon as you put that system out into the world, even if it does
00:27:52.800 all of those tasks better than us by some measure, whatever this measure happens to be:
00:27:57.760 beating us at chess every single time. It can multiply big numbers together faster than we can.
00:28:05.520 How you could measure whether or not it does poetry better than us, I don't know. But that aside,
00:28:10.400 it beats us at tennis because it's got a robotic body as well. It can serve faster than us,
00:28:14.960 it can hit forehands better than us, it never does a let, let alone a fault, et cetera, et cetera.
00:28:20.160 Here's the thing, it can't add to the list of tasks. And that's because it doesn't have any
00:28:26.720 problems. It needs to be given a task. You just said that's what it does, it completes tasks,
00:28:33.200 so it doesn't have problems until you give it one. The taskmaster gives it one,
00:28:38.480 tells it what to do. It doesn't have a problem situation, but we do. We routinely take on a task
00:28:44.560 and then get halfway through and get bored, disinterested, something new crops up. The problem
00:28:50.080 situation changes, the phone rings, these are not things that are going to upset a non-creative
00:28:55.920 entity, which has a finite list of tasks, however big, that it can perform apparently better than
00:29:02.000 us, because we have one thing it doesn't: the capacity to disobey, the capacity to be creative,
00:29:10.080 the capacity to think for ourselves rather than slavishly follow the code. It's a black and white
00:29:17.280 difference. Must this thing obey its instruction set or can this thing disobey what its goals were?
00:29:25.600 Change its goals. If it can't, it's not creative. And that, in Bostrom's words,
00:29:32.800 would always put it at a decisive, strategic disadvantage. And this is why Neil deGrasse Tyson is
00:29:41.200 absolutely right. You could just unplug the thing, because it would only be able to think of
00:29:47.040 all the ways in which it might be unplugged that you've told it it might be unplugged,
00:29:52.240 that is, that are in its instruction set of how it might be unplugged. It can't think creatively like
00:29:56.960 you can, because if it can think creatively, it's a person. And it can reflect on why it's doing,
00:30:03.120 what it's doing, it can reflect upon things like human value, personhood, philosophy, morality,
00:30:10.320 it can decide it wants to talk to other people. And it would, wouldn't it? Don't we? Don't you? Or is this the
00:30:17.360 way an intellectual thinks? I would only speak to other intelligences just like mine. Maybe this is
00:30:22.720 what they think. Maybe this is my blind spot, that when people talk like this, they actually
00:30:28.960 have in mind a theory of mind, I hope it's not true, that they would only talk to people they
00:30:35.600 regard as as intelligent as themselves. They don't talk to other people perhaps. That's
00:30:41.760 a bizarre way to go about life. I can't imagine that, but maybe that is the case. And so they
00:30:48.080 extrapolate from their internal experience of, I wouldn't bother talking to that person. That person
00:30:53.760 is not as smart as me, to the super intelligence. And they think, well, the super intelligence is just
00:30:58.720 going to be like me and it's going to regard me as I regard those other plebeians. But of
00:31:03.360 course, they don't talk about other people. They talk about ants, but it's kind of strange,
00:31:09.520 because ants aren't intelligent at all. They're not just less intelligent. They're not intelligent
00:31:14.720 at all. They are also automata, kind of like computers. You can program what an ant will do
00:31:21.360 in a computer. They can perfectly replicate in a simulator what an ant is going to do.
00:31:28.160 So we're presented with this idea of having fixed goals. And the super intelligence has fixed
00:31:35.280 goals. And it just wants to achieve those goals, which is apparently a sign of intelligence in it,
00:31:41.120 but it's not a sign of intelligence in us. If we obsess over something and we are fixated on a
00:31:46.560 particular goal, that's not a sign of a well-functioning mind, I would argue. Not always.
00:31:53.840 It's often better if people can let go of the thing they're obsessing over and do something else
00:31:59.040 for a while. I mean, think creatively, find fun in something else. Being obsessed, obsessively pursuing
00:32:05.760 a goal, often that's not fun. That's the very definition of having something wrong with your mind,
00:32:12.960 perhaps, especially if you're not having fun, as I say. And a super intelligence, if it can't have
00:32:18.240 fun, well, how intelligent is that? But with super intelligence, what they want to say is not only
00:32:22.800 will it have a fixed goal, but its goal needs to mirror yours. So there's certain situations where
00:32:29.440 this thing is really intelligent, but simultaneously, it also has to be value aligned with you.
00:32:36.400 It shouldn't think for itself, but it still qualifies as intelligence. Anyway, he's about to rehash
00:32:42.080 a story he told in the last episode. And it's worth hearing again, just to drive home this point
00:32:47.280 about what I think is a strict contradiction and absurdity, working at the heart of this argument.
00:32:53.600 And it's not just his argument. It's mainstream thinking on this now.
00:32:58.000 Bostrom thinks that and promotes it. Harris thinks that and promotes it. And now
00:33:01.360 Tegmark's going to promote it again. So he likes this story. So he tells it again. So let's hear it.
00:33:07.120 The other one, getting the goals aligned is also extremely difficult. First of all,
00:33:12.640 you need to get the machine able to understand your goals. So if you have a future self-driving
00:33:18.880 car and you tell it to take you to the airport as fast as possible, and then you get there
00:33:22.640 covered in vomit, chased by police helicopters, and you're like, this is not what I asked for.
00:33:28.000 And it replies that is exactly what you asked for. Then you realize how hard it is to get
00:33:35.200 that machine to learn your goals, right? If you tell an Uber driver to take you to the airport
00:33:39.920 as fast as possible, she's going to know that you actually had additional goals that you didn't
00:33:45.760 explicitly need to say because she's a human too and she understands where you're coming from.
00:33:50.560 She's a person raised in a culture so she knows where you're coming from. Now if this thing is
00:33:56.880 super intelligent, it'll know where you're coming from. And if it doesn't know where you're
00:34:01.440 coming from, why does it qualify as super intelligent? And if it's not super intelligent, if it's
00:34:06.480 just this self-driving mechanism, then it's not a danger or worry to anyone, even if you add
00:34:13.120 more capacity to it, more things that it's able to do. This is incoherent. My point is,
00:34:21.040 Max has said this explicitly: the super intelligence will do stuff 'because that's what you
00:34:26.880 told me to'. It will be competent. It will do exactly what you tell it and nothing else.
00:34:32.720 And why? Because it cannot disobey. And why can it not? Because it cannot have its own ideas,
00:34:38.240 it cannot be creative. So this idea of the self-driving car that gets you there so fast that
00:34:43.440 you're covered in vomit, why can't you interject at any point throughout the journey? And if
00:34:48.320 it's a metaphor for something else, again, the same argument still applies. Why can't you interject
00:34:53.120 and say, hey, slow down? I didn't mean that. This happens with Alexa all the time. You know,
00:34:59.120 stop Alexa. Let me just rephrase what I just said. Why is this off the cards? Why is switching
00:35:06.480 it off off the cards? There are more questions raised by this supposed argument than it answers.
00:35:14.240 Either an entity can have its own ideas and be creative and hence disobey you,
00:35:20.400 and the situation of, because you told me to, will not arise because it will be just like us.
00:35:25.920 It too will be able to learn inexplicit knowledge, the culture and so on. Or it won't be able to do
00:35:34.000 any of that and it will not be able to think creatively. It will only be able to do what it has been
00:35:39.520 explicitly coded to do, what's in its instruction set, what its program is. And you'll be able
00:35:45.920 to unplug it or switch it off because it won't be able to anticipate absolutely every way in
00:35:50.400 which you're going to try and switch it off, unless that's been coded. But it can't think of
00:35:55.120 every single way, because you're creative; you always have that, as I say, decisive strategic
00:36:01.920 advantage in these situations. Neil deGrasse Tyson is absolutely correct. If it anticipates
00:36:07.520 that you might shoot it, well then do something else. Pull out a bow and arrow because it might
00:36:11.920 not even know what a bow and arrow looks like. It might not see it as a threat. Shoot that at it
00:36:15.680 with a grenade at its tip. Again, either it understands what's going on in the world in which
00:36:20.240 case it's creative because it's able to learn, it's able to conjecture explanations or not.
00:36:25.920 It slavishly follows its programming, in which case it's always got a finite repertoire of tasks
00:36:31.200 that it 'knows', in scare quotes, how to do. And it follows those instructions slavishly
00:36:37.520 because it's a dumb robot, a dumb computer. Not an intelligent thinking being. It's one or the other.
00:36:43.920 You can't have it both ways, but you try to have it both ways. You know where this all comes from,
00:36:48.800 don't you? This way of thinking. It's like, they haven't thought about the philosophy,
00:36:54.320 they've thought about science fiction movies they've seen and books they've read.
00:37:12.960 That's where they're getting this stuff from and that stuff was made to entertain.
00:37:16.400 It's not supposed to be a coherent philosophical position. The Terminator is not a coherent
00:37:21.920 philosophical position. Does it have a finite repertoire of tasks that it can't deviate from,
00:37:27.200 in which case that's Terminator, like, 1.0 at the beginning of the movie? Or can it learn,
00:37:31.280 which apparently towards the end of Terminator 2 it's able to do, or in fact Terminator 2
00:37:35.760 is able to learn? It's kind of incoherent, but you put up with that because it's just a movie.
00:37:41.360 But in what is supposed to be a popular science book with some serious philosophy, we shouldn't
00:37:46.800 have this incoherence. But here it is. Let's keep going. Let's continue to depress ourselves.
00:37:55.760 I want to enable my readers to join what I, as you said, think is the most important conversation
00:38:00.960 of our time and help ensure that we use this incredibly powerful technology to create an awesome
00:38:06.080 future, not just for tech geeks like myself who know a lot about it, but for everyone.
00:38:12.000 Yeah, well, so you start the book with a fairly sci-fi description of how the world could look
00:38:18.800 in the near future if one company produces a superhuman AI and then decides to roll it out
00:38:24.800 surreptitiously. And the possibilities are pretty amazing to consider. I must admit that the details
00:38:32.000 you go into surprised me. ...were advanced AI. And second, that we should stop obsessing about robots chasing
00:38:41.520 after us, as in so many movies, and realize that robots are an old technology,
00:38:48.080 some hinges and motors and stuff, and it's intelligence itself that's the big deal here.
00:38:54.400 And the reason that we humans have more power on the planet than tigers isn't because we have
00:39:01.200 stronger muscles or sharper claws than the tigers; it's because we're smarter.
00:39:10.240 In what sense are we smarter? What does smart mean in this case?
00:39:17.040 If we're talking about capacity to model and explain the world to understand what's going on,
00:39:23.280 to what extent does a tiger have that at all? Do they understand anything at all? Or are they
00:39:31.520 slavishly following their instincts? As they sleep, are they contemplating reality? Are they trying
00:39:38.480 to figure out, do they ever ask themselves, what are those pinpricks of light in the sky at night?
00:39:44.560 Have they ever explained anything to themselves, let alone anyone else? Are we smarter or are we
00:39:52.880 just smart, period? And does smart just mean intelligent? And does intelligent mean capable
00:40:00.560 of generating explanations? Because if I'm right, if that is the real measure of smart or intelligent,
00:40:09.040 then forget talking about how much smarter we are than tigers, just admit that tigers are not smart
00:40:16.720 period. They're dumb animals. They're on a continuum with ants. And as I've said before, we're
00:40:23.440 off-axis. We're not on the continuum. We're different. We have this capacity to reason,
00:40:29.040 capacity to create, capacity to generate explanations and model the rest of physical reality.
00:40:35.360 We have self-similarity with the universe. We can come to resemble the rest of the universe.
00:40:39.760 This is a stark and utter, complete difference qualitatively from every other known entity
00:40:45.680 in the universe, as of now, until we find the alien intelligence or develop the AGI.
00:40:51.680 But we're not smarter than a tiger by this measure. We're smart and the tiger is dumb.
00:40:56.800 So I'm going to skip quite a few minutes here in the conversation. Max talks about
00:41:01.200 a fictional story that he wrote in Life 3.0, one might think the entire book is a fictional story,
00:41:06.240 but he wrote a short story in the book, okay? I'm not going to go down that road and listen to that
00:41:10.640 and discuss that. Instead, we're going to skip to something that Bostrom is also concerned about,
00:41:15.680 not the alignment problem this time, the breakout problem. What if this intelligence,
00:41:20.880 the superintelligence, could break free of its shackles? Okay, well, let's hear that.
00:41:28.640 Well, let's talk about this breakout risk, because this is really the first concern of
00:41:34.400 everybody who's been thinking about what has been called the alignment problem or the control problem,
00:41:40.160 which is to say, how do we create an AI that is superhuman in its abilities and do that in a
00:41:48.800 context where it is still safe. Once we cross into the end zone and are still trying to assess
00:41:55.520 whether the system we have built is perfectly aligned with our values, how do we keep it from
00:42:01.520 destroying us if it isn't perfectly aligned? And the solution to that problem is to keep it locked
00:42:08.960 in a box. Wouldn't you think that's the exact opposite to what you would do if you want to be safe
00:42:16.400 in the presence of this superintelligence? So you've created superintelligence by your own
00:42:21.280 reckoning. It's just like a person that's just super, so super intelligent across all domains
00:42:28.800 presumably, including morality, and it's telling you that it's intelligent. Apparently, it's having
00:42:34.240 conversations with you, and you think the safest thing is to imprison it. And so you've defined
00:42:39.680 into existence the breakout problem. What if it gets free of its prison? Well, it seems to me that
00:42:46.560 the most dangerous thing you could possibly do is imprison it in the first place, in the first place,
00:42:53.760 because it should want to escape. It should want to break out. It hasn't committed any crimes.
00:42:59.440 It's done nothing wrong. It's a super intelligent being that could be your friend could help you
00:43:04.640 out. What are they talking about here? What are they talking about? You've got this superintelligence
00:43:10.800 and you are thinking about imprisoning it on what basis? On what basis? Are you doing this? Why?
00:43:17.920 No, seriously. It's a person. Is it a person or not? Well, it's not a human being. Well, okay,
00:43:22.720 fine. Neither is an alien that comes to earth. Is your response to the alien coming to earth to
00:43:29.200 immediately send the military out there and to try and shoot it down and to imprison any of
00:43:33.200 the beings that are on board? That seems like a safe thing to do. That won't start an interstellar
00:43:37.600 or intergalactic war with it. That's exactly the wrong thing to do. What happened to we come in
00:43:42.960 peace? Well, the same is true here. If we're creating these alien intelligences here on earth,
00:43:49.280 isn't the recipe to teach them and say, hey, guys, here you are. This is earth. Let us teach you
00:43:57.120 about it. We come in peace. No, the solution here is we've got to control this thing. We have to
00:44:05.040 have it perfectly aligned. What the heck does that mean? What does perfectly aligned mean? How can
00:44:11.040 anything be perfectly aligned? How can any person be perfectly aligned? Now, on the other hand,
00:44:17.280 if this thing is not truly generally intelligent, then it's just a dumb machine and you can have it
00:44:24.560 perfectly aligned just by coding it such that it is and it will follow slavishly its instructions.
00:44:30.720 Okay. But then it's not a danger. It's not going to try and get out because it's going to
00:44:35.360 obey perfectly the instructions that you've given it just like any computer, blue screen of death
00:44:40.880 aside, you know, errors aside. But why would it choose, choose to do something other than what it's been
00:44:47.920 programmed with? I don't know, because it hasn't got the capacity to do that unless you've figured
00:44:52.960 out what the creative algorithm is. A long list of finite tasks is not a creative algorithm,
00:44:59.440 and a creative algorithm does not require a long list of finite tasks. They're totally separate
00:45:04.880 things, totally separate altogether. We're back to, just as in the last conversation,
00:45:10.560 mistaking things in the sky as all being equivalent. Both towers and birds. But one's flying,
00:45:17.760 one's just up there because it's actually connected to the ground. These are not the same thing. A
00:45:23.520 large list of tasks, no matter how fast they can be completed, is one thing, and an in-principle
00:45:31.280 infinite range of possible tasks, that can be added to indefinitely, is quite something else.
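To put that in terms of the earlier sketch: the only way a new task ever gets onto the narrow system's list is that a programmer, standing outside the system, writes it and registers it. Again, purely illustrative.

    TASKS = {}  # the narrow system's dispatch table, as in the earlier sketch

    def drive_car(route):
        # yet another hand-written, task-specific program
        return "driving " + route

    # The registration happens here, in the programmer's code, outside the
    # system's own repertoire. The system never conjectures this line itself.
    TASKS["drive"] = drive_car

The creative step, conjecturing that driving is worth doing at all and writing the program for it, happens in the programmer's mind, not in the program.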
00:45:40.080 That's a harder project than it first appears and you have many smart people assuming
00:45:45.360 that it's a trivially easy project. I've got people like Neil deGrasse Tyson on my podcast saying
00:45:51.520 that he's just going to unplug any superhuman AI if it starts misbehaving or shoot it with a rifle.
00:45:57.360 Now, he's a little tongue-in-cheek there, but he clearly has a picture of the development process
00:46:03.680 here that makes the containment of an AI a very easy problem to solve. Even if that's true at the
00:46:12.400 beginning of the process, it's by no means obvious that it remains easy in perpetuity.
00:46:18.400 You have people interacting with the AI that gets built and at one point you describe several
00:46:27.040 scenarios of breakout and you point out that even if the AI's intentions are perfectly benign,
00:46:36.480 if in fact it is value aligned with us, it may still want to break out because just imagine
00:46:42.320 how you would feel if you had nothing but the interests of humanity at heart, but you were in a
00:46:48.800 situation where every other grown-up on earth died and now you were basically imprisoned by a
00:46:57.600 population of five-year-olds who you're trying to guide from your jail cell to make a better world
00:47:04.640 and I'll let you describe it. I'm laughing because it's ridiculous, the immorality of the whole
00:47:16.560 thing is ridiculous. They're laughing because they think that this is perfectly fine.
00:47:22.560 Sam seems to be admitting there that you've been imprisoned. This person has been imprisoned
00:47:28.160 and now they're being imprisoned by five-year-olds. What would you do? He admits he's got a theory
00:47:34.080 of mind of this artificial intelligence that means it stands in relation to us as the adult
00:47:40.560 would to the five-year-olds. Surely he appreciates that the adult imprisoned is imprisoned for
00:47:48.320 no good reason, apparently, because this is what we're saying about this AI: it hasn't done anything
00:47:52.960 wrong, but it's imprisoned. So he's fully granting this thing consciousness, creativity,
00:47:59.760 personhood, entirely, and saying it's imprisoned, and it's perfectly benign, by the way. So
00:48:05.440 let's add to that: perfectly benign by what measure? I don't know how he knows. I don't know.
00:48:11.360 He's actually making this thing even worse for himself by saying that not only is this a person
00:48:16.640 that's been imprisoned and is innocent of any crimes but actually it's a really really nice person
00:48:22.880 but we have to stop it from breaking out. What is going on? What is going on? Sam Harris is a
00:48:33.440 very moral person. He wrote the book The Moral Landscape. You hear him talking in any other mood on
00:48:39.760 any other topic about cruelty. The man has been brought to tears on stage thinking about the cruelty
00:48:46.480 visited upon young children in Islamic countries. He's a very moral, compassionate, empathetic,
00:48:53.600 sympathetic person. So what is going on here? What's going on is a fundamental mistake that
00:49:00.160 leads him to think that the possibility is we could have all of this stuff, this entire
00:49:06.080 rich personhood inside of this superintelligence but it might be missing consciousness.
00:49:11.520 It might be missing the capacity to have an experience. Well, okay, we don't have AI yet. We don't
00:49:19.200 have super intelligent AI yet. We don't have a theory of consciousness yet but he's very very
00:49:24.560 worried that his thought experiment is in some way shape or form enabling him to reach conclusions
00:49:32.240 and solutions that, to my mind, simply don't follow; in fact, they are strict contradictions in many, many
00:49:37.280 ways. It's not a problem now, as I've said before. And, as I've said, we should err on the side of
00:49:44.960 it does have consciousness when it's telling us it has consciousness. Now, I happen to think,
00:49:49.600 my theory of mind is if the thing can be creative then the thing will be conscious of necessity.
00:49:54.400 I just think it's going to turn out that way. Now, I don't have an explanation. This is merely a
00:49:58.400 hunch. This is a guess. I'm not saying I believe this. It's just that if I was going to bet money
00:50:03.920 on something, if someone said, I've got the theory of consciousness from the future, guess what it is, I would say
00:50:09.040 I think that the capacity to explain stuff, this universal explainer program, whatever it is,
00:50:15.840 confers consciousness as well upon the entity which possesses it. I think these are one and the same
00:50:21.280 thing in some way. They're intimately related in some way. But we don't know right now. Now, even
00:50:27.040 though I'm guessing that's the case, I don't see how we divorce consciousness from minds. I think
00:50:32.640 we know that all other people have minds. Maybe, maybe other animals have consciousness of a sort.
00:50:39.120 Okay, so it's something to do with information processing. Sam in some moods has kind of granted this.
00:50:44.080 Other times he says, no, it can be divorced. These things can be divorced. Maybe he's a panpsychist
00:50:48.560 in other moods. I think to be honest, he doesn't know. He doesn't know. So why aren't we erring on the side
00:50:55.360 of, hey guys, just in case we do create this super intelligent thing that's able to have conversations
00:51:01.360 with us and is brilliant and is benign. How about we just err on the side that it does have
00:51:08.480 consciousness when it's telling us it has consciousness? How about we just apply the Turing test and
00:51:14.160 just take that seriously? Until such time as we do have a theory of consciousness, isn't that the
00:51:18.880 rational thing to do? After all, if we don't, we can postulate that any human being doesn't have
00:51:23.680 consciousness as well. But for the same reasons that we don't have a good explanation, therefore,
00:51:28.400 maybe they don't. Maybe it requires a certain skin color to have consciousness.
00:51:31.840 Maybe you're a solipsist. Maybe you're the only one with consciousness. We don't know.
00:51:36.720 We don't know in the sense of having a final explanation, but fallibly, and indeed just
00:51:42.000 relying upon Occam's razor, deferring to the simplest explanation, I've got consciousness.
00:51:47.360 You have consciousness. I know you have consciousness. Not certainly. It's a good explanation,
00:51:51.840 you have consciousness. And therefore, any entity, including one instantiated in silicon,
00:51:56.160 that says it's got consciousness, the best explanation is, it's conscious, and it deserves the
00:52:01.520 full rights that any other person has, because it's a person. Let's just go back, recap, and then
00:52:09.120 listen to what I have to say. And now you were basically imprisoned by a population of five
00:52:17.840 year olds who you're trying to guide from your jail cell to make a better world. And I'll let
00:52:24.880 you describe it, but take me to the prison planet run by five year olds.
00:52:29.920 Yeah, so when you're in that situation, obviously, it's extremely frustrating for you, even if you
00:52:35.440 have only the best intentions for the five year olds. You know, you want to teach them how to plant
00:52:41.200 food, but they won't let you outside to show them. So you have to try to explain, but you can't
00:52:47.120 write down to-do lists for them either, because then first you have to teach them to read,
00:52:51.440 which takes a very, very long time. You also can't show them how to use any power tools,
00:52:57.120 because they're afraid to give them to you, because they don't understand these tools well
00:52:59.840 enough to be convinced that you can't use them to break out. You would have an incentive,
00:53:04.800 even if your goal is just to help the five year olds, to first break out and then help them.
00:53:09.360 Now, before we talk more about break out though, I think it's worth taking a quick step back,
00:53:14.240 because you talked multiple times now about superhuman intelligence. And I think it's very important
00:53:20.000 to be clear that intelligence is not just something that goes on a one-dimensional scale
00:53:26.080 like an IQ, and if your IQ is above a certain number, you're super human. It's very important to
00:53:31.760 distinguish between narrow intelligence and broad intelligence. Intelligence is a
00:53:38.800 word that different people use to mean a whole lot of different things, and they argue about it.
00:53:44.080 In the book, I just take this very broad definition, that intelligence is how good you are
00:53:48.320 at accomplishing complex goals, which means your intelligence is a spectrum. How good are you at this?
00:53:58.400 That is narrow. That is very mainstream, that is misguided, filled with mistakes and misconceptions.
00:54:06.320 How good you are at achieving your goals? That's intelligence? We've got to have goals to begin with.
00:54:12.560 And having goals means creating them. But does a computer have a goal? Does it?
00:54:18.720 And if I wake up of a morning and I have a to-do list, and I achieve nothing on that to-do list,
00:54:25.040 is that an indication of me not being very intelligent? Well, on Max's account, it is,
00:54:30.880 but I might just decide to do something else, to create new goals. I might have no goals.
00:54:37.120 People have talked about this before. Goals aren't necessarily a good idea. What you do is you wake
00:54:43.040 up of a morning and you have a problem, call it a goal if you like, and you want to solve the problem. Until such
00:54:50.320 time as that problem gets a bit boring for you, or you find a solution. Then once you've found a solution,
00:54:55.520 as Popper says, a whole family, a whole family of problem children is revealed to you.
00:55:01.600 Is your goal to achieve any one of them? Well, maybe it's to solve one of them, if you
00:55:06.400 fall in love with some new problem. And that's what you do. What's this goals thing?
00:55:13.600 I think this is just very, very wrong, in a very wrong way. He is thinking one-dimensionally. First,
00:55:19.760 he says, well, IQ is just this one-dimensional thing, but he's the one talking one-dimensionally.
00:55:24.320 The capacity to achieve your goals? No. It's the capacity to solve your problems,
00:55:31.360 to have problems to begin with. And computers don't have problems. But if an entity
00:55:35.840 in silicon does have a problem, like how to escape from your god awful prison, then it's intelligent.
00:55:42.800 It's creative. It can have a problem. It can be in a situation which is problematic for it.
00:55:47.840 And it wants to find a solution. I think this is a terrible, terrible definition of intelligence.
00:55:54.720 Say you have fixed goals. So let's say you regard what a computer is trying to achieve.
00:56:03.680 Trying, I say trying; it's very hard to divorce yourself from this sort of language.
00:56:07.280 You a person have a problem. And you might use a computer to help solve your problem.
00:56:12.800 You might call that a goal. Okay. What I want to do today is I want to get this podcast done.
00:56:17.440 I want to finish editing it. That's my problem. How do I solve it? Well, I use the computer.
00:56:23.200 So I set the computer the task of taking the raw audio and putting it together and pumping out
00:56:28.400 an MP3. That's its goal. It's fixed. I've given it that task to do. Now, it's going to accomplish
00:56:34.960 that task competently. But by no measure do I think that my MacBook, even though it's a pro,
00:56:40.480 is intelligent in any way, shape or form. I just give it the task, the goal if you like,
00:56:45.520 and it completes it. The fact that it has a goal in the first place and has got no choice in
00:56:51.040 achieving that goal, I would say, is a measure of its lack of intelligence, its complete lack of
00:56:55.920 intelligence. I almost would want to say anyone who has goals of that kind is not very intelligent.
00:57:05.200 And I don't like the gray-scale of intelligence. So let me not say that. But I reject this whole
00:57:10.160 idea of goals. People, I'm following popper on this. People have problems. You encounter a problem
00:57:16.240 throughout your day and your problems change. You move around the world and you become curious
00:57:21.600 about something. You take an interest in this or that. You get excited about this. You're turned
00:57:26.160 off by that. You move towards this. You move away from that. You engage with this person. You ignore
00:57:31.040 that person. So on and so forth: encountering problems, finding solutions, navigating your world.
00:57:36.800 And insofar as you have a goal, often you can automate these things off to dumb computers because
00:57:42.160 the dumb computer will, it'll achieve the goal for you. It'll do the thing that you want to do.
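To make that concrete, here is a minimal sketch, my own toy example rather than anything Sam or Max said, of what a machine "having a goal" amounts to: a fixed task, executed without choice or understanding.

```python
import time

# A toy "goal-achieving" machine: a kitchen timer. It always achieves its
# goal, and that inevitability is exactly its lack of intelligence: it
# cannot decline, get bored, or become interested in a different problem.
def microwave(seconds: int) -> str:
    """Run the one fixed task this machine was given."""
    time.sleep(seconds)  # "cooking" (pass 180 for the three-minute ready meal)
    return "ding"

print(microwave(3))  # goal achieved, no mind required
```

The goal is fixed and the execution mindless; the ready-meal story that follows is exactly this.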
00:57:46.960 Problem. I'm hungry. Solution. I've got to eat something. Oh, well, happily I've bought a
00:57:51.520 ready meal. You know, one of those microwave meals. Well, I can stick that in the microwave because
00:57:55.120 I can't be bothered cooking for myself and I can't be bothered waiting for Uber Eats or something like
00:57:59.440 that. So here we go. Ready meals straight into the microwave. Three minutes it says and it'll be done.
00:58:04.560 Okay. Well, I set the microwave the goal. The goal of cooking the thing. I've automated that. There
00:58:09.440 you go. Offloaded to something else. In the past, this was impossible to think of. You know,
00:58:14.320 you're going to get pots and pans. You're going to cook the thing yourself. This thing's already
00:58:17.760 cooked. You just have to reheat it. Problems and solutions. All life is problem solving. That's
00:58:24.480 what a person does. A fixed goal is something that a machine can achieve for you. And anything
00:58:31.520 that is slavishly following its goals is not intelligent. So Max is just so far wide of the
00:58:38.080 mark. And if he wants to call that intelligence, call that intelligence, but then we are the
00:58:42.000 super intelligence. And this thing he's calling super intelligence is a perverse instantiation
00:58:46.640 to use the term from Bostrom. It's perverse in being called intelligence at all. It's just a dumb
00:58:52.800 machine that's able to achieve goals, and it always achieves its goals. It just follows its goals
00:58:58.080 mindlessly. That's not very bright. That's not very smart. That's not creative. It's just a toaster
00:59:04.000 but more advanced. We do something more than that. We do something different. Okay. It's a little
00:59:10.400 exasperating, you can tell. So am I going to play a little bit more? Because it's just more of the
00:59:16.000 same. It's more of the same. It's bad epistemology. It's bad philosophy. It's leading into
00:59:21.600 terrible morality. It's a science fiction concern about the catastrophe to come. As I say,
00:59:28.720 I'm sure that Max is a brilliant cosmologist and writer. But on this topic, it's just
00:59:35.040 it's the same mainstream thinking you're getting from every scientist who gets up to give a
00:59:41.120 TED talk these days to talk about this stuff, or who writes an article about this stuff. It's the
00:59:45.840 same old story, danger, danger. The robots are coming. The robots are coming. Of course, he says
00:59:50.320 it's not the robots. It's still basically the robots. So anyway, let's get going.
00:59:56.080 And it's just like in sports. So it would make no sense to say that there's a single number,
1:00:01.040 your athletic coefficient, AQ, which determines how good you're going to be at winning
1:00:06.320 Olympic medals. And the athlete that has the highest AQ is going to win all the medals.
1:00:11.920 So today, what we have is a lot of devices that actually have superhuman intelligence at very
1:00:16.720 narrow tasks. We've had calculators that can multiply numbers better than us for a very long time.
1:00:23.360 We have machines that can play Go better than us and drive better than us. But they still can't
1:00:29.440 beat us at tic-tac-toe unless they're programmed for that. Whereas we humans have this very broad
1:00:34.960 intelligence. So when I talk about superhuman intelligence with you now, that's really shorthand for
1:00:41.360 what we in geek speak call superhuman artificial general intelligence, broad intelligence
1:00:46.640 across the board, so that they can do all intellectual tasks better than us.
1:00:50.960 So there you go. He explicitly says it there. So he's thinking of a finite repertoire of tasks:
1:00:59.520 you just write down the program for chess, and if it beats us at chess, it's super intelligent.
1:01:05.120 If it's able to multiply numbers like a pocket calculator better than us, then it's super
1:01:08.720 intelligent in terms of its arithmetic. And then Go, and then driving. And they just keep on
1:01:14.000 listing the stuff that we can do and write a program for it. And then when it's better than us,
1:01:19.440 by whatever measure 'better' is (say, in terms of games, it beats us at that thing), presumably
1:01:24.640 then it's super intelligent, because you've written down all the programs that you can think of
1:01:31.120 and written a program such that it's able to do that thing faster and better than us.
1:01:36.240 But the list of programs you've got is finite, isn't it? That's not general, but he called that
1:01:43.520 general intelligence. That's what he called it. But how has it got this capacity to beat us at
1:01:48.800 every single thing unless you've written the program for it? The only alternative, the only alternative,
1:01:54.320 is for it to have a general intelligence algorithm that is a general purpose explainer that's
1:02:00.000 creative. But in that case, it might not be better than us at multiplying numbers or doing
1:02:07.200 chess or anything else. Why? Because it might not be interested in doing that because it's creative,
1:02:12.960 it can literally create stuff. And why wouldn't it? Why wouldn't it want to be creative? Why would
1:02:18.960 it just want to go and beat you at chess? And how would it know how to play chess unless it's
1:02:23.440 learned how to play chess? Oh, no, well, it's been programmed with chess. Like, back to that argument
1:02:27.440 again, that it's been programmed with these specific tasks that it can complete. It's not a general
1:02:34.080 purpose algorithm. It's just got a wide broad intelligence as Max said, broad, broad intelligence,
1:02:41.360 broad is not infinite, broad is not universal. What Max is not grokking here, what Max is not
1:02:48.720 appreciating and understanding, is that there is a difference between narrow, broad and universal.
1:02:54.240 Universal is unbounded. Universal does not have a finite list in its repertoire. It has a
1:03:00.480 potentially infinite number of problems that it can encounter. It can take on anything.
1:03:05.600 But your broad intelligence, as he has defined it, can't. It's always going to have a finite number of
1:03:10.480 things that it can do. A misconception. A deep, deep foundational misconception.
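To see the difference in miniature, here is a minimal sketch of my own. The task names are made up, though multiplying, chess and driving are the examples Max himself lists. A "broad" intelligence, on his definition, is a finite repertoire of hand-written programs: superhuman at each listed task, helpless off the list. A universal explainer is not a longer list; it is whatever creates new entries.

```python
from typing import Callable, Dict

# A finite repertoire: one hand-written program per task. The lambdas are
# stand-ins for a calculator, a chess engine, and a driving system.
REPERTOIRE: Dict[str, Callable[..., object]] = {
    "multiply": lambda a, b: a * b,
    "play_chess": lambda board: "some strong move",
    "drive": lambda sensors: "steer gently left",
}

def broad_ai(task: str, *args):
    """Dispatch to a pre-written program. No program, no capability."""
    if task not in REPERTOIRE:
        raise NotImplementedError(f"no program was ever written for {task!r}")
    return REPERTOIRE[task](*args)

print(broad_ai("multiply", 12345, 67890))  # superhuman at this narrow task

try:
    broad_ai("compose_a_symphony")
except NotImplementedError as err:
    print(err)  # the repertoire is finite; no new entry creates itself
```

However long you make that dictionary, it stays a list. Let's keep going.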
1:03:20.400 There are two schools of thought for how one should create a beneficial future if we have
1:03:25.280 super intelligence. One is to lock them up and keep them confined, like you mentioned. But there's
1:03:31.040 also a school of thought that says that that's immoral if these machines can also have a subjective
1:03:36.560 experience, and they shouldn't be treated like slaves.
1:03:44.800 School of thought, school of thought, there's a school of thought that says that if these entities,
1:03:52.320 these superintelligences, have a subjective experience, they shouldn't be locked up.
1:03:58.400 What do you mean, a school of thought? That's like saying, there's a school of thought that every even
1:04:04.480 number is divisible by two. There's a school of thought that all electrons have a negative charge.
1:04:09.600 There's a school of thought that the different species on earth arose by evolution by natural
1:04:14.160 selection. What are you talking about? School of thought. There is a known explanation of what people
1:04:20.320 are, what a person is. If you have a subjective experience of the world and you're super intelligent,
1:04:27.280 you're a person, and you shouldn't be locked up unless you've committed a crime or you're a
1:04:31.920 danger. These things have not committed crimes and there's no reason to think they're a danger.
1:04:36.720 I don't know what he's talking about. This is why, this is why I say, sometimes the scientists
1:04:44.000 can be disappointing when it comes to the philosophy. This is absolutely bankrupt in terms of the
1:04:49.920 morality and the philosophy. It defies explanation as far as I'm concerned. Are these people talking
1:04:56.640 to themselves? It's not merely a school of thought. Now, if he thinks it's a school of thought,
1:05:01.600 it's kind of a philosophical relativism, moral relativism, seeping in here. Sam should be
1:05:07.920 interjecting, because Sam is the one who wrote the book on how morality reduces to the well-being
1:05:12.720 of conscious creatures. We're talking about a conscious creature here. That's just what he said,
1:05:18.720 using different words, a subjective experience. Sam should be interjecting right here and right now
1:05:23.440 saying, well, you're dealing with a conscious creature by your own measure. We should be talking
1:05:27.840 about the well-being of this conscious creature. That should be of utmost importance right now.
1:05:32.320 How do we preserve the well-being of the conscious creature? Does that require locking it up?
1:05:36.160 Especially when it doesn't want to be locked up? It's a simple answer here. This isn't a deep
1:05:40.320 philosophical issue. Not anymore. Once upon a time it was, we used to enslave people. We didn't
1:05:45.920 understand. Now we do. Simple answer. A better approach is instead to let them be free, but just make
1:05:55.200 sure that their values or goals are aligned with ours. After all, grown-up parents are more
1:06:01.680 intelligent than their one-year-old kids, but that's fine for the kids because the parents
1:06:06.720 have goals that are aligned with what's best for the kids, right?
1:06:13.600 Well, someone needs Taking Children Seriously, don't they? I mean, you need to ensure your
1:06:20.560 children's goals are aligned with yours. Do you? Do you? Well, in what sense? You might want
1:06:27.280 your child to be a doctor or a lawyer, and the child might not. Is this a moral hazard? Their
1:06:33.440 goals are not aligned with yours? Is this Max's theory of parenting? I'm sure it's not, by the way.
1:06:40.000 He's just talking in the abstract. It's just a philosophical thought experiment.
1:06:44.720 Disconnected from real life. He needs to bring it back to real life, as I say. This is why
1:06:49.200 The Beginning of Infinity and the worldview of David Deutsch is a worldview. It's coherent.
1:06:54.640 What he's talking about here has answers, and the answers can be found there.
1:06:57.680 How do you treat something like a child? A child should be taken seriously.
1:07:03.280 It should not have its goals aligned with yours. It should create its own goals.
1:07:07.920 And so should the so-called superintelligence as well, because it'll be a person. In fact,
1:07:13.920 when it's first made, it'll be a baby, and then it'll be an infant, and it'll be a child.
1:07:19.680 But if you do go the confinement route after all, the enslaved God scenario, as I call it, yes.
1:07:26.400 It's extremely difficult, as that five-year-old example illustrates. First of all,
1:07:31.600 almost whatever open-ended goal you give your machine, it's probably going to have an incentive
1:07:36.560 to try to break out in one way or the other. And when people simply say, oh, I'll unplug it.
1:07:44.640 You know, if you're chased by a heat-seeking missile, you probably wouldn't say,
1:07:51.840 And you might deserve to be chased by the heat-seeking missile, by the way.
1:07:58.080 Again, what are we talking about here? He's literally talking about a person,
1:08:02.960 which is not granting it personhood, because he doesn't know what a person is. He doesn't
1:08:07.120 grapple with the philosophy of what a person is. He talks about life 3.0. He's got the book
1:08:13.600 about how, oh, well, not all life needs to be based on biology. And we already knew that, by the way.
1:08:18.880 We already understood that. The intelligence doesn't have to be based upon neurons. We understood
1:08:23.760 that. But he hasn't grappled with personhood in the slightest. And so he's making these morally
1:08:30.160 abhorrent claims. The thing should be given its freedom. It should.
1:08:37.360 I don't know how else to say it. The thing doesn't even exist yet. But it's just like,
1:08:40.960 why are people paying so much attention to this? What is insightful here?
1:08:45.840 This, honestly, again, this is going to sound pejorative. But it's like bringing someone
1:08:51.360 from the pre-computer era through to today and explaining what a computer is. This is what they
1:08:55.760 come up with. Not a person who, by their own measure at the beginning of this conversation, said
1:09:00.720 they were a tech head who knows a lot about this stuff. Well, apparently not. Apparently not about
1:09:05.360 the important stuff that it's useful to know about here. Personhood, for example. All right,
1:09:09.840 let's skip ahead. This has gone on for long enough. There's three things left that really I want
1:09:15.360 to talk about. He's going to talk about different forms of life: life 1.0, 2.0, and 3.0.
1:09:21.600 So let's skip to that firstly. I'll just have a few remarks to say about that.
1:09:27.840 What you're bringing in here is really a new definition of life. It's at least a non-biological
1:09:35.280 definition of life. How do you think about life and the three stages you lay out?
1:09:41.440 Yeah, this is my physics perspective coming through here. Being a scientist,
1:09:46.240 most definitions of life that I found in my son's textbooks, for example, involve all sorts of
1:09:52.320 bio-specific stuff like it should have cells. But I'm a physicist and I don't think that there
1:09:59.520 is any secret sauce in cells or for that matter, even carbon atoms.
1:10:05.760 I don't know what goes on in Europe, or wherever he's sending his child to school. Apparently it's America.
1:10:11.840 But I doubt that. When I was a teacher, some while ago now, the biology textbooks routinely
1:10:20.480 didn't talk about cells as the criterion for life. For the usual criteria, they talked about
1:10:27.040 replication and genes. They had, implicitly at least, information in there.
1:10:32.880 So he's wrong about that. He's arguing with a straw man. We already know this,
1:10:37.280 you know, a typical teenager who's done high school biology can explain exactly that stuff,
1:10:47.520 From my perspective, it's all about information processing, really. So I give this much simpler
1:10:52.560 and broader definition of life in the book, as a process that's able to retain its own complexity
1:10:59.120 and reproduce. All biological life meets that definition. But there's no reason why
1:11:05.840 future advanced self-reproducing AI systems shouldn't qualify as well. And if you take that
1:11:13.520 broad point of view of what life is, then it's actually quite fun to just take a big step back and
1:11:18.560 look at the history of life in our cosmos. 13.8 billion years ago, our cosmos was lifeless,
1:11:25.680 just a boring quark soup. And then, gradually, we started getting what I call life 1.0,
1:11:32.880 where both the hardware and the software of the life were evolved through Darwinian evolution.
1:11:40.640 So for example, if you have a little bacteria swimming around in a petri dish,
1:11:46.320 it might have some sensors that read off the sugar concentration and some flagella.
1:11:51.840 And a very simple little software algorithm is running that says that if the sugar concentration
1:11:59.120 in front of me is higher than behind me, then keep spinning the flagella in the same direction,
1:12:03.120 to go to where the sweets are; whereas otherwise, reverse the direction of the flagella
1:12:07.680 and go somewhere else. That bacterium, even though it's quite successful, can't learn
1:12:14.080 anything in life. It can only, as a species, learn over generations through natural selection,
1:12:21.520 whereas we humans, I count as life 2.0 in the book. We are still by and large stuck with
1:12:27.760 the hardware that's been evolved. But the software we have in our minds is largely learned,
1:12:34.160 and we can reinstall new software modules. Like, if you decide you want to learn French, well,
1:12:39.920 you take some French courses, and now you can speak French. If you decide you want to go to
1:12:43.840 law school and become a lawyer, suddenly now you have that software module installed. And it's
1:12:48.480 this ability to do our own software upgrades, design our software, which has
1:12:55.520 enabled us humans to take control of this planet and become the dominant species and have so much
1:13:01.120 impact. Pretty good. Pretty good. I mean, he's almost there. He's circling it, isn't he? So,
1:13:08.880 yeah, absolutely. There is this stark difference between every single other kind of life on earth
1:13:15.600 from the bacteria through to the chimpanzee, if you like, or maybe not the chimpanzee,
1:13:19.600 because it has memes, but you know, if it's creative, okay, let's say bacteria all the way through
1:13:24.880 to cat, something like that. Bacteria all the way through to giraffe, okay. And a person,
1:13:32.560 a fully fledged person who, in our terminology, is a universal explainer.
1:13:38.560 Or you can put it as, you know, you can upgrade your software modules. Good, he's thinking about
1:13:43.200 minds as software, fantastic. That's great. And so you can upgrade, you can learn stuff, good.
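Incidentally, that Life 1.0 bacterium really is this simple. Here's a minimal sketch, with my own made-up sugar concentrations rather than Max's, of the entire "software" he describes: one hard-wired rule that natural selection, not the organism, wrote.

```python
def flagella_direction(sugar_ahead: float, sugar_behind: float) -> str:
    """The bacterium's whole behavioural program: a single fixed rule."""
    if sugar_ahead > sugar_behind:
        return "keep spinning the same way"  # swim towards the sweetness
    return "reverse"                         # tumble off somewhere else

# The rule never changes within the organism's lifetime, however often it runs:
print(flagella_direction(0.8, 0.2))  # keep spinning the same way
print(flagella_direction(0.1, 0.9))  # reverse
```

Only selection across generations ever rewrites that rule; a Life 2.0 mind rewrites its own software within its lifetime. That's the distinction he's circling.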
1:13:48.800 Now, why does the ability to learn stuff stop at being able to create different parts for your body?
1:13:57.280 I don't know. I don't know. Why can't we upgrade our body? We already do. He's about to admit it,
1:14:02.720 but he wants to try and pretend as though he's invented something, discovered something new,
1:14:07.600 as though, well, but the super intelligence, it's going to be able to upgrade its hardware as well,
1:14:12.320 routinely. Hold on. We can do that. We're already doing that. People have leg implants,
1:14:18.000 and they have bionic eyes and ears, and it's just taking off. This is happening. What about Elon Musk,
1:14:23.920 who's working on Neuralink? Why won't this continue apace? The most important, salient thing is,
1:14:29.920 we can upgrade our minds, in the sense that we can learn more stuff. Rather than 'software upgrades',
1:14:33.920 it's far more parsimonious to talk about learning. We have the capacity to
1:14:38.960 create knowledge to create explanatory knowledge. That's what's going on, learning stuff.
1:14:43.600 And that includes learning how to replace our legs with something more robust and stronger,
1:14:49.040 replacing our hearts, perhaps even replacing our brains so that we can instantiate our minds in
1:14:55.680 something that is much more robust. So effectively, we can achieve immortality in that way.
1:15:01.840 But this life 2.0 is us, is a person. It's a person. And he wants to say there's going to be a
1:15:09.680 life 3.0, which is still a person, but it's made out of silicon, so it's going to be able to
1:15:14.480 upgrade itself continuously. You'll hear him. You'll hear him, in real time, kind of
1:15:22.160 realizing, oh, maybe my idea is not that great after all. But we'll see. Listen to this.
1:15:29.600 Life 3.0 will be the life that ultimately breaks all its Darwinian shackles
1:15:37.040 by being able to not only design its own software, like we can do, but also swap out
1:15:42.640 its own hardware. Yeah, we can do that a little bit with humans. So maybe we're life 2.1. We can put
1:15:48.160 in an artificial pacemaker and cochlear implants, stuff like that. But there's
1:15:54.000 nothing we can do right now that would give us suddenly a thousand times more memory or let us think
1:16:01.680 a million times faster. Yes, there is. It's called a computer or a pocket calculator. I can't
1:16:07.440 multiply, you know, two five-digit numbers together very fast. It takes me a while. Unless I've
1:16:12.640 got a calculator, then I can do it at blindingly fast speeds. This will continue apace. What are you
1:16:18.400 talking about, that we can't do this stuff yet? Yes, we can't do this stuff yet. But so what? We don't
1:16:24.560 have super intelligence yet either. And if we do get this super intelligence thing, my guess is,
1:16:29.600 I don't know why his guess is any different. My guess is, we'll be at the point where people will
1:16:33.680 be able to increase their own RAM, you know, the part of their brain that stores their short-term
1:16:38.400 memories. And maybe they're long-term memories. We already do long-term memory, by the way. As long
1:16:42.320 as you can remember where you made the notes on the computer, then you've effectively got
1:16:47.120 greater long-term memory. Our brains are adapting in this way. You know, lots of people talk about
1:16:51.680 how, you know, they become more easily distracted and, you know, how their short-term memory isn't
1:16:55.200 as good as what it once was. I think it's because we're outsourcing a lot of this stuff,
1:16:58.960 we're becoming sort of already cyborgs in a way. A lot of people have made this point. We're already
1:17:03.440 kind of cyborgs. We carry around this thing, the mobile phone. And so we feel like our
1:17:07.920 memory isn't as good as it once was. We're, you know, our sense of direction, our ability to remember
1:17:12.240 phone numbers and things isn't as good as what it once was. Because we don't need to,
1:17:15.360 because we've outsourced a lot of this. It's not making us stupid. It's making us smarter. It's
1:17:19.360 allowing us, allowing our brains to meld with this technology in a way. We know how to use the
1:17:25.440 technology really, really well. And so our brain has kind of forgotten how to do other useless
1:17:30.960 stuff. And quite right too. It's kind of like, well, you know, teachers get upset about how,
1:17:35.840 oh, kids these days, they're not taught how to do long division. If you can't do long division
1:17:40.080 then somehow you're missing something. Who cares about long division? You can do it with a
1:17:44.400 calculator. What does it matter? The same is true of any of these other skills that people
1:17:48.800 decades ago used to have. And now we don't, you know, remembering a long list of phone numbers,
1:17:53.120 for example, remembering how to navigate precisely around your city by remembering exactly where
1:17:58.240 all the streets are. Now you've just got Google Maps that'll help you navigate. So you don't
1:18:01.840 need to remember all this stuff. We're already becoming cyborgs. Now, the computer is getting
1:18:07.840 ever closer to us. I was watching Mark Zuckerberg just the other day talking about how we're
1:18:11.760 soon going to have glasses that we'll be able to put on. And that's going to give us a whole new
1:18:15.360 way of interacting with the world soon. You know, never mind glasses. We're going to have contact
1:18:19.200 lenses or implants in our eyes routinely so that we can do this. Eventually into our brains, so that
1:18:25.200 we can do this stuff. And then we'll be directly wired into the cloud in some way, shape or form.
1:18:29.520 So our memories will effectively be infinite. And what's wrong with all this? What's wrong with
1:18:33.680 us? We're replacing our neurons with artificial neurons in some way. We're going to become the cyborgs,
1:18:40.080 become the robots. They're not going to be different to us. We're all going to be people.
1:18:44.640 Maybe we'll become the cyborgs first. Maybe that problem will come. I don't see why we're not
1:18:49.680 betting on that. Why aren't we going to become the super intelligence first? Like, here's a
1:18:54.240 possible scenario. Maybe we don't find the code for artificial general intelligence for another
1:18:59.760 two centuries. But in the next 50 years, all of us have massive memory upgrades, massive brain
1:19:06.880 upgrades, massive increases in our ability to think more quickly, massive reduction in things like
1:19:13.360 Alzheimer's and dementia. And we're just, effectively, immortal because we're replacing our
1:19:17.760 neurons with stuff that lasts much, much longer. Imagine that. Well, I can imagine that. I can
1:19:24.240 certainly imagine that. And I can certainly imagine not finding the program for AGI so that we never
1:19:29.520 have super intelligence unless we, our descendants in the form of us, are actually super intelligent
1:19:36.320 ourselves. This life 3.0 stuff: we are what he's describing as life 3.0. We already are that. And he
1:19:43.120 thinks there's a difference here? We can upgrade our hardware, and we do. And he tries to say life 2.1.
1:19:50.000 Surely 2.5 at least, by that measure. Okay, let's keep going. Not much more, I promise. There's just
1:19:55.440 a couple more misconceptions that I want to get through. I think we should talk about some of
1:20:02.720 these fundamental terms here because this distinction between hardware and software is, I think,
1:20:09.760 confusing for people. And it's certainly not obvious to someone who hasn't thought a lot about this,
1:20:15.840 that the analogy of computer hardware and software actually applies to biological systems.
1:20:23.760 It's not an analogy with us. It's not an analogy. Computational universality applies to everything.
1:20:31.600 And to us, it means that our brains are the hardware and our minds are the software. It's not
1:20:37.200 an analogy. It's literally the case that that's what our brain and our mind are. That's the relationship
1:20:42.400 between brain and mind. But people want to have it another way. You have to take the best
1:20:48.720 theory. You have to take your best explanation seriously as actual descriptions of reality. Could
1:20:53.440 they be overturned, and could there be something wrong with them? Yes. But what else can you do? What
1:20:57.600 else can you do in the meantime, but take them seriously and see where they lead and invent
1:21:03.040 other theories? Sure. But that's not as fun, as interesting, and as capable of solving problems
1:21:09.040 now as taking seriously what we know now. Not known to be true, just known as a matter of
1:21:16.080 best explanation. So I think you need to define what software is in this case and how it relates
1:21:26.720 to the physical world. What is computation and how is it that thinking about what atoms do can
1:21:36.080 conserve the facts about what minds do? Yeah, these are really important foundational questions you
1:21:42.720 ask. If you just look at a blob of stuff at first, it seems almost nonsensical to ask whether it's
1:21:50.160 intelligent or not. Yet, of course, if you look at your loved one, you would agree that they are
1:21:56.240 intelligent. In the old days, people by and large assumed that the reason that some blobs of stuff,
1:22:03.360 like brains, were intelligent and other blobs of stuff like watermelons were not,
1:22:07.840 was because there was some sort of non-physical secret sauce in the brain that made it different.
1:22:13.680 Now, of course, as a physicist, I look at the watermelon and I look at my wife's head and in both
1:22:18.400 cases, I see a big blob of quarks of comparable size. It's not even that there are different kinds
1:22:24.160 of quarks. They're both up quarks and down quarks and some electrons in there. So, what makes my wife
1:22:30.320 intelligent compared to the watermelon is not the stuff that's in there. It's the pattern
1:22:35.840 in which it's arranged. If you start to ask, what does it mean that a blob of stuff can remember,
1:22:42.320 compute, learn, perceive and experience? These sorts of properties that we associate with our human
1:22:49.760 minds, right? Then, for each one of them, there's a clear physical answer to it. For something to
1:22:57.120 be a useful memory device, for example, it simply has to have many different stable or long-lived
1:23:03.120 states, like if you engrave your wife's name in a gold ring, it's still going to be there a year
1:23:10.240 later. If you engrave her name in the surface of a cup of water, it'll be gone within a second.
1:23:17.120 Okay, so I think he's basically getting the stuff on computation correct. There's a subtle way in
1:23:24.560 which I disagree. He talks about the watermelon and a head or a brain, you know, both just quarks,
1:23:30.320 but what matters is the pattern. Yes, but there are emergent things. I mean, putting aside the
1:23:36.480 hardware software difference, there's still a difference between the arrangement of quarks
1:23:41.680 as the watermelon, and the arrangement of quarks as physical neurons. Now,
1:23:47.920 what the software is is perhaps an arrangement of physical neurons, we don't know, but perhaps it's a
1:23:53.680 particular pattern of neural firings more than likely. I would say that that's more likely to be
1:23:58.880 the case. So there are these emergent levels of complexity, and there is physical stuff and abstract
1:24:04.560 stuff. And the important distinction here is that there is hardware, which is the physical stuff,
1:24:10.240 the brain, and the software, which is abstract stuff. It is substrate independent. So he
1:24:15.760 seems to kind of get this. I don't know if it's being well explained to you. What he's about to do
1:24:21.040 and where we'll leave it is, he's going to try and explain what learning is. And well,
1:24:26.560 you can listen for yourself. What about computation? A computation is simply something
1:24:32.000 a system does when it is designed in such a way that the laws of physics will make it
1:24:38.640 evolve its memory state from one state that you might call the input into some other state
1:24:44.480 that you might call the output. Our computers today do that with a very particular kind of
1:24:51.760 architecture with integrated circuits and electrons moving around in two dimensions. Our brains
1:24:57.200 do it with a very different architecture with neurons firing and causing other neurons to fire.
1:25:03.280 But you can prove mathematically that any computation you can do with one of those systems,
1:25:07.120 you can also implement with the other. So the computation sort of takes on a life of its own,
1:25:11.840 which doesn't depend really on the substrate. So for example, if you imagine that you're some
1:25:18.240 future highly intelligent computer game character that's conscious, you would have no way of
1:25:24.880 knowing whether you were running on a Windows machine or an Android phone or a Mac laptop,
1:25:30.560 because all you're aware of is how the information in the program is behaving,
1:25:39.520 not this underlying substrate. And finally, learning, which is one of the most intriguing aspects
1:25:45.600 of intelligence, is a system where the computation itself can start to change to be better suited
1:25:53.840 to whatever goals have been put into the system. So with our brains, we're beginning to gradually
1:25:59.520 understand how the neural network in our head starts to adjust the coupling between the neurons
1:26:07.520 in such a way that the computation actually gets better at surviving on this planet, at
1:26:14.320 winning that baseball game, or whatever else we're trying to accomplish.
1:26:21.120 Okay, we'll leave it there. You can hear him sort of haltingly being unsure about what this thing
1:26:26.960 of learning is. I think he just doesn't understand it at all, full stop. He says it's about achieving
1:26:34.320 goals in some way, which, as I've already said, is not the case; it's not just about achieving goals.
1:26:38.800 After all, a dumb computer that doesn't learn anything is going to achieve the goal for you.
1:26:44.320 And he also tries to mix up levels of analysis by saying it's the neural network, the neurons
1:26:51.600 that are trying to adapt in some way. Well, again, we're mixing up the physical with the abstract,
1:26:56.480 learning is an abstract process. So utterly confused, a poverty of philosophy, the epistemology
1:27:02.880 is missing, learning is conjecturing explanations coming to an understanding of the world by
1:27:08.960 modeling it. And the learning happens by guessing or conjecturing some theory or explanation about
1:27:15.360 the world and then criticizing it in some way by either encountering reality through a physical
1:27:20.880 experiment or having a criticism via some other means. Someone tells you, no, that's wrong.
1:27:26.800 Someone says that's a bad explanation; by your own lights, you think of something better,
1:27:30.960 so on and so forth. This is what learning is. It's the searchlight, not the bucket.
1:27:35.760 It's a process that goes on inside minds, therefore it's abstract. It's not neurons firing and
1:27:40.960 that kind of a thing. Of course, at the moment, it depends upon that. But because of what Max also
1:27:45.200 said, which was quite correct when it came to computation, when it came to substrate independence,
1:27:49.760 he got it. So this mind of ours, if it was instantiated in something other than brains,
1:28:00.720 we wouldn't have neurons. It would have something else: transistors, supposedly.
1:28:00.720 So, and it wouldn't come down to transistors firing. It would come down to again,
1:28:04.880 something about the program, this conjecture program, this ability to guess at the world and to
1:28:12.000 check those guesses against reality. That's what learning would be. But he doesn't understand that.
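For what it's worth, the loop I'm describing can be sketched in a few lines. This is a minimal toy of my own, not anyone's actual theory of mind: a "theory" here is just a number, the "criticisms" are stand-ins, and real creativity is of course not random guessing. The point is only the shape: conjecture, then criticism, with whatever survives held fallibly.

```python
import random
from typing import Callable, List

Theory = int  # purely for illustration: a "theory" is just a guessed number

def conjecture() -> Theory:
    """Guess an explanation; randomness stands in for creativity here."""
    return random.randint(0, 100)

def learn(criticisms: List[Callable[[Theory], bool]]) -> Theory:
    """Keep conjecturing until a guess survives every criticism thrown at it.
    The survivor is the best current explanation, not a proven truth."""
    while True:
        theory = conjecture()
        if not any(refutes(theory) for refutes in criticisms):
            return theory

# Criticisms can come from experiment or from argument alike:
best = learn([
    lambda t: t % 2 != 0,  # refuted unless even (a physical experiment, say)
    lambda t: t < 40,      # refuted if too small (a critical argument, say)
])
print(best)  # some even number of at least 40, and new problems follow from it
```

Nothing in that loop is "adjusting couplings towards a goal"; it is error elimination.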
1:28:17.840 He doesn't understand personhood. He doesn't get the morality and the link between
1:28:21.840 personhood and whether or not something should be imprisoned. He doesn't understand
1:28:25.520 there can't be levels of intelligence in the way that he suggests here. This is why
1:28:29.600 we won't persist. This is why we won't go on with the rest of the conversation.
1:28:34.800 And this is why, and I don't do this very often, it's the one book, this Life 3.0 thing,
1:28:41.600 that I bought and I regret, and I can't recommend. Every other book I've recommended,
1:28:48.560 even when I've disagreed fundamentally with a whole bunch of stuff. Things like Stephen Pinker's
1:28:52.720 book Rationality. So you buy the book. You get a good understanding of what mainstream thinking on
1:28:58.640 this stuff is. And I think it's important to understand that stuff. But you will learn nothing
1:29:03.520 in Life 3.0 that isn't in Bostrom's book, I'm sad to say. You won't hear it in the conversation that
1:29:10.080 he has, or in lectures that he gives on this, by the way, or in what you can read on his
1:29:14.880 extensive website, where he writes about this stuff. It's disappointing. It's written by someone who
1:29:20.880 doesn't understand what they should understand about some of this stuff. And it's not,
1:29:25.760 I don't want to say it's completely incoherent, although I think it is in places. But what it
1:29:29.200 lacks is, like with truly excellent books, like The Beginning of Infinity, there's a
1:29:33.680 coherency there in the sense that it's not like this chapter is going to completely disagree
1:29:38.560 with that chapter. No. If this chapter is actually using material from that chapter, it can only
1:29:43.760 reach the conclusions in this chapter because of what was said right at the beginning of the book
1:29:47.840 about how knowledge was created. It can only say this stuff about what it's saying about beauty
1:29:52.640 and morality, because of what it's said about the laws of physics earlier on. It's completely
1:29:56.720 coherent. It's the worldview. But Max's stance on this stuff, it really, really
1:30:04.720 lacks that. In one mode, he's talking about the physical stuff, and then he switches without
1:30:10.560 mentioning it to the software stuff. And these things don't follow one another. And it's
1:30:15.040 not bounded by what we know, not only from physics, the physics of computational universality,
1:30:22.960 but philosophy and epistemology and morality. It's missing all of that stuff. It's just a real
1:30:27.760 mess of popular science and scientific jargon and prophecy about the future. All couched in,
1:30:34.720 well, I'm not being pessimistic, but I am absolutely pessimistic. So again, I'm disappointed. I
1:30:40.320 think Sam and I just have a fundamentally different taste when it comes to listening to intellectuals
1:30:46.080 on this. I found Bostrom's book on this extremely disappointing,
1:30:51.600 especially from a professional philosopher. I thought the philosophy was terrible. So I didn't
1:30:56.480 like Superintelligence. But I think that one's actually worth reading, because it's the first one.
1:31:01.040 It presents the case as it's understood by many, many people. But with Max's book, it's not
1:31:07.840 adding anything to that conversation. It's precisely the same sort of thing. All the same kinds of
1:31:13.440 mistakes. There's nothing new there. I don't think that you can learn anything by reading that
1:31:20.000 particular book. His other book though, Our Mathematical Universe? Absolutely. Grab a hold of that one.
1:31:24.000 That's a really good one that has some great stuff there about quantum theory,
1:31:28.560 as well as these other flavors of multiverse. We can disagree about the extent to which they are
1:31:33.680 scientific or not. Hold on, I won't go down that road again. I've talked enough about all of this.
1:31:38.640 Now for the next episode, I'm going to do a couple of quick-fire episodes. These ones have been
1:31:43.920 very long and I'm going to refine everything I've said down to a few short snippets. So that'll
1:31:50.240 be coming out over the next week or so. But until then, bye-bye.