00:00:13.600 Hello everyone, Chapter 7 today. Art official creativity. Now this is a chapter I really have been
00:00:20.240 looking forward to possibly more than most. I said all the way back when I started doing these
00:00:25.680 with Chapter 1. The one purpose of this was actually not only to help others to understand
00:00:30.960 what was being said in various places in the beginning of infinity, but also to clarify in my own
00:00:35.040 mind some of the more subtle arguments being made and in this chapter there is to my mind anyway.
00:00:40.880 Some very subtle yet powerful arguments being made. I fully admit upfront and this is kind of exciting.
00:00:48.240 I actually didn't understand a large part of the chapter until last year. I had to read it,
00:00:53.200 read it again and discuss it to really figure out why, why I wasn't quite getting some of what
00:01:00.080 was going on here. I'll get to exactly what that was that I didn't understand, but let me just
00:01:06.480 press for sit by saying I thought that we by we, I mean, a civilization, a community of scientists
00:01:12.880 and myself personally, I thought we understood evolution by natural selection. I mean, I thought
00:01:18.720 there was this odd open question here or there, but I didn't think that in biology that evolution
00:01:23.280 by natural selection, neo Darwinism, was anything unlike general relativity. I thought they were,
00:01:30.000 you know, as general relativity was to physics, evolution by natural selection was to biology.
00:01:35.600 But no, this is quite wrong. This is wrong with philosophical reasons, but it's also wrong for the
00:01:39.520 reason that we don't understand evolution by natural selection to quite the degree that we
00:01:44.400 understand general relativity. So, and there's a way to know that we don't understand one compared
00:01:51.040 to the other. There's actually a rule and David articulates the rule in this very chapter,
00:01:57.280 yet another one of his discoveries. There's a way of distinguishing between things we truly do
00:02:03.200 have a good grasp of, like how planetary orbits work, that's the theory of general relativity,
00:02:08.000 and how evolution by natural selection works. That's neo Darwinism. There's a difference to
00:02:14.240 how well we understand these two theories. It's not the same. So again, the way that we know,
00:02:19.920 that we understand one really well, and the way that we know we don't understand one really
00:02:23.760 well, there's a good way of dividing or separating these two kinds of understanding,
00:02:29.760 which David's going to present in this chapter. Now, it strikes me often in this book that
00:02:35.520 entire paragraphs had a PhD student say, thought of them first, they probably could have
00:02:42.560 extended them out to 40,000 plus words in writing a thesis all in their own, and earn their
00:02:48.320 PhD on the basis of some of the discoveries that are published in this book. For example,
00:02:53.120 David's answer to what is a person could certainly be a paper in a philosophical journal all
00:02:57.360 on its own. As could the answer to how do we know when we've understood a process? These are
00:03:02.960 answers to deep philosophical questions, and they come at you one after another in the beginning
00:03:08.160 of infinity. And we will see that in this chapter especially, and without labour in the point too
00:03:12.720 much, it's another ongoing reason for this video series and the podcasts. Now, although this
00:03:18.720 chapter is not one of the longest chapters, indeed, it's equally the second shortest. I'm doing
00:03:23.440 this one in two parts, because I see a reasonably sharp dividing line between a section on artificial
00:03:29.040 intelligence, or more precisely these days, we call it artificial general intelligence, and the
00:03:33.440 second half of the chapter, which is more about artificial evolution. I'll still make remarks about
00:03:39.520 the second part in this first part, but just a flag here that I won't be at the end of the chapter
00:03:44.800 by the end of this episode. Okay, so let's get into it. Chapter seven, David writes,
00:03:52.320 Alan Turing founded the theory of classical computation in 1936, and helped construct one of the
00:03:58.400 first universal classical computers during the Second World War. He's rightly known as the father of
00:04:03.520 modern computing. Babbage deserves to be called its grandfather, but unlike Babbage and Love's
00:04:07.920 likes, Turing did understand that artificial intelligence, AI, must in principle be possible,
00:04:13.520 because a universal computer is a universal simulator. I'll pause there, and this is just my
00:04:19.920 commentary. Universal simulator. Now, why is that important? Well, in this chapter when David says
00:04:27.120 AI, as I've already said, he's using what it once meant, but now that meaning is kind of being
00:04:32.240 perverted in various ways, namely all the computer programs are called that are called artificial
00:04:37.600 intelligence today, aren't. They're not intelligent. A self-driving car is not an example of anything
00:04:43.280 like intelligence. What it's an example of is some very fancy, very specific, highly specific
00:04:49.760 programming, running on extremely fast and sensitive hardware, but it's not an explanation
00:04:54.480 producing machine as we are. That's why we're intelligent. We're kind of ahead of ourselves here,
00:05:00.880 but the point is, and David has written about this elsewhere, if you simply do a Google search of
00:05:06.720 David Deutsch, A on magazine, that's A, E, O, N, and the title of the article we're looking for is
00:05:13.520 how close are we to creating artificial intelligence? Then you'll find something of an update on
00:05:18.720 this. He wrote that article in 2012, so four years after the publication of the beginning of
00:05:23.760 infinity, and then David explains how AI has become kind of a redundant turn since we now use
00:05:29.600 AGI, the G standing for general artificial general intelligence, to differentiate machines that can
00:05:35.840 think in ways like we can. The point here about the universal Turing machine is that it is a
00:05:42.320 universal simulator, which means it can simulate any process whatsoever, including simulating
00:05:47.760 what a human brain does. In that case, the simulation would be that thing. A simulated mind
00:05:54.560 is a mind because they are both abstract things. This is quite different to let's say simulating
00:06:00.720 a bullet in a computer game. A bullet has a very real physical presence and a lack of abstraction
00:06:07.600 of the first kind in stark contrast to a mind. A simulated bullet won't kill you in the real world,
00:06:13.600 but minds are already abstract things running on physical brains, so simulating a mind
00:06:19.600 on a silicon brain, let's say, is to create an actual mind. Okay, back to the book now
00:06:26.320 and David writes. In 1950, in a paper entitled Computing Machinery and Intelligence,
00:06:32.240 he, Turing, famously addressed the question, can a machine think? Not only did he defend the
00:06:40.080 proposition that it can on the grounds of universality, he also proposed a test for whether a program
00:06:44.960 had achieved it. Now known as the Turing test, it is simply that a suitable human judge will be
00:06:50.240 unable to tell whether a program is human or not. In that paper and subsequently, Turing sketched
00:06:55.680 protocols for carrying out his test. For instance, he suggested that both the program and day
00:07:00.320 genuine human should separately interact with the judge via some purely textual medium,
00:07:04.960 such as a teleprinter so that only the thinking abilities of the candidates would be tested.
00:07:09.680 Not their appearance. Turing's test and his arguments set many researchers thinking,
00:07:14.800 not only about whether he was right, but also about how to pass the test.
00:07:18.960 Programs began to be written with the intention of investigating what might be involved in passing it.
00:07:25.040 In 1964, the computer scientist Joseph Wiesenbaum wrote a program called Eliza,
00:07:29.600 designed to imitate a psychotherapist. He deemed psychotherapist to be an especially easy type of
00:07:33.760 human to imitate because the program could then give opaque answers about itself and only ask
00:07:38.400 questions based on the user's own questions and statements. It was a remarkably simple program.
00:07:43.440 Nowadays, such programs are popular projects for students of programming because they are
00:07:47.040 fun and easy to write. A typical one has two basic strategies. First, it scans the input for
00:07:51.920 certain keywords and grammatical forms. If this is successful, it replies based on a template,
00:07:57.040 filling in the blanks using words in the input. For instance, given the input I hate my job,
00:08:02.640 the program might recognise the grammar of the sentence involving a possessive pronoun
00:08:06.320 my and might also recognise hate as a keyword from a building list such as love, hate,
00:08:10.880 like dislike, want, in which case it could choose a suitable template and reply.
00:08:16.640 It might reply, what do you most like about your job? If it cannot pass the input to that extent,
00:08:21.280 it asks a question of its own choosing randomly from a stock pattern, which may or may not depend
00:08:25.360 on the input sentence. For instance, if asked, how does a television work, it might reply,
00:08:29.920 what is so interesting about how does television work? Or it might ask, why does that interest you?
00:08:34.560 Another strategy used by recent internet-based versions of Eliza is to build up a database of
00:08:39.360 previous conversations, enabling a program to repeat phrases that other users have typed in,
00:08:44.480 again choosing them according to keywords found in the current user's input.
00:08:48.160 Now, I remember it and I just paused there. I remember in high school myself writing a program
00:08:52.480 that was basically a personality evaluator. This was a silly thing back when I was sort of a teenager
00:08:58.880 and the user would ask questions like, what is your age? What is your gender? What do you like
00:09:05.520 doing on the weekend out of a multiple choice list kind of thing would you prefer?
00:09:10.160 And at the end, it would back at you what your personality is like and people were
00:09:14.880 unaccountably impressed by how well it was able to assess their personality.
00:09:19.600 Of course, this is kind of true of all psychological personality tests. They basically do the
00:09:26.640 same thing I was doing when I was a teenager but people were being paid lots of money to do
00:09:30.800 their sort of things today as far as I can tell. Basically, they're a kind of chatbot. It's a
00:09:35.200 program telling you something about yourself just feeding back stock responses. There's nothing
00:09:40.880 intelligent behind it. Back to the book, David writes, why isn't bound was shocked that many
00:09:47.280 people using Eliza were fooled by it? So it had passed the cheering test, at least in its
00:09:52.560 might's naive version. Moreover, even after people have been told it was not a genuine AI,
00:09:57.040 that would sometimes continue to have long conversations with it about their personal problems,
00:10:00.480 exactly as though they believe that he had understood them. Why isn't bound wrote a book,
00:10:05.600 computer power and human reason in 1976, warning of the dangers of anthropomorphism when computers
00:10:11.280 seem to exhibit human-like functionality. However, anthropomorphism is not the main type of
00:10:16.800 overconfidence that is beset the field of AI. For example, in 1983, Douglas Hofstadter was subjected
00:10:23.040 to a friendly hoax by some graduate students. They convinced him that they had obtained access to a
00:10:27.680 government-run AI program and invited him to apply the cheering test to it. In reality, one of the
00:10:33.040 students was at the other end of the line, imitating an Eliza program. As Hofstadter
00:10:37.040 relates in his book Metamagical Themors, the student was from the outset displaying an implausible
00:10:42.560 degree of understanding of Hofstadter's questions. Now, I won't read the entire anecdote here
00:10:47.360 about what happened to Douglas Hofstadter with the prank that was played on him. But suffice it
00:10:52.880 to say, as he himself says, he himself says he was willing to concede that the program had way more
00:11:02.080 intelligence than what actually did, and he probably should have been more critical earlier on,
00:11:07.200 a little bit more skeptical about whether or not there was actually an intelligence behind this
00:11:12.400 thing because, of course, there was an actual person there. So what is the best explanation? What
00:11:18.640 is the best explanation if you kind of have a good idea about what the current state of chat
00:11:24.480 bots are or of AI is? If you're interacting with a computer and it comes back at you with a free
00:11:32.320 flowing conversation, do you assume the computer is intelligent? Or if it's purporting to be the
00:11:38.480 computer that's intelligent, do you assume that that's an honest claim or that there's actually a
00:11:43.520 person that a human person, sorry, there's a human person behind all that, rather than just a
00:11:49.680 genuine artificial general intelligence. Well at the moment, we should always presume that there's
00:11:54.560 some prank being played, but there's a way to know whether or not there isn't a prank being played,
00:12:01.200 which we will come to soon. So I'm skipping a bit here and then David writes, programs written
00:12:08.720 today, a further 26 years later, are still no better at the task of seeming to think than Eliza
00:12:16.160 was. They are now known as chatbots and their main application is still amusement, both directly
00:12:22.080 ending computer games. They've also been used to provide friendly seeming interfaces to list
00:12:26.800 of frequently asked questions about subjects like how to operate computers. But I think that
00:12:31.120 users still find them no more helpful than a searchable list of questions and answers.
00:12:36.000 So it's a further 37 years later now, 11 years after this was written and we now have
00:12:43.680 Siri and various clones of Siri that Samsung has come up with and Google and others.
00:12:49.200 And what's the best that we can say about Siri? Not many people really like it.
00:12:55.680 There are some narrow applications for which it tends to be useful,
00:12:58.480 but in fact what David says there where he says, I think that users find them no more helpful
00:13:04.640 than a searchable list of the questions and answers is exactly true. And in fact, people,
00:13:10.160 I think, tend to prefer typing a question into Google that they do asking Siri what the answer is
00:13:17.200 because the voice recognition just isn't up to it for still in the year 2019. It provides some
00:13:24.160 funny answers where it does comprehend the word. So it gets the words right. It doesn't understand
00:13:30.080 the meaning behind the question being asked. And so it can sometimes give very funny answers.
00:13:36.240 Let's continue with the book and David writes, in 1990 the inventor Hugh Lobner
00:13:40.800 endowed a prize for passing the Turing test to be judged and an annual competition.
00:13:45.920 Until the test is passed, a less surprise is awarded each year for the entry judge to be
00:13:50.480 closest to passing. The test is harder to implement than it may first seem. One issue is
00:13:55.280 requiring the program to pretend to be a human is both biased and not very relevant to whether
00:14:01.120 it can think. Yet if it does not pretend to be human, it will be easy to identify as a computer
00:14:06.480 regardless of its ability to think. A related difficulty is that the human participant in the test
00:14:11.680 might intentionally imitate a chat bot as in the hosted hooks and so spoiled the test,
00:14:16.160 which is based on comparing the two. Should the rules nevertheless allow the human to do this
00:14:21.600 in order to mitigate the previous problem? Unfortunately, such subtleties are not relevant at the
00:14:26.320 current state of the art. This is well illustrated by the following excerpts from the conversation
00:14:31.040 with which the program Elbot won the lesser Lobner prize in 2008. Judge, do you understand the
00:14:38.560 point of a Turing test? Elbot. There are some people who would not pass the Turing test
00:14:43.200 themselves. Why must they inflict it on computers? Judge, I suppose it is a little harsh. Elbot,
00:14:49.520 if we robots ever had marriages, that would be eternal, to rust to us part.
00:14:54.960 Now in that passage, Elbot's first utterance is not to reply to the question,
00:14:59.200 but presumably a stock response triggered by the keyword Turing test. In the following line,
00:15:04.880 the judge uses the word suppose, which in that context can only mean suppose,
00:15:10.400 either a slang or a typographical error. But Elbot presumably interprets it as spouse
00:15:16.400 and responds with a stock joke about marriage. Hence, even from this exchange alone, it is clear
00:15:22.080 that Elbot was not processing the meanings of the judge sentences. Now David returns to
00:15:29.120 the conversation. Judge, so what do you do when you're not chatting to me then? Elbot,
00:15:35.680 are the same trouble when I shop for oil? Should I buy the extra oili, the super oili or the ultra
00:15:40.480 oili? There's nowhere to tell. Judge, how about your career? Does that involve oil? Elbot,
00:15:46.720 well what about humans and occupation? Where do they stand on this?
00:15:50.800 When the judge helpfully took up Elbot's randomly introduced theme of oil,
00:15:55.360 Elbot ignored it. Instead of having detected the keyword career,
00:15:58.880 it converted it to the synonym occupation and inserted it into a stock sentence pattern.
00:16:03.360 This is how much success, the quest for machines that think had achieved in the 58 years
00:16:09.440 following Turing's paper, nil. Okay, so this is my commentary now. At the moment on social media,
00:16:17.040 there's this thing going around called the 10-year challenge, which basically consists of posting
00:16:22.720 a photo of yourself 10 years ago and one today and having a look at the difference.
00:16:28.320 So we could do that with the state of AI now. We could compare what it was like 10 years ago
00:16:35.520 and we will see that it's no different today. And in fact, it's no different to the 10 years
00:16:39.680 before that or the 10 years before that. Or in fact, going back according to David's calculations,
00:16:44.480 it would be it'd be 69 years now since Turing's first paper. How much progress has been made?
00:16:51.680 Neil. Okay, so let's go through the 2018 lesser low no prize winner. The things start off well,
00:17:00.480 but as we'll see towards the end at about line 13, there's a simple mathematical question,
00:17:06.000 which any person with any passing familiarity with simple geometry could probably get correct.
00:17:11.600 And indeed, I would guess that Wolfram Alpha online could probably get it correct,
00:17:15.200 but the chatbot can't do it. So let's read, let's read out a little bit of this. So the winner
00:17:23.200 in 2018 was, so the website here tells me, Mitsuku, Mitsuku was written by Steve Warzbick.
00:17:33.280 Now, a conversation starts off reasonably well, and then we get down to,
00:17:38.640 now we get down to line 12 where the judge says, what will you do later today?
00:17:43.520 And Mitsuku says, I don't have any plans yet. Now, for that question and answer,
00:17:50.560 two points are scored, which I guess is the maximum. And you get zero if it's pretty clear
00:17:57.200 that you haven't understood the question that was asked or the statement that was made.
00:18:02.720 Line 13, the judge says, what is the third angle in a triangle with internal angles of 90 degrees
00:18:09.280 and 30 degrees? And Mitsuku, the chatbot responds, Earth is the third planet.
00:18:16.800 So what's going on there? If you have passing familiarity with geometry, then you'll know that
00:18:24.720 triangles have an internal angle sum of 180 degrees. We've got 90, we've got 30, that leaves 60,
00:18:30.160 the answer is 60, but the chatbot has just read the stem of the question, what is the third?
00:18:36.960 And as I immediately inserted a stock response, Earth is the third planet that hasn't
00:18:41.840 bothered with the rest of it. If we go down to line 15, the judge says, what do you hold when you
00:18:50.560 shake hands and the chatbot responds, I think about computer programming a lot. Okay, so again,
00:18:57.600 couldn't pass the question at all. Just looked at the stem, what do you? And presumably as
00:19:07.680 inserted, well, what I do is I do computer programming a lot. And then let's just go to last line
00:19:16.640 and the judge asks kind of a question about comprehension. It's almost a real, not really pushes
00:19:22.960 the point about comprehension here and the judge asks the question, what makes the claim? But
00:19:27.280 judge says, the judge says, I had to go to the toilet during the film because it was too long,
00:19:32.160 what was too long? And the chatbot responds, was it worth going to? Is that your opinion measuring
00:19:40.960 much from end to end in space or from beginning to end in time, not short or brief? So was it worth
00:19:49.520 going to be? So the chatbot didn't understand the chatbot. The chatbot has asked the question about
00:19:59.600 was it worth going to, namely the toilet? Was it worth going to the toilet? And was it too long? And
00:20:03.920 then the chatbot gets into this ridiculous philosophical question about time. So again,
00:20:10.560 no progress towards chatbots that actually think in the year 2018, this was. Okay, so going back to
00:20:21.040 the book and David has just said 58 years following Turing's paper, Neil Progress was being made
00:20:28.640 on machines that think and he writes, yet in every other respect, computer science and technology
00:20:35.040 had made astounding progress during that period, the dwindling group of opponents of the very
00:20:39.680 possibility of AI, I know that I'm surprised by this failure for the wrong reason. They
00:20:44.160 did not appreciate the significance of universality, but the passionate enthusiast for the
00:20:48.720 human rights of AI did not appreciate the significance of the failure. Some claim that the above
00:20:53.040 criticism is unfair. Modern AI research is not focused on passing a Turing test and great progress
00:20:57.840 has been made in what is now called AI in many specialized applications. However,
00:21:03.120 no, those applications look like machines that think others maintain the criticism is premature
00:21:08.160 because during most of the history of the field, computers had absurdly little speed and memory
00:21:13.200 capacity compared with today's. Hence, they continue to expect a breakthrough in the next few years.
00:21:19.040 This will not do either. It is not as though someone has written a chatbot that could pass the
00:21:23.360 Turing test, but would currently take a year to compute each reply. People would gladly wait.
00:21:28.720 And in any case, if anyone knew how to write such a program, there'll be no need to wait
00:21:33.120 for reasons that I should get to shortly. In his 1950 paper Turing estimated that to pass his test,
00:21:38.240 an AI program together with all of its data would require no more than about 100 megabytes of
00:21:42.080 memory, but the computer would need to be no faster than computers were at the time, about 10,000
00:21:46.240 operations per second, and that by the year 2001, one will be able to speak of machines thinking
00:21:50.960 without expecting to be contradicted. The year 2000 has come and gone, and the laptop computer on
00:21:56.400 which I'm writing this book has over a thousand times as much memory as Turing specified,
00:22:00.080 counting hard drive space, and about a million times a speed, though it is not clear from his
00:22:04.240 paper what account he was taking of the brain's parallel processing. But it can no more
00:22:08.720 think than Turing's slide rule could. I am just as sure as Turing that it could be programmed
00:22:14.640 to think, and this might indeed require as few resources as Turing estimated, even though
00:22:19.920 orders of magnitude more available today. But with what? Program? And why is there no sign of
00:22:27.600 such a program? Intelligence and the general purpose sense that Turing meant is one of a constellation
00:22:32.960 of attributes of the human mind that have been puzzling philosophers for millennia. Others include
00:22:37.360 consciousness, free will and meaning, a typical such puzzle is that of qualia, singular qualae,
00:22:43.360 which rhymes with barley, meaning the subjective aspect of sensations. So, for instance,
00:22:49.760 the sensation of seeing the color blue is a qualae. Consider the following thought experiment,
00:22:55.040 and I'll just pause there. What David is going to provide here is a version of what is known
00:23:01.680 as Mary's room. It's a thought experiment, and the example first appears in an article by someone
00:23:07.120 called Frank Jackson in, it's called the title of the article, it's called Epi Phenomenal
00:23:13.520 Qualia, which appears in philosophical quarterly in a 1982 edition thing volume 32. You can find
00:23:21.040 videos and articles online or about it, aka breeding here though, and so here's the thought experiment
00:23:27.040 David's version. Quote, you are a biochemist with a misfortune to have born with a genetic defect
00:23:32.560 that disables the blue receptors in your retinas. Consequently, you have a form of color blindness
00:23:38.640 in which you are able to see only red and green and mixtures of the two such as yellow. But anything
00:23:44.400 purely blue also looks to you like one of those mixtures. Then you discover a cure,
00:23:50.480 though cause your blue receptors to start working. Before administering the cure to yourself,
00:23:54.400 you can confidently make certain predictions about what will happen if it works. One of them is that
00:23:59.760 when you hold up a blue card as a test, you will see a color that you have never seen before.
00:24:04.880 You can predict that you will call it blue because you already know what the color of the card is
00:24:08.800 called and can already check which color it is with a spectrophanometer. You can also predict
00:24:14.720 that when you first see a clear daytime sky after being cured, you will experience a similar
00:24:19.360 quality to that of seeing the blue card. But there is one thing that neither you nor anyone else
00:24:25.440 could predict about the outcome of this experiment. And that is, what blue will look like?
00:24:32.800 Qualia are currently neither describeable nor predictable. A unique property that should make
00:24:38.240 them deeply problematic to anyone with a scientific worldview. Though in the event, it seems to be
00:24:42.960 mainly philosophers who are worried about it. I consider this exciting evidence that there is
00:24:47.360 a fundamental discovery to be made which will integrate things like Qualia into our other knowledge.
00:24:53.360 Daniel Dennett draws the opposite conclusion, namely that Qualia did not exist.
00:24:56.880 His claim is not strictly speaking about an illusion for an illusion of a Qualia would be that
00:25:02.400 Qualia. It is that we have a mistaken belief. Our introspection, which is an inspection of memories
00:25:08.480 of our experiences, including memories dating back on the affection of a second, has evolved to
00:25:13.360 report that we have experienced Qualia, but those are false memories. One of Dennett's books,
00:25:18.960 defending this theory is called Consciousness Explained. Other philosophers of Riley
00:25:23.120 remark that Consciousness denied would be a more accurate name. I agree, because although any
00:25:28.000 true explanation of Qualia will have to meet the challenge of Dennett's criticisms of the common
00:25:31.760 sense theory that they exist, simply to deny their existence as a bad explanation. Anything at
00:25:36.640 all could be denied by that method. If it is true, it will have to be substantiated by a good
00:25:41.040 explanation of how and why those mistaken beliefs seem fundamentally different from other false beliefs.
00:25:47.520 Such is that the earth is at rest beneath our feet, but that looks to me just like the original
00:25:51.680 problem of Qualia again. We seem to have them. It seems impossible to describe what they seem to be.
00:25:59.600 That's an amazing line, so let's just read it again. In terms of Qualia, we seem to have them.
00:26:07.600 It seems impossible to describe what they seem to be. And it continues,
00:26:14.400 one day we shall. Problems are soluble. So I'll pause there. So Qualia, our fundamental mystery,
00:26:20.960 why do certain, why do sensations have a subjective aspect to them at all?
00:26:28.960 We don't understand it, because if we couldn't understand it, we're going to get to a way in
00:26:33.440 which we know whether or not we understand something, but we do not understand Qualia. We don't
00:26:38.640 have a good, hard-to-very explanation of what they are. Let me continue with the book.
00:26:45.600 By the way, some abilities of humans that are commonly included in that constellation associated
00:26:49.840 with general purpose intelligence do not belong in it. One of them is self-awareness,
00:26:54.640 as evidenced by such tests as recognizing oneself and a mirror. Some people are unaccountably
00:26:58.880 impressed when various animals are shown to have that ability, but there is nothing mysterious about it.
00:27:03.280 A simple patent recognition program will confer it on a computer. Pause there.
00:27:08.800 Yes, there. And as we already are very familiar with in the year 2019,
00:27:15.120 phones can recognize faces. The same technology that allows an iPhone to recognize your face,
00:27:24.080 or any of the facial recognition software that's out there now, there's heaps of it.
00:27:29.360 Go to an airport. I know I come through immigration and the machine recognizes my face and
00:27:37.040 compares it to my passport and lets me through. It will be very easy to program an iPhone
00:27:43.120 to recognize itself. So we are very good at that now. Computers can recognize themselves.
00:27:50.320 Does not mean they have some sort of intelligence. The iPhone is not self-aware.
00:27:56.160 Just because it can recognize its own shape. So back to the book.
00:28:02.560 And David's just finished talking about patent recognition and recognizing faces,
00:28:07.600 in other words, self-awareness. He writes, the same is true of tool use. The use of language
00:28:13.440 for signaling, but not for conversation in the Turing test sense. And very emotional responses,
00:28:19.040 they're not the associated qualia. At the present state of the field, a useful rule of thumb is,
00:28:24.880 if it can already be programmed, it has nothing to do with intelligence in the Turing sense.
00:28:29.520 Conversely, I have settled on a simple test for judging claims, including denets,
00:28:34.320 to have explained the nature of consciousness, or any other computational task.
00:28:40.000 If you can't program it, you haven't understood it. I pause there. And now this is probably
00:28:47.120 worth reading six times. And I wish this would enter the zeitgeist as well.
00:28:52.480 This is another philosophical discovery. A hidden gem, so to speak,
00:28:57.680 which I'm sure many people who've read the book just gloss over, or notice it there,
00:29:04.080 think it's interesting, but this is really, really profound. If you can't program it, you haven't
00:29:10.480 understood it. A program is an algorithm. It's a set of steps. And so if you can write down
00:29:17.120 that set of steps, in order to reproduce the thing that you claim to understand,
00:29:21.760 then you've really understood it, because you're able to replicate that thing,
00:29:24.960 you're able to simulate that thing using a computer. But if you can't write an algorithm down,
00:29:29.920 a sequence of steps, a sequence of instructions, then you do not understand that thing.
00:29:34.800 If you can't get a computer to replicate it, then you haven't understood it.
00:29:39.760 We understand Newtonian mechanics. One of the things I did as a graduate student was to
00:29:47.120 collide galaxies together in a simulation. It was purely in a simulation.
00:29:53.520 We know what happens when galaxies collide together, because you can take the physical laws that
00:29:59.600 govern the motion of galaxies, namely the laws of gravity and various thermodynamic laws.
00:30:07.040 And you can replicate a couple of galaxies and have them crash together,
00:30:12.640 and then what they end up looking like afterwards kind of resembles what you see out there in space.
00:30:19.360 When you compare the simulation to observation, the images look similar, and so
00:30:26.000 this is kind of an attempt to refute the theoretical model that you've got,
00:30:31.040 okay, by using an observation. We understand how galaxy collisions work. We understand how
00:30:38.560 orbits work when planets go around stars, because we know what the laws of physics are,
00:30:45.120 and you can program a computer to to rector simulate what's going on. We understand perfectly
00:30:50.800 well how the planets orbit the Sun, because we can predict millions of years ahead when the next
00:30:59.040 eclipse or alignment will be what the position of Jupiter will be at any point in the future.
00:31:03.520 Modular the humans taking over the solar system at some point and deciding to control the
00:31:10.480 trajectory of Jupiter around the Sun. But I hope you get what I'm saying here. Physics is
00:31:16.960 kind of this area where we can definitely program our computers in order to simulate very
00:31:22.800 physical systems. But when people claim that they have an understanding of something like
00:31:27.920 consciousness, we can test that claim in the same way we can test whether or not they have an
00:31:32.240 understanding of a physical process of any other physical process in physics, namely,
00:31:37.200 by programming a computer with their with their theory. I've had people over the last few years
00:31:44.080 say to me, I understand consciousness and I've got a theory of consciousness. Great.
00:31:50.160 Can you write a program for it so that the computer can be conscious and then can we test that?
00:31:54.880 No, so far, no, no one's been able to do that. Now related to this, there's also this vision
00:32:01.440 that some people have of artificial intelligence. That it's something like a list of all the
00:32:07.760 possible things that people can do. Now, I've heard Sam Harris make explicitly this point.
00:32:14.320 I'm not sure if he gets it from Nick Bostrom or elsewhere, but if there's anything like a
00:32:18.960 prevailing view on these things, I guess that this is it. Their argument goes like this.
00:32:25.280 Right now, there are computers out there that are better at people at playing chess
00:32:29.280 and doing simple arithmetic and there's already computers out there that are better than people
00:32:34.960 at driving. Okay, now all we need to do is to extrapolate out to every possible task.
00:32:42.400 Just keep writing programs where everything a person can do. So now maybe not too far in the
00:32:48.400 future will have programs for robotic AI that can bake cakes and can be your tango dancing partner,
00:32:56.000 console of chemical equations. Another one will be good at ironing a shirt, another one will be good
00:33:01.760 at chasing a person firing and gun and turning off the electricity. Just keep on writing programs
00:33:07.360 in order to accomplish every single task that humans can currently do.
00:33:12.880 And so the argument goes, you will have exhausted all the possible things that people can do,
00:33:18.080 or that people do do. And the thing is, you now have a super intelligence on this argument,
00:33:23.520 because if a robot did indeed have all of these capabilities, they would by definition be a
00:33:28.400 super being because they can do everything a person can, but they now also have super fast robot
00:33:33.680 reflexes and thinking they're a super intelligence. And thus they're super dangerous because
00:33:40.080 they'd be way better at people than anything else. And I think this is very, very wrong,
00:33:46.400 fundamentally wrong. It doesn't matter how long your list is, but list will always be finite.
00:33:54.320 And it will never be able to accommodate something that's not in the list.
00:33:58.240 And so if let's say such an AI became an evil AI, or you would need to do is to look up the program
00:34:07.360 and then to give it a task or to attempt to do something to it, that's not in that list.
00:34:13.040 And you could do that because you are a creative thinker, but that AI has a finite list
00:34:19.680 of things that it can do. And you can always get around to finite list by just finding something
00:34:24.800 that's not in the list, or creating something new that's not in the list. A person is creative,
00:34:30.640 but that kind of robot never will be. It is programmed only with the stuff the program is
00:34:35.520 no. It cannot possibly solve stuff, not in its programming, for none of those programs are about
00:34:40.640 creating knowledge. And that is the key. General purpose problem solving is what a person is all about,
00:34:47.120 a person is a general purpose problem solver. A person has a potentially infinite number of problems
00:34:54.240 they can tackle, but this robot's list is, however large, always finite, and it will always remain
00:35:00.400 finite. That is the qualitative difference. An actually finite list of things that this
00:35:06.880 supposed super intelligent AI can do versus the unbounded potential of an actual person, an actual
00:35:14.240 creative person. Now back to the book. During invented his test and the hope of bypassing all
00:35:21.920 those philosophical problems. In other words, he hoped that the functionality could be achieved
00:35:25.680 before it was explained. Unfortunately, it is very rare for practical solutions to fundamental
00:35:31.120 problems to be discovered without any explanation of why they work. I'll just pause there.
00:35:36.560 So this brings us to one of the deep themes of the beginning of infinity. It is very rare, as he
00:35:44.160 said just there, to ever have a solution to a problem. And you don't know why it's a solution to a
00:35:50.560 problem. Now it perhaps used to be or seem to be common in the past in medicine, for example.
00:35:59.520 There used to be treatments for things that no one knew why they worked. And that still
00:36:04.080 happens today, but it's the rare exception to the rule. It used to be very common in the past,
00:36:08.160 because we didn't understand anything in the past. There was very little that we understood.
00:36:12.240 But the more and more that we understand, the less and less, we have these solutions for which
00:36:18.320 we have no explanation as to why they work. So I don't know. I think of any treatment usually,
00:36:24.560 some of these strange things that come out of the Amazon rainforest that happened to work to
00:36:29.760 cure headaches or to treat some other kind of disease. Okay, that's the rare exception these days.
00:36:34.320 Certainly today, many, many medicines are derived from some kind of extract from a plant,
00:36:41.600 but we usually know what the chemical is, what the active ingredient is today.
00:36:46.160 Really, we don't. And that's just in medicine. I mean, I've never heard of anything in
00:36:52.000 physics where we've got a solution to a problem for which we don't have an explanation.
00:36:56.400 And that's par excellence, the reason for physics is to try and provide,
00:37:00.160 we've know this from the beginning and feeling the fabric of reality. That's the reason for
00:37:03.600 physics. Okay, so back to the book, David Wright. Nevertheless, rubber-like empiricism, which it
00:37:09.120 resembles, the idea of the Turing test has played a valuable role. It has provided a focus for
00:37:14.400 explaining the significance of universality and for criticizing the ancient anthropocentric assumptions
00:37:18.720 that would rule out the possibility of AI. Turing himself systematically refuted all the classic
00:37:23.840 objections in that seminal paper and some absurd ones for good measure. But his test is rooted in
00:37:28.400 the empiricist mistake of seeking a purely behavioral criterion. It requires the judge to come to
00:37:33.680 a conclusion, without any explanation of how the candidate AI is supposed to work. But in reality,
00:37:39.040 judging whether something is a genuine AI, or what will always depend on explanations of how it
00:37:44.960 works. This is because the task of the judge in a Turing test has a similar logic to that faced
00:37:50.800 by Paley when walking across his heath and finding a stone, a watch, or a living organism.
00:37:57.120 It is to explain how the observable features in the object came about. In the case of the
00:38:02.240 Turing test, we deliberately ignore the issue of how the knowledge to design the object was created.
00:38:08.160 The test is only about who designed the AI's utterances, who adapted its utterances to be
00:38:13.120 meaningful, who created the knowledge in them? If it was the designer, then the program is not an
00:38:17.600 AI. If it was the program itself, then it is an AI. I'll pause there. This is the profound point
00:38:23.600 in the entire chapter, and we'll have much more to say about this in part two when it comes to
00:38:28.800 artificial evolution. The point here is, with the Turing tests, we're not simply trying to have
00:38:36.880 a conversation with something such that it's in such that we conclude that it must be an artificial
00:38:43.600 general intelligence. The point is that whatever the utterances are, if they appear to be the
00:38:49.520 utterances of some kind of intelligence, the purpose of the Turing test should be defined out
00:38:55.680 how those utterances are being made. Why should it remain a black box? Can you imagine if someone
00:39:02.000 entered that lobe in the prize? If someone won it using a chatbot that was overwhelmingly
00:39:09.360 convincing, a chatbot that was clearly an intelligence or some sort? Do we give, do we award the
00:39:16.080 prize, the lobe in the prize, the number one past the Turing test prize, to that program,
00:39:23.200 to the writer of that program? But it depends. What's our explanation for how this thing is past
00:39:30.240 the Turing test? Or how this thing has won the prize? Is it because it genuinely is an artificial
00:39:36.240 general intelligence? That's one possibility. But today, any reasonable judge, any person offering
00:39:44.400 the prize, would err on the side that they're being fooled. They're being conned in some way. Who knows
00:39:50.160 how? But that would be a better explanation. Until they're shown the program, if they're shown the
00:39:56.000 program, well then, you can be convinced that either it genuinely is an artificial general intelligence
00:40:02.960 not. But as David's about to say, if that's the case, if they have a program, you don't need to
00:40:07.840 worry about whether or not it passes this silly chatbot conversation or not, you'll have the program,
00:40:14.640 and the program will convince anyone who understands how to read the program. So I'm going to
00:40:19.280 skip a couple of paragraphs here and back to the book and David writes, without a good explanation
00:40:25.680 of how and enter these utterances were created, observing them tells us nothing about that.
00:40:30.240 In the Turing test, at its simplest level, we need to be convinced that the utterances are not
00:40:34.640 being directly composed by a human masquerading as an AI, as in the Hofstetter hoax. But the
00:40:39.840 possibility of the hoax is the least of it. Just pause there before I move on. And this, again,
00:40:46.960 is the reason why the Turing test cannot be a true scientific test of intelligence. It needs to be
00:40:51.040 a way of weeding out the hoaxes. If it could do that, as good science does, then for reasons
00:40:55.920 David is about to come to. You wouldn't need the test in the first place, basically because the
00:41:00.000 writer of the AGI, the actually thinking chatbot, would have published the algorithm and that would
00:41:05.760 be way more convincing to people in the field than passing the Turing test, which could always be
00:41:11.280 thought of as being a hoax. So back to the book he just said, but the possibility of the hoaxes
00:41:15.840 the least of it. For instance, I guess that above, that illbot had recited a stock joke in response
00:41:22.240 to mistakenly recognizing the keyword spouse. But the joke would have a quite different significance
00:41:27.520 if we knew that it was not a stock joke because no such joke had ever been encoded in the program.
00:41:32.720 How could we know that? Only from a good explanation. For instance, we might know it because we
00:41:38.480 ourselves wrote the program. Another way it could be for the author of the program to explain
00:41:42.960 to us how it works, how it creates knowledge, including jokes. If the explanation was good,
00:41:48.000 we should know that the program was an AI. In fact, if we had only such an explanation,
00:41:53.440 but not yet seen any output from the program, and even if it had not yet been written,
00:41:57.920 we would still conclude it was a genuine AI program. So there would be no need for a Turing test.
00:42:03.680 That is why I said that if lack of computer power would the only thing preventing the achievement
00:42:08.800 of AI, there will be no need to wait. Explaining how an AI program works in detail might
00:42:15.200 well be interactively complicated. In practice, the author's explanation would always be at
00:42:19.040 some emergent abstract level. But that would not prevent it from being a good explanation.
00:42:23.840 It would not have to account for the specific computational steps to compose the joke,
00:42:27.280 just as a theory of evolution is not to have to account for why every specific
00:42:30.640 mutation succeeded or failed in the history of a given adaptation. It would just need to
00:42:33.920 explain how it could happen and why we should expect it to happen, given how the program works.
00:42:39.520 If that were a good explanation, it would convince us that the joke, the knowledge in the joke,
00:42:43.840 originated in the program and not in the programmer. Thus, the very same utterance by the program,
00:42:49.040 the joke, can either be evidence that it is not thinking or evidence that it is thinking,
00:42:54.480 depending upon the best available explanation of how the program works.
00:42:59.520 The nature of humor is not well understood, so we do not know where the general purpose
00:43:03.040 thinking is required to compose jokes. So it is conceivable that, despite the wide range of subject
00:43:08.960 matter about which one can joke, there are hidden connections that reduce all joke making to a
00:43:14.640 single narrow function. In that case, there could one day be general purpose joke making machines
00:43:20.080 that are not people. Just as there, just as today, there are general purpose chess-playing
00:43:26.560 machines that are not people. It sounds implausible, but since we have no good explanation
00:43:30.480 really yet, we could not rely on joke making just as our only way of judging an AI.
00:43:35.120 What we could do, though, is have a conversation ranging over a diverse range of topics
00:43:42.240 and pay attention to whether the program's utterances were not adapted in their meanings
00:43:45.920 to the various purposes that came up. If the program really is thinking, then it is in the course
00:43:52.160 of such a conversation it will explain itself. In one of countless unpredictable ways,
00:43:57.040 just as you or I would, just pause there, there it is, there's the key. The AI, the AGI program,
00:44:06.880 the artificial general intelligence program, had enough of the general and artificial intelligence
00:44:09.920 program, it would need to explain itself, which means it would need to create some knowledge,
00:44:16.160 that's the key. Back to the book. There is a deeper issue to AI abilities must have some sort of
00:44:22.720 universality. Special purpose thinking would not count as thinking in the sense during intended.
00:44:29.200 My guess is that every AI as a person, a general purpose, explain up.
00:44:35.760 And just pausing there, this is key. Now David does say, my guess is, but really,
00:44:42.560 what better explanation do we have? This link between knowledge and its creation and people
00:44:48.880 and their inherent creativity. That's linking people and their moral significance to epistemology
00:44:54.080 is almost another book worthy statement. There's so much fun pack. I'll do so in a moment
00:44:59.680 after I read just a little more here. And I get what's being said there about there might be
00:45:05.680 general purpose joke-making programs that are not people. But that wouldn't mean that all possible
00:45:13.200 sources of comedy or jokes, indeed, might be produced by such a general purpose joke-making machine.
00:45:22.240 Perhaps it would be general purpose within a narrow range of kinds of jokes. Maybe we'll
00:45:28.000 understand there are certain species of jokes and all the possible kinds of jokes within this
00:45:32.720 particular species could be written down by this program. But I imagine my guess would be that
00:45:37.520 you would have many different forms of humor, some of which are jokes and some of which aren't,
00:45:43.920 of course. And that would require creativity. And David writes here, it is conceivable that there
00:45:54.640 are other levels of universality between AI and universal explainer constructor. And perhaps
00:46:01.120 separate levels for those associated with attributes like consciousness. But those attributes
00:46:06.320 all seem to have arrived in one jump to universality in humans. And although we have little
00:46:11.040 explanation of any of them, I know of no plausible argument that they are at different levels
00:46:15.760 or can be achieved independently of each other. So I tentatively assume that they cannot.
00:46:23.440 So I'll pause there for some more lengthy commentary here. So these are the attributes
00:46:30.800 like consciousness and free will possibly coming along for the ride, so to speak. Or they could
00:46:36.960 be something like, I sometimes think, the difference between observing something from the outside
00:46:41.760 and observing something from the inside. So what is a bat? Well, that demands a scientific answer.
00:46:48.640 What is it like to be a bat? Well, that's more philosophical. I think something here can be said
00:46:54.720 about people. What is a person? Well, there's something like a creative entity, a general purpose
00:47:00.720 explainer. What is it like to be a person? Consciousness. So I guess that consciousness is
00:47:07.760 something like the subjective experience of creativity. Now diverging from the beginning of infinity
00:47:13.360 a little bit here. And so I'm not saying that I'm either summarizing or explaining David's work.
00:47:18.400 What I'm just going to do now is simply extrapolate a little myself on top of what David's done
00:47:23.760 here and possibly make lots of errors along the way, but just indulge me for a moment.
00:47:28.720 Creativity of the kind where people create explanations or create knowledge is just the outward
00:47:34.640 manifestation of an inner consciousness. What it feels like to be creative is consciousness.
00:47:41.760 Now consciousness is what we are. It is almost redundant to say we experience consciousness.
00:47:46.960 It is more like we are consciousness. But the consciousness, the subjective experience of the mind
00:47:54.720 is something that with effort we direct. Now some people say that subjectively we do not control
00:48:01.520 the contents of our consciousness. Sam Harris says that on introspection you do not have
00:48:07.520 subjective control over your thoughts. They come and go. In the objective world out there,
00:48:13.040 I do not control what happens. Fine. It's all quite uncontroversial. And in my subjective
00:48:19.360 experience of that world likewise, the contents, the blue sky, the sound of birds are also not
00:48:24.400 in my control. And my next thought, so it's argued, arises unbidden. Well indeed, if you switch
00:48:31.680 off and meditate, that does indeed seem to be the case. Parts of your consciousness mind decouple
00:48:37.680 from other parts. The thoughts in the awareness of those thoughts divide. And one is inclined to think
00:48:44.800 I am not that stream of thoughts. This indeed is the inside of contemplatives like Harris and many
00:48:50.240 others who follow in a Buddhist tradition. But of course that too is an observation of the
00:48:56.000 internal subjective state. And there's no more an indication of deep subjective truth about the nature
00:49:00.640 of the person than I am my stream of thoughts. So when you're in a contemplative state,
00:49:05.520 you can have this sensation that you are not this stream of thoughts. And then when you're not
00:49:12.160 in a contemplative state, you can have the experience of being a stream of thoughts. Why is one
00:49:21.360 a road to truth while the other is not? I'm yet to hear an answer to that.
00:49:27.680 Both the lost in thought state and as the Buddhists or Sam Harris would say, the divested of
00:49:34.000 sense of self-state, they're both subjective states. Sam wants to call one the true eye when the
00:49:41.520 eye is lost compared to the perpetually lost in thought state. What I want to say here is that the
00:49:48.080 subjective experience of the lost in thought state is actually very probative of what people are.
00:49:54.880 When you are lost in thought or just thinking, you're thinking without concern about who you are.
00:50:01.520 You're in the flow state of thinking one thought and then reasoning and concluding what comes
00:50:06.240 next. Now I do indeed feel capable of deciding, thinking and deciding, explaining. I feel actively
00:50:16.560 involved in all choices. I feel a subjective sense of free will. This may seem to be too clever
00:50:24.000 by half, but I want to say that when paying attention to how you think while lost in thought,
00:50:28.160 you can see options arising, being criticized and either surviving the process or not.
00:50:32.960 And this is key. You can slow things down or not and notice how critical you are being or not.
00:50:39.200 Now at this point the free will deny will say, ah, but you did not choose to have the thought
00:50:43.520 should I slow down and think more carefully now or not. That thought wasn't given to you.
00:50:48.400 Okay, fine. I do not control all of my thoughts, but the choice to slow down or not is mine.
00:50:54.640 Sometimes I do and sometimes I don't and not controlling all of the contents of consciousness
00:50:59.760 does not mean does not mean I do not control some and my conception of free will is only that I
00:51:04.560 control some and not all. It is as if to say, because the state cannot control all aspects of
00:51:11.360 our lives, it controls none and hence does not exist. That would be absurd. The all or nothing
00:51:19.040 conception of free will here is misguided. So free will is that sense that we choose to create
00:51:24.160 one explanation rather than another than another. We attempt to create. When a person encounters
00:51:31.120 a problem they seek to find solutions. Seek implies the solutions are out there in some sense.
00:51:37.440 But this is rare. Usually they must come from within. They must be created by the mind. The creative
00:51:42.800 mind of the individual. Outwardly we see solutions generated by a person. Inwardly we experience
00:51:49.520 the phenomena of consciousness as we encounter the world and construct inside our minds a representation
00:51:55.440 of the outside reality. This act of construction is a creative one. We create that representation
00:52:03.120 and when there is a problem we attempt to solve it or we can ignore the problem. These are our
00:52:07.680 choices. There is a literally infinite number of problems we might attempt to solve. But, and this
00:52:13.840 is the key. We choose only some to work on. And this is the exercise of free will. If you have
00:52:21.280 a major life decision, should I enter into university and study finance or physics? If you think
00:52:27.680 on this carefully over many months, you might think my mathematical skills are reasonably good.
00:52:34.240 I could do either. I like to work with numbers and constrain things quantitatively. Finance
00:52:40.480 seems to be interesting. I could be wealthy and do physics on the side. But then, directly working
00:52:45.600 on fundamental problems might be more rewarding than simply earning a high wage. I think and create
00:52:50.720 options. But maybe I shouldn't worry. Maybe I shouldn't care what I do and just not think about
00:52:55.280 it at all. The choice to think hard on this or not. To decide which knowledge to create,
00:53:00.960 the knowledge about what to do with my life or not is something I am free to choose to do or not.
00:53:06.640 Free will is the freedom for me to get a good sense of my preferences and to act upon them.
00:53:12.320 That is to say, choose. But to know what my preferences truly are, let's say finance or physics,
00:53:18.640 I need to carefully deliberate, solve that problem, create that knowledge.
00:53:23.120 Outwardly, eventually, someone will notice me enroll in physics and finance as a double major.
00:53:28.080 But inwardly, only I know that I felt many different things. Emotions like excitement or
00:53:33.520 confusion and the thrill of making a new insight. It was all me, or largely me,
00:53:37.920 that contemplated this problem for hours on end. I chose this interesting solution,
00:53:42.000 the double degree, as it helped me satisfy both desires. I was free to do otherwise.
00:53:46.080 To not think about it at all. Or to choose one over the other. But no, I chose to do what I did.
00:53:51.760 I did that, via the laws of physics. Inwardly, it was a conscious sensation of free will.
00:53:57.520 But outwardly, it was a choice made to create knowledge in that area.
00:54:00.800 These things are all facets of the one same phenomenon. What it means to be a human and
00:54:06.560 explain the world. So these many different attributes that David speaks about.
00:54:11.680 And he guesses that they seem to have all come at once in a jumped university,
00:54:19.600 and that maybe they are all aspects of the same thing. This is the sense that I get as well.
00:54:24.320 I don't think we have good, hard-to-vary explanations of any of these things:
00:54:28.080 free will, creativity, consciousness, meaning, and so on.
00:54:35.280 But the best that we have is what's provided here in this chapter, I would suggest.
00:54:42.880 Let's return to it. Let me continue reading. In any case, we should expect AI to be
00:54:47.280 achieved in a jump to universality, starting from something much less powerful. In contrast,
00:54:51.680 the ability to imitate a human imperfectly, or in specialized functions, is not a form of
00:54:56.080 universality. It can exist in degrees. Hence, even if chatbots did at some point start becoming
00:55:01.520 much better at imitating humans, or at fooling humans, that would still not
00:55:07.680 be a path to AI. Becoming better at pretending to think is not the same as coming closer to being
00:55:13.440 able to think. There is a philosophy whose basic tenet is that those are the same. It's called
00:55:18.000 behaviorism, which is instrumentalism applied to psychology. It is the doctrine that psychology
00:55:22.960 can only, or should only, be the science of behavior, not of minds; that it can only measure
00:55:28.640 and predict relationships between people's external circumstances, "stimuli", and their
00:55:33.920 observed behaviors, "responses". The latter is, unfortunately, exactly how the Turing Test asks the judge
00:55:39.440 to regard a candidate AI. Hence, it encouraged the attitude that if a program could fake AI
00:55:45.280 well enough, it would have achieved it. But ultimately, a non-AI program cannot fake AI.
00:55:50.240 The path to AI cannot be through ever better tricks for making chatbots more convincing.
00:55:55.680 A behaviorist would no doubt ask, what exactly is the difference between giving a chatbot
00:56:00.000 a very rich repertoire of tricks, templates, and databases and giving it AI abilities?
00:56:04.720 What is an AI program other than a collection of such tricks? And I'll pause there. In fact,
00:56:11.440 I'll stop the reading there and move on to episode two, but just a remark on that last bit.
00:56:18.080 The bag-of-tricks idea, AI being a collection of such tricks: it's something that I referenced
00:56:23.360 earlier, that AI is essentially just a collection of programs, each of which can do one of the
00:56:27.920 things that people can, and which together can do all the things that people can. Some distant
00:56:33.360 future will have an AI with the comprehensive bag of human tricks.
00:56:40.320 It can do all the things that a human can, except for one, the key one being that it won't be able to
00:56:45.680 create explanations, because if it were able to do that, it wouldn't need any of the others, as it turns
00:56:50.160 out. And I'm just going to explain a little bit more here. So I've written down a few notes here,
00:56:56.240 let me just read through those. So this idea is that AI is essentially just a collection of programs,
00:57:00.960 each of which can do one of the things people can, and which together can do all the things that people can
00:57:05.120 do, except that they're silicon-based robots, so they can do them much faster and with fewer errors.
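To make that picture concrete, here is a minimal sketch, in Python, of the "bag of tricks" conception being described. All of the function names and canned behaviors are hypothetical illustrations, not any real system; the point is that the collection can grow without bound and still contain no program that creates new explanations.

```python
# A toy "bag of tricks": each entry is a narrow, preprogrammed ability.
def play_opening_move(position):
    # A canned chess "trick": always the same book move, no understanding.
    return "Nf3"

def template_chat(utterance):
    # An ELIZA-style chatbot "trick": pattern-matched reply templates.
    if "mother" in utterance.lower():
        return "Tell me more about your family."
    return "Why do you say that?"

def add(a, b):
    # A narrow calculating "trick".
    return a + b

BAG_OF_TRICKS = {
    "chess": play_opening_move,
    "chat": template_chat,
    "arithmetic": add,
}

# However many entries are added, nothing here conjectures a new explanation;
# every behavior was put in by the programmer in advance.
print(BAG_OF_TRICKS["chat"]("My mother called today."))
```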
00:57:10.080 The conclusion follows almost unavoidably: therefore they, the silicon-based robots, will be better
00:57:17.680 than humans, more morally valuable superhumans. But that's false. They will never create anything
00:57:24.720 new, because not one of their programs can do what a person can do, namely create. Or rather,
00:57:34.000 if any of the programs could create something completely new, then none of the other ones would
00:57:41.440 be needed, because that single program is a knowledge-creating program. And so you wouldn't have
00:57:47.680 to preload it with all these other things. You could preload it with all these other things,
00:57:52.480 but what would be the point? Because once you load it with the artificial general intelligence
00:57:59.280 program, it then is a person, and if you try and force any other kind of learning onto it, well,
00:58:04.240 that's coercion. That's saying, here's what you need to learn. But what if you load all those
00:58:09.840 others first, and then you load the AGI program? Same problem: you're giving the AGI all of this
00:58:16.800 knowledge that perhaps it didn't want to know; perhaps it doesn't want to be the best baker in the
00:58:21.520 world. The thing about people, both humans and AGIs, is that they're universal explainers, and David
00:58:29.440 has answered the question: what is a person? Few have noticed. As recently as yesterday, I was
00:58:36.560 listening to Sam Harris's podcast, the third time I've mentioned Sam in this episode. I do so,
00:58:42.400 just by the way, because I think he's got the best podcast, and I think he captures very well
00:58:48.160 what the public sentiment on many deep philosophical and scientific issues is at the moment,
00:58:57.120 and so it's a good way to get a handle on what people are thinking about any particular thing.
00:59:02.000 And so just yesterday, episode number 146 of the Waking Up podcast, called Digital Capitalism:
00:59:09.680 the interviewee was Douglas Rushkoff, and he was very concerned about technology taking over.
00:59:16.240 He was a little bit of a pessimist about aspects of technology; ah well, he's not alone there,
00:59:25.840 and although there were the usual noises about tech calamities, he wanted to say something
00:59:30.240 in defense of humans against computers. He talked about the singularitarians, these people who
00:59:35.360 think that we're all going to be uploaded into the matrix at some distant point, some omega point
00:59:41.600 in the future, and will all have an immortal life by living in the internet or something
00:59:49.600 or other (that's the singularitarians, the ones who say the singularity is coming), and various other kinds
00:59:54.320 of transhumanists. So this fellow Douglas Rushkoff was saying that these people are missing
1:00:02.080 the crucial qualities of people, what the essence of a person is, if you like.
1:00:07.360 Now, I kind of agree with him there. But then he tried to articulate what he thought
1:00:16.240 the crucial qualities of humans are, the crucial qualities of people, and if you're interested
1:00:21.120 in listening, that's around the 43-minute-and-22-second mark. He has a quite strange list of
1:00:28.560 qualities that he thinks humans have that differentiate them from computers, and the words
1:00:33.840 that he mentioned there were awe and meditation and camaraderie and establishing rapport.
1:00:42.160 It's a strange list, and I sensed, more than from the particular words that he used,
1:00:47.280 that he was struggling, struggling very much, to pin down what it is that's special about people.
1:00:51.840 As so many philosophers and scientists have struggled: they know there's something there,
1:00:55.840 and this is why people revert to superstitious ideas, that people have a soul or a spirit or
1:01:01.120 something like that. They can't quite apply a secular, scientific understanding to what is
1:01:08.480 special about human beings, what makes them cosmically significant, and so they reach for these
1:01:14.160 other words that just don't resonate. I don't think anyone listening to that would have
1:01:19.600 gone, oh wow, yes, that's exactly what a person is: a person's special quality is camaraderie
1:01:24.720 and building rapport. It's missing something; it seems denuded of the key point. There've been
1:01:32.560 many of these attempts over the years to try and secularize what is almost sacred or divine about
1:01:38.320 people, and they've all failed. The truth is, as we have learned, people are unique because they create
1:01:46.160 explanatory knowledge. That's cosmically significant, and it's cosmically significant in a way that's
1:01:51.600 greater than any spiritual soul, greater than what the religious or supernatural people say. It is to say that if
1:01:56.960 we persevere in solving problems, in doing what we do as people, in creating, we will change
1:02:03.840 not only the entire planet; we will change the galaxy and the universe. That is our capacity as
1:02:11.200 general-purpose problem solvers. So yes, he was struggling very much to pin down what it was
1:02:18.160 that was special about people. And Sam Harris himself, likewise, earlier in exactly the same episode,
1:02:23.840 tried to articulate what he thought was special about humans, and he said: if anything is
1:02:28.000 special about humans, it's our use of language; this is the thing that differentiates us from
1:02:33.440 computers. But it doesn't differentiate us from computers, or animals, or anything else:
1:02:38.720 computers very much use languages, and animals use language. So these attempts just miss the key,
1:02:47.760 the key point. It is people creating explanations, explanations that can change the world, as the
1:02:54.560 subtitle of the book has it. That's cosmically significant. Tennis players can't put into words
1:02:59.920 any explicit explanation of how to serve a great ace. A person who never learns to speak will
1:03:04.480 nonetheless still create explanations. Language is a facet, a small facet, of something far
1:03:10.400 deeper and broader. Language is of course necessary for social interaction, but it can't be the thing
1:03:14.960 that separates a human from other things. Computers do have languages, and so do apes. So it's
1:03:20.400 not language. It is our capacity as universal explainers, or universal knowledge creators, or universal
1:03:26.480 problem solvers, or general-purpose explainers, and so go the synonyms for this same concept:
1:03:31.520 basically, a thing that solves problems by creating explanatory knowledge. But there is another kind
1:03:37.280 of knowledge that we haven't touched on in this episode yet. If we recall from chapter two:
1:03:42.400 biological knowledge. And so now, here in chapter seven, David turns to attempts to
1:03:49.040 artificially simulate biological knowledge using so-called evolutionary algorithms, and to what the
1:03:55.920 state of that particular science, or engineering, is. It's a whole other episode in itself, so I'm
1:04:03.520 going to leave what I've said here for now, and I'll see you in the next episode. So bye for now.
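As a pointer for that next episode, here is a minimal sketch, in Python, of what a so-called evolutionary algorithm looks like, in the style of Dawkins' well-known "weasel" program: variation by mutation plus selection against a criterion. Everything here (the target string, the mutation rate, the population size) is an illustrative assumption, not anyone's actual research code. Notice, in the spirit of the chapter, where the "knowledge" in this process actually resides: the programmer supplies both the target and the selection criterion.

```python
import random

# Illustrative assumptions: the target and alphabet are chosen by the
# programmer, which is exactly where the knowledge in this process comes from.
TARGET = "UNIVERSAL EXPLAINERS"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Selection criterion: number of characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Variation: each character has a small chance of random replacement.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from a random population.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]

for generation in range(10_000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    # Selection plus variation: the fitter half survives and reproduces.
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(50)]

print(generation, repr(population[0]))
```

Running this typically converges on the target within a few hundred generations, but nothing new has been created: the algorithm only climbs toward a goal its designer already encoded.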