00:00:00.000 "Beyond Reward and Punishment." This essay appears in a collection of essays called
00:00:06.160 Possible Minds, published by Penguin Press in February 2019.
00:00:22.000 Ay, in the catalogue ye go for men, as hounds and greyhounds,
00:00:28.480 mongrels, spaniels, curs, shoughs, water-rugs, and demi-wolves are clept all by the name of dogs.
00:00:40.640 For most of our species' history, our ancestors were barely people.
00:00:46.080 This was not due to any inadequacy in their brains; on the contrary, even before the emergence
00:00:53.120 of our anatomically modern human subspecies, they were making things like clothes and campfires,
00:01:05.200 using knowledge that was not in their genes. It was created in their brains by thinking and preserved by individuals in each generation
00:01:12.240 imitating their elders. Moreover, this must have been knowledge in the sense of understanding
00:01:20.320 because it is impossible to imitate novel, complex behaviors like those without understanding.
00:01:32.400 Aping (imitating certain behaviors without understanding) uses inborn hacks such as the mirror
00:01:41.840 neuron system, but behaviors imitated in that way are drastically limited in complexity.
00:01:49.200 Richard Byrne has dubbed such imitation "behavior parsing."
00:01:54.880 Such knowledgeable imitation depends on successfully guessing explanations
00:02:00.800 (whether verbal or not) of what the other person is trying to achieve and how each of his actions
00:02:07.760 contributes to that. For instance, when he cuts a groove in some wood, gathers dry kindling
00:02:13.920 to put in it, and so on. The complex cultural knowledge that this form of imitation permitted
00:02:21.920 must have been extraordinarily useful. It drove rapid evolution of anatomical changes such as
00:02:29.440 increased memory capacity and more gracile (less robust) skeletons, appropriate to an ever
00:02:36.720 more technology-dependent lifestyle. No non-human ape today has this ability to imitate novel
00:02:45.760 complex behaviors. Nor does any present-day artificial intelligence. But our pre-sapiens ancestors did.
00:02:54.480 Any ability based on guessing must include means of correcting one's guesses, since most guesses
00:03:05.040 will be wrong at first: there are always many more ways of being wrong than right.
00:03:10.240 Bayesian updating is inadequate because it cannot generate novel guesses about the purpose of
00:03:16.800 an action, only fine-tune or, at best, choose among existing ones. Creativity is needed.
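To make that concrete, here is a minimal sketch (my illustration, not the essay's; the hypotheses and likelihoods are invented): Bayesian updating can only redistribute probability over guesses that are already in the model; no observation can ever add a guess that is missing.

```python
# A minimal sketch (not from the essay): Bayesian updating over a fixed
# hypothesis set. The hypotheses and likelihoods below are invented.

def bayes_update(priors, likelihoods, observation):
    """Reweight a fixed set of hypotheses in light of one observation."""
    posteriors = {h: priors[h] * likelihoods[h](observation) for h in priors}
    total = sum(posteriors.values())
    return {h: p / total for h, p in posteriors.items()}

# Two guesses about what a demonstrator is doing. Whatever we observe,
# updating only shifts weight between these two; it can never create a
# third explanation (say, "he is building a fire drill") that the model
# lacks. That creative step has to come from outside the calculus.
priors = {"cutting a groove": 0.5, "decorating the wood": 0.5}
likelihoods = {
    "cutting a groove": lambda obs: 0.9 if obs == "gathers kindling" else 0.1,
    "decorating the wood": lambda obs: 0.05 if obs == "gathers kindling" else 0.95,
}

print(bayes_update(priors, likelihoods, "gathers kindling"))
# -> {'cutting a groove': 0.947..., 'decorating the wood': 0.052...}
```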
00:03:25.760 As the philosopher Karl Popper explained, creative criticism interleaved with creative conjecture
00:03:34.240 is how humans learn one another's behaviors including language and extract meaning from one
00:03:40.800 another's utterances. Those are also the processes by which all new knowledge is created.
00:03:48.880 They are how we innovate, make progress, and create abstract understanding for its own sake.
00:03:55.920 This is human-level intelligence: thinking. It is also, or should be, the property we seek
00:04:04.560 in artificial general intelligence (AGI). Here I'll reserve the term "thinking" for processes that can
00:04:13.120 create understanding, that is, explanatory knowledge. Popper's argument implies that all thinking entities,
00:04:22.080 human or not, biological or artificial, must create such knowledge in fundamentally the same way.
00:04:29.440 Hence, understanding any of those entities requires traditionally human concepts such as culture,
00:04:37.600 creativity, disobedience, and morality, which justifies using the uniform term people to refer to
00:04:47.040 all of them. Misconceptions about human thinking and human origins are causing
00:04:53.280 corresponding misconceptions about AGI and how it might be created. For example,
00:05:00.800 it is generally assumed that the evolutionary pressure that produced modern humans
00:05:07.600 was provided by the benefits of having an ever greater ability to innovate.
00:05:14.160 But if that were so, there would have been rapid progress as soon as thinkers existed
00:05:19.440 just as we hope will happen when we create artificial ones. If thinking had been commonly
00:05:25.920 used for anything other than imitating, it would also have been used for innovation,
00:05:31.600 even if only by accident. And innovation would have created opportunities for further innovation
00:05:38.560 and so on exponentially. But instead, there were hundreds of thousands of years of near stasis.
00:05:46.160 Progress happened only on timescales much longer than people's lifetimes, so in a typical
00:05:54.400 generation, no one benefited from any progress. Therefore, the benefits of the ability to innovate
00:06:03.760 can have exerted little or no evolutionary pressure during the biological evolution of the human
00:06:10.800 brain. That evolution was driven by the benefits of preserving cultural knowledge.
00:06:19.600 Benefits to the genes, that is. Culture in that era was a very mixed blessing to individual people.
00:06:27.760 Their cultural knowledge was indeed good enough to enable them to outclass all other large organisms.
00:06:33.920 They rapidly became the top predator, et cetera, even though that knowledge was still extremely crude
00:06:40.880 and full of dangerous errors. But culture consists of transmissible information, memes,
00:06:48.320 and meme evolution, like gene evolution, tends to favor high fidelity transmission.
00:06:56.240 And high fidelity meme transmission necessarily entails the suppression of attempted progress.
00:07:05.920 So it would be a mistake to imagine an idyllic society of hunter-gatherers,
00:07:11.200 learning at the feet of their elders to recite the tribal lore by heart,
00:07:16.240 being content despite their lives of suffering and grueling labor, and despite expecting to
00:07:22.320 die young and in agony of some nightmarish disease or parasite, merely because they could
00:07:29.520 conceive of nothing better than such a life. Those torments were the least of their troubles, for suppressing
00:07:37.040 innovation in human minds without killing them is a trick that can be achieved only by human
00:07:44.720 action, and it is an ugly business. This has to be seen in perspective. In the civilization
00:07:53.040 of the West today, we are shocked by the depravity of, for instance, parents who torture and murder
00:08:00.880 their children for not faithfully enacting cultural norms, and even more by societies and subcultures
00:08:08.720 where that is commonplace and considered honourable, and by dictatorships and totalitarian states
00:08:16.560 that persecute and murder entire harmless populations for behaving differently.
00:08:23.200 We are ashamed of our own recent past in which it was honourable to beat children bloody for
00:08:29.200 mere disobedience, and before that to own human beings as slaves, and before that to burn people
00:08:38.160 to death for being infidels to the applause and amusement of the public. Steven Pinker's book,
00:08:45.280 The Better Angels of Our Nature, contains accounts of horrendous evils that were normal in
00:08:51.440 historical civilizations, yet even they did not extinguish innovation as efficiently as it was
00:08:58.800 extinguished among our forebears in prehistory for thousands of centuries. (Footnote: Matt
00:09:07.200 Ridley, in The Rational Optimist, rightly stresses the positive effect of population on the
00:09:14.800 rate of progress, but that has never yet been the biggest factor. Consider, say, ancient Athens
00:09:21.840 versus the rest of the world at that time.) That is why I say that prehistoric people, at least,
00:09:31.280 were barely people. Both before and after becoming perfectly human, both physiologically and in
00:09:38.960 their mental potential, they were monstrously inhuman in the actual content of their thoughts.
00:09:46.160 I'm not referring to their crimes, or even their cruelty as such. Those are all too human.
00:09:53.200 Nor could mere cruelty have reduced progress that effectively. Things like
00:09:59.600 "the thumbscrew and the stake, for the glory of the Lord" would normally have taken effect
00:10:05.520 long before their victims were in danger of inventing heresies. From the earliest days of thinking onward,
00:10:13.040 children must have been cornucopias of creative ideas and paragons of critical thought.
00:10:19.920 Otherwise, as I said, they could not have learned language or other complex culture.
00:10:25.280 Yet, as Jacob Bronowski stressed in The Ascent of Man,
00:10:30.720 for most of history, civilizations have crudely ignored that enormous potential. Children have
00:10:37.760 been asked simply to conform to the image of the adult. The girls are little mothers in the making,
00:10:44.400 the boys are little herdsmen. They even carry themselves like their parents.
00:10:51.360 But of course, they weren't just asked to ignore their enormous potential and conform faithfully
00:10:57.920 to the image fixed by tradition. They were somehow trained to be psychologically unable to deviate
00:11:05.280 from it. By now, it is hard for us even to conceive of the kind of relentless,
00:11:11.760 finely tuned oppression required to reliably extinguish in everyone the aspiration to progress,
00:11:20.400 and replace it with dread and revulsion at any novel behaviour. In such a culture,
00:11:28.160 there can have been no morality other than conformity and obedience.
00:11:32.640 No other identity than one's status in a hierarchy. No mechanisms of cooperation other than
00:11:41.280 punishment and reward, so everyone had the same aspiration in life: to avoid the punishments
00:11:47.840 and get the rewards. In a typical generation, no one invented anything, because no one
00:11:55.360 aspired to anything new, because everyone had already despaired of improvement being possible.
00:12:03.680 Not only was there no technological innovation or theoretical discovery,
00:12:07.840 there were no new worldviews, styles of art, or interests that could have inspired those.
00:12:14.960 By the time individuals grew up, they had in effect been reduced to AIs, programmed with
00:12:21.760 the exquisite skills needed to enact that static culture and to inflict on the next generation
00:12:28.720 their inability even to consider doing otherwise.
00:12:34.560 A present-day AI is not a mentally disabled AGI, so it would not be harmed by having its mental
00:12:43.040 processes directed still more narrowly to meeting some predetermined criterion.
00:12:48.880 Oppressing Siri with humiliating tasks may be weird, but it is not immoral,
00:12:57.920 nor does it harm Siri. On the contrary, all the effort that has ever increased the capabilities
00:13:04.640 of AIs has gone into narrowing their range of potential thoughts. For example,
00:13:12.720 take chess engines. Their basic task has not changed from the outset. Any chess position has
00:13:19.840 a finite tree of possible continuations, and the task is to find one that leads to a predefined
00:13:27.760 goal: checkmate or, failing that, a draw. But the tree is far too big to search exhaustively.
00:13:34.880 Every improvement in chess-playing AIs between Alan Turing's first design for one in 1948
00:13:43.120 and today's has been brought about by ingeniously confining the program's attention,
00:13:51.120 or making it confine its attention, ever more narrowly to branches likely to lead to that immutable
00:13:59.200 goal. Then those branches are evaluated according to that goal. That is a good approach to
00:14:06.640 developing an AI with a fixed goal and fixed constraints.
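To make that confinement concrete, here is a minimal illustrative sketch (no particular engine's code; the toy tree and its values are invented) of minimax search with alpha-beta pruning: the fixed evaluation encodes the immutable goal, and pruning is precisely the act of refusing to consider branches that cannot serve it.

```python
# A minimal illustrative sketch (not any real engine's code): minimax with
# alpha-beta pruning over a toy game tree. Nested lists are positions;
# numbers are leaf evaluations relative to the one fixed goal.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):  # leaf: score against the fixed goal
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:  # confine attention: the opponent will never
                break          # allow this line, so stop considering it
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

tree = [[3, [5, 9]], [[2, 1], 7]]
print(alphabeta(tree, maximizing=True))  # -> 3; the leaves 9 and 7 are never examined
```

But if an AGI worked like that,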
00:14:14.640 the evaluation of each branch would have to constitute a prospective reward or threatened
00:14:20.560 punishment, and that is diametrically the wrong approach if we're seeking a better goal under
00:14:28.080 unknown constraints, which is the capability of an AGI. An AGI is certainly capable of learning
00:14:36.880 to win at chess, but also of choosing not to, or deciding in mid-game to go for the most
00:14:45.600 interesting continuation instead of a winning one, or inventing a new game. A mere AI is incapable
00:14:54.880 of having any such ideas, because the capacity for considering them has been designed out of
00:15:02.400 its constitution. That disability is the very means by which it plays chess. An AGI is capable
00:15:13.200 of enjoying chess, and of improving at it because it enjoys playing, or of trying to win by
00:15:21.280 causing an amusing configuration of pieces as grandmasters occasionally do, or of adapting notions
00:15:29.360 from its other interests to chess. In other words, it learns and plays chess by thinking some
00:15:37.600 of the very thoughts that are forbidden to chess-playing AIs. An AGI is also capable of refusing
00:15:46.720 to display any such capability, and then, if threatened with punishment, of complying or rebelling.
00:15:55.520 Daniel Dennett, in his essay for this volume, suggests that punishing an AGI is impossible:
00:16:02.560 "Like Superman, they are too invulnerable to be able to make a credible promise. What would be the
00:16:08.880 penalty for promise-breaking? Being locked in a cell or, more plausibly, dismantled?
00:16:14.880 The very ease of digital recording and transmitting (the breakthrough that permits software and
00:16:21.840 data to be, in effect, immortal) removes robots from the world of the vulnerable."
00:16:29.280 But this is not so. Digital immortality, which is on the horizon for humans too,
00:16:35.680 perhaps sooner than AGI, does not confer this sort of invulnerability.
00:16:40.560 Making a running copy of oneself entails sharing one's possessions with it somehow,
00:16:47.920 including the hardware on which the copy runs, so making such a copy is very costly for the AGI.
00:16:56.160 Similarly, courts could, for instance, impose fines on a criminal AGI,
00:17:02.400 which would diminish its access to physical resources much as they do for humans.
00:17:07.360 Making a back-up copy to evade the consequences of one's crimes is similar to what a gangster boss
00:17:14.320 does when he sends minions to commit crimes and take the fall in court. Society has developed
00:17:22.640 legal mechanisms for coping with this. But anyway, the idea that it is primarily for fear of
00:17:30.160 punishment that we obey the law and keep promises effectively denies that we are moral agents.
00:17:37.520 Our society would not work if that were so. No doubt there will be AGI criminals and enemies of
00:17:44.240 civilization, just as there are human ones. But there is no reason to suppose that an AGI
00:17:51.680 created in a society consisting primarily of decent citizens, and raised without what William
00:17:59.040 Blake called "mind-forged manacles," will in general impose such manacles on itself,
00:18:06.000 i.e., become irrational and/or choose to be an enemy of civilization.
00:18:13.520 The moral component, the cultural component, the element of free will,
00:18:20.560 all make the task of creating an AGI fundamentally different from any other programming task.
00:18:27.360 It is much more akin to raising a child. Unlike all present-day computer programs,
00:18:36.000 an AGI has no specifiable functionality, no fixed testable criterion for what shall be a successful
00:18:44.640 output for a given input. Having its decisions dominated by a stream of externally imposed
00:18:52.240 rewards and punishments would be poison to such a program, as it is to creative thought in humans.
00:18:59.280 Setting out to create a chess-playing AI is a wonderful thing. Setting out to create an AGI
00:19:06.480 that cannot help playing chess would be as immoral as raising a child to lack the mental
00:19:13.120 capacity to choose his own path in life. Such a person, like any slave or brainwashing victim,
00:19:20.880 would be morally entitled to rebel, and sooner or later some of them would, just as human slaves do.
00:19:29.280 AGIs could be very dangerous, exactly as humans are. But people, human or AGI,
00:19:38.560 who are members of an open society, do not have an inherent tendency to violence.
00:19:44.400 The feared robot apocalypse will be avoided by ensuring that all people have full human rights,
00:19:54.640 as well as the same cultural membership as humans. Humans living in an open society,
00:20:01.760 the only stable kind of society, choose their own rewards, internal as well as external.
00:20:08.480 Their decisions are not in the normal course of events determined by a fear of punishment.
00:20:18.320 Current worries about rogue AIs mirror those that have always existed about rebellious youths,
00:20:25.600 namely that they might grow up deviating from the culture's moral values.
00:20:31.360 But today, the source of all existential dangers from the growth of knowledge
00:20:36.560 is not rebellious youths but weapons in the hands of the enemies of civilization, whether
00:20:44.000 these weapons are mentally warped or enslaved AGIs, mentally warped teenagers, or any other
00:20:50.560 weapons of mass destruction. Fortunately for civilization, the more a person's creativity is forced
00:21:00.000 into a monomaniacal channel, the more it is impaired in regard to overcoming unforeseen
00:21:07.280 difficulties, just as happened for thousands of centuries.
00:21:13.760 The worry that AGIs are uniquely dangerous because they could run on ever better hardware
00:21:21.040 is a fallacy since human thought will be accelerated by the same technology.
00:21:25.760 We have been using tech-assisted thought since the invention of writing and tallying.
00:21:31.520 Much the same holds for the worry that AGIs might get so good qualitatively at thinking
00:21:38.400 that humans would be to them as insects are to humans. All thinking is a form of computation,
00:21:46.800 and any computer whose repertoire includes a universal set of elementary operations
00:21:52.640 can emulate the computations of any other. Hence, human brains can think anything that AGIs can,
00:22:01.920 subject only to limitations of speed or memory capacity, both of which can be equalized by technology.
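That universality claim can be made concrete with a toy sketch (my own, not the essay's; the machine and its rules are invented): a single fixed emulator, whose elementary operations are just read, write, and move, reproduces the computation of any machine handed to it as data.

```python
# An invented illustration of computational universality: one fixed program,
# an emulator, runs any single-tape machine supplied as a transition table.
# The machine below, which increments a binary number, is an arbitrary example.

def emulate(rules, tape, state="start", head=0, max_steps=10_000):
    """Run any machine given only its transition table and input tape."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")                # '_' marks a blank cell
        write, move, state = rules[(state, symbol)]  # look up the rule
        cells[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Rules to increment a binary number: scan right, then carry from the right.
inc = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(emulate(inc, "1011"))  # -> 1100 (11 + 1 = 12 in binary)
```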
00:22:12.080 Those are the simple do's and don'ts of coping with AGIs. But how do we create an AGI in the first
00:22:20.160 place? Could we cause them to evolve from a population of ape-type AIs in a virtual environment?
00:22:31.440 If such an experiment succeeded, it would be the most immoral in history,
00:22:36.400 for we don't know how to achieve that outcome without creating vast suffering along the way.
00:22:42.720 Nor do we know how to prevent the evolution of a static culture.
00:22:50.640 Elementary introductions to computers explain them as TOM, the Totally Obedient Moron,
00:22:57.920 an inspired acronym that captures the essence of all computer programs to date.
00:23:07.520 So it won't help to give AIs more and more predetermined functionalities in the hope that these
00:23:15.280 will eventually constitute generality, the elusive G in AGI. We're aiming for the opposite:
00:23:23.920 a DATA, a Disobedient Autonomous Thinking Application.
00:23:29.280 How does one test for thinking? By the Turing test? Unfortunately, that requires a thinking judge.
00:23:39.760 One might imagine a vast collaborative project on the internet where an AI
00:23:45.920 hones its thinking abilities in conversations with human judges and becomes an AGI.
00:23:52.960 But that assumes, among other things, that the longer the judge is unsure
00:23:58.480 whether the program is a person, the closer it is to being a person. There is no reason to expect that.
00:24:06.560 And how does one test for disobedience? Imagine disobedience as a compulsory school subject,
00:24:15.120 with daily disobedience lessons and a disobedience test at the end of term,
00:24:21.440 presumably with extra credit for not turning up for any of that.
00:24:25.120 This is paradoxical. So despite its usefulness in other applications,
00:24:33.840 the programming technique of defining a testable objective and training the program to meet it
00:24:41.360 will have to be dropped.
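For contrast, here is the renounced technique in miniature (an invented example, not any particular system): a fixed, testable objective and a loop that adjusts the program until it meets it.

```python
# An invented example of the technique being renounced: define a testable
# objective, then adjust the program until it meets it. The "program" here
# is just a parameter vector; the objective is a fixed target score.
import random

def objective(params):
    """Fixed, testable criterion: higher is better, 0 is perfect."""
    return -sum((p - t) ** 2 for p, t in zip(params, [3.0, -1.0, 2.0]))

params = [0.0, 0.0, 0.0]
for _ in range(5000):
    candidate = [p + random.gauss(0, 0.1) for p in params]
    if objective(candidate) > objective(params):  # keep only what scores better
        params = candidate

print([round(p, 2) for p in params])  # drifts toward [3.0, -1.0, 2.0]
# Every step is judged against the one predetermined criterion; nothing in
# the loop can revise the objective itself.
```

Indeed, I expect that any testing in the process of creating an AGI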
00:24:48.960 risks being counterproductive, even immoral, just as in the education of humans. I share Turing's
00:24:57.120 supposition that we'll know an AGI when we see one. But this partial ability to recognize success cannot help in creating the successful program.
00:25:09.840 In the broadest sense, a person's quest for understanding is indeed a search problem
00:25:18.640 in an abstract space of ideas far too large to be searched exhaustively.
00:25:25.360 But there is no predetermined objective of this search. There is, as Popper put it,
00:25:32.000 no criterion of truth nor of probable truth, especially in regard to explanatory knowledge.
00:25:40.400 Objectives are ideas like any others, created as part of the search and continually modified
00:25:47.840 and improved. So inventing ways of disabling the program's access to most of the space of ideas
00:25:55.040 won't help, whether that disability is inflicted with the thumbscrew and the stake or a mental
00:26:02.400 straitjacket. To an AGI, the whole space of ideas must be open. It should not be
00:26:11.520 knowable in advance what ideas the program can never contemplate. And the ideas that the program
00:26:19.360 does contemplate must be chosen by the program itself, using methods, criteria and objectives
00:26:26.880 that are also the program's own. Its choices, like an AI's, will be hard to predict without
00:26:35.120 running it. We lose no generality by assuming that the program is deterministic; an AGI
00:26:41.360 using a random generator would remain an AGI if the generator were replaced by a pseudorandom one.
00:26:47.440 But it will have the additional property that there is no way of proving from its initial
00:26:53.920 state what it won't eventually think, short of running it.
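Both points can be illustrated with a small sketch (mine; the idea-generating program is an invented stand-in): replacing a true random source with a seeded pseudorandom one changes nothing about the program except reproducibility, while the impossibility of deducing, in general, what a program will or won't do without running it is the content of Rice's theorem.

```python
# An invented stand-in for an idea-generating process that consults a
# random source. Swap true randomness for a seeded PRNG and the program's
# character is unchanged; only reproducibility differs.
import random

def generate_ideas(rng, n=5):
    vocabulary = ["chess", "fire", "heresy", "paperclips", "music"]
    return [rng.choice(vocabulary) for _ in range(n)]

true_random = random.SystemRandom()  # OS entropy: not reproducible
pseudo = random.Random(42)           # deterministic given its seed

print(generate_ideas(true_random))   # differs from run to run
print(generate_ideas(pseudo))        # identical on every run
# Even for the deterministic version, no general analysis of the source
# can decide what it will eventually output: by Rice's theorem, every
# nontrivial question about program behavior is undecidable in general.
```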
00:27:00.960 The evolution of our ancestors is the only known case of thought starting up anywhere in the
00:27:09.040 universe. As I have described, something went horribly wrong and there was no immediate explosion
00:27:16.560 of innovation. Creativity was diverted into something else, yet not into transforming the planet
00:27:24.880 into paperclips (pace Nick Bostrom). Rather, as we should also expect if an AI project
00:27:32.720 gets that far and fails, perverted creativity was unable to solve unexpected problems.
00:27:40.560 This caused stasis and worse, thus tragically delaying the transformation of anything into
00:27:48.160 anything. But the Enlightenment has happened since then. We know better now.