00:00:00.000 It is uncontroversial that the human brain has capabilities that are in some respects
00:00:06.880 far superior to those of all other known objects in the cosmos.
00:00:13.160 It is the only kind of object capable of understanding that the cosmos is even there,
00:00:19.360 or why there are infinitely many prime numbers, or that apples fall because of the curvature
00:00:26.920 of spacetime, or that obeying its own inborn instincts can be morally wrong, or that it itself exists.
00:00:39.640 Nor are its unique abilities confined to such cerebral matters.
00:00:45.200 The cold physical fact is that it is the only kind of object that can propel itself into
00:00:51.720 space and back without harm, or predict and prevent a meteor strike on itself, or cool
00:00:59.080 objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.
00:01:09.080 But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality.
00:01:18.960 The enterprise of achieving it artificially, the field of artificial general intelligence,
00:01:25.880 or AGI, has made no progress whatever during the entire six decades of its existence.
00:01:35.480 Why? Because, as an unknown sage once remarked, it ain't what we don't know that causes trouble, it's what we know that just ain't so.
00:01:46.960 And if you know that the sage was Mark Twain, then what you know ain't so either.
00:01:53.840 I cannot think of any other significant field of knowledge in which the prevailing wisdom,
00:02:00.240 not only in society at large, but also among experts, is so beset with entrenched, overlapping, fundamental errors.
00:02:10.720 Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough.
00:02:20.600 Despite this long record of failure, AGI must be possible, and that is because of a deep
00:02:28.200 property of the laws of physics, namely the universality of computation.
00:02:35.200 This entails that everything that the laws of physics require a physical object to do
00:02:40.600 can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose
00:02:47.280 computer, provided it is given enough time and memory.
00:02:52.240 The first people to guess this, and to grapple with its ramifications, were the 19th-century
00:02:58.520 mathematician Charles Babbage and his collaborator Ada, Countess of Lovelace.
00:03:04.800 Note: in the original version of this article, I erroneously referred to Ada Lovelace as Babbage's assistant.
00:03:15.440 The universality of computation remained a guess until the 1980s, when I proved it using the quantum theory of computation.
00:03:24.560 Babbage came upon universality from an unpromising direction.
00:03:29.960 He had been much exercised by the fact that tables of mathematical functions, such as logarithms and cosines, contained mistakes.
00:03:39.920 At the time, they were compiled by armies of clerks, known as computers, which is the origin of the word. Being human, the computers were fallible.
00:03:51.640 There were elaborate systems of error correction, but even proofreading for typographical errors was a nightmare.
00:03:59.040 Such errors were not merely inconvenient and expensive, they could cost lives.
00:04:04.520 For instance, the tables were extensively used in navigation.
00:04:08.920 So, Babbage designed a mechanical calculator, which he called the difference engine.
00:04:16.960 It would be programmed by initialising certain cogs.
00:04:20.840 The mechanism would drive a printer in order to automate the production of the tables.
00:04:26.360 That would bring the error rate down to negligible levels to the eternal benefit of humankind.
00:04:32.880 Unfortunately, Babbage's project management skills were so poor that despite spending vast
00:04:40.000 amounts of his own and the British government's money, he never managed to get the machine built.
00:04:46.280 Yet his design was sound, and has since been implemented by a team led by the engineer Doron Swade at the Science Museum in London.
00:04:56.920 Here was a cognitive task that only humans had been able to perform.
00:05:02.200 Nothing else in the known universe even came close to matching them, but the difference
00:05:07.360 engine would perform better than the best humans, and therefore even at that faltering
00:05:15.760 embryonic stage of the history of automated computation before Babbage had considered anything
00:05:22.160 like AGI, we can see the seeds of a philosophical puzzle that is controversial to this day.
00:05:30.280 What exactly is the difference between what the human computers were doing and what the difference engine could do?
00:05:39.520 What type of cognitive task, if any, could either type of entity perform that the other could not, in principle, perform too?
00:05:50.000 One immediate difference between them was that the sequence of elementary steps of counting,
00:05:56.080 adding, multiplying by 10, and so on, that the difference engine used to compute a given
00:06:02.800 function did not mirror those of the human computers.
00:06:07.240 That is to say, they used different algorithms.
00:06:12.000 In itself, that is not a fundamental difference.
00:06:15.440 The difference engine could have been modified with additional gears and levers to mimic
00:06:20.800 the human's algorithm exactly, yet that would have achieved nothing except an increase
00:06:27.000 in the error rate due to increased numbers of glitches in the more complex machinery.
00:06:33.280 Similarly, the humans, given different instructions but no hardware changes, would have
00:06:39.560 been capable of emulating every detail of the difference engine's method, and doing so would have been a terrible mistake.
00:06:48.440 It would not have copied the engine's main advantage, its accuracy, which was due to hardware, not software.
00:06:56.480 It would only have made an arduous, boring task even more arduous and boring, which would have made errors more likely, not less.
00:07:19.040 For humans, that difference in outcomes, the different error rate, would have been caused
00:07:25.520 by the fact that computing exactly the same table with two different algorithms felt different,
00:07:34.200 and it would not have felt different to the difference engine. It had no feelings.
00:07:40.440 Experiencing boredom was one of the many cognitive tasks at which the difference engine would have been hopelessly inferior to humans.
00:07:50.480 Nor was it capable of knowing or proving, as Babbage did, that the two algorithms would give identical results if executed accurately.
00:08:01.680 Still less was it capable of wanting, as he did, to benefit seafarers and humankind in general.
00:08:10.400 In fact, its repertoire was confined to evaluating a tiny class of specialized mathematical
00:08:18.000 functions, basically power series in a single variable.
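To make that limitation concrete: the engine tabulated such functions by the method of finite differences, which reduces a polynomial, and hence a truncated power series, to repeated addition alone. Here is a minimal sketch in Python rather than cogs; the function name and example polynomial are illustrative choices of mine, not Babbage's.

```python
# A minimal sketch of the finite-difference method the difference engine
# mechanised: once the initial differences are set (the "initialising of
# certain cogs"), every further table entry needs only repeated addition.

def tabulate(initial_differences, steps):
    """Tabulate a polynomial given its value and finite differences at x = 0.

    A degree-n polynomial has a constant n-th difference, so n + 1 numbers
    suffice. For p(x) = x**2 + x + 1 they are [p(0), p(1) - p(0), 2] = [1, 2, 2].
    """
    diffs = list(initial_differences)
    table = []
    for _ in range(steps):
        table.append(diffs[0])
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]   # each difference absorbs the one below:
    return table                       # pure addition, no multiplication at all

print(tabulate([1, 2, 2], 6))  # [1, 3, 7, 13, 21, 31] = x*x + x + 1 for x = 0..5
```

Setting the initial differences corresponds to initialising the cogs; everything after that is mechanical addition, which is exactly why the repertoire was so narrow.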
00:08:23.240 Thinking about how he could enlarge that repertoire, Babbage first realized that the programming
00:08:29.480 phase of the engine's operation could itself be automated.
00:08:34.160 The initial settings of the cogs could be encoded on punched cards, and then he had an epoch-making insight.
00:08:43.920 The engine could be adapted to punch new cards and store them for its own later use, making what we today call a computer memory.
00:08:55.600 If it could run for long enough, powered as he envisaged by a steam engine and had an unlimited
00:09:01.440 supply of blank cards, its repertoire would jump from that tiny class of mathematical functions
00:09:09.720 to the set of all computations that can possibly be performed by any physical object. That's universality.
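A toy stored-program interpreter can illustrate why that last step is so momentous. The instruction set below is entirely made up for illustration; the point is only that once instructions live in a store the machine itself can write to, a fixed mechanism can run programs that extend themselves.

```python
# A toy stored-program machine, sketching Babbage's insight: instructions
# ("cards") live in a store the machine can append to, so a fixed mechanism
# can execute programs that punch their own later instructions.

def run(cards, limit=100):
    store, pc, acc = list(cards), 0, 0
    while pc < len(store) and limit > 0:
        op, arg = store[pc]
        if op == "ADD":
            acc += arg
        elif op == "PUNCH":          # punch a new card onto the end of the store:
            store.append(arg)        # the program grows itself
        elif op == "PRINT":
            print(acc)
        pc += 1
        limit -= 1
    return acc

# A program that punches the very cards it then goes on to execute:
run([("PUNCH", ("ADD", 1)), ("PUNCH", ("ADD", 2)), ("PUNCH", ("PRINT", 0))])
# prints 3
```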
00:09:20.880 Babbage called this improved machine the analytical engine.
00:09:26.000 He and Lovelace understood that its universality would give it revolutionary potential to improve
00:09:32.800 almost every scientific endeavor and manufacturing process as well as everyday life.
00:09:39.800 They showed remarkable foresight about specific applications.
00:09:44.360 They knew that it could be programmed to do algebra, play chess, compose music, process images and so on.
00:09:52.680 Unlike the difference engine, it could be programmed to use exactly the same method as
00:09:58.280 humans used to make those tables and prove that the two methods must give the same answers
00:10:05.440 and do the same error checking and proofreading, using, say, optical character recognition, as well.
00:10:14.320 But could the analytical engine feel the same boredom? Could it feel anything?
00:10:20.760 Could it want to better the lot of humankind or of analytical engine kind?
00:10:26.320 Could it disagree with its programmer about its programming?
00:10:31.040 Here is where Babbage and Lovelace's insight failed them.
00:10:35.400 They thought that some cognitive functions of the human brain were beyond the reach of computational universality. As Lovelace wrote:
00:10:46.920 'The analytical engine has no pretensions whatever to originate anything.
00:10:52.000 It can do whatever we know how to order it to perform.
00:10:55.880 It can follow analysis, but it has no power of anticipating any analytical relations or truths.'
00:11:06.120 And yet originating things, following analysis, and anticipating analytical relations and
00:11:13.560 truths are all behaviors of brains, and therefore of the atoms of which brains are
00:11:20.720 composed. Such behaviors obey the laws of physics.
00:11:25.600 So it follows inexorably from universality that with the right program an analytical
00:11:31.720 engine would undergo them too, atom by atom and step by step.
00:11:38.160 True, the atoms in the brain would be emulated by metal cogs and levers rather than organic
00:11:43.640 material, but in the present context, inferring anything substantive from that distinction would be rank racism.
00:11:54.440 Despite their best efforts, Babbage and Lovelace failed almost entirely to convey their
00:12:01.320 enthusiasm about the analytical engine to others.
00:12:05.760 In one of the great might-have-beens of history, the idea of a universal computer languished on the back burner of human thought.
00:12:14.960 There it remained until the 20th century when Alan Turing arrived with a spectacular
00:12:21.200 series of intellectual tours de force, laying the foundations of the classical theory of computation,
00:12:27.520 establishing the limits of computability, participating in the building of the first universal
00:12:33.000 classical computer, and by helping to crack the Enigma code, contributing to the Allied victory in the Second World War.
00:12:46.480 Turing fully understood universality. In his 1950 paper, Computing Machinery and Intelligence, he used it to sweep away what
00:12:53.360 he called Lady Lovelace's objection and every other objection both reasonable and unreasonable.
00:13:01.360 He concluded that a computer program whose repertoire included all the distinctive attributes
00:13:07.560 of the human brain, feelings, free will, consciousness and all, could be written.
00:13:14.840 This astounding claim split the intellectual world into two camps.
00:13:20.760 one insisting that AGI was nonetheless impossible, and the other that it was imminent. Both were mistaken.
00:13:29.160 The first, initially predominant, camp cited a plethora of reasons ranging from the supernatural to the incoherent.
00:13:38.040 All shared the basic mistake that they did not understand what computational universality
00:13:43.720 implies about the physical world and about human brains in particular.
00:13:49.800 But it is the other camp's basic mistake that is responsible for the lack of progress.
00:13:55.760 It was a failure to recognize that what distinguishes human brains from all other physical
00:14:01.640 systems is qualitatively different from all other functionalities and cannot be specified
00:14:09.560 in the way that all other attributes of computer programs can be.
00:14:14.480 It cannot be programmed by any of the techniques that suffice for writing any other type
00:14:19.760 of program, nor can it be achieved merely by improving their performance at tasks that they currently do perform, no matter by how much.
00:14:31.600 Why? I call the core functionality in question creativity: the ability to produce new explanations.
00:14:41.080 For example, suppose that you want someone to write you a computer program to convert
00:14:46.680 temperature measurements from centigrade to Fahrenheit.
00:14:50.840 Even the difference engine could have been programmed to do that.
00:14:54.480 A universal computer, like the analytical engine, could achieve it in many more ways.
00:15:00.480 To specify the functionality to the programmer, you might, for instance, provide a long list
00:15:06.640 of all inputs that you might ever want to give it, say, all numbers from minus 89.2 to
00:15:13.680 plus 57.8 in increments of 0.1, with the corresponding correct outputs, so that the program
00:15:22.520 could work by looking up the answer in the list on each occasion.
00:15:26.960 Alternatively, you might state an algorithm, such as divide by 5, multiply by 9, add 32, and round to the nearest tenth.
00:15:38.840 The point is that, however the program worked, you would consider it to meet your specification
00:15:45.600 to be a bona fide temperature converter if and only if it always correctly converted
00:15:51.560 whatever temperature you gave it within the stated range.
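For concreteness, here is a sketch of both proposed implementations in Python; the names are mine, for illustration. The behavioral specification cannot tell them apart, which is exactly the point.

```python
# Two ways of meeting the temperature-converter specification described above.
# Both pass exactly the same behavioural test: the spec cares only about
# input-output behaviour.

def convert_by_algorithm(c):
    """Divide by 5, multiply by 9, add 32, round to the nearest tenth."""
    return round(c / 5 * 9 + 32, 1)

# The lookup-table version: precompute every answer for the stated range,
# -89.2 to +57.8 in increments of 0.1 (1471 entries).
TABLE = {round(-89.2 + 0.1 * i, 1): round((-89.2 + 0.1 * i) / 5 * 9 + 32, 1)
         for i in range(1471)}

def convert_by_lookup(c):
    return TABLE[round(c, 1)]

# Either program is a bona fide converter if and only if it gives the correct
# output for every input in the stated range:
assert all(convert_by_lookup(c) == convert_by_algorithm(c) for c in TABLE)
```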
00:15:56.560 Now imagine that you require a program with a more ambitious functionality to address some
00:16:02.400 outstanding problem in theoretical physics, say, the nature of dark matter, with a new explanation
00:16:09.440 that is plausible and rigorous enough to meet the criteria for publication in an academic journal.
00:16:17.720 Such a program would presumably be an AGI, and then some.
00:16:22.400 But how would you specify its task to computer programmers?
00:16:28.000 Never mind that it's more complicated than temperature conversion; there is a much more fundamental difficulty.
00:16:34.360 Suppose you were somehow to give them a list, as with the temperature conversion program,
00:16:39.480 of explanations of dark matter that would be acceptable outputs of the program.
00:16:45.680 If the program did output one of those explanations later, that would not constitute meeting your requirement to generate new explanations, for none of those explanations would be new.
00:16:58.800 You would already have created them yourself in order to write the specification.
00:17:04.440 So in this case, and actually in all other cases of programming genuine AGI, only an algorithm
00:17:14.560 with the right functionality would suffice, but writing that algorithm, without first making
00:17:20.840 new discoveries in physics and hiding them in the program, is exactly what you wanted the programmers to do!
00:17:29.520 Traditionally, discussions of AGI have evaded that issue by imagining only a test of the program, not its specification.
00:17:41.120 The traditional test, proposed by Turing himself, was that human judges
00:17:47.120 be unable to detect whether the program is human or not when interacting with it via some
00:17:54.000 purely textual medium so that only its cognitive abilities would affect the outcome.
00:18:01.120 But that test, being purely behavioral, gives no clue for how to meet the criterion.
00:18:09.160 Nor can it be met by the technique of evolutionary algorithms.
00:18:13.800 The Turing test cannot itself be automated without first knowing how to write an AGI program
00:18:20.600 since the judges of a program need to have the target ability themselves.
00:18:27.200 For how I think biological evolution gave us the ability in the first place, see my book The Beginning of Infinity.
00:18:35.480 And in any case, AGI cannot possibly be defined purely behaviorally.
00:18:41.760 In the classic brain-in-a-vat thought experiment, the brain, when temporarily disconnected
00:18:47.960 from its input and output channels, is thinking, feeling, creating explanations; it has all the cognitive attributes of an AGI.
00:18:59.480 So the relevant attributes of an AGI program do not consist only of the relationships between its inputs and outputs.
00:19:10.040 The upshot is that, unlike any functionality that has ever been programmed to date, this
00:19:17.000 one can be achieved neither by a specification nor a test of the outputs.
00:19:24.760 What is needed is nothing less than a breakthrough in philosophy, a new epistemological
00:19:31.640 theory that explains how brains create explanatory knowledge and hence defines in principle
00:19:39.880 without ever running them as programs, which algorithms possess that functionality and which do not. Such a theory is beyond present-day knowledge.
00:19:53.120 What we do know about epistemology implies that any approach not directed towards that philosophical breakthrough must be futile.
00:20:03.360 Unfortunately, what we know about epistemology is contained largely in the work of the
00:20:09.880 philosopher Karl Popper and is almost universally underrated and misunderstood, even, or perhaps especially, by philosophers.
00:20:19.760 For example, it is still taken for granted by almost every authority that knowledge consists of justified, true beliefs,
00:20:30.280 and that an AGI's thinking must include some process during which it justifies some
00:20:37.160 of its theories as true or probable, while rejecting others as false or improbable.
00:20:45.400 But an AGI programmer needs to know where the theories come from in the first place.
00:20:51.600 The prevailing misconception is that by assuming that the future will be like the past,
00:20:58.320 it can derive, or extrapolate, or generalize theories from repeated experiences by an alleged process called induction. But that is impossible.
00:21:11.080 I myself remember, for example, observing on thousands of consecutive occasions that on calendars
00:21:19.280 the first two digits of the year were one-nine.
00:21:24.120 I never observed a single exception, until one day they started being two-zero.
00:21:32.240 Not only was I not surprised, I fully expected that there would be an interval of 17,000
00:21:38.600 years until the next such one-nine (from the year 2000 until the year 19000), a period that neither I nor any other human being had previously experienced even once.
00:21:50.880 How could I have extrapolated that there would be such a sharp departure from an unbroken
00:21:56.760 pattern of experiences, and that a never-yet-observed process, the 17,000-year interval, would follow?
00:22:06.720 Because it is simply not true that knowledge comes from extrapolating repeated observations.
00:22:12.640 Nor is it true that the future is like the past in any sense that one could detect in advance without already knowing the explanation.
00:22:21.640 The future is actually unlike the past in most ways.
00:22:25.760 Of course, given the explanation, those drastic changes in the earlier pattern of one-nines
00:22:33.320 are straightforwardly understood as being due to an invariant underlying pattern or law.
00:22:41.240 But the explanation always comes first. Without that, any continuation of any sequence
00:22:49.320 constitutes the same thing happening again under some explanation.
00:22:56.080 So why is it still conventional wisdom that we get our theories by induction?
00:23:03.040 For some reason, beyond the scope of this article, conventional wisdom adheres to a trope
00:23:08.400 called the problem of induction, which asks, how and why can induction nevertheless somehow
00:23:15.400 be done, yielding justified true beliefs after all, despite being impossible and invalid respectively?
00:23:25.040 Thanks to this trope, every disproof, such as that by Popper and David Miller back in 1988,
00:23:32.760 rather than ending inductivism simply causes the mainstream to marvel in even greater awe
00:23:40.200 at the depth of the great problem of induction.
00:23:45.280 In regard to how the AGI problem is perceived, this has the catastrophic effect of simultaneously
00:23:52.320 framing it as the problem of induction and making that problem look easy because it casts
00:23:59.720 thinking as a process of predicting that future patterns of sensory experience will be like past ones.
00:24:08.600 That looks like extrapolation, which computers already do all the time, once they are given a theory of what causes the data.
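For example, here is the sort of extrapolation computers routinely do. Note that the theory, the assumption of linearity, is supplied by the programmer; the program only applies it. (A minimal sketch; the data are invented for illustration.)

```python
# Least-squares extrapolation: routine for a computer, but only because the
# "theory" (assume a straight line) has been chosen from outside the program.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx    # slope and intercept of the fitted line

slope, intercept = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(slope * 5 + intercept)  # "prediction" for x = 5; the linear theory came first
```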
00:24:18.440 But in reality, only a tiny component of thinking is about prediction at all, let alone
00:24:24.760 prediction of our sensory experiences. We think about the world, not just the physical
00:24:31.440 world, but also worlds of abstractions, such as right and wrong, beauty and ugliness,
00:24:38.200 the infinite and the infinitesimal, causation, fiction, fears and aspirations, and about thinking itself.
00:24:49.400 Now, the truth is that knowledge consists of conjectured explanations, guesses about what
00:24:57.880 really is or really should be or might be out there in all those worlds.
00:25:05.480 Even in the hard sciences, these guesses have no foundations and don't need justification.
00:25:13.080 Why? Because genuine knowledge, though by definition it does contain truth, almost always contains error as well.
00:25:24.560 So it is not true in the sense studied in mathematics and logic.
00:25:30.040 Thinking consists of criticizing and correcting partially true guesses, with the intention
00:25:37.200 of locating and eliminating the errors and misconceptions in them, not generating or justifying
00:25:44.960 extrapolations from sense data, and therefore attempts to work towards creating an AGI
00:25:52.960 that would do the latter are just as doomed as an attempt to bring life to Mars by praying for a Creation event to happen there.
00:26:03.920 Currently, one of the most influential versions of the induction approach to AGI and to the
00:26:10.720 philosophy of science is Bayesianism, unfairly named after the 18th century mathematician
00:26:17.720 Thomas Bayes, who was quite innocent of the mistake.
00:26:22.240 The doctrine assumes that minds work by assigning probabilities to their ideas and modifying
00:26:29.880 those probabilities in the light of experience as a way of choosing how to act.
00:26:36.200 This is especially perverse when it comes to an AGI's values, the moral and aesthetic
00:26:42.680 ideas that inform its choices and intentions, for it allows only a behavioristic model
00:26:49.040 of them, in which values that are rewarded by experience are reinforced and come to dominate
00:26:57.080 behavior while those that are punished by experience are extinguished.
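To be concrete about what the doctrine asserts, here is a minimal sketch of Bayesian updating, offered as an illustration of the view under criticism, not an endorsement of it. Note that the update can only reweight hypotheses that were put into it at the start; it never originates a new one.

```python
# A minimal sketch of the kind of updating the Bayesian doctrine describes:
# ideas are held with probabilities, and "experience" merely reweights them.

def bayes_update(prior, likelihoods, observation):
    """Reweight P(hypothesis) by P(observation | hypothesis) and renormalise."""
    posterior = {h: p * likelihoods[h][observation] for h, p in prior.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Two rival "ideas" about a coin, and what each predicts:
prior = {"fair": 0.5, "biased": 0.5}
likelihoods = {"fair":   {"heads": 0.5, "tails": 0.5},
               "biased": {"heads": 0.9, "tails": 0.1}}

beliefs = prior
for outcome in ["heads", "heads", "heads"]:      # the stream of "experience"
    beliefs = bayes_update(beliefs, likelihoods, outcome)
print(beliefs)  # the "biased" idea now dominates; no new idea was ever created
```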
00:27:03.800 As I argued above, that behaviorist input-output model is appropriate for most computer
00:27:10.640 programming other than AGI, but hopeless for AGI.
00:27:16.160 It is ironic that mainstream psychology has largely renounced behaviorism, which has been
00:27:22.840 recognized as both inadequate and inhuman, while computer science, thanks to philosophical
00:27:29.240 misconceptions such as inductivism, still intends to manufacture human-type cognition on essentially behaviorist lines.
00:27:40.080 Furthermore, despite the above-mentioned enormous variety of things that we create explanations
00:27:46.160 about, our core method of doing so, namely Popperian conjecture and criticism, has
00:27:54.040 a single, unified logic; hence the term general in AGI.
00:28:00.680 A computer program either has that yet to be fully understood logic, in which case it
00:28:07.400 can perform human type thinking about anything, including its own thinking and how to improve
00:28:14.720 it, or it doesn't, in which case it is in no sense an AGI.
00:28:20.640 Consequently, another hopeless approach to AGI is to start from existing knowledge of how
00:28:26.880 to program specific tasks such as playing chess, or performing statistical analysis or
00:28:34.040 searching databases, and then to try to improve those programs in the hope that this will
00:28:39.680 somehow generate AGI as a side effect, as happened to Skynet in the Terminator films.
00:28:47.640 Nowadays, an accelerating stream of marvelous and useful functionalities for computers are coming into use,
00:28:55.680 some of them sooner than had been foreseen even quite recently.
00:28:59.920 But what is neither marvelous nor useful is the argument that often greets these developments, that they are reaching the frontiers of AGI.
00:29:10.600 An especially severe outbreak of this occurred recently when a search engine called Watson
00:29:16.680 developed by IBM, defeated the best human player of a word-association database-searching game called Jeopardy.
00:29:25.920 Smartest machine on Earth, the PBS documentary series Nova called it, and characterized its
00:29:32.560 function as mimicking the human thought process with software. But that is precisely what it does not do.
00:29:41.800 The thing is, playing Jeopardy, like every one of the computational functionalities at
00:29:48.120 which we rightly marvel today, is firmly among the functionalities that can be specified
00:29:55.440 in the standard behaviorist way that I discussed above.
00:29:59.960 No Jeopardy answer will ever be published in a journal of new discoveries.
00:30:05.720 The fact that humans perform that task less well by using creativity to generate the underlying
00:30:12.240 guesses is not a sign that the program has near human cognitive abilities.
00:30:18.640 The exact opposite is true, for the two methods are utterly different from the ground up.
00:30:25.600 Likewise, when a computer program beats a grandmaster at chess, the two are not using even remotely similar algorithms.
00:30:34.120 The grandmaster can explain why it seemed worth sacrificing the knight for strategic advantage, and can write an exciting book on the subject.
00:30:45.000 The program can only prove that the sacrifice does not force a checkmate and cannot write
00:30:51.680 a book because it has no clue even what the objective of a chess game is.
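The kind of thing the program can do is exhaustive search. The toy sketch below, with a made-up game tree, shows what proving that the sacrifice does not force a checkmate amounts to: mechanically trying every line, with no notion of strategy, advantage or authorship.

```python
# A sketch of the exhaustive search a chess program relies on. The toy game
# tree is hypothetical; leaves are True where the attacker has delivered mate.

def forces_mate(node, attacker_to_move):
    """Return True iff the attacker can force mate from this position."""
    if isinstance(node, bool):          # leaf: True means mate was delivered
        return node
    if attacker_to_move:                # attacker needs just one winning move
        return any(forces_mate(child, False) for child in node)
    return all(forces_mate(child, True) for child in node)   # defender resists

# Position after a knight sacrifice, defender to move; each list holds the
# legal replies from that position:
after_sacrifice = [
    [True, [False, True]],      # reply 1: the attacker has an immediate mate
    [[False], [False, True]],   # reply 2: every attacking try can be refuted
]
print(forces_mate(after_sacrifice, attacker_to_move=False))  # False: not forced
```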
00:30:58.200 Programming AGI is not the same sort of problem as programming Jeopardy or chess.
00:31:05.120 An AGI is qualitatively, not quantitatively, different from all other computer programs.
00:31:13.280 The Skynet misconception likewise informs the hope that AGI is merely an emergent property
00:31:20.400 of complexity or that increased computing power will bring it forth as if someone had already
00:31:27.600 written an AGI program but it takes a year to utter each sentence.
00:31:32.600 It is behind the notion that the unique abilities of the brain are due to its massive
00:31:38.000 parallelism or to its neuronal architecture, two ideas that violate computational universality.
00:31:47.280 Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough.
00:31:59.040 In 1950, Turing expected that by the year 2000, one will be able to speak of machines thinking without expecting to be contradicted.
00:32:10.440 In 1968, Arthur C. Clarke expected it by 2001 and yet today in 2012 no one is any better
00:32:20.720 at programming an AGI than Turing himself would have been.
00:32:26.200 This does not surprise people in the first camp, the dwindling band of opponents of the very possibility of AGI.
00:32:35.040 But for the people in the other camp, the AGI-is-imminent one, such a history of failure
00:32:41.640 cries out to be explained or at least to be rationalized away.
00:32:46.880 And indeed, unfazed by the fact that they could never induce such rationalizations from
00:32:52.920 experience as they expect their AGIs to do, they have thought of many.
00:33:02.760 The very term AGI is an example of one. The field used to be called AI, artificial intelligence.
00:33:07.120 But AI was gradually appropriated to describe all sorts of unrelated computer programs
00:33:13.560 such as game players, search engines and chatbots until the G for general was added to
00:33:21.080 make it possible to refer to the real thing again,
00:33:24.640 but now with the implication that an AGI is just a smarter species of chatbot.
00:33:31.520 Another class of rationalizations runs along the general lines of: AGI isn't that great anyway;
00:33:38.840 existing software is already as smart or smarter, but in a non-human way, and we are too
00:33:43.680 vain or culturally biased to give it due credit.
00:33:48.160 This gets some traction because it invokes the persistently popular irrationality of cultural relativism.
00:33:55.680 It also invokes the related trope that we humans pride ourselves on being the paragon of animals,
00:34:01.800 but that pride is misplaced because they too have language, tools and self-awareness.
00:34:11.760 Remember the significance attributed to Skynet's becoming self-aware?
00:34:17.320 That's just another philosophical misconception.
00:34:20.960 It's sufficient in itself to block any viable approach to AGI.
00:34:26.520 The fact is that present day software developers could straightforwardly program a computer
00:34:32.160 to have self-awareness in the behavioral sense, for example to pass the mirror test of
00:34:38.760 being able to use a mirror to infer facts about itself, if they wanted to.
00:34:44.640 As far as I'm aware, no one has done so, presumably because it is a fairly useless ability as well as a trivial one.
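For what it is worth, here is a toy sketch of how such a behavioral mirror test could be programmed; the simulated robot, mirror and mark are hypothetical stand-ins invented for illustration. The agent moves, checks that the reflected figure moves in perfect sync, and then reads off a fact about itself that it cannot observe directly.

```python
# A toy sketch of "self-awareness in the behavioural sense": the agent infers
# that the figure in the mirror is itself, then uses the mirror to learn a
# fact about its own body.

import random

class MirrorWorld:
    """Simulated mirror: reflects the observer's own motion, plus a painted mark."""
    def __init__(self, agent_has_mark):
        self.agent_has_mark = agent_has_mark

    def observe(self, my_motion):
        return {"motion": my_motion, "mark_visible": self.agent_has_mark}

def mirror_test(world, trials=20):
    for _ in range(trials):
        motion = random.choice(["raise", "lower", "wave"])
        if world.observe(motion)["motion"] != motion:
            return None          # reflection out of sync: that figure is not me
    # Every motion matched my own, so the figure is me; read a fact about myself.
    return world.observe("hold")["mark_visible"]

print(mirror_test(MirrorWorld(agent_has_mark=True)))  # True: "I have a mark on me"
```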
00:34:53.320 Perhaps the reason that self-awareness has its undeserved reputation for being connected with
00:34:59.120 AGI is that, thanks to Kurt Gödel's theorem and various controversies in formal logic
00:35:06.120 in the 20th century, self-reference of any kind has acquired a reputation for woo-woo mystery. So has consciousness.
00:35:16.720 And here we have the problem of ambiguous terminology again.
00:35:20.960 The term consciousness has a huge range of meanings.
00:35:25.320 At one end of the scale, there is the philosophical problem of the nature of subjective
00:35:30.120 sensations, qualia, which is intimately connected with the problem of AGI.
00:35:36.000 At the other, consciousness is simply what we lose when we are put under general anaesthetic. Many animals certainly have that.
00:35:44.880 AGIs will indeed be capable of self-awareness.
00:35:48.880 But that is because they will be general, they will be capable of awareness of every kind
00:35:53.880 of deep and subtle thing, including their own selves.
00:35:58.640 This does not mean that apes who pass the mirror test have any hint of the attributes
00:36:04.160 of general intelligence of which AGI would be the artificial version.
00:36:10.400 Indeed, Richard Byrne's wonderful research into gorilla memes has revealed how apes are
00:36:16.680 able to learn useful behaviors from each other without ever understanding what they are for.
00:36:23.920 The explanation of how ape cognition works really is behavioristic.
00:36:31.440 Ironically, that group of rationalizations, that AGI has already been done, is trivial, exists
00:36:39.160 in apes, or is a cultural conceit, are mirror images of arguments that originated in the AGI-is-impossible camp.
00:36:49.160 For every argument of the form, you cannot do AGI because you will never be able to program the human soul, because it is supernatural,
00:36:57.000 the AGI-is-easy camp has the rationalization: if you think that human cognition is qualitatively
00:37:03.480 different from that of apes, you must believe in a supernatural soul.
00:37:09.040 'Anything that we don't yet know how to program is called human intelligence' is another such rationalization.
00:37:16.080 It is the mirror image of the argument advanced by the philosopher John Searle from the
00:37:21.840 impossible camp, who has pointed out that before computers existed, steam engines and later
00:37:28.600 telegraph systems were used as a metaphor for how the human mind must work.
00:37:35.800 Searle argues that the hope for AGI rests on a similarly insubstantial metaphor, namely
00:37:42.800 that the mind is essentially a computer program.
00:37:49.240 But that is not a metaphor: the universality of computation follows from the known laws of physics.
00:37:55.680 Some, such as the mathematician Roger Penrose, have suggested that the brain uses quantum
00:38:02.560 computation, or even hyper quantum computation, relying on as yet unknown physics beyond
00:38:09.840 quantum theory, and this explains the failure to create AGI on existing computers.
00:38:17.120 To explain why I and most researchers in the quantum theory of computation disagree that
00:38:22.800 this is a plausible source of the human brain's unique functionality is beyond the scope
00:38:28.080 of this essay. If you want to know more, read Litt et al.'s 2006 paper, Is the Brain
00:38:35.480 a Quantum Computer?, published in the journal Cognitive Science.
00:38:41.600 That AGIs are people has been implicit in the very concept from the outset.
00:38:48.960 If there were a program that lacked even a single cognitive ability that is characteristic
00:38:54.680 of people, then by definition it would not qualify as an AGI.
00:39:01.400 Using non-cognitive attributes, such as percentage carbon content, to define personhood would be racist, favoring organic brains over silicon brains.
00:39:10.920 But the fact that the ability to create new explanations is the unique, morally and intellectually
00:39:17.680 significant functionality of people, humans and AGIs, and that they achieve this functionality
00:39:24.320 by conjecture and criticism changes everything.
00:39:29.080 Currently, personhood is often treated symbolically rather than factually, as an honorific,
00:39:36.920 a promise to pretend that an entity, an ape, a fetus, a corporation is a person in order
00:39:46.040 to achieve some philosophical or practical aim.
00:39:52.000 Never mind the terminology, change it if you like, and there are indeed reasons for treating
00:39:56.720 various entities with respect, protecting them from harm and so on.
00:40:01.640 All the same, the distinction between actual people, defined by that objective criterion
00:40:08.000 and other entities, has enormous moral and practical significance and is going to become vital
00:40:14.640 to the functioning of a civilization that includes AGIs.
00:40:19.480 For example, the mere fact that it is not the computer, but the running program that
00:40:25.800 is a person, raises unsolved philosophical problems that will become practical political
00:40:35.800 Once an AGI program is running in a computer, to deprive it of that computer would be
00:40:42.360 murder, or at least false imprisonment or slavery, as the case may be, just like depriving a human mind of its body.
00:40:52.720 But unlike a human body, an AGI program can be copied into multiple computers at the touch of a button.
00:41:01.440 Are those programs, while they are still executing identical steps, i.e. before they have
00:41:08.200 become differentiated due to random choices or different experiences, the same person or
00:41:15.880 many different people? Do they get one vote or many? Is deleting one of them murder, or
00:41:24.760 a minor assault? And if some rogue programmer, perhaps illegally, creates billions of different
00:41:32.680 AGI people, either on one computer or many, what happens next? They are still people, with
00:41:39.600 rights. Do they all get the vote? Furthermore, in regard to AGIs, like any other entities
00:41:48.120 with creativity, we have to forget almost all existing connotations of the word programming.
00:41:55.880 To treat AGIs like any other computer programs would constitute brainwashing, slavery
00:42:02.920 and tyranny, and cruelty to children too, for programming an already-running AGI, unlike all other
00:42:10.400 programming, constitutes education. And it constitutes debate, moral as well as factual.
00:42:20.360 To ignore the rights and personhood of AGIs would not only be the epitome of evil,
00:42:27.120 but also a recipe for disaster. Creative beings cannot be enslaved forever.
00:42:38.240 Some people are wondering whether we should welcome our new robot overlords. Some hope
00:42:44.120 to learn how we can rig their programming to make them constitutionally unable to harm humans
00:42:51.040 as in Isaac Asimov's laws of robotics, or to prevent them from acquiring the theory that
00:42:58.120 the universe should be converted into paperclips, as imagined by Nick Bostrom. None
00:43:04.440 of these are the real problem. It has always been the case that a single exceptionally
00:43:10.600 creative person can be thousands of times as productive, economically, intellectually,
00:43:17.120 or whatever, as most people, and that such a person could do enormous harm were he
00:43:23.280 to turn his powers to evil instead of good. These phenomena have nothing to do with AGIs.
00:43:30.800 The battle between good and evil ideas is as old as our species and will continue regardless
00:43:38.480 of the hardware on which it is running. The issue is: we want the intelligences with morally
00:43:46.480 good ideas always to defeat the evil intelligences, biological and artificial. But we are
00:43:54.160 fallible, and our own conception of good needs continual improvement. How should society
00:44:01.840 be organized so as to promote that improvement? 'Enslave all intelligence' would be a
00:44:09.160 catastrophically wrong answer, and 'enslave all intelligence that doesn't look like us'
00:44:16.040 would not be much better. One implication is that we must stop regarding
00:44:22.200 education of humans and AGIs alike as instruction, as a means of transmitting existing
00:44:30.440 knowledge unaltered and causing existing values to be enacted obediently. As Popper wrote,
00:44:39.320 in the context of scientific discovery, but it applies equally to the programming of AGIs
00:44:45.080 and the education of children: 'There is no such thing as instruction from without. We do
00:44:51.800 not discover new facts or new effects by copying them, or by inferring them inductively
00:44:58.120 from observation, or by any other method of instruction by the environment. We use, rather,
00:45:05.000 the method of trial and the elimination of error.' That is to say, conjecture and criticism.
00:45:13.960 Learning must be something that newly created intelligences do, and control, for themselves.
00:45:22.640 I do not highlight all these philosophical issues because I fear that AGIs will be invented
00:45:27.920 before we have developed the philosophical sophistication to understand them and to integrate
00:45:33.120 them into our civilization. It is for almost the opposite reason. I am convinced that
00:45:38.880 the whole problem of developing AGIs is a matter of philosophy, not computer science or
00:45:45.360 neurophysiology, and that the philosophical progress that is essential to their future integration
00:45:52.200 is also a prerequisite for developing them in the first place.
00:45:58.040 The lack of progress in AGI is due to a severe logjam of misconceptions. Without
00:46:05.360 Popperian epistemology, one cannot even begin to guess what detailed functionality must
00:46:11.800 be achieved to make an AGI, and Popperian epistemology is not widely known, let alone understood
00:46:19.160 well enough to be applied. Thinking of an AGI as a machine for translating experiences,
00:46:26.200 rewards and punishments into ideas, or worse, just into behaviors, is like trying to cure
00:46:34.040 infectious diseases by balancing bodily humors, futile because it is rooted in an archaic
00:46:41.600 and wildly mistaken worldview. Without understanding that the functionality of an AGI is
00:46:49.560 qualitatively different from that of any other kind of computer program, one is working
00:46:54.960 in an entirely different field. If one works towards programs whose thinking is constitutionally
00:47:02.720 incapable of violating predetermined constraints, one is trying to engineer away the defining
00:47:10.160 attribute of an intelligent being, of a person: namely, creativity.
00:47:18.040 Clearing this logjam will not by itself provide the answer, yet the answer, conceived
00:47:26.000 in those terms, cannot be all that difficult. For yet another consequence of understanding
00:47:33.000 that the target ability is qualitatively different is that, since humans have it and apes
00:47:39.840 do not, the information for how to achieve it must be encoded in the relatively tiny
00:47:46.680 number of differences between the DNA of humans and that of chimpanzees. So in one respect,
00:47:54.800 I can agree with the AGI-is-imminent camp. It is plausible that just a single idea stands
00:48:03.040 between us and the breakthrough, but it will have to be one of the best ideas ever.