00:00:02.760 Now this is a conversation that I've been eagerly wanting
00:00:06.720 to have for years, so this is very exciting for me.
00:00:14.360 that AIs will be no more fundamentally intelligent
00:00:28.000 I suppose you mean capable of all the same types
00:00:38.520 So that would include, you know, doing science and doing art
00:01:06.480 computation hardware, and the other is about software.
00:01:10.960 So if we take the hardware, we know that our brains
00:01:47.080 Well, we can assume that maybe in 100 years time
00:01:53.080 we'll both be dead and therefore the number of conversations
00:02:01.280 and also some conversations depend on speed of computation.
00:02:11.720 the traveling salesman problem, then there are many
00:02:15.920 traveling salesman problems that we wouldn't be able
00:02:25.720 that we're not limited in the programs we can run
00:02:33.760 So, all limitations on us, hardware limitations on us
00:02:46.840 to the level of any other entity that is in the universe.
00:02:51.720 Because, you know, if somebody builds a computer
00:02:56.280 then we can use that very computer or that very technology
00:03:08.080 As far as explanations go, can we reach the same
00:03:17.120 Let's say, usually this is said not in terms of AGIs,
00:03:22.360 but in terms of extraterrestrial intelligences.
00:03:29.960 what if they are to us as we are to ants and so on?
00:03:37.560 which is easily fixable by adding more hardware.
00:03:50.840 that we are inherently incapable of comprehending.
00:03:58.760 He thinks that, you know, we can comprehend quantum mechanics,
00:04:08.240 can comprehend something beyond quantum mechanics,
00:04:14.320 which we can't comprehend and no amount of brain add-ons
00:04:36.560 certain qualia that maybe we can experience love
00:04:46.240 not just memory and speed, but specialized hardware.
00:04:50.200 And I think that falls victim to the same argument.
00:05:01.000 And if there's hardware that is needed for love,
00:05:05.400 let's say that somebody is born without that hardware,
00:05:12.160 that does love or that does mathematical insight
00:05:20.120 in the same way that the other part of the brain
00:05:28.760 and by chemicals that will cause concentrations
00:05:32.720 So therefore, an artificial device that computed
00:05:48.240 Could do the same job when it would be indistinguishable
00:05:50.720 and therefore a person who was augmented with one of those
00:06:09.640 and humans have the same range, in the sense I've defined.
00:06:15.520 Okay, so I think the software question is more interesting
00:06:35.480 that even the smartest humans can explain, right?
00:06:39.840 and I asked him to create the theory of quantum computing,
00:06:46.560 you could do this and just refer him to a reference
00:06:56.000 which means that they can't even perform basic tasks
00:07:03.200 So are these humans capable of explaining quantum computing
00:07:20.560 so these tasks that you're talking about are tasks
00:07:26.040 However, there are humans who are brain damaged
00:07:29.400 to the extent that they can't even do the tasks
00:07:33.680 And there comes a point when installing the program
00:07:41.440 that would be able to read the driver's license
00:07:44.080 or whatever would require augmenting their hardware
00:08:10.960 If it was hardware, then getting them to do this
00:08:15.760 would be a matter of repairing the imperfect hardware.
00:08:20.440 If it's software, it is not just a matter of them wanting to,
00:08:28.880 It is a matter of whether the existing software
00:08:58.040 but he will never be able to speak Mandarin Chinese
00:09:29.680 that he doesn't want to go through that process?
00:09:49.400 many of my relatives, a couple of generations ago
00:10:03.960 And yet very quickly, they did speak those languages.
00:10:06.920 Again, was it because what they wanted changed?
00:10:27.080 in the sense that my ancestors wanted to learn languages,
00:10:33.520 There is a level of dysfunction below which they couldn't,
00:10:46.280 It's like the question of could apes be programmed
00:10:58.680 but although programming them would not be
00:11:10.080 a matter of repairing a defect,
00:11:14.880 it would require intricate changes at the neuron level
00:11:38.560 and also doesn't have certain specialized modules
00:11:59.560 or drive a car is not being used in this conversation.
00:12:13.680 because you'd be intentionally creating a person
00:12:37.360 But wait, wait, wait, I said that it could only be hardware
00:12:41.720 at the low level, well, either at the level of brain defects
00:12:46.720 or at the level of using up the whole of our allocation
00:12:56.800 By the way, is it software-analogous, or is it hardware?
00:13:07.680 Software can be genetic too, though that doesn't mean
00:13:18.920 is because these people also happen to be the same people
00:13:25.560 It's mysterious to me why these people would also choose
00:13:32.760 or why they would choose to do worse on academic tests
00:13:38.560 So why they would choose to do exactly the sort of thing
00:13:41.360 somebody who is less cognitively powerful would do.
00:13:44.280 It seems the more parsimonious explanation there
00:13:45.920 is just that they are cognitively less powerful.
00:13:50.480 Why would someone choose not to go to school, for instance,
00:13:53.600 if they were given the choice and not to have any lessons?
00:13:57.360 Well, there are many reasons why they might choose that.
00:14:11.360 because you're just referring to a choice that people make,
00:14:19.840 As being by definition forced on them by hardware,
00:14:31.880 that required Brett Hall to be able to speak fluent Mandarin Chinese
00:14:49.920 Then he would be, quote, choosing the low-level tasks
00:14:55.000 rather than the, quote, cognitively demanding task.
00:14:59.000 But it's only the culture that makes that a cognitively demanding task.
00:15:12.600 Right. I mean, it doesn't seem unreasonable to say
00:15:14.520 that the kind of jobs you could do sitting down on a laptop
00:15:26.520 or if there's not something like intelligence recognition
00:15:28.320 or whatever you want to call it, that is a thing
00:15:39.560 and, or I guess an anti-correlation between people
00:15:43.040 and people who are doing, like let's say programmers, right?
00:15:47.840 all of them are above level one on this literacy survey.
00:15:51.760 Why did they just happen to make the same choices?
00:16:11.200 make use of certain abilities that people have.
00:16:22.680 then it's best to make the way that the company
00:16:29.840 or the signs on the doors, or the numbers on the dials,
00:16:40.120 You could, in principle, make each label on each door,
00:16:45.960 I don't know, you know, there are thousands of human languages,
00:17:01.920 it's a language that many educated people know fluently.
00:17:10.800 oh, there is something, there is some hardware reason
00:17:34.040 who are currently designated as functionally illiterate
00:17:41.360 If they, again, if they made the right choices,
00:18:06.120 So just learning, so if someone doesn't speak English,
00:18:26.120 is at a disadvantage learning about quantum computers,
00:18:30.520 but not only because of their deficiency in language.
00:18:35.800 If they come from a culture in which the culture
00:18:56.320 if a person doesn't think in terms of, for example, logic,
00:19:01.400 but thinks in terms of pride and manliness and fear,
00:19:12.720 that fill the lives of, let's say, prehistoric people,
00:19:27.440 then to be able to understand quantum computers,
00:19:38.680 but a range of other features of the civilization.
00:20:31.280 like depending on how good a day you were having.
00:20:33.560 And these are people who are adopted by families
00:20:39.160 Yet in fact, a hardware theory explains very well
00:20:48.840 Whereas I don't know how software would explain
00:20:52.000 Well, the hardware theory explains it in the sense
00:20:59.200 So it doesn't, it doesn't have an explanation beyond that
00:21:13.520 at the level of the brain that are correlated with IQ, right?
00:21:15.480 So your actual skull size has, like, a 0.3 correlation with IQ.
00:21:24.160 or the entire genetic variance of human intelligence.
00:21:26.080 But we have identified a few actual hardware differences
00:21:50.040 and differ only in the amount of hair they have
00:21:55.040 or in their appearance in any other way
00:22:01.440 that none of those differences make any difference
00:22:07.640 Only who their parents were makes a difference.
00:22:14.480 Wouldn't it be surprising that there's nothing else
00:22:17.040 correlated with IQ other than who your parents are?
00:22:40.520 Where they correlate things like how many adventure
00:22:47.840 movies have been made in a given year correlated
00:22:57.800 It's the number of films made by a particular actor
00:23:02.120 against the number of outbreaks of bird flu or whatever.
00:23:20.240 It's not just that correlation isn't causation.
00:23:44.120 is that the things that are correlated are things
00:24:07.960 and measure the IQ, they control for certain things.
00:24:14.480 And like you said, identical twins reared together.
00:24:41.200 between the ages of three and a half and four and a half.
00:24:46.840 that we don't know yet, but you know, something like that.
00:24:49.880 Then you would expect that thing, which we don't know about,
00:24:53.000 and nobody has bothered to control for in these experiments.
00:24:59.120 We would expect that thing to be correlated with IQ.
00:25:02.280 But unfortunately, that thing is also correlated
00:25:06.280 with whether someone's an identical twin or not.
00:25:17.000 This is say an aspect of appearance or something.
00:25:36.840 there's an infinite number of possible explanations,
00:25:40.200 So it could be that there's some unknown trait,
00:25:45.600 different adopted parents, so they can use it as a basis
00:25:50.600 But that is, I mean, I would assume they don't know
00:25:57.600 to treat kids differently at the age of three, for example?
00:26:02.560 It's like, it would be something like getting the idea
00:26:08.520 but I'm just trying to show you that it could be something
00:26:12.960 If you ask parents to list the traits in their children
00:26:16.960 that cause them to behave differently to all of their children,
00:26:24.720 that they're not aware of, which also affect their behavior.
00:26:29.240 So we'd first need an explanation for what this trait is
00:26:32.600 that researchers have not been able to identify,
00:26:42.440 because parents have a huge amount of information
00:26:49.000 about their children, which they are processing in their minds.
00:26:59.720 Okay. So I guess, let's leave this topic aside for now.
00:27:05.920 So if creativity is something that doesn't exist
00:27:15.400 go on YouTube and look up cat opening a door, right?
00:27:17.600 So you'll see, for example, a cat develops a theory
00:27:21.640 that applying torque to this handle, to this metal thing,
00:27:28.120 Now, what it'll do is it'll climb onto a countertop
00:27:35.760 get on a countertop and try to open the door that way.
00:27:37.840 But it conjectures that this is a way given its,
00:27:40.560 given its morphology that it can access the door.
00:27:45.200 And then the experiment is, will the door open?
00:27:49.000 this seems like a classic cycle of conjecture and refutation.
00:27:54.840 at least having some bounded form of creativity?
00:28:14.640 thriving in environments that they've never seen before.
00:28:21.920 In fact, if you go down to the level of detail,
00:28:27.760 an animal has never seen the environment before.
00:28:30.200 I mean, maybe a goldfish in a goldfish bowl might have,
00:28:40.200 it sees a pattern of trees that it has never seen before
00:28:43.640 and it has to create strategies for avoiding each tree
00:28:50.760 and not only that for actually catching the rabbit
00:29:03.240 Now, this is because of a vast amount of knowledge
00:29:11.960 Well, it's not the kind of knowledge that says first turn left,
00:29:19.600 It's instruction that takes input from the outside
00:29:24.840 and then generates a behavior that is relevant to that input.
00:29:34.520 but it involves a degree of sophistication in the program
00:29:37.360 that human robotics has not yet come anywhere near.
00:29:43.200 And by the way, then when it sees a wolf of the opposite sex,
00:29:46.880 it may decide to leave the rabbit and go and have sex instead
00:30:12.520 because that same program will lead the next wolf
00:30:17.600 to do the same thing in the same circumstances.
00:30:23.160 are ones it has never seen before and it can still function
00:30:27.200 is a testimony to the incredible sophistication
00:30:32.200 of that program, but it has nothing to do with creativity.
00:30:36.200 So humans do tasks that require much, much less programming
00:30:52.360 sophistication than that, such as sitting around a campfire,
00:31:03.120 Now animals can do the wolf running away thing,
00:31:13.680 but they can't tell a story, they don't tell a story.
00:31:17.520 Telling a story is a sort of typical creative activity,
00:31:22.640 it's the same kind of activity as forming an explanation.
00:31:43.080 that lets it jump on a branch so that the branch will get
00:31:46.440 out of its way in some sense will also function
00:31:50.240 in this new environment that it's never seen before.
00:31:53.200 But there are also other things that it can't do.
00:32:13.280 that jumping on a metal rod would get a wooden plank
00:32:25.480 I mean, if we don't know, at least I don't know
00:32:36.600 But if it was, for example, if it contained undergrowth,
00:32:42.280 then dealing with undergrowth requires some very sophisticated
00:32:46.280 programs, otherwise you will just get stuck somewhere
00:32:51.720 Now, I think a dog, if it gets stuck in a bush,
00:33:00.000 other than to shake itself about until it gets out.
00:33:18.480 It's just that its programming doesn't have that.
00:33:21.040 But an animal's programming easily could have that
00:33:24.280 if it lived in an environment in which that happened a lot.
00:33:41.720 So if, for example, I wrote a deep learning program,
00:33:46.400 and asked it, make me a trillion dollars on the stock market.
00:34:02.240 you might do better inventing a weapon or something.
00:34:12.240 So you can invent a paper clip to use an example
00:34:19.920 You can invent it, if paper clips hadn't been invented,
00:34:22.800 you can invent a paper clip and make a fortune.
00:34:29.080 but it's not an AI because it's not the paper clip
00:34:35.280 that has caused the whole value of the paper clip.
00:34:40.240 And similarly, if you invent a dumb arbitrage machine
00:35:05.280 for arbitrage opportunities that no one else sees.
00:35:24.760 But the thing is, so the models that are used nowadays
00:35:38.560 or if such a neural network that was kind of blank
00:35:42.980 and if you just arbitrarily throw financial history at it,
00:35:45.480 wouldn't it be fair to say that the AI actually figured out
00:36:11.080 by taking opportunities that were too expensive
00:36:15.240 So you can make money, you can make a lot of money.
00:36:28.040 Somebody has the idea that a smartphone would be good
00:36:37.960 And that idea cannot be anticipated by anything less
00:36:56.600 you discuss the possibility that virtual reality generators
00:37:04.840 people like Sam Harris speak of both thoughts and senses
00:37:14.120 but they are both things that come into consciousness.
00:37:16.920 So do you think that a virtual reality generator
00:37:21.200 could also place thoughts as well as sense data into the mind?
00:37:26.040 Yes, but that's only because I think that this model is wrong.
00:37:33.880 as he puts it, with the stage cleared of all the characters.
00:37:56.280 and you're envisaging it as having certain properties,
00:38:02.160 but that doesn't matter, we can imagine lots of things
00:38:07.640 In fact, that in a way characterizes what we do all the time.
00:38:19.000 about this empty stage as being thoughts about nothing.
00:38:23.400 One can interpret the actual hardware of the stage
00:38:44.960 Okay, and then let's talk about the Turing principle.
00:38:50.000 It's otherwise been called the Church-Turing-Deutsch principle.
00:38:55.760 so by the way, it states that any universal computer
00:38:59.480 Would this principle imply that you could simulate
00:39:04.720 in a compact efficient computer that was smaller
00:39:14.720 Again, no, it couldn't simulate the whole universe.
00:39:32.960 the more closely it could simulate the whole universe.
00:39:36.760 But it couldn't ever simulate the whole universe
00:39:43.200 because it, well, if you wanted to simulate itself
00:40:02.400 Even if we discovered ways of encoding information
00:40:23.520 because that would mean because of the universality
00:40:55.160 that you realize what computational universality is.
00:41:06.560 is the most important thing in the theory of computation
00:41:14.080 unless you have a concept of a universal computer.
00:41:26.000 there are people who have tried offering explanations
00:41:32.520 Or how could mere physical interactions explain consciousness?
00:41:59.320 for why they are namely that each individual case of this
00:42:08.840 So let's say that some people say, for example,
00:42:20.640 Nobody can prove that it's possible until they actually do it
00:42:33.400 that it's not true that this is a fundamental limitation.
00:42:41.880 The trouble with that idea, that it is a fundamental limitation,
00:42:46.600 is that it could be applied
00:43:15.440 So there is no way to refute that by experiment,
00:43:36.040 You have to have an explanation for why it is impossible.
00:43:41.840 almost all mathematical propositions are undecidable.
00:43:53.560 because thinking we could decide everything is hubris.
00:44:16.320 you can then say, well, what does this actually mean?
00:44:18.880 Does this mean that maybe we can never understand
00:44:24.600 Well, it doesn't because if the laws of physics
00:44:37.480 It would limit our ability to make predictions,
00:44:40.480 but then lots of our ability to make predictions
00:44:52.480 and therefore the properties of the physical world.
00:44:59.400 which has distributed powers and checks and balances?
00:45:03.720 So the reason I asked is the last administration
00:45:10.640 And, you know, that theory could have been tested
00:45:16.040 but because our system of government has distributed powers,
00:45:18.960 you know, Congress opposed the testing of that theory
00:45:22.920 So if our American government wanted to fulfill
00:45:24.960 Popper's criterion, would we need to give the president
00:45:39.040 that perfectly fulfills Popper's criterion.
00:45:46.440 I think the British one is actually the best in the world
00:45:55.080 Making a single change like that is not going to be the answer.
00:46:20.760 What they wanted to do, what they thought of themselves
00:46:24.600 as doing was to implement the British constitution.
00:46:42.760 The trouble is that they all, in order to do this,
00:46:48.480 and then they wondered whether they should get an alternative
00:46:51.680 king, whichever way they did it, there were problems.
00:46:59.040 I think made for a system that was inherently much worse
00:47:29.520 or sorry, never had it, that the king did use to have it
00:47:35.400 of the enlightenment and so on, no longer had a full legitimacy
00:47:42.640 to legislate, so they had to implement a system
00:47:49.080 where him seizing power was prevented by something
00:48:00.160 checks and so the whole thing that they instituted
00:48:29.240 In the British system, blame is absolutely focused,
00:48:49.760 and right to the government, that's where it's all focused
00:48:54.240 into and there are no systems that do that better,
00:49:01.080 but as you well know, the British system also has flaws
00:49:07.880 and we recently saw with the sequence of events,
00:49:12.400 with the Brexit referendum and then Parliament balking
00:49:17.200 at implementing some laws that it didn't agree with
00:49:32.480 and there was sort of mini constitutional crisis,
00:49:36.120 which could only be resolved by having an election
00:49:42.880 which is by the mathematics of how the government works,
00:49:48.480 although we have been unlucky several times recently
00:50:03.400 are there will be like a finite amount of total matter
00:50:08.120 There's a limit and that means that there's a limit
00:50:11.400 on the amount of computation that this matter can execute
00:50:18.080 Perhaps even the amount of economic value we can sustain, right?
00:50:32.960 So what you've just recounted is a cosmological theory.
00:50:47.840 We know very little about what the universe is in the large,
00:50:57.120 So it doesn't make all that much sense to speculate
00:51:11.200 about the asymptotic form of very small things.
00:51:25.760 like 10 to the minus 42 seconds and that kind of thing.
00:51:38.760 It's just that we don't know what happens beyond that.
00:51:45.360 what happens on a large scale may impose a finite limit,
00:51:48.240 in which case computation is bounded by a finite limit
00:51:59.840 from its being imposed by inherent hardware limitations.
00:52:04.840 For example, if there's a finite amount of GDP available
00:52:14.000 in the distant future, then it's still up to us,
00:52:31.320 of even more worthwhile things that have yet to be invented.
00:52:39.640 what to fill the 10 to the 10 to the 10 to the 10 bits with.
00:52:42.880 Now, my guess is that there are no such limits,
00:52:49.960 but my worldview is not affected by whether there are such limits
00:52:55.760 because as I said, it's still up to us what to fill them with.
00:53:01.040 And then if we get chopped off at some point in the future,
00:53:06.160 then everything will have been worthwhile up to then.
00:53:20.360 so there should be like an exponential growth of knowledge.
00:53:26.840 or decrease in research productivity, economic growth,
00:53:31.640 that there's a limited amount of fruit on the tree
00:53:50.240 There are sociological factors in academic life
00:54:05.120 but that has been a tendency in what has happened.
00:54:21.720 And for example, I think there was, I've often said,
00:54:33.600 there was a stultification in theoretical physics,
00:54:47.520 quantum computers would have been invented in the 1930s
00:54:59.200 but it just goes to show that there are no guarantees.
00:55:09.280 does not guarantee that we won't start declining tomorrow.
00:55:18.960 are parochial effects caused by specific mistakes
00:55:30.760 Okay, so I wanna ask you a question about Bayesianism
00:55:39.080 is because there seems to be a way of describing
00:55:44.680 when the relative status of a theory hasn't changed.
00:55:57.000 But suppose in the future, we were able to build an AGI
00:56:02.600 on a quantum computer and we were able to design
00:56:06.560 as you suggest to have it be able to report back
00:56:11.280 Now, it seems that even though many worlds remains
00:56:29.200 So what has happened there is that at the moment
00:56:33.200 we have only one explanation that can't be immediately knocked
00:56:43.440 we might well decide that this will provide the ammunition
00:56:50.800 to knock down even ideas for alternative explanations
00:57:02.480 because for a start, we know that quantum theory is false.
00:57:14.400 But I would replace the idea of increased credence
00:57:19.320 with a theory that the experiment will provide a quiver,
00:57:40.040 of arguments that goes beyond the known arguments,
00:58:23.560 And they would advocate a methodology of science
00:58:36.400 Now, of course, I think that empiricism is a mistake
00:58:41.640 So we shouldn't, but not everybody thinks that.
00:59:07.560 which shouldn't have been needed in the first place,
00:59:10.080 but that's why I think that that's the way I would express
00:59:30.840 is the best way to deal with existential dangers
00:59:36.040 so you have something like gain-of-function research, right?
00:59:38.760 And it's conceivable that it could lead to more knowledge
00:59:42.720 but I guess at least in Bayesian terms, you could say,
00:59:48.880 or has led to the spread of a man-made pathogen
00:59:54.280 that would not otherwise have developed naturally.
00:59:57.520 So would your belief in open-ended scientific progress
1:00:01.120 allow us to say, okay, let's stop gain-of-function research?
1:00:05.160 No, it wouldn't allow us to say, let's stop it.
1:00:14.160 let us do research into how to make laboratories more secure
1:00:33.520 through which the reagents pass more impermeable
1:00:37.040 before we actually do the experiments with the reagents.
1:00:44.160 just because new knowledge might be discovered.
1:00:50.600 but which knowledge we need to discover first,
1:00:55.960 which is a non-trivial part of any research
1:01:02.400 But would it be conceivable for you to say that
1:01:04.680 until we figure out how to make sure these laboratories
1:01:10.120 we will stop the research as it exists now.
1:01:15.840 meanwhile, we'll focus on doing the other kind of research,
1:01:23.800 Yes, in principle, that would be reasonable.
1:01:25.480 I don't know enough about the actual situation
1:01:28.200 to have a view. I don't know how these labs work.
1:01:32.360 I don't know what the precautions consist of.
1:01:38.720 And when I hear people talking about, for example, lab leak,
1:01:51.360 So the leak is not a leak from the lab to the outside.
1:01:56.800 The leak is from the test tube to the person
1:02:01.360 and then from the person walking out the door.
1:02:05.120 And I don't know enough about what these precautions are
1:02:12.800 to know to what extent the risk is actually minimized.
1:02:31.440 that all labs have to stop and meet a criterion,
1:02:40.400 I suspect that the stopping wouldn't be necessary
1:02:54.720 I asked him why he thinks that human civilization
1:02:58.520 is only gonna be around for 700 more years.
1:03:06.280 that creative, optimistic societies will innovate
1:03:10.760 safety technologies faster than totalitarian, static societies
1:03:15.160 can innovate destructive technologies.
1:03:17.320 And he responded, maybe, but the cost of destruction
1:03:20.880 is just so much lower than the cost of building.
1:03:24.920 And that trend has been going on for a while now.
1:03:32.320 like the kinds that we saw many times over in the Cold War?
1:03:38.120 First of all, I think we've been getting safer
1:03:40.040 and safer throughout the entire history of civilization.
1:03:47.200 that wiped out a third of the population
1:04:10.920 All our cousin species have been wiped out.
1:04:19.040 Also, if an asteroid, a 10-kilometer asteroid, had been on target
1:04:27.080 with the Earth at any time in the past two million years
1:04:30.920 or whatever it is, history of the genus Homo,
1:04:36.880 Whereas now, it'll just mean high taxation for one.
1:04:42.360 You know, that's how much amazingly safer we are now.
1:05:03.520 And on the other hand, the atomic bomb accident sort of thing
1:05:13.280 would have had zero chance of destroying civilization.
1:05:17.440 All they would have done is cause a vast amount of suffering.
1:05:22.520 And but I don't think we have the technology
1:05:29.040 I think all we would do if we just deliberately unleashed hell
1:05:34.920 all over the world is we would cause a vast amount of suffering.
1:05:39.520 But there would be survivors and they would resolve
1:05:46.280 So I don't think we're even able to let alone
1:05:56.200 I think we are doing the wrong thing largely
1:06:00.920 in regard to both external and internal threats.
1:06:05.920 But I don't think we're doing the wrong thing
1:06:12.400 And over the next 700 years or whatever it is,
1:06:22.680 But I see no reason why if we are solving problems,
1:06:34.000 I don't think it's like this, to take another metaphor.
1:06:44.400 and there's one black ball and you take out
1:06:46.720 a white ball and a white ball and a white ball and a white ball.
1:06:49.000 And then you hit the black ball and that's the end of you.
1:06:51.680 I don't think it's like that because every white ball
1:06:54.760 you take out and have reduces the number of black balls
1:07:02.680 So again, I'm not saying that's the law of nature.
1:07:07.160 It could be that the very next ball we take out
1:07:09.480 will be the black one, that'll be the end of us.
1:07:17.720 I do want to talk about the fun criterion.
1:07:22.600 how other people define other positive emotions
1:07:24.760 like eudaimonia or well-being or satisfaction
1:07:33.280 And all these things are not very well defined.
1:07:41.080 until we have a satisfactory theory of qualia
1:07:45.880 at least and probably more or satisfactory theory
1:07:49.000 of creativity, how creativity works and so on.
1:07:58.200 for the thing that I explain more precisely,
1:08:03.720 but still not very precisely as a creation of knowledge
1:08:10.880 without where the different kinds of knowledge
1:08:30.400 in which the everyday usage of the word fun differs
1:08:35.480 from that is that fun is considered frivolous
1:08:39.720 or seeking fun is considered as seeking frivolity.
1:08:45.920 But I think that isn't so much a different use of the word.
1:08:53.240 about whether this is a good or a bad thing.
1:08:56.440 But nevertheless, I can't define it precisely.
1:09:00.400 The important thing is that there is a thing
1:09:03.240 which has this property of fun that you can't,
1:09:25.880 and whether it's doing it according to the theory
1:09:33.840 And therefore it is subject to the criticism
1:09:37.320 and another way of looking at the fun theory
1:09:43.080 is subject to the criticism that this isn't fun, i.e.,
1:09:47.600 this is making a privileging one kind of knowledge
1:09:52.720 arbitrarily over another, rather than being rational
1:09:59.960 Is this placing a limitation on universal explainers then
1:10:08.720 And it seems to me that sometimes we actually can
1:10:13.520 Like, for example, take exercise, no pain, no gain.
1:10:17.320 but once you start going, you understand the mechanics,
1:10:19.800 you develop a theory for why it can and should be fun.
1:10:22.840 Yes, yes, well, that's quite a good example
1:10:25.920 because there you see that fun cannot be defined
1:10:30.040 as the absence of pain, so you can be having fun
1:10:40.440 And that physical pain is not sparking suffering, but joy.
1:10:48.800 However, there is such a thing as physical pain,
1:10:58.360 And that's important because if you are dogmatically
1:11:20.320 that maybe this can't be fun or maybe this isn't yet fun
1:11:35.920 and your pain, your suffering doesn't matter,
1:11:39.640 then that opens the door to not only to suffering,
1:11:47.080 but to stasis, you won't be able to get to a better theory.
1:11:57.720 So like first of all, Aristotle thought that's like,
1:12:00.040 I guess a sort of widely defined sense of happiness
1:12:04.280 is what should be the goal of our endeavors.
1:12:14.480 so that what you just said might very well be fun.
1:12:18.440 The point is, the underlying thing is, as far as you know,
1:12:23.440 going one level below where really to understand
1:12:26.120 that we'd need to go about seven levels below that,
1:12:29.680 But the important thing is that there are several kinds
1:12:36.640 And the one that is written down in the exercise book
1:12:41.120 that says you should do this number of reps
1:12:47.160 and it doesn't matter if you feel that and so on.
1:12:50.200 That's an explicit theory and it contains some knowledge
1:12:57.360 Not all our knowledge is like that.
1:13:10.240 like our knowledge of grammar is my favorite example
1:13:14.720 because we know why certain sentences are acceptable
1:13:19.280 but we can't state explicitly all in every case,
1:13:31.720 and explicit knowledge as conscious and unconscious knowledge.
1:13:35.040 All those are bits of program in the brain.
1:13:43.640 if you define knowledge as information with causal power,
1:13:48.320 they are all information with causal power.
1:13:51.440 They all contain truth and they all contain error
1:13:55.800 and it's always a mistake to shield something,
1:13:59.880 to shield one of them from criticism or replacement.
1:14:03.760 Not doing that is what I call the fun criterion.
1:14:13.520 So why would creating an AGI through evolution
1:14:19.680 your theory is that you need to be a general intelligence
1:14:23.640 But by that point, an evolved simulated being
1:14:35.840 So the kind of simulation by evolution that I'm thinking of,
1:14:42.920 but the kind I'm thinking of and which I said
1:15:00.840 which in this simulation would be some kind of NPCs
1:15:16.640 and some of them might or might not become people.
1:15:31.640 the way that the only way that I can imagine
1:15:47.960 I have proposed that it was needed to transmit memes.
1:15:52.600 So there'd be people who were transmitting memes creatively
1:16:04.000 before it managed to increase their stock of memes.
1:16:11.080 So in every generation there was a stock of memes
1:16:15.120 that was being passed down to the next generation.
1:16:17.480 And once they got beyond a certain complexity,
1:16:20.080 they had to be passed down by the use of creativity
1:16:28.200 and as I say, I can't think of any other way it could have been,
1:16:30.920 where there was genuine creativity being used
1:16:37.160 but not so quickly that it didn't increase the meme bandwidth.
1:16:42.160 Then in the next generation, there was more meme bandwidth.
1:16:47.160 And then after a certain number of generations,
1:16:59.360 you know, firmware, I expect, to use this firmware
1:17:03.520 for something other than just blindly transmitting memes
1:17:12.880 So in that time, it would have been very unpleasant
1:17:31.240 But I don't think there would have been a moment
1:17:44.440 at the time when they were blindly transmitting memes.
1:17:48.640 Because they were using genuine creativity.
1:17:52.600 They were just not using it to any good effect.
1:18:02.960 So you're not aware that you're in the experience machine,
1:18:08.560 But you're still doing the things that would make you have fun.
1:18:14.560 So would you be tempted to get in the experience machine?
1:18:16.720 Would it be compatible with the fun criterion
1:18:19.560 but I'm not sure what the experience machine is.
1:18:26.920 so I mean, is it just a virtual reality world
1:18:31.840 in which things work better than in the real world
1:18:36.640 So I thought I could remember Robert Nozick
1:18:38.160 and the idea is that you would enter this world
1:18:41.600 and but you would forget that you're in virtual reality.
1:18:47.600 in every possible way that it could be perfect
1:18:54.040 But you would think the relationships you have here
1:18:55.720 are real, the knowledge you're discovering here is novel
1:18:59.720 Is it, would you be tempted to enter this simulated world?
1:19:04.520 Well, no, I certainly wouldn't want to enter
1:19:07.880 the world any world which involves erasing the memory
1:19:16.360 Related to that is the fact that the laws of physics
1:19:19.520 in this virtual world couldn't be the true ones
1:19:27.120 So I'd be in a world in which I was trying to learn
1:19:30.440 laws of physics which aren't the actual laws.
1:19:33.640 And they would have been designed by somebody
1:19:36.360 for some purpose to manipulate me as it were.
1:19:39.840 Maybe it would be designed to like be a puzzle
1:19:47.640 but it would have to be by definition a finite puzzle
1:19:54.280 And meanwhile, in the actual world, things are going wrong
1:19:57.160 and I don't know about this and eventually they go so wrong
1:20:05.120 The final question I always like to ask people I interview
1:20:08.960 is what advice should you give to young people?
1:20:34.680 So for example, I may have an opinion that it's dangerous
1:20:51.520 And I think it's a good epistemological reason for that.
1:20:55.920 Namely that if your short-term goals are subordinate
1:21:00.320 to your long-term goal, then if your long-term goal is wrong
1:21:04.360 or deficient in some way, you won't find out until you're dead.
1:21:08.480 So it's a bad idea because it is subordinating the things
1:21:13.000 that you could error correct now or in six months time
1:21:16.840 or in a year's time to something that you could only
1:21:25.240 So I'm suspicious of advice of the form, set your goal
1:21:33.200 and even more suspicious of make your goal be so-and-so.
1:21:43.680 But why is it, what might it be, the relationship between
1:21:54.520 Again, I tried to make this example of quote advice
1:22:05.440 I just gave an argument for why certain other arguments are bad.
1:22:10.600 So but if it's advice of the form a healthy mind
1:22:15.560 in a healthy body or don't drink coffee before 12 o'clock
1:22:28.720 It's a, if I have an argument, I can give the argument
1:22:36.600 Who knows what somebody might do with an argument?
1:22:53.880 I don't claim that they are privileged over other arguments.
1:22:59.240 I just put them out because I think that this argument works.
1:23:05.720 And I expect other people not to think that they work.
1:23:09.880 I mean, we've just done this in this very podcast.
1:23:14.160 You know, I put out an argument about AI and that kind of thing
1:23:19.920 You know, if I was in the position of making that argument
1:23:27.920 and saying that, therefore you should do so and so,