00:00:10.000 To introduce David Deutsch as the father of the Quantum Computer,
00:00:14.000 we would not be capturing the full impact of his work.
00:00:18.000 Only when we scale up and put his efforts into the history of knowledge
00:00:23.000 can what he stated in his famous 1985 paper be grasped.
00:00:28.000 It marked not just the invention of a new gadget, a faster computer,
00:00:33.000 but a new explanation of computation and of the world
00:00:37.000 that has transformed our understanding of both.
00:00:45.000 For 70 years after Copernicus posited heliocentrism,
00:00:49.000 his effort was largely dismissed as merely a quote,
00:00:56.000 We clearly experience, insisted Cardinal Bellarmine,
00:01:14.000 quote, what the system of the world could really be.
00:01:18.000 Like Galileo, we stand at the end of our own Copernican delay.
00:01:37.000 for containing the strangeness of a new explanation.
00:01:41.000 in what Andrew Whitaker has called the new quantum age,
00:01:45.000 the age when quantum theory began to gain purchase on the real,
00:01:49.000 to take hold, when an improved technology first took shape.
00:02:00.000 this improvement was more than a change in degree.
00:02:04.000 Whereas Alan Turing famously described the machine running
00:02:13.000 David Deutsch in 1985 extended the Church-Turing conjecture
00:02:21.000 a machine running on the physical systems we call qubits
00:02:29.000 The world, he demonstrated, could be perfectly simulated,
00:02:52.000 that can allow and contain our remakings of it.
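A minimal sketch of what that extension amounts to, purely illustrative and not from the talk itself: describing an n-qubit register classically takes 2^n complex amplitudes, which is why a computer whose elementary parts are qubits is more than a merely faster Turing machine. The snippet below (names and numbers are my own) makes the point.

```python
# Illustrative sketch only: an n-qubit register needs 2**n complex amplitudes
# to describe classically.
import numpy as np

def apply_hadamard(state, target, n):
    """Apply a Hadamard gate to one qubit of an n-qubit statevector."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, H if q == target else np.eye(2))
    return op @ state

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                           # start in |000>
for q in range(n):
    state = apply_hadamard(state, q, n)  # put every qubit into superposition

print(2**n, "complex amplitudes are needed to track just", n, "qubits classically")
print(np.round(np.abs(state)**2, 3))     # each of the 2**n outcomes has probability 1/2**n
```

For n = 3 the printout shows 8 amplitudes, each outcome with probability 1/8; by n = 50 a classical description already needs on the order of 10^15 amplitudes.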
00:03:09.000 That special status tells us some very important things
00:03:23.000 AGI, artificial general intelligence must be possible
00:03:28.000 because what we now know about the physics of computation
00:03:42.000 require a physical object to do can in principle be emulated
00:04:00.000 and how different should all reactions to them be?
00:04:16.000 But in myself, David has focused on the history
00:04:38.000 is the capacity to produce explanatory knowledge.
00:04:42.000 Knowledge that allows us to survive in the world
00:04:56.000 we should not take our success in advancing knowledge for granted.
00:05:00.000 From the perspective of the history of knowledge,
00:05:06.000 need to be put into the context of our need for it.
00:05:10.000 Perhaps the biggest threat artificial intelligence poses
00:05:18.000 So now I'm going to turn it over to you, David,
00:05:37.000 OK, well, as species or as a civilization or civilizations,
00:05:57.000 others causing suffering and tragedy on such a scale
00:06:19.000 Because given that each danger has a non-zero probability
00:06:30.000 One of the many errors that one can easily get sucked into
00:06:34.000 when trying to apply game theory and probabilities
00:06:42.000 are the important determinants of what will happen.
00:06:59.000 Our job is to make that infinite series of bad probabilities
00:07:12.000 On the other hand, if you think that we won't always face dangers,
00:07:16.000 you think that there will come a blessed utopian moment
00:07:20.000 after which our comfortable existence is guaranteed
00:07:25.000 You will have to provide some criterion distinguishing
00:07:42.000 We'll have to do what no other surviving species can.
00:07:52.000 explanatory knowledge, to overcome an endless stream of dangers.
00:07:58.000 We know only a few of them, and then not their probabilities,
00:08:12.000 or merely careless extraterrestrials, as pessimists have warned.
00:08:17.000 And of course, artificial general intelligence,
00:08:41.000 and any of those infinitely many existential dangers,
00:08:51.000 Therefore, I think it's useful to classify each potential danger
00:09:03.000 we currently don't have the knowledge to overcome it.
00:09:34.000 Why don't we yet have an adequate knowledge of those things?
00:09:38.000 I'm not sure, maybe not enough people are interested enough.
00:09:46.000 but also in this first category are impacts from space,
00:09:52.000 from large objects that we don't yet have the knowledge
00:10:01.000 In this case, I do know it's because we as a civilization
00:10:11.000 We prefer to gamble risking our entire long-term future
00:10:26.000 You may think it's self-evident that that gamble has been worthwhile.
00:10:30.000 Here we are not contaminated and not wiped out,
00:10:34.000 but isn't that just because we're not yet living in that future
00:10:41.000 After all, we are living in the aftermath of a closely-related gamble,
00:10:46.000 namely the decades-long campaign opposing nuclear power.
00:10:56.000 turned into a tremendous drag on the project to combat climate change.
00:11:02.000 The short-termism in opposing those two nuclear technologies
00:11:09.000 It's the hallmark of the version of the precautionary principle,
00:11:14.000 which has in turn been a major strand of the environmental movement.
00:11:19.000 Wouldn't it be rather ironic if that version of the principle
00:11:23.000 and the movement were to bring about the great environmental catastrophe
00:11:30.000 precisely by advocating selfish short-term benefit
00:11:34.000 at the expense of the long-term health of the climate?
00:11:54.000 and through its obeying simple laws of motion that we already know.
00:12:01.000 but a finite amount of knowledge will protect us from such an impact
00:12:10.000 But the bigger and faster the approaching asteroid
00:12:18.000 the more of a special kind of knowledge we'll need.
00:12:25.000 Wealth is the set of transformations that one is capable of bringing about,
00:12:35.000 that we could deflect harmlessly given a certain time to prepare.
00:12:39.000 You may recognize that notion of wealth as a constructor-theoretic one.
00:12:50.000 that to have any chance of envisaging the future of technology,
00:12:59.000 The intuition is that the more something you want to make
00:13:03.000 or transform, the more effort you have to put in.
00:13:08.000 That has been true from the dawn of our species
00:13:15.000 Automation merely reduces the constant of proportionality.
00:13:21.000 Even just maintaining the robots is effort proportional to the amount of output.
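A toy illustration of that proportionality claim, with hypothetical numbers of my own rather than anything from the talk: if effort is roughly k times output, automation shrinks k, but doubling the output still doubles the effort.

```python
# Toy illustration (hypothetical numbers): effort ~ k * output.
def effort(output_units: float, k: float) -> float:
    """Human effort needed for a given output, assuming effort is proportional to output."""
    return k * output_units

for label, k in [("manual production", 1.0), ("automated (robot maintenance only)", 0.05)]:
    # Automation only changes k; the proportionality itself remains.
    print(f"{label}: {effort(1000, k):.0f} effort units for 1000 units,"
          f" {effort(2000, k):.0f} for 2000")
```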
00:13:43.000 And wealth will consist of our library of programs.
00:13:48.000 The universal constructor can be programmed to self-reproduce.
00:13:52.000 So, once you have one, you soon have two to the n of them.
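A small sketch, again purely illustrative, of why self-reproduction changes that arithmetic: if each constructor copies itself once per generation, the population doubles every generation, so after n generations there are 2^n of them for roughly the same one-off effort.

```python
# Illustrative sketch: self-reproducing constructors double every generation.
def constructors_after(generations: int, seed: int = 1) -> int:
    """Population after the given number of generations, if each one copies itself once per generation."""
    count = seed
    for _ in range(generations):
        count *= 2   # every existing constructor builds one copy of itself
    return count

for g in (1, 10, 20, 30):
    print(f"{g:>2} generations -> {constructors_after(g):,} constructors")
```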
00:14:17.000 you can sit back and watch your two to the n Teslas
00:14:26.000 And no, we are not going to have a universal constructer apocalypse
00:14:37.000 It can't think. It doesn't know that it's current job
00:14:47.000 Unless, of course, you put an AGI program into it.
00:15:04.000 Each of you is precisely one of those universal constructors
00:15:17.000 Now, the second category of potential existential dangers
00:15:26.000 It won't be solved with just the known laws of physics
00:15:29.000 and some wealth and some universal constructors.
00:15:33.000 They'll only be solved with new explanatory knowledge.
00:15:39.000 Topically, there are plenty of potential pandemic apocalypses.
00:15:55.000 What little knowledge we have of how to defend ourselves
00:16:03.000 The missing knowledge here is biochemistry, epidemiology,
00:16:26.000 Albeit not explanatory knowledge, not intelligently,
00:16:37.000 extraterrestrial paleontologists may eventually be amazed
00:16:41.000 that a civilization with billions of individuals
00:17:05.000 yet they are the ones that are the least feared
00:17:14.000 Like in 1900, no one knew that smoking was dangerous.
00:17:18.000 By the time the knowledge that they were dangerous had been created,
00:17:23.000 decades later, cigarettes had killed hundreds of millions of people.
00:17:37.000 So, how can we create the knowledge to protect ourselves
00:17:49.000 How to address the risk that by the time we do know,
00:17:53.000 we won't have time enough to create the required knowledge.
00:17:58.000 The answer is, by creating general purpose knowledge,
00:18:13.000 about novel aspects of it that have now become urgent.
00:18:23.000 The survival of our species depends absolutely on progress
00:18:31.000 and on the speed at which we make progress there.
00:18:42.000 is understanding the theory of universal constructors
00:18:57.000 customized to deflect an approaching chunk of neutronium
00:19:02.000 or 10 billion doses of a new vaccine in a hurry
00:19:12.000 So that's how we deal with the third category unknown
00:19:22.000 The fourth category is at once even more dangerous
00:19:34.000 at least the theoretical knowledge deal with it.
00:19:44.000 It's a bit paradoxical that the unknowable is less dangerous
00:19:50.000 but that's because the only thing that is unknowable
00:19:58.000 And so, the only true, dangerous things in that sense
00:20:03.000 in the universe are entities that create explanatory knowledge,
00:20:17.000 Now, the knowledge of how to prevent people from being dangerous
00:20:24.000 It took our species many millennia to create it,
00:20:31.000 The only way to prevent people from being dangerous
00:20:37.000 Specifically, it is the knowledge of liberal values,
00:20:55.000 regardless of their hardware characteristics, are decent.
00:21:12.000 and they may devote their creativity to doing that.
00:21:22.000 that is the great majority of the population of such a society
00:21:27.000 will devote some of their creativity to thwarting that,
00:21:31.000 and they will win provided that they keep creating knowledge
00:21:53.000 aren't we doomed, aren't we drawing balls out of an urn,
00:22:06.000 what is actually lack of knowledge or ignorance.
00:22:11.000 It's been bedeviling planning for the unknown for decades now.
00:22:15.000 Whenever you draw out a white ball of knowledge
00:22:21.000 you're turning some of the black balls still in the urn white.
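A rough toy model of that point, my own construction rather than Bostrom's formalism or the speaker's: in the naive urn picture the number of black balls (civilization-ending discoveries) is fixed, but if each white ball drawn (a piece of new knowledge) also converts some of the remaining black balls to white, ruin stops being inevitable.

```python
# Toy model (hypothetical parameters): drawing knowledge can "whiten" the urn.
import random

def draws_until_black(black, white, whitening_per_draw, rng):
    """Return the draw on which a black ball comes up, or -1 if the urn is fully whitened first."""
    draws = 0
    while black > 0:
        draws += 1
        if rng.random() < black / (black + white):
            return draws                          # drew a black ball: catastrophe
        converted = min(whitening_per_draw, black)
        black -= converted                        # new knowledge defuses some remaining dangers
        white += converted
    return -1                                     # no black balls left: catastrophe impossible

print("static urn, no whitening :", draws_until_black(10, 1000, 0, random.Random(0)))
print("urn whitened by knowledge:", draws_until_black(10, 1000, 2, random.Random(0)),
      "(-1 means every black ball was converted before any was drawn)")
```

The parameters are arbitrary; the only point is that once knowledge changes the contents of the urn, the "eventually you must draw a black ball" argument no longer goes through.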
00:22:28.000 For example, the next pandemic is a matter of random mutations
00:22:35.000 the next extinction asteroid is already out there.
00:22:42.000 There's no such thing as the probability of it.
00:22:46.000 Outcomes can't be analyzed in terms of probability
00:22:52.000 that predict that something is or can be approximated
00:23:12.000 A misdirection away from the baseless assumptions.
00:24:02.000 because it could be applied to any fundamental research.
00:24:10.000 isn't the growth of knowledge itself dangerous?
00:24:13.000 Isn't it worth shortening our lead over the bad guys,
00:24:18.000 thereby delaying our ability to defend ourselves
00:24:22.000 against unknown dangers in order to be confident
00:24:30.000 The moratorium approach, the regulatory approach.
00:24:58.000 to have the mentality of genocidal suicide bombers.
00:25:02.000 And we will have decided to strip their victims
00:25:09.000 of the protection of AGIs raised to be decent people,
00:25:16.000 Again, reliable knowledge of how to raise decent people
00:25:51.000 And that is knowledge of how to suppress knowledge creation.
00:26:37.000 and from the civilizational or species perspective,
00:28:02.000 No, well, it depends what you mean by alignment,
00:31:32.000 which must have required explanatory knowledge.
00:32:13.000 that they actually contribute to climate change.
00:32:16.000 Just, it seems like the material infrastructure
00:37:07.720 Well, in terms of my answer to the previous question,
00:37:14.480 that the good future is like the good kind of evolution,
00:37:49.360 as we put it, an AGI is a program, not a piece of hardware.
00:37:54.880 So the same hardware that we put our AGIs into,
00:38:05.720 artificial hardware to increase our power to communicate
00:38:16.040 and we've been using artificial aids to thinking
00:38:23.760 where we can more directly, let's say have a module
00:38:29.880 that you can automatically look up Google inquiries with.
00:38:55.600 because if there is any difference between the two,
00:39:01.400 the AGIs do plus something, I don't know what that is.
00:39:16.520 and as a process, as a result of the growth of knowledge,
00:39:26.560 what will happen if we allow our society to become multiracial.
00:39:31.760 And the answer is, there's no fundamental difference
00:39:35.080 between races, there's no fundamental difference
00:39:39.240 Yeah, I have two related questions. The first:
00:39:42.600 Could you expand a bit on the statement that AGIs are persons?
00:40:01.800 So, if something isn't general, then it's not an AGI.
00:40:08.400 The question is, what kind of program is an AGI?
00:40:33.440 So, well, I don't, perhaps I haven't understood your question.
00:40:50.440 and I'm just asking you to expand on it, but I just asked it
00:40:55.080 as a lead-in to a longer question, which is, if AGIs are persons,
00:40:58.560 then are AGIs conscious? Do you think there's a link there?
00:41:02.200 Ah, well, I think so, but we don't know what consciousness is
00:41:09.960 and we don't know the theory of AGIs and so on,
00:41:21.120 if those five or six things, free will is another one,
00:41:28.040 can be implemented, any of them can be implemented
00:41:34.680 But if they can, this will raise interesting moral issues
00:41:45.320 It's like, if you and I disagree about something morally,
00:41:52.000 we ought to be able to discuss it rationally and agree.
00:41:56.520 Now, if that isn't true, if something has moral significance
00:42:00.520 but is fundamentally unable to be creative, let's say,
00:42:05.760 then that raises the moral issue about whether that should
00:42:08.560 have the same moral status as somebody who's a full AGI.
00:42:12.560 But I myself don't think that problem will arise.
00:42:29.720 So, and we can see, we can guess at least why it did
00:42:43.080 Now, if that was possible, let's say, without qualia,
00:42:50.800 then why on earth did the tremendous machinery of qualia
00:42:55.800 evolve if it wasn't practically useful in evolution?
00:43:00.600 So, I think they must be connected, but we shall see.
00:43:04.280 I had a sort of question regarding political knowledge
00:43:25.440 But I would like to question that by bringing up the issue
00:43:30.440 of malevolence, people who willingly and knowingly hinder the spread
00:43:33.440 of knowledge, whether that is of access to knowledge
00:43:41.440 I mean, I would say, the average poor child in India,
00:43:44.440 situated that way, would not be able to access most of that.
00:44:20.440 So, is an increase in knowledge going to solve that?
00:44:38.440 to the other billions of people was a bad idea.
00:44:52.440 is what is sometimes called a first-world problem,
00:44:59.440 We wouldn't think that somebody was being deprived
00:45:04.440 of the internet before the internet had been invented.
00:45:07.440 And we wouldn't think that it's somehow an indictment
00:45:17.440 at a time when only a few thousand people had it.
00:45:32.440 but supposing that there always will be,
00:45:36.440 the cure for that is also creativity on the part
00:46:29.440 something despite having knowledge of biochemistry
00:46:37.440 than the speed of those who are trying to invent cures
00:46:41.440 not just specific cures, but the knowledge of how to make cures
00:46:46.440 This is what's going to keep our civilization in existence.
00:47:06.440 Isn't there, in a sense, a fundamental distinction between
00:47:36.440 and if so, isn't that something connected to the reality
00:47:48.440 something that just emerges as soon as it was something.
00:48:00.440 If there were, yes, then there would indeed be a moral issue
00:48:07.440 that lead it to immoral actions would be a crime,
00:48:25.440 That can be done whether or not they have emotions.
00:48:33.440 would have thought that some people would have thought
00:48:51.440 I think this is a wrong way of thinking about it.
00:49:27.440 I think, you know, I'll use your urn metaphor for a minute.
00:49:48.440 really easy to make nuclear bombs, for example,
00:49:51.440 then couldn't there actually be a black ball in there,
00:50:02.440 such that there would be a black ball in your urn?
00:50:19.440 when we have, when they judge that we have a bit too much hubris,
00:50:35.440 The black ball about nuclear weapons being easier and so on.
00:50:47.440 of how to cope with that would have evolved earlier.
00:51:02.440 The survivors might have wanted that never to happen again.
00:51:13.440 it might have been that the laws of physics will extinguish us.
00:51:18.440 But the laws of physics do not have it in for us.
00:51:26.440 it will be because we have not created the knowledge
00:51:38.440 But what makes knowledge, in your view? Is it its capacity
00:51:48.440 So I've gone through five or six definitions of knowledge
00:51:54.440 My current definition of knowledge is information
00:52:07.440 you're thinking of knowledge as being a component
00:52:27.440 And that includes moral knowledge and mathematical knowledge
00:52:44.440 So explanatory knowledge is a special kind.
00:52:56.440 Knowledge in its dumb form, dumb knowledge, is non-explanatory.
00:53:09.440 Whereas explanatory knowledge can cross any barrier
00:53:27.440 to create explanatory knowledge, you can create any kind of knowledge.
00:53:34.440 There isn't a more powerful means of processing information
00:53:43.440 Can I also come back to the question of political knowledge
00:53:51.440 the corporations are reading, now these CEOs are reading,
00:53:59.440 But the truth is, it's a lot scarier than that.
00:54:17.440 You can't point to a certain set of malevolent actors
00:54:23.440 So I don't know how does simply having political knowledge
00:54:29.440 or knowledge of climate catastrophe somehow work against
00:54:43.440 It's just a kind of non-intelligent abstract force
00:54:50.440 to the type of humanistic knowledge we're talking about.
00:54:54.440 This theory that there is this systemic tendency.
00:55:15.440 and went out right as Ryan concluded his question.
00:55:19.440 So if you can just start up with answering Ryan's.
00:55:25.440 that just as I was about to give a marvelous answer
00:55:41.440 I think this is the thing you were talking about in general.
00:55:47.440 Is this the thing that the Bay Area people call Moloch?
00:55:51.440 It's supposedly a general property of a system
00:55:59.440 which makes the system work against the members whose actions
00:56:08.440 And I think that all theories of that kind are just false.
00:56:27.440 The analysis of the situation is always of the form:
00:56:31.440 well, all the participants are facing this decision
00:56:36.440 where they have something to gain and something to lose
00:56:42.440 And as a result, all of them are dumped into deep shit.
00:56:59.440 They just do automatically what this particular version
00:57:11.440 It is something that can arise momentarily as a problem
00:57:19.440 We have every other kind of problem all the time.
00:57:31.440 They start accusing each other of behaving in that way
00:57:35.440 and each defending themselves by saying,
00:57:39.440 And then people think creatively about how they can change the things
00:57:47.440 that would have benefited from the dam and whatever it is.
00:57:59.440 so that they can't all get behind and undertake to pay for.
00:58:08.440 Sometimes, because that's a creative act in itself,
00:58:14.440 there's no guarantee that somebody can instantaneously come to it.
00:58:40.440 by something like the boot stamping on a human face.
00:58:49.440 this just assumes the government or whoever does the stamping
00:58:56.440 Well, if they have that knowledge, someone else could have that knowledge too.
00:59:00.440 The government doesn't consist of philosopher kings
00:59:05.440 or beings with divine right who have some different access to knowledge
00:59:21.440 it doesn't exist, then the park isn't going to be made
00:59:27.440 or until somebody works out how to do without the park or whatever.
00:59:35.440 Even if things go very, very well in terms of knowledge acquisition,
00:59:39.440 and we build real systems that can universally gather knowledge
00:59:43.440 and resolve problems, do we keep adding more light bulbs without end?
00:59:46.440 And maybe, you know, we expend the energy of the sun,
00:59:51.440 and then of other stars, in the very distant future.
1:00:06.440 So if you adopt my view that the growth of knowledge
1:00:12.440 consists of converting problems into better problems,
1:00:18.440 which then can't be solved because it's the best problem,
1:00:23.440 So, I mean, that whole picture might be false,
1:00:31.440 So, I'm generally speaking a follower of Carl Popper
1:00:39.440 The Popperian way of looking at this is that...
1:00:48.440 He who tries to prophesy the growth of knowledge
1:00:58.440 You know, I can't imagine what physics theories
1:01:02.440 are going to be invented in the next 10 years,
1:01:07.440 But on general grounds that there is no argument
1:01:12.440 or scenario, reasonable scenario that we know of today
1:01:37.440 The word wisdom is one that is sometimes defined in relation to knowledge.
1:01:43.440 And I think I remember Alfred all quiet and talked about
1:01:47.440 I don't remember the exact word that you use
1:01:51.440 I was wondering whether you had a definition of wisdom
1:01:54.440 to accompany the kind of definition of knowledge
1:01:56.440 or is wisdom just another version of knowledge
1:02:01.440 I use the term knowledge like the definition I gave you
1:02:04.440 and all the other definitions I've ever tried
1:02:09.440 that has this special property that the problem solving
1:02:19.440 And what's more the different kinds of knowledge,
1:02:22.440 like knowledge of physics, morality, politics, art
1:02:34.440 These are only approximate classifications.
1:02:48.440 as a convenience for deciding which building
1:02:52.440 different kinds of people should have their offices in
1:03:05.440 at least the distinction between them is very, very unsharp.
1:03:11.440 And again, Popper said there's no such thing as subjects.
1:03:18.440 So if someone asks you, what subject are you a doctor of?
1:03:30.440 You decide for yourself what to call a subject.
1:03:37.440 And I can't resist, because you just said that you were a Popperian.
1:03:52.440 Oh, well, there are several experiments known.
1:03:59.440 I think I invented the first one that would distinguish
1:04:04.440 a variety of quantum mechanics from a range of
1:04:10.440 competitor interpretations, including everything that has a collapse
1:04:18.440 Because there's a possible experiment, which would go one way
1:04:24.440 And the other way, if the wave function, including the observer,
1:04:31.440 So if that went the other way, I would drop it like a stone.
1:04:37.440 Can you briefly tell us, giving some examples, the details of the experiment?
1:04:42.440 Oh, well, the experiment depends on precisely which
1:04:50.440 There isn't an experiment that would refute all of them in one go.
1:04:54.440 You have to specify something like Penrose's idea
1:04:58.440 that the wave function collapses when you get more than
1:05:02.440 10 to the minus 8 kilograms on either side of the superposition,
1:05:07.440 Or that the wave function collapses when it hits a conscious observer.
1:05:19.440 You're testing Everett against the theory that the wave function collapses
1:05:25.440 Then what you do is you make a conscious observer,
1:05:28.440 which is the most convenient way to do that,
1:05:39.440 where, halfway through, two different trajectories
1:05:43.440 of the computation take place inside the computer's memory
1:05:53.440 When it's halfway through and hasn't yet interfered,
1:06:00.440 it would have just bounced off the mirrors
1:06:03.440 and not yet reached the final interference mirror.
1:06:07.440 Then the AGI measures which mirror it is at,
1:06:17.440 and then makes a permanent record of the form,
1:06:26.440 and I have got a result, and it is one and only one of left or right.
1:06:34.440 but I do certify that it is one of those two.
1:06:47.440 the part where the declaration is sealed off,
1:06:50.440 and the rest is subjected to minus the Hamiltonian
1:06:57.440 so that all the memory of which of the two things it was conscious of happening
1:07:06.440 is wiped out, and then the interference is performed.
1:07:10.440 If the hitting the conscious observer causes a collapse of the wave function,
1:07:18.440 then you will get a 50-50 split of the two outcomes,
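A minimal numerical sketch of the logic of that experiment, my own simplification rather than Deutsch's exact protocol: model the photon's path as one qubit and the observer's which-path memory as a second qubit. If the observation is ordinary unitary physics, the memory can be uncomputed and full interference returns, so the photon leaves by one port with certainty; if observation by a conscious observer collapses the wave function, the phase is lost and the output split is 50-50.

```python
# Illustrative sketch only: fully unitary (Everett-style) run versus a
# collapse-model run, in a Mach-Zehnder-like setup.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # beamsplitter on the path qubit
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                       # path qubit (first) controls memory qubit (second)
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# --- Unitary run: observation is just entanglement, later reversed.
psi = np.zeros(4); psi[0] = 1.0                      # |path=0, memory=0>
psi = np.kron(H, I2) @ psi                           # first beamsplitter: superposition of paths
psi = CNOT @ psi                                     # observer's memory records which path
psi = CNOT @ psi                                     # the "minus the Hamiltonian" step: memory uncomputed
psi = np.kron(H, I2) @ psi                           # final interference beamsplitter
p_unitary = [abs(psi[0])**2 + abs(psi[1])**2,        # probability photon exits port 0
             abs(psi[2])**2 + abs(psi[3])**2]        # probability photon exits port 1

# --- Collapse run: the observation projects the path, destroying the coherence.
rho = H @ np.array([[1.0, 0.0], [0.0, 0.0]]) @ H     # path state after first beamsplitter
rho = np.diag(np.diag(rho))                          # collapse: off-diagonal (phase) terms removed
rho = H @ rho @ H                                    # final beamsplitter
p_collapse = np.real(np.diag(rho))

print("unitary / Everett-style:", np.round(p_unitary, 3))   # -> [1. 0.]  (full interference)
print("collapse model:         ", np.round(p_collapse, 3))  # -> [0.5 0.5] (50-50 split)
```

Running it prints [1. 0.] for the unitary case and [0.5 0.5] for the collapse model, which is the distinction the proposed test turns on.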
1:07:46.440 I am a little bit worried that you might have something
1:07:49.440 which might refute the patterns you set aside from appearing in you,
1:07:55.440 I mean, I don't know if it would be predicting something,
1:07:58.440 which is not predicted by the quantum theory,
1:08:02.440 but that's not the challenge for your own new...
1:08:07.440 The challenge is to find something which would be...
1:08:11.440 which would differentiate between standard options.
1:08:15.440 This would, for anybody who thinks that the wave function
1:08:29.440 then from my point of view, I'll let you and it decide
1:08:40.440 and on coherent quantum computations being...
1:08:43.440 Sorry, coherent quantum measurements being performed
1:08:47.440 on a human brain, then we're going to have to wait
1:09:02.440 Bohmian quantum mechanics is just Everettian quantum mechanics
1:09:08.440 The trouble with the Bohm interpretation is
1:09:25.440 which moves along the grooves in the wave function.
1:09:32.440 you have to systematically equivocate on the question,
1:09:41.440 then it has grooves that are performing computations
1:09:45.440 in principle conscious observer computations,
1:09:51.440 And so you can't say that some of the pilot wave doesn't exist.
1:09:56.440 and therefore the multiplicity of the Everett interpretation
1:10:00.440 is just there in the pilot wave interpretation.
1:10:08.440 if then you're saying that something that doesn't exist
1:10:29.440 but I was just getting a bit far away from the topic
1:10:53.440 of consciousness could possibly not be resolved
1:10:58.440 except if you start from the premise of a multiverse.
1:11:03.440 Well, I don't know about start from the premise,
1:11:11.440 if you base your ontology on something that isn't true.
1:11:19.440 if you're going to say that free will can't happen
1:11:27.440 or by Copenhagen interpretation or whatever,
1:11:30.440 then you're going to conclude that free will doesn't exist
1:11:38.440 From the, if you replace that by the true premise,
1:11:54.440 what free will is or what qualia are or whatever,
1:11:57.440 but it's, it's removed the knockdown argument
1:12:03.440 qualia can't be different from anything else
1:12:09.440 So you knock out a bunch of false arguments.
1:12:25.440 that you're unwilling to replace or criticize,
1:12:29.440 then you're rather like sticking down a piece of a jigsaw puzzle
1:12:36.440 that will produce errors in the picture
1:12:40.440 arbitrarily far away from the piece you glued down.
1:12:46.440 and then you'll, you won't be able to construct the picture.
1:12:53.440 this is why the pursuit of truth is useful.
1:13:24.440 Those are, you know, those are widespread conventional terms,
1:13:28.440 but is there some way to kind of sharpen those terms
1:13:32.440 in relationship to how the multiverse could illuminate them?
1:13:38.440 Well, so I don't think that quantum computers
1:13:47.440 but I don't think that's the kind of problem it is.
1:13:51.440 So therefore, I think that a creative program
1:13:58.440 could be made on a deterministic classical computer,
1:14:04.440 though calling such a computer deterministic
1:14:11.440 because AGI is going to be interacting with the world
1:14:14.440 and the world is, is not going to be deterministic
1:14:26.440 depends on some kind of randomness, randomness is everywhere
1:14:30.440 and that would be true whether it's classical or not.
1:14:33.440 I can't resist asking David whether, as a believer in the Everett interpretation,
1:14:46.440 we can take any comfort at all in the context
1:14:50.440 of thinking about these existential questions from the thought
1:14:59.440 and so again, I don't think that probability is the right way
1:15:16.440 It's rather like going to a casino and betting,
1:15:21.440 and so that there will be, if you play your cards right,
1:15:25.440 you can arrange it so that there will always be some worlds
1:15:28.440 in which you come out a multi-millionaire.
1:15:36.440 with drawing profound conclusions from this
1:15:40.440 is that one thing we do know about probability
1:15:44.440 is that if the Everett interpretation is true,
1:15:47.440 then when one is analyzing a situation of randomness,
1:16:04.440 I was just wondering whether at a sort of psychological level
1:16:07.440 when you sort of step back and think about these existential problems,
1:16:14.240 picture is right, then it's always inevitable
1:16:19.440 that there are some branches in which humanity,
1:16:22.440 or some or all of us, survives.
1:16:26.440 Yes, I would try not to let my psychological approach
1:16:37.440 to an event come into conflict with what I know is there.
1:16:45.440 So, you know, I might have an objection to eating a sweet
1:16:51.440 in the form of a tarantula, but then I say to myself,
1:16:57.440 This is just a piece of candy, and I'm going to eat it.
1:17:00.440 And you say, well, yeah, but still, still, what do you feel about it?
1:17:05.440 Well, what I feel about it, that's different from the right answer,
1:17:14.440 And speaking of slightly out of the questions,
1:17:21.440 the question you said is the last question,
1:17:25.440 that's partly about the way it's not only computation
1:17:28.440 but creativity that we need to deal with risk and probability.
1:17:34.440 I was wondering if you might be able to elaborate on morality.
1:17:38.440 Yeah, well, since it's the last question you see,
1:17:47.440 And these are three things that I don't know.
1:17:53.440 So, for example, is morality reducible to epistemology?
1:17:58.440 Well, if it isn't, what on earth can morality be?
1:18:05.440 Are there moral axioms that are uncriticizable?
1:18:22.440 and that problem wouldn't arise if it was epistemology
1:18:27.440 how to frame epistemology without requiring foundations
1:18:40.440 what if the laws of physics were different?
1:18:46.440 Like the ones I mentioned with the Greek gods and the malevolence?
1:18:49.440 Suppose there were malevolence in the laws of physics.
1:18:59.440 Could we still say that? At the moment we say that if something is
1:19:04.440 a property of the laws of physics, it doesn't have moral value or moral significance.
1:19:10.440 But in such a, can we, am I right in thinking
1:19:14.440 that those kinds of laws of physics are immoral?
1:19:33.440 and the question that the person on our left asked
1:19:37.440 is about whether consciousness and free will
1:19:43.440 and qualia and moral value and all that stuff all come together
1:19:48.440 necessarily or can they be separate or can they be made separate?
1:19:56.440 they've always come together, but we could artificially make them separate
1:20:09.440 but I can't give you a watertight argument for this and so on.
1:20:16.440 In all these cases, we have to wait until somebody comes up with
1:20:22.440 I'm always a bit taken aback when people have strong feelings
1:20:31.440 about things like free will, or moral value,
1:20:38.440 you know, is it moral to kill animals, eat animals,
1:20:42.440 are animals conscious and all those things?
1:20:45.440 When they do not know what consciousness is, none of us do.
1:20:49.440 It's part of life, but I think we should be done now.