00:00:00.000 Hello, so this is a response for Andy, who asked a question that I've been asked before
00:00:05.760 and which I'm sure will be asked in the future, which is where exactly the disagreement
00:00:10.880 between David Deutsch and Sam Harris lay within the second podcast, the Waking Up podcast
00:00:18.080 of Sam Harris, where he interviewed David Deutsch again for a second time, this time
00:00:22.920 specifically about The Moral Landscape; I think that was Sam's third or fourth book. During
00:00:30.480 that podcast, one of the first things that David said was that it's very difficult to
00:00:35.160 articulate precisely what the disagreement is because they come from such different places
00:00:41.720 in terms of their epistemology. And so the kind of vocabulary that David
00:00:46.240 uses, as prosaic and as common as it might be,
00:00:56.000 has a very different meaning in the Popperian sense than it does for many, many academic
00:01:01.600 philosophers. And so this has the consequence that anyone trained in academic philosophy
00:01:08.920 strangely enough, is often particularly poorly placed to really understand the
00:01:15.600 Popperian notion of knowledge. And the Popperian notion of knowledge is, ironically, in some
00:01:20.680 way, closer to the common sense notion of what knowledge is. The academic version of what
00:01:28.040 knowledge is, following on from Plato, often has these overtones of certainty and justification
00:01:35.120 and foundation and belief. And the Popperian idea is far more fallibilist and anti-foundationalist.
00:01:42.440 It's more about trying to guess what is true and then checking whether or not
00:01:46.680 your guess is correct. So this idea that they both speak different languages is
00:01:52.960 something that David flags early on. And it's humorous in retrospect, because when
00:01:59.600 you listen to the entire podcast, as I've done a few times now, that flag that David
00:02:04.720 plants, about the fact that they're speaking different languages, highlights every single
00:02:10.600 error that appears to occur throughout the conversation, where David will make a point
00:02:15.920 and explain precisely what the disagreement is, and Sam doesn't understand. And it's
00:02:22.440 no fault of Sam, it's not like he's incapable of understanding, but he has an epistemology
00:02:29.200 following Plato, in which it's possible to have a foundation and indeed it's desirable to have
00:02:34.720 a foundation. And although Sam can make noises that sound as though he's not an infallibilist
00:02:42.360 thinker, as though he isn't striving for a particular, certain way of thinking, nonetheless,
00:02:51.760 what he says reveals precisely what his internal psychology is on these matters.
00:03:00.000 Although he can probably give a good definition of what fallibilism is, ultimately — and
00:03:06.920 this happens on many, many topics — he's not a fallibilist. He's definitely a rational person
00:03:13.080 and he's very reasonable and he's trying to reach the truth like we all are, but as
00:03:18.240 we will see, I've written a few notes of examples where he explicitly reveals
00:03:24.560 the fact that he's not a fallibilist, that he thinks that it's possible to finally
00:03:31.800 grok the answer, whether it's in morality or whether it's in science. So we'll get there
00:03:36.480 eventually and I'll attempt to explain exactly where this chasm of difference is between
00:03:41.720 the two. And this is why it's very difficult for the majority of people who hear the podcast
00:03:48.040 to understand where the disagreement is, because when they hear words like knowledge, or
00:03:53.640 when they hear words like foundation, they're hearing something different to what somebody
00:03:58.280 who has read Popper or who has read Deutsch understands these words to mean.
00:04:05.040 The last video I did about The Beginning of Infinity was on chapter four, Creation. There's
00:04:10.600 a section there where David speaks about how knowledge is created inside of a human
00:04:17.920 mind. We don't know all of the details, but we know something. We know it can't be spontaneously
00:04:21.840 generated. Now that might seem like, again, a prosaic sort of mundane vanilla thing to say,
00:04:28.920 but it means that the bucket theory of mind is false. It means that when you're using words,
00:04:33.560 you cannot download what you think those words mean into the person with whom you're
00:04:39.440 having a conversation. You can't download it from your brain into their brain. They've
00:04:44.800 got a certain understanding of what the words that you're using mean, and you've got a certain
00:04:48.360 understanding of what the words you're using mean. And you can try and explain it using
00:04:52.440 other words, but then those other words, the words with which you explain the words
00:04:55.320 that you're using, might not do the job. And this conversation is a wonderful example
00:05:02.320 of that, where David says up front that the words that he's using have a different
00:05:08.920 sense to what they have inside of Sam's mind. He flags it, and tries to explain again and
00:05:17.720 again throughout the conversation. And although Sam admits that he is willing to go there
00:05:24.280 with David, willing to grant him certain things, he never really does. And so the error
00:05:29.760 never quite gets corrected. And so therefore at the end of the conversation, Sam isn't
00:05:34.600 sure where the disagreement is. The majority of people have an alternative epistemology,
00:05:41.280 something other than Karl Popper's view of what knowledge is. For example, they think that
00:05:46.000 knowledge is about justified true belief. They think that you need to begin with a foundation,
00:05:49.760 and on that foundation you then accumulate knowledge, you build it up. And this is an
00:05:55.600 anti-critical vision of how knowledge is created. In the Popperian view, you simply
00:06:01.120 have problems. You can start anywhere at all. And you attempt to solve those problems when
00:06:06.360 you have them, when you have ideas that are in conflict with one another, by using a critical
00:06:11.040 method. It's a completely different vision. Instead of accumulating and building, you're
00:06:16.480 kind of refining and cutting things down and establishing new ideas or improving existing
00:06:23.760 ideas. So there's no reason to think of knowledge as this sort of base that you begin
00:06:29.880 with and then you're constructing towers. And then at some point you finish the tower, of
00:06:34.360 course, under that vision. In the Popperian view, it is literally an infinite
00:06:40.920 process, not easy to visualize, but I guess people want a visual sense of what's going
00:06:47.440 on there. But we're talking about the abstract growth of knowledge, not the construction
00:06:52.760 of buildings. Also, very early on in the conversation — and this is another hint that there are
00:07:02.240 problems with process here, and I guess problems with psychology or even linguistics —
00:07:08.200 the problem with process is that David, very early on, says, okay, well, let me
00:07:17.640 explain what the disagreement is, and then begins to preface what he's about to say.
00:07:23.560 But before he gets to the main point, Sam interrupts, and this happens in conversations.
00:07:28.440 And so it never quite seems as though David's able to get to explaining precisely what
00:07:34.520 the disagreement is, except that it's embedded in various other parts of the conversation. And so this,
00:07:39.840 again, will leave a typical listener, perhaps, with the impression that David never actually
00:07:47.520 did what he said he was going to do, which is to articulate where the disagreement was.
00:07:52.360 He does so. But it occurs many minutes after he says, now I'm going to explain
00:07:59.760 what the disagreement is, because Sam interrupts as a natural conversation would have these
00:08:05.560 interruptions. Okay, so the Popperian vision is that knowledge is always conjectural.
00:08:12.560 It's guessed. And so when we're talking about what will increase the well-being of conscious
00:08:19.320 creatures, which is what Sam is concerned about as kind of, if not the foundation, then
00:08:25.480 the purpose of morality, the purpose of morality is to maximize the well-being of conscious
00:08:32.560 creatures. However, if we are going to attempt to maximize the well-being of conscious
00:08:38.360 creatures, then we're kind of guessing what their states are going to be. And we
00:08:45.360 will come to see later on that Sam sees the well-being of conscious
00:08:51.640 creatures as intimately tied to their biology, that he doesn't take seriously the notion
00:08:58.320 of substrate independence, let alone the universality of human beings. Human beings
00:09:06.520 have a universal mind, and so therefore their well-being cannot depend upon neuroscience,
00:09:13.400 their neurology, it cannot depend upon the particular makeup of neurons inside their brains.
00:09:18.760 So instead, just to preface what morality really consists of, it's about solving moral
00:09:25.760 problems. And in order to solve moral problems, we have to conjecture explanations about
00:09:30.960 what might improve things. And they can always be false. We can always criticize them. And
00:09:35.880 that includes any starting point that we might have. If we think that we need to start
00:09:39.800 with the well-being of conscious creatures, that could change, especially in so far as we
00:09:46.600 refine what we mean by conscious creatures. Now another thing that I got the sense that Sam
00:09:52.520 might have been hinting at, and this is early on, is he wants to defend the thesis that
00:09:57.000 moral truth exists, and I'm with him there, and I think David's with him there as well.
00:10:02.120 Moral truth absolutely exists. And it exists in a similar way to the way that mathematical
00:10:08.760 truth exists. And at some point I think David actually does, indeed, mention that, that
00:10:14.960 there's this kind of objectivity to mathematical truth and to moral truth. Neither of
00:10:21.440 which are dependent upon the truth about physical reality. Physical reality is a separate
00:10:29.000 kind of objectivity to what mathematical reality is. There can be truths in mathematical
00:10:35.480 reality that do not depend upon what's going on in physical reality. One such truth is that,
00:10:43.000 for example, the decimal expansion of pi is infinite. But it is impossible in physical reality
00:10:50.520 to represent that anywhere, because there simply aren't enough atoms in the universe, or even
00:10:56.920 the multiverse. You need a literally infinite number of particles, an infinite number of different
00:11:04.200 states of the universe in order to represent this decimal expansion of pi. But the infinite
00:11:11.080 decimal expansion of pi exists out there in abstract mathematical reality. It's a thing.
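As a quick aside to spell that out: π is irrational, so its decimal expansion neither terminates nor repeats,

\[ \pi = 3.14159\,26535\,89793\ldots \]

and faithfully recording even the first n digits takes at least n digits' worth of physical information (roughly 3.3n bits), so any finite physical system can only ever hold a finite initial segment of the expansion, never the whole thing.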
00:11:17.240 So the other thing here is a distinction between abstract, objective,
00:11:25.000 ontological reality that's out there in terms of mathematics — indeed, in terms of the laws of
00:11:31.240 physics; whatever the true laws of physics actually are, they exist, they absolutely exist —
00:11:37.960 and our knowledge of those laws of physics, like our knowledge of mathematics, or our knowledge
00:11:44.040 of morality. So this is the difference, again, between ontology and epistemology. Ontology is
00:11:51.080 what is true in reality? Now, what is true in reality? We don't know. All we have are
00:12:01.320 fallible explanations of that reality. The fallible explanations of that reality are not that
00:12:07.880 reality. This is true of physics, where the laws of physics absolutely, absolutely exist. They're
00:12:14.600 really real. We don't know what they are exactly. We have approximations to them. And so for
00:12:21.400 example, we used to think that the law of gravity was Newton's law of gravity, which looks like
00:12:28.040 f equals g m1 m2 over r squared. We now know that's false. It works within a certain domain.
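Written out as a formula, the law being referred to there is

\[ F = G\,\frac{m_1 m_2}{r^2}, \]

where F is the gravitational force between two bodies of masses m1 and m2, r is the distance between them, and G is the gravitational constant.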
00:12:34.280 It's extremely useful for solving particular problems, but ultimately it's false and you can do
00:12:38.920 experiments to show that it's false and it fails in certain regards. The same is true of mathematical
00:12:45.720 truth. Mathematical ontological truth is out there. What we have are explanations of that
00:12:52.840 ontological truth. So we have epistemic claims, okay, which are just things that we write down,
00:13:00.440 things that we understand in our brains about that reality. It doesn't matter what the domain
00:13:06.120 of inquiry happens to be, whether it's science or mathematics or morality. There are
00:13:11.160 truths in each of these areas. They are part of reality, but because we are
00:13:16.200 fallible humans, we don't have direct access to any of it. We're fallible. And so when Sam says
00:13:24.520 that moral truth exists, it's not entirely clear whether he thinks that we have direct access to it.
00:13:33.000 And at times, I think that he thinks we may be able to get that truth in hand.
00:13:39.880 And I'm not convinced by that. Again, I'm a fallibilist, David's a fallibilist. And so this is one
00:13:45.640 of the areas of disagreement: when people make noises on the subject as if to say, we're nearly
00:13:52.280 there, we're about to find out what the truth happens to be, or in some distant future we're going
00:13:57.240 to know what the truth is. So I'll just move on to foundationalism. So
00:14:03.480 foundationalism is this idea that you begin with a foundation, and then on that
00:14:10.760 foundation you build up the rest of your knowledge. It's the idea that you need to begin somewhere,
00:14:15.240 you've got to start somewhere — you know, your axioms, your premises — and from that you can
00:14:20.760 derive everything else that you need to know. But this is a Platonic mistake, this idea that
00:14:30.280 you just begin here and then you justify, justify, justify, justify, and eventually you keep on
00:14:36.280 justifying until you reach the end, because you've found everything that needs to be understood.
00:14:40.600 This is completely the opposite, in many, many senses, of what
00:14:46.200 Popperian epistemology is about, and of how knowledge is constructed in reality. There's no
00:14:51.960 reason to begin here because we don't know that here is an absolute truth. We don't know that
00:14:59.720 anything's an absolute truth. And so because of that, all we can do is to guess what might be
00:15:05.720 the case in order to solve any particular problem. We don't have to worry about what the foundation
00:15:11.640 is. There's no bedrock. And even though there's a final reality out there, for many, many reasons,
00:15:17.160 we can't ever get to that final reality. And this is part of the conception of the beginning of
00:15:24.680 infinity. So you can always correct errors. There is an infinite amount to learn. We are fallible.
00:15:33.880 And so you can never be sure that whenever you've discovered something that appears to solve your
00:15:38.360 problem, that you're not going to find errors with it. And so with Sam when it comes to
00:15:44.840 morality, he wants two kinds of foundations. He wants to talk about morality as being
00:15:52.120 about the well-being of conscious creatures. And this idea: in order to establish an objective
00:15:58.520 morality, in order to begin somewhere, let's consider the thought experiment of the worst possible
00:16:04.520 misery for everyone. So the worst possible misery for all conscious creatures. And he says that if
00:16:09.800 anything is worth avoiding, it's worth avoiding that. And we can agree that if there's anything
00:16:17.800 worth avoiding, it's worth avoiding that. But there's no need for this foundation. There's no need
00:16:23.560 to begin there because it doesn't help us solve any particular moral problem. It's a response
00:16:31.160 or a critique of other ways of thinking, such as relativism. Moral relativism
00:16:41.720 is this idea that your morality depends upon the culture from which you come or from the family
00:16:50.120 in which you find yourself or from your particular frame of reference, your particular psychology
00:16:55.640 determines what your morality is. And so Sam is right to want to critique moral relativism. This idea
00:17:03.640 that we shouldn't criticize other cultures or criticize other people for their moral beliefs.
00:17:11.640 And so if there happens to be a culture out there somewhere, that thinks that if little girls learn
00:17:17.640 to read, they should be stoned to death, then who are we to criticize that culture? And as Sam says
00:17:26.200 in his very powerful TED Talk on this topic, who are we not to criticize such a culture?
00:17:33.240 So Sam wants to respond to the moral relativists by saying, well, let's consider the worst
00:17:39.080 possible misery for everyone. Now a moral relativist would say that no such state exists,
00:17:45.080 but he's appealing to people to say, you can't go with the moral relativists because
00:17:52.120 the worst possible misery for everyone is objectively bad. And therefore what is objectively good
00:17:58.600 is any movement away from that state. Now this cannot be a foundation for morality
00:18:07.960 because as David points out in the conversation later, and I think I've observed in my response
00:18:15.880 many years ago to Sam Harris on this as well, in the Moral Landscape Challenge,
00:18:22.520 as soon as you get a small distance away, any distance away from the worst possible
00:18:27.160 misery for everyone, I think David says one millimeter away from the worst possible
00:18:31.000 misery for everyone, then what? Then what? That foundation that we begin with is of no use whatsoever
00:18:40.280 to allow us to decide what to do next, which is the sphere of morality. What do we do next? What
00:18:45.800 should we do now? And that's if you're a millimeter away. Now, you know, in the year 2018 here
00:18:51.320 on Planet Earth, we are a lot further away than one millimeter from the worst possible misery
00:18:55.160 for everyone. So this worst possible misery for everyone is a critique of the idea
00:19:00.840 that there's no difference between good and bad. And as a critique of that, it's a good critique,
00:19:08.280 but it doesn't get you answers to moral problems, because where we are now, we have an infinite
00:19:14.760 space of possibilities before us. And what we do next depends upon a whole raft of things
00:19:21.960 about what we value, but what we should value is also another part of morality. So Sam has
00:19:29.080 this unalterable foundation he wants to begin with, that I would say the only purpose of which is
00:19:36.200 as a critique of relativism. And the other is the well-being of conscious creatures.
00:19:44.360 And the well-being of conscious creatures, insofar as that is the domain within which we want to
00:19:49.080 conceive of morality, has problems for human beings, because there cannot be a dependence upon
00:19:57.720 their biology. And yet this is the assumption, this is the implicit assumption that operates behind this,
00:20:04.280 and we'll get to exactly why in a moment. But I just want to sort of fixate on these two foundations.
00:20:13.480 One being, morality is about the well-being of conscious creatures. And the difference between
00:20:19.000 good and evil can be articulated by considering the worst possible misery for everyone. So we've
00:20:23.080 got these two immovable things. Now this is a misconception because it simply makes the same mistake
00:20:32.360 that religious thinkers make, which is that you need to begin with a dogma. You need to begin
00:20:37.080 with this unalterable foundation and upon these two pillars, that's where you build the rest of
00:20:43.640 your knowledge. These things cannot be criticized, but this is false, this is wrong. And even if
00:20:50.840 your intentions are good — religious people have good intentions. The idea that we need to
00:20:58.040 begin with the Ten Commandments, the idea that we need to begin with the fact that God exists,
00:21:02.840 or love exists, or that Jesus zoomed up to heaven, or that Mary was a virgin, etc., etc. —
00:21:08.680 people have good intentions in wanting to enshrine dogma, in wanting to enshrine a foundation.
00:21:15.240 Now Sam says, well, he's willing to concede that this could be wrong, that this could be fallible.
00:21:19.880 However, he refuses to admit that morality could be anything other than about conscious creatures.
00:21:27.000 And he says again and again that this is his foundation. So why can't morality be about
00:21:37.000 conscious creatures? Why can't it purely be about conscious creatures? Now Sam has a
00:21:42.600 pretty forceful argument in The Moral Landscape about how, if we were to consider a universe
00:21:47.960 in which there were no conscious creatures, then that by definition would be a universe without value.
00:21:55.640 Fine. But morality is a sphere of truth, of ontological truth. So those ontological truths
00:22:05.000 actually exist. They're out there in abstract reality, and so they occupy that reality.
00:22:13.160 Even if we can't find out what they are perfectly — and we can't — they're independent of the
00:22:21.480 experiences of conscious creatures. So they might be about the experience of conscious creatures,
00:22:29.640 but they are independent of conscious creatures. And in particular they cannot be about
00:22:35.560 the neurology of the conscious creatures. And they can't be about the neurology of conscious
00:22:40.680 creatures, because human minds are universal. Even if you could possibly have a non-universal mind,
00:22:48.840 like, for example, if a cat has conscious states, those conscious states — and especially
00:22:56.280 the universal mind of a human being, of a person — can in principle, as proven by David Deutsch,
00:23:05.000 be downloaded onto a computer. We could be put into a matrix.
00:23:14.600 At that point, once our mind, which is a kind of program, is put into a silicon computer,
00:23:22.040 or whatever the computers of the future happen to be, then it cannot possibly be the case
00:23:27.320 that morality is about anything to do with the biology of the human brain,
00:23:33.480 because we won't have biological brains anymore. We'll be a universal explainer inside
00:23:39.960 of some kind of silicon computer. There's a time about half an hour in, I can't remember exactly now,
00:23:44.840 but I had to write myself a note at that point because I thought this was a very valuable insight,
00:23:50.360 where David says that the criterion by which institutions should be judged is how good they
00:23:56.920 are at resolving disputes between people without violence, without coercion. And he's not saying he
00:24:02.360 knows what they are, but this is an absolutely crucial point about politics and economics and morality
00:24:09.880 generally. It means that the scope of government, for example, is extremely limited,
00:24:18.200 that if we want political institutions that work and that are moral institutions, they cannot be
00:24:23.400 coercive. And so when people consider things like the welfare state, and when they have good
00:24:29.880 intentions, like replacing the welfare state, let's say, with something that's an incremental
00:24:34.600 improvement like universal basic income, nonetheless, this requires some amount of coercion,
00:24:41.240 that if Joe over here is not earning much money, but he's earning just enough that
00:24:49.720 you think he should hand over some of his money to Mary because of universal basic income,
00:24:56.120 then that will require a certain amount of coercion. The only way to avoid that is to allow Joe
00:25:01.640 to give Mary charity, to willingly, voluntarily do this. But the people who argue for universal
00:25:09.000 basic income, in the same way as the people who argue for welfare, or the people who
00:25:13.640 argue that socialism should obtain, or that communism should obtain, or any other kind of system
00:25:21.000 in which the government determines where the wealth gets distributed, where your wealth gets distributed,
00:25:27.240 want to implement a coercive system. And Deutsch's criterion here is that an institution,
00:25:34.760 the political institution, needs to be judged by how good it is at resolving disputes between
00:25:41.640 people without resorting to coercion. And so if you can't reason someone into something,
00:25:46.680 then arguing that we need to use force, especially to extract money or something like that,
00:25:54.120 that's clearly inferior. And this is tied to fallibilism, and it's
00:26:00.280 tied to this idea about what human beings are: that we're universal explainers, that we can come
00:26:05.800 to understand each other, that if you have an idea and it's good, and I'm a reasonable person
00:26:11.400 who's a universal explainer, then you will be able to use words and argument and explain to me
00:26:17.320 why it is that your idea is better. Now in discussions about economics and government,
00:26:23.320 it seems to me to be the case that very, very often we end up in a situation where
00:26:29.320 one side throws up their hands and says, well, I cannot convince you; nevertheless,
00:26:34.920 we need to use force here, we need to use a mechanism whereby money is taken from these people
00:26:41.960 and given to those people, etc. Okay, that's a diversion. And David further adds,
00:26:48.280 this is an important quote, that there's no limit to the possibility of removing evil via knowledge.
00:26:55.160 And so all evils are caused by a lack of knowledge, and so therefore he's saying that
00:27:00.440 whenever there's a problem, whenever there's an evil or suffering, then what we need to try and
00:27:06.280 bring to bear is knowledge, we need to bring some kind of creative inspiration to that situation
00:27:13.240 in order to find a solution. But coercion can't be the thing. Now I'm going to read a
00:27:20.520 direct quote — I've written down something that Sam says word for word, at the 49 minute 50
00:27:27.000 second mark — and he says: imagine this future of a completed science of the mind where we not only
00:27:34.360 understand the brain bases or the computational basis of every possible experience, but we can
00:27:39.880 intervene as completely as we would want. And we now have this machine that I can put on your
00:27:45.400 head and we can dial in any possible conscious state, it's just this perfect experience machine.
00:27:51.320 So that's two sentences — a very long sentence and then a short sentence —
00:27:58.600 but I just want to emphasize this about Sam Harris, whom I admire: I think he's got the best
00:28:03.240 podcast out there, I've read all of his books, I think he's a fantastic thinker.
00:28:08.600 But I just want to emphasize that the language he uses is no accident. It's anti-fallibilist,
00:28:15.000 it's foundationalist, it's not Popperian. And even though he says at various points in the
00:28:21.800 conversation that he's willing to concede the fallibilist point, his underlying epistemology,
00:28:28.680 which shapes his psychology and therefore the way in which he comprehends the world, is
00:28:34.200 here in stark contrast with his explicit statements. So the explicit statements — yes,
00:28:39.800 I'm a fallibilist; yes, I'm willing to admit that I could be wrong about this — are very,
00:28:45.480 very different to what comes out when he's just speaking naturally. And so again, he says
00:28:50.440 completed science, as if we can finally grok the final answer, we can finally get there,
00:28:56.600 and we will get to a point in science where there will be nothing further to discover:
00:29:01.080 a completed science. If he didn't think that was possible, he wouldn't put the word
00:29:05.000 completed in there; he would just say, imagine this future of a science of the mind where we not
00:29:08.360 only understand the brain bases, etcetera. But he says completed. He also says every possible
00:29:13.160 experience, as though that were possible. But as David goes on to explain, it's not possible to
00:29:19.240 have every possible experience written down in an algorithm, put into a computer,
00:29:24.760 enumerated in a computer — that simply isn't possible. And it's not possible because
00:29:29.480 that presumes that you can predict the content of future knowledge. So for example, the experience
00:29:36.920 of what will be discovered in a hundred years time, that's a possible experience. The experience
00:29:42.840 of discovering something that no one has yet discovered, that experience can't be put in there.
00:29:49.640 So you can't have this machine that Sam wants where you can put it on your head and dial in
00:29:54.440 any possible conscious state, and it can't be a perfect experience machine. Okay, so how would
00:30:00.200 a Popperian rewrite this? And again, I'm saying it's no accident, the way in
00:30:05.880 which he tries to conjure this. So one way that you might rewrite it is: imagine this future of a
00:30:12.600 science of the mind where we not only understand the brain bases or the computational basis of
00:30:17.720 experience, but we can intervene — we can intervene in any way we like — and we now have this
00:30:25.000 machine that I can put on your head and we can dial in conscious states. It's just this experience
00:30:30.600 machine. Okay, so that would be fine. I think that would work in a Popperian sense.
00:30:36.920 But because Sam thinks that you can have this completed science of the mind, that you can have
00:30:43.880 these machines that could have all these perfect states and you could just pick the one that you
00:30:49.640 like, the most perfect one, he thinks that you can get to a peak. And so these peaks on the
00:30:56.440 moral landscape, he thinks, are absolute peaks upon which you can make no further improvement.
00:31:01.800 Of course, later on he will say, oh no, I didn't mean that — I mean you can make improvements.
00:31:08.760 So David interjects at that point, making the point I've been describing. He says something along
00:31:14.760 the lines of: the vast majority of states, these conscious states that are in this machine,
00:31:20.120 we will never know, because we won't have the knowledge. The infinite majority will always be unknown.
00:31:25.640 So there's a big difference between Sam and David. Sam says we can dial in any possible conscious
00:31:32.040 state, and David says the overwhelming, infinite majority will always be unknown. There's a big
00:31:38.840 difference between saying that you can have every possible conscious state and denying that you
00:31:45.080 can — saying that, in fact, the infinite majority will forever not be known. That's a huge disagreement
00:31:53.240 in terms of quantity. It's the difference between zero and infinity. And David used the example:
00:32:01.240 there is the experience of knowing tomorrow's scientific discovery, which we will never
00:32:05.320 download. But Sam comes back and says he didn't mean that these conscious states would
00:32:11.720 be finite — but I think that's kind of a fudge. Either the computer can replicate all the possible
00:32:22.760 states or it can't. And so if it can't then his machine cannot be a perfect experience machine.
00:32:30.760 It can't be based on a completed science of the mind where you understand all the ways in which
00:32:37.320 the computational states or the brain states relate to conscious experience.
00:32:42.040 And the reason is this: if we manage to find a way to capture the mind inside of silicon,
00:32:53.400 we'll know what the algorithm is for creativity. We'll know what the algorithm is for a human brain.
00:32:58.680 But that doesn't mean that we will know what every single computational state
00:33:03.960 is, or how it relates to subjectivity, because there can still be an infinite number — an uncountably
00:33:11.640 infinite number — of conscious states. Being able to write down the algorithm for creativity
00:33:17.800 doesn't mean we know all the possible outputs of that algorithm. If it's a creative algorithm —
00:33:21.800 well, we already have them, right? They're running on our brains right now. Even if we knew what the
00:33:26.760 algorithm for creativity is in our own brain, that doesn't mean that you know what the output is
00:33:30.920 going to be, because presumably part of that algorithm has the quality that it is a knowledge creator,
00:33:39.000 and no knowledge creator can predict the growth of knowledge. That's simply a fact of epistemology,
00:33:45.400 because as soon as you create something new, then you're going to find errors. And which errors
00:33:51.080 you eventually find in that bit of knowledge depends purely upon
00:33:54.920 your preferences and your free will. But this is probably another disagreement between David and Sam.
00:34:01.240 But Sam gives this idea that he's wrong — this idea that he's wrong about his thought experiment,
00:34:10.920 that you're going to have this completed science and that you can have all the possible
00:34:13.720 conscious states — he gives that short shrift, and he wants to go back to feelings. And this
00:34:20.360 happens a lot in the conversation. He wants to go back to considering how people either feel good
00:34:27.160 or they don't feel good and how you could feel better and how you could feel worse. And so
00:34:31.400 he just wants to consider, okay, well just imagine that you could feel like Mozart did during
00:34:37.080 his best moods, or like John von Neumann during his best moods. What would it feel like to be
00:34:42.120 Mozart composing a symphony? Must that not have been a wonderful state to be in?
00:34:47.320 And David points out that, well, this idea, and again David doesn't quite use these words but
00:34:55.720 it's this idea that anchoring morality to pleasure versus pain is misconceived. And when Sam talks
00:35:05.080 about the worst possible suffering for everyone, I think he really has in mind torture or pain —
00:35:11.720 some sort of physical-type suffering — as if you could turn up all the pain receptors in a
00:35:17.800 conscious creature and that would be the worst possible misery. And at the other end of the dial,
00:35:22.840 you could just maximize pleasure. And so he starts to talk about pleasure later. But he can't
00:35:27.880 decouple this idea of feelings, sensations from morality, which is what David attempts to do here
00:35:36.840 when they start talking about what it feels like to be Mozart. And Sam says, you know,
00:35:43.480 it must be a very happy kind of experience. But David said, well, there's pleasure and then
00:35:49.720 there's joy and the joy that Mozart would have had is the joy of solving problems in music.
00:35:57.000 So in what way could you download what it's like to be Mozart?
00:36:01.800 Well, perhaps you can download this sensation, but it wouldn't exactly be the joy that Mozart felt
00:36:09.880 because the joy that Mozart felt had a lot to do with the fact that he just solved some problem
00:36:15.880 in music, some problem in composition. And that has a certain sensation associated with it,
00:36:22.760 sure, but the joy, the enduring feeling of having solved the problem, can only come from having
00:36:29.960 solved the problem. Now, one might want to say, well, you could download that experience,
00:36:35.960 the experience of being Mozart and solving the problem, but then you would be Mozart,
00:36:41.240 then you would be Mozart. If you're downloading his entire mind into your own,
00:36:47.000 you're no longer yourself, you're not yourself having the experience of what Mozart had,
00:36:52.280 you are Mozart because you are solving the problem that he solved, you're valuing the problem
00:36:58.600 in the way that he valued it, everything about you is then tuned to being Mozart.
00:37:05.240 So it can't be the case that you can simply download sensations because happiness is a product
00:37:11.560 of doing something. And so happiness is always about solving problems. And suffering is a
00:37:20.360 condition in which you're thwarted in some way, you're unable to perpetually solve your problems,
00:37:26.440 or to solve a particular problem. And it's upsetting you. So it's about problems. Morality is
00:37:31.400 about problems. So David is arguing that you can't download the sensation of what it was like
00:37:41.240 to be Mozart or John von Neumann or anyone without recreating that individual person.
00:37:47.000 But Sam insists that it can — you can have a form of happiness that is independent of
00:37:54.920 problem solving. And so he mentions a lot of pleasure and he mentions pain. And so he talks
00:38:02.600 about how drugs or medication or certain states during meditation, like enlightenment,
00:38:11.560 can give you a form of happiness or pleasure that is independent of what? It is independent of
00:38:20.840 problem solving. And everyone would agree that an opiate high might be pleasurable.
00:38:30.440 But this is a temporary thing. And David points out that the experience of most heroin addicts
00:38:36.440 would suggest precisely that. Maybe at first, when people
00:38:42.120 try a drug, it's interesting and new, because you're having new sensations. But as time goes on,
00:38:48.120 if people are using this drug all the time, it becomes increasingly boring. There's nothing new.
00:38:54.200 And so they might become addicted. And in fact, that will become a real problem:
00:38:57.640 the pleasure is no longer coupled to happiness; the pleasure is now coupled to unhappiness.
00:39:03.720 There is this deep divide here, this real chasm, another real chasm between Sam arguing frequently
00:39:11.720 that morality is about feelings in some way, the well-being of conscious creatures.
00:39:16.200 But also, his every example is about some kind of state of happiness or state of pleasure or
00:39:22.200 state of pain. And David wants to say that instead, morality isn't really about that. It's
00:39:28.920 about solving moral problems. And so these are two different ways of viewing what morality is
00:39:34.680 about. Emotion versus reason. Emotion and reason versus problem solving. Now, Sam attempts
00:39:44.440 a thought experiment with the matrix. I can't remember
00:39:53.640 exactly the point of the thought experiment, but the interesting part for me was
00:39:59.800 where Sam said that the morality of the people within the matrix — if you're inside this matrix and
00:40:06.360 you're just having a great time — the morality of the people within that computer program isn't
00:40:12.680 relevant because they're not real people. And then David says, well, then that means that they're
00:40:21.880 not creative. So it wouldn't be a pleasurable experience to be inside of this matrix-type computer
00:40:28.040 world. Because what would you want inside of this matrix-type heaven? Let's say we could make a matrix
00:40:34.520 that was heaven-like, and Sam is saying that you could do anything you like with the people that are
00:40:39.960 there, because the people that are there aren't real people. And David says, well, then that means
00:40:46.520 they're not creative — by definition, by his definition of what a person is: a person's a universal
00:40:51.880 explainer, a person's a creative thing. Now, if they're not creative, if these people
00:40:58.040 inside of the matrix aren't creative, then they can't collaborate with you on any of your problems.
00:41:01.160 They can't really help you with any of your problems, really, because they're not able to
00:41:06.120 contribute to your problem situation, because all they have is a finite set of responses that
00:41:12.920 they can probably give you, like a non-player character inside of some computer game. They're not
00:41:18.120 really going to be able to help you very much — any more than Wikipedia can help you. And if it's a
00:41:22.840 real problem in your personal life, Wikipedia may or may not be helpful, but really, what we
00:41:27.400 want out of people, ultimately — we want other people. We need to collaborate in order to get our
00:41:31.800 problems solved. And our problems are important insofar as there are other people around us
00:41:35.800 whose problems we want to solve, and we want them to help us. Anyway, we need other people.
00:41:40.440 Now, David says that because of this, because these other non-real people aren't creative,
00:41:49.560 we'd eventually notice that if we were inside of this matrix, this kind of
00:41:53.720 heaven-like matrix, then we'd quickly notice that they aren't responding like normal people do.
00:42:01.960 And Sam, in response to this, just goes, right. So he seems not to get it, or doesn't buy the argument.
00:42:09.320 And to me, this is another chasm of difference between the conception about what a person is.
00:42:16.360 So Sam kind of thinks, and this is the prevailing conception, that what we have are computer
00:42:22.760 programs that are maybe artificially intelligent in some sense. And we have people, and maybe we'd
00:42:31.400 want to agree that artificial general intelligence is a person. But then there's kind of this
00:42:34.840 continuum between the two, and then maybe there's something further beyond artificial general
00:42:40.280 intelligence. But this is simply false. You have only two states. It really is a binary thing.
00:42:47.000 People don't like this. They say, you're thinking in black and white, you should be thinking in shades
00:42:50.200 of gray. Of course — but not in this situation. Here it really is the difference between black and white,
00:42:55.800 because we have things that are not creative and things that are. There are no
00:43:03.800 partial degrees of creativity. Either you can tackle a problem because you're a universal
00:43:10.840 explainer or you can't. And so this is a difference as well, okay? That you have people and
00:43:18.200 things that aren't people. You have general purpose explainers and things that are not general
00:43:23.160 purpose. We are general purpose explainers. And we want to interact with other general purpose
00:43:28.680 explainers, other people. They are what's valuable in this world. They are the ones that are likely
00:43:34.760 to be conscious. They're creative. They've got free will, Sam won't like that. But creativity is
00:43:42.520 the thing that makes a universal explainer, a universal explainer. And you quickly notice if something
00:43:48.280 is not a universal explainer. It's not going to be able to give back to you in the same way.
00:43:51.480 It's not worthy of the same kind of love and compassion and fun times that universal explainers
00:43:59.800 or other people are. So there's a real difference here. There's a real chasm of understanding once
00:44:05.720 more between Sam's idea about the centrality of people and David's. Then we get into a section
00:44:14.120 that is a little bit of a diversion, I think, more of a distraction from the meat of the
00:44:19.880 disagreement. And Sam talks about meditation and utility. And I'd agree with Sam that meditation
00:44:29.720 is a very useful, pleasurable thing. It has a whole bunch of benefits. And he says that that state
00:44:39.720 is a state in which you can have happiness while not solving problems. And I profoundly disagree.
00:44:44.920 And I was so glad that at this point, this is where David jumped in and said, well, you know,
00:44:50.520 it might feel as though it's about the subjective feeling, but it's not about subjective
00:44:57.160 feelings. When Sam talks about meditation, he always talks about the subjective
00:45:01.720 feeling side of it. And yes, there's a subjective feeling side of it. Doesn't mean you can't be
00:45:05.880 wrong about your own subjective feelings, but what David contributes to this is a very
00:45:09.560 Deutschian response to meditation, even though he says he's not experienced in the area
00:45:17.560 himself. He says, well, the pleasure that one might get from meditation might be because
00:45:25.400 you kind of dampen down your conscious state and maybe in dampening down your conscious state,
00:45:33.800 you allow your unconscious mind to work — and your unconscious mind is real, your unconscious mind is
00:45:39.240 there and it's attempting to solve problems as well. But sometimes the conscious and the unconscious
00:45:43.000 probably have this interaction where there could be obstacles or blocks where your conscious
00:45:48.360 mind is just getting in the way of your unconscious mind. And meditation might just cause your
00:45:51.880 conscious mind to relax for a while, to go to the background for a while and allow your unconscious
00:45:57.160 mind to do its thing. And then it can solve problems unconsciously so that when you go back to your
00:46:02.520 conscious mind, suddenly you feel a lot more creative and Sam admitted that this is indeed
00:46:07.720 subjectively the case, that he's had the experience of feeling as though he's far more creative
00:46:12.920 after having meditated. So he's admitting that the pleasure of meditation really is cashed out
00:46:20.680 in what happens after the meditation — namely, problems begin to be solved after the meditation
00:46:27.000 at a rate greater than before. And he said that this is one of the reasons that
00:46:32.840 culture has taken on meditation as being an important tool because people recognize this, that if
00:46:38.440 people are stressed or feeling bad or depressed, then meditation can be a very good prescription
00:46:44.120 to help that sort of thing because it allows the problems that are normally there to be dropped.
00:46:50.200 But you don't simply drop them such that they disappear; you drop them such that your unconscious
00:46:55.960 mind during the meditative state can do its thing, whatever that thing is, so that at the other end
00:47:01.320 your conscious mind, once you come out of the meditative state, maybe it's days, maybe it's weeks
00:47:06.040 whatever, is able to work better on solving those problems. So it does come back to problems, there
00:47:11.560 isn't this state of pleasure with respect to morality that is completely independent of problem
00:47:19.480 solving. So it's a key point, it's a key point. And so what I'd say there about the meditative
00:47:26.680 state, and I've tried to mention this before, when Sam's talked about it, is that a lot of what's
00:47:31.960 going on in the meditative state, this feeling of the divestment of I, the feeling that you no
00:47:38.920 longer identify with an I — you're just the witness of your conscious experience, as Sam would say — that you really do feel kind
00:47:46.840 of a distance between you and objective reality or even you and your thoughts, you look at
00:47:53.320 thoughts as objects. Fine, I get it, it's great. But this is only to say that that state is very
00:48:00.600 difficult to describe. It's exactly the same as the difficulty of trying to describe what the color
00:48:06.520 blue looks like to you to someone else who's looking at the same sky, let's say. So it's the
00:48:12.520 difficulty of trying to articulate what qualia are like — we don't know how — and that's all
00:48:18.280 that that is: it's this inexplicit type of knowledge that we have. We have subjective
00:48:24.520 knowledge, we know what the sky looks like — I know what the sky looks like, but I can't put it
00:48:28.600 into words; it's inexplicit. And the same is true of the meditative state. And this is a mystery
00:48:35.480 for now, but it's just a problem, like how do we do it? Okay, well we can't do it now, I don't see
00:48:40.760 it as being some sort of reason to think that there's something massively spiritual or
00:48:48.120 hugely mysterious about this area. It could be the case, but it could just be a mundane problem —
00:48:52.680 it might just turn out to be a mundane problem that someone hasn't thought of the solution to yet. Okay, so
00:48:57.240 again, David says that moral theories should be approached like scientific theories: they don't need
00:49:01.560 foundations, they don't need foundations. And there are a lot of theories out there,
00:49:10.760 a lot of moral theories, like Kant's categorical imperative, or Rawls's justice as fairness,
00:49:17.800 or stuff that comes out of the Bible, or the golden rule, etc., etc. — whatever your moral
00:49:23.240 theory happens to be, or indeed Sam's own well-being of conscious creatures. All of these,
00:49:31.160 these principles, these ideas, these theories should be seen as critiques, as critiques of each other,
00:49:38.200 or as critiques of any other theory that someone proposes, or as critiques of a solution
00:49:44.680 that someone proposes. They shouldn't be seen as foundations from which you begin to build up
00:49:51.080 everything else. Okay. And in response to David saying that — that these particular theories,
00:49:57.960 these famous theories, from utilitarianism to Kant's categorical imperative to belief in natural
00:50:04.600 law, etc., etc. — in response to this, Sam says, let me recategorize my foundation, and he goes on.
00:50:14.760 So he hasn't heard what David has said about foundationalism, or insofar as he's heard it,
00:50:20.520 he's heard something different to what David was trying to impart to him.
00:50:25.800 The receiver of the message and the sender of the message are not guaranteed to have the same
00:50:30.840 message; the message that the sender sends is not the message that the receiver is guaranteed
00:50:38.600 to get. It depends upon error correction, and there's no guaranteed method of
00:50:44.040 error correction to ensure that someone's idea, or that your idea, gets into someone
00:50:50.760 else's mind; we don't know how to do that. We always make mistakes, and so I think this is
00:50:55.640 another example where Sam just says it off the cuff; it's part of his psychology, part of his
00:51:00.680 vocabulary, it's part of his way of viewing the world, of thinking about these matters, about
00:51:06.760 thinking about morality and epistemology, that you have a foundation, you need a foundation, he
00:51:11.640 doesn't understand that you don't need one, and so he continues to return to it, and he just
00:51:15.080 wants to re-explain it. And so he says to David there: just let me re-explain my foundation — you're not
00:51:19.480 accepting my foundation, and I know you're against foundationalism, David, but let me re-explain my
00:51:23.800 foundation. I can't be the only one who sees the problem there. So just to emphasize that this
00:51:29.480 comes from The Beginning of Infinity, which I've talked about before, where David really articulates this:
00:51:35.560 to understand stuff, to learn, we human beings have to conjecture an explanation.
00:51:42.440 But Sam, on the other hand, is working within a foundationalist, sort of anti-fallibilist
00:51:46.920 framework, and although he explicitly says he isn't doing that, that he doesn't need to do that,
00:51:52.040 to me it sounds very much like a person who says something like, I'm not religious, really I'm not
00:51:58.920 religious, but now let me explain to you the divinity of God and how Jesus zoomed up to heaven
00:52:06.040 and how I go to mass every Sunday. So on the one hand, the person is saying, I'm not religious,
00:52:12.040 and on the other hand, with every utterance they make, they are articulating and announcing to the
00:52:18.440 world the ways in which they're religious. And so, yeah, Sam is saying he's a fallibilist and he
00:52:26.280 doesn't need these foundations, but he's explaining again and again and again where these foundations
00:52:31.000 are, this is the big difference, this is the huge difference of opinion that they have,
00:52:37.880 that one person is arguing that a foundation is important, and the other one is saying that that's
00:52:42.680 the very mistake that you're making. And then the first one goes back and argues, well,
00:52:46.760 if you don't like that foundation, let me explain the other one, and David is saying, well,
00:52:50.840 you don't need the foundation, you're not quite understanding what I'm saying, and Sam goes,
00:52:55.560 well, okay, well, if you don't like that foundation, let me re-explain what the foundation is and see
00:53:00.040 if that will convince you. So this is why there's not as much progress on that front as
00:53:08.360 there might have been, because they're not agreeing on what the word means. I don't know what
00:53:12.680 Sam is hearing exactly — I just don't know what Sam is hearing at
00:53:20.040 that point — because at one point Sam even says something like, but you need a foundation,
00:53:27.560 even Popperian science needs a foundation. And David tries to say, well, no, that's wrong.
00:53:33.560 David says that the idea that we need to start anywhere in particular is false — everything is
00:53:39.320 criticizable. You don't need to start in a particular place, you don't need to start down here,
00:53:42.920 or up there, or anywhere else. What you have are moral problems, and then you need to approach
00:53:48.840 those moral problems with a critical eye, in the same way as in science. Now, if David has a problem
00:53:57.000 in quantum computation, he really doesn't need to look at what the foundations of all of
00:54:05.400 science happen to be, or what all the physics happens to be. Now, there might be ways in which
00:54:11.800 you could critique his solution to a particular problem in quantum computation.
00:54:16.600 Let's say, for example, he decides to create some algorithm to run on a quantum
00:54:23.080 computer, but the algorithm requires
00:54:28.200 the quantum computer to have switching speeds that exceed the speed of
00:54:32.200 light. Well, relativity would be a critique of that — it wouldn't be possible — and so that
00:54:38.040 could be ruled out. But it's not like he always has to begin with some particular set of facts,
00:54:43.880 and then from that build up. If he has a problem — or more generally, if there's a problem anywhere in science —
00:54:50.040 then you solve that problem, without being too concerned about what else is beneath that.
00:54:56.920 Again, there's a point where Sam gets into feelings again, and so David comes
00:55:02.040 back with: well, if you consider someone like Isaac Newton — Isaac Newton would have been quite happy
00:55:08.040 when he was solving problems. No doubt the state of mind he was in was just as happy as anything
00:55:12.760 that people today have. Just as happy as when David invented the theory of quantum computation,
00:55:18.680 or Edward Witten solves a problem in string theory; Newton would have had that when he found the
00:55:23.640 universal law of gravitation, let's say. However, his state of comfort would have been
00:55:29.560 god-awful, it would have been terrible. His clothes would have itched, his food would have been
00:55:35.640 terrible, his bath would have been cold, he would have been cold. There would have been a whole
00:55:41.000 bunch of reasons why he would have been uncomfortable compared to us. And so it can't be in terms of
00:55:48.040 comfort or pleasure, or the absence or presence of pain, that this idea of happiness can be
00:55:54.440 cashed out. Happiness is kind of independent of those things; it's about solving your problems,
00:55:58.520 whatever your problems happen to be. And it doesn't need to be this snobbish thing of, like, you
00:56:02.840 need to find the next universal law of gravitation. It can be anything in your own personal life —
00:56:08.040 all problems are parochial, it's just whatever you happen to be interested in. But
00:56:13.160 so long as you're solving them, then that's what will make you happy, so long as the problems
00:56:17.640 are interesting and worthwhile; they don't need to be this profound type of thing.
00:56:22.760 And it doesn't need to be anchored to creature comforts. In the far distant future, people will
00:56:29.000 look back and think how uncomfortable we are. Here I am sitting in this silly chair. Maybe in the
00:56:34.200 distant future people will look at a video like this and go, oh my god, they used to sit in chairs
00:56:38.600 like that, how ridiculous. We're now floating around on clouds; those poor people!
00:56:43.320 Sam says at some point that there might be aliens out there that have available to them,
00:56:48.680 states of mind, states of pleasure, that we do not have available to us. The biology of our mind
00:56:54.440 might foreclose certain states. And again, this is a profound misunderstanding of David Deutsch's
00:57:00.760 understanding of what a person is. David Deutsch's discovery is that a person is a universal
00:57:05.880 knowledge creator. Given that our minds are universal, there can be no such state, because our minds
00:57:11.640 can access any state. It doesn't depend upon our biology. Sam insists at this point in the
00:57:16.440 conversation that it does depend upon our biology. But it doesn't and it can't because our minds
00:57:22.040 are substrate independent. We could one day be downloaded into computers. So we are not foreclosed
00:57:28.200 from having certain experiences. Our minds are already universal. We don't even need more
00:57:34.040 processing speed or more memory power in order to do this. Our minds are universal. And if we
00:57:38.440 did need more memory, well, then we could hook ourselves up to a computer, but I find that objection completely
00:57:43.640 implausible. And David points out that discoveries in morality must always create more problems.
00:57:50.040 Sam mentions that, well, this is just a local wrinkle. Again, Sam has this conception that in the distant future
00:57:56.280 we will be in a less problematic state than we are in today.
00:58:02.120 And it's just wrong. He thinks that we'll be almost there. We'll just be ironing out the wrinkles.
00:58:07.240 Again, this anti-fallibilist notion shows that he doesn't really take seriously the beginning of
00:58:12.040 infinity, that even then we'll be at the beginning of infinity. Even then, when he thinks that we'll
00:58:16.760 be almost at the peak, as David says, we will just see more problems from that point.
00:58:21.880 There'll still be existential problems. How can we get rid of all the existential problems? How can
00:58:25.320 we stop all the stars exploding? Well, maybe we can. But once we do that, how can we stop the
00:58:29.400 universe expanding and so on? The problems will always be there. We'll always discover something new.
00:58:33.720 And when you discover something new, well,
00:58:38.360 you end up finding a whole bunch more problems. And there's this funny moment where David says,
00:58:48.200 he doesn't see a reason why there should be a limit on the size of a mistake we can make and
00:58:52.440 Sam laughs at that. But I think that Sam's kind of laughing nervously. It's like a reflex
00:58:59.000 because he doesn't have an answer to that. He kind of gets it. He's a very intelligent person,
00:59:02.920 and he understands that, indeed, there's no maximum size on the problems we might make, even in
00:59:09.640 the future. And so things could go terribly wrong, even then. And so I think he kind of almost
00:59:15.640 gets there and realizes that, oh, you can't really be on a peak. Because if you're at a peak,
00:59:21.400 then you've kind of solved everything. But that's not possible; fallible people can still make mistakes.
00:59:25.400 With ten minutes to go in the conversation, Sam's still
00:59:33.240 asking what the disagreement is. He mentions morality as being a navigation problem. David says,
00:59:39.160 we'll change what we mean by better and worse. So even if we think right now we should go this
00:59:43.560 way rather than that way, well, in the future we might realize that that was a terrible error, and that
00:59:47.480 we could change our minds about what is right and wrong. And David also says that neuroscience
00:59:52.840 can't be very relevant, and Sam agrees, but David explains again that it's because
00:59:58.840 the brain is universal. And I'm not sure that Sam quite understands what that means. And this is
1:00:03.160 a subtle point. This is very difficult. I don't know that many people really have comprehended
1:00:09.800 the significance of that. The brain is universal. People are universal. Human beings are
1:00:17.160 people precisely because they're universal knowledge creators and universal explainers, they're
1:00:21.640 creative. That's what it means to be creative. And David points out again that if we were in a
1:00:26.840 matrix like Sam imagined earlier on, then no moral question would have anything to do with science
1:00:34.280 because that matrix, this program, this fake program that we would all be uploaded into,
1:00:39.160 wouldn't have physical laws that are instantiated in physical reality. It would instead be based upon
1:00:44.840 simulated physical laws, so any decision you made there wouldn't have anything to do with science.
1:00:49.960 They'd have to do with the laws that are instantiated inside of the program,
1:00:54.840 whatever the programmer had decided to put in there. So if you think that morality is anchored
1:00:59.800 to science, then you'd have to admit that at that point, morality then becomes anchored to the
1:01:03.640 rules of the program, the rules of the computer game or the matrix that you're in. And again,
1:01:07.480 that can't be true because morality has an objective reality beyond the matrix, beyond
1:01:13.400 our physical reality as well. Even though truths about physical reality might be relevant
1:01:19.000 at times to morality, morality can't be derived from the laws of physics or
1:01:27.000 from the laws of neuroscience or anything else. And Sam says he wants to talk more about that
1:01:32.200 another time, so I'd love to hear that conversation. You should tune in to just the final
1:01:38.600 two minutes of that particular podcast by the way, there's a good laugh, a good joke.
1:01:43.720 So I hope that went some way to trying to tease out what the differences are. I think there
1:01:47.240 are a number of differences there, not least of which is foundationalism. And this conception
1:01:53.000 about what morality is: the moral theories that we
1:01:58.360 often talk about, like natural law and utilitarianism and the golden rule, etc.,
1:02:06.200 should really be seen as critiques, critiques to use in order to find out what is wrong
1:02:13.320 with other theories or with other proposals within the domain of morality. But morality is about
1:02:19.640 solving moral problems, just the way that science is about solving scientific problems.