00:00:18.120 but I wanted to just give a little bit of an intro to David.
00:00:31.840 While Popper's work may not be that well known,
00:00:41.240 I think that David Deutsch has done a wonderful job
00:00:43.960 at bringing Popper's work to people through his books.
00:00:55.720 to social media, I think he's pretty accessible
00:01:00.560 which is not such a common thing among people who are specialists
00:01:09.560 have inspired a culture whereby people have come together
00:01:15.200 But this has led to groups such as Four Strands
00:01:20.480 Individuals have been inspired to create podcasts,
00:01:34.960 My primary interest is in foundations of physics
00:01:39.080 I find myself starting my physics class every year now
00:01:43.160 for the last couple of years with the discussion
00:01:51.360 And it's interesting how that leads to all sorts
00:01:54.360 of interesting discussions throughout the school year
00:01:56.760 as we look into physics and just overall the connection
00:02:08.280 So I really want to thank David for making himself available.
00:02:27.040 to help host this session because my experience
00:02:32.720 So back in 2009, you may not know this, but I was a religious blogger
00:02:36.600 and I had fellow religious bloggers suggest to me
00:02:54.360 And ended up reading a whole lot of different books
00:02:59.720 And eventually became very convinced of all four
00:03:13.120 And so eventually this even led to me going back to school.
00:03:19.200 I wanted to go back to school and get a master's degree
00:03:22.040 in computer science to study computational theory
00:03:26.240 And so this is something that really has ended up
00:03:33.560 and now maybe even eventually turning
00:03:46.000 And Saudi mentioned the Four Strands blog, fourstrands.org.
00:03:51.960 I'm the one behind that; I run it and host it.
00:03:58.600 Camille, do you want to do a quick introduction?
00:04:26.240 And that was the primary reason we started the podcast together,
00:04:29.640 just because we really enjoyed discussing knowledge
00:04:45.080 But I was thinking a little bit about the transition
00:05:15.800 because you don't need to assume so much for it to happen.
00:05:19.480 You just need people to make small changes to their ideas.
00:05:26.400 But then that raises the question: why was there a static society
00:05:40.720 before the Enlightenment? The argument is something
00:05:46.720 like: the Enlightenment could have happened earlier.
00:05:51.200 But the Enlightenment happened in a particular culture.
00:06:03.840 Something like that; I don't know if this is a clear question.
00:06:25.400 there's a selection effect: we tend to think
00:06:28.400 that what happened before the Enlightenment was static.
00:06:34.800 But the thing is long-lived static societies are rare.
00:06:45.880 Most societies, most cultures that have ever existed,
00:07:16.680 were not in the direction that would stabilize it.
00:07:32.720 and as a result they were unable to correct them
00:07:35.400 and so they were killed by the neighboring tribe
00:07:39.240 or they ran out of food and didn't know what to do
00:07:47.440 if you can use that concept with humans, you can't really,
00:08:01.760 was maybe better described as just continual chaos
00:08:11.800 And then staticity sort of emerged out of that sometimes,
00:08:16.400 but because staticity made the cultures last longer
00:08:21.000 and grow more, those are the ones that we kind of see
00:08:26.000 when we look back, we see the ancient Egyptian empire
00:08:36.480 the many failures that must have outnumbered that culture.
00:08:45.720 Right, I don't know if that answers your question,
00:09:03.240 My name is Pedro, and thank you so much for coming on.
00:09:10.480 I think the reason your book was recommended to me
00:09:13.920 is that I was asking a person how to do research.
00:09:26.200 that the most important take from this book is,
00:09:31.120 maybe just for me, but it's about self error correction.
00:09:40.160 so the first question is do you have any advice
00:09:52.520 that a person can question himself in his everyday life?
00:10:13.080 I think in the end, we as humans, in all the sciences,
00:10:18.080 what we are really trying to understand is causality,
00:10:25.120 but the problem to me is that I'm not a theoretical physicist
00:10:32.920 and I get a sense that my understanding of causality
00:10:43.680 differs from what is considered causality in, like, the spacetime sense.
00:10:55.440 But I think at times I can see why things are not causal
00:11:05.600 but I kind of find it really hard to give a definition
00:11:29.560 My third question and this is the last question.
00:11:32.760 So right now I am a grad student working in statistics
00:11:37.760 and I think fundamentally it's a problem of induction
00:11:45.120 that we're trying to combat in everyday life.
00:11:56.560 what do you think is the most important thing to do
00:12:06.720 to help in the process of scientific discovery?
00:12:13.640 And this kind of, what would you think would be
00:12:19.200 the most important thing for a statistician,
00:12:34.280 because I think that's the easiest, statistics is interesting
00:12:43.520 And the way it enters into science is that it enters
00:12:51.240 only in what I call, in my book, the perspiration phase.
00:13:16.360 and use statistical theory to get an answer out of that,
00:13:25.120 were even collected and that part doesn't involve statistics.
00:13:40.880 Some people think that creativity is just extracting
00:13:44.600 knowledge out of data, but that is the opposite
00:13:47.080 of what the truth is, as Popper has taught us.
00:13:55.000 that it can be kind of used as a tool to reject hypotheses,
00:14:02.640 but it seems that it is, do you think it's likely
00:14:07.320 that it can be used to discover hypotheses as well
00:14:16.560 for the same reason that any piece of mathematics
00:14:24.800 unless you first have a theory.
00:14:41.480 There is a very nice transcript on the internet somewhere.
00:15:04.080 And the first lecture, I think it's the first lecture,
00:15:08.840 he says: I think I'm the only professor of scientific method
00:15:17.040 whose view is that there is no such thing as a scientific method.
00:15:34.360 There is no such thing as a philosophical method
00:15:38.560 or a self-improvement method or a psychological method.
00:15:59.040 maybe the theme of my book is all problem-solving
00:16:02.800 and maybe the theme of all Popper's books is also problem-solving.
00:16:11.280 There is a method for, there are various methods
00:16:24.720 if they are in one's culture or in one's psychology.
00:16:28.840 But that by itself doesn't do anything positive.
00:16:32.520 It merely frees one from the sabotage of those methods.
00:17:04.360 and modern philosophy, what they say about causality
00:17:08.520 is that they basically deny that there is such a thing.
00:17:27.840 and the block universe and the block multiverse
00:17:38.960 that they equally well predict the past from the future
00:17:48.760 that there is no room for causation in that picture.
00:17:55.320 It's just that causation is a high level concept.
00:18:00.320 And there's no mention of difference between liquids
00:18:11.280 and yet there are well developed physical theories
00:18:15.640 And causality hasn't really been important in physics.
00:18:33.080 In constructor theory, if I can plug that for a moment,
00:18:47.920 than it is in the prevailing mode of explanation.
00:18:52.920 And in fields other than physics, causation is important.
00:19:28.600 which is to say I'm interested in trying to understand
00:19:31.120 the mind and the way that the mind creates knowledge
00:19:48.880 So my question is, do you think that replicators
00:20:02.160 that the replicators are sort of the key explanation
00:20:05.080 for why biological evolution manages to create knowledge.
00:20:11.040 that there's something similar going on in the human mind,
00:20:15.560 or do you think that there's some other process
00:20:17.440 that's responsible for creating knowledge in human minds?
00:20:22.000 To some extent, that's a question of implementation.
00:20:33.920 If I knew I'd be really working hard on that now,
00:20:39.160 if I had any kind of idea that I thought was halfway viable.
00:20:46.280 my guess is that that's not how the implementation works
00:20:55.240 There could be a logically equivalent implementation
00:20:59.880 But the thing is, in the mind or in a computer,
00:21:23.200 which hasn't done the equivalent of replication.
00:21:32.360 that I don't think we understand biological evolution
00:22:04.240 And so, although the underlying theory is the route
00:22:22.080 I doubt that there are replicators in the brain.
00:22:43.800 And I keep trying to get people to think about knowledge
00:22:50.320 And I have a very difficult time explaining to them
00:22:56.400 without just pointing them to your books and Popper's books.
00:23:03.440 how you think about the difference between knowledge
00:23:09.600 that you could offer in terms of how to get people
00:23:13.920 to realize that I'm trying to talk about something
00:23:16.000 that's not information when I say the word knowledge.
00:23:19.720 Yeah, the only communication strategy that works
00:23:42.440 I think of knowledge as a species of information.
00:23:47.440 And I've, at various times, used several different
00:24:03.520 And my most recent choice is to say that knowledge is
00:24:15.360 So, knowledge is that property of a computer program
00:24:24.360 And if, for example, you know, you have a word processor,
00:24:29.360 the word processor is useful because it knows things;
00:24:39.360 it is useful because it has
00:24:52.280 knowledge of things like: there are such things as words.
00:24:55.840 There are such things as letters and sentences.
00:25:02.720 And there are different aspects of the context,
00:25:09.280 So, knowledge is information with causal power.
00:25:23.480 both knowledge and information are very unusual,
00:25:26.720 they're abstractions and many people don't like
00:25:33.280 So that's something you have to persuade them of.
00:25:35.560 But then further information and knowledge are extremely
00:25:49.960 that I sometimes have to work hard to persuade people,
00:25:56.160 or rather to get people to see what I'm even talking about,
00:26:22.440 What's more important is to have an interesting discussion.
00:26:26.600 I'd like to hear that you start out with it too.
00:26:31.240 All right, Paul with, would you like to go ahead, please?
00:26:41.960 My question is about moral philosophy and moral truth.
00:26:46.760 I'm concerned, this is a topic you've touched before.
00:26:49.480 I'm concerned about how the is-ought dichotomy is interpreted
00:26:54.080 as hopelessly nihilistic, that it condemns us to relativism,
00:27:00.240 and the idea that if moral values can't be derived from facts,
00:27:03.760 they can't be true because they don't refer to objective entities.
00:27:08.880 The is-ought dichotomy is often perceived as a deep problem and a deep mystery,
00:27:16.840 because the impossibility of deducing values from facts
00:27:19.880 does not amount to a demonstration that they're false.
00:27:24.560 And in a sense, moral ideas can be refuted by mere facts
00:27:30.160 And I find myself in the, I think, the minority of people
00:27:40.880 And Popper described a position that he called
00:27:43.800 critical dualism, which I think I interpret in this way.
00:27:48.480 And so my question is this, I know that you've talked about the fact
00:27:54.320 And I wanted to ask you, how do you understand the,
00:27:58.560 How does it bear on your concept of moral truth?
00:28:01.680 And does the concept of truth apply to moral propositions?
00:28:17.760 Popper is a bit hard to interpret on issues of objective
00:28:27.120 morality, because he doesn't really discuss that point.
00:28:30.560 You can only infer Popper's position, as far as I know anyway.
00:28:40.320 You can infer when he says, for example, that we can make moral progress.
00:28:44.880 Then, and also that there is such a thing as making progress in philosophy
00:28:52.080 generally, that he certainly rejects the position that science is the only thing
00:28:57.840 I like to use the argument that when people say that there's a difference
00:29:08.320 between the possibility of progress in morality and in science,
00:29:12.560 in that in science we have this method of experiment that can take us forward
00:29:20.080 Well, I think that's an unpopular point of view,
00:29:32.720 It's a bit arbitrary to say that scientific knowledge is possible.
00:29:38.320 If, at the same time, you're going to take that critique of moral knowledge seriously,
00:29:43.520 because the same critique that the deniers of moral knowledge take seriously
00:29:48.720 has been used by many people to deny that scientific knowledge is possible.
00:29:55.760 The fact that you can't deduce it from anything is irrelevant in all fields.
00:30:01.760 Knowledge can never be deduced, so the is-ought distinction merely says
00:30:10.640 that you can't deduce moral knowledge from scientific knowledge,
00:30:17.600 You can't deduce scientific knowledge from anything,
00:30:21.520 so you can't deduce moral knowledge, but we're not after deducing knowledge
00:30:26.640 and what we're after is solving problems, and there have to be moral problems
00:30:32.320 as soon as you have a creative entity that is solving problems,
00:30:38.240 then the moral issues arise, because you've got to wonder: what should I want?
00:30:51.600 you can't gaze into your navel and find what you want about everything,
00:30:58.080 you've got to think about what you want and criticise it and create knowledge about it,
00:31:02.480 so I think one can take a completely uniform view of all those fields,
00:31:12.400 and therefore the is-ought distinction is not epistemologically relevant;
00:31:19.280 it's not relevant to what kind of knowledge we can create.
00:31:26.640 Actually, I had a question which was similar to that, if you guys don't mind me
00:31:31.200 interjecting there. My question was about, when I talk to people about that,
00:31:39.040 one of the questions that's raised is that when it comes to science, we all,
00:31:43.840 the laws of nature constrain everything, we don't have a choice in that,
00:31:49.040 but morality seems to be different. I guess one of the differences with morality is that
00:31:54.000 even if we claim that we discover moral principles, we still have a choice, we're not bound,
00:32:01.200 as if people feel like there's something more concrete in science.
00:32:05.920 would you like to say something about perhaps maybe you have any ideas about roots of morality,
00:32:11.440 in the sense of, do you tie it to, I've listened to your discussion with Sam Harris,
00:32:19.840 it doesn't seem like you tie it to anything to do with neuroscience, but do you think about it,
00:32:25.360 is there something at the back of your mind as to what are the roots of morality?
00:32:30.640 I think in general it's not very helpful to think about what the roots of something are,
00:32:38.720 because when you find some roots, there are always going to be roots beneath that,
00:32:44.240 and you'll never get to the bottom of it, so foundations are sometimes useful,
00:32:50.720 but not because they're underlying everything, but because they reveal something of the structure of
00:32:58.480 things. When you, I'm a theoretical physicist, I work on the foundations of physics,
00:33:10.480 when you make a terrible mistake at the foundations of physics, you may get ridiculed and you may
00:33:19.040 lose your income and so on, but when you make a mistake at the foundations of morality,
00:33:25.440 the physical world will come for you much worse. And I'm not only talking
00:33:34.640 about other people coming for you; even if you were a person on a desert island
00:33:39.920 who made moral mistakes, it would cause physical trouble for you, you would make mistakes in your
00:33:49.200 life, which might shorten it just from making a mistake in morality, so I don't think this
00:33:58.000 distinction that morality is a matter of choice is true, or at least it's no more a matter of
00:34:05.200 choice than any other idea is a matter of choice. We choose and create our own ideas according to
00:34:10.480 our values about what's true, but our values about what's true, even though they are completely
00:34:18.000 changeable, are not at all arbitrary. It's like, it's maybe the best example of this is pure
00:34:25.200 mathematics. Some people are reduced to claiming that mathematics is arbitrary, it's just,
00:34:35.600 really, mathematics is just the study of what mathematicians think it clever or glorious or
00:34:45.920 whatever to think about, which reduces mathematics to basically a study of human brains,
00:34:53.840 mathematicians brains, or the brains of a community of mathematicians, but it's simply not true,
00:35:00.480 mathematics is the study of abstractions that actually exist and properties of them that exist
00:35:09.520 and are independent of us. We can choose which mathematical objects we think are interesting
00:35:17.120 and worth trying to understand, but we can be mistaken and we can follow dead ends.
00:35:28.640 I think in mathematics it's also unusual to run into a brick wall like that, and by the way,
00:35:35.440 I think that running into a dead end and making large mistakes, unless they kill you,
00:35:43.600 it's not all bad. In fact, it can be just as good as successfully discovering things, which
00:35:51.840 the latter can leave you feeling empty, whereas, as Popper says, if you're engaged with problems,
00:36:05.120 even if you never solve them, then you're still having fun.
00:36:10.320 I have one quick question, I hope it's quick. I enjoyed reading your constructor theory paper.
00:36:16.880 You made a very big deal in that paper though about it underpinning the rest of physics,
00:36:21.680 and I kept wondering why that was, because it seemed like it would be a valid theory about
00:36:26.800 constructors, in the same way, you know, that information theory is a theory about information,
00:36:30.960 or computation theory is a theory about computation, without the claim that it underpins
00:36:35.680 all of physics. So what was the motivation there to say that, and is that an absolutely necessary
00:36:42.160 motivation or would it still be a good theory without that? Well, I guess that no particular
00:36:49.840 motivation is ever essential, but the reason, I think constructor theory could stand by itself,
00:37:02.000 but rather like philosophy, if there were no applications to anything else, then it would be useless,
00:37:09.120 it would just be a piece of mathematics. The reason I think it's important that it underlies
00:37:17.120 so many areas of physics is just that I think it does underlie them. I think that there are
00:37:27.040 several areas of physics where progress has been stalled because of the assumption that
00:37:35.440 the prevailing mode of explanation, namely initial conditions and laws of motion,
00:37:41.360 is the only legitimate form. It's kind of, without ever being stated explicitly,
00:37:49.600 it's taken for granted that a valid explanation in physics has to be of that form. And yet
00:37:57.200 already in existing physics, there are explanations which are of the constructor-theoretic form
00:38:03.600 instead and cannot be expressed in terms of initial conditions and laws of motion.
00:38:08.560 And that is kind of shrugged off because people think it's not legitimate. So in thermodynamics,
00:38:16.800 there are explanations that seem to directly conflict with explanations in terms of initial
00:38:25.600 conditions and laws of motion. And the conventional response to that is
00:38:30.800 basically to say, oh, well, thermodynamics isn't really true. It's just an approximation scheme.
00:38:37.040 And at root, these quantities like work and heat and the laws of thermodynamics are not actually
00:38:45.920 true. But that's just a prejudice. And my feeling is that in that area and in many other areas,
00:38:56.320 such as the theory of computation, and in areas of physics where the initial conditions and laws of motion
00:39:05.920 approach has been successful. I think in all those areas, there is scope for making progress
00:39:13.440 via constructor theory if constructor theory is true, and probably not if it isn't.
00:39:21.840 And we'll find out if it's true only by trying to make such progress using it.
00:39:29.120 Thank you. I wanted to give work cash a chance to ask a question. He wasn't able to
00:39:34.880 raise his hand through his interface, but he indicated at about this point. So are you still there?
00:39:39.040 And can you ask your question? Yeah, I'm here. Thanks. Hey, David, a big fan. I just wanted to ask
00:39:46.640 you, this isn't my view, but I just want to play devil's advocate here. And I, because I don't
00:39:50.080 have a rebuttal to this argument, which is: there's a Bayesian critique of Popper, which is that
00:39:55.360 verification and disconfirmation both give real information about a theory. And that while
00:40:00.800 Popper can deal with disconfirmation, there's no way to integrate evidence that verifies a
00:40:05.360 theory. And that, like, Bayes is backwards compatible with Popper in that it can
00:40:11.280 integrate verifying and disconfirming evidence. It just weighs disconfirming evidence
00:40:16.560 higher and updates heavier based on that. So how would you deal with that criticism?
00:40:19.840 I think the context in which that criticism arises, I think, contains mistakes.
00:40:33.920 First of all, the context is that there is some data or information, which we receive, and then
00:40:43.760 we have to make sense of it, either by refuting a theory or by confirming a theory or whatever,
00:40:50.640 but we start off with data. That just isn't true, as we have learned from popper.
00:40:56.880 So in that respect, the whole picture of science and of thinking generally that that
00:41:05.360 underlies that critique is just wrong. Secondly, so that's like where science is coming from,
00:41:15.200 then there's where science is going to. So this critique suggests that what we're trying to
00:41:24.400 do, where science is going to, is getting justified beliefs; that what we really want to do
00:41:35.200 is to make the probability that we assign, or the credence that we have, for true theories go
00:41:44.960 up. We need some method that will make the credence of true theories go up. And then they say,
00:41:50.480 well, first of all, Popper seems to only have a method that makes credence go down.
00:41:57.120 So, you know, how can that possibly be a picture of science? Well, the answer is that science
00:42:03.680 from beginning to end doesn't resemble that picture. So science is problem-based
00:42:17.440 and the way it proceeds is by conjecture. And after it has problems and conjectures,
00:42:22.560 it has criticisms. And none of that appears in the Bayesian picture.
00:42:27.840 So, of course, they're going to think that the Popperian view of science
00:42:36.720 doesn't adequately represent science, but what has really happened is that their picture of
00:42:41.840 science, which is basically empiricism and inductivism, something of that kind, is just wrong,
00:42:47.840 root and branch; false, root and branch. All right, thank you. Mike? Yes, hi, everyone. Hi, David.
00:43:02.720 So, I was just wanting to ask you about modes of explanation and knowing how important they are
00:43:07.840 to kind of structuring some of your work, and Bruce just brought up constructor
00:43:13.280 theory, which I think you might describe as its own mode of explanation. And I was trying to
00:43:18.800 particularly link it to computation. So, there's your shorthand: if you can't program it, you
00:43:25.440 haven't understood it. I was wondering, following that, is inventing a new mode of explanation
00:43:31.680 synonymous with inventing, like, a new type of algorithm? Can the link between computation
00:43:37.760 and explanation be kind of forged in that way? But you don't have to speak
00:43:43.120 specifically just to that. So, I'm reluctant to reduce things to algorithms. I think that usually
00:43:52.800 sucks the creativity out of the picture and makes it wrong. So, I'm trying to think
00:44:07.600 whether this maxim, if you can't program it, you haven't understood it, which is really a bit of
00:44:14.320 a paraphrase of Feynman. Whether this applies to everything or just theories about
00:44:26.160 how information works in the world. And in particular, AGI and so on. So, if you can't program an algorithm,
00:44:35.120 you haven't understood it. If you can't program any kind of information process,
00:44:40.880 then you haven't understood it. If you say you have a theory of how stars work,
00:44:52.960 a theory about how stars work, then, I'm thinking out loud here, then it's also true
00:44:59.760 that if you can't program it, you haven't understood it. But that doesn't mean programming
00:45:04.960 every, the motion of every molecule in the star. It means programming the
00:45:14.480 features of your explanatory theory, the things that your theory says explain the star.
00:45:23.680 So, it's those that you have to be able to program. But finding out what those are
00:45:30.720 is not a matter of programming anything. It's a matter of creativity and problem solving.
00:45:39.040 So, my tentative answer is that that maxim doesn't apply to everything. It doesn't apply to
00:46:04.080 So, yeah, thanks for doing this so much. It's really an honor to talk to you.
00:46:09.440 Of all the things that we have assigned objectivity to, I feel like the hardest
00:46:15.360 one for me personally is aesthetics. So, for instance, I find the cave paintings of, like,
00:46:20.960 Altamira and Lascaux to be beautiful. But the reason I do is because of how old they are.
00:46:29.520 And it's humankind speaking to us from 30,000 years ago, trying to survive the harsh,
00:46:35.600 harsh ice ages. And I feel like if someone painted the same thing today,
00:46:41.360 the same way, with the bisons and everything, and said it was a masterpiece, I'd probably want to slap
00:46:45.760 them in the face and say, I don't find it very beautiful. So, I don't know about me ascribing
00:46:53.040 aesthetic value to the cave paintings of Lascaux because of the romantic notion of humankind painting
00:46:59.520 them so many years ago, maybe the first ever, and wondering what they were trying to say, or whether they were trying to
00:47:03.280 say anything at all. Is it fair to ascribe aesthetic value to that for reasons like that?
00:47:09.200 Or should we just judge it for how it looks, and it shouldn't matter the environment, who did it,
00:47:14.640 and what they were trying to say? Does that make sense? Yes, I think to some extent this is just a matter
00:47:21.520 of the fact that language and terminology, we don't have an absolutely exact language
00:47:35.920 to describe everything we want to talk about. So, often we use metaphors and often we use
00:47:41.040 terminology that slides over from one area to an adjacent area and so on. So,
00:47:48.160 mathematician can describe an equation as beautiful, a person can describe someone's mind as
00:47:56.960 beautiful. And they mean something by that, they mean something objective by that, but it is not
00:48:07.040 the same thing as what we mean when we describe say a piece of music as beautiful or a sculpture
00:48:14.880 as beautiful. And even with those things, we may describe a painting as beautiful because it is
00:48:24.000 very apt in a certain situation. Like, I don't know, how do you judge Goya's painting of some
00:48:32.320 partisans getting shot? How do you separate the beauty of the fact that he has captured
00:48:43.280 a very ugly situation so well? How do you separate that from beauty in the sense that if the same
00:48:52.080 skill and insight had been used to describe an orange-harvesting festival? You could also
00:49:02.080 describe that as beautiful, but there'd be a different kind of beauty being described there.
00:49:07.840 I think there is such a thing as artistic beauty, which is often mixed with other values that we
00:49:17.680 want to put into an object. And maybe we shouldn't get hung up on whether that is really
00:49:27.840 beauty; it's kind of essentialism to ask that. The thing is that there are many
00:49:37.920 features of an object that are desirable. And the cave paintings are desirable in one sense
00:49:46.560 and are clearly rubbish in another sense. And there's nothing wrong with that. If somebody wanted
00:49:56.480 to, if somebody was interested in understanding the distinction there more deeply, then they would
00:50:03.680 probably find themselves inventing a more refined terminology for it. Rather than say,
00:50:12.640 is this really beautiful? They would say, there is a thing that we want. It is this, you know,
00:50:19.200 I'm going to explain it. And the cave painting has that, has heaps of that. And there's this other
00:50:25.520 thing, which we want in a different context, which, which, which the people who did the cave painting
00:50:32.400 also wanted, but weren't very good at achieving. And, you know, if somebody was spending their life
00:50:38.960 on teasing out that distinction very finely, then they'd probably invent a more refined
00:50:48.080 terminology. Hi, Jesse. Hey, David. I have a question that might be somewhat personal,
00:51:02.240 but have a lot of implications in a lot of people's lives. And I know Lulee's talked about this.
00:51:08.240 It revolves around just romantic relationships, personal relationships. And the whole
00:51:15.600 dichotomy of genes versus memes. We need society to procreate now. We don't have,
00:51:24.560 you know, we don't live an infinite life. We know immortality is
00:51:30.080 possible in some sense. But I guess there is a sense of, like, we want to create the best
00:51:36.880 memes that we can. We want to create the best explanations that we have in our lives.
00:51:43.120 But how do you think about that in terms of children and education, whether or not to have a
00:51:50.560 family or be in a relationship or just work on, you know, things like constructor theory and AGI
00:51:58.080 and life extension or biotech or just really curious to see how you think about all those ideas?
00:52:04.000 I don't think it's a good idea to try and save the world in the sense of
00:52:18.240 subordinating one's own values to what one thinks the world's values are. So maybe the world
00:52:27.440 needs a larger population. My guess is that it does. In other words, that would be a good thing
00:52:37.600 that the world as a whole would thrive better if it had more people in it. But other
00:52:45.280 people, of course, think that the world would thrive better if it had fewer people in it.
00:52:49.440 I think in both cases, it's a bad idea to subordinate one's own life to that objective.
00:53:00.640 I don't think it's even, for example, a good idea in my own life to try to
00:53:08.960 publicize my own ideas. I do it to some extent, but I don't subordinate it to the fun of actually
00:53:08.960 trying to solve problems. And some of the problems are only of interest to me, and some are
00:53:25.440 of interest to me and like half a dozen other people in the world, and some are
00:53:30.480 of interest to more people. But the way I would choose what to do is to try to meet my own values. And
00:53:40.880 to the extent that my own values include having preferences about how the world is, then
00:53:52.000 meeting my values would include trying to make the world better. But trying to make the world
00:53:58.640 better as an overarching principle for how to make personal decisions, I think, is a mistake.
00:54:10.240 Yeah, I guess that answers a little bit of it. And then it's just that,
00:54:15.200 being young, a big part of culture and of society in general is just finding
00:54:21.600 a significant other, a partner. And there's the whole debate about
00:54:25.200 polyamory versus having a committed monogamous relationship. And that drives a lot of culture.
00:54:36.800 Yeah, well, different people find answers in different ways, and they have extremely...
00:54:45.360 And I guess from the context of The Beginning of Infinity, of what was actually useful:
00:54:52.160 it was useful to make more people, and to do that, people had families, in a kind of
00:54:58.400 divide-and-conquer sense, whether they knew it or not, right? When people
00:55:03.440 team up, they're more than the sum of their parts.
00:55:09.600 Yes, although there are many ways of teaming up, and each of them has better and worse ways of
00:55:16.960 doing it. So, you know, you form a society, you form friendships, you form families, and all of those
00:55:26.160 can involve mistakes in how to do it. And we've got here by people making progress with that.
00:55:38.880 But for most of history, they didn't make progress.
00:55:51.680 Tracy? Sure. Hi. Hi. So, I'm hoping this is just more a fun, light-hearted question,
00:56:01.200 maybe. But on Thursday, I woke up from a dream that I had gotten the opportunity to meet you,
00:56:10.240 David, and the very next day, I find out that suddenly there's this opportunity to meet you
00:56:17.520 at this Zoom meeting, which is exciting for me and kind of strange. So, maybe for the fun part,
00:56:25.040 maybe you could speak to the human brain regarding its potential for quantum prediction, or
00:56:31.360 just the idea of quantum prediction in general.
00:56:33.440 So, I'm not entirely sure what you mean by quantum prediction, but predicting
00:56:44.000 the growth of knowledge is inherently impossible. And there's no reason to think that
00:56:54.720 quantum effects might be implicated in the human brain. And
00:57:00.000 the idea that quantum theory is kind of mystical, that it justifies various
00:57:08.240 traditional mystical ideas, always comes from mistakes about quantum theory.
00:57:16.080 The real world doesn't implement those. So, you know, I would guess that there wasn't a connection
00:57:25.920 in that respect.
00:57:36.640 Maybe that's a boring reply, but my guess is that's the truth of it.
00:57:46.080 All right, Ms. Rob, I don't know if I pronounced that right.
00:57:49.040 Yeah. You can hear me? Yes. Hi, everyone. Nice to meet you, David. So, I just wanted to ask
00:57:58.960 about the replication crisis, especially in psychology, and in general in the life sciences.
00:58:06.720 So, around 2010, people started to realize that there are a lot of studies that can't
00:58:13.040 be replicated. And so, people started to implement many standards like data sharing and
00:58:19.200 open code and stuff like that. And there was also an emphasis on the importance of replication
00:58:25.600 studies, studies that repeat the experiment as closely as possible to the original study.
00:58:32.320 So, there is a sense that if a study is replicated, then it must be true, and less emphasis on
00:58:38.960 mechanism. By mechanism I mean explanatory theory: they establish a link by experiment, and
00:58:46.320 only afterwards give an explanation of how this process might happen in the mind. But
00:58:53.920 this prioritizing of replication seems to me to miss the point: we could replicate the Newtonian laws
00:58:59.840 infinitely many times, but the replications are not an actual explanation of how the world works around us.
00:59:06.160 I just wanted to know how you see this, what you can say about the methodology of
00:59:16.080 psychological studies. Yeah, I entirely agree. And, as you have just said, I think
00:59:25.280 the replication crisis in psychology and related fields is the wrong way to think about it:
00:59:35.920 the replication crisis is a small facet of what goes wrong when you apply
00:59:50.960 scientism to psychology, or to anything that involves human knowledge.
00:59:57.520 If you try to study it, as if it were physics, you will be doing scientism, you will get it wrong.
1:00:07.680 And the fact that it's not replicated is almost a positive feature of a theory,
1:00:20.640 because it's at least saying that the explanatory part of the psychological theory, which was
1:00:32.400 kind of unstated and taken for granted and implicit and denied and so on, that thing existed.
1:00:40.960 So there was an explanation there, and that's why the explanation can be falsified by an experiment.
1:00:50.720 If something can be replicated in psychology, then it's not really psychology.
1:00:58.480 For example, people do wonderful work creating optical illusions and explaining why
1:01:06.160 they work. And they work in psychology departments, many of these people, but that's not psychology.
1:01:16.000 That is a study of the human visual system and how the information is processed,
1:01:23.200 but that information is not being processed by a creative process.
1:01:27.840 There are other kinds of things, stemming from that, that you might ask about:
1:01:38.080 after the built-in interpretations of sensory data,
1:01:47.600 there is further interpretation that happens,
1:01:50.800 which can be creative, and which also affects how we perceive things.
1:01:57.920 And you can form theories about those, but those theories have to be explanatory,
1:02:04.880 and there has to be a model of those. And there I would say that replicating them on a computer
1:02:14.080 might be a useful thing to do with those explanatory theories.
1:02:20.560 So 'if you can't program it, you haven't understood it' might be relevant there.
1:02:25.920 But, as I think you hint, the real trouble with psychology and related fields
1:02:32.960 is scientism, and a lack of, even a denigration of and deliberate avoidance
1:02:45.920 of, explanatory theories. This was explicit in the case of behaviorism, but behaviorism has
1:02:55.200 kind of been rejected. But the aspect of behaviorism that says that one should not
1:03:04.640 have explanatory theories, but rather massive data which is replicated,
1:03:10.960 that is still there, and that's what really needs to be reformed.
1:03:17.040 All right, David. Thanks for doing this. I'm speaking from Jerusalem,
1:03:22.000 Israel. You had mentioned earlier the Popper lecture and later paper on the non-existence of
1:03:30.560 the scientific method. So I just thought you might get a kick out of this volume that I found, literally
1:03:36.160 lying next to a dumpster, from 1958, which is apparently the first piece of Popper's writing that
1:03:43.200 was translated into Hebrew. I know you're from Haifa, so I thought you might get a kick out of that.
1:03:48.480 Anyway, my question is about the first chapter of your book and your theory of explanation.
1:03:58.720 I've always wondered, I've always gotten the feeling, as you step through the phases
1:04:04.080 leading up to the breakthrough method that we have today, which is, of course, one step
1:04:14.320 in a long chain. I've always wondered how you see the relationship between that theory and
1:04:20.960 Popper's. I wouldn't normally bring this up, but I know this is a Popper-oriented group.
1:04:27.680 So I was just wondering if you see that theory as a corrective, as completely 100% compatible
1:04:34.400 with and just another way of looking at it, or how do you see it relating to Popper's theories
1:04:38.960 and theory of explanation? Thanks. So I privately and personally think that it is Popper's theory,
1:04:47.600 and I'm not a historian of science, and I'm not really interested in who had what idea,
1:04:55.440 but I see, for example, the first chapter of The Beginning of Infinity
1:05:00.000 as just a small explanatory footnote to Popper's epistemology.
1:05:10.480 And if somebody comes along and says, no, it's not, Popper thought something completely
1:05:17.200 different. I don't care. I'm only interested in what the truth is. On the other hand,
1:05:24.400 at the other extreme, if someone comes along and says, that's exactly what Popper said,
1:05:32.480 and even your footnote is in a footnote of Popper on page 483. Well, again, I don't care.
1:05:42.240 I'm trying to understand the world, and I'm interested in what's true.
1:05:48.480 And attributing it to Popper is merely a matter of, kind of, academic courtesy.
1:05:58.160 So I think that Popper had an entirely explanatory conception of science.
1:06:08.640 I can't prove that from his writings, and I know that, for example, David Miller
1:06:13.360 thinks that that's not entirely true. Again, I'm sorry if it's dismissive to keep saying
1:06:22.720 'I don't care', but it's not what I'm interested in. Thanks. Thank you. Dennis.
1:06:32.400 Hey, guys. Can you hear me? Yes. Great. Hey, David.
1:06:35.840 Hey, David. Earlier you mentioned, in response to Ella, who was asking about self-replicating
1:06:42.240 ideas in a mind, and your response was, if I heard it correctly, that
1:06:48.640 it wouldn't really be efficient in terms of memory, because instead one could have a quantity
1:06:53.440 field of sorts on ideas that would encode how many instances of an idea exist,
1:06:58.880 and that way one could save a lot of memory. But I want to take a moment to defend the theory,
1:07:03.440 if I may. As it happens, Ella had thought of the same thing when we first started discussing this
1:07:09.920 theory. Now, I suppose the quantity field would be denotationally equivalent to having replicators
1:07:17.520 on the surface, but the structure of the implementation would be wholly different. And I think
1:07:22.960 one would lose a lot of explanatory power by removing replicators, because one would need to
1:07:27.040 come up with separate explanations for everything that the replicator-based theory might
1:07:34.960 currently explain, for example the memes, how they evolved, why some ideas
1:07:38.880 survive in the mind better than others. And so I'm not sure that just because a programmer would prefer to use
1:07:44.320 quantities instead of replicators, that means that biological evolution would have 'chosen', I say
1:07:48.880 chosen in scare quotes, to do so as well. Most of the criticism of this neo-Darwinian theory of
1:07:55.920 the mind, if you want to call it that, that I've heard so far, is along those lines: that we don't
1:08:00.800 need replicators and that we could replace them with something else. And if I understood you
1:08:07.040 correctly, your criticism is along the same lines. But the epistemological problem that I see with
1:08:12.400 that is, we could say that for any theory, right, I mean, even hard-to-vary ones: we could think of
1:08:17.920 ways to replace key components of them, even if usually that means that they become easier to
1:08:24.240 vary as a result. And I think that's what happens when we drop replicators. The problem reminds me
1:08:30.320 a little bit of the fossil thing, which I believe you've brought up before in defense of the
1:08:34.400 multiverse. So people might claim that we don't need to claim that
1:08:42.800 dinosaurs really existed to explain fossils, even though that is rather artificial, and we could
1:08:48.720 simply come up with other ways fossils may have come about that don't involve the existence of
1:08:54.000 dinosaurs. And then denotationally, I suppose, those theories are the same, or at least similar,
1:09:00.080 because the outputs of the theories, the dinosaur fossils, are the same. Or, going a bit off the
1:09:06.080 rails, instead of claiming that many dinosaurs existed, we
1:09:10.640 could claim that there was a single dinosaur that had a quantity value that determined how many
1:09:14.080 fossils it left behind, right? So I guess the problem is that this won't convince the advocates of
1:09:21.920 the past existence of dinosaurs, rightly, I think, because they would want to know why dinosaurs
1:09:27.120 couldn't have existed, not why they need not have existed. So in a way I agree that dinosaurs
1:09:33.120 need not have existed, for the same reason that no theory need necessarily be true.
1:09:40.480 And so that applies to self-replicating ideas in the mind as well. But what I'd really be interested in
1:09:45.600 is a refutation, like an argument for why replicators can't play a role in how the mind works.
1:09:54.960 Can you think of such an argument? No, and I did say that I don't know how the mind works,
1:10:02.560 and maybe you're right; maybe it's the fact that I learned to program a long time ago,
1:10:13.200 and my formative programming years were in an era where memory was expensive, and
1:10:19.440 it was worth spending time thinking of more efficient ways of storing the data.
1:10:27.120 And now, memory is extremely cheap, and it's usually not worth doing that. And as you say,
1:10:33.680 one of the things you gain when you have a redundant representation of something
1:10:40.160 is you get much more flexibility and explanatory power.
1:10:46.400 So having said that, I think your comparison with the dinosaur theory is a bit unfair.
1:11:00.800 If your problem is that you want to make an artificial fossil,
1:11:06.400 it would not be a good idea to start by making dinosaurs.
1:11:09.920 You need to take the shortcut that's available and make the artificial fossil that way.
1:11:19.120 Whereas if you want to explain how the fossil got there,
1:11:25.280 that shortcut would be a terrible way of approaching the problem.
1:11:28.640 But if you want to make an artificial fossil, then going via dinosaurs is far too inefficient.
1:11:34.480 But, you know, since I don't know how it works, I can't really pontificate about how to do it.
1:11:53.760 Hi guys, how's it going? Thanks to Sadia and Bruce for putting this event on,
1:11:58.560 and to David for answering questions. So my question was about, well,
1:12:05.520 you know, the explanation of how creativity works or just what creativity is,
1:12:10.880 and just critical rationalism in general, seems to contradict certain
1:12:16.400 commonly held assumptions, which are effectively just statements that people are mechanical.
1:12:24.240 And, you know, for example, operant conditioning, which is the idea that
1:12:32.160 learning, and alterations to human thought, or the thought of people more generally, and
1:12:38.640 their behavior, is best achieved using a framework of rewards and punishments.
1:12:45.280 And that's used when dealing with problems in psychology, like maybe addiction.
1:12:53.280 And, you know, it seems to get a lot of use within psychology, and then in
1:12:59.680 behavioral economics as well, in the form of incentives and disincentives to do certain things.
1:13:06.880 I think the original question I actually had, specifically about addiction and making
1:13:11.120 choices, was sort of answered already when you were speaking about, you know, creating the
1:13:17.680 best moral theories and so on. But I was maybe wondering if you could say something about incentives
1:13:25.040 and disincentives, and how valuable the work done in behavioral economics is: whether it's
1:13:33.920 just fundamentally based on faulty assumptions and there's not much use to it, or whether it's
1:13:40.320 maybe contingently useful based on the cultural ideas at a given time, or something like that.
1:13:49.280 Oh, yeah. So, one has to recognize that lots of things in the world do not involve creativity,
1:13:58.560 and such things can be analyzed in terms that would be dehumanizing if
1:14:08.800 applied to things that do involve creativity. And economics, for example, is a field where
1:14:19.600 the important issues are dominated by creativity, but not totally exhaustively described
1:14:28.800 by creative processes. There are other processes as well. And if you're looking at an
1:14:36.080 area of the economy where not much creativity has been used, because people find the
1:14:43.200 setup basically satisfactory and what they want is a mechanical way of getting through
1:14:49.680 to various things, then you can find an algorithm that sets the prices in those situations.
1:14:56.800 You know, like when there's a shortage of some raw material, then you can work out,
1:15:03.760 at least as a first idea, how you can set the price, although someone else might
1:15:09.520 think of a better idea, and already you haven't modeled that. And similarly, if there are things
1:15:15.600 that happen in the human mind, in the human brain, I should say, that aren't creative,
1:15:24.240 like optical illusions and that kind of thing, and if they feed into the problem
1:15:31.280 that you have, which is partly about creativity and partly not, then that might be helpful.
1:15:36.880 I'm not going to say that isn't helpful. But I'd say that, whenever creativity
1:15:44.000 touches on something, it changes it profoundly. And it really becomes the most important thing
1:15:51.360 to try to understand in regard to that field. Rewards and punishments are an
1:16:03.200 abomination, really, in anything to do with humans, because they are trying to forcibly
1:16:12.880 change a human situation which had involved some creativity into one that doesn't. And that is just
1:16:22.240 bad. You know, it's like these purported cures for
1:16:32.800 gayness and so on, by giving gay people electric shocks. If people
1:16:42.320 want to be treated like that, they are making a mistake. I don't care if it works or not,
1:16:48.880 'works' in quotes. You know, I'm wondering: if you thought that
1:16:56.480 an S&M fetish was bad for you, and you had one, and you thought it was bad for you,
1:17:04.400 what kind of conditioning would you expect to cure that? Like, you know,
1:17:13.760 being given electric shocks? Oh, hilarious. Thank you. Thank you. Um, Carl.
1:17:21.440 Yeah, hi. Thanks for doing this. It's been really fun. So I remember you
1:17:26.400 saying in an interview that whether animals suffer or not is a philosophical question rather
1:17:31.600 than a scientific one. And I definitely agree. So I'm just curious to hear if you've found any
1:17:38.240 convincing arguments for either side of that issue. And if you haven't, how do you think we
1:17:43.840 morally should treat the issue of whether animals suffer or not? Um, yes, I think not much is
1:17:52.800 known about this. I think there are some tiny clues in various places.
1:18:05.360 And I think that maybe the main thing is, since we know so little about this,
1:18:12.640 I think there is room for a range of views that can all be considered reasonable,
1:18:22.160 depending on where one is coming from. One can rule out, I think, the extremes.
1:18:34.080 On the one hand, thinking that we should respect the
1:18:42.800 wishes of trees is very close to being untenable philosophically, because of
1:18:55.360 what we know factually. And at the other extreme, I think that it is wrong to adopt
1:19:06.960 a position of principled callousness, trying to abolish, for example, all laws
1:19:19.760 about animal cruelty and whatever, on the grounds that there's no evidence that anyone is suffering
1:19:25.760 when there's animal cruelty. There is no evidence, but I think that is different from saying
1:19:33.600 that there is a good reason for adopting that view. But in between those extremes,
1:19:41.920 there's a huge range of positions that I think are reasonable. But would you say that
1:19:48.560 this is a mild form of the precautionary principle, that in the absence of knowledge we should,
1:19:54.960 like... Well, I'm always a bit wary once you invoke the precautionary principle. I think it's
1:20:00.880 more a matter of what we should do in the face of ignorance. In the face of ignorance, the
1:20:06.880 first thing is to be tolerant of multiple views. And the precautionary principle precisely
1:20:15.040 isn't. So, you know, I would say be tolerant of multiple views about this.
1:20:23.200 You asked about evidence. A tiny piece of evidence in regard to dogs:
1:20:30.800 dogs look like they have feelings more than similar other animals do.
1:20:43.360 And we know that this is because they have been subjected to artificial selection
1:20:55.280 for precisely the attribute of looking as though they have feelings.
1:21:01.040 Now, I'm not sure that looking as though you have feelings can be done without having them.
1:21:10.400 This is a very weak argument. I can easily think of ways that that might not be right,
1:21:17.440 but, you know, beggars can't be choosers. I think we have touches of evidence that maybe
1:21:26.080 some animals have some element of qualia. But,
1:21:36.640 you know, if this counts as anecdotal evidence for something, there is strong
1:21:42.240 anecdotal evidence the other way as well: if you look at animals like chimpanzees that
1:21:48.720 look as though they have feelings, in other experiments it's fairly clear that
1:21:56.720 they do not have an idea of what's going on, that they're
1:22:05.440 just behaving mechanically. I think the experiments were...
1:22:11.920 But you would simply reject the notion of, like, philosophical zombie dogs then, I guess.
1:22:18.000 Yes, that I would, because
1:22:24.400 that's one of these all-purpose explanations that could be used for anything.
1:22:28.320 I can imagine a theory with a physical zombie Jupiter, where Jupiter doesn't exist,
1:22:36.960 but it looks as though it does. So that's a whole class of explanations that have to
1:22:42.240 be rejected on principle. All right. Thank you, Carl. Cameron.
1:22:52.560 Okay. Thanks. My question is sort of around my trouble reconciling
1:23:02.800 Popperian epistemology with behavioral genetics, namely that it seems to conflict with
1:23:10.960 universal computation. So I think you've noted that your position is that the
1:23:15.840 mind is not a blank slate, you know, so we have inborn genetic knowledge,
1:23:20.240 but, importantly, that it can be overwritten or overridden,
1:23:26.640 with examples such as fasting, jealousy, skydiving, or suicide.
1:23:35.360 And my understanding of the behavioral genetics literature is that
1:23:40.800 genes seem to predict many behaviors. I think a lot of people in that field might say 'explain',
1:23:46.640 which I think you take issue with. And over the last 50 years, the main evidence for
1:23:52.160 that has been around identical twins versus fraternal twins, identical twins being more similar
1:23:58.000 than fraternal siblings, and adopted children being similar to
1:24:04.080 their biological parents and not similar to their adoptive parents. I think Robert
1:24:11.600 Plomin describes genes' influence on behavior as
1:24:18.800 describing what is rather than what can be, which I think aligns with one of your comments
1:24:25.600 that the amount that genes influence our behaviors is itself a product or function
1:24:32.960 of culture. But I think you've also noted that your position is that
1:24:41.280 genetic knowledge, or genetic influence, is probably easy to override, and that probably
1:24:47.680 happens early on. So I have trouble reconciling that with, I suppose, the fact of
1:24:54.640 adopted children being sort of systematically similar to their biological parents,
1:25:00.800 their particular biological parents, and it seems to me that genetic influences do have a very
1:25:07.600 large influence over what currently is. So yeah, if you just want to react to that.
1:25:15.120 Yes, I think that the experiments, the twin studies and sibling studies and so on,
1:25:27.120 correlations between the behaviors of the genetically similar and the environmentally similar,
1:25:34.960 none of those experiments addresses the issue. Or, what I should say is, addresses the issue of,
1:25:46.480 to put it in computer terms: where is the code located that is responsible for those similarities
1:25:53.520 and differences, and where did that code come from? Given that, as you just mentioned,
1:26:01.600 the degree of genetic influence on behavior is itself determined by culture,
1:26:12.320 that alone means that you can't do an experiment to distinguish cultural from
1:26:25.280 genetic behaviors. Sorry, you've got to be very careful in talking about these things:
1:26:33.200 you can't do a behavioral
1:26:38.080 experiment to distinguish whether the differences between different people's
1:26:45.600 knowledge are genetic or cultural. And so, in regard to this issue, I would
1:26:58.720 just reject the relevance of all those experiments. I also think there is a very strong
1:27:10.080 argument, as you just said, that genetic behaviors, again, the differences between the
1:27:18.240 genetic behaviors of different humans, are relatively easy to override. I don't mean that one can
1:27:28.880 override them oneself just by waking up one morning and deciding to. On the contrary,
1:27:35.600 that might be very hard. But, for example, memes, either rational or anti-rational
1:27:47.120 memes, can, not just override, but replace genetic behaviors systematically,
1:27:57.360 because they have evolved the knowledge of how to do so. And there are cultures where
1:28:06.080 people are more or less careful about dying. And it's not to say that
1:28:18.880 someone in that culture, or someone in a different culture, could change that setting at will.
1:28:24.320 But on the other hand, I think it provides a very strong argument for saying that,
1:28:32.000 if that is a problem that one has, it is soluble. One can alter one's inborn tendencies
1:28:43.600 in the same way that one can alter any other idea that one has that affects one's behavior.
1:28:49.600 One can have a habit of writing with one's right hand, and then, if one's right hand becomes
1:28:59.200 paralyzed by some illness, one can learn to use the left hand. One can't do that overnight,
1:29:07.760 but one can do it. And one can do it arbitrarily well. And there are ways of
1:29:12.880 doing it faster or slower, and there are always ways of improving those ways, and so on.
1:29:19.840 Right. One can always form genetic explanations,
1:29:27.120 but I think that, in regard to behaviors that are changeable,
1:29:35.600 those explanations are dehumanizing and false.
1:29:46.640 Hi. Thank you, David. Thank you, Bruce and Sadia. Actually, tomorrow is my birthday,
1:29:53.440 so I guess this must be one of the most original birthday presents to get to ask you a question.
1:29:58.880 My question is the following: is our society open enough for us, at some point,
1:30:06.080 to collectively refute justificationism in favor of critical rationalism? And what do we have
1:30:12.560 to imagine as kind of acceleration effects on the growth of knowledge when that happens?
1:30:17.840 Well, happy birthday. And, you know, if we're to be rigorous,
1:30:29.120 doctrinaire Popperians, that's a joke, then we shouldn't ask, is society rational enough
1:30:40.000 to accept critical rationalism? We should ask, is society capable of making
1:30:47.600 progress? Because we don't know that critical rationalism is true. We don't know that what we
1:30:53.040 think of as critical rationalism really is critical rationalism; perhaps there's a better
1:30:58.880 view of it that is different from our view, and so on. So the question should be, is society
1:31:07.120 capable of making progress? And I think it obviously is, it is making enormous progress.
1:31:12.800 The things that worry us are when we notice that some things are going backwards.
1:31:20.560 It's natural and good that we should focus a bit on those,
1:31:31.200 rather than, you know, go on about how well things are going. We should be focused on
1:31:38.560 problems, and things going backwards in some respects is a problem, and it deserves having
1:31:45.360 creativity devoted to it. But overall, the big picture is that there's enormous progress being made
1:31:55.040 at a rate that's unprecedented in history. So, yes, I think there is such progress.
1:32:05.760 I think that society can, although it may not; you know, people on the whole
1:32:12.880 may make the wrong decisions, and everything may go wrong. But it is possible for things to go
1:32:20.800 right. And I think at present they still are going right on the whole. So I'm optimistic.
1:32:29.600 All right. Thank you. And then final question, Aaron.
1:32:32.320 Oh, wow. Thanks so much. I've read an interview where you described being
1:32:39.360 messy and untidy in your home, but being very rigorously organized on your laptop,
1:32:51.680 and I couldn't follow what the distinction was. Why is it orderly in one domain and not in the other?
1:33:02.640 I think I was going through a phase of experimenting with the Mac OS and noticing
1:33:11.920 how well sorted out and sophisticated the model was. Of course, it is nothing compared
1:33:21.200 with today's. And also, it's not just the Mac that has those things nowadays.
1:33:27.280 And I think nowadays I'm pretty sloppy in my management of
1:33:36.080 my computer as well. So I'm sloppy in all ways. And what's more, I think, if I can
1:33:43.360 make a personal self-criticism, I think I'm too sloppy in most ways. There's some kind of
1:33:50.560 irrationality there. But being very sloppy, compared with the norm, on a computer, or in one's mind,
1:34:00.320 or in one's home, or in one's office, and all those things, is useful for most people most of the
1:34:09.760 time, for the reason that I said in that interview long ago.
1:34:23.360 The reason is that imposing a structure is a theory, and
1:34:37.040 it includes inexplicit theories. And if one takes a view on that that's too rigid,
1:34:44.240 then one is putting a constraint on the possible new ways of thinking about that.
1:34:56.800 All right. Thank you. David Deutsch, thank you very much for joining us. I know I really
1:35:02.960 enjoyed this, and I can tell this has just been a fun chat for most of us. So thank you
1:35:07.520 for showing up for the Karl Popper meetup. Great. Thank you very much. Thank you, David.
1:35:15.120 Just wondering, by the way, did you have anything to do with writing the script for
1:35:19.760 Pickle Rick, for Rick and Morty, by any chance? No, I wish I had. Someday, not today, but someday I wouldn't
1:35:28.800 mind asking you this: what if Pickle Rick found himself on Earth, which suddenly transformed into
1:35:35.840 a planet made of cheese? Do you think he'd be able to survive the consistency of pizza?
1:35:44.080 That's for another time; just wanted to leave you with that. Yeah, maybe if you do this again next year,
1:35:48.080 you can invite the author of that episode, because, whoever the author is,
1:35:54.880 they got that amazingly right. It's like a manifesto for human
1:36:02.160 creativity. All right. Thank you, everybody.
1:36:11.440 Thank you. You're welcome.
1:36:24.160 I've seen people coming on TV and saying how they were inspired by Richard Dawkins,
1:36:32.080 and then they say, well, yes, evolution is survival of the fittest, and so on.
1:36:38.400 And they just haven't got it. And, you know, E.O. Wilson hasn't got it. I mean,
1:36:48.800 from our point of view; maybe from his point of view, we haven't got it.
1:36:54.800 Dawkins hasn't got it. So I don't know what the magic thing is that makes progress.
1:36:59.200 No, but if, if a lot of young people are interested in ideas, then there's going to be progress,
1:37:08.400 even if one doesn't notice it from one's own point of view.
1:37:13.040 And I agree with you, because one of the things I realize is it's almost like you have to
1:37:17.920 go into the psychology of it too. It isn't just enough for the ideas to be available.
1:37:23.200 If people are not willing... it seems like somehow people are either oblivious, or,
1:37:30.080 I don't know, maybe they're not interested; why are they drawn to certain things? Sometimes
1:37:35.120 I wonder if they could even just look at themselves, almost turn back on themselves,
1:37:40.880 and see why certain thoughts and ideas are coming. I don't know. I really do struggle with that too.
1:37:47.120 But despite having said that, I think that at least for those of us who are willing,
1:37:51.600 who are constantly struggling, it really does help to have those ideas. You know,
1:37:58.800 we might have gotten there eventually on our own, but most of us, I mean, we have limited life spans.
1:38:09.920 I think it takes a while, right? I mean, and there's so many ways to phrase things,
1:38:15.760 like even survival of the fittest, if you think of that as like survival of the replicator that
1:38:20.880 replicates the best, you can kind of see how it still fits, right? And so it's, I think that's
1:38:26.960 part of it is just that it's hard to get away from the memes that exist in a culture. If
1:38:33.920 evolution is about survival of the fittest, you can kind of see how, even if you understand
1:38:38.240 Dawkins, that's still true. So you still use that term, even though it's misleading.
1:38:42.720 Yeah, well, he always used it, right? But, I don't know, you know, you can't see into
1:38:48.720 people's minds, but I suspect that in many cases, when people say survival of the fittest,
1:38:55.200 they are imagining animals fighting it out. Yes. I think you're right. I think we have
1:39:02.320 this big mingling in our minds of different ideas, and we don't really differentiate them
1:39:07.600 that well. So I think you're right. But ideas also have power, and they illuminate people,
1:39:14.880 and, you know, there is progress, there really is. Yeah.
1:39:19.920 I agree. I think it's kind of interesting too that when you look into the theory of evolution too,
1:39:25.920 I mean, of course, you know, they would say that there isn't any directionality in evolution;
1:39:29.840 it's not like things are going towards more complexity. Well, first of all, there isn't,
1:39:34.400 you know, a definition of complexity that everybody agrees to. But it's kind of hard to turn away
1:39:40.960 and not recognize that there is something there, like we have seen organisms becoming more
1:39:47.840 complex. And it kind of goes hand in hand with the whole thing of recognizing how some people
1:39:52.560 somehow think that there is no progress in ideas. Yeah. It's some kind of exercise in denial. Sorry,
1:40:00.560 sorry, go ahead. Sorry, sorry. Some people would like to deny that there's progress for various
1:40:06.880 reasons: psychological, political, and so on. Once you deny that there's progress, you have a sort of
1:40:16.080 automatic take on a number of things that, if you don't take that view,
1:40:21.680 you would have to think about. And so it's kind of comforting. It's kind of
1:40:26.800 a comparative comfort, the pessimism; there's a certain comfort in pessimism. Interesting. I feel the same
1:40:35.600 thing in evolutionary biology too. I think sometimes some people have had such a reaction to the whole,
1:40:43.680 because so many religions have recognized the significance of humans, you know.
1:40:51.040 Like my background: I used to be Muslim, and we were always told that all the
1:40:57.200 angels bowed down to the humans, you know; so God made something, and then it took a turn against God,
1:41:03.200 saying, you know, why, when I have been worshipping you? So it seems like nowadays,
1:41:10.400 in reaction to religion, some ideas have been reached, which I think is
1:41:14.640 interesting. Yeah. Like Popper says, all science begins with mysticism. And I think
1:41:25.920 philosophy, you know, began with religion. And what "began with religion" means is that
1:41:37.120 religion was groping towards some truth, attained some truth and some falsehood, and usually tried to
1:41:43.520 suppress criticism. Yeah. So I think, you know, maybe the atheist movement should give a little
1:41:51.840 ground here and, and realize that, that doing better than religion is not synonymous with denying
1:42:01.760 everything that every religion says, because that's like starting from year zero.
1:42:08.160 Yeah, it almost kind of becomes the same sort of thing that you see in different religions,
1:42:13.520 where people, to keep themselves clean, sometimes feel like they have
1:42:18.400 to put somebody else down, because otherwise, how are they going to convince their kids to
1:42:22.800 stick to their religion and not think about something else? Yes, you're still allowed to deny
1:42:29.520 some aspects or many aspects of the opposing view. But if you try to deny all aspects of the opposing
1:42:37.200 view, you will definitely go wrong. Interesting. Reminds me of the Brexit debate. I was
1:42:46.000 rewatching the video of Dominic Cummings explaining why Leave won the vote. And he said, everyone in
1:42:52.320 this room, I guess predominantly leftists, he was saying, vastly overvalues the rightness of
1:43:00.720 being on the opposite side of the racists, like Nigel Farage and all these guys. So
1:43:08.240 being on the opposite side of someone who is wrong is not the right way to think about it.
1:43:12.960 Oh, yeah. By the way, David, I have a somewhat random question for you as long as you're here.
1:43:20.880 Did you have any expectations about what would happen when you first published
1:43:25.200 The Beginning of Infinity? Well, I was hoping that people would buy it. Yeah, well,
1:43:39.440 one thing I thought at the time: with The Beginning of Infinity, I ended up finishing it under a
1:43:47.360 deadline, and it wasn't as polished as I was hoping it would be. And I had to leave out an entire
1:43:54.880 chapter that I had planned. In all, you know, it took almost ten years to write,
1:44:02.400 as did The Fabric of Reality. But with The Fabric of Reality, I finished it in my own time,
1:44:08.960 and The Beginning of Infinity was a bit rushed. And so I was thinking that it wasn't as good.
1:44:17.360 And although many people criticized it in many ways, few people said it wasn't as good. So,
1:44:30.320 you know, go figure. It actually seems that The Beginning of Infinity is the more popular of the two
1:44:36.560 books, from what I've seen. Personally, I'm a Fabric of Reality fan. I actually
1:44:42.000 read The Fabric of Reality two years before The Beginning of Infinity came out, so I was
1:44:46.560 anxious for it when it came out. I'm curious, what was the chapter that you didn't get to do?
1:44:54.080 I don't know what it would have been called, but it was about scientism and related issues.
1:45:00.560 A few paragraphs of that chapter got into the chapter on choices.
1:45:08.080 You know, working out how many people go into the museum and come out, and then
1:45:14.320 you form the theory that people are being spontaneously created and destroyed. And that, that idea,
1:45:20.160 that was from the other chapter, but I had been planning a long chapter on scientism.
1:45:24.800 I now think that scientism deserves a whole book and I am not the person to write it.
1:45:31.360 So maybe that never would have been written.
1:45:35.760 It's interesting you say that, because of my first experience when I broke away...
1:45:39.600 I don't want to say broke away from religion; for me, it was a very natural progression, when
1:45:46.080 I recognized one day that I was an atheist. But I felt almost isolated, a little bit
1:45:53.440 of an isolation in my own community, because I was just so weird in that way. But I started
1:46:00.160 looking for other places, and there were a lot of atheist groups and freethinkers and, you know,
1:46:05.600 they called themselves those sorts of names. And when I joined them, I kind of almost felt like I was
1:46:11.360 going to some sort of a religious place. Like, I really wanted to be with people where I could
1:46:17.760 just literally talk, you know, without being told, oh, you're not allowed to ask this question.
1:46:24.000 But I didn't find that. And that kind of made me realize, when I heard you talk about scientism,
1:46:30.960 or where I read it, I recognized it right away, that unfortunately, either you have that, or
1:46:40.000 the other end, where you're just not allowed to ask certain questions.
1:46:48.960 I just want to say, David, it's fine meeting you. I'm one of the new people. I'm currently visiting
1:46:54.080 Austin, Texas right now, so I know you've got a little bit of history there.
1:46:57.520 And it's funny to see how, or maybe funny is the wrong word, but it's very interesting to note
1:47:06.000 how the knowledge-based view of the world changes the whole shape of certain kinds of discussions
1:47:12.880 that otherwise would maybe be people-focused, like classes of people, and scientists up here, and
1:47:18.880 all these sorts of things. Just asking questions about where knowledge is created, where conflicts
1:47:26.320 are happening, where disagreements are happening, simplifies so many things, to the point where
1:47:32.000 people will ask, like, my favorite recent thing is that somebody will ask me for relationship
1:47:37.200 advice or something. And I'll give them the same caveat that you always do: I don't know that
1:47:41.440 much about relationships. But what's the problem? And then you can kind of just ask a few questions
1:47:47.280 and see, okay, well, you know, I can think a little bit about disagreements. So, the kinds of questions
1:47:52.560 you've asked. And I'm constantly surprised that there's always something to be said;
1:47:58.640 it may not be incredibly relevant. But what my friend told me, and I didn't really expect this
1:48:04.160 would happen, is he said, whenever I talk to Carlos, you know, I always tell him,
1:48:09.200 you're effectively talking to David, indirectly. And, yeah.
1:48:15.920 He says, the problem is unchanged, and yet I feel so much better. And the analogy that I
1:48:30.080 gave him was that he was like someone who had to build a spaceship, and he was currently in
1:48:38.240 the desert. And he had just been transported to a beautiful high-tech facility with all sorts of
1:48:45.120 tools around. He hasn't built the spaceship yet. But suddenly, the situation surrounding the
1:48:51.120 problem is now totally different. Whereas before it might have been "this person doesn't like me,"
1:48:55.920 you know, it becomes just about what knowledge is lacking: you know, what
1:49:01.200 discussion do I need to have? How can I reach this person? Why does this person disagree with me? And that
1:49:06.240 would be a problem. And instead of whom I might try to lie to or otherwise try to get something from, you say,
1:49:11.440 oh, well, how can I just make the problem an objective thing we can both try to
1:49:14.960 solve, and double our efforts and create some possibilities here? And so he just seems to
1:49:22.000 have that view that things become so much easier once you have this view of knowledge, even if
1:49:27.600 you haven't directly solved the problem. Maybe you're describing the transition to optimism.
1:49:34.720 If you think about what's going wrong in terms of a lack of knowledge, then in a certain, although
1:49:41.040 you still don't know what that knowledge is, in a certain sense, you know that what's standing
1:49:48.080 between you and the good outcome is a lack of knowledge and you want, you need to create knowledge
1:49:54.640 and that puts it, it already puts an optimistic spin on things even before you solve anything.
1:50:01.360 Whereas if you think of things in terms of people, then everything becomes "whom,"
1:50:08.400 you know, the famous thing that Lenin is supposed to have said, which is a very accurate description
1:50:17.280 of a whole class of worldviews: who, whom? And you've got to get rid of the "whom." If you get rid of it
1:50:28.000 in politics, that's like getting rid of "who should rule," and so on. And presumably, from what
1:50:34.160 you've just said, in relationship things, you get away from "whom" and you turn towards what actually