00:00:03.120 Yesterday I was asked if I'd like to help moderate a conversation.
00:00:14.840 I've spent a lot of my career doing various kinds
00:00:19.400 I hear you're not entirely a fan of all of that.
00:00:24.560 which kinds of forecasting you do or don't like
00:00:30.360 So for example, solar power costs have been falling steadily
00:00:44.160 that power law of cost as a function of the quantity produced.
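A minimal sketch of that kind of extrapolation, fitting a power law of cost against cumulative production ("Wright's law" style); the numbers below are invented placeholders, not actual solar-cost data.

```python
import numpy as np

# Hypothetical learning-curve fit: cost = a * Q^(-b), a power law of cost as a
# function of cumulative quantity produced. The data points are invented.
cumulative_gw = np.array([1.0, 10.0, 100.0, 1_000.0, 10_000.0])
cost_per_watt = np.array([100.0, 40.0, 16.0, 6.4, 2.6])

# Fit log(cost) = log(a) - b * log(Q) by ordinary least squares.
slope, intercept = np.polyfit(np.log(cumulative_gw), np.log(cost_per_watt), 1)
a, b = np.exp(intercept), -slope
print(f"fitted cost ~ {a:.1f} * Q^(-{b:.2f})")

# The forecasting move being discussed: extrapolate the fitted law to ten times
# more cumulative production than has ever been observed.
print(f"projected cost at 100,000 GW: {a * 100_000 ** (-b):.2f}")
```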
00:00:48.080 I presume you're okay with that kind of forecasting.
00:00:53.160 because you seem to be, well, when you say half of all,
00:01:05.200 You're kind of implicitly assuming that's a frequency,
00:01:09.280 And you're kind of assuming that that's equal to a probability.
00:01:12.480 Namely, it's going to be, there's a probability of one half
00:01:28.960 And that's the kind of inference I think isn't valid.
00:01:51.600 and why they express forecasts in terms of probability
00:02:12.280 the maximum distance that a human can travel from Earth
00:02:30.800 and then we'll follow a completely different law again
00:02:39.880 you are implicitly referring not only to the statistics
00:02:57.320 is entirely derived from that explanatory theory
00:03:18.520 And it may stop before the laws of physics say so
00:03:40.680 as the thing we're trying to get in their head.
00:04:05.960 well, look, if the standard here is just better than chance,
00:04:13.160 like find the kind of trends that technologies follow
00:04:19.880 surely we'd want to do something like projecting that forward.
00:04:25.840 but isn't it a reasonable thing to do to take our best fit
00:05:00.840 because we will always be able to look back and say,
00:05:02.960 ah, this thing happened and it wasn't in our predictions,
00:05:18.560 is one that your best explanation didn't foretell.
00:05:30.520 But then everything hangs on your current explanation.
00:05:40.160 and it does look similar to many other kinds of technologies.
00:05:53.920 And then we are looking at sort of rates
00:06:19.120 but I'm assuming that that is a good assumption.
00:06:33.320 you mean not only that the graphs look the same,
00:06:37.960 you mean that you expect solar power technology
00:06:57.560 And it's not going to be something that you haven't foreseen,
00:07:04.600 if you were predicting the future of power stations in 1900,
00:07:09.160 you wouldn't be expecting nuclear power stations.
00:07:12.840 So your forecast based on the best existing knowledge
00:07:22.800 It would be out by having the wrong explanation,
00:07:28.040 which would make the real things behave in a different way,
00:07:37.080 And then, as I've written somewhere,
00:07:43.360 I can't remember, but after radioactivity
00:07:46.520 was discovered, you could have then made the prediction
00:07:53.320 And then you wouldn't have been able to predict
00:07:55.480 the environmental movement, which will sabotage that.
00:07:59.320 So you would have been completely wrong again about,
00:08:06.640 can you ever predict things perfectly with certainty?
00:08:15.760 compared to some absolute chance, that's wrong too.
00:08:21.360 what can you predict and how well, not can you predict the future?
00:08:25.080 And then we want to go through particular topics
00:08:27.200 and just point out the various degrees to which
00:08:29.680 we might be more or less confident and accurate
00:08:47.400 I've talked in terms of whether someone's going to win
00:08:56.120 And that makes it clear that part of the answer
00:09:02.120 I can't tell whether someone's going to tomorrow
00:09:06.960 discover a piece of mathematics that wins the Fields Medal
00:09:12.800 So if we think about predicting a longer timescale,
00:09:15.840 of course, all else equal, that should be harder.
00:09:18.400 But then if we go to the most solid things we know,
00:09:23.680 it would seem like that's where we have the best chance
00:09:28.360 And so since I tend to make longer term predictions
00:09:33.720 then it would seem I should focus on these most solid predictions.
00:09:45.560 I might say we're limited to our future light cone.
00:09:49.240 And that has consequences, substantial consequences
00:09:55.920 But surely it's reasonable to tentatively guess
00:10:06.200 but we're pretty sure of the speed of light limits.
00:10:13.760 on predicting where aliens are in space and time.
00:10:21.640 then basically any alien anywhere in the universe
00:10:24.240 could be here right now because there would be no limits
00:10:31.280 now we would have to predict there's no aliens anywhere
00:10:39.840 only the aliens in our backward light cone could be here.
00:10:42.800 And therefore, then we can make many more concrete predictions
00:10:47.200 about, you know, the nature of the advanced development
00:10:52.120 and that's something that I have done recently.
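A toy version of the light-cone constraint just described, with invented example numbers: a civilization a distance d (in light-years) away that originated t years ago can only have reached us if t is at least d, since signals and ships are limited to light speed.

```python
# Toy check of which hypothetical civilizations could be here by now.
# Each entry is (label, distance in light-years, age of civilization in years);
# all numbers are made up for illustration.
candidates = [
    ("A", 1.0e6, 5.0e6),
    ("B", 4.0e9, 1.0e9),
]
for label, distance_ly, age_years in candidates:
    reachable = age_years >= distance_ly  # light travels 1 ly per year
    print(f"civilization {label}: could have reached us: {reachable}")
```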
00:10:56.240 a reasonable tentative thing to do is to assume
00:11:00.400 Another basic implication is that, you know,
00:11:03.520 it seems like exponential growth of things we care about
00:11:07.440 is hard to continue in a light-cone-limited world
00:11:13.480 at a cubic rate, and exponential growth requires much faster growth.
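The rough arithmetic behind that claim, as a sketch (the symbols rho, for an average resource density, and g, for a growth rate, are introduced here only for illustration):

```latex
\text{resources reachable by time } t \;\lesssim\; \tfrac{4}{3}\pi (ct)^{3}\rho,
\qquad
\text{sustained exponential growth requires} \;\sim e^{gt}.
```

Since e^{gt} eventually exceeds any polynomial in t, growth in anything tied to physical resources must eventually fall below exponential.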
00:11:18.960 well, it looks like exponential growth can't go on forever
00:11:35.480 because it is a way of exploring the consequences,
00:11:40.480 the cast iron consequences of our existing best knowledge.
00:11:46.240 And there, I think it makes very little difference
00:11:53.520 we're doing this because we're relatively very sure
00:12:19.680 But for some purposes, like in this case, it doesn't matter.
00:12:51.160 of stars basically or galaxies in the longer run
00:13:01.520 Like Feynman said, there's plenty of room at the bottom.
00:13:04.600 And there's not an infinite amount of room at the bottom.
00:13:09.720 and the fact that negentropy seems to be finite
00:13:16.840 And that also sets the limit on our future growth
00:13:19.800 in terms of negentropy because negentropy is bounded
00:13:29.320 Now you're talking about things that are not as secure
00:13:40.560 Is something that crazy theories of quantum gravity
00:13:50.520 But they don't predict infinite amounts of negentropy
00:13:54.040 They predict infinite amounts of potential structure
00:13:57.440 That is, if you could add enough energy to a system
00:14:00.000 then you could create lots of small scale structure.
00:14:06.040 standard theories of the finiteness of negentropy,
00:14:08.720 say within the volume or within a black hole, et cetera.
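For reference, the standard bounds being alluded to: the Bekenstein bound on the entropy of a system of energy E that fits inside radius R, and the Bekenstein-Hawking entropy of a black hole with horizon area A.

```latex
S \;\le\; \frac{2\pi k_{B} R E}{\hbar c},
\qquad
S_{\mathrm{BH}} \;=\; \frac{k_{B} c^{3} A}{4 G \hbar}.
```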
00:14:15.880 that any one analysis could have more ways it could be wrong.
00:14:20.880 We could sort of count the ways things could be wrong.
00:14:23.480 But we'd kind of like a way to weigh all the different ways
00:14:26.320 something could be wrong in some sort of overall measure.
00:14:28.360 And that's what people usually use probability for.
00:14:37.280 how much it depends on things we might be wrong about
00:14:40.760 and make a judgment about focusing on the things
00:14:45.080 And it may not affect the bottom line that much in many cases.
00:14:51.600 But I think it is simply wrong to use probability
00:14:57.480 And it would be better to make the assumptions explicit
00:15:02.200 rather than say, well, it's not going to be,
00:15:07.320 we're not going to supersede the second law of thermodynamics
00:15:12.520 By the way, thermodynamics, at least in one version of it,
00:15:29.320 has been overturned by quantum information theory.
00:15:33.360 So for example, supplanted, perhaps, is a better word.
00:15:36.640 I mean, still most of the results of information theory
00:16:00.280 And that these allow exponentially more information
00:16:09.160 and not other means that we have to change our conception
00:16:16.840 may or may not feed into the practical conclusions
00:16:21.760 that we want this information theory to inform.
00:16:25.960 But the amount of negative entropy in this glass
00:16:28.800 is not substantially changed by quantum information theory.
00:16:43.920 even if we switch to quantum information theory.
00:16:46.160 It happens to be so for quantum information theory.
00:16:51.000 But quantum information theory allows information channels
00:16:55.680 to transmit twice as much information as classical channels.
00:17:02.960 if we'd been having this conversation 40 years ago
00:17:05.360 or whatever it is, whenever channel capacity doubling
00:17:12.040 we know for sure that if an object can be in two
00:17:17.880 distinguishable states and used for information transfer,
00:17:21.240 then the rate at which it transfers information
00:17:24.520 is also one bit per unit time or whatever it is.
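The capacity doubling mentioned a moment ago is, presumably, superdense coding; here is a small numerical check (states written as plain vectors) that acting on just one qubit of a shared Bell pair reaches four mutually orthogonal states, so a single transmitted qubit can carry two classical bits when entanglement is pre-shared.

```python
import numpy as np

# Superdense coding sketch. Alice and Bob share |Phi+> = (|00> + |11>) / sqrt(2).
# Alice applies I, X, Z, or XZ to her qubit alone, then sends it; the four
# resulting Bell states are orthogonal, so Bob's Bell measurement recovers 2 bits.
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

phi_plus = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)
encodings = {"00": I, "01": X, "10": Z, "11": X @ Z}
states = {bits: np.kron(U, I) @ phi_plus for bits, U in encodings.items()}

for a in states:
    for b in states:
        overlap = abs(np.vdot(states[a], states[b]))
        assert np.isclose(overlap, 1.0 if a == b else 0.0)
print("four orthogonal states from one shared pair: 2 classical bits per qubit sent")
```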
00:17:42.080 you know, you might have some qualifications for that, right?
00:17:47.960 work out a tentative design for our thorium reactor
00:17:52.320 And I had to make some, you know, suggested claims to you
00:17:55.280 that this particular design could have this rate
00:17:57.520 and this density and then this heat conductivity, et cetera.
00:18:01.120 And then for any tentative design I offered you,
00:18:03.480 you could play the same game you're playing now
00:18:05.200 and say, well, yeah, but you're using standard information theory
00:18:07.800 and what about potential future information theories
00:18:13.920 And I might say, well, you're not helping very much
00:18:17.400 to point out all the logical ways it could be wrong.
00:18:19.920 Don't you want to like make some tentative assumptions
00:18:26.040 And that's the way I want to ask about the future.
00:18:27.760 I want to say don't we want to think about the future?
00:18:30.680 Well, shouldn't we like take our best understanding
00:18:43.320 And so you have to reject on principle all objections
00:18:52.320 to a project that are of the form we could be wrong.
00:19:04.560 and I wouldn't be making those objections to the thorium reactor
00:19:08.000 and I don't make those objections to your analysis
00:19:12.120 of future aliens occupying galactic clusters and so on.
00:19:24.080 about some other ways we could guess about the future.
00:19:26.480 And then we could say specifically like what assumptions
00:19:35.960 So for example, we might say future stock prices
00:19:39.440 will follow a random walk just like current stock prices do
00:19:42.320 because our theory of why stock prices follow a random walk
00:19:45.720 is very solid theory about information basically.
00:19:48.440 It's an information based theory about the random walk
00:19:50.720 and you say as long as the information theory holds,
00:19:53.560 then future prices should follow a random walk.
00:19:55.720 So that sounds like a reasonable long term claim.
00:19:59.200 I go out a million years and say in a million years
00:20:02.040 in the future stock prices will follow a random walk
00:20:04.160 because unless information theory is dramatically changed,
00:20:11.760 but I would be explicit about the explanatory assumption there.
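A toy version of the random-walk claim with simulated prices, not real data: if price changes are driven only by new information, successive returns carry essentially no information about each other.

```python
import numpy as np

# Simulate a log-price random walk and check that past returns do not predict
# future returns (lag-1 autocorrelation close to zero). Parameters are arbitrary.
rng = np.random.default_rng(0)
log_price = np.cumsum(rng.normal(0.0, 0.01, size=100_000))
returns = np.diff(log_price)

lag1 = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(f"lag-1 autocorrelation of returns: {lag1:.4f}")  # ~0 for a random walk
```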
00:20:19.200 To me, the explanatory assumption there is basically
00:20:39.040 And therefore I would say we can expect that to be true
00:20:49.240 for showing that the epistemology isn't what it is.
00:20:54.240 it's perfectly reasonable to take what we think we know now
00:20:57.040 and apply it even if a Nobel Prize might be won someday
00:21:15.240 And that's based on something called the Kelly Rule.
00:21:17.600 And there's a theorem basically that the maximum investment
00:21:23.440 to the future final value of different investment categories.
00:21:32.760 among investors, such that the one who invests more successfully
00:21:42.040 that there might be investors who have certain objectives,
00:21:48.560 about how success in achieving things can come about
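A small sketch of the Kelly rule being referenced, for the simplest case of a repeated binary bet with win probability p and net odds b; the specific numbers are illustrative assumptions, and the general theorem about proportional allocation across investment categories is not reproduced here.

```python
import math

# Kelly fraction for a binary bet: stake f* = (b*p - (1-p)) / b of current wealth.
def kelly_fraction(p: float, b: float) -> float:
    return (b * p - (1.0 - p)) / b

# Expected log growth per bet when staking a fraction f each time.
def log_growth(f: float, p: float, b: float) -> float:
    return p * math.log(1.0 + f * b) + (1.0 - p) * math.log(1.0 - f)

p, b = 0.55, 1.0                 # assumed example numbers
f_star = kelly_fraction(p, b)    # 0.10 here
for f in (0.5 * f_star, f_star, 2.0 * f_star):
    print(f"stake {f:.2f}: expected log growth {log_growth(f, p, b):+.5f} per bet")
# Full Kelly maximizes expected log growth, which is the sense in which
# Kelly-style investors come to dominate long-run wealth.
```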
00:22:03.960 I think in many cases the bottom line is the same,
00:22:08.080 whether we are explicit about the explanatory basis
00:22:14.800 But in some cases, you know, you can wander off,
00:22:26.360 from the assumption that the amount of information
00:22:31.280 carrying capacity at small scales is not infinite.
00:22:57.880 So I mean, as you know, say, Bekenstein's bound
00:23:19.200 of the total amount of entropy in physical systems
00:23:21.440 that we actually deal with based on the physical properties
00:23:36.840 finite entropy systems, you know, the glass of water here
00:23:44.560 because actually Bekenstein had several bounds.
00:23:47.560 And I think one of them is solid, the one you mentioned,
00:24:00.720 which purported to limit the maximum speed of a computer.
00:24:16.680 that it's equal to the surface area and that kind of thing,
00:24:20.480 And I have actually co-authored a paper recently
00:24:42.960 between a Bekenstein bound and the speed of light.
00:24:46.920 And that rests ultimately on assumptions
00:25:09.600 but which mostly puts a mental block on analyzing
00:25:18.880 And so you and I can at least share this approval of,
00:25:26.280 And so again, I'm interested in like going with you there
00:25:30.640 So I just gave you the sort of the evolutionary analysis
00:25:43.040 So today, people discount the future in the market
00:25:51.040 because the market value of distant future consequences
00:25:56.920 we shouldn't bother to do much to prevent future harm.
00:26:03.320 And we have an evolutionary analysis that gives a plausible
00:26:08.480 explanation for why we evolved such a high discount rate,
00:26:11.680 which is to say that the analysis suggests that
00:26:15.680 we often have a choice between investing in ourselves
00:26:29.480 that we then have a factor of two per generation discount rate,
00:26:32.480 which does roughly match the typical market discount rates.
00:26:37.720 And it says, well, we're failing sort of to collectively
00:26:42.760 about the future generation half as much as ourselves.
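The rough arithmetic behind "a factor of two per generation", assuming a generation length of about 30 years (my assumption, not a number from the conversation):

```python
# A factor-of-two discount per generation, expressed as an annual rate.
gen_years = 30                          # assumed generation length
annual_rate = 2 ** (1 / gen_years) - 1
print(f"implied annual discount rate: {annual_rate:.2%}")  # roughly 2.3% per year
```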
00:26:52.560 And so these theories of sort of self reproducing investment
00:27:02.960 something will arise that doesn't have this long-term discount
00:27:15.960 Even though today, we don't care much about the future
00:27:21.480 to discount the future, a factor of two per generation,
00:27:24.920 future more asexually reproducing organizations
00:27:33.360 So eventually, people will care about the future,
00:27:37.160 That's a prediction I will make about the future.
00:27:42.680 So I see, you can point out the potential errors
00:27:45.360 and the potential ways the analysis could go wrong.
00:27:48.160 But still, it's worth having projections like that,
00:27:51.560 at least to tentatively put them up and say, does that make sense?
00:28:02.240 we care about our future selves or about our descendants.
00:28:30.160 I mean, there's also the, I don't know if you've
00:28:33.600 studied the case as well, there's also, sooner or later,
00:28:43.120 Well, I think this can't be deduced from biology.
00:28:54.800 So evolution hasn't encoded preferences in us
00:28:58.400 for what happens when we get immortal, because that never happened.
00:29:01.400 So at the best, it would project the preferences
00:29:11.440 are discounting our own personal immortality with.
00:29:16.160 to us to prevent ourselves from dying in a century,
00:29:19.040 we aren't actually willing to put very many resources
00:29:21.840 into it because of this factor of two per generation.
00:29:30.640 I might say, well, when there are immortal creatures
00:29:35.120 then what kind of preferences will be selected for?
00:29:38.560 That seems to me the way I want to do the analysis.
00:29:42.280 somewhat skeptical of analyzing preferences for time using
00:29:49.400 Well, you're really using biological evolution,
00:29:53.120 because it could be that this number, the discount rate,
00:30:10.800 Then you will say, ah, but then there's group selection,
00:30:17.520 will not have that meme or religion or whatever it will be.
00:30:34.040 And as you know, group selection is an extremely problematic
00:30:48.880 to say that group selection basically never happens,
00:30:54.600 And when you then take it to memes and societies
00:31:00.200 and civilizations, it can go wrong in all sorts of other ways
00:31:06.280 And there just aren't enough subgroups of society,
00:31:11.520 because there will be subgroups within those subgroups who
00:31:17.280 Our whole society at the moment, the entire world,
00:31:20.680 is dominated by ideas which say that we shouldn't
00:31:26.760 make progress on principle, either in particular cases
00:31:34.960 The idea that we shouldn't make progress as fast as possible
00:31:38.440 such that our point of view will become dominant in the long run
00:31:51.520 that view will be superseded by some other view
00:32:02.000 So David, I think you disagree with Robin on just the nature
00:32:06.640 So for example, Robin has also proposed that we humans
00:32:10.680 have explicit goals in the future, we will evolve such
00:32:13.400 that we may have an explicit goal to have more descendants.
00:32:17.560 like a freely floating propensity, which our future generations
00:32:23.440 And the idea of this discount rate, my model of view,
00:32:25.400 is that you see thoughts as highly iterated and things
00:32:38.680 is to think about what are our best robust models
00:32:41.520 of evolution that includes different potential units
00:32:44.920 like groups and includes culture as well as physical genes.
00:32:48.040 That is, if we thought, say, a simple model of physical gene
00:32:56.560 and that the conclusions I was talking about were
00:32:58.600 dependent on that particular set of assumptions,
00:33:00.680 and we might say, well, that's just the wrong set of assumptions.
00:33:07.520 are relatively robust to the different size units of selection
00:33:11.560 and to culture being part of the co-evolutionary package.
00:33:15.240 And so I'm certainly happy to grant it's an interesting thing
00:33:18.360 to think about, that at the moment our genetically induced
00:33:29.280 And something has gone very wrong in the past to induce that.
00:33:31.960 And I've thought a lot about what that could be.
00:33:34.200 But nevertheless, when I make the long-term predictions,
00:33:36.920 I set aside current behavior and just ask what in theory
00:33:45.520 And most of the analysis I know is robust to which kind
00:33:49.840 It doesn't matter that much whether behavior is encoded in textbooks
00:33:54.280 that people teach or in genes that influence an individual
00:34:06.040 Are you thinking things will equilibrate rather than staying
00:34:08.320 out of equilibrium, what culture we're going away from?
00:34:22.960 And in the future, there'll be more of the packages
00:34:30.080 were infinitely many cultures and civilizations and so on.
00:34:41.480 why a particular type of culture might take over
00:34:47.920 you could make a robust evolutionary theory of it.
00:34:51.360 But I don't see how it can be robust in predicting
00:35:00.400 Like I said, in 1990, they could have predicted
00:35:07.320 And all the ideas that have informed the use of those things.
00:35:13.760 Now, yes, if there were a million Earths, well, actually,
00:35:19.680 Even if there were a million Earths or a trillion or whatever,
00:35:35.600 says that certain types of mistakes will be made.
00:35:39.720 Some people say all cultures destroy themselves
00:35:54.640 that underlies the assumption that evolution theory alone
00:36:04.680 So as you know, our standard theory of thermodynamics
00:36:11.280 The fluctuation versions predict how fluctuations
00:36:18.000 And of course, we have similar very alternatives
00:36:32.880 that Earth is plenty big and the future is plenty long
00:37:00.760 And the error correcting processes can go wrong.
00:37:11.720 an overarching reason why things will come to equilibrium.
00:37:16.960 In which case, you know, if you'd been around four
00:37:24.200 that humans will never exist and that nothing from Earth
00:37:35.000 are having higher fertility than nonreligious people.
00:37:43.640 Now, at least people who descended from religious people,
00:37:46.480 now you might ask, well, will the children of religious people
00:37:52.080 that then means that religion doesn't actually increase.
00:37:54.760 But you know, we can certainly expect, a few generations from now,
00:37:58.000 More people will have descended from religious people,
00:38:01.280 than will have descended from nonreligious people
00:38:06.720 some we might discover a new thing that suddenly
00:38:19.120 that, again, our expectation is that, on average,
00:38:26.920 And that explanatory assumption is a very kind of
00:38:56.680 doesn't have any rivals, well, any viable rivals.
00:39:20.800 and it might, there might be an asymptotic form of it.
00:39:38.040 That's, you know, you can almost define a Turing-complete thing
00:39:44.160 as a thing that doesn't have an asymptotic state
00:39:53.800 to make biology type assumptions about human growth of knowledge
00:39:59.240 just as it's a mistake to make chemistry type assumptions
00:40:04.000 So as an analogy, perhaps on a faster time scale,
00:40:09.960 we have somewhat of an evolutionary theory of growth
00:40:13.080 of the economy in the sense that we say that there are many firms
00:40:16.720 in an industry and some of them are more productive than others
00:40:19.520 and the more productive ones consistently grow relative
00:40:24.840 that each industry will be more productive in the future
00:40:33.600 that industries will get more productive in the future
00:40:35.400 because that's just making arbitrary assumptions
00:40:37.520 about correlations between culture and productivity of firms
00:40:45.720 basically by the selection of the more productive firms winning.
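A toy selection-among-firms model of that mechanism; the distribution of productivities and the link from productivity to growth are assumptions made purely for illustration.

```python
import numpy as np

# Firms differ in productivity; more productive firms gain market share, so the
# share-weighted average productivity of the industry drifts upward even if no
# individual firm improves. All parameters are invented.
rng = np.random.default_rng(3)
productivity = rng.lognormal(mean=0.0, sigma=0.3, size=50)
shares = np.full(50, 1.0 / 50)

for year in range(30):
    shares *= productivity ** 0.1       # assumed productivity-to-growth link
    shares /= shares.sum()

print(f"share-weighted productivity: {shares @ productivity:.3f} "
      f"(unweighted mean: {productivity.mean():.3f})")
```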
00:40:55.120 And you know, if I want to plan my pension or whatever,
00:40:58.520 then I would assume that there will be economic growth
00:41:28.360 No asteroid destroys everything, yeah, of course.
00:41:33.400 destroys everything, no crazy economic theory adopted
00:41:41.560 you're going to make those assumptions, aren't you?
00:41:51.040 but you'll still mostly bet that that's the most
00:42:01.920 If you really think everything's going to hell,
00:42:05.840 Yeah, sure, you can find some people willing to sell it to you.
00:42:11.640 that they are not being explicit about their assumptions.
00:42:14.760 Again, and I would rather say nothing is secure.
00:42:21.040 Therefore, I'm going to, I'm going to make my investments
00:42:24.680 on the basis that if this investment goes wrong,
00:42:32.320 For example, it may be that democracy has broken down
00:42:37.320 or that there's a world war or something like that.
00:42:43.760 about my investment, you know, I should have put all my money
00:42:47.480 into rubidium because that's the basis of the future weapons.
00:42:58.000 one of the biggest contrary assumptions or scenarios
00:43:05.080 a strong world government that strongly regulates investments
00:43:10.600 and then prevents the sort of competitive environment
00:43:18.800 That is my best judgment of our biggest long-term risk
00:43:22.160 in terms of, you know, the thing we should focus attention on
00:43:24.520 is this creation of a strong civilization-wide government,
00:43:29.000 say solar system-wide government that is going to be
00:43:31.800 wary of competition and wary of allowing independent choices
00:43:36.600 and probably wary of allowing interstellar colonization.
00:43:45.720 So I'm very into sort of a baseline set of assumptions
00:43:49.840 and the conclusions and then asking what's the most likely
00:43:52.360 or the biggest thing that could go wrong there.
00:43:54.160 And that would be my guess for the biggest thing
00:43:58.200 It seems more useful to have a concrete scenario like that
00:44:02.800 than just the general idea that, hey, who knows,
00:44:11.760 I mean, that is explaining how things could go wrong.
00:44:16.080 There's also the fact that though that things could go wrong
00:44:20.360 in a way that our existing explanations don't predict.
00:44:24.440 By the way, you haven't yet spoken of a type of prediction
00:44:32.640 Well, all the specific examples you've given are,
00:44:37.520 I would just, if you like, it's a matter of terminology
00:44:44.160 It's a matter of epistemology and fundamental theory.
00:44:47.640 But the bottom line is you haven't yet given an example
00:44:51.320 where I would disagree with you that something is worth
00:45:01.760 going to be conditional on our civilization
00:45:08.680 or our species not surviving for the next century,
00:45:14.280 I think it is overwhelmingly likely that the reason
00:45:29.760 You know, I'm into generating new scenarios
00:45:33.960 for things to go wrong that other people haven't
00:45:43.040 of the fact that what we really should be doing
00:45:45.440 is creating more knowledge, more general purpose knowledge.
00:45:56.440 So one of them is the one you've just mentioned,
00:46:02.240 or working on things that don't even have names yet.
00:46:09.080 I agree we should have some broad just exploration
00:46:11.520 of all possible areas of knowledge, which isn't terribly
00:46:17.640 on particular things that seem promising for particular reasons.
00:46:20.560 So for example, I have this book, The Age of Em, as you may know,
00:46:26.520 which some people might find unlikely and just tries
00:46:29.080 to work through in great detail all the consequences of that.
00:46:32.520 And my justification would be, even if you think it only
00:46:35.720 has a 1% chance, if it's worth having 100 books on the future,
00:46:42.920 we could lay out a picture of a wide range of options, right?
00:46:46.480 And so that seems to me that we should just take particular scenarios
00:46:51.320 that we can think are not crazy and work out their consequences
00:46:59.560 you've caught me in an illegitimate use of probability
00:47:05.400 Given that the only assumption that the civilization
00:47:12.960 or species will be destroyed, then it's overwhelmingly likely
00:47:19.640 And this just shows how deeply this mistaken notion
00:47:30.800 has permeated our culture, even though I hate it,
00:47:37.240 So as you know, there are betting markets out there.
00:47:43.840 And in fact, there are standard theorems in finance
00:47:52.920 sort of conditional probabilities or combinatorial probabilities.
00:47:56.320 We can elaborate pretty large complicated combinatorial sets.
00:47:59.440 And we have a lot of literature that at least says these
00:48:04.960 That is, when it says 20%, 20% of the time that happens
00:48:08.360 and that they are relatively accurate in the sense
00:48:10.440 that compared to other ways that make these forecasts,
00:48:19.600 to have these systems that produce calibrated accurate forecasts
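A minimal sketch of what "calibrated" means operationally, using made-up forecasts and outcomes: bin the stated probabilities and compare each bin's average forecast with the observed frequency of the event.

```python
import numpy as np

def calibration_table(probs: np.ndarray, outcomes: np.ndarray, n_bins: int = 10) -> None:
    """Print mean forecast vs. observed frequency per probability bin."""
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            print(f"forecast ~{probs[mask].mean():.2f} -> "
                  f"observed {outcomes[mask].mean():.2f} (n={mask.sum()})")

# Toy data from a perfectly calibrated forecaster (purely synthetic).
rng = np.random.default_rng(2)
probs = rng.random(5000)
outcomes = (rng.random(5000) < probs).astype(float)
calibration_table(probs, outcomes)
```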
00:48:24.760 You don't approve of them, but you probably still
00:48:27.960 I mean, when you get a weather forecast that says 70% chance
00:48:31.160 of rain, then that weighs on you more than at 10% chance
00:48:34.200 of rain, don't you listen to these things, aren't they useful?
00:48:37.880 And I admit that I think the connection between what
00:48:47.080 is the reason why risks can be approximated by probabilities.
00:48:52.520 And also the reason why frequencies in certain situations
00:48:57.160 can be approximated by probabilities is an unsolved problem.
00:49:09.720 OK, so until you figure that out, it's not too crazy
00:49:13.040 for the rest of us to continue to use them, right?
00:49:18.000 this appearance of use isn't just an illusion, right?
00:49:20.040 I mean, we are actually getting substantial value
00:49:35.800 Because the chance is low, most people would say.
00:49:41.920 And this idea that the chance is low is a brake
00:49:48.480 on the urgency of working out what the chance actually is.
00:50:13.600 there either is or isn't an asteroid out there heading
00:50:16.800 towards us in the next 100 years or in the next year.
00:50:26.920 Once it's in the sky, it's going to be no good with this.
00:50:30.520 We can't sue anybody and say the probability of that was very low.
00:50:38.920 And no, but when you have the right theory of asteroids,
00:50:41.280 then it can make a stochastic prediction and you have chance.
00:50:45.520 So you need a theory of asteroids with a theory
00:50:55.440 I mean, people have actually calculated the rate of asteroids
00:51:14.400 Like, and indeed, they have in our solar system.
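A minimal sketch of how such a calculated rate turns into a "chance", treating large impacts as a Poisson process; the rate below is a placeholder, not an actual estimate.

```python
import math

rate_per_year = 1e-6        # assumed rate of impacts of a given size (placeholder)
horizon_years = 100
p_at_least_one = 1.0 - math.exp(-rate_per_year * horizon_years)
print(f"P(at least one impact in {horizon_years} years) ~ {p_at_least_one:.2e}")
```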
00:51:18.440 But all the other solar systems we look at have them
00:51:28.200 And therefore, I don't hold much with stochastic theories
00:51:34.240 You might remember that the first steam engines
00:51:39.200 Thermodynamics was generated after the steam engines, right?
00:51:46.920 And if you say it doesn't have a good theoretical foundation,
00:51:51.200 But we're still going to use it until you prove theoretically
00:51:57.840 I agree that not only can we use it, but we should use it.
00:52:06.000 is not just legitimate, but at least things you've told me,
00:52:15.200 And it's a bit of a scandal that more people aren't doing it.
00:52:19.760 But I think the same is true of all fundamental theories,
00:52:35.680 Nevertheless, we're completely justified in using each of them
00:52:41.640 Well, using them for predicting things like satellite
00:52:47.880 orbits, but not for predicting things like how much information
00:52:55.680 So there, the theory itself says that it doesn't know.
00:53:07.800 thought more about where probability is problematic.
00:53:12.120 is especially problematic, then I'm probably just
00:53:13.920 going to keep using it, because it's interesting.
00:53:29.000 So you've given some, it still works even then,
00:53:34.000 like when you're saying that the stock price of something
00:53:41.640 Even when knowledge accumulates, the stock price
00:53:52.280 And so for example, when you say that all long-lived civilizations
00:54:00.920 in the past have failed within, well, they have failed.
00:54:10.600 all civilizations that have lasted longer than 400 years.
00:54:14.160 If you count ours as having lasted for 300 years,
00:54:17.600 all civilizations that have lasted for more than 400 years
00:54:37.960 that says why it isn't, namely that our civilization
00:54:42.640 is different from all the others in the relevant way.
00:54:53.560 based on our civilization, based on past civilizations,
00:54:57.800 because all of them depended on the future growth of knowledge.
00:55:01.680 And if you look in detail, actually, at how they failed,
00:55:10.200 is that in all cases, more knowledge would have saved them.
00:55:18.040 the major reason for growth for all past history.
00:55:22.680 That is, even biology, the major reason for innovation
00:55:43.240 been the dominant factor in growth for all of history
00:55:48.120 So that fact doesn't distinguish us from our ancestors.
00:55:52.160 Maybe the way we grow knowledge or the rate at which we
00:55:58.240 and was the main long-term force has not changed.
00:56:12.840 that our descendants might grow knowledge faster
00:56:30.480 Again, just like the life as a chemical process
00:56:35.880 is different from all other chemical processes.
00:56:39.040 And thinking as a information processing process
00:56:46.320 is different from all other information processing processes.
00:56:53.200 in terms of frequencies derived from the other,
00:56:58.880 Always, by the way, in the direction of pessimism.
00:57:02.040 Because it's not just that we are unlike previous civilizations.
00:57:24.600 It sounds like we agreed to talk for about an hour.
00:57:31.200 or not in the middle of some dispute or something like that.
00:57:44.720 And we'll make them take sides, who's the winner here, right?
00:57:52.920 I was just teasing you about the previous debate.