00:00:05.720 Popper's idea that it doesn't matter where ideas come from.
00:00:43.560 I guess maybe I could still reformulate that as criteria.
00:00:48.160 But I'm gonna compare it to different kinds of ideas
00:00:51.560 depending on the sort of idea that I'm thinking about.
00:00:55.040 Like, the sort of idea, like, should I go to the gym today
00:01:07.760 as to whether or not I'm going to accept it even tentatively
00:01:12.000 than asking the question, like, do I think there's a tree
00:01:17.800 outside or is it raining or what is the fundamental nature
00:01:36.080 which according to Popper is where thinking actually begins.
00:01:49.480 I'm not sure whether Popper would put it that way.
00:01:52.080 He might want to say that many problems are of the form
00:02:02.200 And he would say that even an amoeba has problems,
00:02:07.760 And still, I think the conflict-between-ideas idea...
00:02:16.240 even in an amoeba, the problem is the implicit problem
00:02:34.480 And until you have an existing genome and a mutant,
00:02:39.960 there's no evolution, there's no natural selection.
00:02:48.880 And what's more, well, nothing can happen until the solution
00:03:01.480 So again, it's important in evolutionary biology.
00:03:06.880 What's in common with the amoeba and the person
00:03:10.920 who was going to go down the road to fetch a hamburger,
00:03:15.240 or whatever it was, is that the idea
00:03:40.080 what I should do, unless that's a problem to you,
00:03:43.360 unless the what I should do is a place in your mind
00:03:50.120 And that problem has come to the top of your scheduling ideas.
00:04:08.440 we're not wondering, where should I put my foot?
00:04:17.520 And since there's no problem, there's no thinking.
00:04:26.920 My guess, so I'm not sure if this is a tangent.
00:04:31.960 there is, in fact, a bunch of little bits of conflict
00:04:39.000 think we may need to clarify what sense data means.
00:04:41.400 I'm fine saying sense data are ideas, although also theories
00:04:52.800 have a little bit of sense of being off balance or on balance.
00:04:59.840 that manage this, something with a little bit wrong
00:05:11.960 to things, doing one thing, stabilizing itself,
00:05:16.080 doing another thing, changing levels of importance,
00:05:40.160 what I'm actually doing is just retrieving a memory.
00:05:43.720 And if I accidentally retrieve, if there's some corruption
00:05:55.200 And therefore, the only thing I can think of when someone
00:06:12.680 then it might be that an error correction process comes in.
00:06:15.400 And but what it's doing, we can anthropomorphize it
00:06:43.360 And similarly, there's also things that an amoeba does,
00:07:12.240 So in human bodies and in systems in the world,
00:07:15.640 there are, yeah, let's start with human bodies.
00:07:33.200 and it will store, yeah, I guess you wouldn't say
00:07:39.880 actually I'm not sure if you would say it creates knowledge.
00:08:09.960 and produce some output and some of that output might be
00:08:17.560 I have in fact written algorithms that do that.
00:08:30.840 there's like some specific, there's a specific process.
00:08:34.640 The side that's in the shade grows a little bit faster
00:08:49.360 That knowledge was generated over a million years
00:09:15.480 All right, you know, when the combine harvester comes,
00:09:32.920 But the sunflower's knowledge arose from those few cases
00:09:38.280 where some ancestor of the sunflower survived death.
00:10:27.280 I'm like, ah, this like this mathematical proof
00:10:36.920 like all of the pieces all along to derive that,
00:10:41.720 And at the end, it seems like I have new knowledge.
00:10:53.240 a simulacrum of knowledge than an amoeba or a sunflower.
00:11:11.240 can verify each step according to rules of inference.
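To make "verifying each step according to rules of inference" concrete, here is a toy, hypothetical checker for a single rule (modus ponens); the encoding is made up for illustration, and the point is only that each step can be checked mechanically, without understanding:

```python
def modus_ponens_valid(premises, conclusion):
    """A step is licensed if the premises contain both A and ('implies', A, conclusion)."""
    return any(("implies", a, conclusion) in premises for a in premises)

# "rains" and "rains implies wet" together license "wet":
premises = {"rains", ("implies", "rains", "wet")}
print(modus_ponens_valid(premises, "wet"))    # True
print(modus_ponens_valid(premises, "rains"))  # False: nothing here implies "rains"
```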
00:11:34.000 So I agree that proving is not the same as understanding
00:11:36.280 and it seems like the thing that you're saying is
00:11:41.520 there's also resolving problems, which does relate to,
00:11:45.840 like that does reflect my experience of doing math,
00:11:50.240 like I will sit down and I sort of have a bunch
00:11:59.320 I sort of pushed those two ideas together until I'm like,
00:12:05.560 All right, and you're, that's resolving a problem
00:12:30.240 or you're really guessing in the case of learning
00:12:42.600 the problem situation that he tried to write down,
00:12:45.640 but wrote down that you can never write that down fully
00:13:04.600 we don't all have to completely reinvent calculus each time.
00:13:09.640 It seems like we're able to train people in calculus
00:13:15.080 Yeah, well, like with many things in the educational system,
00:13:20.560 learning calculus usually means learning the algorithms.
00:13:34.360 that the understanding that they had of calculus
00:13:37.680 was wrong and the things that they thought were
00:13:43.680 And then they learn this kind of better set of things
00:13:49.800 and then they're like, holy shit, that was all false.
00:13:58.920 like all mathematical ideas, like all ideas are,
00:14:08.800 They are conjectured. Calculus, for example,
00:14:38.120 So it inherently involves two different positions.
00:14:41.920 So how can the instantaneous velocity at one position
00:14:48.880 Meanwhile, Leibniz was struggling with the same idea
00:15:00.760 Most of the things they guessed were flat out wrong
00:15:03.400 in the sense that they didn't even solve the problem
00:15:07.080 Finally, they found ideas that did solve the problem.
00:15:09.960 But later generations of mathematicians
00:15:13.000 were like, okay, it worked, but that doesn't mean it was true.
00:15:23.240 In the 19th century, what they thought would clarify everything
00:15:34.240 And so they evolved better and better methods of proof.
00:15:46.400 for mathematicians to demonstrate a method of proof
00:16:02.160 And Gödel proved that that was impossible.
00:16:20.200 I'm not sure that I'd buy that in full generality.
00:16:24.520 You can't show that a Turing machine, any given Turing machine,
00:16:29.840 will halt, because you could have an algorithm that asks
00:16:34.280 whether or not it halts, and then does the opposite.
00:16:41.240 You can't show that it won't halt, yes, excuse me.
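A minimal sketch of the diagonal argument being gestured at here, assuming (counterfactually) that a halting decider existed; the function names are hypothetical:

```python
def halts(program, argument):
    """Assume this returns True iff program(argument) would halt.
    The construction below shows no such function can exist."""
    ...

def contrary(program):
    if halts(program, program):  # ask the decider about ourselves...
        while True:              # ...and do the opposite: loop forever,
            pass
    return                       # ...or halt immediately.

# contrary(contrary) halts if and only if it doesn't halt, a contradiction,
# so halts() cannot be implemented.
```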
00:16:47.880 So therefore, although there's such a thing as valid programs
00:17:03.360 Because we can't always tell which. And that is akin to: there are true ideas
00:17:19.000 and false ideas, but we can't distinguish between them.
00:17:26.600 What we can do is find errors, which is sort of like noticing
00:17:40.480 Have an explanatory theory that a certain program won't halt.
00:17:45.040 For example, if the program is finding prime numbers, yes.
00:17:49.280 You can actually form a theory of prime numbers,
00:17:53.920 But that theory of prime numbers depends on the theory of integers,
00:17:57.800 which Gödel proved cannot be proved consistent.
00:18:16.360 in that register, and whatever it is that moves over one.
00:18:19.280 I'm pretty sure that's not going to halt, because of a
00:18:28.720 particular, you know, pretty straightforward application
00:18:33.240 of the axioms. But trying to prove the axioms is impossible.
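A sketch of the kind of trivially non-halting program being described here, with the explanatory theory (a loop invariant) that convinces us it never halts; this is an illustration, not the speaker's own example:

```python
def count_up():
    n = 0
    while n >= 0:  # invariant: n starts at 0 and only ever increases,
        n += 1     # so with Python's unbounded integers the condition
                   # never becomes false and the loop never exits.

# The explanation that count_up() never halts rests on a theory of the
# integers (n + 1 > n for every n), exactly the kind of axiom system that,
# per Gödel, cannot prove its own consistency.
```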
00:18:47.400 And the process that searches through the integers
00:18:58.040 And those are the axioms that set up the arithmetic
00:19:10.560 What we can't do is derive that it's true from anywhere.
00:19:38.040 At least for human beings, although I'm just, I mean,
00:19:47.760 Like, would other, would it be different for other entities?
00:19:54.440 Understanding something is not the same as deriving it?
00:19:57.840 Also, incidentally, we can't really derive anything
00:20:08.520 Somewhat unfortunate, maybe, or at least disappointing,
00:20:15.680 it's the only thing that makes life worthwhile, yes.
00:20:19.040 Well, it sounds like you don't think that it's disappointing.
00:20:26.600 I'm not even sure exactly how, like the world where there is a bottom
00:20:30.160 might be just very radically different from our world.
00:20:34.680 But OK, we can't derive anything from the bottom.
00:20:39.320 I think I do think that that's true for a number of reasons,
00:20:52.800 And so the thing that we're calling knowledge in this context
00:20:57.000 is conjecture, where we're starting from where we make some guess.
00:21:02.280 We, OK, so actually, we make a bunch of guesses.
00:21:08.160 And then we look at the guesses next to each other,
00:21:10.840 and we're like, well, we're starting from this one
00:21:15.440 and it has logical contradiction, or maybe not necessarily
00:21:20.800 Like, yes, we don't, like, we don't make them all.
00:21:29.800 and they have inborn, they have conflicts with each other.
00:21:41.800 but can you give me some examples of the sort of thing
00:21:55.120 that there will be a breast there, and what it will look like.
00:22:03.480 I'm not sure if that's encoded, actually, but what it will
00:22:08.280 I mean, stimuli make it go towards that and do the right thing.
00:22:13.720 And some make it go away from it and do a different thing.
00:22:23.560 And the baby has also other expectations and other knowledge,
00:22:30.720 some of it inexplicit, I mean, all of it inexplicit,
00:22:37.680 So immediately, I want to ask what you mean by,
00:22:47.640 Is that because there's a very, we couldn't mathematically
00:23:00.240 So I have to just make up stuff about how to look at the baby.
00:23:05.200 We couldn't make a robot that behaves like a baby.
00:23:09.960 So one thing is that some of these expectations
00:23:14.040 are also desires, desires for certain sensations.
00:23:19.640 So when it's drinking milk, it is not satisfied
00:23:25.640 with the state of drinking milk, because that whole process
00:23:31.800 involves being unsatisfied so that you drink more.
00:23:37.760 where drinking milk is actually unpleasant, because you're cold.
00:23:47.640 because the physiological mechanisms aren't perfect either.
00:24:00.280 It's, in fact, a cobbled-together compromise.
00:24:04.640 And because the sensory apparatus isn't good enough
00:24:11.560 to detect the exact criterion of when the digestive system
00:24:23.920 So the baby decides how to cope with the problem
00:24:31.240 of balancing the pleasure of getting more milk with the discomfort
00:24:37.720 And at some point, I think sooner rather than later,
00:24:45.520 with something like a dog, they have similar ideas
00:25:10.760 they will start and stop drinking at exactly the same moment.
00:25:15.240 At the same moment, you know, you will be able to see what
00:25:21.280 triggered the one response rather than the other.
00:25:37.840 For humans, my guess is it's already not true
00:25:43.280 You know, I haven't got strong evidence either way,
00:25:51.360 for this specific thing of whether balancing the pleasure
00:26:00.160 Yes, but the thing is that babies at some point very early on,
00:26:10.800 they do things like learn the meanings of words.
00:26:14.840 And there we know that there can't be an inborn criterion
00:26:20.560 because, well, at least it's a very hard to imagine
00:26:24.080 there's an inborn criterion because everybody learns
00:26:26.880 different meanings for all the words and for the grammar.
00:26:30.760 And so no two people grow up learning the same language.
00:26:35.000 What is the problem that is solved by learning language?
00:26:41.360 Oh, well, a baby learns to improve its environment
00:26:55.360 It can scream, but it would scream for many other reasons as well.
00:27:05.360 but I know that it doesn't know that it has a stomach ache.
00:27:19.360 But it can form a guess and it can correct errors in that guess.
00:27:24.360 And it doesn't know in the sense that none of us really know anything.
00:27:29.360 So I'm saying that there isn't a deterministic connection
00:27:39.360 between various states of nerves in the digestive system
00:27:49.360 Again, there's only a loose connection, which is initially formed by evolution.
00:28:05.360 You can have pain, which is actually caused by one part of the body
00:28:11.360 that appears as a pain in the other different part of the body,
00:28:14.360 because the sensory system is just a crude, lashed-up system.
00:28:23.360 All right, I think there's maybe two things there.
00:28:26.360 I could be maybe hearing you to say at least one of two things.
00:28:29.360 So I was surprised to hear you say that it isn't deterministic.
00:28:33.360 It seems like ultimately all of this has to be describable
00:28:38.360 in terms of the interactions of particles and fields, which are deterministic.
00:28:45.360 No. You can have a perfectly true theory of particles.
00:28:48.360 And it will tell you nothing about the behavior of babies.
00:28:57.360 Well, it seems like if I have a detailed enough model of both babies
00:29:01.360 and a theory of particles, I will get something about the behavior of babies.
00:29:06.360 In order to do that, you would have to have a model of the knowledge creation process in a baby,
00:29:25.360 Okay, hold on, let me, let me say that because part of the thing that a baby is doing is creating knowledge
00:29:32.360 in order to predict what knowledge it will create, you need a process that itself creates that knowledge.
00:29:40.360 And therefore your computer simulation for predicting the baby is itself a baby.
00:29:52.360 It seems like suppose I took all the particles in a baby.
00:29:55.360 Well, you know, we may have some problems with that.
00:29:58.360 We take all the particles of a baby, we measure where they are and where they're going.
00:30:09.360 You have to, you still can't predict what the baby will do.
00:30:12.360 You have to put it in, put all those atoms, their positions and everything into a computer.
00:30:20.360 Still don't know until you get to the point that you were wondering about, you know, why is it doing this and not that.
00:30:32.360 And you can't predict what it will do until it does it.
00:30:40.360 Oh, okay, I see, I see. But I can't predict in my program what it will do before it does it.
00:30:48.360 But I could run the program. Like, if I make two literal, particle-level clones of a baby, and I run one of them forward in the same conditions,
00:30:58.360 and then see what happens at some time step, then I can also predict that time step for the other baby.
00:31:07.360 Yes, you've set up a situation where predicting some, some baby is retrodicting another one.
00:31:13.360 Yes, yes, that's exactly right. That is, that is what I'm trying to do.
00:31:18.360 So basically, you cannot predict the growth of knowledge, because...
00:31:27.360 Only certain types of knowledge, because the...
00:31:34.360 process that predicted it would be the process that you were trying to predict. And it's only after... so the baby, you know, call it Fred.
00:31:46.360 Fred is normally instantiated as a bunch of proteins and, and carbohydrates and stuff, but you could also instantiate that person in a computer program.
00:31:59.360 It's also Fred. There is no way in which that's not Fred that they are both Fred.
00:32:05.360 Although I might, like, say Fred One and Fred Two, just to keep track.
00:32:11.360 If Fred one and Fred two behave the same, which we are, we are assuming, then they are the same person.
00:32:17.360 Yeah, I'm on board with they're the same person and you, okay, and you're like, well, you can't predict what Fred, like in general, you can't predict what Fred does, because in order to do that, you need to just watch what Fred does.
00:32:34.360 All right, so there's something different about knowledge creation from other physical processes.
00:32:42.360 And here is where there is a weakness if you want to find a weakness in my old position about this.
00:32:49.360 Okay, what is that thing that's different? What's the thing that makes creative computer programs different from non-creative ones? I don't know.
00:33:11.360 Is there a reason... so, okay:
00:33:15.360 so, the reason why it is not the case... I can look at a bit of Python code and work through the logic and predict what the Python code will output, without running the Python code itself.
00:33:35.360 Yeah, if you have an explanation of what it's doing, then you might be able to predict what it's going to do without running it.
00:33:44.360 Yeah, so there are... I can either sort of work through the code step by step and be like, "and now, you know, this variable goes up in the for loop and goes down again," and I sort of keep count, maybe on paper, and that's just running the algorithm in my head.
00:33:58.360 I could also do something which is more like a mathematical proof, being like, "well, if I'm starting with these conditions, I know that the output can't be outside of these bounds."
00:34:11.360 And that's a different kind of prediction of an algorithm, which is possible for my Python program, at least in some contexts.
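A hypothetical illustration of those two kinds of prediction, with made-up values:

```python
def total(xs):
    s = 0
    for x in xs:
        s += x
    return s

# Step-by-step prediction: mentally run total([2, 5, 1]); s goes 0, 2, 7, 8.
# Proof-style prediction: without running anything, if len(xs) == 3 and
# every element lies in [0, 10], the output must lie in [0, 30].
print(total([2, 5, 1]))  # 8, inside the derived bounds
```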
00:34:23.360 But it is not possible for Fred, because there's no way to prove what Fred is going to do short of simulating actual Fred.
00:34:43.360 I do expect, particularly in the long future, when we are much better at just instantiating AGIs whenever we want, that this thing where I can predict an instantiation of an agent by
00:35:04.360 listing what another instantiation of that agent did is pretty practically relevant. It means that, in practice, I can have a deterministic model.
00:35:18.360 Actually, that's not quite right: given some conditions or some inputs, I can have a deterministic model of how an individual develops through time.
00:35:30.360 It becomes relevant as this discussion progresses. But as it stands, it is both the case that human beings are deterministic, they're defined by the laws of physics, at least as best we can tell, and...
00:35:49.360 I'm not even sure exactly what it would mean if they were not.
00:35:57.360 So you can't predict what a human does without...
00:36:09.360 At least in full generality. Like, sometimes I will, you know...
00:36:14.360 If I, like, poke a person with a needle, I predict that they're going to make a sound.
00:36:29.360 And it depends, yeah. But as I said, many of the things that humans do are not knowledge creation, so as long as a human is doing something that isn't knowledge creation, they're much more predictable.
00:36:38.360 Right, okay, because humans are animal mechanisms and also have some knowledge-creation machinery.
00:36:47.360 Yes. Let me think of whether there is a counterexample of predicting knowledge creation.
00:36:57.360 So one thing that I might do is be like, well, Moore's law has held pretty well, so I'm predicting that we will somehow, I'm not sure exactly how, of course, because if I did then I would probably make a lot of money,
00:37:10.360 I'm predicting that somehow we will figure out how to get, you know, even more transistors onto these chips.
00:37:18.360 I'm pretty confident in that prediction. I mean, I haven't been paying that much attention, and I hear Moore's law may be failing lately, but that's a kind of predicting the growth of knowledge.
00:37:31.360 Although it's in big terms: I'm not saying specifically what knowledge we'll create; I'm saying we're going to create knowledge which allows us to do something in this category.
00:37:43.360 So there are aspects of the situation that you can guess will not be affected by the growth of knowledge. For example:
00:37:56.360 you know that the bits will not be stored in something smaller than an atomic nucleus, so something has to give by the time we get there. Also, you know that the way progress has been going, it has basically been an implementation of the same idea at higher and higher resolution.
00:38:22.360 The idea has been an array of bits of memory: that the memory divides into bits, that they are addressed in certain ways, that there's a microprocessor which has certain elementary operations, which we know as a matter of logic.
00:38:41.360 Now, what they are and how they fit together... so the problem of making more transistors on the same chip, or smaller transistors, is a problem of just implementing things smaller. So originally, when they used
00:39:05.360 optical, what's it called, anyway, optical systems to photographically change something, which then allowed acids to etch into the silicon, or however they did it originally.
00:39:23.360 Later, you could predict that you could get higher resolution if you used ultraviolet light instead of visible light, and then you can guess what it would take, whether there are any fundamental reasons against it, and so on.
00:39:41.360 To the extent that it's a purely... well, I want to say a purely engineering problem, but that's to denigrate engineers; I mean, they have to be creative as well, they've got a lot of problems.
00:39:54.360 Right, but the problems they are solving are how to do it, rather than what to do. So you can guess that; that's not such a big deal, but...
00:40:14.360 For instance, inventing the computer in the first place: yes, if you're forecasting the future and you don't account for "we will have artificial computers," that's a big difference. Even the...
00:40:21.360 Even the scaling-down thing didn't go smoothly; there were various places where there were hold-ups. There's a hold-up right now, I believe: none of the big companies are managing to get the bugs out of whatever the latest thing is called.
00:40:41.360 Okay, you know, Moore's law sort of got a bit of a glitch if you look at it microscopically. Predicting Moore's law is not a case of predicting the growth of knowledge.
00:40:55.360 At least in the sense that you mean it here, yes. I mean, there is an important sense in which we're predicting something that I might call the growth of knowledge, but...
00:41:07.360 And also an important distinction. So the big picture this is pointing towards is whether we can say certain things about the behaviors of knowledge-creating systems whose design we don't know yet.
00:41:29.360 Yes, that seems right, I mean, I think we can know some things about them, but yes, that is where we're going at least I hope so.
00:41:43.360 There are some inborn ideas, and some of those ideas are in conflict; humans start out with some ideas in conflict. Well...
00:41:57.360 That may be misleading. The thing is, we have some ideas which come into conflict as they are executed.
00:42:14.360 And also, I'm not sure if this is relevant, but you're claiming that that doesn't happen for other animals.
00:42:23.360 Yes, they do have ideas, but their ideas don't come into conflict in their execution.
00:42:30.360 So if there are two fixed ideas that we would naively say are in conflict, then there's also a fixed idea for resolving the dispute, so they aren't really in conflict.
00:42:51.360 All right, I think this is a little bit of a tangent, but I have sometimes seen a dog which is told to stay and also really, really wants to get up, and it's very obvious from watching the dog that it really, really wants to get up.
00:43:04.360 Yeah, and I would definitely suggest that there's a conflict happening there.
00:43:10.360 I'm not sure how I would assess whether or not there's an inborn criterion.
00:43:17.360 Well, you can't just from that description. Though note that when a human is very inclined to disobey a thing that he also has a strong reason for obeying, it often results in him not looking like that at all.
00:43:32.360 Because he's biding his time, waiting for his moment.
00:43:38.360 There's a way that, for a human, it can be much more integrated than for the dog that really, really wants to get up.
00:43:47.360 Yeah, but the human can choose to react in any way.
00:43:54.360 He can choose to react just like the dog if he wants to, or like a cat.
00:44:01.360 Whereas a dog can't choose to react like anything except a dog.
00:44:16.360 I think that is probably true, although I don't think that dogs ever pretend to be other animals; although if they did, that would...
00:44:33.360 I'm sure there are animals that pretend to be other animals but that's okay.
00:44:38.360 Yeah, those animals can't stop pretending to be other animals if the conditions are right.
00:44:46.360 "Can't stop" is sort of interesting, because it seems like humans also can't stop if the conditions are right, because for any given human behavior, from the laws-of-physics perspective, it is deterministic.
00:45:00.360 Given some inputs it's going to produce some output.
00:45:08.360 But that's still a limitation on what we can choose.
00:45:17.360 The fact that it can't stop doing that is a limitation on what it can do.
00:45:26.360 Oh, I see, I see. Oh, yes, many animals are mimicking other animals.
00:45:39.360 Right, okay: a moth pretending to be a frog cannot stop; it is just embedded in the structure.
00:45:47.360 I'm a little bit suspicious of this concept of choice. I sort of expect that choice is a hand-wavy term that we use to describe the behavior of systems that are too complicated for us to predict well.
00:46:10.360 The reason why it's vague is that it involves creativity and we don't understand creativity.
00:46:23.360 That is the conclusion of, I think, inescapable arguments like the ones that I've just given about predicting the baby:
00:46:32.360 that the growth of knowledge, for example, is unpredictable. We know that about it, even though we don't know anything about how it works.
00:46:41.360 I mean, sorry, we do know something that's the thing.
00:46:45.360 In fact, we know quite a lot if you take, if you think that.
00:46:49.360 The epistemology is in part a theory of creativity, then we do know something about it, but we don't know enough to program it and that's that's the, you know, that's the important.
00:47:02.360 We don't, yeah, we don't know enough to program it yet.
00:47:05.360 It is unclear to me exactly how far away we are from that.
00:47:12.360 To me too, like I said in that article, you know, we might be just one idea away. Yeah.
00:47:27.360 Yeah, sometimes people are like, "we have no idea how to build an AGI," and I'm like... for one thing, no individual human can assert what all the humans know.
00:47:40.360 There might be some human right now putting the finishing touches on it as we speak.
00:47:48.360 I just my best guess is it won't be an individual human.
00:47:52.360 It will probably be a large group of humans that have been working really hard with a lot of money, but, you know, it could be a guy in his garage who had some clever idea and then coded it up.
00:48:11.360 It was a problem that had plagued philosophers and scientists for millennia.
00:48:18.360 And then one person, or actually two people, Darwin and Wallace, discovered it.
00:48:25.360 Meanwhile, there were hundreds or thousands of biologists in the world going down a completely hopeless path of cataloging insects and putting them into groups and so on, and that didn't lead to the answer. And Darwin found the answer basically sitting in his armchair.
00:48:50.360 He did go out and look; he's told us a story about going out on HMS Beagle and finding birds and so on, but he found nothing out there that he didn't already know when he was sitting in his armchair.
00:49:05.360 The thing that made him different from all the other people that were cataloging birds was these ideas that were already forming in his armchair, and which he then, when he went out and looked at different kinds of, what was it, finches? Finches, yeah. He knew what he was looking for.
00:49:28.360 And everyone else who had cataloged thousands of different species of things, they were looking for something else, and they didn't find it. So yeah, I certainly think this sort of thing happens, at least sometimes, where there are lone
00:49:44.360 people who are doing something very different from what the whole field is doing,
00:49:51.360 and they often make progress where the whole field is missing it. Then there's a question of what sorts of fields that happens in.
00:50:01.360 Although it doesn't seem that crucial to me right now, I agree that it is possible that the first AGI may be built by some guy in his garage.
00:50:11.360 Okay, I guess I want to talk about creativity. Here's my understanding of what creativity is, right: you've got a bunch of ideas, and
00:50:23.360 you sort of execute the ideas, and then they conflict with each other, and when they conflict there's a problem, and I guess some part of your mind identifies that there's a problem... I'm not sure exactly how that process works, but...
00:50:38.360 That's an essential thing: the mere fact that they're conflicting, if you don't notice it, doesn't constitute a problem in the psychological sense.
00:50:55.360 Thinking is always the result of a problem. What I'm getting from this, one thing that
00:51:02.360 I sort of need to reformulate around, is
00:51:11.360 that we start from problems. Okay, so we have ideas; the ideas have some conflict; that produces a problem. Yes. We identify that there's a problem by some mechanism.
00:51:23.360 And you may be mistaken about that too. Sure, sometimes we're mistaken that that's a problem.
00:51:31.360 If you mistakenly identify that there's a problem, then there is a problem. Oh, yes, yes.
00:51:37.360 But it's not between the theories you thought it was between. Yeah, right.
00:51:42.360 That happens. You know, that's not just a theoretical point; that happens. Yeah.
00:51:54.360 Then, basically, creativity generates a conjecture.
00:52:03.360 And the conjecture is like: maybe the whole situation works like this; does that resolve this conflict between ideas? Yes.
00:52:16.360 The process of being a human is just doing that over and over and over a million times a day.
00:52:21.360 And there's a point here, which is: we know nothing, or almost nothing, about the process that generates the conjectures in the first place.
00:52:33.360 I think most of what we know is negative. We know how it doesn't work.
00:52:40.360 We know certain things about how it can't possibly work. And Popper is a sort of apotheosis of understanding why lots of common-sense ideas about how it must work can't be true.
00:52:59.360 Okay, so that is... I think this is maybe a tangent to this whole discussion, but that is interesting to me, because my current best guess (although the way we're using words is maybe askew somewhat, so it may turn out that when I say this, it actually relates to something different in your ontology)...
00:53:20.360 my current best guess is that conjectures are basically inductive, and that you generate conjectures via extrapolating out, along various dimensions, from things you've seen or reasoned about before.
00:53:39.360 And I think that there's some reason why that can't be the case.
00:53:43.360 Yes: extrapolation implies similarity, and similarity implies a theory of what counts as similar and what counts as different.
00:53:58.360 And therefore the content of the extrapolation is in that theory, which can be false and can't be discovered by extrapolation.
00:54:07.360 So that's why. Okay, so I was with you until, and that theory can't be discovered by extrapolation.
00:54:15.360 Yes. Somebody, you know, somebody down the street here might say that
00:54:26.360 he doesn't like black people because black people commit crimes, and then I could say that's the wrong way of thinking about it: it's actually poor people who commit crimes.
00:54:41.360 I mean, I don't believe either those theories but i'm just using it as a very simple example.
00:54:45.360 Now, he thought he was inducing his theory, because to him, he saw: black person commits crime, black person commits another crime, another black person commits a crime, and so on.
00:55:00.360 And to him, you know, he was thinking that he had induced the theory. Whereas a different person sees the same data and he sees poor people committing crimes: poor person commits crime, and so on.
00:55:14.360 You see that both of them have actually just been interpreting the data in the light of their preexisting theory.
00:55:25.360 Yeah, and have pretended to themselves, I mean they don't know that they're doing this, but they have reinterpreted the thought process in their own mind as one of extrapolating a theory from the data.
00:55:37.360 What they have been doing is interpreting the data in the light of a theory and the theory came first.
00:55:47.360 Yeah, actually what they're doing is interpreting the data in light of a theory.
00:55:51.360 Is there a reason why... so, okay, the conjecture generation process:
00:55:59.360 it seems to me that that could be doing something like interpreting data in light of a theory. I'm not saying that it's not theory-laden; there is theory, like background knowledge or whatever.
00:56:12.360 When one does that one isn't creating knowledge.
00:56:16.360 One may subjectively feel as though one is, as I said, because one mistakes the process as being one of getting the theory from the data. But what one is actually doing, what we do all the time, we have...
00:56:30.360 You're just making a guess, though; it's just a conjecture. There's still, like, many layers of...
00:56:35.360 I mean everything is a conjecture but in the case where, for example, the person has only that one theory about this particular issue.
00:56:45.360 Although it's a guess, it's not a guess about this thing; it's a guess that he formed before, and it's just being applied now because it's his only guess. So this guess arose at some point in his past as a result of solving a problem.
00:57:02.360 But then, if he solved it to his satisfaction, if it didn't conflict with anything that he knows of, then it became part of him. It wasn't controversial; it wasn't the subject of thinking.
00:57:17.360 And then when it was used that use also isn't thinking.
00:57:25.360 I do feel like we're drawing some tight lines around thinking here, which I'm not sure I would, on reflection, endorse.
00:57:34.360 I'm using whatever shorthand is useful. But like I said, at some point in my past I got the value of three times seven.
00:57:49.360 It's probably imprinted on my brain, and now whenever I want to know what three times seven is... yeah, okay, but that's not new thinking. That's an automatic process of recall, yes. I mean, when we have a theory that's uncontroversial, we can apply it without thinking.
00:58:10.360 That's what you referred to a while ago as compiling ideas. When we compile ideas... that's a very good way of putting it, because
00:58:23.360 not only do we make it sort of automatic, and therefore its application is no longer, as it used to be, part of a creative thought process, it's just a mechanical process, but we may also throw away some knowledge. We may forget
00:58:43.360 the variable names and the subroutine structure and only retain the machine code. That happens when you learn to drive.
00:58:52.360 A beginner is just thinking all the time about when he should change gear and that kind of thing, whereas an experienced driver not only does the thing but can't explain to you what he has just done. Yeah.
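The compilation analogy can be made fairly literal; a hypothetical sketch using Python's built-in dis module (in a fully compiled language, even the names shown below would be stripped away):

```python
import dis

def should_change_gear(speed, rpm):
    # The explicit, explainable rule a beginner thinks through...
    return speed > 30 and rpm > 2500

# ...flattened into opcodes: the reasoning behind the rule is gone and
# only the mechanical steps remain.
dis.dis(should_change_gear)
```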
00:59:07.360 Okay, I guess I want to summarize here a little bit.
00:59:15.360 When people talk about extrapolating, they are making an error. Or, yeah, they're always making an error; I'll need to think further to see whether I think it's always the case, but you're claiming at least that they're always making an error, which is:
00:59:33.360 they are taking some categorization schema, and they are using that categorization schema to interpret some data that they have. It's in no way pure.
00:59:44.360 You always start with some categorization schema. And if you're like, "well, conjectures..."
00:59:52.360 then you're sort of passing the buck, and you're like, well, where did you get that categorization schema in the first place? That must have evolved from some problem that you solved, where this seemed like a good enough solution.
1:00:06.360 Which would give a different answer to the question: he would say, "I may not remember exactly, but it must have been itself an induction process." So the philosophers in the 19th century who were thinking about induction would say: we have in our minds a principle of induction. Where did it come from? We induced it.
1:00:28.360 Yeah, I definitely don't think that; I think that's stupid. It's an infinite regress, and it isn't true. Yeah, exactly.
1:00:41.360 Yeah, okay. When I say that that's stupid... you know, it's like when I was reading Aristotle in high school, and you could tell, you know,
1:00:49.360 the parts where I disagreed: the margins would be filled up with mini-essays about how Aristotle did not understand economics. But that's not because I'm smarter than Aristotle; I just have the, you know, great advantage of coming several thousand years later, after the invention of economics, which I much appreciate.
1:01:13.360 So, so, so we we there there's sort of a question it seems to me that there's sort of a question I'm not sure how far we want to follow this because I.
1:01:27.360 These are areas that I'm less clear about myself, and so I'm not sure that I can make cogent arguments. But it seems to me that, in general, in epistemologies you have a problem of dealing with an infinite regress: if you're just trying to induce things all the way down, and you're like, "well, how do you know that you can rely on induction?" "Well, because of induction," then you have a bit of a problem.
1:01:51.360 And it also seems to me that, in what I've understood of Popperian epistemology, there is a similar danger of an infinite regress, where you're like, well,
1:02:04.360 we generate a conjecture and then we criticize it. And you're like, well, where did the conjecture come from? And you're like, well, there was an unconscious process. Now, correct me if you don't believe any of this; I don't want to put words in your mouth.
1:02:16.360 I don't think that's an infinite regress. Okay, when you say "where do the conjectures come from," I mean, we
1:02:25.360 know where they come from; it's just that we don't know the implementation details, and they turn out to be crucial in this case. But at the lowest level of implementation, they must be random.
1:02:42.360 They must be random, I mean, not random within a particular probability distribution function; they are random in the sense that mutations are random.
1:02:57.360 That is, it's not that every genetic change is equally likely; what we mean by random in that case is that the changes are not
1:03:10.360 caused by any particular aspect of the problem, of the environment, in the case of an animal. So it's not that the giraffe
1:03:22.360 stretches its neck to get the high-hanging fruit or vegetable and then the mutations... the mutations happen anyway, whether it's stretching or not. They come first, and any
1:03:44.360 improvement in the giraffe's neck comes before the selection pressure is applied. Yeah, so that is absolutely the case in the case of natural selection.
1:04:03.360 That's plausibly the case in the human generation of ideas, although I'm not sure that's the only way it could work. Like, the thing that I'm hearing you suggest is that
1:04:18.360 when you're trying to solve a conflict between ideas, you start with basically a random seed, and then you're like: I'm just going to try something random; does that solve the problem?
1:04:30.360 And it makes all the difference where in the process this supposed random number happens. It's not that you start with the random seed; it's that you start with, say, a very high-level heuristic, which is then
1:04:50.360 unsatisfactory in some way, and so you want to modify it, okay, by a heuristic-modifying heuristic, and so on down to, you know, by the time you're about eleven stages down.
1:05:04.360 Then you get somewhere where you don't have a heuristic, so there's some process that's like, "let's try the first thing that comes to hand." It's not that it's a random number generator. Right, okay: you're starting with background knowledge, but then you tweak the background knowledge to see if that works
1:05:19.360 Better, and then you continue a process of.
1:05:23.360 of, sort of, yes, tweaking and pruning. The higher-level tweaks are dependent on the problem.
1:05:32.360 Yes, okay. It's just that eventually, after level 11, you get down to a place where you've exhausted all the possibilities of extracting it from the problem, so you just take the first thing that comes to hand.
1:05:45.360 Wait, I didn't understand that sentence. You get down to level 11... is there a specific level 11, or are you just...?
1:05:53.360 No, I mean, I'm sure it's not levels either; it's much more...
1:06:02.360 Like, at the highest level, your thinking...
1:06:09.360 it says to you, "do you know that there are actually white criminals as well?"
1:06:15.360 Okay, and you say no; your first thought is, "no, that can't be."
1:06:21.360 And then you think, "well, actually, I have seen newspaper reports; okay, it's part of a conspiracy, they're just saying that," you know.
1:06:40.360 The nature of the problem is has a large input into the nature of the conjectures to solve it.
1:06:45.360 When you say a higher level, you're referring to...
1:06:49.360 Like at the level of human thoughts as opposed to the level of the underlying mechanisms that are generating the thought.
1:07:00.360 That, and less explicit ones, and so on. And there...
1:07:21.360 At the highest level, you know that you're looking for something that contains phrases like "white people" and "black people" and "crime" and so on.
1:07:38.360 And, and you, you quickly scan through sort of various ones of those and see that that kind of thing isn't going to solve your problem.
1:07:49.360 And then all of the solutions that are... all of the,
1:07:52.360 yeah, all of the conjectures that are made out of those building blocks.
1:07:59.360 All the ones that you've tried all the ones that seem promising to you.
1:08:04.360 And then there are various strategies, say, "well, let's generalize the problem": okay, I'm not talking about all criminals, you know. So you can have a more general thing that tries to hedge your theory.
1:08:22.360 And then, if you're rational, you'll think, "well, wait a minute, am I using a hedging process that would save any theory?"
1:08:31.360 You might settle on a theory like Marxism that explains any situation in terms of the same reason.
1:08:39.360 And you might be satisfied with that for a while; you might be satisfied with that for years, or you might not. Yep. And...
1:08:46.360 But what I was referring to as happening at level 11 is that
1:08:51.360 if the thing is still unsatisfactory when you've tried everything, every kind of way of changing things and every kind of way of changing criteria and all that sort of thing, right down to inexplicit levels,
1:09:03.360 then what will happen is that the process will just try something.
1:09:29.360 All right, so here's my story: humans go about in the world; when they go about in the world, they have ideas; when they execute the ideas, the ideas come into conflict with each other; that causes a problem; some mechanism in a human mind recognizes there's a problem and
1:09:45.360 initiates a process, maybe a process that's actually going on in the background all the time, but a process either starts or continues of generating conjectures that attempt to resolve the conflict between ideas.
1:10:01.360 That conjecture process: first of all, we don't really know how it works, but here's a conjecture about how the conjecture process works.
1:10:12.360 We have a bunch of background... so yeah, now we're in the box of "this is the conjecture generation procedure."
1:10:18.360 We have a bunch of background knowledge and we will start by just applying our background knowledge and seeing if that solves our conflict.
1:10:33.360 And then, if that doesn't work, if that doesn't resolve the conflict... like, sometimes it does; sometimes we're like, "oh, you just forgot about regression to the mean; right, there it is."
1:10:44.360 But if that doesn't work, then you might sit there and sort of modify your background ideas until you get a version that... you're trying out modified versions; you're like, "well, maybe it's not exactly..."
1:11:01.360 Yeah, we were using the example of racism, where we're like, "well, okay, maybe it's..."
1:11:10.360 I have some idea that only black people commit crimes, and then I'm like, "okay, well, maybe it's mostly black people that commit crimes," and I'm like, does that solve the... yeah, I'm sort of tweaking it a little bit, and then
1:11:25.360 I continue. So level one is: I just try my background ideas. Level two is... where, again, "levels" is sort of like
1:11:35.360 the order in which we're doing things, maybe. Again, you're changing your initial theory. Yeah, you're changing it intentionally to meet this problem, and it's not a random change yet.
1:11:51.360 So you're thinking that this kind of thing might solve the problem. Right. So I do want to flag that, in terms of the project of designing how an AGI works, saying "we're intentionally doing it to solve this conflict" sort of leaves open how exactly that works.
1:12:09.360 Like, I can't write code that's like, "now come up with a solution that resolves this conflict in ideas." Yes. So at this point you just say that conjecture is only
1:12:22.360 step two. Sort of: recognizing a problem is step one, conjecture is step two, then there's criticism. We haven't seen criticism yet, but at each point... when you notice that tweaking the theory, to say that maybe it's just mostly black people,
1:12:38.360 doesn't work, noticing that is a critical process. Yeah. And the critical process in general is also creative.
1:12:53.360 So usually you're not using stock criticisms; you're usually creating a criticism. Sometimes a stock criticism will do, like you said, you know:
1:13:07.360 if you have a list in your mind of the sort of biases that you have, then you say, "well, am I doing this bias? Would this bias do it, or this bias?"
1:13:16.360 But also, in any kind of non-trivial thought process, the critical phase is also conjectural.
1:13:26.360 Creative. And when you say it's conjectural, that means I have some conjecture which I'm putting forward that maybe
1:13:35.360 resolves the conflict between ideas, and then there's another process which is going to generate a conjecture for why our first conjecture might not work. Yes.
1:13:50.360 I guess we also need to critique our critique, right? We're going to generate... so, you can form the theory that your critique has a flaw.
1:14:03.360 Or you can not form that theory. The number of possibilities increases exponentially as you go along, and what you actually choose is only a small proportion of those; and which ones you choose is determined by heuristics of how you choose those things, which are themselves theories, conjectures which can be changed.
1:14:30.360 I didn't capture that last sentence which are themselves conjectures which can be changed.
1:14:34.360 Yes, so you're your critiques are themselves conjectures which can be changed.
1:14:41.360 Yes. So all of this gives sort of the impression of a very busy mind; there's just a bunch of pieces: there's some conflict, and we're going to generate a conjecture, and we're generating a counter-conjecture that will maybe critique the conjecture.
1:14:57.360 And we ain't scratched the surface yet. Yeah, right.
1:15:02.360 But just I was just pointing out that the process I've described.
1:15:08.360 grows exponentially, because at each stage you've got the choice of
1:15:17.360 whether to reject the conjecture on the basis of the criticism, or reject the criticism on the basis of the conjecture, or form a new one of either.
1:15:31.360 So there's like four possibilities at each level, so by the time you've got to level 11 it's four to the power of 11, and there's not room for that in the whole brain.
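The arithmetic behind that claim, taken at face value:

```python
# Four options at each of the 11 levels (accept or reject the conjecture,
# accept or reject the criticism, on the speaker's framing):
print(4 ** 11)  # 4194304 branches if nothing is pruned
```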
1:15:54.360 Conjectural knowledge which is being changed which is subject to change.
1:16:00.360 Okay, so, on checking: you said there were four cases. You can accept the conjecture,
1:16:08.360 you can accept the criticism of the conjecture, you can modify the conjecture, or you can modify the criticism of the conjecture.
1:16:19.360 You can start trying to modify them, yes, by the same...
1:16:25.360 Yeah, we're actually calling our do-epistemology function; we can sort of double-click and be like, all right, now we're going to do the...
1:16:35.360 And it's like chess: for a human to play chess, you've got to have some drastic pruning mechanism.
1:16:42.360 Yeah, the way humans learn chess is creative.
1:16:48.360 So this thing is not just an analogy; that is exactly how we learn to play chess: we learn to prune our pruning algorithms, and so on.
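A toy sketch of what a drastic pruning mechanism might look like in game-tree terms. Everything here is assumed scaffolding for a made-up game (nothing is a real chess library), and the point is that the promise heuristic doing the pruning is itself a revisable conjecture:

```python
def legal_moves(position):
    return [-2, -1, 1, 2, 3]   # toy game: a position is just an integer

def apply_move(position, move):
    return position + move

def evaluate(position):
    return position            # toy static evaluation

def promise(position, move):
    # Stand-in for a learned heuristic; in a player this is itself a
    # conjecture, refined and re-pruned with experience.
    return move

def search(position, depth, beam=2):
    """Depth-limited negamax that expands only the `beam` most promising moves."""
    if depth == 0:
        return evaluate(position)
    moves = sorted(legal_moves(position), key=lambda m: promise(position, m), reverse=True)
    best = float("-inf")
    for move in moves[:beam]:  # drastic pruning: the rest go unexamined
        best = max(best, -search(apply_move(position, move), depth - 1, beam))
    return best

print(search(0, depth=3))
```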
1:16:57.360 Okay, where I'm like, "maybe this is a good move." So it seems like when I'm learning chess, I'm largely... I had a conversation with Lulu about this on Twitter, in fact... largely learning from feedback, where I
1:17:08.360 conjecture "maybe this is a good move," and then I get trounced by the person I'm playing against, and then I learn: turns out that was not a good move. And in that sense, the
1:17:21.360 criticism part is sort of coming from... I guess, okay, it has to turn into an idea in order to operate in my mind, but the idea is, "well, I lost that game by a lot, so something..."
1:17:35.360 "Where did I go wrong?" Yeah. Then you form a conjecture, and it's not just "okay, this was a wrong move"; it's a conjecture about...
1:17:45.360 More importantly, I think, more often than not it's a question of what was wrong with my way of choosing that move.
1:17:58.360 And then, you know, at deeper and deeper levels you build up what chess players call an intuition,
1:18:06.360 which doesn't take the form of "if this happens, then this happens, then this happens"; it takes the form of "this position feels like a good attack." Yeah.
1:18:20.360 Without any specific thing. And that is how we learn knowledge about chess.
1:18:27.360 That's how grandmasters could beat computer programs that analyzed orders of magnitude more cases than the grandmaster did. Yeah.
1:18:42.360 Because there are compressed theories about what makes good...
1:18:51.360 All right. And you would say that the chess program, the chess engine, all it does is
1:19:01.360 draw out implications; no knowledge creation is happening.
1:19:06.360 Yes, although, I mean, there's knowledge creation in the discovery of game trees.
1:19:12.360 Like, what you need to do to build the chess engine is...
1:19:16.360 Yes, although... I don't know how familiar you are with it, but it seems to me that I would say that AlphaGo does do knowledge creation.
1:19:28.360 Yes, it may. If so, it's evolutionary knowledge creation; it doesn't know what it's doing. It doesn't know what it's doing, yes.
1:19:38.360 But it is doing the same thing of playing out a game tree and then learning from that which sorts of moves seem like good moves overall.
1:19:52.360 And, you know, at some point evolution changed from using RNA to using DNA because DNA is more robust, but it didn't know what it was doing; there was no instantiation of that explanation that I just gave
1:20:10.360 in the process. It was just dumb evolution, yeah, creating knowledge very slowly. And AlphaZero creates knowledge very, very slowly, relative to the number of games at least, yes.
1:20:30.360 A point I once made, yeah: there's something you might call sample efficiency, which is how much Go skill you get per game of Go that you play, and by this measure AlphaZero is very, very bad compared to a human.
1:20:45.360 But AlphaZero makes up for it by playing way more games than a human has a chance to play in their lifetime.
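To make sample efficiency concrete, a back-of-envelope comparison with made-up, order-of-magnitude numbers (not actual figures from either chess or Go):

```python
# Illustrative only: skill (rating points) gained per training game.
human_games, human_skill_gain = 5_000, 2_500           # a lifetime of study, roughly
engine_games, engine_skill_gain = 40_000_000, 3_400    # self-play at AlphaZero-like scale

print(human_skill_gain / human_games)    # 0.5 rating points per game
print(engine_skill_gain / engine_games)  # ~0.0001 rating points per game
```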
1:20:52.360 In terms of specifically playing chess, it does. But the knowledge that a pro chess player has enables all sorts of other things to happen as well, such as explaining what he's doing. So
1:21:10.360 a chess player can write a book saying how to play attacking chess, and AlphaZero, even though it attacks better than he does, can't write anything like that. That seems correct to me. Whenever you have something that can learn from,
1:21:28.360 apparently, much fewer cases, it's because the programmer has put in the knowledge that will restrict it to those cases. Now, a little bit of that happens whenever you get an ordinary AI learning things. When a human learns those things, because they're learning explanatory knowledge, the knowledge has reach.
1:21:53.360 They have also learned something about... for example, when they learn chess, they've also learned something about games in general, about making certain kinds of decisions in general and not others. And the...
1:22:12.360 That is because, when they learn chess, they're not just learning how to win at chess.
1:22:22.360 But it's not learning, for example, how to make a beautiful game.
1:22:29.360 Now, because of the way that chess players learn chess, they learn these heuristics, which are connected with heuristics about beauty and other things. And so
1:22:43.360 you often hear chess players saying, "okay, I won, but it was boring," and AlphaZero never says that. And in fact...
1:22:57.360 Well, I think it might be possible, and should be tried, to make AlphaZero not only
1:23:07.360 win games, but report whether they were boring or not.
1:23:14.360 And this is a much more difficult problem, but I think it's possibly within the scope of dumb AI.
1:23:24.360 By the way, I'm not convinced that AlphaZero is like biological evolution; that's kind of the most I can imagine it being, but I would guess that it's dumber than biological evolution.
1:23:39.360 It seems to me... I have heard it said that you have said, although I don't want to put words in your mouth, and maybe you'll be like, "I would never say something like that," that you think that a superintelligence is impossible because
1:23:58.360 of the universality of explanations; or, yeah, universal explainers are universal.
1:24:08.360 And so there are sort of two categories of things: there are things that can generate explanatory knowledge at all, and there are things that cannot generate explanatory knowledge at all.
1:24:21.360 So first I want to see if that's about right, and then I have a response to that. The things in question being programs.
1:24:31.360 Yeah, although we can think of pretty much anything as a program, right. But yes, it's just that people say, "is there something which is to humans as humans are to ants?"
1:24:47.360 Before answering that, I have to rephrase it in the form: is there something which is to the program running in human brains as the program running in human brains is to the program running in ant brains?
1:25:05.360 Yeah. You see that it's just conflating two different kinds of difference. One thing about an ant brain is that it only has a few K of memory and processing space, and humans have many gigabytes at least.
1:25:24.360 Maybe something has many exabytes, which is vastly more, and there certainly could be a thing like that. Yep. In fact, there is a thing like that: a human plus the internet is that.
1:25:39.360 Yeah, although there's also an important difference between its being in your head versus accessible to you on the internet. Yeah, if I could modify myself to have exabytes of memory, I think that would make me quite different as a person, compared to if you just gave me access to the internet, which is also a really big improvement. I don't want to sound ungrateful for the internet.
1:26:03.360 I mean, in a way, you'd be a better person, like when they invented writing. Or maybe a different person, not necessarily better.
1:26:19.360 So I agree with Nick Bostrom that technological improvement will enable ways of being that we don't know about now, that we aren't conceiving of now, and which are in fact better than now.
1:26:37.360 So those people would be better than us in a certain sense. But that is not to say that the ideas they have will be some kind of different thing from our ideas; we could have those ideas just by plugging in some add-ons to our brains.
1:27:01.360 Um, yeah, so I think I broadly agree with that. I don't think that superintelligences will have some fundamentally different sort of idea than humans, and especially enhanced humans, would have. There are some important details about how that enhancement can work that I would want to work through.
1:27:25.360 It seems to me, though, that we were just talking about a whole zoo of conjecture processes and critique processes and fractal conjecture critique processes and pruning processes that are happening in order to generate even a single conjecture for a given conflict between ideas in a human mind.
1:27:47.360 And as we say, we don't know how all of that works in detail.
1:27:55.360 It seems to me like there is probably a class of algorithms that basically do those functions, and that
1:28:07.360 some of those algorithms are better or worse than other algorithms, and therefore you could write a mind which is much better at doing epistemology, at getting better explanations faster, and a mind that is much worse at that, based on the algorithms it's using to do all of these different things.
1:28:38.360 We call subroutines, and we know that many of the subroutines that we consciously call are executed unconsciously. Yep. It wouldn't make any difference to us if they were executed unconsciously in an add-on.
1:28:54.360 I sort of want to hold off on talking about add-ons for now, although I do agree that, in principle, that's a thing that humans could do, and it is desirable that we do that. But I would start...
1:29:09.360 I want to start by just clarifying that it is possible to have a thing which is effectively much smarter than modern humans, because the algorithms that it's running to do epistemology are better in some ways than the algorithms that humans run to do epistemology.
1:29:27.360 I still think you're kind of having the wrong picture of what it takes to be better. You know, I just said our unconscious mind is an example of an add-on; pencil and paper is another example of an add-on.
1:29:45.360 We're not a different person by getting a better add-on, until we decide to improve as a person by learning to choose those unconscious processes, or processes in the add-on, rather than others.
1:30:12.360 Okay, so once there are AIs that have developed the knowledge to use vast amounts of memory for thinking...
1:30:24.360 I think we'll not have that very fast, by the way, and I think that's not the bottleneck; we are not memory-limited or speed-limited.
1:30:33.360 The bottleneck in human thinking is overwhelmingly software. But okay, that's a different issue perhaps. By the time AIs have that, humans will also have it; it's just additional hardware that humans could also use.
1:30:54.360 Okay, so I want to flag that there's a "by the time" claim there. Like, now we're not just talking about what things are possible in principle; we're talking about technological trajectories and which ones will reach fruition at different points in time.
1:31:11.360 That is not fully a crux for me, but it is a big chunk of a crux for me. If I thought that strong forms of human augmentation were going to arrive before AGI, I would...
1:31:25.360 I don't think this fully solves the problems that I'm concerned about, but it does seem like plausibly we're in a much better situation with regards to these problems.
1:31:36.360 Thanks for meeting with me; this has been really fun.