00:00:00.000 Okay, I'd like to welcome all of you present here in the great room and everyone viewing
00:00:06.920 on the internet to an exercise in re-enlightenment, an effort to imagine what forms enlightenment might take today.
00:00:16.880 Tonight's effort actually started the moment we entered this room and looked up.
00:00:23.360 Over 200 years ago, James Barry painted a decidedly optimistic view of the relationship between knowledge and human progress.
00:00:37.000 Our springboard is optimism, a word that first appeared in English in Barry's lifetime
00:00:42.200 in the 18th century, at the intersection of two questions.
00:00:52.760 The answer enlightenment provided back then, the answer that gave us modern science and modernity
00:00:57.920 itself, was that the world was something that could and should be known, because knowledge improves the human condition.
00:01:07.320 When we invoke optimism tonight, then, we do not mean an inclination to see the glass as half full.
00:01:18.200 Our shorthand for this enlightenment formulation is epistemological optimism.
00:01:25.640 Bruno Bryan will introduce the people who will pace our discussion next.
00:01:33.880 A unique collaboration has made this night possible.
00:01:37.040 A collaboration based on a shared attitude toward knowledge.
00:01:41.960 Working in conjunction with the group I represent, the Re-Enlightenment Project, an international
00:01:46.960 network concerned with the future of enlightenment,
00:01:50.720 three of London's most prominent institutions have shared resources, intellectual and physical.
00:02:00.240 I want to thank those three enlightenment institutions.
00:02:04.560 Our host tonight, the RSA; King's College London and its Centre for Enlightenment Studies;
00:02:11.440 and the British Museum, not only for sponsoring this event, but also, in their very willingness
00:02:17.600 to work together, for making what Matthew Taylor calls 21st century enlightenment happen.
00:02:26.320 It's a great pleasure to welcome everybody to this enlightenment conversation today.
00:02:32.720 I'm going to introduce our two main speakers, but the spirit of this event is very much
00:02:36.880 that we should have some initial speeches, but then we should open the floor and have
00:02:40.440 a genuine discussion about the meaning of enlightenment optimism and its relationship to the present day.
00:02:45.160 I'm going to introduce first Professor David Deutsch, a distinguished physicist, fellow
00:02:49.640 of the Royal Society, and visiting professor of Physics at the University of Oxford, and
00:02:55.280 a man who has a very legitimate claim to be the father of the quantum computer.
00:03:00.520 He is also the author of a number of books on the nature and history of purposeful knowledge
00:03:05.120 creation, including The Beginning of Infinity, published in 2001.
00:03:10.120 David is going to start our conversation, and he's going to be followed by Martin Rees,
00:03:14.560 Lord Rees of Ludlow, an astrophysicist and cosmologist, and author in 2003 of Our Final Century.
00:03:21.880 He's a former president of the Royal Society, the Astronomer Royal, and a self-described
00:03:27.120 worried member of the human race. So take it away, David.
00:03:31.960 2011, actually, The Beginning of Infinity, so Martin was first.
00:03:40.000 So I think we've been deprived of a lot more than flying cars and Mars colonies.
00:03:49.360 I think civilization is currently burdened by a debilitating pessimism, not just prophecies of doom.
00:04:02.640 The term technological fix has become as pejorative as luddite used to be.
00:04:10.600 The aspiration for technological solutions is now widely regarded as naive, a fantasy that
00:04:17.440 ignores the inevitability of missteps and side effects, and that naivety is labeled optimism.
00:04:26.760 Because optimism has come to mean something like the assumption that the best will happen
00:04:31.640 or probably will, and pessimism that the worst will.
00:04:41.240 They're irrationalities that people accuse each other of having, and everyone classifies
00:04:46.720 themselves as somewhere in the middle, perhaps admitting to a slight bias in one direction
00:04:52.080 or the other, and therefore admitting to slight irrationality.
00:04:58.920 But in fact, both ends of the spectrum and the middle are predictions of success or failure,
00:05:06.600 maybe probabilistic, derived only from an attitude or a principle, not from explanations of how that success or failure will come about.
00:05:18.640 And prediction without explanation is prophecy, which is reliance on the supernatural, which
00:05:26.680 isn't a rational attitude to planning for the future.
00:05:36.600 Conventional pessimism is right that civilization has no guaranteed future, nor does our species.
00:05:45.720 The overwhelming majority of civilizations and species that have ever existed are now extinct,
00:05:52.320 including, significantly, every one of our cousin species, every species that has ever tried
00:05:59.400 to survive by creating knowledge that was not in their genome, new explanatory knowledge,
00:06:07.040 how to make clothes and fire and farming, and to live the new ways of life that that enabled.
00:06:15.120 That is our biological niche to survive through the exercise of creativity.
00:06:22.360 And we are the last species left in that niche.
00:06:33.760 We conquer problems by creating knowledge or they conquer us.
00:06:39.800 So there's nothing new in our situation of all sorts of existential danger.
00:06:44.760 It's undeniable that the worst can happen because the very worst has already happened many times.
00:06:56.840 All those civilizations who believed that their famines and droughts and disasters were
00:07:03.880 divine punishment for their wickedness or whatever.
00:07:07.560 In reality, it was just that they didn't know enough about irrigation, medicine, and so on.
00:07:14.640 If the ancient Athenians had known about antibiotics or just about hygiene, they could have
00:07:23.040 prevented the plague that contributed to the fall of their nascent optimistic society.
00:07:30.520 And if they had, then as Carl Sagan speculated, we might now be at the stars.
00:07:37.960 The technology would be regulating trivialities like the planetary climate as automatically
00:07:46.760 as it's now regulating the temperature in this room.
00:07:50.240 We know that's possible because of a momentous dichotomy that follows directly from the
00:07:57.040 rejection of the supernatural, namely, every transformation of physical systems that is
00:08:05.760 not forbidden by laws of physics is achievable given the right knowledge.
00:08:11.480 And hence, the rational attitude to the future is what I call optimism, the principle
00:08:17.160 of optimism, namely that all evils are caused by lack of knowledge.
00:08:29.880 If we fail at anything that's physically possible, it's because of some knowledge that we haven't yet created.
00:08:39.720 Admittedly, some of the dangers that we currently foresee are themselves side effects of progress.
00:08:48.960 But trying to slow that down won't help because what do you slow down?
00:08:53.400 In 1900, no one could possibly have foreseen that research in pure physics into the esoteric
00:09:01.040 properties of the element uranium would within 50 years become the centerpiece of everyone's
00:09:07.600 existential fear or that another half century later, the centerpiece would be carbon dioxide.
00:09:18.160 In our future, too, the greatest dangers will inevitably be unforeseen.
00:09:24.760 And the only type of knowledge that's capable of dealing with those is fundamental knowledge
00:09:33.600 Any area of fundamental research could suddenly become essential to our survival: biology,
00:09:40.880 engineering; in World War II, it was pure mathematics.
00:09:48.000 You also need knowledge of how to structure human institutions to retain the miraculous
00:09:53.720 property of keeping civilization stable under rapid change, traditions of criticism and
00:10:01.720 error correction, and we need wealth, meaning the ability to deploy technology in practice.
00:10:14.040 The world doesn't just contain optimists and pessimists and wise and unwise users of technology.
00:10:21.160 It contains enemies of civilization as well, and knowledge is impartial.
00:10:30.520 But the enemies of civilization all necessarily have one thing in common: their ideas cannot withstand criticism.
00:10:40.160 And so they fear error correction and truth, and that's why they resist changes in their
00:10:48.000 ideas, which makes them less creative and slower to innovate.
00:10:53.920 So our defense against the existential danger from malevolent uses of technology, the only defense, is rapid innovation.
00:11:04.200 The good guys must use their only advantage to stay ahead.
00:11:13.600 Well, I'm typecast as a pessimist because of the book I wrote, which I called Our Final
00:11:27.600 Century, with a question mark; the publisher removed the question mark, and then the
00:11:32.040 American publishers retitled the book as Our Final Hour, as they like instant gratification.
00:11:40.080 But my theme in that book was that although Earth is 45 million centuries old, this is the
00:11:49.080 first century in which one species, ours, can determine the planet's future.
00:11:55.760 We've entered an era that's called the Anthropocene.
00:12:00.200 And the difference between this and previous eras is that we are in an interconnected world.
00:12:08.040 The whole world can be affected by local activities.
00:12:12.400 We depend on electric power grids, air traffic control, international finance, globally dispersed supply chains.
00:12:20.320 And all these are at risk: unless they are highly resilient, their manifest benefits could
00:12:26.200 be outweighed by catastrophic breakdowns cascading through the system, real-world analogs of what happened in 2008 to the financial system.
00:12:37.760 Air travel can spread a pandemic within days, and social media can spread panic and rumor.
00:12:47.040 And I think that, as David admitted, knowledge doesn't always lead to benign applications.
00:12:58.480 I do worry that as technology gets more powerful, there is a serious and new downside.
00:13:06.840 Advances, for instance, in diagnostics, vaccines and antibiotics are good for dealing with pandemics, but the same science has a dark side.
00:13:17.560 For instance, just two years ago, researchers showed it's surprisingly easy to make the influenza virus more virulent and transmissible.
00:13:27.280 And last October, the US government decided to stop funding these so-called gain of function
00:13:33.200 experiments, and concerns about the new CRISPR technique for gene editing have been in
00:13:39.200 the news because of the Chinese work on human embryos.
00:13:42.800 Well, these researches are just early instances in the still under-regulated field of synthetic
00:13:48.880 biology, which is burgeoning; so-called biohacking is pursued even as a hobby.
00:13:57.360 And this is, I think, worrying, because whereas it's hard to make a clandestine H-bomb, biotech
00:14:06.120 involves small-scale, dual-use technology, and millions will one day have the capability
00:14:12.040 to misuse it, just as they can misuse cybertech today.
00:14:16.960 And we've talked about regulations, but whatever regulations are imposed on prudential or
00:14:23.520 ethical grounds, they can't be enforced any more than the drug laws can.
00:14:29.880 Anything that can be done will be done somewhere by someone.
00:14:35.520 So one of the most intractable challenges to all governments will stem from the rising
00:14:40.240 empowerment of tech-savvy groups, or even individuals, by bio- and cyber-technology.
00:14:47.520 There's a risk from error as well as terror; the global village will have its village idiots, and they'll have global range.
00:14:55.960 And I think this is a huge challenge to governance, which is going to lead to tension between
00:15:06.440 freedom, privacy, and security. What about another fast-advancing technology, robotics and machine intelligence?
00:15:12.080 These are disrupting the labor market, but robots are still clumsy: they can't tie your shoelaces
00:15:17.480 or cut your toenails, but they're advancing fast.
00:15:22.280 And DeepMind, a company that was bought up by Google, recently created a machine
00:15:27.880 that can figure out the rules of all the old Atari games without being told, and then
00:15:34.200 play them better than humans. And this is a step towards generalized machine learning.
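DeepMind's Atari player learned by deep reinforcement learning; as a rough illustration of the underlying idea, learning what to do purely from trial, error, and reward, without ever being given the rules, here is a minimal tabular Q-learning sketch in Python. It is only a sketch under assumptions: the `env` object and its `reset`/`step`/`actions` interface are invented for illustration, not DeepMind's system.

```python
import random
from collections import defaultdict

def q_learn(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Learn action values from observed rewards alone, never from the game's rules."""
    q = defaultdict(float)  # (state, action) -> estimated long-run value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Mostly exploit the best-known action; occasionally explore at random.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # Nudge the estimate toward observed reward plus discounted future value.
            best_next = max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```

The only game-specific input is the reward signal, which is what lets the same learner be applied across many different games, the sense in which this is a step towards generalized machine learning.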
00:15:38.160 And of course, we've heard about Google's driverless car, and about the military use of autonomous
00:15:44.440 drones. And some of those with the strongest credentials in AI think this field needs regulation.
00:15:52.440 And of course, in looking decades ahead, we must keep our minds open, or at least ajar,
00:15:57.280 to prospects that may now seem science fiction.
00:16:01.280 How can we ensure that ever more sophisticated computers don't go rogue? If they could infiltrate
00:16:07.120 the internet and the internet of things, they could manipulate the rest of the world.
00:16:12.600 Well, these are the kind of threats, which make me, although an optimist about what technology
00:16:18.400 can do, a pessimist about what will actually happen given political and psychological realities.
00:16:28.240 And these are threats caused by the empowerment of individuals, but I want to turn to another
00:16:33.360 kind of concerns we have, which are those threats caused collectively by a rising and
00:16:40.240 more demanding population of humans, affecting the global ecology, etc.
00:16:58.800 Future generations are clearly harmed if fish stocks dwindle,
00:17:03.480 or if we lose the plants in the rainforest whose genes can be useful to us.
00:17:06.640 For some, threats to biodiversity are worrying for another reason: the environment
00:17:14.600 has a value over and above its benefit to us humans.
00:17:18.680 It is as if we were destroying the book of life before we've read it.
00:17:22.080 And to quote the great ecologist E.O. Wilson: if our despoliation of nature causes mass extinctions,
00:17:28.760 he says, it's the sin that future generations will least forgive us for.
00:17:34.200 And this is a real threat, a sixth extinction, and these threats are aggravated by climate change.
00:17:42.560 I want to finish up with a few words about climate change, because it's an issue where the real disagreements are not mainly scientific.
00:17:52.120 Climate projections are still uncertain, but even amongst those who accept the IPCC consensus, there's a wide spread of views about policy.
00:18:01.680 And I think it's important that the debate about climate change is only partly a debate about the science.
00:18:09.160 It's far more a debate stemming from differences in economics and ethics, especially in
00:18:15.720 how far ahead we look and on the extent to which we limit our gratification for the benefit of future generations.
00:18:24.200 To be more specific, some economists, for instance Bjørn Lomborg and the Copenhagen
00:18:29.560 Consensus, downplay the priority of tackling climate change.
00:18:34.520 But that's because they apply a standard discount rate, as you would in deciding where to invest commercially.
00:18:41.000 And in effect, they therefore write off what happens beyond, say, 2050.
00:18:48.240 And of course, in downplaying climate change on those assumptions, they're being consistent,
00:18:52.480 because almost no one thinks there'll be a real global climatic disaster as soon as that.
00:18:59.920 But if you care about those who live into the 22nd century and beyond, you should apply a much lower discount rate.
00:19:11.120 And then, as Stern, Weitzman, and other economists argue, you would deem it worth paying
00:19:16.320 an insurance premium now to protect future generations against the risk of crossing tipping
00:19:22.840 points and triggering runaway long-term changes like the melting of Greenland's ice.
00:19:28.720 So even those who agree there's a significant probability of catastrophe a century
00:19:34.720 hence will differ in the urgency with which they advocate action today.
00:19:40.080 And the assessment will depend on the discount rate, expectations of future growth, and optimism about future technology.
00:19:46.720 But above all, the assessment depends on an ethical choice:
00:19:51.880 in optimizing people's life chances, should we discriminate on grounds of date of birth?
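To see how much work the discount rate does in this argument, here is a minimal back-of-envelope sketch in Python. The figures are hypothetical, a $1 trillion damage roughly a century ahead, a 5% commercial-style rate against a Stern-style 1.4% rate; only the sensitivity to the rate is the point.

```python
def present_value(future_cost, years, rate):
    """Standard exponential discounting: what a future cost is worth today."""
    return future_cost / (1 + rate) ** years

damage, years = 1e12, 105  # hypothetical $1tn of climate damage, ~a century ahead
for rate in (0.05, 0.014, 0.0):
    print(f"discount rate {rate:.1%}: worth ${present_value(damage, years, rate):,.0f} today")

# At ~5.0% the damage is worth only about $6 billion today, so little insurance
# seems justified; at ~1.4% it is worth about $232 billion, a large premium;
# at 0% it counts in full -- no discrimination by date of birth.
```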
00:19:59.360 Well, for politicians who have to cope with these trends, of course, the immediate
00:20:05.960 trumps the long term, the national trumps the global, and that's why it's hard to
00:20:11.600 keep concerns about climate change high on the agenda.
00:20:16.880 Activists and experts by themselves can't generate or sustain political will.
00:20:22.800 Only if their voices are amplified by a wide public and by the media will these long-term
00:20:28.880 global causes rise high enough on the political agenda.
00:20:34.000 And I'd like to finish with a plug for religion, because I think in this context the churches have a distinctive role.
00:20:46.120 The Catholic Church transcends normal political constraints.
00:20:49.640 There's no gainsaying its global reach, nor its durability and long-term vision, nor its concern for the world's poor.
00:20:57.760 And the Pope's words will have major impact in Latin America, Africa, and East Asia, perhaps more than anywhere else.
00:21:07.680 And he reminds us of our responsibilities to our children, to the poorest, and to our stewardship
00:21:14.000 of the diverse life on earth, a longer-term vision than we get from our politicians.
00:21:19.480 Well, I'm a technical optimist, but I'm a political pessimist, and as I've indicated,
00:21:25.800 I think the unintended consequences of powerful technology could trigger serious, even catastrophic,
00:21:32.760 setbacks to our civilization; we'll have a bumpy ride through this century.
00:21:38.640 And we shouldn't be complacent about the risks being small.
00:21:42.680 It's an important maxim that the unfamiliar is not the same as the improbable.
00:21:48.720 But I don't want to leave you on a despairing note, so I'll conclude with the words of
00:21:55.600 the great biologist Peter Medawar. I quote: the bells that toll for mankind are like the
00:22:02.200 bells of Alpine cattle; they are attached to our own necks, and it must be our fault
00:22:08.320 if they don't make a tuneful and melodious sound.
00:22:24.680 I'm going to bring in, as our first discussant, Matthew Taylor, who is the chief executive
00:22:28.880 of the Royal Society of Arts, and who's thus familiar with this frieze, The Progress of Human
00:22:33.560 Knowledge and Culture, around us, and ask him to say something in response.
00:22:39.240 I've had lots of challenging events here over my nine years at the RSA, but I think trying
00:22:44.920 to follow David and Martin is probably the toughest gig I've ever had.
00:22:49.560 I'm reminded, in your last comments, Martin, of what I call a spot of Gramsci, which is pessimism
00:22:58.000 of the intellect and optimism of the will; that is to say that there are plenty of
00:23:02.440 reasons for us intellectually to share some of your pessimism, but we can do nothing but
00:23:06.760 try our best in these circumstances. And let me refer back to what Cliff said: we use the
00:23:15.000 phrase 21st century enlightenment as our strapline, and if I explain for a moment what
00:23:19.880 that's about, I think it's relevant to this debate.
00:23:23.480 And what we mean by 21st century enlightenment is that we think that there are core values
00:23:28.040 of the enlightenment, the value of autonomy, of universalism, and of humanism, that those
00:23:36.520 values are as important today as they were then, but we think two things, firstly that
00:23:42.880 those values have been, in many ways, hollowed out; what they have come to
00:23:47.200 mean today is much poorer than what the philosophers of the enlightenment intended.
00:23:54.680 And secondly, that even were those ideas as rich as they were in the 18th century, we would
00:23:59.320 still need to renew them for 21st century challenges.
00:24:02.720 So for example, autonomy has been hollowed out and turned into a kind of narrow, shallow consumerism.
00:24:12.520 Universalism is seen too much as being about rights and entitlements and not enough in our view about
00:24:17.680 relationships and empathy and respect, and humanism often turns into a kind of materialistic
00:24:25.960 utilitarianism, a shallow materialistic utilitarianism, and that I think is the relevance
00:24:31.720 to this debate, which is that the enlightenment set in train some very powerful engines
00:24:38.960 of progress, science and technology is one, markets is another, government and bureaucracy
00:24:43.360 is a third, but each of those domains often has difficulty in talking about what the right
00:24:48.320 thing to do is, getting back to the substantive question of what is the nature of human
00:24:55.280 progress. So I think as we return to those enlightenment principles and
00:24:58.880 renew them, a critical one is opening up a wider discourse to discuss the nature of progress.
00:25:05.720 And I think that one of the things, one of the variables, not the only variable, one of
00:25:08.680 the variables, that will determine whether it is optimism or pessimism that is
00:25:14.240 the most appropriate way to think about the future, is whether or not we are able to have that richer public discourse.
00:25:18.400 And that one of the reasons that there is a kind of public fear of a variety of things
00:25:22.760 whether it's artificial intelligence or robotics or genetic manipulation or whatever, is
00:25:26.840 the sense that these things are continuing without that kind of rich public discourse
00:25:30.800 about what this is for and what progress comprises.
00:25:35.760 I might ask Martin and David just if they'd like to respond briefly to that.
00:25:41.200 Well, I'd simply like to say that by expressing pessimism, I didn't want to discourage efforts to avert these dangers.
00:25:50.000 In fact, I'm involved in a new group at Cambridge University, which is trying to do just
00:25:54.600 this, to convene experts, to try and minimize these downsides from new technology and
00:26:00.000 to engage with people with the technology, because as you say, the decisions about how
00:26:05.640 these techniques should be applied are decisions that require extensive prior discussion.
00:26:13.040 They're not just for the experts and as you know far better than me, any political decision
00:26:17.840 involves science, but it involves ethics, economics, and politics as well.
00:26:23.440 Yes, I'm surprised by how much I agree with both people who have spoken, but I think
00:26:31.400 the crucial difference is not what we fear and what might happen, but, as you said, what's to be done.
00:26:42.480 And the optimistic view in the way I define it is that the thing to do is defense,
00:26:52.280 because the bad things will happen, not only the ones we know about. You know, you spoke
00:27:00.640 of what E.O. Wilson said, that our descendants won't forgive us for a mass extinction.
00:27:06.840 Well, there are plenty of things much worse than that, for example, the non-existence
00:27:11.200 of our descendants, which would be worse than that, and things which threaten that could
00:27:18.440 come up, new things that we haven't thought of.
00:27:21.720 And I can't see any alternative to the argument that the defense against that is rapid progress.
00:27:30.400 No, we need to think about these extreme high-consequence, low-probability risks, and not
00:27:37.000 enough thought is being given to them compared to the huge amount of attention being given
00:27:41.840 to minor risks: carcinogens in food, low radiation doses, things like that.
00:27:47.360 There's too much focus on those, and so we're in denial about the less probable but potentially
00:27:55.040 catastrophic risks from these new technologies.
00:27:59.400 So in Matthew's spirit of a richer public debate about all of these issues, I'm going
00:28:02.480 to open up the floor and ask for contributions.
00:28:07.280 It will be very helpful, not least, because we're live streaming this event if you could
00:28:10.160 just briefly say your name and perhaps if you have an affiliation or something you'd
00:28:14.720 just like to say about yourself, that would be fantastic.
00:28:17.680 And then we'll try and move the discussion, as it were, horizontally as well as back and forth.
00:28:28.080 So, Lord Rees, a few months ago you wrote a very powerful piece, I think it was in the
00:28:34.480 Financial Times, saying that some centuries in the future it's inevitable, not possible
00:28:41.680 or likely, inevitable, that the next raft of advances will have to be taken by machines.
00:28:50.120 Surely, in a way, that might be, I'm asking both of the lecturers, could that be a cause for optimism?
00:28:58.080 Can we not hope and try to develop them to act as better saviors of mankind than we've managed to be?
00:29:05.720 Yes, indeed, what I did say was that although the rate at which AI will develop is uncertain,
00:29:17.360 the direction of travel is clear and in a century or two there will be post-human intelligences
00:29:24.840 which would be of an inorganic kind and I think they'll find their main arena probably
00:29:29.760 away from the earth; they won't want to be on a planet, they'll be out there.
00:29:32.800 And if we think of a huge timeline, which has taken four and a half billion years to get
00:29:38.400 to us since the earth formed, and the billions of years ahead, then the next few billion
00:29:44.320 years will be inorganic evolution of these advanced machines, and the organic civilizations
00:29:52.400 that we're part of, in that concertinaed perspective, will be just a few millennia.
00:30:00.000 And as to how one reacts to that, it's interesting the response I got to the article.
00:30:04.640 Some people said, you know, it's wonderful to think that these machines may have even deeper
00:30:09.240 cogitations than humans, et cetera, et cetera, whereas others, I think, implicitly assumed
00:30:14.920 that consciousness was specific to the sort of wet hardware in organic brains and would not emerge in machines.
00:30:25.880 And this, of course, is a philosophical question, and if you think there'd be no consciousness,
00:30:29.880 then you might think it's a valueless future universe, whereas on the other hand, if you think
00:30:33.480 consciousness is just an emergent property, then these creatures with far more elaborate
00:30:39.840 brains than organic brains would have deeper thoughts, et cetera, and we should indeed welcome that prospect.
00:30:55.760 I wanted to ask Professor Deutsch about the principle of optimism, which I think was
00:30:59.120 defined as: evils are all caused by insufficient knowledge.
00:31:05.040 So is it then the case that when we are confronted by people who are strong supporters
00:31:10.400 of Islamic terrorism or people who are fundamentalist deniers of climate or whatever, that
00:31:17.080 the way to deal with it is giving more knowledge to them? Is that your suggestion?
00:31:24.480 Knowledge is not something you can pour from one person into another mechanically.
00:31:30.240 It's an active process on the part of the recipient.
00:31:33.600 So giving them knowledge is rather the wrong metaphor for what needs to be done.
00:31:58.640 We all agree that the future of humanity will give us unknown problems.
00:32:06.240 Problems that we can't even imagine today.
00:32:08.600 So to solve those problems, we need creativity.
00:32:14.400 And if we are to judge between the pessimist and the optimist,
00:32:18.440 I think we have to find out which contributes more to our urge for creativity.
00:32:26.760 I think the problem is, okay, I'll leave it at that.
00:32:36.480 Well, I don't think... I think it's well known that it's hard to make progress if you think the future is hopeless.
00:33:01.440 Some of the greatest buildings that have been left by past civilizations, even up until the
00:33:05.320 last couple of hundred years, have been created for religious purposes, for God, not for man.
00:33:11.840 I wondered how much the erosion of religion has stopped us thinking long-term.
00:33:16.600 Was it only for God and the future and the afterlife, et cetera, that was the reason they built?
00:33:28.320 I mean, if you think back to those who built the cathedrals, they thought the world was only
00:33:33.600 a few thousand years old, that would last only another thousand, but they built cathedrals,
00:33:39.680 which they wouldn't live to see finished, which still inspire us centuries later.
00:33:44.760 And that's why I did mention that I think that the religions are on our side in many
00:33:50.800 of these attempts to promote long-term thinking.
00:33:55.400 But to go back to, if I can just inject something slightly irrelevant about buildings, I
00:34:03.480 think, when we look at the London skyline, what depresses me is not the architecture,
00:34:09.000 but the fact that until 50 years ago, the prominent buildings were all for the public.
00:34:14.080 They were religious buildings, they were railway stations, museums, parliament, et cetera.
00:34:20.440 Now they're either apartments for the transient rich, or banks and things like that.
00:34:27.080 And I think this shows the decline of our cultural civilization.
00:34:30.280 And that's why I'm irritated whenever I see the London skyline.
00:34:35.280 Adam Sutcliffe from the History Department at King's College London. I'd like to ask
00:34:40.600 both speakers to what extent you think that the advance of technology brings with it an
00:34:44.680 inherent tendency to increase the level of inequality in our societies, both political
00:34:49.680 inequality and economic inequality as small numbers of innovators have huge impact and
00:34:54.800 become very rich and very powerful while that technology puts many others out of work and
00:35:01.480 makes them feel at the very least disempowered in society.
00:35:04.480 So to what extent is that true and if it's true, how big a problem is it and what should
00:35:14.080 So I think the luddites were mistaken as history shows.
00:35:18.480 The jobs that they thought were being lost were soon replaced.
00:35:26.600 I think we're going to merge with the machines rather than be taken over by them.
00:35:31.520 And merging with machines began about 6,000 years ago when writing was invented.
00:35:40.720 We are completely different people knowing writing than we were before that.
00:35:49.480 Apart from that, technology and knowledge are impartial.
00:35:59.000 I mean, I think in the long term David may be right, but in the medium term, clearly there
00:36:03.200 is a concern about certain jobs being eroded and inequalities.
00:36:08.040 And of course, the jobs being eroded by machines are not just blue collar, but things
00:36:14.040 like medical diagnosis and routine legal work and things like that.
00:36:20.440 And there is clearly a risk of greater inequality if the money earned by the robots accrues to a small elite.
00:36:28.600 And I personally think that under governments of any complexion that might happen.
00:36:35.160 But I think this is an extra incentive for massive redistribution.
00:36:42.400 So the money earned by the robots is used to provide dignified and secure work for the kind
00:36:48.160 of jobs that anyone can do, which machines can't do as well, like caring, teaching assistants,
00:36:55.040 gardening in public parks and things like that.
00:36:57.160 So I think there'll be a stronger need for massive redistribution in order to provide a fairer society.
00:37:04.440 And remember, if we don't have a fairer society, if we have more embittered people, then
00:37:09.080 all the threats I mentioned earlier, of individuals using bio-terror or cyber-terror to cause disruption,
00:37:21.600 will be aggravated. So we have an interest in ensuring there are fewer embittered people, and that's another reason for redistribution.
00:37:28.720 At this point, I'd like to bring in Nezahat Galtakin, who is an advisor to Tech City's Future
00:37:35.360 Fifty programme, and who has agreed to join us today as a discussant.
00:37:37.520 I just thought this might be a moment to focus a little bit more on technology as part of this conversation.
00:37:47.040 So I consider myself an optimist, but I was told recently that pessimists are educated optimists.
00:37:55.920 So maybe I haven't been educated enough, but I've been fortunate to be around the
00:38:01.040 technology sector for the last 16 years, because I found myself in Silicon Valley in the late
00:38:08.080 90s, when Google was started on campus while I was at Stanford.
00:38:13.880 So I'm a huge believer in technology and innovation, because I believe it does democratize
00:38:22.720 access to information and acquisition of knowledge.
00:38:26.280 So I feel compelled to disagree with the statement made earlier, that technology
00:38:34.240 creates perhaps a few individuals who tend to be super wealthy and whatnot.
00:38:42.840 There are three billion active Internet users, and to put things in perspective also,
00:38:48.920 there are as many mobile phones as the number of people living on Earth today.
00:38:54.200 So it's roughly seven billion and there are over two billion active social media accounts.
00:39:00.840 Now we can argue whether it's being used properly or abused or whatnot, but it's there
00:39:06.520 and it facilitates access to information and sharing of experiences and knowledge between
00:39:14.360 And technology and innovation have advanced and reinvented a number of traditional industries:
00:39:24.120 look at retail, how we purchase some of the items that we choose to purchase today,
00:39:31.280 or the financial services industry, the way that we interact with our banks, for example.
00:39:38.600 And we tend to hear a lot about how tech is a very disruptive force, i.e. eliminating
00:39:45.120 some of the traditional industry positions and so forth.
00:39:50.240 But I think equally we have to look at it the way the technology has advanced some of
00:39:55.400 these industries, the way that data is being processed, the way that data is being provided.
00:40:05.240 And maybe to put things in perspective again: it took 75 years for the telephone to reach 100 million users.
00:40:16.960 It took the world wide web seven years to reach 100 million users, it took Facebook about 2.3 years,
00:40:26.400 and Candy Crush Saga, I don't know if you're familiar with this game, I don't play it, took even less.
00:40:32.960 So the pace is exponential: with access to that information and the availability of
00:40:41.280 digital tools, things can change quite quickly.
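Taking the quoted adoption figures at face value, here is a back-of-envelope sketch of the implied growth rates (the one-million-user starting base is an arbitrary assumption):

```python
def annual_growth(start, end, years):
    """Compound annual growth rate needed to go from start to end users."""
    return (end / start) ** (1 / years) - 1

# Quoted times to reach 100 million users, from a nominal base of 1 million.
for name, years in (("telephone", 75), ("world wide web", 7), ("Facebook", 2.3)):
    print(f"{name}: {annual_growth(1e6, 100e6, years):.0%} per year")
# telephone ~6%/yr, the web ~93%/yr, Facebook ~640%/yr: each wave compounds
# far faster than the last, which is the sense in which the pace is exponential.
```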
00:40:46.400 And of course, we talk about artificial intelligence and what it does essentially, and some
00:40:52.000 of the things that we've seen over the years people might argue to be evolution, but
00:40:57.320 in many ways I think they have been revolution also.
00:41:02.080 And what the computing power does today is not only to synthesize data, but to be in a position to predict.
00:41:12.400 So that's why we talk about predictive analytics, for example.
00:41:16.840 I do agree with the point that when there is a new way of doing things, it equally provides
00:41:27.840 on the flip side, perhaps a situation where it opens us to abuse, like cyber attacks, for example.
00:41:37.640 Now on the flip side, of course, it's a huge industry today that employs millions
00:41:43.360 of people as well, the companies that provide security software solutions to us as consumers
00:41:49.920 and to corporations and to governments as well.
00:41:53.640 So obviously, there are the bad people attacking our information, privacy, and so forth.
00:41:59.400 But that's an industry in its own right, and there are companies being started to counter them.
00:42:06.840 And perhaps just to finish off, I was watching this video clip of former President Clinton interviewing Elizabeth Holmes.
00:42:20.040 She is one of the most famous entrepreneurs in the U.S. in the medical technology field.
00:42:28.400 She dropped out of college at the age of 19 and started as a company called Theranos,
00:42:34.640 which provides cost-effective blood tests to a wider market.
00:42:43.800 And the idea is providing access to information at the most important time that it's needed.
00:42:50.920 So that's the medical information, so that people can act on it to prevent some of the worst outcomes.
00:42:59.320 And the second person on that interview was Jack Ma, the founder and CEO of Alibaba.
00:43:09.040 And the main point that they made, which I firmly agree, was the power of information
00:43:16.400 technology to help economic progress and fight against inequality.
00:43:24.200 Because now we are empowering individuals to become entrepreneurs in their own right.
00:43:31.240 So there are platforms like Alibaba.com or eBay or others, which we call online market
00:43:37.480 places, enabling individuals to start their own businesses.
00:43:41.840 So there might be one person in rural China selling apples to the people in Beijing, for example.
00:43:50.280 So I think that economic impact is really important.
00:43:53.880 And right now, roughly 10% of UK GDP is digital economy.
00:44:01.320 Financial services, the sector I came from originally, used to represent roughly 20%
00:44:07.240 before the Lehman bankruptcy; I used to work there as well, so I went through that process.
00:44:12.320 So the digital economy and technology as a sector is becoming one of the biggest contributors to GDP.
00:44:21.000 And that's in developed economies as well as in developing economies,
00:44:27.760 and the average at the moment across the G20 is 5.3%.
00:44:32.360 So the UK is already almost twice as high in terms of the digital economy's contribution to GDP.
00:44:39.720 So again, I'm an optimist as far as technology and innovation is concerned.
00:44:44.440 And hopefully, people will be using it more effectively and efficiently as opposed to abusing it.
00:45:04.880 So Professor Deutsch said, knowledge is impartial.
00:45:12.040 And you, Lord Rees, said it all depends on ethical choices.
00:45:17.760 So my question would be: is all we are talking about tonight really about ethics, and maybe power,
00:45:26.480 and not technological fixes and markets and economies?
00:45:31.360 What are they driven by? Isn't the driver an underlying ethical issue?
00:45:37.760 Well, of course, any new technology poses new challenges.
00:45:50.320 For instance, biology is posing a whole lot of new challenges which we didn't have until
00:45:57.680 a few years ago, about modifying the germline and all that.
00:46:01.480 So we have to confront new challenges which are stimulated by technology.
00:46:09.120 And going back to the issue of digital economy, et cetera, I agree with what was said earlier.
00:46:17.680 I think if you look at the welfare of the average blue-collar worker in this country, they've
00:46:24.080 got worse off in real terms in the last 20 years.
00:46:26.800 The only reason they're probably better off is because of the consumer surplus, as it were,
00:46:31.760 from IT, the fact that they've got multi-channels and internet and all that.
00:46:39.640 And also, it's been a huge benefit to have mobile phones in Africa.
00:46:42.600 But let's not be too complacent because there are all these mobile phones, but there are
00:46:47.840 far fewer people who have clean water or have proper toilets.
00:46:52.480 So one needs to sort of emphasise that it's only a tiny part of what people in Africa
00:46:58.640 need for their development, although it's a big plus.
00:47:03.440 Sorry to come in again, but there's something, as I said, I think is important when we
00:47:11.800 talk about the digital economy, which is that I don't think we really understand it fully.
00:47:15.720 We don't know how to respond to it, because its effects are paradoxical in the sense
00:47:22.160 that you're absolutely right, it creates opportunities for millions, hundreds of millions of
00:47:26.960 small entrepreneurs, but it also concentrates power incredibly in the hands of a very small
00:47:33.040 number of people, all of whom live in the same part of the world.
00:47:36.800 And those platforms, the big boys, it seems to me they now have the capital and they have
00:47:44.080 the capacity to attract talent and they have the capacity to continuously, algorithmically
00:47:48.480 improve what they do, in a way which makes it feel almost inconceivable that they could be dislodged.
00:47:54.560 So arguably, we have never in human history had corporate leaders as powerful as the corporate
00:48:01.080 leaders who now live in Silicon Valley, and I don't think we really understand the implications
00:48:05.840 of that, and nobody knows what to do; you look at Europe, there's a
00:48:09.560 kind of bumbling around thinking that they can regulate this.
00:48:11.920 I suspect that's not going to work, but there's a big issue there about the position we now find ourselves in.
00:48:30.040 I've got a question about accident and mistake as a vehicle for knowledge discovery and
00:48:36.440 kind of getting over the human ego when we think about how many things that have transformed
00:48:40.520 society have happened by mistake, unintended positive consequence, unintended consequences
00:48:46.480 I think are often assumed to be negative, but we've been the beneficiaries of lots of them.
00:48:55.000 I think the unintended positive consequences always come as a result of creativity from people.
00:49:01.720 You know, penicillin was discovered because of an accidental experiment which wasn't intended,
00:49:09.840 but plenty of people had observed mold many times in human history.
00:49:14.480 It took a scientist guessing what that was about and guessing that it might have applications
00:49:19.840 and then applying that and developing it and so on.
00:49:23.000 So yes, accidents can be good, but the key that makes knowledge is creativity, not accident.
00:49:33.280 On this particular topic, I'd like to bring in Timandra Harkness, who's very kindly joined us.
00:49:39.720 She describes herself as a comedian and also a radio broadcaster.
00:49:44.280 You have a programme coming up entitled Future Proofing?
00:49:49.160 So would you like to say something from your perspective, Timandra?
00:49:53.760 I once won a competition, which meant I won a laptop.
00:49:58.040 Yeah, I think I'm here mainly because of Future Proofing, which is a Radio 4 series
00:50:03.960 back next spring, where we look at ideas that will shape the future, like the singularity
00:50:09.240 or synthetic biology and I constantly find myself going, wow, the potential for this is amazing.
00:50:16.720 It throws out more questions than it answers and that's a good thing.
00:50:22.640 I was really struck by the thing you said right at the beginning, David, about technological fixes.
00:50:30.560 But I think this throws up one of the great ironies of this discussion because I agree with
00:50:35.280 you: things that should be quite straightforward technological solutions to recognized problems,
00:50:41.680 and I would suggest nuclear power as one of those, are rejected as technological fixes,
00:50:49.000 with the suggestion that that's not the real solution, that there should be a solution involving us changing how we live.
00:50:55.600 But I think that the root of that actually is not a suspicion of the technology.
00:51:00.120 It's a suspicion of the people who would be using the technology, or even of people in general.
00:51:07.720 So the idea is that if you gave people nuclear power, they would just use more energy, when
00:51:15.640 they ought to use less. And I think, ironically, that suspicion of people at the root of that complaint
00:51:22.800 is not something that is subject to a technological fix.
00:51:26.840 And I think, in fact, Lord Rees put his finger on that when he said, I'm a technological optimist but a political pessimist.
00:51:33.480 And I think for that, we do need to get back to what Matthew was suggesting, the much
00:51:37.600 wider and richer discussion of the enlightenment, the idea that actually it's not just about
00:51:44.720 harnessing knowledge to create technology to do more things better.
00:51:51.120 There is, you know, there's half of the autonomy idea at the root of the enlightenment,
00:51:56.560 which is about material freedom, if you like: the freedom that we all have now to travel
00:52:01.360 the world, or to be literate, or to communicate with each other, and not to die in childhood,
00:52:07.160 and things like that really do liberate us from the shackles of nature in some major ways.
00:52:14.160 But there is also the other side of freedom, which is the freedom to debate, the freedom
00:52:19.360 to experiment, the freedom to take risks, freedom of thought.
00:52:23.560 And those freedoms are much more tricky and they're much more about having arguments
00:52:29.240 and negotiating and accepting that maybe we don't all agree on what progress is and what the future should look like.
00:52:37.280 And those are not things that you can just have straightforward knowledge about.
00:52:41.160 You know, we don't know what the best form of progress is, because we in this room won't all agree.
00:52:48.680 I'm certain some of you have a vision of the future which is a clean city full of people
00:52:53.640 cycling and being vegetarians, and I have to say that's my idea of hell. And knowledge isn't
00:53:01.080 going to settle that. We just have to sit down and have an argument about it, preferably over a glass of wine.
00:53:13.720 I have to say I rather liked the thought of the cycling in the city. But would anyone like to come in?
00:53:21.520 So, I work at the University of Oxford as a physicist, and I'm actually working with David.
00:53:28.640 So I think there was a point that was touched on by both of you which it would be nice to explore.
00:53:39.120 How much do your positions, the optimistic one and the pessimistic one, depend on the assumption
00:53:45.880 that there are or there are not fundamental limits to knowledge creation and, you know,
00:53:54.480 for the person who, I mean, for the position that actually argues that there are such limits,
00:54:04.680 I mean, what would enlightenment mean in a world where there were fundamental
00:54:12.880 limitations, physical limitations, to how much knowledge we can create
00:54:18.200 and how many problems we can solve? So that's my question.
00:54:24.720 My position depends entirely on there not being such limits, well, or 99% on there not being such limits.
00:54:32.560 I think even if there were a fundamental limit to the human capacity to improve things, we
00:54:41.320 are nowhere near hitting that limit. It seems to me that we've got enough knowledge now
00:54:49.520 to provide a good world for the 7 billion people on it now and there's a gap between
00:54:55.240 what we could do and what is actually happening with the present knowledge.
00:55:00.440 So, extra knowledge is not a prerequisite for providing a decent life for everyone; it's a question of applying what we already have.
00:55:08.000 Of course, greater knowledge will be a bonus, particularly for health and better IT.
00:55:14.840 There's another question which I think perhaps you'll have in the back of mind which
00:55:18.680 is about the limits to knowledge, and here I think David and I disagree. I think
00:55:24.000 there are a lot of aspects of physical reality which our brains just aren't up to comprehending.
00:55:31.240 So I think we are going to hit the buffers in terms of what we can understand.
00:55:35.040 Maybe a computer could compute, but in terms of what we can actually grasp, I think there
00:55:39.440 are going to be limits on many problems, but I think we can't use that, even if it's true,
00:55:48.480 as an excuse for not improving the world because, as I say, the knowledge we have now is
00:55:53.480 more than enough to produce a far better world.
00:55:55.920 So this is about an asymmetry between what a computer can compute and the cognitive
00:56:02.480 capabilities that we have; you're seeing a widening gap opening up, and you're not seeing that, David?
00:56:08.680 Cognitive abilities are just another computation, and we will merge with computers.
00:56:14.960 By merging with computers, I don't mean a sort of cyborg science fiction thing, I mean the
00:56:20.160 same sort of thing that happened when we invented writing: we understand
00:56:24.360 things via, let's say, writing down an equation on a piece of paper, which we couldn't do in our heads.
00:56:32.040 And so paper is just another one of those transhuman technologies which expand what we can do,
00:56:37.240 and computers are as well; there's no difference.
00:56:42.120 So transhuman, as opposed to whatever, I'm not sure what the difference is.
00:56:48.680 There are hands up, so let's take a question there, if that's okay.
00:56:58.680 So I'm fascinated by the panel's comments and the questions.
00:57:05.000 Of course, this all begins with: what is enlightenment?
00:57:08.800 And one of the questions I want to invite you to reflect upon is what is knowledge.
00:57:14.400 It seems to me that a number of the assumptions here have been that knowledge is scientific or technological.
00:57:20.920 And I wonder if you could reflect a little bit on the potential of that, the limits of that,
00:57:27.200 and where, to go back to an earlier point, humanistic knowledge figures
00:57:31.440 into your version of, and vision of, the enlightenment today and in the future.
00:57:38.240 It's essential, as I said, because science alone can't possibly progress unless society
00:57:45.600 is such as to be stable under what science produces.
00:57:49.840 So we're going to have to create knowledge about human institutions as well, forever.
00:57:57.680 Paul Bonner, a Fellow of the RSA. And the question is: there's agreement or disagreement
00:58:08.200 about optimism or pessimism, but is there agreement about what the purpose of trying to
00:58:13.680 seek enlightenment actually is: to improve the world?
00:58:21.680 We might just continue that conversation. Who's driven to answer that? Martin?
00:58:28.080 We all have our vision of how the world could be, and of what we want to aspire to do
00:58:32.320 as individuals, and that is different for different people; people of religion may have a different perspective.
00:58:39.400 But I think we all have views on how we would like things to be, and we're all aware that
00:58:43.600 there's a gap between the way we'd like things to be and the way things are.
00:58:47.800 And the question is, to what extent can collective action and politics narrow that gap?
00:59:02.040 I graduated a few months ago, so forgive me if I sound a bit naive.
00:59:05.720 But it seems like knowledge is sort of like a currency to buy some enlightenment, so to speak.
00:59:14.080 But there's a small elephant in the room with Julian Assange this week, and WikiLeaks.
00:59:30.200 Knowledge is, I don't think that question takes into account that knowledge is impartial.
00:59:39.080 So it's not information, there's an infinite amount of information around, most of it
00:59:44.720 isn't knowledge, and even the knowledge is mostly false.
00:59:48.880 So we just have to keep improving, but I repeat myself.
00:59:55.880 Well, I think what David's saying is that there's a distinction between knowledge of general
1:00:02.160 principles and science, et cetera, and information which may be appropriate for some people
1:00:08.000 to have, but not for others, if it's personal or private information.
1:00:12.120 And that's why this whole issue about information and privacy is so important.
01:00:18.400 I mean, I think it's going to be in more tension as we worry about security, because clearly
01:00:25.840 we need more information to be given to the authorities to ensure security, but we worry about privacy.
01:00:32.840 A question here. Roger Dore, Fellow of the RSA.
01:00:38.520 I'd like to take up an answer to a previous question about how we tackle ISIS and other fundamentalist groups.
01:00:47.440 And the answer you gave was knowledge, but you couldn't do it directly, it had to be done indirectly.
1:00:53.080 I'm rather intrigued as to what are the indirect methods for tackling these very big
1:00:58.680 issues at the present time, using knowledge?
01:01:01.280 Well, I don't know, I'm a physicist. I do know that in the past, civilization
01:01:12.600 has managed to civilise uncivilised societies; at the end of the Second World War, it
01:01:22.040 was done spectacularly well. But most attempts to do this have failed.
1:01:31.400 And I don't know why, but this is a type of knowledge that we need to have.
01:01:39.760 All right, my name's Chia, I'm just a visitor, a member of the public. My question, the point,
1:01:56.120 very short point I wanted to make is that sometimes I feel that in society there's too
1:01:59.360 much emphasis placed on being clever, as opposed to being wise. I do appreciate it's
01:02:04.080 harder to teach someone how to be wise, so I was just wondering if you had any thoughts on that.
01:02:10.560 Well, I mean, I think that is a very important distinction. Certainly those of us who
01:02:16.160 spend our lives among academics, who are supposed to be clever in some sense, fully
01:02:22.600 realise that these people should not be let loose on politics. They don't have more
01:02:27.440 than average wisdom in more general areas. And so clearly, there is this distinction,
01:02:33.800 and judgment is a separate quality from deep knowledge or specialised knowledge. And
01:02:40.720 that is why, in fact, I'm ambivalent about the clamour to have more scientists in politics.
01:02:47.760 We need to have a public which is more widely informed about the basic ideas of science,
01:02:53.080 so as to be responsible citizens and to address the issues that have a scientific dimension,
01:03:00.120 but it's not clear to me that we benefit much from having more professional scientists in politics.
01:03:06.120 It's a surprising thing to hear you say, since we have so few in this country. Well,
01:03:11.960 I mean, I'd rather a politician had a degree in history than a degree in dentistry.
1:03:17.840 I'm just sensing that there's an issue of scale between these two positions because in David's
1:03:37.680 initial formulation, he was asking us to kind of zoom out and say, you know, which species
1:03:43.760 has survived and what hasn't. And it may have been your American publisher that zoomed
1:03:49.800 you into an hour, but I'm sensing that in a lot of your answers, you also kind of zoom
01:03:54.560 in on the notion that there's enough knowledge in the world to make this world as it is a better
01:04:01.600 place. As I'm thinking about it, it seems like a stranger and stranger answer, as if there's
01:04:07.720 a kind of fund of knowledge here, and it's just how we manipulate or distribute it, and
1:04:14.320 that seems fundamentally different from the gesture that David's making. So I don't know
1:04:20.760 whether these are just two opposing stances based on the difference of scale or whether
1:04:25.600 there's some way in which these are in conversation.
1:04:28.200 Well, on that, I mean, surely everyone would agree that we could hugely improve what's
01:04:36.360 happening in the developing world by proper application of existing technology and medicine.
1:04:44.320 But we don't know how. Well, the difficulty is not a scientific difficulty. It's a political
01:04:51.560 difficulty. It's a problem of governance. Yes. Well, maybe it's a knowledge of how
1:05:00.040 to optimize the economy and incentives in those countries so that these things happen.
01:05:05.480 And governance is the most important issue in Africa, probably, impeding development.
01:05:10.400 So we could do that. But I think another point, which actually is a response to an earlier
01:05:16.160 point made over there, is that although there are many issues where different cities could
01:05:24.760 take different attitudes, about bikes and vegetarians, et cetera, the point is there's a growing
1:05:28.920 number of issues which are global, and climate is obviously one of them. And here we
01:05:35.440 do need to have a debate which does embrace as much of the world as possible. We can't each have
01:05:42.240 a choice as to how warm we want the world to be. We've got to agree on that. And there
01:05:49.280 are a few other things, like the use of scarce resources, et cetera, where the same issue applies.
01:05:53.240 So I think there are more issues which have a scientific dimension and also more issues
1:05:59.560 which need to be decided on a scale larger than a nation.
1:06:02.880 As well as scale it's also a question of range. So thinking about the enlightenment and
1:06:10.200 the capacity of the enlightenment to transform knowledge into a kind of optimism through
1:06:13.720 a future orientation. That was generally, as we see around here, quite incremental through
1:06:18.280 human history and on a kind of horizon that wasn't anywhere near as long as the horizon
01:06:23.840 you were describing, right? And so, is it something about our ability to think in
01:06:28.960 enlightenment terms perhaps more than three or four generations ahead, to mitigate against catastrophes
01:06:34.560 that we can barely anticipate, asteroids colliding with our planet? And is it that that's
01:06:39.800 the kind of failure of 21st century enlightenment?
1:06:42.680 Well I think perhaps this is true. We need to think about events which are remote in different
1:06:48.600 parts of the world. But also we do in the context of climate change in particular need
01:06:55.080 to think about what might happen a century hence. And this is not within the range of
1:07:03.200 normal economic decisions. So we do need to have a different kind of economics and think
01:07:10.040 longer term, with a low discount rate. And incidentally, the other thing we need to do is
01:07:14.480 to include natural capital in our estimates of capital, and include the depletion of natural
01:07:24.160 resources as a negative contribution to GNP. And so I think these are ways in which a different
01:07:31.360 attitude to economics could stimulate long-term thinking, but we certainly have to make
1:07:35.920 these changes if we are to address such long term issues.
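[As an aside on the accounting point: the adjustment described here can be sketched in one line. This is a schematic form only; definitions of "green" national accounts vary in the economics literature.

\[
\text{adjusted GNP} \;=\; \text{GNP} \;-\; \delta_{\text{produced capital}} \;-\; \delta_{\text{natural capital}}
\]

On this accounting, running down forests, fisheries, or mineral stocks appears as a cost against national income rather than as pure income.]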
1:07:40.360 Incidentally, on the discount rate: there's just one context where policy implies
1:07:46.560 a zero discount rate, and that's in disposing of radioactive waste, when people talk with
1:07:51.360 a straight face about whether the repository is safe for 10,000 years. That's applying
1:07:56.760 a zero discount rate. And it's ironic that in that context we think 10,000 years ahead,
1:08:03.200 whereas in keeping the lights on we don't think 30 years ahead.
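[To make the discount-rate arithmetic concrete, a minimal worked example; the 3% figure is illustrative only, not a rate quoted by the speakers. At discount rate r, a benefit of size FV arriving t years from now has present value

\[
PV = \frac{FV}{(1+r)^{t}}, \qquad \frac{1}{(1.03)^{100}} \approx 0.05, \qquad \frac{1}{(1+r)^{t}} \to 0 \ \text{ as } t \to \infty \ \text{ for any } r > 0.
\]

So at 3% a benefit a century away counts for only about 5% of its face value today, and a benefit 10,000 years away counts for essentially nothing; only a zero discount rate, the one implicitly applied to the waste repository, weights every horizon equally.]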
1:08:08.600 A tiny point I think is being missed in the debate, and that is that in certain key areas
1:08:13.640 we know less than we used to know. So political leaders know less about how to lead now
1:08:20.560 than political leaders did 40 or 50 years ago. Because the world has become more complex,
1:08:26.200 because populations have become more diverse, because we are less deferential, our political
1:08:31.200 leaders are much more at sea, much less confident of their knowledge about how it is you
1:08:36.000 drive change in societies. So we must factor into this debate that in some ways technological
1:08:41.560 and scientific progress and its consequences make some forms of knowledge go backwards.
1:08:51.400 And you were complaining that internet billionaires have muscled in. You can't complain
1:08:58.040 about that at the same time as saying that politicians now don't know enough.
1:09:04.040 Well, it's because the things that they have got are a lot cleverer than democracy, for
1:09:10.920 example. So that's one of the challenges we're talking about.
1:09:13.600 So you could think of that as a democratization of power, going out of the hands of government
1:09:19.280 to the people. Because those billionaires get their money from people signing up to Facebook
1:09:25.640 and so on, who would easily sign up to something else.
1:09:31.320 I am Peter de Bolla. I'm part of the Reenlightenment project.
1:09:35.920 I think the conversation's got to a really interesting point. When we asked our two panelists,
1:09:41.480 I mean speakers, to talk a little bit about this topic, we gave you the rubric of
1:09:47.320 epistemological optimism. I think what we had in the back of our minds, and what's now
1:09:52.600 come out in the conversation, is the thought that we have to pay attention, perhaps urgently,
1:09:57.600 now at this particular moment in human history, in relation to a number of things that
1:10:02.320 have already been said about the particular position that we have vis-a-vis the speed of
1:10:06.160 change in relation to technology and science and so on. What we had in mind was that we
1:10:11.480 need to take a set of stances to what we call knowledge, what we think of as knowledge.
1:10:17.040 So this is where the idea that you might have an attitude comes in. One of the
1:10:22.360 attitudes that you might take towards knowledge is what we could, for the moment, call optimism.
1:10:26.960 But there are a whole number of other attitudes or stances we could take in respect to
1:10:31.200 knowledge. This connects to what David said, and it also picks up on what Matthew was saying.
1:10:36.480 We might take an attitude in which we wish to curate knowledge. In other words, we wish
1:10:41.920 to preserve the best that we've thought about something. This picks up on David's idea
1:10:47.320 that we progress towards knowledge by getting the right answers, and when we find that they
1:10:53.280 are wrong, we correct them. So there's a curative, a curation, aspect to that. But that's not
1:10:59.480 the only important stance we need to take. We also need to take stances which have something
1:11:05.760 of the ethical in them, which you've spoken about. So I'm not going to go on too long,
1:11:09.920 because I know we're going to wrap up. But one of the real problems, as you know and everyone
1:11:13.600 in the room knows about the climate debate, is that in the current world we're
1:11:18.720 in, in which communication is ubiquitous around the globe and continuous, it is difficult
1:11:24.640 to establish the positions from which so-called scientists, or those whom we trust to have
1:11:30.360 knowledge about this, have the main say. So on climate change, opinion is all over the place:
1:11:36.400 in some places people think it's good, in some cases people think it's bad. But one of the stances
1:11:41.400 that we can take to knowledge is to trust those who advance, as it were, the best explanation.
1:11:48.480 And that's an attitude. In other words, Martin, you can say, looking into your
1:11:54.320 crystal ball, that you fear things may go wrong. The problem with crystal ball gazing
1:11:59.640 is that of course the future hasn't yet happened, and hopefully it will happen, but you
1:12:03.960 say it might not. But it hasn't happened. So you have to develop something of a relationship
1:12:09.320 of trust with the person who says that. So that's another aspect, if you like, of the
1:12:14.960 political or the social, which needs to be put out there on the table in relation to
1:12:19.400 this question about knowledge and our future. Thank you, Pete. I think on that question
1:12:24.920 of good explanations and trusting the explainer, we might ask our two speakers to say a few
1:12:29.720 words in summary. Perhaps starting with you, Martin? Yes. Well, I agree with what Peter
1:12:36.600 de Bolla says. Matthew pointed out that the job of politicians is getting harder and the
1:12:44.760 power they have is getting more limited, also because of multinationals and all that. And
1:12:51.480 they certainly need all the advice they can get. And I think one consequence of what Peter
1:12:56.360 de Bolla was saying is that there does need to be some system whereby advice can get
1:13:03.480 to politicians from experts. But my impression is that advice given by
1:13:12.720 experts to politicians directly is often not heeded. It's far better if the experts
1:13:19.200 get through to the public and the press. And then there is pressure from MPs' postbags
1:13:26.800 and from the press, and the politicians do respond to that. So that's another reason why
1:13:31.000 I think scientific experts should engage with the wider public and with the press, because
1:13:37.200 that's the way they'll have more influence. So I think we need to involve the public
1:13:43.280 more widely. And one reason I am pessimistic is that the job of politicians is getting
1:13:48.360 very, very difficult and the control they have is getting less, simply because of the
1:13:56.120 dispersal of power, which is in many ways so beneficial.
1:14:00.040 David? Yeah, well, I agree. And I also distrust the idea of scientist kings.
1:14:08.200 I think Plato was very wrong about that. I'm not sure it's fair that I get the first word
1:14:15.280 and the last word. So I'll just say that trusting those who provide the best explanations
1:14:23.120 isn't quite it. The democratization of knowledge via the best explanations would mean,
1:14:32.680 you see, that once some process has decided that something's the best explanation, there's
1:14:38.520 no need to trust anyone, and that's really the whole point: for knowledge to have
1:14:46.600 effects, it shouldn't be filtered through any single source. So I agree with that.
1:14:57.560 Well, we're not giving anyone the last word, because the last words will take place over
1:15:02.120 a drink in the Benjamin Franklin room, and what more appropriate room title than Benjamin Franklin
1:15:07.360 for optimism, knowledge, and enlightenment. So, many thanks to the RSA for hosting
1:15:12.000 this event and generously providing us with a drinks reception afterwards. But please join
1:15:16.040 me in thanking our two wonderful speakers. Thank you very much.