[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and with me is today's guest, Owen Cotton-Barratt. Owen is a mathematician at the Future of Humanity Institute at the University of Oxford, where his work focuses on what I would describe as theoretical questions involved in trying to improve the far future.

[00:00:59]

Owen, welcome to the show. And does that seem like an accurate way to describe your research?

[00:01:04]

That's a bit grandiose-sounding, but that is what the eventual aim is. The future could be a very big thing ahead of us, and if we're able to do things to make it go a bit better rather than worse, that could be quite important. It's hard to work out how, but it's maybe important enough that it's worth further study.

[00:01:29]

What a delightfully affable British way to describe this very important project. I love it. Just so we can get clear on our terms: when we talk about the far future, what kind of timescale are we talking about? Is this, you know, 10 years, 100 years, a thousand years? What order of magnitude?

[00:01:51]

I mean, let's go with 100 years and up. But I really do mean not capping out at millions of years, which is roughly how long humanity has been around for, or even necessarily at billions of years.

[00:02:13]

At the moment, we are an apparently unique force in the universe, because we're not just moving things around following existing patterns, but building things deliberately, as we want them to be. And at the moment, we can do that over most of the surface of a single planet. Right now, we don't have the technology to go out and do it on grander scales.

[00:02:42]

But it is possible that in the future our descendants will develop this technology. And if they do, they could spread out through the universe and be around a long, long time.

[00:02:56]

Yeah, this is kind of a perfect time to be taping this episode, since Elon Musk just announced his grand plan for galactic domination, you know, starting with Mars. Right.

[00:03:09]

He has an ambitious timescale on that. I don't know enough about the details of the technological side of things to know how feasible that is, but it's exciting that people are thinking about this, right?

[00:03:24]

Right. Yeah. Well, we'll stay solidly theoretical in this call and let Elon worry about the technology for now. So to flesh out the terms a little bit: when we're talking about intervening on or affecting the far future, are you focusing more on how to cause good things to happen, like what can we do today to eliminate poverty in 100 years? Or are you more focused on preventing bad things from happening, like how can we reduce the risk of humanity being wiped out by a pandemic in 100 years?

[00:03:59]

Or is that the wrong way to divide up the question?

[00:04:02]

Yeah, I think there is something in this division. Generally we think of different ways the future might go: maybe we go down one path or another, and that's often triggered by some event, which could happen either for better or worse. So thinking in terms of what might shift that direction does seem a useful way of thinking about the future. But there's a bit of an asymmetry between events which look like they'll improve the world and ones which could do very large amounts of damage.

[00:04:41]

And that is that generally people want the good ones and they don't want the bad ones. OK, so what? Well, I think that if our civilization continues and continues to thrive, we will eventually work out how to eliminate poverty, and then we will. So there may be some benefits of doing that a bit earlier rather than later, but as long as we're going in the right direction, we may hope to achieve it eventually.

[00:05:18]

Whereas with some of the bad events, like if we all get wiped out by a pandemic, it isn't necessarily the case that if we avert that, it's just going to happen inevitably later, because there isn't the pressure of people wanting to take actions to make sure that it does happen. In fact, it's quite the reverse: people will want to take actions to try and ensure that it doesn't happen.

[00:05:45]

And as we grow as a civilization and get better capabilities, we may eventually be better placed to make sure that these things never happen. So there's just a period in the middle where we're not yet certain to be able to avoid a potentially large catastrophe, and if it did happen, it could totally throw us off a positive trajectory towards a valuable long-term future.

[00:06:18]

And so this gives some reason to focus on avoiding very large-scale bad outcomes.

[00:06:29]

Yeah, that is an interesting asymmetry. I had sort of casually picked an example of a bad outcome that was about wiping out humanity, as opposed to making the quality of life for humanity a lot worse. Do you think that asymmetry applies even to bad outcomes that don't wipe out humanity entirely?

[00:06:55]

I think it actually applies more weakly there.

[00:06:59]

The distinction which seems particularly important to me here is about whether outcomes are just going to cause a bit of a better or worse world in the short term, or whether their after-effects could reverberate down through future ages, so that they shift us from one long-term trajectory onto another.

[00:07:29]

It's very clear how an event which could cause human extinction would do this. With events which just make people a bit unhappier or a bit happier in the short term, we have much less of a clear story for how they could eventually do that. And maybe there are some intermediate things: if we had something which improved or damaged international cooperation in the future, we might think that's the kind of thing which can affect not just whether

[00:08:07]

we have good trade deals and people are prosperous in the short term, but also whether the world enters into a period of conflict and possibly we wipe ourselves out. And so in that kind of context, I don't see such an asymmetry between the good and the bad events. Right.

[00:08:25]

That makes sense. Yeah. And I will want to get into, at some point in this conversation, the relationship between causing good outcomes in the short term and, you know, affecting our long-term future. Because I guess it's not obvious to me that aiming for the long term is the best way to help the long term, as opposed to aiming for the short term and counting on that to kind of propagate forward. But I'll put a pin in that for now and focus first on our ability to make predictions about the future at all. As it is commonly pointed out, it is very difficult to make predictions, especially about the future.

[00:09:05]

And I think that goes, you know, at least double for the far future. Phil Tetlock, who wrote Superforecasting and ran the Good Judgment Project, describes his attitude towards forecasting as a kind of optimistic skepticism, where he's skeptical about our sort of baseline ability to make predictions about the future accurately and successfully, but at the same time optimistic about our ability to improve at least somewhat on that baseline through careful work, which is pretty close to my take as well.

[00:09:43]

But Tetlock's superforecasters were making predictions on the order of months, not decades or centuries. So I'm wondering, I guess first, where you fall on that optimism-versus-skepticism breakdown, and then also how you think the much larger timescales you're talking about should affect that balance?

[00:10:03]

Yeah, I have a mixed view on this. One thing which is important to note about the kinds of problems that Tetlock's superforecasters, and the other forecasters they were being compared against, were trying to tackle is that they were selected for being kind of hard on this scale of months, because they wanted things where they could actually get differential accuracy between people. And the difficulty of making predictions varies a lot according to the question. If you ask me who the president of the United States will be in 30 years' time,

[00:10:46]

I have no idea. If you ask me, will the sun come up in 30 years' time, I can be pretty confident in saying yes. Right. There's a pretty broad spectrum in between these things.

[00:10:57]

Now, that is an excellent point. What would you say is the least obvious prediction about the far future that you can be reasonably confident in? Does that question make sense? And when you say confident, of course, we have different levels of confidence.

[00:11:19]

I know, it's an infamously fuzzy phrase. Here's one which I will take my stand behind. I'm not entirely confident in it, but I think I am moderately confident in it, and maybe more so than a lot of people: if, as a technological civilization, we stay broadly on a path of continuing to accumulate knowledge and we don't get badly knocked off this course, then I think eventually we will spread out through most of our galaxy, and possibly even other galaxies as well. Some of my colleagues at the Future of Humanity Institute have done some empirical work looking into what it would actually take to get to other star systems, and have been looking at what we can say about the limits of technology.

[00:12:26]

What can we understand from what we know about physics, about physical laws? Well, we think we can't travel faster than light speed. We can calculate the amount of energy that it might take to get to other systems. We can calculate the amount of energy that we can gather from the sun, and we can make comparisons between these. Now, we don't know exactly how efficiently technology will let us convert these things.

[00:12:58]

But we have spent a bit of time asking the question: do there look like there are any blocks here? Are there any things which are just such fundamental obstructions that even with many centuries of improving technology, maybe even thousands of years, we're just never going to be able to overcome them? And it looks like the answer is no. And for this reason, that's my prediction. I'm not going to make any predictions about the timescale of achieving this, because I think that predicting dates for future technology is often quite a lot harder than just predicting what capabilities may eventually be developed.

[00:13:48]

Yeah, absolutely. In fact, I think that predictions about what technologies will never be developed are sort of bolder than predictions about which technologies might plausibly be developed. So I think I agree with you about humans leaving planet Earth in the long term being at least plausible. And it seems to me that having it be plausible is sort of all we need to affect the decisions that we make today. It seems to me we don't actually need to be able to pinpoint when we expect it to happen, or whether it's a, you know, 20 percent or 60 percent or 80 percent chance, because as long as we think the probability is not negligible, that makes the far future of humanity all the more important, because there's so much more potential in humanity spreading throughout the galaxy as opposed to humanity staying on Earth.

[00:14:55]

I don't know how to put this, but I guess if we thought that humanity was just going to stay on Earth, then the chance of us getting wiped out, like our entire species getting wiped out, is much higher, and therefore there's less potential value that we could capture in the far future. Whereas if we think we're going to leave Earth, you know, there's a lot more at stake, and it's really important that humanity doesn't end in the next couple of centuries.

[00:15:18]

Right, that makes sense. There's less chance of making it through and having an extremely long future. And also this:

[00:15:28]

It's a smaller future, and there may be fewer people living flourishing lives in a future where we're confined to just one planet than in one where we can actually spread out and use a much larger fraction of the resources of the universe. There's a question about whether that matters, but I think if we are in a position to create a lot of people who actually have very high-quality lives, then most people would think that it's better to have more of them rather than fewer.

[00:16:02]

Can I just go back to another thing that was in that question you were asking, which is saying it doesn't matter that much about the timeline? I agree, insofar as we're thinking about what's on the table here, like what could we eventually do. But I can think of situations where we might think, OK, the timeline is actually relevant for our decisions today.

[00:16:24]

If we thought that in the next five years we were going to be able to get off the planet and make it to other habitable planets out there in the universe and start getting lots of people to build new homes, we might start being less concerned about climate change on our planet today, right?

[00:16:49]

No, that makes sense. Yeah, I think I just meant to say that we don't need to be confident in timelines in order to be able to act in some way on that prediction. But I agree that the more confident we can be, the more decisions that will affect in the near term. That's right, and it matters for understanding when we need to say collectively, OK, now is the time that we really need to start looking into space travel.

[00:17:21]

Doubling down on this, I'm wondering in general how quantitative you think we can be when we're making predictions about the far future, both in terms of how likely something is and also in terms of how big its effects will be. Like, when people ask me why I'm interested in promoting rationality, or developing strategies to help us get better at improving our judgment and changing our minds and so on, I can tell a very plausible story, I think, about why that will be important for the future of humanity, not to be too grandiose. And I believe that story.

[00:18:01]

But I would have a really hard time quantifying the magnitude of that effect, like how likely it is that, you know, my efforts or the efforts of everyone working on rationality will actually make a difference, and if so, how large the difference will be. And, you know, as a result, I would have a hard time comparing that intervention to other interventions I could be doing instead, like, I don't know, working on cancer research or working to reduce poverty or reduce the risk of a pandemic.

[00:18:33]

So I'm wondering if you have any thoughts on whether we can quantify these things at all, and if not, whether we can still make comparisons between different interventions.

[00:18:43]

Yeah, this is a topic I love. I mean, you're right, it is hard to quantify anything like this. But ultimately we have to make decisions between the different things, and we can try to do that in an unquantified, just informal way, where I think about it and this one feels better, or we can try to

[00:19:14]

have slightly more explicit comparisons by quantification. When we think of quantification, often we think of measurement: I'm going to go out and collect this data, and now I have data to back up what I'm doing. But if we're thinking of improving the rationality of the world as a mechanism for making better decisions about whatever the big important issues are decades down the line, and thereby changing the long-term future of humanity, there's no way we can do a randomized controlled trial on this.

[00:19:48]

So if we are going to quantify it, the numbers that we're using for quantification are themselves not going to be solidly founded. But I think that we can still get somewhere with this. We can try to break the big question of how much the work we're doing now helps in the long term down into smaller pieces that, even if we can't measure them exactly, are a bit closer to things we understand, so our intuitions are a bit better trained on them. Then we can try to make our judgments about those and combine all of these at the end.

[00:20:31]

And that's going to get you numbers which are still pretty uncertain, but I think it can be helpful for having orders of magnitude, and we can compare and notice things in some cases. Sometimes when I do this, I go, actually, this whole direction does look a bit better than I had intuitively thought. And then I go through and check, well, I give some weight to my informal intuitions, which were saying that they didn't think this looked so good.

[00:21:02]

And then I go back and look at the numbers I estimated for the different components, and I'm a bit skeptical about some of them, and I shuffle the numbers around until I feel happier with it as a whole.
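To make the factored-estimate idea concrete, here is a minimal sketch in Python; the factor names and every number in it are hypothetical, chosen only to show how multiplying rough components yields an order-of-magnitude answer that can then be checked against informal intuition.

```python
# Hypothetical factored estimate of a project's long-term value,
# broken into smaller components that intuition handles better.
# All numbers are illustrative guesses, not real estimates.
factors = {
    "people_reached": 1e4,                  # rough guess at audience size
    "prob_person_changes_decisions": 0.02,  # chance a person acts differently
    "prob_decision_matters_long_term": 1e-3,
    "value_if_it_matters": 1e6,             # arbitrary value units
}

estimate = 1.0
for name, value in factors.items():
    estimate *= value

print(f"point estimate: {estimate:.1g} value units")
# Only the order of magnitude is meaningful here; if it clashes with
# informal intuition, revisit the component you trust least.
```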

[00:21:14]

Yeah, it reminds me a little bit of these papers, I think they're by Kahneman and Tversky, but certainly by someone in that field, where they were looking at hiring decisions. They were comparing our sort of default gut hiring decision, where we just use our intuition to decide who's the best candidate, to a much more quantitative method where, you know, we don't go out and collect data, but we do come up with the categories that we think are important to a good hire.

[00:21:44]

We rate people in those categories and then give them a total score across those categories. And that sort of more formalized method did better than the intuitive guessing, the default method. But what did even better than both of them was a method where you use the formal approach to sort of call your attention to things that you hadn't been paying attention to before, hadn't been giving weight to, and then after you go through that whole exercise, you go with your gut, except now it's an informed gut instead of an uninformed gut.
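Here is a minimal sketch of that kind of category scoring, with hypothetical categories, weights, and ratings (none of which come from the studies Julia mentions); the formal score forces attention onto each factor before the final judgment is made.

```python
# Hypothetical structured hiring score: rate each candidate 1-5 on
# pre-chosen categories, then combine the ratings with fixed weights.
weights = {"relevant_experience": 0.4, "communication": 0.3, "analytical_skill": 0.3}

candidates = {
    "Candidate A": {"relevant_experience": 4, "communication": 3, "analytical_skill": 5},
    "Candidate B": {"relevant_experience": 5, "communication": 4, "analytical_skill": 2},
}

for name, ratings in candidates.items():
    score = sum(weights[c] * ratings[c] for c in weights)
    print(f"{name}: {score:.2f}")
# The scores inform rather than replace the decision: after reviewing
# them, one still makes the final "informed gut" call.
```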

[00:22:17]

I think that's exactly right.

[00:22:18]

I see that as being two major mechanisms of benefit from using these formal models. One of them is making sure that it does call our attention to the relevant factors, helping to avert scope insensitivity, where we don't pay attention to exactly which are the most important factors, or how much a large difference in one factor matters compared to another. And the other mechanism, which I think is quite important, is that it allows us to discuss these things better.

[00:22:56]

If I have a disagreement with a colleague where we both just have intuitive judgments about a thing, then we can notice that we disagree, and we can point to things that we think, and maybe we can uncover a lower-level disagreement. But actually it's pretty hard, because we often have similar thoughts and we're just shading things a little bit differently. Whereas if we have got an explicit breakdown and we notice we disagree, then we can go into the explicit breakdowns and find out which factor the disagreement is coming from.

[00:23:32]

And now we can have a productive conversation about the part that we're at least on the same page about.

[00:23:40]

Right. And also potentially identify what information we would need to get in order to settle the crux of the disagreement, which is hard to do if you haven't pinpointed it. Exactly.

[00:23:53]

Well, even if you don't have somebody with a disagreement, you can just keep track of the different variables that are going into your personal model, and you can keep track of how large your kind of

[00:24:06]

error bars are. I mean, in an informal sense, just capturing how uncertain you are about each of the different variables, and then that can help you to know at the end which variables you would have to dive into and explore more to actually decrease your total uncertainty.

[00:24:25]

Right. Speaking of uncertainty about the future, I'm wondering if you think that is an argument for discounting interventions aimed at the far future as opposed to interventions aimed at helping people now. Like, you know, we were talking earlier in this conversation about how there's so much at stake in the future of humanity, and so, you know, it really matters that we not die out now, which I buy. But at the same time, it's just much less obvious that any one thing that we try to do now will actually have good effects for humanity in the future, as compared to saving the life of someone who exists today.

[00:25:13]

So does uncertainty play a role at all when you're trying to compare different causes to each other? It plays a massive role.

[00:25:20]

Sorry, yeah, that was a dumb question. Of course it plays a role. I guess I just mean to ask, how do you incorporate it, or how do you think about it?

[00:25:27]

Yeah, I mean, in some ways I think the whole game is working out how to deal with uncertainty. But this question that you're asking specifically is, when we're thinking about helping people further in the future, does uncertainty in fact wash out the benefit that we might be able to help more people?

[00:25:48]

I think there's something to that, and it bears thinking about pretty carefully. Even over short timescales, we can see this. If I have a plan which is going to help somebody next year, then as long as the steps in the plan are reasonably solid, I can be pretty confident that it will happen, because a year is not a large timescale, and so all of the supporting institutions that it might rely on are likely to still be around.

[00:26:18]

If I have a plan to help people in 30 years, it's more likely that something fundamental will have shifted the ground underneath, so the plan is not going to be well founded. Maybe I'm planning to help people with a health condition in 30 years, but it relies on this specific hospital, and the hospital goes bankrupt and closes down. Or maybe in the meantime we discover a cure for the condition they have, and now we've invested this money early in a long-term preventative measure.

[00:26:56]

But the cure is quick and easy and effective, and we didn't need to bother with that. So this is a kind of low-level background uncertainty which I think cuts through a lot of things, even when we have well-understood mechanisms for how we eventually want to help.

[00:27:14]

Hmm.

[00:27:15]

I do worry about this when I'm thinking about things which might affect the very long term future. If we're looking at tens of thousands of years or millions of years or billions of years, then certainly that's an extremely long time. And I just don't think we can have confident predictions about how things will proceed. But there are cases where I think we can see a broad enough route to either benefit or harm that we don't need to worry about tracking the specifics.

[00:27:50]

If there's an event in 30 years' time which threatens to cause human extinction, I'm pretty confident that if it does cause human extinction, there won't be humans around 100,000 years later. I don't need to kind of discount for my uncertainty between that 30 years and the hundred thousand years.

[00:28:10]

Right. It's possible that if we avoid it and we don't have human extinction then, humans could still go extinct in the following hundred thousand years. That may even be likely. And so there's some discounting that would occur on account of that.

[00:28:30]

There's another level of uncertainty, though, which I think also bites here, which is that there isn't much I can do where I have a very solid, definite plan of how to reduce the chance of extinction from an event in 30 years' time. And this is a case where uncertainty is biting pretty hard. Now, it's only biting over a timescale of something like 30 years, so that may be manageable, but it is dealing with quite unprecedented events.

[00:29:03]

We haven't had humanity go extinct before. We don't know exactly how to manage and deal with this. So that could lead to quite significant amounts of discounting, particularly for very precise plans which need things to go in a particular way in order to eventually work. Right. Do you think that there are any other theoretically sound reasons for discounting the future relative to the present, aside from uncertainty? I mean, I know in practice people do, in fact, prioritize the present far more than the future.

[00:29:48]

To what extent do you think that's rational versus irrational?

[00:29:52]

This is getting into, I mean, there are large debates in economics about this question. There are a number of different reasons why people end up using discount rates. One of the major ones is just that if you're thinking about something like money, or something you can easily convert into and out of money, then we can think about the opportunity cost of our investments. If I can stick the money in savings, then I'll earn some interest on that.

[00:30:25]

And that means that I ought to care less about ten thousand pounds or ten thousand dollars in ten years' time than I care about it today. That doesn't necessarily apply to things like people's lives.
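One worked number makes the opportunity-cost argument concrete; the 5 percent annual return is an arbitrary illustrative assumption, not a figure from the conversation.

```latex
% Present value of a future sum, assuming a constant annual return r:
\mathrm{PV} = \frac{\mathrm{FV}}{(1+r)^{t}},
\qquad
\frac{\$10{,}000}{(1.05)^{10}} \approx \$6{,}139
```

On that assumption, ten thousand dollars arriving in ten years is worth roughly six thousand dollars today, which is one standard source of a positive discount rate for money.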

[00:30:41]

Yeah, I was just trying to figure out the parallel there, but I don't see how that would carry over. It's a bit complicated. Another reason is that in some cases people just say: well, actually, we observe that people have preferences for the present over the future, and what we ought to be doing as a society to promote the social good is to promote the preferences that people actually hold.

[00:31:12]

To the extent that they prefer the present over the future, that gives some reason for us to prefer the present to the future.

[00:31:20]

It's a little circular. I mean, part of what we're trying to do is figure out whether our current preferences make sense or not, so to use them as evidence that they make sense seems circular to me.

[00:31:33]

I am pretty sympathetic to your view here. I'm trying to be fair and like mention all the reasons that people might give.

[00:31:41]

I appreciate that. I think there's a defensible position which says it shouldn't necessarily be our place to judge what people ought to want, and we should just act on the basis of what they do want. I think it's a bit funny, and I think it also has this odd effect that it doesn't necessarily weight the preferences of future people.

[00:32:03]

And if you think that future people matter comparably to present people, then that could be an issue for you.

[00:32:12]

Right. I also heard an interesting argument recently that we should expect, if the world continues to become more developed and wealthier, that far-future people will be better off than people today. And so to make sacrifices today, like to forego economic growth today to reduce pollution that could affect quality of life in the future, is a little equivalent to forcing poor countries to make sacrifices to benefit rich countries, except the countries here are actually generations.

[00:32:45]

So they're separated in time and not in space. What do you think of that?

[00:32:49]

I think there's something to this, and, you know, it's a pretty empirical question how much richer future people will be than we are. But certainly we are much richer than people were three hundred years ago. And if it were just a case of moving money or moving physical resources between one time and another, then I think this argument is pretty clear cut. Nobody thinks, oh, well, the present generation should make sacrifices for the future generations,

[00:33:24]

so let's take something useful like cars and bury a load of cars, and, you know, seal them up properly so that they'll be able to get them out, and then in 300 years' time they can unbury the cars and they'll have all this wealth from us.

[00:33:41]

It's a little bit less clear in cases where there are trade-offs, where we actually just have much more leverage over the future than they will. There are differing degrees of this. In cases where it's just about wealth, we can actually do better than burying a car for them: we can go and put the money in savings, and then we'll have more money for them in the future than we'd have now. But it's not clear whether that is a large enough growth in wealth to cancel out the fact that they'll be richer and will care less about any particular dollar.

[00:34:28]

In the case where we're actually talking about taking steps to reduce a catastrophe which could cause extinction, this leverage argument becomes much stronger. If we could prevent an extinction event in 30 years' time, then people in 35 years' time will benefit a tremendous amount from this, but they have no ability to spend their wealth to try and buy this good. It's only people in the next 30 years who have any leverage over that at all.

[00:35:05]

And it might be that people earlier in that 30-year period have even more leverage than people later, for instance, if there's a long string of actions that you need to take, or if you just need enough time to gather attention to the issue.

[00:35:24]

Right, right. Good point. I had hinted earlier in the conversation about wanting to delve into what kind of approach is most likely to have a good effect on the far future, because, as I said, it isn't obvious to me that aiming to affect the far future is better than just aiming to do good things now, which will sort of indirectly improve the far future. Like, you know, you've talked about looking back over the history of our civilization and how we've gotten more well-off and better developed and so on.

[00:35:59]

I think we've also had, you know, a fair amount of moral progress. And it seems to me that all of that progress was not the result of any one person or entity trying to make the far future better. It was just the result of individuals doing things that seemed good very locally, like this invention will improve the lives of current workers, or this invention will make me rich, or, you know, I want to cure this particular disease.

[00:36:26]

And those interventions did have a sort of snowball effect. Like, the more you reduce disease, the more you reduce poverty and make it easier for people to go to school and get educations, and that increases their ability to develop new technology, and that in turn helps us reduce more diseases, et cetera, et cetera. So it seems like we do have a good track record for doing near-term good things that then have these flow-through effects that end up powerfully shaping the future, even though we weren't aiming for that.

[00:36:57]

And as far as I can see, we don't really have any examples in this other category of aiming to affect the far future. So, I mean, how is that not an argument for just continuing to do what we've been doing successfully so far?

[00:37:11]

We have. I agree.

[00:37:14]

We've done extremely well out of people just following what looks good locally and pursuing that, and then producing a good effect. And I certainly don't think, and I don't think anyone thinks, that we should stop doing that. But quite a lot of our society is set up around encouraging these things. We provide incentives for people to do things which help other people, and people generally feel good about themselves when they do that.

[00:37:58]

And so we have a lot of resources going into this. And that's fantastic.

[00:38:03]

We don't have many people explicitly thinking about whether there's anything we can do to help the long-term future, or even just, among these shorter-term things where we're helping people, whether some of them would be more useful than others from a longer-term perspective. And that may mean that there are good opportunities here to actually help in quite an effective way that nobody is taking, because nobody's giving attention to this.

[00:38:44]

So we have, as promised, been speaking very theoretically. Before we close, I wanted to give you the opportunity to talk in a little more specificity about whether there are any interventions that you feel have a fighting chance of impacting humanity's future in a positive way.

[00:39:03]

Yeah, I think there are a lot of different things we could do which may have some positive benefits for the future, which include just generally looking around for things to help the world go well in the short term.

[00:39:19]

But the ones which I feel most optimistic about are things which target possible ways that things could go wrong in coming decades.

[00:39:36]

So actually this pandemic scenario, hopefully it's unlikely, but I think it is a real issue. The advent of artificial intelligence: again, it's hard to predict timelines, but maybe this is a thing which could come in the next few decades, and if it does, if we get really powerful artificial intelligence, then that might mean that our world looks radically different from how it does today. That could be a great improvement, but people have also outlined scenarios where it could lead to worse outcomes, and trying to make sure that we are placed to get good outcomes from that seems valuable to me. And then there are other interventions which try to position ourselves better for facing future challenges.

[00:40:33]

This could be trying to do research into questions which look like they're going to be particularly important for understanding what's coming in the long-term future and what the dimensions are. It could be trying to improve cooperation at an international level. It could be trying to improve rationality, as you were talking about earlier. I think we can draw a more direct route from improving rationality, particularly among people who may go on to be key decision-makers in coming decades, to making sure that the world goes in a good direction, than we can from some other goals which look like they would just be fantastic in the short run, like curing cancer.

[00:41:28]

There would be a whole lot of good knock-on effects from that, but they're less direct and more diffuse.

[00:41:36]

Yeah, that's my sense as well. Although it is really hard, emotionally and strategically, to make the case for sort of long-term indirect things being more important than saving lives from a disease like cancer.

[00:41:52]

Now, I think that part of the thing which strengthens the case for that is that it is emotionally hard. There are lots of people who go out and do things to help people, and there's a pull towards helping

[00:42:12]

where we can see in a direct way that this is definitely going to help, and I understand how, and I lost a relative to cancer and I don't want that to happen again. And that means that we collectively are already investing in opportunities like this.

[00:42:32]

And so I definitely don't think we should move all our resources away from things like this.

[00:42:39]

But if we are just controlling a few marginal resources and just taking up a couple of extra opportunities, then going for things where we can see a strong, reasoned argument that they could be effective, but where there isn't so much emotional pull, may find things where the low-hanging fruit is yet unpicked. That's a cool and kind of counterintuitive argument for doing counterintuitive things, which makes me very happy.

[00:43:09]

And I think you have to maintain a bit of skepticism when applying it. But I think there's something to it.

[00:43:17]

Yeah, the argument on the margin seems to be such a stumbling block when I have conversations with people about, you know, the most valuable ways to help the world. Because whenever I try to make the marginal argument for causes like researching and trying to prevent existential risks, or investing in infrastructure to make us better at handling risks when they come up, et cetera, people often jump to: well, but if we invested everything in that, then, you know, what are we going to do?

[00:43:49]

Just let people die from poverty and disease now and not try to save them at all? And of course, we're already investing tons of resources into those causes, and I'm talking about a small change on the margin.

[00:44:02]

That's right. And I think that this is partly because the idea of margins isn't one which is quite in the public consciousness.

[00:44:11]

Yeah, I keep looking for another way to phrase it, and I haven't found a good one yet, but I'm not sure it's even just about the words.

[00:44:18]

It's about the ideas. Often people don't have a distinct concept of absolute priority, how much ideally we would devote to this problem collectively, versus marginal priority, given all the decisions other people are already making, like what's particularly valuable to do now.

[00:44:40]

Right. We were ending on a hopeful note and now we're on a frustrating note. I think there's an opportunity.

[00:44:48]

There's an opportunity here for people to learn a new concept which you and I both think is important, and spread this out so it becomes kind of common social knowledge, and then we'll get better decisions as a result.

[00:45:06]

Excellent, I like that. A perfect place to end, before I find a way to accidentally make things frustrating and depressing again. So we'll move on now to the Rationally Speaking pick.

[00:45:32]

Welcome back. Every episode, we invite our guest to introduce the Rationally Speaking pick of the episode. That's a book or article or website that has influenced our guest's thinking in some interesting way. So, Owen, what's your pick for today's episode?

[00:45:48]

My pick is "Probing the Improbable." It's a paper which looks at the question of

[00:45:58]

what we should do and how we should understand it when we have a model which tells us that an event has extremely low probability. It raises this interesting point: if our model says that the probability of an event is one in a billion per year, then straight off we think we're pretty confident this event isn't going to happen. But if it did happen, we would probably not think, oh, we just got extremely unlucky; we would think, I guess our model was wrong. And we should actually think about that in advance.

[00:46:38]

Like, we don't have to wait for that to happen. Beforehand, we should assign some probability to the model that we're using being wrong and factor that into our assessment of likelihood. Interesting.
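The arithmetic behind that point can be spelled out in one line; the specific numbers below are illustrative assumptions, not figures from the paper or the conversation.

```latex
% E: the event; M: "the model is right". Suppose the model gives
% P(E | M) = 10^{-9}, but there is a 10^{-3} chance the model is wrong,
% in which case the event has probability 10^{-4}:
P(E) = P(E \mid M)\,P(M) + P(E \mid \neg M)\,P(\neg M)
     \approx 10^{-9}\cdot 1 + 10^{-4}\cdot 10^{-3} \approx 10^{-7}
```

Under those assumptions the model-error term dominates: the overall probability is about a hundred times the model's own estimate.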

[00:46:55]

And do you think that the way people typically think about these very low-probability events is that they never step outside the model and just sort of go with that estimate?

[00:47:04]

I think that's even one level of sophistication up. Often, at an intuitive level, thinking about very low-probability events, people think, well, it's never happened, and then they just don't think about it anymore. And I think the next level up is they build a model and try to understand what's going on, and we talked about the advantages of models a bit earlier in the discussion.

[00:47:37]

And then I think, yes, they often will say, well, I've put my knowledge into this model. So that tells me what I should think. And they don't notice the ways that it might go wrong.

[00:47:51]

Right. Yeah, excellent. Maybe I should do a whole episode on the promise and peril of models or something, because that's a really interesting thread that just keeps coming up in these discussions. Cool. Well, we'll link to, "Probing the Improbable" was the name, great.

[00:48:08]

It has a longer subtitle as well, but I forget what it is.

[00:48:12]

That's right. Well, we'll put that up on the website, and we'll link to the Future of Humanity Institute page as well. Owen, thank you so much for joining us. This has been a really fascinating conversation.

[00:48:22]

Thanks, Julia. It was a great conversation. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.

[00:48:39]

The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, "Truth" by Todd Rundgren, is used by permission. Thank you for listening.