Today's episode of Rationally Speaking is sponsored by GiveWell. GiveWell takes a data-driven approach to identifying charities where your donation can make a big impact. GiveWell spends thousands of hours every year vetting and analyzing nonprofits so that it can produce a list of charity recommendations that are backed by rigorous evidence. The list is free and available to everyone online. The New York Times has referred to GiveWell as, quote, "the spreadsheet method of giving." GiveWell's recommendations are for donors who are interested in having a high altruistic return on investment in their giving.
Its current recommended charities fight malaria, treat intestinal parasites, provide vitamin A supplements, and give cash to very poor people. Check them out at GiveWell.org.
Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and I'm here with today's guest, Anders Sandberg. Anders is a researcher at Oxford, at the Future of Humanity Institute. His background is originally in computational neuroscience; that's what he did his PhD in. But his research now focuses primarily on long-term futures.
What are the plausible, likely, and possible trajectories for humanity in the next centuries or millennia? And what, if anything, can we do to steer those trajectories? Listeners may already be familiar with Anders' work, in part because it came up on a recent episode of the podcast, the episode we did with Stephen Webb on the Fermi paradox. The paper that we discussed at the end of that episode, "Dissolving the Fermi Paradox": Anders was a co-author on that.
And then also, if you follow me on Twitter, I recently shared a paper by Anders on the critical scientific question: what would happen if the Earth were suddenly made of blueberries? So, as you can see, his interests are wide-ranging. But today on the show, we're going to focus on that central thematic cluster of his work, long-term futures of humanity and how to think about them, which I'm very excited to talk about. Anders, welcome to the show.
Thank you, Julia. It's great to be here. Yeah, well, we'll leave Blueberry Earth for another time, but we'll link to the paper, because it's delightful to read and it's gotten a lot of well-deserved attention recently.
It demonstrates that if you want to really get attention for your paper, you should publish it in July, when nobody else is publishing anything. And suddenly interest just explodes.
I mean, the fact that it was about, you know, the blueberry jam and blueberry granita layers of this hypothetical Earth, in very explicit, rigorous scientific detail, I'm sure had something to do with it, aside from the timing. But yeah. So you've published a bunch of papers about long-term futures, and you're currently working on a book on this topic, I understand, called Grand Futures.
That is correct. Excellent. Well, so let's start off in true Rationally Speaking form with an objection. I'm sure you're very familiar with this objection; it's come up a lot. What do you say to the people who object that it's basically impossible to predict the future, and that any attempt to do so is just speculation? You can speculate about what could happen, but we should have pretty low credence in any particular speculation.
And that's, you know, low enough that it's not very actionable to do such speculation. That's sort of a fundamental objection to the central thrust of your research. How do you think about that?
So my main objection to that objection is that we actually do predictions every day. And indeed, that is what we've got brains for. A brain, after all, is an organ intended to make sure that we can eat better without getting eaten. And typically in higher animals, it does that by making various forms of predictions about the near future. The important difference, of course, between the near future and the far future is, well, the far future is sometimes much less predictable.
And the critic would probably say it's nearly always very unpredictable. I disagree. There's actually a fair bit of the far future that is predictable enough that we can say interesting and true things about it that might be useful for actions in the present.
Yeah, I mean, I imagine that's sort of the crux of the disagreement, where people making this objection would say: sure, there are things we can have confidence in, like things that involve just extrapolations of the laws of physics. We can predict that entropy will increase, and we can predict that humanity can't literally live forever, because eventually the universe will expand, et cetera, et cetera.
But in terms of useful or actionable things, I think the belief or the assumption among most people, including most scientists, is that there's just nothing in that intersection. So maybe it would be helpful for you to give a couple of examples of things that you think we can say with some confidence about the future that are non-obvious or non-trivial and interesting.
So I think it's useful to recognize the limits set by the laws of physics. We might, of course, sometimes quibble: do we know all the laws of physics? And pretty obviously we don't, so we might have to update this. But there are a fair number of limits that we have extremely good reasons to believe in. The reason to believe in thermodynamics is not just good theoretical arguments, even though they're very strong, but also a lot of empirical evidence.
It would be exceedingly weird if we found a way of overthrowing that, even though we can't rule it out. So that means that we can to some degree lean on the known laws of physics, and not just the laws of physics in their elegant scientific form, but also the things we have demonstrated to be possible using actual engineering. So there is a very nice conceptual diagram Eric Drexler came up with, where he was outlining a space of possible technologies. And it has a boundary set by the limits imposed by the laws of physics.
And somewhere in the middle of the allowed region, we have a small spot of technology we have achieved. But between that spot and the limits, there is this unknown area of possible technologies. And he argues that quite often we can explore that, because using physics and technology we actually know works, we can demonstrate that if we were to make a particular machine, it would have these particular properties. So even though we might not have computers made out of diamond, or a Dyson shell surrounding the sun, we can actually prove things about them using very standard physics and tell what they would be doing.
And that gives us a bit of knowledge about what's possible. It doesn't tell us what will be done. We can't tell whether we actually will eventually get mature nanotechnology or build a Dyson sphere around the sun, but we can show a fair bit about the properties they must have, and at least some upper or lower bounds on what they could do. So this is one of the approaches I'm using in the book to look at the possibilities of the future: trying to see both what looks like it's ruled out by well-understood laws of physics, and also things where it would be exceedingly weird if they were not possible, because we're already doing smaller versions of them.
It's just a matter of scaling it up if we really, really wanted to.
Interesting. So, at least to my naive eye, this seems non-trivial; it's not obvious, and it's interesting. Is it also actionable? Are there things that we would do differently now because we have deduced that certain technologies are physically possible?
So the most obvious one is the question: should we be spreading out into space, and how quickly do we need to do that? So there are two aspects. The first one is, of course, could you actually settle space? And the second part is, well, how quickly do you need to do it? Because the remote galaxies are becoming more remote every day. The expansion of the universe means that there are parts of the universe we can never reach.
And if we wait too long, we will never be able to reach many remote parts of the universe. So if there is some value in getting there, we might need to start very early. So in this case, you can apply what we know of astrophysics, and relativity theory tells us a bit about the speed of spacecraft. And we can evaluate, for example, how long we can afford to wait. And it turns out that if we wait more than a few hundred billion years, essentially all galaxy clusters are totally separated, and we will never get outside our own cluster.
That still means that, well, if we start within a few billion years, we can get quite a bit of the universe, even though we lose about 17 galaxies per year. Now, whether that's a really good reason to start early or late depends very much both on your theory of value: how much do you think you lose by losing these potentially colonizable galaxies? But also on how much you believe that in the future we can get faster spacecraft.
Because it's quite often much better to wait a long while, if you think that your technology is going to give you a much faster spacecraft, and then go fast, than to set out in a fairly crummy spaceship and get overtaken by everybody else. I see.
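To make the point about remote galaxies becoming unreachable a bit more concrete, here is a toy calculation. In a universe dominated by dark energy (a de Sitter approximation; the real expansion history is more complicated), expansion is exponential and there is an event horizon at roughly c/H, beyond which nothing can ever be reached, even at light speed. The value H0 = 70 km/s/Mpc below is an assumed round number for the Hubble constant, not a figure from the episode.

```python
# Toy de Sitter model of the cosmic event horizon: anything currently
# farther away than about c / H0 can never be reached, even by light
# launched today, because expansion carries it away too fast.

C_KM_S = 299_792.458      # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s per megaparsec (assumed round value)
LY_PER_MPC = 3.2616e6     # light-years per megaparsec

def event_horizon_gly(h0: float = H0) -> float:
    """Distance of the de Sitter event horizon in billions of light-years."""
    mpc = C_KM_S / h0                 # horizon distance in megaparsecs
    return mpc * LY_PER_MPC / 1e9     # convert megaparsecs to Gly

print(f"{event_horizon_gly():.1f} Gly")  # ≈ 14.0 Gly with these assumptions
```

With these assumptions the horizon comes out around 14 billion light-years. The figure of galaxies lost per year mentioned above depends on the detailed cosmology and on galaxy counts near the horizon, which this sketch does not attempt to model.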
So maybe a way to characterize the general usefulness of this kind of theorizing is that it gives us a better sense of what the payoff structure would be for different courses of action: the possible costs and benefits of colonization at a certain point versus a later point, the possible upsides and downsides of humanity dying out now versus not.
And then, in terms of what we do, we use that deduced potential payoff structure, plus our value system and what we value, and then make better-informed decisions about what to do.
Exactly. And it's very useful to understand this structure, because then you can start looking at which things are sensitive to changes in assumptions. For example, what if it turns out that the acceleration of the universe is slower than expected? Does that actually change what we ought to be doing? What if we find out some physics that suggests that spacecraft might actually be slower than we expect? In that case, maybe we should cut down on our ambitions, and so on.
So I think it probably isn't feasible, or worth attempting, to go into a discussion of the technical details of potential space colonization in this episode. But it is probably worth pointing out, because I think this will not be obvious to many listeners, that when you talk about the feasibility of space colonization, you're, I believe, assuming humans being digital: that human consciousness will, at that point in the future, be uploaded onto computers.
And that's why the incredibly long distances in space won't be fatal to this idea. Is that correct?
That is correct. So in my paper with Stuart Armstrong, where I looked at intergalactic colonization, we assumed that you would be using digital consciousness, encoded in rather small spacecraft. And of course, it doesn't have to be humans; it could just be artificial intelligence. It seems to me, after reviewing the literature, that you can probably settle a solar system with biological humans. It's tough in some places, but you can do it. Going to the stars as a biological human, though, is going to be extremely tough.
So although it might be allowed by the laws of physics, it's somewhat unlikely. Unless the future values of society really, really demand that it's biological people doing it, it's probably not going to be done by bio-humans.
Yeah, it's funny how much I think this one detail results in people talking past each other.
Like, I think people like you who discuss space colonization often feel that it's not necessary to explicitly specify that you're talking about digital consciousnesses. But that is not obvious to the listeners, and so they just think it's completely a non-starter that we could colonize the stars as humans. So I always try to call explicit attention to that assumption. Which is a very good practice.
It's a little bit like when people exclaim that, oh, that's impossible, to immediately ask them: in what sense impossible? Impossible as in barred by the laws of physics? Impossible in the sense that it requires unknown technologies or unknown science? Or that we can't do it within the next technology generation? Quite often people move very smoothly between these without thinking. And that produces, again, a lot of people talking past each other. Absolutely, yeah.
And interestingly, there's an additional, in-practice meaning of impossible. When people use the word, what they often mean, when you push them, is: I assign less than 20 percent probability to that. Or, sorry, maybe that's not true when they literally use the word impossible.
But when they say confidently that such-and-such won't happen, they often just mean less than a twenty percent chance, which is often, in fact, what the person saying the thing could happen also believes: that it's, you know, less than twenty percent, but above one percent or something. And so they in fact don't have any real disagreement; they're just using language differently. This is a surprisingly common state of affairs. Yeah, it's a very good point.
So, yeah. And again, I'm sure many listeners will still be skeptical of components of this model, including whether it's reasonable to think that we can have digital consciousnesses. But I'm just going to ask you to take that as a premise for the sake of this episode, and maybe we can discuss it on another episode. So, zooming out a little bit more: again, there are some people, including, I think, one of your co-authors on your most recent paper on long-term trajectories, Robin Hanson, who argue that we should be skeptical about our ability to steer the long-term future intentionally.
And a central part of the argument is: past humans mostly have not been able to steer the future intentionally. So if we think that we can, that's a little bit suspicious. What makes us think that we are in such an unusual position? What makes us different from the reference class? What do you think of that?
I think Robin is right about the general set of humans, because indeed most humans probably don't affect the future in the large-scale sense. But it's not clear that that's true for all humans in all situations, because in particular, we see a lot of path dependencies in history: a fair bit of history where society has been shaped by surprisingly small groups and individuals, sometimes by accidentally or deliberately doing the right or wrong thing at the right moment, sometimes by having deliberate agendas.
What would be an example of a small group of humans intentionally shaping the future, in the way that they intended, as opposed to just doing a thing that had unintended consequences?
So, for example, two good examples of groups that pushed society in particular directions are the Fabian Society and the Mont Pelerin Society. Both of them were fairly successful in pushing forward what we would call a democratic socialist agenda and a libertarian agenda, globally. And they had deliberate aims at doing this; they had a strategy. They were probably also somewhat lucky, because I would imagine there are probably ten groups we've never heard of that had similar ideas and never succeeded.
But in these cases, they actually had the right synergies, managed to make the right choices, and got a big effect. Yeah, so I basically agree with that, and I have other examples in mind myself that I think qualify. Like, I think the Founding Fathers qualify as a small group of people that affected the long-term, or at least the medium-term, future, the multiple-century-long future, in spreading democracy around the world. And I think there are a few examples, at least, of philanthropic foundations, or individual scientists funded by those foundations, affecting the long-term future in really positive ways intentionally, such as, for example, the Rockefeller Foundation, which funded scientists to try to come up with agricultural improvements that would save lives in the developing world.
And one of the people they funded was Norman Borlaug, who sparked the Green Revolution by breeding crops that were much hardier and could feed more people, anyway.
I guess, you know, I've heard people like Robin make this argument before, and I find it a little confusing, just because I'm all for looking at track records to reason about what's realistic for us now to expect. But there do seem to be some really important and obviously true differences that make our generation special. Like, past generations didn't have existing, or near-term likely, technology that could dramatically wipe out civilization. I mean, either literally render us extinct or, even more plausibly, just decimate us.
That's just not a thing that was true in the past. So it doesn't seem unreasonable for us to think we're in a special position, where we can do things that would make those technologies more likely to impact humanity, or less likely. Does that make sense?
Oh, yeah. Of course, being able to wipe out the entire future is, in some sense, one way of making a very big mark in history. Yes, absolutely. Just not a good one. Yeah, for sure.
But then reducing that chance counts too. Like, if you think there are things we could do that would make that more likely, you should also think there are things we could do that would make it less likely, which counts as positively impacting the future of humanity.
Oh, yeah. And I think, in general, the reason our world might be different might have to do with the causal structure of our current situation. So when you think about how to affect the long-term future: if you're in a very noisy and chaotic environment, you might do something, but other small factors are also going to mess up the processes. So in the end, that means you cannot actually push the future in the desired direction, because it's just moving along due to all the other influences.
This is a bit like trying to control the weather by clapping your hands when there are lots of butterflies and other weather patterns going on; you don't have much of a chance of doing anything. In other domains, of course, things are very regular. If you move a rock on the moon, it's going to remain in that spot until it gets hit by a meteorite, probably in hundreds of millions of years. So depending on the environment, you have very different chances.
David Christian, a historian who coined the term Big History, explained this using a rather lovely metaphor: for once, someone used quantum mechanics as an analogy well. Right.
Usually when people use quantum in an analogy or metaphor, they're messing things up. But David really made the point well. He said that individually, history is very quantum mechanical: there is a lot of randomness in the interactions, which means that it's very hard to predict anything. But in the large, many parts of history are actually fairly regular. The growth in wealth, for example, has been exponential for thousands of years, with small deviations that correspond to dramatic events.
So just like quantum mechanics turns into classical mechanics as you scale things up, a lot of our local interactions turn into classical history when you scale them up. Unfortunately, that means that most of our interactions average out; that is why most of our choices won't change the direction of the future. However, sometimes we can deliberately scale up the quantum interactions: that's how transistors work. Transistors, and some experiments generating quantum effects on a macro scale, work because we deliberately set up the conditions.
So the small causal influences on the quantum scale, or the individual scale, can now be scaled up. The interesting thing is, it's not just that we have weapons of mass destruction that might destroy the future. We are also setting up a lot of new ways of having causal impact on each other and on the future. Some of them are probably just increasing the noise level. But I do think we are actually getting better tools for coordination, and mass coordination, which are likely to have strong effects on the future.
Are you talking about the Internet, for example, or something else? The Internet is the most obvious one. And again, the Internet is not one tool; it's a platform that allows us to construct various tools. Social media, again, are complicated, because there are very different styles of using social media, ranging from fake news and a lot of noise in popular culture, over to various ways of rapidly coordinating people for an emergency or for solving scientific problems.
The interesting part here is that it's still early days. After all, social media has hardly existed for more than 20 years. That's less than a human generation, and it probably takes a few generations to figure out how to use any tool well. So we should expect that the power of social media to coordinate people into doing various things is going to grow quite a bit over the coming decades.
And that is interesting, but also very hard for us to predict. But we can probably predict rather well that, yes, it's going to help groups to coordinate. Some of these coordination activities are going to be adversarial, which might lead to a lot of bad effects. But you're also going to see that most of the coordination humans do is intended to reach mutually beneficial goals, and we're going to get better at that. So I think there is good reason to believe that some of these tools are likely to help us actually control parts of the future better.
We might also learn more about which parts of the future can be controlled and which cannot, again going back to that chaotic versus ordered distinction. We can make very accurate predictions about lunar and solar eclipses thousands of years in the future, because of the behavior of classical mechanics in the solar system. But next year's fashion, or next year's stock market? Well, that's a lot of very densely, closely interconnected humans trying to outwit each other.
Trying to make a prediction there is not going to work out that well. But we can recognize this difference and put our money in somewhat safer investments, by realizing that we shouldn't be trusting people who make strong predictions about the stock market. And we might use other data to figure out how to send out our space probes, and be fairly confident that this is going to work out well, because the law of gravity is not changing anytime soon.
Hmm. Switching tacks a little bit.
What do you think about the argument that if we want to affect the future in a positive way, our best bet is not to do anything intentional, but just to cast a wide net and fund a lot of different kinds of scientific research and technological development? The logic being that, looking back at humanity's track record, that's what has caused the world to get better: not, for the most part, intentional attempts to steer the future, but just people investigating and discovering things, people just trying to create value for the near future.
And those things sort of accumulated to increase humanity's capabilities and quality of life and so on over the long run. Why not just keep doing what has worked in the past?
I think there's a lot of truth to that, except that most of the things that were really beneficial were not short-term goods. It was not the people making sure that their own garden was well watered, or inventing a solution to just their own personal problem. It was looking a bit ahead, actually figuring out more general tools, figuring out scientific solutions to problems that were peculiar but maybe not that applicable at the time. So you really want to cast your net much wider than most people normally would, because I think most people would solve problems that are close to them, since people do get the rewards relatively quickly.
The reason you want to cast your net very widely is that, generally, we're pretty stupid, and the universe is way more complicated than what we can get into our brains. So in general, we need to do a lot of experimentation in order to gain the information we need to see where we should be headed. So I'm very much in favor of having people cast this wide net, try to invent various things, and make things better in general. But that doesn't mean it's useless to try to think long-term.
You can start recognizing that some actions do have long-term effects. By understanding ecology, we realize that extinction is forever, or at least until we can de-extinct species, which still requires us to store the genetic material somewhere. Maybe we should get started on that now. We would really want to have this broad understanding, both across time and space. And that certainly leads to a bit bigger planning, because, for example, we can look back and ask: what information have we been missing the most?
What would historians and archaeologists really, really wish they had? And it turns out that a lot of everyday information from the classical world is just gone and remains mysterious to us, because we only have texts written by the people for other purposes. So actually saving your bills and receipts and everyday email might be quite an important thing. We want to store that for the long-term future, even though we can't yet foresee what the purpose will be.
But on the other hand, you also want to know what to strive for, and that requires thinking about fundamental values, thinking about macrostrategy: where would we like to end up? Because that tells you a little bit about where to prioritize your net-casting. You might not know what kind of physics the future really benefits from, but you might notice that maybe we should devote more effort to physics that helps us survive, rather than physics that creates new weapons of mass destruction.
And I think we also need the hope that comes from thinking about the long-term future. If we knew that the future would be just like the present, that there was no way of actually making it better, I think most of us would say: yeah, in that case we might want to save it, but we're not going to feel that strongly about it. But we might, on the other hand, have a hope that it could become amazingly much better.
And that's actually a very good reason not just to try to reduce existential risk, but also to cast vast nets into the murky waters of knowledge and try to see: can we find something shiny here that leads us towards that future?
So, yeah, I want to pull on that thread: the motivating force of learning, or realizing, that the future could be very vast and potentially very positive, and how that would affect our actions today.
What would you say to someone who says: look, if future generations exist, then of course I care about their welfare. I want them to flourish; I don't want them to suffer, or I want to minimize suffering. But I don't particularly care about ensuring that future generations exist. I care about individuals and the welfare of individuals; I don't particularly care about species. How do you respond to that?
Samuel Scheffler has an interesting thought experiment, where he suggests: suppose you knew that a month after you died, the world would disappear. In that case, how would it change your life? And he argues, in his book Death and the Afterlife, that this actually would have strong effects. A lot of our activities don't make sense unless we assume that we care about the future after us. And not just that there is a little bit of future, but that there's actually quite a lot.
A lot of our human activities are very, very long-term-centric. It's not just building cathedrals in the Middle Ages, knowing that it's going to take a century to finish; it's also setting up societies, again, not just for our children but our children's children, because we think that matters. I think it's relatively rare that people actually don't care about future generations coming into being. There are certainly some people who argue that, and there are even antinatalists who argue that we should actually prevent future generations from coming to be, because it's bad for them.
But most of us tend to assume that it's a good thing that there will be future generations. Now, whether we want a lot of them, and what kind of lives they're supposed to have, depends very much on your value theory. So, I used to be in the category of people that I was just referring to, the people who feel like: I want individuals to have high welfare, but I don't particularly care about the continuation of the human species.
Like, one crux for me was: do other humans have strong preferences that humanity continues to exist?
Because I do feel the strong moral intuition that I care about whether people's preferences are satisfied, even if those preferences are about things that happen after their death. And so even if I personally don't feel that I care about the continuation of the human species, it matters to me if millions of other people want the human species to continue.
And it's interesting: it had seemed to me, just from conversations with people about the long-term future, that a lot of people just didn't seem to care.
Like, if you talk to people about extinction risks, a lot of people will sort of shrug and say, you know, "Does humanity really deserve to continue?" or "Does it really matter once I'm dead?" or things like that. But you're suggesting that people's revealed preferences say something different: that they would be spending their time in different ways if they really didn't care about the continuation of the species.
Exactly. And quite often people have very odd claims. On one hand, they say they don't care about humanity going extinct at some point, and yet they are very keen on recycling. And again, I guess Robin Hanson would say, yeah, but the recycling is all about signaling. So maybe that's more socially than whether the environment survives, which might be true. But I think the interesting part here is that people tend to have is we are the foremost psychologist would say it's in construal theory that when you think about stuff that's outside your normal life, you think about it in a very different way from the everyday things.
Again, getting back to my initial point about us making predictions, when we make predictions about our everyday life, they work out in a very different way from making predictions about the future, especially the abstract future of a species, whereas nobody around, even when you start, bring it over to something concrete like talking about. So what about my great grandchild? Let's imagine her life in the year 2100. Suddenly you make it concrete and a lot of framing and cognitive biases and modes of thinking that we use in everyday life come into play might not necessarily make for the best way of thinking about the future.
But it's very clear that when people talk in general terms about the long-term future, unless we're careful, they get away with a lot of very sloppy thinking, because most of the time it doesn't matter whether you're accurate about the long term.
Yeah. Can I share with you one other thing that shifted my thinking about how much it matters whether humanity continues to exist over the long haul, and see what you think of it?
It kind of has the ring of fallacious thinking, but it feels intuitively right to me, so I'm quite curious what you'll say.
So basically, when I look back at humanity's past, it seems to me that a large percentage of humans who have lived had pretty rough lives. You know, we didn't have anesthesia; we couldn't really treat infections or a lot of diseases. People didn't have a lot of autonomy; there was a pretty narrow set of things you could do with your life. There was a lot of cruelty and violence.
But all of that toil and tedium and suffering and thwarted preferences was, in a sense, necessary to create modern civilization. And the only way I can not feel really depressed about the suffering that humans went through in the past is to kind of retroactively make it worth it by, like, causing there to be lots of future generations of happy humans, and not just happy, but flourishing, thriving humans.
Or, to put it a different way, it just seems like such a shame if humans went through all these generations of living rough lives and got to the point where they had almost made it possible to bring into existence many more generations of flourishing humans,
but then they just stopped. To give an analogy, to maybe make this a little clearer: let's say you're an adult and you're pretty unhappy with your life and you're considering committing suicide. And then you think back to your parents and maybe your grandparents and remember, gee, they sacrificed a lot and scrimped and saved and gave up a lot of their own dreams in the hopes of giving me a life where I had the possibility to do great things and be happy.
And that's already a sunk cost: they've already spent that time, made that sacrifice. But isn't it kind of a shame if I don't try to make the most of that sacrifice going forward, instead of just giving up?
And so that's sort of how I feel about humanity writ large.
And the reason I say it has the ring of fallacious thinking is that it's basically the sunk cost fallacy. But I think I might endorse it in this case. What do you think?
Yeah, it's a bit like the hope that the future will redeem the past. Yeah, exactly. It's actually quite interesting to flip things around: we can imagine one history that gets better and better and better until it eventually ends, and another history that starts out in a golden age and then gets worse and worse and worse. Both of them, of course, contain the same amount of goodness, but I think most of us would still say the first one is better than the second.
And I wonder why that might be. It's always enjoyable if the future is better than the past, because we're built like that. We don't like disappointment: if I get an ice cream and it's worse than expected, I actually feel much worse than the sheer utility of that slightly crummy ice cream would suggest. And the same thing might be true about the future. We to some extent need the future to improve, because that actually gives us something extra.
The expectation that the future is a better place is super important for us.
And that's awesome. So, OK, zooming out again: I want to see if I can get us to summarize the ways in which it's valuable to theorize about the future.
We talked about how there are useful things we can deduce just from the laws of physics about what technologies will be possible, and what that implies about the potential gains to reap from the colonization of space, depending on when we start.
We also talked about theorizing about possible really good futures and how that might motivate us, why that might give us more motivation to try to ensure the continuation of humanity. I'm sure I'm missing at least one or two, but what would you add to this general list of ways that it's valuable to think about the future?
So I think it's also useful for doing some of the planning that might actually have long-term effects. There is this concept of a "long reflection," which I think is really important. Will MacAskill suggested this: that maybe we need to get our act together before settling the universe, to figure out some ground rules and overall goals, because there might be only that particular window of time where we're all causally connected. Once we go off to distant stars, we're no longer going to be able to coordinate.
So if there are things that we need to coordinate, we need to do that before we get out of reach. I mentioned earlier that we might go so far away that in a few hundred billion years it's absolutely impossible to send signals. But probably long before that, it's going to be hard to get everybody together. Right now we're a third of a second away from each other on this planet, as the photon flies. And it might be that we need to solve certain problems, figure out some philosophical ground rules, or even set some legal or technical ground rules, in order to ensure that we get a good future.
So that's a choice point we need to be aware of. And it's approaching, not approaching super fast, except of course that time is kind of moving at the speed of light, but at some point it's going to be too late to coordinate. Before that, we should have at least made a serious attempt at that coordination. What does that coordination require? We don't know yet; we need to figure that out even before we start coordinating.
So sometimes understanding the structure of the future, in this case the causal structure of an expanding spacetime, tells us something about what agenda we need to set. Hmm, interesting.
OK, well, that's probably as good a place as any to stop. But Anders, before I let you go, I wanted to ask you my favorite question to ask my guests at the end of an episode, which is: is there any book or article or blog, or even just a person, a thinker, whom you have disagreements with but whom you nevertheless think it's valuable to read or engage with? Anything come to mind?
Well, of course, the default answer to that question would always be Robin Hanson, no matter who you're asking. Yeah. But I was thinking of Gustaf Arrhenius, who is also sometimes my boss at the Institute for Futures Studies in Sweden. He's written a very nice paper about life extension, where he argues that basically there's no point in doing life extension from a population ethics perspective, because you get other people instead. Sorry,
when you say "you get other people," you mean his point is that there's nothing particularly good or important about extending the lives of existing people; it's just as good to have those people die and then create new people, more or less?
He's got a few subtle arguments and analyses, since he's a proper philosopher, unlike me. And I disagree with him, but I think it's important for a consequentialist, pro-life-extension person like me to engage with that. He's also, of course, justly famous for his impossibility theorem in population ethics, which is a real headache inducer.
I agree. I think we've discussed that paper on the show before; it's one of my favorite pieces of philosophy, although, yes, I agree it's a real headache inducer. So we'll link to that, and we'll link to the... sorry,
was it a paper that he published, the argument against life extension? He's got a book chapter.
But the paper I'm thinking of is "Life Extension versus Replacement," which is in the Journal of Applied Philosophy, 2008.
Well, we'll link to that as well, as well as to your most recent paper on long-term trajectories, and of course to your blueberry earth paper too.
And of course, thank you so much for being on the show. There were a bunch of dangling threads that we should talk more about in future episodes, but I'm so glad we got you on for an inaugural Rationally Speaking conversation after all.
I think we're going to have a very long future; there's going to be room for so many interesting episodes, so many different trajectories, so many different kinds of blueberries.
Excellent. I can't wait. Well, this concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.