[00:00:00]

The following is a conversation with William MacAskill. He's a philosopher, ethicist, and one of the originators of the effective altruism movement. His research focuses on the fundamentals of effective altruism, or the use of evidence and reason to help others as much as possible with our time and money, with a particular concentration on how to act given moral uncertainty. He's the author of Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference. He is a co-founder and the president of the Centre for Effective Altruism, CEA, that encourages people to commit to donate at least 10 percent of their income to the most effective charities.

[00:00:43]

He co-founded 80,000 Hours, which is a non-profit that provides research and advice on how you can best make a difference through your career. This conversation was recorded before the outbreak of the coronavirus pandemic. For everyone feeling the medical, psychological, and financial burden of this crisis, I'm sending love your way. Stay strong. We're in this together. We'll beat this thing. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, review it with five stars, support it on Patreon, or simply connect with me on Twitter.

[00:01:19]

at Lex Fridman, spelled F-R-I-D-M-A-N, as usual. I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use the code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar.

[00:01:48]

Since Cash App allows you to send and receive money digitally, peer to peer, security in all digital transactions is very important. Let me mention the PCI Data Security Standard that Cash App is compliant with. I'm a big fan of standards of safety and security. PCI DSS is a good example of that, where a bunch of competitors got together and agreed that there needs to be a global standard around the security of transactions. Now we just need to do the same for autonomous vehicles and AI systems in general.

[00:02:19]

So, again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you get ten dollars, and Cash App will also donate ten dollars to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with William MacAskill. What does utopia for humans and all life on Earth look like for you? That's a great question. What I want to say is that we don't know.

[00:03:06]

And the utopia we want to get to is an indirect one that I call the long reflection. So a period of post-scarcity, where we no longer have the kind of urgent problems we have today, but instead can spend perhaps tens of thousands of years debating, engaging in ethical reflection, before we take any kind of drastic lock-in actions like spreading to the stars. And then we can figure out what is of moral value. The long reflection, that's a really beautiful term.

[00:03:42]

So if we look at Twitter for just a second, do you think human beings are able to reflect in a productive way? I don't mean to make it sound bad, because there are a lot of fights and politics and division in our discourse.

[00:04:02]

Maybe if you zoom out, it actually is civilized discourse.

[00:04:06]

It may not feel like it, but when you zoom out. So I don't want to say that Twitter is not civilized discourse. I actually believe it's more civilized than people give it credit for. But do you think the long reflection can actually be stable, where we as human beings, with our descendant-of-ape brains, would be able to sort of rationally discuss things together and arrive at ideas?

[00:04:30]

I think overall we're pretty good at discussing things rationally.

[00:04:36]

And, you know, at least in the earlier stages of our lives, being open to many different ideas and being able to be convinced and change our views. I think that Twitter is designed to bring out almost all of our worst tendencies. So if the long reflection were conducted on Twitter, you know, maybe it would be better just not even to bother. But I think the challenge really is getting to a stage where we have a society that

[00:05:10]

is as conducive as possible to rational reflection, to deliberation. I think we're actually very lucky to be in a liberal society where people are able to discuss a lot of ideas and so on. When we look to the future, it's not at all guaranteed that society would be like that, rather than a society where there's a fixed canon of values that are being imposed on all of society and where you aren't able to question them. That would be very bad from my perspective, because it means we wouldn't be able to figure out what the truth is.

[00:05:45]

I can already sense we're going to go down nearly a million tangents.

[00:05:48]

But what do you think, if Twitter's not optimal,

[00:05:55]

what kind of mechanism in this modern age of technology can we design where the exchange of ideas could be both civilized and productive, and yet not be too constrained, where there are rules about what you can say and can't say, which, as you say, is not desirable, but yet still have some limits on what can be said and so on? Do you have any ideas, thoughts on the possible future? Of course, nobody knows how to do it, but do you have thoughts of what a better Twitter might look like?

[00:06:28]

I think that text-based media are intrinsically going to be very hard to make conducive to rational discussion, because if you think about it from an informational perspective, if I just send you a text of less than, what is it now, 280 characters, that's a tiny amount of information compared to, say, you and I talking now, where you have access to the words I say, which is the same as in text, but also my tone, also my body language.

[00:07:00]

And we're very poorly designed to be able to assess that. You know, I have to infer all of this context from anything you say. So, you know, say your partner sends you a text and there's a full stop at the end. Are they mad at you? Right. You don't know.

[00:07:15]

You have to infer everything about this person's mental state from whether they put a full stop at the end of a text or not.

[00:07:21]

Well, the flip side of that is: is it truly text that's the problem here? Because there's a viral aspect to the text, where you can just post text nonstop. It's very immediate.

[00:07:35]

You know, in the times before Twitter, before the Internet, the way you would exchange text is you would read books. Yeah. And while that doesn't give body language, it doesn't give tone and so on, it does actually, after some time and some editing, boil down ideas. So, yeah.

[00:07:58]

Is the immediacy and the viral nature out there, which produces the outraged mobs and so on,

[00:08:05]

the potential problem? I think that is a big issue. I think there's going to be this strong selection effect where something that provokes outrage,

[00:08:14]

well, that's high arousal, you're more likely to retweet that, whereas kind of sober analysis is not as sexy, not as viral. I do agree that long-form content is much better for productive discussion. In terms of the media that are very popular at the moment, I think that podcasting is great, where, like, your podcasts are two hours long, so they're much more in-depth than Twitter and you are able to convey so much more nuance, so many more caveats, because it's an actual conversation.

[00:08:54]

It's more like the sort of communication that we've evolved to do rather than kind of these very small little snippets of ideas that when also combined with bad incentives, just clearly aren't designed for helping us get to the truth.

[00:09:06]

It's kind of interesting. It's not just the length of the podcast medium, but it's the fact that it was started by people that don't give a damn about, quote unquote, demand.

[00:09:18]

Yeah. There's a relaxed sort of style, like what Joe Rogan does. There's a freedom to express ideas in an unconstrained way that's very real. It's kind of funny that it feels so refreshingly real to us today. And I wonder what the future looks like. It's a little bit sad now that quite a lot of sort of more popular people are getting into podcasting and, you know, they try to sort of control it.

[00:09:54]

They try to constrain it in different kinds of ways. People I love, Conan O'Brien and so on, different comedians.

[00:10:00]

And I'd love to see whether the real aspects of this podcasting medium persist.

[00:10:07]

Maybe in TV, maybe, you know, maybe Netflix is pushing those kinds of ideas. It's a really exciting world, that kind of sharing of knowledge. Yeah, I mean, I think it's a double-edged sword as it becomes more popular and more profitable, where on the one hand you'll get a lot more creativity, people doing more interesting things with the medium, but also perhaps you get this race to the bottom, where suddenly maybe it'll be hard to find good content on podcasts because it'll be so overwhelmed by, you know, the latest bit of viral outrage.

[00:10:41]

So speaking of that, jumping onto effective altruism for a second: so much of that Internet content is funded by advertisements. Just in the context of effective altruism, we're talking about the richest companies in the world, and they're funded by advertisements, essentially. Google,

[00:11:03]

That's their primary source of income.

[00:11:06]

Do you have any criticism of that source of income? Do you see that source of money as a potentially powerful source of money that could be used?

[00:11:18]

Well, it certainly can be used for good, but is there something bad about that source of money?

[00:11:23]

I think the significant worry is where it means that the incentives of the company might be quite misaligned with, you know, making people's lives better. Where, again, perhaps the incentives are towards increasing drama and debate on your social media feed in order that more people are going to be engaged, perhaps kind of compulsively involved with the platform, whereas there are other business models, like having an opt-in subscription service, where perhaps they have other issues,

[00:12:08]

but there's much more of an incentive to provide a product that its users just really want, because, you know, now I'm paying for this product, I'm paying for this thing I want to buy, rather than I'm trying to use this thing and there's a profit mechanism that is somewhat orthogonal to me actually just wanting to use the product. And so, I mean, in some cases it'll work better than others, I can imagine.

[00:12:41]

In theory, you can imagine Facebook having a subscription service, but I think it's unlikely to happen anytime soon.

[00:12:49]

Well, it's interesting and it's weird now that you bring it up that it's unlikely.

[00:12:53]

For example, I pay, I think, 10 bucks a month for YouTube Red. And I don't get much for that, except just no ads. But in general, it's just a slightly better experience, and I would gladly... I'm not wealthy.

[00:13:13]

In fact, I'm operating very close to zero dollars, but I would pay 10 bucks a month to Facebook and 10 bucks a month to Twitter.

[00:13:20]

Yeah. For some kind of more control. Yeah.

[00:13:24]

In terms of advertisements and so on. But the other aspect of that is data, your personal data.

[00:13:30]

People are really sensitive about this, and I, as one who hopes to one day create a company that may use people's data to do good for the world, wonder about this. One is the psychology of why people are so paranoid.

[00:13:49]

Well, I understand why, but they seem to be more paranoid than is justified at times. And the other is: how do you do it right?

[00:13:56]

It seems that Facebook is doing it wrong. That's certainly the popular narrative.

[00:14:06]

It's unclear to me, actually, how wrong it is. I tend to give them more benefit of the doubt, because, you know, it's a really hard thing to do right, and people don't realize it. But how do we respect, in your view, people's privacy?

[00:14:23]

Yeah, I mean, on the question of how worried people are about use of their data, there's a lot of public debate and criticism about it.

[00:14:34]

When we look at people's revealed preferences, you know, people's continuing massive use of these sorts of services, it's not clear to me how much people really do care. Perhaps they care a bit, but they're happy to, in effect, kind of sell their data in order to be able to use a certain service. That's a great term, revealed preferences.

[00:14:56]

So these aren't preferences you self-report in a survey; this is your actions speaking.

[00:15:01]

Yeah, exactly. So you might say, oh, yeah, I hate the idea of Facebook having my data, but then, when it comes to it, you actually are willing to give that data in exchange for being able to use the service. And if that's the case, then I think, unless we have some explanation about why there's some negative externality from that, or why there's some coordination failure, or there's something that consumers are just really misled about, where they don't realize why giving away data like this is a really bad thing to do, then

[00:15:46]

ultimately I kind of want to respect people's preferences; they can give away the data if they want. I think there's a big difference between companies' use of data and governments having data, where, you know, looking at the track record of history, governments knowing a lot about their people can be very bad if the government chooses to do bad things with it. And that's more worrying, I think. So let's jump into it a little bit.

[00:16:16]

Most people know, but actually, two years ago I had no idea what effective altruism was, until I saw there was a cool-looking event by an MIT group here. I think it's called the Effective Altruism Club, or group. I was like, what the heck is that?

[00:16:36]

Yeah.

[00:16:38]

And one of my friends said, I mean, he said that they're just a bunch of eccentric characters. So I was like, oh, yes, I'm in. So I went to one of their events and looked up what it's about. It's quite a fascinating philosophical and just a movement of ideas. So can you tell me, what is effective altruism?

[00:16:59]

Great. So the core of effective altruism is about trying to answer this question, which is: how can I do as much good as possible with my scarce resources, my time and my money? And then, once we have our best-guess answers to that, trying to take those ideas and put them into practice and do those things that we believe will do the most good. And we're a community of people, many thousands of us around the world, who really are trying to answer that question as best we can and then use our time and money to make the world better.

[00:17:32]

So what's the difference between sort of the classical, general idea of altruism and effective altruism?

[00:17:41]

So normally when people try to do good, they often just aren't so reflective about those attempts.

[00:17:51]

So someone might approach you on the street asking you to give to charity, and, you know, if you're feeling altruistic, you'll give to the person on the street. Or if you think, oh, I want to do some good in my life, you might volunteer at a local place, or perhaps you'll decide to pursue a career where you're working in a field that's kind of more obviously beneficial, like being a doctor or nurse or a health care professional.

[00:18:21]

But it's very rare that people apply the same level of rigor and analytical thinking to doing good that they apply to lots of the other areas we think about. To take the case of someone approaching you in the street: imagine if that person instead was saying, hey, I've got this amazing company, do you want to invest in it?

[00:18:39]

It would seem insane; no one would ever think, oh, of course, I'll just invest in this company. You'd think it was a scam.

[00:18:46]

But somehow we don't have that same level of rigor when it comes to doing good, even though the stakes are more important when it comes to trying to help others than trying to make money for ourselves.

[00:18:55]

Well, first of all, there is the psychology at the individual level that doing good just feels good. And so in some sense, on that purely psychological part, it doesn't matter whether it does good or not. In fact, you don't want to know if it does good or not, because most of the time it won't. So, in a certain sense, it's understandable why altruism without the effective part is so appealing to a certain population.

[00:19:28]

By the way, let's zoom out for a second. Two questions. Do you think most people are good? And question number two is: do you think most people want to do good?

[00:19:42]

So are most people good?

[00:19:43]

I think it's just super dependent on the circumstances that someone is in. And I think that the actions people take, and their moral worth, are just much more dependent on circumstance than on someone's intrinsic character.

[00:19:59]

So there's evil within all of us. It seems like, as in the better angels of our nature,

[00:20:05]

there's a tendency of us as a society to tend towards good: less war, I mean, by all these metrics. What is that, us becoming who we want to be, or is this some kind of societal force? Is there a nature versus nurture thing here? Yeah.

[00:20:22]

So in that case, I just think, yes, violence has massively declined over time. I think that's a slow process of cultural evolution, institutional evolution, such that now the incentives for you and I to be violent are very, very small indeed. In contrast, when we were hunter-gatherers, the incentives were quite large. If there was someone who was, you know, potentially disturbing the social order in a hunter-gatherer setting, there was a very strong incentive to kill that person.

[00:20:55]

And people did. And it was just regarded... you know, 10 percent of deaths among hunter-gatherers arguably were murders.

[00:21:04]

After hunter-gatherers, when you have actual societies, is that when violence can probably go up, because there's more incentive to do mass violence, right?

[00:21:11]

To take over, conquer other people's lands and murder everybody in place and so on.

[00:21:18]

Yeah, I mean, I think the total death rate from human causes does go down. But you're right that if you're in a hunter-gatherer situation, the

[00:21:30]

group that you're part of is very small, so, you know, you can't have massive wars; the massive communities just don't exist.

[00:21:36]

But anyway, the second question: do you think most people want to do good? Yeah, I think that is true for most people. I think you see that with the fact that, you know, most people donate, a large portion of people volunteer. If you give people opportunities to easily help other people, they will take them. But at the same time, we are a product of our circumstances, and if it were more socially rewarded to be doing more good, if it were more socially rewarded to do good effectively rather than not effectively, then we would see that behavior a lot more.

[00:22:11]

So why should we do good?

[00:22:15]

Yeah. My answer to this is that there's no kind of deeper level of explanation. So my answer to kind of why do good is: well, there is someone whose life is on the line, for example, whose life you can save via donating just actually a few thousand dollars to an effective nonprofit like the Against Malaria Foundation. That is a sufficient reason to do good. And then if you ask, well, why ought I to do that, I can just show you the same facts again. It's that fact that is the reason to do good.

[00:22:49]

There's nothing more fundamental than that. I'd like to sort of make more concrete the thing we're trying to make better. So you just mentioned malaria. There's a huge amount of suffering in the world.

[00:23:03]

Is that what we're trying to remove? So is that ultimately the goal?

[00:23:10]

Not ultimately, but is the first step to remove the worst of the suffering? So is there some kind of threshold of suffering that we want to make sure does not exist in the world?

[00:23:23]

Or do we really naturally want to take a much further step and look at things like income inequality? So not just getting everybody above a certain threshold, but making sure that, broadly speaking, there's less injustice in the world, unfairness, in some definition, of course very difficult to define, of fairness.

[00:23:48]

Yeah, so the metric I use is: how many people do we affect and by how much do we affect them? And so, you know, often that means eliminating suffering, but it doesn't have to; it could be helping promote a flourishing life instead. And so if I was comparing, you know, reducing income inequality or, you know, getting people from the very pits of suffering to a higher level, the question I would ask is just a quantitative one: if I do this first thing or the second thing, how many people am I going to benefit and by how much am I going to benefit them?

[00:24:27]

Am I going to move that one person from zero percent well-being to 10 percent well-being? Perhaps that's just not as good as moving a hundred people from 10 percent well-being to 50 percent well-being.

[00:24:40]

And the idea is diminishing returns: the idea that, when you're in terrible poverty,

[00:24:50]

then the one dollar that you give goes much further than if you were in the middle class in the States, for example.

[00:24:58]

Absolutely. And this fact is really striking. So if you take even just quite a conservative estimate of how we are able to turn money into well-being, economists put it as like a log curve, that or steeper. But that means that any proportional increase in your income has the same impact on your well-being. So someone moving from a thousand dollars a year to two thousand dollars a year has the same impact as someone moving from one hundred thousand dollars a year to two hundred thousand dollars a year. And then, when you combine that with the fact that we, the middle-class members of rich countries, are a hundred times richer in financial terms than the global poor,

[00:25:48]

that means we can do a hundred times as much to benefit the poorest people in the world as we can to benefit people of our income level. And that's just this astonishing fact. Yes, it's quite incredible.
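[Editor's note: a minimal sketch of the arithmetic behind that claim, purely illustrative and not from the conversation itself. It assumes the simple logarithmic utility model mentioned above, under which equal proportional income gains yield equal well-being gains, and a marginal dollar is worth roughly a hundred times more to someone roughly a hundred times poorer.]

import math

def wellbeing_gain(income_before: float, income_after: float) -> float:
    # Change in well-being under a simple log-utility model (arbitrary units).
    # This is an illustrative assumption, not a precise empirical estimate.
    return math.log(income_after) - math.log(income_before)

poor_gain = wellbeing_gain(1_000, 2_000)        # doubling from $1k to $2k per year
rich_gain = wellbeing_gain(100_000, 200_000)    # doubling from $100k to $200k per year
print(round(poor_gain, 3), round(rich_gain, 3)) # both ~0.693: same proportional gain, same benefit

# Marginal value of one extra dollar is proportional to 1/income,
# so it's about 100x higher at $1,000/year than at $100,000/year.
print((1 / 1_000) / (1 / 100_000))              # 100.0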

[00:25:58]

A lot of these facts and ideas are just difficult to think about, because there's an overwhelming amount of suffering in the world, and even acknowledging it is difficult. I'm not exactly sure why that is. I mean, it's difficult because you have to bring to mind, you know... it's an unpleasant experience, thinking about other people's suffering. It's unpleasant to be empathizing with it, firstly, and then secondly, thinking about it means that maybe we'd have to change our lifestyles.

[00:26:36]

And if you're very attached to the income that you've got, perhaps you don't want to be confronting ideas or arguments that might cause you to use some of that money to help others. So it's quite understandable in psychological terms, even if it's not the right thing that we ought to be doing.

[00:26:55]

So how can we do better? How can we be more effective? How does data help? In general, how can we do better?

[00:27:04]

It's definitely hard, and we have spent the last 10 years engaged in kind of some deep research projects to try and answer kind of two questions. One is: of all the many problems the world is facing, what are the problems we ought to be focused on? And then, within those problems that we judge to be the most pressing, where we use this idea of focusing on problems that are the biggest in scale, that are the most tractable, where we can

[00:27:36]

kind of make the most progress on that problem, and that are the most neglected; within them, what are the things that have the kind of best evidence, so we have the best guess they'll do the most good? And so we have a bunch of organizations. GiveWell, for example, is focused on global health and development and has a list of seven top-recommended charities.

[00:27:59]

So the idea in general, I'm sorry to interrupt, is... we'll talk about sort of poverty, animal welfare, and existential risk, these are all fascinating topics, but in general the idea is there should be a group... so there's a lot of groups that seek to convert money into good.

[00:28:21]

And then you also, on top of that, want to have an accounting of how well they actually perform that conversion, how well they did in converting money to good, so a ranking of these different groups, a ranking of these charities. Does that apply across basically all aspects of effective altruism?

[00:28:46]

So there should be a group of people, and they should report on certain metrics of how well they've done, and you should only give your money to groups that do a good job.

[00:28:57]

That's the core idea. I'd make two comments. One is just that it's not just about money. So we're also trying to encourage people to work in areas where they'll have the biggest impact. Absolutely. And in some areas, you know, they're really people-rich but money-poor; other areas are kind of money-rich and people-poor. And so whether it's better to focus time or money depends on the cause area. And then the second is that you mentioned metrics.

[00:29:25]

And while that's the ideal, and in some areas we are able to get somewhat quantitative information about how much impact an area is having, that's not always true. For some of the issues, like you mentioned, existential risks,

[00:29:41]

we're not able to measure in any sort of precise way how much progress we're making. And so you have to instead fall back on just rigorous argument and evaluation, even in the absence of data.

[00:29:58]

So let's first sort of linger on your own story for a second. How do you yourself practice effective altruism in your own life? Because I think that's a really interesting place to start.

[00:30:11]

So I've tried to build effective altruism into at least many components of my life. So on the donation side, my plan is to give away most of my income over the course of my life. I've set a bar I feel happy with, and I just donate above that bar. So at the moment, I donate about 20 percent of my income.

[00:30:35]

Then on the career side, I've also shifted kind of what I do, where I was initially planning to work on very esoteric topics in the philosophy of logic, philosophy of language, things that are intellectually extremely interesting, but the path by which they really make a difference to the world is, let's just say, very unclear at best. And so I switched instead to researching ethics, to actually just working on this question of how we can do as much good as possible.

[00:31:05]

And then I've also spent a very large chunk of my life over the last 10 years creating a number of non-profits that, again, in different ways are tackling this question of how we can do the most good, and helping them to grow over time as well.

[00:31:19]

And a few of them... the career selection one, 80,000 Hours, is a really interesting group.

[00:31:28]

So maybe also just a quick pause on the origins of effective altruism. Can you paint a picture of who the key figures are, including yourself, in the effective altruism movement today?

[00:31:44]

Yeah, there are two main strands that kind of came together to form the effective altruism movement. So one was two philosophers, myself and Toby Ord at Oxford, and we had been very influenced by the work of Peter Singer, an Australian moral philosopher who had argued for many decades that, because one can do so much good at such little cost to oneself, we have an obligation to give away most of our income to benefit those in extreme poverty, in just the same way that we have an obligation to run in and save a child from drowning in a shallow pond,

[00:32:22]

even if it would ruin your suit that cost a few thousand dollars. And we set up Giving What We Can in 2009, which encourages people to give at least 10 percent of their income to the most effective charities. And the second main strand was the formation of GiveWell, which was originally based in New York and started in about 2007. And that was set up by Holden Karnofsky and Elie Hassenfeld, who were two hedge fund guys who were making good money and thinking, well, where should I donate?

[00:32:55]

And in the same way as if they wanted to buy a product for themselves they would look at Amazon reviews, they were like, well, what are the best charities? They found there just weren't really good answers to that question, certainly not ones they were satisfied with. And so they formed GiveWell in order to try and work out what are those charities where they can have the biggest impact.

[00:33:16]

And then from there, and some other influences, kind of a community grew. Can we explore the philosophical and political space that effective altruism occupies a little bit? So from the little that I've read of Ayn Rand's work, a long time ago in my own life, and the philosophy of Objectivism she espouses, it's interesting to put her philosophy in contrast

[00:33:43]

Yeah. With effective altruism. So it espouses selfishness as the best thing you can do. Yeah. But it's not actually against altruism; it's just that you have that choice, but you should be selfish in it. Right? Or not.

[00:34:00]

Maybe you can disagree here, but it can be viewed as the complete opposite of effective altruism, or it can be viewed as similar, because the word effective is really interesting: if you want to do good, then you should be damn good at doing good, right? I think that would fit within the morality that's defined by Objectivism.

[00:34:25]

So do you see a connection between these two philosophies, and perhaps other interests in this complicated space of beliefs, that effective altruism is positioned as opposing or aligned with?

[00:34:42]

I'd definitely say that Objectivism, Ayn Rand's philosophy, is a philosophy that's, you know, quite fundamentally opposed to effective altruism. In which way? Insofar as Ayn Rand's philosophy is about championing egoism. And I'm never quite sure whether the philosophy is meant to say that, first, you just ought to do whatever will best benefit yourself, that's ethical egoism, no matter what the consequences are; or, second, there's this alternative view, which is that you ought to try and benefit yourself because that's actually the best way of benefiting society. Certainly in Atlas Shrugged,

[00:35:21]

she is presenting her philosophy as a way that's actually going to bring about a flourishing society. And if it's the former, then, well, effective altruism is all about promoting the idea of altruism and saying, in fact, we ought to really be trying to help others as much as possible, so it's opposed there. And then on the second side, I would just dispute the empirical premise. Given the major problems in the world today, it would seem like a remarkable coincidence, quite a suspicious one, one might say, if benefiting myself was actually the best way to bring about a better world.

[00:35:58]

So on that point, and I think that connects also with career selection that we'll talk about, but let's consider not Objectivism, but capitalism.

[00:36:10]

So the idea that you focusing on the thing that you are damn good at, whatever that is, may be the best thing for the world. Sort of part of it is also mindset, right?

[00:36:25]

Sort of like the thing I love is robots. Yeah.

[00:36:30]

So maybe I should focus on building robots and never even think about the idea of effective altruism, which is kind of the capitalist notion.

[00:36:41]

Yeah. Is there any value in that idea, of just finding the thing you're good at and maximizing your productivity in this world, and thereby sort of lifting all boats and benefiting society as a result?

[00:36:55]

Yeah, I think there are two things I want to say on that. So one is that what your comparative advantage is, what your strengths are when it comes to a career, is obviously super important, because, you know, there are lots of career paths I would be terrible at. If I thought being an artist was the best thing one could do,

[00:37:11]

well, I'd be doomed, just really quite astonishingly bad. And so I do think, at least within the realm of things that could plausibly be very high impact, yes, choose the thing that you think you're going to be able to really be passionate about and excel at, kind of over the long term. Then on this question of, like, should one just do that in an unrestricted way and not even think about what the most important problems are,

[00:37:39]

I do think that in a kind of perfectly designed society, that might well be the case. That would be a society where we've corrected all market failures, we've internalized all externalities, and then we've managed to set up incentives such that people just pursuing their own strengths is the best way of doing good. But we're very far from that society. So if one did that, then, you know, it would be very unlikely that you would focus on improving the lives of non-human animals, which aren't participating in markets, or ensuring the long-run future goes well, where future people certainly aren't participating in markets, or benefiting the global poor, who do participate but have so much less kind of power, from a starting perspective, that their views aren't actually represented by market forces.

[00:38:39]

So by its pure definition, capitalism just may very well ignore the people that are suffering the most, the wide swath of them.

[00:38:46]

So if you could allow me this line of thinking here. I've listened to a lot of your conversations and I find, if I can compliment you, they're very interesting conversations. Your conversation with Joe Rogan was really interesting, with Sam Harris and so on. There's a lot of stuff that's really good out there. And yet when I look at the Internet and look at YouTube, which has certain mobs, certain swaths of right-leaning folks,

[00:39:24]

Yeah. Whom I dearly love.

[00:39:29]

Well, I love all people, especially people with ideas. They seem to not like you very much.

[00:39:39]

So I don't understand why exactly.

[00:39:43]

So my own sort of hypothesis is there's a right-left divide that's absurdly caricatured in politics, at least in the United States, and maybe you're somehow pigeonholed into one of those sides, and, you know, maybe that's what it is. Maybe your message is somehow politicized.

[00:40:06]

Yeah. I mean, how do you make sense of that? Because you're extremely interesting. Like, in the comments I see on the Joe Rogan episode, there's a bunch of negative stuff. And yet if you listen to it, the conversation is fascinating. I'm not speaking as some kind of lefty extremist, it's just a fascinating conversation. So why are you getting that kind of hate?

[00:40:30]

So I'm actually pretty glad that effective altruism has managed to stay relatively un-politicized, because I think the core message, to just use some of your time and money to do as much good as possible, to fight some of the problems in the world, can be, you know, appealing across the political spectrum. And we do have a diversity of political viewpoints among people who have engaged in effective altruism.

[00:40:56]

We do, however, get some criticism from both the left and the right.

[00:40:59]

Oh, interesting. What's the criticism from both? That would be interesting to hear. Yeah. So the criticism from the left is that we're not focused enough on dismantling the capitalist system, which they see as the root of most of the problems that we're talking about. And there I kind of disagree, partly on the premise, where I don't think the relevant alternative systems would do much better for the animals or for the global poor or for future generations,

[00:41:32]

and then also on the tactics, where I think there are particular ways we can change society that would be massively beneficial on those things that don't go via dismantling, like, the entire system, which is perhaps a million times harder to do. Then criticism on the right: in response to the Joe Rogan podcast, there definitely were a number of Ayn Rand fans who weren't keen on the idea of promoting altruism. There was a remarkable set of ideas; just the idea that effective altruism is unmanly, I think, was driving a lot of criticism.

[00:42:09]

Well, let me.

[00:42:11]

OK, so I love fighting. I've been in street fights my whole life. I'm as alpha in everything I do as it gets. And the fact that I said on Joe Rogan that I thought Scent of a Woman is a better movie than John Wick put me into this beta category amongst people who are basically saying that, yeah, it's unmanly, or it's not tough.

[00:42:38]

It's not some, yeah, principled view of strength like that represented by a sportsperson. So actually,

[00:42:47]

how do you think about this? Because to me, altruism, especially effective altruism, is... I don't know what the female version of that is, but on the male side, manly as fuck, if I may say so. So how do you think about that kind of criticism?

[00:43:09]

I think people who would make that criticism are just occupying a state of mind that is so different from my state of mind that I kind of struggle to maybe even understand it, where if something's manly or unmanly, or feminine or unfeminine, I'm like, I don't care.

[00:43:26]

Like, is it the right thing to do or the wrong thing to do?

[00:43:28]

So let me put it not in terms of man and woman. Yes, I don't think that's useful. But I think there's a notion of acting out of fear,

[00:43:38]

Yeah. That or as opposed to out of principle and strength. Yeah.

[00:43:44]

So OK. Yeah.

[00:43:45]

Here's something that I do, you know, feel as an intuition, and that I think drives some people who do find kind of Objectivism and so on attractive as a philosophy, which is a kind of taking power over, taking control of, your own life, and having power over how you're steering your life, and not kind of kowtowing to others, you know, really thinking things through.

[00:44:12]

I find that set of ideas just very compelling and inspirational. And I actually think effective altruism has really scratched that itch for that side of my personality, where you're just not taking the kind of priorities that society is giving you as granted; instead you are choosing to act in accordance with the priorities that you think are most important in the world. And often that involves doing quite unusual things from a societal perspective, like donating a large chunk of your earnings or working on these weird issues about AI and so on that other people might not understand.

[00:44:56]

Yeah, I think that's a really gutsy thing to do. That is taking control, at least at this stage. I mean, that's you taking ownership, not just of yourself, but of your presence in this world that's full of suffering, and, as opposed to being paralyzed by that notion, taking control and saying, I could do something. Yeah.

[00:45:21]

I mean, that's really powerful. But, I mean, sort of the one thing I personally hate about the left currently, that I think those folks detect, is the social signaling. Mm hmm.

[00:45:34]

When you look at yourself, yeah,

[00:45:37]

sort of late at night, would you do everything you're doing in terms of effective altruism if... your name is quite popular, but if your name was totally unattached to it, so if it was in secret?

[00:45:49]

Yeah. I mean, I think I would, to be honest. I think the kind of popularity is, like, you know, a mixed bag, but there are serious costs, and I don't particularly love it.

[00:46:04]

Like, it means you get all these people calling you a cuck, like on the Rogan comments.

[00:46:07]

It's not the most fun thing. But you also get a lot of sort of brownie points for doing good for the world.

[00:46:14]

Yeah, you do.

[00:46:15]

But I think in my ideal life I would be, like, in some library solving logic puzzles all day, and I'd really be, like, learning maths and so on, and, you know, have a good body of friends and so on.

[00:46:28]

So your instinct for effective altruism is not one that, you know, is about communicating socially.

[00:46:36]

It's more in your heart. You want to do good for the world.

[00:46:40]

Yeah. I mean, so we can look back to early Giving What We Can.

[00:46:44]

So, you know, when we were setting up Giving What We Can, me and Toby, I really thought that doing this would be a big hit to my academic career, because I was now spending, at that time, more than half my time setting up this non-profit, at the crucial time when you should be producing your best academic work and so on. And it was also the case at the time that it was kind of like the Toby Ord Club.

[00:47:10]

You know, he was the most popular. There was this personal interest story about him and his plans to

[00:47:14]

donate. And sorry to interrupt, but Toby was donating a large amount. Can you tell just briefly what he was doing? Yeah, so he made this public commitment to give everything he earned above 20,000 pounds per year to the most effective causes. And even as a graduate student, he was still donating about 15 to 20 percent of his income, which is quite significant given that graduate students are not known for being super wealthy.

[00:47:42]

That's right. And when we launched Giving What We Can, the media just loved this as a personal interest story. So the story about him and his pledge was actually the most popular news story of the day, and we kind of ran the same story a year later, and it was the most popular news story of the day a year later, too. And so it really was kind of several years before I was also giving more talks and starting to do more writing, and then especially with, you know, this book, Doing Good Better,

[00:48:16]

then there started to be kind of attention and so on.

[00:48:20]

But deep inside, your own relationship with effective altruism, I mean, it had nothing to do with the publicity, did it? How did the publicity connect with it? Yeah, I mean, that's kind of what I'm saying: I think the publicity came like several years afterwards. I mean, at the early stage when we set up Giving What We Can, it was really just that every person we get to pledge 10 percent is, you know, something like a hundred thousand dollars over their lifetime.

[00:48:52]

That's huge. And so it was just we had started with 23 members. Every single person was just this kind of huge accomplishment.

[00:49:00]

And at the time, I just really thought, you know, maybe over time we'll have 100 members, and that'll be, like, amazing.

[00:49:06]

Whereas now we have, you know, over four thousand members and one and a half billion dollars pledged.

[00:49:11]

That was just unimaginable to me at the time when I was first kind of getting this stuff off the ground.

[00:49:19]

So can we talk about poverty and the other biggest problems that you think, in the near term, effective altruism can attack, each one in turn? So poverty obviously is a huge one.

[00:49:36]

Yeah. How can we help?

[00:49:39]

Yeah. So poverty absolutely is a huge problem: 700 million people in extreme poverty, living on less than two dollars per day, where what that means is what two dollars would buy in the US. So think about that. It's like some rice, maybe some beans; it's really not much. And at the same time, we can do an enormous amount to improve the lives of people in extreme poverty. So the things that we tend to focus on are interventions in global health,

[00:50:10]

and that's for a couple of reasons. One is that global health just has this amazing track record. Life expectancy globally is up 50 percent relative to 60 or 70 years ago. We've eradicated smallpox, which killed two million people every year, and almost eradicated polio. Second is that we just have great data on what works when it comes to global health, so we just know that bed nets protect children and prevent them from dying from malaria.

[00:50:39]

And then the third is just that it's extremely cost-effective. So it costs five dollars to buy one bed net, which protects two children for two years against malaria. If you spend about five thousand dollars on bed nets, then, statistically speaking, you're going to save a child's life. And there are other interventions, too. And so given that people are in such suffering and we have this opportunity to, you know, do such huge good for such low cost,

[00:51:08]

Well, yeah, why not.
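[Editor's note: a rough back-of-the-envelope sketch of the bed-net numbers quoted above, illustrative only. The five-dollars-per-net and roughly five-thousand-dollars-per-life figures are the ones mentioned in the conversation, not precise GiveWell estimates.]

# Illustrative arithmetic based on the figures quoted in the conversation.
cost_per_net = 5          # dollars; one net protects two children for two years (as quoted)
donation = 5_000          # dollars; roughly the quoted cost per life saved
nets = donation / cost_per_net
children_protected = nets * 2

print(f"{nets:.0f} nets, roughly {children_protected:.0f} children protected for two years;")
print("statistically, about one child's life saved, per the figures quoted above.")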

[00:51:09]

So for the individuals, for me today, if I wanted to help with poverty, how would I help?

[00:51:17]

And I want to say, I think donating 10 percent of your income is a very interesting idea, or some percentage, or setting a bar and sort of sticking to it.

[00:51:27]

How do we then take the step towards the effective part?

[00:51:32]

Mm hmm. So you've conveyed some notions, but who do you give the money to?

[00:51:38]

Yeah. So GiveWell, this organization I mentioned, makes charity recommendations, and some of its top recommendations... so the Against Malaria Foundation is this organization that buys and distributes these insecticide-treated bed nets. And it has a total of seven charities that it recommends very highly.

[00:51:59]

So that recommendation, is it almost like a stamp of approval, or are there some metrics? What are the ways that GiveWell conveys that this is a great charity organization?

[00:52:14]

Yeah. So GiveWell is looking at metrics, and it's trying to compare charities ultimately in the number of lives that you can save or an equivalent benefit. So one of the charities it recommends is GiveDirectly, which simply just transfers cash to the poorest families, where a poor family will get a cash transfer of a thousand dollars. And they kind of regard that as the baseline intervention, because it's so simple, and people, you know, know what to do with it to benefit themselves.

[00:52:46]

That's quite powerful, by the way.

[00:52:47]

So before GiveWell, before the effective altruism movement was there, I imagine there was a huge amount of corruption, Mm hmm,

[00:52:55]

funny enough, in charity organizations, or misuse of money. Yeah. So there was nothing like GiveWell before that? No, I mean, there were some... so charity corruption, I mean, obviously there's some; I don't think it's a huge issue. The bigger issue was just focusing on the wrong things. Prior to GiveWell, there were some organizations like Charity Navigator, which were more aimed at worrying about corruption and so on, so they weren't saying these are the charities where you're going to do the most good.

[00:53:25]

Instead, it was like: how good are the charity's financials? How transparent are they? And so that would be more useful for weeding out some of the worst charities.

[00:53:36]

So GiveWell is just taking a step further, sort of in this 21st century of data, and it's actually looking at the effective part. Yeah.

[00:53:46]

So it's like, you know, the Wirecutter for... if you want to buy a pair of headphones, they will just look at the headphones

[00:53:52]

and say these are the best headphones you can buy. That's the idea with GiveWell.

[00:53:56]

OK, so do you think there's a bar of what suffering is? And do you think one day we can eradicate suffering in our world? Yeah, amongst humans? Let's talk humans, but also in general, not just humans. Yeah, actually. So a colleague of mine coined the term abolitionism for the idea that we should just be trying to abolish suffering. And in the long run, I mean, I don't expect it anytime soon, but I think we can. I think that would require quite drastic changes to the way society is structured, and perhaps even, in fact, changes to human nature.

[00:54:39]

But I do think that suffering, whenever it occurs, is bad, and we should want it to not occur.

[00:54:45]

So there's a line, a gray area, between suffering...

[00:54:51]

Now, I'm Russian, so I romanticize some aspects of suffering.

[00:54:55]

There's a gray area between struggle and suffering. So, one, do we want to eradicate all struggle in the world?

[00:55:09]

There's this idea, you know, that the human condition inherently has suffering in it, and that it's a creative force. It's the struggle of our lives, and we somehow grow from that.

[00:55:27]

How do you think about that? I agree that's true. So, you know, often the artists, you know, can also be suffering from major health conditions or depression,

[00:55:41]

or come from abusive parents. Yeah, most great artists, I think, come from abusive parents. Yeah. That seems to be at least commonly the case.

[00:55:50]

But I want to distinguish between suffering being instrumentally good, you know, it causes people to produce good things, and whether it's intrinsically good. And I think intrinsically it's always bad.

[00:56:01]

And so if we can produce these, you know, great achievements via some other means... where, if we look at the scientific enterprise, we've produced incredible things, often from people who aren't suffering, who have, you know, good lives, and instead of being pushed by a sense of anguish, they're being driven by intellectual curiosity.

[00:56:23]

If we can instead produce a society where it's all carrot and no stick, that's better from my perspective.

[00:56:31]

Yeah, I don't disagree with the notion that that's possible, but I would say most of the suffering in the world is not productive.

[00:56:40]

So I would dream of effective altruism curing that suffering. But then I would say that there is some suffering that is productive, that we want to keep, because...

[00:56:52]

But that's not even the focus, because most of the suffering is just absurd.

[00:56:59]

Yeah. I mean, it's to be eliminated. So let's not even romanticize this usual notion I have. But nevertheless, struggle has some kind of inherent value, to me at least. Yeah, you're right, there's some element of human nature that would also have to be modified in order to cure all suffering.

[00:57:19]

Yeah, I mean, there's an interesting question of whether it's possible. So at the moment, you know, most of the time we're kind of neutral, and then we burn ourselves and that's negative. And it's really good that we get that negative signal, because it means we won't burn ourselves again. There's a question like: could you design agents, humans, such that you're not hovering around the zero level, you're at bliss, and then you touch the flame and, oh no, you're just at slightly worse bliss?

[00:57:45]

But that's a really bad state compared to the bliss you are normally in, so you can have, like, a gradient of bliss instead of pain and pleasure.

[00:57:53]

Well, on that point, I think it's a really important point, the experience of suffering, the relative nature of it. Having grown up in the Soviet Union, we were quite poor by any measure in my childhood, but it didn't feel like you were poor, because everybody around you was poor. And then in America, I feel like, for the first time, I began to feel poor.

[00:58:26]

Yeah. Yeah. Because of the way... there are some cultural aspects to it that really emphasize that it's good to be rich. And then there's just the notion that there is a lot of income inequality, and therefore you experience that inequality of income.

[00:58:42]

What do you think about the inequality of suffering that we have to think about?

[00:58:48]

Do you think we have to think about that as part of effective altruism?

[00:58:54]

Yeah, I think things just vary in terms of whether you get benefits or costs from them in relative terms or in absolute terms.

[00:59:03]

So a lot of the time, yeah, there's this hedonic treadmill, where, you know, money is useful because it helps you buy things, or good for you because it helps you buy things, but there's also a status component, too, and that status component is kind of zero-sum. As you were saying, in Russia, you know, no one felt poor because everyone around you was poor, whereas now you've got these other people who are, you know, super rich, and maybe that makes you feel,

[00:59:39]

you know, less good about yourself. There are some other things, however, which are just intrinsically good or bad. So commuting, for example: people just hate it. It doesn't really change knowing that other people are commuting too; it doesn't make it any less bad.

[00:59:57]

But to push back on that for a second, I mean, yes, but also, if some people were, you know, on horseback, your commute on the train might feel a lot better.

[01:00:09]

Yeah. You know, it is relative. Yeah.

[01:00:12]

Everybody's complaining about our society today, forgetting how much better it is, the better angels of our nature, how the technology has improved, fundamentally improving most of the world's lives.

[01:00:26]

Yes. And actually, there's some psychological research on the well-being benefits of volunteering, where people who volunteer tend to just feel happier about their lives. And one of the suggested explanations is that it extends your reference class, so no longer are you comparing yourself to the Joneses, who have a slightly better car, but you realize that, you know, people are in much worse conditions than you, and so now your life doesn't seem so bad.

[01:00:55]

That's actually, on the psychological level, one of the fundamental benefits of effective altruism.

[01:00:59]

Yeah. I mean, I guess the altruism part of effective altruism is exposing yourself to the suffering in the world, which allows you to be happier, and actually allows you, in a sort of meditative, introspective way, to realize that you don't need most of the wealth you have to be happy. Absolutely.

[01:01:25]

I mean, I think in fact there's been this huge benefit for me, and I really don't think that if I had more money than I'm living on, that would change my level of well-being at all, whereas engaging in something that I think is meaningful, that I think is steering humanity in a positive direction, that's extremely rewarding. And so, yeah, I mean, despite my best attempts at sacrifice, I think I've actually ended up happier as a result of engaging in effective altruism than I would have done otherwise.

[01:01:55]

That's an interesting idea. Yeah. So let's talk about animal welfare. Sure. Easy question: what is consciousness? OK, especially as it has to do with the capacity to suffer. There seems to be a connection between how conscious something is, the amount of consciousness, and its ability to suffer. And that all comes into play when we're thinking about how much suffering there is in the world with regard to animals. So how do you think about animal welfare and consciousness?

[01:02:25]

OK, well, consciousness. Easy question. OK, yeah. I mean, I think we don't have a good understanding of consciousness.

[01:02:31]

By consciousness I mean what it feels like to be you, the subjective experience that seems to be different from everything else we know about in the world.

[01:02:43]

Yeah, I think it's clearly very poorly understood at the moment.

[01:02:46]

I think it has something to do with information processing.

[01:02:49]

So the fact that the brain is a computer, or something like a computer, would mean that very advanced AI could be conscious, that information processing systems in general could be conscious given some suitable complexity.

[01:03:06]

It's a question of whether greater complexity creates some kind of greater consciousness, which relates to animals. Yeah, right. If it's an information processing system and it's smaller and smaller, is it then less conscious? Less conscious than a cow, less conscious than a monkey?

[01:03:23]

Yeah. Again, this is a super hard question, but my best guess is yes. Consciousness is not some magical thing that appears out of nowhere. It's not, as Descartes thought, something that comes in from this other realm and enters through the pineal gland in your brain, and that's the soul and it's conscious. So it's got something to do with what's going on in your brain. A chicken has one three-hundredth the size of the brain that you have. Ants.

[01:03:56]

I don't know how small it is. Maybe it's a millionth.

[01:03:58]

My best guess, which I may well be wrong about because this is so hard, is that in some relevant sense the chicken is experiencing consciousness to a lesser degree than the human, and the ant significantly less. Again, I don't think it's as little as one three-hundredth as much. There are evolutionary reasons for thinking that: the ability to feel pain comes on the scene relatively early on, and we have lots of our brain dedicated to stuff that doesn't seem to have anything to do with consciousness, language processing and so on.

[01:04:31]

So it seems like the easy.

[01:04:32]

So there are a lot of complicated questions there that we can't ask the animals about. But it seems that there are easy questions in terms of suffering, like factory farming, that could be addressed. Yeah.

[01:04:47]

Is that the lowest hanging fruit, if I may use crude terms here, of animal welfare?

[01:04:54]

Absolutely. I think that's the lowest hanging fruit. At the moment we raise and kill about 50 billion animals every year. How many? Fifty billion. Yeah.
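For a rough sense of scale, here is a minimal back-of-the-envelope sketch of the per-person figure implied by that number. The 50 billion comes from the conversation; the world-population value is an assumed round figure for illustration only.

```python
# Rough scale check of the figures from the conversation.
animals_killed_per_year = 50e9   # "about 50 billion animals every year" (figure cited above)
world_population = 7.5e9         # assumed approximate world population at the time

per_person = animals_killed_per_year / world_population
print(f"Roughly {per_person:.1f} farmed animals killed per person per year")
# Prints: Roughly 6.7 farmed animals killed per person per year
```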

[01:05:05]

So for every human on the planet, several times that number are being killed, and the vast majority of them are raised in factory farms, where, basically, whatever your view on animals, I think you should agree. Even if you think, well, maybe it's not bad to kill an animal if the animal was raised in good conditions.

[01:05:23]

That's just not the empirical reality. The empirical reality is that they are kept in incredible cage confinement. They are debeaked or de-tailed without anaesthetic. Chickens would often peck each other to death otherwise because of the stress.

[01:05:41]

I think when a chicken gets killed, that's the best thing that has happened to the chicken in the course of its life. And it's also completely unnecessary. This is in order to save a few pence off the price of meat or the price of eggs.

[01:05:55]

It's also just inconsistent with consumer preferences as well. People who buy the products, when you do surveys, are extremely against suffering in factory farms. It's just that they don't appreciate how bad it is and tend to go with the easy options.

[01:06:14]

And so the most effective programs I know of at the moment are non-profits that go to companies and work with them to take a pledge to cut certain sorts of animal products, like eggs from caged hens, out of their supply chain. It's now the case that the top 50 food retailers and fast food companies have all made these kinds of cage-free pledges. And when you do the numbers, you get the conclusion that every dollar you give to these non-profits results in hundreds of chickens being spared from cage confinement.
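To make the "hundreds of chickens per dollar" claim concrete, here is a hedged sketch of the underlying arithmetic. The campaign cost and the number of hens covered below are hypothetical placeholders chosen only to illustrate how such a ratio is computed, not actual charity data.

```python
# Illustrative cost-effectiveness sketch for corporate cage-free campaigns.
campaign_cost_usd = 1_000_000       # hypothetical total spent on corporate campaigns
hens_moved_cage_free = 200_000_000  # hypothetical hens covered by the resulting pledges

hens_per_dollar = hens_moved_cage_free / campaign_cost_usd
print(f"About {hens_per_dollar:.0f} hens spared from cage confinement per dollar donated")
# Prints: About 200 hens spared from cage confinement per dollar donated
```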

[01:06:51]

And then they're working on other types of animals, other products. So that's the most effective way, to have a ripple effect essentially, as opposed to regulation coming directly from the top that says you can't do this.

[01:07:09]

So I would be more open to the regulation approach, but at least in the US there's quite intense regulatory capture from the agricultural industry. And so attempts we've seen to try and change regulation have been a real uphill struggle. There are some examples of ballot initiatives where people have been able to vote to say we want to ban eggs from caged conditions, and that's been huge, that's been really good. But beyond that, it's much more limited.

[01:07:41]

So I've been really interested in the idea of hunting in general and wild animals, and seeing nature as a form of.

[01:07:53]

Cruelty that I am ethically more OK with. OK, just from my perspective. And then I read about wild animal suffering. I'm just giving you the, yeah, notion of how I felt: because animal factory farming is so bad, living in the woods seemed good. Yeah.

[01:08:18]

And yet when you actually start to think about it, I mean, all of the animals in the wild are living in terrible poverty, right? Yeah.

[01:08:28]

Yeah. So you have all the medical conditions, all of that. I mean, they're living horrible lives that could be improved. Yeah, that's a really interesting notion that I think may not even be useful to talk about because factory farming is such a big thing to focus on.

[01:08:43]

But nevertheless, an interesting notion to think of all the animals in the wild as suffering in the same way that humans in poverty are suffering.

[01:08:52]

Yeah, and often even worse. Many animals reproduce via r-selection, so you have a very large number of offspring in the expectation that only a small number survive. And so for those animals, almost all of them just live short lives where they starve to death. So, yeah, there's a huge amount of suffering in nature. I don't think we should pretend that it's this kind of wonderful paradise for most animals. Their life is filled with hunger and fear and disease.

[01:09:27]

Yeah.

[01:09:28]

I agree with you entirely that when it comes to focusing on animal welfare, we should focus on factory farming. But we also should be aware of the reality of what life for most animals is like.

[01:09:41]

So let's talk about a topic I've talked a lot about and you actually quite eloquently talked about, which is the third priority.

[01:09:49]

That effective altruism considers really important: existential risks. Yeah. When you think about the existential risks facing our civilization, what's before us, what concerns you? What should we be thinking about, especially from an effective altruism perspective?

[01:10:08]

So the reason I started getting concerned about this was thinking about future generations, where the key idea is just: well, future people matter morally, and there are vast numbers of future people. If we don't cause our own extinction, there's no reason why civilization might not last a million years, if we last as long as the typical mammalian species, or a billion years, which is when the Earth is no longer habitable, or, if we can take to the stars, then perhaps trillions of years beyond that.

[01:10:40]

So the future could be very big indeed. And it seems like we're potentially very early on in civilization.

[01:10:46]

Then the second idea is just: well, maybe there are things that could really derail that, things that could prevent us from having this long, wonderful civilization and instead either cause our extinction or otherwise lock ourselves into a very bad state. In what ways could that happen? Well, causing our own extinction: the development of nuclear weapons in the 20th century at least put on the table that we now had weapons powerful enough to very significantly devastate society.

[01:11:24]

Perhaps an all-out nuclear war would cause a nuclear winter. Perhaps that would be enough for the human race to go extinct.

[01:11:34]

Sorry to interrupt. Why do you think we haven't done it yet? Is it surprising to you that, having had for the past few decades several thousand active, ready-to-launch nuclear warheads, we have not launched them since Hiroshima and Nagasaki?

[01:12:01]

I think it's a mix of luck. It's definitely not inevitable that we haven't used them. John F. Kennedy, during the Cuban Missile Crisis, put the odds of a nuclear exchange between the US and USSR at somewhere between one in three and even.

[01:12:15]

So we really did come close. At the same time, I do think mutually assured destruction is a reason why nuclear powers don't go to war. Do you think that holds? Can we linger on that for a second? My dad is a physicist, amongst other things, and he believes that nuclear weapons are actually just really hard to build, which is one of their really big benefits currently.

[01:12:48]

So it's very hard, if you're crazy, to build or acquire a nuclear weapon.

[01:12:55]

So mutually assured destruction seems to work better when it's nation states, when it's serious people, even if they're a little bit dictatorial and so on.

[01:13:10]

Do you think this mutually assured destruction idea will carry? How far will it carry us in terms of different kinds of weapons?

[01:13:19]

Oh, yeah. I think your point that nuclear weapons are very hard to build and relatively easy to control, because you can control fissile material, is a really important one.

[01:13:30]

And future technology that's equally destructive might not have those properties.

[01:13:35]

So, for example, if in the future people are able to design viruses, perhaps using a DNA printing kit that one can just buy, and in fact there are companies in the process of creating home DNA printing kits. Well, then perhaps that's just totally democratized. Perhaps the power to wreak huge destruction is in the hands of most people in the world, or certainly most people with effort. And then I no longer trust mutually assured destruction, because for some people the idea that they would die is just not a disincentive.

[01:14:20]

There was a Japanese cult, for example, Aum Shinrikyo, in the 90s, that believed Armageddon was coming. If you died before Armageddon, you would get good karma.

[01:14:34]

You wouldn't go to hell; if you died during Armageddon, maybe you would go to hell. And they had a biological weapons program and a chemical weapons program. When they were finally apprehended, they had stocks of sarin gas sufficient to kill four million people, and they engaged in multiple terrorist acts. If they had had the ability to print a virus at home, that would have been very scary.

[01:14:59]

So it's not impossible to imagine groups of people that hold that kind of belief, of death or suicide as a good thing, a passage into the next world and so on. And if you connect them with some weapons, then ideology and weaponry may create serious problems for us.

[01:15:24]

A quick question: what do you think is the line between killing most humans and killing all humans?

[01:15:31]

How hard is it to kill everybody? Yeah, I've thought about that a bit, and I think it is very hard to kill everybody. So in the case of, let's say, an all-out nuclear exchange, and let's say that leads to nuclear winter, we don't really know, but it might well happen, that would, I think, result in billions of deaths. Would it kill everybody? It's quite hard to see how it would kill everybody, for a few reasons.

[01:16:02]

One is just there are just so many people.

[01:16:05]

Yes, seven and a half billion people. So this bad event has to kill almost all of them.

[01:16:12]

Secondly, we live in such a diversity of locations. So a nuclear exchange or a virus would have to kill people who live on the coast of New Zealand, which is going to be climatically much more stable than other areas of the world, or people who are on submarines or who have access to bunkers.

[01:16:32]

I'm sure there are, like, two guys in Siberia who are just badass.

[01:16:38]

Human nature somehow perseveres. Yeah.

[01:16:43]

And then the other thing is just that, if there's some kind of catastrophic event, people really don't want to die, so there's going to be a huge amount of effort to ensure that it doesn't affect everyone.

[01:16:55]

Have you thought about what it takes to rebuild a society with smaller numbers, like how big of a setback these kinds of events are? Yeah. That's something where there's real uncertainty, I think. At some point you just lose sufficient genetic diversity such that you can't come back. It's unclear how small that population is, but if you've only got, say, a thousand people or a few thousand, then maybe that's small enough.

[01:17:26]

What about human knowledge? And then there's human knowledge. I mean, it's striking how short, on geological or evolutionary time scales, the progress in human knowledge has been. Agriculture we only had in 10,000 B.C., cities only in 3,000 B.C., whereas a typical mammal species lasts half a million to a million years.

[01:17:54]

Do you think it's inevitable in some sense, agriculture, everything that came after, the industrial revolution, cars, planes, the Internet? Do you think that level of innovation is inevitable?

[01:18:08]

I think so, given how quickly it arose. In the case of agriculture, I think that was dependent on climate. The glacial period was over, the Earth warmed up a bit, and that made it much more likely that humans would develop agriculture.

[01:18:29]

When it comes to the industrial revolution, again, it only took a few thousand years from cities to the industrial revolution. If we think, OK, we've gone back to, let's say, the agricultural era, there's no reason why we would go extinct in the coming tens of thousands or hundreds of thousands of years.

[01:18:50]

It seems to me that it would be very surprising if we didn't rebound, unless there's some special reason that makes things different. Yes. So perhaps we just have a much greater disease burden now. HIV exists, it didn't exist before, and perhaps that's kind of latent, being suppressed by modern medicine and sanitation and so on, but it would be a much bigger problem for some utterly destroyed society that was trying to rebound.

[01:19:22]

Or maybe there's just something we don't know about.

[01:19:25]

So another existential risk comes from the mysterious, the beautiful artificial intelligence.

[01:19:33]

Yeah, so what's the shape of your concerns about AI?

[01:19:38]

I have quite a lot of concerns about AI, and sometimes the different risks don't get distinguished enough.

[01:19:47]

So the kind of classic worry, most closely associated with Nick Bostrom and Eliezer Yudkowsky, is that at some point we move from having narrow AI systems to artificial general intelligence, and you get this very fast feedback effect where AGI is able to build, where artificial intelligence helps you to build, greater artificial intelligence. We have this one system that's suddenly very powerful, far more powerful than others, perhaps far more powerful than the rest of the world combined.

[01:20:23]

And then secondly, it has goals that are misaligned with human goals, and so it pursues its own goals and realizes, hey, there's this competition, namely from humans.

[01:20:33]

It would be better if we eliminated them, in just the same way as Homo sapiens eradicated the Neanderthals, and in fact killed off most large animals that walked the planet. So that's one set of worries. I think these shouldn't be dismissed as science fiction; I think it's something we should be taking very seriously.

[01:21:02]

But it's not the thing you visualize when you're concerned about the biggest near-term risk. Yeah, I think it's like one possible scenario that would be astronomically bad, but I think there are other scenarios that would also be extremely bad, comparably bad, and a lot more likely to occur.

[01:21:18]

So one is just that we are able to control AI, we are able to get it to do what we want it to do, and perhaps there's not this fast takeoff of AI capabilities within a single system.

[01:21:31]

It's distributed across many systems that do somewhat different things, but you do get very rapid economic and technological progress as a result that concentrates power into the hands of a very small number of individuals, perhaps a single dictator.

[01:21:47]

And secondly, that single individual, or small group of individuals, or single country, is then able to lock in their values indefinitely via transmitting those values to artificial systems that have no reason to die. The code is copyable.

[01:22:06]

Perhaps Donald Trump or Xi Jinping creates their kind of progeny in their own image.

[01:22:13]

And once you have a society that's controlled by AI, you no longer have one of the main drivers of change historically, which is the fact that human lifespans are only 100 years, give or take.

[01:22:29]

So it's really interesting: as opposed to killing off all humans, it's locking in.

[01:22:36]

And creating a hell on Earth, basically, a set of principles under which the society operates that's extremely undesirable, so everybody is suffering indefinitely. Or it doesn't have to be.

[01:22:49]

I mean, it also doesn't need to be hell on Earth. It could just be the wrong values. So we talked at the very beginning about how I want to see this kind of diversity of different values and exploration so that we can just work out what is kind of morally right, what is good, what is bad, and then pursue the thing that's best, actually.

[01:23:09]

So with the idea of wrong values, actually the beautiful thing is there's no such thing as right and wrong values, because we don't know the right answer. We just kind of have a sense of which values are more right, which are more wrong. So any kind of lock-in makes a value wrong, because it prevents exploration of this kind. Yeah. And just imagine if there was Hitler's utopia or Stalin's utopia or Donald Trump's or Xi Jinping's forever.

[01:23:41]

Yeah. You know, how good or bad would that be compared to the best possible future we could create?

[01:23:48]

And my suggestion is it would really suck compared to the best possible future.

[01:23:54]

And you're just one individual. There are some individuals for whom Donald Trump's is perhaps the best possible future.

[01:24:03]

And so that's the whole point of us individuals exploring the space together. Exactly. And trying to figure out which is the path that will make America great again. Yeah, exactly.

[01:24:15]

So how can effective altruism help?

[01:24:20]

I mean, this is a really interesting notion you're describing, of artificial intelligence being used as an extremely powerful technology in the hands of very few, potentially one person, to create some very undesirable effect, as opposed to AGI. And again, the source of the undesirableness there is the human; AI is just a really powerful tool.

[01:24:43]

So whether it's that, or whether AGI just runs away from us completely, how, as individuals, as people in the effective altruism movement, can we think about something like this? I understand poverty and animal welfare, but this is a far-out, incredibly mysterious and difficult problem.

[01:25:05]

Well, I think there are three paths as an individual, if you're thinking about the career path you can pursue. One is going down the line of technical AI safety. This is most relevant to the AI-taking-over scenarios, and it's technical work on current machine learning systems, sometimes going more theoretical too, on how we can ensure that an AI is able to learn human values and able to act in the way that you want it to act.

[01:25:38]

And that's a pretty mainstream issue and approach in machine learning today, so we definitely need more people doing that. Second is the policy side of things, which I think is even more important at the moment: how should developments in AI be managed on a political level? How can you ensure that the benefits of AI are widely distributed, that power isn't being concentrated in the hands of a small set of individuals?

[01:26:12]

How do you ensure that there aren't arms races between different AI companies that might result in them cutting corners with respect to safety?

[01:26:24]

And so there, the input we as individuals can have, we're not talking about money, we're talking about effort.

[01:26:31]

We're talking about career choice. We're talking about career choice.

[01:26:34]

Yeah, but then there's the case where, supposing you're like, I've already decided my career and I'm doing something quite different, you can contribute with money. At the Center for Effective Altruism we set up the Long-Term Future Fund. So if you go to effectivealtruism.org, you can donate, and a group of individuals will then work out the highest-value place they can donate to work on existential risk issues, with a particular focus on AI. Was that path number three?

[01:27:05]

This was path number three; donations were the third option I was thinking of. OK. And then, yeah, you can also donate directly to organizations working on this, like the Center for Human-Compatible AI at Berkeley, the Future of Humanity Institute at Oxford, or other organizations.

[01:27:25]

Does this keep you up at night, this kind of concern? Yeah, it's kind of a mix, where I think it's very likely things are going to go well. I think we're going to be able to solve these problems. I think that's by far the most likely outcome, at least over the next hundred years.

[01:27:44]

So if you look at all the trajectories running away from our current moment over the next hundred years, you see AI creating destructive consequences as a small subset of those possible trajectories, or at least a kind of eternal destructive consequence.

[01:28:02]

I think it is a small subset. At the same time, it still freaks me out. I mean, when we're talking about the entire future of civilization, then small probabilities, you know, a one percent probability, that's terrifying.

[01:28:14]

What do you think about Elon Musk's strong worry that we should be really concerned about existential risks?

[01:28:22]

Well, yeah, I mean, broadly speaking, I think he's right. I think if we talked, we would probably have very different probabilities on how likely it is that we're doomed.

[01:28:34]

But again, when it comes to talking about the entire future of civilization, it doesn't really matter if it's one percent or if it's 50 percent.

[01:28:41]

We ought to be taking every possible safeguard we can to ensure that things go well rather than poorly.

[01:28:47]

Last question: if you yourself could eradicate one problem from the world, what would that problem be? That's a great question. I don't know if I'm cheating in saying this, but I think the thing I would most want to change is just the fact that people don't actually care about ensuring the long-run future goes well. People don't really care about future generations. They don't think about it.

[01:29:10]

It's not part of their aims. In some sense you're not cheating at all, because in speaking the way you do and writing the things you're writing, you're addressing exactly this aspect. Exactly. That is your input into the effective altruism movement. So for that, Will, thank you so much. It's an honor to talk to you. I really enjoyed it.

[01:29:32]

Thanks so much for having me on. Thanks for listening to this conversation with William MacAskill, and thank you to our presenting sponsor, Cash App. Please consider supporting the podcast by downloading Cash App and using code LexPodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman. And now let me leave you with some words from William MacAskill: one additional unit of income can do a hundred times as much to benefit the extreme poor as it can to benefit you or me, earning the typical US wage of twenty-eight thousand dollars a year.

[01:30:14]

It's not often that you have two options, one of which is 100 times better than the other. Imagine a happy hour where you can either buy yourself a beer for five dollars or buy someone else a beer for five cents. If that were the case, we'd probably be pretty generous. Next round's on me, but that's effectively the situation we're in all the time. It's like a ninety nine percent off sale or buy one, get ninety nine free.

[01:30:41]

It might be the most amazing deal you'll see in your life. Thank you for listening, and hope to see you next time.