Transcribe your podcast
[00:00:00]

This episode of Rationally Speaking is brought to you by Stripe. Stripe builds economic infrastructure for the Internet. Their tools help online businesses with everything from incorporation and getting started, to handling marketplace payments, to preventing fraud. Stripe's culture puts a special emphasis on rigorous thinking and intellectual curiosity. So if you enjoy podcasts like this one and you're interested in what Stripe does, I'd recommend you check them out; they're always hiring. Learn more at stripe.com. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense.

[00:00:46]

I'm your host, Julia Galef, and I'm here with today's guest, Rob Wiblin.

[00:00:50]

Rob is the director of research at 80000 Hours, which is a nonprofit that focuses on helping people figure out how to maximize the positive impact they can have with their careers. Before that, Rob was the executive director of the Centre for Effective Altruism, and his background before that is in economics.

[00:01:12]

He's also the host of the excellent 80000 Hours podcast, which, if you aren't already a fan, I think you should really check out if you like Rationally Speaking.

[00:01:22]

So there are a lot of things Rob and I have to talk about. But the thing that really piqued my interest to focus on today is that a few years ago, I think back in 2014, I had Ben Todd, who's the president of 80000 Hours and one of the founders, on the show.

[00:01:40]

And we talked about some of the logic behind 80000 Hours and effective altruism, and some of the basics around how to pick a career to maximize your positive impact. And, you know, it's been over four years since then. Some of the thinking of 80000 Hours and the surrounding effective altruism movement has evolved. Some views have shifted; other views have maybe clarified. There have been some misunderstandings or misconceptions about what 80000 Hours, and EA in general, actually believes.

[00:02:10]

And I thought it would be really great to sort of sit down with Rob and have a review of the evolution of 80000 Hours and of his thinking on how to affect the world positively. So, Rob, that's what we're going to do today. Great to have you on the show. Yeah.

[00:02:23]

Thanks so much for inviting me on. I've listened to the show for many years, and it's good to be speaking rather than listening for once.

[00:02:31]

I want to say: Rob Wiblin and Julia Galef together, what is this?

[00:02:34]

The crossover, but only the greatest crossover of all time: BoJack Horseman. Look at that. And I'm sure Rob has more productive things to do with his time than watch TV, so that's for the BoJack fans out there. So, Rob, can you just give me the basics about 80000 Hours? What do you guys do? How big are you? How long have you been around?

[00:02:55]

So 80000 Hours is all about helping people have a larger social impact with their career. So we do research to try to figure out how people can do more good through their work, and we publish that on our website. And we have our podcast. We also provide one-on-one advising to people who bring us their particular situation, and we give them ideas for how they can potentially help people in a bigger way with their work.

[00:03:16]

And roughly how many people have you advised at this point? How many people... well, just, I mean, like order of magnitude?

[00:03:21]

Oh, well, we've had about four or five million people on the site over the last seven years. I guess we have about, yeah, one and a half million visitors a year now. In terms of coaching, I think it would be around 1000 that we've coached so far. We didn't do a lot of coaching for a couple of years; we were mostly just doing research. But now we're growing, and the in-person team is providing advice to a whole lot more people.

[00:03:41]

So hopefully that number will go up over time.

[00:03:44]

And for listeners out there who haven't already heard an explanation of your name, what does the 80000 Hours name refer to? Right.

[00:03:50]

So 80000 hours is roughly the number of hours that someone would work in a full-time career. I think it's about 40 hours a week, 50 weeks a year, for 40 years. We chose that because it gives you an indication of just how important your career decisions can be: that's a lot of time you're potentially going to spend, so you should think about how you can spend it well. On the other hand, you could think, well, 80000 hours is actually not that much time relative to the scale of the problems in the world.

[00:04:14]

So you're not going to be able to solve all of them with that amount of time, and you're going to have to prioritise pretty hard. Those are the two angles on it.
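Just to spell out the arithmetic behind the name, using the figures Rob gives above:

\[
40\ \tfrac{\text{hours}}{\text{week}} \times 50\ \tfrac{\text{weeks}}{\text{year}} \times 40\ \text{years} = 80{,}000\ \text{hours}.
\]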

[00:04:20]

I think the first time I heard the explanation of your name, I made what I thought was a very clever joke about how, if you devote your career to researching life extension, then you can increase the number of hours, and you guys would then have to change your name to, you know, fifty million hours or something like that. And you were very nice about it.

[00:04:36]

But it's pretty clear you'd heard that one a million times.

[00:04:39]

Yeah, I've heard a lot of jokes of that kind many times. It's unfortunate that my laughter wasn't able to be sufficiently sincere.

[00:04:46]

Hey, you did try, you know, I appreciated that. So, I alluded to kind of an evolution in the thinking, or at least in the structure of the public version of 80000 Hours' arguments. What would you say are, you know, two or three of the main things that are different about the advice 80000 Hours gives now compared to, like, five years ago?

[00:05:08]

So I think a lot of people, when they hear about 80000 Hours or effective altruism, when I bring it up with them, they think that it basically means going out and making a lot of money, earning to give, and then donating it to charities that have been proven to have a really large impact.

[00:05:23]

That is sort of the media portrayal of 80000 Hours; most of the articles about you guys have focused on that. That's right.

[00:05:28]

And I think that's really unfortunate. It's kind of quite frustrating, because that's actually not what I think most people in the effective altruism community are doing, or at least it hasn't been for a long time, and it's not what we advise. Basically, that perception can come about because that's one of the relatively easier options to explain. Early on in the life of 80000 Hours and effective altruism, if you wanted to offer something that would be interesting to people, that they hadn't really thought of, that would indicate to them how they might be able to do more good, it was at least a little bit counterintuitive.

[00:06:01]

Yeah. Most people hadn't heard of this earning to give idea back in 2011 when we were launching, so it made for an interesting piece of media that a lot of journalists picked up on, and still pick up on to this day. There was also GiveWell, which is a charity evaluator, and I think a sponsor of the show, full disclosure. They've done research to try to find charities that have very strong evidence behind them and where they have some idea of the actual impact you get, the bang you get per dollar.

[00:06:25]

That was relatively well advanced at that time, and that's an interesting approach you could try to take to have more impact: to do things that have been demonstrated to have a really large impact. But all of these things, I would say, are just minority views within this broader community, or this broader kind of intellectual movement.

[00:06:42]

So when you say they're minority views, do you mean that there are some people who think that most people should do earning to give, but the people who think that are just not very common? Or...

[00:06:54]

Yeah. So basically, if you break it down, there's a substantial number of people who are involved in effective altruism who think that we should focus on global development and health. I think it's something around 40 or 50 percent, although perhaps if you look at people who are working full time, it goes down to more like 10 or 20 percent. Then earning to give: there are quite a lot of people who are doing earning to give. I would say it's maybe, again, about half of the community trying to do good that way, although a substantial fraction of those people, I think, are doing that with the intention of eventually going and doing something else once they've developed their skills and found a really good fit for them.

[00:07:24]

So both of those things are substantial parts of effective altruism as a whole, but they're by no means dominant, and by no means the thing that comprises it. And in terms of doing evidence-backed interventions, using that as a strategy to have more impact: I think, again, maybe a third of the community thinks that's one of the key ways they're going to try to do more good, to use really strong social science evidence to find things that really work.

[00:07:52]

And I'd say maybe another third think, you know, that's an interesting approach to use among other methods. And probably the third that I'm in is actually somewhat skeptical of that, because I think the interventions or problems where there's very strong evidence for what you can do might actually be negatively correlated with the things where you can have a very large impact, because those are likely to be fields that are very well developed.

[00:08:13]

What's an example of that? Well, I'd say global health is probably one of them, where there is a lot more evidence in that area than there is in most of the problems that we're more focused on now. And that's because so much effort, so much intellectual firepower, has gone into those areas. Of course it's good, all else equal, to have more evidence about what works and what doesn't. But it's a sign that the area is not so neglected.

[00:08:35]

It's not the kind of problem you can go out and pioneer; there are already millions of people working on it.

[00:08:40]

Yeah, I think about that sometimes in terms of the risk-reward trade-off. Right. Like when you're deciding where to invest, for example. It's not quite the same thing, but there's kind of a parallel structure there.

[00:08:48]

Right. So giving to bed nets, you might view it as kind of like investing in Costco or Wal-Mart or some very reliable company that's going to return dividends pretty reliably.

[00:08:58]

That is, antimalarial bed nets, where there's a lot of evidence around, you know, the number of life years you can save by purchasing X number of bed nets.

[00:09:07]

Yeah, but I think if you're actually trying to maximize your impact, it probably makes more sense to be more of a venture capitalist: to go out and look for things that are riskier, maybe harder to find, that have a high chance of failing, but where you can go and do something that other people haven't done, and where you'd get a larger bang out of your career in expectation, even though there's a high probability that it won't work out.

[00:09:31]

So that sounds like you're sort of talking about two different changes or misunderstandings at once. One of them is: do you do earning to give, where you're working in finance or whatever and donating your money to charity, versus do you do direct work, where you yourself are doing research or working at a charity or, you know, directly working on a problem. And then the other question is: do you work on or give to something where there's very rigorous evidence but that's maybe already pretty well developed, like global health, versus do you work on or give to a cause that's riskier but potentially more impactful for humanity or the future?

[00:10:03]

Yes, you can do any of the combinations you get out of those; you can combine them however you like. I guess we're now more focused on things other than earning to give most of the time, and on more speculative, higher-impact things like research, innovation and policy, that kind of career approach, rather than earning to give to things that are, I guess, "boring" would be the negative way of putting it.

[00:10:26]

Oh, like, reliable, I guess, if you... Yeah, pretty predictable. Exactly. Sensible. Yeah.

[00:10:31]

OK, so that's earning to give being, you know, less emphasized or less important than maybe the popular conception.

[00:10:36]

That's one thing. Are there other things that are different about 80000 Hours' advice now than people think? Yeah, so I guess the corollary of focusing less on earning to give is trying to find leverage elsewhere, by doing scientific research or doing policy work where you hope to move a lot of money or legislative power through government. I think another common misunderstanding that people had about 80000 Hours' advice was that they thought we were recommending that people go into typical prestigious corporate jobs early on in their career, either to earn to give or for some other reason.

[00:11:11]

So, I mean, one reason would be to earn to give; another would be just that you wanted to build up a lot of career capital. So one path that we did suggest early on was going into consulting, say, for the first few years out of university, with the hope that you would build up, you know, a network, lots of skills, lots of connections, potentially some money in the bank that you could use to take risks in your career.

[00:11:33]

Mm hmm.

[00:11:35]

And some people took that path; it worked out for some, but it didn't work out for others. And I think it is a reasonable path. But these days, as we've become more confident about the priority paths that we really ultimately want to see people get into as their careers mature, it doesn't seem like going into typical corporate jobs is really anywhere close to the best way of getting into those positions. Basically, it seems like the career capital people were getting, the kinds of skills they were building up in consulting or other corporate jobs, just doesn't transfer over so well into the natural sciences or into policy careers or international relations, things like that.

[00:12:12]

So the career capital transferability isn't so good. We've also become more confident in recommending somewhat unusual paths and working on problems that most people weren't focused on before, just because we've had more time to think about it, more time to hear people's potential objections and decide that we weren't convinced by those objections.

[00:12:28]

Well, I'm sure we'll get more into this later in the conversation. But what's one example of an unusual career path or field that you would now recommend people go into?

[00:12:36]

Right. So the stuff that we talk the most about these days is trying to improve the long-term future of humanity, in particular by preventing global catastrophic risks. So trying to prevent, for example, a war between China and the United States in the 21st century, trying to prevent nuclear weapons from ever being used, trying to prevent new technologies from, you know, really taking civilization off track by being either misused or used in kind of an accidental way that causes a really large global catastrophe.

[00:13:03]

And that's not what most people think of when they think about, like, charitable work or trying to improve the world. I mean, we were aware of those ideas in 2011, but we were initially kind of cautious, because, at least to most people, it seems to violate common sense that that would be the way to have the largest impact. But, you know, as we've gone on, we've thought about it a lot. We've kind of sharpened those arguments: what exactly does the argument rest on?

[00:13:24]

What kind of objections do people put forward? And we decided that, no, actually, we really think that this is likely a very compelling way to do good. And if that's ultimately what you want to end up doing, then corporate jobs are just not really the sensible first path out of university. You'd want to just get going, go try to do something that would get you into one of the relevant roles directly.

[00:13:45]

So do you think that the change was more about you guys becoming more confident that these sort of weird, like catastrophic risk reduction career paths were the way to go? Or is it about becoming more confident that you could make that case publicly in a way that wouldn't put people off?

[00:14:02]

Hmm. I mean, I think it's a bit of both. I think I was personally already quite confident early on; perhaps that's my temperament. So maybe my personal views haven't shifted so much. But I think a lot of other people who are more temperamentally cautious, who heard these arguments, thought: oh, yeah, that kind of sounds compelling on paper, but I'm just going to stick to what seems like more common sense to me. I think many of those people have shifted over a period of years, where they just explored it more and became more convinced.

[00:14:30]

So I guess as a group, it's become more possible to take action in that direction, because there's just more of a consensus among people who work in this area full time.

[00:14:38]

You know, as soon as I asked that question, it occurred to me that I think for a lot of people, to some extent, there's not that much space between "what am I confident in" and "what could I make the case for to other people in a way that would make sense to them"; that maybe those two questions are kind of all bound up together for a lot of people.

[00:14:56]

Well, I think, yeah, I would have been willing to push on a lot of that stuff, I guess. Yeah, you know what kind of person I am: a straight shooter, I guess.

[00:15:05]

I guess perhaps also a bit more risk-taking, a bit more of a venture capitalist when it comes to ideas, like maybe jumping on the new thing. I mean, I think it takes all kinds in this respect. You don't want everyone to be like this. If everyone was like me, then the world would just fly back and forth between ideas.

[00:15:17]

It might be a bit too faddish. It would be an interesting one. Maybe a riskier world, which is not so great. But I think you do need some people who are willing to try to stake out new ideas and say: no, actually, I do believe this, I'm going to push it forward, and then see if they can convince everyone else who's a bit more cautious.

[00:15:30]

I wanted to ask about one apparent change that I read about on the 80000 Hours mistakes page, which is really great; I'd recommend people check it out. I think it's just called "Our mistakes". It's one of the main pages on the 80000 Hours website, and we can link to it. They list mistakes that they think they've made in logistics or management, but also, you know, mistakes in sort of having gotten the wrong answer on some question, or having taken the wrong sort of public position on something.

[00:15:57]

And one thing they say is: we always thought personal fit, i.e. how likely you are to excel in a job, was important, but over the last few years we've come to appreciate that it's more important than we originally thought, most significantly due to conversations with Holden Karnofsky, who is a co-founder of GiveWell and now runs the Open Philanthropy Project. So why do you, you being 80000 Hours, now think that personal fit is more important than you had previously thought?

[00:16:24]

Yeah, I think we always thought that personal fit was quite important, but we perhaps thought that it would be possible for someone who wasn't passionate about an area, or wasn't passionate about a particular method, to just kind of stick with it, just grit it out and say: no, I'm going to do this even though I'm really not enjoying it. And I guess over time we've become more pessimistic about people's ability to stick with that. For most people, it seems like they have most of their impact later in their career.

[00:16:48]

Once they've built up a lot of skills, they've built up, you know, real connections, a lot of connections through which they can influence what's going on. And so having someone who grits through it for a couple of years but then gives up, because they just don't have enough energy to continue with it, is probably losing most of the value from that person's career. Because they work in it for a few years,

[00:17:08]

they stick with it even though they're not enjoying it, and then they leave, and then all of the skills they've built up, the connections they've built up, all of the organizational capital they've created, kind of dissipates. And so if you're playing a long-term game, then it seems like personal fit becomes more important. Now, one thing that I would say is that we still think you should try to find a priority area and then try to find kind of a key bottleneck to solving that problem.

[00:17:33]

And then within that, look for a role that has a good personal fit for you. And typically there are so many roles at that point that most people can find something that's potentially suitable for them. And I guess if you can't find anything like that, then earning to give is always still potentially quite a good option, even if it's not the one that we suggest people look at first.

[00:17:53]

I'm curious, does 80000 Hours take any kind of official position, or, I mean, I'm interested in your personal thoughts as well, on how to balance this? Like, I'm imagining someone who actually can stick it out in a career that's not exactly the ideal career for their own happiness or, you know, intellectual interest or whatever, but where they can have a large impact.

[00:18:15]

And imagining someone could do that, like, should they? You know, is that the right choice for them to make morally? Do you or 80000 Hours have a position on whether someone should take a career like that if it's not a personal fit in the happiness sense?

[00:18:31]

Yeah, I mean, I think there's a range of views on the team. I don't think we really have a position on that per se. I guess personally, just to be honest, I think morally that would be good: if someone really could just go and, you know, save thousands of lives, tens of thousands of lives, through that career, even if they didn't enjoy it, as long as they actually could do it and stick with it, even if it wasn't super fulfilling to them, then I agree that in some hypothetical, principled moral sense they should do that.

[00:18:55]

I'm not going to shy away from that conclusion. But I guess in practice I don't really push that agenda very hard, because I think in most cases the roles where actual people in the real world are going to have a large impact are ones they're going to find very stimulating. And it's going to be very rare that someone's best option is something that they find unpleasant or unfulfilling.

[00:19:16]

I mentioned the "Our mistakes" page on the 80000 Hours website. I'm just curious if you guys have noticed any, like, impacts from having that page. Like, do people get angry about, you know, things that you confess you've screwed up that otherwise they wouldn't have known about? Or do they tend to view you more positively? What have you noticed?

[00:19:36]

I think probably almost nobody reads it. Well, maybe they see that there is a mistakes page and they go: oh, these people are credible, so that's great. And then they move on. I haven't heard that many reactions to it other than people saying: oh, it's great that you're acknowledging mistakes. OK, I'm glad to hear that. That's a good sign.

[00:19:53]

Yeah, certainly no one's given us a hard time about any of the mistakes there. I think by and large people are pretty forgiving if you say: we messed up, here's how we messed up, and here's how we're going to fix it. I'm sure there are some grumpy people who'd give you a hard time at that point, but mostly I think people are sympathetic. So, to the extent that 80000 Hours has changed... actually, no, let me ask a different question first.

[00:20:17]

Do you think that any of these changes that you've been describing are things that you've changed your mind about? Or is it other people at 80000 Hours who've come to see the wisdom of your view?

[00:20:29]

Well, I think I got very lucky somewhat early on, when I was exploring effective altruism and trying to figure out how to do the most good with my own career. So a lot of the views that I've been describing are basically the worldview of Professor Nick Bostrom, who's the director of the Future of Humanity Institute at Oxford.

[00:20:46]

And I found out about his work, I think, back in 2008 or 2009, read many of his key papers, and I was like: yeah, this basically seems right. And I'll describe what his views are in just a second. Being the kind of person who's perhaps easily persuaded by new ideas or new papers that I read, I think I basically got lucky by taking on this package. And then over the last 10 years, that worldview has kind of mainstreamed itself, and a lot more people have gradually become convinced that Bostrom's view is broadly correct, even if they disagree with some specifics, as do I, I think.

[00:21:21]

Yeah. So, Nick Bostrom's view of things: he's a philosopher, and I think one part of it is longtermism. That's thinking that most of the moral consequences of our actions, or the most important ones, are probably effects that will occur after our natural lifespans are over, so more than 50 or 100 years in the future. Then you think: well, how could we actually affect the long term? A common objection is: well, even if, you know, the consequences of our actions hundreds of years in the future are really important, I can't predict what they're going to be.

[00:21:55]

So instead, I want to focus on improving the short term. But then it does seem like there actually are things that we could do now that would improve things for hundreds, thousands, maybe even millions of years. An obvious one would be preventing global catastrophes from which we never recover. So you have a huge war which, let's say, takes us back to the Stone Age, and then we never develop technology again and eventually humanity goes extinct.

[00:22:13]

Or you could have an even worse disaster that causes humanity to go extinct in the 21st century. And it's pretty obvious that that has consequences that affect how the world will look in a hundred years' time, or a thousand years' time, or a million years' time. And there are other potential ways you could try to change the long term that aren't extinction-focused: for example, you could imagine a global dictatorship that locks us in, and we can never escape from it, and it has bad ideas.

[00:22:38]

So you guys would be against that. Right. Right. Exactly. So try to prevent that.

[00:22:44]

Yeah, but that seems to be probably the most prominent thing that could happen within our lifetimes that would have very long-lasting effects that we could try to change. Other than, like, a nuclear war?

[00:22:58]

Oh, sorry, when I say that, I mean that whole category, like global catastrophic risks. Yeah.

[00:23:02]

You're counting the dictatorship as one of the catastrophic risks that doesn't involve extinction but still involves, like, most of the loss of value. Right. And then the next step, I think, in the argument is wondering: well, where do these risks come from? It could be from asteroids, it could be from super volcanoes, or it could be from things that we make, like nuclear weapons or, you know, advances in biotechnology that would be deeply dangerous, or changes in information technology that could disrupt government or disrupt society.

[00:23:28]

Right. And basically there are very strong reasons, which have been written up, to think that the vast majority of the risk comes from humanity itself, that it's new things that we're going to do, and that the probability each year of those things screwing up civilization is much larger than the natural risks. In part that's simply because we know roughly the risks from super volcanoes or asteroids and so on, because we can look at the historical record, and the risk seems to be incredibly low. Whereas we might be optimistic that we're not going to have a nuclear war,

[00:23:57]

but do we really think it's, like, a one in a million chance each year? That would seem way too confident that it's not going to happen in any given year, right?

[00:24:06]

Yeah, it's funny. People are used to hearing the term "overconfidence", or "highly confident", in terms of predicting that something will happen. I think it's a little bit jarring, or hard to parse, when people talk about overconfidence in terms of thinking that, you know, we're not going to have a nuclear war.

[00:24:22]

Yeah, I just think that to say the risk of nuclear war in a given year is one in a million or lower would require you to understand incredibly well the process that generates this, and to think that every link in the chain is incredibly unlikely, which we just don't have much reason to think. I think the risk is more like one in a thousand than one in a million, which basically already means that the risk of humanity destroying itself is larger than all of the natural risks combined.
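To put those orders of magnitude side by side, taking the one-in-a-thousand figure for human-caused risk and assuming, purely as an illustrative stand-in for "incredibly low", that the combined natural risks sit around one in a million per year or below:

\[
P_{\text{human-caused}} \sim 10^{-3}\ \text{per year} \;\gg\; P_{\text{natural}} \lesssim 10^{-6}\ \text{per year},
\]

so on these rough numbers the human-made risk would outweigh the combined natural risks by roughly three orders of magnitude.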

[00:24:49]

So if we think that most of the risk to the long-term future comes from humanity doing stupid things itself, or failing to coordinate itself, such that we have a huge war, or we misuse some new technology, or discover something that it would be better not to know and that messes us up: how can we get to a good future that's potentially very big, where people are having excellent lives? That's going to require a lot of technology. So basically, Bostrom and I think that we need to order the things that we invent, to make sure that

[00:25:24]

we're inventing the new technologies and ideas that enhance safety sooner, so that we're ready when we later invent more dangerous things that could screw us up. I guess one easy example is that, you know, we invented nuclear weapons. I think that's the point at which, for the first time, we had the ability, within maybe a decade of the first nuclear explosion, to kill billions of people very quickly and potentially really throw civilization off kilter in a way that might be permanent,

[00:25:49]

a way we would never recover from. Now, it actually took decades for us to invent permissive action links, which make sure that someone can't just go up to a nuclear weapon on their own, arm it, and launch it.

[00:26:02]

So how does a permissive action link work? OK, so basically this is a gadget you have in the nuclear weapon that ensures it can't be used unless you have a specific code, like a code authorization from the president or the Pentagon or whoever else has said: yes, absolutely, we want to use the nuclear weapons. Basically, for the first decade or two there weren't even physical locks to begin with; then they added physical locks, which you could just break if you had enough time with them.

[00:26:27]

And then eventually they added these permissive action links, which required a code to use the weapons. But, famously, they set the code to all zeros.

[00:26:35]

Yeah. Because they were really worried about not being able to use them in an emergency.

[00:26:38]

So, as the Air Force officers said, basically the thing they were worried about much more was that they would need to use them and couldn't, rather than that they would be used when they shouldn't be.

[00:26:51]

That's just embarrassing. Yeah, but basically my point is that it would have been great if we'd invented permissive action links that we were confident would work, and figured out the technology for that, before we scaled up nuclear weapons or invented nuclear weapons in the first place. Right.

[00:27:04]

And I think with many new technologies that we can envisage creating in this century, we can foresee the risks to some degree, and we can foresee technologies that we could have beforehand that would make them safer once they arrive. And I think one of the key things that we can lean on is inventing both technology like permissive action links and also the kind of social technology, or ways of coordinating humanity, such that when we invent things like nuclear weapons, or whatever the next version of that is, we'll be in a much better position to make sure that they're not accidentally used really badly, or deliberately used really badly.

[00:27:37]

So to recap, and feel free to jump in and correct me: the kind of updated position of 80000 Hours about how to maximize your positive impact with your career is to look for opportunities to preserve or maximize the long-term value of civilization, a lot of which flow through finding ways to prevent humanity from destroying itself in the next, you know, couple of hundred years, or from dealing a severe blow to our growth trajectory in the next couple hundred years.

[00:28:20]

Right.

[00:28:21]

I think that's probably not the only way that people can have really huge impacts, but it seems like one where it's clear that the scale of the problem is very large, that the benefits would be really large if we can make this change. And also just very few people are working on this; there's really only millions, maybe tens of millions, of dollars going into this framework for improving the world. And so we think there are just a lot of really high-impact opportunities within this area for the kinds of people who read 80000 Hours.

[00:28:47]

Do you direct people to anything besides, like, research or donating to research organizations?

[00:28:53]

Oh, well, we're encouraging a lot of people to go and get experience in the policy world, you know, either in London or DC. I guess we're not sure exactly what policies or government policies we'd like to promote, if any, in these areas, but it seems like it's going to be important to have people who have a lot of experience understanding what impacts different policies would have when it comes to regulating, or deciding not to regulate, new technologies.

[00:29:17]

Or, I guess, also just focusing on international relations. Right. So one of the biggest risks is that new dangerous weapons are used by one country against another, or just that there's a conventional war between America and China, which would just be absolutely devastating. So going in and getting experience in international relations and diplomacy also seems really valuable.

[00:29:35]

Great. Do you try to do back-of-the-envelope calculations or quantifications about why? Like, I don't know, I could imagine making a case that improving education is going to create a more educated populace that will then be less likely to vote for a president who will, you know, launch a nuclear bomb or something. You could tell a plausible-sounding story for why a bunch of other things that aren't on 80000 Hours' list actually do serve the goal of reducing these global catastrophic risks.

[00:30:06]

So it sounds like you'd have to do some kind of rough quantification to say: yes, you should, you know, go into these political avenues instead of these education startups, or something like that.

[00:30:16]

Yeah, so this is the question of whether, if you want to improve the long-term future, you should do very targeted things or very broad things. The benefit of the targeted things is that, in a sense, because you're focusing on specific organizations or people or policies or technologies. Mm hmm. It's very clear what impact they might have on the long-term future.

[00:30:35]

For example, you know, let's say the U.S. and China are negotiating, or they're at one another's throats and considering going to war, and you're in the room there trying to negotiate to make sure that they don't have a war at any cost. That's a very targeted intervention, very focused on a specific circumstance. An alternative approach to improving the long-term future would just be, as you're suggesting, to improve education, maybe grow the economy, to just make people more reasonable in general, or to improve science across the board, in the hope that this would make things better.

[00:31:02]

And I think some of those broad interventions do help somewhat; about others that people are hopeful about, I'm less optimistic. I think the main problem there is just, for example, you talk about improving education: there's a lot of effort that already goes into improving education. There are a lot of reasons, other than worry about the long-term future, that people already dedicate their careers and their time and their money to improving it. And it just seems really hard to move.

[00:31:29]

Like, how much effort, how many careers would it take to improve United States education by 10 percent, or to make people more reasonable across the board? It just seems extremely hard; it'd take a lot of money, a lot of effort, and it's not clear that it would happen. With other things, like trying to prevent, you know, a war between the U.S. and China, there are definitely people working on that.

[00:31:49]

But it's, like, some people in governments, some nonprofits that, you know, have some small programs about this. It's nothing like the movement for improving education in the United States or other countries. So basically we think, as it just pans out, that these broad approaches to improving the world are already very crowded; people have strong reasons to go into them. And so if you're looking for somewhere where, like, one person can really move the needle by going into it, you typically want to look at more targeted approaches, where it seems like there's actually really useful stuff that can be done, things that could be invented right now, you know, conversations that need to happen between people, that very few people are working on.

[00:32:25]

It still seems like there's a tension, though, even if we exclude the fields that are already extremely crowded, like education, especially U.S. education. It still seems like there's a tension between interventions that have kind of a clear path towards how they could help reduce global catastrophic risk, or just increase the long-term expected welfare of civilization, versus interventions that aren't really aimed at anything in particular but just would fit into the category of exploration. OK, so this is a general argument that some thoughtful critics of effective altruism sometimes make, which is that if you look back at the history of things that have improved the welfare of humanity, most of them were not intended to improve the welfare of humanity.

[00:33:20]

They were, like, you know, some dude in 18th-century England who was like: I want to build a better textile mill, not because I think it will spark the industrial revolution and raise living standards for the next, you know, ten generations of people, but just because I think it'll make me more profitable. Or like the scientist who, you know, studied electromagnetism or genetics or something just because that was really interesting, and not because he had some story about how it was going to end up helping humanity.

[00:33:49]

And so, like, I think it's hard to argue against a lot of effort, or at least more effort than already exists, going into interventions that seem like they would reduce our risk of catastrophe and that no one else is working on.

[00:34:07]

But I'm curious whether you also see a role for a lot of people doing this kind of random exploration of stuff that isn't actually intended to help the world, but where, if you look at the track record of such things, at least some of them do, and those things end up pushing humanity forward.

[00:34:26]

Yes, there are a lot of arguments to get out here; there are a lot of interlocking arguments, and it's hard to get the whole worldview across in a short amount of time. But one thing, as you said: people point out that if we look historically, it seems like most of the good was done kind of incidentally, by people who were just trying to improve their own business.

[00:34:46]

That's interesting, and it is probably true. But I think it's a terrible argument, because there was so much more effort that went into that: so many more people who were just trying to improve their business, or studying science because they were interested in it, because it was their job.

[00:35:00]

And so we don't have, like, a strong track record of people trying to help humanity and failing. I mean, I think lots of people have tried to help

[00:35:08]

humanity and failed, and some have succeeded also. My point is just that it could be the case that people who tried to do targeted things were 100 times more impactful than average, but because there were so, so much fewer of them, their share of the total good done would still be swamped by the people who did good incidentally. Yeah. So I don't find that argument very persuasive. You would want to actually look at the people who tried to do targeted good things, who were like: I'm going to try to figure out what research topic is going to be valuable and then act on that, and then look at how they performed relative to the base rate of everyone else.

[00:35:39]

What you just said does seem like an argument for why we shouldn't have no one doing the targeted interventions, but is it an argument for why we shouldn't have a mix of targeted interventions and kind of, like, random exploration of stuff?

[00:35:55]

I mean, I agree it would be a pretty strange world, I guess, if everyone was trying to do this targeted stuff. One thing is it would exhaust most of the targeted options; they would cease to be neglected, because there'd just be too much effort thrown into that style of doing good. But as it is, because most people are trying to do good in a very broad way, most people are trying to improve education, grow the economy, make the world more reasonable,

[00:36:15]

that's where, like, 99 percent of humanity's effort is going. Which means that if you're part of the one percent, or the 0.1 percent, who are looking for really targeted opportunities, there's a lot of money left on the table for you to grab, because no other people are looking for it.

[00:36:26]

I think this is a really important point, actually, and this is kind of a misunderstanding that's in the background of a lot of conversations about effective altruism. I think a lot of people hear the arguments that, you know, 80000 Hours and other organizations make about the best way to help the world, and they imagine it's what you think everyone should do, whereas I think a lot of your advice is given from the perspective of the margin: like, for the next 100 people who want to help the world,

[00:36:54]

What would be the best thing for them to do? Yeah, and that isn't necessarily the advice you would give if you were giving advice simultaneously to everyone on the planet, is that right?

[00:37:02]

Yeah, exactly. I mean, imagine that we said: oh, well, if you want to do a lot of good, you should become a surgeon. It would obviously be farcical if all seven billion people in the world tried to become surgeons. But that's not the claim. We're just saying it would be good to add some more surgeons, relative to everything else.

[00:37:16]

Right, relative to what that person might have done otherwise. This might just be too, like, hard to answer off the cuff, but if you could wave a magic wand and cause some percentage of the world to follow 80000 Hours' career advice, what would that percentage be? Oh, interesting.

[00:37:33]

Well, does the advice have to be kind of constant? Like, we can't make it any broader than it is now, or, like, oh, can I change it as the people come in?

[00:37:42]

Right. Yeah.

[00:37:42]

I mean, if we could change it as we went, then quite a lot, maybe like 50 percent, focused on the risk reduction stuff. But I guess with the advice as it is now, maybe like one in 100, one in a thousand, something like that.

[00:37:54]

Oh, OK. And would you select a particular, let's say you could filter for some characteristics, what's the group of people that you would want to follow 80000 Hours' advice?

[00:38:04]

I guess it's people who are analytical, kind of cautious, curious, trying to be very informed, who care about not just, you know, going ahead with their own intuitions without listening to other people at all. Yeah, I guess those are some of the criteria that are really important. I mean, one thing is we think it's very possible to go into trying to solve the problems that we're very concerned about and cause harm.

[00:38:26]

So we wrote this article last year about, you know, various ways you can accidentally cause harm in your career. And I think people who have a very "move fast and break things" mentality might well actually make things worse in a lot of these areas. They're very fragile problems to be dealing with.

[00:38:43]

Yeah, but that was kind of a preamble to this discussion. So what about approaches to improving the world like just, you know, inventing new technologies in general? Inasmuch as the main problems that humanity faces come from the natural world, like super volcanoes or asteroids or, you know, natural diseases, then improving technology and growing the economy all makes us larger and more imposing relative to those problems, and puts us in a better position to rebuild after an asteroid, or deflect the asteroid, or control diseases.

[00:39:11]

But inasmuch as this kind of fourth point from Bostrom that I was describing is correct (actually, sorry, the third point), that most of the risk to humanity comes from ourselves, from new technologies that we're going to invent and stupid mistakes that we're going to make, then it becomes less clear that just empowering humanity at the broadest level is actually sensible. Because while you improve our ability to solve the problems we're creating, you also potentially grow the problems as well, because the whole problem in the first place was that we were running ahead of ourselves, inventing things, changing the world in very dramatic ways that run the risk of destabilizing everything and ending it.

[00:39:47]

So is that an argument for... it sounds like it would be an argument against sort of broad interventions to increase technological or scientific progress, and an argument for individuals who want to have a positive impact going into scientific or technological research, but specifically the research that would produce the safety-promoting technologies instead of the safety-decreasing technologies.

[00:40:12]

Is that right? Exactly, yeah. You did a great episode a few months ago with Tyler Cowen, who wrote the book Stubborn Attachments, which was sort of Tyler's argument for longtermism. But the main intervention that he promoted in his book to promote long-term welfare was increasing economic growth, as opposed to reducing global catastrophic risk. Did you feel like, by the end of the episode, you understood why your prescription and Tyler's prescription for maximizing long-term welfare were so different?

[00:40:45]

Yeah, somewhat. I mean, it was very funny reading his book: I'm like, I agree with 90 percent of this, and then we diverge pretty seriously. I mean, one thing is,

[00:40:54]

it's not clear that Tyler and I really disagree all that much, because he doesn't actually say that economic growth is the best path that one person could take to improve the long-term future or prevent human extinction. And in fact, I think he agrees with practically everything that we said: that the risk of extinction is higher than people think, and that there's useful stuff that could be done to reduce the risk of human extinction that people could work on in a targeted way.

[00:41:17]

Yeah, he explained in the interview that the reason he was talking about economic growth so much was that he thought many more people would be likely to take that advice, that it's a lot easier to get people to go out and just try to make more money, or be more innovative in their jobs and invent new things. That's something that potentially a very large fraction of the population can do, whereas the kind of advice that everyone else is giving is something that it's hard for most people to know exactly how to act on. It's a tough trade-off, though, because as you sort of said a few minutes ago, people already have incentives to go out and be innovative and make money.

[00:41:52]

And there's already a lot of effort going towards doing that, whereas there isn't as much effort going towards figuring out ways to reduce catastrophic risk. So... Right.

[00:42:02]

Yeah, so that's kind of the argument that I made back to Tyler. So one can argue even about whether speeding up economic growth is good or bad; it's not entirely obvious that it makes things safer rather than riskier, although I think Tyler does offer some pretty good considerations in favor of thinking that faster economic growth on balance makes the world more secure rather than riskier.

[00:42:24]

But that's an active debate that people have. I think the main argument against this is just that it's an insanely poorly leveraged approach to reducing human extinction. I mean, let's say that you were thinking: yes, I want to make sure that human civilization persists for hundreds of years, so I'm going to, you know, start a business and try to make it bigger and just grow GDP. It's true that maybe that's more tractable; many more people can see opportunities to grow GDP than to, say, reduce the risk of war between the U.S. and China.

[00:42:49]

But you're losing so much in the fact that the causal connection between growing GDP, growing the economy, or even inventing new technologies, and preventing human extinction is, I think, very weak. And it's unclear even whether it's positive or negative.

[00:43:02]

So that's one thing. Plus there's just the fact that, you know, we already spend one trillion dollars on R&D globally, and about 60 trillion dollars is paid out to people to do their jobs, to engage in economically productive activities. So in a sense the very background cause area is just growing the economy, and so it's very hard to say that it's neglected relative to other things, or all things considered, especially in a market economy where people have such strong incentives to make money and, in the process, to do things that grow the economy in a very general sense.

[00:43:38]

I mean, you could argue, from the perspective of, say, a philanthropist or maybe a policymaker or something, as opposed to an individual who's a participant in the economy, that an area that's neglected is figuring out how to increase technological growth, which then feeds into productivity. That's something that's been written about a lot: that our productivity is stagnating, and there isn't really a field yet devoted to the question of why scientific progress is slowing down.

[00:44:10]

I did an episode a few months ago with Michael Webb, who wrote that paper on whether ideas are getting harder to find. Yeah.

[00:44:15]

So if you weren't worried about global catastrophic risks, you could make the case that figuring out why scientific progress is slowing down, and how to speed it up again, is high-impact and neglected and at least somewhat tractable, or plausibly tractable.

[00:44:34]

Well, yeah. So suppose you really thought that global catastrophic risks were impossible, that civilization was just going to continue. Then actually it's not clear to me why the speed even really matters, because as long as we just keep growing, you know, each year, then we're going to get there eventually. And there's no particular reason why we have to grow so quickly. I suppose one thing is that the universe is expanding.

[00:44:54]

Like, that's the interesting thing; that's the most important argument: galaxies are receding from us. So the longer we wait, the slower we grow as an economy, the less, you know, value there will be available to harvest at the end of it.

[00:45:07]

Wait, I'm surprised to hear that you don't think the rate of growth matters. Like, if growth makes people better off, then isn't more people being better off sooner better?

[00:45:15]

I mean, I guess, so it's very hard to shift frame from thinking about the flow of goodness in any given year, where we're trying to increase the flow so that next year more value is generated than this year, to thinking about it more as an endowment that we have. We have, like, eternity essentially, or we have as long as we want in this universe, as long as we don't destroy ourselves.

[00:45:36]

And the limiting factor is kind of how much energy and matter we can harvest, how much we can reach in the galaxy and then in the entire universe, and then at some point, you know, whenever it's ideal, convert that into value. We've got a lot of time. So as long as we're continuing to grow, it's not clear how much the speed matters.

[00:45:54]

Well, so, yes, the reason to go faster would be that we'd manage to capture more of the universe before it recedes from us, outside the accessible universe. But it's less obvious to me that the fact that we would generate more value next year than this year matters all that much. I guess this is taking the long-term, utilitarian, consequentialist view very seriously. On other values, where you're concerned about people alive now in particular, there's more of a common-sense case for why we'd want to speed up improvement a lot.

[00:46:28]

But this is one place where I think the global catastrophic risk crowd deviates from common sense. People working in business might be thinking, yes, we want to grow the economy as quickly as possible so we can generate more value next year. Whereas I think we're thinking more like: how, over the very long term, can we get the most value out of everything? And from that point of view, it's much more about stability than about growing really quickly. Growing more quickly, but with a greater risk of catastrophe, is a terrible trade-off on this view, because, you know, the accessible universe is only receding at a rate of about one part in a billion per year.

[00:47:01]

So basically, if you could grow faster and get there a year sooner, that would only be worth about a one-in-a-billion risk of the whole thing ending. Right? So it's much more focused on stability than on speed.
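
(A rough sketch of the arithmetic behind that trade-off, taking the one-in-a-billion figure Rob cites as an assumed rate rather than a precise cosmological estimate: let $V$ be the value of the resources we could eventually harvest, $r \approx 10^{-9}$ the fraction of the accessible universe lost to expansion each year, and $p$ the added risk of permanent catastrophe from rushing. Speeding things up by a year beats waiting only if

$(1 - p)\,V > (1 - r)\,V$, i.e. $p < r \approx 10^{-9}$,

so any gamble that buys a year of speed at more than roughly a one-in-a-billion chance of ruin comes out negative on this view.)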

[00:47:13]

Right.

[00:47:15]

But yeah, if you thought that just improving technology did stabilize things, and one can make arguments in that direction, then working on science and technology policy to figure out how we can do innovation more quickly could potentially be really valuable. Because I agree, not many people are thinking at that level. Patrick Collison has done a bunch of interviews lately, including one in The Atlantic, that we can link to, right?

[00:47:37]

Yeah. There are surprisingly few people thinking at a policy level about how to make scientific research proceed much more quickly. But personally, I'm not convinced that just increasing the total amount of scientific research we do each year is even positive, or certainly that it's among the more leveraged ways to improve things. I'd be much more focused on how we improve societal wisdom and prudence, so that when we have more advanced technologies, we're more likely to use them well, and not use them against one another or in just very stupid ways.

[00:48:09]

One of the things I wanted to ask you: when we were talking about who should follow 80,000 Hours' advice, and what percentage of the world we'd want following your advice versus doing other random stuff, like exploration. Do you worry at all about people following 80,000 Hours' advice who otherwise would have pursued some kind of eccentric passion that could turn out to be the 21st-century equivalent of the 18th-century textile mill?

[00:48:38]

Like, do you worry about getting rid of the potential innovators?

[00:48:44]

Yeah. So inasmuch as I'm very much within this frame of thinking that advancing technology in general doesn't help us, I suppose I wouldn't be so worried about that. But that is a somewhat counterintuitive view that I'm not sure about, actually. So we suggest 10 priority paths, which are the ten career paths that we currently think are most likely to make a really big difference to improving the long-term future. But one exception we build into our whole process for deciding on a career is: if there's something you're incredibly well positioned to do, that no one else is able to do, and that seems like it would have a really large impact,

[00:49:14]

then there's a pretty strong case for just sticking with that rather than switching into the paths we've suggested. But probably the first thing we think people should do when they're planning their career is to try to figure out what problem they're trying to solve, and potentially to do that before they figure out what method they're going to use, or think too much about what they're specifically passionate about, just because we think there's a hundredfold, a thousandfold, possibly even greater variation in how much bang for your buck you get from focusing on solving different problems.

[00:49:44]

So just making sure that you work on something that is enormous in scale, that other people aren't working on, and where you can make a difference seems like one of the prime considerations.

[00:49:54]

Now, all of that said, people do sometimes worry that we reduce people's creativity or exploration in their careers.

[00:50:02]

I think if people actually read us very closely, they'll see that because we're so focused on people doing stuff that's neglected, where there'll be low-hanging fruit that other people haven't taken, in a sense we are extremely in favor of innovation and exploration. But one way that creating a career guide in general is limiting is that we have to put something on the page: suggestions that apply to more than one person, that can be generalized somewhat. That can cause people to think, oh, it's only these things,

[00:50:26]

these very generic, kind of stock positions that are available. But very often the best opportunity for you, with any given problem, is going to be something that only you know about, and of course we can't put that down, because it's specific to each individual. We can address that in the one-on-one career coaching, but it's something people should watch out for: no, we're not saying that you should just go and get into some position that's extremely codified and well understood.

[00:50:48]

Often it will involve finding something that's very specific to you and your unique circumstances.

[00:50:52]

Okay, well, we'll link to the 80,000 Hours career guide and just ask people to mentally insert those asterisks after all the advice it gives. Rob, before I let you go, I wanted to ask you to nominate some resource, whether it's a person or a book or an article, that has influenced your thinking in some way, or that you have substantial disagreements with but have gotten value from engaging with. There are a lot of possibilities there; do you have anything that fits?

[00:51:21]

So a lot of your listeners will be familiar with James C. Scott, who wrote the classic anarchist book Seeing Like a State, about how projects to improve the world, the kind of high modernist projects by governments where everything is standardized, have often failed, and failed really catastrophically. I'm something of a defender of high modernism, of these very organized ways to improve things. I think people underrate, for example, how much high modernism improved agriculture enormously and made us much richer in many ways in the long term, even though obviously in cases like the Soviet Union it went very badly and was catastrophic to begin with.

[00:51:54]

Anyway, he's written another book, which was maybe my favorite book of last year, called Against the Grain. It's a deep history of the first states. He goes back and looks at how the very first countries, of just thousands of people, formed, you know, around 5000 B.C. It's just an absolutely fascinating history, and it includes many unexpected things about the nature of those very first city-states.

[00:52:15]

Excellent. Yeah, James Scott is great, and at least one past guest and I, on a different podcast, have recommended Seeing Like a State as a book that really influenced our thinking. So it's nice to get a contrarian perspective on this contrarian book.

[00:52:28]

Yeah, I can stick up some links to reviews where people critique it. I think it's surprisingly rare for people to just say, no, actually, I want to defend high modernism.

[00:52:38]

Yeah, it's really not trendy, not fashionable exactly. But just one more really quick one: Destined for War by Graham Allison, which is about trying to assess the probability of a war between the U.S. and China in the 21st century, looking at historical analogies to try to guess at that. I think it's an underrated book that, yeah, might be interesting. I'm hoping to interview some people on the 80,000 Hours podcast about it.

[00:53:00]

Yeah: how do we make China and the US get along and cooperate in the future?

[00:53:03]

Excellent. Well, Rob, thank you so much. It's been a pleasure having you on the show. Yeah, it's been so much fun. Hopefully we can talk again soon. I look forward to it.

[00:53:10]

This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.