[00:00:00]

Hi, listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world's most pressing problems and how you can use your career to solve it. I'm Rob Wiblin, Director of Research at 80,000 Hours.

[00:00:11]

Today, I'm delighted to bring you a conversation between Arden Koehler and our CEO Ben Todd. Ben's been doing a bunch of independent research recently.

[00:00:19]

And we thought it would be interesting to hear how he's currently thinking about a couple of different topics. This is all very off the cuff compared to our regular episodes.

[00:00:27]

And even though he is our CEO, the things Ben says here aren't official 80,000 Hours positions or anything like that.

[00:00:33]

This episode is just 54 minutes long, a conversation so short, I'm sure regular listeners will barely be able to believe it.

[00:00:39]

And it's split into two different sections. In the first, Ben and Arden talk about four different flavors of longtermism and their distinctive implications for people trying to do good with their careers.

[00:00:49]

In the second part, they move on to things that 80,000 Hours might be getting wrong, including how much weight to put on personal fit and whether we might be highlighting the wrong problems and career paths.

[00:01:00]

Given that we're in the same office, it's relatively easy to record conversations between 80,000 Hours team members. So if you enjoy these types of bonus episodes, do let us know by emailing us at podcast@80000hours.org, and we might make them a bit more of a regular feature.

[00:01:14]

Just before that, though, I wanted to let you know that our annual user survey is now open for submissions. Once a year, for two weeks, we ask all of you — our podcast listeners, article readers, advice receivers and so on — to let us know how our work has helped or hurt you. You can find that survey at 80000hours.org/survey. 80,000 Hours now offers a whole lot of different services.

[00:01:36]

And your feedback helps us figure out which programs to keep, which to cut and which to expand. The survey is pretty different this year, and among other changes, there's a new section covering the podcast, asking what kinds of episodes you liked the most and want to see more of, what extra resources that we produce you actually use, and some other questions as well.

[00:01:53]

We're always especially interested to hear ways that our work has influenced what you plan to do with your life or career.

[00:01:59]

Whether that impact was positive, neutral or negative, that might mean a different focus in your existing job or a decision to study something different or maybe a choice to look for a completely new job.

[00:02:09]

Or alternatively, maybe you're now planning to volunteer somewhere or donate more or donate to a different organization than you would have otherwise.

[00:02:15]

Every entry you write will be lovingly attended to by a team of expert survey readers from the Nepalese highlands, who will inscribe each submission in beautiful calligraphy, set out what you've written from a cliff jutting out of the Himalayan mountains, and then ceremonially burn the paper in an ancient user-feedback-related ritual.

[00:02:30]

Then, totally independently of that, they'll be carefully read by me and my colleagues as part of our upcoming annual review, and we'll decide on some things that 80,000 Hours should focus on or do differently next year. So please do take a moment to fill out the user survey. Those who do so will have forever

[00:02:44]

won my love. You can find it at 80000hours.org/survey. All right, without further ado, I bring you Arden and Ben.

[00:02:54]

Hi listeners, I'm Arden, I'm a researcher at 80,000 Hours. And I'm Ben, the CEO and co-founder of 80,000 Hours. And in this episode I'm going to pick Ben's brain about his recent research.

[00:03:04]

So, Ben, you've been doing a bunch of research recently for 80,000 Hours. And I know a little bit about what you've been thinking about. But just for the audience, what are the topics you've been thinking about and what are some ideas you've had?

[00:03:18]

Well, there's a bunch, which I hope to write about soon on the website, but one I've been particularly interested in recently is: what are the different types of longtermism, and what do they imply about which careers seem highest impact? And then what might that mean for what our advice should be?

[00:03:35]

So we've talked about longtermism on the show already, but for anyone who hasn't heard of it, do you want to just say briefly what that is?

[00:03:41]

Yeah, so just very, very roughly, it's the idea that what matters most about our actions is their very long-term effects. And by that I don't just mean effects over a couple of decades, which is often what people mean when they talk about long-term thinking — we really mean maybe over thousands or even millions of years. And so roughly, if you were going to ask how high impact an action is, you should be thinking about the question: what might its very long-term effects be?

[00:04:05]

And that would be like the key thing to focus on.

[00:04:07]

OK, so what are these different varieties of longtermism? Yeah, so pretty much all the researchers who focus on longtermism agree that we should focus on what's sometimes been called path changes rather than speedups. So you can think of a speedup as just getting us to where we're going to go in the long term faster, whereas a path change is something which changes how the long term is going to be, proportionally. So an existential risk is an example of a path change, because in that scenario, if an existential catastrophe happens, then the value of the future is much lower for the rest of civilization's history.

[00:04:41]

So an existential risk — I think Toby Ord defines it, actually, as something that just makes the rest of the future much lower in value. So that could be not existing, whereas we might have existed otherwise. Or I guess it could also just be being in a really bad state for a very long time, as opposed to a neutral or good state.

[00:04:57]

Yes, exactly. Whereas a speedup might be making an important discovery a bit earlier than it would have been made otherwise. And that's getting us faster progress, but not necessarily changing where things end up in the long term.

[00:05:11]

So why is faster progress good at all?

[00:05:14]

Well, if the future is going to be better than now, it means getting to that sooner, so you're spending longer in that good state, just very simply speaking. Yeah. So the best kind of treatment of speedups versus path changes is in Nick Beckstead's thesis, which we linked to from our article on longtermism. But anyway, that's not the main topic I want to talk about. OK. So, assuming that it's these path changes — or what I called them in my last podcast, hinges — that are the key thing to focus on.

[00:05:42]

We can then divide up different types of longtermism by which hinges they think are most important. OK, so if you take this long-term perspective, basically you're trying to find key moments when what we do today can make a difference to what happens in the very long term. So maybe most things we do don't really have very long-term effects, or at least not ones that we know of — things just get muddled and washed out.

[00:06:05]

But maybe there are some moments where what we do does have these ongoing effects, or makes the difference between one way the world could have gone and another way the world could have gone. And so, yeah, we're trying to find these influential moments. And then different types of longtermists kind of want to focus on different types of influential moment, or they have different views of what those are. And so, yeah, broadly we can categorize the different influential moments in a couple of different ways.

[00:06:33]

So one would be whether we think the influential moments — the key ones — are going to happen, say, over the course of our careers in the next couple of decades, or whether they're going to happen beyond that period, further in the future. And the more you think these influential moments are going to come later, the more you want to focus on the type of longtermism which is called patient longtermism. And then otherwise, you're going to want to focus on addressing these moments that are coming up in the next couple of decades.

[00:06:59]

So this is related to the conversation we had with Will on the podcast about the hinge of history. Right. Will suggests that maybe the hinge of history — where that means, I guess, a cluster of really influential moments, or maybe just one — is far in the future, or at least that's what we should expect, whereas other people think it's maybe in the next few decades or the next century, which would still be considered less patient than some other views.

[00:07:21]

Yeah. And so Toby Ord's recent book — and Toby we had on the podcast — in his book he's actually arguing that the next couple of centuries is the time of perils, so that's an unusually hingey moment, I guess. Well, he's arguing the next couple of centuries is the Precipice, that that's the time of perils — though I think he may also think that the next couple of decades are, like, even unusually important.

[00:07:45]

So Toby would be slightly more on the 'we need to act now' end of the spectrum, whereas Will was pushing a bit more into the patient end of the spectrum. So one example of a moment that would be extremely influential, I guess, is if you think that catastrophic biological risks are an existential risk — a present existential risk — and you're in charge of some sort of bioweapons program or whatever, there could be a moment, even in the next few decades of your career, where you might be able to act in such a way that it actually prevents this existential risk from materializing, which would make a huge difference to the entire future of humanity.

[00:08:22]

Yes.

[00:08:23]

So in Will's post, he introduces three things that can make a moment influential. One is — and all this terminology is super new and unsettled and often quite bad — but one is the degree of pivotality.

[00:08:37]

OK, so that's kind of just how much scope we actually have to change what happens. And so an example of something that can make a time really pivotal is if we discover a new technology — something that could create a new bioweapon, say. Right as we're about to discover that, that would be a really pivotal time, because maybe the details of how that technology is handled could make a big difference to whether that's an existential risk or some other shift to the future.

[00:09:04]

OK, what are the other two then? You said there were three — the sorts of properties that could make a moment influential.

[00:09:10]

So then the second one is how much ability we have to do anything about it, because the time could be very pivotal, but maybe we can't actually change what happens. So it's like there's a really important event, but it's not influential. And then the third one is whether we know what those things are. So, again, there might be these amazing opportunities around — you might have the first two things satisfied — but if we can't identify them, then it would also not be an influential time on Will's definition.

[00:09:39]

OK, so patient longtermists think that events or times — or combinations of events and, you know, the state of humanity at that time — where we're actually going to be able to make a huge difference because these three properties hold, probably aren't going to happen for an extremely long time. Is that the idea?

[00:09:55]

Yes. Or just that now is not a special time. Like maybe it's just as influential as other times that will come up in the next few hundred years.

[00:10:03]

Okay, that's a really helpful clarification. So you could think that right now is in some sense — you know, there's lots of existential risk right now. But if you thought that there was going to be even more existential risk in a couple hundred years or a thousand years, or the same amount, you might still be a patient longtermist. Yes.

[00:10:20]

Or you might think that we'll understand them better in the future, and so we'll be better able to deal with them.

[00:10:26]

OK, so that's patient longtermism. And impatient — or I guess 'urgent' longtermism, that seems a better term. But then urgent longtermism divides into several other forms.

[00:10:38]

So we'll put patient longtermism aside and just focus on the types of urgent longtermism. OK, so then the next key question is — we've now decided that there are these important influential moments in the next few decades — the next question is, do we actually know what they are? And so you might think now is an unusually important time, but we're very unsure about what the particular influential things are. And that type of longtermism I call broad longtermism.

[00:11:03]

And so, yeah, this is a slightly odd position, but you can kind of almost see Toby in his book as going towards this position: I think he actually thinks that either the second or third biggest existential risk is from an unknown risk. And so he's relatively on the urgent end of the spectrum, but he also thinks there's quite a good chance that we don't actually know what the key thing is.

[00:11:29]

OK, so one thing that feels a little confusing here is that you said before that something that makes a moment influential is that we have enough knowledge to be able to, you know, act in this way or take these opportunities. So I guess I'm thinking, if you thought we didn't have knowledge of what the opportunities were, then you should be a patient longtermist, even if you think there are those opportunities in the near future.

[00:11:54]

So how can somebody be a broad longtermist, but also an urgent one?

[00:11:58]

Well, you could think now is a pivotal time and we might be able to do something about it. You don't have the third one, the knowledge, but those first two could still be enough to make you quite urgent. OK, but yeah — I agree that the knowledge thing is maybe pushing you towards patience, but maybe the other two factors could outweigh it.

[00:12:18]

I guess you could also think we don't in fact know right now what the influential moments are, but you could think, well, when they come up, we might be able to recognize them. So as long as we have people standing guard over the next hundred years or something, yes, they'll be recognizable when they come up, even if we can't tell what they are now. Yes.

[00:12:35]

Though that sounds a bit more like patient longtermism. Yeah. I mean, maybe it's worth saying all of these things are a spectrum. So with broad longtermism, what people then tend to focus on is — say, suppose one type of broad longtermism is that you believe now is an unusually risky time in terms of existential risk, but you're not sure what the biggest risks are. Then what you can do about that is focus on reducing risk factors.

[00:12:58]

So, things that generally increase existential risk — you reduce those, rather than trying to reduce specific risks.

[00:13:04]

OK, and that's things like trying to increase international cooperation or something — something that's, like, robustly good for lots of different things.

[00:13:11]

Yes — I think Toby mentions great power conflict in the book as potentially an important risk factor, because it seems like many existential risks can be prevented if people can coordinate and work together, and it seems like one of the most likely ways that coordination isn't possible is if there's a war between important powers. So if we knew there was going to be no great power conflict, then we should probably think existential risk would also be a bunch lower over the rest of our careers.

[00:13:37]

So that's broad longtermism. And then I guess the opposite would be narrow longtermism — not the opposite, but, you know.

[00:13:43]

Yes. Though I've been calling that targeted longtermism. Targeted longtermism, yeah.

[00:13:47]

So that would be the idea that acting is urgent and we know what the key influential moments are. Right. And that gets us on to — we can then divide targeted longtermism into two types again. One where we should focus on existential risks as the main thing, and the other where we should focus on some other type of path change. OK, and those are, like, the two broad types of influential moment: an existential risk, or another kind of path change.

[00:14:13]

Can we also divide broad longtermism into existential risk focused broad longtermism and other, kind of path change focused, broad longtermism?

[00:14:23]

Well, so I've defined broad longtermism as: we don't know what the influential moments are. So it kind of could be any of those. I see.

[00:14:29]

Yeah. OK, so what's an example of another kind of path change that's not an existential risk? Yeah.

[00:14:35]

Trajectory change — of the four positions, that one's probably the least popular. But the version that actually seems to have the most support would be another AI focused longtermism. Some people think that the biggest reason to focus on AI is not because an AI disaster could be an existential risk, but rather because exactly how AI is designed might have some other kind of very long-term effects — like it might lock in a certain set of values, or just kind of change how civilization is, for a very long time.

[00:15:08]

And it might just change that in a proportional way, rather than a kind of all-or-nothing way.

[00:15:12]

So, like, how we deal with AI could make the difference between having an OK future and an awesome future. Yeah.

[00:15:18]

Or maybe you could even think it might be possible that if it's handled in one way, you get, like, 80 percent of the value, and if you handle it slightly worse, you get 79 percent of the value. And maybe there's a whole load of gradations of how well it's handled overall. OK, cool.

[00:15:31]

So we have — is that four types of longtermism, then? Yes. OK, can I ask you which one you're most sympathetic to, or—

[00:15:39]

I don't know if we should recap the types, but we've got patient longtermism, then we've got urgent broad longtermism, urgent targeted longtermism — and then, well, that divides into existential risk focused longtermism and path change, or trajectory change, focused longtermism.

[00:15:52]

OK, actually, let me just — just because taxonomies are fun — let me ask: can patient longtermism be divided into broad and narrow? Like, could you think, well, the influential moments won't be for a really long time, but we're pretty confident that they're going to be coming from, like, brand new technology — and maybe that's, like, super...

[00:16:12]

Yeah, no — I think it's not super plausible, but OK, you could imagine there being a further spectrum in there, about how much knowledge we have about what these influential moments are going to be. And that could influence exactly what type of capacity building you want to do as a patient longtermist.

[00:16:28]

OK, but, like, usually it's going to be more plausible to be sort of a broad patient longtermist, you know? If it's not going to be for a long time, probably we don't have a great idea of what it is. Yeah, I guess so.

[00:16:38]

I'm not keen on that taxonomy, because I think the most common way people confuse these different types of longtermism is between broad versus patient. OK. So they actually have pretty different implications, because broad longtermists — sorry, I've defined broad longtermists as, like, the urgent focused ones — so they want us to do stuff which immediately reduces existential risks, or otherwise helps society better navigate whatever the influential moment is going to be in the next few decades. Whereas instead, patient longtermists generally want to build capacity, to empower future people to take on these influential moments when they come up.

[00:17:13]

Yeah — and both of them would be keen on global priorities research. Probably the patient longtermists are going to be more keen on it because — well, actually, that's kind of not clear. But maybe because research takes a long time to have an impact.

[00:17:28]

Yes. If you're, like, super urgent focused, then it kind of stops being attractive to do the research, because, yeah, it's quite a long-term investment.

[00:17:37]

OK, so now that you've put off the question of which one you're most sympathetic to — yeah — do you have a sense?

[00:17:43]

Well, so I guess for me it's been mainly between existential risk focused longtermism and patient longtermism. But then I also put a little bit of credence on the trajectory change one and the broad one. So actually, overall, I'm a bit more focused on urgent longtermism than patient longtermism — but yeah, patient longtermism would be my second view out of the four.

[00:18:07]

And is that because you think the arguments that Will was making seem somewhat persuasive, but you're not sure? Yeah.

[00:18:13]

And, I mean, also, Trammell's arguments for waiting seem pretty powerful in some ways.

[00:18:19]

Yeah. So we had Phil Trammell on the podcast. But just to very briefly say — that was more focused on how much power we're going to have to affect these moments.

[00:18:28]

Like, Phil pointed out that if we sort of invest resources now to grow them — meaning financial, but also other kinds of resources — we could have a lot more influence in the future, which would sort of push you toward patient longtermism.

[00:18:40]

Yeah, I mean, I think also our understanding improving over time seems fairly convincing. And then it kind of also seems like, if you're uncertain between the two views, the patient one maybe keeps your options open more, though. Interesting.

[00:18:53]

I would have thought — I mean, the intuitive thing is, if you're uncertain between patient and urgent, you should go for the urgent one, because it seems like a bigger mistake or something to...

[00:19:01]

Yeah, I guess you're right. I mean, I'm pretty unsure about which way that goes. But yeah, obviously, if there's, like, an existential risk now and we could have prevented it, then that's kind of the ultimate loss of optionality. I suppose there are other arguments where uncertainty puts you in favor of patience — like, Phil, I think, talks about Weitzman's arguments about discount rates, where if you're uncertain about what the discount rate should be, generally that will push you towards using lower ones over time.

[00:19:26]

Interesting. I don't remember why that was. Is it easy to explain?

[00:19:30]

Not super easy, but — well, I don't know how many people will know what a discount rate is, but yeah. Imagine, OK, either we have a high discount rate, which means things in the future are not very important compared to things now, or we have a low discount rate, which means things in the future are very important compared to now. If you kind of imagine taking the average of those two, you end up with things in the future being pretty important.

[00:19:51]

And then if you kind of do this more mathematically, it actually ends up that, as you get further and further into the future, your expectation value for the discount rate should tend towards the lowest conceivable value for the discount rate. Interesting.

[00:20:04]

Yeah, I don't feel like I followed that last step, but we'll just link in the show notes to where Phil explained this.

[00:20:09]

You said — yeah, I think so. But it's a famous paper by Weitzman, an economist; we'll link to that in the show notes too. OK, so those are the four varieties of longtermism.
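[Editor's note: for anyone who, like Arden, wants the missing step, here is a minimal numerical sketch of the Weitzman point — our own toy illustration with made-up rates, not the setup from the paper itself. The key move is that under uncertainty you average the discount factors, not the rates, and at long horizons the factor from the lowest rate dominates.]

```python
import numpy as np

# Toy illustration (hypothetical 50/50 split between a 5% and a 1% rate).
rates = np.array([0.05, 0.01])
probs = np.array([0.5, 0.5])

for t in [10, 100, 1000]:
    # Average the discount factors e^(-r*t) across the two possible rates.
    expected_factor = np.sum(probs * np.exp(-rates * t))
    # The single rate that would reproduce that factor at horizon t.
    effective_rate = -np.log(expected_factor) / t
    print(f"t = {t:>4} years: effective rate = {effective_rate:.2%}")

# Prints roughly 2.80%, 1.68%, 1.07%: as the horizon grows, the effective
# rate tends towards the lowest possible rate, because e^(-0.01*t) comes
# to dwarf e^(-0.05*t).
```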

[00:20:20]

So what's the implication of these distinctions for what 80,000 Hours should be doing as an organization to help people do good with their careers?

[00:20:29]

Yeah. So all the different longtermists basically agree that we should want some of all of these kinds of things, and so there should be some kind of portfolio of effort across them. Where people differ more is just exactly how much effort it would be ideal to go into each one. And so we would take a similar approach in our advice: among our users, we think it would be cool if people were pursuing all of these different types of longtermism.

[00:20:53]

Yeah.

[00:20:53]

So can you say more about what it means to pursue the different types of longtermism? Yeah — so given each type of longtermism, you can then think about what priorities it would imply about cause selection and career capital and movement building; those are the main differences. So yeah, to give an example, the thing we have most clearly on our key ideas page is the existential risk focused longtermism. And so there we think we want people basically focusing on reducing the existential risks that are biggest and most neglected and most solvable — and so we've tended to focus on AI safety, biorisk, nuclear security and climate tail risks as the four key things to work on.

[00:21:34]

So if you were a patient longtermist, or you thought that in fact other kinds of path changes were more available to us, then you might focus on other priorities than the things we emphasize the most.

[00:21:45]

Yeah, so the existential risk focused longtermists would focus on reducing those risks I just mentioned — they would also want to reduce risk factors as well. But then if you're a patient longtermist — well, in practice, patient longtermists would agree we'd still want to invest a little bit in those things, but mostly we'd want to be building capacity, to help people in the future do more good. And so then that would tend to come down to some kind of movement building, because that seems to be one of the best ways to get more people caring about these things in the future.

[00:22:11]

And is this, like, effective altruism movement building in particular, or—? Yeah, it could be broader.

[00:22:16]

Yeah, it could be any type of movement building that gets people interested in whatever the long-term priorities will be in the future — OK, at a very, very abstract level. But yeah, effective altruism movement building seems like one that seems to be working. But I would wonder in the future whether we might want something like movements concerned with political representation of future generations, and that could be another independent movement that might be really good from a long-term perspective. Although, actually, it sort of strikes me that if I'm a patient longtermist, then shouldn't I think that a movement built around increasing representation for future generations is not going to be focused

[00:22:53]

on grabbing the influential moments when they come up? Don't I want a movement of people who are kind of, like, scanning around all the time and looking for those influential moments, and then grabbing them and acting when they come about?

[00:23:05]

Yes — that's kind of ideally what you want. But then you could imagine something like that broader thing might just get lots of people interested in the narrower thing. So a patient longtermist might be focused on movement building. Yeah — we actually just released a blog post which has a quick summary of the types of things patient longtermists want to focus on. But another big one would be global priorities research, to try and figure out what those influential moments are going to be.

[00:23:30]

Yeah. So another thing is patient longtermists would be more keen on people investing in their career capital. So, yeah, by career capital we mean skills, connections, reputation, credentials that put you in a better position to make an impact in the long term. So it's kind of like how much you've invested in building your skills and abilities. Yeah.

[00:23:47]

And so patient longtermists — well, all the different views will want people to focus on gaining career capital to some degree, unless you're very, very urgency focused. So if you think the key influential moment is coming next year, then there would be no point doing a PhD now — you should just do whatever you can do to help. But if you think it's coming over, like, a few decades, you probably would want to get some career capital, if it can pay off within those few decades.

[00:24:11]

But then, yeah, patient longtermists would be the most keen on career capital, because they're really happy to wait for as long as needed to have more impact.

[00:24:19]

So one thing that feels a little confusing about that to me is that the career capital question is, like, OK, what should I do for the next couple of decades? Whereas the thing that patient and urgent longtermists disagree on is how important the next couple of centuries or even millennia are. So, like, how much do they really differ on this question of career capital?

[00:24:36]

Yeah, so I think it's maybe a bit less than might first seem, but some of the more urgent focused longtermists really think the next, like, one or two decades are pretty unusually important. And that effectively makes your discount rate a few percent higher, which kind of increases the hurdle rate for getting career capital a little bit. And so, yeah, it wouldn't make a huge difference. But if someone was on the fence about, should I do a Masters, should I do a PhD, or some other thing they think won't have any impact but will get them some extra career capital — it could make the difference in marginal cases, and patient people would default towards the career capital.

[00:25:09]

And so it seems like, if you thought the most influential moment or moments were going to be coming around the end of the century, you'd still count as an urgent longtermist — I mean, obviously there are gradations, but you're closer to the urgent longtermists than the patient longtermists. But you might still have the same view on career capital as the patient longtermists. Yes.

[00:25:29]

Yeah, exactly. So another way of seeing how urgent or patient to be is in terms of what fraction of our resources we'd ideally give each year. And this is easiest to see with money. So you can imagine: well, we have a lot of money — how much should we spend each year on doing good? Yeah. So if you think the next hundred years are unusually influential, roughly, you'll want to spread out your money over that hundred-year period, and so—

[00:25:56]

Well, on Phil's model, that would mean that you would give, like, a couple of percent a year, something like that. And that's not actually a very different answer compared to a patient longtermist — the patient longtermist would maybe give, like, a percent less. And that's because the urgent person is still spreading out over a hundred years, so it's still very spread out. But if you were an urgent longtermist who thought the next decade is unusually influential, then on Phil's model you would start giving something more like 10 percent a year.

[00:26:23]

And then so that would be, like, a really big difference. So it's only once you get to these really short timelines that it really starts to make a big difference to how much we should be giving now versus investing — and then the same with how much we should invest in career capital, rather than just trying to do the best we can right now.
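[Editor's note: a toy version of the giving-rate arithmetic Ben describes — a deliberate simplification with made-up numbers, not Phil Trammell's actual model, which also accounts for investment returns and other complications.]

```python
# If you think a period of N years is unusually influential and want to
# spread a fund's object-level spending roughly evenly across it, you
# give about 1/N of the fund per year (ignoring investment returns).
def naive_spend_fraction(influential_period_years: int) -> float:
    return 1 / influential_period_years

print(f"{naive_spend_fraction(100):.0%} per year")  # 1%: 'the next century is hingey'
print(f"{naive_spend_fraction(10):.0%} per year")   # 10%: 'the next decade is hingey'
```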

[00:26:39]

And just to clarify, when we talk about spending resources or giving resources, we mean on things besides things like movement building, right? Things that don't really count as investments of any kind. I'd categorize movement building as an investment, right? Yeah, that's what I'm saying.

[00:26:52]

So, like, you could spend, you know, fifty percent of your resources by funding programs, but if they're mostly focused on things like movement building, that wouldn't count as spending fifty percent of your resources on this model. It's how much is going on object-level things. Yeah.

[00:27:05]

So just to make this really concrete: if a listener is hearing this and says, yeah, you know, I was really convinced by some of the arguments that I'd heard before for patient longtermism — what, you know, should they maybe do differently because of that?

[00:27:19]

Yeah. So in terms of career options for patient longtermists, perhaps one of the best things is any type of movement building that would pay off from that perspective. The second thing would be anything that helps with global priorities research. And then a kind of third category is earning to save. So, basically earning to give, but then you would put the money into some kind of foundation or donor-advised fund, and try and invest it and have more money in the future to give.

[00:27:45]

And there are different degrees to which you can pursue that. One could be, like, you try and donate it all at the end of your life; or another could be, you try and put it in some kind of trust that will be spent well after you die. And then a fourth category is, you could get any career capital that will help with those three priorities. So that would be career capital that helps you do global priorities research, or that helps with movement building, or that helps you earn to save.

[00:28:07]

So that would be like something that puts you into a position to make a lot of money. Yes. In that last category. Yes.

[00:28:12]

And I think this is another thing people sometimes mix up: they think that patient longtermists should be keen on really transferable career capital. So you can kind of divide career capital into specialist and transferable. Specialist is, like, career capital that's useful for just a narrow range of paths, and then transferable is, like, useful in many different careers. Patient longtermists — I used to kind of think as well that they should probably focus on transferable career capital, because we're not sure what the key moments are.

[00:28:40]

But actually they should focus on getting career capital that helps with those three priorities — and it could be specialist career capital. The people who might be more tempted to focus on transferable career capital would be the broad longtermists, because they might take a strategy of just doing something transferable, and then trying to spot that key influential moment that will come up over their lifetime, and working on that.

[00:29:03]

Although I guess you could also think, if you're a broad longtermist, maybe you should focus on, like, international relations — I know we keep using this example — but, like, really narrow career capital for working to help reduce the chance of great power conflict. Yeah, totally.

[00:29:16]

So I think even in the broad longtermist case, there could be a strong case for focusing on specialist career capital, where you would bet on some kind of broad intervention that you thought would help with many different potential risks, or other types of influential moments, for the long-term future.

[00:29:32]

OK, so I want to move on here pretty soon to talk about your views on career capital and other research topics. But before we do that, is there anything else you want to say about the varieties of long termism?

[00:29:44]

Yeah, I mean, one interesting question is, what are the biggest priorities within those four things right now, and where the gaps are. And yeah, I've tried to think a bit about whether we're getting the patience versus urgency tradeoff right across the community and among our readers. And I think that's pretty unclear, because effectively we are saving a lot of resources for the future — because lots of people are young, so they're investing in career capital, and Open Philanthropy is saving lots of money.

[00:30:13]

And Open Philanthropy is our biggest funder.

[00:30:16]

And it's, like, the biggest kind of effective altruism aligned foundation.

[00:30:19]

And so, yeah, I mean, I think maybe we neglect the patient perspective a little bit, but overall it's pretty unclear. Yeah. And then within the kind of more urgent forms of longtermism, what might be being neglected? I think most effort is going into the existential risk focused one, and I think that makes sense. But I think there's currently maybe a bit of a gap in the broad focused longtermism.

[00:30:41]

And yeah, one reason for that is, it seems like, you know, most people don't think that should be our biggest priority, but lots of longtermist people think maybe it would be reasonable to, say, put 10 percent of our resources into that. But it seems like currently it's much less than a couple of percent of people, or money, being spent on that style of thing.

[00:31:00]

And that's within the effective altruism community. Yes. So, I mean, what if somebody said, OK, but in fact the broad longtermist interventions are the things that most people outside of the effective altruism community focus on — or, sorry, not most people, but, like, you know, a greater number of people. So that's things like maybe increasing economic growth, or trying to reduce the chance of conflict, which seem a bit less weird and neglected than, like...

[00:31:26]

Yeah — well, the economic growth question, how that fits in, is a bit complicated. OK. In some ways economic growth is a speedup, so it's not even one of these four varieties of longtermism. But in practice some people think, well, if we had more economic growth, maybe we'd get through the time of perils faster, and so it's reducing existential risk.

[00:31:45]

I guess I was also thinking we'd just have better tools and better education or something like that at the point when something happens — so maybe we're, like, a bit robustly better at dealing with it.

[00:31:54]

Yes — but then you have to set that against, if we have more technology, then maybe we have more risks in some ways as well. So then you have to kind of quantitatively model the two different effects. And so there's this paper by Leopold Aschenbrenner, that I guess we've mentioned before on the podcast, where he tries to make a quantitative model of those, and he ends up concluding economic growth is, like, overall probably going to be positive for reducing existential risk.

[00:32:16]

And so that then could be a thing that more urgent longtermists would want — but maybe, probably, not as their top priority. It doesn't seem like the very most leveraged way to reduce risk.

[00:32:25]

OK — well, putting economic growth aside then. Yeah, something like great power conflict would be a way of reducing risk factors, and seems not especially neglected. Again, this is, like, maybe not super, super neglected, but we've had a previous episode with Glen Weyl about how we might design institutions to provide global public goods better, and being able to do that would probably reduce lots of other existential risks. So it's an example of a risk-reducing thing.

[00:32:52]

If you might be interested in some of these broader longtermist things — many of the things we cover on our list of other problems, like, some of them are in that category. So we should link to those in the show notes.

[00:33:05]

All right. Well, great. So let's move on to some other research that you've been working on. So you've been thinking about ways that our advice to readers could be improved. Yeah — so what are some of the areas where you think there might be the most room for improvement in 80,000 Hours' advice? Yeah.

[00:33:25]

So over the last few months I've been trying to think about the very big picture: what might be the mistakes in our advice, or things we've got wrong. And I think the previous topic could be in that category — like, maybe we should focus a bit more on broad longtermism, maybe a bit more on patient longtermism. But another one, which we were kind of just getting onto, is how to trade off transferable versus specialist career capital.

[00:33:48]

And yeah, I think what you think about that tradeoff can have a really big effect on which careers seem best. Because if you think, like, transferable career capital is where it's at, then you should be focusing on just getting very general skills, like management. Maybe you should just be really focusing on just being successful at something, because it seems like being successful gives you lots of options in the future — like, it helps you get cool achievements and it helps you meet other high achieving people.

[00:34:14]

And often people who've done something impressive in a field can have lots of options in the future. Or it could mean things like becoming a journalist, or maybe working in policy, because these are kind of platforms that let you support many different issues in the future. And so, yeah, if that approach is correct, then maybe our advice would be more just like, OK, just go and become really good at something — and that would be, like, the key message.

[00:34:35]

Whereas if we think specialist career capital is correct, then instead we should be trying to think, well, what are the highest impact things over the next couple of decades? And then we should be trying to take bets on building career capital that's really relevant for those. So maybe if we think, say, technical AI safety is a really important priority over the next few decades, then maybe you want to go and study machine learning and get skills that are really useful in that particular path.

[00:35:02]

So is the main thing that makes the difference between these two views — or, you know, places to emphasize — how uncertain you are about what kinds of work are actually going to be most valuable in, say, 20 years?

[00:35:16]

Yeah, so I think one really big dimension is just how uncertain you are about the priorities — which could be priorities about which problems are most pressing. And remember, again, this is, like, priorities over the next couple of decades, because that's the kind of planning horizon for careers. So that does seem pretty uncertain — like, what will be the top priority in 20 or 30 years? But it could also be uncertainty about your own preferences and personal fit — some types of transferable career capital...

[00:35:42]

If you're very uncertain about, like, what types of organizations you want to work in, then again, that might make you want to just get transferable skills, which you could then use in many different organizations in the future.

[00:35:52]

OK, so that would exclude transferable career

[00:35:54]

capital that's just, like, becoming extremely good at one narrow thing that can be used in many ways? Because that's still — you're still focused on a particular skill set.

[00:36:03]

Right. But, like, you might study management or something like that. OK. OK, is there anything else that sort of influences how much you should focus on specialist versus transferable career capital, besides uncertainty about what kind of work is most important, and your own sort of preferences?

[00:36:21]

Yes. So I think the other big side of the equation is just kind of how easy it is to get both of these forms of career capital. So, like, one way of thinking about it is: suppose you get the transferable career capital, and then in 20 years there are two possibilities. One is that the best guess you had now about which priorities were most pressing is actually correct, and so then you use that transferable capital to work on those. Or there's a new cause that's arrived — which we sometimes call Cause X — and then you use that transferable capital to work on that cause in 20 years' time.

[00:36:55]

And then we want to contrast that with: instead, you could have just bet on whichever priorities seem most pressing now, and then built specialist career capital relevant to those priorities. So then the question is, if you did the specialist path, how much better a position are you in to work on the priorities compared to if you'd done the transferable route, if it's still the same priorities? Or we've got the other scenario where it's Cause X, in which case your specialist career capital is presumably less useful than having the transferable career capital.

[00:37:21]

So you kind of want to think about what's the delta, in these two different scenarios, between the two different strategies — which I know is, like, really hard to follow.

[00:37:31]

Well, OK, let me try to summarize it. Something like — so you want to know two things. One — how... because maybe I'm just going to put it in an even more confusing way — how transferable is specialist career capital? So, like, you know, if it turns out that you're wrong about what the best kind of work is for you, how much can you still make that transition, because you became an amazing machine learning researcher — or, like, even just a somewhat good machine learning researcher?

[00:37:54]

And then the other question is, how useful for the things that you'd guess are best is the transferable career capital? Yeah — how useful is the transferable career capital? Yes.

[00:38:05]

And also another thing is, how good is Cause X compared to those other priorities in the future? Like, a new Cause X could arrive, but maybe it's only a little bit better, in which case it doesn't really matter if you can't switch into it.

[00:38:16]

And by better, of course, we kind of mean worse, because oftentimes causes are, like, bad things in the world that need fixing. So we mean how pressing it is. Yeah.

[00:38:25]

How much better it is to work on it — how much more value you have per, like, year of work on the cause. OK, yeah.
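[Editor's note: a toy expected-value version of the specialist-versus-transferable comparison Ben and Arden just walked through. All numbers are made up purely for illustration.]

```python
# Two scenarios for 20 years out: today's best-guess priorities still hold,
# or a new, more pressing 'Cause X' has emerged.
p_same, p_cause_x = 0.6, 0.4  # hypothetical probabilities

# Relative value per year of work in each scenario (arbitrary units).
# The specialist gets a big edge if today's bets hold, but switches poorly;
# transferable career capital is decent either way.
specialist = {"same": 10, "cause_x": 3}
transferable = {"same": 6, "cause_x": 6}

def expected_value(v):
    return p_same * v["same"] + p_cause_x * v["cause_x"]

print("specialist:  ", expected_value(specialist))    # 0.6*10 + 0.4*3 = 7.2
print("transferable:", expected_value(transferable))  # 0.6*6  + 0.4*6 = 6.0

# The 'deltas' Ben mentions are the within-scenario gaps between the two
# strategies; how much more pressing Cause X turns out to be scales the
# 'cause_x' values.
```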

[00:38:32]

Great. So have your views shifted at all on those questions over the years?

[00:38:39]

Well, so I mainly see this as just a big uncertainty, because I still feel pretty unsure, in general, about whether people should prefer specialist or transferable career capital. And it also has a big effect on what our recommendations should be. Yeah — like, the specialist one would be kind of, 'well, these priority paths — just do those', and the transferable one would be, 'just be successful at something generally impressive or generally useful'. And those seem like two pretty different emphases.

[00:39:06]

So, on that — for the second one, that could also mean, just to clarify, just doing whatever you're, like, most passionate about, or best at.

[00:39:12]

Yeah — I mean, I would focus on 'best at' rather than passion, but passion is relevant insofar as it's making you more likely to do well at something.

[00:39:19]

Right. OK, so I guess some of this uncertainty is on the user's side, right? Because some of it is about, like, what they in fact will be good at or something. And then some of the uncertainty is on our side — of course, I mean, we want our users to also help us think about these things, and in some ways, like, do some thinking for themselves — but, like, questions about how likely it really is that AI presents a big existential risk.

[00:39:41]

Yes, totally. And yeah, maybe in terms of what people should do, I kind of think either of these two strategies can make sense and a lot of it will come down to the particular opportunities that are open to you and how good a fit you seem for them.

[00:39:54]

Yeah — and so if they, like, disagreed with us — like, if they think, oh, they're wrong about these things being the most important issues to work on — they might decide to go the more transferable route.

[00:40:05]

Yeah. Or maybe they should take the specialist route, focus on whatever they think the priorities are. Yeah. Right. OK, so yeah.

[00:40:11]

What else is there to say about specialist versus transferable career capital?

[00:40:15]

Yeah. I guess, just kind of now, me, like, guessing which one seems best: it does seem like in a lot of the highest impact paths, you do need to get some specialist career capital eventually, and there seem to be pretty big gains from that. So it is definitely worth considering specializing. Though one thing is, if you're, like, unsure between the two — or if you need to get some transferable career capital and some specialist career capital — you should do the transferable one first, because then you can still switch.

[00:40:42]

Oh yeah — OK, so that keeps a couple of options open. Yeah.

[00:40:45]

But, I mean, so in general this kind of uncertainty pushes towards transferable — and presumably we're going to be less... I mean, everyone will be a little bit less uncertain in the future, if we're doing a good job at trying to figure things out.

[00:40:57]

Well, yeah — but then you're just really getting into kind of almost the general, like, patience versus urgency debate. Oh, interesting.

[00:41:03]

OK, yeah. So I guess that's one way that these two debates are linked.

[00:41:07]

Yes. Another — I think another interesting framing is what would be best from a community perspective. And if we think, in 10 years, what do we want the community to look like — having, like, 10,000 McKinsey consultants doesn't seem as useful as having, like, 10,000 experts in loads of different potentially interesting areas. Like, we definitely want a bunch of specialists. To be a bit more serious, we would want some of both. But yeah, I think thinking about it from a community perspective makes it look more attractive to have people betting on lots of different, interesting, potentially relevant things.

[00:41:37]

Right.

[00:41:38]

So does it seem accurate to say that we seem pretty pro specialist career capital in our content — so, like, we could be making a mistake by being as pro specialist career capital as we are?

[00:41:49]

Yeah, most of the priority paths are kind of pretty on the specialist end of the spectrum. And so I could imagine maybe we should put a bit more emphasis on the transferable career capital. Though, yeah, maybe it's just worth clarifying — kind of as I did in my key ideas podcast — I think when people think of transferable career capital, the first thing that comes to mind is something like consulting. But actually it could be something much more like just going to become, like, a great civil servant, because civil servants can work on many different problems in the future.

[00:42:19]

Or we mentioned the example of a journalist — that's, like, someone who actually has very transferable career capital with respect to causes. And so it's those kinds of things I'm most excited about.

[00:42:29]

Yeah. I wonder, if you're really unsure about the transferable versus specialist tradeoff, maybe there are some kinds of career capital that seem to be both: pretty good if you're pretty sure that one problem is, you know, the thing that you want to be working on, but also more transferable than some other kinds of career capital.

[00:42:46]

Yeah, I mean, I guess one big thing is, like, if you can focus on something specialist but you have good backup options, then you're doing something that's pretty good on both ends of the spectrum. And I think a lot of our priority paths do have good backup options, which is one reason why we haven't been as worried about recommending them, despite a lot of uncertainty about which causes are going to be best in the future. That makes sense.

[00:43:09]

Also, we've now released a slightly longer list of other interesting career paths, and a lot of them are these slightly more transferable things. Those are now on the key ideas page as well.

[00:43:19]

Cool. OK, so are there any other areas where you worry that maybe 80,000 Hours could be getting the balance wrong — or, you know, places where you think we're most likely to be mistaken?

[00:43:31]

Yeah, I think there's a broad category of things around — one way you could see it is, like, how much weight to put on personal fit.

[00:43:37]

And so something I worry about — can you just define personal fit for us? Yeah. Loosely, personal fit is just how good you expect you'll be at a certain career path, though often we're, like, particularly interested in what's the chance of having a really outsized success in a particular path.

[00:43:51]

And that's compared to other people in the path, right? Yeah — if you wanted to get, like, more formal about it, relative to, like, the average person. So, yeah, something I worry about is, well, we're making these particular lists of career paths and global problems quite salient to people. And so I worry about someone who, like, goes into one of those things and then becomes pretty good, but not amazing, when they could have been amazing at some other thing that we haven't listed — or maybe just one of our kind of lower priority things, which we think is on average less pressing, but would have actually been, like, an amazing option for this particular person.

[00:44:25]

So is the worry then that, because we aren't emphasizing, you know, thinking about where you in particular — like, readers in particular — can excel, they'll end up doing something that they excel less at, and that, in fact, that's actually worse for the world, because they would have had more impact doing something they were much better at, even if it was a less pressing area in general? Yeah.

[00:44:45]

So, like, for instance, you know, we tend to slightly push people towards studying economics, because, all else equal, having an economics PhD is, like, really good for global priorities research and policy and, like, a bunch of other things we're interested in. But then you could kind of imagine someone who, like, stretches to do economics and they just about succeed — but actually they would have just been, like, amazing at psychology, because their skills matched it slightly more, or they would have turned out more interested in it.

[00:45:10]

And we don't emphasize, like, psychology quite as much as economics. And it would be better to have someone who's kind of, like, right at the top of psychology, doing something really innovative and interesting, compared to someone who's, like, a bit above average, but not amazing, in economics. Yeah.

[00:45:24]

So I guess it's all kind of a matter of degree, right? Because, like, if somebody was just a little bit more above average in psychology versus a bit above average in economics, we might still think it's better to do the economics. It just depends on how much better you're going to be in that other path. Yeah, exactly.

[00:45:37]

So, yeah, I mean, I might have just answered the question I was going to ask, but — so why can't we just emphasize personal fit a bit more in the content and solve this problem?

[00:45:44]

Yeah, well, so we do try to emphasize it quite a bit. But just because the particular lists of things we give are so much more concrete and salient, it's very easy to focus on those too much, and not think enough about, like, your individual circumstances. And yeah, we're going to try and add more content to the key ideas series and the website about how to think through your individual situation, to try and help with this problem. But you can kind of imagine normal careers advice is, like, kind of the opposite to us, where it would just start with, like, OK, what are your strengths? That would be, like, the number one question.

[00:46:18]

And then, well — with us, we kind of start from, like, well, what does the world most need? Because I think that's, like, a really neglected perspective. And also, like, well, if you want to have a big impact, it's really important to be actually thinking about, like, what will actually help people. And then we kind of work back from there and think, now, how can my strengths fit into that?

[00:46:37]

So we do cover both ends of the spectrum, but we tend to emphasize what the world most needs more.

[00:46:42]

Yeah, I wonder if in fact it's fine, because maybe people are more naturally inclined to think in terms of their personal circumstances and what they're good at, since they've been told since they were 11 that that's the way to choose your career. And emphasizing what the world needs even more than is actually justified, all else equal, is maybe a good corrective.

[00:47:02]

Yeah, that's how I mainly feel about it. But I guess I worry about people who are really into 80,000 Hours and then maybe hear that perspective too much. Yeah.

[00:47:13]

But yeah, hopefully that's not the typical reader. I mean, in general this is a very difficult challenge of writing for multiple audiences. Yeah. It tends to be that people who've been listening for a long time need to hear pretty different things from a new person.

[00:47:32]

OK, so maybe we're not emphasizing personal fit enough. Is there anything you would recommend readers either read, of ours or somebody else's, or do, that would help them think better about their personal fit? Yeah.

[00:47:45]

Well, one answer is I'm working on a how-to-plan-your-career process, which we're going to put onto the website. The aim of that is to actually walk through everything you need to think about, including your personal fit. So hopefully, if you work through that process, it will mean you've seriously thought about personal fit. Another thing is just, when you're making a decision, obviously,

[00:48:05]

do consider personal fit, and bear in mind that it could outweigh which things seem most pressing in general. It could easily be better to do something that you're good at, especially if you're focused on the more transferable, career-capital-focused approach we were just talking about. And then the third thing is, I think it is worth people at some point reflecting on their strengths and trying to really clarify what they are.

[00:48:27]

And there's a lot that could be said about how to figure out what your strengths are, and also just how to predict what you're going to be good at. That's a whole difficult question in forecasting. But one perspective that I think gets a little neglected, even in the mainstream discussion of what your strengths are and what you're going to be good at, is not only thinking in broad strokes about which area might be a good fit, but also really thinking about the nitty gritty, day-to-day reality of particular jobs.

[00:48:53]

And trying to build up a picture of: what does this job actually involve day to day, and can I see myself doing that and being motivated in it? One exercise I found personally useful has the slightly cheesy name of the 'energy audit'. Basically, what you do is look at the last two weeks in your calendar, try to categorize the entries as energizing or not energizing, and then think about how you can do the energizing ones more often.

[00:49:18]

You can use that within your role to try to design your role a bit better. But you can also use it to try to understand what your strengths actually are: which types of activities, people to work with, and skills do you find most energizing? That's often a good sign of something you might have good fit with.
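If you want to make that concrete, here's a minimal sketch of the tallying step in Python. This is just our illustration, assuming you've exported two weeks of calendar entries to a hypothetical calendar_audit.csv with activity and energizing columns you've labelled by hand; it's not an official 80,000 Hours tool.

```python
import csv
from collections import Counter

# Tally hand-labelled calendar entries: each row needs an 'activity'
# description and an 'energizing' column you've filled in as yes/no.
energizing, draining = Counter(), Counter()

with open("calendar_audit.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        bucket = energizing if row["energizing"].strip().lower() == "yes" else draining
        bucket[row["activity"]] += 1

print("Most energizing activities:", energizing.most_common(5))
print("Most draining activities:", draining.most_common(5))
```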

[00:49:34]

I guess the other thing to say here is that oftentimes people can try the work, or at least talk to somebody in the domain, so they can get a better sense of the day to day and of their fit with it. This is a lot more applicable when people are early in their careers and can spend a summer doing an internship. It's common sense, of course, but one thing maybe worth emphasizing.

[00:49:57]

Yeah, we could have a huge discussion about the best ways to predict personal fit. One perspective is trying to predict it from the armchair, and then you've got a whole question of forecasting and which predictors are most powerful. Another perspective is: how can you get more information, test things out, and learn about them? I think people often focus quite a bit on the armchair stuff, when often, just by going to talk to someone in the career, you can learn

[00:50:22]

so much that's obviously really useful that it's a really good use of time. Yeah.

[00:50:27]

So I just listened to How to Measure Anything by Douglas Hubbard. And one thing he hammers home is that the more uncertain you are, the easier it actually is to reduce your uncertainty. If you know practically nothing about an area, just talking to somebody for half an hour is probably going to do a huge amount to help you understand what that path is like. So people who feel super uncertain should in fact be hopeful, because it's easier to reduce your uncertainty.
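One way to see Hubbard's point, sketched by us rather than taken from the book, is a standard Bayesian update: a single noisy observation cuts a wide prior down proportionally far more than a narrow one.

```python
def posterior_variance(prior_var: float, noise_var: float) -> float:
    """Variance of a normal belief after one noisy observation (precisions add)."""
    return 1 / (1 / prior_var + 1 / noise_var)

noise_var = 1.0  # how noisy one half-hour conversation is, say

for prior_var in (10.0, 0.5):  # knowing almost nothing vs. already well informed
    post = posterior_variance(prior_var, noise_var)
    print(f"prior variance {prior_var}: uncertainty reduced by {1 - post / prior_var:.0%}")
# The wide prior shrinks by ~91%, the narrow one by only ~33%: the same
# conversation teaches the very uncertain person far more.
```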

[00:50:51]

Yeah, totally. And we have this article on the site called 'How to make a career decision', which tries to lead you through a process where you first try to assess things a bit from the armchair, then you identify your uncertainties, and then it asks: how can you actually go and resolve those uncertainties in the world? And we have this thing called the ladder of tests: you start with really easy ways to get more information and then do more and more costly things. An internship, say, would be a relatively big commitment, but could still be really worth it if it lets you test out a whole career path.
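As a toy illustration of the ladder idea, our own sketch rather than code from the article: order candidate tests by cost and only climb to the next rung while you're still uncertain. The tests and hour estimates below are made up.

```python
# Hypothetical ladder of tests, cheapest first: (test, rough cost in hours).
ladder = [
    ("read a few job descriptions and reviews", 2),
    ("talk to someone working in the field", 8),
    ("do a small freelance or volunteer project", 40),
    ("take a summer internship", 480),
]

def resolves_uncertainty(test: str) -> bool:
    """Placeholder for your own judgement after running a test."""
    print(f"Trying: {test}")
    return test.startswith("do a small")  # pretend the project settled it

for test, cost in ladder:
    if resolves_uncertainty(test):
        print(f"Resolved after a ~{cost}-hour test; skip the costlier rungs.")
        break
```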

[00:51:19]

OK, so before we wrap up here, any other ways we might be wrong that listeners should be aware of?

[00:51:25]

Yeah, I mean, maybe the most obvious one is just that the lists of problems and career paths we highlight the most could be the wrong ones.

[00:51:33]

Are there any that you're particularly worried about? Well, just thinking about the big picture, one thing would be: maybe we're wrong about longtermism, just because it's a very new philosophy and there's a lot still to be figured out about it. Then you can ask, well, if we were wrong about longtermism, what would we recommend instead, and how different would it be? Which I think is a really interesting question.

[00:51:55]

So, thinking about alternatives to longtermism: one alternative is what's currently called near-termism, which you can think of as asking, well, what are some common sense ways we can help people in a really high-return way today? People who take that view tend to focus on helping the world's poorest people, especially through global health, or, if they think animal suffering is a pressing priority, they might work on reducing factory farming.

[00:52:21]

But then, I'm not sure I would end up being a near-termist if I rejected longtermism. Another framework might be the conventional economics framework, which is maybe not a very catchy title, but I think it's pretty different from near-termism, because economists will think of benefits and costs over hundreds of years, but they're discounting them quite highly. So they end up being medium term. And I could imagine becoming a medium-termist, and I don't know what I would work on then, but I could easily imagine ending up working on something like climate change, or maybe even still pandemic prevention and biorisks.
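To see why high discounting produces a 'medium-term' horizon, here's a quick illustration with made-up numbers, not figures from the conversation: at a 5% annual discount rate, a benefit a century away is worth under 1% of its face value, while at 0% it keeps full weight.

```python
def present_value(benefit: float, rate: float, years: int) -> float:
    """Value today of a benefit received `years` from now, discounted at `rate`."""
    return benefit / (1 + rate) ** years

for rate in (0.0, 0.01, 0.05):
    pv = present_value(100.0, rate, 100)
    print(f"discount rate {rate:.0%}: $100 in 100 years is worth ${pv:.2f} today")
# At 0% it's $100.00, at 1% about $36.97, and at 5% only about $0.76,
# which is why highly discounted views stop caring about the far future.
```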

[00:52:58]

I can imagine that still turning out to be a top priority from a medium-term perspective.

[00:53:02]

Yeah. So maybe some of our recommendations are a bit more robust to this choice. Yeah.

[00:53:07]

And I should have also mentioned AI alignment. I could easily imagine that still seeming like one of the top things from a medium-term perspective, either because there could be a big risk from it, or because maybe medium-termists end up trying to speed up progress and economic growth. That's kind of how I sometimes think of Tyler Cowen, as a medium-termist. Yeah, interesting.

[00:53:28]

OK, so there's this choice point of which 'termism', which period of time to focus on. Yeah.

[00:53:33]

And those are not great terminologies, because for most near-termists, what actually makes near-termism distinctive is not really about when you think it's good to help people; most near-termists agree that future generations matter. What's actually making the difference is something more about which methodology they think is best. So they might want to put more weight on things that seem common-sense good, as opposed to more speculative priorities, or they want to do things where they can get feedback about how well it's working.

[00:54:00]

And one thing about longtermism is, well, if my main priority is really trying to make a difference a thousand years from now, it's pretty hard to get feedback. And they might think you're just so likely to go wrong doing that, that it's better to focus on the here and now. Yes.

[00:54:13]

So I hope there will be different names for these different philosophies at some point. Yeah.

[00:54:18]

So I guess we could probably talk forever about, OK, let's go through all of the problems and paths and decide how uncertain we are that they are in fact as good as our best guesses suggest at the moment. So we probably shouldn't do that now. But is there anything else you want to leave listeners with about where your uncertainties lie about these recommendations?

[00:54:36]

Well, we've already covered a bunch, which is: supposing longtermism is correct, then we've got these different varieties of longtermism and how much emphasis to put on each one.

[00:54:44]

Yeah, another area. So if patient longtermism is correct, we probably would recommend somewhat different things.

[00:54:50]

Yes. Well, yeah, I wouldn't think of it as correct, but rather as whether it's kind of the main thing we should focus on. What do you mean by that, rather than it being correct?

[00:54:56]

Well, it's that these things are really a matter of degree. Even the people who are most patient still think we should spend some on object-level things today. It's just that maybe they would only give half a percent of the portfolio each year, as opposed to four percent.

[00:55:11]

I feel like the most natural way for me to think about it is: if patient longtermism is correct, then it says you should only spend a tiny bit per year now, I guess to keep things running, and to get information, and because of diminishing returns and...

[00:55:26]

Yeah. Wait, sorry, what diminishing returns? If there are some diminishing returns each year, then you'll want to give a little bit each year.

[00:55:33]

Right. But why would a patient longtermist think there were diminishing returns each year?

[00:55:37]

Well, the patient longtermist might still think there's a small number of really good things around now, and so they would still want to take those, but then they'd want to save a decent amount. So I think thinking of it as a spectrum is more accurate in most cases. OK, yeah.
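As a toy model of that spectrum, with numbers we've picked purely for illustration rather than anything from 80,000 Hours' research: if an invested fund grows at 5% a year, spending half a percent annually still funds some object-level work every year while the pot keeps compounding, whereas spending four percent gives far more now but grows the pot only slowly.

```python
def simulate(spend_rate: float, growth: float = 0.05, years: int = 50) -> tuple[float, float]:
    """Total given away and final fund size, starting from a fund of 100."""
    fund, given = 100.0, 0.0
    for _ in range(years):
        grant = fund * spend_rate             # object-level spending this year
        given += grant
        fund = (fund - grant) * (1 + growth)  # the rest stays invested
    return given, fund

for rate in (0.005, 0.04):  # very patient vs. fairly impatient giver
    given, fund = simulate(rate)
    print(f"spending {rate:.1%}/yr: gave {given:.0f}, fund after 50 years: {fund:.0f}")
```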

[00:55:53]

Yeah, I see what you're saying. OK. I mean, then we could also get into the whole discussion of: if we're uncertain about all these views, we should probably take some kind of portfolio approach as well. But I suppose you were saying, suppose we just knew perfectly what the situation was, right? Yeah. So I guess when it comes to just how highly we should be recommending people work on certain issues or take certain career paths, that's something we're continually working on and trying to figure out.

[00:56:19]

Yeah, totally.

[00:56:20]

And there are just a lot of other career paths that we haven't looked into yet, and a lot more research we'd like to do. So the idea that we'll discover more paths that we think are really promising, or downweight some of our existing ones, seems fairly likely.

[00:56:33]

Yeah, I know Ben Garfinkel was recently on the podcast articulating some reasons to think maybe AI safety was a bit less pressing than some people had thought. And it's something we're thinking about, although it's worth saying that he still thinks it's a big deal.

[00:56:49]

Yeah, yeah. And we still think it's one of the most interesting areas to focus on. Cool. All right.

[00:56:54]

Well, thank you, Ben. Cool. Thanks. All right, that's the first piece in a series of chats between Ben and Arden that we'll be releasing over the coming weeks and months. If you'd like some further reading on these topics, there are two things Arden's published recently. The first is 'Ideas for high impact careers beyond our priority paths', and the second is 'Global issues beyond 80,000 Hours' current priorities'.

[00:57:17]

Ben also recently published a related blog post called 'The emerging school of patient longtermism'.

[00:57:22]

And if you'd like to go deeper on patient longtermism, I suggest listening to Phil Trammell's episode on that topic, which is interview number 73. As usual, we'll link to all of those in the show notes. Also, here's just a final reminder that our annual user survey is open for submissions right now. You can find that survey at 80000hours.org/survey. The 80,000 Hours Podcast is produced by Keiran Harris, with audio mastering by Ben Cordell. Full transcripts are available on our website and are made by Zakee Ulhaq.

[00:57:48]

Thanks for joining us.

[00:57:49]

Talk to you again soon.