[00:00:00]

Today's episode of Rationally Speaking is sponsored by GiveWell. GiveWell takes a data-driven approach to identifying charities where your donation can make a big impact. GiveWell spends thousands of hours every year vetting and analyzing nonprofits so that it can produce a list of charity recommendations that are backed by rigorous evidence. The list is free and available to everyone online. The New York Times has referred to GiveWell as, quote, "the spreadsheet method of giving." GiveWell's recommendations are for donors who are interested in having a high altruistic return on investment in their giving.

[00:00:30]

Its current recommended charities fight malaria, treat intestinal parasites, provide vitamin supplements, and give cash to very poor people. Check them out at GiveWell.org.

[00:00:53]

Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and my guest today is Kelsey Piper. Kelsey is, in my opinion, one of the best new journalists out there. She writes full time for Future Perfect, which is a branch of Vox that's devoted to topics that have the largest impact on the world, as opposed to just sort of covering topics that are new, as in news.

[00:01:20]

Kelsey has also been a blogger for years, which is how I started following her. Her Tumblr, The Unit of Caring, is one of my favorite things to read. So we're going to start by talking about some of the work she's been doing for Future Perfect and then transition to talking about some of her personal writing on topics like morality and mental health.

[00:01:37]

So, Kelsey, welcome. Thank you so much for being here. Thanks so much, Julia. Why don't you tell our listeners a little bit about how Future Perfect came to be? What's its origin story?

[00:01:49]

Yeah, so Future Perfect is funded by the Rockefeller Foundation. And my understanding is that they were interested in the way that having an outlet where some people could focus full time on a question and on coverage of that question sort of influences the broader society. So, for example, how outlets like Breitbart came to be and had a relentless focus on some conservative issues and sort of brought those more into the mainstream. And they were very interested in what would it look like to do this for sort of friendly, altruistic, cosmopolitan centrism.

[00:02:26]

And when they talked to Vox about this, Vox has, of course, Ezra Klein and Dylan Matthews, who are interested in effective altruism. And they thought effective altruism is sort of a good inspiration and, like, grounding source for a project like this. So Future Perfect is effective altruism inspired and draws a lot on that, and I think that's definitely a big part of our target audience, although it's not an effective altruist outlet and it covers a lot of issues that don't come up in effective altruism.

[00:02:58]

What would you say is in that area of non-overlap? Something that would count as, you know, an important topic impacting the world, but wouldn't fall under the umbrella of effective altruism?

[00:03:08]

Yeah, so EAs have, I think, rightly been very wary of getting too involved in politics, but that is obviously an important topic for the world. And maybe the marginal impact of one person is very small, so it makes sense for effective altruism not to focus there, but it's still where a lot of world-affecting decisions happen. So Future Perfect does consider it within our purview, and we'll cover, for example, the anti-poverty plans of 2020 candidates, or which redistribution schemes look like they would work the best, or sort of questions like that, which I think it makes a lot of sense for effective altruism to sort of not focus on, given how contentious they can be and how many urgent, neglected priorities there are, but which are a pretty natural fit for what Future Perfect is doing.

[00:03:56]

Yeah, the thing that's just now occurring to me, tell me if this sounds right, is that discussions in the world of EA are kind of filtered by intervention. So they're all about, like, you know, what could we, like a group or an individual, do now on the margin to have a large positive impact, whereas the coverage in Future Perfect is partly about that and partly about just prediction.

[00:04:23]

Like, even if we couldn't affect which policy gets passed, it is still, like, interesting and important to discuss what the likely effects of this or that policy are going to be, if they're, you know, plausibly large.

[00:04:35]

So there's that distinction between, like, intervention versus prediction. Does that seem right to you?

[00:04:41]

Yeah, that seems like a great articulation of the distinction. Like, Vox's background is explain the news, right? So Future Perfect is doing a lot of explaining big topics, even if they're big topics where there's no obvious opportunity for an individual to act on the margin, and maybe it's not so central for you. Right.

[00:05:02]

OK, well, let's talk about one of the most recent articles you wrote.

[00:05:06]

So I'm, you know, constantly on Twitter, and I just saw that you were having a friendly disagreement on Twitter with another former podcast guest of mine, Rob Reich, who's a Stanford professor who wrote the book Just Giving. And it was sparked by one of your recent articles, which was about how the now infamous Sackler family, which arguably played a major role in causing the opioid crisis we're suffering from through their ownership of Purdue Pharma, how the Sackler family is having some of their philanthropic donations now refused by their recipients, like, I guess, mainly the museums they donate to.

[00:05:43]

In the article, you examined the question of, well, should charities refuse donations from people who, you know, may not have acquired their fortune in maximally ethical ways. And I think Rob was more on the side of, yes, they should refuse the donations, and you were more on the side of, often no, and it really depends. So do you feel like you understood the crux of your disagreement?

[00:06:06]

Yeah, I think so. So Rob thinks in terms of justice a lot more strongly than I do, you know, and I think effective altruists often come at the world from a very utilitarian perspective: well, where is the money going to do the most good? That's where the money should be. And I do believe that has a lot of value. I think Rob's point was, you know, to some extent, if we're making money off things as unethical as, you know, marketing tobacco in developing countries or causing the opioid crisis by giving misleading instructions about how to dose opioids, you know, there's

[00:06:40]

this justice consideration that, to his mind, overrides the where-does-the-money-do-the-most-good considerations. And I think also he thinks that we aren't really choosing between "they donate to charity" or "they keep the money." From that perspective, of course they should donate to charity, do we want them to keep the money? But his point was, maybe if we're critical of this in the right way, we can get them to pay the money back to the people harmed, which is more just as an outcome.

[00:07:13]

Does that seem plausible to you? So, in the case of the Sacklers, I think a lot of what I ended up being curious about and being troubled by was how much we can declare them an exception to our ordinary norms here, and how much they're more at one end of a spectrum. A bleak take.

[00:07:35]

Yeah. So from one perspective, you know, most billionaires make money in a good way, like creating products of immense value which people are willing to pay them a lot of money for, and we're talking about what to do with the very small segment of billionaires who are not like that and make money unethically. And, you know, from another perspective, which is one I hear articulated a lot at Vox in particular, billionaires are mostly doing some shady stuff, especially, like, when startups are getting off the ground.

[00:08:05]

A lot of them are breaking laws and cutting corners and ignoring data privacy, and none of them are really in this commendable category where they made their billion dollars, maybe J.K. Rowling. But I think there are some people who think she's pretty much the only billionaire who just became a billionaire by making something people wanted.

[00:08:24]

And she, I don't know, lost a lot of goodwill now with her retroactive changing of the canon, adding a bunch of details, you know, 10 years after the fact.

[00:08:37]

Yeah. So, to a lot of the world, I think there are no good billionaires. And if you think there are no good billionaires, then on the question of what do we do about bad billionaires, I have a hard time preferring the justice answers, because I do ultimately want money to go where it's needed the most. If you think that there are lots of good billionaires, which is more the side I come down on, I think Jeff Bezos is rich because he made the world a much, much better place and collected some of the value from doing that.

[00:09:09]

Right. From that perspective, then, I think it seems fine to focus on justice in our handling of the Sacklers, because they're not our primary mechanism by which philanthropy gets funded and important things get money. They're kind of a little bit of a sideshow, and saying we're going to handle that by settling the wrong that they did, and not by trying to distribute their money optimally, doesn't give me nearly as much pause.

[00:09:41]

That is really interesting, because I had sort of assumed that the crux between you two was, like, justice versus utilitarianism, or possibly this empirical crux of: is it even plausible that if we, like, refused to let these rich families donate their money, we could instead get the retributive justice outcome that maybe is the best? And that, you know, you and Rob disagreed on how plausible that outcome was. But it hadn't occurred to me that the crux might just be, you know, how atypical this is, what percentage of billionaires we would consider bad.

[00:10:18]

Because it might just make sense to use very different rules in those two different worlds.

[00:10:22]

So, yeah, and I do think the other things you mentioned are also disagreements, but definitely a lot of what made me sort of hesitate about the Sackler case was: if I articulate a principle here, and then someone goes, you know, Jeff Bezos doesn't pay warehouse workers very well, and this principle of course applies to him too, I mean, how am I going to feel about that? Right.

[00:10:42]

Yeah. So, in conversation with Rob, partly, I've become a little more sympathetic now than I used to be to the argument that we should care about whether philanthropists are receiving status, you know, and respectability or legitimacy for their donations. It used to just seem like a red herring to me. Basically my view, you know, four or five years ago or something was, look, on the margin, our choice is between someone gives the money to a good cause, or even, like, a mildly good cause, or they keep it for themselves.

[00:11:18]

How could you possibly be against them giving the money away? That's crazy. And all these people who, you know, got mad when Jeff Bezos or Mark Zuckerberg pledged, you know, to give some of their fortune away, I sort of thought they were focused on the red herring of, like, well, I don't want anyone to say anything good about this person who I'm, you know, mad at for these other, maybe legitimate, reasons.

[00:11:39]

So, like, I have to oppose everything they do instead of saying that, like, this thing is good. So I kind of thought it was confused. Now I'm more sympathetic to the idea that these issues are all kind of bound up and they're really hard to separate. And so if someone is going to, like, acquire a lot of status and respectability in society for their donations, there may not be a way to separate that from, you know, like, well, yes, they're doing these bad things, but also it's good that they give their money away.

[00:12:08]

We might not be able to have those two totally separately. That's my best kind of steelman of the case. I don't know if Rob would endorse it himself, though.

[00:12:17]

Yeah. Parts of that definitely resonate with me, I think. I end up listening to some of the things Rob says and going, yes, so we want a balancing test where we consider, you know, how much good they're doing and how much status they're getting. And it seems like that's not quite what Rob believes.

[00:12:38]

Right. I mean, that's not very justice-like. Yeah, yeah.

[00:12:42]

Kind of a deontologist thing: some things are right to do and some things aren't, and it's not about, like, measuring the consequences. Once you start talking about balancing and measuring, you're back in utilitarian land. Yeah.

[00:12:52]

And I think I'm able to meet Rob as far as, yeah, that's a consideration to balance against other considerations. Right. Yeah. But, you know. Yeah.

[00:13:01]

Well, but let's move on to one of your biggest and most influential pieces. It's titled "The Case for Taking AI Seriously as a Threat to Humanity." And you've actually written two versions of this. The kind of full version, which even itself, how long was it? Was it 5,000 words?

[00:13:22]

It's like 6,000, around. OK? Yeah. I mean, even that, like, had to, you know, simplify and abbreviate a lot.

[00:13:29]

Yeah, I absolutely cut a lot there. Right.

[00:13:32]

I mean, I'm not faulting you at all. It's just, you know, you can't write a book in an article. And then you also had, more recently, a super abbreviated version that's like five hundred words, which, I was pretty impressed you were able to get it down another order of magnitude.

[00:13:48]

So it seems like you sort of approached this piece from the perspective of

[00:13:56]

like, identifying and addressing the main objections to, or confusions about, the AI risk thesis.

[00:14:04]

What emerged as sort of the main objections that people have to the idea that we should take AI seriously as a threat?

[00:14:11]

So I think there's a couple of them. One is people have a hard time imagining a concrete scenario by which something happening on a computer is dangerous, that doesn't sound, you know, completely absurd. And to some extent, the AI community is in a little bit of a bind here, because a lot of people have said, I don't want to describe a specific scenario that I think, you know, is actually quite unlikely.

[00:14:36]

That seems dishonest to me, to describe something that I don't think is how it's actually going to happen, you know. And, like, it could even potentially set them up for criticism, where it's like, oh, so you're telling this specific story. You know, this is kind of like science fiction. You, like, think you know what's going to happen. Yeah.

[00:14:54]

Yeah. But then if you don't tell stories, and you're just, like, like Jaan Tallinn, who I was talking to about this recently, he was like, well, what I say is, you know, if cockroaches are trying to imagine how humans will kill them, they might imagine cockroaches with lasers attached to the front or something. They probably won't imagine spray, because they just don't have any of the concepts to come up with spray. And it's like that.

[00:15:20]

I'm just laughing at the idea of a cockroach imagining lasers attached to cockroaches.

[00:15:24]

But anyway, lasers attached to cockroaches, yeah. So the balance I've ended up striking is describing a couple of ways that an AI, if it were on a computer and thought very fast and had a lot of money, could affect the world, and then going, yeah, it's probably going to be more complicated than that. But, you know, since at minimum it could do that, I'm pretty scared. And that's been sort of the best balance I've found between being honest about the fact that we don't know, and it's not going to look like any neat scenario we can come up with, while also giving people a concrete idea that, yes, if you are in fact just on a computer, think really fast, and have a lot of money, there are some ways that you could do a ton of harm.

[00:16:10]

Yeah, that's really helpful, actually.

[00:16:12]

And that's been a sticking point for me the whole time that I've been engaging with the AI risk argument and community, is just feeling caught between, like, well, the abstract argument is too abstract for me to really feel like I can get a handle on it or know how to take it seriously, and any specific scenario, you know, feels too implausible, and so I don't really know how to engage with this yet. So I find the compromise pretty helpful.

[00:16:41]

Yeah. Were there other objections that you got?

[00:16:44]

So a big part of this piece was a result of, like, me talking with the people at Vox, who are mostly effective altruism oriented, mostly very smart and informed, and pretty skeptical going in of AI risk, and sort of saying, like, you know, what were your questions? And one that comes up a lot is, like, is this really, you know, a bigger deal than climate change? Like, where should this be on our list of priorities?

[00:17:08]

You know, when there's a lot of things that seem like they might affect whether humanity makes it through the twenty-first century intact, that's a hard one to do justice to, you know, without sounding dismissive of other concerns or anything. But there I tend to just come down on: you know, there are, like, fewer than 50 people working full time on existential risk from general AI. You know, a decade ago it was worse than that, and there were probably fewer than 10.

[00:17:33]

That seems like too few. It should be, you know, a couple hundred. We don't need to take a stance on where this ranks among other global priorities to reach that conclusion, necessarily.

[00:17:45]

This is a general pattern that I keep noticing. I talked about it a bit on a recent episode with Rob Wiblin from 80,000 Hours, where I think one just, like, consistent, prevalent misunderstanding that people have of the 80,000 Hours advice is that they don't realize that 80,000 Hours' advice is on the margin. So they think that, like, 80,000 Hours' ideal situation would be everyone in the world follows their advice and goes into these careers.

[00:18:13]

And then, you know, the audience hears that, or, you know, imagines they're hearing that, and goes, well, if everyone did that, then, like, you know, all these bad things would happen. We wouldn't have people exploring, you know, new frontiers, or, like, doing, you know, exploratory research that doesn't have a specific goal, etc., etc.

[00:18:30]

And actually, all along, 80,000 Hours has been like, no, on the margin, given the current allocation of resources and human capital around the world, here's what seems undervalued and, you know, good. And it's kind of similar to what you're saying the AI risk argument is: that, like, at the very least, we can say that on the margin it would be good to have more people working on this or thinking about it. Mm hmm.

[00:18:53]

Yeah, I think that's a big part of it. That's just not a very intuitive mode of thinking for people, right? And it's hard when someone's making an argument to tell whether they're making an argument about the margin, or whether they're making an argument about, like, the ideal distribution or what. Right, exactly.

[00:19:09]

Are there any objections that you think are just based on a misunderstanding of the AI risk argument?

[00:19:16]

So I know some people seem to think that concern about AI, like, originated with Eliezer Yudkowsky and is pretty much exclusive to effective altruists who came at it through that route, and they found it really persuasive just to learn that lots of other people independently reached that conclusion, and that Stephen Hawking did not get it from Eliezer, and that Nick Bostrom seems to have, like, in parallel, reached many of the same conclusions, and that back when computing was just starting, Alan Turing and I.J. Good were all saying, wow, this is, like, where we're going to go eventually, although who knows when.

[00:19:55]

And I think quite a few people I've talked to just found it persuasive that this was something lots of different intellectual currents of thought sort of converged on, because they'd been under the impression that it was sort of this one weird quirk of the effective altruism community. And if it were, then, you know, even if you found the arguments persuasive, that would in fact be a pretty good reason to be skeptical of them, because intellectual communities can absolutely spiral around wrong ideas that are, like, reinforced by local social norms.

[00:20:25]

Right. Right. And if we were the only people who found this convincing, that would, in fact, be a reason to be sort of unconvinced. Right.

[00:20:32]

Yeah. Good point. In the long version of your article, at least, you say it's tempting to conclude that there's a pitched battle between AI risk skeptics and AI risk believers; in reality, they might not disagree as profoundly as you would think. Can you elaborate on that? Yeah.

[00:20:49]

So you certainly get statements from Yann LeCun, from Andrew Ng, and from lots of people on the skeptical side that sound very dismissive. They're very, like, this is science fiction, we don't need to think about this. And I think that contributes to the impression that, you know, the field has some people who are, like, doomsday, and some people who are like, oh, just shut up. But if you dig into it a little bit more, what Yann LeCun is saying is, I think that AGI is probably more than a hundred years away.

[00:21:21]

I think that most efforts now to make it safer will be unproductive. I don't object to some people like trying to think about these principles and trying to lay some groundwork in time. But, you know, since I believe this is hundreds of years away, that hardly seems like a good priority. And also, some people are hyping this out of all proportion and promising that, like, you know, in 2030, they're going to end death and colonize the galaxy.

[00:21:44]

And I wish they'd stop that. And I think that position, there's really only a couple of substantial disagreements between that position and the "AI risk, I'm very nervous" position. Like, the AI-risk-very-nervous person, I think, would say, yeah, I think it might be sooner than that: it might be hundreds of years away, but there's some reason to expect that actually it could happen a lot faster than that. And secondly, I think there's more potential for work now to matter than you seem to think.

[00:22:16]

And, you know, that disagreement is substantial, but it's a lot smaller than, you know, the accusations of science fiction nonsense, and the sort of accusations of burying your head in the sand, would suggest. Like, people disagree on how much we can do now and how far away it is, but almost everybody thinks that artificial general intelligence is possible, and almost everybody agrees that it will be, you know, dangerous and complicated. They just disagree very significantly on when it's going to happen, and therefore on, like, whether the stuff we're doing now could matter.

[00:22:52]

I would go even farther than that, actually, and say that most people, even the people who are usually counted in the skeptics' camp, agree that AGI is not just possible, but likely to happen, likely to be developed at some point.

[00:23:07]

Yeah, you're right about that. I think Yann LeCun has said he does think we will get AGI, just not for a while.

[00:23:14]

Mm-hmm. Yes. Yeah. Why do you think there is so much apparent disagreement, given the, you know, much more moderate amount of actual disagreement? Why do we keep getting in the situation where the AI risk, you know, quote-unquote, skeptics and believers keep arguing past each other?

[00:23:34]

So I think a lot of that is that much of this is happening in, you know, news articles that will, like, try and get a skeptical quote and not necessarily expand on all the depth there. I think part of it is that a lot of people will round "I'm confident this will happen, but not for another century" to "this is science fiction nonsense." They don't have a good way of evaluating that differently from, you know, "maybe faster-than-light travel is possible": things centuries away are just very hard to think about.

[00:24:05]

And then part of it is that, you know, there certainly is a lot of hype and nonsense out there, from, you know, startups claiming they're doing AI when they're doing linear regressions on their two hundred data points, to, like, yeah, some very bold claims, which I'm actually hesitant to call excessive hype until they fail to be borne out, but certainly very bold claims from, like, OpenAI and DeepMind about what they're going to be capable of, you know, within the next decade.

[00:24:35]

Right. And I think those have made a lot of people sort of react against the hype by being like, no, calm down, it's nonsense, AI can't do any of those things. Right.

[00:24:45]

Switching tracks a bit now: another article that you wrote, along with Dylan Matthews at the beginning of this year, was titled "16 Big Predictions About 2019, From Trump's Impeachment to the Rise of AI." And in this piece, you did what I wish journalists would do all the time, but in fact almost never do. You made falsifiable predictions about important things with probabilities attached. Can you share an example of a prediction that you made and just, like, some of your reasoning for how you picked that probability?

[00:25:17]

Yes. So one that's been on my mind a little bit recently is, I said I think there is an 80 percent chance that there won't be a recession this year. And so, I did some reading of Tetlock, brushing up before publishing these predictions, just trying to remember what all the advice on doing it right is. This is very nerve-wracking, you know, because I think it's our first time making predictions, making predictions is hard, and we will probably not have incredibly good calibration.

[00:25:44]

We will certainly make some predictions that are false because we made a lot of them. And even if we did have perfect calibration, some of them would be false.

[00:25:53]

And with tens of predictions, even if you were perfectly calibrated, you know, there's still a pretty decent chance you'd, like, look poorly calibrated on a sample size that small.
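(A minimal sketch of that small-sample point, not from the episode: it simulates a perfectly calibrated forecaster making 16 predictions stated at 80% confidence, and checks how often the observed hit rate strays from 80%. The counts and the ten-point cutoff are just assumptions for the illustration.)

```python
import random

# A perfectly calibrated forecaster makes 16 predictions, each stated at 80%
# confidence. Even with perfect calibration, the observed hit rate bounces
# around a lot at this sample size.
random.seed(0)
trials = 100_000
n_predictions, stated_p = 16, 0.80

off_by_ten_points = 0
for _ in range(trials):
    hits = sum(random.random() < stated_p for _ in range(n_predictions))
    if abs(hits / n_predictions - stated_p) >= 0.10:  # "looks poorly calibrated"
        off_by_ten_points += 1

print(f"Chance a perfect forecaster looks off by >= 10 points: {off_by_ten_points / trials:.0%}")
```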

[00:26:02]

Yeah. So I was very nervous about looking even worse than, you know, is sort of inevitable. Well, and I do want to say the prediction community was great. They embraced this. They said, hey, you can criticize these predictions now, but don't criticize them in December unless you were willing to criticize them now. I definitely felt like they understand the concept that you've got to socially reward attempts at something if you want it to happen. That was good.

[00:26:27]

I want to reward them for socially rewarding you. Yeah, so good for them.

[00:26:32]

So anyway, one of the pieces of advice was just to do more reference class forecasting than you, like, naively feel comfortable with. So instead of asking the question, is there going to be a recession, by going, like, well, you know, there's a government shutdown and it's been a while since the last recession and I have a bad feeling about this year, you go, OK, if I had predicted a recession in every year for, like, the last couple of decades, how often would I have been right?

[00:27:03]

Turns out, you know, you'd be right about, like, 15 percent of the time. I bumped it up from there to 20 because it has been a long time since the last recession, and there were, like, some economic indicators a little bit suggestive that, you know, things looked a little bit worse than maybe the baseline. But that had very little influence on the estimate compared to how much of it was just, all right, well, if I'd been making this prediction every year, how well would I have done?

[00:27:30]

Right. Which feels weird, but that's sort of the recommended starting point if you want to make predictions.
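(A rough sketch of the reference-class arithmetic described above, using the numbers from the conversation; the variable names and the simple additive bump are illustrative assumptions, not the article's actual model.)

```python
# Reference-class forecasting sketch: start from the historical base rate of
# "a recession starts this year," then nudge it for year-specific indicators.
base_rate = 0.15              # a naive "recession this year" call is right ~15% of the time
indicator_adjustment = 0.05   # small bump for a long expansion and weak indicators

p_recession = base_rate + indicator_adjustment
p_no_recession = 1 - p_recession
print(f"P(recession) ~ {p_recession:.0%}, P(no recession) ~ {p_no_recession:.0%}")
```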

[00:27:37]

Yeah, interesting. I was reading your interview on 80,000 Hours a little while ago, and I sort of smiled at this part where you mentioned this 80 percent prediction, and you said you'd been feeling nervous when it looked like there might be a recession, because you'd put 80 percent probability on there not being one. And you were like, gee, it's a little disturbing to notice that the reason I'm rooting against a recession is because I don't want to be proven wrong, as opposed to, you know, all the human suffering it would cause.

[00:28:05]

Yeah, but I'm like very sympathetic. This is one of the big reasons why no one wants to make forecasts because they're, you know, afraid they're going to look bad if they're proven wrong. They're going to have to stress about it and so on.

[00:28:16]

And so I was wondering if you have any tips for how to overcome that fear. You know, one is to be socially rewarded by people, but, you know, if you can't count on that, or in addition to that, what would you suggest?

[00:28:27]

Yeah, it's super hard to be wrong. I think it's just something you have to do a lot of deliberate practice at, when you're wrong, sort of going, well, I learned that I was wrong and I'm glad of that, because it will let me do stuff better. I also do believe that having stuff you really want to accomplish makes it easier to be wrong, because it's easier to go, well, now I have the information I need to be right. Whereas if you're sort of doing this for pride or doing it for its own sake, then, you know, your pride is always going to take a hit when you're wrong.

[00:29:00]

Do you think it would be feasible to add predictions to, sort of build predictions into, articles or op-ed pieces? Like, you know, any time you or a freelancer writing for Future Perfect makes an argument, you know, in their piece, could you have them make a corresponding falsifiable prediction or two that's sort of, you know, logically or evidentially related to the argument? Because that to me is kind of the dream, you know, even beyond having people do, you know, an annual or monthly batch of forecasts.

[00:29:29]

Yeah, that would be amazing. And, you know, you could have, by someone's byline, their calibration score, so everybody knows how seriously to take them. I mean, ah, what a dream.

[00:29:39]

Yeah, well, I think, you know, it would add some time to articles, at least at first. I don't think it would produce a huge uptick in any of the metrics that journalists are, like, incentivized to care about, unfortunately. Yeah, I do think it would be valuable. I would be pretty excited about sort of figuring out how to make it happen. One point to sort of make is that formulating a prediction precisely is really challenging.

[00:30:06]

And, like, I'll think I have a pretty precise formulation, and then I run it by someone and they're like, I don't know what you mean, what about these three cases that this fails to differentiate between? All right. So it's hard, and maybe a whole separate skill of its own, to specify a prediction clearly enough that it gets at what you mean and has a single interpretation that everybody's going to agree on. Right? Yeah. All right, I want to make sure we have time to talk about some of the posts on your Tumblr that I particularly liked.

[00:30:38]

So just to remind our listeners, the blog is theunitofcaring.tumblr.com, so that's all one word, the unit of caring. And one thing that I like about your writing on your blog, Kelsey, is I like how you really take seriously both ethical questions, but also questions about people's mental health and sort of personal flourishing. And it's pretty rare, well, I guess it's pretty rare for anyone to really take seriously either of those, but it's especially rare for someone to take both seriously, and kind of take seriously the potential tensions between those two things.

[00:31:16]

And so along those lines, one post of yours that stuck with me was about why it's not necessarily always good to just read arguments you disagree with. Like, there's this common wisdom about, you know, you should seek out and engage with arguments from people you strongly disagree with, that's how you, like, grow and change your mind, that's the virtuous thing to do. I mean, many, many people feel almost morally obliged to do that.

[00:31:39]

So what's your sort of case against that?

[00:31:42]

Yeah, so I think I often see people reading someone they strongly disagree with and it makes them less charitably inclined towards the ideas, you know, it makes them more angry and more defensive. If you immerse yourself in it, it can give you this perception that these ideas that you hate and that are wrong are on the rise and going to destroy everything you love. And I see this on all sides. I see, you know, people who are liberal and hate read conservative sites and become really furious about how horrible conservatives are.

[00:32:11]

I see conservative sites that link, you know, harmless articles like giving advice to trans teenagers, and then everybody gets outraged and horrified about them. I see, you know, conservative Catholics reading sex advice guides just to be really miserable about the degeneracy in the world these days. And I think people are thinking, you know, if you're listening to the other side, even if you end up disagreeing, you've done something virtuous. But this isn't virtuous.

[00:32:39]

This is just, like, self-destructive. I don't think it teaches you very much about people. And I certainly think that if they are right, you will never learn that they are right by doing this. So, yeah, the advice I gave instead was: find somebody who you respect a lot and admire, and you feel like you have a lot of things to learn from them, who disagrees with you about something. And this will make you more charitable towards the idea where you two disagree, and sort of, they will probably be a good person for you to learn about that idea from, because you have this baseline respect for them.

[00:33:15]

And that's how to expose yourself to ideas you disagree with: you know, through a speaker who you respect and who you think of as on your side in some important ways. So I strongly endorse this advice, and I've given it myself, possibly inspired by your post, I honestly don't remember at this point how much of my ideas are my own and how much are inspired by people I've read. So I apologize if I have stolen any of it.

[00:33:42]

But anyway, when I've made this point, sometimes I get the pushback that, well, does that mean you could only ever, like, change your mind a little bit, or, you know, about, like, a limited, you know, you wouldn't change your mind about sort of underlying premises, because you've, like, selected for people who already agree with you about those things, because those are the people you respect, and so on.

[00:34:05]

I mean, I can respect people who I have some pretty profound disagreements with. I have a very good friend who's an effective altruist, and she's Catholic. And that's a really substantial disagreement. But, you know, since we're both really interested in making the world the best place we can, and both donating a lot of money to effective charities as one route to accomplishing that, and both interested in, like, achieving lots of the same goals in other domains, you know, this makes me more inclined to have an open mind about, and listen to her about, Catholic perspectives on things.

[00:34:39]

And I don't think the disagreement there is small.

[00:34:43]

Do you ever change your mind, or, you know, moderate or modulate your position, in response to the Catholic arguments?

[00:34:53]

Because I guess that violates my model of how to change your mind, which is that, like, you should be seeking out not just people you respect or like on a personal level, but people who sort of share your core premises about just, like, how to think and what kind of evidence counts and so on. And that, like, if people are going to make, you know, if I'm talking to someone who's against abortion and their arguments are religious, or are, you know, truly deontological or something...

[00:35:20]

And I'm a consequentialist, there's just not, like, a lot for us to engage with each other on.

[00:35:25]

Yeah, I think lots of Catholic EAs are happy to discuss abortion in terms that make sense to the consequentialists around them. And I've had lots of conversations, and I do think they've made me more pro-life. Not in the sense that I think the US government should be throwing people in jail for having an abortion, but in the sense of, like, thinking it's more probable than I used to think that an abortion is a fairly bad outcome, which, like, we should be motivated in policy to, like, try to minimize.

[00:35:52]

Interesting, because of what? So, talking to people who have, you know, strong moral intuitions in that direction, and coming up with thought experiments that articulate their intuitions, and making comparisons to other kinds of minds that I value. I think if I were to summarize the update, like, in our language, I would say I'm very uncertain right now about what kinds of minds have the property that when they die, it is bad. Like, I think when humans die it is bad; I think when animals die it is probably fine.

[00:36:31]

But it wouldn't actually shock me, if I had full information about the experience of being an animal, if I was eventually like, oh no, it's actually also bad when animals die. And similarly, it seems possible that if I had a full understanding of the experience of being a fetus, I would end up going, oh yeah, this is the kind of mind where something tragic has occurred when this dies.

[00:36:55]

And this is a different question, or separate question, from the sort of suffering question, I presume.

[00:37:01]

Like, yes, yes. Like, I do think it's bad when animals suffer. It's just, the question of whether it's bad when they die is a different question, and kind of much harder to think about. Like, with suffering, I can kind of go, OK, do they have the same neural structures for experiencing pain that I do? OK, they probably experience pain, and probably they experience pain in much the same ways that I experience pain. There's really no reason to think that the experience of being kicked in the ribs varies between a dog and a human.

[00:37:29]

Given how much the structures we have to experience it don't differ. But that doesn't answer the question of whether it is a bad thing when a dog dies. And that's just, I think, a question I'm confused enough about that pro-life friends were sort of able to convince me, hey, you should be really confused about whether it's bad when a fetus dies. And I was like, yeah, all right, I'm convinced that I should be really confused about that.

[00:37:50]

Yeah, it's really interesting.

[00:37:53]

Another one of your posts that has stayed with me is a post in which you were responding to someone's question. I think the question was, what are your favorite virtues? And you described three: compassion for yourself, creating conditions where you'll learn the truth, and sovereignty. And I wanted to ask first about that second one, creating conditions where you'll learn the truth. It's an interesting phrasing.

[00:38:19]

Because it's kind of adjacent to, but different from, these two much more common ideas that are already in the discourse: one, seeking out truth, like going out and, like, investigating things, and two, being willing to change your mind, or update, when you're confronted with new evidence or argument. So can you talk about why you specifically picked creating conditions where you'll learn the truth, instead of seeking out truth or being willing to change your mind?

[00:38:43]

Yeah. So part of that is that I think that being willing to change your mind and seeking out truth are both very hard virtues to practice, and virtues where it's kind of easy to deceive yourself as to how well you're doing at them, because you can tell yourself that you're very willing to change your mind and just haven't run across things worth changing your mind about. And you can change your mind about things that don't matter very much while still having important parts of your worldview that you sort of aren't actually up for criticizing.

[00:39:13]

And it's hard to tell from the inside whether you're doing that, whereas I think it's pretty easy to tell from the inside whether you're creating conditions under which you can learn the truth. You can ask yourself, how many friendships do I have? How many blogs do I read? How many books do I read? How many podcasts do I listen to where people say things I profoundly disagree with, that make me think? You can ask yourself, you know, when I encounter a question that, like, makes me wonder if I'm wrong, do I keep learning and keep thinking, or do I stop there and say, well, that's enough, you know?

[00:39:50]

So in some ways I prefer it as a virtue just because I think it's more concrete to answer the question, am I practicing this virtue? And that is a good virtue of a virtue, that it's concrete to answer whether you're practicing it? Yeah, I think if people want to become more virtuous, it's good to throw virtues at them where they can tell if they're doing it right or not. Right. The other virtue I wanted to talk about was sovereignty, because I bet it will be, like, less discussed.

[00:40:25]

But it seems really important. And I only realized in the last few years how important it is and how many people lack it in important domains. Can you explain briefly what sovereignty means? Yeah.

[00:40:37]

So I characterize sovereignty as the virtue of believing yourself qualified to reason about your life, and reason about the world, and to, like, act based on your understanding of it. And I think it is surprisingly common to feel fundamentally unqualified even to sort of reason about what you like, what makes you happy, which of several activities in front of you you want to do, which of your priorities are really important to you. I think a lot of people feel the need to sort of answer those questions by asking society what the objectively correct answer is, or, like, trying to understand which answer won't get them in trouble.

[00:41:18]

And so I think it's really important to learn to answer those questions with what you actually want and what you actually care about.

[00:41:27]

Can you give an example of a situation in which someone might sort of want to defer to what the quote-unquote correct answer is, as opposed to what they want?

[00:41:38]

Say somebody is contemplating whether to get married? It seems very common to think about like, well, you know, what will people think of me if I don't do this? Like, what will it say about me if I'm unmarried at my age? What will it say about me if I get married at this age? How mad will people be at me if I if I do this? And it can be hard to sort of focus on as your overriding consideration.

[00:42:05]

What do I want? What does my best life look like? And is this the path to it? And that's one that just comes up in like everybody's life. But I think similarly in effective altruism, a lot of people, you know, try to figure out what they should be doing, try to figure out what you know, following other people's advice looks like for them, and really struggle with going like, OK, what outcomes do I want?

[00:42:30]

What actions, like, put me on a path there, and what do I actually believe I should be doing? And it seems to me like the same sort of mistake. So, I shared this post of yours on Twitter a while ago and specifically pointed to the sovereignty point, and Rob Wiblin objected that, well, maybe you shouldn't have sovereignty on questions where your own judgments are less reliable than the consensus of, you know, the relevant experts. What's your reaction to that?

[00:43:02]

The problem is, I don't think you can have that shortcut, even if it would be nice. You still have to figure out who the relevant experts are, and sort out in which areas your judgment isn't that good. So, like, I think it is important to have good societal defaults. I think it is important that if somebody is the kind of person to just defer to the consensus on every question, we as a society have good enough consensuses that this doesn't screw them over.

[00:43:26]

But fundamentally, as an individual thing, you can't really do that. Like, there is no consensus sitting around to be a reasonable backstop, and no reasonable way of telling when you should or shouldn't defer to it. You still have to do the work of saying, OK, I think I'm going to defer to experts, and I do defer to experts all the time. I think my understanding of sovereignty is very compatible with being like, you know, on this question...

[00:43:51]

I just completely trust this researcher, and whatever answer they come up with, I think they're probably right. But you have to decide why you trust that researcher in particular, right?

[00:44:03]

You know, one insight that I had from reading your post in particular was that maybe a lot of debates over, like, whether you should, quote-unquote, trust your gut are actually about sovereignty. And I was always very dismissive when people would say things like, oh, you should trust your gut, trust your intuition, because I was basically imagining someone trusting their intuition about, like, you know, vaccines causing autism, as opposed to trusting the scientific evidence.

[00:44:26]

But now I wonder whether maybe a lot of the time trust your gut just means like, well, take your preferences into account because they are important data or, you know, like.

[00:44:38]

Like, actually, I like pay attention to your, you know, hesitation around, like deferring to a particular expert and actually try to figure out for yourself which experts are trustworthy or something like that.

[00:44:51]

Yeah, I definitely think, you know, maybe replace "trust your gut" with "consult your gut." Yeah. Like, yeah, check in with your gut, trust your gut as some information. Yeah. And treat making your gut more informative as an important part of your growth as a person. Right.

[00:45:09]

That's actually very well put, because I, you know, I do trust my judgment quite a lot. Like, I think I have sovereignty in a lot of domains, although not all domains. But I think one of the reasons I have that is that I've, like, formed opinions, and then I found out, you know, whether they were right or not, and I revised my thinking. And over time, I've kind of developed some trust in my judgment.

[00:45:31]

But it wasn't like trust by default.

[00:45:34]

Yeah, I think my process has been similar. Like, I chewed over lots of hard questions and I got a sense of when I tended to be right and when I tended to be wrong. And that informs my gut and the extent to which I feel able to trust it now. Right.

[00:45:51]

Well, this was an unintentionally good segue for me into the last thing I want to talk about re your Tumblr, which is: a thing that I really like, that you do often and that many people don't do, is you steelman arguments. So, like, a thing you do sometimes is someone will submit, is it an ask? I don't really know Tumblr, I'm just, like, a lurker who reads other people's Tumblrs. But there's this thing called an ask, where people, like, submit a question or prompt or something.

[00:46:19]

Yeah. And then you answer it. So anyway, there will be, like, an ask where someone has some, you know, exaggerated, straw-manny, like, inflammatory position that they want you to respond to, about, like, you know, it's, like, terrible that society is, like, brainwashing kids into thinking they're the other gender and that they should, like, chop off their genitals, like, isn't this terrible, how can you not think this is terrible, or something. And you'll respond to stuff like this in this very, like, calm and measured way, saying, like, OK, I'm going to pretend you didn't ask the question in that, like, extremely, like, unnecessarily inflammatory and kind of exaggerated way.

[00:46:55]

Here's, like, a concern that feels sort of like what might be at the root of what you're talking about, that is, like, a more reasonable concern someone might have. And here's, you know, why I still disagree with that. And that just feels, like, so much more interesting to read. And I can imagine that it would also be much more sort of, like, interesting and convincing to readers who are, you know, maybe not the original submitter, but at least kind of on the fence or confused about the topic.

[00:47:26]

It's, like, more useful for them to hear an answer to a reasonable question than to hear the answer to the original unreasonable question, which would just be like, oh my God, stop strawmanning.

[00:47:36]

And I've seen you talk also about how steelmanning is kind of a guiding principle of the work you do at Future Perfect. But my question for you is, when I talk to people about steelmanning, I sometimes get objections that it might actually be a bad thing. So one of the objections is, well, you know, isn't steelmanning going to cause you to be overly charitable or sympathetic to views that are actually bad or dangerous, and you'll just, like, kind of assume people mean the more reasonable thing, but actually they mean the unreasonable thing.

[00:48:13]

And, like, their unreasonable view is, you know, bad and dangerous and should be combated or, like, stomped out. And then the other objection is, steelmanning might actually be bad for you, in that, like, in the process of trying to find the more reasonable interpretation of what someone said, you might actually miss the point they're trying to make, because it's not the thing that most immediately seems reasonable to you. Do either of those concerns seem reasonable to you?

[00:48:39]

Yeah, I think they do. I think, you know, it's sort of unfortunate that we have one word for both, like, an internal technique for trying to understand perspectives you didn't understand before, and, like, an external rhetorical technique for, like, trying to engage productively with an argument. Yeah. And I think you need sort of different skills to employ each of them usefully. Like, as a rhetorical technique, I think the most important thing you need to be able to do is imagine you have an audience reading this post, and they flinch when they read the awkward, inflammatory, unreasonable framing, because they're like, oh yeah, I sort of feel that way.

[00:49:17]

But, like, I wish people who weren't jerks would ever say it outright, instead of saying it, like, in this inflammatory way. And I think if you have a good sense of your audience, and you have an accurate, like, well-calibrated sense of who's flinching and, like, what they believe and what hasn't been articulated, then you can be highly effective by articulating it for them and saying, like, yeah, what if we were talking about this? Because we should talk about this.

[00:49:44]

Sure. So the sort of way it goes wrong is if you don't understand your audience and you don't actually get what people are sort of hoping will be said, which is probably a mistake I make sometimes, and it's certainly a mistake I witness a lot, is somebody, like, assumes that the steelman of this argument is, you know, this argument that they think will be very compelling to most of their readership. And then actually the people who kind of hold that perspective are like, wait a second, that's not a steelman.

[00:50:16]

That's, like, just a different argument. Right. And so that's the way that one fails. And then the way the internal one fails, I guess maybe it's kind of similar. But I think of the internal one as failing if, instead of understanding the thing that they're trying to say, you just come up with something that's comfortable enough in your own worldview that it's actually missing critical components of what makes it work as an argument. And then you're like, well, this is a bad argument.

[00:50:43]

And then you feel free to dismiss it, because the strongest version of it was bad, when in reality it was more that it was integrated into a different worldview, and when you chopped it out of that worldview and brought it into yours, then it didn't have anything holding it up anymore. Right.

[00:50:59]

I like that metaphor. I'm, like, imagining something being transported out of its native climate and, you know, withering. Yeah, exactly.

[00:51:06]

Now in the new climate it can't thrive, and then you're like, see, that's no good, right? Yeah.

[00:51:12]

Well, we're almost out of time. Before we wrap up, Kelsey, I wanted to ask you for a recommendation, or just a nomination, of a book, blog, article, or other resource, or even a thinker, like a person, who you have substantial disagreements with, but nevertheless have, you know, gotten value out of reading or engaging with.

[00:51:32]

Yeah. So one blog I've gotten a ton of value out of engaging with recently is Andrew Gelman's blog on statistical significance and methodology in the sciences. And the main thing I get out of it is that he'll just post lots of papers and break down their methodology. And he comes down on the "we should just abolish statistical significance" side of things. I don't think I do, but I have, like, picked up so many mental tools from just reading through what he's doing.

[00:52:04]

And now when I read a paper, I think I have a little shoulder Andrew Gelman, who's like, that effect size looks suspicious. That seems like, you know, lots of comparisons probably went into that set of results you just reported there. This looks fishy. And I think everybody should have one of those if you're going to be reading any papers.
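(A rough illustration of the multiple-comparisons worry behind that "shoulder Gelman" reaction; a sketch, not anything from Gelman's blog: if a paper quietly runs 20 independent tests where no real effect exists, a spurious p < 0.05 result still turns up most of the time.)

```python
import random

# Simulate 20 null comparisons per "paper" and count how often at least one
# crosses the conventional p < 0.05 threshold purely by chance.
random.seed(0)
trials = 100_000
n_comparisons = 20

at_least_one_hit = sum(
    any(random.random() < 0.05 for _ in range(n_comparisons)) for _ in range(trials)
)
print(f"Chance of at least one spurious p<0.05 result: {at_least_one_hit / trials:.0%}")
```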

[00:52:23]

A little shoulder Andrew? Yes, definitely. So I highly recommend his blog to pick up one of those.

[00:52:30]

And just to remind our listeners, you can read Kelsey's work at Future Perfect, as well as Dylan's work and the work of the other freelancers who contribute. There's also the Future Perfect podcast that you should check out, and we'll link to some of these articles and blog posts that we mentioned during the episode, and we'll link to Andrew Gelman's blog as well. Great. All right.

[00:52:54]

Well, Kelsey, thank you so much for coming on the show. It's been such a pleasure having you. Yeah. Thank you so much. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.