What Rationality Can Do For EA: Talk at EA Oxford

I know the standard advice is to write general bullet points and know what you want to say, but when I wrote this talk, I found it flowing and gratifying to write up something very close to what I wanted to say (though I elaborated and riffed, of course). So here is a slightly abridged / edited version of the write-up I made and had with me at my talk, with some notes on why I said what I did. For just the write-up of the core of it, there’s a Google Doc here, and slides here.

Intro

I’ve been a teacher for 9 years and a rationalist for longer than that; when I started thinking of myself as an EA, I thought it was just a subset of rationality, that’s how linked those things were for me. I’ve learned that EA draws from a much broader set of people than that, so I’m glad to be here to do two things: make the case for rationality being useful to doing EA well, and give a set of teasers of rationality techniques, what they can look like and what they can do for you.

Goal: Introduce myself, give an overview of the plan for the talk, give some sense of my background but also my limitations

Rationality

is a word associated with philosophy, but I mean it specifically in the LessWrong, Sequences, Eliezer Yudkowsky sense (raise your hand if that is familiar to you): the set of ideas and people around the art of trying to think better. It’s a bunch of things:

  • epistemic: answering the question “what do you think you know, and why do you think you know it?” and

  • instrumental: “given your goals, how do you achieve them effectively?”

  • It’s got internal figuring-yourself-out (goals, internal conflict) as well as

  • external productivity (getting things done)

This talk is about the importance of the epistemic subset specifically (though the lines are fuzzy) to doing effective altruism well.

Goal: Again orient to the point of the talk and check in diagnostically with my audience. Background knowledge was high, so I didn’t dwell on the epistemic / instrumental distinction, but I did want to make the internal / external distinction to give a sense of the breadth of rationality content and ideas, to try to avoid pigeonholing.

Why epistemics matters

  1. Getting the answer right: not falling prey to what’s so easy to fall prey to: focusing on what’s near, what’s familiar, what’s easy; thinking what our friends think or what the social order thinks

  2. Not being too sure we have the answers: I want us to work with our best guesses, but not become unable to pivot or lose our curiosity

  3. The whole thing is that this is a question, “how do we do the most good?”, not an answer

When we fall prey to confirmation bias, scope insensitivity, the desire to round off the complicated edges to be more persuasive, or the retreat to the “luxury of being overwhelmed”, there are real things at stake: people, beings, conscious experiences, visions of the future and paths forward that will be better or worse based on whether our goals are achieved. Reality does not grade on a curve.

So as not to be a hypocrite: the persuasion thing might be a hard call! It depends on a lot of things; there’s a real tradeoff there, and it’s not easy or obvious. We should absolutely make the call, but be aware that we’re making it, and be able to justify and discuss our reasons for putting the line where we put it.

I made that sound dark and hard! And I mean, look, eyes on the prize here, but let’s get concrete. 

Final caveats before beginning: These are tasters; give them a try, but note that they are only part of the story, and only the story from me. If there’s stuff here that doesn’t work for you, that’s fine, and I’d encourage you to think “why is she saying this? What is she doing on stage? What is she trying to achieve? What is a stage? When did people start standing on stages? Should I start standing on stages? Wait, back to rationality: What version of this might work for me?”

Goal: I wanted to make the stakes real. This is also a big part of what EA is to me, and I wanted to convey that; it doesn’t have to be that way for other people, but it’s what keeps me most accountable and honest. “Reality doesn’t grade on a curve” is also one of those things that just hits me and has stuck with me for a long time, and I suspect there are some others out there where that alone might be the biggest takeaway.

Getting emotionally integrated about how big the stakes are has also been a project for me personally in the last while, so I’m practicing expressing it and letting it flow through me, and I think this was a success here, though probably on the margin other people are way more already-there than I am, so maybe they don’t need it. Nonetheless, if EA can sound too cold and abstract, double that for rationality, so concretizing it seems valuable.

The joke thing about the stage didn’t work very well! I think because there wasn’t a stage! Also, I probably went way too quickly from dark and intense to light and funny, something I’d want to adapt next time, because I actually do like the “let’s boggle at the world” sense that “what even is a stage” gets you, which is a bigger part of rationality camps than I knew before this year!

The Flinch: Hard-to-think thoughts and Line of Retreat

  • Ask for raised hands: How many of you are vegan / vegetarian?

    • What if you decide it doesn’t make sense to abstain from eating meat? (raise your hand if you knew lots of people argue this, keep it up if you know some of the arguments made)

    • Do you know about the poor meat eater problem?

    • What if you decide it’s really unethical to eat meat?

  • [Explained the Flinch, the Jonathan Haidt Line about “what can we believe vs. what must we believe”]

  • Close your eyes and raise your hand if this feels “ahhhhh” to think about

    • AI Safety

      • 1. Maybe AI Safety is by far the most important thing to worry and think about

        • Even if it’s weird, or speculative, or bro-y

      • 2. Maybe AI safety is not that important relative to other things and we are shoveling tons of people into it (because it sounds good and we’re impatient and arrogant)

    • Cause area generally

      • 3. Maybe the cause area I emotionally care most about or currently buy most isn’t the thing I should end up working on or donating to

    • EA

      • 4. Maybe EA is totally missing the most important cause

      • 5. Maybe EA is finding a ton of talented people and pushing them to optimize too quickly relative to how actual change and progress get made*

    • Career

      • 6. Maybe I should have an EA career

      • 7. Maybe I shouldn’t have an EA career

      • 8. Maybe I’m going to have to do a bunch of work to skill up to have a career I’m happy with

    • Life

      • 9. Maybe I’d be happier in the future if I networked more / exercised more / studied more

    • 10. Maybe there are no adults in the room and I just have to figure out using my best sense what to do and do it

Line of retreat: Pick one of the above or a scary thought of your own and really inhabit the world where it’s true. Take it as a thought experiment. Don’t try to figure out if it’s true. Just, what would you do if you were sure that was true?

This is supposed to give you a sense that it’s not the end of the world if it’s true. You would figure out something to do (though it might take more than two minutes to figure out).

  • Litany of Tarski

    • If I should stop being a vegetarian,

    • I desire to believe that I should stop being a vegetarian;

    • If I should not stop being a vegetarian,

    • I desire to believe that I should not stop being a vegetarian;

    • Let me not become attached to beliefs I may not want.

  • Worth pointing out that if the thing is true, you are already living in the world where it’s true

  • We are cultivating a scout mindset, where we really just want to know what’s true. And my motivation is that it really matters to get this stuff right

  • I will add that I personally find it very motivating to aspire to be the kind of person who can do things that feel less virtuous but are more right, who can look the hard thing in the face. I find I can face harder things because even as I bear the cost of the painful update, I can see myself becoming more of this person I want to be.

  • We have to be able to think these things:

    • Don't dismiss ideas as unthinkable (rather than actions as subject to strong injunctions): things that people are afraid of thinking about (because it might make them look bad, might imply bad news, is unpopular) have an elevated chance of offering low-hanging fruit for thinking. - Carl Shulman

    • Being comfortable with your own personality, emotions, and desires can help with being willing to do that kind of analysis, by making fewer conclusions unacceptable to you (empirical ones in particular). - Carl Shulman

  • Value Affirmation is, in my experience, a great way to contend with hard-to-think thoughts: reminding yourself that you are the kind of person who cares about suffering and flourishing and the future can make more thoughts thinkable. There is support in the literature for this, but it’s pre-replication-crisis and I couldn’t find out whether it had replicated. Nonetheless, if interested:

I am also extremely lucky, and I hope you all feel this way as well (to the extent it’s true), to be in a social community that I think will support me if I change my mind on a bunch of this stuff. Not all of it, maybe, and things will change, but they will hear out my arguments and care that I am taking it seriously (and, best of all, tell me if I’m being a moron). If you don’t have this, I recommend being in the market for one.

  • Concrete takeaway: Notice when a thought seems unthinkable (you can start with just the noticing, for the next week), and see if you can spend 5 minutes thinking of what you’d do if it were true

Goal: I like to make things feel real when possible, to concretize and actually experience the flinch and the response to it. I don’t know if 2 minutes was the right length of time; I think it was, and the later ones (meant to be 5 minutes but in practice shorter) were maybe too long.

The Temptation: Permission to Disagree and Permission to Check

Things can feel desirable to think because otherwise we’d be disagreeing with very smart people, but:

  • 1. They disagree with each other!! Outsourcing can only take you so far. Cf. the MIRI Discord chats

  • 2. Even in the “best case scenario” where the people you read are 100% right about everything, most folks don’t want an army of soldiers; they want people who can improve and iterate and use the ideas well and clearly, not sloppily because they heard a dumb version and felt obligated to buy in

  • 3. What if we’re wrong about stuff? It would be a serious shame if we had enough information to know that now and none of us in this room caught it.

Permission to disagree

  • “Frequently imagine what someone you respect, thinking you were wrong, would say/try to make the best argument against what you are currently thinking.” – Carl Shulman

  • I like that this is concrete: not just thinking of the best arguments, but imagining someone in particular can help (there are also downsides)

Permission to check

  • Really reasonable things still have to be empirically true: giving people money is good for them (are we sure?), teaching people rationality helps (??)

  • Concretize: If you were right, what would you expect to see? If you were wrong, what would you expect to see?

  • Have a strong emotional revulsion to self-delusion and sloppy reasoning/research, including people Wrong on the Internet within communities you have some affiliation with. - Carl Shulman

Actually Checking

In addition to philosophical or conceptual arguments, have you noticed that EAs use big numbers all the time? People also cite a lot of studies. It’s easy to get mentally dragged all over the place. But a bunch of this stuff you can just check!

5-minute look-up

  • 1 minute to decide what to look up: the current GiveWell number for the cost of saving a life? The elasticity of meat / eggs? Look up a study people keep saying exists, about the effects of unconditional cash transfers or moral licensing or whatever, that you’ve never bothered to, and read the abstract. Or a study you keep citing to people but haven’t read. Something you’re curious about, or skeptical about, or that would change your mind about something, maybe.

  • Extra credit for predicting in advance

Some thoughts afterwards

  • Note: people, studies, and abstracts are not always telling the truth

  • Numbers every EA should know

  • Anyone want to guess how many chickens people in the UK eat per year vs. how many cows? Highlight to find the answers. I calculated from here: https://animalcharityevaluators.org/blog/how-many-animals-does-a-vegetarian-save-in-the-uk/ (19.36 chickens per person per year using ACE’s numbers vs. 0.0546 cows) (I think not including eggs and milk)

  • The more advanced version of this is doing a quick Fermi estimate: what numbers feel reasonable? Can you quickly guess what makes sense when you can’t look it up? (There’s a rough sketch of one such estimate just after this list.)

  • Also advanced is making an actual model with spreadsheets or Guesstimate.
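To make the Fermi idea concrete, here is a minimal sketch of the kind of back-of-the-envelope reasoning I mean, applied to the chicken question above. It’s written out in Python only because that makes the arithmetic explicit; a napkin works just as well, and every input is a round guess made up for illustration, not a sourced figure.

```python
# Rough Fermi sketch: how many chickens does one UK meat-eater account for per year?
# Every input below is an illustrative round guess, not a sourced figure.

chicken_meals_per_week = 2      # guess: a couple of chicken meals a week
portion_kg = 0.25               # guess: roughly 250 g of chicken per meal
meat_per_chicken_kg = 1.0       # guess: roughly 1 kg of usable meat per bird

kg_of_chicken_per_year = chicken_meals_per_week * portion_kg * 52
chickens_per_person_per_year = kg_of_chicken_per_year / meat_per_chicken_kg

print(f"Fermi guess: ~{chickens_per_person_per_year:.0f} chickens per person per year")
# With these guesses: ~26. Landing within a factor of two or so of the
# looked-up ACE figure above is a perfectly respectable Fermi result.
```

Guesstimate and spreadsheets are roughly this plus ranges: instead of a single guess per input, you put in a plausible interval and see how uncertain the output is.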

Concrete takeaway: Notice when you don’t know something and see if it’s a quick google away. Notice when you don’t feel like you’re allowed to disagree, and remind yourself that people disagree amongst themselves, that you have permission, and that you will likely be more valuable to the world, within a given cause area, if you have deeper models.

Thoughts: People found some interesting stuff, but I wasn’t sure whether in this particular case it really hit. I was more confident it did in the Columbia version.

Really Trying

  • My grandmother liked to tell people that they didn’t have the luxury of being overwhelmed.

    • “You are not obligated to complete the work, but neither are you free to desist from it”

  • Surprisingly easy to not really try, to half-try because it’s frightening to be vulnerable and put yourself out there, or to look around you for what trying looks like and aim to match that, instead of really, really trying to fix the problem

Solve the Problem in 5 Minutes

  • So, pick a thing you’re stuck on, an intellectual problem or a life problem or whatever (“I have no ideas for x”), and for five minutes by the clock, actually try to solve it: biosecurity, AI safety, the fact that you don’t do your laundry on time. For this time, assume there’s a solution. If the problem is that you’re stuck, this is the time for ideas, not good ideas.

    • Like what if you had a billion dollars, which, as it happens…

    • Buy something on Amazon, text someone, actually check

Thoughts: I made sure to confess that this was putting two things (really trying, and creatively generating / babbling solutions) into one. No one reported solving a problem, but some did say it felt different from normal trying.

Hamming questions

  • What’s the most important thing?

  • Why aren’t you working on it?

  • What’s stopping you? Can you fix it?

  • If you were reading a book about your life, what would you be screaming at the main character right now?

Concrete takeaways: You can do these on your own, with friends, or in Hamming circles.

If there is anything you would like to do as a result of this talk, make a concrete plan for doing it. Phone reminder? Calendar reminder? Tomorrow? Next week? What and when?

Conclusion

The idea of rationality is not that it’s easy, but that people who have thought a lot about this and taken it seriously have found some techniques that help a lot of people. They may or may not work for you, and the version you encounter here may or may not reflect them well. You’ll have to figure out how these ideas match with what you find to be true in your life and thinking, and, if you decide to pursue them further, find ways to “adjust your seat”.
