6 Useful Mental Motions: EAGx Oxford 2022 Workshop

I struggled a lot writing this. I didn't know what people at an EAGx would already know, and I didn't know whether I was pitching it at too advanced a level or not advanced enough (lots of rationality concepts sound obvious in retrospect). In the end, I picked the framing of "mental motions", which I like because it presents them as a toolkit, a set of things you can do when you want to engage with something more deeply, not a normative approach to How To Become More Rational.

Also, I’m on a “deliberate rationality” kick at the moment, thinking about how to do deliberate practice regularly in my life, to strengthen that element of the tripartite model I really liked from this post:

  • shower thoughts (what do you think about by default)

  • gears level modelling (are you actually thinking about things in a way that yields predictions and lets you update)

  • deliberate practice.

Mental motions are the things you practice so that you do them by default when you need them most.

The Slides

Can be perused here, or accessed here.

The Workshop (with commentary)

Intro

What's rationality, and why does it matter to EA? Because the problems are hard, and because of mantra 1: reality doesn't grade on a curve.

Commentary: For both epistemic and moral reasons, I really like the reminder that in the end, reality is the only judge. We will or will not achieve our goals. Clever argumentation will not save the world, unless it does. It’s a bracing, clarifying view, one I treasure.

Why mental motions as the framing

How do we make getting the answer right more likely? There are a lot of things, but there are some strategies I'm going to have us practice today, especially around forming deep models of what we care about, which I'm framing as mental motions. The set of mental motions you have access to seems important: What do you do when someone criticizes you, when you're confused, when you think you're not confused, when you decide what to skim and what to read, when you look to generate new ideas, or when you need to get the right answer in an important moment? What are the mental moves you make? Do you like them? Do you endorse them? What do you do with a new idea?

Commentary: This was me playing with the idea that we have instinctive mental motions, and training them to be better means we are going to be more rational by default. It’s great to do deliberate “sit down and do rationality”, but even better if it’s easy and natural. I’m not sure this is accurate or the most useful thing, but I think it’s plausible.

Inside View

The mental motions here in particular are about forming an inside view. I don’t actually think deferring to other people and outsourcing your thinking is bad, but I do think that

  1. you should know when and what you’re outsourcing so you know where to investigate further if you want to understand it better yourself

  2. you should know how your model works around the things you’re trusting other people on (gears-level thinking)

  3. at some point, deferring runs out: one day people ask you what you think of a grant, or your actions become dependent on your actual model of the world, and then you gotta figure out what's going on

Picking an empirical thing to practice on (the underlying structure of the workshop)

And you've heard a lot of ideas this weekend, I hope.

  • Raise your hand if there's something you've heard this weekend that you feel you need to think about more (on matters of empirical fact, which includes "x should y")

  • Raise your hand if something this weekend changed your mind about something reasonably important (on matters of empirical fact, which includes "x should y")

If you raised your hand, great. I have a model of rationality workshops where a huge amount of the value is giving you time and space to do the things you already know you'd like to do, the things that would be good for you, so if you have something like that, keep it in mind. If you don't, you'll want to pick something to practice these mental motions on - a good bet is your most uncertain empirical view that matters to you, or perhaps the empirical view that's most core to your choices, the most decision-relevant one.

Examples:

  • FOOM vs. not FOOM (takeoffs)

  • Climate change could cause civilizational collapse

  • Community building is good / bad

  • More personal assistants are a good idea / bad idea

Caveat here: Some of you will have seen these before, some won't have. I'm giving this talk because I think these mental motions are valuable to me and others in gaining deeper understanding, but of course, people are in different places. This might not be the thing that's most helpful to you, the thing that will level you up in getting the right answer consistently, when it matters. I am going to suggest items at different points of "advancedness", but if you're finding that this is easy, three things: 1. maybe pick something else to practice on? 2. great, teach others! and 3. figure out what the generator is of the thing I'm doing and see if there's a version that works for you - maybe you have better mental motions to practice right now? If it's hard or weird, that's ok. Try it out. See what feels valuable.

Commentary: I was really nervous about whether it would come off as too boring / obvious to people! So this was an attempt to head that off.

Mental Motion 1: What is the story?

What's the thing you care about / the thing you heard this weekend that maybe makes sense? Or what's your criticism?

You're going to explain the reasoning to the person next to you as best you can. Does anything feel like a gap? As you listen, can you envision what's being said in enough detail to know whether you agree?

Commentary: I get a ton of value out of just saying my reasoning out loud to someone (or writing it out). It draws on the Illusion of Explanatory Depth, which I like so much: if you ask people whether they know how a fridge or a bike or the US health care system works, many will say yes. Then if you ask them to explain it, they'll find themselves struggling, and afterwards report much less confidence. Explaining the reasoning highlights where you're unsure, which logical leaps are unclear, where there's more work to be done.

I personally am confused about why food only absorbs oil when it's hot - mushrooms in a hot pan with oil will absorb it, and I don't think they would if you didn't turn the heat on. Which I think means I don't actually understand cooking.

I gave people, I think, two minutes for each side of the pair to walk through the argument, then explained the Illusion of Explanatory Depth and why I thought this exercise was valuable. Then I asked if any of them had found themselves stumbling over parts of the argument, and I think lots of people raised their hands, but I was running on a lot of adrenaline so I definitely do not remember well.

Follow up:

For one of the gaps in your argument, or a place you felt unsure, take 30 seconds to figure out how you could fill that gap, something you could look up or ask someone about (Mantra 3: there's always a next thing to do)

Mental Motion 2: Make it real

The idea is to make the possibility of the claim being true feel real, near mode, exactly as real as the reasoning involved in going to the grocery store and getting oat milk. Can you tell a concrete story about the way the world looks if this is true? Can you tell two?

Examples

  • Claim under evaluation: biosecurity isn't so scary because tacit knowledge is what matters

    • Concrete story: "ok, so you go and you try to make a pathogen but it turns out temperature matters a lot? or the brand of PCR you use?"

  • Claim under evaluation: you should get an econ phd

    • Concrete story: "ok, that's because we live in a world where prestige is going to generate most of my impact, so I need a phd" or "ok, because that's how I'll do great research on evaluation impacts"

  • Holden Karnofsky’s description of digital people and Greg Egan’s Permutation City gave me a visceral sense of what a world of digital people could mean, even though each of the specifics is very unlikely. It made it part of my actual default view of the world, and made me have shower thoughts about it.

Importantly, these stories don’t have to be at all likely to be true, nor does the reasoning in them have to be rigorous. The goal is to take what’s abstract and far away and make it feel like just a part of the world, something you think about when you think about what the world is like, not something you only think about when you’re thinking about “AI” or “EA”.

This reasoning is why I like lists of potential projects and research projects (like this one). They make me feel energized, and like these big ideas could really exist in the world.

Take two minutes each and do this with the person next to you.

Commentary: I don’t think I managed to communicate this very well! It was too easily confused with the first mental motion, and “this doesn’t have to be true” was confusing, I think. This may be a more advanced technique, or just something that should be taught separately, or before the first motion, or further separated in time.

Commentary 2: I personally like the different types of thinking that mental motion 1 and 2 create. One is convergent, rigorous, skeptical, and the second is generative, creative.

Mental Motion 3: What does the world look like if? (Making predictions)

If the claim is true, what should the world look like in a month? A year? Ten years?

What if it’s false?

Write down some predictions (it’s ok if they’re super vague!)

If the claim is more of a “should” than an empirical thing, what does success look like? Failure? (i.e. if it’s true that EA should do X, and we do X, what should we expect to happen in the world?)

Now you can check!

Mental Motion 4: Find weaknesses

Can you beat your reasoning up and see where it's weak? What are the cruxes? What facts could change that would make you change your mind? If there’s a criticism of EA or a project you’re considering, what’s the thing that would actually be bad if that was true? Where is the error being made?

The big version of this takes more time, but here's the short version: Can you pinpoint a fact that, if true, would cause you to change your mind about the claim you're considering? What number? If you have a criticism, where is the core point of actual error?

(I gave one or two minutes to reflect on this)

Reminder: Problems in EA are not solved! Important questions are not answered! There’s a lot we don’t know, and you want to know how robust your thinking is to different potential answers.

Is the claim you have in mind still true if:

  • The Happier Lives Institute is right and mental health is way more important than material well-being?

  • Surveillance (it’s on the 80k problem areas page!) is the most important cause area?

  • Consciousness is an illusion?

  • There's a war with Russia?

  • Crypto crashes and there's no more money?

  • It turns out non-domain experts can't make valuable contributions?

  • In five years EA has no new wins?

Take 30 seconds to figure out which of these or similar questions you’d like to explore at some point.

Mental Motion 5: Redteaming

Beating up on your own ideas is scary enough. What about asking someone else why you might be wrong? Are you brave enough to have other people poke holes?

Acknowledging that this is unpleasant and scary!

Ask the person next to you to tell you the most likely reason your claim is wrong.

(I gave two minutes each for this as well, though may have extended it).

Mental Motion 6: Line of Retreat

Some things are scary to imagine being true. Have you noticed any in your thinking here? Is there something that makes you flinch? What's hard to think clearly about / gain purchase on?

Question to ask yourself: What would you do in a world where the scary thing was true?

(Take a minute or two to think about this)

In the end, reality doesn’t grade on a curve (mantra 1), but there are things you can do next to reduce your fear or uncertainty (mantra 3).

Mantras

The ideas I’ve been coming back to in this talk are:

  1. Reality doesn’t grade on a curve

  2. You can’t always outsource to other people

  3. There are (almost always) next steps, even in the midst of a lot of uncertainty

Concrete Takeaways

  • If you found this helpful, this is a practice you can keep doing just by going through these steps, on your own or with other people

    • Some things here are babble / generative and some are prune / edit (Babble vs. prune)

  • If this wasn’t helpful but you like the idea of a rationality practice, there are options

  • If there’s anything that this made you think you want to do in the future, make a concrete plan now. Put something in your calendar or text a friend or set an alarm now to try out doing this.
