Giving the “Is That a Lot?” Rationality Workshop
I got asked to give a rationality workshop to Columbia EA and thought “what’s the thing I want the next generation of EAs to have?” and one cluster in my wish list was:
Caring about the object level
Grounding in numbers and orders of magnitude [there’s a great pedagogical thing of “what’s an answer here you know would be too big? too small?”. We sometimes call things like this “sanity checks” – does your answer make any sense at all]
Actually going to check facts when it’s quick
Calibrated confidence in their views
The ability to add to the conversation, not just pass on what other people think
Reflexively thinking about opportunity cost
Habits that make necessary mind changes easy
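The sanity-check idea above ("what's an answer you know would be too big? too small?") can be sketched as a tiny helper. This is purely illustrative; the function name and the bounds are mine, not from the workshop.

```python
# A toy "sanity check": flag an estimate that falls outside bounds
# you would already consider absurd. All numbers here are made up.
def sanity_check(estimate, too_small, too_big):
    """Return True if the estimate sits between the absurd bounds."""
    return too_small < estimate < too_big

# e.g. a Fermi guess at "piano tuners in Chicago" (hypothetical bounds)
print(sanity_check(290, 10, 10_000))        # within bounds -> True
print(sanity_check(2_000_000, 10, 10_000))  # absurdly big -> False
```

The point isn't the code, it's the habit: before trusting a number, write down the values that would make you say "that can't be right."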
Plus some background thinking that
“Is that a lot?” is an amazing rationality technique
Logan Strohl’s capital N Noticing seems really cool for bringing all kinds of things to salience, including experiences of say, confusion, or the impetus to go look up a number
Especially inspired by using a counter (like this or this) to track things during their Naturalism Course
Writing down lists of beliefs seems thoughtful, generative and a way to both track your thinking over time and be accountable
I primarily got this idea from Dan Greene on Spencer Greenberg’s podcast
But also love Katja Grace’s list of novel opinions
Others I can’t find now
Trains should have brakes - your beliefs about what matters and what you should do about it should change when the numbers change. You should have a sense about what would change your mind.
Found after I wrote the workshop and these notes: an extremely good list of similar ideas for how to do research from Carl Shulman
So I wrote the workshop Is That a Lot: blog post version, google doc version, workflowy version
This blog post is my thinking about it, what I would say out loud to someone who was curious why I did what I did, and what advice I have for giving it.
Generally I think it’s important to check in with people, know what the core elements are but feel willing to riff based on what is interesting or fruitful in the conversation. At the same time, don’t be afraid to say “Excellent, we’re now going to transition to the next conversation” (magic words)
———————————————————————————————————————————
I riffed off of the fact that Peter Singer for a long time cited life-saving numbers in the hundreds of dollars, even when GiveWell's estimates were in the thousands. (Organizers should look up the actual number at the time they give this workshop)
[When he says CDs or shoes, you can actually just think about how much those things cost. Brooks Brothers shoes are in the few hundreds of dollars at a quick glance, but use numbers you think are relevant. Maybe Peter Singer has hilariously expensive taste in shoes.]
Beat 1: The workshop starts by asking participants to tell the drowning child story, then asking what numbers they remember in the story, and whether the numbers played a big role in how convincing it was. “Is that a lot?”
This is nice because it starts immediately interactive and has the element of looking at something that might feel old hat with fresh eyes.
Asking how much they buy the original argument also gives an organizer who’s not familiar with the group a sense of their levels of buy-in, experience with the arguments, etc.
Riff: Then to think about the difference between 300 and 3000. And what if it was 30,000? (The original title of the workshop was 300/3000/30000). The idea here is that the numbers actually matter: changes of orders of magnitude should, in many cases, cause our beliefs or arguments to change.
Which lends itself to thinking about how much the numbers we are implicitly anchored on would have to change before we would change our minds. “Is that a lot?”
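The 300/3000/30000 riff can be made concrete with arithmetic: hold a donation fixed and watch what an order-of-magnitude shift in cost-per-life does to it. The budget figure here is hypothetical, chosen only to make the ratios visible.

```python
# Illustrative only: how a fixed donation's impact swings with
# order-of-magnitude changes in the cost to save a life.
def lives_saved(budget, cost_per_life):
    """Lives saved by spending `budget` at a given cost per life."""
    return budget / cost_per_life

budget = 3_000  # hypothetical donation, in dollars
for cost in (300, 3_000, 30_000):
    print(f"at ${cost:,}/life: {lives_saved(budget, cost):g} lives")
```

Ten lives versus a tenth of a life is not a rounding error; if your argument survives that swing unchanged, it probably wasn't about the number to begin with.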
But first, sidebar: I say “maybe this doesn’t actually matter!” and cite Holden. Then we generate reasons why it might matter or not. I like this because rationality content has a habit of seeming really obvious in retrospect. I want the claims I’m making to come across as claims, and for skepticism to still be in the air. Also I think the reasons to care about numbers are really interesting and emotionally resonant and more numerous than people think, so it’s a good excuse to say them in conversation with what other people generate. (It’s a hugely different world to live in when you think that every CD is a life! Both more horrifying and more good that can be done – I like highlighting both directions of the emotionality)
I found it really good for the flow to engage a bit with each reason people gave, and if I had a quote in my list that agreed with them, to read it at that point. Nice vibe to be encouraging in that way.
Sidebar continuation: People do think it’s cheap to save a life! Given that we think it’s loads more expensive than the average person does (though the average person probably doesn’t have a strong sense of what numbers make sense and which don’t) that seems relevant to our communication. We might actually be telling people that doing good is surprisingly hard! [I liked going into this, again to push that this is complicated and things aren’t obvious, but it seems optional to me]
Beat 2: EA involves lots of numbers. If the numbers change, our beliefs need to as well.
Personal / emotional element: Developing your own relationship with numbers will also make you feel less dragged around, more calibrated in how confident you should be and how robust your beliefs are, and less prone to epistemic learned helplessness. If you know your answer to “Is that a lot?”, you’ll know when a new claim is action relevant, whether it’s above your bar for investigation.
[For young EAs especially, I like giving them the option to have a slightly adversarial framing to the arguments they’re encountering. Don’t let those “big deal EAs” drag you around! I personally find this at least somewhat helpful for activating my own agency.]
Putting it into action: This is my favorite part of the workshop (and possibly should be a bigger part – I ran a different version of this workshop where it was basically the whole thing). I asked people to think about what the case for working on the thing they were currently working on was, and a number that feels especially relevant to it. Asking them what these numbers were was an amazing experience in learning a bunch of new cool numbers. Also a great time to express my view that their arguments *should* have numbers associated with them, at least somewhat, that importance and urgency and value come in different amounts. Plus I got to shill the “Numbers Every EA Should Know” and attendant Anki Deck that I like *so* much.
I don’t remember exactly how I went about this, but at some point in here I said “you have permission from me, personally” to do something like have your own opinions on relevant numbers, to have your own argument. People often feel like they need permission.
Then I get to ask the group, and individuals who are willing, “what is the value for that number that would change your mind / make you rethink”. What if global warming was only expected to be x degrees? Or happen after Y year? What if animals were thought conscious by only z percent of scientists and philosophers?
And I’m pushing them, but I’m also trying to make it clear that I think it’s so useful for them to know what they think, how wide their bars are on this, when they’ll need to update, how much accountability and integrity they’ll cultivate if they choose to make these bounds clear. This is how we know we’re working on the most important thing; we know when we’d switch. [I like the data blind analysis from metascience]
Extensions: This is where I hit on the value of Going to Check (ie for numbers you’re curious about, or matter to your argument, get actually curious and google it! See if it’s findoutable in two minutes), of knowing where you don’t know the number (and deciding whether it’s worth investigating more in the future, if it matters a lot). I got a bit intense about this: something along the lines of “people say they’re curious about how many x or whatever, but they can’t really be, because they have computers right there to find out!” which is stronger than what I endorse, but I say that I’ve been cultivating this habit over the last few years and I think it’s made me more actually curious and notice what fake curiosity was and also more in touch with the world and the facts in it.
[After this, I saw one participant on his phone and he was looking something up :)]
Beat 3: Let’s try it out
The eat-cows-not-chickens argument works really well for thinking about whether you buy it, and then thinking what numbers would make you buy it or not. Specifically, how many times more calories do cows have than chickens? At the number you think, does that convince you? There are lots of considerations: treatment, pain, sentience, etc. If you found out the number was 10x higher, would that convince you? 10x lower?
Just a really interesting argument for people who haven’t heard it before, plus a chance to try out all these ways of thinking. I think it’s important to encourage people if they bring up things like moral worth; do they have a sense of the order of magnitude more that they value cows than chickens? What if the ratio of calories still overwhelms?
The idea isn’t to be convincing but to be encouraging about model-building. Like, yes! Those things should be included. What do you think the numbers are? What happens if you go fill out this table where you get to put in your own numbers? Go to Guesstimate if you want to formalize it a bit, look up some useful numbers. What you think is important matters, now figure out how much.
It’s 100x, which seems to be way higher than people tend to think.
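The "fill out this table with your own numbers" exercise can be sketched as a few lines of arithmetic. Everything here is a placeholder: the calorie figures just encode the roughly 100x ratio from the workshop, and the moral-weight knob is exactly the kind of number participants are meant to supply themselves.

```python
# Plug-in-your-own-numbers sketch of the cows-vs-chickens argument.
# All figures are placeholders -- replace them with your own estimates.
CALORIES_PER_COW = 1_000_000      # assumed ~100x a chicken, per the workshop
CALORIES_PER_CHICKEN = 10_000
MORAL_WEIGHT_COW = 5              # your ratio: how much more a cow matters

def weighted_harm_per_million_calories(calories_per_animal, moral_weight=1.0):
    """Animals harmed per 1M calories eaten, scaled by a moral weight."""
    return (1_000_000 / calories_per_animal) * moral_weight

cow = weighted_harm_per_million_calories(CALORIES_PER_COW, MORAL_WEIGHT_COW)
chicken = weighted_harm_per_million_calories(CALORIES_PER_CHICKEN)
print(f"per 1M calories: cow harm {cow:g}, chicken harm {chicken:g}")
```

With these placeholder numbers the calorie gap overwhelms a 5x moral weight; the interesting question for a participant is how large their weight would have to be before it doesn't.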
Extension: Why aren’t you freaking out about OpenAI? At what point would you start? It’s exactly the right form of question. I didn’t end up going in this direction.
I very strongly wanted to end by talking about potential concrete next steps. I had a list of suggestions I printed and handed to everyone, and asked everyone to see if there was one they thought they’d like to do (or come up with one of their own). If so, to make a concrete plan of when they would do it: right now, envision on what day, even put it in their calendar this very second. I also offered that they could contact me by email or social media and ask me to ask them in a day/week/month if they’d done it. No one took me up on that, but someone did at the next version of this workshop I ran, which was lots of fun.
The success conditions:
It becoming the norm in those EA groups that when empirical claims came up they asked themselves the questions
Is that a lot?
Is that a number that would convince me?
What’s my own sense of what numbers should matter?
Those EAs writing out their arguments more, and being more upfront about what numbers mattered to them (were cruxy) and what numbers would change their mind
Maybe having a list of beliefs (I didn’t hit this hard in either workshop, would have liked to all write down 2 beliefs we currently have in this one)
I thought both versions of this went well and were lots of fun! People’s feedback indicated that they liked and appreciated the core themes.
Feedback included (I think one person said each of these):
desire for more concrete examples (though no one said the cow and chicken thing was especially valuable to them)
desire for examples and ideas that were more amenable to people new to EA
learning about numbers relevant to different people’s worldview was interesting (it was one of my favorite parts - the participants knew a lot of interesting numbers I had no context for!)
Comments and questions welcome, especially if you run this workshop or want to and want any guidance or thoughts. If you come up with things that address the feedback above, I’d love to know! (You can also grab me on social media or calendly - links at the top of this page).