EA as Nerdsniping
The experience of reading LessWrong for the first time was brain crack. Idea after idea felt so novel, insightful, smart, and meaningful that I binge-read the Sequences.
There’s a lot of thought in Effective Altruism about how and whether to expand, do outreach, and have more of a social media presence; a common fear is that the ideas are complex, that there are no entirely obvious answers, and that you can easily end up conveying a simplified version that:
A. leaves new people with an inaccurate sense of what others believe
B. encourages taking action based on unnuanced and possibly incorrect views, with fewer feedback loops for correcting them
C. creates a community with less curiosity and epistemic rigor, making it harder to get hard questions right in the future or to see the errors currently being made
Inspirations / connections / threads that come to mind:
The goals of keeping EA curious and of embracing its intellectual challenge
At the European Summer Program on Rationality (ESPR), there was an activity called a Boggle Walk, where you walk around, notice things (I strongly recommend the link on Noticing), get curious, ask questions about why things are the way they are, and talk about it. It’s delightful and grounding - the world feels permeated with potential exploration, and you’re in touch with it in a way you don’t always get to be.
I’ve been reflecting on outreach and social media for EA, and consuming content the way I usually do, and I keep coming back to how interesting it all is.
Can humanity live off of mushrooms in the event of nuclear winter?
What should we put in vaults all over the world so that we can rebuild in the case of civilizational collapse?
Bonus recommendation: The Seed Potatoes of Leningrad
Should we make people happy or make happy people?
When we’re trying to figure out whether there will be a slow or fast takeoff of Artificial Intelligence, should we count computer development as fast or slow? Why weren’t there more non-electric computers, anyway? How many were there?
Is the repugnant conclusion really so repugnant?
Are insects sentient? With what probability?
Why are countries so similar? We don’t see some of them experimenting with a lot more nuclear power, or with human challenge trials, and so on.
How many pounds of food were wasted in the US in 2010, according to the USDA? (from a Fermi competition; a toy sketch follows this list)
Should we give pigeons contraceptives?
Can we improve mental health at scale?
Are we in a Great Stagnation or not? How important is progress? Could progress be bad?
And perhaps more than anything else: How the f*** do you do the most good??
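That food-waste question is a nice example of something you can attack with nothing but rough numbers. Here’s a toy Fermi sketch in Python; every input is a round-number assumption of mine rather than USDA data, so treat the output as an order-of-magnitude guess. (I believe the USDA’s own estimate for 2010 was around 133 billion pounds, so the toy version lands in the right ballpark.)

```python
# Toy Fermi estimate: pounds of food wasted in the US in 2010.
# Every input below is a rough assumption, not a USDA figure.
us_population = 310e6           # ~310 million people in 2010
food_per_person_per_day = 5.0   # guess: lbs of food supplied per person per day
waste_fraction = 0.3            # guess: roughly a third of the supply is wasted

annual_waste_lbs = us_population * food_per_person_per_day * 365 * waste_fraction
print(f"~{annual_waste_lbs:.1e} lbs/year")  # ~1.7e11, i.e. over a hundred billion pounds
```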
Dan Meyer is a math blogger and curriculum creator who pointed out a long time ago that whether math problems are real-world or not is the wrong axis; questions should be perplexing, the kind that immediately grip you and make you want to find out. (How many squares are on a chessboard, by the way? And not just the small ones.) These questions grab me in exactly that way; they make me curious, they make me want to go read and check and do some math.
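(Spoiler for the chessboard puzzle, if you want to check your answer: an s-by-s square can sit in (9 - s) positions along each axis of an 8×8 board, so a two-line sketch settles it.)

```python
# Count squares of every size on an 8x8 chessboard: an s-by-s square
# fits in (9 - s) positions horizontally and (9 - s) vertically.
print(sum((9 - s) ** 2 for s in range(1, 9)))  # 204
```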
Similarly, there are some facts about the world that not that many people talk about, that have, in my view, very far-reaching implications all on their own. But we can argue about those after we contend with them, and let ourselves boggle.
There might be digital people one day
Hundreds of thousands or millions of people die of preventable illnesses every year
We can maybe give people their daily recommended calories by taking over paper factories for $1/day
Different people and species probably have different hedonic ranges
60,000 stars become forever inaccessible to us every second
We can maybe program in natural language
We’ve had nuclear weapons for over 75 years and there’s been no large-scale world war with them, even though we seem to keep losing them
Someone wrote a book about how to rebuild civilization from scratch just in case
If you live on $30 a day you are part of the richest 15% in the world (Tweet)
We could move the sun to get out of the way of asteroids?
Some people think human civilization is way older than we thought
It would be extremely bad news to find life on Mars
We can’t have it all - reasonable notions of fairness (that people from different groups who get the same risk score should have the same rate of reoffending, and that people who ended up reoffending should have gotten similar risk scores regardless of group) are mathematically inconsistent with one another when the groups’ base rates differ (a numeric sketch follows this list)
There could be insane numbers of people in the future
And so so so many more (get in touch to suggest additions)
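The fairness fact above deserves a moment of boggling, so here’s the promised numeric sketch (the numbers are invented for illustration, not taken from any real dataset). It rests on an identity that follows from Bayes’ rule: FPR = p/(1-p) × (1-PPV)/PPV × TPR. Hold the predictive value and true positive rate equal across two groups, and different base rates p force different false positive rates.

```python
# Toy demonstration with made-up numbers: if two groups have different
# base rates, a score with the same PPV (calibration) and the same TPR
# (sensitivity) for both groups must have different false positive rates.
def forced_fpr(base_rate: float, ppv: float, tpr: float) -> float:
    """FPR implied by Bayes' rule: p/(1-p) * (1-PPV)/PPV * TPR."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

ppv, tpr = 0.7, 0.6  # identical predictive value and sensitivity for both groups
for group, base_rate in [("A", 0.3), ("B", 0.5)]:
    print(group, round(forced_fpr(base_rate, ppv, tpr), 3))
# A 0.11, B 0.257 -- unequal false positive rates are unavoidable.
```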
I’m someone who’s really motivated by what’s intellectually interesting, so this won’t work on everyone, and “curiosity” is just one emotion one might have in response to these. Others include joy (Dyson spheres would make so much amazing stuff happen), terror (what if digital minds can be tortured in the millions by their creators?), and immense sadness (think of all the people who live in poverty and die of preventable illnesses).
But in addition to being true, this kind of approach does something to people that I feel excited about: it empowers them to figure out for themselves what the answers are, grounds us all in object-level questions, and encourages research and checking. There is a lot that’s very uncertain; the world is very strange and has a lot of detail. I feel excited about the kind of person who pursues big and pressing questions because they’re hard and weird and novel and interesting. I want to encourage the willingness to pursue thorny questions, to be uncertain (but calibratedly so) and dive in anyway, and the trait of being excited about correction, feedback from the world, and red-teaming, because what’s important is getting it right.
It also gives concrete next steps. I’m told people sometimes encounter EA and flounder, despairing and not knowing what to do next. But learning is definitely a thing you can do next (there’s so much to read!), and for some people, especially early in their careers, reading widely, being curious and engaging in research projects might be an excellent way to spend time. I’m glad about all the things people research and try to learn in the Grand Futures classes at rationality camps, and as part of academic Existential Risk Initiatives, and I’m glad about the huge lists of open research questions that make the space feel exciting but also workable.
I’ll also say that, from a pedagogical perspective (having been a teacher for 8 years), there’s a wonderful thing that happens when people develop their own need for a concept and then, just at the right moment, when things feel on the verge of clarity or are in a muddle, get handed a helpful “economists call that kind of thing an opportunity cost” or “one framework is Neglectedness, Importance, Tractability - shall we see if it helps us here?” (We call this “motivating the lesson.”)
People who decide to work on important things because they got nerdsniped into it are exactly the kind of people I’m excited about, and encouraging this feeling in people seems like a good way to keep EA empirical and rigorous. Presenting EA as a nerdsnipe feels both low risk and highly generative.
Things I’m imagining / in favor of:
Way more lists of open research questions
Fin Moorhouse’s list of “Research I’d Like to See”
TikToks and YouTube videos presenting the facts or questions above as puzzles
Student groups, and content for them, based in this vibe
This is one of the themes in the rationality content I’m excited about promulgating and creating
Interactive activities: predictions, Fermi-ing, model-building, Guesstimating, keeping track of beliefs, putting them into models and seeing what comes out (GiveWell’s editable cost-effectiveness model; how much direct suffering is caused by various foods) - a minimal sketch of this kind of modeling follows this list
Outreach based on philosophical thought experiments, open research questions, Kurzgesagt, and podcasts like Luisa Rodriguez’s and David Denkenberger’s - formats based mostly on one interesting question and chock-full of digestible and really interesting facts
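To make the model-building item concrete, here’s the minimal sketch I promised of what a Guesstimate-style model does under the hood: represent each belief as a distribution rather than a point estimate, sample many times, and read off the spread of the output. The scenario and all numbers below are invented placeholders, not any real program’s data.

```python
# Minimal Guesstimate-style Monte Carlo: beliefs in, a distribution out.
# The scenario and all inputs are invented for illustration.
import random

N = 100_000
totals = []
for _ in range(N):
    people_reached = random.lognormvariate(9, 0.5)  # belief: ~8,000 people, quite uncertain
    cost_per_person = random.uniform(2.0, 6.0)      # belief: somewhere between $2 and $6
    totals.append(people_reached * cost_per_person)

totals.sort()
print(f"5th pct ${totals[N // 20]:,.0f}, "
      f"median ${totals[N // 2]:,.0f}, "
      f"95th pct ${totals[-N // 20]:,.0f}")
```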
The big questions of the world are hard, and they matter, but they’re also fascinating, and I don’t want to lose track of any of it.