EA as Nerdsniping

The experience of reading Less Wrong for the first time was brain crack. Idea after idea felt so novel, insightful, smart, and meaningful that I binge-read the Sequences.

There’s a lot of thought in Effective Altruism about how and whether to expand, do outreach, and have more of a social media presence. A common fear is that because the ideas are complex and there are no entirely obvious answers, you can easily end up conveying a simplified version that:

A. leaves new people with an inaccurate sense of what others believe

B. encourages taking action based on unnuanced and possibly incorrect views, with fewer feedback loops for fixing them

C. creates a community with less curiosity and epistemic rigor, making it harder to get hard questions right in the future, or to see the errors currently being made

Inspirations / connections / threads that come to mind:

I’ve been reflecting on outreach and social media for EA while consuming content the way I usually do, and I keep coming back to how interesting it all is.

Dan Meyer is a math blogger and curriculum creator who pointed out long ago that whether math problems were real-world or not was the wrong axis; questions should be perplexing, the kind that immediately grip you and make you want to find out. (How many squares are on a chessboard, by the way? And not just the small ones.) These questions grab me in exactly this way; they make me curious, they make me want to go read and check and do some math.

Similarly, there are some facts about the world that not that many people talk about, that have, in my view, very far-reaching implications all on their own. But we can argue about those after we contend with them, and let ourselves boggle.

I’m someone who’s really motivated by what’s intellectually interesting, so this won’t work on everyone, and “curiosity” is just one emotion one might have in response to these. Others include joy (Dyson spheres would make so much amazing stuff happen), terror (what if digital minds can be tortured in the millions by their creators?), and immense sadness (think of all the people who live in poverty and die of preventable illnesses).

But in addition to being truthful, I feel excited about what this kind of approach does to people: the way it empowers them to figure out for themselves what the answers are, grounds us all in object-level questions, and encourages research and checking. There is a lot that’s very uncertain; the world is very strange and has a lot of detail. I feel excited about the kind of person who pursues big and pressing questions because they’re hard and weird and novel and interesting. I want the willingness to pursue thorny questions, to be uncertain (but calibratedly so) yet dive in anyway, and the trait of being excited about correction and feedback from the world and red-teaming, because what’s important is getting it right.

It also gives concrete next steps. I’m told people sometimes encounter EA and flounder, despairing and not knowing what to do next. But learning is definitely a thing you can do next (there’s so much to read!), and for some people, especially early in their careers, reading widely, being curious and engaging in research projects might be an excellent way to spend time. I’m glad about all the things people research and try to learn in the Grand Futures classes at rationality camps, and as part of academic Existential Risk Initiatives, and I’m glad about the huge lists of open research questions that make the space feel exciting but also workable.

I’ll also say that from a pedagogical perspective (having been a teacher for 8 years), there’s a wonderful thing that happens when people develop their own need for a concept, and then, just at the right moment, when things feel on the verge of clarity or are in a muddle, get handed a helpful “economists call that kind of thing an opportunity cost” or “one framework is Neglectedness, Importance, Tractability; shall we see if it helps us here?” (We call this “motivating the lesson.”)

People who decide to work on important things because they got nerdsniped into it are the kind of people I’m excited about, and encouraging this feeling in people seems like a good way to keep EA empirical and rigorous. Presenting EA as a nerdsnipe feels both low risk and highly generative.

Things I’m imagining / in favor of:

The big questions of the world are hard, and they matter, but they’re also fascinating, and I don’t want to lose track of any of it.
