How to Feel About EA
There are a lot of ways one could feel about EA, and a lot of aesthetics and vibes it could have. I find it useful to be able to switch between different lenses and notice which ones appeal to me the most. Also I like lists.
I wrote and ran a workshop that was trying to do a lot of things, and my favorite part was listing some of the below, asking participants to sort them into 1. I Feel It In My Bones, 2. Believe but Don’t Feel, and 3. Don’t Believe, and suggesting ways of putting Category 2 elements into Category 1.
I like having the power to cultivate the feelings I most endorse having, and the experience of grokking a new way of seeing something and appreciating a new aesthetic is really delightful. It’s important to remember that all of these have their weaknesses and misleading elements.
It’s just normal morality (every person is of equal moral worth) taken to its logical conclusion (which is not a very normal thing to do).
It’s not complicated - I have more than I need, others need it more. If I want to help people, I want to be sure I’m actually helping and not just thinking I am, and I think helping more is better than helping less.
It’s easy to think that billionaires should give up what they have: while they sit on more than they need, people starve. But I don’t see why I, as someone in the richest 1% of the world population (on a schoolteacher’s income, I want to note), am exempt from the same argument.
The idea that every person, wherever and perhaps whenever they are, matters equally, has some serious implications if you think it all the way through. I am not a moral philosopher, but I think with even just basic aggregation it means helping more people is better than helping fewer, because in a group of five and a group of six, the five in each matter equally, and then you’re helping one more person.
This is perhaps why there is a great deal of common cause among worldviews. Take existential risk: for most things you think are terminally good, the world ending would mean there is less of them. This case is made by Toby Ord in The Precipice and by Gavin Leech here.
If you wanted this deeper in your bones, I’d recommend
Four Ideas You Already Agree With (That Mean You're Probably on Board with Effective Altruism)
If You're an Egalitarian, How Come You're So Rich? based on title alone, though I’m told by someone I trust that it’s a good book
More intensely, it’s just not doing the usual thing of pretending that there isn’t a huge amount of suffering, that tradeoffs aren’t real, and that there is a neutral option.
The Other Mother: “But I reject the idea that I am responsible only to my child. Not as long as the Other Woman has trouble buying food for hers. Because I am a little closer to her now. That woman loves her child like I love mine.”
I know someone who thinks about justifying his actions to mothers, in Africa and in the future, and about whether they’ll forgive him.
The Voices of Children in His Tents (Chapter 35 of UNSONG)
Moral circle expansion: the homeless in my city, including the ones with inconvenient mental illnesses, who smell and yell inappropriate things, how do I think of them as viscerally as important as me? People dying of malaria every day across the third world, am I sure I believe their lives matter the way the people I see every day do? What about animals, tortured in factory farms? What about wild animals, dying slowly of illnesses where I will never see them? What about people who will live rich, full lives in 2,000 years, if I can take the right steps now? What about digital sentiences? What about the people who will never live because we lost that galaxy? What about insects or video game characters? There are different reasonable places to get off this kind of wild train ride, but I like being pushed in this way, to take ideas seriously and to figure out where I stand.
A short story about the lived experience of a digital being: Feature Selection
It’s just what it looks like when you really care about getting the right answer. The rigor, the cold-hearted bullet biting: it’s what looking reality in the face and not being sure you’ll win looks like. When my friends test positive for COVID, I do the actual math on what that means for me, because the only thing that matters is actually figuring out what to do next. When a child has a rare genetic disease, parents will learn everything they need to get the right medical care. I’d guess they’d welcome being told they’re wrong, or that there’s different science to learn, because the only judgment that matters is reality’s.
From Scout Mindset: “When the depressing news hit in 1993 that the drug AZT was no more effective than a placebo, it sparked an important update for the activists. Previously, they had been pressuring the government to release new drugs that seemed promising right away, instead of going through the standard testing pipeline, which can take years. They now realized that had been a mistake born of desperation. “I felt I learned an important lesson,” said member David Barr, “which is that as a treatment activist, to the greatest extent possible, let study results determine the policy positions I support and for which I advocate. My hopes and dreams and fears should not guide that which I advocate for.” Moving forward, their mandate became: Get the science right.”
Cold Research: “If it were his daughter still in an orphanage, Levitt wouldn't just want well-intentioned help for her. He would want the best help for her. And that's exactly why research matters.”
Reason vs. Emotion: “It’s when we really care that we find ourselves trusting our brains.”
Giving gladly: Being in the global richest 5% (as everyone making above $30k/year is) is an incredible gift. You can turn around and use that money to change and save lives, for an amount that will not change your quality of life very much. The price and value of a life are unbelievably far apart, and that creates an amazing opportunity.
To get this in your bones (this one’s hard for me, as it doesn’t resonate that strongly, and I’m not sure I want to cultivate it):
Julia Wise’s classic on Giving Gladly
Holden Karnofsky has talked about it, I’m sure, but I can’t find where
It’s a community of people who really care - and I mean really care. Like, cry when they think about the suffering of people they don’t know or who don’t exist yet or stop watching nature documentaries because of the suffering of the animals within them care, and being around them brings me more in line with my own values and makes me a better person.
Quantitative morality is moving, too.
Why Quantitative Methods are Heartwarming is a spectacular blog post
“Suppose the pricing algorithm for ride sharing isn’t as good as it could be. Then day after day there will be people who decide to walk even though they are tired, people who wait somewhere they don’t feel safe for a bit longer, countless people who stand in their hallway a bit longer, people who save up their health problems a bit more before making the expensive trip to a doctor, people who decide to keep a convenient car and so have a little bit less money for everything else. All while someone who would happily drive each of them at a price they would happily pay lives nearby, suffering for lack of valuable work.”
“a victory for quantitative methods is always a victory for people. And if you don’t know who they are, that means that they quietly worked to end some ongoing blight on humanity, and did it, and weren’t even recognized. Often, even the good they did will look like a boring technical detail and won’t look morally important, because saving every American ten seconds doesn’t look like saving a life. And I’m not sure if there is anything more heartwarming than someone working hard to do great good, relieving the world from ongoing suffering, knowing that neither they nor what they have given will be appreciated.”
If you have more ideas to add to my list of how to make selling kidneys feel as heroic and life-saving as it is, please add them to this thread
It’s fascinating: there are so many cool things to do, questions to answer, puzzles to solve. There are research questions that no one has answered (no one bothered to look at whether we’d survive civilizational collapse until Luisa did, as far as I know) that you could be the one to answer, there are pressing problems that might be solved through gossip, or through brightening clouds, there are questions about philosophy (I was confused by and interested in the discussion of anthropics by Spencer Greenberg and Toby Ord) and aliens and whether we live in the most important century of all time, because the world is going to look really different really soon.
Look at these open research questions!!!
I wrote about this in EA as Nerdsniping
Relatedly, EA is a place where extremely thoughtful and smart people are, and you get to talk to them and cold email them and network with them and maybe work with them.
It’s ambitious: we believe in giving ourselves and each other Hero Licenses around here. The Efficient Market Hypothesis for ideas doesn’t really seem to be true and the world seems a bit crazy and a bit irrational and a bit fucked. If you have a good idea, people are going to support you, and we believe in trying things that might not be good ideas if it’s worth finding out, and people will give you money to try it.
It’s like any other kind of ambition: It’s not that weird to focus a lot of time and effort on something you care about and spend less time partying with your friends. It’s a little out of the ordinary when ambitious / career-focused / startuppy types do it, but not by that much. Why not do it around improving the world?
Recommendations
Cyberpunk / futurist / science fictional: You’re telling me I might be uploaded one day? Or have to deal with digital people? Or have science done by artificial intelligences? Or reach the singularity in my lifetime? Humanity might actually harness all the energy of the sun or decide what species live and die? You can think about the future in 10 and 100 and 1,000 years, but also in 1,000,000 years: what does that world look like? Is it a grand future where we’ve harnessed AGI to end suffering and want, or an utter dystopia, or just blackness, because we failed to reach our potential?
Strongly recommend this music video of clowns in a van in a parking lot screaming silently about the singularity
It (could be) patient, Burkean, conservative: the future could be very big, but history has already been long, and we have much to learn from it, both in what is robustly good for humans, including their societies and their communities, and in how wrong so many well-meaning people have been. There is a richness to what matters to people, and much to be protected. Now may not even be the best time to make big moves, since later we will know more and will have had time to invest and grow our resources.
No threat to you: I can’t actually change your values, and no one can make you believe this if you don’t buy it. The first goal in interacting with all of these ideas is to figure out whether, and which parts of it, are for you. If you’re into it, then great, and if not, then it was maybe never really for you, and that’s good for everyone involved to know. You can take this as far as you want. (Less sure about this one; it seems a bit convenient.)
What is the point of this list?
In general, I support the idea that many things matter to you, and it’s valuable to figure out what. This list is in part meant to give a sense of all the different ways one might relate to EA, or different framings one might find attractive for themselves or in describing it to others. None tell the full story, and some tell less of it than others, so it’s important to be honest, but there are a lot of truths in here that might be meaningful to different people in different ways.
I even more strongly support the idea that you might have intellectual beliefs you endorse but haven’t fully integrated yet, and reading through these might give a sense of which orientations you’d like to cultivate, and how you might do so. Some ways of being in the world make me more the person I want to be: more accurate, more truth-seeking, more correctly living in the world as I understand it.
A concept I find really valuable in trying to be more truth-seeking is figuring out what instrumental role my beliefs are playing. Is there something, for instance, that one belief is trying to protect?
Leah Libresco describes this in others as:
I’ve tried explicitly reframing whatever the other person is saying to me as “Watch out! You’re about to step on a kitten!!” and then working out what the kitten is. This way, intensity in argument isn’t necessarily aggressive or insulting, and it’s not something I need to take personally. It’s just a signal of how passionately my interlocutor loves the thing they think I’m about to blindly trample on, and I’d best figure out what it is sharpish.
Spencer Greenberg analogizes beliefs to a temple, with load-bearing elements.
Anna Salamon describes bucket errors, the feeling that two things are linked, so that it feels impossible to decide one thing without deciding everything else.
Carol is afraid to notice a potential flaw in her startup, lest she lose the ability to try full force on it.
Don finds himself reluctant to question his belief in God, lest he be forced to conclude that there's no point to morality.
As a child, I was afraid to allow myself to actually consider giving some of my allowance to poor people, even though part of me wanted to do so. My fear was that if I allowed the "maybe you should give away your money, because maybe everyone matters evenly and you should be consequentialist" theory to fully boot up in my head, I would end up having to give away *all* my money, which seemed bad.
Noticing this can help disentangle them, and help give permission to make separate decisions separately.
I’d guess that people who are doing outreach or mentorship, or who have their own internal conflicts, might find this list especially useful.
Also lists are fun.
Questions you can ask yourself about feelings that might be useful
These were intended for the original workshop. I’m not sure they make sense here, but I do like them, and anyone experiencing internal conflict about how to feel about EA or other things might find them helpful.
Where is this feeling coming from? Do I endorse that?
Do I endorse this feeling?
Is there something this feeling is trying to protect? Is there another way of protecting that? Can I reaffirm my values / my ability to decide so I can engage with this with less fear?
What are my goals? Why do I care about them?
Is this a feeling that helps me achieve my goals, or makes them harder?
Is the feeling or intuition giving me important information? What is that information?
Is the feeling based on anything false? Is there an option that feels real but isn’t?