A common challenge in the Effective Altruism world is communication. I have mentioned the fidelity model of spreading ideas supported by the Centre for Effective Altruism. I think it can be a good model, but of course it's a tool, not a truth, so it has its good uses and its less good uses.
The fidelity model's usefulness is directly related to how complicated or obtuse a concept or idea is. This is why people want to be so careful about EA: it's an extremely obtuse idea. I don't think it's especially complicated, though, as it doesn't require special knowledge or mathematics to understand.
I would call EA obtuse because it asks you to think in a way most people have not. The ideas of global altruistic maximization and cause neutrality are weird.
Most people's framing of the world requires realignment if EA is to fit into their decision-making. For better or for worse, people really need to make an effort to wrap their minds around EA.
The EA community feels that its broad media coverage in the past has been unsuccessful. EA was painted in a way it did not want to be painted: it was misunderstood.
This fear of being misunderstood is especially acute for those who believe that the idea of EA, particularly longtermism and the welfare of the far-distant future of the human race, will be irreparably damaged if the reputation of EA is damaged. Some seem so scared of misrepresentation that they would rather avoid representation entirely than risk a backlash after which no one ever thinks of the far future again.
I agree with being careful, but I think the fear that good ideas will be forever erased from the history of ideas by bad media coverage is misplaced. EA is not the first to think of future generations, and it will not be the last. Nick Bostrom might be able to claim to be the first consequentialist to really write about humanity approaching the heat death of the universe, but I think that is only one small area in the pantheon of EA thinking. The idea we hold most important right now, that future people matter and that what we do will have consequences for them, is a recurring theme in human history.
Ideas tend to recycle, and to suggest that an idea will die along with a movement is dramatic. Just because a philosopher hasn't written it in a journal doesn't mean the idea has never existed.
The emphasis on the fidelity model, then, is a response at the very conservative end of the spectrum of possible responses, and we should treat it as one of many options in our communication toolkit.
I want to focus on fixing problems and keeping philosophy as a tool, not our goal. We have consequentialism to help us fix problems; we don't find problems in order to help us improve consequentialism.
With a focus on solutions, we can then find other, responsible ways of sharing our ideas with others.
Other communication
There are already other attempts to communicate that I think are worth highlighting and applauding. They are not that far from the fidelity model, but they are still worthy efforts, and they are getting towards ideas that I think we should be exploring further.
The largest effort in the EA world has been the attempt to promote public intellectuals such as Will MacAskill and Toby Ord. They have had, and still have, books published by large trade publishers that reach the masses without the intercession of a fellowship or an EA member to interpret or answer questions for the reader.
I think these can be useful, but I also want to ask why they are so quickly welcomed in the EA community when it is otherwise so careful about communication. Will and Toby have done wonderful work, but why is there no question about the best way for them to promote their ideas and research? Why do we assume trade paperbacks are the best way to do it, and, to be perfectly blunt, why are we so confident that the books they have written will be of the most use they could be to the movement?
Will and Toby are certainly more qualified than me to answer these questions, but at the same time, I ask, do we offer them too much of our confidence?
I personally don’t care—I think they should publish what they want and if they can get a large publishing house to put advertising money behind them, then they should absolutely do it. Yet what do these books mean to the larger collective effort of communicating EA ideas?
On the subject of being an EA-aligned public intellectual, I say if you can get a publisher to print your book, then do it and let us see where it goes. I have a feeling, sorry, an intuition, that there aren't many people other than Will or Toby whom you, or the EA Forum, would want to write a popular book on EA.
And I think that's great, because I don't think EA is the most important thing to talk about here. I am much more interested in our cause areas than I am in EA itself.
Talk about things first and ideas later
If Will and Toby were asking my advice on what to talk about (and don’t worry, they aren’t) then I would tell them to focus less on the ideas of EA, longtermism, global health and welfare, or any other cause area. I would tell them to focus on things before ideas.
Let me be more concrete.
I recently saw a tweet that essentially asked, "What is a good way to talk about AGI x-risk on social media without the risk of confusion or misinterpretation?"
My first thought was, “Talk about murderbots!”
Of course, when I say murderbots I mean "Autonomous Weapons Systems", but that wouldn't be very social media friendly.
The tweeter wasn’t really sold. However, I made my case.
If we cannot talk about EA, existential risk, or, more specifically, AGI existential risk in a tweet or a TikTok, then my suggestion is this:
Find a concrete issue (like any other non-profit would) and market the hell out of it. Even if you can't solve AGI risk directly (what would a solution even be?), we can help change the discussion around autonomous computers and the physical dangers autonomous machines could pose, and, if someone gets interested in the topic, introduce the thorny problem of alignment.
If you are comfortable in your cause prioritization and feel that there is a concrete problem that you can promote, then do it!
By promoting particular causes we are promoting concrete issues, the concrete mechanisms of the dangers that we are trying to deal with in EA and existential risk.
Why do we need to do that? Because EA is abstract. It is a second- or even third-order meta-analysis of what is going on in the world, how one should behave, and what the appropriate steps to take are. It is complicated, and I am not even sure I can be more specific than I have been here, because I don't think there is actually a consensus on what EA is. Disagree? Please tell me why!
Do you want to see the confusion I am talking about? Ask a person if they are an effective altruist, and what that means to them.
If you are an EAer, do not feel you are doing harm by promoting specific problems with AI. We know automated weapons are dangerous, that automated legal systems have systemic biases, and that a misaligned AI might turn us all into paper clips. OK, misalignment is difficult to explain, but remember when everyone was afraid of nanobots turning us into grey goo? Lean into that!
This does not mean that we abandon the meta-EA discourse. Perhaps Will MacAskill should be focused entirely on the philosophical questions (again, not a recommendation; he should do exactly what he wants to) and not worry about the cause areas. Nor should we neglect EA just because we focus on the applied issues.
However, when most people are trying to figure out how to talk about EA in public, I suggest taking the simple way out: talk about concrete problems that we can solve, the AI risks that we can identify and fix today, and leave the second- and third-order abstractions in the philosophy of EA to the people who want to talk about them.
But what about their epistemics?
But what if they don’t act for the right reasons? What if their epistemics are wrong?
Well, tell me when your epistemics are perfect, and then we can go from there.
When people go to a restaurant, they usually don't want to have to prepare the meal themselves. The same can be the case in doing good.
You do not have to have perfect epistemics to do good in the world, in the same way that I can have a nice meal without being a chef.
To Conclude:
We should be careful when sharing ideas, but at the same time it is possible to be too careful.
I propose, as a compromise, that if you feel uncomfortable talking about complicated issues on social media and other noisy media, then you should focus on specific things we can address rather than on larger, more complicated ideas.
If this means that we will have to focus on cause areas first and then epistemics later, that does not bother me.
I think this argument is supported by how people generally come to EA. I have met few people who came to EA through their epistemics. Lots of new members have come through a cause area. Almost all new members are most interested in joining EA because they want to do the most good. The larger, thornier philosophical ideas tend to come into the discussion later, after people have become more familiar with the cause areas EA thinks are important and the concrete steps we can take to ameliorate those problems.
If we are interested in introducing new people to EA, let's try to introduce them to some concrete problems and our ideas for solving them first. It allows for a clear message that is easily distributed without a large risk of misinterpretation.
How many people do you know who came to EA through cause areas or a particular EA angle? Typically EA is not swallowed in one bite; it is taken in little pieces, often accidentally. Accept things for the way they are, and don't fade away if the world isn't as you think it should be.
Let us be careful, but let us not be shy. Do not be afraid to talk to people. Share your message, share what you care for.
Think about your audience, what they care about, and what they respond to.
Not everyone will want to be an Effective Altruist, but lots of people will care about the same causes as you. I don't care if a vegan thinks Effective Altruism is elitist but still wants to think of better ways to oppose factory farming; I am with them.
I don’t care if an engineer thinks we are a cult, but wants to talk about stopping murderbots.
Find your community outside of EA that wants to work on the same cause area. Work together, communicate, promote your ideas, and solve a crisis, even if your epistemics suck.