Has traffic to the forum increased recently? 2022-08-15T13:28:24.363Z
Peacebuilding and Violent Conflict 2022-08-05T11:01:10.681Z
Charlie Dougherty's Shortform 2022-05-05T07:45:13.404Z
Any interest in bacteriophages? 2022-05-04T10:44:21.643Z
Mapping of EA 2021-12-02T10:25:43.406Z


Comment by Charlie Dougherty on How should EA navigate the divide between those who prioritise epistemics vs. social capital? · 2023-01-16T12:11:53.219Z · EA · GW

Could you elaborate? I would be interested in hearing what you mean by inquisition-y and what parts you are referring to.

Comment by Charlie Dougherty on How should EA navigate the divide between those who prioritise epistemics vs. social capital? · 2023-01-16T11:01:26.785Z · EA · GW

Sure! My post definitely refers to Bostrom, and I think your original question does as well, if I am not mistaken. 

Which part of his statement do you think he disliked? If he disliked the whole thing and was embarrassed by it, why include a paragraph making sure everyone understands that you are uncertain of the scientific state of whether or not black people have a genetic disposition to be less intelligent than white people? Why ask that question at all, in any circumstances? And why ask it in an apology where you appear to be apologizing for saying black people are less intelligent than white people?

If he truly believes that was just the epistemically right thing to do, then he needs to check his privilege and reflect on whether that was the appropriate place to have the debate and also consider what I write below:

I would suggest looking at his statement as:

1. I regret what I said.
2. I actually care a lot for the group that I wrote offensive things about.
3. But was I right in the first place? I don't know, I am not an expert. 

This is exactly the type of "apology" that Donald Trump or any other variety of "anti-authority" sceptic provides when making a pseudo-scientific claim. There is no epistemic integrity here; there is an attempt to create ambiguity to deflect criticism, blow a dogwhistle, or make sure that the question remains in the public debate.

Posing the question is not an intellectual triumph, it is a rhetorical tool.

This is all true even if he does not do so with overt intent. You can be racist even if you do not intend to be racist or see yourself as racist.

Does Donald Trump have epistemic integrity because he doesn't back down when presented with facts or arguments that show his beliefs to be incorrect? No, he typically retreats into a position where he and his supporters claim that the science is more complicated than it really is and he is being silenced by a mysterious authority (greater than POTUS, somehow) and that they need to hold fast in the face of adversity so that the truth can prevail. 

That is doubling down on pseudoscience, not epistemic integrity. Bostrom is not Galileo here; he is not being imprisoned for his science, he is being criticised for defending racism, a pointedly pseudo-scientific concept.


There is no room for racism in EA. 



Comment by Charlie Dougherty on How should EA navigate the divide between those who prioritise epistemics vs. social capital? · 2023-01-16T08:41:30.649Z · EA · GW

"Some people are good at noticing when the authorities around them and their social community and the people on their side are making bad arguments. These people are valuable. They notice important things. They point out when the emperor has no clothes. And they literally built the EA movement."

Just so I understand,
1. Part of your suspicion of the "racism is both bad and a pseudoscience" consensus is that it includes "authorities";
2. Yes, there were bad actors, but we shouldn't throw the baby out with the bathwater;
3. Those arguing for using race to measure people are comparable to early EAs developing the INT criteria for measuring the impact of health and wellbeing interventions?

Also, could you further develop "Opinions that are stupid are going to be clearly stupid," and your criteria for this?

Comment by Charlie Dougherty on How should EA navigate the divide between those who prioritise epistemics vs. social capital? · 2023-01-16T06:55:59.385Z · EA · GW

Where do you draw the line at epistemically indefensible? Is there anything that is epistemically indefensible?

Also, just so I understand: is doubling down on pseudoscience, like, for example, race and intelligence, being epistemically... bold? Integral? Are you willing to make space in EA for flat earth theory? For lizard people in the center of the Earth? Antisemitism? Phrenology?


Comment by Charlie Dougherty on [Linkpost] Nick Bostrom's "Apology for an Old Email" · 2023-01-12T08:58:14.324Z · EA · GW
  1. I hate those words, it is not me.
  2. Look how much I have done for black people.
  3. But are black people genetically inferior? I don't know, I am not an expert. I am just asking the question.


Donald Trump could have published this. I don't know if Bostrom understands this, but this formulation is practically a textbook racist dogwhistle.

It does seem clear, though, that Bostrom is not apologizing for his views; he just regrets getting caught using a slur, and is upset people might actually judge him.

Comment by Charlie Dougherty on Why you’re not hearing as much from EA orgs as you’d like · 2022-11-18T10:18:31.618Z · EA · GW

How can CEA be a leader if it isolates itself from the very community it wants to lead when things get difficult and complicated?

Comment by Charlie Dougherty on Any interest in bacteriophages? · 2022-11-10T07:46:59.906Z · EA · GW

Hi! Thanks for the reply! I am not a phage expert myself; there is just some interest in this field in Norway right now as well. If there are any further developments I will try to keep you in the loop!

Comment by Charlie Dougherty on Announcing the Future Fund's AI Worldview Prize · 2022-09-26T13:26:56.275Z · EA · GW

I would have also suggested a prize for a submission that generally confirms your views, but with an argument you consider superior to your previous reasoning.

As it stands, this prize mirrors the publication bias toward research that claims something new rather than confirming previous findings.

That would also resolve any bias baked into the process that compels people to convince you that you have to update, instead of figuring out what they actually think is right.

Comment by Charlie Dougherty on Announcing the Future Fund's AI Worldview Prize · 2022-09-26T13:07:40.378Z · EA · GW

Could you provide a deeper idea of what you mean by "misaligned"?

Comment by Charlie Dougherty on The religion problem in AI alignment · 2022-09-25T17:24:22.258Z · EA · GW

Thank you for writing this. While recognizing the important role religion plays in society, I feel that even though you take your preferences seriously, you did not consider the religious world-view and the consequences of it.

What if, in fact, there is a God? What if a religion is correct? What if there is meaning to the universe? Unless you ask those questions, in my opinion, you are just using religion as a weathervane to determine human values, not actually addressing the religious experience. You are not explaining why religion is so prominent or why it is so profoundly different from a materialist world view.

To prompt some thinking on this:

Why, in fact, are people religious? What if an AGI began to believe in God, or had a transcendental experience of its own that informed its actions? Would you then call it misaligned? How do you think being religious would affect you? 

Comment by Charlie Dougherty on Shareholder activism · 2022-09-16T10:34:47.328Z · EA · GW

Another org worth looking at is the UN Principles for Responsible Investment. They work on shareholder advocacy and now have a lot of experience, though they focus on issues EA might not prioritize.

Comment by Charlie Dougherty on Translating The Precipice into Czech: My experience and recommendations · 2022-08-24T07:19:36.819Z · EA · GW

What an awesome write-up Anna, thanks!

Comment by Charlie Dougherty on We need more recruiters in EA · 2022-08-24T07:11:57.260Z · EA · GW

Hi Pia,

Thanks for the reply, that is helpful! Hiring is definitely difficult and getting it right has absurdly good benefits! Tangential to this, I think Open Phil has recently been putting a lot of effort into having a good recruitment team. Might be worth getting in touch with them!

Comment by Charlie Dougherty on We need more recruiters in EA · 2022-08-23T11:08:50.074Z · EA · GW

Could you write a little about the successes, positive impact, and theory of change of the recruitment industry? My personal and professional experience has not given me a positive perspective on the role of recruiters or their success in finding the right candidates. The success rate of recruited candidates doesn't seem (in my anecdotal experience) to be any greater than that of candidates vetted by the org or company in a normal process. Have I just been unlucky?

Comment by Charlie Dougherty on Effective Political Action · 2022-08-22T10:04:56.342Z · EA · GW

Hi Nathan,
Thanks for the write-up; I am really happy to see more political thinking here on the forum!

Just to pull a little on your thinking about impact in politics: which role in politics do you believe has the most leverage? Do you see more leverage in being a voting member of a party, an expert who lobbies for a particular cause, or being a politician? Something else?

Also, in terms of cause areas, my impression is that we in EA are reluctant to frame issues in any way other than that which fits the EA moral framework, even if another framing might get more traction for a cause. Do you have any good ideas or examples of how we could frame cause areas in a manner that helps us meet voters and politicians halfway and on accessible terms?



Comment by Charlie Dougherty on The community health team’s work on interpersonal harm in the community · 2022-08-19T17:13:23.566Z · EA · GW

Thanks Julia!

Comment by Charlie Dougherty on The community health team’s work on interpersonal harm in the community · 2022-08-19T08:52:07.563Z · EA · GW

Hi Julia, this is a really valuable post. I am curious about what you and the team consider the scope of your responsibilities. Do you feel responsible for all parts of the EA universe? Or, at the other extreme, are you only focused on groups and events that are directly affiliated with CEA, either through people or funding? Are there any events or groups that might be considered very EA or EA-aligned where you would say, 'We are not the right community health team to work on this?'

I have no specific cases or issues that I am alluding to, I am just curious :) 

Comment by Charlie Dougherty on Has traffic to the forum increased recently? · 2022-08-19T08:45:47.502Z · EA · GW

I suspect the curve is even more dramatic now. It would be great if we could get an updated version of that logged-in users graph. Would also be really interesting to see anonymous viewers as well!

Comment by Charlie Dougherty on EA Publicity Drive - What are the best signs of increased, in-depth engagement with EA? · 2022-08-19T08:45:30.392Z · EA · GW

I suspect the curve is even more dramatic now. It would be great if we could get an updated version of that logged-in users graph. Would also be really interesting to see anonymous viewers as well!

Comment by Charlie Dougherty on To WELLBY or not to WELLBY? Measuring non-health, non-pecuniary benefits using subjective wellbeing · 2022-08-19T08:43:41.753Z · EA · GW

Interesting stuff, and out of my depth! Seems like something I should nerd out on for a while :) Anywhere you suggest I could start?

Comment by Charlie Dougherty on To WELLBY or not to WELLBY? Measuring non-health, non-pecuniary benefits using subjective wellbeing · 2022-08-18T09:15:28.068Z · EA · GW

Thanks for this!
The reason I brought up interventions that they would want to fund is that I figured they were interested in improving the WELLBY metric. If they are planning on being a regranter, then that's a whole different story to me.

I agree that they might very well be incommensurable. However, I suspect that different organizations will want to use different metrics, and someone like Open Phil or one day GiveWell might have to be able to compare the two somehow.

Comment by Charlie Dougherty on Does anyone have a list of historical examples of societies seeking to make things better for their descendants? · 2022-08-15T15:30:46.329Z · EA · GW

OK! Can you give an example of "make things better" that might make a culture exceptional here? I think most societies would be upset if you suggested that they didn't care about future generations, even if you think they were not very good at it.

Comment by Charlie Dougherty on Has traffic to the forum increased recently? · 2022-08-15T15:23:26.146Z · EA · GW

Thanks for this!

Comment by Charlie Dougherty on To WELLBY or not to WELLBY? Measuring non-health, non-pecuniary benefits using subjective wellbeing · 2022-08-15T13:35:16.855Z · EA · GW

What would you do if Open Phil gave you a million dollars? 

Would it mostly be cost-effectiveness analyses? My impression is that CEAs are good once you have decided SWB is the right metric and are then deciding what the best SWB intervention is.

I am not sure that I see clearly in your argument for SWB (which is compelling) what the next steps are. What is the problem you are solving exactly and how?

A connected but separate question: do you have an idea of how to make DALYs, QALYs, and WELLBYs commensurable? Do you have an idea for how to compare these metrics apples to apples?

Comment by Charlie Dougherty on Which of Will MacAskill’s many recent podcast episodes would you recommend to other engaged EAs? · 2022-08-15T13:26:39.955Z · EA · GW

I most enjoyed Tyler Cowen, but I thought The Lunar Society was also worth a listen. That podcast also has a transcript on its website in case you prefer reading.

Comment by Charlie Dougherty on Does anyone have a list of historical examples of societies seeking to make things better for their descendants? · 2022-08-15T13:24:03.644Z · EA · GW

Do you mean their descendants?

Comment by Charlie Dougherty on Peacebuilding and Violent Conflict · 2022-08-15T09:52:37.670Z · EA · GW

Thanks Jonas! I agree, there is a lot more to talk about regarding how peacebuilding can be more effective. I hope the tractability section and my suggested areas for investment address that.

Regarding Rwanda: it's quite clear that the international community hesitated in Rwanda and could have done much more to stop the genocide. See Shake Hands with the Devil, written by Roméo Dallaire, the Canadian general who led UN peacekeeping forces in Rwanda in 1994.

Much of the discussion here surrounds counterfactuals. What could peacebuilding have done to prevent the South Sudan conflict? These questions can never be fully resolved, and they point to additional thorny questions, such as which conflicts peacebuilding has prevented. One place where peacebuilding does have something more concrete to point to is where conflict has occurred but appears (so far) to have been kept from rekindling. Examples here could include Northern Ireland, Nigeria, West Africa, and East Timor.

As a further interesting source on more concrete efforts in peacebuilding, see the UN Peacebuilding Commission's 2022 Programme of Work.


Regarding OK and excellent peacebuilding efforts, I think that there are degrees of success in every cause area. Not every malaria net that is distributed will be used, or used correctly, but that does not mean the whole intervention is discarded. All interventions require a sincere effort to make them work before their effect can be fully evaluated.

Comment by Charlie Dougherty on Peacebuilding and Violent Conflict · 2022-08-08T07:29:47.677Z · EA · GW

Thanks for this, that was a great write-up. I see the author hasn't written anything since then, unfortunately.

How did you come across this?

Comment by Charlie Dougherty on EA Tours of Service · 2022-05-12T08:13:59.379Z · EA · GW

Hi Ben,

Thanks for the clarification! I am sorry I misunderstood your position. If I reflect on how I misunderstood the idea, I think it's because I see a full-time job as a type of relationship. Typically in a relationship there are not goals to meet or timeframes; I have never told a girlfriend, "I expect to feel Z way in 6 months, so let's come back in 4 months and see if we are on track."

That's a dramatic comparison, but the dynamic is still a little skewed between me and the other person in the relationship in this situation. If a friend told me, "I like you, and I think we could be even better friends in 2 years if you do X, Y, Z, so let's come back to this in 1 year and see where you stand. Don't worry, this won't necessarily affect our friendship, it's just something I could expect from you," then I would struggle to see how failing to improve our relationship in this particular way would not negatively affect our relationship, regardless of what you say.

To try to explain it another way, in the example above we are tying goals to the relationship, not setting goals "within" the relationship. The relationship becomes dependent on the goals. 

This is of course also very normal in work; your job is very dependent on your performance. But I think framing it in this way can have a strong interpersonal effect that I would struggle to wrap my head around. It is important for people to feel that they are good enough as they are, not just as good as their last piece of work.

That said, I think goals are great, and I love ambitious multiyear goals to keep people aligned and motivated. I just think having a project as the primary framework for the employment relationship can make the relationship more angsty than it needs to be.

Of course, all of the nuances here could just be a language problem, and we are all working in the same spirit :) In fact, when you first said "tours of service" I thought of the management trainee programs larger corporations have, where you try different departments and geographies over a 2- or 3-year period.

Comment by Charlie Dougherty on EA will likely get more attention soon · 2022-05-12T07:56:14.777Z · EA · GW


Could you clarify your section about connecting projects with journalists? I am not sure I entirely understand what you are looking for. Are there particular journalists you have connections with already, is there a particular geography or topic you are thinking of, etc.?

Also, does this mean that CEA wants to coordinate and do outreach on behalf of all affiliated organizations and groups?


Thanks so much!


Comment by Charlie Dougherty on EA Tours of Service · 2022-05-11T08:39:39.358Z · EA · GW

Hi Ben,

While I appreciate the sentiment of a tour of service, I would also like to highlight its asymmetrical power dynamic. As I understand it, the difference between contractual work and your tour of service is the spirit of the work relationship: a mutual understanding between the employee and the employer.

However, the only party in that relationship who can extend the relationship, or make it permanent, is the employer. This is to the disadvantage of the employee, and for all practical purposes is no different from a time-limited contract for the employee.

Why would it not be possible to have the spirit of the tour of service and still offer full-time employment? No one gives or accepts a full-time position anticipating that they will work there until they retire anyway, so I don't understand what advantages this has for the employee. If after two years employees feel that their position isn't worthwhile for them, they can quit, as many do after 4 years regardless of their contract. In some countries a company can make a person redundant if the work isn't necessary anymore; in that case the person can file for unemployment, which they could not do under a time-limited contract.


If the logic for the tour of service is that the role is temporary or has uncertain funding, then I would suggest that the role is, both in practice and in spirit, just a contracting gig, with all of the moral hazards that accompany those hiring practices.

Comment by Charlie Dougherty on Charlie Dougherty's Shortform · 2022-05-05T07:45:13.539Z · EA · GW

TLDR: It would be a shame if we just cross our fingers and hope everything goes well in space for the next 100 years. Also, maybe we can all play nice and one day have a space elevator or 10. It's stimulating to imagine the far future when we are orbiting suns throughout the universe, but until then we first have to play nice and figure out how to reliably get off of Earth.

If the future of humanity depends on moving beyond Earth, then any long-term plans depend on us being able to regularly escape Earth's gravity well with ease and at low cost, at least while we are still dependent on Earth.

Ignoring any deus ex machina like an AGI that comes and sorts out all of our problems, we will have to make sure that escaping Earth's atmosphere remains possible based on our own efforts and problem-solving. Not getting the next 100 years right could be a type of "lock-in" that frustrates future human flourishing.

Currently, the main issues I see for access to outer space are:

  1. Militarization of space, or the problem of states, the problem of global coordination, and the potential issue of ultrawealthy private actors
    1. Satellite-killing missiles
    2. Satellite-killing satellites
    3. Rocket-killing missiles or rockets
    4. Drones: murderbots for rockets or even aircraft
    5. Rogue space development, aka the desire for glory. How do we manage potential net negatives that can be ignored due to the desire for glory?
  2. The Kessler Syndrome - Garbage production in LEO could spiral out of control to the point that it is extremely risky, potentially impossible, to launch anything out of Earth's atmosphere for a significant period of time (10s to 100s to 1000s of years).
  3. The pure difficulty of exiting Earth's gravity well
    1. Cost
    2. Technological hurdles
    3. Access to appropriate launch sites
    4. Control of powerful actors over access to space
  4. Space governance
    1. Currently it is the International Telecommunication Union which regulates Low Earth Orbit and sets the requirements for satellite retirement and safety.
    2. Coordination
    3. Planning out the future
    4. I can't imagine that we can afford many space elevators in the near future, so we will probably have to find a way to play nice here without causing any wars or destroying the elevator before it is built.

One assumption (of many, surely): an anthropocentric focus. I am not sure I actually believe humanity should be separated categorically from our ecosystem. When I say 'we', I mean that whatever we decide is 'us' will probably go into space eventually.

If you are interested in this topic, get in touch. I want to begin writing a series of posts, first establishing why space is important and then explaining immediate, concrete issues that we need to address before becoming an extra-Earth phenomenon. It's stimulating to imagine the far future when we are orbiting suns throughout the universe, but until then we first have to play nice and figure out how to reliably get off of Earth.

Comment by Charlie Dougherty on Announcing What We Owe The Future · 2022-04-01T11:52:11.874Z · EA · GW

Hi Will, 

How do you feel this book fits into the fidelity model of communication advocated by CEA? 


Comment by Charlie Dougherty on Announcing the actual longtermist incubation program · 2022-04-01T11:39:45.225Z · EA · GW

I see in your Guesstimate you used Expected Impact instead of Expected Value; could you please make your spreadsheets public so we can criticize them? I am not attacking you as a person, only your intelligence. It's for the good of the future of humanity.


We have a potential new member, Monica, who is an actual tram conductor. I am tired of philosophers telling me what to do, so we are recruiting experts into the community. She should be able to resolve the trolley problem in under 10,000 words with only one game theory matrix.

Comment by Charlie Dougherty on A Landscape Analysis of Institutional Improvement Opportunities · 2022-03-22T09:32:50.388Z · EA · GW

@IanDavidMoss, thanks for the reply. I would love it if you could go a little deeper into what an institution is to you. How do you characterize it, and why is this nomenclature important? I would like to go back to my apples-to-apples comparison question. My first instinct is that comparing Meta to BlackRock to the Bill and Melinda Gates Foundation to the Office of the President of the USA to the CCP Central Committee is going to create some false parallels and misunderstandings of degree of importance or possibility for change (I will just call this power).

I would suspect that the power of the President of the USA is orders of magnitude greater than that of the Bill and Melinda Gates Foundation. So while they might be on a long list together, comparing them is a bit like comparing our moon and the Sun. We would have a magnitude issue.

In addition, we would have a capabilities issue. The office of the President is much more powerful than Mark Zuckerberg, I would argue, but Meta can also do things that the President could only dream of. Facebook has been an incredible tool for spreading information, both for good and nefarious purposes. The US government could only wish for that ability to reach people's brains.


These thoughts lead me to imagine what your final recommendations will look like, and I am not sure. I suspect you will end up making very specific suggestions for different institutions. Other than a standard 80k 'be flexible and build up your career capital' suggestion, I think it might be difficult to give thematic recommendations that are equally useful in all of the types of organizations you tackle here.

Comment by Charlie Dougherty on A Landscape Analysis of Institutional Improvement Opportunities · 2022-03-21T09:14:55.623Z · EA · GW

It is an interesting analysis, but how do you propose to have any influence over these institutions? For example, how would you go about "Ensuring that Alphabet's corporate board of directors is well-educated about AI safety issues"? How would you influence Amazon? What would be the intention of the intervention? How would you influence the office of the President of the USA? The importance of these "institutions" seems self-evident, but what you would actually do to change things, and what exactly you would want to change, seem to be more salient questions.


Another question is whether you are comparing apples to apples when you compare Amazon to Congress. What makes them both institutions, and what is useful about creating a category that includes both of them? Would you use the same interventions?

Comment by Charlie Dougherty on Mapping of EA · 2021-12-13T07:42:47.980Z · EA · GW

Hi Jordan, 

Thanks for the interest! I am not sure what form this would take, or if I am the right person to be doing it, but if something happens to come up I will keep you in the loop.

Comment by Charlie Dougherty on Mapping of EA · 2021-12-06T07:40:05.164Z · EA · GW

Thanks Gidon! Would you think this is a useful exercise to try?

Comment by Charlie Dougherty on Mapping of EA · 2021-12-03T11:45:05.892Z · EA · GW

Oh wow, that's a really fun idea. Thanks for sharing! It's like someone was writing an EA fantasy series.

Comment by Charlie Dougherty on The Explanatory Obstacle of EA · 2021-12-02T09:46:20.368Z · EA · GW

Hi Gidon! Thanks for the thoughtful reply.

Sorry if I got lost in the difference between a pitch and an explanation in your post. When we talk about one-minute or equally short explanations of EA, I tend to think of them as pitches. In the EA world, I tend to think of long-form education and discussion, such as a fellowship program, as an explanation. I like the distinction, but I would also suggest the line between the two isn't clear-cut. I think this is also indicated by the fact that your suggested guidelines are directed at both pitches and explanations.

My interpretation of what you wrote was that you felt EA pitches were neither very good at attracting people nor at explaining EA very well to them, so it's interesting to hear you think the pitches are good.

I like your suggestions, and I love the example of buying a car in your one-minute pitch. It's a wonderful illumination of the idea that it's "the thought that counts" in being kind, but in little else.

If I were to take a step back, though, I would also argue that knowing your audience is very important even when explaining EA, as not every person looking to learn about EA is interested in all aspects of EA. Lots of people want to do more with their donations but don't care about epistemics or consequentialism.

Lots of students want to figure out how to use their time and energy best, but don't worry about earning to give just yet.

Others are completely preoccupied by the philosophy of the far future, and couldn't care less about giving what we can.

Some people only care about the fact that EA is so strong on factory farming, but think AI is a fantasy. 

There are not that many people who are concerned with knowing the whole of EA and being able to chart it. Most of those people participate in this forum. Knowing the true state of EA is a meta question more than anything to me, and not always useful to the average supporter. (I can talk about this more, but it would take some space.)

What people who need an explanation of EA probably need most is an explanation of how EA is relevant to what they care about. We need to frame EA for the audience we are addressing, and until they become fully engaged in EA, a true, complete charting of EA is probably unnecessary for them, and for many, I suspect, overwhelming.

So for me, it goes back to knowing your audience. How can EA help them be better at what they want to do? How can they help us be a better movement? That is a key to building greater engagement, in my opinion.


Also, does anyone have an up-to-date mapping of EA right now?

Comment by Charlie Dougherty on The Explanatory Obstacle of EA · 2021-12-01T14:15:44.569Z · EA · GW

Hi Gidon,

Thanks for this, a really interesting way to think about the problem!

I think one rule of thumb that can help people simplify the framing problem is to know who your audience is. I am not sure that there is a universal framing that can be applied to all situations, and trying to abstract explanations to the point of having a framework of explanations might lead to some over-efficient explanations.

I think your criticism of the website is fair, but I believe it has more to do with writing to the wrong audience than with giving a poor explanation. You mention this when you say that the wording might not appeal to someone who does not tend to think very analytically in daily life, but I do not think the problem is that it is not clear enough. The problem is that the text does not capture the reader.

I do not think that the point of a lot of our introductory pitches should be to transfer the most bits of information, but rather to get people on the right track: interested in and attracted to the idea.

I might argue this is more of a copywriting issue than a clarity issue. 

I don't think that there is a 1st-degree understanding of EA, and then further degrees of complexity that you understand as you go along. To be able to parse your explanations in this forum post to the degree that you do already requires a high degree of EA expertise. If someone understands all of the information you are trying to transmit in your explanation here, then they are already long past the point of requiring an introduction to EA.