Some thoughts on David Roodman’s GWP model and its relation to AI timelines 2021-07-19T21:47:39.558Z
Report on Whether AI Could Drive Explosive Economic Growth 2021-06-25T23:02:24.356Z
Report on Semi-informative Priors for AI timelines (Open Philanthropy) 2021-03-26T17:46:03.248Z
EA's Image Problem 2015-10-11T13:44:28.542Z


Comment by Tom_Davidson on Report on Whether AI Could Drive Explosive Economic Growth · 2021-07-06T17:43:36.212Z · EA · GW

Hey - interesting question! 

This isn't something I looked into in depth, but I think that if AI drives explosive economic growth then you'd probably see large rises in both absolute energy use and in energy efficiency.

Energy use might grow via (e.g.) massively expanding solar power to the world's deserts (see this blog from Carl Shulman). Energy efficiency might grow via replacing human workers with AIs (allowing services to be delivered with less energy input), rapid tech progress further increasing the energy efficiency of existing goods and services, the creation of new valuable products that use very little energy (e.g. amazing virtual realities), or in other ways. 

Comment by Tom_Davidson on Report on Semi-informative Priors for AI timelines (Open Philanthropy) · 2021-03-31T18:52:33.949Z · EA · GW

Thanks for these thoughts! You raise many interesting points.

On footnote 16, you write: "For example, the application of Laplace’s law described below implies that there was a 50% chance of AGI being developed in the first year of effort". But historically, participants in the Dartmouth conference were gloriously optimistic.

I'm not sure whether the participants at Dartmouth would have assigned 50% to creating AGI within a year and >90% within a decade, as implied by the Laplace prior. But either way I do think these probabilities would have been too high. It's very rare, perhaps unprecedented, for such transformative tech progress to be made with so little effort. Even listing some of the best examples of quick and dramatic tech progress, I found the average time for a milestone to be achieved was >50 years, and the list omits the many failed projects.

That said, I agree that the optimism before Dartmouth is some reason to use a high first-trial probability (though I don't think as high as 50%).


The point that Laplace's prior depends on the unit of time chosen is really interesting, but it ends up not mattering once a bit of time has passed.

Agreed! (Interestingly, it only stops mattering once enough time has passed that Laplace strongly expects AGI to have already happened.) Still, Laplace's predictions about the initial years of effort do depend on the trial definition: defining a 'trial' as 1 day, 1 year, or 30 years gives very different results. I think this shows something is wrong with the rule more generally. The root of the problem is that Laplace assigns 50% probability to the first trial succeeding no matter how we define a trial. I think my alternative rule, where you choose the trial definition and the first-trial probability in tandem, addresses this issue.
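To make the dependence concrete, here is a minimal sketch (my addition): under Laplace's rule with no prior successes, the chance of at least one success in the first n trials is n/(n+1), so the probability assigned to "AGI within the first year of effort" swings wildly with the trial definition.

```python
def p_success_within(trials: int) -> float:
    """P(at least one success in the first `trials` trials) under
    Laplace's rule of succession with no prior successes: n / (n + 1)."""
    return trials / (trials + 1)

# Probability of AGI within the first calendar year of effort:
print(p_success_within(1))    # trial = 1 year: 0.5
print(p_success_within(365))  # trial = 1 day: ~0.997
# With a 30-year trial, the whole first 30 years only get 0.5 in total.
```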


 If you rule out AGI until 2028 (as you do in your report), the Laplace prior gives you 1 - (1 - 1/((2028-1956)+1))^(2036-2028) ≈ 10.4% ≈ 10%, which is well within your range of 1% to 18%, and really near to your estimate of 8%

My estimate of 8% only rules out AGI by the end of 2020. If I rule out AGI by the end of 2028, it becomes ~4%. This is quite a lot smaller than the 10% from Laplace.

The top of my range would be 9%, which is close to Laplace. However, this high-end is driven by forecasting that the inputs to AI R&D will grow faster than their historical average, so more trials occur per year. I don't think such high values would be reasonable without taking these forecasts into account.
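For concreteness, the quoted ~10.4% can be reproduced in a couple of lines (this mirrors the commenter's formula, which holds the per-year probability fixed at its 2028 value rather than updating it each subsequent year):

```python
def laplace_p_agi(start: int, ruled_out_through: int, target: int) -> float:
    """P(AGI by `target`), treating each year since `start` as a trial,
    conditional on no AGI through `ruled_out_through`."""
    p_year = 1 / ((ruled_out_through - start) + 1)  # 1/73 for 1956..2028
    return 1 - (1 - p_year) ** (target - ruled_out_through)

print(round(laplace_p_agi(1956, 2028, 2036), 3))  # 0.104
```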


When you write "I also find that pr(AGI by 2036) from Laplace’s law is too high," what outside-view consideration are you basing that on? Also, is it really too high?

I find it too high mostly because it follows from aggressive assumptions about the chance of success in the first few years of effort, but also because of the reference classes discussed in the report.

Another way to justify ruling out Laplace is that if you had a hyper-prior, putting some weight on Laplace and some on more conservative rules, you would put extremely little weight on Laplace by now. (Although I personally wouldn't put much weight on Laplace even in an initial hyper-prior.)

There's a counter-intuitive example that illustrates this hyper-prior behaviour nicely. Suppose you assigned 20% to "AGI impossible" and 80% to another prior. If the other prior is Laplace, then your weight on "AGI impossible" rises to 92% by 2020, and you only assign 8% to Laplace; your pr(AGI by 2036) is 1.6%. By contrast, if you reduce the first-trial probability in Laplace down to 1/100, then your weight on "AGI impossible" only rises to 29% by 2020 and your pr(AGI by 2036) is 6.3%. So having a lower first-trial probability ends up increasing pr(AGI by 2036).
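A rough numerical sketch of this hyper-prior update (my construction: a rule with first-trial probability f is modelled as a Beta(1, 1/f − 1) prior on the per-year success chance; exact figures depend on how the years of effort are counted, so they land near, not exactly on, those quoted):

```python
def hyperprior(w_impossible: float, f: float, years_failed: int,
               years_ahead: int) -> tuple[float, float]:
    """Mix an 'AGI impossible' hypothesis with a rule of succession whose
    first-trial probability is f. Returns (posterior weight on 'impossible',
    P(AGI within the next `years_ahead` years))."""
    b = 1 / f - 1                          # Beta(1, b) prior; b = 1 is Laplace
    survive = b / (b + years_failed)       # P(no success so far | rule)
    w_imp = w_impossible / (w_impossible + (1 - w_impossible) * survive)
    # Chance of success in the next `years_ahead` years, given survival so far:
    p_next = 1 - (b + years_failed) / (b + years_failed + years_ahead)
    return w_imp, (1 - w_imp) * p_next

print(hyperprior(0.2, 1/2, 64, 16))    # Laplace: 'impossible' ≈ 0.94, pr ≈ 0.011
print(hyperprior(0.2, 1/100, 64, 16))  # 'impossible' ≈ 0.29, pr ≈ 0.063
```

The low-first-trial-probability rule survives the 64 years of failure far better, which is why it ends up assigning AGI a higher overall probability.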


It is not clear to me that by adjusting the Laplace prior down when you categorize AGI as a "highly ambitious but feasible technology" you are not updating twice

This is an interesting idea, thanks. I think the description "highly ambitious" would have been appropriate in 1956: AGI would allow automation of ~all labour. In addition, it did seem hard to me to find reference classes supporting first-trial probability values above 1/50, and some reference classes I looked into suggest lower values.

That said, it's possible that my favoured range for the first-trial probability [1/100, 1/1000] was influenced by my knowledge that we failed to develop AGI. If so, this would have made the range too conservative.

Comment by Tom_Davidson on Report on Semi-informative Priors for AI timelines (Open Philanthropy) · 2021-03-29T17:40:16.056Z · EA · GW

Agreed - the framework can be applied to things other than AGI.

Comment by Tom_Davidson on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-15T19:20:25.842Z · EA · GW

Thanks for this Halstead - thoughtful article.

I have one push-back, and one question about your preferred process for applying the ITN framework.

1. After explaining the 80K formalisation of ITN you say

Thus, once we have information on importance, tractability and neglectedness (thus defined), then we can produce an estimate of marginal cost-effectiveness.
The problem with this is: if we can do this, then why would we calculate these three terms separately in the first place?

I think the answer is that in some contexts it's easier to calculate each term separately and then combine them in a later step, than to calculate the cost-effectiveness directly. It's also easier to sanity check that each term looks sensible separately, as our intuitions are often more reliable for the separate terms than for the marginal cost effectiveness.

Take technical AI safety research as an example. I'd have trouble directly estimating "How much good would we do by spending $1000 in this area", or sanity checking the result. I'd also have trouble with "What % of this problem would we solve by spending another $100?" (your preferred definition of tractability). I'd feel at least somewhat more confident making and eye-balling estimates for

  • "How good would it be to solve technical AI safety?"
  • "How much of the problem would we solve by doubling the amount of money/researchers in this area (or increasing it by 10%)?"
  • "How much is being spent in the area?"

I do think the tractability estimate is the hardest to construct and assess in this case, but I think it's better than the alternatives. And if we assume diminishing marginal returns we can make the tractability estimate easier by replacing it with "How many resources would be needed to completely solve this problem?"

So I think the 80K formalisation is useful in at least some contexts, e.g. AI safety.
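As a toy illustration of why the three-way split is convenient, here is the 80K-style multiplication with made-up numbers (all values hypothetical, purely to show how the separate estimates combine):

```python
# Marginal cost-effectiveness =
#   (good per % of problem solved)            <- importance
# × (% solved per % increase in resources)    <- tractability
# × (% increase in resources per extra $)     <- neglectedness
good_per_pct_solved = 1_000              # importance (hypothetical units)
pct_solved_per_pct_resources = 0.5       # tractability (hypothetical)
current_spending = 10_000_000            # $10M already in the area (hypothetical)

# An extra dollar is a (100 / current_spending)% increase in resources.
pct_increase_per_dollar = 100 / current_spending

good_per_dollar = (good_per_pct_solved
                   * pct_solved_per_pct_resources
                   * pct_increase_per_dollar)
print(good_per_dollar)  # 0.005 units of good per marginal dollar
```

Each factor can be estimated and sanity-checked on its own before the multiplication, which is the point made above.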

2. In the alternative ITN framework of the Founders Pledge, neglectedness is just one input to tractability. But then you score each cause on i) the ratio importance/neglectedness, and ii) all the factors bearing on tractability except neglectedness. To me, it feels like (ii) would be quite hard to score, as you have to pretend you don't know things that you do know (neglectedness).

Wouldn't it be easier to simply score each cause on importance and tractability, using neglectedness as one input to the tractability score? This has the added benefit of not assuming diminishing marginal returns, as you can weight neglectedness less strongly when you don't think there are DMR.

Comment by Tom_Davidson on Ten new 80,000 Hours articles made for the effective altruist community · 2017-09-07T19:38:46.082Z · EA · GW

Great podcasts!

Comment by Tom_Davidson on Am I an Effective Altruist for moral reasons? · 2016-02-11T12:10:27.646Z · EA · GW

I found Nakul's article very interesting too, but am surprised at what it led you to conclude.

I didn't think the article was challenging the claim that doing paradigmatic EA activities was moral. I thought Nakul was suggesting that doing them wasn't obligatory, and that the consequentialist reasons for doing them could be overridden by an individual's projects, duties and passions. He was pushing against the idea that EA can demand that everyone support them.

It seems like your personal projects would lead you to do EA activities. So I'm surprised you judge EA activities to be less moral than the alternatives. Which activities, and why?

I would have expected you to conclude something like "Doing EA activities isn't morally required of everyone; for some people it isn't the right thing to do; but for me it absolutely is the right thing to do".

Comment by Tom_Davidson on Against segregating EAs · 2016-01-23T20:55:06.772Z · EA · GW

Yeah good point.

If people choose a job which they enjoy less then that's a huge sacrifice, and should be applauded.

Comment by Tom_Davidson on Against segregating EAs · 2016-01-21T20:53:10.316Z · EA · GW

But EA is about doing the most good that you can.

So anyone who is doing the most good that they could possibly do is being an amazing EA. Someone on £1million who donates £50K is not doing anywhere near as much good as they could do.

The rich especially should be encouraged to make big sacrifices, as they do have the power to do the most good.

Comment by Tom_Davidson on The big problem with how we do outreach · 2015-12-27T02:12:37.751Z · EA · GW

I agree completely that talking with people about values is the right way to go. Also, I don't think we need to try and convince them to be utilitarians or nearly-utilitarian. Stressing that all people are equal and pointing to the terrible injustice of the current situation is already powerful, and those ideas aren't distinctively utilitarian.

Comment by Tom_Davidson on Population ethics: In favour of total utilitarianism over average · 2015-12-26T04:41:42.759Z · EA · GW

There is no a priori reason to think that the efficacy of charitable giving should have any relation whatsoever to utilitarianism. Yet it occupies a huge part of the movement.

I think the argument is that, a priori, utilitarians think we should give effectively. Further, given the facts as they stand (namely that effective donations can do an astronomical amount of good), there are incredibly strong moral reasons for utilitarians to promote effective giving and thus to participate in the EA movement.

I think that [the obsession with utilitarianism] is regretful... because it stifles the kind of diversity which is necessary to create a genuinely ecumenical movement.

I do find discussions like this a little embarrassing but then again they are interesting to the members of the EA community and this is an inward-facing page. Nonetheless I do share your fears about it putting outsiders off.

Comment by Tom_Davidson on Are GiveWell Top Charities Too Speculative? · 2015-12-26T04:15:26.947Z · EA · GW

Those seem really high flow through effects to me! £2000 saves one life, but you could easily see it doing as much good as saving 600!

How are you arriving at the figure? The argument that "if you value all times equally, the flow through effects are 99.99...% of the impact" would actually seem to show that they dominated the immediate effects much more than this. (I'm hoping there's a reason why this observation is very misleading.) So what informal argument are you using?

Comment by Tom_Davidson on Are GiveWell Top Charities Too Speculative? · 2015-12-22T19:43:55.192Z · EA · GW

This is a nice idea but I worry it won't work.

Even with healthy moral uncertainty, I think we should attach very little weight to moral theories that give future people's utility negligible moral weight. The kinds of reasons that suggest we can attach future people less weight don't go any way towards suggesting that we can ignore them. To do that, they'd have to show that future people's moral weight was (more than!) inversely proportional to their temporal distance from us. But the reasons they give tend to show that we have special obligations to people in our generation, and say nothing about our obligations to people living in the year 3000 AD vs people living in the year 30,000 AD. [Maybe I'm missing an argument here?!] Thus any plausible moral theory will be such that the calculation is dominated by very long-term effects, and long-term effects will dominate our decision-making process.

Comment by Tom_Davidson on Impossible EA emotions · 2015-12-22T18:24:46.275Z · EA · GW

Great post!

Out of interest, can you give an example of an "instrumentally rational technique that requires irrationality"?

Comment by Tom_Davidson on Are GiveWell Top Charities Too Speculative? · 2015-12-21T17:21:35.470Z · EA · GW

Why? What are the very long term effects of a murder?

Comment by Tom_Davidson on Are GiveWell Top Charities Too Speculative? · 2015-12-21T17:15:22.027Z · EA · GW

Would you similarly doubt that, on expectation, someone murdering someone else had bad consequences overall? Someone slapping you very hard in the face?

This kind of reasoning seems to bring about a universal scepticism about whether we're doing Good. Even if you think you can pin down the long term effects, you have no idea about the very long term effects (and everything else is negligible compared to very long term effects).

Comment by Tom_Davidson on We care about WALYs not QALYs · 2015-11-20T13:27:46.143Z · EA · GW

In defence of WALYs, and in reply to your specific points:

  1. I don't share your intuition here. Well-being is what we're talking about when we say "I'm not sure he's doing so well at the moment", or when we say "I want to help people as much as possible". It's a general term for how well someone is doing, overall. It's an advantage, in my eyes, that it's not committed to any specific account of well-being, for any such account might have its drawbacks.

  2. I worry that, in adopting HALYs, EA would tie its aims to a narrow view of what human well-being and flourishing consists of. This is unnecessary, for EA is just about helping people as much as possible. Even if we were convinced that the only component of well-being was happiness, it would still be an additional claim to the core of EA.

Comment by Tom_Davidson on At what cost, carnivory? · 2015-11-13T16:52:03.688Z · EA · GW

A small quibble

One conclusion EAs might make is that their personal diets are no big deal, easily swamped as it is by the consequences of donations.

I think it's flat out wrong to conclude our diets "are no big deal". Being vegetarian for a lifetime prevents over 1000 years of animal suffering. That's a huge, huge impact.

My more serious worry is that people will draw this conclusion and eat less ethically as a result, without donating more (they already knew donating was great). But this is just psychological speculation backed up by some anecdotal evidence.

Comment by Tom_Davidson on Don't sweat diet? · 2015-11-10T02:53:25.502Z · EA · GW

Most people who go vegetarian find it's very little effort to be 90% vegetarian after a year or so. To me this warns against the view that people will give extra because "they haven't made the sacrifice of becoming veggie". Very soon the sacrifice becomes a habit, and the claim that charitable donations are affected becomes even less plausible.

I'd be interested to know if anyone has given more money because of this thread. I know that I'm more willing to eat dairy products, and I have read others saying it made them happier eating meat.

Comment by Tom_Davidson on Don't sweat diet? · 2015-11-09T11:31:57.279Z · EA · GW

That only seems to show that emissions do harm, not that the harm is so finely individuated. FWIW, there are reasons to doubt the butterfly effect works in the same way given quantum mechanics.

Comment by Tom_Davidson on Don't sweat diet? · 2015-11-01T02:53:45.276Z · EA · GW

When you emit carbon dioxide those emissions will go on to harm particular people. When you buy offsets that will avert emissions that would have harmed different people.

What's this claim based on?

Comment by Tom_Davidson on Don't sweat diet? · 2015-10-24T19:43:23.080Z · EA · GW

This is a really good article, and I do find the perspective advocated compelling. However, I would like to voice some worries.

  1. Anyone not committed to a consequentialist mindset is likely to take serious issue with someone who eats meat but donates to charities that encourage other people to give up meat. In general, advocating that someone else make a sacrifice that you aren't willing to make is seen as hypocritical and lacking in integrity. People will criticise you and perhaps, by association, effective altruism.

  2. I'm sceptical, psychologically, that this kind of reasoning will make people donate more in total. I worry the main effect will be that they eat more meat.

  3. This style of argument worries me more generally because rich people will very often be able to compensate for terrible things they wish to do by donating more money to charity. E.g. cheat on your partner and donate to a charity to encourage stable relationships. This buys into the negative image of EA as allowing rich people to justify themselves morally.

Perhaps we should instead claim that the strong reasons to be veg'n aren't affected by the existence of other great ways to do good. Avoiding thousands of years of terrible suffering by going veg'n for life is just a great thing to do: anyone who can should make that sacrifice.

Comment by Tom_Davidson on EA's Image Problem · 2015-10-19T11:55:50.230Z · EA · GW

I agree with this. Let me explain why I stand by the point that you quote me on. Tl;dr: by "negative effects" I wasn't talking about the hurt feelings of potential EAs.

My point wasn't the following: "It's unfair on relatively poor potential EAs, therefore it's bad, therefore let's change the movement" As you stress, this consideration is outweighed by the considerations of those the movement is trying to help. I accept explicitly in the article that such considerations might justify us making EA elitist.

My point was rather that people criticise us for being elitist etc. Having an elitist pledge reinforces this image and prevents people from joining - not just those in relative poverty. This reduces our ability to help those in absolute poverty. You don't seem to have acknowledged this point in your criticisms.

Comment by Tom_Davidson on EA's Image Problem · 2015-10-15T21:57:14.760Z · EA · GW

Thanks for that.

My basic worries are:

- Academics must gain something from spending ages thinking about and studying ethics, be it understanding of the arguments, knowledge of more arguments, or something else. I think this puts them in a better position than others, and should make others tentative in saying that they're wrong.

- Your explanation for disagreeing with certain academics is that they have different starting intuitions. But does this account for the fact that academics can revise or abandon intuitions because of broader considerations? Even if you're right, why do you think your intuitions are more reliable than theirs?

The views that I'm confident in are the ones that aren't based on core ethical intuitions (although they overlap with my ethical intuitions), but can be deduced from things that aren’t ethical intuitions, as well as principles such as logical consistency and impartiality... I can extend on this if anyone wants me to

I'd definitely be interested to hear more :)

Comment by Tom_Davidson on EA's Image Problem · 2015-10-15T21:27:13.528Z · EA · GW

Why, do you believe we should redistribute moral virtue?

No, but it's unfair that it's harder for the poor to attain the status. That has negative effects which I talked about in the article.

Comment by Tom_Davidson on EA's Image Problem · 2015-10-14T02:10:45.036Z · EA · GW

Thanks so much for this! Really good and persuasive points.

One important thing to say is that the Pledge should absolutely not be used to distinguish ‘good people’.

My worry is this isn't realistic, even if ideally we wouldn't distinguish people like this. For example, having taken the pledge myself and told people about it, I was congratulated (especially by other EAs). This simple and unavoidable kind of interaction rewards pledgers and shows that their moral status in the eyes of others has gone up. To me, it seems a real problem that this kind of status and reward is so much harder for the poor to attain.

Further, making the Pledge is bound to be an important part of engaging with the movement, even if we don't use it to distinguish virtuous people. To me, again, this feels like a serious issue.

so it doesn’t seem sensible for us to pick a cut-off. I also think the simplicity of the message is pretty crucial... [it's powerful that we can] say ‘we are a community of x people who all actually give 10% of their wages’

Great point! I'm interested to know how we currently accommodate the existing exception to the rule: students. Could we do the same thing for an income clause as well? To me an income exception seems better motivated because i) it's a more important access issue, and ii) the pledge is already about lifetime earnings, so it wouldn't be particularly harder for a student to make (they can just give a little later) than a non-student.

I’m not convinced that your characterisation of the ethical views of effective altruists is accurate, and I think it could be harmful to simplify in the way that you do... the description of being arrogant and dogmatic is less true of them than most other ethicists.

This is a really good point and I'll keep it in mind, especially about the uncertainty. [To be clear, neither Toby nor William MacAskill have ever done any of the things I objected to.] It's not clear to me that calling them narrow utilitarians is misleading, though (unless they're deontologists).

Comment by Tom_Davidson on EA's Image Problem · 2015-10-14T01:29:33.416Z · EA · GW

Thanks for a thoughtful response.

But what do you mean by "Refrain from posting things that assume that consequentialism is true"? That it's best to refrain from posting things that assume that values like e.g. justice aren't ends-in-themselves, or to refrain from posting things that assume that consequences and their quantity are important?

Definitely the former. I find it hard to get my head round people who deny the latter. I suspect only people committed to weird philosophical theories would do so. I thought modern Kantians were more moderate. Let's remember that most people don't have a "moral theory" but care about consequences and a cluster of other concerns: it's these people I don't want to alienate.

I think that encouraging all EAs to speak as if Kant and other philosophers with a complete disregard for consequentialism might be correct would be asking a lot.

I think philosophers who reject consequentialism (as the claim that consequences are the only morally relevant thing) might be correct, and personally find it annoying when everyone speaks as if any such philosopher is obviously mistaken. I certainly agree there's no need to talk as if consequences might be irrelevant!

I'm sympathetic with your comments about rationality. I wonder if an equally informative way of phrasing it would be "carefully investigating which actions help the most people". For people who disagree, reading EA describe itself as "rational" will be annoying because it implies that they are irrational.

I suppose that if being uncontroversial among "experts" is a good measure of reasonableness, then even today we should be more open to the possible importance of acting in accordance with theistic holy texts.

This is a really interesting point. We could see history as a reductio on the claim that the academic experts reach even roughly true moral conclusions. So maybe the academics are wrong. My worry is the idea that we can get round this problem by evaluating the arguments ourselves. We're not special. Academics just evaluate the arguments, like we would, but understand them better. The only way I can see myself being justified in rejecting their views is by showing they're biased. So maybe my point wasn't "the academics are right, so narrow consequentialism is wrong" but "most people who know much more about this than us don't think narrow consequentialism is right, so we don't know it's right".

Comment by Tom_Davidson on EA's Image Problem · 2015-10-14T01:02:41.787Z · EA · GW

I agree with you on technical language - we have to judge cases on an individual basis and be reasonable.

Less sure about the consequentialism, unless you know you're talking to a consequentialist! If you want to evaluate an action from a narrow consequentialist perspective, can't you just say so at the start?

Comment by Tom_Davidson on EA's Image Problem · 2015-10-13T14:14:15.446Z · EA · GW

I don't think the existence of another pledge does much to negate the harm done by the GWWC pledge being classist.

I agree there's value in simplicity. But we already have an exception to the rule: students only pay 1%. There's two points here. Firstly, it doesn't seem to harm our placard-credentials. We still advertise as "give 10%", but on further investigation there's a sensible exception. I think something similar could accommodate low-earners. Secondly, even if you want to keep it at one exception, students are in a much better position to give than many adults. So we should change the exception to a financial one.

Do you agree that, all things equal, the suggestions I make about how to relate to each other and other EAs are good?

Comment by Tom_Davidson on EA's Image Problem · 2015-10-12T23:24:56.059Z · EA · GW

Thanks a lot, this cleared up a lot of things.

I think we're talking past each other a little bit. I'm all for EtG and didn't mean to suggest otherwise. I think we should absolutely keep evaluating career impacts; Matt Wage made the right choice. When I said we should stop glorifying high earners I was referring to the way that they're hero-worshipped, not our recommending EtG as a career path.

Most of my suggested changes are about the way we relate to other EAs and to outsiders, though I had a couple of more concrete suggestions about the pledge and the careers advice. I do take your point that glorifying high earners might be consequentially beneficial though: there is a bit of a trade-off here.

As long as we evaluate careers based on impact, we're going to have the problem that highly capable people are able to produce a greater impact... Insofar as your post presents a solution, it seems like it trades off almost directly against encouraging people to pursue high-impact careers.

I hope my suggestions are compatible with encouraging people to pursue high-impact careers, but would reduce the image problem currently associated with it. One hope is that by distinguishing between doing good and being good, we can encourage everyone to do good by high earning (or whatever) without alienating those who can't by implying they are less virtuous, or less good people. We could also try to make the movement more inclusive to those who are less rich in other ways: e.g. campaigning for EA causes is more accessible to all.

I guess maybe making workers at highly effective nonprofits more the stars of the movement could help some?

This seems like a good idea.

Comment by Tom_Davidson on EA's Image Problem · 2015-10-12T12:53:21.853Z · EA · GW

Thanks for the reply! I would like to pick you up on a few points though...

"On the one hand, you say you "want EA to change the attitudes of society as a whole". But you seem willing to backpedal on the goal of changing societal attitudes as soon as you encounter any resistance... If EA is watered down to the point where everyone can agree with it, it won't mean anything anymore."

I think all the changes I suggested can be made without the movement losing the things that currently makes it distinctive and challenging in a good way. Which of my suggested changes do you think are in danger of watering EA down too much? Do you take issue with the other changes I've suggested?

"Yes, society as a whole believes that "it's the thought that counts" and that you should "do something you're passionate about". These are the sort of attitudes we're trying to change."

I completely agree we should try to change people's attitudes about both these things. I argued that we should say "An action that makes a difference is much better than one that doesn't, regardless of intention" rather than "An agent that makes a difference is much better than one who doesn't" because the latter turns people against the movement and the former says everything we need to say. Again, I'm interested to know which of my suggested changes you think would stop the movement challenging society in ways that it should be?

"I think that EA should play to its strengths and not try to be everything to everyone. We're passionate about doing the most good, not passionate about problems that affect ourselves and our friends. We focus on evidence and reason, which sometimes comes across as cold-hearted (arguably due to cultural conditioning)."

Again, I completely agree. The things you mention are essential parts of the movement. In my post I was trying to suggest ways in which we can minimise the negative image that is easily associated with these things.

"But the implicit premise of your post is that EA should seek to improve its image in order to increase its influence and membership, almost necessarily at the expense of other movements... I'm skeptical of your implicit premise."

You're right, although it's not implicit - I say explicitly that I want EA to change the attitudes of society as a whole. This is because I think EA is a great movement and, therefore, that if it has more appeal and influence it will be able to accomplish more. FWIW I don't think it's the last social movement we'll ever need.

"It's a vision of getting a bunch of smart, wealthy, influential critical thinkers in the same room together, trying to figure out what the world's most important & neglected problems are and how they can most effectively be solved."

I think comments like these make the movement seem inaccessible to outsiders who aren't rich or privileged. It seems like we disagree over whether that's a problem or not though.

Overall it seems like you think that paying attention to our image in the ways I suggest would harm the movement by making it less distinctive. But I don't know why you think the things I suggest would do that. I'm also interested to hear more about why you don't think getting more members and being more influential would be a good thing.

Comment by Tom_Davidson on EA Open Thread: October · 2015-10-11T08:06:56.640Z · EA · GW

Hi, I've recently written an article about what I think are some image problems that effective altruism has, and how we can combat them. I'd love to post it here so that I can get feedback and stimulate discussion, but I don't have enough Karma points to do so. Please like this post so that I can post it!

If you're worried about the material you can see an earlier draft of the article on the Effective Altruism fb group ( or the EA Hangout fb group (