August Open Thread: EA Global!

post by RyanCarey · 2015-08-01T15:42:07.625Z · score: 3 (3 votes) · EA · GW · Legacy · 28 comments

Here's a place to discuss projects, ideas, events and miscellanea relevant to the world of effective altruism that don't need a whole post of their own!

The most interesting current news for effective altruists is that EA Global in San Francisco has just started! 

Note that MIRI is also a few weeks into its fundraiser.

Hope you have lots to discuss amidst these fresh EA talks!

28 comments

Comments sorted by top scores.

comment by Tom_Ash · 2015-08-01T21:34:09.811Z · score: 8 (8 votes) · EA(p) · GW(p)

People might be interested in this discussion of flow-through effects on the EA Facebook group. It has already prompted a lengthy comment thread there, but it's worth flagging this comment from Holden of GiveWell (whose views are similar to my own):

I disagree with Eliezer's comment, to what seems like a greater degree than other commenters. I'm not hoping to get into a long debate and not attempting to give a full defense, but here's an outline of what I think. I focus on why I reject Eliezer's first sentence; I've deliberately stayed away from the question of what Holden should be doing, what current factory-farming-focused people should be doing, etc. and instead focused on whether there are imaginable people who are rational to invoke flow-through effects as a reason for working on general short/medium-term empowerment even when they value future generations.

*I think if you tried to list all the people who, with at least 20 years of hindsight, seem to have done the most good for the people of the very far future (or more so, people of the year 2100), you would end up feeling that statements like "In general, when somebody who cares about Y designed their project X to mainly impact Y, it's very unlikely that X is also the best way to accomplish some unrelated goal Z." are not right.

*I think it is great that there are people trying to make their best guesses about how the next 100 years and beyond are likely to play out, and come up with the best possible interventions for future people based on those guesses. I count myself among such people. But I think we also need to bear in mind that such endeavors are historically not very successful, that making such predictions in a helpful way may just not be something we're able to do, and that there does seem to be a systematic tendency for actions that increase human empowerment to have better results than anticipated at the time. Thus, I believe there is a very real case for "Solve problems and do good things that you have an opportunity to do well; don't worry too much about where it's all going; and certainly don't feel that just because you have, say, a belief that you have a zero discount rate or a belief that pigs have nontrivial moral value, this is sufficient to say you're blowing it if you're not working directly on AI risk or factory farming related issues." I believe that all of the work we are trying to do stands on the shoulders of a very large number of people who took something much closer to the latter attitude than to Eliezer's. It's certainly true that your impact on x-risk is very diffused if it comes through general empowerment, but I think there are plenty of people who shouldn't rationally believe they can get a larger impact by aiming directly.

So I think I'm defending some version of the "bailey" here. I'm sure there are all sorts of ridiculous ways to take this line of reasoning too far, and I can see how - taken to the limit - it just comes down to "don't try to do the most good, just do what you feel," and I'm not defending that. I'm certainly not saying that antipoverty interventions have anything other than miniscule impacts on x-risk (though many people aren't in a position to believe that more-than-miniscule is an option); I'm not endorsing anything like the symphony comments, and I don't believe I'm on a slippery slope to doing so. But I think there are cases where tractability trumps importance ... even when our best back-of-the-envelope calculations don't seem to say so. I still think that if all far-future-focused donation options look terrible to person X, person X is being reasonable to support strong antipoverty orgs instead even assuming that person X cares about future people too. There are other contexts in which I could imagine invoking this argument as well, w/r/t e.g. career choice. I wouldn't invoke it in the way Eliezer quotes it.

Stepping back to the wider significance of this debate. Certain strains/people in EA give the impression that the ~whole world's attempts at doing good have expected value that amounts to a rounding error, when put alongside the good being done by a very small number of people (mostly in the community) working on particular highly specific paths to impact. This bothers me a lot; that's partly because of how I think it makes EA look to others, but I'd be OK with that if I were intellectually on board with the belief. I'm not. I think Eliezer is, and that that's where his comments are coming from. That's fine - if I shared Eliezer's views and confidence in those views re: the far future, I would agree with what he says here. But I think most of the people agreeing with him here shouldn't be. (Again, I'm not defending the exact arguments he quotes - I'm disagreeing with his first sentence.)

comment by Ben_Kuhn · 2015-08-03T00:46:14.910Z · score: 7 (7 votes) · EA(p) · GW(p)

I work for a fast-growing financial technology startup that's making a really big impact on global poverty. We're looking for generalist engineers to work on a web server/mobile app stack. No specific requirements--we prefer candidates with 1+ years of work experience but are happy to make an exception for top-notch people. Cool things about this job include:

  • Solving a really important and neglected problem
  • A fully distributed team: there's no commute and you can work from wherever
  • An EA environment: everyone is here because it's the highest-impact thing they can do
  • A culture focused on transparency, self-improvement and only working on the most important things

Happy to provide more details privately! (We're trying to maintain a somewhat low profile, so I'd rather not elaborate or answer questions on a public forum.) Email me and tell me a little about yourself/what you're looking for :)

comment by Tom_Ash · 2015-08-03T02:57:28.217Z · score: 4 (4 votes) · EA(p) · GW(p)

EAs have occasionally discussed whether blood donation is effective. I always used to do it before moving to Canada, where as a Brit I'm apparently under suspicion of mad cow disease.

Via Kelsey Piper, I recently found this article from Elizabeth of EA Seattle arguing that it is. Along with the comments from Alexander Berger it sheds light on the issue, while casting doubt on some of the more extreme claims about one blood donation saving multiple lives (if that's not a mixed metaphor).

comment by Elizabeth · 2015-08-03T13:13:38.325Z · score: 7 (7 votes) · EA(p) · GW(p)

For anyone who doesn't want to read the whole thing + comments: it's on average pretty effective ($50–$1,667 to a GW charity), but the marginal effectiveness is pretty questionable. They throw out very little usable blood and sometimes have to use old blood, which is bad and would indicate that more blood is useful on the margin. But the best research indicates that the cost of recruiting additional donors is small relative to the price hospitals pay for blood, which suggests they don't think they need more blood that badly. The complicating factor is whether paying for blood changes the quality of the product or the long-term availability, or blood banks' fear of same.

Almost all of the numbers have large confidence intervals because the data just isn't very good.

comment by MichaelDickens · 2015-08-03T16:44:40.902Z · score: 3 (3 votes) · EA(p) · GW(p)

Another consideration here is that for men, donating blood about once a year has positive health effects (see the section "Blood donation" here), so if you're a man, it might be worth it even without the altruistic benefits.

comment by nino · 2015-08-25T20:32:50.722Z · score: 3 (3 votes) · EA(p) · GW(p)

There is now a German translation of effectivealtruism.org at www.effektiver-altruismus.de.

comment by Denis Drescher (Telofy) · 2015-08-26T12:42:19.416Z · score: 1 (1 votes) · EA(p) · GW(p)

Could one of the mods maybe grant posting privileges to Nino? That would be swell. Thank you! :‑)

comment by Evan_Gaensbauer · 2015-08-15T11:22:47.415Z · score: 2 (2 votes) · EA(p) · GW(p)

James Snowden argued that expected value calculations based on predictions flung farther into the future, which depend on a greater number of variables, and which are based on less concrete estimates, rest on a less strong standard of evidence than interventionists are used to. For some causes, effective altruism is predicated on things we can't just run RCTs on, and depends on predictions of what's likely to happen. I believe this will be the case for more causes as time passes, and that prediction will become a more widely used method for finding the greatest opportunities to do good as effective altruism becomes more ambitious, robust, and bigger as a movement. I think much of effective altruism, then, will unavoidably depend on arguments from prediction for the foreseeable future.

A good track record of correct predictions for whatever reference class of work an effective altruist prescribes for the rest of us, then, is the closest we can get to testing the value of interventions which can or will only happen in the future, or only once. The more specific a forecaster's predictions, the more frequently they turn out accurate, and the more robustly they fit the closest reference class for effective altruism's predictions, the more confidence we can have in that forecaster. What I think is promising is developing or using explicit models of forecasting, which we can test, rather than just relying on the intuitions of individual forecasters, no matter how super they are. This way, more effective altruists can also test or use promising models. I don't know anything about this yet, but the possibility excites me.

I think it will take quite some time for any person or model to build a worthy track record for predictions in the reference class matching its class of domain-specific interventions. However, the value of information could be very great, so I think it's worth trying. To this end, I think it's worth more of us using prediction registries, building prediction markets for effective altruism, practicing forecasting to learn and improve, and surveying the academic literature to see if there are strategies or theories for forecasting better. We should also encourage other effective altruists to do the same, especially if they prioritize a more speculative or less concrete cause, and are or claim to be some sort of expert.

comment by SydMartin · 2015-08-15T20:48:54.675Z · score: 1 (1 votes) · EA(p) · GW(p)

This sounds like a really great idea. As a community we tend to make loads of predictions; it seems likely we do this a lot more than other demographics. We do this for fun, as thought experiments, and often as a key area of focus, such as x-risk. It seems like a good idea to track our individual abilities at this sort of predicting for many reasons: identifying who is particularly good at it, improving, etc. It does make me concerned that we could become hyper-focused on predictions and potentially neglect current causes; getting too caught up in planning and looking forward and forgetting to actually do the thing we say we prioritize.

I also wonder about how well near-future prediction ability translates to far-future predictions. To test how well you are able to predict things, you predict near-future events or changes. You increase your accuracy at these and assume it translates to the far future. Lots of people then make decisions based on your far-future predictions because of your track record of being an accurate predictor. Perhaps, however, your model of forecasting is actually wildly inaccurate when it comes to long-term predictions. I'm not sure how we could account for this. Thoughts?

comment by Evan_Gaensbauer · 2015-08-16T18:47:58.428Z · score: 0 (0 votes) · EA(p) · GW(p)

To clarify, there is a class of persons known as "superforecasters". I don't know the details of the science backing this up, except that their efficacy has indeed been validly measured, so you'll have to look it up yourself to learn how it happens. What happens, though, is that superforecasters are humans who, even though they don't usually have domain expertise in a particular subject, predict outcomes in a particular domain with more success than experts in that domain, e.g., economics. I think that might be one layperson forecaster versus one expert, rather than a consensus of experts making the prediction, but I don't know. I don't believe there's been a study of the prediction success rates of a consensus of superforecasters vs. a consensus of domain experts predicting outcomes relevant to their expertise. That would be very interesting. These are rather new results.

Anyway, superforecasters can also beat algorithms which try to learn how to make predictions, which are in turn also better than experts. So no human or machine yet is better than superforecasters at making lots of types of predictions. In case you're wondering: no, it's not just you, that is a ludicrous and stupendous outcome. Like, what? Mind blown. The researchers were surprised too.

From the linked NPR article:

For most of his professional career, Tetlock studied the problems associated with expert decision making. His book Expert Political Judgment is considered a classic, and almost everyone in the business of thinking about judgment speaks of it with unqualified awe.

All of his studies brought Tetlock to at least two important conclusions.

First, if you want people to get better at making predictions, you need to keep score of how accurate their predictions turn out to be, so they have concrete feedback.

But also, if you take a large crowd of different people with access to different information and pool their predictions, you will be in much better shape than if you rely on a single very smart person, or even a small group of very smart people. [emphasis mine]

Takeaways for effective altruist predictions:

  • Track your predictions. Any effective altruist seeing value in prediction markets takes this as a given.

  • There are characteristics which make some forecasters better than others, even adjusting for level of practice and calibration. I don't know what these characteristics are, but I'm guessing it's some sort of analytic mindset. Maybe effective altruists, in this sense, might also turn out to be great forecasters. That'd be very fortuitous for us. We need to look into this more.

  • If, like me, you perceive much potential in prediction markets for effective altruism, you'd value a diversity of intellectual perspectives, to increase the chances of hitting the "wisdom of the crowds" effect Tetlock mentions. Now, SydMartin, I know both you and I know what a shadow a lack of diversity casts on effective altruism. I emphasized the last paragraph because you just last week commented on the propensity of effective altruism to be presumptuous and elitist about its own abilities as well. I believe a failure of this community to accurately predict future outcomes would be due more to a lack of intellectual diversity, i.e., everyone hailing from mostly the same university majors (e.g., philosophy, economics, computer science), than to sociopolitical homogeneity within effective altruism. Still, that's just my pet hypothesis that's yet to pan out in any way.
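To make "keep score of how accurate their predictions turn out to be" concrete, here is a minimal sketch (not from the thread) of scoring tracked predictions with the Brier score, a standard accuracy measure for probabilistic forecasts; the example predictions and probabilities are made up for illustration.

```python
def brier_score(predictions):
    """Mean squared error between stated probabilities and outcomes.
    0.0 is a perfect score; always guessing 50% earns 0.25."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Each entry: (probability you assigned, outcome: 1 if it happened, else 0).
# These hypothetical predictions are purely illustrative.
my_predictions = [
    (0.9, 1),   # "EA Global sells out" at 90% -- it happened
    (0.6, 0),   # "This post reaches 30 comments" at 60% -- it didn't
    (0.2, 0),   # "Fundraiser misses its target" at 20% -- it didn't
]

print(round(brier_score(my_predictions), 4))  # 0.1367 -- lower is better
```

A registry that logs entries like these over time gives exactly the concrete feedback Tetlock describes: forecasters can compare scores, and the rest of us can see who has earned trust.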

I also wonder about how well near-future prediction ability translates to far-future predictions. To test how well you are able to predict things, you predict near-future events or changes. You increase your accuracy at these and assume it translates to the far future. Lots of people then make decisions based on your far-future predictions because of your track record of being an accurate predictor. Perhaps, however, your model of forecasting is actually wildly inaccurate when it comes to long-term predictions. I'm not sure how we could account for this. Thoughts?

I'd be concerned whether a successful track record of near-term predictions would tell us much about potential success with long-term predictions. First of all, for existential risks, I suspect predictions made in the near term within the field of a single existential risk, such as A.I. risks, should be counted toward expectations for the long-term track record[1]. Even if it's more complicated than that, I think there is something near-term prediction track records can tell us. If someone's near-future prediction track record is awful, that at least informs us that the team or person in question isn't great at predictions at all. So we would not want to rely on their predictions further afield either.

It's like science. We can't inductively conclude that their correct predictions of the past will continue on some arbitrary timescale, but we can rule out bad predictors from being reliable by process of elimination.

I think prediction markets might apply to all focus areas of effective altruism, though not always to the same extent. Running intervention experiments is difficult and expensive. For example, while GiveDirectly, IPA, and the Poverty Action Lab build on the many millions of dollars of development aid spent each year already, effective altruism itself has been responsible for injecting the same empiricism into animal activism. Intervention experiments in animal activism have been expensive for organizations like Mercy For Animals, so these experiments aren't often carried out, or refined to try to find better methods. Also, there's difficulty in getting the cooperation of animal activists on randomized control trials, as their community isn't as receptive yet. Further, due both to the low numbers of volunteers from effective altruism, like Peter Hurford, and to our lack of experience, it's difficult to get experimental designs right the first time, and in as short a timeframe as, e.g., Animal Charity Evaluators would hope.

However, after a first successful experiment, for whatever value of "success" effective altruism or others assign, other organizations could design experiments using the same paradigm and preregister their plans. Then an EA prediction registry or market could look at the details of the experiment, or demand more details, and predict the chance it would confirm the hypothesis/goal/whatever. They could judge the new design on how it deviates from the original template, how closely they expect it to replicate, or how biased they think it will be. If the most reliable forecasters weren't confident in the experiment, that would inform the rest of us whether it's worth funding when organizations ask for money. This way, we can select animal advocacy RCTs or other studies more efficiently when a scarcity of resources limits how many we can carry out.

Of course, this isn't just for experiments, or animal activism. The great thing about a prediction market anyone can enter is nobody needs to centrally allocate the information to all predictors. They could have expertise, hunches, or whatever nobody else knows about, and as long as they're confident in their own analysis or information, they'll bet on it. I was discussing certificate of impact purchases on Facebook yesterday, and Lauren Lee came forward stating she might prefer prediction markets to predict the value and success of a project before it's started, rather than a posterior evaluation based on impact certificates. I don't see a reason there shouldn't be both, though.

Presuming effective altruism becomes bigger and more ambitious in the future, the community will try more policy interventions, research projects, and small- and large-scale interventions we won't have tested yet. Of course, some experiments won't need to rely on prediction markets, but there is little reason forecasters couldn't bet on their success as well to hone their prediction skills.

[1] Yes, this counts as predicting how successful predictions would be. Go meta!

comment by Evan_Gaensbauer · 2015-08-15T09:51:09.126Z · score: 2 (2 votes) · EA(p) · GW(p)

Dylan Matthews recently wrote an article about his impressions of EA Global. He claims the event gave disproportionate speaker privilege to talks on A.I. risk and safety research, which is contested. I personally agree with Ryan Carey that Mr. Matthews' arguments countering concern for A.I. risk are weak. I don't currently favor A.I. safety research among all causes, but I think better arguments and criticisms than the ones Matthews puts forth already come from the rest of the community. Matthews' points that stick for me are the ones about the culture of effective altruism. He worries about how elitist, condescending, and dismissive effective altruists are of those prioritizing causes other than the one(s) they themselves prioritize. (In Mr. Matthews' case, his perception is that the A.I. risk and metacharity crowds are the biggest culprits.)

As someone who has been involved with effective altruism for three years, came in with no favored cause area, was attracted to the general idea of effective altruism, and has only become more emotionally and intellectually sympathetic to each cause as time goes on, reports like Matthews' frustrate me. While I don't currently definitely favor one cause, I'm sympathetic to cause prioritization and policy reform across all causes, and would put my donations into a donor-advised fund if I were currently actively donating. I've also talked at length with other effective altruists who share my experience of being cause-agnostic.

Watching the livestream of EA Global, speaker Jeff Sebo started by saying how grateful he was to be there, and that what he noticed was how everyone was so smart, so much so from everyone that it's overwhelming. Not to say that others elsewhere aren't as smart, or that smartness is the keystone quality of effective altruists, but the concentration of so much of it in one day is awesome. That was my impression at the 2014 EA Summit, which I attended in person. Jeff and I and any other effective altruist get this impression because large bodies of effective altruists don't stick to their own cause-oriented tribes like an echo chamber. They talk to everyone about everything. So, here is a declaration for any effective altruist who thinks the people in their cause are across the board more mature, and who acts patronisingly to their peers.

All my experience with effective altruism has taught me these aren't justified attitudes. The densest crowd of nerds you'll ever meet throws as much scrutiny as they do emotional investment at what they consider the most important intellectual and lifestyle conclusions they will ever make. Effective altruists break the mould by being full of passionate intensity while lacking all conviction. They go on to resolve this by being infovores who read 10,000-word Wikipedia articles for fun. We're all insatiably curious. The guilty pleasure of effective altruism is that sometimes learning more than we need to know to save the world, at the intersection of practical ethics, normative rationality, and every science, is intensely satisfying. Beyond that, every dedicated effective altruist tends to brush up on objections to their own and other favored causes on a routine basis. You'd think this would be prone to confirmation bias entrenching people further into their current opinions, and often it is, but iterative debating challenges them on that too, so they try checking their biases. This is why some of us practically invented, or at least honed, the disciplines of evidence-based charity evaluation and cause prioritization. Effective altruists check their answers!

Each effective altruist you meet is Schroedinger's Brainiac: you cannot justifiably conclude they don't know what you're talking about until they ask you to clarify, which they will. I've met so many animal rights activists who know about the philosophical arguments for prioritising existential risk reduction. The community is replete with computer programmers who will study philosophy and neuroscience in their spare time to figure out how much moral weight to grant animals vis a vis their own moral principles. Cognitive scientists and development economists and writers among us can follow each other's arguments without hesitation. Any veteran effective altruist you meet, anyone who has been around for at least a few months, who hasn't changed their mind on what cause they favor hasn't done so for lack of trying.

Don't assume some other effective altruists are naïve, wilfully ignorant, or unprepared to understand at least the surface-level reasoning behind your arguments. Don't pat anyone on the head. Don't condescend. Don't just pay lip service to good faith, intellectual respect, and manners. All the top articles on this forum are about that. Don't get motivatedly skeptical about someone's level of commitment just because you can't relate to their values. And don't snidely mock anyone behind their backs because you think their beliefs are insane.

It's not that these are forbidden or taboo behaviors or attitudes. It's not a matter of political correctness. I'm not telling you to check your prejudice or your privilege. The average effective altruist knows enough about each aspect of the whole framework that convincing them of your perspective will legitimately be more challenging than convincing an already skeptical or doubting layperson. It's inaccurate to assume other effective altruists aren't prepared to get deep into exploring both your and their beliefs on social impact and interdisciplinary thinking. Any of us comes off as silly and disrespectful for assuming anything less. So be prepared.

comment by Andy_Schultz · 2015-08-03T20:51:19.710Z · score: 2 (2 votes) · EA(p) · GW(p)

Stephen Hawking will be answering questions about AI on reddit: https://www.reddit.com/r/science/comments/3eret9/science_ama_series_i_am_stephen_hawking

comment by Evan_Gaensbauer · 2015-08-03T14:31:36.308Z · score: 2 (2 votes) · EA(p) · GW(p)

Effective altruists have considered the potential effectiveness of blood donation and kidney donation. When I read Peter Singer's recent article in the Boston Review, The Logic of Effective Altruism, he also mentioned that effective altruists consider donating bone marrow. I didn't know people could donate bone marrow.

Does anyone know if there's an analysis somewhere of how effective donating bone marrow is? If not, would anyone be willing to do one? I could help, but I don't have enough analytic skill to do it by myself.

comment by Denise_Melchin · 2015-08-09T18:47:08.274Z · score: 1 (1 votes) · EA(p) · GW(p)

You can sign up for it, but it's pretty unlikely you'll get selected. However, it is potentially life saving and at least in Germany it didn't cost that much time, so I'd recommend it. (I signed up this year.)

comment by Tom_Ash · 2015-08-10T18:49:48.663Z · score: 1 (1 votes) · EA(p) · GW(p)

IIRC there are two registries that it's worth joining in the UK, and you can apply to join one of them online and the other whenever you donate blood:

http://www.anthonynolan.org/8-ways-you-could-save-life/donate-your-stem-cells/apply-join-our-register

http://www.nhsbt.nhs.uk/bonemarrow/qa/index.asp#howcan

This appears to be the place to sign up in Canada:

https://www.blood.ca/en/stem-cells

(It seems that in some countries the term to search for is "stem cell donation for leukaemia" or somesuch.)

comment by Tom_Ash · 2015-08-03T16:15:36.330Z · score: 1 (1 votes) · EA(p) · GW(p)

I don't know of an analysis, but I'm signed up, and would guess it's pretty effective and genuinely can be (counterfactually) lifesaving. It is (or at least used to be) quite painful and puts you out of action for a bit, but has lower costs than kidney donation.

comment by tomstocker · 2015-08-07T15:14:07.289Z · score: 0 (0 votes) · EA(p) · GW(p)

My colleague did it last year - sounds much much easier than a kidney donation (in the UK)!!

comment by Tom_Ash · 2015-08-10T18:44:18.151Z · score: 0 (0 votes) · EA(p) · GW(p)

Interesting, what did it involve? Am I right in remembering it put you out of action for a bit, or has this changed in the past few years?

comment by Evan_Gaensbauer · 2015-08-22T19:19:33.034Z · score: 1 (1 votes) · EA(p) · GW(p)

Effective altruism, more than stemming a tide of anti-intellectualism, or being neutral, seems super pro-intellectual to me. That is, to me the dominant culture seems to be a cross between a sort of transatlantic uprightness of universities like the Ivies and Oxbridge, and what in North America is called "nerd culture". This seems to be a first filter for people to find amenable attitudes within effective altruism, arguments for how to do good aside.

Hypothesis: a lack of diversity along other dimensions in effective altruism is filtered by an intellectualist mindset. Thus, for what diversity remains within effective altruism, we should expect to see across demographics the same disposition, temperament, or set of experiences when adjusting for sex, class, ethnicity, or country of origin.

This could explain why effective altruism is disproportionately college-educated. Also, a lack of diversity may be due to middle-class white men dominating the culture that feeds effective altruism, a culture which is mostly that demographic anyway. This is seen in how effective altruism gains much of its population from CS, maths, econ, and philosophy majors. However, this is confounded by how EA was designed and spread from within that culture in the first place, so there could be implicit biases in how we end up communicating EA.

Unfortunately, this hypothesis isn't practical to test, and seems prone to biases like confirmation bias, demand characteristics, and the availability and representativeness heuristics. Even if we accounted for those, collecting data on this would likely end up being done in an insensitive and uncontrolled way, resulting in anecdotes we couldn't reliably correlate with anything. I don't know if it's testable using the EA survey, which is the only thing that might do it.

The other part of my theory is that effective altruism is off-putting to altruistic subcultures or communities which are less neutral towards, indifferent to, or averse to politicization. Effective altruism isn't averse to such, per se, and I don't think it has a pervasive culture of political correctness. Rather, when more partisan persons enter the space, they're not expecting the relatively unrelenting scrutiny effective altruism brings. Thus, off-put by having preconceived notions unexpectedly challenged, both right-wing and left-wing persons self-select for exclusion from effective altruism, as it is also unsympathetic to predetermined cause prioritization not already favored by at least a vocal minority of sensible effective altruists. For example, animal advocacy might be the smallest major cause in effective altruism, but it's not excluded, because so many animal advocates yield to calls for demonstrated or increased effectiveness in their actions. This doesn't seem to be the case for other causes which have tried to gain a toehold in effective altruism.

Effective altruism may also be off-putting for tolerating a diverse range of perspectives so long as the proponents in question make efforts to behave in a way considered reasonable and polite. Depending on one's principles, standing beside a community which accepts perspectives opposite to one's own may be a dealbreaker. This principle of charity is one I wish to remain intact within effective altruism, but I believe there needs to be a greater awareness of persons who pay lip service to, or demonstrate adherence to, community norms in their surface behavior but really are disrespectful to other effective altruists they disagree with when they can get away with it. I think there needs to be a constant vigilance each of us holds others, as well as ourselves, to, in addition to an indiscriminate intellectual empathy. I believe this is more important for effective altruism than it is for the rationalist community.

Finally, I agree with Robin Hanson that effective altruism needs more grit. That's a virtue which can enhance effective altruism's ability to survive external and internal conflict. For several months, I've observed pessimism among others about the capacity of effective altruism to survive conflict. Observing other movements, like environmentalism and economic justice, I've noticed each is fraught with greater turbulence than effective altruism, in terms of both the quality and quantity of the disagreements that lead to debates. By and large, though, these and other movements realize great gains without collapsing. I believe pessimism about the future solidarity and value of a united effective altruism is both unhelpful and unrealistic.

comment by Evan_Gaensbauer · 2015-08-18T22:40:21.846Z · score: 1 (1 votes) · EA(p) · GW(p)
  1. I'd guess usage of the EA Forum has spiked in the last week, and is growing in general, perhaps at greater rates than in the past. I shared Peter's recent article on the Facebook group, which I'm guessing attracted more people to comment. There has also been a greater frequency of posts and comments in the last three days than usual. This is good news. I'll have to wait until Ryan or Tom gets back to us with hard data to confirm or deny these hypotheses, but in the meantime I believe more traffic can be brought to the EA Forum regardless, by sharing what a consensus of Forum users consider high-quality articles. This also has the direct consequence of efficiently disseminating important developments in effective altruism to as wide an audience as possible, an important goal in its own right.

  2. If anyone wonders why I make so many comments in the open thread, it's so I have a record of hot takes on effective altruism that others can publicly read and comment on, without my needing to invest the time in writing up a full draft of my thoughts on a topic before I receive any feedback. I suggest others do the same.

comment by RyanCarey · 2015-08-19T12:30:36.680Z · score: 1 (1 votes) · EA(p) · GW(p)

Yep, it's had a mild spike in the last week, against a background of increased usership in 5 of the last 6 months, trending up by about 50% over 6 months. We should definitely share more high-quality articles to the main EA Facebook group.

comment by tomstocker · 2015-08-10T09:02:24.151Z · score: 1 (1 votes) · EA(p) · GW(p)

Eva Vivalt, on Facebook, invited anyone to help transform AidGrade's data to answer the question of what the log variance of cost-effectiveness is for aid interventions. This seems like a key consideration for development as a cause relative to other EA causes, and important to how we focus our efforts (e.g. on increasing the amount of altruism and aid funding generally, or focusing heavily on effectiveness).

"Eva Vivalt Btw, if anyone would like to help with transforming AidGrade's data so as to better speak to this question and pin down the variance in cost-effectiveness terms, let me know. We have some measures of costs but there is still the matter of converting outcomes."

comment by Evan_Gaensbauer · 2015-08-05T19:09:12.856Z · score: 1 (1 votes) · EA(p) · GW(p)

I'm writing a response to Daron Acemoglu's critique of effective altruism, recently published in the Boston Review. He criticizes effective altruism for perhaps focusing too much on simple metrics which don't fully capture what adds value to human lives, and thus unintentionally risking incentivizing international aid to take too narrow a focus. While I agree that's a valid criticism, I have some caveats for my draft.

  • Effective altruism is currently a marginal movement whose donations and affiliated charities are unlikely to interfere in the ways that concern Dr. Acemoglu. Indeed, it's precisely because of the simplicity of the poverty interventions effective altruism supports that they don't interfere with or complicate lives in ways which would confound their effectiveness. If a time comes when diminishing marginal impact renders these interventions ineffective, effective altruism will shift its focus.

  • Much of effective altruism actually agrees with Dr. Acemoglu that the focus on such hard and specific metrics is too narrow, and this is a matter of internal debate and consideration within the movement. In the meantime, organizations like the Open Philanthropy Project are making major grants which aren't assessed for impact on the narrow basis of QALYs generated.

  • It's possible that as effective altruism becomes more influential, it will seek to work with the field of development economics and/or others to assess or develop metrics which are broader or more general than what QALYs account for, while still being relatively accurate, valid, and reliable.

My question for you: is my latter point valid? Is it actually the case that there are, or could be, metrics which may not be as precise as QALYs, but would still be more robust than the heuristic of "important, neglected, and tractable"?

comment by tomstocker · 2015-08-07T15:12:55.259Z · score: 0 (0 votes) · EA(p) · GW(p)

You could for sure improve on the QALY as it's done, with 1-3 or 1-7 survey responses. It's very easy to improve on the DALY.

These measures are about benefits and quality of life in terms of individual welfare, though. So they'd relate only to the 'important' bit of the heuristic. There are other things that are important, and key EA people have acknowledged and thought about that with, for example, the parliamentary model for resolving uncertainty in moral reasoning.

But is this really what Daron Acemoglu is getting at in terms of narrow focus? I'm not familiar with his criticism, but his work is all about the importance of market and political institutions, which is important to human flourishing (perhaps even all-important in some lights) but very hard to relate to QALYs in the short to medium term, or in terms of marginal funding (and nearly all funding is marginal?).

comment by Evan_Gaensbauer · 2015-08-15T10:55:32.462Z · score: 0 (0 votes) · EA(p) · GW(p)

I used to think one argument for donating now rather than later was the haste consideration. However, I've just concluded this is fallacious. The haste consideration doesn't necessarily implore us to donate as soon as possible so much as it implores us to figure out what to donate to, i.e., to prioritize as best we can between interventions and/or causes as soon as possible. The same goes for volunteering, advocacy, and research efforts. Has anyone else thought about this?

For me, this increases the chances I will defer my direct altruistic effort until later, or invest it in whatever course of action is currently best for prioritizing what my direct efforts should go towards. This is also because I'm quite uncertain between causes, though.

comment by Owen_Cotton-Barratt · 2015-08-15T11:54:42.270Z · score: 1 (1 votes) · EA(p) · GW(p)

I think that, as you imply, this is sensitive to whether you expect you've already worked out the best cause.

Ben Todd and I wrote about how we think this and other considerations interact earlier this year.

comment by Tom_Ash · 2015-08-03T23:09:01.635Z · score: 0 (0 votes) · EA(p) · GW(p)

[Registered charities around the world you help control]

Do you run, or are you on the board of, a registered charity, particularly in a European country? Could you get it to undertake EA activities? One potential example would be donation routing, by participating in the Worldwide EA Donation Routing Mechanism. We've had success recruiting dormant charities for this, or adding it as a new activity of quite different ones.

comment by DanielHendrycks · 2015-08-03T17:37:19.941Z · score: 0 (0 votes) · EA(p) · GW(p)

Would anyone mind posting a very short summary of their favorite talk? I'm trying to pick which talks to watch, but I don't know much about what's covered in each one.