January Open Thread

post by RyanCarey · 2015-01-19T18:12:55.433Z · EA · GW · Legacy · 59 comments

Welcome to January's open thread on the Effective Altruism Forum. This is our place to discuss relevant topics that have not appeared in recent posts.


Comments sorted by top scores.

comment by RyanCarey · 2015-01-19T18:32:10.594Z · EA(p) · GW(p)

A suggestion: if people want career advice and would like to have a wide discussion about it, let it be known that you can do this on the EA Forum. You can ask for advice in a comment here or in a new thread. If you prefer privacy, you may like to use a different username from your usual one but if so, I'm happy to give posting privileges for this purpose, just send me a message!

Replies from: AnonymousFailure, Andy_Schultz
comment by AnonymousFailure · 2015-01-21T04:13:48.456Z · EA(p) · GW(p)

If this comment seems a bit rant-y, I'm sorry! Please don't bother reading it if you aren't in the mood to read a rant. Writing this helped me clear my head and feel better about myself. There's something therapeutic about writing, sometimes; consider this as two parts EA journal, and one part invitation to give me advice.

I have an ugh field regarding applying for jobs (although I suppose that maybe it doesn't officially count as an ugh field anymore, now that I'm aware of it). I'm generally able to work myself up to applying to a job or internship when I have enough time to-- at its worst, the process generally involves me taking 1-2 hours to think about how unintelligent and useless I am, after which I am able to sit down and go through the process of actually applying to the position in question.

I am in a position where I could either do a summer internship and then go to graduate school, or go directly into the workforce. If I don't end up with any good job offers in industry, I'll just default to going to grad school in STEM (I know which field within STEM I'd be going to grad school in, but I don't want to mention it here, as I'd like to remain as anonymous as possible for now).

My feelings about what type of career I'd like to have have been affected by my inability to believe in myself; I could see myself taking a worse job than I would otherwise be able to obtain in order to be done with the stressful process of searching for jobs.

I'm not at all passionate about my STEM field-- I only majored in STEM because I wanted to make enough to be able to both live comfortably and have enough wealth to donate to EA causes. Actually, I would have been fine with being a starving artist type, since the fun-ness of doing humanities work would have made up for the relatively lower salary I would have made if I had majored in a humanities field. Majoring in a STEM field will probably allow me to donate more to EA organizations than I would have been able to otherwise, so that's a plus.

Also, I'm not sure how to tie this in, but: the parent I lived with while growing up was very successful professionally, and this makes me feel far more uncomfortable than I would otherwise be about not being more talented than I am. They never accorded me much status, and (at least in part due to my desire to please them) I ended up taking their terrible advice not to go into computer science, which, in retrospect, seems to me to have a somewhat higher reward-to-effort-spent payoff than some of the other sciences. I don't particularly care for CS any more or less than I care for other STEM subjects; it's just that the payoff for studying CS is higher. Oh well.

Thanks for reading :)

Replies from: ruthie, Bitton, Tom_Ash, Giles, William_S
comment by ruthie · 2015-01-22T01:07:00.070Z · EA(p) · GW(p)

I can totally sympathize. Job seeking sucks, especially if you're not feeling like an awesome person who everyone would obviously want to employ. I also know from experience that telling you you're awesome (I don't know you, but you're probably awesome) doesn't necessarily make you feel that way.

I am not a career counselor, but this is the advice I would give you:

  • Don't go to grad school unless you're really sure you want to. Grad school is a really crummy job, and the payoff in terms of career capital is dubious. Most other jobs you can get are better than grad school, even if they're not in your field.

  • It's not too late for a career change. You don't sound excited about entering CS, but if you decided that was your best option, coding bootcamps are a thing, and they seem pretty good at turning STEM-oriented people into employable coders. There are a lot of other places you could go, and a lot of jobs that don't require much more qualification than some college degree.

  • Earning to give is not the only EA career option. If it won't make you happy, you'll probably just get burnt out on it and maybe resent EA for making you feel like you had to do that. http://www.benkuhn.net/career-ideas has a list of career ideas that aren't earning to give, and it's extremely incomplete.

  • Also don't feel like you have to be passionate about the first job you take (or the second or the third). If you don't know what you want to do, you try things until something works. I also think that a lot of people start jobs that they don't feel passionate about, and then grow passionate about them over time, so not feeling like there's anything exciting for you right now doesn't mean that you'll never have a job you're excited about.

  • Lots of people graduate without a job or a plan. As long as you have some savings or someone you can stay with for a while, waiting until you're out of school and have time and space to think about your life is a totally reasonable plan.

I'm happy to talk more or help you brainstorm ideas besides grad school or industry in something you're not excited about over PM, if you think that will help.

hugs and good luck!

comment by Bitton · 2015-01-22T05:25:59.619Z · EA(p) · GW(p)

Even if your goal is to do as much good as possible, you might do better in a field that really motivates you than in a field that typically produces high salaries but that doesn't interest you much. I also dislike applying for jobs - usually because the jobs I apply for are often jobs that I don't want. However, I like applying for jobs that I do want and that I think I'm qualified for. If you don't feel qualified for jobs in your field then I don't know what to say other than (1) maybe you are qualified but you just have a negatively biased self-image, (2) you can make yourself qualified by learning more and picking up new skills, and (3) figure out what you are qualified and motivated to do and go do that instead.

My best answer to "inability to believe in oneself" (and almost everything else) is rigorous organization. Track your time, set yourself at least daily, weekly, and monthly goals, develop routines (e.g. a morning routine), exercise regularly, set an alarm even on days that you don't have plans, etc. I started this about five weeks ago and saw extreme results almost overnight.

comment by Tom_Ash · 2015-01-21T13:27:12.678Z · EA(p) · GW(p)

It sounds like you might benefit from working on believing in yourself more :) Have you considered trying counselling? CBT might be particularly well suited. (Sorry if this comment sounds patronising, it's difficult to get tone right on these things.)

In terms of concrete career advice, it would be helpful to know what industries you might go into now, and whether grad school would lead to promising careers or higher earnings.

Whatever you choose to do, good luck! Bear in mind that if you donate to effective charities you'll be doing an unusual amount of good, which very much means that you're not a "failure".

Replies from: AnonymousFailure
comment by AnonymousFailure · 2015-01-22T01:54:11.772Z · EA(p) · GW(p)

Giles, Ruthie, and Tom,

Thank you all for the encouragement and advice. Yay! :)

comment by Giles · 2015-01-22T01:17:42.950Z · EA(p) · GW(p)

Hi Anonymous,

Really sorry to hear that you feel like that. I'm glad you find writing about it therapeutic. One thing you can try - it's worked for me - is to write down a "toolbox" of things (such as writing) that allow you to feel better about yourself when you're feeling bad.

This could even include taking 1-2 hours to criticize yourself - if that's what works for you. But having other options might help. Writing them down somewhere visible can help too.

The reason I'm bringing this up is that - for me at least - the mindframe you describe isn't helpful for making big decisions, or even for applying to jobs. So I think that knowing when you're at your best and knowing some things you can try to help you return to that state, is great.

Also really sorry to hear that you're feeling low status on account of a successful role-model. I've felt that one too, although for me it wasn't a parent but rather other members of the EA community who I saw as having accomplished more than I had. I'd love if there was some neat package of advice I could give here, but the only way out I know of involves a lot of grit - gradually learning to compare yourself to your own standards and finding success spirals.

It's really sweet and amazing that you're not blaming anyone in the community for making you feel this way - I know it's not anyone's intention to get you to choose a career you're not at all passionate about for EA reasons, but some of the advice can sometimes sound a bit like that.

Also bear in mind that the career advice from 80,000 Hours isn't to get it right first time, but to allow yourself room to explore and find new directions. Some high-profile EAs have done exactly that, doing a career u-turn when they discover some other path that for them is more effective or more satisfying. So it may be that there's a fun, fulfilling career out there for you - that's effective in helping others - and that lies outside of STEM. Or maybe your current field is right for you after all, and you just need to find the right people to make it exciting for you.

Good luck, and thanks so much for opening up. I'm sure what you're saying resonates with a lot of people.

comment by William_S · 2015-01-25T16:29:54.846Z · EA(p) · GW(p)

I also have had negative experiences with career search stuff (more around making decisions). My suggestion, which I'm also going to try, is to find someone who can support you through the career search process - someone you can talk over decisions with, get to look over applications, and maybe talk you through the time you spend feeling useless before applying. This could also help keep you from settling for an inferior job, if you have to justify it to someone else.

I would also suggest, from experience, to avoid committing to a job at a time when you feel really down about yourself - I've done that before, and it would have been better to just wait. At least try to wait a few days, talk to some people about it, etc.

(Also, there's a facebook group for EAs to help each other with personal issues, and it's the sort of place where you can post this stuff and get advice - messages are only visible to group members. Message me if you're interested and not already in it, and I can add you)

comment by Andy_Schultz · 2015-01-26T16:55:09.738Z · EA(p) · GW(p)

What would be good advice for people who say they would only be happy with a career in the arts?

Replies from: Ervin, Ervin
comment by Ervin · 2015-01-27T11:38:20.378Z · EA(p) · GW(p)

Also, it would be worth trying to work out what about a career in the arts they think is required for their happiness and seeing whether you could find higher impact alternatives that provided this.

comment by Ervin · 2015-01-27T11:37:09.553Z · EA(p) · GW(p)

Depending on what you mean by the arts, I suspect that would be very likely to be low impact. That would suggest trying to convince them to change course, though I think that's not likely to be successful, meaning it would be best to focus on other people.

comment by Giles · 2015-01-20T03:11:30.190Z · EA(p) · GW(p)

I was reading The Phatic and the Anti-Inductive on Slate Star Codex.

Why's this relevant?

Birthday and Christmas charity fundraisers of course!

There is a sense in which the concept of a birthday fundraiser is anti-inductive - if they worked, and everyone realised they worked, then a lot more people would be doing them and they wouldn't work so well any more.

But actually running a fundraiser feels more like phatic communication. You're really communicating very little information about the charity you want people to give money to, but people seem to appreciate it and (as far as I know) very rarely get mad.

So is there some kind of lesson here that in some situations, one mindset is better, in other situations a different mindset is better... but always to remember that the other person may have a very different mindset to yourself?

Replies from: Tom_Ash
comment by Tom_Ash · 2015-01-20T11:19:24.277Z · EA(p) · GW(p)

There is a sense in which the concept of a birthday fundraiser is anti-inductive - if they worked, and everyone realised they worked, then a lot more people would be doing them and they wouldn't work so well any more.

I don't think that's quite true, as I don't think most people care enough to do them, whereas EAs of course do. Also, as I'm sure you know, it's not the case that everyone realises they work - people generally don't realise this unless a charity is shouting from the rooftops about it, like we (and you!) have done at Charity Science. When charities do, a bunch of people sign up - Charity Water is an example, and they've got enormous numbers of people to do birthday and Christmas fundraisers.

But actually running a fundraiser feels more like phatic communication. You're really communicating very little information about the charity you want people to give money to, but people seem to appreciate it and (as far as I know) very rarely get mad.

That's right, people seem to generally be happy to follow your choice of charity even without reading detailed cost-effectiveness studies. Indeed that's what happens in most of fundraising.

comment by Evan_Gaensbauer · 2015-01-20T08:00:38.022Z · EA(p) · GW(p)

I want to publish several posts on this forum in the coming weeks. This is an open call for reviewers for various posts. I believe it's more important to get the information out there than for me to be the one who publishes it. So, for topics for which I have insufficient content or information, I'm seeking coauthors. Here's the list. Feel free to comment below on which ones you'd be willing to review, or send me a private message. I may draft some of these posts in Google Docs, or another word processor, before I publish them, so send me a private message with your email if you like. If you're generally willing to review them, rather than any particular ones, just comment below:

Does It Make Sense to Make A Multi-Year Donation Commitment to A Single Organization? Essentially, this already published comment

What Doesn't Count As Effective Altruism? Rob Wiblin presented a talk at the 2014 Effective Altruism Summit entitled 'What is Effective Altruism?' Posting a summary of the whole talk on this forum seems redundant, but near the end Mr. Wiblin covered what, at least from the perspective of himself and the Centre for Effective Altruism, is disqualified from effective altruism. I believe this may make a good post. If the idea of this post raises red flags in your mind about possible controversy, I anticipate that, and you're also welcome to review my post before I publish it.

Neglectedness, Tractability, and Importance/Value The idea of heuristically identifying a cause area based on these three criteria was more or less a theme of the 2014 Effective Altruism Summit. This three-prong approach was independently highlighted by Peter Thiel, not just for non-profit work but for entrepreneurship and innovation more generally, and by Holden Karnofsky, as the basis for how the Open Philanthropy Project asks questions about what cause areas to consider. Several months ago I discussed with Owen Cotton-Barratt publishing a post on this subject, or perhaps coauthoring it. Still, that hasn't happened from either of us yet, so I'll definitely be doing it, seeking his input as well.

Effective Collaboration Michael Vassar gave a small lightning-talk at the 2014 Effective Altruism Summit on how organizations and others within the effective altruism movement may better collaborate. In his opinion, there is or was a dearth of this within the movement, and that's a problem. I'd like to interview or contact Mr. Vassar about this, as my notes are incomplete. If I can't achieve that, I likely won't publish this post unless others come forward with their detailed perspectives on this issue.

Volunteer and Human Resource Coordination This would be a followup to the above post, with possible intent to launch or coordinate a project. Vassar noted as an aside that the effective altruism movement may greatly benefit from having something like a COO between organizations, or something like a super-secretary. This could be a person, perhaps full-time, completely dedicated to getting all of effective altruism's logistical ducks in a row. This seems an important intermediate role. It may be fitting to have this organized by the Centre For Effective Altruism. However, just in case, this post may survey .impact and other effective altruist coalitions in an effort towards greater coordination and communication between everyone.

Crowdfunding and Effective Altruism This would be a post exploring how to use crowdfunding effectively, how it's previously been used across the world for effective causes, and what future potential it may hold for effective altruism. As I write this, I realize this post would also need to differentiate crowdfunding versus normal fundraising, and what the advantages and disadvantages of crowdfunding might be relative to normal fundraising. If you have experience in organizing either normal fundraisers, or crowdfunding campaigns, your input would especially be appreciated.

What Role Do Small-to-Medium Donors Play In the Future of Effective Altruism In the face of individuals such as Elon Musk and Peter Thiel making large donations to effective organizations and cause areas, to the tune of millions of dollars, and in the near future Good Ventures throwing tens, perhaps hundreds, of millions of dollars at effective causes and charities, I anticipate they may exhaust the giving opportunities presently available and sensible for most of us. Most of us won't become multi-millionaires, presumably. Even if we donate four- or five-figure sums each year, an extremely high net-worth donor or foundation may render redundant the efforts of tens or hundreds of other effective altruists earning to give. Whether it's funding our currently recommended charities past their room for more funding in one fell swoop, or the most effective cause areas, such as policy advocacy, requiring huge donations to be tractable, it poses an issue. I feel this may pose an identity crisis for effective altruism, and may change how, e.g., 80,000 Hours recommends effective altruists enter earning to give as a career.

Reevaluating Earning to Give This post would be related to the above, and its implications for earning to give. Also, I'd be seeking arguments both for and against earning to give as a career option worth pursuing from within the effective altruism movement, but not from 80,000 Hours.

Member Perspectives on 80,000 Hours This post would be a retrospective and a set of critiques on how various members of 80,000 Hours think of its performance. This could range from general satisfaction with the organization, to measured evaluations of specific outcomes from 80,000 Hours. This would deliberately seek input from others, like myself, who don't have an affiliation with 80,000 Hours beyond latent membership.

Replies from: Greg_Colbourn, Giles, Owen_Cotton-Barratt
comment by Greg_Colbourn · 2015-01-20T19:18:31.474Z · EA(p) · GW(p)

"What Role Do Small-to-Medium Donors Play In the Future of Effective Altruism"

I've been wondering the same. But I've got a feeling that top-tier philanthropists deliberately restrict their giving to at most ~50% of the room for more funding, both to encourage smaller donors, and because they only want to support things in proportion to their popular appeal. The latter also explains the motivation for genuinely restricted donation matching.

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2015-01-24T15:49:58.815Z · EA(p) · GW(p)

These are all good points about normal philanthropy. However, I'm still concerned because effective altruism doesn't involve normal philanthropy, or charitable giving. Thanks for responding, as this spurs me to state my case for why effective altruism is a unique movement for which we might need to take special considerations. I count explaining my rationale here in dialogue as drafting my essay on the topic.

For its classic charity recommendations, GiveWell is rigorous, and evaluates its top charities as having hit room-for-more-funding issues once one of those charities receives, e.g., >= $10 million USD in a single year. The demotion of the AMF from most-recommended charity in 2013 is an example of this. A foundation like Good Ventures could fund these top charities to the point at which each and all of them are no longer the best marginal donation target. From there, GiveWell may be at a loss to find the next best set of charities to recommend, and with it, effective altruism at large might be at a loss. Lots of people like myself and others I've observed are uncertain enough about the best donation target that we're too reluctant to make 3- or 4-figure donations to any other charities.

Additionally, the Open Philanthropy Project seeks to release in the next year recommendations to Good Ventures to support efforts to reform criminal justice or immigration policy in the United States, or to fund large-scale research efforts. From the perspective of a foundation like Good Ventures, such efforts could do good on a massive scale, and are worth funding even if it requires one million dollars or more to discover whether any good can be achieved. From the perspective of the average supporter of effective altruism, such an opportunity is backed by evidence less robust than GiveWell's classic recommendations, and entails much more risk. A multi-million dollar foundation can afford much higher risks, to reap much higher rewards, than individuals can.

In conjunction, I worry these two issues may squish us smaller donors. If donation is the most obvious effective way of doing good, but it becomes redundant for small-to-medium donors to donate in the name of the best altruistic opportunities, we're at a loss. Effective altruism is dedicated to seeking the best ways of doing good. If lots of us build our careers on donating to the best causes, but our financial contributions become negligible, what next?

We may reach the point where earning to give isn't the best common recommendation for doing good, at which point 80,000 Hours and effective altruism at large may be giving outdated advice to several hundred individuals. At that point, advising individuals to seek careers which will allow them to donate more may no longer be the best recommendation. Additionally, I feel it would be irresponsible of effective altruism to recommend that those aspiring to do the most good they can pursue a career of earning to give, when the complete picture begins to inform us that isn't their best option.

At this point, my point is bleeding into my idea of "Reevaluating Earning to Give", which is a related but separate topic.

comment by Giles · 2015-01-22T02:37:05.166Z · EA(p) · GW(p)

What Role Do Small-to-Medium Donors Play In the Future of Effective Altruism

I think this fits into a bigger picture. To punch above your weight in terms of impact, you need to know something (or have a skill) that most other people don't. Currently the thing you have to know is "there's this thing called EA and earning to give". As that meme spreads, you'd expect its impact to dwindle, assuming an upper bound on the total amount of good that can be done given current resources.

(number of earning-to-givers) * (average good done per earning-to-giver) <= (total amount of good available to be done)

The same equation applies to "knowing about everything that's going on inside EA", so creating better memes than earning to give doesn't appear to solve the problem.
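The bound above can be turned into a toy model. This is my own illustration rather than anything Giles specified: it assumes a fixed ceiling on the total good achievable with current resources, and the numbers are arbitrary.

```python
# Toy sketch (illustrative only, not Giles's exact model) of how the average
# impact of earning to give dilutes as the meme spreads, given a fixed ceiling
# on the total good achievable with current resources. Numbers are arbitrary.

TOTAL_GOOD_AVAILABLE = 1000.0  # assumed fixed ceiling, in arbitrary units
GOOD_PER_GIVER = 10.0          # impact of one earning-to-giver, unconstrained

def average_good(n_givers: int) -> float:
    """Average good per earning-to-giver, capped by the total available."""
    if n_givers == 0:
        return 0.0
    unconstrained_total = n_givers * GOOD_PER_GIVER
    return min(unconstrained_total, TOTAL_GOOD_AVAILABLE) / n_givers

# Below the ceiling each giver has full impact; above it, impact dilutes.
print(average_good(50))   # 10.0 (500 <= 1000: ceiling not yet binding)
print(average_good(200))  # 5.0  (2000 capped at 1000, shared among 200)
```

Raising `TOTAL_GOOD_AVAILABLE` is the second remedy Giles suggests below: with a higher ceiling, many more givers can act before dilution sets in.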

What would help though, would be:

  • finding where my model of what's going on is an oversimplification, and focussing some attention there (maybe with xrisk the amount of good to be done is so huge that we don't hit a limit for a while)
  • increasing the "total amount of good that can be done given current resources".

The second one would seem to suggest increasing the total resources available for doing good - this isn't quite the same as growing the economy, because many agents in the economy are selfish, but it feels related and probably involves an entrepreneurial spirit.

I think the EA algorithm would look something like this:

  • Do what everyone else in EA is doing
  • Think of something new, and if it can be shown to be effective (in the sense of growing things, not just directing resources away from somewhere else even in an indirect sense) then roll it out to the rest of the EA movement.

End ramble.

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2015-01-24T16:01:36.292Z · EA(p) · GW(p)

I don't consider this rambling. I didn't grok it the first time I read your comment, but it seems plenty insightful now. Thanks for helping out!

maybe with xrisk the amount of good to be done is so huge that we don't hit a limit for a while

It seems to me the bottleneck here isn't the output of good to be achieved in the future. However, the bottleneck could be the input of donation targets in the present. For example, every organization seeking to reduce existential risk that we can think of could hit points at which further donation isn't a good giving opportunity.

This scenario isn't too implausible. The Future of Life Institute could grant the $10 million donation it received from Elon Musk to MIRI, FHI, and all the other low-hanging fruit for existential risk reduction. If those organizations hit more windfalls like that, or retain their current bodies of donors, they might not be able to allocate further funds effectively. I.e., they may hit room-for-more-funding issues for multiple years. Suddenly, effective altruism would need to seek brand new opportunities for reducing existential risk, which could be difficult.

Replies from: Giles
comment by Giles · 2015-01-24T17:33:19.452Z · EA(p) · GW(p)

I think you're imagining a scenario where every organization either:

  • is not seriously addressing existential risk, or
  • has run out of room for more funding

One reason this could happen would be organizational: organizations lose their sense of direction or initiative, perhaps by becoming bloated on money or dragged away from their core purpose by pushy donors. This doesn't feel stable, as you can always start new organizations, but there may be a lag of a few years between noticing that existing orgs have become rubbish and getting new ones to do useful stuff.

Another reason this could happen would be more strategic: that humanity actually can't think of any things it can do that will reduce existential risk. Perhaps there's a fear that meddling will make things worse? Orgs like FHI certainly put resources into strategizing, so this setup wouldn't be a result of a lack of creative thinking. It might be something more fundamental to do with ensuring the stability of a system as complex as today's technological world being a Really Hard Problem.

Even if we don't hit a complete wall, we might hit diminishing returns. If there turns out to be some moral or practical reason why xrisk is on parity with poverty and animals (in terms of importance) then EA would essentially be running out of stuff to do.

Which we eventually want - but not while the world is full of danger and suffering.

comment by Owen_Cotton-Barratt · 2015-01-20T12:16:28.399Z · EA(p) · GW(p)

Neglectedness, Tractability, and Importance/Value

I have written an article which discusses a couple of technical models of cause effectiveness, and derives a 3-factor model which can be interpreted as giving a way to measure neglectedness, tractability and importance. You can find it here; the forum thread to discuss it is here.

comment by alexflint · 2015-01-20T21:15:59.633Z · EA(p) · GW(p)

Is there any way for EAs who are looking for housemates to find each other?

Living with other EAs is a really powerful way to strengthen the community, and finding like-minded housemates can also have a financial impact for many of us.

If there's nothing already out there then I'm going to make something.

Replies from: Tom_Ash, RyanCarey
comment by Tom_Ash · 2015-01-20T22:05:38.729Z · EA(p) · GW(p)

Yes, there's SkillShare's 'lodging' category. Great idea, I'll make a forum post to alert people to this opportunity.

Replies from: Tom_Ash
comment by RyanCarey · 2015-01-20T22:04:35.354Z · EA(p) · GW(p)

Yes, the way to meet potential housemates that would be likeliest to succeed would be to ask to be put in contact with someone! Sometimes the human solution is the best one!

Edit: oh, you want to make something? Well, I agree that EAs living together can be good, but I don't think this is really a software problem. Given that skillshare.im, which has a wider potential usership than this, didn't get consistent use, I would bet against this getting used!

comment by Andy_Schultz · 2015-01-19T20:51:48.824Z · EA(p) · GW(p)

Do you think it is better to decide each year where to donate, or to give organizations multi-year commitments of what you expect to donate?

Replies from: Evan_Gaensbauer
comment by Evan_Gaensbauer · 2015-01-20T07:20:04.997Z · EA(p) · GW(p)

Short Answer

It's better to decide each year where to donate if you're donating less than several thousand dollars per year, and you don't have very high confidence where to donate. If you donate several thousand dollars per year, or in excess of that, and you don't believe your donation preferences will change much, it may make sense to let organizations know you expect to donate to them for several years.

Long Answer

The following answer is based on my experience having previously worked for a charity fundraising company and been a regular monthly donor to World Animal Protection, as well as the experiences of myself and other supporters of effective altruism as donors.

Donations In General

Based on work and personal experience, it seems most charities and non-profits are much more concerned with their short- and medium-term operations and goals. That's because the continuance of the organization depends on meeting these goals, and achieving long-term goals depends on the organization still existing, e.g., several years from now. Thus, my impression is that most charities and non-profits are relieved enough that their base of donors lets them know they'll continue to donate on a monthly basis, never mind an annual one.

Of course, organizations want this information because it gives them a confidence interval for how much funding they'll receive, which in turn dictates their budget for operations and projects. The biggest donors will have the greatest impact on an organization's budget and its expectations of future funding. These are the donors organizations will focus on when gathering information about donation plans on a year-to-year basis. Ask yourself: are you among the biggest donors to an organization?

As an example, the Machine Intelligence Research Institute[1] publishes a [list of its top donors] over its history. A 'top donor' is someone who has donated ≥$5,000 USD to the MIRI. Animal Charity Evaluators[2] also publishes a list of [top donors], one of their 'top donors' being someone who donates ≥$1,000. The MIRI has existed longer than ACE has, and also has a greater operating budget. So, what qualifies you as a 'top donor' varies by organization. If you believe you fall into the category of 'top donor' to an organization, and will do so in the future, making a multi-year commitment may make sense. If you don't know the answer, you can always ask the organization.

Just ask them, out of curiosity, what percentage your donation makes up of their total operating budget, and how much your regular donations impact their medium-term budget planning. At this point, you have no obligation to disclose you're considering a multi-year commitment to donating to the organization in question.

Donations and Effective Altruism

I believe the concerns of donors influenced by effective altruism make committing to donate to an organization for multiple years a less good idea.

First of all, effective organizations, such as Givewell-recommended charities, may run into room-for-more-funding issues. This means an organization may hit a point at which the marginal donation to it will do less good than the same amount donated to a different organization. For example, at the end of 2013, Givewell believed its normally top-recommended charity, the Against Malaria Foundation, hadn't distributed enough malaria bed nets, or lacked the capacity to effectively allocate additional funds, such that Givewell recommended GiveDirectly as its top charity over the Against Malaria Foundation for most of 2014. This was controversial within effective altruism, as several individuals, and also Giving What We Can, disagreed with Givewell's analysis and conclusion regarding the AMF.

Anyway, similar issues may be raised in the future. Now, in a way, most of us don't believe ourselves as capable or experienced as Givewell or Giving What We Can at evaluating charities. So, we trust them to do so on behalf of the movement. If, as an individual, you make a multi-year commitment to an organization, it may be awkward in future years when a charity evaluator you trust no longer recommends that organization as the 'best place to donate'. Given that effective altruism focuses on supporting the most effective organizations at any present time, what's considered 'the best place to donate' may change relatively quickly. This would render a multi-year donation commitment sub-optimal, and may hurt your relationship with the organization.

Of course, some of us donated lots of money to the Against Malaria Foundation in 2013, then switched to GiveDirectly in 2014, and will resume donating to the AMF in 2015, now that it's Givewell's top recommendation again. Still, the opportunity for such flexibility is still a point against a multi-year commitment.

Some cause areas are popular within effective altruism because to some they seem very neglected, or hold lots of value, even though they're not as tractable as global poverty. Such cause areas include animal advocacy and existential risk reduction. By 'tractable', I mean the extent to which we're confident that what we do in the present will lead to desired goods in the future. Sometimes figuring out whether a cause area is tractable is difficult because the evidence needed to reach a conclusion hasn't been collected or evaluated yet, or because building an evidence base just seems too difficult.

Now, the MIRI and ACE are organizations working within cause areas for which it's much more difficult to discern how effective and positive financial and other support in the present will be. However, I believe that introducing greater effectiveness to these cause areas may change them such that evaluating their tractability becomes more possible. At such a point, you as an individual, or the effective altruism community at large, may conclude that organizations within your selected cause area are more or less effective than you once thought. So, this could shift your donation priorities within a couple of years. Thus, I believe accounting for this level of uncertainty about the effectiveness of a given organization over time is another mark against making multi-year commitments.

As someone who isn't very confident in which cause area, let alone which charity, may be the most effective target for donations, I wouldn't make a multi-year donation commitment even if I had the funds to justify such a thing. If you're extremely confident a single organization is the best, or confident in a single organization within the best cause area, a multi-year commitment could still make sense. However, keep in mind you may be biased if you're (one of) the only supporters of effective altruism with such high confidence in this organization.

In conclusion, I believe that from the perspective of effective altruism, making a multi-year donation commitment to an organization is usually an imprudent course of action.

[1] The MIRI is an organization within the cause area of existential risk reduction, and supporters of effective altruism provide a great share of the total sum of donations it receives.

[2] Animal Charity Evaluators is an organization mirroring Givewell, in that it evaluates and recommends for donations organizations focused on helping non-human animals, as Givewell does for organizations in the domain of global poverty and public health in the developing world. ACE was incubated by the Centre for Effective Altruism in Oxford, England, but now operates independently out of the United States. It is an organization central to the intersection of animal advocacy and effective altruism.

comment by davidc · 2015-01-19T19:16:55.147Z · EA(p) · GW(p)

This piece from an AI researcher at NYU criticizing Nick Bostrom's Superintelligence seems like it's worth a look (and hasn't been posted here yet), for folks interested in the subject.

Replies from: Giles
comment by Giles · 2015-01-19T19:42:25.055Z · EA(p) · GW(p)

I'll bite. It may take a new top-level post though.

Replies from: davidc
comment by davidc · 2015-01-19T19:45:17.323Z · EA(p) · GW(p)

I wanted to make a top-level post for it a few days ago but I need 5 more upvotes before I can create those. So I took the chance to share it here when I saw this "Open Thread".

Replies from: Giles, RyanCarey
comment by Giles · 2015-01-24T04:56:51.818Z · EA(p) · GW(p)

My post is here.

comment by RyanCarey · 2015-01-19T19:54:41.609Z · EA(p) · GW(p)

I've added you as a contributor :)

Replies from: davidc
comment by davidc · 2015-01-19T20:01:08.340Z · EA(p) · GW(p)


comment by ChrisJenkins · 2015-01-30T18:27:01.891Z · EA(p) · GW(p)

There's a "charity portfolio management" startup called The Agora Fund that's trying to funnel money to high-impact nonprofits. It seems to be using a reasonable definition of "impact" (e.g., they're using Givewell as an information source and listing GiveDirectly and Deworm the World as top charities). I'd never heard of them before, but saw that they're hiring in New York.

They try to tailor impact research reports to donors' priorities, aggregate donations for payment and tax purposes, and take a cut of between 2.25% and 7.25% of the donated amount. (The higher figure is for charities they provide research reports on, so it's actually a larger cut for the charities they recommend.) Page 13 describes the service levels/costs/benefits.

The fact that it's a for-profit company is a bit surprising. Does anyone have any thoughts about whether this seems useful, beneficial, likely to succeed, etc.?

comment by Andy_Schultz · 2015-01-20T01:44:41.441Z · EA(p) · GW(p)

For those of you that earn credit card points, you might want to check if your points can be used to donate money to charity. For my credit card, I'm able to donate more money per point than if I redeemed the points for cash. I believe these donations are tax deductible in the U.S. (I think they would go on line 17 in Schedule A).

Replies from: Tom_Ash
comment by Tom_Ash · 2015-01-20T11:21:40.739Z · EA(p) · GW(p)

Do you know of any credit cards that do this Andy? We could add it to the lists of EA actions on EA Hub and the EA Wiki.

Replies from: Andy_Schultz
comment by Andy_Schultz · 2015-01-21T03:42:25.168Z · EA(p) · GW(p)

I know Citi Forward and Citi Thank You cards earn donatable points, but the Citi Forward card is no longer being issued. For people looking for a new credit card, it looks like cash back cards like Citi Double Cash would beat Citi Thank You. Maybe the EA action could be to look for a cash back or reward card that will give you the most money to donate to charity, depending on your spending habits. For example, a cash back card offering 5% back on certain categories might beat the Citi Double Cash's consistent 2% back if you tended to spend money in those categories.
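To make the comparison above concrete, here's a minimal sketch of the arithmetic. All card names, reward rates, and spending figures are hypothetical, chosen only to illustrate how a 5%-on-categories card can beat a flat 2% card depending on spending habits:

```python
# Hypothetical comparison of how much donatable cash-back two reward cards
# generate, given annual spending broken down by category.
# All rates and spending figures below are made up for illustration.

def annual_rewards(spending_by_category, rate_by_category, default_rate):
    """Total cash-back earned: category-specific rate where one exists,
    otherwise the card's default rate."""
    return sum(
        amount * rate_by_category.get(category, default_rate)
        for category, amount in spending_by_category.items()
    )

spending = {"groceries": 6000, "gas": 2000, "other": 12000}

# A flat 2% card vs. a card with 5% on certain categories but 1% elsewhere.
flat_card = annual_rewards(spending, {}, 0.02)
category_card = annual_rewards(spending, {"groceries": 0.05, "gas": 0.05}, 0.01)

print(flat_card)      # 400.0
print(category_card)  # 520.0
```

With this spending pattern the category card yields more to donate; shift most spending into "other" and the flat card wins, which is why the right choice depends on your own habits.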

comment by Joseph_Chu · 2015-01-19T22:36:10.523Z · EA(p) · GW(p)

So, I have a slate of questions that I often ask people to try and better understand them. Recently I realized that one of these questions may not be as open-ended as I'd thought, in the sense that it may actually have a proper answer according to Bayesian rationality. Though, I remain uncertain about this. I've also posted this question to the Less Wrong open thread, but I'm curious what Effective Altruists in particular would think about this question. If you'd rather you can private message me your answer. Keep in mind the question is intentionally somewhat ambiguous.

The question is:

Truth or Happiness? If you had to choose between one or the other, which would you pick?

Replies from: Tom_Ash, Peter_Hurford, Larks
comment by Tom_Ash · 2015-01-20T11:23:52.781Z · EA(p) · GW(p)

All else being equal, I'd pick happiness.

What understanding do you get from this question, out of interest? Do particular groups tend to answer it one way or another?

Replies from: Joseph_Chu
comment by Joseph_Chu · 2015-01-20T14:03:23.924Z · EA(p) · GW(p)

Well, the way the question is formed, there are a number of different tendencies that this question seems to help gauge. One is obviously whether an individual is aware of the difference between instrumental and terminal goals. Another would be what kinds of sacrifices they are willing to make, as well as their degree of risk aversion. In general, I find most people answer truth, but that when faced with an actual situation of this sort, tend to show a preference for happiness.

So far I'm less certain about if particular groups actually answer it one way or another. It seems like cautious, risk averse types favour Happiness, while risk neutral or risk seeking types favour Truth. My sample size is a bit small to make such generalizations though.

Probably the most important understanding I get from this question is just what kind of decision process people use to decide situations of ambiguity and uncertainty, as well as how decisive they are.

Replies from: Tom_Ash
comment by Tom_Ash · 2015-01-20T22:02:54.639Z · EA(p) · GW(p)

It seems like cautious, risk averse types favour Happiness, while risk neutral or risk seeking types favour Truth.

Interesting. I'm struggling to imagine why that might be, any theories?

Replies from: Joseph_Chu
comment by Joseph_Chu · 2015-01-21T01:32:43.225Z · EA(p) · GW(p)

A possible explanation is simply that the truth tends to be some information that may or may not be useful. It might, with a small probability, be very useful information, like say, life saving information. The ambiguity of the question means that while you may not be happy with the information, it could conceivably benefit others greatly or not at all. On the other hand, guaranteed happiness is much more certain and concrete. At least, that's the way I imagine it.

I've had at least one person explain their choice as being a matter of truth being harder to get than happiness, because they could always figure out a way to be happy by themselves.

comment by Peter_Hurford · 2015-01-21T01:25:07.130Z · EA(p) · GW(p)

I think the hope is that there doesn't have to be a choice.

comment by Larks · 2015-01-20T23:48:11.025Z · EA(p) · GW(p)

Truth, no hesitation.

Replies from: Vincent_deB
comment by Vincent_deB · 2015-01-22T18:20:29.739Z · EA(p) · GW(p)

A big question but why?

comment by kdbscott · 2015-01-19T20:45:52.985Z · EA(p) · GW(p)

Does anyone know of good investigations of the impact of technological unemployment? Any EA people/orgs that have looked at it?

Replies from: Owen_Cotton-Barratt, Tom_Ash
comment by Owen_Cotton-Barratt · 2015-01-20T13:46:05.834Z · EA(p) · GW(p)

Of relevance is this report by Carl Frey. It's looking at which current jobs are vulnerable to automation.

comment by Tom_Ash · 2015-01-20T11:26:28.025Z · EA(p) · GW(p)

Here's something the Economist wrote. They did a special feature on this but I can't find a link right now.

comment by ImmaSix · 2015-01-19T18:43:51.175Z · EA(p) · GW(p)

Does anyone know (from experience) good articles/books on technology risks that aren't specifically about AI?

Is "Global Catastrophic Risks" by Bostrom worth reading in this context? It's from 2008; my concern is that it might be outdated.

Replies from: Daniel_Dewey, Larks, Giles
comment by Daniel_Dewey · 2015-01-20T11:46:02.801Z · EA(p) · GW(p)

There's this policy report from September 2014, Unprecedented Technological Risks, signed by Beckstead, Bostrom, Bowerman, Cotton-Barratt, MacAskill, Ó hÉigeartaigh, and Ord. Not a long read, but I'd expect the references to be among the best available.

comment by Larks · 2015-01-19T23:32:50.840Z · EA(p) · GW(p)

I thought it was excellent when I read it (in 2010), and I expect it's probably held up pretty well. I can't think of a better replacement.

comment by Giles · 2015-01-19T19:38:34.226Z · EA(p) · GW(p)

I'd suggest Global Catastrophic Risks as a good primer. (The essays aren't written by Bostrom; he co-edited the book)

comment by Evan_Gaensbauer · 2015-01-25T23:24:12.868Z · EA(p) · GW(p)

I have more ideas for posts to publish to this forum:

The Growth of Effective Altruism: Growing Bigger Vs. Growing Stronger At the 2014 Effective Altruism Summit, Rob Wiblin and William MacAskill argued that potentially the best action effective altruism as a movement could take is increasing its own size. I call this "growing bigger". Ostensibly, this is also in large part the current mission of the Centre for Effective Altruism and its "Effective Altruism Outreach". Anna Salamon of the Center For Applied Rationality seemed skeptical that this is the best way for the movement to improve itself. She espoused the perspective that supporters of effective altruism, both as individuals and as a whole community, might do better to ensure the community self-improves, increasing the movement's legitimate self-confidence that it can achieve the good outcomes it desires. In other words, she posits that increasing the good each effective altruist can do, i.e., the absolute good expected per individual, would be the way to go. I call this "growing stronger". Understanding the distinction between "growing bigger" and "growing stronger" seems to me crucial when discussing the "(movement) growth" of effective altruism.

Research Into Social Movements Over the last year, I've observed multiple individuals independent of the Centre for Effective Altruism suggesting that the ways social movements have historically grown and succeeded be studied, so effective altruism can learn whether it could successfully replicate their methods. However, I've noticed a dearth of updates or coordination on such a project. More than researching it myself, I'd be reaching out to the community to note and report on what others have learned thus far.

Blogging Carnival: Role Models I'm going to propose the monthly topic for the effective altruism blogging carnival for February be "Role Models". If this doesn't come to pass, though, I want to write about it anyway. It's a topic which excites me, so I'm willing to take suggestions for (co-)authoring a profile on an individual who has lived and acted in the spirit of effective altruism in the past or present, if not affiliated with the movement.

What Different Types of Organizations Can Do At the 2014 Effective Altruism Summit, I met multiple entrepreneurs who suggested that start-ups and for-profit efforts can, through their goods or services, provide an efficient mechanism for positive social impact, in addition to the money they generate for their owners or employees to donate. Since then, I've noticed this idea popping up more. Of course, start-ups contrast with bigger corporations. Additionally, I believe there are different types of non-profit organizations, and their differences are important. Charities doing direct work (e.g., the Against Malaria Foundation), foundations (e.g., Good Ventures, Charity Science), research think tanks (e.g., Givewell, the Centre for Effective Altruism), advocacy and awareness organizations (e.g., The Life You Can Save, Greenpeace), scientific research projects (e.g., the International Panel on Climate Change), and political advocacy organizations (e.g., Avaaz.org, Amnesty International) are all different.

  • Lumping all "for-profit" types of work, and all "non-profit" types of work, into two categories underrates the advantages and disadvantages of different ways to structure an organization driven toward a goal.
  • Different types of organizations differ across nations and legal codes, the cultures and traditions of their respective sectors, and their structural limitations. Effective altruism should be aware of these differences so it can figure out how best to effectively achieve goals for a given cause.

Building Relationships With Charities Within effective altruism, there are individuals who have succeeded in earning a high enough income that they can and do donate (tens of) thousands of dollars to a single charity each year. The focus of donations within effective altruism is on charities the community believes are working in neglected cause areas, so those charities may be smaller than more common ones (such as UNICEF or Greenpeace), and unaccustomed to such large donations. Owners of and investors in companies maintain relationships with executives to check how well they're operating those companies. However, while donations to a charity may be analogous to a profit-seeking investment, the relationship between a (large) donor and a charity may be quite different. Effective altruism is unusual in that its single large donors tend to care about the transparency and technical details of the charities they support more than philanthropists and donors not influenced by effective altruism do. So, maintaining a mutually respectful and courteous relationship may be more delicate and tenuous than average. It seems a guide for those earning to give, who build relationships with charities over time, could be useful. I don't consider myself remotely qualified to write this guide on my own, so I'll be seeking coauthors and reviewers extensively.

Building Relationships With Donors and Fundraisers There are supporters of effective altruism starting and managing unconventional non-profit organizations. I wouldn't be surprised if more of them do so in the coming years. Their financial and moral supporters may or may not be familiar with effective altruism, and non-profits that try to optimize for effectiveness. This guide would be an inverse to the one above, about how to maintain effective relationships with donors, both in general, and specifically about charities aiming to be more effective. This guide would be intended for maintaining relationships with ongoing donors, in particular ones who are concerned with and constructively critical of how a non-profit is run. How to effectively court new donors, or run successful fundraisers, seems like a separate guide. Thirdly, as they grow, non-profits aligned with effective altruism may form relationships with other organizations that boost their profile or raise funds on their behalf. How to effectively manage relations with them also seems important. Again, these are all topics on which I consider myself unqualified to write guides about, but I think are important enough for me to facilitate. So, I'll be seeking coauthors, reviewers, and experienced supporters extensively.

Replies from: Bitton, RyanCarey
comment by Bitton · 2015-01-26T03:09:34.968Z · EA(p) · GW(p)

I'm interested in the social movement research and in your blogging carnival suggestion.

comment by RyanCarey · 2015-01-25T23:35:47.863Z · EA(p) · GW(p)

Good suggestions.

I think you'll have more luck getting social movement research into the public domain by approaching people individually. In my experience, putting a general request out there is much more likely to fall flat due to the bystander effect than contacting the people you know have been involved.

comment by John_Maxwell (John_Maxwell_IV) · 2015-01-21T23:59:51.520Z · EA(p) · GW(p)

I created this thread on purchasing research effectively that may be of interest to folks here.

comment by Bitton · 2015-01-20T04:47:01.959Z · EA(p) · GW(p)

What do you think of the forum allowing private messaging and tagging people in posts?

Replies from: Ben_Kuhn, Ervin, RyanCarey
comment by Ben_Kuhn · 2015-01-20T05:01:30.193Z · EA(p) · GW(p)

It already allows private messaging. Go to the "messages" section and click "compose".

comment by Ervin · 2015-01-27T11:40:28.420Z · EA(p) · GW(p)

Tagging is a killer feature of Facebook.

comment by RyanCarey · 2015-01-20T08:41:53.204Z · EA(p) · GW(p)

Improving private messaging so that you can compose a message to any user from their user page is my priority in this domain. Tagging could also be good, I suppose.