Posts

Meetup : .impact workathon: Toronto node 2015-07-21T23:38:12.398Z · score: 0 (0 votes)
Iason Gabriel writes: What's Wrong with Effective Altruism 2015-07-21T19:16:19.652Z · score: 8 (8 votes)
Meetup : Toronto: Is existential risk reduction a good use of resources? 2015-03-16T04:27:36.677Z · score: 0 (0 votes)
Meetup : The most good you can do in 5 minutes 2015-01-25T22:53:57.568Z · score: 0 (0 votes)
Comments on Ernest Davis's comments on Bostrom's Superintelligence 2015-01-24T04:40:52.595Z · score: 2 (2 votes)
Another fundraiser report 2015-01-03T20:20:41.700Z · score: 5 (5 votes)
Toronto meetup writeup: 2014-12-11 2014-12-12T21:54:23.291Z · score: 1 (1 votes)
Anti Publication Bias Registry 2014-12-10T16:42:03.354Z · score: 5 (5 votes)
Stuck? Talk to an EA Buddy! 2014-12-10T04:40:01.782Z · score: 16 (16 votes)
Meetup : Toronto EA meetup 2014-12-04T20:17:39.072Z · score: 0 (0 votes)

Comments

Comment by giles on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-12T01:22:18.976Z · score: 22 (18 votes) · EA · GW

A Mindful Approach to Tackling those Yucky Tasks You’ve Been Putting Off

For many of us, procrastination is a problem. This can take many forms, but we’ll focus on relatively simple tasks that you’ve been putting off long-term.

Epistemic status: speculative, n=1 stuff.


Yucky Tasks

Yucky tasks may be thought of several ways:

  • things you’ve been putting off
  • tasks which generate complex, negative emotions.
  • that vague thing that you know is there but it's hard to get a grip on and you’re all like uhggggg

The connection to EA?

EA is not about following well-trodden paths. We’re all trying to do something different and new, stepping out of our comfort zones:

  • donating big sums of money to unusual causes
  • seeing the world through an unusual lens
  • reaching out to people we don’t know
  • planning our careers and our finances
  • and more
  • all while staying organized in our personal lives

Some of us may be exceptionally talented or productive in some domains, yet find certain tasks elusive or hard to get a grip on.


So what happens?

Most commonly, avoidance. This can go on until there’s some kind of shift: maybe we avoid something until it becomes super urgent, or maybe we just wait until our feelings around it become clearer.

Alternatively, forcing ourselves to jump right in, tackling the task “forcefully” using all our available willpower. Though this can get the job done, it can be unpleasant and unsustainable - we’ll remember all that negativity next time, making the next task more difficult. It’s especially disruptive when working with others.


What’s an alternative?

This talk is about discovering and mapping our mental landscapes surrounding a problem. Tasks, and their associated thoughts and emotions, can be mapped out in a rich web. Often, different sub-tasks will be associated with different emotions, and seeing this laid out can help with getting our emotional bearings, as well as practical problem-solving.

The result is unpacking a complex, muddied anxiety or resentment into something cleaner and truer. We’re still at early stages but we’re hoping to build this technique out into something robust that can help those of us in the EA movement overcome the blocks to personal effectiveness.

------------

(I would like to be part of the late session)

Comment by giles on Introducing the EA Funds · 2017-02-10T23:29:59.015Z · score: 1 (1 votes) · EA · GW

That's great, but the less actively I'm involved in the process the more likely I am to just ignore it. That might just be me though.

Comment by giles on Introducing the EA Funds · 2017-02-10T06:48:50.874Z · score: 6 (6 votes) · EA · GW

This is great!! Pretty sure I'd be giving more if it felt more like a coordinated effort and less like I have to guess who needs the money this time.

I guess my only concern is: how to keep donors engaged with what's going on? It's not that I wouldn't trust the fund managers, it's more that I wouldn't trust myself to bother researching and contributing to discussions if donating became as convenient as choosing one box out of 4.

Comment by giles on Is the community short of software engineers after all? · 2016-10-10T14:28:35.927Z · score: 1 (1 votes) · EA · GW

This, by the way, is what certificates of impact are for, although it's not a practical suggestion right now because they've only been implemented at the toy level.

The idea is to create a system where your comparative advantage, in terms of knowledge and skills, is decoupled from your value system. Two people can each work for whichever org most needs their skills, even though the other org better matches their values, and agree to swap impact with each other. (As well as the much more complex versions of that setup that would occur in real life.)

Comment by giles on Why we need more meta · 2015-09-27T17:47:48.464Z · score: 0 (0 votes) · EA · GW

Are you counting donations from people who aren't EAs, or are only relatively loosely so?

Yes. Looking at the survey data was an attempt to deal with this.

Comment by giles on Why we need more meta · 2015-09-27T14:42:52.651Z · score: 1 (1 votes) · EA · GW

I was also hesitant about CFAR, although for a slightly different reason - around half its revenue is from workshops, which looks more like people purchasing a service than altruism as such.

Good point regarding GPP: policy work is another of those grey areas between meta and non-meta.

Not sure about 80K: their list of career changes mostly looks like earning to give and working at EA orgs - I don't see big additional classes of "direct work" being influenced. It's possible people reading the website are changing their career plans in entirely different directions, but I have my doubts.

Not sure what you mean by e.g.3.

I totally get the point regarding GWWC and future earnings, but I'm not sure how to account for it. GWWC do a plausible-looking analysis that suggests expected future donations are worth 10x total donations to date. But I'm not sure that we can "borrow from the future" in this way when doing metaness estimates, and if we do I think we'd need a much sharper future discounting function to account for exponential growth of the movement.

Good point regarding OPP: My direct charity estimate only included the top recommended charities of GW, GWWC and ACE. The OPP grants come to an additional $7.8m in 2014 ("additional" because it isn't going to the direct charities I've already considered and isn't meta either).

Anyway, taking all this into consideration I get $3.2m meta, $62m non-meta for a ratio of 5%. (Plus $2.1 million in "grey area"). So we're getting close to agreement!

https://docs.google.com/spreadsheets/d/1PMw_q7vZ0oQgPbY3vrie3Xb0n_wifCidYzpxSDCTbPE/edit#gid=608881848

Some other caveats:

  • It doesn't measure non-financial contributions, such as running local chapters or volunteering for EA orgs.
  • Some of the money going to direct charities comes from people with no connection whatsoever to the EA movement (i.e. not influenced by GiveWell etc.)

Regarding the survey, do you feel that it's biased specifically towards those who prefer meta, or just those who identify as EA?

Comment by giles on Why we need more meta · 2015-09-27T05:10:41.126Z · score: 4 (4 votes) · EA · GW

I can't emphasize the exponential growth thing enough. A look at the next page on this forum shows CEA wanting to hire another 13 people. Meanwhile GiveWell were boasting of having grown to 18 full time staff back in March; now they have 30.

But the direct charities are growing like crazy too! It all makes it very easy to be off by a factor of 2 (and maybe I am in my above reasoning) simply by using out-of-date figures. Does anyone business-minded know the sort of reasoning and heuristics to use under growth conditions?

Comment by giles on Why we need more meta · 2015-09-27T04:49:53.993Z · score: 6 (6 votes) · EA · GW

I'm helping prepare a spreadsheet listing organizations and their budgets, which at some point will be turned into a pretty visualization...

Anyway, according to this sheet, meta budgets total around $4.2m (that's $2.1m GiveWell, $0.8m CEA and $0.8m CFAR, plus a bunch of little ones). That's more than "a couple", but direct charities' budgets total $52m so we're still shy of 10%.
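
For a rough check of the arithmetic (the three named orgs, with the "little ones" making up the remaining ~$0.5m, and the resulting meta-to-direct ratio):

$$ \$2.1\text{m} + \$0.8\text{m} + \$0.8\text{m} = \$3.7\text{m}, \qquad \frac{\$4.2\text{m}}{\$52\text{m}} \approx 8\% $$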

(Main caveats to this data: It's not all for exactly the same year, so anything which is taking off exponentially will skew it. Also I haven't checked the data particularly carefully).

I've also been counting x-risk organizations as not meta. That one's a bit ambiguous - on the one hand they do a lot of "priorities research and marketing", but on the other hand there isn't really an object-level tier of organizations beneath them that works in the same areas.

As to what self-identified effective altruists are up to: a quick look at the 2014 EA survey only yields the number of donations to each organization, not the amounts of money... but if we go with that, 20% of the donations are to organizations I've counted as "meta".

So my working conclusion would be that if you favour a 50% split across the community, you're looking good for putting all your eggs in meta. If you favour a 10-20% split, you may need to look a bit more carefully.

A final note of caution: you can only push in one direction. If you favoured a 20% meta split, and (just suppose it turned out that) only 5% of donations in your reference class went to meta, it doesn't automatically mean that you should donate to meta. There might be some other category, e.g. direct animal welfare charities, which were also under-represented according to your ideal pie. It's then up to you to decide which needs increasing more urgently.

Comment by giles on Direct Funding Between EAs - Moral Economics · 2015-07-28T20:31:34.635Z · score: 1 (1 votes) · EA · GW

Multiple donors could form coalitions to fund a single donee

Or to fund multiple donees.

Comment by giles on EA Facebook New Member Report · 2015-07-28T17:46:31.911Z · score: 1 (1 votes) · EA · GW

Let me know if you're expecting a surge of Facebook joins (as a result of the Doing Good Better book launch and EA Global) and want help messaging people.

Comment by giles on Certificates of impact · 2015-07-24T23:17:50.120Z · score: 1 (1 votes) · EA · GW

I'm guessing that for these to work, the ownership of certificates should end up reflecting who actually had what impact. I can think of two cases where that might not be so.

Regret swapping:

  • Person A donates $100 to charity X. Person B donates $100 to charity Y.
  • Five years later they both change their minds about which charity was better. They swap certificates.

So person A ends up owning a certificate for Y, and person B ends up owning a certificate for X, even though neither of them can really be said to have "caused" that particular impact.

Mistrust in the certificate system:

  • Foundation F buys impact certificates. It believes that by spending $1 on certificates, it is causing an equivalent amount of good as if it had donated $2 to charity X.
  • Person A is skeptical of the impact certificate system. She believes that foundation F is only accomplishing $0.50 worth of good with every $1 it spends on certificates (she believes the projects themselves are high value, but that if foundation F didn't exist then the work would have got done anyway).
  • Person A has a $100 budget to spend on charity.
  • Person A borrows $50 from her savings account and donates $150 to charity X. She sells the entire certificate to foundation F for $50 and deposits this back in her savings account.

Why would person A do this? She doesn't care about certificates, just about maximizing positive impact. As far as she is concerned, she has caused foundation F to give $50 to charity X, where otherwise that money would only have accomplished half as much good.

Why would foundation F do this? It believes in certificates, so as far as F is concerned, it has spent $50 to cause a $150 donation to charity X, where the other certificates it could have bought would only be equivalent to a $100 donation.
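
Spelling out the arithmetic of that example (a rough sketch in Python; the dollar figures and the two "worth" beliefs are the ones stated above):

```python
# Rough accounting of the example above (my own sketch; the dollar figures
# come from the example, the two beliefs about value are as stated).

budget_a, loan, cert_price = 100, 50, 50
donation = budget_a + loan          # A donates $150 to charity X

# Person A's view: F's $50 would otherwise buy certificates worth only
# $0.50 of good per dollar, because the work would have happened anyway.
a_without_trade = budget_a + cert_price * 0.5    # $125 of good
a_with_trade = donation                          # $150 of good caused
print("A's perceived gain:", a_with_trade - a_without_trade)   # 25.0

# Foundation F's view: $1 spent on certificates normally equals $2 donated.
f_without_trade = cert_price * 2                 # $100-equivalent from other certificates
f_with_trade = donation                          # certificate for a $150 donation
print("F's perceived gain:", f_with_trade - f_without_trade)   # 50
```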

Comment by giles on Moving Moral Economics Forward · 2015-07-24T21:41:57.689Z · score: 1 (1 votes) · EA · GW

I've just found out that Paul Christiano and Katja Grace are already buying certificates of impact.

Comment by giles on Room for more funding: Why doesn’t the Gates foundation just close the funding gap of AMF and SCI? · 2015-07-24T21:31:22.728Z · score: 0 (0 votes) · EA · GW

Just one comment: the essay asks "Why doesn’t the Gates foundation just close the funding gap of AMF and SCI?" but doesn't seem to offer an answer. The closest seems to be 3b/c which suggests it's a coordination problem or donor's dilemma: everyone is expecting everyone else to fund these organizations.

If that's the case, the relevant question would seem to be: what does the Gates foundation want? If the EA community finds something that GF wants that we can potentially offer (such as new high-risk high-return charities doing something totally innovative), then we can potentially do a moral trade with them.

Comment by giles on Moving Moral Economics Forward · 2015-07-24T01:06:15.358Z · score: 1 (1 votes) · EA · GW

Oh one other thing - I think the trickiest part of this system will be verifying whether someone has actually donated to a charity at the time they said they did. Every charity does it a different way.

Comment by giles on Moving Moral Economics Forward · 2015-07-24T01:00:17.916Z · score: 2 (2 votes) · EA · GW

I'm interested in moving moral economics forward in a different way: by creating some kind of online "moral market" and seeing what happens.

There are two possible systems I could implement: one based on points and one based on certificates of impact.

I'll describe the points-based system here, as it's the one I've thought through a bit more. I presume it theoretically diverges from a certificate of impact system, but I haven't thought through exactly how.

Users have points. The total number of points in the system is 1 billion.

At any time, a user with nonzero points can make a request that somebody else donate to a particular charity in exchange for some of those points.

Fundamentally that's the only mechanic that I'm imagining right now. Other bells and whistles can be added, such as prediction markets, or other goodies that you can purchase with points such as volunteer time.

The requests stay on the table until someone takes them up, so a (new or existing) user can acquire points by seeing which requests are currently active and donating to one of the relevant charities.
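
As a minimal sketch of that core mechanic (illustrative Python only, not the actual moral-market code; the user names, charity and numbers are made up):

```python
# Users hold points; anyone with points can post a request offering some of
# them to whoever donates to a named charity. Requests stay open until taken.

TOTAL_POINTS = 1_000_000_000   # fixed supply; balances should always sum to this

balances = {"alice": 600_000_000, "bob": 400_000_000}   # user -> points
requests = []                                           # open requests "on the table"

def post_request(requester, charity, points_offered):
    """A user with points asks someone else to donate to a charity."""
    assert 0 < points_offered <= balances[requester]
    requests.append({"requester": requester, "charity": charity,
                     "points": points_offered, "open": True})

def fulfil_request(donor, request):
    """Once the donor's donation to request['charity'] is verified,
    the offered points move from the requester to the donor."""
    assert request["open"]
    balances[request["requester"]] -= request["points"]
    balances[donor] = balances.get(donor, 0) + request["points"]
    request["open"] = False

# Example: Alice offers 1m points to whoever donates to AMF; Bob donates and collects.
post_request("alice", "AMF", 1_000_000)
fulfil_request("bob", requests[0])
```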

Why would anyone want points? Points can be used to influence which charities other people give to, how much, and when. (Although if everyone agrees that points are worthless then this leverage disappears).

What are some use cases? A charity is running a fundraiser, and its supporters all want each other to donate to the fundraiser as soon as possible, so that the charity's staff aren't tearing their hair out. If any of these supporters have points, they can use some of them to encourage other supporters to donate early, by raising the points-per-dollar value of the charity.

Any other use cases? Moral trade might be possible - donating to a charity becomes slightly more attractive if you get points in return, and those points are some reflection of how much other people like the charity. I don't know how this would play out in practice though.

Trading points sounds like a lot of work. Yes it would be! Possibly enough to wipe out the value gained by moral trading. So the system would need one other major feature: automatic trading.

How does automatic trading work? Each user assigns a subjective utilons-per-dollar-donated value to each charity, as well as a value to holding onto the cash themselves. The system calculates a utilon-per-point value somehow. It can then automatically set the donation request price to be (utilon per dollar of charity divided by utilon per point). The system can also make suggestions to the user to donate when utility of charity + utility of points you'd get back > utility of holding onto the money.
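
Here's a rough sketch of what that automatic-trading rule might look like (illustrative Python only; the function and parameter names are mine, and the utilons-per-point figure is assumed to be supplied by the system as described above):

```python
def request_price_points_per_dollar(utilons_per_dollar_charity, utilons_per_point):
    """Points to ask for per dollar donated to this charity:
    (utilons per dollar of charity) divided by (utilons per point)."""
    return utilons_per_dollar_charity / utilons_per_point

def should_donate(dollars, utilons_per_dollar_charity, points_offered,
                  utilons_per_point, utilons_per_dollar_cash):
    """Suggest donating when the value of the donation plus the points
    received exceeds the value of simply holding onto the money."""
    value_of_donation = dollars * utilons_per_dollar_charity
    value_of_points = points_offered * utilons_per_point
    value_of_cash = dollars * utilons_per_dollar_cash
    return value_of_donation + value_of_points > value_of_cash
```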

Aren't you glossing over some things here? Yes, several.

  • These prices and valuations are all at-the-margin, and will change as stuff gets bought and sold and spent. The system shouldn't ever suggest that you donate a million dollars to charity X, because your marginal value of holding onto the money would have gone way up in the middle of that.
  • The utilons-per-point value is calculated "somehow", possibly by looking at historical transactions and seeing which is the highest-utility charity whose donations can be bought with points.
  • This doesn't actually work though, because if you trade a donation for points, it doesn't mean that 100% of that donation is a consequence of your points. The person may have donated anyway, or someone else may have offered up the points anyway.

How is this any use to me if I'm not a consequentialist and don't believe in utilons? I haven't thought about that yet.

This is all just chit-chat, and we're never going to see this happen, right? Wrong. I'm working on it here, although it's currently little more than a login page and a couple of database tables. Development help welcome! https://github.com/edkins/moral-market

Comment by giles on Iason Gabriel writes: What's Wrong with Effective Altruism · 2015-07-22T00:20:17.425Z · score: 2 (2 votes) · EA · GW

I'm a little surprised by some of the other claims about what EAs are like, such as (quoting Singer): "they tend to view values like justice, freedom, equality, and knowledge not as good in themselves but good because of the positive effect they have on social welfare."

It may be true, but if so I need to do some updating. My own take is that those things are all inherently valuable, but (leaving aside far future and xrisk stuff), welfare is a better buy. I can't necessarily assume many people in EA agree with me though.

There's also some confusion in the language between what people in EA do, and what their representatives in GW and GWWC do. I'm thinking of:

(Effective altruists) assess the scale of global problems by looking at major reports and publications that document their impact on global well-being, often using cost-effectiveness analysis.

Comment by giles on Iason Gabriel writes: What's Wrong with Effective Altruism · 2015-07-21T23:58:19.077Z · score: 0 (0 votes) · EA · GW

There's another response that EAs could have to the priority/ultrapoverty strand, which is to bend their utility functions so that ultrapoverty is rated as even worse, and improvements at the ultrapoverty end are weighted as more important. Of course, however concave the utility function is, you can still construct a scenario where the people at the ultrapoverty end would be ignored.
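
To sketch why (my own illustration with a standard isoelastic utility function, where η controls concavity, Δ is the consumption gain available to someone in ultrapoverty, and δ is a small gain to each of N better-off people): raising η weights the worst off more heavily, but for any fixed η the weight is finite, so enough small gains to the better-off eventually outweigh the fixed gain to the ultrapoor:

$$ u(c) = \frac{c^{1-\eta}}{1-\eta}, \qquad u'(c) = c^{-\eta}, \qquad N\,\delta\,c_{\mathrm{rich}}^{-\eta} > \Delta\,c_{\mathrm{ultra}}^{-\eta} \ \text{ whenever } \ N > \frac{\Delta}{\delta}\left(\frac{c_{\mathrm{rich}}}{c_{\mathrm{ultra}}}\right)^{\eta} $$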

Comment by giles on Iason Gabriel writes: What's Wrong with Effective Altruism · 2015-07-21T20:14:45.606Z · score: 5 (5 votes) · EA · GW

I think that the priority/ultrapoverty strand of this argument is one place where you can't ignore nonhuman animals. My intuition says that they're among the worst off, and relatively cheap to help.

Comment by giles on Iason Gabriel writes: What's Wrong with Effective Altruism · 2015-07-21T20:08:29.950Z · score: 0 (0 votes) · EA · GW

My first thought on reading the "Two villages" thought experiment was that the village that was easier to help would be poorer, because of the decreasing marginal value of money. If this was so, you'd want to give all your money to the poorer one if your goal was to reduce "the influence of morally arbitrary factors on people's lives".

On the other hand, that gets reversed if the poorer village is the one that's harder to help. In that case fairness arguments would still seem to favour putting all your money in one village, just the opposite one from what consequentialists would favour. (So this problem can't be completely separated from the Ultrapoverty one.)

Comment by giles on Iason Gabriel writes: What's Wrong with Effective Altruism · 2015-07-21T20:03:12.127Z · score: 3 (3 votes) · EA · GW

One thing I find interesting about all the thought experiments is that they assume a one-donor, many-recipient model. That is, the morality of each situation is analyzed as if a single agent is making the decision.

Reality is many donors, many recipients, and I think this affects the analysis of the examples. Firstly because donors influence each other's behaviour, and secondly because moral goods may aggregate on the donor end even if they don't aggregate on the recipient end. I'll try and explain with some examples:

Two villages (a): each village currently receives 50% of the donations from other donors. Enough of the other donors care about equality that this number will stay at 50% whichever one you donate to (because they'll donate to whichever village receives less than 50% of the funds). So whether you care about equality or not, as a single donor your decision doesn't matter either way.

Two villages (b): each village currently receives 50% of the donations from other donors, but this time it's because the other donors are donating carelessly. Moral philosophers have decided that the correct allocation (balancing equality with overall benefit) is for one village to receive 60% of donations and the other to receive 40%. As a relatively small donor, your moral duty then is to give all your money to one village, to try and nudge that number up as close to 60% as you can.

Medicine (a): Philosophers have decided the ideal distribution is 90% condoms and 10% ARVs. Depending what the actual distribution is, it might be best to put all your money into funding condoms, or all your money into funding ARVs, and only if it's already right on the mark should you favour a 90/10 split.

I don't think the Ultrapoverty, Sweatshop and Participation examples are affected by this particular way of thinking though.

I just get the feeling that something like consequentialism will emerge, even if you start off with very different premises, once you take into account other donors giving to overlapping causes but with different agendas. Or at least, that this would be so for as long as people identifying with EA remain a tiny minority.

Comment by giles on Preventing human extinction · 2015-07-19T20:48:11.697Z · score: 0 (0 votes) · EA · GW

I have a minor philosophical nitpick.

No sane person would say, “Well, the risk of a nuclear meltdown at this reactor is only 1 in 1000

There are (checks Wikipedia) 400-ish nuclear reactors, which means that if everyone followed this reasoning, the overall risk of a meltdown somewhere would be pretty high.
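
As a rough back-of-the-envelope calculation (treating the 1-in-1000 figure as an independent per-reactor meltdown probability):

$$ P(\text{at least one meltdown}) = 1 - \left(1 - \tfrac{1}{1000}\right)^{400} \approx 0.33 $$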

Existential risks with low probabilities don't add up in the same way. It's my belief that the magnitude of a risk equals the badness times the probability (which for xrisk comes out to very, very bad) but not everyone might agree with me, and I'm not sure the nuclear reactor example would convince them.

Comment by giles on Should your choice of charity change based on how much money you have? · 2015-02-03T14:07:13.513Z · score: 1 (1 votes) · EA · GW

some of the Gates Foundation work is higher impact than GiveWell top charities

Hasn't GiveWell also said that large orgs tend to do so many different things that some end up being effective and others not? Does this criticism apply to the Gates Foundation?

Comment by giles on Stuck? Talk to an EA Buddy! · 2015-01-27T02:59:40.689Z · score: 1 (1 votes) · EA · GW

I've got 16 people on the list and nominally made 5 pairings. In a while I'll prod people to see if they're actually talking to each other.

Comment by giles on January Open Thread · 2015-01-24T17:33:19.452Z · score: 1 (1 votes) · EA · GW

I think you're imagining a scenario where every organization either:

  • is not seriously addressing existential risk, or
  • has run out of room for more funding

One reason this could happen would be organizational: organizations lose their sense of direction or initiative, perhaps by becoming bloated on money or dragged away from their core purpose by pushy donors. This doesn't feel stable, as you can always start new organizations, but there may be a lag of a few years between noticing that existing orgs have become rubbish and getting new ones to do useful stuff.

Another reason this could happen would be more strategic: that humanity actually can't think of anything it can do that will reduce existential risk. Perhaps there's a fear that meddling will make things worse? Orgs like FHI certainly put resources into strategizing, so this setup wouldn't be the result of a lack of creative thinking. It might be something more fundamental: ensuring the stability of a system as complex as today's technological world may just be a Really Hard Problem.

Even if we don't hit a complete wall, we might hit diminishing returns. If there turns out to be some moral or practical reason why xrisk is on a par with poverty and animals (in terms of importance), then EA would essentially be running out of stuff to do.

Which we eventually want - but not while the world is full of danger and suffering.

Comment by giles on January Open Thread · 2015-01-24T04:56:51.818Z · score: 0 (0 votes) · EA · GW

My post is here.

Comment by giles on Comments on Ernest Davis's comments on Bostrom's Superintelligence · 2015-01-24T04:41:54.908Z · score: 1 (1 votes) · EA · GW

OK, finished draft done. Sorry for posting it by accident earlier!

Comment by giles on Comments on Ernest Davis's comments on Bostrom's Superintelligence · 2015-01-24T04:41:19.725Z · score: 0 (0 votes) · EA · GW

You're absolutely right. I've changed that bit in the final draft.

Comment by giles on January Open Thread · 2015-01-22T02:37:05.166Z · score: 2 (2 votes) · EA · GW

What Role Do Small-to-Medium Donors Play In the Future of Effective Altruism

I think this fits into a bigger picture. To punch above your weight in terms of impact, you need to know something (or have a skill) that most other people don't. Currently the thing you have to know is "there's this thing called EA and earning to give". As that meme spreads, you'd expect its impact to dwindle, assuming an upper bound on the total amount of good that can be done given current resources.

(Number of earning-to-givers) * (average good done per earning-to-giver) <= (total amount of good available to be done).

The same equation applies to "knowing about everything that's going on inside EA", so creating better memes than earning to give doesn't appear to solve the problem.

What would help though, would be:

  • finding where my model of what's going on is an oversimplification, and focussing some attention there (maybe with xrisk the amount of good to be done is so huge that we don't hit a limit for a while)
  • increasing the "total amount of good that can be done given current resources".

The second one would seem to suggest increasing the total resources available for doing good - this isn't quite the same as growing the economy, because many agents in the economy are selfish, but it feels related and probably involves an entrepreneurial spirit.

I think the EA algorithm would look something like this:

  • Do what everyone else in EA is doing
  • Think of something new, and if it can be shown to be effective (in the sense of growing things, not just directing resources away from somewhere else even in an indirect sense) then roll it out to the rest of the EA movement.

End ramble.

Comment by giles on January Open Thread · 2015-01-22T01:17:42.950Z · score: 2 (2 votes) · EA · GW

Hi Anonymous,

Really sorry to hear that you feel like that. I'm glad you find writing about it therapeutic. One thing you can try - it's worked for me - is to write down a "toolbox" of things (such as writing) that allow you to feel better about yourself when you're feeling bad.

This could even include taking 1-2 hours to criticize yourself - if that's what works for you. But having other options might help. Writing them down somewhere visible can help too.

The reason I'm bringing this up is that - for me at least - the mindframe you describe isn't helpful for making big decisions, or even for applying to jobs. So I think that knowing when you're at your best, and knowing some things you can try to help you return to that state, is great.

Also really sorry to hear that you're feeling low status on account of a successful role model. I've felt that one too, although for me it wasn't a parent but rather other members of the EA community who I saw as having accomplished more than I had. I'd love it if there were some neat package of advice I could give here, but the only way out I know of involves a lot of grit - gradually learning to compare yourself to your own standards and finding success spirals.

It's really sweet and amazing that you're not blaming anyone in the community for making you feel this way - I know it's not anyone's intention to get you to choose a career you're not at all passionate about for EA reasons, but some of the advice can sometimes sound a bit like that.

Also bear in mind that the career advice from 80,000 Hours isn't to get it right first time, but to allow yourself room to explore and find new directions. Some high-profile EAs have done exactly that, doing a career u-turn when they discover some other path that for them is more effective or more satisfying. So it may be that there's a fun, fulfilling career out there for you - that's effective in helping others - and that lies outside of STEM. Or maybe your current field is right for you after all, and you just need to find the right people to make it exciting for you.

Good luck, and thanks so much for opening up. I'm sure what you're saying resonates with a lot of people.

Comment by giles on January Open Thread · 2015-01-20T03:11:30.190Z · score: 6 (6 votes) · EA · GW

I was reading The Phatic and the Anti-Inductive on Slate Star Codex.

Why's this relevant?

Birthday and Christmas charity fundraisers of course!

There is a sense in which the concept of a birthday fundraiser is anti-inductive - if they worked, and everyone realised they worked, then a lot more people would be doing them and they wouldn't work so well any more.

But actually running a fundraiser feels more like phatic communication. You're really communicating very little information about the charity you want people to give money to, but people seem to appreciate it and (as far as I know) very rarely get mad.

So is there some kind of lesson here: that in some situations one mindset is better and in other situations a different one is... but that you should always remember the other person may have a very different mindset from your own?

Comment by giles on Comments on Ernest Davis's comments on Bostrom's Superintelligence · 2015-01-20T02:30:45.188Z · score: 0 (0 votes) · EA · GW

Yes - I clicked on "save and continue" and what I got was "submit". I'd better get back to work on it, I guess!

Comment by giles on January Open Thread · 2015-01-19T19:42:25.055Z · score: 0 (0 votes) · EA · GW

I'll bite. It may take a new top-level post though.

Comment by giles on January Open Thread · 2015-01-19T19:38:34.226Z · score: 1 (1 votes) · EA · GW

I'd suggest Global Catastrophic Risks as a good primer. (The essays aren't written by Bostrom; he co-edited the book)

Comment by giles on The Outside Critics of Effective Altruism · 2015-01-16T05:37:55.926Z · score: 1 (1 votes) · EA · GW

I was googling "effective altruism arrogant" and it turned up a few links which I'm posting here so I don't lose them:

Comment by giles on The Outside Critics of Effective Altruism · 2015-01-13T19:05:46.380Z · score: 1 (1 votes) · EA · GW

Thanks - I knew they were involved in the EA Summit but I didn't know they were the sole organizers. I also knew they weren't soliciting donations. I partially retract my earlier statement about them! (Also I hope I didn't cause anyone any offense - I've met them and they're super super nice and hardworking too)

Comment by giles on The Outside Critics of Effective Altruism · 2015-01-13T18:55:12.377Z · score: 0 (0 votes) · EA · GW

Thanks - most of those names ring a bell but the Selfish Gene is the only one I've read. I guess some of the value of reading them is gone for me now that my mind is already changed? But I'll keep them in mind :-)

Comment by giles on The Outside Critics of Effective Altruism · 2015-01-09T17:57:29.050Z · score: 0 (0 votes) · EA · GW

I don't know if this is relevant to the criticism theme, but I found it necessary to take some of Hanson's ideas seriously before becoming involved in EA, even though his insistence on calling everything hypocrisy was a turn-off for me. Are there any resources on how we evolved to be such-and-such a way (interested in self and immediate family, signalling, etc.) but arguing that this is actually a good thing, because once we know it we can do better?

Comment by giles on The Outside Critics of Effective Altruism · 2015-01-06T15:16:40.809Z · score: 0 (0 votes) · EA · GW

However, I haven't seen a smart outside person spend a considerable amount of time to evaluating and criticising effective altruism.

Would they do it if we paid them?

Comment by giles on TLYCS Pamphleting Pilot Program · 2015-01-06T15:07:40.651Z · score: 0 (0 votes) · EA · GW

Definitely. Some of the team at least are EA insiders and lurking on this very forum, and they'll already know about TLYCS for sure.

Comment by giles on TLYCS Pamphleting Pilot Program · 2015-01-06T14:58:09.902Z · score: 0 (0 votes) · EA · GW

Oh yeah, good point.

Comment by giles on The Outside Critics of Effective Altruism · 2015-01-06T01:22:10.953Z · score: 5 (7 votes) · EA · GW

Another criticism: the movement isn't as transparent as you might expect. (Remember, GiveWell was originally the Clear Fund - started up not necessarily because existing charitable foundations were doing the wrong thing, but because they were too secretive).

When compiling this table of orgs' budgets, I found that even simple financial information was difficult to obtain from organizations' websites. I realise I can just ask them - and I will - but I'm thinking about the underlying attitude. (As always, I may be being unfair).

Also, what Leverage Research are up to is anybody's guess.

Comment by giles on The Outside Critics of Effective Altruism · 2015-01-06T01:11:48.395Z · score: 0 (0 votes) · EA · GW

"Giles has passed on some thoughts from a friend" is one of the things cited, so if a particular criticism isn't listed we can assume it's because Ryan doesn't know about it, not that it's inherently too low status or something. I definitely want to hear what your friends have to say!

Comment by giles on TLYCS Pamphleting Pilot Program · 2015-01-06T01:04:36.848Z · score: 4 (4 votes) · EA · GW

Also, have you got in touch with the good people at Charity Science?

Comment by giles on TLYCS Pamphleting Pilot Program · 2015-01-06T01:02:47.381Z · score: 1 (1 votes) · EA · GW

Great idea!

Does the pamphleting have to be done on Fridays, or can it be done on pseudo-random days? (I'm thinking about distinguishing the signal from the pamphlets from, e.g., people spending more time on the Internet during weekends. Pseudo-random spikes might require fancier math to pick out though, and of course you need to remember which days you handed out pamphlets!)
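
To illustrate what that might look like (a rough sketch of my own, not TLYCS's actual analysis; the column names and numbers are made up), one could compare pamphlet days only against non-pamphlet days falling on the same weekday:

```python
# Compare pamphlet vs non-pamphlet days within each weekday, so a generic
# "weekends are busier" effect doesn't masquerade as a pamphleting result.

import pandas as pd

daily_data = pd.DataFrame({
    "date": pd.date_range("2015-01-01", periods=28),
    "pledges": [3, 2, 5, 4, 2, 1, 2, 6, 2, 4, 3, 2, 1, 3,
                4, 2, 7, 3, 2, 2, 1, 5, 3, 4, 6, 2, 1, 2],
    "pamphlet_day": [False, False, True, False, False, False, False,
                     True, False, False, False, False, False, False,
                     False, False, True, False, False, False, False,
                     True, False, False, True, False, False, False],
})
daily_data["weekday"] = daily_data["date"].dt.day_name()

# Average pledges on pamphlet vs non-pamphlet days, within each weekday.
by_weekday = (daily_data.groupby(["weekday", "pamphlet_day"])["pledges"]
              .mean().unstack())
by_weekday["lift"] = by_weekday[True] - by_weekday[False]
print(by_weekday)
```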

Can you ask people, when they take the pledge, how they found out about TLYCS? (This will provide an under-estimate, but it can be used to sanity-check other estimates). (Also it's a bit ambiguous if someone had e.g. vaguely heard of TLYCS or Singer before, but pamphleting prompted them to actually take the pledge)

There's a typo in your text ("require's") - make sure you get the pamphlets proof-read :)

Do you know in advance what you expect, in terms of:

  • How many pamphlets you will distribute
  • What the effect will be?

(Last I heard, EA was using predictionbazaar.com and predictionbook.com as its prediction markets)

Comment by giles on The Outside Critics of Effective Altruism · 2015-01-05T21:54:28.651Z · score: 2 (2 votes) · EA · GW

Here's the link to the Facebook group post in case people add criticisms there.

Glad you linked to Holden Karnofsky's MIRI post. Other possibly relevant posts from the GiveWell blog:

There are more on a similar philosophical slant (search for "explicit expected value") but the above seem the most criticismy.

Comment by giles on The Outside Critics of Effective Altruism · 2015-01-05T19:41:08.865Z · score: 2 (2 votes) · EA · GW

Great topic!

I think you missed this one from Rhys Southan which is lukewarm about EA: Art is a waste of time says EA

I don't see the Schambra piece as particularly vitriolic.

I don't know where to find good outside critics, but I think there's still value in internal criticism, as well as doing a good job processing the criticism we have. (I was thinking of creating a wiki page for it, but haven't got around to it yet).

Some self-centered internal criticism; I don't know how much this resonates with other people:

  • I posted some things on LW back in 2011 which were badly received (and which I'm too embarrassed to link to). This was either a problem with me, or the LW community, or more likely both
  • I spend a lot of time on EA social media when I could be doing more productive stuff
  • I feel like a standard-issue generic EA - like I've internalized all the memes but don't have huge amounts of unique ideas or abilities to bring to the table
  • Similarly my mental model of people in the EA movement is that they're fairly interchangeable, rather than each having their own strengths, weaknesses and personalities
  • In particular, I haven't really managed to make friends with anyone I met through EA
  • I spend a lot of time talking about EA but haven't actually donated much to charity yet
  • In the past I've felt strong affiliation to an EA subtribe (xrisk), viewing the poverty and animal people as outgroups

Also:

  • We mostly speak English and are not as ethnically diverse as we could be
  • One of the central premises of EA, that some charities are so very many times more effective than others, seems pretty bold. I'd like to be able to point to a mountain of evidence to back it up but I'm not sure where this is to be found.

Comment by giles on Another fundraiser report · 2015-01-04T15:43:31.186Z · score: 0 (0 votes) · EA · GW

Is it working now? I wondered why I wasn't getting more karma ;-)

Is anybody else having problems with the image upload feature of the forum?

Comment by giles on Problems and Solutions in Infinite Ethics · 2015-01-04T03:32:51.025Z · score: 0 (0 votes) · EA · GW

there's going to be some optimal level of abstraction

I'm curious what optimally practical philosophy looks like. This chart from Diego Caleiro appears to show which philosophical considerations have actually changed what people are working on:

http://effective-altruism.com/ea/b2/open_thread_5/1fe

Also, I know that I'd really like an expected-utilons-per-dollar calculator for different organizations to help determine where to give money to, which surely involves a lot of philosophy.

Comment by giles on Figuring Good Out - Launch Thread · 2015-01-04T03:19:25.492Z · score: 1 (1 votes) · EA · GW

Note: I didn't actually give this a go.

Comment by giles on Another fundraiser report · 2015-01-04T01:34:45.892Z · score: 0 (0 votes) · EA · GW

As a separate point, I'm not sure what % of unrestricted donations to GiveWell go to its own operations as opposed to being granted to its recommended charities.