Comment by samdeere on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-29T00:30:30.310Z · score: 2 (2 votes) · EA · GW

Re 1, this is less of a worry to me. You're right that this isn't something that SHA256 has been specifically vetted for, but my understanding is that the SHA-2 family of algorithms should have uniformly-distributed outputs. In fact, the NIST beacon values are all just SHA-512 hashes (of a random seed plus the previous beacon's value and some other info), so this method vs the NIST method shouldn't have different properties (although, as you note, we didn't do a specific analysis of this particular set of inputs — noted, and mea culpa).

However, the point re 2 is definitely a fair concern, and I think that this is the biggest defeater here. As such, (and given the NIST Beacon is back online) we're reverting to the original NIST method.

Thanks for raising the concerns.

ETA: On further reflection, you're right that it's problematic knowing whether the first 10 hex digits will be uniformly distributed given that we don't have a full-entropy source (which is a significant difference between this method and the NIST beacon — we just made sure that the method had greater entropy than the 40 bits we needed to cover all the possible ticket values). So, your point about testing sample values in advance is well-made.

Comment by samdeere on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-28T23:49:30.452Z · score: 7 (2 votes) · EA · GW

The NIST Beacon is back online. After consulting a number of people (and notwithstanding that we previously committed to not changing back), we've decided that it would in fact be better to revert to using the NIST beacon. I've edited the post text to reflect this, and emailed all lottery participants.

Comment by samdeere on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T18:07:11.346Z · score: 3 (3 votes) · EA · GW

AFAIK random.org offers to run lotteries for you (for a fee), but all participants still need to trust them to generate the numbers fairly. It's obviously unlikely that there would in fact be any problem here, but we're erring on the side of having something that's easier for an external party to inspect.

Comment by samdeere on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T18:03:41.495Z · score: 6 (4 votes) · EA · GW

The draw should have the following properties:

  • The source of randomness needs to be generated independently from both CEA and all possible entrants
  • The resulting random number needs to be published publicly
  • The randomness needs to be generated at a specific, precommitted time in the future
  • The method for arriving at the final number should ideally be open to public inspection

This is because, if we generated the number ourselves, or used a private third party, there would be no good guarantees against collusion. Entrants in the lottery could reasonably ask 'how do I know that the draw is fair?', especially as the prize pool is large enough that it could incentivise cheating. The future precommitment is important because it guarantees that we can't secretly know the number, and the specific timing is important because it means that we can't just keep waiting for numbers to be generated until we see one that we like the look of.

The method proposed above means that anyone can see how we arrived at the final random number, because it takes a public number that we can't possibly influence, and then hashes it using SHA256, which is well-verified, deterministic (i.e. anyone can run it on their own computer and check our working) and distributes the possible answers uniformly (so everyone has an equal chance of winning).
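As a concrete sketch of that process (the seed string, ticket count, and the modulo reduction below are illustrative assumptions, not the exact production method):

```python
import hashlib

# Hash a public, unpredictable string (e.g. data about a future
# earthquake from a public API) and map the first 10 hex digits
# (40 bits of output) onto the ticket range. The seed is made up.
def draw_winning_number(public_seed: str, num_tickets: int) -> int:
    digest = hashlib.sha256(public_seed.encode("utf-8")).hexdigest()
    # Reducing modulo num_tickets is a simplification for this sketch;
    # it introduces a tiny bias unless num_tickets divides 2**40 exactly.
    return int(digest[:10], 16) % num_tickets

winner = draw_winning_number("2019-01-28T12:00Z|M5.2|35.7,-117.5|10km", 1_000_000)
```

Anyone can re-run the same hash on the published seed and check that the announced winner matches.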

Typical lottery drawings have these properties too: live broadcast, studio audience (i.e. they are publicly verifiable), balls being mixed and then picked out of a machine (i.e. an easy-to-inspect, uniformly-distributed source of randomness that, because it is public, cannot be gamed by the people running the lottery).

Earthquakes have the nice property that their incidence follows a rough power law distribution (so you know approximately how regularly they'll happen), but the specifics of the location, magnitude, depth or any other properties of any given future earthquake are entirely unpredictable. This means that we know that there will be a set of unpredictable (i.e. random) numbers generated by seismometers, but we (and anyone trying to game the lottery) have no way of knowing what they will be in advance.

(This is not actually that different to how your computer generates randomness — it uses small unpredictable events, like the very precise time between keystrokes, or tiny changes in mouse direction, to generate the entropy pool for generating random numbers locally. We're just using the same technique, but allowing people to see into the entropy pool).

Other plausible sources of randomness we considered included the block hash of the first block mined after the draw date on the Bitcoin blockchain, and the numbers of a particular Powerball drawing.

Comment by samdeere on Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries · 2019-01-25T17:11:38.170Z · score: 3 (5 votes) · EA · GW

Agree with the sentiment, but we're most definitely not rolling our own crypto. The method above relies on the public and extremely-widely-vetted SHA256 algorithm. This algorithm has the nice property that even slightly different inputs produce wildly different outputs; it should also distribute those outputs uniformly across the entire possibility space. This means it would be useless to try to brute-force a prediction, because each of your candidate inputs would have an even chance of ending up basically anywhere.

For example, compare the input strings 1111111111111111111111111111 and 1111111111111111111111111112 with their SHA256 outputs:

sha256(1111111111111111111111111111)
  = fe16863cfd4015c58da63aa5d2fe80e6e1fcd0bbdd57296fe28844cc7d79581b


sha256(1111111111111111111111111112)
  = b74822540995e7aa1b50a4d9d23a4b13aff99910c3c2111b9bf649e947e5f49c

It doesn't matter how much of the API response remains the same (for example, we could pad the input of every hash we generated with the same fixed string and have the same randomness properties as the proposal above). All that matters is that each response is going to be (unpredictably) different from the next.

ETA: It's perhaps more helpful to see the digits from the API response as a publicly verifiable seed to a pseudorandom number generator, rather than as the random number itself.
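On that framing, a minimal sketch (the seed digits below are hypothetical, standing in for whatever the API returns):

```python
import random

# Treat the publicly verifiable digits as a seed for a deterministic
# PRNG, then draw a ticket from the range of possible ticket numbers.
rng = random.Random("4810237569")          # hypothetical public seed
winning_ticket = rng.randrange(1_000_000)  # uniform over the ticket range

# Anyone re-running this with the same published seed gets the same ticket.
```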

Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries

2019-01-24T22:22:45.456Z · score: 7 (8 votes)
Comment by samdeere on EA Funds - An update from CEA · 2018-08-07T19:39:10.365Z · score: 3 (3 votes) · EA · GW

Hey Eli – there has definitely been thinking on this, and we've done a shallow investigation of some options. At the moment we're trying to avoid making large structural changes to the way EA Funds is set up that have the potential to increase accounting complexity (and possibly audit compliance complexity too), but this is in the pipeline as something we'd eventually like to make happen, especially as the total holdings get larger.

Comment by samdeere on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-07-30T21:42:36.919Z · score: 13 (13 votes) · EA · GW

The grant payout reports are now up on the EA Funds site:

Note that the Grant Rationale text is basically the same for both, as Nick has summarised his thinking in one document, but the payout totals reflect the amount disbursed from each fund.

Comment by samdeere on EA Forum 2.0 Initial Announcement · 2018-07-21T16:07:00.167Z · score: 2 (2 votes) · EA · GW

Two thoughts, one on the object-level, one on the meta.

On the object level, I'm skeptical that we need yet another platform for funding coordination. This is more of a first-blush intuition, and I don't propose we have a long discussion on it here, but just wanted to add my $0.02 as a weak datapoint. (Disclosure — I'm part of the team that built EA Funds and work at CEA which runs EA Grants so make of that what you will. Also, to the extent that small projects are falling through the gaps because of evaluation-capacity constraints, CEA is currently in the process of hiring a Grants evaluator.)

On the meta level (i.e. how open should we be to adding arbitrary integrations that can access a user's forum account data), I think there's definitely some merit to this, and I can envisage cool things that could be built on top of it. However, my first-blush take is that providing an OAuth layer, exposing user data etc. is unlikely to be a very high priority (at least from the CEA side) when weighed against other possible feature improvements and other CEA priorities. This is especially so given the likely time cost of maintaining the auth system where it interfaces with other services, and the magnitude of the impact I'd expect an EA Forum data integration to have. However, as you note, the LW codebase is open source, so I'd suggest submitting an issue there, discussing it with the core devs and making the case, and possibly submitting a PR if it's something that would be sufficiently useful to a project you're working on.

Comment by samdeere on EA Forum 2.0 Initial Announcement · 2018-07-21T01:32:16.746Z · score: 5 (5 votes) · EA · GW

Thanks for the comments on this Marcus (+ Kyle and others elsewhere).

I certainly appreciate the concern, but I think it's worth noting that any feedback effects are likely to be minor.

As Larks notes elsewhere, the scoring is quasi-logarithmic — to gain one extra point of voting power (i.e. to have your vote be able to count against that of a single extra brand-new user) is exponentially harder each time.

Assuming that it's twice as hard to get from one 'level' to the next (meaning that each 'level' has half the number of users than the preceding one), the average 'voting power' across the whole of the forum is only 2 votes. Even if you make the assumption that people at the top of the distribution are proportionally more active on the forum (i.e. a person with 500,000 karma is 16 times as active as a new user), the average voting power is still only ≈3 votes.

Given a random distribution of viewpoints, this means that it would take the forum's current highest-karma users (≈5,000 karma) 30-50 times as much engagement in the forum to get from their current position to the maximum level. Given that those current karma levels have been accrued over a period of several years, this would entail an extreme step-change in the way people use the forum.

(Obviously this toy model makes some simplifying assumptions, but these shouldn't change the underlying point, which is that logarithmic growth is slooooooow, and that the difference between a logarithmically-weighted system and the counterfactual 1-point system is minor.)
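The toy model is easy to check numerically (assuming, as above, that each voting-power 'level' has half the users of the one below it):

```python
# Share of users with voting power k is 2**-k under the halving
# assumption; the activity-weighted variant is a separate assumption
# and isn't modelled here.
levels = range(1, 31)  # 30 levels is effectively infinite for this sum
share = [2**-k for k in levels]
total = sum(share)
avg_power = sum(k * s for k, s in zip(levels, share)) / total
# avg_power comes out at ~2.0, matching the '2 votes' figure above
```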

This means that the extra voting power is a fairly light thumb on the scale. It means that community members who have earned a reputation for consistently providing thoughtful, interesting content *can* have a slightly greater chance of influencing the ordering of top posts. But the effect is going to be swamped if only a few newer users disagree with that perspective.

The emphasis on can in the preceding sentence is because people shouldn't be using strong upvotes as their default voting mechanism — the normal-upvote variance will be even lower. However, if we thought this system was truly open to abuse, a very simple way we could mitigate this is to limit the number of strong upvotes you can make in a given period of time.

There's an intersection here with the community norms we uphold. The EA Forum isn't supposed to be a place where you unreflectively pursue your viewpoint, or about 'winning' a debate; it's a place to learn, coordinate, exchange ideas, and change your mind about things. To that end, we should be clear that upvotes aren't meant to signal simple agreement with a viewpoint. I'd expect people to upvote things they disagree with but which are thoughtful and interesting etc. I don't think for a second that there won't be some bias towards just upvoting people who agree with you, but I'm hoping that as a community we can ensure that other things will be more influential, like thoughtfulness, usefulness, reasonableness etc.

Finally, I'd also say that the karma system is just one part of the way that posts are made visible. If a particular minority view is underrepresented, but someone writes a thoughtful post in favour of that view, then the moderation team can always promote it to the front page. Whether this seems good to you obviously depends on your faith in the moderation team, but again, given that our community is built on notions like viewpoint diversity and epistemic humility, then the mods should be upholding these norms too.

Comment by samdeere on EA Forum 2.0 Initial Announcement · 2018-07-20T23:48:54.074Z · score: 0 (0 votes) · EA · GW

Yeah MoneyForHealth, it does seem like it would be useful if you can point out instances of this happening on LW. Then we'll have a better shot at figuring out how it happened, and avoiding it happening with the EA Forum migration.

Comment by samdeere on EA Forum 2.0 Initial Announcement · 2018-07-20T23:29:27.805Z · score: 2 (2 votes) · EA · GW

Implementing the same system here makes the risks correlated.

The point re correlation of risks is an interesting one — I've been modelling the tight coupling of the codebases as a way of reducing overall project risk (from a technical/maintenance perspective), but of course this does mean that we correlate any risks that are a function of the way the codebase itself works.

I'm not sure we'll do much about that in the immediate term because our first priority should be to keep changes to the parent codebase as minimal as possible while we're migrating everything from the existing server. However, adapting the forum to the specific needs of the EA community is something we're definitely thinking about, and your comment highlights that there are good reasons to think that such feature differences have the important additional property of de-correlating the risks.

Feature request: integrate the content from the EA fora into LessWrong in a similar way as alignmentforum.org

That's unfortunately not going to be possible in the same way. My understanding is that the Alignment Forum beta is essentially running on the same instance (server stack + database) as the LessWrong site, and some posts are just tagged as 'Alignment Forum' which makes them show up there. This means it's easier to do things like have parallel karma scores, shared comments etc.

We see the EA Forum as a distinct entity from LW, and while we're planning to work very closely with the LW team on this project (especially during the setup phase), we'd prefer to run the EA Forum as a separate, independent project. This also gives us the affordance to do things differently in the future if desired (e.g. have a different karma system, different homepage layout etc).

Comment by samdeere on Announcing the 2017 donor lottery · 2017-12-21T22:23:33.797Z · score: 1 (1 votes) · EA · GW

An update on this: Cryptocurrency donations are now live on the site, so you can now enter the lottery (or make a regular donation to EA Funds) using BTC, ETH and LTC

Comment by samdeere on Announcing the 2017 donor lottery · 2017-12-18T21:42:53.681Z · score: 1 (1 votes) · EA · GW

An alternative model for variable pot sizes is to have a much larger guarantor (or a pool of guarantors), and then run rolling lotteries. Rather than playing against the pool, you're just playing against the guarantor, and you could set the pot size you wanted to draw up to (e.g. your $1000 donation could give you a 10% shot at a $10k pot, or a 1% shot at a $100k pot). The pot size should probably be capped (say, at $150k), both for the reasons Paul/Carl outlined re diminishing returns, and to avoid pathological cases (e.g. a donor taking a $100 bet on a billion dollars etc). Because you don't have to coordinate with other donors, the lottery is always open, and you could draw the lottery as soon as your payment cleared. Rather than getting the guarantor to allocate a losing donation, you could also 'reinvest' the donations into the overall lottery pool, so eventually the system is self-sustaining and doesn't require a third-party guarantor. [update: this model may not be legally possible, so possibly such a scheme would require an ongoing guarantor]

This is more administratively complex (if only because we can't batch the manual parts of the process to defined times), but there's a more automated version of this which could be cool to run. At this stage I want to validate the process of running the simpler version, and then if it's something there's demand for (and we have enough guarantor funds to make it feasible) we can look into running the rolling version sometime next year.
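A minimal sketch of the rolling model described above (all figures and the function shape are illustrative; the real mechanics, including how payment clearing triggers the draw, would differ):

```python
import random

POT_CAP = 150_000  # cap avoids pathological bets, per the comment above

def rolling_draw(donation: float, pot: float, rng: random.Random) -> bool:
    """Donor plays against the guarantor for a chosen pot size; win
    probability is donation / pot, so expected value matches donating
    directly."""
    assert 0 < donation <= pot <= POT_CAP
    return rng.random() < donation / pot

# A $1,000 donation gives a 1% shot at a $100k pot:
rng = random.Random(42)
won = rolling_draw(1_000, 100_000, rng)
```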

Comment by samdeere on Announcing the 2017 donor lottery · 2017-12-17T23:42:10.566Z · score: 5 (5 votes) · EA · GW

In practice, CEA technically gets to make the final donation decision. But I can't see them violating a donor's choice.

To emphasise this, as CEA is running this lottery for the benefit of the community, it's important for the community to have confidence that CEA will follow their recommendations (otherwise people might be reticent to participate). So, to be clear, while CEA makes the final call on the grant, unless there's a good reason not to (see the 'Caveats and Limitations' section on the EA.org Lotteries page) we'll do our best to follow a donor's recommendation, even if it's to a recipient that wouldn't normally be thought of as strictly EA.


What happens if a non-EA wins?

It's worth pointing out that one's motivation to enter the lottery should be to win the lottery, not to put money into a pot that you in fact hope will be won and allocated by someone else better-qualified to do the research than you are. If there are people entering the lottery who you think will make better decisions than you (even in the event that you won), then you should either donate on their behalf (i.e. agree with them in advance that they can research and make the recommendation if you win), or wait for the lottery draw, and then follow their recommendation if they win.

(not implying that this necessarily is your motivation, just that "I'll donate hoping for someone else to win" is a meme that I've noticed comes up a lot when talking about the lottery and I wanted to address it)

Comment by samdeere on Announcing the 2017 donor lottery · 2017-12-17T09:03:17.777Z · score: 2 (2 votes) · EA · GW

Agreed — I'll get this updated early next week

Comment by samdeere on Announcing the 2017 donor lottery · 2017-12-17T09:01:53.180Z · score: 1 (1 votes) · EA · GW

Yes! Sorry about the wait on this — just after moving the Pledge form to EffectiveAltruism.org, we decided to prioritise getting the donor lottery ready in time for the end of the US tax year, but this will be implemented soon.

Comment by samdeere on Announcing the 2017 donor lottery · 2017-12-17T08:57:01.651Z · score: 2 (2 votes) · EA · GW

I'm certainly happy with that. I think it's important to point out the positive externalities to the community/other donors if people make interesting research findings, especially if there's a relatively high likelihood that people will be investing time and energy into the research. When responding I had in mind that this could be a very minimalistic thing (e.g. the name of the recipient and possibly a couple of sentences explaining the thinking behind the decision), but on reflection I think the words 'write-up of their research and reasoning' in the OP imply something much more substantial. In either case, I agree that it'd be bad for this to feel like a cost that stopped people entering, so I'm endorsing your phrasing, and I'll edit my previous message to point this out.

Comment by samdeere on Announcing the 2017 donor lottery · 2017-12-16T07:46:43.389Z · score: 2 (2 votes) · EA · GW

Yes, we can take donations in cryptocurrency (it's worth noting that donating appreciated assets can have tax advantages over converting and donating in fiat). We're in the process of figuring out a solution that allows you to do this directly via the website, but for now if you want to donate in crypto please email lottery[at]effectivealtruism[dot]org and we can discuss

Comment by samdeere on Announcing the 2017 donor lottery · 2017-12-16T07:43:24.572Z · score: 3 (3 votes) · EA · GW

It's not clear to me how a donor lottery would capture all the considerations. Can you elaborate?

In this case, you haven't found an advisor who you trust to take into account all the things you consider to be relevant. So, instead of relying on a third-party advisor, you do the research yourself. As research is costly for any given individual to undertake, it may not make sense for you to do this with smaller donations, but with the larger pot, if you win, you've got more incentive to undertake whatever research you feel is necessary (i.e. that 'captures the relevant considerations').

Does this presume that (some) donors already know where they prefer to donate, rather than offsetting time spent on additional research with a larger donation pool?

It's just meant to illustrate that the value of the amount that you would be able to grant to a preferred organization is the same in expectation whether you participate in the lottery or donate directly. The lottery then may generate additional upside, potentially increasing the effectiveness of your donation if you do more research, and also giving you access to different funding opportunities (providing seed funding for an organization, donating to organizations that have a minimum donation threshold etc)

Is there an expectation (or requirement) that the winning donor provides a write-up of their research and reasoning for their selected charity?

[updated — see more in the discussion below]

We think that it's in the spirit of the lottery that someone who does useful research that would be of interest to other donors should publish it (or give permission for CEA to publish their grant recommendation). Also, if they convince others to donate then they'll be causing additional grants to go to their preferred organization(s). We'll strongly encourage winners to do so, however, in the interests of keeping the barriers to entry low, we haven't made it a hard requirement.

Announcing the 2017 donor lottery

2017-12-16T00:34:51.988Z · score: 23 (23 votes)
Comment by samdeere on Changes to the EA Forum · 2017-07-14T21:00:45.280Z · score: 3 (3 votes) · EA · GW

Hey Josh, thanks for the comment and sorry for the wait on a response.

The TL;DR is that I think that the branding changes provide a small amount of upside in terms of consistency, and have low risk of downside, because I don't expect that they'll significantly change discoverability, forum composition, or that they'll counterfactually change people's impressions of the different parts of the EA online space.

Our primary motivation is to reduce the proliferation of very similar domain names that all correspond to different things (e.g. effective-altruism.com is the Forum, previously effectivealtruism.com was the Doing Good Better site etc). From our perspective it seems useful to consolidate community assets under the same domain, both from the perspective of users seeing them as part of a broadly unified whole, and in the longer term, from a technical perspective (e.g. easier to share logins between different sites on the same domain). I agree that it's probably good to keep some branding differentiation between the Forum and the front page of EffectiveAltruism.org; however, I think it's disingenuous for us to pretend that there's no overlap.

Perhaps a good analogy is YCombinator/Hacker News — the front page presents a more welcoming, informative front, whereas Hacker News has a pretty intense community and may not always be welcoming to newcomers. However, I think people are generally pretty good at understanding that the organization and the user-generated content are different things, while understanding them to be part of the same broad sphere.

I wholeheartedly agree that the Forum is a more advanced part of the community, and it's certainly not our intention to try to dilute the quality of conversation or flood it with newcomers who may lack the context to meaningfully contribute to some of the more in-depth discussions or may find the tone unwelcoming. However, this seems like an issue of discoverability. The Forum is already pretty discoverable (fourth result for 'effective altruism' on Google), so if someone totally new is doing a wide survey of what the EA online space is like, they'll find it (and it already has 'Effective Altruism' in the name...). However, we're not planning on adding additional links to it from the www domain, or changing how we market it in other channels — I don't expect this change to significantly change the composition of people posting on the forum, nor do I expect that it significantly changes how people will view the broad idea of 'effective altruism' (especially not relative to the status quo).

Given that there's already a strong association between EA and the EA Forum, I don't think the exact domain matters that much. If we didn't want there to be any association, we should probably take the words 'effective altruism' out of the title and have a completely different domain. This isn't something we're currently considering.

I'd prefer to use a subdomain rather than a nested route because it's a significantly simpler DNS/server setup. I think the SEO point is a bit counter to the other points. I agree that it will have some SEO implications, but if the issue is discoverability, then actually making the Forum less discoverable in a random search seems to work more to your purposes (as above, currently the Forum is the fourth result on Google). In terms of implementation, we're planning to rewrite the old domain to the new one (using 301 redirects and keeping the old domain active to prevent broken links). I'd also planned to advise Google of the domain change using Search Console. I'd be very happy to hear from you if there are additional steps that you think are important here.
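For what it's worth, the 301 behaviour can be sketched with nothing but the standard library (in practice this would be webserver configuration rather than application code, and the target domain below is a placeholder):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

NEW_DOMAIN = "https://forum.effectivealtruism.org"  # hypothetical target

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 301 (permanent) rather than 302, so search engines transfer
        # ranking signals to the new domain and old links keep working.
        self.send_response(301)
        self.send_header("Location", NEW_DOMAIN + self.path)
        self.end_headers()
```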

Comment by samdeere on Changes to the EA Forum · 2017-07-04T03:26:18.315Z · score: 3 (3 votes) · EA · GW

Short-medium term: some minor UI changes, to bring branding more into line with the rest of effectivealtruism.org

Longer term ideas (caveat — these are just at the thought bubble stage at the moment and it's not clear whether they'd be valuable changes):

  • I think there's appetite for a discussion space that's both content aggregation as well as original content. This might take the form of getting a more active subreddit (for example) happening, but plausibly this could be something specifically built-for-purpose that either integrates with or complements the existing forum.

  • We've thought about integrating logins between the webapp on EffectiveAltruism.org (what is currently just EA Funds) and the forum to avoid the need to manage multiple accounts when doing various EA things online

  • We've also thought a bit about integrating commenting systems so that discussion that happens on various EA blogs is mirrored on the forum (to avoid splitting discussions when cross-posting).

If there are things that you think would be useful (especially if you've been able to give this more thought than I have) that'd be great to know, with the caveat that we're pretty restricted by developer time on this, and the priority is ensuring ongoing maintenance of the existing infrastructure, rather than building out new features.

[eta spaces between dot points]

Comment by samdeere on Changes to the EA Forum · 2017-07-03T18:50:53.473Z · score: 1 (1 votes) · EA · GW

Yep, this is already in place! Try going to www.effective-altruism.org

Comment by samdeere on EA Funds Beta Launch · 2017-02-28T07:39:13.634Z · score: 4 (4 votes) · EA · GW

Thanks for this

There was an issue with refreshing security tokens. I've just pushed a fix for this — if you refresh (or failing that, a hard refresh - e.g. Cmd+Shift+R) then the issue should resolve itself. I suspect that it works in incognito because you don't have any cookies set. If you're still having issues, try clearing cookies for the page*.

If that doesn't fix it, it'd be amazing if you could send the log from your Chrome console to tech[at]effectivealtruism[dot]org (open it by pressing Cmd+Shift+J; save by right-clicking on the console background and selecting 'Save as...').

*Help on this if anyone needs it: https://support.google.com/chrome/answer/95647?co=GENIE.Platform%3DDesktop&hl=en

CEA Staff Donation Decisions 2016

2016-12-06T16:07:30.766Z · score: 11 (11 votes)
Comment by samdeere on Accomplishments Open Thread - May 2016 · 2016-05-11T11:10:59.004Z · score: 2 (2 votes) · EA · GW

Hey Andy, I'm currently working on something very similar as an upgrade to Giving What We Can's My Giving dashboard. Did you want to shoot me an email at sam.deere@givingwhatwecan.org to discuss — either as an opportunity to collaborate or to work out if there's significant overlap?

Sam

Comment by samdeere on Giving What We Can needs your help this Christmas! · 2015-12-09T13:51:07.892Z · score: 5 (5 votes) · EA · GW

Hi Kieran,

Michelle is in a better position to answer some of these, but I'll answer the ones I can. I'd also suggest having a look at the comments section of our last fundraising prospectus, which covered some similar ground and which may provide more detail to some of your questions.

1) This is largely covered in the step Accounting for members donating a different amount than they pledged, which uses data from members who have reported their donations in My Giving, comparing their actual donations with their pledges. The upper bound estimate in the Giving Review (80% of people keeping their pledge) uses the same dataset, but only takes into account a binary 'pledge met' vs. 'pledge not met'. The ratio of pledges to donations (117%) has more bearing on our calculations because it captures both people who donate less than they pledge, and people who donate substantially more. Overall, due to people on average donating more than their pledge, the ratio is actually larger than 1:1 (so, even if only 80% of members hit their pledge amount, the number of people donating more than their pledge means the cohort as a whole donates more than it pledges).

The only quibble that you might have here is whether this cohort (people who report donations in My Giving) donates at a substantially different rate than people who don't report their donations (after we've factored out people who have both gone silent, and stopped donating, as per the earlier Accounting for membership attrition step). We have good reason to think this isn't the case (we know a lot of people personally who choose not to use My Giving, but who keep their pledges), but if you were more pessimistic about this, you could downweight the Ratio of Actual Donations to Pledged Donations in the spreadsheet (currently 1.17).
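To make the adjustment concrete, here's a toy version of the calculation (the per-member figures are invented; only the idea of a cohort-level donations-to-pledges ratio comes from the methodology above):

```python
# Hypothetical per-member pledges and reported donations for one cohort.
pledged = [1000, 2000, 1500, 3000]
donated = [1200, 1600, 2000, 4000]

# Cohort-level ratio: some members under-donate, but those donating
# substantially more than their pledge can pull the ratio above 1.
ratio = sum(donated) / sum(pledged)
```

A more pessimistic analyst could simply scale this ratio down before multiplying it into the pledge totals, which is what downweighting the spreadsheet figure amounts to.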


2) As above, this was calculated using data from members who have reported their donations in My Giving, and taking the ratio between their pledged amount and their reported donations, averaging over all members. See this section of the impact page for more info.


3) To clarify, are you talking about the impact of changes to members' income over time, or asking whether we're accounting for potential changes to donation patterns over time which affect the counterfactual ratio?

We're currently calculating our counterfactuals based on the amounts members say they would have donated without us — we haven't modelled behaviour changes into the future, and I'm not sure what we'd base such a model on. Whether it's conservative or optimistic is unclear, but I'd say that this is probably a wash — it's hard to know whether people's predictions of what they would have donated are overall optimistic or pessimistic. In our conversations with members, many people who say that they would have donated 10% without us also tell us that we're a useful commitment device (indicating that perhaps they wouldn't stick to their 10% without us, and that our counterfactual impact is actually greater than what we've accounted for in the model).

If you're just talking about the effect of members' income on the counterfactuals (because the calculations assume they will be static, when in reality income is likely to rise) then we think the calculation is fairly conservative. See the Donations Pledged By Members section of the impact page:

This methodology relies on the accuracy of members’ predictions about their future income. In general we have found that these predictions seem conservative, as most people underestimate their future earning potential[7]. If members do not estimate their future salary, we use the median salary for their country. We think that is a fairly pessimistic assumption, as our median member has an expected earning potential higher than the median wage[8].

Footnotes 7 and 8 expand on this:

  7. For example, many members estimate their future income will be the same as their current income, even though they are at the beginning of their careers - in reality, income typically increases throughout a person’s career

  8. For example, many members attend prestigious universities and/or are pursuing careers that have an average salary much higher than the median wage

See also this comment made on the last prospectus for some discussion of the effect of modelling changing income over time. Using the same model with the updated figures yields a ratio of between 69:1 (using a fairly arbitrary starting pledged amount of $4,200,000 which produces donations over members' careers equivalent to the $344 million pledged amount, but accounts for income growth) and 157:1 (assuming that the current pledges correspond to current income levels, and that all members are at the beginning of their careers). You can play with this assumption by editing the figure in cell C2 on the 'Calculations' sheet of this spreadsheet, and the income growth rates at the bottom of the column.


5) We've taken into account an additional year (2014), where we had strong growth, but where our costs were not significantly higher. Our membership more than doubled (386 in 2009-13 vs 792 in 2009-14) but our costs only went up by around 40% (£238k in 2009-13 vs £332k in 2009-14). The assumptions have remained essentially the same, so the difference in those ratios accounts for most of the difference (with a less significant amount being due to small changes in membership attrition and counterfactual pledge ratios).
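The growth-vs-costs figures quoted above can be checked directly from the numbers in this comment:

```python
# Cumulative membership and cost figures quoted above.
members_2009_13, members_2009_14 = 386, 792
costs_2009_13, costs_2009_14 = 238_000, 332_000  # GBP

membership_growth = members_2009_14 / members_2009_13
cost_growth = costs_2009_14 / costs_2009_13

print(f"membership grew {membership_growth:.2f}x")  # just over 2x
print(f"costs grew {cost_growth:.2f}x")             # ~1.4x, i.e. ~40% increase
```

Membership slightly more than doubled while costs grew around 40%, which is what drives most of the improvement in the headline ratio.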

As we note in the caveats, we do expect this amount to go down in future as our staff costs increase, and we don't want people to fixate on it as a predictor of our impact. We see it more as a sanity check of whether we're a good bet, vs giving money to other effective causes.

The degree to which this will change significantly in future really depends on how strong our member growth vs. costs growth is. If the cost of creating a new member hits diminishing marginal returns soon (not at all unlikely), then it's likely to drop back fairly quickly. We don't see this as particularly troubling — so long as our absolute number of members keeps increasing and the ratio is positive, then we're still a good bet.

We doubled staff numbers over 2015 (from 3 to 6) and we're hiring again now (from 6 to 8 or 9, depending on fundraising), so it's likely that this will push the ratio back down. We think that maximising future membership growth will be contingent on broadening the skillset of our team (and just having more hands on deck to do outreach work would be a huge help!). Expanding the team, strengthening our organisation, and increasing our growth rate seems very important right now (and much more important than maintaining this ratio at the current level). My guess is that it will settle somewhere between 20:1 and 60:1 — not quite as impressive as 104:1, but still suggestive that we're making a big difference!

(This difference is similar to the reason that we don't think that using "overhead" is a good measure of a charity's effectiveness. In effect, this is our overheads increasing, but so long as this leads to greater (counterfactual) overall member growth and donations to top charities, then we should be happy for the ratio to drop.)


6) Hauke, our Director of Research answers this question here. The short version:

  • Supplement GiveWell research and find new charities within our comparative area of advantage (global poverty reduction)
  • Independently check GiveWell recommendations and provide resilience to the effective charity evaluation system
  • Provide supervision for students/early career researchers who want to focus on effectiveness
  • Ensure in-house credibility when talking about top charities and fact-checking all of our public-facing material

Hope that helps!

Comment by samdeere on Giving What We Can needs your help this Christmas! · 2015-12-08T23:22:46.899Z · score: 3 (3 votes) · EA · GW

I've updated our impact page to include a spreadsheet that you can use to test our Realistic impact calculation. Find it under the "Spreadsheet" header.

Also, preparing this spreadsheet for public use revealed a minor error in our workings (the original spreadsheet was using an out-of-date figure for Proportion of people who say we've affected their choice of charity) — this has been corrected, and the impact ratio has accordingly shifted slightly from 103:1 to 104:1.

Comment by samdeere on Opportunity to talk to Against Malaria Foundation founder Rob Mather · 2015-07-01T22:46:06.861Z · score: 0 (0 votes) · EA · GW

Thanks for this, Denise — would you like to discuss this in person? If so, happy to add you to the Skype.

Opportunity to talk to Against Malaria Foundation founder Rob Mather

2015-06-22T17:52:54.351Z · score: 5 (5 votes)
Comment by samdeere on Please support Giving What We Can this Spring · 2015-05-14T19:30:37.129Z · score: 0 (0 votes) · EA · GW

Also, sorry if this reply doesn't exactly address your rephrased question – I wrote it in response to your first comment :)

Here's a copy of the spreadsheet with the calculations added in as above if you want to play around with it.

Thanks again for the question, let me know if there's anything else you want clarified.

Comment by samdeere on Please support Giving What We Can this Spring · 2015-05-14T18:36:26.314Z · score: 3 (3 votes) · EA · GW

Thanks for the question, Jon.

With regard to the pledged amount, this comes from members' predictions of their future annual salary, which we think are likely to be underestimates. We also use median wage as a stand-in if we're missing future salary data, which (given our members are in general likely to earn more than median wage) we also think is conservative. Accordingly, it's likely that the amount donated will be higher in reality.

We address this in more detail in our fundraising prospectus – see Appendix 2 for the full working.

From page 23 of the prospectus:

This methodology obviously relies on the accuracy of members’ predictions about their future income. In general we have found that these predictions seem conservative, as most people underestimate their future earning potential (1). If members do not estimate their future salary, we use the median salary for their country. We think that is a fairly pessimistic assumption, as our median member has an expected earning potential higher than the median wage (2).

And the footnotes to the above:

(1) For example, many members estimate their future income will be the same as their current income, even though they are at the beginning of their careers - in reality, income typically increases throughout a person’s career

(2) For example, many members attend prestigious universities and/or are pursuing careers that have an average salary much higher than the median wage

The final $146m figure is arrived at by multiplying members' estimates of future salary by the number of years they have left in their careers. It therefore doesn't take into account any of this growth that you'd expect in reality. As such, it wouldn't make sense to go back and try to model growth based on the $146 million figure (say, $1.7 million in year one, growing to around $5.6 million/year by year 40, rather than a flat $3.7m per year)*.

Instead, you'd need to apply your model (say, fast wage growth in the first 10-20 years of a career, then slower growth until retirement) to the member estimates first to derive the final figure, and use the yearly amounts in your calculation. Given our assumption that member estimates of future salary err on the low side, this means that both the final pledged amount, and the per-year amounts are likely to be higher, and therefore our effectiveness would in fact be higher, notwithstanding that the discounting/attrition rate would affect the final number more aggressively.


* I've tried this calculation, assuming a 4% growth rate for years 1-20 and a 2% growth rate in years 21-40. With a year one pledge of $1.7 million, this grows to $5.6 million by year 40, for a total of ~$147 million donated. This drops the effectiveness estimate down to 44:1, a significant drop, but still an excellent return on a donation to Giving What We Can. To reiterate, I think this would be a significant underestimate of peoples' future incomes.
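That footnoted calculation can be reproduced with a short script. The exact compounding convention isn't spelled out above, so this is one plausible reading (growth applied each year, including year one); under it the figures land within a couple of million of the ~$147 million total and $5.6m year-40 figure quoted.

```python
# Reproduce the footnote's growth model: a $1.7m year-one pledge cohort,
# growing 4%/year for the first 20 years and 2%/year for the next 20.
# The compounding convention is an assumption; figures are in $ millions.

def pledge_stream(year_one=1.7, g_early=0.04, g_late=0.02, years=40):
    amounts = []
    amount = year_one
    for year in range(1, years + 1):
        growth = g_early if year <= 20 else g_late
        amount *= (1 + growth)  # apply that year's growth
        amounts.append(amount)
    return amounts

stream = pledge_stream()
print(f"year 40 donations: ${stream[-1]:.1f}m")   # roughly $5.5m
print(f"career total:      ${sum(stream):.0f}m")  # roughly $145m (~$147m quoted)
```

Changing the growth rates or the compounding convention shifts the total by a few percent either way, which is why the footnote's figure is best read as an order-of-magnitude check rather than a precise estimate.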

Comment by samdeere on Please support Giving What We Can this Spring · 2015-05-14T18:35:32.322Z · score: 0 (0 votes) · EA · GW

Deleted, responded to Jon's new comment above

Comment by samdeere on Please support Giving What We Can this Spring · 2015-04-27T13:44:32.066Z · score: 3 (3 votes) · EA · GW

Hi Ben - good pickup. I've uploaded the latest version of the prospectus with the amended numbers here