DAFs do make it much easier to donate appreciated stock, and this is good advice. However, if you want to donate appreciated assets and you aren't able to set up a DAF, EA Funds accepts donations of stock (in the US) and cryptocurrency (US, UK, and NL) for donations of more than $1,000. (No promises that you won't have to send a fax to your broker if you want to donate stock, but in general that hasn't been the case for most of our donors donating from Vanguard etc.)
For EA Funds this is something that we’re planning to do very soon. It’s something that’s always been on the backburner (as shipping features always tends to take priority), but now that there’s a new website with much better global control of component styling, I think we can get some easy wins.
Not really (we’ve sporadically used Personas in the past, but not very systematically), but I’ve actually just been doing more reading on this. I expect that (at least for EA Funds) Jobs-to-Be-Done will be a big part of our user research project going forward.
TL;DR – For Funds/GWWC, the frontends are React (via NextJS) running on Vercel (previously a React SPA running on Netlify). The backend is a collection of Node.js microservices running on Heroku, connected to a Postgres DB (running on RDS), and wired together with RabbitMQ. We’ve migrated most things to TypeScript, but a lot of the backend is still plain JS. A lot of business logic is written in SQL (including PL/pgSQL).
EA Funds and GWWC have been on the same platform since 2017, and share the same backend.
Down the React rabbithole a bit: we connect to the backend using Apollo to manage GraphQL queries, and we use Immer for immutable state management as needed (though we don’t use Redux or any other global state management library). UI components are provided by Material-UI.
I used to use Netlify to host the EA Funds/GWWC frontend, but we’ve moved to Vercel for their first-class NextJS support.
The backend is a collection of quasi-microservices running on Heroku, all written as Node.js apps:
An Express web server that handles our GraphQL endpoint, as well as webhooks/callbacks for various integrations (e.g. Stripe payments).
The GraphQL endpoint is provided by Postgraphile, which is essentially a way of generating a GraphQL schema by reflecting over our Postgres DB. This means that we get to leverage the data structure, foreign key relationships, and type safety of the existing database for free in GraphQL. This approach means that a lot of business logic (especially around user-facing CRUD and reporting) is written directly in SQL, implemented as views and functions.
The services are connected by a RabbitMQ message bus. Database events (e.g. “db.payment.updated”) and webhook calls (e.g. from payment processors) are pushed onto the RabbitMQ bus, so that other services can react to these events. So, if a user inserts a new payment, the Payments service can communicate with the appropriate payment processor, and when the payment status is updated to succeeded, the Emails service can send out a receipt.
The Postgres DB is hosted on Amazon RDS.
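To make the event flow concrete, here’s a minimal in-memory sketch of the publish/subscribe pattern described above. This is illustrative only – the real system uses RabbitMQ (e.g. via a client library like amqplib), and the payload shape and handler wiring here are hypothetical:

```typescript
// Illustrative sketch of the event bus pattern described above — not the
// actual EA Funds code. Services subscribe to routing keys like
// "db.payment.updated" and react when events are published to the bus.

type Handler = (payload: Record<string, unknown>) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(routingKey: string, handler: Handler): void {
    const existing = this.handlers.get(routingKey) ?? [];
    this.handlers.set(routingKey, [...existing, handler]);
  }

  publish(routingKey: string, payload: Record<string, unknown>): void {
    for (const handler of this.handlers.get(routingKey) ?? []) {
      handler(payload);
    }
  }
}

// Hypothetical wiring: an Emails service sends a receipt when a payment succeeds.
const bus = new EventBus();
const sentReceipts: string[] = [];

bus.subscribe("db.payment.updated", (payment) => {
  if (payment.status === "succeeded") {
    sentReceipts.push(payment.email as string);
  }
});

bus.publish("db.payment.updated", { status: "succeeded", email: "donor@example.com" });
```

The nice property of this pattern (and the reason for the message bus) is that the Payments service doesn’t need to know anything about the Emails service – new consumers can be added without touching the producers.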
In addition to EA Funds/GWWC, I’ve also helped set up a bunch of other sites used by CEA (e.g. EffectiveAltruism.org, EAGlobal.org, CentreForEffectiveAltruism.org, the GivingWhatWeCan.org homepage). Most of these are currently using the Metalsmith static site generator, which generates static HTML files that are served via Netlify. This setup has been fantastic for performance and reliability, but eventually these sites will be ported over to the NextJS monorepo for better maintainability.
In terms of pain points, it’s generally been a pretty solid system. The biggest challenges have been around maintenance. E.g. we’ve migrated to NextJS, partly for the improved performance and DX, but also because the previous SPA was running an outdated version of React, and because of the way the boilerplate I used was architected, upgrading to a more modern version (which many packages now require) was more trouble than it was worth. Similarly, all the static sites have historically been hosted in their own repositories, which has meant that they all have slightly different ways of doing things, and improvements made to one don’t propagate to the others. Hence the move to a monorepo, where we can share components/logic between sites. Also, the more I use TypeScript, the more I hate using vanilla JS, so I guess that’s something of a pain point in the parts of the backend that haven’t been migrated yet!
We run a pretty lightweight version of Agile. We’ve tried doing more or less of the ‘canonical’ Agile/Scrum methodologies at various points, and settled on what we have because it works for us. Basically, JP and I have a weekly meeting where we set sprint goals, broken down by number of story points (where one story point = ~ half a day of productive dev work). These tasks are added to a kanban board that we update throughout the week as things progress. We do daily check-ins with each other, and with our respective managers, to discuss progress/challenges etc. We also do a couple of pair programming sessions each week.
Tasks are triaged based on discussion with our respective managers, taking into account what seems most important to do next (itself a combination of user feedback, outstanding bugs and feature improvements, org strategy, events in the world etc). We have a loose product roadmap that informs where we expect to be going over the next quarter and year, but we don’t make concrete plans for more than a quarter away.
We’ve iterated a lot on this over the past few years, and I think that we’ve found something that works well. I like that the system is lightweight, and strikes a good balance between giving sufficient direction for what to work on, while allowing for a lot of flexibility and not getting mired in process. It also forces us to make reasonable time estimates about what we’re doing, and these are sanity-checked by another dev, which helps avoid scope creep, or underspecified tasks. Regular check-ins make it easier to stay on track – I find it very motivating to be able to show someone else the cool thing I’ve been working on and get feedback.
I think that as we grow we’ll probably need to systematise things more. At the moment, it relies on JP and me having worked together for a long time and being very comfortable with each other’s working styles. I could imagine that as we take on additional developers, or as the projects we’re working on diverge more significantly (I’m now focusing almost exclusively on EA Funds), we’d need to make some changes, probably in the direction of more concrete progress reports through the week.
For EA Funds (and the pledge management parts of GWWC), that’s me. For the Forum it’s JP. We’re both devs first and foremost though, so we get a lot of input from other people (Aaron, who manages the content side of the Forum; Luke Freeman, who runs GWWC; Jonas, who runs EA Funds; Ben West, who manages the Forum team and is a former tech startup CEO; etc.).
I think the biggest challenges we face are related to capacity rather than specific skills. So, a really productive fullstack dev could have a huge impact just by virtue of helping us to ship things faster, and cover more surface area.
That said, a few things that I think would be great to have more of:
Experience with analytics/measurement/telemetry and using the insights to drive development of new features or content
Exceptional UX/UI chops, to give sites a visual lift and ensure that user flows through our sites are really good
A dedicated product management skillset, conducting user research and using that to inform subsequent iterations of each product
I’ll first caveat by saying that I haven’t worked at either a typical startup or big tech company.
I think that there’s probably not a huge difference between CEA and a very early stage startup. I think that the most relevant dimension is just scale – currently we’re two devs working on a bunch of different projects, which means a high degree of autonomy and ownership over the code in a way that I expect is similar to a lot of small startups. We’re obviously a more mature org though, so we do have a lot of processes in place (CI, a dedicated Operations team etc) that you wouldn’t find at a really new startup. So, in some sense it’s the freedom and ownership of an early stage startup combined with the security and flexibility of an established org. It also means that there’s a lot of time spent on interacting with users (as opposed to just being siloed in your text editor), which I really like, partly because this is a great community and it’s really nice to talk to EAs who use your software, and partly because it helps you to get better at thinking about product development and making things that serve the needs of your constituency really well.
Another thing that’s a bit strange about CEA as a non-profit, and as an EA org, is that the approach to scaling is a bit different. In a for-profit startup, your aim is to grow as fast as humanly possible (at least, when you hit product-market fit). We’ve deliberately avoided that strategy (at least for now), in large part because it doesn’t seem prudent to scale something like the EA community as fast as possible, because scaling fast trades off pretty hard against the fidelity of your message and the existing culture of the community. This could obviously change in future, but historically it’s been part of our approach. This in turn means that the challenges are a lot about understanding how to build solid products that work for EAs, rather than how to run huge k8s deployments etc.
The Centre for Effective Altruism UK (the legal entity behind EA Funds' UK operations) is registered in the Netherlands as a tax-deductible charity.
When you get to the payment page you can select which country you'd like to donate in. To donate in a way that's tax-deductible in the Netherlands, select 'UK/NL' as your country, and then optionally select EUR as your currency code. You can donate via credit card or SEPA transfer.
Yeah, I think ‘never’ is correct for donor lottery winners thus far. I’d guess this situation would be pretty rare in practice (even as we run more lotteries), but we want people to be informed that there are some constraints. People have generally checked in with us beforehand if they’ve got something of an edge-case in mind, and the only times I can remember saying a hard ‘no’ were for partisan political organisations (which we can’t make grants to).
Yeah, I think the case of people not wanting to donate to EA Funds because of social/community dynamics (even if they think, on reflection, that they can't outperform EA Funds) is an interesting one. I guess that someone who is unsure whether they can beat EA Funds (or some other 'boring'/deferent option), but who feels they'd be subject to social pressure to do something different regardless, could always enter anonymously (this doesn't solve the problem of people wanting to prove to themselves that they're good grantmakers, but hopefully goes some way towards mitigating the issue).
We're also trying to provide good support to winners, in the form of contact with experienced grantmakers (including members from each of the EA Funds). So, to the extent that this enables winners to 'import' that experience into their decision, while still being able to cast a wider net, it means that even less-confident donors will still be able to remain competitive with alternatives.
Thanks for the question! There are two separate things here, which I'll address separately.
Adding PayPal as a regular payment option to EA Funds that you can select when you're on the website making a donation (which would attract transaction fees)
Using the PayPal Giving Fund (which is fee-free) to donate to EA Funds
PayPal as a regular payment option
We've considered adding PayPal support, but it hasn't been a priority, as we've found most donors are able to use one of our other payment methods (e.g. credit card/bank transfer/check). Adding new payment methods adds some complexity to our payment processing operations (which we try to keep as streamlined as possible to reduce admin overhead), and given that most people use PayPal to process credit card donations, we haven't seen it as offering a significant advantage over our existing credit card payment infrastructure. However, it's useful to know that PayPal is your only option, and that's some evidence in favour of us adding it. It likely won't be for a while though, but if we do get to it I'll post an update as a comment on this question.
Using the PayPal Giving Fund
Unfortunately it's not possible to automate donations made through the PayPal Giving Fund, which means it's not viable for us to offer it as a payment option at checkout. Donations to the Giving Fund have to be made through PayPal's own website, which means we can't capture each donor's allocation, and therefore all the money appears as if it's just going to the Centre for Effective Altruism (CEA – the non-profit that EA Funds is a part of). This unfortunately just isn't scalable to hundreds of donors.
For larger donations ($1000+ or equivalent), you can use the PayPal Giving Fund to make a donation to CEA, but you'll need to email us so that we can manually add your donation to EA Funds and allocate it accordingly. If this fits your situation, you can make a donation using one of the links below, then forward your receipt to firstname.lastname@example.org along with your preferred allocation (you can see the available organizations to donate to on this page).
I think that for most donors this can be disregarded. Even if the marginal use of your additional tax dollars is still pretty good (e.g. 10% as valuable as your best charitable option), you're still better off donating to the charity. In extremis, it would imply that your best marginal donation option would be to voluntarily pay more tax, rather than donate.
While it seems in theory possible that the marginal dollar that your government spends is more effective than your best charitable donation option, I'd guess that in practice this is almost never the case, largely just because Your Dollar Goes Further Overseas, but also because your contribution to government revenue will be diffused between the many hundreds of programs that the government runs (some of which may be positive, like preventive health or basic research, others which may be pretty harmful, e.g. subsidies for industrial agriculture or maintaining nuclear arsenals).
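To make the arithmetic in the previous two paragraphs explicit, here’s the comparison with assumed numbers (the 10% figure is the hypothetical from above, not a real estimate of government spending effectiveness):

```typescript
// Toy comparison using assumed numbers. Value is measured in "units of good
// done per dollar given to your best charitable option" — so the best charity
// scores 1.0 by definition, and the hypothetical marginal government dollar
// scores 0.1 (10% as valuable, as in the example above).
const bestCharityValuePerDollar = 1.0;
const marginalGovSpendingValue = 0.1; // assumed, for illustration only

const amount = 1000;
const valueIfDonated = amount * bestCharityValuePerDollar;  // 1000 units of good
const valueIfPaidAsTax = amount * marginalGovSpendingValue; // ~100 units of good
```

Under these assumptions the donation does roughly 10x as much good, which is why the tax-deductibility consideration can usually be disregarded: the government-spending value would have to exceed 1.0 before voluntarily paying more tax beat donating.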
Yeah, both good points. To further complicate things, if you're concerned about the net costs of your donation (e.g. both the transaction fees, as well as the administrative costs involved) then sometimes paying the transaction fee means that it's actually cheaper overall to process the transaction. For example, the service paid for by the credit card fees on EA Funds (Stripe) allows us to automate almost all of the accounting, saving a huge amount of person-hours and keeping running costs lower. Obviously there's a break-even point, and for larger donations it definitely makes sense to seek to avoid percentage-based fees.
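The break-even point mentioned above is easy to sketch. The dollar figures here are hypothetical for illustration, not our actual costs:

```typescript
// A percentage fee is worth paying while it costs less than the fixed admin
// overhead of processing the donation manually. (Numbers are hypothetical.)

function breakEvenDonation(manualAdminCost: number, feeRate: number): number {
  // Below this donation size, the percentage fee is cheaper than manual admin;
  // above it, avoiding the fee (e.g. via bank transfer) saves money overall.
  return manualAdminCost / feeRate;
}

// E.g. if manual processing cost ~$10 of staff time and the card fee were 2.5%:
const threshold = breakEvenDonation(10, 0.025); // ≈ $400
```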
First, I'll note that we're actually planning to change this system (likely in the next week or two), so that instead of first seeing a default allocation, donors will choose their own allocation as the first step in the donation process.
To your question: the current EA Funds default allocation was chosen as an approximation of some combination of a) a representative split of the cause areas based on their relative interest across EA, and b) a guess at what we thought the underlying funding gaps in each cause area would likely be. It's definitely intended to be approximate, and is there partly as a guide to give an indication of how the slider allocation system works, rather than an allocation that we think everyone should choose.
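For illustration, a hypothetical version of the slider-allocation logic might look like the following. This is a sketch of the general idea (relative weights normalised into percentages), not the production code:

```typescript
// Hypothetical sketch: turn a donor's relative slider weights into
// percentages that sum to 100. Not the actual EA Funds implementation.

function normaliseAllocation(weights: number[]): number[] {
  const total = weights.reduce((sum, w) => sum + w, 0);
  if (total === 0) {
    // No preference expressed: fall back to an even split.
    return weights.map(() => 100 / weights.length);
  }
  return weights.map((w) => (100 * w) / total);
}

// Four funds weighted 1:1:2:0 → 25% / 25% / 50% / 0%
const allocation = normaliseAllocation([1, 1, 2, 0]);
```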
Context: I help run EA Funds and am responsible for the user-facing side of things, including the website
Yeah, the Fund balances are updated when the entries for the grants are entered into our accounting system (typically at the time that the grants are paid out). Because it can take a while to source all the relevant information from recipients (bank details etc.), this doesn't always happen immediately. Unfortunately this means that there's always going to be some potential for drift here, though (absent accounting corrections like the one applicable to the Global Development Fund) this should resolve itself within ~a month. The November balances included ~half of the payments made from the Animal Welfare and Meta Funds from their respective November grant rounds.
Thanks again for the thoughtful comments. I agree that the numbers should have been higher; that was an oversight (and perhaps speaks to the difficulty of keeping these numbers accurate longer term). I’m not sure how I missed the extra 80K and Founders Pledge grants (I think they came from an earlier payout report that I forgot to include in my calculations). I’m sorry that this wasn’t done correctly the first time around.
I’ve since removed the grant amounts (leaving just the grantees/grant categories), and I might re-title the field to just be called ‘Past Grantmaking’ or something similar. We’ve also created a public spreadsheet of all of the EA Funds grants, so they’re accessible in one place.
I added the ‘Grantmaking and Impact’ section to the Funds pages in response to feedback that it was hard to get a feel for what each Fund did in a tangible way, especially for newer donors who hadn’t been following the Funds over time and hadn’t yet dived into the payout reports. The idea here was to give a flavour of the kinds of things that each Fund had granted to, rather than to provide an exhaustive list (that’s what the payout reports are for). I still think that this is valuable, but I agree that keeping the numbers accurate has some problems, so for now we’ll remove them.
Most Fund balances are in general reasonably accurate (although the current balances don’t account for the latest grant round, which was only paid out last month). The exception is the Global Development Fund, which is still waiting on the accounting correction you mentioned to post, but I’ve just been informed that this has been handed over to the bookkeepers to action, so it should be resolved very soon.
1. I don’t have an exact figure, but a quick look at the data suggests we’ve moved close to $2m from donors in the UK to US-based charities that don’t have a UK presence (~$600k in 2019). My guess is that the amount going in the other direction (US -> UK) is substantially smaller than that, if only because the majority of the orgs we support are US-based. (There’s also some slippage here, e.g. UK donors following GiveWell’s current recommendations could donate to AMF/Malaria Consortium/SCI etc.)
2. Due to privacy regulations (most notably GDPR) we can’t, by default, hand over any personally identifying information to our partner charities. We ask donors for permission to pass their details onto the recipient charities, and in these cases stewardship is handled directly by the orgs themselves. CEA doesn’t do much in terms of stewardship specific to each partner org (e.g. we don’t send AMF donors an update on what AMF has been up to recently), but we do send out email newsletters with updates about how money from EA Funds has been spent.
Yeah, that’s interesting – I think this is an artefact of the way we calculate the numbers. The ‘total donations’ figure is calculated from donations registered through the platform, whereas the Fund balances are calculated from our accounting system. Sometimes donations (especially by larger donors) are arranged outside of the EA Funds platform. They count towards the Fund balance (and accordingly show up in the payouts), but they won’t show up in the total donations figure. We’d love to get to a point where these donations are recorded in EA Funds, but it’s a non-trivial task to synchronise accounting systems in two directions, and so this hasn’t been a top priority so far.
I agree that the YTD display isn’t the most useful for assessing total inflows because it cuts out the busiest period of December (which takes in 4-5 times more than other months, and is responsible for ~35% of annual donations). It was useful for us internally (to see how we were tracking year-on-year), and so ended up being one of the first things we put on the dashboard, but I think that a whole-of-year view will be more useful for the public stats page.
It’s hard to say exactly, but I’d be thinking this would be on the timescale of roughly a year (so, a spinout could happen in late 2020 or mid 2021). However, this will depend a lot on e.g. ensuring that we have the right people on the team, the difficulty of setting up new processes to handle grantmaking etc.
Re the size question – are you asking how large the EA Funds organisation itself should be, or how large the Fund management teams should be?
If the former, I’d guess that we’d probably start out with a team of two people, maybe eventually growing to ~4 people as we started to rely less on CEA for operational support (roughly covering some combination of executive, tech, grantmaking support, general operations, and donor relations), and then growing further if/when demand for the product grew and more people working on the project made sense.
If the latter, my guess is that something like 3-6 people per team is a good size. More people means more viewpoint diversity, more eyes on each grant, and greater surface area for sourcing new grants, but larger groups become more difficult to manage, and obviously the time (and potentially monetary) costs increase.
I’d caveat strongly that these are guesses based on my intuitions about what a future version of EA Funds might look like rather than established strategy/policy, and we’re still very much in the process of figuring out exactly what things could look like.
I agree with you that on one framing, influencing the long-run future is risky, in the sense that we have no real idea of whether any actions taken now will have a long-run positive impact, and we’re just using our best judgement.
However, it also feels like there are important distinctions between categories of risk along dimensions like organisational maturity. For example, a grant to MIRI (an established organisation, with legible financial controls and existing research outputs that are widely cited within the field) feels different to me when compared to, say, an early-career independent researcher working on an area of mathematics that’s plausibly but as-yet speculatively related to advancing the cause of AI safety, or funding someone to write fiction that draws attention to key problems in the field.
I basically tried to come up with an ontology that would make intuitive sense to the average donor, and then tried to address the shortcomings by using examples on our risk page. I agree with Oli that it doesn’t fully capture things, but I think it’s a reasonable attempt to capture an important sentiment (albeit in a very reductive way), especially for donors who are newer to the product and to EA. That said, everyone will have their own sense of what they consider too risky, which is why we encourage donors to read through past grant reports and see how comfortable they feel before donating.
The conversation with Oli above about ‘risk of abuse’ being an important dimension is interesting, and I’ll think about rewriting parts of the page to account for different framings of risk.
Good question. My very rough Fermi estimate puts this at around $750/grant (based on something like $90k worth of staff costs directly related to grantmaking and ~120 grants/year). It’s hard to say how this scales, but we’ve continued to improve our grant processing pipeline, and I’d expect that we can continue to accommodate a relatively high number of grants per year. This is also only the average cost – I’d expect the marginal cost for each grant to be lower than this.
I don’t have a great sense of grantmaker overhead per individual grant, but I’d estimate the time cost at something in the range of $500–$1,000 per grant recommendation per Fund team member, or $2,000–$4,000 for a typical ~4-person team (noting of course that Fund management team members donate their spare time to work on the project).
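For what it’s worth, the back-of-the-envelope numbers above amount to the following (all figures are the rough estimates quoted, not accounting data):

```typescript
// Reproducing the Fermi estimate above. All inputs are rough estimates.

const staffCosts = 90_000;  // ~$90k/year of grantmaking-related staff costs
const grantsPerYear = 120;  // ~120 grants per year

const avgCostPerGrant = staffCosts / grantsPerYear; // ≈ $750 per grant

// Adding estimated Fund-team time of $500–$1,000 per member for a ~4-person team:
const teamTimeLow = 500 * 4;    // $2,000 per grant recommendation
const teamTimeHigh = 1_000 * 4; // $4,000 per grant recommendation
```

Note again that this is the average cost; the marginal cost of each additional grant is likely lower, since the fixed pipeline costs are already paid.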
Yeah, I’d love to see this happen, both because I think that it’s good to pay people for their time, and also because of the incentives it creates. However, as Misha_Yagudin says, I don’t think financial constraints are the main bottleneck on getting good feedback or doing in-depth grant reviews, and time constraints are the bigger factor.
One thing I’ve been mulling over for some time is appointing full-time grantmakers to at least some of the Funds. This isn’t likely to be feasible in the near term (say, at least 6 months), and would depend a lot on how the product evolves, as well as funding constraints, but it’s definitely something we’ve considered.
Meta: Just wanted to say thanks to all for the excellent questions, and to apologise for the slow turnaround on responses – I got pretty sick just before Christmas and wasn’t in any state to respond coherently. Ideally I would have noted that at the time, mea culpa.
Late last year I was working on updating and formalising the scope of each of the EA Funds, and in discussions with Elie and others at GiveWell, we updated the wording of the scope to explicitly include projects that were more indirectly serving the mission of the Fund:
In addition, the Global Health and Development Fund has a broad remit, and may fund other activities whose ultimate purpose is to serve people living in the poorest regions of the world, for example by raising additional funds (e.g. One for the World), or exploring novel financing arrangements (e.g. Instiglio).
A previous version of the page had the following wording on it:
You might choose not to support the fund if you think donations to organizations working in effective altruism community building will produce more money for highly effective global health and development charities than the money they receive. Historically, this seems to have been true for Giving What We Can, Charity Science, and Raising For Effective Giving among others.
The previous text wasn’t intended to rule out donations to global-health-focused metacharities; rather, it was predicated on the assumption that Elie would be most likely to recommend charities doing direct work, and that donors who were looking for a larger multiplier on their global health donations might want to consider other options. Because we previously didn’t have a formal policy ruling grants to meta/indirect projects in or out, our internal assessment was that such grants would be in scope (hence the approval of the grant).
However, I can see that this was pretty unclear, and that the text could easily be read as suggesting that the Fund would never make such grants, which could have set donor expectations that were different from our original intention. We should have noticed this discrepancy, and taken it into account by deferring approval of any ‘meta’ grants until after we’d published the more formalised Fund scope – we didn’t, and I want to apologise for that.
If you (or any other donors) would like a refund on donations made to the Fund because you feel you were misinformed about the Fund’s scope, please email funds[at]effectivealtruism[dot]org.
The money is kept aside as the first tranche of backstop for future donor lotteries – if someone wins, we'll first draw from this pool of money to cover the pot, and then we'll use the lottery guarantor's money to cover any remainder.
Thanks – yeah, I agree, and we should have let donors know about this sooner.
The Payout Reports shouldn't affect the Fund Balance, as that number is calculated directly from our accounting system. That said, this means it's subject to some of the vagaries of bookkeeping, which means we ask donors to treat it as an estimate. At the moment we're waiting on a (routine) accounting correction that should be posted once our most recent audit results are finalised, which unfortunately means that the current figure is somewhat inaccurate.
The Payout Report total would have been inaccurate as that's calculated by summing the figures from the published payout reports.
Thanks HStencil for flagging this. As Catherine said, the process of publishing reports can take some time, which is why there's been a delay in adding these grants to the EA Funds website. However, in the interests of transparency, I've added placeholder payout reports for both the Fortify Health grant and another recent grant to One for the World, which is also waiting on its full report. We'll update these reports as soon as GiveWell has completed their publication process.
Fund managers are appointed by CEA on the recommendation of the Fund Committee (in the case of the Meta Fund, my understanding is that currently all committee members have input on the decision, with equally-weighted votes, though this is not the case on all Fund teams). My understanding from conversations with members of the Meta Fund team is that the last recruitment round considered candidates from several locations, with an emphasis on trying to find someone from the Bay Area. At least two promising candidates (both based in the Bay) were approached, but were ultimately unable to take the position. While the appointee (Peter McIntyre) does live in London, he's originally from Australia, and has recently moved back from living in the Bay, and maintains strong connections there.
As an aside, I'd note that the team is somewhat more geographically diverse than is being presented here. While the plurality of the team currently lives in London, with a member in Hong Kong and an advisor in the Bay, they also come from five different countries, and as far as I know most have lived in several different cities.
I'm happy to forward nominations for Fund managers (for any of the Funds) on to their respective Chairs. The best thing to do is send me an email at sam[at]centreforeffectivealtruism[dot]org.
A quick update to say that one of the features that seems to have prompted the initial post – the inability to manage recurring reported donations – has now been implemented. You can access it from the Recurring Donations tab in the Pledge Dashboard sidebar:
These are all great suggestions William, thanks for providing them. I'll take them into account as we make future updates to the platform – no promises on a timeframe given current tech capacity constraints, unfortunately, but I think they're all very sensible ideas and would constitute significant improvements.
For the last two years we've run a donor lottery through the December Giving Season, drawn in January, and we intend to do this again this year-end. Assuming that preparations go well, we'll aim to have this open by Giving Tuesday.
To add to this, I re-analysed the EA Survey responses on cause areas, restricting to just Giving What We Can Members:
Obviously there's a selection effect where the members who take the survey are more likely to be more involved with EA, but I think it's still instructive that Giving What We Can members are a fairly broad church with respect to cause areas, and that it's reasonable to offer different cause areas to them as a default setting on EA Funds.
Disclosure: I work at CEA, and am the person primarily responsible for both EA Funds and the technical implementation of the new Pledge Dashboard.
The success of our factored evaluation experiments depends on Mosaic, the core web interface our experimenters use.
Is Mosaic an open-source technology that an applicant would be expected to have existing familiarity with, or an in-house piece of software? (The text of the job ad is a little ambiguous.) The term is unfortunately somewhat ungoogleable, due to confusion with the Mosaic web browser.
Re 1, this is less of a worry to me. You're right that this isn't something that SHA256 has been specifically vetted for, but my understanding is that the SHA-2 family of algorithms should have uniformly-distributed outputs. In fact, the NIST beacon values are all just SHA-512 hashes (of a random seed plus the previous beacon's value and some other info), so this method vs the NIST method shouldn't have different properties (although, as you note, we didn't do a specific analysis of this particular set of inputs — noted, and mea culpa).
However, the point re 2 is definitely a fair concern, and I think that this is the biggest defeater here. As such, (and given the NIST Beacon is back online) we're reverting to the original NIST method.
Thanks for raising the concerns.
ETA: On further reflection, you're right that it's hard to know whether the first 10 hex digits will be uniformly distributed given that we don't have a full-entropy source (which is a significant difference between this method and the NIST beacon — we just made sure that the method had greater entropy than the 40 bits we needed to cover all the possible ticket values). So, your point about testing sample values in advance is well-made.
The NIST Beacon is back online. After consulting a number of people (and notwithstanding that we previously committed to not changing back), we've decided that it would in fact be better to revert to using the NIST beacon. I've edited the post text to reflect this, and emailed all lottery participants.
AFAIK random.org offers to run lotteries for you (for a fee), but all participants still need to trust them to generate the numbers fairly. It's obviously unlikely that there would in fact be any problem here, but we're erring on the side of having something that's easier for an external party to inspect.
- The source of randomness needs to be generated independently of both CEA and all possible entrants
- The resulting random number needs to be published publicly
- The randomness needs to be generated at a specific, precommitted time in the future
- The method for arriving at the final number should ideally be open to public inspection
This is because, if we generated the number ourselves, or used a private third party, there would be no good guarantee against collusion. Entrants in the lottery could reasonably ask 'how do I know that the draw is fair?', especially as the prize pool is large enough that it could incentivise cheating. The future precommitment is important because it guarantees that we can't secretly know the number, and the specific timing is important because it means that we can't just keep waiting for numbers to be generated until we see one that we like the look of.
The method proposed above means that anyone can see how we arrived at the final random number, because it takes a public number that we can't possibly influence, and then hashes it using SHA256, which is well-verified, deterministic (i.e. anyone can run it on their own computer and check our working) and distributes the possible answers uniformly (so everyone has an equal chance of winning).
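To make that pipeline concrete, here's a minimal sketch in Python. The function name, seed string, and the final modular reduction are illustrative assumptions, not CEA's actual draw code (as noted below, the real draw used the first 10 hex digits of the hash against precommitted ticket ranges):

```python
import hashlib

def draw(public_seed: str, num_tickets: int) -> int:
    """Derive a winning ticket from publicly verifiable data.

    `public_seed` stands in for the precommitted public data (e.g.
    the earthquake API response); anyone can rerun this and check
    the result, which is the whole point of the scheme.
    """
    # SHA256 is deterministic, so the draw is reproducible by anyone.
    digest = hashlib.sha256(public_seed.encode("utf-8")).hexdigest()
    # Take the first 10 hex digits (40 bits of the uniformly
    # distributed output) and reduce into the ticket range.
    value = int(digest[:10], 16)
    return value % num_tickets
```

Note that a plain modulo introduces a tiny bias whenever `num_tickets` doesn't divide 2^40 evenly; the sketch is only meant to show the verifiability of the hash-then-reduce pipeline.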
Typical lottery drawings have these properties too: live broadcast, studio audience (i.e. they are publicly verifiable), balls being mixed and then picked out of a machine (i.e. an easy-to-inspect, uniformly-distributed source of randomness that, because it is public, cannot be gamed by the people running the lottery).
Earthquakes have the nice property that their incidence follows a rough power law distribution (so you know approximately how regularly they'll happen), but the specifics of the location, magnitude, depth or any other properties of any given future earthquake are entirely unpredictable. This means that we know that there will be a set of unpredictable (i.e. random) numbers generated by seismometers, but we (and anyone trying to game the lottery) have no way of knowing what they will be in advance.
(This is not actually that different to how your computer generates randomness — it uses small unpredictable events, like the very precise time between keystrokes, or tiny changes in mouse direction, to generate the entropy pool for generating random numbers locally. We're just using the same technique, but allowing people to see into the entropy pool).
Other plausible sources of randomness we considered included the block hash of the first block mined after the draw date on the Bitcoin blockchain, and the numbers of a particular Powerball drawing.
Agree with the sentiment, but we're most definitely not rolling our own crypto. The method above relies on the public and extremely widely-vetted SHA256 algorithm, which has two nice properties: first, even slightly different inputs produce wildly different outputs; second, those outputs are distributed uniformly across the entire possibility space. This means that brute-forcing a prediction would be useless, because each of your candidate inputs would have an even chance of ending up basically anywhere.
For example, compare the input strings 1111111111111111111111111111 and 1111111111111111111111111112 with their SHA256 outputs:
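You can reproduce the comparison yourself in a couple of lines of Python (`hashlib` is the standard library's hashing module); the two digests are computed, not hardcoded:

```python
import hashlib

a = "1111111111111111111111111111"
b = "1111111111111111111111111112"

# The inputs differ only in their final character, but the two
# 64-hex-digit outputs share no obvious structure.
ha = hashlib.sha256(a.encode("utf-8")).hexdigest()
hb = hashlib.sha256(b.encode("utf-8")).hexdigest()

print(ha)
print(hb)
```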
It doesn't matter how much of the API response remains the same (for example, we could pad the input of every hash we generated with the same fixed string and have the same randomness properties as the proposal above). All that matters is that each response is going to be (unpredictably) different from the next.
ETA: It's perhaps more helpful to see the digits from the API response as a publicly verifiable seed to a pseudorandom number generator, rather than as the random number itself.
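That seed framing can be sketched as follows; the seed string and ticket count here are placeholders, and Python's default PRNG is used only to illustrate reproducibility (it isn't itself a cryptographic generator — the unpredictability comes entirely from the public data):

```python
import hashlib
import random

# Hypothetical stand-in for the precommitted public API response.
seed_material = "public-api-response-digits"
num_tickets = 100_000

# Hash the public data, then use the digest to seed a deterministic
# PRNG: anyone holding the same public data reproduces the same draw.
digest = hashlib.sha256(seed_material.encode("utf-8")).digest()
rng = random.Random(digest)
winner = rng.randrange(num_tickets)
```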
Hey Eli – there has definitely been thinking on this, and we've done a shallow investigation of some options. At the moment we're trying to avoid making large structural changes to the way EA Funds is set up that have the potential to increase accounting complexity (and possibly audit compliance complexity too), but this is in the pipeline as something we'd eventually like to make happen, especially as the total holdings get larger.
Two thoughts, one on the object-level, one on the meta.
On the object level, I'm skeptical that we need yet another platform for funding coordination. This is more of a first-blush intuition, and I don't propose we have a long discussion on it here, but just wanted to add my $0.02 as a weak datapoint. (Disclosure — I'm part of the team that built EA Funds and work at CEA, which runs EA Grants, so make of that what you will. Also, to the extent that the concern is that small projects are falling through the gaps because of evaluation-capacity constraints, CEA is currently in the process of hiring a Grants evaluator.)
On the meta level (i.e. how open should we be to adding arbitrary integrations that can access a user's forum account data) I think there's definitely some merit to this, and I can envisage cool things that could be built on top of it. However, my first-blush take is that providing an OAuth layer and exposing user data is unlikely to be a very high priority (at least from CEA's side) when weighed against other possible feature improvements and other CEA priorities. This is especially true given the likely time cost of maintaining the auth system where it interfaces with other services, and the magnitude of the impact I'd expect Forum-data integration to have. However, as you note, the LW codebase is open source, so I'd suggest submitting an issue there, discussing with the core devs and making the case, and possibly submitting a PR if it's something that would be sufficiently useful to a project you're working on.
Thanks for the comments on this Marcus (+ Kyle and others elsewhere).
I certainly appreciate the concern, but I think it's worth noting that any feedback effects are likely to be minor.
As Larks notes elsewhere, the scoring is quasi-logarithmic — to gain one extra point of voting power (i.e. to have your vote be able to count against that of a single extra brand-new user) is exponentially harder each time.
Assuming that it's twice as hard to get from one 'level' to the next (meaning that each 'level' has half the number of users of the preceding one), the average 'voting power' across the whole of the forum is only 2 votes. Even if you assume that people at the top of the distribution are proportionally more active on the forum (i.e. a person with 500,000 karma is 16 times as active as a new user), the average voting power is still only ≈3 votes.
Given a random distribution of viewpoints, this means that it would take the forum's current highest-karma users (≈5,000 karma) 30-50 times as much engagement in the forum to get from their current position to the maximum level. Given that those current karma levels have been accrued over a period of several years, this would entail an extreme step-change in the way people use the forum.
(Obviously this toy model makes some simplifying assumptions, but these shouldn't change the underlying point, which is that logarithmic growth is slooooooow, and that the difference between a logarithmically-weighted system and the counterfactual 1-point system is minor.)
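The basic toy model can be checked in a few lines; the halving assumption and the level cap are the comment's simplifying assumptions, not the forum's actual karma data:

```python
def average_voting_power(levels: int = 40) -> float:
    """Average vote weight under the toy model: a user at 'level' k
    casts votes worth k points, and each level has half as many users
    as the one below it (a rough stand-in for quasi-logarithmic
    karma weighting)."""
    # Population at level k is proportional to (1/2)**k.
    populations = [0.5 ** k for k in range(1, levels + 1)]
    total_votes = sum(pop * k for k, pop in enumerate(populations, start=1))
    total_users = sum(populations)
    return total_votes / total_users

print(average_voting_power())  # ≈ 2.0
```

The infinite-sum version is exactly 2 (since the weighted series Σ k/2^k = 2 and Σ 1/2^k = 1), which is why capping at 40 levels changes essentially nothing.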
This means that the extra voting power is a fairly light thumb on the scale. It means that community members who have earned a reputation for consistently providing thoughtful, interesting content can have a slightly greater chance of influencing the ordering of top posts. But the effect is going to be swamped if only a few newer users disagree with that perspective.
The emphasis on can in the preceding sentence is because people shouldn't be using strong upvotes as their default voting mechanism — the normal-upvote variance will be even lower. However, if we thought this system was truly open to abuse, a very simple way we could mitigate this is to limit the number of strong upvotes you can make in a given period of time.
There's an intersection here with the community norms we uphold. The EA Forum isn't supposed to be a place where you unreflectively pursue your viewpoint, or about 'winning' a debate; it's a place to learn, coordinate, exchange ideas, and change your mind about things. To that end, we should be clear that upvotes aren't meant to signal simple agreement with a viewpoint. I'd expect people to upvote things they disagree with but which are thoughtful and interesting etc. I don't think for a second that there won't be some bias towards just upvoting people who agree with you, but I'm hoping that as a community we can ensure that other things will be more influential, like thoughtfulness, usefulness, reasonableness etc.
Finally, I'd also say that the karma system is just one part of the way that posts are made visible. If a particular minority view is underrepresented, but someone writes a thoughtful post in favour of that view, then the moderation team can always promote it to the front page. Whether this seems good to you obviously depends on your faith in the moderation team, but again, given that our community is built on notions like viewpoint diversity and epistemic humility, then the mods should be upholding these norms too.
Yeah MoneyForHealth, it does seem like it would be useful if you could point out instances of this happening on LW. Then we'll have a better shot at figuring out how it happened, and avoiding the same thing happening with the EA Forum migration.