Hi Iris Amazon, thank you for your interest in helping the Amazon rainforest in the most effective way possible.
I founded SoGive, an organisation which aims to help donors get EA-based answers to questions such as these. We have not done a careful review of this question, so this comment is off-the-cuff.
I suspect that the best way to help the rainforest is probably to support an animal welfare charity.
Avoiding deforestation is intrinsically effective at preserving the rainforest, and deforestation also increases the risk of forest fires (see, e.g., Cardil et al 2020; note that a fuller analysis would seek to understand your goals better to ensure that tackling deforestation really achieves what you're aiming for).
I understand that Amazon deforestation is mostly caused by cattle ranching (63% according to this source, which cites World Resources Institute using Hansen et al 2019, 80% according to this source; note that a fuller analysis would fact-check these sources further and seek to understand how the numbers were derived)
My best guess is that the Good Food Institute is the best charity to donate to for this. In case I haven't made it clear enough thus far, this is a caveated recommendation.
Good Food Institute (GFI)
GFI works to accelerate alternative protein innovation (i.e. plant-based meat or cultivated/lab-made meat). It does this through lobbying, research and other activities.
We have not done a review of GFI. You can find the Animal Charity Evaluators review of GFI here. I cannot vouch for the quality of Animal Charity Evaluators because we haven't reviewed their work carefully enough yet, although we plan to.
A major downside of GFI is that their work will take time, and may not be suitable if you seek immediate impact.
Why GFI and not another animal charity?
As this comment is not a rigorous review, I can't be confident that GFI is the best choice. However, I looked briefly at the Animal-Charity-Evaluators-recommended charities, and observed that their recommendations tend to help animals like chickens, or perhaps fish, but less so cows. This makes sense given that Animal Charity Evaluators focuses on animal welfare, which is much worse for industrially farmed chickens than for most ruminants, such as cows. GFI's work is more systemic and therefore could impact cows as well.
It is certainly possible that another charity is more effective at preventing cattle ranching without me knowing about it. A fuller review would explore this question further.
Is tackling animal product demand definitely the right choice?
Just because we have a chart showing that most of the Amazonian deforestation is caused by cattle ranching, that doesn't necessarily mean that stopping the cattle ranching will stop the deforestation.
For example, it may be that the land will continue to be sought after, but for another purpose (e.g. I understand that palm oil mostly happens in other rainforest-rich countries at the moment, but that there are plans afoot to increase palm oil production in Brazil).
This is yet another area which would need a fuller review in order to have confidence in the recommendation.
If the recommendation turns out to be wrong, I suspect that this is most likely to be the cause.
Why not a charity which works directly to counter deforestation?
It may seem counter-intuitive to suggest a charity which doesn't directly work with rainforests. Below I set out some specific examples of charities working directly with rainforests. We haven't done a full review of all such charities.
However, several interventions aimed at rainforest protection suffer from risks such as leakage (aka displacement; i.e. if you protect one area of rainforest, the loggers may simply go elsewhere). Weak land rights may also render some rainforest preservation methods less effective.
This isn't to say that all rainforest conservation work is doomed to failure, only that it's hard, and that we haven't found decent evidence of a rainforest charity overcoming these hurdles.
SoGive has written a shallow, public-information-only review on WWF, which can be found here:
You may find the write-up interesting for its summary of WWF's work, but in short it found that we don't have enough information to form a view on WWF's effectiveness.
An assessment of "more information needed" might sound like it doesn't tell us much; however, donors in the EA movement often hold a sceptical prior on charity impact (i.e. they believe that achieving impact is hard, and that in the absence of evidence we should likely assume the charity isn't achieving much impact).
Assuming that you too share this sceptical prior, then you may be interested in a charity which is supported by the EA community. The EA community largely supports a recommendation of a donation to GFI, however here are a couple of other EA recommendations:
Founders Pledge (an EA-aligned group whose analysis I have partially reviewed and believe to be generally good) used to recommend donations to Coalition for Rainforest Nations (CfRN). CfRN runs a scheme called REDD+, which allows donors to donate to prevent deforestation.
Since then, I understand that Founders Pledge no longer recommends CfRN (I'm not claiming that their change is caused by the SoGive report; if you want to know why their opinion changed it's best to ask them).
There are further concerns about REDD+ which were not fully outlined in that report, such as the nuances of determining the reference level / counterfactual (i.e. the thorny question of what would have happened to the forests otherwise).
However it is useful to recognise some positives: there is a real lack of carbon offset schemes that are effective at scale, and REDD+ could be that solution, especially since it's recognised by the UN and built into the Paris Agreement.
Cool Earth used to be recommended by Giving What We Can when they did charity analysis. Cool Earth aims to protect rainforests by supporting the indigenous communities living in the rainforests.
We at SoGive believe that the GWWC analysis did not give enough weight to certain considerations, as set out here. For example, it did not give enough weight to displacement / leakage.
One might imagine that if Cool Earth expanded enough, there would be so much rainforest protected that loggers would have nowhere to go. The fact that only 20% of rainforest is inhabited by indigenous peoples suggests that for at least some types of logger, this isn't credible.
One thing that confused me about the game/ritual was that I had the power to inflict a bad thing, but there was no obvious upside.
All I had to do was ignore the email, which seemed too easy.
This seems to be a bad model for reality. People who control actual nuclear buttons perceive some upside from using them (even if it's only the ability to bolster their image as some kind of "strong-man" in front of their electorate).
Perhaps an alternative version could award an extra (say) 30 karma points to those who use the "nuclear" codes?
When I started thinking about these issues last year, my thinking was pretty similar to what you said.
I thought about it and considered that, for the biggest risks, investors may have a selfish incentive to model and manage the impacts that their companies have on the wider world -- if only because the wider world includes the rest of their own portfolio!
It turns out I was not the first to think of this concept, and its name is Universal Ownership. (I've described it on the forum here)
Universal Ownership doesn't go far enough, in my view, but it's a step forward compared to where we are today, and it gives people an incentive to care about social impacts (or social "profits").
As I alluded to in a comment to KHorton's related post, I believe SoGive could grow to spend something like this much money.
SoGive's core idea is to provide EA style analysis, but covering a much more comprehensive range of charities than the charities currently assessed by EA charity evaluators.
As mentioned there, benefits of this include:
SoGive could have a broader appeal because we would be useful to so many more people; it could conceivably achieve the level of brand recognition achieved by charity evaluators such as Charity Navigator, which have high levels of brand recognition in the US (c50% with a bit of rounding).
Lots of the impact here is the illegible impact that comes from being well-known and highly influential; this could lead to more major donors being attracted to EA-style donating, or many other things.
There's also the impact that could come from donating to higher impact things within a lower impact cause area, and the impact of influencing the charity sector to have more impact
Full disclosure: I founded SoGive.
This short comment is not sufficient to make the case for SoGive, so I should probably write up something more substantial.
I believe that in time EA research/analysis orgs both could and should spend > $100m pa.
There are many non-EA orgs whose staff largely sit at a desk, and who spend >$100m, and I believe an EA org could too.
Let's consider one example. Standard & Poors (S&P) spent c.$3.8bn in 2020 (source: 2020 accounts). They produce ratings on companies, governments, etc. These ratings help answer the question: "if I lend the company money, will I get my money back". Most major companies have a rating with S&P. (S&P also does other things like indices, however I'm sure the ratings bit alone spends >$100m p.a.)
S&P for charities?
Currently, very few analytical orgs in the EA space aim to have as broad a coverage of charities as S&P does of companies/governments/etc.
However an org which did this would have significant benefits.
They would have a broader appeal because they would be useful to so many more people; it could conceivably achieve the level of brand recognition achieved by charity evaluators such as Charity Navigator, which have high levels of brand recognition in the US (c50% with a bit of rounding).
Lots of the impact here is the illegible impact that comes from being well-known and highly influential; this could lead to more major donors being attracted to EA-style donating, or many other things.
There's also the impact that could come from donating to higher impact things within a lower impact cause area, and the impact of influencing the charity sector to have more impact.
I find these arguments convincing enough that I founded an organisation (SoGive) to implement them.
At the margin, GiveWell is likely more cost-effective, however I'd allude to Ben's comments about cost-effectiveness x scale in a separate comment.
S&P for companies' impact?
Human activity, as measured by GDP (for all that measure's flaws), is split roughly 60%(ish) for-profit companies, 30%(ish) governments, and a little from other things (like charities).
As I have argued elsewhere, EA has likely neglected the 60% of human activity, and should be investing more in helping companies to have more positive impact (or avoiding their negative impact)
The charity CDP spent £16.5m (c.$23m) in the year to March 2019 (source). They primarily focus on the question of how much carbon emissions are associated with each company. The bigger question of how much overall impact is associated with each company would no doubt require a substantially larger organisation, spending at least an order of magnitude more than the c$23m spent by CDP.
(Note: I haven't thought very carefully about whether "S&P for companies' impact" really is a high-impact project)
Not sure how good the Robert Miles channel is for mums (mine might not be particularly interested in his channel!), but for communicating about AI risk Robert Miles is (generally) good, and I second this recommendation.
I agree with your point that investors have some blind spots, in particular that some areas of finance are not good at incorporating long term considerations.
So I think you're right, the ESG concept probably could achieve some impact by helping address that sort of blind spot.
I probably should have said something more like "To judge whether I, as someone working in ESG investing, am having material impact, we need to see if I'm actually having an influence on scenarios where there is a tension/trade-off". This is because ESG-related work is already working to address that sort of blind spot.
Thanks very much for pointing out that error -- now corrected. I've looked at the answers which have been recorded, and they include an answer which includes comments similar to the comment you made here, so I think it's been recorded. Thank you very much!
How nervous should we be about talking about/recommending action on AI risk?
I think a lot of people in the EA community worry that AI risk is "weird", sufficiently weird that you should probably be careful talking about it to a broad audience or recommending what they donate to. Many would fear alienating people or damaging credibility. (Especially when "AI risk" refers to the existential risks from AI, as opposed to, e.g., how algorithms could cause inadvertent bias/prejudice)
A thought experiment to make this more concrete: imagine you were organising a big sponsored event where lots of people would see 3 recommended charities. Would you recommend that (say) MIRI would be one of the recommended charities?
Thank you to Alex for writing this piece, which I think is really helpful.
I am a Founder and Director of SoGive. We support donors to achieve more impact, and we influence c£1m per annum, the majority of which is from a very small number of major donors.
In this comment, I will say that I think the thrust of Alex's concerns is valid and, to my mind, still stands. But first:
I want to take my hat off to the guys at Giving Green.
My first tentative forays into getting SoGive going were as early as 2015 and the official start date was 2017, so it's taken a long time to get to where we are. By contrast Giving Green has achieved a much higher profile than we have, and they've achieved it quickly. I would also say that Giving Green's analytical capabilities are ahead of where we were in 2016. Furthermore, the team is still only working on Giving Green in their spare time, so their progress is impressive.
While achieving traction quickly is great, I question whether Giving Green has achieved their traction too quickly.
For the first several years of our existence, SoGive's recommendations were solely borrowed from other better-resourced organisations like GiveWell, and we're only now in the process of updating our website to reflect our own analysis.
And of course just because SoGive is doing things one way, it doesn't mean that that way is right. But there are reasons for our cautious approach.
I believe it is premature for Giving Green to put equal emphasis on recommendations where there is an EA consensus (like CATF) and recommendations where Giving Green is going out on a limb (like TSM).
I have had a small number of conversations with the Giving Green team now, and I think they are good guys who could create a good analytical organisation given time.
And on some of the points that Dan made in this thread, I have sympathies with his position. For example, on Climeworks, he made the point that "you are betting on the technology, not the company". Contra Alex, I think this is a reasonable argument in favour of the claim that one of the Metaculus forecasts is not analytically helpful (although it doesn't support Dan's claim that both are irrelevant).
Having said that, the majority of Alex's concerns still stand, to my mind.
Furthermore, I have read some of the Giving Green analysis, and believe that Alex's list of concerns would be longer, if only there were time to do a more detailed review.
I'm conscious that reading much of this thread may feel punishing for the Giving Green team. However I really am positive about the long-term potential for this project.
Our approach came about as a result of conversations with people who know generally what works best in influencing lawmakers/lobbying, and specifically in the UK.
Agreed with alexrjl re opinion polls. Implementing a poll/survey is straightforward for us (I used to run a research team when I was a strategy consultant). The reason we're not doing it is that our discussions with experts suggest that there is not much value in doing this.
We reached out to that MP and several other MPs and parliamentarians in the days immediately after the announcement, and are also in conversation with several NGOs active in this space, and other groups.
Timeline -- fairly urgent. There will be a bill going to parliament to change the law, and I don't think anyone knows exactly when that will be, but it can't be this side of Christmas (nothing works that quickly) and it will probably be before April (which is when the financial year starts). Given that they want it to go through and may anticipate opposition, I would guess late January.
Plan -- which Tory MPs are relevant: for those who are bound to follow the whip (either because they always follow the whip, or because they are dead against international development), we don't touch them -- there's no point. For those who are more on the fence, there is probably still little value, as the whip is probably fairly strong (I haven't investigated that last claim very closely, so if anyone has opposing opinions I would be interested to hear them). For those who are against, but who might only abstain rather than rebel (which is what mostly happened when the Conservative party wanted the right to break international law), influencing them to rebel instead of abstain will help.
The ask: I think we have two asks: (1) vote against reducing the 0.7%; (2) an amendment so that, if the reduction does go ahead, it is written into the Bill that it should be temporary (which is what Rishi said anyway).
Budget: as we're using Google/Facebook ads (and not hiring people), there aren't any "chunked-up" elements of spend -- it's all smoothly spendable. In other words, the more the merrier. If we have only a few thousand, we can use it. If we have a bit more or a lot more, we can use it.
Will the government win: I have discussed this with a few people and heard differing opinions. I don't have a strong opinion on how likely this is.
Lessons from previous campaigns: I haven't studied previous campaigns, but I've spoken to some NGOs working in this space and the thinking that they have outlined is pretty similar to the plan I set out above. So their implicit learning from previous campaigns is supportive
I don't think they do. I seem to remember that this topic was debated some time back and GiveWell clarified their view that they don't see it this way, but rather they just consider the immediate impact of saving a life as an intrinsic good. (although I would be more confident claiming that this is a fair representation of GiveWell's views if I could find the place where they said this, and I can't remember where it is, so apologies if I'm misremembering)
How I think of the impact of saving a life (by donating to the likes of AMF):
a life is saved, and the grief caused by that death is averted
the person whose life is saved lives the rest of their life
Total fertility rates reduce because of lower child mortality
In terms of total number of lives lived, the saving-lives effect and the reducing-fertility effect probably roughly cancel each other out in places where the current fertility is high (source: David Roodman on GiveWell blog)
So saving the life helps us, one life at a time, to transition to a world where people have fewer children and are able to invest more in each of them (and averts plenty of bereavement grief along the way)
I am glad you are seriously considering the implications of your philosophical beliefs -- this is laudable. I very much hope you don't conclude it's bad to save children's lives.
Sorry if I misunderstood, but does this rest on the assumption that farmed animal welfare is net negative? More on this here: http://interestingthingsiveread.blogspot.com/2018/12/veganism-may-be-net-negative-but-we.html
I've tried using Gather Town, and it's fine except for the minor detail that the tech often fails! Another platform called Mingle Space seems to have enough of the same good features, and seems to work more robustly.
I also run SoGive, an organisation with an exciting mission to expand our analysis to a broad range of charities. We need help with updating our website, so coders, especially those with frontend experience, would be great!
Thanks very much Kris, I'm very pleased that you're interested in this enough to write these comments.
And as you're pointing out, I didn't respond to your earlier point about talking about the evidence base for an entire approach, as opposed to (e.g.) an approach applied to a specific diagnosis.
The claim that the "evidence base for CBT" is stronger than the "evidence base for Rogerian therapy" came from psychologists/psychiatrists who were using a bit of a shorthand -- i.e. I think they really mean something like "if we look at the evidence base for CBT as applied to X for lots of values of X, compared to the evidence base for Rogerian therapy as applied to X for lots of values of X, the evidence base for the latter is more likely to have gaps for lots of values of X, and more likely to have poorer quality evidence if it's not totally missing".
It's worth noting that while the current assessment mechanism is the question described in Appendix 1f, this is, as alluded to, not the only question that could be asked, and it's also possible for the bot to incorporate other standard assessment approaches (PHQ9, GAD7, or whatever) and adapt accordingly.
Having said that, I'd say that this on its own doesn't feel revolutionary to me. What really does seem revolutionary is that, with the right scale, I might be able to say: This client said XYZ to me, if I had responded with ABC or DEF, which of those would have given me a better response, and be able to test something as granular as that and get a non-tiny sample size.
I'm unclear why you are hesitant about the claim of the potential to revolutionise the psychology evidence base. I wonder if you perhaps inadvertently used a strawman of my argument by only reading the section which you quoted? This was not intended to support the claim about the bot's potential to revolutionise the psychology evidence base.
Instead, it might be more helpful to refer to Appendix 2; I include a heavily abbreviated version here:
The source for much of this section is conversations with existing professional psychiatrists/psychologists.
Currently some psychological interventions are substantially better evidenced than others.
Part of the aim of this project is to address this in two ways:
(1) Providing a uniform intervention that can be assessed at scale
(2) Allowing an experimental/scientific approach which could provide an evidence base for therapists
Crucially, TIO is fundamentally different from other mental health apps -- it has a free-form conversational interface, similar to an actual conversation (unlike other apps which either don’t have any conversational interface at all, or have a fairly restricted/”guided” conversational capability). This means that TIO is uniquely well-positioned to achieve this goal.
To expand on item (2), the idea is that when I, as someone who speaks to people in a therapeutic capacity, choose to say one thing (as opposed to another thing) there is no granular evidence about that specific thing I said. This feels all the more salient when being trained or training others, and dissecting the specific things said in a training role play. These discussions largely operate in an evidence vacuum.
The professionals that I've spoken to thus far have not yet been able to point me to evidence as granular as this.
If you know of any such evidence, please do let me know -- it might help me to spend less time on this project, and I would also find that evidence very useful.
Thank you very much for taking the time to have a look at this.
(1) For links to the bot, I recommend having a look at the end of Appendix 1a, where I provide links to the bot, but also explain that people who aren't feeling low tend not to behave like real users, so it might be easier to look at one of the videos/recordings that we've made, which show some fictional conversations which are more realistic.
(2) Re retention, we have deliberately avoided measuring this, because we haven't thought through whether that would count as being creepy with users' data. We've also inherited some caution from my Samaritans experience, where we worry about "dependency" (i.e. people reusing the service so often that it almost becomes an addiction). So we have deliberately not tried to encourage reuse, nor measured how often it happens. We do however know that at least some users mention that they will bookmark the site and come back and reuse it. Given the lack of data, the model is pretty cautious in its assumptions -- only 1.5% of users are assumed to reuse the site; everyone else is assumed to use it only once. Also, those users are not assumed to have a better experience, which is also conservative.
I believe your comments about hypotheticals and "this will be the next facebook" are based on a misunderstanding. This model is not based on the "hypothetical" scenario of people using the bot, it's based on the scenario of people using the bot *in the same way the previous 10,000+ users have used the bot*. Thus far we have sourced users through a combination of free and paid-for Google ads, and, as described in Appendix 4a, the assumptions in the model are based on this past experience, adjusted for our expectations of how this will change in the future. The model gives no credit to the other ways that we might source users in the future (e.g. maybe we will aim for better retention, maybe we will source users from other referrals) -- those would be hypothetical scenarios, and since I had no data to base those off, I didn't model them.
(3) I see that there is some confusion about the model, so I've added some links in the model to appendix 4a, so that it's easier for people viewing the model to know where to look to find the explanations.
To respond to the specific points: the worst case scenario does *not* assume that the effect lasts 0.5 years. It assumes that the effect lasts a fraction of a day (i.e. a matter of hours) for exactly 99.9% of users; the remaining 0.1% of users are assumed to like it enough to reuse it for about a couple of weeks and then lose interest.
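As an illustration only (the three-hour figure below is my placeholder for "a matter of hours", not a number taken from the model), the worst-case assumptions blend out to roughly a seventh of a day of effect per user:

```python
# Illustrative sketch of the worst-case assumptions described above.
# The 3-hour effect duration is a placeholder assumption; the comment
# itself only says "a matter of hours".
share_brief = 0.999          # users whose effect lasts only hours
share_reuse = 0.001          # users who reuse for ~two weeks
effect_days_brief = 3 / 24   # placeholder: 3 hours, expressed in days
effect_days_reuse = 14       # "about a couple of weeks"

expected_effect_days = (share_brief * effect_days_brief
                        + share_reuse * effect_days_reuse)
print(round(expected_effect_days, 3))
```

Even under these deliberately pessimistic assumptions, the average effect duration is well under a day, which is far more conservative than the 0.5 years mentioned.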
I very much appreciate you taking the time to have a look and provide comments. So sorry for the misunderstandings, let's hope I've now made the model clear enough that future readers are able to follow it better.
We used kickstarter when we did one. I think we were swayed by the possibility that Kickstarter might recognise how wonderful our project was and we might be selected as one of the projects that people see when they arrive on the main page. If you get this, it's essentially hugely valuable free publicity.
In retrospect, I think this was naive, and probably a mistake. Kickstarter takes (if I remember correctly) 5% of the funds, which is quite a bit.
This question appears to be unpopular -- at time of writing it has a karma of -6.
However I'd like to defend/steelman this question.
First, let's try to understand those who appear not to like this post.
The post makes the claim that inequality is the "the root cause of most of society's ills", however it does not provide evidence for this claim.
I'm not going to try to defend this claim.
What I will say is that whether or not the claim is correct, I would like the Effective Altruism community to be able to help with the question raised by the original poster:
What types of charity will be the most effective for creating a more equal society?
EA ways of thinking *should* be a tool to enable people to answer practical ethical questions such as this, even if the link between a more equal society and all of society's ills is not clear.
For example, some may believe that equality is an intrinsic good.
So, having made the case that this community should be more supportive of this question, here are some brief thoughts.
Society can be made more equal by
(a) raising the wealth/standards for those on the bottom rung
(b) redistributing from the richest to the poorest
Also, most EA thinking tends to focus either on direct-impact work, which is typically required to have good cost-effectiveness, or on hits-based work, which is required to have a potentially huge impact.
When helping the poor, the EA community tends to take a global perspective, because people in the developing world are typically much poorer and easier to help than those in the developed world.
A good choice of charity for a redistribution charity with a direct impact is GiveDirectly, which is recommended by GiveWell
For a more hits-based approach, some have given consideration to tax policy. I have seen a write-up on the EA Forum about this; however, I have not reviewed it, and I neither endorse nor disavow it.
As for raising the wealth of the poorest people without simply giving people money, this has turned out to be surprisingly difficult. For example, microcredit does not appear to be particularly effective at this.
Apologies that this response is too brief to do justice to this complex question.
Thank you to Maksim for engaging with the EA community, and I hope you find the responses to your question useful.
I would find it extremely surprising if compromising on charity choice led to you getting 10x more donations. Based on past experience, I'd be surprised if it got you 10% more donations.
Many people would express preferences about where to donate if asked if they have preferences. However if they are going through a donation UX, every time they have one fewer click it's a win for them, and very few donors have preferences strong enough to overcome their desire for a clean UX. (I think this is intuitive for many non-EA people).
Hence my recommendation to focus on just one charity (or basket of high impact charities), but allow users the option to donate to anything if they don't like the default choice.
Allfed's work is very exciting, and I hope you all do great things and ensure we are all kept safe.
My intuition says that the No More Pandemics concept would resonate more with the voting population (and, perhaps as important, would seem to the typical political representative to resonate more with the voting population) than a backup plan concept. But I could be persuaded otherwise.
I don't have a strong opinion on this, because my experiences are more based on the UK than the US, which may be different.
However, if your intuition says that veterans' charities are more likely to appeal to Republicans than Democrats, Democrats might have the same intuition.
What I can say is that veterans' charities (certainly in the UK, and probably in the US too) are rich with organisations whose impact enormously underperforms AMF. By several orders of magnitude. So if you did decide to include a veterans' charity, you would need a really good reason.
And if you need someone to assess the charities you're considering, let me know -- I can get someone from the SoGive analysis team to take a look.
What a beautiful idea! De-escalating the political campaigning spend arms race and redirecting the money to high-impact charity sounds lovely! I have some thoughts, not all encouraging.
(1) I suspect your platform might not actually generate many donations
Getting donors to actually navigate to a donation platform is notoriously hard.
My intuition says that the idea is cute enough that it will get some attention (including, perhaps, from the press) but not enough to move lots of money.
However, that's just my intuition; don't trust it. A better guide than my intuition is whether you can find a constituency that is willing to promote your concept and that has influence over political funders. Alternatively, if you have evidence (perhaps conduct some primary research, if necessary?) that people with opposing political views often talk to each other and lament the fact that they throw so much money away in a futile manner, then maybe some press attention could spark something.
(2) To justify your spend, you probably want to generate >$1m in the near to mid term
As a rough rule of thumb, fundraising spend should generate c4x as much as the fundraising cost itself. So if you're going to spend $250k, then you want to generate c$1m to justify the investment.
This is because you should get some reward for taking business risk.
If you believed that the political campaigning spend has some positive benefits (e.g. spreading useful information, or maybe you think that political engagement is an intrinsic good) then your threshold should be higher.
However you probably don't believe this, and given the amount of money spent on political campaigning, I think I agree.
If you believed that the campaign spend is actually harmful, then you could justify a lower target. However note that this would be a fairly convenient belief for you to have, so aim to have really good evidence before even considering this.
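The break-even arithmetic above can be sketched in a few lines. This is a minimal illustration, not a financial model: the 4x multiplier and $250k spend are the figures from the text, and the alternative multipliers are purely hypothetical assumptions for comparison.

```python
def required_donations(spend: float, multiplier: float) -> float:
    """Donations needed to justify a fundraising spend, given a
    rule-of-thumb multiplier (the text suggests roughly 4x)."""
    return spend * multiplier

spend = 250_000  # projected platform cost, from the text

# Baseline rule of thumb: raise ~4x what you spend.
print(required_donations(spend, 4))  # prints 1000000.0? No -- ints: 1000000

# If campaign spend had positive side-effects, you'd demand a higher
# multiplier; if it were actively harmful, a lower one might be defensible
# (though, as noted above, treat that belief with suspicion).
for m in (3, 4, 5):
    print(f"multiplier {m}x -> target ${required_donations(spend, m):,}")
```

So a $250k build only clears the hurdle if it plausibly moves around $1m in donations, which is the crux of point (3) below about lowering costs.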
(3) Find ways to lower your costs, e.g. through collaboration
If my guesses are right, you have a problem: you need to generate c$1m of donations, and I don't think you will. So to help resolve this...
... I question the value of building your own donation platform.
There is already a plethora of donation platforms, each of which has already spent c$250k creating their platform. Collaborating with one of them could:
lower your costs (and hence lower the $1m target)
allow you to expend more effort on getting donors and spreading your message
However, you would probably have to accept some compromises about the nature of the donation platform.
After all, if it hasn't been designed with your needs in mind, it probably won't be perfect.
However I expect that your project probably will achieve more impact through getting people to think about and talk about the problem, and less through the actual donations raised. If my expectations are right, then compromises on the details of the platform are OK.
Groups you could collaborate with:
SoGive runs a donation platform (Full disclosure: I founded and run SoGive)
Momentum might be a good fit for you (I can intro you if you wish)
(4) You want to "nudge" users to an apolitical, high-impact charity, such as AMF.
We at SoGive have seen some donors interact with this sort of campaign in the past. I suggest that you want to take the following approach:
As far as your donors are concerned, the money is going "to charity", which means that they aren't thinking too much about what that charity is, they will just assume that anything is good
You need to avoid anything political, because that would distract from the message. So no veterans' charities, no climate change, nothing obviously political.
Because your donors aren't thinking about what the charity is, suggesting something like AMF will work just fine. Feel free to include something on your website explaining the rationale (e.g. "careful analysis, bang for buck, etc etc"). Not many people will read it.
I also suggest making this a "nudge"; i.e. allow users to donate to any charity, but make the default AMF. Not many users will depart from the default.
Good luck, and let me know if you want to talk further!
How long until the world risks under-reacting to a pandemic?
There's uncertainty over how long we'll remain well-prepared for a future pandemic. For example, this study (conducted by my organisation SoGive) surveyed some biorisk orgs. To see the answers, I suggest looking at this comment and reviewing the answers to the first question:
"Do you think that the world will handle future pandemics and bio risks better as a result of having gone through the current coronavirus pandemic?"
As can be seen, there were several pessimistic answers. I think we should expect there to be some selection effects and biases in these answers, but the concerns around overindexing do strike me as reasonable.
In any case, I agree that a lasting impact sounds valuable.
How to have a lasting impact?
Some of the policy proposals are designed to have a longer-term impact. For example, strengthening the BWC would hopefully last some decades (assuming that institutional inertia has the effect I'm hoping for, although I'm unclear how likely this is). Also, the funding commitment (similar to the 0.7% ODA commitment) is also intended to last a long time.
However it's far from clear that this would last for generations.
Your idea of remembrance days and memorials is really interesting, and something I hadn't thought of.
And it does strike me that the 1918 pandemic had huge societal impacts, but most of the world was oblivious to this pre-COVID.
I don't think I would have the patience for EA thinking if the spread weren't big. Why bother with a bunch of sophisticated-looking models and arguments to only make a small improvement in impact? Surely it's better to just get out there and do good?
A number of communities have been created across the EA space that bring together people with a shared professional affiliation (I see Aaron has mentioned REG, which is likely the most similar to your concept). I don't believe this has been done with pro athletes before.
I founded and run a group called SoGive which raises funds and does analysis on charities.
I would be happy to connect with you and support you if that would help; I'll send you a direct message on the EA Forum.
Thanks Soeren, this is a useful point to help to tease out the thinking more clearly:
Agree that major institutions/governments will invest better in pandemic preparedness for some (unknown) number of years from now (better than recently, anyway)
Also expect that this work will be inadequate, by (for example) overindexing/overfitting on what's happened before (a flu with a fatality rate of 2.5% or less, or another coronavirus), but not anticipating other possible pandemics (Nipah, Hendra, or something man-made)
If you had asked me in (say) early April, I would have guessed that major institutions will get more funding, and that NGOs who are better at considering tail risks and x-risks and tackling these overfitting errors will also get more funding.
We now think that those major institutions will get more funding, but that the more existential-risk-focused NGOs aren't getting materially more funding, at the moment
Notable responses included the comment from Howie Lempel which reiterated the points in the Open Phil article about how it seemed unlikely that someone watching the field would fail to notice if there was a sudden increase in capabilities.
Also, Rob Wiblin commented to make it clear that 80,000 Hours doesn't necessarily endorse the view that nanotech/APM is as high a risk as that survey suggests.
I understand it involves "maybe <...> 10 billion people, debating and working on these issues for 10,000 years". And *only after that* can people consider actions which may have a long term impact on humanity.
How do we ensure that:
(a) everyone gets involved with working on these issues? (Presumably some people are just not interested in thinking about this? Getting people to work on things they're unsuited for seems unhelpful and unpleasant.)
(b) actions that could have a long-term impact on humanity aren't taken unilaterally? How could people be stopped from doing that?
I think a totalitarian worldwide government could achieve this, but I assume that's not what is intended.