Hi, this is Dan from Giving Green. As you might imagine, I have a lot to say here.
First though, let me thank Alex for going about this criticism in what I would consider the right way: he brought his concerns to us, we had a discussion, and he changed some things based on that discussion. He also offered us a chance to comment on his draft to ensure he hadn’t said anything blatantly factually inaccurate. And then he aired his disagreements in a respectful post. So thanks for that, Alex.
That being said, I fundamentally disagree with the majority of Alex’s points, and believe that the judgement calls we have made at Giving Green allow us to be impactful to a wider audience.
But let’s start with something else: Giving Green is a young organization, and I think we have a lot of room to improve and pivot. So criticism is welcomed, and some of Alex’s suggestions did resonate with us.
First, I think we could do a better job of promoting the donation options we think are “better” (i.e., policy rather than offsets). I think the offset research is valuable (as described below), but I agree that it’s not obvious to users of the website that we recommend policy over offsets, so that’s something we’d like to improve.
Second, although I do think we have some fundamental disagreements about the value of modeling uncertain situations, I do think there would be value in modeling the cost-effectiveness of offsets more explicitly. I think this is a case where the modeling assumptions are tractable, and we could provide users useful cost-effectiveness data, and may even promote certain offsets over others. This is something we’ve wanted to do for a while, but haven’t had the time to implement. (As Alex noted, we have limited funding and have relied heavily on “side of the desk” work to create Giving Green.)
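To make that idea concrete, here is a minimal sketch of what such an offset cost-effectiveness comparison could look like. Every name and figure below is a hypothetical placeholder invented for illustration, not a Giving Green estimate; the point is only that price, claimed tonnage, and a confidence discount combine into a comparable cost per tonne.

```python
# Hypothetical sketch of a simple offset cost-effectiveness comparison.
# All figures are illustrative placeholders, not Giving Green estimates.

def cost_per_tonne(price_usd, claimed_tonnes, confidence):
    """Effective cost per tonne of CO2 averted, discounting the claimed
    tonnage by the probability that the offset delivers as claimed."""
    return price_usd / (claimed_tonnes * confidence)

# Made-up example offsets: price paid, tonnes claimed, confidence it's real.
offsets = {
    "cookstove_project": cost_per_tonne(10.0, 1.0, 0.8),    # cheap, less certain
    "direct_air_capture": cost_per_tonne(600.0, 1.0, 0.95),  # costly, very certain
}

# Rank from cheapest to most expensive effective cost per tonne.
for name, cost in sorted(offsets.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:.2f}/tonne")  # cookstove_project: $12.50/tonne
```

Under these made-up inputs the cheap-but-less-certain offset still wins by a wide margin; with different judgement calls on `confidence`, the ranking can change, and that comparison is exactly the kind of data users could find useful.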
Now onto the disagreements. I think to respond to every point I would have to write a book, but let me tackle the main ones.
Recommending Offsets: I made an argument defending recommending offsets (even though we believe they are less cost-effective than policy charities) in a previous comment on this post [EA · GW]. The main idea is that there’s a tradeoff between certainty and high-risk, high-reward options, and I think there’s a market for both. I’ll paste the most fun part of the argument below.
“Finally, at the risk of going down a rabbit hole, one more point. There are a lot of parallels to this offset debate within international development/global health, an area in which EA is much more developed. Within EA communities, most people are quite comfortable with the recommendations from GiveWell, which are all direct-delivery of health services, and therefore things that can be measured with a high level of certainty. (Like offsets!) So why don't big international development agencies (World Bank, etc) concentrate only on direct delivery of health services? It's not because they are just stupid. It's because they think they can have more bang for their buck investing in systemic changes that can't be well-quantified with an RCT (like institution-building, macroeconomic stability, infrastructure, etc). Kinda like...funding charities that work on climate policy. So I would find it curious if the final consensus from EAs on global health is all about certainty, but in environment it is firmly for less-certain policy interventions. My argument would be that there is a clear place for both. “
Quantitative modeling: Alex is of the opinion that because we haven’t explicitly quantitatively modeled some of the tradeoffs we face, the analysis isn’t to be trusted. I think we just have a fundamental difference of opinion on the value of modeling in situations of extreme uncertainty. Look, I’m a trained economist and am pro-modeling in general. But if you’re going to make a model whose outcomes are decided by key parameters that you have to make uninformed judgement calls on, what is the value of the model? Why not just make your judgement call on the outcome?
I know that modeling is in vogue in the EA community so perhaps this makes us outsiders, but I fundamentally believe that modeling in these circumstances leads only to science-y false precision, and does not actually give more clarity.
Let’s take an example, which leads into a discussion below. Let’s say we were trying to weigh the value of a donation to the Sunrise Movement Education Fund (TSM) vs Clean Air Task Force (CATF). Ok, you could model it, but at some point you’re going to have to make a judgement call on the fundamental tradeoff: CATF is more likely to cause incremental change (though some would argue that this comes at the expense of entrenching fossil interests and hurting long-term progress), while TSM has a lower chance of causing more fundamental change (though at the potential expense of increasing polarization and jeopardizing incremental progress). So tell me, how are you going to get an unbiased, data-driven estimate of this key parameter that will determine the outcome of your model? I don’t think it’s possible, so I don’t want to go down that rabbit hole.
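To illustrate the point (rather than argue against it), here is a toy expected-value model of that choice. Every number in it is invented for illustration; the only thing it demonstrates is that the model’s verdict flips entirely on the one parameter you would have to guess, namely the probability that activism produces transformative change.

```python
# Toy sketch of why a TSM-vs-CATF model reduces to one judgement call.
# All numbers are made up for illustration; none are real estimates.

def expected_impact(p_success, impact_if_success):
    """Expected policy impact (arbitrary units) of a donation."""
    return p_success * impact_if_success

CATF_IMPACT = 1.0   # hypothetical value of incremental change
TSM_IMPACT = 10.0   # hypothetical value of transformative change
P_CATF = 0.5        # assumed chance CATF's incremental approach pays off

# The model's verdict flips entirely on the assumed probability
# that activism produces transformative change:
for p_tsm in (0.01, 0.05, 0.10):
    catf = expected_impact(P_CATF, CATF_IMPACT)
    tsm = expected_impact(p_tsm, TSM_IMPACT)
    winner = "TSM" if tsm > catf else "CATF"
    print(f"assumed p(TSM success)={p_tsm:.2f} -> recommend {winner}")
```

Whatever structure you wrap around it, the output is just a restatement of the judgement call fed in, which is the false-precision worry in a nutshell.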
Recommendation of Sunrise Movement Education Fund (TSM): Understanding how donations to organizations lead to policy change is an exercise in fundamental uncertainty, and is going to involve tough judgement calls. I understand that people could make a different judgement call on the tradeoffs with TSM and come to a different conclusion. To be honest, we’ll know a lot more over the next couple of years, as now is the time for TSM to flex its muscles and get climate on the agenda of the Biden administration (and Democratic Congress). But for now, we stand by our research and think it’s a good bet. You can read our justification on the site.
A couple of specific points: it’s true that TSM’s budget has grown massively over the last few years (as has CATF’s for that matter), but I think that’s a poor proxy for neglectedness. I think that there is very little effective climate activism happening out there, and there’s huge room for effective growth.
I’m really not compelled by the “uncertainty about the sign of impact” argument, though I don’t really have a way to argue against it quantitatively since it’s theoretically possible. I would just say that this argument is lobbed at a lot of organizations, since people have different theories of political change. For instance, above I linked to an article making a similar argument about the 45Q tax credit, which is one of CATF’s big claimed accomplishments. It’s messy.
Burn Recommendation: I really think that much of the criticism is off the mark here. Berkouwer and Dean (2020) focus much of their analysis on credit and demand curves and other fancy economics because that’s how economists get papers published, but underpinning the paper is a strong RCT that convincingly estimates the effect of purchasing a BURN stove on fuel use. Yes, it would be nice if the sample size used for long-term follow-up were larger. And yes, this is just one study, but it’s important to realize that it’s a carbon offset certification (which has a number of validation criteria) plus an RCT, which is rare and gives multiple layers of certainty. Given the difficulty of verifying many carbon offsets, I think this is a unique level of rigor that justifies our recommendation of BURN.
The worry that purchasing offsets will not actually lead to more stoves getting distributed is more valid, as this is very hard to verify. But I’m fundamentally willing to believe that if a company like BURN gets more revenue from every stove they sell, they will sell more stoves. In other words, I think the supply curve slopes up, like it usually does.
Climeworks Recommendation: This one is a little tougher. Like Alex said, we did not take cost into account when recommending offsets, because we were just looking for any offsets that we felt offered near-certainty. And Climeworks really does offer unparalleled certainty and permanence. But yes, Climeworks is expensive (and we are up front about that on the site). In order for it to be worth it, you have to believe that direct air capture and storage of CO2 is going to be an important part of the climate solution in the future. I don’t find those Metaculus numbers Alex listed too relevant, since you are betting on the technology, not the company. But I can see how reasonable people could disagree here.
denise_melchin on Why do content blockers still suck?
Thanks for the response! Freedom unfortunately just stopped working for me many times. After I uninstalled and reinstalled it for the fifth time (which makes it work again for a while) and the customer service had no idea what was going on, I gave up. I still use it for my phone however.
I don't think there is anything on the market which blocks things by default, which is the primary feature I am looking for, plus much more fine-grained blocking (e.g. the inability to access or Google content containing specific phrases).
ryan_b on Megaproject Management
Investigating the field more deeply this year is going to be one of my hobby projects, but my early impression is that a big part of the claim is that when you do regular project management things wrong, the penalty at least scales. It also looks like the cost of doing the analysis right, such as reference class forecasting, doesn't come remotely close to scaling with the needs of the project.
I'm glad about this, because I was worried at first the whole inquiry might be useless except to people in a position of responsibility. Instead, it looks like there will be a lot of methods that are always a good idea but also scale really well. Bonus!
flowo on ag4000's Shortform
I can also highly recommend Deep Work by Cal Newport. His main thesis is that 'real' work only happens, and productivity is only high, when you're working for a few hours at a time instead of in 15-minute blocks with constant interruptions. Edit: should have read the linked post first haha, so see this as another vote for Cal Newport.
michaelstjules on Ranking animal foods based on suffering and GHG emissions
This is very cool! Good job!
maxdalton on CEA update: Q4 2020
That's interesting - I'm surprised by that and wonder if it's due to some differences between systems? In the UK people often begin to think about internships in their first or second years, and then look for jobs in the 3rd year, so I think there's quite a lot of ability to influence and discuss career plans early on. In the US degrees are longer, but early on people are trying to decide their major, which is also a significant career decision. I also think that students have a lot more time and interest in engaging with new things, and they tend to be easier to reach (e.g. because they all come to activity fairs). How do you find/target these early-career people? And aren't they already normally in employment/set on a career path?
CBGs remain open to non-student groups.
villesokk on Ranking animal foods based on suffering and GHG emissions
Thank you for the kind words, Bella!
maxdalton on Things CEA is not doing
No, I think you understood the original post right and my last comment was confusing. When I said "grow" I was imagining "grow in terms of people/impact in the areas we're currently occupying and other adjacent areas that are not listed as "things we're not doing"".
I don't expect us to start doing the things listed in the post in the next 4-10 years (though I might be wrong on some counts). We'll be less likely to do a thing if others are doing good work in the area (as with fundraising).
julia_wise on Ranking animal foods based on suffering and GHG emissions
I was about to say the same thing - I skimmed for "eggs" and it took me a bit to figure it out.
tsunayoshi on vaidehi_agarwalla's Shortform
There is also a quite active EA Discord server, which serves the function of "endless group discussions" fairly well, so another Slack workspace might have negligible benefits.