Really enjoyed this, thank you. I especially liked the undertone of 'uncertainty isn't a reason not to try, it's a reason to find out more'. Good life advice in general, I think.

smclare on Climate Change Is Neglected By EA
Will and Rob devote a decent chunk of time to climate change on this 80K podcast, which you might find interesting. One quote from Will stuck with me in particular:
I don’t want there to be this big battle between environmentalism and EA or other views, especially when it’s like it could go either way. It’s like elements of environmentalism which are like extremely in line with what a typical EA would think and then maybe there’s other elements that are less similar [...] For some reason it’s been the case that people are like, “Oh, well it’s not as important as AI”. It’s like an odd framing rather than, “Yes, you’ve had this amazing insight that future generations matter. We are taking these actions that are impacting negatively on future generations. This is something that could make the long run future worse for a whole bunch of different reasons. Is it the very most important thing on the margin to be funding?"
I think I agree with you that, as a community, we should make sure we're up-to-date on climate change to avoid making mistakes or embarrassing ourselves. I also think, at least in the past, the attitude towards climate work has been vaguely dismissive. That's not helpful, though it seems to be changing (cf. the quote above). As others have mentioned, I suspect climate change is a gateway to EA for a lot of altruistic and long-term-friendly people (it was for me!).
As far as direct longtermist work, I'm not convinced that climate change is neglected by EAs. As you mention, climate change has been covered by orgs like 80K and Founders Pledge (disclaimer, I work there). The climate chapter in The Precipice is very good. And while you may be right that it's a bit naive to just count all climate-related funding in the world when considering the neglectedness of this issue, I suspect that even if you just considered "useful" climate funding, e.g. advocacy for carbon taxes or funding for clean energy, the total would still dwarf the funding for some of the other major risks.
From a non-ex-risk perspective, I agree that more work could be done to compare climate work to work in global health and development. I suspect that, especially when considering the air pollution benefits of moving away from coal power, climate work could be competitive here. Hauke's analysis, which you cite, has huge confidence intervals which at least suggest that the ranking is not obvious.
On the one hand, the great strength of EA is a willingness to prioritize among competing priorities and double down on those where we can have the biggest impact. On the other hand, we want to keep growing and welcoming more allies into the fold. It's a tricky balancing act and the only way we'll manage it is through self-reflection. So thanks for bringing that to the table in this post!

willbradshaw on Climate Change Is Neglected By EA
Wildlife conservation and wild animal welfare are emphatically not the same thing. "Tech safety" (which isn't a term I've heard before, and which on googling seems to mostly refer to tech in the context of domestic abuse) and AI safety are just as emphatically not the same thing.
Anyway, yes, in most areas EAs care about they are a minority of the people who care about that thing. Those areas still differ hugely in terms of neglectedness, both in terms of total attention and in terms of expertise. Assuming one doesn't believe that EAs are the only people who can make progress in an area, this is important.
In climate change it counts the lawyers already engaged in changing the recycling laws of San Francisco as sufficient for the task at hand.
This is (a) uncharitable sarcasm, and (b) obviously false. There are enormous numbers of very smart scientists, journalists, lawyers, activists, etc. etc. working on climate change. Every general science podcast I listen to covers climate change regularly, and they aren't doing so to talk about Bay Area over-regulation. It's been a major issue in the domestic politics of every country I've lived in for over a decade. The consensus among left-leaning intellectual types (who are the main group EA recruits from) in favour of acting against climate change is total.
Now, none of this means there's nothing EA could contribute to the climate field. Probably there's plenty of valuable work that could be done. If more climate-change work started showing up on the EA Forum, I'd be fine with that, the same way I'm fine with EAs doing work in poverty, animal welfare, mental health, and lots of other areas I don't personally prioritise. But would I believe that climate change work is the most good they could do? In most cases, probably not.

ramiro on I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate.
I think "offense-defense balance" is a very accurate term here. I wonder if you have any personal opinion on how to improve our situation on that. I guess when it comes to AI-powered misinformation through media, it's particularly concerning how easily it can overrun our defenses - so that, even if we succeed at fact-checking every inaccurate statement, it'll require a lot of resources and probably lead to a situation of widespread uncertainty or mistrust, where people, incapable of screening reliable info, will succumb to confirmation bias or peer pressure (I feel tempted to draw an analogy with DDoS attacks, or even with the lemons problem).
So, despite everything I've read about the subject (though not very systematically), I haven't seen feasible, well-written strategies to address this asymmetry - except for some papers on moderation in social networks and forums (even so, it's quite time-consuming, unless moderators draw clear guidelines - like in this forum). I wonder why societies (through authorities or self-regulation) can't agree to impose even minimal reliability requirements, like demanding captcha tests before spreading messages (making it harder to use bots) or, my favorite, holding people liable for spreading misinformation unless they explicitly reference a source - something even newspapers refuse to do (my guess is that they are afraid this norm would compromise source confidentiality and their protections against lawsuits). If people had this as an established practice, one could easily screen for (at least grossly) unreliable messages by checking their source (or pointing out its absence), besides deterring them.
I was a bit surprised to read what you wrote about Cultivated Meat. I am not an expert, but I've looked into this topic and my understanding is that there are fundamental technical challenges to be solved, at least in cell expansion, the rate and specificity of cell growth, and the creation of thick cuts of any tissue. I'm sure that these can be solved in the end, but they seem very difficult (considering that cell expansion is needed for making blood cells and other non-tissue types of cells in the much more heavily funded biomedical field, which is also less bottlenecked by medium cost).
I understand that today it may be possible to make some hybrid products, but that these won't really be similar to the real thing. Is this similar to your view?

linch on Climate Change Is Neglected By EA
A year ago Louis Dixon posed the question “Does climate change deserve more attention within EA?”. On May 30th I will be discussing the related question “Is Climate Change Neglected Within EA?” with the Effective Environmentalism group. This post is my attempt to answer that question.
It's definitely possible I'm misunderstanding what you're trying to do here. However, I think it is usually not the case that if you attempt to do an impartial assessment of a yes-no question, all the possible factors point in the same direction.
I mean, I don't know this for sure, but I imagine that if you were to ask me to closely investigate a cause area I haven't thought about much before (wild animal suffering, say, or consciousness research, or Alzheimer's mitigation), and I investigated 10 sub-questions, not all 10 of them would point the same way. My intuition is that it's much more likely that I'd either find 1 or 2 overwhelming factors, or many weak arguments in favor or against, with some in the other direction.
I feel bad for picking on you here. I think it is likely the case that other EAs (myself included) have historically made this mistake, and I will endeavor to be more careful about this in the future.

urikatz on Climate Change Is Neglected By EA
I sometimes feel that the EA movement is starting to sound like heavy metal fans (“climate change is too mainstream”), or evangelists (“in the days after the great climate change (Armageddon), mankind will colonize the galaxy (the 2nd coming), so the important work is the one that prevents x-risk (saves people’s souls)”). I say “amen” to that, and have supported AI safety financially in the past, but I remain skeptical that climate change can be ignored. What would you recommend as next steps for an EA member who wants to learn more and eventually act? What are the AMF or GD of climate change?

urikatz on Climate Change Is Neglected By EA
I wonder how much of the assessment that climate change work is far less impactful than other work relies on the logic of “low probability, high impact”, which seems to be the most compelling argument for x-risk. Personally, I generally agree with this line of reasoning, but it leads to conclusions so far away from common sense and intuition, that I am a bit worried something is wrong with it. It wouldn’t be the first time people failed to recognize the limits of human rationality and were led astray. That error is no big deal as long as it does not have a high cost, but climate change, even if temperatures only rise by 1.5 degrees, is going to create a lot of suffering in this world.
In an 80,000 Hours podcast with Peter Singer, the question was raised whether EA should split into 2 movements: present welfare and longtermism. If we assume that concern with climate issues can grow the movement, that might be a good way to account for our long-term bias, while continuing the work on x-risk at current and even higher levels.

linda-linsefors on I Want To Do Good - an EA puppet mini-musical!
Watching it yet again, I think it would feel more right if the guy were not so easily convinced, but instead it ended with him being "hm, that sounds promising, I'm going to learn some more".
Both of the puppets really felt like real people with actual personality to me, up until t=1:57. But then the guy just completely changes his mind, which broke my suspension of disbelief. I think that's the point when it mostly started to sound like "yet another commercial".

linda-linsefors on I Want To Do Good - an EA puppet mini-musical!
The format of the video is basically: "Do you worry about these things? Then we have the solution." Integrated with some back and forth, which I really like.
"Do you worry about these things? Then we have the solution." is a standard pattern in commercials, for a good reason. I think this is also a good pattern for selling ideas like EA. But it also means that you can't just say you understand my concerns and that you have solutions; you have to give me some evidence, or else it is just another empty commercial.
The person singing about their doubts felt relatable, in that they brought up real concerns about charity that I could imagine having before EA. I don't remember exactly, but these seemed like standard and very reasonable concerns. And I got the impression that you (the video maker) really understand "my" (the viewer's) worries about giving to charity.
But when you were singing about the solutions, you fell a bit short. I don't think this video would win the trust of an alternative Linda that your suggested charities are actually better. I think it would help to put in some arguments for why to focus on treatable diseases, and for how to lift the barriers you mention.
Every charity says they are special, so that alone doesn't count for much. But if you give me some arguments that I can understand for why your way is better, then that is evidence that you're onto something, and I might go and check it out some more.
All that said, I re-watched the video, and I like it even more now. The energy and the mood shifts are amazing.
On re-watching, I also feel that a viewer should be able to easily figure out the connection between focusing on diseases and avoiding building dependency. But I remember that the first time I watched it, it felt like there was a major missing link there. I think that's because now, knowing what they will say, I have some more time to reflect and make those connections myself.
But people seeing this on the internet might only watch once, so...