Comments
Well done, have been waiting for years to see EA start looking into Zakat!
“As a secular NGO, GiveDirectly may struggle to gain traction with Muslim donors”
I strongly agree with this.
Isn’t an obvious solution to market the Zakat-compliant fund under a different name than GiveDirectly?
(The obvious choice would be whatever “GiveDirectly” is in Arabic.)
On “learning from people outside EA and those who slightly disagree with EA views” I highly recommend reading everything by Dr Filippa Lentzos: https://www.filippalentzos.com/.
Also, subscribe to the Pandora Report newsletter:
https://pandorareport.org/
Global Biodefense was great but sadly seems to have become inactive: https://globalbiodefense.com/
So rather than a specific claim about specific activities being done by Anthropic, would you say that:
- from your experiences, it’s very common for people to join the arms race under the guise of safety
- you think by default, we should assume that new AI Safety companies are actually joining the arms race, until proven otherwise
- the burden of proof should essentially rest on Anthropic to show that they are really doing AI Safety stuff?
Given the huge potential profits from advancing AI capabilities faster than other companies and my priors on how irrational money makes people, I’d support that view.
My crux here is whether or not I think Anthropic has joined the arms race.
Why do you believe that it has?
I’m grateful to the women who have publicly spoken about sexual misconduct in EA, which I hope will result in us making EA spaces safer and more welcoming to women.
I’m grateful to the EAs who engaged with criticisms around transparency in EA, and responded by making a more easily navigable database of grants by all EA orgs, which meaningfully improves transparency, scrutiny and accountability.
I’ve never spoken to him but think he’s doing a great job at a difficult time in a difficult role
I think we should slightly narrow the Overton Window of what ideas and behaviours are acceptable to express in EA spaces, to help exclude more harassment, assault and discrimination.
I also think EA at its best would primarily be more of a professional and intellectual community and less of a social circle, which would help limit harmful power dynamics, help limit groupthink and help promote intellectual diversity.
I know that a few Open Phil staff live outside the Bay Area and work remotely
Would be interested in historical examples of this, and also in some elaboration on what the indirect means are today.
(I think philanthropic funding of economic policy research in India pre-1991 would be one example?)
I think AI beats LMIC governance on scale and neglectedness in the ITN framework, so it would deserve greater attention from EA even with equal tractability.
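As a toy illustration of the multiplicative ITN reasoning here, with made-up scores rather than actual estimates of either cause:

```python
# A toy sketch of the multiplicative ITN comparison; all scores are invented
# placeholders, not real cause-prioritisation estimates.

def itn_score(importance, tractability, neglectedness):
    """Simple multiplicative ITN model: a higher product means higher priority."""
    return importance * tractability * neglectedness

tractability = 5  # assume equal tractability for both causes

ai_safety = itn_score(importance=9, tractability=tractability, neglectedness=8)
lmic_governance = itn_score(importance=7, tractability=tractability, neglectedness=5)

print(ai_safety, lmic_governance)  # 360 vs 175: AI comes out ahead on these toy numbers
```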
Worth pointing out a potential benefit of this imbalance:
Most work in 3 of EA's main cause areas - development, animal welfare and pandemic preparedness - takes place outside of EA, and it may be good for object-level discussions in these areas to happen somewhere other than the EA Forum, to benefit from external expertise and intellectual diversity.
This is probably true for engineered pandemics and AI safety too but to a lesser extent because a high proportion of the work in these areas is done by EAs.
I think it is overall a good thing for the EA Forum to focus on the community, and for the community to act as a co-ordinating forum for people working in different high-impact cause areas. I think it's better for an object-level discussion on development, for example, to take place somewhere where feedback can be obtained from non-EA development economists, rather than somewhere like the EA Forum, where a lot of feedback will be from students, animal welfare activists, AI researchers, etc.
Super interesting. Well done for this research and well done for changing your mind based on the evidence, especially given how much time you dedicated to this!
(These kinds of posts are super important, and we should think of them like we think of 'negative' / statistically insignificant results in science - the incentives to publish them are not very strong, so we should encourage these kinds of posts more.)
Agree that improving economic growth in LMICs + international wealth redistribution would be effective in solving lots of social problems in LMICs, but both are highly intractable in my opinion, so they would probably not solve a specific social problem more cost-efficiently than a targeted intervention aimed at that problem.
(But FWIW, I don't think improving economic growth in LMICs and international wealth redistribution are so intractable that they have no place in the EA movement)
Thank you for the transparency.
I like the framing "bad ideas are being obscured in a tower of readings that gatekeep the critics away" and I think EA is guilty of this sometimes in other areas too.
Agree that in isolation, spreading the ideas of
(a) AI could be really powerful and important within our lifetimes
and
(b) building AI too quickly / incautiously could be dangerous
could backfire.
But I think just removing the "incautiously" element, focusing on the "too quickly" element, and adding
(c) so we should direct more resources to AI Safety research
should be pretty effective in preventing people from thinking that we should race to create AGI.
So essentially: AI could be really powerful, building it too quickly could be dangerous, so we should fund lots of AI Safety research before it's invented. I think adding more fidelity / detail / nuance would be net negative, given that it would slow down the spread of the message.
Also, I think we shouldn't take things OpenAI and DeepMind say at face value, and bear in mind the corrupting influence of the profit motive, motivated reasoning and 'safetywashing'.
Just because someone says they're making something that could make them billions of dollars because they think it will benefit humanity, doesn't mean they're actually doing it to benefit humanity. What they claim is a race to make safe AGI is probably significantly motivated by a race to make lots of money.
I'm not aware of any thorough investigations of the metacrisis / polycrisis which come from the perspective of trying to work out how our interventions to solve the metacrisis / polycrisis might need to differ from our approach to individual existential risks.
I think this kind of investigation could be valuable. I expect that some existential risks are more likely to set off a cascade of existential risks than others, which would have important implications for how we allocate resources for x-risk prevention.
Well done to everyone involved, I think these are important steps to improving EA’s cultural and intellectual diversity, which will hopefully improve our impact!
“If you don't agree with a certain org or some actions of an org in the past, just don't donate to them. (This sounds so obvious to me that I'm probably missing something.) Whether somebody else (who might happen to have a lot of money) agrees with you is their decision, as is where they allocate their money to.”
I think what you’re missing is that a significant aspect of EA has always (rightly) been trying to influence other people’s decisions on how they spend their money, and trying to make sure that their money is spent in a way that is more effective at improving the world.
When EA looks at the vast majority of Westerners only prioritising causes within their own countries, EA generally doesn’t say “that is your money so it’s your decision and we will not try to influence your decision, and we will just give our own money to a different cause”, it says “that is your money and it’s your decision, but we’re going to try to convince you to make a different decision based on our view of what is more effective at improving the world”.
I believe the “democratise EA funding decisions” critics are doing the same thing.
Worth mentioning that given EA is quite big at Oxford, an elite university, a lot of British EAs will probably fall in the 20% who are forecast to fully pay off their student loan.
Fair!
I think Open Phil is unique in the EA Community for its degree of transparency, which allows this level of community evaluation (with the exception of the Wytham Abbey purchase), and Open Phil should encourage other EA orgs to follow suit.
In addition to FTX-style regranting experiments, I think (https://forum.effectivealtruism.org/posts/SBSC8ZiTNwTM8Azue/a-libertarian-socialist-s-view-on-how-ea-can-improve) it would be worth experimenting with, and evaluating:
- The EA Community voting on grants that Open Phil considers to be just above or below its funding bar
- The EA Community voting on how to make grants from a small pot of Open Phil money
using different voting methods (eg - quadratic voting, one person one vote, EA Forum weighted karma; a rough sketch of how these tallies differ is below) and different definitions of ‘the EA Community’ (staff and ex-staff across EA-affiliated orgs, a karma cut-off on the EA Forum, people accepted to EAG, people who have donated to EA Funds, etc).
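To make the “different voting methods” point a bit more concrete, here is a minimal, purely hypothetical sketch of how the same ballots could be tallied under one person one vote versus quadratic voting. All voter names, grant names and credit numbers are invented for illustration; this isn’t a proposal for any specific mechanism.

```python
import math

# Purely hypothetical ballots: each voter spreads a budget of 100 "voice credits"
# across invented grant options. None of these names or numbers are real.
ballots = {
    "voter_1": {"grant_a": 81, "grant_b": 19},
    "voter_2": {"grant_a": 100},
    "voter_3": {"grant_b": 64, "grant_c": 36},
}

def one_person_one_vote(ballots):
    """Each voter's single vote goes to whichever option they gave the most credits to."""
    tally = {}
    for allocation in ballots.values():
        favourite = max(allocation, key=allocation.get)
        tally[favourite] = tally.get(favourite, 0) + 1
    return tally

def quadratic_vote(ballots):
    """Votes cast for an option equal the square root of the credits spent on it,
    so expressing a very strong preference gets expensive quickly."""
    tally = {}
    for allocation in ballots.values():
        for option, credits in allocation.items():
            tally[option] = tally.get(option, 0) + math.sqrt(credits)
    return tally

print(one_person_one_vote(ballots))  # {'grant_a': 2, 'grant_b': 1}
print(quadratic_vote(ballots))       # grant_a: 19.0, grant_b: ~12.36, grant_c: 6.0
```

The point is just that the same set of preferences can produce different funding rankings depending on the aggregation rule, which is why I think the voting method itself is worth experimenting with.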
“The variations I've seen so far in the comments (like weighing forum karma) increase trust and integrity in exchange for decreasing the democratic nature of the governance, and if you walk all the way along that path you get to institutions.”
Agree, but I think we should explore what decision making looks like at different points of that path, instead of only looking at the ends.
“But, as with other discourse, these proposals assume that because a foundation called Open Philanthropy is interested in the "EA Community" that the "EA Community" has/deserves/should be entitled to a say in how the foundation spends their money.”
I think the claim of entitlement here is both an uncharitable interpretation and irrelevant to the object level claim of “more democratic decision making would be more effective at improving the world”.
I think these proposals can be interpreted as “here is how EA could improve the long-term effectiveness of its spending”, in a similar way to how EA has spent years telling philanthropists “here is how you could improve the effectiveness of your spending”.
I don’t think it’s a good idea to pay too much attention to the difference in framing between “EA should do X” and “EA would be better at improving the world if it did X”.
Yes I think it’s uncharitable to assume that Carla means other people taking control of funds without funder buy in. I think the general hope with a lot of these posts is to convince funders too.
I think a significant part of the whole project of effective altruism has always been telling people how to spend money that we don’t own, so that the money is more effective at improving the world.
Seems reasonable to me for EAs to suggest ways of spending EA donor money that they think would be more effective at improving the world, including if they think that would be via giving more power to random EAs. Now whether that intervention would be more effective is a fair thing to debate.
As you touch on in the post, there are many weaker versions of some suggestions that could be experimented with at a small scale, using EA Funds or some funding from Open Phil, trying out a few different definitions of the ‘EA community’ (eg - EAG acceptance, Forum karma, etc) and using different voting models (eg - quadratic voting, one person one vote, uneven votes, veto power for Open Phil, etc).
People who get accepted to EAG?
Apologies, I don’t mean to imply that EA is unique in getting things wrong / being bad at steelmanning. Agree that the “and everyone else” part is important for clarity.
I think whether steelmanning makes sense depends on your immediate goal when reading things.
If the immediate goal is to improve the accuracy of your beliefs and work out how you can have more impact, then I think steelmanning makes sense.
If the immediate goal is to offer useful feedback to the author and better understand the author’s view, steelmanning isn’t a good idea.
There is a place for both of these goals, and importantly the second goal can be a means to achieving the first goal, but generally I think it makes sense for EAs to prioritise the first goal over the second.
I don’t think I like this framing: being responsive to criticism isn’t inherently good, because criticism isn’t always correct. I think EA is bad at the important middle step between inviting criticism and being responsive to it, which is seriously engaging with criticism.
Yep I think the timeline in the proposal is unrealistic
Interesting that another commenter has the opposite view, and criticises this post for being persuasive instead of explanatory!
May just be disagreement, but I think it might be a result of a bias among readers towards focusing on framing instead of engaging with object-level views, when it comes to criticisms.
I think it’s fairly easy for readers to place ideas on a spectrum and identify trade-offs when reading criticisms, if they choose to engage properly.
I think the best way to read criticisms is to steelman as you read, particularly via asking whether you’d sympathise with a weaker version of the claim, and via the reversal test.
I think this comment reads as though it’s almost entirely the authors’ responsibility to convince other EAs and EA orgs that certain interventions would help maximise impact, and that it is barely the responsibility of EAs and EA orgs to actively seek out and consider interventions which might help them maximise impact. I disagree with this kind of view.
I think the criticism of the theory of change here is a good example of an isolated demand for rigour (https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/), which I feel EAs often apply when it comes to criticisms.
It’s entirely reasonable to express your views on an issue on the EA forum for discussion and consideration, rather than immediately going directly to relevant stakeholders and lobbying for change. I think this is what almost every EA Forum post does and I have never before seen these posts criticised as ‘complaining’.
If you’re also reading the “diversify funding sources” suggestion and thinking BUT HOW? In a post where I make some similar suggestions, I suggest doing this via encouraging entrepreneurship-to-give:
Good question.
The only other communities I know well are socialist + centre-left political communities, which I think are worse than EA at engaging with criticism.
So I’d say EA is better than all communities that I know of at engaging with criticism, and is still pretty bad at it.
In terms of actionable suggestions, I’d say tone police a bit less, make sure you’re not making isolated demands for rigour, and make sure you’re steelmanning criticisms as you read, particularly via asking whether you’d sympathise with a weaker version of the claim, and via the reversal test.
Sorry, yes, essentially “EAs are bad, but so are most communities.” But importantly, we shouldn’t just settle for being bad: if we want to approximately do the most good possible, we should aim to be approximately perfect at things, not just better than others.
I agree, but having written long criticisms of EA myself, I find that doing this consistently can make the writing annoyingly long-winded.
I think it’s better for EAs to be steelmanning criticisms as they read, especially via “would I agree with a weaker version of this claim” and via the reversal test, than for writers to explore trade-offs for every proposed imperfection in EA.
I like this comment.
I feel that EAs often have isolated demands for rigour (https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) when it comes to criticisms.
I think the ideal way to read criticisms is to steelman as you read.
For the FLI issue, I think we can confidently say more democratic decision making would have helped. Most EAs would probably have thought we should avoid touching a neo-Nazi newspaper with a 10-foot pole.
I like this comment and I think this is the best way to be reading EA criticisms - essentially steelmanning as you read and not rejecting the whole critique because parts seem wrong.
I’ll add that EAs seem particularly bad at steelmanning criticisms.
(eg - if a criticism doesn’t explicitly frame ideas on a spectrum and discuss trade-offs, the comments tend to view the ideas as black and white and reject the criticisms because they don’t like the other extreme of the spectrum)
“I don't see you doing much acknowledging what might be good about the stuff that you critique”
I don’t think it’s important for criticisms to do this.
I think it’s fair to expect readers to view things on a spectrum, and interpret critiques as an argument in favour of moving in a certain direction along a spectrum, rather than going to the other extreme.
Strongly agree with the idea that we should stop saying “EA loves criticism”.
I think everyone should have a very strong prior that they are bad at accepting criticism, and a very strong prior that they overestimate how good they are at accepting it.
“EA should cut down its overall level of tone/language policing”.
Strongly agree.
EAs should be more attentive to how motivated reasoning might affect tone / language policing.
You’re probably more likely to tone / language police criticism of EA than praise, and you’re probably less likely to seriously engage with the ideas in the criticism if you are tone / language policing.
“EAs should assume that power corrupts” - strongly agree.
I think the point regarding epistemics, and how EA excessively focuses on the individual aspects of good epistemics rather than the group aspect, is a really good point which I have surprisingly never heard before.
I disagree because I would only count something as neocolonialism if there was a strong argument that it was doing net harm to the local population in the interest of the ‘colonisers’.