Posts

Risks from solar flares? 2023-03-07T11:12:37.842Z
What does Putin’s suspension of a nuclear treaty today mean for x-risk from nuclear weapons? 2023-02-21T16:46:14.480Z
What policy ideas do you think are both tractable and high impact? 2023-02-05T20:15:42.951Z
Now that we trust senior EAs less... 2023-01-15T21:46:50.025Z
Where are the headlines about EA’s successes? 2023-01-13T22:54:56.060Z
Papers on the cost effectiveness / ROI of pandemic preparedness interventions? 2023-01-11T23:06:07.661Z
A libertarian socialist’s view on how EA can improve 2022-12-30T13:07:18.527Z
List of reasons to discount the welfare of future generations 2022-12-17T14:26:01.339Z
List of cause areas that EA should potentially prioritise more 2022-12-17T14:07:55.620Z
EA is probably undergoing "Evaporative Cooling" right now 2022-12-12T12:35:17.888Z
Protecting EV calculations against motivated reasoning 2022-12-12T12:29:10.524Z
You *should* factor optics into EV calculations 2022-12-12T12:20:15.237Z
The Motte and Bailey for Expected Value 2022-12-12T11:54:59.085Z
A socialist's view on liberal progressive criticisms of EA 2022-11-21T23:54:35.211Z
You *should* have capacity for more transparency and accountability 2022-11-13T11:52:39.735Z
Reading suggestions for how regulatory bodies can do better on vaccine approval in a pandemic? 2022-11-12T23:38:36.432Z
EA should consider explicitly rejecting *pure* classical total utilitarianism 2022-11-12T17:52:08.672Z
Restricting brain organoid research to slow down AGI 2022-11-09T13:01:50.812Z
Fund biosecurity officers at universities 2022-10-31T11:49:17.627Z
EA should seek out more criticism of key EA concepts 2022-08-30T15:14:40.613Z
EAs underestimate uncertainty in cause prioritisation 2022-08-23T14:04:31.553Z
“Existential Risk” is badly named and leads to narrow focus on astronomical waste 2022-08-22T20:25:22.770Z
Are AGI timelines ignored in EA work on other cause areas? 2022-08-18T12:13:34.778Z
Does anyone have a list of historical examples of societies seeking to make things better for their descendants? 2022-08-15T13:13:14.566Z
The animals and humans analogy for AI risk 2022-08-13T15:35:42.064Z
Prioritisation should consider potential for ongoing evaluation alongside expected value and evidence quality 2022-08-13T14:53:19.899Z
Longtermism neglects anti-ageing research 2022-08-12T22:52:03.904Z
Internationalism is a key value in EA 2022-08-12T16:59:52.259Z
Earning to give should have focused more on “entrepreneurship to give” 2022-08-09T20:13:39.083Z
Antiviral photodynamic therapy seems underfunded 2022-07-31T15:55:38.409Z
Let's make EA easier to critique! 2022-04-07T23:35:35.837Z
What are academic disciplines, movements or organisations that you think EA should try to learn more from? 2022-03-20T18:21:40.794Z
Stop procrastinating on career planning 2022-02-06T20:32:53.471Z
Technocracy vs populism (including thoughts on the democratising risk paper and its responses) 2021-12-29T03:08:50.394Z
Does anyone know of any work that investigates whether private schools add value to society vs only change *who* attains socioeconomic success? 2021-12-19T21:55:51.645Z
Is there any work on how best to protect young / emerging democracies from becoming autocracies? 2021-10-25T12:04:58.508Z
EA cause areas are just areas where great interventions should be easier to find 2021-07-17T12:16:42.918Z
Should someone start a grassroots campaign for USA to recognise the State of Palestine? 2021-05-11T15:29:10.555Z
Are global pandemics going to be more likely or less likely over the next 100 years? 2021-05-06T23:48:19.019Z
Has anyone done any work on how donating to lab grown meat research (https://new-harvest.org/) might compare to Giving Green's recommendations for fighting climate change? 2021-04-28T12:02:00.999Z

Comments

Comment by freedomandutility on Offer an option to Muslim donors; grow effective giving · 2023-03-16T10:50:13.155Z · EA · GW

Well done, have been waiting for years to see EA start looking into Zakat!

Comment by freedomandutility on Offer an option to Muslim donors; grow effective giving · 2023-03-16T10:49:01.936Z · EA · GW

“As a secular NGO, GiveDirectly may struggle to gain traction with Muslim donors”

I strongly agree with this.

Isn’t an obvious solution to market the Zakat-compliant fund under a different name than GiveDirectly?

(The obvious choice would be whatever “GiveDirectly” is in Arabic.)

Comment by freedomandutility on Advice on communicating in and around the biosecurity policy community · 2023-03-03T10:04:49.986Z · EA · GW

On “learning from people outside EA and those who slightly disagree with EA views” I highly recommend reading everything by Dr Filippa Lentzos: https://www.filippalentzos.com/.

Also, subscribe to the Pandora Report newsletter:
https://pandorareport.org/

Global Biodefense was great but sadly seems to have become inactive: https://globalbiodefense.com/

Comment by freedomandutility on Call to demand answers from Anthropic about joining the AI race · 2023-03-03T09:56:51.746Z · EA · GW

So rather than a specific claim about specific activities being done by Anthropic, would you say that:

  1. from your experiences, it’s very common for people to join the arms race under the guise of safety

  2. you think that, by default, we should assume new AI Safety companies are actually joining the arms race until proven otherwise

  3. the burden of proof should essentially rest on Anthropic to show that they are really doing AI Safety stuff?

Given the huge potential profits from advancing AI capabilities faster than other companies and my priors on how irrational money makes people, I’d support that view.

Comment by freedomandutility on Call to demand answers from Anthropic about joining the AI race · 2023-03-02T21:10:12.927Z · EA · GW

My crux here is whether or not I think Anthropic has joined the arms race.

Why do you believe that it has?

Comment by freedomandutility on Appreciation thread Feb 2023 · 2023-02-05T20:11:05.230Z · EA · GW

I’m grateful to the women who have publicly spoken about sexual misconduct in EA, which I hope will result in us making EA spaces safer and more welcoming to women.

Comment by freedomandutility on Appreciation thread Feb 2023 · 2023-02-05T20:09:09.321Z · EA · GW

I’m grateful to the EAs who engaged with criticisms around transparency in EA, and responded by making a more easily navigable database of grants by all EA orgs, which meaningfully improves transparency, scrutiny and accountability.

Comment by freedomandutility on Appreciation thread Feb 2023 · 2023-02-05T20:07:30.859Z · EA · GW

I’ve never spoken to him, but I think he’s doing a great job at a difficult time in a difficult role.

Comment by freedomandutility on EA's weirdness makes it unusually susceptible to bad behavior · 2023-02-05T11:04:41.299Z · EA · GW

I think we should slightly narrow the Overton Window of what ideas and behaviours are acceptable to express in EA spaces, to help exclude more harassment, assault and discrimination.

I also think EA at its best would primarily be more of a professional and intellectual community and less of a social circle, which would help limit harmful power dynamics, help limit groupthink and help promote intellectual diversity.

Comment by freedomandutility on Criticism Thread: What things should OpenPhil improve on? · 2023-02-05T10:54:20.174Z · EA · GW

I know that a few Open Phil staff live outside the Bay Area and work remotely.

Comment by freedomandutility on What I thought about child marriage as a cause area, and how I've changed my mind · 2023-02-01T21:13:52.060Z · EA · GW

Would be interested in historical examples of this, and also in elaboration on what the indirect means are today.

(I think philanthropic funding of economic policy research in India pre-1991 would be one example?)

Comment by freedomandutility on What I thought about child marriage as a cause area, and how I've changed my mind · 2023-02-01T21:12:03.692Z · EA · GW

I think AI beats LMIC governance on scale and neglectedness in the ITN framework, so it would deserve greater attention from EA even with equal tractability.

Comment by freedomandutility on Karma overrates some topics; resulting issues and potential solutions · 2023-02-01T13:59:53.466Z · EA · GW

Worth pointing out a potential benefit of this imbalance:

Most work in three of EA's main cause areas (development, animal welfare and pandemic preparedness) takes place outside of EA, and it may be good for object-level discussions in these areas to happen somewhere other than the EA Forum, to benefit from external expertise and intellectual diversity.

This is probably true for engineered pandemics and AI safety too, but to a lesser extent, because a high proportion of the work in these areas is done by EAs.

I think it is overall a good thing for the EA Forum to focus on the community, and for the community to act as a co-ordinating forum for people working in different high-impact cause areas. I think it's better for an object-level discussion on development, for example, to take place somewhere where feedback can be obtained from non-EA development economists, rather than somewhere like the EA Forum, where a lot of feedback will be from students, animal welfare activists, AI researchers, etc.

Comment by freedomandutility on What I thought about child marriage as a cause area, and how I've changed my mind · 2023-02-01T13:54:11.780Z · EA · GW

Super interesting. Well done for this research and well done for changing your mind based on the evidence, especially given how much time you dedicated to this!

(These kinds of posts are super important, and we should think of them like we think of 'negative' / statistically insignificant results in science - the incentives to publish them are not very strong, and we should encourage these kinds of posts more.)

Comment by freedomandutility on What I thought about child marriage as a cause area, and how I've changed my mind · 2023-02-01T13:51:46.952Z · EA · GW

Agree that improving economic growth in LMICs + international wealth redistribution would be effective in solving lots of social problems in LMICs, but both are highly intractable in my opinion, so would probably not solve a specific social problem more cost-efficiently than a targeted intervention aimed at that social problem. 

(But FWIW, I don't think improving economic growth in LMICs and international wealth redistribution are so intractable that they have no place in the EA movement)

Comment by freedomandutility on Regulatory inquiry into Effective Ventures Foundation UK · 2023-01-30T20:45:20.713Z · EA · GW

Thank you for the transparency.

Comment by freedomandutility on Spreading messages to help with the most important century · 2023-01-29T14:25:25.403Z · EA · GW

I like the framing "bad ideas are being obscured in a tower of readings that gatekeep the critics away" and I think EA is guilty of this sometimes in other areas too.

Comment by freedomandutility on Spreading messages to help with the most important century · 2023-01-29T14:22:53.342Z · EA · GW

Agree that in isolation, spreading the ideas that

(a) AI could be really powerful and important within our lifetimes

and

(b) building AI too quickly / incautiously could be dangerous

could backfire.

But I think just removing the "incautiously" element, focusing on the "too quickly" element, and adding

(c) so we should direct more resources to AI Safety research

should be pretty effective in preventing people from thinking that we should race to create AGI.

So essentially: AI could be really powerful, building it too quickly could be dangerous, and we should fund lots of AI Safety research before it's invented. I think adding more fidelity / detail / nuance would be net negative, given that it would slow down the spread of the message.

Also, I think we shouldn't take things OpenAI and DeepMind say at face value, and bear in mind the corrupting influence of the profit motive, motivated reasoning and 'safetywashing'. 

Just because someone says they're making something that could make them billions of dollars because they think it will benefit humanity, doesn't mean they're actually doing it to benefit humanity. What they claim is a race to make safe AGI is probably significantly motivated by a race to make lots of money.

Comment by freedomandutility on Why are we not talking more about the metacrisis perspective on existential risk? · 2023-01-29T12:16:43.606Z · EA · GW

I'm not aware of any thorough investigations of the metacrisis / polycrisis which come from the perspective of trying to work out how our interventions to solve the metacrisis / polycrisis might need to differ from our approach to individual existential risks. 

I think this kind of investigation could be valuable. I expect that some existential risks are more likely to set off a cascade of existential risks than others, which would have important implications for how we allocate resources for x-risk prevention.

Comment by freedomandutility on Celebrating EAGxLatAm and EAGxIndia · 2023-01-26T22:36:21.247Z · EA · GW

Well done to everyone involved, I think these are important steps to improving EA’s cultural and intellectual diversity, which will hopefully improve our impact!

Comment by freedomandutility on The EA community does not own its donors' money · 2023-01-20T15:05:03.872Z · EA · GW

“If you don't agree with a certain org or some actions of an org in the past, just don't donate to them. (This sounds so obvious to me that I'm probably missing something.) Whether somebody else (who might happen to have a lot of money) agrees with you is their decision, as is where they allocate their money to.“

I think what you’re missing is that a significant aspect of EA has always (rightly) been trying to influence other people’s decisions on how they spend their money, and trying to make sure that their money is spent in a way that is more effective at improving the world.

When EA looks at the vast majority of Westerners only prioritising causes within their own countries, EA generally doesn’t say “that is your money so it’s your decision and we will not try to influence your decision, and we will just give our own money to a different cause”, it says “that is your money and it’s your decision, but we’re going to try to convince you to make a different decision based on our view of what is more effective at improving the world”.

I believe the “democratise EA funding decisions” critics are doing the same thing.

Comment by freedomandutility on UK Personal Finance Tips & Info · 2023-01-20T00:06:05.410Z · EA · GW

Worth mentioning that, given EA is quite big at Oxford, an elite university, a lot of British EAs will probably fall into the 20% who are forecast to fully pay off their student loans.

Comment by freedomandutility on The EA community does not own its donors' money · 2023-01-19T23:45:41.084Z · EA · GW

Fair!

I think Open Phil is unique in the EA community for its degree of transparency, which allows this level of community evaluation (with the exception of the Wytham Abbey purchase), and Open Phil should encourage other EA orgs to follow suit.

In addition to FTX-style regranting experiments, I think (https://forum.effectivealtruism.org/posts/SBSC8ZiTNwTM8Azue/a-libertarian-socialist-s-view-on-how-ea-can-improve) it would be worth experimenting with, and evaluating:

  1. The EA Community voting on grants that Open Phil considers to be just above or below its funding bar

  2. The EA community voting on how to make grants from a small pot of Open Phil money

Using different voting methods (eg - quadratic voting, one person one vote, EA Forum weighted karma)

And different definitions of ‘the EA community’ (staff and ex-staff across EA-affiliated orgs, a karma cut-off on the EA Forum, people accepted to EAG, people who have donated to EA Funds, etc)

Comment by freedomandutility on The EA community does not own its donors' money · 2023-01-19T10:54:44.087Z · EA · GW

“The variations I've seen so far in the comments (like weighing forum karma) increase trust and integrity in exchange for decreasing the democratic nature of the governance, and if you walk all the way along that path you get to institutions.”

Agree, but I think we should explore what decision making looks like at different points of that path, instead of only looking at the ends.

Comment by freedomandutility on The EA community does not own its donors' money · 2023-01-19T09:37:03.709Z · EA · GW

“But, as with other discourse, these proposals assume that because a foundation called Open Philanthropy is interested in the "EA Community" that the "EA Community" has/deserves/should be entitled to a say in how the foundation spends their money.”

I think the claim of entitlement here is both an uncharitable interpretation and irrelevant to the object-level claim of “more democratic decision making would be more effective at improving the world”.

I think these proposals can be interpreted as “here is how EA could improve the long-term effectiveness of its spending”, in a similar way to how EA has spent years telling philanthropists “here is how you could improve the effectiveness of your spending”.

I don’t think it’s a good idea to pay too much attention to the difference in framing between “EA should do X” and “EA would be better at improving the world if it did X”.

Comment by freedomandutility on The EA community does not own its donors' money · 2023-01-19T09:31:31.556Z · EA · GW

Yes, I think it’s uncharitable to assume that Carla means other people taking control of funds without funder buy-in. I think the general hope with a lot of these posts is to convince funders too.

Comment by freedomandutility on The EA community does not own its donors' money · 2023-01-19T09:27:05.046Z · EA · GW

I think a significant part of the whole project of effective altruism has always been telling people how to spend money that we don’t own, so that the money is more effective at improving the world.

Seems reasonable to me for EAs to suggest ways of spending EA donor money that they think would be more effective at improving the world, including if they think that would be via giving more power to random EAs. Now whether that intervention would be more effective is a fair thing to debate.

As you touch on in the post, there are many weaker versions of some suggestions that could be experimented with at a small scale, using EA Funds or some funding from Open Phil, trying out a few different definitions of the ‘EA community’ (eg - EAG acceptance, Forum karma, etc) and using different voting models (eg - quadratic voting, one person one vote, uneven votes, veto power for Open Phil, etc).

Comment by freedomandutility on The EA community does not own its donors' money · 2023-01-19T09:18:33.926Z · EA · GW

People who get accepted to EAG?

Comment by freedomandutility on Doing EA Better · 2023-01-18T14:50:48.573Z · EA · GW

Apologies, I don’t mean to imply that EA is unique in getting things wrong / being bad at steelmanning. Agree that the “and everyone else” part is important for clarity.

I think whether steelmanning makes sense depends on your immediate goal when reading things.

If the immediate goal is to improve the accuracy of your beliefs and work out how you can have more impact, then I think steelmanning makes sense.

If the immediate goal is to offer useful feedback to the author and better understand the author’s view, steelmanning isn’t a good idea.

There is a place for both of these goals, and importantly the second goal can be a means to achieving the first goal, but generally I think it makes sense for EAs to prioritise the first goal over the second.

Comment by freedomandutility on Doing EA Better · 2023-01-18T09:28:56.303Z · EA · GW

I don’t think I like this framing: being responsive to criticism isn’t inherently good, because criticism isn’t always correct. I think EA is bad at the important middle step between inviting criticism and being responsive to it, which is seriously engaging with criticism.

Comment by freedomandutility on Doing EA Better · 2023-01-18T09:23:10.765Z · EA · GW

Yep, I think the timeline in the proposal is unrealistic.

Comment by freedomandutility on Doing EA Better · 2023-01-18T00:48:00.743Z · EA · GW

Interesting that another commenter has the opposite view, and criticises this post for being persuasive instead of explanatory!

It may just be disagreement, but I think it might be a result of readers' bias towards focusing on framing instead of engaging with object-level views when it comes to criticisms.

Comment by freedomandutility on Doing EA Better · 2023-01-18T00:26:16.591Z · EA · GW

I think it’s fairly easy for readers to place ideas on a spectrum and identify trade-offs when reading criticisms, if they choose to engage properly.

I think the best way to read criticisms is to steelman as you read, particularly via asking whether you’d sympathise with a weaker version of the claim, and via the reversal test.

Comment by freedomandutility on Doing EA Better · 2023-01-18T00:24:07.136Z · EA · GW

I think this comment reads as though it’s almost entirely the authors’ responsibility to convince other EAs and EA orgs that certain interventions would help maximise impact, and that it is barely the responsibility of EAs and EA orgs to actively seek out and consider interventions which might help them maximise impact. I disagree with this kind of view.

Comment by freedomandutility on Doing EA Better · 2023-01-18T00:22:58.422Z · EA · GW

I think the criticism of the theory of change here is a good example of an isolated demand for rigour (https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/), which I feel EAs often apply when it comes to criticisms.

It’s entirely reasonable to express your views on an issue on the EA forum for discussion and consideration, rather than immediately going directly to relevant stakeholders and lobbying for change. I think this is what almost every EA Forum post does and I have never before seen these posts criticised as ‘complaining’.

Comment by freedomandutility on Doing EA Better · 2023-01-18T00:13:04.938Z · EA · GW

If you’re also reading the “diversify funding sources” suggestion and thinking BUT HOW? In a post where I make some similar suggestions, I propose doing this via encouraging entrepreneurship-to-give:

https://forum.effectivealtruism.org/posts/SBSC8ZiTNwTM8Azue/a-libertarian-socialist-s-view-on-how-ea-can-improve

Comment by freedomandutility on Doing EA Better · 2023-01-18T00:07:02.900Z · EA · GW

Good question.

The only other communities I know well are socialist + centre left political communities, who I think are worse than EA at engaging with criticism.

So I’d say EA is better than all communities that I know of at engaging with criticism, and is still pretty bad at it.

In terms of actionable suggestions, I’d say tone police a bit less, make sure you’re not making isolated demands for rigour, and make sure you’re steelmanning criticisms as you read, particularly via asking whether you’d sympathise with a weaker version of the claim, and via the reversal test.

Sorry, yes, essentially “EAs are bad, but so are most communities.” But importantly, we shouldn’t just settle for being bad: if we want to approximately do the most good possible, we should aim to be approximately perfect at things, not just better than others.

Comment by freedomandutility on Doing EA Better · 2023-01-17T23:58:27.322Z · EA · GW

I agree, but having written long criticisms of EA myself, I can say that doing this consistently makes the writing annoyingly long-winded.

I think it’s better for EAs to be steelmanning criticisms as they read, especially via “would I agree with a weaker version of this claim” and via the reversal test, than for writers to explore trade-offs for every proposed imperfection in EA.

Comment by freedomandutility on Doing EA Better · 2023-01-17T23:55:17.467Z · EA · GW

I like this comment.

I feel that EAs often have isolated demands for rigour (https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) when it comes to criticisms.

I think the ideal way to read criticisms is to steelman as you read.

Comment by freedomandutility on Doing EA Better · 2023-01-17T23:52:00.556Z · EA · GW

For the FLI issue, I think we can confidently say more democratic decision-making would have helped. Most EAs would probably have thought we should avoid touching a neo-Nazi newspaper with a 10-foot pole.

Comment by freedomandutility on Doing EA Better · 2023-01-17T23:50:07.457Z · EA · GW

I like this comment and I think this is the best way to be reading EA criticisms - essentially steelmanning as you read and not rejecting the whole critique because parts seem wrong.

Comment by freedomandutility on Doing EA Better · 2023-01-17T23:47:23.241Z · EA · GW

I’ll add that EAs seem particularly bad at steelmanning criticisms.

(eg - if a criticism doesn’t explicitly frame ideas on a spectrum and discuss trade-offs, the comments tend to view the ideas as black and white and reject the criticisms because they don’t like the other extreme of the spectrum)

Comment by freedomandutility on Doing EA Better · 2023-01-17T23:40:54.487Z · EA · GW

Good point!

Comment by freedomandutility on Doing EA Better · 2023-01-17T23:39:37.453Z · EA · GW

“I don't see you doing much acknowledging what might be good about the stuff that you critique”

I don’t think it’s important for criticisms to do this.

I think it’s fair to expect readers to view things on a spectrum, and interpret critiques as arguments in favour of moving in a certain direction along a spectrum, rather than going to the other extreme.

Comment by freedomandutility on Doing EA Better · 2023-01-17T23:33:10.085Z · EA · GW

Strongly agree with the idea that we should stop saying “EA loves criticism”.

I think everyone should have a very strong prior that they are bad at accepting criticism, and that they overestimate how good they are at accepting it.

Comment by freedomandutility on Doing EA Better · 2023-01-17T23:30:44.479Z · EA · GW

“EA should cut down its overall level of tone/language policing”.

Strongly agree.

EAs should be more attentive to how motivated reasoning might affect tone / language policing.

You’re probably more likely to tone / language police criticism of EA than praise, and you’re probably less likely to seriously engage with the ideas in the criticism if you are tone / language policing.

Comment by freedomandutility on Doing EA Better · 2023-01-17T23:26:32.354Z · EA · GW

“EAs should assume that power corrupts” - strongly agree.

Comment by freedomandutility on Doing EA Better · 2023-01-17T23:23:41.507Z · EA · GW

I think the point regarding epistemics, and how EA excessively focuses on the individual aspects of good epistemics rather than the group aspect, is a really good point which I have surprisingly never heard before.

Comment by freedomandutility on Doing EA Better · 2023-01-17T23:22:16.292Z · EA · GW

I disagree because I would only count something as neocolonialism if there was a strong argument that it was doing net harm to the local population in the interest of the ‘colonisers’.