Posts

SoGive's 2023 plans + funding request 2022-12-20T02:44:29.361Z
Tainted donations: the Presidents Club dinner – a case study 2022-11-15T03:45:30.886Z
Tainted donations: a historical perspective from Rhodri Davies 2022-11-13T16:37:29.873Z
GiveWell’s approach to supplementary adjustments uses questionable priors 2022-11-01T12:53:10.268Z
Donations to SCI Foundation may funge with other work 2022-11-01T12:52:46.753Z
SoGive review of GiveWell’s discount rates 2022-11-01T12:52:13.858Z
Does this solve Pascal's Muggings? 2022-08-28T14:44:17.322Z
Rhodri Davies on why he's not an EA 2022-08-18T11:50:39.213Z
The 0.7% Campaign appears higher impact than we expected 2022-07-13T22:37:58.831Z
This innovative finance concept might go a long way to solving the world's biggest problems 2022-04-09T23:06:47.915Z
The BEAHR: Dust off your CVs for the Big EA Hiring Round! 2022-03-24T08:56:36.109Z
ESG investing needs thoughtful trade-offs 2021-05-27T06:06:00.551Z
Why SoGive is not updating charity ratings after malaria vaccine news 2021-04-23T20:39:46.897Z
SoGive's moral weights -- please take part! 2021-04-05T22:47:55.223Z
ESG investing isn’t high-impact, but it could be 2021-03-18T14:07:25.776Z
The $100trn opportunity: ESG investing should be a top priority for EA careers 2021-03-18T13:54:57.545Z
Want to know about a UK charity? SoGive probably has a rating on it 2021-03-13T20:55:40.366Z
Update on the 0.7% (£4bn for the poor) 2020-12-19T01:39:14.186Z
£4bn for the global poor: the UK's 0.7% 2020-11-30T15:50:01.883Z
When setting up a charity, should you employ a lawyer? 2020-10-19T18:04:01.943Z
TIO: A mental health chatbot 2020-10-12T20:52:28.105Z
No More Pandemics: a grassroots group? 2020-10-02T20:40:37.731Z
We're (surprisingly) more positive about tackling bio risks: outcomes of a survey 2020-08-25T09:14:22.924Z
Climate change donation recommendations 2020-07-16T21:17:57.720Z
The Nuclear Threat Initiative is not only nuclear -- notes from a call with NTI 2020-06-26T17:29:48.736Z
EA and tackling racism 2020-06-09T22:56:44.217Z
Projects tackling nuclear risk? 2020-05-29T22:41:10.331Z
Call notes with Johns Hopkins CHS 2020-05-20T22:25:13.049Z
The best places to donate for COVID-19 2020-03-20T10:47:26.308Z
Conflict and poverty (or should we tackle poverty in nuclear contexts more?) 2020-03-06T21:59:40.219Z
Microcredit may sometimes be effective, but perhaps shouldn’t be funded by donations 2020-02-19T15:30:25.623Z
Climate discounting: How do you value one tonne of CO2eq averted today versus (say) 30 years from now? 2020-02-12T16:41:21.092Z
Clean cookstoves may be competitive with GiveWell-recommended charities 2020-02-10T18:00:57.512Z
Update on CATF's plans for 2020 2019-12-24T09:21:45.875Z
Why we think the Founders Pledge report overrates CfRN 2019-11-04T17:54:13.171Z
Older people may place less moral value on the far future 2019-10-22T14:47:39.330Z
Could the crowdfunder to prosecute Boris Johnson be a high impact donation opportunity? 2019-06-05T23:43:10.114Z
Please use art to convey EA! 2019-05-25T10:46:08.885Z
Why you should NOT support Aubrey de Grey's work on ageing. (maybe) 2019-02-24T23:43:29.690Z
Why we have over-rated Cool Earth 2018-11-26T02:29:41.731Z
Nudging donors towards high-impact charities (a request for funding for SoGive) 2018-01-13T10:06:16.605Z
Medical research: cancer is hugely overfunded; here's what to choose instead 2017-08-05T15:41:06.692Z

Comments

Comment by Sanjay on Pros and Cons of boycotting paid Chat GPT · 2023-03-18T08:57:24.606Z · EA · GW

The arguments in favour of a boycott would look stronger if there were a coherent AI safety activist movement. (I mean "activist" in the sense of "recruiting other people to take part, and grassroots lobbying of decision-makers", not "activist" in the sense of "takes some form of action, such as doing AI alignment research".)

Comment by Sanjay on Legal Assistance for Victims of AI · 2023-03-17T13:07:56.957Z · EA · GW

I haven't thought hard about how good an idea this is, but those interested might like to compare and contrast with ClientEarth.

Comment by Sanjay on Write a Book? · 2023-03-16T14:04:44.383Z · EA · GW

You asked whether you should spend time on this book at the expense of going part time on your job, i.e. you raised the question of the opportunity cost.

In order to assess that, we need to work out a Theory of Change for your book. Is it to support people interested in doing good, and to help them be more effective? If so, it would be useful to see your model for this (a rough numerical sketch follows the list of questions below):

  • What's your forecast for the number of people buying your book?
    • What's the shape of your distribution on that? E.g. is there a fat tail on the possibility that it will sell very well? 
  • What proportion of readers do you expect would change behaviour as a result of reading your book?
  • How should you adjust that for counterfactuals? (i.e. what proportion of those people would have ended up reading TLYCS or DGB or something else instead?)
  • How valuable is a counterfactual-adjusted reader who changes their behaviour?
  • How much of your time needs to be given up in order to achieve these outcomes?
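
To make that model concrete, here is a minimal Fermi sketch of the kind of calculation those questions feed into (all numbers are made-up placeholders, not forecasts for your book):

```python
# Illustrative Fermi estimate of the book's theory of change -- placeholder numbers only.
copies_sold = 20_000               # forecast number of buyers (a fuller model would use a distribution)
readers_who_change = 0.02          # proportion of readers who change behaviour
counterfactual_adjustment = 0.3    # proportion who wouldn't have read TLYCS/DGB etc. anyway
value_per_changed_reader = 5_000   # value ($ or equivalent) per counterfactual-adjusted changed reader
author_hours = 1_000               # hours of your time given up

expected_impact = (copies_sold
                   * readers_who_change
                   * counterfactual_adjustment
                   * value_per_changed_reader)
impact_per_hour = expected_impact / author_hours
print(f"Expected impact: ${expected_impact:,.0f}; per hour of author time: ${impact_per_hour:,.0f}")
# Comparing impact_per_hour with the value of an hour in your day job is the opportunity-cost question.
```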

I suspect that the cruxiest of the above questions will probably be the one about counterfactuals. Will you have a marketing strategy that enables you to reach people who would not have ended up reading another EA book anyway?

If not, my not-carefully-thought-through intuition is that it would be better for you to focus your time on your day job (assuming it's high impact, which, from memory, I think it is). Which is a shame, because I would have liked to see your book!

Comment by Sanjay on William_MacAskill's Shortform · 2023-03-15T16:01:29.091Z · EA · GW

Thank you for sharing this. I think lots of us would be interested in hearing your take on that post, so it's useful to understand your (reasonable-sounding) rationale of waiting until the independent investigation is done.

Could you share the link to your last shortform post? (it seems like the words "last shortform post" are linking to the Time article again, which I'm assuming is a mistake?)

Comment by Sanjay on How bad a future do ML researchers expect? · 2023-03-15T12:04:24.421Z · EA · GW

I would find it fascinating to see this data for the oil and gas industry. I would guess that far fewer people in that industry think that their work is causing outcomes as bad as human extinction (presumably correctly), and yet they probably face more opprobrium for their work (at least from some left-leaning corners of the population).

Comment by Sanjay on Just Pivot to AI: The secret is out · 2023-03-15T09:09:22.944Z · EA · GW

Thank you for sharing this, I particularly enjoyed the bee comparisons, which I hadn't seen before.

I didn't quite follow the logic behind "working on cool AI projects now seems positive to me". 

It's perhaps because I don't know quite what you mean by "working on cool AI projects".

Are you saying that capabilities research on a "cool AI project" is safer than capabilities research at OpenAI or Anthropic? If so, I'm not clear on why.

Or does a cool AI project mean applying AI rather than developing new capabilities?

Comment by Sanjay on How to make climate activists care for other existential risks · 2023-03-12T22:09:18.003Z · EA · GW

Here's how I imagine you might communicate with climate activists (at least based on how this post is written):

"Hey climate activists, I think you're wrong to focus on climate, and I think you should focus on the risk from technology instead. I reckon you just need to think harder, and because you haven't thought hard enough, you're coming to the wrong conclusions. But if you just listen to me and think a bit better than you have done, you'll realise that I'm right."

If the pitch has this tone, even if it's much less blatant than this, I fear that your targets might pick up on it and find it off-putting.

I appreciate that you might communicate differently with climate activists than how you communicate on this forum, but I thought it worth flagging.

Comment by Sanjay on Stan van Wingerden's Shortform · 2023-03-10T23:33:36.809Z · EA · GW

I seem to remember that Founders Pledge collaborated with them, but I can't remember the details so I'm not sure how much FP are affected

Comment by Sanjay on Operationalizing timelines · 2023-03-10T18:10:28.284Z · EA · GW

Your main two concerns seem to be that the terms are either vague or don't quite capture what we care about.

However, it seems that those issues might be insurmountable, given that we don't know the precise nature of the future AI that has the properties we worry about.

Comment by Sanjay on More Centralisation? · 2023-03-08T14:24:46.892Z · EA · GW

Something worth clarifying:

  • David is suggesting in this post that there be more centralisation in the sense that there should be fewer, larger organisations
  • There has also been talk of EA being too centralised, but this is referring to there being too few funding sources, which (unless I'm misunderstanding) is different from what David is talking about in this post

Comment by Sanjay on FTX Poll Post - What do you think about the FTX crisis, given some time? · 2023-03-08T14:19:57.834Z · EA · GW

I'm fed up with hearing about / thinking about FTX and SBF. I just want to move on now.

Comment by Sanjay on Suggestion: A workable romantic non-escalation policy for EA community builders · 2023-03-08T13:48:45.299Z · EA · GW

I'm unclear on the proposal here. I've taken your bit in italics and adapted it to the EA context:

For three months after an EAG(x) or EA retreat, and for one month after an evening event, community organisers who organised the event, or speakers/organisers at the conference/retreat are prohibited from engaging romantically, or even hinting at engaging romantically, with attendees. The only exception is when a particular attendee and the facilitator already dated beforehand.

Is this what you had in mind? This would mean:

  • If an organiser of a local community organises monthly events, they wouldn't be able to date any regular attendee of those events
  • People who were organising an EAG in a low-key, not-visible way would be forbidden from dating an attendee, or we would need to define a bar for visibility
  • Conference attendees are not prohibited from hitting on other attendees (at least not according to this specific rule)

Overall, I'd find it much easier to work out whether this is a useful proposal if I were clearer on what is being proposed.

Comment by Sanjay on Evidence on how cash transfers empower women in poverty · 2023-03-08T12:00:36.866Z · EA · GW

Why is it that 62% of recipients are women?

Comment by Sanjay on Please don't criticize EAs who "sell out" to OpenAI and Anthropic · 2023-03-05T22:00:10.171Z · EA · GW

Overall though, I agree with the point that it's possible to raise questions about someone's personal career choices without being unpleasant about it. And that doing this in a sensitive way is likely to be net positive

Comment by Sanjay on Please don't criticize EAs who "sell out" to OpenAI and Anthropic · 2023-03-05T21:58:30.648Z · EA · GW

earning-to-give, which I would consider even more reprehensible than SBF stealing money from FTX customers and then donating that to EA charities

 

AI capabilities EtG being morally worse than defrauding-to-give sounds like a strong claim. 

There exist worlds where AI capabilities work is net positive. I appreciate that you may believe that we're unlikely to be in one of those worlds (and I'm sure lots of people on this forum agree).

However, given this uncertainty, it seems surprising to see language as strong as "reprehensible" being used.

Comment by Sanjay on The Role of a Non-Profit Board · 2023-03-04T21:24:11.781Z · EA · GW

Good summary.

I've been on the board of something like 8 charities/community organisations, and I'm often asked about what trusteeship involves. When asked if there's anything written to share, I normally point people towards CC3, but I think this is a clearer, more succinct introduction. 

Comment by Sanjay on Milk EA, Casu Marzu EA · 2023-03-03T20:46:43.862Z · EA · GW

I found it surprising that you described cash transfers as "milk" and bednets, vaccines and avoiding nuclear war as "cheese". 

In my experience, it's more likely to be the latter category which is, "to nearly everyone, intuitively and obviously good."

By contrast, I've heard lots of people confidently and knowingly say that cash transfers don't work (because they don't get to the root of the problem, because the poor will waste the money on alcohol, etc)

Comment by Sanjay on Recent paper on climate tipping points · 2023-03-03T10:43:59.025Z · EA · GW

Sounds like the sort of thing we would enjoy doing in principle. Let me check whether there's capacity within the team.  (I think there's not much capacity, but I'll check)

Comment by Sanjay on Recent paper on climate tipping points · 2023-03-03T00:51:02.620Z · EA · GW

I've not seen the full text of this new paper (the Wang paper), but based on its abstract it doesn't seem hugely inconsistent with my current understanding of tipping points.

There was a high-profile paper by Armstrong McKay et al that was published last year. The paper was largely taken to stress the severity of tipping points (see e.g. this coverage), but when I read it, I found it to be, at least in some ways, quite consistent with what Wang is saying.

The Armstrong McKay paper listed 16 tipping points, of which

  • 4 of them have an estimated timescale of < 50 years
  • 3 of them have an estimated timescale of 50 years
  • 9 of them have an estimated timescale of > 50 years; 
    • for those 9 tipping points, not only the estimated timescale but also the minimum timescale is ≥ 50 years, and 2 of them have a minimum timescale ≥ 1000 years

Hence it seems that the Armstrong McKay paper agrees that "most tipping elements do not possess the potential for abrupt future change within (50) years". (i.e. apparently consistent with Wang)

Also, of the 16 tipping points listed in the Armstrong McKay paper, none of them has a massive impact on global temperature (i.e. none has a magnitude of more than 0.6 degrees). And some of the tipping points actually have a cooling effect.

This again seems consistent with the Wang paper, which says: "Emissions pathways and climate model uncertainties may dominate over tipping elements in determining overall multi-century warming".

One of the things that the Armstrong McKay paper helps to clarify, which doesn't seem to be clear from the Wang paper (as far as I can tell) is that a tipping point might potentially still be quite disruptive even if the global impact is small. (E.g. collapse of the convection in the Labrador-Irminger Seas wouldn't contribute much to global warming -- it actually has a cooling effect -- but it might be significantly disruptive to European and American weather systems).

In short, my understanding (prior to seeing the Wang paper) was that, if you're focused on warming (rather than harms), Wang's sanguine-sounding claims are largely true anyway.

Comment by Sanjay on Help GiveDirectly beat "teach a man to fish" · 2023-03-02T15:42:18.696Z · EA · GW

Not a submission to the contest, but years ago I supported an NGO in Kenya working with the Luo community. 

The NGO was called Teach A Man To Fish.

The Luo are famously good at fishing. 

The local Luo people didn't complain about the apparent condescension of working with an NGO called Teach A Man To Fish when they were actually very good at fishing.

Why?

They wanted the money they could get from the NGO!

Comment by Sanjay on Pat Myron's Shortform · 2023-02-19T13:40:21.853Z · EA · GW

I imagine that forum norms might be influenced by this post.

Comment by Sanjay on AGI in sight: our look at the game board · 2023-02-18T23:35:38.882Z · EA · GW

There has been literally no regulation whatsoever to slow down AGI development

Thanks for your post; I'm sure it will be appreciated by many on this forum.

The claim that there has been literally no regulation whatsoever sounds a bit strong?

E.g. the US putting export bans on advanced chips to China? (BIS press release here, more commentary: 1, 2, 3, 4)

It looks to me like this was intended to slow down (China's) AI development, and indeed has a reasonable chance of slowing down (overall) AI development.

(To be clear, I see this as a point of detail on one specific claim, and it doesn't meaningfully detract from the overall thrust of your post)

Comment by Sanjay on How good/bad is the new Bing AI for the world? · 2023-02-17T20:19:15.128Z · EA · GW

Has Dustin's account been hacked by Bing AI?

Comment by Sanjay on Why I No Longer Prioritize Wild Animal Welfare (edited) · 2023-02-16T11:24:28.742Z · EA · GW

I strong-upvoted this comment. I found the beginning of the comment particularly helpful:

Scale of WAW is big because it encompasses millions of sub-problems. But unless you are looking into destroying nature (which is politically infeasible and I don’t want to do it), you are looking at things like a particular pigeon disease, or how noise from ships affects haddocks. And then the scale doesn’t look that big. 

Comment by Sanjay on Why I No Longer Prioritize Wild Animal Welfare (edited) · 2023-02-15T20:23:14.322Z · EA · GW

Great to get your takes Saulius, appreciate it.

I've thought about WAW much less than you, but my take is:

  • At the moment, the only WAW-related work we can do involves researching the topic. A lot. Probably for a long time.
  • That's because any real-world-implementation work on WAW would be phenomenally complex, and the sign will be very hard to know most (all?) of the time.
  • But the scale is big enough that it's worth it (except, perhaps, from a longtermist perspective)

As far as I can tell, there's nothing in your post to update away from this opinion? (I read it quickly, so sorry if I missed something)

Comment by Sanjay on Moving community discussion to a separate tab (a test we might run) · 2023-02-06T22:23:25.072Z · EA · GW

I agree-voted with both polls. I recognise the concerns that you outlined with the made-up quotes.

My only real concern is about the definition of "community" posts. To illustrate this, I glanced through some recent posts, selected a few which I thought were likely to be borderline, and found that several of them had been tagged as "Community" but didn't have the property of sucking me in in an unhealthy way. Examples include

Native English speaker EAs: could you please speak slower?

“My Model Of EA Burnout” (Logan Strohl)

What's the social/historical story of EA's longtermist turn?

Another post did have that unattractive property (in my view), and was not labelled as community.

If too many "good" posts (whatever that means) are classed as community, I'll just end up looking in the community tab anyway, which might defeat the purpose.

In any case, I'm glad you're giving this a try, and thank you for thinking about this.

Comment by Sanjay on Thank you so much to everyone who helps with our community's health and forum. · 2023-02-06T20:50:01.161Z · EA · GW

Posts can achieve goals other than advancing the discourse, and I'm OK with that.

Comment by Sanjay on [No Longer Endorsed] The EA Forum should remove community posts from search-engine indexing. · 2023-02-05T17:12:02.080Z · EA · GW

I can certainly see how this proposal has upsides.

On the flipside, not being able to easily find such musings might also backfire. E.g. in the era before the FTX crisis, a journalist wanting to write about the culture of excess wealth in EA may have felt honour-bound, had they easily found George's post on the EA forum, to give at least some credit to the fact that the community was conscious of this and concerned about it.

This proposal may still be the right thing to do, I just wanted to make sure multiple perspectives were considered.

Comment by Sanjay on EA, Sexual Harassment, and Abuse · 2023-02-03T17:55:40.167Z · EA · GW

At a time when the community has gone through so much, it's hard to hear this.

I confess there's a part of me which wants to disengage from this. I'm tired of worrying about whether EA culture has a problem with fraud, racism, or other things that I find offensive. 

But I shouldn't disengage.

The fact that my emotional energies have been sapped by previous dramas doesn't reduce the suffering experienced by victims of sexual abuse.

So first I'm going to say something which I think is obvious and uncontroversial to everyone:

Sexual abuse and harassment are wrong, and should not happen.

Secondly, I hereby take this pledge:

---

A pledge of solidarity to those who have suffered from sexual harassment or abuse

If you are upset or suffering because you have been abused or harassed, and you disclose this to me, I pledge to do the following:

  • I will listen and provide you with emotional support -- if you're upset, your distress will be my first priority at the outset.
  • I will not ask you questions to try to work out whether you are telling the truth. I would much rather trust and provide emotional support to someone who later turns out to have been lying than to question -- even subtly -- the legitimacy of someone who has suffered sexual abuse.
  • I will support you to work out the most appropriate next steps. I recognise that choices about your next steps may be complex, and I will not try to rob you of agency as you work out the best way forward.

---

In the spirit of the second bullet point of my pledge, I haven't done any work to assess the truth or otherwise of the claims in this article. And I didn't need to in order to feel disturbed by it.

I also don't claim to be the best standard-bearer of opposing sexual abuse and harassment -- I don't consider myself one of the top EA leaders, and I have no direct experience of having been a victim of sexual abuse. I'm simply one person (out of many, I believe) who thinks that EA should be deeply opposed to sexual abuse and harassment.

Comment by Sanjay on Two potential cases of Effective Ventures breaking the law · 2023-02-03T15:20:59.061Z · EA · GW

I agree with Richard and Will's comments that the tone of the post is very allegation-y (and not very question-y). In light of this, I've edited my comment so that it ends with "the tone wasn't right" instead of "the tone wasn't quite right".

Comment by Sanjay on Two potential cases of Effective Ventures breaking the law · 2023-02-03T13:54:57.053Z · EA · GW

I think a crux here is the extent to which the post is an allegation versus a question. If it's an allegation, then I agree it should be rigorously supported, which probably requires legal input.

Technically, the phrasing in the disclaimer makes it clear this is a question. I don't think the tone throughout the piece makes that clear enough though -- at least, not for my tastes.

Having said all that, overall, I do want EA to be a place where people can pose challenging questions like this. And I wouldn't want us to censure posts like this just because the tone wasn't right.

Comment by Sanjay on Karma overrates some topics; resulting issues and potential solutions · 2023-01-30T21:24:22.738Z · EA · GW

I think this does a good job of describing the problem.

The solution is hard. I've certainly found myself getting sucked into reading EA Forum posts about community topics and felt that my time was used poorly.

On the other hand, some of the posts were really valuable (George's post on big-spending EA and some of the posts in the aftermath of the FTX crisis spring to mind).

I think that means I want a UX which does allow me to see community posts, but somehow gives posts which have more substantive/subject-matter content more prominence. 

I'm really very unclear about exactly what this looks like, which is why this seems hard.

Comment by Sanjay on We're no longer "pausing most new longtermist funding commitments" · 2023-01-30T21:15:04.386Z · EA · GW

This is useful to share, thank you.

I think it would be good if:

  • you shared with grant recipients which tier you think they are in (maybe you've already done this, but if you haven't, I think they would find it useful feedback)
  • If anyone is in tier 4 and willing to have it publicly shared that they are in that tier, I think the community would find it useful

I appreciate that many people would dislike the idea of it being public that there are three tiers higher than them, but some EA org leaders are very community-spirited and might be OK with this.

Comment by Sanjay on Excalidraw: Why and How to Use it · 2023-01-28T13:49:21.730Z · EA · GW

Does anyone know how this differs from similar-sounding options like Miro, Mural and Lucidspark?

Comment by Sanjay on What’s going on with ‘crunch time’? · 2023-01-20T09:58:32.903Z · EA · GW

I think this can be a useful concept, so thanks for sharing. 

I think this post could be usefully expanded on in the following ways:

  • a bit more detail (vignettes and also, if possible, clear definitions) about what makes a decision important and influenceable
  • what we would have to forecast in order to adjust our credences about whether a crunch time is coming soon

Comment by Sanjay on How many people are working (directly) on reducing existential risk from AI? · 2023-01-19T21:53:56.812Z · EA · GW

Thank you for your work on this. 

I'd be interested in your opinion on the number of people who should be working on this. 

I appreciate that this isn't a straightforward question to answer. The truth is probably that returns diminish as the number of people working on this increases, and there probably isn't an obvious way to delineate a clear cut-off point between "still useful to have another person" and "don't need any more people".

I think this is useful because I suspect your view is that there should be lots more people working on this, but from reading the problem profile, I don't think readers would know whether 80k would want the 400 to increase to 500 or 500,000. (I've only skimmed it, so sorry if it is explained)

Knowing the difference between "the area is somewhat under-resourced" and "the area is extremely under-resourced" is useful for readers. 

Comment by Sanjay on Doing EA Better · 2023-01-19T19:01:01.606Z · EA · GW

Yes, we can arrange via DM

Comment by Sanjay on Amazon closing AmazonSmile to focus its philanthropic giving to programs with greater impact · 2023-01-19T18:11:03.558Z · EA · GW

Oh really? I'm no expert on Google ads, but I thought it was common to have "conversions", and to pay more if a certain pre-defined event occurs (and a purchase is an example of a conversion).

I suspect Jeff knows more about Google ads than I do, so maybe I should adjust my 60% number down.

Comment by Sanjay on FLI FAQ on the rejected grant proposal controversy · 2023-01-19T18:01:38.831Z · EA · GW

I found this clear and reassuring. Thank you for sharing

Comment by Sanjay on Amazon closing AmazonSmile to focus its philanthropic giving to programs with greater impact · 2023-01-19T08:48:44.854Z · EA · GW

EDIT: what I wrote here probably isn't correct (see comments from Jeff below)

My understanding (I can't remember my source for this) is that it's less about charitable giving and more motivated by a war against Google for revenue. I'd give a c.60% chance that this accurately describes Amazon's motivations.

Without Amazon Smile:

  • Someone googles "Trousers from Amazon" (or whatever)
  • When the user clicks on an ad on google's search results and goes to an Amazon page, Amazon gives Google some money
  • If the customer then goes to make a purchase, Amazon gives Google a bit more money

I'm imagining a (fictional) dialogue between two Amazon employees: 

  • "Can we convince the user to go to a copy of this webpage which has a different URL? Then we don't pay the money to Google?"
  • "Why would they do that?"
  • "We could pay the customer an amount less than the amount we pay to Google?"
  • "But the amount Amazon would give to the customer would be so paltry"
  • "What if the money goes to charity instead? People are much more scope insensitive about charitable giving"

I'm inclined to believe this story mostly because it seems to explain Amazon's behaviour, which otherwise seems difficult to understand. My credence in this would be higher than c.60% if it were verified by a high-quality source.

So if they're closing the programme, I'm wondering if the benefit of recouping ad spend from Google is no longer big enough to warrant the costs of running the Smile system.

Comment by Sanjay on Doing EA Better · 2023-01-17T22:42:52.517Z · EA · GW

In a post this long, most people are probably going to find at least one thing they don't like about it. I'm trying to approach this post as constructively as I can, i.e. "what I do find helpful here" rather than "how I can most effectively poke holes in this?" I think there's enough merit in this post that the constructive approach will likely yield something positive for most people as well.

Comment by Sanjay on Doing EA Better · 2023-01-17T22:37:07.018Z · EA · GW

You argue that funding is centralised much more than it appears. I find myself learning that this is the case more and more over time. 

I suspect it probably is good to decentralise to some degree; however, there is a very real downside to this:

  • some projects are dangerous and probably shouldn't happen
  • the most dangerous of those are the ones which are run by a charismatic leader and appear very good
  • if we have multiple funders who are not "informally centralised" (i.e. talking to each other) then there's a risk that dangerous projects will have multiple bites at the cherry, and with enough different funders, someone will fund them

I appreciate that there are counters to this, and I'm not saying this is a slam-dunk argument against decentralisation.

Comment by Sanjay on Doing EA Better · 2023-01-17T22:15:25.714Z · EA · GW

I appreciated "Some ideas we should probably pay more attention to".  I'd be pretty happy to see some more discussion about the specific disciplines mentioned in that section, and also suggestions of other disciplines which might have something to add. 

Speaking as someone with an actuarial background, I'm very aware of the Solvency 2 regime, which makes insurers think about extreme/tail events which have a probability of 1-in-200 of occurring within the next year.  Solvency 2 probably isn't the most valuable item to add to that list; I'm sure there are many others.

Comment by Sanjay on Doing EA Better · 2023-01-17T22:08:20.843Z · EA · GW

I think I'm probably sympathetic to your claims in "EA is open to some kinds of critique, but not to others", but I think it would be helpful for there to be some discussion around Scott Alexander's post on EA criticism. In it, he argued that "EA is open to some kinds of critique, but not to others" was an inevitable "narrative beat", and that "shallow" criticisms which actually focus on the more actionable implications hit closer to home and are more valuable.

I was primed to dismiss your claims on the basis of Scott Alexander's arguments, but on closer consideration I suspect that might be too quick. 

I feel it would be easier for me to judge this if someone (not necessarily the authors of this post) provided some examples of the sorts of deep critiques they have in mind (e.g. by pointing to examples of deep critiques made of things other than EA). The examples of deep critiques given in the post did help with this, but it's easier to triangulate what's really meant when there are more examples.

Comment by Sanjay on [deleted post] 2023-01-15T18:47:01.415Z

There are presumably ways in which donating a material amount makes a difference to financial advice, at least in the sense that financial planning should take this into account, and perhaps there are tax implications as well. On this basis I think I’m tentatively favourable to this idea, but I’d be more confident about it if I had seen a bit more detail in your post.

(BTW I’m not criticising you for not having more detail in your post, it’s totally reasonable to jot down something on the forum and hear people’s opinions as a first step)

  • Pricing: It might be worth considering how much work you have per client. I don’t know about the US, but in the UK and EU the regulatory burden for IFAs has been increasing substantially over the last decade. I haven’t spoken to IFAs much recently, so I don’t know whether they would be able to cope with as many as 100 clients per advisor. If 100 is too many for one person, you may need to increase your price. Having said that, if you know that $2k fees are the norm in the rest of the market, you could simply infer from that that the $2k pricing is ok.
  • Market sizing: you indicate that you would need c. 100 clients for this to work out from a profitability perspective. Sizing this is easier if we have a clearer understanding of your target market. Presumably the defining feature – from the perspective of why the client would want to choose you – is the fact that your clients will be significant donors (as opposed to being EAs? I can’t imagine that the choice of EA-aligned vs non-EA-aligned charity is going to matter from, e.g., a tax perspective). What are the characteristics of donation decisions where getting advice matters? (e.g. is it absolute amount, or something about the relationship with tax thresholds, or something else?) Once that’s more clearly defined, then it’s easier to size (a) the addressable market within EA (b) the addressable market more widely (non-EAs who also donate substantial amounts are presumably also of interest to you).

 

(Update: I’ve now seen you’ve written a comment where you consider allowing for differing views on x-risks in the next few years. I had assumed that people with short timelines wouldn’t bother getting long term financial advice in the first place, so I imagined that this would not be part of your offering)

Also, I’d certainly see this as a for-profit venture. I’d at least expect you to be donating yourself (presumably that’s linked to your motivations). However, doing this as a non-profit means taking scarce donation dollars, when this project, if worth doing, really ought to be fundable without relying on donations.

Lastly, I believe I’ve seen another post on the forum with a very similar idea. I can’t remember much about the post, but you might want to track it down and reach out to the person.

Comment by Sanjay on What you can do to help stop violence against women and girls · 2023-01-15T18:42:52.855Z · EA · GW

Re item 4, it's fair to note that I haven't checked how conservative you've been on other assumptions, so if I did a replication of your work and it ended up being similar, then I agree that could be a reason.

Comment by Sanjay on What you can do to help stop violence against women and girls · 2023-01-14T16:15:13.306Z · EA · GW

Great that you've looked into this Akhil! Speaking as someone with a wife and daughter (and a mother, and other female family members, and female friends...) this is close to my heart.

A key problem with all of these is how to assess effectiveness. IPV typically occurs behind closed doors, which makes it hard to know what's really happening.

Largely because of these considerations, I predict that on further analysis, I will probably be less positive than you. 

While this sounds consistent with a generalised GiveWellian sceptical prior, I say this with some sadness, because I would very much like reducing VAWG to be a high impact cause area.

Also, thank you for asking me for comments before publishing.

---

My main reason for being more pessimistic than you is that your internal and external validity adjustments seem very generous:

Source: your model

For brevity, I'll focus on Community based social empowerment, since it's the one you're most positive about.

  • You have a 95% internal validity (aka replicability) adjustment and a 90% external validity (aka generalisability) adjustment[1]. I'd consider these numbers to be high (i.e. more prone to lead to generous cost-effectiveness evaluations)[2]
  • Your model's 95% internal validity adjustment is the same internal validity adjustment that GiveWell uses for bednets. For comparison... 
  • ... malaria nets do merit a 95% internal validity adjustment. We have seen plenty of positive evidence for the effectiveness of bednets, and I'm told that there is so much evidence that it's difficult to get ethics approval for more RCTs because ethics boards argue that it's unethical to do studies with controls on something that is such a robustly proven intervention.
  • ... cash transfers do merit a 95% internal validity adjustment. They are a robustly effective way of reducing poverty.
  • ... Community Based Social Empowerment does not merit a 95% internal validity adjustment, in my view. Gathering this sort of evidence from surveys is very difficult, and I'd be surprised if the protocols are robust enough to give us the same confidence we have about the effect of malaria nets on mortality (deaths are relatively easy to count).
  • I also suspect the external validity adjustment is too generous. The intervention relies heavily on cultural context; several GiveWell external adjustments are high too, but human bodies are pretty consistent from one place to the next, whereas cultures vary a lot with geography.

Therefore I predict that:

  • in 90% of worlds where I (or someone from SoGive) sat down and reviewed this carefully, we would have validity adjustments lower than yours (i.e. lower than 95% and 90%).
  • in 50% of worlds where I (or someone from SoGive) sat down and reviewed this carefully, we would have validity adjustments substantially lower than yours (i.e. lower than 50%).
  • In summary, I think there's a 75% chance that we conclude with a >2x worse cost-effectiveness than you, and a 25% chance of a >4x worse cost-effectiveness than you for Community Based Social Empowerment.
  • This would be unlikely to be at the levels of cost-effectiveness where we would deem the intervention high impact.

I haven't thought enough about the other interventions apart from Self-defence (IMPower, which has been done by No Means No). As Matt has alluded to, SoGive has done some work on this topic, and received some information which is not in the public domain. I can't say too much about this, but I can discuss privately and guide you to the relevant researchers. SoGive's plans are to press for permission to publish on this, and finalise within the next few months.

---

For clarity, I've alluded to SoGive in this comment, but this is not an official SoGive comment. Content written in a SoGive capacity has to go through a certain level of review, which has not happened here, so this is written in a personal capacity.

 

  1. ^

    For those less familiar with these models, they are applied in a straightforward, intuitive way. It's roughly equivalent to (Step 1) Calculate the benefit assuming full trust in the evidence; (Step 2) Multiply the benefit by the validity adjustments; (Step 3) divide by costs.
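
    As a minimal sketch of how those three steps combine (hypothetical numbers, not figures from Akhil's model or from GiveWell):

```python
# Illustrative only: how internal/external validity adjustments feed into a cost-effectiveness figure.
benefit_at_face_value = 1_000    # benefit units, assuming full trust in the evidence (Step 1)
internal_validity = 0.95         # replicability adjustment
external_validity = 0.90         # generalisability adjustment
cost = 100                       # programme cost

adjusted_benefit = benefit_at_face_value * internal_validity * external_validity  # Step 2
cost_effectiveness = adjusted_benefit / cost                                      # Step 3
print(round(cost_effectiveness, 2))  # 8.55

# Halving both adjustments (to 0.475 and 0.45) would roughly quarter the estimated
# cost-effectiveness, which is why these adjustments can matter so much to the conclusion.
```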

  2. ^

    For those who want access to data to help them form their own view on whether these adjustments are high or not: In SoGive, we have pulled together a spreadsheet with GiveWell's internal and external validity adjustments (we're supposed to also add in SoGive's own adjustments at the bottom, not just GiveWell's, but have been less diligent at doing that). It's meant to be a (not-rigorously vetted) internal resource, but I'm sharing it here in case it helps. It's also probably a couple of years out of date now, but from memory I don't think there have been changes in the last couple of years material enough to matter.

Comment by Sanjay on GWWC Should Require Public Charity Evaluations · 2023-01-10T16:47:31.281Z · EA · GW

I'll just add that from SoGive's perspective, this proposal would work. We have various views on charities, but only the ones which are in the public domain are robustly thought through enough that we would want an independent group like GWWC to pick them up.

The publication process forces us to think carefully about our claims and be sure that we stand by them.

(I appreciate that Sjir has made a number of other points, and I'm not claiming to answer this from every perspective)

SoGive is not currently on GWWC's list of evaluators -- GWWC plans to look into us in 2023.

Comment by Sanjay on Forecasting extreme outcomes · 2023-01-09T18:36:18.362Z · EA · GW

Thank you for this. It's a useful contribution, and I upvoted it.

I'd be interested in some discussion about when we'd expect this mathematics to be materially useful, especially when compared with other hard elements of doing this sort of forecast.

Example: if I want to estimate the extent to which averting a gigatonne of greenhouse gas (GHG) emissions influences the probability of human extinction, I suspect that the Fisher-Tippett-Gnedenko theorem isn't very important (shout if you disagree). Other considerations (like: "have I considered all the roundabout/indirect ways that GHG emissions could influence the chance of human extinction?") are probably more important.

Comment by Sanjay on Moral Weights according to EA Orgs · 2023-01-09T18:13:37.742Z · EA · GW

I agree this is valuable, thank you for doing this. 

I'll just echo something Matt said about possible lack of independence...

Prior to doing our formal Delphi process for determining our moral weights, we at SoGive had been using a placeholder set of moral weights. The placeholder was heavily influenced by GiveWell's moral weights.

Our process did then incorporate lots of other perspectives, including a survey of the EA community, and a survey of the wider population, as well as explicit exhortations to think things through independently. Despite all these things, I think it's possible that our process might have ended up anchoring  on the previous placeholder weights, i.e. indirectly anchoring on GiveWell's moral weights. I don't think anyone in the team was looking at or aware of FP's or HLI's moral weights, so I don't expect there was any direct influence there.