The arguments in favour of a boycott would look stronger if there were a coherent AI safety activist movement. (I mean "activist" in the sense of "recruiting other people to take part, and grassroots lobbying of decision-makers", not in the sense of "takes some form of action, such as doing AI alignment research".)
I haven't thought hard about how good an idea this is, but those interested might like to compare and contrast with ClientEarth.
You asked whether you should spend time on this book at the expense of going part time on your job, i.e. you raised the question of the opportunity cost.
In order to assess that, we need to work out a Theory of Change for your book. Is it to support people interested in doing good, and helping them to be more effective? In that case it would be useful to see your model for this:
- What's your forecast for the number of people buying your book?
- What's the shape of your distribution on that? E.g. is there a fat tail on the possibility that it will sell very well?
- What proportion of readers do you expect would change behaviour as a result of reading your book?
- How should you adjust that for counterfactuals? (i.e. what proportion of those people would have ended up reading TLYCS or DGB or something else instead?)
- How valuable is a counterfactual-adjusted reader who changes their behaviour?
- How much of your time needs to be given up in order to achieve these outcomes?
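The chain of questions above amounts to a simple multiplicative expected-value model. Here is a minimal sketch; every number below is a made-up placeholder for illustration, not an estimate of anything in this discussion:

```python
# A minimal sketch of the expected-value chain described above.
# All inputs are illustrative placeholders, not real estimates.

def expected_impact(
    copies_sold: float,             # forecast number of readers
    behaviour_change_rate: float,   # proportion of readers who change behaviour
    counterfactual_rate: float,     # proportion who wouldn't otherwise have read TLYCS/DGB etc.
    value_per_reader: float,        # value of one counterfactual-adjusted changed reader
) -> float:
    """Expected value of writing the book, before subtracting time costs."""
    return copies_sold * behaviour_change_rate * counterfactual_rate * value_per_reader

# Placeholder inputs purely to show the shape of the calculation:
value = expected_impact(
    copies_sold=5_000,
    behaviour_change_rate=0.05,
    counterfactual_rate=0.2,
    value_per_reader=1_000,
)
```

The fat-tail question then becomes: does a low-probability bestseller scenario dominate the expectation? If so, the answer is less sensitive to the median sales forecast and more sensitive to the tail.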
I suspect the cruxiest of the above questions is the one about counterfactuals. Will you have a marketing strategy that enables you to reach people who would not have ended up reading another EA book anyway?
If not, my not-carefully-thought-through intuition is that it would be better for you to focus your time on your day job (assuming it's high impact, which, from memory, I think it is). Which is a shame, because I would have liked to see your book!
Thank you for sharing this. I think lots of us would be interested in hearing your take on that post, so it's useful to understand your (reasonable-sounding) rationale of waiting until the independent investigation is done.
Could you share the link to your last shortform post? (it seems like the words "last shortform post" are linking to the Time article again, which I'm assuming is a mistake?)
I would find it fascinating to see this data for the oil and gas industry. I would guess that far fewer people in that industry think that their work is causing outcomes as bad as human extinction (presumably correctly), and yet they probably face more opprobrium for their work (at least from some left-leaning corners of the population).
Thank you for sharing this, I particularly enjoyed the bee comparisons, which I hadn't seen before.
I didn't quite follow the logic behind "working on cool AI projects now seems positive to me".
It's perhaps because I don't know quite what you mean by "working on cool AI projects".
Are you saying that capabilities research on a "cool AI project" is safer than capabilities research at OpenAI or Anthropic? If so I'm not clear on why?
Or does a cool AI project mean applying AI rather than developing new capabilities?
Here's how I imagine you might communicate with climate activists (at least based on how this post is written):
"Hey climate activists, I think you're wrong to focus on climate, and I think you should focus on the risk from technology instead. I reckon you just need to think harder, and because you haven't thought hard enough, you're coming to the wrong conclusions. But if you just listen to me and think a bit better than you have done, you'll realise that I'm right."
If the pitch has this tone, even if it's much less blatant than this, I fear that your targets might pick up on it and find it offputting.
I appreciate that you might communicate differently with climate activists than how you communicate on this forum, but I thought it worth flagging.
I seem to remember that Founders Pledge collaborated with them, but I can't remember the details, so I'm not sure how much FP are affected.
Your main two concerns seem to be that the terms are either vague or don't quite capture what we care about.
However, it seems that those issues might be unavoidable, given that we don't know the precise nature of the future AI that has the properties we worry about.
Something worth clarifying:
- David is suggesting in this post that there be more centralisation in the sense that there should be fewer, larger organisations
- There has also been talk of EA being too centralised, but this is referring to there being too few funding sources, which (unless I'm misunderstanding) is different from what David is talking about in this post
I'm fed up of hearing about / thinking about FTX and SBF. I just want to move on now.
I'm unclear on the proposal here. I've taken your bit in italics and adapted it to the EA context:
For three months after an EAG(x) or EA retreat, and for one month after an evening event, community organisers who organised the event, or speakers/organisers at the conference/retreat are prohibited from engaging romantically, or even hinting at engaging romantically, with attendees. The only exception is when a particular attendee and the facilitator already dated beforehand.
Is this what you had in mind? This would mean:
- If an organiser of a local community organises monthly events, they wouldn't be able to date any regular attendee of those events
- People who were organising an EAG in a low-key, not-visible way would be forbidden from dating an attendee, or we would need to define a bar for visibility
- Conference attendees are not prohibited from hitting on other attendees (at least not according to this specific rule)
Overall, I'd find it much easier to work out whether this is a useful proposal if I were clearer on what is being proposed.
Why is it that 62% of recipients are women?
Overall though, I agree with the point that it's possible to raise questions about someone's personal career choices without being unpleasant about it, and that doing this in a sensitive way is likely to be net positive.
earning-to-give, which I would consider even more reprehensible than SBF stealing money from FTX customers and then donating that to EA charities
AI capabilities EtG being morally worse than defrauding-to-give sounds like a strong claim.
There exist worlds where AI capabilities work is net positive. I appreciate that you may believe that we're unlikely to be in one of those worlds (and I'm sure lots of people on this forum agree).
However, given this uncertainty, it seems surprising to see language as strong as "reprehensible" being used.
I've been on the board of something like 8 charities/community organisations, and I'm often asked about what trusteeship involves. When asked if there's anything written to share, I normally point people towards CC3, but I think this is a clearer, more succinct introduction.
I found it surprising that you described cash transfers as "milk" and bednets, vaccines and avoiding nuclear war as "cheese".
In my experience, it's more likely to be the latter category which is, "to nearly everyone, intuitively and obviously good."
By contrast, I've heard lots of people confidently and knowingly say that cash transfers don't work (because they don't get to the root of the problem, because the poor will waste the money on alcohol, etc)
Sounds like the sort of thing we would enjoy doing in principle. Let me check whether there's capacity within the team. (I think there's not much capacity, but I'll check)
I've not seen the full text of this new paper (the Wang paper), but based on its abstract it doesn't seem hugely inconsistent with my current understanding of tipping points.
There was a high-profile paper by Armstrong McKay et al published last year. It was largely taken to stress the severity of tipping points (see e.g. this coverage), but when I read it, I found it, at least in some ways, quite consistent with what Wang is saying.
The Armstrong McKay paper listed 16 tipping points, of which
- 4 of them have an estimated timescale of < 50 years
- 3 of them have an estimated timescale of 50 years
- 9 of them have an estimated timescale of > 50 years
- for those 9 tipping points, not only the estimated timescale but also the minimum timescale is ≥ 50 years, and 2 of them have a minimum timescale of ≥ 1,000 years
Hence it seems that the Armstrong McKay paper agrees that "most tipping elements do not possess the potential for abrupt future change within (50) years". (i.e. apparently consistent with Wang)
Also, of the 16 tipping points listed in the Armstrong McKay paper, none of them had a massive impact on the global temperature (i.e. none had more than a 0.6 degree magnitude impact on global temperature). And some of the tipping points actually have a cooling effect.
This again seems consistent with the Wang paper, which says: "Emissions pathways and climate model uncertainties may dominate over tipping elements in determining overall multi-century warming".
One of the things that the Armstrong McKay paper helps to clarify, which doesn't seem to be clear from the Wang paper (as far as I can tell) is that a tipping point might potentially still be quite disruptive even if the global impact is small. (E.g. collapse of the convection in the Labrador-Irminger Seas wouldn't contribute much to global warming -- it actually has a cooling effect -- but it might be significantly disruptive to European and American weather systems).
In short, my understanding (prior to seeing the Wang paper) was that if you're focused on warming (rather than harms), Wang's sanguine-sounding claims are largely true anyway.
Not a submission to the contest, but years ago I supported an NGO in Kenya working with the Luo community.
The NGO was called Teach A Man To Fish.
The Luo are famously good at fishing.
The local Luo people didn't complain about the apparent condescension of working with an NGO called Teach A Man To Fish when they were actually very good at fishing.
They wanted the money they could get from the NGO!
I imagine that forum norms might be influenced by this post.
There has been literally no regulation whatsoever to slow down AGI development
Thanks for your post; I'm sure it will be appreciated by many on this forum.
The claim that there has been literally no regulation whatsoever sounds a bit strong?
E.g. the US putting export bans on advanced chips to China? (BIS press release here, more commentary: 1, 2, 3, 4)
It looks to me like this was intended to slow down (China's) AI development, and there's a reasonable chance it will slow down (overall) AI development.
(To be clear, I see this as a point of detail on one specific claim, and it doesn't meaningfully detract from the overall thrust of your post.)
Has Dustin's account been hacked by Bing AI?
I strong-upvoted this comment. I found the beginning of the comment particularly helpful:
Scale of WAW is big because it encompasses millions of sub-problems. But unless you are looking into destroying nature (which is politically infeasible and I don’t want to do it), you are looking at things like a particular pigeon disease, or how noise from ships affects haddocks. And then the scale doesn’t look that big.
Great to get your takes Saulius, appreciate it.
I've thought about WAW much less than you, but my take is:
- At the moment, the only WAW-related work we can do involves researching the topic. A lot. Probably for a long time.
- That's because any real-world-implementation work on WAW would be phenomenally complex, and the sign will be very hard to know most (all?) of the time.
- But the scale is big enough that it's worth it (except, perhaps, from a longtermist perspective)
As far as I can tell, there's nothing in your post to update away from this opinion? (I read it quickly, so sorry if I missed something)
I agree-voted with both polls. I recognise the concerns that you outlined with the made-up quotes.
My only real concern is about the definition of "community" posts. To illustrate: I glanced through some recent posts, selected a few which I thought were likely to be borderline, and found that several of them had been tagged as "Community" but didn't have the property of sucking me in in an unhealthy way. Examples include:
Native English speaker EAs: could you please speak slower?
“My Model Of EA Burnout” (Logan Strohl)
What's the social/historical story of EA's longtermist turn?
Another post did have that unattractive property (in my view), and was not labelled as community.
If too many "good" posts (whatever that means) are classed as community, I'll just end up looking in the community tab anyway, which might defeat the purpose.
In any case, I'm glad you're giving this a try, and thank you for thinking about this.
Posts can achieve goals other than advancing the discourse, and I'm OK with that.
I can certainly see how this proposal has upsides.
On the flipside, not being able to easily find such musings might also backfire. E.g. in the era before the FTX crisis, a journalist wanting to write about the culture of excess wealth in EA, had they easily found George's post on the EA Forum, may have felt honour-bound to give at least some credit to the fact that the community was conscious of this and concerned about it.
This proposal may still be the right thing to do, I just wanted to make sure multiple perspectives were considered.
At a time when the community has gone through so much, it's hard to hear this.
I confess there's a part of me which wants to disengage from this. I'm tired of worrying about whether EA culture has a problem with fraud, racism, or other things that I find offensive.
But I shouldn't disengage.
Just because my emotional energies have been sapped by previous dramas, it doesn't reduce the suffering experienced by victims of sexual abuse.
So first I'm going to say something which I think is obvious and uncontroversial to everyone:
Sexual abuse and harassment are wrong, and should not happen.
Secondly, I hereby take this pledge:
A pledge of solidarity to those who have suffered from sexual harassment or abuse
If you are upset or suffering because you have been abused or harassed, and you disclose this to me, I pledge to do the following:
- I will listen and provide you with emotional support -- if you're upset, your distress will be my first priority at the outset.
- I will not ask you questions to try to work out whether you are telling the truth. I would much rather trust and provide emotional support to someone who later turns out to have been lying than to question -- even subtly -- the legitimacy of someone who has suffered sexual abuse.
- I will support you to work out the most appropriate next steps. I recognise that choices about your next steps may be complex, and I will not try to rob you of agency as you work out the best way forward.
In the spirit of the second bullet point of my pledge, I haven't done any work to assess the truth or otherwise of the claims in this article. And I didn't need to in order to feel disturbed by it.
I also don't claim to be the best standard-bearer of opposing sexual abuse and harassment -- I don't consider myself one of the top EA leaders, and I have no direct experience of having been a victim of sexual abuse. I'm simply one person (out of many, I believe) who thinks that EA should be deeply opposed to sexual abuse and harassment.
I agree with Richard and Will's comments that the tone of the post is very allegation-y (and not very question-y). In light of this, I've edited my comment so that it ends with "the tone wasn't right" instead of "the tone wasn't quite right".
I think a crux here is the extent to which the post is an allegation versus a question. If it's an allegation, then I agree it should be rigorously supported, which probably requires legal input.
Technically, the phrasing in the disclaimer makes it clear this is a question. I don't think the tone throughout the piece makes that clear enough though -- at least, not for my tastes.
Having said all that, overall, I do want EA to be a place where people can pose challenging questions like this. And I wouldn't want us to censure posts like this just because the tone wasn't right.
I think this does a good job of describing the problem.
The solution is hard. I've certainly found myself getting sucked into reading EA Forum posts about community topics and felt that my time was used poorly.
On the other hand, some of the posts were really valuable (George's post on big-spending EA and some of the posts in the aftermath of the FTX crisis spring to mind).
I think that means I want a UX which does allow me to see community posts, but somehow gives posts which have more substantive/subject-matter content more prominence.
I'm really very unclear about exactly what this looks like, which is why this seems hard.
This is useful to share, thank you.
I think it would be good if:
- you shared with grant recipients which tier you think they are in (maybe you've already done this, but if you haven't, I think they would find it useful feedback)
- If anyone is in tier 4 and willing to have it publicly shared that they are in that tier, I think the community would find it useful
I appreciate that many people would dislike the idea of it being public that there are three tiers higher than them, but some EA org leaders are very community-spirited and might be OK with this.
Does anyone know how this differs from similar-sounding options like Miro, Mural and Lucidspark?
I think this can be a useful concept, so thanks for sharing.
I think this post could be usefully expanded on in the following ways:
- a bit more detail (vignettes, also, if possible, clear definitions) about what makes a decision important and influencable
- what would we have to forecast in order to adjust our credences about whether a crunch time is coming soon
Thank you for your work on this.
I'd be interested in your opinion on the number of people who should be working on this.
I appreciate that this isn't a straightforward question to answer. The truth is probably that returns diminish as the number of people working on this increases, and there probably isn't an obvious way to delineate a clear cut-off point between "still useful to have another person" and "don't need any more people".
I think this is useful because I suspect your view is that there should be lots more people working on this, but from reading the problem profile, I don't think readers would know whether 80k would want the 400 to increase to 500 or to 500,000. (I've only skimmed it, so sorry if this is explained.)
Knowing the difference between "the area is somewhat under-resourced" and "the area is extremely under-resourced" is useful for readers.
Yes, we can arrange via DM
Oh really? I'm no expert on google ads, but I thought it was common to have "conversions", and to pay more if a certain pre-defined event occurs (and a purchase is an example of a conversion).
I suspect Jeff knows more about google ads than I do, so maybe I should adjust my 60% number down.
I found this clear and reassuring. Thank you for sharing
EDIT: what I wrote here probably isn't correct (see comments from Jeff below)
My understanding (I can't remember my source for this) is that it's less about charitable giving and more motivated by a war against Google for revenue. I'd give a c.60% chance that this accurately describes Amazon's motivations.
Without Amazon Smile:
- Someone googles "Trousers from Amazon" (or whatever)
- When the user clicks on an ad in Google's search results and goes to an Amazon page, Amazon gives Google some money
- If the customer then goes on to make a purchase, Amazon gives Google a bit more money
I'm imagining a (fictional) dialogue between two Amazon employees:
- "Can we convince the user to go to a copy of this webpage which has a different url? then we don't pay the money to google?"
- "Why would they do they do that?"
- "We could pay the customer an amount less than the amount we pay to google?"
- "But the amount Amazon would give to the customer would be so paltry"
- "What if the money goes to charity instead? People are much more scope insensitive about charitable giving"
I'm inclined to believe this story mostly because it seems to explain Amazon's behaviour in a way that would otherwise be difficult to understand. My credence would be higher than c.60% if it were verified by a high-quality source.
So if they're closing the programme, I wonder if the benefit of recouping ad spend from Google is no longer big enough to warrant the costs of running the Smile system.
In a post this long, most people are probably going to find at least one thing they don't like about it. I'm trying to approach this post as constructively as I can, i.e. "what do I find helpful here?" rather than "how can I most effectively poke holes in this?" I think there's enough merit in this post that the constructive approach will likely yield something positive for most people as well.
You argue that funding is centralised much more than it appears. I find myself learning that this is the case more and more over time.
I suspect it probably is good to decentralise to some degree, however there is a very real downside to this:
- some projects are dangerous and probably shouldn't happen
- the most dangerous of those are ones run by a charismatic leader and appear very good
- if we have multiple funders who are not "informally centralised" (i.e. talking to each other) then there's a risk that dangerous projects will have multiple bites at the cherry, and with enough different funders, someone will fund them
I appreciate that there are counters to this, and I'm not saying this is a slam-dunk argument against decentralisation.
I appreciated "Some ideas we should probably pay more attention to". I'd be pretty happy to see some more discussion about the specific disciplines mentioned in that section, and also suggestions of other disciplines which might have something to add.
Speaking as someone with an actuarial background, I'm very aware of the Solvency 2 regime, which makes insurers think about extreme/tail events which have a probability of 1-in-200 of occurring within the next year. Solvency 2 probably isn't the most valuable item to add to that list; I'm sure there are many others.
I think I'm probably sympathetic to your claims in "EA is open to some kinds of critique, but not to others", but I think it would be helpful for there to be some discussion around Scott Alexander's post on EA criticism. In it, he argued that "EA is open to some kinds of critique, but not to others" was an inevitable "narrative beat", and that "shallow" criticisms which actually focus on the more actionable implications hit closer to home and are more valuable.
I was primed to dismiss your claims on the basis of Scott Alexander's arguments, but on closer consideration I suspect that might be too quick.
I feel it would be easier for me to judge this if someone (not necessarily the authors of this post) provided some examples of the sorts of deep critiques (e.g. by pointing to examples of deep critiques made of things other than EA). The examples of deep critiques given in the post did help with this, but it's easier to triangulate what's really meant when there are more examples.
There are presumably ways in which donating a material amount makes a difference to financial advice, at least in the sense that financial planning should take this into account, and perhaps there are tax implications as well. On this basis I think I’m tentatively favourable to this idea, but I’d be more confident about it if I had seen a bit more detail in your post.
(BTW I’m not criticising you for not having more detail in your post, it’s totally reasonable to jot down something on the forum and hear people’s opinions as a first step)
- Pricing: It might be worth considering how much work you have per client. I don’t know about the US, but in the UK and EU the regulatory burden for IFAs has been increasing substantially over the last decade. I haven’t spoken to IFAs much recently, so I don’t know whether they would be able to cope with as many as 100 clients per advisor. If 100 is too many for one person, you may need to increase your price. Having said that, if you know that $2k fees are the norm in the rest of the market, you could simply infer that $2k pricing is OK.
- Market sizing: you indicate that you would need c. 100 clients for this to work out from a profitability perspective. Sizing this is easier if we have a clearer understanding of your target market. Presumably the defining feature – from the perspective of why the client would want to choose you – is the fact that your clients will be significant donors (as opposed to being EAs? I can’t imagine that the choice of EA-aligned vs non-EA-aligned charity is going to matter from, e.g., a tax perspective). What are the characteristics of donation decisions where getting advice matters? (e.g. is it absolute amount, or something about the relationship with tax thresholds, or something else?) Once that’s more clearly defined, then it’s easier to size (a) the addressable market within EA (b) the addressable market more widely (non-EAs who also donate substantial amounts are presumably also of interest to you).
(Update: I’ve now seen you’ve written a comment where you consider allowing for differing views on x-risks in the next few years. I had assumed that people with short timelines wouldn’t bother getting long term financial advice in the first place, so I imagined that this would not be part of your offering)
Also, I’d certainly see this as a for-profit venture. I’d at least expect you to be donating yourself (presumably that’s linked to your motivations). However doing this as a non-profit means taking scarce donation dollars, when this project, if worth doing, really ought to be fundable without relying on donations.
Lastly, I believe I’ve seen another post on the forum with a very similar idea. I can’t remember much about the post, but you might want to track it down and reach out to the person.
Re item 4, it's fair to note that I haven't checked how conservative you've been on other assumptions, so if I did a replication of your work and it ended up being similar, then I agree that could be a reason.
Great that you've looked into this Akhil! Speaking as someone with a wife and daughter (and a mother, and other female family members, and female friends...) this is close to my heart.
A key problem with all of these is how to assess effectiveness. IPV typically occurs behind closed doors, which makes it hard to know what's really happening.
Largely because of these considerations, I predict that on further analysis, I will probably be less positive than you.
While this sounds consistent with a generalised GiveWellian sceptical prior, I say this with some sadness, because I would very much like reducing VAWG to be a high impact cause area.
Also, thank you for asking me for comments before publishing.
My main reason for being more pessimistic than you is that your internal and external validity adjustments (source: your model) seem very generous:
For brevity, I'll focus on Community based social empowerment, since it's the one you're most positive about.
- You have a 95% internal validity (aka replicability) adjustment and a 90% external validity (aka generalisability) adjustment. I'd consider these numbers high (i.e. more prone to lead to generous cost-effectiveness evaluations).
- Your model's 95% internal validity adjustment is the same internal validity adjustment that GiveWell uses for bednets. For comparison...
- ... malaria nets do merit a 95% internal validity adjustment. We have seen plenty of positive evidence for the effectiveness of bednets, and I'm told that there is so much evidence that it's difficult to get ethics approval for more RCTs because ethics boards argue that it's unethical to do studies with controls on something that is such a robustly proven intervention.
- ... cash transfers do merit a 95% internal validity adjustment. They are a robustly effective way of reducing poverty.
- ... Community Based Social Empowerment does not merit a 95% internal validity adjustment, in my view. Gathering this sort of evidence from surveys is very difficult, and I'd be surprised if the protocols are robust enough to give us the same confidence we have about the effect of malaria nets on mortality (deaths are relatively easy to count).
- I also suspect the external validity adjustment is too generous. The intervention relies heavily on cultural context; several GiveWell external adjustments are high too, but human bodies are pretty consistent from one place to the next, whereas cultures vary a lot with geography.
Therefore I predict that:
- in 90% of worlds where I (or someone from SoGive) sat down and reviewed this carefully, we would have validity adjustments lower than yours (i.e. lower than 95% and 90%).
- in 50% of worlds where I (or someone from SoGive) sat down and reviewed this carefully, we would have validity adjustments substantially lower than yours (i.e. lower than 50%).
- In summary, for Community Based Social Empowerment I think there's a 75% chance that we would conclude cost-effectiveness is more than 2x worse than yours, and a 25% chance that it is more than 4x worse.
- This would be unlikely to be at the levels of cost-effectiveness where we would deem the intervention high impact.
I haven't thought enough about the other interventions apart from Self-defence (IMPower, which has been done by No Means No). As Matt has alluded to, SoGive has done some work on this topic, and received some information which is not in the public domain. I can't say too much about this, but I can discuss privately and guide you to the relevant researchers. SoGive plans to press for permission to publish on this, and to finalise within the next few months.
For clarity, I've alluded to SoGive in this comment, but this is not an official SoGive comment. Content written in a SoGive capacity has to go through a certain level of review, which has not happened here, so this is written in a personal capacity.
For those less familiar with these models, they are applied in a straightforward, intuitive way. It's roughly equivalent to (Step 1) Calculate the benefit assuming full trust in the evidence; (Step 2) Multiply the benefit by the validity adjustments; (Step 3) divide by costs.
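The three steps above can be sketched as follows. The figures passed in are illustrative placeholders only (the 0.95 and 0.90 adjustments match those discussed above; the benefit and cost numbers are made up):

```python
# Minimal sketch of the three-step validity-adjusted calculation described above.
# The raw_benefit and cost values are illustrative placeholders, not real figures.

def adjusted_cost_effectiveness(
    raw_benefit: float,        # Step 1: benefit assuming full trust in the evidence
    internal_validity: float,  # replicability adjustment, e.g. 0.95
    external_validity: float,  # generalisability adjustment, e.g. 0.90
    cost: float,
) -> float:
    # Step 2: multiply the benefit by both validity adjustments;
    # Step 3: divide by costs.
    return raw_benefit * internal_validity * external_validity / cost

# With the generous adjustments in the model under discussion:
generous = adjusted_cost_effectiveness(100.0, 0.95, 0.90, 10.0)
# With the substantially lower adjustments I predict on review:
sceptical = adjusted_cost_effectiveness(100.0, 0.50, 0.50, 10.0)
```

Because the adjustments multiply straight through, halving both of them roughly quarters the headline cost-effectiveness, which is why these two parameters matter so much.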
For those who want access to data to help them form their own view on whether these adjustments are high or not: at SoGive, we have pulled together a spreadsheet with GiveWell's internal and external validity adjustments (we're supposed to also add in SoGive's own adjustments at the bottom, not just GiveWell's, but have been less diligent at doing that). It's meant to be a (not-rigorously vetted) internal resource, but I'm sharing it here in case it helps. It's also probably a couple of years out of date now, but from memory I don't think there have been changes material enough to matter in the last couple of years.
I'll just add that from SoGive's perspective, this proposal would work. We have various views on charities, but only the ones which are in the public domain are robustly thought through enough that we would want an independent group like GWWC to pick them up.
The publication process forces us to think carefully about our claims and be sure that we stand by them.
(I appreciate that Sjir has made a number of other points, and I'm not claiming to answer this from every perspective)
SoGive is not currently on GWWC's list of evaluators -- GWWC plans to look into us in 2023.
Thank you for this. It's a useful contribution, and I upvoted it.
I'd be interested in some discussion about when we'd expect this mathematics to be materially useful, especially when compared with other hard elements of doing this sort of forecast.
Example: if I want to estimate the extent to which averting a gigatonne of greenhouse gas (GHG) emissions influences the probability of human extinction, I suspect that the Fisher-Tippett-Gnedenko theorem isn't very important (shout if you disagree). Other considerations (like: "have I considered all the roundabout/indirect ways that GHG emissions could influence the chance of human extinction?") are probably more important.
I agree this is valuable, thank you for doing this.
I'll just echo something Matt said about possible lack of independence...
Prior to doing our formal Delphi process for determining our moral weights, we at SoGive had been using a placeholder set of moral weights. The placeholder was heavily influenced by GiveWell's moral weights.
Our process did then incorporate lots of other perspectives, including a survey of the EA community, and a survey of the wider population, as well as explicit exhortations to think things through independently. Despite all these things, I think it's possible that our process might have ended up anchoring on the previous placeholder weights, i.e. indirectly anchoring on GiveWell's moral weights. I don't think anyone in the team was looking at or aware of FP's or HLI's moral weights, so I don't expect there was any direct influence there.