So I'm understanding that you have short AI timelines, and so you don't think genetic engineering would have time to pay off, but psychedelics would, and that you think it's of similar relevance to working directly on the problem.
This prize will total $1000 between multiple recipients, with a minimum first place prize of $500. We will aim for 2-5 recipients in total. The prize will be paid for by the Quantified Uncertainty Research Institute (QURI).
Why was that not respected? It's also not mentioned in this post, AFAICT.
In terms of augmenting humans, my impression is that genetic engineering is by far the most effective intervention. My understanding is that we're currently making a lot of progress in that area, yet some important research aspects seem neglected and could have a transformative impact on the world.
I guess I was working on the assumption that it was rare for people to want to split their donation between local and effective charities a priori. My point was that GM wasn't useful to people who didn't already want to split their donations that way before GM existed -- but maybe this assumption is actually wrong.
Hmm, I guess it's fine after all; I've changed my mind. People can just give whatever fraction they were going to give to local charities, and then be matched. And the extra matching to effective charities is a signal from the matcher about their model of the world. I don't think someone who was going to give 100% to a charity other than those 9 should use GivingMultiplier though (unless they've changed their mind about effective charities). But my guess is that this project has good consequences.
I'm henceforth offering a MetaGivingMultiplier. It has the same structure as GivingMultiplier, but replace "local charities" with "GivingMultiplier" and "super-effective charities" with "a cryonics organization" (I recommend https://www.alcor.org/rapid/ or https://www.brainpreservation.org/). Does anyone want to take advantage of my donation match?
I don't want to argue about what to do in general, but here in particular my "accusation" consists of doing the math. If I got it wrong, I'm sure others got it wrong too, and it would be useful to clarify that publicly.
On that note, I've sent this post along to Lucius of the GivingMultiplier team.
This doesn't change the "indistinguishable from if I gave X" property, but it is a thing that would have been easy to check before posting.
I did check. As you said, it doesn't change the conclusion (it actually makes it worse).
Second, point (b) matters. It seems like a bold assumption that EA charities have reached "market efficiency".
I'm >50% sure that it doesn't fare better, but maybe. In any case, I specified in my OP that my main objection was (a).
Thus, if you actually think one of the "EA" choices at GivingMultiplier is more valuable than the rest, it seems very likely that you contribute more to their work by choosing them to be matched.
Yep, I did mention that in my OP.
Did you see anything on the site that actually seemed false to you?
No, I also mentioned this in OP.
Give people an incentive to think about splitting their donation between "heart" and "head", by...
There's not really a real incentive though. I feel like there's a motte-and-bailey. The motte is that you get to choose one of the 9 charities, the bailey is that the matching to the local charity is actually meaningful.
and the local charity of their choice
That's meaningless as I showed in OP.
If you think they could have been even more clear, or think that most donors will believe something different despite the FAQ, you could say so. But to say that people who use the match "don't understand what's going on" is both uncharitable and, as best I can tell, false.
Importance: not really important to read this comment
Update: I updated; see my reply
GivingMultiplier's description according to the EA newsletter^1:
Let's assume Effective_Charity and Local_Charity.
If you were going to give 100 USD to Local_Charity, but instead donate 10 USD to Effective_Charity and 90 USD to Local_Charity, GivingMultiplier will give 9 USD to Local_Charity and 1 USD to Effective_Charity, so there's now 99 USD going to Local_Charity and 11 USD going to Effective_Charity. GivingMultiplier would give the money to Effective_Charity anyway. So for the donor, this is indistinguishable from donating 99 USD to Local_Charity and 1 USD to Effective_Charity, but it's done in a more obscure way.^2
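To make the arithmetic explicit, here's a minimal sketch of the scheme as described in this toy example (a 10% match on each side; their website uses different numbers, per footnote 1, and `with_match` is just a name I'm using for illustration):

```python
# Toy model of the matching scheme from the example above.
# Assumption: the matcher adds 10% on top of each part of the donation,
# and would have given its whole matching budget to the effective
# charity anyway.

def with_match(total, to_effective, match_rate=0.10):
    """Return (local_final, effective_final) after the match is applied."""
    to_local = total - to_effective
    local_final = to_local + match_rate * to_local
    effective_final = to_effective + match_rate * to_effective
    return local_final, effective_final

# The matcher's budget for a 100 USD donation, which (per the argument
# above) goes to the effective charity either way:
match_budget = 0.10 * 100

local_final, effective_final = with_match(100, 10)
# local_final is 99 USD, effective_final is 11 USD

# The donor's counterfactual contribution, once you subtract the money
# the matcher would have given anyway:
counterfactual = (local_final, effective_final - match_budget)
# i.e. equivalent to donating 99 USD locally and 1 USD effectively
```

This is just the comment's math written out: the "match" on the local side is exactly offset by the matcher's budget that would have gone to the effective charity regardless.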
Also, sure, they are rather transparent about their process (at least in the newsletter; it wasn't obvious from the main page of the website), but still, their scheme mostly works only insofar as people don't understand what's going on.
A bunch of people don't know Why you shouldn't let "donation matching" affect your giving, and so they will be misled by donation matches. If EA charities don't use them, they might be at a disadvantage. So the reasoning might be that game theory favors also using this technique under a consequentialist moral framework – sort of like tit-for-tat with other charities, with deceiving donors as an externality.
One could argue that they should link to the piece against donation matching on their website, but maybe the two memes are fit to different environments – maybe it would mostly reduce how much people use this specific service to fill their donation-matching need, or something like that. I don't know; I'm trying to steelman it.
They might also want to know where people donate money, so they let people choose where some money goes among those 9 charities in exchange for learning where they donate the rest of their money. And at the same time, they signal support for those 9 charities.
Consequences on the donors
If donation matches don't change how much donors give, but just where they give (which seems plausible to me), biasing them equally against all charities might actually help them make decisions that are more aligned with their worldview than if they were less biased with only a subset of them.
1) Their website actually gives different numbers, but the idea is the same.
2) Sure, there's the real choice of choosing which of the 9 Effective Charities receive the money, but:
a) The part about local charities is a red herring
b) Those charities probably sort-of have reached market efficiency (in the sense that large donors can rebalance their donations according to how much total funding they want each of them to have)
I don't know. You can contact them: https://quantifieduncertainty.org/contact/ And if they don't want donations (yet), Ozzie might be able to recommend another organization, as ze has a great understanding of the landscape of prediction platforms.
Note: I don't have the impression that The Good Judgement Project has room for more funding. I like what the people behind QURI have been doing (I've been following their work). Disclaimer: I was contracted by both groups, and could be again.
My experience from EAGxBoston is that most people don't know about global coordination and only know about improving institutional decision making. Also, I feel like the two are related, and that there aren't many organizations working on both. Do you disagree?
I don't know. Depends where you draw the boundaries.
Is there a name for a moral framework where someone cares more about the moral harm they directly cause than other moral harm?
I feel like a consequentialist would care about the harm itself whether or not it was caused by them.
And a deontologist wouldn't act in a certain way even if it meant they would act that way less in the future.
Here's an example (it's just a toy example; let's not argue whether it's true or not).
A consequentialist might eat meat if they can use the saved resources to make 10 other people vegans.
A deontologist wouldn't eat honey even if they knew they would crack in the future and start eating meat.
If you care much more about the harm caused by you, you might act differently from both of them. You wouldn't eat meat to make 10 other people vegan, but you might eat honey to avoid cracking later and starting to eat meat.
A deontologist is like someone adopting that framework, but with an empty individualist approach. A consequentialist is like someone adopting that framework, but with an open individualist approach.
I wonder if most self-labeled deontologists would actually prefer this framework I'm proposing.
EtA: I'm not sure how well "directly caused" can be cashed out. Does anyone have a model for that?
I wish people x-posting between LessWrong and the EA Forum encouraged users to comment on only one of them, to centralize comments. And to increase the probability that people follow this suggestion, for posts (which take a long time to read anyway, compared to the time cost of clicking a link), I would just put the post on one of the 2 and a link to it on the other.
EA isn't (supposed to be) dogmatic, and hence doesn't have clearly defined values.
I think this is a big reason why people have chosen to focus on behavior and community involvement.
Community involvement is just instrumental to the goals of EA movement building. I think the outcomes we want to measure are things like career and donations. We also want to measure things that are instrumental to this, but I think we should keep those separated.
I think it would be good to differentiate things that are instrumental to doing EA and things that are doing EA.
Ex.: Attending events and reading books is instrumental. Working and donating money is directly EA.
I would count those separately. Engagement in the community is just instrumental to the goal of EA movement building. If we entangle both in our discussions, we might end up with people attending a bunch of events and reading a lot online, but without ever producing value (for example).
Although maybe it does produce value in itself, because they can do movement building themselves and become better voters for example. And focusing a lot on engagement might turn EA into a robust superorganism-like entity. If that's the argument, then that's fine I guess.
Ok yeah, my explanations didn't make the connection clear. I'll elaborate.
I have the impression "drift" has the connotation of uncontrolled, and therefore undesirable, change. It has a negative connotation. People don't want to value-drift. If you call a rational surface-level value update "value drift", it could confuse people, and make them less prone to make those updates.
If you use 'value drift' only to refer to EA-value drift, it also sneaks in an implication that other value changes are not "drifts". Language shapes our thoughts, so this usage could modify one's model of the world in such a way that they're more likely to become more EA than they actually value.
I should have been more careful about implying certain intentions from you in my previous comment though. But I think some EAs have this intention. And I think using the word that way has this consequence whether or not that's the intent.
This seems reasonable to me. I do use the shortcut myself in various contexts. But I think using it on someone when you know it's because they have different values is rude.
I use 'value drift' to refer to fundamental values. If your surface-level values change because you introspected more, I wouldn't call it a drift. Drift has a connotation of not being in control. Maybe I would rather call it value enlightenment.
If they have the same values, but just became worse at fulfilling them, then it's more something like "epistemic drift"; although I would probably discourage using that term.
On the other hand, if they started caring more about homeless people intrinsically for some reason, then it would be a value drift. But they wouldn't be "less effective"; they would, presumably, be as effective, just at a different goal.
It seems epistemically dangerous to discourage such value enlightenment, as it might prevent us from becoming more enlightened ourselves.
It seems pretty adversarial to manipulate people into not becoming more value-enlightened, and allowing this at the norm level seems net negative from most people's point of view.
But maybe people want to act more altruistically and trustingly in a society that also espouses those values. In which case, surface-level values could change in a way that's good for almost everyone without any fundamental value drift. That's also a useful phenomenon to study, so it's probably fine to also call this 'value drift'.