Posts

Which effective altruism projects look disingenuous? 2021-01-03T07:28:39.662Z
Mati's 2020 donation recommendations 2020-12-08T19:02:08.725Z
When should you use lotteries? 2020-12-08T18:07:12.427Z
The community's conception of value drifting is sometimes too narrow 2020-09-04T02:00:10.326Z
How does change in the cost of security change the world? 2020-08-30T21:53:35.555Z
If a poverty alleviation intervention has a positive ROI, (why) isn't anyone lending money for them? 2020-08-24T23:23:33.374Z
At what level of risk of birth defect is it not worth (trying) having a (biological) child for the median person? 2020-08-03T20:06:47.134Z
Can you have an egoistic preference about your own birth? 2020-07-16T03:14:31.452Z
[link] Biostasis / Cryopreservation Survey 2020 2020-05-16T07:40:17.922Z
Which norms would you like to see on the EA Forum? 2020-05-10T21:41:42.826Z
How much slack do people have? 2020-04-27T03:37:48.467Z
What are high-leverage interventions to increase/decrease the global communication index? 2020-04-21T18:09:31.429Z
Could we have a warning system to warn us of imminent geomagnetic storms? 2020-04-04T15:35:50.828Z
(How) Could an AI become an independent economic agent? 2020-04-04T13:38:52.935Z
What fraction of posts submitted on the Effective Altruism Facebook group gets accepted by the admins? 2020-04-02T17:15:49.009Z
Why do we need philanthropy? Can we make it obsolete? 2020-03-27T15:47:25.258Z
Are selection forces selecting for or against altruism? Will people in the future be more, as, or less altruistic? 2020-03-27T15:24:36.201Z
How could we define a global communication index? 2020-03-25T01:47:50.731Z
What promising projects aren't being done against the coronavirus? 2020-03-22T03:30:02.970Z
Are countries sharing ventilators to fight the coronavirus? 2020-03-17T07:11:40.243Z
What are EA project ideas you have? 2020-03-07T02:58:53.338Z
What medium/long term considerations should we take into account when responding to the coronavirus' threat? 2020-03-05T10:30:47.153Z
Has anyone done an analysis on the importance, tractability, and neglectedness of keeping human-digestible calories in the ocean in case we need it after some global catastrophe? 2020-02-17T07:47:45.162Z
Who should give sperm/eggs? 2020-02-08T05:13:43.477Z
Mati_Roy's Shortform 2019-12-05T16:31:52.494Z
Crohn's disease 2018-11-13T16:20:42.200Z

Comments

Comment by Mati_Roy on Announcing "Naming What We Can"! · 2021-04-01T20:27:29.201Z · EA · GW

Good thinking. Names and currency (along with status) are among the few things you have less of when others have more, and so benefit from being put on the blockchain

Comment by Mati_Roy on Cash prizes for the best arguments against psychedelics being an EA cause area · 2021-02-25T15:00:04.870Z · EA · GW

so I'm understanding that you have short AI timelines, and so don't think genetic engineering would have time to pay off, but psychedelics would, and that you think it's of similar relevance to working directly on the problem

Comment by Mati_Roy on Forecasting Prize Results · 2021-02-23T00:40:09.532Z · EA · GW

This prize will total $1000 between multiple recipients, with a minimum first place prize of $500. We will aim for 2-5 recipients in total. The prize will be paid for by the Quantified Uncertainty Research Institute (QURI).

Why was that not respected, nor mentioned in this post, AFAICT?

Comment by Mati_Roy on Cash prizes for the best arguments against psychedelics being an EA cause area · 2021-02-22T14:10:09.491Z · EA · GW

thanks for your answer!

Genetic engineering doesn't seem to have a comparable track record or a comparable evidence base.

You get humans from primates with genetic modifications, not psychedelics :)

Comment by Mati_Roy on Cash prizes for the best arguments against psychedelics being an EA cause area · 2021-02-20T20:44:11.256Z · EA · GW

oh, my bad. apologies. thanks for the quote!

in terms of augmenting humans, my impression is that genetic engineering is by far the most effective intervention. my understanding is that we're currently making a lot of progress in that area, yet some important research aspects seem neglected, and could have a transformative impact on the world.

I wonder if you disagree

Comment by Mati_Roy on Cash prizes for the best arguments against psychedelics being an EA cause area · 2021-02-16T19:11:19.945Z · EA · GW

I feel like the burden of proof is on you, no? how will psychedelics help avoid astronomical waste?

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-07T06:26:14.845Z · EA · GW

I guess I was working on the assumption that it was rare for people to want to split their donation between local and effective charities a priori, and my point was that GM wasn't useful to people who didn't already want to split their donations that way before GM existed -- but maybe this assumption is actually wrong

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-05T03:37:52.448Z · EA · GW

hummm, I guess it's fine after all. I've changed my mind. People can just give whatever fraction they were going to give to local charities, and then be matched. And the extra matching to effective charities is a signal from the matcher about their model of the world. I don't think someone who was going to give 100% to a charity other than those 9 should use GivingMultiplier though (unless they changed their mind about effective charities). But my guess is that this project has good consequences.

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-04T12:09:13.636Z · EA · GW

I'm henceforth offering a MetaGivingMultiplier. It's the same structure as GivingMultiplier, but replace "local charities" with "GivingMultiplier" and "super-effective charities" with "a cryonics organization" (I recommend https://www.alcor.org/rapid/ or https://www.brainpreservation.org/). Does anyone want to take advantage of my donation match?

h/t: came up with this with Haydn Thomas-Rose

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-04T11:46:17.443Z · EA · GW

On handling posts that may violate Forum rules:

Thanks for the clarifications.

On private vs. public communication:

I don't want to argue about what to do in general, but here in particular my "accusation" consists of doing the math. If I got it wrong, I'm sure others got it wrong too, and it would be useful to clarify publicly.

On that note, I've sent this post along to Lucius of the GivingMultiplier team.

Thank you.

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-04T08:17:55.956Z · EA · GW

I agree with what Kit said as well.

But the fact that the only reason you're not removing it is Kit's comment makes me pretty concerned about the forum.

I also disagree that private communication is better than public communication in cases like this.

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-04T08:09:45.816Z · EA · GW

This doesn't change the "indistinguishable from if I gave X" property, but it is a thing that would have been easy to check before posting.

I did check. As you said, it doesn't change the conclusion (it actually makes it worse).

Second, point (b) matters. It seems like a bold assumption to assume that EA charities have reached "market efficiency"

I'm >50% sure that it doesn't fare better, but maybe. In any case, I specified in my OP that my main objection was (a). 

Thus, if you actually think one of the "EA" choices at GivingMultiplier is more valuable than the rest, it seems very likely that you contribute more to their work by choosing them to be matched. 

Yep, I did mention that in my OP.

Did you see anything on the site that actually seemed false to you?

No, I also mentioned this in the OP.

  1. Give people an incentive to think about splitting their donation between "heart" and "head", by...

There isn't really a real incentive, though. I feel like there's a motte-and-bailey here. The motte is that you get to choose one of the 9 charities; the bailey is that the matching to the local charity is actually meaningful.

and the local charity of their choice

That's meaningless as I showed in OP.

If you think they could have been even more clear, or think that most donors will believe something different despite the FAQ, you could say so. But to say that people who use the match "don't understand what's going on" is both uncharitable and, as best I can tell, false.

[...]

I disagree. shrug

Comment by Mati_Roy on Cash prizes for the best arguments against psychedelics being an EA cause area · 2021-01-04T07:55:25.396Z · EA · GW

We already have one gateway drug: poverty alleviation. We don't need more. Psychedelics won't change the civilisation's path. Next.

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-03T08:11:20.136Z · EA · GW

Importance: not really important to read this comment

Update: I updated; see my reply

GivingMultiplier's description according to the EA newsletter^1:

Let's assume Effective_Charity and Local_Charity. 

If you were going to give 100 USD to Local_Charity, but instead donate 10 USD to Effective_Charity and 90 USD to Local_Charity, GivingMultiplier will give 9 USD to Local_Charity and 1 USD to Effective_Charity, so there's now 99 USD going to the Local_Charity and 11 USD going to the Effective_Charity. GivingMultiplier would give the money to Effective_Charity anyway. So for the donor, this is indistinguishable from donating 99 USD to Local_Charity and 1 USD to Effective_Charity, but it's done in a more obscure way.^2
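The arithmetic above can be sketched in a few lines (a toy illustration using the example's 10% top-up on each donation; these rates are from the example, not GivingMultiplier's actual matching formula):

```python
def totals(donor_local, donor_effective):
    """Final totals when the matcher adds 10% on top of each donation.

    Toy rates taken from the example above, not GivingMultiplier's
    real formula.
    """
    local = donor_local + donor_local // 10            # 90 + 9 = 99
    effective = donor_effective + donor_effective // 10  # 10 + 1 = 11
    return local, effective

# Donor splits 90/10 between local and effective, and gets matched ...
matched = totals(90, 10)   # (99, 11)

# ... which is indistinguishable from the donor giving 99/1 while the
# matcher sends its whole 10 USD budget straight to the effective charity.
unmatched = (99, 1 + 10)   # (99, 11)

assert matched == unmatched
```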

Also, sure, they are rather transparent about their process (at least in the newsletter; it wasn't obvious from the main page of the website), but still, their scheme mostly works only insofar as people don't understand what's going on.

Potential motives

A bunch of people don't know Why you shouldn’t let “donation matching” affect your giving, and so they will be misled by donation matches. If EA charities don't use them, then they might be at a disadvantage. So the reasoning might be that game theory favors also using this technique under a consequentialist moral framework – sort of like tit-for-tat with other charities, with deceiving donors as an externality.

One could argue that they should link to the piece against donation matching on their website, but maybe the two memes are fit for different environments – maybe it would mostly reduce how much people use that specific service to fill their donation-matching need, or something like that. I don't know; I'm trying to steelman it.

They might also want to know where people donate money, so they allow people to choose where some money goes among those 9 charities in exchange for knowing where they donate the rest of the money. And at the same time, they signal support for those 9 charities.

Consequences on the donors

If donation matches don't change how much donors give, but just where they give (which seems plausible to me), biasing them equally against all charities might actually help them make decisions that are more aligned with their worldview than if they were less biased with only a subset of them.

Footnotes

1) The website actually gives different numbers, but the idea is the same.

2) Sure, there's the real choice of choosing which of the 9 Effective Charities receive the money, but:

a) The part about local charities is a red herring

b) Those charities have probably sort-of reached market efficiency (in the sense that large donors can rebalance their donations according to how much total funding they want each of them to have)

(a) is my main objection.

Comment by Mati_Roy on Where are you donating in 2020 and why? · 2020-12-16T12:14:41.530Z · EA · GW

I posted on my website because I'm using some formatting not supported here: Mati's 2020 donation recommendations

Comment by Mati_Roy on Mati's 2020 donation recommendations · 2020-12-16T12:10:48.081Z · EA · GW

Ah yes, will do. Had I seen that thread, I would probably only have posted there instead of a top-level post. Thanks!

Comment by Mati_Roy on Bored at home? Contribute to the EA Wiki! · 2020-12-15T15:41:43.183Z · EA · GW

I just submitted a new wiki article, and it says it's under review. How long does that usually take? Let me know if you'd like to have more reviewers to help with that.

Comment by Mati_Roy on Is there a hedonistic utilitarian case for Cryonics? (Discuss) · 2020-12-15T14:00:02.126Z · EA · GW

That's a complicated way of saying "I don't think it works" 0_o

Comment by Mati_Roy on Mati_Roy's Shortform · 2020-12-14T12:00:44.073Z · EA · GW

although to be fair, longtermism and infinitarianism reasoning often suggest the same courses of actions in our world, I have the impression

Comment by Mati_Roy on Mati_Roy's Shortform · 2020-12-14T11:57:32.464Z · EA · GW

Short-termism is to longtermism what longtermism is to infinitarianism.

Comment by Mati_Roy on Effective charities for improving institutional decision making and improving global coordination · 2020-12-09T02:18:49.359Z · EA · GW

FYI, the improving institutional decision-making (IIDM) coordinating group within EA is working on a resource directory that will eventually be able to answer questions like these in greater detail.

@nlacombe, sounds like it might be a good idea to donate later instead of now then:) (or just donate to your own Donor-Advised Fund for now).

Comment by Mati_Roy on Effective charities for improving institutional decision making and improving global coordination · 2020-12-09T02:14:04.227Z · EA · GW

I don't know, but I wouldn't be surprised if the first 2 didn't have room for more funding

Comment by Mati_Roy on Effective charities for improving institutional decision making and improving global coordination · 2020-12-09T02:12:43.063Z · EA · GW

I don't know. I don't know these 3 organizations, I just saw them in the post.

Comment by Mati_Roy on Effective charities for improving institutional decision making and improving global coordination · 2020-12-09T02:11:03.940Z · EA · GW

I don't know. You can contact them: https://quantifieduncertainty.org/contact/ And if they don't want donations (yet), Ozzie might be able to recommend another organization, as ze has a great understanding of the landscape of prediction platforms.

Comment by Mati_Roy on Effective charities for improving institutional decision making and improving global coordination · 2020-12-07T14:15:41.592Z · EA · GW

In 80,000 Hours' post on Improving institutional decision-making, they also mention:

Comment by Mati_Roy on Effective charities for improving institutional decision making and improving global coordination · 2020-12-07T14:13:20.733Z · EA · GW

FHI's Centre for the Governance of AI

Comment by Mati_Roy on Effective charities for improving institutional decision making and improving global coordination · 2020-12-07T14:07:11.738Z · EA · GW

I've answered for now, but let me know if you create another question so that I move my answer

Comment by Mati_Roy on Effective charities for improving institutional decision making and improving global coordination · 2020-12-07T14:06:12.509Z · EA · GW

Institutional decision making

Updated: 2020-12-08

Note: I document this here: https://causeprioritization.org/Mechanism_and_institution_design (I might not keep this answer up-to-date, so check out the link)

Note: I don't have the impression The Good Judgement Project has room for more funding. I like what the people behind QURI have been doing (I've been following their work). Disclaimer: I was contracted by both groups, and could be again.

Also, documented here: https://causeprioritization.org/Forecasting :

Note: I don't know if any of those organizations have room for more funding.

Comment by Mati_Roy on Effective charities for improving institutional decision making and improving global coordination · 2020-12-07T13:58:06.069Z · EA · GW

Global coordination

Note: I document this here: https://causeprioritization.org/Global_coordination (I might not keep this answer up-to-date, so check out the link)

Note: Inclusion in the list doesn't mean endorsement. I love GCF, but I don't have the impression they need more funding. I feel good about the Good Country. I don't know the other 2 well.

Comment by Mati_Roy on Effective charities for improving institutional decision making and improving global coordination · 2020-12-06T23:18:57.423Z · EA · GW

my experience from EAGxBoston is that most people don't know about global coordination and only know about improving institutional decision making. also, I feel like they are related, and I feel like there are not many organizations in both areas. do you disagree?

I don't know. Depends where you draw the boundaries.

if you agree, would you still split?

yes

Comment by Mati_Roy on Effective charities for improving institutional decision making and improving global coordination · 2020-12-06T22:07:25.709Z · EA · GW

Hummm, seems to me like it would be better to ask a question for institutional decision making and another question for global coordination, no? if you agree, you can edit it and post another:)

Comment by Mati_Roy on Introducing High Impact Athletes · 2020-12-03T23:53:06.319Z · EA · GW

That's awesome, good work! :)

Comment by Mati_Roy on Mati_Roy's Shortform · 2020-11-09T13:04:39.953Z · EA · GW

oh, of course, for-profit charities are a thing! that makes sense

I learned about it in "Economics without Illusion", chapter 8.

just because your organization's product/service/goal is to help other people and your customers are philanthropists doesn't mean you can't make a profit.

profitable charities might increase competition to provide more effective altruism, and so still provide more value even though they make a profit (maybe)

https://en.wikipedia.org/wiki/Charitable_for-profit_entity

x-post: https://www.facebook.com/mati.roy.09/posts/10159007824884579

Comment by Mati_Roy on The case for investing to give later · 2020-11-05T11:44:01.706Z · EA · GW

I can't find the donate button on FundersPledge. Do you have no more room for additional funding?

Comment by Mati_Roy on Has anyone done an analysis on the importance, tractability, and neglectedness of keeping human-digestible calories in the ocean in case we need it after some global catastrophe? · 2020-10-23T07:30:01.561Z · EA · GW

related meme: https://www.facebook.com/photo.php?fbid=10158965372794579&set=a.10150313853174579&type=3&theater

Comment by Mati_Roy on Plan for Impact Certificate MVP · 2020-10-04T19:28:20.120Z · EA · GW

Awesome! Documented on Moral economics -- Cause Prioritisation Wiki

Comment by Mati_Roy on Mati_Roy's Shortform · 2020-09-21T17:25:14.492Z · EA · GW

Is there a name for a moral framework where someone cares more about the moral harm they directly cause than other moral harm?

I feel like a consequentialist would care about the harm itself whether or not it was caused by them.

And a deontologist wouldn't act in a certain way even if it meant they would act that way less in the future.

Here's an example (it's just a toy example; let's not argue whether it's true or not).

A consequentialist might eat meat if they can use the saved resources to make 10 other people vegans.

A deontologist wouldn't eat honey even if they knew they would crack in the future and start eating meat.

If you care much more about the harm caused by you, you might act differently than both of them. You wouldn't eat meat to make 10 other people vegan, but you might eat honey to avoid later cracking and start eating meat.

A deontologist is like someone adopting that framework, but with an empty individualist approach. A consequentialist is like someone adopting that framework, but with an open individualist approach.

I wonder if most self-labeled deontologists would actually prefer this framework I'm proposing.

EtA: I'm not sure how well "directly caused" can be cashed out. Does anyone have a model for that?

x-post: https://www.facebook.com/groups/2189993411234830/ (post currently pending)

Comment by Mati_Roy on Which norms would you like to see on the EA Forum? · 2020-09-20T08:45:42.335Z · EA · GW

I wish people x-posting between LessWrong and the EA Forum encouraged users to comment on only one of them, to centralize the comments. And to increase the probability that people follow this suggestion, for posts (which take a long time to read anyway, compared to the time it takes to click a link), I would just put the post on one of the 2 and a link to it on the other

Comment by Mati_Roy on Mati_Roy's Shortform · 2020-09-17T23:22:29.916Z · EA · GW

Policy suggestion for countries with government-funded health insurance or healthcare: People using death-with-dignity can receive part of the money that is saved by the government if applicable.

Which could be used to pay for cryonics among other things.

Comment by Mati_Roy on The community's conception of value drifting is sometimes too narrow · 2020-09-08T19:39:39.989Z · EA · GW
EA isn't (supposed to be) dogmatic, and hence doesn't have clearly defined values.

I agree.

I think this is a big reason why people have chosen to focus on behavior and community involvement.

Community involvement is just instrumental to the goals of EA movement building. I think the outcomes we want to measure are things like career and donations. We also want to measure things that are instrumental to this, but I think we should keep those separated.

Related: my comment on "How have you become more (or less) engaged with EA in the last year?"

Comment by Mati_Roy on How have you become more (or less) engaged with EA in the last year? · 2020-09-08T19:30:26.443Z · EA · GW

I think it would be good to differentiate things that are instrumental to doing EA and things that are doing EA.

Ex.: Attending events and reading books is instrumental. Working and donating money is directly EA.

I would count those separately. Engagement in the community is just instrumental to the goal of EA movement building. If we entangle both in our discussions, we might end up with people attending a bunch of events and reading a lot online, but without ever producing value (for example).

Although maybe it does produce value in itself, because they can do movement building themselves and become better voters for example. And focusing a lot on engagement might turn EA into a robust superorganism-like entity. If that's the argument, then that's fine I guess.

Somewhat related: The community's conception of value drifting is sometimes too narrow.

Comment by Mati_Roy on Suggest a question for Peter Singer · 2020-09-06T20:53:09.633Z · EA · GW

What are your egoistic preferences? (ex.: hedonism peak, hedonism intensity times length, learning, life extension, relationships, etc.)

Comment by Mati_Roy on Suggest a question for Peter Singer · 2020-09-06T20:52:30.218Z · EA · GW

(why) do you focus on near-term animal welfare and poverty alleviation?

Comment by Mati_Roy on The community's conception of value drifting is sometimes too narrow · 2020-09-06T20:27:41.500Z · EA · GW

yeah, 'shift' or 'change' work better as neutral terms. other suggestion: 'change in revealed preferences'

Comment by Mati_Roy on The community's conception of value drifting is sometimes too narrow · 2020-09-05T18:37:37.190Z · EA · GW

I see, thanks!

Comment by Mati_Roy on The community's conception of value drifting is sometimes too narrow · 2020-09-05T18:34:38.525Z · EA · GW

Ok yeah, my explanations didn't make the connection clear. I'll elaborate.

I have the impression "drift" has the connotation of uncontrolled, and therefore undesirable, change. It has a negative connotation. People don't want to value drift. If you call a rational surface-level value update "value drift", it could confuse people and make them less prone to making those updates.

If you use 'value drift' only to refer to EA-value drift, it also sneaks in an implication that other value changes are not "drifts". Language shapes our thoughts, so this usage could modify one's model of the world in such a way that they are more likely to become more EA than their values would warrant.

I should have been more careful about implying certain intentions from you in my previous comment though. But I think some EAs have this intention. And I think using the word that way has this consequence whether or not that's the intent.

Comment by Mati_Roy on The community's conception of value drifting is sometimes too narrow · 2020-09-05T18:16:16.506Z · EA · GW

This seems reasonable to me. I do use the shortcut myself in various contexts. But I think using it on someone when you know it's because they have different values is rude.

I use value drift to refer to fundamental values. If your surface level values change because you introspected more, I wouldn't call it a drift. Drift has a connotation of not being in control. Maybe I would rather call it value enlightenment.

Comment by Mati_Roy on The community's conception of value drifting is sometimes too narrow · 2020-09-05T02:14:12.491Z · EA · GW

I think another term would better fit your description. Maybe "executive failure".

I don't see it as a micro death

Me neither. Nor do I see it as a value drift though.

Comment by Mati_Roy on The community's conception of value drifting is sometimes too narrow · 2020-09-04T20:02:26.759Z · EA · GW

If they have the same values, but just became worse at fulfilling them, then it's more something like "epistemic drift"; although I would probably discourage using that term.

On the other end, if they started caring more about homeless people intrinsically for some reason, then it would be a value drift. But they wouldn't be "less effective", they would, presumably, be as effective, but just at a different goal.

Comment by Mati_Roy on The community's conception of value drifting is sometimes too narrow · 2020-09-04T19:57:55.717Z · EA · GW

Other thoughts:

  • It seems epistemically dangerous to discourage such value enlightenment, as it might prevent us from becoming more enlightened ourselves.
  • It seems pretty adversarial to manipulate people into not becoming more value enlightened, and allowing this at a norm level seems net negative from most people's point of view.
  • But maybe people want to act more altruistically and trustingly in a society that also espouses those values. In that case, surface-level values could change in a good way for almost everyone without any fundamental value drift. That's also a useful phenomenon to study, so it's probably fine to also call this 'value drift'.