Posts

What are the long term consequences of poverty alleviation? 2021-07-12T18:14:57.270Z
What would an entity with GiveWell's decision-making process have recommended in the past? 2021-06-25T06:12:28.978Z
What effectively altruistic inducement prize contest would you like to be funded? 2021-06-22T21:35:14.853Z
Which effective altruism projects look disingenuous? 2021-01-03T07:28:39.662Z
Mati's 2020 donation recommendations 2020-12-08T19:02:08.725Z
When should you use lotteries? 2020-12-08T18:07:12.427Z
The community's conception of value drifting is sometimes too narrow 2020-09-04T02:00:10.326Z
How does change in the cost of security change the world? 2020-08-30T21:53:35.555Z
If a poverty alleviation intervention has a positive ROI, (why) isn't anyone lending money for them? 2020-08-24T23:23:33.374Z
At what level of risk of birth defect is it not worth (trying) having a (biological) child for the median person? 2020-08-03T20:06:47.134Z
Can you have an egoistic preference about your own birth? 2020-07-16T03:14:31.452Z
[link] Biostasis / Cryopreservation Survey 2020 2020-05-16T07:40:17.922Z
Which norms would you like to see on the EA Forum? 2020-05-10T21:41:42.826Z
How much slack do people have? 2020-04-27T03:37:48.467Z
What are high-leverage interventions to increase/decrease the global communication index? 2020-04-21T18:09:31.429Z
Could we have a warning system to warn us of imminent geomagnetic storms? 2020-04-04T15:35:50.828Z
(How) Could an AI become an independent economic agent? 2020-04-04T13:38:52.935Z
What fraction of posts submitted on the Effective Altruism Facebook group gets accepted by the admins? 2020-04-02T17:15:49.009Z
Why do we need philanthropy? Can we make it obsolete? 2020-03-27T15:47:25.258Z
Are selection forces selecting for or against altruism? Will people in the future be more, as, or less altruistic? 2020-03-27T15:24:36.201Z
How could we define a global communication index? 2020-03-25T01:47:50.731Z
What promising projects aren't being done against the coronavirus? 2020-03-22T03:30:02.970Z
Are countries sharing ventilators to fight the coronavirus? 2020-03-17T07:11:40.243Z
What are EA project ideas you have? 2020-03-07T02:58:53.338Z
What medium/long term considerations should we take into account when responding to the coronavirus' threat? 2020-03-05T10:30:47.153Z
Has anyone done an analysis on the importance, tractability, and neglectedness of keeping human-digestible calories in the ocean in case we need it after some global catastrophe? 2020-02-17T07:47:45.162Z
Who should give sperm/eggs? 2020-02-08T05:13:43.477Z
Mati_Roy's Shortform 2019-12-05T16:31:52.494Z
Crohn's disease 2018-11-13T16:20:42.200Z

Comments

Comment by Mati_Roy on Mexico EA Fellowship · 2022-09-24T02:39:05.516Z · EA · GW

I sent an email with a group application on September 18th (https://docs.google.com/document/d/1ERZ7spGHZYJjixaY3ln4eVxu_GKkOWqvB9rpz3Abd6M/), but I still haven't received a reply; I hadn't used the Google Form since it was a group application. Did you receive my application? :/

Comment by Mati_Roy on Mexico EA Fellowship · 2022-09-06T01:57:22.772Z · EA · GW

Is this program family-friendly?

Comment by Mati_Roy on What are EA project ideas you have? · 2022-08-31T19:17:41.317Z · EA · GW

Increase the prizes for the International Mathematical Olympiad

Rationale: It's a useful source of talent that EAs have used, and the current prizes are pretty low (less than 100 USD each, AFAIK).

I'd be willing to pitch in for that prize. Please reach out to me if interested.

Comment by Mati_Roy on You should join an EA organization with too many employees · 2022-07-30T21:01:20.237Z · EA · GW

I read it quickly, but basically: value that is harder for the market to capture is more neglected, so there are actually a lot of opportunities to help more people per employee in altruistic sectors; not doing that is an opportunity cost.

Comment by Mati_Roy on EA Organization Updates: April-May 2022 · 2022-05-17T20:06:31.091Z · EA · GW

MATS is extending applications until May 22nd for its SERI ML Alignment Theory Scholars Program 2022. More info: https://forum.effectivealtruism.org/posts/nSyvMy3QQTyBzybNx/seri-ml-alignment-theory-scholars-program-2022

Comment by Mati_Roy on The EA Newsletter & Open Thread - January 2016 · 2022-05-17T20:04:22.803Z · EA · GW

nice!

Comment by Mati_Roy on Brain preservation to prevent involuntary death: a possible cause area · 2022-03-22T22:43:12.556Z · EA · GW

I don't have a short answer for you unfortunately.

The Quantum Physics Sequence does address this to some extent.

Comment by Mati_Roy on What effectively altruistic inducement prize contest would you like to be funded? · 2022-03-08T18:57:10.977Z · EA · GW

I'd love that, yes! https://www.facebook.com/mati.roy.09

Comment by Mati_Roy on Mati_Roy's Shortform · 2022-02-08T07:12:07.348Z · EA · GW

Part-time remote assistant position

My assistant agency, Pantask, is looking to hire new remote assistants. We currently work only with effective altruist / LessWrong clients, and are looking to contract people in or adjacent to the network. If you’re interested in referring people to me, I’ll give you a 100 USD finder’s fee for any assistant I contract for at least 2 weeks (I’m looking to contract a couple at the moment).

This is a part-time gig / sideline. Tasks often include web searches, problem solving over the phone, and Google Sheet formatting. A full description of our services is here: https://bit.ly/PantaskServices

The form to apply is here: https://airtable.com/shrdBJAP1M6K3R8IG (it pays 20 USD/h).

You can ask questions here, in PM, or at mati@pantask.com.

Comment by Mati_Roy on What to know before talking with journalists about EA · 2022-02-07T19:07:37.575Z · EA · GW

this seems relevant: Guide to Talking About Effective Altruism (https://www.givingwhatwecan.org/get-involved/share-our-ideas/guide-to-talking-about-effective-altruism/) (I haven't read it, though)

Comment by Mati_Roy on What to know before talking with journalists about EA · 2022-01-24T21:07:37.142Z · EA · GW

I also suggest you record all interviews

Comment by Mati_Roy on EA Survey 2018 Series: Community Demographics & Characteristics · 2021-12-22T01:46:47.975Z · EA · GW

The link to the raw data at the end of the post is dead. I would like to have the raw data of EA surveys, especially recent ones -- is there a way to get them?

Comment by Mati_Roy on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-12-19T07:36:10.054Z · EA · GW

A way to sequence sperm/eggs non-destructively

this would give us:

an immediate large boost of ~2 SD, possible by selecting earlier in the process, before variance has been canceled out; it does not require any new technology other than the gamete sequencing part

see: https://www.gwern.net/Embryo-selection
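
To make the "selecting earlier" intuition concrete, here's a rough Monte Carlo sketch (my own simplification for illustration, not the model behind the ~2 SD figure or Gwern's analysis): the expected gain from keeping the best of N candidates drawn from a normal trait distribution grows with N, which is the sense in which selecting among many gametes (large N) could beat selecting among a handful of embryos (small N).

```python
# Illustrative only: assumes trait scores are i.i.d. standard normal across candidates,
# which is a big simplification of actual polygenic-score genetics.
import numpy as np

rng = np.random.default_rng(0)

def expected_gain_in_sd(n_candidates, n_trials=10_000):
    """Average maximum of n_candidates standard-normal draws, i.e. the expected
    gain (in SD units) from keeping the single best candidate."""
    draws = rng.standard_normal((n_trials, n_candidates))
    return draws.max(axis=1).mean()

for n in (5, 10, 100, 1000):
    print(f"best of {n:>4} candidates: ~{expected_gain_in_sd(n):.2f} SD above the mean")
```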

Comment by Mati_Roy on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-04T17:34:47.679Z · EA · GW

By the way, for the future I would suggest the question format (instead of the post format), so that comments are separated as "answers to the question" and "other" :)

Comment by Mati_Roy on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-04T17:33:31.991Z · EA · GW

First human to achieve some level of intelligence (as measured by some test) (prize split between the person themselves, the parents, and the genetic engineering lab, if applicable) (this is more about the social incentive than the economic one, as I suppose there's already an economic one)

x-post: What effectively altruistic inducement prize contest would you like to be funded?

Comment by Mati_Roy on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-04T17:30:01.693Z · EA · GW

1M USD for the first to create a gamete (sperm and/or egg) from stem cells that results in a successful birth in one of the following species: humans, mice, dogs, pigs (probably should add more options).

(this could enable iterated embryo selection)

x-post: What effectively altruistic inducement prize contest would you like to be funded?

Comment by Mati_Roy on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-04T17:27:33.223Z · EA · GW

You could do both -- that's what I'll do if that's okay :)

Also, comments can give you points too, ya know! :P

Comment by Mati_Roy on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-03T18:59:50.429Z · EA · GW

What's the price range for the bounty?

Comment by Mati_Roy on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-03T18:43:55.513Z · EA · GW

I like this! I'm curious why you opted for the submissions to be private instead of public (i.e. submitting by posting a comment)?

Comment by Mati_Roy on EA Survey 2019 Series: Donation Data · 2021-10-29T01:51:48.407Z · EA · GW

awesome, thank you!

Comment by Mati_Roy on EA needs consultancies · 2021-10-18T16:17:36.149Z · EA · GW

Hi all, Haydn and I figured this post was a good place to plug our startup, Pantask. While the services we provide are not as advanced as those listed here, Pantask can offer assistance to EA orgs that need help with day-to-day operations but can’t afford to hire full-time employees. We provide general virtual assistance services, such as organizing chaotic troves of data, managing schedules and emails, and helping with brain debugging. We also offer graphic design, copyediting, transcription, and writing services. Our assistants can also perform certain kinds of research (the kind you can do in <8 hours, generally speaking), such as finding service providers, information on grants, etc.

Essentially, if the task can be done by a reasonably competent person without a specialized skill set, our assistants can very likely do it for you. In addition to the company being EA-owned, some of our assistants are also EAs, and even more are familiar with and interested in EA. We’ve served EA charities before. We charge 30 USD per hour. If you're not used to delegating tasks, we can help you review the tasks you delegate to make sure they are clear, at no additional cost.

You can send tasks to ask@pantask.com, or email either of us at mati@pantask.com or haydn@pantask.com, or call us at (570) 509-3366. You can also schedule me on: https://calendar.google.com/calendar/u/0/appointments/schedules/AcZssZ0Dc0qvV3EbGsGR39_dhoeusVtX6rwnpfXpGVHwRHPGPuIjTd1GPiCRz9qMwTkIZKCPPVB0AQQm

Comment by Mati_Roy on What are the long term consequences of poverty alleviation? · 2021-07-12T18:16:08.115Z · EA · GW
Comment by Mati_Roy on What would an entity with GiveWell's decision-making process have recommended in the past? · 2021-07-12T18:06:04.730Z · EA · GW

Good point, and GiveWell would probably have figured that one out

Comment by Mati_Roy on What would an entity with GiveWell's decision-making process have recommended in the past? · 2021-07-05T16:13:15.110Z · EA · GW

Mayyybe it would have bought slaves' freedom one by one instead? (I don't know; just speculating)

Comment by Mati_Roy on What would an entity with GiveWell's decision-making process have recommended in the past? · 2021-07-05T16:12:23.317Z · EA · GW

Another angle (/ piece of the puzzle) to compare different decision-making processes

Comment by Mati_Roy on What effectively altruistic inducement prize contest would you like to be funded? · 2021-06-25T06:17:02.363Z · EA · GW

Also safer technological progress, which is where a significant chunk of the x-risk comes from. I don't think this would influence the probability of stable dictatorships.

Comment by Mati_Roy on What effectively altruistic inducement prize contest would you like to be funded? · 2021-06-25T06:13:18.250Z · EA · GW

those are not inducement prizes

Comment by Mati_Roy on What effectively altruistic inducement prize contest would you like to be funded? · 2021-06-22T21:37:19.659Z · EA · GW

I intend to update this answer as I think of more.

  • Creating a gamete from a stem cell (to enable [iterated embryo selection](https://www.lesswrong.com/tag/iterated-embryo-selection))
  • Reanimating a cryonics patient (although creating a prize that long in advance will probably not create market pressure in the short term)
  • First human to achieve some level of intelligence (as measured by some IQ test) (prize split between the person and the genetic engineering lab) (this is more about the social incentive than the economic one, as I suppose there's already an economic one)
Comment by Mati_Roy on Shouldn't 'Effective Altruism' be capitalized? · 2021-06-22T19:44:48.932Z · EA · GW

the community, yes. the practice / approach, no.

Comment by Mati_Roy on Limited Time Opportunity to Secure Panamanian Residency - EA Group Trip Offer [Imminent Rules Change] · 2021-06-05T21:15:01.165Z · EA · GW
  • FYI, Hunter says you can have residency in Paraguay easily, and only need to stay 1 day per year to maintain it
  • I might be interested in hanging out in Panama, but idk if I want the citizenship
Comment by Mati_Roy on Why do we need philanthropy? Can we make it obsolete? · 2021-04-24T22:19:59.458Z · EA · GW

If a non-profit organization is:

  • not providing some public good (in the economic sense: https://en.wikipedia.org/wiki/Public_good_(economics))
  • not redistributing money directly
  • not helping agents that can't help themselves / use money
  • not helping the donor directly
  • relying on donations

Then:

  • it's probably mostly done for signaling purposes and/or misguided
  • it's likely performing worse than the average company
    • although there could be less efficient ways of redistributing money that would arguably be better than the average company
Comment by Mati_Roy on Announcing "Naming What We Can"! · 2021-04-01T20:27:29.201Z · EA · GW

Good thinking. Names and currency (along with status) are among the few things you have less of when others have more, and so benefit from being put on the blockchain

Comment by Mati_Roy on Cash prizes for the best arguments against psychedelics being an EA cause area · 2021-02-25T15:00:04.870Z · EA · GW

So I'm understanding that you have short AI timelines, and so don't think genetic engineering would have time to pay off, but psychedelics would, and that you think it's of similar relevance to working directly on the problem

Comment by Mati_Roy on Forecasting Prize Results · 2021-02-23T00:40:09.532Z · EA · GW

This prize will total $1000 between multiple recipients, with a minimum first place prize of $500. We will aim for 2-5 recipients in total. The prize will be paid for by the Quantified Uncertainty Research Institute (QURI).

Why was that not respected, nor mentioned in this post, AFAICT?

Comment by Mati_Roy on Cash prizes for the best arguments against psychedelics being an EA cause area · 2021-02-22T14:10:09.491Z · EA · GW

thanks for your answer!

Genetic engineering doesn't seem to have a comparable track record or a comparable evidence base.

You get humans from primates with genetic modifications, not psychedelics :)

Comment by Mati_Roy on Cash prizes for the best arguments against psychedelics being an EA cause area · 2021-02-20T20:44:11.256Z · EA · GW

oh, my bad. apologies. thanks for the quote!

in terms of augmenting humans, my impression is that genetic engineering is by far the most effective intervention. my understanding is that we're currently making a lot of progress in that area, yet some important research aspects seem neglected, and could have a transformative impact on the world.

I wonder if you disagree

Comment by Mati_Roy on Cash prizes for the best arguments against psychedelics being an EA cause area · 2021-02-16T19:11:19.945Z · EA · GW

I feel like the burden of proof is on you, no? how will psychedelics help avoid astronomical waste?

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-07T06:26:14.845Z · EA · GW

I guess I was working on the assumption that it was rare for people to want to split their donation between local and effective charities a priori, and my point was that GM wasn't useful to people who didn't already want to split their donations that way before GM's existence -- but maybe this assumption is wrong, actually

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-05T03:37:52.448Z · EA · GW

hummm, I guess it's fine after all; I've changed my mind. People can just give whatever fraction they were going to give to local charities, and then be matched. And the extra matching to effective charities is a signal from the matcher about their model of the world. I don't think someone who was going to give 100% to a charity other than those 9 should use GivingMultiplier, though (unless they changed their mind about effective charities). But my guess is that this project has good consequences.

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-04T12:09:13.636Z · EA · GW

I'm henceforth offering a MetaGivingMultiplier. It has the same structure as GivingMultiplier, but replace "local charities" with "GivingMultiplier" and "super-effective charities" with "a cryonics organization" (I recommend https://www.alcor.org/rapid/ or https://www.brainpreservation.org/). Anyone want to take advantage of my donation match?

h/t: came up with this with Haydn Thomas-Rose

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-04T11:46:17.443Z · EA · GW

On handling posts that may violate Forum rules:

Thanks for the clarifications.

On private vs. public communication:

I don't want to argue about what to do in general, but here in particular my "accusation" consists of doing the math. If I got it wrong, I'm sure others got it wrong too, and it would be useful to clarify publicly.

On that note, I've sent this post along to Lucius of the GivingMultiplier team.

Thank you.

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-04T08:17:55.956Z · EA · GW

I agree with what Kit said as well.

But the fact that the only reason you're not removing it is Kit's comment makes me pretty concerned about the forum.

I also disagree that private communication is better than public communication in cases like this.

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-04T08:09:45.816Z · EA · GW

This doesn't change the "indistinguishable from if I gave X" property, but it is a thing that would have been easy to check before posting.

I did check. As you said, it doesn't change the conclusion (it actually makes it worse).

Second, point (b) matters. It seems like a bold assumption to assume that EA charities have reached "market efficiency"

I'm >50% sure that it doesn't fare better, but maybe. In any case, I specified in my OP that my main objection was (a). 

Thus, if you actually think one of the "EA" choices at GivingMultiplier is more valuable than the rest, it seems very likely that you contribute more to their work by choosing them to be matched. 

Yep, I did mention that in my OP.

Did you see anything on the site that actually seemed false to you?

No, I also mentioned this in my OP.

  1. Give people an incentive to think about splitting their donation between "heart" and "head", by...

There's not really a real incentive, though. I feel like there's a motte-and-bailey: the motte is that you get to choose one of the 9 charities; the bailey is that the matching to the local charity is actually meaningful.

and the local charity of their choice

That's meaningless, as I showed in my OP.

If you think they could have been even more clear, or think that most donors will believe something different despite the FAQ, you could say so. But to say that people who use the match "don't understand what's going on" is both uncharitable and, as best I can tell, false.

[...]

I disagree. shrug

Comment by Mati_Roy on Cash prizes for the best arguments against psychedelics being an EA cause area · 2021-01-04T07:55:25.396Z · EA · GW

We already have one gateway drug: poverty alleviation. We don't need more. Psychedelics won't change civilisation's path. Next.

Comment by Mati_Roy on Which effective altruism projects look disingenuous? · 2021-01-03T08:11:20.136Z · EA · GW

Importance: not really important to read this comment

Update: I updated; see my reply

GivingMultiplier's description according to the EA newsletter^1:

Let's assume two charities: Effective_Charity and Local_Charity.

If you were going to give 100 USD to Local_Charity, but instead donate 10 USD to Effective_Charity and 90 USD to Local_Charity, GivingMultiplier will give 9 USD to Local_Charity and 1 USD to Effective_Charity, so there's now 99 USD going to Local_Charity and 11 USD going to Effective_Charity. But GivingMultiplier would have given the money to Effective_Charity anyway. So for the donor, this is indistinguishable from donating 99 USD to Local_Charity and 1 USD to Effective_Charity, but it's done in a more obscure way.^2
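
To make the counterfactual math above explicit, here's a minimal sketch; the 10% match rate and the function name are placeholders matching the example's numbers, not GivingMultiplier's actual scheme, and it assumes (as the comment does) that the matcher's budget would otherwise have gone to the effective charity.

```python
# Minimal sketch of the example above (illustrative numbers, not GivingMultiplier's actual rates).

def donor_counterfactual(donor_local, donor_effective, match_rate=0.10):
    """Return the donor's counterfactual impact on (local, effective) totals,
    assuming the matcher's budget would have gone to the effective charity anyway."""
    match_local = donor_local * match_rate          # 90 * 0.10 = 9
    match_effective = donor_effective * match_rate  # 10 * 0.10 = 1
    matcher_budget = match_local + match_effective  # 10, assumed to go to Effective otherwise

    local_total = donor_local + match_local               # 99
    effective_total = donor_effective + match_effective   # 11

    # Subtract the no-donor world, where the matcher's budget still reaches Effective.
    return local_total, effective_total - matcher_budget

print(donor_counterfactual(90, 10))  # (99.0, 1.0) -- same as just giving 99/1 directly
```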

Also, sure, they are rather transparent about their process (at least in the newsletter; it wasn't obvious from the main page of the website), but still, their scheme mostly works only insofar as people don't understand what's going on.

Potential motives

A bunch of people don't know Why you shouldn’t let “donation matching” affect your giving, and so they will be misled by donation matches. If EA charities don't use them, they might be at a disadvantage. So their reasoning might be that game theory favors also using this technique under a consequentialist moral framework – sort of like tit-for-tat with other charities, with deceiving donors as an externality.

One could argue that they should link to the piece against donation matching on their website, but maybe both memes are suited to different environments – maybe it would mostly reduce how much people use that specific service to fill their donation-matching needs, or something like that. I don't know; I'm trying to steelman it.

They might also want to know where people donate money, so they let people choose which of those 9 charities some of the money goes to in exchange for knowing where they donate the rest. And at the same time, they signal support for those 9 charities.

Consequences on the donors

If donation matches don't change how much donors give, but just where they give (which seems plausible to me), then biasing them equally with respect to all charities might actually help them make decisions that are more aligned with their worldview than if only a subset of charities biased them.

Footnotes

1) Their website actually gives different numbers, but the idea is the same.

2) Sure, there's the real choice of which of the 9 Effective Charities receives the money, but:

a) The part about local charities is a red herring

b) Those charities have probably sort of reached market efficiency (in the sense that large donors can rebalance their donations according to how much total funding they want each of them to have)

(a) is my main objection.

Comment by Mati_Roy on Where are you donating in 2020 and why? · 2020-12-16T12:14:41.530Z · EA · GW

I posted on my website because I'm using some formatting not supported here: Mati's 2020 donation recommendations

Comment by Mati_Roy on Mati's 2020 donation recommendations · 2020-12-16T12:10:48.081Z · EA · GW

Ah yes, will do. Had I seen that thread, I would probably have posted only there instead of making a top-level post. Thanks!

Comment by Mati_Roy on Bored at home? Contribute to the EA Wiki! · 2020-12-15T15:41:43.183Z · EA · GW

I just submitted a new wiki article, and it says it's under review. How long does that usually take? Let me know if you'd like to have more reviewers to help with that.

Comment by Mati_Roy on Is there a hedonistic utilitarian case for Cryonics? (Discuss) · 2020-12-15T14:00:02.126Z · EA · GW

That's a complicated way of saying "I don't think it works" 0_o

Comment by Mati_Roy on Mati_Roy's Shortform · 2020-12-14T12:00:44.073Z · EA · GW

although to be fair, longtermist and infinitarian reasoning often suggest the same courses of action in our world, I have the impression