Posts

Could we have a warning system to warn us of imminent geomagnetic storms? 2020-04-04T15:35:50.828Z · score: 4 (2 votes)
(How) Could an AI become an independent economic agent? 2020-04-04T13:38:52.935Z · score: 13 (5 votes)
What fraction of posts submitted on the Effective Altruism Facebook group gets accepted by the admins? 2020-04-02T17:15:49.009Z · score: 4 (2 votes)
Why do we need philanthropy? Can we make it obsolete? 2020-03-27T15:47:25.258Z · score: 17 (7 votes)
Are selection forces selecting for or against altruism? Will people in the future be more, as, or less altruistic? 2020-03-27T15:24:36.201Z · score: 10 (7 votes)
How could we define a global communication index? 2020-03-25T01:47:50.731Z · score: 4 (2 votes)
What promising projects aren't being done against the coronavirus? 2020-03-22T03:30:02.970Z · score: 5 (3 votes)
Are countries sharing ventilators to fight the coronavirus? 2020-03-17T07:11:40.243Z · score: 9 (3 votes)
What are EA project ideas you have? 2020-03-07T02:58:53.338Z · score: 17 (6 votes)
What medium/long term considerations should we take into account when responding to the coronavirus' threat? 2020-03-05T10:30:47.153Z · score: 5 (2 votes)
Has anyone done an analysis on the importance, tractability, and neglectedness of keeping human-digestible calories in the ocean in case we need it after some global catastrophe? 2020-02-17T07:47:45.162Z · score: 9 (8 votes)
Who should give sperm/eggs? 2020-02-08T05:13:43.477Z · score: 4 (13 votes)
Mati_Roy's Shortform 2019-12-05T16:31:52.494Z · score: 4 (2 votes)
Crohn's disease 2018-11-13T16:20:42.200Z · score: -12 (19 votes)

Comments

Comment by mati_roy on April Fool's Day Is Very Serious Business · 2020-04-07T13:50:38.084Z · score: 1 (1 votes) · EA · GW

Did this end up happening?

Comment by mati_roy on Why do we need philanthropy? Can we make it obsolete? · 2020-04-02T11:04:27.782Z · score: 3 (2 votes) · EA · GW

Thanks for your comment. It makes me realize I failed to properly communicate some of my ideas. Hopefully this comment can elucidate them.

> Better democracy won't help much with EA causes if people generally don't care about them

More democracy could even make things worse (see 10% Less Democracy). But a much better democracy wouldn't, because it would do things like:

  • Disentangling values from expertise (ex.: predicting which global catastrophes are most likely shouldn't be done democratically, but rather with expert systems such as prediction markets)
  • Representing the unrepresented (ex.: having a group representing the interest of non-human animals during elections)
> we choose EA causes in part based on their neglectedness

I was claiming that with the best system, all causes would be equally (not) neglected. Although, as I conceded in the previous comment, this wouldn't be entirely true, because people have different fundamental values.

> Causes have to be made salient to people, and that's a role for advocacy to play,

I think most causes wouldn't have to be made salient to people if we had a great System. You can have something like (with a lot of details still to be worked out): 1) have a prediction market to predict what values existing people would vote on in the future, and 2) have a prediction market to predict which interventions will fulfill those values the most. And psychological research and education that help people introspect are common goods that would likely be financed by such a System. Also, if 'advocacy' is a way of enforcing cooperative social norms, then this would be fixed by solving coordination problems.
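To make the two-market structure concrete, here is a minimal sketch; the value names, weights, and scores are hypothetical illustrations, not a worked-out mechanism design:

```python
# Minimal sketch of the two-market System described above (all names and
# numbers are hypothetical illustrations).

# Market 1: predict what values existing people would endorse on reflection.
# Prices are interpreted as weights over values.
value_weights = {
    "reduce_suffering": 0.5,    # hypothetical market-implied weight
    "existential_safety": 0.3,
    "flourishing": 0.2,
}

# Market 2: predict how well each intervention fulfills each value,
# e.g. as a score in [0, 1] implied by conditional prediction markets.
intervention_scores = {
    "pandemic_preparedness": {"reduce_suffering": 0.6, "existential_safety": 0.8, "flourishing": 0.3},
    "cash_transfers":        {"reduce_suffering": 0.7, "existential_safety": 0.1, "flourishing": 0.5},
}

def expected_fulfillment(intervention: str) -> float:
    """Weight each value-fulfillment forecast by the market-implied value weights."""
    return sum(value_weights[v] * s for v, s in intervention_scores[intervention].items())

# Funding would then flow toward the interventions with the highest scores.
for name in intervention_scores:
    print(name, round(expected_fulfillment(name), 3))
```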

But maybe you want to declare ideological war, and aim to overwrite people's terminal values with yours, hence partly killing their identity in the process. If that's what you mean by 'advocacy', then you're right that this wouldn't be captured by the System, and 'philanthropy' would still be needed. But protecting ourselves against such ideological attacks is a social good: it's good for everyone individually to be protected. I also think it's likely better for everyone (or at least a supermajority) to have this protection for everyone rather than for no one. If we let ideological wars go on, there will likely be an evolutionary process that will select for ideologies adapted to their environment, which is likely to be worse from most currently existing people's moral standpoint than if there had been ideological peace. Robin Hanson has written a lot about such multipolar outcomes.

Maybe pushing for altruism right now is a good patch to fund social good in the current System. And maybe waging ideological war against weaker ideologies is currently rational. But I don't think it's the best solution in the long run.

Also relevant: Against moral advocacy.

> I'm not sure you can or should try to capture this all without philanthropy

I proposed arguments for and against capturing philanthropy in the article. If you have more considerations to add, I'm interested.

> Also, I don't think inequality will ever be fixed, since there's no well-defined target. People will always argue about what's fair, because of differing values.

I don't know. Maybe we settle on the Schelling point of splitting the Universe among all political actors (or in some other ways), and this gets locked-in through apparatuses like Windfall clauses (for example), and even if some people disagree with them, they can't change them. Although they could still decide to redistribute their own wealth in a way that's more fair according to their values, so in that sense you're right that there would still be a place for philanthropy.

> Some issues may remain extremely expensive to address [...] so people as a group may be unwilling to fund them, and that's where advocates and philanthropists should come in.

I guess it comes down to inequality. Maybe someone thinks it's particularly unfair that someone has a rare disease, and so is willing to spend more resources on it than what the collective wants. And so they would inject more resources into a market for this value.

Another example: maybe the Universe is split equally among everyone alive at the point of the intelligence explosion, but some people will want to redistribute some of their wealth to fulfill the preferences of dead people, or will want to reward those that helped make this happen.

> What is "just the right amount"?

I was thinking something like the amount one would spend if everyone else would spend the same amount as them, repeating this process for everyone and summing all those quantities. This would just be the resources spent on a value; how to actually use the resources for that value would be decided by some expert systems.
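One way to formalize this (my notation, not a settled definition):

```latex
% Let c_i(x) be the amount person i would contribute to a given value if
% they believed every other person was contributing x. Each person's
% quantity x_i is then a fixed point, and the total spent on that value
% is their sum:
\[
x_i = c_i(x_i), \qquad X = \sum_{i=1}^{n} x_i .
\]
```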

> And how do you see the UN coming to fund it if they haven't so far?

The UN would need to have more power. But I don't know how to make this happen.

> If you got rid of Open Phil and other private foundations, redistributed the money to individuals proportionally, even if earmarked for altruistic purposes, and solved all coordination problems, do you think (longtermist) AI safety would be more or less funded than it is now?

At this point we would have formed a political singleton. I think a significant part of our entire world economy would be structured around AI safety. So more.

> How else would you see (longtermist) AI safety make up for Open Phil's funding through political mechanisms, given how much people care about it?

As mentioned above, using something like Futarchy.

-----

Creating a perfect system would be hard, but I'm proposing moving in that direction. I updated toward thinking that even with a perfect system, there would still be some people wanting to redistribute their wealth, but less so than currently.

Comment by mati_roy on Mati_Roy's Shortform · 2020-04-02T09:35:02.897Z · score: 1 (1 votes) · EA · GW

Good point. My implicit idea was to have the money in an independent trust, so that the "punishment" is easier to enforce.

Comment by mati_roy on EA Survey 2019 Series: Donation Data · 2020-04-02T09:32:06.544Z · score: 1 (1 votes) · EA · GW

Thanks!

Comment by mati_roy on EA Survey 2019 Series: Donation Data · 2020-04-01T04:13:13.963Z · score: 3 (2 votes) · EA · GW

I wonder how people in the EA community compare with people in general, notably controlling for income. I also wonder how much is given in the form of a reduced salary or volunteering, and how that compares to people in general.

Comment by mati_roy on Why We Sleep — a tale of institutional failure · 2020-04-01T03:26:10.600Z · score: 2 (2 votes) · EA · GW

cross-post means copy-pasting the entire article into the post on the EA Forum

Comment by mati_roy on Why do we need philanthropy? Can we make it obsolete? · 2020-03-31T06:12:40.496Z · score: 1 (1 votes) · EA · GW

Thanks for your comment, it helped me clarify my model to myself.

> especially politically unempowered moral beings

It proposes a lot of different voting systems to avoid (human) minorities being oppressed.

I could definitely see them develop systems to include future / past people.

But I agree they don't seem to tackle beings not capable (at least in some ways) of representing themselves, like non-human animals and reinforcement learners. Good point. It might be a blank spot for that community(?)

> or many of the EA causes

Such as? Can you see other altruistic uses of philanthropy besides coordination problems, politically empowering moral beings, and fixing inequality? Although maybe that assumes preference utilitarianism. With pure positive hedonistic utilitarianism, wanting to create more happy people is not really a coordination problem (to the extent most people are not positive hedonistic utilitarians), nor about empowering moral beings (ie. happiness is mandatory), nor about fixing inequalities (nor an egoist preference).

> Maybe it can make solving them easier, but it doesn't offer full solutions to them all, which seems to be necessary for making philanthropy obsolete.

Oh, I agree solving coordination failures to finance public goods doesn't solve the AI safety problem, but it solves the AI safety funding problem. In that world, the UN would arguably finance AI safety at just the right amount, so there would be no need for philanthropists to fund the cause. In that world, $1 at the margin of any public good would be just as effective. And egoist motivations to work in any of those fields would be sufficient. Although maybe there are market failures that aren't coordination failures, like information asymmetries, in which case there might still be a use for personal sacrifices.

Comment by mati_roy on Mati_Roy's Shortform · 2020-03-30T08:51:54.536Z · score: 1 (3 votes) · EA · GW

Mind-readers as a neglected life extension strategy

Last updated: 2020-03-30

Status: idea to integrate in a longer article

Assuming that:

  • Death is bad
  • Lifelogging is a bet worth taking as a life extension strategy

It seems like a potentially really important and neglected intervention is improving mind readers, as the mind is by far the most important part of our experience that isn't / can't be captured at the moment.

We don't actually need to be able to read the mind right now, just to be able to record the mind with sufficiently high resolution (plausibly alongside text and audio recordings, to be able to determine which brain patterns correspond to what kind of thoughts).

Questions:

  • Assuming we had extremely good software, how much could we read minds with our current hardware? (ie. how much is it worth recording your thoughts right now?)
  • How inconvenient would it be? How much would it cost?

To do:

  • Ask on Metaculus some operationalisation of the first question
Comment by mati_roy on Mati_Roy's Shortform · 2020-03-30T05:09:01.911Z · score: 0 (5 votes) · EA · GW

Nuke insurance

Category: Intervention idea

Epistemic status: speculative; arm-chair thinking; non-expert idea; unfleshed idea

Proposal: Have nuclear powers insure each other against being nuked, creating a form of mutually assured destruction (ie. destroying my infrastructure means destroying your own economy). Not accepting an offer of mutual insurance should be seen as extremely hostile and uncooperative, and possibly even be severely sanctioned internationally.
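A toy payoff sketch of the idea (illustrative numbers only, not a worked-out treaty design):

```python
# With a mutual insurance pool, a first strike automatically transfers an
# escrowed amount S from the attacker to the victim, so attacking damages
# the attacker's own economy.

GAIN_FROM_STRIKE = 100       # hypothetical strategic gain to the attacker
S = 500                      # escrowed insurance payout (hypothetical)

def attacker_payoff(strikes: bool, insured: bool) -> int:
    if not strikes:
        return 0
    return GAIN_FROM_STRIKE - (S if insured else 0)

print(attacker_payoff(True, insured=False))  # 100: striking can look profitable
print(attacker_payoff(True, insured=True))   # -400: insurance makes striking dominated
```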

Comment by mati_roy on Why We Sleep — a tale of institutional failure · 2020-03-30T02:06:40.601Z · score: 7 (5 votes) · EA · GW

This gave me the idea of The Bullshit Awards

Comment by mati_roy on What are EA project ideas you have? · 2020-03-30T02:06:11.894Z · score: 7 (4 votes) · EA · GW

The Bullshit Awards

Proposal: Give prizes to people spotting / blowing the whistle on papers bullshitting their readers, and explaining why.

Details: There could be a Bullshit Alert Prize for the one blowing the whistle, and a Bullshit Award for the one having done the bullshitting. This would be similar to the Darwin Awards in that you don't want to be the source of such an award.

Example: An analysis that could have won this is Why We Sleep — a tale of institutional failure.

Note: I'm not sure whether that's a good way to go about fixing that problem. Is shaming a useful tool?

Comment by mati_roy on Why do we need philanthropy? Can we make it obsolete? · 2020-03-30T00:59:21.194Z · score: 2 (2 votes) · EA · GW

Harry Potter meme related to this post ^^: https://www.facebook.com/groups/OMfCT/permalink/2502301776751392/

Comment by mati_roy on What promising projects aren't being done against the coronavirus? · 2020-03-29T03:04:33.924Z · score: 1 (1 votes) · EA · GW

two of the main blockers for prediction markets seem to be 1) legality, and 2) subsidies. seems like this state of emergency / immediate potential benefit of prediction markets might be a good time to address 1), and maybe even 2)

Comment by mati_roy on What are EA project ideas you have? · 2020-03-27T17:41:31.799Z · score: 1 (1 votes) · EA · GW

Moved from my short form; created on 2020-02-28

Group to discuss information hazard

Context: Sometimes I come up with ideas that are very likely information hazards, and I don't share them. Most of the time I come up with ideas that are very likely not information hazards.

Problem: But also, sometimes, I come up with ideas that are in-between, or that I can't tell whether I should share or not.

Solution hypothesis: I propose creating a group with which one can share such ideas to get external feedback on them and/or about whether they should be shared more widely or not. To reduce the risk of information leaking from that group, the group could:

  • be kept small (5 participants?)
    • note: there can always be more such groups
  • be selective
    • exam on information hazard / on Bostrom's paper on the topic
      • notably: some classes of hazard should definitely not be shared in that group, and this should be made explicit
    • questionnaire on how one handled information in the past
      • notably: secrets
    • have a designated member share a link on an applicant's Facebook wall with rewards for reporting antisocial behavior
    • pledge to treat the information with the utmost seriousness
    • commit to give feedback for each idea (to have a ratio of feedback / exposed person of 1)

Questions: What do you think of this idea? How can I improve this idea? Would you be interested in helping with or joining such a group?

Possible alternatives:

  • Info-hazard buddy: ask a trusted EA friend if they want to give you feedback on possible info-hazardy ideas
    • warning: some info-hazard ideas (/idea categories) should NOT be thought about more. some info-hazard can be personally damaging to someone (ask for clear consent before sharing them, and consider whether it's really useful to do so).
    • note: yeah I think I'm going to start with this first
Comment by mati_roy on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-03-27T17:28:50.856Z · score: 2 (2 votes) · EA · GW

I just want to document that this idea was mentioned in the book Superintelligence by Nick Bostrom.

> The ideal form of collaboration for the present may therefore be one that does not initially require specific formalized agreements and that does not expedite advances in machine intelligence. One proposal that fits these criteria is that we propound an appropriate moral norm, expressing our commitment to the idea that superintelligence should be for the common good. Such a norm could be formulated as follows:
>
> The common good principle: Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.
>
> Establishing from an early stage that the immense potential of superintelligence belongs to all of humanity will give more time for such a norm to become entrenched.
>
> The common good principle does not preclude commercial incentives for individuals or firms active in related areas. For example, a firm might satisfy the call for universal sharing of the benefits of superintelligence by adopting a "windfall clause" to the effect that all profits up to some very high ceiling (say, a trillion dollars annually) would be distributed in the ordinary way to the firm's shareholders and other legal claimants, and that only profits in excess of the threshold would be distributed to all of humanity evenly (or otherwise according to universal moral criteria). Adopting such a windfall clause should be substantially costless, any given firm being extremely unlikely ever to exceed the stratospheric profit threshold (and such low-probability scenarios ordinarily playing no role in the decisions of the firm's managers and investors). Yet its widespread adoption would give humankind a valuable guarantee (insofar as the commitments could be trusted) that if ever some private enterprise were to hit the jackpot with the intelligence explosion, everybody would share in most of the benefits. The same idea could be applied to entities other than firms. For example, states could agree that if ever any one state's GDP exceeds some very high fraction (say, 90%) of world GDP, the overshoot should be distributed evenly to all.
>
> The common good principle (and particular instantiations, such as windfall clauses) could be adopted initially as a voluntary moral commitment by responsible individuals and organizations that are active in areas related to machine intelligence. Later, it could be endorsed by a wider set of entities and enacted into law and treaty. A vague formulation, such as the one given here, may serve well as a starting point; but it would ultimately need to be sharpened into a set of specific verifiable requirements.

Comment by mati_roy on Are selection forces selecting for or against altruism? Will people in the future be more, as, or less altruistic? · 2020-03-27T15:27:05.441Z · score: 1 (1 votes) · EA · GW

Epistemic status: narrative driven; arm-chair thinking; contains large simplifications, suppositions, and speculations

Conclusion: I don't know if the overall effect is selecting for or against altruism

Historically

Humans might be good at detecting whether someone is altruistic. So from an evolutionary psychology perspective, altruism might act as a commitment mechanism for cooperativeness (but remember, we're Adaptation-Executers, not Fitness-Maximizers). Similarly, but alternatively, similar alleles could be responsible for both cooperativeness and altruism. In either case, those seem like plausible explanations for why some amount of altruism was selected for, and would continue being selected for.

But I want to focus my answer mostly on speculating on new and future selection pressures for or against altruism. To find the literature on its historical selection pressures, search for the term 'problem of altruism'. The above is just a quick thought, not a summary of the literature.

General

Narratives for increased selection

It could be that we have a greater opportunity for cooperativeness than we used to. It's now possible to cooperate with people throughout the world, and not just with your local tribe. Plus, with winner-takes-most financial dynamics, this could have increased the benefits of having large groups cooperate.

Also, a tribe of people sharing the same moral values will cooperate much more easily. A pure negative preference utilitarian giving money to another pure negative preference utilitarian knows that this money will be used for the pursuit of a shared goal. Whereas a pure egoist can't as easily do this with other pure egoists as they all have different goals / they all want to help different people (ie. themselves, respectively). It's much cheaper for people sharing moral values to cooperate as they don't have to design robust contracts.

Genes

Narratives for increased selection

A) It could be that altruistic people think having more people in absolute terms, or more people like them in relative terms, is a good thing, and so make an effort to raise more children or conceive more biological children, respectively, on average.

B) It could be that when we get the technology to do advanced genetic engineering in humans, subsidies or laws encourage or force selecting prosocial genes for the benefit of the common good.

Narratives for decreased selection

A) It could be that altruistic people give resources away to the extent that they don't have enough to raise (as many) children, or to raise them well enough.

B) It could be that altruistic people think it's wrong to create new people, either on deontological or utilitarian grounds. Deontological grounds could include directly being against creating new humans, or indirectly, being against taking welfare money to do so. From a utilitarian perspective, they could potentially be failing to see the longer-term consequences it would have from the resulting selection effect, or they could rightfully have weighted this consideration as less important (or come to the right conclusion for epistemically wrong reasons).

C) It could be that when we get the technology to do advanced genetic engineering in humans, people want their kids to mostly care about their family and themselves, and not care about society as much.

Economic power

Related: Donating now vs later (on Causeprioritization.org)

Narratives for increased selection

It seems likely that egoists have faster diminishing returns on marginal dollars, and also, as a consequence, are more risk-averse about making a lot of money. Ie. you can only save yourself once (sort of), but there are a lot of other people to save. Although if you have fringe moral values, they might be so neglected that this isn't as accurate.

As a potential example of altruistic people taking more risks, it seems more plausible that an egoist offered 100M USD to sell zir startup would take the money than an altruist would, given that an altruist might still have low diminishing returns on money at that level.
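A toy illustration of that asymmetry (all numbers are hypothetical):

```python
# An egoist with sharply diminishing returns (log utility) versus an
# altruist with roughly linear returns on money at this scale.
from math import log

SURE_OFFER = 100e6                              # sell the startup for 100M USD
P_SUCCESS, BIG_EXIT, FALLBACK = 0.1, 2e9, 1e6   # keep going: 10% chance of a 2B exit

# Egoist, log utility: the sure 100M beats the gamble.
u_sell = log(SURE_OFFER)                                              # ~18.4
u_hold = P_SUCCESS * log(BIG_EXIT) + (1 - P_SUCCESS) * log(FALLBACK)  # ~14.6

# Altruist, ~linear utility in money: the gamble has twice the expected value.
ev_hold = P_SUCCESS * BIG_EXIT + (1 - P_SUCCESS) * FALLBACK           # ~200.9M

print(u_sell > u_hold, ev_hold > SURE_OFFER)  # True True: egoist sells, altruist holds
```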

It could also be that altruistic people, caring about people in the future, are more likely to invest their money long-term, and so gain power over a larger fraction of the economy.

Narratives for decreased selection

It could be that philanthropists, by redistributing their wealth directly or through public goods, or by helping oppressed groups, see their relative capacity to influence the world diminish as they become relatively less wealthy than those who don't. Trivially, if they are rational, they would only do that if they expect this to be the best course of action. But their altruistic instinct might incite them toward more rapid gratification, especially if they want to signal those instincts, and other mechanisms, such as Donor-Advised Funds, don't allow them to do so as much.

Other

Ems

On page 302-303 of "The Age of Ems", Robin Hanson explains what ze thinks altruistic ems will donate money to and why they will choose those cause areas. Ze also says "Like people today, ems are eager to show their feelings about social and moral problems, and their allegiance to pro-social norms", although I think ze doesn't explain why, but it might just be a premise of the book that ems are similar to humans a priori, and just live with different incentive structures.

Comment by mati_roy on Mati_Roy's Shortform · 2020-03-25T13:10:49.712Z · score: 2 (2 votes) · EA · GW

From the Global Challenges Foundation:

> The GCF wishes to draw your attention to UN75's ‘One-minute Survey’. It is a survey that anyone can take; together with opinion polling in 50 countries and artificial intelligence sentiment analysis of traditional and social media in 70 countries, it will generate compelling data to inform national and international policies and debate.

> The views and ideas that are generated will be presented, by the Secretary-General, to world leaders and senior UN officials on September 21, 2020, at a high-level event to mark the 75th anniversary.

Now's the time to ask for an existential risk organization within the UN.

Link: https://un75.online/#s2

Comment by mati_roy on What posts do you want someone to write? · 2020-03-25T10:03:21.719Z · score: 1 (1 votes) · EA · GW

Negative income taxes > UBI ?

A short mathematical demonstration of how negative income taxes compare to UBI in terms of economics 101.

Here's a thread in an EA group about the topic.
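A minimal sketch of the equivalence such a post could start from (hypothetical parameters): a UBI of B funded by a flat tax at rate t produces the same net-income schedule as a negative income tax with guarantee B and clawback rate t, so the econ-101 comparison turns on administration and behavioral effects rather than arithmetic.

```python
# Hypothetical parameters for illustration only.
B = 12_000   # annual transfer / guarantee
t = 0.30     # flat tax rate / clawback rate

def net_income_ubi(gross: float) -> float:
    return gross * (1 - t) + B          # everyone gets B, everyone pays t

def net_income_nit(gross: float) -> float:
    transfer = B - t * gross            # positive below the breakeven point,
    return gross + transfer             # negative (a tax) above it

for gross in [0, 20_000, 40_000, 100_000]:
    assert abs(net_income_ubi(gross) - net_income_nit(gross)) < 1e-9
# The two schedules coincide exactly at every income level.
```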

Comment by mati_roy on What promising projects aren't being done against the coronavirus? · 2020-03-23T07:15:07.066Z · score: 1 (1 votes) · EA · GW

I'd be curious to know how your call goes

Comment by mati_roy on App for COVID-19 contact tracing · 2020-03-23T07:11:54.745Z · score: 1 (1 votes) · EA · GW

For reference, South Korea tells you where there have been cases: https://coronamap.site/

Comment by mati_roy on What are EA project ideas you have? · 2020-03-22T08:19:04.690Z · score: 1 (1 votes) · EA · GW

Altruist credits

Epistemic status: not sure if the idea works

Category: meta

Proposal: Pay someone with a 'donation gift card' or 'donation credits'

Details and rationale:

Often, when I work on a project approved by EAs, I don't necessarily want to be paid as much as I want to be able to have people work on my EA projects in the future.

Imagine you have a Donor-Advised Fund called the Altruist Bank which issues one Altruist Credit per USD you put into it. The Altruist Credit can be spent by telling the DAF which charity to send a USD to. The Altruist Credit can also be given to other people directly.

My hope was that agreeing to be paid in altruist credits would be a strong signal of alignment on altruism, and altruistic people might perform better at altruist projects (as their incentives are more aligned). A discounted wage might also act as a signal, although maybe it can also attract less qualified people (?)

It might also encourage a culture of more donations.

And **maybe** be simpler than everyone individually opening a DAF.
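Here's a minimal sketch of the ledger mechanics described above (hypothetical interface, not a real product):

```python
class AltruistBank:
    """One credit minted per USD deposited; credits can be transferred
    between people but can only exit the system as a grant to a charity."""

    def __init__(self):
        self.balances: dict[str, int] = {}   # credits per person
        self.pool_usd = 0                    # USD held by the DAF

    def deposit(self, donor: str, usd: int) -> None:
        self.pool_usd += usd
        self.balances[donor] = self.balances.get(donor, 0) + usd  # mint 1:1

    def pay(self, sender: str, receiver: str, credits: int) -> None:
        assert self.balances.get(sender, 0) >= credits
        self.balances[sender] -= credits     # e.g. wages for an EA project
        self.balances[receiver] = self.balances.get(receiver, 0) + credits

    def redeem(self, holder: str, charity: str, credits: int) -> None:
        assert self.balances.get(holder, 0) >= credits
        self.balances[holder] -= credits
        self.pool_usd -= credits
        print(f"grant {credits} USD to {charity}")  # the only way USD leaves

bank = AltruistBank()
bank.deposit("funder", 1000)
bank.pay("funder", "worker", 400)       # wages paid in credits, not USD
bank.redeem("worker", "GiveDirectly", 400)
```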

Avoiding possible problems:

  • If we can somehow make it illegal to sell, that would be useful because otherwise anyone can sell their Altruist Credits to altruists for just slightly less than 1 USD each, at which point you're just back with USDs
  • If it became massively used, then it could start being used just as a currency (as long as everyone expects others to accept it) (although this seems unlikely to happen)

Additional note:

  • I think parallel economies, such as Simbi, are bad for basic Econ 101 reasons, but here maybe the altruistic signaling is of sufficient additional value (?)
Comment by mati_roy on What promising projects aren't being done against the coronavirus? · 2020-03-22T07:31:49.441Z · score: 2 (2 votes) · EA · GW

Awesome! I just replied there.

Comment by mati_roy on App for COVID-19 contact tracing · 2020-03-22T07:29:47.410Z · score: 2 (2 votes) · EA · GW

I skimmed. Looks good!

Here are two posts I had made on Facebook, just for reference:

I want an app that detects when I cough, can connect to a bluetooth thermometer, where I can input other medical information, that tracks my geolocation, connects with other apps to know who I met (ex.: Facebook events, Facebook friends, Meetup events, Uber drivers, etc.) and anonymously takes the data from the other users on that platform to give me an up-to-date probability estimate that I have the coronavirus.
It could also be a requirement to use the bluetooth thermometer before aggregating this specific data from other people in your probability estimate (to maintain the incentives). Same for all other medical information. It only aggregates others' manually entered temperatures when you manually enter yours. EtA: Actually, same for privacy: if you opt in to being public, it shows you the profiles of other people who opted in to being public.
Then I want Uber to make having this app a requirement for working, and have them take their temperature with the Bluetooth thermometer every day before they can work, and have the cough detection on.

(source)

Someone linked A new app would say if you’ve crossed paths with someone who is infected.

I also posted this Facebook thread on this. Someone in EA circles wrote about an app they're making; see Tina White's post (this was also linked: https://covid19risk.com/ , but I can't see it at the moment). There's also Trace Together.

Might be worth reaching out to all of them if you're interested in helping out with this.

Comment by mati_roy on What promising projects aren't being done against the coronavirus? · 2020-03-22T03:32:33.732Z · score: 2 (4 votes) · EA · GW

1. Highly subsidized prediction market on the effectiveness of various possible interventions.

Alternatively: add this feature on other prediction aggregation platforms (see this feature suggestion on Metaculus)

Related: Do You Feel Lucky, Punk?
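To illustrate what subsidizing means mechanically, here's a sketch using a logarithmic market scoring rule (LMSR) market maker, one standard way to subsidize a prediction market; the liquidity parameter b caps the sponsor's worst-case loss (the subsidy paid for information) at b·ln(number of outcomes). All numbers are hypothetical.

```python
from math import exp, log

b = 100.0                      # liquidity parameter: bigger b = bigger subsidy
q = [0.0, 0.0]                 # net shares sold for [works, doesn't work]

def cost(q):                   # LMSR cost function C(q) = b * ln(sum(e^(q_i/b)))
    return b * log(sum(exp(qi / b) for qi in q))

def price(q, i):               # instantaneous probability of outcome i
    return exp(q[i] / b) / sum(exp(qj / b) for qj in q)

def buy(q, i, shares):         # a trader pays the change in the cost function
    new_q = q.copy()
    new_q[i] += shares
    return new_q, cost(new_q) - cost(q)

q, paid = buy(q, 0, 50)        # someone bets the intervention works
print(round(price(q, 0), 3))   # price moves above 0.5 (~0.622)
print(round(b * log(2), 1))    # worst-case sponsor loss: ~69.3
```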

2. Inducement prize contests for various important milestones (maybe)

Comment by mati_roy on EA Global Live Broadcast · 2020-03-21T18:36:20.587Z · score: 47 (24 votes) · EA · GW

I vote for having one remote EAG every year. This is great as far as I'm concerned!

Comment by mati_roy on What are EA project ideas you have? · 2020-03-17T10:32:03.313Z · score: 1 (1 votes) · EA · GW

Coronavirus: Should I go to work?

UPDATE: An EA project I'm part of might do this

summary: have an app that helps people decide whether or not they should go to work

context: in the last 12 hours I spent maybe about 2 hours 'empowering' someone I know by giving them more information to help them decide whether they should take sick days

problem: knowing the probability that one is infected (by the coronavirus) helps inform whether they should avoid going to work. the probability beyond which you should stay home is not the same for each type of job. at what point should one not go to work?

the 2 main sub questions are:

  • what's the probability that I'm infected?
    • there are already forms that sort of do that ex.: https://covid19.empego.ca/#/, but I would prefer a more probabilistic approach with more detailed input
  • if I'm infected, what damage am I likely to cause, in expectation? how many people am I meeting at work? how many confirmed cases are in my city? etc.
    • there's an app made by EAs that might get released in the coming days that addresses a similar question

for example: Someone told me: my partner was coughing, had a sore throat, and had X fever during the whole day, but is now feeling better; yesterday ze was okay, and we slept together, but I haven't seen zir since then. ze wasn't outside the country recently, and hasn't met anyone infected as far as ze knows. ze lives in city Y which has Z cases.
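a rough sketch of the expected-value rule such an app could apply (all parameters hypothetical):

```python
# Stay home when the expected harm of going in, given your probability of
# being infected, exceeds the value of your workday.
p_infected = 0.03            # from the symptom/contact questionnaire
contacts_at_work = 15        # people you'd meet at work
p_transmit = 0.05            # per-contact transmission probability (hypothetical)
harm_per_infection = 5_000   # expected downstream cost in USD (hypothetical)
value_of_workday = 200       # wages + output lost by staying home

expected_harm = p_infected * contacts_at_work * p_transmit * harm_per_infection
print("stay home" if expected_harm > value_of_workday else "go to work, with precautions")
# 0.03 * 15 * 0.05 * 5000 = 112.5 < 200 here, so: precautions rather than staying home
```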

there could also be intermediary recommendations (maybe?): go to work, but take the following precautions:

  • wear a mask
  • avoid meetings
  • etc.

addendum: in countries that don't have monetary incentives for people to self-quarantine, there will be a negative externality not captured. but the tool should still improve decision making.

Comment by mati_roy on Mati_Roy's Shortform · 2020-03-17T07:10:36.644Z · score: 1 (1 votes) · EA · GW

update: now posted as a question: https://forum.effectivealtruism.org/posts/CbwnCiCffSuCzz3kM/are-countries-sharing-ventilators-to-fight-the-coronavirus

topic: coronavirus | epistemic status: question / idea / hypothesis

the coronavirus doesn't hit every country at the same time, so they should share ventilators. "if you get it first, you may borrow my ventilators (until I need them), and when you don't need yours anymore, you can lend them to me."

to preserve the incentive to create more ventilators, a country could pledge to share only as many ventilators as the other country has itself.
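a minimal sketch of that pledge rule (hypothetical numbers):

```python
# A country lends only its surplus, and never more than the borrower's own
# stock, which preserves the incentive to build ventilators over free-riding.
def lendable(lender_stock: int, lender_need: int, borrower_stock: int) -> int:
    surplus = max(0, lender_stock - lender_need)
    return min(surplus, borrower_stock)   # matched to the borrower's own capacity

print(lendable(lender_stock=10_000, lender_need=4_000, borrower_stock=5_000))  # 5000
print(lendable(lender_stock=10_000, lender_need=4_000, borrower_stock=9_000))  # 6000
```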

it seems like a strictly positive exchange. the risk might be a country not returning the ventilators, but maybe the Chinese and US armies could act as the world's police, or something like that (and the US and China wouldn't exchange ventilators among themselves)

is something like this happening? are countries sharing their ventilators optimally?

Comment by mati_roy on Aligning Recommender Systems as Cause Area · 2020-03-12T00:10:02.577Z · score: 1 (1 votes) · EA · GW

Somewhat related: Prediction markets for content curation DAOs ( https://ethresear.ch/t/prediction-markets-for-content-curation-daos/1312/4 )

Comment by mati_roy on What are EA project ideas you have? · 2020-03-11T22:19:49.324Z · score: 3 (3 votes) · EA · GW

Belief Network

Last updated: 2020-03-30

Category: group rationality; signal boosting

Proposal: Track people's beliefs over time, and what information gave them the biggest update.

Details: It could be done at the same time as the EA survey every year. And/or it could be a website that people continuously update (see the sketch after the motivation below).

Motivation: The goals are

1) to track which information is the most valuable so that more people consume it, and

2) see how beliefs evolve (which might be evidence in itself about which beliefs are true; although I think most people, myself included, wouldn't consider this the strongest form of evidence). It could be that most people make a similar series of paradigm shifts over time, and knowing which ones might help speed things up.
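As a sketch of the record such a tracker could store per update (field names are my guesses, not a worked-out schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BeliefUpdate:
    person: str          # or an anonymous stable ID
    claim: str           # e.g. "AI is the top cause area"
    credence: float      # probability in [0, 1] after the update
    updated_on: date
    source: str          # the article/talk that caused the biggest shift

update = BeliefUpdate("anon_42", "AI is the top cause area", 0.7,
                      date(2020, 3, 30), "Superintelligence (Bostrom)")
# Aggregating `source` across updates serves goal 1 (which information is
# most valuable); time series of `credence` per claim serve goal 2.
```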

Alternative name: MindChange

What's been done so far: Post on LessWrong What are some articles that updated your beliefs a lot on an important topic? The EA survey also tracks some high-level views, notably on cause prioritization.

Comment by mati_roy on Blood Donation: (Generally) Not That Effective on the Margin · 2020-03-10T22:01:54.867Z · score: 1 (1 votes) · EA · GW

In countries where buying blood is illegal, and the blood supply therefore relies on people's altruism, it seems plausible that the government wouldn't be able to operate at its ideal budget per QALY, and so that donating blood is more effective than we would otherwise think. Although I could also imagine that the government doesn't include the donors' time in its cost, and so could actually go over its ideal budget per QALY, but that seems less likely to me.

Comment by mati_roy on Blood Donation: (Generally) Not That Effective on the Margin · 2020-03-10T21:58:58.945Z · score: 1 (1 votes) · EA · GW

Good comment / I agree.

Nitpick (not important to read):

> 240 pounds / hour for your time

"A unit of red blood cells (RBCs) costs about 120 pounds"

"a donation is 0.5 units of red blood cells"

So 0.5 unit / donation * 120 pounds / unit = 60 pounds / donation

A donation takes the time to go there and get the blood out, plus the lost productivity from feeling weaker (assuming you were going to do something productive counterfactually); I don't know what that number is, but I'd put it at about 1 hour. So ~60 pounds / hour.

Comment by mati_roy on Blood Donation: (Generally) Not That Effective on the Margin · 2020-03-10T21:37:35.475Z · score: 1 (1 votes) · EA · GW

other metrics I like to have a better feeling of what this means are:

  • quality-adjusted life days per dollar
  • quality-adjusted life days per donation
  • number of donations per quality-adjusted life year
Comment by mati_roy on Quantifying lives saved by individual actions against COVID-19 · 2020-03-10T19:07:28.390Z · score: 1 (1 votes) · EA · GW

If you do, I suggest mentioning explicitly it's been cross-posted

Comment by mati_roy on What are EA project ideas you have? · 2020-03-09T23:41:50.855Z · score: 2 (2 votes) · EA · GW

thanks for your comment, I totally agree!

maybe we could ban comments? and delete the page if that doesn't end up working?

Comment by mati_roy on What are EA project ideas you have? · 2020-03-07T22:44:33.299Z · score: 1 (1 votes) · EA · GW

Forum Facebook page

Category: signal boosting

Proposal: Share the best (say >=100 karma) posts on the EA Forum on a Facebook page called "Best of the EA Forum"

Why? So that people that naturally go on Facebook but not on the EA Forum can be exposed to that content

Date posted: 2020-03-07

Note: I'm willing to help with this, but probably not do it by myself.

Note: If there's a way to get this list easily, it might facilitate the process.

x-post: https://www.facebook.com/groups/1392613437498240/permalink/2947443972015171/

Comment by mati_roy on What are EA project ideas you have? · 2020-03-07T22:41:58.909Z · score: 2 (2 votes) · EA · GW

Impact of the 5% payout rule

Category: meta-EA; research

Proposal: Research what would be the consequences of removing the 5% payout rule.

Motivating intuition: maybe it would help longer-termist causes (?) and it might also increase the global ratio of investment to consumption (?)

Date posted: 2020-03-06

Additional information:

> A foundation must pay out 5% of its assets each year while a public charity may not.
> Donors to a public charity receive greater tax benefits than donors to a foundation.
> A public charity must collect at least 10% of its annual expenses from the public to remain tax-exempt while a foundation does not.

( source: Foundation (United States law) )

Comment by mati_roy on Concrete project lists · 2020-03-07T03:05:46.306Z · score: 2 (2 votes) · EA · GW

2020-03-06: What are EA project ideas you have?

Comment by mati_roy on What are EA project ideas you have? · 2020-03-07T03:04:06.188Z · score: 4 (4 votes) · EA · GW

moving my answers into separate comments below this answer.

particularly useful feedback includes, but isn't limited to:

  • links to a similar project that was already done
  • connection with people interested in this project
  • analysis of the usefulness of the project
Comment by mati_roy on Concrete project lists · 2020-03-07T02:53:37.777Z · score: 2 (2 votes) · EA · GW

2017-06-12: Projects I'd like to see, by William_MacAskill

2019-03-08: A List of Things For People To Do, by casebash

2019-09-17: Young data scientist seeks project ideas, by faunam

Comment by mati_roy on What medium/long term considerations should we take into account when responding to the coronavirus' threat? · 2020-03-05T10:31:09.225Z · score: 5 (2 votes) · EA · GW

epistemic status: I'm not an expert; mostly going with "common sense" | last updated: 2020-03-05

I've been meaning to post on reasons we might want to spend a lot more resources containing a pandemic than its immediate health risk might warrant. Off the top of my head, that could include the following.

Practicing

Humanity rarely gets the opportunity to verify how well prepared it is to contain a pandemic. Even if a pandemic is not deemed to be a high risk to our civilisation, acting as if it were could still be beneficial to get more evidence on whether we're sufficiently prepared to contain pandemics in general (including more dangerous ones that could happen in the future).

Avoiding an endemic state

To reduce the chance that the virus becomes endemic. If the coronavirus became endemic, it would multiply the burden of seasonal flus.

Avoiding long term damages

It might be hard to know the long term impact of having had the coronavirus given it's a new virus, but we can still have some idea. For example, Connor Flexman suggests that it might significantly increase the risk of long term lung issues and fatigue. For more information, check out: Will nCoV survivors suffer lasting disability at a high rate?

Avoiding mutations

The more widespread a virus is, the more likely it is for the virus to mutate into a more dangerous form. Containing a virus reduces this risk.

As a matter of fact, it seems like this did actually happen with the coronavirus:

> Chinese scientists claim that the #COVID19 virus has probably genetically mutated into two variants: S-cov & L-cov. They believe the L-cov is more dangerous, featuring higher transmissibility and inflicting more harm on the human respiratory system.

(source: Global Times)

Comment by mati_roy on Mati_Roy's Shortform · 2020-02-28T16:26:10.150Z · score: 3 (2 votes) · EA · GW

EtA: Moved to my EA project idea list

Group to discuss information hazard

Context: Sometimes I come up with ideas that are very likely information hazards, and I don't share them. Most of the time I come up with ideas that are very likely not information hazards.

Problem: But also, sometimes, I come up with ideas that are in-between, or that I can't tell whether I should share or not.

Solution hypothesis: I propose creating a group with which one can share such ideas to get external feedback on them and/or about whether they should be shared more widely or not. To reduce the risk of information leaking from that group, the group could:

  • be kept small (5 participants?)
    • note: there can always be more such groups
  • be selective
    • exam on information hazard / on Bostrom's paper on the topic
      • notably: some classes of hazard should definitely not be shared in that group, and this should be made explicit
    • questionnaire on how one handled information in the past
      • notably: secrets
    • have a designated member share a link on an applicant's Facebook wall with rewards for reporting antisocial behavior
    • pledge to treat the information with the utmost seriousness
    • commit to give feedback for each idea (to have a ratio of feedback / exposed person of 1)

Questions: What do you think of this idea? How can I improve this idea? Would you be interested in helping with or joining such a group?

Possible alternatives:

  • Info-hazard buddy: ask a trusted EA friend if they want to give you feedback on possible info-hazardy ideas
    • warning: some info-hazard ideas (/idea categories) should NOT be thought about more. some info-hazard can be personally damaging to someone (ask for clear consent before sharing them, and consider whether it's really useful to do so).
    • note: yeah I think I'm going to start with this first
Comment by mati_roy on Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children · 2020-02-28T04:13:31.378Z · score: 2 (2 votes) · EA · GW
> your primary concern should be yourself and your potential co-parent's happiness because that will be massively influenced by your decision - potentially either way, depending on your preferences

why do you believe that? (intuition is a fine answer, but I think it should be made explicit)

> Managing own happiness and well-being is an important part of maximizing total aggregate well-being

do you mean because being more happy will directly increase the total amount of happiness, or do you mean being happy will make you more effective at work? (I think it's important to disentangle both of those)

> Then, as a distant second you should consider the net positive impact your children would experience through living their own lives.

why "as a distant second"?

> The magnitude of their impact on the climate is likely to be much, much smaller than any of the three other factors I have raised.

how do you know that?

it seems to me like that's a lot of claims that aren't backed by anything

might also be worth considering the other indirect impact of having children

Comment by mati_roy on Has anyone done an analysis on the importance, tractability, and neglectedness of keeping human-digestible calories in the ocean in case we need it after some global catastrophe? · 2020-02-19T02:28:05.407Z · score: 1 (1 votes) · EA · GW

I haven't tried to. is there any section answering my question? (or) are you implying we shouldn't care about reducing the food supply in the oceans given the number of alternatives we have?

Comment by mati_roy on Has anyone done an analysis on the importance, tractability, and neglectedness of keeping human-digestible calories in the ocean in case we need it after some global catastrophe? · 2020-02-17T07:59:40.780Z · score: 3 (2 votes) · EA · GW

Some information I found

<<

Could the oceans feed us?

If you looked at the amount of fish that we currently eat, it’s just a tiny fraction of the human diet. You can expand that much more without wiping out all the fisheries. If you have significant climate change, that will result in more upwelling [seawater rise from the depth of the ocean to the surface], which will be like fertilizing the ocean surface, and you get more fish. Similarly we can purposely fertilize the ocean in order to get more fish. So then we have enough fish to feed everyone. How do you catch it all?

Then we started to look into how many ships exist—and if we converted all of them to fishing vessels, would that be enough in order to get enough fish harvested to meet demand? It turned out you end up with problems such as round trip distance. You can’t have little fishing boats go out and fish and then drive all the way back. The solution to that is ship-to-ship transfers of fish, which luckily, they already do now. So our fish solution is actually one of the better ones under certain circumstances. [But] it won’t work for everything. You still need some light.

>>

http://nautil.us/issue/101/in-our-nature/what-to-eat-after-the-apocalypse

Comment by mati_roy on Who should give sperm/eggs? · 2020-02-15T06:03:19.971Z · score: 1 (1 votes) · EA · GW

I just found this survey https://www.wearedonorconceived.com/uncategorized/we-are-donor-conceived-2018-survey-results/ thanks to your comment. thank you!

Comment by mati_roy on A list of EA-related podcasts · 2020-01-17T14:41:18.396Z · score: 2 (2 votes) · EA · GW

Alignment Newsletter Podcast: http://alignment-newsletter.libsyn.com/

Comment by mati_roy on Concrete project lists · 2020-01-06T00:58:39.414Z · score: 2 (2 votes) · EA · GW

Researchy Projects for Aspiring EAs - a guide by Edo: https://docs.google.com/document/d/1QO6d6mL5ZRJFqmrcIKmSxhPaQHUkYGXB06phHfkNUYI/edit#heading=h.k8xuawhn91w

Comment by mati_roy on Cost-Effectiveness of RC Forward · 2019-12-11T18:07:06.889Z · score: 1 (1 votes) · EA · GW

external backlink: https://www.facebook.com/groups/canadaeffectivealtruism/permalink/2653695294666565/

Comment by mati_roy on Mati_Roy's Shortform · 2019-12-05T16:31:52.677Z · score: 1 (1 votes) · EA · GW

x-post with https://causeprioritization.org/Democracy (see wiki for latest version)

Epistemic status: intuition; tentative | Quality: quick write-up | Created: 2019-12-05 | Acknowledgement: Nicolas Lacombe for discussions on tracking political promises

Assumption: more democracy is valuable; related: The rules for rulers, 10% Less Democracy

Non-denominational volunteering opportunities in politics

Tracking political promises

Polimeter is a platform that lets you track how well politicians keep their promises. This likely increases the incentive for politicians to be honest. This is useful because if citizens don't know how their vote will translate into policies, it's harder for them to vote meaningfully. Plus, citizens are likely to prefer more honest politicians, all else equal. The platform lets you create new trackers as well as contribute to existing ones.

Voting reform

The Center for Election Science is working to implement an approval voting mechanism in more jurisdictions in the US. They work with volunteers with various expertise; see: https://www.electionscience.org/take-action/volunteer/.

National Popular Vote Interstate Compact

National Popular Vote is promoting the National Popular Vote Interstate Compact which aims to make the electoral vote reflect the popular vote. They are looking for volunteers; see https://www.nationalpopularvote.com/volunteer.

Comment by mati_roy on Technical AGI safety research outside AI · 2019-11-22T05:03:46.499Z · score: 1 (1 votes) · EA · GW

I added this to my list of lists of open problems in AI safety: https://bit.ly/AISafetyOpenProblems