Posts

The community's conception of value drifting is sometimes too narrow 2020-09-04T02:00:10.326Z · score: 26 (13 votes)
How does change in the cost of security change the world? 2020-08-30T21:53:35.555Z · score: 5 (2 votes)
If a poverty alleviation intervention has a positive ROI, (why) isn't anyone lending money for them? 2020-08-24T23:23:33.374Z · score: 24 (10 votes)
At what level of risk of birth defect is it not worth (trying) having a (biological) child for the median person? 2020-08-03T20:06:47.134Z · score: -3 (3 votes)
Can you have an egoistic preference about your own birth? 2020-07-16T03:14:31.452Z · score: 5 (2 votes)
[link] Biostasis / Cryopreservation Survey 2020 2020-05-16T07:40:17.922Z · score: 15 (4 votes)
Which norms would you like to see on the EA Forum? 2020-05-10T21:41:42.826Z · score: 5 (2 votes)
How much slack do people have? 2020-04-27T03:37:48.467Z · score: 10 (6 votes)
What are high-leverage interventions to increase/decrease the global communication index? 2020-04-21T18:09:31.429Z · score: 12 (3 votes)
Could we have a warning system to warn us of imminent geomagnetic storms? 2020-04-04T15:35:50.828Z · score: 4 (2 votes)
(How) Could an AI become an independent economic agent? 2020-04-04T13:38:52.935Z · score: 15 (6 votes)
What fraction of posts submitted on the Effective Altruism Facebook group gets accepted by the admins? 2020-04-02T17:15:49.009Z · score: 4 (2 votes)
Why do we need philanthropy? Can we make it obsolete? 2020-03-27T15:47:25.258Z · score: 19 (8 votes)
Are selection forces selecting for or against altruism? Will people in the future be more, as, or less altruistic? 2020-03-27T15:24:36.201Z · score: 10 (7 votes)
How could we define a global communication index? 2020-03-25T01:47:50.731Z · score: 10 (3 votes)
What promising projects aren't being done against the coronavirus? 2020-03-22T03:30:02.970Z · score: 5 (3 votes)
Are countries sharing ventilators to fight the coronavirus? 2020-03-17T07:11:40.243Z · score: 9 (3 votes)
What are EA project ideas you have? 2020-03-07T02:58:53.338Z · score: 17 (6 votes)
What medium/long term considerations should we take into account when responding to the coronavirus' threat? 2020-03-05T10:30:47.153Z · score: 5 (2 votes)
Has anyone done an analysis on the importance, tractability, and neglectedness of keeping human-digestible calories in the ocean in case we need it after some global catastrophe? 2020-02-17T07:47:45.162Z · score: 9 (8 votes)
Who should give sperm/eggs? 2020-02-08T05:13:43.477Z · score: 4 (13 votes)
Mati_Roy's Shortform 2019-12-05T16:31:52.494Z · score: 4 (2 votes)
Crohn's disease 2018-11-13T16:20:42.200Z · score: -12 (19 votes)

Comments

Comment by mati_roy on Plan for Impact Certificate MVP · 2020-10-04T19:28:20.120Z · score: 2 (2 votes) · EA · GW

Awesome! Documented on Moral economics -- Cause Prioritisation Wiki

Comment by mati_roy on Mati_Roy's Shortform · 2020-09-21T17:25:14.492Z · score: 4 (3 votes) · EA · GW

Is there a name for a moral framework where someone cares more about the moral harm they directly cause than other moral harm?

I feel like a consequentialist would care about the harm itself whether or not it was caused by them.

And a deontologist wouldn't act in a certain way even if it meant they would act that way less in the future.

Here's an example (it's just a toy example; let's not argue whether it's true or not).

A consequentialist might eat meat if they can use the saved resources to make 10 other people vegans.

A deontologist wouldn't eat honey even if they knew that abstaining meant they would crack in the future and start eating meat.

If you care much more about the harm caused by you, you might act differently from both of them. You wouldn't eat meat to make 10 other people vegan, but you might eat honey to avoid cracking later and starting to eat meat.

A deontologist is like someone adopting that framework, but with an empty individualist approach. A consequentialist is like someone adopting that framework, but with an open individualist approach.

I wonder if most self-labelled deontologists would actually prefer this framework I'm proposing.

EtA: I'm not sure how well "directly caused" can be cashed out. Does anyone have a model for that?

x-post: https://www.facebook.com/groups/2189993411234830/ (post currently pending)

Comment by mati_roy on Which norms would you like to see on the EA Forum? · 2020-09-20T08:45:42.335Z · score: 12 (4 votes) · EA · GW

I wish people x-posting between LessWrong and the EA Forum encouraged users to comment on only one of them, to centralize the comments. And to increase the probability that people follow this suggestion, for posts (which take a long time to read anyway, compared to the time it takes to click a link), I would just put the post on one of the two and a link to it on the other.

Comment by mati_roy on Mati_Roy's Shortform · 2020-09-17T23:22:29.916Z · score: 1 (1 votes) · EA · GW

Policy suggestion for countries with government-funded health insurance or healthcare: People using death-with-dignity can receive part of the money that is saved by the government if applicable.

Those savings could be used to pay for cryonics, among other things.

Comment by mati_roy on The community's conception of value drifting is sometimes too narrow · 2020-09-08T19:39:39.989Z · score: 1 (1 votes) · EA · GW
EA isn't (supposed to be) dogmatic, and hence doesn't have clearly defined values.

I agree.

I think this is a big reason why people have chosen to focus on behavior and community involvement.

Community involvement is just instrumental to the goals of EA movement building. I think the outcomes we want to measure are things like career and donations. We also want to measure things that are instrumental to this, but I think we should keep those separate.

Related: my comment on "How have you become more (or less) engaged with EA in the last year?"

Comment by mati_roy on How have you become more (or less) engaged with EA in the last year? · 2020-09-08T19:30:26.443Z · score: 7 (5 votes) · EA · GW

I think it would be good to differentiate things that are instrumental to doing EA and things that are doing EA.

Ex.: Attending events and reading books is instrumental. Working and donating money is directly EA.

I would count those separately. Engagement in the community is just instrumental to the goal of EA movement building. If we entangle both in our discussions, we might end up with people attending a bunch of events and reading a lot online, but without ever producing value (for example).

Although maybe it does produce value in itself, because they can do movement building themselves and become better voters for example. And focusing a lot on engagement might turn EA into a robust superorganism-like entity. If that's the argument, then that's fine I guess.

Somewhat related: The community's conception of value drifting is sometimes too narrow.

Comment by mati_roy on Suggest a question for Peter Singer · 2020-09-06T20:53:09.633Z · score: 1 (1 votes) · EA · GW

What are your egoistic preferences? (ex.: hedonism peak, hedonism intensity times length, learning, life extension, relationships, etc.)

Comment by mati_roy on Suggest a question for Peter Singer · 2020-09-06T20:52:30.218Z · score: 11 (4 votes) · EA · GW

(why) do you focus on near-term animal welfare and poverty alleviation?

Comment by mati_roy on The community's conception of value drifting is sometimes too narrow · 2020-09-06T20:27:41.500Z · score: 1 (1 votes) · EA · GW

yeah, 'shift' or 'change' work better as neutral terms. other suggestion: 'change in revealed preferences'

Comment by mati_roy on The community's conception of value drifting is sometimes too narrow · 2020-09-05T18:37:37.190Z · score: 1 (1 votes) · EA · GW

I see, thanks!

Comment by mati_roy on The community's conception of value drifting is sometimes too narrow · 2020-09-05T18:34:38.525Z · score: 1 (1 votes) · EA · GW

Ok yeah, my explanations didn't make the connection clear. I'll elaborate.

I have the impression "drift" has the connotation of an uncontrolled, and therefore undesirable, change. It has a negative connotation. People don't want to value drift. If you call a rational surface-value update "value drift", it could confuse people and make them less likely to make those updates.

If you use 'value drift' only to refer to EA-value drift, it also sneaks in an implication that other value changes are not "drifts". Language shapes our thoughts, so this usage could modify one's model of the world in such a way that they are more likely to become more EA than they actually value.

I should have been more careful about attributing certain intentions to you in my previous comment though. But I think some EAs have this intention. And I think using the word that way has this consequence whether or not that's the intent.

Comment by mati_roy on The community's conception of value drifting is sometimes too narrow · 2020-09-05T18:16:16.506Z · score: 1 (1 votes) · EA · GW

This seems reasonable to me. I do use the shortcut myself in various contexts. But I think using it on someone when you know it's because they have different values is rude.

I use value drift to refer to fundamental values. If your surface-level values change because you introspected more, I wouldn't call it a drift. Drift has a connotation of not being in control. Maybe I would rather call it value enlightenment.

Comment by mati_roy on The community's conception of value drifting is sometimes too narrow · 2020-09-05T02:14:12.491Z · score: 1 (1 votes) · EA · GW

I think another term would better fit your description. Maybe "executive failure".

I don't see it as a micro death

Me neither. Nor do I see it as a value drift though.

Comment by mati_roy on The community's conception of value drifting is sometimes too narrow · 2020-09-04T20:02:26.759Z · score: 1 (1 votes) · EA · GW

If they have the same values, but just became worse at fulfilling them, then it's more something like "epistemic drift", although I would probably discourage using that term.

On the other hand, if they started caring more about homeless people intrinsically for some reason, then it would be a value drift. But they wouldn't be "less effective"; they would, presumably, be as effective, but just at a different goal.

Comment by mati_roy on The community's conception of value drifting is sometimes too narrow · 2020-09-04T19:57:55.717Z · score: 3 (2 votes) · EA · GW

Other thoughts:

  • It seems epistemically dangerous to discourage such value enlightenment, as it might prevent us from becoming more enlightened.
  • It seems pretty adversarial to manipulate people into not becoming more value-enlightened, and allowing this at a norm level seems net negative from most people's point of view.
  • But maybe people want to act more altruistically and trustingly in a society where others also espouse those values. In that case, surface-level values could change in a good way for almost everyone without any fundamental value drift. That is also a useful phenomenon to study, so it's probably fine to also call this "value drift".

Comment by mati_roy on The community's conception of value drifting is sometimes too narrow · 2020-09-04T19:51:39.102Z · score: 1 (1 votes) · EA · GW

Thanks!

I agree with your clarifications.

levels of engagement with the EA community reduces drop-out rates

"drop-out" meaning 0 engagement, right? so the claim has the form of "the more you do X, the less likely you are of stopping doing X completely". it's not clear to me to which extent it's causal, but yeah, still seems useful info!

I think most of the other 9 areas you mention seem like they already receive substantial non-EA attention

oh, that's plausible!

The post Reducing long-term risks from malevolent actors is arguably one example of EAs considering efforts that would have that sort of scope and difficulty and that would potentially, in effect, increase altruism

Good point! In my post, I was mostly thinking at the individual level. Looking at the population level and over a longer time horizon, I should probably add other possible interventions such as:

  • Incentives to have children (political, economic, social)
  • Immigration policies
  • Economic system
  • Genetic engineering
  • Dating dynamics
  • Cultural evolution

Comment by mati_roy on The community's conception of value drifting is sometimes too narrow · 2020-09-04T19:19:58.986Z · score: 1 (1 votes) · EA · GW

Thanks.

I think "negative value drift" is still too idiosyncratic; it doesn't say negative for whom. For the value holder, any value drift generally has negative consequences.

I (also) think it's a step in the right direction to explicitly state that a post isn't trying to define value drift, but just provide empirical info. Hopefully my post will have provided that definition, and people will now be able to build on this.

Comment by mati_roy on Is value drift net-positive, net-negative, or neither? · 2020-09-04T02:26:37.628Z · score: 1 (1 votes) · EA · GW

if by "something good" you mean "something altruistic", then yes I agree. it's good for someone when others become altruistic towards them.

Comment by mati_roy on Is value drift net-positive, net-negative, or neither? · 2020-09-03T23:38:11.579Z · score: -1 (2 votes) · EA · GW

It's a convergent instrumental goal to preserve one's values. If you change your goals / values, you will generally achieve / fulfill them less.

Value-drifting someone else might be positive for you, at least if you only consider the first-order consequences, but it generally seems pretty unvirtuous and uncooperative to me. A world where value-drifting people is socially acceptable is probably worse than a world where it's not.

Comment by mati_roy on Is value drift net-positive, net-negative, or neither? · 2020-09-03T23:34:39.241Z · score: 1 (1 votes) · EA · GW

"good" and "bad" are usually use to make a value-judgement; so saying "better values" is a confusion. it's *state of affairs* that are good/bad *according* to values.

Comment by mati_roy on How to use the Forum · 2020-09-03T20:25:56.033Z · score: 1 (1 votes) · EA · GW
Click the “Edit block” button to the left of the text box to add an image, a table, a horizontal line, or a math block.

in case anyone else is wondering, the new way to do it is to double-click on text and select the image icon on the right of the menu that appears

Comment by mati_roy on How does change in the cost of security change the world? · 2020-08-30T21:58:11.144Z · score: 1 (1 votes) · EA · GW

Just some thoughts.

If stealing got easier, to compensate we might need to increase punishments, decrease punishment delays, increase surveillance (decrease privacy), increase spending on security, decrease long-term planning, decrease ownership of some types of assets (and vice versa), and decrease exchanges / agreements / contracts.

Comment by mati_roy on If a poverty alleviation intervention has a positive ROI, (why) isn't anyone lending money for them? · 2020-08-28T07:25:21.046Z · score: 1 (1 votes) · EA · GW
if you don't believe me, look at the behavior of the stock market

that's a very unpersuasive argument. I hope you at least consistently beat the market if you believe that. or maybe you believe you're also irrational like the market? in which case, we don't know if the "poverty market" is really underfunded

Possible answers include

I'm pretty skeptical of any of those reasons, but maybe it only takes one, so mayyybe. But if you believe that, then it seems like obviously the best poverty alleviation intervention: funding the most cost-effective poverty interventions while making higher-than-market returns, which would allow you to fund even more such interventions, etc., and maybe even inspire others to copy you until the market inefficiency is fixed. That would seem to me like a lot of utilons up for grabs.
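To illustrate the compounding claim, here's a toy sketch (the numbers and function names are my own illustrative assumptions, not anything from the thread): if such positive-ROI loans really existed and were reliably repaid, a revolving fund could cumulatively deploy far more capital than an equal-sized one-time grant.

```python
# Toy comparison (illustrative assumptions): a one-time grant vs. a
# revolving loan fund that re-lends principal plus an above-market
# return each period.

def grant_deployed(budget: float) -> float:
    """A grant deploys its budget exactly once."""
    return budget

def loan_fund_deployed(budget: float, roi: float, periods: int) -> float:
    """Cumulative capital deployed by a revolving fund whose loans are
    repaid with return `roi` each period and immediately re-lent."""
    deployed = 0.0
    capital = budget
    for _ in range(periods):
        deployed += capital   # lent out this period
        capital *= 1 + roi    # repaid with return, available next period
    return deployed

print(grant_deployed(1_000_000))                # 1,000,000 deployed once
print(loan_fund_deployed(1_000_000, 0.10, 10))  # ~15,937,425 deployed cumulatively
```

This of course only holds under the comment's premise that the positive ROI is real and repayments are reliable.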

Comment by mati_roy on If a poverty alleviation intervention has a positive ROI, (why) isn't anyone lending money for them? · 2020-08-28T07:14:24.161Z · score: 3 (2 votes) · EA · GW

Right! Except for anti-poverty interventions targeting public goods, which would be underfunded not because of a market inefficiency, but a political one, most likely AFAICT. And there are also possible explanations other than market failures, such as a high fixed cost for loans, maybe. But it's still evidence, in either case, that anti-poverty interventions don't have that high a ROI.

Comment by mati_roy on If a poverty alleviation intervention has a positive ROI, (why) isn't anyone lending money for them? · 2020-08-28T07:05:16.080Z · score: 1 (1 votes) · EA · GW

Thanks for the additional info!

Comment by mati_roy on Mati_Roy's Shortform · 2020-08-27T06:44:36.406Z · score: 2 (2 votes) · EA · GW

If your animal companion kills a human unlawfully, there should be the option for you to pay to put zir in jail instead of having zir euthanized.

Posting here because I think maybe having a strong legal framework to protect animals in general might be EA(-ish).

Comment by mati_roy on Mati_Roy's Shortform · 2020-08-26T23:52:28.156Z · score: 7 (2 votes) · EA · GW

I wonder about the risks of optimising for persuasive arguments over accurate arguments. I feel like it's a negative-sum game, and will result in everyone (most people) having a worse model of the world, and that we should have a strong norm against that. Some people have done this for arguments for donating, so maybe you want to update a bit against donating to balance this out: https://schwitzsplinters.blogspot.com/2020/06/contest-winner-philosophical-argument.html

On the other hand, I sometimes want to pay people to change my mind to incentivize finding evidence. A good example is paying for arguments that lead someone to cancel their cryonics membership, hence making them save money: https://www.lesswrong.com/posts/HxGRCquTQPSJE2k9g/i-will-pay-usd500-to-anyone-who-can-convince-me-to-cancel-my Although if I did that, I would likely also have a bounty for arguments in favor of spending resources on life-extension interventions.

So maybe 2 crucial differences are:

a) whether the recipient of the argument is also the one paying for it or otherwise consenting / aware of what's going on

b) whether there's a bounty on both sides

Comment by mati_roy on If a poverty alleviation intervention has a positive ROI, (why) isn't anyone lending money for them? · 2020-08-26T21:18:59.024Z · score: 2 (2 votes) · EA · GW

Thanks for your answer.

Human biases to discount future goods/rewards would also play a role.

AFAICT, this is perfectly balanced as both the reward *and* the cost are in the future (given it's a debt).

Comment by mati_roy on If a poverty alleviation intervention has a positive ROI, (why) isn't anyone lending money for them? · 2020-08-26T21:13:09.433Z · score: 1 (1 votes) · EA · GW

Thanks for your answer!

Lenders have other good opportunities, and maybe discount future returns enough that they would rather engage in spatial rather than temporal arbitrage.

If that's all it was (which it might not be, as you said), then that would be a good opportunity for EA money, it seems to me.

Comment by mati_roy on If a poverty alleviation intervention has a positive ROI, (why) isn't anyone lending money for them? · 2020-08-25T03:49:01.598Z · score: 1 (1 votes) · EA · GW
banks in the third world would be able to make a personal loan that could only be used by that person for deworming

if they could, I'd be curious to know what the ROI of deworming would be, taking into account the fixed cost of debt

I also wonder if there's a cost-effective way to reduce that fixed cost; I guess not, otherwise it would already have been done; too bad, as that would have solved a lot of problems

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-22T22:38:34.816Z · score: 1 (1 votes) · EA · GW

Ray Taylor says:

I'm gonna take flak for this, but the majority of anti-vaxxers are women, and have 2 things in common:
- a negative experience with a doctor in the 2 years preceding their initial interest in anti-vaxx, where they didn't feel their concerns were taken seriously (there are refs for this)
- fear of guilt for possible future harms caused by acts of commission more than acts of omission (not sure if there are refs for that, but i have seen it in several dialogues on and offline)
One thing seems to counter anti-vaxx well: a trusted GP

https://www.facebook.com/mati.roy.09/posts/10158690001894579

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-22T22:37:51.851Z · score: 1 (1 votes) · EA · GW

Seth Nicholson says:

Assuming this is a factor, maybe improving society's epistemic norms would also help? Like, making it clear that tu quoque is not valid reasoning and that people shouldn't be penalized for noticing and admitting to irrational fears without rationalizing them.
(I'm saying this because if anything can be said to be a trigger for me, it's needles. When I tell people what happened to make that the case, they - my therapist included - tend to say I've given them a new nightmare. I avoided getting immunizations for several years because of it. And yet it seems really damn easy to notice the real reason for that and recognize that it shouldn't inform my normative judgments. Although, maybe it's harder to do that if there's no particular incident that obviously caused the phobia?)

https://www.facebook.com/mati.roy.09/posts/10158690001894579

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-22T22:36:46.402Z · score: 1 (1 votes) · EA · GW

Matthew Barnett says:

I’ve looked into this before and I’m pretty sure the expected harm from an adverse reaction to some (many?) vaccines outweighs the expected harm from actually getting the disease it protects against (because the chance of an adverse reaction from eg. the polio vaccine is much higher than the chance you’ll actually get polio). I’d add that as another reason why people would be against personal vaccination, and it’s understandable.

https://www.facebook.com/mati.roy.09/posts/10158690001894579

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-22T22:35:34.976Z · score: 1 (1 votes) · EA · GW
Have you read any interviews with people who don't like vaccines, or visited any of the websites/message boards where they explain their beliefs?

No, I'm uninformed. I added in the OP "epistemic status: I don't really know what I'm talking about" :)

do you think there's a large population of these people who use other beliefs to hide their true beliefs, or don't actually realize what their true beliefs are?

I don't know.

This seems like a lot of guesswork when, in my experience, people who don't like vaccines are often quite vocal about their beliefs and reasoning.

Thanks for the input

Comment by mati_roy on Consider raising IQ to do good · 2020-07-22T22:28:52.726Z · score: 5 (3 votes) · EA · GW

The 3 images are now broken

Comment by mati_roy on Consider raising IQ to do good · 2020-07-22T22:27:33.423Z · score: 1 (1 votes) · EA · GW

Documented here: https://causeprioritization.org/Intelligence_enhancement

I think we should have a tag for this cause area (i.e. cognitive enhancement?)

Comment by mati_roy on Can you have an egoistic preference about your own birth? · 2020-07-18T23:02:09.832Z · score: 1 (1 votes) · EA · GW

instrumental to what purpose?

Comment by mati_roy on The EA movement is neglecting physical goods · 2020-07-18T22:51:23.566Z · score: 3 (3 votes) · EA · GW
Every once in a while, I see someone write something like "X is neglected in the EA Community". I dislike that. The part about "in the EA Community" seems almost always unnecessary, and a reflection of a narrow view of the world. Generally, we should just care about whether X is neglected overall.

https://forum.effectivealtruism.org/posts/fJyR3Lh9uf3Z6spsi/mati_roy-s-shortform?commentId=cYFYEArwGshiDb8pu

Comment by mati_roy on Can you have an egoistic preference about your own birth? · 2020-07-17T19:29:50.022Z · score: 1 (1 votes) · EA · GW

follow-up question

imagine creating an image of a mind without running it (so it has experienced 0 minutes, but is still there; you could imagine creating a mind in biostasis, or a digital mind on pause)

would most self-labelled preference utilitarians care about the preferences of that mind?

if the mind wants and does stay on pause, but also has preferences about the outside world, do those preferences have moral weight? to the same extent as the preferences of dead people?

Comment by mati_roy on Can you have an egoistic preference about your own birth? · 2020-07-17T19:29:08.362Z · score: 1 (1 votes) · EA · GW

I do think it's possible for a mind to not want things to happen to it. I guess it could also have a lexical preference to avoid the first experience more than the subsequent ones, which I guess would be practically equivalent to not wanting to be born (except for the edge case of being okay with creating an initial image of the mind if it's not computed further)

Comment by mati_roy on Can you have an egoistic preference about your own birth? · 2020-07-17T19:28:49.203Z · score: 1 (1 votes) · EA · GW

Seth Nicholson wrote as a comment on Facebook:

I don't think this argument works. "I have a preference for someone to travel back in time and retgone me" is perfectly coherent. It is, as far as we know, not physically possible, but why should that matter? People have preferences for lots of things that they can't possibly achieve. Immortality is a classic.

I responded:

I don't think "time" is fundamental to the Universe. but let's say it is. by some "meta-time" you will (in the future) go into the past. you still have existed before you went back in time.

Comment by mati_roy on Can you have an egoistic preference about your own birth? · 2020-07-16T03:18:32.253Z · score: 7 (2 votes) · EA · GW

It seems to me like you can't.

I think I can imagine someone that doesn't want to live, and so it might end up equivalent to wanting to die as soon as they are born. But in that case, living 2 minutes would be twice as bad as living 1 minute. I don't see the "first minute" / the birth as having a qualitative difference. I think it would be possible in principle to create a mind that cares more about the first minute, but that still wouldn't literally be a preference about the birth itself. And in any case, I doubt humans have such preferences.

Preferences seem to be about how you want the/your future to be (or how your past self wished its future would have been). But being born isn't something that happens *to* you. It happens, and *then* things start happening to you.

You could have an altruistic preference of not creating other minds, but it wouldn't be an egoistic preference / it doesn't directly affect you personally.

Related thought experiment

I create a mind that is (otherwise) causally disconnected from me (and no other minds exist). That mind wants to create a flower, but won't be able to. It's their only preference. They don't have a preference about their existence.

Is it bad to have created that mind?

It doesn't personally affect anyone. And they personally don't care about having been created (again: they don't have any preference about their existence). So is it bad to have created them?

See related thread on the Effective Altruism Polls Facebook group.

Comment by mati_roy on EA Focusmate Group Announcement · 2020-07-12T21:30:06.768Z · score: 2 (2 votes) · EA · GW

I currently have a daily Focusmate session for 2 hours. I prefer Focusmate sessions longer than 1 hour, and with the same person. So if anyone is interested in having a recurring session of 1 to 8 hours, let me know.

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-12T15:50:16.919Z · score: 1 (1 votes) · EA · GW

EtA: epistemic status: I don't really know what I'm talking about

I had a friend post on Facebook (I can't find who it was anymore) and a friend in person (Haydn Thomas-Rose) tell me that maybe some/most antivaxxers were actually just afraid of needles. In which case, developing alternative vaccination methods, like oral vaccines, might be pretty useful.

Alternative hypotheses:

  • antivaxxers mostly don't like that something stays in their body, and that's what differentiates vaccines from other medicine
  • antivaxxers are suspicious of the claim that *everyone* needs vaccines, and that's what differentiates vaccines from other medicine
  • antivaxxers are right

Of course, it's probably a combination of factors, but I wonder which are the major ones.

Also, even if the hypothesis is true, I wouldn't expect people to know the source of their belief.

I wonder if we could test this hypothesis short of developing an alternative method. Maybe not. Maybe you can't just tell one person that you have an oral vaccine and have them become pro-vaccine on the spot; they might instead need broader social validation and time to transition mentally.

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-12T00:19:27.746Z · score: 4 (3 votes) · EA · GW

hummm. I don't know about your specific example; I would need an argument for why it's better to have this "in the EA community". but yeah, there are things that can be "neglected in the EA community" if they are specific to the community. like someone to help resolve conflicts within the community, for example. so thanks for the clarification. I should specify that the 'X' in my original comment was an element of the general set {Interventions, Causes}, and not about the health of the community.

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-12T00:15:11.957Z · score: 1 (1 votes) · EA · GW

Sometimes, yeah! Although, I think people overuse "more research is needed"

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-10T15:31:55.792Z · score: 3 (2 votes) · EA · GW

Although, maybe the EA Community has a certain prestige that makes it a good position from which to propagate ideas through society. So if, for example, the EA Community broadly acknowledged anti-aging as an important problem, even without working much on it, it might get people to work on it who would otherwise have worked on something less important. So in that sense it might make sense. But still, I would prefer if it were phrased more explicitly as such, like "The EA Community should acknowledge X as an important problem".

Posted a similar version of this comment here: https://www.facebook.com/groups/effective.altruists/permalink/3166557336733935/?comment_id=3167088476680821&reply_comment_id=3167117343344601

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-10T15:02:47.706Z · score: 8 (7 votes) · EA · GW

Every once in a while, I see someone write something like "X is neglected in the EA Community". I dislike that. The part about "in the EA Community" seems almost always unnecessary, and a reflection of a narrow view of the world. Generally, we should just care about whether X is neglected overall.

Comment by mati_roy on EA Forum feature suggestion thread · 2020-06-30T09:05:34.323Z · score: 1 (1 votes) · EA · GW

Have a nice format for linkpost in shortform.

With the goal of having the forum fully replace the EA subreddit at some point.

Comment by mati_roy on Act utilitarianism: criterion of rightness vs. decision procedure · 2020-06-29T17:35:26.681Z · score: 1 (1 votes) · EA · GW
Newcomb's Trolley Problem
A fortune-teller with a so-far perfect record of predictions has placed either 0 or 100 persons in an opaque box some distance down the track. If the fortune-teller predicted you will pull the lever, killing the 5 people tied to the track, ze will have left the opaque box empty. If the fortune-teller predicted you will NOT pull the lever (avoiding the 5 people tied to the track but still hitting the box), ze will have placed 100 people into the opaque box. Since the fortune-teller has already made zir choice of how many people to put into the opaque box, is it more rational to pull the lever or not?

Accompanying image: https://photos.app.goo.gl/LvaVQye6tJBVqw2k8

Here, the act that fulfils the criterion of rightness is the opposite of the act you will take, whether you pull the lever or not (by the design of the thought experiment).

The decision procedure that maximizes the criterion of rightness is to pull the lever (under a few further assumptions, such as: no quantum mixed strategies, and no other superbeings punishing you for having this decision procedure).
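To make the two claims concrete, here's a minimal sketch (my own illustration, assuming a perfect predictor as the thought experiment stipulates; the function is hypothetical):

```python
# Deaths for each combination of your act and the fortune-teller's
# prediction: pulling the lever kills the 5 tied to the track but
# avoids the box; not pulling hits the box, which contains 100 people
# only if the fortune-teller predicted you would NOT pull.

def deaths(pull: bool, predicted_pull: bool) -> int:
    people_in_box = 0 if predicted_pull else 100
    return 5 if pull else people_in_box

# A perfect predictor means the prediction always matches the act:
print(deaths(pull=True, predicted_pull=True))    # 5   (policy: pull)
print(deaths(pull=False, predicted_pull=False))  # 100 (policy: don't pull)

# Yet holding the already-fixed prediction constant, the other act always
# looks better -- the criterion of rightness picks out the opposite of
# whatever you actually do:
print(deaths(pull=False, predicted_pull=True))   # 0, vs. 5 if you had pulled
print(deaths(pull=True, predicted_pull=False))   # 5, vs. 100 if you hadn't
```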