Posts

At what level of risk of birth defect is it not worth (trying) having a (biological) child for the median person? 2020-08-03T20:06:47.134Z · score: -2 (2 votes)
Can you have an egoistic preference about your own birth? 2020-07-16T03:14:31.452Z · score: 5 (2 votes)
[link] Biostasis / Cryopreservation Survey 2020 2020-05-16T07:40:17.922Z · score: 15 (4 votes)
Which norms would you like to see on the EA Forum? 2020-05-10T21:41:42.826Z · score: 5 (2 votes)
How much slack do people have? 2020-04-27T03:37:48.467Z · score: 10 (6 votes)
What are high-leverage interventions to increase/decrease the global communication index? 2020-04-21T18:09:31.429Z · score: 12 (3 votes)
Could we have a warning system to warn us of imminent geomagnetic storms? 2020-04-04T15:35:50.828Z · score: 4 (2 votes)
(How) Could an AI become an independent economic agent? 2020-04-04T13:38:52.935Z · score: 15 (6 votes)
What fraction of posts submitted on the Effective Altruism Facebook group gets accepted by the admins? 2020-04-02T17:15:49.009Z · score: 4 (2 votes)
Why do we need philanthropy? Can we make it obsolete? 2020-03-27T15:47:25.258Z · score: 19 (8 votes)
Are selection forces selecting for or against altruism? Will people in the future be more, as, or less altruistic? 2020-03-27T15:24:36.201Z · score: 10 (7 votes)
How could we define a global communication index? 2020-03-25T01:47:50.731Z · score: 10 (3 votes)
What promising projects aren't being done against the coronavirus? 2020-03-22T03:30:02.970Z · score: 5 (3 votes)
Are countries sharing ventilators to fight the coronavirus? 2020-03-17T07:11:40.243Z · score: 9 (3 votes)
What are EA project ideas you have? 2020-03-07T02:58:53.338Z · score: 17 (6 votes)
What medium/long term considerations should we take into account when responding to the coronavirus' threat? 2020-03-05T10:30:47.153Z · score: 5 (2 votes)
Has anyone done an analysis on the importance, tractability, and neglectedness of keeping human-digestible calories in the ocean in case we need it after some global catastrophe? 2020-02-17T07:47:45.162Z · score: 9 (8 votes)
Who should give sperm/eggs? 2020-02-08T05:13:43.477Z · score: 4 (13 votes)
Mati_Roy's Shortform 2019-12-05T16:31:52.494Z · score: 4 (2 votes)
Crohn's disease 2018-11-13T16:20:42.200Z · score: -12 (19 votes)

Comments

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-22T22:38:34.816Z · score: 1 (1 votes) · EA · GW

Ray Taylor says:

I'm gonna take flak for this, but the majority of anti-vaxxers are women, and have 2 things in common:
- a negative experience with a doctor in the 2 years preceding their initial interest in anti-vaxx, where they didn't feel their concerns were taken seriously (there are refs for this)
- fear of guilt for possible future harms caused by acts of commission more than acts of omission (not sure if there are refs for that, but i have seen it in several dialogues on and offline)
One thing seems to counter anti-vaxx well: a trusted GP

https://www.facebook.com/mati.roy.09/posts/10158690001894579

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-22T22:37:51.851Z · score: 1 (1 votes) · EA · GW

Seth Nicholson says:

Assuming this is a factor, maybe improving society's epistemic norms would also help? Like, making it clear that tu quoque is not valid reasoning and that people shouldn't be penalized for noticing and admitting to irrational fears without rationalizing them.
(I'm saying this because if anything can be said to be a trigger for me, it's needles. When I tell people what happened to make that the case, they - my therapist included - tend to say I've given them a new nightmare. I avoided getting immunizations for several years because of it. And yet it seems really damn easy to notice the real reason for that and recognize that it shouldn't inform my normative judgments. Although, maybe it's harder to do that if there's no particular incident that obviously caused the phobia?)

https://www.facebook.com/mati.roy.09/posts/10158690001894579

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-22T22:36:46.402Z · score: 1 (1 votes) · EA · GW

Matthew Barnett says:

I’ve looked into this before and I’m pretty sure the expected harm from an adverse reaction to some (many?) vaccines outweighs the expected harm from actually getting the disease it protects against (because the chance of an adverse reaction from eg. the polio vaccine is much higher than the chance you’ll actually get polio). I’d add that as another reason why people would be against personal vaccination, and it’s understandable.

https://www.facebook.com/mati.roy.09/posts/10158690001894579

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-22T22:35:34.976Z · score: 1 (1 votes) · EA · GW

Have you read any interviews with people who don't like vaccines, or visited any of the websites/message boards where they explain their beliefs?

No, I'm uninformed. I added in the OP "epistemic status: I don't really know what I'm talking about" :)

do you think there's a large population of these people who use other beliefs to hide their true beliefs, or don't actually realize what their true beliefs are?

I don't know.

This seems like a lot of guesswork when, in my experience, people who don't like vaccines are often quite vocal about their beliefs and reasoning.

Thanks for the input

Comment by mati_roy on Consider raising IQ to do good · 2020-07-22T22:28:52.726Z · score: 3 (2 votes) · EA · GW

The 3 images are now broken

Comment by mati_roy on Consider raising IQ to do good · 2020-07-22T22:27:33.423Z · score: 1 (1 votes) · EA · GW

Documented here: https://causeprioritization.org/Intelligence_enhancement

I think we should have a tag for this cause area (i.e. cognitive enhancement?)

Comment by mati_roy on Can you have an egoistic preference about your own birth? · 2020-07-18T23:02:09.832Z · score: 1 (1 votes) · EA · GW

instrumental to what purpose?

Comment by mati_roy on The EA movement is neglecting physical goods · 2020-07-18T22:51:23.566Z · score: 3 (3 votes) · EA · GW

Every once in a while, I see someone write something like "X is neglected in the EA Community". I dislike that. The part about "in the EA Community" seems almost always unnecessary, and a reflection of a narrow view of the world. Generally, we should just care about whether X is neglected overall.

https://forum.effectivealtruism.org/posts/fJyR3Lh9uf3Z6spsi/mati_roy-s-shortform?commentId=cYFYEArwGshiDb8pu

Comment by mati_roy on Can you have an egoistic preference about your own birth? · 2020-07-17T19:29:50.022Z · score: 1 (1 votes) · EA · GW

follow-up question

imagine creating an image of a mind without running it (so it has experienced 0 minutes, but is still there; you could imagine creating a mind in biostasis, or a digital mind on pause)

would most self-labelled preference utilitarians care about the preferences of that mind?

if the mind wants and does stay on pause, but also has preferences about the outside world, do those preferences have moral weight? to the same extent as the preferences of dead people?

Comment by mati_roy on Can you have an egoistic preference about your own birth? · 2020-07-17T19:29:08.362Z · score: 1 (1 votes) · EA · GW

I do think it's possible for a mind to not want things to happen to it. I guess it could also have a lexical preference to avoid the first experience more than the subsequent ones, which would be practically equivalent to not wanting to be born (except for the edge case of being okay with creating an initial image of the mind if it's not computed further).

Comment by mati_roy on Can you have an egoistic preference about your own birth? · 2020-07-17T19:28:49.203Z · score: 1 (1 votes) · EA · GW

Seth Nicholson wrote as a comment on Facebook:

I don't think this argument works. "I have a preference for someone to travel back in time and retgone me" is perfectly coherent. It is, as far as we know, not physically possible, but why should that matter? People have preferences for lots of things that they can't possibly achieve. Immortality is a classic.

I responded:

I don't think "time" is fundamental to the Universe. but let's say it is. by some "meta-time" you will (in the future) go in the past. you still have existed before you went back in time.
Comment by mati_roy on Can you have an egoistic preference about your own birth? · 2020-07-16T03:18:32.253Z · score: 7 (2 votes) · EA · GW

It seems to me like you can't.

I think I can imagine someone who doesn't want to live, which might end up being equivalent to wanting to die as soon as they are born. But in that case, living 2 minutes would be twice as bad as living 1 minute; I don't see the "first minute" / the birth as having a qualitative difference. I think it would be possible in principle to create a mind that cares more about the first minute, but that still wouldn't literally be a preference about the birth itself. And in any case, I doubt humans have such preferences.

Preferences seem to be about how you want the/your future to be (or how your past self wished its future would have been). But being born isn't something that happens *to* you. It happens, and *then* things start happening to you.

You could have an altruistic preference of not creating other minds, but it wouldn't be an egoistic preference / it doesn't directly affect you personally.

Related thought experiment

I create a mind that is (otherwise) causally disconnected from me (and no other minds exist). That mind wants to create a flower, but won't be able to. It's their only preference. They don't have a preference about their existence.

Is it bad to have created that mind?

It doesn't personally affect anyone. And they personally don't care about having been created (again: they don't have any preference about their existence). So is it bad to have created them?

See related thread on the Effective Altruism Polls Facebook group.

Comment by mati_roy on EA Focusmate Group Announcement · 2020-07-12T21:30:06.768Z · score: 2 (2 votes) · EA · GW

I currently have a daily Focusmate session for 2 hours. I prefer Focusmate sessions longer than 1 hour, and with the same person. So if anyone is interested in having a recurring session of 1 to 8 hours, let me know.

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-12T15:50:16.919Z · score: 1 (1 votes) · EA · GW

EtA: epistemic status: I don't really know what I'm talking about

A friend posting on Facebook (I can't remember who it was) and a friend in person (Haydn Thomas-Rose) told me that maybe some/most antivaxxers are actually just afraid of needles. In that case, developing alternative vaccination methods, like oral vaccines, might be pretty useful.

Alternative hypotheses:

  • antivaxxers mostly don't like that something stays in their body, and that's what differentiates vaccines from other medicine
  • antivaxxers are suspicious that *everyone* needs vaccines, and that's what differentiates vaccines from other medicine
  • antivaxxers are right

Of course, it's probably a combination of factors, but I wonder which are the major ones.

Also, even if the hypothesis is true, I wouldn't expect people to know the source of their belief.

I wonder if we could test this hypothesis short of developing an alternative method. Maybe not. Maybe you can't just tell one person that you have an oral vaccine and have them become pro-vaccine on the spot; they might rather need broader social validation and time to transition mentally.

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-12T00:19:27.746Z · score: 3 (2 votes) · EA · GW

Hmm, I don't know about your specific example; I would need an argument for why it's better to have this "in the EA community". But yeah, there are things that can be "neglected in the EA community" if they are specific to the community, like someone to help resolve conflicts within the community, for example. So thanks for the clarification. I should specify that the 'X' in my original comment was an element of {Interventions, Causes} in general, and not about the health of the community.

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-12T00:15:11.957Z · score: 1 (1 votes) · EA · GW

Sometimes, yeah! Although I think people overuse "more research is needed".

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-10T15:31:55.792Z · score: 3 (2 votes) · EA · GW

Although maybe the EA Community has a certain prestige that makes it a good position from which to propagate ideas through society. So if, for example, the EA Community broadly acknowledged anti-aging as an important problem, even without working much on it, it might get other people who would otherwise have worked on something less important to work on it. So in that sense it might make sense. But still, I would prefer if it were phrased more explicitly as such, like "The EA Community should acknowledge X as an important problem".

Posted a similar version of this comment here: https://www.facebook.com/groups/effective.altruists/permalink/3166557336733935/?comment_id=3167088476680821&reply_comment_id=3167117343344601

Comment by mati_roy on Mati_Roy's Shortform · 2020-07-10T15:02:47.706Z · score: 7 (6 votes) · EA · GW

Every once in a while, I see someone write something like "X is neglected in the EA Community". I dislike that. The part about "in the EA Community" seems almost always unnecessary, and a reflection of a narrow view of the world. Generally, we should just care about whether X is neglected overall.

Comment by mati_roy on EA Forum feature suggestion thread · 2020-06-30T09:05:34.323Z · score: 1 (1 votes) · EA · GW

Have a nice format for linkpost in shortform.

With the goal of having the forum fully replace the EA subreddit at some point.

Comment by mati_roy on Act utilitarianism: criterion of rightness vs. decision procedure · 2020-06-29T17:35:26.681Z · score: 1 (1 votes) · EA · GW
Newcomb's Trolley Problem

A fortune-teller with a so-far perfect record of predictions has placed either 0 or 100 people in an opaque box some distance down the track. If the fortune-teller predicted you will pull the lever, killing the 5 people tied to the track, ze will have left the opaque box empty. If the fortune-teller predicted you will NOT pull the lever (avoiding the 5 people tied to the track but still hitting the box), ze will have placed 100 people into the opaque box. Since the fortune-teller has already made zir choice of how many people to put into the opaque box, is it more rational to pull the lever or not?

Accompanying image: https://photos.app.goo.gl/LvaVQye6tJBVqw2k8

Here, the act that fulfils the criterion of rightness is the opposite of the act you will take, whether you pull the lever or not (by the design of the thought experiment).

The decision procedure that maximizes the criterion of rightness is to pull the lever (under a few further assumptions, such as: no quantum mixed strategies, no other superbeings punishing you for having this decision procedure).
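
As a rough illustration (not from the original comment), here is a minimal sketch of why the "pull" decision procedure comes out ahead once you assume the fortune-teller's prediction always matches your actual choice; the names deaths_if, PULL, and DONT_PULL are made up for this example:

```python
# Minimal sketch: compare the two decision procedures, assuming the
# fortune-teller's prediction always matches your actual choice.
# Names (deaths_if, PULL, DONT_PULL) are illustrative only.

PULL, DONT_PULL = "pull", "don't pull"

def deaths_if(choice: str) -> int:
    """Deaths given your choice, with a perfect predictor.

    Pull       -> box was predicted empty -> the 5 people on the track die.
    Don't pull -> box was predicted full  -> the trolley hits the box of 100.
    """
    return 5 if choice == PULL else 100

for choice in (PULL, DONT_PULL):
    print(f"{choice}: {deaths_if(choice)} deaths")
# pull: 5 deaths
# don't pull: 100 deaths
# As a decision procedure, pulling minimizes deaths, even though at the
# moment of choice the box's contents are already fixed.
```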

Comment by mati_roy on What are good sofware tools? What about general productivity/wellbeing tips? · 2020-06-25T11:44:53.434Z · score: 2 (2 votes) · EA · GW

Meta: That seems too general of a question to me

Comment by mati_roy on What do you think about bets of the form "I bet you will switch to using this product after trying it"? · 2020-06-25T11:43:14.150Z · score: 1 (1 votes) · EA · GW

I just tried Roam for a few minutes. I also noticed I had tried it already in December 2019.

My current favorite note taking apps are Gdoc and https://coda.io/

Comment by mati_roy on What do you think about bets of the form "I bet you will switch to using this product after trying it"? · 2020-06-25T11:36:54.912Z · score: 1 (1 votes) · EA · GW

I love it! I've been thinking about this for years, and I hope more people try this. The bet would act as insurance for the time I put into exploring the product.

Comment by mati_roy on Can I archive the EA forum on the wayback machine (internet archive, archive.org) ? · 2020-06-25T01:25:12.616Z · score: 1 (1 votes) · EA · GW

Related question on LessWrong: https://www.lesswrong.com/posts/LBXGBgRENPHg2THbr/can-i-archive-content-from-lesswrong-com-on-the-wayback

Comment by mati_roy on Can I archive the EA forum on the wayback machine (internet archive, archive.org) ? · 2020-06-25T01:20:33.883Z · score: 1 (1 votes) · EA · GW

AFAIK, JP Addison is the only dev for the EA Forum, and the main code base is maintained by the LW team: Jim Babcock, Oliver Habryka, Ruby Bloom, Raymond Arnold.

Comment by mati_roy on Can I archive the EA forum on the wayback machine (internet archive, archive.org) ? · 2020-06-25T01:17:36.188Z · score: 2 (2 votes) · EA · GW

2018-11-14, I brought up this problem here: https://forum.effectivealtruism.org/posts/h26Kx7uGfQfNewi7d/welcome-to-the-new-forum?commentId=ydot255NQ3HsT4XT9

2019-08-05, we had an email exchange and came up with the ea.greaterwrong.org solution

Comment by mati_roy on Long-term investment fund at Founders Pledge · 2020-06-24T04:38:23.014Z · score: 1 (1 votes) · EA · GW

That would be great! I would really like to see this! It would likely be among my top recommendations, or my top recommendation, for donations.

Comment by mati_roy on Mati_Roy's Shortform · 2020-06-22T22:05:32.167Z · score: 3 (2 votes) · EA · GW

Not all Effective Altruists are effective altruists and vice versa (where the capitalization means "part of the community" whereas the lowercase version means "does good effectively").

Asking where Effective Altruists give makes sense, but checking where effective altruists are giving seems like it's somewhat getting the causal arrow reversed. To know they are effective, you must first check which organizations are effective, and *then* you can determine that those who gave to those organizations were effective.

But I guess there's also a more meaningful way to interpret the statement. Ex.: Where do smart strategic altruists give money? (you can still determine how smart and strategic they are in some direct ways without checking which organizations are the most effective). If you find some effective organizations first, you can also ask "Where else do those donors give" which might unveil charities you missed.

x-post: Facebook - Effective Altruism Polls

Comment by mati_roy on X-risks to all life v. to humans · 2020-06-19T22:02:13.783Z · score: 1 (1 votes) · EA · GW

X-risks are dependent on one's value system. If you value some form of non-human life colonizing the universe, then human extinction is not necessarily an x-risk for you, and vice versa.

Comment by mati_roy on Mati_Roy's Shortform · 2020-06-19T21:48:12.460Z · score: 3 (3 votes) · EA · GW

In the past months, a lot more people weren't working and were receiving a government-funded basic income (and also were socially isolated). I wonder if that increased the probability of the BLM events happening. And if so, how we should update our models of what would happen in a future where AI makes a lot of people unemployed and the government provides a UBI.

Comment by mati_roy on Mati_Roy's Shortform · 2020-06-16T07:07:13.421Z · score: 1 (1 votes) · EA · GW

Yes, I think the universe is spatially Big, to the extent that most currently alive people are living outside our Reachable Universe

(I'm assuming "currently alive" can be cached out robustly, but am still a bit confused about the implications of the relativity of simultaneity)

Comment by mati_roy on What are EA project ideas you have? · 2020-06-16T06:50:48.189Z · score: 1 (1 votes) · EA · GW

Shaking hands across the world

Category: Bringing powerful countries closer together

Idea: Handshake statue in Times Square and some equivalent place in China, where people can give each other a handshake across the world

Effectiveness: I don't know; it doesn't seem effective, but maybe such symbols are powerful and would bring the world closer together, hence increasing cooperation / reducing the risk of wars

Source: Space Force TV show, s1e7 8:30

Comment by mati_roy on Mati_Roy's Shortform · 2020-06-15T20:24:11.783Z · score: 1 (1 votes) · EA · GW

Most likely. Contrary to the common saying, most people are not in the future; most people are completely causally disconnected from our universe.

Comment by mati_roy on What are EA project ideas you have? · 2020-06-06T03:01:13.878Z · score: 1 (1 votes) · EA · GW

Rationalist Olympiads

Potential funding: EA Meta Fund

Ideas:

Comment by mati_roy on What are EA project ideas you have? · 2020-06-06T02:59:29.669Z · score: 1 (1 votes) · EA · GW

FDA Policy Think-tank (and/or advocacy group)

Comment by mati_roy on What are EA project ideas you have? · 2020-06-06T02:58:42.366Z · score: 1 (1 votes) · EA · GW

Science policy think tank (or advocacy group?)

Potential problem: it might accelerate all scientific progress, which isn't relevant in the framework of differential technological progress, or is possibly harmful (?) if, for example, AI parallelizes better than AI safety

Related: https://causeprioritization.org/Improving_science

Comment by mati_roy on What are EA project ideas you have? · 2020-06-06T02:45:40.031Z · score: 1 (1 votes) · EA · GW

Sober September

Created: early 2019 (or maybe before) | Originally shared on EA Work

Cause area: aging

Dry Feb is a Canadian initiative that invites people to go sober for February to raise money for the Canadian Cancer Society: https://www.dryfeb.ca/.

Imagine this idea, but worldwide and for general medical research.

I would suggest fundraising for the Methuselah Foundation for its broad approach. They fund a lot of prizes which create market pressure for medical progress, which saves donors from having to figure out which research groups are the most effective. They've also had other initiatives that help the field at large, such as conferences and roadmaps. More on them here: https://www.mfoundation.org/who-we-are/.

An idea for a name is “Sober September”.

Tangentially, reducing alcohol consumption might also be a somewhat effective intervention to increase QALYs in richer countries (ex.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.183.4036&rep=rep1&type=pdf).

Note: I’m not working on this, but could provide some guidance.

Comment by mati_roy on What are EA project ideas you have? · 2020-06-06T02:42:42.080Z · score: 1 (1 votes) · EA · GW

Royalty free AI images

Created: early 2019 (or maybe before) | Originally shared on EA Work

Cause area: AI safety

Proposal: Make a collection of (royalty) free images representing the idea of AI / AI safety / AI x-risk / AI risk that aren't anthropomorphizing AI or otherwise misportraying AI (both by searching for existing images and by creating more). This could be used by the media, local AI (safety) groups, etc.

Details: I think this is less of a problem than it used to be, but still think this could be valuable. If you want funding for that, you could consider applying for a grant from the Long-Term Future Fund: https://app.effectivealtruism.org/funds/far-future.

Cross-post: https://www.facebook.com/groups/1696670373923332/permalink/2287004484889915/

Comment by mati_roy on What are EA project ideas you have? · 2020-06-06T02:41:20.393Z · score: 1 (1 votes) · EA · GW

Decision Theory Interactive Guide

Created: early 2019 (or maybe before) | Originally shared on EA Work

Proposal: I think this could help people understand decision theories (especially functional decision theory). There could be scenarios where the user has to choose an action or a decision procedure and sees how this affects other parts of the scenario that are logically connected to the agent the user controls. For example: playing the prisoner's dilemma with a copy of oneself, Newcomb's problem, etc. This could be done in a similar way to Nicky Case's games.
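
As a rough sketch of the kind of computation such a guide could run behind the scenes for the "prisoner's dilemma with a copy of oneself" scenario (the payoff numbers and the function outcome_against_copy are made-up assumptions, not part of the original idea):

```python
# Prisoner's dilemma against an exact copy of yourself: because the copy runs
# the same decision procedure, choosing a procedure fixes both actions at once.
# Payoff values (years in prison; lower is better) are illustrative only.

PAYOFFS = {  # (my action, copy's action) -> my years in prison
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"): 10,
    ("defect", "cooperate"): 0,
    ("defect", "defect"): 5,
}

def outcome_against_copy(procedure):
    """Return my years in prison; the copy outputs the same action as me."""
    my_action = procedure()
    copy_action = procedure()  # logically identical copy -> same output
    return PAYOFFS[(my_action, copy_action)]

print(outcome_against_copy(lambda: "cooperate"))  # -> 1
print(outcome_against_copy(lambda: "defect"))     # -> 5
```

The point the sketch illustrates is that the user's choice of decision procedure, not just their isolated action, determines the outcome, which is the kind of logical connection the guide would try to make visible.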

Comment by mati_roy on What are EA project ideas you have? · 2020-06-06T02:38:54.875Z · score: 1 (1 votes) · EA · GW

EA StackExchange

Created: early 2019 (or maybe before) | Originally shared on EA Work

Create a quality StackExchange site so that the EA community can build up knowledge online.

Note: The previous attempt to do so failed (see: https://area51.stackexchange.com/proposals/97583/effective-altruism).

Comment by mati_roy on What are EA project ideas you have? · 2020-06-06T02:37:57.193Z · score: 1 (1 votes) · EA · GW

I will document ideas from others I want to signal boost in replies to this comment

Comment by mati_roy on What are EA project ideas you have? · 2020-05-20T00:21:28.267Z · score: 1 (1 votes) · EA · GW

Maybe summarizing the book "Who Goes First? The Story of Self-experimentation in Medicine". Two possibly important theses:

  • self-experimentation is important
  • medical innovations are available way before they get adopted

Comment by mati_roy on Focusing on Career & Cause Movement Building · 2020-05-12T21:53:51.291Z · score: 1 (1 votes) · EA · GW

related: EA Group Directory

Comment by mati_roy on Which norms would you like to see on the EA Forum? · 2020-05-11T04:37:39.647Z · score: 1 (1 votes) · EA · GW

Taking info-hazards seriously.

Don't share one just "because most people already know about it".

Don't share because "other people are wrong to be concerned". At least build a solid argument in private with others before sharing publicly. See paper on unilitarilist curse.

Don't share because "come on that's weird". Your intuitions might not be well calibrated on this.

Don't share because "it doesn't actually work". There might be ways to fix their bugs. But don't even think about whether there are obviously.

Sharing is rarely useful at the object level, is potentially dangerous, and creates a norm of tolerating info-hazards. A norm of self-censoring is the best norm I can think of. Censoring others unfortunately creates a large risk of bringing more attention to the info-hazard.

It's not worth the dopamine rush of showing how smart you are. I despise everyone not taking that seriously.

Comment by mati_roy on Which norms would you like to see on the EA Forum? · 2020-05-10T22:23:38.424Z · score: 3 (3 votes) · EA · GW

I hope we'll import the tagging system from LessWrong.

I would very much like to see posts labelled with "short-term" vs "long-term" as I think a lot of discussion is wasted on what often ends up being a fundamental disagreement. The discussion is wasted because it doesn't tackle the fundamental disagreement, and both interlocutors wrongly model the other as also being a short/long termist.

Fundamental disagreement can include:

  • Highly discounting the future
  • Disregarding indirect impact
  • Trusting our intuition that explicit reasoning which gives weird conclusions is false
  • Etc.

There's not much synergy between the short-termist cluster and long-termist cluster.

Also the labels are not fully self-explanatory. For example, the article "How the Simulation Argument Dampens Future Fanaticism" might superficially seem like it belongs in the short-termist cluster, but doesn't (I would argue).

Example of mostly wasted discussion: example 1

Note: I expect this idea to get some push-back.

Comment by mati_roy on Which norms would you like to see on the EA Forum? · 2020-05-10T22:07:43.737Z · score: 3 (2 votes) · EA · GW

I have a vague impression some of us are thinking "this is too weird for EAs/the EA Forum, I'll post on LW". There's also the fact that LessWrong used to host some EA topics pre EA Forum. This makes it so some content is spread over those 2 forums. I'm not sure if/how we can improve on that.

Ideas:

a) Norm for posting EA content on the EA Forum, not on LessWrong

b) Syncing the Alignment Forum with the EA Forum instead of LessWrong (by the way, I think the Alignment Forum was a really good idea to organize information)

Comment by mati_roy on Which norms would you like to see on the EA Forum? · 2020-05-10T21:50:55.771Z · score: 2 (2 votes) · EA · GW

This is more of a feature request at this point, but I would like it if we could share draft posts publicly in order to collaborate faster. Once this is a feature, I would like it to become a common practice.

Comment by mati_roy on Which norms would you like to see on the EA Forum? · 2020-05-10T21:48:20.445Z · score: 2 (2 votes) · EA · GW

When a post would work better as a wiki entry, I wish it would just be a wiki entry (on the Cause Prioritization Wiki) and then shared here as a link post.

Addendum: Maybe the Cause Prioritization Wiki could be moved to "wiki.effectivealtruism.org" (and also moved to the MediaWiki platform) and better integrated with the Forum (ex.: with a list of recent major edits on the main page).

Comment by mati_roy on Which norms would you like to see on the EA Forum? · 2020-05-10T21:45:48.535Z · score: 3 (3 votes) · EA · GW

I would like people to turn off comments when they share a link to the EA Forum on Facebook and/or invite people to comment on the original post, in order to centralize the discussion in one place, because: a) it makes it easier to see everything that's been said about the post, and b) it makes it easier to avoid duplicating work

Comment by mati_roy on Which norms would you like to see on the EA Forum? · 2020-05-10T21:44:23.603Z · score: 2 (2 votes) · EA · GW

I would like more posts to be posted as questions instead, so that other people can answer them as well, all in the same place.

For example, I feel like a lot of people would have posted Why do we need philanthropy? Can we make it obsolete? as a post, but I think it's better as a question.