Posts

[Links] Tangible actions to support Hong Kong protestors from afar 2019-08-18T23:47:03.223Z · score: 4 (9 votes)
[Link] Virtue signaling annotated bibliography (Geoffrey Miller) 2019-08-14T22:41:55.592Z · score: 7 (5 votes)
[Link] Bolsonaro is cutting down the rainforest (nytimes) 2019-08-01T00:45:11.495Z · score: 4 (10 votes)
[Link] The Schelling Choice is "Rabbit", not "Stag" (LessWrong post) 2019-07-31T21:27:22.097Z · score: 20 (5 votes)
[Link] "Two Case Studies in Communist Insecurity" (The Scholar's Stage) 2019-07-25T22:17:05.968Z · score: 7 (7 votes)
[Link] Thiel on GCRs 2019-07-22T20:47:13.076Z · score: 26 (10 votes)
Debrief: "cash prizes for the best arguments against psychedelics" 2019-07-14T17:04:20.153Z · score: 47 (24 votes)
[Link] "Revisiting the Insights model" (Median Group) 2019-07-14T14:58:39.661Z · score: 17 (6 votes)
[Link] "Why Responsible AI Development Needs Cooperation on Safety" (OpenAI) 2019-07-12T01:19:39.816Z · score: 20 (9 votes)
[Link] "The AI Timelines Scam" 2019-07-11T03:37:22.568Z · score: 22 (13 votes)
If physics is many-worlds, does ethics matter? 2019-07-10T15:28:49.733Z · score: 14 (9 votes)
What grants has Carl Shulman's discretionary fund made? 2019-07-08T18:40:19.414Z · score: 46 (20 votes)
Do we know how many big asteroids could impact Earth? 2019-07-07T16:06:57.304Z · score: 31 (13 votes)
Leverage Research shutting down? 2019-07-04T20:55:34.890Z · score: 22 (13 votes)
What's the best structure for optimal allocation of EA capital? 2019-06-04T17:00:36.470Z · score: 7 (12 votes)
On the margin, should EA focus on outreach or retention? 2019-05-31T22:22:54.299Z · score: 5 (6 votes)
[Link] Act of Charity 2019-05-30T22:29:41.518Z · score: 4 (4 votes)
Why do you downvote EA Forum posts & comments? 2019-05-29T22:52:06.900Z · score: 6 (6 votes)
[Link] MacKenzie Bezos signs the Giving Pledge 2019-05-28T17:55:30.483Z · score: 13 (8 votes)
[Link] David Pearce on understanding psychedelics 2019-05-19T17:32:49.242Z · score: 6 (11 votes)
Cash prizes for the best arguments against psychedelics being an EA cause area 2019-05-10T18:13:04.968Z · score: 47 (32 votes)
[Link] "Radical Consequence and Heretical Knots" – an ethnography of the London EA community 2019-05-09T17:31:52.354Z · score: 16 (9 votes)
[Link] 5-HTTLPR 2019-05-09T14:56:50.820Z · score: 16 (4 votes)
[Link] 80,000 Hours 2018 annual review 2019-05-08T17:06:06.726Z · score: 23 (9 votes)
[Link] "A Psychedelic Renaissance" (Chronicle of Philanthropy) 2019-05-06T17:57:41.913Z · score: 24 (6 votes)
Why isn't GV psychedelics grantmaking housed under Open Phil? 2019-05-05T17:10:45.959Z · score: 17 (11 votes)
[Link] Totalitarian ethical systems 2019-05-04T18:37:39.166Z · score: 7 (8 votes)
Is preventing child abuse a plausible Cause X? 2019-05-04T00:58:12.568Z · score: 50 (29 votes)
Why does EA use QALYs instead of experience sampling? 2019-04-24T00:58:15.693Z · score: 55 (23 votes)
Should EA collectively leave Facebook? 2019-04-22T18:54:04.317Z · score: 9 (7 votes)
Should EA grantmaking be subject to independent audit? 2019-04-17T17:18:32.303Z · score: 19 (9 votes)
Is Modern Monetary Theory a good idea? 2019-04-16T21:25:30.508Z · score: 15 (9 votes)
What Master's is the best preparation for an Econ PhD? 2019-04-16T21:04:18.295Z · score: 12 (2 votes)
Complex value & situational awareness 2019-04-16T18:42:58.980Z · score: 15 (7 votes)
[Link] Open Phil's 2019 progress & plans update 2019-04-16T17:31:53.811Z · score: 26 (15 votes)
Who in EA enjoys managing people? 2019-04-10T23:49:16.862Z · score: 6 (3 votes)
Who is working on finding "Cause X"? 2019-04-10T23:09:23.892Z · score: 19 (12 votes)
Why did three GiveWell board members resign in April 2019? 2019-04-03T21:32:23.408Z · score: 12 (4 votes)
Is visiting North Korea effective? 2019-04-02T20:50:23.521Z · score: 0 (14 votes)
Altruistic action is dispassionate 2019-03-30T17:33:19.136Z · score: 24 (8 votes)
Why is the EA Hotel having trouble fundraising? 2019-03-26T23:20:16.794Z · score: 33 (18 votes)
Will the EA Forum continue to have cash prizes? 2019-03-25T17:37:30.519Z · score: 14 (5 votes)
EA jobs provide scarce non-monetary goods 2019-03-20T20:56:46.817Z · score: 41 (27 votes)
Is EA a community of elites? 2019-03-01T06:24:31.846Z · score: 7 (7 votes)
What type of Master's is best for AI policy work? 2019-02-22T20:04:47.502Z · score: 13 (7 votes)
What's the best Security Studies Master's program? 2019-02-22T20:01:37.670Z · score: 7 (2 votes)
Time-series data for income & happiness? 2019-02-20T05:38:23.800Z · score: 8 (3 votes)
What we talk about when we talk about life satisfaction 2019-02-04T23:51:06.245Z · score: 18 (7 votes)
Is intellectual work better construed as exploration or performance? 2019-01-25T22:00:52.792Z · score: 11 (4 votes)
If slow-takeoff AGI is somewhat likely, don't give now 2019-01-23T20:54:58.944Z · score: 21 (14 votes)

Comments

Comment by milan_griffes on [Links] Tangible actions to support Hong Kong protestors from afar · 2019-08-20T20:08:32.177Z · score: 2 (1 votes) · EA · GW

+1 to more analysis here being good.

I think there's something to be said for solidarity & acting according to one's principles. Not really sure how to weight that consideration alongside tactical / political considerations like what you're pointing to. (My instinct is to weight solidarity & acting from principles heavily.)

Comment by milan_griffes on [Links] Tangible actions to support Hong Kong protestors from afar · 2019-08-19T15:06:28.517Z · score: 2 (1 votes) · EA · GW

For sure. I think getting the Hong Kong Human Rights and Democracy Act passed is probably good on net, though this is all super complicated & hard to say with confidence.

Comment by milan_griffes on What book(s) would you want a gifted teenager to come across? · 2019-08-08T00:03:03.828Z · score: 2 (1 votes) · EA · GW

+1 to Sapiens, parts of Moral Mazes, Deep Work, and Seeing like a State.

Comment by milan_griffes on What book(s) would you want a gifted teenager to come across? · 2019-08-08T00:00:08.788Z · score: 2 (1 votes) · EA · GW

How to Fail at Almost Everything and Still Win Big

The Dhammapada (especially if they're feeling overwhelmed / burned out)

How To Do Nothing (if they spend a lot of time online / on social media)

Comment by milan_griffes on Where are Leverage research staff now? · 2019-08-06T21:06:41.756Z · score: 6 (3 votes) · EA · GW

I believe many are at Paradigm & Reserve:

Comment by milan_griffes on Information security careers for GCR reduction · 2019-08-02T17:37:14.381Z · score: 4 (2 votes) · EA · GW

lol yeah it's an infosec guy's blog. He's trolling a bit with the domain name.

Comment by milan_griffes on Information security careers for GCR reduction · 2019-08-02T02:23:07.302Z · score: 5 (3 votes) · EA · GW

Here's a compilation of "how to get started in infosec" guides.

Comment by milan_griffes on Is running Folding@home / Rosetta@home beneficial? · 2019-08-02T01:48:10.948Z · score: 1 (2 votes) · EA · GW

cf. Gwern's study of catnip.

Also Luke's post on Scaruffi:


Sometimes I do blatantly useless things so I can flaunt my rejection of the often unhealthy “always optimize” pressures within the effective altruism community. So today, I’m going to write about rock music criticism.
Comment by milan_griffes on Value of Working in Ads? · 2019-08-01T23:05:28.330Z · score: 2 (1 votes) · EA · GW

cf. Gwern's Banner Ads Considered Harmful.

Comment by milan_griffes on Should EA collectively leave Facebook? · 2019-08-01T22:55:57.540Z · score: 2 (1 votes) · EA · GW

See also Why I Quit Social Media on Otium.

Comment by milan_griffes on Four practices where EAs ought to course-correct · 2019-07-31T21:31:28.466Z · score: 5 (4 votes) · EA · GW
In our Advice for talking with journalists guide, we go into more depth about some of the advice we've received.

The Media Training Bible is also good for this.

Comment by milan_griffes on Four practices where EAs ought to course-correct · 2019-07-31T21:23:00.892Z · score: 10 (6 votes) · EA · GW

See On the construction of beacons (a):


Finally, some advice for geeks, founders of subcultures, constructors of beacons. Make your beacon as dim as you can get away with while still transmitting the signal to those who need to see it. Attracting attention is a cost. It is not just a cost to others; it increases the overhead cost you pay, of defending this resource against predatory strategies. If you have more followers, attention, money, than you know how to use right now - then either your beacon budget is unnecessarily high, or you are already being eaten.
Comment by milan_griffes on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-07-29T16:51:11.166Z · score: 6 (3 votes) · EA · GW

More on psychedelics & Openness:

[Erritzoe et al. 2018 found that psilocybin increased Openness in a population of depressed people, which SSRIs do not do.] MacLean et al. 2011, an analysis of psilocybin given to healthy-typed people, also found a persisting increase in Openness. However, Griffiths et al. 2017, also psilocybin for healthy-typed people, found no persisting increase in Openness. So maybe psilocybin causes greater Openness but only sometimes? As always more research is needed.

Also:


Why would increasing Big-Five Openness matter? Erritzoe [et al. 2018] engages with that too:
"... the facets Openness to Actions and to Values significantly increased in our study. The facet Openness to Actions pertains to not being set in one’s way, and instead, being ready to try and do new things. Openness to Values is about valuing permissiveness, open-mindedness, and tolerance. These two facets therefore reflect an active approach on the part of the individual to try new ways of doing things and consider other peoples’ values and/or worldviews."
And:
"It is well established that trait Openness correlates reliably with liberal political perspective... The apparent link between Openness and a generally liberal worldview may be attributed to the notion that people who are more open to new experiences are also less personally constrained by convention and that this freedom of attitude extends into every aspect of a person’s life, including their political orientation."
Comment by milan_griffes on What posts you are planning on writing? · 2019-07-26T07:10:28.758Z · score: 2 (1 votes) · EA · GW

See also:

Comment by milan_griffes on 'Longtermism' · 2019-07-26T07:02:47.602Z · score: 2 (1 votes) · EA · GW
... or how to update my views of the post in response to your critique

For what it's worth, I suspect there's enough inferential distance between us on fundamental stuff that I wouldn't expect either of us to be able to easily update while discussing topics on this level of abstraction.

Comment by milan_griffes on What posts you are planning on writing? · 2019-07-26T06:46:46.727Z · score: 4 (2 votes) · EA · GW

cf. The Optimizer's Curse & Wrong-Way Reductions

Comment by milan_griffes on 'Longtermism' · 2019-07-26T06:42:11.007Z · score: 3 (3 votes) · EA · GW

Raemon thought that it seems good for leaders to keep people updated on how they are conceptualizing things.

I argued that this doesn't seem true in all cases, pointing out that six paragraphs on whether to hyphenate "longtermism" isn't important to stay updated on, even when it comes from a leader.

---

For stuff like this, my ideal goal is something like "converge on the truth."

I usually settle for consolation prizes like "get more clarity about where & how I disagree with other folks in EA" and/or "note my disagreements as they arise."

Comment by milan_griffes on 'Longtermism' · 2019-07-26T01:01:47.078Z · score: -3 (23 votes) · EA · GW

Basically agree about the first claim, though the Forum isn't really aimed at EA newcomers.

(It also seems like good practice to me for people in leadership positions to keep people up to date about how they're conceptualizing their thinking)

Eh, some conceptualizations are more valuable than others.

I don't see how six paragraphs of Will's latest thinking on whether to hyphenate "longtermism" could be important to stay up-to-date about.


Comment by milan_griffes on 'Longtermism' · 2019-07-26T00:57:21.527Z · score: 0 (5 votes) · EA · GW

Thanks – I agree that confusions are likely to arise somewhere as a new term permeates the zeitgeist.

I don't think longtermism is a new term within EA or on the EA Forum, and I haven't seen any recent debates over its definition.

[Edited: the Forum doesn't seem like a well-targeted place for clarification efforts intending to address potential confusions around this (which seem likely to arise elsewhere)]. Encyclopedia entries, journal articles, and mainstream opinion pieces all seem better targeted to where confusion is likely to arise.

Comment by milan_griffes on 'Longtermism' · 2019-07-25T22:32:01.561Z · score: 0 (19 votes) · EA · GW

This post strikes me as fairly pedantic. Is there a live confusion it's intending to solve?

The Wittgensteinian / Eliezerian view (something like "words are labels pointing to conceptual clusters that have fuzzy boundaries") seems to fully dissolve the need to precisely specify definitions of words.

Comment by milan_griffes on In what ways and in what areas might it make sense for EA to adopt more a more bottoms-up approach? · 2019-07-25T20:23:42.562Z · score: 5 (2 votes) · EA · GW

Argh!

Fixed, thanks.

Comment by milan_griffes on In what ways and in what areas might it make sense for EA to adopt more a more bottoms-up approach? · 2019-07-25T06:42:31.455Z · score: 4 (2 votes) · EA · GW

Just saw this AnnaSalamon comment on LessWrong about generativity & trustworthiness. Excerpt:


To be clear, I still think hypothesis-generating thinkers are valuable even when unreliable, and I still think that honest and non-manipulative thinkers should not be “ruled out” as hypothesis-sources for having some mistaken hypotheses (and should be “ruled in” for having even one correct-important-and-novel hypothesis). I just care more about the caveats here than I used to.
Comment by milan_griffes on What posts you are planning on writing? · 2019-07-25T06:25:09.689Z · score: 5 (5 votes) · EA · GW

Thanks for collating these "criticism of EA" posts.


is that EAs are generally too eager to read and upvote any nicely written criticism by an intelligent person that sounds non-threatening enough.

Reminds me a bit of sealioning, though I think what you're pointing to is not exactly that.

Comment by milan_griffes on What posts you are planning on writing? · 2019-07-25T06:23:40.288Z · score: 1 (3 votes) · EA · GW

+1

Comment by milan_griffes on [Link] Thiel on GCRs · 2019-07-25T06:21:05.974Z · score: 5 (3 votes) · EA · GW

Are these predictions informing your investments? Seems like you could make a lot of money if you're able to predict upcoming macro trends.

Comment by milan_griffes on [Link] Thiel on GCRs · 2019-07-24T20:47:38.116Z · score: 11 (6 votes) · EA · GW

This chart really conveys the concern at a glance:

[chart]

(source) (a)

... what if the curve swings upward again?

Comment by milan_griffes on [Link] Thiel on GCRs · 2019-07-24T20:30:08.453Z · score: 8 (2 votes) · EA · GW
Thiel thinks GCRs are a concern, but is also very worried about political violence / violence perpetrated by strong states.

Robin Hanson's latest (a) is related.

Given the stakes, it's a bit surprising that "has risk of war secularly declined or are we just in a local minimum?" hasn't received more attention from EA.

Holden looked at this (a) a few years ago and concluded:


I conclude that [The Better Angels of Our Nature's] big-picture point stands overall, but my analysis complicates the picture, implying that declines in deaths from everyday violence have been significantly (though probably not fully) offset by higher risks of large-scale, extreme sources of violence such as world wars and oppressive regimes.

If I recall correctly, Pinker also spent some time noting that violence appears to be moving to more of a power-law distribution since the early 20th Century (fewer episodes, magnitude of each episode is much more severe).

"War aversion" seems like a plausible x-risk reduction focus area in its own right (it sorta bridges AI risk, biosecurity, and nuclear security).

Comment by milan_griffes on What posts you are planning on writing? · 2019-07-24T20:26:46.163Z · score: 7 (4 votes) · EA · GW

PSA: the EA Editing and Review facebook group is intended for this use-case. It has 650 members; feedback on posted drafts is generally good.

Comment by milan_griffes on [Link] Thiel on GCRs · 2019-07-24T06:18:27.810Z · score: 5 (3 votes) · EA · GW

Great point.

I like the Russ Roberts videos as demonstrations of how complicated macro is / how malleable macroeconomic data is.

Comment by milan_griffes on [Link] Thiel on GCRs · 2019-07-23T20:01:30.995Z · score: 4 (3 votes) · EA · GW

Seems like there's dispute about this, at least from Russ Roberts' perspective:

https://www.policyed.org/numbers-game/hows-middle-class-doing/video

https://www.policyed.org/numbers-game/paradox-household-income/video

Comment by milan_griffes on [Link] Thiel on GCRs · 2019-07-23T15:04:16.501Z · score: 2 (1 votes) · EA · GW

Also Nintil has some good notes (a). (Notes at bottom of post.)

Comment by milan_griffes on Integrity and accountability are core parts of rationality [LW-Crosspost] · 2019-07-23T14:04:14.084Z · score: 2 (1 votes) · EA · GW

Hmm... now I'm worried that I'm not parsing you correctly.

Are you intending something closer to (1) or (2)?

(1) "stated beliefs" just means beliefs about what is true about physical reality. You're saying that acting with integrity means doing what accords with what one thinks is true, regardless of the consequences. (Sorta like fiat justitia ruat caelum, incompatible with appeal to consequences.)

(2) "stated beliefs" means beliefs about what is true about physical reality and social reality. You're saying that acting with integrity means doing what seems best / will result in the best outcomes given one's current understanding of social reality. (Compatible with appeal to consequences.)

Comment by milan_griffes on Integrity and accountability are core parts of rationality [LW-Crosspost] · 2019-07-23T13:50:23.336Z · score: 2 (1 votes) · EA · GW
One lens to view integrity through is as an advanced form of honesty – “acting in accordance with your stated beliefs.”

How do you think this definition of integrity interacts with the appeal-to-consequences concern that's being discussed on LW these days? (1, 2)

I haven't thought about this rigorously, but it seems like this definition could be entirely compatible with doing a lot of appeal-to-consequences reasoning (which seems to miss some important part of what we're gesturing at when we talk about integrity).

Comment by milan_griffes on If physics is many-worlds, does ethics matter? · 2019-07-23T13:42:42.154Z · score: 2 (1 votes) · EA · GW

fwiw, my concern isn't premised on "all futures / choices being equally likely." 

I think the concern is closer to something like "some set of futures are going to happen (there's some distribution of Everett branches that exists and can't be altered from the inside), so there's not really room to change the course of things from a zoomed-out, point-of-view-of-the-universe perspective."

I'll give the Chiang story a look, thanks!

Comment by milan_griffes on Pros/cons of funding more research on whether psychedelics increase altruism? · 2019-07-22T21:17:53.897Z · score: 6 (3 votes) · EA · GW

My recent take here: https://forum.effectivealtruism.org/posts/bDfiAHEAmRSLHHreR/debrief-cash-prizes-for-the-best-arguments-against

tl;dr – Yes, more research seems worth it.

Comment by milan_griffes on [Link] Thiel on GCRs · 2019-07-22T20:50:00.114Z · score: 3 (2 votes) · EA · GW

Hacker News comments about the interview, including several by Thiel skeptics.

Comment by milan_griffes on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-20T02:18:48.888Z · score: 2 (1 votes) · EA · GW

(b), perhaps with a dash of (a) too

Comment by milan_griffes on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-17T14:52:28.268Z · score: 2 (1 votes) · EA · GW
... I'm not sure this is such a bad setup overall.

Yeah it doesn't seem terrible. It probably misses a lot of upside, though.

Comment by milan_griffes on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-16T14:22:06.232Z · score: 2 (1 votes) · EA · GW
(This might be the closest thing I've seen to that so far.)

Whoa, I didn't know about this one. Thanks for the link!

Comment by milan_griffes on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-16T14:18:51.991Z · score: 6 (2 votes) · EA · GW

Thanks, I think I overstated this in the OP (added a disclaimer noting this). I still think there's a thing here but probably not to the degree I was holding.

In particular it felt strange that there wasn't much engagement with the trauma argument or the moral uncertainty / moral hedging argument ("psychedelics are plausibly promising under both longtermist & short-termist views, so the case for psychedelics is more robust overall.")

There was also basically no engagement with the studies I pointed to.

All of this felt strange (and still feels strange), though I now think I was too strong in the OP.

Comment by milan_griffes on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-16T14:12:42.145Z · score: 6 (2 votes) · EA · GW
You asked for the best arguments against psychedelics, not for counter-arguments to your specific arguments in favour, so this doesn't seem that surprising.

Fair enough. I think I felt surprised because I've spent a long time thinking about this & tried to give the best case I could in support, and then submissions for "best case against" didn't seem to engage heavily with my "best case for."

Comment by milan_griffes on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-16T14:10:16.010Z · score: 6 (2 votes) · EA · GW

1. I like the originality of it. (It's not just saying "the evidence base isn't strong enough!")

2. The objection better accords with my current worldview.

Comment by milan_griffes on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-16T14:08:04.214Z · score: 2 (1 votes) · EA · GW
If you instead look at CFAR as a funnel for people working on AI risk, the "evidence base" seems clearer.

Do you know if there are stats on this, somewhere?

e.g. Out of X workshop participants in 2016, Y are now working on AI risk.

Comment by milan_griffes on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-16T14:05:11.199Z · score: 3 (2 votes) · EA · GW
I agree if for CFAR you are looking at the metric of how rational their alumni are. If you instead look at CFAR as a funnel for people working on AI risk, the "evidence base" seems clearer.

Sure, I was pointing to the evidence base for the techniques taught by CFAR & other rationality training programs.

CFAR could be effective at recruiting people into AI risk due to Schelling-point dynamics, without the particular techniques it teaches being efficacious. (I'm not sure that's true, just pointing out an orthogonality here.)

Comment by milan_griffes on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-16T14:00:00.961Z · score: 2 (1 votes) · EA · GW
Why are popularity-contest dynamics harmful, precisely?

A similar sort of thing is a big part of the reason why Eliezer had difficulty advocating for AI safety, back in the 2000s.

Comment by milan_griffes on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-07-14T15:48:53.121Z · score: 3 (2 votes) · EA · GW

Easy money :-)

Comment by milan_griffes on EA Forum Prize: Winners for May 2019 · 2019-07-12T22:42:07.118Z · score: 7 (5 votes) · EA · GW
New this month: Two users who have a recent history of strong posts and comments (Larks and Khorton)

Could you say more about the process by which Larks & Khorton were added to the roster of people who have a vote?

(I'm pretty sure I've been commenting & posting at roughly the same cadence as them. No one approached me about this, so I'm curious about the process here.)

Comment by milan_griffes on In what ways and in what areas might it make sense for EA to adopt more a more bottoms-up approach? · 2019-07-12T20:55:00.093Z · score: 8 (5 votes) · EA · GW
My sense is that there's a lot of causal/top down planning in EA.

My quick thought here is that EA currently has a very strong "evaluative" function (i.e. strong at assessing the pros / cons of existing ideas), and a weak "generative" function (i.e. weak at coming up with new ideas).

I'm bullish on increasing EA generativity from the present margin.

Comment by milan_griffes on A philosophical introduction to effective altruism · 2019-07-12T16:55:56.257Z · score: 4 (3 votes) · EA · GW

Thanks, this is helpful.


I am talking about obligations in this Introduction (rather than 'opportunities')

Could you say a bit more about why you chose to go with the 'obligations' framing?

Comment by milan_griffes on A philosophical introduction to effective altruism · 2019-07-11T20:43:56.203Z · score: 6 (3 votes) · EA · GW

From my quick read of your Norton Introduction, it seems like you're arguing for moral realism being a prerequisite to EA. (Words like "duty" and "command" make me think this.)

Is that right?