Posts

[Link] EA Global 2020 announced (CEA) 2019-12-03T19:31:57.242Z · score: 17 (6 votes)
What is EA's story? 2019-11-30T21:45:45.433Z · score: 18 (7 votes)
[Link] "Status in academic ethics" (Charles Foster) 2019-11-27T23:20:04.510Z · score: 18 (9 votes)
[Link] "Art as the starting point" (Autotranslucence) 2019-11-27T17:10:25.705Z · score: 16 (5 votes)
[Link] A new charity evaluator (NYTimes) 2019-11-26T22:44:23.857Z · score: 18 (7 votes)
[Link] Against "Why We Sleep" (Guzey) 2019-11-15T21:15:08.098Z · score: 15 (11 votes)
[Link] "Progress Update October 2019" (Ought) 2019-10-29T21:34:42.504Z · score: 25 (11 votes)
[Link] "One year of Future Perfect" (Vox) 2019-10-15T18:12:55.663Z · score: 24 (13 votes)
[Link] "Machine Learning Projects for IDA" (Ought) 2019-10-12T17:35:18.638Z · score: 10 (3 votes)
[Link] "State of the Qualia" (QRI) 2019-10-11T21:14:23.412Z · score: 22 (11 votes)
[Link] "How feasible is long-range forecasting?" (Open Phil) 2019-10-11T21:01:53.471Z · score: 28 (9 votes)
Should CEA buy ea.org? 2019-10-04T23:10:52.237Z · score: 5 (5 votes)
[Link] Experience Doesn’t Predict a New Hire’s Success (HBR) 2019-10-04T19:30:49.479Z · score: 9 (3 votes)
Why is the amount of child porn growing? 2019-10-02T01:09:45.207Z · score: 6 (14 votes)
[Link] Moral Interlude from "The Wizard and the Prophet" 2019-09-27T18:42:16.728Z · score: 13 (5 votes)
[Link] The Case for Charter Cities Within the EA Framework (CCI) 2019-09-23T20:08:19.947Z · score: 22 (11 votes)
[Link] "Relaxed Beliefs Under Psychedelics and the Anarchic Brain" (SSC) 2019-09-11T14:45:35.993Z · score: 3 (8 votes)
[Link] Progress Studies (Jasmine Wang) 2019-09-10T19:55:55.891Z · score: 17 (8 votes)
Campaign finance reform as an EA priority? 2019-08-30T01:46:55.222Z · score: 17 (11 votes)
[Link] BERI handing off Jaan Tallinn's grantmaking 2019-08-27T17:13:30.112Z · score: 18 (8 votes)
[Links] Tangible actions to support Hong Kong protestors from afar 2019-08-18T23:47:03.223Z · score: 4 (9 votes)
[Link] Virtue signaling annotated bibliography (Geoffrey Miller) 2019-08-14T22:41:55.592Z · score: 7 (5 votes)
[Link] Bolsonaro is cutting down the rainforest (nytimes) 2019-08-01T00:45:11.495Z · score: 4 (10 votes)
[Link] The Schelling Choice is "Rabbit", not "Stag" (LessWrong post) 2019-07-31T21:27:22.097Z · score: 20 (5 votes)
[Link] "Two Case Studies in Communist Insecurity" (The Scholar's Stage) 2019-07-25T22:17:05.968Z · score: 7 (7 votes)
[Link] Thiel on GCRs 2019-07-22T20:47:13.076Z · score: 26 (10 votes)
Debrief: "cash prizes for the best arguments against psychedelics" 2019-07-14T17:04:20.153Z · score: 47 (24 votes)
[Link] "Revisiting the Insights model" (Median Group) 2019-07-14T14:58:39.661Z · score: 17 (6 votes)
[Link] "Why Responsible AI Development Needs Cooperation on Safety" (OpenAI) 2019-07-12T01:19:39.816Z · score: 20 (9 votes)
[Link] "The AI Timelines Scam" 2019-07-11T03:37:22.568Z · score: 22 (13 votes)
If physics is many-worlds, does ethics matter? 2019-07-10T15:28:49.733Z · score: 14 (9 votes)
What grants has Carl Shulman's discretionary fund made? 2019-07-08T18:40:19.414Z · score: 52 (24 votes)
Do we know how many big asteroids could impact Earth? 2019-07-07T16:06:57.304Z · score: 31 (13 votes)
Leverage Research shutting down? 2019-07-04T20:55:34.890Z · score: 22 (13 votes)
What's the best structure for optimal allocation of EA capital? 2019-06-04T17:00:36.470Z · score: 7 (12 votes)
On the margin, should EA focus on outreach or retention? 2019-05-31T22:22:54.299Z · score: 5 (6 votes)
[Link] Act of Charity 2019-05-30T22:29:41.518Z · score: 4 (4 votes)
Why do you downvote EA Forum posts & comments? 2019-05-29T22:52:06.900Z · score: 6 (6 votes)
[Link] MacKenzie Bezos signs the Giving Pledge 2019-05-28T17:55:30.483Z · score: 13 (8 votes)
[Link] David Pearce on understanding psychedelics 2019-05-19T17:32:49.242Z · score: 6 (11 votes)
Cash prizes for the best arguments against psychedelics being an EA cause area 2019-05-10T18:13:04.968Z · score: 47 (32 votes)
[Link] "Radical Consequence and Heretical Knots" – an ethnography of the London EA community 2019-05-09T17:31:52.354Z · score: 16 (9 votes)
[Link] 5-HTTLPR 2019-05-09T14:56:50.820Z · score: 16 (4 votes)
[Link] 80,000 Hours 2018 annual review 2019-05-08T17:06:06.726Z · score: 23 (9 votes)
[Link] "A Psychedelic Renaissance" (Chronicle of Philanthropy) 2019-05-06T17:57:41.913Z · score: 24 (6 votes)
Why isn't GV psychedelics grantmaking housed under Open Phil? 2019-05-05T17:10:45.959Z · score: 17 (11 votes)
[Link] Totalitarian ethical systems 2019-05-04T18:37:39.166Z · score: 7 (8 votes)
Is preventing child abuse a plausible Cause X? 2019-05-04T00:58:12.568Z · score: 55 (32 votes)
Why does EA use QALYs instead of experience sampling? 2019-04-24T00:58:15.693Z · score: 55 (23 votes)
Should EA collectively leave Facebook? 2019-04-22T18:54:04.317Z · score: 9 (7 votes)

Comments

Comment by milan_griffes on EA Meta Fund November 2019 Payout Report · 2019-12-11T20:07:28.121Z · score: 1 (2 votes) · EA · GW

Yeah. I suppose alternative hypotheses include:

  • The LTFF team finds it easier to give good feedback than the Meta Fund team
  • The LTFF team is giving lower-quality feedback than the Meta Fund team
Comment by milan_griffes on Which Community Building Projects Get Funded? · 2019-12-11T20:00:27.960Z · score: 6 (3 votes) · EA · GW
I suggest creating a google sheet with: a list of all grants, the fund the grant came from, the date, a categorization (which would vary by fund but could be similar to the categories used in the Grantmaking and Impact section), and a subtotal for each category. That would make it easy to see all grants in one place (rather than clicking through each payout report), the categorization would be transparent, and the subtotals would update automatically as new grants were made.

+1, this seems like a good idea & quick to implement.

Comment by milan_griffes on EA Meta Fund November 2019 Payout Report · 2019-12-11T19:15:34.189Z · score: 1 (4 votes) · EA · GW

Thanks! I agree that giving good feedback isn't easy.

It seems like the Long-Term Future Fund team is able to give more feedback (and more context in their grant reports) than the Meta Fund team. As far as I know, both teams are composed entirely of volunteers.

Do you have thoughts on why the Long-Term Future Fund is able to give more context about their grant-making than the Meta Fund?

Comment by milan_griffes on Introducing Good Policies: A new charity promoting behaviour change interventions · 2019-12-11T18:42:50.335Z · score: 2 (1 votes) · EA · GW

Also nicotine has cognitive benefits: https://www.gwern.net/Nicotine

Comment by milan_griffes on EA Meta Fund November 2019 Payout Report · 2019-12-11T17:50:43.336Z · score: 10 (5 votes) · EA · GW

Got it, thanks for this context!

Did EA Hotel explicitly ask for feedback on either rejection?

---

Also would be great if someone from the Meta Fund team could say a bit about what this looks like from their perspective / why the Fund decided to reject twice without giving feedback.

Comment by milan_griffes on EA Meta Fund November 2019 Payout Report · 2019-12-11T00:53:36.682Z · score: 3 (2 votes) · EA · GW

Agreed, thanks for this.

(I see now that my comment was premised on a belief that EA Hotel would be happy for this to be public, as they have been quite open about such things to date.)

Comment by milan_griffes on EA Meta Fund November 2019 Payout Report · 2019-12-10T19:19:19.648Z · score: 4 (4 votes) · EA · GW

Did EA Hotel apply to this round?

If so, could you give some context about why it didn't receive a grant?

Comment by milan_griffes on Complex value & situational awareness · 2019-12-09T18:47:26.075Z · score: 4 (2 votes) · EA · GW

Just came across this related LessWrong post on social & causal reality.

Comment by milan_griffes on EA Hotel Fundraiser 6: Concrete outputs after 17 months · 2019-12-07T01:41:36.297Z · score: 27 (9 votes) · EA · GW

From the recent MIRI fundraising post:


Rafe Kennedy, who joins MIRI after working as an independent existential risk researcher at the Effective Altruism Hotel. Rafe previously worked at the data science startup NStack, and he holds an MPhysPhil from the University of Oxford in Physics & Philosophy.

Seems like a promising output of EA Hotel!

Comment by milan_griffes on MIRI’s 2019 Fundraiser · 2019-12-07T01:40:16.000Z · score: 5 (3 votes) · EA · GW

Nice update!

Does MIRI know of any large, likely grants (from Open Phil or others) that are in the pipeline but aren't reflected in the fundraising thermometer?

Comment by milan_griffes on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-07T00:07:20.923Z · score: 3 (2 votes) · EA · GW

Thanks for this thorough answer :-)

Comment by milan_griffes on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-07T00:05:47.008Z · score: 2 (1 votes) · EA · GW

Thanks!

Those are plan changes that have been downgraded after 80k learned more about the situation?

Comment by milan_griffes on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-06T15:49:45.893Z · score: 4 (2 votes) · EA · GW

Thanks! Makes sense that 80k would only do this for "rated-10" and "rated-100" plan changes.


For cases where it seemed we made a particularly large impact we've continued following up for years, in order to update how much impact the plan change had.

Is data about these longer term follow-ups publicly available somewhere? Didn't see it in my quick read of the 2018 review.

Comment by milan_griffes on New research on moral weights · 2019-12-05T18:05:47.772Z · score: 2 (1 votes) · EA · GW

+1, good to see empirical work on this

Comment by milan_griffes on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-05T17:56:08.794Z · score: 5 (4 votes) · EA · GW

What feels most limiting to your advising work at 80k?

(i.e. what things most keep your work from being what you'd like it to be in the ideal case?)

Comment by milan_griffes on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-05T17:51:14.554Z · score: 12 (7 votes) · EA · GW

Has 80k considered partnering with academic researchers to run studies on its impact / the impact of different approaches to advice-giving?

Randomization seems straightforward given that demand for 80k advising is larger than supply.

Comment by milan_griffes on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-05T17:47:02.095Z · score: 14 (4 votes) · EA · GW

Does 80k do longterm follow-up with folks who've attributed an impact-adjusted significant plan change (IASPC) to 80k advice?

I'm imagining following up 12 months later (and also 24 months later, 36 months later, if ambitious), to see:

  • how things are going after the change
  • if they still think the change was a good idea
  • if they still attribute the change to the same factors
  • etc.
Comment by milan_griffes on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-12-01T21:32:37.061Z · score: 8 (20 votes) · EA · GW

Okay, I think this is a pretty bad thing to trade a lot of transparency for.

  • The work of all the orgs I asked about falls solidly within EA circles (except for CSET, and maybe Founders Pledge)
  • Folks outside of EA don't really read the EA Forum
  • I have trouble imagining folks outside of EA being shocked to learn that an org they interface with was invited to a random event (EA has a mixed reputation outside of EA, but it's not toxic / there's minimal risk of guilt-by-association)

I wonder if the reason you gave is your true reason?

---

If CEA wants to hold a secret meeting to coordinate with some EA orgs, probably best to keep it secret.

If CEA wants to hold a publicly disclosed, invite-only meeting to coordinate with some EA orgs, probably best to make a full public disclosure.

The status quo feels like an unsatisfying middle ground with some trappings of transparency but a lot of substantive content withheld.

Comment by milan_griffes on What is EA's story? · 2019-11-30T22:07:36.984Z · score: 2 (1 votes) · EA · GW

Thanks!

I haven't read most of these in a while... if I recall correctly, a lot of their content is more "This is why I'm doing what I'm doing" rather than "This is why what I'm doing is awesome and why you should do it too!"

I suppose Doing Good Better and The Life You Can Save are more in the latter direction.

Curious if any of the content you posted particularly resonates with you?

Comment by milan_griffes on A list of EA-related podcasts · 2019-11-27T16:23:12.924Z · score: 3 (2 votes) · EA · GW
  • Conversations with Tyler
  • The Portal with Eric Weinstein
Comment by milan_griffes on [Link] A new charity evaluator (NYTimes) · 2019-11-27T15:28:16.102Z · score: 3 (2 votes) · EA · GW

Got it. Glad to hear it's an issue with how the NYTimes positioned your quote :-)

Comment by milan_griffes on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-26T17:03:49.605Z · score: 9 (9 votes) · EA · GW

Why is it important for staff of some organizations to be able to attend anonymously?

Comment by milan_griffes on ALLFED 2019 Annual Report and Fundraising Appeal · 2019-11-25T22:26:55.931Z · score: 3 (2 votes) · EA · GW
I’ve also written extensively on issues within the existential and global catastrophic risks field, and I’m honored that I’ve become the third most prolific GCR/X-risk researcher by one measure* and my GCR work...

I spent a few minutes trying to understand how you derived "third most prolific" from that source but couldn't figure it out. Could you unpack it?

Comment by milan_griffes on EA Giving Tuesday, Dec 3, 2019: Instructions for Donors · 2019-11-25T16:20:41.002Z · score: 0 (6 votes) · EA · GW
In 2017, the match lasted 86 seconds; in 2018, it lasted 15 seconds; this year we expect it to run out much faster, plausibly in one second.

It seems basically impossible to reliably execute a newly-learned many-step task within one second.

Comment by milan_griffes on Are comment "disclaimers" necessary? · 2019-11-24T18:08:57.826Z · score: 5 (3 votes) · EA · GW

Got it. I like Larks' distinction. I also think finance & investing communities have good norms around this.

Comment by milan_griffes on Is preventing child abuse a plausible Cause X? · 2019-11-24T17:14:26.977Z · score: 4 (2 votes) · EA · GW

Update: Slate Star Codex reviewed The Body Keeps the Score

Comment by milan_griffes on Are comment "disclaimers" necessary? · 2019-11-24T17:07:49.166Z · score: 7 (4 votes) · EA · GW

I add short disclosure statements when posting about Ought.

I do this because I want to reduce the likelihood of conflict-of-interest stuff coming up. (Feels very unlikely, but could be super messy to deal with if it did happen.)

I've probably been influenced here by GiveWell's early astroturfing controversy (which almost killed the org).

Comment by milan_griffes on Updates from Leverage Research: history, mistakes and new focus · 2019-11-23T18:56:35.761Z · score: 17 (10 votes) · EA · GW

For reference, a summary of & commentary on some Leverage 1.0 research:

https://rationalconspiracy.com/2014/04/22/the-problem-with-connection-theory/ (a)

Comment by milan_griffes on Updates from Leverage Research: history, mistakes and new focus · 2019-11-22T16:42:31.785Z · score: 17 (9 votes) · EA · GW

Who are Leverage 2.0's main donors? Are they different from Leverage 1.0's main donors?

Comment by milan_griffes on Updates from Leverage Research: history, mistakes and new focus · 2019-11-22T16:38:54.182Z · score: 18 (8 votes) · EA · GW

Given Leverage 2.0's focus on scientific methods, is it planning to engage with folks working on metascience and/or progress studies?

Comment by milan_griffes on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-21T23:37:40.253Z · score: 6 (4 votes) · EA · GW
There are all sorts of ways this analogy doesn't apply directly to the real world, but it might help pump intuitions.

Yeah, I think this model misses that people who are aiming to be strikers tend to have pretty different dispositions than people aiming to be midfielders. (And so filling a team mostly with intending-to-be-strikers could have weird effects on team cohesion & function.)

Interesting to think about how Delta Force, SEAL Team Six, etc. manage this, as they select for very high-performing recruits (all strikers) then meld them into cohesive teams. I believe they do it via:

1. having a very large recruitment pool

2. intense filtering out of people who don't meet their criteria

3. breaking people down psychologically + cultivating conformity during training


I found it interesting to cash this out more... thanks!

Comment by milan_griffes on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-21T20:44:49.931Z · score: 11 (6 votes) · EA · GW

Huh, these are pretty vague & aspirational. (Overall I agree with the sentiments they're expressing, but they're not very specific about what changes to make to the status quo.)

Did these ideas get more cashed out / more operationalized at the Leaders Forum? Did organizations come away with specific next actions they will be taking towards realizing these ideas?

Comment by milan_griffes on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-21T18:33:52.348Z · score: 11 (5 votes) · EA · GW

+1. So good to see stuff like this

Comment by milan_griffes on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-21T18:29:26.144Z · score: 5 (4 votes) · EA · GW

cf. Jeff Kaufman on MIRI circa 2003: https://www.jefftk.com/p/yudkowsky-and-miri

Comment by milan_griffes on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-21T18:14:16.765Z · score: 5 (4 votes) · EA · GW
To annoy everyone with a sports analogy, the defense and midfield positions are every bit as important as the glamorous striker positions, and if you've got a team made up primarily of star strikers and wannabe star strikers, that team's going to underperform.

But the marginal impact of becoming a star striker is so high!

(Just kidding – this is a great analogy & highlights a big problem with reasoning on the margin + focusing on maximizing individual impact.)

Comment by milan_griffes on Leverage Research: reviewing the basic facts · 2019-11-21T18:11:25.968Z · score: 4 (2 votes) · EA · GW

Huh, do you know what 'Reserve Rights' does / why it exists?

Is there a short explainer of it somewhere?

Comment by milan_griffes on [Link] Against "Why We Sleep" (Guzey) · 2019-11-17T14:52:50.361Z · score: 3 (2 votes) · EA · GW

Some good comments on the LessWrong cross-post.

Comment by milan_griffes on UK General Election: Where would you look for impact analysis of policy platforms? · 2019-11-17T14:23:40.701Z · score: 2 (1 votes) · EA · GW

You could adapt kbog's scoring system to the UK context.

Comment by milan_griffes on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-17T14:01:17.393Z · score: 5 (6 votes) · EA · GW

Published today: "EA residencies" as an outreach activity

Comment by milan_griffes on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-17T13:56:16.859Z · score: -1 (2 votes) · EA · GW
I think we can drop the Bletchley park discussion.

Okay, I take it that you agree with my view.


... future-focused interventions have a very different set of questions than present-day non-quantifiable interventions

How are you separating out "future-focused interventions" from "present-day non-quantifiable interventions"?

Plausibly geoengineering safety will be very relevant in 15-30 years. Assuming that's true, would you categorize geoengineering safety research as future-focused or present-day non-quantifiable?


Comment by milan_griffes on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-16T17:08:13.867Z · score: 40 (18 votes) · EA · GW

Were representatives from these groups invited to the Leaders Forum?


If not, why not?

Comment by milan_griffes on Which Community Building Projects Get Funded? · 2019-11-16T17:02:29.323Z · score: 2 (1 votes) · EA · GW

Oh right, thanks!

Also just saw this good comment on the same topic.

Comment by milan_griffes on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T12:21:01.882Z · score: 2 (1 votes) · EA · GW

See also this recent Qualia Computing post about the orthogonality thesis. (Qualia Computing is the blog of QRI's research director.)

Comment by milan_griffes on Which Community Building Projects Get Funded? · 2019-11-15T19:54:25.648Z · score: 4 (2 votes) · EA · GW
First I want to quickly flag that we no longer do community building grants due to their complexity and instead intend to fund CEA CBG.

Wait, given Nicole's recent post, does this mean that both the Meta Fund & CEA are moving away from making community grants?

(From Nicole's post: "At this stage, I think it is fairly likely that EA Grants won’t continue in its current form, and that we will instead encourage individuals to apply to EA Funds.")

Comment by milan_griffes on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-15T19:36:11.591Z · score: 8 (3 votes) · EA · GW

I feel confused about whether there's actually a disagreement here. Seems possible that we're just talking past each other.

  • I agree that Bletchley Park wasn't mostly focused on cracking Enigma.
  • I don't know enough about Bletchley's history to have an independent view about whether it was underfunded or not. I'll follow your view that it was well supported.
  • It does seem like Turing's work on Enigma wasn't highly prioritized when he started working on it ("...because no one else was doing anything about it and I could have it to myself"), and this work turned out to be very impactful. I feel confident claiming that Bletchley wasn't prioritizing Enigma highly enough before Turing decided to work on it. (Curious whether you disagree about this.)

On the present-day stuff:

  • My claim is that circa 2010 AI alignment work was being (dramatically) underfunded by institutions, not that it wasn't being funded at all.
  • It wouldn't surprise me if 20 years from now the consensus view was "Oh man, we totally should have been putting more effort towards figuring out what safe geoengineering looks like back in 2019."
  • I believe Drexler had a hard time getting support to work on nanotech stuff (I believe he's currently working mostly on AI alignment), but I don't know the full story there. (I'm holding Drexler as someone who is qualified and aligned with EA goals.)
Comment by milan_griffes on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-14T12:52:34.209Z · score: 3 (2 votes) · EA · GW
Bletchley park was exactly the sort of intervention that doesn't need any pushing. It was funded immediately because of how obvious the benefit was.

Pretty sure that's not right, at least for Turing's work on Enigma:

"Turing decided to tackle the particularly difficult problem of German naval Enigma 'because no one else was doing anything about it and I could have it to myself'."


If you were to suggest something similar now that were politically feasible and similarly important to a country, I'd be shocked if it wasn't already happening. Invest in AI and advanced technologies?...

What about AI alignment work circa 2010?

Quick examples from the present day: preparing for risks from nanotechnology; working on geoengineering safety

Comment by milan_griffes on Update on CEA's EA Grants Program · 2019-11-14T12:47:22.260Z · score: 2 (1 votes) · EA · GW

Got it. What else could that staff capacity be used for that feels higher priority?

Comment by milan_griffes on Steelmanning the Case Against Unquantifiable Interventions · 2019-11-13T13:13:56.356Z · score: 2 (1 votes) · EA · GW
About predicting effectiveness, it seems your conclusion should be one of epistemic modesty relating to hard-to-quantify interventions, not that we should never think they are better.

This is where I'm at too – e.g. the impact of Bletchley Park would have been hard to quantify prospectively, and in retrospect was massively positive.

Curious if OP is actually saying the other thing (that hard-to-quantify implies lower cost-effectiveness).

Comment by milan_griffes on Institutions for Future Generations · 2019-11-13T03:05:17.802Z · score: 3 (2 votes) · EA · GW

Also this Quora thread: https://www.quora.com/What-is-the-oldest-institution-organization-that-exists-today

Comment by milan_griffes on Institutions for Future Generations · 2019-11-13T03:04:58.309Z · score: 3 (2 votes) · EA · GW

Yeah, it's a great question.

For Catholic stuff, The Great Heresies looks interesting, though old. (I haven't read it.)

I have thoughts about Mahayana Buddhist value transmission. Probably best to DM about that.

I bet Leah Libresco would have good thoughts on Catholic value transmission. Message me if an intro would be helpful.