Posts

Books and lecture series relevant to AI governance? 2021-07-18T15:54:32.894Z
Announcing the Nuclear Risk Forecasting Tournament 2021-06-16T16:12:39.249Z
Why EAs researching mainstream topics can be useful 2021-06-13T10:14:03.244Z
Overview of Rethink Priorities’ work on risks from nuclear weapons 2021-06-10T18:48:35.871Z
Final Report of the National Security Commission on Artificial Intelligence (NSCAI, 2021) 2021-06-01T08:19:15.901Z
Notes on Mochary's "The Great CEO Within" (2019) 2021-05-29T18:53:24.594Z
Intervention options for improving the EA-aligned research pipeline 2021-05-28T14:26:50.602Z
Reasons for and against posting on the EA Forum 2021-05-23T11:29:10.948Z
Goals we might have when taking actions to improve the EA-aligned research pipeline 2021-05-21T11:16:48.273Z
What's wrong with the EA-aligned research pipeline? 2021-05-14T18:38:19.139Z
Improving the EA-aligned research pipeline: Sequence introduction 2021-05-11T17:57:51.387Z
Thoughts on "A case against strong longtermism" (Masrani) 2021-05-03T14:22:11.541Z
Thoughts on “The Case for Strong Longtermism” (Greaves & MacAskill) 2021-05-02T18:00:32.482Z
My personal cruxes for focusing on existential risks / longtermism / anything other than just video games 2021-04-13T05:50:22.145Z
On the longtermist case for working on farmed animals [Uncertainties & research ideas] 2021-04-11T06:49:05.968Z
The Epistemic Challenge to Longtermism (Tarsney, 2020) 2021-04-04T03:09:10.087Z
New Top EA Causes for 2021? 2021-04-01T06:50:31.971Z
Notes on EA-related research, writing, testing fit, learning, and the Forum 2021-03-27T09:52:24.521Z
Notes on Henrich's "The WEIRDest People in the World" (2020) 2021-03-25T05:04:37.093Z
Notes on "Bioterror and Biowarfare" (2006) 2021-03-01T09:42:38.136Z
A ranked list of all EA-relevant (audio)books I've read 2021-02-17T10:18:59.900Z
Open thread: Get/give feedback on career plans 2021-02-12T07:35:03.092Z
Notes on "The Bomb: Presidents, Generals, and the Secret History of Nuclear War" (2020) 2021-02-06T11:10:08.290Z
Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.? 2021-02-02T03:52:43.821Z
Notes on Schelling's "Strategy of Conflict" (1960) 2021-01-29T08:56:24.810Z
How much time should EAs spend engaging with other EAs vs with people outside of EA? 2021-01-18T03:20:47.526Z
[Podcast] Rob Wiblin on self-improvement and research ethics 2021-01-15T07:24:30.833Z
Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? 2021-01-15T06:56:20.644Z
Books / book reviews on nuclear risk, WMDs, great power war? 2020-12-15T01:40:04.549Z
Should marginal longtermist donations support fundamental or intervention research? 2020-11-30T01:10:47.603Z
Where are you donating in 2020 and why? 2020-11-23T08:47:06.681Z
Modelling the odds of recovery from civilizational collapse 2020-09-17T11:58:41.412Z
Should surveys about the quality/impact of research outputs be more common? 2020-09-08T09:10:03.215Z
Please take a survey on the quality/impact of things I've written 2020-09-01T10:34:53.661Z
What is existential security? 2020-09-01T09:40:54.048Z
Risks from Atomically Precise Manufacturing 2020-08-25T09:53:52.763Z
Crucial questions about optimal timing of work and donations 2020-08-14T08:43:28.710Z
How valuable would more academic research on forecasting be? What questions should be researched? 2020-08-12T07:19:18.243Z
Quantifying the probability of existential catastrophe: A reply to Beard et al. 2020-08-10T05:56:04.978Z
Propose and vote on potential EA Wiki entries 2020-08-04T23:49:47.992Z
Extinction risk reduction and moral circle expansion: Speculating suspicious convergence 2020-08-04T11:38:48.816Z
Crucial questions for longtermists 2020-07-29T09:39:17.144Z
Moral circles: Degrees, dimensions, visuals 2020-07-24T04:04:02.017Z
Do research organisations make theory of change diagrams? Should they? 2020-07-22T04:58:41.263Z
Improving the future by influencing actors' benevolence, intelligence, and power 2020-07-20T10:00:31.424Z
Venn diagrams of existential, global, and suffering catastrophes 2020-07-15T12:28:12.651Z
Some history topics it might be very valuable to investigate 2020-07-08T02:40:17.734Z
3 suggestions about jargon in EA 2020-07-05T03:37:29.053Z
Civilization Re-Emerging After a Catastrophic Collapse 2020-06-27T03:22:43.226Z
I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate. 2020-05-11T09:35:22.543Z

Comments

Comment by MichaelA on MichaelA's Shortform · 2021-08-04T09:53:31.274Z · EA · GW

Quick thoughts on the question: "Is it better to try to stop the development of a technology, or to try to get there first and shape how it is used?"

(This is related to the general topic of differential progress.) 

(Someone asked that question in a Slack workspace I'm part of, and I spent 10 mins writing a response. I've copied and pasted that below with slight modifications. This is only scratching the surface and probably makes silly errors, but maybe this'll be a little useful to some people.)

  • I think the ultimate answer to that question is really something like "Whichever option has better outcomes, given the specifics of the situation." 
    • I don't think it's the case that either stopping the development or shaping how it's used is almost always best. 
    • And I think we should view it in terms of consequences, not in terms of something like deontology or a doing vs allowing harm distinction.
  • It might be the case that one approach is better (say) 55-90% of the time. But I don't know which way around that'd be, and I think it'd be better to focus on the details of the case.
  • For this reason, I think it's sort of understandable and appropriate that the EA/longtermist community doesn't have a principled overall stance on this sort of thing.
  • OTOH, it'd be nice to have something like a collection of considerations, heuristics, etc. that can then be applied, perhaps in a checklist-like manner, to the case at hand. And I'm not aware of such a thing. And that does seem like a failing of the EA/longtermist community.
  • [Person] is writing a paper on differential technological development, and it probably makes a step in this direction, but mostly doesn't aim to do this (if I recall correctly from the draft).
  • Some quick thoughts on things that could be included in that collection of considerations, heuristics, etc.:
    • How much (if at all) will your action actually make it more likely that the tech is developed? 
      • (Or "developed before society is radically transformed for some other reason", to account for Bostrom's technological completion conjecture.)
    • How much (if at all) will your action actually speed up when the tech is developed?
    • How (if at all) will your action change the exact shape/nature of the resulting tech?
      • E.g., maybe the same basic thing is developed, but with more safety features or in a way more conducive to guiding welfare interventions
      • E.g., maybe your action highlights the potential military benefits of an AI thing and so leads to more development of militarily relevant features
    • How (if at all) will your action change important aspects of the process by which the tech is developed?
      • This can be relevant to e.g. AI safety
      • E.g., we don't only care what the AI system is like, but also whether the development process has a race-like dynamic, or whether the development process is such that along the way powerful and dangerous AI may be released upon the world accidentally
      • E.g., is a biotech thing being developed in such a way that makes lab leaks more likely?
    • How (if at all) will your action change how the tech is deployed?
    • How (if at all) will your action let you influence all the above things for the better via giving you "a seat at the table", or something like that, rather than via the action directly?

Small case study: 

  • Let's say an EA-aligned funder donates to an AI lab, and thereby gets some level of influence over them or an advisory role or something. 
  • And let's say it seems about equally likely that this lab's existence/work increases x-risk as that it decreases it. 
  • It might still be good for the world that the funder funds that lab, if: 
    • that doesn't really much change the lab's likelihood of existing or the speed of their work or whatever
    • but it does give a very thoughtful EA a position of notable influence over them (which could then lead to more safety-conscious development, deployment, messaging, etc.)
Comment by MichaelA on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-03T14:10:21.825Z · EA · GW

Medium-sized donors can often find opportunities that aren’t practical for the largest donors to exploit – the ecosystem needs a mixture of ‘angel’ donors to complement the ‘VCs’ like Open Philanthropy. Open Philanthropy isn’t covering many of the problem areas listed here and often can’t pursue small individual grants.

This reminded me of the following post, which may be of interest to some readers: Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation

Comment by MichaelA on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-03T14:09:54.809Z · EA · GW

The Metaculus community also estimates there’s a 50% chance of another Good Ventures-scale donor within five years.

I think that that question would count Sam Bankman-Fried starting to give at the scale Good Ventures is giving as a positive resolution, and that some forecasters have that as a key consideration for their forecast (e.g., Peter Wildeford's comment suggests that). Whereas I think you're using this as evidence that there'll be another donor at that scale, in addition to both Good Ventures and the people at FTX? So this might be double-counting?

(But I only had a quick look at both the Metaculus question and the relevant section of your post, so I might be wrong). 

Comment by MichaelA on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-03T14:09:02.275Z · EA · GW

Thanks for this really interesting post! 

Overall I think all the core claims and implications sound right to me, but I'll raise a few nit-picks in comments.

We could break down some of the key leadership positions needed to deploy these funds as follows:

  1. Researchers able to come up with ideas for big projects, new cause areas, or other new ways to spend funds on a big scale
  2. EA entrepreneurs/managers/research leads able to run these projects and hire lots of people
  3. Grantmakers able to evaluate these projects

I agree with all that, but think that's somewhat too narrow a framing of how researchers can contribute to deploying these funds. I'd also highlight their ability to:

  • Help us sift through the existing ideas for projects, cause areas, "intermediate goals", etc. to work out what would be high-priority/cost-effective (or even just what seems net-positive overall)
  • Generate or sharpen insights, concepts, and/or vocabulary that can help the entrepreneurs, grantmakers, etc. do their work
    • E.g., as a (very new and temporary) grantmaker, I think I've probably done a better job because other people had previously developed the following concepts and terms and some analysis related to them: 
      • information hazards
      • the unilateralist's curse
      • disentanglement research
      • value of movement growth
      • talent constraints vs funding constraints vs vetting constraints
      • (a bunch of other things)
  • Maybe help refine precise ideas for cause areas, projects, etc. (but I'm less sure what I mean by this)

(That said, I think some other people are more pessimistic than me either about how much research has helped on these fronts or how much it's likely to in future. See e.g. some other parts of Luke's post or some comments on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?)

Comment by MichaelA on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-03T13:58:42.068Z · EA · GW

(Just want to say that I did find it a bit odd that Ben's post didn't mention timelines to transformative AI - or other sources of "hingeyness" - as a consideration, and I appreciate you raising it here. Overall, my timelines are longer than yours, and I'd guess we should be spending less than 10% per year, but it does seem a crucial consideration for many points discussed in the post.)

Comment by MichaelA on [deleted post] 2021-07-28T07:29:25.564Z

Yeah, I think that that'd work for this. Or maybe, to avoid a proliferation of tags, we should have forecasting and forecasts, and then just long-range forecasting; if people want to say something contains long-range forecasts, they can use long-range forecasting along with forecasts.

Comment by MichaelA on Propose and vote on potential EA Wiki entries · 2021-07-26T10:24:09.139Z · EA · GW

I do see this concept as relevant to various EA issues for the reasons you've described, and I think high-quality content covering "the value of open societies, the meaning of openness, and how to protect and expand open societies" would be valuable. But I can't immediately recall any Forum posts that do cover those topics explicitly. Do you know of posts that would warrant this tag?

If there aren't yet posts that'd warrant this tag, then we have at least the following (not mutually exclusive) options:

  1. This tag could be made later, once there are such posts
  2. You could write a post on those topics yourself
  3. An entry on those topics could be made
    • It's ok to have entries that don't have tagged posts
    • But it might be a bit odd for someone other than Pablo to jump to making an entry on a topic as one of the first pieces of EA writing on that topic?
      • Since wikis are meant to do things more like distilling existing work.
      • But I'm not sure.
      • This is related to the question of to what extent we should avoid "original research" on the EA Wiki, in the way Wikipedia avoids it
  4. Some other entry/tag could be made to cover similar ground
Comment by MichaelA on [deleted post] 2021-07-23T19:00:19.067Z

Should this tag be applied to posts that contain (links to) multiple thoughtful long-range forecasts but don't explicitly discuss long-range forecasting as distinct from forecasting in general? E.g., did it make sense for me to apply it to this post?

(I say "thoughtful" as a rough way of ruling out cases in which someone just includes a few quick numbers merely to try to give a clearer sense of their views, or something.)

I think LessWrong have separate tags for posts about forecasting and posts that contain forecasts. Perhaps we should do the same?

Comment by MichaelA on Propose and vote on potential EA Wiki entries · 2021-07-19T18:55:14.820Z · EA · GW

My personal, quick reaction is that that's a decently separate thing, that could have a separate tag if we feel that that's worthwhile. Some posts might get both tags, and some posts might get just one.

But I haven't thought carefully about this.

I also think I'd lean against having an entry for that purpose. It seems insufficiently distinct from the existing tags for career choice or community experiences, or from the intersection of the two.

Comment by MichaelA on Propose and vote on potential EA Wiki entries · 2021-07-19T09:49:27.726Z · EA · GW

Actually, having read your post, I now think it does sound more about jobs (or really "roles", but that sounds less clear) than about careers. So I now might suggest using the term job profiles.

Comment by MichaelA on You should write about your job · 2021-07-19T09:48:27.501Z · EA · GW

I think the MVP version you describe sounds good. I'd add that it seems like it'd sometimes/often be useful for people to also write some thoughts on whether and why they'd recommend people pursue such jobs? I think these posts would often be useful even without that, but that could sometimes/often make them more useful. 

Comment by MichaelA on You should write about your job · 2021-07-19T07:50:14.533Z · EA · GW

Yeah, I definitely expect it'd be worth many people doing this! 

I also tentatively suggested something somewhat similar recently in a shortform. I'll quote that in full:

Are there "a day in the life" / "typical workday" writeups regarding working at EA orgs? Should someone make some (or make more)?

I've had multiple calls with people who are interested in working at EA orgs, but who feel very unsure what that actually involves day to day, and so wanted to know what a typical workday is like for me. This does seem like useful info for people choosing how much to focus on working at EA vs non-EA orgs, as well as which specific types of roles and orgs to focus on. 

Having write-ups on that could be more efficient than people answering similar questions multiple times. And it could make it easier for people to learn about a wider range of "typical workdays", rather than having to extrapolate from whoever they happened to talk to and whatever happened to come to mind for that person at that time.

I think such write-ups are made and shared in some other "sectors". E.g. when I was applying for a job in the UK civil service, I think I recall there being a "typical day" writeup for a range of different types of roles in and branches of the civil service.

So do such write-ups exist for EA orgs? (Maybe some posts in the Working at EA organizations series serve this function?) Should someone make some (or make more)?

One way to make them would be for people thinking about career options to have the calls they would've had anyway, but ask if they can take more detailed conversation notes and then post them to the Forum. (Perhaps anonymising the notes, or synthesising a few conversations into one post, if that seems best.) That might allow these people to quickly provide a handy public service. (See e.g. the surprising-to-me number of upvotes and comments from me just posting these conversation notes I'd made for my own purposes anyway.)

I think ideally these write-ups would be findable from the Working at EA vs Non-EA Orgs tag. 

I think the key difference between my shortform and yours is that your suggestion is broader than just "typical day in the life" or just EA org jobs. I think it's indeed better to suggest something that's broader in those two ways. (I had just had in mind what happened to stand out to me that day after a call with someone.) 


Btw, Jamie Harris noted in a reply to my shortform: 

Animal Advocacy Careers skills profiles are a bit like this for various effective animal advocacy nonprofit roles. You can also just read my notes on the interviews I did (linked within each profile) -- they usually just start with the question "what's a typical day?" https://www.animaladvocacycareers.org/skills-profiles

So those profiles might be of interest to people on the object-level or as examples of what these posts could look like. (Though I don't think anyone should really need to see an example, and I haven't actually read any of those profiles myself.) 

Comment by MichaelA on Propose and vote on potential EA Wiki entries · 2021-07-19T07:41:32.254Z · EA · GW

Yeah, this seems worth having! And I appreciate you advocating for people to write these and for us to have a way to collect them, for similar reasons to those given in this earlier shortform of mine.

I think career profiles is a better term for this than job posts, partly because:

  • The latter sounds like it might be job ads or job postings
  • Some of these posts might not really be on "jobs" but rather things like being a semi-professional blogger, doing volunteering, having some formalised unpaid advisory role to some institution, etc.

OTOH, career profiles also sounds somewhat similar to 80k's career reviews. This could be good or bad, depending on whether it's important to distinguish what you have in mind from the career review format. (I don't have a stance on that, as I haven't read your post yet.)

Comment by MichaelA on Books and lecture series relevant to AI governance? · 2021-07-19T07:34:15.876Z · EA · GW

Thanks Mauricio!

(Btw, if anyone else is interested in "These histories of institutional disasters and near-disasters", you can find them in footnote 1 of the linked post.)

Comment by MichaelA on Books and lecture series relevant to AI governance? · 2021-07-18T16:37:58.584Z · EA · GW

Here are some relevant books from my ranked list of all EA-relevant (audio)books I've read, along with a little bit of commentary on them.

  • The Precipice, by Ord, 2020
    • See here for a list of things I've written that summarise, comment on, or take inspiration from parts of The Precipice.
    • I recommend reading the ebook or physical book rather than audiobook, because the footnotes contain a lot of good content and aren't included in the audiobook
    • Superintelligence may have influenced me more, but that’s just due to the fact that I read it very soon after getting into EA, whereas I read The Precipice after already learning a lot. I’d now recommend The Precipice first.
  • Superintelligence, by Bostrom, 2014
  • The Alignment Problem, by Christian, 2020
    • This might be better than Superintelligence and Human-Compatible as an introduction to the topic of AI risk. It also seemed to me to be a surprisingly good introduction to the history of AI, how AI works, etc.
    • But I'm not sure this'll be very useful for people who've already read/listened to a decent amount (e.g., the equivalent of 4 books) about those topics.
    • This is more relevant to technical AI safety than to AI governance (though obviously the former is relevant to the latter anyway).
  • Human-Compatible, by Russell, 2019
  • The Strategy of Conflict, by Schelling, 1960
    • See here for my notes on this book, and here for some more thoughts on this and other nuclear-risk-related books.
    • This is available as an audiobook, but a few Audible reviewers suggest using the physical book due to the book's use of equations and graphs. So I downloaded this free PDF into my iPad's Kindle app.
  • Destined for War, by Allison, 2017
    • See here for some thoughts on this and other nuclear-risk-related books, and here for some thoughts on this and other China-related books.
  • The Better Angels of Our Nature, by Pinker, 2011
    • See here for some thoughts on this and other nuclear-risk-related books.
  • Rationality: From AI to Zombies, by Yudkowsky, 2006-2009
    • I.e., “the sequences”
  • Age of Ambition, by Osnos, 2014
    • See here for some thoughts on this and other China-related books.
       
Comment by MichaelA on EA syllabi and teaching materials · 2021-07-18T10:18:47.269Z · EA · GW

Thanks for making this collection! One thing I don't think has been mentioned yet is the "sample syllabus on existential risks" from the website associated with The Precipice.

This is a sample syllabus on existential risk, intended as a helpful resource for people developing courses on existential risk — in schools, universities, independent reading groups, or elsewhere. It assumes that those participating have access to The Precipice, so takes this as the central text.

The syllabus is divided into 10 topics, covering many facets of existential risk. One could remove, add, or merge topics to cater to longer or shorter courses. Each topic has one or two highlighted readings (marked with an asterisk) and supplementary readings for those who wish to explore further. There are also a few key questions to guide thinking and discussion.

The syllabus is by no means comprehensive, and I’m very open to suggestions on how it could be improved or made more helpful to educators. Please send any comments to syllabus@theprecipice.com.

TOPICS

1. Foundations

2. Ethics of existential risk

3. Thinking about existential risk

4. Natural risks

5. Nuclear weapons

6. Climate change

7. Pandemics

8. Unaligned artificial intelligence

9. Political institutions

10. Macrostrategy

Comment by MichaelA on [deleted post] 2021-07-18T07:39:16.166Z

Wishlist for a heroic editor to someday tick off:

Comment by MichaelA on [deleted post] 2021-07-18T07:16:35.339Z

I envision this entry mainly discussing - and linking to posts and non-Forum sources that discuss - semiconductors/microchips/whatever in relation to AI timelines, AI risk, AI governance, or similar. My understanding is that this variable is discussed in relation to similar topics both by people interested in the long-term future, x-risk reduction, or similar and by people with more "mainstream" interests, such that the entry may as well capture sources from both of those categories of people.

Comment by MichaelA on [deleted post] 2021-07-18T07:13:42.055Z

I'm pretty unsure what the best name/scope for this entry is. Here are some options:

  • Semiconductors
  • Microchips
  • Chips
  • Integrated circuits
  • [maybe something else]
Comment by MichaelA on [deleted post] 2021-07-18T07:08:39.061Z

I think it'd be good to briefly discuss the distinction between RFPs, prizes, impact certificates, regular grantmaking, and maybe hiring, and the pros and cons of RFPs relative to those things. 

Comment by MichaelA on [deleted post] 2021-07-18T07:04:08.869Z

Kat Woods writes:

RFPs are a bit like job ads for organizations, usually for contract work. Instead of hiring an individual for a job, an RFP is put out to hire an organization or individual for a contract, and there’s much less management overhead than if the project was done in-house. (If you’d like a more detailed explanation of how they work, please see Appendix A.) 

The reason why RFPs are amazing is that they fix an underlying problem with most grantmaking: you can make an idea happen even if nobody is currently working on it. 

Think of it from the perspective of a large foundation. You’re a program officer there and just had an awesome idea for how to make AI safer. You’re excited. You have tons of resources at your disposal. All you have to do is find an organization that’s doing the idea, then give them oodles of money to scale it up. 

The problem is, you look around and find that nobody’s doing it. Or maybe there’s one team doing it, but they’re not very competent, and you worry they’ll do a poor job of it. 

Unfortunately for you, you’re out of luck. You could go start it yourself, but you’re in a really high impact role and running a startup wouldn’t be your comparative advantage. In your spare time you could try to convince existing orgs to do the idea, but that’s socially difficult and it’s hard to find the right team who’d be interested. Unfortunately, the usual grantmaking route is limited to choosing from existing organizations and projects. 

Now, if you had RFPs in your toolkit, you’d be able to put out an RFP for the idea. You could say, “The Nonlinear Fund is looking to fund people to do this idea. We’ll give up to $200,000 for the right team(s) to do it.” Then people will come. 

Values-aligned organizations that might not have known that you were interested in these projects will apply. Individuals who find the idea exciting and high impact will come forward. It will also help spread the idea, since people will know that there’s money and interest in the area. 

This is why Nonlinear (1) will do RFPs in addition to the usual grantmaking. This will allow our prioritization research to not be limited to only evaluating existing projects. 

Comment by MichaelA on [deleted post] 2021-07-18T06:55:35.346Z

Some ideas/arguments that could be drawn on for the body of this entry:

Muehlhauser writes:

Here's an initial brainstorm of project types for which there might be substantial ongoing demand from EA organizations, perhaps enough for them to be provided by one or more EA consultancies:

  • Run an EA-related RFP, filter the responses, summarize the strongest submissions for the client to consider funding

And also writes in the same post, in relation to why he's enthusiastic about EA consultancies:

Why not just use RFPs? I'm more optimistic about the consultancy model because it can more often leverage an existing relationship with an existing organization that is known to have hit some quality threshold for similar-ish projects in the past. In contrast, with RFPs the funder often needs to build a new relationship for every funded project, has much less context on each grantee on average, and grantees are less accountable for performance because they have a lower expectation for future funding from that funder compared to a consultancy that is more fundamentally premised on repeat business with particular clients.

Comment by MichaelA on [deleted post] 2021-07-18T06:53:14.412Z

I think this entry should serve three roles:

I only have time to plant the initial seeds of those things, but hopefully someone else can do more on them!

Comment by MichaelA on Propose and vote on potential EA Wiki entries · 2021-07-17T15:31:06.077Z · EA · GW

Update: I've now made this entry.

Requests for proposals or something like that

To cover posts like https://forum.effectivealtruism.org/posts/EEtTQkFKRwLniXkQm/open-philanthropy-is-seeking-proposals-for-outreach-projects 

This would be analogous to the Job listings tags, and sort of the inverse of the Funding requests tag.

This overlaps in some ways with Get involved and Requests (open), but seems like a sufficiently distinct thing that might be sufficiently useful to collect in one place that it's worth having a tag for this.

This could also be an entry that discusses pros, cons, and best practices for Requests for proposals. Related entries include Grantmaking and EA funding. 

Comment by MichaelA on Propose and vote on potential EA Wiki entries · 2021-07-17T14:19:04.659Z · EA · GW

Update: I've now made this entry.

Defense in depth

Relevant links/tags:

Seems like a useful concept for risk analysis and mitigation in general.

Comment by MichaelA on [deleted post] 2021-07-17T14:04:42.194Z

I think it'd be useful for someone to expand this entry by drawing on the content of Flynn's original 2017 post and the content of nonzerosum's post, and by explaining how this sort of work can aid in scalably using labour.

Comment by MichaelA on Propose and vote on potential EA Wiki entries · 2021-07-17T13:57:28.046Z · EA · GW

Update: I've now made this entry.

Semiconductors or Microchips or Integrated circuit or something like that

The main way this is relevant to EA is as a subset of AI governance / AI risk issues, which could push against having an entry just for this.

That said, my understanding is that a bunch of well-informed people see this as a fairly key variable for forecasting AI risks and intervening to reduce those risks, to the point where I'd say an entry seems warranted.

Comment by MichaelA on Intervention options for improving the EA-aligned research pipeline · 2021-07-17T08:05:38.215Z · EA · GW

(Yeah, I didn't mean that this meant your comment wasn't useful or that it wouldn't be a good idea to set up some sort of intervention to support this idea. I do hope someone sets up such an intervention, and I may try to help that happen sometime in future if I get more time or think of a particularly easy and high-leverage way to do so.)

Comment by MichaelA on [deleted post] 2021-07-17T08:03:47.048Z

On overlap between tags and when to apply tags, the tagging guidelines say:

The general tagging principle is that a tag should be added to a post when the post, including its comments thread, contains a substantive discussion of the tag's topic. As a very rough heuristic, to count as "substantive" a discussion has to be the primary focus of at least one paragraph or five sentences in the post or the associated comments.

So I think if a post is primarily about farmed animals but does have e.g. a paragraph explicitly about MCE or s-risks, then it should get those tags. If it's just that you believe the post is relevant to MCE or s-risks, but the post doesn't really make the reasoning for that clear to the reader, I think it shouldn't get the tag. (There would probably be trickier cases in between, and if you notice some it might be helpful to comment on the tagging guidelines page so the guidelines can be clarified in light of them.)

Comment by MichaelA on Intervention options for improving the EA-aligned research pipeline · 2021-07-16T10:23:51.020Z · EA · GW

Thanks! Yeah, this seems like a handy idea. 

I was recently reminded of the "Take action" / "Get involved" page on effectivealtruism.org, and I now see that that actually includes a page on Write a literature review or meta-analysis. That Take action page seems useful, and should maybe be highlighted more often. In retrospect, I probably should've linked to various bits of it from this post.

Comment by MichaelA on [deleted post] 2021-07-16T10:12:21.331Z

FWIW, I'm actually kind-of surprised by those questions, and they make me more confident that this sort of entry is useful. One specific thing I find problematic is how often the relevance of non-humans to long-term future stuff is seen as entirely a matter of MCE and/or s-risks, and then there's a totally separate discussion about other interventions, other risks (e.g., extinction risks), how good the future could be if things go well, etc.

Here are some quick notes on how the following points from the entry are not just about MCE or s-risks:

  • To what extent (if at all) is longtermism focused only on humans?
    • This could also be about things like whether current longtermists are motivated by consideration of non-humans, which is something more like current moral circles than moral circle expansion.
    • And moral circles is largely a matter of moral views; this is also substantially about empirical views, like which beings will exist and in what numbers and with what experiences.
  • To what extent should longtermists focus on improving wellbeing (or other outcomes) for humans? For other animals? For artificial sentiences? For something else?
    • Moral circle expansion may help achieve these goals, or may not, and other things may achieve them too
      • Other potentially relevant variables include the long reflection, epistemics, space governance, and authoritarianism
    • And which beings we should focus on is partly a question of which beings will exist, in what numbers, and with what experiences, and how tractable and neglected affecting their wellbeing will be
  • Are existential risks just about humans?
    • I'd actually guess that, even for existential risks other than s-risks, most of the badness is from effects on beings other than humans
      • Perhaps especially human-like digital minds, but also potentially "weirder" digital minds and/or wild animals on terraformed planets
      • E.g., if we could create vast good experiences for these beings but instead we go extinct or face unrecoverable collapse or dystopia, a big part of the badness could be the loss of what these beings (not humans) could've experienced
      • S-risks are of course also relevant here, but they aren't the only issue
  • Will most moral patients in the long-term future be humans? Other animals? Something else? By how large a margin?
    • (As noted above, this is to a substantial extent an empirical question)

It seems to me that it should be clear how an entry with this title should be distinct from an entry on "the attempt to expand the perceived boundaries of the category of moral patients" and an entry on "a risk involving the creation of suffering on an astronomical scale"? (Copying the descriptions from the MCE and s-risk tags.)

But I'm of course also open to suggestions on how to make the distinctions clearer.
 

Comment by MichaelA on MichaelA's Shortform · 2021-07-15T07:41:32.648Z · EA · GW

Why this might be worthwhile:

  • The EA community has collected and developed a very large set of ideas that aren't widely known outside of EA, such that "getting up to speed" can take a similar amount of effort to a decent fraction of a bachelor's degree
    • But the community is relatively small and new (compared to e.g. most academic fields), so we have relatively little in the way of textbooks, courses, summaries, etc.
    • This means it can take a lot of effort and time to get up to speed, lots of EAs have substantial "gaps" in their "EA knowledge", lots of concepts are misinterpreted or conflated or misapplied, etc.
  • The EA Wiki is a good step towards having good resources to help people get up to speed
  • A bunch of research indicates retrieval practice, especially when spaced and interleaved, can improve long-term retention and can also help with things like application of concepts (not just memory)
    • And Anki provides such spaced, interleaved retrieval practice
    • I'm being lazy in not explaining the jargon or citing my sources, but you can find some explanation and sources here: Augmenting Long-term Memory
  • If one person makes an Anki deck based on the EA Wiki entries, it can then be used and/or built on by other people, can be shared with participants in EA Fellowships, etc.

Possible reasons not to do this:

  • "There's a lot of stuff it'd be useful for people to know that isn't on EA Wiki entries. Why not make Anki cards on those things instead? Isn't this a bit insular?"
    • I think we can and should do both, rather than one or the other
    • Same goes for having Anki cards based on EA sources vs Anki cards based on non-EA sources
  • "This seems pretty time-consuming"
    • I think there are a lot of people in the EA community for whom engaging with the EA Wiki entries to the extent required to make this deck would be worthwhile just for themselves
    • I also think there are even more people in the EA community for whom using all or a subset of these cards will be worthwhile
    • (Though there are also of course people for whom these things aren't true)
  • "Many of the entries probably won't actually be that well-suited to Anki cards, or aren't on very important things"
    • Agreed
    • But many will be
    • The card-maker(s) can skip entries, and the card-users can delete some cards from their own copy of the deck
  • "This seems like rote learning / indoctrination / stifling creativity / rah rah"
    • I quite strongly feel that these sorts of concerns about things like Anki cards are often misguided, including in this case
    • I can expand on that if anyone actually does feel worried about this idea for this reason
Comment by MichaelA on MichaelA's Shortform · 2021-07-15T07:41:16.914Z · EA · GW

Maybe someone should make ~1 Anki card each for lots of EA Wiki entries, then share that Anki deck on the Forum so others can use it?

Specifically, I suggest that someone:

  1. Read/skim many/most/all of the EA Wiki entries in the "Cause Areas" and "Other Concepts" sections
    • Anki cards based on entries in the other sections (e.g., Organisations) would probably be less useful
  2. Make 1 or more Anki cards for many/most of those entries
    • In many cases, these cards might take forms like "The long reflection refers to... [answer]"
    • In many other cases, the cards could cover other insights, concepts, questions, etc. raised in the body of the entry
    • Making such cards seems less worthwhile for cases in which either:
      • the entry mainly exists as a tag (without itself having much content)
      • the entry is about a quite well-known thing and doesn't really say much that's not well-known in its body (e.g., the International relations tag)
  3. Export the file for the resulting deck and share it on the Forum and maybe elsewhere (see the rough sketch below for one way this step could be scripted)
    • Other people can then either use the whole deck or pick and choose which parts of the deck to use (e.g., deleting cards when they come up, if the person feels those cards aren't relevant to their interests and plans)

I think this could also be done gradually and/or by multiple people, rather than in one big batch by one person. It could also be done for the LessWrong Wiki. 
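For step 3, here's a minimal sketch of how the deck file could be generated programmatically, assuming the card-maker uses the Python genanki library (the entry titles and summaries below are made-up placeholders, not real EA Wiki content):

```python
import genanki  # third-party library for building Anki decks programmatically

# Placeholder (entry title, one-line summary) pairs; in practice these would be
# drawn from the actual EA Wiki entries.
entries = [
    ("Long reflection",
     "A proposed period of careful deliberation before humanity takes actions that are hard to reverse."),
    ("Information hazard",
     "A risk arising from the spread of true information that may enable harm."),
]

# A basic front/back note type. The numeric IDs just need to be arbitrary but
# fixed integers, so the model and deck stay stable across re-exports.
model = genanki.Model(
    1607392319,
    "EA Wiki basic",
    fields=[{"name": "Front"}, {"name": "Back"}],
    templates=[{
        "name": "Card 1",
        "qfmt": "{{Front}}",
        "afmt": "{{FrontSide}}<hr id='answer'>{{Back}}",
    }],
)

deck = genanki.Deck(2059400110, "EA Wiki concepts")
for title, summary in entries:
    # Cards of the form "X refers to... [answer]", as suggested above.
    deck.add_note(genanki.Note(model=model, fields=[f"{title} refers to...", summary]))

# Write a shareable .apkg file that others can import into Anki.
genanki.Package(deck).write_to_file("ea_wiki_concepts.apkg")
```

The resulting .apkg file is what could then be attached to or linked from a Forum post.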

If someone does make this deck, I would very likely use some/many of the cards myself and also promote the deck to a bunch of other people. 

(I also currently feel that this would be a sufficiently useful action to have taken that I'd be inclined to reward the person with some token amount of my own money to signal my appreciation / to compensate them for their time / because me saying that now might incentivise them. I'd only do this if the cards the person makes actually seems good. Feel free to contact me if you want to discuss that.)

Comment by MichaelA on EA needs consultancies · 2021-07-03T15:23:59.705Z · EA · GW

(Personal views only)

I found this post and the comments very interesting, and I'd be excited to see more people doing the sort of things suggested in this post.

That said, there's one point of confusion that remains for me, which is somewhat related to the point that "Right now the market for large EA consulting seems very isolated to OpenPhil". In brief, the confusion is something like "I agree that there is sufficient demand for EA consultancies. But a large enough fraction of that demand is from Open Phil that it seems unclear why Open Phil wouldn't instead or also do more in-house hiring." 

I think the resolution of this mystery is something like:

  • Really Open Phil should and plans to do both (a) more in-house hiring and (b) more encouragement and contracting of EA consultancies, but this post just emphasises one half of that
  • There are many reasons why Open Phil doesn't want to just hire more people in-house, and "our needs change over time, so we can't make a commitment that there's much future work of a particular sort to be done within our organizations" is actually a smaller part of that than this post (to me) implies

Does that sound right to you?

---

The rest of this comment just explains my confusion a bit more, and may be worth skipping.

The post says:

EA organizations like Open Phil and CEA could do a lot more if we had access to more analysis and more talent, but for several reasons we can't bring on enough new staff to meet these needs ourselves, e.g. because our needs change over time, so we can't make a commitment that there's much future work of a particular sort to be done within our organizations

[...] This system works because even though demand for these services can fluctuate rapidly at each individual client, in aggregate across many clients there is a steady demand for the consultancies' many full-time employees, and there is plenty of useful but less time-sensitive work for them to do between client requests. [emphasis added]

But then elsewhere you (Luke) write things like:

If their current typical level of analysis quality can be maintained, I would like to see RP scale as quickly as they can.

And:

If this was feasible to do while maintaining quality, I'd probably want to commission enough ongoing analysis from RP on AI governance research questions alone to sustain >10 FTEs there.

And:

(Even within Open Phil, a bit of robustness could come from multiple teams demanding a particular genre of services, e.g. at least 3 pretty independent teams at Open Phil have contracted Rethink Priorities for analysis work. But still much safer for contractors if there are several truly independent clients.)

In light of this and other things, it seems to me that Open Phil is big enough, RP researchers are generalist enough (or are sufficiently interested and capable in multiple Open Phil focus areas), and demand will remain high enough that it could also really make sense for Open Phil to hire more people who are roughly like RP researchers. 

It seems one could have predicted in the past, or at least can now predict, that some RP researchers will continue to be in demand by someone at Open Phil, for some project, for at least a few years, which implies that they or similar people could also be hired in-house.

(I'm not saying such people should be hired in-house by Open Phil. I think the current setup is also working well, hence me choosing to work at RP and being excited about RP trying to scale its longtermist work relatively rapidly. It's just that this makes me think that "our needs change over time, so we can't make a commitment that there's much future work of a particular sort to be done within our organizations" isn't really as large a part of the rationale for EA consultancies as this post seems to me to imply?)

Comment by MichaelA on [deleted post] 2021-07-03T13:53:38.812Z

Possible names for this entry:

  • Consultancy
  • Consulting
  • Consultants
  • Consultancies
Comment by MichaelA on [deleted post] 2021-07-03T13:53:15.108Z

Some types of things this entry & tag could cover:

  • Posts relevant to the idea of EAs acting as consultants to other EAs
    • E.g., this shortform of mine and maybe some links provided in it would warrant this tag if they were top-level Forum posts
  • Posts relevant to the idea of non-EAs acting as consultants to EAs
  • Posts about pros and cons of EAs doing non-EA consultancy work (e.g. management consultancy), tips for doing that, etc.
Comment by MichaelA on EA needs consultancies · 2021-07-03T09:20:09.213Z · EA · GW

RP and others offering incubation support and grants might also help.  The EA infrastructure fund drive probably helps but most people still don't know much about how to set up and run an organisation.  I think that charity entrepreneurship has a good model to learn from in that regard.  You get in, you learn, then if it goes well you will usually get funded. [emphasis added]

Actually, this makes me think, maybe it would be great if Charity Entrepreneurship's next "round" was focused on EA consultancies, rather than on a particular cause area? Their usual process seems potentially well-suited to this; they can survey relevant stakeholders regarding what needs exist and what might be best for filling them, do some additional shallow investigation of various ideas like those listed in Luke's post, then attract people and help them set these things up. 

At first glance, it seems at least plausible that: 

  • an EA funder would be happy to fund this whole process
  • this process would result in, say, ~3 orgs that will provide a fair amount of value at good cost-effectiveness for at least 2 years, & 1 org that might eventually grow up to be something kinda like RP. 

Maybe I'll contact CE to see what they think. I'd also be interested to hear if anyone thinks this would be a bad idea for some reason.

(I also think people applying to EA Funds, trying to learn from or get advice from RP, and/or trying to get funding and support in other ways would be good. But I agree that this won't always be "enough".)

Edit: Someone downvoted this, which seems reasonable if they mean to say "I do think that this would be a bad idea", but then I'd be quite interested to hear why they think so.

Comment by MichaelA on Propose and vote on potential EA Wiki entries · 2021-07-03T09:13:12.338Z · EA · GW

Update: I've now made this entry.

Consultancy (or maybe Consulting or Consultants or Consultancies)

Things this would cover:

Related entries

career choice | Effective Altruism and Consulting Network | org strategy | working at EA vs. non-EA orgs

(maybe there are also other good choices for related entries)

Comment by MichaelA on EA needs consultancies · 2021-07-03T08:45:53.112Z · EA · GW

(I work at RP, as well as at FHI and the EA Infrastructure Fund, but I'm writing in a personal capacity and describing activities I did in a personal capacity.) 

On a similar note, there have been at least two times in the last few months when I think I provided quite useful advice to someone who had recently started an organisation or planned to do so soon, basically just via me describing aspects of how RP thinks and works. And probably >10 times in the last few months when I provided quite useful advice to researchers or aspiring researchers simply by describing aspects of how RP generates ideas for research projects, prioritises among them, plans them, conducts them, disseminates findings, and assesses impact.*

I'll also be delivering a 1-hour workshop that partly covers that latter batch of topics to participants of a research training program soon,  and would potentially be open to delivering the same workshop to other groups as well. (You can see the slides and links to related resources here. Note that this workshop is something I'm doing in my personal time and expresses personal views only; it merely draws on things I've learned from RP.)

I say "quite useful" based on things like the people wanting the calls to run longer, asking for followup calls, writing up strategy docs afterwards and asking for my feedback on them, etc. I don't yet have much evidence of actual good outcomes in the world from this.

This all increases my enthusiasm about the idea of more people trying to copy or draw on good bits of RP, including via: 

  • people reading public writeups of aspects of RP's strategy (e.g. here and here)
  • RP producing more such writeups (though as Luke's post implies, there are many other projects competing for our staff time!)
  • Maybe RP people delivering some workshops on aspects of this, like I'm now dipping my toes into doing
  • people having calls with RP staff to talk about these things

(I also of course think there's a lot I and RP could usefully copy or draw on from elsewhere, and I've indeed already "imported" various things from e.g. CLR and FHI into RP or at least my own work.)

Basically, I'd be excited for lots of orgs and individual researchers to operate as anything on a spectrum from "good RP clones" to "very much their own thing, but remixing good aspects from RP and elsewhere". I think there's a lot of room for this.

I'm also now a guest fund manager at the EA Infrastructure Fund, and the version of me that wears that hat would likewise be excited about funding more people to do that sort of thing. (That of course doesn't mean that I'd want to fund every application like this, but I'd want to fund some and would be excited to have more such applications coming our way.)

(Again, just writing in a personal capacity.)

*I also have my own in-my-view-useful thoughts on these topics, but even if I had deleted all of those from the conversations and just described RP thinking and processes, I think the conversations would've been quite useful.

Comment by MichaelA on MichaelA's Shortform · 2021-06-30T13:17:52.961Z · EA · GW

The x-risk policy pipeline & interventions for improving it: A quick mapping

I just had a call with someone who's thinking about how to improve the existential risk research community's ability to cause useful policies to be implemented well. This made me realise I'd be keen to see a diagram of the "pipeline" from research to implementation of good policies, showing various intervention options and which steps of the pipeline they help with. I decided to quickly whip such a diagram up after the call, forcing myself to spend no more than 30 mins on it. Here's the result.

(This is of course imperfect in oodles of ways, probably overlaps with and ignores a bunch of existing work on policymaking*, presents things as more one-way and simplistic than they really are, etc. But maybe it'll be somewhat interesting/useful to some people.)

(If the images are too small for you, you can open each in a new tab.)

[Image: Steps in the pipeline, and example actors]
[Image: The first few steps + possible interventions. (I'm screenshotting one side of the diagram at a time so the text is large enough to read.)]
[Image: The last few steps + possible interventions]
[Image: The full diagram]

Feel free to ask me to explain anything that seems unclear. I could also probably give you an editable copy if you'd find that useful.

*One of many examples of the relevant stuff I haven't myself read is CSER's report on Pathways to Linking Science and Policy in the Field of Global Risk.

Comment by MichaelA on Improving the EA-aligned research pipeline: Sequence introduction · 2021-06-30T09:23:25.701Z · EA · GW

Luke Muehlhauser recently published a new post that's also quite relevant to the topics covered in this sequence: EA needs consultancies

See also his 2019 post Reflections on Our 2018 Generalist Research Analyst Recruiting.

Comment by MichaelA on A central directory for open research questions · 2021-06-27T17:49:13.457Z · EA · GW

Thanks! Added.

Comment by MichaelA on [deleted post] 2021-06-27T07:56:47.673Z

Hey Question Mark, this page is for behind-the-scenes Discussion of this wiki entry, rather than discussion of the topic. This is analogous to Wikipedia's Talk pages, which each say at the top:

This is the talk page for discussing improvements to the [ARTICLE NAME] article.
This is not a forum for general discussion of the article's subject.

(Melodramatic example.)

Speaking of which, I'll talk to the moderators about maybe adding a banner like that to the top of these pages to avoid future confusion.

(Also, this entry is about a specific type of alternative foods, not about things like veganism, though unfortunately the term that's currently used is pretty vague and ambiguous. Hopefully in future the common term will be "resilient foods" instead.) 

Comment by MichaelA on Shallow evaluations of longtermist organizations · 2021-06-27T07:41:21.733Z · EA · GW

Sadly, I imagine there's a significant bias for positive comments (I assume that people with negative experiences would be cautious of offending anyone), but positive comments still have signal.

Yeah, I think that this is true and that it's good that you noted it. 

Though that brings to mind another data point, which is that several people who did the summer research fellowship at the same time as me are still working at CLR. I also think that there might be a bias against people who still work at an org commenting, since they wouldn't want to look defensive or like they're just saying it to make their employer happy, or something. But overall I do think there's more bias towards positive comments.

(And there are also other people I haven't stayed in touch with and who aren't working there anymore, who for all I know could perhaps have had worse experiences.)

Comment by MichaelA on [deleted post] 2021-06-26T15:51:29.379Z

It's possible that this entry should be renamed resilient foods now, and (in my view) probable that it should be renamed to that at some future point.

  • Alternative foods is the typical name, but sounds like it could mean alternative proteins or a bunch of other things
  • Apparently ALLFED are going to rebrand this as resilient foods, which does seem much clearer to me
    • But the fact that this term is currently not the standard term is a mark against it

Pablo said:

Very weak preference for alternative foods until resilient foods becomes at least somewhat standard.

Comment by MichaelA on [deleted post] 2021-06-26T15:47:16.238Z

I think it'd be good to mention the Penn State University grant/work, but I wasn't able to immediately identify whether that team actually has done work on alternative foods yet, so I left it out for now.

Comment by MichaelA on Shallow evaluations of longtermist organizations · 2021-06-26T14:42:05.446Z · EA · GW

[I'll put some thoughts on the ALLFED section here to keep discussion organised, but this is responding to Nuno's section rather than David's comment.]

I feel that that 50% is still pretty good, but the contrast between it and the model's initial 95% is pretty noticeable to me, and makes me feel that the 95% is uncalibrated/untrustworthy. On the other hand, my probabilities above can also be seen as a sort of sensitivity analysis, which shows that the case for an organization working on ALLFED's cause area is somewhat more robust than one might have thought.

[...]

In conclusion, I disagree strongly with ALLFED's estimates (probability of cost overruns, impact of ALLFED's work if deployed, etc.), however, I feel that the case for an organization working in this area is relatively solid. My remaining uncertainty is about ALLFED's ability to execute competently and cost-effectively; independent expert evaluation might resolve most of it.

I think this mostly sounds similar to my independent impression, as expressed here, though I didn't particularly worry about their ability to execute competently and cost-effectively. (I'm not saying I felt highly confident about that; it just didn't necessarily stand out much to me as a key uncertainty, for whatever reason.) 

E.g., I wrote in the linked comment:

  • Their cost-effectiveness estimates seem remarkably promising (see here and here).
    • But it does seem quite hard to believe that the cost-effectiveness is really that good. And many of the quantities are based on a survey of GCR researchers, with somewhat unclear methodology (e.g., how were the researchers chosen?)
    • I also haven’t analysed the models very closely
    • But, other than perhaps the reliance on that survey, I can’t obviously see major flaws, and haven’t seen comments that seem to convincingly point out major flaws. So maybe the estimates are in the right ballpark?

One thing I'd add is that most of your (Nuno's) section on ALLFED reads as though it treats ALLFED's impact as mostly coming from their research & advocacy itself. But I think it's worth also giving a fair amount of emphasis to this question of yours: "Given that ALLFED has a large team, is it a positive influence on its team members? How would we expect employees and volunteers to rate their experience with the organization?"

I'd see a substantial fraction of ALLFED's value as coming from how it might work as a useful talent pipeline. And I think this could also be a source of nontrivial downside risk from ALLFED, e.g. if their training is low-quality for some reason, if people implicitly learn bad habits of thinking/research/modelling, or if their focus areas aren't good ones and working there makes volunteers more likely to stay focused on those areas long-term.

(I'm not saying that these things are the case. I'd currently guess that ALLFED produces notable impact as a talent pipeline. But I haven't looked closely and think it'd be worth doing so if one wanted to do a "thorough" evaluation of ALLFED.)

Comment by MichaelA on Shallow evaluations of longtermist organizations · 2021-06-26T14:28:39.473Z · EA · GW

Quick bits of info / thoughts on the questions you raise re CLR:

(I spent 3 months there as a Summer Research Fellow, but don't work there anymore, and am not suffering-focused, so might be well-positioned to share one useful perspective.)

  • Is most of their research only useful from a suffering-focused ethics (SFE) perspective?
    • I think all of the research that was being done while I was there would probably be important from a non-SFE longtermist perspective if it was important from an SFE longtermist perspective
      • It might also be important from neither perspective if some other premise underpinning the work was incorrect or the work was just low-quality. But:
        • I think it was all at least plausibly important
        • I think each individual line of work would be unlikely to turn out to be important from an SFE longtermist perspective but not from a non-SFE longtermist perspective
      • This is partly because much of it could be useful for non-s-risk scenarios
        • E.g., much of their AI work may also help reduce extinction risks, even if that isn't the focus of CLR as an organisation (it may be the focus of some individual researchers, e.g. Daniel K - not sure)
      • This is also partly because s-risks are really bad even from a non-SFE perspective (relative to the same future scenario minus the suffering)
    • All that said, work that's motivated by an SFE longtermist perspective should be expected to be higher priority from that perspective than from other perspectives, and I do think that's the case for CLR's work
      • That said, if CLR had substantial room for more funding and I had a bunch of money to donate, I'd seriously consider them (even if I pretended that I give SFE views 0 credence, whereas in reality I give them some small-ish credence)
  • Is there a better option for suffering-focused donors?
    • I think the key consideration here is actually room for more funding rather than how useful CLR's work is
      • I haven't looked into their room for more funding
  • Is the probability of astronomical suffering comparable to that of other existential risks?
  • Is CLR figuring out important aspects of reality?
    • I think so, but this is a vague question
  • Is CLR being cost-effective at producing research?
    • I haven't really thought about that
  • Is CLR's work on their "Cooperation, conflict, and transformative artificial intelligence"/"bargaining in artificial learners" agenda likely to be valuable?
    • I think so, but I'm not an expert on AI, game theory, etc.
  • Will CLR's future research on malevolence be valuable?
    • I think so, conditional on them doing a notable amount of such work (I don't know their current plans on that front)
    • And this is a topic I know more about, since it was one of my focuses during my fellowship
  • How effective is CLR at leveling up researchers?
    • I think I learned a lot while I was there, and I think the other summer research fellows whose views I have a sense of felt the same
      • But I haven't thought about that from the perspective of "Ok, but how much, and what was the counterfactual?" in the way that I would if considering donating a large amount to CLR
      • (I've noticed that my habits of thinking are different when I'm just having regular conversations or reading stuff or whatever vs evaluating a grant)
    • Two signals of my views on this:
      • I've recommended several people apply to the CLR summer research fellowship (along with other research training programs)
      • I've drawn on some aspects of or materials from CLR's summer research fellowship when informing the design of one or more other research training programs
  • "I get the impression that they are fairly disconnected from other longtermist groups (though CLR moved to London last year, which might remedy this.)"
    • I don't think CLR are fairly disconnected from other longtermist groups
    • Some data points:
      • Stefan Torges did a stint at GovAI
      • Max Daniel used to work there, still interacts with them in some ways, and works at FHI and is involved in a bunch of other longtermist stuff
      • I worked there and am still in touch with them semi-regularly
      • Daniel Kokotajlo used to work at AI Impacts
      • Alfredo Parra, who used to work there, now works at Legal Priorities Project
      • Jonas Vollmer used to work there and now runs EA Funds
      • I know of various other people who've interacted in large or small ways with both CLR and other longtermist orgs

I am not intending here to convince anyone to donate to CLR or work for CLR. I'm not personally donating to them, nor working there. That said, I do think they'd be a plausibly good donation target if they have room for more funding (I don't know about that), and that they'd be a good place to work for many longtermists (depending on personal fit, career plans, etc.).

Personal views only, as always.

Comment by MichaelA on Shallow evaluations of longtermist organizations · 2021-06-26T14:09:50.700Z · EA · GW

Despite living under the FHI umbrella, each of these projects has a different pathway to impact, and thus they should most likely be evaluated separately. [...]

Consider in comparison 80,000 hours' annual review, which outlines what the different parts of the organization are doing, and why each project is probably valuable. I think having or creating such an annual review probably adds some clarity of thought when choosing strategic decisions (though one could also cargo-cult such a review solely in order to be more persuasive to donors), and it would also make shallow evaluations easier.

I think I agree with this, and it reminds me of my question post Do research organisations make theory of change diagrams? Should they? and some of the views expressed by commenters there (e.g., by Max Daniel). (Though really the relevant thing here is more like "explicit, clear theory of change", rather than it necessarily being in the form of a diagram.)

(Personal view only.)

Comment by MichaelA on Shallow evaluations of longtermist organizations · 2021-06-26T14:07:03.227Z · EA · GW

I'd welcome comments about the overall method, about whether I'm asking the right questions for any particular organization, or about whether my tentative answers to those questions are correct, and about whether this kind of evaluation seems valuable. For instance, it's possible that I would have done better by evaluating all organizations using the same rubric (e.g., leadership quality, ability to identify talent, working on important problems, operational capacity, etc.)

FWIW:

  • I think I thought the questions you asked about each org seemed good
    • I say "I think I thought" because I wasn't actively trying to find questions I thought weren't useful, come up with more relevant questions, etc.
  • I think it seems reasonable to use different questions for each org
    • It seems reasonable for the questions to be guided by what the org's theory of change is, what seem to be the major plausible upside scenarios for the org, what seem to be the major plausible downside risks for the org, what seem to be the major uncertainties about or potential weaknesses of the org, etc., and these things differ a lot between orgs
    • It might be useful to also have some questions or a rubric that's used across all orgs (I'm not sure), but I think it'd still be good to have questions tailored to each specific org
      • Or perhaps the common questions/rubric elements could be broad enough that the tailored questions all fit under one question/element
        • Toy example: You have a broad question about each of importance, tractability, and neglectedness for every org, and then sub-questions tailored to each org under each of those factors