Posts

Database of orgs relevant to longtermist/x-risk work 2021-11-19T08:50:43.284Z
Competition for "Fortified Essays" on nuclear risk 2021-11-17T20:55:36.992Z
Some readings & notes on how to do high-quality, efficient research 2021-11-17T17:01:39.326Z
"Slower tech development" can be about ordering, gradualness, or distance from now 2021-11-14T20:58:04.899Z
How to decide which productivity coach to try? 2021-11-12T08:04:26.974Z
Things I often tell people about applying to EA Funds 2021-10-27T06:51:00.530Z
List of EA funding opportunities 2021-10-26T07:49:42.576Z
"Nuclear risk research, forecasting, & impact" [presentation] 2021-10-21T10:54:27.494Z
When and how should an online community space (e.g., Slack workspace) for a particular type/group of people be created? 2021-10-07T12:41:45.909Z
Event on Oct 9: Forecasting Nuclear Risk with Rethink Priorities' Michael Aird 2021-09-29T17:45:03.718Z
Independent impressions 2021-09-26T18:43:59.538Z
Improving EAs’ use of non-EA options for research training, credentials, testing fit, etc. 2021-09-11T13:52:15.738Z
Books and lecture series relevant to AI governance? 2021-07-18T15:54:32.894Z
Announcing the Nuclear Risk Forecasting Tournament 2021-06-16T16:12:39.249Z
Why EAs researching mainstream topics can be useful 2021-06-13T10:14:03.244Z
Overview of Rethink Priorities’ work on risks from nuclear weapons 2021-06-10T18:48:35.871Z
Final Report of the National Security Commission on Artificial Intelligence (NSCAI, 2021) 2021-06-01T08:19:15.901Z
Notes on Mochary's "The Great CEO Within" (2019) 2021-05-29T18:53:24.594Z
Intervention options for improving the EA-aligned research pipeline 2021-05-28T14:26:50.602Z
Reasons for and against posting on the EA Forum 2021-05-23T11:29:10.948Z
Goals we might have when taking actions to improve the EA-aligned research pipeline 2021-05-21T11:16:48.273Z
What's wrong with the EA-aligned research pipeline? 2021-05-14T18:38:19.139Z
Improving the EA-aligned research pipeline: Sequence introduction 2021-05-11T17:57:51.387Z
Thoughts on "A case against strong longtermism" (Masrani) 2021-05-03T14:22:11.541Z
Thoughts on “The Case for Strong Longtermism” (Greaves & MacAskill) 2021-05-02T18:00:32.482Z
My personal cruxes for focusing on existential risks / longtermism / anything other than just video games 2021-04-13T05:50:22.145Z
On the longtermist case for working on farmed animals [Uncertainties & research ideas] 2021-04-11T06:49:05.968Z
The Epistemic Challenge to Longtermism (Tarsney, 2020) 2021-04-04T03:09:10.087Z
New Top EA Causes for 2021? 2021-04-01T06:50:31.971Z
Notes on EA-related research, writing, testing fit, learning, and the Forum 2021-03-27T09:52:24.521Z
Notes on Henrich's "The WEIRDest People in the World" (2020) 2021-03-25T05:04:37.093Z
Notes on "Bioterror and Biowarfare" (2006) 2021-03-01T09:42:38.136Z
A ranked list of all EA-relevant (audio)books I've read 2021-02-17T10:18:59.900Z
Open thread: Get/give feedback on career plans 2021-02-12T07:35:03.092Z
Notes on "The Bomb: Presidents, Generals, and the Secret History of Nuclear War" (2020) 2021-02-06T11:10:08.290Z
Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.? 2021-02-02T03:52:43.821Z
Notes on Schelling's "Strategy of Conflict" (1960) 2021-01-29T08:56:24.810Z
How much time should EAs spend engaging with other EAs vs with people outside of EA? 2021-01-18T03:20:47.526Z
[Podcast] Rob Wiblin on self-improvement and research ethics 2021-01-15T07:24:30.833Z
Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? 2021-01-15T06:56:20.644Z
Books / book reviews on nuclear risk, WMDs, great power war? 2020-12-15T01:40:04.549Z
Should marginal longtermist donations support fundamental or intervention research? 2020-11-30T01:10:47.603Z
Where are you donating in 2020 and why? 2020-11-23T08:47:06.681Z
Modelling the odds of recovery from civilizational collapse 2020-09-17T11:58:41.412Z
Should surveys about the quality/impact of research outputs be more common? 2020-09-08T09:10:03.215Z
Please take a survey on the quality/impact of things I've written 2020-09-01T10:34:53.661Z
What is existential security? 2020-09-01T09:40:54.048Z
Risks from Atomically Precise Manufacturing 2020-08-25T09:53:52.763Z
Crucial questions about optimal timing of work and donations 2020-08-14T08:43:28.710Z
How valuable would more academic research on forecasting be? What questions should be researched? 2020-08-12T07:19:18.243Z

Comments

Comment by MichaelA on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2021-11-30T16:07:11.702Z · EA · GW

Could you say whether this was Habryka or not? (Since Habryka has now given an answer in a separate comment here, and it seems useful to know whether those are the same data point twice or not. Habryka's number seems a factor of 3-10 off of yours, but I'd call that "similar" in this context.)

Comment by MichaelA on Propose and vote on potential EA Wiki entries · 2021-11-29T20:00:29.591Z · EA · GW

Corporate governance

Example of a relevant post: https://forum.effectivealtruism.org/posts/5MZpxbJJ5pkEBpAAR/the-case-for-long-term-corporate-governance-of-ai

I've mostly thought about this in relation to AI governance, but I think it's also important for space governance and presumably various other EA issues. 

I haven't thought hard about whether this really warrants an entry, nor scanned for related entries - just throwing an idea out there.

Comment by MichaelA on Wikipedia editing is important, tractable, and neglected · 2021-11-29T17:20:07.788Z · EA · GW

General quibble: Many parts of this post feel like they're just about whether editing Wikipedia is better than doing nothing, doing what non-EAs tend to do, or doing something not at all optimised for EA goals, rather than whether editing Wikipedia is better than relatively obvious alternative ways readers of this post could spend their time. E.g.:

  • Yes, Wikipedia editing may well be better for learning than passive methods like re-reading and highlighting, but the post doesn't mention that there are also many other things better than passive methods, and doesn't try to compare Wikipedia against those. E.g. making Anki cards, writing Forum posts summarising the material, doing an online course rather than just reading, participating in a research training program like SERI.
  • Even if Wikipedia editing histories would be looked on favourably by employers (see my specific quibble for commentary), how does that compare to using the same time on an online course, blog post writing, practicing whatever skills the job requires, running events for a local group, doing forecasting, doing an internship, etc.?

I think it makes sense that this post can't carefully compare Wikipedia editing against all such alternatives. And it could be that Wikipedia editing is a "second best" option for many goals and therefore often first best overall, or something. But (at least when listening to the Nonlinear Library podcast version of this) it felt like the post was sometimes saying "Wikipedia is good for X, therefore if you want X you should strongly consider editing Wikipedia", even when I could quickly think of 5 things that might be better for X that weren't mentioned. And that then feels to me a bit misleading / not ideal.

(Again, I liked the post overall and have already shared it with people. This is meant as constructive criticism rather than as a smackdown. I feel like for some reason the tone above is harsher than I really mean - not sure why it's coming out that way - so apologies if it indeed comes across that way :) )

Comment by MichaelA on Wikipedia editing is important, tractable, and neglected · 2021-11-29T16:02:07.562Z · EA · GW

Specific quibble: I'm skeptical that the following claim is true to a noteworthy degree, and (due to that) feel that it was a bit odd that the claim was made without reasoning/evidence being provided:

Wikipedia user profiles are publicly visible and can be linked to a real person—for instance, by making your real name your user name or by adding relevant details about you to your Wikipedia user page. You could then add your Wikipedia profile to a CV or on LinkedIn. Depending on your profession, potential employers may be impressed by a good Wikipedia track record (this likely includes most EA organisations).

Was this based on conversations with any employers? Obviously employers may be impressed by this, but they also may be impressed by all sorts of other things, and I'd guess this'd be less impressive both to most EA orgs and to most non-EA orgs than various other things people could do with the same amount of time. That's in line with the following thing that you note elsewhere (and which I agree with):

Wikipedia is a global public good (i.e. it is non-rivalrous, non-excludable, and available everywhere). Consequently, Wikipedia editing is likely undersupplied relative to the socially optimal level. A key reason for this is that the incentives to edit Wikipedia are insufficient: [...] (iii) in most (but not all) social contexts, you are likely to get less credit for Wikipedia editing than for more traditional activities (such as writing a book, blog posts, or insightful social media posts). 

Also, I think many (most?) employers will just pay much less attention to any prior experience than to how well a candidate does in work tests, interviews, and/or work trials. 

Comment by MichaelA on Wikipedia editing is important, tractable, and neglected · 2021-11-29T16:00:37.903Z · EA · GW

Thanks! I broadly agree with this, think this is a useful and well-written post, and have already shared it with two people who I think will find it useful.

But I have two criticisms/quibbles, one specific and one more general. I'll put them in separate comments.

Comment by MichaelA on Risks from Atomically Precise Manufacturing · 2021-11-29T12:32:07.270Z · EA · GW

One somewhat tangential thing you might find interesting is how prominent nanotech seems to be in many of the "Late 2021 MIRI Conversations". None of the mentions there seem to suggest anyone should try to study or influence nanotech itself, though; rather, they suggest nanotech could be a key tool used by agentic AI systems.

Comment by MichaelA on Propose and vote on potential EA Wiki entries · 2021-11-29T12:29:28.803Z · EA · GW

Brain-computer interfaces

See also the LW wiki entry / tag, which should be linked to from the Forum entry if we make one: https://www.lesswrong.com/tag/brain-computer-interfaces

Relevant posts:

Comment by MichaelA on How would you define "existential risk?" · 2021-11-29T08:15:43.243Z · EA · GW

I wrote a post last year basically trying to counter misconceptions about Ord's definition and also somewhat operationalise it. Here's the "Conclusion" section:

To summarise:

  • Existential risks are distinct from existential catastrophes, extinction risks, and global catastrophic risks.
  • [I'd say that] An existential catastrophe involves the destruction of the vast majority of humanity’s potential - not necessarily all of humanity’s potential, but more than just some of it.
  • Existential catastrophes could be “slow-moving” or not apparently “catastrophic”; at least in theory, our potential could be destroyed slowly, or without this being noticed.

That leaves ambiguity as to precisely what fraction is sufficient to count as "the vast majority", but I don't think that's a very important ambiguity - e.g., I doubt people's estimates would change a lot if we set the bar at 75% of potential lost vs 99%.

I think the more important ambiguities are what our "potential" is and what it means to "lose" it. As Ord defines x-risk, that's partly a question of moral philosophy - i.e. it's as if his definition contains a "pointer" to whatever moral theories we have credence in, our credence in them, and our way of aggregating that, rather than baking a moral conclusion in. E.g., his definition deliberately avoids taking a stance on things like whether a future where we stay on Earth forever, or a future with only strange but in some sense "happy" digital minds, or failing to reach such futures, would be an existential catastrophe.

This footnote from my post is also relevant: 

I don’t believe Bostrom makes explicit what he means by “potential” in his definitions. Ord writes “I’m making a deliberate choice not to define the precise way in which the set of possible futures determines our potential”, and then discusses that point. I’ll discuss the matter of “potential” more in an upcoming post.

Another approach would be to define existential catastrophes in terms of expected value rather than “potential”. That approach is discussed by Cotton-Barratt and Ord (2015).

Comment by MichaelA on Listen to more EA content with The Nonlinear Library · 2021-11-27T14:44:26.736Z · EA · GW

It seems like this feed is now capturing only a small portion of EA Forum posts meeting a 25 karma threshold or a 25 total positive votes threshold. (Not sure which of those thresholds footnote 2 is indicating you intended to use. But I notice that this post was included in the feed despite having <25 individual votes, while having >25 karma.)

E.g., this post wasn't included, and some of the recent "MIRI conversations" posts weren't captured. 

I have become surprisingly dependent on this feed surprisingly quickly, so (a) I'm wondering if you could return it to capturing everything meeting the intended threshold and (b) I'm grateful to you for making this!

Comment by MichaelA on Pathways to impact for forecasting and evaluation · 2021-11-26T15:26:35.266Z · EA · GW

Thanks, I found this (including the comments) interesting, both for the object-level thoughts on forecasting, evaluations, software, Metaforecast, and QURI, and for the meta-level thoughts on whether and how to make such diagrams.

On the meta side of things, this post also reminds me of my question post from last year Do research organisations make theory of change diagrams? Should they? I imagine you or readers might find that question & its answers interesting. (I've also now added an answer linking to this post, quoting the Motivation and Reflections, and quoting Ozzie's thoughts given below.)

Comment by MichaelA on Do research organisations make theory of change diagrams? Should they? · 2021-11-26T15:23:44.611Z · EA · GW

On the same post, Ozzie Gooen (Nuno's colleague at QURI) wrote:

I looked over an earlier version of this, just wanted to post my takes publicly.[1]

I like making diagrams of impact, and these seem like the right things to model. Going through them, many of the pieces seem generally right to me. I agree with many of the details, and I think this process was useful for getting us (QURI, which is just the two of us now) on the same page.

At the same time though, I think it's surprisingly difficult to make these diagrams understandable for many people.

Things get messy quickly. The alternatives are to make them much simpler, and/or to try to style them better. 

I think these could have been organized much neater, for example, by:

  • Having the flow always go left-to-right.
  • Using a different diagram editor that looks neater.
  • Reducing the number of nodes by maybe 30% or so.
  • Maybe neater arrow structures (having 90° lines, rather than diagonal lines) or something.

That said, this would have been a lot of work to do (required deciding on and using different software), and there's a lot of stuff to do, so this is more "stuff to keep in mind for the future, particularly if we want to share these with many more people." (Nuno and I discussed this earlier)

One challenge is that some of the decisions on the particularities of the causal paths feel fairly ad-hoc, even though they make sense in isolation. I think they're useful for a few people to get a grasp on the main factors, but they're difficult to use for getting broad buy-in. 

If you take a quick glance and just think, "This looks really messy, I'm not going to bother", I don't particularly blame you (I've made very similar things that people have glanced over).

But the information is interesting, if you ever consider it worth your time/effort!

So, TLDR:

  • Impact diagrams are really hard. At these levels of detail, much more so.
  • This is a useful exercise, and it's good to get the information out there.
  • I imagine some viewers will be intimidated by the diagrams.
  • I'm a fan of experimenting with things like this and trying out new software, so that was neat.

[1] I think it's good to share these publicly for transparency + understanding.

Comment by MichaelA on Do research organisations make theory of change diagrams? Should they? · 2021-11-26T15:11:15.535Z · EA · GW

Nuno Sempere of QURI just published a short post on Pathways to impact for forecasting and evaluation, which has two big diagrams and opens with:

As part of the Quantified Uncertainty Research Institute's (QURI) strategy efforts, I thought it would be a good idea to write down what I think the pathways to impact are for forecasting and evaluations. Comments are welcome, and may change what QURI focuses on in the upcoming year. 

He adds in a section on "Reflections":

Most of the benefit of these kinds of diagrams seems to me to come from the increased clarity they allow for when thinking about their content. Otherwise, I imagine that they might make QURI's and my own work more legible to outsiders by making our assumptions or steps more explicit, which itself might allow for people to point out criticism. Note that my guess about the main pathways is highlighted in bold, so one could disagree about that without disagreeing about the rest of the diagram.

I also imagine that the forecasting and evaluations pathways could be useful to organizations other than QURI (Metaculus, other forecasting platforms, people thinking of commissioning evaluations, etc.) 

It seems to me that producing these kinds of diagrams is easier over an extended period of time, rather than in one sitting, because one can then come back to aspects that seem missing.

Comment by MichaelA on List of EA funding opportunities · 2021-11-22T17:22:18.815Z · EA · GW

Here's more info on Open Phil's 4 requests for proposals for certain areas of technical AI safety work, copied from the Alignment Newsletter:

Request for proposals for projects in AI alignment that work with deep learning systems (Nick Beckstead and Asya Bergal) (summarized by Rohin): Open Philanthropy is seeking proposals for AI safety work in four major areas related to deep learning, each of which I summarize below. Proposals are due January 10, and can seek up to $1M covering up to 2 years. Grantees may later be invited to apply for larger and longer grants.

Rohin's opinion: Overall, I like these four directions and am excited to see what comes out of them! I'll comment on specific directions below.

 

RFP: Measuring and forecasting risks (Jacob Steinhardt) (summarized by Rohin): Measurement and forecasting is useful for two reasons. First, it gives us empirical data that can improve our understanding and spur progress. Second, it can allow us to quantitatively compare the safety performance of different systems, which could enable the creation of safety standards. So what makes for a good measurement?

1. Relevance to AI alignment: The measurement exhibits a failure mode that becomes worse as models become larger, or tracks a potential capability that may emerge with further scale (which in turn could enable deception, hacking, resource acquisition, etc).

2. Forward-looking: The measurement helps us understand future issues, not just those that exist today. Isolated examples of a phenomenon are good if we have nothing else, but we’d much prefer to have a systematic understanding of when a phenomenon occurs and how it tends to quantitatively increase or decrease with various factors. See for example scaling laws (AN #87).

3. Rich data source: Not all trends in MNIST generalize to CIFAR-10, and not all trends in CIFAR-10 generalize to ImageNet. Measurements on data sources with rich factors of variation are more likely to give general insights.

4. Soundness and quality: This is a general category for things like “do we know that the signal isn’t overwhelmed by the noise” and “are there any reasons that the measurement might produce false positives or false negatives”.

What sorts of things might you measure?

1. As you scale up task complexity, how much do you need to scale up human-labeled data to continue to maintain good performance and avoid reward hacking? If you fail at this and there are imperfections in the reward, how bad does this become?

2. What changes do we observe based on changes in the quality of the human feedback (e.g. getting feedback from amateurs vs experts)? This could give us information about the acceptable “difference in intelligence” between a model and its supervisor.

3. What happens when models are pushed out of distribution along a factor of variation that was not varied in the pretraining data?

4. To what extent do models provide wrong or undesired outputs in contexts where they are capable of providing the right answer?

Rohin's opinion: Measurements generally seem great. One story for impact is that we have a measurement that we think is strongly correlated with x-risk, and we use that measurement to select an AI system that scores low on such a metric. This seems distinctly good and I think would in fact reduce x-risk! But I want to clarify that I don’t think it would convince me that the system was safe with high confidence. The conceptual arguments against high confidence in safety seem quite strong and not easily overcome by such measurements. (I’m thinking of objective robustness failures (AN #66) of the form “the model is trying to pursue a simple proxy, but behaves well on the training distribution until it can execute a treacherous turn”.)

You can also tell stories where the measurements reveal empirical facts that then help us have high confidence in safety, by allowing us to build better theories and arguments, which can rule out the conceptual arguments above.

Separately, these measurements are also useful as a form of legible evidence about risk to others who are more skeptical of conceptual arguments.

 

RFP: Techniques for enhancing human feedback (Ajeya Cotra) (summarized by Rohin): Consider a topic previously analyzed in aligning narrowly superhuman models (AN #141): how can we use human feedback to train models to do what we want in cases where the models are more knowledgeable than the humans providing the feedback? A variety of techniques have been proposed to solve this problem, including iterated amplification (AN #40), debate (AN #5), recursive reward modeling (AN #34), market making (AN #108), and generalizing from short deliberations to long deliberations. This RFP solicits proposals that aim to test these or other mechanisms on existing systems. There are a variety of ways to set up the experiments so that the models are more knowledgeable than the humans providing the feedback, for example:

1. Train a language model to accurately explain things about a field that the feedback providers are not familiar with.

2. Train an RL agent to act well in an environment where the RL agent can observe more information than the feedback providers can.

3. Train a multilingual model to translate between English and a foreign language that the feedback providers do not know.

 

RFP: Interpretability (Chris Olah) (summarized by Rohin): The author provides this one sentence summary: We would like to see research building towards the ability to “reverse engineer” trained neural networks into human-understandable algorithms, enabling auditors to catch unanticipated safety problems in these models.

This RFP is primarily focused on an aspirational “intermediate” goal: to fully reverse engineer some modern neural network, such as an ImageNet classifier. (Despite the ambition, it is only an “intermediate” goal because what we would eventually need is a general method for cheaply reverse engineering any neural network.) The proposed areas of research are primarily inspired by the Circuits line of work (AN #142):

1. Discovering Features and Circuits: This is the most obvious approach to the aspirational goal. We simply “turn the crank” using existing tools to study new features and circuits, and this fairly often yields an interesting result that makes progress towards reverse engineering a neural network.

2. Scaling Circuits to Larger Models: So far the largest example of reverse engineering is curve circuits, with 50K parameters. Can we find examples of structure in the neural networks that allow us to drastically reduce the amount of effort required per parameter? (As examples, see equivariance and branch specialization.)

3. Resolving Polysemanticity: One of the core building blocks of the circuits approach is to identify a neuron with a concept, so that connections between neurons can be analyzed as connections between concepts. Unfortunately, some neurons are polysemantic, that is, they encode multiple different concepts. This greatly complicates analysis of the connections and circuits between these neurons. How can we deal with this potential obstacle?

Rohin's opinion: The full RFP has many, many more points about these topics; it’s 8 pages of remarkably information-dense yet readable prose. If you’re at all interested in mechanistic interpretability, I recommend reading it in full.

This RFP also has the benefit of having the most obvious pathway to impact: if we understand what algorithm neural networks are running, there’s a much better chance that we can catch any problems that arise, especially ones in which the neural network is deliberately optimizing against us. It’s one of the few areas where nearly everyone agrees that further progress is especially valuable.

 

RFP: Truthful and honest AI (Owain Evans) (summarized by Rohin): This RFP outlines research projects on Truthful AI (summarized below). They fall under three main categories:

1. Increasing clarity about “truthfulness” and “honesty”. While there are some tentative definitions of these concepts, there is still more precision to be had: for example, how do we deal with statements with ambiguous meanings, or ones involving figurative language? What is the appropriate standard for robustly truthful AI? It seems too strong to require the AI system to never generate a false statement; for example it might misunderstand the meaning of a newly coined piece of jargon.

2. Creating benchmarks and tasks for Truthful AI, such as TruthfulQA (AN #165), which checks for imitative falsehoods. This is not just meant to create a metric to improve on; it may also simply perform as a measurement. For example, we could experimentally evaluate whether honesty generalizes (AN #158), or explore how much truthfulness is reduced when adding in a task-specific objective.

3. Improving the truthfulness of models, for example by finetuning models on curated datasets of truthful utterances, finetuning on human feedback, using debate (AN #5), etc.

Besides the societal benefits from truthful AI, building truthful AI systems can also help with AI alignment:

1. A truthful AI system can be used to supervise its own actions, by asking it whether its selected action was good.

2. A robustly truthful AI system could continue to do this after deployment, allowing for ongoing monitoring of the AI system.

3. Similarly, we could have a robustly truthful AI system supervise its own actions in hypothetical scenarios, to make it more robustly aligned.

Rohin's opinion: While I agree that making AI systems truthful would then enable many alignment strategies, I’m actually more interested in the methods by which we make AI systems truthful. Many of the ideas suggested in the RFP are ones that would apply to alignment more generally and aren’t particularly specific to truthful AI. So it seems like whatever techniques we used to build truthful AI could then be repurposed for alignment. In other words, I expect that the benefit to AI alignment of working on truthful AI is that it serves as a good test case for methods that aim to impose constraints upon an AI system. In this sense, it is a more challenging, larger version of the “never describe someone getting injured” challenge (AN #166). Note that I am only talking about how this helps AI alignment; there are also beneficial effects on society from pursuing truthful AI that I haven’t talked about here.

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-22T08:51:53.833Z · EA · GW

Yeah, though I’m also aiming to work on fewer things as “a goal in itself”, not just as a byproduct of slicing off the things that are less important or less my comparative advantage. This is because more focus seems useful in order to become really excellent at a set of things, ensure I more regularly actually finish things, and reduce the inefficiencies caused by frequent task/context-switching.

Comment by MichaelA on MichaelA's Shortform · 2021-11-21T10:00:22.003Z · EA · GW

I've made a database of AI safety/governance surveys & survey ideas. I'll copy the "READ ME" page below. Let me know if you'd like access to the database, if you'd suggest I make a more public version, or if you'd like to suggest things be added. 

"This spreadsheet lists surveys & ideas for surveys that are very relevant to AI safety/governance, including surveys which are in progress, ideas for surveys, and published surveys. The intention is to make it easier for people to:

1. Find out about outputs or works-in-progress they might want to read (perhaps contacting the authors)
2. Find out about projects/ideas they might want to lead, collaborate on, or provide input to
3. Find out about projects/ideas that might fill a gap the person was otherwise considering trying to fill (i.e., reduce duplication of work)

I (Michael Aird) made this spreadsheet quite quickly. For now I’m only sharing it with people at Rethink Priorities, people at GovAI, and a couple members of the EA community who I’ve spoken to and who are potentially interested in doing AI-related survey work.

I expect this spreadsheet misses many relevant things and that its structure/content could be improved (e.g., maybe it should be a Doc or an Airtable? Maybe some columns should be added/removed?). It might also make sense to have one version that’s more private and another that’s more public.

Please feel free to leave comments/suggestions about anything and to suggest I share this with particular people.

If you’d like access to a link shown in this spreadsheet that you don’t have access to, let me know."

Comment by MichaelA on Database of orgs relevant to longtermist/x-risk work · 2021-11-21T09:56:18.261Z · EA · GW

"If you spot any errors or if you know any relevant info I failed to mention about these orgs, let me know via an EA Forum message or via following this link and then commenting there."

(The very first link I provide in this post allows changing the filtering & sorting, but not commenting, so you have to instead either send a message or use that other link.)

Thanks for your interest in suggesting extra info / correction :) 

Comment by MichaelA on EA Communication Project Ideas · 2021-11-20T16:40:53.883Z · EA · GW

Most of these ideas do sound good to me (the others I'm ~agnostic on), and in general I like the idea of firing off quick posts like this with relatively concrete ideas of useful things for "junior EAs" to do.

For additional ideas/discussion, people might also be interested in Suggestion: EAs should post more summaries and collections and/or Notes on EA-related research, writing, testing fit, learning, and the Forum.

I think contributing to the EA Wiki might also be a good example of a "small EA communication project which independent EAs with reasonable understandings of EA and communication practices can do, without needing to be employed by an EA organization etc." (paraphrasing you). 

Comment by MichaelA on EA Communication Project Ideas · 2021-11-20T16:37:17.368Z · EA · GW

Make podcasts that read out newsletters

  • Robert Miles does this for the alignment newsletter, for example
  • There are a bunch of other newsletters you could do this for

I'd like this! In particular, I'd personally like this for various AI-governance-related newsletters, e.g. import.ai, ChinAI, CSET's newsletter, and Charlotte Stix's newsletter. I've subscribed to these and would like to at least skim them but almost never do, but I find it easier to make time for listening than for reading.

Comment by MichaelA on Database of orgs relevant to longtermist/x-risk work · 2021-11-20T14:19:30.998Z · EA · GW

Ah, nice, thanks for that! It seems that that indeed allows for changing both "Filtered by" and "Sorted by", including from each of my pre-set views, without that changing things for other people, so that's perfect!

I still want to provide the comment access version as well, so people can more easily make suggestions on specific entries. But I'll edit my post to swap the Softr link for the link you suggested and to make the comment access link less prominent.

Comment by MichaelA on Database of orgs relevant to longtermist/x-risk work · 2021-11-20T11:48:30.582Z · EA · GW

Can you share a public grid view of the Airtable in a way that allows people to filter and/or sort however they want but then doesn't make that the filtering/sorting that everyone else sees? I wasn't aware of how to do that, which is the sole reason I added the Softr option. I think the set of Airtable views I also link people to is probably indeed better if people are happy with the views (i.e., combos of filters and orders) that I've already set up.

Agreed that an all-of-EA version of this would also be useful, and that Airtable would be better for that than Notion, a Forum post, or a Google Sheet. I also expect it's something that literally anyone reading this could set up in less than a day, by:

  • duplicating my database
  • manually adding things from Gittins' and Taymon's database
  • maybe removing anything that was in mine that might be out of scope for them (e.g., if they want to limit the scope to just orgs that are in or "aware of & friendly to" EA, since a database of all orgs that are merely quite relevant to any EA cause area may be too large a scope)
  • looking up how to do Airtable stuff whenever stuck (I found the basics fairly easy, more so than expected)

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-20T10:44:20.070Z · EA · GW

I expect we'll also talk a lot to various people outside of RP who have important decisions to make and could potentially be influenced by us and/or who just have strong expertise and judgement in one or more relevant domains (e.g., major EA funders, EA-aligned policy advisors, strong senior researchers) to get their thoughts on what it'd be most useful to do and the pros and cons of various avenues we might pursue. 

(We sort-of passively do this in an ongoing way, and I've been doing a bit more recently regarding nuclear risk and AI governance & strategy, but I think we'd probably ramp it up when choosing directions for next year. I'm saying "I think" because the longtermism department haven't yet done our major end-of-year reflection and next-year planning.)

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-20T10:35:51.587Z · EA · GW

That all sounds basically right to me, except that my impression is that the cruxes in internal (mild) disagreements about this are just about "a) given that we're already scattered pretty thin on many projects, b) focus is often good" and not "c) we internally disagree about how important marginal biosecurity work by people without technical expertise is anyway". 

Or at least, I personally think I see (a) and (b) as some of the strongest arguments against us doing biosecurity stuff, while I'm roughly agnostic on (c) but I'd guess that there are some high-value things RP could do even if we lack technical backgrounds, and if some more senior biosecurity person said they really wanted us to do some project then I'd probably guess that they're right that we could be very useful on that. 

(And to be clear, my bottom line would still be pretty similar to Linch's, in that if we get a person who seems a strong fit for biosecurity work, they seem especially interested in that, and some senior people in that area seem excited about us doing something in that area, I'd be very open to us doing that.)

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-20T09:50:18.071Z · EA · GW

I'd also note things about scaling (as mentioned elsewhere in the AMA).

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-20T09:48:32.379Z · EA · GW

[This is not at all an organizational view; just some thoughts from me]

tl;dr: I think mostly RP is able to grow in multiple areas at once without there being strong tradeoffs between them (for reasons including that RP is good at scaling & that the pools of funding and talent for each cause area are somewhat different). And I'm glad it's done so, since I'd guess that may have contributed to RP starting and scaling up the longtermism department (even though naively I'd now prefer RP be more longtermist).

I think RP is unusually good at scaling, at being a modular collection of somewhat disconnected departments focusing on quite different things and each growing and doing great stuff, and at meeting the specific needs of actors making big decisions (especially EA funders; note that RP also does well at other kinds of work, but this type of work is where RP seems most unusual in EA). 

Given that, it could well make sense for RP to be somewhat agnostic between the major EA causes, since it can meet major needs in each, and adding each department doesn't very strongly trade off against expanding other departments. 

(I'd guess there's at least some tradeoff, but it's possible there's none or that it's on-net complementary; e.g. there are some cases where people liking our work in one area helped us get funding or hires for another area, and having lots of staff with many areas of expertise in the same org can be useful for getting feedback etc. One thing to bear in mind here is that, as noted elsewhere in this AMA, there's a lot of funding and "junior talent" theoretically available in EA and RP seems unusually good at combining these things to produce solid outputs.)

I would personally like RP to focus much more exclusively on longtermism. And sometimes I feel a vague pull to advocate for that. But RP's more cause-neutral, partly demand-driven approach has worked out very well from my perspective so far, in that it may have contributed to RP moving into longtermism and then scaling up that team substantially.[1] (I mean that from my perspective this is very good for the world, not just that it let me get a cool job.) So I think I should endorse that overall decision procedure.

This feels kind of related to moral trade and maybe kind of to the veil of ignorance.

That's not to say that I think we shouldn't think at all about what areas are really most important in general, what's most important on the current margin within EA, where our comparative advantage is, etc. I know we think at least somewhat about those things (though I'm mostly involved in decisions about the longtermism department rather than broader org strategy so I don't bother trying to learn the details). But I think maybe the tradeoffs between growing each area are smaller than one might guess from the outside, such that that sort of high-level internal cause area priority-setting is somewhat less important than one might've guessed.

This doesn't really directly answer your question, since I think Peter and Marcus are better placed to do so and since I've already written a lot on this semi-tangent...

[1] My understanding (I only joined in late 2020) is that for a brief period at its very beginning, RP had no longtermist work (I think it was just global health & dev and animals?). Later, it had longtermism as just a small fraction of its work (1 researcher). RP only made multiple hires in this area in late 2020, after already having had substantial successes in other areas. At that point, it would've been unsurprising if people at the org thought they should just go all-in on their existing areas rather than branching out into longtermism. But they instead kept adding additional areas, including longtermism. And now the longtermism team is likely to expand quite substantially, which again could've been not done if the org was focusing more exclusively on its initial main focus areas. 

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-20T09:26:22.597Z · EA · GW

Two things I'd add to the above answer (which I agree with):

  • RP surveyed both interns and their managers at the end of the program, which provided a bunch of useful takeaways for future internships. (Many of which are detailed or idiosyncratic and so will be useful to us but aren't in the above reply.) I'd say other internship programs should do the same.
    • I'd personally also suggest surveying the interns and maybe managers at the start of the internship to get a "baseline" measure of things like interns' clarity on their career plans and managers' perceived management skills, then asking similar questions at the end, so that you can later see how much the internship program benefitted those things. Of course this should be tailored to the goals of a particular program.
  • What lessons we should pass on to other orgs / research training programs will vary based on the type of org, type of program, cause area focus, and various other details. If someone is actually running or seriously considering running a relevant program and would be interested in lessons from RP's experience, I'd suggest they reach out! I'd be happy to chat, and I imagine other RP people might too.

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T18:42:41.366Z · EA · GW

James Ozden's question above might be sufficiently similar to yours that the answers there address your question?

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T18:41:19.339Z · EA · GW

Oh, and some other resources I'd often point people towards after they join are:

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T18:34:23.984Z · EA · GW

As is probably the case with many researchers, I have a bunch of thoughts on this, most of which aren't written up in nice, clear, detailed ways. But I do have a draft post with nuclear risk research project ideas, and a doc of rough notes on AI governance survey ideas, so if someone is interested in executing projects like that please message me and I can probably send you links. 

(I'm not saying those are the two areas I think are most impactful to do research on on the current margin; I just happen to have docs on those things. I also have other ideas less easily shareable right now.)

People might also find my central directory for open research questions useful, but that's not filtered for my own beliefs about how important-on-the-margin these questions are.

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T18:31:04.726Z · EA · GW

Lazy semi-tangential reply: I recently gave a presentation that was partly about how I've used forecasting in my nuclear risk research and how I think forecasting could be better used in research. Here are the slides and here's the video. Slides 12-15 / minutes 20-30 are most relevant. 

I also plan to, in ~1 or 2 months, write and publish a post with meta-level takeaways from the sprawling series of projects I ended up doing in collaboration with Metaculus, which will have further thoughts relevant to your question.

(Also keen to see answers from other people at RP.)

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T18:20:11.470Z · EA · GW

One thing I'd add is that I think several people at RP and elsewhere would be very excited if someone could:

  1. Find existing resources that work as good training for improving one's reasoning transparency, and/or
  2. Create such a resource

As far as I'm aware, currently the state of the art is "Suggest people read the post Reasoning Transparency, maybe point them to a couple somewhat related other things (e.g., the compilation I made that Neil links to, or this other compilation I made), hope they absorb it, give them a bunch of feedback when they don't really (since it's hard!), hope they absorb that, repeat." I.e., the state of the art is kinda crappy. (I think Luke's post is excellent, but just reading it is not generally sufficient for going from not doing the skill well to doing the skill well.) 

I don't know exactly what sort of resources would be best, but I imagine we could do better than what we have now. 

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T18:12:24.081Z · EA · GW

I agree, but would want to clarify that many people should still apply and very many people should at least consider applying. It's just that people shouldn't optimise very strongly for getting hired by one specific institution that's smaller than, say, "the US government" (which, for now, we are 😭).

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T18:07:19.507Z · EA · GW

I recently spent ~2 hours reflecting on RP's longtermism department's wins, mistakes, and lessons learned from our first year[1] and possible visions for 2022. I'll lightly adapt the "lessons learned for Michael specifically" part of that into a comment here, since it seems relevant to what you're trying to get at here; I guess a more effective person in my role would match my current strengths but also already be nailing all the following things. (I guess hopefully within a year I'll ~match that description myself.)

(Bear in mind that this wasn't originally written for public consumption, skips over my "wins", etc.)

  • "Focus more
    • Concrete implications:
      • Probably leave FHI (or effectively scale down to 0-0.1 FTE) and turn down EA Infrastructure Fund guest manager extension (if offered it)
      • Say no to side things more often
      • Start fewer posts, or abandon more posts faster so I can get other ones done
      • Do 80/20 versions of stuff more often
      • Work on getting more efficient at e.g. reviewing docs
    • Reasons:
      • To more consistently finish things and to higher standards (rather than having a higher number of unfinished or lower quality things)
      • And to mitigate possible stress on my part, [personal thing], and to leave more room for things like exercise
      • And to be more robust against personal life stuff or whatever
        • (I mean something like: My current slow-ish progress on my main tasks is even with working parts of each weekend, so if e.g. I had to suddenly fly back to Australia because a family member was seriously ill, I’d end up dropping various balls I’ve somewhat committed to not dropping.)
  • Maybe trust my initial excitement less regarding what projects/posts to pour time into and what ideas to promote, and relatedly put more effort into learning from the views and thinking of more senior people with good judgement and domain expertise 
    • E.g., focus decently hard on making the AI gov stuff go well, since that involves doing stuff Luke thinks is useful and learning from Luke
    • E.g., it was good that I didn't bother to finish and post my research question database proposal
  • Maybe pay more attention to scale and relatedly to whether an important decision-maker is likely to actually act on this
    • Some people really do have a good chance of acting in very big ways on some stuff I could do
    • But by default I might not factor that into my decisions enough, instead just being helpful to whoever is in front of me or pursuing whatever ideas seem good to me and maybe would get karma
  • Implement standard productivity advice more, or at least try it out
    • I’ll break this down more in the habits part of my template for meetings with Peter
    • [I'm also now trying productivity coaching]
  • Spend less time planning projects in detail, and be more aware things will change in unexpected ways
  • Be more realistic when making plans, predictions, and timelines
    • (No, really)
    • Including assuming management will take more time than expected, at least given how I currently do it
  • Spend more time, and get better at, forming and expressing hot takes
  • Spend less time/words comprehensively listing ideas/considerations/whatever
  • More often organise posts/docs conceptually or at least by importance rather than alphabetically or not at all
  • Be more strict with myself regarding exercise and bedtime
  • Indeed optimise a fair bit for research management careers rather than pure research careers
    • This was already my guess when I joined, but I’ve become more confident about it"

[1] I mean the first year of the current version of RP's longtermism department; Luisa Rodriguez previously did (very cool!) longtermism work at RP, but then there was a gap between her leaving (as a staff member; she's now on the board) and the current staff joining. 

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T17:52:24.563Z · EA · GW

I like this answer. 

Some additional possible ideas:

  • Letting us know about or connecting us to stakeholders who could use our work to make better decisions
    • E.g., philanthropists, policy makers, policy advisers, or think tanks who could make better funding, policy, or research decisions if guided by our published work, by conversations with our researchers, or by future work we might do (partly in light of learning that it could have this additional path to impact)
  • Letting us know if you have areas of expertise that are relevant to our work and you'd be willing to review draft reports and/or have conversations with us
  • Letting us know about or connecting us to actors who could likewise provide us with feedback, advice, etc. 
  • Letting us know if there are projects you think it might be very valuable for us to do
    • We (at least the longtermism department) are already drowning in good project ideas and lacking capacity to do them all, but I think it costs little to hear an additional idea, and it's plausible some would be better than our existing ideas or could be nicely merged with one of our existing ideas. 
  • Testing & building fit for research management
  • Testing & building fit for ops roles
  • Donating

(In all cases, I mean either doing this thing yourself or encouraging other people to do so.)

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T17:30:26.152Z · EA · GW

[This is like commentary on your second question, not a direct answer; I'll let someone else at RP provide that.]

Small point: I personally find it useful to make the following three-part distinction, rather than your two-part distinction:

  • Academia-like: Projects that we think would be valuable although we don't have a very explicit theory of change tied to specific (types of) decisions by specific (types of) actors; more like "This question/topic seems probably important somehow, and more clarity on it would probably somehow inform various important decisions."
    • E.g., the sort of work Nick Bostrom does
  • Think-tank-like: Projects that we think would be valuable based on pretty explicit theories of change, ideally informed by actually talking to a bunch of relevant decision-makers to get a sense of what their needs and confusions are.
  • Consultancy-like: Projects that one specific stakeholder (or I guess maybe one group of coordinated stakeholders) has explicitly requested we do (usually but not necessarily also paying the researchers to do it).

I think RP, the EA community, and the world at large should very obviously have substantial amounts of each of those three types of projects / theory of change.

I think RP specialises mostly for the latter two models, whereas (for example) FHI specialises more for the first model and sometimes the second. (But again, I'll let someone else at RP say more about specific percentages and rationales.)

(See also my slides on Theory of Change in Research, esp. slide 17.)

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T17:21:42.804Z · EA · GW

(This other comment of mine is also relevant here, i.e. if answering these questions quickly I'd say roughly what I said there. Also keen to see what other RP people say - I think these are good questions.)

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T17:18:25.295Z · EA · GW

Good question! Please enjoy me not answering it and instead lightly adapting an email I sent to someone who was interested in running an EA-aligned research training program, since you or people interested in your question might find this a bit useful. (Hopefully someone else from RP will more directly answer the question.)

"Cool that you're interested in doing this kind of project :)

I'd encourage you to join the EA Research Training Program Slack workspace and share your plans and key uncertainties there to get input from other people who are organizing or hoping to organize research training programs. [This is open only to people organizing or seriously considering organizing such programs; readers should message me if they'd like a link.]

You could also perhaps look for people who've introduced themselves there and who it might be especially useful to talk to.

Resources from one of the pinned posts in that Slack:

  • See here for a brief discussion of why I use the term "research training programs" and roughly what I see as being in-scope for that term. (But I'm open to alternative terms or scopes.)
  • See here for a collection of EA Forum posts relevant to research training programs.
  • See here for a spreadsheet listing all EA-aligned research training programs I'm aware of.

You might also find these things useful: 

I'd also encourage you to seriously consider applying for funding, doing so sooner than you might by default, and maybe even applying for a small amount of funding to pay for your time further planning this stuff (if that'd be helpful). Basically, I think people underestimate the extent to which EA Funds are ok with unpolished applications, with discussing and advising on ideas with applicants after the application is submitted, and with providing "planning grants". (I haven't read anything about your plans and so am not saying I'm confident you'll get funding, but applying is very often worthwhile in expectation.) More info here:

[...] ...caveat to all of that is that I know very little about your specific plans - this is basically all just the stuff I think it's generically worth me mentioning to people in EA interested in running research training programs.

Best of luck with the planning, and feel free to send through specific questions where I could perhaps be useful :) 

Best,

Michael"

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T16:54:33.700Z · EA · GW

One other thing I'd add to Linch's comments, adapting something I wrote in another comment in this AMA:

If anyone feels like maybe they're the right sort of person to (co-)found a new RP-like org, please feel free to reach out for advice, feedback on plans, pointers to relevant resources & people! I and various other people at RP would be excited to help it be the case that there are more EA-aligned orgs scaling rapidly & gracefully. 

Some evidence that I really am keen on this is that I've spent probably ~10 hours of my free time over the last few months helping a particular person work towards possibly setting up an RP-like org, and expect to continue helping them for at least several months. (Though that was an unusual case and I'd usually just quickly offer my highest-value input.)

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T16:52:27.133Z · EA · GW

I agree on all points (except the nit-pick in my other comment).

A couple things I'd add:

  • I think this thread could be misread as "Should RP grow a bunch but no similar orgs be set up, or should RP grow less but other similar orgs be set up?"
    • If that was the question, I wouldn't actually be sure what the best answer would be - I think it'd be necessary to look at the specifics, e.g. what are the other org's specific plans, who are their founders, etc.? 
    • Another tricky question would be something like "Should [specific person] join RP with an eye to helping it scale further, join some org that's not on as much of a growth trajectory and try to get it onto one, or start a new org aiming to be somewhat RP-like?" Any of those three options could be best, depending on the person and on other specifics.
    • But what I'm more confident of is that, in addition to RP growing a bunch, there should also be various new things that are very/somewhat/mildly RP-like.
  • Somewhat relatedly, I'd guess that "reduced communication" and "PR" aren't the main arguments in favour of prioritising growing existing good orgs over creating new ones or growing small potentially good ones. (I'm guessing you (Linch) would agree; I'm just aiming to counter a possible inference.)
    • Other stronger arguments (in my view) include that past performance is a pretty good indicator of future performance (despite the protestation of a legion of disclaimers) and that there are substantial fixed costs to creating each new org.
    • See also this interesting comment thread.
    • But again, ultimately I do think there should be more new RP-like orgs being started (if started by people who are a good fit and have access to good advisors, etc.).

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T16:30:26.223Z · EA · GW

This comment matches my view (perhaps unsurprisingly!). 

One thing I'd add: I think Peter is basically talking about our "Longtermism Department". We also have a "Surveys and EA Movement Research Department". And I feel confident they could do a bunch of additional high-value longtermist work if given more funding. And donors could provide funding restricted to just longtermist survey projects or even just specific longtermist survey projects (either commissioning a specific project or funding a specific idea we already have).

(I feel like I should add a conflict of interest statement that I work at RP, but I guess that should be obvious enough from context! And conversely I should mention that I don't work in the survey department, haven't met them in person, and decided of my own volition to write this comment because I really do think this seems like probably a good donation target.)

Here are some claims that feed into my conclusion:

  • Funding constraints: My impression is that that department is more funding constrained than the longtermism department
    • (To be clear, I'm not saying the longtermism department isn't at all funding constrained, nor that that single factor guarantees that it's better to fund RP's survey and EA movement research department than RP's longtermism department.)
  • Skills and comparative advantage:
    • They seem very good at designing, running, and analysing surveys
    • And I think that that work gains more from specialisation/experience/training than one might expect
    • And there aren't many people specialising for being damn good at designing, running, and/or analysing longtermism-relevant surveys
      • I think the only things I'm aware of are RP, GovAI, and maybe a few individuals (e.g., Lucius Caviola, Stefan Schubert, Vael Gates)
        • And I'd guess GovAI wouldn't scale that line of work as rapidly as RP could with funding (though I haven't asked them), and individual people are notably harder to scale...
  • There's good work to be done:
    • We have a bunch of ideas for longtermism-relevant surveys and I think some would be very valuable
      • (I say "some" because some are like rough ideas and I haven't thought in depth about all of them yet)
      • I/we could probably expand on this for potential donors if they were interested
    • I think I could come up with a bunch more exciting longtermism-relevant surveys if I spent more time doing so
    • I expect a bunch of other orgs/stakeholders could as well, at least if we gave them examples, ideas, helped them brainstorm, etc.

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T16:04:13.683Z · EA · GW

Here's some parts of my personal take (which overlaps with what Abraham said):

I think we ourselves feel a bit unsure "why we're special", i.e. why it seems there aren't very many other EA-aligned orgs scaling this rapidly & gracefully.

But my guess is that some of the main factors are:

  • We want to scale rapidly & gracefully
    • Some orgs have a more niche purpose that doesn't really require scaling, or may be led by people who are more skilled at and excited about their object-level work than about org strategy, scaling, management, etc.
  • RP thinks strategically about how to scale rapidly & gracefully, including thinking ahead about what RP will need later and what might break by default
    • Three of the examples I often give are ones Abraham mentioned:
      • Realising RP will be constrained by management capacity, and that it would therefore be valuable to give our researchers management experience (so they can see how much they like it & get better at it), and that this pushes in favour of running a large internship with 1-1 management of the interns
        • (This definitely wasn't the only motivation for running the internship, but I think it was one of the main ones, though that's partly guessing/vague memory.)
      • Realising also that maybe RP should offer researchers management training
      • Expanding ops capacity before it's desperately, urgently, obviously needed
  • RP also just actually does the obvious things, including learning and implementing standard best practices for management, running an org, etc.

And that all seems to me pretty replicable! 

OTOH, I do think the people at RP are also great, and it's often the case that people who are good at something underestimate how hard it is, so maybe this is less replicable than I think. But I'd guess that smart, sensible, altruistic, ambitious people with access to good advisors could have a decent chance at making their org more like that or starting a new org like that, and that this could be quite valuable in expectation.

(If anyone feels like maybe they're such a person and maybe they should do that, please feel free to reach out for advice, feedback on plans, pointers to relevant resources & people! I and various other people at RP would be excited to help it be the case that there are more EA-aligned orgs scaling rapidly & gracefully. 

Some evidence of that is that I have in fact spent probably ~10 hours of my free time over the last few months helping someone work towards possibly setting up an RP-like org, and expect to continue helping them for at least several months. Though that was an unusual case, and I'd usually just quickly offer my highest-value input.) 

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T13:49:28.628Z · EA · GW

also I have a strong suspicion that a lot of needed research work in EA "just isn't that hard" and if it's done by less competent people, this frees up other EA researchers to do more important work.

I agree with that suspicion, especially if we include things like "Just collect a bunch of stuff in one place" or "Just summarise some stuff" as "research". I think a substantial portion of my impact to date has probably come from that sort of thing (examples in this sentence from a post I made earlier today: "I’m addicted to creating collections"). It basically always feels like (a) a lot of other people could've done what I'm doing and (b) it's kinda crazy no one had yet. I also sometimes don't have time to execute on some of my seemingly-very-executable and actually-not-that-time-consuming ideas, and the time I do spend on such things does slow down my progress on other work that does seem to require more specialised skills. I also think this would apply to at least some things that are more classically "research" outputs than collections or summaries are.

But I want to push back on "this frees up other EA researchers to do more important work". I think you probably mean "this frees up other EA researchers to do work that they're more uniquely suited for"? I think (and your comment seems to imply you agree?) that there's not a very strong correlation between importance and difficulty/uniqueness-of-skillset-required - i.e., many low-hanging fruit remain unplucked despite being rather juicy.

Comment by MichaelA on We’re Rethink Priorities. Ask us anything! · 2021-11-19T13:33:29.682Z · EA · GW

This comment sounds like it's partly implying "RP seems to have recently overcome these bottlenecks. How? Does that imply the bottlenecks are in general smaller now than they were then?" I think the situation is more like "The bottlenecks were there back then and still are now. RP was doing unusually well at overcoming the bottlenecks then and still is now."

The rest of this comment says a bit more on that front, but doesn't really directly answer your question. I do have some thoughts that are more like direct answers, but other people at RP are better placed to comment so I'll wait till they do so and then maybe add a couple things. 

(Note that I focus mostly on longtermism and EA meta; maybe I'd say different things if I focused more on other cause areas.)


In late 2020, I was given three quite exciting job offers, and ultimately chose to go with a combo of the offer from RP and the offer from FHI, with Plan A being to then leave FHI after ~1 year to be a full-time RP employee. (I was upfront with everyone about this plan. I can explain the reasoning more if people are interested.)

The single biggest reason I prioritised RP was that I believe the following three things:

  1. "EA indeed seems most constrained by things like 'management capacity' and 'org capacity' (see e.g. the various things linked to from scalably using labor).
  2. I seem well-suited to eventually helping address that via things like doing research management.
  3. RP seems unusually good at bypassing these bottlenecks and scaling fairly rapidly while maintaining high quality standards, and I could help it continue to do so."

I continue to think that those things were true then and still are now (and so still have the same Plan A & turn down other exciting opportunities). 

That said, the picture regarding the bottlenecks is a bit complicated. In brief, I think that: 

  • The EA community overall has made more progress than I expected at increasing things like management capacity, org capacity, available mentorship, ability to scalably use labor, etc. E.g., various research training programs have sprung up, RP has grown substantially, and some other orgs/teams have been created or grown.
  • But the community also gained a lot more "seriously interested" people and a lot more funding.
  • So overall the bottlenecks are still strong in that it still seems quite high-leverage to find better ways of scalably using labor (especially "junior" labor) and money. But it also feels worth recognising that substantial progress has been made and so a bunch more good stuff is being done; there being a given bottleneck is not in itself exactly a bad thing (since it'll basically always be true that something is the main bottleneck), but more a clue about what kind of activities will tend to be most impactful on the current margin.

Comment by MichaelA on Database of orgs relevant to longtermist/x-risk work · 2021-11-19T13:08:20.056Z · EA · GW

Both sound to me probably at least somewhat useful! I'm ~agnostic on how likely they are to be very useful, how they compare to other things you could spend your time on, or how best to do them, which is mostly because I haven't thought much about software development.

I expect some other people in the community (e.g., Ozzie Gooen, Nuno Sempere, JP Addison) would have more thoughts on that. But it might make sense to just spend like 0.5-4 hours on MVPs before asking anyone else, if you already have a clear enough idea in your head.

I can also imagine that having a Slack workspace (or a Slack channel in an existing workspace) for people in EA who are doing software development or are interested in it could perhaps be useful.

(Sidenote: You may also be interested in posts tagged software engineering and/or looking into their authors/commenters.)

Comment by MichaelA on Database of orgs relevant to longtermist/x-risk work · 2021-11-19T10:03:49.673Z · EA · GW

Glad to hear that you think this'll be helpful!

(Btw, your comment also made me realise I should add Training For Good to the database, so I've now done so.)

Comment by MichaelA on How to get technological knowledge on AI/ML (for non-tech people) · 2021-11-17T14:37:09.553Z · EA · GW

As a last point, I would like to mention that I was able to work on this almost full-time while being a part-time research assistant at the Legal Priorities Project. Maybe this is not an option for everyone, although there are a lot of grant opportunities right now, see here. I am uncertain how well my approach is suited for someone who can only use their spare time for it – my best guess is that it would still work out, but simply take longer.

See also List of EA funding opportunities, including this statement:

I strongly encourage people to consider applying for one or more of these things. Given how quick applying often is and how impactful funded projects often are, applying is often worthwhile in expectation even if your odds of getting funding aren’t very high. (I think the same basic logic applies to job applications.)

Comment by MichaelA on How to get technological knowledge on AI/ML (for non-tech people) · 2021-11-17T14:34:31.640Z · EA · GW

Thanks to Leonie for their post and to Henry for this comment! I've now bought & downloaded Artificial Intelligence: A Guide for Thinking Humans (since it was available as an audiobook), I've added this post to my Collection of AI governance reading lists, syllabi, etc., and I expect I'll revisit this post at some future point as well.

Comment by MichaelA on "Slower tech development" can be about ordering, gradualness, or distance from now · 2021-11-15T08:36:44.163Z · EA · GW

Yeah, good point + suggestion, thanks! I've now switched to "ordering". "Sequence" could also perhaps work.

Comment by MichaelA on What has helped you write better? · 2021-11-14T14:30:14.661Z · EA · GW

I previously collected some Readings and notes on how to write/communicate well. I'll copy the whole thing below (as it stands atm, and without the footnotes or comments; see the doc for the complete version).


Preamble

How to use this doc

  • I strongly suggest reading everything in bold
  • Other than that, you can skim or skip around as much as you want
  • Feel very free to add comments/suggestions, including regarding which resources/tips seem particularly useful to you, which seem not useful, and what other things it might be worth adding

What this doc focuses on

  • How to communicate more clearly, engagingly, concisely, memorably, etc.
  • Especially but not only in writing
  • Especially but not only for researchers
  • Maybe also how to achieve those goals more efficiently (e.g., become a faster writer or get faster at preparing presentations) 

Some things this doc doesn’t focus on are listed in the following footnote.

Purpose & epistemic status

There are probably literally thousands of existing resources (and collections of resources) covering the topics covered here, and I’ve engaged with a very small fraction of them. So in some ways it feels silly to make my own. 

But I don’t know of a resource or collection that covers everything I’d want covered. And writing/communication is an important skill, I think I’m pretty good at it, and I very often give feedback on people’s writing. So I thought this doc might be useful for me, for people I give feedback to, and for other people. Also, this doc will itself link to all relevant things that I know of, think are potentially useful, and remembered to add.

Readings and notes

Books on writing

Other resources - misc

Notes - misc

Tips/suggestions I often find myself giving:

  • You should usually include an actual tl;dr/summary/key takeaways section right near the start - even in most cases where you feel it’s unimportant or inappropriate
    • (At least when writing for e.g. EAs. Sometimes when writing for mass audiences, you’ll better engage people by deliberately not making it clear what you’re writing about or what you’ll ultimately claim.)
    • See Reasoning Transparency 
    • Usually don’t skip this section
    • Usually don’t just have a section with that sort of name but where you actually just say “This post will cover x, y, and z”
      • I don’t just want to know you say something about x, y, z; I want to know the core of what you actually say!
    • There’s a good chance you - whoever you are - think “The key takeaways are too complex to be explained briefly before someone has actually read my introduction, how I explain the terms, etc.” You’re probably wrong. 
      • I kept thinking this for ~8 months, till finally the many many times I was advised to add summaries got to me and I started really trying to do that, at which point I realised it really was typically possible.
      • Have you actually spent 5 minutes, by the clock, really trying to summarise the key takeaways in a way that will make sense to a reader who hasn’t read the whole thing? 
    • (There are some exceptions, e.g. for extremely short posts)
  • Beware the curse of knowledge 
  • You’re probably using jargon a bit too often, should more often provide a brief explanation, and/or should more often provide a hyperlink
    • One reason this is probably happening is the curse of knowledge
      • (Which is a bit of jargon I hyperlinked above because most readers probably aren't familiar with it)
    • You’re probably also often using jargon a bit incorrectly, when simpler language or different jargon would be more appropriate
    • See also 3 suggestions about jargon in EA 
  • You’re probably saying things like “this”, “they”, and “he” too often
    • You know what you’re referring to, but your reader may have forgotten, or there may be multiple candidates such that it’s ambiguous
      •  This is partly due to the curse of knowledge
  • You should probably more often use examples
    • You’re probably being less clear than you think, and examples can help
    • In any case, providing concrete examples is an effective way to elucidate abstract concepts
    • See point 2 in Teaching Graduate Students How to Write Clearly 
    • (Though people do sometimes use examples even when they’re not worth the extra words they cost; hopefully a reviewer can point out if you’ve done that)
  • Often try to be really clear about what you’re not claiming, what’s not in-scope, what debates you don’t settle, etc.
    • E.g., if you’re just writing that one particular type of AI existential risk seems extremely unlikely, there’s a decent chance some readers will take away the message that all AI existential risk is extremely unlikely and/or will think that you think that (while perhaps also thinking you’re wrong and stupid for thinking that)
      • So consider explicitly saying near the start and near the end that you’re not talking about x, y, z  
  • You should often/usually ask someone to review your work
    • I don’t always do this, because: 
      • I write a lot
      • Some things I write aren’t especially important and are taking me away from my main work, such that I should just get them out the door fast or not bother writing them at all
      • Relatively clear writing is probably one of my strong points
    • But I do often ask at least one person to review something before I post it, and I do this for most/all things I’ve written that I think are relatively important
  • Many of your sentences should probably be split into multiple sentences; at the least, some of them should probably be broken up with a semicolon
  • Much of what you’ve said can probably be cut, moved into footnotes, or moved into an appendix
  • Generally try to keep the order in which the same items are mentioned consistent
    • As per Teaching Graduate Students How to Write Clearly: “Add structure through consistent constructions. First example: When you state in the abstract that you will discuss topics A, B, and C, retain this order throughout the entire paper. Second example: When you start a paragraph with the statement “Our first hypothesis was confirmed…”, the reader expects a future paragraph to start with “Our second hypothesis was [not] confirmed…” In general, academic writing is clear when it delivers information in accordance with what the readers expect. Do not set up false expectations.”
  • Save the longest part of the sentence/phrase for the end
    • E.g., “ghoulies and ghosties and long-legged beasties and things that go bump in the night”
    • E.g., “life, liberty and the pursuit of happiness”
    • Why? Because you don’t want to hold a big heavy phrase in memory while you are reading the rest of the sentence. 
    • This advice is given in The Sense of Style
      • And my description is adapted from someone’s notes on the book (though unfortunately that notes post itself doesn’t seem very clear or useful)
  • [something about (sub)sections and/or signposting]
  • [something about introducing and motivating your work, indicating its purpose or target audience, and/or indicating directions for further work]
Comment by MichaelA on Reasons for and against posting on the EA Forum · 2021-11-14T13:46:14.317Z · EA · GW

Yeah, I'm indeed thinking about what's good in a moral sense / for the world / "from the point of view of the universe" / from my perspective, not what's good from another person's perspective. But it can also obviously be the case that from Person B's current perspective, their values drifting would be bad. And we could also think about what people want in the moment vs what they want when looking back / reflecting vs what a somewhat "idealised" version of their values would want, or whatever.

In any case, this sort of thing is discussed much further in posts tagged value drift, so you might find those posts interesting. (I won't discuss it in detail here since it's a bit of a tangent + due to busyness.)