Posts

Try working on something random with someone cool 2022-05-18T06:23:56.032Z
Concrete Biosecurity Projects (some of which could be big) 2022-01-11T03:46:27.483Z
eca's Shortform 2021-08-07T19:38:25.779Z
Has anyone found an effective way to scrub indoor CO2? 2021-06-21T21:21:19.009Z
How many times would nuclear weapons have been used if every state had them since 1950? 2021-05-04T15:34:08.722Z
How to PhD 2021-03-28T19:56:49.032Z
COVID-19 brief for friends and family 2020-02-28T22:43:38.726Z

Comments

Comment by eca on Try working on something random with someone cool · 2022-05-18T16:51:01.192Z · EA · GW

One impression I could imagine having after reading this post for the first time is something like: "eca would prefer fewer connections to people and doesn't value that output of community building work" or even more scandalously, "eca thinks community builders are wasting their time".

I don't believe that, and would have edited the draft to make that more clear if I had taken a different approach to writing it.

A quick amendment to that vibe.

  1. Community building is mission critical. It's also complicated, and not something I expect to have good opinions about currently, overall, because of lack of context and careful thought, among other things.
  2. I have personally found these types of introductions enormously valuable, especially in other phases of my career, and it would make me very sad if people turned them off!
  3. Even if I didn't find them personally valuable, I'd guess that they were still very valuable overall because I expect this to be person and context dependent, and I see others get value.
  4. Even if more should be invested in work connections overall, it's not clear that the folks sending me intros (THANK YOU!! PLEASE DON'T STOP!) should be the ones doing the collaboration themselves ("Have you worked with [Someone] on anything? Or do you know anyone who has?"). Gains from specialization could imply that the folks making intro connections should focus on that, and others should do more deliberate working on stuff.
  5. Rather, my nebulous aim is some combo of A) sharing what my intuitions (narrowly trained) say the marginal effect of trading intros for work experience would be, for me, B) gesturing at an opportunity for even more value to be produced by community builders, if my experience generalizes and C) hoping selfishly that someone will help me understand what's going on here, so I can quit complaining about it to my dinner companions before impulsively writing a forum post.

Not sure how much of this is in my head, but that's a thing.

Comment by eca on Try working on something random with someone cool · 2022-05-18T16:17:57.682Z · EA · GW

Meta note: this was an experiment in jotting something down. I've had a lot of writer's block on forum posts before and thought it would be good to try erring on the side of not worrying about the details.

As I'm rereading what I wrote late last night I'm seeing things I wish I could change. If I have time, I'll try writing these changes as comments rather than editing the post (except for minor errors).

(Curious for ideas/approaches/ recommendations for handling this!)

Comment by eca on Try working on something random with someone cool · 2022-05-18T14:43:40.399Z · EA · GW

This seems like a great idea - I actually woke up this morning realizing I'd left it off my list!

One part of my perspective which is possibly worth reemphasizing: IMO, what you choose to work on together does not need to be highly optimized or particularly EA. At least to make initial progress in this direction, it seems plausible that you should be happy with anything challenging/ without an existing playbook, collaborative, and “real” in the sense of requiring you to act like you would if you were solving a real problem instead of playing a toy game.

So in this case, while “EA should host hackathons” seems reasonable and exciting to me, especially as a downstream goal if working together turns out to be really useful, it doesn’t need to block easier stuff. I don’t think a shortage of good hackathon prompts or organizers should stop groups of EAs from voting on the most interesting local hackathon run by someone else, going together as a group, and teaming up to work on something (with an EA lens if you want). That’s just extremely low cost to try out.

(I’m also noticing that “Host an awesome EA hackathon” seems like the type of collaborative, challenging project a person could team up on!)

Comment by eca on Concrete Biosecurity Projects (some of which could be big) · 2022-01-15T21:38:12.277Z · EA · GW

Thanks for the kind words! I agree that we didn't have much good stuff for ppl to do 4 yrs ago when I started in bio, but I don't feel like my model matches yours regarding why.

But I'm also wanting to confirm I've understood what you are looking for before I ramble.

How much would you agree with this description of what I could imagine filling in from what you said re 'why it took so long':

"well I looked at this list of projects, and it didn't seem all that non-obvious to me, and so the default explanation of 'it just took a long time to work out these projects' doesn't seem to answer the question"

(TBC, I think this would be a very reasonable read of the piece, and I'm not interpreting your question to be critical tho also obviously fine if it is hahah)

Comment by eca on Concrete Biosecurity Projects (some of which could be big) · 2022-01-15T04:59:36.691Z · EA · GW

Meta note: it's super cool to see all this activity! But the volume is making me a bit stressed and I probably won't be trying to respond to lots even if I do one sporadically. Does not mean I am ignoring you!

Comment by eca on What are your favourite ways to buy time? · 2021-11-06T03:03:48.742Z · EA · GW

Well I hope it works out for ya! Thanks haha

In case you are looking for content and have interests similar to me I like the following for audio:

  • Institute for Advanced Study lectures (random fun science)
  • Yannic Kilcher (ML paper summaries)
  • Wendover Productions/ Kurzgesagt (random probably not as useful but interesting science and econ funfacts)
  • LiveOverflow (Security)

And I find that searching for random academics' names is more likely to turn up lectures/ convos than podcasts

Comment by eca on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-02T21:18:29.200Z · EA · GW

Are you looking for shovel ready bounties (eg write them up and you are good to go) or things which might need development time (eg figuring out exactly what to reward, working out the strategy of why the bounty might be good etc)?

Comment by eca on Can EA leverage an Elon-vs-world-hunger news cycle? · 2021-11-02T13:54:47.816Z · EA · GW

FWIW this seems like a reasonable idea to me and I would be pretty sad if no one at e.g. Givewell had even considered it.

Comment by eca on What are your favourite ways to buy time? · 2021-11-02T13:50:48.739Z · EA · GW
  • Order groceries online! Maybe this is obvious but I have the impression not as many ppl do this as they should. Saves me at least 1 hr (usually closer to 2) for < $20
  • Pay for a bunch of disk space. I find it generates a lot of overhead to have files in different places. For me, the solution has been a high performance workstation plus remote desktop forwarding to my laptop when I travel so I can always have the same disk and workspace
  • Buy more paid apps/ premium upgrades/ digital subscriptions. I haven’t done the math on this so might not be as good as I think, but I have the impression that time spent watching adverts adds up and that in general apps are underpriced/ people have irrational behavior around e.g. not buying the $5 app they would have been excited for if free. A big one for me is YouTube Premium, which lets you listen to videos with the screen off and gives me access to all YouTube lectures as if they were podcasts (there are a surprising number of high quality informational videos that can be listened to!)
  • Related to above: make slack time more useful by listening to stuff. Mix of podcast, audiobooks, youtube and text to speech of articles. I use a mix of Pocket, Speechify and Voicedream for the latter. I invested in noise-cancelling earbuds and (separately) found some pretty cheap earbuds you can wear in the shower
  • extra battery packs and unlimited data plan to allow for work time in more places. For same reason, laptop with long battery life. I couldn't find anything better than the newest MacBook Pro M1.
  • pay for flights at times which work better and reduce time in transit. Your comfort/ restedness affects productivity. Also plane wifi.
  • give yourself a budget for productivity experiments that feel speculative. Many of the above were discovered by spending that budget; a lot of other things failed, but it's worth it for the wins
  • stop worrying about late fees (within reason). Most of the time the fees for things like late registration at a uni are small relative to not needing to occupy your brain with those sorts of deadlines
  • pay for exercise/ hobbies that increase your wellbeing + energy. For me this is climbing
  • buy extra of things you lose frequently. This is an embarrassing one but I cannot for the life of me keep track of sleep masks, for example. I have like >6 pairs now lol, but there is usually one when I need it. If you are scatterbrained like me, this is worth it for cheap things.
  • corollary of above: don’t waste time looking for cheap things you’ve lost, just buy another.

I think I have a few more I’m forgetting but I will stop there for now.

Comment by eca on eca's Shortform · 2021-10-21T15:11:33.964Z · EA · GW

Quest: see the inside of an active bunker

Comment by eca on Listen to more EA content with The Nonlinear Library · 2021-10-19T23:16:06.664Z · EA · GW

Seems like a good idea if it were easy

Comment by eca on Listen to more EA content with The Nonlinear Library · 2021-10-19T21:57:49.450Z · EA · GW

(Sorry, when I said your story for impact was "plausible", in my head I was comparing it to my own idea for why this would be good, and I meant that it was plausibly better than my story. I actually buy your pitch as written, seems like a solidly good thing; apologies)

Comment by eca on Listen to more EA content with The Nonlinear Library · 2021-10-19T21:45:45.772Z · EA · GW

What a cool project! I listen to the vast majority of my reading these days and am perpetually out of good things to read.

The linked audio is reasonably high quality, and more importantly, it doesn't have some of the formatting artifacts that other TTS programs have. Well done.

Your story for why this is a potentially high impact project is plausible to me, especially given how much you've automated. I have independently been thinking about building something similar, but with a very different story for why it could be worth my time to do it. That means mine could be a different story for why your thing is good :), which I thought I'd share.

My story was that the top-performing people in a given cause area account for a large fraction of the valuable work, if you buy power-law-type arguments. By definition, their time is a lot more valuable than average. But it is also more valuable for them to be better informed, because the changes they make to their decisions by being better informed are leveraged by their high work output or its consequences.

If you buy this story, I think you wind up focusing on figuring out how to cater what is audio-fied to what would be useful to the most productive people in EA. So like, what do top AI safety researchers wish they had time to listen to. I'd bet that this is actually a very different set of things than Forum/ LW posts.

When I started to do my thing, I suspected that a lot of the researchers who are doing the best work would benefit from being able to hear more academic papers, from arxiv for example. But IMO the key problem is that these don't get read well because of formatting issues. I think this is a solvable problem, and have a few leads, but it was too annoying for me to do as a side project. DM me if you're interested in chatting about that

Side point: this view of why this is high impact also speaks to letting the top people in question choose what they listen to, which looks more like an app that does TTS on demand than a podcast feed. This happens to avoid copyright issues, if the existence of other TTS apps is any indication.

You might be able to hack together an equivalent solution (on both copyright and customization) without needing to develop your own app by having a simple website that lets people log in and makes them a private RSS feed (compatible with most podcast players I think, though not confident in any of this). Then if they input a link on the website, it's compiled and added to their RSS feed for use in the player. If you had an API for calling your TTS script (and had solved these formatting issues), I or someone else could probably hack something like this website together pretty fast.
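To gesture at how little code the feed half of this would take, here's a rough sketch using only Python's standard library (all names, URLs, and the `user_token` scheme are hypothetical; I haven't checked this against any particular podcast player):

```python
# Minimal sketch of the per-user private feed idea: given the (title, audio
# URL) pairs a user has queued, emit an RSS 2.0 feed a podcast player can
# subscribe to. The TTS step that produces the audio files is out of scope.
import xml.etree.ElementTree as ET

def build_feed(user_token, items):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"Private TTS feed {user_token}"
    ET.SubElement(channel, "description").text = "Articles queued for text-to-speech"
    for title, audio_url in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        # Podcast players download audio from the enclosure URL; a real feed
        # should also report the true byte length instead of "0".
        ET.SubElement(item, "enclosure", url=audio_url, type="audio/mpeg", length="0")
    return ET.tostring(rss, encoding="unicode")

feed = build_feed("abc123", [("Some forum post", "https://example.com/a.mp3")])
print(feed)
```

The per-user token in the feed URL is what keeps it "private" without real authentication, which is roughly how existing private-podcast services seem to work.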

Comment by eca on Listen to more EA content with The Nonlinear Library · 2021-10-19T21:18:46.883Z · EA · GW

And there are various things one could probably do to make it not illegal but still messed up and the wrong thing to do! Like make it mandatory to check a box saying you waive your copyright for audio on a thing before you post on the forum. I think if, like some of the tech companies, you made this box really little and hard to find, most people would not change their posting behavior very much, and it would now be totally legal (by assumption).

but it would still be a bad thing to do.

Comment by eca on Listen to more EA content with The Nonlinear Library · 2021-10-19T21:13:59.778Z · EA · GW

This is a reason to fix the system! My point is that it reduces to "make all the authors happy with how you are doing things", there is not some spooky extra thing having to do with illegality

TBC I do not endorse using people's content in a way they aren't happy with, but I would still have that same belief if it wasn't illegal at all to do so.

Comment by eca on Listen to more EA content with The Nonlinear Library · 2021-10-19T21:09:46.469Z · EA · GW

I use Speechify; its voices are quite good, but it has the same formatting issues as all the rest (reading junk text), which I think is the real bottleneck here

Comment by eca on Listen to more EA content with The Nonlinear Library · 2021-10-19T21:04:01.001Z · EA · GW

FWIW I think I endorse Kat's reasoning here. I don't think it matters if it is illegal if I'm correct in suspecting that the only people who could bring a copyright claim are the authors, and assuming the authors are happy with the system being used. This is analogous to the way it is illegal, by violating minimum wage laws, to do work for your own company without paying yourself, but the only person who has standing to sue you is AFAIK yourself.

Not a lawyer, not claiming to know the legal details of these cases, but I think this standing thing is real and an appropriate way to handle it

Comment by eca on eca's Shortform · 2021-10-19T19:05:50.537Z · EA · GW

Empirical differential tech development?

Many longtermist questions related to dangers from emerging tech can be reduced to “what interventions would cause technology X to be deployed before/ N years earlier than/ instead of technology Y”.

In biosecurity, my focus area, an example of this would be something like "how can we cause DNA synthesis screening to be deployed before desktop synthesizers are widespread?"

It seems a bit cheap to say that AI safety boils down to causing an aligned AGI before an unaligned one, but it kind of basically does, and I suspect that as more of the open questions get worked out in AI strategy/ policy/ deployment there will end up being at least some examples of well defined subproblems like the above.

Bostrom calls this differential technology development. I personally prefer "deliberate technology development", but call it DTD or whatever. My point is, it seems really useful to have general principles for how to approach problems like this, and I've been unable to find much work, either theoretical or empirical, trying to establish such principles. I don't know exactly what these would look like; most realistically they would be a set of heuristics or strategies alongside a definition of when they are applicable.

For example, a shoddy principle I just made up but could vaguely imagine playing out is "when a field is new and has few players, (e.g. small number of startups, small number of labs) causing a player to pursue something else on the margin has a much larger influence on delaying the development of this technology than causing the same proportion of R&D capacity to leave the field at a later point".

While I expect some theoretical econ type work to be useful here, I started thinking about the empirical side. It seems like you could in principle run experiments where, for some niche areas of commercial technology, you try interventions which are cost effective according to your model to direct the outcome toward a made up goal.

Some more hallucinated examples:

  • make the majority of guitar picks purple
  • make the automatic sinks in all public restrooms in South Dakota stay on for twice as long as the current ones
  • stop CAPTCHAs from ever asking anyone to identify a boat
  • stop some specific niche supplement from being sold in gelatin capsules anywhere in California

The pattern: specific change toward something which is either market neutral or somewhat bad according to the market, in an area few enough people care about/ the market is small and straightforward such that we should expect it is possible to occasionally succeed.

I'm not sure that there is anything which is a niche enough market to be cheap to intervene on while still being at all representative of the real thing. But maybe there is? And I kind of weirdly expect trying random stuff like this to actually yield some lessons, at least in implicit know-how for the person who does it.

Anyway, I'm interested in thoughts on the feasibility and utility of something like this, as well as any pointers to previous attempts to do this kind of thing (sort of seems like a certain type of economist might be interested in experimenting in this way, but it's probably way too weird).

Comment by eca on My current best guess on how to aggregate forecasts · 2021-10-06T13:52:55.529Z · EA · GW

I wonder how these compare with fitting a Beta distribution and using one of its statistics? I’m imagining treating each forecast (assuming they are probabilities) as an observation, and maximizing the Beta likelihood. The resulting Beta is your best guess distribution over the forecasted variable.

It would be nice to have an aggregation method which gave you info about the spread of the aggregated forecast, which would be straightforward here.
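A minimal sketch of what I'm imagining, assuming the forecasts are probabilities strictly between 0 and 1 (numbers and variable names made up; not checked against the aggregation methods in the post):

```python
# Fit a Beta distribution to a set of probability forecasts by maximum
# likelihood, then read off both an aggregate point estimate and a spread.
import numpy as np
from scipy import stats

forecasts = np.array([0.10, 0.15, 0.22, 0.08, 0.30])  # hypothetical forecasts

# Fix the support to [0, 1] (floc=0, fscale=1) so only the two shape
# parameters are estimated.
a, b, loc, scale = stats.beta.fit(forecasts, floc=0, fscale=1)

aggregate = a / (a + b)                    # mean of the fitted Beta
lo, hi = stats.beta.ppf([0.1, 0.9], a, b)  # 80% interval as a spread measure
print(f"aggregate={aggregate:.3f}, 80% interval=({lo:.3f}, {hi:.3f})")
```

The nice part is that the same fitted distribution gives you the point estimate and the uncertainty around it for free.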

Comment by eca on eca's Shortform · 2021-08-07T19:38:25.986Z · EA · GW

I’m vulnerable to occasionally losing hours of my most productive time “spinning my wheels”: working on sub-projects I later realize don’t need to exist.

Elon Musk gives the most lucid naming of this problem in the below clip. He has a 5 step process which nails a lot of best practices I’ve heard from others and more. It sounds kind of dull and obvious to write down, but somehow I think staring at the steps will actually help. It’s also phrased somewhat specifically for building physical stuff, but I think there is a generic version of each. I’m going to try implementing it on my next engineering project.

The explanation is meandering (though with some great examples I recommend listening to!) so I did my best attempt to quickly paraphrase them here:

The Elon Process:

  1. “Make your requirements less dumb. Your requirements are definitely dumb.” Beware especially requirements from smart people because you will question them less.
  2. Delete a part, process step or feature. If you aren’t adding 10% of deleted things back in, you aren’t deleting enough.
  3. Optimize and simplify the remaining components.
  4. Accelerate your cycle time. You can definitely go faster.
  5. Automate.

https://youtu.be/t705r8ICkRw

(13:30-28)

Comment by eca on Open Philanthropy is seeking proposals for outreach projects · 2021-07-22T12:02:37.650Z · EA · GW

One more unsolicited outreach idea while I’m at it: high school career / guidance counselors in the US.

I’m not sure how idiosyncratic this was of my school, but we had this person whose job it was to give advice to older highschool kids about what to do for college and career. Mine’s advice was really bad and I think a number of my friends would have glommed onto 80k type stuff if it was handed to them at this time (when people are telling you to figure out your life all of a sudden). This probably hits the 16yo demographic pretty well.

Could look like adding a bit of entrypoint content geared at pre-college students to 80k, then making some info packets explaining 80k to counselors as a nonprofit career planning resource with handouts for students, and shipping them to every high school in the US or smth (possibly this is also an international thing, IDK).

Comment by eca on Open Philanthropy is seeking proposals for outreach projects · 2021-07-22T11:51:02.275Z · EA · GW

Exciting!

This is probably not the best place to post this, but I’ve been learning recently about the success of hacking games in finding and training computer security people (https://youtu.be/6vj96QetfTg for a discussion, also this game I got excited about in high school: https://en.m.wikipedia.org/wiki/Cicada_3301).

I think there might be something to an EA/ rationality game. Like something with a save-the-world-but-realistic plot and game mechanics built around useful skills like Fermi estimation. This is a random gut feeling I’ve had for a while, not something well thought through, so could be obviously wrong.

A couple advantages over the typical static content like videos or written intro sequences:

  • games can be “stickier”
  • ppl seem to enjoy intricate, complex games even while avoiding complex static media for lack of time; this is true of many high-school aged ppl in my experience
  • games can tailor different angles into EA material depending on the user’s input
  • games can both educate and filter for/ identify people who are high aptitude, contra to written content or video
  • because games can collect info about user behavior, you might have a much richer sense of where people are dropping out to prototype/ AB test on
  • anecdotally, smart ppl I went to highschool with seemed to have their career aspirations shaped by videogames, primarily toward wanting to do computer science to be game developers. Maybe this could be channelled elsewhere?

A few downsides of games

  • limited to a particular demographic interested in videogames
  • a lot of rationality/ EA stuff seems maybe quite hard to gamify?
  • maybe a game makes EA stuff seem fantastical
  • maybe a game would degrade nuance/ epistemics of content
  • maybe games are quite expensive to make for what they are?

I have zero expertise or qualifications except occasionally playing games, but feel free to DM me anyway if you are interested in this :)

Comment by eca on How many times would nuclear weapons have been used if every state had them since 1950? · 2021-05-05T20:25:00.066Z · EA · GW

I appreciate the answers so far!

One thing I realized I'm curious about in asking this is something about how many groups of people/ governing bodies are actually crazy enough to use nuclear weapons even if self-annihilation is assured. This seems like an interesting last check against horrible mutual destruction stuff. The hypothesis to invalidate is: maybe the types of people assembled into the groups we call "governments" are very unlikely to carry an "activate mutual destruction" decision all the way through. To be clear, I don't believe this, and I think there is good evidence that individuals will do this, but I feel sufficiently confused about the gov dynamic to ask.

Of all the national regimes and regional ruling factions since 1950, how many would have used nukes even if they knew an adversary would retaliate with overwhelming force? Have there been any real situations where non-great power govts were pushed so far as to resort to nuclear (enemy + self) destruction?

For example, my extremely amateur read makes it seem like Israel was at least somewhat close to going nuclear in the Yom Kippur War. And I'd guess that some of the more insane genocide-y civil war factions like the Khmer Rouge wouldn't have been that concerned about the self-destruction bit, though I don't know enough history to say for sure, or whether they were ever pushed to a breaking point.

I'm familiar with all the standard US-Russia examples of this (I think) and when I put my skeptic hat on/ try to steelman it seems like its hard to know how many additional "filters" would need to be cleared before actual launch. I'd be interested in cases where something of the form "and then [the gov't or civil war faction or w/e] took some action which they indisputably believed at the time would lead to a large scale tragedy, destroy themselves and all their loved ones, etc". Cases where the group definitely believed they slapped "defect" in the mutually assured destruction game (at least on some scale). Maybe none exist outside of cults and terrorist groups? Though some of those groups might be more govt-like than others.

Comment by eca on How many times would nuclear weapons have been used if every state had them since 1950? · 2021-05-05T19:52:04.816Z · EA · GW

Great set of links, appreciate it. Was especially excited to see lukeprog's review and the author's presentation of Atomic Obsession.

I'm inclined toward answers of the form "seems like they would have been used more or some civilizational factor would need to change" (which is how I interpret Jackson's answer on strong global policing). Which is why I'm currently most interested in understanding the Atomic Obsession-style skeptical take.

If anyone is interested, the following are some of the author's claims which seem pertinent, at least as far as I can tell (from the author's summary, a couple reviews, and a few chapters but not the whole book):

  1. Nuclear weapons are not cost effective for practical military purposes or terrorists.
  2. Many people have been alarmists about nuclear weapons, in describing their destructive powers and forecasting future developments.
  3. Nuclear weapons have not played a major role as deterrents nor in shifting diplomatic dominance.

It seems like the first two are pretty straightforwardly true. (3) is most interesting, and I haven't been able to make Mueller's argument crisp for myself on this point. My attempt at breaking down (3), with some of my own attempt at steelmanning:

  a) Nuclear weapons are really expensive
  b) Gaining nuclear weapons upsets your neighbors, which is an additional cost
  c) There are cheaper ways of getting a more compelling deterrent, for example North Korea could invest in artillery to put more pressure on Seoul.
  d) Countries didn't really have any interest in going to war, anyway, so deterrents were not needed (I think he claims something about Stalin and other communist powers having no interest in war with western powers)
  e) Nukes are technically complex and even if smaller actors, possibly including e.g. factions in a civil war, were to steal them, they would have a hard time using them
  f) Nukes are easy to police because nuclear forensics are quite good at attributing events to their creators
  g) People have to be really crazy to use nuclear weapons given they aren't very effective on military targets and can't actually help you win, only suicide

(It seems worth mentioning that in my actual cursory read of Mueller's arguments in the form mentioned above, I found some points I've omitted because they seem mutually inconsistent and make him seem dogmatic to me. For example at one point in his nuclear terrorism section he seems to use the fact that the CIA would probably have infiltrated a group as evidence for the overarching claim that investment in counter-proliferation is wasted. The contradiction is obviously that the CIA probably wouldn't invest as much in infiltrating terrorist groups attempting to build nukes if that was less of a priority. )

If we take my hypothetical to mean "nuclear weapons are cheaper to build" (sorry for the ambiguity there) then a, b, c and e seem basically null. I read d) as pretty far removed from the facts. Some good evidence for this in the comments of the lukeprog post especially Max Daniel's.

Which leaves f- Nukes are easy to police, and g- people aren't crazy enough to actually use them.

Comment by eca on Notes on 'Atomic Obsession' (2009) · 2021-05-05T15:37:45.245Z · EA · GW

Re direct military conflicts between nuclear weapons states: this might not exactly fit the definition of "direct" but I enjoyed skimming the mentions of nuclear weapons in this Wikipedia article on the Yom Kippur War, which saw a standoff between Israel (nuclear) and Egypt (not nuclear, but had reportedly been delivered warheads by the USSR). There is some mention of Israel "threatening to go nuclear", possibly as a way of forcing the US to intervene with conventional military resources.

Comment by eca on How many times would nuclear weapons have been used if every state had them since 1950? · 2021-05-05T15:00:15.356Z · EA · GW

Interesting! For (1) how do you expect the economic superpowers to respond to smaller nations using nuclear weapons in this world? It sounds like because of MAD between the large nations, your model is that they must allow small nuclear conflicts, or alternatively pivot into your scenario 2 of increased global policing, is that correct?

Comment by eca on How likely is a nuclear exchange between the US and Russia? · 2021-05-04T13:45:17.485Z · EA · GW

Thanks for this post Luisa! Really nice resource and I wish I caught it earlier. A couple methodology questions:

  1. Why do you choose an arithmetic mean for aggregating these estimates? It seems like there is an argument to be made that in this case we care about order-of-magnitude correctness, which would imply taking the average of the log probabilities. This is equivalent to the geometric mean (I believe) and is recommended for Fermi estimates, e.g. [here](https://www.lesswrong.com/posts/PsEppdvgRisz5xAHG/fermi-estimates).

  2. Do you have a sense for how much, if any, these estimates are confounded by the variable of time? Are all estimates trying to guess likelihood of war in the few years following the estimate, or do some have longer time horizons (you mention this explicitly for a number of them, but I'm struggling to find it for all of them; sorry if I missed it)? If these are forecasting something close to the instantaneous yearly probability, do you think we should worry about adjusting estimates by when they were made, in case e.g. a lot has changed between 2005 and now?

  3. Related to the above, do you believe risk of nuclear war is changing with time or approximately constant?

  4. Did you consider any alternative schemes to weighting these estimates equally? I notice that for example the GJI estimate on US-Russia nuclear war is more than an order of magnitude lower than the rest, but is also the group I'd put my money on based on forecasting track record. Do you find these estimates approximately equally credible?
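To illustrate the geometric-vs-arithmetic point in (1) with made-up numbers (these are not the estimates from the post):

```python
# Compare the arithmetic and geometric means of probability estimates that
# span orders of magnitude. The arithmetic mean is dominated by the largest
# estimate; the geometric mean averages in log space.
import math

estimates = [0.02, 0.008, 0.001, 0.0002]  # hypothetical forecasts

arith = sum(estimates) / len(estimates)
# Geometric mean = exp(mean of the log-probabilities)
geo = math.exp(sum(math.log(p) for p in estimates) / len(estimates))

print(f"arithmetic={arith:.5f}, geometric={geo:.5f}")
```

With spreads like this the two aggregates differ by several-fold, which is exactly the regime where the choice of mean matters.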

Curious for your thoughts!

Comment by eca on Make your own cost-effectiveness Fermi estimates for one-off problems · 2021-04-28T20:51:03.729Z · EA · GW

Thanks!

Comment by eca on Make your own cost-effectiveness Fermi estimates for one-off problems · 2021-04-28T18:41:19.609Z · EA · GW

Stumbling on this today - did this article ever get published? Would be keen to read it

Comment by eca on How to PhD · 2021-03-31T20:54:16.552Z · EA · GW

Strong +1 to this. I think I have observed people who have really good academic research taste but really bad EA research taste.

Comment by eca on How to PhD · 2021-03-31T20:53:29.910Z · EA · GW

Taste is huge! I was trying to roll this under my "Process" category, where taste manifests in choosing the right project, choosing the right approach, choosing how to sequence experiments, etc etc. Alas, not a lossless factorization

These exercises look quite neat, thanks for sharing!

Comment by eca on How to PhD · 2021-03-31T20:51:08.527Z · EA · GW

Thanks Seb. I don't think I have energy to fully respond here, possibly I'll make a separate post to give this argument its full due.

One quick point relevant to Crux 2: "I can also think of many examples of groundbreaking basic science that looks defensive and gets published very well (e.g. again sequencing innovations, vaccine tech; or, for a recent example, several papers on biocontainment published in Nature and Science)."

I think there are many-fold differences in impact/dollar between the tech you build if you are trying to actually solve the problem and the type of probably-good-on-net examples you give here.

Other ways of saying parallels of this point:

  • Things which are publishable in nature or science are just definitively less neglected, because you are competing against everyone who wants a C/N/S publication
  • The design space of possible interventions is a superset of, and many times larger than, the design space of interventions which can also be published in high-impact journals
  • We find power laws in cost-effectiveness in lots of other places, and AFAIK have no counter-evidence here. Given this, even a small orthogonal component between what is incentivized by academia and what is actually good will lead to a large difference in expected impact.

Comment by eca on How to PhD · 2021-03-31T20:41:40.488Z · EA · GW

I bet it is! The example categories I think I had in mind at time of writing would be 1) people in ML academia who want to be doing safety instead doing work that almost entirely accelerates capabilities and 2) people who want to work on reducing biological risk instead publishing on tech which is highly dual use or broadly accelerates biotechnology without differentially accelerating safety technology.

I know this happens because I've done it. My most successful publication to date (https://www.nature.com/articles/s41592-019-0598-1) is pretty much entirely capabilities accelerating. I'm still not sure if it was the right call to do this project, but if it was, it will have been a narrow call hinging on me using the cred I got from this to do something really good later on.

Comment by eca on How to PhD · 2021-03-31T20:30:41.128Z · EA · GW

This is interesting and also aligns with my experience depending on exactly what you mean!

  • If you mean that it seems less difficult to get tenure in CS (thinking especially about deep learning) than the vibe I gave, (which is again speaking about the field I know, bioeng) I buy this strongly. My suspicion is that this is because relative to bioengineering, there is a bunch of competition for top research talent by industrial AI labs. It seems like even the profs who stay in academia also have joint appointment in companies, for the most part. There isn't an analogous thing in bio? Pharma doesn't seem very exciting and to my knowledge doesn't have a bunch of PI-driven basic research roles open. Maybe bigtech-does-bio labs like Calico will change this in the future? IMO this doesn't change my core point because you will need to change your agenda some, but less than in biology.
  • If you mean that once you are on the Junior Faculty track in CS, you don't really need to worry about well-received publications, this is interesting and doesn't line up with my models. Can you think of any examples which might help illustrate this? I'd be looking for, e.g., recently appointed CS faculty at a good school pursuing a research agenda which gets quite poor reception/ crickets, but this faculty is still given tenure. Possibly there are some examples in AI safety before it was cool? Folks that come to mind mostly had established careers. Another signal would be less of the notorious "tenure switch" where people suddenly change their research direction. I have not verified this, but there is a story told about a Harvard Econ professor who did a bunch of centrist/slightly conservative mathematical econ who switched to left-leaning labor economics after tenure.

Comment by eca on How to PhD · 2021-03-31T20:17:33.219Z · EA · GW

"Working backwards" type thinking is indeed a skill! I find it plausible a PhD is a good place to do this. I also think there might be other good ways to practice it, like for example seeking out the people who seem to be best at this and trying to work with them.

+1 on this same type of thinking being applicable to gathering resources. I don't see any structural differences between these domains.

Comment by eca on How to PhD · 2021-03-31T20:15:00.954Z · EA · GW

This is an excellent comment, thanks Adam.

A couple impressions:

  • Totally agree there are bad incentives lots of places
  • I think figuring out what existing institutions have incentives that best serve your goals, and building a strategy around those incentives, is a key operation. My intent with this article was to illustrate some of that type of thinking within planning for gradschool. If I was writing a comparison between working in academia and other possible ways to do research I would definitely have flagged the many ways academic incentives are better than the alternatives! I appreciate you doing that, because it's clearly true and important.
  • In that more general comparison article, I think I may have still cautioned about academic incentives in particular. Because they seem, for lack of a better word, sneakier? Like, knowing you work at a for-profit company makes it really transparently clear that your manager's (or manager's manager's) incentives are different from yours, if you want to do directly impactful research. Whereas I've observed folks, in my academic niche of biological engineering, behave as if they believe a research project to be directly good when I (and others) can't see the impact proposition, and the behavior feels best explained by publishing incentives? In more extreme cases, people will say that project A is less important to prioritize than project B because B is more impactful, but will invest way more in A (which just happens to be very publishable). I'm sure I'm also very guilty of this, but it's easier to recognize in other people :P
  • I'm primarily reporting on biology/bioengineering/bioinformatics academia here, though I consume a lot of deep learning academia's output. FWIW, my sense is there is actually a difference in the strength and type of incentives between ML and biology, at least. From talking with friends in DL academic labs, it seems like there is still pressure to publish in conferences, but there are also lots of other ways to get prestige currency, like putting out a well-read arXiv paper or being a primary contributor to an open source library like PyTorch. In biology, from what I've seen, it just really really really matters that you publish in a high-impact-factor journal, ideally with "Science" or "Nature" on the cover.
  • It also matters a whole lot who your advisor is, as you mention. Having an advisor who is super bought in to the impact proposition of your research is a totally different game. I have the sense that most people are not this lucky by default, and so would want to optimize for the type of buy-in or, alternatively, laissez-faire management which I pattern match to the type of research freedom you're describing.

All of this said, I think my biggest reaction is something like "there are ways of finding really good incentives for doing research"! Instead of working in existing institutions (academic labs, for-profit research labs, for-profit companies), come up with a good idea for what to research and how, and just do it. More precisely: ask an altruistic funder for money, find other people to work with, and make an organization if it seems good. There are small and large versions of this. On the small scale you can apply for EA grants or to another org which grants to individuals, and if you're really on to something, ask for org-scale funding. I'm not claiming that this is always a better idea: you will be missing lots of resources you might otherwise have in e.g. academia.

But compared to working with a funder who, like you, wants to solve the problem and make the world be good, any of the other institutions mentioned, including academia, look extremely misaligned. And IMO it's worth making it clear that relative to this, almost any lab/institute's academic incentives suck. Once this DIY option is on the table I think it is possible to make better choices about whether you like the compromise of working at another institution, or whether you will use this option to get specific resources that will make the "forge your own way" option more tractable. E.g.: don't have any good ideas for a research agenda? Great, focus on figuring this out in your PhD. Don't know any good people you might recruit for your project? Great, focus on building a good network in your PhD. Etc etc

I'm curious if you still feel like incentives are misaligned in this world, or whether it feels too impractical to be included in your list, or disagree with me elsewhere?

Thanks again :)

Comment by eca on How much does performance differ between people? · 2021-03-30T16:09:09.978Z · EA · GW

Yeah this is great; I think Ed probably called them sleeping beauties and I was just misremembering :)

Thanks for the references!

Comment by eca on How to PhD · 2021-03-30T16:00:47.092Z · EA · GW

Appreciate your comment! I probably won't be able to give my whole theory of change in a comment :P but if I were to say a silly version of it, it might look like: "Just do the thing"

So, what are the constituent parts of making scientific progress? Off the cuff, maybe something like:

  1. You need to know what questions are worth asking / problems are worth solving
  2. You need to know how to decompose these questions in sub-questions iteratively until a subset are answerable from the state of current knowledge
  3. You need to have good research project management skills, to figure out what order it makes sense to tackle these sub-questions and most quickly make progress toward the goal which is where all the impact is
  4. You need people to have smart ideas to guess the answers to sub-questions and generate hypotheses
  5. You need people to do or build things, like run experiments, code, or fab physical objects
  6. You need operations and logistics to turn money into materials and people, and to coordinate the materials and people
  7. You need managers to foster productive environments and maintain healthy relationships
  8. You need advisors to hold you accountable to the actual goal
  9. You often need feedback loops with the actual goal, in case you've decomposed the problem incorrectly or something else in the system has gone awry.
  10. You need money

I'm making this up, but do you see what I mean?

Then my advice would be to figure out which subset of these are so constraining that you can't start the business of doing the thing, and to solve those constraints e.g. by cultivating instrumental resources like research ability. Otherwise, set yourself up with the set of 1-10 which maximize your likelihood of succeeding at the thing, and start doing the thing. Figure the rest out as you go.

It's totally conceivable that an academic lab is the best place available to you. But I would want you to come to that conclusion after having thought hard about it, working backward from the actual goal.

Assuming the aspects of 1-10 which are research skills are covered, my object level sense is that academia goes wrong on 1, 3, 5, 6, 7, 8, and 9.

All told my algorithm might be something like:

  1. What other existing entities/ groups look good on these inputs to the scientific progress machine? These might be existing companies, labs, random people on the internet, non-profits, whatever. Would also include looking for academic opportunities that look better on the above. Don't think about made up categories like "non-profit" when doing this. Just figure out what it would look like to work at/with this entity to accomplish the goal.
  2. What levers do I have to tweak things such that my list of existing places looks even better?
  3. What would it look like for me to make my own enterprise to directly do the thing? What resources am I missing?
  4. What opportunities do I have to pursue instrumental goods/ resources that don't look like doing the thing?
  5. With bias toward doing the thing, see which of working with existing collections of people, pushing existing collections of people to be different in some way, starting your own thing, and gathering instrumental resources you are missing looks like it will lead to the best outcomes.
  6. Do that thing. Periodically reevaluate.

This probably isn't very helpful, but I don't know of any tricks! I could say more stuff about "industry" vs. "academia" but for the most part I think those conversations are missing the point unless you can drill way more into the specifics of a situation.

Good luck :) remember that lots of other people are trying to figure the same kind of thing out. In my experience they are the best people to learn from.

Comment by eca on How to PhD · 2021-03-30T15:27:16.656Z · EA · GW

Thanks Charles! I think of your two options I most closely mean (1). For evidence I don't mean 2: "Optimize almost exclusively for compelling publications; for some specific goals these will need to be high-impact publications."

My attempt to restate my position would be something like: "Academic incentives are very strong and it's not obvious from the inside when they are influencing your actions. If you're not careful, they will make you do dumb things. To combat this, you should be very deliberate and proactive in defining what you want and how you want it. In some cases this might involve pushing against pub incentives, in other cases it might involve optimizing for following them really really hard. What you want to avoid is telling yourself the reason for doing something is A, while the real reason is B, where B is usually something related to academic incentives. Publishing good papers is not the problem, deluding yourself is."

Comment by eca on How to PhD · 2021-03-30T15:18:44.780Z · EA · GW

I am doing (1). (2) is incidental from the perspective of this post, but is indeed something I believe (see my response to bhalperin). I think my attempt to properly flag my background beliefs may have led to the wrong impression here. Or, alternatively, my post doesn't cover very much on pursuing academia, when the expected post would have been almost entirely focused on it, thereby seeming to convey a strong message?

In general I don't think about pursuing "sectors" but instead about trying to solve problems. Sometimes this involves trying to get a particular government gig to influence a policy, or needing to write a paper with a particular type of credibility that you might get from an academic affiliation or a research non-profit, or needing to build and deploy a technical system in the world, which maybe requires starting an organization.

I'd encourage folks to work backwards from problems, to possible solutions, to what would need to happen on an object level to realize those solutions, to what you do with your PhD and other career moves. "Academia" isn't the most useful unit of analysis in this project, which is partly why I wasn't primarily trying to comment on it.

Regarding specific observations and personal experiences: I agree this post could be better with more things like this. Unfortunately, I don't feel like including them. Open invite to DM me if you are thinking about a PhD or already in one and want to talk more, including about my strategy.

Comment by eca on How to PhD · 2021-03-30T15:04:52.549Z · EA · GW

Ugh. Shrug. That isn't supposed to be the point of this post. All my comments on this are to alert the reader that I happen to believe this and haven't tried to stop it from seeping into my writing. It felt disingenuous not to.

But since you raised, I feel like making it clear, if it isn't already, that I do not recommend reversing this advice. At least if you are considering cause areas/ academic domains that I might know about (see my preamble). I have no idea how applicable this is outside of longtermist technical-leaning work.

If you think you might be an exception to this, feel free to DM me. Exceptions do exist, I just highly doubt you (the reader) are one. THIS DOES NOT MEAN I AM NOT EXCITED ABOUT YOUR IMPACT!! I think there are much better opportunities than becoming a professor out there :)

As I said a lot of smart people disagree with me on this, but here is some of my thinking:

  • Most people overestimate their chances for the obvious reasons
  • I've advised at least 10 smart, excellent EAs interested in pursuing PhDs and none of them are in "Anita's" reference class. A first-author Nature paper in undergrad is extremely rare. The only exceptions here are people who are already in early-track faculty positions at good schools, and even then I worry about the counterfactual value. (These are not the people reading this, I imagine.)
  • Having a "good story" for becoming a faculty is a huge part luck. I've been interacting with grad students and post docs from top labs at Harvard and MIT since maybe 2015 and for every faculty position people get there are maybe 5 people who are equally or more talented whose research was equally or more compelling in principle; the difference is whether certain parts of their high-risk research panned out in a certain compelling way and whether they were good at "selling it".
  • You approximately can't get directly useful things done until you have tenure. I think this should be obvious, but some people seem to believe a fairy tale where they are both winning the rat race and doing lots of direct good.
  • Given the above, academia is a 10-15 year crapshoot. (PhD, postdoc or multiple, 5-ish years as a junior faculty)
  • It's not clear to me what you get even after all of this. I think it's hard to argue that academia is clearly better than working in a private research org if you want to do direct technology development. This leaves some kind of pulpit/spokesperson effect. Is this really worth it? Most people who could actually get a tenured faculty position could also write 3 excellent books in the time it takes to do a PhD and post-doc. Are we sure this alternative, as one example among many possible, isn't a faster way of establishing spokesperson credibility?
  • Unless you have worked in top labs with EA-minded people, I don't think it is possible to really understand how bad academic incentives are. You will find yourself justifying the stupidest shit on impact grounds, and/or pursuing projects which directly make the world worse. People who are much better than you will also do this. This just gets worse with time, and needs to be accounted for as a reduction in expected impact when considering an opportunity that only pays off 12 years after steeping in the corrupting juices.
  • Obviously, academia looks a whole lot worse if you believe lots of things need to happen right now, as opposed to 15 years from now. For my part, I would happily trade work hours 15 years from now for more time now, at a roughly 2:1 premium.
  • Another risk you are taking, related to the above, is that the field of research you picked has any relevance 15 years from now. Obviously you can change as you go, but switching your "story" around has a big penalty in the academic job market, from what I've heard.
  • If we think we need more professors as a movement, it could be the case that it's way more efficient to just reach out to people who already have faculty positions (or are just one step away, in a highly enriched pool). For example, I know of instances where students have influenced their PIs on research directions and goals, in a direction more aligned with longtermist objectives. It might be that targeted outreach and coalition building among academics is just way higher bang for buck. It's also not clear that we need the most aligned people in faculty positions, rather than people who are allies. Have we ruled this out? Seems like any person considering mortgaging 15 years of their impact might want to spend 1 year testing this hypothesis first.

Putting these random points together, it just feels like a real uphill battle to make academia look good from an impact perspective. I think you need to believe some combination of: 1) the problems are not urgent; 2) academic incentives are actually good (?), or there is some other side benefit of working toward a faculty position that is really worth having; 3) there aren't many other opportunities for people who could be faculty in a technical domain; or 4) we are specifically constrained on something professors have, maybe credible spokespeople, AND there are no more efficient ways to get those resources.

OR you might believe that academia is exciting from a personal fit perspective. I think a lot of people are very motivated by the types of status incentives in academia, which is good I guess if you have trouble finding motivation elsewhere. I'd just want to separate this from the impact story.

My spicy take is that advice to go into academia has arisen through some combination of A) EA being a movement grown out of academia in many ways, B) a lack of better career ideas, C) too much distance from the urgency and concreteness of problems on the ground and D) the same mind destroying publishing and status incentives I have mentioned a number of times here, which lead to a certain kind of self-justification.

So where all this cashes out for me is finding it plausible that it is worth preserving some optionality for academia, but being very strategic (as I tried to demonstrate in this post). This includes knowing what you are actually optimizing for, and being willing to give up academic optionality if push comes to shove and there is something better. This is why I wrote the Anita case study this way.

I'm very happy to be shown where I'm wrong.

Comment by eca on How much does performance differ between people? · 2021-03-28T01:32:12.130Z · EA · GW

Sorry meant to write "component of scientific achievement is predictable from intrinsic characteristics" in that first line

Comment by eca on How much does performance differ between people? · 2021-03-28T01:28:54.713Z · EA · GW

Neat. I'd be curious if anyone has tried blinding the predictive algorithm to prestige: i.e. no past citation information or journal impact factors, and instead strictly using paper content (sounds like a project for GPT-6).

It might be interesting also to think about how talent- vs. prestige-based models explain the cases of scientists whose work was groundbreaking but did not garner attention at the time. I'm thinking, e.g., of someone like Kjell Kleppe, who basically described PCR, the foundational molbio method, a decade early.

If you look at natural experiments in which two groups publish the ~same thing, but only one makes the news, the fully talent-based model (I think?) predicts that there should not be a significant difference in citations and other markers of academic success (unless your model of talent includes something about marketing, which seems like a stretch to me).

Comment by eca on How much does performance differ between people? · 2021-03-28T01:13:31.904Z · EA · GW

Interesting! Many great threads here. I definitely agree that some component of scientific achievement is predictable, and the IMO example is excellent evidence for this. Didn't mean to imply any sort of disagreement with the premise that talent matters; I was instead pointing at a component of the variance in outcomes which follows different rules.

Fwiw, my actual bet is that to become a top-of-field academic you need both talent AND to get very lucky with early career buzz. The latter is an instantiation of preferential attachment. I'd guess for each top-of-field academic there are at least 10 similarly talented people who got unlucky in the paper lottery and didn't have enough prestige to make it to the next stage in the process.

It sounds like I should probably just read Sinatra, but it's quite surprising to me that publishing a highly cited paper early in one's career isn't correlated with a larger total number of citations, at the high-performing tail (did I understand that right? Were they considering the right tail?). Anecdotally, I notice that the top profs I know tend to have had a big paper/discovery early. E.g. Ed Boyden, who I have been thinking of because he has interesting takes on metascience, ~invented optogenetics during his PhD in 2005 (at least I think this was the story?) and it remains his most cited paper to this day by a factor of ~3.

On the scientist vs. paper preferential attachment story, I could buy that. I was pondering while writing my comment how much is person-prestige driven vs. paper driven. I think for the most part you're right that it's paper driven, but I decided this cashes out as effectively the same thing. My reasoning was: if the number of citations per paper is power-law-ish, then because citations per scientist is just the sum of these, it will be dominated by the top few papers. Therefore preferential attachment at the level of papers will produce "rich get richer" at the level of scientists, and this is still an instance of the same phenomenon, because it's not an intrinsic characteristic.

That said, my highly anecdotal experience is that there is actually a per-person effect at the very top. I've been lucky to work with George Church, one of the top profs in synthetic biology. Folks in the lab literally talk about "the George Effect" when submitting papers to top journals: the paper is more attractive simply because George's name is on it. 

But my sense is that I should look into some of the refs you provided! (thanks :)

Comment by eca on How much does performance differ between people? · 2021-03-26T15:08:02.208Z · EA · GW

Great post! Seems like the predictability question is important given how much power laws surface in discussions of EA stuff.

More precisely, future citations as well as awards (e.g. Nobel Prize) are predicted by past citations in a range of disciplines

I want to argue that things which look like predicting future citations from past citations are at least partially "uninteresting" in their predictability, in a certain important sense. 

(I think this is related to other comments, and I have not read your google doc, so apologies if I'm restating. But I think it's worth drawing out this distinction.)

In many cases I can think of wanting good ex-ante prediction of heavy-tailed outcomes, I want to make these predictions about a collection which is in an "early stage". For example, I might want to predict which EAs will be successful academics, or which of 10 startups' seed rounds I should invest in.

Having better predictive performance at earlier stages gives you a massive multiplier in heavy-tailed domains: investing in a Series C is dramatically more expensive than a seed investment. 

Given this, I would really love to have a function which takes in the intrinsic characteristics of an object, and outputs a good prediction of performance.

Citations are not intrinsic characteristics. 

When someone is choosing who to cite, they look at, among other things, how many citations they have. All else equal, a paper/author with more citations will get cited more than a paper with fewer. Given the limited attention span of academics (myself as case in point), the more highly cited paper will tend to get cited even if the alternative paper is objectively better.

(Ed Boyden at MIT has this idea of "hidden gems" in the literature, which are extremely undercited papers with great ideas: I believe the original idea for PCR, a molecular bio technique, had been languishing for at least 5 years with very little attention before its later rediscovery. This is evidence for the failure of citations to track quality.)

Domains in which "the rich get richer" are known to follow heavy-tailed distributions (with an extra condition or two), via this story of preferential attachment.

In domains dominated by this effect we can predict ex-ante that the earliest settlers in a given "niche"  are most likely to end up dominating the upper tail of the power law. But if the niche is empty, and we are asked to predict which of a set would be able to set up shop in the niche--based on intrinsic characteristics--we should be more skeptical of our predictive ability, it seems to me.  

Besides citations, I'd argue that many/most other prestige-driven enterprises have at least a non-negligible component of their variance explained by preferential attachment.  I don't think it's a coincidence that the oldest Universities in a geography also seem to be more prestigious, for example. This dynamic is also present in links on the interwebs and lots of other interesting places.
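As a sanity check on this intuition, here's a toy preferential-attachment simulation (my own illustrative sketch, not from the post or any particular paper). Papers that are identical in intrinsic quality end up with a heavily concentrated citation distribution purely because each new citation is drawn in proportion to citations already received:

```python
import random

random.seed(0)

def simulate_citations(n_papers=1000, n_citations=20000):
    """Polya-urn-style preferential attachment among identical papers."""
    # slots holds one entry per paper plus one entry per citation received,
    # so sampling uniformly from slots cites each paper with probability
    # proportional to (1 + its current citation count).
    slots = list(range(n_papers))
    counts = [0] * n_papers
    for _ in range(n_citations):
        paper = random.choice(slots)
        counts[paper] += 1
        slots.append(paper)
    return counts

counts = sorted(simulate_citations(), reverse=True)
top_1pct_share = sum(counts[:10]) / sum(counts)
# Under an even split, the top 1% of papers would hold ~1% of citations;
# preferential attachment concentrates far more than that.
print(f"top 1% of papers hold {top_1pct_share:.0%} of citations")
```

Every paper here is identical by construction, so all of the spread is "rich get richer" rather than talent, which is exactly why ex-ante prediction from intrinsic characteristics alone is so hard in these domains.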

I'm currently most interested in how predictable heavy-tailed outcomes are before you have seen the citation-count analogue, because it seems like a lot of potentially valuable EA work is in niches which don't exist yet.

That doesn't mean the other type of predictability is useless, though. It seems like maybe on the margin we should actually be happier defaulting to making a bet on whichever option has accumulated the most "citations" to date instead of trusting our judgement of the intrinsic characteristics.

Anyhoo, thanks again for looking into this!

Comment by eca on AMA: Holden Karnofsky @ EA Global: Reconnect · 2021-03-16T14:05:15.990Z · EA · GW

To operate in the broad range of cause areas Open Phil does, I imagine you need to regularly seek advice from external advisors. I have the impression that cultivating good sources of advice is a strong suit of both yours and Open Phil's.

I bet you also get approached by less senior folks asking for advice with some frequency.

As advisor and advisee: how can EAs be more effective at seeking and making use of good advice?

Possible subquestions: What common mistakes have you seen early career EAs make when soliciting advice, eg on career trajectory? When do you see advice make the biggest positive difference in someone’s impact? What changes would you make to how the EA community typically conducts these types of advisor/advisee relationships, if any?

Comment by eca on Notes on "Bioterror and Biowarfare" (2006) · 2021-03-03T00:37:19.633Z · EA · GW

Interesting point. Note that a requirement for retaliation is knowledge of the actor to retaliate against. This is called "attribution" and is a historically hard problem for bioweapons, which is maybe getting easier with modern ML (COI: I am a coauthor: https://www.nature.com/articles/s41467-020-19149-2)

Comment by eca on Project Ideas in Biosecurity for EAs · 2021-03-02T18:52:11.192Z · EA · GW

Makes sense; possibly I'd change my mind about many of these after hearing the motivation. The second half of your response makes me believe that we actually don't disagree that much re: a lot of the projects in here being good substantially or primarily because they could help establish a research track record or be a good learning opportunity.

Happy to chat more about this.

Comment by eca on Project Ideas in Biosecurity for EAs · 2021-02-25T20:28:11.736Z · EA · GW

Thanks for writing this David. It's been on my todo list for a while to write down project ideas like this. I think some of these ideas are useful and worth doing, and getting those out in the open is great.

On the other hand, I think it's actually pretty hard to find research which is directly good for reducing biorisk. In my experience, the space of ideas which "seem maybe useful" is much larger than the set of projects which actually directly help, on more reflection. This is a general problem and not intended to be a specific critique of the ideas you shared.

I think there is a broader set of projects which are not causing direct good in the world, but are still worth doing to build skills at this type of research. I think it's often better for these projects to look like speculatively good direct impact projects, rather than something wholly made up just for learning. But I think it's really important to be clear when a project is in this category. Eg "I don't think this project would be worth your time if you didn't learn a lot from it, but I think you will, so I still recommend it".

In my opinion, the tone of this post makes it sound like these ideas have been more well-vetted/more strongly in the "directly good to do" category of projects than I assess them to be. (Speaking as a "biosecurity EA" and an individual who cares about this stuff, only, not trying to represent an organizational opinion.)

In case it is helpful for future readers to have an independent— but extremely rough/not strongly held—assessment of these, I would put 30% in the category of "probably not worth doing even for skill building unless you have very specific goals and circumstances", 65% in the "plausibly worth doing if you think you will learn stuff" category, and 5% in the "plausibly I would recommend someone doing this even if I thought they wouldn't learn much" category. I think it's important to share ideas for (and do!) projects in all these categories, but I would be sad if someone thought they were more widely endorsed as directly useful by "biosecurity EAs" than I believe they actually are.

Comment by eca on COVID-19 brief for friends and family · 2020-06-23T15:51:57.530Z · EA · GW

As anyone who has checked the google doc recently knows already, I haven't been maintaining it. It is now so out of date I consider it to be doing more harm than good, and have killed the link. I think most people have found better resources by now, anyway.