Posts

Careers concerning Global Catastrophic Biological Risks (GCBRs) from a German perspective 2022-09-01T05:05:49.093Z
EA Publicity Drive - What are the best signs of increased, in-depth engagement with EA? 2022-08-16T15:14:24.187Z
What success looks like 2022-06-28T14:30:37.358Z
Idea: Red-teaming fellowships 2022-02-02T22:13:28.566Z
EA Analysis of the German Coalition Agreement 2021–2025 2022-01-24T13:25:14.388Z
EA megaprojects continued 2021-12-03T10:33:53.467Z

Comments

Comment by slg (Simon_Grimm) on Most Ivy-smart students aren't at Ivy-tier schools · 2022-09-01T06:51:57.262Z · EA · GW

Agreed that their research is decent, but they are post-graduate institutes and have no undergraduate students.

Comment by slg (Simon_Grimm) on EA Publicity Drive - What are the best signs of increased, in-depth engagement with EA? · 2022-08-16T19:58:08.131Z · EA · GW

Thanks, I saw a similar graph on Twitter! I'm wondering what kind of measurements would most clearly indicate more in-depth engagement with EA; traffic to the Forum likely comes close to that.

Comment by slg (Simon_Grimm) on EA Publicity Drive - What are the best signs of increased, in-depth engagement with EA? · 2022-08-16T19:56:42.692Z · EA · GW

Thanks, fixed!

Comment by slg (Simon_Grimm) on The Reluctant Prophet of Effective Altruism | The New Yorker · 2022-08-09T13:47:54.542Z · EA · GW

I liked it a lot. Given that he probably wasn't involved with EA beforehand, the author got a detailed picture of EA's current state.

Comment by slg (Simon_Grimm) on Most Ivy-smart students aren't at Ivy-tier schools · 2022-08-09T13:45:30.262Z · EA · GW

That makes sense; thanks for expanding on your comment.

Comment by slg (Simon_Grimm) on Most Ivy-smart students aren't at Ivy-tier schools · 2022-08-08T10:46:19.281Z · EA · GW

I appreciate that many EAs' focus on high IQ and general mental ability can be hard to deal with. For instance, I found this quite aversive when I first got into EA.

But I'm unsure why your comment has 10 upvotes, given that you do not give many arguments for your statements.

Please let me know if anything below is uncharitable or if I misread something!

Focusing on elite universities

> [...] why EA's obsession with elite universities is sickening.

The share of highly talented students is higher at elite universities than elsewhere. Thus, given the limited number of individuals who can do in-person outreach, it makes sense to prioritize elite unis.

In my own experience, Germany has no elite universities. This makes outreach a lot harder, as there is no single location where we can be sure to reach many highly talented students. Instead, German EAs self-select into EA by finding information online. Thus, if Germany had an elite uni, I would put most of my outreach efforts there.

Returns to high IQ

> But I think the returns to lots of high-IQ people in EA are also pretty modest [...]

If you condition on the view that EA is bottlenecked by highly engaged and capable individuals who start new projects or found organizations, selecting for IQ seems like one of the best first steps.

IQ predicts good performance across various tasks and is thus plausibly upstream of having a diversity of skills.

E.g., a 2011 study of 2,329 participants in the Study of Mathematically Precocious Youth cohort shows no cut-off beyond which additional cognitive ability stops mattering. Participants were identified as intellectually gifted (top 1% of mental ability) at age 13 and followed up for 25+ years. Even within this top percentile of ability, being in the top quartile predicts substantially better outcomes: among the top 0.25%, ~34% of cohort participants hold a doctorate and around 12% have filed a patent 25+ years later. This compares to 4.5% of the US population holding a doctorate in 2018; I couldn't find data on the share of US Americans who have filed a patent, but I wouldn't be surprised if it's at least one order of magnitude lower.
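For concreteness, here's a minimal back-of-the-envelope sketch of that comparison, using only the percentages quoted above; the patent base rate below is a hypothetical placeholder standing in for the "at least one order of magnitude lower" guess, not a measured figure.

```python
# Rough ratios from the figures quoted above (SMPY top 0.25% vs. US base rates).
# The patent base rate is a hypothetical placeholder for the "at least one
# order of magnitude lower" guess, not measured data.

smpy_top_quarter = {"doctorate": 0.34, "patent": 0.12}
us_base_rate = {"doctorate": 0.045, "patent": 0.012}  # doctorate: 2018 figure; patent: placeholder

for outcome in ("doctorate", "patent"):
    ratio = smpy_top_quarter[outcome] / us_base_rate[outcome]
    print(f"{outcome}: cohort rate is ~{ratio:.1f}x the base rate")
    # doctorate: ~7.6x; patent: ~10x by construction of the placeholder
```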
 

More on this cohort can be found here.

 

Value of different perspectives/skills

> [...] it's much more important to get people with varied perspectives/skills into EA.

Looking at the value of (I) varied perspectives and (II) skills in turn.

Regarding (I), I'd also want to select people who reason well and scrutinize widely held effective altruist assumptions. But I wouldn't aim to maximize the variety of perspectives in EA for the sake of having different views alone, as this doesn't account for the merit of each view.

And again, generating perspectives with lots of merit is likely linked to high IQ.

On (II), I agree that having EAs with various skills is important, given that EA-oriented work is becoming increasingly diverse (e.g., doing AI safety research, building pandemic shelters, drafting legislation that governs x-risks).

Comment by slg (Simon_Grimm) on Leaving Google, Joining the Nucleic Acid Observatory · 2022-06-12T22:34:34.519Z · EA · GW

I was very happy to read this, great to hear that your switch to direct work was successful!

Comment by slg (Simon_Grimm) on Apply for Red Team Challenge [May 7 - June 4] · 2022-03-20T11:25:47.318Z · EA · GW

Noting my excitement that you picked up on the idea and will actually make this happen!

The structure you lay out sounds good.

Regarding the winning team, will there be financial rewards? I’d give it >70% that someone would fund at least a ~$1000 award for the best team.

Comment by slg (Simon_Grimm) on Where would we set up the next EA hubs? · 2022-03-16T20:12:12.004Z · EA · GW

Do you know which funder is supporting the EA Hotel type thing?

Comment by slg (Simon_Grimm) on Where would we set up the next EA hubs? · 2022-03-16T20:09:07.579Z · EA · GW

Maybe you’re already considering this but here it goes anyway:

I'd advise against the name "longtermist hub". I wouldn't want longtermism to also become an identity, just as EA is one.

It also has reputational risks—which is why new EA-oriented orgs do not have EA in their name.

Comment by slg (Simon_Grimm) on Apply for Professional Coaching · 2022-02-09T22:58:13.556Z · EA · GW

As far as I understand, sessions will be fully subsidised by TfG. If you can't afford them, you can choose to pay $0; I'm unsure whether this is standard among EA coaches.

I also think centralisation of psychological services might be valuable as it makes it easier to match fitting coaches/coachees and assess coaching performance.

Comment by slg (Simon_Grimm) on Managing 'Imposters' · 2022-01-30T10:54:59.700Z · EA · GW

Practical advice for how to run EA organisations is really valuable, thanks for writing this up.

Comment by slg (Simon_Grimm) on Retrospective on Catalyst, a 100-person biosecurity summit · 2022-01-19T19:47:35.219Z · EA · GW

Hey, I just wanted to leave a note of thanks for this excellent write-up!

Some other EAs and I are planning an event with a similar format, so your advice is super helpful for structuring our planning and avoiding obvious mistakes.

In general, these kinds of project management retrospectives provide a lot of value (e.g., EAF's hiring retrospective).

Comment by slg (Simon_Grimm) on What are some artworks relevant to EA? · 2022-01-18T07:42:29.560Z · EA · GW

This is cool, I had no idea you were also working on this.

Comment by slg (Simon_Grimm) on Concrete Biosecurity Projects (some of which could be big) · 2022-01-13T10:22:26.572Z · EA · GW

This could be easier, yes. I know of one person who models the defensive potential of different metagenomic sequencing approaches, but I think there is space for at least 3-5 additional people doing this. 

Comment by slg (Simon_Grimm) on Concrete Biosecurity Projects (some of which could be big) · 2022-01-13T10:18:10.703Z · EA · GW

I think he was explicitly addressing your question of whether sexually transmitted diseases are capable of triggering pandemics, not whether they can end civilization.

Discussing the latter in detail would quickly get into infohazards, but I think we should spend some of our efforts (10%) on defending against non-respiratory viruses. I haven't thought about this in depth, though.

Comment by slg (Simon_Grimm) on Concrete Biosecurity Projects (some of which could be big) · 2022-01-11T20:00:44.995Z · EA · GW

I do mean EAs with a longtermist focus. While writing about highly-engaged EAs, I had Benjamin Todd's EAG talk in mind, in which he pointed out that only around 4% of highly-engaged EAs are working in bio.

And thanks for pointing out I should be more precise. To qualify my statement, I'm 75% confident that this should happen.

Comment by slg (Simon_Grimm) on Concrete Biosecurity Projects (some of which could be big) · 2022-01-11T09:24:58.273Z · EA · GW

> Despite how promising and scalable we think some biosecurity interventions are, we don’t necessarily think that biosecurity should grow to be a substantially larger fraction of longtermist effort than it is currently.

Agreed that it shouldn't grow substantially, but ~doubling the share of highly-engaged EAs working on biosecurity feels reasonable to me. 

Comment by slg (Simon_Grimm) on Concrete Biosecurity Projects (some of which could be big) · 2022-01-11T09:21:25.580Z · EA · GW

I have only been involved in biosecurity for 1.5 years, but the focus on purely defensive projects (sterilization, refuges, some sequencing tech) feels relatively recent. It's a lot less risky to openly talk about those than about technologies like antivirals or vaccines.

I'm happy to see this shift, as concrete lists like this will likely motivate more people to enter the space. 

Comment by slg (Simon_Grimm) on Democratising Risk - or how EA deals with critics · 2021-12-29T09:26:30.827Z · EA · GW

@CarlaZoeC or Luke Kemp, could you create another forum post solely focused on your article? This might lead to more focused discussions, separating the debate on community norms from discussion of the arguments within your piece.

I also wanted to express that I'm sorry this experience has been so stressful. It's crucial to facilitate internal critique of EA, especially as the movement is becoming more powerful, and I feel pieces like yours are very useful to launch constructive discussions.

Comment by slg (Simon_Grimm) on Countermeasures & substitution effects in biosecurity · 2021-12-22T19:16:45.333Z · EA · GW

I particularly agree with the last point on focussing on purely defensive (not net-defensive) pathogen-agnostic technologies, such as metagenomic sequencing and resilience measures like PPE, air filters and shelters. 

If others in the longtermist biosecurity community share this biodefense model, I think it'd be important to point towards these countermeasures in introductory materials (the 80k website, reading lists, future podcast episodes).

Comment by slg (Simon_Grimm) on Exposure to 3m Pointless viewers- what to promote? · 2021-12-11T12:16:58.043Z · EA · GW

I do wonder what the downside is here. It's a fleeting, low-fidelity impression of EA that will probably not stick in most minds. However, if 10-20 people donate money after hearing about it through Patrick, it might already be positive in sum.

Comment by slg (Simon_Grimm) on EA megaprojects continued · 2021-12-06T18:38:03.447Z · EA · GW

Do you specifically object to the term megaproject, or rather to the idea of launching larger organizations and projects that could potentially absorb a lot of money?

If it's the latter, the case for megaprojects is that they are bigger bets with which funders could have an impact using larger sums of money, i.e., ~1-2 orders of magnitude more than current large longtermist grants. It is generally understood that EA has a funding overhang, which is even more true if you buy into longtermism, given that there are few obvious longtermist investment opportunities.

I agree that large-scale projects often have cost and time overruns (I enjoyed this EconTalk episode with Bent Flyvbjerg on the reasons for this). But if we believe that a non-negligible number of megaprojects do work out, it seems to be an area we should explore.

Maybe it'd be a good idea to collect a list of past megaprojects that worked out well without massive cost overruns. Reflecting on this briefly, I think of the Manhattan Project, prestigious universities (Oxbridge, LMU, Harvard), and public transport projects like the TGV.

Comment by slg (Simon_Grimm) on ludwigbald's Shortform · 2021-11-29T14:25:34.180Z · EA · GW

Hey Ludwig, happy to collaborate on this. A bunch of other EAs and I analyzed the initial party programs under EA considerations; this should be easily adapted to the final agreement and turned into a forum post.

Comment by slg (Simon_Grimm) on What high-level change would you make to EA strategy? · 2021-11-06T14:53:18.470Z · EA · GW

Caveat: I work in Biosecurity.

I agree with the last point. Based on Ben Todd's presentation at EAG,

  • 18% of engaged EAs work on AI alignment, while
  • 4% work on Biosecurity.

Based on Toby Ord's estimates in The Precipice, the risk of extinction in the next 100 years from

  • Unaligned artificial intelligence is ∼ 1 in 10, while
  • the risk from engineered pandemics is ∼ 1 in 30.

So the stock of people working on AI is 4.5x as large as in Biosecurity, while AI is only 3x as important.

There is a lot of nuance missing here, but I'm moderately confident that this imbalance warrants more people moving into Biosecurity, especially now that we're in a moment of high tractability concerning pandemic preparedness.
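To make the arithmetic explicit, here is a minimal sketch using only the figures quoted above (Ben Todd's talent shares and Toby Ord's risk estimates); it ignores tractability, neglectedness outside EA, and other considerations.

```python
# Back-of-the-envelope comparison of talent allocation vs. estimated risk,
# using only the numbers quoted above.

talent_share = {"ai": 0.18, "bio": 0.04}          # share of engaged EAs (Ben Todd, EAG talk)
extinction_risk = {"ai": 1 / 10, "bio": 1 / 30}   # next-100-years estimates (The Precipice)

talent_ratio = talent_share["ai"] / talent_share["bio"]      # 4.5
risk_ratio = extinction_risk["ai"] / extinction_risk["bio"]  # 3.0

print(f"AI has {talent_ratio:.1f}x the people but only {risk_ratio:.1f}x the estimated risk")
```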

Comment by slg (Simon_Grimm) on What high-level change would you make to EA strategy? · 2021-11-06T14:32:40.493Z · EA · GW

Is there a historical precedent for social movements buying media? If so, it'd be interesting to know how that influenced the outlet's public perception/readership.

As of now, it seems like movements "merely" influence media, such as the NYTimes turning more leftward in the last few years or Vox employing more EA-oriented journalists.

Comment by slg (Simon_Grimm) on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-06T14:21:14.985Z · EA · GW

Spencer Greenberg also comes to mind; he once noted that his agreeableness is in the 77th percentile. I'd consider him a generator.

Comment by slg (Simon_Grimm) on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T17:13:55.034Z · EA · GW

Launching a Nucleic Acid Observatory, as outlined recently by Kevin Esvelt and others here (link to paper). With $100m one could launch a pilot version covering 5 to 10 states in the US.

Comment by slg (Simon_Grimm) on Project Ideas in Biosecurity for EAs · 2021-02-27T09:22:52.963Z · EA · GW

Thanks for this write-up. Concerning this point:

> Quantitative investigation of tech capabilities required for broad environmental nucleic acid surveillance to be useful

This article provides a good introduction to current challenges within genomic pathogen surveillance: Ten recommendations for supporting open pathogen genomic analysis in public health

Comment by slg (Simon_Grimm) on The German Effective Altruism Network - recap 2020 · 2021-01-23T13:58:45.694Z · EA · GW

Hi, happy to read about where you stand and where you want to go with NEAD. 

FYI, the link in this sentence seems broken: "currently offering self-hosted alternatives to Slack, Google, and Zoom, one reason for this being our concern with risks from data privacy neglect. " 

Comment by slg (Simon_Grimm) on If you value future people, why do you consider near term effects? · 2020-08-29T14:08:59.349Z · EA · GW

Hi!

I think you mean to say: "every way a higher growth rate would be good is also an equally plausible reason it would be bad"

Instead you wrote:

"Evidential symmetry here would be something like: every way a higher growth rate would be good is also an equally plausibly reason it would be good eg. increased emissions are equally likely to be good as they are to be bad.)"

Comment by slg (Simon_Grimm) on Objections to Value-Alignment between Effective Altruists · 2020-08-16T18:56:52.513Z · EA · GW

Very nuanced read, thanks. I think it expresses something that quite a number of people have had in the back of their minds. Out of interest, when you say:

> "My mind conjures the image of a stereotypical EA shockingly easily. [...] The stereotypical EA will use specific words and phrases, wear particular clothes, read particular sources and blogs, know of particular academics and even use particular mannerism."

What did you have in mind there? You can also PM me if you don't want to answer this in a public forum.