Posts

How to find good 1-1 conversations at EAGx Virtual · 2020-06-12T16:30:26.280Z · score: 48 (23 votes)

Comments

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-07-15T08:34:49.612Z · score: 1 (1 votes) · EA · GW

Cool, thanks.

Comment by aidan-o-gara on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-15T00:04:12.461Z · score: 4 (4 votes) · EA · GW

Also (you might already be familiar with these as well): The Bottom Billion by Paul Collier, and Poor Economics by Banerjee and Duflo, who won the Nobel with Michael Kremer for their work on randomized controlled trials in development economics.

Comment by aidan-o-gara on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-15T00:01:19.179Z · score: 4 (3 votes) · EA · GW

Hickel has some very interesting ideas, and I really enjoyed your writeup. I find plausible the central claim that neocolonialist foreign and economic policy has put hundreds of millions of people into poverty. I'm a bit unsure of his arguments about the harms of debt and trade terms (hopefully I'll return to these later), but the case that foreign-imposed regime change has been harmful seems really strong.

So, question: Might it be highly impactful to prevent governments from harmfully overthrowing foreign regimes?

  • Governments overthrow each other all the time. The United States has overthrown dozens of foreign governments over the last century, including recent interventions in Libya, Yemen, Palestine, and Iraq. The Soviet Union did the same, and modern-day Russia aggressively interferes in foreign democratic elections, including the 2016 US election. China might do the same, but I can't find great evidence. I don't know about the UK and EU; I can't find obvious recent examples that weren't primarily US-led (e.g. Iraq, Libya). Official histories probably understate the number of overthrows, because successful attempts can remain secret, at least for the years immediately following the overthrow.
  • Toppling governments can be extremely harmful. After 5 minutes of Googling, it seems foreign-imposed regime changes might increase the likelihood of civil wars ("in roughly 40 percent of the cases of covert regime change undertaken during the Cold War, a civil war occurred within 10 years of the operation.") and human rights abuses ("In more than 55 percent of the cases of covert regime-change missions undertaken during the Cold War, the targeted states experienced a government-sponsored mass killing episode within 10 years of the regime-change attempt."). I'd also expect to find strong evidence of increased poverty, decreased economic growth, and worse health and education outcomes.

Clearly there's much more to be discussed here, but I'll post now and come back later. A few questions:

  • How tractable is changing the foreign policy of major governments? How does one do it? What are some examples of historical successes or failures?
  • Is this "neglected"? The concept doesn't apply super cleanly here, but my hunch is that few people involved in foreign policy have EA values, meaning EA might have the "competitive edge" of pursuing an uncommon goal.
  • What are the risks to EA here? Government is generally contentious and polarized, and regime change is an extremely controversial issue. What specific ways could EA attempts to work on regime change or other foreign policy causes end up backfiring?

Some related conversation:

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-07-14T22:47:44.662Z · score: 1 (1 votes) · EA · GW

Command + K should add a hyperlink!

Comment by aidan-o-gara on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-14T17:27:33.311Z · score: 4 (3 votes) · EA · GW

FWIW, here's an introduction to longtermism and AI risks I wrote for a friend. (My friend has some technical background; he had read Doing Good Better but hadn't engaged further with EA, and I thought he'd be a good fit for AI Policy research but not technical research.)

  • Longtermism: Future people matter, and there might be lots of them, so the moral value of our actions is significantly determined by their effects on the long-term future. We should prioritize reducing "existential risks" like nuclear war, climate change, and pandemics that threaten to drive humanity to extinction, preventing the possibility of a long and beautiful future. 
    • Quick intro to longtermism and existential risks from 80,000 Hours
    • Academic paper arguing that future people matter morally, and we have tractable ways to help them, from the Doing Good Better philosopher
    • Best resource on this topic: The Precipice, a book explaining what risks could drive us to extinction and how we can combat them, released earlier this year by another Oxford philosophy professor
  • Artificial intelligence might transform human civilization within the next century, presenting incredible opportunities and serious potential problems
    • Elon Musk, Bill Gates, Stephen Hawking, and many leading AI researchers worry that extremely advanced AI poses an existential threat to humanity (Vox)
    • Best resource on this topic: Human Compatible, a book explaining the threats, existential and otherwise, posed by AI. Written by Stuart Russell, CS professor at UC Berkeley and author of the leading textbook on AI. Daniel Kahneman calls it "the most important book I have read in quite some time". (Or this podcast with Russell) 
    • CS paper giving the technical explanation of what could go wrong (from Google/OpenAI/Berkeley/Stanford)
    • How you can help by working on US AI policy, explains 80,000 Hours
    • (AI is less morally compelling if you don't care about the long-term future. If you want to focus on the present, maybe focus on other causes: global poverty, animal welfare, grantmaking, or researching altruistic priorities.)

Generally, I'd like to hear more about how different people introduce the ideas of EA, longtermism, and specific cause areas. There's no clear-cut canon, and effectively personalizing an intro can be difficult, so I'd love to hear how others navigate it.

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-07-14T14:47:04.672Z · score: 1 (1 votes) · EA · GW

When reading the text of a post. You’re right, it’s totally good when scrolling downwards. I’m having trouble when writing comments, scrolling up and down between the text and my comment and getting blocked by the bars.

Comment by aidan-o-gara on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2020-07-14T11:02:10.499Z · score: 3 (3 votes) · EA · GW

Really enjoyed this, thanks! I'll have more thoughts soon, but in the meantime here's another post you might enjoy: "Growth and the case against randomista development" by John Halstead and Hauke Hillebrandt.

Comment by aidan-o-gara on Max_Daniel's Shortform · 2020-07-14T11:00:04.825Z · score: 3 (2 votes) · EA · GW

Very interesting writeup. I wasn't aware of Hickel's critique, but it seems reasonable.

Do you think it matters who's right? I suppose it's important to know whether poverty is increasing or decreasing if you want to evaluate the consequences of historical policies or events, and even for general interest. But does it have any specific bearing on what we should do going forwards?

Comment by aidan-o-gara on Tips for Volunteer Management · 2020-07-14T09:51:29.607Z · score: 5 (4 votes) · EA · GW

Thanks for this! I’ve done a bit of volunteering and these suggestions seem very accurate and applicable. I’ll refer to this if I work with a volunteer program again.

Do you have any thoughts on when organizations benefit most from working with volunteers? When is it a bad idea, what makes the difference?

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-07-14T09:37:16.556Z · score: 1 (1 votes) · EA · GW

On mobile, you could shrink the menu bars on the top and bottom of your screen (where the top has the Forum logo, and bottom has “all posts” and other navigation bars). Smaller navbars -> More screen space for reading -> easier to read and comment.

Comment by aidan-o-gara on Andreas Mogensen's "Maximal Cluelessness" · 2020-07-05T07:25:10.227Z · score: 5 (3 votes) · EA · GW

How important do you think non-sharp credence functions are to arguments for cluelessness being important? If you generally reject Knightian uncertainty and quantify all possibilities with probabilities, how diminished is the case for problematic cluelessness?

(Or am I just misunderstanding the words here?)

Comment by aidan-o-gara on HStencil's Shortform · 2020-07-01T04:02:18.033Z · score: 1 (1 votes) · EA · GW

Glad to hear it! Very good idea to talk with a bunch of stats people; your updated tests are definitely beyond my understanding. Looking forward to talking (or not), and let me know if I can help with anything.

Comment by aidan-o-gara on HStencil's Shortform · 2020-06-30T06:10:19.829Z · score: 2 (2 votes) · EA · GW

After taking a closer look at the actual stats, I agree this analysis seems really difficult to do well, and I don't put much weight on this particular set of tests. But your hypothesis is plausible and interesting, your data is strong, your regressions seem like the right general idea, and this seems like a proof of concept that this kind of analysis could demonstrate a real effect. I'm also surprised that I can't find any statistical analysis of COVID and Biden support anywhere, even though it seems very doable and very interesting. If I were you and wanted to pursue this further, I would figure out the strongest case that there might be an effect to be found here, then bring it to some people who have the stats skills and public platform to find the effect and write about it.

Statistically, I think you have two interesting hypotheses, and I'm not sure how you should test them or what you should control for. (Background: I've done undergrad intro stats-type stuff.)

  • Hypothesis A (Models 1 and 2) is that more COVID is correlated with more Biden support.
  • Hypothesis B (Model 3) is that more Biden support is correlated with more tests, which in turn has unclear causal effects on COVID.

I say "more COVID" to be deliberately ambiguous because I'm not sure which tracking metric to use. Should we expect Biden support to be correlated with tests, cases, hospitalizations, or deaths? And for each metric, should it be cumulative over time, or change over a given time period? What would it mean to find different effects for different metrics? Also, they're all correlated with each other - does that bias your regression, or otherwise affect your results? I don't know.

I also don't know what controls to use. Controlling for state-level fixed effects seems smart, and controlling for date is interesting and potentially captures a different dynamic, but I have no idea how you should control for the correlated bundle of tests/cases/hospitalizations/deaths.

Without resolving these issues, I think the strongest evidence in favor of either hypothesis would be a battery of regressions that tests many different implementations of the overall hypothesis, with most of them seemingly supporting it. I'm not sure what the right implementation is, I'd want someone with a strong statistics background to resolve these issues before really believing the result, and this method can fail; but if most implementations you can imagine point in the same direction, that's at least a decent reason to investigate further.
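To make that concrete, here's a rough sketch of what such a battery could look like in Python with pandas and statsmodels: one OLS regression per COVID metric, with state and date fixed effects and errors clustered by state. The file name and column names (biden_support, the per-capita metrics, state, date) are placeholders I made up for illustration, not your actual data, and clustering by state is just one reasonable default; the specification would still need vetting by someone with a stronger stats background.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical COVID metrics to test; the real analysis might use
# cumulative counts, changes over a window, or some other transformation.
covid_metrics = [
    "tests_per_capita",
    "cases_per_capita",
    "hospitalizations_per_capita",
    "deaths_per_capita",
]

# Hypothetical panel: one row per state per date. File and column names
# are placeholders for whatever the actual dataset looks like.
df = pd.read_csv("covid_biden_state_daily.csv")
df = df.dropna(subset=["biden_support", "state", "date"] + covid_metrics)

results = {}
for metric in covid_metrics:
    # One OLS regression per metric, with state and date fixed effects
    # entered as categorical dummies, and standard errors clustered by state.
    formula = f"biden_support ~ {metric} + C(state) + C(date)"
    fit = smf.ols(formula, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["state"]}
    )
    results[metric] = (fit.params[metric], fit.pvalues[metric])

# If most specifications point in the same direction, that's at least a weak
# signal worth handing to someone with a stronger stats background.
for metric, (coef, pval) in results.items():
    print(f"{metric}: coefficient = {coef:.4f}, p-value = {pval:.4f}")
```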

If you actually want to convince someone to look into this (with or without you), maybe do that battery of regressions, then write up a very generalized takeaway along the lines of "The hypothesis is plausible, the data is here, and the regressions don't rule out the hypothesis. Do you want to look into whether or not there's an effect here?"

Who'd be interested in this analysis? Strong candidates might include academics, think tanks, data journalism news outlets, and bloggers. The stats seem very difficult, maybe such that the best fit is academics, but I don't know. News outlets and bloggers that aren't specifically data savvy probably aren't capable of doing this analysis justice. Without working with someone with a very strong stats background, I'd be cautious about writing this for a public audience.

Not sure if you're even interested in any of that, but FWIW I think they'd like your ideas and progress so far. If you'd like to talk about this more, I'm happy to chat; you can pick a time here. Cool analysis, and kudos on thinking of an interesting topic, seriously following through with the analysis, and recognizing its limitations.

Comment by aidan-o-gara on HStencil's Shortform · 2020-06-30T02:30:38.076Z · score: 4 (2 votes) · EA · GW

This is really cool. Maybe email it to 538 or Vox? I’ve had success contacting them to share ideas for articles before.

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-29T05:46:55.709Z · score: 1 (1 votes) · EA · GW

I like this idea a lot. It probably lowers the effort bar for a top-level post, which I think is good.

Comment by aidan-o-gara on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T21:06:41.068Z · score: 3 (3 votes) · EA · GW

This makes sense. If 99% of humanity dies, the surviving groups might not be well-connected by transportation and trade. Modern manufacturing starts with natural resources from one country, assembles its products in the next, and ships and sells to a third. But if e.g. ships and planes can’t get fuel or maintenance, then international trade fails, supply chains break down, and survivors can’t utilize the full capacity of technology.

As gavintaylor says below, industrialization might need a critical mass of wealth to begin. (Maybe accumulated wealth affords freedom from subsistence work and allows specialization of labor?)

Though over thousands of years, the knowledge that progress is possible might end up encouraging people to rebuild the lost infrastructure.

Comment by aidan-o-gara on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T20:50:40.674Z · score: 1 (1 votes) · EA · GW

Yep, agreed

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-28T08:34:15.655Z · score: 1 (1 votes) · EA · GW

What if, when you highlight text within a post, a small toolbar pops up where you can click to quote the text in your comment box?

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-28T07:12:18.373Z · score: 1 (1 votes) · EA · GW

So like, when I'm logged into my account, I'll see every shortform post as top level?

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-28T07:08:08.350Z · score: 4 (3 votes) · EA · GW

Roots-based approach to the same outcome: Leave an open invitation and a Calendly link in your EAForum bio.

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-28T07:05:53.584Z · score: 4 (3 votes) · EA · GW

StackExchange might have some great principles to implement here, though I don't know much about it

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-28T07:03:12.424Z · score: 1 (1 votes) · EA · GW

That's an interesting idea for Forum v3: a wiki for all EA materials. Newcomers could go to the Forum and find Peter Singer, Doing Good Better, and links to 80,000 Hours research + new posts every day.

Related: "Should EA Buy Distribution Rights for Foundational Books?" by Cullen O'Keefe

Comment by aidan-o-gara on Civilization Re-Emerging After a Catastrophic Collapse · 2020-06-28T06:31:42.376Z · score: 3 (2 votes) · EA · GW

Some evidence for your counterargument: There was no agricultural revolution in humanity's first 500,000 years, yet an industrial revolution happened only 10,000 years later.

Seems like industrial revolutions are more likely than agricultural ones.

[I haven't listened to Jabari's talk yet, just a quick thought]

Comment by aidan-o-gara on EA is vetting-constrained · 2020-06-27T21:45:12.024Z · score: 1 (1 votes) · EA · GW

One year later, do you think Meta is still less constrained by vetting, and more constrained by a lack of high-quality projects to fund?

And for other people who see vetting constraints: Do you see vetting constraints in particular cause areas? What kinds of organizations aren't getting funding?

Comment by aidan-o-gara on What to do with people? · 2020-06-27T21:00:30.780Z · score: 1 (1 votes) · EA · GW

Interesting, this definitely seems possible. Are there any examples of EA projects that failed, resulting in less enthusiasm for EA projects generally?

Comment by aidan-o-gara on EA is risk-constrained · 2020-06-24T10:16:25.178Z · score: 4 (4 votes) · EA · GW

What kinds of risky choices do EAs avoid? Are you thinking of any specific examples, perhaps particular cause areas or actions?

Comment by aidan-o-gara on EA considerations regarding increasing political polarization · 2020-06-22T22:17:21.627Z · score: 20 (11 votes) · EA · GW

Maybe EA can't affect political polarization nearly as much as the other way around - political polarization can dramatically affect EA.

EA could get "cancelled", perhaps for its sometimes tough-to-swallow focus on prioritization and tradeoffs, for a single poorly phrased and high-profile comment, for an internal problem at an EA org, or for a lack of overall diversity. Less dramatically, EA could lose influence by affiliating with a political party, or by refusing to, and degrading norms around discourse and epistemics could turn EA discussions into partisan battlegrounds.

This post makes a lot of sense; I definitely think EA should consider the best direction for the political climate to grow in. We should also spend plenty of time considering how EA will be affected by increasing polarization, and how we can respond.

EAs in policy and government seem particularly interesting here, as potentially highly subject to the effects of political climate. I'd love to hear how EAs in policy and government have responded or might respond to these kinds of dynamics.

One question: Should EA try to be "strategically neutral" or explicitly nonpartisan? Many organizations do so, from think tanks to newspapers to non-profits. What lessons can we learn from other non-partisan groups and movements? What are their policies, how do they implement them, and what effects do they have?

Thanks for this post, very interesting!

Comment by aidan-o-gara on How to find good 1-1 conversations at EAGx Virtual · 2020-06-20T20:40:07.245Z · score: 2 (2 votes) · EA · GW

Hey, that’s a great idea. Personally I don’t always turn on video, because of everything from connection issues to not wanting to get out of bed. Videochat can be nice, but definitely isn’t necessary.

I’ve edited the post to recommend noting that videochat is optional, and audio-only is perfectly good.

Thanks for the thought and the kind words! Hope you had a great conference too :)

Comment by aidan-o-gara on EA Forum feature suggestion thread · 2020-06-20T06:43:47.581Z · score: 13 (4 votes) · EA · GW

Sans-serif font in body text! The comments section is absolutely beautiful to read, but I find the body text of posts very difficult. Most blogs and online news sources seem to use sans-serif, probably for readability.

Alternatively, give users the option to pick their own font. Also, maybe make text black instead of a lighter grey?

Comment by aidan-o-gara on Longtermism ⋂ Twitter · 2020-06-16T23:37:16.510Z · score: 5 (5 votes) · EA · GW

Agreed, and I support EA Forum norms of valuing quick takes in both posts and comments. Personally, the perceived bar to contributing feels way too high.

Comment by aidan-o-gara on Longtermism ⋂ Twitter · 2020-06-16T02:35:48.617Z · score: 6 (5 votes) · EA · GW

Welcome, and thanks for the contribution! I strongly agree with all three recommendations, and would point to #EconTwitter as a Twitter community that has managed to do all three very well.

Maintaining a strong code of conduct seems particularly useful. Different parts of Twitter have very different conversation norms, ranging from professional to degenerate and constructive to cruel. Norms are harder to build than to destroy, but ultimately individual people set the norm by what they tweet, so anyone can contribute to building the culture they want to see.

FWIW, my two cents would be to discourage more serious EA conversations from moving to Twitter. In my experience, it often brings out the worst in people and conversations. (It also has plenty of positives, and can be lots of fun.)

Comment by aidan-o-gara on How to find good 1-1 conversations at EAGx Virtual · 2020-06-15T05:06:56.395Z · score: 10 (7 votes) · EA · GW

Thanks!

Sidenote: I love when people praise my contributions to the EA Forum. Posting here can be intimidating - the bar for quality of conversation is often really high, disagreements can be harsh, and especially when using my real name I don't want to earn a bad reputation. So when other people offer positive feedback or sincere gratitude, it makes me really happy and encourages me to post more often.

If you want to encourage more discussion on the EA Forum, thank someone for their contributions. So thank you Michael!

Comment by aidan-o-gara on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-15T01:59:30.520Z · score: 4 (3 votes) · EA · GW

Thanks for this writeup - I'd greatly appreciate any further information you could provide about anti-slavery.

  • What are the links for these sources?
  • Can you share your cost-effectiveness calculations for The Freedom Fund? What does it assume?
  • What are the best writeups of the problem? Could you link to them?

Comment by aidan-o-gara on EA and tackling racism · 2020-06-13T08:47:46.335Z · score: 4 (4 votes) · EA · GW

That's an interesting point.

EA career advice often starts with a pressing global problem, thinks about what skills could help solve that problem, and then recommends that you personally go acquire those skills. What if we ask the question from the other direction: For any given skillset or background, how can EA nudge people towards more impactful careers?

Racism causes a lot of suffering, and some of the best minds of this generation are working towards ending it. If EA helped those people find the most effective ways to advance racial justice, it would benefit the world and expose more people to EA ways of thinking.

One way the EA movement could succeed over the next few decades is by becoming a source of information for a broad popular audience about how to be more impactful in the most popular causes.

Comment by aidan-o-gara on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-12T05:50:50.180Z · score: 4 (4 votes) · EA · GW

Strongly agree on the importance of audio quality. Cool idea!

Comment by aidan-o-gara on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-11T02:31:40.255Z · score: 9 (6 votes) · EA · GW

This sounds really interesting. I looked into QRI once before and was concerned that I couldn’t find much mainstream recognition of their work.

Would you know how much mainstream recognition QRI’s work has received, either for this line of research or others? Has it published in peer-reviewed journals, received any grants, or garnered positive reviews from other academics? Could you point me to any information here?

Thanks, and looking forward to hopefully hearing this talk.

Comment by aidan-o-gara on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-21T18:36:16.040Z · score: 6 (4 votes) · EA · GW

I really love the new section in Key Ideas, "Other paths that may turn out to be very promising". I've been concerned that 80K messaging is too narrow and focuses too much on just your priority paths, almost to the point of denigrating other EA careers. I think this section does a great job contextualizing your recommendations and encouraging more diverse exploration, so thanks!

Somewhat related: I'm guessing 80K focuses on a narrower range of priority paths partially because specializing is valuable for an organization in its own right. If there are a dozen equally impactful areas you could work in, you won't put equal effort into each; you're better off picking a few and specializing, even if the choice is arbitrary, so you can reap returns to scale within the field by learning, making connections, hiring specialists, and building other field-specific abilities.

If you actually think about things this way, I would suggest saying so more explicitly in your Key Ideas, because I didn't realize it for a long time and it really changes how I think about your recommendations.

(Unrelated to this post, but hopefully helpful)

Comment by aidan-o-gara on CEA's Plans for 2020 · 2020-04-24T09:46:31.158Z · score: 3 (2 votes) · EA · GW

Why are you moving to Oxford?

Comment by aidan-o-gara on If you value future people, why do you consider near term effects? · 2020-04-16T10:30:31.200Z · score: 7 (5 votes) · EA · GW

Provably successful near-term work could drive the growth of the EA movement, benefitting the long term. I’d guess that more people join EA because of GiveWell and AMF than because of AI Safety and biorisk. That’s because (a) near-term work is more popular in the mainstream, and (b) near-term work can better prove success. More obvious successes will probably drive more EA growth. On the other hand, if EA makes a big bet on AI Safety and 30 years from now we’re no closer to AGI or seeing the effects of AI risks, the EA movement could sputter. It’s hard to imagine demonstrably failing like that in near-term work. Maybe the best gift we can give the future isn’t direct work on longtermism, but is rather enabling the EA movement of the future.

I’m not actually sure I buy this argument. If we’re at the Hinge of History and we have more leverage over the expected value of the future than anyone in the future will, maybe some longtermist direct work now is more important than enabling more longtermist direct work in the future. Also, maybe EA’s best sales pitch is that we don’t do sales pitches, we follow the evidence even to less popular conclusions like longtermism.

Comment by aidan-o-gara on If you value future people, why do you consider near term effects? · 2020-04-16T10:18:44.109Z · score: 6 (5 votes) · EA · GW

If it’s extremely difficult to figure out the direct effects of near-term interventions, then maybe it’s proportionally harder to figure out long-term effects - even to the point of complex cluelessness becoming de facto simple cluelessness.

Some people argue from a “skeptical prior”: simply put, most efforts to do good fail. The international development community certainly seems like a “broad coalition of trustworthy people”, but their best guesses are almost useless without hard evidence.

If you’re GiveWell-level pessimistic about charities having their intended impact even with real-time monitoring and evaluation of measurable impacts, you might be utterly and simply clueless about all long-term effects. In that case, long-term EV is symmetrical and short-term effects dominate.

Comment by aidan-o-gara on What posts do you want someone to write? · 2020-03-25T05:54:48.717Z · score: 1 (1 votes) · EA · GW

Makes a lot of sense. I'm sure Vox and the New York Times are interested in very different kinds of submissions; writing with a particular style in mind probably dramatically increases the odds of publication.

I still wonder what the success rate here is - closer to 1% or to 10%? If the latter, I could see this being pretty impactful and possibly scalable.

Comment by aidan-o-gara on What posts do you want someone to write? · 2020-03-24T07:47:09.930Z · score: 7 (5 votes) · EA · GW

Similarly, an AMA from someone working at an EA org who otherwise isn’t personally very engaged with EA. Maybe they really disagree with EA, or more likely, they’re new to EA ideas and haven’t identified with EA in the past.

They’ll be deeply engaged on the substantive issues but will bring different identities and biases, maybe offering important new perspectives.

Comment by aidan-o-gara on What posts do you want someone to write? · 2020-03-24T07:42:34.185Z · score: 1 (1 votes) · EA · GW

That’s a super cool idea.

  • What writing currently exists like this? Vox’s Future Perfect, maybe a few one-off articles in other major publications?
  • Where’s best to publish this? Feels like a lot of work for a blogpost, but I doubt the NYT is looking for unsolicited submissions - are there publishing platforms that would be interested in this?

Comment by aidan-o-gara on Ben Cottier's Shortform · 2020-03-13T18:27:18.925Z · score: 3 (2 votes) · EA · GW

I'd love to see this post and generally more discussion of what kinds of x-risks and s-risks matter most. 80K's views seem predicated on deeply held, nuanced, and perhaps unconventional views of longtermism, and it can be hard to learn all the context to catch up on those discussions.

One distinction I like is OpenPhil talking about Level 1 and Level 2 GCRs: https://www.openphilanthropy.org/blog/long-term-significance-reducing-global-catastrophic-risks

Comment by aidan-o-gara on Quantifying lives saved by individual actions against COVID-19 · 2020-03-07T04:07:59.798Z · score: 15 (10 votes) · EA · GW

Yeah, I’d love to see it copied over here; it looks like an interesting analysis.

Comment by aidan-o-gara on EA Handbook 3.0: What content should I include? · 2020-01-30T05:08:17.688Z · score: 1 (1 votes) · EA · GW

These are the articles I sent one friend interested in EA: https://forum.effectivealtruism.org/posts/KvLyxHcwCforffpkC/introduction-to-effective-altruism-reading-list.

Comment by aidan-o-gara on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T01:41:35.735Z · score: 1 (1 votes) · EA · GW

How much ML/CS knowledge is too much? For someone working in AI Policy, do you see diminishing returns to becoming a real expert in ML/CS, such that you could work directly as a technical person? Or is that level of expertise very useful for policy work?

Comment by aidan-o-gara on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T01:37:47.431Z · score: 1 (1 votes) · EA · GW

How useful is general CS knowledge vs ML knowledge specifically?

Comment by aidan-o-gara on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T01:32:21.002Z · score: 1 (1 votes) · EA · GW

What impactful career paths do you think law school might prepare you particularly well for, besides ETG and AI Policy? If an EA goes to law school and discovers they don't want to do ETG or AI Policy, where should they look next?

Do these options mostly look like "find something idiosyncratic and individual that's impactful", or do you see any major EA pipelines that could use tons of lawyers?

Comment by aidan-o-gara on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-13T01:31:16.098Z · score: 1 (1 votes) · EA · GW

[Didn't mean to comment this]