Posts

Roodman's Thoughts on Biological Anchors 2022-09-14T12:23:23.895Z
Reminder (Sept 15th deadline): Apply for the Open Philanthropy Technology Policy Fellowship 2022-09-09T18:58:54.020Z
Apply to the Open Philanthropy Technology Policy Fellowship! 2022-07-15T19:47:33.964Z
Tips for conducting worldview investigations 2022-04-12T19:28:55.897Z
Features that make a report especially helpful to me 2022-04-12T13:57:15.509Z
AMA: The new Open Philanthropy Technology Policy Fellowship 2021-07-26T15:11:50.661Z
Apply to the new Open Philanthropy Technology Policy Fellowship! 2021-07-20T18:41:46.759Z
A personal take on longtermist AI governance 2021-07-16T22:08:03.981Z
EA needs consultancies 2021-06-28T15:18:38.844Z
Superforecasting in a nutshell 2021-02-25T06:11:28.886Z
Notes on 'Atomic Obsession' (2009) 2019-10-26T00:30:21.491Z
Information security careers for GCR reduction 2019-06-20T23:56:58.275Z
History of Philanthropy Case Study: The Center on Budget and Policy Priorities and State EITC Programs 2018-02-02T16:25:56.097Z
How big a deal was the Industrial Revolution? 2017-09-16T07:00:00.000Z
Three wild speculations from amateur quantitative macrohistory 2017-09-12T09:47:51.112Z
Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood 2017-06-28T15:49:11.655Z
New Report on Consciousness and Moral Patienthood 2017-06-06T13:21:52.145Z
New Report on Early Field Growth 2017-04-26T13:19:33.230Z
Efforts to Improve the Accuracy of Our Judgments and Forecasts 2016-10-25T13:13:13.819Z
Efforts to Improve the Accuracy of Our Judgments and Forecasts (Open Philanthropy) 2016-10-25T10:09:07.145Z
Meetup : GiveWell research event for Bay Area effective altruists! 2015-06-30T01:22:05.169Z
Will MacAskill on normative uncertainty 2014-04-09T15:27:32.000Z
How efficient is the charitable market? 2013-08-26T04:00:40.000Z
Four focus areas of effective altruism 2013-07-08T04:00:02.000Z
The Cognitive Science of Rationality 2011-09-12T10:35:42.246Z

Comments

Comment by lukeprog on Why does AGI occur almost nowhere, not even just as a remark for economic/political models? · 2022-10-03T09:20:12.718Z · EA · GW

I suspect this is because there isn't a globally credible/legible consensus body generating or validating the forecasts, akin to the IPCC for climate forecasts that are made with even longer time horizons.

Comment by lukeprog on EA Serbia is now launching! · 2022-10-03T09:17:31.231Z · EA · GW

Cool, I might be spending a few weeks in Belgrade sometime next year! I'll reach out if that ends up happening. (Writing from Dubrovnik now, and I met up with some rationalists/EAs in Zagreb ~1mo ago.)

Comment by lukeprog on Questioning the Foundations of EA · 2022-09-02T18:54:03.424Z · EA · GW

(cross-posted)

Re: Shut Up and Divide. I haven't read the other comments here but…

For me, effective-altruism-like values are mostly second-order, in the sense that a lot of my revealed behavior shows that a lot of the time I don't want to help strangers, animals, future people, etc. But I think I "want to want to" help strangers, and sometimes the more goal-directed, rational side of my brain wins out and I do the thing consistent with my second-order desires: something to help strangers at personal sacrifice to myself (though I do this less than e.g. Will MacAskill). But I don't really detect in myself a symmetrical second-order want to NOT want to help strangers. So that's one thing that "shut up and multiply" has over "shut up and divide," at least for me.

That said, I realize now that I'm often guilty of ignoring this second-orderness when e.g. making the case for effective altruism. I will often appeal to my interlocutor's occasional desire to help strangers and suggest they generalize it, but I don't symmetrically appeal to their clearer and more common disinterest in helping strangers and suggest they generalize THAT. To be more honest and accurate while still making the case for EA, I should be appealing to their second-order desires, though of course that's a more complicated conversation.

Comment by lukeprog on Open EA Global · 2022-09-01T19:03:38.204Z · EA · GW

FWIW I generally agree with Eli's reply here. I think maybe EAG should 2x or 3x in size, but I'd lobby for it to not be fully open.

Comment by lukeprog on Historical EA funding data · 2022-08-14T14:50:19.863Z · EA · GW

Not sure it's worth the effort, but I'd find the charts easier to read if you used a wider variety of colors.

Comment by lukeprog on Some concerns about policy work funding and the Long Term Future Fund · 2022-08-13T08:12:17.429Z · EA · GW

As someone with a fair amount of context on longtermist AI policy-related grantmaking that is and isn't happening, I'll just pop in here briefly to say that I broadly disagree with the original post and broadly agree with [abergal's reply](https://forum.effectivealtruism.org/posts/Xfon9oxyMFv47kFnc/some-concerns-about-policy-work-funding-and-the-long-term?commentId=TEHjaMd9srQtuc2W9).

Comment by lukeprog on The Cognitive Science of Rationality · 2022-08-09T14:10:27.988Z · EA · GW

Thanks, Anna!

Comment by lukeprog on What is your theory of victory? · 2022-07-14T11:05:17.975Z · EA · GW

FWIW I don't use "theory of victory" to refer to 95th+ percentile outcomes (plus a theory of how we could plausibly have ended up there). I use it to refer to outcomes where we "succeed / achieve victory," whether I think that represents the top 5% of outcomes or the top 20% or whatever. So e.g. my theory of victory for climate change would include more likely outcomes than my theory of victory for AI does, because I think succeeding re: AI is less likely.

Comment by lukeprog on EA for dumb people? · 2022-07-11T12:15:03.262Z · EA · GW

FWIW, I wouldn't say I'm "dumb," but I dropped out of a University of Minnesota counseling psychology undergrad degree and have spent my entire "EA" career (at MIRI then Open Phil) working with people who are mostly very likely smarter than I am, and definitely better-credentialed. And I see plenty of posts on EA-related forums that require background knowledge or quantitative ability that I don't have, and I mostly just skip those.

Sometimes this makes me insecure, but mostly I've been able to just keep repeating to myself something like "Whatever, I'm excited about this idea of helping others as much as possible, I'm able to contribute in various ways despite not being able to understand half of what Paul Christiano says, and other EAs are generally friendly to me."

A couple things that have been helpful to me: comparative advantage and stoic philosophy.

At some point it would also be cool if there was some kind of regular EA webzine that published only stuff suitable for a general audience, like The Economist or Scientific American but for EA topics.

Comment by lukeprog on Moral weights for various species and distributions · 2022-07-04T09:50:02.332Z · EA · GW

Since this exercise is based on numbers I personally made up, I would like to remind everyone that those numbers are extremely made up and come with many caveats given in the original sources. It would not be that hard to produce numbers more reasonable than mine, at least re: moral weights. (I spent more time on the "probability of consciousness" numbers, though that was years ago and my numbers would probably be different now.)

Comment by lukeprog on Announcing: EA Engineers · 2022-07-04T09:45:02.505Z · EA · GW

Despite ample need for materials science in pandemic prevention, electrical engineers in climate change, civil engineers in civilisational resilience, and bioengineering in alternative proteins, EA has not yet built a community fostering the talent needed to meet these needs.

Also engineers who work on AI hardware, e.g. to help develop the technologies and processes needed to implement most compute governance ideas!

Comment by lukeprog on Future Fund June 2022 Update · 2022-07-01T07:19:43.476Z · EA · GW

Very exciting!

Comment by lukeprog on Where can I learn about how DALYs are calculated? · 2022-06-11T15:22:50.031Z · EA · GW

+1 to the question. I tried to figure this out a couple of years ago, and all the footnotes and citations kept bottoming out without providing much information.

Comment by lukeprog on [Link] Luke Muehlhauser: Effective Altruism As I See It · 2022-06-08T14:30:39.794Z · EA · GW

Thanks for this! I looked into this further and tweaked the final paragraph of the post and its footnote as a result.

Comment by lukeprog on Sleep: Open Philanthropy Cause Exploration Prize · 2022-06-06T15:33:26.361Z · EA · GW

See also my old report on behavioral treatments for insomnia.

Comment by lukeprog on Updates from Community Health for Q4 2021 & Q1 2022 · 2022-05-29T10:33:32.669Z · EA · GW

Thank you for everything you're doing!

Comment by lukeprog on Some potential lessons from Carrick’s Congressional bid · 2022-05-23T12:16:26.981Z · EA · GW

Yeah, bummer, not happy about this.

Comment by lukeprog on What are examples where extreme risk policies have been successfully implemented? · 2022-05-16T17:05:22.349Z · EA · GW

Nunn-Lugar; see quick summary here: https://www.openphilanthropy.org/blog/ai-governance-grantmaking

Comment by lukeprog on EA and the current funding situation · 2022-05-11T02:26:44.195Z · EA · GW

a Christian EA I heard about recently who lives in a van on the campus of the tech company he works for, giving away everything above $3000 per year

Will this person please give an in-depth interview on some podcast? Could be anonymous if desired.

Comment by lukeprog on How Do You Get People to Really Show Up to Local Group Meetups? · 2022-05-06T19:37:33.594Z · EA · GW

Very old guide, maybe a tiny bit helpful: How to Run a Successful Less Wrong Meetup Group.

Comment by lukeprog on Demandingness and Time/Money Tradeoffs are Orthogonal · 2022-05-05T20:59:03.130Z · EA · GW

Very minor note, but I love that you included "practice the virtue of silence" in your list.

Comment by lukeprog on An easy win for hard decisions. · 2022-05-05T03:13:24.899Z · EA · GW

It's funny, I've done this so many times (including commenting on others' docs of this sort) that I sort-of forgot that not everyone does this regularly.

Comment by lukeprog on Mid-career people: strongly consider switching to EA work · 2022-04-26T17:27:30.428Z · EA · GW

An important point here: if you're considering this move, there's a decent chance you'll be able to find career transition funding that gives you 3-12 months of runway after you quit your job, during which you can talk to people full-time, read lots of stuff, apply to lots of things, etc., so that you don't have to burn through much (or any) of your savings while trying to make the transition work.

Comment by lukeprog on Tips for conducting worldview investigations · 2022-04-13T22:04:30.429Z · EA · GW

It's a fair question. Technically speaking, of course progress can be more incremental, and some small pieces can be built on with other small pieces. Ultimately that's what happened with Khan's series of papers on the semiconductor supply chain and export control options. But in my opinion that kind of thing almost never really happens successfully when it's different authors building on each other's MVPs (minimum viable papers) rather than a single author or team building out a sorta-comprehensive picture of the question they're studying, with all the context and tacit knowledge they've built up from the earlier papers carrying over to how they approach the later papers.

Comment by lukeprog on Tips for conducting worldview investigations · 2022-04-13T19:03:34.577Z · EA · GW

It just means "pages."

Comment by lukeprog on Don’t think, just apply! (usually) · 2022-04-12T13:47:49.467Z · EA · GW

Huge +1 to this post! A few reflections:

  • As someone who has led or been involved in many hiring rounds in the last decade, I'd like to affirm most of the points above, e.g.: it's very hard to predict what you'll get offers for, you'll sometimes learn about personal fit and improve your career capital, stated role "requirements" are often actually fairly flexible, etc.
  • Applicants who get the job, or make it to the final stage, often comment that they're surprised they got so far and didn't think they were a strong fit but applied because a friend told them they should apply anyway.
  • Apply to some roles even if you're not sure you'd leave your current role anytime soon. Hiring managers often don't reach out to some of their top prospects for a role because they have limited time and just assume that the prospect probably won't leave their current role.
  • If you apply to a role on a whim and then make it past the first stage, you might find that your interest in the role grows as a result, e.g. because it "feels more real" and then you think about what that role would be like in a more concrete way, and because you've gotten a positive signal that the employer thinks you might be a fit.
  • Just getting your up-to-date information into an employer's CRM can be valuable. I am constantly trying to help grantees and other contacts fill various open roles, and one of the main things I do is run filters on past Open Phil applicants to identify candidates matching particular criteria. I've helped connect several "unsuccessful" Open Phil applicants to other jobs, including e.g. to a think tank role which shortly thereafter led to a very influential role in the White House, and things like that. Of course we also check our lists of past applicants when trying to fill new roles at Open Phil, and in some cases we've hired people who we previously rejected for the first role they applied to.
  • That said, it's helpful if you keep applying even if your info is already in a particular employer's CRM, both to indicate interest in a particular role and because your situation may have changed. I often think a prospect won't be interested in a role because, last I heard, they only wanted to do roles like X and Y, or only in domain Z, or only after they finish their PhD, or whatever; then 9 months later they've changed their mind about some of that stuff and would be open to the role I was trying to fill, but I don't learn that until after the hiring round has closed.
  • To support people in following this post's advice, employers (including Open Phil?) need to make it even quicker for applicants to submit the initial application materials, perhaps by holding off on collecting some even fairly basic information until an applicant passes the initial screen.

Comment by lukeprog on High absorbency career paths · 2022-04-11T14:31:33.067Z · EA · GW

See also posts tagged with scalably using labor.

Comment by lukeprog on Where is the Social Justice in EA? · 2022-04-05T15:20:23.270Z · EA · GW

As a college dropout from the SF Bay Area EA/rationalist community where it's common for people at parties (including non-EA/rationalist parties) to brag about who dropped out of school earliest, I've never really grokked some people's impression that EA is highly credentialist.

Comment by lukeprog on Case for emergency response teams · 2022-04-05T15:12:52.425Z · EA · GW

Random thought: another way in which such a group could prepare for action is to have some experience commissioning forecasts on short notice from platforms like Good Judgment, Metaculus, Hypermind, etc., so that when there's some emergency (or signs that there might soon be an emergency, a la the early-Jan evidence about what became the COVID-19 pandemic), ALERT can immediately commission crowdcasts that help to track the development or advent of the emergency.

Comment by lukeprog on Legal support for EA orgs - useful? · 2022-03-17T16:41:34.741Z · EA · GW

FWIW a big thing for Open Phil and a couple of other EA-ish orgs I've spoken to is that very few lawyers are willing to put probabilities on risks, so they'll just say "I advise against X," when what we need is "If you do X, then the risk of A is probably 1%-10%, the risk of B is <1%, and the risk of C is maybe 1%-5%." So it would be nice if you could do some calibration training, etc., if you haven't already.

Comment by lukeprog on Information security: Become a hacker, not a consultant · 2022-03-02T15:52:49.219Z · EA · GW

Interesting, thanks.

Comment by lukeprog on We should consider funding well-known think tanks to do EA policy research · 2022-02-27T19:53:05.921Z · EA · GW

Yeah CSET isn't an EA think tank, though a few EAs have worked there over the years.

Comment by lukeprog on We should consider funding well-known think tanks to do EA policy research · 2022-02-27T19:51:42.488Z · EA · GW

Yes, this is part of the reason I personally haven't prioritized funding European think tanks much, in addition to my grave lack of context on how policy and politics works in the most AI-relevant European countries.

Comment by lukeprog on Information security: Become a hacker, not a consultant · 2022-02-27T19:31:34.264Z · EA · GW

Can you say more about why you recommend not pursuing formal certificates? Does that include even the "best" ones, e.g. from SANS? I've been recommending people go for them, because they (presumably) provide a guided way to learn lots of relevant skills, and are a useful proof of skill to prospective employers, even though of course the actual technical and analytic skills are ultimately what matter.

Comment by lukeprog on We should consider funding well-known think tanks to do EA policy research · 2022-02-22T21:20:59.016Z · EA · GW

What EA orgs do you have in mind? I guess this would be policy development at places like GovAI and maybe Rethink Priorities? My guess is that the policy-focused funding for EAish orgs like that is dwarfed by the Open Phil funding for CSET and CHS alone, which IIRC is >$130M so far.

Comment by lukeprog on We should consider funding well-known think tanks to do EA policy research · 2022-02-21T16:34:21.089Z · EA · GW

Yes, we (Open Phil) have funded, and in some cases continue to fund, many non-EA think tanks, including the six you named and also Brookings, National Academies, Niskanen, Peterson Institute, CGD, CSIS, CISAC, CBPP, RAND, CAP, Perry World House, Urban Institute, Economic Policy Institute, Roosevelt Institute, Dezernat Zukunft, Sightline Institute, and probably a few others I'm forgetting.

I don't know why the original post claimed "it is pretty rare for EAs to fund non-EA think tanks to do things."

Comment by lukeprog on The best $5,800 I’ve ever donated (to pandemic prevention). · 2022-02-03T14:29:07.684Z · EA · GW

I donated $5800.

Comment by lukeprog on Which EA orgs provide feedback on test tasks? · 2022-01-30T21:20:40.782Z · EA · GW

When I ran two recruiting rounds for Open Philanthropy in ~2018, IIRC our policy was to offer feedback to those who requested it and made it past a certain late stage of the process, but not to everyone, because we had >1000 applicants and couldn't afford the time to write custom feedback for anywhere near that many people. Not sure what our current practice is.

Comment by lukeprog on New Publication: Effective Altruism and Religion · 2022-01-25T20:51:34.389Z · EA · GW

I'm not religious but (at a glance) I feel happy this book exists.

Comment by lukeprog on The longtermist AI governance landscape: a basic overview · 2022-01-18T17:26:00.104Z · EA · GW

Nice overview! I'm broadly on board with this framing.

One quibble: I wish this post were clearer that the example actions, outputs, and institutions you list are not always themselves motivated by longtermist or x-risk considerations, even though many people who are motivated by longtermism/x-risk tend to see the example outputs you list as more relevant to those considerations than many other reports and topics in the broader space of AI governance. E.g. w.r.t. "who's doing it," there are very few people at CSET or TFS who are working on these issues from something like a longtermist lens, there are relatively more at DeepMind or OpenAI (but not a majority), and then some orgs are majority or exclusively motivated by a longtermist/x-risk lens (e.g. FHI and the AI program team at Open Phil).

Comment by lukeprog on Concrete Biosecurity Projects (some of which could be big) · 2022-01-14T16:27:34.907Z · EA · GW

The authors will have a more-informed answer, but my understanding is that part of the answer is "some 'disentanglement' work needed to be done w.r.t. biosecurity for x-risk reduction (as opposed to biosecurity for lower-stakes scenarios)."

I mention this so that I can bemoan the fact that I think we don't have a similar list of large-scale, clearly-net-positive projects for the purpose of AI x-risk reduction, in part because (I think) the AI situation is more confusing and requires more and harder disentanglement work (some notes on this here and here). The Open Phil "worldview investigations" team (among others) is working on such disentanglement research for AI x-risk reduction and I would like to see more people tackle this strategic clarity bottleneck, ideally in close communication with folks who have experience with relatively deep, thorough investigations of this type (a la Bio Anchors and other Open Phil worldview investigation reports) and in close communication with folks who will use greater strategic clarity to take large actions.

Comment by lukeprog on Democratising Risk - or how EA deals with critics · 2022-01-07T15:24:15.333Z · EA · GW

Hi Michael,

I don't have much time to engage on this, but here are some quick replies:

  • I don't know anything about your interactions with GiveWell. My comment about ignoring vs. not-ignoring arguments about happiness interventions was about me / Open Phil, since I looked into the literature in 2015 and have read various things by you since then. I wouldn't say I ignored those posts and arguments, I just had different views than you about likely cost-effectiveness etc.
  • On "weakly validated measures," I'm talking in part about lack of IRT validation studies for SWB measures used in adults (NIH funded such studies for SWB measures in kids but not adults, IIRC), but also about other things. The published conversation notes only discuss a small fraction of my findings/thoughts on the topic.
  • On "unconvincing intervention studies" I mean interventions from the SWB literature, e.g. gratitude journals and the like. Personally, I'm more optimistic about health and anti-poverty interventions for the purpose of improving happiness.
  • On "wrong statistical test," I'm referring to the section called "Older studies used inappropriate statistical methods" in the linked conversation notes with Joel Hektner.

TBC, I think happiness research is worth engaging with and has things to teach us, and I think there may be some cost-effective happiness interventions out there. As I said in my original comment, I moved on to other topics not because I think the field is hopeless, but because it was in a bad enough state that it didn't make sense for me to prioritize it at the time.

Comment by lukeprog on EA megaprojects continued · 2022-01-05T13:05:56.360Z · EA · GW

I can't share more detail right now and they might not work out, but just FYI, I'm currently working on the details of Science #5 and Miscellaneous #2.

Comment by lukeprog on Democratising Risk - or how EA deals with critics · 2022-01-01T22:27:49.162Z · EA · GW

FWIW, one of my first projects at Open Phil, starting in 2015, was to investigate subjective well-being interventions as a potential focus area. We never published a page on it, but we did publish some conversation notes. We didn't pursue it further because my initial findings were that there were major problems with the empirical literature, including weakly validated measures, unconvincing intervention studies, one entire literature using the wrong statistical test for decades, etc. I concluded that there might be cost-effective interventions in this space, perhaps especially after better measure validation studies and intervention studies are conducted, but my initial investigation suggested it would take a lot of work for us to get there, so I moved on to other topics.

At least for me, I don't think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant — I think it's a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.

That said, I still think some happiness interventions might be cost-effective upon further investigation, and I think our Global Health & Well-Being team has been looking into the topic again as that team has gained more research capacity in the past year or two.

Comment by lukeprog on AI Governance Course - Curriculum and Application · 2021-11-29T21:57:15.980Z · EA · GW

I'm not involved in this program, but I would like to see that happen. Though note that some of the readings are copyrighted.

Comment by lukeprog on Why fun writing can save lives: the case for it being high impact to make EA writing entertaining · 2021-11-14T15:17:41.118Z · EA · GW

FWIW I broadly agree with Peter here (more so than the original post).

Comment by lukeprog on EA Forum engagement doubled in the last year · 2021-11-04T15:21:07.843Z · EA · GW

FWIW the EA forum seems subjectively much better to me than it did ~2 years ago, both in platform and in content, and much of that intuitively seems plausibly traceable to specific labor of the EA forum team. Thanks for all your work!

Comment by lukeprog on Great Power Conflict · 2021-09-17T20:01:08.813Z · EA · GW

If you know of work on how AI might cause great power conflict, please let me know

Phrases to look for include "accidental escalation" or "inadvertent escalation" or "strategic stability," along with "AI" or "machine learning." Michael Horowitz and Paul Scharre have both written a fair bit on this, e.g. here.

Comment by lukeprog on The motivated reasoning critique of effective altruism · 2021-09-15T22:22:22.327Z · EA · GW

[EA has] largely moved away from explicit expected value calculations and cost-effectiveness analyses.

How so? I hadn't gotten this sense. Certainly we still do lots of them internally at Open Phil.

Re: cost-effectiveness analyses always turning up positive, perhaps especially in longtermism. FWIW that hasn't been my experience. Instead, my experience is that every time I investigate the case for some AI-related intervention being worth funding under longtermism, I conclude that it's nearly as likely to be net-negative as net-positive given our great uncertainty, and therefore I end up stuck doing almost entirely "meta" things like creating knowledge and talent pipelines.

Comment by lukeprog on What are the EA movement's most notable accomplishments? · 2021-08-25T21:04:08.538Z · EA · GW

Much of the concrete life saving and life improvement that GiveWell top charities have done with GiveWell-influenced donations.