## Posts

EA Infrastructure Fund: September–December 2021 grant recommendations 2022-07-12T15:24:31.256Z
EA Infrastructure Fund: May–August 2021 grant recommendations 2021-12-24T10:42:08.969Z
US bill limiting patient philanthropy? 2021-06-24T22:14:32.772Z
EA Infrastructure Fund: Ask us anything! 2021-06-03T01:06:19.360Z
EA Infrastructure Fund: May 2021 grant recommendations 2021-06-03T01:01:01.202Z
Progress studies vs. longtermist EA: some differences 2021-05-31T21:35:08.473Z
What are things everyone here should (maybe) read? 2021-05-18T18:34:42.415Z
How much does performance differ between people? 2021-03-25T22:56:32.660Z
Giving and receiving feedback 2020-09-07T07:24:33.941Z
Max_Daniel's Shortform 2019-12-13T11:17:10.883Z
When should EAs allocate funding randomly? An inconclusive literature review. 2018-11-17T14:53:38.803Z
Why s-risks are the worst existential risks, and how to prevent them 2017-06-02T08:48:00.000Z

Comment by Max_Daniel on Consequentialists (in society) should self-modify to have side constraints · 2022-08-04T00:44:52.532Z · EA · GW

Yep, this is one of several reasons why I think that Part I is perhaps the best and certainly the most underrated part of the book. :)

Comment by Max_Daniel on EA for dumb people? · 2022-07-17T14:06:11.386Z · EA · GW

Good question! I'm pretty uncertain about the ideal growth rate and eventual size of "the EA community"; in my mind this is among the more important unresolved strategic questions (though I suspect it'll only become significantly action-relevant in a few years).

In any case, by expressing my agreement with Linch, I didn't mean to rule out the possibility that in the future it may be easier for a wider range of people to have a good time interacting with the EA community. And I agree that in the meantime "making people feel glad that they interacted with the community even if they do end up deciding that they can, at least for now, find more joy and fulfillment elsewhere" is (in some cases) the right goal.

Comment by Max_Daniel on Announcing Non-trivial, an EA learning platform for teenagers · 2022-07-13T14:03:22.620Z · EA · GW

fwiw it also reminded me of the Radiohead logo.

Comment by Max_Daniel on EA for dumb people? · 2022-07-13T00:05:02.248Z · EA · GW

I think realizing that different people have different capacities for impact is importantly true. I also think it's important and true to note that the EA community is less well set up to accommodate many people than other communities. I think what I said is also more kind to say, in the long run, compared to casual reassurances that make it harder for people to understand what's going on. I think most of the other comments do not come from an accurate model of what's most kind to Olivia (and onlookers) in the long run.

FWIW I strongly agree with this.

Comment by Max_Daniel on EA for dumb people? · 2022-07-13T00:02:14.636Z · EA · GW

I think social group stratification might explain some of the other comments to this post that I found surprising/tone-deaf.

Yes, that's my guess as well.

Comment by Max_Daniel on Announcing Non-trivial, an EA learning platform for teenagers · 2022-07-12T17:36:46.860Z · EA · GW

I feel like a lot of very talented teenagers actively avoid content that seems directly targeted at people their age (unless it seems very selective or something) because they don't expect that to be as engaging / "on their level" as something targeted at university students.

FWIW I think I would also have been pretty unlikely to engage with any material explicitly pitched at adolescents or young adults after about the age of 15, maybe significantly earlier.

Comment by Max_Daniel on EA Infrastructure Fund: September–December 2021 grant recommendations · 2022-07-12T17:29:49.471Z · EA · GW

Thanks for your feedback and your questions!

I'd be curious to know how open the fund is to this type of activity.

We are very open to making grants funding career transitions, and I'd strongly encourage people who could use funding to facilitate a career transition to apply.

For undergraduate or graduate stipends/scholarships specifically, we tend to have a somewhat high bar because

• (a) compared to some other kinds of career transitions they involve providing funding for a relatively long period of time and often fund activities that are useful mostly for instrumental reasons such as getting a credential (it's a different matter if someone can do intrinsically valuable work on, say, AI safety or biosecurity as part of their degree); and
• (b) there often are other sources of funding available for these that are allocated by criteria that partly correlate with ours – e.g. all else equal we care about someone's potential for academic excellence, which also helps with getting merit-based scholarships.

That being said, we have made grants covering undergraduate or graduate studies in the past.

Also, I was curious, I see some individuals are receiving upwards of $50k for a few months of overhead while others are receiving well below $50k for 12 months worth of overhead.

Could you point to some specific examples? That might help me give a more specific answer.

In general, a couple of relevant points are:

• Some grants are funding part-time work, which naturally receives a lower total salary per month.
• Grantees have widely varying levels of work experience, and differ in other ways that can be relevant for compensation (e.g. location-dependent cost of living).
• That being said, inconsistent grant sizes are a known weakness of our process that we are working on fixing by developing some kind of 'compensation policy.'
• In the meantime, if you're an EAIF grantee and think you are receiving insufficient compensation for your work, either on an absolute scale or compared to what comparable work earns elsewhere in EA contexts, I strongly encourage you to reach out to us and request an increase in funding. While we may not approve this in all cases, we will never hold such a request against anyone, and in the only case I can recall in which we did receive such a request we very quickly concluded that the original grant was too small and provided a follow-up grant.

Comment by Max_Daniel on Will EAIF and LTFF publish its reports of recent grants? · 2022-07-12T15:26:47.161Z · EA · GW

These payout reports are now available here, albeit about two weeks later than I promised.

Comment by Max_Daniel on EA for dumb people? · 2022-07-12T10:25:01.953Z · EA · GW

Most people on average are reasonably well-calibrated about how smart they are.

(I think you probably agree with most of what I say below and didn't intend to claim otherwise, reading your claim just made me notice and write out the following.)

Hmm, I would guess that people on average (with some notable pretty extreme outliers in both directions, e.g. in imposter syndrome on one hand and the grandiose variety of narcissistic personality disorder on the other hand, not to mention more drastic things like psychosis) are pretty calibrated about how their cognitive abilities compare to their peers but tend to be really bad at assessing how they compare to the general population because most high-income countries are quite stratified by intelligence.

(E.g., if you have or are pursuing a college degree, ask yourself what fraction of people that you know well do not and will never have a college degree. Of course, having a college degree is not the same as being intelligent, and in fact, as pointed out in other comments, if you're reading this Forum you probably know, or have read content by, at least a couple of people who arguably are extremely intelligent but don't have a degree. But the correlation is sufficiently strong that the answer to that question tells you something about stratification by intelligence.)

That is, a lot of people simply don't know that many people with wildly different levels of general mental ability. Interactions between them happen, but tend to be in narrow and regimented contexts such as one person handing another person cash and receiving a purchased item in return, and at most include things like small talk that are significantly less diagnostic of cognitive abilities than more cognitively demanding tasks such as writing an essay on a complex question or solving maths puzzles.

For people with significantly above-average cognitive abilities, this means they will often lack a rich sense of how, say, the bottom third of the population in terms of general mental ability performs on cognitively demanding tasks. Consequently, they will tend to significantly underestimate their general intelligence relative to the general population: they inadvertently substitute the easier question "how smart am I compared to my peers?" – on which I expect system 1 to do reasonably well (while, as always, being somewhat biased in one direction or the other) – for the harder question "how smart am I compared to the general population?", which would require system-2 reasoning and consideration of not immediately available information such as the average IQ of their peer group based on e.g. occupation or educational attainment.

As an example, the OP says "I'm just average" but also mentions they have a college degree – which according to this website is true of 37.9% of Americans of age 25 or older. This is some, albeit relatively weak, evidence against the "average" claim depending on what the latter means (e.g. if it just means "between the first and third quartile of the general population" then evidence against this is extremely weak, while it's somewhat stronger evidence against being very close to the population median).

This effect gets even more dramatic when the question is not just about "shallow" indicators like one's percentile relative to the general population but about predicting performance differences in a richer way, e.g. literally predicting the essays that two different people with different ability levels would write on the same question. This is especially concerning because in most situations these richer predictions are actually all that matters. (Compare with height: it is much more useful and relevant to know, e.g., how much different levels of height will affect your health or your dating prospects or your ability to work in certain occupations or do well at certain sports, than just your height percentile relative to some population.)

I also think the point that people are really bad at comparing themselves to the general population, because society is so stratified in various ways, applies to many other traits, not just to specific cognitive abilities or general intelligence. Like, I think that question is in some ways closer to the question "at what percentile of trait X are you in the population of all people that have ever lived?", where it's more obvious that one's immediate intuitions are a poor guide to the answer.

(Again, all of this is about gradual effects and averages. There will of course be lots of exceptions, some of them systematic, e.g. depending on their location of work teachers will see a much broader sample and/or one selected by quite different filters than their peer group.

I also don't mean to make any normative judgment about the societal stratification at the root of this phenomenon. If anything I think that a clear-eyed appreciation of how little many people understand of the lived experience of most others they share a polity with would be important to spread if you think that kind of stratification is problematic in various ways.)

Comment by Max_Daniel on Explore the new UN demographic projections to 2100 · 2022-07-11T23:52:40.824Z · EA · GW

Thanks, I think it's great to make this data available and to discuss it.

FWIW, while I haven't looked at any updates the UN may have made for this iteration, when briefly comparing the previous UN projections with those by Vollset et al. (2020), available online here, I came away being more convinced by the latter. (I think I first heard about them from Leopold Aschenbrenner.) They tend to predict more rapidly falling fertility rates, with world population peaking well before the end of the century and then declining.

The key difference in methods is that Vollset et al. model fertility as partly depending on educational attainment and access to contraceptives. By contrast, I believe the UN (at least in the previous iteration) primarily did a brute extrapolation of fertility trends.

Assume we believe the causal claim that increased educational attainment and access to contraceptives cause lower fertility. Then modeling these two causes explicitly will beat a brute extrapolation of fertility if there are countries which:

• Haven't seen much of an increase in educational attainment or access to contraceptives, but will do so in the future; or
• Have seen an increase in these two indicators which, however, (because of time lags or threshold effects) is not yet visible in falling fertility rates.

And I think this happens to apply to quite a few countries.
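To make the contrast concrete, here is a toy sketch (mine, not the actual UN or Vollset et al. methodology, and with entirely made-up numbers) of why the two approaches can diverge for a country whose educational attainment has recently risen but whose fertility hasn't yet fallen:

```python
# Toy comparison of two fertility projections (illustrative only):
# 1) brute trend extrapolation of past fertility;
# 2) projecting fertility from a covariate (educational attainment)
#    via a hypothetical linear link.

def extrapolate_trend(fertility_history, years_ahead):
    """Continue the recent average year-on-year change in fertility."""
    recent_change = (fertility_history[-1] - fertility_history[-4]) / 3
    return [fertility_history[-1] + recent_change * (t + 1)
            for t in range(years_ahead)]

def project_from_education(education_future, slope=-0.08, intercept=6.0):
    """Predict fertility from a schooling index via an (invented) linear link.

    A real model would estimate slope/intercept from cross-country data;
    the values here are placeholders for illustration.
    """
    return [max(intercept + slope * e, 1.0) for e in education_future]

# A country whose fertility has barely moved so far...
fertility_history = [5.0, 5.0, 4.9, 4.9]
# ...but whose schooling index has recently risen and keeps rising.
education_future = [30, 34, 38, 42, 46]

trend = extrapolate_trend(fertility_history, 5)
covariate = project_from_education(education_future)

# The covariate-based projection falls much faster than the brute trend,
# because it "sees" the education increase before fertility responds.
print(trend[-1], covariate[-1])
```

The mechanism is the one described in the bullets above: the extrapolation only reacts once fertility itself starts falling, while the covariate model responds as soon as its causal inputs change.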

Some caveats:

• I only had a pretty quick look at both the UN and the Vollset et al. projections. I wouldn't be that surprised if my above summary of their respective methods is based on a misreading on my part or otherwise inaccurate in important ways.
• There are obviously causal drivers of fertility that aren't modeled by Vollset et al. either, such as policy effects – i.e., we might reasonably expect that countries will try to incentivize higher fertility (e.g. tax benefits for households with more children) once they start bearing significant costs from shrinking populations that are increasingly dominated by the elderly. (Similarly their model assumes that, e.g., there won't be technological developments such as transformative AI or perhaps more mundanely things like artificial wombs that change which sorts of things fertility causally depends on.) So arguably both the UN and Vollset et al. should best be viewed as projections holding non-modeled variables constant as opposed to all-things-considered predictions.
• An indirect and in my view fairly weak reason for skepticism about the Vollset et al. projections is that they are by a team of researchers at the Institute for Health Metrics and Evaluation (IHME), and that other work out of IHME was controversial in several instances, including their early COVID models. (I haven't looked into how valid the criticisms of their other work are, nor whether there is any overlap in researchers between previously criticized work and their population projections – at least the first authors/lead researchers seem to be different.)

Comment by Max_Daniel on Explore the new UN demographic projections to 2100 · 2022-07-11T23:22:44.090Z · EA · GW

I think this is a relevant consideration, but murkier than it appears at first glance.

Comment by Max_Daniel on EA for dumb people? · 2022-07-11T23:03:20.056Z · EA · GW

Another relevant Slate Star Codex post is Against Individual IQ Worries.

Comment by Max_Daniel on My Most Likely Reason to Die Young is AI X-Risk · 2022-07-07T11:57:52.298Z · EA · GW

I didn't vote on your comment on either scale, but FWIW my guess is that the disagreement is due to quite a few people having the view that AI x-risk does swamp everything else.

Comment by Max_Daniel on All moral decisions in life are on a heavy-tailed distribution · 2022-07-05T02:07:09.596Z · EA · GW

I agree that something like this is true and important.

Some related content (more here):

Comment by Max_Daniel on Two tongue-in-cheek EA anthems · 2022-07-05T01:55:02.775Z · EA · GW

[Info hazard notice: not safe for work.]

Being somewhat self-conscious about being among the older members of the EA community despite being only in my early 30s, I rather turn toward Tocotronic's Ich möchte Teil einer Jugendbewegung sein ("I want to be part of a youth movement").

In a rare feat of prescience, the band also editorialized EA culture in many of their releases starting with their 1995 debut:

• Digital ist besser  ("Digital is better", an influential argument for transhumanism)
• Drei Schritte vom Abgrund entfernt ("Three steps away from The Precipice", Google review of a London shopping centre by a young EA after they met Toby Ord on an escalator)
• Wir sind hier nicht in Seattle, Dirk ("We are not in Seattle here, Dirk", a common realization among rationalists after they've relocated to the Bay Area)
• Die Idee ist gut, doch die Welt noch nicht bereit ("The idea is good but the world isn't ready yet"; devastating objection to EA by the incumbent intellectual elite)
• Über Sex kann man nur auf Englisch singen ("About sex one can only sing in English", on the innovative idea to utilize the tried and true strategy of speaking in tongues when conveying info hazards)
• Ich werde mich nie verändern ("I will never change", meditative mantra used to stave off the threat of value drift)
• Ich bin viel zu lange mit euch mitgegangen ("I've walked with you for way too long", anonymous EA upon resolving to embark on the mythical quest of 'building inside-view models')
• Morgen wird wie heute sein ("Tomorrow will be like today", an early pretheoretic articulation of the principle of forecasting by trend extrapolation)
• Dringlichkeit besteht immer ("Urgency always obtains", on the haste consideration)
• Pure Vernunft darf niemals siegen (the true meaning – an oblique diss of Immanuel Kant – only becomes apparent when considering the song title's English translation, "Pure Reason must never prevail").
• Macht es nicht selbst ("Don't do it yourself", the battle cry of EAs who've just realized they can use virtual assistants)

(I usually choose to remain silent about the band's most significant early misstep – their flirtation with early Leverage Research in Jungs, hier kommt der Masterplan ["Boys, here comes the masterplan"].)

Comment by Max_Daniel on Examples of someone admitting an error or changing a key conclusion · 2022-06-27T15:55:27.110Z · EA · GW

This doesn't have all of the properties you're most looking for, but one example is this video by YouTuber Justin Helps which is about correcting an error in an earlier video and explaining why he might have made that error. (I don't quite remember what the original error was about – I think something to do with Hamilton's rule in population genetics.)

Comment by Max_Daniel on Contest: 250€ for translation of "longtermism" to German · 2022-06-02T12:01:30.395Z · EA · GW

FWIW to me as a German native speaker this proposed translation sounds like "long nap", "long slumber" or similar. :)

Comment by Max_Daniel on Announcing a contest: EA Criticism and Red Teaming · 2022-06-02T11:56:16.126Z · EA · GW

Thank you so much for your work on this, I'm excited to see what comes out of it.

Comment by Max_Daniel on EA is more than longtermism · 2022-05-23T19:09:23.167Z · EA · GW

I agree with your specific claims, but FWIW I thought that, albeit having some gaps, the post was good overall, and unusually well written in terms of being engaging and accessible.

The reason why I overall still like this post is that I think at its core it's based on (i) a correct diagnosis that there is an increased perception that 'EA is just longtermism', both within and outside the EA community – as reflected in prominent public criticisms of EA that mostly talk about their opposition to longtermism – and (ii) some mostly correct facts that explain and/or debunk the 'EA is just longtermism' claim (even though it omits some important facts and arguably undersells the influence of longtermism in EA overall).

E.g., on the claim you quote, a more charitable interpretation would be that longtermism is one of potentially several things that differentiates EA's approach to philanthropy from traditional ones, and that this contributes to longtermism being a feature that outside observers tend to particularly focus on.

Now, while true in principle, my guess is that even this effect is fairly small compared to some other reasons behind the attention that longtermism gets. – But I think it's quite far from ridiculous or obviously wrong.

I also agree that one doesn't need to be a longtermist to worry about AI risk, and that an ideal version of the OP would have pointed that out somewhere, but again I don't think this is damning for the post overall. And given that 'longtermism' as a philosophical view and 'longtermism' as a focus on specific cause areas such as AI, bio, and other global catastrophic risks are often conflated even within the EA community, I certainly think that conflation might play into current outside perceptions of 'EA as longtermism'.

Comment by Max_Daniel on Bad Omens in Current Community Building · 2022-05-23T10:26:09.897Z · EA · GW

Thanks so much for sharing your perspective in such detail! Just dropping a quick comment to say you might be interested in this post on EA for mid-career people by my former colleague Ben Snodin if you haven't seen it. I believe that he and collaborators are also considering launching a small project in this space.

Comment by Max_Daniel on Some potential lessons from Carrick’s Congressional bid · 2022-05-23T08:02:51.779Z · EA · GW

Thank you for taking the time to share your perspective. I'm not sure I share your sense that spending money to reach out to Salinas could have made the same expected difference to pandemic preparedness, but I appreciated reading your thoughts, and I'm sure they point to some further lessons learned for those in the EA community who will keep being engaged in US politics.

Comment by Max_Daniel on Samotsvety Nuclear Risk Forecasts — March 2022 · 2022-05-18T00:26:32.565Z · EA · GW

I recommended some retroactive funding for this post (via the Future Fund's regranting program) because I think it was valuable and hadn't been otherwise funded. (Though I believe CEA agreed to fund potential future updates.)

I think the main sources of value were:

• Providing (another) proof of concept that teams of forecasters can produce decision-relevant information & high-quality reasoning in crisis situations on relatively short notice.
• Saving many people considerable amounts of time. (I know of several very time-pressed people who without that post would likely have spent at least an hour looking into whether they want to leave certain cities etc.).
• Providing a foil for expert engagement.

(And I think the case for retroactively funding valuable work roughly just is that it sets the right incentives. In an ideal case, if people are confident that they will be able to obtain retroactive funding for valuable work after the fact, they can just go and do that, and more valuable work is going to happen. This is also why I'm publicly commenting about having provided retroactive funding in this case.

Of course, there are a bunch of problems with relying on that mechanism, and I'm not suggesting that retroactive funding should replace upfront funding or anything like that.)

Comment by Max_Daniel on EA and the current funding situation · 2022-05-15T20:03:11.308Z · EA · GW

Hi, EAIF chair here. I agree with Michelle's comment above, but wanted to reply as well to hopefully help shed more light on our thinking and priorities.

As a preamble, I think all of your requests for information are super reasonable, and that in an ideal world we'd provide such information proactively. The main reason we're not doing so are capacity constraints.

I also agree it would be helpful if we shared more about community building activities we'd especially like to see, as Buck did here and as some AMA questions may have touched upon. Again this is because we need to focus our limited capacity on other priorities, such as getting back to applicants in a reasonable timeframe.

I should also add that I generally think that most of the strategizing about what kind of community building models are most valuable is best done by organizations and people who (unlike the fund managers) focus on the space full time – such as the Groups team at CEA, Open Phil's Longtermist EA Movement Building Team, and the Global Challenges Project. Given the current setup of EA Funds, I think the EAIF will more often be in a role of enabling more such work. E.g., we funded the Global Challenges Project multiple times. Another thing we do is complementing such work by providing an additional source of funding for 'known' models. Us providing funding for university and city groups outside of the priority locations that are covered by higher-touch programs by other funders is an example of the latter.

(I do think we can help feed information into strategy conversation by evaluating how well the community building efforts funded by us have worked. This is one reason why we require progress reports, and we’re also doing more frequent check-ins with some grantees.)

To be clear, if someone has an innovative idea for how to do community building, we are excited and able to evaluate it. It's just that I don't currently anticipate us doing much in the vein of coming up with innovative models ourselves.

A few thoughts on your questions:

I wonder if that would have happened if a volunteer organizer came to you and said, "I want to make this my career, but I need to do it full-time and here are reasons I think it is impactful and relatively easy to prove out."

We would be excited to receive such applications.

We would then evaluate the applicant's fit for community building and their plans, based on their track record (while keeping in mind that they could devote only limited time and attention to community building so far), our sense of which kinds of activities have worked well in other comparable locations (while remaining open to experiments), and usually an interview with the applicant.

Can you by chance estimate how many applications you have gotten for full-time community building in American cities?

Unfortunately our grant database is not set up in a way that would allow me to easily access this information, so all I can do for now is give a rough estimate based on my memory – which is that we have received very few such applications.

In fact, apart from your application, I can only remember three applications for US-based non-uni city group organizing at all. Two were from the same applicants and for the same city (the second application was an updated version of a previous application – the first one had been unsuccessful, while we funded the second one). The other applicant wants to split their time between uni group (70%) and city group community building (30%). We funded the first of these; the second one is currently under evaluation.

(And in addition there was a very small grant to a Chicago-based rationality group, but here the applicant only asked for expenses such as food and beverages at meetings.)

It's possible I fail to remember some relevant applications, but I feel 90% confident that there were at most 10 applications for US-based full-time non-uni community building since March 2021, and 60% confident that there were at most 3.

(I do think that in an ideal world we'd be able to break down the summary statistics we include in our payout reports – number of applications, acceptance rate, etc. – by grant type. And so e.g. report these numbers for uni group community building, city group community building, and other coherent categories, separately. But given limited capacity we weren't able to prioritize this so far, and I'm afraid I'm skeptical that we will be able to any time soon.)

I know personally of at least one big US city (much bigger than Austin) which was denied for FTE and a month later approved for PTE, though I think their plans may have improved in the interim.

Was this at the EAIF? I only recall the case I mentioned above: One city group who originally applied for part-time work (30h/week spread across multiple people), was unsuccessful, updated their plans and resubmitted an application (still for part-time work), which then got funded.

It's very possible that I fail to remember another case though.

I think that 1 FTE is usually worth much more than two 0.5 FTEs.

I generally agree with this.

FWIW a community organizer in DC did tell me that my few months of doing unpaid CB part-time was nowhere near enough to go straight into paid full-time work, although I wasn't working at that time and was spending about 20 hours a week on CB, getting up to speed, and studying CB from EA and non-EA sources.

I can't speak for that DC organizer (or even other EAIF managers), but FWIW for me the length of someone's history with community building work is not usually a consideration when deciding whether to fund them for more community building work – and if so, whether to provide funding for part-time or full-time work.

I think someone's history with community building mostly influences how I'm evaluating an application. When there is a track record of relevant work, there is more room for positive or negative updates based on that, and the applicant's fit for their proposed work is generally easier to evaluate. But in principle it's totally possible for applicants to demonstrate that they clear the bar for funding – including for full-time work – otherwise, i.e., by some combination of demonstrating relevant abilities in an interview, having other relevant past achievements, proposing well thought-through plans in their application, and providing references from other relevant contexts.

I think part-time vs. full-time most commonly depends on the specific situation of the application and the location – in particular, whether there is 'enough work to do' for a full-time role. (In the context of this post, FWIW I think I agree that often an ambitious organizer would be able to find enough things to do to work full time, which may partly involve running experiments/pilots of untested activities.)

Another consideration can sometimes be the degree of confidence that a candidate organizer is a good fit for community building. It might sometimes make sense to provide someone with a smaller grant to get more data on how well things are going – this doesn't necessarily push for part-time funding (as opposed to full-time funding for a short period), but may sometimes do so. One aspect of this is that I worry more about the risk of crowding out more valuable initiatives when an organizer is funded full-time for an extended period. I think this sends a stronger implicit message to people in that area that valuable community building activities are generally covered by an incumbent professional, compared to a situation where someone is funded for specific pilot projects, part-time work, or a shorter time frame.

Comment by Max_Daniel on Results from the First Decade Review · 2022-05-15T01:45:18.295Z · EA · GW

I'm not sure. – Peter Gabriel, for instance, seems to be an adherent of shorthairism, which I'm skeptical of.

Comment by Max_Daniel on Results from the First Decade Review · 2022-05-13T18:15:22.093Z · EA · GW

The submission in last place looks quite promising to me actually.

Does anyone know whether Peter Singer is a pseudonym or the author's real name, and whether they're involved in EA already? Maybe we can get them to sign up for an EA Intro Fellowship or send them a free copy of an EA book – perhaps TLYCS?

Comment by Max_Daniel on EA and the current funding situation · 2022-05-10T15:26:05.102Z · EA · GW

I don't know but FWIW my guess is some people might have perceived it as self-promotion of a kind they don't like.

(I upvoted Sanjay's comment because I think it's relevant to know about his agreement and about the plans for SoGive Grants given the context.)

Comment by Max_Daniel on What are the coolest topics in AI safety, to a hopelessly pure mathematician? · 2022-05-07T22:46:41.233Z · EA · GW

Maybe the notes on 'ascription universality' on ai-alignment.com are a better match for your sensibilities.

Comment by Max_Daniel on What are the coolest topics in AI safety, to a hopelessly pure mathematician? · 2022-05-07T22:24:51.669Z · EA · GW

You might be interested in this paper on 'Backprop as Functor'.

(I'm personally not compelled by the safety case for such work, but YMMV, and I think I know at least a few people who are more optimistic.)

Comment by Max_Daniel on What are the coolest topics in AI safety, to a hopelessly pure mathematician? · 2022-05-07T21:53:15.758Z · EA · GW

Some mathy AI safety pieces or other related material off the top of my head (in no particular order, and definitely not comprehensive nor weighted toward impact or influence):

Comment by Max_Daniel on What are the coolest topics in AI safety, to a hopelessly pure mathematician? · 2022-05-07T21:39:39.484Z · EA · GW

(Posting as a comment since I'm not really answering your actual question.)

I think if you find something within AI safety that is intellectually motivating for you, this will more likely than not be your highest-impact option. But FWIW here are some pieces that are mathy in one way or another that in my view still represent valuable work by impact criteria (in no particular order):

Comment by Max_Daniel on EA is more than longtermism · 2022-05-05T02:07:24.396Z · EA · GW

If you add the FTX Community, FTX Future Fund, EA Funds etc. my guess would be that it recently made a large shift towards longtermism, primarily due to the Future Fund being so massive.

I think starting in 2022 this will be true in aggregate – as you say largely because of the FTX Future Fund.

However, for EA Funds specifically, it might be worth keeping in mind that the Global Health and Development Fund has been the largest of the four funds by payout amount, and by received donations it is even about as big as all other funds combined.

Comment by Max_Daniel on A tale of 2.75 orthogonality theses · 2022-05-04T23:51:32.979Z · EA · GW

FWIW my impression is more like "I feel like I've heard the (valid) observation expressed in the OP many times before, even though I don't exactly remember where", and I think it's an instance of the unfortunate but common phenomenon where the state of the discussion among 'people in the field' is not well represented by public materials.

Comment by Max_Daniel on My bargain with the EA machine · 2022-05-01T22:07:11.672Z · EA · GW

Looks to me like it was created with one of the popular R plotting libraries.

Comment by Max_Daniel on Notes From a Pledger · 2022-05-01T02:27:39.838Z · EA · GW

Thank you for sharing your perspective! I find it extremely helpful to hear from people who are less exposed to the social incentives and people that I usually interact with.

Comment by Max_Daniel on What are things everyone here should (maybe) read? · 2022-04-29T18:20:00.418Z · EA · GW

Gavin Leech has recently jotted down some thoughts on Ramsey here.

Comment by Max_Daniel on A Day in the Life of a Parent · 2022-04-26T23:09:38.100Z · EA · GW

Yeah, I enjoyed the post and am grateful for the data point, but when I read that sentence I was definitely like "huh? ...".

Comment by Max_Daniel on Nick Corvino's Shortform · 2022-04-24T21:26:06.751Z · EA · GW

You might be interested in this report by Ajeya Cotra.

Comment by Max_Daniel on A grand strategy to recruit AI capabilities researchers into AI safety research · 2022-04-23T20:24:38.604Z · EA · GW

The most valuable part of this project I’m interested in personally is a document with the best arguments for alignment and how to effectively go about these conversations (ie finding cruxes).

I agree that this would be valuable, and I'd be excited about empirically informed work on this.

You are most likely aware of this, but in case not I highly recommend reaching out to Vael Gates who has done some highly relevant research on this topic.

I do think it is important to keep in mind that (at least according to my model) what matters is not just the content of the arguments themselves but also the context in which they are made and even the identity of the person making them.

(I also expect significant variation in which arguments will be most convincing to whom.)

You made a logarithmic claim of improving capabilities, but my model is that 80% of progress is made by a few companies and top universities. Less than 1000 people are pushing general capabilities, so convincing these people to pivot (or the people in charge of these people’s research direction) is high impact.

Yes, I agree that the number of people who are making significant progress toward AGI/TAI specifically is much smaller, and that this makes the project of convincing all of those more feasible.

For the reasons mentioned in my original comment (incentives, failed past attempts, etc.) I suspect I'm still much more pessimistic than you might be that it is possible to convince them if only we found the right arguments, but for all I know it could be worth a shot. I certainly agree that we have not tried to do this as hard as possible (at least not that I'm aware of), and that it's at least possible that a more deliberate strategy could succeed where past efforts have failed.

(This is less directly relevant, but fwiw I don't think that this point counts against expected research returns being logarithmic per se. I think instead it is a point about what counts as research inputs – we should look at doublings of 'AGI-weighted AI research hours', whatever that is exactly.)

That being said, my guess is that in addition to 'trying harder' and empirically testing which arguments work in what contexts, it would be critical to have any new strategy to be informed by an analysis of why past efforts have not been successful (I expect there are useful lessons here) and by close coordination with those in the AI alignment and governance communities who have experience interacting with AI researchers and who care about their relationships with them, how they and AI safety/governance are being perceived as fields by mainstream AI researchers, etc. - both to learn from their experience trying to engage AI researchers and to mitigate risks.

FWIW my intuition is that the best version of a persuasion strategy would also include a significant component of preparing to exploit windows of opportunity – i.e., capitalizing on people being more receptive to AI risk arguments after certain external events like intuitively impressive capability advances, accident 'warning shots', high-profile endorsements of AI risk worries, etc.

Comment by Max_Daniel on Free-spending EA might be a big problem for optics and epistemics · 2022-04-23T18:17:02.021Z · EA · GW

I guess Google is more reasonable than German consulting firms :)

FWIW, my sense is that for business trips that last several weeks it is uncommon for companies to host several people in one hotel room, but I only have a few data points on this, and maybe there is a US-Europe difference here.

(It is worth noting that one of my data points is about a part of the German federal bureaucracy which otherwise has fairly strict regulation regarding travel/accommodation expenses. There is literally a federal law about this, which may also be an interesting baseline more generally. It is notable that it allows first-class train rides for trips exceeding two hours, and while economy-class flights are mandated as default it does allow business class flights when there are specific "work-related reasons" for them.)

(To be clear, I do think that "running a fellowship in the Bahamas predictably leads to incurring higher costs for accommodation than you would in a place with a larger supply" is a fair point, and I would be sad if all EA events worldwide used that level of fanciness in accommodation for participants while ignoring available alternatives that may be cheaper without a commensurate loss in productivity/impact.

I just don't think it's a decisive argument against the Bahamas fellowship having been a good idea. Like I expect it's among the top 5–10 but very likely not the top 1–3 considerations one would need to look at to assess whether the Bahamas fellowship was overall worth it.

I expect the two of us are roughly on the same page about this.)

Comment by Max_Daniel on rohinmshah's Shortform · 2022-04-23T16:28:42.003Z · EA · GW

I'm wondering if it'd be good to have something special happen to posts where a comment has more karma than the OP. Like, decrease the font size of the OP and increase the font size of the comment, or display the comment first, or have a red warning light emoji next to the post's title or ...

Or maybe the commenter gets a $1,000 prize whenever that happens.

Good versions of "something special" would also incentivize the public service of pointing out significant flaws in posts by making comments that have a shot at exceeding the OP's karma score.

Obviously "there exists a comment that has higher karma than the OP" is an imperfect proxy of what we're after here, but anecdotally it seems to me this proxy works surprisingly well (though maybe it would stop working due to Goodhart issues if we did any of the above) and it has the upside that it can be evaluated automatically.

Comment by Max_Daniel on Free-spending EA might be a big problem for optics and epistemics · 2022-04-23T15:40:01.664Z · EA · GW

Thank you, I appreciate the clarification (and therefore upvoted your most recent comment).

(And yes, I think the hotel had a pool.)

Regarding the meals, I'm not sure if I ever ate in a Michelin-starred restaurant, but I looked up the prices at a Michelin-starred restaurant near Oxford (where I live), and it seems like a main course there is about twice as expensive as one in the restaurant attached to the relevant Bahamas hotel. (If I remember correctly, I had about two meals in that restaurant over the course of ~3 weeks. The other meals were catered office food of, in my view, lower 'fanciness' than what you get at the main EA office in Oxford, or PlennyBars that I had brought from home.)

More broadly, it seems like we have pretty strong empirical and perhaps also value-based disagreements about when spending money can increase future impact sufficiently to be worth it.

Comment by Max_Daniel on Will EAIF and LTFF publish its reports of recent grants? · 2022-04-23T15:27:55.152Z · EA · GW

Regarding the cost-effectiveness bar, this discussion from our AMA last year might be interesting.

Comment by Max_Daniel on Will EAIF and LTFF publish its reports of recent grants? · 2022-04-23T15:24:15.619Z · EA · GW

The EAIF is going to publish its next batch of payout reports before the end of June. They will cover the grants we made between August 2021 and December 2021 or January 2022 (don't remember off the top of my head).

I also think it is likely that we will keep publishing payout reports for the periods after that.

Comment by Max_Daniel on A grand strategy to recruit AI capabilities researchers into AI safety research · 2022-04-22T23:29:32.652Z · EA · GW

Thanks for sharing. – I love the spirit of aiming to come up with a strategy that, if successful, would have a shot at significantly moving the needle on prospects for AI alignment. I think this is an important but (perhaps surprisingly) hard challenge, and that a lot of work labeled as AI governance or AI safety is not as impactful as it could be in virtue of not being tied into an overall strategy that aims to attack the full problem.

That being said, I feel fairly strongly that this strategy as stated is not viable, and that you aiming to implement the strategy would come with sufficiently severe risks of harming our prospects for achieving aligned AI that I would strongly advise against moving forward with this.

I know that you have emailed several people (including me) asking for their opinion on your plan. I want to share my advice and reasoning publicly so other people you may be talking to can save time by referring to my comment and indicating where they agree or disagree.

--

Here is why I'm very skeptical that you moving forward with your plan is a good idea:

• I think you massively understate how unlikely your strategy is to succeed:
• There are a lot of AI researchers. More than 10,000 people attended the machine learning conference NeurIPS (in 2019), and if we include engineers the number is in the hundreds of thousands. Having one-on-one conversations with all of them would require at least hundreds of thousands to millions of person-hours from people who could otherwise do AI safety research or do movement building aimed at potentially more receptive audiences.
• Several prominent AI researchers have engaged with AI risk arguments for likely dozens of hours if not more (example), and yet they remain unconvinced. So there would very likely be significant resistance against your scheme by leaders in the field, which makes it seem dubious whether you could convince a significant fraction of the field to change gears.
• There are significant financial, status, and 'fun' incentives for most AI researchers to keep doing what they are doing. You allude to this, but seem to fail to grasp the magnitude of the problem and how hard it would be to solve. Have you ever seen "marketing specialists" convince hundreds of thousands of people to leave high-paying and intellectually rewarding jobs to work on something else (let alone a field that is likely pretty frustrating if not impossible to enter)? (Not even mentioning the issue that any such effort would be competing against the 'marketing' of trillion-dollar companies like Google that have strong incentives to portray themselves as, and actually become, good places to work at.)
• AI safety arguably isn't a field that can absorb many people right now. Your post sort of acknowledges this when briefly mentioning mentoring bottlenecks, but again in my view fails to grapple with the size and implications of this problem. (And also it's not just about mentoring bottlenecks, but a lack of strategic clarity, much required research being 'preparadigmatic', etc.)
• Your plan comes with significant risks, which you do not acknowledge at all. Together with the other flaws and gaps I perceive in your reasoning, I consider this a red flag for your fit for executing any project in the vicinity of what you outline here.
• Poorly implemented versions of your plan can easily backfire: AI researchers might either be substantively unconvinced and become more confident in dismissing AI risk or – and this seems like a rather significant risk to me – might perceive an organized effort to seek one-on-one conversations with them in order to convince them of a particular position as weird or even ethically objectionable.
• Explaining to a lot of people why AI might be a big deal comes with the risk of putting the idea that they should race toward AGI into the heads of malign or reckless actors.
• There are many possible strategic priorities for how to advance AI alignment. For instance, an alternative to your strategy would be: 'Find talented and EA-aligned students who are able to contribute to AI alignment or AI governance despite both of these being ill-defined fields marred with wicked problems.' (I.e., roughly speaking, precisely the strategy that is being executed by significant parts of the longtermist EA movement.) And there are significant flaws and gaps in the reasoning that makes you pick out 'move AI capabilities researchers to AI safety' as the preferred strategy.
• You make it sound like a majority or even "monopoly" of AI researchers needs to work on safety rather than capabilities. However, in fact (and simplifying a bit), we only need as many researchers to work on AI safety as is required to solve the AI alignment problem in time. We don't know how hard this is. It could be that we need twice as much research effort as on AI capabilities, or that we only need one millionth of that.
• There are some reasons to think that expected returns to both safety and capabilities progress toward AGI are logarithmic. That is, each doubling of total research effort produces the same amount of expected returns. Given that AI capabilities research is a field that is several orders of magnitude larger than AI safety, this means that the marginal returns of moving people from capabilities to safety research are almost all due to increasing AI safety effort, while the effect from slowing down capabilities is very small. This suggests that the overall best strategy is to scale up AI safety research by targeting whatever audiences lead to the most quality-adjusted expected AI safety research hours.
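A toy calculation may make this asymmetry concrete. This is a sketch under purely hypothetical numbers – the field sizes and the assumption of exactly logarithmic returns are illustrative, not estimates:

```python
import math

def log_returns(effort, base_effort=1.0):
    # Expected research output under logarithmic returns:
    # each doubling of effort adds one more unit of output.
    return math.log2(effort / base_effort)

capabilities = 100_000   # hypothetical number of capabilities researchers
safety = 300             # hypothetical number of safety researchers
moved = 100              # researchers moved from capabilities to safety

# Gain in safety output vs. loss in capabilities output, in "doublings"
safety_gain = log_returns(safety + moved) - log_returns(safety)
capabilities_loss = log_returns(capabilities) - log_returns(capabilities - moved)

print(f"safety gain:           {safety_gain:.4f} doublings")
print(f"capabilities slowdown: {capabilities_loss:.4f} doublings")
```

On these (made-up) numbers the safety field gains several hundred times more, in doublings, than capabilities loses – which is the sense in which the marginal value of moving a researcher comes almost entirely from growing the small field rather than shrinking the large one.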

(I also think that, in fact, there is not a clean division between "AI capabilities" and "AI safety" research. For instance, work on interpretability or learning from human feedback arguably significantly contributes to both capabilities and safety. I have bracketed this point because I don't think it is that relevant for the viability of your plan, except perhaps indirectly by providing evidence about your understanding of the AI field.)

--

To be clear, I think that some of the specific ideas you mention are very good if implemented well. For instance, I do think that better AI safety curricula are very valuable.

However, these viable ideas are usually things that are already happening. There are AI alignment curricula, there are events aimed at scaling up the field, there are efforts to make AI safety seem prestigious to mainstream AI talent, and there are even efforts that are partly aimed at increasing the credibility of AI risk ideas to AI researchers, such as TED talks or books by reputable AI professors or high-profile conferences at which AI researchers can mingle with people more concerned about AI risk.

If you wanted to figure out which of these efforts you are best placed to contribute to, or whether there might be any gaps among current activities, then I'm all for it! I just don't think that trying to tie them into a grand strategy – one that to me seems flawed in all the places where it is new and specific, and not new in all the places where it makes sense – will be a productive approach.

Comment by Max_Daniel on Free-spending EA might be a big problem for optics and epistemics · 2022-04-22T21:58:45.342Z · EA · GW

I think it depends on the baseline. If I compare it to staying in a hostel like I would do when backpacking or a trip with friends, then it was definitely fancy. If I compare it to the hotel that a mid-sized German consulting firm used for a recruiting event I attended about five years ago, then I would say it was overall less fancy (though it depends on the criteria – e.g., in the Bahamas I think the rooms were relatively big while everything else [food, 'fanciness' as opposed to size of the rooms, etc.] was less fancy).

Comment by Max_Daniel on Is the reasoning of the Repugnant Conclusion valid? · 2022-04-22T01:23:06.932Z · EA · GW

It seems to me that your proposed theory has severe flaws that are analogous to those of Lexical Threshold Negative Utilitarianism, and that you significantly understate the severity of these flaws in your discussion.

• Your characterization of welfare gains for people with above-neutral welfare as "giving [...] welfare to people who don't truly need it" seems to assume something close to negative utilitarianism and begs the question of how we should weigh happiness gains versus losses, and suffering versus happiness.
• Your "It is too implausible" defence is not convincing. It seems theoretically unfounded and ad hoc, and it applies different standards to different theories without justification: it explains away an uncomfortable example for your favored theory while at the same time treating the (arguably more 'extreme' and 'implausible') Repugnant Conclusion as a flaw worth avoiding.
• The example case you use to indicate your theory's flaw arguably isn't the most problematic one, and certainly not the only one. Instead, consider a population A consisting of one person with very high welfare and a very large number of people with very low but positive welfare. Compare this to a population B of the same size in which everyone has the same moderately high welfare. While your theory does say that A is not better than B, it also denies that B is better than A. So for instance your theory denies that we should be willing to inflict a pinprick on Jeff Bezos to lift billions of people out of poverty (unless you think that everyone living in poverty has welfare below zero). In other words, when considering different populations in which everyone has positive welfare, your theory is deeply conservative (in the sense of denying that many intuitively good changes, in particular ones involving 'redistribution', are in fact good) and anti-egalitarian (it is in fact closer to being 'perfectionist', i.e., valuing the peak welfare level enjoyed by anyone in the population).
• It also has various other problems that plague lexical and negative-utilitarian theories, such as involving arguably theoretically unfounded discontinuities that lead to counterintuitive results, and being prima facie inconsistent with happiness/suffering and gains/losses tradeoffs we routinely make in our own lives.

(Also, one at least prima facie flaw that you don't discuss at all is that your theory involves incomparability – i.e. there are populations A and B such that neither is better than the other.)

Comment by Max_Daniel on Free-spending EA might be a big problem for optics and epistemics · 2022-04-22T00:47:45.443Z · EA · GW

(Comment in personal capacity only.)

I support people sharing their unfiltered reactions to issues, and think this is particularly valuable on a contentious topic like this one. Critical reactions are likely undersupplied, and so I especially value hearing about those.

However, I strong-downvoted your comment because I think it is apt to mislead readers by making statements that can be construed as descriptions of actual EA programs (such as the FTX EA Fellowships in the Bahamas) despite being substantively inaccurate. For instance:

• I don't think that "fine dining" is an appropriate description for the food options covered for the FTX EA Fellows. I would say it was fine but lower in both quality and 'fanciness' than food typically provided at, say, EA events in the UK. (Though it's possible it was more expensive because many things in the Bahamas that are targeted at tourists are overpriced.)
• I'm fairly sure that eating caviar is not a typical activity among EAs working from the Bahamas.
• I doubt that traveling business class is common among EAs visiting the Bahamas. I didn't. (I do think that in some cases the cost and increased climate impact of flying business class can be well justified by adding several counterfactual work hours.)
• I don't know what a 500-thread Egyptian cotton towel is, but the towels I saw in the Bahamas looked fairly normal to me.
• As far as I can tell from googling, the hotel in which most FTX EA Fellows stayed was not a five-star hotel.

I'm aware that some of your statements might have been intended as satirical, but I think to readers the line between satire and implied factual claims will at the very least be ambiguous, which seems like a recipe for misinforming readers.

I also have no idea what you're referring to when mentioning a "UK Effective Altruism charity that [...] withheld £7 million+ in reserves". I don't know whether or not this is accurate, but I think it's bad practice to make incriminating claims without providing information that is sufficiently specific for readers to be able to form their own views on the matter.

(I generally support thoughtful public discussion of the issue raised by the OP, and think it made several good points, though I don't necessarily agree with everything.)

Comment by Max_Daniel on How about we don't all get COVID in London? · 2022-04-14T22:07:09.742Z · EA · GW

FWIW my guess is it's because people don't want to wear a mask during lectures and/or think that's a bad norm.

(I didn't downvote your comment.)

Comment by Max_Daniel on Sophie’s Choice as a time traveler: a critique of strong longtermism and why we should fund more development. · 2022-04-13T23:55:23.354Z · EA · GW

Thanks! I think you're right that we may be broadly in agreement methodologically/conceptually. I think remaining disagreements are most likely empirical. In particular, I think that:

• Exponential growth of welfare-relevant quantities (such as population size) must slow down qualitatively on time scales that are short compared to the plausible life span of Earth-originating civilization. This is because we're going to hit physical limits, after which such growth will be bounded by the at most polynomially growing amount of usable energy (because we can't travel in any direction faster than the speed of light).
• Therefore, setting in motion processes of compounding growth earlier or with larger initial stocks "only" has the effect of us reaching the polynomial-growth plateau sooner. Compared to that, it tends to be more valuable to increase the probability that we reach the polynomial-growth plateau at all, or that once we reach it we use the available resources well by impartially altruistic standards. (Unless the effects of the latter type that we can feasibly achieve are tiny, which I don't think they are empirically – it does seem that existential risk this century is on the order of at least 1%, and that we can reduce it by nontrivial quantities.)
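A toy calculation can illustrate why exponential growth must hit the light-cone bound: the volume (and hence usable energy) reachable by expanding at light speed grows only like t³, so any fixed exponential rate eventually overtakes it. All parameters here – the growth rate, the units, the scale factor – are hypothetical and chosen purely for illustration:

```python
import math

growth_rate = 0.02   # 2% annual growth in welfare-relevant resources (hypothetical)

def exponential(t):
    # Resources under sustained exponential growth after t years.
    return math.exp(growth_rate * t)

def cubic_bound(t, scale=1e12):
    # Resources reachable by light-speed expansion after t years,
    # in the same arbitrary units; the scale factor is hypothetical.
    return scale * t**3

# Find the first year at which exponential growth would exceed the bound.
t = 1
while exponential(t) <= cubic_bound(t):
    t += 1
print(f"exponential growth hits the cubic bound after ~{t} years")
```

Even with a generous head start for the cubic bound, the crossover arrives within a few millennia – short compared to the plausible lifespan of Earth-originating civilization, which is the sense in which growth must become at most polynomial on cosmological timescales.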

(I think this is the most robust argument, so I'm omitting several others. – E.g., I'm skeptical that we can ever come to a stable assessment of the net indirect long-term effects of, e.g., saving a life by donating to AMF.)

This argument is explained better in several other places, such as Nick Bostrom's Astronomical Waste, Paul Christiano's On Progress and Prosperity, and other comments here and here.

The general topic does come up from time to time, e.g. here.