Comments
Sorry, I'm mostly trying to take a day away from the forum, but someone let me know that it would be helpful to chime in here. Essentially what happened:
- The org had arranged accommodation (not a hotel), but it didn't cover the first night she'd be in the country
- The people running the recruitment talked to me in a "this is your friend you recommended, could you help out?" way
- We had a spare room so I offered that; they said yes so I communicated with her about that
- This was all arranged on the day of her flight (before she flew)
(I'm eliding details to reduce risk of leaking information about the person's identity.)
(My personal take based on general theory; not representing any kind of official position or based on specifics of EVF:)
Yeah, combining lots of projects in a small number of legal entities probably increases risk aversion some, relative to them each having their own legal entities. There are various reasons for this, and it’s not clear whether it’s net good.
On the hard analysis (i.e. just looking at ~economic incentives): first order is that it decreases inappropriate risk tolerance since projects that might be judgement proof by themselves are no longer so as part of a larger entity. OTOH it might be that the ecosystem systematically underincentivizes taking upside risks. If large upside risks were correlated with large downside risks (e.g. some activities are just high-variance), which is plausible, it could be bad to asymmetrically make projects internalize downside risk, even though internalizing externalities is usually good. (Impact markets might help here, but have issues of their own …)
On the soft analysis: people may be inclined towards ambiguity aversion, and really not wanting any project to have serious downsides for other projects. This would suggest you might get more risk aversion than is appropriate. OTOH the whole setup could lead to more systematic analysis of risks, in a way that helps to avoid unknowingly taking risks, which is probably an improvement.
Or if you’re asking about the introduction of the Interim CEOs: you might have a concern that they’d be overly risk-averse, if they get the blame for big problems, but don’t get credit for big successes by the projects. I agree that this is a worry in theory; pragmatically, the respective boards will be holding Howie and Zach accountable for “is this a structure which encourages project leads to make appropriately ambitious plans?”, which should help to mitigate it some (probably not all the way because it’s a harder thing to hold them accountable for than whether there were big problems).
Overall my guess is that “effect on risk aversion” is not one of the most important factors for whether this is a good setup.
Hi Jeff —
Side point: The Alphabet and Meta analogies work better for the relationship between EVF (either / both of US and UK) and the projects hosted within it rather than the relationship between EVF UK and EVF US. That is to say, Google rebranded to Alphabet to avoid confusion between the parent company (Alphabet) and the well-known subsidiary company (Google). Similarly, CEA rebranded to EVF to avoid confusion between the legal entities (EVF) and the well-known subsidiary project (CEA).
Why two CEOs? While the projects are hosted within the broader EVF legal entities, neither EVF UK nor EVF US subsumes the other entity. As distinct nonprofits operating in distinct countries, they need distinct leadership; consequently the boards appointed separate Interim CEOs. Similarly, as is typical for nonprofits, each charity has its own board, which is responsible for providing governance oversight (EVF UK used to be a member of EVF US but isn’t any more). There’s a lot in flux, the interim CEO appointments are new, and various things may change over time. We’re still exploring how this best works, and what the right long-term structure will be.
The purchase was in April 2022, not 2021; however, the rest of your comment seems fair.
FYI: I added a brief explanation of why we hadn't posted publicly about it before now to the end of my answer.
(I edited in a way which changed which paragraph was penultimate. I believe Larks was referring to the content which is now expanded on in paragraphs starting "We wanted ..." and "We thought ...".)
I've edited my reply to add a bit more detail on this point.
Hey,
First I want to explain that I think it's misleading to think of this as a CEA decision (I've edited to be more explicit about this). To explain that I need to disambiguate between:
1. CEA, the project that runs the EA Forum, EA Global, etc.
   - This is what I think ~everyone usually thinks of when they think of "CEA", as it's the group that's been making public use of that brand
2. CEA, the former name of a legal entity which hosts lots of projects (including #1)
   - This is a legacy naming issue:
     - The name of the legal entity was originally intended as a background brand to house 80,000 Hours and Giving What We Can; other projects have been added since, especially in recent years
     - Since then the idea of "effective altruism" has become somewhat popular in its own right! And one of the projects within the entity started making good use of the name "CEA"
     - We’ve now renamed the legal entity to EVF, basically in order to avoid this kind of ambiguity!
Wytham Abbey was bought by #2, and isn’t directly related to #1, except for being housed within the same legal entity. I was the person who owned the early development of the project idea, and fundraised for it. (The funding comes from a grant specifically for this project, and is not FTX-related.) I brought it to the rest of the board of EVF to ask for fiscal sponsorship (i.e. I would direct the funding to EVF and EVF would buy the property and employ staff to work on the project).

So EVF made two decisions here: they approved fiscal sponsorship, agreeing to take funds for this new project; and they then followed through and bought the property with the funds that had been earmarked for that. The second of these is technically a decision to buy the building (and was done by a legal entity at the time called CEA), but at that point it was fulfilling an obligation to the donor, so it would have been wild to decide anything else. The first is a real decision, but the decision was to offer sponsorship to a project that would likely otherwise have happened through another vehicle, not to use funds to buy a building rather than for another purpose. Neither of these decisions was made by any staff of the group people generally understand as "CEA". (All of this ambiguity/confusion is on us, not on readers.)
I’d also like to speak briefly to the “why” — i.e. why I thought this was a good idea. The central case was this:
I’ve personally been very impressed by specialist conference centres. When I was doing my PhD, I think the best workshops I went to were at Oberwolfach, a mathematics research centre funded by the German government. Later I went to an extremely productive workshop on ethical issues in measuring the global burden of disease at the Brocher Foundation. Talking to other researchers, including in other fields, I don’t think my impression was an outlier. Having an immersive environment which was more about exploring new ideas than showing off results was just very good for intellectual progress. In theory this would be possible without specialist venues, but researchers want to spend time thinking about ideas not event logistics. Having a venue which makes itself available to experts hosting events avoids this issue.
In the last few years, I’ve been seeing the rise of what seems to me an extremely important cluster of ideas — around asking what’s most important to do in the world, and taking chains of reasoning from there seriously. I think this can lead to tentative answers like “effective altruism” or “averting existential risk”, but for open-minded intellectual exploration I think it’s better to have the focus on questions than answers. I thought it would be great if we could facilitate more intellectual work of this type, and the specialist-venue model was a promising one to try. We will experiment with a variety of event types.
We had various calculations about costings, which made it look somewhere between “moderately money-saving” and “mildly money-spending” vs renting venues for events that would happen anyway, depending on various assumptions e.g. about usage that we couldn’t get great data on before running the experiment. The main case for the project was not a cost-saving one, but that if it was a success it could generate many more valuable workshops than would otherwise exist. Note that this is a much less expensive experiment than it may look on face value, since we retain the underlying asset of the building.
We wanted to be close to Oxford for easy access to the intellectual communities there. (Property prices weren’t falling off significantly with distance until travel time from Oxford and London had become significantly higher.) We looked at a lot of properties online, and visited the three properties we found for sale with 20+ bedrooms within about 50 minutes of Oxford. These were all "country houses", which are commonly repurposed as event venues in England. The other two were cheaper (one ~£6M and one ~£9M at the end of a competitive process; compared to a purchase price for Wytham of a bit under £15M) but needed significantly more work before they were usable, which would have added large expense (running into the millions) and delay (likely years). (And renovation expense isn’t obviously recoverable if one sells — it depends on how much the buyers want the same things from the property as you do.)
We thought Wytham had the most long-term potential as a venue because it had multiple large common rooms that could take >40 people. The other properties each had one large room holding perhaps a max of 40, but there would be pressure on this space since it would be wanted both as a dining space and for workshop sessions, which would also reduce flexibility of use for meetings (extra construction might have been able to address this, but it was a big question mark whether you could get planning consent). Wytham also benefited from being somewhat larger (about 27,000 sq ft vs roughly 20,000 sq ft for each of the other two) and a more accessible location. Overall we thought that a combination of factors made it the most appropriate choice.
I did feel a little nervous about the optical effects, but think it’s better to let decisions be guided less by what we think looks good, and more by what we think is good — ultimately this was a decision I felt happy to defend.
On why we hadn’t posted publicly about this before: I'm not a fan of trying to create hype. I thought the natural time to post about the project publicly would be when we were ready to accept public applications to run events, and it felt a bit gauche to post before that. Now that there's a public discussion, of course, it seemed worth explaining some of the thinking.
I hope this is helpful.
I agree with all this. I meant to state that I was assuming logarithmic returns for the example, although I do think some smoothness argument should be enough to get it to work for small shifts.
Sorry I don't have a link. Here's an example that's a bit more spelled out (but still written too quickly to be careful):
Suppose there are two possible worlds, S and L (e.g. "short timelines" and "long timelines"). You currently assign 50% probability to each. You invest in actions which help with either until your expected marginal returns from investment in either are equal. If the two worlds have the same returns curves for actions on both, then you'll want a portfolio which is split 50/50 across the two (if you're the only investor; otherwise you'll want to push the global portfolio towards that).
Now suppose you update so that S is 1% more likely (51%, with L at 49%).
This changes your estimate of the value of marginal returns on S and on L. You rebalance the portfolio until the marginal returns are equal again -- which has 51% spending on S and 49% spending on L.
So you eliminated the marginal 1% spending on L and shifted it to a marginal 1% spending on S. How much better spent, on average, was the reallocated capital compared to before? Around 1%. So you got a 1% improvement on 1% of your spending.
If you'd made a 10% update you'd get roughly a 10% improvement on 10% of your spending. If you updated all the way to certainty on S you'd get to shift all of your money into S, and it would be a big improvement for each dollar shifted.
On the face of it an update 10% of the way towards a threshold should only be about 1% as valuable to decision-makers as an update all the way to the threshold.
(Two intuition pumps for why this is quadratic: a tiny shift in probabilities only affects a tiny fraction of prioritization decisions and only improves them by a tiny amount; or: getting 100 separate updates, each 1% of the way to a threshold, is super unlikely to actually get you to the threshold, since many of them are likely to cancel out.)
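To make the quadratic point concrete, here's a minimal numerical sketch, assuming logarithmic returns to spending in each world (the assumption flagged in the earlier comment); the budget normalization and exact numbers are just for illustration:

```python
import math

def expected_value(p_S, spend_S, budget=1.0):
    """Expected returns with log returns in each world, splitting a fixed budget."""
    spend_L = budget - spend_S
    return p_S * math.log(spend_S) + (1 - p_S) * math.log(spend_L)

def gain_from_update(p_new, p_old=0.5):
    """Value of rebalancing from the old optimal split to the new optimal split,
    evaluated at the new probability. With log returns the optimal split just
    matches the probabilities, so old spending on S was p_old of the budget."""
    return expected_value(p_new, p_new) - expected_value(p_new, p_old)

print(gain_from_update(0.51))  # ~0.0002 : a 1% update
print(gain_from_update(0.60))  # ~0.02   : a 10% update
print(gain_from_update(0.99))  # ~0.64   : near-certainty, far more valuable per dollar shifted
```

The ratio between the 10% case and the 1% case comes out around 100, which is the quadratic scaling described above.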
However you might well want to pay for information that leaves you better informed even if it doesn't change decisions (in expectation it could change future decisions).
Re. arguments split across multiple posts, perhaps it would be ideal to first decide the total prize pool depending on the value/magnitude of the total updates, and then decide on the share of credit allocation for the updates. I think that would avoid the weirdness about post order or incentivizing either bundling/unbundling considerations, while still paying out appropriately more for very large updates.
Ah, got you. There are a few people employed in small projects; things with a similar autonomous status to the orgs, but not yet at a scale where it makes sense for them to be regarded as "new orgs".
Technically speaking all employees of the constituent organizations are "employees of EV" (for one of the legal entities that's part of EV).
Yep, your interpretation is correct. We didn't want to make a big deal about this rebrand because for most people the associations they have with "CEA" are for the organization which is still called CEA. (But over the years, and especially as the legal entity has grown and taken on more projects, we've noticed a number of times where the ambiguity between the two has been somewhat frustrating.) Sorry for the confusion!
FWIW I think people are normally more concerned with flawed realisation scenarios than stagnation scenarios. (I'm not sure whether this changes your basic point.)
Thanks, I liked all of this. I particularly agree that "adequately communicating just how much wisdom you can find in EA/LW" is important.
Maybe I want it to be perceived more like a degree you can graduate from?
Yeah maybe I should have been more explicit that I'm very keen on people who've never spent time in EA hubs going and doing that and getting deeply up to speed; I'm more worried about the apparent lack of people who've done that then going into explore mode while keeping high bandwidth with the core.
Oh yeah I'm also into this. I was thinking of getting them more involved with EA directly as something that's already socially supported/encouraged by the community (which is great), but other ways to tap into their knowledge would be cool.
I loved this post; I imagine myself sending it to people in the future, and I'd be interested in seeing more in this genre from you or others. [Edit: it's even better now it has Rohin's comment.]
(Mostly I'm pulled to comment because I'm confused by the apparent disconnect between value-I-perceive and karma-it's-receiving.)
I haven't laid out a particular argument because I think the AI argument is by some way the strongest, and I haven't done the work to flesh out the alternatives.
I guess I thought you were making a claim like "the only possible arguments that will work need to rest on AI". If instead you're making a claim like "among the arguments people have cleanly articulated to date, the only ones that appear to work are those which rest on AI" that's a much weaker claim (and I think it's important to disambiguate that that's the version you're making, as my guess is that there are other arguments which give you a Time of Perils with much lower probability than AI but still significant probability; but that AFAIK nobody's worked out the boundaries of what's implied by the arguments).
You say:
> Arguments for the Time of Perils Hypothesis which do not appeal to AI are not strong enough to ground the relevant kind of Time of Perils Hypothesis
What you've shown is that some very specific alternate arguments aren't enough to ground the relevant kind of Time of Perils hypothesis. But the implicature of your statement here is that one needs to appeal to AI to ground the relevant hypothesis, which I think (1) you haven't shown, and (2) is most likely false (though I think it's easier to ground with AI).
I guess with a broad enough conception of "AI" I think the statement would be true. I think to get stability against political risks one needs systems that are extremely robust / internally error-correcting. It's my view that one could likely build such systems out of humans organized in novel ways with certain types of information flow in the system, but I think that's far enough out of reach at the moment that the social technology to enable it could conceivably be called "AI".
I'm sympathetic to the basic possibilities you're outlining. I touched on some similar-ish ideas in this high-level visualization of how the future might play out.
Re. skipping the quiz by putting in a dummy answer: I agree the user experience is fine if people are bought into doing the whole thing. My worry is that when I try to imagine young-me, (I think) I'd feel some allergy to the fact-of-compulsory-quizzes, because of the implicit social contract of something like "these people know better; I'm here to be judged". Which might put me off the site (either making me stop reading, or just orient to the site as "something to be exploited" rather than "my friend to help me").
Thanks for this, and especially for your last post (I'm viewing this as kind of an appendix-of-examples to the last post, which was one of my favourite pieces from the MIRI-sphere or indeed on AI alignment from anywhere). General themes I want to pick out:
- My impression is that there is a surprising dearth of discussion of what the hard parts of alignment actually are, and that this is one of the most important discussions to have given that we don't have clean agreed articulations of the issues
- I thought your last post was one of the most direct attempts to discuss this that I've seen, and I'm super into that
- I am interested in further understanding "what exactly would constitute a sharp left turn, and will there be one?"
- I'm in strong agreement that the field would be healthier if more people were aiming at the central problems, and I think it's super healthy for you to complain about how it seems to you like they're missing them.
- I don't think everyone should be aiming directly at the central problems, because we may not yet know enough to articulate and make progress there, and it can be helpful as a complement to build up knowledge that could later help with the central problems. I would at least like it, though, if lots of people spent a little bit of time trying to understand the central problems, even if they then give up and say "seems like we can't articulate them yet" or "I don't know how to make progress on that" and go back to more limited things that they know how to get traction on, while keeping half an eye on the eventual goal and how it's not being directly attacked.
I also wanted to clarify that Truthful AI was not trying to solve the hard bit of alignment (I think my coauthors would all agree with this). I basically think it could be good for two reasons:
- As a social institution it could put society in a better place to tackle hard challenges (like alignment; if we get long enough between building this institution and having to tackle alignment proper).
- It could get talented people who wouldn't otherwise be thinking about alignment to work on truthfulness. And I think that some of the hard bits of truthfulness will overlap with the hard bits of alignment, so it might produce knowledge which is helpful for alignment.
(There was also an exploration of "OK but if we had fully truthful AI maybe that would help with alignment", but I think that's more a hypothetical sideshow than a real plan.)
So I think you could berate me for choosing not to work on the hard part of the problem, but I don't want to accept the charge of missing the point. So why don't I work on the hard part of the problem? I think:
- I don't actually perceive the hard part of the problem clearly
- It feels slippery, and trying to tackle it head-on prematurely seems too liable to result in doing work that I will later think completely misses the point
- But I can perceive the shape of something there (I may or may not end up agreeing with you about its rough contours), so I prefer to think about a variety of things with some bearing on alignment, and periodically check back in to see how much enlightenment I now have about the central things
- You could think of me as betting on something like Grothendieck's rising sea approach to alignment (although of course it's quite likely I'll never actually get the shell open)
- This is part of what made my taste sensors fire very happily on your posts!
- I think there's a web of things which can put us in a position of "more likely well-equipped to make it through", and when I see I have traction on some of those it feels like there's a real substantive opportunity cost to just ignoring them
(Laying this out so that you know the basic shape of my thinking, such that if you want to make a case that I should devote time to tackling things more head-on, you'll know what I need to be moved on.)
Interested if you'd find the quizzes good for you at your current age? The existence of compulsory quizzes strikes me as sort of condescending. (I'd feel better about the vibe if the same content were framed as optional-but-encouraged puzzles.)
I love this.
(If the $100k is actually money in proportion to what people already have, then it's even more dramatic)
I think it is in proportion to what people already have, but this doesn't make it much more dramatic because you've done the calculation for money to an average person, whereas if it were actually split equally among people the benefits would be dominated by the share of money going to the world's poorest.
You can get your factor-of-1,000 from a combination of:
- Valuing instrumental effects more than short term suffering (perhaps because of taking a longtermist lens in which case this could get you more than a factor-of-1,000 by itself; but one doesn't need to adopt longtermism to think some more moderate factor-of-adjustment is correct here);
- Upweighting the climate costs to count economic impacts on the poorest more than economic impacts on richer people (probably correct and important, but I'm not sure if this is like a factor-of-2 adjustment or a factor-of-20 adjustment);
- Upweighting climate costs to account for tail risks in addition to central projections of economic cost (if 90% of your worry about climate change is about tail risks this should be a factor-of-10 adjustment; different people will have different takes on that);
- The direct moral importance of suffering of different beings (reasonable views vary a lot, but Jeff's linked post reads like a careful thinker trying to have a reasonable take and arriving at figures in the vicinity of factor-of-1,000 just from this factor)
I don't think it's that hard for combinations of these factors to push you over to "climate effects matter more".
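Purely as an illustration of how these adjustments multiply (the specific numbers below are made-up placeholders within the ranges sketched above, not figures I'm endorsing):

```python
# Hypothetical adjustment factors, each within the ranges discussed above
instrumental_effects = 3   # valuing instrumental effects over short-term suffering (non-longtermist version)
poverty_weighting = 5      # upweighting economic impacts on the world's poorest
tail_risks = 10            # counting tail risks on top of central economic projections
moral_weights = 20         # relative moral importance of the beings affected

combined = instrumental_effects * poverty_weighting * tail_risks * moral_weights
print(combined)  # 3000 -- comfortably past the factor-of-1,000 threshold
```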
It's asking "how much richer would the whole world have to be to make people happier (because richer) by the same amount?". World GDP is ~$10k/person, so I think $100k means "about as big a welfare boost as giving 100 people a 10% increase in income".
(This is a legit way of measuring things, but I think arguably it gives the wrong impression compared to the climate numbers, which I think are more directly economic costs, which fall disproportionately on the poor, rather than equivalences. There's also a decent case that the main reason to care about climate change should be tail risks, which I think aren't assessed here but could change the conclusion that animal welfare effects are robustly a bigger deal than climate change.)
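For what it's worth, the arithmetic behind that equivalence is just the following (using the rough ~$10k/person figure above; purely illustrative):

```python
welfare_equivalent = 100_000   # the $100k welfare-equivalent figure
world_gdp_per_person = 10_000  # rough world GDP per capita

# $100k is about ten average annual incomes ...
print(welfare_equivalent / world_gdp_per_person)  # 10.0
# ... which is the same total as a 10% income boost for 100 average earners
print(100 * 0.10 * world_gdp_per_person)          # 100000.0
```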
Right. (Which is significantly about level of context not just innate properties of the people; also as I alluded to briefly depends on the coordination problem of finding the right roles for people.)
But I don't really think introducing this changes the concept of funding overhangs?
I'm now noticing I'm unsure whether you disagree that it's okay to approximate #1 as zero. I read your post as arguing against approximating #2 as zero, but maybe that's just because that's something I agree with, and you actually intended to cover both 1+2.
After reflecting further and talking to people I changed "track" in the title to "camp"; I think this more accurately conveys the point I'm making.
So I think the state of affairs right now for some key work is "barely funding constrained at all". I think that's helpful to be able to talk about, and it doesn't seem egregious to me if people round it off as "not funding constrained". But I'm worried that that has more (incorrect) connotations of "extra money isn't helpful" than "funding overhang".
The issue is that we need to be able to talk about two different things:
- The value of deploying funding now for getting marginal work to happen right now;
- The all-things-considered value of funding.
These are obviously related, but they can vary separately. I want to be able to express that #1 is very low (where it's unproblematic if people approximate it as zero), but I'm definitely not OK with people approximating #2 as zero. In a different world #1 could be high while #2 was relatively low (in this case borrowing-to-give would be a good strategy).
[A very rough figure I'd be OK with people using for #2 is 100 nanodooms averted /$M (for funding that isn't significantly correlated with other future EA funding sources; lower if correlated).]
I realized I seemed to be understanding the term very differently than you -- e.g. this sentence didn't resonate for me ...
> Funding ‘overhang’ makes it sound like there’s a ‘correct’ amount of funding, and we can have too little or too much.
... so I wrote an explanation of how I'd use the term funding overhang. It might still be correct to stop using the term (and have a different term for the thing I'm calling funding overhang), but I do think there's a valuable concept there and we can have a better version of the conversation if we have common knowledge of what that is.
Hmm, I think "funding overhang" has some helpful connotations that you don't describe here. It evokes (for me) a sense of: not being in equilibrium on current spending; of the elasticity-of-activity-with-funding being very low; of discount rates being higher for labour than money.
I think all of those are basically accurate at the moment in traditionally longtermist areas. Perhaps most importantly, I like how it evokes "if you can find a great idea and people to implement it, there's probably funding available for it that isn't directly trading off against other projects people are implementing now".
And although you say it's better not to think in blacks and whites, I do think there's something binary about this dynamic according to whether funding is a significant bottleneck on more good work happening right now (with a decent-sized grey area in the middle). And in longtermist EA we've been fairly firmly at the "funding overhang" end of the spectrum for the last three years, even while the bar moves around a bit.
So overall I think the issue is not the use of the term "funding overhang" itself, but the connotation that if there's a funding overhang the value of marginal funds is zero (which I think is badly wrong). So I'm kinda wondering if we can keep the term "funding overhang" and just get rid of that association ... it feels like it should be achievable ... OTOH there are few cases where it's necessary to talk about it (I don't think talking about the funding bar just does the same work, but it's not too hard to find other ways to frame it).
True/false isn't a dichotomy. The statement here was obviously a stretch / not entirely true. I'd guess it had hundreds of thousands of microlies ( https://forum.effectivealtruism.org/posts/SGFRneArKi93qbrRG/truthful-ai?commentId=KdG4kZEu9GA4324AE )
But I think it's important to reserve terms like "lie" for "completely false", because otherwise you lose the ability to police that boundary (and it's important to police it, even if I also want higher standards enforced around many spaces I interact with).
It's from "man, things in the world are typically complicated, and I haven't spent time digging into this, but although the surface-level facts look bad I'm aware that selective quoting of facts can give a misleading impression".
I'm not trying to talk you out of the bad actor categorization, just saying that I haven't personally thought it through / investigated enough that I'm confident in that label. (But people shouldn't update on my epistemic state! It might well be I'd agree with you if I spent an hour on it; I just don't care enough to want to spend that hour.)
I don't think it's like "Jacy had an interpretation in mind and then chose statements". I think it's more like "Jacy wanted to say things that made himself look impressive, then with motivated reasoning talked himself into thinking it was reasonable to call himself a founder of EA, because that sounded cool".
(Within this there's a spectrum of more and less blameworthy versions, as well as the possibility of the straight-out lying version. My best guess is towards the blameworthy end of the not-lying versions, but I don't really know.)
Yes, I personally want to do that, because I want to spend time engaging with good faith actors and having them in gated spaces I frequent.
In general I have a strong perfectionist streak, which I channel only to try to improve things which are good enough to seem worth the investment of effort to improve further. This is just one case of that.
(Criticizing is not itself something that comes with direct negative effects. Of course I'd rather place larger sanctions on bad faith actors than good faith actors, but I don't think criticizing should be understood as a form of sanctioning.)
I agree with this.
I'm saying it's a gross exaggeration not a lie. I can imagine someone disinterested saying "ok but can we present a democratic vision of EA where we talk about the hundred founders?" and then looking for people who put energy early into building up the thing, and Jacy would be on that list.
(I think this is pretty bad, but that outright lying is worse, and I want to protect language to talk about that.)
I actually didn't mean for any of my comments here to get into attacks on or defence of Jacy. I don't think I have great evidence and don't think I'm a very good person to listen to on this! I just wanted to come and clarify that my criticism of John was supposed to be just that, and not have people read into it a defence of Jacy.
(I take it that the bar for deciding personally to disengage is lower than for e.g. recommending others do that. I don't make any recommendations for others. Maybe I'll engage with Jacy later; I do feel happier about recent than old evidence, but it hasn't yet moved me to particularly wanting to engage.)
Actually no I got reasonably good vibes from the comment above. I read it as a bit defensive but it's a fair point that that's quite natural if he's being attacked.
I remember feeling bad about the vibes of the Apology post but I haven't gone back and reread it lately. (It's also a few years old, so he may be a meaningfully different person now.)
[meta for onlookers: I'm investing more energy into holding John to high standards here than Jacy because I'm more convinced that John is a good faith actor and I care about his standards being high. I don't know where Jacy is on the spectrum from "kind of bad judgement but nothing terrible" to "outright bad actor", but I get a bad smell from the way he seems to consistently present things in a way that puts him in a relatively positive light and ignore hard questions, so absent further evidence I'm just not very interested in engaging]
I wouldn't have described Jacy as a co-founder of effective altruism and don't like him having had it on his website, but it definitely doesn't seem like a lie to me (I kind of dislike the term "co-founder of EA" because of how ambiguous it is).
Anyway I think calling it a lie is roughly as egregious a stretch of the truth as Jacy's claim to be a co-founder (if less objectionable since it reads less like motivated delusion). In both cases I'm like "seems wrong to me, but if you squint you can see where it's coming from".
Either way it looks pretty hard to have a real apples-to-apples comparison, since presumably the open call takes significantly more time from prospective grantees (but you wouldn't want to count that the same as grantmaker time).
Gavin's count says it includes strategy and policy people, for which I think AI Impacts counts. He estimated these accounted for half of the field then. (But I think that 50% adjustment should have been included when quoting his historical figure, since this post was clearly just about technical work.)
[Speaking for myself not Oliver ...]
I guess that a week doing ELK would help on this -- probably not a big boost, but the type of thing that adds up over a few years.
I expect that for this purpose you'd get more out of spending half a week doing ELK and half a week talking to people about models of whether/why ELK helps anything, what makes for good progress on ELK, what makes for someone who's likely to do decently well at ELK.
(Or a week on each, but I wanted to comment on the allocation of a certain amount of time rather than on increasing the total.)
Re. non-consequentialist stuff, I notice that I expect societies to go better if people have some degree of extra duty towards (or caring towards) those closer to them. That could be enough here?
(i.e. Boundedly rational agents shouldn't try to directly approximate their best guess about the global utility function.)
Re. 2), I think the relevant figure will vary by activity. 30% is a not-super-well-considered figure chosen for 80k, and I think I was skewing conservative ... really I'm something like "more than +20% per doubling, less than +100%". Losing 90% of the impact would be more imaginable if we couldn't just point outliery people to different intros, and would be a stretch even then.