What I learned from the criticism contest 2022-10-01T13:39:15.782Z
The sense of a start 2022-09-28T13:37:44.782Z
Announcing “Effective Dropouts” 2022-09-21T14:00:26.737Z
Peter Eckersley (1979-2022) 2022-09-03T10:45:46.655Z
Bruce Kent (1929–2022) 2022-06-10T14:03:02.939Z
What's the causal effect of a PhD? 2022-06-05T08:37:23.695Z
Norms and features for the Forum 2022-05-16T09:38:14.625Z
Different forms of capital 2022-04-25T08:05:25.123Z
Emergent Ventures AI 2022-04-08T22:08:23.558Z
Case for emergency response teams 2022-04-05T11:08:22.173Z
COVID memorial: 1ppm 2022-04-02T18:27:55.908Z
Community in six hours 2022-04-01T12:20:34.764Z
Paul Farmer (1959 – 2022) 2022-03-24T13:08:57.061Z
How we failed 2022-03-23T10:08:06.861Z
What we tried 2022-03-21T15:26:30.067Z
Milan Griffes on EA blindspots 2022-03-18T16:17:12.606Z
Hinges and crises 2022-03-17T13:43:04.755Z
Mediocre AI safety as existential risk 2022-03-16T11:50:01.016Z
Experimental longtermism: theory needs data 2022-03-15T10:05:35.294Z
Comparing top forecasters and domain experts 2022-03-06T20:43:36.510Z
Phil Harvey (1938 - 2021) 2022-03-04T19:06:34.088Z
technicalities's Shortform 2022-01-29T15:46:48.915Z
Off Road: support for EAs struggling at uni 2022-01-21T10:30:23.465Z
Writing about my job: Data Scientist 2021-07-19T10:26:32.884Z
AGI risk: analogies & arguments 2021-03-23T13:18:20.638Z
The academic contribution to AI safety seems large 2020-07-30T10:30:19.021Z
[Link] The option value of civilization 2019-01-06T09:58:17.919Z
Existential risk as common cause 2018-12-05T14:01:04.786Z


Comment by Gavin (technicalities) on What I learned from the criticism contest · 2022-10-03T15:35:01.197Z · EA · GW

got none

Comment by Gavin (technicalities) on technicalities's Shortform · 2022-10-03T14:15:47.213Z · EA · GW

Lovely satire of international development. 

(h/t Eva Vivalt)

Comment by Gavin (technicalities) on What I learned from the criticism contest · 2022-10-03T09:42:29.302Z · EA · GW

Good post!

I doubt I have anything original to say. There is already cause-specific non-EA outreach. (Not least a little thing called LessWrong!) It's great, and there should be more. Xrisk work is at least half altruistic for a lot of people, at least on the conscious level. We have managed the high-pay tension alright so far (not without cost). I don't see an issue with some EA work happening sans the EA name; there are plenty of high-impact roles where it'd be unwise to broadcast any such social-movement allegiance. The name is indeed not ideal, but I've never seen a less bad one, and the switching costs seem way higher than the mild arrogance and very mild philosophical misconnotations of the current one.

Overall I see schism as solving (at really high expected cost) some social problems we can solve with talking and trade.

Comment by Gavin (technicalities) on What I learned from the criticism contest · 2022-10-03T09:27:17.911Z · EA · GW

I struggled a lot with it until I learned how to cook in that particular style (roughly: way more oil, MSG, nutritional yeast, two proteins in every recipe). Good luck!

Comment by Gavin (technicalities) on technicalities's Shortform · 2022-10-02T16:09:05.009Z · EA · GW

Bostrom selects his most neglected paper here.

Comment by Gavin (technicalities) on Why does AGI occur almost nowhere, not even just as a remark for economic/political models? · 2022-10-02T15:45:15.052Z · EA · GW

There are two totally valid conclusions to draw from the structure you've laid out: that CS people or EA people are deluded, or that the world at large, including extremely smart people, is extremely bad at handling weird or new things.

Comment by Gavin (technicalities) on What I learned from the criticism contest · 2022-10-02T14:45:51.152Z · EA · GW

It seems bad in a few ways, including the ones you mentioned. I expect it to make longtermist groupthink worse, if (say) Kirsten stops asking awkward questions under (say) weak AI posts. I expect it to make neartermism more like average NGO work. We need both conceptual bravery and empirical rigour for both near and far work, and schism would hugely sap the pool of complements. And so on.

Yeah the information cascades and naive optimisation are bad. I have a post coming on a solution (or more properly, some vocabulary to understand how people are already solving it).

DMed examples.

Comment by Gavin (technicalities) on What I learned from the criticism contest · 2022-10-02T14:26:44.677Z · EA · GW

You are totally right, Deutsch's argument is computability, not complexity. Pardon!

Serves me right for trying to recap 1 of 170 posts from memory.

Comment by Gavin (technicalities) on What I learned from the criticism contest · 2022-10-02T11:57:43.603Z · EA · GW

Yeah maybe they could leave this stuff to their coaching calls

Comment by Gavin (technicalities) on What I learned from the criticism contest · 2022-10-02T11:56:48.282Z · EA · GW

ah, cool

Comment by Gavin (technicalities) on What I learned from the criticism contest · 2022-10-02T11:47:32.325Z · EA · GW

on the 80k site? seems like a moderation headache

Comment by Gavin (technicalities) on What I learned from the criticism contest · 2022-10-02T09:14:04.154Z · EA · GW

I have a few years of data from when I was vegan; any use?

Comment by Gavin (technicalities) on What I learned from the criticism contest · 2022-10-02T09:04:02.772Z · EA · GW

Nice work, glad to see it's improving things.

I sympathise with them though - as an outreach org you really don't want to make public judgments like "infiltrate these guys please; they don't do anything good directly!!". And I'm hesitant to screw with the job board too much, cos they're doing something right: the candidates I got through them are a completely different population from Forumites. 

Adding top recommendations is a good compromise.

I guess a "report job [as dodgy]" button would work for your remaining pain point, but this still looks pretty bad to outsiders.

Overall: previous state strikes me as a sad compromise rather than culpable deception. But you still made them move to a slightly less sad compromise, so hooray.

Comment by Gavin (technicalities) on Winners of the EA Criticism and Red Teaming Contest · 2022-10-01T13:42:11.108Z · EA · GW

Here's mine

Comment by Gavin (technicalities) on Winners of the EA Criticism and Red Teaming Contest · 2022-10-01T13:41:24.369Z · EA · GW

My thoughts and picks from judging the contest. 

Many of my picks narrowly missed prizes and weren't upvoted much at the time, so check it out.

Comment by Gavin (technicalities) on Winners of the EA Criticism and Red Teaming Contest · 2022-10-01T13:23:43.369Z · EA · GW

I think she means the little blurbs in this post.

Comment by Gavin (technicalities) on Let's advertise infrastructure projects · 2022-09-29T12:52:49.749Z · EA · GW

We've done pro bono stuff before, to each according to need.

Comment by Gavin (technicalities) on Let's advertise infrastructure projects · 2022-09-29T10:22:08.260Z · EA · GW

Unclear if research consultancies count as infra, but Arb answer hard questions for people.

Comment by Gavin (technicalities) on technicalities's Shortform · 2022-09-28T12:28:33.928Z · EA · GW

There is a vast amount of philosophical progress. But almost all of it is outside philosophy. A jaw-dropping list, just on the topic of democracy, of things that Rousseau's writing on democracy suffers from lacking:

  • "Historical experiences with developed democracies
  • Empirical evidence regarding democratic movements in developing countries
  • Various formal theorems regarding collective decision making and preference aggregation, such as the Condorcet Jury-Theorem, Arrow’s Impossibility-Results, the Hong-Page-Theorem, the median voter theorem, the miracle of aggregation, etc.
  • Existing studies on voter behavior, polarization, deliberation, information
  • Public choice economics, incl. rational irrationality, democratic realism"
  • ...
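The Condorcet Jury Theorem in that list is the easiest to demo: under independence, a majority vote of individually mediocre voters becomes near-infallible as the group grows. A quick simulation sketch (the function name and parameters are mine, not from the quoted post):

```python
import random

def majority_correct_rate(n_voters, p_correct, trials=10_000, seed=0):
    """Estimate how often a simple majority of independent voters,
    each correct with probability p_correct, picks the right answer."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # Count voters who happen to vote for the correct option.
        correct_votes = sum(rng.random() < p_correct for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

# Voters who are right only 60% of the time individually:
# the group's accuracy climbs toward 1 as the electorate grows.
for n in (1, 11, 101, 1001):
    print(n, majority_correct_rate(n, 0.6))
```

With p_correct below 0.5 the same dynamic runs in reverse (the group becomes reliably wrong), which is the theorem's standard caveat.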

Comment by Gavin (technicalities) on Announcing “Effective Dropouts” · 2022-09-21T16:40:41.046Z · EA · GW

Huge congrats, welcome to the team

Comment by Gavin (technicalities) on Announcing “Effective Dropouts” · 2022-09-21T14:12:19.654Z · EA · GW

It's a pretty big deal though.

Comment by Gavin (technicalities) on Announcing “Effective Dropouts” · 2022-09-21T14:10:45.880Z · EA · GW
Comment by Gavin (technicalities) on Thomas Kwa's Shortform · 2022-09-17T09:32:59.916Z · EA · GW

My comment got detached, woops

Comment by Gavin (technicalities) on Thomas Kwa's Shortform · 2022-09-17T08:46:31.540Z · EA · GW

I noticed this a while ago. I don't see large numbers of low-quality low-karma posts as a big problem though (except that it has some reputation cost for people finding the Forum for the first time). What really worries me is the fraction of high-karma posts that are neither original, rigorous, nor useful. I suggested some server-side fixes for this.

PS: #3 has always been true, unless you're claiming that more of their output is private these days.

Comment by Gavin (technicalities) on technicalities's Shortform · 2022-09-13T20:07:38.376Z · EA · GW

The ladder of EA weirdness

  1. Obligation to the global poor
  2. Obligation to farmed nonhumans
  3. Obligation to wild nonhumans
  n. Obligation to potential humans and nonhumans
  m. Obligation to take psychedelics / dissolve the self
  o. Obligation to electrons
  p. Obligation to acausally trade with those outside the light cone
  q. Obligation to acausally trade with those elsewhere in the multiverse
  r. Obligation to entities somewhere inside the universal prior

Comment by Gavin (technicalities) on "Agency" needs nuance · 2022-09-13T17:29:06.852Z · EA · GW
  1. "Having inside views": just having your own opinion, whether or not you shout about it and whether or not you think that it's better than the outside view.
  2. "Having strong inside views...": asserting your opinion when others disagree with it, including against the majority of people, majority of experts, etc.


(1) doesn't seem that agenty to me; it's just a natural effect of thinking for yourself. (2) is very agenty and high-status (and can be very useful to the group if it brings in decorrelated info), but needs to be earned.

Comment by Gavin (technicalities) on Where are the EA Influencers? · 2022-09-13T13:01:39.483Z · EA · GW


It does look strange! But it was pretty intentional: see this post for the rationale. 

Recently people have been massively expanding university and "quality press" coverage. But these are relatively low-risk ways of doing mass outreach and the original rationale might stand.

Comment by Gavin (technicalities) on "Agency" needs nuance · 2022-09-13T10:45:33.853Z · EA · GW

Well done on public correction! That's always hard. 

It's key to separate out "social agency" from the rest of the concept, and coining that term makes this post worthwhile on its own. Your learned helplessness is interesting, because to me the core of agency is indeed nonsocial: fixing the thing yourself, thinking for yourself, writing a blog for yourself, taking responsibility for your own growth (including emotional growth, wisdom, patience, and, yes, chores).

> has inside views

I think you mean "has strong inside views which overrule the outside view". Inside views are innocuous if you simultaneously maintain an "all things considered" view.

Because of a quirk of the instructors and students that landed in our sample, ESPR 2021 went a little too hard on agency. We try to promote agency and wisdom in equal measure, which usually ends up sounding a lot like this post. Got there in the end!

Comment by Gavin (technicalities) on Peter Eckersley (1979-2022) · 2022-09-03T14:49:48.371Z · EA · GW


The irony of you and me is that we're digging around in a privacy advocate's data exhaust. But at least it's out of love.

Comment by Gavin (technicalities) on Celebrations and gratitude thread · 2022-09-02T17:36:15.400Z · EA · GW

Peter Hartree made a shockingly useful plugin for Google docs; lets you search comments, loads > 10x faster than Google's native comments.

Comment by Gavin (technicalities) on Effective Altruism is Unkind · 2022-09-02T07:56:10.077Z · EA · GW

Nice work!

PS: I reckon "EA looks unkind" is a more accurate title.

Comment by Gavin (technicalities) on Have faux-evil EA energy · 2022-08-25T09:36:50.713Z · EA · GW

Comment by Gavin (technicalities) on technicalities's Shortform · 2022-08-17T14:25:06.168Z · EA · GW

So happy to see this new longtermist fellowship running in Kenya.

Comment by Gavin (technicalities) on Paula Amato's Shortform · 2022-08-13T12:50:00.326Z · EA · GW

It's very unclear if she's read any of the work besides the torrid Torres pieces.

See also

Comment by Gavin (technicalities) on technicalities's Shortform · 2022-08-12T09:33:27.389Z · EA · GW

Thread for serious AI safety researchers who aren't longtermists



Comment by Gavin (technicalities) on technicalities's Shortform · 2022-08-10T15:14:34.500Z · EA · GW

Review of the New Yorker piece. It's a model of its type, for good and ill but mostly good. 

The good: The essence is correct. EA is now powerful enough that public scrutiny is fully justified. Lewis-Kraus engages with the ideas, and skips tabloid cheap shots. (The house style always involves little gossipy comments about fashion and eye colour, but here it's more about scruffy clothing than physical appearance). 

For instance, it's extremely easy to caricature utilitarianism. Certainly many professional philosophers do. But Lewis-Kraus chooses the neutral definition: no cavilling about hedonism, reductionism, Gradgrind, nor very much about honor. Similarly, AI risk is oddly underemphasised, and we all know how easy that is to piss on. 

The hypothesis of MacAskill's bad faith is entertained and rejected. So too with Bernard Williams' quietism: looked at and put back on the shelf. "perhaps one thought too few".

The bad: gossip and false balance. Girlfriends and buildings are named, needlessly, privacy and risk be damned. The dissident's gender is revealed for absolutely no reason. Journalists as a class have an underdeveloped sense of the risks they are exposing people to. The house style demands irrelevant detail, and apparently places style above potential impacts.

I can't help but admire the symbols he picks out of real life, even though they are the nonfiction equivalent of puns or entrail reading:

* Of xrisk research: "an Oxford building that overlooks a graveyard."

* "The room featured a series of ornately carved wooden clocks, all of which displayed contrary times; an apologetic sign read “Clocks undergoing maintenance,” but it was an odd portent for a talk about the future"

* "We passed People’s Park, which had become a tent city, but his eyes flicked toward the horizon."

Some risible bits:

> abandon the world view of the “benevolent capitalist” and, just as Engels worked in a mill to support Marx, to live up to its more thoroughgoing possibilities

Incredible. Engels ran a Manchester cotton mill and inherited a fifth of it; he was a benevolent capitalist!

> the chances of human extinction during the next century stand at about 1 in 6, or the odds of Russian roulette

That's not how odds work: a one-in-six chance is odds of 1:5.

> It does, in any case, seem convenient that a group of moral philosophers and computer scientists happened to conclude that the people most likely to safeguard humanity’s future are moral philosophers and computer scientists

jfc. If you worry that practitioners of a field are ignoring something, you're a crank and a trespasser. If you worry about the tail risks of your own field, you're suffering from convenient delusions of grandiosity.

The PR suspicion is funny ("Was MacAskill’s gambit with me—the wild swimming in the frigid lake—merely a calculation that it was best to start things off with a showy abdication of the calculus?"). GLK didn't mention any of this in his profile of Rothberg, a businessman with incentives and a presumably similarly sized filter on his speech. But mention consequentialism and suddenly everyone assumes you're a master at acting and a 4D chess player. But he was just primed for it by the dissident so nvm.

> I could see how comforting it was, when everything seemed so awful, to take refuge on the higher plane of millenarianism.

Literally backwards. I find it much more emotionally difficult to contemplate x-risk than terrible but limited events.

But overall GLK is the real deal, as good as magazine writers get. See also him on Paige Harden and Scott Alexander.

Comment by Gavin (technicalities) on Crowdsourced Criticisms: What does EA think about EA? · 2022-08-10T10:48:59.822Z · EA · GW

I meant Hamish

Comment by Gavin (technicalities) on Crowdsourced Criticisms: What does EA think about EA? · 2022-08-09T22:42:50.235Z · EA · GW

I'm troubled that the version of the story you heard didn't mention it was a fuckup she repeatedly apologised for.

Comment by Gavin (technicalities) on EA and the United Nations · 2022-08-09T19:23:46.391Z · EA · GW

I would love to see your estimates. As I say below, I overlooked peacekeeping and it is probably the diamond in the rough.

I am negative because of the lack of estimates, and because it really does seem relatively low importance and tractability. (Every UN insider I've spoken to (now 4) is extremely negative about it.)

I would love to be wrong.

Comment by Gavin (technicalities) on Crowdsourced Criticisms: What does EA think about EA? · 2022-08-09T17:59:27.867Z · EA · GW


Comment by Gavin (technicalities) on Bahamian Adventures: An Epic Tale of Entrepreneurship, AI Strategy Research and Potatoes · 2022-08-09T17:41:53.339Z · EA · GW

> army of one.

What's the main benefit of these posts do you think? Accountability, transparency to your community given funding, inspo, braggadocio?

Comment by Gavin (technicalities) on Crowdsourced Criticisms: What does EA think about EA? · 2022-08-09T03:20:53.630Z · EA · GW

Impressed with the infrastructure you built around the post (the anon forms and the votey comments)! Also love the randomisation ideas.

You do well at reporting the views without necessarily endorsing them in the first half - but then the policy suggestions seem to endorse every criticism. (Maybe you do agree with all of them?) But if not there's a PR flavour to it: "we have to spend on climate cos otherwise people will be sad and not like us". Of the four arguments in Policy section 1, none seem to depend on estimating the expected value and comparing it to the existing EA portfolio, as Ben Dixon memorably did.

(I'd have no objection if the section was titled "appropriate respect for climate work" rather than "more emphasis", which implies a zero-sum bid for resources, optimality be damned.)

Comment by Gavin (technicalities) on Who's hiring? (May-September 2022) · 2022-08-09T00:58:09.628Z · EA · GW

We can now sponsor US work visas.

Comment by Gavin (technicalities) on Brief thoughts on EA's image and elitism · 2022-08-08T17:37:46.248Z · EA · GW

Not wrong but not helpful imo. (Past treatments of the theme: here, here, here.)

Main problem is you're not considering the base rate for elitism / credentialism / privilege in the reference class "philanthropy / intellectual movements / technical fields / levers of power". I'm first-generation college (and not elite college either), and I can tell you that my EA clients care the least about this among any class of clients (corporate, academic, government, non-EA philanthropy) by far.

Similarly: c'mon, EA is 20% non-straight, as opposed to like 5% in the US.

It's also just temporary founder effects plus previous lack of resources. One of the many boons of the funding influx is that we can start lifting unprivileged students. I've seen this happen ten times this year. There are lots of people trying to expand into Latin America and India. It's hard!

Comment by Gavin (technicalities) on Longtermism, risk, and extinction · 2022-08-04T18:27:47.351Z · EA · GW

Boring meta comment on the reception of this post: Has anyone downvoting it read it? I don't see how you could even skim it without noticing its patent seriousness and the modesty of its conclusion.

(Pardon the aside, Richard; I plan to comment on the substance in the next few days.)

Comment by Gavin (technicalities) on Saving lives near the precipice: we're doing it wrong? · 2022-07-29T17:22:40.145Z · EA · GW

I suspect downvoters are misunderstanding "know" and "will be"; I think Turchin meant "If we knew" and "it would [then] be reasonable" (subjunctive).

Comment by Gavin (technicalities) on Potential new cause area: Obesity · 2022-07-26T09:14:56.502Z · EA · GW

Have you seen the new GLP-1 agonists? They only got approved for obesity last year, and might actually make a dent. The next generation are apparently even better.

Making these cheaper, more available, and with non-needle delivery are all worthy interventions, more tractable imo than the policy steering you described. But I don't know if they need charity; the market is vast.

Comment by Gavin (technicalities) on tyleralterman's Shortform · 2022-07-25T14:37:21.429Z · EA · GW

I have no proof it mattered, but a few years before the big pivot to longtermism, 80k debated some leftists who emphasised the sheer scope of systemic change and measurability bias. And we moved.

Comment by Gavin (technicalities) on How EA is perceived is crucial to its future trajectory · 2022-07-24T07:29:07.881Z · EA · GW

I would rather gloss that article as "EA pays too much attention to one kind of criticism: vague systemic paradigmatic insinuation".

Comment by Gavin (technicalities) on How important are quantitative abilities and your country of citizenship for policy careers? · 2022-07-23T13:49:07.770Z · EA · GW

Most philosophers will automatically be metaphilosophical optimists. I'd love to know what fraction of the dropouts are pessimists.