Comments

Comment by Kerkko Pelttari on Michael Nielsen's "Notes on effective altruism" · 2022-06-06T10:02:24.650Z · EA · GW

By many estimates, solving AI risk would reduce the total probability of existential catastrophe by only 1/3 or 2/3, or maybe 9/10 if you weight AI risk very heavily.
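
A rough sketch of the arithmetic, with purely illustrative numbers rather than anyone's published estimates: if AI risk p_A and the combined other risks p_O are roughly independent, then

```latex
% Illustrative numbers only, not anyone's published estimates.
% Total X-risk from an independent AI risk p_A and other risks p_O:
P_{\mathrm{total}} = 1 - (1 - p_A)(1 - p_O)
% E.g. p_A = 0.10, p_O = 0.05 gives P_total = 1 - 0.9 \times 0.95 = 0.145;
% solving AI risk leaves 0.05, i.e. roughly a 2/3 reduction.
```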

Personally I think humanity's "period of stress" will take at least thousands of years to resolve, though I might be quite pessimistic. Of course things will improve, but I think the world will still be "burning" for quite some time.

Comment by Kerkko Pelttari on FTX/CEA - show us your numbers! · 2022-04-18T22:15:09.853Z · EA · GW

Good questions; I have ended up thinking about many of these topics often.

Something else where I would find improved transparency valuable: the back-of-envelope calculations and statistics for denied funding applications. Reading EA Funds reports, for example, doesn't give a complete view of where the current bar for interventions sits, because we only see the project distribution above the cutoff point.
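
As a toy sketch of that selection effect (made-up numbers and a hypothetical scoring scale, not any fund's actual process):

```python
# Toy model: applications scored 0-1, funded iff score >= bar.
# A report covering only funded projects shows just the truncated
# upper tail; the denied applications and the bar stay invisible.
import random

random.seed(0)
bar = 0.6  # hypothetical funding cutoff
scores = [random.random() for _ in range(1000)]

funded = [s for s in scores if s >= bar]
denied = [s for s in scores if s < bar]

print(f"funded: n={len(funded)}, mean={sum(funded) / len(funded):.2f}")
print(f"denied: n={len(denied)}, mean={sum(denied) / len(denied):.2f}")
```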

Comment by Kerkko Pelttari on Announcing the actual longtermist incubation program · 2022-04-01T20:14:04.858Z · EA · GW

I read a blog post by Abraham Lincoln once, and I think the core point was that EA is talent-overhung rather than talent-constrained.

Since this removes the core factor of impact from the project, it rounds most expected values down to 0, which is an improvement. You can thank me in the branches that would otherwise have been destroyed by tail risk.

Comment by Kerkko Pelttari on I feel anxious that there is all this money around. Let's talk about it · 2022-04-01T09:00:37.617Z · EA · GW

"There is a good chance, I think, that EA ends up paying professional staff significantly more to do exactly the same work to exactly the same standard as before, which is a substantive problem;"

At least in this hypothetical example it would seem naively ineffective (not taking into account things like signaling value) to pay people a higher salary for the same output. (And fwiw, I think qualities like employee wellbeing are part of "output" here, but it is unclear how directly salary helps in that area.)

Comment by Kerkko Pelttari on Democratising Risk - or how EA deals with critics · 2021-12-29T11:49:16.604Z · EA · GW

Perhaps a general "willingness to commit" X% of funding to criticism of areas that are heavily funded by EA-aligned funding organizations could work as a heuristic for enabling the second idea.

(E.g. if "pro current X-risk" research in general receives N funding, then some percentage of N would be made available for "critical work" in the same area. But in science it can sometimes be hard to even say which work is critical and which builds on top of existing work.)

Comment by Kerkko Pelttari on Democratising Risk - or how EA deals with critics · 2021-12-29T11:43:22.606Z · EA · GW

I'm not affiliated with EA research organizations at all (I help run a local group in Finland and am looking at industry and other EA-affiliated career options more than research specifically).

However, I have had multiple discussions with fellow local EAs in which we found it problematic that some X-risk papers are held to quite "weak" standards of criticism relative to how much they often imply. Heartfelt thanks to you both for publishing and discussing this topic, and for starting a conversation on the important meta-topic of decision-making and standards around EA research topics and funding.