Great! I'd be very curious to see the follow-up investigation when it's available.
I broadly agree that there's ample opportunity to build more digitally native public institutions and to improve government operations. Here is a post on some of the potential that I see there. Here is another looking at improving state capacity more broadly.
It seems like the Ethereum Foundation's foray into a DAO would be worth exploring as a case study here? I like the idea of experimenting with novel approaches to governance with some subset of EA funding. I'm not sure this idea is fully fleshed out, though, and it might benefit from further development before convening a council to test it.
Ah yeah you cracked the code :)
Are any of the more philosophically inclined EAs into the pragmatic school? There's obviously a (very!) strong consequentialist strain here, though I could see engagement with pragmatists like Dewey, Rorty and others being super helpful to this movement.
Much of this is amazing and deserves wider engagement within EA. Thanks for sharing. I LOLed at this bit:
"You can read ungodly reams of essays defining effective altruism - which makes me wonder if the people who wrote them think that they are creating the greatest possible utility by using their time that way "
n00b q: What's AF?
Thanks for the post. Is there a comprehensive repository of EA involvement in politics? Thinking of something similar to Open Philanthropy's grant database.
What's the universe of studies that Elicit is searching through? I did a couple of queries in domains I'm familiar with, and it missed a few studies that I would have expected it to pick up. It'd be helpful if users had some sense of what's being searched so they know where else to look.
Is helping out with theCaDC.org something that you'd be interested in? Many of the analytics are open source and on GitHub.
What's up with AI alignment? That keeps coming up and I really don't get it. I would really love to see a point by point engagement with this piece on "Superintelligence: the idea that eats smart people." https://idlewords.com/talks/superintelligence.htm
How do you validate the conclusions Elicit shows in the tool? Some of the data pulled from papers seems like it could easily be unknowingly wrong. For instance, consider the column on the number of participants in the paper. Do you have a protocol for manually verifying these extractions?
Have any prominent EA philosophers engaged with critiques of consequentialism? Are there canonical examples of those that people might point to? I don't really have a desire to write a several-thousand-word essay, though I would be curious what more philosophically inclined EAs think of works like "Beyond Consequentialism" by my old prof Paul Hurley. ( https://www.amazon.com/Beyond-Consequentialism-Paul-Hurley/dp/0199698430 )
How do you see what you're doing as different from DataKind or Bayes Impact? Why a new standalone org rather than a chapter in those existing networks?
Here's a Sac Bee article from a few years back about the water data wars. I'm happy to share some documents from theCaDC.org showing the growth and progress of the initiative: https://www.sacbee.com/opinion/california-forum/article182279056.html
Hi, thanks for the note! The link you shared is 404ing, however.
I wrote a post calling for folks interested in this topic at the end. Maybe we can get an email list or something similar going? https://forum.effectivealtruism.org/posts/wBre7Rm6tBzKbZ86B/the-california-case-for-state-capacity-as-an-ea-cause-area
Would it be altruistic to walk past a person dying in the street so you can close an extra deal, make an extra buck, and send that to a faraway place to help a few more people? That seems rather cold and calculating, and not in a good way.
There definitely are people suffering to a high degree in SF btw. The homeless situation is quite dire.
Yes, and it's also not ideal to lust after hyper-legible solutions. Sometimes a general direction can be a useful starting point.
Reflecting a bit on the EA call for criticisms, one obvious challenge to the movement is cities like San Francisco. That's a city with tons of EA-aligned folks, and yet spending a lot of philanthropic dollars to alleviate hunger in Africa or address future AGI risk while there's massive human immiseration all around the city just seems extremely dystopian. Here's a good Atlantic article on the issues in SF for those who are curious: https://www.theatlantic.com/ideas/archive/2022/06/how-san-francisco-became-failed-city/661199/
Thanks for the post. I agree broadly, and would also note we do a bad job of thinking through the long-tail risks associated with climate change. Climate exacerbates the polycrisis (the rise of Caesarism, mass-migration challenges 10-100x what we see today). There are also potentially cataclysmic effects for human biology, like breaking the 98.6°F first-line homeostatic defense. https://alltrades.substack.com/p/mushrooms-global-warming-and-the?r=hynj&utm_campaign=post&utm_medium=email&utm_source=twitter
Where do EAs interested in improving institutions tend to congregate? I'd love to dive deeper into these subjects with like minds.
Not a prize though this might be relevant: https://www.aqaix.com/case-studies/monterey-bay-stormwater-digital-master-plan
Did any of these arguments change your beliefs about AGI? And I'd love to get a definition of general intelligence, since that seems like an ill-posed concept.
Have you written about similar existing efforts to measure impact and pay just for that impact through social impact bonds (SIBs), pay-for-performance frameworks, and things of that nature? There are real-world examples that would be good to investigate.
This discussion would benefit from an empirical analysis of social impact bonds and previous efforts to quantify and financialize social impact. My not-super-deep understanding from talking to old-school foundation COO types is that those examples are a very mixed bag and very difficult to measure. The market dynamics discussed here take well-functioning measurement as a given, but that is usually not the case. Consider all the debates about the impact of various education "reform" interventions in the United States, for example.
Here is the canonical first US social impact bond example: https://hbsp.harvard.edu/product/UV7311-PDF-ENG
Why isn't there more EA focus on state capacity and improving government operations? The one mention of "government" on the EA topics page is "impactful government careers." Furthermore, the "big list of cause candidates" recently shared only mentions government in the context of survivalist bunkers and AGI. This seems... a bit narrow.
If we really want to uplift the human condition, we'd do well to remember that philanthropy as a whole is an absolute drop in the bucket relative to government investment. It's literally a one percent solution. And within that, Open Philanthropy is a small piece. This seems like something that would benefit from a focused effort, and an area that is rather obviously struggling.
See the many challenges with the Covid-19 response in the US, Europe, and other "Western" states (for lack of a better term), the difficulty of building much of anything in those places, and the general public institutional malaise. Reforming and improving state capacity seems like a first-order challenge of the day. It would also be a massive force multiplier, whereby a dollar spent on changing government operations could have several orders of magnitude greater impact. The Marginal Revolution blog's "state capacity" tag has a lot of great readings for those new to the topic.
#causeprioritization #ea_critiques
EDIT: note I believe this should say billions of dollars rather than millions.

Transhumanism would benefit from also exploring human augmentation. Thanks for the list.
Has anyone fully dug into the AI credulity risk? By that I mean the x-risk scenario where a technologist A) believes they've invented AGI and B) acts on that belief in Ozymandian fashion. Note that A does not require actually inventing AGI; it just requires that, like the recent Google employee, they believe it to be so. And B does not require actual evil intent; in fact, evil is often done out of a desire to do good. I believe this is a nontrivial risk, compounded by the availability of these technologies: the potential future upside of AGI could blind even well-intentioned people into horrific deeds in service of the brave new world. The logic is as simple as it is elegant and horrifying: what is a finite number of present human lives against an infinity of lifetimes in the beautiful garden of a well-aligned AGI? Would not the monster be the person who doesn't do what's necessary to ensure that future? What wouldn't be justified to create such a utopia?
What if instead we replace "world could plausibly develop 'transformative AI'" with "someone invents the philosopher's stone" and do a similar analysis of the implications? That'd be a change of pace.
Sometimes I wonder how much brain space we really need to spend arguing over how many angels can dance on the head of the proverbial pin. This discussion reminds me of my eponymous favorite cartoon.

To be honest this parsing of these two communities that have a ton in common reminds me of this great scene from Monty Python's Life of Brian:
Thanks! The challenge with these types of benefit-cost analyses is that willingness to pay often correlates with ability to pay. There can also be categorical-imperative-type issues that trump these kinds of logics. I do think it's a super useful initiative, though. Better to have progressively less wrong measures and use the best available evidence than to shoot from the hip. Cheers!
Re 2, the right way to compare high- and low-confidence numbers is to add error bounds. This chart does not do that.
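To make concrete what I mean, here's a minimal sketch in Python with matplotlib (the values and intervals are made up purely for illustration, not taken from the chart) of plotting two estimates with explicit error bars, so a reader can see at a glance that the point estimates are comparable but the uncertainties are not:

```python
import matplotlib.pyplot as plt

# Hypothetical numbers, purely for illustration.
labels = ["High confidence", "Low confidence"]
estimates = [42.0, 40.0]   # point estimates
errors = [2.0, 15.0]       # +/- uncertainty around each estimate

fig, ax = plt.subplots()
ax.bar(labels, estimates, yerr=errors, capsize=8)
ax.set_ylabel("Estimated value (arbitrary units)")
ax.set_title("Similar point estimates, very different uncertainty")
plt.show()
```

Without the yerr bars, the two bars would look interchangeable; with them, the difference in confidence is the first thing you see.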
This post would benefit from an analysis of the relative likelihood of biorisk versus malevolent AI risk.
How do EAs in SF think about local civic action and altruism? SF seems a priori like a place with A) a lot of EAs and B) lots of local problems. Here's a good Atlantic article, worth reading in full, on the problems of SF: theatlantic.com/ideas/archive/2022/06/how-san-francisco-became-failed-city/661199/
And for reference here's a post I penned recently in response to the call for EA critiques that emphasizes the importance of local as well as global altruistic action: https://forum.effectivealtruism.org/posts/LnuuN7zuBSZvEo845/why-the-ea-aversion-to-local-altruistic-action
Tbh the whole piece is my go-to for skepticism about AI. In particular, the analogy with alchemy seems apropos, given that concepts like sentience are very ill-posed.
What would you say are good places to get up to speed on what we've learned about AI risk and the alignment problem in the past 8 years? Thanks much!
Nice post! Thanks for sharing.
Strong EA "doing the most good", which has risks of slipping to "at any cost" and thus totalitarianism as you say, perhaps should be called "optimized altruism."
Thanks for sharing. I'd emphasize that good sleep is underrated!
How do you create a handy sidebar table of contents on a post?
There's also very much a generational issue between these established organizations and EA, which judging by the annual surveys skews toward a younger set.
I could see two potential prongs:
- doing a thorough study and survey of these organizations to more formally surface lessons learned for EA (maybe worth the time / investment, maybe not)
- partnering strategically with a subset of these more established orgs. Rotary makes a lot of sense intuitively
Ah I figured it out! There are sequences! This is amazing! Thank you for this. I'm going to experiment to see if that gets the feature set I'm looking for: https://forum.effectivealtruism.org/library
Again I have to say: this forum is really well done!
Is there a way to do blog chains on the forum? I'm thinking of something similar to the format on Ribbonfarm.com
It seems like there should be a way to get a graph of comments on front page posts over time going? :)
Yeah, I share your suspicion. Reading through the institutional decision making topic, most if not all of the writing seems to be basically applying LessWrong-style rationality principles to decision making. There isn't any real, say, structural analysis. For example, in SoCal where I live, there are precisely a zillion local municipalities, a bunch of Balkanized fiefdoms that often work at cross purposes. The challenge isn't a lack of quality information and decision heuristics. It's the reality that there's a panoply of veto points and a Rube Goldberg-esque system that makes it impossibly difficult to get things done. Vitalik had a nice piece on the underlying issues with vetocracy that's worth a read.
Hi, is there a way to get stats on EA membership and activity by location? I can't seem to piece that together from the individual local chapter pages (which might be for the best, since those would be a pain to scrape one by one). Ideally there'd be a simple table with chapter location and number of members (total is fine; ideally with a subcategory for a common definition of active members). Anyone know where one might find such a thing?
What's small? 1%? 10%? Do you have a sense of how typical your beliefs are in the EA community? I'd be very curious to have this type of question included in a future EA annual survey. It seems the last one was done in 2020, which means it's perhaps timely for another?
How do you practice charity beginning at home? Do any EA folks give a set percentage of their giving locally? Has anyone seen statistics on typical breakdowns? Is the EA-recommended allocation 100% to the globally highest-impact charities? An EA member passed along this GiveWell post. It seems very intuitive to me that getting your own life, household, and community in order is a good thing. It also seems like the more you get your immediate life and those in it in order, the more you can support people in need further away.

Thanks for sharing! I haven't had a chance to dive super deep, though the piece on improving institutions was very theoretical. Here's the direct link to the comment I left on that post, in case you're at all curious.
Also, a random tech-support-ish q that I'm not sure where to put: how does one do something like the "improving institutions" tag with the little circle, so that all the forum posts under that topic pop up? And a meta q: where should one direct these types of getting-started tech support questions? Thanks!
Lastly, regarding the articles you sent: how does "practicing compassion and generosity with those around us" get operationalized in the EA community?
