Posts

Fifteen theses about the near future of government operations 2022-08-11T22:00:30.405Z
An open letter to my great grand kids' great grand kids 2022-08-10T15:07:37.924Z
The (California) case for State Capacity as an EA cause area 2022-08-04T03:22:24.774Z
Locke's Shortform 2022-07-21T03:05:26.898Z
How might we rigorously measure the outputs of public investment by default? 2022-06-14T16:05:19.236Z
Why the EA aversion to local altruistic action? 2022-06-10T23:01:43.615Z

Comments

Comment by Locke on Introduction to Non-EA International Charity Orgs. · 2022-08-11T22:31:18.690Z · EA · GW

Great! I'd be very curious to see the follow-up investigation when that's available.

Comment by Locke on [Cause Exploration Prizes] Software Systems for Collective Intelligence · 2022-08-11T22:02:32.722Z · EA · GW

I broadly agree that there's ample opportunity to build more digitally native public institutions and to improve government operations. Here is a post on some of the potential that I see there. Here is another looking at improving state capacity more broadly.


It seems like the Ethereum Foundation's foray into a DAO would be worth exploring as a case study here. I like the idea of experimenting with novel approaches to governance with some subset of EA funding. I'm not sure, though, that this idea is fully fleshed out; it might benefit from further development before convening a council to test it.

Comment by Locke on Criticism of EA Criticism Contest · 2022-08-11T15:28:01.545Z · EA · GW

Ah yeah you cracked the code :) 

Comment by Locke on Open Thread: June — September 2022 · 2022-08-10T18:41:59.230Z · EA · GW

Are any of the more philosophically inclined EAs into the pragmatic school? There's obviously a (very!) strong consequentialist strain here, though I could see engaging with pragmatists like Dewey, Rorty, and others being super helpful to this movement.

Comment by Locke on Freddie deBoer: Effective Altruism Has a Novelty Problem · 2022-08-10T18:18:39.886Z · EA · GW

Much of this is amazing and deserves wider engagement within EA. Thanks for sharing. I LOLed at this bit: 

"You can read ungodly reams of essays defining effective altruism - which makes me wonder if the people who wrote them think that they are creating the greatest possible utility by using their time that way "

Comment by Locke on On Deference and Yudkowsky's AI Risk Estimates · 2022-08-10T18:12:39.359Z · EA · GW

n00b q: What's AF? 

Comment by Locke on EA 1.0 and EA 2.0; highlighting critical changes in EA's evolution · 2022-08-10T15:47:40.344Z · EA · GW

Thanks for the post. Is there a comprehensive repository of EA involvement in politics? Thinking of something similar to Open Philanthropy's grant database.

Comment by Locke on AMA: Ought · 2022-08-10T15:45:13.717Z · EA · GW

What's the universe of studies that Elicit is searching through? I ran a couple of queries in domains I'm familiar with, and it missed a few studies that I would have expected it to pick up. It would be helpful to give users some sense of what's being searched so they know where else to look.

Comment by Locke on I’m Offering Free Engineering and Consultation for EA Aligned Projects · 2022-08-10T15:42:42.826Z · EA · GW

Is helping out with theCaDC.org something that you'd be interested in? Many of the analytics are open source and on GitHub.

Comment by Locke on AMA: Ought · 2022-08-10T15:20:38.544Z · EA · GW

What's up with AI alignment? That keeps coming up and I really don't get it. I would really love to see a point-by-point engagement with this piece, "Superintelligence: The Idea That Eats Smart People": https://idlewords.com/talks/superintelligence.htm

Comment by Locke on AMA: Ought · 2022-08-10T15:19:31.441Z · EA · GW

How do you validate Elicit's conclusions shown in the tool? Some of the data pulled from papers seems like it could easily be unknowingly wrong. For instance, consider the column on the number of participants in each paper. Do you have a protocol for manually verifying those values?

Comment by Locke on Open Thread: June — September 2022 · 2022-08-10T03:19:25.810Z · EA · GW

Have any prominent EA philosophers engaged with critiques of consequentialism? Are there canonical examples that people might point to? I don't really have a desire to write a several-thousand-word essay, though I would be curious what more philosophically inclined EAs think of works like "Beyond Consequentialism" by my old prof Paul Hurley. (https://www.amazon.com/Beyond-Consequentialism-Paul-Hurley/dp/0199698430)

Comment by Locke on Data Science for Effective Good + Call for Projects + Call for Volunteers · 2022-08-10T03:12:42.501Z · EA · GW

How do you see what you're doing as different from DataKind or Bayes Impact? Why a new standalone org rather than a chapter in those existing networks? 

Comment by Locke on The (California) case for State Capacity as an EA cause area · 2022-08-08T01:51:09.480Z · EA · GW

Here's a Sacramento Bee article from a few years back about the water data wars: https://www.sacbee.com/opinion/california-forum/article182279056.html. I'm happy to share some documents from theCaDC.org showing the growth and progress of the initiative.

Comment by Locke on The (California) case for State Capacity as an EA cause area · 2022-08-08T01:47:02.914Z · EA · GW

Hi, thanks for the note! The link you shared is 404ing, however.

Comment by Locke on EA's Culture and Thinking are Severely Limiting its Impact · 2022-08-04T03:25:17.226Z · EA · GW

I wrote a post that, at the end, calls for folks interested in this topic to connect. Maybe we can get an email list or something similar going? https://forum.effectivealtruism.org/posts/wBre7Rm6tBzKbZ86B/the-california-case-for-state-capacity-as-an-ea-cause-area

Comment by Locke on Open Thread: June — September 2022 · 2022-08-04T03:14:03.761Z · EA · GW

Would it be altruistic to walk past a person dying in the street so you can close an extra deal, make an extra buck, and send it to a faraway place to help a few more people? That seems rather cold and calculating, and not in a good way.


There definitely are people suffering to a high degree in SF, by the way. The homelessness situation is quite dire.

Comment by Locke on EA's Culture and Thinking are Severely Limiting its Impact · 2022-08-04T03:09:27.678Z · EA · GW

Yes and it's also not ideal to lust after hyper-legible solutions. Sometimes a general direction can be a useful starting point. 

Comment by Locke on Open Thread: June — September 2022 · 2022-07-29T20:39:29.215Z · EA · GW

Reflecting a bit on the EA call for criticisms, one obvious challenge to the movement is cities like San Francisco. It's a city with tons of EA-aligned folks, and yet spending a lot of philanthropic dollars to alleviate hunger in Africa or address future AGI risk while there's massive human immiseration all around the city just seems extremely dystopian. Here's a good Atlantic article on the issues in SF for those who are curious: https://www.theatlantic.com/ideas/archive/2022/06/how-san-francisco-became-failed-city/661199/

Comment by Locke on The most important climate change uncertainty · 2022-07-28T02:48:41.041Z · EA · GW

Thanks for the post. I agree broadly, and would also note that we do a bad job of thinking through the long-tail risks associated with climate change. Climate exacerbates the polycrisis (the rise of Caesarism, the challenges of mass migration at 10-100x what we see today). There are also potentially cataclysmic effects for human biology, like breaking the 98.6°F first-line homeostatic defense: https://alltrades.substack.com/p/mushrooms-global-warming-and-the?r=hynj&utm_campaign=post&utm_medium=email&utm_source=twitter

Comment by Locke on EA's Culture and Thinking are Severely Limiting its Impact · 2022-07-28T02:41:39.764Z · EA · GW

Where do EAs interested in improving institutions tend to congregate? I'd love to dive deeper into these subjects with like minds.

Comment by Locke on Impact Markets: The Annoying Details · 2022-07-27T17:40:09.501Z · EA · GW

Not a prize though this might be relevant: https://www.aqaix.com/case-studies/monterey-bay-stormwater-digital-master-plan

Comment by Locke on Why EAs are skeptical about AI Safety · 2022-07-21T16:32:38.861Z · EA · GW

Did any of these arguments change your beliefs about AGI? I'd also love to get a definition of general intelligence, since that seems like an ill-posed concept.

Comment by Locke on Impact Markets: The Annoying Details · 2022-07-21T15:56:08.715Z · EA · GW

Have you written about similar existing efforts to measure impact and pay for exactly that impact, through SIBs, pay-for-performance frameworks, and things of that nature? There are real-world examples that would be good to investigate.

Comment by Locke on Impact Markets: The Annoying Details · 2022-07-21T15:50:56.133Z · EA · GW

This discussion would benefit from an empirical analysis of social impact bonds and previous efforts to quantify and financialize social impact. My not-super-deep understanding, from talking to old-school foundation COO types, is that those examples are a very mixed bag and very difficult to measure. The market dynamics discussed here take well-functioning measurement as a given, but that is usually not the case. Consider all the debates about the impact of various education "reform" interventions in the United States, for example.

Here is the canonical first US social impact bond example: https://hbsp.harvard.edu/product/UV7311-PDF-ENG

Comment by Locke on Locke's Shortform · 2022-07-21T03:05:27.091Z · EA · GW

Why isn't there more EA focus on state capacity and improving government operations? The one mention of "government" on the EA topics page is "impactful government careers." Furthermore, the recently shared "big list of cause candidates" only mentions government in the context of survivalist bunkers and AGI. This seems... a bit narrow.

If we really want to uplift the human condition, we'd do well to remember that philanthropy as a whole is an absolute drop in the bucket relative to government investment. It's literally a one percent solution. And Open Philanthropy is a small piece of that. This seems like something that would benefit from a focused effort, and an area that is rather obviously struggling.

See the many challenges with the Covid-19 response in the US, Europe, and other "Western" states for lack of a better term; the difficulty of building much of anything in those places; and the general public-institutional malaise. Reforming and improving state capacity seems like a first-order challenge of the day. It would also be a massive force multiplier, whereby a dollar spent on changing government operations could have several orders of magnitude greater impact. The "state capacity" tag on the blog Marginal Revolution has a lot of great readings for those new to the topic.

#causeprioritization #ea_critiques 

EDIT: note I believe this should say billions of dollars rather than millions. 

Comment by Locke on Big List of Cause Candidates · 2022-07-21T02:56:41.753Z · EA · GW

Transhumanism would benefit from also exploring human augmentation. Thanks for the list. 

Comment by Locke on Open Thread: June — September 2022 · 2022-07-21T01:53:48.639Z · EA · GW

Has anyone fully dug into the AI credulity risk? By that I mean the XR scenario where a technologist A) believes they've invented AGI and B) acts on that belief in Ozymandian fashion. Note that A does not require actually inventing AGI; it just requires that, like the recent Google employee, they believe it to be so. And note that B does not require actual evil intent; often, in fact, evil is done out of a desire to do good. I believe this is a nontrivial risk, compounded by the availability of powerful technologies: the potential future upside of AGI could blind even well-intentioned persons into doing horrific deeds in the service of the brave new world. The logic is as simple as it is elegant and horrifying: what is a finite number of present human lives against an infinity of lifetimes in the beautiful garden of a well-aligned AGI? Would not the monster be the person who doesn't do what's necessary to ensure that future? What wouldn't be justified to create such a utopia?

Comment by Locke on Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover · 2022-07-21T01:47:30.641Z · EA · GW

What if instead we replaced "world could plausibly develop “transformative AI”" with "someone invents the philosopher's stone" and did a similar analysis of the implications? That'd be a change of pace.

Comment by Locke on Criticism of EA Criticism Contest · 2022-07-21T01:10:24.021Z · EA · GW

Sometimes I wonder how much brain space we really need to devote to arguing how many angels can dance on the head of the proverbial pin. This discussion reminds me of my eponymous favorite cartoon.

Comment by Locke on Help me find the crux between EA/XR and Progress Studies · 2022-07-19T22:03:10.526Z · EA · GW

To be honest, this parsing of these two communities that have a ton in common reminds me of this great scene from Monty Python's Life of Brian:

Comment by Locke on How might we rigorously measure the outputs of public investment by default? · 2022-07-13T05:05:04.639Z · EA · GW

Thanks! The challenge with these types of benefit-cost analyses is that willingness to pay can often correlate with ability to pay. There can also be categorical-imperative-type issues that trump these logics. I do think, though, that it's a super useful initiative. Better to have progressively less wrong measures and use the best available evidence than to shoot from the hip. Cheers!

Comment by Locke on My Most Likely Reason to Die Young is AI X-Risk · 2022-07-06T16:34:31.659Z · EA · GW

Re: 2, the right way to compare high- and low-confidence numbers is to add error bounds. This chart does not do that.
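The point about error bounds can be sketched numerically. This is a minimal illustration only; the figures below are invented for the sake of the example, not taken from the chart in question:

```python
def normal_ci(mean, stderr, z=1.96):
    """Return a ~95% confidence interval under a normal approximation."""
    return (mean - z * stderr, mean + z * stderr)

def intervals_overlap(a, b):
    """True if two (low, high) intervals intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

# A high-confidence estimate (small stderr) vs. a low-confidence one (large stderr).
high_conf = normal_ci(0.10, 0.01)   # hypothetical actuarial-style figure
low_conf = normal_ci(0.15, 0.10)    # hypothetical subjective x-risk guess

# The point estimates differ, but the intervals overlap, so bare numbers
# on a chart would overstate how much we actually know.
print(intervals_overlap(high_conf, low_conf))  # → True
```

A chart that plots only the point estimates hides exactly this: the low-confidence number's interval can swallow the high-confidence one entirely.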

Comment by Locke on People in bunkers, "sardines" and why biorisks may be overrated as a global priority · 2022-06-20T22:34:02.903Z · EA · GW

This post would benefit from an analysis of the relative likelihood of a biorisk and malevolent AI risk. 

Comment by Locke on Open Thread: Spring 2022 · 2022-06-20T21:11:15.191Z · EA · GW

How do EAs in SF think about local civic action and altruism? SF seems, a priori, like a place with A) a lot of EAs and B) LOTS of local problems. Here's a good Atlantic article that's worth reading in full on the problems of SF: theatlantic.com/ideas/archive/2022/06/how-san-francisco-became-failed-city/661199/


And for reference here's a post I penned recently in response to the call for EA critiques that emphasizes the importance of local as well as global altruistic action: https://forum.effectivealtruism.org/posts/LnuuN7zuBSZvEo845/why-the-ea-aversion-to-local-altruistic-action

Comment by Locke on Open Thread: Spring 2022 · 2022-06-20T20:56:13.471Z · EA · GW

Tbh the whole piece is my go-to for skepticism about AI. In particular, the analogy with alchemy seems apropos, given that concepts like sentience are very ill-posed.


What would you say are good places to get up to speed on what we've learned about AI risk and the alignment problem in the past 8 years? Thanks much! 

Comment by Locke on Blake Richards on Why he is Skeptical of Existential Risk from AI · 2022-06-14T20:58:24.455Z · EA · GW

Nice post! Thanks for sharing. 

Comment by Locke on The totalitarian implications of Effective Altruism · 2022-06-14T18:53:55.025Z · EA · GW

Strong EA ("doing the most good"), which as you say risks slipping into "at any cost" and thus totalitarianism, should perhaps be called "optimized altruism" instead.

Comment by Locke on What Is Most Important For Your Productivity? · 2022-06-14T18:52:30.955Z · EA · GW

Thanks for sharing. I'd emphasize that good sleep is underrated! 

Comment by Locke on Forum user manual · 2022-06-14T16:30:45.646Z · EA · GW

How do you create a handy sidebar table of contents on a post?

Comment by Locke on Introduction to Non-EA International Charity Orgs. · 2022-06-14T16:27:28.017Z · EA · GW

There's also very much a generational issue between these established organizations and EA, which skews toward a younger set judging by the annual surveys.

I could see two potential prongs:

  1. Doing a thorough study and survey of these organizations to surface lessons learned for EA more formally (maybe worth the time/investment, maybe not).
  2. Partnering strategically with a subset of these more established orgs; Rotary makes a lot of sense intuitively.

Comment by Locke on How to use the Forum (table of contents) · 2022-06-14T16:21:32.530Z · EA · GW

Ah I figured it out! There are sequences! This is amazing! Thank you for this. I'm going to experiment to see if that gets the feature set I'm looking for: https://forum.effectivealtruism.org/library

Again I have to say: this forum is really well done! 

Comment by Locke on How to use the Forum (table of contents) · 2022-06-14T16:19:02.443Z · EA · GW

Is there a way to do blog chains on the forum?  I'm thinking of something similar to the format on Ribbonfarm.com

Comment by Locke on Stephen Clare's Shortform · 2022-06-14T15:47:29.043Z · EA · GW

It seems like there should be a way to get a graph of comments on front page posts over time going? :) 

Comment by Locke on Why the EA aversion to local altruistic action? · 2022-06-14T02:13:05.993Z · EA · GW

Yeah, I share your suspicion. Reading through the institutional decision making topic, most if not all of the writing seems to be basically applying LessWrong-style rationality principles to decision making. There isn't any real structural analysis, say. For example, in SoCal where I live, there are precisely a zillion local municipalities, a bunch of Balkanized fiefdoms that often work at cross purposes. The challenge isn't a lack of quality information and decision heuristics; it's the reality that there's a panoply of veto points and a Rube Goldberg-esque system that makes it impossibly difficult to get things done. Vitalik had a nice piece on the underlying issues with vetocracy that's worth a read.

Comment by Locke on Open Thread: Spring 2022 · 2022-06-11T23:22:36.976Z · EA · GW

Hi, is there a way to get stats on EA membership and activity by location? I can't seem to piece that together from the individual local chapter pages (which might be for the best, since scraping them one by one would be a pain). Ideally there'd be a simple table with chapter location and number of members (total is fine; ideally with a subcategory for a common definition of active members). Does anyone know where one might find such a thing?

Comment by Locke on Open Thread: Spring 2022 · 2022-06-11T23:20:24.819Z · EA · GW

What's small? 1%? 10%? Do you have a sense of how typical your beliefs are in the EA community? I'd be very curious to have this type of question included in a future EA annual survey. It seems the last one was done in 2020, which means perhaps it's timely for another?

Comment by Locke on Open Thread: Spring 2022 · 2022-06-11T02:20:57.880Z · EA · GW

How do you practice charity beginning at home? Do any EA folks give a set percentage of their giving locally? Has anyone seen statistics on typical breakdowns? Is the EA-recommended giving percentage 100% to the globally highest-impact charities? An EA member passed along this GiveWell post. It seems very intuitive to me that getting your own life, household, and community in order is a good thing. It also seems like the more you get your immediate life, and those in it, in order, the more you can support people in need further away.


Comment by Locke on Why the EA aversion to local altruistic action? · 2022-06-11T02:15:13.356Z · EA · GW

Thanks for sharing! I haven't had a chance to dive super deep, though the piece on improving institutions was very theoretical. Here's the direct link to the comment I left on that post, in case you're at all curious.

Also, a random tech-support-ish question that I'm not sure where to put: how does one do something like the "improving institutions" tag with the little circle, so that all the forum posts under that topic pop up? And a meta question: where does one direct these types of getting-started tech support questions? Thanks!

Lastly, regarding the articles you sent: how does "practicing compassion and generosity with those around us" get operationalized in the EA community?
