Posts

Who is Uncomfortable Critiquing Who, Around EA? 2023-02-24T05:55:54.731Z
Select Challenges with Criticism & Evaluation Around EA 2023-02-10T23:36:34.963Z
Eli Lifland on Navigating the AI Alignment Landscape 2023-02-01T00:07:48.051Z
What improvements should be made to improve EA discussion on heated topics? 2023-01-16T20:11:23.545Z
EA could use better internal communications infrastructure 2023-01-12T01:07:52.872Z
14 Ways ML Could Improve Informative Video 2023-01-10T13:53:39.495Z
Why does Academia+EA produce so few online videos? 2023-01-10T13:49:16.271Z
Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism 2023-01-06T22:59:28.475Z
$1,000 Squiggle Experimentation Challenge 2022-08-04T14:20:33.844Z
Announcing Squiggle: Early Access 2022-08-03T00:23:33.276Z
EA/Rationalist Safety Nets: Promising, but Arduous 2021-12-29T18:41:53.836Z
13 Very Different Stances on AGI 2021-12-27T23:30:30.586Z
Can/should we automate most human decisions, pre-AGI? 2021-12-26T01:37:35.765Z
Why don't governments seem to mind that companies are explicitly trying to make AGIs? 2021-12-23T07:08:02.309Z
Flimsy Pet Theories, Enormous Initiatives 2021-12-09T15:10:08.279Z
The "feeling of meaning" vs. "objective meaning" 2021-12-05T01:51:11.182Z
Opportunity Costs of Technical Talent: Intuition and (Simple) Implications 2021-11-19T15:04:05.217Z
Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits 2021-11-17T18:12:18.005Z
Improve delegation abilities today, delegate heavily tomorrow 2021-11-11T21:52:18.782Z
Disagreeables and Assessors: Two Intellectual Archetypes 2021-11-05T09:01:58.207Z
Prioritization Research for Advancing Wisdom and Intelligence 2021-10-18T22:22:32.492Z
Contribution-Adjusted Utility Maximization Funds: An Early Proposal 2021-08-03T23:01:58.012Z
Introducing Metaforecast: A Forecast Aggregator and Search Tool 2021-03-07T19:03:44.627Z
Forecasting Prize Results 2021-02-19T19:07:11.379Z
Announcing the Forecasting Innovation Prize 2020-11-15T21:21:52.151Z
Open Communication in the Days of Malicious Online Actors 2020-10-06T23:57:35.529Z
Ozzie Gooen's Shortform 2020-09-22T19:17:54.175Z
Expansive translations: considerations and possibilities 2020-09-18T21:38:42.357Z
How to estimate the EV of general intellectual progress 2020-01-27T10:21:11.076Z
What are words, phrases, or topics that you think most EAs don't know about but should? 2020-01-21T20:15:07.312Z
Best units for comparing personal interventions? 2020-01-13T08:53:12.863Z
Predictably Predictable Futures Talk: Using Expected Loss & Prediction Innovation for Long Term Benefits 2020-01-08T22:19:32.155Z
[Part 1] Amplifying generalist research via forecasting – models of impact and challenges 2019-12-19T18:16:04.299Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T16:36:10.564Z
Introducing Foretold.io: A New Open-Source Prediction Registry 2019-10-16T14:47:20.752Z
What types of organizations would be ideal to be distributing funding for EA? (Fellowships, Organizations, etc) 2019-08-04T20:38:10.413Z
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:23.576Z
What new EA project or org would you like to see created in the next 3 years? 2019-06-11T20:56:42.687Z
Impact Prizes as an alternative to Certificates of Impact 2019-02-20T21:25:46.305Z
Discussion: What are good legal entity structures for new EA groups? 2018-12-18T00:33:16.620Z
Current AI Safety Roles for Software Engineers 2018-11-09T21:00:23.318Z
Prediction-Augmented Evaluation Systems 2018-11-09T11:43:06.088Z
Emotion Inclusive Altruism vs. Emotion Exclusive Altruism 2016-12-21T01:40:45.222Z
Ideas for Future Effective Altruism Conferences: Open Thread 2016-08-13T02:59:02.685Z
Guesstimate: An app for making decisions with confidence (intervals) 2015-12-30T17:30:55.414Z
Is there a hedonistic utilitarian case for Cryonics? (Discuss) 2015-08-27T17:50:36.180Z
EA Assembly & Call for Speakers 2015-08-18T20:55:13.854Z
Deep Dive with Matthew Gentzel on Recently Effective Altruism Policy Analytics 2015-07-20T06:17:48.890Z
The first .impact Workathon 2015-07-09T07:38:12.143Z
FAI Research Constraints and AGI Side Effects 2015-06-07T20:50:21.908Z

Comments

Comment by Ozzie Gooen (oagr) on Nick Bostrom should step down as Director of FHI · 2023-03-07T19:40:20.957Z · EA · GW

I worked at FHI as a research scholar from 2018-2020. At that time I didn't hear anyone saying that Bostrom should step down (and I definitely didn't think he should). 

To be clear, it has been obvious to everyone that FHI has had severe operations/logistical issues. However, it's much less clear if or how FHI would function without Nick Bostrom. 

I'm pretty nervous about rumors in situations like this.

If I were in charge of making any decision here, I'd send out surveys and have a bunch of conversations. 

Comment by Ozzie Gooen (oagr) on Bad Actors are not the Main Issue in EA Governance · 2023-03-02T03:50:30.404Z · EA · GW

I think good startups often do this, but lots of startups have trouble around this stage. Many do have their own cultures that are difficult to retain as they grow.

I think EA is more intense as there's more required material to understand, but it's a similar idea.

Comment by Ozzie Gooen (oagr) on Who is Uncomfortable Critiquing Who, Around EA? · 2023-02-27T17:58:30.991Z · EA · GW

I agree that management doesn't get much benefit from giving valuable public negative feedback to people. However, I'd push back on the idea that management can "just fire" people they don't like.

Many managers are middle managers. They likely have a lot of gripes with their teams, but they need to work with someone, and often, it would be incredibly awkward or controversial to fire a lot of people.  

Comment by Ozzie Gooen (oagr) on Who is Uncomfortable Critiquing Who, Around EA? · 2023-02-24T21:48:49.212Z · EA · GW

Thanks! I tried going a bit more into detail in point 2 on the previous post.

https://forum.effectivealtruism.org/posts/TfqmoroYCrNh2s2TF/select-challenges-with-criticism-and-evaluation-around-ea#2__Not_Criticizing_Leads_to_Distrust

Comment by Ozzie Gooen (oagr) on Who is Uncomfortable Critiquing Who, Around EA? · 2023-02-24T21:47:05.588Z · EA · GW

Thanks for the point. I also had someone else make a similar comment on the draft; I should have expected others to raise it as well.

Comment by Ozzie Gooen (oagr) on Who is Uncomfortable Critiquing Who, Around EA? · 2023-02-24T18:52:59.599Z · EA · GW

Good point. I was trying to keep this post focused on one specific bottleneck of criticism, I definitely agree there are others too. 

I added the following text, to clarify:

To be clear, there are many bottlenecks between "someone is in a place to come up with a valuable critique" and "different  decisions actually get made."  This process is costly and precarious at each step. For instance, decision makers think in very different ways than critics realize, so it's easy for critics to waste a lot of time writing to them.

This post just focuses on the challenges that come from things being uncomfortable to say. Going through the entire pipeline would require far more words.

Comment by Ozzie Gooen (oagr) on "EA is very open to some kinds of critique and very not open to others" and "Why do critical EAs have to use pseudonyms?" · 2023-02-24T18:34:11.018Z · EA · GW

I appreciate the thought, but personally really don't see this as a mistake on ConcernedEAs' part.

I actually pushed that post back a few days so that it wouldn't conflict with Owen's, trying to catch some narrow window when there aren't any new scandals. (I'm trying not to overlap closely with scandals, mainly so it doesn't seem like I'm directly addressing any scandal, and to not seem disrespectful.)

I think if we all tried timing posts to be after both scandals and related posts, then we'd develop a long backlog of posts that would be annoying to manage. 

I'm happy promoting norms where it's totally fine to post things sooner rather than later.

It's a bit frustrating that the Community frontpage section just shows 3 items, but that's not any of our faults. (And I get why the EA Forum team did this)

Comment by Ozzie Gooen (oagr) on Bad Actors are not the Main Issue in EA Governance · 2023-02-22T16:29:09.893Z · EA · GW

I defer a lot to experts / well respected managers.

To me, EA has a bunch of young people optimized a lot for some specific non-management talents. It seems a lot like a startup in that way.

Many startups go through "growing up" periods. Some totally fail at this, but when it works well, the outcome can be very successful.

I imagine as we get good consultants here, they will make some fairly straightforward and correlated recommendations that I'd agree with.

I found the Personal MBA reading list to be interesting. There are really a lot of "serious organization" skills that are hard to get good at. 
https://personalmba.com/best-business-books/
 

Comment by Ozzie Gooen (oagr) on A Modest Proposal: Fixing the Polyamory Problem · 2023-02-22T16:03:55.393Z · EA · GW

This seems a lot like satire to me; the title definitely implies that.
https://en.wikipedia.org/wiki/A_Modest_Proposal

Comment by Ozzie Gooen (oagr) on Bad Actors are not the Main Issue in EA Governance · 2023-02-22T04:17:07.051Z · EA · GW

Happy to see this, thanks for putting it together.

For what it's worth, I roughly agree with a lot of this. I personally see EA challenges now very much as "maturing in management, generally" as opposed to anything very specific, like, "stopping a few bad actors". 

I expect that many senior people would roughly agree.

Comment by Ozzie Gooen (oagr) on A statement and an apology · 2023-02-21T17:19:39.758Z · EA · GW

> I believe anyone who pitches people to participate in circling with others who are pretty much strangers to them (and not super-carefully-vetted) and applies implicit peer pressure and doesn't warn them that this sort of thing can be psychologically risky and unsafe, is either dangerously clueless or a bad actor.

For what it's worth, I live in the Bay Area, where there are large spirituality communities and surprisingly related "professional development" communities. These practices seem surprisingly normal in these communities.

I think that the leaders of these groups are typically very overconfident in their approaches, a bit desperate to sell them, and not very epistemically sophisticated, so they very rarely give adequate warnings and help.

Comment by Ozzie Gooen (oagr) on A statement and an apology · 2023-02-21T17:14:39.221Z · EA · GW

This seems really unrelated to Owen, but because I saw this, I'd flag I also went to a circling retreat in Oxford around that time; it might have been the same one.

I found it to be personally fairly uninteresting, and got weird vibes from the instructor. In a discussion that Friday (the first day), he mentioned a lot of metamodernism stuff, including a lot of stuff by Ken Wilber. It had spirituality vibes similar to what I know of some communities in the Bay.

I did some online searching that evening, and found some reports of  sexual harassment and similar around the upper parts of Circling Europe. 

My general impression is something like, "Issues of sexual harassment and similar are just endemic in alternative communities." 

I know lots of other people I respect have gotten valuable things from circling and similar retreats. I've also done a bit of circling without the official mediators and found it to be mostly fine.

I only attended the first day, and decided not to join for the next two. (That said, in fairness, I find incredibly few activities better than my best non-retreat activities, so this itself isn't saying much.)

At the one I was at, maybe 20% of the group seemed like they were EAs; I don't remember specifically.

Comment by Ozzie Gooen (oagr) on Select Challenges with Criticism & Evaluation Around EA · 2023-02-19T05:17:07.981Z · EA · GW

After thinking about it more, I decided that I was wrong, and changed it accordingly. 

Thanks for the comment!

Comment by Ozzie Gooen (oagr) on How good/bad is the new Bing AI for the world? · 2023-02-18T04:35:40.323Z · EA · GW

Related to using the Virtue of Discernment:
https://www.lesswrong.com/posts/W2iwHXF9iBg4kmyq6/the-practice-and-virtue-of-discernment

Comment by Ozzie Gooen (oagr) on How good/bad is the new Bing AI for the world? · 2023-02-18T04:34:47.834Z · EA · GW

Instead of asking, "Is it net good or net bad", I think it's much more interesting to catalogue and understand all the ways it's both good and bad. 

Some negative takeaways:

  • OpenAI & Microsoft are bullish on releasing risky technologies quickly.
  • The market seems to encourage this behavior.
  • Google seems like it's been encouraged to do similar work, faster.
  • Likely to inspire more people to invest in this sort of thing and make companies in the space.

Good things (as you mention):

  • Really good for failures to happen publicly
  • Might be indicative of a slow takeoff. My hunch is that we generally want as much AI progress to happen as possible before any hard takeoff, though I'd prefer it all to happen more slowly than quickly.

Comment by Ozzie Gooen (oagr) on Select Challenges with Criticism & Evaluation Around EA · 2023-02-17T18:40:12.495Z · EA · GW

Maybe; I'm not sure. I think the "good communicator" line is decent, but it seems very possible the "bad communicator" one should be the other way around.

Comment by Ozzie Gooen (oagr) on New EA Podcast: Critiques of EA · 2023-02-14T00:50:48.618Z · EA · GW

+1 for clarification. It could be neat if you could use a standard diagram to pinpoint what sort of criticism each one is. 

For example, see this one from Astral Codex Ten. 

Comment by Ozzie Gooen (oagr) on New EA Podcast: Critiques of EA · 2023-02-14T00:45:41.680Z · EA · GW

After listening to the rest of that post with James, I'll flag that while I agree that "EA is a lot like what many would call an ideology", I disagree with some of the content in the second half.

I think using tools like ethnography, agent-based modeling, and phenomenology could be neat, but to me, they're pretty low-priority among improvements to EA right now. I'd imagine it could take some serious effort on any of them ($200k? $300k? Someone strong would have to come along with a proposal first) to produce something that really changes decision making, and I can think of other things I'd prefer that money be spent on.

There seems to be some assumption that the reason such actions weren't taken by EA is that EAs weren't at all familiar with them and didn't read James' post. I think that often a more likely reason is just that it's a lot of work to do things, we have limited resources, and we have a lot of other really important initiatives to do. Often decision makers have a decent sense of a lot of potential actions, and have decided against them for decent reasons.

Similarly, I don't feel like the argument brought forth against the use of the word "aligned" when discussing a person was very useful. In that case I would have liked you to try to really pin down what a good solution would look like. I think it's really easy to err on the side of "overfit on specific background beliefs" or "underfit on specific background beliefs", and tricky to strike a balance.

My impression is that critics of "EA Orthodoxy" basically always have some orthodoxy  of their own. For example, I imagine few would say we should openly welcome Nazi sympathizers, as an extreme example. If they really have no orthodoxy, and are okay with absolutely any position, I'd find this itself an extreme and unusual position that almost all listeners would disagree with. 

Comment by Ozzie Gooen (oagr) on New EA Podcast: Critiques of EA · 2023-02-13T23:27:10.366Z · EA · GW

Some quick thoughts, poorly structured:

  • I like seeing more attempts at understanding “EA Critiques” / ways of improving EA.
  • I think the timing that this is being released is inconvenient, but I don’t blame you.
  • Personally, I feel exhausted by the last few months of what felt like a firestorm of angry criticism. Much of it, mainly from the media and Twitter, feels like it was very antagonistic and in poor taste. At the same time, I think our movement has a whole lot of improvement to do.
  • As with all critiques, I am emotionally nervous about it being used as "cheap ammunition" for groups that just want to hate on EA.
  • Personally, I very much side with James already on the Ideology question. I think Helen's post was pretty bad. I'm not sure how much Helen's post represents "core EA understanding", and as such, the attack on it feels a bit less like "EA criticism" than "regular forum content". However, this might well be nitpicking. I listened to around half of this so far and found it reasonable (as expected, as I also agreed with the blog post).
  • I think issues around critique can still be really valuable. But I also think they (unfortunately) need to be handled more carefully than some other stuff we do. I’ll see about writing more about this later.
  • My guess is that 70%+ of critiques are pretty bad (as is the case for most fields). I’d likewise be curious about your ability to push back on the bad stuff, or maybe better, to draw out information to highlight potential issues. Frustratingly though, I imagine people will join your podcast and share things in inverse proportion to how much you call them out. (This is a big challenge podcasts have)
  • I suggest monitoring Twitter. If people do take parts of your podcast out of context and do bad things with them, keep an eye out and try to clarify things.
  • Good luck!

Comment by Ozzie Gooen (oagr) on There can be highly neglected solutions to less-neglected problems · 2023-02-10T23:38:43.564Z · EA · GW

I agree with the thrust of this. To me, much of the issue has to do with the coarseness of the ontology of interventions that we use.

Things like Discernment can help break this down.

Comment by Ozzie Gooen (oagr) on Shallow investigation: Loneliness · 2023-02-08T03:58:39.992Z · EA · GW

By chance have you considered/investigated AI friends? 

My impression is that they could be a really big deal. Possibly really net-bad, possibly really net-good.

https://www.facebook.com/ozzie.gooen/posts/pfbid0mRoPn5o3hdEAjzryhZQ7sTAR6EXLwTjhPPrGUrVHft65u75WmgmisiGVo3qtwNPCl

Comment by Ozzie Gooen (oagr) on Eli Lifland on Navigating the AI Alignment Landscape · 2023-02-05T22:09:56.233Z · EA · GW

There are audio versions in the Substack. I can see about adding them to the EA Forum more directly in the future.
https://quri.substack.com/p/eli-lifland-on-navigating-the-ai-722

Comment by Ozzie Gooen (oagr) on We're no longer "pausing most new longtermist funding commitments" · 2023-02-05T22:07:31.776Z · EA · GW

Similar. I think I'm happy for QURI to be listed if it's deemed useful.

Also though, I think that sharing information is generally a good thing, this type included. 

More transparency here seems pretty good to me. That said, I get that some people really hate public rankings, especially in the early stages of them. 

Comment by Ozzie Gooen (oagr) on Forecasting Our World in Data: The Next 100 Years · 2023-02-02T03:09:41.675Z · EA · GW

This is very interesting, really happy to see this. As normal, I think it's good to take these with a big grain of salt - but I'm happy to get any halfway-reasonable attempt at a starting point.

One big issue here is that the boundaries are for the 25th/50th/75th percentiles. I would have expected many of these extrapolations to get much wilder (either doom or utopia), but maybe much of that is outside these percentiles.

Even then though, I imagine many readers around here might give >25% odds to at least one of "discontinuous benefit or catastrophic harm", by 2122. 2122 is a really long time.

Many of the confidence bands seem to grow linearly over time, instead of exponentially or similar. This is surprising to me.

One point: I would be pretty enthusiastic about people making "meta-predictions", treating these as baselines. For instance, "In 5 years, these estimates will be revised. The difference will be less than 20%. This includes estimates in these 5 years".

That way, onlookers could make quick forecasts on "how correct this set of forecasts is", using simpler (not time-series) methods.
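
To make the kind of "meta-prediction" check I mean concrete, here's a minimal sketch in Python. The forecast names and numbers are made-up placeholders (not taken from the report), just to show how a "revised by less than 20%" claim could be resolved between two forecast vintages.

```python
# Hypothetical sketch: resolving a meta-prediction like
# "In 5 years, the revised estimate will differ from today's by <20%."
# All targets and values below are illustrative placeholders, not real forecasts.

forecasts_2023 = {"world_population_2122": 9.1e9, "energy_use_2122_twh": 4.0e5}
forecasts_2028 = {"world_population_2122": 10.4e9, "energy_use_2122_twh": 4.5e5}

for target, old_estimate in forecasts_2023.items():
    new_estimate = forecasts_2028[target]
    pct_change = abs(new_estimate - old_estimate) / old_estimate * 100
    within_20 = pct_change < 20  # the meta-prediction resolves YES if this holds
    print(f"{target}: revised by {pct_change:.1f}% -> within 20%? {within_20}")
```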

Comment by Ozzie Gooen (oagr) on Eli Lifland on Navigating the AI Alignment Landscape · 2023-02-01T22:36:20.864Z · EA · GW

> It seems like a bunch of care/preparation went into having good questions, so I think here I'd have a lot of trust in the interviewer's brief.

Just fyi - in this case, we spent some time in the beginning making a very rough outline of what would be good to talk about. Much of this is stuff Eli put forward. I've also known Eli for a while, so had a lot of context going in.

Comment by Ozzie Gooen (oagr) on We're no longer "pausing most new longtermist funding commitments" · 2023-02-01T21:46:07.363Z · EA · GW

Same for QURI (Assuming OP ever evaluates/funds QURI)

Comment by Ozzie Gooen (oagr) on Eli Lifland on Navigating the AI Alignment Landscape · 2023-02-01T19:45:42.111Z · EA · GW

For those who go through this, I'm really curious how important the transcript was. 

In terms of (marginal) work, this was something like:
- In person prep+setup: 3 hours
- Recording: 1.5 hours
- Editing: ~$300, plus 4 hours of my time
- Transcription: $140, plus around ~5 hours of our team's time.

(There was also a lot of time in me sort of messing around and learning the various pieces, but much of that could be improved later. Also, I was really aggressive on removing filler words and pauses. I think this is unusual, in part because it's resource-intensive to do well.)

I'd like to do something like, "Only do transcripts for videos that get 50 upvotes, or that we're pretty sure will get 50 upvotes", but I'm not sure. (My guess is that poor transcripts, which means almost anything that takes less than ~$200 / 3 hours of time, will barely be good enough to be useful.)

Comment by Ozzie Gooen (oagr) on Eli Lifland on Navigating the AI Alignment Landscape · 2023-02-01T19:39:04.983Z · EA · GW

Glad you liked it! 

I'll see about future videos with him.

I'll flag that if others viewing this have more suggestions, or would like to just talk publicly about their takes on things like this, do message me.

The transcripts are pretty annoying to do (the labor-intensive part that's hardest to outsource), but the rest isn't that bad.

Comment by Ozzie Gooen (oagr) on Literature review of Transformative Artificial Intelligence timelines · 2023-01-31T21:57:45.048Z · EA · GW

Yea, I assume the full version is impossible. But maybe there are at least some simpler statements that can be inferred? Like, "<10% chance of transformative AI by 2030."

I'd be really curious to get a better read on what market specialists around this area (maybe select hedge fund teams around tech disruption?) would think.

Comment by Ozzie Gooen (oagr) on Literature review of Transformative Artificial Intelligence timelines · 2023-01-30T00:58:34.794Z · EA · GW

This seems pretty neat, kudos for organizing all of this! 

I haven't read through the entire report. Is there any extrapolation based on market data or outreach? I see arguments about market actors not seeming to have close timelines as the main argument that timelines are at least 30+ years out.

Comment by Ozzie Gooen (oagr) on My highly personal skepticism braindump on existential risk from artificial intelligence. · 2023-01-26T23:29:08.144Z · EA · GW

I earlier gave some feedback on this, but more recently spent more time with it. I sent these comments to Nuno, and thought they could also be interesting to people here.

  • I think it’s pretty strong and important (as in, an important topic).
  • The first half in particular seems pretty dense. I could imagine some rewriting making it more understandable.
  • Many of the key points seem more encompassing than just AI. "Selection effects", "being in the Bay Area" / "community epistemic problems". I think I'd wish these could be presented as separate posts and then linked to here (and other places), but I get this isn't super possible.
  • I think some of the main ideas in the point above aren’t named too well. If it were me, I’d probably use the word “convenience” a lot, but I realize that’s niche now.
  • I really would like more work really figuring out what we should expect of AI in the next 20 years or so. I feel like your post was more like, “a lot of this extremist thinking seems fishy”, more than it was “here’s a model of what will happen and why”. This is fine for this post, but I’m interested in the latter.
  • I think I mentioned this earlier, but I think CFAR was pretty useful to me and a bunch of others. I think there was definitely a faction that wanted them to be much more aggressive on AI, and didn't really see the point of donating to them besides that. I think my take is that the team was pretty amateur at a lot of key organizational/management things, so did some sloppy work/strategy. That said, there was much less money then, and there wasn't a whole lot of great talent for such things. I think they were pretty overvalued by rationalists at the time, but I would consider them undervalued in terms of what EAs tend to think of them now.
  • The diagrams could be improved. At least, bold/highlight the words "for" and "against". I'm also not sure if the different-sized blocks are really important.

Comment by Ozzie Gooen (oagr) on What improvements should be made to improve EA discussion on heated topics? · 2023-01-20T22:08:34.460Z · EA · GW

There was some discussion of this here: https://forum.effectivealtruism.org/posts/jRJyjdqqtpwydcieK/ea-could-use-better-internal-communications-infrastructure

Comment by Ozzie Gooen (oagr) on Possible changes to EA, a big upvoted list · 2023-01-18T20:10:24.232Z · EA · GW

I'd recommend splitting these up into different answers, for scoring.  I imagine this community is much more interested in some of these groups than others.

Comment by Ozzie Gooen (oagr) on Doing EA Better · 2023-01-18T18:49:57.999Z · EA · GW

Small point:
> Finally, we ask that people upvote or downvote this post on the basis of whether they believe it to have made a useful contribution to the conversation, rather than whether they agree with all of our critiques.

I think this presents a false dilemma, and recommends what seems like an unusual standard that other posts probably aren't held to.

"believe it to have made a useful contribution to the conversation" -> This seems like arguably a really low bar to me. I think that many posts, even bad ones, did something useful to the conversation. 

"whether they agree with all of our critiques." -> I never agree with all of basically any post. 

I think that more fair standards of voting would be things more like:
"Do I generally agree with these arguments?"
"Do I think that this post, as a whole, is something I want community members to pay attention to, relative to other posts?"

Sadly we don't yet have separate "vote vs. agreement" markers for posts, but I think those would be really useful here.

Comment by Ozzie Gooen (oagr) on Doing EA Better · 2023-01-18T05:21:45.296Z · EA · GW

> Portions of this reform package sound to my ears like the dismantling of EA and its replacement with a new movement, Democratic Altruism ("DA")

I like the choice to distill this into a specific cluster.

I think this full post definitely portrays a very different vision of EA than what we have, and than what I think many current EAs want. It seems like some particular cluster of this community might be in one camp, in favor of this vision.

If that were the case, I would also be interested in this being experimented with, by some cluster. Maybe even make a distinct tag, "Democratic Altruism" to help organize conversation on it. People in this camp might be most encouraged to directly try some of these proposals themselves. 

I imagine there would be a lot of work to really put forward a strong idea of what a larger "Democratic Altruism" would look like, and also, there would be a lengthy debate on its strengths and weaknesses.

Right now I feel like I keep on seeing similar ideas here being argued again and again, without much organization.

(That said, I imagine any name should come from the group advocating this vision)

Comment by Ozzie Gooen (oagr) on Doing EA Better · 2023-01-18T04:01:44.827Z · EA · GW

> I imagine you'd also likely agree that these proposals tradeoff against everything else that the EA orgs could be doing, and it's not super clear any are the best option to pursue relative to other goals right now.

Of course. Very few proposals I come up with are a good idea for myself, let alone others, to really pursue. 

Comment by Ozzie Gooen (oagr) on Doing EA Better · 2023-01-18T03:53:38.878Z · EA · GW

I think there's probably a bunch of different ways to incorporate voting. Many would be bad, some good. 

Some types of things I could see being interesting:

  • Many EAs vote on "Community delegates" that have certain privileges around EA community decisions.
  • There could be certain funding groups that incorporate voting, roughly in proportion to the amounts donated. This would probably need some inside group to clear funding targets (making sure they don't have any confidential baggage/risks) before getting proposed.
  • EAs vote directly on new potential EA Forum features / changes.
  • We focus more on community polling, and EA leaders pay attention to these. This is very soft, but could still be useful.
  • EAs vote on questions for EA leaders to answer, in yearly/regular events.

Comment by Ozzie Gooen (oagr) on Doing EA Better · 2023-01-18T00:32:01.477Z · EA · GW

Thanks! I definitely agree that improvement would be really great.


If others reading this have suggestions of other community examples, that would also be appreciated!

Comment by Ozzie Gooen (oagr) on Doing EA Better · 2023-01-18T00:30:25.982Z · EA · GW

Background
First, I want to say that I really like seeing criticism that's well organized and presented like this. It's often not fun to be criticized, but the much scarier thing is for no one to care in the first place. 

This  post was clearly a great deal of work, and I'm happy to see so many points organized and cited. 

I obviously feel pretty bad about this situation, where several people felt like they had to do this in secret in order to feel safe. I think tensions around these issues feel much more heated than I'd like them to. Most of the specific points and proposals seem like things that in a slightly different world, all sides could feel much more chill discussing.

I'm personally in a weird position, where I don't feel like one of the main EAs who make decisions (outside of maybe RP), but I've been around for a while and know some of them. I did some grantmaking, and now am working on an org that tries to help figure out how to improve community epistemics (QURI).

Some Quick Impressions
I think one big division I see in discussions like this is between:

  1. What's in the best interest of EA leadership/funding, conditional on them not dramatically changing their beliefs about key things (this might be very unlikely).
  2. What's an ~irreconcilable difference of opinion (after a reasonable time of debate/investigation, say, a few days of solid reading).

Bucket 1 is more about convincing and informing each other. The way to make progress there is by deeply understanding those with power, and explaining how it helps their goals.

Bucket 2 is more about relative power. No two people are perfectly aligned, even after years of deliberation. Frustratingly, the main ways to make progress here are either to move power from some players to others, or to do things like just making power moves (taking actions that help your interests, in comparison to other stakeholders).

Right now, in EA, the vast majority of funding (and thus control) ultimately comes from one source. This is a really uncomfortable position, in many ways. 

However, other members of the community clearly have some power. They could do some nice things like write friendly posts, or some not so nice things (think of strikes) like leaking information or complaining/ranting to antagonistic journalists.

I imagine that eventually we could find better ways to do group bargaining, like some sort  of voting system (similar to what you recommend).

Back to this post: I think that parts of the way it's written remind me of the "lists of demands" that I'm used to seeing in fairly antagonistic negotiations, in the style of Bucket 2.

My guess is that this wasn't your intention. Given that it's so long (and must have involved a lot of coordination to write), I could definitely sympathize with "let's just get it out there" instead of making sure its style is optimized for Bucket 1 (if that was your intention). That said, if I were a grantmaker now, I could easily see myself putting this in my "some PR fire to deal with" bucket rather than "some useful information for me to eventually spend time with".

Comment by Ozzie Gooen (oagr) on Doing EA Better · 2023-01-18T00:02:13.057Z · EA · GW

By chance, can you suggest any communities that you think do a good job here? 

I'm curious who we could learn from.

Or is it like, "EAs are bad, but so are most communities." (This is my current guess at what I believe)

Comment by Ozzie Gooen (oagr) on Book critique of Effective Altruism · 2023-01-17T23:13:10.142Z · EA · GW

Thanks for the post, I found that interesting! 

Sorry you felt like you'd made mistakes here. We all make mistakes; I make them constantly.

I look forward to your future posts.

Comment by Ozzie Gooen (oagr) on Doing EA Better · 2023-01-17T23:09:11.325Z · EA · GW

Thanks!

Comment by Ozzie Gooen (oagr) on Doing EA Better · 2023-01-17T22:57:18.810Z · EA · GW

I think criticism is really complicated and multifaceted, and we have yet to develop nuanced takes on how it works and how to best use it. (I've been doing some thinking here).

I know that orgs do take some criticism/feedback very seriously (some get a lot of this!), and also get annoyed or ignore a lot of other criticism. (There's a lot of bad stuff, and it's hard to tell what has truth behind it).

One big challenge is that it's pretty hard to do things. Like, it's easy to suggest, "This org should do this neat project", but orgs are often very limited in what they can do at all, let alone unusual things, or things they aren't already thinking about and good at.

There's definitely more learning to do here.

Comment by Ozzie Gooen (oagr) on Doing EA Better · 2023-01-17T22:52:47.766Z · EA · GW

On Democratic Proposals - I think that more "Decision making based on democratic principles" is a good way of managing situations where power is distributed. In general, I think of democracy as "how to distribute power among a bunch of people".

I'm much less convinced about it as a straightforward tool of better decision making. 

I think things like Deliberative Democracy are interesting, but I don't feel like I've seen many successes. 

I know of very little use of these methods in startups, hedge funds, and other organizations that are generally incentivized to use the best decision making techniques.  

To be clear, I'd still be interested in more experimentation around Deliberative Democracy methods for decision quality, it's just that the area still seems very young and experimental to me.

Comment by Ozzie Gooen (oagr) on What improvements should be made to improve EA discussion on heated topics? · 2023-01-16T23:02:54.405Z · EA · GW

Prizes for commenters that do "moderator" activities. 

  • Clarifying the opinions of people.
  • Politely explaining conversational norms to difficult people.
  • Making conversation inviting and friendly.

Comment by Ozzie Gooen (oagr) on What improvements should be made to improve EA discussion on heated topics? · 2023-01-16T23:00:42.083Z · EA · GW

Thanks! I was planning on forwarding this to you, happy you saw it earlier :)

Comment by Ozzie Gooen (oagr) on What improvements should be made to improve EA discussion on heated topics? · 2023-01-16T21:15:15.202Z · EA · GW

Yea. I like the idea of more/better moderation. I would note that it's a pretty thankless+exhausting job (many thanks to the current mods), so one big challenge is finding people  strong enough, trusted enough, and willing to do it.

Comment by Ozzie Gooen (oagr) on What improvements should be made to improve EA discussion on heated topics? · 2023-01-16T20:48:24.713Z · EA · GW

Blog posts on the EA Forum outlining the incentives and reasons for such a heated environment

Comment by Ozzie Gooen (oagr) on What improvements should be made to improve EA discussion on heated topics? · 2023-01-16T20:47:38.847Z · EA · GW

EAs read literature on good conversational norms

Comment by Ozzie Gooen (oagr) on What improvements should be made to improve EA discussion on heated topics? · 2023-01-16T20:46:55.073Z · EA · GW

We bring in a professional moderator (like, a marriage therapist), to help oversee some of the discussion online.