Posts

Open Communication in the Days of Malicious Online Actors 2020-10-06T23:57:35.529Z · score: 33 (17 votes)
Ozzie Gooen's Shortform 2020-09-22T19:17:54.175Z · score: 7 (1 votes)
Expansive translations: considerations and possibilities 2020-09-18T21:38:42.357Z · score: 12 (2 votes)
How to estimate the EV of general intellectual progress 2020-01-27T10:21:11.076Z · score: 38 (14 votes)
What are words, phrases, or topics that you think most EAs don't know about but should? 2020-01-21T20:15:07.312Z · score: 29 (14 votes)
Best units for comparing personal interventions? 2020-01-13T08:53:12.863Z · score: 16 (5 votes)
Predictably Predictable Futures Talk: Using Expected Loss & Prediction Innovation for Long Term Benefits 2020-01-08T22:19:32.155Z · score: 11 (4 votes)
[Part 1] Amplifying generalist research via forecasting – models of impact and challenges 2019-12-19T18:16:04.299Z · score: 53 (16 votes)
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T16:36:10.564Z · score: 31 (11 votes)
Introducing Foretold.io: A New Open-Source Prediction Registry 2019-10-16T14:47:20.752Z · score: 48 (24 votes)
What types of organizations would be ideal to be distributing funding for EA? (Fellowships, Organizations, etc) 2019-08-04T20:38:10.413Z · score: 31 (10 votes)
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:23.576Z · score: 37 (12 votes)
What new EA project or org would you like to see created in the next 3 years? 2019-06-11T20:56:42.687Z · score: 75 (38 votes)
Impact Prizes as an alternative to Certificates of Impact 2019-02-20T21:25:46.305Z · score: 35 (12 votes)
Discussion: What are good legal entity structures for new EA groups? 2018-12-18T00:33:16.620Z · score: 14 (6 votes)
Current AI Safety Roles for Software Engineers 2018-11-09T21:00:23.318Z · score: 11 (5 votes)
Prediction-Augmented Evaluation Systems 2018-11-09T11:43:06.088Z · score: 6 (2 votes)
Emotion Inclusive Altruism vs. Emotion Exclusive Altruism 2016-12-21T01:40:45.222Z · score: 2 (4 votes)
Ideas for Future Effective Altruism Conferences: Open Thread 2016-08-13T02:59:02.685Z · score: 2 (4 votes)
Guesstimate: An app for making decisions with confidence (intervals) 2015-12-30T17:30:55.414Z · score: 42 (44 votes)
Is there a hedonistic utilitarian case for Cryonics? (Discuss) 2015-08-27T17:50:36.180Z · score: 9 (11 votes)
EA Assembly & Call for Speakers 2015-08-18T20:55:13.854Z · score: 8 (8 votes)
Deep Dive with Matthew Gentzel on Recently Effective Altruism Policy Analytics 2015-07-20T06:17:48.890Z · score: 4 (3 votes)
The first .impact Workathon 2015-07-09T07:38:12.143Z · score: 6 (6 votes)
FAI Research Constraints and AGI Side Effects 2015-06-07T20:50:21.908Z · score: 2 (2 votes)
Gratipay for Funding EAs 2014-12-24T21:39:53.332Z · score: 5 (5 votes)
Why "Changing the World" is a Horrible Phrase 2014-12-24T00:41:50.234Z · score: 7 (7 votes)

Comments

Comment by oagr on Linch's Shortform · 2020-10-15T17:44:02.895Z · score: 4 (2 votes) · EA · GW

Definitely agreed. That said, I think some of this should probably be looked at through the lens of "Should EA as a whole, rather than specific organizations, help people with personal/career development, since the benefits will accrue to the larger community (especially if people only stay at orgs for a few years)?"

I'm personally in favor of expensive resources being granted to help people early in their careers. You can also see some of this in what OpenPhil/FHI funds; there's a big focus on helping people get useful PhDs (though this helps only a small minority of the EA movement).

Comment by oagr on Nathan Young's Shortform · 2020-10-11T22:22:25.338Z · score: 2 (1 votes) · EA · GW

I think people have been taking up the model of open sourcing books (well, making them free). This has been done for [The Life You can Save](https://en.wikipedia.org/wiki/The_Life_You_Can_Save) and [Moral Uncertainty](https://www.williammacaskill.com/info-moral-uncertainty). 

I think this could cost $50,000 to $300,000 or so depending on when it's done and how popular the book is expected to be, but I expect it would often be worth it.

Comment by oagr on [deleted post] 2020-10-11T22:18:37.150Z

Many kudos for doing this; I've been impressed seeing this work progress.

I think it could well be the case that EAs have a decent comparative advantage in prioritization itself. I could imagine a world where the community does help prioritize a large range of globally important issues. This could work especially well if these people could influence the spending and talent of other people. Areas that are currently poorly prioritized present opportunities for significant leverage through prioritization and leadership.

On politics, my impression is that the community is going to get more involved on many different fronts.  It seems like the kind of thing that can go very poorly if done wrong, but the potential benefits are too big to ignore.

As Carl Shulman previously said, one interesting aspect of politics is its potential to absorb a great deal of money and talent. So I imagine one of the most valuable things about seeing whether this can work is the information value it produces to inform us if and how to scale it later.

Comment by oagr on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-09T03:51:48.100Z · score: 7 (6 votes) · EA · GW

From a few conversations with him, I think he semi-identifies as an EA. He's definitely known about EA for a while; there is evidence for that (just search his name on the EA Forum).

I think he would admit that he doesn't fully agree with EAs on many issues. I think that most EAs I know, if they were to know him, wouldn't exactly classify him as an EA, but rather as EA-adjacent.

He definitely knows far more about it than most politicians.

I would trust that he would use "evidence-based reasoning". I'm sure he has for DxE. However, "evidence-based reasoning" by itself is a pretty basic claim at this point. It's almost meaningless at this stage; I think all politicians can claim this.

Comment by oagr on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-09T03:36:34.116Z · score: 12 (8 votes) · EA · GW

I think it's possible to use both good and bad leadership practices. I think the success of DxE has shown that he can do some things quite well.

I've met Wayne before. I get the impression that he is quite intelligent and has definitely been familiar with EA for some time. At the same time, DxE has used much more intense / controversial practices in general than many EA orgs, many practices others would be very uncomfortable with. Very arguably this contributed to both their successes and failures.

Sometimes I'm the most scared of the people who are the most capable. 

I really don't know much about Wayne, all things considered. I could imagine a significant amount of investigation concluding that he'd either be really great or fairly bad.

Comment by oagr on Open Communication in the Days of Malicious Online Actors · 2020-10-08T21:40:11.676Z · score: 4 (3 votes) · EA · GW

Agreed on preventative measures, where possible. I imagine preventative measures are probably more cost-effective than measures after the fact.

Comment by oagr on Open Communication in the Days of Malicious Online Actors · 2020-10-07T22:33:37.942Z · score: 4 (2 votes) · EA · GW

Thanks so much for the feedback. 

On the example; I wrote this fairly quickly. I think the example is quite mediocre and the writing of the whole piece was fairly rough. If I were to give myself a grade on writing quality for simplicity or understandability, it would be a C or so. (This is about what I was aiming for given the investment). 

I'd be interested in seeing further writing that uses more intuitive and true examples. 
 

Comment by oagr on Open Communication in the Days of Malicious Online Actors · 2020-10-07T16:28:06.591Z · score: 14 (9 votes) · EA · GW

Very fair question. I'm particularly considering the issue for community discussions around EA. There's a fair EA Twitter presence now and I think we're starting to see some negative impacts of this. (Especially around hot issues like social justice.)  

I was considering posting here or LessWrong and thought that the community here is typically more engaged with other public online discussion.

That said, if someone has ideas to address the issue on a larger scale, I could imagine that being an interesting area. (Communication as a cause area)

I myself am doing a broad survey of things useful for collective epistemics, so this would also fall within that.

Comment by oagr on Can the EA community copy Teach for America? (Looking for Task Y) · 2020-10-07T03:29:14.042Z · score: 7 (5 votes) · EA · GW

I've been thinking a fair bit about this.

I think that forecasting can act as really good intellectual training for people. It seems really difficult to BS, and there's a big learning curve of different skills to get good at (which can extend into automation).

I'm not sure how well it will scale in terms of paid forecasters for direct value (agreeing that "the impact probably isn't huge"). I have a lot of uncertainty here though.

I think the best analogy is to hedge funds and banks. I could see things going one of two ways; either it turns out what we really want is a small set of super intelligent people working in close coordination (like a high-salary fund), or that we need a whole lot of "pretty intelligent people" to do scaled labor (like a large bank or trading institution).

That said, if forecasting could help us determine what else would be useful to be doing, then we're kind of set.

Comment by oagr on Open Communication in the Days of Malicious Online Actors · 2020-10-07T03:22:12.218Z · score: 3 (2 votes) · EA · GW

Thanks for letting me know, that's really valuable.

Comment by oagr on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-09-25T09:12:42.863Z · score: 2 (1 votes) · EA · GW

A very simple example might be someone saying, "What's up?" and the other person answering, "The sky." "What's up?" assumes a shared amount of context. To be relevant, it would make much more sense to interpret it as asking how the other person is doing.

There are a bunch of YouTube videos around the topic; I recall some go into examples.

Comment by oagr on Thomas Kwa's Shortform · 2020-09-24T09:06:43.896Z · score: 18 (7 votes) · EA · GW

First, neat idea, and thanks for suggesting it!

Is there a reason this isn't being done? Is it just too expensive?

From where I'm sitting, there are a whole bunch of potentially highly useful things that aren't being done. After several years around the EA community, I've gotten a better model of why that is:

1) There's a very limited set of EAs who are entrepreneurial, trusted by funders, and have the necessary specific skills and interests to do many specific things. (Which respected EAs want to take a 5 to 20 year bet on field anthropology?)
2) It often takes a fair amount of funder buy-in to do new projects. This can take several years to develop, especially for a research area that's new.
3) Outside of OpenPhil, funding is quite limited. It's pretty scary and risky to start something new and go for it. You might get funding from EA Funds this year, but who's to say whether you'll have to fire your staff in 3 years?

On doing anthropology, I personally think there might be lower-hanging fruit in first engaging with other written moral systems we haven't engaged with. I'd be curious to get an EA interpretation of parts of Continental Philosophy, Conservative Philosophy, and the philosophies and writings of many of the great international traditions. That said, doing more traditional anthropology could also be pretty interesting.

Comment by oagr on Ozzie Gooen's Shortform · 2020-09-22T19:17:54.494Z · score: 18 (11 votes) · EA · GW

EA seems to have been doing a pretty great job attracting top talent from the most prestigious universities. While we attract a minority of the total pool, I imagine we get some of the most altruistic+rational+agentic individuals. 

If this continues, it could be worth noting that this could have significant repercussions for areas outside of EA; the ones that we may divert them from. We may be diverting a significant fraction of the future "best and brightest" in non-EA fields. 

If this seems possible, it's especially important that we do a really, really good job making sure that we are giving them good advice. 

Comment by oagr on Solander's Shortform · 2020-09-20T17:10:53.902Z · score: 2 (1 votes) · EA · GW

I think this is one of the principles of GiveDirectly. I imagine that more complicated attempts at this could get pretty hairy (e.g., trying to get the local population to come up with large coordinated proposals like education reform), but could be interesting.

Comment by oagr on Expansive translations: considerations and possibilities · 2020-09-19T09:19:01.992Z · score: 4 (2 votes) · EA · GW

Thanks! Some responses:


> I'm not sure I understand exactly why you think of this as being of perhaps similar epistemic importance as forecasting.
 

I plan to get to this more in future posts.  The TLDR is something like,
"Jugemental forecasting has a lot of room to grow in both research and technology. If it gets really great, that could be really useful for our shared epistemics. It would help us be more accurate about the world. Expansive translations have similar properties."
 


by "futuristic translation" did you mean any form of expansive translation as is written in your post

Correct. I think that these definitions will require a lot of technology and research to do well, so I'm labeling them as "futuristic".
 


> The case I see for its importance is basically that it increases our capacity for sharing ideas more efficiently, which can improve general reasoning about complex issues and hasten progress. Is this mostly how you think of it?

Yep, that's a good way of putting it. 
 


> One interesting point regarding how promising this is, is that either there would be an economic incentive for someone to create such an innovation or that there won't be enough public interest.

It's a common point around EA circles, but I think things are more complicated. Having worked in the tech sector for a while, and read a fair bit around the edges, I think the idea that "technological progress that's useful for industry is an efficient market" has large gaps in it. A lot of really ambitious technological development takes decades to develop and begins in academic institutions long before corporate ones. I think doing great work in this area could require long-term systematic efforts, and the way things are right now, those seem very haphazard and spotty to me.

I think it's possible that much of "effective general scientific, academic, and technological progress" is a highly neglected area, even though it seems on the surface that things possibly can't be that bad. 

Comment by oagr on [deleted post] 2020-09-17T15:11:43.112Z

> It seems to me like this post got a reasonable amount of feedback

That was kind of my point. I interpreted you before as saying you wanted to see fewer posts like this, and was pointing out that if we did, those posts wouldn't get feedback like this. (That said, I think that the feedback on this could have been better; I'm responsible for this too.)

> Do any counterexamples come to mind?

I think going through this would take a lot of time, especially if you really don't share this intuition. Happy to discuss in a call. I would note that "lots of critical comments" doesn't exactly mean that "what's going on" is obvious. There are many mediocre posts that have bad ideas that get no comments. I would expect that if one had a post that got a ton of bad comments, they would assume it could be because their post was much worse than those that got few comments, or that people dislike them personally.

> I also think that feedback on the Forum tends to be more helpful (on average) than you'd get on almost any other free online platform. My main criticism of the Forum's commentariat is that they don't write enough comments (I'd love to see people get more feedback), but I don't know what alternative platform would be better in that regard.

I wouldn't really disagree, but I expect that we can still aim for much better. Many of the forums I encounter (especially ones that border on discussions of morality) are really, really bad.

> A question: Do you think the Forum would be a better site, overall, if it had only upvotes and comments, but no downvotes?

I'm not excited about this idea. The upvote/downvote system is crude enough as it is; if anything, I'd like to see more specificity come to it (like, I upvote the reasoning of this post, but not its conclusions). I am excited about things like important community members (myself included) learning how to better handle emotionally sensitive communication, though there are many useless ways of attempting this. Eventually, it could be interesting to have ML bots or similar to help here.

Comment by oagr on [deleted post] 2020-09-15T16:22:04.296Z

I think I'd also like to recommend having the EA Community Health team or similar jump in on situations like this, and hopefully have a call with Sanjay and the top critics. Social disagreements on sensitive issues are really tough to have in person, let alone on a public forum with everyone in the community judging you. 

Comment by oagr on [deleted post] 2020-09-15T16:12:04.276Z

Thanks for the thoughtful response.

I think one area where we may have different feelings is in the bar for publishing to the EA Forum. My intuition was that we should have a very low bar, but have easy sorting and make things easy for readers to skip content. The feeling is something like, "We thank you for writing any content to the forum. Once it's here, it's easy enough to remove or hide it if really needed, so content is generally EV-positive."
 

I would argue that the original "crybaby" post may have been worse than this one, but I'm still fine with it being hosted publicly. If the next Holden were to write a post like that, on this Forum, I'd want them to come away learning how to do better, but not having a very negative experience.


One of the main uses of posting to the Forum is not for readership but for feedback. And some of the worst posts may be exactly those that could benefit the most from feedback.


I know a bunch of people who are reluctant to post to the Forum, and my impression is that we're losing out a bit here.

If you see the Forum as more of a professional thing, I would hope we could eventually have some other alternative to give feedback to people on their written up thoughts and early blog posts.  (Not saying that this is your responsibility, just that I would like to see someone do it).
 

---


Often when I see posts heavily downvoted / other comments upvoted, it's because they seem to hit a nerve that a large part of the community deeply cares about, but the comment responses don't make this clear (it is confusing!). For example, there have been a bunch of emotionally charged threads on transparency vs. censorship. I'm worried that people posting will touch on these issues without realizing it, then take the vote differences to be about them personally.

Comment by oagr on [deleted post] 2020-09-13T13:41:35.229Z

Just a bit more here, after seeing the other comments come in:

My impression is that the OP is being genuine and not trying to be mean-spirited. I find the style of this one post a bit odd and off-putting (though admittedly less so than the original crybaby post), but that's something we'll get when encouraging people to write on the forum.


> I don't think it is a good principle that people should trawl through blog posts from 13 years ago, on a different website, looking for something to demand a public apology for. If we start accepting posts like this then this entire forum could end up being nothing but such articles!

At the very top of this post the author says,

"As it's many years old, I thought that everyone must assume it is no longer representative of GiveWell. However despite its age I've seen it quoted even quite recently."

Unless we think the author is lying, the fact that they've seen it quoted (what seems like a fair amount) seems like a valid reason to me to identify it. It doesn't sound like the case of someone maliciously searching through their history for all bad posts.


> It engages with a blog post written in 2007 in a way which will predictably lead readers to infer that as far as public evidence goes, this is still Holden's view / style. On the contrary, GiveWell's and OpenPhil's styles are now wildly different from this post.

My reading is that the OP wasn't doing this maliciously. I could easily imagine that the OP wasn't familiar with the dramatic change in GiveWell and OpenPhil's styles, and is rather new to the field. Or they thought that the fact that the post was "updated in 2016" (from the post) was evidence that it was still supported.


There's some nasty subtext in the voting patterns, where things are heavily upvoted and downvoted with rather little explanation. My impression is that a large crowd is very afraid of some kinds of discussions, so it pounces where there's a bit of pattern matching. These background dynamics are often things the original authors are unaware of.

I remember a long while back I wrote some posts on LessWrong that were decently downvoted without much reasonable explanation, and that discouraged me from writing for a long time. Here, Sanjay (from looking at other posts) seems to be the kind of person who actively contributes content that's upvoted, and I'd hate to discourage them unnecessarily.

Comment by oagr on Judgement as a key need in EA · 2020-09-12T19:30:39.289Z · score: 3 (2 votes) · EA · GW

I'm doing research around forecasting, and I'd just note:
1) Forecasting seems nice for judgement, but it is very narrow (as currently discussed).
2) It seems quite damning that no other field is currently recommended as an obvious way to improve judgement, but right now not much else comes to mind. There's a lot of academia that seems like it could be good, but right now it's a whole lot of work to learn and the expected benefits aren't particularly clear.

If anyone else reading this has suggestions, please leave them in the comments.

Comment by oagr on [deleted post] 2020-09-12T19:24:19.743Z

A bit like Holden (2007), I'm a fan of honest communication. And if honest communication can be used to bash on some charities, it could also be used to bash on blog posts bashing some charities.

My guess is that Holden looks back quite poorly on the post in question, and leaves it up for transparency rather than because he still supports it. I've seen this post get shown as an example of how not to do good communication. I'm thankful it's still up for this purpose, though a bit surprised it doesn't have a disclaimer. Apparently it was updated in 2016; I'm not sure what's going on there.

OpenPhil now seems to have a very different philosophy to that discussed in that post. 

I have personally taken down my previous blog because I found some of it pretty cringe-worthy. I mean to put it back online sometime but it will take some time to add all the disclaimers I'd want to feel comfortable with it.

Comment by oagr on Why do social movements fail: Two concrete examples. · 2020-09-08T11:49:08.314Z · score: 2 (1 votes) · EA · GW

Many of my interests are related to General Semantics, so I'd like to understand it better.

I think it's likely that it wasn't bad; it just wasn't done particularly well. It may have been ahead of its time.

It seems like a pity that it almost completely ended. I guess they just didn't find the mix of funding and great talent needed to continue the discipline.

I wouldn't want it to be seen as a lesson that those ideas are a dead-end, which clearly does not seem true to me.

I got the impression that the field was pretty messy and early, a bit like Econ in the very early stages.

Comment by oagr on Why do social movements fail: Two concrete examples. · 2020-09-08T11:45:53.893Z · score: 4 (2 votes) · EA · GW

Great work here. 

I like the content, though am not sure that "A list of causes" is the ideal organization.  I think with General Semantics there needed to be an ecosystem where the benefits were greater than the costs for a range of stakeholders, and in this case they plainly weren't.  Your lists of causes seem like about the right content for these benefits/costs.

I imagine another way of organizing this would be something like a list of the costs and benefits of the product, and the costs and benefits for the organizers, with the corresponding conclusion that the costs were generally higher than the benefits for the reasons you mentioned.

Comment by oagr on More empirical data on 'value drift' · 2020-09-02T21:05:54.448Z · score: 4 (2 votes) · EA · GW

Nice work with this!

One thing that comes to mind (though it's perhaps a bit strange) is to really consider Effective Altruism under a similar lens as you would a SaaS product or similar. In the SaaS (software as a service) industry, there's a fair set of best practices around understanding retention rates and churn, doing cohort analysis, and the like. There's also literature on evaluating the quality of a product using NPS scores and similar measures. It could be neat to have people rank "Effective Altruism" and "The EA Community" on NPS scores.

Likewise, it could be interesting to survey people with questions like, "How would you rate the value you are getting from the EA ecosystem?", and then work to maximize this value. Consider the costs (donations, career changes) vs. the benefits and see if you can model total value better.
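
To make the analogy concrete, here's a minimal sketch of the two measurements mentioned above. It's purely illustrative: the metric definitions are the standard SaaS ones, and all the numbers and cohorts are made up rather than real EA survey data.

```python
# Purely illustrative sketch: standard NPS and cohort-retention calculations
# applied to made-up community survey data.

def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100.0 * (promoters - detractors) / len(ratings)

def cohort_retention(members, current_year=2020):
    """members: list of (join_year, last_active_year) pairs.
    Returns the fraction of each join-year cohort still active in current_year."""
    joined, active = {}, {}
    for join_year, last_active in members:
        joined[join_year] = joined.get(join_year, 0) + 1
        if last_active >= current_year:
            active[join_year] = active.get(join_year, 0) + 1
    return {year: active.get(year, 0) / joined[year] for year in sorted(joined)}

print(nps([10, 9, 8, 7, 9, 6, 10, 4, 9, 8]))  # -> 30.0
print(cohort_retention([(2017, 2020), (2017, 2018), (2018, 2020), (2019, 2020), (2019, 2019)]))
# -> {2017: 0.5, 2018: 1.0, 2019: 0.5}
```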

Comment by oagr on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T19:07:22.463Z · score: 4 (2 votes) · EA · GW

Yea, I think the court analogy doesn't mean we should all aim to be "rational", but that some of the key decision makers and discussions should be held to a standard. Having others come in as emotional witnesses makes total sense, especially if it's clear that's what's happening.

Comment by oagr on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T15:57:02.649Z · score: 11 (4 votes) · EA · GW

Thanks for the points Denise, well taken.  

I think the issue of "how rational vs. emotional should we aim to be in key debates" (assuming there is some kind of clean distinction) is quite tricky.

I would point out some quick thoughts that might be wrong.
1. I'm also curious to better understand why there isn't more discussion by women here. I could imagine a lot of possible reasons for this. It could be that people don't feel comfortable providing emotional responses, but it could also be that people notice that responses on the other side are so emotional that there may be severe punishment.
2. Around the EA community and on Twitter, I see many more emotional-seeming arguments in support of Robin Hanson than against him. Twitter is really the worst at this.
3. Courts have established procedures for ensuring that both judges and juries are relatively unbiased, fair, and (somewhat) rational. There's probably some interesting theory here we could learn from.
4. I could imagine a bunch of scary situations where important communication gets much more emotional. If things get less emotional, it's trickier to tell. I like to think that rationally minded people could help seek out biases like the one you mention and respond accordingly, instead of having to modify a large part of the culture to account for it.

Comment by oagr on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T08:32:19.815Z · score: 13 (7 votes) · EA · GW

To be more clear, I think the snarky comments on Twitter on both sides are a pretty big anti-pattern and should be avoided. They sometimes get lots of likes, which is particularly bad. 

Comment by oagr on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T08:30:42.367Z · score: 15 (12 votes) · EA · GW

I think this is a complex issue, and a confident stance would require a fair bit of time of investigation.

I don't like the emotional hatred going on on both sides. I'd like to see a rational and thoughtful debate here, not a moralistic one. I don't want to be part of a community where people are colloquially tarred and feathered for making difficult decisions. I could imagine that many of us may wind up in similar positions one day. 

So I'd like discussion of Robin Hanson to be done thoughtfully, and also discussions of EA Munich to be done thoughtfully. 

The [Twitter threads](https://twitter.com/pranomostro1/status/1293267131270864903) seem like a mess to me. There are a few thoughtful comments, but tons of misery (especially from anonymous accounts and the like). I guess this is one thing that Twitter is just naturally quite poor at. 

There are a lot of hints that the EA Munich team is exhausted over the response:

"Note that this document tries to represent the views of 8 different people on a controversial topic, compiled within a couple of hours, and is therefore necessarily simplifying."

"Because we're kind of overwhelmed with the situation, we won't be able to respond to your comments. We understand this is frustrating (especially if you think we have done a bad job), but we're only volunteers doing this in our free time."

"I'm personally pretty happy with many people applying Hanlon's razor here to analyze our decision. 'I disagree with you/you made the wrong choice' is much better than 'you're a bad person and harbour ill will against X'"

This is like that point in a movie where someone on one side does something really stupid and causes actual violence.

I imagine EA will face much bigger challenges of similar types in the future, so we should get practice in handling them well. 

Comment by oagr on How to estimate the EV of general intellectual progress · 2020-08-24T10:31:27.203Z · score: 2 (1 votes) · EA · GW

Yea, I think there's a similar concern any time you make other fields better run. That said, as a rule of thumb, this seems a lot safer than making many other fields worse run. It would be great to be able to apply intellectual abilities selectively, but when that's too hard, doing it generally seems fairly good to me.

Comment by oagr on The case of the missing cause prioritisation research · 2020-08-23T11:24:44.357Z · score: 2 (1 votes) · EA · GW

Thanks!

Comment by oagr on The case of the missing cause prioritisation research · 2020-08-23T11:23:11.886Z · score: 6 (3 votes) · EA · GW

Nice post. I think I agree with all of that. 

I'm not advocating for "poorly done quantitative estimates." I think anyone reasonable would admit that it's possible to bungle them. 

I'm definitely not happy with a local optimum of "not having estimates". It's possible that "having a few estimates" can be worse, but I imagine we'll want to get to the point of "having lots of estimates, and being mature enough to handle them" at some point, so that's the direction to aim for.

Comment by oagr on The case of the missing cause prioritisation research · 2020-08-23T11:09:31.331Z · score: 2 (1 votes) · EA · GW

Yea, I think there are similar incentives at play in both cases.

Comment by oagr on What (other) posts are you planning on writing? · 2020-08-20T12:47:04.036Z · score: 4 (2 votes) · EA · GW

I like that these generally seem quite clear and focused.

In terms of decision relevance and benefit, I get the impression that several funders and meta EA orgs feel a crunch in not having great prioritization, and if better work emerges, they may change funding fairly quickly. I'm less optimistic about career-change-type work, mainly because it seems like it would take several more years to apply (it would take some time to go from convincing someone to having them start producing research).

I'm skeptical of how much research into investments will change investments in the next 2-10 years. I don't get the impression OpenPhil or other big donors are closely following these topics.

Therefore I'm more excited about the Giving Now/Later and Long-Term Future work.

Another way of phrasing this is that I think we have a decent discount rate (maybe 10% a year), plus I think that high-level research prioritization is a particularly useful field if done well. 

A few years back a relatively small amount of investigation into AI safety (maybe 20 person years?) led to a huge change from OpenPhil and a bunch of EA talent. 

I would be curious to hear directly from them. I think that work that influences the big donors is the highest leverage at this point, and I also get the impression that there is a lot of work that could change their minds. But I could be wrong.

Comment by oagr on The case of the missing cause prioritisation research · 2020-08-19T21:24:05.614Z · score: 16 (8 votes) · EA · GW

That's an interesting list, especially for 30 minutes :) (Makes me wonder what you or others could do with more time.)

Much of it focused on EA community stuff. I kind of wonder if funders are extra resistant to some of this because it seems like they're just "giving money to their friends", which in some ways, they are. I could see some of it feeling odd and looking bad, but I think if done well it could be highly effective.

Many religious and ethnic groups spend a lot of attention helping each other, and it seems to have very positive effects. Right now EA (and the subcommunities I know of in EA) seem fairly far from that still.

https://www.nationalgeographic.com/culture/2018/09/south-asia-america-motels-immigration/

A semi-related point on that topic; I've noticed that for many intelligent EAs, it feels like EA is a competition, not a collaboration. Individuals at social events will be trying to one-up each other with their cleverness. I'm sure I've contributed to this. I've noticed myself becoming jealous when I hear of others who are similar in some ways doing well, which really should make no sense at all. I think in the anonymous surveys 80K did a while back a bunch of people complained that there was a lot of signaling going on and that status was a big deal.

Many companies and open source projects live or die depending on the cultural health. Investments in the cultural health of EA may be difficult to measure, but pay off heavily in the long run.

Comment by oagr on The case of the missing cause prioritisation research · 2020-08-19T20:46:48.989Z · score: 8 (4 votes) · EA · GW

> IME I get much more strongly negative comments when I write anything quantitative than when I don't. But I might just be noticing that type of criticism more than other types.

 

I haven't seen these specific examples, but there definitely seems to be a similar bias in other groups. Many organizations are afraid to make any kinds of estimates at all. At the extreme end are people who don't even make clear statements; they just speak in vague metaphors or business jargon that are easy to defend but don't actually convey any information. Needless to say, I think this is an anti-pattern. I'd be curious if anyone reading this would argue otherwise.
 


> The rate of individual value drift is even higher, something around 5%. That's really bad. Is there anything we can do about it? Is bringing new people into the movement more important than improving retention?

It seems to me like some modeling here would be highly useful, though it can get kind of awkward. I imagine many decent attempts would include numbers like, "total expected benefit of one member". Our culture often finds some of these calculations too "cold and calculating." It could be worth it for someone to do a decent job at some of this, and just publicly write up the main takeaways.
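
As a purely illustrative sketch of what such a model could look like: the 5% annual drift rate comes from the quote above, while the value units, horizon, discount rate, and the 4% comparison point are all made up for illustration.

```python
# Minimal illustrative model: discounted expected "member-years" of engagement,
# given an annual probability of drifting away. All numbers besides the 5%
# drift rate are placeholders, not estimates.

def expected_value_of_member(drift_rate, value_per_year=1.0,
                             discount_rate=0.0, horizon=60):
    """Discounted expected value of one member who leaves with probability
    `drift_rate` in each year."""
    total, p_still_engaged = 0.0, 1.0
    for year in range(horizon):
        total += p_still_engaged * value_per_year / (1 + discount_rate) ** year
        p_still_engaged *= (1 - drift_rate)
    return total

baseline = expected_value_of_member(drift_rate=0.05)
improved = expected_value_of_member(drift_rate=0.04)
print(f"Value per member at 5% drift: {baseline:.1f} member-years")
print(f"Gain from cutting drift to 4%: {improved - baseline:.1f} per member")
# Comparing that gain (times community size, minus the cost of the retention work)
# against the cost of recruiting equivalent new members is the retention-vs-growth question.
```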

I find the ideas you presented quite interesting and reasonable, I'd love to see more work along those lines.

Comment by oagr on The case of the missing cause prioritisation research · 2020-08-19T20:35:56.293Z · score: 4 (2 votes) · EA · GW

Yep, and a few others at IARPA who worked around the forecasting stuff were also EAs or close. 

Comment by oagr on The case of the missing cause prioritisation research · 2020-08-18T17:43:12.686Z · score: 9 (5 votes) · EA · GW

I'll give a +1 for Convergence. I've known the team for a while and worked with Justin a few years back. It's a bit on the theoretical side of prioritization, but that sort of thinking often does lead to more immediate value.

My impression is also that more funding could be quite useful to them, if anyone reading this is considering it.

Comment by oagr on The case of the missing cause prioritisation research · 2020-08-17T21:08:47.600Z · score: 8 (6 votes) · EA · GW

I've been thinking a lot about the lack of non-EA interest or focus on forecasting and related tools. I was very surprised when I made Guesstimate and there was excitement from several people, but not that much excitement from most businesses or governments.

I think that forecasting of the GJP sort is still highly niche. Almost no one knows of it or understands the value. You can look at this as similar to specific advances in, say, type theory or information theory. 

The really smart groups that have interests in improving their long term judgement seem to be financial institutions and similar. These are both highly secretive, and not interested in spending extra effort helping outside groups.

So really advancing a field like judgemental forecasting would require a combination of expertise, funding, and interest in helping the broad public, and this is a highly unusual combination. I imagine that if IARPA hadn't been around in time to both be interested in and able to fund GJP's efforts, much less would have happened there. I'd also personally point out that I'd expect that IARPA's funding of it was around 1/3rd or maybe 1/20th as efficient, in terms of global benefit, as it would have been if OpenPhil had organized a more directed effort.

This makes me think that there are probably many other very specific technology and research efforts that would also be exciting for us to focus on, but we don't have the expertise to recognize them. We may have gotten lucky with forecasting/estimation tech, as that was something we had to get close to anyway for other reasons.

Comment by oagr on The case of the missing cause prioritisation research · 2020-08-17T20:50:20.446Z · score: 8 (4 votes) · EA · GW

> Forecasting is a common good to many causes, so you'd expect it not to be neglected. But in practice, it seems the only people working on forecasting are EA or EA-adjacent (I'd count Tetlock as adjacent)

I think I've become a bit convinced that incentive and coordination problems are so severe that many "common goods" are surprisingly neglected. The history of the slow development and proliferation of Bayesian techniques in general (up to around 20 years ago maybe, but even now I think the foundations can be improved a lot) seems quite awful.

Also, at this point, I feel quite strongly about much of the EA community; it's like we've gathered up many of the most [intelligent + pragmatic + agentic + high-level-optimizing] people in the world. As such I think we can compete and do a good job in many areas we may choose to focus on. So it could be that we could move up from "absolutely, incredibly neglected" to "just somewhat neglected", which could open up a whole bunch of fields.

Comment by oagr on The case of the missing cause prioritisation research · 2020-08-17T20:43:13.303Z · score: 42 (21 votes) · EA · GW

> Do you have a list of the top research areas you'd like to see that aren't getting done?

Oh boy. I've had a bunch of things in the back of my mind. Some of this is kind of personal (specific to my own high-level beliefs, so it wouldn't apply to many others).
I'm a longtermist and believe that most of the expected value will happen in the far future. Because of that, many of the existing global poverty, animal welfare, and criminal justice reform interventions don't seem particularly exciting to me. I'm unsure what to think of AI Risk, but "unsure" is much, much better than "seems highly unlikely." I think it's safe to have some great people here, but I currently get the impression that a huge number of EAs are getting into this field, and this seems like too many to me on the margin.

What I'm getting to is: when you exclude most of poverty, animal welfare, criminal justice reform, and AI, there's not a huge amount getting worked on in EA at the moment.

I think I don't quite buy the argument that the only long-term interventions to consider are ones that address X-risks in the next ~30 years, nor the argument that the only interventions to consider are ones that address X-risks at all. I think it's fairly likely (>20%) that sentient life will survive for at least billions of years, and that there may be a fair amount of lock-in, so changing the trajectory of things could be great.

I like the idea of building "resilience" instead of going after specific causes. For instance, if we spend all of our attention on bio risks, AI risks, and nuclear risks, it's possible that something else weird will cause catastrophe in 15 years. So experimenting with broad interventions that seem "good no matter what" seems interesting. For example, if we could have effective government infrastructure, or general disaster response, or a more powerful EA movement, those would all be generally useful things.

I like Phil's work (above comment) and think it should get more attention, quickly. Figuring out and implementing an actual plan that optimizes for the long term future seems like a ton of work to me.

I really would like to see more "weird stuff." 10 years ago many of the original EA ideas seemed bizarre; like treating AI risk as highly important. I would hope that with 10-100x as many people, we'd have another few multiples of weird but exciting ideas. I'm seeing a few of them now but would like more.

Better estimation, high-level investigation, prioritization, data infrastructure, etc. seem great to me.

Maybe one way to put it would be something like, imagine clusters of ideas as unique as those of Center on Long-Term Risk, Qualia Computing, the Center for Election Science, etc. I want to see a lot more clusters like these.

Some quick ideas:
- Political action for all long term things still seems very neglected and new to me, as mentioned in this post.
- A lot of the prioritization work, even of the "Let's just estimate a lot of things to get expected values" sort.
- I'd like to see research in ways AI could make the world much better/safer; the most exciting part to me is how it could help us reason in better ways, pre-AGI, and what that could lead to.
- Most EA organizations wouldn't upset anyone (they are net positives for everyone), but many things we may want would. For instance, political action, or potential action to prevent bio or AI companies from doing specific things. I could imagine groups like "slightly-secretive strategic agencies" that go around doing valuable things having a lot of possible benefit (but of course significant downsides if done poorly).
- This is close to me, but I'm curious if open source technologies could be exciting philanthropic investments. I think the donation to Roam may have gone extremely well, and am continually impressed and surprised by how little money there is in incredible but very early or experimental efforts online. Ideally this kind of work would include getting lots of money from non-EAs.
- In general, trying to encourage EA-style thinking in non-EA ventures could be great. There's tons of philanthropic money being spent outside EA. The top few tech billionaires just dramatically increased their net worths in the last few months, and many will likely spend much of that eventually.
- I really care about growing the size and improving the average experience of the EA community. I think there's a ton of work to be done here of many shapes and forms.
- I think many important problems that feel like they should be done in Academia aren't, due to various systematic reasons. If we could produce researchers who do "the useful things, very well", either in Academia or outside, that could be valuable, even in seemingly unrelated fields like anthropology, political science, or targeted medicine (fixing RSI, for instance). "The Elephant in the Brain"-style work comes to mind.
- On that note, having 1-2 community members do nothing but work on RSI, back, and related physical health problems for EAs/rationalists, could be highly worthwhile at this point. We already have a few specific psychologists and a productivity coach. Maybe eventually there could be 10-40+ people doing a mini-industry of services tailored to these communities.
- Unlikely idea: insect farms. Breed and experiment with insects or other small animals in ways that seem to produce the most well-being for the lowest cost. Almost definitely not that productive, but good for diversification, and possibly reasonably cheap to try for a few years.
- Much better EA funding infrastructure, in part for long-term funding.
- Investigation and action to reform/improve the UN and other global leadership structures.
- I'm curious about using extensive Facebook ads, memes, Youtube sponsorship, and similar, to both encourage Effective Altruism, and to encourage ideas we think are net valuable. These things can be highly scalable.

Also, I'd be curious to get the suggestions of yourself and others here.

Comment by oagr on The case of the missing cause prioritisation research · 2020-08-16T17:09:41.081Z · score: 9 (6 votes) · EA · GW

Thanks for the response!
 

Quick responses:

4. I haven't investigated this much myself; I was relaying what I know from donors (I don't donate myself). I've heard a few times that OpenPhil and some of the donors behind EA Funds are quite worried about negative effects. My impression is that the reason for some of this is simple, but there are some more complicated reasons that go into the thinking here that haven't been written up fully. I think Oliver Habryka has a bunch of views here.

5-6. I didn't mean to imply that junior researchers are "rare", just that they are limited in number (which is obvious). My impression is that there's currently a bottleneck in giving very junior researchers experience and reputability, which is unfortunate. This is evidenced by Rethink's round. I think there may be a fair amount of variation in these researchers though; only a few are really the kinds who could pioneer a new area (this requires a lot of skills and taking special career risks).

7. I'm also really unsure about this. Though to be fair, I'm unsure about a lot of things. To be clear though, I think that there are probably rather few people this would be a good fit for.

I'm really curious just how impressive the original EA founders were compared to all the new EAs. There are way more young EAs now than there were in the early days, so theoretically we should expect that some will be in many ways more competent than the original EA founders, minus in experience of course.

Part of me wonders: if we don't see a few obvious candidates for young EA researchers as influential as the founders were, in the next few years, maybe something is going quite wrong. My guess is that we should aim to resemble other groups that are very meritocratic in terms of general leadership and research. 

8. Happy to discuss in person. They would take a while to organize and write up.

The very simple thing here is that to me, we really could use "funding work" of all types. OpenPhil still employs a very limited headcount given their resources, and EA Funds is mostly made up of volunteers. Distributing money well is a lot of work, and there currently aren't many resources going into this. 

One big challenge is that not many people are trusted to do this work, in part because of the expected negative impacts of funding bad things. So there's a small group trusted to do this work, and a smaller subset of them interested in spending time doing it.

I would love to see more groups help coordinate, especially if they could be accepted by the major donors and community. I think there's a high bar here, but if you can be over it, it can be very valuable.

I'd also recommend talking to the team at EA Funds, which is currently growing.

9. This could be worth discussing further. RP is still quite early and developing. If you have suggestions about how it could improve, I'd be excited to have discussions on that. I could imagine us helping change it in positive directions going forward.

10. Thanks!

Comment by oagr on The case of the missing cause prioritisation research · 2020-08-16T14:17:41.285Z · score: 57 (24 votes) · EA · GW

Thanks for the post! Much of it resonated with me.

A few quick thoughts:

1. I could see some reads of this being something like, "EA researchers are doing a bad job and should feel bad." I wouldn't agree with this (mainly the latter bit) and assume the author wouldn't either. Lots of EAs I know seem to be doing about the best they know how to, and have a lot of challenges they are working to overcome.

2. I've had some similar frustrations over the last few years. I think that there is a fair bit of obvious cause prioritization research to be done that's getting relatively little attention. I'm not as confident as you seem to be about this, but agree it seems to be an issue.

3. I would categorize many of the issues as systemic ones that fall between different sectors. I think significant progress in these areas would require bold efforts with significant human and financial capital, and these clusters are rare. Right now the funding situation is still quite messy for ventures outside the core OpenPhil cause areas.

I could see an academic initiative taking some of them on, but that would be a significant undertaking from at least one senior academic who may have to take a major risk to do so. Right now we have a few senior academics who led/created the existing main academic/EA clusters, and these projects were very tied to the circumstances of the senior people. 

If you want a job in Academia, it's risky to do things outside the common tracks, and if you want one outside of Academia, it's often riskier. One in-between option is making new small nonprofits. This is also a significant undertaking, however. The funding situation for small ongoing efforts is currently quite messy; these are often too small for OpenPhil but too big for EA Funds.

4. One reason why funding is messy is that it's thought that groups doing a bad job at these topics could be net negative. Thus, few people are trusted to lead important research in new areas that are core to EA. This could probably be improved with significantly more vetting, but that takes a lot of time. Now that I think about it, OpenPhil has very intensive vetting for their hires, and these are just hires; after they are hired they get managers and can be closely worked with. If a funder funds a totally new research initiative, it will have a vastly lower amount of control over (or understanding of) it than organizations do over their employees. Right now we don't have organizations around that can apply near hiring-level vetting to funding small initiatives; perhaps we should, though.

5. We only have so many strong EA researchers, and fewer people capable of leading teams and obtaining funding. Right now a whole lot of great ones are focused on AI (which often requires many years of grad school or training) and animal welfare. My impression is that on the margin, moving some people from these fields to other fields (cause prioritization or experimental new things) could be good, though it would be a big change for several individuals.

6. It seems really difficult to convince committed researchers to change fields. They often have taken years to develop expertise, connections, and citations, so changing that completely is very costly. An alternative is to focus on young, new people, but those people take a while to mature as researchers.

In EA we just don't have many "great generic researchers" who we can reassign from one topic to something very different on short notice. More of this seems great to me, but it's tricky to set up and attract talent for.

7. I think it's possible that older/experienced researchers don't want to change careers, and new ones aren't trusted with funding. Looking back, I'm quite happy that Elie and Holden started GiveWell without feeling like they needed to work in an existing org for 4 years first. I'm not sure what to do here, but would like to see more bets on smart young people.

8. I think there are several interesting "gaps" in EA and am sure that most others would agree. Solving them is quite challenging; it could require a mix of coordination, effort, networking, and thinking. I'd love to see some senior people try to do work like this full-time. In general I'd love to see more "EA researcher/funding coordination"; that seems like the root of a lot of our problems.

9. I think Rethink Priorities has a pretty great model and could be well suited to these kinds of problems. My impression is funding has been a bottleneck for them. I think that Peter may respond to this, so he can speak to it directly. If there are funders out there who are excited to fund any of the kinds of work described in this article, I'd suggest reaching out to Rethink Priorities and seeing if they could facilitate that. They would be my best bet for that kind of arrangement at the moment.

10. Personally, I think forecasting/tooling efforts could help out cause prioritization work quite a bit (this is what I'm working on), but they will take some time, and obviously aren't direct work on the issue.

Comment by oagr on Donor Lottery Debrief · 2020-08-06T11:59:43.000Z · score: 12 (7 votes) · EA · GW

You could both be right. My impression is that there are a whole bunch of ambitious people in the Bay, so being there for funding has advantages. I also think that non-Bay ventures are fairly neglected. Overall I (personally) would like to see more funding and clarity in basically all places. 

Also, note that the two ventures Tim funded were non-Bay ventures. Bay connections are useful even for understanding international projects.

Comment by oagr on Announcing the EA Virtual Group* · 2020-06-26T22:35:02.132Z · score: 5 (4 votes) · EA · GW

Sure thing. I'm less concerned with the name than the collection of possibly-too-varied people it brings in, for the sake of the project. I imagine you'll get a better sense though as you start it.

Comment by oagr on Announcing the EA Virtual Group* · 2020-06-26T11:16:50.643Z · score: 22 (10 votes) · EA · GW

Kudos for taking on an initiative like this!

I think trying to have this be the EA virtual group will be difficult. If all EAs were in one city, there just couldn't be a single EA meetup in that city; it would be too large.

There is also a really tricky issue of who will attend. EAs aren't one unit. There are a bunch of pockets that have various relationships and opinions of each other. I think trying to aim for "absolutely everyone" will prove challenging, if that is the goal. For instance, experienced researchers don't typically enjoy spending a lot of time with very new people, as most of the questions are very basic.

I'd probably encourage you to think more about either specializing on some subtopic, or trying to identify a specific few key members that represent what you want the social group to grow towards.

I've found a lot of these social/reading groups are something like, "These 3 people really like talking to each other and are similar in a few key ways, and they can grow to others who share some of those similarities."

I've done something similar (.impact, a while back), and also hosted a few EA groups in the past; these ideas come from those experiences.

Good luck!

Comment by oagr on Against opposing SJ activism/cancellations · 2020-06-23T09:16:43.540Z · score: 19 (9 votes) · EA · GW

I really don't like this about the voting system. My read is that you (Chichiko) provided some points on one side of an uncomfortable discussion. Most readers seem to overall agree with the other side. My impression is that they used their downvotes to voice their high level opinion, rather than because they found your specific points to be bad.

I feel quite strange about this, but I feel that we're in some kind of meta-level argument about censorship; any points in favor of occasional censorship quickly get censored. By downvoting this piece so much, that's kind of what's happening.

Comment by oagr on External evaluation of GiveWell's research · 2020-05-26T11:42:20.835Z · score: 11 (4 votes) · EA · GW

Oh man, happy to have come across this. I'm a bit surprised people remember that article. I was one of the main people who set up the system; that was a while back.

I don't know specifically why it was changed. I left 80k in 2014 or so and haven't discussed this with them since. I could imagine some reasons why they stopped it though. I recommend reaching out to them if you want a better sense.

This was done when the site was a custom Ruby/Rails setup. This functionality required a fair bit of custom coding to set up. Writing quality was more variable then than it is now; there were several newish authors and it was much earlier in the research process. I also remember that originally the scores disagreed a lot between evaluators, but over time (the first few weeks of use) they converged a fair bit.

After I left they migrated to WordPress, which I assume would have required a fair effort to set up a similar system in. The blog posts seem like they became less important than they used to be, in favor of the career guide, coaching, the podcast, and other things. Also the quality has become a fair bit more consistent, from what I can tell as an onlooker.

The ongoing costs of such a system are considerable. First, it just takes a fair bit of time from the reviewers. Second, unfortunately, the internet can be a hostile place for transparency. There are trolls and angry people who will actively search through details and then point them out without the proper context. I think this review system was kind of radical, and can imagine it not being very comfortable to maintain, unless it really justified a fair bit of effort.

I'm of course sad it's no longer in place, but can't really blame them.

Comment by oagr on CEA's Plans for 2020 · 2020-05-05T08:23:03.274Z · score: 12 (5 votes) · EA · GW

I think I’m pretty torn up about this. I agree that this was a failure, but going too far in the other direction seems like a loss of opportunity. I think my ideal would be something like a very competent and large CEA, or another competent and large organization spearheading a bunch of new EA initiatives. I think there’s enough potential work to absorb an additional 30-1000 full time people. I’d prefer small groups to do this to a poorly managed big group, but in general don’t trust small groups all too much for this kind of work in the long run. Major strategic action requires a lot of coordination, and this is really difficult with a lot of small groups.

I think my take is that the failures mentioned were mostly failures of expectations, rather than bad decisions in the ideal. If CEA could have done all these things well, that would have been the ideal scenario to me. The projects often seemed quite reasonable, it just seemed like CEA didn’t quite have the necessary abilities at those points to deliver on them.

Referencing above comments, I think, “Let’s make sure that our organization runs well, before thinking too much about expanding dramatically” is a very legitimate strategy. My guess is that given the circumstances around it, it’s a very reasonable one as well. But I also have some part of me inside screaming, “How can we get EA infrastructure to grow much faster?”.

Perhaps more intense growth, or at least bringing in several strong new product managers, could be more of a plan in 1-2 years or so.

Comment by oagr on Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI? · 2020-02-24T15:53:35.895Z · score: 24 (13 votes) · EA · GW

I think these comments could look like an attack on the author here. This may not be the intention, but I imagine many may think this when reading it.

Online discussions are really tricky. For every 1000 reasonable people, there could be 1 who's not reasonable, and whose definition of "holding them accountable" is much more intense than the rest of ours.

In the case of journalists this is particularly bad, even from a selfish perspective; it would be quite bad for any of our communities to get them upset.

I also think that this is very standard stuff for journalists, so I really don't feel the specific author here is particularly relevant to this difficulty.

I'm all for discussion of the positives and weaknesses of content, and for broad understanding of how toxic the current media landscape can be. I just would like to encourage we stay very much on the civil side when discussing individuals in particular.

Comment by oagr on Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI? · 2020-02-22T11:41:47.986Z · score: 17 (16 votes) · EA · GW

I feel like it's quite possible that the headline and tone were changed a bit by the editor; it's quite hard to tell with articles like this.

I wouldn't single out the author of this specific article. I think similar issues happen all the time. It's a highly common risk when allowing for media exposure, and a reason to possibly often be hesitant (though there are significant benefits as well).