Posts

Introducing Metaforecast: A Forecast Aggregator and Search Tool 2021-03-07T19:03:44.627Z
Forecasting Prize Results 2021-02-19T19:07:11.379Z
Announcing the Forecasting Innovation Prize 2020-11-15T21:21:52.151Z
Open Communication in the Days of Malicious Online Actors 2020-10-06T23:57:35.529Z
Ozzie Gooen's Shortform 2020-09-22T19:17:54.175Z
Expansive translations: considerations and possibilities 2020-09-18T21:38:42.357Z
How to estimate the EV of general intellectual progress 2020-01-27T10:21:11.076Z
What are words, phrases, or topics that you think most EAs don't know about but should? 2020-01-21T20:15:07.312Z
Best units for comparing personal interventions? 2020-01-13T08:53:12.863Z
Predictably Predictable Futures Talk: Using Expected Loss & Prediction Innovation for Long Term Benefits 2020-01-08T22:19:32.155Z
[Part 1] Amplifying generalist research via forecasting – models of impact and challenges 2019-12-19T18:16:04.299Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T16:36:10.564Z
Introducing Foretold.io: A New Open-Source Prediction Registry 2019-10-16T14:47:20.752Z
What types of organizations would be ideal to be distributing funding for EA? (Fellowships, Organizations, etc) 2019-08-04T20:38:10.413Z
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:23.576Z
What new EA project or org would you like to see created in the next 3 years? 2019-06-11T20:56:42.687Z
Impact Prizes as an alternative to Certificates of Impact 2019-02-20T21:25:46.305Z
Discussion: What are good legal entity structures for new EA groups? 2018-12-18T00:33:16.620Z
Current AI Safety Roles for Software Engineers 2018-11-09T21:00:23.318Z
Prediction-Augmented Evaluation Systems 2018-11-09T11:43:06.088Z
Emotion Inclusive Altruism vs. Emotion Exclusive Altruism 2016-12-21T01:40:45.222Z
Ideas for Future Effective Altruism Conferences: Open Thread 2016-08-13T02:59:02.685Z
Guesstimate: An app for making decisions with confidence (intervals) 2015-12-30T17:30:55.414Z
Is there a hedonistic utilitarian case for Cryonics? (Discuss) 2015-08-27T17:50:36.180Z
EA Assembly & Call for Speakers 2015-08-18T20:55:13.854Z
Deep Dive with Matthew Gentzel on Recently Effective Altruism Policy Analytics 2015-07-20T06:17:48.890Z
The first .impact Workathon 2015-07-09T07:38:12.143Z
FAI Research Constraints and AGI Side Effects 2015-06-07T20:50:21.908Z
Gratipay for Funding EAs 2014-12-24T21:39:53.332Z
Why "Changing the World" is a Horrible Phrase 2014-12-24T00:41:50.234Z

Comments

Comment by Ozzie Gooen (oagr) on Thoughts on being overqualified for EA positions · 2021-05-04T15:39:28.540Z · EA · GW

Agreed. Also, there are a lot of ways we could pay for prestige, like branding and marketing, that could make things nicer for new employees.

Comment by Ozzie Gooen (oagr) on Thoughts on being overqualified for EA positions · 2021-05-02T02:08:34.528Z · EA · GW

I just wanted to flag one possible failure mode.

I've come across a few people who said that "getting management experience", for the purpose of eventually helping with direct work, was a big part of their reason for not doing direct work right away. So far I haven't seen these people ever get into direct work. I think it can be great for earning to give, but I'm personally skeptical of its applicability to direct work.

From what I've seen, the skills one needs to lead EA organizations are fairly distinct, and doing so often requires a lot of domain-specific knowledge that takes time to develop. Relatedly, much of the experience I see people in management getting seems to consist of domain-specific skills not relevant to direct work, or experience managing large teams with skills very different from what seems to be needed in direct work.

For example, in bio safety orgs, the #1 requirement of a manager is a lot of experience in the field, and the same is true (maybe more so) in much of AI safety. 

I think non-direct-work management tracks can be great for earning to give, as long as that's what's intended.

Comment by Ozzie Gooen (oagr) on Thoughts on being overqualified for EA positions · 2021-05-01T16:49:23.202Z · EA · GW

Thanks for this, I feel like I've seen this too.

I'm 30 now, and I feel like several of my altruistic-minded friends in my age group in big companies are reluctant to work in nonprofits for stated reasons that feel off to me. 

My impression is that the EA space is quite small now, but has the potential to get quite a bit bigger later on. People who are particularly promising and humble enough to work in such a setting (this is a big restriction) sometimes rise up quickly.

I think a lot of people look at initial EA positions and see them as pretty low status compared to industry jobs. I have a few responses here:
1) They can be great starting positions for people who want to do ambitious EA work. It's really hard to deeply understand how EA organizations work without working in one, even in (many, but not all) junior positions.
2) One incredibly valuable attribute of many effective people is a willingness to "do whatever it takes" (not in the sense of crossing ethical or legal lines). This sometimes means actual sacrifice; it sometimes means working in positions that would broadly be considered low status. Honestly, I regard this attribute as just as important as many aspects of skill and intelligence. Some respected managers and executives are known for cleaning the floors or providing personal help to employees or colleagues, often because those things were highly effective at that moment, even if they might be low status. (Honestly, much of setting up or managing an organization is often highly glorified grunt work.)

Personally, I try to give extra appreciation to people in normally low-status positions; I think they are very commonly overlooked.

---

Separately, I'm really not sure how much to trust the reasons people give for their decisions. I'm sure many people who use the "overqualified" argument would be happy to be setting up early infrastructure with very few users for an Elon Musk venture, or building internal tooling for a few users at many well-run, high-paying, and prestigious companies.

Comment by Ozzie Gooen (oagr) on If I pay my taxes, why should I also give to charity? · 2021-04-19T03:34:12.243Z · EA · GW

Like Larks, I'm happy that work is being put into this. That said, I find this issue quite frustrating to discuss, because I think a fully honest discussion would take a lot more words than most people would have time for.

“Since I already pay my fair share in taxes, I don’t need to give to charity”

This is the sort of statement that has multiple presuppositions that I wouldn't agree with.

  • I pay my "fair share" in taxes
  • There's such a thing as a "fair share"
  • There is some fairly objective and relevant notion of what one "needs to do"

The phrase is about as alien to me, and as far from my belief system, as an argument saying,

"The alien Zordon transmits that Xzenefile means no charity."

One method of dealing with the argument above would be something like,

"Well, we know that Zordon previously transmitted Zerketeviz, which implies that signature Y12 might be relevant, so actually charity is valid."

But my preferred answer would be,
"First, I need you to end your belief in this Zordon figure".

The obvious problem is that this latter point would take a good amount of convincing, but I wanted to put this out there.

Comment by Ozzie Gooen (oagr) on As an EA, Should I renounce my US citizenship? · 2021-04-19T03:23:53.540Z · EA · GW

I'm not very familiar with investment options in the UK, but there are of course many investment options in the US. I believe that being a citizen of the US helps a fair bit for some of these options. 

My impression is that getting full citizenship of both the US and the UK is generally extremely difficult, so I imagine that ever changing your mind would be quite a challenge.

One really nice benefit of having both citizenships is that it gives you a lot of flexibility. If either country suddenly becomes much more preferable for some reason or another (imagine some tail risk, like a political disaster of some sort), you have the option of easily going to the other.

You also need to account for how the US might treat you if you do renounce citizenship. My impression is that they can be quite unfavorable to those who do this (particularly if they think it's for tax reasons), whether by going after these people's assets, making it difficult to come back to the US for any reason, or other measures.

I would be very hesitant to renounce citizenship of either country until you really do a fair amount of research on the downsides.

https://foreignpolicy.com/2012/05/17/could-eduardo-saverin-be-barred-from-the-u-s-for-life/

Comment by Ozzie Gooen (oagr) on "Good judgement" and its components · 2021-04-17T22:03:53.573Z · EA · GW

I've been thinking about this topic recently. One question that comes to mind: How much of Good Judgement do you think is explained by g/IQ? My quick guess is that they are heavily correlated. 

My impression is that people with "good judgement" match closely with the people that hedge funds really want to hire as analysts, or who make strong executives or product managers.

Comment by Ozzie Gooen (oagr) on Is there evidence that recommender systems are changing users' preferences? · 2021-04-15T03:16:36.772Z · EA · GW

(1) The difference between preferences and information seems like a thin line to me. When groups are divided about abortion, for example, which cluster would that fall into? 

It feels fairly clear to me that the media facilitates political differences, as I'm not sure how else these could be relayed to the extent they are (direct friends/family is another option, but wouldn't explain quick and correlated changes in political parties). 

(2) The specific issue of prolonged involvement doesn't seem hard to believe. People spend lots of time on Youtube. I've definitely gotten lots of recommendations to the same clusters of videos. There are only so many clusters out there.

All that said, my story above is fairly different from Stuart's. I think his is more of "these algorithms are a fundamentally new force with novel mechanisms of preference change". My claim is that media sources naturally change the preferences of individuals, so of course if algorithms have control in directing people to media sources, this will be influential in preference modification. Here "preference modification" basically means, "I didn't use to be an intense anarcho-capitalist, but then I watched a bunch of the videos, and now I identify strongly with the movement."

However, the issue of "how much do news organizations actively optimize preference modification for the purposes of increasing engagement, either intentionally or unintentionally?" is more vague.

Comment by Ozzie Gooen (oagr) on Is there evidence that recommender systems are changing users' preferences? · 2021-04-13T03:05:10.535Z · EA · GW

There's a lot of anecdotal evidence that news organizations essentially change users' preferences. The fundamental story is quite similar. It's not clear how intentional this is, but there seem to be many cases of people becoming more extreme after watching/reading the news (now that I think about it, this seems like a major factor in most of these situations).

I vaguely recall Matt Taibbi complaining about this in the book Hate Inc. 

https://www.amazon.com/Hate-Inc-Todays-Despise-Another/dp/B0854P6WHH/ref=sr_1_3?dchild=1&keywords=Matt+Taibbi&qid=1618282776&sr=8-3

Here are a few related links:

https://nymag.com/intelligencer/2019/04/i-gathered-stories-of-people-transformed-by-fox-news.html
https://www.salon.com/2018/11/23/can-we-save-loved-ones-from-fox-news-i-dont-know-if-its-too-late-or-not/

If it turns out that news channels change preferences, it seems like a small leap to suggest that recommender algorithms that get people onto news programs lead to changing their preferences. Of course, one should have evidence about the magnitude and so on.

Comment by Ozzie Gooen (oagr) on What are the highest impact questions in the behavioral sciences? · 2021-04-07T15:09:57.422Z · EA · GW

I've done a bit of thinking on this topic, main post here:
https://www.lesswrong.com/posts/vCQpJLNFpDdHyikFy/are-the-social-sciences-challenging-because-of-fundamental

I'm most excited about fundamental research in the behavioral sciences, just ideally done much better. I think the work of people like Joseph Henrich/David Graeber/Robin Hanson was useful and revealing. It seems to me like right now our general state of understanding is quite poor, so what I imagine as minor improvements in particular areas feel less impactful than just better overall understanding.

Comment by Ozzie Gooen (oagr) on A Comparison of Donor-Advised Fund Providers · 2021-04-05T21:58:13.593Z · EA · GW

This looks really useful, many thanks for the writeup. I'd note that I've been using Vanguard for regular investments and found the website annoying and the customer support quite bad; there would be long periods where they wouldn't offer any support because things were "too crowded". I think most people underestimate the value of customer support, in part because it is most valuable in tail-end situations.

Some quick questions:
- Are there any simple ways of making investments in these accounts that offer 2x leverage or more? Are there things here that you'd recommend?
- Do you have an intuition around when one should set up a Donor-Advised Fund? If there are no minimums, should you set one up once you hit, say, $5K in donations that won't be spent in a given tax year?
- How easy is it for others to invest in one's Donor-Advised Fund? Like, would it be really easy to set up your own version of EA Funds?

Comment by Ozzie Gooen (oagr) on Announcing "Naming What We Can"! · 2021-04-02T03:47:49.181Z · EA · GW

I think the phrases "Research Institute", and in particular "...Existential Risk Institute", are a best practice and should be used much more frequently.

Centre for Effective Altruism -> Effective Altruism Research Institute (EARI)
Open Philanthropy -> Funding Effective Research Institute (FERI)
GiveWell -> Short-termist Effective Funding Research Institute (SEFRI)
80,000 Hours -> Careers that are Effective Research Institute (CERI)
Charity Entrepreneurship -> Charity Entrepreneurship Research Institute (CERI 2)
Rethink Priorities -> General Effective Research Institute (GERI)
Center for Human-Compatible Artificial Intelligence -> Berkeley University AI Research Institute (BUARI)
CSER -> Cambridge Existential Risk Institute (CERI 3)
LessWrong -> Blogging for Existential Risk Institute (BERI 2)
Alignment Forum -> Blogging for AI Risk Institute (BARI)
SSC -> Scott Alexander's Research Institute (SARI)
 

Comment by Ozzie Gooen (oagr) on New Top EA Causes for 2021? · 2021-04-02T00:45:21.022Z · EA · GW

Maybe, Probabilistically Good?

Comment by Ozzie Gooen (oagr) on Some quick notes on "effective altruism" · 2021-03-25T03:09:58.478Z · EA · GW

I think this is a good point. That said, I imagine it's quite hard to really tell. 

Empirical data could be really useful to get here. We could run online experiments in simple cases, or maybe even have some university chapters try out different names and see if we can infer any substantial differences.

Comment by Ozzie Gooen (oagr) on I scraped all public "Effective Altruists" Goodreads reading lists · 2021-03-25T02:21:44.106Z · EA · GW

This is really neat. I think in a better world analysis like this would be done by Goodreads and updated on a regular basis. Hopefully the new API changes won't make it more difficult to do this sort of work in the future.

Comment by Ozzie Gooen (oagr) on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-18T06:39:20.752Z · EA · GW

I'd also note that the larger goals are to scale in non-human ways. If we have a bunch of examples, we could:

1) Open this up to a prediction-market style setup, with a mix of volunteers and possibly inexpensive hires.
2) As we get samples, some people could use data analysis to make simple algorithms to estimate the value of many more documents (a toy sketch of this follows the list).
3) We could later use ML and similar to scale this further.
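
As a toy illustration of step 2 (my own hypothetical sketch, not anything we've built): once evaluators have hand-scored a sample of posts, a very simple model could be fit on cheap features to roughly estimate the value of posts that haven't been hand-evaluated. All of the numbers and features below are made up.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: a few posts that evaluators have hand-scored,
# plus cheap features we could compute for every post (karma, word count).
features = np.array([
    [120, 3000],   # [karma, word_count]
    [45, 1200],
    [300, 8000],
    [10, 600],
])
judged_value = np.array([8.0, 2.5, 20.0, 0.5])  # relative value estimates from evaluators

model = LinearRegression().fit(features, judged_value)

# Rough value estimates for posts nobody has hand-evaluated yet
new_posts = np.array([[80, 2000], [15, 900]])
print(model.predict(new_posts))
```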

So even if each item were rather time-costly right now, this might be an important step for later. If we can't even do this, with a lot of work, that would be a significant blocker.

https://www.lesswrong.com/posts/kMmNdHpQPcnJgnAQF/prediction-augmented-evaluation-systems

Comment by Ozzie Gooen (oagr) on EA Funds is more flexible than you might think · 2021-03-10T04:40:53.584Z · EA · GW

From where I'm coming from, having seen bits of many sides of this issue, I think the average quality of donors matters more than the quantity of donors.

Traits of mediocre donors (including "good" donors with few resources):
- They don't hunt for great opportunities
- They produce high amounts of noise/randomness in their results
- They are strongly overconfident in some weird ways
- They have poor resolution, meaning they will not be able to choose targets much better than light common-sense wisdom
- They are difficult, time-consuming, and opaque to work with
- They are not very easy to understand or predict

If one particular person not liking you for an arbitrary reason (uncorrelated overconfidence) stops you from getting funding, that would be a sign of a mediocre donor.

If we had a bunch of these donors, the chances would go up for some nonprofits. Different donors could be overconfident in different ways, leading to more groups being above or below different bars. Some bad nonprofits would be happy, because the noise could increase their chances of getting funding. But I think this is a pretty mediocre world overall.

Of course, one could argue that a given donor base isn't that good, so more competition is likely to result in better donors. I think competition can be quite healthy and result in improvements in quality. So, more organizations can be good, but for different reasons, and only insofar as they result in better quality.

Similar to Jonas, I'd like to see more great donors join the fray, both by joining the existing organizations and helping them, and by making some new large funds.

Comment by Ozzie Gooen (oagr) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-08T07:34:29.429Z · EA · GW

On the first part:
The main problem that I'm worried about isn't that the terminology is different (most of these questions use fairly basic terminology so far), but rather that there is no order to all the questions. This means that readers have very little clue what kinds of things are forecasted.

Wikidata does a good job of having a semantic structure where if you want any type of fact, you know where to look. Compare this page on Barack Obama to a long list of facts, some about Obama, some about Obama and one or two other people, all somewhat randomly written and ordered. See the semantic web or discussions of web ontologies for more on this subject.

I expect that questions will eventually follow a much more semantic structure, and correspondingly, there will be far more questions at some points in the future. 

On the second part:
By public dashboards, I mean a rather static webpage that shows one set of questions, but includes the most recent data about them. There have been a few of these done so far. These are typically optimized for readers, not forecasters.
See:
https://goodjudgment.io/superforecasts/#1464
https://pandemic.metaculus.com/dashboard#/global-epidemiology

These are very different from Metaforecast because they have different features. Metaforecast has thousands of different questions, and allows one to search by them, but it doesn't show historic data and it doesn't have curated lists. The dashboards, in comparison, have these features, but are typically limited to a very specific set of questions.

Comment by Ozzie Gooen (oagr) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-08T07:29:47.697Z · EA · GW

This whole thing is a somewhat tricky issue and one I'm surprised hasn't been discussed much before, to my knowledge. 

"But there's not yet enough data to allow that."

One issue here is that measurement is very tricky, because the questions are all over the place. Different platforms have very different questions of different difficulties. We don't yet really have metrics that compare forecasts among different sets of questions. I imagine historical data will be very useful, but extra assumptions would be needed.

We're trying to get at some question-general stat of basically, "expected score (which includes calibration + accuracy) adjusted for question difficulty."

One question this would be answering is, "If Question A is on two platforms, should you trust the one with more stars?"
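
To make this slightly more concrete, here is a minimal sketch of the kind of adjustment I have in mind (a toy illustration with made-up data, not how Metaforecast actually computes its stars): score each resolved binary forecast with a Brier score, and subtract the per-question average across platforms as a crude control for question difficulty.

```python
import numpy as np

def brier_score(prob, outcome):
    """Squared error of a probability forecast against a binary outcome (lower is better)."""
    return (prob - outcome) ** 2

def difficulty_adjusted_scores(forecasts):
    """forecasts: list of dicts with 'question', 'platform', 'prob', 'outcome' keys.
    Returns each platform's mean Brier score relative to the per-question average,
    so platforms aren't penalized just for taking on harder questions."""
    by_question = {}
    for f in forecasts:
        by_question.setdefault(f["question"], []).append(brier_score(f["prob"], f["outcome"]))
    question_avg = {q: np.mean(scores) for q, scores in by_question.items()}

    by_platform = {}
    for f in forecasts:
        relative = brier_score(f["prob"], f["outcome"]) - question_avg[f["question"]]
        by_platform.setdefault(f["platform"], []).append(relative)
    return {p: float(np.mean(r)) for p, r in by_platform.items()}

# Made-up example: two platforms forecasting the same two resolved questions
forecasts = [
    {"question": "Q1", "platform": "A", "prob": 0.8, "outcome": 1},
    {"question": "Q1", "platform": "B", "prob": 0.6, "outcome": 1},
    {"question": "Q2", "platform": "A", "prob": 0.3, "outcome": 0},
    {"question": "Q2", "platform": "B", "prob": 0.5, "outcome": 0},
]
print(difficulty_adjusted_scores(forecasts))  # more negative = better than average on shared questions
```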

Comment by Ozzie Gooen (oagr) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-08T07:22:13.839Z · EA · GW

It's possible we have different definitions of ok. 

I have worked with browser extensions before and found them to be a bit of a pain. You often have to do custom work for Safari, Firefox, and Google Chrome. Browsers change their standards, so you have to maintain and update the extensions in annoying ways at different times.

Perhaps more importantly, the process of trying to figure out what the important text of different webpages is, and then finding some semantic similarities to match questions, seems tricky to do well enough to be worthwhile. I can imagine a lot of very hacky approaches that would just be annoying most of the time.

I was thinking of something that would be used by, say, 30 to 300 people who are doing important work. 

Comment by Ozzie Gooen (oagr) on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-08T07:18:58.566Z · EA · GW

Thanks! If you have requests for Metaforecast, do let us know! 
 

Comment by Ozzie Gooen (oagr) on Forecasting Prize Results · 2021-02-24T06:57:05.169Z · EA · GW

Good to hear, and thanks for the thoughts!

Another way we could have phrased things would have been,
"This post was useful in ways X,Y, and Z. If it would have done things A,B, and C it would be been even more useful."

It's always possible to have done more. Some of the entries were very extensive. My guess is that you did a pretty good job per unit of time in particular. I'd think of the comments as things to think about for future work.

And again, nice work, and congratulations!

Comment by Ozzie Gooen (oagr) on Big List of Cause Candidates · 2021-02-18T04:12:41.722Z · EA · GW

My point was just that understanding the expected impact seems more challenging. I'd agree that the short-term impacts of those kinds of things are much easier to understand, but it's tricky to tell how they will impact things 200+ years from now.

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-20T06:38:33.039Z · EA · GW

Happy to hear you're looking for things that could scale, I'd personally be particularly excited about those opportunities. 

I'd guess that internet-style things could scale particularly well; like the Forum / EA Funds / online models, etc., but that's also my internet background talking :). That said, things could be different if it makes sense to focus on a very narrow but elite group.

I agree that a group should scale staff only after finding a scalable opportunity.

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-20T06:35:00.973Z · EA · GW

Thanks!

Maybe I misunderstood this post. You wrote,

"Therefore, we want to let people know what we're not doing, so that they have a better sense of how neglected those areas are."

When you said this, what timeline were you implying? I would imagine that if there were a new nonprofit focusing on a subarea mentioned here they would be intending to focus on it for 4-10+ years, so I was assuming that this post meant that CEA was intending to not get into these areas on a 4-10 year horizon. 

Were you thinking of more of a 1-2 year horizon? I guess this would be fine as long as you're keeping in communication with other potential groups who are thinking about these areas, so we don't have a situation where there's a lot of overlapping (or worse, competing) work all of a sudden.

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-20T06:30:53.245Z · EA · GW

Thanks for the diagrams and explanation!

I think when I see the diagrams, I think of these as "low overhead roles" vs "high overhead roles", where low overhead roles have peak marginal value much earlier than high overhead roles. If one is interested in scaling work, and assuming that requires also scaling labor, then scalable strategies would be ones with many low overhead roles, similar to your second diagram of "CEA in the Future".

That said, my main point above wasn't that CEA should definitely grow, but that if CEA is having trouble/hesitancy/it-isn't-ideal growing, I would expect the strategy of "encouraging a bunch of new external nonprofits" to be limited in potential.

If CEA thinks it could help police new nonprofits, that would also take Max's time or similar; the management time is coming from the same place, it's just being used in different ways and there would ideally be less of it. 

In the back of my mind, I'm thinking that OpenPhil theoretically has access to +$10Bil, and hypothetically much of this could go towards promotion of EA or EA-related principles, but right now there's a big bottleneck here. I could imagine it making sense to be rather okay with wasting a fair bit of money and doing quite unusual things in order to get expansion to work somehow.

Around CEA and related organizations in particular, I am a bit worried that not all of the value of taking in good people is transparent. For example, if an org takes in someone promising and trains them up for 2 years, and then they leave for another org, that could have been a huge positive externality, but I'd bet it would get overlooked by funders. I've seen this happen previously. Right now it seems like there are a bunch of rather young EAs who really could use some training, but there are relatively few job openings, in part because existing orgs are quite hesitant to expand. 

I imagine that hypothetically this could be an incredibly long conversation, and you definitely have a lot more inside knowledge than I do. I'd like to personally do more investigation to better understand what the main EA growth constraints are; we'll see about this.

One thing we could make tractable progress on is forecasting movement growth and these other things. I don't have specific questions in mind at the moment, but if you ever have ideas, do let me know, and we could see about developing them into questions on Metaculus or similar. I imagine having a group understanding of total EA movement growth could help a fair bit and make conversations like this more straightforward.

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-18T23:52:10.499Z · EA · GW

Thanks for all the responses!

I've thought about this a bit more. Perhaps the crux is something like this:

From my (likely mistaken) read of things, the community strategy seems to want something like:
1) CEA doesn't expand its staff or focus greatly in the next 3-10 years.
2) CEA is able to keep essential control and ensure quality of community expansion in the next 3-10 years.
3) We have a great amount of EA meta / community growth in the next 3-10 years.

I could understand strategies where one of those three is sacrificed for the other two, but having all three sounds quite tricky, even if it would be really nice ideally.

The most likely way I could see (3) and (1) both happening is if there is some new big organization that comes in and gains a lot of control, but I'm not sure if we want that. 

My impression is that (3) is the main one to be restricted. We could try encouraging some new nonprofits, but it seems quite hard to me to imagine a whole bunch being made quickly in ways we would be comfortable with (rather than actively afraid of), especially without a whole lot of oversight.

I think it's totally fine, and normally necessary (though not fun) to accept some significant sacrifices as part of strategic decision making. 

I don't particularly have an opinion on which of the three should be the one to go.

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-18T23:43:42.922Z · EA · GW

Thanks for the details and calculation of GW.

It's of course difficult to express a complete worldview in a few (even long) comments. To be clear, I definitely acknowledge that hiring has substantial costs (I haven't really done it yet for QURI), and is not right for all orgs, especially at all times. I don't think that hiring is intrinsically good or anything.

I also agree that being slow, in the beginning in particular, could be essential. 

All that said, I think something like "ability to usefully scale" is a fairly critical factor in success for many jobs other than, perhaps, theoretical research. I think the success of OpenPhil will be profoundly bottlenecked if it can't find some useful ways to scale much further (this could even be by encouraging many other groups). 

It could take quite a while of "staying really small" to "be able to usefully scale", but "be able to usefully scale" is one of the main goals I'd want to see. 

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-18T23:34:17.288Z · EA · GW

Having been in the startup scene, I'd say the wisdom there is a bit of a mess.

It's clear that the main goal of early startups is to identify "product market fit", which to me seems like, "an opportunity that's exciting enough to spend effort scaling". 

Startups "pivot" all the time. (See The Lean Startup, though I assume you're familiar) 

Startups also experiment with a bunch of small features, listen to what users want, and ideally choose some to focus on. For instance, Instagram started with a general-purpose app; from this they found out that users just really liked the photo feature, so they removed the other stuff and just focussed on that. AirBnB started out in many cities, but was later encouraged to focus on one; in part because of their expertise (I imagine), they were able to make a good decision.

It's a known bug for startups to scale before "product market fit", or scale poorly (bad hires), both of which are quite bad.

However, it's definitely the intention of basically all startups to eventually get to the point where they have an exciting and scalable opportunity, and then to expand.

Comment by Ozzie Gooen (oagr) on Big List of Cause Candidates · 2021-01-17T23:05:11.945Z · EA · GW

"I'm not sure why your instinct is to go by your own experience or ask some other people. This seems fairly 'un-EA' to me and I hope whatever you're doing regarding the scoring doesn't take this approach."

From where I'm sitting, asking other people is fairly in line with what many EAs do, especially on longtermist things. We don't really have RCTs around AI safety, governance, or bio risks, so we instead do our best with reasoned judgements. 

I'm quite skeptical of taking much from scientific studies on many kinds of questions, and I know this is true for many other members of the community. Scientific studies are often very narrow in scope, don't cover the thing we're really interested in, and often don't even replicate.

My guess is that if we were to show your previous blog post, as is, to several senior/respected EAs at OpenPhil/FHI and similar orgs, they'd be similarly skeptical to Nuño here.

All that said, I think there are more easily-arguable proposals around yours (or arguably, modifications of yours). It seems obviously useful to make sure that Effective Altruists have good epistemics and that there are initiatives in place to help teach them these. This includes work in philosophy. Many EA researchers spend quite a while learning about philosophy.

I think people are already bought into the idea of basically teaching important people how to think better. If large versions of this could be expanded upon, they seem like they could be large cause candidates that there could be buy-in for.

For example, in-person schools seem expensive, but online education is much cheaper to scale. Perhaps we could help subsidize or pay a few podcasters or Youtubers or similar to teach people the parts of philosophy that are great for reasoning. We could also target whoever is most important, and carefully select the material that seems most useful. Ideally we could find ways to get relatively strong feedback loops, like creating tests that indicate one's epistemic abilities and measuring educational interventions on such tests.

Comment by Ozzie Gooen (oagr) on Big List of Cause Candidates · 2021-01-17T22:26:51.189Z · EA · GW

Yes, fleshing out the whole comment, basically.

Comment by Ozzie Gooen (oagr) on A Funnel for Cause Candidates · 2021-01-17T22:01:07.544Z · EA · GW

Great point.

I think my take is that evaluation and ranking often really make sense for very specific goals. Otherwise you get the problem of evaluating an airplane using the metrics of a washing machine.

This post was rather short. I think if the funnel were developed with more capacity, it would have to be clarified that it has a very particular goal in mind. In this case, the goal would be "identifying targets that could be entire nonprofits".

We've discussed organizing cause areas that could make sense for smaller projects, but one problem with that is that the number of possible candidates in that case goes up considerably. It becomes a much messier problem to organize the space of possible options for any kind of useful work. If you have good ideas for this, please do post!

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-17T05:30:45.845Z · EA · GW

Hi Max,

Thanks for clarifying your reasoning here.

Again, if you think CEA shouldn’t expand, my guess is that it shouldn’t.

I respect your opinion a lot here and am really thankful for your work.

I think this is a messy issue. I tried clarifying my thoughts for a few hours. I imagine what’s really necessary is broader discussion and research into expectations and models of the expansion of EA work, but of course that’s a lot of work. Note that I'm not particularly concerned with CEA becoming big; I'm more concerned with us aiming for some organizations to be fairly large. 

Feel free to ignore this or just not respond. I hope it might provide information on a perspective, but I’m not looking for any response or to cause controversy.

What is organization centrality?

This is a complex topic, in part because the concept of "organizations" is a slippery one. I imagine what really matters is something like "coordination ability", which typically requires some kind of centralization of power. My impression is that there's a lot of overlap in donors and advisors around the groups you mention. If a few people call all the top-level shots (like funding decisions), then "one big organization" isn't that different from a bunch of small ones. I appreciate the point about operations sharing; I'm sure there are some organizations that have had subprojects that have shared fewer resources than what you described. It's possible to be very decentralized within an organization (think of a research lab with distinct product owners) and to be very centralized within a collection of organizations.

Ideally I’d imagine that the choice of coordination centralization would be quite separate from that about the formal Nonprofit structure. You’re already sharing operations in an unconventional way. I could imagine cases where it could makes sense to have many nonprofits under a single ownership (even if this ownership is not legally binding), perhaps to help for targeted fundraising or to spread out legal liability. I know many people and companies own several sub LLCs and similar, I could see this being the main case.  

“We will continue to do some of the work we currently do to help to coordinate different parts of the community - for instance the EA Coordination Forum (formerly Leaders Forum), and a lot of the work that our community health team do. The community health team and funders (e.g. EA Funds) also do work to try to minimize risks and ensure that high-quality projects are the ones that get the resources they need to expand.“

-> If CEA is vetting which projects get made and expand, and hosts community health and other resources, then it's not *that* much different from formally bringing these projects in under its wing. I imagine finding a structure where CEA continues to offer organizational and coordination services as the base of organizations grows will be pretty tricky.

Again, what I would like to see is lots of "coordination ability", and I expect that this could go further with centralization of power combined with the capacity to act on it. (I could imagine funders who technically have authority, but don't have the time to do much that's useful with it.) It's possible that if CEA (or another group) is able to be a dominant decision maker, and perhaps grow that influence over time, then that would represent centralized control of power.

 

What can we learn from the past?

I’ve heard of the histories of CEA and 80,000 Hours being used in this way before. I agree with much of what you said here, but am unsure about the interpretations. What’s described is a very small sample size and we could learn different kinds of lessons from them.

Most of the non-EA organizations that I could point to that have important influence in my life are much bigger than 20 people. I’m very happy Apple, Google, The Bill & Melinda Gates Foundation, OpenAI, Deepmind, The Electronic Frontier Foundation, Universities, The Good Food Institute, and similar, exist.

It’s definitely possible to have too many goals, but that’s relative to size and existing ability. It wouldn’t have made sense for Apple to start out making watches and speakers, but it got there eventually, and is now doing a pretty good job at it (in my opinion). So I agree that CEA seems to have over-applied itself, but don’t think that means it shouldn’t be aiming to grow later on.

Many companies have had periods where they’ve diversified too quickly and suffered. Apple, famously, before Jobs came back, Amazon apparently had a period post-dot-com bubble, arguably Google with Google X, the list goes on and on. But I’m happy these companies eventually fixed their mistakes and continued to expand.

 

“Many Small EA Orgs”

“I hope for a world where there are lots of organizations doing similar things in different spaces… I think we’re better off focusing on a few goals and letting others pick up other areas….”

I like the idea of having lots of organizations, but I also like the idea of having at least some really big organizations. The Good Food Institute now seems to have a huge team and was just created a few years ago, and they seem to correspondingly be taking on big projects.

I’m happy that we have few groups that coordinate political campaigns. Those seem pretty messy. True, the DNC in the US might have serious problems, but I think the answer would be a separate large group, not hundreds of tiny ones.

I’m also positive about 80,000 Hours, but I feel like we should be hoping for at least some organizations (like The Good Food Institute) to have much better outcomes. 80,000 Hours took quite some time to get to where it is today (I think it started in around 2012?), and is still rather small in the scheme of things. They have around 14 full time employees; they seem quite productive, but not 2-5 orders of magnitude more than other organizations.  GiveWell seems much more successful; not only did they also grow a lot, but they convinced a Billionaire couple to help them spin off a separate entity which now is hugely important.  
 

The costs of organizational growth vs. new organizations

Trust of key figures
It seems much more challenging to me to find people I would trust as nonprofit founders than people I would trust as nonprofit product managers. Currently we have limited availability of senior EA leaders, so it seems particularly important to select people in positions of power who already understand what these leaders consider to be valuable and dangerous. If a big problem happens, it seems much easier to remove a PM than a nonprofit Executive Director or similar.

Ease
Founding requires a lot of challenging tasks like hiring, operations, and fundraising, which many people aren’t well suited to. I’m founding a nonprofit now, and have been having to learn how to set up a nonprofit and maintain it, which has been a major distraction. I’d be happier at this stage making a department inside a group that would do those things for me, even if I had to pay a fee.

It seems great that CEA did operations for a few other groups, but my impression is that you’re not intending to do that for many of the new groups you are referring to.

One related issue is that it can be quite hard for small organizations to get talent. Typically they have poor brands and tiny reputations. In situations where these organizations are actually strong (which should be many), having them be part of the bigger organization in brand alone seems like a pretty clear win. On the flip side, if some projects will be controversial or done poorly, it can be useful to ensure they are not part of a bigger organization (so they don't bring it down). 

Failure tolerance
Not having a "single point of failure" sounds nice in theory, but it seems to me that the funders are the main thing that matters, and they are fairly coordinated (and should be). If they go bad, then little in the way of reorganization will help us. If they're able to do a decent job, then they should help select leadership of big organizations that could do a good job, and/or help spin off decent subgroups in the case of emergencies.

I think generally effort going into “making sure things go well” is better than effort going into “making sure that disasters won’t be too terrible”; and that’s better achieved by focusing on sizable organizations.

Tolerance of smaller failures could also be worse with a distributed system; I expect it to be much easier to fire or replace a PM than to kick out a founder or move them around.
 

Expectations of growth

One question might be how ambitious we are regarding the growth of meta and longtermist efforts. I could imagine a world where we're 100x the size, 20 years from now, with a few very large organizations, but it's hard to imagine managing that many people with only tiny organizations.
 

TLDR

My read of your posts is that you are currently aiming for / expecting a future of EA meta where there are a bunch of very small (<20 person) organizations. This seems quite unusual compared to other similar movements I'm aware of. Very unusual actions often require much stronger cases than usual ones, and I don't yet see one. The benefits of having at least a few very powerful meta organizations seem greater than the costs.

I’m thankful for whatever work you decide to pursue, and more than encourage trying stuff out, like trying to encourage many small groups. I think I mainly wouldn’t want us to over-commit to any strategy like that though, and I also would like to encourage some more reconsideration, especially as new evidence emerges. 

Comment by Ozzie Gooen (oagr) on Things CEA is not doing · 2021-01-15T17:59:22.612Z · EA · GW

Happy to see this clarification, thanks for this post.

While I understand the reasoning behind this, part of me really wants to see some organization that can do many things here.

Right now things seem headed towards a world where we have a whole bunch of small/tiny groups doing specialized things for Effective Altruism. This doesn't feel ideal to me. It's hard to create new nonprofits, there's a lot of marginal overhead, and it will be difficult to ensure quality and prevent downside risks with a highly decentralized structure. It will make it very difficult to reallocate talent to urgent new programs. 

Perhaps CEA isn't well positioned now to become a very large generalist organization, but I hope that either that changes at some point, or other strong groups emerge here. It's fine to continue having small groups, but I really want to see some large (>40 people), decently general-purpose organizations around these issues.

Comment by Ozzie Gooen (oagr) on Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? · 2021-01-15T17:53:22.535Z · EA · GW

I've been thinking about this for a while. 

I've had decent experience with WorldBrain's Memex, while I haven't really enjoyed using hypothes.is as much. There are issues I have with Memex, but I'm more optimistic about it. They're adding collaboration functionality. I've talked to the CEO and they might be a good fit to work with; if it were the case that a few community members were bullish on it, I could see them listening to the community when deciding on features.

https://getmemex.com/

It's all a lot of work though. I'd love for there to be some sort of review site where EAs could review (or just upvote/downvote) everything. 

Comment by Ozzie Gooen (oagr) on Big List of Cause Candidates · 2020-12-30T04:41:56.892Z · EA · GW

I agree that I'd like to see more research on topics like these, but would flag that they seem arguably harder to do well than more standard X-risk research.

I think, from where I'm standing, direct, "normal" X-risk work is relatively easy to understand the impact of; a 0.01% reduction in the chance of an X-risk is a pretty simple thing. When you get into more detailed models it can be more difficult to estimate the total importance or impact, even though more detailed models are often overall better. I think there's a decent chance that 10-30 years from now the space would look quite different (similar to ways you mention) given more understanding (and propagation of that understanding) of more detailed models.

One issue regarding a Big List is figuring out what specifically should be proposed. I'd encourage you to write up a short blog post on this and we could see about adding it to this list or the next one :)

Comment by Ozzie Gooen (oagr) on Big List of Cause Candidates · 2020-12-30T04:33:37.892Z · EA · GW

The goal of this list was to be comprehensive, not opinionated. We're thinking about ways of doing ranking/evaluation (particularly with forecasting) going forward. I'd also encourage others to give it their own go; it's a tricky problem.

One reason to lean towards comprehensiveness is to make it more evident which causes are quite bad. I'm sure, given the number, that many of these causes are quite poor. Hopefully systematic analysis would both help identify these and make a strong case for their placement.

Comment by Ozzie Gooen (oagr) on How might better collective decision-making backfire? · 2020-12-29T03:06:40.614Z · EA · GW

Quickly chiming in:
I'd agree that this work is relatively value neutral, except for two main points:
1) It seems like those with good values are often rather prone to use better tools, and we could push things more into the hands of good actors than bad ones. Effective Altruists have been quick to adopt many of the best practices (Bayesian reasoning, Superforecasting, probabilistic estimation), but most other groups haven't.
2) A lot of "values" seem instrumental to me. I think this kind of work could help change the instrumental values of many actors, if it were influential. My current impression is that there would be some level of value convergence that would come with intelligence, though it's not clear how much of this would happen.

That said, it's of course possible that better decision-making could be used for bad cases. Hopefully our better decision making abilities as we go on this trajectory could help inform us as to how to best proceed :)

Comment by Ozzie Gooen (oagr) on Careers Questions Open Thread · 2020-12-11T00:32:12.335Z · EA · GW

I've been in tech for a while. That sounds a lot like management / "product management", or being an "intrapreneur".

If you want to be in charge of big projects at a tech-oriented venture, having a technical background can be really useful. You might also just want to look at the backgrounds of top managers at Elon Musk's companies. Most tech CEOs and managers I know of have majored in either software engineering or some hard science.

Hypothetically there could be some other major more focused on tech management than tech implementation, but in practice I don't know of one. It's really hard to teach management and often expected that those skills are ones you'll pick up later. 

I myself studied general engineering in college, but spent a fair amount of time on entrepreneurship and learning a variety of other things. Recently I've been more interested in history and philosophy. There's a lot of need and demand for good interdisciplinary people. But I'm happy I focused on math/science/engineering in college; those things seem much more challenging and useful to learn in a formal setting. I'd also recommend reading a lot of Hacker News / Paul Graham / entrepreneurship literature; that's often the best stuff on understanding how to make big things happen, but it's not taught well in school.

Also, I really wouldn't suggest getting too focused on Elon Musk or any other one person in particular. Often the most exciting things are small new ones by new founders. Also, hopefully in the next 5 to 20 years there will be many other great projects.

Comment by Ozzie Gooen (oagr) on Long-Term Future Fund: Ask Us Anything! · 2020-12-10T18:03:48.964Z · EA · GW

I agree that research organizations of the type that we see are particularly difficult to grow quickly.

My point is that we could theoretically focus more on other kinds of organizations that are more scalable. I could imagine there being more scalable engineering-heavy or marketing-heavy paths to impact on these problems. For example, setting up an engineering/data organization to manage information and metrics about bio risks. These organizations might have rather high upfront costs (and marginal costs), but are ones where I could see investing $10-100mil/year if we wanted. 

Right now it seems like our solution to most problems is "try to solve it with experienced researchers", which seems to be a tool we have a strong comparative advantage in, but not the only tool in the possible toolbox. It is a tool that's very hard to scale, as you note (I know of almost no organizations that have done this well). 
 

Separately,

"The other way to scale up is to get people to skill-up in areas with more scalable mentorship: e.g. just work on any AI research topic for your PhD where you can get good mentorship, then go work at an org doing more impactful work once you graduate. I think this is probably our best bet to absorb most additional junior talent right now. This may beat the 10-30% figure I gave, but we'd still have to wait 3-5 years before the talent comes on tap unfortunately."

I just want to flag that I think I agree, but also feel pretty bad about this. I get the impression that for AI many of the grad school programs are decent enough, but for other fields (philosophy, some of econ, things bio-related), grad school can be quite long-winded, demotivating, occasionally the cause of serious long-term psychological problems, and often distracting or actively harmful for alignment. It definitely feels like we should eventually be able to do better, but it might be a while.

Comment by Ozzie Gooen (oagr) on Long-Term Future Fund: Ask Us Anything! · 2020-12-09T22:22:34.353Z · EA · GW

"My view is that for most orgs, at least in the AI safety space, they can only grow by a relatively small (10-30%) rate per year while still providing adequate mentorship."

 

This might be a small point, but while I would agree, I imagine that strategically there are some possible orgs that could grow more quickly; and, by growing, could eventually dominate the funding.

I think one thing that's going on is that right now, due to funding constraints, individuals are encouraged to create organizations that are efficient when small, as opposed to efficient when large. I've made this decision myself. Doing the latter would require a fair amount of trust that large funders would later be interested in the organization at that scale. Right now it seems like we only have one large funder, which makes things tricky.

Comment by Ozzie Gooen (oagr) on Long-Term Future Fund: Ask Us Anything! · 2020-12-09T22:02:07.840Z · EA · GW

Thanks so much for this, that was informative. A few quick thoughts:

“Projects that result in a team covering a space or taking on some coordination role that is worse than the next person who could have come along”

I’ve heard this one before and I could sympathize with it,  but it strikes me as a red flag that something is going a bit wrong. ( I’m not saying that this is your fault, but am flagging it is an issue for the community more broadly.)  Big companies often don’t have the ideal teams for new initiatives.  Often urgency is very important so they put something together relatively quickly. If it doesn’t work well is not that big of a deal, it is to spend the team and have them go to other projects, and perhaps find better people to take their place.

In comparison, with nonprofits it's much more difficult. My read is that we sort of expect nonprofits to never die, which means we need to be *very very* sure about them before setting them up. But if this is the case, it would obviously be severely limiting. The obvious solution to this would be to have bigger orgs with more optionality. Perhaps if specific initiatives were going well and demanded independence, that could happen later on, but hopefully not in the first few years.

“I think it would be good to have scalable interventions for impact.” In terms of money, I’ve been thinking about this too. If this were a crucial strategy it seems like the kind of thing that could get a lot more attention. For instance, new orgs that focus heavily on ways to decently absorb a lot of money in the future.

Some ideas I’ve had:

- Experiment with advertising campaigns that could be clearly scaled up. Some of them seem linearly useful up to millions of dollars.

-  Add additional resources to make existing researchers more effective.

- Buy the rights to books and spend on marketing for the key ones.

- Pay for virtual assistants and all other things that could speed researchers up.

- Add additional resources to make nonprofits more effective, easily.

- Better budgets for external contractors.

- Focus heavily on funding non-EA projects that are still really beneficial. This could mean an emphasis on funding new nonprofits that do nothing but rank and do strategy for more funding.

While it might be a strange example, the wealthy, or in particular, the Saudi government are examples of how to spend lots of money with relatively few trusted people, semi-successfully.

Having come from the tech sector, in particular, it feels like there are often much more stingy expectations placed on EA researchers. 

Comment by Ozzie Gooen (oagr) on AMA: Jason Crawford, The Roots of Progress · 2020-12-09T21:36:25.520Z · EA · GW

Thanks so much for the comment. This is obviously a complicated topic so I won’t aim to be complete, but here are some thoughts.

"One challenge with epistemic, moral, and (I'll throw in) political ideas is that we've literally been debating them for 2,500 years and we still don't agree."

From my perspective, while we don't agree on everything, there has been a lot of advancement during this period, especially if one looks at pockets of intellectuals. The Ancient Greek schools of thought, the Renaissance, the Enlightenment, and the growth of atheism are examples of what seems like substantial progress (especially to people who have agreement with them, like myself).

I would agree that epistemic, moral, and political progress seems to be far slower than technological progress, but we definitely still have it and it seems more net positive. Real effort here also seems far more neglected. There are clearly a fair number of academics in these areas, but I think in terms of number of people, resources, and "get it done" abilities, regular technical progress has been strongly favored. This means that we may have less leverage, but the neglectedness could also mean that there are some really nice returns to highly competent efforts.

The second thing that I’d flag is that  it’s possible that advances in the Internet and AI could mean that progress in these areas become much more tractable in the next 10 to 100 years.

"I started by studying material progress because (1) it happened to be what I was most interested in and (2) it's the most obvious and measurable form of progress. But I think that material, epistemic and moral progress are actually tightly intertwined in the overall history of progress."

I think I mostly agree with you here, though I myself am less interested in technical progress. I agree that they can't be separated. This is all the more reason I would encourage you to emphasize them in future work of yours :-). I imagine any good study of epistemic and moral progress would include studies of technology for the reasons you mention. I'm not suggesting that you focus on epistemic and moral progress only, but rather that they could either be the primary emphasis where possible, or just a bit more emphasized here and there. Perhaps this could be a good spot to collaborate directly with Effective Altruist researchers.

I haven’t read Ord’s take on this, but the concept as you describe it strikes me as not quite right.

My take was written quickly, and I think your impression is very different from his take. In The Precipice, Toby Ord recommends that The Long Reflection happen as one of three phases, the first being "Reaching Existential Security". This would involve setting things up so that humanity has a very low chance of existential risk per year. It's hard for me to imagine what this would look like. There's not much written about it in the book. I imagine it would look very different to what we have now and probably require a fair amount more technological maturity. Having setups to ensure protections against existentially serious biohazards would be a precondition. I imagine there is obviously some trade-off between our technological abilities to make quick progress during the reflection, and the risks and speed of us getting there, but that's probably outside the scope of this conversation.

"In general, science, technology, infrastructure, and surplus wealth are a massive buffer against almost all kinds of risk. So to say that we should stop advancing those things in the name of safety seems wrong to me."

I agree that they are massively useful, but they also are massively risky. I'm sure that a lot of the advancements that we have are locally a net negative; otherwise it seems odd that we could have so many big changes but still have a world as challenging and messy as ours.

Some of science/technology/infrastructure/surplus wealth is obviously useful for getting us to Existential Security, and some is probably harmful. It's not really clear to me that average modern advancements are net-positive at this point (this is incredibly complicated to figure out!), but it seems clear that at least some are (though we might not be able to tell which ones).

Comment by Ozzie Gooen (oagr) on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T21:09:32.581Z · EA · GW

Thanks!

Comment by Ozzie Gooen (oagr) on An experiment to evaluate the value of one researcher's work · 2020-12-07T04:01:00.297Z · EA · GW

Yep, I think this is quite useful/obvious (if I understand it correctly). It would be work though :)

Comment by Ozzie Gooen (oagr) on An experiment to evaluate the value of one researcher's work · 2020-12-07T04:00:14.221Z · EA · GW

Good catch

Comment by Ozzie Gooen (oagr) on An experiment to evaluate the value of one researcher's work · 2020-12-07T03:59:21.429Z · EA · GW

That's quite useful, thanks

"In fact, one possibility would be to use the intuitive estimation approach on the work of one of the orgs/people who already have a bunch of this sort of data relevant to that work (after checking that the org/people are happy to have their work used for this process), and then look at the empirical data, and see how they compare."

This seems like a neat idea to me. We'll investigate it. 

Comment by Ozzie Gooen (oagr) on My mistakes on the path to impact · 2020-12-05T02:38:44.850Z · EA · GW

Congrats on the promotion! (after just 6.5 months? Kudos) Also thanks for the case study. I think as you pointed out, this is a bit different from some of the common advice, so it's particularly useful.

Comment by Ozzie Gooen (oagr) on WANBAM is accepting expressions of interest for mentees! · 2020-12-05T02:37:01.136Z · EA · GW

This looks really nice to me, thanks for all of your work here!

Comment by Ozzie Gooen (oagr) on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T16:43:48.432Z · EA · GW

As discussed in other comments, it seems that progress studies focuses mostly on economic and scientific progress, and these seem to come with risks as well as rewards. At the same time, particular aspects of progress seem safer; the progress of epistemics or morality, for example. Toby Ord wrote about the Long Reflection as a method of making a lot of very specific progress before focusing on other kinds. These things are more difficult to study but might be more valuable.

So my question is: have you spent much time considering epistemic and moral progress (and other abstract but safe aspects) as a thing to study? Do you have any thoughts on its viability?

(I've written a bit more here, but it's still relatively short). 

Comment by Ozzie Gooen (oagr) on Long-Term Future Fund: Ask Us Anything! · 2020-12-04T16:34:12.233Z · EA · GW

Can you clarify your models of which kinds of projects could cause net harm? My impression is that there are some thoughts that funding many things would be actively harmful, but I don't feel like I have a great picture of the details here.

If there are such models, are there possible structural solutions to identifying particularly scalable endeavors? I'd hope that we could eventually identify opportunities for long-term impact that aren't "find a small set of particularly highly talented researchers", but things more like, "spend X dollars advertising Y in a way that could scale" or "build a sizeable organization of people that don't all need to be top-tier researchers".