Posts

Announcing the Forecasting Innovation Prize 2020-11-15T21:21:52.151Z
Open Communication in the Days of Malicious Online Actors 2020-10-06T23:57:35.529Z
Ozzie Gooen's Shortform 2020-09-22T19:17:54.175Z
Expansive translations: considerations and possibilities 2020-09-18T21:38:42.357Z
How to estimate the EV of general intellectual progress 2020-01-27T10:21:11.076Z
What are words, phrases, or topics that you think most EAs don't know about but should? 2020-01-21T20:15:07.312Z
Best units for comparing personal interventions? 2020-01-13T08:53:12.863Z
Predictably Predictable Futures Talk: Using Expected Loss & Prediction Innovation for Long Term Benefits 2020-01-08T22:19:32.155Z
[Part 1] Amplifying generalist research via forecasting – models of impact and challenges 2019-12-19T18:16:04.299Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T16:36:10.564Z
Introducing Foretold.io: A New Open-Source Prediction Registry 2019-10-16T14:47:20.752Z
What types of organizations would be ideal to be distributing funding for EA? (Fellowships, Organizations, etc) 2019-08-04T20:38:10.413Z
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:23.576Z
What new EA project or org would you like to see created in the next 3 years? 2019-06-11T20:56:42.687Z
Impact Prizes as an alternative to Certificates of Impact 2019-02-20T21:25:46.305Z
Discussion: What are good legal entity structures for new EA groups? 2018-12-18T00:33:16.620Z
Current AI Safety Roles for Software Engineers 2018-11-09T21:00:23.318Z
Prediction-Augmented Evaluation Systems 2018-11-09T11:43:06.088Z
Emotion Inclusive Altruism vs. Emotion Exclusive Altruism 2016-12-21T01:40:45.222Z
Ideas for Future Effective Altruism Conferences: Open Thread 2016-08-13T02:59:02.685Z
Guesstimate: An app for making decisions with confidence (intervals) 2015-12-30T17:30:55.414Z
Is there a hedonistic utilitarian case for Cryonics? (Discuss) 2015-08-27T17:50:36.180Z
EA Assembly & Call for Speakers 2015-08-18T20:55:13.854Z
Deep Dive with Matthew Gentzel on Recently Effective Altruism Policy Analytics 2015-07-20T06:17:48.890Z
The first .impact Workathon 2015-07-09T07:38:12.143Z
FAI Research Constraints and AGI Side Effects 2015-06-07T20:50:21.908Z
Gratipay for Funding EAs 2014-12-24T21:39:53.332Z
Why "Changing the World" is a Horrible Phrase 2014-12-24T00:41:50.234Z

Comments

Comment by oagr on Things CEA is not doing · 2021-01-20T06:38:33.039Z · EA · GW

Happy to hear you're looking for things that could scale; I'd personally be particularly excited about those opportunities.

I'd guess that internet-style things (like the Forum, EA Funds, online models, etc.) could scale particularly well, but that's also my internet background talking :). That said, things could be different if it makes sense to focus on a very narrow but elite group.

I agree that a group should scale staff only after finding a scalable opportunity.

Comment by oagr on Things CEA is not doing · 2021-01-20T06:35:00.973Z · EA · GW

Thanks!

Maybe I misunderstood this post. You wrote,

"Therefore, we want to let people know what we're not doing, so that they have a better sense of how neglected those areas are."

When you said this, what timeline were you implying? I would imagine that if there were a new nonprofit focusing on a subarea mentioned here they would be intending to focus on it for 4-10+ years, so I was assuming that this post meant that CEA was intending to not get into these areas on a 4-10 year horizon. 

Were you thinking of more of a 1-2 year horizon? I guess this would be fine as long as you're keeping in communication with other potential groups who are thinking about these areas, so we don't have a situation where there's a lot of overlapping (or worse, competing) work all of a sudden.

Comment by oagr on Things CEA is not doing · 2021-01-20T06:30:53.245Z · EA · GW

Thanks for the diagrams and explanation!

When I see the diagrams, I think of these as "low overhead roles" vs. "high overhead roles", where "low overhead roles" reach peak marginal value much earlier than high overhead roles. If one is interested in scaling work, and assuming that requires also scaling labor, then scalable strategies would be ones with many low overhead roles, similar to your second diagram of "CEA in the Future".

That said, my main point above wasn't that CEA should definitely grow, but that if CEA is having trouble/hesitancy/it-isn't-ideal growing, I would expect the strategy of "encouraging a bunch of new external nonprofits" to be limited in potential.

If CEA thinks it could help police new nonprofits, that would also take Max's time or similar; the management time comes from the same place, it's just being used in different ways, and there would ideally be less of it.

In the back of my mind, I'm thinking that OpenPhil theoretically has access to $10B+, and hypothetically much of this could go towards promotion of EA or EA-related principles, but right now there's a big bottleneck here. I could imagine it making sense to accept wasting a fair bit of money and doing quite unusual things in order to get expansion to work somehow.

Around CEA and related organizations in particular, I am a bit worried that not all of the value of taking in good people is transparent. For example, if an org takes in someone promising and trains them up for 2 years, and then they leave for another org, that could have been a huge positive externality, but I'd bet it would get overlooked by funders. I've seen this happen previously. Right now it seems like there are a bunch of rather young EAs who really could use some training, but there are relatively few job openings, in part because existing orgs are quite hesitant to expand. 

I imagine that hypothetically this could be an incredibly long conversation, and you definitely have a lot more inside knowledge than I do. I'd like to personally do more investigation to better understand what the main EA growth constraints are; we'll see about this.

One thing we could make tractable progress on is forecasting movement growth or these other things. I don't have questions in mind at the moment, but if you ever have ideas, do let me know, and we could see about developing them into questions on Metaculus or similar. I imagine having a shared understanding of total EA movement growth could help a fair bit and make conversations like this more straightforward.

Comment by oagr on Things CEA is not doing · 2021-01-18T23:52:10.499Z · EA · GW

Thanks for all the responses!

I've thought about this a bit more. Perhaps the crux is something like this:

From my (likely mistaken) read of things, the community strategy seems to want something like:
1) CEA doesn't expand its staff or focus greatly in the next 3-10 years.
2) CEA is able to keep essential control and ensure quality of community expansion in the next 3-10 years.
3) We have a great amount of EA meta / community growth in the next 3-10 years.

I could understand strategies where one of those three is sacrificed for the other two, but having all three sounds quite tricky, even if it would be really nice ideally.

The most likely way I could see (3) and (1) both happening is if there is some new big organization that comes in and gains a lot of control, but I'm not sure if we want that. 

My impression is that (3) is the main one to be restricted. We could try encouraging some new nonprofits, but it seems quite hard to me to imagine a whole bunch being made quickly in ways we would be comfortable with (not actively afraid of), especially without a whole lot of oversight. 

I think it's totally fine, and normally necessary (though not fun) to accept some significant sacrifices as part of strategic decision making. 

I don't particularly have an opinion on which of the three should be the one to go.

Comment by oagr on Things CEA is not doing · 2021-01-18T23:43:42.922Z · EA · GW

Thanks for the details and calculation of GW.

It's of course difficult to express a complete worldview in a few (even long) comments. To be clear, I definitely acknowledge that hiring has substantial costs (I haven't really done it yet for QURI), and is not right for all orgs, especially at all times. I don't think that hiring is intrinsically good or anything.

I also agree that being slow, in the beginning in particular, could be essential. 

All that said, I think something like "ability to usefully scale" is a fairly critical factor in success for many jobs other than, perhaps, theoretical research. I think the success of OpenPhil will be profoundly bottlenecked if it can't find some useful ways to scale much further (this could even be by encouraging many other groups). 

It could take quite a while of "staying really small" to "be able to usefully scale", but "be able to usefully scale" is one of the main goals I'd want to see. 

Comment by oagr on Things CEA is not doing · 2021-01-18T23:34:17.288Z · EA · GW

Having been in the startup scene, I'd say the wisdom there is a bit of a mess.

It's clear that the main goal of early startups is to identify "product-market fit", which to me means something like "an opportunity that's exciting enough to spend effort scaling".

Startups "pivot" all the time. (See The Lean Startup, though I assume you're familiar) 

Startups also experiment with a bunch of small features, listen to what users want, and ideally choose some to focus on. For instance, Instagram started with a general-purpose app; from this they found out that users just really liked the photo feature, so they removed the other stuff and focused on that. Airbnb started out in many cities but was later encouraged to focus on one; in part because of their expertise (I imagine), they were able to make a good decision.

It's a known bug for startups to scale before "product market fit", or scale poorly (bad hires), both of which are quite bad.

However, it's definitely the intention of basically all startups to eventually get to the point where they have an exciting and scalable opportunity, and then to expand.

Comment by oagr on Big List of Cause Candidates · 2021-01-17T23:05:11.945Z · EA · GW

"I'm not sure why your instinct is to go by your own experience or ask some other people. This seems fairly 'un-EA' to me and I hope whatever you're doing regarding the scoring doesn't take this approach."

From where I'm sitting, asking other people is fairly in line with what many EAs do, especially on longtermist things. We don't really have RCTs around AI safety, governance, or bio risks, so we instead do our best with reasoned judgements. 

I'm quite skeptical of taking much from scientific studies on many kinds of questions, and I know this is true for many other members in the community. Scientific studies are often very narrow in scope, don't cover the thing we're really interested in, and often they don't even replicate. 

My guess is that if we were to show several senior/respected EAs at OpenPhil/FHI and similar your previous blog post, as is, they'd be similarly skeptical to Nuño here. 

All that said, I think there are more easily arguable proposals adjacent to yours (or arguably, modifications of yours). It seems obviously useful to make sure that Effective Altruists have good epistemics and that there are initiatives in place to help teach these. This includes work in philosophy; many EA researchers spend quite a while learning about philosophy.

I think people are already bought into the idea of basically teaching important people how to think better. If larger versions of this could be developed, they seem like they could be major cause candidates with potential buy-in.

For example, in-person schools seem expensive, but online education is much cheaper to scale. Perhaps we could help subsidize or pay a few podcasters or YouTubers or similar to teach people the parts of philosophy that are great for reasoning. We could also target the people who are most important, and carefully select the material that seems most useful. Ideally we could find ways to get relatively strong feedback loops, like creating tests that indicate one's epistemic abilities and measuring educational interventions against such tests.

Comment by oagr on Big List of Cause Candidates · 2021-01-17T22:26:51.189Z · EA · GW

Yes, fleshing out the whole comment, basically.

Comment by oagr on A Funnel for Cause Candidates · 2021-01-17T22:01:07.544Z · EA · GW

Great point.

I think my take is that evaluation and ranking often only really make sense relative to very specific goals. Otherwise you get the problem of evaluating an airplane using the metrics of a washing machine.

This post was rather short. I think if the funnel were developed further, it would have to be clarified that it has a very particular goal in mind. In this case, the goal would be "identifying targets that could be entire nonprofits".

We've discussed organizing cause areas that could make sense for smaller projects, but one problem with that is that the number of possible candidates in that case goes up considerably. It becomes a much messier problem to organize the space of possible options for any kind of useful work. If you have good ideas for this, please do post!

Comment by oagr on Things CEA is not doing · 2021-01-17T05:30:45.845Z · EA · GW

Hi Max,

Thanks for clarifying your reasoning here.

Again, if you think CEA shouldn’t expand, my guess is that it shouldn’t.

I respect your opinion a lot here and am really thankful for your work.

I think this is a messy issue. I tried clarifying my thoughts for a few hours. I imagine what’s really necessary is broader discussion and research into expectations and models of the expansion of EA work, but of course that’s a lot of work. Note that I'm not particularly concerned with CEA becoming big; I'm more concerned with us aiming for some organizations to be fairly large. 

Feel free to ignore this or just not respond. I hope it might provide information on a perspective, but I’m not looking for any response or to cause controversy.

What is organizational centrality?

This is a complex topic, in part because the concept of “organizations” is a slippery one. I imagine what really matters is something like “coordination ability”, which typically requires some kind of centralization of power. My impression is that there's a lot of overlap in donors and advisors around the groups you mention. If a few people call all the top-level shots (like funding decisions), then “one big organization” isn't that different from a bunch of small ones. I appreciate the point about operations sharing; I'm sure there are some organizations that have had subprojects that shared fewer resources than what you described. It's possible to be very decentralized within an organization (think of a research lab with distinct product owners) and to be very centralized within a collection of organizations.

Ideally I'd imagine that the choice of coordination centralization would be quite separate from the choice of formal nonprofit structure. You're already sharing operations in an unconventional way. I could imagine cases where it could make sense to have many nonprofits under a single ownership (even if this ownership is not legally binding), perhaps to help with targeted fundraising or to spread out legal liability. I know many people and companies own several sub-LLCs and similar; I could see this being the main case here.

“We will continue to do some of the work we currently do to help to coordinate different parts of the community - for instance the EA Coordination Forum (formerly Leaders Forum), and a lot of the work that our community health team do. The community health team and funders (e.g. EA Funds) also do work to try to minimize risks and ensure that high-quality projects are the ones that get the resources they need to expand.“

-> If CEA is vetting which projects get made and expand, and hosts community health and other resources, then it's not *that* much different from formally bringing these projects under its wing. I imagine finding a structure where CEA continues to offer organizational and coordination services as the base of organizations grows will be pretty tricky.

Again, what I would like to see is lots of “coordination ability”, and I expect that this would go further with centralization of power combined with the capacity to act on it. (I could imagine funders who technically have authority, but don't have the time to do much that's useful with it.) It's possible that if CEA (or another group) is able to be a dominant decision maker, and perhaps grow that influence over time, then that would represent centralized control of power.

 

What can we learn from the past?

I’ve heard of the histories of CEA and 80,000 Hours being used in this way before. I agree with much of what you said here, but am unsure about the interpretations. What’s described is a very small sample size and we could learn different kinds of lessons from them.

Most of the non-EA organizations that I could point to that have important influence in my life are much bigger than 20 people. I’m very happy Apple, Google, The Bill & Melinda Gates Foundation, OpenAI, Deepmind, The Electronic Frontier Foundation, Universities, The Good Food Institute, and similar, exist.

It's definitely possible to have too many goals, but that's relative to size and existing ability. It wouldn't have made sense for Apple to start out making watches and speakers, but it got there eventually, and is now doing a pretty good job at it (in my opinion). So I agree that CEA seems to have overextended itself, but I don't think that means it shouldn't be aiming to grow later on.

Many companies have had periods where they've diversified too quickly and suffered: Apple, famously, before Jobs came back; Amazon, apparently, in the period after the dot-com bubble; arguably Google with Google X; the list goes on and on. But I'm happy these companies eventually fixed their mistakes and continued to expand.

 

“Many Small EA Orgs”

“I hope for a world where there are lots of organizations doing similar things in different spaces… I think we’re better off focusing on a few goals and letting others pick up other areas….”

I like the idea of having lots of organizations, but I also like the idea of having at least some really big organizations. The Good Food Institute now seems to have a huge team despite being created just a few years ago, and they seem to correspondingly be taking on big projects.

I'm happy that we have only a few groups that coordinate political campaigns. Those seem pretty messy. True, the DNC in the US might have serious problems, but I think the answer would be a separate large group, not hundreds of tiny ones.

I'm also positive about 80,000 Hours, but I feel like we should be hoping for at least some organizations (like The Good Food Institute) to have much better outcomes. 80,000 Hours took quite some time to get to where it is today (I think it started in around 2012?), and is still rather small in the scheme of things. They have around 14 full-time employees; they seem quite productive, but not 2-5 orders of magnitude more so than other organizations. GiveWell seems much more successful; not only did they also grow a lot, but they convinced a billionaire couple to help them spin off a separate entity which is now hugely important.
 

The costs of organizational growth vs. new organizations

Trust of key figures
It seems much more challenging to me to find people I would trust as nonprofit founders than people I would trust as nonprofit product managers. Currently we have limited availability of senior EA leaders, so it seems particularly important to select people in positions of power who already understand what these leaders consider to be valuable and dangerous. If a big problem happens, it seems much easier to remove a PM than a nonprofit Executive Director or similar.

Ease
Founding requires a lot of challenging tasks like hiring, operations, and fundraising, which many people aren’t well suited to. I’m founding a nonprofit now, and have been having to learn how to set up a nonprofit and maintain it, which has been a major distraction. I’d be happier at this stage making a department inside a group that would do those things for me, even if I had to pay a fee.

It seems great that CEA did operations for a few other groups, but my impression is that you’re not intending to do that for many of the new groups you are referring to.

One related issue is that it can be quite hard for small organizations to get talent. Typically they have poor brands and tiny reputations. In situations where these organizations are actually strong (which should be many), having them be part of the bigger organization in brand alone seems like a pretty clear win. On the flip side, if some projects will be controversial or done poorly, it can be useful to ensure they are not part of a bigger organization (so they don't bring it down). 

Failure tolerance
Not having a “single point of failure” sounds nice in theory, but it seems to me that the funders are the main thing that matters, and they are fairly coordinated (and should be). If they go bad, then no amount of reorganization will help us much. If they're able to do a decent job, then they should help select leadership of big organizations that could do a good job, and/or help spin off decent subgroups in the case of emergencies.

I think generally effort going into “making sure things go well” is better than effort going into “making sure that disasters won’t be too terrible”; and that’s better achieved by focusing on sizable organizations.

Smaller failures could also be harder to handle in a distributed system; I expect it to be much easier to fire or replace a PM than to kick out a founder or move them around.
 

Expectations of growth

One question might be how ambitious we are regarding the growth of meta and longtermist efforts. I could imagine a world where we're 100x the size 20 years from now, with a few very large organizations, but it's hard to imagine managing that many people with only tiny organizations.
 

TLDR

My read of your posts is that you are currently aiming for / expecting a future of EA meta where there are a bunch of very small (<20 person) organizations. This seems quite unusual compared to other similar movements I'm aware of. Very unusual actions often require much stronger cases than usual ones, and I don't yet see one. The benefits of having at least a few very powerful meta organizations seem greater than the costs.

I'm thankful for whatever work you decide to pursue, and more than encourage trying things out, like fostering many small groups. I mainly wouldn't want us to over-commit to any strategy like that though, and I would also like to encourage some more reconsideration, especially as new evidence emerges.

Comment by oagr on Things CEA is not doing · 2021-01-15T17:59:22.612Z · EA · GW

Happy to see this clarification, thanks for this post.

While I understand the reasoning behind this, part of me really wants to see some organization that can do many things here.

Right now things seem headed towards a world where we have a whole bunch of small/tiny groups doing specialized things for Effective Altruism. This doesn't feel ideal to me. It's hard to create new nonprofits, there's a lot of marginal overhead, and it will be difficult to ensure quality and prevent downside risks with a highly decentralized structure. It will make it very difficult to reallocate talent to urgent new programs. 

Perhaps CEA isn't well positioned now to become a very large generalist organization, but I hope that either that changes at some point, or other strong groups emerge here. It's fine to continue having small groups, but I really want to see some large (>40 people), decently general-purpose organizations around these issues.

Comment by oagr on Should pretty much all content that's EA-relevant and/or created by EAs be (link)posted to the Forum? · 2021-01-15T17:53:22.535Z · EA · GW

I've been thinking about this for a while. 

I've had decent experience with WorldBrain's Memex, while I haven't really enjoyed using hypothes.is as much. There are issues I have with Memex, but I'm more optimistic about it. They're adding collaboration functionality. I've talked to the CEO and they might be a good fit to work with; if it were the case that a few community members were bullish on it, I could see them listening to the community when deciding on features.

https://getmemex.com/

It's all a lot of work though. I'd love for there to be some sort of review site where EAs could review (or just upvote/downvote) everything. 

Comment by oagr on Big List of Cause Candidates · 2020-12-30T04:41:56.892Z · EA · GW

I agree that I'd like to see more research on topics like these, but would flag that they seem arguably harder to do well than more standard X-risk research.

I think, from where I'm standing, direct "normal" X-risk work is relatively easy to understand the impact of; a 0.01% reduction in the chance of an X-risk is a pretty simple thing to value. When you get into more detailed models it can be more difficult to estimate the total importance or impact, even though more detailed models are often better overall. I think there's a decent chance that 10-30 years from now the space will look quite different (similar to the ways you mention) given more understanding (and propagation of that understanding) of more detailed models.
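To make that concrete, here's a minimal back-of-the-envelope sketch (all numbers are illustrative placeholders I'm adding, not estimates from this thread) of why the simple framing is easy to value, while more detailed models multiply together several uncertain parameters:

```python
# Illustrative only: every number below is a made-up placeholder.

# Simple framing: value = absolute risk reduction * value of avoiding catastrophe.
risk_reduction = 0.0001      # "0.01% less" chance of an existential catastrophe
value_if_avoided = 1.0       # normalize the value of avoiding catastrophe to 1

simple_value = risk_reduction * value_if_avoided
print(f"Simple model: {simple_value:.6f} (in units where avoiding catastrophe = 1)")

# A more detailed model chains several uncertain factors, which may be more
# accurate overall but is harder to estimate and communicate.
p_pathway_is_real = 0.5          # chance this risk pathway matters at all
p_intervention_works = 0.3       # chance the intervention actually helps
risk_reduction_if_both = 0.001   # risk reduction conditional on both of the above

detailed_value = p_pathway_is_real * p_intervention_works * risk_reduction_if_both
print(f"Detailed model: {detailed_value:.6f}")
```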

One issue regarding a Big List is figuring out what specifically should be proposed. I'd encourage you to write up a short blog post on this and we could see about adding it to this list or the next one :)

Comment by oagr on Big List of Cause Candidates · 2020-12-30T04:33:37.892Z · EA · GW

The goal of this list was to be comprehensive, not opinionated. We're thinking about ways of doing ranking/evaluation (particularly with forecasting) going forward. I'd also encourage others to give it their own go; it's a tricky problem.

One reason to lean towards comprehensiveness is to make it more evident which causes are quite bad. I'm sure, given the number, that many of these causes are quite poor. Hopefully systematic analysis would both help identify these and then make a strong case for their placement.

Comment by oagr on How might better collective decision-making backfire? · 2020-12-29T03:06:40.614Z · EA · GW

Quickly chiming in:
I'd agree that this work is relatively value-neutral, except for two main points:
1) It seems like those with good values are often more prone to use better tools, and we could push things more into the hands of good actors than bad ones. Effective Altruists have been quick to adopt many of the best practices (Bayesian reasoning, superforecasting, probabilistic estimation), but most other groups haven't.
2) A lot of "values" seem instrumental to me. I think this kind of work could help change the instrumental values of many actors, if it were influential. My current impression is that there would be some level of value convergence that would come with intelligence, though it's not clear how much of this would happen.

That said, it's of course possible that better decision-making could be used for bad purposes. Hopefully our improving decision-making abilities as we go along this trajectory will help inform us how best to proceed :)

Comment by oagr on Careers Questions Open Thread · 2020-12-11T00:32:12.335Z · EA · GW

I've been in tech for a while. That sounds a lot like management / "product management", or "intrapreneurs". 

If you want to be in charge of big projects at a tech-oriented venture, having a technical background can be really useful. You might also just want to look at the backgrounds of top managers at Elon Musk companies. Most tech CEOs and managers I know of have majored in either software engineering or some hard science.

Hypothetically there could be some other major more focused on tech management than tech implementation, but in practice I don't know of one. It's really hard to teach management and often expected that those skills are ones you'll pick up later. 

I myself studied general engineering in college, but spent a fair amount of time on entrepreneurship and learning a variety of other things. Recently I've been more interested in history and philosophy. There's a lot of need and demand for good interdisciplinary people. But I'm happy I focused on math/science/engineering in college; those things seem much more challenging and useful to learn in a formal setting. I'd also recommend reading a lot of  Hacker News / Paul Graham / entrepreneurship literature; that's often the best stuff on understanding how to make big things happen, but it's not taught well in school.

Also, I really wouldn't suggest getting too focused on Elon Musk or any other one person in particular.  Often the most exciting things are small new ones by new founders. Also, hopefully in the next 5 to 20 years there will be many other great projects.

Comment by oagr on Long-Term Future Fund: Ask Us Anything! · 2020-12-10T18:03:48.964Z · EA · GW

I agree that research organizations of the type that we see are particularly difficult to grow quickly.

My point is that we could theoretically focus more on other kinds of organizations that are more scalable. I could imagine there being more scalable engineering-heavy or marketing-heavy paths to impact on these problems. For example, setting up an engineering/data organization to manage information and metrics about bio risks. These organizations might have rather high upfront costs (and marginal costs), but are ones where I could see investing $10-100mil/year if we wanted. 

Right now it seems like our solution to most problems is "try to solve it with experienced researchers", which seems to be a tool we have a strong comparative advantage in, but not the only tool in the possible toolbox. It is a tool that's very hard to scale, as you note (I know of almost no organizations that have done this well). 
 

Separately,

"The other way to scale up is to get people to skill-up in areas with more scalable mentorship: e.g. just work on any AI research topic for your PhD where you can get good mentorship, then go work at an org doing more impactful work once you graduate. I think this is probably our best bet to absorb most additional junior talent right now. This may beat the 10-30% figure I gave, but we'd still have to wait 3-5 years before the talent comes on tap unfortunately."

I just want to flag that I think I agree, but I also feel pretty bad about this. I get the impression that for AI many of the grad school programs are decent enough, but for other fields (philosophy, some of econ, things bio-related), grad school can be quite long-winded, demotivating, occasionally the cause of serious long-term psychological problems, and often distracting or actively harmful for alignment. It definitely feels like we should eventually be able to do better, but it might be a while.

Comment by oagr on Long-Term Future Fund: Ask Us Anything! · 2020-12-09T22:22:34.353Z · EA · GW

"My view is that for most orgs, at least in the AI safety space, they can only grow by a relatively small (10-30%) rate per year while still providing adequate mentorship."

 

This might be a small point, but while I would agree, I imagine that strategically there are some possible orgs that could grow more quickly, and, due to that growth, could eventually dominate the funding.

I think one thing that's going on is that right now, due to funding constraints, individuals are encouraged to create organizations that are efficient when small, as opposed to efficient when large. I've made this decision myself. Doing the latter would require a fair amount of trust that large funders would later be interested at that scale. Right now it seems like we only have one large funder, which makes things tricky.

Comment by oagr on Long-Term Future Fund: Ask Us Anything! · 2020-12-09T22:02:07.840Z · EA · GW

Thanks so much for this, that was informative. A few quick thoughts:

“Projects that result in a team covering a space or taking on some coordination role that is worse than the next person who could have come along”

I've heard this one before and I can sympathize with it, but it strikes me as a red flag that something is going a bit wrong. (I'm not saying that this is your fault, but am flagging that it is an issue for the community more broadly.) Big companies often don't have the ideal teams for new initiatives. Often urgency is very important, so they put something together relatively quickly. If it doesn't work well, it's not that big of a deal; they disband the team and have them go to other projects, and perhaps find better people to take their place.

In comparison, with nonprofits it's much more difficult. My read is that we sort of expect nonprofits to never die, which means we need to be *very very* sure about them before setting them up. But if this is the case, it would obviously be severely limiting. The obvious solution would be to have bigger orgs with more flexibility. Perhaps if specific initiatives were going well and demanded independence, that could happen later on, but hopefully not for the first few years.

“I think it would be good to have scalable interventions for impact.” In terms of money, I’ve been thinking about this too. If this were a crucial strategy it seems like the kind of thing that could get a lot more attention. For instance, new orgs that focus heavily on ways to decently absorb a lot of money in the future.

Some ideas I’ve had:

- Experiment with advertising campaigns that could be clearly scaled up.  Some of them seem linearly useful up to millions of dollars.

-  Add additional resources to make existing researchers more effective.

- Buy the rights to books and spend on marketing for the key ones.

- Pay for virtual assistants and all other things that could speed researchers up.

- Add additional resources to make nonprofits more effective, easily.

- Better budgets for external contractors.

- Focus heavily on funding non-EA projects that are still really beneficial. This could mean an emphasis on funding new nonprofits that do nothing but rank and do strategy for more funding.

While it might be a strange example, the wealthy, and in particular the Saudi government, are examples of how to spend lots of money with relatively few trusted people, semi-successfully.

Having come from the tech sector in particular, I feel like much stingier expectations are often placed on EA researchers.

Comment by oagr on AMA: Jason Crawford, The Roots of Progress · 2020-12-09T21:36:25.520Z · EA · GW

Thanks so much for the comment. This is obviously a complicated topic so I won’t aim to be complete, but here are some thoughts.

“One challenge with epistemic, moral, and (I'll throw in) political ideas is that we've literally been debating them for 2,500 years and we still don't agree.”

From my perspective, while we don't agree on everything, there has been a lot of advancement during this period, especially if one looks at pockets of intellectuals. The Ancient Greek schools of thought, the Renaissance, the Enlightenment, and the growth of atheism are examples of what seems like substantial progress (especially to people who agree with them, like myself).

I would agree that epistemic, moral, and political progress seems to be far slower than technological progress, but we definitely still have it and it seems more net positive. Real effort here also seems far more neglected. There are clearly a fair number of academics in these areas, but I think in terms of number of people, resources, and “get it done” abilities, regular technical progress has been strongly favored. This means that we may have less leverage, but the neglectedness could also mean that there are some really nice returns to highly competent efforts.

The second thing that I'd flag is that it's possible that advances in the Internet and AI could mean that progress in these areas becomes much more tractable in the next 10 to 100 years.

“I started by studying material progress because (1) it happened to be what I was most interested in and (2) it's the most obvious and measurable form of progress. But I think that material, epistemic and moral progress are actually tightly intertwined in the overall history of progress.”

I think I mostly agree with you here, though I myself am less interested in technical progress. I agree that they can't be separated. This is all the more reason I would encourage you to emphasize it in future work of yours :-). I imagine any good study of epistemic and moral progress would include studies of technology, for the reasons you mention. I'm not suggesting that you focus on epistemic and moral progress only, but rather that they could either be the primary emphasis where possible, or just a bit more emphasized here and there. Perhaps this could be a good spot to collaborate directly with Effective Altruist researchers.

“I haven't read Ord's take on this, but the concept as you describe it strikes me as not quite right.”

My take was written quickly, and I think your impression is very different from his take. In The Precipice, Toby Ord recommends that the Long Reflection happen as one of three phases, the first being “Reaching Existential Security”. This would involve setting things up so that humanity has a very low chance of existential risk per year. It's hard for me to imagine what this would look like; there's not much written about it in the book. I imagine it would look very different from what we have now and probably take a fair amount more technological maturity. Having setups to ensure protections against existentially serious biohazards would be a precondition. I imagine there is obviously some trade-off between our technological abilities to make quick progress during the reflection, and the risks and speed of us getting there, but that's probably outside the scope of this conversation.

“In general, science, technology, infrastructure, and surplus wealth are a massive buffer against almost all kinds of risk. So to say that we should stop advancing those things in the name of safety seems wrong to me.”

I agree that they are massively useful, but they are also massively risky. I'm sure that a lot of the advancements we have are locally net negative; otherwise it seems odd that we could have so many big changes but still a world as challenging and messy as ours.

Some of science/technology/infrastructure/surplus wealth is obviously useful for getting us to Existential Security, and some of it is probably harmful. It's not really clear to me that average modern advancements are net-positive at this point (this is incredibly complicated to figure out!), but it seems clear that at least some are (though we might not be able to tell which ones).

Comment by oagr on Long-Term Future Fund: Ask Us Anything! · 2020-12-07T21:09:32.581Z · EA · GW

Thanks!

Comment by oagr on An experiment to evaluate the value of one researcher's work · 2020-12-07T04:01:00.297Z · EA · GW

Yep, I think this is quite useful/obvious (if I understand it correctly). It's work though :)

Comment by oagr on An experiment to evaluate the value of one researcher's work · 2020-12-07T04:00:14.221Z · EA · GW

Good catch

Comment by oagr on An experiment to evaluate the value of one researcher's work · 2020-12-07T03:59:21.429Z · EA · GW

That's quite useful, thanks

"In fact, one possibility would be to use the intuitive estimation approach on the work of one of the orgs/people who already have a bunch of this sort of data relevant to that work (after checking that the org/people are happy to have their work used for this process), and then look at the empirical data, and see how they compare."

This seems like a neat idea to me. We'll investigate it. 

Comment by oagr on My mistakes on the path to impact · 2020-12-05T02:38:44.850Z · EA · GW

Congrats on the promotion! (after just 6.5 months? Kudos) Also thanks for the case study. I think as you pointed out, this is a bit different from some of the common advice, so it's particularly useful.

Comment by oagr on WANBAM is accepting expressions of interest for mentees! · 2020-12-05T02:37:01.136Z · EA · GW

This looks really nice to me, thanks for all of your work here!

Comment by oagr on AMA: Jason Crawford, The Roots of Progress · 2020-12-04T16:43:48.432Z · EA · GW

As discussed in other comments, it seems that progress studies focuses mostly on economic and scientific progress, and these seem to come with risks as well as rewards. At the same time, particular aspects of progress seem safer; the progress of epistemics or morality, for example. Toby Ord wrote about the Long Reflection as a method of making a lot of very specific progress before focusing on other kinds. These things are more difficult to study but might be more valuable.

So my question is: have you spent much time considering epistemic and moral progress (and other abstract but safe aspects) as things to study? Do you have any thoughts on their viability?

(I've written a bit more here, but it's still relatively short). 

Comment by oagr on Long-Term Future Fund: Ask Us Anything! · 2020-12-04T16:34:12.233Z · EA · GW

Can you clarify your models of which kinds of projects could cause net harm? My impression is that there is some thinking that funding many things would be actively harmful, but I don't feel like I have a great picture of the details here.

If there are such models, are there possible structural solutions to identifying particularly scalable endeavors? I'd hope that we could eventually identify opportunities for long-term impact that aren't "find a small set of particularly highly talented researchers", but things more like, "spend X dollars advertising Y in a way that could scale" or "build a sizeable organization of people that don't all need to be top-tier researchers".

Comment by oagr on Long-Term Future Fund: Ask Us Anything! · 2020-12-04T16:30:23.193Z · EA · GW

Do you have a vision for what the Long-Term Future Fund looks like 3 to 10 years from now? Do you expect it to be mostly the same, possibly with more revenue, or to have any large structural changes?

Comment by oagr on What 80,000 Hours learned by interviewing people we respect 'anonymously' · 2020-12-02T22:58:00.118Z · EA · GW

I enjoyed this and am considering doing similar posts myself.

One thing I've noticed is that the responses seem like they may fall into clusters. I get the impression there's one cluster of "doesn't feel elite and is frustrated with EA for not being accommodating" and a very different cluster of "very worried that EA is being too friendly, and not being properly disagreeable where it matters". I don't have a good sense of exactly what these clusters are. I could imagine it being the case that they are distinct, and if so, recognizing this would be very valuable. Perhaps they could be optimized separately, for instance.

Comment by oagr on An experiment to evaluate the value of one researcher's work · 2020-12-02T17:17:43.308Z · EA · GW

Yea, I'd love to see things like this, but it's all a lot of work. The existing tooling is quite bad, and it will probably be a while before we could rig it up with Foretold/Guesstimate/Squiggle.

Comment by oagr on An experiment to evaluate the value of one researcher's work · 2020-12-02T01:12:42.774Z · EA · GW

One challenge with willingness to pay is that we need to be clear about who the money would be coming from. For instance, I would pay less for things if the money were coming from the budget of EA Funds than if it were coming from Open Phil, and less from Open Phil than from the US Government. This seems doable to me, but is tricky. Ideally we could find a measure that wouldn't vary dramatically over time. For instance, the EA Funds budget might be desperate for cash some years and have too much in others, changing the value of the marginal dollar dramatically.
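As a rough sketch of why the marginal dollar differs by funder (assuming, purely for illustration, logarithmic returns to a funder's budget; this modeling choice and all numbers are mine, not from the comment), the same grant forgoes far more value when it comes out of a small, stretched budget than out of a very large one:

```python
import math

# Toy model: suppose the value a funder produces scales with log(budget), so
# marginal dollars matter more when the budget is small. All figures are made up.

def forgone_value(budget: float, grant: float) -> float:
    """Value lost (in arbitrary log-utility units) by spending `grant` out of `budget`."""
    return math.log(budget) - math.log(budget - grant)

grant = 10_000
for name, budget in [("Small fund", 1e6), ("Large foundation", 1e9), ("Government", 1e12)]:
    print(f"{name:>16}: forgone value of a ${grant:,} grant = {forgone_value(budget, grant):.2e}")
```

Under this toy model the forgone value scales roughly with grant/budget, which matches the intuition that a willingness-to-pay figure only means something once you fix whose budget it comes from.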

Comment by oagr on An experiment to evaluate the value of one researcher's work · 2020-12-02T01:03:19.467Z · EA · GW

I have a bunch of thoughts on this, and would like to spend time thinking of more. Here are a few: 

---

I've been advising this effort and gave feedback on it (some of which he explicitly included in the post in the “Caveats and warnings” section). Correspondingly, I think it's a good early attempt, but I definitely feel like things are still fairly early. Doing a deep evaluation of some of this by getting more empirical data (for instance, surveying people to see which ones might have taken this advice, or having probing conversations with Guesstimate users) seems necessary to get a decent picture. However, that is a lot of work. This effort was much more a matter of Nuño intuitively estimating all the parameters, which can get you kind of far, but shouldn't be understood to be substantially more than that. Rubrics like these can be seen as much more authoritative than they actually are.

Reasons to expect these estimates to be over-positive

I tried my best to encourage Nuño to be fair and unbiased, but I'm sure he felt incentivized to give positive grades. I don't believe I gave feedback to encourage him to change the scores favorably, but I did request that he make the uncertainty more clear in this post. This wasn't because I thought I did poorly in the rankings; it's more because I thought that this was just a rather small amount of work for the claims being made. I imagine this will be an issue going forward with evaluation, especially since the people being evaluated might be seen as possibly holding grudges or similar later on. It is not enough for them to not retaliate; the problem is that, from an evaluator's perspective, there's a chance that they might retaliate.

Also, I imagine there is some selection pressure towards a positive outcome. One of the reasons why I have been advising his efforts is because they are very related to my interests, so it would make sense that he might be more positive towards my previous efforts than would be others with different interests. This is one challenging thing about evaluation; typically the people who best understand the work have the advantage of better understanding its quality, but the disadvantage of typically being biased towards how good this type of work is.

Note that none of the projects wound up with a negative score, for example. I'm sure that at least one really should have if we were clairvoyant, although it's not obvious to me which one at this point.

Reasons to expect these estimates to be over-negative

I personally care a whole lot more about being able to be neutral, and also about seeming neutral, than about my projects being evaluated favorably at this stage. I imagine this could have been the case for Nuño as well. So it's possible there was some over-compensation here, but my guess is that you should expect things to be biased on the positive side regardless.

Tooling

I think this work brings to light how valuable improved tooling (better software solutions) could be. A huge spreadsheet can be kind of a mess, and things get more complicated if multiple users (like myself) were to try to make rankings. I've been inspecting no-code options and would like to do some iteration here.

One change that seems obvious would be for reviews to be posted on the same page as the corresponding blog post. This could be done in the comments or in the post itself, like a GitHub status icon.

Decision Relevance

I'm hesitant to update much due to the rather low weight I place on this. I was very uncertain about the usefulness of my projects before this, and I'm also uncertain about it afterwards. I agree that most of it is probably not going to be valuable at all unless I specifically, or much less likely someone else, continue this work into a more accessible or valuable form.

If it's true that Guesstimate is actually far more important than anything else I've worked on, it would probably update me to focus a lot more on software. Recently I've been more focused on writing and mentorship than on technical development, but I'm considering changing back.

I think I would have paid around $1,000 or so for a report like this for my own use. Looking back, the main value would perhaps come from talking through the thoughts with the people doing the rating. We haven't done this yet, but might going forward. I'd probably pay at least $10,000 or so if I were sure that it was “fairly correct”.

The value of research in neglected areas

I think one big challenge with research is that you either focus on an active area or a neglected one. In active areas, marginal contributions may be less valuable because others are much more likely to come up with them. There's one model where there is basically a bunch of free prestige lying around, and if you get there first you are taking zero-sum gains directly from someone else. In the EA community in particular, I don't want to play zero-sum games with other people. However, for neglected work, it seems very possible that almost no one will continue with it. My read is that neglected work is generally a fair bit more risky. There are instances where it goes well and could actually encourage a whole field to emerge (though this takes a while). There are other instances where no one happens to be interested in continuing this kind of research, and it dies before being able to be useful at all.

I think of my main research as being in areas I feel are very neglected. This can be exciting, but it has the obvious challenge that it's difficult for it to be adopted by others, and so far that has been the case.

Comment by oagr on Announcing the Forecasting Innovation Prize · 2020-11-20T21:01:40.662Z · EA · GW

Thanks! That's useful to know. I intend to host more prizes in the future but can't promise things yet. 

There's no harm in writing up a bunch of rough ideas instead of aiming for something that looks super impressive. We're optimizing more to encourage creativity and inspire good ideas, rather than to produce work that can be highly cited. 

You can look through my LessWrong posts for examples of the kinds of things I'm used to. A few were a lot of work, but many just took a few hours or so. 

Comment by oagr on Desperation Hamster Wheels · 2020-11-03T22:45:30.237Z · EA · GW

To clarify:

My read of this article was that this could have been interpreted as meaning "for a form of consequentialism that doesn't give extra favor to oneself, it's often optimal to maximize a decent amount for oneself."

I'm totally fine with optimizing for oneself under the understanding that one's philosophical framework favors favoring oneself; it just wasn't clear to me that that was what was happening in this article.

If the lesson there is, "I'm going to make myself happy because the utility function I'm optimizing for favors myself heavily", that's fine; it's just a very different argument than "actually, optimizing heavily for my own happiness is the optimal way of achieving a more universally good outcome." My original read was that the article was saying the latter; I could have been mistaken. Even if I were mistaken, I'm happy to discuss the alternative view; not the one Nicole meant, but the one I thought she meant. I'm sure other readers may have had the same impression I did.

All that said, I would note that often being personally well off is a great way to be productive. I know a lot of altruistic people who would probably get more done if they could focus more on themselves.

Comment by oagr on Desperation Hamster Wheels · 2020-11-01T15:28:09.280Z · EA · GW

I enjoyed reading this, thank you.

One small point:

"that I am a person whose life has value outside of my potential impact."

I'm happy to hear that this insight has worked for you, but I want to flag that I don't think it's essential. I personally have been trying to think of my life only as a means to an end. While my life technically might have value, I am fairly sure it is rather minuscule compared to the potential impact I can make. I think it's possible, though probably difficult, to intuit this and still feel fine / not guilty about things. It makes me fear death less, for one.

I'm a bit wary on this topic that people might be a bit biased to select beliefs based on what is satisfying or which ones feel good. This is the type of phrase that I would assume would be well accepted in common views of morality, but in utilitarianism it is suspect.

To be clear, of course within utilitarianism one's wellbeing does have "some" "intrinsic/comparative" value; I just suspect it's less than what many people would assume when reading that sentence.

Comment by oagr on Linch's Shortform · 2020-10-15T17:44:02.895Z · EA · GW

Definitely agreed. That said, I think some of this should probably be looked at through the lens of "Should EA as a whole help people with personal/career development rather than specific organizations, as the benefits will accrue to the larger community (especially if people only stay at orgs for a few years)?"

I'm personally in favor of expensive resources being granted to help people early in their careers. You can also see some of this in what OpenPhil/FHI funds; there's a big focus on helping people get useful PhDs. (though this helps a small minority of the entire EA movement)

Comment by oagr on Nathan Young's Shortform · 2020-10-11T22:22:25.338Z · EA · GW

I think people have been taking up the model of open sourcing books (well, making them free). This has been done for [The Life You can Save](https://en.wikipedia.org/wiki/The_Life_You_Can_Save) and [Moral Uncertainty](https://www.williammacaskill.com/info-moral-uncertainty). 

I think this could cost $50,000 to $300,000 or so depending on when it is done and how popular the book is expected to be, but I expect it would often be worth it.

Comment by oagr on [deleted post] 2020-10-11T22:18:37.150Z

Many kudos for doing this, I've been impressed seeing this work progress. 

I think it could well be the case that EAs have a decent comparative advantage in prioritization itself. I could imagine a world where the community does help prioritize a large range of globally important issues. This could work especially well if these people could influence the spending and talent of other people. Things that are neglected present opportunities for significant leverage through prioritization and leadership.

On politics, my impression is that the community is going to get more involved on many different fronts.  It seems like the kind of thing that can go very poorly if done wrong, but the potential benefits are too big to ignore.

As Carl Shulman previously said, one interesting aspect of politics is the potential to absorb a large amount of money and talent. So I imagine one of the most valuable things about doing this kind of work is producing information value to inform us on whether and how to scale it later.

Comment by oagr on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-09T03:51:48.100Z · EA · GW

From a few conversations with him, I think he semi-identifies as an EA. He's definitely known about EA for a while; there is evidence for that (just search his name in the EA Forum search).

I think he would admit that he doesn't fully agree with EAs on many issues.  I think that most EAs I know wouldn't exactly classify him as an EA if they were to know him, but as EA-adjacent.

He definitely knows far more about it than most politicians.

I would trust that he would use "evidence-based reasoning". I'm sure he has for DxE. However, "evidence-based reasoning" by itself is a pretty basic claim at this point. It's almost meaningless at this stage; I think all politicians can claim this.

Comment by oagr on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-09T03:36:34.116Z · EA · GW

I think it's possible to use good leadership practices and bad leadership practices.  I think the success of DxE has shown that he can do some things quite well.  

I've met Wayne before. I get the impression he is quite intelligent and has definitely been familiar with EA for some time. At the same time, DxE has used much more intense / controversial practices in general than many EA orgs, many of which others would be very uncomfortable with. Very arguably this contributed to their successes and failures.

Sometimes I'm the most scared of the people who are the most capable. 

I really don't know much about Wayne, all things considered. I could imagine a significant amount of investigation concluding that he'd either be really great or fairly bad.

Comment by oagr on Open Communication in the Days of Malicious Online Actors · 2020-10-08T21:40:11.676Z · EA · GW

Agreed on preventative measures, where possible. I imagine preventative measures are probably more cost-effective than measures after the fact.

Comment by oagr on Open Communication in the Days of Malicious Online Actors · 2020-10-07T22:33:37.942Z · EA · GW

Thanks so much for the feedback. 

On the example: I wrote this fairly quickly. I think the example is quite mediocre and the writing of the whole piece was fairly rough. If I were to give myself a grade on writing quality for simplicity or understandability, it would be a C or so. (This is about what I was aiming for given the investment.)

I'd be interested in seeing further writing that uses more intuitive and true examples. 
 

Comment by oagr on Open Communication in the Days of Malicious Online Actors · 2020-10-07T16:28:06.591Z · EA · GW

Very fair question. I'm particularly considering the issue for community discussions around EA. There's a fair EA Twitter presence now and I think we're starting to see some negative impacts of this. (Especially around hot issues like social justice.)  

I was considering posting here or LessWrong and thought that the community here is typically more engaged with other public online discussion.

That said, if someone has ideas to address the issue on a larger scale, I could imagine that being an interesting area. (Communication as a cause area)

I myself am doing a broad survey of things useful for collective epistemics, so this would also fall within that.

Comment by oagr on Can the EA community copy Teach for America? (Looking for Task Y) · 2020-10-07T03:29:14.042Z · EA · GW

I've been thinking a fair bit about this.

I think that forecasting can act as really good intellectual training for people. It seems really difficult to BS, and there's a big learning curve of different skills to get good at (can get into automation).

I'm not sure how well it will scale in terms of paid forecasters for direct value (agreeing with "the impact probably isn't huge"). I have a lot of uncertainty here though.

I think the best analogy is to hedge funds and banks. I could see things going one of two ways: either it turns out what we really want is a small set of super intelligent people working in close coordination (like a high-salary fund), or we need a whole lot of "pretty intelligent people" to do scaled labor (like a large bank or trading institution).

That said, if forecasting could help us determine what else would be useful to be doing, then we're kind of set.

Comment by oagr on Open Communication in the Days of Malicious Online Actors · 2020-10-07T03:22:12.218Z · EA · GW

Thanks for letting me know, that's really valuable.

Comment by oagr on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-09-25T09:12:42.863Z · EA · GW

A very simple example might be someone saying, "What's up?" and the other person answering "The sky." "What's up?" assumes a shared amount of context. To be relevant, it would make much more sense for it to be interpreted as asking how the other person is doing.

There are a bunch of YouTube videos around the topic; I recall some go into examples.

Comment by oagr on Thomas Kwa's Shortform · 2020-09-24T09:06:43.896Z · EA · GW

First, neat idea, and thanks for suggesting it!

"Is there a reason this isn't being done? Is it just too expensive?"

From where I'm sitting, there are a whole bunch of potentially highly useful things that aren't being done. After several years around the EA community, I've gotten a better model of why that is:

1) There's a very limited set of EAs who are entrepreneurial, trusted by funders, and have the necessary specific skills and interests to do many specific things. (Which respected EAs want to take a 5 to 20 year bet on field anthropology?)
2) It often takes a fair amount of funder buy-in to do new projects. This can take several years to develop, especially for a research area that's new.
3) Outside of OpenPhil, funding is quite limited. It's pretty scary and risky to start something new and go for it. You might get funding from EA Funds this year, but who's to say if you'll have to fire your staff in 3 years.

On doing anthropology, I personally think there might be lower hanging fruit first engaging with other written moral systems we haven't engaged with. I'd be curious to get an EA interpretation of parts of Continental Philosophy, Conservative Philosophy, and the philosophies and writings of many of the great international traditions. That said, doing more traditional anthropology could also be pretty interesting.

Comment by oagr on Ozzie Gooen's Shortform · 2020-09-22T19:17:54.494Z · EA · GW

EA seems to have been doing a pretty great job attracting top talent from the most prestigious universities. While we attract a minority of the total pool, I imagine we get some of the most altruistic+rational+agentic individuals. 

If this continues, it could be worth noting that this could have significant repercussions for areas outside of EA; the ones that we may divert them from. We may be diverting a significant fraction of the future "best and brightest" in non-EA fields. 

If this seems possible, it's especially important that we do a really, really good job making sure that we are giving them good advice. 

Comment by oagr on Solander's Shortform · 2020-09-20T17:10:53.902Z · EA · GW

I think this is one of the principles of GiveDirectly. I imagine that more complicated attempts at this could get pretty hairy (trying to get the local population to come up with large coordinated proposals, like education reform), but it could be interesting.