Posts

Opportunity Costs of Technical Talent: Intuition and (Simple) Implications 2021-11-19T15:04:05.217Z
Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits 2021-11-17T18:12:18.005Z
Improve delegation abilities today, delegate heavily tomorrow 2021-11-11T21:52:18.782Z
Disagreeables and Assessors: Two Intellectual Archetypes 2021-11-05T09:01:58.207Z
Prioritization Research for Advancing Wisdom and Intelligence 2021-10-18T22:22:32.492Z
Contribution-Adjusted Utility Maximization Funds: An Early Proposal 2021-08-03T23:01:58.012Z
Introducing Metaforecast: A Forecast Aggregator and Search Tool 2021-03-07T19:03:44.627Z
Forecasting Prize Results 2021-02-19T19:07:11.379Z
Announcing the Forecasting Innovation Prize 2020-11-15T21:21:52.151Z
Open Communication in the Days of Malicious Online Actors 2020-10-06T23:57:35.529Z
Ozzie Gooen's Shortform 2020-09-22T19:17:54.175Z
Expansive translations: considerations and possibilities 2020-09-18T21:38:42.357Z
How to estimate the EV of general intellectual progress 2020-01-27T10:21:11.076Z
What are words, phrases, or topics that you think most EAs don't know about but should? 2020-01-21T20:15:07.312Z
Best units for comparing personal interventions? 2020-01-13T08:53:12.863Z
Predictably Predictable Futures Talk: Using Expected Loss & Prediction Innovation for Long Term Benefits 2020-01-08T22:19:32.155Z
[Part 1] Amplifying generalist research via forecasting – models of impact and challenges 2019-12-19T18:16:04.299Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T16:36:10.564Z
Introducing Foretold.io: A New Open-Source Prediction Registry 2019-10-16T14:47:20.752Z
What types of organizations would be ideal to be distributing funding for EA? (Fellowships, Organizations, etc) 2019-08-04T20:38:10.413Z
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:23.576Z
What new EA project or org would you like to see created in the next 3 years? 2019-06-11T20:56:42.687Z
Impact Prizes as an alternative to Certificates of Impact 2019-02-20T21:25:46.305Z
Discussion: What are good legal entity structures for new EA groups? 2018-12-18T00:33:16.620Z
Current AI Safety Roles for Software Engineers 2018-11-09T21:00:23.318Z
Prediction-Augmented Evaluation Systems 2018-11-09T11:43:06.088Z
Emotion Inclusive Altruism vs. Emotion Exclusive Altruism 2016-12-21T01:40:45.222Z
Ideas for Future Effective Altruism Conferences: Open Thread 2016-08-13T02:59:02.685Z
Guesstimate: An app for making decisions with confidence (intervals) 2015-12-30T17:30:55.414Z
Is there a hedonistic utilitarian case for Cryonics? (Discuss) 2015-08-27T17:50:36.180Z
EA Assembly & Call for Speakers 2015-08-18T20:55:13.854Z
Deep Dive with Matthew Gentzel on Recently Effective Altruism Policy Analytics 2015-07-20T06:17:48.890Z
The first .impact Workathon 2015-07-09T07:38:12.143Z
FAI Research Constraints and AGI Side Effects 2015-06-07T20:50:21.908Z
Gratipay for Funding EAs 2014-12-24T21:39:53.332Z
Why "Changing the World" is a Horrible Phrase 2014-12-24T00:41:50.234Z

Comments

Comment by Ozzie Gooen (oagr) on Is it no longer hard to get a direct work job? · 2021-11-26T09:56:34.142Z · EA · GW

I think this is a major factor. From what I can tell, some people have very easy times getting EA jobs, and some have very hard times getting EA jobs. This in itself really isn't much information; we'd really need many stats to get a better sense of things.

For what it's worth, I wouldn't read this as, "the people who have a hard time... are just bad candidates". It's more that EA needs some pretty specific things, and there are some sorts of people for whom it's been very difficult to find a position, even though some of these people are quite brilliant in many ways.

Comment by Ozzie Gooen (oagr) on Should I go straight into EA community-building after graduation or do software engineering first? · 2021-11-26T09:51:42.419Z · EA · GW

> but a piece of advice that I've heard is that career capital for these things is only useful for getting your foot in the door (e.g. getting a coding test) and then your actual performance (rather than your resume) is what ends up getting you/not getting you the job.

In my experience, most orgs are much more excited about bringing on "senior engineers" than "junior engineers", and often the only way to reach "senior engineer" performance is to work at a company and build those skills. It doesn't have to take long though.

Comment by Ozzie Gooen (oagr) on Should I go straight into EA community-building after graduation or do software engineering first? · 2021-11-26T09:48:48.452Z · EA · GW

I think I agree with Jack on the AI side.

If you can get a job at Anthropic, Redwood, maybe OpenAI or Deepmind or other AI companies, those might be better than most software startups.

The first 1-2 years of software engineering work can be amazing for the experience. It really can depend on what team you work with though. Some have great standards and help train junior engineers. Others don't, and won't teach decent practices. My intuition is that the startup you're considering is likely good (startups with lots of respect in tech circles, like Stripe, can be great), but I suggest asking some people who work there.

Many of the EA tech groups are much smaller or very early, so they won't be able to provide the same experience (and if you join one of them afterward, you'll get whatever experience they do provide then anyway). Just chat with people in them and see what mentorship they can provide.

2 years seems like a bit much to me. If you join the software engineering role (and it's not directly EA), I'd suggest being really intense about it if you can. Try to learn as much as possible, as quickly as possible. Then, after 12 months, gauge how much you're learning (and maybe how quickly the stock options are rising in value) and decide from there.

For me, I found that my first real "junior developer job" was incredibly interesting and useful, but the work got pretty boring after around 6 months. I left around 14 months in, and in retrospect, sort of wish I had left ~2 months sooner.

One other thing to note: try to learn full-stack skills, not narrow domain skills. Spending 1-2 years on a company-specific database solution is bad; spending that time with tools similar to what a tiny startup would use is good (for many EA sorts of things).


Personally, I'm excited about more strong generalist EAs having 1-2 years of strong software engineering experience. Software engineering comes up all over the place. It's a highly general-purpose skill. It would give you a bunch of options later on.

Comment by Ozzie Gooen (oagr) on Pathways to impact for forecasting and evaluation · 2021-11-25T23:20:15.225Z · EA · GW

I think the QURI one is a good pass, though if I were to make it, I'd change a few details of course.

Comment by Ozzie Gooen (oagr) on Pathways to impact for forecasting and evaluation · 2021-11-25T23:13:18.454Z · EA · GW

I looked over an earlier version of this, just wanted to post my takes publicly.[1]

I like making diagrams of impact, and these seem like the right things to model. Going through them, many of the pieces seem generally right to me. I agree with many of the details, and I think this process was useful for getting us (QURI, which is just the two of us now) on the same page.

At the same time though, I think it's surprisingly difficult to make these diagrams understandable to many people.

Things get messy quickly. The alternatives are to make them much simpler, and/or to try to style them better. 

I think these could have been organized much neater, for example, by:

  • Having the flow always go left-to-right.
  • Using a different diagram editor that looks neater.
  • Reducing the number of nodes by maybe 30% or so.
  • Maybe neater arrow structures (having 90°, right-angle lines rather than diagonal lines) or something.

That said, this would have been a lot of work (it would have required deciding on and using different software), and there's a lot of other work to do, so this is more "stuff to keep in mind for the future, particularly if we want to share these with many more people." (Nuno and I discussed this earlier.)

One challenge is that some of the decisions on the particularities of the causal paths feel fairly ad-hoc, even though they make sense in isolation. I think they're useful for a few people to get a grasp on the main factors, but they're difficult to use for getting broad buy-in. 

If you take a quick glance and just think, "This looks really messy, I'm not going to bother", I don't particularly blame you (I've made very similar things that people have glanced over).

But the information is interesting, if you ever consider it worth your time/effort!

So, TLDR:

  • Impact diagrams are really hard. At this level of detail, much more so.
  • This is a useful exercise, and it's good to get the information out there.
  • I imagine some viewers will be intimidated by the diagrams.
  • I'm a fan of experimenting with things like this and trying out new software, so that was neat.

[1] I think it's good to share these publicly for transparency + understanding.

Comment by Ozzie Gooen (oagr) on Opportunity Costs of Technical Talent: Intuition and (Simple) Implications · 2021-11-25T13:03:11.493Z · EA · GW

Thanks so much, that's really useful to know (it's really hard to tell if these metaphors are useful at all), and also makes me feel much better about this. :) 

Comment by Ozzie Gooen (oagr) on Opportunity Costs of Technical Talent: Intuition and (Simple) Implications · 2021-11-24T13:30:05.187Z · EA · GW

Yea; Lightcone is paying much more than any other group I knew of before. I was really surprised by their announcement.

I think it's highly unusual (this seems much higher than the other, previous non-AI engineering roles I knew of).

I'd also be very surprised if Lightcone chose someone for $400,000 or more. My guess is that they'll be aiming for the sorts of people who aren't quite that expensive. 

So, I think Lightcone is making a reasonable move here, but it's an unusual move. Also, if we thought that future EA/engineering projects would have to pay $200k-500k per engineer, I think that would change the way we think about them a lot.

Comment by Ozzie Gooen (oagr) on Database of orgs relevant to longtermist/x-risk work · 2021-11-23T09:44:14.608Z · EA · GW

Yea, I was briefly familiar. 

I think it's still tough, and agree with Ben's comment here. 
https://forum.effectivealtruism.org/posts/kQ2kwpSkTwekyypKu/part-1-ea-tech-work-is-inefficiently-allocated-and-bad-for?commentId=ypo3SzDMPGkhF3GfP

But I think consultancy engineers could be a fit for maybe ~20-40% of EA software talent. 

Comment by Ozzie Gooen (oagr) on Database of orgs relevant to longtermist/x-risk work · 2021-11-23T09:41:33.215Z · EA · GW

> Working at an EA org to discover needs: This seems much slower than asking people who work there, no? (I am not trying to guess the needs myself)

It really depends on how sophisticated the work is and how tied it is to existing systems.

For example, if you wanted to build tooling that would be useful to Google, it would probably be easier just to start a job at Google, where you can see everything and get used to the codebases, than to try to become a consultant for Google, where you'd only be given very narrow tasks that don't require you to be part of their confidential workflows and similar.

Comment by Ozzie Gooen (oagr) on Database of orgs relevant to longtermist/x-risk work · 2021-11-22T18:46:41.893Z · EA · GW

I could see a space for software consultancies that work with EA orgs, that basically help build and maintain software for them. 

I'm not sure what you mean by SaaS in this case. If you only have 2-10 clients, it's sort of weird to have a standard SaaS business model. I was imagining more of the regular consultancy payment structure.

Comment by Ozzie Gooen (oagr) on Database of orgs relevant to longtermist/x-risk work · 2021-11-21T16:21:02.818Z · EA · GW

I'll note:

  1. When you say "paid", do you mean full-time? I've found that "part-time" people often drop off very quickly. Full-time people would be the domain of 80,000 Hours, so I'd suggest working with them on this.
  2. "no place for orgs to surface such needs beyond posting a job" -> This is complicated. I think that software consultancy models could be neat, and of course, full-time software engineering jobs do happen. Both are a lot of work. I'm much less excited about volunteer-type arrangements, outside of being used to effectively help filter candidates for later hiring.

I think that a lot of people just really can't understand or predict what would be useful without working in an EA org or in an EA group/hub. It took me a while! The obvious advice, for people who want to really kickstart things, is to first try to work in or right next to an EA org for a year or so; then you'll have a much better sense.

Comment by Ozzie Gooen (oagr) on Even More Ambitious Altruistic Tech Efforts · 2021-11-20T16:25:24.915Z · EA · GW

> this post is not an attack on you or on your position

Thanks! I didn't mean to say it was, just was clarifying my position.

> An EA VC which funds projects based mostly on expected impact might be a good idea to consider

Now that I think about it, the situation might be further along than you might expect. I think I've heard about small "EA-adjacent" VCs starting in the last few years.[1] There are definitely socially-good-focused VCs out there, like 50 Year VC.

Anthropic recently raised $124 million as its first round. Dustin Moskovitz, Jaan Tallinn, and the Center for Emerging Risk Research were all funders (all longtermists). I assume this was done fairly altruistically.

I think Jaan has funded several altruistic EA projects, including ones that wouldn't have made sense just on a financial level.

https://pitchbook.com/profiles/company/466959-97?fbclid=IwAR040xC65lCV0ZW68DOXwI7K_RkSzyr7ZJa9HBs7R7C4ZkFGM5sC1Lec9Wk#team

https://www.radiofreemobile.com/anthropic-open-ai-mission-impossible/?fbclid=IwAR3iC0B-EKFD40Hf7DXEedI_tzFgqypT7_Pf4jSiUhPeKbHq_xFawHc-rpA

[1]: Sorry for forgetting the 1-2 right names here.

Comment by Ozzie Gooen (oagr) on Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits · 2021-11-20T15:33:44.705Z · EA · GW

I agree with (basically) all of this. I've been looking more into enterprise tools for QURI and have occasionally used some. As EA grows, enterprise tools make more sense for us.

I guess this seemed to me like a different topic, but I probably should have flagged this somewhere in this post.

On Guesstimate in particular, I'm very happy for other groups to use different tools (like Analytica, Causal, and probabilistic programming languages). Normally when I talk to people about this, I wind up recommending other options. All that said, I think there are some areas where some smart programming efforts on our behalf could go a long way. I think the space of "Monte Carlo" tools is still quite small, and I'd like to see other efforts in it (like more startups).

One issue with what you mention is that I expect that managing a bunch of corporate licenses will be a pain. It wouldn't be great if some smart people can't see or work with the relevant information because their small team doesn't have a license. So in some situations, it's worth it, but if we can (cheaply) get/use open-source tools and standards, that can also be preferable.

Comment by Ozzie Gooen (oagr) on Even More Ambitious Altruistic Tech Efforts · 2021-11-20T15:26:31.790Z · EA · GW

I'm really happy to see this posted and to see more discussion on the topic.

> However, I strongly disagree with him on what kinds of projects we should focus on. Most of his examples are of tools or infrastructure for other efforts. I think that as a community we should be even more ambitious - I think we should try to execute multiple tech-oriented R&D projects (not necessarily software-oriented) that can potentially have an unusually large direct impact.

This is a good point. My post had a specific frame in mind, of "Tech either for EAs or funded mostly by EAs".

"Tech startups created by EAs" is a very different category, and I didn't mean to argue that it should be less important. We've already seen several tech startups by EAs (FTX, Wave, as you mention); which is one reason why I was trying to draw attention on the other stuff. I've also been part of 3 EA startups earlier on that were more earning-to-give focused. (Guesstimate was one)

I didn't mean to argue that "Tech either for EAs or funded mostly by EAs" was more important than "Tech startups by EAs". The latter has been a big deal and probably will continue to be so, in large part because of opportunity costs (which I wrote about here).

> funding situation in EA is currently very unfavorable to such efforts.

Most "Tech startups" already have an existing ecosystem of funding in the VC world. It's unclear to me where and how EA Funders could best encourage these sorts of projects. I could picture there being a EA VC soon; there are starting to be more potential things to fund.

Comment by Ozzie Gooen (oagr) on What Are Your Software Needs? · 2021-11-20T15:03:53.870Z · EA · GW

Sigh... sorry;

This is a question post, but it's more specific than my post. It's asking groups what their needs are, which will result in different answers than the sorts of ideas I provided.

The ideas I gave weren't ones that were explicitly asked for. They were instead ones I've noticed and, having spent a while investigating, think would be good bets. Many are more technical/abstract than I'd expect people to recognize, especially when thinking "what are my software needs?"

In my experience, this is one nice way of coming up with ideas, but it's definitely not the only way.

I think this might be getting into the weeds though. The TLDR is that I expect that this question will be useful for surfacing a subset of ideas from the community, but it doesn't seem like the be-all-end-all of feedback for software projects.

Comment by Ozzie Gooen (oagr) on What Are Your Software Needs? · 2021-11-20T14:30:14.600Z · EA · GW

Maybe it could be its own post? Like, we write a Question post, and write all of the options as answers. We could do that after this one has been live for a few days, and include the top ideas in it.

Comment by Ozzie Gooen (oagr) on What Are Your Software Needs? · 2021-11-20T14:29:19.158Z · EA · GW

I'm happy with others doing it, but it's a whole lot of ideas, so it feels to me like it would get messy. Maybe there's some way to use a more formal survey or identify some other software solution.

I also would very much want others to suggest ideas. (Like in this post!) I wasn't trying to make any sort of definitive list, just a generative one.

Comment by Ozzie Gooen (oagr) on Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits · 2021-11-19T20:03:59.506Z · EA · GW

> I think it would be cool for someone to create an "engineering agenda" that entrepreneurial software developers could take ideas from and start working on, analogous

I think my hunch is that this is almost like asking for an “entrepreneur agenda”. There are really a ton of options.

I’m happy to see people list all the ideas they can come up with.

I imagine “agendas” would be easier to rigorously organize if you limit yourself to a more specific area. (So, I’d want to see many “research agendas” :) )

> Possibly you are planning this for later posts in this sequence already

My main area is forecasting and evaluation (à la QURI). The plan is to spend time writing up my thoughts on it and then use that to describe an agenda of some kind. This is more cause-specific than skill-specific, but maybe 2/3rds of the work would be for software engineering.

Comment by Ozzie Gooen (oagr) on Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits · 2021-11-19T17:57:32.824Z · EA · GW

Noted!

> these projects generally seem to fail not because of software engineering but because of some non-technical thing

Agreed, though this seems mainly relevant for getting them off the ground (making sure you find an important problem). Software startups also have this problem, and there's a lot of discussion & best practices about the right kinds of people & teams for software startups.

Comment by Ozzie Gooen (oagr) on Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits · 2021-11-19T10:10:40.874Z · EA · GW

Agreed! 

Comment by Ozzie Gooen (oagr) on Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits · 2021-11-18T13:22:46.734Z · EA · GW

Yea, I looked into it a bit. I'd imagine that an EA project here would begin by evaluating Mastodon more and seeing if we could:

  • Use it as is
  • Fork it
  • Contribute to the codebase
  • Sponsor development to include features we particularly want

I would love to see it take off, for multiple reasons.

Comment by Ozzie Gooen (oagr) on Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits · 2021-11-18T13:21:08.639Z · EA · GW

Good point. I don't have much Hacker News karma or Reddit status, but if anyone reading this does, that would be appreciated.

Comment by Ozzie Gooen (oagr) on Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits · 2021-11-18T07:36:40.976Z · EA · GW

> My impression is that there are more concrete software engineering projects in bio security and farmed animal welfare than in AI safety or EA meta; possibly it's worth including some of those in your document.

I could easily see this being the case; I'm just not in these fields, so I have far fewer ideas. If anyone else reading this has any, do post!

Comment by Ozzie Gooen (oagr) on Ambitious Altruistic Software Engineering Efforts: Opportunities and Benefits · 2021-11-18T07:28:17.495Z · EA · GW

> I don't see a ton of projects where the most immediate bottleneck is software engineering. In most of the projects you list, it seems like there's substantial product management work needed before software engineering becomes the bottleneck

It seems like we might just be disagreeing on terminology? 

My main interest is in advancing software projects (I called them "Engineering Efforts", but this encompasses many skills), but I care much less about the specific order in which people are hired. 

That said, I don't feel like I really understand you. It's hard for me to imagine hiring a bunch of product managers before hiring a bunch of engineers. I've never seen that happen.

I have seen "tech founder" types with both skills kickstart things (like new tech companies, of course!). I imagine that we'd really want people with these skills in the early days. Senior people (or, hacker types who are young, but can still do everything solo) who can pioneer new areas. 

Tech research labs and similar are good examples; they have found some sorts of talent who seem particularly good at doing innovation.

If you think that our existing software infrastructure is particularly bottlenecked on traditional product management skills, that's good to know, and I wouldn't at all argue with hiring more product managers at this stage to help. 
 

> If you disagree with me about (2) I would be really interested to discuss – I talk to a lot of software engineers, and it would be awesome to have things that I can point them to.

It's difficult for me to imagine exactly who these people are. I think some "software engineers" are capable of and interested in doing the product work themselves, and some aren't. If they are, that could be optimal, as they wouldn't need much oversight or someone to work with them. I think this is true for a minority of people, and these people also often have high opportunity costs (the sort of person also interested in just founding a company).

For those reading this, the types that are good for "in-EA software entrepreneurship" are usually the types who:
- Have built a few exciting/interesting projects themselves or with one colleague or so.
- Understand EA a bit.
- Often enjoy hackathons, or other very self-directed work.
- Can do some basic UI/UX work.
- Are full-stack, or focused on mobile apps only.

Every so often there are hobby tech projects that launch here on the EA Forum; those are the sorts of "EA Entrepreneurship" I'm thinking of.

I would be interested in unleashing a few people like these as "internal entrepreneurs" or similar in EA.

EA funds sometimes gives out grants for people like this, and I could see us doing more of that.

Comment by Ozzie Gooen (oagr) on Improve delegation abilities today, delegate heavily tomorrow · 2021-11-16T16:42:28.189Z · EA · GW

For what it's worth, I think the best reason not to delegate is something like:
"Funding work is hard, and funders have limited time. If you can do some funding work yourself, that could basically contribute to the amount of funding work being done." (This works in some cases, not for others.)

> I'm nervous about a fund finding a new opportunity and suddenly leaving a charity with a large funding gap, crippling a very good charity.

I think that funding work is a lot more work than just making yearly yes/no statements that are in-isolation ideal. There's more to do, like communicating with charities and keeping commitments. 

In theory, a cluster of medium to large funders could do a better job than individual donors, but that's not always the case.

Comment by Ozzie Gooen (oagr) on Improve delegation abilities today, delegate heavily tomorrow · 2021-11-16T16:38:33.141Z · EA · GW

Noted, thanks!

I was trying to explain a framework; the resulting strategy is something like:
1. Improve delegation abilities
2. Delegate more in the future, accordingly.

You're totally right I didn't get into how to do this. 

Do you (or others reading this) have ideas for what the post should have been called? The title was already sort of long; I wasn't sure about a good tradeoff (I tried a few variations, and didn't really like any of them).

It's too late to change now, but I can try to do better in the future

Comment by Ozzie Gooen (oagr) on Simple comparison polling to create utility functions · 2021-11-15T22:33:42.612Z · EA · GW

I just wanted to give my take on some of this:

  • The web app is neat to experiment with the ideas and help us build intuitions.
  • That said, I think the key ideas (not the web app in particular), are the main insight here.
  • The current implementation is a solid first step, but I think we’re still a ways from having something that’s fun to use. My guess is that it will require some sophisticated UX / UI work to do a job that’s good enough for this to be useful in production. (If anyone reading this wants to try, let one of us know!)
  • I also think it’s important to figure out how to allow for negative values. This is annoying, but so it goes.

One thing I learned over the course of this is that we probably don’t actually want big tables of utility estimates. Or, more specifically, that we want functions we can query with “how does X compare to Y?”, and they give us the correct amount. These can trivially convert to tables, but are subtly better. The reason for this is that they’ll handle correlations between items.

10 apples might be exactly 10 times as good as 1 apple; 10 oranges exactly 10 times as good as 1 orange. We want a query of “how much better is 10 apples compared to 1 apple” to return exactly “10x”, and similarly for oranges. If we tried putting them all into a common unit, like “pear equivalents”, then we wouldn’t get this property.

I’m not sure what the best format is to store this sort of data. Maybe some cluster analysis or something. There must be some clever mathematics for this somewhere, it it’s not clear to either of us.

Comment by Ozzie Gooen (oagr) on Improve delegation abilities today, delegate heavily tomorrow · 2021-11-14T13:34:49.286Z · EA · GW

Thanks!

> You probably want to put more effort into making the suggested action easy and compelling if you want to get people to do something.

I'm interested in arguing/discussing for buy-in that our community should strive to eventually have strong, trustworthy, high-delegation groups. I'm not sure how amenable this is to straightforward actions right now.

Like with much of my more theoretical writing, I see it as a few steps before clear actions.

Comment by Ozzie Gooen (oagr) on Why fun writing can save lives: the case for it being high impact to make EA writing entertaining · 2021-11-13T18:17:08.913Z · EA · GW

Personally, I don't think this deserves that much discussion time. It's literally one word.

All that said, I'd note that I couldn't at all tell that it was humorous. The problem is that I just don't feel like I can model authors that well. I know that many,  particularly junior ones, do make such titles genuinely (not jokingly), so I just assumed it was that way.

It really, really, sucks, but I think public writing generally can't be subtle/clever in many ways we're used to with friends and colleagues.  Our friends would pick up on things like this, but random people online would often miss it. I've been trying to write with much less subtlety than I do in smaller communities; it's less nice, but I don't see another way.

Comment by Ozzie Gooen (oagr) on Improve delegation abilities today, delegate heavily tomorrow · 2021-11-12T09:12:27.166Z · EA · GW

I think I agree with everything there, but I'm unsure what you mean exactly when you say "opposite direction".

I was arguing that our abilities for delegation should improve. If we had better accountability/transparency/reasoning-at-what-level-is-optimal, that would help improve our abilities to do delegation well.

I wasn't arguing that we delegate more right now; but rather, that we should work on improving our abilities to delegate well, with the intention of delegating more later.
 
(I changed the title to try to make this more clear)

Comment by Ozzie Gooen (oagr) on Why fun writing can save lives: the case for it being high impact to make EA writing entertaining · 2021-11-12T08:26:54.664Z · EA · GW

I think I agree with like 80% of this. But I think it should be flagged more that when many people try "engaging writing", they do end up with stuff that's really bad.

For example, the Copyblogger website seems to encourage classic clickbait headlines, like:

"Here’s why Netflix streaming quality has nosedived over the past few months"
"12 Of The Most Stunning Asian Landscapes. The Last One Blew Me Away."

I don't want to see stuff like that on the EA Forum. 

Similarly, I found the title of this post hyperbolic (you also call attention to this, but several paragraphs in). I don't want to encourage many more people to make titles like that. (Though I would encourage images, elegance, plain language, jokes, and so on). 

So I think EA writers can definitely improve on being engaging, but we should make sure to steer clear of the alarmist journalist techniques.

Comment by Ozzie Gooen (oagr) on Improve delegation abilities today, delegate heavily tomorrow · 2021-11-11T22:15:36.365Z · EA · GW

Thanks! I agree that corruption is a big problem for society at large. At the same time though, with some work, we can make sure that groups are not very corrupt. My intuition is that many competitive markets have very low corruption; I’d expect that Amazon runs pretty effectively, for instance. I think we can aim for similar levels in our charity delegation structures. It will take some monitoring/evaluation/transparency, but it definitely seems doable.

My impression is that many groups that complain about corruption actually do fairly little to actively try to remove corruption. When good and agentic CEOs want to stamp it out, they often do.

With GiveDirectly/EA Funds, I’m not arguing that right now one is better than the other (as I’m guessing you realized, but I’m not sure about other readers). My main point is that we should be aiming for a future that leans more in the direction of “a really solid EA Funds”. That would help in so many ways.

(Note that if we want more minds on the topic, we can achieve that at the same time, with something more like a crypto DAO, or a version of EA Funds that takes a lot of user contributions for research.)

If you have ideas of what you’d want in EA Funds (or maybe GiveWell’s general fund), those would be interesting.

Comment by Ozzie Gooen (oagr) on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-11T22:04:30.041Z · EA · GW

I'd note that I expect these clusters (and I suspect they're clusters) to be a minority of intellectuals. They stand out a fair bit to me, but they're unusual. 

I agree Bryan Caplan leans disagreeable, but is less intense than others. I found The Case Against Education and some of his other work purposefully edgy, which is disagreeable-type-stuff, but at the same time, I found his interviews to often be more reasonable. 

I would definitely see the "disagreeable" and "assessor" archetypes as a spectrum, and also think one person can have the perks of both.

Comment by Ozzie Gooen (oagr) on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-08T15:31:27.391Z · EA · GW

> There seems to be a fine line between actually useful models of this kind which have some predictive power (or at least allow thoughts to be a bit tidier), and those that are merely peculiarly entertaining, like Myers-Briggs. And I find it hard to tell from the outside on which side of that line any given model falls.

I have mixed feelings here. I think I'm more sympathetic to Myers-Briggs, when used correctly, than other people are. There definitely seems to be some signal that it categorizes (some professions are highly biased towards a narrow part of the spectrum). It doesn't seem all too different from categorizing philosophy as "continental" vs. "analytical". It's definitely not the best categorization, there are some flawed assumptions baked into it (either/or, as opposed to a spectrum, most famously), the org that owns it seems pretty weird, and lots of people make overconfident statements around it, but I think it can serve a role when used correctly.

Anyway, I imagine what we'd really want is a "Big 5 of Intellectuals" or similar. For that, it would be great for someone to eventually do some sort of cluster analysis.
 

I don't necessarily recommend that the disagreeables/assessors terminology takes off; I'd prefer it if this can be used for discussion that finds something better.  

Comment by Ozzie Gooen (oagr) on CEA grew a lot in the past year · 2021-11-05T23:08:08.013Z · EA · GW

I'm quite happy to see the progress here. Kudos to everyone at CEA for having been able to scale it without major problems yet (that we know of). I think I've been pretty impressed by the growth of the community; intuitively I haven't noticed a big drop in average quality, which is obviously the thing to worry about with substantial community growth.

As I previously discussed in some related comment threads, CEA (and other EA organizations in general) scaling seems quite positive to me. I prefer this to trying to get tons of tiny orgs, in large part because I think the latter seems much more difficult to do well. That said, I'm not sure how much CEA should try to scale over the next few years; 2x/year is a whole lot to sustain, and over-growth can of course be a serious issue. Maybe 30%-60%/year feels safe, especially if many members are siloed into distinct units (like seems to be happening).

Some random things I'm interested in, in the future:

  • With so many people, is there a strong management culture? Are managers improving, in part to handle future growth?
  • What sorts of pockets of people would make great future hires for CEA, but not so much for other orgs? If there are distinct clusters, I could imagine trying to make projects basically around them. We seem pretty limited for "senior EA" talent now, so some of the growth strategy is about identifying other exciting people and figuring out how to best use them.
  • With the proliferation of new community groups, how do we do quality control to make sure none turn into cults or have big scandals, like sexual assault? Sadly, poor behavior is quite endemic in many groups, so we might have to be really extra rigorous to reach targets we'd find acceptable. The recent Leverage issues come to mind; personally, I would imagine CEA would be in a good position to investigate that in more detail to make sure that the bad parts of it don't happen again.

Also, while there's much to like here, I'd flag that the "Mistakes" seem pretty minor? I appreciate the inclusion of the section, but for a team with so many people and so many projects, I would have expected more to go wrong. I'm sure you're excluding a lot of things, but am not sure how much is being left out. I could imagine that maybe something like a rating would be more useful, like, "we rated our project quality 7/10, and an external committee broadly agreed". Or, "3 of our main projects were particularly poor, so we're going to work on improving them next time, but it will take a while."

I've heard before a criticism that "mistakes" pages can make things less transparent (because they give the illusion of transparency), not more, and that argument comes to mind.

I don't mean this as anything particularly negative, just something to consider for next time.

Comment by Ozzie Gooen (oagr) on Collective intelligence as infrastructure for reducing broad existential risks · 2021-11-05T22:48:02.225Z · EA · GW

Thanks so much for the summary, I just noticed this for some reason.

I'll keep an eye out.

It sounds a bit like CI is fairly scattered, doesn't have all too much existing work, and also isn't advancing particularly quickly as of now. (A journal sounds good, but there are lots of fairly boring journals, so I don't know what to make of this)

Maybe 1-5 years from now, or whenever there gets to be a good amount of literature that would excite EAs, there could be follow-up posts summarizing the work.

Comment by Ozzie Gooen (oagr) on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-05T22:30:34.808Z · EA · GW

Thanks for the comment (this could be its own post). This is a lot to get through, so I'll comment on some aspects.

> I have disagreeable tendencies, working on it but biased

I have some too! I think there are times when I'm fairly sure my intuitions lean overconfident in a research project (due to selection effects, at least), but it doesn't seem worth debiasing, because I'm going to be doing it for a while no matter what, and not writing about its prioritization. I feel like I'm not a great example of a disagreeable or an assessor, but I sometimes can lean one way in different situations.

> Instead of drawing conclusions for action at our individual levels, we need to aggregate our insights and decide on action as a collective.

I would definitely advocate for the appreciation of both disagreeables and assessors. I agree it's easy for assessors to team up against disagreeables (for example, when a company gets full of MBAs), particularly when they don't respect them.

Some Venture Capitalists might be examples of assessors who appreciate and have learned to work with disagreeables. I'm sure they spend a lot of time thinking, "Person X seems slightly insane, but no one else is crazy enough to make a startup in this space, and the downside for us is limited."

> As of right now, only very high status or privileged people really say what they think and most others defer to the authorities to ensure their social survival.

This clearly seems bad to me. For what it's worth, I don't feel like I have to hide much of what I think, though maybe I'm somewhat high status. Sadly, I know that high-status people sometimes can say even less than low-status people, because they have more people paying attention and more to lose. I think we really could use improved epistemic setups somehow.

Comment by Ozzie Gooen (oagr) on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-05T22:16:19.063Z · EA · GW

Good find, I didn't see that discussion before. 

For those curious: Scott makes the point that it's good to separate "idea generation" from "vetted ideas that aren't wrong", and that it's valuable to have spaces where people can suggest ideas without needing them to be right. I agree a lot with this.

> I have this model where in a healthy society, there can be contexts where people generate all sorts of false beliefs, but also sometimes generate gold (e.g. new ontologies that can vastly improve the collective map). If this context is generating a sufficient supply of gold, you DO NOT go in and punish their false beliefs. Instead, you quarantine them. You put up a bunch of signs that point to them and say e.g. “80% boring true beliefs 19% crap 1% gold,” then you have your rigorous pockets watch them, and try to learn how to efficiently distinguish between the gold and the crap, and maybe see if they can generate the gold without the crap. However sometimes they will fail and will just have to keep digging through the crap to find the gold.

Comment by Ozzie Gooen (oagr) on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-05T17:04:57.048Z · EA · GW

I think that Jobs, later on (after he re-joined Apple), was just a great manager. This meant he considered a whole lot of decisions and arguments, and generally made smart decisions upon reflection.

I think he (and other CEOs) are wildly inaccurate with how they portray themselves to the public. However, I think they can have great decision making in company-internal decisions. It's a weird, advantageous, inconsistency. 

This book goes into some detail:
https://www.amazon.com/Becoming-Steve-Jobs-Evolution-Visionary-ebook/dp/B00N6PCWY8/ref=sr_1_3?keywords=steve+jobs&qid=1636131865&rnid=2941120011&s=books&sr=1-3

Comment by Ozzie Gooen (oagr) on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-05T17:02:32.419Z · EA · GW

I like that naming setup. I considered using the word "evaluators", but decided against it because I've personally been using "evaluator" to mean something a bit distinct. 

Comment by Ozzie Gooen (oagr) on Disagreeables and Assessors: Two Intellectual Archetypes · 2021-11-05T16:57:58.295Z · EA · GW

This clustering is based on anecdotal data; I wouldn't be too surprised if it were wrong. I'd be extremely curious for someone to do a cluster analysis and see if there are any real clusters here.

I feel like I've noticed a distinct cluster of generators who are disagreeable, and have a hard time thinking of many who are agreeable. Maybe you could give some examples that come to mind for you? Anders Sandberg comes to my mind, and maybe some futurists and religious people.

My hunch is that few top intellectuals (that I respect) would score in the 70th percentile or above on the big 5 agreeableness chart, but I'm not sure. It's an empirical question.

I don't remember hearing about a generators/evaluators dichotomy before, that you & Stefan mention. I like that dichotomy too, it's quite possible it's better than the one I raise here.

Comment by Ozzie Gooen (oagr) on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-23T00:46:03.466Z · EA · GW

I agree there are ways for it to go wrong. There’s clearly a lot of poorly thought-out stuff out there. Arguably, the motivations to create ML come from desires to accelerate “wisdom and intelligence”, and… I don’t really want to accelerate ML right now.

All that said, the risks of ignoring the area also seem substantial.

The clear solution is to give it a go, but to go sort of slowly, and with extra deliberation.

In fairness, AI safety and bio risk research also have severe potential harms if done poorly (and occasionally even when done well). Now that I think about it, bio at least seems worse in this direction than “wisdom and intelligence”; it’s possible that AI is too.

Comment by Ozzie Gooen (oagr) on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-23T00:42:15.875Z · EA · GW

> One adjacent category which I think is helpful to consider explicitly (I think you have it implicit here) is 'well-informedness', which I motion is distinct from 'intelligence' or 'wisdom'.

That’s an interesting take.

When I was thinking about “wisdom”, I was assuming it would include the useful parts of “well-informedness”, or maybe, “knowledge”. I considered using other terms, like “wisdom and intelligence and knowledge”, but that got to be a bit much.

I agree it’s still useful to flag that such narrow notions as “well informedness” are useful.

Comment by Ozzie Gooen (oagr) on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-19T20:04:54.762Z · EA · GW

> My guess is counterintuitive, but it is that these existing institutions, that are shown to have good leaders, should be increased in quality, using large amounts of funding if necessary.

I think I agree, though I can’t tell how much funding you have in mind.

Right now we have relatively few strong and trusted people, but lots of cash. Figuring out ways, even unusually extreme ways, of converting cash into either augmenting these people or getting more of them seems fairly straightforward to justify.

Comment by Ozzie Gooen (oagr) on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-19T20:02:57.180Z · EA · GW

> EAs have less of an advantage in this domain.

I wasn’t actually thinking that the result of prioritization would always be that EAs end up working in the field. I would expect that in many of these intervention areas, it would be more pragmatic to just fund existing organizations.

My guess is that prioritization could be more valuable for money than EA talent right now, because we just have so much money (in theory).

Comment by Ozzie Gooen (oagr) on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-19T19:59:35.686Z · EA · GW

> It's not clear anyone should care about my opinion in "Wisdom and Intelligence"

I just want to flag that I very much appreciate comments, as long as they don’t use dark arts or aggressive techniques.

Even if you aren’t an expert here, your questions can act as valuable data as to what others care about and think. Gauging the audience, so to speak.

At this point I feel like I have a very uncertain stance on what people think about this topic. Comments help here a whole lot.

Comment by Ozzie Gooen (oagr) on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-19T18:23:35.337Z · EA · GW

> Less directly, I think caution is good for other interventions, e.g. "Epistemic Security", "Cognitive bias research", "Research management and research environments (for example, understanding what made Bell Labs work)".

I'd also agree that caution is good for many of the listed interventions. To me, that seems to be even more of a case for more prioritization-style research though, which is the main thing I'm arguing for.

Comment by Ozzie Gooen (oagr) on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-19T18:21:14.452Z · EA · GW

I agree that the existing community (and the EA community) represent much, if not the vast majority, of the value we have now. 

I'm also not particularly excited about lifehacking as a target for serious EA funding. I wrote the list to be somewhat comprehensive, and to encourage discussion (like this!), not because I think each area deserves a lot of attention.

I did think about "recruiting" as a wisdom/intelligence intervention. This seems more sensitive to the definition of "wisdom/intelligence" than other things, so I left it out here.

I'm not sure how extreme you're meaning to be here. Are you claiming something like,
> "All that matters is getting good people. We should only be focused on recruiting. We shouldn't fund any augmentation, like LessWrong / the EA Forum, coaching, or other sorts of tools. We also shouldn't expect further returns to things like these."

Comment by Ozzie Gooen (oagr) on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-19T14:39:25.743Z · EA · GW

This tension is one reason why I called this "wisdom and intelligence", and tried to focus on that of "humanity", as opposed to just "intelligence", and in particular, 'individual intelligence". 

I think that "the wisdom and intelligence of humanity" is much safer to optimize than "the intelligence of a bunch of individuals in isolation". 

If it were the case that "people all know what to do, they just won't do it", then I would agree that wisdom and intelligence aren't that important. However, I think these cases are highly unusual. From what I've seen, in most cases of "big coordination problems", there are considerable amounts of confusion, deception, and stupidity. 

Comment by Ozzie Gooen (oagr) on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-19T14:34:35.322Z · EA · GW

Thanks for the link, I wasn't familiar with them. 

For one, I'm happy for people to have a very low bar to post links to things that might or might not be relevant.