Posts

Let's not have a separate "community building" track 2022-06-29T11:59:13.496Z
Impact markets may incentivize predictably net-negative projects 2022-06-21T13:00:16.644Z
[linkpost] Christiano on agreement/disagreement with Yudkowsky's "List of Lethalities" 2022-06-19T22:47:44.032Z
Perils of optimizing in social contexts 2022-06-16T17:23:27.604Z
Don't Over-Optimize Things 2022-06-16T16:28:09.072Z
Apply to join SHELTER Weekend this August 2022-06-15T14:21:10.242Z
Global health is important for the epistemic foundations of EA, even for longtermists 2022-06-02T13:57:59.556Z
Deferring 2022-05-12T23:44:55.335Z
Against immortality? 2022-04-28T11:51:00.323Z
Longtermist EA needs more Phase 2 work 2022-04-20T14:42:46.153Z
What do we want the world to look like in 10 years? 2022-04-20T14:34:32.595Z
Truthful AI 2021-10-20T15:11:10.363Z
What should we call the other problem of cluelessness? 2021-07-03T17:10:00.714Z
A do-gooder's safari 2021-07-03T11:03:29.557Z
Forget replaceability? (for ~community projects) 2021-03-31T14:41:23.899Z
Everyday Longtermism 2021-01-01T17:39:29.452Z
Good altruistic decision-making as a deep basin of attraction in meme-space 2021-01-01T17:11:06.906Z
Web of virtue thesis [research note] 2021-01-01T16:21:19.522Z
Blueprints (& lenses) for longtermist decision-making 2020-12-21T17:25:15.087Z
"Patient vs urgent longtermism" has little direct bearing on giving now vs later 2020-12-09T14:58:21.548Z
AMA: Owen Cotton-Barratt, RSP Director 2020-08-28T14:20:18.846Z
"Good judgement" and its components 2020-08-19T23:30:38.412Z
What is valuable about effective altruism? Implications for community building 2017-06-18T14:49:56.832Z
A new reference site: Effective Altruism Concepts 2016-12-05T21:20:03.946Z
Why I'm donating to MIRI this year 2016-11-30T22:21:20.234Z
Should effective altruism have a norm against donating to employers? 2016-11-29T21:56:36.528Z
Donor coordination under simplifying assumptions 2016-11-12T13:13:14.314Z
Should donors make commitments about future donations? 2016-08-30T14:16:51.942Z
An update on the Global Priorities Project 2015-10-07T16:19:32.298Z
Cause selection: a flowchart [link] 2015-09-10T11:52:07.140Z
How valuable is movement growth? 2015-05-14T20:54:44.210Z
[Link] Discounting for uncertainty in health 2015-05-07T18:43:33.048Z
Neutral hours: a tool for valuing time 2015-03-04T16:33:41.087Z
Report -- Allocating risk mitigation across time 2015-02-20T16:34:47.403Z
Long-term reasons to favour self-driving cars 2015-02-13T18:40:16.440Z
Increasing existential hope as an effective cause? 2015-01-10T19:55:08.421Z
Factoring cost-effectiveness 2014-12-23T12:12:08.789Z
Make your own cost-effectiveness Fermi estimates for one-off problems 2014-12-11T11:49:13.771Z
Estimating the cost-effectiveness of research 2014-12-11T10:50:53.679Z
Effective policy? Requiring liability insurance for dual-use research 2014-10-01T18:36:15.177Z
Cooperation in a movement supporting diverse causes 2014-09-23T10:47:11.357Z
Why we should err in both directions 2014-08-21T02:23:06.000Z
Strategic considerations about different speeds of AI takeoff 2014-08-13T00:18:47.000Z
How to treat problems of unknown difficulty 2014-07-30T02:57:26.000Z
On 'causes' 2014-06-24T17:19:54.000Z
Human and animal interventions: the long-term view 2014-06-02T00:10:15.000Z
Keeping the effective altruist movement welcoming 2014-02-07T01:21:18.000Z

Comments

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on The Future Might Not Be So Great. · 2022-07-03T16:44:18.941Z · EA · GW

It's from "man things in the world are typically complicated, and I haven't spent time digging into this, but although there surface level facts look bad I'm aware that selective quoting of facts can give a misleading impression".

I'm not trying to talk you out of the bad actor categorization, just saying that I haven't personally thought it through / investigated enough that I'm confident in that label. (But people shouldn't update on my epistemic state! It might well be I'd agree with you if I spent an hour on it; I just don't care enough to want to spend that hour.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on The Future Might Not Be So Great. · 2022-07-03T16:40:17.725Z · EA · GW

I don't think it's like "Jacy had an interpretation in mind and then chose statements". I think it's more like "Jacy wanted to say things that made himself look impressive, then with motivated reasoning talked himself into thinking it was reasonable to call himself a founder of EA, because that sounded cool".

(Within this there's a spectrum of more and less blameworthy versions, as well as the possibility of the straight-out lying version. My best guess is towards the blameworthy end of the not-lying versions, but I don't really know.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on The Future Might Not Be So Great. · 2022-07-03T16:37:26.742Z · EA · GW

Yes, I personally want to do that, because I want to spend time engaging with good faith actors and having them in gated spaces I frequent.

In general I have a strong perfectionist streak, which I channel only to try to improve things which are good enough to seem worth the investment of effort to improve further. This is just one case of that.

(Criticizing is not itself something that comes with direct negative effects. Of course I'd rather place larger sanctions on bad faith actors than good faith actors, but I don't think criticizing should be understood as a form of sanctioning.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on The Future Might Not Be So Great. · 2022-07-02T13:19:30.892Z · EA · GW

I agree with this.

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on The Future Might Not Be So Great. · 2022-07-02T08:00:44.300Z · EA · GW

I'm saying it's a gross exaggeration not a lie. I can imagine someone disinterested saying "ok but can we present a democratic vision of EA where we talk about the hundred founders?" and then looking for people who put energy early into building up the thing, and Jacy would be on that list.

(I think this is pretty bad, but that outright lying is worse, and I want to protect language to talk about that.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on The Future Might Not Be So Great. · 2022-07-02T00:11:29.084Z · EA · GW

I actually didn't mean for any of my comments here to get into attacks on or defence of Jacy. I don't think I have great evidence and don't think I'm a very good person to listen to on this! I just wanted to come and clarify that my criticism of John was supposed to be just that, and not have people read into it a defence of Jacy.

(I take it that the bar for deciding personally to disengage is lower than for e.g. recommending others do that. I don't make any recommendations for others. Maybe I'll engage with Jacy later; I do feel happier about recent than old evidence, but it hasn't yet moved me to particularly wanting to engage.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on The Future Might Not Be So Great. · 2022-07-02T00:03:22.554Z · EA · GW

Actually no, I got reasonably good vibes from the comment above. I read it as a bit defensive, but it's a fair point that that's quite natural if he's being attacked.

I remember feeling bad about the vibes of the Apology post but I haven't gone back and reread it lately. (It's also a few years old, so he may be a meaningfully different person now.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on The Future Might Not Be So Great. · 2022-07-01T22:37:44.067Z · EA · GW

[meta for onlookers: I'm investing more energy into holding John to high standards here than Jacy because I'm more convinced that John is a good faith actor and I care about his standards being high. I don't know where Jacy is on the spectrum from "kind of bad judgement but nothing terrible" to "outright bad actor", but I get a bad smell from the way he seems to consistently present things in a way that puts him in a relatively positive light and ignores hard questions, so absent further evidence I'm just not very interested in engaging]

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on The Future Might Not Be So Great. · 2022-07-01T21:29:09.712Z · EA · GW

I wouldn't have described Jacy as a co-founder of effective altruism and don't like him having had it on his website, but it definitely doesn't seem like a lie to me (I kind of dislike the term "co-founder of EA" because of how ambiguous it is).

Anyway I think calling it a lie is roughly as egregious a stretch of the truth as Jacy's claim to be a co-founder (if less objectionable since it reads less like motivated delusion). In both cases I'm like "seems wrong to me, but if you squint you can see where it's coming from".

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Future Fund June 2022 Update · 2022-07-01T21:10:31.417Z · EA · GW

Either way it looks pretty hard to have a real apples-to-apples comparison, since presumably the open call takes significantly more time from prospective grantees (but you wouldn't want to count that the same as grantmaker time).

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-07-01T21:08:21.018Z · EA · GW

Gavin's count says it includes strategy and policy people, which I think AI Impacts counts as. He estimated these accounted for half of the field then. (But I think that 50% adjustment should have been included when quoting his historical figure, since this post was clearly just about technical work.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Let's not have a separate "community building" track · 2022-07-01T20:41:17.607Z · EA · GW

[Speaking for myself not Oliver ...]

I guess that a week doing ELK would help on this -- probably not a big boost, but the type of thing that adds up over a few years.

I expect that for this purpose you'd get more out of spending half a week doing ELK and half a week talking to people about models of whether/why ELK helps anything, what makes for good progress on ELK, what makes for someone who's likely to do decently well at ELK.

(Or a week on each, but wanting to comment about allocation of a certain amount of time rather than increasing the total.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Fanatical EAs should support very weird projects · 2022-07-01T11:04:48.852Z · EA · GW

Re. non-consequentialist stuff, I notice that I expect societies to go better if people have some degree of extra duty towards (or caring towards) those closer to them. That could be enough here?

(i.e. Boundedly rational agents shouldn't try to directly approximate their best guess about the global utility function.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Let's not have a separate "community building" track · 2022-06-30T22:57:57.580Z · EA · GW

Re. 2), I think the relevant figure will vary by activity. 30% is a not-super-well-considered figure chosen for 80k, and I think I was skewing conservative ... really I'm something like "more than +20% per doubling, less than +100%". Losing 90% of the impact would be more imaginable if we couldn't just point outliery people to different intros, and would be a stretch even then.

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Let's not have a separate "community building" track · 2022-06-30T14:40:14.914Z · EA · GW

Salient points of agreement:

  • I agree it's pretty clear that you're not currently in a position where you should consider learning by going into direct work for years
  • I agree that the things you say you'd be more excited about are lower hanging fruit than asking professional community builders to spend 20% of their time on object-level stuff

OTOH my gut impression (may well be wrong) is that if 80k doubled its knowledge of object-level priorities (without taking up any time to do so) that would probably increase its impact by something like 30%. So from this perspective spending just 3-5% of time on keeping up with stuff feels like it's maybe a bit of an under-investment (although maybe that's correct if you're doing it for a few years and then spending time in a position which gives you space to go deeper).

One nuance: activity which looks like "let people know there's this community and its basic principles, in order that the people who would be a natural fit get to hear about it" feels to me like I want to put it in the education rather than community-building bucket. Because if you're aiming for these intermediate variables like broad understanding rather than a particular shape of a community, then it's less important to have nuanced takes on what the community should look like. So for that type of work I less want to defend anything like 20% (although I'm still often into people who are going to do that spending a bunch of time earlier in their careers going deep on some of the object-level).

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Let's not have a separate "community building" track · 2022-06-30T10:48:31.448Z · EA · GW

Thanks, changed to "let's not have a ..."

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Let's not have a separate "community building" track · 2022-06-30T10:47:06.698Z · EA · GW

Thanks, really appreciated this (strong upvoted for the granularity of data).

To be very explicit: I mostly trust your judgement about these tradeoffs for yourself. I do think you probably get a good amount from social osmosis (such that if I knew you didn't talk socially a bunch to people doing direct work I'd be more worried that the 5-10% figure was too low); I almost want to include some conversion factor from social time to deliberate time.

If you were going to get worthwhile benefits from more investment in understanding object-level things, I think the ways this would seem most plausible to me are:

  • Understanding not just "who is needed to join AI safety teams?", but "what's needed in people who can start (great) new AI safety teams?"
  • Understanding the network of different kinds of direct work we want to see, and how the value propositions relate to each other, to be able to prioritize finding people to go after currently-under-invested-in areas
  • Something about long-term model-building which doesn't pay off in the short term but you'd find helpful in five years' time

Overall I'm not sure if I should be altering my "20%" claim to add more nuance about degree of seniority (more senior means more investment is important) and career stage (earlier means more investment is good). I think that something like that is probably more correct but "20%" still feels like a good gesture as a default.

(I also think that you just have access to particularly good direct work people, which means that you probably get some of the benefits of sync about what they need in more time-efficient ways than may be available to many people, so I'm a little suspicious of trying to hold up the Claire Zabel model as one that will generalize broadly.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Let's not have a separate "community building" track · 2022-06-29T23:43:43.683Z · EA · GW

I do think that still makes them sound more separate than ideal -- while I think many people should be specializing towards community building or direct work, I think that specializing in community building should typically involve a good amount of time paying close attention to direct work, and that specializing in direct work should in many cases involve a good amount of time looking to leverage knowledge to inform community building.

To gesture at (part of) this intuition I think that some of the best content we have for community building includes The Precipice, HPMoR, and Cold Takes. In all cases these were written by people who went deep on object-level. I don't think this is a coincidence, and while I don't think all community-building content needs that level of expertise to produce well, I think that if we were trying to just use material written by specialized community builders (as one might imagine would be more efficient, since presumably they'll know best how to reach the relevant audiences, etc.) we'd be in much worse shape.

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Let's not have a separate "community building" track · 2022-06-29T22:40:03.554Z · EA · GW

BTW I agree that the title is flawed, but don't have something I feel comparably good about overall. But if you have a suggestion I like I'll change it.

(Maybe I should just change "track" in the title to "camp"? Feels borderline to me.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Let's not have a separate "community building" track · 2022-06-29T22:28:24.916Z · EA · GW

My point is that it's not separate. People doing community building can (and should) talk a bunch to people focused on direct work. And we should see some people moving backwards and forwards between community building and more direct work.

I think if we take a snapshot in 2022 it looks a bit more like there's a community-building track. So arguably my title is aspirational. But I think the presence or absence of a "track" (that people make career decisions based on) is a fact spanning years/decades, and my best guess is that (for the kind of reasons articulated here) we'll see more integration of these areas, and the title will be revealed as true with time.

Overall: playing a bit fast and loose, blurring aspirations with current reporting. But I think it's more misleading to say "there is a separate community-building track" than to say there isn't. (The more epistemically virtuous thing to say would be that it's unclear if there is, and I hope there isn't.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Community Builders Spend Too Much Time Community Building · 2022-06-29T22:05:19.451Z · EA · GW

Interesting point re. part organizers who were particularly successful. I don't have a great grasp of the anecdata here; I had a rough impression that some of the very successful ones also got relatively obsessive about understanding object-level areas, but that might be wrong.

(If you're right, I'm also interested in whether they were the only people who had a serious/deliberate go at doing great outreach vs just doing it more passively; I'd update particularly if we had examples of people trying seriously to do say a 60/40 learning/outreach split and not getting far with the outreach.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Let's not have a separate "community building" track · 2022-06-29T21:36:43.022Z · EA · GW

Noticing that the (25%, 70%) figure is sufficiently different from what I would have said that we must be understanding some of the terms differently.

My clause there is intended to include cases like: software engineers (but not the people choosing what features to implement); caterers; lawyers ... basically if a professional could do a great job as a service without being value aligned, then I don't think that role involves making calls about what kind of community building needs to happen.

I don't mean to include the people choosing features to implement on the forum (after someone else has decided that we should invest in the forum), people choosing what marketing campaigns to run (after someone else has decided that we should run marketing campaigns), people deciding how to run an intro fellowship week to week (after someone else told them to), etc. I do think in this category maybe I'd be happy dipping under 20%, but wouldn't be very happy dipping under 10%. (If it's low figures like this it's less likely that they'll be literally trying to do direct work with that time vs just trying to keep up with its priorities.)

Do you think we have a substantive disagreement?

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Let's not have a separate "community building" track · 2022-06-29T18:35:37.674Z · EA · GW

I guess I think there's a continuum of how much people are making those calls. There are often a bunch of micro-level decisions that people are making which are ideally informed by models of what it's aiming for. If someone is specializing in vegan catering for EA events then I think it's fine if they don't have models of what it's all in service of, because it's pretty easy for the relevant information to be passed to them anyway. But I think most (maybe >90%) roles that people centrally think of as community building have significant elements of making these choices.

I guess I'm now thinking my claim should be more like "the fraction should vary with how high-level the choices you're making are" and provide some examples of reasonable points along that spectrum?

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Let's not have a separate "community building" track · 2022-06-29T14:41:43.888Z · EA · GW

I'm saying "at least ~20%"; I'm certainly happy with some people with much higher ratios.

My impression is that Emma's post is mostly talking about student organizers. I think ">50%" seems like a very reasonable default there. I think it would be a bit too costly to apply to later career professionals (though it isn't totally crazy especially for "community building leadership" roles).

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Let's not have a separate "community building" track · 2022-06-29T14:09:22.812Z · EA · GW

My first-pass response is that this is mostly covered by:

It's fine to have professional facilitators who are helping the community-building work without detailed takes on object-level priorities, but they shouldn't be the ones making the calls about what kind of community-building work needs to happen

(Perhaps I should have called out building infrastructure as an important type of this.)

Now, I do think it's important that the infrastructure is pointed towards the things we need for the eventual communities of people doing direct work. This could come about via you spending enough time obsessing over the details of what's needed for that (I don't actually have enough resolution on whether you're doing enough obsessing over details for this, but plausibly you are), or via you taking a bunch of the direction (i.e. what software is actually needed) from people who are more engaged with that.

So I'm quite happy with there being specialized roles within the one camp. I don't think there should be two radically different camps within the one camp. (Where the defining feature of "two camps" is that people overwhelmingly spend time talking to people in their camp not the other camp.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Community Builders Spend Too Much Time Community Building · 2022-06-29T13:26:59.706Z · EA · GW

I agree with these claims (extracted from your comment):

  • EA student group outreach is an amazing opportunity
    • people doing it well while students often end up having more impact through it than they do in their later jobs
  • someone working 15h per week on it is probably going to achieve more than 3x as much as someone working 5h per week
  • someone being intentional about outreach can often achieve a lot more than someone who just does an object-level EA thing
  • learning to talk to people about EA on a small scale is really useful to doing later high-scale marketing; and running a student group of volunteers teaches you a bunch about management in general

But I feel much worse about your proposed model. This is significantly for the reasons discussed in this post, and in my post on why it's important for community building to be well-integrated with direct work. But also because:

  • Mass outreach may be valuable, but a big advantage of "mass" is that it can be done by professionals (who can themselves have invested years in getting a nuanced understanding of what's needed in direct work); there's no need for this to be done by students
  • I think that non-mass outreach will often be more effective from students who are significantly engaged with the EA project in non-outreach-y ways, since it lets them talk sincerely about their own practice and experience, without it coming across as a Ponzi scheme (OK this was covered in this post as 3.3)
    • While I agree that 15h/week will achieve more than 3x as much as 5h/week, I don't think it's >>3x as much, and I think it will typically be outweighed by the benefits
  • I think that many professionals in direct work should be making a bit of time for community-building, and I think the skill-building from having done a little bit of outreach at university will often be helpful for this, so I don't want to restrict this benefit to just a handful of people

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on "Two-factor" voting ("two dimensional": karma, agreement) for EA forum? · 2022-06-26T14:55:46.332Z · EA · GW

Yeah, to proxy this maybe I'd imagine something like adding five virtual upvotes and five virtual downvotes to each comment to start it near 50%, so it's a strong signal if you see something with an extreme value.

Maybe that's a bad idea; makes it harder (you'd need to hover) to notice when something's controversial.
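
To make that mechanism concrete, here is a minimal sketch of the pseudocount idea in Python (the function name, the prior of five virtual votes each way, and the percentage display are illustrative assumptions on my part, not anything the forum actually implements):

    # Seed each comment's agreement score with a few phantom votes so it
    # starts at 50% and only drifts to extreme values as real votes accumulate.
    def agreement_percentage(upvotes: int, downvotes: int, prior: int = 5) -> float:
        """Agreement as a percentage, smoothed by `prior` virtual votes each way."""
        total = upvotes + downvotes + 2 * prior
        return 100 * (upvotes + prior) / total

    print(agreement_percentage(0, 0))    # 50.0  -- no real votes yet
    print(agreement_percentage(9, 1))    # 70.0  -- reads as moderate, not 90%
    print(agreement_percentage(40, 2))   # ~86.5 -- a genuinely strong signal

The design choice is just additive smoothing: with few votes the displayed percentage stays near 50%, so extreme values only show up once there is a real margin of votes behind them.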

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on "Two-factor" voting ("two dimensional": karma, agreement) for EA forum? · 2022-06-26T12:10:41.334Z · EA · GW

I think it would be better if the agreement was expressed as a percentage rather than a score, to make it feel more distinct // easier to remember what the two were.

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins · 2022-06-23T18:45:30.124Z · EA · GW

(I'm a conscientious objector to LinkedIn. I think the business practices of requiring you to have an account to see other people's accounts, and of showing people who pay who's looked at their page, are super obnoxious.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins · 2022-06-22T00:01:38.807Z · EA · GW

I expect people will vary on this. Maybe most people who would be happy filling in the form at all won't mind much about google drive link-sharing. (I imagine a little more nervousness b/c it's easier for people to share a link to their CV than share e.g. a pdf of their CV)

Of possible interest: 2 minutes reflection from me says that I probably won't get to filling this in b/c "writing a CV" is something I will naturally feel perfectionist about // probably I'd need to spend 1-3 days on it to feel comfortable with it going to this group, and I probably don't want to spend that time (if someone made a bid that something was really important I could imagine myself pushing through the discomfort and doing something faster, but I'm more interested in myself as a stand-in for other people with the same hangups than literally getting a submission from me). If instead of asking for a CV you just had a series of questions about career that I could fill in on the form, I'd be decently likely to spend 20-30 minutes doing that. The key difference is that if I'm doing it for a form there's no social expectation that it's the kind of thing that people put time into polishing, so I don't feel bad about doing a quick rather than perfectionist version.

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-21T23:51:20.449Z · EA · GW

I really appreciated this update. Mostly it checks out to me, but I wanted to push back on this:

Here’s a dumb thought experiment: Suppose that Yudkowsky wrote all of the same things, but never published them. But suppose, also, that a freak magnetic storm ended up implanting all of the same ideas in his would-be-readers’ brains. Would this absence of a causal effect count against deferring to Yudkowsky? I don’t think so. The only thing that ultimately matters, I think, is his track record of beliefs - and the evidence we currently have about how accurate or justified those beliefs were.

It seems to me that a good part of the beliefs I care about assessing are the beliefs about what is important. When someone has a track record of doing things with big positive impact, that's some real evidence that they have truth-tracking beliefs about what's important. In the hypothetical where Yudkowsky never published his work, I don't get the update that he thought these were important things to publish, so he doesn't get credit for being right about that.

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins · 2022-06-21T23:36:31.198Z · EA · GW

For the group who have a CV but just don't want it publicly visible, maybe you should have a way of submitting that information that isn't giving a public link?

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins · 2022-06-21T23:31:58.480Z · EA · GW

I feel maybe you should say something like "this will be quick if you have an up-to-date LinkedIn or online CV"? (I don't; I guess I'm unusual but not super-unusual among the population who would otherwise be happy filling this in. People might either not have got to updating a CV recently, or not be happy having one publicly available.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Impact markets may incentivize predictably net-negative projects · 2022-06-21T19:18:13.059Z · EA · GW

Nice, that's pretty interesting. (It's hacky, but that seems okay.)

It's easy to see how this works in cases where there's a single known-in-advance funder that people are aiming to get retro funding from (evaluated in five years, say). Have you thought about whether it could work with a more free market, and not necessarily knowing all of the funders in advance?

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Impact markets may incentivize predictably net-negative projects · 2022-06-21T16:00:21.929Z · EA · GW

Finally: on a meta level, the amount of risk you're willing to spend on trying new funding mechanisms with potential downsides should basically be proportional to the amount of risk you see in our society at the moment.

I think this is not quite right. It shouldn't be about what we think about existing funding mechanisms, but what we think about the course we're set to be on. I think that ~EA is doing quite a good job of reshaping the funding landscape especially for the highest-priority areas. I certainly think it could be doing better still, and I'm in favour of experiments I expect to see there, but I think that spinning up impact markets right now is more likely to crowd out later better-understood versions than to help them.

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Impact markets may incentivize predictably net-negative projects · 2022-06-21T15:56:51.290Z · EA · GW

Additionally - I think the negative externalities may be addressed with additional impact projects, further funded through other impact markets?

I didn't follow this; could you elaborate? (/give an example?)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Impact markets may incentivize predictably net-negative projects · 2022-06-21T15:55:47.284Z · EA · GW

I think startups are usually doing an activity which scales if it's good and stops if it's bad. People can sue if it's causing harm to them. Overall this kind of feedback mechanism does a fine job.

In the impact markets case I'm most worried about activities which have long-lasting impacts even without continuing/scaling them. I'm more into the possibility of markets for scalable/repeatable activities (seems less fraught).

In general the story for concern here is something like:

  • At the moment a lot of particularly high-leverage areas have disproportionate attention from people who are earnestly trying to do good things
  • Impact markets could shift this to "attention from people earnestly trying to do high-variance things"
    • In cases where the resolution on what was successful or not takes a long time, and people potentially do a lot of the activity before we know whether it was eventually valued, this seems pretty bad

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on You don’t have to respond to every comment · 2022-06-21T11:00:21.327Z · EA · GW

I think it's plausible that the norm is overall a bit too strong or a bit too weak at the moment. I feel pretty bad about "no norm" though.

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on You don’t have to respond to every comment · 2022-06-21T10:56:22.084Z · EA · GW

I totally agree that there are some times when it's correct for people not to respond. But overall I think it's pretty clearly good to have some norm for the reasons above. Because I think that a lot of good things come out of getting to the bottom of stuff, I'd typically prefer that people posted half as many things if it meant they'd engage properly with comments on those things. I really worry that with no norm here we might lose something important about EA culture.

I think the ideal equilibrium should incur both some pain from less-response-than-we-might-hope and some pain from people-feeling-obliged-to-respond. I think maybe we're actually doing about right at that at the moment, on average? But I think it would better if everyone felt a bit of obligation to respond and nobody felt an overwhelming obligation to respond (and I guess right now it's more like some people feel it as overwhelming and some don't feel it at all).

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on You don’t have to respond to every comment · 2022-06-20T23:31:53.444Z · EA · GW

I think this post is literally correct -- I don't think there should be a strong norm of responding to every comment. 

But ... I kind of think there should be a norm of trying to respond to substantive comments (where it's OK not to do it, but that's an "OK not to always meet the norm when it's not convenient", not "there isn't even a norm here"). I don't think post authors are just in the same position of "it's nice to respond to things" as everyone else. I guess I think of it as analogous to giving a talk and not taking questions ... sure, sometimes it's the right call (and I'm supportive if someone really isn't up for taking questions), but it's really nice to try to clear some space for it if you can. And I worry that it's easy to read this post as having the implicature of "just don't bother responding if you don't feel like it".

This matters to me because I think we're collectively into truth-seeking about important topics, and I think that often some of the best content comes in back-and-forths where people are arguing about detailed points. I worry that a culture where people are encouraged to not respond to comments and go write their next post instead leads to more talking past each other, less accountability, and ultimately less grounding of our culture and our knowledge.

e.g. say I make a post arguing X, and someone else asks a pointed question in the comments. If I don't respond and this is fully socially endorsed it might be easy for readers to think "oh I'm sure Owen was just busy but he has a good response". But then if I don't have a good answer to the point it may be hard for the pointed question to get the social impact that it deserves, unless someone takes the time/effort to write up enough context that it can be a top-level post and rise to prominence itself.

(I don't think responses to substantive comments always need to be substantive to be helpful. I think it's great to just share "good point", or "hmm, yeah, I want to think more about that", or "I've never found this kind of argument compelling although I can't put my finger on exactly why" if that's where you're at.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Seven ways to become unstoppably agentic · 2022-06-19T13:42:30.200Z · EA · GW

I get where you're coming from (although I think domineeringness is less universally rewarded than intelligence across different parts of society). But given that we don't think the ideal society consists of people being very domineering, I worry that the indirect harms of pushing this in EA culture may be significant. I think it's harder to know what these are than the benefits, but I'm worried that it's a kind of naive consequentialist stance to privilege the things we have cleaner explicit arguments for.

At the very least I think there's something like a "missing mood" of ~sadness here about pushing for EAs to do lots of this. The attitude I want EAs to be adopting is more like "obviously in an ideal world this wouldn't be rewarded, but in the world we live in it is, and the moral purity of avoiding this generally isn't worth the foregone benefits". If we don't have that sadness I worry that (a) it's more likely that we forget our fundamentals and this becomes part of the culture unthinkingly, and (b) a bunch of conscientious people who intuit the costs of people turning this dial up see the attitudes towards it and decide that EA isn't for them.

This is exacerbated by the fact that I don't think there's a clean boundary between EA and non-EA worlds (e.g. if there are EA-adjacent professors perhaps lots of the applicants to work with them are EAs, and we don't really want the competition between them to be in terms of domineeringness).

But ... I don't think sadness is always correct around this! In particular I think many people do much less of putting themselves forwards // asking for favours than is socially optimal! I think most of the benefits of getting EAs to do more of this come from the uncomplicated good of getting those people up to the social ideal rather than from the complicated case where there are tradeoffs. I think something which helped people to get up there (by helping them to think about what's socially ideal and when it's ambiguous whether to do more) would be really great.

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Seven ways to become unstoppably agentic · 2022-06-19T12:37:17.684Z · EA · GW

I'm sure it's context dependent and depends on size of favours. But I'm not sure it depends that much -- and I'm worried that if we don't discuss numbers it's easy for people who are naturally disinclined to ask to think "oh I'm probably doing this enough already" (or people who are naturally inclined to do this a lot already to think "oh yeah I totally need to do that more").

Maybe you could give a context where you think my numbers are badly off?

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Seven ways to become unstoppably agentic · 2022-06-19T03:50:59.644Z · EA · GW

I meant like maybe 3-15 times total ("few" was too ambiguous to be a good word choice).

Writing that out maybe I want to change it to 3-30 (the top end of which doesn't feel quite like "a few"). And I can already feel how I should be giving more precise categories // how taking what I said literally will mean not doing enough asking in some important circumstances, even if I stand by my numbers in some important spiritual sense.

Anyway I'm super interested to get other people's guesses about the right numbers here. (Perhaps with better categories.)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Seven ways to become unstoppably agentic · 2022-06-18T23:02:47.972Z · EA · GW

Oh maybe another thing that I feel uneasy about is the reinforcement of the message "the things you need are in other people's gift", and the way the post (especially #1) kind of presents "agency" as primarily a social thing (later points do this less, but I think it's too late to impact the implicit takeaway).

Sometimes social agency is good, but I'm not sure I want to generically increase it in society, and I'm not sure I want EA associated with it. I'm especially worried about people getting social agency without having something like "groundedness".

(Thoughts still feel slightly muddled/incomplete, but guessing it's better to share than not)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Seven ways to become unstoppably agentic · 2022-06-18T22:40:41.284Z · EA · GW

I like quite a bit of this post (particularly points #2,3,6,7), but also left it feeling uneasy and with a desire to downvote (although I haven't done that because I'm conflicted).

I'm having trouble putting my finger on what I don't like. I think it's something like "I expect some people to be allergic to this, and it to correlate with the people who most need to hear the advice" (and the people who most feel excited about this correlating with those who most need to hear the opposite advice). So I'm feeling good about its existence as a resource to point specific people to (although a version which was less likely to trigger allergies would be even better!), but bad about the idea of it entering into EA canon or being broadly seen as representing what EA is.

I guess I've talked myself into downvoting (since I currently think there are better effects from it having low karma), but I want to attach this to a "thank you for writing it".

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Seven ways to become unstoppably agentic · 2022-06-18T22:14:38.539Z · EA · GW

To avoid the "opposite advice" thing, maybe we can just talk about in absolute terms what are good amounts to ask for help?

My guess is that people should ask their friends/colleagues/acquaintances for help with things a few times a week, and ask senior people they don't know for help with things a few times a year. This is based on a sense of "imagining everyone was doing this" and wondering where I want to turn the dial to. I'm interested if others have different takes about the ideal level.

I think if people are asking noticeably less than that they should be seriously asking themselves if they should be ramping it up. And if people are asking noticeably more they should be seriously asking themselves if they should be turning it down.

I think that people receiving requests should tend to look for signals that suggest that the person makes few/many requests, and be more inclined to be positive if they make few or more inclined to be negative if they make many -- in order to try to get the overall incentive landscape right to encourage people to make about the right number of requests. Of course this is kind of hard to detect particularly if someone is cold emailing you ... anyone have better ideas?

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Don't Over-Optimize Things · 2022-06-17T09:46:07.428Z · EA · GW

I posted this to LessWrong as well, and one of the commenters there mentions the "performance / robustness stability tradeoff in controls theory". Is that the same as what you're thinking of?

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on Global health is important for the epistemic foundations of EA, even for longtermists · 2022-06-15T14:12:40.156Z · EA · GW

Yeah I think this is a really good question and would be excited to see that kind of analysis. Maybe I'd make the numerator be "# of charitable $ spent" rather than "# of charities" to avoid having the results be swamped by which areas have the most very small charities.

It might also be pretty interesting to do some similar analysis of how good interventions in different broad areas look on longtermist grounds (although this would necessarily involve a lot more subjective judgements).

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on AGI Ruin: A List of Lethalities · 2022-06-09T03:09:30.409Z · EA · GW

(I edited to get my meaning closer to correct)

Comment by Owen Cotton-Barratt (Owen_Cotton-Barratt) on AGI Ruin: A List of Lethalities · 2022-06-08T23:01:24.559Z · EA · GW

I think there's a bunch of really important content here and I hope people engage seriously. (I plan to.)

I find that I agree (in impression space, at the moment of reading) with ~70% of what you're saying -- and think it covers an awful lot of important ground, and wish it was better appreciated in these communities. Then I think I have disagreements with the frame of ~20% (some of which rub me the wrong way in a manner which gives me a visceral urge to disengage, which I'm resisting; other parts of which I think may put attention on importantly the wrong things), and flat disagree (?) with ~10%.

I want to think about the places where I have disagreements. I suspect with some fraction I'll end up thinking you're ~right without further prompting; with some other fraction I'll end up thinking you're ~right after further discussion. On the other hand maybe I'll notice that some of the things that passed muster at first read seem subtly wrong. I'm interested to find out whether the remaining disagreements after that are enough to give me a significantly different bottom line than you. (A reason I particularly liked reading this is that it feels like it has a shot at significantly changing my bottom line, which is very unusual.)

(With apologies for the contentless reply; I thought it was better to express how I was relating to it and how I wanted to relate to it than express nothing for now until I've done my thinking.)