Organizational alignment

post by CarolineEllison · 2022-05-17T00:48:35.498Z · 12 comments

[Note: this was written in response to being asked for my thoughts on how important it is for EA orgs to hire aligned staff as they scale. Thanks to Sam Bankman-Fried for comments and for significant influence on these thoughts.]

Organizational alignment is a really hard problem. Even putting EA aside and just thinking about organizations trying to make money or achieve some other goal, I think it's still one of the biggest problems organizations face.

There’s this Elon Musk quote that SBF likes to reference: “Every person in your company is a vector. Your progress is determined by the sum of all vectors.” By default, as you scale an organization, those vectors will all end up pointing in random different directions. Getting their directions more aligned is one of the key things you have to do to scale.
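To make the metaphor concrete, here's a minimal sketch (the numbers are purely illustrative, not from the quote): N perfectly aligned unit vectors sum to a vector of length N, while N randomly oriented unit vectors sum to a length of only about sqrt(N).

```python
# Toy illustration of the "sum of vectors" point: each of N people contributes
# one unit of effort in some direction, and progress is the length of the sum.
import numpy as np

rng = np.random.default_rng(0)
N = 100  # hypothetical headcount

# Random directions: draw 2D Gaussians and normalize each row to unit length.
random_dirs = rng.normal(size=(N, 2))
random_dirs /= np.linalg.norm(random_dirs, axis=1, keepdims=True)

# Aligned directions: everyone pulls the same way.
aligned_dirs = np.tile([1.0, 0.0], (N, 1))

print("aligned progress:", np.linalg.norm(aligned_dirs.sum(axis=0)))  # exactly N
print("random progress: ", np.linalg.norm(random_dirs.sum(axis=0)))   # roughly sqrt(N)
```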

 

I think most of the stuff I have to say about organizational alignment applies to both EA and non-EA organizations. There are two differences for EA orgs that I can think of:

 

I think the broad ways to do organizational alignment are:

 

Unsurprisingly, I think the more decision-making power someone has the more important it is for them to be aligned. There are a few ways I know of to have employees be super aligned with your org:

 

In practice, I think what I tend to do in my hiring is:

One caveat: this is at an organization with fairly legible goals, and the less legible the goals are, the more I'd expect hiring EAs to be important.

12 comments


comment by Sam Bankman-Fried (sambf) · 2022-05-17T00:49:38.059Z

100% agree

Replies from: kuhanj
comment by kuhanj · 2022-05-17T23:58:26.407Z

Thanks to Sam Bankman-Fried for comments and for significant influence on these thoughts.

:P

comment by Kirsten (Khorton) · 2022-05-17T08:23:20.920Z

making compensation heavily dependent on how much value you add
setting an organization-level culture that emphasizes teamwork, deemphasizes individual status, etc

It's worth being aware that some of these options don't play nice together. For example, if you hire people who are intrinsically motivated by your mission and try to emphasize teamwork, you'd probably want to pay them fairly without emphasizing money too much. (According to my Educational Psychology professor) there's some evidence that offering to pay for results erodes intrinsic motivation and doesn't improve results for intellectual problems.

Replies from: Charles He
comment by Charles He · 2022-05-17T09:00:23.118Z

This is insightful!

Personally, I would consider appending “for onlookers”, in this particular instance, as the OP is probably extremely versed in the issues and has a strategy that considers these tradeoffs.

Replies from: Davidmanheim, Khorton
comment by Davidmanheim · 2022-05-18T06:49:10.678Z

Yeah, I think a more basic look at this would be helpful, and I'd encourage someone to write an "intro to org theory" post. In lieu of that, I'll note that the issues here relate to incentives in organizations generally, and point to a preprint paper I wrote that discusses some of the desiderata and strategic considerations around using metrics, and money based on those metrics, to align people in organizations.

comment by Kirsten (Khorton) · 2022-05-17T11:04:58.100Z

Yes for sure, it was meant to be a "yes and" to the post, not a criticism of Caroline!

comment by Davidmanheim · 2022-05-17T17:54:25.373Z

This is great - just wanted to point to my older post bringing up these issues, and thank Caroline for moving the discussion forward!

comment by Alex Catalán Flores · 2022-05-17T10:49:14.907Z

OP -- I'm curious to hear your thoughts about investing greater energy into making goals more 'legible', as you put it. It strikes me that organisational alignment via loyalty + compensation + culture + management + hiring is circumventing the main problem, which is that the organisation's goals aren't clear. 

For example, couldn't an organisation whose North Star is to "do research to determine priorities for making the long-term future go well" create alignment by breaking down that overarching aim into its constituent goals? I'm spit-balling here, but one such constituent goal could be to "Become a research powerhouse", which would in turn be measured by a number of concrete and verifiable metrics such as "Publish X policy briefs" and/or "Double the number of downloads of knowledge products on the website". These goals would be fleshed out and discussed in detail, then published for everyone to see (or even broken down into sub-goals for specific teams/departments). One could even publish them online so that external candidates can see them during recruitment rounds. The overarching idea is that being able to assess the organisation's goals will allow people to self-select, both in terms of the work they're doing and in terms of their personal fit within the organisation, leading to greater alignment as people re-focus or exit (a rough sketch of the cascade is below).
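Spit-balling the structure a bit further, the cascade might look something like this (all names and targets below are hypothetical, just to show the shape):

```python
# Hypothetical sketch of one objective cascading into measurable key results.
okr = {
    "objective": "Become a research powerhouse",
    "key_results": [
        {"metric": "policy briefs published this year", "target": "10"},
        {"metric": "downloads of knowledge products", "target": "2x last year"},
    ],
    "owner": "research team",  # sub-goals could cascade to teams the same way
}

for kr in okr["key_results"]:
    print(f"{okr['objective']} -> {kr['metric']}: target {kr['target']}")
```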

It's likely very obvious by now, but I'm putting forward John Doerr's Objectives & Key Results framework. It's hugely popular these days, and I'll be the first to admit a bias toward it. Doerr's broader point, however, is that one of the benefits of better goal-setting is organisational alignment:

A two-year Deloitte study found that no single factor has more impact than “clearly defined goals that are written down and shared freely. . . . Goals create alignment, clarity, and job satisfaction.”

My curiosity here arises purely because your original post doesn't mention better goal-setting as a way to generate alignment. I also haven't come across many critiques of the better-goal-setting = alignment assumption, so any thoughts in that vein would be very interesting to hear.

Replies from: Davidmanheim
comment by Davidmanheim · 2022-05-17T17:57:16.629Z

If you're up for a long-winded take on what I called "underspecified goals," and how they make alignment fail, I wrote about this question on Ribbonfarm quite a while ago.

comment by Rob Mitchell · 2022-05-17T09:03:31.220Z

Every person in your company is a vector. Your progress is determined by the sum of all vectors.

'Hey! I'm not a vector!' I cried out to myself internally as I read this. I mean, I get it and there's a nice tool / thought process in there, but this feels somewhat dehumanising without something to contextualise it. There are loads of tools you might employ to make good decisions that might involve placing someone in a matrix or similar, but hopefully it's obvious that it's a modelled exercise for a particular goal and you don't literally say 'people are maths' while you do it.

Anyway, I was thinking of political parties as I read this. If your party does well, you get an influx of members who somewhat share the same goals but are different from the existing core, not chosen by you, probably less knowledgeable about your history and ideology, and less immediately aligned. You have essentially no ability to produce alignment via financial mechanisms or 'hiring' processes. How do you get people to pull together? There are some recent examples of UK parties absolutely mangling this, but probably some good examples too (Obama 2008? The German Greens?). Obviously organisations have additional mechanisms available, but parties seem interesting to study because the cultural elements can be more cleanly separated out.

Replies from: Samuel Shadrach
comment by acylhalide (Samuel Shadrach) · 2022-05-17T15:17:54.741Z

Not sure why you got downvoted. First para is valid, second seems a bit off context. (Like yes, it's related but is it related enough to actually further the goals of the OP?)

Replies from: Rob Mitchell
comment by Rob Mitchell · 2022-05-17T15:41:44.065Z

Well, it looks like I'm hijacking a thread about organisational scaling with some anxieties, which I've talked about elsewhere, around referring to people in overly utilitarian ways. Which is fair enough; interestingly, I've done the opposite and talked about org scaling on threads that were fairly tangentially related, and got quite a few upvotes for it. All very intriguing - if you're not occasionally getting blasted, you're not learning as much as you might about e.g. where the limits are.