Comments

Comment by Rob Mitchell on On being ambitious: failing successfully & less unnecessarily · 2022-05-26T08:27:37.390Z · EA · GW

One potential solution could involve explicitly funding such public goods. For example, funders could give an organisation additional funding to allow their staff to contribute more to effective altruism public goods, despite competing priorities.

I was thinking something similar reading some comments around funds giving (or not giving) feedback. There does seem to be a missed equilibrium:

  • It's in everyone's interests if there is more feedback, support, coordination etc.
  • It's not in the interests or capability of any one organisation to take this on themselves.

I might not jump to assuming it would all be coming off existing staff's plates though. 

Anyway, great post. 

Comment by Rob Mitchell on A Problem with Motivation · 2022-05-25T09:48:19.388Z · EA · GW

This should recognise that more reliable motivation comes from norm-following rather than from individual willpower

I think this is right, and it becomes more true and more important when the positive impacts you might have are distant in time, space or both. If you're doing something to help your local community, you should be able to see the impact yourself fairly quickly, and willpower could well be the best thing to get you out picking litter or whatever. This falls down a bit if your beneficiaries are halfway round the world, in the future, or both.

Comment by Rob Mitchell on EA can sound less weird, if we want it to · 2022-05-25T06:35:17.729Z · EA · GW

It seems like there are certain principles that have a 'soft' and a 'hard' version - you list a few here. The soft ones are slightly fuzzy concepts that aren't objectionable, and the hard ones are some of the tricky outcomes you come to if you push them. Taking a couple of your examples:

Soft: We should try to do as much good with donations as possible

Hard: We will sometimes guide time and money away from things that are really quite important, because they're not the most important

Soft: Long-term impacts are more important than short-term impacts

Hard: We may pass up interventions with known and highly visible short-term benefits in favour of those with long-term impacts that may not be immediately obvious

This may seem obvious, but for people who aren't familiar, leading with the soft versions (on the basis that the hard ones will come up soon enough if someone is interested or does their own research) will give a more positive impression than jumping straight to the hard stuff. But I see a lot more jumping than seems justified. I can see why, but if you were trying to persuade someone to join or think well of your political party, would you lead with 'we should invest in public services' or 'you should pay more taxes'?

Comment by Rob Mitchell on EA Common App Development Further Encouragement · 2022-05-21T07:17:05.132Z · EA · GW

Yes, in practice interview questions should vary a lot between different roles, even if on paper the roles are fairly similar, so I'm not sure they could be coordinated beyond possibly some entry-level roles.

In a situation where someone is good but doesn't quite fit a role, the referral element might be useful. Often I've interviewed someone and thought 'they're great, just not the best fit for this role', even though they match on paper, and being able to refer that person on to another organisation would be a mutual benefit.

Comment by Rob Mitchell on How many people have heard of effective altruism? · 2022-05-21T07:01:26.812Z · EA · GW

I'd heard of Peter Singer in an animal rights context years before I knew anything about his association with EA, or his wider philosophy in general. I wonder if a lot of people who have heard of him are in the same place I was.

Comment by Rob Mitchell on Thoughts on requesting reasoning or examples to not pursue fields/positions · 2022-05-20T20:56:21.412Z · EA · GW

I don't think approaching this as 'why not to pursue a path' is helpful. It's more about helping people become aware of things they may not know, so they can make an informed decision. That decision may then be 'it's not for me'. Think of the numbers showing how few people become professional athletes. The framing isn't 'don't do it because you won't make it'. It's 'few people make it; decide in full knowledge.'

Comment by Rob Mitchell on "Big tent" effective altruism is very important (particularly right now) · 2022-05-20T18:27:59.099Z · EA · GW

Celebrate all the good actions that people are taking (not diminish people when they don't go from 0 to 100 in under 10 seconds flat).

--

I'm uncomfortable doing too much celebrating of actions that are much lower impact than other actions

I think the following things can both be true:

  • The best actions are much higher impact than others and should be heavily encouraged.
  • Most people will come in via easier but lower-impact actions, and if there isn't an obvious, stepped progression to higher-impact actions, with support to facilitate it, many will fall out unnecessarily. Or they may be put off entirely if 'entry-level' actions either aren't available or receive very little reward or status.

I didn't read the OP as saying that we should settle for lower-impact actions where there's the potential for higher-impact ones. I read it as saying that we should make it easier for people to find their level: helping them to reach higher impact over time if they're unable or unwilling to get there straight away, or making space for lower-impact actions if for whatever reason that's what's available.

Some of this will involve celebrating and rewarding less impactful actions beyond their absolute value, not for its own sake but because that may be the best way of supporting this progression. I've definitely noticed the '0-100' thing, and if I were younger and less experienced it might have bothered me more.

Comment by Rob Mitchell on [$20K In Prizes] AI Safety Arguments Competition · 2022-05-20T12:07:06.364Z · EA · GW

[Policymakers]
They said that computers would never beat our best chess player; suddenly they did. They said they would never beat our best Go player; suddenly they did. Now they say AI safety is a future problem that can be left to the labs. Would you sit down with Garry Kasparov and Lee Se-dol and take that bet?

Comment by Rob Mitchell on Help Me Choose A High Impact Career!!! · 2022-05-18T11:56:39.622Z · EA · GW

Thanks Jordan. I wanted to pick up on the Turo element. You mention that this is something you only recently stumbled across; it doesn't sound like you have prior experience or training in this area, and you aren't especially passionate about it. You also say that you could make $200k a year from it working a 40-hour week. Where did you get these figures? There aren't many opportunities you can go into without experience and start earning $200k a year.

It may be possible, but I'd suggest it's a high bar to reach, as such opportunities are rare, so I'd be interested to see more analysis here. You also mention risks, but it doesn't look like these are gone into in great detail. So I would really look for some maximally rational analysis of this aspect first.

Comment by Rob Mitchell on Giving What We Can - Pledge page trial (EA Market Testing) · 2022-05-17T20:00:19.038Z · EA · GW

'why seeing options other than the expected one would make me less likely to follow through'

I think the key is that 'following through' can mean several things that are similar from the perspective of GWWC but quite different from the perspective of the person pledging.

In my case I'd already been giving >10% for quite a while but thought it might be nice to formalise it. If I hadn't filled in the pledge it wouldn't have made any difference to my giving, so the value of the pledge to me was relatively low. If the website had been confusing or off-putting I might have given up.

There are others who will already have decided to give 10% but haven't yet started. The pledge then has a bit more value, since there's a chance it could prevent backsliding; but assuming the person had already fully committed to giving at this level, the GWWC pledge wouldn't be crucial to the potential pledger.

Finally, there are people who for whatever reason come across the website without yet having decided to give 10% (or even 1%) and make a decision to sign up when they're there. This is where the more standard marketing theory comes into play.

For the first two groups, the non-conversion is something like 'I can't even see what I'm meant to be signing up for. Never mind, it's not going to affect how I'll actually give anyway.' Friction in this case is anything that makes it harder to identify what the 10% pledge is and how to sign up to it.  I spent a couple of seconds looking between the three options but it was ultimately pretty easy to work out which one was the one I wanted. This would be even easier if it was the one main option.

For the third, it could well be 'There's too much choice, maybe I don't want to do it.' At any rate, it will be very different from people who had already committed to giving 10%.

The 'loss' to GWWC for all three looks the same but there's only a substantial loss to the wider world with the third group. 

I know that people misremembering their own intentions can be an issue, but I doubt it would be a problem for something like 'did you intend to give 10% when you arrived on the GWWC website?', and certainly not for 'have you already been giving 10%?' There's such a difference between the groups that it would be really helpful to at least get an indication of how they split out.

Comment by Rob Mitchell on Organizational alignment · 2022-05-17T15:41:44.065Z · EA · GW

Well, it looks like I'm hijacking a thread about organisational scaling with some anxieties, which I've talked about elsewhere, around referring to people in overly utilitarian ways. Which is fair enough; interestingly, I've done the opposite and talked about org scaling on threads that were fairly tangentially related, and got quite a few upvotes for it. All very intriguing, and if you're not occasionally getting blasted, you're not learning as much as you might about where the limits are...

Comment by Rob Mitchell on Organizational alignment · 2022-05-17T09:03:31.220Z · EA · GW

Every person in your company is a vector. Your progress is determined by the sum of all vectors.

'Hey! I'm not a vector!' I cried out to myself internally as I read this. I mean, I get it and there's a nice tool / thought process in there, but this feels somewhat dehumanising without something to contextualise it. There are loads of tools you might employ to make good decisions that might involve placing someone in a matrix or similar, but hopefully it's obvious that it's a modelled exercise for a particular goal and you don't literally say 'people are maths' while you do it.
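(The modelled exercise itself is easy enough to sketch. A minimal, purely illustrative example in Python, with a made-up two-dimensional 'direction and effort' space that the original post doesn't actually specify:)

```python
import numpy as np

# Each person's contribution as a 2-D vector; the values are invented for illustration.
people = np.array([
    [1.0, 0.0],   # pulling exactly towards the goal
    [0.7, 0.7],   # mostly aligned, with some sideways pull
    [-0.5, 0.5],  # working at cross purposes
])

goal = np.array([1.0, 0.0])   # the direction the organisation wants to move

net = people.sum(axis=0)      # the 'sum of all vectors'
progress = net @ goal         # component of the net pull along the goal direction
print(net, progress)          # [1.2 1.2] 1.2 -- versus 3.0 if all three were aligned
```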

Anyway, I was thinking of political parties as I read this. If your party does well, you get an influx of members who somewhat share the same goals but are different from the existing core: not chosen by you, probably less knowledgeable about your history and ideology, and less immediately aligned. You have essentially no ability to produce alignment via financial mechanisms or 'hiring' processes. How do you get people to pull together? There are some recent examples of UK parties absolutely mangling this, but probably some good examples too (Obama 2008? The German Greens?) Obviously in organisations there are additional mechanisms, but this seems an interesting case to study, since the cultural elements can be more cleanly separated out.

Comment by Rob Mitchell on Giving What We Can - Pledge page trial (EA Market Testing) · 2022-05-17T08:35:27.679Z · EA · GW

Thanks everyone, this is very interesting and well worth having a look through the attached Gitbook.

Around the intuitive interpretation:

Perhaps giving people more options makes them indecisive. They may be particularly reluctant to choose a “relatively ambitious giving pledge” if a less ambitious option is highlighted.

It's possible that this is the reason, but there's an alternative interpretation based on the fact that GWWC is already quite well known, and referenced as 'the place you go to donate 10% of your income'. If a lot of people are coming onto your page with that goal in mind, it would make sense that the layouts that centre that option and make it as frictionless as possible do better. Which is what we see here: the layout centring a different option does much worse, but the one that does best is the one that most highlights the 10% pledge, not the one that contextualises it next to an even higher pledge given equal space.

My own experience of using the site was very similar - I came on, looked around a bit for the 10% option I'd already decided on (in the original setup), then signed up. Things like favouring the middle option and the effects of anchoring are more relevant in a situation where someone has decided to buy, say, a broadband package but hasn't chosen which one; the lack of effect from them here might indicate that relatively fewer people are coming onto the page unsure how much to give.

You could try testing the 10% pledge next to the further pledge without the 1% pledge, but the really key thing feels like a post-pledge survey. 'Did you already know what you would pledge when you went on our website?' 'If so, did you consider giving at a different level when you saw the options?' etc. I'm sure you'd get a good response rate as people would be motivated to ensure others completed the pledge. Or if you already have this information, it would be really useful to see it!
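(For what it's worth, if that head-to-head test were run, comparing the two variants is straightforward. A rough sketch with entirely made-up signup numbers, using a standard two-proportion z-test rather than anything GWWC has said they use:)

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: did variant B convert at a different rate from A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Hypothetical numbers: 10% Pledge signups per visitor for two page layouts.
print(two_proportion_z(conv_a=120, n_a=4000, conv_b=150, n_b=4000))
```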

Comment by Rob Mitchell on Effective Developers: The CV Blind Spot · 2022-05-15T18:17:12.099Z · EA · GW

This is good advice and can be expanded outside software developers as you say. It's also great to see you offering CV help!

As someone who's hired a decent number of people, the one caveat I would add is that this advice is really useful to follow as written if you are applying for a job where decision-makers have a degree of discretion around what they're assessing. It's less immediately applicable, though still potentially valuable, if the initial selection is based solely on scoring against predefined criteria. Sometimes this will be explicit ('applications will be assessed against the person specification'), sometimes implicit. This seems to happen less often for EA jobs, but I'm sure it does happen for at least some.

At any rate, if it's just about criteria, your task is then to list out all the criteria from the job pack, tick them off when you've put them in the application, read them back, think 'would I at least be scored as meeting expectations on this and ideally exceeding them?', and update accordingly. In which case, this sort of approach can help you move up to 'exceeds expectations' on a criterion if you can show you can hit it from multiple angles, e.g. in both work and personal life. It could also help you get a longlist that you could pick and choose from for those types of applications and help at interview...

Comment by Rob Mitchell on Charlotte's Shortform · 2022-05-15T12:40:41.708Z · EA · GW

For all that I've read and done with ToCs and critical path analysis, the first thing that comes to my mind is still 'avoiding this':

(I genuinely find thinking 'make sure you don't do this' at all stages is more effective than any theory I've read.)

Also, anything that has 2-3 paths to a potential goal that are at least partially independent will usually leave you in a better place than one linear path.  Then it's not so much 'backchaining' as switching emphasis ('lobbying seems to have stalled, so let's try publicity/behaviour change... then who knows, lobbying might be back on again').
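(To put rough, made-up numbers on why partially independent paths beat one linear path; a sketch assuming each path has the same invented 40% chance of working:)

```python
# Three routes to the same goal, each with an assumed 40% chance of working.
p = 0.4
one_path = p                    # 0.40
three_paths = 1 - (1 - p) ** 3  # 1 - 0.6**3 = 0.784, if fully independent
print(one_path, three_paths)
# Partial independence lands somewhere between the two figures -- still a
# clear improvement on betting everything on a single linear path.
```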

Comment by Rob Mitchell on How close to nuclear war did we get over Cuba? · 2022-05-14T20:42:44.104Z · EA · GW

Thanks for the detailed response and for linking to that other post. I've been dealing with chickenpox in the house so this is probably later and briefer than the analysis deserves.

+1 to 'Command and Control' and 'Nuclear Folly' as well worth reading - between them, enough to dispel any illusions that the destructive power of nuclear weapons was matched with processes to avoid going wrong, whether by accident or human folly. I'll check out 'The Bomb'.

The worrying aspect for me is the combination of leeway for particular commanding officers combined with environmental factors that reduce the ability of those officers to know what's going on, and/or to exercise rational judgement. The sub is the most obvious example of this.

beyond the fact that the Soviet response to a US invasion of Cuba could be to attempt to take Berlin

That's a pretty strong argument that escalation to a nuclear exchange was a live possibility! I think it's also about other situations taking up the bandwidth of intelligence services and politicians, introducing uncertainty, and increasing the number of locations where normal accidents or individuals doing something stupid could raise tensions. For China, it came to nothing beyond being one more thing taking up attention; but if you're dealing with one nuclear-armed Communist country, it's not ideal to have another one, with an unpredictable leader, invading another country...

Comment by Rob Mitchell on How close to nuclear war did we get over Cuba? · 2022-05-13T21:50:43.232Z · EA · GW

there were no American war plans for instance that escalated from the use of tactical nuclear weapons by the Soviets to firing nuclear missiles

What's your source for this?

I'd also comment that this misses the wider global context. There were tensions over Berlin, and China and India briefly went to war alongside the Cuban missile crisis; potential overlaps between these conflicts raised the risk of a nuclear exchange considerably, one possibly not even beginning around Cuba, and at any rate expanding beyond it once it got going.

Comment by Rob Mitchell on Bad Omens in Current Community Building · 2022-05-13T21:29:50.392Z · EA · GW

I haven't come across this yet... is it what I think it is?

Comment by Rob Mitchell on Intro and practical ideas around Salesforce within EA · 2022-05-13T09:54:42.547Z · EA · GW

Hi Eli! I'm glad those orgs are using Salesforce. It's powerful and scales very well. Annoyingly, Salesforce themselves can be a massive sales and hype machine, so it's not always easy to get the best advice from them directly; freelance expertise can be doubly useful.

Comment by Rob Mitchell on Bad Omens in Current Community Building · 2022-05-12T19:55:32.372Z · EA · GW

Very interesting. I haven't come into contact with any student groups, so can't comment on that. But here are my experiences of what's worked well, and less well, coming in as a longtime EA-ish giver in my late 30s looking for a more effective career:


Good

(Free) books:  I love books - articles and TED talks are fine for getting a quick and simple understanding of something, but nothing beats the full understanding from a good book. And some of the key ones are being given away free! Picking out a few: The Alignment Problem, The Precipice and The Scout Mindset give a grounding in AI alignment, longtermism/existential risk, and rational thinking techniques, and once you have a handful under your belt you're in a solid place to understand and contribute to some discussions. They're good writers too; it's not just information transfer. The approach of 'here's a free book, go away and read it, here are some resources if you want to research further' sounds like the polar opposite of what's described above. It worked well for me. Maybe a proper 'EA book starter list' would help it work even better (there's a germ of this lurking halfway down the page here, but surely this could be standalone and more loved...)

Introductions culture:  People seem happy to give their time up to talk to you after exchanging a couple of messages. After meeting people they're eager to introduce you to others you might be a good 'match' with or at least give leads. Apart from its obvious benefits this is really good for keeping spirits up early on when it might be a bit daunting otherwise.

80k careers guides:  Pretty obvious but very well-written and a good starting point.

Jobs boards e.g. 80k, Work on Climate, Facebook/LinkedIn groups:  Well curated, giving a clear view of what's available in the sector, and particular roles are generally well written. On boards where people post their own jobs, they almost always follow community norms. Not entirely free from the usual problems (hype, jobs without posted salaries) but better than most. I've seen some jobs that are what I want but in other countries, which makes me hopeful I'm looking in the right place, especially if I can also start meeting some more people. Talking of which...

This forum:  Smart discussion, some key people on here writing and listening to feedback, seemed welcoming and receptive when I just rocked up and started writing some comments.


Less good

Occasionally, apparent coldness to immediate suffering:  I've only seen this a bit, but even one example could be enough to put someone off for good. I can see what motivates it, but if a person says 'I think x is one of the most pressing current problems', and the response is what seems like a dismissive 'well, x isn't a genuinely existential risk so it's not a priority', it can come across as a lack of empathy or, at worst, humanity. It's not the argument itself, as I've no issue with ranking charities or interventions and producing recommendations, but more the apparent absolutism and lack of compassion involved (even if, ironically, it could be produced by compassion for an imagined future greater good). 

Processes that don't seem fit for the scale of EA:  I've bigged up 80k above so I'll use them as an example here. Ordered a free book, it arrived, got an email later saying 'ah looks like we have distribution problems, here's a digital copy while you're waiting'... then another one saying 'oops forgot to attach it, here it is'. Signed up to 1:1 careers advice, heard nothing for 3 weeks, then 'sorry, we can't do you', with no explanation. They did connect me with a local organiser, which was great, but didn't pass on the responses I'd taken some time to think about, so we ended up covering some ground again. 

Occasionally insular worldview:  This comes from being concentrated in a small number of cities and often graduating from top universities. I linked this piece in another post, but it's very good, so I'm linking it again.


Neutral but interesting

'Eccentric' billionaires:  Media seem to like this angle but it doesn't really hold up in practice. The presence of the narrative did lead me to investigate the funding of EA in ways that I might not otherwise have done.


I'm still here, so clearly the good outweighs the rest!

Comment by Rob Mitchell on EA and the current funding situation · 2022-05-12T11:50:02.887Z · EA · GW

Definitely agree that networks will become worse predictors, and ultimately grants, job offers etc. will become more impersonal. This isn't entirely a bad thing. For example, personal and network-oriented approaches have significant issues around inclusivity that well-designed systems can avoid, especially if the original network is pretty concentrated and similar (see: the pic in the original post...)

As this happens, people who have been in EA for a while may increasingly feel that 'the average person in the movement is less similar to me'. This is a good thing!... if it's recognised and well managed, and people are willing to make the cognitive effort to make it work.

Comment by Rob Mitchell on Global Income Coin: a UBI-generating currency · 2022-05-11T11:18:52.280Z · EA · GW

The White Paper is fascinating as an example of some smart people trying to identify and crack problems around global UBI - it is worth a look whatever your position on this post and/or solution.

For what it's worth, the $2.8tn figure that much of this hangs off seems 'blithely optimistic', as already commented, the link to M0 plucked out of thin air, and the verification system cumbersome and of doubtful viability. There is the germ of something here though, and I'm glad to see so many different organisations and approaches trying to deal with the issue.

Comment by Rob Mitchell on EA and the current funding situation · 2022-05-11T06:31:50.783Z · EA · GW

For many months, they will sit down many days a week and ask themselves the question "how can I write this grant proposal in a way that person X will approve of" or "how can I impress these people at organization Y so that I can get a job there?"

I would flip this and say, it's inevitable that this will happen, so what do we do about it? There are areas we can learn from:

  • Academia, as you mention - what do we want to avoid here? Which bits actually work well?
  • Organisations that have grown very rapidly and/or grown in a way that changes their nature. On a for-profit basis - Facebook as a cautionary tale of what happens when personal control and association isn't matched with institutional development? On a not-for-profit basis - I work for Greenpeace, and we're certainly very different to what we were decades ago, with a mix of 'true believers' and people semi-aligned, generally in more support roles. Some would say we've sold out, and indeed some people have abandoned us for other groups more similar to our early days, but we have a lot more political influence than we did when we were primarily a direct action / protest group.
  • Corruption studies at a national level. What can we learn from the institutions of very low corruption countries, e.g. in Scandinavia, that we might adapt?

Comment by Rob Mitchell on EA and the current funding situation · 2022-05-10T18:16:51.826Z · EA · GW

It's useful to separate out consultancy/advice-giving versus the actual doing. I would say though that a successful management/operations setup should be able to at least ameliorate the feedback issue you mention (e.g. by identifying leading and/or more quickly changing metrics that are aligned and gaining value from these). 

Comment by Rob Mitchell on EA and the current funding situation · 2022-05-10T09:07:34.378Z · EA · GW

I agree (and have formerly resembled this type...)  This is quite embedded in a lot of nonprofit culture. Part of it is what motivates the individual and their personality, part of it is the concept of supporters' money. 'Would the person who gave you £5 a month want you to be spending your money on that?' In practice this leads to counterproductive underspending. I remember waiting weeks to get maybe £100 worth of extra memory so I could crunch numbers at a reasonable speed without crashing the computer. The concept of taxpayers' money works similarly. 

There's probably a good forum post in there somewhere about how the psychology of charity affects perceptions of EA...

Comment by Rob Mitchell on EA and the current funding situation · 2022-05-10T08:45:19.577Z · EA · GW

Really interesting, and something I'll need to come back to. Just to pick out one bit:

Often, it’ll involve people doing things that just aren’t that enjoyable: management and scaling organisations to large sizes are rarely people’s favourite activities; and, it will be challenging to incentivise enough people to do these things effectively.

I've seen variations on this theme in a few posts, and it doesn't resonate with my own experience. In a genuinely influential management/ops role, there's a great deal of satisfaction to be had in seeing your organisation become more effective - if what that organisation is doing is highly worthwhile. I worry a bit that the tone of 'yeah, this isn't glamorous, but someone has to do it' is putting off talent in the area. If an attraction of EA is doing the most good, and this area is a bottleneck, there are much more positive framings available.

One other question - I've seen quite a few posts trying to work out what to do with EA's increased resources through inductive reasoning. I've seen less around examining what others have done, successfully or unsuccessfully, in terms of embedding sustainable growth and development (e.g. Singapore), managing very large amounts of money effectively (e.g. Norway's sovereign wealth funds), or increasing the ability to spend money quickly and well (e.g. getting 'shovel-ready' engineering plans ready to go), and seeing what lessons can be drawn. None of those map on perfectly to EA's situation, but they should be instructive. Is this research happening, and if so, how are the conclusions being brought together and acted on?

Comment by Rob Mitchell on Tentative Reasons You Might Be Underrating Having Kids · 2022-05-09T20:39:17.138Z · EA · GW

Liked 'the big picture' bit; the tone change makes it.

I do feel, though, that this and other posts focus less on one of the key aspects beyond the effect on parents and the instrumental value of kids once they're grown up: the inherent value of a new, independent consciousness. Whether that consciousness has a positive experience of the world is a huge consideration, which you do mention; personally I would err on the side of optimism, given human progress.

I'm also concerned about valuing children based on their chance of having a big impact as adults. This puts a lot of pressure on parents, and potentially children, for an outcome that is distant and largely out of their control; it could encourage thinking of kids in terms of 'successes' and 'failures' rather than as rounded people (with a high bar for 'success'), and could be counterproductive in enabling them to thrive.

My personal experience is that I did a lot of intellectualising about why I shouldn't have children when I wasn't ready for them, and when I was ready didn't need any particular reasoned justification. Make of that what you will...

Comment by Rob Mitchell on Do you offset your carbon emissions? · 2022-05-05T16:14:14.194Z · EA · GW

Thanks for the link to the Cool Earth post. I don't offset for two reasons:

Climate offsets are frequently ineffective, for reasons discussed in the Cool Earth post and, more journalistically, here;

Focussing on policy change to reduce emissions, such as a frequent flyer tax or mandating cleaner fuel or better fuel efficiency, will have higher impact than focussing on individual carbon footprints, especially as the individual focus may take attention away from the systemic changes needed.

While many organisations selling offsets provide quite ineffectual solutions, a small proportion are likely to be highly effective; however, these take some searching out and aren't obvious without a significant amount of research. At any rate, the ones you get from clicking 'offset this flight' on the airline's payment page are likely to be poor value.

Given that there is wide public interest in and desire to contribute to offsetting, the most effective donations in this area could be to organisations that publicise and/or certify those few schemes that can demonstrate genuinely effective offsetting, to ensure that more money ends up going there. I don't know of any such organisations though. Or there is the option of donating to organisations that lobby for evidence-based policy change - aviation emissions are significantly more impactful than public perception suggests (source).

You're then into comparing effective climate donations vs effective global development donations at least! 

Comment by Rob Mitchell on Mid-career people: strongly consider switching to EA work · 2022-04-27T12:55:47.866Z · EA · GW

As a fellow mid-career person looking at moving into EA, and agreeing that ‘EA career advice for mid-career people is undersupplied at the moment’, I found this post and the comments below really valuable - thanks for taking the time to write it up!



I wanted to pick up on Patrick’s point around specialist vs generalist, as to me this seems a key part of the issue. Much as EA tends younger but seems inclusive of older people, it also seems to skew specialist. This is understandable, given there are a lot of practitioner roles that require large amounts of specialist knowledge. It’s interesting that this comment talks about more generalist roles being mentioned at EAG that haven’t been publicised. I wonder if it’s more likely that specialist roles get ‘officially’ publicised, while the more generalist ones are likelier not to be, maybe to the extent of only living in someone’s head in the style of ‘we could really do with someone to help us out on operations…’


What I would find really useful as more of a generalist is advice along the lines of ‘here’s how to use your skill stack to get a job in EA’. This exists to a certain extent in career profiles like those on 80k, but not always with the context of ‘this is essential from day 1, this is essential but you can learn it on the job, this will make your life easier, this is something you would only use once a month in practice…’ And it does feel like types of roles that could appear in any sector of work (e.g. operations roles) get less coverage than those that specialise in a particular sector (e.g. AI researcher). If I get an EA job, I promise to write something around this! I’ll drop a couple of you a line about this later.


On the ‘quit your job and try things out’ approach - I’ve seen a couple of posts around interim funding. This could be really useful, but in my own situation the issue isn’t so much runway as whether I would give up a reasonably effective job for the chance of something that may be much better. How do you distinguish a genuine difference in expected impact from loss aversion? How many jobs are there that would be better, and how many people are chasing them? For mid-career people, it feels like runway matters less than the knowledge that you may be giving up something with a guaranteed impact, even if not an optimal one, on the basis of uncertain factors.


Thanks again for spending a few hours writing this one.