Posts

Two tongue-in-cheek EA anthems 2022-07-04T11:52:35.419Z

Comments

Comment by Oliver Sourbut on Effective altruism in the garden of ends · 2022-09-08T12:30:25.064Z · EA · GW

This is beautiful and important Tyler, thank you for sharing.

I've seen a few people burn out (and come close myself), and I have made a point of gently socially making and reinforcing this sort of point (far less eloquently) myself, in various contexts. 

I have a lot of thoughts about this subject.

One thing I always embrace is silliness and (often self-deprecating) humour, which are useful antidotes to stress for a lot of people. Incidentally, your tweet thread rendition of the Egyptian spell includes

I am light heading for light. Even in the dark, a fire *bums* in the distance.

(emphasis mine) which I enjoyed. A case of bad keming reified?

A few friends and acquaintances have recently been working on something they're calling Shard Theory, which considers the various parts of a human's motivation system and their interactions. They're interested for other reasons, but I was reminded here. See also Kaj Sotala's Multiagent Models of Mind which is more explicitly about how to be a human.

As a firm descriptive (but undecidedly prescriptive) transhumanist, I think your piece here also touches on something we will likely one day (maybe soon?) have to grapple with, which is the fundamental relationship between (moral) agency and moral patienthood. As it happens, modern humans are quite conclusively both, by most lights, but it doesn't look like this is a law of nature. Indeed there are likely many deserving moral patients today who are not much by way of agents. And we may bring into being agents which are not especially moral-patienty. (Further, something sufficiently agenty might itself relegate humans to the status of 'not much by way of agents'.)

Comment by Oliver Sourbut on Why EAs are skeptical about AI Safety · 2022-07-21T07:59:11.865Z · EA · GW

Seconded/thirded on Human Compatible being near that frontier. I did find its ending 'overly optimistic' in the sense of framing things like 'but lo, there is a solution!', whereas other similar resources like Superintelligence and especially The Alignment Problem seem more nuanced, presenting uncertain proposals for paths forward as preliminary and speculative rather than oven-ready.

Comment by Oliver Sourbut on Announcing Non-trivial, an EA learning platform for teenagers · 2022-07-13T08:23:08.190Z · EA · GW

I think it's a staircase? Maybe like climbing upwards to more good stuff. Plus some cool circles to make it logo-ish.

Comment by Oliver Sourbut on Announcing Non-trivial, an EA learning platform for teenagers · 2022-07-13T08:19:36.628Z · EA · GW

I'm intrigued by this thread. I don't have an informed opinion on the particular aesthetic or choice of quiz questions, but I note some superficial similarities to Coursera, Khan Academy, and TED-Ed, which are aimed mainly at professional-age adults, students of all ages, and youth/students (without excluding adults) respectively.

Fun/cute/cartoon aesthetics do seem to abound these days in all sorts of places, not just for kids.

My uninformed opinion is that I don't see why it should put off teenagers (talented or otherwise) in particular, but I weakly agree that if something is explicitly pitched at teenagers, that might be off-putting!

Comment by Oliver Sourbut on The Future Might Not Be So Great · 2022-07-04T08:39:27.694Z · EA · GW

It looks like I got at least one downvote on this comment. Should I be providing tips of this kind in a different way?

Comment by Oliver Sourbut on The Future Might Not Be So Great · 2022-06-30T15:46:13.918Z · EA · GW

I've considered a possible pithy framing of the Life Despite Suffering question as a grim orthogonality thesis (though I'm not sure how useful it is):

We sometimes point to the substantial majority's revealed preference for staying alive as evidence of a 'life worth living'. But perhaps 'staying-aliveness' and 'moral patient value' can vary more independently than that claim assumes. This is the grim orthogonality thesis.

An existence proof for the 'high staying-aliveness x low moral patient value' quadrant is the complex of torturer+torturee, which quite clearly can reveal a preference for staying alive, while quite plausibly being net negative value.

Can we rescue the correlation of revealed 'staying-aliveness' preference with 'life worth livingness'?

We can maybe reason about value from the origin of moral patients we see, without having a physical theory of value. All the patients we see at present are presumably products of natural selection. Let's also assume for now that patienthood comes from consciousness.

Two obvious but countervailing observations

  • to the extent that conscious content is upstream of behaviour but downstream of genetic content, natural selection will operate on conscious content to produce behaviour which is fitness-correlated
    • if positive conscious content produces attractive behaviour (and vice versa), we might anticipate that an organism 'doing well' according to suitable fitness-correlates would be experiencing positive conscious content
    • this seems maybe true of humans?
  • to the extent that behaviour is downstream of non-conscious control processes, natural selection will operate on non-conscious control processes to produce behaviour which is fitness-correlated
    • we cannot rule out experiences 'not worth living' which nevertheless produce a net revealed staying-aliveness preference, if the behaviour is sufficiently under non-conscious control, or if the selection for behaviour downstream of negative conscious experience is weak
      • weak selection is especially likely in novel out-of-distribution situations
    • in general, organisms which reveal preferences for not staying alive will never be ceteris paribus fitter (though there are special cases of course)

For non-naturally-selected moral patients, I think even the above bets are basically off.

Comment by Oliver Sourbut on The Future Might Not Be So Great · 2022-06-30T15:42:00.545Z · EA · GW

I'm shocked and somewhat concerned that your empirical finding is that so few people have encountered or thought about this crucial consideration.

My experience is different, with maybe 70% of AI x-risk researchers I've discussed with being somewhat au fait with the notion that we might not know the sign of future value conditional on survival. But I agree that it seems people (myself included) have a tendency to slide off this consideration or hope to defer its resolution to future generations, and my sample size is quite small (a dozen maybe) and quite correlated.

For what it's worth, I recall this question being explicitly posed in at least a few of the EA in-depth fellowship curricula I've consumed or commented on, though I don't recall specifics and when I checked EA Cambridge's most recent curriculum I couldn't find it.

Comment by Oliver Sourbut on The Future Might Not Be So Great · 2022-06-30T15:13:46.219Z · EA · GW

Typo hint:

"10<sup>38</sup>" hasn't rendered how you hoped. You can use <dollar>10^{38}<dollar> which renders as

Comment by Oliver Sourbut on Critiques of EA that I want to read · 2022-06-27T07:46:32.638Z · EA · GW

Got it, I think you're quite right on one reading. I should have been clearer about what I meant, which is something like

  • there is a defensible reading of that claim which maps to some negative utilitarian claim (without necessarily being a central example)
  • furthermore I expect many issuers of such sentiments are motivated by basically pretheoretic negative utilitarian insight

E.g. imagine a minor steelification (which loses the aesthetic and rhetorical strength) like "nobody's positive wellbeing (implicitly stemming from their freedom) can/should be celebrated until everyone has freedom (implicitly necessary to escape negative wellbeing)" which is consistent with some kind of lexical negative utilitarianism.

You're right that if we insist that 'freedom' be interpreted identically in both places (parsimonious, granted, though I think the symmetry is better explained by aesthetic/rhetorical concerns) another reading explicitly neglects the marginal benefit of lifting merely some people out of illiberty. Which is only consistent with utilitarianism if we use an unusual aggregation theory (i.e. minimising) - though I have also seen this discussed under negative utilitarianism.

Anecdata: as someone whose (past) political background and involvement (waning!) is definitely some kind of lefty, and who, if it weren't for various x- and s-risks, would plausibly consider some form (my form, naturally!) of lefty politics to be highly important (if not highly tractable), my reading of that claim at least goes something like the first one. I might not be representative in that respect.

I have no doubt that many people expressing that kind of sentiment would still celebrate marginal 'releases', while considering it wrong to celebrate further the fruits of such freedom, ignoring others' lack of freedom.

Comment by Oliver Sourbut on Critiques of EA that I want to read · 2022-06-25T20:24:35.340Z · EA · GW

Minor nitpick: "nobody's free until everyone is free" is precisely a (negative) utilitarian claim (albeit with unusual wording)

Comment by Oliver Sourbut on Are too many young, highly-engaged longtermist EAs doing movement-building? · 2022-06-25T11:30:05.427Z · EA · GW

It's possible the selection bias is high, but I don't have good evidence for this besides personal anecdata. I don't know how many people are relevantly similar to me, and I don't know how representative we are of the latest EA 'freshers', since dynamics will change and I'm reporting with several years' lag.

Here's my personal anecdata.

Since 2016, around when I completed undergrad, I've been an engaged (not sure what counts as 'highly engaged') longtermist. (Before that point I had not heard of EA per se, but my motives were somewhat proto-EA and I wanted to contribute to 'sustainable flourishing at scale' and 'tech for good'.) Nevertheless, until 2020 or so I was relatively invisibly upskilling, reflecting on priorities, consuming advice and ideas etc., and figuring out (perhaps too humbly and slowly) how to orient. More recently I've overcome some amount of impostor syndrome and simultaneously become more 'community engaged' (hence visible) and started directly contributing to technical AI safety research.

If there are a lot with stories like that, they might form a large but quiet cohort countervailing your concern.

Having said that, I think what you express here is excellent to discuss, I think I may have been unusually quiet+cautious, I didn't encounter EA during undergrad, and I suspect (without here justifying) that community dynamics have changed sufficiently that my anecdote is not IID with the cohort you're discussing.

Comment by Oliver Sourbut on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-25T09:08:46.820Z · EA · GW

I just wanted to state agreement that it seems a large number of people largely misread Death with Dignity, at least judged against what seems to me its most plausible intended message: it's mainly about the ethical injunctions (which are very important for a finitely-rational and prone-to-rationalisation being), as Yudkowsky has written about in the past.

The additional detail of 'and by the way this is a bad situation and we are doing badly' is basically modal Yudkowsky schtick and I'm somewhat surprised it updated anyone's beliefs (about Yudkowsky's beliefs, and therefore their all-things-considered-including-deference beliefs).

I think if he had been a little more audience-aware he might have written it differently. Then again maybe not, if the net effect is more attention and investment in AI safety - and more recent posts and comments suggest he's more willing than before to use certain persuasive techniques to spur action (which seems potentially misguided to me, though understandable).

Comment by Oliver Sourbut on Blake Richards on Why he is Skeptical of Existential Risk from AI · 2022-06-16T10:54:12.554Z · EA · GW

I wrote something similar (with more detail) about the Gato paper at the time.

I don't think this is any evidence at all against AI risk though? It is maybe weak evidence against 'scaling is all you need' or that sort of thing.

Comment by Oliver Sourbut on Blake Richards on Why he is Skeptical of Existential Risk from AI · 2022-06-16T10:46:32.828Z · EA · GW

Thanks Rohin, I second almost all of this.

Interested to hear more about why long-term credit assignment isn't needed for powerful AI. I think it depends how you quantify those things and I'm pretty unsure about this myself.

Is it because there is already loads of human-generated data which implicitly embody or contain enough long-term credit assignment? Or is it that long-term credit assignment is irrelevant for long-term reasoning? Or maybe long-term reasoning isn't needed for 'powerful AI'?

Comment by Oliver Sourbut on How I failed to form views on AI safety · 2022-05-09T22:45:52.280Z · EA · GW

OK, this is the terrible terrible failure mode which I think we are both agreeing on (emphasis mine)

the perceived standard of "you have to think about all of this critically and by your own, and you will probably arrive to similar conclusions than others in this field"

By 'a sceptical approach' I basically mean 'the thing where we don't do that'. Because there is not yet enough epistemic credit in the field to expect all (tentative, not-yet-consensus) conclusions to be definitely right.

In traditional/undergraduate mathematics, it's different - almost always when you don't understand or agree with the professor, she is simply right and you are simply wrong or confused! This is a justifiable perspective based on the enormous epistemic weight of all the existing work on mathematics.

I'm very glad you call out the distinction between performing skepticism and actually doing it.

Comment by Oliver Sourbut on How I failed to form views on AI safety · 2022-04-20T19:08:49.131Z · EA · GW

I feel like while “superintelligent AI would be dangerous” makes sense if you believe superintelligence is possible, it would be good to look at other risk scenarios from current and future AI systems as well.

I agree, and I think there's a gap for thoughtful and creative folks with technical understanding to contribute to filling out the map here!

One person I think has made really interesting contributions here is Andrew Critch, for example on Multipolar Failure and Robust Agent-Agnostic Processes (I realise this is literally me sharing a link without much context, which was a conversation failure mode discussed in the OP, so feel free to pass on this). He has also made some attempts to cover more breadth, e.g. here. Critch isn't the only one.

Comment by Oliver Sourbut on How I failed to form views on AI safety · 2022-04-20T19:03:28.747Z · EA · GW

I’m fairly sure deep learning alone will not result in AGI

How sure? :)

What about some combination of deep learning (e.g. massive self-supervised) + within-context/episodic memory/state + procedurally-generated tasks + large-scale population-based training + self-play...? I'm just naming a few contemporary 'prosaic' practices which, to me, seem plausibly-enough sufficient to produce AGI that it warrants attention.

Comment by Oliver Sourbut on How I failed to form views on AI safety · 2022-04-20T18:51:59.696Z · EA · GW

I was one of the facilitators in the most recent run of EA Cambridge's AGI Safety Fundamentals course, and I also have professional DS/ML experience.

In my case I very deliberately emphasised a sceptical approach to engaging with all the material, while providing clarifications and corrections where people's misconceptions were the source of scepticism. I believe this was well-received by my cohort, all of whom appeared to engage thoughtfully and honestly with the material.

I think this is the best way to engage, when time permits, because (in brief)

  • many arguments invoke ill-defined terms, and we need to sharpen these
  • many arguments are (perhaps explicitly) speculative and empirically uncertain
  • even mathematically/empirically rigorous content has important modelling assumptions and experimental caveats
  • scepticism often produces better creative/generative engagement
  • collectively we will fail if our individual opinions are overly shaped by founder effects

I hope that this is a common perspective, but to the extent that it isn't, I wonder if this (especially the last point) may be a source of some of your confusing experiences.

I'd also say: it seems appropriate to have 'very messy views' if by that you mean uncertainty about where things are going and how to make them better! I think folks who don't are doing one of two things

  • mistakenly concentrating more hypothesis weight than their observations/thinking in fact justify (which is a bad idea)
  • engaging in a thinking manoeuvre something like 'temporary MAP stance' or 'subjective probability matching' (which may be a good idea, if done transparently)

'Temporary MAP stance' or 'subjective probability matching'

MAP is Maximum A Posteriori, i.e. your single best guess after considering the evidence. Probability matching means making actions/guesses in proportion to your estimate of each option being right (rather than always picking the single MAP choice).

By this manoeuvre I'm gesturing at a kind of behaviour where you are quite unsure about what's best (e.g. 'should I work on interpretability or demystifying deception?') and rather than allowing that to result in analysis paralysis, you temporarily collapse some uncertainty and make some concrete assumptions to get moving in one or other direction. Hopefully in so doing you a) make a contribution and b) grow your skills and collect new evidence to make better decisions/contributions next time.
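
To make the contrast concrete, here's a minimal sketch in Python. The option names and probabilities are made up purely for illustration, not a claim about what anyone's actual posterior should look like:

```python
import random

# Illustrative posterior over which direction is best to work on
# (made-up numbers, purely for the sake of the example).
posterior = {
    "interpretability": 0.5,
    "demystifying deception": 0.3,
    "something else": 0.2,
}

def map_choice(beliefs):
    # MAP: always commit to the single highest-posterior option.
    return max(beliefs, key=beliefs.get)

def probability_matching_choice(beliefs):
    # Probability matching: sample options in proportion to their posterior.
    options = list(beliefs)
    weights = [beliefs[o] for o in options]
    return random.choices(options, weights=weights, k=1)[0]

print(map_choice(posterior))                   # always 'interpretability'
print(probability_matching_choice(posterior))  # 'interpretability' about half the time
```

The 'temporary' part is then just committing to whichever option came out for long enough to learn something, before updating and choosing again.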

Comment by Oliver Sourbut on How I failed to form views on AI safety · 2022-04-20T18:29:45.861Z · EA · GW

Hey, as someone who also has professional CS and DS experience, this was a really welcome and interesting read. I have all sorts of thoughts, but one main question.

So I used the AGISF Slack to find people who had already had a background in machine learning before getting into AI safety and asked them what had originally convinced them. Finally, I got answers from 3 people who fit my search criteria. They mentioned some different sources of first hearing about AI safety (80 000 Hours and LessWrong), but all three mentioned one same source that had deeply influenced them: Superintelligence.

When I read this I remembered that I was one of the folks you reached out to on Slack! But I didn't mention Superintelligence at all (though in fact I read it several years ago and have a generally good opinion of it). I guess I didn't fit your criteria quite right? In my case I had CS and a little academic AI, but no professional DS/ML experience, before getting 'into' AI safety. I'd be interested to know which other people you spoke to and why they didn't fall into the criteria you were looking for.

Comment by Oliver Sourbut on What holiday songs, stories, etc. do you associate with effective altruism? · 2021-12-18T09:44:32.862Z · EA · GW

It's not EA but I have a soft spot for Good King Wenceslas (https://en.m.wikipedia.org/wiki/Good_King_Wenceslas)

It's a Christmas hymn about a rich prince who was busy striding around and giving to the poor, and it ends by saying all good Christians 'wealth or rank possessing' should do the same. It's a cracking tune and it means that at least once per year, most Anglican churchgoers will get reminded of those words.

The story is medieval but the particular text comes out of the Victorian charity movement which, at its best, was vaguely proto-EA and proto-progress-studies in many ways.

Comment by Oliver Sourbut on How do EAs deal with having a "weird" appearance? · 2021-11-11T08:55:08.738Z · EA · GW

Just seconding this. For context I work not in academia but as a software engineer and data scientist in London.

I usually have crazy sticky-up hair that sort of does different things each day especially as it grows. That's my main superficial weirdness (unless you count the unusually big nose) though I have plenty of other quirks which are harder to label and harder to spot from a distance.

In hindsight I think the hair has made me memorable and recognisable in my workplaces (e.g. people have expressed looking forward to seeing me and my hair in meetings...), and since I'm also reasonably agreeable and competent, I suspect this memorableness has ultimately been net useful for networking (which is useful because networking hasn't historically come naturally to me).

Comment by Oliver Sourbut on Many Undergrads Should Take Light Courseloads · 2021-10-30T08:52:34.256Z · EA · GW

Thank you, I found myself agreeing with most of this post and reflecting on how I might have optimised during my undergrad experience. On the other hand, I note that neither the post nor any comments yet contains what I consider an important caveat:

Taking extra classes is a great way to explore in the sense of dissolving known- and unknown-unknowns (what fits me? what problem-framings am I missing? what tools do other disciplines have? what concerns do people interested in X have? what even is there if I look further?)

Extra-curricular activities also enable some of this sort of exploration. But I'd emphasise that for a competent young person, appropriate exploration, one way or another, is a really key aspect of impact.

From my personal experience (UK): I blended music (!) and mathematics with later explorations in philosophy and computer science, each of which is responsible in one way or another for opening doors to impactful, challenging, rewarding, and lucrative possibilities.

Comment by Oliver Sourbut on Prioritization Research for Advancing Wisdom and Intelligence · 2021-10-21T10:12:35.855Z · EA · GW

Yes yes, more strength to this where it's tractable and possible backfires are well understood and mitigated/avoided!

One adjacent category which I think is helpful to consider explicitly (I think you have it implicit here) is 'well-informedness', which I motion is distinct from 'intelligence' or 'wisdom'. One could be quite wise and intelligent but crippled or even misdirected if the information available/salient is limited or biased. Perhaps this is countered by an understanding of one's own intellectual and cognitive biases, leading to appropriate ('wise') choices of information-gathering behaviour to act against possible bias? But perhaps there are other levers to push which act on this category effectively.

To the extent that you think long-run trajectories will be influenced by few specific decision-making entities, it could be extremely valuable to identify, and improve the epistemics and general wisdom (and benevolence) of those entities. To the extent that you think long-run trajectories will be influenced by the interactions of many cooperating and competing decision-making entities, it could be more important to improve mechanisms for coordination, especially coordination against activities which destroy value. Well-informedness may be particularly relevant in the latter case.

Comment by Oliver Sourbut on Ben_Snodin's Shortform · 2021-10-14T09:58:31.018Z · EA · GW

It depends on what media type you're talking about (audio, video, display, ...) - $6m/100m is a $60 CPM ('cost per mille'), which is certainly over the odds for similar 'premium video' advertising, but only by maybe 2-5x. For other media like audio and display the CPMs can be quite a bit lower, and if you're just looking to reach 'someone, somewhere' you can get a bargain via programmatic advertising.
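
For anyone unfamiliar with the jargon, CPM is just the cost per 1,000 impressions; here's the arithmetic behind the figures above as a minimal sketch:

```python
def cpm(total_cost, impressions):
    # Cost per mille: the price paid per 1,000 impressions.
    return total_cost / impressions * 1000

# The figures discussed above: a $6m spend reaching 100m people.
print(cpm(6_000_000, 100_000_000))  # 60.0, i.e. $60 CPM
```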

I happen to work for a major demand-side platform in real-time ad buying and I've been wondering if there might be a way to efficiently do good this way. The pricing can be quite nuanced. Haven't done any analysis at this point.

Comment by Oliver Sourbut on How impactful is free and open source software development? · 2021-10-14T09:32:04.697Z · EA · GW

Hey, let me know if you'd like another reviewer. I'm a medium-experienced senior software engineer whose professional work and side-projects use various proportions of open-source and proprietary software. And I enjoy reviewing/proof-reading :)

Comment by Oliver Sourbut on Beyond fire alarms: freeing the groupstruck · 2021-10-07T08:36:59.736Z · EA · GW

I appreciated your detailed analysis of the fire alarm situation along with evidence and introspection notes.

I'm not sure if it opens up any action-relevant new hypothesis space, but one feature of the fire alarm situation which I think you did not analyse is that commonly people are concerned also for the welfare of their fellows, especially those who are close by. This makes sense: if you find yourself in a group, even of strangers (and you've reached consensus that you're not fighting each other) it will usually pay off to look out for each other! So perhaps another social feature at play when people fail to leave a smoky room when the group shows no signs of doing so is that they aren't willing to unilaterally secure their own safety unless they know the others are also going to be safe. Though on this hypothesis you might expect to see more 'speaking up' and attempts to convince others to move.

Comment by Oliver Sourbut on Beyond fire alarms: freeing the groupstruck · 2021-10-07T08:35:41.013Z · EA · GW

This was a great read, thank you - I especially valued the multiple series of illustrating/motivating examples, and the several sections laying out various hypotheses along with evidence/opinion on them.

I sometimes wonder how evolution ended up creating humans who are sometimes nonconformist, when it seems socially costly, but I think a story related to what you've written here makes sense: at least one kind of nonconformity can sometimes shift a group consensus from a fatal misinterpretation to an appropriate and survivable group response (and, presumably, in expectation gain some prestige for the nonconforming maverick(s) who started the shift). So there's maybe some kind of evolutionarily stable meta-strategy of 'probability of being conformist or not (maybe context-dependent)'.

Comment by Oliver Sourbut on EA Survey 2020: How People Get Involved in EA · 2021-05-27T08:02:46.458Z · EA · GW

Thanks for these very helpful insights! I thought the mosaic charts were particularly creative and visually insightful.

I have one minor statistical nit and one related question.

In cases where 'only one significant difference was found' (at the 95% level), it could be worth noting that you have around 20 categories... so at a 5% false-positive rate, roughly one spurious significant difference is to be expected! (If the difference is small.)

Also a question about how the significance test was carried out. For calling a difference significant at 95%, it matters whether you a) check whether the individual 95% confidence intervals overlap, or b) check whether the diff'd confidence interval noted above contains 0 (the usual approach). Which approach was used here? I ask because to my eye there might be a few more (weakly) significant results than were mentioned in the text.
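
To illustrate why the distinction matters, here's a minimal sketch with made-up proportions and sample sizes (not the survey's actual figures): the two individual intervals can overlap even when the interval for the difference excludes zero.

```python
from math import sqrt

# Made-up example: two groups' proportions and sample sizes.
p1, n1 = 0.30, 500
p2, n2 = 0.235, 500
z = 1.96  # ~95% confidence

se1 = sqrt(p1 * (1 - p1) / n1)
se2 = sqrt(p2 * (1 - p2) / n2)

# (a) Do the two individual 95% CIs overlap?
ci1 = (p1 - z * se1, p1 + z * se1)
ci2 = (p2 - z * se2, p2 + z * se2)
overlap = ci1[0] <= ci2[1] and ci2[0] <= ci1[1]

# (b) Does the 95% CI of the difference contain zero? (the usual test)
se_diff = sqrt(se1**2 + se2**2)
diff_ci = (p1 - p2 - z * se_diff, p1 - p2 + z * se_diff)
contains_zero = diff_ci[0] <= 0 <= diff_ci[1]

print(overlap)        # True: the individual intervals overlap...
print(contains_zero)  # False: ...yet the difference is significant at 95%
```

So check (a) is the more conservative of the two and can miss differences that check (b) would flag, which is why I'm asking which one was used.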

Comment by Oliver Sourbut on What harm could AI safety do? · 2021-05-24T17:08:31.936Z · EA · GW

To the extent that you are concerned about intrinsically-multipolar negative outcomes (that is, failure modes which are limited to multipolar scenarios), AI safety which helps only to narrowly align individual automated services with their owners could help to accelerate such dangers.

Critch recently outlined this sort of concern well.

A classic which I personally consider to be related is Meditations on Moloch.

Comment by Oliver Sourbut on Careers Questions Open Thread · 2020-12-16T21:26:10.410Z · EA · GW

I really appreciate these data points! Actually it's interesting you mention the networking aspect - one of the factors that would push me towards further higher education is the (real or imagined?) networking opportunities. Though I get on very well with most people I work or study with, I'm not an instinctive 'networker', and I think for me, improving that could be a factor with relatively high marginal return.

As for learning practical skills... I'd hope to get some from a higher degree but if that were all I wanted I might indeed stick to Coursera and the like! It's the research aspect I'd really like to explore my fit for.

Trying to negotiate a break with the company had crossed my mind but sounds hard. Thanks for the nudge and anecdata about that possibility. It would be a big win if possible!

I'm really glad to hear that your path has been working out without regret. I hope that continues. :)

Comment by Oliver Sourbut on Careers Questions Open Thread · 2020-12-16T21:24:30.556Z · EA · GW

I welcome the reinforcement that a) it is indeed a tough call and b) I'm sane and they're good options! Thank you for the encouragement, and the advice.

I remain fuzzy on what shape 'impactful direct work' could take, and I'm not sure to what degree keeping my mind 'open' in that sense is rational (the better to capture emergent opportunities) vs merely comforting (specifying a path is scary and procrastinating is emotionally safer)! I acknowledge that my tentative principal goal besides donations, if I continue engineering growth, is indeed working on safety. The MIRI job link is interesting. I'd be pleased and proud to work in that sort of role (though I'm likely to remain in the UK at least for the near future).

Thank you for the suggestion to talk to Richard or others. I've gathered a few accounts from friends I know well who have gone into further degrees in other disciplines, and I expect it would be useful to complement that with evidence from others to help better predict personal fit. I wouldn't know whom to talk to about impact on a long-term engineering track.

Comment by Oliver Sourbut on Careers Questions Open Thread · 2020-12-11T08:30:40.958Z · EA · GW

As a (software) engineer myself for a few years, I can encourage you that it is rewarding and challenging, and that in the right position you can have quite a bit of autonomy to drive decision-making and execute on your own vision. Depending on the role and organisation, it can be far from merely technical; the outline you give of the college project sounds exactly like engineering to me!

That said, there are few or no places where engineers are completely unconstrained. But there are routes from engineering into more 'overseeing'-type roles, e.g. architect, tech director, technical project manager. A lot of those people do much better if they have solid engineering experience of their own first.

Some different thoughts on which I have much less or no experience but seem relevant:

  • management consulting. Have you heard of that? I think they solve hard problems and have some room for vision.
  • entrepreneurs obviously have an opportunity to create and oversee a vision. I gather that a lot of the time it helps to have related experience in the relevant industry/field beforehand

Comment by Oliver Sourbut on Careers Questions Open Thread · 2020-12-10T15:45:35.250Z · EA · GW

I'm trying to choose between doubling down on skills in software engineering or branching out with the goal of working on AI safety longer term. I get the impression that a lot of people are in a similar position.

For me, my undergrad was an unusual mix of things but included Maths, Music (!) and Computer Science. I got good grades and I think there's a reasonable chance of my getting into a university like Oxford, Cambridge or Imperial to study a Masters and perhaps subsequently a PhD in Computer Science/AI.

Currently I'm paid well and developing a fair amount of expertise as a software engineer. I've been at it for ~4 years and gained a fair amount of respect and responsibility; I'm most likely on the cusp of a 'senior' promotion within months. Sticking with the engineering, I might be able to give £10s of thousands/year or more, perhaps sustained over a few decades, and there's also a chance I could find myself doing impactful direct work or being in a position to influence the direction of large forces in tech.

On the other hand, from the work I've done and investigations in my own time I think my temperament and working style suit research, but I've little direct evidence of that so far and I'd need to prove that to myself and others. I'm thinking of a Masters as being a great way to do that. I've enjoyed several recommended online courses in ML and statsy things as well as a little tinkering on toy projects. On this evidence my capacities appear to align well with applying and implementing machine learning but I think research and policy have higher leverage over future flourishing.

Is applying for Masters courses in AI/ML a sensible next exploration? One catch is that some of my current compensation is in equity which vests over time, meaning making a change would sacrifice some earnings.

Comment by Oliver Sourbut on You Should Write a Forum Bio · 2020-09-17T18:40:31.982Z · EA · GW

Thanks for this, Aaron. Joining a new community can be tricky online so it's helpful to have an explicit welcome like this!