Comments

Comment by tamgent on Open EA Global · 2022-09-07T12:39:53.830Z · EA · GW

I also don't get this. I can't help thinking about the essay The Inner Ring by C.S. Lewis. I hope that's not what's happening.

Comment by tamgent on AI Governance Needs Technical Work · 2022-09-07T12:12:31.566Z · EA · GW

I am a software engineer who transitioned to tech/AI policy/governance. I strongly agree with the overall message (or at least title) of this article: that AI governance needs technical people/work, especially for the ability to enforce regulation.

However, in the 'types of technical work' you lay out, I see some gaping governance questions. You outline various tools that could be built to improve the capability of actors in the governance space, but there are many such actors, and tools by their nature are dual use - where is the piece on who these tools would be wielded by, and how they can be used responsibly? I would be more excited to see new initiatives in this space that clearly set out which actors they work with, for which kinds of policy issues, which not, and why. There is also a big hole around avoiding conflicts of interest, etc. There are lots of unavoidable legal issues that crop up when you need to actually use such tools in any context beyond a voluntary initiative of a company (which does not give as many guarantees as things that apply to all current and future companies, like regulations or, to some extent, standards). There is, and increasingly will be, huge demand for companies with practical AI auditing expertise - this is a big opportunity to start trying to fill that gap.

I think the section on 'advising on the above' could be fleshed out a whole lot more. At least I've found that, because this area is very new, there is a lot of talking to do with lots of different people, and lots of translation, before you get to actually do these things... it helps if you're the kind of technical person who is willing to learn how to communicate with a non-technical audience, who is willing to learn from people with other backgrounds about the constraints and complexities of the policymaking world, and who derives satisfaction from this. I think this is hugely worthwhile though - and if you're that kind of person and looking for work in the area, do get in touch as I have some opportunities (in the UK).

Finally, I'll now highlight more explicitly the risk of technical people being used for the aims of others (which may or may not lead to good outcomes) in this space. In my view, if you really want to work in this intersection you should be asking the above questions about anything you build: who will use this thing and how, what are the risks, and can I reduce them? And when you advise powerful actors, bringing your technical knowledge and expertise, do not be afraid to also give decision-makers your opinions on what might lead to which kinds of real-world outcomes, to ask questions about the application's aims, and to improve those aims.

Comment by tamgent on Most social activity will reside in VR by 2036 · 2022-08-23T06:55:37.668Z · EA · GW

Every time I've used VR (including the latest headsets), I feel sick and dizzy afterwards. I don't think this issue is unique to me. I find it difficult to imagine that most people would want to spend significant daily time in something that has such an effect, and nothing in this post addresses this issue. Your prediction feels wildly wrong to me.

Comment by tamgent on Announcing the GovAI Policy Team · 2022-08-15T20:32:33.158Z · EA · GW

Great development. Does this mean GovAI will start inputting to more government consultations on AI and algorithms? The UK gov recently published a call for input on its AI regulation strategy - is GovAI planning to respond to it? On the regulation side, there are a lot of different areas of regulation (financial, content, communication infra, data protection, competition and consumer law), and the UK gov is taking a decentralised approach, relying on individual regulators' areas of expertise rather than creating a central body. How will GovAI stay on top of these different subject matter areas?

Comment by tamgent on How technical safety standards could promote TAI safety · 2022-08-10T11:35:30.064Z · EA · GW

Just to add to the UK regulator stuff in this space: the DRCF has a workstream on algorithm auditing. Here is a paper with a short section on standards. Obviously it's early days, and focused on current AI systems, but it's a start: https://www.gov.uk/government/publications/findings-from-the-drcf-algorithmic-processing-workstream-spring-2022/auditing-algorithms-the-existing-landscape-role-of-regulators-and-future-outlook

Comment by tamgent on How technical safety standards could promote TAI safety · 2022-08-09T14:39:03.718Z · EA · GW

Well, I disagree, but there's no need to agree - diverse approaches to a hard problem sound good to me.

Comment by tamgent on How technical safety standards could promote TAI safety · 2022-08-09T06:08:10.626Z · EA · GW

AI doesn't exist in a vacuum, and TAI won't either. AI has messed up, is messing up, and will mess up bigger as it gets more advanced. Security will never be a 100% solved problem, and aiming for zero breaches of all AI systems is unrealistic. I think we're more likely to have better AI security with standards - do you disagree with that? I'm not a security expert, but here are some relevant considerations from one, applied to TAI. See in particular the section "Assurance Requires Formal Proofs, Which Are Provably Impossible". Given the provably impossible nature of formal guarantees (not to say we shouldn't try to get as close as possible), it really does seem that leveraging whatever institutional and coordination mechanisms have worked in the past is a worthwhile idea. I consider SSOs to be one such set of mechanisms, all things considered.

Here is a section from an article written by someone who has worked in SSOs and security for decades:
> Most modern encryption is based on standardised algorithms and protocols; the use of open, well-tested and thoroughly analysed encryption standards is generally recommended. WhatsApp, Facebook Messenger, Skype, and Google Messages now all use the same encryption standard (the Signal protocol) because it has proven to be secure and reliable. Even if weaknesses are found in such encryption standards, solutions are often quickly made available thanks to the sheer number of adopters.

Comment by tamgent on How technical safety standards could promote TAI safety · 2022-08-08T17:43:35.039Z · EA · GW

I can respond to your message right now via a myriad of potential software clients because of the establishment of a technical standard, HTTP. Additionally, all major web browsers run and interpret JavaScript, in large part thanks to SSOs like the IETF and W3C. By contrast, on mobile we have two languages for the duopoly, and a myriad of issues I won't go into, but suffice to say there has been a failure of SSOs in that space to replicate what happened with web browsing and the early internet. It may be that TAI presents novel and harder challenges, but in some of the hardest such technical coordination challenges to date, SSOs have been very useful. I'm not as worried about defection as you are if we get something good going - the leaders will likely have significant resources, will therefore be under greater public scrutiny, and will want to show they are also leading on participating in standard setting. I am hopeful that there will be significant innovation in this area in the next few years. [Disclaimer: I work in this area, so I'm naturally biased.]

Comment by tamgent on UK AI Policy Report: Content, Summary, and its Impact on EA Cause Areas · 2022-07-21T18:09:06.100Z · EA · GW

Thank you kindly for the summary! I was just thinking today, when the paper was making the rounds, that I'd really like a summary of it whilst I wait to make the time to read it in full. So this is really helpful for me.

I work in this area, and can attest to the difficulty of getting resources towards capability building for detecting trends towards future risks, as opposed to simply firefighting the ones we've been neglecting. However, I think the near vs long term distinction is often unhelpful and limited, and I prefer to try to think about things in the medium term (next 2-10 years). There's a good paper on this by FHI and CSER. 

I agree with you that the approach outlined in the paper is generally good, and with your caveats/risks too. I also think it's nice that there is variation amongst nations' approaches; hopefully they'll be complementary and borrow pieces from each other's work.

Comment by tamgent on Strategic Perspectives on Long-term AI Governance: Introduction · 2022-07-04T09:07:19.639Z · EA · GW

Sorry - I meant more like a finite budget and proportions, not probabilities.

Comment by tamgent on Strategic Perspectives on Long-term AI Governance: Introduction · 2022-07-03T10:01:48.289Z · EA · GW

Agreed that on aggregate it's good for a collection of people to pursue many different strategies, but would you personally/individually weight all of these equally? If so, maybe you're just uncertain? My guess is that you don't weight them all equally. Maybe another framing is to put probabilities on each and then dedicate the appropriate proportion of resources accordingly. This is a very top-down approach though, and in reality people will do what they will! As an individual, it seems hard to me to span more than two adjacent positions on any axis. And when I look at my own work and beliefs, that checks out.

Comment by tamgent on Leaving Google, Joining the Nucleic Acid Observatory · 2022-06-11T19:13:46.839Z · EA · GW

Could you elaborate on what you mean by ad tech getting stronger? Is that just because all tech gets stronger with time, or is it in response to current shifts, like Privacy Sandbox?

Comment by tamgent on What’s the theory of change of “Come to the bay over the summer!”? · 2022-06-10T20:48:01.381Z · EA · GW

Yeah I also had a strong sense of this from reading this post. It reminded me of this short piece by C. S. Lewis called The Inner Ring, which I highly recommend. Here is a sentence from it that sums it up pretty well I think:

> In the whole of your life as you now remember it, has the desire to be on the right side of that invisible line ever prompted you to any act or word on which, in the cold small hours of a wakeful night, you can look back with satisfaction?

Comment by tamgent on Deference Culture in EA · 2022-06-09T23:06:22.500Z · EA · GW

I found this to be an interesting way to think about this that I hadn't considered before - thanks for taking the time to write it up.

Comment by tamgent on Deference Culture in EA · 2022-06-09T23:03:28.143Z · EA · GW

On the philosophical side paragraph - totally agree; this is why worldview diversification makes so much sense (to me). The necessity of certain assumptions leads to divergence of kinds of work, and that is a very good thing, because maybe (almost certainly) we are wrong in various ways, and we want to be alive and open to new things that might be important. Perhaps on the margin an individual's most rational action could sometimes be to defer more, but as a whole, a movement like EA would be more resilient with less deference. 

Disclaimer: I personally find myself very turned off by the deference culture in EA. Maybe that's just the way it should be though.

I do think that higher-deference cultures are better at cooperating and getting things done - and these are no easy tasks for large movements. But there have been movements with these properties that accidentally did terrible things in the past, and others that did wonderful things.

I'd guess there may be a correlation between thinking there should be more deference and being in the "row" camp, and between thinking there should be less and being in the "steer" camp (or another camp), as described here.

Comment by tamgent on Power dynamics between people in EA · 2022-06-07T07:28:24.733Z · EA · GW

This is not about the EA community, but something that comes to mind which I enjoyed is the essay The Tyranny of Structurelessness, written in the 70s.

Comment by tamgent on Unflattering reasons why I'm attracted to EA · 2022-06-06T19:42:18.427Z · EA · GW

I think the issue is that some of these motivations might cause us to not actually make as much positive difference as we think we're making - Goodharting ourselves.

Comment by tamgent on Request: feedback on my EAIF application · 2022-05-29T11:16:40.742Z · EA · GW

Have you spoken to the Czech group about their early days? I'd recommend it, and can put you in touch with some folks there if you like.

Comment by tamgent on How Could AI Governance Go Wrong? · 2022-05-28T18:20:24.416Z · EA · GW

Agreed. One book that made it really clear for me was The Alignment Problem by Brian Christian. I think that book does a really good job of showing how it's all part of the same overarching problem area.

Comment by tamgent on How Could AI Governance Go Wrong? · 2022-05-28T18:16:40.681Z · EA · GW

I'm not Hayden, but I think behavioural science is a useful area for thinking about AI governance, in particular for the design of human-computer interfaces. One example with current widely deployed AI systems is recommender engines (though this is not an HCI example). I'm trying to understand the tendencies of recommenders towards biases like concentration, or contamination problems, and how these impact user behaviour and choice. Additionally, I'm trying to understand how what recommenders optimise for does or does not capture users' values, whether that's because of a misalignment of values between the user and the company, or because it's just really hard to learn human preferences given how complex they are. In doing this, it's really tricky to distinguish in the wild between the choice architecture (the behavioural parts) and the algorithm when attributing causes of users' actions.
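
To make the concentration idea concrete, here is a minimal sketch of one way you could quantify exposure concentration from recommendation logs, using the Gini coefficient. This is purely a hypothetical illustration of mine - the function name and the flat-log data layout are invented for this example, not any production system's API:

```python
from collections import Counter

def exposure_gini(recommended_item_ids):
    """Gini coefficient of item exposure counts: 0 means every item is
    recommended equally often; values near 1 mean a few items dominate."""
    counts = sorted(Counter(recommended_item_ids).values())
    n, total = len(counts), sum(counts)
    if n == 0 or total == 0:
        return 0.0
    # Standard Gini formula for a sorted sample of counts.
    weighted = sum((i + 1) * c for i, c in enumerate(counts))
    return (2 * weighted) / (n * total) - (n + 1) / n

# A toy feed that shows one item far more often than the rest.
logs = ["item_a"] * 80 + ["item_b"] * 15 + ["item_c"] * 5
print(f"exposure Gini: {exposure_gini(logs):.2f}")  # 0.50 for this toy log
```

A persistently high value over a window of logs would be one concrete signal of the concentration tendency I mean, though it says nothing by itself about whether the choice architecture or the algorithm is responsible.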

Comment by tamgent on Bad Omens in Current Community Building · 2022-05-25T16:59:49.314Z · EA · GW

So from the perspective of the recruiting party these reasons make sense. From the perspective of a critical outsider, these very same reasons can look bad (and are genuine reasons to mistrust the group that is recruiting):
- easier to manipulate their trajectory
- easier to exploit their labour
- free selection: building on top of / continuing the rich-get-richer effects around 'talented' people
- 'let's apply a supervised learning approach to high-impact people acquisition; surely the training data biases won't affect it'

Comment by tamgent on Impact Of Alcohol Consumption On Performance · 2022-05-22T19:40:09.984Z · EA · GW

I've wondered in the past whether it's like dropout in a neural network. (I've never looked into this and know nothing about it)
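
For anyone who hasn't met the reference: dropout randomly silences units in a network during training so it can't over-rely on any one of them, which is the property the analogy gestures at. A minimal sketch (purely illustrative; the analogy itself remains speculation):

```python
import numpy as np

def dropout(activations: np.ndarray, p: float = 0.5, training: bool = True) -> np.ndarray:
    """Inverted dropout: zero each unit with probability p during training,
    scaling the survivors by 1/(1-p) so expected activations are unchanged."""
    if not training or p == 0.0:
        return activations
    mask = (np.random.rand(*activations.shape) >= p) / (1.0 - p)
    return activations * mask

# Each forward pass silences a random subset of units, so the network
# cannot lean too heavily on any single one.
layer_output = np.ones(8)
print(dropout(layer_output, p=0.5))
```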

Comment by tamgent on "Big tent" effective altruism is very important (particularly right now) · 2022-05-21T21:31:00.991Z · EA · GW

Yeah, I just couldn't understand his comment until I realised that he'd misunderstood the OP as saying EA should be a big movement, rather than that it should be a movement with diverse views that doesn't deter great people for having different views. So I was looking for an explanation, and that's what my brain came up with.

Comment by tamgent on "Big tent" effective altruism is very important (particularly right now) · 2022-05-21T21:25:26.072Z · EA · GW

Your comment now makes more sense given that you misunderstood the OP. Consider adding an edit at the top of your comment mentioning what your misunderstanding was; I think it'd help with interpreting it.

So you agree 3 is clearly false. I thought that you thought it was near enough true to not worry about the possibility of being very wrong on a number of things. Good to have cleared that up.

I imagine then that our central disagreement lies more in what it looks like once you collapse all that uncertainty onto your unidimensional EV scale. Maybe you think it looks less diverse (on many dimensions) overall than I do. That's my best guess at our disagreement - that we just have different priors on how much diversity is the right amount for maximising impact overall. Or maybe we have no core disagreement. As an aside, I tend to find it mostly not useful to do that collapsing at such an aggregate level, but maybe I just don't do enough macro analysis, or I'm just not that maximising.

BTW, on the areas where you think we agree: I strongly disagree with treating commitment to EA as a sign of how likely someone is to make impact. It probably does better than the base rate in the global population, sure, but here we are discussing the marginal set of people who would or wouldn't be deterred from using EA as one of the inputs that help them make an impact, depending on whether you take a big tent approach. I'm personally quite cautious not to confuse 'EA' with 'having impact' (not saying you did this, I'm just pretty wary about it and thus sensitive), and I do worry about people selecting for 'EA alignment' - it really turns me off EA because it's a strong sign of groupthink and bad epistemic culture.

Comment by tamgent on "Big tent" effective altruism is very important (particularly right now) · 2022-05-21T20:47:28.947Z · EA · GW

Yeah maybe. Sorry if you found it unhelpful, I could have been clearer. I find your decomposition interesting. I was most strongly gesturing at the third.

Comment by tamgent on "Big tent" effective altruism is very important (particularly right now) · 2022-05-20T23:31:08.244Z · EA · GW

Correct me if I'm wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer, known with certainty, to the question 'is X or Y more impactful'?

I got this impression from what I understood your main point to be, something like: 

> There is a tail of talented people who will make the most impact, and any diversion of resources towards less talented people will have lower expected value.

I think there are several assumptions in both of these points that I want to unpack (and disagree with).

On the question of whether there is a unidimensional scale of talented people who will make the most impact: I believe that the EA movement could be wrong about the problems it thinks are most important, and/or the approaches to solving them. In the world where we are wrong, if we deter many groups with important skillsets or approaches that we didn't realise were important because we were overconfident in some problems/solutions, then that's quite bad. Conversely, in the world where we are right, yes maybe we have invested in more places than turned out to be necessary, but the downside risks seem smaller overall (depending on constraints, which I'll get to in next para). You could argue that talent correlates across all skillsets and approaches, and maybe there's some truth to that, but I think there's lots of places where the tails come apart, and I worry that not taking worldview diversification seriously can lead to many failure modes for a movement like EA. If you are quite certain that EA top cause areas as listed on 80k are right about the problems that are 'most' important and the 'best' approaches to solving them (this second one I am extremely uncertain about), you may reasonably disagree with me here - is that the case? In my view, these superlatives and collapsing of dimensions requires a lot of certainty about some baseline assumptions.

On the question of whether resource diversion from talented people to less 'talented' people has lower expected value: I think this depends on lots of things (sidestepping the question of how talent is defined, which the paragraph above addresses). Firstly, are the resources substitutable? In the example you gave with university groups, I'd say no: if you fund a non-top university group, you are not detracting from top university group funding (assuming no shortage of monetary funding, which I believe we can assume). However, if you meant the resource is the time of a grantmaker specialised in community building, and it is harder for them to evaluate a non-top uni than a top one because maybe they know fewer people there etc., then I'd say that resource is substitutable. The question of substitutability matters for identifying whether there is a real cost, but it also opens a question of resource constraints and causality. Imagine a world where that time-constrained grantmaker decides not to take the easy decision but to bear the short-term cost and invest in getting to know the new non-top uni - it is possible that the ROI is higher, because returns to early-stage scaling are higher and there is new value of information. We could also imagine a different causality: if grantmaking itself were less centralised (which a bigger tent might lead to), some grantmakers might cater to non-top unis and others to top unis, and we'd be able to see outcomes from both. So overall I think this point of yours is far from clearly true, and a bigger tent would give more value of information.

There were some points you made that I do agree with you on: in particular, celebration disproportionate to the impact feeling fake, adding false uncertainty to avoid coming across as dogmatic (although I think there is a middle way here), and real trade-offs in the axes of desirable communication qualities. Another thing I noticed and liked is a care for epistemic quality and rigour, and wanting to protect those cultural aspects. It's not obvious to me why that would need to be sacrificed to have a bigger tent - but maybe we have different ideas of what a bigger tent looks like.

(Also, I did a quick reversal test in my head of the actions in the OP, as mentioned in the applause lights post you linked to, and the vast majority do not stand up as applause lights in my opinion, in that I'd bet you'd find the opposite point of view being genuinely argued for somewhere around this forum or LW.)

Comment by tamgent on Avoiding Moral Fads? · 2022-04-04T10:46:48.602Z · EA · GW

I guess some scientific topics have pretty good evidence and are hard to believe extremely wrong (e.g. physics), given how much that is based on them works so well today, and then there are other scientific/medical areas that look scientific/medical without having the same robust evidence base. I'd like to read a small overview meta-analysis, with some history of each field that claims (and is widely believed) to be scientific/medical, discussion of some of its core ideas, and an evaluation of how sure we are that it is good and real in the way that a lot of physics is. I don't want to name particular other scientific/medical areas to contrast, but I do have at least one prominently in my mind.

Comment by tamgent on Community in six hours · 2022-04-02T14:04:32.733Z · EA · GW

BC is of the past, CB is of the future! We are definitely progressing, right, right alphabet? 

Comment by tamgent on Unsurprising things about the EA movement that surprised me · 2022-04-02T13:35:45.994Z · EA · GW

Really? I thought it stood for Easy Answers

Comment by tamgent on The case for infant outreach · 2022-04-02T13:21:08.012Z · EA · GW

Mmm, I sense a short life thus far. I posit that the shorter the life thus far, the more likely you are to feel this way. How high impact! Think of all the impact we can make on the impactable ones!

Comment by tamgent on Effectiveness is a Conjunction of Multipliers · 2022-03-27T10:05:37.438Z · EA · GW

I enjoyed this comment

Comment by tamgent on Some benefits and risks of failure transparency · 2022-03-27T09:57:04.006Z · EA · GW

Some things I like about this post:
- I like the topic; I am interested in failure, and places where failure and mistake-making are discussed openly feel more growthy.
- I liked that you gave lots of examples.

Some things I didn't like about this post:
- Sometimes I couldn't see the full connections you were making, or I could but had to leap to them based on my own preconceptions; maybe they could be explained more? For example, one benefit you list is a stronger community, but you don't explain the mechanism by which failure transparency leads to a stronger community. I don't think the Howie podcast supports the point: a lot of people liked the podcast, but how is that indicative of a stronger community exactly?

Things I disagree with in this post:
- I don't think the Opportunity Cost point was well argued. In particular, you discussed transparency in general, with examples of publishing annual reports and so on, which take a lot of time. However, this post is about being transparent about mistakes and failure, not transparency in general. I think the opportunity cost of just publishing big mistakes is much lower, even though it takes some time to word things properly, and then there is the stress of it. But you can choose simply not to look at reactions on social media, just as people can choose not to engage in lengthy threads about it.
- I think your Reputational Cost point fits better on the other side, as some of the reasons you give would put it there. Also, I think this is somewhat a normative cultural question rather than one about facts in the world. If my reputation in an area would be destroyed by publishing a mistake, either that is a good thing, or the person judging is undervaluing the growth/learning part and overvaluing a fixed view of people. I basically don't think someone who would incorrectly judge me negatively for publishing a mistake is someone whose opinion is worth caring about. Again, this is normative, not a fact about reality; it's about what kind of culture we want to create.
- Similar arguments to Reputational Cost apply to the Harming Discourse point - this is a normative culture question: we get to choose how we respond and whether we reward or disincentivise it! I would put it not as a risk/downside but in another category called cultural equilibrium or something, along with the reputation point.
- I don't think the Career Risk point is different to the Reputational Cost point in any meaningful way. You can also take more ownership as an organisation rather than an individual, where appropriate. 

I recognise that the things I disagree with are all in the downsides/risks section, and that is because I am biased and uninterested in critiquing the other side. I feel somewhat entitled to do this because I'm under the impression that you added this section in after feedback to make it more balanced, so it's partially because I'm being mischievous and unfair (you made this easier), as well as not wanting to feel pressure myself to give a balanced comment and wanting to protest against feeling constrained in that way.

Comment by tamgent on Some benefits and risks of failure transparency · 2022-03-27T09:33:23.000Z · EA · GW

Thanks for sharing your motivations! Personally, I would have liked to read your original post, even if it was more one-sided, and to get the other side elsewhere. Being helped with heuristics for making decisions is not really what I was looking for in this post - it feels paternalistic and contrived to me, and I'd enjoy you advocating earnestly for more of something you think is good.

Comment by tamgent on Early Reflections and Resources on the Russian Invasion of Ukraine · 2022-03-20T14:19:40.600Z · EA · GW

I found this valuable, thank you.

Comment by tamgent on Nuclear attack risk? Implications for personal decision-making · 2022-03-06T16:47:26.137Z · EA · GW

I'm reading this book now and finding it very good! I'm surprised, because most books in this genre I've tried lately have been really bad and I couldn't bear to continue reading them. This one is fun because, in addition to being practical, it takes you on a fast-route tour of just the essentials from the tech tree we've developed over human history, without the long, winding, arbitrary delays we had historically, which is interesting as well as potentially useful. It makes me think cyberpunk is more likely in some ways (for some types of disaster) than I'd realised. It should be said that the book assumes technology is somewhat intact but that most (though not all) of the population is gone.

Comment by tamgent on Best Countries during Nuclear War · 2022-03-04T15:24:30.478Z · EA · GW

For situations where it may happen but it's unclear, and you want to temporarily go somewhere in a reversible way, places in your timezone might be worth considering if you want to continue working (and can work remotely).

Comment by tamgent on What are effective ways to help Ukrainians right now? · 2022-02-26T01:10:33.608Z · EA · GW

Thank you

Comment by tamgent on What are effective ways to help Ukrainians right now? · 2022-02-25T19:13:02.322Z · EA · GW

Do you know if there is something similar in Romania by any chance?

Comment by tamgent on Trading time in an EA relationship · 2022-02-24T17:03:40.136Z · EA · GW
  • From experience, heavily prioritising my partner over me is bad for my self-esteem and my mood, makes my partner feel guilty, and leads to resentment and conflict.
  • It feels painful and inefficient to spend time that could be converted into high impact work on chores that anyone could do.

To me, one of these feels more essential than the other. If the former is compromised, you are less likely to be in a good position to make impact or to support someone who is making impact. Whereas if the latter is compromised, it may affect the former somewhat, but probably in a much more limited way, as it's only one component of what supports it (hopefully).

Comment by tamgent on Should you work in the European Union to do AGI governance? · 2022-02-06T04:48:29.069Z · EA · GW

I think liberalism vs realism is an interesting lens, but the conclusion doesn't seem right to me. You say you're working backwards from a theory of victory, but at least that argument was working backwards from a theory of catastrophe. I think this is an is-ought problem: if we want things to go well, then we might want to actively encourage more cooperative international relations, whilst also not ignoring the powerful forces.

Comment by tamgent on New EA Cause Area: Run Blackwell's Bookstore · 2022-02-05T17:08:58.821Z · EA · GW

If there aren't significant competition issues caused by any of those acquisitions...

Comment by tamgent on Aligning Recommender Systems as Cause Area · 2022-01-03T10:32:05.092Z · EA · GW

If you remove the parentheses and comma from the first link, and the final period from the second, they work.

Comment by tamgent on How do you get more people to give you feedback? · 2021-12-04T17:58:43.362Z · EA · GW

This works really well in my experience too.

Comment by tamgent on Improve delegation abilities today, delegate heavily tomorrow · 2021-11-16T15:12:43.819Z · EA · GW

I found the change in title confusing, as the post doesn't really discuss how to actually improve our delegation abilities; it's more just encouragement to delegate more. You mentioned some ways in this comment (accountability, transparency...) but they're not really unpacked in the main post. I'd be interested in a discussion unpacking these and other ways.

Comment by tamgent on Tax Havens and the case for Tax Justice · 2021-08-27T17:21:42.577Z · EA · GW

The international minimum corporate tax rate got finalised last month! After only 28 years of discussion. 
https://www.oecd.org/newsroom/130-countries-and-jurisdictions-join-bold-new-framework-for-international-tax-reform.htm 

Comment by tamgent on More EAs should consider “non-EA” jobs · 2021-08-27T10:11:10.329Z · EA · GW

I wonder if others' understanding of neglectedness is different from my own. I've always implicitly thought of neglectedness as how many people are trying to do the exact thing you're trying to do, to solve the exact problem you're working on, and therefore think there are loads of neglected opportunities everywhere, mostly at non-EA orgs. But reading this thread I got confused, and checked the community definition here, which says it's about the resources dedicated to a problem - quite different, and it helps me better understand this thread. It's funny that after all these years I've had a different concept in my head to everyone else and didn't realise. Anyway, if neglectedness includes resources dedicated to the problem, then a predominantly non-EA org like a government body might be dedicating lots of resources to a problem but not making much progress on it. In my view, this is a neglected opportunity.

Maybe we should distinguish between neglected in terms of crowdedness vs. opportunities available? 

Also, what are others' understandings of neglectedness?

Comment by tamgent on More EAs should consider “non-EA” jobs · 2021-08-24T19:10:56.356Z · EA · GW

I agree that if you choose at random from EA org and non-EA org jobs, you are more likely to have more impact at an EA org job. And I agree that there is work involved in finding a high impact non-EA job. 

However,  I don't think the work involved in finding a high impact non-EA org job is hard because there are so few such opportunities out there, but because finding them requires more imagination/creativity than just going for a job at an EA org does. Maybe you could start a new AI safety team at Facebook or Amazon by joining, building the internal capital, and then proposing it. Maybe you can't because reasons. Either way, you learn by trying. And this learning is not wasted. Either you pave the way for others in the community, highlighting a new area where impact can be made. Or, if it turns out it's hard for reasons, then you've learnt why, and can pass that on to others who might try.

Needless to say, this impact-finding strategy scales better than one where everyone is exclusively focused on EA org jobs (although you need some of that too). On a movement scale, I'd bet that we're too far in the direction of thinking that EA orgs are the better path to impact, and have significantly under-explored ways of making impact in non-EA orgs - and there are social reasons why we'd naturally bias in that direction. Alternatively, like Sarah said elsewhere, it's just less visible.

I just realised I haven't asked - why are high impact non-EA org jobs hard to find, in your view?

Comment by tamgent on More EAs should consider “non-EA” jobs · 2021-08-22T20:18:14.163Z · EA · GW

Yeah I'd imagine much of the work of bringing EA ideas into spaces where folks might not want the identity is less visible, sometimes necessarily or wisely so. I'd love to see more stories told on forums such as this one of making impact in 'non-EA' spaces, even in an anonymised/redacted way.

Comment by tamgent on More EAs should consider “non-EA” jobs · 2021-08-21T07:56:54.247Z · EA · GW

Thanks for writing about this. I wanted to a while ago but didn't get round to it. I also get the sense that too many folks in the EA community think the best way they can make an impact is at an EA org. I think this probably isn't true for most people. I gave a couple of reasons why here.

I wrote a list of some reasons to work at a non-EA org here a while ago, which overlap with your reasons.

Comment by tamgent on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-08T15:09:59.456Z · EA · GW

Mozilla have a fellowship aimed at this: https://foundation.mozilla.org/en/what-we-fund/fellowships/fellows-for-open-internet-engineering/