Comments

Comment by tamgent on Concerns with ACE's Recent Behavior · 2021-04-19T18:59:38.920Z · EA · GW

Knowing the basis of ACE's evaluations is of course essential to deciding whether to donate to/through them, and I'd be surprised if esantorella disagreed. It's just that this post and discussion are not only, or even mostly, about that. In my view, it would have been a far more valuable post if it had focused more tightly on that serious issue and the evidence for and against it, left out small issues like publishing and taking down bad blog posts altogether, and put the general discourse-norms discussion in a separate post labelled appropriately.

Comment by tamgent on Concerns with ACE's Recent Behavior · 2021-04-18T18:21:04.021Z · EA · GW

I appreciate you trying to find our true disagreement here.

Comment by tamgent on Concerns with ACE's Recent Behavior · 2021-04-18T18:20:22.858Z · EA · GW

Sure, I do appreciate the point that Buck is making. I agree with it, in fact (as the first part of my first sentence said). I just additionally found the particular X he substituted a poor choice, for reasons separate from the main point he was making. I also think the real disagreement between Buck and myself is getting closer to the surface on a sister branch.

I do think your question is good here, and decomposes the discussion into two disagreements:
1) was this an instance of 'cancel culture', and if so, how bad is it?
2) what is the risk of writing about this kind of thing (causing splits) vs. the risk of not?

On 1) I feel, like Neel below, that an evaluator moving charities' ratings is a serious act that requires a high bar of scrutiny, whereas the other two concerns outlined (blog post and conference) seem far more minor. I think the OP would be far better if it focused only on that and the evidence for/against it.

On 2) I think this is a discussion worth having, and that the answer is not 0 risk for any side.

EDIT to add: Sorry, I think I didn't respond clearly enough to your main point. I get that Buck is conditioning on 1) above, and asking: if we agree it's really bad, then what? I just think he was not very explicit about that. If Buck had said something like 'I want to pick up on a minor point, and to do this I will need to condition on the world where we come to the conclusion that ACE did something unequivocally bad here...' at the beginning, I don't think the first part of my objections would have applied so much. EDIT to add: Although I still think he should have chosen a different bad thing X.

Comment by tamgent on Concerns with ACE's Recent Behavior · 2021-04-18T18:03:18.399Z · EA · GW

I don't disagree with any of that. I acknowledge there is real cost in trying to make people feel welcome on top of the community service of speaking up about bad practice (leaving aside the issue of how bad what happened is exactly).

I just think there is also some cost on the other side of that trade-off, which you are undervaluing and not acknowledging here. Maybe we disagree on the exchange rate between the two (welcomingness and unfiltered/candid communication)?

I think doing both well is an important skill for a community like ours to have more of. It's fine if that's not your personal priority right now, but I would like community norms to reward learning that skill more. My view is that Will's comment was doing just that, and I upvoted it as a result. (I'm not saying you disagree with the content of his comment; you said you agreed with it, in fact. But in my view you demonstrated that you didn't fully grok it nevertheless.)

Comment by tamgent on Concerns with ACE's Recent Behavior · 2021-04-18T17:49:44.483Z · EA · GW

I think the meta-level objection you raised (which I understood as: there may be costs of not criticising bad things because of worry about second-order effects) is totally fair and there is indeed some risk in this pattern (said this in the first line of my comment). This is not what I took issue with in your comment. I see you've responded to our main disagreement though, so I'll respond on that branch.

Comment by tamgent on Concerns with ACE's Recent Behavior · 2021-04-18T12:03:40.831Z · EA · GW

Whilst I agree with you that there is some risk in the pattern of not criticising bad thing X because of concerns about second-order effects, I think you chose a really bad substitution for 'X' here, and as a result I can totally understand where Khorton's response is coming from (although I think 'campaigning against racism' is also a mischaracterisation of X here).

Where X is the bad thing ACE did, the situation is clearly far more nuanced as to how bad it is than something like sexual misconduct, which, by the time we have decided something deserves that label, is unequivocally bad.

Why is it important not to throw out nuance here? Because of Will's original comment: there are downsides to being very critical, especially publicly, where we might cause more splits or be unwelcoming. I agree with you that we shouldn't be trying to appeal to everyone or take a balanced position on every issue, but I don't think we should ignore the importance of creating a culture that is welcoming to all either. These things do not in principle need to be traded off against each other; we can have both (if we are skillful).

Despite you saying that you agree with the content of Will's comment, I think you didn't fully grok Will's initial concern, because when you say:

"if a group of people are sad that their views aren't sufficiently represented on the EA forum, they should consider making better arguments for them"

you are doing the thing (being unwelcoming).

Comment by tamgent on Concerns with ACE's Recent Behavior · 2021-04-16T18:01:29.486Z · EA · GW

I agree with your distinction: it's totally fine for the views of individual employees at an organisation to be whatever they are (although I wouldn't ignore them entirely, I also wouldn't overgeneralise from a couple of people in an org having epistemically lacking views, maybe depending a bit on their position), whereas the decisions and statements an organisation makes as an org are what matter.

Comment by tamgent on What Questions Should We Ask Speakers at the Stanford Existential Risks Conference? · 2021-04-10T14:08:45.135Z · EA · GW

To what extent do you think the field of risk management is applicable to x-risks, and where is it most lacking? 

Comment by tamgent on Getting a feel for changes of karma and controversy in the EA Forum over time · 2021-04-08T16:29:29.231Z · EA · GW

The EA Forum could fairly trivially collect some data on this by sending an alert to a random subset of up/down votes across the user population, asking for the reasons behind the vote. Obviously this would need to be balanced against not causing too much friction for users.
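A minimal sketch in Python of what such a sampling hook could look like (everything here, names and prompt rate included, is hypothetical; nothing is assumed about the Forum's actual codebase):

```python
import random

# Hypothetical tuning knob: fraction of votes that trigger a feedback prompt.
PROMPT_RATE = 0.02

def maybe_prompt_for_reason(vote_event):
    """Randomly select a small fraction of vote events for a follow-up question.

    `vote_event` is assumed to be a dict like:
    {"user_id": ..., "post_id": ..., "direction": "up" or "down"}
    """
    if random.random() >= PROMPT_RATE:
        return None  # the vast majority of votes proceed with zero friction
    # In a real system this would enqueue an unobtrusive in-app prompt
    # rather than block the vote itself.
    return {
        "user_id": vote_event["user_id"],
        "post_id": vote_event["post_id"],
        "question": f"You {vote_event['direction']}voted this. What was the main reason?",
        "options": ["agree/disagree with content", "quality of writing", "tone", "other"],
    }
```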

Comment by tamgent on My preliminary research on the Adtech marketplace · 2021-03-30T11:36:38.436Z · EA · GW

I think this post could be improved by targeting the audience better. Maybe you could say at the start of this post why you believe this is relevant to EA?

EDIT to add:
I have seen your section titled Context, but I don't think it properly introduces a broad EA audience, unfamiliar with the area, to why you think this might be relevant or important. Can you explain why you think disinformation and data-privacy violations deserve more attention in the EA community?

Comment by tamgent on FAQ: UK Civil Service Job Applications · 2021-03-28T16:30:28.475Z · EA · GW

I'd add that if you're interested in tech/AI policy, the Competition & Markets Authority (CMA) is quite a good, lesser known, place to consider.

EDIT to add: also the CMA is incubating the new Digital Markets Unit (DMU).

Comment by tamgent on What Makes Outreach to Progressives Hard · 2021-03-24T13:23:37.023Z · EA · GW

I don't think xuan's main point was about being charitable, although they had a few thoughts in that direction. More generally, trying to be charitable is usually good. Of course it's going to miss a point (what finite comment doesn't?), but maybe it's making another.

I appreciate you trying to bring the discussion towards what you see as the real reason for lefty positions being held by privileged students (subconscious social status jockeying), but I wonder if there's a more constructive way to speculate about this?

Maybe one prompt is: how would you approach a conversation with such a lefty friend to discover if that is their reason, or not?

You could be direct, put your cards on the table, say you think they are just interested in the social status stuff, and let them defend themselves (that's usually what happens when you attack someone's subconscious motivation, regardless of what's true). Or you could start by asking yourself: what if I'm wrong here? Is there another reason they might hold this position on this topic? That might lead you to ask questions about their reasons. You could test how load-bearing their explanations are by asking hypotheticals, or asking them to be concrete and specific. Maybe you, or they, end up changing or modifying your position or beliefs, or at least have a good discussion, with at least one person leaving with more understanding than they came in with. In any case, I think a conversation that assumes good faith is more likely to be a productive one.

Circling back to the initial thing: I'm assuming that you do see the value in being charitable and assuming good faith in general, and just feel it is hard to practise this in conversations where people are very attached to their positions. But let me know if not, i.e. if you genuinely think there is no point in being charitable (as that would be our true disagreement, though this seems unlikely).

Please correct me if I've misunderstood you here. 

+ nitpick: you use terms people might not have heard of. If I look up 'Moloch' I don't immediately see the article by Scott Alexander that I think you have in mind, just a Wikipedia article about the god. 

Comment by tamgent on AGI risk: analogies & arguments · 2021-03-23T17:33:25.041Z · EA · GW

Even though I've come across these arguments before, something about the style of this piece made me feel a little uneasy and overwhelmed. I think it's that it raises many huge issues quickly, one after the other.

It's up to you of course, but consider having a content warning at the top of this. Something to the effect of:
Warning: the following arguments may be worrying beyond the useful level of motivating action. If you think this is a risk for you, be cautious about how, when and whether you wish to read it.

Comment by tamgent on What Makes Outreach to Progressives Hard · 2021-03-15T10:20:01.062Z · EA · GW

That wasn't really what I was saying, and I don't think you're steelmanning the intersectionalist perspective, although I agree with your description of the crux. I think many (maybe most?) people who like intersectionality would agree that prioritization is sometimes necessary and useful.

An attempt to steelman intersectionality for a moment:
- problems are usually interwoven and complex
- separating problems from their contexts can cause more problems
- saying one problem is more important than another has negative side effects, because we are trying to fix a broken hammer with a broken hammer (many progressives believe that comparison culture is itself a cause of many problems)

I am unsure this is incompatible with prioritization, which in my view is simply a practical consequence of not having infinite resources. I think they'd agree, and would not take issue with, for example, someone dedicating their life to climate change alone, as long as that person did not go around saying climate change is more important than all the other important issues, and also saw how climate change relates to, for example, improving international governance or reducing corruption, and worked with those efforts rather than in competition with or undermining them.

I think viewing most intersectionality proponents as people who cannot ever work on one thing because they literally need to address all problems at once is an overly literal interpretation, although it's possible to get this impression if there are a few loud ones like this (I don't know enough to know).

The disagreement seems to be more about whether it is helpful to compare the importance of issues in a public way. Comparing things, whilst necessary and important, can have side effects, such as making some people feel bad about the good thing they are doing because it isn't the best thing a person could in theory be doing. We are familiar with this from 80K's mistakes.

I was focusing more on the marketing side like Cullen, and wondering whether worldview diversification might be a way to better connect with intersectionality proponents via a message like this:

problems are complicated and sometimes entangled, and we can work on many at once, on a group level, but also our resources are finite, so when allocating them, trade-offs will need to be made

Comment by tamgent on What Makes Outreach to Progressives Hard · 2021-03-14T11:47:27.171Z · EA · GW

Thanks for the article; it's interesting and well-written. I'm sure it will be useful as a reference for me in some future conversations.

With reference to your section titled Incompatibility Between Intersectionality and Prioritization - how do you see worldview diversification fitting in?

To me, this perspective incorporates the value of diversification across causes (which intersectionality protects) whilst still being realistic about actually getting things done (which prioritization protects). Under a worldview diversification lens, prioritization is less about pursuing one thing to the exclusion of all others, whilst still not going as far as to say all causes are equal and deserve an equal place at the table.

Comment by tamgent on Is pursuing EA entrepreneurship becoming more costly to the individual? · 2021-03-12T15:44:03.026Z · EA · GW

Ah sorry, I misunderstood

Comment by tamgent on Is pursuing EA entrepreneurship becoming more costly to the individual? · 2021-03-12T15:00:32.770Z · EA · GW

I agree with much of this answer. However, I'm not sure it's the lack of promise of scale that makes projects not get funded, but rather other reasons. I am also excited about EA Funds now encouraging time-limited all-in experiments. 

Comment by tamgent on Is pursuing EA entrepreneurship becoming more costly to the individual? · 2021-03-12T11:05:15.156Z · EA · GW

Here are a few minor things I think you could modify for clarity:
 
Replace 'The sentence that made me think it's worth writing up a reaction was:' with 'From the article:'

Also, you repeat yourself at the end. The last two one-sentence paragraphs could just be one paragraph that says:

'Given the entrepreneurial slant of EA culture, I worry that some people will end up concluding "we should celebrate risk-taking even more than we already do". Isn't this dangerous career advice for the average EA?'

Comment by tamgent on Is pursuing EA entrepreneurship becoming more costly to the individual? · 2021-03-12T10:53:58.998Z · EA · GW

Thank you for sharing your analysis of what I also see as a major challenge for us to overcome (EA entrepreneurship becoming more costly). I agree with many things in your answer, but strongly disagree with the conclusion, or 'bottom line'. It seems very bleak, like giving up. Instead, I think we should be creating better systems for mentorship and vetting. There are some initiatives trying to do things in this space, such as Charity Entrepreneurship and the longtermist incubator project. I am also excited about the new management and reform of EA Funds (see, for example, this post on the ways in which EA Funds is more flexible than you might think). To me, these are all positive signs that the ecosystem of mentorship and vetting is maturing too. However, I think there is still a lot more work to be done in this area, and I would like to see more initiatives (or to better understand what those initiatives are bottlenecked on).

Also, on your 'bottom line': one does not necessarily need to choose between having a safe career and doing EA entrepreneurship. I'm doing both, and I think as long as you make bets that are proportional to feedback and have good contingencies, it can be done. Sometimes you do want to go 'all out' on an entrepreneurial venture, but you probably want to build up a track record and start with cheaper ventures first.

Comment by tamgent on AMA: Ian David Moss, strategy consultant to foundations and other institutions · 2021-03-05T14:35:31.744Z · EA · GW

What are the major risks or downsides that may occur, accidentally or otherwise, from efforts to improve institutional decision-making? 

How concerned are you about these (how likely do you think they are, and how bad would they be if they happened)?

Comment by tamgent on AMA: Ian David Moss, strategy consultant to foundations and other institutions · 2021-03-05T14:34:33.639Z · EA · GW

On what timescales do you see most of the impact from improving institutional decision-making starting to kick in, and what does the growth function look like to you?

Comment by tamgent on Retention in EA - Part I: Survey Data · 2021-02-09T19:37:25.610Z · EA · GW

I'd be interested to zoom in on the "can't find a way to contribute" response, and wonder if follow-up questions were asked. It's extra hard because you're another degree removed, asking for group leaders' impressions rather than speaking to "leavers" directly. I'd bet that people define contributing in very different ways, and as a result it's pretty unclear what exactly is going wrong here, if anything at all. For example, maybe people can't find a way to contribute via working at EA organisations specifically, but could contribute in highly impactful careers at non-EA organisations (there is a spectrum and I'm oversimplifying). Maybe some "left" to do that. Personally, I wouldn't count this as leaving the EA movement, or at least not the model of the EA movement that I have and want to continue having.

But maybe others have a different model?

Comment by tamgent on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-29T21:40:00.752Z · EA · GW

If a magic fairy gave you 10 excellent researchers from a range of relevant backgrounds who were to work on a team together to answer important questions about the simulation hypothesis, what are the top n research questions you'd be most excited to discover they are pursuing? 

Comment by tamgent on Money Can't (Easily) Buy Talent · 2021-01-26T13:37:20.046Z · EA · GW

I think there are lots of opportunities for direct work at non-EA orgs with sufficient demand. 

Comment by tamgent on Money Can't (Easily) Buy Talent · 2021-01-26T13:33:37.053Z · EA · GW

I would really appreciate an explicit definition of 'direct work' for this post. I was assuming it used my definition, in which direct work includes not just work at EA orgs but also lots of impactful roles, e.g. in certain policy areas or at certain AI companies. However, some of the comments seem to assume otherwise.

Also if this post does mean 'working at EA orgs' rather than a wider 'direct work' definition, consider not using the term 'direct work' to avoid ambiguity.

Comment by tamgent on Money Can't (Easily) Buy Talent · 2021-01-26T13:18:46.221Z · EA · GW

The example you linked to is about someone struggling to get a job in an 'EA organisation'. This is clearly not the same as direct work, which is a much larger category. I am pretty sure you'd agree as someone who does direct work not always in an EA org, but let me know if I'm wrong there.

Comment by tamgent on Aligning Recommender Systems as Cause Area · 2021-01-25T14:54:40.039Z · EA · GW

The author, commenters and readers of this post may be interested in this new paper by the CMA, 'Algorithms: How they can reduce competition and harm consumers'. The programme of work being launched includes analyses of recommender systems.

Comment by tamgent on Lessons from my time in Effective Altruism · 2021-01-18T10:38:58.310Z · EA · GW

I think a good way to practice being proactive is to do lots of backwards chain/theory of change type thinking with outrageously favourable assumptions. For example, pretend you have infinite resources and anyone and everyone is willing to help you along the way.

Start with an ambitious goal, and then think about the fastest way to get there. What are the fewest concrete steps you can take? Then you can see which are doable later, get some feedback on the plan, do some murphyjitsu, and explore alternative options on subsets of it.

Some things are big multipliers, such as keeping options open and networking widely.

Comment by tamgent on Careers Questions Open Thread · 2021-01-01T14:24:11.953Z · EA · GW

Hi there, I would recommend talking to people who have done both paths and who share your higher-level goals and values.

If you haven't already, check out https://www.legalpriorities.org/ - maybe someone there would be worth talking to.

There is also an ex-barrister career advising at 80K, Habiba Islam.

Finally, I'm happy to tell you about what the lawyers do at my department in the civil service. If you're interested, DM me. 

Comment by tamgent on Careers Questions Open Thread · 2021-01-01T14:04:20.455Z · EA · GW

A bit of an out-there suggestion, but what about combining parts of 1, 3, and 4? I'm imagining a health-tech social-enterprise-like initiative from within a FAANG company that interfaces with health policy/academics. The main advantage would come from the scale of compute, and the people who know how to use that infrastructure, coming together with the people who understand the biggest problems in the field well (who usually do not work at the FAANG company). My inspiration for this is the Google Earth Engine (GEE) team, which was pioneered by one individual but is now closely integrating/interfacing with researchers and industry professionals in remote sensing, helping them solve problems that could not have been solved so easily before. I think if it weren't for the individual who founded GEE, many great projects would not have been possible. I think this would be challenging (you'd be carving out an untrodden path) but would have a high impact ceiling in the tails.

Comment by tamgent on Careers Questions Open Thread · 2020-12-30T21:03:05.863Z · EA · GW

[This comment isn't a reply to your main point, just about the 'glamour factor' that your film analogy is predicated on, sorry]

I think that the majority of people who believe working at an EA org is the highest impact thing they could do are probably wrong.

Consider:
1) if you work at an EA org you probably have skills that are very useful in a variety of other fields/industries. The ceiling on those impact opportunities is higher, as they use more of your own creativity/initiative at a macro level (e.g. at the level of deciding where to work)
2) if 1) is not true, it's probably because you specialise in meta/EA/movement-related matters that don't transfer well outside. In this case you might be able to make more impact at EA orgs. But this is not the case for most people.

I think it's different for people starting new EA orgs, or joining very early-stage ones - that does seem to have a high ceiling on potential impact and is worth a shot for anyone doing it. 

Comment by tamgent on How high impact are UK policy career paths? · 2020-12-21T14:17:00.559Z · EA · GW

I’ll answer the question I find easier, which is the second one, as I got stuck/side-tracked on the first question (but will try to answer later).

What are possible paths to impact for civil servants?

I’ll comment on the two options you presented, and offer alternative frames for them.

1) improving the talent supply

You ask here how much better the civil servant is than the counterfactual hire. I think it's good to ask this, but I don't really see a path to impact here unless the job description actually identifies big priority problems you will work on and the quality of the counterfactual talent is woeful. I think both of these are usually not true, and the first matters more. This is because most of the impact you can make will usually not be in the job description. I think it's more fruitful to ask: would the counterfactual candidate do what isn't in the job description but might be possible, and is this the kind of position that has such opportunities, or is a stepping stone to them? (This is your second path-to-impact option.)

When I think about ‘improving the talent supply’ as a path to impact I think of it on an institution not an individual level. This looks like helping the government get more sustainable expertise in the right places, once you identify who is needed where and why. I think this is a tractable route to a large amount of impact mid to long-term.

2) being different to other civil servants unrelated to objective job competence

I mostly agree with what you said here (especially how it differs by department and cause area) and think this is a fruitful direction. However, I'd frame it as 'contributing in ways outside of the job description, finding more consequential problems that others cannot see, and coming up with solutions.'

On being more impact-oriented, I think there is a flavour of impact-oriented that some EAs have (or strive towards) that comes from the rationality overlap that is uniquely valuable. I think this flavour has more self-corrective mechanisms than many efforts to make an impact.

We should be cautious here as there can be downside risks when doing things outside the job description. But that’s why I’m excited about the internal EA Civil Service Network, so we can get feedback on our paths to impact and help each other improve them and stay on track.

I think figuring out ways of contributing outside the job description with more potential impact will depend on the specifics of the area you're working on. I'd recommend talking proactively with people in the civil service whose judgement you trust (EAs and non-EAs alike). In the beginning (where I am) I think this looks like building relationships and skills and exploring hard; later it looks like picking battles and staying focused.

So in sum here’s an alternative framing on your paths to impact:

  1. improve the talent supply on an institutional level, finding the crucial expertise gaps and helping fill them, and building the social capital to suggest successful reform in hiring and firing processes
  2. contributing in ways outside of the job description, that is, finding problems that others cannot see and coming up with solutions

I’d like to see area-specific discussion on 2) and would be happy to try to articulate my own (tech policy for x-risk, and some institutional decision-making) if of interest.

I’m sure there are other broad routes to impact not discussed in this answer.

EDIT: typo

Comment by tamgent on EAs working at non-EA organizations: What do you do? · 2020-12-11T01:14:59.397Z · EA · GW

Link to a relevant post about doing nonstandard things within your career (as well as in your career choice) to make the most impact.

Comment by tamgent on EAs working at non-EA organizations: What do you do? · 2020-12-10T18:43:54.087Z · EA · GW

I think it's good if I comment, as this has been an explicit strategy of mine for a while. I'll answer the questions, but first give some reasons why I decided to mostly focus on working at 'non-EA orgs'. (I won't back them up unless someone asks, so that I can just write without worrying about spending too much time.)

1) I think there are many more neglected opportunities to do good in non-EA orgs, where you will truly be irreplaceable; I see so much low-hanging fruit for making impact in so many places. When I say impact here I do mean it with an EA lens.
2) There is a lot more optionality if I default to non-EA org jobs and see EA orgs as just another place to work in the world, to be evaluated alongside other opportunities. I don't understand why I would ever limit my options (except when I have to in order to progress).
3) I can bring so much more to a place that has fewer EA ideas in it, as someone who has thought about EA ideas a bit; what I bring will be more highly valued and novel for people (though I usually won't talk about EA explicitly, I'll just imbue the underlying values, because it turns out lots of 'non-EAs' share them when you get down to it).
4) I will learn about which other communities are also working on doing good, be able to work with them, make and facilitate connections between EA and 'non-EA', and have a better map of who is making what impact where. Limiting this to just the EA space seems so unnecessarily limiting.
5) Related to 4), I actually see integrating EA and non-EA as itself very important work if EA is to achieve many of its goals. Sometimes it might be better to have a more inclusive branding than the EA one, both to attract a more diverse set of views and also as simply instrumental to EA goals.
6) So far, I have found lots of EA ideas in 'non-EA' orgs and spaces: yes, fragments of them, worded differently, or just bubbling under the surface, but that is such an opportunity!

Where do you work, and what do you do?

I work in the civil service as a data scientist. In practice my role also involves tech policy work. I code, do data analysis, talk to people, and try to understand what's really going on, both internally and in the world, with the real-world problems we work on. I try to get on projects I think are impactful, or that will help me get to a place where I can be more impactful, or ideally both.

What are some things you've worked on that you consider impactful?

In my current role I've done research and advising on digital markets, which has led to the establishment of the first independent big-tech sector regulator that I know of (the Digital Markets Unit); this is definitely not just due to me, though I am there to contribute and learn. As this unit grows legs over the next couple of years, I'm excited to be part of its initial shape-taking, and I think there will be lots of opportunities to have impact (if you're interested in it, send me a message).

I've also done quite a few significant efficiency-enhancing things that make an impact, I guess, but I feel more replaceable on those, so I'm less sure about the impact.

For the most part, though, I see my current role as upskilling/preparing/positioning for making impact. Lots of the impact I make might just be getting in the room and saying why we should not do something, and also might not be the kind of thing I can just talk about on the internet.

Separately, I founded a not-for-profit educational/talent programme which aims, in five years or so, to make a significant contribution to the number of highly capable individuals on their way to making lots of impact on pressing problems. You might consider this an EA project, but from the perspective of my own career development it's not really. I didn't join an existing EA org; rather, I made 'non-EA' connections at a university and went from there (with lots of help from some awesome volunteers, some of whom may identify as EA). Also, we want to be inclusive, so we don't explicitly identify as an EA organisation.

What are a few ways in which you bring EA ideas/mindsets to your current job?

The following are not EA ideas only, but I would say they're pretty relevant to EA:

I try to work on projects related to big tech, from a perspective of risks of emerging technologies, focused on but not limited to AI. This is where the bulk of my time goes.

I'm trying to help develop the impact measurement/evaluation activities we do in various parts of the organisation, although I haven't made that much progress so far, mostly because there's just too much to do and there are competing priorities.

I met with the head of risk, and we discussed launching a monthly 'mistakes monday', where we will have presentations from different teams on mistakes that were made internally, and prizes for the biggest/best. I think this sort of culture is quite important for the internal decision-making of a regulator.

I brought in Ozzie to talk about forecasting, and his tools are being used by our team now (so far just on things of no consequence; sorry Ozzie, little steps).

I network across departments to find out where it is tractable to make impact, and use my role to make connections with people at other organisations.

Overall, I do think that looking outside of EA first to make impact is a good strategy for many people. It's not that I'm closed to EA jobs, just that I'm not primarily focusing on them. Having said this, I'm not sure where we are right now as a community on this, but I'd be really sad if we ended up over-correcting as a community and EA orgs stopped getting a good supply of talent. Obviously I really care about this, as I spend most of my 'side-project' time on the talent pipeline.

I think the best idea overall is to talk to lots of people about your specific situation to get a good variety of career advice, and to make sure the advice you are getting is varied enough (find someone who will give you the advice you don't want to hear!). However, I would expect a lot of people involved in the EA community to have social motivators pushing them towards jobs at EA orgs, and I think a little bit of correction in this direction is still due (probably? how can we measure this?).

OK I'll stop there as this is getting long. Thanks for reading if you made it this far, hope it was useful.

Small nitpick: I responded to this post even though I'd prefer it framed as 'people who are into EA', 'EA-aligned people', or 'people who work at EA orgs' rather than 'EAs', but I'm probably just being a pedant.

Comment by tamgent on Careers Questions Open Thread · 2020-12-08T10:45:30.868Z · EA · GW

To your third question, maybe check out public interest technology - some resources here: https://public-interest-tech.com/ 

Comment by tamgent on WANBAM is accepting expressions of interest for mentees! · 2020-12-05T10:08:52.146Z · EA · GW

Sorry, this comment is not topic-relevant, but I noticed that all the hyperlinks in this post have Facebook redirects and advertising click IDs in the URL, which means anyone who clicks them will have tracking cookies from Facebook put on their browser. Since none of the sites are Facebook, some people might be surprised about this and unhappy. Facebook does this with almost every link it touches.
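For anyone who wants to clean such links before posting them, a minimal Python sketch (the parameter list is illustrative rather than exhaustive; `fbclid` is the Facebook click ID mentioned above):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Illustrative set of common tracking parameters, not an exhaustive list.
TRACKING_PARAMS = {"fbclid", "gclid", "utm_source", "utm_medium", "utm_campaign"}

def strip_tracking(url: str) -> str:
    """Return the URL with known tracking query parameters removed."""
    parts = urlsplit(url)
    clean_query = [
        (key, value)
        for key, value in parse_qsl(parts.query, keep_blank_values=True)
        if key not in TRACKING_PARAMS
    ]
    return urlunsplit(parts._replace(query=urlencode(clean_query)))

print(strip_tracking("https://example.org/apply?fbclid=IwAR123&page=2"))
# -> https://example.org/apply?page=2
```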

Comment by tamgent on Theory Of Change As A Hypothesis: Choosing A High-Impact Path When You’re Uncertain · 2020-11-28T11:34:34.974Z · EA · GW

Thanks for writing about this, I like seeing more on this topic.

I can imagine that if a person has very low priors on how to have the most impact (or many lightly held hypotheses, as you put it), preserving option value would become important. Do you think it is good to advise this to such people on the margin, and do you see any side effects (e.g. at a group level) from doing so?

Comment by tamgent on Aligning Recommender Systems as Cause Area · 2020-11-27T13:09:11.253Z · EA · GW

Sorry, I think I didn't address the measurement issue very well, and assumed your notion of user interests meant simply optimizing for views, when maybe it doesn't. I still think that through user research you can learn to develop good measures. For example: surveys; cohort tests (e.g. if you discount ratings over time within a viewing session, to down-weight lower-agency views, do you see changes such as users searching more instead of just letting autoplay run?); or checking whether there is a relationship between how much a user feels Netflix is improving their life (in a survey) and how much they are sucked in by autoplay. Learning these higher-order behavioural indicators can help give users a better long-term experience, if that's what the company optimizes for.
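To make the within-session discounting idea concrete, a hedged Python sketch (the scoring scheme and half-life are entirely hypothetical assumptions of mine, not anything Netflix is known to compute):

```python
def discounted_session_score(view_ratings, half_life=3.0):
    """Score a viewing session, down-weighting later (lower-agency) views.

    Each view's rating is weighted by an exponential decay over its position
    in the session, so the tail of a long autoplay binge contributes less.
    `half_life` is the number of views after which a rating's weight halves.
    """
    if not view_ratings:
        return 0.0
    weighted = [
        rating * 0.5 ** (position / half_life)
        for position, rating in enumerate(view_ratings)
    ]
    return sum(weighted) / len(view_ratings)

# Five equally-rated videos in one autoplay session: the later views count
# less, so this scores lower than five deliberately chosen videos would.
print(discounted_session_score([4, 4, 4, 4, 4]))
```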

Comment by tamgent on Aligning Recommender Systems as Cause Area · 2020-11-27T12:27:44.904Z · EA · GW

Thanks for raising this. I appreciate that specification is hard, but I think there's a broader lens on 'user interests' with more acknowledgement of the behavioural side.

What users want in one moment isn't always what they would endorse in a less slippery behavioural setting or upon reflection. You might say this is a human problem, not a technical one. True, but we can design systems that help us optimize for our long-term goals, and that is a different task to optimizing for what we click on in a given moment. Sure, it's much harder to specify, but I think the user research can be done. Thinking about the user more holistically could open up new innovations too. Imagine a person has watched several videos in a row about weight loss, and rather than keeping them on the couch longer, the system learns to respond with good nudges: it prompts them to get up and go for a run, reminds them of their personal goals for the day (because it has such integrations), messages their running buddy, closes itself (with nice configurable settings and good defaults), or advertises joining a local running group (right now the local running group could not afford the advert, but in a world where recommenders weight ad quality to somehow include the long-term preferences of the user, that might be different).

I understand the measurement frustration issue, the task is harder than just optimising for views and clicks though (not just technically, also to align to the company's bottom line). However, I do think little steps towards better specification can help, and I'd love to read future user research on it at Netflix.

Comment by tamgent on Is there a positive impact company ranking for job searches? · 2020-10-15T09:39:07.646Z · EA · GW

You could cheaply email existing large job boards and ask them: if they were offered such a service (where the evaluation is provided for them by a yet-to-exist EA charity with a reputation for good impact evaluation), would they implement it?

Comment by tamgent on Longtermist reasons to work for innovative governments · 2020-10-14T13:48:13.490Z · EA · GW

Governments may also compete on being innovative, adding to your diffusion point.

Comment by tamgent on Some thoughts on EA outreach to high schoolers · 2020-09-15T16:20:55.173Z · EA · GW

I agree with your upsides. On the failure modes, though, I think the question of 'what do the parents think' is missing and important (I guess you maybe touch on it when saying it's politically delicate?). I can imagine all sorts of things high schoolers might decide to do because of something they read related to EA that parents might be unhappy about and rally against, from donating their pocket money to moving country or changing career plans (which many parents are opinionated about).

Regardless of whether the changes teenagers make in their lives as a result of getting into EA will be good or bad, or whether their parents are right or wrong, parents might with reason start rallying on Twitter or in the school union or something, with potential reputational costs, making it harder for the next person trying to do it.

Young adults don't have this problem. Even though they may also do things that their parents disagree with as a result of getting into EA stuff, the parents will have a less strong mandate for rallying against it, especially in a public way.

I don't think this risk rules it out completely in my view, and there are likely things that can be done to minimise the risk. I know a teacher who runs an EA club at a specialist maths sixth-form in the UK (16-18yo), and it seems to be going quite well. I'll send him this post.

Comment by tamgent on Which properties does the EA movement share with deep-time organisations? · 2020-08-29T15:49:26.908Z · EA · GW

I liked the form this post took and the question it explores, thanks for writing.

Comment by tamgent on AMA: Owen Cotton-Barratt, RSP Director · 2020-08-29T15:23:35.475Z · EA · GW

Is there any impact measurement of RSP currently? I appreciate it is unusually hard, but have you had any thoughts on good ways to go about this?

Comment by tamgent on The case of the missing cause prioritisation research · 2020-08-29T15:13:25.418Z · EA · GW

"For longtermists, surprising applications of ethical principles aren't as valuable, because by default we shouldn't expect them to influence humanity's trajectory, and because we're mainly using a maxipok strategy"

Aiming for maxipok doesn't mean not influencing the trajectory (if the counterfactual is catastrophe); it's just much harder to measure the impact. If measuring impact is hard, de-risking becomes more important because of path-dependency. If we build out one or two particular longtermist cause areas really strongly and with lots of certainty, they'll have a lot of momentum (orgs and stuff), and if we find out later that they are having negative impact or no impact (or worse, this happens and we never find out), that will be bad.

I agree longtermist cause prioritisation is harder, even though I didn't think your reasons were very well articulated (in particular, I don't understand why you're comparing altruism with understanding & controlling the future; that seems like apples and oranges to me, and surely it's the intersection of X and altruism that has the market gap), but I don't think it's less valuable.

Comment by tamgent on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-29T22:39:31.358Z · EA · GW

On 4., in addition to the incentive problem, there's also the problem of matching the right reviewer to the right reviewee such that the counterfactual value generated is high enough, which will depend greatly on the post and the reviewer. I think this is harder than the incentives problem. Downsides of not solving the matching problem could include people spending time on reviews that might have been better spent elsewhere, or promising posts that need review getting reviewed by whoever is most incentivized or has time on their hands; people then think the post has already been reviewed, so the price of a second review goes up.

Comment by tamgent on Can/should we define quick tests of personal skill for priority areas? · 2019-06-13T13:40:18.075Z · EA · GW

I am interested in this. It can be very costly and difficult to pivot when you make commitments on the order of years, such as what to study at university. However, the sheer size of the commitment also has value as a costly signal, and that's why society relies on it so much. I think cheap tests like you describe are great to do before embarking on commitments on the order of years, as is tracking timing and directionality: i.e. which opportunities might be better taken at another time, how reversible a pivot is, and what keeps my options open. I wish I had figured all that out earlier, ideally in high school. Telling people earlier, say in high school, to do cheap tests is probably pretty valuable.