Where are the long-termist theories of victory / impact? 2022-08-24T13:45:57.828Z
If you fail, you will still be loved and can be happy; a love letter to all EAs 2022-07-24T10:09:42.432Z
howdoyousay?'s Shortform 2022-06-20T23:19:41.274Z


Comment by howdoyousay? on Where are the long-termist theories of victory / impact? · 2022-08-28T15:59:10.579Z · EA · GW

Three things:

1. I'm mostly asking for any theories of victory pertaining to causes which support a long-termist vision / end-goal, such as eliminating AI risk. 

2. But also interested in a theory of victory / impact for long-termism itself, in which multiple causes interact. For example, if 

  • long-termism goal = reduce all x-risk and develop technology to end suffering, enable flourishing + colonise the stars

then the composites of a theory of victory/ impact could be...:

  • reduce x-risk pertaining to AI, bio, and others
  • research / understanding around enabling flourishing / reducing suffering
  • stimulate innovation
  • think through governance systems to ensure the technologies / research above are used for good / not evil

3. Definitely not 'advocating for longtermism' as an end in itself, but I can imagine that advocacy could be part of a wider theory of victory. For example, you could postulate that reducing x-risk would require mobilising considerable private / public sector resources, which requires winning hearts and minds around both how scarily probable x-risk is and the bigger goal of giving our descendants beautiful futures / leaving a legacy.

Comment by howdoyousay? on Where are the long-termist theories of victory / impact? · 2022-08-28T15:49:59.555Z · EA · GW

Agree there's something to your 1-3 counterarguments but I find the fourth less convincing, maybe more because of semantics than actual substantive disagreement. Why? A difference in net effect on x-risk reduction of 0.01% vs. 0.001% is pretty massive. These differences especially matter when the expected value is massive, because sometimes the same expected value holds across multiple areas. For example, preventing asteroid-related x-risk vs. AI vs. bio vs. runaway climate change: (by definition) all the same EV (if you take the arguments at face value). But the plausibility of each approach, and of individual interventions within each, would be pretty high variance. 

Comment by howdoyousay? on If you fail, you will still be loved and can be happy; a love letter to all EAs · 2022-07-24T22:17:35.199Z · EA · GW

The more I reread your post, the more I feel our differences might be more about nuance, but I think your contrarian / playing-to-an-audience-of-cynics tone (which did amuse me) makes them seem starker? 

Before I grace you with more sappy reasons why you're wrong and sign you up to my life-coaching platform[1] (i.e. counter-argue), I want to ask a few things...

  • I am not sure whether you're saying "treating people better / worse depending on their success is good", particularly in the paragraphs about success and worth, or whether you think that's just an immutable fact of life (which I disagree with). What's your take?
  • How do you see "having given my honest best shot" as distinct from my point about the value in trying your hardest? I'm suspicious we'd find they're mostly the same thing if we looked into it...
  • Do you think that mastery over skills (as a tool to achieve goals) is incompatible with having an intrinsic sense of self-worth? I would argue that they're pretty compatible. Moreover, for people feeling terrible and sh*t-talking themselves non-stop, which makes them think badly, I'm confident that feeling like their worth doesn't depend on successful mastery of skills is itself a pretty good foundation for mastery of skills.

Honestly I'm quite surprised by you saying you haven't found 'essentialist' self-worth, or what I'd call intrinsic self-worth, very valuable. I'd be down to understand this much better. For my part...:

  • I abandoned success-oriented self-worth because of a) the hedonic treadmill, b) the practical benefits: believing you are good enough is a much better foundation for doing well in life[2], I've found, and c) reading David Foster Wallace[3]
  • I don't mind if people think I'm better / worse at something and 'measure me' in that way; I don't mind if it presents fewer opportunities. But I take issue when anyone...:
    • uses that measurement to update on someone's value as a person, and treat them differently because of it, or;
    • over-updates on someone's ability; the worst of which looks like deference or writing someone off.
  1. ^

    First week is free, pal

  2. ^

     And of course I notice the paradox in points a) and b); it's a classic. But I'll embrace the contradictions that help.

  3. ^

    lol #cliché

Comment by howdoyousay? on Leaning into EA Disillusionment · 2022-07-23T21:49:01.620Z · EA · GW

I agree with this in principle... But there's a delicious irony in the idea of EA leadership (apols for singling you out in this way Ben) now realising "yes this is a risk; we should try and convince people to do the opposite of it", and not realising the risks inherent in that.

The fundamental issue is the way the community - mostly full of young people - often looks to / over-relies on EA leadership not just for ideas of causes to dedicate themselves to, but also for ideas about how to live their lives. This isn't necessarily EA leadership's fault, but it's not as if EA has never made claims about how people should live their lives before; from donating 10% of their income to productivity 'hacks' which can become an industry in themselves. 

I think there are many ways to put the wisdom of Helen's post into action, and one of them might be for more of EA leadership to be more open about what it doesn't know - both in terms of epistemics and the whole how-to-live-your-life stuff. I'm not claiming EA leaders act like some kind of gurus - far from it in fact - but I think some community members often regard them as such. One thing I think would be great is to hear more EA leaders coming out with a tone about EA ideas like "honestly, I don't know - I'm just on this journey trying to figure things out myself, here's the direction I'm trying to move in". 

I say this for two reasons: 1) because, knowing lots of people in leadership positions, I know this is how a lot of them feel, both epistemically and in terms of how to live your life as an EA, but it's not said in public; and 2) I think knowing this has given me a lot more healthy psychological distance from EA, because it lowers the likelihood of putting leaders on a pedestal / losing my desire to think independently. 

["We're just kids feeling our way in the dark of a cold, uncaring universe trying to inch carefully towards ending all suffering and maximising pleasure of all beings everywhere". New tag-line?]

Comment by howdoyousay? on EA for dumb people? · 2022-07-13T08:06:53.184Z · EA · GW

I didn't down/up-vote this comment but I feel the down-votes without explanation and critical engagement are a bit harsh and unfair, to be honest. So I'm going to try and give some feedback (though a bit rapidly, and maybe too rapidly to be helpful...)

It feels like just a statement of fact to say that IQ tests have a sordid history, and concepts of intelligence have been weaponised against marginalised groups historically (including women, might I add to your list ;) ). That is fair to say. 

But reading this post, it feels less interested in engaging with the OP's post, let alone with Linch's response, and more like there is something you wanted to say about intelligence and racism and have looked for a place to say it.  

  • I don't feel like relating the racist history of IQ tests helps the OP think about their role in EA; it doesn't really engage with what they were saying that they feel they are average and don't mind that, but rather just want to be empowered to do good.
  • I don't feel it meaningfully engages with Linch's central point; that the community has lots of people with attributes X in it, and is set up for people with attributes X, but maybe there are some ways the community is not optimised for other people

I think your post is not very balanced on intelligence.

  • general intelligence is, as far as I understand, a well-established domain in psychology / individual differences research
    • Though this does show how many people with outlying abilities in e.g. maths and sciences will - as they put it themselves - not be as strong on other intelligences, such as social intelligence. And in fairness to many EAs who are like this, they put their hands up about their shortcomings in these domains!
  • Of course there's a bio(psycho)social interaction between biological inheritance and environment when it comes to intelligence. The OP's and Linch's points still stand with that in mind.
  • The correlation between top university attendance and opportunity. Notably, the strongest predictor of whether you go to Harvard is whether your parents went to Harvard; but disentangling that from a) ability and b) getting coached / moulded to show your ability in the ways you need to for Harvard admissions interviews is pretty hard. Maybe a good way of thinking of it is something like: for every person who gets into elite university X...:
    • there are 100s of more talented people not given the opportunity or moulding to succeed at this, who otherwise would trounce them, but
    • there are 10000s more who, no matter how much opportunity or moulding they were given, would not succeed

Anyway, in EA we have a problem when it comes to identifying ourselves as a group that could be easily resolved by investing efforts in how our dynamics work, and the ways in which we exclude other people (I'm not just referring to Olivia) and how that affects within the community, at the level of biases and at the level of the effects that all this has on the work we do.

If I'm understanding you correctly, you're saying "we have some group dynamics problems; we involve some types of people less, and listen to some voices less". Is that correct? 

I agree - I think almost everyone would identify different weird dynamics within EA they don't love, and ways they think the community could be more inclusive; or some might find the lack of inclusiveness unpalatable but be willing to bite that bullet on trade-offs. Some good work has been done recently on starting up EA in non-Anglophone, non-Western countries, including putting forward the benefits of more local interventions; but a lot more could be done. 

A new post on voices we should be listening to more, and EA assumptions which prevent this from happening would be welcome!

Comment by howdoyousay? on Doom Circles · 2022-07-09T14:10:53.344Z · EA · GW

Thanks for your open and thoughtful response.

Just to emphasise, I would bet that ~all participants would get a lot less value from one / a few doom circle sessions than they would from:

  • cultivating skills to ask for / receive feedback effectively (with all the elements I've written about above), which they can use across time - including after leaving a workshop, and / or;
  • just a pervasive thread throughout the workshop helping people develop both these skills and also initiate some relationships at the workshops where they can keep practising this feedback seeking / giving in future.

I did loads of this kind of stuff on (granted, somewhat poorly executed) graduate schemes and it proved persistently valuable, and helped me get 'buddies' who I could be this open, reflective and insight-seeking with.

I agree there are other types of feedback that are probably better for most people in most cases, and that Doom Circles are just one format that is not right for lots of people. I meant to emphasize that in the post but I see that might not have come through. 

I feel like I would re-edit this post, maybe to emphasise "this is an option, but not necessarily the lead option", because its original positioning feels more like it's presenting a canonical approach?

I'm glad to hear you feel more comfortable setting boundaries now. I think it is a good flag that some people might not be in a place to do that, so we should be mindful of social / status dynamics and try our best to make this truly opt-in.

Sadly I think I would have been a fairly good example of most younger EAs still forging their sense of self and looking for belonging in a community; in particular the kinds of people who might feel they need this kind of feedback. So if these are going to be run again, I'd think reflecting on this when setting the terms / design would be useful.

Comment by howdoyousay? on Doom Circles · 2022-07-09T12:21:05.693Z · EA · GW

The original CFAR alumni workshop included a warning:
"be warned that the nature of this workshop means we may be pushing on folks harder than we do at most other CFAR events, so please only sign up if that sounds like something that a) you want, and b) will be good for you."


I'm struggling to understand the motivations behind this. 

Reading between the lines, was there tacit knowledge among the organisers that this was somewhat experimental, and that it could perhaps lead to great breakthroughs and positive emotions as well as the opposite, but they could only figure it out by trying?

The reason this feels so weird to me - especially the 'pushing on folks harder' - is because I know there are many ways to enable difficult things to be said and heard without people feeling 'pushed on'; in fact, in ways that feel light! Or at least you can go into it knowing it can go either way, but with the intention of it not feeling heavy / difficult. But it sounds like heaviness / 'pushing on people' is explicitly part of the recipe? That feels unnecessary to me...

Grateful for illumination from whoever it comes from!

Comment by howdoyousay? on Doom Circles · 2022-07-09T11:00:29.707Z · EA · GW

I'm struggling to understand why anyone would choose one big ritual like 'Doom Circles' instead of just purposefully inculcating a culture of openness to giving / receiving critique that is supportive and can help others. And I have a lot of concerns about unintended negative consequences of this approach. 

Overall, this runs counter to my experience of what good professional feedback relationships look like: 

  • I suspect the formality will make it feel weirder for people who aren't used to offering feedback / insights to start doing it in a more natural, everyday way, because they've only experienced it in a very bounded way which is likely highly emotionally charged. They might get the impression that feedback will always feel emotional, whereas if you approach it well it doesn't have to feel negatively emotional even when some of the content is less positive.
    • there should be high enough trust and mutual regard for my colleague to say to me "you know what? You do have a bit of a tendency to rush planning / be a bit rude to key stakeholders and that hasn't worked so well in the past, so maybe factor that into the upcoming projects"
  • low-context feedback is often not helpful; this is because someone's strengths are often what could 'doom' them if over-relied upon, and different circumstances require different approaches. This sounds like feedback given with very little context - especially if limited to 90 seconds and the receiver cannot give more context to help the giver.
  • feedback is ultimately just an opinion; you should be able to take it and also discard it. It's often based on someone's one narrower vantage point of you, so if you get lots of it, it will necessarily be contradictory. So if you acted on it all, you'd be screwed. This sounds like a fetishisation / glorification of the feedback given, which would then make it harder for the receiver of doom to assess each bit on its merits, synthesise it and integrate it.

A younger version of myself with less self-esteem would have participated and would have deferred excessively to others' views even if I felt they had blindspots. I think I would have integrated all of the things I heard, even if they were things I thought were likely not true on balance, and these would have rebounded in a chorus of negative self-talk. But I think part of the attraction of Doom Circles for me would have been:

  • all these smart people do it; there must be something to it
  • feeling like I must not be 'truly committed to self-improvement' if I don't want to participate
  • and, in a small part, the rush / pain of hearing 'the truth', a form of psychic self-harm like reading a diary you know you shouldn't

Now, I think I would just refuse to do this and rather put forward my counter-proposal, which would look more like sharing reflections on each other's traits / skills, what could enable us and hold us back, with two-way dialogue about this to try and figure out what is / isn't true. And doing so regularly - a build-up of negativity is always damaging when it eventually comes out, but also why hold back on the positivity when it's a great fuel for most people?

Comment by howdoyousay? on Leftism virtue cafe's Shortform · 2022-06-29T13:56:14.098Z · EA · GW

"70,000 hours back"; a monthly podcast interviewing someone who 'left EA' about what they think are some of EAs most pressing problems, and what somebody else should do about them.

Comment by howdoyousay? on howdoyousay?'s Shortform · 2022-06-20T23:19:41.493Z · EA · GW

Is it all a bit too convenient?

There's been lots of discussion about EA having so much money; particularly long-termist EA. Worries that that means we are losing the 'altruist' side of EA, as people get more comfortable, and work on more speculative cause areas. This post isn't about what's right / wrong or what "we should do"; it's about reconciling the inner tension this creates.

Many of us now have very well-paid jobs in nice offices with perks like table tennis. And many people are working on things which often yield no benefit to humans and animals in the near term but might in future; or indeed the first-order effect of the jobs is growing the EA community, and the 2nd- and 3rd-order effects are speculative benefits to humans, animals or other sentient beings in the future. These jobs are often high status. 

Though not in an EA org, I feel my job fits this bill as well. I get a bit pissed with myself sometimes, feeling I've sold out; because it just seems a bit too convenient that the most important thing I could do gets me high-profile speaking events, a nice salary, an impressive title, access to important people, etc. And the potential impact of my job, which is in AI regulation, is still largely speculative.

I feel long-termish, in that I aim to make the largest and most sustainable change so that all sentient minds can be blissful, not suffer, and enjoy endless pain au raisin. But that doesn't mean ignoring humans and animals today. To blatantly misquote Peter Singer: the opportunity cost of not saving a drowning child today is still real, even if that means showing up 5 minutes late to work every day and compromising on your productivity, which you believe is so important because you have a 1/10^7* chance of saving 10^700** children.

For me to believe I'm living my values, I think I need to still try to make an impact today. I try to donate a good chunk to global health and wellbeing initiatives, lean harder into animal rights, and (am now starting to) support people in my very deprived local community in London.

So two questions:

Do other long-termish leaning people feel this same tension?

And if so, how do you reconcile it within yourself?

*completely glib choice of numbers

**exponentially glibber

Comment by howdoyousay? on Responsible/fair AI vs. beneficial/safe AI? · 2022-06-04T05:06:57.385Z · EA · GW

Your question seems to be both about content and interpersonal relationships / dynamics. I think it's very helpful to split out the differences between the groups along those lines.

In terms of substantive content and focus, I think the three other responders outline the differences very well, particularly on attitudes towards AGI timelines and the types of models they are concerned about.

In terms of the interpersonal dynamics, my personal take is that we're seeing a clash between left / social-justice and EA / long-termism play out more strongly in this content area than in most others, though to date I haven't seen any animus from the EA / long-termist side. In terms of explaining the clash, I guess it depends how detailed you want to get.

You could be minimalistic and sum it up as: one or both sides hold stereotypical threat models of the other, and are not investigating these models but rather attacking based on them.

Or you could expand and explain why EA / long-termism evokes such a strong threat response in people from the left, especially marginalised communities and individuals who have been punished for putting forward ethical views - like Gebru herself.

I think the latter is important but requires lots of careful reflection and openness to their world views, which I think requires a much longer piece. (And if anyone is interested in collaborating on this, would be delighted!)

Comment by howdoyousay? on Responsible/fair AI vs. beneficial/safe AI? · 2022-06-04T04:54:42.019Z · EA · GW

To add to the other papers coming from the "AI safety / AGI" cluster calling for a synthesis of these views...

Comment by howdoyousay? on Awards for the Future Fund’s Project Ideas Competition · 2022-05-26T05:37:47.884Z · EA · GW

I think taking this forward would be awesome, and I'm potentially interested to contribute. So consider this comment an earmarking for me to come speak with you and / or Rory about this at a later date :)

Comment by howdoyousay? on EA needs to understand its “failures” better · 2022-05-24T17:08:43.650Z · EA · GW

Thanks for writing this, completely agree.

I'd love it if the EA community was able to have increasingly sophisticated, evidence-backed conversations about e.g. mega-projects vs. prospecting for and / or investing more in low-hanging fruit.

It feels like it would help ground a lot more debates and decision-making within the community, especially around prioritising projects which might plausibly benefit the long-term future compared with projects we've stronger reasons to think will benefit people / animals today (albeit not an almost infinitely large number of people / animals).

But also, you know, an increasingly better understanding of what seems to work is valuable in and of itself!

Comment by howdoyousay? on The Inner Ring [Crosspost] · 2022-05-15T13:52:51.398Z · EA · GW

Cross-post to Leftism Virtue Café's commentary on this:

Comment by howdoyousay? on EA will likely get more attention soon · 2022-05-13T12:22:29.161Z · EA · GW

Equally there's an argument to thank and reply to critical pieces made against the EA community which honestly engage with the subject matter. This post (now old) making criticisms of long-termism is a good example:

I'm sure / really hope Will's new book does engage with the points made here. And if so, it provides a rebuttal for those who come across hit-pieces and take them at face value, or those who promulgate hit-pieces because of their own ideological drives.

Comment by howdoyousay? on EA will likely get more attention soon · 2022-05-12T14:10:07.542Z · EA · GW

Thanks for this thoughtful challenge, and in particular for flagging what future provocations could look like, so we can prepare ourselves and let our more reflective selves come to the fore, rather than our reactive child selves.  


In fact, I think I'll reflect on this list for a long time to ensure I continue not to respond on Twitter!

Comment by howdoyousay? on My experience with imposter syndrome — and how to (partly) overcome it · 2022-04-22T08:30:28.137Z · EA · GW

Agreed, and I was going to single out that quote for the same reason. 

I think that sentence is really the crux of imposter syndrome. I think it's also, unfortunately, somewhat uniquely triggered by how EA philosophy is a maximising philosophy, which necessitates comparisons between people or 'talent' as well as cause areas. 

As well as individual actions, I think it's good for us to think more about community actions around this, as any intervention targeting the individual but not changing the environment rarely makes the dent needed.

Comment by howdoyousay? on EA Houses: Live or Stay with EAs Around The World · 2022-04-19T11:00:02.421Z · EA · GW

Full disclosure: I'm thinking about writing up the ways in which EA's focus on impact, and the amount of deference to high-status people, create cultural dynamics which are very negative for some of its members. 

I think that we should just bite the bullet here and recognise that the vast majority of smart dedicated people trying very hard to use reason and evidence to do the most good are working on improving the long run future.


It's a divisive claim, and not backed up with anything. By saying 'bite the bullet', it's like you're taunting the reader: "if you don't recognise this, you're willfully avoiding the truth / cowardly in the face of truth". Whereas for such a claim I think the onus is on you to back it up. 

It's also quite a harsh value judgement of others, and bad for that reason - see below.


To be clear, there are plenty of people working on LT issues who have some/all of the above problems and I am also not very excited about them or their work.

This implies "some people matter, others do not". It's unpleasant and a value judgement, and worth downvoting on that alone. It also assumes such judgements can easily be made of others - whether they "don't think about things well". I think I've pretty good judgement of people and how they think (it's part of my job to have it), but I wouldn't make these claims about someone as if they were definitive and then decide whether to engage / disengage with them on the basis of that. 

But it's even more worth downvoting given how many EAs - in my experience, I'll caveat - end up disconnecting from the community or beating themselves up because they feel the community makes value judgements about them, their worth, and whether they're worth talking to. I think it's bad for all the 'mental health --> productivity --> impact' reasons, but most importantly because I think not hurting others, or creating conditions in which they would be hurt, matters. This statement you made seems to me to be very value-judgementy, and would make many people feel threatened and less like expressing their thoughts in case they would be accused of 'not thinking well', so I certainly don't want it going unchallenged; hence downvoting it. 

I would be super interested in seeing your list though, I'm sure there are some exceptions.

I think making a list of people doing things, ranking them against your four criteria above, and sharing that with other people would bring further negative tones to the EA community.

Comment by howdoyousay? on The Effective Altruism culture · 2022-04-17T10:53:19.080Z · EA · GW

I do suspect there is a lot of interaction happening between social status, deference, elitism, and what I'm starting to feel is more of a mental health epidemic than a mental health deficit within the EA community. I suspect it's good to talk about these together, as things going hand in hand.

What do I mean by this interaction?

Things I often hear, which exemplify it:

  • younger EAs, fresh out of uni, following particular career advice from a person / org and investing a lot of faith in it - probably more so than the person of higher status expects them to. Their path doesn't go quite right, and they get very burned out and disillusioned
  • people not coming to EA events anymore because, while they want to talk about the ideas and feel inspired to donate, the imposter syndrome becomes too big when they get asked "what do you do for work?"
  • talented people not going for jobs / knocking themselves down because "I'm not as smart as X" or "I don't have 'elite university' credentials", which is a big downer for them and reinforces the whole deference to those with said status, particularly because they're more likely to be in EA positions of power
    • this is a particularly pernicious one, because ostensibly smarter / more experienced people do exist, and it's hard to tell who is smarter / more experienced without looking to signals of it, and we value truth within the community... but these are not always the most accurate signals, and moreover the response to the signal (i.e. "I feel less smart than that person") is in fact an input into someone's ability to perform

Call me a charlatan without my objective data, but speaking to group organisers this seems way more pervasive than I previously realised... I would welcome more group organisers / large orgs like CEA surveying this again, building on the 2018/19 work... hence why I'm using strong language that might seem almost alarmist

EDIT: formatting was a mess

Comment by howdoyousay? on Four categories of effective altruism critiques · 2022-04-11T13:38:51.814Z · EA · GW

I'd add a fifth: one about individuals personally exploring ways in which an EA mindset, and / or taking advice / guidance on lifestyle or career from their EA community, has led to less positive results in their own lives.

Some that come to mind are:

Denise's post "My mistakes on the path to impact". And, though I can't find it, the post about how hard it is to get a job in an EA organisation, and how demoralising that is (among other points)

Comment by howdoyousay? on I want an ethnography of EA · 2022-04-11T12:46:26.342Z · EA · GW

Here's a podcast I listened to years ago which has influenced how I think about groups and what to be sceptical about; most specifically what we choose not to talk about.

This is why I'm somewhat sceptical about how EA groups would respond to an offer of an ethnography; what do people find uncomfortable to talk about with a stranger observing them, let alone with each other?

Comment by howdoyousay? on The Vultures Are Circling · 2022-04-06T09:10:50.754Z · EA · GW

Yes to links of what conversations on gaming the system are happening where! 

Surely this is something that should be shared directly with all funders as well? Are there any (in)formal systems in place for this?


Comment by howdoyousay? on I feel anxious that there is all this money around. Let's talk about it · 2022-04-01T10:30:32.675Z · EA · GW

One way to approach this would simply be to make a hypothesis (i.e. the bar for grants is being lowered, we're throwing money at nonsense grants), and then see what evidence you can gather for and against it.

Another way would be to identify a hypothesis for which it's hard to gather evidence either way. For example, let's say you're worried that an EA org is run by a bunch of friends who use their billionaire grant money to pay each other excessive salaries and sponsor Bahama-based "working" vacations. What sort of information would you need in order to support this to the point of being able to motivate action, or falsify it to the point of being able to dissolve your anxiety? If that information isn't available, then why not? Could it be made available? Identifying a concrete way in which EA could be more transparent about its use of money seems like an excellent, constructive research project.


Overall I like your post and think there's something to be said for reminding people that they have power; in this case, the power to probe at the sources of their anxiety and reveal ground truth. But there is something unrealistic, I think, about placing the burden on the individual with such anxiety; particularly because answering questions about whether Funder X is lowering / raising the bar too much requires in-depth insider knowledge which - understandably - people working for Funder X might not want to reveal for a number of reasons, such as:

  1. they're too busy, and just want to get on with grant-making
  2. with distributed responsibility for making grants in an organisation, there will be a distribution of happiness across staff with the process, and airing such tensions in public can be awkward and uncomfortable
  3. they've done a lot of the internal auditing  / assessment they thought was proportional
  4. they're seeing this work as inherently experimental / learning-by-doing, and therefore plan more post-hoc reviews than prior process-crafting

I'm also just a bit averse, from experience, to replying to people's anxieties with "solve it yourself". I was on a graduate scheme where the response to almost every issue raised - often really systemic, challenging issues which people hadn't been able to solve for years, or which could be close to whistle-blowing issues - was pretty much "well, how can you tackle this?"* The takeaway message then feels something like "I'm a failure if I can't see the way out of this, even if this is really hard, because this smart, more experienced person has told me it's on me". But lots of these systemic issues do not have an easy solution, and taking steps towards action can be emotionally / intellectually hard or frankly personally costly. 

From experience, this kind of response can be empowering, but it can also inculcate a feeling of desperation when clever, can-do people (like most EAs) are advised to solve something without support or guidance, especially when it is near intractable. I'm not saying this is what the response of 'research it yourself' amounts to - in fact, you very much gave guidance - but I think the response was not sufficiently mindful of the barriers to doing this. Specifically, I think it would be really difficult for a small group of capable people to research this a priori without other inputs and support, e.g. significant cooperation from the Funder X they're looking to scrutinise, or advice from other people / orgs who've done this work. Sometimes that is available, but it isn't always, and I'd argue it's kind of a condition for success / not getting burned out trying to get answers on the issue that's been worrying you.

Side-note: I've deliberately tried to make this commentary funder-neutral because I'm not sure how helpful the focus on FTX is. In fairness to them, they may be planning to publish their processes / invite critique (or have done so in private?), or planning to take forward rigorous evaluation of their grants like GiveWell did. So I would rather frame this as an invitation to comment if they haven't already, because the assumption throughout this thread seems to be "they ain't doing zilch about this", which might not be the case.

*EDIT: In fact, sometimes a more appropriate response would have been "yes, this is a really big challenge you've encountered and I'm sorry you feel so hopeless over it - but the feeling reflects the magnitude of the challenge". I wonder if that's something relevant to the EA community as well; that aspects of moral uncertainty / uncertainty about whether what we're doing is impactful or not is just tough, and it's ok to sit with that feeling.

Comment by howdoyousay? on [April fool's post] Proposal to assign careers by birthdate · 2022-04-01T10:03:41.727Z · EA · GW

I'm personally concerned that horoscopes weren't taken into account in devising this scheme, when there are literally thousands of years' worth of work on this, all going back to classical civilisation and Aristotle or something. Classic EAs overcomplicating things / reinventing the wheel.

Comment by howdoyousay? on I want an ethnography of EA · 2022-04-01T07:56:14.029Z · EA · GW

Interesting, I think it's the other way round; there are tonnes of companies and academic groups who do action-oriented evaluation work which can include (and, I reckon, in some cases exclusively be) ethnography. But in my experience the hard part is always "what can feasibly be researched?" and "who will listen and learn from the findings?" In the case of the EA community, this would translate to something like the following questions, ranked from hardest to easiest...:

  • what exactly is the EA community? or what is a representative cross-section / group for exploration?
  • who actually wants to be surveilled and critiqued; to have their assumptions and blindspots surfaced in a way that may cast aspersions on their actions and what they advocate for? especially if these are 'central nodes' or public(ish) figures
  • how can the person(s) doing ethnography be given sufficient power and access to do their work effectively?
  • what kind of psychological contracts need to be engendered so that the results of this research don't fall on deaf ears? and how do we go about that?
  • what things do we want to learn from this? should it be theory-driven, or related to specific EA subject-matter (e.g. long-termism)? or should the ethnographer be given a wider remit to do this work?

I'd be happy to have a conversation about what this could look like - maybe slightly more useful than a paper, because I suspect this area is full of potential misunderstanding potholes, so it's easier to clarify by chatting it through. 

Comment by howdoyousay? on I want an ethnography of EA · 2022-03-30T20:16:53.837Z · EA · GW

Yeah I'd know how to go about making this happen, including figuring out what's a decent research question for it, but not undertaking it myself.

Comment by howdoyousay? on I want an ethnography of EA · 2022-03-30T15:03:15.590Z · EA · GW

How does one tag someone with lots of money in this post?


I phrase this in jest, but mean it in all seriousness - the rhetoric at the moment is 'be more ambitious' because we are less cash constrained than before, but maybe we should add to this 'be more ambitious, but doubly as self-critical as before'.

Comment by howdoyousay? on A Landscape Analysis of Institutional Improvement Opportunities · 2022-03-29T08:11:54.927Z · EA · GW

I will be completely honest and share that I downvoted this response as I personally felt it was more defensive than engaging with the critiques, and didn't engage with specific points that were asked - for example, capacity for change. That said, I recognise I'm potentially coming late to the party in sharing my critiques of the approach / method, and in that sense I feel bad about sharing them now. But usually authors are ultimately open to this input, and I suspect this group is no different :)

A few further points:

  • I understand the premise of "our unit of analysis was the institutions themselves, so we could focus in on the most likely to be 'high leverage' to then gain the contextual understanding required to make a difference". I would not be surprised if the next step proves less fruitful than expected for a number of reasons, such as:
    • difficult to gain access to the 'inner rings' to ascertain this knowledge on how to make impact 
    • the 'capacity for change' / 'neglectedness, tractability' turns out to be a significantly lower leverage point within those institutions, which potentially reinforces a point we might have guessed at in advance: that impact / scale can be inversely correlated with flexibility / capacity for change / tractability / etc.
  • I get a sense from having had a brief look at the methodology that insider knowledge of making change in these organisations could have been woven in earlier, e.g. by talking to EAs / EA-aligned types working within government or big tech companies. This would have been useful for deciding what the unit of analysis should be, or just sense-checking 'will what we produce be useful?'
    • If this was part of the methodology, my apologies: it's on me for skim-reading.
  • I'm a bit concerned by choosing to build a model for this, given as you say this work is highly contextual and we don't have most of this context. My main concerns are something like...:
    • quant models are useful where there are known and quantifiable distinguishers between different entities, and where you have good reason to think you can:
      • weight the importance of those distinguishers accordingly
      • change the weights of those distinguishers as new information comes in
    • but as Ian says, 'capacity for change' is highly contextual, and a critical factor in deciding which organisations should be prioritised
    • however, the piece above reads like 'capacity for change' was factored into the model. If so, how? And why now, when there's lower info on it?
    • just from a time resource perspective, models cost a lot, and sometimes are significantly less efficient than a qualitative estimate especially where things are highly contextual; so I'm keen to learn more about what drove this

This is all intended to be constructive even if challenging. I work in these kinds of contexts, so this work going well is meaningful to me, and I want to see the results as close to ground truth and actionable as possible. Admittedly, I don't consider the list of top institutions necessarily actionable as things stand or that they provide particularly new information, so I think the next step could add a lot of value.

Comment by howdoyousay? on A Landscape Analysis of Institutional Improvement Opportunities · 2022-03-21T14:33:34.241Z · EA · GW

I'm replying quickly to this as my questions closely align with the above to save the authors two responses; but admittedly I haven't read this in full yet. 

Next, we conducted research and developed 3-5-page profiles on 41 institutions. Each profile covered the institution’s organizational structure, expected or hypothetical impact on people’s lives in both typical and extreme scenarios, future trajectory, and capacity for change.

Can you explain more about 'capacity for change' and what exactly that entailed in the write-ups? I ask because, looking at the final top institutions and reading their descriptions, it feels like the main leverage is driven by 'expected or hypothetical impact on people's lives in both typical and extreme scenarios', and less by 'capacity for change'. 

It seems to be taken as a given that EAs working in one of these institutions (e.g. DeepMind), or concrete channels of influence (e.g. thinktanks to the CCP Politburo), constitute 'capacity for change' within the organisation, but I would argue that capacity for change is in fact driven by a plethora of factors internal and external to the organisation. External might be market forces driving an organisation's dominance or threatening its decline (e.g. Amazon); internal might be forces like culture and systems (e.g. Facebook / Meta's resistance to employee action). In fact, the latter example really challenges why this organisation would be in the top institutions if 'capacity for change' had been well factored in. 

For such a powerful institution, the Executive Office of the President is capable of shifting both its structure and priorities with unusual ease. Every US President comes into the office with wide discretion over how to set their agenda and select their closest advisors. Since these appointments are typically network-driven, positioning oneself to be either selected as or asked to recommend a senior advisor in the administration can be a very high-impact career track. 

Equally, when it comes to capacity for change this is both a point in favour and against, as such structure and priorities are by definition not robust / easily changed by the next administration. 

Basically, it's really hard to get a sense of whether the analysis captured these broader concerns from the write-up above. If it didn't, I would hope this would be a next step in the analysis as it would be hugely useful and also add a huge deal more novel insights both from a research perspective and in terms of taking action.

I'm also curious about how heavily this is weighted towards AI institutions - and I work in the field of AI governance, so I'm not a sceptic. Does this potentially tell us anything about the methodology chosen, or the experts enlisted?

EDIT: additional point around Executive Office of the President of US

Comment by howdoyousay? on A Case for Improving Global Equity as Radical Longtermism · 2021-12-18T14:59:36.752Z · EA · GW

I welcome the counter-arguments on this, but I think the writer makes a fair point about protecting current institutions and systems which are weakening due to political changes / pressure / defunding. It isn't ideal when countries withdraw funding from the WHO; and arguably, if institution X were less reliant on funding from nation states, it would likely also be less beholden to them politically. More beholden to philanthropists, though, so here comes the private-actors-vs-states-as-funders debate again, which I'm not going to put forward a solution to now so much as say "it's a debate alright".

These institutions aren't perfect by any means - the masks debacle at the WHO being a case in point - but a question is: if the WHO didn't exist as a mechanism for near- and long-term health protection, would we suggest it should be founded? The answer is likely yes; so if such institutions are under-resourced, why not consider funding them?

More controversial perspective: the message going round now is "we have lots of money, we just want to keep the bar high for what we do with it; ergo be ambitious". So I think it's fair enough to say "maybe the health protection / poverty alleviation systems that keep the world going in the right direction are fit for increased funding in the absence of these more ambitious and fitting ideas being put forward"...  

I guess I'm saying what's the appropriate default? Very high bar for innovative long-term ideas seems reasonable because this is an emerging field with high uncertainty. But lower bar for ways in which the world is on fire now, and where important institutions could get worse / lead to worse outcomes if defunding / underfunding continues?

Comment by howdoyousay? on Ngo's view on alignment difficulty · 2021-12-15T15:59:56.745Z · EA · GW

I petition all further posts written by Richard Ngo on this forum to be titled "Ngo ngows best".


Or revoke his membership.

Comment by howdoyousay? on A new, cause-general career planning process · 2020-12-03T14:02:47.400Z · EA · GW

Really excited based on what I've read above. Some very hot takes before I go and read the detail...

It would be great to see some case studies in due course of people who applied this kind of thinking, what choices they made and what they learned; particularly to highlight other high impact careers which don't align with priority paths. And it's easier to make sense of how to use a technique - and how much relative effort to consider for each step - when you have other cases to refer to.

Finally, I hope this career planning process will help the community reframe what it means to have an ‘effective altruist career'. Effective altruism is focused on outcomes, and for good reasons; but focusing a lot on outcomes can have some bad side effects.

This is very welcome. Drilling into another bad side effect of focusing on outcomes, I would be curious to see if this approach can help readers make career decisions which are more compatible with a happy career / life. I suspect us EAs can be prone to a relentless focus on impact in the abstract, absolute sense, to the detriment of thinking about what makes me personally impactful - and the latter is more likely to be where an individual will get results from placing their energy. And burnout is well worth avoiding because it's a bloody pox and not fun!


Comment by howdoyousay? on Buck's Shortform · 2020-09-14T18:14:12.742Z · EA · GW
I feel like an easy way to get lots of upvotes is to make lots of vague critical comments about how EA isn’t intellectually rigorous enough, or inclusive enough, or whatever. This makes me feel less enthusiastic about engaging with the EA Forum, because it makes me feel like everything I’m saying is being read by a jeering crowd who just want excuses to call me a moron.

Could you unpack this a bit? Is it the originating poster who makes you feel that there's a jeering crowd, or the people up-voting the OP which makes you feel the jeers?

As counterbalance...

Writing, and sharing your writing, is how you often come to know your own thoughts. I often recognise the kernel of truth someone is getting at before they've articulated it well, both in written posts and verbally. I'd rather encourage someone for getting at something even if it was lacking, and then guide them to do better. I'd especially prefer to do this given I personally know that it's difficult to make time to perfect a post whilst doing a job and other commitments.

This is even more the case when it's on a topic that hasn't been explored much, such as biases in thinking common to EAs, or diversity issues. I accept that in liberal circles being critical on the basis of diversity and inclusion or cognitive biases is a good signalling-win, and you might think it would follow suit in EA. But I'm reminded of what Will MacAskill said about 8 months ago on an 80k podcast: that he lay awake thinking his reputation would be in tatters after posting in the EA Forum, that his post would be torn to shreds (didn't happen). For quite some time I was surprised at the diversity elephant in the room in EA, and welcomed it when these critiques came forward. But I was in the room and not pointing out the elephant for a long time because I - like Will - had fears about being torn to shreds for putting myself out there, and I don't think this is unusual.

I also think that criticisms of underlying trends in groups are really difficult to get at in a substantive way, and though they often come across as put-downs from someone who wants to feel bigger, it is not always clear whether that's due to authorial intent or reader's perception. I still think there's something that can be taken from them though. I remember a scathing article about yuppies who listen to NPR to feel educated and part of the world for signalling purposes. It was very mean-spirited but definitely gave me food for thought on my media consumption and what I am (not) achieving from it. I think a healthy attitude for a community is willingness to find usefulness in seemingly threatening criticism. As all groups are vulnerable to effects of polarisation and fractiousness, this attitude could be a good protective element.

So in summary, even if someone could have done better on articulating their 'vague critical comments', I think it's good to encourage the start of a conversation on a topic which is not easy to bring up or articulate, but is important. So I would say go on ahead and upvote that criticism whilst giving feedback on ways to improve it. If that person hasn't nailed it, it's started the conversation at least, and maybe someone else will deliver the argument better. And I think there is a role for us as a community to be curious and open to 'vague critical comments' and find the important message, and that will prove more useful than the alternative of shunning it.

Comment by howdoyousay? on Should we think more about EA dating? · 2020-07-27T15:43:55.002Z · EA · GW

Hypocrite 101 here as I am dating / have dated EAs, but anyway...

The problem this post is trying to solve is "EAs are a bit too weird for other people", and the proposed solution is "let's pair up romantically". This solution would, in my opinion, aggravate another significant problem, which is best laid out by this post here about the risks of insularity within the community from excessive value alignment. The writer makes a much more rigorous argument than I am about to, but I think one element of it applies to this case, captured in the quote: "EA will miss its ambitious goal by working with only an insular subset of the people it is trying to save."

Having friendships / relationships outside EA would diversify your own thought as well as potentially diversifying the pool of people interested in EA / EA thinking. So if you accept the arguments of this post that insularity / strong value-alignment is a threat, then friendships / relationships outside of EA are intrinsically valuable. Dating other EAs does not in itself create cultural and community insularity, but encouraging it as a solution to a problem of EAs not being great at external social integration would entrench community insularity.

The best counter-argument is that promoting friendships / relationships / any social interaction outside of EA won't go far enough, and that the real problem is insularity at leadership levels within EA - that's what we should break, and we should give people's dating choices a break. Which I think is fair. But notwithstanding that, there are still benefits for individuals or local groups (e.g. city-based) around external integration.

Other counter-arguments:

Most liberals marry liberals; most cultists marry cultists; people marry those from their own religion (and hunt for them on dating apps). Isn't it normal for humans to assortatively mate?

Or why can't we have friends who bring us diversity instead?

Comment by howdoyousay? on Longtermism ⋂ Twitter · 2020-06-21T18:00:54.356Z · EA · GW

Some words of caution here, which I'll keep brief in the hope of (ideally) setting someone up to take them down with a steel-man.

The tl;dr version is Twitter excels at meming misinformed, outraged takes on nuanced things.

First off, EA and in particular long-termism has some vocal detractors who do not seem to use the same norms as most people on the EAF.

Second, Twitter is a forum which people who dislike an event / idea can easily weaponise to discredit the thing and the poster, and do so through (sometimes deliberate) misinterpretation. So it's plausible that long-termist posts on Twitter - if not steel-manned rigorously beforehand - would be vulnerable to this. For example, any post not triple-checked could be retweeted with a misinterpreting comment arguing that long-termism is a bad ideology, provoking a negative meme-and-outrage cascade / pile-on.

Third, even with excellent codes of conduct in place (and I agree with disseminating the EAF CoC more widely where possible), an actor who wants to misinterpret something can and will. There is a fairly substantial risk that, should this happen, it would skew the discourse on long-termism outside EA for quite some time, and it may prove very challenging to reset this.

The above are some hot-takes, which I genuinely thought about *not* posting because I haven't had time to mull over them much but thought better to do it than not.

Also, I genuinely hope I'm wrong (especially because I hate being the Helen Lovejoy "won't someone please think of the (future) children?!" voice!) - I think it would be helpful for someone to give some arguments against those or propose some potential mitigations, maybe those seen in other Twitter forums?

Comment by howdoyousay? on Please use art to convey EA! · 2019-05-28T20:14:05.964Z · EA · GW

Kurzgesagt communicates some complex ideas using visualisations and reframings which are also quite effective, and which we could possibly learn from. Their video on time is a good example of this.

Comment by howdoyousay? on Please use art to convey EA! · 2019-05-28T20:12:03.150Z · EA · GW

Thank you for posting this. I massively laud giving slightly 'left field' approaches a go, and I think you've raised an important issue about communicating about EA movement and thinking generally.

My reply rests on a few assumptions, which I hope are not too unfair - happy for critique / challenge on them.

The OP's point about art is worth considering in the context of another question: how can we communicate our thinking (in all its diversity and complexity) accurately and effectively to people outside the community?

Whilst I laud the OP's ambition, it's worth thinking about the intermediate steps between logical reasoning (which I observe is our default) and art: using metaphor and analogy to illustrate points. (To note: I believe some animal charities do this already, using the Schindler's car example to influence actions regarding factory farming.)
Before giving arguments in favour, here's an example: a video explaining a new type of cancer treatment, CAR-T cell therapy

Some brief arguments in favour:

1) Metaphors / analogies can create an 'aha' moment where the outline of a complex idea is grasped easily and retained by the listener, which they can then layer nuance on top of. People might otherwise not grasp certain complex EA ideas so easily.

2) Whilst explaining a position in logical sequence with great attention to detail is often effective for influencing (and is the main communication approach observed on this forum), I assume that lots of people are not 'hooked' by that approach, or find the line of reasoning too abstract to wish to change their mindset or behaviour in response to it.

3) Metaphors / analogies can be more memorable, and therefore transfer from person to person or 'spread' better than prosaic reasoning.

4) If you assume that people often have weak attention spans and inaccurate recollection memory, then 1-3 are even stronger arguments in favour of using metaphors more.

The examples the OP chooses (e.g. Dr Strangelove) show that communicating an idea through art requires the artist's ambition to be matched with huge skill, so this strikes me as 'high risk, high gain' territory. But we can probably make some decent gains by developing some metaphorical or allegorical ways of communicating EA thinking, testing them out and iterating... and THEN seeing if the people we want to communicate our messages to apprehend them better.