Replaceability v. 'Contextualized Worthiness' 2022-07-27T16:48:19.173Z
Replaceability v. 'Contextualized Worthiness' 2022-04-11T09:45:35.729Z
Underinvestment at the top: what I discovered coaching a dozen EA leaders 2022-04-01T12:36:41.044Z
Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program 2019-07-23T23:53:20.274Z
RC Forward - Canada's Effective Giving Experiment: Results & Plans for 2019 2018-12-28T19:28:41.041Z
Please Take the 2018 Effective Altruism Survey! 2018-04-25T17:48:07.796Z
EA Survey 2017 Series: How do People Get Into EA? 2017-11-17T04:44:05.688Z
SHIC Workshop Experiment and Revised Impact Strategy 2018 2017-10-31T18:45:27.551Z
EA Survey 2017 Series: Have EA Priorities Changed Over Time? 2017-10-06T15:51:49.329Z
EA Survey 2017 Series: Qualitative Comments Summary 2017-09-21T01:36:50.004Z
EA Survey 2017 Series: Demographics II 2017-09-18T15:51:47.118Z
EA Survey 2017 Series: Donation Data 2017-09-12T01:29:56.716Z
EA Survey 2017 Series: Cause Area Preferences 2017-09-01T14:55:27.596Z
EA Survey 2017 Series: Community Demographics & Beliefs 2017-08-29T18:36:16.146Z
EA Survey 2017 Series: Distribution and Analysis Methodology 2017-08-29T18:31:51.850Z
.impact is now Rethink Charity 2017-05-30T20:45:01.645Z


Comment by Tee on Some clarifications on the Future Fund's approach to grantmaking · 2022-05-13T14:16:29.771Z · EA · GW

I've got a similar feeling to Khorton. Happy to have been pre-empted there. 

It could be helpful to consider what legibility in the grant application process (of which post-application feedback is only one sort) is meant to achieve. Depending on the grant maker's aims, this can non-exhaustively include developing and nurturing talent, helping future applicants self-select, orienting projects on whether they are doing a good job, serving as a beacon and marketing instrument, clarifying and staking out an epistemic position, serving an orientation function for the community, etc.

And depending on the basket of things the grant maker is trying to achieve, different pieces of legibility affect 'efficiency' in the process. For example, case studies and transparent reasoning about accepted and rejected projects, published evaluations, criteria for projects to consider before applying, hazard disclaimers, risk profile declarations, published work on the grant maker's theory of change, etc. can give grant makers 'published' content to invoke during the post-application process that allows for the scaling of feedback (e.g. our website states that we don't invest in projects that rapidly accelerate 'x'). There are other forms of proactive communication and stratified applicant journeys that would make things even more efficient.

FTX did what they did, and there is definitely a strong case for why they did it that way. Moving forward, I'd be curious to see if they acknowledge and make adjustments in light of the fact that different forms and degrees of legibility can affect the community.


Comment by Tee on Some clarifications on the Future Fund's approach to grantmaking · 2022-05-12T11:46:16.672Z · EA · GW

> why it’s at least a non-obvious decision

> Will we provide feedback to rejected applicants in the future? Possibly, but I think this involves complex tradeoffs and isn't a no-brainer

> So I don’t think we should be doing this now, but I’m not saying that we won’t try to find ways to give more feedback in the future (see below).

Very much appreciate the considerate engagement with this. Wanted to flag that my primary response to your initial comment can be found here

All this makes a lot of sense to me. I suspect some people got value out of the presentation of this reasoning. My goal here was to bring this set of considerations to your and Sam's attention and to upvote its importance; hopefully it's factored into what is definitely a non-obvious and complex decision moving forward. Great to see how thoughtful you all have been, and thanks again!

Comment by Tee on Some clarifications on the Future Fund's approach to grantmaking · 2022-05-11T19:39:11.610Z · EA · GW

Okay, upon review, that was a little bit too much of a rhetorical flourish at the end. Basically, I think there's something seriously important to consider here about how process can negatively affect community health and alignment, which I believe to be important for this community in achieving the plurality of ambitious goals we're shooting for. I believe FTX could definitely affect this in a very positive way if they wanted to.

Comment by Tee on Some clarifications on the Future Fund's approach to grantmaking · 2022-05-11T18:00:57.589Z · EA · GW

> an opportunity cost to providing feedback

> huge mistake for Future Fund to provide substantial feedback except in rare cases.


Yep, I'd imagine what makes sense is somewhere between 'a highly involved and coordinated attempt to provide feedback at scale' and 'zero'. I think it's tempting to look away from how harmful 'zero' can be at scale.

> That could change in future if their other streams of successful applicants dry up and improving the projects of people who were previously rejected becomes the best way to find new things they want to fund.

Agreed – this seems like a way to pick up easy wins and should be a good reason for grant makers to circle back. However, banking on this to handle the concerns that were raised doesn't account for everything that comes with unqualified rejection: people deciding to do other things, leaving EA, incurring critical stakeholder instability, etc. as a result.

In other words, for the consequentialist-driven among us, I don't think that community health is a nice-to-have if we're serious about having a community of highly effective people working urgently on hard/complex things.

Comment by Tee on EA and the current funding situation · 2022-05-11T10:20:51.493Z · EA · GW

Related thread that Sam and Nick are speaking on:

Comment by Tee on Some clarifications on the Future Fund's approach to grantmaking · 2022-05-11T10:10:16.906Z · EA · GW

Thanks to Sam and Nick for getting to this. I think it's very cool that you two are taking the time to engage. In light of the high esteem in which I regard both of you and the value of your time, I'll try to close the loop of this interaction by leaving you with one main idea.

I was pointing at something different than what I think was addressed. To distill what I was saying: >> Were FTX to encounter a strong case for non-negligible harms/externalities to community health that could result from the grant making process, what would your response to that evidence be? <<

The response would likely depend on a hard-to-answer question about how FTX conceives of its responsibilities within the community given that it is now the largest funder by far. 

Personally, I was hoping for a response more along the lines of "Oh, we hadn't thought about it that way. Can you tell us more? How do you think we get more information about how this could be important?" 

I was grateful for Nick's thoughtful answer about what's happening over there. I think we all hear what you're saying about chosen priorities, complexity of project, and bandwidth issues. Also the future is hard to predict. I get all that and can feel how authentically you feel proud about how hard the team has been working and the great work that's been done already. I'm sure that's an amazing place to be. 

My question marks are around how you conceive of responsibility and choose to take responsibility moving forward in light of new information about the reality on the ground. Given the resources at your disposal, I'd be inclined to view your answer within the lens of prioritization of options, rather than simply making the best of constraints.

As the largest funder in the space by far, it's a choice to be open to discovering and uncovering risk and harms that they didn't account for previously. It's a choice to devote time and resources to investigate them. It's a choice to think through how context shifts and your relationship to responsibility evolves. It's a choice to not do all those things. 

A few things that seem hard to wave away:

1) 1,600–1,650 (?) rejected applications from the largest and most exciting new funder, with no feedback, could be disruptive to community health

Live example: Established organization(s) got rejected and/or far less than asked for with no feedback. Stakeholders asked the project leaders "What does it mean that you got rejected/less than you asked for from FTX? What does that say about the impact potential of your project, quality of your project, fitness to lead it, etc." This can cause great instability. Did FTX foresee this? Probably not, for understandable reasons. Is this the effect that FTX wants to have? Probably not. Is it FTX's responsibility to address this? Uncertain. 

2) Opaque reasoning for where large amounts of money goes and why could be disruptive to community health

3) (less certain regarding your M&E plans) Little visibility on M&E given to applicants puts them in a position of not only not knowing what good looks like, but also not knowing how they'd know they're doing well. Also potentially disruptive

Regarding the approach moving forward for FTX, I wouldn't be surprised if more reflection among the staff yielded more than 'we're trying hard + it's complex + bandwidth issues, so what do you want us to do?' My hope with this comment is to nudge internal discussions to be more expansive and reflective. Maybe you can let me know whether that happened. I hope I delivered this in a way that didn't feel like an attack; if you feel including me in a discussion would be helpful, I'd love to be a part of it.

And finally, I'm not sure where the 'we couldn't possibly give feedback on 1700 applications' response came from. I mentioned feedback, but there are innumerable ways to construct a feedback apparatus that isn't (what seemed to be assumed) the same level of care and complexity for each application. A quick example – 'stratified feedback' – FTX considers who the applicant is and gives varying levels of feedback depth depending on who they are. This could be important for established EA entities (as I mentioned above), where for various reasons you think leaving them completely in the dark would be actively harmful for a subnetwork of stakeholders. My ideal version of this would also include promising individuals who you don't want to discourage, but whose applications weren't successful for whatever reason.

Thanks for taking the time. I hope this is received well. 


Comment by Tee on Some clarifications on the Future Fund's approach to grantmaking · 2022-05-10T12:38:51.283Z · EA · GW

Also not trying to lay this all at FTX's doorstep. Hoping that raising this will fold into some of the discussions about community effects behind closed doors over there

Comment by Tee on Some clarifications on the Future Fund's approach to grantmaking · 2022-05-10T11:50:02.337Z · EA · GW

Thanks for writing this up, Nick. It seems like a pretty good first step in communicating about what I imagine is a hugely complex project to deploy that much funding in a responsible manner. Something for FTX to consider within the context of community health and the responsibilities that you can choose to acknowledge as a major funding player: 

– How could a grant making process have significant effects on community health? What responsibilities would be virtuous for a major funding player to acknowledge and address? – 

I've picked up on lots of (concerning) widespread psychological fallout from people, especially project leaders, struggling to make sense of decision-making surrounding all this money pouring into EA (primarily from FTX). I wouldn't want to dichotomize this discussion by weighing it against the good that can be done with the increased funding, but there's value in offering constructive thoughts on how things could be done better.

What seems to have happened at FTX is some mixture of deputizing several individuals as funders + an application process (from what I've been hearing) that offers zero feedback. For those involved over there, is this roughly correct?

If indeed there are no other plans to handle the fundamentals of grantmaking beyond the deployment of funds, fundamentals that I believe dramatically affect community health, then unless someone can persuade me otherwise, I'd predict a lot more disoriented and short-circuited (key) EAs, especially because many people in this community orient themselves in a world of the legible and explicit.

In particular, people are having trouble getting a sense of how merit is supposed to work in this space. One of the core things I try to get them to consider, which is perhaps more pronounced now than ever, is that merit is only one of many currencies upon which your social standing and the evaluation of your project rest. This is hard for people to look at.

I hope FTX plans to take more responsibility for community health by following up with investment in legible M&E and application feedback. Echoing what I said a month ago about funders in general:

"Funders could do more to prioritize fostering relationships – greater familiarity with project leaders reduces inefficiencies of all sorts, including performative and preparation overhead, miscommunication, missed opportunities, etc.

In my opinion, this should also apply to unsuccessful projects. A common theme that I’ve seen from funders, partly due to bandwidth issues though not entirely, is aversion to giving constructive feedback to unsuccessful projects that nonetheless endure within the community. Given my firsthand experience with many clients who are fairly averse to interpersonal conflict, it wouldn’t surprise me if aversion to conflict + public relations considerations + legal issues (and other things) precluded funders from giving constructive feedback to failed applications. Funders would likely need to hold the belief that this feedback would meaningfully improve these projects’ prospects, and therefore the community overall, in order to put in the requisite effort to get through these blocks to this type of action. They’d also likely need to feel reassured that the feedback wouldn’t be excessively damaging reputationally (for both themselves and others), destabilize the community, or undermine the integrity of community norms.


EA leaders are often at least partially in the dark regarding expectations from funders. This could be the case for many reasons, but common reasons among leaders included the following:

• Reputational fears – Reticence to reach out due to some (un)justifiable fear of reputational harm

• Value system clash/lack of familiarity – not wanting to waste the time of funders, usually due to lack of familiarity and fears of how they would be received, but also sometimes a principled decision about not wanting to bother important decision-makers

• Not having considered reaching out to funders regarding expectations at a meaningful enough grain of detail

• (Likely not always misplaced) concerns about arbitrariness of the evaluation process

• Preparation overhead – not being ‘ready’ in various ways. In some cases, my outside view of the situation led me to believe that quite a bit of preparation overhead and perfunctory correspondence could be avoided if funders made it clearer that they care less about certain aspects of performative presentation."

Comment by Tee on Underinvestment at the top: what I discovered coaching a dozen EA leaders · 2022-04-11T17:10:09.029Z · EA · GW

Yeah. On the face of it, I could see how this feels like an easy ask, but I intentionally constructed this post in such a way as to have my work stand up and be evaluated on its own, without being associated with (or positioned against) other programs, coaches, or theoretical paradigms for now. I'll have to spend a bit more time thinking through the differences between displaying in terms of 'highlighting', 'promoting', 'recommending' and 'publicly outing'. What to look out for in both positive and negative senses sounds like something that actually could be a great post on its own. Maybe we could co-author that. 

> With that said, I'm curious to hear what high quality you'd like to promote. I'm guessing Paradigm?

This answer might make the above make more sense. My understanding is that Paradigm isn't currently active, but were it still an option, I would restrict the scope of my recommendation to attending their workshops and working closely with specific coaches. For someone looking for a well-structured coaching program and hoping for a widely-recognized credential to earn, it wouldn't be a very good choice. For me personally, my style of learning is boosted tremendously by fruitful individual relationships (great mentors, coaches, etc.). I like to think it worked out quite well in that sense.

Comment by Tee on Replaceability v. 'Contextualized Worthiness' · 2022-04-11T11:31:14.589Z · EA · GW

Replaceability v. 'Contextualized Worthiness'


I'll take help recoining the term 'Contextualized Worthiness'. Curious for thoughts, feelings, criticisms!
More on my coaching trials with a dozen EA leaders here

My guess is that 80K is likely unaware of this, but the concept of 'replaceability', [1] or at least as my clients almost exclusively seem to interpret it,[2] seems to wreak havoc as a mental model on people's self-approximations around whether they should be taking on/staying in a given role. I see lots of evidence that it can even be continually corrosive for those holding a role over a long period of time.

This feels like a big problem. In fact, I’d go as far as to say that I believe it’s a primary culprit for imposter syndrome and decision paralysis in EA.

Anxieties around replaceability are often delivered to me as a completely decontextualized hypothetical exercise, which goes: "Is it possible that there's someone in the world who would be better at this role than me? If so, me taking this role could be critically bad for the world." The weight of this is likely exacerbated in leadership positions.

Putting high credence into decontextualized replaceability arguments seems obviously flawed to me, but more importantly, it seems to have the psychological effect of egregiously warping risk calculations around career exploration, patient accumulation and consolidation of career capital, and particularly willingness to assume responsibility and take action.

You can basically condemn yourself (internally & socially) as a bad person for taking a role.[3]

A thing that I believe calibrates people better would be called something like “contextualized worthiness” considerations.

Here’s a handful: 

  • Distinguishing ‘being quite well-placed contextually’ from ‘being best-placed decontextually’
  • Comparable talent landscape – where do comparable people seem to concentrate? Does it seem like none/some/most/all would be interested in taking on this role?
  • Whether you were selected out of a search process
  • Whether the costs of running an expansive (or infinite) search process are too prohibitive to make a move
  • The importance of in-network trust in having you assume the role (versus a hypothetical ideal stranger), especially if respected individuals in a domain are actively encouraging you
  • Nearly everyone is pretty poor at approximating skill at a distance, or even knowing what it takes to be great at something, unless it can be identified by highly specialized credentials (and _even then_…). Seems even more true with complex leadership & generalist positions.
  • Whether the position/org would have existed counterfactually
  • Institutional knowledge – being in the role/org for a reasonable amount of time + rapport and sync with surrounding individuals is not trivial for the ideal stranger to attain
  • Reversibility (sort of) – can you run a set of experiments in a fixed amount of time that will allow you to relinquish the position/fold the project if in fact you weren’t a good fit
  • Grace – having grace and forgiveness for yourself if you truly thought something through, took action and happened to be wrong
  • [the list can continue]

The above ‘contextualized worthiness’ considerations often have the effect of getting people to track more inputs from reality, rather than relying too heavily upon an abstract thought exercise that yields an absurdly high bar for action and often bottoms out in a nasty set of implications for any misstep.

If 80K doesn't already plan to do this, a suggestion for remedial action would be an additional series of posts nuancing this concept for people. Many people I speak to could use this.[4]

  1. ^

    To their credit, they do seem to have addressed some misconceptions and attempted to nuance it a long time ago:

    I'd wager that corrections would probably stick better in the community if misconception examples and even anonymized case studies were included.

    Nonetheless, I still think the point about how hard it is to correct how things spread mimetically holds. If we keep seeing evidence that very unhelpful versions of this are still floating around, and that they severely affect (potentially) important people, more should be done to address this.

  2. ^

    It could be claimed that this bastardizes the concept, but how concepts are originally designed and how they spread mimetically are very different.

  3. ^

    Worse still, interacting mental models often reinforced by EA can make people feel morally very bad for inaction

  4. ^

    Someone made the point that knowing how messages will spread is quite hard. Were I speaking to someone from 80K, I'd hope for the tenor of my message to be "hey, I get that mass media is hard, and you've largely been doing great, but we've potentially (re)discovered something pretty big as a result of your messaging. Would you seriously consider following up here?"

Comment by Tee on Underinvestment at the top: what I discovered coaching a dozen EA leaders · 2022-04-07T18:04:14.105Z · EA · GW

> I'm struck by the effects reported after just around 4 sessions ~ 7 hours. I can't help but question whether these effects will last for more than a month after the coaching. When did they fill in the survey relative to the coaching? For how long do you predict that the effects will last?

Good questions – I think the set of claims that I'm more comfortable standing behind is that the coaching seems to be quite valuable and important during the period that the coachee is engaged, rather than trying to predict what the consequent effects will be after a pre-packaged period of time for the trial. A follow-up on the stickiness and potency of consequent effects would be interesting though. I'm taking this suggestion pretty seriously.

The set of claims I'm more comfortable standing behind is particularly true if that pre-packaged period of time for the trial is constructed for reasons that aren't all aimed at maximizing effects (e.g. if I had unlimited resources to run this trial in order to cause effects, the duration and frequencies might have been different)

Nearly all filled in the survey after the 4th session. The turnaround time on getting a completed survey ranged between 2 days and 2-3 weeks, depending on the person's responsiveness. A more rigorous trial would probably be more hardcore about when final feedback surveys are issued and completed. I didn't feel that I was in a position to draw hard lines on when these leaders submitted the surveys. 

> What do you think the ideal coaching frequency is for people in this reference class? I.e., every week, every other week, once per month? (Assume that we'll have unlimited supply of high quality coaches).

Short answer is that fortnightly (once every two weeks) seems to be the sweet spot for fairly busy leaders undertaking complex roles. But the frequency we end up going with is unique to the individual and varies according to a constellation of things – a non-exhaustive list includes what their goals are/what the subject matter is, how inclined they are to test out new actions and outlooks and the time horizons on those feedback loops, how inclined they are to take time to pause and reflect (i.e. have they taken the time to think through what they felt was important to think through), their mental space and general availability, personal financial situation, etc. I'm sure their pre-existing models of what they need to work on and how long it will take to bring those things to a good place also play an important role.

> One of the main rooms for improvement (from my perspective) might be if the evaluation had been conducted by a third person and I'll probably see if I can find someone like this if/when I do a trial myself. Do you have any thoughts or reactions to that?

Good point – I flirted with this idea and I'm still quite interested in doing it. My primary hesitation is that I'd be concerned off the bat about whether there's enough epistemic alignment on the 'metrics' that are chosen, and furthermore about what the implications of certain metrics are. (For example, if someone over-engineered the quantitative metrics and anchored too hard on their importance, the results could be pretty damaging to how people look at your practice in a way that doesn't seem justified to me.)

Anticipating epistemic idiosyncrasies in the wide variety of readers out there, I personally chose a variety of metrics that would likely resonate in different ways with different people. I was shooting for producing a collage of valuations that cut across different paradigms. 

Following from that, I think it would actually be cool to have sections of a single unified evaluation designed by different people that measure along different paradigms. 

Comment by Tee on Underinvestment at the top: what I discovered coaching a dozen EA leaders · 2022-04-07T12:22:41.476Z · EA · GW

> it's worth highlighting that there are other promising "audiences" who can benefit massively from coaching even though they're currently less impactful.

Couldn't agree more. There were a set of strategic and tactical reasons why I felt it would be more compelling to make the case with leaders first. It seemed to me like a more straightforward way to cleanly demonstrate value in multiple ways. Others might disagree. Curious about your take. 

As an example, in the case where a broader community-talent-enrichment-focused project needs to receive funding support, you first need funders to be able to approximate and properly appreciate the value of coaching. 

This is quite a bit more difficult if you're trying to project the value of future talent + (likely) getting lower valuations of the coaching because early-career people have a different relationship to money & growth, not to mention the anecdotal anchoring that often obfuscates these decisions for funders, which would likely work against trying to fundraise for a project like this.

You'd basically need tastemakers and purse-holders to have the willingness to evaluate this, ability to evaluate this, come to agree with this reasoning, and/or have high in-network-derived trust of an individual, in order to have a shot at doing that. Also curious if you have a take on that. 

> What do you think about this - in particular the numbers that I brought forth? 

It seems flatly clear to me that investing in the development of individuals at earlier inflection points would be extraordinarily valuable. Add-em-up methods of approximating value are not my strong suit unfortunately (nor my preferred method of approximating things in certain contexts), so I probably don't have much to say on the specifics of your BOTEC there.

> How much unmet (known as well as unknown) demand do you think there currently are within the community? I.e., given the eagerness of the participants (as well as my personal experience) I'm inclined to think that virtually all EA leaders should have a coach.

I mention here that, strictly speaking, I don't know for sure. I'd say that certainly there's far more demand for professionalized support than we've clocked, and there are far more developmental needs that individuals have 'at the top' than people realize. Being nitpicky, I'm not so sure that ambient demand or perceived need for coaching is a perfect proxy for whom, how much, and how useful it would be, though it does tend to be that those who recognize that they need help are much more easily helped.

Comment by Tee on Underinvestment at the top: what I discovered coaching a dozen EA leaders · 2022-04-07T11:55:44.862Z · EA · GW

Cool to see your path to this Sebastian. Some great tips here. What's both tricky and exhilarating about navigating this space is how free-form it is. I have lots of respect for people who are this damn resourceful. 

I'd call your "alternative strategy" instead a "potential pathway" to gaining skill as a coach. What I outlined was more like a scaffolding set of considerations for thinking about how to gain skill and become a coach, within which innumerable pathways could be pursued. But I did like that you provided a personal example. It's probably a lot more accessible for others to model off of than the prompts I gave. 

Writing out your journey in this way does make me want to write out something of my own that's similar. Like the Coaching Insights section of this post but for how one could work towards becoming a professional coach. Could be interesting for people whose pathway is currently under consideration (or for those looking to fold in different approaches) 

Comment by Tee on Underinvestment at the top: what I discovered coaching a dozen EA leaders · 2022-04-07T09:58:46.468Z · EA · GW

> In other words, if this is as good as it seems, one should prioritize providing this kind of coaching (or something similarly valuable) to all leaders within EA.

I wouldn't disagree with this! Another way to say it: even if it's half as good as it seems (say you slashed all of the metrics by which I calculated value here by 50%: quantitative monetary evaluations, productivity, # of people who had a notably good experience, # and quality of testimonials, # of people who continued on in a paid arrangement after the trial), it's still worth devoting far more attention and resources to this from within the community.

Comment by Tee on Underinvestment at the top: what I discovered coaching a dozen EA leaders · 2022-04-07T09:51:29.792Z · EA · GW

Getting back to this comment might take a bit longer than usual for me to dig up exemplars of each category and even decide whether I think it's a good idea to promote coaching types of a certain category (i.e. I'd rather be quite selective of what I choose to promote, rather than highlighting less good things in an attempt to be comprehensive.) 

Also this from above!  "I wouldn’t say I’ve exhaustively canvassed what’s out there, so if anyone reading this has any suggestions for high-quality credentialing programs, particularly ones that encourage the integration of multiple methods and paradigms, I’d be curious to hear about them!" 

Comment by Tee on Underinvestment at the top: what I discovered coaching a dozen EA leaders · 2022-04-05T11:33:41.992Z · EA · GW

This was arrestingly sweet of you, Peter. Thank you. It's one of the best things that's come out of writing this post. I hope these types of comments get normalized in the community more broadly! 


Comment by Tee on Underinvestment at the top: what I discovered coaching a dozen EA leaders · 2022-04-02T12:59:16.957Z · EA · GW

Heartened to see that you enjoyed it! And great prompts/questions. Lovely to hear that this post could go some way in nudging you toward coaching. I have lots of thoughts on how to find a coach that might turn into another post, but I mention some, about getting the vibe right and trialing with more than one coach, here in this post. Hope it helps.

There’s a lot to say about how coaching can improve the metabolization of stressors. In many cases, I’m pairing remedial efforts (working through emotional fallout and imprints) with methods that often have the effect of building more flexibility into the client’s ways of making sense and interpreting things. We’re also proactively aiming for a more elegant way of being and acting that causes less emotional shear (i.e. psychological toll) in local contexts. This can be approached and achieved in many ways, as you might imagine. IMO it’s always a different set of moves, methods and timing for each person.

On recommending a coaching program – I’d almost never recommend a specific program offhand. My probably unsatisfying (though very on-brand) answer is that the way you pursue coaching skill and credentialing is mediated by what kind of coach you want to be, how you think the world works, and what you believe the path looks like to get there. (e.g. “I want to make a career change. How do I make a career change? Learn a new skill well enough to earn a living. How do I do that? Get a degree in a different field of study. That way, I’ll know what to do, and people will take me seriously if I go through a course/get a degree.” This isn’t necessarily incorrect, but it’s a line of reasoning that will result in a particular sequence of specific actions.)

The subject of how to develop skill and how to think about credentialing in this ‘industry’ is also super interesting to get into. My rough approximation of the credentialing landscape is the following:

  • Some pockets of high-quality programs that are narrowly specialized. These often require considerable time and monetary investment.
  • Lots of low-grade, mass-scale credentialing bodies that basically take aspiring coaches from 0 to 0.1. (Usually the first thing that most people reach for in order to find the permission to make a career change and qualify to get listed on coaching registries. I’m not knocking it because I’m sure some really great coaches got started that way. But the typical use-case is good to know)
  • Many ideologically intense woo-woo or pseudo-scientific-claims-about-maximizing-human-potentiality programs (where some content/model gems do exist)
  • A subset of (usually individual personality-driven) coaching programs with little substance that try to intensely upsell people. These irritate me to no end, and I find many of them pretty predatory or manipulative in a gross way.

I wouldn’t say I’ve exhaustively canvassed what’s out there, so if anyone reading this has any suggestions for high-quality credentialing programs, particularly ones that encourage the integration of multiple methods and paradigms, I’d be curious to hear about them!

I’d characterize my own situation as a sustained (over years) combination of structured coaching training, informal coaching training, being mentored by senior coaches and therapists, and building skill through my own practice. I have no credentials issued by an official body. My introduction to coaching was through Paradigm Academy, where I received the more structured coaching training. After that, I preferred to pursue coaches and subject-matter experts I felt could upgrade my ongoing practice in some way. I’ve done most of that on my own dime, but also in professional contexts. In my time at Counterfactual Ventures, we designed our founder selection and development program alongside a cognitive-developmental-theory-oriented consultancy co-founded by Bill Torbert. Developmental paradigms derived from Piaget’s work and popularized by Kegan, Kohlberg, Fischer, Torbert, etc. have influenced me a lot.

Because of my own models of what kind of coach I want to be, how the world works, and how to get there, unless there’s something really amazing that’s not on my radar, I’ll probably opt to undertake a handful of highly specialized courses/trainings (and possibly get credentialed) that will hopefully result in a varied and potent repertoire of coaching methods.

Apologies if that was a lot to take in! Happy to chat with you about it more if you'd like. Feel free to reach out if you'd like to continue the discussion elsewhere. 

Comment by Tee on Coordination within EA: community & ecosystems · 2022-03-14T11:31:31.268Z · EA · GW

Very grateful for this post. Personalizing to contextualize: it's amazing to see Vaidehi, Christina, Jack (and others, apologies for not remembering or knowing about everyone) consistently advocating for things that I gave up pursuing after hitting brick walls in the community for years, starting in 2015. It takes courage (and almost certainly tact) that I didn't have at the time when I was running multiple projects in this space. Maybe someday I'll say more about that if people are interested.

On the 'coaching for co-founders of organizations', Training For Good is exploring this area with their coaching trials (which I'm a part of). More generally, this is something I feel well-positioned to tackle, and I'm currently looking for support to do so. I recently finished trialing my coaching with more than a dozen EA leaders over the last few months. I feel the results are quite good, but I'm debating whether to post them publicly for privacy reasons. People who may be interested in viewing the results can DM me! Keen to chat with anyone who wants to talk about supporting EA leadership.

Comment by Tee on Hello from the new Content Specialist at CEA · 2022-03-10T10:24:21.360Z · EA · GW

Welcome Lizka! Hope the role is enjoyable for you for quite some time

Comment by Tee on Coaches for exploring careers? · 2021-12-27T15:40:08.081Z · EA · GW

Thanks Michelle! Hey Alex, happy to chat with you more about how I can help. Much of how I've helped others (including EAs) in the past regarding career transitions has revolved around the deliberate discovery, examination, articulation and refinement of your perceptions surrounding the many dimensions of the potential transition. (I say 'potential' because changing jobs isn't always the desired result). 

I'd contrast my approach with the content- or landscape-specific information/model sharing that other career coaches might offer. For example, rather than being told what's out there, my clients have chosen to create exploratory plans for getting the information they'd need. 

If that's interesting, you can choose a time to chat with me here! (times for January 2022) 

Hope you find a skilled person to help with this in any case ~

Comment by Tee on EA could benefit from a general-purpose nonprofit entity that offers donor-advised funds and fiscal sponsorship · 2020-06-29T09:40:18.993Z · EA · GW

Hey Brendon, in 2020 Rethink Charity pivoted to providing fiscal sponsorship to select value-aligned projects in EA and adjacent communities. As many might know, we helped kickstart Rethink Priorities with a more in-house FS arrangement. We've just completed our first external FS arrangement with Dao Foods. For you or anyone who has seen this post, please do let me know if you know of any projects that could use this service!

Comment by Tee on Introducing Animal Advocacy Careers · 2020-02-11T11:54:20.008Z · EA · GW

Lauren, I'd like to echo Niel's sentiment here. Concerted efforts at cultivating EA-aligned talent (via training and launching projects) have always been something Rethink Charity has advocated for. Great to see you taking real strides in addressing this. Please reach out if RC and I can be of any help.

Comment by Tee on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-12-04T23:43:10.763Z · EA · GW

FYI Rethink Charity and associated projects were also not invited, including Rethink Priorities and LEAN. We were also invited to forums in previous years

Comment by Tee on Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program · 2019-08-26T17:18:55.284Z · EA · GW
It’s great to see more efforts to evaluate and promote top giving opportunities. Rethink Grants seems promising and I’m interested in seeing where it goes.

Hey Eric, we appreciate the kind words and thank you for taking the time to bring some of these things to our attention.

How do donors know if they are fully funded?

Great question - were RG to continue on, the idea would be for us to be quite involved in the fundraising process for recommended projects. If Donational were interested in continuing with the CAP, we would likely engage in a joint fundraising effort, taking special care to keep key funders and the wider public in the loop regarding fundraising milestones and progress. This could even take the form of a public fundraising campaign in certain cases.

Have you seen the write-ups of ImpactMatters?

We have! In fact, Luisa Rodriguez, one of the Rethink Priorities analysts on this report, is a former ImpactMatters research analyst. ImpactMatters was also among the organizations that we drew inspiration from for Our Process.

To address this, one idea would be to put a lot of the details from the main body into an appendix.

This could certainly be helpful. I think a lot more could be done to better highlight key reasoning within future potential evaluations, including detailed notes on criteria that were important in the VOI, for example.

If I’m understanding correctly that Rethink Grants is also doing things to try to make the underlying organizations better, then it might be great to have more details on that.

That’s correct, and now that you mention it, future reports could expand more on all the possible intervention points that RG would consider for improving the overall quality of projects. I cover quite a bit of that in this reply in a different thread. As an example from this report, in the Potential Issues section, we mention pretty large plan changes resulting from presenting the founder with a BOTEC that we came up with based on a handful of parameters we considered crucial. Having this all mapped out in one place would certainly be better.

Comment by Tee on Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program · 2019-08-07T17:02:53.067Z · EA · GW

Hey Jonas, apologies about the delay in replying here. Much will depend on whether we move forward with the program based on our own internal assessment of its potential and feedback we received from the community, especially those with an interest in grant making and community building via funding projects.

We loosely outline our remit and purpose in the introduction section, and our current plan is to help potentially promising projects that would clearly benefit from the “early-stage planning, facilitating networking opportunities, and other as-needed efforts traditionally subsumed under project incubation” that we want to provide. Projects can often use assistance of this sort, and, similar to some VC models, we hope that a thorough and transparent evaluation of a program will be something projects can show others to gain funding traction. A perennial issue for existing grant makers is a lack of projects or research teams that are prepared to execute, for one reason or another, and RG hopes to put time and resources into making a project ready and fundable. This role is meant to complement the existing landscape.

As we mention in the OP, we do not currently fund projects ourselves - our goal at this point is to help improve and recommend worthy projects to existing funders. Given that many of the methods in this report are widely applicable, RG could also investigate and evaluate projects on behalf of existing grant makers or individual funders in cases where our interests align. For instance, were a potential funder interested in looking into a “shovel-ready” or existing project, we could be contracted to assess it more thoroughly.

As for sourcing applications, we mention in the Our Process section that Rethink Grants will begin with an in-network approach to sourcing projects, relying on trusted referrals to help us reach out to promising individuals and organizations. If RG continues to conduct evaluations, we will consider projects on a rolling basis. A project that seems potentially cost-effective, is run by a high-quality team, and has room for more funding moves forward through our evaluation process. We decided to look into Donational because it appeared to be a high-potential project that satisfied these requirements.

Comment by Tee on Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program · 2019-07-29T19:02:33.599Z · EA · GW

Hey Oli, thanks for taking the time to come up with these points, and going out of your way to say, “...I think evaluations like this are quite important and a core part of what I think of as EA’s value proposition...and would like to see more people trying similar things in the future.” This is exactly the type of attitude toward agency and attempting to do good that I’d like to have encouraged more in EA.

Point-by-point, I think Derek covered a lot. I also mention in a comment how I was thinking about this evaluation in terms of a contribution to grant evaluation and the EA project space more broadly.

We might have done better to distill cruxes within our qualitative reasoning, though I do think a fair amount of this is presented in various sections. Agreed that swapping advanced mathematical models for BOTECs is often advisable, but at certain points in the future, I would imagine that evaluators could make good use of methods like these.

Comment by Tee on Rethink Grants: an evaluation of Donational’s Corporate Ambassador Program · 2019-07-26T19:46:51.802Z · EA · GW

Thank you to those who had a look at this report. Our team put a lot into this, as you might imagine. I’ve been anticipating some commentary on this evaluation along the lines of “this is far too complex/quantitative for a $40,000 grant recommendation.” We’d agree. We gesture at this in “The future of Rethink Grants” section at the end of the Executive Summary.

This could have perhaps been communicated better, but my hope is that readers will come to interpret this report, and the methods employed therein, as additional tools to consider when evaluating grants. There may be occasions where evaluators might find it useful to boost their repertoire by using these methods (or something similar) to potentially make better decisions. Project leads may also get some mileage out of how much we’ve put on display here.

There are certain instances where key reasoning (see the Team Strength section), quick deferral to experts, or even a simple back-of-the-envelope calculation (BOTEC) will suffice. But, as with charity evaluation, we might agree there are circumstances where intuition and BOTECs are not enough. In an example from this report that Derek mentions, the VOI calculation and CEE led us to a more nuanced conclusion: in our opinion, funding a decent-sized pilot was very much worth doing, rather than fully funding the program from the outset or passing over the opportunity. Our conclusions from just a BOTEC might have been different.
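To make that contrast concrete, here is a minimal sketch of the kind of two-stage value-of-information reasoning described above. Every number is a hypothetical placeholder for illustration only, not a figure from the actual Donational evaluation:

```python
# Hypothetical two-stage funding BOTEC illustrating value of information.
# All numbers are made up for illustration.

def expected_value(p_success, value_if_success, cost):
    """Expected net value of committing `cost` for a payoff of
    `value_if_success` that materializes with probability `p_success`."""
    return p_success * value_if_success - cost

# Option A: fully fund the program now, under current uncertainty.
p_success = 0.3            # prior probability the program works at scale
full_value = 400_000       # counterfactual value if it works
full_cost = 100_000        # cost of fully funding
ev_full_now = expected_value(p_success, full_value, full_cost)

# Option B: fund a pilot first, and fully fund only after a positive signal.
pilot_cost = 40_000
p_pilot_positive = 0.4            # chance the pilot looks good
p_success_given_positive = 0.7    # updated success probability after a good pilot
ev_pilot = (-pilot_cost
            + p_pilot_positive
            * expected_value(p_success_given_positive, full_value, full_cost))

# Option C: pass on the opportunity entirely (expected value of zero).
print(f"EV of fully funding now: {ev_full_now:,.0f}")
print(f"EV of piloting first:    {ev_pilot:,.0f}")
```

With these placeholder numbers, piloting first beats both fully funding now (the large grant is only committed in worlds where the updated odds are good) and passing; a one-stage BOTEC that never models the update would miss that middle option.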

I think we have good reason to believe that the level of rigor displayed in this evaluation is warranted at times. And when those situations arise, we hope others will reach for this report if they’ve found it useful.

Comment by Tee on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-03T20:14:56.197Z · EA · GW

Same for Rethink. Definitely appreciate this post; we tried to make the application process swift yet as informative as possible on both ends.

Comment by Tee on List of possible EA meta-charities and projects · 2019-01-11T16:27:22.480Z · EA · GW

Hey Jonas, RC might be interested in touching base with you about this soon!

Comment by Tee on EA Meta Fund AMA: 20th Dec 2018 · 2018-12-20T18:33:23.908Z · EA · GW
I do also think that it's very valuable for some pots of funding to not be very public as there are some bad incentives and restrictions caused by public work.

Yep, I think that's right. We (entities within the community) can improve on historical examples of simply not declaring anything on this front, or the reasoning behind it.

E.g., I'm (currently) quite happy currently that EA Grants doesn't have to justify each grant publicly.

+1 though our post-decision feedback could be better in some ways.

Comment by Tee on EA Meta Fund AMA: 20th Dec 2018 · 2018-12-20T16:26:29.522Z · EA · GW

Will things like Donation Data trends play into the committee's decision-making?

(e.g. CEA received ~4x the donations of any other charity due to an individual donor, yet they received a sizable grant from this group. I realize that this fact doesn't automatically disqualify them as a valuable donation target.)

Manifold reasons for full disclosure - I contract for CEA, run a meta org that is a candidate for funding from the fund, have received funding from some individual members of the committee, am biased toward resourcing valuable smaller projects, etc.

Comment by Tee on EA Meta Fund AMA: 20th Dec 2018 · 2018-12-19T23:18:39.803Z · EA · GW

Hey Alex, as I wrote to Jamie with the AWF AMA, I don't have a directed question but I deeply appreciate this level of transparency and hope it exerts pressure to raise the water level on grant making transparency more broadly

Comment by Tee on EA Meta Fund AMA: 20th Dec 2018 · 2018-12-19T21:52:35.224Z · EA · GW

I would imagine this should play into it: "£13.3m boost for Future of Humanity Institute"

Comment by Tee on Animal Welfare Fund AMA · 2018-12-19T16:53:07.422Z · EA · GW

Hey Jamie, I don't have a directed question on AWF per se, but I deeply appreciate this level of transparency and hope it exerts pressure to raise the water level on grant making transparency more broadly.

Comment by Tee on Why Groups Should Consider Direct Work · 2018-05-31T03:38:27.798Z · EA · GW

An example is that EA Yale will likely be helping Rethink with reporting on the EA Survey. Also see a lot of what EA NTNU has been up to. Richenda will have to forgive me because my memory is fuzzy on this, but I remember hearing of a university group that pressured a college to make its annual donations to effective charities. All of these seem high-value to me and are not mutually exclusive with pledges, career changes, etc.

Comment by Tee on EA Survey: Sexual Harassment Questions - Feedback Requested · 2018-03-15T15:39:03.036Z · EA · GW

In response to the comment that was deleted below, we do not intend to ignore this issue.

Comment by Tee on EA Survey: Sexual Harassment Questions - Feedback Requested · 2018-03-14T16:38:41.439Z · EA · GW

I interested Tee Barnett and Peter Hurford in adding sexual violence questions to the survey. Therefore sexual violence definitions need to be created.

Thanks for your dedication to this issue. I'm compelled to point out that briefly speaking about a particular issue in an informal manner should not be seen as an endorsement on behalf of myself or Rethink Charity.

Comment by Tee on Announcing Rethink Priorities · 2018-03-12T19:54:59.302Z · EA · GW

Ben West asked this question in the EA Facebook group late last year, and I believe EA Funds has updated since then:

It's not clear what the optimal amount of funding for resurrecting LW should be, but according to the EA survey (run by Rethink), LW had been a top source for introducing people to EA until recently:

Qualifying this by clarifying that I'm the ED of Development for Rethink Charity – I would say the lineup of projects offered by Rethink (SHIC, LEAN, RC Forward, Rethink Priorities, and the EA Survey) should be among the most competitive funding options for community building, especially considering our reach and impact on a comparatively low budget:

Comment by Tee on Announcing Rethink Priorities · 2018-03-07T17:38:24.847Z · EA · GW

Thanks for asking Ervin. Were we to scale this project according to our estimates, we would need additional funding. There are also some small gaps in Rethink Charity operations that we'd like to fill. Talks are ongoing with CEA about additional funding either through their Grants or Funds programs

Comment by Tee on 2017 LEAN Impact Assessment: Qualitative Findings · 2018-01-04T20:12:24.347Z · EA · GW

Absolutely - but re: Richenda's point about deliberations at a higher level, the Hub is one of many resources we provide, and we want to make sure every donation we receive is as impactful as possible.

Even an earmarked donation for this purpose is not a straightforward proposition. Take the decision to potentially integrate with the CEA platform as a hypothetical. If we were to spend $300 - $1k tweaking the Hub, and then had to double back (likely to change the coding language) once we decided that linking up with the CEA platform is most effective for the community, we may have wasted considerable resources.

Comment by Tee on 2017 LEAN Impact Assessment: Quantitative Findings · 2017-12-09T19:25:10.644Z · EA · GW

Richenda will have more insight on this than me, but my understanding is that when the qualitative report comes out, we will see that some of those who do have a website find it incredibly useful and it would absolutely be a disservice to pull the plug on that.

We're erring on the side of a 'targeted revision' of what we provide so that our services only go to those who are most effectively using them

Comment by Tee on EA Survey 2017 Series: How do People Get Into EA? · 2017-11-17T17:13:09.573Z · EA · GW

I agree, this is something we acknowledge multiple times in the post, and many times throughout the series. The level of rigor it would take to bypass this issue is difficult to reach.

This is also why the section where we see some overlap with Julia's survey is helpful.

Comment by Tee on The Hidden Cost of Shifting Away from Poverty · 2017-10-10T17:26:09.520Z · EA · GW

Additional data on EA shifts in cause area preference:

Comment by Tee on EA Survey 2017 Series: Cause Area Preferences · 2017-09-05T12:54:51.164Z · EA · GW

I've also updated the relevant passage to reflect the Bay Area as an outlier in terms of support for AI, not AI as an outlier cause area.

Comment by Tee on EA Survey 2017 Series: Cause Area Preferences · 2017-09-05T12:40:16.366Z · EA · GW

Hey Michelle, I authored that particular part, and I think what you've said is a fair point. As you said, the point was to identify the Bay as an outlier in terms of the amount of support for AI, not to declare AI an outlier as a cause area.

The article in general seems to put quite a bit of emphasis on the fact that poverty came out as the most favoured cause.

I don't know that this is necessarily true beyond reporting what is actually there. When poverty is favored by more than double the number of people who favor the next most popular cause area (graph #1), favored by more people than a handful of other causes combined, and disliked the least, those facts need to be put into perspective.

If anything, I'd say we put a fair amount of emphasis on how EAs are coming around on AI, and how resistance toward putting resources toward AI has dropped significantly.

We could speculate about how future-oriented certain cause areas may be, and how to aggregate or disaggregate them in future surveys. We've made a note to consider that for 2018.

Comment by Tee on EA Survey 2017 Series: Cause Area Preferences · 2017-09-05T12:19:17.781Z · EA · GW

09/05/17 Update: Graph 1 (top priority) has been updated again


Comment by Tee on EA Survey 2017 Series: Cause Area Preferences · 2017-09-02T20:23:10.618Z · EA · GW

09/02/17 Post Update: The previously truncated graphs "This cause is the top priority" and "This cause is the top or near top priority" have been adjusted in order to better present the data

Comment by Tee on EA Survey 2017 Series: Cause Area Preferences · 2017-09-02T20:20:41.995Z · EA · GW

09/02/17 Update: We've updated the truncated graphs