Posts

Why I'm concerned about Giving Green 2021-01-20T22:59:20.608Z
Forecasts about EA organisations which are currently on Metaculus. 2020-12-29T17:42:18.572Z
Incentive Problems With Current Forecasting Competitions. 2020-11-10T21:40:46.317Z
What questions would you like to see forecasts on from the Metaculus community? 2020-07-26T14:40:17.356Z
Climate change donation recommendations 2020-07-16T21:17:57.720Z
[Linkpost] - Mitigation versus Supression for COVID-19 2020-03-16T21:01:28.273Z
If you (mostly) believe in worms, what should you think about WASH? 2020-02-18T16:47:12.319Z
alexrjl's Shortform 2019-11-02T22:43:28.641Z
What book(s) would you want a gifted teenager to come across? 2019-08-05T13:39:09.324Z
Can the EA community copy Teach for America? (Looking for Task Y) 2019-02-21T13:38:33.921Z
Amazon Smile 2018-11-18T20:16:27.180Z

Comments

Comment by alexrjl on Some global catastrophic risk estimates · 2021-05-04T18:31:35.175Z · EA · GW

The "metaculus" forecast weights users' forecasts by their track record and corrects for calibration, I don't think the details for how are public. Yes you can only see the community one on open questions.

I'd recommend against drawing the conclusion you did from the second paragraph (or at least, against putting too much weight on it). Community predictions on different questions about the same topic on Metaculus can be fairly inconsistent, due to different users predicting on each.

Comment by alexrjl on On the longtermist case for working on farmed animals [Uncertainties & research ideas] · 2021-04-11T13:07:22.276Z · EA · GW

I already believed it and had actually been talking to someone about it recently, so I was surprised and pleased to come across the post, but I couldn't find a phrasing for this which didn't just sound like I was saying "oh yeah, thanks for writing up my idea". Sorry for the confusion!

Comment by alexrjl on On the longtermist case for working on farmed animals [Uncertainties & research ideas] · 2021-04-11T11:37:46.813Z · EA · GW

Thanks for writing this. Even accounting for suspicious convergence (which you were right to flag), it just seems really plausible that improving animal welfare now could turn out to be important from a longtermist perspective, and I'd be really excited to hear about more research happening in this field.

Comment by alexrjl on Research suggests BLM protests increase murder overall · 2021-04-09T22:03:05.488Z · EA · GW

released his preliminary findings on the Social Science Research network as a preprint, meaning the study has yet to receive a formal peer review.


It’s worth noting that Campbell didn’t subject the homicide findings to the same battery of statistical tests as he did the police killings since they were not the main focus of his research.


I thought there had also been some cautionary tales learned in the last year about widely publicising and discussing headline conclusions from preprint data without appropriate caveats. Apparently not.

Comment by alexrjl on Actions to take for a career change towards EA (advice needed) · 2021-04-09T13:36:51.620Z · EA · GW

There's the EA jobs Facebook group, and I'll PM you a Discord link.

It's worth noting that 80k has a lot of useful advice on how to think about career impact, and also the option to apply for advising, as well as the jobs board. There's also Probably Good (search for their forum post) and Animal Advocacy Careers.

Comment by alexrjl on EA Debate Championship & Lecture Series · 2021-04-07T09:36:35.051Z · EA · GW

I want to echo this. I think my own experience of debating has been useful to me in terms of my ability to intelligence-signal in person, but was pretty bad overall for my epistemics. One interesting thing about BP (the format I competed in most frequently at the highest level) was the importance, in the 4th speaker role, of identifying the cruxes of the debate (usually referred to as "clash"), which I think is really useful. Concluding that the side you've been told to favour has then "won" all of the cruxes is... less so.

Comment by alexrjl on Actions to take for a career change towards EA (advice needed) · 2021-04-06T18:07:04.933Z · EA · GW

All this advice seems really good, and I want to particularly echo this bit:

It might be worth reframing how you think about this as "how can I find a job that has the biggest impact", rather than "how can I get an EA job".

Comment by alexrjl on "Hinge of History" Refuted (April Fools' Day) · 2021-04-01T15:05:27.791Z · EA · GW

This post is already having a huge impact on some of the most influential philosophers alive today! Thanks so much for writing it.

Comment by alexrjl on Forget replaceability? (for ~community projects) · 2021-04-01T12:03:13.352Z · EA · GW

Evidence Action are another great example of "stop if you are in the downside case" done really well.

Comment by alexrjl on Any EAs familiar with Partha Dasgupta's work? · 2021-03-31T14:22:36.618Z · EA · GW

Interesting, thanks!

Comment by alexrjl on Any EAs familiar with Partha Dasgupta's work? · 2021-03-31T14:05:21.789Z · EA · GW

I was under the impression CSER was pretty "core EA"! Certainly I'd expect most highly engaged EAs to have heard of them, and there aren't that many people working on x-risk anywhere.

Comment by alexrjl on How much does performance differ between people? · 2021-03-29T21:02:01.865Z · EA · GW

I've been much less successful than LivB but would endorse it, though I'd note that there are substantially better objective metrics than cash prizes for many kinds of online play, and I'd have a harder time arguing that those were less reliable than subjective judgements of other good players. It somewhat depends on the sample though: at the highest stakes, the combination of a very small player pool and fairly small samples makes this quite believable.

Comment by alexrjl on Is laziness immoral? · 2021-03-28T12:03:27.038Z · EA · GW

Hi Jacob,

I think you might really enjoy and benefit from reading this blog by Julia Wise. While it's great that you have such a strong instinct to help people, we're in this game for the long haul, and you won't have a big impact by feeling terrible about yourself and feeling guilty if you don't make sacrifices.

In particular, it's very likely that focusing on doing well in college and then university is going to make a much bigger difference to your lifetime impact than whether you can get a part-time job to donate right now.

Comment by alexrjl on [Podcast] Thomas Moynihan on the History of Existential Risk · 2021-03-23T00:00:23.539Z · EA · GW

I've discovered Hear This Idea relatively recently but have been extremely impressed so far. Looking forward to this episode!

Comment by alexrjl on Feedback from where? · 2021-03-12T11:24:21.381Z · EA · GW

Because the orgs in question have literally said so, because I think the people working there genuinely care about their impact and are competent enough to have heard of Goodhart's law, and because in several cases there have been major strategy changes which cannot be explained by a model of "everyone working there has a massive blindspot and is focused on easy to meet targets". As one concrete example, 80k's focus has switched to be very explicitly longtermist, which it was not originally. They've also published several articles about areas of their thinking which were wrong or could have been improved, which again I would not expect for an organisation merely focused on gaming its own metrics.

Comment by alexrjl on Feedback from where? · 2021-03-11T22:50:30.356Z · EA · GW

Yeah to be clear I meant that the decision making processes are probably informed by these things even if the metrics presented to donors are not, and from the looks of Ben's comment above this is indeed the case.

Comment by alexrjl on Feedback from where? · 2021-03-11T21:43:44.325Z · EA · GW

I think there's likely a difference here between:

 What easily countable short term goals and metrics are communicated to supporters? (bednet distributions, advising calls etc.)

and

What things do we actually care about and track internally on longer timescales, to feed into things like annual reviews and forward planning?


I'd be extremely surprised if 80k didn't care about the impact of their advisees, or AMF didn't care about reducing malaria.

Comment by alexrjl on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-09T15:26:00.812Z · EA · GW

I completely agree with all of this, and am glad you laid it out so clearly.

Comment by alexrjl on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-08T21:49:01.562Z · EA · GW

Despite disagreeing with most of it, including but not limited to the things highlighted in this post, I think that Torres's post is fairly characterised as thought-provoking. I'm glad Joshua included it in the syllabus, also glad he caveated its inclusion, and think this response by Hayden is useful.

I haven't interacted with Phil much at all, so this is a comment purely on the essay, and not a defense of other claims he's made or how he's interacted with you. 

Comment by alexrjl on What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings? · 2021-03-08T10:57:30.011Z · EA · GW

(for what it's worth, I don't actually think utilitarianism leads to the conclusions in the post, but I think other commenters have discussed this, and I think the general point in my first comment is more important)

Comment by alexrjl on What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings? · 2021-03-07T12:26:08.046Z · EA · GW

If you take moral uncertainty even slightly seriously, you should probably avoid doing things which would be horrifically evil according to a whole load of worldviews you don't subscribe to, even if according to your preferred worldview it would be fine.

Comment by alexrjl on A full syllabus on longtermism · 2021-03-06T19:30:51.313Z · EA · GW

This is fantastic, thank you so much for putting it together.

Comment by alexrjl on Early Alpha Version of the Probably Good Website · 2021-03-04T07:47:05.371Z · EA · GW

Thanks so much, I'll check these links out!

(I had abbreviated "Probably Good" to PG)

Comment by alexrjl on Progress Open Thread: March 2021 · 2021-03-03T14:26:27.734Z · EA · GW

Going to do my best to lean into Aaron's "this is a humility free zone" message from the first progress thread and hopefully get the ball rolling.

  • I won $1350 and a hoody in various forecasting competitions which finished in February ($850 + hoody of which was performance based, the other $500 was participation based).
  • For a few reasons, I've gradually started to get the impression that people respect and are interested in what I have to say about things. I'm not sure how related this is to the above, or how sensible it is on their part, but it feels really good!
  • I was invited to join a weekly call with a few people I vaguely knew on Twitter, and it's been a highlight of most weeks! They're all really nice and very interesting to talk to.
Comment by alexrjl on [Podcast] Marcus Daniell on High Impact Athletes, Communicating EA, and the Purpose of Sport · 2021-03-03T14:02:01.699Z · EA · GW

Excited for this! It's been awesome to watch HIA's success so far and they still have incredible potential.

Comment by alexrjl on Early Alpha Version of the Probably Good Website · 2021-03-03T12:18:54.876Z · EA · GW

This seems like really excellent feedback for them!

I have a query about your final point, where I think I agree with the PG framing. In general, I think when people talk about the long-term future they are including consideration of timescales much longer than the few decades covered by most of the examples you mentioned: of the order of hundreds of years, or even longer. This is one reason reducing existential risk is so popular (because while affecting the shape of the future seems both extremely uncertain and fairly dependent on world view, making sure that there is a future at all seems good from many perspectives, though not all). Am I correct in my interpretation that you were talking about "long term" mostly in the <100 year sense?

Comment by alexrjl on Early Alpha Version of the Probably Good Website · 2021-03-03T12:02:45.901Z · EA · GW

This seems like a great initiative. Congratulations for setting it up! Like a couple of other commenters, I like the fact you linked to 80k and AAC on the career profile page.

Comment by alexrjl on Why I'm concerned about Giving Green · 2021-03-03T11:43:58.364Z · EA · GW

When did they say these were much less cost-effective?


I asked them! The website does now make it clear, I think, that they think policy options are best, though some of that is a recent change, and the language is still less effective than I'd like.

What do you mean by it being justified? It looks like you mean 'does well on a comparison of immediate impact', but, supposing these things are likely to be interpreted as recommendations about what is most cost-effective, this approach sounds close to outright dishonesty, which seems like it would still not be justified. (I'm not sure to what extent they are presenting them that way.)

You're right that I meant "does well on a comparison of immediate impact" here, but your second point is, I think, really important. Having said that, while it's worth thinking about, I don't think the current presentation of the difference between offsetting and policy intervention could be fairly described as "dishonest". I think it is clear that GG thinks policy is more effective; it's just that the size of the difference is not emphasised.

I agree that, even in worlds where it produces the most immediate good from a donation perspective, presenting two options as equal when you think they are not is dishonest, and not justifiable. I don't think Giving Green has ever intended to do that, though.

In terms of CATF vs Sunshine, I had initially suspected that it might be the case that they thought CATF was much better but that Sunshine was worth including to capture a section of the donations market which broadly likes progressive stuff. I agree that this would not be acceptable without a caveat that they thought CATF was best. Having spoken to them, I don't think this is the case (and Dan can confirm if he's still following the thread); I think they genuinely think that there's no difference in expectation between CATF and TSM. I strongly disagree with this assessment, but do believe it to be genuine.

Comment by alexrjl on alexrjl's Shortform · 2021-03-03T11:22:49.899Z · EA · GW

This is why you should have done physics ;)

Comment by alexrjl on alexrjl's Shortform · 2021-03-03T07:45:21.342Z · EA · GW

Volume of a sphere with radius increasing at constant rate has a quadratic rate of change.
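
For anyone who wants to check, a one-line derivation, writing $k$ for the constant rate of increase of the radius and $r_0$ for the initial radius:

$$V = \tfrac{4}{3}\pi r^3 \;\Rightarrow\; \frac{dV}{dt} = 4\pi r^2\,\frac{dr}{dt} = 4\pi k\,(r_0 + kt)^2,$$

which is quadratic in $t$ (and in $r$).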

Comment by alexrjl on alexrjl's Shortform · 2021-03-02T08:35:17.944Z · EA · GW

Lots of GiveWell's modelling assumes that health burdens of diseases or deficiencies are roughly linear in a harm vs. severity sense. This is a defensible default assumption, but it seems important enough when you dig into the analysis that it would be worth investigating whether there's a more sensible prior.
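
As a purely hypothetical illustration of what the linear assumption implies (made-up numbers, not GiveWell's): if a condition at severity fraction $s$ of a disease with full burden $B$ is assigned burden $sB$, then averting two moderate cases ($s = 0.5$) counts as

$$\text{linear: } 2 \times 0.5\,B = B, \qquad \text{convex } (s^2 B)\text{: } 2 \times 0.25\,B = 0.5\,B,$$

i.e. the same as one severe case under linearity, but only half as much under a convex alternative, which could change which interventions look best.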

Comment by alexrjl on alexrjl's Shortform · 2021-02-24T09:21:53.644Z · EA · GW

Thanks, this is useful to flag. As it happens, I think the "hard cap" will probably be an issue first, but it's definitely noteworthy that even if we avoid this there's still a softer cap which has the same effect on efficiency in the long run.

Comment by alexrjl on A ranked list of all EA-relevant documentaries, movies, and TV series I've watched · 2021-02-23T17:42:36.016Z · EA · GW

I really enjoyed the Channel 4 series Humans, and know at least one other EA who did. I thought it was one of the best representations of the questions around the potential rights of artificial sentience I'd seen within fiction.

Comment by alexrjl on alexrjl's Shortform · 2021-02-23T12:17:57.090Z · EA · GW

When Roodman's awesome piece on modelling the human trajectory came out, I feel like far too little attention was paid to the catastrophic effects of including finite resources in the model. 

I wonder if part of this is an (understandable) reaction to the various fairly unsophisticated anti-growth arguments which float around in environmentalist and/or anticapitalist circles. It would be a mistake to dismiss this as a concern simply because some related arguments are bad. To sustain increasing growth, our productive output per unit resource has to become arbitrarily large (unless space colonisation). It seems not only possible but somewhat likely that this "efficiency" measure will reach a cap some time before space travel meaningfully increases our available resources.
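
A minimal formalisation of this argument (my framing, not Roodman's model): write output as $Y_t = e_t R_t$, where $R_t$ is resource throughput and $e_t$ is output per unit resource. If resources are bounded, $R_t \le R_{\max}$, then

$$Y_t = e_t R_t \le e_t R_{\max},$$

so unbounded growth in $Y_t$ requires $e_t \to \infty$, i.e. the "efficiency" measure has to become arbitrarily large.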

I'd like to see more sophisticated thought on this. As a (very brief) sketch of one failure mode:

- Sub-AGI but still powerful AI ends up mostly automating the decision making of several large companies, which, with their competitive advantage, then obtain and use huge amounts of resources.

- They notice each other, and compete to grab those remaining resources as quickly as possible.

- Resources gone, very bad.

(This is along the same lines as "AGI acquires paperclips", it's not meant to be a fully fleshed out example, merely an illustrative story)

Comment by alexrjl on Can the EA community copy Teach for America? (Looking for Task Y) · 2021-02-19T14:21:38.791Z · EA · GW

Thanks! It's really nice to hear it's been thought provoking.

Comment by alexrjl on Project Ideas in Biosecurity for EAs · 2021-02-18T19:47:18.248Z · EA · GW

Thanks for writing this!

For those interested in the Sociology/Anthro side, I showed this section to my partner, who's an anthropologist, though not in this area. She suggested that these papers might be a helpful starting point, with the first in particular aiming to provide a framing for how the issue might be investigated anthropologically.

Biosecurity: Towards an anthropology of the contemporary

The history of biological warfare

History of biological warfare and bioterrorism

The illogic of the biological weapons taboo

Comment by alexrjl on Alternatives to donor lotteries · 2021-02-14T18:25:46.123Z · EA · GW

I really enjoyed this post. It has lots of interesting ideas and was very easy to read. Thanks for writing it!

Comment by alexrjl on Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected · 2021-02-14T16:32:55.760Z · EA · GW

I'm cautiously optimistic about family planning as a neartermist cause area, so thank you for raising it. However, I strongly disagree with the substance of the argument above, namely that the problem family planning is solving is population growth, particularly when it comes to climate change. Honestly, I find the idea of reducing population in poorer countries in order to prevent climate change pretty objectionable.

However, there is a reasonable amount of evidence that family planning reduces infant and maternal mortality, and if I remember correctly also at least some evidence that it reduces gender inequality (which certainly seems like a reasonable prior). As an illustration of why I find the population growth argument objectionable: if your primary goal is reducing populations, then family planning programmes reducing mortality makes them worse, which seems obviously wrong.

The Life You Can Save recommends one of the charities you mentioned, Population Services International; they therefore seem like a good bet.

Comment by alexrjl on Apply to EA Funds now · 2021-02-14T07:09:51.404Z · EA · GW

Thanks! This seems like a good system for it.

Comment by alexrjl on Open thread: Get/give feedback on career plans · 2021-02-13T18:45:46.516Z · EA · GW

I'd be happy to talk to people about their plans! 

I think I am most likely to be able to help:

  •  Someone who just needs a sympathetic ear or someone to bounce ideas off.
  •  Someone interested in teaching/education, especially in how to talk to young people about EA (and, much more importantly, how not to).
  •  Someone fairly new who wants help getting a broader sense of the movement (I'm not an expert in any particular area, but I have a decently strong understanding of pretty much all of the major cause areas).
Comment by alexrjl on Apply to EA Funds now · 2021-02-13T18:39:38.481Z · EA · GW

If someone has a project which potentially spans multiple funds, should they apply to both noting that they have done so, or apply to one noting that they are happy for their application to be passed on to the other?

Comment by alexrjl on Open and Welcome Thread: February 2021 · 2021-02-13T13:31:11.861Z · EA · GW

The size of your weak upvotes is also affected by your total karma, just more slowly. Every post starts with one weak upvote from its author.

Comment by alexrjl on How to discuss topics that are emotionally loaded? · 2021-02-03T11:25:39.227Z · EA · GW

I tweeted about this, and it ended up being a much longer thread than I had originally intended. It's quite critical of the passage I quoted above, so, although this is not intended as an attack on the OP but rather an extrapolation to a broader point, I thought it was best to flag that I had done so, as I didn't want it to unintentionally become a "subtweet".


Comment by alexrjl on Why I'm concerned about Giving Green · 2021-02-02T22:38:30.593Z · EA · GW

Thanks for engaging here. This is a thoughtful and interesting comment, and I think it’s noteworthy that we basically agree on several important conclusions, namely that Giving Green should:

  • Clearly indicate that, currently, CATF looks, in expectation, to be far superior to TSM, not least because even if their own research doesn’t show this, everyone else’s does.
  • Be more clear about the difference in expectation between Offsets and Policy change (some progress has been made on this already).
  • Consider cost in their offset analysis (though that doesn’t mean calculating a naive $/TCO2e and calling it a day).
  • Be more clear about the current quality and limitations of their original research.
  • Consider incorporating quantitative models, especially about their own theory of change (not because qualitative ones aren’t valid, but because it would likely improve their reasoning and make it easier to evaluate).

There are, however, a couple of misconceptions in your comment which are similar to those in Dan’s initial responses, and have been discussed elsewhere in the comments. I’m going to try to summarise those here, as this thread has got very long so it’s not surprising some things are being missed.

Quantitative research

  I think we may not want to require a quantitative CEA on charities working on policy change

As I mentioned in my reply to Dan when he raised a similar concern, I'm not rejecting Giving Green's analysis because it is not quantitative; I'm rejecting it because of the many substantial flaws which have been extensively discussed, and I'm also saying that quantitative modelling is a useful exercise which may have prevented or helped identify many of those flaws. The way that building quantitative models can improve analysis, even if the models themselves are rough or flawed, is usefully discussed by Johannes at the start of this epic comment (which is longer than the post itself), so I'll quote the relevant section.

I should also state upfront that my credence in CATF and other high-impact climate charities does not come primarily from the cost-effectiveness models, which are clearly wrong and also described as such, but by the careful reasoning that has gone into the FP climate recommendations...

...But the process of building these models and doing the research around them -- for each FP recommendation there is at least 20 pages worth of additional background research examining all kinds of concerns --  combined with years of expertise working in and studying climate policy, has served the purpose of clearly delineating the theory of value creation, as well as the risks and assumptions, in a way that a completely qualitative analysis that has a somewhat loose connection between evidence, arguments, and conclusions (recommendation) has not. 

The fundamental concern with Giving Green’s analysis that I, and I think (?) Alex, have is not the lack of quantitative modeling per se, but the unwillingness to make systematic arguments about relative goodness of things in a situation of uncertainty, rather treating each concern as equally weighted and taking an attitude of “when things are uncertain, everything goes and we don’t know anything”...

The Sunrise Movement

Again, I think the most important misunderstanding here has already been discussed repeatedly in the comments. The difference between "is X good" and "is X good on the margin" is a massive and fundamental part of impact evaluation. It's easy to argue a case along the lines of "progressive activism has been broadly positive/associated with positive changes"; I wholeheartedly agree with that claim! It just has very little to do with the potential impact of TSM on the current margin. It is possible for extremely good causes to be poor donation opportunities, because additional donations would not allow them to do any more good. It is similarly possible for only moderately good causes to be extremely good donation opportunities, if additional donations would be transformative for them. Neglectedness is only one aspect of judging marginal impact, but it is discussed helpfully in this comment.

There’s been a good deal of discussion in other comments here and here, as well as the substance of the original post, about the downside risks of TSM, but I think it’s worth noting that the view that “the Biden camp will probably ignore them if they suggest something too crazy” is not one which is totally compatible with thinking that donations to TSM will have high marginal impact.

There are, though, several ways in which TSM might influence things which don't seem obviously likely to fail, for example (quoting from this comment):

A stronger TSM could intensify pressure on Biden to prioritize executive orders over legislative politics, because this looks more appealing than more incrementally seeming legislative politics even though legislative politics would ultimately be more impactful and/or more robust over time.

I chose this in particular because it also speaks to the "who is being ambitious and transformative" discussion which seems to have popped up a few times in the comments. Ultimately, bipartisan legislation, even if it's slower to get big wins, ensures that those wins stick around in the long run (there's also the national vs international angle, but that's been covered elsewhere). Quoting part of another comment from Johannes:

All of the major success stories we have seen in climate over the past 20 years – solar, wind, coal > gas in the US, electric cars and batteries – have been the result of relatively narrow and targeted policies, the kind of which CATF advances for technologies that are less popular with greens for reasons of ideology, not merit.

Comment by alexrjl on How to discuss topics that are emotionally loaded? · 2021-02-02T18:32:07.174Z · EA · GW

It's not like Alice or Bob actually believe in an epistemic sense that some line of the other's argument is wrong. Rather, the other's argument makes them feel uncomfortable, because it is in some way related to something personal.

I think that in many situations which pattern match to these, this is Bob's view of the situation but not Alice's, which contributes to the situation. Bob thinks that Alice only disagrees because she is upset, and so doesn't actually consider her point of view seriously. Alice then finds the discussion difficult not only because the topic is upsetting, but also because she is very clearly not actually being listened to. Given how much more difficult Alice finds the discussion than Bob, she will likely not express the points she's trying to make as fluently or eloquently as Bob argues his side, adding to her frustration and strengthening Bob's view that he's right and she's just too emotional to see it.

You switched Bob and Alice between examples but I think the point is clear. As one concrete example, Example 4's Bob may not be a consequentialist, or he may feel that under moral uncertainty it's worth taking rights-based arguments extremely seriously even if otherwise acting as a utilitarian most of the time.

Comment by alexrjl on Why I'm concerned about Giving Green · 2021-01-30T15:23:43.241Z · EA · GW

Hi James,

Thanks for sharing your experience (and for the work you're doing).  I think it's worth noting that the funding discussion in the original post has quite a specific context:

  • Giving Green claimed that progressive climate activism was neglected based on financial data from 2015.
  • Given what's happened in the subsequent 6 years (including the formation of XR), financial data from 2015 is not close to sufficient to show neglectedness.

I secondly want to note, as has been discussed pretty extensively in the comments, that our prior should be that an organisation which is not CATF will underperform it, given that multiple independent evaluations of CATF by different people over a period of several years have repeatedly rated it extremely highly. Wanting to allocate money to the highest-EV option is not born of "risk-aversion"; it's just straight EV maximisation. Of course, if it turns out that the potential funding pools are so divergent that recommending both options would result in far more donations coming in, I'd be extremely happy, and enthusiastically recommend both. This is why I called for modelling of exactly this tradeoff.

I'm afraid your final point about EA potentially being too late to social movements, while important in general, somewhat misses the mark if what you're attempting to do is imply that the people on this thread who are skeptical about TSM are skeptical because of this particular blindspot. Sanjay, who I've worked closely with for some time, and who posted his own comment on this thread, has been working hard to start a social movement in the UK dedicated to preventing future pandemics; Johannes was a climate activist himself; and I've been thinking for some time about ways to allow people to get involved with EA other than donating, even if they don't have the option of a full career switch. Our skepticism about TSM is skepticism about TSM, not about activism or mass movements more broadly.

Comment by alexrjl on Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement · 2021-01-29T11:18:07.298Z · EA · GW

I agree that exactly that tradeoff is important! There's definitely a balance to be struck, and you certainly wouldn't want to exclude those who are already very aligned on the basis of low counterfactual impact, as the participation of those people will likely be very positive for other members!

Comment by alexrjl on Why I'm concerned about Giving Green · 2021-01-29T10:45:07.250Z · EA · GW

though I believe the issues with the idea of offsetting do not automatically mean helping people in search specifically of effective (or thus maybe least ineffective) offsetting possibilities is by nature ineffective

Indeed not; it will depend on the extent to which donors who seek offsets would be willing to donate to non-offset options if those are presented to them, and obviously also on how effective offsets are compared to non-offset alternatives. This is why I called for these things to be modelled, rather than assumed, in the post.

In answer to your question, [here's](https://forum.effectivealtruism.org/posts/Yix7BzSQLJ9TYaodG/ethical-offsetting-is-antithetical-to-ea) a post with 77 comments from a few years ago which will probably serve as a reasonable starting point. 

Comment by alexrjl on Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement · 2021-01-28T08:44:05.171Z · EA · GW

It's an interesting idea, but even if this ends up producing very engaged participants you have to be careful.

If you (deliberately and successfully) only select for people who are super keen, you end up with a super keen cohort but potentially only minimal counterfactual impact as all those you selected would have ended up really involved anyway. This was briefly mentioned in the post and I think is worth exploring further.

Comment by alexrjl on Why I'm concerned about Giving Green · 2021-01-27T22:46:08.076Z · EA · GW

Thanks for the great work you're doing! It's exciting to see numbers on donor preferences (even if the samples are small so far). I think this data you are collecting has potential to be really helpful in forming answers to a couple of the high level questions I raised at the start, and I have a few thoughts on how to extend this. I'll send you a message.