Posts

[Linkpost] - Mitigation versus Suppression for COVID-19 2020-03-16T21:01:28.273Z · score: 9 (6 votes)
If you (mostly) believe in worms, what should you think about WASH? 2020-02-18T16:47:12.319Z · score: 30 (13 votes)
alexrjl's Shortform 2019-11-02T22:43:28.641Z · score: 2 (1 votes)
What book(s) would you want a gifted teenager to come across? 2019-08-05T13:39:09.324Z · score: 24 (14 votes)
Can the EA community copy Teach for America? (Looking for Task Y) 2019-02-21T13:38:33.921Z · score: 73 (38 votes)
Amazon Smile 2018-11-18T20:16:27.180Z · score: 8 (7 votes)

Comments

Comment by alexrjl on What's the best platform/app/approach for fundraising for things that aren't registered nonprofits? · 2020-03-27T20:40:56.399Z · score: 3 (3 votes) · EA · GW

I think you can do this on the SoGive platform. Have sent Sanjay a message.

Comment by alexrjl on What posts you are planning on writing? · 2020-03-27T06:29:26.894Z · score: 3 (2 votes) · EA · GW

Yes, or at least I think the way they are often interpreted is different. I actually have no issue with 80k's formal definition, but qualitative use in practice (not by 80k) has often put both of 80k's last two points into the tractability metric, leaving a nebulous extra factor called 'Neglectedness' which ends up being counted again. The key metric is how much good can be done by one marginal extra person or dollar, and I've seen a few cases of people estimating that (which is clearly affected by diminishing marginal returns) and then adding a Neglectedness score on top as well, which seems wrong.

I haven't written this up yet as I don't think it's hugely important: it's typically a feature of naïve/rough work, and there's definitely a chance that some of this kind of work is actually using a framework modelled on 80k's but just not presenting it well. Most high-quality research is done via an actual CEA rather than via the ITN framework, so there's obviously no issue there.

Comment by alexrjl on Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar? · 2020-03-13T09:07:35.667Z · score: 1 (1 votes) · EA · GW

I hadn't heard that, thanks for sharing!

Comment by alexrjl on Shapley Values Reloaded: Philantropic Coordination Theory & other miscellanea. · 2020-03-12T21:45:47.330Z · score: 2 (2 votes) · EA · GW

Your previous post on this was immensely valuable. I haven't yet finished this but want to say thank you anyway for producing what is so far another extremely high-quality and informative post.

Comment by alexrjl on Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar? · 2020-03-12T21:39:47.844Z · score: 10 (5 votes) · EA · GW

A large donor working with DMI to scale up messaging around handwashing seems like an obvious place to start given that they are plausibly close to that level of cost-effectiveness even ignoring coronavirus, and under optimistic assumptions are significantly better than that.

Comment by alexrjl on What are the key ongoing debates in EA? · 2020-03-12T15:49:28.486Z · score: 10 (6 votes) · EA · GW

I looked into worms a bunch for the WASH post I recently made. Miguel and Kremer's study has a currently unpublished 15-year follow-up which, according to GiveWell, has similar results to the 10-year follow-up. Other than that, the evidence of the last couple of years (including a new meta-analysis from Taylor-Robinson et al. in September 2019) has continued to point towards deworming having almost no effect on weight, height, cognition, school performance, or mortality. This hasn't really caused anyone to update, because this is the same picture as in 2016/17. My WASH piece had almost no response, which might suggest that people just aren't too bothered by worms any more, though it could equally be something unrelated, like style.

I think there's a reasonable case to be made that discussion and interest around worms is dropping though, as people for whom the "low probability of a big success" reasoning is convincing seem likely to either be long-termists, or to have updated towards growth-based interventions.

Comment by alexrjl on What are the key ongoing debates in EA? · 2020-03-12T12:24:44.459Z · score: 3 (2 votes) · EA · GW

Ditto to both parts of this

Comment by alexrjl on Where to find EA-related videos · 2020-03-03T12:44:17.360Z · score: 7 (3 votes) · EA · GW

Robert Miles has an excellent YouTube channel looking at AI safety. https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg

Comment by alexrjl on If you (mostly) believe in worms, what should you think about WASH? · 2020-02-28T10:10:48.247Z · score: 1 (1 votes) · EA · GW

I wrote most of the post before anyone was talking about coronavirus, and therefore used Dispensers for Safe Water as the example comparison, which doesn't at first glance look like it would help whatsoever with Covid-19.

DMI was mentioned in the post, however, and their mass media communications designed to effect behaviour change around, among other things, hand-washing look even better in light of the first case of Covid-19 being recorded in sub-Saharan Africa. If there's interest, I'll try to write up a more detailed look at DMI soon.

Comment by alexrjl on Shoot Your Shot · 2020-02-18T08:50:11.104Z · score: 3 (3 votes) · EA · GW

I currently do high school outreach as well (in fact, in a context which selects for mathematical talent and enthusiasm, so not miles away from Splash), so feel free to PM me if you'd like to discuss ideas and/or have some help with session planning. I'd also recommend getting in touch with @cafelow on the forum.

Comment by alexrjl on Founders Pledge Climate & Lifestyle Report · 2020-02-17T13:31:21.099Z · score: 1 (1 votes) · EA · GW

Thanks for posting. I think it's really valuable to have high-quality, cause-area-specific analysis to point interested non-EAs towards, and Founders Pledge has consistently been a great source of exactly this.

I'm a little skeptical about the strength of the claims around the waterbed effect. Governments have historically been much better at setting targets than meeting them, and individual emissions make targets marginally less likely to be hit. It seems likely that if, say, in 2040 it becomes clear that there's no way the UK will meet its 2050 target without huge and extremely costly changes, the government will move the target rather than implement them, which would make anything that makes the target harder to hit potentially very harmful.

Comment by alexrjl on Illegible impact is still impact · 2020-02-14T10:14:50.705Z · score: 13 (7 votes) · EA · GW

Thank you for writing this. As someone who estimates his own career path has almost entirely illegible impact, it's made me more excited to continue trying to maximise that impact, even though it's unlikely to be visible. I thought it was worth commenting mostly because, even though the majority of the impact you've had by writing this post will be illegible, it might be nice to see some of it.

Comment by alexrjl on What posts you are planning on writing? · 2020-02-07T11:26:52.883Z · score: 1 (1 votes) · EA · GW

Importance, Tractability and Neglectedness should not have equal weight.

TL;DR: Neglectedness is a useful tiebreaker and gives you information about tractability, but the relatively common matrix approach of scoring possible ideas on ITN and then ranking based on the sum of the scores overweights it.
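To make the worry concrete, here's a toy sketch (all numbers and cause names invented) of how the additive matrix approach can double-count crowdedness when tractability is already scored as marginal good per extra dollar:

```python
# Hypothetical sketch (all numbers invented) of the double-counting worry:
# if "tractability" is scored as marginal good done per extra dollar,
# which already reflects how crowded the area is, then adding a separate
# neglectedness score counts crowdedness twice.

causes = {
    # (importance, marginal_tractability, neglectedness) on rough 1-10 scales
    "crowded":   (8, 5, 2),
    "neglected": (8, 5, 9),  # identical estimated marginal impact per dollar
}

def matrix_score(i, t, n):
    """The additive 'matrix' approach criticised above: I + T + N."""
    return i + t + n

for name, (i, t, n) in causes.items():
    print(name, matrix_score(i, t, n))

# The two causes have identical estimated marginal impact per dollar,
# yet the additive score ranks "neglected" well above "crowded".
```

Under these made-up scores, the ranking flips purely on the neglectedness term, even though the marginal-impact estimate is the same for both causes.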

Comment by alexrjl on Personal Data for analysing people's opinions on EA issues · 2020-01-12T09:29:00.099Z · score: 6 (4 votes) · EA · GW

The negative repercussions of this for how EA is perceived seem absolutely enormous. Cambridge Analytica has got to be one of the most despised companies in the Western world.

Comment by alexrjl on alexrjl's Shortform · 2020-01-06T08:45:55.101Z · score: 2 (2 votes) · EA · GW

Discounting the future consequences of welfare producing actions:

  • there's almost unanimous agreement among moral philosophers that welfare itself should not be discounted in the future.
  • however, many systems in the world are chaotic, and it's very uncontroversial that in consequentialist theories the value of an action should depend on the expected utility it produces.
  • is it possible that the rational conclusion is to exponentially discount future welfare as a way of accounting for the exponential sensitivity to initial conditions exhibited by the long term consequences of one's actions?
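As a toy sketch of the third bullet (the decay rate and welfare stream here are invented), suppose an action's intended causal chain survives chaotic dynamics to time t only with probability exp(-lam * t); expected-value calculations then reproduce exponential discounting without ever discounting welfare itself:

```python
import math

# Toy model (assumptions entirely mine): an action is supposed to produce
# welfare w(t) at time t, but chaotic dynamics mean its intended effect
# survives to time t only with probability exp(-lam * t).

def expected_value(w, lam, horizon, dt=1.0):
    """Sum undiscounted welfare w(t), weighted by the survival
    probability exp(-lam * t) of the intended causal chain."""
    total = 0.0
    t = 0.0
    while t < horizon:
        total += w(t) * math.exp(-lam * t) * dt
        t += dt
    return total

# A constant welfare stream under a 2%/year "predictability decay" is
# numerically identical to exponentially discounting welfare at 2%,
# even though welfare itself is never discounted.
value = expected_value(lambda t: 1.0, lam=0.02, horizon=100)
```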

Comment by alexrjl on Interaction Effect · 2019-12-16T18:53:12.238Z · score: 2 (2 votes) · EA · GW

I think the other responses capture the most important response to your question, which is that we tend to look at the value of things on the margin. However, as you're clearly thinking intelligently about important ideas, I thought I'd point you in the direction of some further thinking.

Another, perhaps clearer case where this "thinking on the margin" happens is with charity evaluation. If, for example, there existed some very rare and fatal disease which cost only pennies to cure, it would be extremely cost-effective to donate to an organisation providing cures, right up until that organisation had enough to cure everyone with the disease. After this point, the cost-effectiveness of additional funding would dramatically drop. Usually this doesn't happen quite so dramatically, but it's still an important effect. It is this sort of reasoning which has prompted GiveWell, for example, to look at "room for additional funding", see here.

There's another way of looking at your question though, which is to re-phrase it as "how should we assign credit for good outcomes which required multiple actors?"

One approach to answering this version of the question is discussed in depth here. I think you may enjoy it.


Comment by alexrjl on The Economic Lives of the Poor · 2019-11-22T09:45:50.600Z · score: 1 (1 votes) · EA · GW

Thank you for posting this. It's an excellent summary and also brought to my attention an important article I otherwise might not have come across for months.

Comment by alexrjl on What book(s) would you want a gifted teenager to come across? · 2019-11-09T15:30:08.733Z · score: 3 (2 votes) · EA · GW

I preordered a copy for the library and it was checked out almost instantly. :)

Comment by alexrjl on alexrjl's Shortform · 2019-11-05T18:56:34.914Z · score: 2 (2 votes) · EA · GW

I started donating regularly by following this thought process:

Some amount of money exists which is small enough that I wouldn't notice not having it.

This is clearly a lower bound on how much I am morally obligated to donate, because not having it costs me 0 utility, while giving it away generates positive utility for someone else.

I ended up donating £1/month, committing never to cancel it and to review it periodically. I now donate much, much more.

To do:

Compare the benefits of encouraging other people to take a similar approach with the potential harm associated with this approach going wrong, specifically moral licensing kicking in at relatively small donation amounts.

Comment by alexrjl on Can the EA community copy Teach for America? (Looking for Task Y) · 2019-11-05T18:48:56.871Z · score: 3 (2 votes) · EA · GW

I agree that lots of structure is needed, and I'm very uncertain on the best structure. I do really like John Behar's post above about the "personal best" approach though.

Comment by alexrjl on Can the EA community copy Teach for America? (Looking for Task Y) · 2019-11-05T12:37:36.032Z · score: 5 (4 votes) · EA · GW

I've updated towards earning to give having more of the characteristics of Task Y than I originally thought, based partly on the discussion in the comments. There are some good volunteering opportunities (for those in London, for example, doing charity analysis for SoGive), but I haven't found anything as scalable yet.

One idea I want to explore more is effective activism. The difficulty of assessing outcomes is obvious, but XR, for all its flaws, has shown the potential to get huge numbers of people involved.

Comment by alexrjl on alexrjl's Shortform · 2019-11-02T22:43:28.767Z · score: 4 (3 votes) · EA · GW

Given the probable existence of several catastrophic "tipping points" in climate change, as well as feedback loops more generally, such as melting ice reducing solar reflectivity, it seems likely that averting CO2 emissions in the future is less valuable than doing so today.


To do: Figure out an appropriate discount rate to account for this.
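One rough way to start on that to-do (the parameter values here are invented for illustration): if each year carries an independent probability of crossing an irreversible tipping point, and abatement after a crossing retains only a fraction of its value, then the value of future abatement decays geometrically, which is exactly a discount rate.

```python
# Rough sketch (p_tip and residual are invented numbers): if each year
# carries an independent probability p_tip of crossing an irreversible
# tipping point, and abatement after a crossing retains only a residual
# fraction of its value, future abatement decays geometrically in value.

def relative_value(t_years, p_tip=0.01, residual=0.3):
    """Value of averting one tonne of CO2 in t_years, relative to now."""
    p_not_crossed = (1 - p_tip) ** t_years
    return p_not_crossed + (1 - p_not_crossed) * residual

print(relative_value(0))   # abating now: full value
print(relative_value(30))  # abating in 30 years is worth noticeably less
```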

Comment by alexrjl on What's the most effective organisation that deals with mental health as an issue? · 2019-08-20T10:32:41.884Z · score: 6 (5 votes) · EA · GW

Founders Pledge published a report on mental health interventions which concluded that StrongMinds was the best option.

You can download the report from https://founderspledge.com/research

Comment by alexrjl on Peer Support/Study/Networking group for EA math-centric students · 2019-08-11T14:04:21.354Z · score: 2 (2 votes) · EA · GW

Nice idea, I've filled in your form as a potential mentor. :)

Comment by alexrjl on What book(s) would you want a gifted teenager to come across? · 2019-08-07T13:05:20.947Z · score: 1 (1 votes) · EA · GW

Thanks for the considered recommendation. It looks interesting, but the potential pitfall you note is certainly a problem with some students (being good at maths doesn't make you generally intelligent, but it can often make you believe you are)! I'll probably buy a personal copy and evaluate it once I've read it.

Comment by alexrjl on What book(s) would you want a gifted teenager to come across? · 2019-08-06T18:10:14.027Z · score: 2 (2 votes) · EA · GW

These were exactly the sort of thing I was looking for, thank you!

Comment by alexrjl on What book(s) would you want a gifted teenager to come across? · 2019-08-06T18:01:08.312Z · score: 1 (7 votes) · EA · GW

Thank you for mentioning this. I've recommended it in the past based on having enjoyed it as a teenager, though not with any sort of EA intention, but won't be doing so again to students of any gender.

Comment by alexrjl on What book(s) would you want a gifted teenager to come across? · 2019-08-05T20:44:11.274Z · score: 1 (1 votes) · EA · GW

Thank you! I'll be in touch.

Comment by alexrjl on What book(s) would you want a gifted teenager to come across? · 2019-08-05T20:43:41.476Z · score: 1 (1 votes) · EA · GW

This was a favourite of mine as a teen (and many others judging by the upvotes), though I'm now re-evaluating based on the comment above, as I haven't read it since. There are multiple copies of this, as well as QED, Six Easy Pieces, and Six Not-So-Easy Pieces already in the library, all of which are very popular, and frequently recommended by me (I teach maths and physics). I'm not sure I'd consider any of them a strong nudge towards being more likely to end up as an EA though.

Comment by alexrjl on What book(s) would you want a gifted teenager to come across? · 2019-08-05T16:02:22.428Z · score: 1 (1 votes) · EA · GW

Thank you for the detailed recommendation. I'll get a copy and read to check suitability but for 16-18 year olds (many of whom are studying Economics) it seems excellent based on your description.

Comment by alexrjl on High School EA Outreach · 2019-05-10T18:12:53.104Z · score: 1 (1 votes) · EA · GW

This is spot on, and thinking about this was what prompted me to start trying to identify a 'Task Y' in the first place. I'm relatively convinced that earning to give is a good Task Y in many situations, but working with students is not one of them.

Comment by alexrjl on a black swan energy prize · 2019-03-29T19:50:25.089Z · score: 1 (1 votes) · EA · GW

The idea of a prize for a spectacular breakthrough in the area of energy seems promising but I remain unconvinced that cold fusion, however repackaged, is the basket to put our eggs in here.

Cheap, high-capacity batteries which could be recharged arbitrarily many times could have as transformative an effect on our energy production and consumption as a new fuel source, by making a 100% renewable grid feasible, as well as making electric vehicles far more attractive. A breakthrough in high-temperature superconductivity could be similarly transformative.

I think sometimes it's too easy to get caught up in the excitement of finding a highly neglected idea, and in doing so miss the fact that it may be highly neglected for extremely good reasons.

Comment by alexrjl on Apology · 2019-03-29T09:31:18.716Z · score: -2 (8 votes) · EA · GW

I do have strong feelings about this, but having strong feelings and having given complex issues careful consideration are not mutually exclusive, and the implication otherwise was uncalled for. Having carefully considered the issue, I have concluded that the anonymity of sexual assault victims is the most important factor here, and I'm not alone in this conclusion. The UK legal system, for example, agrees.

Given that you easily identified that "access all evidence" was the other criterion which risked anonymity, I don't think it's too hard to see the connection between them.

Comment by alexrjl on Apology · 2019-03-29T06:30:10.420Z · score: -11 (11 votes) · EA · GW

Two of FIRE's conditions request that victims of sexual assault must face their assailant in order to have any hope of justice. I'm extremely glad that EA organisations violate FIRE's "safeguards".

Comment by alexrjl on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T20:18:01.511Z · score: 6 (8 votes) · EA · GW

I looked for the study because I was surprised by the strength of the statement it was used to support. When I found it, I was annoyed to discover that it doesn't come close to supporting a claim of that strength. This annoyance prompted the tone of the original post, which you have characterised fairly and which was a mistake. I've now edited it out, because I don't want it to distract from the claim I am making:

The study does not support the claim that it is being used to support.

Comment by alexrjl on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T07:03:25.761Z · score: 12 (12 votes) · EA · GW

"Even though historically men have been granted more authority than women, influence of feminism and social justice means that in many circumstances this has been mitigated or even reversed. For example, studies like Gornall and Strebulaev (2019) found that blinding evaluators to the race or sex of applicants showed that by default they were biased against white men."

That is an unreasonably strong conclusion to draw from the study you've cited, not least given that even in the abstract of that study the authors make it extremely clear that "[their] experimental design is unable to capture discrimination at later stages [than the initial email/expression of interest stage]". https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3301982

[Edited for tone]

Comment by alexrjl on Making discussions in EA groups inclusive · 2019-03-04T22:02:04.426Z · score: 3 (3 votes) · EA · GW

To be clear, I didn't make the above point in order to say "you should feel bad because you're white and male". I also didn't make it to say "you should just shut up and defer to the opinions of others here because you're white and male". I made it to try to explain why the choice to say "everyone just needs to suck it up and deal with their own discomfort" is not a choice with no downside; it puts the debate on an uneven footing, where people are not able to participate equally. It doesn't seem a stretch to then say that debates where some people are at an inherent disadvantage from the start are not as self evidently optimal as a truth seeking exercise as it may first seem.

Comment by alexrjl on Making discussions in EA groups inclusive · 2019-03-04T21:29:05.290Z · score: 2 (9 votes) · EA · GW

I'm a white male, and I view my own comfort in debate spaces merely as a means to reach truth, and welcome attempts to trade the former for the latter. Of course, you may be thinking "that's easy for you to say, cause you're a white male!" And there's no point arguing with that because no one will be convinced of anything. But I'm at least going to say that it is the right philosophy to follow.

Consider the possibility that the philosophy you mention is not as easy for everyone to follow as it is for you. When the entirety of society is built with your comfort in mind, it's very easy to sacrifice some of it as a means to reach truth, especially as as soon as you leave the debate space you can go back to not thinking about any of the discomfort you experienced during the discussion. You are safe putting all of your emotional and intellectual energy into the debate space, knowing that if the conversation gets too much for you, you can opt to leave at any time and go on with the rest of your day.

If, however, someone lives in a world where every day includes many instances of them being made uncomfortable (even if each instance might seem trivially small to someone who only sees one), which they have no option to switch off, that person cannot safely put all of their emotional and intellectual energy into a debate space which asks them to sacrifice their level of comfort. They don't have the option of participating in a discussion until it becomes too much, because they have to save energy with which to get through the rest of the day. As a result, they are not able to participate in the debate on an equal footing (if they even choose to at all, given the emotional effort involved).

Comment by alexrjl on Can the EA community copy Teach for America? (Looking for Task Y) · 2019-02-24T17:31:24.645Z · score: 1 (1 votes) · EA · GW

Thanks for this. I've edited your question into the post. The third bullet point you wrote actually captures a lot of why I'm excited about a potential Task Y (or list, like the one Aaron posted). If people have the option to do something which both genuinely is good and seems good to them, and hear that this is actively encouraged by the EA community and enough to be considered a valuable part of it, I think this goes quite a long way towards stopping EA seeming so elitist. Having multiple levels of commitment available to people, with good advice about the most effective thing to do given a particular level of commitment, seems to have lots of potential.

I have price discrimination in my head as a model here, though I realise the analogy is not a perfect one.

Comment by alexrjl on Can the EA community copy Teach for America? (Looking for Task Y) · 2019-02-24T17:01:17.061Z · score: 1 (1 votes) · EA · GW

Thank you all for the positive comments and extremely useful feedback! I've edited some subheadings and a summary into the original post, though I've (optimistically) left the title so that people who've read the post and want to come back to participate in the discussion don't get lost. I've also included John's question in the list of important questions to ask.

Comment by alexrjl on Would killing one be in line with EA if it can save 10? · 2018-11-29T21:20:19.917Z · score: 2 (2 votes) · EA · GW

I'm not sure many EAs will agree with your intuition (if I'm understanding your question correctly) that it's morally wrong to kill one person to save 10. There are certainly some moral philosophers who do, however. This dilemma is often referred to as the "trolley problem", and has had plenty of discussion over the years.

You may find this interesting reading; it turns out people's intuitions about similar problems vary quite a lot based on culture.

Comment by alexrjl on Amazon Smile · 2018-11-22T07:03:33.769Z · score: 1 (1 votes) · EA · GW

I think this is a valid concern, and I certainly don't think presenting 'Amazon Smile is the sort of thing EAs do' is particularly useful or accurate. To try to be slightly more clear about why I do think the mention is a useful starting point:

  • Full EA can be quite a lot to try to introduce to people all at once, even when those people already want to help.
  • Asking people to carefully consider how they make a specific donation is a gentle way in, at least to 'soft EA'. (Giving games are another example of this)
  • Amazon Smile is a specific donation that you can ask people to consider how they make. If they haven't heard of it before, it's likely that their net experience of hearing about it and setting it up will be positive (they are getting to donate to a charity with no downside, again rather like a giving game). My hope is that this positive experience will make people more likely to consider where their donations go in future, and/or to respond positively to future things they hear about EA. I'm uncertain about how large the effects in each case will be, but don't think they will be negative. I am concerned, however, about the effect of setting up Amazon Smile on the total amount that someone donates in future, which I think will be negative if you ignore any potential introduction to EA. This means the probability of the exercise being positive depends on how likely you are to be able to use the conversation as a productive starting point.

Comment by alexrjl on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-12T22:25:30.189Z · score: 2 (2 votes) · EA · GW

I think there's reason to be cautious with the "highest marginal information comes from studying neglected interventions" line of reasoning, because of the danger of studies not replicating. If we only ever test new ideas, and then suggest funding the new ideas which appear from their first study to have the highest marginal impact, it's very easy to end up with several false positives being funded even if they don't work particularly well.

In fact, in some sense the opposite argument could be made: it is possible that the highest marginal information gain will come from research into a topic which is already receiving lots of funding. Mass deworming is the first example that springs to mind, mostly because there's such a lack of clarity at the moment, but the marginal impact of finding new evidence about an intervention there's lots of money in could still be very large.

I guess the rather sad thing is that the biggest impact comes from bad news: if an intervention is currently receiving lots of funding because the research picture looks positive, and a large study fails to replicate, a promising intervention now looks less so. If funding moves towards more promising causes as a result, this is a big positive impact, but it feels like a loss. It certainly feels less like good news than a promising initial study on a new cause area, but I'm not sure it actually results in a smaller impact.