Posts

Why I'm concerned about Giving Green 2021-01-20T22:59:20.608Z
Forecasts about EA organisations which are currently on Metaculus. 2020-12-29T17:42:18.572Z
Incentive Problems With Current Forecasting Competitions. 2020-11-10T21:40:46.317Z
What questions would you like to see forecasts on from the Metaculus community? 2020-07-26T14:40:17.356Z
Climate change donation recommendations 2020-07-16T21:17:57.720Z
[Linkpost] - Mitigation versus Suppression for COVID-19 2020-03-16T21:01:28.273Z
If you (mostly) believe in worms, what should you think about WASH? 2020-02-18T16:47:12.319Z
alexrjl's Shortform 2019-11-02T22:43:28.641Z
What book(s) would you want a gifted teenager to come across? 2019-08-05T13:39:09.324Z
Can the EA community copy Teach for America? (Looking for Task Y) 2019-02-21T13:38:33.921Z
Amazon Smile 2018-11-18T20:16:27.180Z

Comments

Comment by alexrjl on Clarifying the Petrov Day Exercise · 2021-09-27T07:38:50.487Z · EA · GW

I interpreted your comment as saying that I was "lambasting the foibles of being a well intentioned unilateralist", and that I should not be doing so. If that was not the intent I'm glad.

Comment by alexrjl on Clarifying the Petrov Day Exercise · 2021-09-27T07:36:47.327Z · EA · GW

The lesson I would want people to learn is "I might not have considered all the reasons people might do stuff". See comment below.

Comment by alexrjl on Clarifying the Petrov Day Exercise · 2021-09-27T07:35:46.634Z · EA · GW

This is closer. I think the framing I might have had in mind is something more like:

  • people underestimate the probability of tail risks.

  • I think one of the reasons why is that they don't appreciate the size of the space of unknown unknowns (which in this case includes people pushing the button for reasons like this).

  • causing them to see something from the unknown unknown space is therefore useful.

  • I think last year's phishing incident was actually a reasonable example of this. I don't think many people would have put sufficiently high probability on it happening, even given the button getting pressed.

Comment by alexrjl on Clarifying the Petrov Day Exercise · 2021-09-27T06:58:43.207Z · EA · GW

Yeah, I guess you could read what I'm saying as: I actually think I should have pressed it for these reasons, but my moral conviction wasn't strong enough to bear the social cost of doing so.

One reading of that is that the community's social pressure is strong enough to deter bad actors like me from doing stupid, harmful stuff we think is right.

Another is that social pressure is often enough to stop people from doing the right thing, and that we should be extra grateful to Petrov, and others in similar situations, because of this.

Either reading seems reasonable to discuss today.

Comment by alexrjl on Clarifying the Petrov Day Exercise · 2021-09-27T06:40:19.011Z · EA · GW

This wasn't intended as a "you should have felt sorry for me if I'd done a unilateralist thing without thinking". It was intended as a way of giving more information about the probability of unilateralist action than people would otherwise have had, which seems well within the spirit of the day.

I also think it's noteworthy that in the situation being celebrated the ability to resist social pressure was pointing in the opposite direction to the way it goes here, which seems like a problem with the current structure, but I didn't end up finding a good way to articulate it, and someone else said something similar already.

Comment by alexrjl on Clarifying the Petrov Day Exercise · 2021-09-27T05:05:39.977Z · EA · GW

It seems fairly likely (25%) to me that had Kirsten not started this discussion (on Twitter) I would have pushed the button because:

  • actually preventing the destruction of the world is important to me.

  • doing so, especially as a "trusted community member", would hammer home the danger of well intentioned unilateralists in the way an essay can't, and I think that idea is important.

  • despite being aware of LessWrong and having co-authored one post there, I didn't previously understand how seriously some people took the game.

  • worse, I was in the dangerous position of having heard enough about Petrov day to, when I read the email, think "oh yeah I basically know what this is about", and therefore not read the announcement post.

I decided not to launch, but this was primarily because it became apparent through this discussion how socially costly it would be. I find people being angry with me on the internet unusually hard, and expect that pushing the button using the reasoning above could quite easily have cost me a significant amount of productive work (my median is ~ 1 week).

Comment by alexrjl on Magnitude of uncertainty with longtermism · 2021-09-13T07:11:14.740Z · EA · GW

This talk and paper discuss what I think are some of your concerns about growing uncertainty over longer and longer horizons.

Comment by alexrjl on Who do intellectual prizewinners follow on Twitter? · 2021-08-26T09:19:29.608Z · EA · GW

Nice

Comment by alexrjl on More EAs should consider “non-EA” jobs · 2021-08-21T12:43:09.167Z · EA · GW

In my case it was the opposite - I spent several years considering only non-EA jobs as I had formed the (as it turns out mistaken) impression that I would not be a serious candidate for any roles at EA orgs.

Comment by alexrjl on What things did you do to gain experience related to EA? · 2021-08-08T07:29:22.663Z · EA · GW

NB - None of the things below were done with the goal of building prestige/signalling. I did them because they were some combination of interesting, fun, and useful to the world. I doubt I'd have been able to stick with any if I'd viewed them as purely instrumental. I've listed them roughly in the order in which I think they were helpful in developing my understanding. The signalling value ordering is probably different (maybe even exactly reversed), but my experience of getting hired by an EA org is that you should very heavily prioritise developing skill/knowledge/understanding over signalling.

  • As a teacher, I ran a high-school group talking about EA ideas, mostly focusing on the interesting maths. This involved a lot of thinking and reading on my part in order to make the sessions interesting.
  • Over the course of a few years, I listened to almost every episode of the 80k podcast, some multiple times.
  • I wrote about things I thought were important on the EA forum.
  • I volunteered for SoGive as an analyst, and had a bunch of exciting calls with people like GiveWell and CATF as a result.
  • I spent a bunch of time on Metaculus, including volunteering as a moderator and trying to write useful questions, though I ended up doing fairly well at forecasting by some metrics.

Comment by alexrjl on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-06T11:26:47.732Z · EA · GW

Sentinel seems promising

Comment by alexrjl on A Sequence Against Strong Longtermism · 2021-07-26T08:54:50.096Z · EA · GW

I don't think the claim from Linch here is that not bothering to edit out snark has led to high value, rather that if a piece of work is flawed both in the level of snark and the poor quality of argument, the latter is more important to fix.

Comment by alexrjl on Career advice for Australian science undergrad interested in welfare biology · 2021-07-21T12:28:38.868Z · EA · GW

https://www.animaladvocacycareers.org/ seems like a good option to check out if you're set on animal welfare work. Given that you're thinking about keeping AI on the table, you should probably at least consider keeping pandemic prevention similarly on the table; it seems like a smaller step sideways from your current interests. Have you considered applying to speak to someone at 80,000 Hours*?

*I'll be working on the 1-1 team from September, but this is, as far as I can tell, the advice I'd have given anyway, and shouldn't be treated as advice from the team.

Comment by alexrjl on The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion · 2021-07-06T18:01:24.794Z · EA · GW

How do you approach identity? If ~no future people are "necessary", does this just reduce to critical-level utilitarianism (but still counting people with negative welfare, can't remember if critical level does that)? Are you ok with that?

Comment by alexrjl on The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion · 2021-07-06T14:45:59.491Z · EA · GW

Trying to summarise for my own understanding.

Is the below a reasonable tl;dr?

Total utilitarianism, except you ignore people who satisfy all of:

  • won't definitely exist
  • have welfare between 0 and T

Where T is a threshold chosen democratically by them, and lives with positive utility are taken to be "worth living".

If so, does this reduce to total utilitarianism in the case that people would choose not to be ignored if their lives were worth living?
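
To check I've understood the rule I'm summarising, here's a minimal toy sketch of it (my own illustration under the assumptions above, not anything from the paper; the `Person` class, the `necessary` flag, and the example numbers are all hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Person:
    welfare: float
    necessary: bool  # True if this person exists in every option under consideration

def population_value(people: list[Person], threshold: float) -> float:
    """Total welfare, ignoring contingent people whose welfare is between 0 and T."""
    total = 0.0
    for p in people:
        if not p.necessary and 0 <= p.welfare <= threshold:
            continue  # ignored: not guaranteed to exist, and welfare between 0 and T
        total += p.welfare  # everyone else counts, including negative-welfare people
    return total

# Hypothetical example with T = 10: the contingent person at welfare 5 is ignored,
# but the contingent person at -3 and the necessary person at 8 both count.
print(population_value([Person(5, False), Person(-3, False), Person(8, True)], threshold=10))
# -> 5.0
```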

Comment by alexrjl on What are some skills useful for EA that one could learn in ~6 months? · 2021-06-28T17:53:42.561Z · EA · GW

Forecasting:
Metaculus intro resources, partially complete introductory video series, book.

Comment by alexrjl on What are the 'PlayPumps' of cause prioritisation? · 2021-06-23T11:51:35.902Z · EA · GW

I think plastic straws are a very good option here, when you consider that:

  • paper straws are just a worse experience for ~everyone
  • metal/glass straws are arguably worse for the environment given the number of uses and resources required to produce them (see also reusable bags)
  • some disabled people rely on straws, and paper replacements are terrible for them

This is certainly closer to the PlayPumps case [actively harmful once you think properly about it] than the ALS case [not a huge issue, but it's not like stopping ALS would actually be bad in a vacuum].

Comment by alexrjl on Why did EA organizations fail at fighting to prevent the COVID-19 pandemic? · 2021-06-19T17:52:50.893Z · EA · GW

Is the claim here that EA orgs focusing on GCRs didn't think GoF research was a serious problem, and consequently didn't do enough to prevent it even though they easily could have if they had just tried harder?

My impression is that many organisations and individual EAs were both concerned about risks due to GoF research and were working on trying to prevent it. A postmortem about the strategies used seems plausibly useful, as does a retrospective on whether it should have been an even bigger focus, but I think the claim as stated above is false, and probably unhelpful.

Comment by alexrjl on A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it · 2021-06-09T15:10:10.955Z · EA · GW

Overall I liked this post, and in particular I very strongly endorse the view that it's worth spending nontrivial time/energy/money to improve your health, energy, productivity etc. I don't have a strong view about how useful the specific pieces of advice were, my impression is that the literature is fairly poor in many of these areas. Partly because of this, my favourite section was:

One thing people sometimes say when I tell them there is a small chance taking some pill will fix their problems is that this seems somehow like cheating because it doesn’t require any lifestyle changes. As if because it’s easy you don’t really deserve to have it fixed? I don’t get it but suffice to say that if for ~$20 you can trial something with a simply massive expected value (even if it’s unlikely to work) and usually with almost no downside (you can just stop taking it after two weeks if it doesn’t work) you should definitely try that thing. Think of it like buying a lottery ticket but with much better odds and a chance of actually making you consistently happier in the long-run.

It's noteworthy that the above applies not just to "taking some pill", but in fact to any low-cost-of-trying intervention which might prove substantially beneficial in the long run.
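
As a toy illustration of the expected-value point (every number below is mine and purely hypothetical, just to show the shape of the calculation):

```python
# Hypothetical expected-value calculation for trialling a cheap intervention.
cost_of_trial = 20        # e.g. a two-week supply of some supplement, in dollars
p_it_works = 0.05         # assumed small chance it actually fixes the problem
value_if_it_works = 2000  # assumed rough value of feeling noticeably better, in dollars

expected_value = p_it_works * value_if_it_works - cost_of_trial
print(expected_value)  # 80.0: positive in expectation despite a 95% chance of "wasting" $20
```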


To that end, I was surprised to see the following at the end (as I think its framing is contradicted by the above).

Less ideal solutions (but still definitely worth considering) include patching over the problem by trying things like nootropics, antidepressants, or other medication.

It seems straightforwardly wrong to characterise medically treating e.g. clinical depression or ADHD as a "less ideal solution" which is merely "patching over the problem". For many, treatment will be necessary for at least some time even if lifestyle adjustments and therapy are sufficient management in the longer term. For many others, medicine is a necessary part of the long-term solution, and possibly also a sufficient long-term solution.

I really liked this quote from Howie in a recent 80k podcast[1] about this.

[1] - I'm linking to this because I think it makes the point well, but should probably disclose that I'll be working at 80k from September. The opinions above are only intended to represent my views, including the interpretation of what Howie's saying in the quote.

Comment by alexrjl on EA is a Career Endpoint · 2021-05-19T21:24:15.825Z · EA · GW

I agree: a single rejection is not close to conclusive evidence, but it is still evidence on which you should update (though, depending on the field, possibly not very much).

Comment by alexrjl on What are things everyone here should (maybe) read? · 2021-05-19T08:52:07.076Z · EA · GW

Agree with this but would note that "The Signal and the Noise" should probably be your first intro or likely isn't worth bothering with. It's a reasonable intro but I got ~nothing out of it when I read it (while already familiar with Bayesian stats).

Comment by alexrjl on Some global catastrophic risk estimates · 2021-05-04T18:31:35.175Z · EA · GW

The "Metaculus" forecast weights users' forecasts by their track record and corrects for calibration; I don't think the details of how are public. Yes, you can only see the community one on open questions.

I'd recommend against drawing the conclusion you did from the second paragraph (or at least, against putting too much weight on it). Community predictions on different questions about the same topic on Metaculus can be fairly inconsistent, due to different users predicting on each.

Comment by alexrjl on On the longtermist case for working on farmed animals [Uncertainties & research ideas] · 2021-04-11T13:07:22.276Z · EA · GW

I already believed it and had actually been talking to someone about it recently, so I was surprised and pleased to come across the post, but I couldn't find a way of saying so that didn't just sound like "oh yeah, thanks for writing up my idea". Sorry for the confusion!

Comment by alexrjl on On the longtermist case for working on farmed animals [Uncertainties & research ideas] · 2021-04-11T11:37:46.813Z · EA · GW

Thanks for writing this, even accounting for suspicious convergence (which you were right to flag), it just seems really plausible that improving animal welfare now could turn out to be important from a longtermist perspective, and I'd be really excited to hear about more research in this field happening.

Comment by alexrjl on Research suggests BLM protests increase murder overall · 2021-04-09T22:03:05.488Z · EA · GW

released his preliminary findings on the Social Science Research network as a preprint, meaning the study has yet to receive a formal peer review.

It’s worth noting that Campbell didn’t subject the homicide findings to the same battery of statistical tests as he did the police killings since they were not the main focus of his research.

I thought there had also been some cautionary tales learned in the last year about widely publicising and discussing headline conclusions from preprint data without appropriate caveats. Apparently not.

Comment by alexrjl on Actions to take for a career change towards EA (advice needed) · 2021-04-09T13:36:51.620Z · EA · GW

There's the EA jobs facebook group, and I'll pm you a discord link.

It's worth noting that 80k has a lot of useful advice on how to think about career impact, and also the option to apply for advising, as well as the jobs board. There's also Probably Good (search for their forum post) and Animal Advocacy Careers.

Comment by alexrjl on EA Debate Championship & Lecture Series · 2021-04-07T09:36:35.051Z · EA · GW

I want to echo this. I think my own experience of debating has been useful to me in terms of my ability to intelligence-signal in person, but was pretty bad overall for my epistemics. One interesting thing about BP (which was the format I competed in most frequently at the highest level) was the importance, in the 4th speaker role, of identifying the cruxes of the debate (usually referred to as "clash"), which I think is really useful. Concluding that the side you've been told to favour has then "won" all of the cruxes is... less so.

Comment by alexrjl on Actions to take for a career change towards EA (advice needed) · 2021-04-06T18:07:04.933Z · EA · GW

All this advice seems really good, and I want to particularly echo this bit:

It might be worth reframing how you think about this as "how can I find a job that has the biggest impact", rather than "how can I get an EA job".

Comment by alexrjl on "Hinge of History" Refuted (April Fools' Day) · 2021-04-01T15:05:27.791Z · EA · GW

This post is already having a huge impact on some of the most influential philosophers alive today! Thanks so much for writing it.

Comment by alexrjl on Forget replaceability? (for ~community projects) · 2021-04-01T12:03:13.352Z · EA · GW

Evidence Action are another great example of "stop if you are in the downside case" done really well.

Comment by alexrjl on Any EAs familiar with Partha Dasgupta's work? · 2021-03-31T14:22:36.618Z · EA · GW

Interesting, thanks!

Comment by alexrjl on Any EAs familiar with Partha Dasgupta's work? · 2021-03-31T14:05:21.789Z · EA · GW

I was under the impression CSER was pretty "core EA"! Certainly I'd expect most highly engaged EAs to have heard of them, and there aren't that many people working on x-risk anywhere.

Comment by alexrjl on How much does performance differ between people? · 2021-03-29T21:02:01.865Z · EA · GW

I've been much less successful than LivB but would endorse it, though I'd note that there are substantially better objective metrics than cash prizes for many kinds of online play, and I'd have a harder time arguing that those were less reliable than subjective judgements of other good players. It somewhat depends on the sample though; at the highest stakes, the combination of a very small player pool and fairly small samples makes this quite believable.

Comment by alexrjl on Is laziness immoral? · 2021-03-28T12:03:27.038Z · EA · GW

Hi Jacob,

I think you might really enjoy and benefit from reading this blog by Julia Wise. While it's great that you have such a strong instinct to help people, we're in this game for the long haul, and you won't have a big impact by feeling terrible about yourself and feeling guilty if you don't make sacrifices.

In particular, it's very likely that focusing on doing well in college and then university is going to make a much bigger difference to your lifetime impact than whether you can get a part-time job to donate right now.

Comment by alexrjl on [Podcast] Thomas Moynihan on the History of Existential Risk · 2021-03-23T00:00:23.539Z · EA · GW

I've only discovered this idea relatively recently but have been extremely impressed so far. Looking forward to this episode!

Comment by alexrjl on [deleted post] 2021-03-12T11:24:21.381Z

Because the orgs in question have literally said so, because I think the people working there genuinely care about their impact and are competent enough to have heard of Goodhart's law, and because in several cases there have been major strategy changes which cannot be explained by a model of "everyone working there has a massive blindspot and is focused on easy to meet targets". As one concrete example, 80k's focus has switched to be very explicitly longtermist, which it was not originally. They've also published several articles about areas of their thinking which were wrong or could have been improved, which again I would not expect for an organisation merely focused on gaming its own metrics.

Comment by alexrjl on [deleted post] 2021-03-11T22:50:30.356Z

Yeah to be clear I meant that the decision making processes are probably informed by these things even if the metrics presented to donors are not, and from the looks of Ben's comment above this is indeed the case.

Comment by alexrjl on [deleted post] 2021-03-11T21:43:44.325Z

I think there's likely a difference here between:

What easily countable short-term goals and metrics are communicated to supporters? (bednet distributions, advising calls etc.)

and

What things do we actually care about and track internally on longer timescales, to feed into things like annual reviews and forward planning?

I'd be extremely surprised if 80k didn't care about the impact of their advisees, or AMF didn't care about reducing malaria.

Comment by alexrjl on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-09T15:26:00.812Z · EA · GW

I completely agree with all of this, and am glad you laid it out so clearly.

Comment by alexrjl on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-08T21:49:01.562Z · EA · GW

Despite disagreeing with most of it, including but not limited to the things highlighted in this post, I think that Torres's post is fairly characterised as thought-provoking. I'm glad Joshua included it in the syllabus, also glad he caveated its inclusion, and think this response by Hayden is useful.

I haven't interacted with Phil much at all, so this is a comment purely on the essay, and not a defense of other claims he's made or how he's interacted with you. 

Comment by alexrjl on What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings? · 2021-03-08T10:57:30.011Z · EA · GW

(for what it's worth, I don't actually think utilitarianism leads to the conclusions in the post, but I think other commenters have discussed this, and I think the general point in my first comment is more important)

Comment by alexrjl on What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings? · 2021-03-07T12:26:08.046Z · EA · GW

If you take moral uncertainty even slightly seriously, you should probably avoid doing things which would be horrifically evil according to a whole load of worldviews you don't subscribe to, even if according to your preferred worldview it would be fine.

Comment by alexrjl on A full syllabus on longtermism · 2021-03-06T19:30:51.313Z · EA · GW

This is fantastic, thank you so much for putting it together.

Comment by alexrjl on Early Alpha Version of the Probably Good Website · 2021-03-04T07:47:05.371Z · EA · GW

Thanks so much, I'll check these links out!

(I had abbreviated "Probably Good" to PG)

Comment by alexrjl on Progress Open Thread: March 2021 · 2021-03-03T14:26:27.734Z · EA · GW

Going to do my best to lean into Aaron's "this is a humility free zone" message from the first progress thread and hopefully get the ball rolling.

  • I won $1350 and a hoody in various forecasting competitions which finished in February ($850 + the hoody of which was performance-based; the other $500 was participation-based).
  • For a few reasons, I've gradually started to get the impression that people respect and are interested in what I have to say about things. I'm not sure how related this is to the above, or how sensible it is on their part, but it feels really good!
  • I was invited to join a weekly call with a few people I vaguely knew on Twitter, and it's been a highlight of most weeks! They're all really nice and very interesting to talk to.

Comment by alexrjl on [Podcast] Marcus Daniell on High Impact Athletes, Communicating EA, and the Purpose of Sport · 2021-03-03T14:02:01.699Z · EA · GW

Excited for this! It's been awesome to watch HIA's success so far and they still have incredible potential.

Comment by alexrjl on Early Alpha Version of the Probably Good Website · 2021-03-03T12:18:54.876Z · EA · GW

This seems like really excellent feedback for them!

I have a query about your final point, where I think I agree with the PG framing. In general, I think when people talk about the long-term future they are including timescales much longer than the few decades covered by most of the examples you mentioned, on the order of hundreds of years or even longer. This is one reason reducing existential risk is so popular: while affecting the shape of the future seems both extremely uncertain and fairly dependent on worldview, making sure that there is a future at all seems good from many perspectives (though not all). Am I correct in my interpretation that you were talking about "long term" mostly in the <100 year sense?

Comment by alexrjl on Early Alpha Version of the Probably Good Website · 2021-03-03T12:02:45.901Z · EA · GW

This seems like a great initiative. Congratulations for setting it up! Like a couple of other commenters, I like the fact you linked to 80k and AAC on the career profile page.

Comment by alexrjl on Why I'm concerned about Giving Green · 2021-03-03T11:43:58.364Z · EA · GW

When did they say these were much less cost-effective?


I asked them! The website does now make it clear, I think, that they think policy options are best, though some of that is a recent change, and the language is still less effective than I'd like.

What do you mean by it being justified? It looks like you mean 'does well on a comparison of immediate impact', but, supposing these things are likely to be interpreted as recommendations about what is most cost-effective, this approach sounds close to outright dishonesty, which seems like it would still not be justified. (I'm not sure to what extent they are presenting them that way.)

You're right that I meant "does well on a comparison of immediate impact" here, but your second point is, I think, really important. Having said that, while it's worth thinking about, I don't think the current presentation of the difference between offsetting and policy intervention could be fairly described as "dishonest". I think it is clear that GG thinks policy is more effective; it's just that the size of the difference is not emphasised.

I agree that, even in worlds where it produces the most immediate good from a donation perspective, presenting two options as equal when you think they are not is dishonest, and not justifiable. I don't think Giving Green has ever intended to do that, though.

In terms of CATF vs Sunshine, I had initially suspected that it might be the case that they thought CATF was much better but that Sunshine was worth including to capture a section of the donations market which broadly likes progressive stuff. I agree that this would not be acceptable without a caveat that they thought CATF was best. Having spoken to them, I don't think this is the case (and Dan can confirm if he's still following the thread); I think they genuinely think that there's no difference in expectation between CATF and TSM. I strongly disagree with this assessment, but do believe it to be genuine.

Comment by alexrjl on alexrjl's Shortform · 2021-03-03T11:22:49.899Z · EA · GW

This is why you should have done physics ;)