Comment by niel_bowerman on UK Income Tax & Donations · 2020-07-13T03:36:27.428Z · EA · GW

This post is extremely helpful, and I have referred to it multiple times as I plan my finances. Thanks again for putting it together.

Comment by niel_bowerman on Space governance is important, tractable and neglected · 2020-07-09T19:36:51.441Z · EA · GW

The importance of this and related topics is premised on humanity's ability to achieve interstellar travel and settle other solar systems. Nick Beckstead did a shallow investigation into this question back in 2014, which didn't find any knockdown arguments against. Posting this here mainly as I haven't seen some of these arguments discussed in the wider community much.

Comment by niel_bowerman on Atari early · 2020-04-04T00:26:50.345Z · EA · GW

[Spitballing] I'm wondering if Angry Birds has just not been attempted by a major lab with sufficient compute resources. If you trained an agent like Agent57 or MuZero on Angry Birds, I'd be curious whether it would outperform humans.

Comment by niel_bowerman on Niel Bowerman: Could climate change make Earth uninhabitable for humans? · 2020-01-17T20:28:09.048Z · EA · GW

Louis Dixon has written a helpful summary of this talk here. It also has some interesting discussion in the comments.

Comment by niel_bowerman on Growth and the case against randomista development · 2020-01-16T19:12:39.411Z · EA · GW

This is one of the most thought-provoking (for me) posts that I've seen on the forum for a while. Thanks to you both for taking the time to put this together!

Comment by niel_bowerman on [Notes] Could climate change make Earth uninhabitable for humans? · 2020-01-16T18:30:00.595Z · EA · GW

Thanks for flagging this. I think estimating temperature rise after burning all available fossil fuels is mostly educated guesswork: both estimating the total amount of available fossil fuels and estimating the climate response to burning them are hard.

However, I hadn't seen this Winkelmann et al. paper, which makes a valuable contribution. It suggests that the climate response is substantially sub-linear at higher levels of warming.

The notes currently posted above about how warm it would get if we burned all the fossil fuels were back-of-the-envelope calculations that I did in the slides' notes, and I wouldn't trust them much. They assume a linear model, which isn't reliable at these temperatures. I didn't end up including them in the talk because I didn't think they were robust enough. I'll ask Louis about removing them.

Thanks for flagging this Linch!

Comment by niel_bowerman on [Notes] Could climate change make Earth uninhabitable for humans? · 2020-01-16T00:04:54.302Z · EA · GW

Great question. I'm afraid I only have a vague answer: I would guess that the chance of climate change directly making Earth uninhabitable in the next few centuries is much smaller than 1 in 10,000. (That's ignoring the contribution of climate change to other risks.) I don't know how likely the LHC is to cause a black hole, but I would speculate with little knowledge that the climate habitability risk is greater than that.

As I mentioned in the talk, I think there are other emerging tech risks that are more likely and more pressing than this. But I would also encourage more folks with a background in climate science to focus on these tail risks if they are excited by questions in this space.

Comment by niel_bowerman on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-01-11T05:57:51.386Z · EA · GW

What is your high-level take on social justice in relation to EA?

Comment by niel_bowerman on Introducing Animal Advocacy Careers · 2020-01-08T22:13:04.575Z · EA · GW

Hi Lauren, this is Niel from 80,000 Hours. We've already discussed this over email, but I'm excited that new organisations are being set up in this space. 80,000 Hours has limited resources and is not planning on increasing the amount we invest in improving our advice for animal advocates in the near term. I'm hopeful that Animal Advocacy Careers will be able to better serve the animal advocacy community than we can. Best of luck with the project!

Comment by niel_bowerman on 8 things I believe about climate change · 2020-01-02T23:38:27.962Z · EA · GW

In the current regime (i.e. for increases of less than ~4 degrees C), warming is roughly linear with cumulative carbon emissions (which is different from CO2 concentrations). Atmospheric forcing (the net energy flux at the top of the atmosphere due to changes in CO2 concentrations) is roughly logarithmic with CO2 concentrations.

How temperatures will change with cumulative carbon emissions beyond ~4 degrees C above pre-industrial is unknown, but the relationship will probably be somewhere between super-linear and logarithmic, depending on what sorts of feedback mechanisms we end up seeing. I discuss this briefly at this point in the talk.
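The contrast between the two relationships can be sketched numerically. The forcing expression below is the standard simplified approximation (5.35 × ln(C/C0) W/m², from Myhre et al. 1998); the warming-per-cumulative-emissions coefficient is an illustrative placeholder, not a precise estimate.

```python
import math

# Radiative forcing is roughly logarithmic in CO2 *concentration*
# (simplified approximation: F = 5.35 * ln(C/C0) W/m^2).
def forcing_wm2(conc_ppm, baseline_ppm=278.0):
    return 5.35 * math.log(conc_ppm / baseline_ppm)

# Warming is roughly linear in *cumulative* carbon emissions in the
# current regime; ~1.6 C per 1000 GtC is an illustrative value only.
def warming_c(cumulative_gtc, tcre_per_1000gtc=1.6):
    return tcre_per_1000gtc * cumulative_gtc / 1000.0

# Each successive doubling of CO2 adds the same forcing increment:
print(round(forcing_wm2(556) - forcing_wm2(278), 2))
print(round(forcing_wm2(1112) - forcing_wm2(556), 2))

# Under the linear model, each extra 1000 GtC adds the same warming:
print(round(warming_c(1000), 1))
print(round(warming_c(2000), 1))
```

The two doubling increments come out identical, which is what "logarithmic in concentration" means in practice, while warming scales proportionally with cumulative emissions.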

Comment by niel_bowerman on Accuracy issues in FAO animal numbers · 2019-12-06T19:01:10.739Z · EA · GW

Btw, your link to FAO feedback on Indonesian broiler chickens leads to a discussion about Latvian egg-laying hens instead.

Comment by niel_bowerman on The case for building expertise to work on US AI policy, and how to do it · 2019-02-12T03:31:08.133Z · EA · GW

I think working on AI policy in an EU context is also likely to be valuable; however, few (if any) of the world's very top AI companies are based in the EU (except DeepMind, which will soon be outside the EU after Brexit). Nonetheless, it would be very helpful to have more AI policy expertise in an EU context, and if you can contribute to that it could be valuable. It's worth mentioning that for UK citizens it might be better to focus on British AI policy.

Comment by niel_bowerman on Effective altruism outreach plans · 2014-05-09T18:37:00.000Z · EA · GW

This is useful, thanks. We plan on using the same back-end to host both sites. Do you therefore suggest that we have a clear boundary between the .com and .org sites, rather than simply letting people continue on whichever URL suffix they entered the site through?

Comment by niel_bowerman on Effective altruism outreach plans · 2014-05-09T18:34:00.000Z · EA · GW

No, it was not based on the TMM, though I can see that there are some rough similarities (i.e. these are both stage-based models of human engagement).

Comment by niel_bowerman on Effective altruism outreach plans · 2014-05-09T18:30:00.000Z · EA · GW

I agree. My current best guess is to provide just one or two key actions to take for new users, with the alternative routes still available but not as prominent.

Comment by niel_bowerman on Where I'm giving and why: Will MacAskill · 2014-01-02T21:26:00.000Z · EA · GW

I'm not sure of the exact numbers, but my impression is that FHI has perhaps half a dozen full-time staff members, while CSER has one part-time person, based at FHI, who has been working on grant applications. I'm unclear about the long-term financial viability of having this person work on applying for grants.

Comment by niel_bowerman on Where I'm giving and why: Will MacAskill · 2014-01-02T21:15:00.000Z · EA · GW

Yes, unless you were able to meet with people and create time to develop the necessary trust. Also, like any grant-making foundation, I wouldn't expect people in the registry to fund all or even most of the opportunities that came along, though the registry would lose some of its value if it appeared unlikely to give out donations to good projects.

Comment by niel_bowerman on Where I'm giving and why: Will MacAskill · 2014-01-02T21:00:00.000Z · EA · GW

I would imagine Will donates to multiple charities because the impact of his donations comes primarily through their ability to inspire others to donate. Because of Will's profile as a columnist and public intellectual, he often meets with potential donors who favour one of his recommendations over the others, and Will is able to say that he also donates to them, which may increase the likelihood of donations via the "actions speak louder than words" heuristic.

This would apply to others if they believe {the impact of donations they can inspire by donating to multiple charities} - {the impact of donations they can inspire by donating only to their top recommended charity} > {the impact of donating everything to their top recommended charity} - {the impact of instead donating to multiple charities}. Presumably Will believes this inequality holds in his case. The exact quantities of donations you would need to inspire for this to be true depend on your assessment of the relative efficiencies of the different charities you are considering donating to. Of course, in reality these quantities are virtually impossible to calculate, so there will always be significant uncertainty associated with this decision.
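The inequality can be made concrete with a quick sketch. All the figures below are hypothetical placeholders chosen purely for illustration, not estimates of any real charity's impact.

```python
# Hypothetical impact figures in arbitrary units, purely illustrative.
inspired_if_split = 10.0   # donations inspired by visibly donating to multiple charities
inspired_if_single = 7.0   # donations inspired by donating only to the top pick
direct_if_single = 5.0     # direct impact of giving everything to the top pick
direct_if_split = 4.0      # direct impact of splitting across several charities

# Splitting is worthwhile iff the gain in inspired donations exceeds
# the loss in direct impact from not concentrating on the top pick:
gain_in_inspiration = inspired_if_split - inspired_if_single
loss_in_direct_impact = direct_if_single - direct_if_split
should_split = gain_in_inspiration > loss_in_direct_impact

print(should_split)  # True with these placeholder numbers (3.0 > 1.0)
```

With these made-up numbers the inspiration gain (3.0) outweighs the direct-impact loss (1.0), so splitting wins; flip the magnitudes and the conclusion reverses, which is exactly the judgment call the inequality captures.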

It is also possible that Will is using some variant of the argument used by Julia Wise: "I wouldn’t want the whole effective altruist community to donate to only one place. So I’m okay with dividing things up a bit." /ea/5l/where_im_giving_and_why_julia_wise/

It is also interesting to note that many of the GiveWell staff have chosen to donate to only one of their recommendations, presumably because they believe they can have more impact that way.

Comment by niel_bowerman on Navigating the epistemologies of effective altruism · 2014-01-02T20:31:00.000Z · EA · GW

"Yet from what I understand, GiveWell refuses to recommend any of these as top charities. My impression is that GiveWell finds it highly unlikely that any of these organizations are as effective as their recommended charities. Of course, many of these organizations exist on the assumption that they are. This area seems particularly awkward as all of these meta-charities promote GiveWell publicly, leading to several interviews. I imagine that it’s better off for everyone that Givewell and CEA appear as close friends, yet internally it seems like there’s a bit of tension over this stark disagreement on the need for CEA’s existence. This disagreement is somewhat showcased in the comments here."

Regardless of whether GiveWell thought that CEA's organisations were more effective than their own recommendations, I think it is rational for GiveWell not to recommend CEA's organisations. Such a recommendation would quickly lead to the 'infinite regression problem' (one should donate to an organisation, that encourages people to donate to an organisation, that encourages people to donate to an organisation, that... etc. ... that encourages people to donate to effective first-order work; see Ben Todd's Master's thesis on career choice for more discussion). GiveWell would risk the accusation of contributing to some sort of charitable Ponzi scheme, an accusation I have heard made when a charity evaluator has discussed recommending another charity evaluator. Of course, there are ways around this in practice (again, see Ben Todd's thesis), but it would still pose a reputational risk for GiveWell to recommend a CEA organisation given their status as meta-charities.

Comment by niel_bowerman on Where I'm giving and why: Will MacAskill · 2014-01-02T20:11:00.000Z · EA · GW

In addition to Carl's comments on why the registry would be easier, it has the added benefit that people are able to control their own funds and thus may be more willing to contribute to the 'fund'.

"Do you really think people would just send money to 1st-world strangers (ii) on the promise that the recipient was training to earn to give?" They needn't be strangers. This has already happened in the UK EA community amongst EAs who met through 80,000 Hours and supported each other financially in the early training and internship stages of their earning-to-give careers.