Posts

How much does a vote matter? 2020-10-29T17:21:08.065Z
When you shouldn't use EA jargon and how to avoid it 2020-10-26T12:48:29.850Z
'Ugh Fields', or why you can't even bear to think about that task 2020-09-14T16:39:48.330Z
Consider a wider range of jobs, paths and problems if you want to improve the long-term future 2020-06-29T14:48:55.111Z
Eleven recent 80,000 Hours articles on how to stop COVID-19 & other pandemics 2020-04-08T21:40:11.355Z
Podcast with Ben Todd covering the key ideas of 80,000 Hours (2h 57m) 2020-03-09T18:48:27.200Z
Attempted summary of the 2019-nCoV situation — 80,000 Hours 2020-02-03T22:37:44.413Z
Upcoming interviews on the 80,000 Hours Podcast 2019-07-01T14:08:39.735Z
Giving What We Can is still growing at a surprisingly good pace 2018-09-14T02:34:11.214Z
Do Prof Eva Vivalt's results show 'evidence-based' development isn't all it's cut out to be? 2018-05-21T16:28:27.239Z
Rob Wiblin's top EconTalk episode recommendations 2017-10-19T00:08:06.199Z
How accurately does anyone know the global distribution of income? 2017-04-06T04:49:45.335Z
In some cases, if a problem is harder humanity should invest more in it, but you should be less inclined to work on it 2017-02-21T10:29:01.945Z
Philosophical Critiques of Effective Altruism by Prof Jeff McMahan 2016-05-03T21:05:28.852Z
Why don't many effective altruists work on natural resource scarcity? 2016-02-20T12:32:14.584Z
Let's conduct a survey on the quality of MIRI's implementation 2016-02-19T07:18:55.158Z
The most persuasive writing neutrally surveys both sides of an argument 2016-02-18T08:42:38.857Z
How you can contribute to the broader EA research project 2016-02-17T09:23:26.227Z
If tech progress might be bad, what should we tell people about it? 2016-02-16T10:26:05.764Z
Should effective altruists work on taxation of the very rich? 2016-02-15T12:42:41.292Z
The Important/Neglected/Tractable framework needs to be applied with care 2016-01-24T15:10:55.665Z
Notice what arguments aren't made (but don't necessarily go and make them) 2016-01-24T13:52:45.111Z
If you don't have good evidence one thing is better than another, don't pretend you do 2015-12-21T19:19:54.464Z
What if you want to have a big social impact and live in a poorer country? 2015-12-20T16:58:33.276Z
How big a deal could GWWC be? Pretty big. 2015-12-20T00:46:45.843Z
An under-appreciated observation about giving now vs later 2015-12-19T22:26:19.482Z
What is a 'broad intervention' and what is a 'narrow intervention'? Are we confusing ourselves? 2015-12-19T16:12:49.618Z
Threads on Facebook worth being able to refer back to 2015-12-19T15:09:24.619Z
The most read 80,000 Hours posts from the last 3 months 2015-12-18T18:16:13.552Z
No, CS majors didn't delude themselves that the best way to save the world is to do CS research 2015-12-15T17:13:38.977Z
Two observations about 'skeptical vs speculative' effective altruism 2015-12-15T14:06:03.863Z
Saying 'AI safety research is a Pascal's Mugging' isn't a strong response 2015-12-15T13:48:27.186Z
Six Ways To Get Along With People Who Are Totally Wrong* 2015-02-24T12:41:43.096Z
Help a Canadian give with a tax-deduction by swapping donations with them! 2014-12-16T00:05:45.810Z
Generic good advice: do intense exercise often 2014-12-14T17:21:38.322Z
How can you compare helping two different people in different ways? 2014-12-11T17:08:02.170Z
Ideas for new experimental EA projects you could fund! 2014-12-02T02:47:04.545Z
Should we launch a podcast about high-impact projects and people? 2014-12-01T16:52:41.206Z
The Centre for Effective Altruism is hiring to fill five roles in research, operations and outreach 2014-11-25T13:48:36.283Z
Why it should be easy to dominate GiveWell’s recommendations 2013-07-17T04:00:46.000Z

Comments

Comment by robert_wiblin on How You Can Counterfactually Send Millions of Dollars to EA Charities · 2020-12-29T16:25:03.522Z · EA · GW

"The 2.16% U.S. federal funds rate in 2019 is one of the most conservative interest rates possible."

The U.S. Federal Funds rate has been effectively 0% since April 2020 and was roughly 0% for six years from 2009 to 2015. The same is roughly true of the UK. Central banks in both countries are saying they'll keep rates low for years to come.

I can't immediately find a reputable business savings account in the UK/US that currently offers more than 1%.

Those that offer the highest rates (approaching 1%) on comparison sites tend to come with conditions (e.g. you lock the money up for a period, or have to keep depositing regularly), and usually cap the amount on which you can earn interest at a level low enough to be binding for these organisations.

These accounts usually offer a high rate to attract customers for a while, then dramatically reduce the interest rate and trust you won't be bothered moving your money. I think that's their basic business model.

Opening bank accounts for non-profits, at least in the UK, is a pain — something that will take a few weeks, and some time/attention from the operations team, management and trustees (who are needed for e.g. security checks). It looks like you usually won't be able to put in more than a million dollars/pounds in any given account, often less.

So you'd need to open many accounts, keep track of them, secure the chequebooks, have them audited annually, integrate them into your bookkeeping system, change the signatures when staff turn over, figure out the idiosyncratic requirements to pull out money when you need to, and so on.

This may sound simple but if you've worked in operations you'll know it's actually a big hassle.

In return, for each account opened you make <£10k a year, and probably need to keep closing accounts and moving your money into new ones every few years, as the teaser rate used to draw you in is removed.

This may all be worth it, but it's far from a no-brainer, as these organisations have other fruitful projects they could be using staff to pursue.

Comment by robert_wiblin on If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant · 2020-11-25T16:06:15.357Z · EA · GW

In addition to the issues raised by other commentators, I would worry that someone trying to work on something they're a bad fit for can easily do harm.

That especially goes for things related to existential risk.

And in addition to the obvious mechanisms, having most of the people in a field be ill-suited to what they're doing but persisting for 'astronomical waste' reasons will mean most participants struggle to make progress, get demoralized, and repel others from joining them.

Comment by robert_wiblin on How much does a vote matter? · 2020-11-02T18:24:01.043Z · EA · GW

He says he's going to write a response. If I recall correctly, Jason isn't a consequentialist, so he may have a different take on what kinds of things we can have a duty to do.

Comment by robert_wiblin on How much does a vote matter? · 2020-10-31T17:27:14.225Z · EA · GW

Want to write a TLDR summary? I could find somewhere to stick it.

Comment by robert_wiblin on How much does a vote matter? · 2020-10-31T17:26:38.670Z · EA · GW

It seems like to figure out whether it's a good use of time for 300 people like you to vote, you still need to figure out if it's worth it for any single one of them.

Comment by robert_wiblin on When you shouldn't use EA jargon and how to avoid it · 2020-10-30T12:35:25.949Z · EA · GW

I'm actually more favourable to a smaller EA community, but I still think jargon is bad. Using jargon doesn't disproportionately appeal to the people we want.

The most capable folks are busy with other stuff and don't have time to waste trying to understand us. They're also more secure and uninterested in any silly in-group signalling games.

Comment by robert_wiblin on When you shouldn't use EA jargon and how to avoid it · 2020-10-27T12:10:14.492Z · EA · GW

Yes but grok also lacks that connotation to the ~97% of the population who don't know what it means or where it came from.

Comment by robert_wiblin on [Link] "Where are all the successful rationalists?" · 2020-10-19T15:04:47.905Z · EA · GW

The EA community seems to have a lot of very successful people by normal social standards, pursuing earning to give, research, politics and more. They are often doing better by their own lights as a result of having learned things from other people interested in EA-ish topics. Typically they aren't yet at the top of their fields but that's unsurprising as most are 25-35.

The rationality community, inasmuch as it doesn't overlap with the EA community, also has plenty of people who are successful by their own lights, but their goals tend to be to become thinkers and writers who offer the world fresh ideas and a unique perspective on things. That does seem to be the comparative advantage of that group. So then it's not so surprising that we don't see lots of people e.g. getting rich. They mostly aren't trying to. 🤷‍♂️

Comment by robert_wiblin on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T20:47:27.982Z · EA · GW

To better understand your view, what are some cases where you think it would be right to either

  1. not invite someone to speak, or
  2. cancel a talk you've already started organising,

but only just?

That is, cases where it's just slightly over the line of being justified.

Comment by robert_wiblin on Can my self-worth compare to my instrumental value? · 2020-10-13T10:58:10.722Z · EA · GW

For whatever reason people who place substantial intrinsic value on themselves seem to be more successful and have a larger social impact in the long term. It appears to be better for mental health, risk-taking, and confidence among other things.

You're also almost always better placed than anyone else to provide the things you need — e.g. sleep, recreation, fun, friends, healthy behaviours — so it's each person's comparative advantage to put extra effort into looking out for themselves. I don't know why, but doing that is more motivating if it feels like it has intrinsic and not just instrumental value.

Even the most self-effacing among us have a part of their mind that is selfish and cares about their welfare more than the welfare of strangers.

Folks who currently neglect their wellbeing and intrinsic value to a dangerous extent can start by fostering ways of thinking that endorse and build up that selfishness.

Comment by robert_wiblin on Keynesian Altruism · 2020-09-17T11:52:52.969Z · EA · GW

Yep that sounds good, non-profits should aim to have fairly stable expenditure over the business cycle.

I think I was thrown off your true motivation by the name 'Keynesian altruism'. It might be wise to rename it 'countercyclical' so it doesn't carry the implication that you're looking for an economic multiplier.

Comment by robert_wiblin on Keynesian Altruism · 2020-09-15T14:50:59.885Z · EA · GW

The idea that charities should focus on spending money during recessions because of the extra benefit that provides seems wrong to me.

Using standard estimates of the fiscal multiplier during recessions — and ignoring any offsetting effects your actions have on fiscal or monetary policy — if a US charity spends an extra $1 during a recession it might raise US GDP by between $0 and $3.

If you're a charity spending $1, and just generally raising US GDP by $3 is a significant fraction of your total social impact, you must be a very ineffective organisation. I could not recommend giving to such a project.

I'd think such a gain would be swamped by other issues like investment returns, us learning about better charities in future, or the worst problems getting solved leaving us with worse giving opportunities, and so on.

An exception might be if you independently thought something like GiveDirectly was the best option and wasn't going to be beaten by another option in future. Then giving money for dispersal during a recession in the recipient country might be, say, twice as good as giving it outside of recession.

There's a bunch of discussion of these issues in my interview with Phil Trammell.

Comment by robert_wiblin on More empirical data on 'value drift' · 2020-09-01T20:53:17.036Z · EA · GW

Is there even 1 exclusively about people working at EA organisations?

If someone had taken a different job with the goal of having a big social impact, and we didn't think what they were doing was horribly misguided, I don't think we would count them as having 'dropped out of EA' in any of the 6 data sets.

Comment by robert_wiblin on The case of the missing cause prioritisation research · 2020-08-17T15:52:35.148Z · EA · GW

"For example 80000 Hours have stopped cause prioritisation work to focus on their priority paths"

Hey Sam — being a small organisation 80,000 Hours has only ever had fairly limited staff time for cause priorities research.

But I wouldn't say we're doing less of it than before, and we haven't decided to cut it. For instance see Arden Koehler's recent posts about Ideas for high impact careers beyond our priority paths and Global issues beyond 80,000 Hours’ current priorities.

We aim to put ~10% of team time into underlying research, where one topic is trying to figure out which problems and paths go into each priority level. We also have podcast episodes on newer problems from time to time.

All that said, I am sympathetic to the idea that as a community we are underinvesting in cause priorities research.

Comment by robert_wiblin on Intellectual Diversity in AI Safety · 2020-07-23T11:14:21.474Z · EA · GW

It seems like lots of active AI safety researchers, even a majority, are aware of Yudkowsky and Bostrom's views but only agree with parts of what they have to say (e.g. Russell, Amodei, Christiano, the teams at DeepMind, OpenAI, etc).

There may still not be enough intellectual diversity, but having the same perspective as Bostrom or Yudkowsky isn't a filter to involvement.

Comment by robert_wiblin on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-22T11:14:36.794Z · EA · GW

As Michael says, common sense would indicate I must have been referring to the initial peak, or the peak in interest/panic/policy response, or the peak in the UK/Europe, or peak where our readers are located, or — this being a brief comment on an unrelated topic — just speaking loosely and not putting much thought into my wording.

FWIW it looks like globally the rate of new cases hasn't peaked yet. I don't expect the UK or Europe will return to a situation as bad as the one they went through in late March and early April. Unfortunately the US and Latin America are already doing worse than the UK and Europe were then.

Comment by robert_wiblin on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T11:27:28.195Z · EA · GW

I think you know what I mean — the initial peak in the UK, the country where we are located, in late March/April.

Comment by robert_wiblin on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-14T15:27:50.858Z · EA · GW

There's often a few months between recording and release and we've had a handful of episodes that took a frustratingly long time to get out the door, but never a year.

The time between the first recording and release for this one was actually 9 months. The main reason was Howie and Ben wanted to go back and re-record a number of parts they didn't think they got right the first time around, and it took them a while to both be free and in the same place so they could do that.

A few episodes were also pushed back so we could get out COVID-19 interviews during the peak of the epidemic.

Comment by robert_wiblin on Study results: The most convincing argument for effective donations · 2020-07-01T11:25:49.177Z · EA · GW

Thanks for doing this research, nice work.

Could you make your figure a little larger? It's hard to read on a desktop. It might also be easier for the reader if each of the five arguments had a one-word name that captured the gist of its actual content.

"As you can see, the winner in Phase 2 was Argument 9 by a nose. Argument 9 was also the winner by a nose in Phase 1, and thus the winner overall."

I don't think this is quite right. Arguments 5 and 12 are very much within the confidence interval for Argument 9. Eyeballing it I would guess we can only be about 60% confident that argument 9 would do better again if you repeated the experiment.
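
For anyone who wants to make that kind of comparison precise rather than eyeballing it, here's a minimal sketch of the sort of simulation I have in mind, using made-up means and standard errors rather than the study's actual numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point estimates and standard errors for three of the arguments.
# These are illustrative placeholders, not the study's actual numbers.
estimates = {"arg5": (0.44, 0.05), "arg9": (0.47, 0.05), "arg12": (0.45, 0.05)}

n_sims = 100_000
draws = {name: rng.normal(mean, se, n_sims) for name, (mean, se) in estimates.items()}

# Rough probability that Argument 9's true effect exceeds each rival's,
# treating the sampling distributions as approximately normal.
for rival in ("arg5", "arg12"):
    p = np.mean(draws["arg9"] > draws[rival])
    print(f"P(arg9 > {rival}) ~= {p:.2f}")
```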

I would summarise the results as follows:

  • All five arguments substantially outperformed the control, on average increasing giving by around 45%.
  • We also had some evidence that Arguments 5, 9 and 12 all outperformed Arguments 3 and 14, perhaps having about 30% more impact.

Comment by robert_wiblin on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T21:20:30.211Z · EA · GW

Hi Tobias — thanks for the ideas!

Invertebrate welfare is wrapped into 'Wild animal welfare', and reducing long-term risks from malevolent actors is partially captured under 'S-risks'. We'll discuss the other two.

Comment by robert_wiblin on Should EA Buy Distribution Rights for Foundational Books? · 2020-06-17T14:37:38.354Z · EA · GW

For future reference, next time you need to look up the page number for a citation, Library Genesis can quickly let you access a digital copy of almost any book: https://en.wikipedia.org/wiki/Library_Genesis

Comment by robert_wiblin on Will protests lead to thousands of coronavirus deaths? · 2020-06-04T14:15:09.774Z · EA · GW

I didn't mean to imply that the protests would fix the whole problem, obviously they won't.

As you say you'd need to multiply through by a distribution for 'likelihood of success' and 'how much of the problem is solved'.

Comment by robert_wiblin on Will protests lead to thousands of coronavirus deaths? · 2020-06-04T11:01:53.448Z · EA · GW

I think a crux for some protesters will be how much total damage they think bad policing is doing in the USA.

While police killings or murders draw the most attention, much more damage is probably done in other ways, such as through over-incarceration, petty harassment, framing innocent people, bankrupting folks through unnecessary fines, enforcing bad laws such as drug prohibition, assaults, and so on. And that total damage accumulates year after year.

On top of this we could add the burden of crime itself that results from poor policing practices, including a lack of community trust in police due to their oppressive behaviour and lack of accountability.

Regardless of where a consequentialist analysis would come down, it is a tragedy that people feel they need to choose between missing an opportunity to fix a horrible system of state violence, and not spreading a dangerous pandemic.

Comment by robert_wiblin on How can I apply person-affecting views to Effective Altruism? · 2020-05-06T12:47:58.632Z · EA · GW

If I weren't interested in creating more new beings with positive lives I'd place greater priority on:

  • Ending the suffering and injustice suffered by animals in factory farming
  • Ending the suffering of animals in the wilderness
  • Slowing ageing, or cryonics (so the present generation can enjoy many times more positive value over the course of their lives)
  • Radical new ways to dramatically raise the welfare of the present generation (e.g. direct brain stimulation as described here)

I haven't thought much about what would look good from a conservative Christian worldview.

Comment by robert_wiblin on Eleven recent 80,000 Hours articles on how to stop COVID-19 & other pandemics · 2020-04-12T23:07:30.598Z · EA · GW

Hi PBS, I understand where you're coming from and expect many policy folks may well be having a bigger impact than front-line doctors, because in this case prevention is probably better than treatment.

At the same time I can see why we don't clap for them in that way, because they're not taking on a particularly high risk of death and injury in the same way the hospital staff are right now. I appreciate both, but on a personal level I'm more impressed by people who continue to accept a high risk of contracting COVID-19 in order to treat patients.

Comment by robert_wiblin on Toby Ord’s ‘The Precipice’ is published! · 2020-03-09T18:32:05.830Z · EA · GW

I've compiled 16 fun or important points from the book for the write-up of my interview with Toby, which might well be of interest to people here. :)

Comment by robert_wiblin on Who should give sperm/eggs? · 2020-02-13T11:20:38.883Z · EA · GW

Hi Khorton — yes as I responded to Denise, it appears the one year thing must have been specific to the (for-profit) bank I spoke with. They pay so many up-front costs for each new donor I think they want to ensure they get a lot of samples out of each one to be able to cover them.

And perhaps they were highballing the 30+ number, so that you couldn't say they didn't warn you should the most extreme thing happen, even if it's improbable.

Comment by robert_wiblin on Who should give sperm/eggs? · 2020-02-12T23:00:00.385Z · EA · GW

Hmmmm, this is all what I was told at one place. Maybe some of these rules — 30 kids max, donating for a year at a minimum, or the 99% figure — are specific to that company, rather than being UK-wide norms/regulations.

Or perhaps they were rounding up to 99% to just mean 'the vast majority'.

I'd forgotten about the ten family limit, thanks for the reminder.

Like you I have the impression that they're much less selective on eggs.

Comment by robert_wiblin on Who should give sperm/eggs? · 2020-02-12T21:24:09.348Z · EA · GW

In some ways the UK sperm donation process is an even more serious commitment than egg donation.

From what I was told, the rejection rate is extremely high — close to 99% of applicants are filtered out for one reason or another. If you get through that process they'll want you to go in and donate once a week or more, for at least a year. Each time you want to donate, you can't ejaculate for 48 hours beforehand.

And the place I spoke to said they'd aim to sell enough sperm to create 30 kids in the UK, and even more overseas.

The ones born in the UK can find out who you are and contact you once they turn 18. With so many children potentially resulting, there's a good chance that a number will do so. It would be worth thinking ahead of time how you'd respond, and whether that's something you'll want in your life in ~20 years' time.

Comment by robert_wiblin on Attempted summary of the 2019-nCoV situation — 80,000 Hours · 2020-02-08T18:51:37.931Z · EA · GW

I know 2 working in normal pandemic preparedness and 2-3 in EA GCBR stuff.

I can offer introductions though they are probably worked off their feet just now. DM me somewhere?

Comment by robert_wiblin on Attempted summary of the 2019-nCoV situation — 80,000 Hours · 2020-02-06T14:28:19.755Z · EA · GW

Thanks for the detailed feedback Adam. :)

Comment by robert_wiblin on Should Longtermists Mostly Think About Animals? · 2020-02-06T14:27:59.009Z · EA · GW

Part of the issue might be the subheading "Space colonization will probably include animals".

If the heading had been 'might', then people would be less likely to object. Many things 'might' happen!

Comment by robert_wiblin on Should Longtermists Mostly Think About Animals? · 2020-02-05T19:39:51.382Z · EA · GW

80% seems reasonable. It's hard to be confident about many things that far out, but:

i) We might be able to judge what things seem consistent with others. For example, it might be easier to say whether we'll bring pigs to Alpha Centauri if we go, than whether we'll ever go to Alpha Centauri.

ii) That we'll terraform other planets is itself fairly speculative, so it seems fair to meet speculation with other speculation. There's not much alternative.

iii) Inasmuch as we're focussing in on (what's in my opinion) a narrow part of the whole probability space — like flesh and blood humans going to colonise other stars and bringing animals with them — we can develop approaches that seem most likely to work in that particular scenario, rather than finding something that would hypothetically work across the whole space.

Comment by robert_wiblin on Should Longtermists Mostly Think About Animals? · 2020-02-04T00:07:02.352Z · EA · GW

I apologise if I'm missing something as I went over this very quickly.

I think a key objection for me is to the idea that wild animals will be included in space settlement in any significant numbers.

If we do settle space, I expect most of that, outside of this solar system, to be done by autonomous machines rather than human beings. Most easily habitable locations in the universe are not on planets, but rather freestanding in space, drawing on resources from asteroids and solar energy.

Autonomous intelligent machines will be at a great advantage over animals from Earth, who are horribly adapted to survive a long journey through interstellar space or to thrive on other planets.

In a wave of settlement machines should vastly outpace actual humans and animals as they can travel faster between stars and populate those star systems more rapidly.

If settlement is done by 'humans' it seems more likely to be performed by emulated human minds running on computer systems.

In addition to these difficulties, there is no practical reason to bring animals. By that stage of technological development we will surely be eating meat produced without a whole animal, if we eat meat at all. And if we want to enjoy the experience of natural environments on Earth we will be able to do it in virtual reality vastly more cheaply than terraforming the planets we arrive at.

If I did believe animals were going to be brought on space settlement, I would think the best wild-animal-focussed project would be to prevent that from happening, by figuring out what could motivate people to do so, and pointing out the strong arguments against it.

Comment by robert_wiblin on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-03T19:34:13.663Z · EA · GW

Howie and I just recorded a 1h15m conversation going through what we do and don't know about nCoV for the 80,000 Hours Podcast.

We've also compiled a bunch of links to the best resources on the topic that we're aware of which you can get on this page.

Comment by robert_wiblin on Growth and the case against randomista development · 2020-01-20T15:20:27.201Z · EA · GW

I've guessed this is the case on 'back of the envelope' grounds for a while, so nice to see someone put more time into evaluating it.

It's not true to say EAs have been blindly on board with RCTs — I've been saying economic policy is probably the top priority for years and plenty of people have agreed that's likely the case. But I don't work on poverty so unfortunately wasn't able to take it further than that.

Comment by robert_wiblin on Making decisions under moral uncertainty · 2020-01-20T15:13:10.823Z · EA · GW

Will's book, 'Moral Uncertainty', is coming out next month for those who are interested in the topic: https://www.amazon.co.uk/Moral-Uncertainty-William-MacAskill/dp/0198722273

Comment by robert_wiblin on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2020-01-17T12:33:13.508Z · EA · GW

Hi Jessica, IIRC the main problem you'll likely encounter is that some naïve cost-effectiveness estimates will give you a really low figure, like donating $1 to corporate campaigns is as effective as being vegan a whole year. (Not exactly, but that order of magnitude.)

Given that, I'm inclined to just make it the lowest amount that feels substantial and like it would actually plausibly be enough to make someone else veg*n for a year — for me that means about $100 a year.

Comment by robert_wiblin on Assumptions about the far future and cause priority · 2019-11-12T12:02:15.872Z · EA · GW

Yes it needs to go in an explanation of how we score scale/importance in the problem framework! It's on the list. :)

Alternatively I've been wondering if we need a standalone article explaining how we can influence the long term, and what are signs that something might be highly leveraged for doing that.

Comment by robert_wiblin on Assumptions about the far future and cause priority · 2019-11-11T17:07:34.914Z · EA · GW

As a first pass, the rate of improvement should asymptote towards zero so long as there's a theoretical optimum and declining returns to further research before the heat death of the universe, which seem like pretty mild assumptions.

As an analogy, there's an impossibly wide range of configurations of matter you could in theory use to create a glass from which we can drink water. But we've already gotten most of the way towards the best glass for humans, I would contend. I don't think we could keep improving glasses in any meaningful way using a galaxy's resources for a trillion years.

Keep in mind eventually the light cone of each star shrinks so far it can't benefit from research conducted elsewhere.

Comment by robert_wiblin on Assumptions about the far future and cause priority · 2019-11-11T14:43:57.281Z · EA · GW

Having settled most of the accessible universe we'll have hundreds of billions or even trillions of years to try to keep improving how we're using the matter and energy at our disposal.

Doesn't it seem almost certain that over such a long time period our annual rate of improvement in the value generated by the best configuration would eventually asymptote towards zero? I think that's all that's necessary for safety to be substantially more attractive than speed-ups.

(BTW safety is never 'infinitely' preferred because even on a strict plateau view the accessible universe is still shrinking by about a billionth a year.)

Comment by robert_wiblin on Effective Altruism and International Trade · 2019-11-07T12:01:42.963Z · EA · GW

"changes outlook towards life, makes married life less unequal for women, increases self-respect, self-confidence, allows for better participation in society"

I agree these are all benefits, but I class them as instrumental benefits, and imagine most others here do as well.

They are benefits inasmuch as they go on to improve people's well-being.

"the human development index, it includes education as an outcome, valuable for its own sake"

The HDI also includes GDP, which presumably nobody thinks is valuable for its own sake (i.e. widgets are only useful inasmuch as they make people better off when they're used, not valuable merely for existing). In my opinion education is good to have in the HDI as a proxy for all of the many instrumental benefits it provides people.

Most people here place great weight on a welfarist theory of value: https://plato.stanford.edu/entries/well-being/ . If you disagree with welfarism then it would pay to set education aside for a minute and go back and discuss more fundamental issues in moral philosophy.

Comment by robert_wiblin on Effective Altruism and International Trade · 2019-11-06T15:26:42.643Z · EA · GW

You quote GiveWell as saying:

"We do not place much intrinsic value on increasing time in school or test scores"

But you cut off the quote in a very misleading way indeed:

"We do not place much intrinsic value on increasing time in school or test scores, although we think that such improvements may have instrumental value."

Unless you think spending time in school is very useful even if it has no other benefits to kids (i.e. they don't learn anything they use later in life), GiveWell is surely right here that the benefits are mostly instrumental.

It is wrong to quote others in a way that misrepresents their view like this.

You also say:

"Exactly zero dollars went to education ... The overall importance given to education is zero."

  1. GiveWell just didn't think the very best giving opportunities they could support were in education — that doesn't mean they think it has no value. They also didn't buy people food, but presumably they don't think eating is a useless activity and people can safely starve themselves.
  2. GiveWell isn't all of EA. Some EAs probably have a very positive view of the value of education. There's a wide range of views on most issues.

Comment by robert_wiblin on Effective Altruism and International Trade · 2019-11-05T13:10:58.810Z · EA · GW

"If growth leads to education, then why is South Africa behind Jamaica and India, how about Bangladesh > Pakistan? Sri Lanka > Brazil"

Because it's not the only factor?

"Its very strange EA says education has no value"

'EA' does not say this, and I don't know anyone involved in EA who holds such a strong view.

Comment by robert_wiblin on Oddly, Britain has never been happier · 2019-10-24T19:21:11.061Z · EA · GW

Hi bfinn, maybe have a listen to this episode of the Freakonomics podcast: http://freakonomics.com/podcast/new-freakonomics-radio-podcast-the-suicide-paradox/

It's one of the things that shaped my view that cross-country differences in suicide are best explained by culture rather than underlying happiness.

Comment by robert_wiblin on Oddly, Britain has never been happier · 2019-10-24T11:28:20.172Z · EA · GW

I also don't trust mental health time series to show whether conditions are becoming more common, because it's equally or more likely that more people are coming forward as having, e.g. depression, as it becomes very acceptable to talk about it.

But suicide rates are hugely influenced by the social acceptability of suicide specifically, and easy access to suicide methods that allow you to successfully kill yourself on impulse (e.g. guns, which have become less accessible to people over time). So unfortunately I don't think suicide rates are a reliable way to track mental health problems over time either.

Comment by robert_wiblin on [updated] Global development interventions are generally more effective than climate change interventions · 2019-10-11T17:16:48.556Z · EA · GW

Thanks for rewriting and republishing this. All very interesting.

On this new revised version, something that stood out to me was the truly extreme range between the optimistic and pessimistic scenarios you describe.

I think the relative cost-effectiveness range you've given spans fully ten orders of magnitude, or a range of 10,000,000,000x. Even by our standards that's a lot. If we're really this uncertain it seems we can say almost nothing. But I don't think we are that uncertain.

By choosing a value out in the tail for 4 different input variables all at once you've taken us way out into the extremes of the uncertainty bounds. It looks to me like for these scenarios you've chosen the 1st and 99th percentiles for SCC, η, cost of abatement, and gain from doing health, all at once.

If that's right you're ending up at more like 0.01 * 0.01 * 0.01 * 0.01 --> 0.000001th percentile on the cost-effectiveness output on either end (not really, because you can't actually combine uncertainty distributions like this, but you get my general point). That seems too extreme a value to be useful to me.

Maybe you could put your distributions for the inputs into Guesstimate, which will do simulations drawing from and multiplying the inputs, and then choose the 5th and 95th percentile values for the outputs? That would go a long way towards addressing this issue.
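
To illustrate, here's a minimal sketch of that kind of Monte Carlo approach, using made-up lognormal inputs standing in for your four variables (the specific distributions and the functional form are purely illustrative, not taken from your model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Made-up lognormal distributions standing in for the four uncertain inputs
# (SCC, eta, abatement cost, health gain). Purely illustrative values.
scc       = rng.lognormal(mean=np.log(100), sigma=1.0, size=n)
eta       = rng.lognormal(mean=np.log(1.5), sigma=0.3, size=n)
abatement = rng.lognormal(mean=np.log(50), sigma=1.0, size=n)
health    = rng.lognormal(mean=np.log(10), sigma=1.0, size=n)

# Toy cost-effectiveness ratio combining the inputs; the functional form is
# arbitrary, the point is only how the output percentiles behave.
ratio = (health * abatement) / (scc * eta)

print("Simulated 5th/95th percentiles:", np.percentile(ratio, [5, 95]))

# Naively combining the 1st/99th percentiles of each input lands far outside
# the simulated 5th-95th range, which is the issue described above.
naive_low = (np.percentile(health, 1) * np.percentile(abatement, 1)) / (
    np.percentile(scc, 99) * np.percentile(eta, 99))
naive_high = (np.percentile(health, 99) * np.percentile(abatement, 99)) / (
    np.percentile(scc, 1) * np.percentile(eta, 1))
print("Naive extreme-percentile range:", naive_low, naive_high)
```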

Hope this helps, let me know if I've misunderstood anything — Rob

Comment by robert_wiblin on Updated Climate Change Problem Profile · 2019-10-10T14:54:05.157Z · EA · GW

Hi mchr3k — thanks for writing this. I'm completely slammed with other work at 80,000 Hours just now (I'm recording 7 podcast interviews this month), so I won't be able to respond right away.

For what it's worth I agree with just posting this and emailing it to us, rather than letting us hold you up. Many people are going to be interested in what you're saying here and might have useful comments to add, not just 80,000 Hours. It's also an area where reasonable people can disagree so it's useful to have a range of views represented publicly.

Possibly letting us comment on a Google Doc first might have been helpful but I don't think people should treat it as a necessary step!

Comment by robert_wiblin on How to Make Billions of Dollars Reducing Loneliness · 2019-09-18T13:50:13.883Z · EA · GW

Fair enough, I haven't looked at the YouGov report.

I was responding to the thrust of Tyler's quote at the top.

I doubt pre-2004 data will give us insight into modern loneliness. Facebook and Twitter didn't exist back then, for instance.

That data is especially precious because you need a 'before' measurement to see whether loneliness changed once social media arrived, or stayed the same as before!

But I agree many problems aren't increasing but are still well worth addressing!

Comment by robert_wiblin on Effective Altruism and Everyday Decisions · 2019-09-16T21:15:52.425Z · EA · GW

The amount of electricity consumed by some appliances these days is astonishingly low.

The LED lightbulb in my room for example uses 9 Watts. If I left it on maximum brightness constantly for a whole year this would come to:

9 Watts * 24 hours per day * 365 days / 1000 = ~79kWh.

That would cost me 79kWh * 14.714p/kWh = £12 in electricity for the year.

If supplied 100% by especially dirty coal this might produce 71kg of CO2.

This is a small amount which could be offset on the EU carbon trading market for about £1.80.
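
For anyone who wants to rerun these numbers, here's the back-of-envelope calculation as a short script; the ~0.9 kg CO2 per kWh for dirty coal and the ~£25/tonne carbon price are my assumed figures implied by the numbers above:

```python
# Back-of-envelope check of the lightbulb figures above.
watts = 9
kwh_per_year = watts * 24 * 365 / 1000               # ~79 kWh

price_per_kwh_gbp = 0.14714                          # 14.714p/kWh
annual_cost_gbp = kwh_per_year * price_per_kwh_gbp   # ~GBP 12

co2_kg_per_kwh = 0.9                                 # assumed factor for dirty coal
co2_kg = kwh_per_year * co2_kg_per_kwh               # ~71 kg CO2

ets_price_per_tonne_gbp = 25                         # assumed EU ETS carbon price
offset_cost_gbp = co2_kg / 1000 * ets_price_per_tonne_gbp   # ~GBP 1.80

print(round(kwh_per_year), round(annual_cost_gbp), round(co2_kg), round(offset_cost_gbp, 2))
```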

While also not worth fussing too much about, at least heating systems and air conditioners do use a meaningful amount of energy! Get your house insulated and then don't sweat about the rest.