Posts

Upcoming interviews on the 80,000 Hours Podcast 2019-07-01T14:08:39.735Z · score: 25 (10 votes)
Giving What We Can is still growing at a surprisingly good pace 2018-09-14T02:34:11.214Z · score: 37 (35 votes)
Do Prof Eva Vivalt's results show 'evidence-based' development isn't all it's cut out to be? 2018-05-21T16:28:27.239Z · score: 17 (19 votes)
Rob Wiblin's top EconTalk episode recommendations 2017-10-19T00:08:06.199Z · score: 20 (18 votes)
How accurately does anyone know the global distribution of income? 2017-04-06T04:49:45.335Z · score: 20 (19 votes)
In some cases, if a problem is harder humanity should invest more in it, but you should be less inclined to work on it 2017-02-21T10:29:01.945Z · score: 7 (11 votes)
Philosophical Critiques of Effective Altruism by Prof Jeff McMahan 2016-05-03T21:05:28.852Z · score: 32 (32 votes)
Why don't many effective altruists work on natural resource scarcity? 2016-02-20T12:32:14.584Z · score: 12 (14 votes)
Let's conduct a survey on the quality of MIRI's implementation 2016-02-19T07:18:55.158Z · score: 11 (17 votes)
The most persuasive writing neutrally surveys both sides of an argument 2016-02-18T08:42:38.857Z · score: 14 (22 votes)
How you can contribute to the broader EA research project 2016-02-17T09:23:26.227Z · score: 13 (13 votes)
If tech progress might be bad, what should we tell people about it? 2016-02-16T10:26:05.764Z · score: 20 (20 votes)
Should effective altruists work on taxation of the very rich? 2016-02-15T12:42:41.292Z · score: 18 (9 votes)
The Important/Neglected/Tractable framework needs to be applied with care 2016-01-24T15:10:55.665Z · score: 18 (19 votes)
Notice what arguments aren't made (but don't necessarily go and make them) 2016-01-24T13:52:45.111Z · score: 12 (16 votes)
If you don't have good evidence one thing is better than another, don't pretend you do 2015-12-21T19:19:54.464Z · score: 34 (38 votes)
What if you want to have a big social impact and live in a poorer country? 2015-12-20T16:58:33.276Z · score: 12 (12 votes)
How big a deal could GWWC be? Pretty big. 2015-12-20T00:46:45.843Z · score: 11 (13 votes)
An under-appreciated observation about giving now vs later 2015-12-19T22:26:19.482Z · score: 7 (9 votes)
What is a 'broad intervention' and what is a 'narrow intervention'? Are we confusing ourselves? 2015-12-19T16:12:49.618Z · score: 8 (8 votes)
Threads on Facebook worth being able to refer back to 2015-12-19T15:09:24.619Z · score: 3 (5 votes)
The most read 80,000 Hours posts from the last 3 months 2015-12-18T18:16:13.552Z · score: 3 (7 votes)
No, CS majors didn't delude themselves that the best way to save the world is to do CS research 2015-12-15T17:13:38.977Z · score: 19 (19 votes)
Two observations about 'skeptical vs speculative' effective altruism 2015-12-15T14:06:03.863Z · score: 6 (6 votes)
Saying 'AI safety research is a Pascal's Mugging' isn't a strong response 2015-12-15T13:48:27.186Z · score: 13 (19 votes)
Six Ways To Get Along With People Who Are Totally Wrong* 2015-02-24T12:41:43.096Z · score: 40 (40 votes)
Help a Canadian give with a tax-deduction by swapping donations with them! 2014-12-16T00:05:45.810Z · score: 3 (3 votes)
Generic good advice: do intense exercise often 2014-12-14T17:21:38.322Z · score: 7 (7 votes)
How can you compare helping two different people in different ways? 2014-12-11T17:08:02.170Z · score: 9 (9 votes)
Ideas for new experimental EA projects you could fund! 2014-12-02T02:47:04.545Z · score: 9 (9 votes)
Should we launch a podcast about high-impact projects and people? 2014-12-01T16:52:41.206Z · score: 9 (9 votes)
The Centre for Effective Altruism is hiring to fill five roles in research, operations and outreach 2014-11-25T13:48:36.283Z · score: 6 (6 votes)
Why it should be easy to dominate GiveWell’s recommendations 2013-07-17T04:00:46.000Z · score: 1 (1 votes)

Comments

Comment by robert_wiblin on Is EA Growing? EA Growth Metrics for 2018 · 2019-06-03T20:59:11.684Z · score: 31 (13 votes) · EA · GW

It's worth keeping in mind that some of these rows are 10 or 100 times more important than others.

The most important to me personally is Open Phil's grantmaking. I hadn't realised that the value of their non-GiveWell grants had declined from 2017 to 2018.

Fortunately if they keep up the pace they've set in 2019 so far, they'll hit a new record of $250m this year. In my personal opinion that alone will probably account for a large fraction of the flow of impact effective altruism is having right now.

Comment by robert_wiblin on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-05-06T16:39:25.223Z · score: 16 (4 votes) · EA · GW

I would reply to an email asking something like this within 1-2 weeks about 75% of the time, and suspect the same is true of most other orgs.

Admittedly the answer might be only a few sentences, and might be 'sorry, I don't know, try asking X.'

But it seems worth trying in the first instance. :)

Comment by robert_wiblin on Why isn't GV psychedelics grantmaking housed under Open Phil? · 2019-05-06T11:39:39.143Z · score: 22 (8 votes) · EA · GW

Any forum post absorbs hours of time and attention from the community, so I support there being a norm of getting questions answered by emailing the group that probably knows the answer, where doing so is possible.

Comment by robert_wiblin on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T20:56:20.910Z · score: 5 (3 votes) · EA · GW

"The page itself doesn't seem to give any indication of that."

As I pointed out, it says at the top:

"N.B. The results from this quiz were last reviewed in 2016 and the ranking may no longer reflect our current views. Your top results should be read as suggestions for further research, not recommendations. ... The quiz doesn’t tell you what you should do, but can give you ideas to research further."

We've been working to get it downgraded from the Google search results, but unfortunately we don't have full control over that.

Comment by robert_wiblin on Does climate change deserve more attention within EA? · 2019-04-18T18:05:03.037Z · score: 57 (17 votes) · EA · GW

I'll just respond to point 3 as it refers to my opinions directly. I don't think an off-the-cuff "wow okay" in response to something someone says should be read as much evidence that I've changed my mind, let alone that other people should change theirs. At that point I hadn't had a chance to scrutinise the research, or to reflect on its importance.

I suspect I said 'wow okay' because I was surprised by the claimed volatility of food supply in general, not the incremental effect of climate change.

Taking the figures Dave offers at face value, the increase is from 56% to 80% in the remainder of the century, which isn't surprisingly large to me. Not having looked at it, I'd also take such modelling with a pinch of salt, and worry that it exaggerates things in order to support advocacy on the topic.

There was a UK government study on this that estimated the chance might be around 1% per year right now, but with slow climate change they were getting more like an 80% chance that something like that would happen this century.

N.B. 1 - 0.99^81 ≈ 56%.
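
For anyone checking the arithmetic, here's a quick sketch of how the per-year and per-century figures relate, assuming a constant, independent risk each year:

```python
# Chance of at least one such event over the remaining ~81 years of the century,
# given a 1% chance per year (the 'right now' estimate quoted above).
p_annual = 0.01
p_century = 1 - (1 - p_annual) ** 81
print(round(p_century, 2))  # ~0.56, i.e. the 56% figure

# Conversely, an ~80% chance over those 81 years implies an average annual chance of:
p_implied = 1 - (1 - 0.80) ** (1 / 81)
print(round(p_implied, 3))  # ~0.02, i.e. roughly 2% per year on average
```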

Comment by robert_wiblin on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T15:44:17.361Z · score: 24 (9 votes) · EA · GW

Hi lexande —

Re point 1, as you say, the career guide article on career capital now has a disclaimer at the top about how our views have changed. We're working on a site redesign that will make the career guide significantly less prominent, which will help address the fact that it was written in 2016 and is showing its age. We also have an entirely new summary article on career capital in the works - unfortunately it has taken a lot longer to complete than we would like, which has contributed to the current situation.

Re point 2, the "clarifying talent gaps" post and "why focus on talent gaps" article do offer different views as they were published three years apart. We've now added a disclaimer linking to the new one.

The "Which jobs help people the most?" career guide piece, taken as a whole, isn't more positive about earning to give than the other three options it highlights (research, policy and direct work).

I think your characterisation of the process we suggest in the 'highest impact careers' article could give readers the wrong impression. Here's a broader quote:

When it comes to specific options, right now we often recommend the following five key categories, which should produce at least one good option for almost all graduates:

  1. Research in relevant areas
  2. Government and policy relevant to top problem areas
  3. Work at effective non-profits
  4. Apply an unusual strength to a needed niche
  5. Otherwise, earn to give

You say that that article 'largely contradicts' the 'clarifying talent gaps' post. I agree there's a shift in emphasis, as the purpose of the later piece is, among other things, to make clearer how many people will find it hard to get into a priority path quickly. But 'largely contradicts' is an exaggeration in my opinion.

Re point 3, the replaceability blog post from 2012 you link to as contradicting our current position opens with "This post is out-of-date and no longer reflects our views. Read more."

Our views will continue to evolve as we learn more, just as they have over the last seven years, though more gradually over time. People should take this into account when following our advice and make shifts more gradually and cautiously than if our recommendations were already perfect and fixed forever.

Updating the site is something we’ve been working on, but going back to review old pages trades off directly with writing up our current views and producing content about our priority paths, something that readers also want us to do.

One can make a case for entirely taking down old posts that no longer reflect our views, but for now I'd prefer to continue adding disclaimers at the top linking to our updated views on a question.

If you find other old pages that no longer reflect our views and lack such disclaimers, it would be great if you could email those pages to me directly so that I can add disclaimers to them.

Comment by robert_wiblin on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T15:13:22.908Z · score: 21 (7 votes) · EA · GW

Hi lexande, Habryka, Milan — As you note, the quiz is no longer current content. It has been moved way down in the site structure, and carries this disclaimer:

"N.B. The results from this quiz were last reviewed in 2016 and the ranking may no longer reflect our current views. Your top results should be read as suggestions for further research, not recommendations. ... The quiz doesn’t tell you what you should do, but can give you ideas to research further."

Comment by robert_wiblin on Is Modern Monetary Theory a good idea? · 2019-04-17T13:36:02.561Z · score: 17 (8 votes) · EA · GW

It makes sense as a different way to conceptualise the government's budget constraint - limited by inflation (or the maximum sustainable level of seigniorage) rather than an ability to borrow per se.

I just think analysing matters that way won't show that the government can spend a significant amount more than it does today without higher taxes. There isn't a lot of latent productive potential in the economy right now that could be unleashed if only there were more spending. If that's correct, that makes it a much less interesting idea.

Comment by robert_wiblin on Introducing GPI's new research agenda · 2019-03-31T00:11:20.722Z · score: 3 (2 votes) · EA · GW

Wonderful, thank you! :)

Comment by robert_wiblin on Terrorism, Tylenol, and dangerous information · 2019-03-23T05:46:09.149Z · score: 44 (23 votes) · EA · GW

This is an interesting issue.

I remember commentators discussing the question of why we didn't see i) terrorists shooting people at shopping centres, ii) knifing them, iii) running pedestrians over with cars, etc, all the way back in 2001-2005.

I find it surprising that this obvious idea would occur to me and other random journalists and bloggers, but not to people who are actually trying to engage in terrorism. Regardless, pointing out these methods didn't have any noticeable effect at the time.

An alternative explanation might be that we saw this spate of terrorism - as far as I know all committed by people sympathetic to ISIS - because ISIS had a different ideology that regarded these attacks as more worthwhile. My impression is that ISIS was more motivated by pure bloodthirsty religious zealotry, with less of an emphasis on shifting the foreign policy of the US and countries in the Middle East.

It wouldn't surprise me if ISIS - with its indiscriminate enthusiasm for all forms of murder - was pushing these methods aggressively, while Al-Qaida and other predecessor groups would have regarded running over a few pedestrians as an insufficient reason for one of their supporters to die. Perhaps because it's not striking enough, is embarrassingly unimpressive compared to 9/11, isn't focussed on the right symbolic targets, or for some other practical reason.

The copy-cat explanation is also slightly different from giving people 'ideas'. ISIS supporters may not have been motivated by a blog post mentioning the method - only by seeing someone else actually pull it off. One might think of these methods not only coming to people's attention, but also becoming 'fashionable' among a particular group of fanatics.

ISIS, with its quasi-country status, may also simply have been unusually effective at attracting supporters in Europe or the US, and convincing them to attempt terrorist attacks. We would naturally see more experimentation of all kinds when 1,000 people are actively working to kill their fellow civilians than when only 100 are.

I agree with your conclusion though - saying things that are 'obvious' can absolutely speed up how quickly people notice them, if only because there are many, many possible 'obvious' thoughts, but with only one stream of consciousness each of us has time to stumble on just a tiny fraction.

Comment by robert_wiblin on SHOW: A framework for shaping your talent for direct work · 2019-03-12T21:18:08.588Z · score: 56 (27 votes) · EA · GW

This is great. One thing I'd add is 'Demonstrate'. (Or dare I say... Show.)

If you think your skills are better than people can currently measure with confidence, you need to find a way to credibly signal how capable you are, while demanding as little time as possible from senior people in the process.

You can do that in a lower-level role, by pulling off some impressive, scrutable and visible project, or by getting a more classic credential. Maybe other things as well.

One reason so many prominent EAs have been writers in the past is not only that writing is a very broadly useful skill. It's also a skill that is unusually public and easy for others to evaluate you on. It also gives you a chance to demonstrate your general reasoning ability, which is one of the most widely valued characteristics.

Comment by robert_wiblin on Introducing GPI's new research agenda · 2019-03-05T04:33:02.428Z · score: 2 (1 votes) · EA · GW

Can you put up a plain text version of this? PDFs aren't absorbed nicely by other software (e.g. Facebook for sharing, Instapaper for saving to read later, etc.)

Comment by robert_wiblin on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-22T23:15:04.672Z · score: 4 (3 votes) · EA · GW

Do people who try to give huge amounts run the risk of the transaction being rejected by their bank (as fraudulent), and then not giving in time to be matched?

Comment by robert_wiblin on EAs Should Invest All Year, then Give only on Giving Tuesday · 2019-01-22T23:15:03.047Z · score: 1 (2 votes) · EA · GW

Comment by robert_wiblin on Giving more won't make you happier · 2018-12-12T02:20:35.880Z · score: 16 (7 votes) · EA · GW

I don't know if it's more or less reliable than past research suggesting a lower satiation point, but taking the paper Jebb et al. 2018 at face value, this is the effect of a 160% increase in income, from $40k to $105k:

In North America, life satisfaction goes from 7.63 to 8.0. There is zero effect on positive affect (effects on positive affect/happiness are always lower, and it's the measure I think is more reliable, which is why we chose the term happiness in that quote). The 'negative affect-free' measure goes from 0.70 to 0.74.

Effects in Western Europe are a touch smaller.

Whether this counts as "extra income continuing to affect happiness quite a bit" or "extra income not affecting happiness that much" I guess is for readers to judge.

For myself, I would regard those gains as sufficiently small that I would think it irrational for an egoist to focus much of their attention on earning more money at that point, rather than on fostering strong relationships, a sense of purpose, or improving their self-talk.

Personally, I also expect even those correlations are overestimates of the actual effect of higher income on happiness, because we know the reverse is also happening: for various reasons happiness itself causes people's incomes to rise. On top of that, things like health also cause both happiness and higher incomes, increasing the correlation without increasing the causation. (Though as I describe in my income and happiness article, if you have a different causal diagram in mind, you could also try arguing that it's an underestimate.)
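
As a rough back-of-the-envelope reading of those figures (a sketch only, using nothing beyond the numbers quoted above), here's the implied life satisfaction gain per doubling of income if the relationship is roughly logarithmic:

```python
import math

# Figures quoted above from Jebb et al. 2018 for North America.
low_income, high_income = 40_000, 105_000   # the ~160% income increase
ls_low, ls_high = 7.63, 8.0                 # life satisfaction on a 0-10 scale

doublings = math.log2(high_income / low_income)      # ~1.39 doublings of income
gain_per_doubling = (ls_high - ls_low) / doublings   # ~0.27 points per doubling

print(round(doublings, 2), round(gain_per_doubling, 2))
```

On that reading, each further doubling of income above $40k is associated with roughly a quarter of a point on a 0-10 life satisfaction scale.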

Comment by robert_wiblin on Giving more won't make you happier · 2018-12-11T19:06:29.968Z · score: 5 (4 votes) · EA · GW

Thanks, much appreciated Milan! :)

Comment by robert_wiblin on Giving more won't make you happier · 2018-12-10T23:52:25.355Z · score: 6 (4 votes) · EA · GW

While that piece on income and happiness seems solid, Milan might not like the vibe of this section in our article No matter your job, here’s 3 evidence-based ways anyone can have a real impact. I've just tinkered with the wording to make it harder for anyone to misunderstand what we're claiming:

"How much sacrifice will this involve?

Regardless of which career you choose, you can donate 10% of your income.

Normally when we think of doing good with our careers, we think of paths like becoming a teacher or charity worker, which often earn salaries as much as 50% lower than jobs in the private sector, and may not align with your skills or interests. In that sense, giving 10% is less of a sacrifice.

Moreover, as we saw in an earlier article, once you start earning more than about $40,000 a year as an individual, any extra income won’t affect your happiness that much, while acts that help others like giving to charity probably do make you happier.

To take just one example, one study found that in 122 of 136 countries, if respondents answered “yes” to the question “did you donate to charity last month?”, their life satisfaction was higher by an amount also associated with a doubling of income.5 In part, this is probably because happier people give more, but we expect some of the effect runs the other way too.

(Not persuaded? Read more on whether giving 10% is better or worse for your happiness than not donating at all.)"

Comment by robert_wiblin on Giving more won't make you happier · 2018-12-10T23:20:08.742Z · score: 46 (20 votes) · EA · GW

My own view here is:

i) Like Greg I've only ever seen the claim that donating some can be better for one's own welfare, compared to a baseline of not giving at all - not that the EA approach to giving is actually optimal for happiness (!).

Giving small amounts regularly to a variety of emotionally appealing causes is more likely to be optimal for one's selfish welfare, at least if you don't have an EA mindset which would render that dissatisfying. As you say, 'give 10% once a year to the most effective charity' is likely worse than that.

ii) Unfortunately I don't have time to look into whether your source on income satiation points is better than the older one I used in my article. Personally, I think a focus on a 'satiation point' at which income yields literally zero further welfare gain is the wrong way to think about this. I am skeptical that the curve ever entirely flattens out, let alone turns back around. Rather, I'd expect logarithmic returns (or perhaps something a bit sharper), up into the tens or hundreds of millions of dollars of income. Given the difficulty of measuring satisfaction, especially in the upper tail of income, I would trust this more common-sense model over empirical measurements claiming otherwise. Above ~$100,000, additional gains in welfare are probably just too small for our survey methods to pick up.

For this reason 80,000 Hours use more modest language like "any extra income won’t affect your happiness that much", rather than claiming the effect is nothing.

Nonetheless, at high levels of income, further raising one's income gradually becomes a minor issue, before it becomes an entirely unmeasurable one. At that point, many people will better accomplish their life goals by focussing on improving the world - thereby giving themselves more community, higher purpose, sense of accomplishment, and indeed actual accomplishment - rather than eking out what limited returns remain from higher earnings, and this seems important to point out to people.

iii) So, while I say from an egoist perspective 'give 10% once a year to the most effective charity' is probably dominated by a 'fuzzy-hacking' approach to charity, that's not completely obvious.

Giving larger amounts, and giving them to the best charities one can find, often becomes a core part of people's identity, probably raising their sense of purpose / satisfaction with their work at all times, rather than just via a warm glow immediately after they donate a small sum. I don't think any of the evidence we've looked at can address this issue, except the observational studies, which are hopelessly confounded by other things.

Furthermore, being part of effective altruism or Giving What We Can can provide participants with a community of people who they feel some connection to, which many people otherwise lack, and which seems to have a larger effect on happiness than money. Finally, giving to the best charities allows people to show off to themselves their uncommon intelligence and sophistication as givers, which can also contribute to a positive self-conception.

We know that involvement in a religious community is correlated with large gains in welfare, and involvement in any philosophical community like effective altruism seems likely to bring with it some - though not all - of the same benefits.

Of course there are downsides too. In the absence of any actual data, I would remain agnostic, and act as though this were more or less a wash for someone's happiness. I wouldn't want someone to start being altruistic expecting it to make their life better, but I'd also want to challenge them if they were convinced it would make their life worse.

From this perspective, articles saying that giving or other acts of altruism are not as detrimental to someone's welfare as one might naïvely anticipate seem to me to be advancing a reasonable point of view, even if future research may yet show them to be mistaken.

Comment by robert_wiblin on Giving more won't make you happier · 2018-12-10T22:10:26.178Z · score: 28 (12 votes) · EA · GW

"EA sometimes advocates that giving effectively will increase your happiness. Here’s an 80,000 Hours article (a) to that effect. ... This line of argument confuses the effect of donating at all with the effect of donating effectively."

This article you link to (by me) does not mention giving large amounts, or the effectiveness of charities, or advocate for giving to charity, so it's hard to see how it could be confusing that issue. I would appreciate it if you could edit the article to clarify that.

It also doesn't claim that giving necessarily makes you happier than spending the money on yourself, only that in a given case it is possible - which it certainly is - and that giving likely provides more satisfaction than not having the money in the first place:

"Giving some money to charity is unlikely to make you less happy, and may well make you happier. ... donating money could easily make you happier than spending it on yourself... there’s good reason to think that giving away money will lower your subjective well-being significantly less than not having it in the first place."

In fact, the article only tangentially discusses donations at all. Nonetheless, I have added some further text to clarify my view.

Comment by robert_wiblin on Burnout: What is it and how to Treat it. · 2018-12-07T03:08:44.397Z · score: 3 (2 votes) · EA · GW

I think the link here has been 'miscopied' and goes to a different unrelated article:

"80,000 hours goes into the multitude of reasons you should do this here"

I expect you meant to link to this one:

Why even our readers should save enough to live for 6-24 months by Ben Todd

Comment by robert_wiblin on Earning to Save (Give 1%, Save 10%) · 2018-12-07T02:54:57.706Z · score: 8 (7 votes) · EA · GW

I largely agree with this, though 12-36 months is perhaps a bit high. When I've told newcomers to save more and donate less, I've usually gotten the response that they really want to donate now because it makes them feel happier and more fulfilled. That's because they want to do good, but don't see an opportunity to contribute through 'direct work' yet, so giving is their main way to feel useful.

Inasmuch as the goal of giving now is to make someone feel good about themselves and motivated to continue, my objection is weaker. Though perhaps they should try to get that sense of accomplishment some other way while they save.

Comment by robert_wiblin on Why we have over-rated Cool Earth · 2018-12-07T02:45:24.645Z · score: 2 (1 votes) · EA · GW

I'll just add that I always read it as a highly tentative recommendation that you'd expect to be overturned if serious resources were put into investigating climate change charity - though I'm unusually in-the-know.

Comment by robert_wiblin on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-12T23:30:45.565Z · score: 4 (4 votes) · EA · GW

I wrote 'most relevant' to contrast that group with people who know they'll never be able to do direct work at the organisations surveyed, for whom those figures are largely irrelevant. As you say, they're still not quite right for that group because you're only part way through the process (I think by the time you're offered a job, about half the costs have been paid, though that depends on whether you had a trial period).

They would be even more relevant to someone considering quitting - but as they can talk to their colleagues, they probably wouldn't be using this survey.

The costs of hiring make applying to join a project look worse, but they also make earning to give to fund the salaries of new staff look worse. If you're expecting to donate hoping the money will be used to fund new hires, it seems like it should cancel out.

The three ways around bearing those costs that I can see at first glance are i) earning to give to prevent a group from laying off staff for lack of funding, ii) earning to give to buy capital goods rather than hire people, iii) staying in your current direct work job rather than switching.

Comment by robert_wiblin on Many EA orgs say they place a lot of financial value on their previous hire. What does that mean, if anything? And why aren't they hiring faster? · 2018-10-12T21:20:10.720Z · score: 9 (9 votes) · EA · GW

Hey Peter - it's partly for you, but also for many other people who have the same questions.

I can comment on the choice of question in the survey as I'm the one who wrote it.

The reason we went for an ex post assessment rather than an ex ante one was that we thought people would be able to more reliably assess how they feel about a previous hire today, than remember how they felt about a previous hire in the past.

Asking people to remember what they thought before runs the risk that they will substitute a hard question (what they thought months ago before they knew how someone would work out) for an easy question (what do they think given what they know now). Then we'd get a similar answer, but it would look ex ante when it actually isn't.

It also seemed quite difficult for organisations to forecast how much they’ll value a typical hire in the future because, among other reasons, it’s difficult to anticipate how successful future searches will be.

In retrospect I think the framing we chose was probably a mistake, because the two assessments are much more different than most readers understand them to be. I agree with your suggestions for improvements and, indeed, we concluded our blog post with our plans to ask new questions next year or else interview a smaller number of people in more depth.

Hopefully a different approach next year will help us avoid this confusion going forward.

As for the article being misleading, we've:

i) Commented that these roles are hard to fill at the point when these figures are first mentioned.

ii) Explained the ex post, ex ante distinction in the relevant section, and now added a link to this post.

iii) Noted we don't have much confidence in the answers to that question and would not recommend that people update very much based on it.

Comment by robert_wiblin on Survey of EA org leaders about what skills and experience they most need, their staff/donations trade-offs, problem prioritisation, and more. · 2018-10-10T20:32:10.172Z · score: 5 (5 votes) · EA · GW

Tackling just one part of this:

"It may be worth having a separate survey trying to get opinions considering talent gaps in priority areas whether they are led by people involved in EA or not."

Ultimately our goal going forward is to make sure that we and our readers are highly informed about our priority paths (https://80000hours.org/articles/high-impact-careers/). About six out of ten of our current priority paths get direct coverage in this survey, while four do not.

I agree in future we should consider conducting different surveys of other groups - including people who don't identify as part of the EA community - about opportunities they're aware of, in order to make sure we stay abreast of all the areas we recommend, rather than just those we are closest to.

Comment by robert_wiblin on EA Survey 2018 Series: Community Demographics & Characteristics · 2018-09-22T00:24:35.958Z · score: 5 (5 votes) · EA · GW

"Most people selected the “Other (please specify)” option (14%)." --> "The single most common answer was “Other (please specify)” (14%)." :)

Comment by robert_wiblin on Giving What We Can is still growing at a surprisingly good pace · 2018-09-15T17:55:39.095Z · score: 1 (1 votes) · EA · GW

If you look at these graphs ending in January 2017 I think you'll agree that a polynomial of degree 3 (cubic) seems like the best fit: https://imgur.com/a/9SlFZd9

If that's right, we would expect something like 5,000 members by now.

It occurs to me now that all of these trend-lines are a bit biased towards forecasting rapid growth, as they finish right at the end of the 2016 holiday campaign, which absorbed substantial resources. That was the period of highest growth, and likely not sustainable. It might be more reasonable to put the end date at ~April, and then fit the trend-line to a less cyclical curve.
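
For anyone who wants to redo this kind of extrapolation, here's a minimal sketch of fitting a cubic trend line and extending it; the membership figures below are made up for illustration and are not GWWC's actual numbers:

```python
import numpy as np

# Hypothetical cumulative membership at the end of each year (illustrative only).
years = np.array([2010, 2011, 2012, 2013, 2014, 2015, 2016])
members = np.array([60, 180, 290, 470, 810, 1470, 2540])

# Fit a degree-3 (cubic) polynomial, as in the trend-lines discussed above.
coeffs = np.polyfit(years - years[0], members, deg=3)
cubic = np.poly1d(coeffs)

# Extrapolate two years beyond the end of the fitted data.
print(int(cubic(2018 - years[0])))
```

Where you cut the data off matters a lot to the extrapolation, which is exactly the point about ending the series right after the holiday campaign.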

Comment by robert_wiblin on EA Funds - An update from CEA · 2018-08-08T21:29:35.147Z · score: 11 (11 votes) · EA · GW

The effort required to set up a non-profit trading account, go through KYC, make it secure, teach everyone in the org how it works, and do the necessary legal, budgetary and accounting compliance each year makes this much more than a few minutes' work. Things that are easy for individuals are often less straightforward for organisations.

Comment by robert_wiblin on New research on effective climate charities · 2018-07-13T20:00:47.497Z · score: 3 (3 votes) · EA · GW

I can second this. I prefer to download long pieces of text to my phone to read on Pocket/Instapaper. In addition, putting this report up as a web page will make it much more likely to be passed around on social media, Reddit, Hacker News, and so on.

Comment by robert_wiblin on How to have cost-effective fun · 2018-07-13T19:16:47.199Z · score: 11 (10 votes) · EA · GW

The unifying insight behind many of the examples is that what many people, including me, really enjoy is spending time with friends. A lot of expensive activities - like going out for drinks and food, or weekends away - are 80% about getting to socialise, which can be done at much lower cost if you want.

Comment by robert_wiblin on How to have cost-effective fun · 2018-07-13T19:13:41.366Z · score: 5 (5 votes) · EA · GW

Interestingly, the folks at Netflix and HBO don't seem to mind people sharing accounts:

""We love people sharing Netflix," CEO Reed Hastings said Wednesday at the Consumer Electronics Show here in Las Vegas. "That's a positive thing, not a negative thing.""

Presumably because i) it makes more people subscribe because they get more value for money by sharing, and ii) eventually people get their own accounts if they like it.

More comments on what the various services think about password sharing here: https://www.thesimpledollar.com/is-it-ok-to-share-your-netflix-account-the-lowdown-on-log-in-sharing-at-nine-popular-services/

Comment by robert_wiblin on “EA” doesn’t have a talent gap. Different causes have different gaps. · 2018-05-21T22:46:46.617Z · score: 1 (1 votes) · EA · GW

Yes - the reason you need to look at a bunch of activities rather than just one is that your personal fit, both in general and between earning vs direct work, could materially reorder them.

Comment by robert_wiblin on “EA” doesn’t have a talent gap. Different causes have different gaps. · 2018-05-21T15:01:54.294Z · score: 2 (4 votes) · EA · GW

"Claiming that EA is now more talent constrained than funding constrained implicitly refers to Effective Altruist orgs being more talent than funding constrained."

It would be true if that were what was meant, but the speaker might also mean that 'anything which existing EA donors like Open Phil can be convinced to fund' will also be(come) talent constrained.

Inasmuch as there are lots of big EA donors willing to change where they give, activities that aren't branded as EA may still be latently talent constrained, if they can be identified.

The speaker might also think activities branded as EA are more effective than the alternatives, in which case the money/talent balance within those activities will be particularly important.

Comment by robert_wiblin on “EA” doesn’t have a talent gap. Different causes have different gaps. · 2018-05-21T14:55:42.091Z · score: 2 (4 votes) · EA · GW

Like you, at 80,000 Hours we view the relative impact of money vs talent as specific to particular problems, and potentially to particular approaches too.

First you need to look for what activities you think are most impactful, and then see what your money can generate vs your time.

Comment by robert_wiblin on Empirical data on value drift · 2018-04-24T23:47:12.634Z · score: 14 (13 votes) · EA · GW

This is a useful analysis; I expect it will be incorporated into our discussion of discount rates in the career guide.

Perhaps I missed it, but how many of the 7 who left the 50% category went into the 10% category rather than dropping out entirely?

Comment by robert_wiblin on The person-affecting value of existential risk reduction · 2018-04-20T18:39:13.308Z · score: 4 (4 votes) · EA · GW

I made a similar observation about AI risk reduction work last year:

"Someone taking a hard 'inside view' about AI risk could reasonably view it as better than AMF for people alive now, or during the rest of their lives. I'm thinking something like:

1 in 10 risk of AI killing everyone within the next 50 years. Spending an extra $1 billion on safety research could reduce the size of this risk by 1%.

$1 billion / (0.1 risk × 1% risk reduction × 8 billion lives) = $125 per life saved. Compares with $3,000-7,000+ for AMF.

This is before considering any upside from improved length or quality of life for the present generation as a result of a value-aligned AI.

I'm probably not quite as optimistic as this, but I still prefer AI as a cause over poverty reduction, for the purposes of helping the present generation (and those remaining to be born during my lifetime)."

http://effective-altruism.com/ea/18u/intuition_jousting_what_it_is_and_why_it_should/amj
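
A back-of-the-envelope version of that calculation, using only the numbers in the quote:

```python
budget = 1e9            # extra spending on AI safety research ($1 billion)
risk = 0.10             # assumed 1-in-10 risk of AI killing everyone within 50 years
risk_reduction = 0.01   # the spending reduces that risk by 1%
population = 8e9        # people alive now

expected_lives_saved = risk * risk_reduction * population  # 8 million lives in expectation
cost_per_life = budget / expected_lives_saved

print(cost_per_life)    # 125.0 dollars per life saved, vs the $3,000-7,000+ quoted for AMF
```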

Comment by robert_wiblin on An Argument To Prioritize "Positively Shaping the Development of Crypto-assets" · 2018-04-08T01:12:17.741Z · score: 7 (7 votes) · EA · GW

Now that I've had a chance to read this properly I have one key follow-up question.

People often talk about the possibility of solving societal coordination problems with cryptocurrency, but I have yet to see a concrete example of this.

Is it possible to walk through a coordination failure that today could be best tackled using blockchain technology, and step-by-step how that would work?

This would be most persuasive if the coordination failure was in one of the priority problems mentioned above, but I'd find any specific illustration very helpful.

Comment by robert_wiblin on An Argument To Prioritize "Positively Shaping the Development of Crypto-assets" · 2018-04-04T20:39:54.916Z · score: 0 (2 votes) · EA · GW

If we want to keep sampling pessimistic articles:

Comment by robert_wiblin on An Argument To Prioritize "Positively Shaping the Development of Crypto-assets" · 2018-04-04T00:16:28.880Z · score: 8 (8 votes) · EA · GW

Thanks for writing this up - you cover a lot of ground. I don't have time to respond to it now, but I wanted to link to a popular counterpoint to the practical value of blockchain technology: Ten years in, nobody has come up with a use for blockchain.

Comment by robert_wiblin on Cash transfers are not necessarily wealth transfers · 2017-12-04T00:44:10.437Z · score: 11 (10 votes) · EA · GW

"Within countries, per capita GDP growth does not appear to lead to corresponding increases in well-being."

I spent a bunch of time looking into this 'Easterlin paradox' and concluded it's more likely than not that it doesn't exist. If you look across all the countries we have data on up to the present day, increased income is indeed correlated with increased levels of SWB. Things aren't all positional or all absolute - it's a mix of the two.

My impression is that people who study this topic are divided on the correct interpretation of the data, so you should take everyone's views (including mine) with a pinch of salt.

Comment by robert_wiblin on Against Modest Epistemology · 2017-11-18T01:00:17.239Z · score: 1 (1 votes) · EA · GW

"...the existence of super-forecasters themselves argues for a non-modest epistemology..."

I don't see how. No theory on offer argues that everyone is an epistemic peer. All theories predict some people have better judgement and will be reliably able to produce better guesses.

As a result I think superforecasters should usually pay little attention to the predictions of non-superforecasters (unless it's a question on which expertise pays few dividends).

Comment by robert_wiblin on Against Modest Epistemology · 2017-11-17T11:10:49.786Z · score: 1 (1 votes) · EA · GW

OK so it seems like the potential areas of disagreement are:

  • How much external confirmation do you need to know that you're a superforecaster (or have good judgement in general), or even the best forecaster?
  • How narrowly should you define the 'expert' group?
  • How often should you define who is a relevant expert based on whether you agree with them in that specific case?
  • How much should you value 'wisdom of the crowd (of experts)' against the views of the one best person?
  • How much to follow a preregistered process to whatever conclusion it leads to, versus change the algorithm as you go to get an answer that seems right?

We'll probably have to go through a lot of specific cases to see how much disagreement there actually is. It's possible to talk in generalities and feel you disagree, but actually be pretty close on concrete cases.

Note that it's entirely possible that non-modest contributors will do more to enhance the accuracy of a forecasting tournament because they try harder to find errors, while still being less accurate than others' all-things-considered views because of insufficient deference to the answer the tournament as a whole spits out. Active traders enhance market efficiency, but still lose money as a group.

As for Eliezer knowing how to make good predictions, but not being able to do it himself, that's possible (though it would raise the question of how he has gotten strong evidence that these methods work). But as I understand it, Eliezer regards himself as being able to do unusually well using the techniques he has described, and so would predict his own success in forecasting tournaments.

Comment by robert_wiblin on Against Modest Epistemology · 2017-11-17T09:54:43.666Z · score: 1 (1 votes) · EA · GW

Hi Halffull - now I see what you're saying, but actually the reverse is true. That superforecasters have already extremised shows their higher levels of modesty. Extremising is about updating based on other people's views: realising that because they have independent information to add, after hearing their view you can be more confident about how far to shift from your prior.

Imagine two epistemic peers estimating the weighting of a coin. They start with their probabilities bunched around 50% because they have been told the coin will probably be close to fair. They each see the same number of flips, and then reveal their estimates of the weighting. Both give an estimate of p=0.7. A modest person, who correctly weighs the other person's estimate as equally informative as their own, will now offer a number quite a bit higher than 0.7, which takes into account the equal information both of them have to pull them away from their prior.

Once they've done that, there won't be gains from further extremising. But a non-humble participant would fail to properly extremise based on the information in the other person's view, leaving accuracy to be gained if this is done at a later stage by someone running the forecasting tournament.
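
For concreteness, here's a minimal sketch of one common extremising rule - averaging forecasts in log-odds space and then scaling the mean by a factor greater than one; the factor of 2 is purely illustrative:

```python
import math

def extremised_mean(probs, alpha=2.0):
    """Average probability forecasts in log-odds space, then push the result
    away from 0.5 by multiplying the mean log-odds by alpha (alpha > 1 extremises)."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-alpha * mean))

# Two peers who independently arrive at 0.7, as in the coin example above:
print(round(extremised_mean([0.7, 0.7]), 2))             # ~0.84, more confident than either
# Forecasters who have already pooled their information shouldn't be extremised again:
print(round(extremised_mean([0.84, 0.84], alpha=1), 2))  # 0.84, left as is
```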

Comment by robert_wiblin on Status Regulation and Anxious Underconfidence · 2017-11-17T00:16:33.794Z · score: 8 (8 votes) · EA · GW

It strikes me as much more prevalent for people to be overconfident in their own idiosyncratic opinions. If you see half of people are 90% confident in X and half of people are 90% confident in not-X, then you know on average they are overconfident. That's how most of the world looks to me.

But no matter - they probably won't suffer much, because the meek do not inherit the Earth, at least not in this life.

People follow confidence in leaders, generating the pathological start-up founder who is sure they're 100x more likely to succeed than the base rate; someone who portrays themselves as especially competent in a job interview is more likely to be hired than someone who accurately appraises their merits; and I don't imagine deferring to a boring consensus brings more romantic success than elaborating on one's exciting contrarian opinions.

Given all this, it's unsurprising evolution has programmed us to place an astonishingly high weight on our own judgement.

While there are some social downsides to seeming arrogant, people who preach modesty here advocate going well beyond what's required to avoid triggering an anti-dominance reaction in others.

Indeed, even though I think strong modesty is epistemically the correct approach on the basis of reasoned argument, I do not and cannot consistently live and speak that way, because all my personal incentives are lined up in favour of portraying myself as very confident in my inside view.

In my experience it requires a monastic discipline to do otherwise, a discipline almost none possess.

Comment by robert_wiblin on Against Modest Epistemology · 2017-11-16T22:36:49.268Z · score: 3 (3 votes) · EA · GW

Not sure how this is a 'just so story' in the sense that I understand the term.

"the fact that "Extremizing" works to better calibrate general forecasts, but that extremizing of superforecaster's predictions makes them worse."

How is that in conflict with my point? As superforecasters spend more time talking and sharing information with one another, maybe they have already incorporated extremising into their own forecasts.

I know very well about superforecasters (I've read all of Tetlock's books and interviewed him last week), but I am pretty sure an aggregation of superforecasters beats almost all of them individually, which speaks to the benefits of averaging a range of people's views in most cases. Though in many cases you should not give much weight to those who are clearly in a worse epistemic position (non-superforecasters, whose predictions Tetlock told me were about 10-30x less useful).

Comment by robert_wiblin on Against Modest Epistemology · 2017-11-16T19:05:57.639Z · score: 10 (10 votes) · EA · GW

Hi Eliezer, I wonder if you've considered trying to demonstrate the superiority of your epistemic approach by participating in one of the various forecasting tournaments funded by IARPA, and trying to be classified as a 'superforecaster'. For example the new Hybrid Forecasting Competition is actively recruiting participants.

To me your advice seems in tension with the recommendations that have come out of that research agenda (via Tetlock and others), which finds that forecasts carefully aggregated from many people perform better than those of almost any individual - and that individuals who beat the aggregation were almost always lucky and can't repeat the feat. I'd be interested to see how an anti-modest approach fares in direct quantified competition with alternatives.

It would be understandable if you didn't think that was the best use of your time, in which case perhaps some others who endorse and practice the mindset you recommend could find the time to do it instead.

Comment by robert_wiblin on Survey of leaders in the EA community on a range of important topics, like what skills they need and what causes are most effective · 2017-11-04T09:15:07.943Z · score: 4 (8 votes) · EA · GW

Hey Jacy thanks for the detailed comment - with EA Global London on this weekend I'll have to be brief! :)

One partial response is that even if you don't think this is fully representative of the set of all organisations you'd like to have seen surveyed, it's informative about the groups that were. We list the orgs that were surveyed, and point out near the start of the article who wasn't, so people understand who the answers represent:

"The reader should keep in mind this sample does not include some direct work organisations that some in the community donate to, including the Against Malaria Foundation, Mercy for Animals or the Center for Human-Compatible AI at UC Berkeley."

You can take this information for whatever it's worth!

As for who I chose to sample - on any definition there's always going to be some grey area: orgs that almost meet that definition but don't quite. I tried to find all the organisations with full-time staff who i) were a founding part of the EA movement, or ii) were founded by people who identify strongly as part of the EA community, or iii) are now mostly led by people who identify more strongly as part of the EA movement than any other community. I think that's a natural grouping and don't view AMF, MfA or CHAI as meeting that definition (though I'd be happy to be corrected if any group whose leadership I'm not personally familiar with does meet it).

The main problem with that question in my mind is the underrepresentation of GiveWell, which has a huge budget and is clearly a central EA organisation - the participants from GiveWell gave me one vote to work with but didn't provide quantitative answers, as they didn't have a strong or clear enough view. More generally, people in the sample who specialise in one cause were more inclined to say they didn't have a view on which fund was most effective, and so not answer the question (which is reasonable, but could bias the answers).

Personally, like you, I give more weight to the views of specialist cause priorities researchers working at cause-neutral organisations. They were more likely to answer the question and are singled out in the table with individual votes. Interestingly, their results were quite similar to the full sample's.

I agree we should be cautious about all piling on to the same causes and falling for an 'information cascade'. That said, if the views in that table are a surprise to someone, it's a reason to update in their direction, even if they don't act on that information yet.

I'd be very keen to get more answers to this question, including folks from direct work orgs. And also increase the sample at some organisations that were included in the survey, but for which few people answered that question (GiveWell most notably). With a larger sample we'll be able to break the answers down more finely to see how they vary by subgroup, and weight them by organisation size without giving single data points huge leverage over the result.

I'll try to do that in the next week or two once EAG London is over!

Comment by robert_wiblin on Against neglectedness · 2017-11-02T14:40:32.942Z · score: 3 (3 votes) · EA · GW

Hi Sacha, thanks for writing this, good food for thought. I'll get back to you properly next week after EA Global London (won't have any spare time for at least 4 days).

I just wanted to point out quickly that we do have personal fit in our framework and it can give you up to a 100x difference between causes: https://80000hours.org/articles/problem-framework/#how-to-assess-personal-fit

I also wonder if we should think about the effective resources dedicated to solving a problem using a Cobb-Douglas production function: Effective Resources = Funding ^ 0.5 * Talent ^ 0.5. That would help capture cases where an increase in funding without a commensurate increase in talent in the area has actually increased the marginal returns to an extra person working on the problem.
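
As a sketch of that Cobb-Douglas point (illustrative figures only), here's how raising funding while holding talent fixed increases the marginal value of one extra person working on the problem:

```python
def effective_resources(funding, talent):
    # Effective Resources = Funding ^ 0.5 * Talent ^ 0.5, as suggested above.
    return funding ** 0.5 * talent ** 0.5

def marginal_value_of_extra_person(funding, talent):
    # How much 'effective resources' one additional person adds, holding funding fixed.
    return effective_resources(funding, talent + 1) - effective_resources(funding, talent)

# Hypothetical problem area with 100 people working on it:
print(marginal_value_of_extra_person(funding=100e6, talent=100))  # baseline
print(marginal_value_of_extra_person(funding=200e6, talent=100))  # after funding doubles
```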

Comment by robert_wiblin on In defence of epistemic modesty · 2017-10-30T00:12:22.075Z · score: 5 (7 votes) · EA · GW

This post is one of the best things I've read on this forum. I upvoted it, but didn't feel that was sufficient appreciation for you writing something this thorough in your spare time!

Comment by robert_wiblin on In defence of epistemic modesty · 2017-10-29T23:36:40.190Z · score: 5 (5 votes) · EA · GW

I just thought I'd note that this appears similar to the 'herding' phenomenon in political polling, which reduces aggregate accuracy: http://www.aapor.org/Education-Resources/Election-Polling-Resources/Herding.aspx