Posts

An overview of arguments for concern about automation 2019-08-06T07:56:59.396Z · score: 34 (12 votes)
Rationality vs. Rationalization: Reflecting on motivated beliefs 2018-11-26T05:39:13.987Z · score: 31 (22 votes)

Comments

Comment by alexlintz on The ITN framework, cost-effectiveness, and cause prioritisation · 2019-10-10T08:32:06.568Z · score: 2 (2 votes) · EA · GW

I think your critique of the ITN framework might be flawed (though I haven't read section 2 yet). I assume some of my critique must be wrong, as I still feel a bit confused about it, but I really need to get back to work...

One point that I find a bit confusing is your use of the term 'marginal cost-effectiveness'. To my knowledge this is not an established term in economics or elsewhere. What I think you mean instead is the average benefit given a certain amount of money.

Cost-effectiveness is (according to Wikipedia, at least) generally expressed as something like 100 USD/QALY. This is done by looking at how much a program cost and how many QALYs it produced. So we get the average benefit of each $100 spent on the program. However, we gain no insight into what happened inside the program. Maybe the first $100 did all the work and the rest ended up being fluff; we don't know. More likely, the money had diminishing marginal returns.
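To make the average-vs-marginal distinction concrete, here's a minimal sketch (the numbers and the log-shaped returns curve are entirely made up for illustration, not taken from the post):

```python
import math

def qalys_produced(spend_usd):
    # Made-up diminishing-returns curve: QALYs grow with the log of spending.
    return 150 * math.log(1 + spend_usd / 100)

total_spend = 10_000
total_qalys = qalys_produced(total_spend)
average_cost_per_qaly = total_spend / total_qalys   # the headline "cost-effectiveness" number

# Marginal cost-effectiveness: what the *next* 100 USD buys at the current spending level.
extra_qalys = qalys_produced(total_spend + 100) - qalys_produced(total_spend)
marginal_cost_per_qaly = 100 / extra_qalys

print(f"Average:  {average_cost_per_qaly:.0f} USD/QALY")   # roughly 14 USD/QALY
print(f"Marginal: {marginal_cost_per_qaly:.0f} USD/QALY")  # roughly 68 USD/QALY, much worse than the average
```

The headline figure only reports the first number; whether the second number is close to it depends entirely on the shape of the returns curve.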

When talking about tractability you say:

with importance and tractability alone, you could calculate the marginal cost-effectiveness of work on a problem, which is ultimately what we care about

You would know cost-effectiveness if you knew the amount spent so far and the amount of good done. You know the amount spent from neglectedness, but you don't know how much good has already been done with that money. I guess marginal cost-effectiveness = the average benefit from X more dollars. Let's say that X is a doubling of the amount spent so far. I don't think we can construe this as marginal, though, as doubling the money is not an 'at the margin' change. I think, then, that tractability gives you the average benefit from X more dollars (so no need for scale).

We still need neglectedness and scale though to do a proper analysis.

Scale, because if something isn't a big problem, why solve it? And to look at neglectedness, let's use some made-up numbers:

Say that we as humanity have already spent 1 trillion USD on climate change (we use this to measure neglectedness) and got a 1% reduction in the risk of an extinction event (we use this to calculate the amount of good = 0.01 * the present value of all future lives). That gives us cost-effectiveness (cost / good done). We DON'T know, however, what happens at the margin (if we put more money in). We just have an average. Assuming constant returns may seem (almost) reasonable for an intervention like bed net distribution, but it seems less reasonable when we've already spent 1 trillion USD on a problem. Then what we really need to know is the benefit of, say, another 1 trillion USD. This, I think, is what 80k's tractability measure is trying to get at: the average benefit (or cost-effectiveness) of another hunk of money/resources.
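Here's a toy version of that calculation (the log-shaped returns curve is a made-up assumption; only the "1 trillion USD for a 1% risk reduction" figures come from the made-up example above):

```python
import math

def risk_reduction_pp(spend_trillions):
    # Hypothetical diminishing-returns curve, in percentage points of extinction risk reduced,
    # calibrated so the first 1 trillion USD buys the 1 percentage point from the example.
    return math.log(1 + spend_trillions) / math.log(2)

spent_so_far = 1.0                                    # 1 trillion USD already spent
past_reduction = risk_reduction_pp(spent_so_far)      # = 1.0 percentage point
average_cost = spent_so_far / past_reduction          # 1 trillion USD per percentage point

# What another 1 trillion USD buys, starting from where we already sit on the curve.
extra_reduction = risk_reduction_pp(spent_so_far + 1.0) - risk_reduction_pp(spent_so_far)
marginal_cost = 1.0 / extra_reduction

print(f"Past average:  {average_cost:.1f} trillion USD per percentage point")
print(f"Next trillion: {marginal_cost:.1f} trillion USD per percentage point")  # ~1.7, i.e. noticeably worse
```

Under this (assumed) curve, the average of past spending makes the problem look cheaper to work on than the next trillion actually is; a different curve shape would give a different gap.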

So, defending neglectedness a bit: if we think that the marginal benefit of more money is not constant (which seems eminently reasonable), then it makes sense to try to find out where we are on the curve. Neglectedness helps show us where we might be on the curve, even though we have little idea what the curve looks like (though I would generally find it safe to assume decreasing marginal returns). If we're on the flat bit of the diminishing-marginal-returns curve then we sure as hell want to know, or at least find evidence indicating that to be likely.

So then neglectedness is trying to find where we are on the curve, which helps us understand the marginal return to one more person/dollar entering (the true margin). This might mean that even if a problem is unsolvable, there might be easy gains to be had in terms of reducing risk on the margin. For something that is neglected but not tractable, we might be able to get huge benefits by throwing a few people/dollars in (x-risk reductions, for example), but those might peter out really quickly, thus making it intractable. It would be less attractive overall then, because putting a lot of people in would not be worth it.

Tractability asks: if we were to dump lots more money in, what would the average returns look like? If we are now at the flat part of the curve, average returns might be FAR lower than they were in a cost-effectiveness analysis of what we already spent (the average returns of past spending).

Maybe new intuitions for these:

Neglectedness: How much bang for the buck do we get for one more person/dollar?

Tractability: Is it worth dumping lots of resources into this problem?


Comment by alexlintz on What actions would obviously decrease x-risk? · 2019-10-08T17:47:39.680Z · score: 3 (2 votes) · EA · GW

Just to play devil's advocate with some arguments against peace (in a not so well thought out way)... There's a book called 'The Great Leveler' which puts forward the hypothesis that the only times widespread redistribution has happened are after wars. This means that without war we might expect consistently rising inequality. This effect has been due to mass mobilization ('Taxing the Rich' asserts that there has only been mass political willpower to increase redistribution when veterans could claim they had served and deserved compensation) and destruction of capital (in Europe much of the capital was destroyed in WW2 -> massive decrease in inequality; the US saw less of this on both fronts) (haven't read the book though).

Spinning this further, we could be approaching a time where great power war would no longer have this effect. Less labor is required for war, and what is required is higher skilled, so perhaps there would be little use for low-skilled grunts in near-future wars (or already). If we also saw less destruction of capital (maybe information warfare is the way of the future?), then we lose the mechanisms which made war a leveler in the past. So we might be in the last period where a great power war (one of the only things we know reduces inequality) would be able to reduce inequality.

If inequality continues to increase we could see suboptimal societal values which could continue on indefinitely and/or cause a large amount of suffering in the medium run. It could also lead to more domestic unrest in the medium run, which would imply a peace-now vs. peace-later trade-off. Depending on how hingey the current moment is for the long-term future, it could be better to have peace later. ALSO, the UN was created post-WW2. Maybe we only have appetite for major international cooperation after nasty wars?

Anyway... Even after considering all that, peace and cooperation are probably good on net, but it's not as obvious as it may seem. (Wrote this on mobile, sorry for any errors and for not having read more than a few pages of the books I cited.)

Comment by alexlintz on EA Handbook 3.0: What content should I include? · 2019-10-01T18:09:43.948Z · score: 11 (7 votes) · EA · GW

I always recommend Nate Soares' post 'On Caring' to motivate the need for rational analysis of problems when trying to do good. http://mindingourway.com/on-caring/


Comment by alexlintz on An overview of arguments for concern about automation · 2019-08-06T13:16:55.202Z · score: 1 (1 votes) · EA · GW

Wage growth:

It took a surprisingly long time to find anything on real wage trends in Europe, but judging by the graphs on page 5 of this paper, it looks like Sweden, Norway, and in part the UK are exceptions to quite slow real-wage growth. Germany, France, Italy, Spain, and Denmark follow the wage stagnation of the US.

I very much agree, though, that my analysis (and the discussion in general) is very focused on the US. This paper shows that, at least on a micro level, there are effects on wages and employment from automation in the UK.

I guess I'd conclude roughly that stagnation is happening in many (if not all) developed countries. I would wager that automation plays some role, though I would guess that role is relatively small in the grand scheme of things (for now).

Money in elections:

I think even if that theory were true, I would argue that campaign techniques are improving (a la Cambridge Analytica, AggregateIQ) such that in the near future money may become more persuasive. I don't think we've really seen a campaign between two tech-savvy politicians willing to pay top dollar for voter manipulation (the first one in 2020?), but if we did, I would expect campaign contributions to grow in importance. It would definitely be interesting to dive into this a bit more though.

Pace of Automation:

Yes... I agree this is a major blind spot. I haven't looked at this literature much at all and don't really feel qualified to make serious assessments of the quality of the many predictions. I agree there should be something there though. I will add a few sentences following the ILO's literature review on the Future of Work to give people an idea of what is being talked about.

Comment by alexlintz on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T07:26:13.153Z · score: 21 (13 votes) · EA · GW

Yeah, I tend to agree that sending the whole thing is unnecessary. The first 17 chapters of the printed version distributed at CFAR workshops (I think; I haven't actually been to one) are enough to get people engaged enough to move to the online medium. I'm guessing sending just that small-looking book will make people more likely to read it, as seeing a 2,000-page book would definitely be intimidating enough to stop many from actually starting.

I do tend to think giving the print version is useful, as it creates some sort of reciprocity which should incentivize reading it.

Comment by alexlintz on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-02T10:35:11.710Z · score: 15 (9 votes) · EA · GW

I agree that quick and decisive input from someone very knowledgeable about EA and the relevant topic would be very useful and would save a lot of time and indecision for people evaluating career options.

I think we can provide a bit of this, though, through more engaged online communities around given topic areas. That's not nearly as good as in-person talks, but people can at least get some general feedback on career ideas. I'm hoping to host an event later this year that will gather people interested in a cause area and use that as a catalyst to form a more cohesive online community. As far as I can tell (and in my experience), people tend not to engage much in an online community if they don't know the other people well, though it's definitely true that some people are more than happy to engage with people they don't know.

I don't know how this could move forward but it seems like someone could potentially make a difference by engineering Facebook or Slack groups focused on certain cause areas to be more active places for general discussion and career advice. This would be so helpful for people who lack close contact with knowledgeable people in EA or within their cause area.

Comment by alexlintz on Rationality vs. Rationalization: Reflecting on motivated beliefs · 2018-11-27T16:16:42.859Z · score: 5 (5 votes) · EA · GW

Yes! Totally agree. I think I mentioned very briefly that one should also be wary of social dynamics pushing toward EA beliefs, but I definitely didn't address it enough. Although I think the end result was positive and that my beliefs are true (with some uncertainty, of course), I would guess that my update toward long-termism was due in large part to lots of exposure to the EA community and the social pressure that brings.

I basically bought some virtue signaling in the EA domain at the cost of signaling in broader society. Given that I hang out with a lot of EAs and plan to do so more in the future, I'd guess that if I were to rationally evaluate this decision it would look net positive in favor of changing toward long-termism (as you would also gain within the EA community by making a similar switch, though with some short-term 'I told you so' negative effects).

So yes, I think it was largely because of closer social ties to the EA community that this switch finally became worthwhile, and perhaps this calculation was going on at a subconscious level. It's probably no coincidence that I finally made a full switch-over during an EA retreat, where the broader-society costs of switching beliefs were less salient and the EA benefits much more salient. To have the perfect decision-making situation, I guess it would be nice to have equally good opportunities in communities representing every philosophical belief, but for now that seems a bit unlikely. I suppose it's another argument for cultivating diversity within EA.

This brings up a whole other rabbit hole in terms of thinking about how we want to appeal to people with some interest in EA but not yet committed to the ideas. I think the social aspect is probably larger than many might think. Of course, if we emphasized this we'd be limiting people's ability to choose to join EA in a rational way. But then, what is 'choice' really, given the social construction of our personalities and desires....