Changes to how 80,000 Hours prioritises the world's problems

post by 80000_Hours · 2017-02-13T07:50:35.092Z · score: 18 (18 votes) · EA · GW · Legacy · 4 comments

80,000 Hours recently rewrote its framework for comparing problems against one another. This is how we advise people on which problems are most 'pressing', and so most promising for those aiming to have a large social impact with their career. We recommend checking it out.

This framework is a work in progress, and we expect to iterate on it further in future.

The biggest changes since the last version are:

In designing the framework, we've benefitted in particular from the work of Owen Cotton-Barratt at the Future of Humanity Institute.

You could potentially use this process to write your own profiles of problems you or others in the community might work on, and we would be interested to see the results.

We also recently rewrote our profile of global priorities research - that is, prioritising different global problems as a profession. We hope it's now easier to take action after reading it. If you can see yourself conducting that research in your career, let us know and we might be in touch.

4 comments

Comments sorted by top scores.

comment by Peter_Hurford · 2017-02-14T00:07:50.566Z · score: 10 (10 votes) · EA(p) · GW(p)

Creating a well-defined mathematical underpinning for the neglectedness-tractability-importance framework is a really cool non-trivial accomplishment. Thanks for helping further arm all of us cause prioritizers. :)

comment by SoerenMind · 2017-02-13T14:39:10.213Z · score: 3 (3 votes) · EA(p) · GW(p)

If the funding for a problem with known total funding needs (e.g. creating drug x which costs $1b) goes up 10x, its solvability will go up 10x too - how do you resolve that this will make problems with low funding look very intractable? I guess the high neglectedness makes up for it. But this definition of solvability doesn't quite capture my intuition.

comment by Robert_Wiblin · 2017-02-28T22:22:57.319Z · score: 0 (0 votes) · EA(p) · GW(p)

Don't the shifts in solvability and neglectedness perfectly offset one another in such a case? Can you write out the case you're considering in more detail?
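A quick numeric sketch of the offsetting claim, using the scale × solvability × neglectedness factorisation from the framework. The numbers below (a hypothetical $1b fixed-cost problem at two funding levels) are illustrative, not from 80,000 Hours:

```python
# Illustrative sketch: for a problem with a known fixed total cost C,
#   solvability   ~ fraction solved per doubling of funding = F / C
#   neglectedness ~ 1 / F
# so the F terms cancel, and raising funding 10x leaves the overall
# cost-effectiveness estimate unchanged.

def cost_effectiveness(scale, funding, total_cost):
    solvability = funding / total_cost   # fraction solved per doubling of funding
    neglectedness = 1.0 / funding        # inverse of current resources
    return scale * solvability * neglectedness

C = 1e9  # hypothetical $1b total cost (e.g. developing drug x)
low = cost_effectiveness(scale=100.0, funding=1e6, total_cost=C)
high = cost_effectiveness(scale=100.0, funding=1e7, total_cost=C)  # funding up 10x
print(low == high)  # True: the 10x rise in solvability offsets neglectedness
```

On this toy model the two shifts cancel exactly, which is the point of the question above; SoerenMind's worry is that the solvability term alone then makes low-funding problems look intractable before neglectedness is factored in.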