Posts

A Funnel for Cause Candidates 2021-01-13T19:45:52.508Z
2020: Forecasting in Review 2021-01-10T16:05:37.106Z
Forecasting Newsletter: December 2020 2021-01-01T16:07:36.000Z
Big List of Cause Candidates 2020-12-25T16:34:38.352Z
What are good rubrics or rubric elements to evaluate and predict impact? 2020-12-03T21:52:27.802Z
Forecasting Newsletter: November 2020. 2020-12-01T17:00:40.460Z
An experiment to evaluate the value of one researcher's work 2020-12-01T09:01:49.034Z
Predicting the Value of Small Altruistic Projects: A Proof of Concept Experiment. 2020-11-22T20:07:57.499Z
Announcing the Forecasting Innovation Prize 2020-11-15T21:21:52.151Z
Incentive Problems With Current Forecasting Competitions. 2020-11-10T21:40:46.317Z
Forecasting Newsletter: October 2020. 2020-11-01T13:00:04.440Z
Forecasting Newsletter: September 2020. 2020-10-01T11:00:02.405Z
Forecasting Newsletter: August 2020. 2020-09-01T11:35:19.279Z
Forecasting Newsletter: July 2020. 2020-08-01T16:56:41.600Z
Forecasting Newsletter: June 2020. 2020-07-01T09:32:57.248Z
Forecasting Newsletter: May 2020. 2020-05-31T12:35:36.863Z
Forecasting Newsletter: April 2020 2020-04-30T16:41:38.630Z
New Cause Proposal: International Supply Chain Accountability 2020-04-01T07:56:17.225Z
NunoSempere's Shortform 2020-03-22T19:58:54.830Z
Shapley Values Reloaded: Philantropic Coordination Theory & other miscellanea. 2020-03-10T17:36:54.114Z
A review of two books on survey-making 2020-03-01T19:11:13.828Z
A glowing review of two free online MIT Global Poverty courses 2020-01-15T11:40:41.519Z
[Part 1] Amplifying generalist research via forecasting – models of impact and challenges 2019-12-19T18:16:04.299Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T16:36:10.564Z
Shapley values: Better than counterfactuals 2019-10-10T10:26:24.220Z
Why do social movements fail: Two concrete examples. 2019-10-04T19:56:02.028Z
EA Mental Health Survey: Results and Analysis. 2019-06-13T19:55:37.127Z

Comments

Comment by nunosempere on Promoting EA to billionaires? · 2021-01-27T20:42:54.627Z · EA · GW

See also: Gates Foundation gives millions to help persuade ultra-wealthy donors to give more of their billions and The Giving Pledge

Comment by nunosempere on (Autistic) visionaries are not natural-born leaders · 2021-01-26T16:01:13.713Z · EA · GW

I disagree with this. I'm writing this without having looked at the data, but autism / Asperger's syndrome, particularly in their high-functioning versions, seem to be underdiagnosed, and it seems to be a very reasonable inference that at least some of the leaders under discussion were in fact on the autistic spectrum, or otherwise non-neurotypical. We can check this with a Metaculus question if you want.

Comment by nunosempere on Why "cause area" as the unit of analysis? · 2021-01-26T15:50:17.582Z · EA · GW

So for me, the motivation for categorizing altruistic projects into buckets (e.g., classifications of philanthropy) is to notice the opportunities, the gaps, the conceptual holes, the missing buckets. Some examples:

  • If you divide undertakings according to their beneficiaries and you have a good enough list of beneficiaries, you can notice which beneficiaries nobody is trying to help. For example, you might study invertebrate welfare, wild animal welfare, or something more exotic, such as suffering in fundamental physics.
  • If you have a list of tools, you can notice which tools aren't being applied to which problems, or you can explicitly consider which tool-problem pairings are most promising. For example, ruthlessness isn't often combined with altruism.
  • If you have a list of geographic locations, you can notice which ones seem more or less promising.
  • If you classify projects according to their level of specificity, you can notice that there aren't many people doing high level strategic work, or, conversely, that there are too many strategists and that there aren't many people making progress on the specifics.

More generally, if you have an organizing principle, you can optimize across that organizing principle. So here, in order to be useful, a division of cause areas by some principle doesn't have to be exhaustive, or even good in absolute terms; it just has to allow you to notice an axis of optimization. In practice, I'd also tend to think that having several incomplete categorization schemes along many axes is more useful than having one very complete categorization scheme along a single axis.

Comment by nunosempere on Forecasting of Priorities: a tool for effective political participation? · 2021-01-25T16:30:17.963Z · EA · GW

"What are the top national/world priorities" is usually so complex, that it will remain to be a mostly subjective judgment. Then, how else would you resolve it than by looking for some kind of future consensus?

You could decompose that complex question into smaller questions which are more forecastable, and forecast those questions instead, in a similar way to what CSET is doing for geopolitical scenarios. For example:

  • Will a new category of government spending take up more than X% of a country's GDP? If so, which category?
  • Will the Czech Republic see war in the next X years?
  • Will we see transformative technological change? In particular, will we see robust technological discontinuities in any of these X domains / some other sign-posts of transformative technological change?
  • ...

This might require having infrastructure to create and answer a large number of forecasting questions efficiently, and it will require having a good ontology of "priorities/mega-trends" (so that most possible new priorities are included and forecasted), as well as a way to update that ontology.

Comment by nunosempere on Forecasting of Priorities: a tool for effective political participation? · 2021-01-25T10:32:00.193Z · EA · GW

Have you considered that you're trying to do too many things at the same time?

Comment by nunosempere on Big List of Cause Candidates · 2021-01-23T12:50:20.822Z · EA · GW

Changelog 23rd Jan/2021

Notes:

  • I don't like "Politics: System Change, Targeted Change, and Policy Reform" as a category. I'm thinking of dividing it into several subcategories (e.g., "Politics: Systemic Change", "Politics: Mechanism Change", "Politics: Policy Change", and "Politics: Other"). I'd also be interested in more good examples of systemic change interventions, because the one which I most intensely associate with it is something like "Marxist revolution".
  • Hat tip to @Prabhat Soni for suggesting risks from whole brain emulation, atomically precise manufacturing, infodemics, cognitive enhancement, universal basic income, and the LessWrong tag for wireheading.

To do:

  • Think about adding "Cognitive Enhancement" as a cause area. See Bostrom here. Unclear to what extent it would be distinct from "Raising IQ"
  • Think about adding "Infodemics and protecting organisations that promote the spread of accurate knowledge like Wikipedia.". In particular, think if there is a more general category to which this belongs.
  • Tag these and add them to the google doc.
  • Follow up with the people who suggested these candidates.
Comment by nunosempere on Big List of Cause Candidates · 2021-01-23T12:49:57.364Z · EA · GW

Thread for changelogs

Comment by nunosempere on Big List of Cause Candidates · 2021-01-23T12:48:46.688Z · EA · GW

Done. From now on, this to-do will be at the end of my "changelogs".

Comment by nunosempere on Why I'm concerned about Giving Green · 2021-01-23T12:07:21.353Z · EA · GW

from my experience working on impact measurement in climate projects and programs, I believe that much of this measurement is bullsh*t [...] I realized most of it was smoke and mirrors

Can you give an example of this?

Comment by nunosempere on Forecasting of Priorities: a tool for effective political participation? · 2021-01-18T12:00:31.134Z · EA · GW

Substantive points

Starting March 2021, we will test, whether a diversified group of 150-250 people with financial motivations and the basics of forecasting (representing the citizens who would choose the “forecaster” strategy) are actually able to predict the top 5-10 megatrends / grand societal challenges (from a long-list of 20-40), that will be prioritized 3-5 years later by experts in a Delphi study.

Wait, so citizens are incentivized to predict what experts will say? This seems a little bit weak, because experts can be arbitrarily removed from reality. You might think that, no, our experts have a great grasp of reality, but I'd intuitively be skeptical. As in, I don't really know that many people who have a good grasp of what the most pressing problems of the world are.

So in effect, if that's the case, then the key feedback loop of your system is the one between the experts using the Delphi system and reality, and the experts <> forecasters loop seems secondary. For example, if I'm asked what Eliezer Yudkowsky will say the world's top priority is in three years, I pretty much know that he's going to say "artificial intelligence", and if you ask me to predict what Greta Thunberg will say, I pretty much know that she's going to go with "climate change".

I think that eventually you'll need a cleverer system which has more contact with reality. I don't know how that system would look, though. Perhaps CSET has something to say here. In particular, they have a neat method of taking big picture questions and decomposing them into scenarios and then into smaller, more forecastable questions.

Anyways, despite this, the first round seems like an interesting governance/forecasting experiment.


Also, 150-250 people seems like too few to get great forecasters. If you were optimizing for forecasting accuracy, you might be better off hiring a bunch of superforecasters.


Re: Predict-O-Matic problems, see some more here


(from a long-list of 20-40)

Not allowing forecasters to suggest their own trends (maybe with some very cursory review) seems like an easy mistake to fix.

Nitpicks:

The core of the mechanism is a forecasting tool, where each citizen receives a virtual credit (let's say $200) each year (on their birthday, so that the participation is spread in time). After login, they see a long-list of 20-40 public causes and challenges (such as longevity, corruption, legalization of drugs, mental health, better roads, etc). Citizens can (anonymously) allocate credit anytime within a year, using quadratic voting, to any causes that they consider a priority, and explain why or specify it further in a public comment. They can use different strategies to do that, which I will describe below. Once they allocate the credit, the amount actually goes to solving the cause (i.e. funding research and implementation of the solutions) as funding by the government.

This may have the problem that once the public identifies a "leader", either a very good forecaster or a persuasive pundit, they can just copy their forecasts. As a result, this part:

As a hypothetical result, the government is happy because it effectively harnesses a lot of inputs about what to fund and only pays rewards to the most visionary inputs 3y later.

seems like an overestimate; you wouldn't be harnessing that many inputs after all.


Non-populist politicians are happy because they can tell their voters “your opinion matters” and now it's believable. The citizens are happy because they feel directly involved in policymaking and get educated in the process. NGOs hoping to improve public discourse are happy because there is a growing structured database of weighted arguments that get checked for accuracy 3 years later. Media are happy because now there is a constantly updating and easily understandable aggregate of what citizens actually think and want.

This depends on how much of the budget is chosen this way. In the worst case scenario, this gives a veneer of respectability to a process which only lets citizens decide over a very small portion of the budget.

Comment by nunosempere on Big List of Cause Candidates · 2021-01-18T10:42:47.505Z · EA · GW

With regards to coral reefs, your post is pretty short. In my experience, people are more likely to pay attention to it if you flesh it out a little bit more.

Comment by nunosempere on Big List of Cause Candidates · 2021-01-18T10:36:14.976Z · EA · GW

Yeah, this makes sense, thanks.

Comment by nunosempere on A Funnel for Cause Candidates · 2021-01-16T10:57:51.975Z · EA · GW

Makes sense, thanks

Comment by nunosempere on A Funnel for Cause Candidates · 2021-01-15T09:49:16.687Z · EA · GW

"I have no particular reason to disagree with any of your reasoning about how promising this idea is for the population it intends to benefit, but I just think benefitting that population is [huge number] less important than benefitting [this other population], so I really don't care."

So suppose that the intervention was about cows, and I (the vectors in "1" in the image) gave them some moderate weight X, the length of the red arrow. Then if someone gives them a weight of 0.0001X, their red arrow becomes much smaller (as in 2.), and the total volume enclosed by their cube becomes smaller. I'm thinking that the volume represents promisingness. But they can just apply that division X -> 0.0001X to all my ratings, and calculate their new volumes and ratings (which will be different from mine, because cause areas which only affect, say, humans, won't be affected). 

Or "I have no particular reason to disagree with any of your reasoning about how promising this idea is if we aim to simply maximise expected utility even in Pascalian situations, and accept long chains of reasoning with limited empirical data behind them. But I'm just really firmly convinced that those are bad ways to make decisions and form beliefs, so I don't really care how well the idea performs from that perspective."

In this case, the red arrow would go completely to 0, and that person would just focus on the area of the square in which the blue and green arrows lie, across all cause candidates. Because I am looking at volume and they are looking at areas, our ratings will again differ.
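Here is a minimal numerical sketch of the rescaling I have in mind. All axis values, labels, and weights below are made up for illustration; the only point is that someone who weights cows differently can rescale my animal axis once and re-rank, while human-only causes stay put, and someone who drops an axis entirely ends up comparing areas instead of volumes.

```python
# Toy illustration; all numbers and labels are made up.
MY_COW_WEIGHT = 0.5               # my weight for a cow relative to a human (made up)
READER_COW_WEIGHT = 0.5 * 0.0001  # a reader who values cows 10,000x less

causes = {
    # cause: (beneficiary, tractability, scale) -- tractability/scale made up
    "cow welfare intervention": ("cow", 2.0, 3.0),
    "human-only intervention":  ("human", 3.0, 2.0),
}

def promisingness(cause_info, cow_weight):
    beneficiary, tractability, scale = cause_info
    moral_weight = cow_weight if beneficiary == "cow" else 1.0
    # "Volume of the cuboid": multiply the three axes together.
    return moral_weight * tractability * scale

print("my ranking:    ", {c: promisingness(i, MY_COW_WEIGHT) for c, i in causes.items()})
print("reader ranking:", {c: promisingness(i, READER_COW_WEIGHT) for c, i in causes.items()})

# Someone who sets the red axis to zero just drops it and compares the
# area spanned by the remaining two axes instead of the volume.
print("areas only:    ", {c: i[1] * i[2] for c, i in causes.items()})
```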

Comment by nunosempere on NunoSempere's Shortform · 2021-01-14T17:59:48.918Z · EA · GW

Test II

Comment by nunosempere on A Funnel for Cause Candidates · 2021-01-14T12:19:49.270Z · EA · GW

I have got the impression that there is going to be a single funnelling exercise that aims to directly compare shorttermist vs longtermist areas including on their 'scale'.

Yeah, so I (and others) have been exploring different things, but I don't know what I'll end up going with. That said, I think that there are gains to be had in optimizing the first two stages, not just the third (evaluation) stage.

Comment by nunosempere on A Funnel for Cause Candidates · 2021-01-14T11:43:23.810Z · EA · GW

Nitpick:  A change of basis might also be combined with a projection into a subspace. In the example, if one doesn't care about animals, or about the long term future at all, then instead of the volume of the cuboid they'd just consider the area of one of its faces.

Another nitpick: the ratio of humans to animals would depend on the specific animals. However, I sort of feel that the high-level disagreements of the sort jackmalde is pointing to are probably about the ratio of the value of a happy human life to that of a happy cow, not about the ratio of the life of a happy cow to that of a happy pig, chicken, insect, etc.

Comment by nunosempere on A Funnel for Cause Candidates · 2021-01-14T11:41:23.429Z · EA · GW

So suppose you have a cause candidate, and some axes like the ones you mention:

  • Degree to which the area arguably runs into the cluelessness problem
  • Degree to which the area affects beings who don't exist yet / Promisingness of the area under the totalist view.
  • ...

But also some others, like

  • Promisingness given a weight of X humans to Y animals
  • Promisingness given that humans are ~infinitely more valuable than animals
  • Tractability of finding good people to get a project started
  • Units of resources needed
  • Probability that a trusted forecasting system gives to your project still existing in 2 years.

For simplicity, I'm going to just use three axes, but the below applies to more. Right now, the topmost vectors represent my own perspective on the promisingness of a cause candidate, across three axes, but they could eventually represent some more robust measure (e.g., the aggregate of respected elders, or some other measure you like more). The vectors at the bottom are the perspectives of people who disagree with me across some axis.

For example, suppose that the red vector was "ratio of the value of a human to a standard animal", or "probability that a project in this cause area will successfully influence the long-term future".

Then person number 2 can say "well, no, humans are worth much more than animals". Or, "well, no, the probability of this project influencing the long-term future is much lower". And person number 2 can say something like "well, overall I agree with you, but I value animals a little bit more, so my red axis is somewhat higher", or "well, no, I think that the probability that this project has of influencing the long-term future is much higher".

Crucially, they wouldn't have to do this for every cause candidate. For example, if I value a given animal living a happier life the same as X humans, and someone else values that animal as 0.01X humans, or as 2X humans,  they can just apply the transformation to my values.

Similarly, if someone is generally very pessimistic about the tractability of influencing the long-term future, they could transform my probabilities of that happening. They could divide my probabilities by 10 (or, more properly, subtract some amount of probability in bits, i.e., shift them in log-odds space). Then the transformation might not be linear, but it would still be doable.

Then, knowing the various axes, one could combine them to find out the expected impact. For example, one could multiply three axes to get the volume of the box, or add them as vectors and consider the length of the purple vector, or some other transformation which isn't a toy example.

So, a difference in perspectives would be transformed into a change of basis.
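A rough sketch of the kind of transformation I have in mind, with placeholder axis names and numbers (they are not real ratings): the reader rescales my animal weight, shifts my probability in bits (log-odds), leaves the axes they agree with alone, and then recombines however they like.

```python
import math

# Placeholder ratings for one cause candidate along three axes (made up).
my_axes = {
    "animal_weight": 0.5,           # value of one animal relative to one human
    "p_influences_longterm": 0.10,  # probability the project influences the long-term future
    "tractability": 3.0,            # some tractability score, arbitrary units
}

def shift_probability_in_bits(p, bits):
    """Shift a probability by `bits` in log-odds space (negative = more pessimistic)."""
    log_odds = math.log2(p / (1 - p)) + bits
    odds = 2 ** log_odds
    return odds / (1 + odds)

# A reader who values animals 100x less and is ~3 bits more pessimistic about
# long-term influence applies their transformation to *my* numbers once,
# instead of re-rating every cause candidate from scratch:
reader_axes = {
    "animal_weight": my_axes["animal_weight"] / 100,
    "p_influences_longterm": shift_probability_in_bits(my_axes["p_influences_longterm"], -3),
    "tractability": my_axes["tractability"],  # no disagreement on this axis
}

def combine(axes):
    # One toy way of combining axes: multiply them ("the volume of the box").
    product = 1.0
    for v in axes.values():
        product *= v
    return product

print(combine(my_axes), combine(reader_axes))
```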

So:

Even in that case there's a difficulty, in that anyone who disagrees with your stance on these foundational questions would then have the right to throw out all of your funneling work and just do their own. 

doesn't strike me as true. Granted, I haven't done this yet, and I might never because other avenues might strike me as more interesting, but the possibility exists. 

Comment by nunosempere on A Funnel for Cause Candidates · 2021-01-14T09:18:46.526Z · EA · GW

Makes sense, thanks, changed.

Comment by nunosempere on Big List of Cause Candidates · 2021-01-11T13:26:21.809Z · EA · GW

Acknowledged.

Comment by nunosempere on Big List of Cause Candidates · 2021-01-11T12:13:48.110Z · EA · GW

tl;dr/Notes:

I have some models of the world which lead me to think that the idea was unpromising. Some of them clearly have a subjective component. Still, I'm using the same "muscles" as when forecasting, and I trust that those muscles will usually produce sensible conclusions.

It is possible that in this case I had too negative a view, though not in a way which is clearly wrong (to me). If I were forecasting the question "will a charity be incubated to work on philosophy in schools" (surprise reveal: this is similar to what I was doing all along), I imagine I'd give it a very low probability, but that my teammates would give it a slightly higher probability. After discussion, we'd both probably move towards the center, and thus be more accurate.

Note that if we model my subjective promisingness as true promisingness + an error term, then if we pick the candidate idea at the very bottom of my list (in this case, philosophy in schools, the idea under discussion and one of the four ideas to which I assigned a "very unpromising" rating), we'd expect it both to be unpromising (per your own view) and to have a large error term (I clearly don't view philosophy very favorably).
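As an illustration of that selection effect, here is a small simulation; the distributions are arbitrary, the point is only that the item at the very bottom of a noisily ranked list tends to be both genuinely below average and rated more harshly than warranted.

```python
import random

# Simulate: subjective promisingness = true promisingness + error,
# both drawn from arbitrary N(0, 1) distributions over 100 candidates.
random.seed(0)

bottom_true, bottom_error = [], []
for _ in range(10_000):
    candidates = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)]  # (true, error)
    true, error = min(candidates, key=lambda te: te[0] + te[1])  # worst by *subjective* rating
    bottom_true.append(true)
    bottom_error.append(error)

# Both averages come out clearly negative: the bottom item is usually worse
# than average *and* the rater was usually more negative about it than warranted.
print("mean true promisingness of bottom item:", sum(bottom_true) / len(bottom_true))
print("mean error term of bottom item:        ", sum(bottom_error) / len(bottom_error))
```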

Comment by nunosempere on Big List of Cause Candidates · 2021-01-11T11:56:06.721Z · EA · GW

Let me try to translate my thoughts to something which might be more legible / written in a more formal tone.

  • From my experience observing this in Spain, the philosophy curriculum taught in schools is a political compromise, in which religion plays an important role. Further, if utilitarianism is even taught (it wasn't in my high school philosophy class), it can be taught badly by proponents of some other competing theory. I expect this to happen, because most people (and in expectation most teachers) aren't utilitarian.
  • Philosophy doesn't have high epistemic standards, as evidenced by the fact that it can't come to a conclusion about "who is right". Some salient examples of philosophers who continue to be taught and given significant attention despite having few redeeming qualities are Plotinus, Anaximenes, or Hegel. Although it can be argued that they do have redeeming qualities (Anaximenes was an early proponent of proto-scientific thinking, and Hegel has some interesting insights about history and has shaped further thought), paying too much attention to these philosophers would be the equivalent of coming to deeply understand phlogiston or aether theory when studying physics. I understand that grading the healthiness of a field can be counterintuitive or weird, but to the extent that a field can be sick, I think that philosophy ranks near the bottom (in contrast, development economics of the sort where you do an RCT to find out if you're right would be near the top).
  • Relatedly, when teaching philosophy, too much attention is usually given to the history of philosophy. I agree that an ideal philosophy course which promoted "critical thinking" would be beneficial, but I don't think that it would be feasible to implement it because: a) it would have to be the result of tricky political compromise and have to be very careful around criticizing whomever is in power, and b) I don't think that there are enough good teachers who could pull it off.
  • Note that I'm not saying that philosophy can't produce success stories, or great philosophers, like Parfit, David Pearce, Peter Singer, arguably Bostrom, etc. (though note that all examples except Singer are pretty mathematical). I'm saying that most of the time, the average philosophy class is pretty mediocre.
  • On this note, I believe that my own (negative) experience with philosophy in schools is more representative than yours. Google brings up that you went to Cambridge and UCL, so I posit that you (and many other EAs who have gone to top universities) have an inflated sense of how good teachers are (because you have been exposed to smart and at least somewhat capable teachers, who had the pleasure of teaching top students). In contrast, I have been exposed to average teachers who sometimes tried to do the best they could, and who often didn't really have great teaching skills.
Comment by nunosempere on 2020: Forecasting in Review · 2021-01-11T10:35:57.646Z · EA · GW

Thanks!

Comment by nunosempere on Big List of Cause Candidates · 2021-01-11T10:34:13.446Z · EA · GW

I can imagine reconsidering, but I don't in principle have anything against using my S1. Because:

  • It is fast, and I am rating 100+ causes
  • From past experience with forecasting, I basically trust it.
  • It does in fact have useful information. See here for some discussion I basically agree with.
Comment by nunosempere on Big List of Cause Candidates · 2021-01-10T22:37:38.730Z · EA · GW

OK this seems fairly personal and anecdotal

Yeah, this is fair. Ideally I'd ask a bunch of people what their subjective promisingness was, and then aggregate over that. I'd have to somehow adjust for the fact that people from EA backgrounds might have gone to excellent universities and schools, and thus their estimate of teacher quality might be much, much higher than average, though.

Comment by nunosempere on Why EA meta, and the top 3 charity ideas in the space · 2021-01-10T22:17:49.859Z · EA · GW

Well, it's not like we're each adversarially trying to maximize our own counterfactual impact, rather than our impact as a community :P

Comment by nunosempere on Why EA meta, and the top 3 charity ideas in the space · 2021-01-10T22:14:10.935Z · EA · GW

Here is a past list of EA charity ideas/cause candidates I collected from a previous project, organized by whether they are meta or partially meta (i.e., "a step removed from direct impact"), and then by my subjective promisingness. You can see more information by clicking on the "expand" button near each cause area. They come from a recent project some other people have linked to.

Some of the ones which I think you might think are competitive with your top three ideas are:

Additionally, some of the ones which I think might be competitive with your top three ideas (but about which you might disagree) are:

You can see more in the link above. I also have "Discovering Previously Unknown Existential Risks", which is pretty related to your "Exploratory altruism" cause, and "Effective Animal Advocacy Movement Building" (in which cause CE has incubated Animal Advocacy Careers).

Comment by nunosempere on Big List of Cause Candidates · 2021-01-10T20:55:11.886Z · EA · GW

Ok, cheers, will add.

Comment by nunosempere on Big List of Cause Candidates · 2021-01-10T20:54:47.877Z · EA · GW

To do:

Comment by nunosempere on Big List of Cause Candidates · 2021-01-10T20:51:12.670Z · EA · GW

Can you give a bit more of an explanation about the scoring in the google sheet?

A post about this is incoming.

With respect to philosophy in schools in particular:

Why I'm not excited about it as a cause area:

  • Your post conflicts with my personal experience of how philosophy in schools can be taught. (Spain has philosophy, ethics & civics classes for kids as part of its curriculum, and I remember them being pretty terrible. In a past life, I also studied philosophy at university and overall came away with a mostly negative impression.)
  • I know an EA who is doing something similar to what you propose re: EAs teaching philosophy and spreading values, but for maths in an ultra-prestigious school. Philosophy doesn't seem central to that idea.
  • I believe that there aren't enough excellent philosophy teachers for it to be implemented at scale.
  • I don't give much credence to the papers you cite replicating at scale.
  • On the above two points, see Khorton's comments in your post.
  • To elaborate a bit on that, there are some things in the class of "philosophy in schools" that scale really well, like, say, CBT. But I expect that "philosophy in schools" would scale like, say, Buddhist meditation (i.e., badly without good teachers).
  • Philosophy seems like a terrible field. It has low epistemic standards. It can't come to conclusions. It has Hegel. There is simply a lot of crap to wade through.
  • Philosophy in schools meshes badly with religion and it's easy for the curriculum to become political.
  • I imagine that teaching utilitarianism at scale in schools is not very feasible.
  • I'd expect EA to lose a political fight about teaching EA values (as opposed to, say, Christian values, liberal values, feminist values, etc.). I also expect this fight to be costly.

Why I categorized it as "very-short":

  • If I think about how philosophy in schools would be implemented (and you can see this in Spain), I imagine this coming about as a result of a campaign promise, and lasting for a term or two (4 or 8 years) until the next political party comes in with their own priorities. In Spain we had a problem with politicians changing education laws too often.
  • You in fact propose getting into party-politics as a way to implement "philosophy in schools"
  • When I think of trying to come up with a 100- or 1,000-year research program to study philosophy in schools, the idea doesn't strike me as much superior to the 10-year version: do a review of the existing literature on philosophy in schools and try to get it implemented. This is in contrast with other areas, for which, e.g., a 100-1,000+ year observatory for global priorities research or unknown existential risks does strike me as more meaningful.
  • One of your arguments was: "One reason why it might be highly impactful for philosophy graduates to teach philosophy is that they may, in many cases, not have a very high-impact alternative." This doesn't strike me as a consideration that will last for generations (though, you never know with philosophy graduates)

That said, I can also see why classifying it as longer term would make sense.

Comment by nunosempere on EA and the Possible Decline of the US: Very Rough Thoughts · 2021-01-10T20:10:48.441Z · EA · GW

Re: probabilities, I've been working on a search engine for probabilities as a small side project, and some of the ones I could find are:

Will the USA's Labor Force Participation Rate be lower in 2023 than in 2018?: 70%

Second US civil war before July 2021?: 1% (Metaculus doesn't allow lower probabilities)

Before 1 January 2022, will the U.S. Senate expand the scope of matters for which a filibuster cannot be used?: 3%

SCOTUS impeachment before 2030: 7%

Will another 9/11 on U.S. soil be prevented at least through 2030?: 75%

Longbets series: By 2040 will the percentage of college-aged U.S. citizens who are attending postsecondary educational institutions in the United States drop at least 50% from the level in 2011?: 75%

Date USA Metaculites face emigration crisis: 15% before Dec 25, 2030

Will US life expectancy at birth for both sexes fall below 75 years before 2040? : 13%

Will at least one US state secede from the Union before 31 December, 2030?: 5%

Coup-cast: Probability of a coup in the US in 2021: 0.08%. (at 0.08% per year, the probability in 50 years would be ~4%.)
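As a check on that last parenthetical, the conversion from a yearly rate to a 50-year probability is just compounding, assuming the yearly probability stays constant and years are independent:

```python
p_per_year = 0.0008                      # Coup-cast's 0.08% per year
p_50_years = 1 - (1 - p_per_year) ** 50  # assumes a constant rate and independent years
print(p_50_years)                        # ~0.039, i.e. ~4%
```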


Notes:

  • Most of these are from Metaculus, which isn't surprising given the sheer volume of questions it has and its willingness to include long term and somewhat weird questions.
  • A probability of >0.5% over 50 years sounds reasonable to me.
  • I imagine my probabilities would depend on the specific definition of "collapse". If something like the fall of the Soviet Union would also count as a "collapse" then I could imagine going up to a couple of percentage points.
  • Having a question on Metaculus about this seems like a cheap win.
  • Added the cause candidate tag to this.
Comment by nunosempere on 2020: Forecasting in Review · 2021-01-10T17:23:50.563Z · EA · GW

Oh hey, Samotsvety is back to being team #2!

Comment by nunosempere on 2020: Forecasting in Review · 2021-01-10T17:23:02.440Z · EA · GW

Incidentally, my condolences for falling to #7. 

Comment by nunosempere on 2020: Forecasting in Review · 2021-01-10T17:22:28.231Z · EA · GW

You're of course right

Comment by nunosempere on Big List of Cause Candidates · 2021-01-10T16:30:47.368Z · EA · GW

Thanks, will add (but not rn)

Comment by nunosempere on NunoSempere's Shortform · 2021-01-05T10:26:10.448Z · EA · GW

Quality Adjusted Research Papers

Taken from here, but I want to be able to refer to the idea by itself. 

This spans six orders of magnitude (1 to 1,000,000 mQ), but I do find that my intuitions agree with the relative values, i.e., I would probably sacrifice each example for 10 equivalents of the preceding type (and vice-versa).

A unit — even if it is arbitrary or ad-hoc — makes relative comparison easier, because projects can be compared to a reference point, rather than directly against each other. It also makes working with different orders of magnitude easier: instead of asking how valuable a blog post is compared to a foundational paper, one can move up and down in steps of 10x, which seems much more manageable.
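A small sketch of what "moving in steps of 10x" looks like with a unit of this kind; the example outputs and values below are placeholders for illustration, not my actual ratings of anything.

```python
import math

# Placeholder values on a 1 to 1,000,000 mQ scale; illustrative only.
examples_mQ = {
    "short comment": 1,
    "blog post": 100,
    "foundational paper": 1_000_000,
}

# Instead of asking directly "how many comments is a foundational paper worth?",
# compare each output to the reference unit and count steps of 10x.
for name, value in examples_mQ.items():
    print(f"{name}: {value} mQ (~10^{math.log10(value):.0f} mQ)")
```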

Comment by nunosempere on Big List of Cause Candidates · 2021-01-04T09:30:47.717Z · EA · GW

In that context, this maybe seems like just a pathway for reducing long-term risks from malevolent actors? Or are you thinking more of Age of Em or something else which Hanson wrote?

Comment by nunosempere on Forecasting Newsletter: December 2020 · 2021-01-03T09:28:06.295Z · EA · GW

Yes, there is a substack, which allows you to subscribe per email (and has better formatting): forecasting.substack.com

Comment by nunosempere on How do EA researchers decide on which topics to write on, and how much time to spend on it? · 2021-01-01T12:17:14.794Z · EA · GW

The posts linked under "fairly well received posts" are there just as proof of capabilities (i.e., my forecasting system isn't suggesting terrible projects). They are also fairly long, so I wouldn't suggest reading all of them.

Right now, the time I take to predict the value of a project ranges from almost instantaneous to something like 5 mins.

how do you then estimate how long each research question/project would take? Do you also do forecasts for that? 

Yes, I also have an estimate for hours, but that's a bit more tricky, and I'm not that great at it. 

And do you then just divide the milibostroms by the estimated number of hours to find what's most cost-effective to pursue?

Yes, see column H of the spreadsheet. 
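A minimal sketch of that division, with invented project names and numbers (the real version is just column H of the spreadsheet): divide expected value by expected hours and sort.

```python
# Invented projects and numbers, for illustration only.
projects = {
    # project: (expected value in milibostroms, expected hours)
    "project A": (50, 10),
    "project B": (200, 100),
    "project C": (5, 1),
}

cost_effectiveness = {name: value / hours for name, (value, hours) in projects.items()}

# Sort by milibostroms per hour to see what looks most cost-effective to pursue.
for name, mb_per_hour in sorted(cost_effectiveness.items(), key=lambda kv: -kv[1]):
    print(name, round(mb_per_hour, 2), "mb/hour")
```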

Comment by nunosempere on Big List of Cause Candidates · 2020-12-31T16:39:18.014Z · EA · GW

Right, the criteria in the tag are almost maximally inclusive ("posts which specifically suggest, consider or present a cause area, cause, or intervention. This is independent of the quality of the suggestion, the community consensus about it, or the level of specificity"). This is because I want to distinguish between the gathering step and the evaluation step. I happen to agree that cryonics right now doesn't feel that promising, but I'd still include it because some evaluation processes might judge it to be valuable after all. Incidentally, this has happened to me before: seeing an idea which struck me as really weird and then later coming to appreciate it (fish welfare).

Per Scott Alexander's post, considering the N least promising cause candidates in my list would be like a box which has a low chance of producing a really good idea. It will fail most of the time, but produce good ideas otherwise.

Also, cryonics has been discussed in the context of EA, one just has to follow the links in the post:

Comment by nunosempere on How do EA researchers decide on which topics to write on, and how much time to spend on it? · 2020-12-31T15:16:57.410Z · EA · GW

I predict the value of each project in terms of "microbostroms" (or Quality Adjusted Research Papers; more on which here), or in terms of expected microbostroms per unit of resources, and then carry out the most promising ones. See here for a rubric for that unit.

This has recently led to some fairly well received posts, such as: 

so I'm probably going to continue doing this. I initially started with a simple google sheet, which you can see here. But I recently moved to a foretold community, which looks something like this:

I'm actually looking to see if this has a chance of being useful for other people, so if you or other researchers want to send me a list of projects you're considering, I'm happy to get you set up on foretold and predict an initial estimate of their value. 

Comment by nunosempere on Big List of Cause Candidates · 2020-12-31T00:11:04.012Z · EA · GW

Fair enough; I've changed this to "Ideological politics" pending further changes.

Comment by nunosempere on Big List of Cause Candidates · 2020-12-30T16:14:43.645Z · EA · GW

The Cause Candidates tag has these criteria. You'll note that Cryonics qualifies, as would e.g. each of kbog's political proposals, even though I vehemently disagree with them. I think that the case for this is similar to the case in Rule Thinkers In, Not Out

Comment by nunosempere on Big List of Cause Candidates · 2020-12-30T14:45:03.033Z · EA · GW

I agree that the categorization scheme for politics isn't that great. But I also think that there is an important difference between "pulling one side of the rope harder" (currently under "culture war"; say, putting more resources into the US Senate races in Georgia) and "pulling the rope sideways", say, Getting money out of politics and into charity [^1].

Note that a categorization scheme which distinguishes between the two doesn't have to take a position on their value. But I do want the categorization scheme to distinguish between the two clusters because I later want to be able to argue that one of them is ~worthless, or at least very unpromising. 

Simultaneously, I think that other political endeavors have been tainted by association with more "pulling the rope harder" kinds of political proposals, and making the distinction explicit makes it more apparent that other kinds of political interventions might be very promising.

Your proposed categorization seems to me to have the potential to obfuscate the difference between topics which are heavily politicized along US partisan lines, and those which are not. For example, I don't like putting electoral reform (i.e., using more approval voting, which would benefit candidates near the center with broad appeal) and statehood for Puerto Rico (which would favor Democrats) in the same category.

I'll think a little bit about how and whether to distinguish between raw categorization schemes (which should presumably be "neutral") and judgment values or discussions (which should presumably be separate). One option would be to have, say, a neutral third party (e.g. Aaron Gertler) choose the categorization scheme. 

Lastly, I wanted to say that although it seems we have strong differences of opinion on this particular topic, I appreciate some of your high quality past work, like Extinguishing or preventing coal seam fires is a potential cause area, Love seems like a high priority, the review of space exploration which you linked, your overview of autonomous weapons, and your various posts on the meat eater problem. 

[^1]: Vote pairing would be in the middle, because it could be used both to trade Democrat <=> third party candidates and Republican <=> third party candidates, with third party candidates being the ones that benefit the most (which sounds plausibly good). In practice, I have the impression that exchanges have mostly been set up for Democrat <=> third party trades, but if they gain more prominence I'd imagine that Republicans would invest more in their own setups.

Comment by nunosempere on Big List of Cause Candidates · 2020-12-30T14:34:43.585Z · EA · GW
  1. Added the Space Exploration Review. Great post, btw, of the kind I'd like to see more of for other speculative or early stage cause candidates.
  2. I agree that the existential risks category is too broad, and that I was probably conflating it with dangers from technological development. Will disambiguate
Comment by nunosempere on Big List of Cause Candidates · 2020-12-30T11:40:15.022Z · EA · GW

Changed the "Air Purifiers Against Pollution"

Comment by nunosempere on Big List of Cause Candidates · 2020-12-30T11:36:05.030Z · EA · GW

Added value spreading

Comment by nunosempere on Big List of Cause Candidates · 2020-12-30T11:31:38.600Z · EA · GW

Added universal euphoria

Comment by nunosempere on Big List of Cause Candidates · 2020-12-28T23:59:09.091Z · EA · GW

On the first day, alexrjl went to Carl Shulman and said: "I have looked at 100 cause candidates, and here are the five I predict have the highest probability of being evaluated favorably by you"

And Carl Shulman looked alexrjl in the eye, and said: "these are all shit, kiddo"

On the seventh day, alexrjl came back and said: "I have read through 1000 cause candidates in the EA Forum, LessWrong, the old Felicifia forum and all of Brian Tomasik's writings. And here are the three I predict have the highest probability of being evaluated favorably by you"

And Carl Shulman looked alexrjl in the eye and said: "David Pearce already came up with your #1 twenty years ago, but on further inspection it was revealed not to be promising. Ideas #2 and #3 are not worth much because of such and such"

On the seventh day of the seventh week alexrjl came back, and said "I have scraped Wikipedia, Reddit, all books ever written and otherwise the good half of the internet for keywords related to new cause areas, and came up with 1,000,000 candidates. Here is my top proposal"

And Carl Shulman answered "Mmh, I guess this could be competitive with OpenPhil's last dollar"

At this point, alexrjl attained nirvana. 

Comment by nunosempere on Big List of Cause Candidates · 2020-12-28T23:42:58.102Z · EA · GW

Sure. So one straightforward thing one can do is forecast the potential of each idea/evaluate its promisingness, and then just implement the best ideas, or try to convince other people to do so. 

Normally, this would run into incentive problems because if forecasting accuracy isn't evaluated, the incentive is just to make the forecast that would otherwise benefit the forecaster. But if you have a bunch of aligned EAs, that isn't that much of a problem.

Still, one might run into the problem that maybe the forecasters are in fact subtly bad; maybe you suspect that they're missing a bunch of gears about how politics and organizations work. In that case, we can still try to amplify some research process we do trust, like a funder or incubator who does their own evaluation. For example, we could get a bunch of forecasters to try to forecast whether, after much more rigorous research, some more rigorous, senior, and expensive evaluators would also find a cause candidate exciting, and then only carry out the expensive evaluation for the ideas forecasted to be the most promising.
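The selection step of that amplification might look something like this minimal sketch; the candidate names, forecasts, and budget are invented for illustration.

```python
# Minimal sketch of the amplification step described above: forecasters estimate
# the probability that an expensive, trusted evaluation would come out positive,
# and only the top few candidates actually get that expensive evaluation.
forecasted_p_positive_eval = {
    "cause candidate A": 0.30,
    "cause candidate B": 0.05,
    "cause candidate C": 0.60,
    "cause candidate D": 0.15,
}

K = 2  # how many expensive evaluations we can afford
to_evaluate = sorted(forecasted_p_positive_eval,
                     key=forecasted_p_positive_eval.get,
                     reverse=True)[:K]
print("send to expensive evaluation:", to_evaluate)  # ['cause candidate C', 'cause candidate A']
```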

Simultaneously, I'm interested in altruistic uses for scalable forecasting, and cause candidates seem like a rich field to experiment on. But, right now, these are just ideas, without concrete plans to follow up on them.