Posts

Forecasting Newsletter: March 2021 2021-04-01T17:01:15.831Z
Relative Impact of the First 10 EA Forum Prize Winners 2021-03-16T17:11:29.172Z
Introducing Metaforecast: A Forecast Aggregator and Search Tool 2021-03-07T19:03:44.627Z
Forecasting Newsletter: February 2021 2021-03-01T20:29:24.094Z
Forecasting Prize Results 2021-02-19T19:07:11.379Z
Forecasting Newsletter: January 2021 2021-02-01T22:53:54.819Z
A Funnel for Cause Candidates 2021-01-13T19:45:52.508Z
2020: Forecasting in Review 2021-01-10T16:05:37.106Z
Forecasting Newsletter: December 2020 2021-01-01T16:07:36.000Z
Big List of Cause Candidates 2020-12-25T16:34:38.352Z
What are good rubrics or rubric elements to evaluate and predict impact? 2020-12-03T21:52:27.802Z
Forecasting Newsletter: November 2020. 2020-12-01T17:00:40.460Z
An experiment to evaluate the value of one researcher's work 2020-12-01T09:01:49.034Z
Predicting the Value of Small Altruistic Projects: A Proof of Concept Experiment. 2020-11-22T20:07:57.499Z
Announcing the Forecasting Innovation Prize 2020-11-15T21:21:52.151Z
Incentive Problems With Current Forecasting Competitions. 2020-11-10T21:40:46.317Z
Forecasting Newsletter: October 2020. 2020-11-01T13:00:04.440Z
Forecasting Newsletter: September 2020. 2020-10-01T11:00:02.405Z
Forecasting Newsletter: August 2020. 2020-09-01T11:35:19.279Z
Forecasting Newsletter: July 2020. 2020-08-01T16:56:41.600Z
Forecasting Newsletter: June 2020. 2020-07-01T09:32:57.248Z
Forecasting Newsletter: May 2020. 2020-05-31T12:35:36.863Z
Forecasting Newsletter: April 2020 2020-04-30T16:41:38.630Z
New Cause Proposal: International Supply Chain Accountability 2020-04-01T07:56:17.225Z
NunoSempere's Shortform 2020-03-22T19:58:54.830Z
Shapley Values Reloaded: Philantropic Coordination Theory & other miscellanea. 2020-03-10T17:36:54.114Z
A review of two books on survey-making 2020-03-01T19:11:13.828Z
A glowing review of two free online MIT Global Poverty courses 2020-01-15T11:40:41.519Z
[Part 1] Amplifying generalist research via forecasting – models of impact and challenges 2019-12-19T18:16:04.299Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T16:36:10.564Z
Shapley values: Better than counterfactuals 2019-10-10T10:26:24.220Z
Why do social movements fail: Two concrete examples. 2019-10-04T19:56:02.028Z
EA Mental Health Survey: Results and Analysis. 2019-06-13T19:55:37.127Z

Comments

Comment by NunoSempere on Getting a feel for changes of karma and controversy in the EA Forum over time · 2021-04-22T14:44:40.868Z · EA · GW

I would be interested in how to circumvent this for future analysis.

You can query by year and then aggregate the years. From a past project, in Node.js:

/* Imports */
import fs from "fs"
import axios from "axios"

/* Utilities */
let print = console.log;
let sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms))

/* Support function */
let graphQLendpoint = 'https://www.forum.effectivealtruism.org/graphql/'
async function fetchEAForumPosts(start, end){
  let response  = await axios(graphQLendpoint, ({
    method: 'POST',
    headers: ({ 'Content-Type': 'application/json' }),
    data: JSON.stringify(({ query: `
       {
        posts(input: {
          terms: {
          after: "${start}"
          before: "${end}"
          }
          enableTotal: true
        }) {
          totalCount
          results{
            pageUrl
            user {
              slug
              karma
            }
            
          }
        }
      }`
})),
  }))
  .then(res => res?.data?.data?.posts?.results ?? null)
  return response
}

/* Body */
let years = [];
for (var i = 2005; i <= 2021; i++) {
   years.push(i);
}

let main0 = async () => {
  let data = await fetchEAForumPosts("2005-01-01","2006-01-01")
  console.log(JSON.stringify(data,null,2))
}
//main0()

let main = async () => {
  let results = []
  for(let year of years){
    print(year)
    let firstDayOfYear = `${year}-01-01`
    let firstDayOfNextYear = `${year+1}-01-01`
    let data = await fetchEAForumPosts(firstDayOfYear, firstDayOfNextYear)
    //console.log(JSON.stringify(data,null,2))
    //console.log(data.slice(0,5))
    results.push(...data)
    await sleep(5000)
  }
  print(results)
  fs.writeFileSync("lwPosts.json", JSON.stringify(results, 0, 2))
}
main()

Comment by NunoSempere on Mundane trouble with EV / utility · 2021-04-04T18:51:41.703Z · EA · GW

So here is something which sometimes breaks people: You're saying that you prefer A = 10% chance of saving 10 people over B = 1 in a million chance of saving a billion lives. Do you still prefer a 10% chance of A over a 10% chance of B?

If your answer flips, note how you can be Dutch-booked.

Comment by NunoSempere on Mundane trouble with EV / utility · 2021-04-04T13:12:54.723Z · EA · GW

On Pascal's mugging specifically, Robert Miles has an interesting YouTube video arguing that AI Safety is not a Pascal's mugging, which the OP might be interested in.

Comment by NunoSempere on Mundane trouble with EV / utility · 2021-04-03T08:53:47.254Z · EA · GW

1 & 2 might normally be answered by the von Neumann–Morgenstern utility theorem*.

In the case you mentioned, you can try to calculate the impact of an education throughout the beneficiaries' lives. In this case, I'd expect it to mostly be an increase in future wages, but also some other positive externalities. Then you look at  the willingness to trade time for money, or the willingness to trade years of life for money, or the goodness and badness of life at different earning levels, and you come up with a (very uncertain) comparison.

If you want to look at an example of this, you might want to look at GiveWell's evaluations in general, or at their evaluation of deworming charities in particular.

I hope that's enough to point you to some directions which might answer your questions.

* But e.g., for negative utilitarians, axioms 3 and 3' wouldn't apply in general (because they prefer to avoid suffering infinitely more than promoting happiness, i.e. consider L=some suffering, M=non-existence, N=some happiness), but they would still apply for the particular case where they're trading off between different quantities of suffering. In any case, even if negative utilitarians would represent the world with two points (total suffering, total happiness), they still have a way of comparing between possible worlds (choose the one with the least suffering, then the one with the most happiness if suffering is equal).
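
As a minimal illustration of that last point, here is the lexicographic comparison spelled out in code; representing a world as a { suffering, happiness } pair is purely for the example:

/* Illustrative only: compare worlds by suffering first, then by happiness */
let betterWorld = (w1, w2) =>
  w1.suffering !== w2.suffering
    ? (w1.suffering < w2.suffering ? w1 : w2)   // first, pick the world with less suffering
    : (w1.happiness >= w2.happiness ? w1 : w2)  // if tied, pick the world with more happiness

console.log(betterWorld({ suffering: 3, happiness: 10 }, { suffering: 2, happiness: 0 }))
// => { suffering: 2, happiness: 0 }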

Comment by NunoSempere on Announcing "Naming What We Can"! · 2021-04-01T15:38:48.092Z · EA · GW

Unsong: The Origins. 

Comment by NunoSempere on New Top EA Causes for 2021? · 2021-04-01T08:24:00.329Z · EA · GW

This isn't exactly a proposal for a new cause area, but I've felt that EA organizations are confusingly named. So I'm proposing some name-swaps:

  • Probably Good should now be called "80,000 Hours". Since 80,000 Hours explicitly moved in a more longtermist direction, it has abandoned some of its initial relationship to its name, and Probably Good seems to be picking up some of that slack.
  • "80,000 hours should be renamed to "Center for Effective Altruism" (CEA). Although technically a subsidiary, 80,000 hours reaches more people than CEA, and produces more research. This change in name would reflect its de-facto leadership position in the EA community.
  • The Center for Effective Altruism should rebrand to "EA Infrastructure Fund", per CEA's strategic focus on events, local groups, and the EA Forum, and on providing infrastructure for community building more generally.
  • However, this leaves the "EA Infrastructure Fund" without a name. I think the main desideratum for a name is basically prestige, and so I suggest "Future of Humanity Institute", which sounds suitably ominous. Further, the association with Oxford might lead more applicants to apply, and require a lower salary (since status and monetary compensation are fungible), making the fund more cost-effective.
  • Fortunately, the Global Priorities Institute (GPI) recently determined that helping factory-farmed animals is the most pressing priority, and that we never cared that much about humans in the first place. This leaves a bunch of researchers at the Future of Humanity Institute and at the Global Priorities Institute (which recently disbanded) unemployed, but Animal Charity Evaluators is offering them paid junior researcher positions. To reflect its status as the indisputable global priority, Animal Charity Evaluators should consider changing its name to "Doing Good Better".
  • To enable this last change and to avoid confusion, Doing Good Better would have to be put out of print.

I estimate that having better names only has a small or medium impact, but that tractability is sky-high. No comment on neglectedness. 

What do you blokes think?

Comment by NunoSempere on Report on Semi-informative Priors for AI timelines (Open Philanthropy) · 2021-03-31T20:18:34.826Z · EA · GW

Random thought on anthropics: 

  • If AGI had been developed early and been highly dangerous, we wouldn't be around to observe it, so we can't update on not having seen it.
  • Anthropic reasoning might also apply to calculating the base rate of AGI; in the worlds where it existed and was beneficial, one might not be trying to calculate its a priori outside view.
Comment by NunoSempere on Report on Semi-informative Priors for AI timelines (Open Philanthropy) · 2021-03-29T17:07:34.814Z · EA · GW

Some notes on the Laplace prior:

  • In footnote 16, you write: "For example, the application of Laplace’s law described below implies that there was a 50% chance of AGI being developed in the first year of effort". But historically, participants in the Dartmouth conference were gloriously optimistic:

"We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer."

  • When you write "I also find that pr(AGI by 2036) from Laplace’s law is too high," what outside-view consideration are you basing that on? Also, is it really too high?
    • If you rule out AGI until 2028 (as you do in your report), the Laplace prior gives you 1 - (1 - 1/[(2028-1956)+1])^(2036-2028) ≈ 10.4% ≈ 10%, which is well within your range of 1% to 18%, and really near to your estimate of 8% (see the sketch after this list).
    • The point that Laplace's prior depends on the unit of time chosen is really interesting, but it ends up not mattering once a bit of time has passed. For example, if we choose to use days instead of years, with (days since June 18 1956=23660, days until Mar 29 2028=2557, days until Jan 1 2036=5391), then Laplace's rule would give for the probability of AGI until 2036: 1 - (1-[1/(23660+2557+1)])^(5391-2557) = 10.2% ≈ 10%, pretty much the same as above.
      • It's fun to see that (1-(1/x))^x converges to 1/e pretty quickly, and that changing from years to days is equivalent to changing from ~(1-(1/x))^(x*r) to ~(1-(1/(365*x)))^(365*x*r), where x is the time passed in years and x*r is the time remaining in years. But both converge pretty quickly to (1/e)^r.
  • It is not clear to me that by adjusting the Laplace prior down when you categorize AGI as a "highly ambitious but feasible technology" you are not updating twice: once on the actual passage of time, and another time given that AGI seems "highly ambitious". But one knows that AGI is "highly ambitious" because it hasn't been solved in the first 65 years.
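
As a sanity check on the arithmetic above, here is a minimal sketch of both calculations; the laplace helper and its argument names are mine, and the inputs are the ones from the bullets above:

/* Sketch: Laplace's rule, with years and with days as the unit of time */
// 1 - (1 - 1/(periods of effort so far + 1))^(periods remaining until the target date)
let laplace = (periodsElapsed, periodsRemaining) =>
  1 - (1 - 1 / (periodsElapsed + 1)) ** periodsRemaining;

console.log(laplace(2028 - 1956, 2036 - 2028))   // ~0.104, with years as the unit
console.log(laplace(23660 + 2557, 5391 - 2557))  // ~0.102, with days as the unit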

Given that, I'd still be tempted to go with the Laplace prior for this question, though I haven't really digested the report yet.

Comment by NunoSempere on Don't Be Bycatch · 2021-03-23T16:19:25.906Z · EA · GW

Nemo day, perhaps

Comment by NunoSempere on Want to alleviate developing world poverty? Alleviate price risk.​ (2018) · 2021-03-23T16:18:32.938Z · EA · GW

See also: https://en.wikipedia.org/wiki/Onion_Futures_Act

Comment by NunoSempere on BitBets: A Simple Scoring System for Forecaster Training · 2021-03-22T11:54:17.743Z · EA · GW

The auctioning scheme might not end up being proper, though

Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-22T11:48:31.342Z · EA · GW
  1. Yes, we agree
  2. No, we don't agree. I think that Adam did better than other potential donor lottery winners, and so his counterfactual value is higher, and thus his Shapley value is also higher. If all the other donors had been clones of Adam, I agree that you'd just divide by n. Thus, the "In every example here, this will be equivalent to calculating counterfactual value, and dividing by the number of necessary stakeholders" is in fact wrong, and I was implicitly doing both of the following in one step: a. calculating Shapley values with "evaluators" as one agent, and b. thinking of Adam's impact as a high proportion of the SV of the evaluator round.
  3. The rest of our disagreements hinge on 2., and I agree that judging the evaluator step alone would make more sense.
Comment by NunoSempere on BitBets: A Simple Scoring System for Forecaster Training · 2021-03-18T13:09:49.367Z · EA · GW

This has beautiful elements.

I'm also interested in using scoring rules for actual tournaments, so some thoughts on that:

  • This scoring rule incentivizes people to predict on questions for which their credence is closer to the extremes, rather than on questions where their credence is closer to even.
  • The rule is in some ways analogous to an automatic market maker which resets for each participant, which is an interesting idea. You could use a set-up such as this to elicit probabilities from forecasters, and give them points/money in the process (see the sketch after this list).
  • You could start your bits somewhere other than at 50/50 (this would be equivalent to starting your automatic market maker somewhere else).
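
To make the analogy concrete, here is a sketch of a generic logarithmic market scoring rule (LMSR) market maker; this is not the BitBets rule itself, and the liquidity parameter b and the example trade are made up:

/* Sketch: a generic LMSR automated market maker (illustrative, not BitBets) */
let cost = (q, b) => b * Math.log(q.map(x => Math.exp(x / b)).reduce((a, c) => a + c, 0))
let prices = (q, b) => {
  let exps = q.map(x => Math.exp(x / b))
  let total = exps.reduce((a, c) => a + c, 0)
  return exps.map(x => x / total)
}

let b = 10
console.log(prices([0, 0], b))                  // [0.5, 0.5]: the market maker starts at 50/50
console.log(cost([5, 0], b) - cost([0, 0], b))  // ~2.81: what buying 5 "yes" shares costs a forecaster
console.log(prices([5, 0], b))                  // ~[0.62, 0.38]: the new implied probability
// Starting somewhere other than 50/50 just means seeding q with non-zero values.
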
Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-18T12:41:32.566Z · EA · GW

Yes, the scale is under construction, and you're not the first person to point out that the specific research agenda mentioned is overvalued.

Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-18T12:34:27.427Z · EA · GW

I'm also not sure how to interpret your upper bound itself having a range?

The upper bound being a range was a mistake; fixed now.

Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-18T12:33:34.390Z · EA · GW

I think that's a substantial part of the impact, but that there may be other substantial parts too, such as...

Yes, those seem like at least somewhat important pathways to impact that I've neglected, particularly the first two points. I imagine that could easily lead to a 2x to 3x error (but probably not to a 10x error).

Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-18T12:30:36.958Z · EA · GW

Yes, I expect the intuitions for estimation to generalize to/help a great deal with the forecasting step, though I agree that this might not be intuitively obvious. I understand that estimation and forecasting seem like different categories, but I don't expect that to be a significant hurdle in practice.

Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-18T12:23:53.586Z · EA · GW

Oof, no it didn't, good point.

Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-18T12:22:02.860Z · EA · GW

I have nothing to disagree about here :)

Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-18T12:20:40.755Z · EA · GW

You are understanding correctly that the Shapley value multiplier is responsible for preventing double-counting, but you're making a mistake when you say that it "implies that shallowly evaluated giving is as impactful as "0 to 0.4" of in-depth evaluated giving"; the latter doesn't follow.

In the two-player game, you have Value({}), Value({1}), Value({2}), and Value({1,2}), and the Shapley value of player 1 (the funders) is ([Value({1}) - Value({})] + [Value({1,2}) - Value({2})])/2, and the value of player 2 (the donor lottery winner) is ([Value({2}) - Value({})] + [Value({1,2}) - Value({1})])/2.

In this case, I'm taking [Value({2}) - Value({})] to be ~0 for simplicity, so the value of player 2 is [Value({1,2}) - Value({1})]/2. Note that this is just the counterfactual value multiplied by a fraction.

If there were more players, it would be a little bit more complicated, but you'd end up with something similar to [Value({1,2,3}) - Value({1,3})]/3. Note again that this is just the counterfactual value multiplied by a fraction.

But now, I don't know how many players there are, so I just consider [Value({The World}) - Value({The world without player 2})]/(some estimate of how many players there are).

And the Shapley value multiplier would be 1/(some estimate of how many players there are).

At no point am I assuming that "shallowly evaluated giving is as impactful as 0 to 0.4 of in-depth evaluated giving"; the thing that I'm doing is just allocating value so that the sum of the value of each player is equal to the total value.
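
Here is the same two-player calculation in code, with a made-up value function, just to check that the two shares add up to the total value created:

/* Sketch: two-player Shapley values; the numbers are made up */
let value = { "": 0, "1": 0.3, "2": 0, "12": 1 } // "1" = funders, "2" = donor lottery winner

let shapley1 = ((value["1"] - value[""]) + (value["12"] - value["2"])) / 2
let shapley2 = ((value["2"] - value[""]) + (value["12"] - value["1"])) / 2
console.log(shapley1, shapley2, shapley1 + shapley2) // ≈ 0.65, 0.35, 1; the shares sum to the total value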

Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-18T12:07:14.618Z · EA · GW

Another weird thing is to see the 2017 Donor Lottery Grant having x5..10 higher impact than 2018 AI Alignment Literature Review and Charity Comparison.

I see now, that is weird. Note that if I calculate the total impact of the $100k to $1M I think Larks moved, the impact of that would be 100mQ to 2Q (change the Shapley value fraction in the Guesstimate to 1), which is closer to the 500mQ to 4Q I estimated from the 2017 Donor Lottery. And the difference can be attributed to a) investing in organizations which are starting up, b) the high cost of producing AI safety papers, coupled with cause neutrality, and c) further error.

Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-17T15:37:39.285Z · EA · GW
  • Good point re: value of information
  • Re: "The donor lottery evaluation seems to miss that $100K would have been donated otherwise": I don't think it does. In the "total project impact" section, I clarify that "Note that in order to not double count impact, the impact has to be divided between the funding providers and the grantee (and possibly with the new hires as well)."
Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-17T15:33:06.490Z · EA · GW

I'll flag the narrow and lowish estimates about Cool Earth as something I was most likely wrong about, then. Thanks.

Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-17T10:22:50.243Z · EA · GW

Yeah, I think that the distinction between evaluation and forecasting is non-central. For example, these estimates can also be viewed as forecasts of what I would estimate if I spent 100x as much time on this, or as forecasts of what a really good system would output.

More to the point, if a project isn't completed I could just estimate the distribution of expected quality, and the expected impact given each degree of quality (or, do a simplified version of that). 

That said, I was  thinking more about 2., though having a classification/lookup scheme would also be a way to produce explicit estimates.

Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-17T10:08:44.052Z · EA · GW

To answer this specifically:

FWIW, I think the upper bound of my 80% confidence interval would be above 10% more effective and 3 years staying at the org, and definitely above 1% more effect and 0.5 years staying there. 

Yeah, I disagree with this. I'd expect most interventions to have a small effect, and in particular I expect it to just be hard to change people's actions by writing words. In particular, I'd be much higher if I was thinking about the difference between a completely terrible hiring round and an excellent one, but I don't know that people start off all that terrible or that this particular post brings people up all that much. 

Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-17T10:02:50.121Z · EA · GW
  1. In the case of ALLFED, this is based on my picturing one employee going about their month, and asking myself how surprising it would be if they couldn't produce 10 mQARPs of value per month, or how surprising it would be if they could produce 50 mQARPs per month. In the case of the AI safety organizations, this is based on estimating the value of each of the papers that Larks thinks are valuable enough to mention, and then estimating what fraction of the total value of an organization those are.
  2. Private info
  3. a) Building up researchers into more capable researchers, knowledge acquired that isn't published, information value of trying out dead ends, acquiring prestige, etc. b) I actually didn't estimate ALLFED's impact, I estimated the impact of the marginal hires, per 1.
  4. Personal taste, it's possible that was the inferior choice. I found it more easy to picture the dollars moved than the improvement in productivity. In hindsight, maybe improving retention would be another main benefit which I didn't consider.
  5. I got that as a comment. The intuition here is that it would be really, really hard to find a project which moves as much money as Giving Tuesday and which you could do every day, every week, or every month. But if there are more than 52 local EA groups, an EAGx could be organized every week. If you think that EA is only doing projects at maximum efficiency (which it isn't), and knowing only that Giving Tuesdays are done once a year and EAGx are done more often, I'd expect one EAGx to be less valuable than one Giving Tuesday. 
    • Or, in other words, I'd expect there to be some tradeoff between quality and scalability.
Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-17T09:41:38.135Z · EA · GW

I'd be interested to hear roughly how long this whole process took you (or how long it took minus writing the actual post, or something)? This seems relevant to how worthwhile and scalable this sort of thing is. 

Maybe an afternoon for the initial version, and then two weeks of occasional tweaks. Say 10h to 30h in total? I imagine that if one wanted to scale this, one could get it to 30 mins to an hour for each estimate. 

Comment by NunoSempere on Relative Impact of the First 10 EA Forum Prize Winners · 2021-03-17T09:33:48.700Z · EA · GW

Yeah, I see what you're saying. Do you think that it is hard for the writeup to have a negative total effect?

Comment by NunoSempere on Gordon Irlam: an effective altruist ahead of his time · 2021-03-15T19:09:46.270Z · EA · GW

See also Gordon Irlam on the BEGuide, an interview in an EA/EA-adjacent blog from 2014

Comment by NunoSempere on Is pursuing EA entrepreneurship becoming more costly to the individual? · 2021-03-12T19:27:55.871Z · EA · GW

This answer brings up valuable points, but it rubbed me the wrong way. After thinking about it, I think that it feels partisan because all of your points go in one direction; but since the question isn't about a tautology, I'd expect there to be competing considerations.

Here are some competing considerations:

- Charity Entrepreneurship now exists, and makes entrepreneurship much, much easier. I think that this effect is stronger than any other effect. Note that they offer a stipend. 
- I think you're confusing selection effects with environmental effects. As EA becomes larger, it will include people who are less hardcore, but for a given level of hardcoreness, it's unclear whether entrepreneurship is easier or harder. For example, Toby Ord pledged to donate everything he earned above £18,000, whereas EAs today seem at least a tad softer.
- Hits-based giving has become institutionalized and popularized. This makes, e.g., an organization like ALLFED possible. I think that your "Losses look worse relative to safe bets" point might be dependent on your specific social circle, and maybe moot throughout most of EA.
- As EA becomes larger, (entrepreneurial) specialization becomes possible. This counteracts the effect of low-hanging fruit having been picked, to some extent.
- There are more EAs, meaning that network effects are stronger.
- I don't think that "burning your EA career capital" is a dynamic I've seen much
- Asking for feedback is still relatively easy, just by posting an idea on the EA forum.
- I also have some impressions based on my own experience, but I don't think that these generalize

Overall, my bottom line is that I'm uncertain, though I'm assigning slightly higher probability to it being harder.

Note that this is a different question than whether we should "pay a bit more respect to the courage or initiative shown by those who choose to figure out their own unique path or otherwise do something different than those around them", which one could model as balancing the marginal disillusionment of those who try and fail, and the high expected value of those who succeed. Note that if we also cherish those who try and fail, we can sort of have our cake and eat it too.

Comment by NunoSempere on Don't Be Bycatch · 2021-03-12T16:35:02.376Z · EA · GW

Yeah, that's not my proudest sentence. I meant the former, that it is particularly prone to generating bycatch, and hence it would benefit from higher-level solutions. In your post, you try to solve this at the level of the little fish, but addressing it at the fisherman level strikes me as a better (though complementary) idea.

Comment by NunoSempere on Don't Be Bycatch · 2021-03-12T10:34:28.140Z · EA · GW

tl;dr: I like the post. One thing that I think it gets wrong is that if "picking up trash in the park is a fine EA project to cut our teeth on", then that is a sorry state for EA to find itself in. 

I think that a thing that this post gets wrong is that EA seems to be particularly prone to generating bycatch, and although there are solutions at the individual level, I'd also appreciate having solutions at higher levels of organization. For example, the solution to "you find yourself writing not-so-valuable blogposts" is probably to ask a mentor to recommend you valuable blog posts to write.

One proposal to do that was to build what this post calls a "hierarchical networked structure", in which people have people to ask about which blog posts or research directions would be valuable, and Aaron Gertler is there to offer editing, and further along the way, EA groups have mentors who have an idea of which EA jobs are particularly valuable to apply to, and which are particularly likely to generate disillusionment, and EA group mentors themselves have someone to ask for advice, and so on. This to some extent already exists; I imagine that this post is valuable enough to get sent out in the EA Newsletter, which means that involved members in their respective countries will read it and maybe propagate its ideas. But there is still a way to go.

Another solution in that space would be to have a forecasting-based decentralized system, where essentially the same thing happens (e.g., good blog posts to write or small projects to do get recommended, career hopes get calibrated, etc.), but which I imagine could be particularly scalable.

We can also look at past movements in history. In particular, General Semantics also had this same problem, and a while ago I speculated that this led to its doom. Note also that religions don't have the problem of bycatch at all.

Comment by NunoSempere on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-10T17:50:33.839Z · EA · GW

More distantly and speculatively, I guess that Wikidata or fanfiction.net/archiveofourown (which are bigger and better, but just thinking of it conceptually) also delimit metaforecast, the one on the known-to-be-true part of the spectrum, the other on the known-to-be-fictional side. 

Comment by NunoSempere on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-08T17:47:09.127Z · EA · GW

Though I guess I should add the disclaimer that I haven't actively checked what alternatives were/are available to fill similar roles to the roles Metaforecast and Guesstimate aim to fill

The closest alternative I've found to Metaforecast would be The Odds API, which aggregates various APIs from betting houses, and is centered on sports.

Comment by NunoSempere on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-08T09:10:56.548Z · EA · GW

For what it's worth, I had the same initial impression as you (that making a browser extension wouldn't be that hard), but came to think more like Ozzie on reflection. 

Comment by NunoSempere on Introducing Metaforecast: A Forecast Aggregator and Search Tool · 2021-03-08T09:07:35.090Z · EA · GW

"Ideally, you'd want the star ratings to be based on the calibration and resolution that now-resolved questions from that platform (or of that type, or similar) have tended to have in the past. But there's not yet enough data to allow that. So you asked people who know about each platform to give their best guess as to how each platform has historically compared in calibration and resolution." 

Yes. Note that I actually didn't ask them about their guess; I asked them to guess a function from various parameters to stars (e.g., "3 stars, but 2 stars when the probability is higher than 90% or less than 10%"), something like the sketch below.
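
For instance, one of those guessed functions might look something like this; the thresholds and the numForecasts parameter are illustrative, not the ones actually used in Metaforecast:

/* Illustrative only: a parameters-to-stars function of the kind experts were asked to guess */
let starsFor = ({ probability, numForecasts }) => {
  let base = numForecasts > 100 ? 3 : 2                 // more forecasts, more trust
  let extreme = probability > 0.9 || probability < 0.1  // "...but 2 stars when the probability
  return extreme ? base - 1 : base                      //  is higher than 90% or less than 10%"
}

console.log(starsFor({ probability: 0.95, numForecasts: 200 })) // 2
console.log(starsFor({ probability: 0.6, numForecasts: 200 }))  // 3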

Or maybe people gave their best guess as to how the platforms will compare on those fronts, based on who uses each platform, what incentives it has, etc.?

Also yes, past performance is highly indicative of future performance. 

Also, unlike some other uses for platform comparison, if one platform systematically had much easier questions which they got almost always right, I'd want to give them a higher score (but perhaps show them afterwards in the search, because they  might be somewhat trivial).

Comment by NunoSempere on Forecasting Prize Results · 2021-02-24T21:24:53.194Z · EA · GW

Yeah, that’d be very interesting! I don’t know if I’ll find someone with the right expertise who I can get interested in researching this.

This was also a point we discussed. Having something which builds upon someone else's work, or something which will be built upon in the future, generally makes a project more valuable. And in practice, I get the impression that it's mostly authors themselves who build upon their own work.

Comment by NunoSempere on EA Forum Prize: Winners for December 2020 · 2021-02-16T11:04:08.092Z · EA · GW

Nice! One other cool thing about the Big List of Cause Candidates is that people have been coming up with suggestions, and I have been updating the list as they do so.


Incidentally, the Big List of Candidates post was selected as a project by using a very rudimentary forecasting/evaluation system, similar to the ones here and here.  If you want to participate in that kind of thing by suggesting, carrying out or evaluating potential projects, you can sign up here.

In particular, as a novelty, I assigned a 50% chance that it would in fact get an EA forum prize.

Note that the forecast assumed that I was competing against fewer posts, but also that there would be fewer prizes, so the errors happily cancelled out. 


I think that that kind of forecast/comment:

  • Makes me look arrogant/not humble/unvirtuous, at least to some people. In particular, I strongly take the stance that the characters in In praise of unhistoric heroism who are ~"contented by sweeping offices instead of chasing the biggest projects they can find" are in fact making a mistake by not asking the question "but what are the most valuable things I could be doing?" (or, by using a forecasting system/setup to explore that question)
  • Is still really interesting because I think that forecasting funding decisions might be a workable method in order to amplify them, which is particularly valuable given that EA might be vetting constrained. Ideally I (or other forecasters) would get to do that with EA funds or OP grants, but I thought that the forum prize could be a nice beginning.

The other posts I thought were particularly strong are:

I correctly guessed My mistakes on the path to impact and  "Patient vs urgent longtermism" has little direct bearing on giving now vs later

Comment by NunoSempere on Forecasting Newsletter: January 2021 · 2021-02-12T14:04:28.027Z · EA · GW

This is not a mistake; you'll notice that the string "Hypermind" shares the letters "ermny" with "Germany". Anyways, in this case you might get more relevant results by clicking on "advanced results" and then on "2+ ★" or even "1+ ★", sacrificing some quality for breadth.

Comment by NunoSempere on Forecasting Newsletter: January 2021 · 2021-02-02T12:05:29.691Z · EA · GW

Thanks! Sure, I just did. Just search for "Hypermind" to see all of them, or for e.g., "covid-19" to get some results which include questions from Hypermind as well.

Comment by NunoSempere on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-01T10:18:08.301Z · EA · GW

What instrumental goals have you pursued successfully?

Comment by NunoSempere on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-01T10:17:50.448Z · EA · GW

To the extent that you have "a worldview" (in scare quotes), what is a short summary of that worldview?

Comment by NunoSempere on Promoting EA to billionaires? · 2021-01-27T20:42:54.627Z · EA · GW

See also: Gates Foundation gives millions to help persuade ultra-wealthy donors to give more of their billions and The Giving Pledge

Comment by NunoSempere on (Autistic) visionaries are not natural-born leaders · 2021-01-26T16:01:13.713Z · EA · GW

I disagree with this. I'm writing this without having looked at the data, but autism / Asperger's syndrome, particularly in their high-functioning versions, seems to be underdiagnosed, and it seems to be a very reasonable inference that at least some of the leaders under discussion were in fact on the autistic spectrum, or otherwise non-neurotypical. We can check this with a Metaculus question if you want.

Comment by NunoSempere on Why "cause area" as the unit of analysis? · 2021-01-26T15:50:17.582Z · EA · GW

So for me, the motivation for categorizing altruistic projects into buckets (e.g., classifications of philanthropy) is to notice the opportunities, the gaps, the conceptual holes, the missing buckets. Some examples:

  • If you divide undertakings according to their beneficiaries and you have a good enough list of beneficiaries, you can notice which beneficiaries nobody is trying to help. For example, you might study invertebrate welfare, wild animal welfare, or something more exotic, such as suffering in fundamental physics.
  • If you have a list of tools, you can notice which tools aren't being applied to which problems, or you can explicitly consider which tool-problem pairings are most promising. For example, ruthlessness isn't often combined with altruism.
  • If you have a list of geographic locations, you can notice which ones seem more or less promising.
  • If you classify projects according to their level of specificity, you can notice that there aren't many people doing high level strategic work, or, conversely, that there are too many strategists and that there aren't many people making progress on the specifics.

More generally, if you have an organizing principle, you can optimize across that organizing principle. So here, in order to be useful, a division of cause areas by some principle doesn't have to be exhaustive, or even good in absolute terms; it just has to allow you to notice an axis of optimization. In practice, I'd also tend to think that having several incomplete categorization schemes along many axes is more useful than having one very complete categorization scheme along one axis.

Comment by NunoSempere on Forecasting of Priorities: a tool for effective political participation? · 2021-01-25T16:30:17.963Z · EA · GW

"What are the top national/world priorities" is usually so complex, that it will remain to be a mostly subjective judgment. Then, how else would you resolve it than by looking for some kind of future consensus?

You could decompose that complex question into smaller questions which are more forecastable, and forecast those questions instead, in a similar way to what CSET is doing for geopolitical scenarios. For example:

  • Will a new category of government spending take up more than X% of a country's GDP? If so, which category?
  • Will the Czech Republic see war in the next X years?
  • Will we see transformative technological change? In particular, will we see robust technological discontinuities in any of these X domains / some other sign-posts of transformative technological change?
  • ...

This might require having infrastructure to create and answer large numbers of forecasting questions efficiently, and it will require having a good ontology of "priorities/mega-trends" (so that most possible new priorities are included and forecasted), as well as a way to update that ontology.

Comment by NunoSempere on Forecasting of Priorities: a tool for effective political participation? · 2021-01-25T10:32:00.193Z · EA · GW

Have you considered that you're trying to do too many things at the same time?

Comment by NunoSempere on Big List of Cause Candidates · 2021-01-23T12:50:20.822Z · EA · GW

Changelog 23rd Jan/2021

Notes:

  • I don't like "Politics: System Change, Targeted Change, and Policy Reform" as a category. I'm thinking of dividing it into several subcategories (e.g., "Politics: Systemic Change", "Politics: Mechanism Change", "Politics: Policy Change" and "Politics: Other".) I'd also be interested in more good examples of systemic change interventions, because the one which I most intensely associate with it is something like "marxist revolution".
  • Hat tip to @Prabhat Soni for suggesting risks from whole brain emulation, atomically precise manufacturing, infodemics, cognitive enhancement, universal basic income, and the LessWrong tag for wireheading.

To do:

  • Think about adding "Cognitive Enhancement" as a cause area. See Bostrom here. Unclear to what extent it would be distinct from "Raising IQ"
  • Think about adding "Infodemics and protecting organisations that promote the spread of accurate knowledge, like Wikipedia". In particular, think about whether there is a more general category to which this belongs.
  • Tag these and add them to the google doc.
  • Follow up with the people who suggested these candidates.
Comment by NunoSempere on Big List of Cause Candidates · 2021-01-23T12:49:57.364Z · EA · GW

Thread for changelogs

Comment by NunoSempere on Big List of Cause Candidates · 2021-01-23T12:48:46.688Z · EA · GW

Done. From now on, this to-do list will be at the end of my "changelogs".