Posts

Pathways to impact for forecasting and evaluation 2021-11-25T17:59:52.797Z
Simple comparison polling to create utility functions 2021-11-15T19:48:21.147Z
A Model of Patient Spending and Movement Building 2021-11-08T18:00:17.481Z
Forecasting Newsletter: October 2021. 2021-11-02T14:05:33.784Z
An estimate of the value of Metaculus questions 2021-10-22T17:45:40.541Z
Forecasting Newsletter: September 2021. 2021-10-01T17:03:17.780Z
Building Blocks of Utility Maximization 2021-09-20T17:23:30.638Z
Forecasting Newsletter: August 2021 2021-09-01T16:59:06.263Z
Frank Feedback Given To Very Junior Researchers 2021-09-01T10:55:23.678Z
Forecasting Newsletter: July 2021 2021-08-01T15:07:00.985Z
Forecasting Newsletter: June 2021 2021-07-01T20:59:28.864Z
Shallow evaluations of longtermist organizations 2021-06-24T15:31:24.693Z
What should the norms around privacy and evaluation in the EA community be? 2021-06-16T17:31:59.174Z
2018-2019 Long-Term Future Fund Grantees: How did they do? 2021-06-16T17:31:36.048Z
Forecasting Newsletter: May 2021 2021-06-01T15:51:16.532Z
Forecasting Newsletter: April 2021 2021-05-01T15:58:16.948Z
Forecasting Newsletter: March 2021 2021-04-01T17:01:15.831Z
Relative Impact of the First 10 EA Forum Prize Winners 2021-03-16T17:11:29.172Z
Introducing Metaforecast: A Forecast Aggregator and Search Tool 2021-03-07T19:03:44.627Z
Forecasting Newsletter: February 2021 2021-03-01T20:29:24.094Z
Forecasting Prize Results 2021-02-19T19:07:11.379Z
Forecasting Newsletter: January 2021 2021-02-01T22:53:54.819Z
A Funnel for Cause Candidates 2021-01-13T19:45:52.508Z
2020: Forecasting in Review 2021-01-10T16:05:37.106Z
Forecasting Newsletter: December 2020 2021-01-01T16:07:36.000Z
Big List of Cause Candidates 2020-12-25T16:34:38.352Z
What are good rubrics or rubric elements to evaluate and predict impact? 2020-12-03T21:52:27.802Z
Forecasting Newsletter: November 2020. 2020-12-01T17:00:40.460Z
An experiment to evaluate the value of one researcher's work 2020-12-01T09:01:49.034Z
Predicting the Value of Small Altruistic Projects: A Proof of Concept Experiment. 2020-11-22T20:07:57.499Z
Announcing the Forecasting Innovation Prize 2020-11-15T21:21:52.151Z
Incentive Problems With Current Forecasting Competitions. 2020-11-10T21:40:46.317Z
Forecasting Newsletter: October 2020. 2020-11-01T13:00:04.440Z
Forecasting Newsletter: September 2020. 2020-10-01T11:00:02.405Z
Forecasting Newsletter: August 2020. 2020-09-01T11:35:19.279Z
Forecasting Newsletter: July 2020. 2020-08-01T16:56:41.600Z
Forecasting Newsletter: June 2020. 2020-07-01T09:32:57.248Z
Forecasting Newsletter: May 2020. 2020-05-31T12:35:36.863Z
Forecasting Newsletter: April 2020 2020-04-30T16:41:38.630Z
New Cause Proposal: International Supply Chain Accountability 2020-04-01T07:56:17.225Z
NunoSempere's Shortform 2020-03-22T19:58:54.830Z
Shapley Values II: Philantropic Coordination Theory & other miscellanea. 2020-03-10T17:36:54.114Z
A review of two books on survey-making 2020-03-01T19:11:13.828Z
A review of two free online MIT Global Poverty courses 2020-01-15T11:40:41.519Z
[Part 1] Amplifying generalist research via forecasting – models of impact and challenges 2019-12-19T18:16:04.299Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T16:36:10.564Z
Shapley values: Better than counterfactuals 2019-10-10T10:26:24.220Z
Why do social movements fail: Two concrete examples. 2019-10-04T19:56:02.028Z
EA Mental Health Survey: Results and Analysis. 2019-06-13T19:55:37.127Z

Comments

Comment by NunoSempere on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2021-12-01T12:27:21.835Z · EA · GW

For what it's worth,  I don't disagree with you, though I do think that the steady state is a lower bound of value, not an upper bound.

Comment by NunoSempere on On the Universal Distribution · 2021-11-29T15:37:50.956Z · EA · GW

I thought that this post was neat. I was already familiar with Solomonoff induction, but the post still taught me a few things.

Comment by NunoSempere on On the Universal Distribution · 2021-11-29T15:17:06.838Z · EA · GW

Indeed, in some sense, Solomonoff Inductors are in a boat similar to the one that less computer-science-y Bayesians were in all along: you’ll plausibly converge on the truth, and resolve disagreements, eventually; but a priori, for arbitrary agents in arbitrary situations, it’s hard to say when. My main point here is that the Solomonoff Induction boat doesn’t seem obviously better.

Not necessarily true! See Scott Aaronson on this (though, IIRC, he makes some assumptions I disagreed with).

Comment by NunoSempere on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2021-11-28T20:05:31.581Z · EA · GW

Thinking more about this, these figures are more of an upper bound, which doesn't bind (because you can probably buy a 0.01% risk reduction per year much more cheaply). So the parameter to estimate would be more like "what are the other, cheaper interventions?"

Comment by NunoSempere on Pathways to impact for forecasting and evaluation · 2021-11-28T19:25:40.874Z · EA · GW

I don't have any immediate reply, but I thought this comment was thoughtful and that the forum can probably use more like it.

Comment by NunoSempere on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2021-11-28T19:15:01.252Z · EA · GW

Here is a Guesstimate which calculates this in terms of a one-off 0.01% existential risk reduction over a century.

Guesstimate screenshot
Comment by NunoSempere on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2021-11-28T19:02:52.696Z · EA · GW

Here is a Guesstimate model which addresses the item on the to-do list. Note that in this guesstimate I'm talking about a -0.01% yearly reduction.

Guesstimate screenshot.
Comment by NunoSempere on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2021-11-28T18:33:34.693Z · EA · GW

Just to complement Khorton's answer: with a discount rate of d [1], a steady-state population of N, and a willingness to pay of $X (per person), the total value of the future is roughly N · X / d, so the willingness to pay for 0.01% of it would be 0.0001 · N · X / d.

This discount rate might be there because you care about future people less, or because you expect some percentage of pretty much unavoidable existential risk per year going forward.

Some reference values

  • N = 10^10 (10 billion), X = $10,000 and d = 3% means that willingness to pay for 0.01% risk reduction should be 0.0001 · 10^10 · 10,000 / 0.03, i.e., roughly $333 billion
  • N = 7·10^9 (7 billion) and X/d = 100,000 (e.g., X = $10,000 and d = 10%) means that willingness to pay for 0.01% risk reduction should be 0.0001 · 7·10^9 · 100,000, i.e., $70 billion.

I notice that from the perspective of a central world planner, my willingness to pay would be much higher (because my intrinsic discount rate is closer to ~0%). Taking d = 0.01%:

  • N = 10^10 (10 billion), X = $10,000 and d = 0.01% means that willingness to pay for 0.01% risk reduction should be 0.0001 · 10^10 · 10,000 / 0.0001, i.e., $100 trillion
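
As a quick numeric check of the formula above, here is a minimal sketch. The per-person value of X = $10,000 and the specific discount rates are my back-filled assumptions: they reproduce the dollar figures quoted above but are not stated explicitly in the original comment.

    # Minimal numeric check of: WTP = 0.0001 * N * X / d
    # X = $10,000 per person and the discount rates below are assumptions
    # back-filled to match the dollar figures in the comment.

    def wtp(N, X, d, risk_reduction=0.0001):
        """Willingness to pay for a `risk_reduction` fraction of a future
        worth N * X / d (steady-state population N, $X per person, discount d)."""
        return risk_reduction * N * X / d

    print(wtp(10e9, 10_000, 0.03))    # ~3.3e11  -> ~$333 billion
    print(wtp(7e9, 10_000, 0.10))     # 7.0e10   -> $70 billion
    print(wtp(10e9, 10_000, 0.0001))  # 1.0e14   -> $100 trillion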

To do:

  • The above might be the right way to model willingness to pay for going from 0.02% risk per year to 0.01% risk per year. But with, e.g., 3% risk remaining per year, willingness to pay is lower, because over the long run we all die sooner.
    • E.g., reducing risk from 0.02% per year to 0.01% per year is much more valuable than reducing risk from 50.1% to 50%.

[1]: Where you value the i-th year in the steady state at (1 − d)^i of the value of the first year. If you don't value future people, the discount rate d might be close to 100%; if you do value them, it might be close to 0%.

Comment by NunoSempere on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2021-11-28T18:11:43.021Z · EA · GW

Nitpick: assuming that for every positive state there is an equally negative state is not enough to conclude that the maximally bad state is only -100% of the expected value of the future; it could be much worse than that.

Comment by NunoSempere on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2021-11-28T18:08:36.549Z · EA · GW

I'm curious about potential methodological approaches to answering this question:

  1. Arrive at a possible lower bound for the value of averting x-risk by thinking about how much one is willing to pay to save present people, like in Khorton's answer.
  2. Arrive at a possible lower bound by thinking about how much one is willing to pay for current and discounted future people.
  3. Think about what EA is currently paying for similar risk reductions, and argue that one should be willing to pay at least as much for future risk-reduction opportunities.
    • I'm unsure about this, but I think this is most of what's going on with Linch's intuitions.

Overall, I agree that this question is important, but current approaches don't really convince me. 

My intuition about what would convince me would be some really hardcore and robust modeling coming out of e.g., GPI taking into account both increased resources over time and increased risk. Right now the closest published thing that exists might be Existential risk and growth and Existential Risk and Exogenous Growth—but this is inadequate for our purposes because it considers stuff at the global rather than at the movement level—and the closest unpublished thing that exists are some models I've heard about that I hope will get published soon.

Comment by NunoSempere on How many EA 2021 $s would you trade off against a 0.01% chance of existential catastrophe? · 2021-11-28T17:43:55.903Z · EA · GW

*How* are you getting these numbers? At this point, I think I'm more interested in the methodologies for arriving at an estimate than in the estimates themselves.

Comment by NunoSempere on Pathways to impact for forecasting and evaluation · 2021-11-26T17:12:00.105Z · EA · GW

How much do you think forecasting well on given questions is different from the skill of creating new questions? I notice that I'm trending to be increasingly impressed by people who are able to ask questions that seem important but that I wouldn't even have thought about

They seem similar because being able to orient oneself in a new domain would feed into both things. One can probably use (potentially uncalibrated) domain experts to ask questions which forecasters then solve.  Overall I have not thought all that much about this.

Comment by NunoSempere on Pathways to impact for forecasting and evaluation · 2021-11-26T17:09:30.368Z · EA · GW

I wonder how much you'd consider "changing governance culture" as part of the potential impact, e.g. I hope that Metaculus and co. will stay clear success stories and motivate government institutions to adopt and make probabilistic and evaluable predictions for important projects

I'm fairly skeptical about this for, e.g., national governments. For the US government in particular, the base rate seems low; people have been trying to do things like this since at least 1964, and mostly failing.

Comment by NunoSempere on Is it no longer hard to get a direct work job? · 2021-11-26T14:10:04.254Z · EA · GW

Makes sense

Comment by NunoSempere on Is it no longer hard to get a direct work job? · 2021-11-26T10:40:50.008Z · EA · GW

Cheers, thanks for the data.

Comment by NunoSempere on Is it no longer hard to get a direct work job? · 2021-11-26T00:47:13.827Z · EA · GW

It's hard to give a nuanced answer, but I'd mostly say that your update is not directionally correct. In particular, I'd expect the number of "EA jobs" to be in the hundreds to low thousands, but the number of EAs to be in the mid to high thousands.

Per the 2020 EA survey:

Around 135 people out of 1,679 non-students and 2,166 responses mentioned that they were employed at EA organizations. So this is 8.7% of non-students and 6.2% of total EA respondents.

Not that many people respond to surveys, so the total EA population is probably higher than 2k, but it's difficult to say how much higher.

Because I don't get the impression that the number of "EA jobs" has literally doubled in the past year, I think that the chances of getting accepted into any EA org are at most something like 10%, and more likely 2 to 5%. So the mood of your update doesn't seem directionally correct to me.

In particular, just in the case of uni EA groups, I imagine that there might be one organizer for every, say, 20 to 50 people (?? I really have no idea about this), which is also a ratio of 2 to 5%.

One major way in which I could imagine being wrong is if you're at a very prestigious uni, or if your definition of "hard work and dedication" does convey 2 to 10% odds to your audience.

Comment by NunoSempere on Pathways to impact for forecasting and evaluation · 2021-11-25T23:11:35.490Z · EA · GW

I don't get why this post has been downvoted; it was previously at 16 and now at 8.

Comment by NunoSempere on Pathways to impact for forecasting and evaluation · 2021-11-25T18:00:33.051Z · EA · GW

I also drew some pathways to impact for QURI itself and for software, but I’m significantly less satisfied with them. 

Software

I thought that the software pathway was fairly abstract, so here is something like my approximation of why Metaforecast is, or could be, valuable.

QURI itself

Note that QURI's pathway would just be the pathway of the individual actions we take around forecasting, evaluations, research and software, plus maybe some adjustment for e.g., mentorship, coordination power, helping funding, etc.

Comment by NunoSempere on Announcing my retirement · 2021-11-25T15:50:56.756Z · EA · GW

My Forum comments will be less frequent, but probably spicier.

Looking forward to this.

Comment by NunoSempere on What Are Your Software Needs? · 2021-11-25T11:19:09.938Z · EA · GW

I would appreciate a code review of Metaforecast (front-end, back-end).

Comment by NunoSempere on Simple comparison polling to create utility functions · 2021-11-25T01:00:40.048Z · EA · GW

2. Is now implemented.

1. is a bit tricky because the "is x times as valuable as" relation is kind of weird for negative inputs

Comment by NunoSempere on Simple comparison polling to create utility functions · 2021-11-25T00:59:04.640Z · EA · GW

Done!

Comment by NunoSempere on A Red-Team Against the Impact of Small Donations · 2021-11-24T23:56:30.359Z · EA · GW

This doesn't seem like it is common knowledge. Also, "weird things that make sense" does kind of screen off a bunch of ideas which make sense to potential applicants, but not to fund managers. 

It's possible that there are good weird ideas that never cross our desk, but that's again an informational reason rather than weirdness.

This is not the state of the world I would expect to observe if the LTF was getting a lot of weird ideas. In that  case, I'd expect some weird ideas to be funded, and some really weird ideas to not get funded.

Comment by NunoSempere on Announcing our 2021 charity recommendations · 2021-11-24T19:32:53.705Z · EA · GW

I'd be very curious about you feeding your intuitions into this utility function extractor (and then dividing your estimates of the charities' relative value by their yearly budgets). I'm curious enough to put a small bounty on this, i.e., a $50 donation to a charity of your choice.

The way you would do this would be to go to Advanced options > Use your own data > Paste the below with the names of the orgs in the technology alternative space changed > Click on "change dataset"

[
    {
      "name": "Organization 1"
    },
    {
      "name": "Organization 2"
    },
    {
      "name": "Organization 3"
    },
    {
      "name": "Organization 4",
      "isReferenceValue": true
    }
  ]

And then select how much good in the world each organization is compared to each other, and then give me a screenshot of the output.

Comment by NunoSempere on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-23T20:14:43.463Z · EA · GW

This article is kind of too "feel good" for my tastes. I'd also like to see a more angsty post that tries to come to grips with the fact that most of the impact is most likely not going to come from the individual people, and tries to see if this has any new implications, rather than justifying that all is good.

For instance, 

  • Maybe, given that there are billions of dollars floating around, the thing to do would be to try to influence how they are spent
  • But OpenPhil doesn't seem that approachable, and it's not like they can be influenced all that much by that many people
  • Maybe there is some cause X that we're missing that would make the broad EA community great again
  • etc.

More generally, maybe the patterns in the early EA community were more suitable to a social movement without  billionaires, and there are better patterns that we could be executing now. For instance, maybe trying to get prestige outside of EA dominates earning to give now that EA is better funded. Or maybe EA is better funded but you'd still expect most people to have idiosyncratic preferences not shared by central funders.

Comment by NunoSempere on We’re Rethink Priorities. Ask us anything! · 2021-11-18T10:58:06.004Z · EA · GW

Surprising, I know

Comment by NunoSempere on We’re Rethink Priorities. Ask us anything! · 2021-11-17T19:38:10.676Z · EA · GW

Asked differently, why are you so cool, both at the RP level and personally?

Comment by NunoSempere on [Linkpost] Apply For An ACX Grant · 2021-11-17T10:04:45.717Z · EA · GW

Makes sense, thanks. 

Comment by NunoSempere on Simple comparison polling to create utility functions · 2021-11-16T15:24:44.964Z · EA · GW

Hey, this is a good idea, but it turns out it's slightly tricky to program. I'll get around to it eventually, though

Comment by NunoSempere on We’re Rethink Priorities. Ask us anything! · 2021-11-16T11:26:26.539Z · EA · GW

In your yearly report you mention:

Rethink Priorities has been trusted by EA Funds and Open Philanthropy to start new projects (e.g., on capacity for welfare of different animal species) and open entire new departments (such as AI governance).

These and other large organizations often only fund 25–50% of our needs in any particular area because they trust our ability to find other sources of funding. Therefore we rely on a broad range of individual donors to continue our work.

This surprised me, because I fairly often hear the advice of "donate to EA Funds" as the optimal thing to do, but it seems that if everybody did that, RP would not get funded. Do you have any thoughts on this?

Comment by NunoSempere on A Model of Patient Spending and Movement Building · 2021-11-16T08:45:01.065Z · EA · GW

Hey, thanks for the comments. Your point about a bull market is welcome, and I think similar to the point that Phil made in the 80kh podcast. Some nitpicks:

  • Nino -> Nuño
  • When people say that "capital depreciates", they generally mean "capital investments", i.e., machinery, computers, etc.
  • Note that labor depreciates at a rate d, in the sense that people move out of the movement because of value drift, but it also increases in value because of productivity improvements (see the exponentials in the model)
  • I think that depreciation of labor is actually empirically motivated, e.g., by https://forum.effectivealtruism.org/posts/eRQe4kkkH2pPzqvam/more-empirical-data-on-value-drift
  • But in models in which labor replicated itself (i.e., there was some "naturally arising movement-building"), we still didn't see that earning to give (in the sense of earning a salary) was favored in the limit either.
Comment by NunoSempere on Simple comparison polling to create utility functions · 2021-11-15T22:27:21.092Z · EA · GW

Cheers, I've added both suggestions as Github issues to remember.

Comment by NunoSempere on A Model of Patient Spending and Movement Building · 2021-11-15T22:22:28.539Z · EA · GW

Hey, good questions, thanks for cross-posting this from the EA Discord :)

OpenPhil is included in the model because the EA movement starts out with some capital. But convincing additional billionaires (or "earning to give" in the sense of "trying to become a billionaire to donate the billions to charity") is not modelled.

Also, the model does not (yet) include research, which is also part of what OpenPhil does.

One-time big donors could be modelled by increasing the initial capital, but this is kind of a kludge.

Also, once that small model exists, we can reason in ways like: The small model recommends doing direct work or movement building over earning to give, in the limit. Adding billionaires to the mix doesn't seem like it would change that property (unless "earning to give" includes "taking a shot at becoming a billionaire".)

Comment by NunoSempere on [Linkpost] Apply For An ACX Grant · 2021-11-15T10:48:17.914Z · EA · GW

Hey, I've seen you mentioning CLR and the Center for Reducing Suffering a fair bit. Just to double-check, are you affiliated with either?

Comment by NunoSempere on A Model of Patient Spending and Movement Building · 2021-11-15T10:26:10.302Z · EA · GW

Right, thanks; it seemed better to be too paranoid than not paranoid enough.

Comment by NunoSempere on A Model of Patient Spending and Movement Building · 2021-11-11T17:41:11.752Z · EA · GW

Hey Ben, see this comment; I think that this post originally did not make it clear that the constant-size point does depend on empirical points and (reasonable) model assumptions.

Comment by NunoSempere on A Model of Patient Spending and Movement Building · 2021-11-11T16:05:00.260Z · EA · GW

Re: Labor grows to a constant size 

Hey, in hindsight I realize that the paper + summarization don't make clear that this does depend on model assumptions/empirical points, sorry. I've edited the post to make this clearer (here is the previous version without the edits, in case it's of interest.)

tl;dr: This comes from model assumptions which seem reasonable, but empirical investigations + historical case studies, or alternatively sci-fi scenarios could flip the conclusion.

In particular, suppose labor evolves roughly as L_{t+1} = (1 − δ)·L_t + f(L_M, C_M), so each year you lose δ% of people, but you also do some movement building, for which you spend L_M labor and C_M capital.

Then for some functions f which determine movement building, this already implies that the movement has a maximum size. So for instance, if f has sharply sub-linear returns to labor, then with infinite capital it reduces to something like f = L_M^α with α < 1 (say, √L_M).

But then even if you allocate all labor to movement building (so that L_M = L, or something), you'd have something like L_{t+1} = (1 − δ)·L_t + √(L_t), and this eventually converges to the point where δ·L = √L, no matter where you start.

Now, above I've omitted some constants, and our function isn't quite the same, but that's essentially what's going on (see equation 6 on page 4). I.e., if you lose movement participants as a percentage but have a recruitment function that eventually has "brutal" diminishing returns (sub-linear diminishing returns to labor, and throwing money at movement building doesn't solve it), you get a similar result (the movement converges to a constant size).

But you could also imagine a scenario where the returns are less brutal—e.g., you're always able to recruit an additional participant by throwing money at the problem, or every movement builder can sort of eternally recruit a person every year, etc. You could also imagine a more sci-fi-like scenario, where humanity is expanding exponentially (cubically) in space, and a social movement is a constant fraction of humanity.

More realistically, if f instead looks like something with diminishing but not brutally diminishing returns (e.g., recruitment roughly proportional to the labor and capital spent on it), movement size can increase forever, because you can always throw more money at the problem until recruitment again exceeds attrition.


Note that if you have a less brutal recruitment function, this increases the appeal of movement building, not of earning to give.

Also, I'm not sure whether "brutal" is the right way to be talking about this. "Brutal" is the term I use when I think about this, but if I recall correctly the function we use is standard in the literature, and it seems plausible when you start to think about groups which reach a large size. But there is definitely an empirical question here about what the returns to movement building actually look like.
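
To make the convergence point concrete, here is a minimal toy simulation (my own illustrative parameterization, using f(L) = √L for the "brutal" case and linear recruitment for the less brutal one; the paper's actual functional form differs):

    # Toy model: each year the movement loses a fraction delta of its labor L
    # and recruits f(L) new members (all labor spent on movement building,
    # capital assumed non-binding). Illustrative only.

    def simulate(f, L0, delta=0.05, years=500):
        L = L0
        for _ in range(years):
            L = (1 - delta) * L + f(L)
        return L

    # "Brutal" sub-linear returns: converges to the fixed point delta*L = sqrt(L),
    # i.e. L = 1/delta^2 = 400, regardless of the starting size.
    print(simulate(lambda L: L ** 0.5, L0=1.0))
    print(simulate(lambda L: L ** 0.5, L0=10_000.0))

    # Less brutal (roughly linear) returns with a per-recruiter rate above delta:
    # the movement keeps growing instead of converging.
    print(simulate(lambda L: 0.06 * L, L0=1.0, years=100))

Under the √L recruitment function, both starting sizes end up at roughly the same steady state, whereas the linear case grows without bound.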

Comment by NunoSempere on A Model of Patient Spending and Movement Building · 2021-11-11T00:56:56.027Z · EA · GW

More off-the-cuff thought: 

I can imagine that feedback loop (good in the world -> movement building) being important at the beginning. Arguably, one of the reasons the global health & development -> longtermism change of mind is so common is that longtermism has good arguments in principle but no big tangible wins to its name, so it's better able to convince people who are already paying attention to it (drawn to EA by global health & development's big wins) than to convince people directly.

But even in that case, if one wants longtermism to get a few big wins to increase its movement building appeal, it would surprise me if the way to do this was through more earning to give, rather than by spending down longtermism's big pot of money and using some of its labor for direct work.

Comment by NunoSempere on A Model of Patient Spending and Movement Building · 2021-11-11T00:53:46.049Z · EA · GW

This is a good point, and thanks for the comment. 

If the arrow is from good in the world, this could increase the value of direct work and direct spending (and thus earning to give) relative to movement building. I can imagine setups where this might flip the conclusion, but I think that this would be fairly unlikely. 

E.g., because of scope insensitivity, I don't think potential movement participants would be substantially more impressed by $2*N billions of GiveDirectly-equivalents of good per year vs just $N billions.

If the arrow is from direct work, this increases the value of direct work relative to everything else, and our conclusions almost certainly still hold.

I imagine that Phil might have some other thoughts to share.

Comment by NunoSempere on I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related) · 2021-11-10T17:15:32.388Z · EA · GW

Bounty suggestion: Reach out to people who have had their grant applications accepted (or even not accepted) by the LTFF, and ask them to publish those applications in exchange for $100–$500.

  • Why is this good: This might make it easier for prospective candidates to write their applications
  • Why do this as a bounty + assurance contract:
    • Why assurance contract: I might find it kind of scary to publish my own application alone, but easier if others do as well.
    • Why bounty: It feels like there is a cost to publishing an application because they were written by one's younger self, and they are slightly personal, and people have limited capacity to internalize externalities before they burn out.
  • This would require taking on some coordination costs
    • E.g., talking to the LTFF about whether the risks of people "hacking" the application process is worth the increase in the ease of applying.
    • E.g., actually enforcing strict comment guidelines about not posting comments which would make it more costly to publish applications.
    • Thinking about things which could go wrong.
Comment by NunoSempere on [deleted post] 2021-11-08T20:14:56.445Z

Seems kind of similar to https://forum.effectivealtruism.org/tag/charity-evaluation

Comment by NunoSempere on There's a role for small EA donors in campaign finance · 2021-11-07T21:09:16.865Z · EA · GW

I'd be curious about considerations such as those in this post being paired with kbog's more thorough comparisons of political candidates.

Comment by NunoSempere on Wellbeing height and depth visualizations · 2021-11-07T20:50:52.707Z · EA · GW

+1. These posts look really interesting, but I'm sort of missing a brief motivation section 

Comment by NunoSempere on How can we make Our World in Data more useful to the EA community? · 2021-11-04T16:16:26.047Z · EA · GW

Collaborate with Jaime Sevilla on datasets for various values related to size, performance, training expense, etc. of large machine learning models. 

Having high quality data on this which one knows is going to be maintained makes it much easier to elicit forecasts about these topics, and eventually resolve those forecasts and keep track of track-records, and I know that Jaime has been working on this.

Comment by NunoSempere on EA Forum engagement doubled in the last year · 2021-11-04T11:15:08.488Z · EA · GW

Nice to see!

  • What is the cumulative number of hours for 2021 so far? 
  • Do you have the figures for hours spent on the EA Wiki specifically (i.e., on pages that start with https://forum.effectivealtruism.org/tags or https://forum.effectivealtruism.org/tag)?
  • I imagine that there is some way to know when someone is just leaving a tab open for a while, but can you elaborate on how you deal with that?
  • Relatedly, what is engagement driven by? By a few users who use it a lot, or by very many users who use it a little?
Comment by NunoSempere on How does forecast quantity impact forecast quality on Metaculus? · 2021-11-01T12:53:39.868Z · EA · GW

Coming back to this post, I'm thinking about what it means in terms of collaboration. Tetlock found that teams of superforecasters did better than people going at it alone. One process that could produce this kind of data is Metaculus being able to meaningfully coordinate 10 forecasters on one question (but not beyond that), whereas prediction markets right now kind of have people going at it alone.

Comment by NunoSempere on Forecasting Compute - Transformative AI and Compute [2/4] · 2021-11-01T12:23:27.219Z · EA · GW

This post should probably have been upvoted more, but sadly, on the EA Forum, posts with more popular appeal tend to get upvoted more.

Comment by NunoSempere on APPG for Future Generations Impact Report 2020 - 2021 · 2021-10-27T15:54:07.608Z · EA · GW

I don't have anything particularly insightful to say, but I'm excited that the APPGFG seems to be doing well.

Comment by NunoSempere on An estimate of the value of Metaculus questions · 2021-10-26T11:45:36.141Z · EA · GW

Version 2:

Comment by NunoSempere on An estimate of the value of Metaculus questions · 2021-10-26T11:12:15.322Z · EA · GW

My thoughts are that this problem is, well, not exactly solved, but perhaps solved in practice if you have competent and aligned forecasters, because then you can ask conditional questions which don't resolve.

  • Given such and such measures, what will the spread of COVID be?
  • Given the lack of such and such measures, what will the spread of COVID be?

Then you can still get forecasts for both, even if you only expect the first to go through.

This does require forecasters to give probabilities even when the question they are going to forecast on doesn't resolve.

This is easier to do with EAs, because then you can just disambiguate the training and the deployment step for forecasters. That is, once you have an EA that is a trustworthy forecaster, you could in principle query them without paying that much attention to scoring rules.