NunoSempere's Shortform

post by NunoSempere · 2020-03-22T19:58:54.830Z · EA · GW · 42 comments


Comments sorted by top scores.

comment by NunoSempere · 2020-11-14T22:18:44.428Z · EA(p) · GW(p)

Reasons why upvotes on the EA Forum and LW don't correlate that well with impact.

  1. More easily accessible or more introductory material gets upvoted more.
  2. Material which gets shared more widely gets upvoted more.
  3. Content which is more prone to bikeshedding gets upvoted more.
  4. Posts which are beautifully written get more upvotes.
  5. Posts written by better-known authors get more upvotes (once you've seen this, you can't unsee it).
  6. The time at which a post is published affects how many upvotes it gets.
  7. Other random factors, such as whether other strong posts are published at the same time, also affect the number of upvotes.
  8. Not all projects are conducive to having a post written about them.
  9. The function from value to upvotes is concave (e.g., like a logarithm or a square root), in that a post with 100 upvotes probably represents more than five times the value of a post with 20 upvotes, i.e., more value than five such posts combined. This is what you'd expect if the supply of upvotes were limited.
  10. Upvotes suffer from inflation: as the EA Forum's population grows, a post which would have gathered 50 upvotes two years ago might gather 100 upvotes now.
  11. Upvotes may not take into account the relationship between projects, or other indirect effects. For example, projects which contribute to existing agendas are probably more valuable than otherwise equal standalone projects, but this might not be obvious from the text.
  12. ...
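Point 9 can be made concrete with a toy model. The square-root mapping below is purely illustrative, not a claim about the forum's actual dynamics:

```python
# Toy model for point 9: suppose upvotes grow like the square root of
# underlying value, i.e. upvotes = sqrt(value), so value = upvotes ** 2.

def implied_value(upvotes: float) -> float:
    """Invert the (hypothetical) concave value -> upvotes mapping."""
    return upvotes ** 2

big_post = implied_value(100)        # 10,000 value units
small_posts = 5 * implied_value(20)  # five posts at 400 value units each

# Under this model the 100-upvote post carries 5x the value of
# five 20-upvote posts combined, i.e. 25x a single 20-upvote post.
print(big_post / small_posts)  # -> 5.0
```

Any concave mapping (log, square root, etc.) produces the same qualitative conclusion; only the ratios change.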
Replies from: MichaelA, edoarad
comment by MichaelA · 2020-11-15T12:26:37.973Z · EA(p) · GW(p)

I agree that the correlation between number of upvotes on EA forum and LW posts/comments and impact isn't very strong. (My sense is that it's somewhere between weak and strong, but not very weak or very strong.) I also agree that most of the reasons you list are relevant.

But how I'd frame this is that - for example - a post being more accessible increases the post's expected upvotes even more than it increases its expected impact. I wouldn't say "Posts that are more accessible get more upvotes, therefore the correlation is weak", because I think increased accessibility will indeed increase a post's impact (holding other factors constant).

Same goes for many of the other factors you list. 

E.g., more sharing tends to both increase a post's impact (more readers means more opportunity to positively influence people) and signal that the post would have a positive impact on each reader (as that is one factor - among many - in whether people share things). So the mere fact that sharing probably tends to increase upvotes to some extent doesn't necessarily weaken the correlation between upvotes and impact. (Though I'd guess that sharing does increase upvotes more than it increases/signals impact, so this comment is more like a nitpick than a very substantive disagreement.)

comment by EdoArad (edoarad) · 2020-11-15T11:00:09.508Z · EA(p) · GW(p)

To make it clear, the claim is that the karma score of a forum post about a project does not correlate well with the project's direct impact? Rather than, say, that the karma score of a post correlates well with the impact of the post itself on the community?

Replies from: NunoSempere
comment by NunoSempere · 2020-11-15T19:22:15.127Z · EA(p) · GW(p)

I'd say it also doesn't correlate that well with its total (direct+indirect) impact either, but yes. And I was thinking more in contrast to the karma score being an ideal measure of total impact; I don't have thoughts to share here on the impact of the post itself on the community.

Replies from: edoarad
comment by EdoArad (edoarad) · 2020-11-15T19:36:23.833Z · EA(p) · GW(p)

Thanks, that makes sense. 

Personally, I upvote according to how valuable I think the post itself is, either for me or for the community as a whole. At least, that's what I try to do when I think about it deliberately.

comment by NunoSempere · 2020-04-09T11:45:56.822Z · EA(p) · GW(p)

What happened in forecasting in March 2020

Epistemic status: Experiment. Somewhat parochial.

Prediction platforms.

  • Foretold has two communities: one on Active Coronavirus Infections and one for general questions on COVID.
  • Metaculus brings us the Li Wenliang prize series for forecasting the COVID-19 outbreak, as well as the Lockdown series and many other pandemic questions.
  • PredictIt: The odds of Trump winning the 2020 election hover around 50%, oscillating between 45% and 57%.
  • The Good Judgment Project has a selection of interesting questions, which aren't available unless one is a participant. A sample below (crowd forecast in parenthesis):
    • Will the UN declare that a famine exists in any part of Ethiopia, Kenya, Somalia, Tanzania, or Uganda in 2020? (60%)
    • In its January 2021 World Economic Outlook report, by how much will the International Monetary Fund (IMF) estimate the global economy grew in 2020? (Less than 1.5%: 94%, Between 1.5% and 2.0%, inclusive: 4%)
    • Before 1 July 2020, will SpaceX launch its first crewed mission into orbit? (22%)
    • Before 1 January 2021, will the Council of the European Union request the consent of the European Parliament to conclude a European Union-United Kingdom trade agreement? (25%)
    • Will Benjamin Netanyahu cease to be the prime minister of Israel before 1 January 2021? (50%)
    • Before 1 January 2021, will there be a lethal confrontation between the national military or law enforcement forces of Iran and Saudi Arabia either in Iran or at sea? (20%)
    • Before 1 January 2021, will a United States Supreme Court seat be vacated? (No: 55%, Yes, and a replacement Justice will be confirmed by the Senate before 1 January 2021: 25%, Yes, but no replacement Justice will be confirmed by the Senate before 1 January 2021: 20%)
    • Will the United States experience at least one quarter of negative real GDP growth in 2020? (75%)
    • Who will win the 2020 United States presidential election? (The Republican Party nominee: 50%, The Democratic Party nominee: 50%, Another candidate: 0%)
    • Before 1 January 2021, will there be a lethal confrontation between the national military forces of Iran and the United States either in Iran or at sea? (20%)
    • Will Nicolas Maduro cease to be president of Venezuela before 1 June 2020? (10%)
    • When will the Transportation Security Administration (TSA) next screen two million or more travelers in a single day? (Not before 1 September 2020: 66%, Between 1 August 2020 and 31 August 2020: 17%, Between 1 July 2020 and 31 July 2020: 11%, Between 1 June 2020 and 30 June 2020: 4%, Before 1 June 2020: 2%)


  • The Brookings institution, on Forecasting energy futures amid the coronavirus outbreak
  • The European Statistical Service is "a partnership between Eurostat and national statistical institutes or other national authorities in each European Union (EU) Member State responsible for developing, producing and disseminating European statistics". In this time of need, the ESS brings us inane information, like "consumer prices increased by 0.1% in March in Switzerland".
  • Famine: The famine early warning system gives emergency and crisis warnings for East Africa.
  • COVID: Everyone and their mother have been trying to predict the future of COVID. One such initiative is Epidemic forecasting, which uses inputs from the above mentioned prediction platforms.
  • On LessWrong, Assessing Kurzweil's 1999 predictions for 2019 [LW · GW]; I expect an accuracy of between 30% and 40% based on my own investigations, but find the idea of crowdsourcing the assessment rather interesting.
comment by NunoSempere · 2020-11-08T11:42:08.965Z · EA(p) · GW(p)

Prizes in the EA Forum and LW.

I was looking at things other people had tried before.

EA Forum

How should we run the EA Forum Prize? [EA · GW]

Cause-specific Effectiveness Prize (Project Plan) [EA · GW]

Announcing Li Wenliang Prize for forecasting the COVID-19 outbreak [EA · GW]

Announcing the Bentham Prize [EA · GW]

$100 Prize to Best Argument Against Donating to the EA Hotel [EA · GW]

Essay contest: general considerations for evaluating small-scale giving opportunities ($300 for winning submission) [EA · GW]

Cash prizes for the best arguments against psychedelics being an EA cause area [EA · GW]

Debrief: "cash prizes for the best arguments against psychedelics" [EA · GW]

A black swan energy prize [EA · GW]

AI alignment prize winners and next round [EA · GW]

$500 prize for anybody who can change our current top choice of intervention [EA · GW]

The Most Good - promotional prizes for EA chapters from Peter Singer, CEA, and 80,000 Hours [EA · GW]

LW (on the last 5000 posts)

Over $1,000,000 in prizes for COVID-19 work from Emergent Ventures [LW · GW]

The Dualist Predict-O-Matic ($100 prize) [LW · GW]

Seeking suggestions for EA cash-prize contest [LW · GW]

Announcement: AI alignment prize round 4 winners [LW · GW]

A Gwern comment on the Prize literature [LW(p) · GW(p)]

[prize] new contest for Spaced Repetition literature review ($365+) [LW · GW]

[Prize] Essay Contest: Cryonics and Effective Altruism [LW · GW]

Announcing the Quantified Health Prize [LW · GW]

Oops Prize update [LW · GW]

Some thoughts on:

AI Alignment Prize: Round 2 due March 31, 2018 [LW · GW]

Quantified Health Prize results announced [LW · GW]

FLI awards prize to Arkhipov’s relatives [LW · GW]

Progress and Prizes in AI Alignment [LW · GW]

Prize for probable problems [LW · GW]

Prize for the best introduction to the LessWrong source ($250) [LW · GW]

How to replicate.

Go to the EA forum API [? · GW] or to the LW API [? · GW] and input the following query:

      posts(input: {
        terms: {
          # view: "top"
          meta: null  # this seems to get both meta and non-meta posts
          after: "10-1-2000"
          before: "10-11-2020" # or some date in the future
        }
      }) {
        results {
          title  # other fields can be requested here as needed
        }
      }
Copy the output into a file, e.g. last5000posts.txt.

Search for the keyword "prize". On Linux one can use grep "prize" last5000posts.txt, or grep -B 1 "prize" last5000posts.txt | sed 's/^.*: //' | sed 's/\"//g' > last5000postsClean.txt to produce a cleaner output.
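For those who prefer not to use shell tools, the same filtering step can be sketched in Python. The filename and the JSON shape of the saved output are assumptions based on the description above:

```python
def prize_posts(results: list[dict]) -> list[str]:
    """Return titles of posts whose title mentions 'prize' (case-insensitive),
    mirroring the grep step described above."""
    return [r["title"] for r in results if "prize" in r["title"].lower()]

# Hypothetical shape of the API output saved to the text file:
sample = [
    {"title": "Announcing the Bentham Prize"},
    {"title": "What happened in forecasting in March 2020"},
    {"title": "AI alignment prize winners and next round"},
]
print(prize_posts(sample))
# -> ['Announcing the Bentham Prize', 'AI alignment prize winners and next round']
```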

Replies from: NunoSempere
comment by NunoSempere · 2020-11-08T16:54:25.977Z · EA(p) · GW(p)

Can't believe I forgot the D-Prize, which awards $20,000 USD for teams to distribute proven poverty interventions.

comment by NunoSempere · 2020-11-01T09:23:11.742Z · EA(p) · GW(p)

The Stanford Social Innovation Review makes the case (archive link) that new, promising interventions are almost never scaled up by already established, big NGOs.

Replies from: EricHerboso
comment by EricHerboso · 2020-11-01T14:06:45.223Z · EA(p) · GW(p)

I suppose I just assumed that scale-ups happened regularly at big NGOs, and I never bothered to look closely enough to notice that they don't. I find this very surprising.

comment by NunoSempere · 2021-01-05T10:26:10.448Z · EA(p) · GW(p)

Quality Adjusted Research Papers

Taken from here [EA · GW], but I want to be able to refer to the idea by itself. 

This spans six orders of magnitude (1 to 1,000,000 mQ), and I do find that my intuitions agree with the relative values, i.e., I would probably sacrifice each example for 10 equivalents of the preceding type (and vice versa).

A unit, even if it is arbitrary or ad hoc, makes relative comparison easier, because projects can be compared to a reference point rather than to each other. It also makes working with different orders of magnitude easier: instead of asking how valuable a blog post is compared to a foundational paper, one can move up and down in steps of 10x, which seems much more manageable.
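To illustrate, comparing projects many orders of magnitude apart reduces to counting 10x steps. The example values in mQ below are made up for illustration, not taken from the original table:

```python
import math

def steps_of_10x(value_a_mq: float, value_b_mq: float) -> float:
    """How many 10x steps separate two projects, with value measured in mQ
    (milli-quality-adjusted research papers)."""
    return math.log10(value_b_mq / value_a_mq)

# Hypothetical values: a short comment at 1 mQ vs. a foundational
# paper at 1,000,000 mQ.
print(steps_of_10x(1, 1_000_000))  # -> 6.0
```

Rather than intuiting a million-to-one ratio directly, one only has to check six separate "would I trade ten of these for one of those?" judgments.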

comment by NunoSempere · 2020-03-30T11:35:47.091Z · EA(p) · GW(p)

CoronaVirus and Famine

The Good Judgment Open forecasting tournament (registration needed to view the questions) gives a 66% chance for the answer to "Will the UN declare that a famine exists in any part of Ethiopia, Kenya, Somalia, Tanzania, or Uganda in 2020?"

I think that the 66% is a slight overestimate. Nonetheless, if a famine does hit, it would be terrible, as other countries might not be able to spare enough attention due to the current pandemic.


It is not clear to me what an altruist who realizes that can do, as an individual:

  • A famine is likely to hit this region (but hasn't hit yet)
  • It is likely to be particularly bad.

Donating to the World Food Programme, which is already doing work on the matter, might be a promising answer, but I haven't evaluated the programme, nor compared it to other potentially promising options (see here: [EA · GW]).

Replies from: aarongertler, NunoSempere
comment by Aaron Gertler (aarongertler) · 2020-04-01T06:29:00.069Z · EA(p) · GW(p)

Did you mean to post this using the Markdown editor? Currently, the formatting looks a bit odd from a reader's perspective.

comment by NunoSempere · 2020-11-17T16:53:54.443Z · EA(p) · GW(p)

Ethiopia's Tigray region has seen famine before: why it could happen again - The Conversation Africa

Tue, 17 Nov 2020 13:38:00 GMT


The Tigray region is now seeing armed conflict. I'm at 5-10%+ that it develops into famine, regardless of whether it ends up meeting the rather stringent UN conditions for the term to be used, though I have yet to actually look into the base rate. I've sent an email to see if they update their forecasts.

comment by NunoSempere · 2020-12-19T12:42:23.253Z · EA(p) · GW(p)

Excerpt from "Chapter 7: Safeguarding Humanity" of Toby Ord's The Precipice, copied here for later reference. h/t Michael A [EA · GW].


Many of those who have written about the risks of human extinction suggest that if we could just survive long enough to spread out through space, we would be safe—that we currently have all of our eggs in one basket, but if we became an interplanetary species, this period of vulnerability would end. Is this right? Would settling other planets bring us existential security?

The idea is based on an important statistical truth. If there were a growing number of locations which all need to be destroyed for humanity to fail, and if the chance of each suffering a catastrophe is independent of whether the others do too, then there is a good chance humanity could survive indefinitely.

But unfortunately, this argument only applies to risks that are statistically independent. Many risks, such as disease, war, tyranny and permanently locking in bad values are correlated across different planets: if they affect one, they are somewhat more likely to affect the others too. A few risks, such as unaligned AGI and vacuum collapse, are almost completely correlated: if they affect one planet, they will likely affect all. And presumably some of the as-yet-undiscovered risks will also be correlated between our settlements.

Space settlement is thus helpful for achieving existential security (by eliminating the uncorrelated risks) but it is by no means sufficient. Becoming a multi-planetary species is an inspirational project—and may be a necessary step in achieving humanity’s potential. But we still need to address the problem of existential risk head-on, by choosing to make safeguarding our longterm potential one of our central priorities.

Replies from: NunoSempere
comment by NunoSempere · 2020-12-19T12:51:56.539Z · EA(p) · GW(p)

Nitpick: I would have written "this argument only applies to risks that are statistically independent" as "this argument applies to a lesser degree if the risks are not statistically independent, in proportion to their degree of correlation." Space colonization still buys you some risk protection if the risks are imperfectly correlated. For example, another planet definitely buys you at least some protection from absolute tyranny (even if tyranny in one place is correlated with tyranny elsewhere).
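The point about partial correlation can be made concrete with a toy calculation: compare the chance that every settlement is lost when per-planet risks are fully independent, perfectly correlated, or somewhere in between. The linear interpolation and the probabilities below are illustrative assumptions, not a serious risk model:

```python
def p_all_lost(p_per_planet: float, n_planets: int, correlation: float) -> float:
    """Probability that all settlements are lost, interpolating linearly
    between full independence (correlation=0, losses multiply) and
    perfect correlation (correlation=1, one loss implies all).
    A toy model only."""
    independent = p_per_planet ** n_planets
    correlated = p_per_planet
    return (1 - correlation) * independent + correlation * correlated

p = 0.1  # illustrative per-planet catastrophe probability
print(p_all_lost(p, 5, 0.0))  # independent: ~1e-05
print(p_all_lost(p, 5, 1.0))  # perfectly correlated: 0.1
print(p_all_lost(p, 5, 0.5))  # imperfect correlation still buys some protection
```

Even at 0.5 correlation, adding planets lowers the total risk relative to the single-planet baseline, which is the nitpick's point.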

comment by NunoSempere · 2020-11-30T18:45:14.320Z · EA(p) · GW(p)

Here is a more cleaned up — yet still very experimental — version of a rubric I'm using for the value of research:


  • Probabilistic
    • % of producing an output which reaches goals
      • Past successes in area
      • Quality of feedback loops
      • Personal motivation
    • % of being counterfactually useful
      • Novelty
      • Neglectedness
  • Existential
    • Robustness: Is this project robust under different models?
    • Reliability: If this is a research project, how much can we trust the results?


  • Overall promisingness (intuition)
  • Scale: How many people affected
  • Importance: How important for each person
  • (Proxies of impact):
    • Connectedness
    • Engagement
    • De-confusion
    • Direct applicability
    • Indirect impact
      • Career capital
      • Information value

Per Unit of Resources

  • Personal fit
  • Time needed
  • Funding needed
  • Logistical difficulty

See also: Charity Entrepreneurship's rubric, geared towards choosing which charity to start.
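One hypothetical way to operationalize such a rubric is a weighted score per project. The criterion names echo the list above, but the weights, the 0-10 scale, and the example scores are all placeholders; the rubric itself does not specify an aggregation method:

```python
def rubric_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of 0-10 criterion scores. A sketch, not the
    rubric's actual (unspecified) aggregation method."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in weights) / total_weight

weights = {"success_probability": 2, "counterfactual_usefulness": 2,
           "robustness": 1, "scale": 2, "personal_fit": 1}
project = {"success_probability": 7, "counterfactual_usefulness": 5,
           "robustness": 6, "scale": 8, "personal_fit": 9}
print(rubric_score(project, weights))  # -> 6.875, a single comparable number
```

A weighted average is only one design choice; multiplicative aggregation would instead let a very low score on any one criterion sink the whole project.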

Replies from: edoarad
comment by EdoArad (edoarad) · 2020-11-30T20:07:59.237Z · EA(p) · GW(p)

I like it! I think that something in this vein could potentially be very useful. Can you expand more about the proxies of impact?

Replies from: NunoSempere
comment by NunoSempere · 2020-12-02T11:57:50.488Z · EA(p) · GW(p)

Sure. So I'm thinking that for impact, you'd have causal factors (scale, importance, relation to other work, etc.). But then you'd also have proxies of impact: things that you intuit correlate well with having an impact even if the relationship isn't causal. For example, having lots of comments praising some project doesn't normally cause the project to have more impact. See here for the kind of thing I'm going for.

comment by NunoSempere · 2020-10-30T21:19:55.621Z · EA(p) · GW(p)

If one takes Toby Ord's x-risk estimates (from here [EA · GW]), but adds some uncertainty, one gets: this Guesstimate. X-risk ranges from 0.1 to 0.3, with a point estimate of 0.19, or 1 in 5 (vs 1 in 6 in the book).
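A Monte Carlo sketch of the same exercise: sample each risk from a range, combine assuming independence, and average. The per-risk ranges here are made up for illustration and are NOT Ord's numbers; the actual inputs live in the linked Guesstimate model:

```python
import random

random.seed(0)

# Illustrative ranges for a few risk categories (NOT Ord's estimates).
risk_ranges = {"unaligned AI": (0.03, 0.2),
               "engineered pandemics": (0.01, 0.06),
               "other": (0.01, 0.08)}

def sample_total_risk() -> float:
    """One draw of total x-risk, treating risks as independent
    (P(any) = 1 - prod(1 - p_i)), each sampled uniformly in its range."""
    survive = 1.0
    for lo, hi in risk_ranges.values():
        survive *= 1 - random.uniform(lo, hi)
    return 1 - survive

draws = [sample_total_risk() for _ in range(10_000)]
point_estimate = sum(draws) / len(draws)
print(round(point_estimate, 2))
```

Because the AI range is the widest, it dominates the spread of the draws, matching the observation below that AI risk drives most of the overall uncertainty.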

Replies from: NunoSempere, NunoSempere
comment by NunoSempere · 2020-10-30T21:27:03.095Z · EA(p) · GW(p)

I personally would add more probability to unforeseen natural risk and unforeseen anthropogenic risk.

comment by NunoSempere · 2020-10-30T21:25:49.464Z · EA(p) · GW(p)

The uncertainty regarding AI risk is driving most of the overall uncertainty.

comment by NunoSempere · 2020-10-29T19:53:16.737Z · EA(p) · GW(p)

2020 U.S. Presidential election to be most expensive in history, expected to cost $14 billion - The Hindu Thu, 29 Oct 2020 03:17:43 GMT

comment by NunoSempere · 2020-03-22T19:58:55.058Z · EA(p) · GW(p)

Testing shortform

comment by NunoSempere · 2021-01-14T17:59:48.918Z · EA(p) · GW(p)

Test II