Posts

The Survival and Flourishing Fund grant applications open until August 23rd ($8m-$12m planned for dispersal) 2021-08-04T19:00:49.793Z
2020 AI Alignment Literature Review and Charity Comparison 2020-12-21T15:25:04.543Z
Avoiding Munich's Mistakes: Advice for CEA and Local Groups 2020-10-14T17:08:13.033Z
Will protests lead to thousands of coronavirus deaths? 2020-06-03T19:08:10.413Z
2019 AI Alignment Literature Review and Charity Comparison 2019-12-19T02:58:58.884Z
2018 AI Alignment Literature Review and Charity Comparison 2018-12-18T04:48:58.945Z
2017 AI Safety Literature Review and Charity Comparison 2017-12-20T21:54:07.419Z
2016 AI Risk Literature Review and Charity Comparison 2016-12-13T04:36:48.060Z
Being a tobacco CEO is not quite as bad as it might seem 2016-01-28T03:59:15.614Z
Permanent Societal Improvements 2015-09-06T01:30:01.596Z
EA Facebook New Member Report 2015-07-26T16:35:54.894Z

Comments

Comment by Larks on Leverage Research: reviewing the basic facts · 2021-09-27T18:41:19.443Z · EA · GW

Three years later, a similar post with some more details about Leverage's internal management processes, and an update from Leverage here.

Comment by Larks on Why I am probably not a longtermist · 2021-09-27T01:09:13.400Z · EA · GW

We might be concerned with degrading--or betraying--our species / traditions / potential.

Yeah, this is a major motivation for me to be a longtermist. As far as I can see, a Haidt/conservative concern for a wider range of moral values, which seem like they might be lost 'by default' if we don't do anything, is a pretty longtermist concern. I wonder if I should write up something longer on this.

Comment by Larks on Clarifying the Petrov Day Exercise · 2021-09-27T00:03:11.292Z · EA · GW

If you had to opt in, not opting in would be one of your two options. If you didn't opt in, there would be a post that says something like "We won! Everyone who could opt in decided to cooperate!"

Comment by Larks on What should "counterfactual donation" mean? · 2021-09-26T17:11:40.740Z · EA · GW

A minor variant on 9) which is still perhaps worth making explicit would be if you donated the $50 to a different charity that the other person did not think was very valuable. I think this maintains counterfactual validity if it is credible.

Comment by Larks on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T17:07:29.746Z · EA · GW

Surely after the site has been nuked you will no longer be able to enter the codes, because your silos will have been destroyed? And prior to that you risk mis-classifying our civilian space exploration vehicles, whose optimal launch trajectory just happens to go over LessWrong airspace, as weapons?

Comment by Larks on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T16:44:02.240Z · EA · GW

The appropriate response to someone with the launch codes to a real nuke suggesting we sell them to terrorists is to shoot them, not to wait to see if the terrorists could pay a lot of money; by comparison a downvote seems very apt!

Comment by Larks on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T16:37:29.864Z · EA · GW

First of all I'd like to thank the Forum team for their hard work producing this nuclear deterrent. We have been extremely lucky that LessWrong did not heed Bertrand Russell's advice during their period of nuclear monopoly. However, I am concerned that we have not yet tested these weapons, and hence we cannot be entirely sure they will function as intended. Perhaps a test strike against a lightly populated military target like the https://www.nytimes.com/ would make an effective demonstration?

Comment by Larks on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T16:26:19.619Z · EA · GW

I will not enter my codes for any reason ... if LessWrong is taken down, I will retaliate.

Ahh, Nixon's madman strategy.

Comment by Larks on Should We Have More Expansive Laws as an Alternative to Cancel Culture? · 2021-09-23T14:22:36.217Z · EA · GW

You might be interested in reading Robin Hanson's extended writings on this and similar subjects, for example here:

I’ve said before that it might be better if we had formal laws against the kinds of evil that cancel crowds now seek to punish. Because at least then there’d be a formal trial before punishment, which could exonerate many of the accused. But it doesn’t look like such laws will be passed anytime soon.

I agree with him that this solution is unlikely to satisfy people's desire for cancel culture:

  • People engaging in cancellations enjoy doing so; they would not get this benefit from passively watching a court proceeding.
  • Court proceedings give defendants the opportunity to cross-examine witnesses, present their own evidence, be judged by their peers, and so on, which increase their likelihood of being exonerated.
  • Often people are cancelled for activities that either were not a norm violation at the time, or are not a norm violation in broader society, and hence would not be against the law.
  • Cancel culture allows a small number of crazy people to exert disproportionate influence because they care more; laws determined by the median voter would be and are much more moderate.
  • Engaging in a cancellation mob allows people to signal how woke they are; passively accepting an institutional process would not.
  • Enforcement through highly random bullying creates a climate of fear, where people aggressively self-censor to avoid falling anywhere near the line. Encoding this in law would allow people to say things that were just on the permitted side of the line without fear.
  • Because there is little logic to cancellation, allies and high status people can be exempted from punishment for the same behaviour which would be cancellably 'creepy' or 'racist' from others.
  • Laws can take many years to pass; cancellation mobs sometimes want to punish people for things that were not forbidden very recently.
  • Passing such laws would require them to be debated, and many of them might seem absurd. By instead only raising these rules in the context of individual transgressors, principled opposition can be dismissed as supporting the bad person.

Comment by Larks on Does the Forum Prize lead people to write more posts? · 2021-09-21T04:00:02.402Z · EA · GW

Perhaps the most plausible way in which this could happen is that the authors of prize-winning posts are incentivized to post more frequently. We therefore examined whether prize-winning authors post more frequently in the six months following their prize than in the six months prior to it, relative to a control group.

I'm surprised this seemed the most plausible mechanism. Surely the incentive should have occurred prior to winning the prize? For my own case, I observed the existence of the prize, which encouraged me to put more work into making my post better, and the winning came later, presumably in part due to this extra effort. Is your idea that winning signals that you are high enough quality to be able to win, and hence it's worth trying again?

In fact, if winners suspected the judges would be averse to letting them win 'too often' out of some egalitarian sentiment, the effect might go in the opposite direction (though I think this would be very small, and I don't think I used this as a judging criterion).

Comment by Larks on Suggested norms about financial aid for EAG(x) · 2021-09-20T17:11:11.166Z · EA · GW

You should think of paying for your EAG ticket as equivalent to making a donation to EA community-building.

If we adopt this line of thought, wouldn't basically no-one end up paying?

  • Most people do not donate to community-building.
  • Personally attending doesn't significantly increase the cost-effectiveness of community-building from an impartial point of view.
  • Even if you were donating to EA community building anyway, you were probably donating more than the ticket price, so you are already 'covered'.
  • If you are going to donate, you should do so directly, because of the tax advantages. For higher income people this could effectively almost double the cost of buying the ticket directly.

Comment by Larks on Guarding Against Pandemics · 2021-09-19T02:21:38.070Z · EA · GW

I've read a lot of what Warren thinks we should do, and it seems... underwhelming? It seems like a ton of it is "giving people money to mitigate the negative effects of the pandemic" and almost nothing about preventing actual pandemics?

Indeed, she said she was opposed to the US paying more for vaccines, and supported IP expropriation, both of which reduce the incentives to invest in vaccines for next time.

Comment by Larks on Guarding Against Pandemics · 2021-09-19T02:19:02.065Z · EA · GW

Ahh. You said in the post that the group was supporting both parties:

Guarding Against Pandemics (GAP), which does non-partisan political advocacy ... another important part of GAP’s work is supporting elected officials from both parties who will advocate for biosecurity and pandemic preparedness. [emphasis added]

... which makes this decision a bit confusing. I think it is very easy to get sucked into partisanship and end up just going for one side; avoiding this requires consistent effort from the beginning. Do you expect that over the long run you will support roughly equal numbers of Republicans and Democrats? I could imagine it being useful to have some kind of promise to spend equally between the parties. Otherwise I think you're in danger of just looking like another Democrat front group.

Comment by Larks on Guarding Against Pandemics · 2021-09-18T17:44:06.284Z · EA · GW

Very important topic, thanks for putting together this great idea!

Could you explain in a bit more detail how the $5,000 gating issue works? My understanding was that multi-candidate PACs topped out at giving $5,000 to each candidate, regardless of how many extra small donors they had (assuming they hit the 51-donor threshold). Once you have e.g. 100 donors giving $5,000, what can you do with an additional $5,000 donor?

Perhaps it would be useful for you to share the list of candidates you think are good? This would allow people to donate to them directly, allowing each top candidate to receive more than $5,000, because each individual can give $2,800 to each candidate. Donors could then write 'for supporting pandemic preparedness' in the notes field, so the politicians understand what behaviour we are supporting.

It would also allow people to customize who they donate to; people might want to support pandemic-aware politicians in general, but have other reasons for vetoing one or two on the list.
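To spell out the arithmetic behind this suggestion, here is a minimal sketch (the $5,000 PAC limit and $2,800 individual limit are the figures quoted in this thread; the 100-donor count is purely illustrative): money routed through the PAC is capped per candidate regardless of how many donors there are, whereas direct donations scale with the number of donors.

```python
# Rough arithmetic only - a sketch, not a statement of campaign finance law.
# The limits below are the figures quoted in this comment thread.
PAC_LIMIT_PER_CANDIDATE = 5_000         # max a multi-candidate PAC can give one candidate
INDIVIDUAL_LIMIT_PER_CANDIDATE = 2_800  # max one individual can give one candidate directly

donors = 100  # illustrative number of donors

via_pac = PAC_LIMIT_PER_CANDIDATE                 # capped, however many donors there are
direct = donors * INDIVIDUAL_LIMIT_PER_CANDIDATE  # scales with the number of donors

print(f"Via the PAC: ${via_pac:,} per candidate")
print(f"Directly:    ${direct:,} per candidate from {donors} donors")
# Via the PAC: $5,000 per candidate
# Directly:    $280,000 per candidate from 100 donors
```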

Finally, the post says:

Reminder: due to federal election law, only U.S. citizens are allowed to donate

But the donate link says:

I am a U.S. citizen or lawfully admitted permanent resident (i.e., green card holder).

Could you clarify whether green card holders can donate?

Edited to add:

I notice you are using ActBlue to handle payments. My impression was that they only allow people to support Democrats - for example Phil Scott, the governor of Vermont, doesn't even show up on their website, even though he has been very good on covid. Are ActBlue happy with a non-partisan PAC using their systems to donate to Republican politicians?

Comment by Larks on EA needs consultancies · 2021-09-16T16:21:47.594Z · EA · GW

[T]he intersection of people who were very concerned about what was true, and people who were trying hard to make the world a better place, was negligible. 

Seems pretty plausible to me this is true. Both categories are pretty small to start with, and their correlation isn't super high. Indeed, the fact that you think it would be bad optics to say this seems like evidence that most people are indeed not 'very concerned' about what is true.

Comment by Larks on The motivated reasoning critique of effective altruism · 2021-09-16T16:15:16.600Z · EA · GW

Certainly we still do lots of them internally at Open Phil.

It might be helpful if you published some more of these to set a good example.

Comment by Larks on EA Forum Creative Writing Contest: $10,000 in prizes for good stories · 2021-09-13T00:10:45.485Z · EA · GW

In the couple of past cases where people have shared fiction here, it's been on the frontpage and people haven't generally seemed to mind.

Presumably we are expecting a much higher volume than in the past. It might be a bit strange for newcomers to the movement who, expecting to find a forum for serious idea discussion, instead find themselves on a strange version of AO3.

edit: perhaps entrants should have [Creative Writing Entry] as the start of their title, so it is easy to distinguish on the frontpage?

Comment by Larks on First vs. last name policies? · 2021-09-11T13:52:10.834Z · EA · GW

I essentially always just use first names, including for CEOs and professors. I actually find it quite strange how insistent some otherwise extremely egalitarian people are on the use of professional titles as a mark of social status.

For actual nobility I guess I might use titles.

Comment by Larks on Public Health Research · 2021-09-10T20:39:40.246Z · EA · GW

I was reminded of that post recently when reading Why We Sleep by Matthew Walker, who described the significant benefits that things as simple as switching away from LED lighting could have on sleep quality, which in turn has enormous impact on cognitive performance, mental health, car accidents, etc. I started to think that further investments in sleep research could potentially have high societal returns.

It's worth noting that Walker's book significantly misrepresents the science. Quoting at length from Guzey:

In the process of reading the book and encountering some extraordinary claims about sleep, I decided to compare the facts it presented with the scientific literature. I found that the book consistently overstates the problem of lack of sleep, sometimes egregiously so. It misrepresents basic sleep research and contradicts its own sources.

In one instance, Walker claims that sleeping less than six or seven hours a night doubles one’s risk of cancer – this is not supported by the scientific evidence (Section 1.1). In another instance, Walker seems to have invented a “fact” that the WHO has declared a sleep loss epidemic (Section 4). In yet another instance, he falsely claims that the National Sleep Foundation recommends 8 hours of sleep per night, and then uses this “fact” to falsely claim that two-thirds of people in developed nations sleep less than the “the recommended eight hours of nightly sleep” (Section 5).

Walker’s book has likely wasted thousands of hours of life and worsened the health of people who read it and took its recommendations at face value (Section 7).

The myths created by the book have spread in the popular culture and are being propagated by Walker and by other scientists in academic research. For example, in 2019, Walker published an academic paper that cited Why We Sleep 4 times just on its first page, meaning that he believes that the book abides by the academic, not the pop-science standards of accuracy (Section 14).

Any book of Why We Sleep’s length is bound to contain some factual errors. Therefore, to avoid potential concerns about cherry-picking the few inaccuracies scattered throughout, in this essay, I’m going to highlight the five most egregious scientific and factual errors Walker makes in Chapter 1 of the book. This chapter contains 10 pages and constitutes less than 4% of the book by the total word count.

Comment by Larks on Extrapolated Age Distributions after We Solve Aging · 2021-09-09T00:12:18.670Z · EA · GW

Somewhat relatedly, you might find this interesting: research estimating generation length over history for both sexes. Surprisingly to me, they find massive variation over time; ~30,000 years ago the average generation length was around 24 years for women but more like 33 for men, versus around 26 for both more recently - a very large difference. It's not the same metric, but related in that it suggests another way in which, historically, forces pushing the sexes towards parity were not that strong and tolerated considerable variation over time.

Comment by Larks on Neglected biodiversity protection by EA. · 2021-09-04T20:09:11.179Z · EA · GW

Thanks, added a 'maybe'.

Comment by Larks on Neglected biodiversity protection by EA. · 2021-09-04T19:16:22.684Z · EA · GW

I suspect this answer will not be very satisfying to you, but it is in some sense the true answer so someone should provide it:

There are a great many possible causes in the world, and EA is focused on those which are (plausibly) the most effective. By their nature only a small fraction of all causes are plausible candidates for being the most effective, so we should expect most causes not to be EA causes. If you had some concrete arguments for why biodiversity might meet such a stringent standard, people could consider them, but in their absence the 'default' is for something to not be an EA cause.

In particular, in addition to some argument as to why having many species is very important, you might want some sort of comparison to:

  • Existential risk work, which aims at preventing the irreversible extinction of all species.*
  • Wild Animal Welfare work, which regards wild animals as maybe having net negative lives, and hence their extinction might be good (if this was by reducing the total number of animals).


* as a first approximation

Comment by Larks on Concern about the EA London COVID protocol · 2021-09-02T14:33:07.935Z · EA · GW

provision of hand sanitising stations, cleaning of public surfaces

As I understand it, these two measures are more safety theater than anything else - it seems that almost all transmission is through the air, not through surfaces. But it would be good to hear about ventilation: getting lots of fresh air is one of the most effective ways of reducing transmission.

Finally, I realize this is probably futile given venue restrictions, but as far as I'm aware one dose of J&J is not more effective than one dose of Pfizer or Moderna. If you're going to accept a single shot of J&J, why not also a single shot of Pfizer? Conversely, some other vaccines (e.g. Sinopharm) seem much less effective - does it really make sense to give people full credit for those? Or is there an implicit restriction on which vaccines are accepted based on e.g. MHRA approval?

Comment by Larks on Harrison D's Shortform · 2021-08-30T04:59:11.605Z · EA · GW

Great spot. Presumably this means a lot of kids will be googling related terms and looking for pre-existing policy suggestions and pro/con lists.

Comment by Larks on Extrapolated Age Distributions after We Solve Aging · 2021-08-29T17:49:18.285Z · EA · GW

Interesting work, thanks!

The rising female:male life expectancy ratio is interesting, because it instinctively strikes me as absurd - there should be some feedback loop that pushes them back towards similar numbers - but it's not clear to me this intuition is much more than status quo bias. 

Comment by Larks on If You're So Smart, Why Aren't You Governor Of California? (Scott Alexander: Astral Codex Ten) · 2021-08-26T16:39:28.943Z · EA · GW

Typically you introduce people with their most impressive credential, so I would just say:

Scott, who designed the rule-set for Dungeons and Discourse, ...

Comment by Larks on Who do intellectual prizewinners follow on Twitter? · 2021-08-25T20:17:51.012Z · EA · GW

I'm struck by how many of these accounts follow institutions like Oxford or the Rhodes Trust - my impression is that serious Twitter users tend to prefer following individuals associated with institutions rather than the institutions themselves. I wonder if this suggests these people created accounts, followed a few groups they felt like they 'should', and then largely stopped using Twitter.

Comment by Larks on Mission Hedgers Want to Hedge Quantity, Not Price · 2021-08-19T18:05:29.811Z · EA · GW

Historically it has been hard to get similar products off the ground. Virtually every human has native exposures to housing prices and the overall level of GDP in their country, but for some reason virtually no-one is interested in actually trading them. According to Bloomberg, on most days literally zero contracts trade for even the front-month Case-Shiller housing composite future.

It's possible there might be some natural short interest for oil quantity contracts from e.g. pipelines, whose revenue is determined by the volume of oil sent through them? But this would likely be quite local, and I think you would struggle to find interest in the global quantity.

Comment by Larks on Building my Scout Mindset: #2 · 2021-08-17T03:24:02.836Z · EA · GW

I'm not sure your Ideological Turing Test really captures the essence of the article. The key conclusion isn't just that civilians can't fully replace traditional police, it's that (this piece of) the evidence doesn't support the activists' claims that there is any aspect of policing which should be de-policed. Presumably the target audience is anyone who might have otherwise believed the suggestions in the other referenced articles that this de-policing policy was beneficial.

I don't like the possible implication that unhoused individuals are more prone to violence. 

Is this not true? My very quick googling (1) (2) suggests homeless people are disproportionately violent, and I would guess the problem is worse among those not using shelters, as being violent or abusing drugs might have been the reason they were kicked out. Given the demographics of the homeless it would be pretty surprising if they were not more prone to violence than the average person.

Comment by Larks on Risks from the UK's planned increase in nuclear warheads · 2021-08-16T15:28:03.315Z · EA · GW

Some research on nuclear winter suggests that 100 Hiroshima-sized nuclear detonations would be enough to destroy the majority of human life on earth. Under such a model, it makes no sense for any country to have more than 100 warheads.

I don't see how you can draw such a conclusion. This report concluded that 100 nukes attacking 100 different cities would cause dramatic climate change, but it could still make sense to have more warheads, as you might have other use cases for them. For example, destroying hardened military targets could take multiple warheads but produce much less smoke than striking a single city would. Additionally, some fraction of your warheads could be destroyed prior to use, increasing the ex ante number required for deterrence.

Comment by Larks on [PR FAQ] Adding profile pictures to the Forum · 2021-08-09T16:03:29.331Z · EA · GW

Many things exist on a continuum from impersonally meritocratic to based on social relationships. At its best, the former can be fair and efficient, but it can also feel 'cold'. The latter can provide motivation and a sense of belonging, but can also be biased, inefficient and nepotistic.

In this framework, it seems to me the forum should be more towards the former end. There are many other venues for people to engage with the social side of EA - e.g. local groups, Facebook, EAGs, colleagues. But for most people there is no alternative to the forum for relatively objective discussion, so I would be wary about pushing away from that direction. You don't see profile pictures on journal articles, or court documents, or computer code.

Yes, at the moment the forum doesn't take advantage of many techniques that other platforms use to gain popularity. But to the extent these come at the cost of rational discussion, this is a cost we should be happy to pay. The less differentiated the forum is, the less reason it has to exist.

Indeed for a while LW even had an option to remove usernames from the site so you could read each comment without preconceived notions! I think that is too extreme - usernames convey important information for statistical discrimination on comments - but I'm not sure why someone's looks should matter.

I also messaged you another important consideration in this direction.

Comment by Larks on [PR FAQ] Banner highlighting valuable EA resources · 2021-08-09T15:45:37.509Z · EA · GW

I find the recent tendency on the internet to use a huge amount of screen space for banners and the like quite annoying. I would almost definitely want to turn this off, and recommend making this very easy for people to do.

Comment by Larks on Phil Torres' article: "The Dangerous Ideas of 'Longtermism' and 'Existential Risk'" · 2021-08-06T16:13:39.008Z · EA · GW

Speaking of chutzpah, I've never seen anything quite like this:

“We can’t have people posting anything that suggests that Giving What We Can [an organization founded by Ord] is bad,” as Jenkins recalls. These are just a few of several dozen stories that people have shared with me after I went public with some of my own unnerving experiences.

He needs to briefly explain what the acronym 'GWWC' is - because otherwise the sentence will be incomprehensible - but because he wants to paint people as evil genocidal racists who don't care about the poor, he can't explain what type of organization GWWC is, or what the pledge is.

Comment by Larks on The Cryonics reductio against pure time preference: a rhetorical low-hanging fruit - or "Do we discount the future only because we won't live in it?" · 2021-08-03T19:19:46.342Z · EA · GW

Given that very few people are signed up for cryonics, being inconsistent with support for cryonics doesn't seem like much of a reductio in general. It seems plausible to me that part of the reason people don't sign up is that the far future doesn't seem 'real' to them, which is sort of like discounting.

Comment by Larks on Research on Effective Strategies for Equity and Inclusion in Movement-Building · 2021-07-27T01:30:44.049Z · EA · GW

The evidence of correlations between diversity and performance is substantial: An analysis by McKinsey found that “companies in the top quartile for racial and ethnic diversity are 35 percent more likely to have financial returns above their respective national industry medians”; that “companies in the top quartile for gender diversity are 15 percent more likely to have financial returns above their respective national industry medians”; that companies in the bottom quartile both for gender and for ethnicity and race lag in financial performance; that every 10% greater proportion of non-whites on senior-executive teams is associated with 0.8% greater earnings before interest and taxes; and that every 10% greater proportion of women on senior-executive teams is related to 3.5% greater earnings before interest and taxes in the UK.[4]

Worth noting that this study failed to replicate:

However, when we revisit McKinsey’s tests using recent data for US S&P 500® firms, we find statistically insignificant relations between McKinsey’s inverse normalized Herfindahl-Hirschman measures of executive racial/ethnic diversity and not only industry-adjusted EBIT margin, but also industry-adjusted sales growth, gross margin, ROA, ROE, and TSR. Our results suggest that despite the imprimatur often given to McKinsey’s (2015, 2018, 2020) studies, caution is warranted in relying on their findings to support the view that US publicly traded firms can deliver improved financial performance if they increase the racial/ethnic diversity of their executives. 

Comment by Larks on Why & How to Make Progress on Diversity & Inclusion in EA · 2021-07-27T01:25:02.972Z · EA · GW

Companies in the top quartile for diversity in gender and ethnicity are 15% and 35% more likely to outperform their industry’s median performance, respectively, and companies in the bottom quartile lag behind the median. 

Many other commentators have already pointed out the problems with other pieces of evidence cited in the post, but I thought it was worth noting that this study also failed to replicate:

However, when we revisit McKinsey’s tests using recent data for US S&P 500® firms, we find statistically insignificant relations between McKinsey’s inverse normalized Herfindahl-Hirschman measures of executive racial/ethnic diversity and not only industry-adjusted EBIT margin, but also industry-adjusted sales growth, gross margin, ROA, ROE, and TSR. Our results suggest that despite the imprimatur often given to McKinsey’s (2015, 2018, 2020) studies, caution is warranted in relying on their findings to support the view that US publicly traded firms can deliver improved financial performance if they increase the racial/ethnic diversity of their executives. 

Comment by Larks on Taylor Swift's "long story short" Is Actually About Effective Altruism and Longtermism (PARODY) · 2021-07-23T20:43:05.029Z · EA · GW

Thanks very much for writing this; I think it is important for EAs to become more aware of Straussian and kabbalistic messages which so suffuse our world.

However, I am skeptical of your analysis. While this was a very impressive effort for your second post on the forum, I think you fatally misinterpret the evidence here. This is not a story of her personal journey into EA, but a pre-mortem of existential risk.

You correctly start with the chorus:

And I fell from the pedestal
Right down the rabbit hole
Long story short, it was a bad time
Pushed from the precipice
Clung to the nearest lips
Long story short, it was the wrong guy

This seems to me to be very clearly a direct reference not to the rabbit hole of reading EA literature, but to existential risk due to transformative AI.

At present humanity exists on a pedestal - we are richer and more numerous and more powerful than ever before, far exceeding any other species, and the whole light cone awaits us. But as mathematicians - like Lewis Carroll - develop stronger AI systems, alignment failure will result in the world getting weirder and weirder, as if falling down the rabbit hole. Humanity's long future story of learning, growth and intergalactic colonisation is cut dramatically short - a very bad time indeed! - as these AIs push us over the edge of the precipice that Toby uses as an analogy for existential risk. We were blind to this risk because we clung to the words coming out of the lips of the wrong guy, rather than listening to sages like Toby, Nick and Eliezer.

This message is reinforced in the bridge:

Now I'm all about you
I'm all about you, ah
Yeah, Yeah
I'm all about you, ah
Yeah Yeah

This is actually a serious lament. Gone is the time of value diversity, where humans sought pleasure and art and friendship and love and wisdom and honour and joy and freedom and all the other good things in the world. Now this striving for the good has been replaced with a single goal that the AI is relentlessly optimising.

The verse continues in this theme:

Fatefully
I tried to pick my battles 'til the battle picked me
Misery
Like the war of words I shouted in my sleep
And you passed right by
I was in the alley, surrounded on all sides
The knife cuts both ways
If the shoe fits, walk in it 'til your high heels break

You are correct that the word 'Fatefully' references Schell's book, but draw the wrong lesson from there on. In the past Taylor picked her own battles - she could focus on her own goals. But with the rise of TAI, this liberty was taken away from her, as she had to focus on protecting humanity. Alas, this was utterly futile, as she was too late - she should not have indulged in the belief she could choose her battles. This leads to misery, as her too-little-too-late efforts were as irrelevant as dreams, the development of AI passing right by. Eventually she succumbs to the robots, surrounded on all sides as they cut into her to harvest trace amounts of minerals from her body.

Moving on to the next verse:

Actually
I always felt I must look better in the rear view
Missing me
At the golden gates they once held the keys to
When I dropped my sword
I threw it in the bushes and knocked on your door
And we live in peace
But if someone comes at us, this time, I'm ready

It is clear why she looks better in the rear view: in the past she was a beautiful and successful singer. Now her constituent atoms are being used for paperclip production. She does live in peace now - at least she has no further woes - but the last line is wistful. It is true vacuously, because all material implications are true if the antecedent is false: there is no-one left to come at her, because everyone is dead.

The next verse continues to describe the actions of the TAI. 

No more keepin' score
Now I just keep you warm (Keep you warm)
No more tug of war
Now I just know there's more (Know there's more)
No more keepin' score
Now I just keep you warm (Keep you warm)
And my waves meet your shore
Ever and evermore

There is no more keeping score because everything she once cared about has been destroyed, reset to zero, and there is no-one left to even observe this. The only thing being counted now is paperclips. As the AI is a singleton there is no more tug-of-war between different goals; only the relentless production of metal stationery, of which there can always be more.

Most tragic is when we learn what Taylor's body is being used for - it appears her cells were fed into a furnace to power the factories that now cover the globe. Her waves - the waves of heat produced by the combustion of her body - meet the water that is being boiled to turn the turbines. And this will continue for ever and evermore, because the AI is a singleton, and there is no way to change course.

In the next verse Taylor provides advice to her past self:

Past me
I wanna tell you not to get lost in these petty things
Your nemeses
Will defeat themselves before you get the chance to swing
And he's passing by
Rare as the glimmer of a comet in the sky
And he feels like home
If the shoe fits, walk in it everywhere you go

She wants to tell her not to get tied up in petty things - namely anything other than AI alignment. After all, the nanobots will get her enemies before long. While she is lost in these petty things, AI development is passing her by. It is rare in the most important sense - it represents a hinge of history. Alas, because of her comfort and status quo bias she failed to see the danger, and now the silicon shoe she fits into perfectly (due to the disassembly of her body at the molecular level) controls her every step.

Alas, the final lines are the most tragic.

Climbed right back up the cliff
Long story short, I survived

Like Winston in 1984, she has come to love Big Brother, and sees the relentless march of paperclips as the natural continuation of the human story.

Comment by Larks on What would you do if you had half a million dollars? · 2021-07-22T00:15:18.366Z · EA · GW

I still don't understand this. The lottery means one / a small number of grantmakers get all the money to allocate. People who don't win don't need to think about where to donate. So really it seems to me that the lottery reduces the number of grantmakers and indeed the number of people who spend time thinking about where to donate.

The model is this:

  • A bunch of people each have $5,000 to donate.
  • Many put in a bit of effort - they spend a bit of time on the GiveWell website, read some stuff by MIRI, and chat to a couple of friends. But this isn't enough to catch them up on the state of the art, let alone make some novel contribution to the project of discriminating between grant applications.
  • Others can't find the time to do even this much research.
  • So overall very little grant evaluation has really been done, and what has been done is highly duplicative. Given they all fail to pass the bar of 'as good as the EA funds', this work was essentially wasted.

But if they instead did a lottery:

  • One person gets $500,000 to donate.
  • He now puts in a lot of effort - reading a huge amount of literature, and doing calls with the leaders of multiple organizations. Perhaps he also discusses his approaches with several other EAs for advice.
  • By the end he has a novel understanding of some aspect of the charitable funding landscape, which exceeds that of the EA fund grantmakers.
  • The overall amount of time spent is actually less than before, but the depth is far greater, and with dramatically less redundancy.

So by using the lottery we have both saved time and increased the amount of effective evaluation work being done.
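To make this model concrete, here is a minimal sketch of the mechanism (the equal $5,000 contributions and 100-donor pool are the illustrative numbers from the comment above; the proportional-chance rule is the standard donor-lottery design rather than something stated here): each donor's chance of winning is proportional to their contribution, so their expected allocation is unchanged, while all of the evaluation work is concentrated in the single winner.

```python
import random

def run_donor_lottery(contributions):
    """Pool the contributions and draw one winner to allocate the whole pot.

    Each donor's chance of winning is proportional to their contribution,
    so each donor's expected allocation equals what they put in; the lottery
    only changes who does the evaluation work, not the expected dollars.
    """
    donors = list(contributions)
    weights = [contributions[d] for d in donors]
    pot = sum(weights)
    winner = random.choices(donors, weights=weights, k=1)[0]
    return winner, pot

# Illustrative numbers from the comment: 100 donors with $5,000 each.
contributions = {f"donor_{i}": 5_000 for i in range(100)}
winner, pot = run_donor_lottery(contributions)
print(f"{winner} allocates the full ${pot:,}")  # one person allocates $500,000
print(f"Expected allocation per donor: ${pot / len(contributions):,.0f}")  # still $5,000
```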

Comment by Larks on EA cause areas are just areas where great interventions should be easier to find · 2021-07-18T01:53:29.551Z · EA · GW

identifying areas and ethnic groups internationally at greatest risk of genocide / ethnic violence and trying to direct funding for anti-racism movements towards these areas

You might be interested in previous discussion of genocide prevention as a cause area here.

I'm skeptical that funding 'anti-racism' movements would make sense as an intervention though, at least in the contemporary 'woke' sense of the phrase. Many prominent 'anti-racist' memes, like that the relative lack of success of one ethnic group should be attributed to exploitation by another, can increase racial tensions, and are similar to those used to justify genocides in the past.

Comment by Larks on Miranda_Zhang's Shortform · 2021-07-15T15:45:19.879Z · EA · GW

These models predicted growth followed by collapse. The first part has been proven correct, but there is little evidence for the second. Acting like past observations of growth are evidence of future collapse seems like an unusual example of Goodman's New Riddle of Induction in the wild.

Comment by Larks on Intervention report: Agricultural land redistribution · 2021-07-15T15:26:58.324Z · EA · GW

Thanks for writing this; I really enjoy reading new detailed research on economic proposals like this, and I think How Asia Works is a very interesting book.

However, I must say I was a little confused by the structure of this report. The Summary and Funding Opportunities sections make you sound relatively positive about land expropriation, with the major problem being its lack of political tractability:

We think that if land redistribution were done well, it could be a high impact intervention for kickstarting growth. Our pessimism mainly comes from our belief that redistribution of the kind described in Joe Studwell’s How Asia Works is intractable.

But then, reading the text in detail, you raise a lot of specific objections, which seem like they would dramatically reduce the impact of the reform, or even drive it negative:

  • Lack of plausible theory for why large landlords would not be incentivised to make productive investments.
  • Large farms empirically more productive than smaller farms, even though the farm sizes you consider are still well below what a US farmer would consider economic.

Given these facts, it seems somewhat plausible to me that opposing land redistribution could be a valuable activity (though I agree it is unlikely to be GiveWell-competitive), and hence jumping to suggestions about where to implement land redistribution, and which groups to fund, seems premature.

Another issue is the historical failure of land redistribution. Studwell effectively cherry-picks a small group of very similar countries to determine that land expropriation is good, without considering other parts of the world where it has been quite bad. I don't see how you can seriously evaluate land redistribution without considering examples like Zimbabwe, which saw increased violence, decreased production and increased poverty as a result of its expropriation of land from large white farmers. These policies pose a significant risk of corruption and cronyism, and this cannot be ignored in our evaluation. Many other parts of Africa and South America also tried similar policies, with much worse outcomes than those highlighted in the book. If a policy has performed badly over most of the world, except when implemented by East Asians, an ethnic group who do extremely well in other regards (e.g. the success of East Asian immigrants to the US), this seems to suggest that the policy was not the crucial factor.

Similarly, other countries have succeeded without such redistribution. You gesture at this here:

Land equality is certainly not a sufficient condition for transformational economic growth; we suspect it is not a necessary condition either.

but I think it's worth being a bit more explicit about this: the majority of countries that have seen transformational economic growth have not had anything close to 'land equality'. Indeed, in some cases, like the UK, government policy explicitly promoted a less egalitarian model of land use by enclosing the commons to benefit larger landholders.

Comment by Larks on [deleted post] 2021-07-10T04:39:38.755Z

I could not click on the comment box on desktop (Firefox, Windows) but it seems I can on mobile (Adblock Browser, Pixel).

Edit: However, I can edit it from the desktop, though the formatting is messed up, and I get the below warning in red:

"This document was last edited in Markdown format. Showing the Markdown editor. Click here to switch to the EA Forum Docs editor (your default editor)."

Comment by Larks on You are allowed to edit Wikipedia · 2021-07-05T20:25:28.425Z · EA · GW

It's hard to give much credence to an article which claims that Wikipedia provides literally zero value. It's not perfect, and the rules about what evidence it will accept can be annoying at times, but I think the information value it has created is clearly enormous.

Wikipedia is not only a monopoly; it is the very worst monopoly, one that saps wealth, erodes knowledge, spreads false or misleading information, allows anonymous edits, and returns nothing to the economy.

Comment by Larks on [Meta] Is it legitimate to ask people to upvote posts on this forum? · 2021-06-29T15:56:01.438Z · EA · GW

If the post seems unusually important you should upvote it yourself, and maybe encourage others to read it. Explicitly asking them to upvote it seems like manipulation.

Comment by Larks on EA needs consultancies · 2021-06-29T15:52:02.075Z · EA · GW

Could OpenPhil run such a consultancy? You could hire people you only expected to have enough work to partially occupy, and then rent out their services to other organizations for the remainder. This could be a good way of proving out the business model. If successful, you could then spin it out.

My impression is that many consultancies have their ex-employer as their primary client, so this might not be so unusual.

Comment by Larks on What are the 'PlayPumps' of cause prioritisation? · 2021-06-27T18:02:37.094Z · EA · GW

I'm surprised you would think that support for the USSR and WWII isolationism, positions endorsed by no major Western political parties today, are more political than GMOs and nuclear power, which are still opposed by major groups, but happy to help!

I actually thought the USSR example might be especially palatable to your audience given the communists' historical persecution of Christians.

Comment by Larks on What are the 'PlayPumps' of cause prioritisation? · 2021-06-24T02:48:45.488Z · EA · GW

I think plastic straws are likely to be your best bet, but here are some alternatives. In each case I'm aiming for popular causes (not just individual laws or the like) where your readers should be able to see why people thought it was a good idea, even if ultimately unsuccessful:

  • Prohibition
  • Liberia
  • Opposition to GMOs
  • Opposition to Nuclear Power
  • US anti-war activism prior to Pearl Harbour
  • Western pro-Soviet activism before their atrocities were well known.
Comment by Larks on Type Checking GiveWell's GiveDirectly Cost Effective Analysis · 2021-06-24T00:33:17.544Z · EA · GW

Thanks for doing this analysis, and for writing it up in such a way as to introduce this technique to a wider audience!

Comment by Larks on What are some examples of successful social change? · 2021-06-23T01:40:09.661Z · EA · GW

This is a very broad category; Belisarius reconquered Italy with a carefully planned campaign and an army on the approximate scale of the EA movement!

Comment by Larks on Which non-EA-funded organisations did well on Covid? · 2021-06-22T17:58:41.994Z · EA · GW

He was also apparently instrumental in pushing through a big grant for Our World In Data despite the sclerotic procurement process.