Posts

The Long Reflection as the Great Stagnation 2022-09-01T20:55:40.690Z
Larks's Shortform 2022-08-30T13:43:42.593Z
Concerns with the Wellbeing of Future Generations Bill 2022-03-09T18:12:16.774Z
"Should have been hired" Prizes 2022-01-19T14:19:25.291Z
2021 AI Alignment Literature Review and Charity Comparison 2021-12-23T14:06:45.612Z
A Day in the Life of a Parent 2021-10-12T14:18:44.822Z
The Survival and Flourishing Fund grant applications open until August 23rd ($8m-$12m planned for dispersal) 2021-08-04T19:00:49.793Z
2020 AI Alignment Literature Review and Charity Comparison 2020-12-21T15:25:04.543Z
Avoiding Munich's Mistakes: Advice for CEA and Local Groups 2020-10-14T17:08:13.033Z
Will protests lead to thousands of coronavirus deaths? 2020-06-03T19:08:10.413Z
2019 AI Alignment Literature Review and Charity Comparison 2019-12-19T02:58:58.884Z
2018 AI Alignment Literature Review and Charity Comparison 2018-12-18T04:48:58.945Z
2017 AI Safety Literature Review and Charity Comparison 2017-12-20T21:54:07.419Z
2016 AI Risk Literature Review and Charity Comparison 2016-12-13T04:36:48.060Z
Being a tobacco CEO is not quite as bad as it might seem 2016-01-28T03:59:15.614Z
Permanent Societal Improvements 2015-09-06T01:30:01.596Z
EA Facebook New Member Report 2015-07-26T16:35:54.894Z

Comments

Comment by Larks on Longtermists should take climate change very seriously · 2022-10-04T00:27:29.770Z · EA · GW

Globally, climate change “could force 216 million people... to move within their countries by 2050.”

This seems like a remarkably small number to me. In 2019 around 7.4 million people moved state within the USA alone (source); over the next 28 years, with 0.4% annual population growth, that is 218 million people-moves. Spread out over the entire world this seems like quite a small amount of migration.
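As a rough sanity check of the arithmetic above, here is a minimal sketch using the comment's own figures (~7.4 million annual interstate moves, 0.4% annual growth, 28 years); the exact total depends slightly on how you count the start and end years:

```python
# Assumed figures from the comment: ~7.4 million interstate moves
# in the US in 2019, growing 0.4% per year, accumulated over 28 years.
annual_moves = 7.4e6
growth = 0.004

# Total "people-moves" if each year's count grows 0.4% on the prior year.
total = sum(annual_moves * (1 + growth) ** year for year in range(28))
print(f"{total / 1e6:.0f} million people-moves")  # roughly 219 million
```

This lands within a million or so of the 218 million quoted, which is the comparison being drawn: projected worldwide climate migration by 2050 is of the same order as routine internal migration in the US alone.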

Comment by Larks on Ask Me Anything about parenting as an Effective Altruist · 2022-09-28T02:14:39.936Z · EA · GW

if you're not excited about it it seems likely to make you miserable.

I'm not sure the data really supports this view. People are pretty good at adapting, and a lot of men in particular seem to become far more excited about their kids after they are born than they expected to be ahead of time. 

As an extreme example, the recent Turnaway study investigated the impact of abortion denial on expectant mothers. While there were other negative consequences, involuntary motherhood does not appear to have made women miserable:

However, women did not suffer lasting mental health consequences, prompting questions about the effects of denial on women's emotions. ... Subsequent positive life events and bonding with the child also led to positive retrospective evaluations of the denial.

If even people in such an extreme situation can adjust then I suspect people who are merely 'not excited' can also.

Comment by Larks on Ask Me Anything about parenting as an Effective Altruist · 2022-09-28T01:55:18.968Z · EA · GW

they are rarely newborns

I think this is slightly overstating things - I'm not sure of the numbers as the statistics I've found online seem inconsistent, but it looks like the majority of private adoptions, and >10% of all adoptions, are newborns.

Comment by Larks on Likelihood of an anti-AI backlash: Results from a preliminary Twitter poll · 2022-09-28T01:38:33.013Z · EA · GW

What behaviours and values were stigmatized by BLM?

Behaviours like police traffic stops, disputing people of colour's lived experiences, or calling the cops in response to disturbances or crimes, and values like support for the police or fear of crime.

Comment by Larks on Democrats Veto, Republicans Coin Flip · 2022-09-27T02:50:04.030Z · EA · GW

I think it would be good if you could provide some summaries to let people know why they might be interested in clicking on the links.

Comment by Larks on What is your confidence in the premises of the Repugnant Conclusion? · 2022-09-20T21:13:13.016Z · EA · GW

It might be interesting to ask for people's credence in the conclusion also!

Comment by Larks on Why AGIs utility can't outweigh humans' utility? · 2022-09-20T18:23:37.224Z · EA · GW

The distinctive feature of utilitarianism is not that it thinks happiness/utility matter, but that it thinks nothing else intrinsically matters. Almost all ethical systems place at least some value on consequences and happiness. And even austere deontologists who didn't would still face the question of whether AIs could have rights that might be impermissible to violate, etc. Agreed, egoism seems less affected.

Comment by Larks on Why AGIs utility can't outweigh humans' utility? · 2022-09-20T13:28:19.328Z · EA · GW

Eliezer has long argued that they could, and we should be very cautious about creating sentient AIs for this reason (in addition to the standard 'they would kill us all' reason).

Also note that this question is not specific to utilitarianism at all, and affects most ethical systems.

Comment by Larks on Public reports are now optional for EA Funds grantees · 2022-09-19T00:16:47.951Z · EA · GW

I don't know how this aspect of law works, but does the Trustees' report actually contain all the grants? Based on the May 2021 LTFF report, I would expect to see e.g. a significant grant made to the Cambridge Computer Science Department (or similar), but unless I am misreading, or it is labelled counter-intuitively, I don't see it.

More importantly, I would expect almost-all of the secretive grants to be made to individuals, which sounds like they are excluded from the reporting anyway.

Comment by Larks on [deleted post] 2022-09-18T05:11:17.457Z

It might be worth writing a little text about what Planecrash is and why people might like to know more about it.

Comment by Larks on Thomas Kwa's Shortform · 2022-09-17T16:43:49.738Z · EA · GW

We also seem to get a fair number of posts that make basically the same point as an earlier article, but the author presumably either didn't read the earlier one or wanted to re-iterate it.

Comment by Larks on [deleted post] 2022-09-15T19:10:55.541Z

His name is only displayed within an image of the cover of books he wrote beforehand, and in exact quotes. 

No, in addition to the 12 places you mention it is also at the bottom of every page:

© 2021 Phil Torres

and elsewhere on the website, in much the same usage as this post:

(Note that I recently changed my name from "Phil Torres" to "Émile P. Torres.")

Comment by Larks on Contra Appiah Defending Defending "Climate Villains" · 2022-09-15T04:05:39.945Z · EA · GW

Great article, thanks for writing it. I think all your points are basically correct. (Internet Archive link to the underlying).

I think he also misses a consideration in the opposite direction. The original letter seems to basically assume that being a polluting company means the company is a net bad thing. But just because they are causing some negative externality doesn't mean that externality outweighs their positive impacts. The author doesn't explain exactly who he is opposed to, but my guess is a proper analysis would suggest that the direct consequences of [working for] these firms are much less bad than the author expects. This is similar to how I think 80k overstated the harms of several 'bad' careers (e.g. this).

Comment by Larks on New Cause Area: Baby Longtermism – Child Rights and Future Children · 2022-09-15T03:21:21.753Z · EA · GW

I don't really understand the theory of action here. You suggest the goal is to save millions of children per year, but these largely die in countries that have ratified the convention. Furthermore, the four policy changes highlighted for the US (farm work, trial as adults, child marriage, corporal punishment) do not seem very closely tied to mortality - why not focus instead on more common killers like pre-term birth? You suggest that helping children more would help 'free' mothers from exclusive care of their children, but the four policies mentioned seem mainly neutral on this point, and some of them seem like they would actually make motherhood more difficult for at least a minority of mothers.

Comment by Larks on [deleted post] 2022-09-13T18:22:37.457Z

That's fair, I guess this objection applies to the post on the EA forum but not to the linked article.

Comment by Larks on [deleted post] 2022-09-13T17:37:12.312Z

But either the identity of the author of a post is important, and we should all disclose it properly, or it's totally secondary to the content, and it's acceptable to hide it, if this is a hindrance.

It seems reasonable to say 'the identity is not important, except people who have been specifically banned for abuse.' Anonymity is desirable, but not to enable evasion of other rules.

Comment by Larks on Increasing Urban Density as a Recommended Cause Area? · 2022-09-11T19:11:00.173Z · EA · GW

Possibly of interest: discussion of the potential benefits of crime reduction for improving density here.

Comment by Larks on 30 second action you could take · 2022-09-08T06:19:42.104Z · EA · GW

Results are out: Will came in a respectable 6th out of 50, beating Elon Musk (8th). Philosopher Kathleen Stock won overall. Here is the one-sentence summary Will got:

Looking to the future, philosopher William MacAskill argues in an upcoming work that we have a moral obligation to look to the future if we are to save ourselves from environmental disaster. (A review is upcoming.)

Comment by Larks on [Cause Exploration Prizes] Crime Reduction · 2022-09-05T00:50:46.908Z · EA · GW

Given the line you quote it's totally reasonable for you to think this, but I think Roodman's summary is actually very misleading. If you look at the tables where the actual calculations take place, you can see negative coefficients for incapacitation for both Rape and Aggravated Assault: he assumes imprisonment increases these crimes during the period of 'incapacitation'! If we ignored crimes among felons, these figures would be positive, and increased incarceration would actually look more desirable on the current margin than his report concludes.

Leaving Roodman's report to one side for the moment, there is an important empirical (non-tautological) insight in the incapacitation benefits of prison:  crimes are very concentrated among a small segment of the population, so you can imprison 1% of people and catch much more than 1% of all crimes. 

Comment by Larks on Valuing lives instrumentally leads to uncomfortable conclusions · 2022-09-04T22:42:22.853Z · EA · GW

It seems a little strange to call this a repugnant conclusion, given that this priority has been shared by the vast majority of people both historically and today. As far as I can see, almost no-one thinks that we should be completely indifferent about which person we save. I don't think really anyone believes it is equally important to save e.g. a terminally ill pedophile as it is to save a happy and healthy doctor who has many friends and helps many people.

Comment by Larks on [Cause Exploration Prizes] Crime Reduction · 2022-09-03T17:42:15.149Z · EA · GW

Thanks very much for writing this great article, I think it's the best EA treatment of the subject I've seen.

Comment by Larks on ProjectLawful.com gives you policy experience · 2022-09-02T18:14:24.583Z · EA · GW

In case anyone thinks Davis is exaggerating, this quote is at the top of the first page:

Comment by Larks on Eliminate or Adjust Strong Upvotes to Improve the Forum · 2022-09-02T18:10:54.840Z · EA · GW

And in turn, I think gut intuition differences mean that in practice, some people are strong upvoting everything that they themselves post/comment, while others are never doing so because it feels impolite.

I'd be surprised if many people are strong-upvoting all their comments. The algorithmic default is to strong upvote your posts, but weak upvote your own comments, and I very rarely see a post with 1 vote above 2 karma. If I had to guess, my median estimate would be that zero frequent commenters strong upvote more than 5% of their comments.

I do think it would not be unreasonable to ban strong-upvoting your own comments.

Comment by Larks on Open EA Global · 2022-09-02T18:04:55.766Z · EA · GW

If nothing else presumably at some point venues have fire code capacity limits, though maybe past conferences have been small enough these haven't been binding.

Comment by Larks on Evaluation of Longtermist Institutional Reform · 2022-09-02T02:31:25.498Z · EA · GW

Superb article, thanks so much for writing this.

If you haven't seen it, you might enjoy my and John Myers' article criticising the UK's Future Generations Bill, which made many of the same arguments you make against a proposed law that featured things like the Posterity Impact Statements.

Comment by Larks on [deleted post] 2022-09-02T02:04:49.376Z

I thought this was a remarkably mean-spirited article. If you want to criticise a cause area, you can just do that directly - you don't need to act like you're warning the community of a traitor in our midst, given it seems like your underlying complaint is strictly on a policy level - I don't think you're alleging Alex believes in YIMBY for nefarious reasons. Even if you strongly disagree with an idea that is associated with literally only one person, you should be able to criticise it without making it personal - e.g. as I did on an unrelated subject here.

You're also being quite unfair by criticising him for giving only a short summary of the reasons for supporting YIMBY. For example, there is a considerable amount of empirical work on the subject, not just theory - e.g. Housing Constraints and Spatial Misallocation:

We quantify the amount of spatial misallocation of labor across US cities and its aggregate costs. Misallocation arises because high productivity cities like New York and the San Francisco Bay Area have adopted stringent restrictions to new housing supply, effectively limiting the number of workers who have access to such high productivity. Using a spatial equilibrium model and data from 220 metropolitan areas we find that these constraints lowered aggregate US growth by 36 percent from 1964 to 2009. 

Finally, some of your assertions seem frankly quite bizarre, like this:

He must be assuming that people are paying high rents rather than sharing rooms or camping on the streets, because neither sharing rooms nor camping free-of-charge increases poverty or lowers economic growth.

Restraining supply causes both high prices (rents) and lower quantities. If there is not enough supply, some people have to live elsewhere - that is the rationing function of prices. This does harm economic growth if it pushes them further away from high productivity jobs. If someone would be willing to pay X to live somewhere, and the physical costs of building a nice house are Y<X, then we are missing out on X-Y of economic value. People living elsewhere, or sharing rooms, or camping (!) are not the best economic outcome!

I do find myself in agreement with your penultimate sentence however:

Thus, it is not necessary to criticize Mr. Berger. 

Comment by Larks on Toby Ord’s The Scourge, Reviewed · 2022-08-31T17:20:29.091Z · EA · GW

The fact that the founder of EA wrote the article is largely irrelevant to ColdButtonIssues's argument; he just thinks an equivalent argument could be used against EA.

Comment by Larks on Toby Ord’s The Scourge, Reviewed · 2022-08-31T17:16:58.645Z · EA · GW

The challenge of the Scourge is that a common bioconservative belief ("The embryo has the same moral status as an adult human") may entail another which seems facially highly implausible ("Therefore, spontaneous abortion is one of the most serious problems facing humanity, and we must do our utmost to investigate ways of preventing this death—even if this is to the detriment of other pressing issues"). Many (most?) find the latter bizarre, so if they believed it was entailed by the bioconservative claim would infer this claim must be false. 

I don't really see how this helps, because it seems a similar thing applies to EAs, regardless of whether the issue is hypocrisy or a modus ponens / modus tollens.  We use common moral beliefs (future people have value) to entail others which seem facially highly implausible (we should spend vast sums of money on strange projects, even if this is to the detriment of other pressing issues). Many (most?) find the latter bizarre, so if they believed it was entailed by the future-people-have-value claim would infer this claim must be false. In both cases the argument is using common 'near' moral views to deduce sweeping global moral imperatives.

Comment by Larks on Effective altruism is no longer the right name for the movement · 2022-08-31T15:13:38.248Z · EA · GW

there are good people doing good work in all the segments.

Who do you think is in the 'Longterm+EA' and 'Xrisk+EA' buckets? As far as I know, even though they may have produced some pieces about those intersections, both Carl and Holden are in the center, and I doubt Will denies that humanity could go extinct or lose its potential either.

Comment by Larks on Hobbit Manifesto · 2022-08-30T20:38:54.229Z · EA · GW

Presumably making people smaller would mean smaller brains. Given that communication inside the brain is easy, and communication between people is difficult, a larger population of people with smaller brains might be much less able to handle cognitive problems.

Comment by Larks on Larks's Shortform · 2022-08-30T13:43:42.992Z · EA · GW

After having written an annual review of AI safety organisations for six years, I intend to stop this year. I'm sharing this in case someone else wanted to in my stead.

Reasons

  • It is very time consuming and I am busy.
  • I have a lot of conflicts of interests now.
  • The space is much better funded by large donors than when I started. As a small donor, it seems like you either donate to:
    • A large org that OP/FTX/etc. support, in which case funging is ~ total and you can probably just support any.
    • A large org that OP/FTX/etc. reject, in which case there is a high chance you are wrong.
    • A small org OP/FTX/etc. haven't heard of, in which case I probably can't help you either.
  • Part of my motivation was to ensure I stayed involved in the community but this is not under threat now.

Hopefully it was helpful to people over the years. If you have any questions feel free to reach out.

Comment by Larks on Effective altruism's billionaires aren't taxed enough. But they're trying. · 2022-08-25T01:49:45.609Z · EA · GW

Effective altruism's billionaires aren't taxed enough

I think this is a misleading title. The tl;dr you posted, or indeed the linked article itself, does not really argue that taxes are too low. At times it implicitly assumes it, or vibes with it, but there's nothing in the article that would persuade someone who didn't already believe it. In general I think linkposts should use the same title as the linked article, or else a title that describes its contents faithfully, rather than adding additional editorialising. Both the original title or subtitle would be better.

Comment by Larks on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-23T14:49:24.813Z · EA · GW

I was intuitively thinking 5% tops

I'm surprised you'd have such a low threshold - I would have thought noise, misreading the question, trolling, misclicks etc. alone would push above that level.

Comment by Larks on So, I Want to Be a "Thinkfluencer" · 2022-08-15T18:39:39.075Z · EA · GW

As I understand it, posts are frontpage by default unless you or a mod decide otherwise.

Comment by Larks on How to Talk to Lefties in Your Intro Fellowship · 2022-08-14T02:53:29.662Z · EA · GW

Thanks for writing this. I think you do an excellent job on the rhetoric issues like language and framing. These seem like good methods for building coalitions around some specific policy issue, or deflecting criticism. 

But I'm not sure they're good for actually bringing people into the movement, because at times they seem a little disingenuous. EA opposition to factory farming has nothing to do with indigenous values - EAs are opposed to it taking place in any country, regardless of how nicely or otherwise people historically treated animals there. Similarly EA aid to Africa is because we think it is a good way of helping people, not because we think any particular group was a net winner or loser from the slave trade. If we're going to try to recruit someone, I feel like we should make it clear that EA is not just a flavour of woke, and explicitly contradicts it at times.

As well as seeming a bit dishonest, I think it could have negative consequences to recruit people in this way. We generally don't just want people who have been led to agree on some specific policy conclusions, but rather those who are on board with the whole way of thinking. There has been a lot of press written about the damages to workplace cohesion, productivity and mission focus from hiring SJWs, and if even the Bernie Sanders campaign is trying to "Stop hiring activists" it could probably be significantly worse if your employees had been hired expecting a very woke environment and were then disappointed. 

Comment by Larks on Why aren't EAs talking about the COVID lab leak hypothesis more? · 2022-08-13T22:06:50.986Z · EA · GW

Is there any particular discussion you think should be happening? My impression is EAs were concerned about lab leaks before, thought it was plausible but far from clear this was a lab leak, and continue to want more security for BSL labs in the future.

Comment by Larks on Cause Area Proposal: Paperwork Reduction · 2022-08-08T22:54:27.404Z · EA · GW

Thanks for writing this interesting post on a novel cause area I've never seen presented in this way.

Another aspect perhaps worth mentioning is that the modern world seems to require an increasingly high minimum IQ/conscientiousness level to navigate successfully. Reducing paperwork burdens, which can be difficult for some people to fill out, could help with this.

Comment by Larks on Most Ivy-smart students aren't at Ivy-tier schools · 2022-08-07T17:30:46.784Z · EA · GW

Thanks very much for sharing this, and in particular the fascinating charts. I was pretty surprised at how large a fraction of successful applicants were, on these axes, strictly dominated by other rejected applicants, and how large the overlap was in the box and whiskers plots. Sometimes colleges argue they can't just look at SAT because they have more applicants with perfect SATs than they have spaces, but that doesn't explain why almost all their successful applicants (from this school) would have sub-perfect SATs.

I have two thoughts that could perhaps change the conclusion:

It's well known that colleges care a lot about extracurriculars. If these really are a good sign of flexibility, work ethic, initiative and so on, perhaps we should care about them also. If so, colleges might be correctly adjusting, the low correlations we observe in the charts are just because we can't directly observe those factors, and high quality people are more concentrated in top schools than this data would suggest.

Additionally, SCOTUS is due to hear Students For Fair Admissions vs Harvard later this year, and Metaculus currently gives them a 75% chance to successfully get racial discrimination in university admissions found unlawful. If so the correlation with SAT/GPA might improve a lot after this year, so the phenomenon you're highlighting might be a relatively short-lived one.

Comment by Larks on [link post] The Case for Longtermism in The New York Times · 2022-08-06T00:39:12.394Z · EA · GW

Nice article, thanks for linking (and Will for writing).

Unfortunately some people I know thought this section was a little misleading, as they felt it was insinuating that Xrisk from nuclear was over 20% - a figure I think few EAs would endorse. Perhaps it was judged to be a low-cost concession to the prejudices of NYT readers?

We still live under the shadow of 9,000 nuclear warheads, each far more powerful than the bombs dropped on Hiroshima and Nagasaki. Some experts put the chances of a third world war by 2070 at over 20 percent. An all-out nuclear war could cause the collapse of civilization, and we might never recover.

Comment by Larks on Moral Progress Reading List · 2022-08-03T04:34:36.621Z · EA · GW

Are you aware of anyone in EA who has studied the problem of moral regress?

Somewhat related: Gwern on the Narrowing Circle.

Comment by Larks on The danger of nuclear war is greater than it has ever been. Why donating to and supporting Back from the Brink is an effective response to this threat · 2022-08-02T18:42:28.060Z · EA · GW

The danger of nuclear war is greater than it has ever been.

What is your argument for the risk now being higher than during the Cuban Missile Crisis, or similar incidents during the Cold War, or indeed than earlier this year?

Comment by Larks on Three common mistakes when naming an org or project · 2022-07-26T22:23:17.782Z · EA · GW

Preserve option value by giving yourself a vague name

Seems quite possible that your donors want you to do the project you said you'd do, and not some other random project. If this is the case, project lock-in through name choice could be a feature rather than a bug.

Comment by Larks on Hiring Programmers in Academia · 2022-07-25T03:26:19.663Z · EA · GW

Sounds like part of the purpose of BERI?

Comment by Larks on GLO, a UBI-generating stablecoin that donates all yields to GiveDirectly · 2022-07-20T01:47:38.112Z · EA · GW

You could just invest in 3m Treasury bills directly, or invest in a conventional fund that buys bills (or indeed whatever other investments you thought were most appropriate given your circumstances), and then donate the interest to charity.

Comment by Larks on Reducing nightmares as a cause area · 2022-07-18T23:45:59.083Z · EA · GW

Thanks for sharing this very original idea! I'm somewhat sceptical of the intervention you mention but it definitely seems like a large and neglected issue.

Comment by Larks on Marriage, the Giving What We Can Pledge, and the damage caused by vague public commitments · 2022-07-12T18:05:22.131Z · EA · GW

The only cost of breaking the GWWC commitment is that people who saw you make that commitment might lose a bit of trust in you. I think this is a great balance

This seems like very little cost at all. Charitable donations and income are, by default, private, so no-one need know you stopped, and even when people are public about leaving the community, the main reaction I have seen is one of best wishes and urging self-care. I'm not sure I've ever seen any EA leaders write a harsh word about people for leaving.

Comment by Larks on One Million Missing Children · 2022-07-11T18:30:53.387Z · EA · GW

Thanks for writing this. For anyone with good ideas in the area, it's worth noting that addressing demographic decline is listed as an area the FTX Foundation is interested in funding.

Comment by Larks on Doom Circles · 2022-07-08T16:18:34.096Z · EA · GW

My gut reaction is this sounds pretty unpleasant. Perhaps I am misunderstanding the sort of feedback you'd expect to share in such a situation; could you perhaps give some examples?

Comment by Larks on Is there any research on internalizing x-risks or global catastrophic risks into economies? · 2022-07-06T20:01:15.023Z · EA · GW

Owen has done some related work here and here on pricing research externalities.

Comment by Larks on What actions most effective if you care about reproductive rights in America? · 2022-06-27T02:44:25.591Z · EA · GW

why apply it to abortion and not Ukraine?

I agree it should apply to both; if your question is why didn't I object to the previous post I don't have any specific defense other than having no recollection of seeing the Ukraine post at the time, though maybe I saw it and forgot.