This seems like a remarkably small number to me. In 2019 around 7.4 million people moved state within the USA alone (source); over the next 28 years, with 0.4% annual population growth, that is around 218 million people-moves. Spread out over the entire world this seems like quite a small amount of migration.
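As a quick back-of-the-envelope check of that figure (assuming the 7.4 million annual moves compound at 0.4% per year for 28 years):

```python
# Back-of-the-envelope check: 7.4M annual interstate moves,
# growing 0.4% per year, summed over 28 years.
moves_first_year = 7.4e6
growth = 0.004
years = 28

total = sum(moves_first_year * (1 + growth) ** k for k in range(years))
print(f"{total / 1e6:.0f} million people-moves")  # prints roughly 219 million, in line with the ~218M figure
```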
if you're not excited about it, it seems likely to make you miserable.
I'm not sure the data really supports this view. People are pretty good at adapting, and a lot of men in particular seem to become far more excited about their kids after they are born than they expected to be ahead of time.
As an extreme example, the recent Turnaway study investigated the impact of abortion denial on expectant mothers. While there were other negative consequences, involuntary motherhood does not appear to have made women miserable:
However, women did not suffer lasting mental health consequences, prompting questions about the effects of denial on women's emotions. ... Subsequent positive life events and bonding with the child also led to positive retrospective evaluations of the denial.
If even people in such an extreme situation can adjust, then I suspect people who are merely 'not excited' can also.
I think this is slightly overstating things - I'm not sure of the numbers, as the statistics I've found online seem inconsistent, but it looks like the majority of private adoptions, and >10% of all adoptions, are newborns.
What behaviours and values were stigmatized by BLM?
Behaviours like police traffic stops, disputing people of colour's lived experiences, or calling the cops in response to disturbances or crimes, and values like support for the police or fear of crime.
The distinctive feature of utilitarianism is not that it thinks happiness/utility matter, but that it thinks nothing else intrinsically matters. Almost all ethical systems apply at least some value to consequences and happiness. And even austere deontologists who didn't would still face the question of whether AIs could have rights that might be impermissible to violate, etc. Agreed that egoism seems less affected.
I don't know how this aspect of law works, but does the Trustees' report actually contain all the grants? Based on the May 2021 LTFF report, I would expect to see e.g. a significant grant made to the Cambridge Computer Science Department (or similar), but unless I am misreading, or it is labelled counter-intuitively, I don't see it.
More importantly, I would expect almost-all of the secretive grants to be made to individuals, which sounds like they are excluded from the reporting anyway.
Comment by Larks on [deleted post]
It might be worth writing a little text about what Planecrash is and why people might like to know more about it.
I think he also misses a consideration in the opposite direction. The original letter seems to basically assume that being a polluting company means the company is a net bad thing. But just because they are causing some negative externality doesn't mean that externality outweighs their positive impacts. The author doesn't explain exactly who he is opposed to, but my guess is a proper analysis would suggest that the direct consequences of [working for] these firms are much less bad than the author expects. This is similar to how I think 80k overstated the harms of several 'bad' careers (e.g. this).
I don't really understand the theory of action here. You suggest the goal is to save millions of children per year, but these children largely die in countries that have ratified the convention. Furthermore, the four policy changes highlighted for the US (farm work, trial as adults, child marriage, corporal punishment) do not seem very closely tied to mortality - why not focus instead on more common killers like pre-term birth? You suggest that helping children more would help 'free' mothers from exclusive care of their children, but the four policies mentioned seem mainly neutral on this point, and some of them seem like they would actually make motherhood more difficult for at least a minority of mothers.
Comment by Larks on [deleted post]
That's fair, I guess this objection applies to the post on the EA forum but not to the linked article.
Comment by Larks on [deleted post]
But either the identity of the author of a post is important, and we should all disclose it properly, or it's totally secondary to the content, and it's acceptable to hide it, if this is a hindrance.
It seems reasonable to say 'the identity is not important, except people who have been specifically banned for abuse.' Anonymity is desirable, but not to enable evasion of other rules.
Results are out: Will came in a respectable 6th out of 50, beating Elon Musk (8th). Philosopher Kathleen Stock won overall. Here is the one-sentence summary Will got:
Looking to the future, philosopher William MacAskill argues in an upcoming work that we have a moral obligation to look to the future if we are to save ourselves from environmental disaster. (A review is upcoming.)
Given the line you quote it's totally reasonable for you to think this, but I think Roodman's summary is actually very misleading. If you look at the tables where the actual calculations take place, you can see negative coefficients for incapacitation for both Rape and Aggravated Assault: he assumes imprisonment increases these crimes during the period of 'incapacitation'! If we ignored crimes among felons, these figures would be positive, and increased incarceration would actually look more desirable on the current margin than his report concludes:
Leaving Roodman's report to one side for the moment, there is an important empirical (non-tautological) insight in the incapacitation benefits of prison: crimes are very concentrated among a small segment of the population, so you can imprison 1% of people and catch much more than 1% of all crimes.
It seems a little strange to call this a repugnant conclusion, given that this priority has been shared by the vast majority of people both historically and today. As far as I can see, almost no-one thinks that we should be completely indifferent about which person we save. I don't think really anyone believes it is equally important to save e.g. a terminally ill pedophile as it is to save a happy and healthy doctor who has many friends and helps many people.
And in turn, I think gut intuition differences mean that in practice, some people are strong upvoting everything that they themselves post/comment, while others are never doing so because it feels impolite.
I'd be surprised if many people are strong-upvoting all their comments. The algorithmic default is to strong upvote your posts, but weak upvote your own comments, and I very rarely see a comment with 1 vote above 2 karma. If I had to guess, my median estimate would be that zero frequent commenters strong upvote more than 5% of their comments.
I do think it would not be unreasonable to ban strong-upvoting your own comments.
Comment by Larks on [deleted post]
I thought this was a remarkably mean-spirited article. If you want to criticise a cause area, you can just do that directly - you don't need to act like you're warning the community of a traitor in our midst, given it seems like your underlying complaint is strictly on a policy level - I don't think you're alleging Alex believes in YIMBY for nefarious reasons. Even if you strongly disagree with an idea that is associated with literally only one person, you should be able to criticise it without making it personal - e.g. as I did on an unrelated subject here.
You're also being quite unfair by criticising him for giving only a short summary of the reasons for supporting YIMBY. For example, there is a considerable amount of empirical work on the subject, not just theory - e.g. Housing Constraints and Spatial Misallocation:
We quantify the amount of spatial misallocation of labor across US cities and its aggregate costs. Misallocation arises because high productivity cities like New York and the San Francisco Bay Area have adopted stringent restrictions to new housing supply, effectively limiting the number of workers who have access to such high productivity. Using a spatial equilibrium model and data from 220 metropolitan areas we find that these constraints lowered aggregate US growth by 36 percent from 1964 to 2009.
Finally, some of your assertions seem frankly quite bizarre, like this:
He must be assuming that people are paying high rents rather than sharing rooms or camping on the streets, because neither sharing rooms nor camping free-of-charge increases poverty or lowers economic growth.
Restraining supply causes both high prices (rents) and lower quantities. If there is not enough supply, some people have to live elsewhere - that is the rationing function of prices. This does harm economic growth if it pushes them further away from high productivity jobs. If someone would be willing to pay X to live somewhere, and the physical costs of building a nice house are Y<X, then we are missing out on X-Y of economic value. People living elsewhere, or sharing rooms, or camping (!) are not the best economic outcome!
I do find myself in agreement with your penultimate sentence however:
Thus, it is not necessary to criticize Mr. Berger.
The challenge of the Scourge is that a common bioconservative belief ("The embryo has the same moral status as an adult human") may entail another which seems facially highly implausible ("Therefore, spontaneous abortion is one of the most serious problems facing humanity, and we must do our utmost to investigate ways of preventing this death—even if this is to the detriment of other pressing issues"). Many (most?) find the latter bizarre, so if they believed it was entailed by the bioconservative claim would infer this claim must be false.
I don't really see how this helps, because it seems a similar thing applies to EAs, regardless of whether the issue is hypocrisy or a modus ponens / modus tollens. We use common moral beliefs (future people have value) to entail others which seem facially highly implausible (we should spend vast sums of money on strange projects, even if this is to the detriment of other pressing issues). Many (most?) find the latter bizarre, so if they believed it was entailed by the future-people-have-value claim would infer this claim must be false. In both cases the argument is using common 'near' moral views to deduce sweeping global moral imperatives.
there are good people doing good work in all the segments.
Who do you think is in the 'Longterm+EA' and 'Xrisk+EA' buckets? As far as I know, even though they may have produced some pieces about those intersections, both Carl and Holden are in the center, and I doubt Will denies that humanity could go extinct or lose its potential either.
Presumably making people smaller would mean smaller brains. Given that communication inside the brain is easy, and communication between people is difficult, a larger population of people with smaller brains might be much less able to handle cognitive problems.
I think this is a misleading title. The tl;dr you posted, or indeed the linked article itself, does not really argue that taxes are too low. At times it implicitly assumes it, or vibes with it, but there's nothing in the article that would persuade someone who didn't already believe it. In general I think linkposts should use the same title as the linked article, or else a title that describes its contents faithfully, rather than adding additional editorialising. Either the original title or the subtitle would be better.
Thanks for writing this. I think you do an excellent job on the rhetoric issues like language and framing. These seem like good methods for building coalitions around some specific policy issue, or deflecting criticism.
But I'm not sure they're good for actually bringing people into the movement, because at times they seem a little disingenuous. EA opposition to factory farming has nothing to do with indigenous values - EAs are opposed to it taking place in any country, regardless of how nicely or otherwise people historically treated animals there. Similarly EA aid to Africa is because we think it is a good way of helping people, not because we think any particular group was a net winner or loser from the slave trade. If we're going to try to recruit someone, I feel like we should make it clear that EA is not just a flavour of woke, and explicitly contradicts it at times.
As well as seeming a bit dishonest, I think it could have negative consequences to recruit people in this way. We generally don't just want people who have been led to agree on some specific policy conclusions, but rather those who are on board with the whole way of thinking. There has been a lot of press written about the damage to workplace cohesion, productivity and mission focus from hiring SJWs, and if even the Bernie Sanders campaign is trying to "Stop hiring activists", it could probably be significantly worse if your employees had been hired expecting a very woke environment and were then disappointed.
Is there any particular discussion you think should be happening? My impression is EAs were concerned about lab leaks before, thought it was plausible but far from clear this was a lab leak, and continue to want more security for BSL labs in the future.
Thanks for writing this interesting post on a novel cause area I've never seen presented in this way.
Another aspect perhaps worth mentioning is that the modern world seems to require an increasingly high minimum IQ/conscientiousness level to navigate successfully. Reducing paperwork burdens, which can be difficult for some people to handle, could help with this.
Thanks very much for sharing this, and in particular the fascinating charts. I was pretty surprised at how large a fraction of successful applicants were, on these axes, strictly dominated by other, rejected applicants, and how large the overlap was in the box and whiskers plots. Sometimes colleges argue they can't just look at SAT because they have more applicants with perfect SATs than they have spaces, but that doesn't explain why almost all your successful applicants (from this school) would have sub-perfect SATs.
I do have two thoughts that could perhaps change the conclusion:
It's well known that colleges care a lot about extracurriculars. If these really are a good sign of flexibility, work ethic, initiative and so on, perhaps we should care about them also. If so, colleges might be correctly adjusting, the low correlations we observe in the charts are just because we can't directly observe those factors, and high quality people are more concentrated in top schools than this data would suggest.
Additionally, SCOTUS is due to hear Students For Fair Admissions vs Harvard later this year, and Metaculus currently gives them a 75% chance of successfully getting racial discrimination in university admissions found unlawful. If so, the correlation with SAT/GPA might improve a lot after this year, so the phenomenon you're highlighting might be a relatively short-lived one.
Nice article, thanks for linking (and Will for writing).
Unfortunately some people I know thought this section was a little misleading, as they felt it was insinuating that Xrisk from nuclear was over 20% - a figure I think few EAs would endorse. Perhaps it was judged to be a low-cost concession to the prejudices of NYT readers?
We still live under the shadow of 9,000 nuclear warheads, each far more powerful than the bombs dropped on Hiroshima and Nagasaki. Some experts put the chances of a third world war by 2070 at over 20 percent. An all-out nuclear war could cause the collapse of civilization, and we might never recover.
Preserve option value by giving yourself a vague name
Seems quite possible that your donors want you to do the project you said you'd do, and not some other random project. If this is the case project lock-in through name choice could be a feature rather than a bug.
You could just invest in 3-month Treasury bills directly, or invest in a conventional fund that buys bills (or indeed whatever other investments you thought were most appropriate given your circumstances), and then donate the interest to charity.
The only cost of breaking the GWWC commitment is that people who saw you make that commitment might lose a bit of trust in you. I think this is a great balance.
This seems like very little cost at all. Charitable donations and income are, by default, private, so no-one need know you stopped, and even when people are public about leaving the community, the main reaction I have seen is one of best-wishes and urging self-care. I'm not sure I've ever seen any EA leaders write a harsh word about people for leaving.
I agree it should apply to both; if your question is why I didn't object to the previous post, I don't have any specific defense other than having no recollection of seeing the Ukraine post at the time, though maybe I saw it and forgot.