Posts

Apply to EA Funds now 2020-09-15T19:23:38.668Z · score: 49 (20 votes)
The EA Meta Fund is now the EA Infrastructure Fund 2020-08-20T12:46:31.556Z · score: 48 (22 votes)
EAF/FRI are now the Center on Long-Term Risk (CLR) 2020-03-06T16:40:10.190Z · score: 83 (44 votes)
EAF’s ballot initiative doubled Zurich’s development aid 2020-01-13T11:32:35.397Z · score: 263 (117 votes)
Effective Altruism Foundation: Plans for 2020 2019-12-23T11:51:56.315Z · score: 81 (36 votes)
Effective Altruism Foundation: Plans for 2019 2018-12-04T16:41:45.603Z · score: 52 (19 votes)
Effective Altruism Foundation update: Plans for 2018 and room for more funding 2017-12-15T15:09:17.168Z · score: 25 (25 votes)
Fundraiser: Political initiative raising an expected USD 30 million for effective charities 2016-09-13T11:25:17.151Z · score: 34 (22 votes)
Political initiative: Fundamental rights for primates 2016-08-04T19:35:28.201Z · score: 12 (14 votes)

Comments

Comment by jonas-vollmer on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-27T18:22:09.574Z · score: 2 (1 votes) · EA · GW

Thanks, I look forward to your analysis of uncorrelated investments! In particular, I'll be keen to see to what degree they rely on the same assumptions as value/momentum strategies, or whether there are opportunities independent of those assumptions.

Comment by jonas-vollmer on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-27T10:39:05.734Z · score: 2 (1 votes) · EA · GW

Thanks, very helpful.

If we set this up well, we might get $100 million in investments, and the value added would be ~1% excess certainty equivalent rate, i.e., a certainty equivalent of $1 million per year.

If setting up such a DAF takes a year of labor, maintaining it takes 0.25 FTE, and labor has an opportunity cost of $3 million per year, it would take 3/(1-3*0.25) = 12 years to break even (with a plausible range from two years to 'never').

Over a period of ten years, it would return around 10/(1+10*0.25) = $3 million per person-year (with a plausible range from $500k to $5 million).
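
To make the arithmetic explicit, here's a minimal sketch of the two calculations above (all figures in millions of dollars; the inputs are the rough estimates from this comment, not precise figures):

```python
value_per_year = 1.0    # ~1% certainty-equivalent gain on $100M invested
labor_cost = 3.0        # opportunity cost of one person-year
setup_years = 1.0       # one person-year to set up the DAF
maintenance_fte = 0.25  # ongoing staffing

# Years to break even: setup cost divided by net value created per year.
net_value_per_year = value_per_year - maintenance_fte * labor_cost
print(setup_years * labor_cost / net_value_per_year)  # 12.0

# Value per person-year over a ten-year horizon.
years = 10
person_years = setup_years + maintenance_fte * years
print(value_per_year * years / person_years)  # ~2.9, i.e., roughly $3M
```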

That seems pretty good, but perhaps slightly less valuable than other things EA Funds could be doing. 

I'd be keen to hear if you think this seems like a reasonable overall takeaway.

Comment by jonas-vollmer on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-27T09:42:47.378Z · score: 2 (1 votes) · EA · GW

Oh, thanks, I was under the mistaken impression that the Samuelson share formula used the geometric mean!

Comment by jonas-vollmer on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-25T12:25:49.306Z · score: 2 (1 votes) · EA · GW

(I don't understand how you arrived at 2.45:1 optimal leverage with log utility. I get 5%/(.16^2*1)= 1.95, and the Samuelson formula in leverage.py seems to be the same. Same for the other values.)
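
For reference, a minimal sketch of the calculation I'm doing (my own code, not taken from leverage.py):

```python
def samuelson_share(risk_premium, volatility, risk_aversion):
    """Samuelson/Merton optimal fraction of wealth in the risky asset."""
    return risk_premium / (risk_aversion * volatility ** 2)

# Log utility implies relative risk aversion of 1; using a 5% risk premium
# and 16% volatility as in the comment above.
print(samuelson_share(0.05, 0.16, 1))  # ~1.95, i.e., roughly 2:1 leverage
```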

Comment by jonas-vollmer on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-25T12:20:15.863Z · score: 2 (1 votes) · EA · GW

Thanks! The above calculation compares an un-leveraged portfolio to a leveraged one, but at least under log utility and assuming a low risk of value drift, the relevant comparison is probably between a leveraged (tax-free) DAF and a leveraged taxable account? Presumably, that difference would be lower than 2.7%.

Also, do you happen to know how effortful and feasible tax loss harvesting might be for leveraged portfolios in taxable accounts?

Comment by jonas-vollmer on Donor-Advised Funds vs. Taxable Accounts for Patient Donors · 2020-10-23T17:45:33.047Z · score: 3 (2 votes) · EA · GW

Thanks, very helpful analysis! Any thoughts on how much value it would create if someone set up a DAF for EAs that makes leveraged investments? (This is on my longlist of things that EA Funds could try to do.)

Comment by jonas-vollmer on The Risk of Concentrating Wealth in a Single Asset · 2020-10-19T17:57:32.903Z · score: 3 (2 votes) · EA · GW

"It might make sense if a bunch of individual EAs buy real estate such that the overall portfolio is well-diversified."

Yeah, this is what I was trying to say. Perhaps I was unclear.

I don't expect this to happen in practice, because EAs are geographically concentrated in a small number of cities, so if people own investment properties in the cities where they live, the overall EA real estate portfolio will be too concentrated in those cities.

EAs can diversify the overall EA real estate portfolio by thinking about where other EAs are likely to buy houses. E.g., they should avoid buying a house if they moved to an EA hub city, but they should buy (or avoid selling) a house in their hometown, especially if they come from a place that doesn't have a lot of EAs.

In addition, buying houses in EA hub cities might serve as a hedge against rising living costs in those key locations, such that overweighting them could actually be more beneficial than harmful.

Anyway, all of this is a bit of a nitpick; I generally agree with the overall direction.

Comment by jonas-vollmer on The Risk of Concentrating Wealth in a Single Asset · 2020-10-19T10:17:06.703Z · score: 5 (3 votes) · EA · GW

Some tentative thoughts on real estate (I haven't thought about this much):

  • The real estate market is presumably less efficient than the stock market, so it's easier for careful individuals to make market-beating investments. (It also makes it easier to make investments that underperform the market, but I'd expect EAs to be better at avoiding that than the average home buyer, though I'm not entirely sure.)
  • My guess would be that the real estate market is overall less investable than global stock markets (REITs only cover a limited fraction of the overall real estate market), such that more altruists owning houses would lead to a better approximation of a global market portfolio.

If these points were true, that would mean that the introductory example of Carol renting out a home could actually be a good idea (if Carol is an altruist).

Comment by jonas-vollmer on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T08:55:40.072Z · score: 27 (11 votes) · EA · GW

+1. In the German-speaking world, activists have tried to physically block access to venues where Singer's talks were to be hosted, and Singer was even physically assaulted on one occasion (though that was a couple of decades ago). Some venues have cancelled him, and there are often protests (by disability rights activists, religious people, etc.) where he speaks.

Comment by jonas-vollmer on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T08:49:15.142Z · score: -4 (5 votes) · EA · GW

(Retracted.)

Comment by jonas-vollmer on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-09T18:58:53.508Z · score: 7 (6 votes) · EA · GW

  • Meret Schneider, who has been interested in EA and animal welfare and works at EAF's spin-off Sentience Politics, is a Swiss MP.

Comment by jonas-vollmer on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-09T18:54:26.174Z · score: 12 (6 votes) · EA · GW

"really great or fairly bad" sounds like you're ruling out "really bad", but I think the worst outcomes are produced by combining very good with very bad leadership practices. If you're bad at everything, you're unlikely to have much of a negative impact because nobody will pay attention to you. So I would have said "really great or really bad". I agree with you otherwise.

Comment by jonas-vollmer on Apply to EA Funds now · 2020-10-04T10:05:00.329Z · score: 4 (2 votes) · EA · GW

Sorry for the slow response. I was on holiday, so Aaron edited the OP to clarify. It says PST but Anywhere on Earth is also fine.

Comment by jonas-vollmer on Are we living at the most influential time in history? · 2020-09-25T12:28:44.772Z · score: 18 (7 votes) · EA · GW

Now it's officially on the BBC: https://www.bbc.com/future/article/20200923-the-hinge-of-history-long-termism-and-existential-risk

"But here’s another adjective for our times that you may not have heard before: ‘hingey’."

Although it also says:

(though MacAskill now prefers the term “influentialness”, as it sounds less flippant)

Comment by jonas-vollmer on Apply to EA Funds now · 2020-09-23T08:48:46.520Z · score: 8 (4 votes) · EA · GW

Update: The LTFF application deadline has been extended through Friday, October 2nd.

Comment by jonas-vollmer on Long-Term Future Fund: September 2020 grants · 2020-09-18T13:13:47.416Z · score: 40 (11 votes) · EA · GW

Thanks for the critique!

In addition to the four videos on his own channel, Robert Miles published three videos on Computerphile during the last 12 months, and he also publishes the Alignment Newsletter podcast. So there's at least some additional output, and probably more I don't know of.

"you could find someone with a similar talent level (explaining fairly basic concepts)"

I personally think this would be very difficult. Robert Miles' content seems to have been received positively by the AI safety community, but science communication in general is notoriously difficult, and I'd expect most YouTubers to routinely distort and oversimplify important concepts, such that I'd worry such content would do more harm than good. In contrast, Robert Miles seems sufficiently nuanced.

(Disclosure: I work at EA Funds.)

Comment by jonas-vollmer on Long-Term Future Fund: September 2020 grants · 2020-09-18T12:51:42.252Z · score: 5 (3 votes) · EA · GW

Glad to hear you like it!

Comment by jonas-vollmer on EAF’s ballot initiative doubled Zurich’s development aid · 2020-09-18T09:20:55.227Z · score: 8 (2 votes) · EA · GW

I shared an update on the basic rights for primates initiative here. (TL;DR: The Swiss supreme court has ruled that the initiative is valid, so the vote will finally happen. The ruling has created a lot of national and international headlines.)

Comment by jonas-vollmer on Political initiative: Fundamental rights for primates · 2020-09-18T09:17:30.892Z · score: 11 (7 votes) · EA · GW

Update: The initiative was initially declared invalid, but our supporters decided to litigate. This Wednesday, the Swiss federal court ruled that the initiative is valid! I.e., the court has confirmed that primates could in principle be holders of basic rights, and the city of Basel will have a ballot vote on whether primates deserve basic rights. This made front page headlines in all major Swiss newspapers. There’s a decent chance of it passing, and if it does, that would make Basel one of the first places in the world where nonhuman animals hold basic rights.

Some media coverage in English:

  • https://www.swissinfo.ch/eng/voters-to-decide-on-basic-rights-of-primates/46037828
  • https://www.dailymail.co.uk/news/article-8744207/Switzerland-region-vote-giving-primates-fundamental-constitutional-rights.html

Some coverage in German:

  • https://www.srf.ch/news/schweiz/grundrechte-fuer-affen-bundesgericht-erklaert-basler-primaten-initiative-fuer-zulaessig
  • https://www.20min.ch/story/haben-affen-in-basel-bald-menschenrechte-581046338538
  • https://www.nzz.ch/schweiz/primaten-initiative-ist-laut-bundesgericht-zulaessig-ld.1576944

Comment by jonas-vollmer on Apply to EA Funds now · 2020-09-17T07:23:27.473Z · score: 2 (1 votes) · EA · GW

Fixed, thanks.

Comment by jonas-vollmer on Giving and receiving feedback · 2020-09-10T10:02:18.625Z · score: 15 (7 votes) · EA · GW

The Google Docs commenting feature in particular invites micro-feedback rather than general high-level points. When asking for feedback on a Google Doc, I usually include a template like the following at the beginning (I don't always use all of it):

Epistemic status: …

Giving feedback on this document

I’d greatly appreciate critical feedback, especially about X. Thanks for taking the time, … Your feedback would be most appreciated about:

  • Do you think this is broadly on the right track? Did I overlook important points? Do you think my line of argument makes sense?
  • Are the structure and form appropriate? Should it be shorter or longer?
  • In which areas do you think this document needs the most further work?

Please give feedback by DATE.

Comment by jonas-vollmer on Giving and receiving feedback · 2020-09-10T09:58:45.169Z · score: 14 (6 votes) · EA · GW

Someone recently asked me how to get better at receiving feedback. My response:

I'm not sure I have a lot of very insightful stuff to say, just the "obvious advice":

  • Right before receiving the feedback, consciously adopt a constructive mindset. (I usually do something like this: "What comes might hurt, but it won't be about me as a person in general, just about my behavior, which I can change; I'll try to breathe and relax if the feedback produces this tightening feeling.")
  • If I think that people are being overly negative, I force them to be more constructive by asking questions like "What would you suggest?", "Interesting. Do you have ideas for how to address this?", "I agree this is a concern, but I'm not sure how to solve it, do you have a suggestion?"
  • One thing that usually helps me is asking people whether my work is on the right track overall; the answer is usually yes, which makes it easier to take critical feedback. Many people forget to give high-level feedback, but it's usually quite easy to prompt them to do so.
  • If something feels threatening, asking others who I know value my contributions for their take usually helps me put things into perspective. E.g., when someone was negative about me, I asked some of my former colleagues whether they think I can do my new job well, and their take was something like "yeah you probably don't have the type of skill that this person mentions, but I don't think that skill is key to what you're trying to do, and this person doesn't appreciate some of the skills you have, either, so basically they shouldn't complain as much."

Comment by jonas-vollmer on Giving and receiving feedback · 2020-09-10T09:51:34.037Z · score: 8 (2 votes) · EA · GW

See also Daniel Kestenholz's How to Give and Receive Feedback.

Comment by jonas-vollmer on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-10T08:51:54.998Z · score: 12 (4 votes) · EA · GW

These seem like good objections to me, but overall I still find that view pretty implausible. A hermit who leads a happy life alone on an island (and has read lots of books about personal identity and otherwise acquired a lot of wisdom) probably wouldn't want to commit suicide unless the amount of expected suffering in their future was pretty significant.

(I either didn't understand the fourth point or I disagree with it.)

Comment by jonas-vollmer on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-10T08:43:46.271Z · score: 4 (3 votes) · EA · GW

As I said, mainly by assigning more credence to other views.

Comment by jonas-vollmer on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-09T12:51:22.001Z · score: 15 (6 votes) · EA · GW

While I agree that problematic implications do not follow in practice, I still think some views have highly counterintuitive implications. E.g., some suffering-focused views would imply that most happy present-day humans would be better off committing suicide if there's a small chance that they would experience severe suffering at some point in their lives. This seems like a highly implausible and under-appreciated implication (and makes me assign more credence to views that don't have this implication, such as preference-based and upside-focused views).

Comment by jonas-vollmer on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-09T10:59:21.377Z · score: 8 (4 votes) · EA · GW

Paul Christiano talks about this question in his 80,000 Hours podcast interview, mainly saying that s-risks seem less tractable than AI alignment (though he also expresses some enthusiasm for working on them).

Comment by jonas-vollmer on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-09T10:39:53.178Z · score: 14 (3 votes) · EA · GW

Most suffering-focused EAs I know agree about the facts: there's a small chance that AI-powered space colonization will create flourishing futures highly optimized for happiness and other forms of moral value, and this small chance of a vast payoff dominates the expected value of the future on many moral views. I think people generally agree that the typical/median future scenario will be much better than the present (for reasons like this one, though there's much more to say about that), but in absolute terms probably not nearly as good as it could be.

So in my perception, most of the disagreement comes from moral views, not from perceptions of the likelihood or severity of s-risks.

Comment by jonas-vollmer on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T11:20:42.228Z · score: 14 (7 votes) · EA · GW

"For me it seems like people constantly trade happiness for suffering (taking drugs expecting a hangover, eating unhealthy stuff expecting health problems or even just feeling full, finishing that show on Netflix instead of going to sleep…). Those are reasons for me to believe that most people might not want to compensate suffering through happiness 1:1, but are also far from expecting 1:10^17 returns or even stating there is no return which potentially could compensate any kind of suffering."

One counterargument that has been raised against this is that people just accept suffering in order to avoid other forms of suffering. E.g., you might feel bored if you don't take drugs, might have uncomfortable cravings for unhealthy food if you don't eat it, etc.

I do think this point could be part of an interesting argument, but as it stands, it merely offers an alternative explanation without analyzing carefully which of the two explanations is correct. So on its own, this doesn't seem to be a strong counterargument yet.

Comment by jonas-vollmer on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-24T07:43:37.038Z · score: 2 (1 votes) · EA · GW

This post itself is a major insight since 2015! :P

Comment by jonas-vollmer on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-08-21T07:53:57.924Z · score: 24 (9 votes) · EA · GW

I haven't checked the claims myself, but "follow good leadership practices" seems to be a heavily disputed claim. Some people claim DxE is a cult; see e.g. here.

Comment by jonas-vollmer on Donating effectively does not necessarily imply donating tax-deductibly · 2020-08-19T11:45:43.322Z · score: 4 (2 votes) · EA · GW

Great idea, I didn't know that this opportunity existed!

For those like me who don't know what a 'standard deduction' is in the US tax system, here's a brief explanation:

  • Even if you have no other qualifying deductions or tax credits, the IRS lets you take the standard deduction on a no-questions-asked basis. The standard deduction reduces the amount of income you have to pay taxes on.
  • You can either take the standard deduction or itemize on your tax return — you can’t do both. Itemized deductions are basically expenses allowed by the IRS that can decrease your taxable income.
  • Taking the standard deduction means you can’t deduct home mortgage interest or take the many other popular tax deductions — medical expenses or charitable donations, for example.

In 2020 the standard deduction is $12,400 for single filers and married filers filing separately, $24,800 for married filers filing jointly and $18,650 for heads of household.
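
To illustrate the either/or logic with made-up numbers (a simplified sketch for a 2020 single filer that ignores all other deductions and credits):

```python
STANDARD_DEDUCTION = 12_400  # 2020, single filer

def taxable_income(income, itemized_deductions):
    # You take whichever is larger: the standard deduction or your itemized total.
    return income - max(STANDARD_DEDUCTION, itemized_deductions)

# Someone earning $60k who donates $15k benefits from itemizing...
print(taxable_income(60_000, 15_000))  # 45,000
# ...but a $5k donation changes nothing: the standard deduction applies either way.
print(taxable_income(60_000, 5_000))   # 47,600
```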

Comment by jonas-vollmer on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-15T09:00:38.427Z · score: 0 (0 votes) · EA · GW

This is a bit of a nitpick: perhaps you mean the more general point you mentioned above rather than the specific claim about AI risk, but you published this report back in 2014, and I vaguely remember hearing a lot of discussion of those kinds of arguments at the time.

Comment by jonas-vollmer on EA Meta Fund Grants – July 2020 · 2020-08-14T06:44:48.804Z · score: 2 (1 votes) · EA · GW

I think it's the former of the two. Regarding the last paragraph, I think this refers to high-impact recipients (I think mostly or exclusively charities). But someone from the Meta Fund could answer these questions in more detail.

Comment by jonas-vollmer on EA Meta Fund Grants – July 2020 · 2020-08-13T08:32:40.241Z · score: 23 (10 votes) · EA · GW

(I recently joined CEA as Head of EA Funds. Responding from my own perspective, rather than the Meta Fund's.)

As you said, it's hard to publish critiques of organizations or the work of particular people without harming someone's reputation or otherwise posing a risk to the careers of the people involved.

I also agree with you that it's useful to find ways to talk about risks and reservations.

One potential solution is to talk about the issues in an anonymized, aggregate manner. I have been thinking about whether we could publish sufficiently anonymized examples of risks and reservations to show the community the kinds of things we don't fund – we expect this will make it easier to understand why we reject some applicants. I've also given a talk about downside risks, and 80,000 Hours have published an article about them.

Comment by jonas-vollmer on How are the EA Funds default allocations chosen? · 2020-08-12T09:17:42.670Z · score: 12 (8 votes) · EA · GW

I agree that many donations will be anchored by the default even if they don't use the default allocation. But donation volume is dominated by a small number of very large transactions, almost all (edit: most) of which use a completely different allocation (often with 100% going to just one fund).

Comment by jonas-vollmer on How are the EA Funds default allocations chosen? · 2020-08-11T11:49:26.489Z · score: 11 (4 votes) · EA · GW

Hi Peter,

(Context: I have recently joined CEA to run EA Funds.)

"I see from the payout overviews that the actual distribution of donation amounts between the four focus areas over the past three years has followed more or less exactly the distribution indicated on the sliders."

This seems to be a coincidence. Less than 10% of total donation volume is given according to the default allocation.

"does that mean that any future donor will not get any EA recommendation on how to balance her donation between the four areas?"

The allocation decision is based on a lot of judgment calls, such as: Do you think strong longtermism is correct? Do you think that non-human animals matter morally to a significant degree? Do you want to diversify across worldviews? This flowchart and this article give you an overview of some of the judgment calls involved.

There is no clear expert consensus on these questions, and there may not even be an objective answer to some of them, so we're moving away from recommending a particular allocation. But in the future, we may provide more guidance for donors to reason through these worldview questions themselves.

Comment by jonas-vollmer on EA reading list: suffering-focused ethics · 2020-08-04T11:37:06.717Z · score: 6 (3 votes) · EA · GW

Some academic references (both in favor of and against SFE) can be found here.

Comment by jonas-vollmer on Common ground for longtermists · 2020-08-03T08:26:28.276Z · score: 7 (3 votes) · EA · GW

I think David Moss has data on this (can you tag people in EA Forum posts?). I've sent him a PM with a link to this comment as an FYI, though I'm not sure he has time to respond.

Comment by jonas-vollmer on Annotated List of EA Career Advice Resources · 2020-07-13T07:16:21.438Z · score: 3 (2 votes) · EA · GW

Also interesting: Daniel Kestenholz's career reflection framework. This is essentially a detailed template for a career plan.

Comment by jonas-vollmer on Poor meat eater problem · 2020-07-13T06:38:32.433Z · score: 5 (3 votes) · EA · GW

See these resources:

Quoting from the first of these:

This argument is usually called the “poor meat eater problem,” but I think this term is not quite accurate, given that the concern is stronger in the developed world, so I’m going to call it the “meat eater problem.”

Comment by jonas-vollmer on Five Ways To Prioritize Better · 2020-07-07T08:54:07.495Z · score: 13 (6 votes) · EA · GW

Great post. I personally didn't really enjoy the sales-y style sometimes ("I’m going to let you in on a secret") but I liked the clear examples that illustrate important ideas. Theory of change in particular seems underestimated/underdiscussed in EA.

Comment by jonas-vollmer on EAF’s ballot initiative doubled Zurich’s development aid · 2020-07-03T20:15:04.951Z · score: 4 (2 votes) · EA · GW

Nice addition and caveats, thanks! :)

Comment by jonas-vollmer on EA Forum feature suggestion thread · 2020-07-02T12:28:19.657Z · score: 2 (1 votes) · EA · GW

Thanks, I wasn't aware of this!

Comment by jonas-vollmer on EA Forum feature suggestion thread · 2020-07-02T12:27:11.553Z · score: 4 (2 votes) · EA · GW

Very cool!

I think for me personally, this would work better if there were two buttons at the end – one called "publish", one called "share as draft with users" or something like that. That puts it more in the reference class of "this is a way of publishing my work" rather than "here's some additional feature I don't understand".

Also: I notice that my wording was a bit unfriendly – apologies, I would like to retract that. :)


EDIT: It seems that drafts don't support comments. I think this is one of the main features I was hoping for.

Comment by jonas-vollmer on EA Forum feature suggestion thread · 2020-06-30T09:58:57.921Z · score: 8 (2 votes) · EA · GW

Categories / sub-fora / better overview of tags

I think it would be very helpful if the forum was made easier to navigate by creating categories/sub-fora, making tags more intuitively accessible, or some other method. E.g., how do I find the most-upvoted forum posts and comments about EA investing?

Comment by jonas-vollmer on EA Forum feature suggestion thread · 2020-06-30T09:55:57.941Z · score: 2 (1 votes) · EA · GW

I would like to promote Wei Dai's suggestion that it would be nice if it were possible to share drafts privately and then potentially make them public at a later point. (I think there's some chance that this is already possible, but the UX doesn't seem intuitive; otherwise, I would have noticed already.)

Before implementing, it seems worth talking to users to find out whether this would actually make them more likely to share their internal work publicly at some point. It could also be good to find out whether there are any other ways that could make people more likely to share their internal work publicly.

Comment by jonas-vollmer on Announcing Effective Altruism Ventures · 2020-06-23T13:05:14.552Z · score: 10 (3 votes) · EA · GW

Some info here: https://youtu.be/Y4YrmltF2I0?t=157

Comment by jonas-vollmer on Max_Daniel's Shortform · 2020-06-15T08:23:27.892Z · score: 7 (4 votes) · EA · GW

Your point reminds me of the "history is written by the winners" adage – presumably, most civilizations would look back and think of their history as one of progress because they view their current values most favorably.

Perhaps this is one of the paths that would eventually contribute to a "desired dystopia" outcome, as outlined in Ord's book: we fail to realize that our social structure is flawed and leads to suffering in a systematic manner that's difficult to change.

(Also related: https://www.gwern.net/The-Narrowing-Circle )

Comment by jonas-vollmer on Max_Daniel's Shortform · 2020-06-14T16:15:56.467Z · score: 10 (3 votes) · EA · GW

In addition to the examples you mention, the world has become much more unequal over the past centuries, and I wonder how that impacts welfare. Relatedly, I wonder to what degree there is more loneliness and less purpose and belonging than in previous times, and how that impacts welfare (and whether it relates to the Easterlin paradox). EAs don't seem to discuss these aspects of welfare often. (Somewhat related books: Angus Deaton's The Great Escape and Junger's Tribe.)