Posts

EAF/FRI are now the Center on Long-Term Risk (CLR) 2020-03-06T16:40:10.190Z · score: 83 (44 votes)
EAF’s ballot initiative doubled Zurich’s development aid 2020-01-13T11:32:35.397Z · score: 259 (115 votes)
Effective Altruism Foundation: Plans for 2020 2019-12-23T11:51:56.315Z · score: 81 (36 votes)
Effective Altruism Foundation: Plans for 2019 2018-12-04T16:41:45.603Z · score: 52 (19 votes)
Effective Altruism Foundation update: Plans for 2018 and room for more funding 2017-12-15T15:09:17.168Z · score: 25 (25 votes)
Fundraiser: Political initiative raising an expected USD 30 million for effective charities 2016-09-13T11:25:17.151Z · score: 34 (22 votes)
Political initiative: Fundamental rights for primates 2016-08-04T19:35:28.201Z · score: 12 (14 votes)

Comments

Comment by jonas-vollmer on EA reading list: suffering-focused ethics · 2020-08-04T11:37:06.717Z · score: 4 (2 votes) · EA · GW

Some academic references can be found here (in favor of and against SFE)

Comment by jonas-vollmer on Common ground for longtermists · 2020-08-03T08:26:28.276Z · score: 7 (3 votes) · EA · GW

I think David Moss has data on this (can you tag people in EA Forum posts?). I've sent him a PM with a link to this comment as an FYI, though I'm not sure he has time to respond.

Comment by jonas-vollmer on Annotated List of EA Career Advice Resources · 2020-07-13T07:16:21.438Z · score: 3 (2 votes) · EA · GW

Also interesting: Daniel Kestenholz's career reflection framework. This is essentially a detailed template for a career plan.

Comment by jonas-vollmer on Poor meat eater problem · 2020-07-13T06:38:32.433Z · score: 5 (3 votes) · EA · GW

See these resources:

Quoting from the first of these:

This argument is usually called the “poor meat eater problem,” but I think this term is not quite accurate, given that the concern is stronger in the developed world, so I’m going to call it the “meat eater problem.”
Comment by jonas-vollmer on Five Ways To Prioritize Better · 2020-07-07T08:54:07.495Z · score: 12 (5 votes) · EA · GW

Great post. I personally didn't really enjoy the sales-y style sometimes ("I’m going to let you in on a secret") but I liked the clear examples that illustrate important ideas. Theory of change in particular seems underestimated/underdiscussed in EA.

Comment by jonas-vollmer on EAF’s ballot initiative doubled Zurich’s development aid · 2020-07-03T20:15:04.951Z · score: 4 (2 votes) · EA · GW

Nice addition and caveats, thanks! :)

Comment by jonas-vollmer on EA Forum feature suggestion thread · 2020-07-02T12:28:19.657Z · score: 2 (1 votes) · EA · GW

Thanks, I wasn't aware of this!

Comment by jonas-vollmer on EA Forum feature suggestion thread · 2020-07-02T12:27:11.553Z · score: 4 (2 votes) · EA · GW

Very cool!

I think for me personally, this would work better if there were two buttons at the end – one called "publish", one called "share as draft with users" or something like that. That puts it more in the reference class of "this is a form of publishing my work" rather than "here's some additional feature whose workings I don't understand".

Also: I notice that my wording was a bit unfriendly – apologies, I would like to retract that. :)


EDIT: It seems that drafts don't support comments. I think this is one of the main features I was hoping for.

Comment by jonas-vollmer on EA Forum feature suggestion thread · 2020-06-30T09:58:57.921Z · score: 8 (2 votes) · EA · GW

Categories / sub-fora / better overview of tags

I think it would be very helpful if the forum were made easier to navigate by creating categories/sub-fora, making tags more intuitively accessible, or some other method. E.g., how do I find the most-upvoted forum posts and comments about EA investing?

Comment by jonas-vollmer on EA Forum feature suggestion thread · 2020-06-30T09:55:57.941Z · score: 2 (1 votes) · EA · GW

I would like to promote Wei Dai's suggestion that it would be nice if it were possible to share drafts privately and then potentially make them public at a later point. (I think there's some chance that this is already possible, but the UX doesn't seem intuitive, otherwise I would have noticed already.)

Before implementing, it seems worth talking to users to find out whether this would actually make them more likely to share their internal work publicly at some point. It could also be good to find out whether there are any other ways that could make people more likely to share their internal work publicly.

Comment by jonas-vollmer on Announcing Effective Altruism Ventures · 2020-06-23T13:05:14.552Z · score: 10 (3 votes) · EA · GW

Some info here: https://youtu.be/Y4YrmltF2I0?t=157

Comment by jonas-vollmer on Max_Daniel's Shortform · 2020-06-15T08:23:27.892Z · score: 7 (4 votes) · EA · GW

Your point reminds me of the "history is written by the winners" adage – presumably, most civilizations would look back and think of their history as one of progress because they view their current values most favorably.

Perhaps this is one of the paths that would eventually contribute to a "desired dystopia" outcome, as outlined in Ord's book: we fail to realize that our social structure is flawed and leads to suffering in a systematic manner that's difficult to change.

(Also related: https://www.gwern.net/The-Narrowing-Circle )

Comment by jonas-vollmer on Max_Daniel's Shortform · 2020-06-14T16:15:56.467Z · score: 10 (3 votes) · EA · GW

In addition to the examples you mention, the world has become much more unequal over the past centuries, and I wonder how that impacts welfare. Relatedly, I wonder to what degree there is more loneliness and less purpose and belonging than in previous times, and how that impacts welfare (and whether it relates to the Easterlin paradox). EAs don't seem to discuss these aspects of welfare often. (Somewhat related books: Angus Deaton's The Great Escape and Junger's Tribe.)

Comment by jonas-vollmer on Max_Daniel's Shortform · 2020-06-14T16:05:35.540Z · score: 6 (3 votes) · EA · GW

I second Stefan's suggestion to share this as a normal post – I realize I should have read your shortform much sooner.

Comment by jonas-vollmer on Max_Daniel's Shortform · 2020-06-14T15:51:48.020Z · score: 2 (1 votes) · EA · GW

I stumbled a bit with the framing here: I think it's often the case that you need a lot of person-internal talent (including a good attitude, altruistic commitment, etc.) to learn X.

I'd personally be excited to spend more time on mentorship of EA community members but it feels kind of hard to find potential mentees who aren't already in touch with many other mentors (either because I'm bad at finding them or because we need more "great people" or because I'm not great at mentoring people to learn X).

Comment by jonas-vollmer on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-25T09:19:57.560Z · score: 3 (2 votes) · EA · GW

I was thinking it's perhaps best to list it like this:

"Brian Tomasik's Essays on Reducing Suffering (or FRI/CLR, EAF/GBS Switzerland, REG)"

I think Brian's work brought several people into EA and may continue to do so, whereas that seems less likely for the other categories.

I also see the point about historic changes, but I personally never thought the previous categories were particularly helpful.

Comment by jonas-vollmer on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-25T09:18:01.111Z · score: 0 (0 votes) · EA · GW

(moved comment)

Comment by jonas-vollmer on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-23T08:07:21.452Z · score: 2 (1 votes) · EA · GW

(Btw, I think you can remove REG/FRI/EAF/Swiss from future surveys because we've deemphasized outreach and have been focusing on research. I also think the numbers substantially overlap with "local groups".)

Comment by jonas-vollmer on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-22T16:59:43.151Z · score: 11 (4 votes) · EA · GW

A bit off-topic, but if this isn't available yet, I'd be curious to see the distribution of "When did you join EA?" as an upper-bound estimate of the growth of the EA community.

See also this: https://forum.effectivealtruism.org/posts/MBJvDDw2sFGkFCA29/is-ea-growing-ea-growth-metrics-for-2018

Comment by jonas-vollmer on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-22T16:48:34.524Z · score: 9 (3 votes) · EA · GW

I found this incredibly interesting and useful, in particular the "Engagement Level" section. Thanks! :)

Comment by jonas-vollmer on How Much Leverage Should Altruists Use? · 2020-05-16T09:21:51.799Z · score: 0 (0 votes) · EA · GW

(moved up)

Comment by jonas-vollmer on How Much Leverage Should Altruists Use? · 2020-05-16T09:14:42.088Z · score: 4 (2 votes) · EA · GW

I'm currently helping put together the investment strategy for a DAF and my tentative conclusion is that (contrary to what it says in most EA investment-related articles) it doesn't make sense to use a leveraged global market portfolio instead of (leveraged) global stocks. Perhaps much of the theory doesn't apply in practice because it doesn't take fees and the cost of leverage into account:

Bonds:

  • Buying bonds/TIPS with a ~0% return at a 0.75% margin loan cost seems like a certain loss. (Perhaps this was different before quantitative easing, so it might make more sense again at some point in the future.) (EDIT: The currently cheapest source of leverage appears to be box spread financing at ~0.55% p.a. for 3 years. The 3y US govt bond yield is 0.2% p.a. So even with cheap sources of leverage, it's not worth it.)
  • Bonds (weighted BND + BNDX) slightly underperformed cash in the recent crisis, so perhaps aren't very anticorrelated with stocks.
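The negative-carry point in the first bullet can be sketched with quick arithmetic. The yields and financing costs below are the approximate figures quoted in the comment (mid-2020 levels); this is an illustration, not investment advice:

```python
# Approximate figures from the comment above (as of mid-2020).
bond_yield = 0.002       # 3-year US Treasury yield, ~0.2% p.a.
financing_cost = 0.0055  # box spread financing, ~0.55% p.a.

# Expected carry per dollar of leveraged bond exposure:
carry = bond_yield - financing_cost
print(f"Carry: {carry:.2%} p.a.")  # prints: Carry: -0.35% p.a.
```

Since the carry is negative even with the cheapest financing available, holding leveraged Treasuries locks in a near-certain loss, which is the comment's point.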

Commodities: Commodity ETFs have high TERs of ≥0.58%; buying and rolling individual futures costs time. (EDIT: Even gold (GLD) has a TER of 0.4%.)

(REITs: Already included in stock ETFs.)

(Added some edits in parentheses to the first paragraph.)

Comment by jonas-vollmer on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-16T08:25:22.838Z · score: 2 (1 votes) · EA · GW

Yeah, I think this is worth taking seriously. (FWIW, I think I had been mostly (though perhaps not completely) aware that you are agnostic.)

Comment by jonas-vollmer on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-05-14T08:33:18.582Z · score: 9 (6 votes) · EA · GW

I wonder why you think recusal is a bad way to address COIs. The downsides seem minimal to me: The other fund managers can still vote in favor of a grant, and the recused fund manager can still provide information about the potential grantee. This will also automatically mean that other fund managers have to invest more time into investigating the grant, which is something you seemed to favor. I'd be keen to hear your thoughts.

In comparison, using internal veto power seems like a more brittle solution that relies more on attention from other fund managers and might not work in all instances.

In comparison, disclosure often seems more complicated to me because it interferes with the privacy of fund managers and potential grantees.

I think Open Phil's situation is substantially different because they are accountable to a very different type of donor, have fewer grant evaluators per grant, and most of their grants fall outside the EA community such that COIs are less common. (That said, I wonder about the COI policy for their EA grants committee.) GiveWell is also in a landscape where COIs are much less likely to arise.

I think there should be a fairly restrictive COI policy for all of the funds, not just for the LTFF.

Comment by jonas-vollmer on 2019 Ethnic Diversity Community Survey · 2020-05-13T12:43:03.413Z · score: 8 (5 votes) · EA · GW

I would love for there to be an analysis of how demographically diverse core EAs are (high self-reported engagement and/or EA Forum membership).

(I also wrote this here.)

Comment by jonas-vollmer on EA Survey 2019 Series: Community Demographics & Characteristics · 2020-05-13T12:40:43.970Z · score: 6 (4 votes) · EA · GW

I would love for this analysis to be repeated for core EAs (high self-reported engagement and/or EA Forum membership). E.g., I'd be really curious to see how demographically diverse core EAs are.

Comment by jonas-vollmer on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T14:07:28.152Z · score: 6 (6 votes) · EA · GW

I think the point is that some previously highly engaged EAs may have become less engaged (so dropped out of the 1000 people), or some would-be-engaged people didn't become engaged, due to the community's strong emphasis on longtermism. So I think it's all the same point, not two separate points.

I think I personally know a lot more EAs who have changed their views to longtermism than EAs who have dropped out of EA due to its longtermist focus. If that's true of the community as a whole (which I'm not sure about), the main point stands.

Comment by jonas-vollmer on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T07:33:06.062Z · score: 8 (4 votes) · EA · GW
taking the survey results about engagement at face value doesn't seem right to me

Not sure I understand – how do you think we should interpret them? Edit: Nevermind, now I get it.

Regarding the latter issue, it sounds like we might address it by repeating the same analysis using, say, EA Survey 2016 data? (Some people have updated their views since and we'd miss out on that, so that might be closer to a lower-bound estimate of interest in longtermism.)

Comment by jonas-vollmer on How Much Leverage Should Altruists Use? · 2020-04-19T20:19:01.829Z · score: 2 (1 votes) · EA · GW

Hm, interesting, thanks. The fees are also very high, so it may not be worth it.

Comment by jonas-vollmer on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2020-04-15T16:52:04.004Z · score: 8 (2 votes) · EA · GW

Cool, thanks for the reply! Strong-upvoted.

Regarding #1 and #2, so far I found Paul's line of argument more convincing, but I have only followed the discussion superficially. But points #3 and #4 seem pretty strong and convincing to me, so I'm inclined to conclude that mission hedging is indeed the stronger consideration here.

For AI risk, #3 might not apply because there's no divestment movement for AI risk and tech giants are large compared to our philanthropic investments. For #4, using the same 10:1 ratio, we'd be faced with the choice between sacrificing around $10 billion to reduce the largest tech giants' output by 1%, or do something else with the money. We can probably do better than reducing output by 1%, especially because it's pretty unclear whether that would be net positive or negative.

Even with 10:1 leverage, this would be quite expensive

My understanding is that 10x leverage would also mean ~10x cost (from forgone diversification).

Comment by jonas-vollmer on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2020-04-13T13:14:53.030Z · score: 10 (3 votes) · EA · GW

This piece provides an IMO pretty strong defense of divestment: https://sideways-view.com/2019/05/25/analyzing-divestment/

Do you agree, and if to some extent, how does it change the conclusions of this article?

Comment by jonas-vollmer on How Much Leverage Should Altruists Use? · 2020-04-10T08:21:58.062Z · score: 2 (1 votes) · EA · GW
Startups
Another low-correlation investment opportunity, suggested by Paul Christiano

Should private equity ETFs be part of a global market portfolio? PSP, IPRV, and XLPE track private equity indices synthetically, BIZD invests in VC-ish companies. According to the McKinsey Global Private Markets Review 2019 (p. 15), global private equity AUM is $3.4 trillion, or ~2% of the global market portfolio.
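The ~2% figure follows from the AUM number cited. The implied size of the global market portfolio (~$170 trillion) is an assumption backed out from the comment's own numbers, not a figure from the McKinsey report:

```python
pe_aum_trillions = 3.4  # global private equity AUM per McKinsey (2019)
# Implied global market portfolio size (assumption consistent with the ~2% figure):
global_portfolio_trillions = 170

weight = pe_aum_trillions / global_portfolio_trillions
print(f"Private equity weight: {weight:.1%}")  # prints: Private equity weight: 2.0%
```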

Comment by jonas-vollmer on AMA Patrick Stadler, Director of Communications, Charity Entrepreneurship: Starting Charities from Scratch · 2020-04-02T08:25:48.068Z · score: 2 (1 votes) · EA · GW

I hope I'm not too late: What were some of the crucial influences / events / experiences / arguments that set you on the path towards becoming an entrepreneur?

Comment by jonas-vollmer on AMA Patrick Stadler, Director of Communications, Charity Entrepreneurship: Starting Charities from Scratch · 2020-04-02T08:24:02.866Z · score: 3 (2 votes) · EA · GW

I hope I'm not too late: In which ways (if at all) has your experience at the UN and SECO been useful for your recent and current work (New Incentives and Charity Entrepreneurship)? Do you think it would be useful for more EAs to get that kind of experience?

Comment by jonas-vollmer on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T08:22:46.171Z · score: 2 (1 votes) · EA · GW

Makes sense, thanks!

Comment by jonas-vollmer on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T08:15:52.774Z · score: 5 (4 votes) · EA · GW
Effective altruism wiki: Intuitively, this makes a lot of sense as a means of organizing knowledge of a particular community. Also, if the US Intelligence Community is doing it, it has to be good. I know that there have been attempts at this (e.g., arbital, priority.wiki, EAWiki). Unfortunately, these didn’t catch on as much as would be necessary to create a lot of value. Perhaps there are still ways of pulling this off though. See here and here for recent discussions.

In addition to the wikis, there are also EA Concepts and the LessWrong Wiki, which have similar roles.

Two hypotheses for why these encyclopedias didn't catch on so far:

  • Lack of coordination: Existing projects seemed to focus on content but not quality standards, editing/moderation, etc. Projects weren't maintained long-term. It probably wasn't sufficiently clear how new volunteers could best contribute. Resources were split between multiple projects.
  • Perhaps EA is still too small. Most communities with successful wikis have fairly large communities.

Personally, I'd be very excited about a better-coordinated and better-edited EA concepts/wiki. (I know of someone who is planning to work on this.)

Comment by jonas-vollmer on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T08:02:20.931Z · score: 10 (4 votes) · EA · GW

On expert surveys, I would personally like to see more institutionalized surveys of key considerations like these: https://www.stafforini.com/blog/what_i_believe/ One interesting aspect could be to see in which areas agreement / disagreement is largest.

Comment by jonas-vollmer on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T08:00:00.199Z · score: 2 (1 votes) · EA · GW
Building such institutions is a form of community-building. Arguably, this is one of the most important ways of making a difference since it offers a lot of leverage. It came second in the Leaders Forum survey.

(Not very important.) Hm, which result of the survey do you mean? I can't remember being given that option and can't find it immediately in that post.

Comment by jonas-vollmer on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T07:57:01.272Z · score: 6 (4 votes) · EA · GW

Explicitly defined publication norms could also be helpful. It's often unclear how one should deal with information hazards, which seems to cause people to err on the side of not publishing their work. Instead, one could set up things like "info hazard peer review" or agree more explicitly on rules in the direction of "for issues around X and Y, or other potential info hazards, ask at least five peers from different orgs on whether to publish" (of course, this needs some more work).

Comment by jonas-vollmer on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T07:53:43.757Z · score: 16 (9 votes) · EA · GW

Institutions for exchanging information (especially research) also seem helpful to me. For instance, many researchers circulate their work in semi-private google docs but only publish some of their work academically or on the Forum. (Sometimes, this is because of information hazards, but only rarely.) This makes it harder for new or less well-networked researchers to get up to speed with existing work. It also doesn't scale well as the community grows. It would be great if there were ways to make content public more easily. Wei Dai made a suggestion in this direction, and I bet there are further ways of making this happen.

Comment by jonas-vollmer on Toby Ord’s ‘The Precipice’ is published! · 2020-03-24T11:26:45.366Z · score: 2 (1 votes) · EA · GW

For those looking for the ebook, it's only available on the Canadian, German, and Australian (cheapest) amazon pages (but not US / UK ones). (EDIT: Actually available on the UK store.)

Comment by jonas-vollmer on Insomnia with an EA lens: Bigger than malaria? · 2020-03-18T07:24:59.695Z · score: 2 (1 votes) · EA · GW

Interesting, makes sense! I like that suggestion.

Comment by jonas-vollmer on Insomnia with an EA lens: Bigger than malaria? · 2020-03-17T14:09:29.770Z · score: 3 (2 votes) · EA · GW

Perhaps an app is an efficient way to popularize the ideas from the book? Many people don't commonly read non-fiction.

Comment by jonas-vollmer on EAF/FRI are now the Center on Long-Term Risk (CLR) · 2020-03-16T14:31:55.694Z · score: 3 (2 votes) · EA · GW

We did some surveys (partly because we thought of the "ICLR" / "eye clear" abbreviation) and only relatively few people liked the "clear" pronunciation. So the pronunciation we're going for is "C L R" ("see ell are"). Of course, if people just end up saying "clear" and like it, we won't object and would be happy to adopt that.

Comment by jonas-vollmer on Effective Altruism Foundation: Plans for 2020 · 2020-03-07T15:12:44.563Z · score: 3 (2 votes) · EA · GW

Thanks, fixed!

Comment by jonas-vollmer on EA Organization Updates: December 2019 · 2020-02-13T16:35:52.358Z · score: 2 (1 votes) · EA · GW

It was written correctly in the Google Doc though ;)

Comment by jonas-vollmer on EA Organization Updates: December 2019 · 2020-02-12T11:08:24.358Z · score: 4 (3 votes) · EA · GW

(Nitpick: It should say "Foundational Research Institute" rather than "Foundational Research Initiative".)

Comment by jonas-vollmer on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T16:06:03.248Z · score: 33 (15 votes) · EA · GW

I was thinking that you can always use a name that's different from the legal name. E.g., GiveWell's legal entity is called "The Clear Fund" but nobody cares/knows. Similarly, the Future of Humanity Institute has a "Centre for the Governance of AI" which isn't a separate legal entity. So it seems like the brand (and/or shorthand term) you use publicly is somewhat independent of the legal name.

Comment by jonas-vollmer on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T14:48:00.582Z · score: 11 (3 votes) · EA · GW

Thanks, that makes sense. What do you think about the other points I mentioned?

Comment by jonas-vollmer on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T14:33:34.694Z · score: 38 (18 votes) · EA · GW

This is great news, congrats on making this happen!

I guess you are doing this partly for legal reasons? I'm curious, have you considered going for "Athena Hotel" (the previous name of the hotel) as the main name of the project, regardless of what the legal entity is called? Might be easier to memorize/pronounce. I worry that otherwise, EAs will continue referring to CEEALAR as "EA Hotel", which could be a missed opportunity given that there's some reputational risk involved with the hotel.

Edit: More generally, it seems desirable to have a shorthand name for the hotel that's easier to spell, pronounce, and remember than "CEEALAR".

Some ideas: Athena Hotel, Athena Centre, Blackpool Hotel, Blackpool Centre, Learning & Research Centre.

(Someone pointed out to me that "Athena Hotel" might work particularly well because Athena is the Greek goddess of wisdom.)