Posts

EAF/FRI are now the Center on Long-Term Risk (CLR) 2020-03-06T16:40:10.190Z · score: 82 (43 votes)
EAF’s ballot initiative doubled Zurich’s development aid 2020-01-13T11:32:35.397Z · score: 255 (113 votes)
Effective Altruism Foundation: Plans for 2020 2019-12-23T11:51:56.315Z · score: 80 (35 votes)
Effective Altruism Foundation: Plans for 2019 2018-12-04T16:41:45.603Z · score: 52 (19 votes)
Effective Altruism Foundation update: Plans for 2018 and room for more funding 2017-12-15T15:09:17.168Z · score: 25 (25 votes)
Fundraiser: Political initiative raising an expected USD 30 million for effective charities 2016-09-13T11:25:17.151Z · score: 34 (22 votes)
Political initiative: Fundamental rights for primates 2016-08-04T19:35:28.201Z · score: 12 (14 votes)

Comments

Comment by jonas-vollmer on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-25T09:19:57.560Z · score: 3 (2 votes) · EA · GW

I was thinking it's perhaps best to list it like this:

"Brian Tomasik's Essays on Reducing Suffering (or FRI/CLR, EAF/GBS Switzerland, REG)"

I think Brian's work brought several people into EA and may continue to do so, whereas that seems less likely for the other categories.

I also see the point about historical changes, but I personally never thought the previous categories were particularly helpful.

Comment by jonas-vollmer on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-25T09:18:01.111Z · score: 0 (0 votes) · EA · GW

(moved comment)

Comment by jonas-vollmer on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-23T08:07:21.452Z · score: 2 (1 votes) · EA · GW

(Btw, I think you can remove REG/FRI/EAF/Swiss from future surveys because we've deemphasized outreach and have been focusing on research. I also think the numbers substantially overlap with "local groups".)

Comment by jonas-vollmer on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-22T16:59:43.151Z · score: 11 (4 votes) · EA · GW

A bit off-topic, but if this isn't available yet, I'd be curious to see the distribution of "When did you join EA?" as an upper-bound estimate of the growth of the EA community.

See also this: https://forum.effectivealtruism.org/posts/MBJvDDw2sFGkFCA29/is-ea-growing-ea-growth-metrics-for-2018

Comment by jonas-vollmer on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-22T16:48:34.524Z · score: 9 (3 votes) · EA · GW

I found this incredibly interesting and useful, in particular the "Engagement Level" section. Thanks! :)

Comment by jonas-vollmer on How Much Leverage Should Altruists Use? · 2020-05-16T09:21:51.799Z · score: 2 (1 votes) · EA · GW

The cheapest source of leverage currently appears to be box spread financing at ~0.55% p.a. for 3 years; for comparison, the 3-year US government bond yield is 0.2% p.a.
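
A rough back-of-the-envelope sketch of what that spread means in dollar terms (the notional and the simple-interest treatment are illustrative assumptions, not part of the comment above):

```python
# Illustrative cost of $100k of 3-year leverage at the rates above.
# Simple interest, purely for readability; actual box spreads are quoted differently.
notional = 100_000
box_rate = 0.0055    # assumed box-spread financing rate, p.a.
risk_free = 0.0020   # assumed 3-year US government bond yield, p.a.
years = 3

financing_cost = notional * box_rate * years                 # ~$1,650 total
spread_over_rf = notional * (box_rate - risk_free) * years   # ~$1,050 above the risk-free rate

print(f"Total financing cost: ${financing_cost:,.0f}")
print(f"Cost above risk-free: ${spread_over_rf:,.0f}")
```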

Comment by jonas-vollmer on How Much Leverage Should Altruists Use? · 2020-05-16T09:14:42.088Z · score: 4 (2 votes) · EA · GW

I'm currently helping put together the investment strategy for a DAF and my tentative conclusion is that it doesn't make sense to use a leveraged global market portfolio instead of global stocks. Perhaps much of the theory doesn't apply in practice because it doesn't take fees and the cost of leverage into account:

Bonds:

  • Buying bonds/TIPS with a ~0% return at a 0.75% margin loan cost seems like a certain loss (see the rough sketch below). (Perhaps this was different before quantitative easing, so it might make more sense again at some point in the future.)
  • Bonds (weighted BND + BNDX) slightly underperformed cash in the recent crisis, so perhaps aren't very anticorrelated with stocks.

Commodities: Commodity ETFs have high TERs of ≥0.58%; buying and rolling individual futures costs time. (EDIT: Even gold (GLD) has a TER of 0.4%.)

(REITs: Already included in stock ETFs.)
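
To make the "certain loss" point on bonds concrete, here's a minimal sketch of the negative carry at the rates above (the position size is an assumption for illustration):

```python
# Expected annual carry of a bond/TIPS position financed with a margin loan,
# at the assumed rates above (~0% yield, 0.75% margin cost).
position = 100_000      # assumed position size
bond_yield = 0.0000     # ~0% yield on bonds/TIPS
margin_rate = 0.0075    # margin loan cost, p.a.

carry_per_year = position * (bond_yield - margin_rate)  # -$750 per year
print(f"Expected carry per year: ${carry_per_year:,.0f}")
```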

Comment by jonas-vollmer on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-16T08:25:22.838Z · score: 2 (1 votes) · EA · GW

Yeah, I think this is worth taking seriously. (FWIW, I think I had been mostly, though perhaps not completely, aware that you are agnostic.)

Comment by jonas-vollmer on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-05-14T08:33:18.582Z · score: 9 (6 votes) · EA · GW

I wonder why you think recusal is a bad way to address COIs. The downsides seem minimal to me: The other fund managers can still vote in favor of a grant, and the recused fund manager can still provide information about the potential grantee. This will also automatically mean that other fund managers have to invest more time into investigating the grant, which is something you seemed to favor. I'd be keen to hear your thoughts.

In comparison, using internal veto power seems like a more brittle solution that relies more on attention from other fund managers and might not work in all instances.

In comparison, disclosure often seems more complicated to me because it interferes with the privacy of fund managers and potential grantees.

I think Open Phil's situation is substantially different because they are accountable to a very different type of donor, have fewer grant evaluators per grant, and most of their grants fall outside the EA community such that COIs are less common. (That said, I wonder about the COI policy for their EA grants committee.) GiveWell is also in a landscape where COIs are much less likely to arise.

I think there should be a fairly restrictive COI policy for all of the funds, not just for the LTFF.

Comment by jonas-vollmer on 2019 Ethnic Diversity Community Survey · 2020-05-13T12:43:03.413Z · score: 8 (5 votes) · EA · GW

I would love for there to be an analysis of how demographically diverse core EAs are (high self-reported engagement and/or EA Forum membership).

(I also wrote this here.)

Comment by jonas-vollmer on EA Survey 2019 Series: Community Demographics & Characteristics · 2020-05-13T12:40:43.970Z · score: 6 (4 votes) · EA · GW

I would love for this analysis to be repeated for core EAs (high self-reported engagement and/or EA Forum membership). E.g., I'd be really curious to see how demographically diverse core EAs are.

Comment by jonas-vollmer on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T14:07:28.152Z · score: 6 (6 votes) · EA · GW

I think the point is that some previously highly engaged EAs may have become less engaged (so dropped out of the 1000 people), or some would-be-engaged people didn't become engaged, due to the community's strong emphasis on longtermism. So I think it's all the same point, not two separate points.

I think I personally know a lot more EAs who have changed their views to longtermism than EAs who have dropped out of EA due to its longtermist focus. If that's true of the community as a whole (which I'm not sure about), the main point stands.

Comment by jonas-vollmer on New data suggests the ‘leaders’’ priorities represent the core of the community · 2020-05-12T07:33:06.062Z · score: 8 (4 votes) · EA · GW
taking the survey results about engagement at face value doesn't seem right to me

Not sure I understand – how do you think we should interpret them? Edit: Never mind, now I get it.

Regarding the latter issue, it sounds like we might address it by repeating the same analysis using, say, EA Survey 2016 data? (Some people have updated their views since and we'd miss out on that, so that might be closer to a lower-bound estimate of interest in longtermism.)

Comment by jonas-vollmer on How Much Leverage Should Altruists Use? · 2020-04-19T20:19:01.829Z · score: 2 (1 votes) · EA · GW

Hm, interesting, thanks. The fees are also very high, so it may not be worth it.

Comment by jonas-vollmer on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2020-04-15T16:52:04.004Z · score: 8 (2 votes) · EA · GW

Cool, thanks for the reply! Strong-upvoted.

Regarding #1 and #2, I've so far found Paul's line of argument more convincing, though I have only followed the discussion superficially. Points #3 and #4 seem pretty strong and convincing to me, so I'm inclined to conclude that mission hedging is indeed the stronger consideration here.

For AI risk, #3 might not apply because there's no divestment movement for AI risk and tech giants are large compared to our philanthropic investments. For #4, using the same 10:1 ratio, we'd be faced with the choice between sacrificing around $10 billion to reduce the largest tech giants' output by 1%, or doing something else with the money. We can probably do better than reducing output by 1%, especially because it's pretty unclear whether that would be net positive or negative.

Even with 10:1 leverage, this would be quite expensive

My understanding is that 10x leverage would also mean ~10x cost (from forgone diversification).

Comment by jonas-vollmer on A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good · 2020-04-13T13:14:53.030Z · score: 10 (3 votes) · EA · GW

This piece provides an IMO pretty strong defense of divestment: https://sideways-view.com/2019/05/25/analyzing-divestment/

Do you agree, and if so, to what extent does it change the conclusions of this article?

Comment by jonas-vollmer on How Much Leverage Should Altruists Use? · 2020-04-10T08:21:58.062Z · score: 2 (1 votes) · EA · GW
Startups
Another low-correlation investment opportunity, suggested by Paul Christiano

Should private equity ETFs be part of a global market portfolio? PSP, IPRV, and XLPE track private equity indices synthetically, BIZD invests in VC-ish companies. According to the McKinsey Global Private Markets Review 2019 (p. 15), global private equity AUM is $3.4 trillion, or ~2% of the global market portfolio.
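
A quick sanity check of the "~2%" figure (the size of the global market portfolio is my assumption; only the $3.4 trillion PE AUM comes from the McKinsey report):

```python
# PE share of the global market portfolio under a few assumed portfolio sizes.
pe_aum = 3.4e12  # global private equity AUM (McKinsey Global Private Markets Review 2019)

for gmp in (150e12, 170e12, 180e12):  # assumed global market portfolio sizes, USD
    print(f"GMP ${gmp / 1e12:.0f}T -> PE share {pe_aum / gmp:.1%}")
# Roughly 1.9-2.3%, consistent with the ~2% mentioned above.
```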

Comment by jonas-vollmer on AMA Patrick Stadler, Director of Communications, Charity Entrepreneurship: Starting Charities from Scratch · 2020-04-02T08:25:48.068Z · score: 2 (1 votes) · EA · GW

I hope I'm not too late: What were some of the crucial influences / events / experiences / arguments that set you on the path towards becoming an entrepreneur?

Comment by jonas-vollmer on AMA Patrick Stadler, Director of Communications, Charity Entrepreneurship: Starting Charities from Scratch · 2020-04-02T08:24:02.866Z · score: 3 (2 votes) · EA · GW

I hope I'm not too late: In which ways (if at all) has your experience at the UN and SECO been useful for your recent and current work (New Incentives and Charity Entrepreneurship)? Do you think it would be useful for more EAs to get that kind of experience?

Comment by jonas-vollmer on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T08:22:46.171Z · score: 2 (1 votes) · EA · GW

Makes sense, thanks!

Comment by jonas-vollmer on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T08:15:52.774Z · score: 5 (4 votes) · EA · GW
Effective altruism wiki: Intuitively, this makes a lot of sense as a means of organizing knowledge of a particular community. Also, if the US Intelligence Community is doing it, it has to be good. I know that there have been attempts at this (e.g., arbital, priority.wiki, EAWiki). Unfortunately, these didn’t catch on as much as would be necessary to create a lot of value. Perhaps there are still ways of pulling this off though. See here and here for recent discussions.

In addition to the wikis, there are also EA Concepts and the LessWrong Wiki, which have similar roles.

Two hypotheses for why these encyclopedias haven't caught on so far:

  • Lack of coordination: Existing projects seemed to focus on content but not quality standards, editing/moderation, etc. Projects weren't maintained long-term. It probably wasn't sufficiently clear how new volunteers could best contribute. Resources were split between multiple projects.
  • Perhaps EA is still too small: most successful wikis are backed by fairly large communities.

Personally, I'd be very excited about a better-coordinated and better-edited EA concepts/wiki. (I know of someone who is planning to work on this.)

Comment by jonas-vollmer on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T08:02:20.931Z · score: 10 (4 votes) · EA · GW

On expert surveys, I would personally like to see more institutionalized surveys of key considerations like these: https://www.stafforini.com/blog/what_i_believe/ One interesting aspect would be seeing where agreement and disagreement are largest.

Comment by jonas-vollmer on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T08:00:00.199Z · score: 2 (1 votes) · EA · GW
Building such institutions is a form of community-building. Arguably, this is one of the most important ways of making a difference since it offers a lot of leverage. It came second in the Leaders Forum survey.

(Not very important.) Hm, which result of the survey do you mean? I can't remember being given that option and can't find it immediately in that post.

Comment by jonas-vollmer on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T07:57:01.272Z · score: 4 (3 votes) · EA · GW

Explicitly defined publication norms could also be helpful. It's often unclear how one should deal with information hazards, which seems to cause people to err on the side of not publishing their work. Instead, one could set up things like "info hazard peer review" or agree more explicitly on rules along the lines of "for issues around X and Y, or other potential info hazards, ask at least five peers from different orgs whether to publish" (of course, this needs some more work).

Comment by jonas-vollmer on The case for building more and better epistemic institutions in the effective altruism community · 2020-03-30T07:53:43.757Z · score: 13 (7 votes) · EA · GW

Institutions for exchanging information (especially research) also seem helpful to me. For instance, many researchers circulate their work in semi-private Google Docs but only publish some of their work academically or on the Forum. (Sometimes this is because of information hazards, but only rarely.) This makes it harder for new or less well-networked researchers to get up to speed with existing work. It also doesn't scale well as the community grows. It would be great if there were ways to make content public more easily. Wei Dai made a suggestion in this direction, and I bet there are further ways of making this happen.

Comment by jonas-vollmer on Toby Ord’s ‘The Precipice’ is published! · 2020-03-24T11:26:45.366Z · score: 2 (1 votes) · EA · GW

For those looking for the ebook, it's only available on the Canadian, German, and Australian (cheapest) Amazon pages, but not the US/UK ones. (EDIT: Actually, it is available on the UK store.)

Comment by jonas-vollmer on Insomnia with an EA lens: Bigger than malaria? · 2020-03-18T07:24:59.695Z · score: 2 (1 votes) · EA · GW

Interesting, makes sense! I like that suggestion.

Comment by jonas-vollmer on Insomnia with an EA lens: Bigger than malaria? · 2020-03-17T14:09:29.770Z · score: 3 (2 votes) · EA · GW

Perhaps an app is an efficient way to popularize the ideas from the book? Many people don't commonly read non-fiction.

Comment by jonas-vollmer on EAF/FRI are now the Center on Long-Term Risk (CLR) · 2020-03-16T14:31:55.694Z · score: 3 (2 votes) · EA · GW

We did some surveys (partly because we thought of the "ICLR" / "eye clear" abbreviation) and only relatively few people liked the "clear" pronunciation. So the pronunciation we're going for is "C L R" ("see ell are"). Of course, if people just end up saying "clear" and like it, we won't object and would be happy to adopt that.

Comment by jonas-vollmer on Effective Altruism Foundation: Plans for 2020 · 2020-03-07T15:12:44.563Z · score: 3 (2 votes) · EA · GW

Thanks, fixed!

Comment by jonas-vollmer on EA Organization Updates: December 2019 · 2020-02-13T16:35:52.358Z · score: 2 (1 votes) · EA · GW

It was written correctly in the Google Doc though ;)

Comment by jonas-vollmer on EA Organization Updates: December 2019 · 2020-02-12T11:08:24.358Z · score: 4 (3 votes) · EA · GW

(Nitpick: It should say "Foundational Research Institute" rather than "Foundational Research Initiative".)

Comment by jonas-vollmer on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T16:06:03.248Z · score: 33 (15 votes) · EA · GW

I was thinking that you can always use a name that's different from the legal name. E.g., GiveWell's legal entity is called "The Clear Fund" but nobody cares/knows. Similarly, the Future of Humanity Institute has a "Centre for the Governance of AI" which isn't a separate legal entity. So it seems like the brand (and/or shorthand term) you use publicly is somewhat independent of the legal name.

Comment by jonas-vollmer on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T14:48:00.582Z · score: 11 (3 votes) · EA · GW

Thanks, that makes sense. What do you think about the other points I mentioned?

Comment by jonas-vollmer on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T14:33:34.694Z · score: 38 (18 votes) · EA · GW

This is great news, congrats on making this happen!

I guess you are doing this partly for legal reasons? I'm curious, have you considered going for "Athena Hotel" (the previous name of the hotel) as the main name of the project, regardless of what the legal entity is called? Might be easier to memorize/pronounce. I worry that otherwise, EAs will continue referring to CEEALAR as "EA Hotel", which could be a missed opportunity given that there's some reputational risk involved with the hotel.

Edit: More generally, it seems desirable to have a shorthand name for the hotel that's easier to spell, pronounce, and remember than "CEEALAR".

Some ideas: Athena Hotel, Athena Centre, Blackpool Hotel, Blackpool Centre, Learning & Research Centre.

(Someone pointed out to me that "Athena Hotel" might work particularly well because Athena is the Greek goddess of wisdom.)

Comment by jonas-vollmer on EA Hotel Fundraiser 6: Concrete outputs after 17 months · 2020-01-29T11:43:48.713Z · score: 0 (0 votes) · EA · GW

.

Comment by jonas-vollmer on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-26T16:27:59.377Z · score: 10 (7 votes) · EA · GW

Update: We've hired someone part-time (a couple of hours per month) to help ensure good implementation of the initiative.

Comment by jonas-vollmer on Intervention Profile: Ballot Initiatives · 2020-01-18T10:51:41.163Z · score: 3 (2 votes) · EA · GW
You should correct me if I'm wrong, but it seems to me that the proposals were eventually weakened to the point that conservation of resources became the primary (perhaps sole?) focus.

"Primary focus" seems correct. The resulting legal texts didn't mention plant-based food anymore, if I recall correctly, but still led to a reduction in meat consumption/portions, so in that sense they were still somewhat successful.

Comment by jonas-vollmer on Intervention Profile: Ballot Initiatives · 2020-01-17T16:58:25.528Z · score: 10 (4 votes) · EA · GW

As I already mentioned via email, I think this is an excellent post.

I just noticed that I overlooked one point when giving feedback: The main idea behind Sentience Politics' "sustainable nutrition" initiatives was also to promote animal welfare and expand the moral circle (through reducing meat consumption). The environmental benefits are also significant, but weren't the primary motivation.

Comment by jonas-vollmer on Effective Altruism Foundation: Plans for 2020 · 2020-01-17T10:08:29.489Z · score: 6 (4 votes) · EA · GW

Thanks! I think I don't have the capacity to give detailed public replies to this right now. My respective short answers would be something like "sure, that seems fine" and "might inspire riskier content, depends a lot on the framing and context", but there's nuance to this that's hard to convey in half a sentence. If you would like to write something about these topics and are interested in my perspective, feel free to get in touch and I'm happy to share my thoughts!

Comment by jonas-vollmer on Effective Altruism Foundation: Plans for 2020 · 2020-01-16T22:37:18.435Z · score: 6 (3 votes) · EA · GW
Do the kinds of s-risks EAF has in mind mostly involve artificial sentience to get to astronomical scale?

Yes, see here. Though we also put some credence on other "unknown unknowns" that we might prevent through broad interventions (like promoting compassion and cooperation).

Are you primarily concerned with autonomous self-sustaining (self-replicating) suffering processes being created, or are you also very concerned about an agent already having or creating individuals capable of suffering and who require resources from the agent to keep running, despite the costs (of running, or the extra costs of sentience specifically)?
My guess is that the latter is much more limited in potential scale.

Both could be concerning. I find it hard to think about future technological capabilities and agents in sufficient detail. So rather than thinking about specific scenarios, we'd like to reduce s-risks through (hopefully) more robust levers such as making the future less multipolar and differentially researching peaceful bargaining mechanisms.

Comment by jonas-vollmer on Effective Altruism Foundation: Plans for 2020 · 2020-01-16T22:35:58.239Z · score: 6 (3 votes) · EA · GW

Thanks for giving input on this!

So you seem to think that our guidelines ask people to weaken their views (while Nick's may not), and that they may be harmful to suffering-focused views if we think promoting SFE is important. I think my perspective differs in the following ways:

  • The guidelines are fairly similar in their recommendation to mention moral uncertainty and arguments that are especially important to other parts of the community while representing one's own views honestly.
  • If we want to promote SFE in EA, we will be more convincing to (potential) EAs if we provide nuanced and balanced arguments, which is what the guidelines ask for, and if s-risks research is more fleshed out and established in the community. Unlike our previous SFE content, our recent efforts (e.g., workshops, asking for feedback on early drafts) received a lot of engagement from both newer and long-time EA community members. (Outside of EA, this seems less clear.)
  • We sought feedback on these guidelines from community members and received largely positive feedback. Some people will always disagree but overall, most people were in favor. We'll seek out feedback again when we revisit the guidelines.
  • I think this new form of cooperation across the community is worth trying and improving on. It may not be perfect yet, but we will reassess at the end of this year and make adjustments (or discontinue the guidelines in a worst case).

I hope this is helpful. We have now published the guidelines, you can find the links above!

Comment by jonas-vollmer on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-16T22:28:39.527Z · score: 2 (1 votes) · EA · GW

Fully agreed, thanks for the clarification!

Comment by jonas-vollmer on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-15T09:53:33.993Z · score: 12 (7 votes) · EA · GW

They already have a committee allocating the grants which includes some academics, and they said they want to further improve the award practice. We have suggested specific academics they could work with. I'm not sure what it will end up looking like in practice. There are certainly some people in the administration who are eager to preserve the status quo, whereas others seemed quite excited about effectiveness improvements.

I don't think it's possible for citizens to sue the government for failing to implement a ballot initiative (or at least that's very uncommon). But there are many indirect ways to enforce an initiative, e.g., we could talk to the members of the city council who we know and work with them to submit motions to improve the implementation of the initiative. In general, referenda are taken very seriously in Switzerland.

As I wrote above, the bottleneck is likely EA-aligned people with development knowledge wanting to spend a couple of hours per year on this (rather than formal ways of suing/filing complaints if it's not implemented in the way we'd like). I think even a few small, friendly nudges would go a long way.

so much so that it could flip the sign of your assessment

That sounds like you think it might have been net negative, but I don't see how that follows from your points, unless you think the entire budget has literally zero impact, which I think is very unlikely for the following reason:

I think it's likely to have a significant positive impact if citizens of a city with a nominal per-capita GDP of $180,000 (source) give more money to people in developing countries (with a per-capita GDP which is ~2 orders of magnitude lower), even if that happens inefficiently. (There's a lot of EA and non-EA writing on the indirect effects of foreign aid, etc. so I'm not going to elaborate more on that here.)

Comment by jonas-vollmer on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-15T09:35:25.247Z · score: 19 (10 votes) · EA · GW

Right, a non-consequentialist analysis might lead to different conclusions in this case. Thanks for pointing that out!

I think there's still a pretty strong argument that development cooperation is not quite as straightforward, because developed countries have harmed developing countries in many ways (colonialism, tax havens, agricultural export subsidies, etc.). Thomas Pogge has argued along these lines IIRC, so one could look at his views on this.

More generally, we live in a highly globalized world and routinely interact with these countries through trade, so it seems plausible that we do have some responsibilities towards them. And we're talking about one of the very wealthiest cities in the world (with a per-capita GDP of $180,000!) giving a relatively small additional amount. So if there is one case where Huemer's arguments appear particularly implausible, it is probably this one.

Overall, I don't think it's obvious whether the case for development cooperation becomes weaker or stronger if we take into account various non-consequentialist perspectives.

Comment by jonas-vollmer on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-14T17:26:56.413Z · score: 11 (4 votes) · EA · GW

Thanks for the input!

Because modeling this involves several judgment calls and would make the analysis much more complex (and harder to understand), we decided it's better not to include it in the quantitative model and instead just mention it in the text.

I also think this is unlikely to change the numbers by more than 10%. It would take several fairly strong assumptions to change that, such as assuming that Zurich's marginal budget would otherwise be used effectively for an important cause area such as global catastrophic risk research.

Some brainstorming ideas for how to model this cost:

  • You could model a tax increase as a reduction in income for Zurich residents (using data on per-capita GDP in the city of Zurich, which is available) and compare that to an increase in the income of the average development cooperation recipient (taking into account that some funding is used for Swiss development cooperation staff compensation). The line of reasoning from this article (also linked above) could be helpful for translating this into welfare changes; a rough sketch of such a translation follows this list.
  • You could try to better understand spending cuts by looking at the budget items and which ones tended to be cut during past cuts, then try to estimate how they compare to development cooperation (or the things Zurich residents usually spend money on).
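
A minimal sketch of the first bullet, assuming roughly logarithmic utility of income (an assumption of mine; the population and transfer figures are placeholders, not numbers from the cost-effectiveness model):

```python
import math

# Compare welfare changes from shifting a marginal budget from Zurich taxpayers
# to development cooperation recipients, under an assumed log-utility model.
zurich_income = 180_000    # per-capita GDP in the city of Zurich (from the post)
recipient_income = 1_800   # assumed recipient income, ~2 orders of magnitude lower
transfer = 1_000_000       # assumed marginal budget shift, USD

def total_welfare_change(income, delta, population):
    # Sum of per-person log-utility changes if `delta` is spread evenly.
    per_person = delta / population
    return population * (math.log(income + per_person) - math.log(income))

cost_to_zurich = -total_welfare_change(zurich_income, -transfer, population=430_000)
gain_to_recipients = total_welfare_change(recipient_income, transfer, population=10_000)

print(f"Welfare cost to Zurich residents: {cost_to_zurich:.1f} log-units")
print(f"Welfare gain to recipients:       {gain_to_recipients:.1f} log-units")
# With log utility, a marginal dollar is worth roughly 180,000 / 1,800 = 100x
# more to the recipient, so the gain dominates even with substantial overhead.
```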

Comment by jonas-vollmer on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-13T21:40:04.515Z · score: 3 (2 votes) · EA · GW

To further clarify: I think in many circumstances (e.g., for a ballot initiative in Switzerland on the federal level), public opinion polling would be crucial. But for this specific type of city-level initiative, I don't think it would help much.

Comment by jonas-vollmer on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-13T21:38:03.467Z · score: 5 (3 votes) · EA · GW

That's correct. The original proposals for sustainable nutrition explicitly mentioned "plant-based" and "animal-friendly" food, but then the counterproposals only said "sustainable" or "environmentally friendly." So I'd say overall, from an animal welfare perspective, they were moderately successful. We didn't have the time to evaluate their actual impact, though I think this would be a worthwhile project for EAs, especially if it results in an EA Forum article similar to this one.

Comment by jonas-vollmer on EAF’s ballot initiative doubled Zurich’s development aid · 2020-01-13T16:41:46.517Z · score: 8 (7 votes) · EA · GW

I agree with the importance of "choosing the right avenue." I still don't think public opinion polling is very useful for that purpose (especially if some polling data is already available). In fact, I think public opinion polling would have been unlikely to clearly identify the key issues because the general public has much less pronounced and well-informed opinions than politicians and other stakeholders.

At least for Swiss initiatives, getting reactions/opinions from the responsible legislative body and the people they trust (like local charities in this case) seems much more useful because it shapes the legislative body's official recommendation to voters. I think it was a mistake not to do more of that type of stakeholder engagement in the early stages of the initiative; that mistake almost led to its complete failure.

Also noteworthy: Talking to local politicians is much cheaper still than doing public opinion polls (costs a couple of hours rather than thousands of dollars plus a lot of work to get the polling right).

That said, I think doing some polling before launching an initiative could also be somewhat helpful.

Comment by jonas-vollmer on Fundraiser: Political initiative raising an expected USD 30 million for effective charities · 2020-01-13T16:36:48.563Z · score: 3 (2 votes) · EA · GW

New EA Forum post is out: EAF’s ballot initiative doubled Zurich’s development aid.