Posts

Thoughts on patient philanthropy 2020-09-08T12:00:46.399Z · score: 14 (5 votes)
AMA: Tobias Baumann, Center for Reducing Suffering 2020-09-06T10:45:10.187Z · score: 39 (25 votes)
Common ground for longtermists 2020-07-29T10:26:50.727Z · score: 59 (29 votes)
Representing future generations in the political process 2020-06-25T15:31:39.402Z · score: 33 (17 votes)
Reducing long-term risks from malevolent actors 2020-04-29T08:55:38.809Z · score: 236 (100 votes)
Thoughts on electoral reform 2020-02-18T16:23:27.829Z · score: 75 (38 votes)
Space governance is important, tractable and neglected 2020-01-07T11:24:38.136Z · score: 71 (34 votes)
How can we influence the long-term future? 2019-03-06T15:31:43.683Z · score: 10 (12 votes)
Risk factors for s-risks 2019-02-13T17:51:37.632Z · score: 31 (12 votes)
Why I expect successful (narrow) alignment 2018-12-29T15:46:04.947Z · score: 18 (17 votes)
A typology of s-risks 2018-12-21T18:23:05.249Z · score: 25 (14 votes)
Thoughts on short timelines 2018-10-23T15:59:41.415Z · score: 22 (24 votes)
S-risk FAQ 2017-09-18T08:05:39.850Z · score: 24 (19 votes)
Strategic implications of AI scenarios 2017-06-29T07:31:27.891Z · score: 7 (7 votes)

Comments

Comment by tobias_baumann on Thoughts on patient philanthropy · 2020-09-10T10:48:18.673Z · score: 2 (1 votes) · EA · GW

I was just talking about 30 years because those are the farthest-out US bonds. I agree that the horizon of patient philanthropists can be much longer.

Comment by tobias_baumann on Thoughts on patient philanthropy · 2020-09-09T22:00:12.349Z · score: 4 (2 votes) · EA · GW

Yeah, but even 30-year interest rates are low (1-2% at the moment). There is an Austrian 100-year bond paying 0.88%. I think that is significant evidence that something about the "patient vs impatient actors" story does not add up.
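To get a sense of the magnitudes, here is a quick compounding comparison (purely illustrative; the 5% figure is just a placeholder for the kind of equity-like return a patient philanthropist might hope for):

```python
# Illustrative compounding: growth of one unit of capital at the 0.88%
# century-bond yield vs. a hypothetical 5% return, over 30 and 100 years.
for rate in (0.0088, 0.05):
    for years in (30, 100):
        print(f"{rate:.2%} over {years} years -> {(1 + rate) ** years:.1f}x")
```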

Comment by tobias_baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-09T14:41:11.701Z · score: 9 (4 votes) · EA · GW

It is fair to say that some suffering-focused views have highly counterintuitive implications, such as the one you mention. The misconception is just that this holds for all suffering-focused views. For instance, there are plenty of possible suffering-focused views that do not imply that happy humans would be better off committing suicide. In addition to preference-based views, one could value happiness but endorse the procreative asymmetry (so that lives above a certain threshold of welfare are considered OK even if there is some severe suffering), or one could be prioritarian or egalitarian in interpersonal contexts, which also avoids problematic conclusions about such tradeoffs. (Of course, those views may be considered unattractive for other reasons.)

I think views along these lines are actually fairly widespread among philosophers. It just so happens that suffering-focused EAs have often promoted other variants of SFE that do arguably have implications for intrapersonal tradeoffs that you consider counterintuitive (and I mostly agree that those implications are problematic, at least when taken to extremes), thus giving the impression that all or most suffering-focused views have said implications.

Comment by tobias_baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T23:12:41.270Z · score: 7 (3 votes) · EA · GW

Re: 1., there would be many more (thoughtful) people who share our concern about reducing suffering and s-risks (not necessarily holding strongly suffering-focused values, but at least giving the issue considerable weight). This would result in an ongoing research project on s-risks that goes beyond a few EAs (e.g., one that is also established in academia or other social movements).

Re: 2., a possible scenario is that suffering-focused ideas just never gain much traction, and consequently efforts to reduce s-risks will just fizzle out. However, I think there is significant evidence that at least an extreme version of this is not happening.

Re: 3., I think the levels of engagement and feedback we have received so far are encouraging. However, we do not currently have any procedures in place to measure impact, which is (as you say) incredibly hard for what we do. But of course, we are constantly thinking about what kind of work is most impactful!

Comment by tobias_baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:59:37.100Z · score: 5 (3 votes) · EA · GW

I would guess that actually experiencing certain possible conscious states, in particular severe suffering or very intense bliss, could significantly change my views, although I am not sure if I would endorse this as “reflection” or if it might lead to bias.

It seems plausible (but I am not aware of strong evidence) that experience of severe suffering generally causes people to focus more on it. However, I myself have fortunately never experienced severe suffering, so that would be a data point to the contrary.

Comment by tobias_baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:55:11.974Z · score: 5 (3 votes) · EA · GW

I was exposed to arguments for suffering-focused ethics from the start, since I was involved with German-speaking EAs (the Effective Altruism Foundation / Foundational Research Institute back then). I don’t really know why exactly (there isn’t much research on what makes people suffering-focused or non-suffering-focused), but this intuitively resonated with me.

I can’t point to any specific arguments or intuition pumps, but my views are inspired by writings such as the Case for Suffering-Focused Ethics, Brian Tomasik’s essays, and the work of Simon Knutsson and Magnus Vinding.

Comment by tobias_baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:40:02.252Z · score: 11 (5 votes) · EA · GW

I agree that s-risks can vary a lot (by many orders of magnitude) in terms of severity. I also think that this gradual nature of s-risks is often swept under the rug because the definition just uses a certain threshold (“astronomical scale”). There have, in fact, been some discussions about how the definition could be changed to ameliorate this, but I don’t think there is a clear solution. Perhaps talking about reducing future suffering, or preventing worst-case outcomes, can convey this variation in severity more than the term ‘s-risks’.

Regarding your second question, I wrote up this document a while ago on whether we should focus on worst-case outcomes, as opposed to suffering in median futures or 90th-percentile-badness-futures (given that those are more likely than worst-cases). However, this did not yield a clear conclusion, so I consider this an open question.

Comment by tobias_baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:28:00.310Z · score: 3 (2 votes) · EA · GW

One key difference is that there is less money in it, because OpenPhil as the biggest EA grantmaker is not focused on reducing s-risks. In a certain sense, that is good news because work on s-risks is plausibly more funding-constrained than non-suffering-focused longtermism.

In terms of where to donate, I would recommend the Center on Long-Term Risk and the Center for Reducing Suffering (which I co-founded myself). Both of those organisations are doing crucial research on s-risk reduction. If you are looking for something a bit less abstract, you could consider Animal Ethics, the Good Food Institute, or Wild Animal Initiative.

Comment by tobias_baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:13:50.222Z · score: 9 (5 votes) · EA · GW

I think a plausible win condition is that society has some level of moral concern for all sentient beings (it doesn’t necessarily need to be entirely suffering-focused) as well as stable mechanisms to implement positive-sum cooperation or compromise. The latter guarantees that moral concerns are taken into account and possible gains from trade can be achieved. (An example of this could be cultivated meat, which allows us to reduce animal suffering while accommodating the interests of meat eaters.)

However, I think suffering reducers in particular should perhaps not focus on imagining best-case outcomes. It is plausible (though not obvious) that we should focus on preventing worst-case outcomes rather than shooting for utopian outcomes, as the difference in expected suffering between a worst-case and the median outcome may be much greater than the difference between the median outcome and the best possible future.

Comment by tobias_baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:00:00.002Z · score: 9 (5 votes) · EA · GW

I don’t think this view is necessary to prioritise s-risk. A finite but relatively high “trade ratio” between happiness and suffering can be enough to focus on s-risks. In addition, I think it’s more complicated than putting some numbers on happiness vs. suffering. (See here for more details.) For instance, one should distinguish between the intrapersonal and the interpersonal setting - a common intuition is that one man’s pain can’t be outweighed by another’s pleasure.

Another possibility is lexicality: one may contend that only certain particularly bad forms of suffering can’t be outweighed. You may find such views counterintuitive, but it is worth noting that lexicality can be multi-dimensional and need not involve abrupt breaks. It is, for instance, quite possible to hold the view that 1 minute of lava is ‘outweighable’ but 1 day is not. (I think I would not have answered “no amount can compensate” if it had been about 1 minute.)

I also sympathise with the view mentioned by Jonas: that happiness matters mostly in so far as an existing being has a craving or desire to experience it. The question, then, is just how strong the desire to experience a certain timespan of bliss is. The poll was just about how I would do this tradeoff for myself, and it just so happens that abstract prospects of bliss do not evoke a very strong desire in me. It’s certainly not enough to accept a day of lava drowning - and that is true regardless of how long the bliss lasts. Your psychology may be different but I don’t think there’s anything inconsistent or illogical about my preferences.

Comment by tobias_baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T17:09:01.185Z · score: 9 (4 votes) · EA · GW

We have thought about this, and wrote up some internal documents, but have not yet published anything (though we might do that at some point, as part of a strategic plan). Magnus and I are quite aligned in our thinking about the theory of change. The key intended outcome is to catalyse a research project on how to best reduce suffering, both by creating relevant content ourselves and by convincing others to share our concerns regarding s-risks and reducing future suffering.

Comment by tobias_baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T16:37:50.266Z · score: 10 (5 votes) · EA · GW

Apart from the normative discussions relating to the suffering focus (cf. other questions), I think the most likely reasons are that s-risks may simply turn out to be too unlikely, or too far in the future for us to do something about them at this point. I do not currently believe either of those (see here and here for more), and hence do work on s-risks, but it is possible that I will eventually conclude that s-risks should not be a top priority for one of those reasons.

Comment by tobias_baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T16:28:42.703Z · score: 10 (5 votes) · EA · GW

I would refer to this elaborate comment by Magnus Vinding on a very similar question. Like Magnus, I think a common misconception is that suffering-focused views have certain counterintuitive or even dangerous implications (e.g. relating to world destruction), when in fact those problematic implications do not follow.

Suffering-focused ethics is also still sometimes associated with negative utilitarianism (NU). While NU counts as a suffering-focused view, this association often fails to appreciate the breadth of possible suffering-focused views, including pluralist and even non-consequentialist views. Most suffering-focused views are not as ‘extreme’ as pure negative utilitarianism and are far more compatible with widely shared moral intuitions. (Cf. this recent essay for an overview.)

Last, and related to this, there is a common perception of suffering-focused views as unusual or ‘fringe’, when they in fact enjoy significant support (in various forms).

Comment by tobias_baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T16:27:35.442Z · score: 11 (6 votes) · EA · GW

Great question! I think both moral and factual disagreements play a significant role. David Althaus suggests a quantitative approach of distinguishing between the “N-ratio”, which measures how much weight one gives to suffering vs. happiness, and the “E-ratio”, which refers to one’s empirical beliefs regarding the ratio of future happiness and suffering. You could prioritise s-risk because of a high N-ratio (i.e. suffering-focused values) or because of a low E-ratio (i.e. pessimistic views of the future).
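To spell out the arithmetic of that framing (my own gloss, with illustrative symbols): write $N$ for the moral weight one places on a unit of suffering relative to a unit of happiness, and $E$ for the expected ratio of future happiness $H$ to future suffering $S$. The expected future is then net-negative by one's own lights exactly when $H - N \cdot S < 0$, i.e. when $E = H/S < N$ - so a high N-ratio and a low E-ratio are interchangeable routes to prioritising s-risks.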

That suggests that moral and factual disagreements are comparably important. But if I had to decide, I’d guess that moral disagreements are the bigger factor, because there is perhaps more convergence (not necessarily a high degree in absolute terms) on empirical matters. In my experience, many who prioritise suffering reduction still agree to some extent with some arguments for optimism about the future (although not with extreme versions, like claiming that the ratio is “1000000 to 1”, or that the future will automatically be amazing if we avoid extinction). For instance, if you were to combine my factual beliefs with the values of, say, Will MacAskill, then I think the result would probably not consider s-risks a top priority (though still worthy of some concern).

In addition, I am increasingly thinking that “x-risk vs s-risk” is perhaps a false dichotomy, and thinking in those terms may not always be helpful (despite having written much on s-risks myself). There are far more ways to improve the long-term future than this framing suggests, and we should look for interventions that steer the future in robustly positive directions.

Comment by tobias_baumann on The case of the missing cause prioritisation research · 2020-08-19T09:07:41.897Z · score: 5 (3 votes) · EA · GW

Yeah, I would perhaps say that the community has historically been too narrowly focused on a small number of causes. But I think this has been improving for a while, and we're now close to the right balance. (There is also a risk of being too broad, by calling too many causes important and not prioritising enough.)

Comment by tobias_baumann on The case of the missing cause prioritisation research · 2020-08-16T16:42:33.401Z · score: 10 (8 votes) · EA · GW

Thanks for writing this up! I think you're raising many interesting points, especially about a greater focus on policy and going "beyond speculation".

However, I'm more optimistic than you are about the degree of work invested in cause prioritisation, and the ensuing progress we've seen over the last years. See this recent comment of mine - I'd be curious if you find those examples convincing.

Also, speaking as someone who is working on this myself, there is quite a bit of research on s-risks and cause prioritisation from a suffering-focused perspective, which is one form of "different views" - though perhaps this is not what you had in mind. (I think it might be good to clarify in more detail what sort of work you want to see, because the term "cause prioritisation research" may mean very different things to different people.)

Comment by tobias_baumann on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-13T11:48:39.909Z · score: 48 (21 votes) · EA · GW

I think there haven’t been any novel major insights since 2015, for your threshold of “novel” and “major”.

Notwithstanding that, I believe that we’ve made significant progress and that work on macrostrategy was and continues to be valuable. Most of that value is in many smaller insights, or in the refinement and diffusion of ideas that aren’t strictly speaking novel. For instance:

  • The recent work on patient longtermism seems highly relevant and plausibly meets the bar for being “major”. This isn’t novel - Robin Hanson wrote about it in 2011, and Benjamin Franklin arguably implemented the idea in 1790 - but I still think that it’s a significant contribution. (There is a big difference between an idea being mentioned somewhere, possibly in very “hidden” places, and that idea being sufficiently widespread in the community to have a real impact.)
  • Effective altruists are now considering a much wider variety of causes than in 2015 (see e.g. here). Perhaps none of those meet your bar for being “major”, but I think that the “discovery” (scare quotes because probably none of those is the first mention) of causes such as Reducing long-term risks from malevolent actors, invertebrate welfare, or space governance constitutes significant progress. S-risks have also gained more traction, although again the basic idea is from before 2015.
  • Views on the future of artificial intelligence have become much more nuanced and diverse, compared to the relatively narrow focus on the “Bostrom-Yudkowsky view” that was more prevalent in 2015. I think this does meet the bar for “major”, although it is arguably not a single insight: relevant factors include takeoff speeds, whether AI is best thought of as a unified agent, or the likelihood of successful alignment by default. (And many critiques of the Bostrom-Yudkowsky view were written pre-2015, so it also isn't really novel.)

Comment by tobias_baumann on Common ground for longtermists · 2020-07-30T08:47:20.813Z · score: 3 (2 votes) · EA · GW

Thanks for the comment! I fully agree with your points.

People with and without suffering-focused ethics will agree on what to do in the present even more than would be expected from the above point alone. In particular, this is because many actions aimed at changing the long-term future in ways primarily valued by one of those groups of people will also happen to (in expectation) change the long-term future in other ways, which the other group values.

That's a good point. A key question is how fine-grained our influence over the long-term future is - that is, to what extent are there actions that only benefit specific values? For instance, if we think that there will not be a lock-in or transformative technology soon, it might be that the best lever over the long-term future is to try and nudge society in broadly positive directions, because trying to affect the long-term future is simply too "chaotic" for more specific attempts. (However, overall I think it's unclear if / to what extent that is true.)

Comment by tobias_baumann on Common ground for longtermists · 2020-07-30T08:33:33.174Z · score: 3 (2 votes) · EA · GW

Yeah, I meant it to be inclusive of this "portfolio approach". I agree that specialisation and comparative advantages (and perhaps also sheer motivation) can justify focusing on things that are primarily good based on one (set of) moral perspectives.

Comment by tobias_baumann on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T09:58:55.005Z · score: 1 (1 votes) · EA · GW

That seems plausible and is also consistent with Amara's law (the idea that the impact of technology is often overestimated in the short run and underestimated in the long run).

I'm curious how likely you think it is that productivity growth will be significantly higher (i.e. levels at least comparable with electricity) for any reason, not just AI. I wouldn't give this much more than 50%, as there is also some evidence that stagnation is on the cards (see e.g. 1, 2). But that would mean that you're confident that the cause of higher productivity growth, assuming that this happens, would be AI? (Rather than, say, synthetic biotechnology, or genetic engineering, or some other technological advance, or some social change resulting in more optimisation for productivity.)

While AI is perhaps the most plausible single candidate, it's still quite unclear, so I'd maybe say it's 25-30% likely that AI in particular will cause significantly higher levels of productivity growth this century.

Comment by tobias_baumann on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T22:15:51.958Z · score: 1 (1 votes) · EA · GW

I agree that it's tricky, and am quite worried about how the framings we use may bias our views on the future of AI. I like the GDP/productivity growth perspective but feel free to answer the same questions for your preferred operationalisation.

Another possible framing: given a crystal ball showing the future, how likely is it that people would generally say that AI is the most important thing that happens this century?

As one operationalization, then, suppose we were to ask an economist in 2100: "Do you think that the counterfactual contribution of AI to American productivity growth between 2010 and 2100 was at least as large as the counterfactual contribution of electricity to American productivity growth between 1900 and 1940?" I think that the economist would probably agree -- let's say, 50% < p < 75% -- but I don't have a very principled reason for thinking this and might change my mind if I thought a bit more.

Interesting. So you generally expect (well, with 50-75% probability) AI to become a significantly bigger deal, in terms of productivity growth, than it is now? I have not looked into this in detail but my understanding is that the contribution of AI to productivity growth right now is very small (and smaller than that of electricity).

If yes, what do you think causes this acceleration? It could simply be that AI is early-stage right now, akin to electricity in 1900 or earlier, and the large productivity gains arise when key innovations diffuse through society on a large scale. (However, many forms of AI are already widespread.) Or it could be that progress in AI itself accelerates, or perhaps linear progress in something like "general intelligence" translates to super-linear impact on productivity.

Comment by tobias_baumann on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-16T22:58:49.528Z · score: 8 (5 votes) · EA · GW

What is your overall probability that we will, in this century, see progress in artificial intelligence that is at least as transformative as the industrial revolution?

What is your probability for the more modest claim that AI will be at least as transformative as, say, electricity or railroads?

Comment by tobias_baumann on Space governance is important, tractable and neglected · 2020-07-10T12:17:19.376Z · score: 8 (3 votes) · EA · GW

I also recently wrote up some thoughts on this question, though I didn't reach a clear conclusion either.

Comment by tobias_baumann on Max_Daniel's Shortform · 2020-06-30T16:31:20.129Z · score: 3 (2 votes) · EA · GW

This could be relevant. It's not about the exact same question (it looks at the distribution of future suffering, not of impact) but some parts might be transferable.

Comment by tobias_baumann on Problem areas beyond 80,000 Hours' current priorities · 2020-06-29T21:03:18.047Z · score: 1 (1 votes) · EA · GW

Great stuff, thanks!

Comment by tobias_baumann on Representing future generations in the political process · 2020-06-27T08:08:37.741Z · score: 3 (2 votes) · EA · GW

Hi Michael,

thanks for the comment!

Could you expand on what you mean by the first part of that sentence, and what makes you say that?

I just meant that proposals to represent future non-human animals will likely gain less traction than the idea of representing future humans. But I agree that it would be perfectly possible to do it (as you say). And of course I'd be strongly in favour of having a Parliamentary Committee for all Future Sentient Beings or something like that, but again, that's not politically feasible anytime soon. So we have to find a sweet spot where a proposal is both realistic and would be a significant improvement from our perspective.

It seems we could analogously subsidize liquid prediction markets for things like the results in 2045, conditional on passing X or Y policy, of whatever our best metrics are for the welfare or preference-satisfaction of animals, or of AIs whose experiences matter but who aren't moral agents. And then people could say things like "The market expects that [proxy] will indicate that [group of moral patients] will be better off in 2045 if we pass [policy X] than if we pass [policy Y]."
Of course, coming up with such metrics is hard, but that seems like a problem we'll want to fix anyway.

I agree, and I'd be really excited about such prediction markets! However, perhaps the case of nonhuman animals differs in that it is often quite clear what policies would be better for animals (e.g. better welfare standards), whether it's current or future animals, and the bottleneck is just the lack of political will to do it. (But it would be valuable to know more about which policies would be most important - e.g. perhaps such markets would say that funding cultivated meat research is 10x as important as other reforms.)

By contrast, it seems less clear what we could do now to benefit future moral agents (seeing as they'll be able to decide for themselves what to do), so perhaps there is more of a need for prediction markets.

Comment by tobias_baumann on Representing future generations in the political process · 2020-06-26T22:01:05.828Z · score: 4 (3 votes) · EA · GW

Hi Tyler,

thanks for the detailed and thoughtful comment!

I find much less compelling the idea that "if there is the political will to seriously consider future generations, it’s unnecessary to set up additional institutions to do so," and "if people do not care about the long-term future," they would not agree to such measures. The main reason I find this uncompelling is just that it overgenerates in very implausible ways. Why should women have the vote? Why should discrimination be illegal?

Yeah, I agree that there are plenty of reasons why institutional reform could be valuable. I didn't mean to endorse that objection (at least not in a strong form). I like your point about how longtermist institutions may shift norms and attitudes.

I don't know if you meant to narrow in on only those reforms I mention which attempt to create literal representation of future generations or if you meant to bring into focus all attempts to ameliorate political short-termism.

I mostly had the former in mind when writing the post, though other attempts to ameliorate short-termism are also plausibly very important.

I'm glad to see CLR take something of an interest in this topic

Might just be a typo but this post is by CRS (Center for Reducing Suffering), not CLR (Center on Long-Term Risk). (It's easy to mix up because CRS is new, CLR recently re-branded, and both focus on s-risks.)

As a classical utilitarian, I'm also not particularly bothered by the philosophical problems you set out above, but some of these problems are the subject of my dissertation and I hope that I have some solutions for you soon.

Looking forward to reading it!

Comment by tobias_baumann on Space governance is important, tractable and neglected · 2020-06-26T11:20:53.656Z · score: 2 (2 votes) · EA · GW

Hey Jamie, thanks for the pointer! I wasn't aware of this.

Another relevant critique of whether colonisation is a good idea is Daniel Deudney's new book Dark Skies.

I myself have also written up some more thoughts on space colonisation in the meantime and have become more sceptical about the possibility of large-scale space settlement happening anytime soon.

Comment by tobias_baumann on Wild animal suffering video course · 2020-06-24T16:14:51.762Z · score: 3 (3 votes) · EA · GW

Great work, thanks for sharing!

Comment by tobias_baumann on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T16:42:08.279Z · score: 47 (22 votes) · EA · GW

Great post - I think it's extremely important to explore many different problem areas!

Some further plausible (in my opinion) candidates are shaping genetic enhancement, reducing long-term risks from malevolent actors, invertebrate welfare and space governance.

Comment by tobias_baumann on EA considerations regarding increasing political polarization · 2020-06-20T11:38:32.070Z · score: 44 (16 votes) · EA · GW

Great work, thanks for writing this up! I agree that excessive polarisation is an important issue and warrants more EA attention. In particular, polarisation is an important risk factor for s-risks.

Political polarization, as measured by political scientists, has clearly gone up in the last 20 years.

It is worth noting that this is a US-centric perspective and the broader picture is more mixed, with polarisation increasing in some countries and decreasing in others.

If there’s more I’m missing, feel free to provide links in the comment section.

Olaf van der Veen has written a thesis on this, analysing four possible interventions to reduce polarisation: (1) switching from FPTP to proportional representation, (2) making voting compulsory, (3) increasing the presence of public service broadcasting, and (4) creating deliberative citizens' assemblies. Olaf's takeaway (as far as I understand it) is that those interventions seem compelling and fairly tractable but the evidence of possible impacts is often not very strong.

I myself have also written about electoral reform as a possible way to reduce polarisation, and malevolent individuals in power also seem closely related to increased polarisation.

Comment by tobias_baumann on Timeline of the wild-animal suffering movement · 2020-06-16T12:16:35.335Z · score: 2 (2 votes) · EA · GW

Amazing work, thanks for writing this up!

Comment by tobias_baumann on How Much Leverage Should Altruists Use? · 2020-05-23T20:42:34.972Z · score: 1 (1 votes) · EA · GW

The drawdowns of major ETFs on these asset classes (e.g. EMB / JNK) during the corona crash or 2008 were roughly 2/3 to 3/4 of how much stocks (the S&P 500) went down. So I agree the diversification benefit is limited. The question, bracketing the extra cost of leverage, is whether the positive EV of emerging markets bonds / high-yield bonds is more or less than 2/3 to 3/4 of the positive EV of stocks. That's pretty hard to say - there's a lot of uncertainty on both sides. But if that is the case and one can borrow at very good rates (e.g. through futures or box spread financing), then the best portfolio should be a levered-up combination of bonds & stocks rather than just stocks.
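To make the trade-off concrete, here is a rough back-of-the-envelope sketch (all numbers are assumptions picked from the favourable end of the ranges above, not estimates):

```python
# Back-of-the-envelope comparison of 100% stocks vs. a levered stock/bond mix.
# Assumptions (illustrative only): stocks earn a 5% expected excess return with
# a 34% crash drawdown; high-yield / EM bonds deliver 3/4 of the return and
# 2/3 of the drawdown; borrowing via futures or box spreads costs 0.5% p.a.
stock_ret, stock_dd = 0.05, 0.34
bond_ret, bond_dd = 0.75 * stock_ret, 0.67 * stock_dd
borrow_spread = 0.005

w_stock, w_bond = 0.5, 0.5  # unlevered mix

# Scale leverage so the (crudely additive) drawdown matches 100% stocks.
leverage = stock_dd / (w_stock * stock_dd + w_bond * bond_dd)
mix_ret = (leverage * (w_stock * stock_ret + w_bond * bond_ret)
           - (leverage - 1) * borrow_spread)

print(f"stocks only: excess return {stock_ret:.2%}, crash drawdown {stock_dd:.0%}")
print(f"levered mix: excess return {mix_ret:.2%}, crash drawdown {stock_dd:.0%} "
      f"(leverage {leverage:.2f}x)")
```

Under these assumptions the levered mix comes out slightly ahead of stocks alone; if the bonds' EV ratio falls below their drawdown ratio, or borrowing costs rise, the conclusion flips - which is exactly the uncertainty described above.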

FWIW, I'm in a similar position regarding my personal portfolio; I've so far not invested in these asset classes but am actively considering it.

Comment by tobias_baumann on How Much Leverage Should Altruists Use? · 2020-05-18T08:57:18.207Z · score: 1 (1 votes) · EA · GW

What are your thoughts on high-yield corporate bonds or emerging markets bonds? These bonds offer non-zero interest rates but of course also entail higher risk. Also, these markets aren't (to my knowledge) distorted by the Fed buying huge amounts of bonds.

Theoretically, there should be some diversification benefit from adding this kind of bond, though it's all positively correlated. But unfortunately, ETFs on these kinds of bonds have much higher fees.

Comment by tobias_baumann on How should longtermists think about eating meat? · 2020-05-17T10:29:58.725Z · score: 33 (22 votes) · EA · GW

Peter's point is that it makes a lot of sense to have certain norms about not causing serious direct harm, and that one should arguably follow such norms rather than relying on some complex longtermist cost-benefit analysis.

Put differently, I think it is very important, from a longtermist perspective, to advance the idea that animals matter and that we consequently should not harm them (particularly for reasons as frivolous as eating meat).

Comment by tobias_baumann on Helping wild animals through vaccination: could this happen for coronaviruses like SARS-CoV-2? · 2020-05-13T11:13:37.980Z · score: 4 (4 votes) · EA · GW

Great post, thanks for writing this up!

Comment by tobias_baumann on Reducing long-term risks from malevolent actors · 2020-05-07T07:49:40.329Z · score: 2 (2 votes) · EA · GW

Thanks for commenting!

I agree that early detection in children is an interesting idea. If certain childhood behaviours can be shown to reliably predict malevolence, then this could be part of a manipulation-proof test. However, as you say, there are many pitfalls to be avoided.

I am not well versed in the literature but my impression is that things like torturing animals, bullying, general violence, or callous-unemotional personality traits (as assessed by others) are somewhat predictive of malevolence. But the problem is that you'll probably also get many false positives from those indicators.
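To illustrate why false positives loom so large, here is a quick Bayes-rule calculation (all numbers are made-up placeholders, not estimates from the literature):

```python
# Bayes-rule illustration: even a decent behavioural indicator flags far more
# non-malevolent people than malevolent ones when the base rate is low.
base_rate = 0.01       # assumed prevalence of strongly malevolent traits
sensitivity = 0.80     # P(indicator present | malevolent)
false_positive = 0.10  # P(indicator present | not malevolent)

ppv = (sensitivity * base_rate) / (
    sensitivity * base_rate + false_positive * (1 - base_rate))
print(f"P(malevolent | indicator present) = {ppv:.1%}")  # roughly 7.5%
```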

Regarding environmental or developmental interventions, we write this in Appendix B:

Malevolent personality traits are plausibly exacerbated by adverse (childhood) environments—e.g. ones rife with abuse, bullying, violence or poverty (cf. Walsh & Wu, 2008). Thus, research to identify interventions to improve such environmental factors could be valuable. (However, the relevant areas appear to be very crowded. Also, the shared environment appears to have a rather small effect on personality, including personality disorders (Knopik et al., 2018, ch. 16; Johnson et al., 2008; Plomin, 2019; Torgersen, 2009).)

Perhaps improving parenting standards and childhood environments could actually be a fairly promising EA cause. For instance, early advocacy against hitting children may have been a pretty effective lever to make society more civilised and less violent in general.

Comment by tobias_baumann on Reducing long-term risks from malevolent actors · 2020-05-02T16:14:08.885Z · score: 8 (5 votes) · EA · GW

Thanks for the comment!

I would guess that having better tests of malevolence, or even just a better understanding of it, may help with this problem. Perhaps a takeaway is that we should not just raise awareness (which can backfire via “witch hunts”), but instead try to improve our scientific understanding and communicate that to the public, which hopefully makes it harder to falsely accuse people.

In general, I don’t know what can be done about people using any means necessary to smear political opponents. It seems that the way to address this is to have good norms favoring “clean” political discourse, and good processes to find out whether allegations are true; but it’s not clear what can be done to establish such norms.

Comment by tobias_baumann on What is a good donor advised fund for small UK donors? · 2020-04-29T14:11:22.008Z · score: 11 (6 votes) · EA · GW

See here for a very similar question (and answers): https://forum.effectivealtruism.org/posts/ihDhDt375xHf9wBCo/uk-donor-advised-funds

Comment by tobias_baumann on Adapting the ITN framework for political interventions & analysis of political polarisation · 2020-04-28T10:41:39.387Z · score: 20 (11 votes) · EA · GW

Great work, thanks for sharing! It's great to see this getting more attention in EA.

Just for those deciding whether to read the full thesis: it analyses four possible interventions to reduce polarisation: (1) switching from FPTP to proportional representation, (2) making voting compulsory, (3) increasing the presence of public service broadcasting, and (4) creating deliberative citizens' assemblies. Olaf's takeaway (as far as I understand it) is that those interventions seem compelling and fairly tractable but the evidence of possible impacts is often not very strong.

Comment by tobias_baumann on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-15T21:33:33.600Z · score: 5 (4 votes) · EA · GW

Well, historically, there have been quite a few pandemics that killed more than 10% of people, e.g. the Black Death or Plague of Justinian. There's been no pandemic that killed everyone.

Is your point that it's different for anthropogenic risks? Then I guess we could look at wars for historic examples. Indeed, there have been wars that killed something on the order of 10% of people, at least in the warring nations, and IMO that is a good argument to take the risk of a major war quite seriously.

But there have been far more wars that killed fewer people, and none that caused extinction. The literature usually models the number of casualties as a Pareto distribution, which means that the probability density is monotonically decreasing in the number of deaths. (For a broader reference class of atrocities, genocides, civil wars etc., I think the picture is similar.)
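As a minimal illustration of what a Pareto tail implies (the shape parameters below are placeholders, not fitted values):

```python
# For a Pareto tail P(deaths >= x) proportional to x**(-alpha), the relative
# frequency of catastrophes killing >=0.1% vs. >=10% of people is 100**alpha.
for alpha in (0.5, 1.0, 1.5):
    ratio = (0.10 / 0.001) ** alpha
    print(f"alpha={alpha}: events killing >=0.1% are {ratio:.0f}x as likely "
          f"as events killing >=10%")
```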

But we don't in fact see lots of unknown risks killing even 0.1% of the population.

Smoking, lack of exercise, and unhealthy diets each kill more than 0.1% of the population each year. Coronavirus may kill 0.1% in some countries. The advent of cars in the 20th century resulted in 60 million road deaths, which is maybe 0.5% of everyone alive over that time (I haven't checked this in detail). That can be seen as an unknown risk from the perspective of someone in 1900. Granted, some of those are more gradual than the sort of catastrophe people have in mind - but actually I'm not sure why that matters.

Looking at individual nations, I'm sure you can find many examples of civil wars, famines, etc. killing 0.1% of the population of a certain country, but far fewer examples killing 10% (though there are some). I'm not claiming the latter is 100x less likely but it is clearly much less likely.

You could have made the exact same argument in 1917, in 1944, etc. and you would have been wildly wrong.

I don't understand this. What do you think the exact same argument would have been, and why was that wildly wrong?

Comment by tobias_baumann on Coronavirus and non-humans: How is the pandemic affecting animals used for human consumption? · 2020-04-08T20:59:29.491Z · score: 6 (2 votes) · EA · GW

Interesting, thanks!

However, I disagree with the idea that coronavirus doesn't have anything to do with animal farming.

Yeah, I wrote this based on having read that the origins of coronavirus involved bats. After reading more, it seems not that simple because farmed animals may have enabled the virus to spread between species.

Comment by tobias_baumann on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-07T13:26:28.610Z · score: 5 (9 votes) · EA · GW

I haven't looked at this in much detail, but Ord's estimates seem too high to me. It seems really hard for humanity to go extinct, considering that there are people in remote villages, people in submarines, people in mid-flight at the time a disaster strikes, and even people on the International Space Station. (And yes, there are women on the ISS, I looked that up.) I just don't see how e.g. a pandemic would plausibly kill all those people.

Also, if engineered pandemics, or "unforeseen" and "other" anthropogenic risks have a chance of 3% each of causing extinction, wouldn't you expect to see smaller versions of these risks (that kill, say, 10% of people, but don't result in extinction) much more frequently? But we don't observe that.

(I haven't read Ord's book so I don't know if he addresses these points.)

Comment by tobias_baumann on Coronavirus and non-humans: How is the pandemic affecting animals used for human consumption? · 2020-04-07T11:01:45.280Z · score: 6 (4 votes) · EA · GW

Great work, thanks for writing this up!

I'm wondering how this might affect the public debate on factory farming. Animal advocates sometimes argue that factory farms contribute to antibiotic resistance, and this point may carry much more force in the future. So perhaps one key conclusion is that advocates should emphasise this angle more in the future. (That said, AFAIK the coronavirus doesn't have anything to do with farmed animals, and my impression from a quick Google search is that the issue of antibiotic resistance is manageable with the right regulations.)

Comment by tobias_baumann on Effective Altruism and Free Riding · 2020-03-29T22:33:20.592Z · score: 20 (8 votes) · EA · GW

Interesting, thanks for writing this up!

In practice, and for the EA community in particular, I think there are some reasons why the collective action problem isn't quite as bad as it may seem. For instance, with diminishing marginal returns on causes, the most efficient allocation will be a portfolio of interventions with weights roughly proportional to how much people care on average. But something quite similar can also happen in the non-cooperative equilibrium, given a sufficient diversity of actors who each support the cause they're most excited about. (Maybe this is similar to case D in your analysis.)
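Here is a toy sketch of that comparison (my own illustration, assuming logarithmic returns to funding and equal budgets; none of the numbers come from the post):

```python
# Toy model: planner-optimal allocation under log returns vs. a naive
# decentralised allocation where each actor funds only their favourite cause.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_causes = 1000, 5

# Hypothetical preference weights: how much each agent cares about each cause.
weights = rng.dirichlet(np.ones(n_causes) * 0.5, size=n_agents)

# With log returns, total (average) utility is maximised by funding each cause
# in proportion to the average weight people place on it.
planner = weights.mean(axis=0)

# Decentralised benchmark: every agent gives their whole budget to the single
# cause they are most excited about.
favourites = weights.argmax(axis=1)
decentralised = np.bincount(favourites, minlength=n_causes) / n_agents

print("planner-optimal shares:   ", np.round(planner, 3))
print("everyone-funds-favourite: ", np.round(decentralised, 3))
```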

Can you point to examples of concrete EA causes that you think get too much or too little resources due to these collective action problems?

Comment by tobias_baumann on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-18T09:46:21.759Z · score: 12 (5 votes) · EA · GW

How many resources do you think the EAA movement (and ACE in particular) should invest in animal causes that are less "mainstream", such as invertebrate welfare or wild animal suffering?

What would convince you that it should be more (or less) of a focus?

Comment by tobias_baumann on Harsanyi's simple “proof” of utilitarianism · 2020-02-21T13:14:41.810Z · score: 2 (2 votes) · EA · GW

You're right; I meant to refer to the violation of individual rationality. Thanks!

Comment by tobias_baumann on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T17:03:36.630Z · score: 11 (7 votes) · EA · GW

Thanks for writing this up! I agree that this result is interesting, but I find it unpersuasive as a normative argument. Why should morality be based on group decision-making principles? Why should I care about VNM rationality of the group?

Also, you suggest that this result lends support to common EA beliefs. I'm not so sure about that. First, it leads to preference utilitarianism, not hedonic utilitarianism. Second, EAs tend to value animals and future people, but they would arguably not count as part of the "group" in this framework(?). Third, I'm not sure what this tells you about the creation or non-creation of possible beings (cf. the asymmetry in population ethics).

Finally, it's worth pointing out that you could also start with different assumptions and get very different results. For instance, rather than demanding that the group is VNM rational, one could consider rational individuals in a group who bargain over what to do, and then look at bargaining solutions. And it turns out that the utilitarian approach of adding up utilities is *not* a bargaining solution, because it violates Pareto-optimality in some cases. Does that "disprove" total utilitarianism?

(Using e.g. the Nash bargaining solution with many participants probably leads to some form of prioritarianism or egalitarianism, because you'd have to ensure that everyone benefits.)
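A toy numerical example of that point (my own construction): with disagreement utilities of 5 each and a feasible frontier 3·u1 + u2 ≤ 30, maximising the sum of utilities leaves agent 1 below their disagreement point - the individual-rationality issue referred to in the comment above - whereas the Nash solution does not.

```python
# Toy bargaining problem: utilitarian sum vs. Nash bargaining solution.
import numpy as np

d1, d2 = 5.0, 5.0                 # disagreement utilities
u1 = np.linspace(0, 10, 10001)
u2 = 30 - 3 * u1                  # efficient frontier of the feasible set

# Utilitarian choice: maximise u1 + u2 over the frontier.
util_idx = np.argmax(u1 + u2)

# Nash solution: maximise the product of gains over the disagreement point,
# restricted to individually rational points (u1 >= d1, u2 >= d2).
rational = (u1 >= d1) & (u2 >= d2)
nash_idx = np.argmax(np.where(rational, (u1 - d1) * (u2 - d2), -np.inf))

print("utilitarian optimum:", (u1[util_idx], u2[util_idx]))  # (0, 30): u1 < d1
print("Nash solution:      ", (round(u1[nash_idx], 2), round(u2[nash_idx], 2)))
```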

Comment by tobias_baumann on Thoughts on electoral reform · 2020-02-19T10:28:30.224Z · score: 9 (6 votes) · EA · GW

I'm not entirely convinced that VSE is the right approach. It's theoretically appealing, but practical considerations, like perceptions of the voting process and public acceptance / "legitimacy" of the result, might be more important. Voters aren't utilitarian robots.

I was aware of the simulations you mentioned but I didn't check them in detail. I suspect that these results are very sensitive to model assumptions, such as tactical voting behaviour. But it would be interesting to see more work on VSE.
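For readers unfamiliar with the metric, here is a minimal sketch of how VSE is computed (impartial-culture utilities and honest plurality voting only; real simulations vary exactly these assumptions, which is where the sensitivity worry comes in):

```python
# Minimal VSE sketch: VSE = (E[u(method's winner)] - E[u(random winner)]) /
#                           (E[u(best candidate)]  - E[u(random winner)]).
import numpy as np

rng = np.random.default_rng(0)
n_elections, n_voters, n_candidates = 2000, 99, 5

best, chosen, random_w = [], [], []
for _ in range(n_elections):
    u = rng.random((n_voters, n_candidates))  # voter-by-candidate utilities
    totals = u.sum(axis=0)
    winner = np.bincount(u.argmax(axis=1), minlength=n_candidates).argmax()
    best.append(totals.max())
    chosen.append(totals[winner])
    random_w.append(totals.mean())            # expected utility of a random winner

vse = (np.mean(chosen) - np.mean(random_w)) / (np.mean(best) - np.mean(random_w))
print(f"honest plurality VSE under these assumptions: {vse:.2f}")
```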

What EAs definitely shouldn't do, in my opinion, is to spend considerable resources discrediting those alternatives to one's own preferred system, as FairVote has repeatedly done with respect to approval voting. Much more is gained by displacing plurality than is lost by replacing it with a suboptimal alternative (for all reasonable alternatives to plurality).

Strongly agree with this!

Comment by tobias_baumann on Should Longtermists Mostly Think About Animals? · 2020-02-04T11:14:56.915Z · score: 11 (8 votes) · EA · GW

If you think animals on average have net-negative lives, the primary value in preventing x-risks might not be ensuring human existence for humans’ sake, but rather ensuring that humans exist into the long-term future to steward animal welfare, to reduce animal suffering, and to move all animals toward having net-positive lives.

This assumes that (future) humans will do more to help animals than to harm them. I think many would dispute that, considering how humans usually treat animals (in the past and now). It is surely possible that future humans would be much more compassionate and act to reduce animal suffering, but it's far from clear, and it's also quite possible that there will be something like factory farming on an even larger scale.