Posts

How can we reduce s-risks? 2021-01-29T15:46:30.552Z
Longtermism and animal advocacy 2020-11-11T17:44:34.882Z
Thoughts on patient philanthropy 2020-09-08T12:00:46.399Z
AMA: Tobias Baumann, Center for Reducing Suffering 2020-09-06T10:45:10.187Z
Common ground for longtermists 2020-07-29T10:26:50.727Z
Representing future generations in the political process 2020-06-25T15:31:39.402Z
Reducing long-term risks from malevolent actors 2020-04-29T08:55:38.809Z
Thoughts on electoral reform 2020-02-18T16:23:27.829Z
Space governance is important, tractable and neglected 2020-01-07T11:24:38.136Z
How can we influence the long-term future? 2019-03-06T15:31:43.683Z
Risk factors for s-risks 2019-02-13T17:51:37.632Z
Why I expect successful (narrow) alignment 2018-12-29T15:46:04.947Z
A typology of s-risks 2018-12-21T18:23:05.249Z
Thoughts on short timelines 2018-10-23T15:59:41.415Z
S-risk FAQ 2017-09-18T08:05:39.850Z
Strategic implications of AI scenarios 2017-06-29T07:31:27.891Z

Comments

Comment by Tobias_Baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2021-06-17T08:30:23.158Z · EA · GW

Thanks! I've started an email thread with you, me, and David.

Comment by Tobias_Baumann on How can we reduce s-risks? · 2021-02-01T22:37:38.046Z · EA · GW

Thanks for the comment; this raises a very important point.

I am indeed fairly optimistic that thoughtful forms of MCE (moral circle expansion) are positive regarding s-risks, although this qualifier of "in the right way" should be taken very seriously - I'm much less sure whether, say, funding PETA is positive. I also prefer to think in terms of how MCE could be made robustly positive, and to distinguish between different possible forms of it, rather than trying to make a generalised statement for or against MCE.

This is, however, not a very strongly held view (despite my having thought a lot about it), in light of great uncertainty and also some degree of peer disagreement (other researchers being less sanguine about MCE).

Comment by Tobias_Baumann on Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure · 2020-12-19T15:52:28.092Z · EA · GW

'Longtermism' just says that improving the long-term future matters most, but it does not specify a moral view beyond that. So you can be a longtermist and focus on averting extinction, or you can be a longtermist and focus on preventing suffering (cf. suffering-focused ethics); or you can have some other notion of "improving". Most people who are both longtermist and suffering-focused work on preventing s-risks.

That said, despite endorsing suffering-focused ethics myself, I think it's not helpful to frame this as "not caring" about existential risks; there are many good reasons for cooperation with other value systems.

Comment by Tobias_Baumann on Longtermism and animal advocacy · 2020-11-18T11:54:25.438Z · EA · GW

I'm somewhat less optimistic; even if most would say that they endorse this view, I think many "dedicated EAs" are in practice still biased against nonhumans, if only subconsciously. I think we should expect speciesist biases to be pervasive, and they won't go away entirely just by endorsing an abstract philosophical argument. (And I'm not sure if "most" endorse that argument to begin with.)

Comment by Tobias_Baumann on some concerns with classical utilitarianism · 2020-11-16T10:06:59.559Z · EA · GW

Fair point - the "we" was something like "people in general". 

Comment by Tobias_Baumann on Thoughts on electoral reform · 2020-11-15T22:06:18.843Z · EA · GW

This makes IRV a really bad choice. IRV results in a two-party system just like plurality voting does.

I agree that having a multi-party system might be most important, but I don't think IRV necessarily leads to a two-party system. For instance, French presidential elections feature far more than two parties (though they're using a two-round system rather than IRV).

Everything is subject to tactical voting (except maybe SODA? but I don't understand that argument). So I don't see this as a point against approval voting in particular.

I think that approval voting has significantly more serious tactical voting problems than IRV. Sure, they all violate some criteria, but the question is how serious the resulting issues are in practice. IRV seems to be fine based on e.g. Australia's experience. (Of course, we don't really know how good or bad approval voting would be, because it is rarely used in competitive elections.)

Comment by Tobias_Baumann on some concerns with classical utilitarianism · 2020-11-15T21:49:46.680Z · EA · GW

Great post - thanks a lot for writing this up! 

It's quite remarkable how we hold ideas to different standards in different contexts. Imagine, for instance, a politician who openly endorses CU. Her opponents would immediately attack the worst implications: "So you would torture a child in order to create ten new brains that experience extremely intense orgasms?" The politician, being honest, says yes, and that's the end of her career.

By contrast, EA discourse and philosophical discourse are strikingly lenient when it comes to counterintuitive implications of such theories. (I'm not saying anything about which standards are better, and of course this does not only apply to CU.)

Comment by Tobias_Baumann on Thoughts on whether we're living at the most influential time in history · 2020-11-05T10:54:52.285Z · EA · GW

The key thing is that the way I’m setting priors is as a function from populations to credences: for any property F, your prior should be such that if there are n people in a population, the probability that you are in the m most F people in that population is m/n

The fact that I consider a certain property F should update me, though. This already demonstrates that F is something that I am particularly interested in, or that F is salient to me, which presumably makes it more likely that I am an outlier on F.

Also, this principle can have pretty strange implications depending on how you apply it. For instance, if I look at the population of all beings on Earth, it is extremely surprising (10^-12 or so) that I am a human rather than an insect. 
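
To make the structure of this prior explicit, here is a minimal sketch in Python; the population figures are rough assumptions for illustration only, and the exact order of magnitude of the "surprise" depends heavily on which beings and which estimates one counts:

```python
# Minimal sketch of the prior described above: the probability of being among
# the m most-F members of an n-member population is m / n. Population figures
# below are illustrative assumptions, not estimates endorsed in this thread.

def prior_prob(m, n):
    """Prior probability of being among the m most-F individuals out of n."""
    return m / n

humans = 8e9      # assumed current human population
insects = 1e19    # assumed insect population (estimates span orders of magnitude)

# Probability of being a human rather than an insect under a uniform prior over
# all beings on Earth; broader notions of "all beings" push this even lower.
print(prior_prob(humans, humans + insects))  # ~8e-10 under these assumptions
```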

Comment by Tobias_Baumann on Thoughts on whether we're living at the most influential time in history · 2020-11-05T10:42:12.261Z · EA · GW

I’m at a period of unusually high economic growth and technological progress

I think it's not clear whether higher economic growth or technological progress implies more influence. This claim seems plausible, but you could also argue that it might be easier to have an influence in a stable society (with little economic or technological change), e.g. simply because of higher predictability.

So, as I say in the original post and the comments, I update (dramatically) on my estimate of my influentialness, on the basis of these considerations. But by how much? Is it a big enough update to conclude that I should be spending my philanthropy this year rather than next, or this century rather than next century? I say: no.

I'm very sympathetic to patient philanthropy, but this seems to overstate the required amount of evidence. Taking into account that each era has donors (and other resources) of its own, and that there are diminishing returns to spending, you don't need to have extreme beliefs about your elevated influentialness to think that spending now is better. However, the arguments you gave are not very specific to 2020; presumably they still hold in 2100, so it stands to reason that we should invest at least over those timeframes (until we expect the period of elevated influentialness to end).

One reason for thinking that the update, on the basis of earliness, is not enough, is related to the inductive argument: that it would suggest that hunter-gatherers, or Medieval agriculturalists, could do even more direct good than we can. But that seems wrong. Imagine you can give an altruistic person at one of these times a bag of oats, or sell that bag today at market prices. Where would you do more good?

A bag of oats presumably represents much more relative wealth in those other times than now. The current price of oats is about GBP 120 per ton, so if the bag contains 50 kg, it's worth just GBP 6 today.

People in earlier times also had less 'competition'. Presumably the medieval person could have been the first to write up arguments for antispeciesism or animal welfare; or perhaps they could have had a significant impact on establishing science, increasing rationality, improving governance, etc.

(All things considered, I think it's not clear if earlier times are more or less influential.)

Comment by Tobias_Baumann on Thoughts on patient philanthropy · 2020-09-10T10:48:18.673Z · EA · GW

I was just talking about 30 years because those are the farthest-out US bonds. I agree that the horizon of patient philanthropists can be much longer.

Comment by Tobias_Baumann on Thoughts on patient philanthropy · 2020-09-09T22:00:12.349Z · EA · GW

Yeah, but even 30-year interest rates are low (1-2% at the moment). There is an Austrian 100-year bond paying 0.88%. I think that is significant evidence that something about the "patient vs impatient actors" story does not add up.
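
As a rough illustration of why such low yields sit uneasily with the patient-philanthropy story, a compounding sketch; the 5% comparison figure is an illustrative assumption, not something from this thread:

```python
# Rough compounding sketch: how GBP 1,000 grows over 100 years at the Austrian
# century-bond yield (0.88%) versus an assumed 5% equity-like return.

def compound(principal, annual_rate, years):
    """Future value with annual compounding."""
    return principal * (1 + annual_rate) ** years

print(compound(1000, 0.0088, 100))  # ~2,400 at the 100-year bond yield
print(compound(1000, 0.05, 100))    # ~131,500 at an assumed 5% return
```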

Comment by Tobias_Baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-09T14:41:11.701Z · EA · GW

It is fair to say that some suffering-focused views have highly counterintuitive implications, such as the one you mention. The misconception is just that this holds for all suffering-focused views. For instance, there are plenty of possible suffering-focused views that do not imply that happy humans would be better off committing suicide. In addition to preference-based views, one could value happiness but endorse the procreative asymmetry (so that lives above a certain threshold of welfare are considered OK even if there is some severe suffering), or one could be prioritarian or egalitarian in interpersonal contexts, which also avoids problematic conclusions about such tradeoffs. (Of course, those views may be considered unattractive for other reasons.)

I think views along these lines are actually fairly widespread among philosophers. It just so happens that suffering-focused EAs have often promoted other variants of SFE that do arguably have implications for intrapersonal tradeoffs that you consider counterintuitive (and I mostly agree that those implications are problematic, at least when taken to extremes), thus giving the impression that all or most suffering-focused views have said implications.

Comment by Tobias_Baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T23:12:41.270Z · EA · GW

Re: 1., there would be many more (thoughtful) people who share our concern about reducing suffering and s-risks (not necessarily with strongly suffering-focused values, but at least giving considerable weight to it). That would result in an ongoing research project on s-risks that goes beyond a few EAs (e.g. one that is also established in academia or other social movements).

Re: 2., a possible scenario is that suffering-focused ideas just never gain much traction, and consequently efforts to reduce s-risks will just fizzle out. However, I think there is significant evidence that at least an extreme version of this is not happening.

Re: 3., I think the levels of engagement and feedback we have received so far are encouraging. However, we do not currently have any procedures in place to measure impact, which is (as you say) incredibly hard for what we do. But of course, we are constantly thinking about what kind of work is most impactful!

Comment by Tobias_Baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:59:37.100Z · EA · GW

I would guess that actually experiencing certain possible conscious states, in particular severe suffering or very intense bliss, could significantly change my views, although I am not sure if I would endorse this as “reflection” or if it might lead to bias.

It seems plausible (but I am not aware of strong evidence) that experience of severe suffering generally causes people to focus more on it. However, I myself have fortunately never experienced severe suffering, so that would be a data point to the contrary.

Comment by Tobias_Baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:55:11.974Z · EA · GW

I was exposed to arguments for suffering-focused ethics from the start, since I was involved with German-speaking EAs (the Effective Altruism Foundation / Foundational Research Institute back then). I don’t really know why exactly (there isn’t much research on what makes people suffering-focused or non-suffering-focused), but this intuitively resonated with me.

I can’t point to any specific arguments or intuition pumps, but my views are inspired by writings such as the Case for Suffering-Focused Ethics, Brian Tomasik’s essays, and the work of Simon Knutsson and Magnus Vinding.

Comment by Tobias_Baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:40:02.252Z · EA · GW

I agree that s-risks can vary a lot (by many orders of magnitude) in terms of severity. I also think that this gradual nature of s-risks is often swept under the rug because the definition just uses a certain threshold (“astronomical scale”). There have, in fact, been some discussions about how the definition could be changed to ameliorate this, but I don’t think there is a clear solution. Perhaps talking about reducing future suffering, or preventing worst-case outcomes, can convey this variation in severity more than the term ‘s-risks’.

Regarding your second question, I wrote up this document a while ago on whether we should focus on worst-case outcomes, as opposed to suffering in median futures or 90th-percentile-badness futures (given that those are more likely than worst cases). However, this did not yield a clear conclusion, so I consider this an open question.

Comment by Tobias_Baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:28:00.310Z · EA · GW

One key difference is that there is less money in it, because OpenPhil, the biggest EA grantmaker, is not focused on reducing s-risks. In a certain sense, that is good news because work on s-risks is plausibly more funding-constrained than non-suffering-focused longtermism.

In terms of where to donate, I would recommend the Center on Long-Term Risk and the Center for Reducing Suffering (which I co-founded myself). Both of those organisations are doing crucial research on s-risk reduction. If you are looking for something a bit less abstract, you could consider Animal Ethics, the Good Food Institute, or Wild Animal Initiative.

Comment by Tobias_Baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:13:50.222Z · EA · GW

I think a plausible win condition is that society has some level of moral concern for all sentient beings (it doesn’t necessarily need to be entirely suffering-focused) as well as stable mechanisms to implement positive-sum cooperation or compromise. The latter guarantees that moral concerns are taken into account and possible gains from trade can be achieved. (An example of this could be cultivated meat, which allows us to reduce animal suffering while accommodating the interests of meat eaters.)

However, I think suffering reducers in particular should perhaps not focus on imagining best-case outcomes. It is plausible (though not obvious) that we should focus on preventing worst-case outcomes rather than shooting for utopian outcomes, as the difference in expected suffering between a worst-case and the median outcome may be much greater than the difference between the median outcome and the best possible future.

Comment by Tobias_Baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T22:00:00.002Z · EA · GW

I don’t think this view is necessary to prioritise s-risk. A finite but relatively high “trade ratio” between happiness and suffering can be enough to focus on s-risks. In addition, I think it’s more complicated than putting some numbers on happiness vs. suffering. (See here for more details.) For instance, one should distinguish between the intrapersonal and the interpersonal setting - a common intuition is that one man’s pain can’t be outweighed by another’s pleasure.

Another possibility is lexicality: one may contend that only certain particularly bad forms of suffering can’t be outweighed. You may find such views counterintuitive, but it is worth noting that lexicality can be multi-dimensional and need not involve abrupt breaks. It is, for instance, quite possible to hold the view that 1 minute of lava is ‘outweighable’ but 1 day is not. (I think I would not have answered “no amount can compensate” if it was about 1 minute.)

I also sympathise with the view mentioned by Jonas: that happiness matters mostly in so far as an existing being has a craving or desire to experience it. The question, then, is just how strong the desire to experience a certain timespan of bliss is. The poll was just about how I would do this tradeoff for myself, and it just so happens that abstract prospects of bliss do not evoke a very strong desire in me. It’s certainly not enough to accept a day of lava drowning - and that is true regardless of how long the bliss lasts. Your psychology may be different but I don’t think there’s anything inconsistent or illogical about my preferences.

Comment by Tobias_Baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T17:09:01.185Z · EA · GW

We have thought about this, and wrote up some internal documents, but have not yet published anything (though we might do that at some point, as part of a strategic plan). Magnus and I are quite aligned in our thinking about the theory of change. The key intended outcome is to catalyse a research project on how to best reduce suffering, both by creating relevant content ourselves and by convincing others to share our concerns regarding s-risks and reducing future suffering.

Comment by Tobias_Baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T16:37:50.266Z · EA · GW

Apart from the normative discussions relating to the suffering focus (cf. other questions), I think the most likely reasons are that s-risks may simply turn out to be too unlikely, or too far in the future for us to do something about them at this point. I do not currently believe either of those (see here and here for more), and hence do work on s-risks, but it is possible that I will eventually conclude that s-risks should not be a top priority for one of those reasons.

Comment by Tobias_Baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T16:28:42.703Z · EA · GW

I would refer to this elaborate comment by Magnus Vinding on a very similar question. Like Magnus, I think a common misconception is that suffering-focused views have certain counterintuitive or even dangerous implications (e.g. relating to world destruction), when in fact those problematic implications do not follow.

Suffering-focused ethics is also still sometimes associated with negative utilitarianism (NU). While NU counts as a suffering-focused view, this association fails to appreciate the breadth of possible suffering-focused views, including pluralist and even non-consequentialist ones. Most suffering-focused views are not as ‘extreme’ as pure negative utilitarianism and are far more compatible with widely shared moral intuitions. (Cf. this recent essay for an overview.)

Last, and related to this, there is a common perception of suffering-focused views as unusual or ‘fringe’, when they in fact enjoy significant support (in various forms).

Comment by Tobias_Baumann on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T16:27:35.442Z · EA · GW

Great question! I think both moral and factual disagreements play a significant role. David Althaus suggests a quantitative approach of distinguishing between the “N-ratio”, which measures how much weight one gives to suffering vs. happiness, and the “E-ratio”, which refers to one’s empirical beliefs regarding the ratio of future happiness and suffering. You could prioritise s-risk because of a high N-ratio (i.e. suffering-focused values) or because of a low E-ratio (i.e. pessimistic views of the future).

That suggests that moral and factual disagreements are comparably important. But if I had to decide, I’d guess that moral disagreements are the bigger factor, because there is perhaps more convergence (not necessarily a high degree in absolute terms) on empirical matters. In my experience, many who prioritise suffering reduction still agree to some extent with some arguments for optimism about the future (although not with extreme versions, like claiming that the ratio is “1000000 to 1”, or that the future will automatically be amazing if we avoid extinction). For instance, if you were to combine my factual beliefs with the values of, say, Will MacAskill, then I think the result would probably not consider s-risks a top priority (though still worthy of some concern).
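
One crude way to formalise this framing is a simple linear weighting of expected happiness and suffering; the toy sketch below does exactly that, and all numbers in it are illustrative assumptions rather than anyone's actual estimates:

```python
# Toy sketch of the N-ratio / E-ratio framing above, using a linear weighting.
# All numbers are illustrative assumptions only.

def future_value(happiness, suffering, n_ratio):
    """Expected value of the future when suffering is weighted n_ratio times
    as heavily as happiness."""
    return happiness - n_ratio * suffering

happiness, suffering = 100.0, 10.0   # assumed expected amounts, i.e. an E-ratio of 10
for n_ratio in (1, 10, 100):         # increasing moral weight on suffering
    print(n_ratio, future_value(happiness, suffering, n_ratio))

# Output: 90, 0, -900. Once the N-ratio exceeds the E-ratio, expected suffering
# dominates, which is one way s-risk reduction becomes the priority.
```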

In addition, I am increasingly thinking that “x-risk vs s-risk” is perhaps a false dichotomy, and thinking in those terms may not always be helpful (despite having written much on s-risks myself). There are far more ways to improve the long-term future than this framing suggests, and we should look for interventions that steer the future in robustly positive directions.

Comment by Tobias_Baumann on The case of the missing cause prioritisation research · 2020-08-19T09:07:41.897Z · EA · GW

Yeah, I would perhaps say that the community has historically been too narrowly focused on a small number of causes. But I think this has been improving for a while, and we're now close to the right balance. (There is also a risk of being too broad, by calling too many causes important and not prioritising enough.)

Comment by Tobias_Baumann on The case of the missing cause prioritisation research · 2020-08-16T16:42:33.401Z · EA · GW

Thanks for writing this up! I think you're raising many interesting points, especially about a greater focus on policy and going "beyond speculation".

However, I'm more optimistic than you are about the degree of work invested in cause prioritisation, and the ensuing progress we've seen over the last years. See this recent comment of mine - I'd be curious if you find those examples convincing.

Also, speaking as someone who is working on this myself, there is quite a bit of research on s-risks and cause prioritisation from a suffering-focused perspective, which is one form of "different views" - though perhaps this is not what you had in mind. (I think it might be good to clarify in more detail what sort of work you want to see, because the term "cause prioritisation research" may mean very different things to different people.)

Comment by Tobias_Baumann on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-13T11:48:39.909Z · EA · GW

I think there haven’t been any novel major insights since 2015, for your threshold of “novel” and “major”.

Notwithstanding that, I believe that we’ve made significant progress and that work on macrostrategy was and continues to be valuable. Most of that value is in many smaller insights, or in the refinement and diffusion of ideas that aren’t strictly speaking novel. For instance:

  • The recent work on patient longtermism seems highly relevant and plausibly meets the bar for being “major”. This isn’t novel - Robin Hanson wrote about it in 2011, and Benjamin Franklin arguably implemented the idea in 1790 - but I still think that it’s a significant contribution. (There is a big difference between an idea being mentioned somewhere, possibly in very “hidden” places, and that idea being sufficiently widespread in the community to have a real impact.)
  • Effective altruists are now considering a much wider variety of causes than in 2015 (see e.g. here). Perhaps none of those meet your bar for being “major”, but I think that the “discovery” (scare quotes because probably none of those is the first mention) of causes such as Reducing long-term risks from malevolent actors, invertebrate welfare, or space governance constitutes significant progress. S-risks have also gained more traction, although again the basic idea is from before 2015.
  • Views on the future of artificial intelligence have become much more nuanced and diverse, compared to the relatively narrow focus on the “Bostrom-Yudkowsky view” that was more prevalent in 2015. I think this does meet the bar for “major”, although it is arguably not a single insight: relevant factors include takeoff speeds, whether AI is best thought of as a unified agent, or the likelihood of successful alignment by default. (And many critiques of the Bostrom-Yudkowsky view were written pre-2015, so it also isn't really novel.)

Comment by Tobias_Baumann on Common ground for longtermists · 2020-07-30T08:47:20.813Z · EA · GW

Thanks for the comment! I fully agree with your points.

People with and without suffering-focused ethics will agree on what to do in the present even more than would be expected from the above point alone. In particular, this is because many actions aimed at changing the long-term future in ways primarily valued by one of those groups of people will also happen to (in expectation) change the long-term future in other ways, which the other group values.

That's a good point. A key question is how fine-grained our influence over the long-term future is - that is, to what extent are there actions that only benefit specific values? For instance, if we think that there will not be a lock-in or transformative technology soon, it might be that the best lever over the long-term future is to try and nudge society in broadly positive directions, because trying to affect the long-term future is simply too "chaotic" for more specific attempts. (However, overall I think it's unclear if / to what extent that is true.)

Comment by Tobias_Baumann on Common ground for longtermists · 2020-07-30T08:33:33.174Z · EA · GW

Yeah, I meant it to be inclusive of this "portfolio approach". I agree that specialisation and comparative advantages (and perhaps also sheer motivation) can justify focusing on things that are primarily good based on one (set of) moral perspectives.

Comment by Tobias_Baumann on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T09:58:55.005Z · EA · GW

That seems plausible and is also consistent with Amara's law (the idea that the impact of technology is often overestimated in the short run and underestimated in the long run).

I'm curious how likely you think it is that productivity growth will be significantly higher (i.e. levels at least comparable with electricity) for any reason, not just AI. I wouldn't give this much more than 50%, as there is also some evidence that stagnation is on the cards (see e.g. 1, 2). But that would mean that you're confident that the cause of higher productivity growth, assuming that this happens, would be AI? (Rather than, say, synthetic biotechnology, or genetic engineering, or some other technological advance, or some social change resulting in more optimisation for productivity.)

While AI is perhaps the most plausible single candidate, it's still quite unclear, so I'd maybe say it's 25-30% likely that AI in particular will cause significantly higher levels of productivity growth this century.

Comment by Tobias_Baumann on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T22:15:51.958Z · EA · GW

I agree that it's tricky, and am quite worried about how the framings we use may bias our views on the future of AI. I like the GDP/productivity growth perspective but feel free to answer the same questions for your preferred operationalisation.

Another possible framing: given a crystal ball showing the future, how likely is it that people would generally say that AI is the most important thing that happens this century?

As one operationalization, then, suppose we were to ask an economist in 2100: "Do you think that the counterfactual contribution of AI to American productivity growth between 2010 and 2100 was at least as large as the counterfactual contribution of electricity to American productivity growth between 1900 and 1940?" I think that the economist would probably agree -- let's say, 50% < p < 75% -- but I don't have a very principled reason for thinking this and might change my mind if I thought a bit more.

Interesting. So you generally expect (well, with 50-75% probability) AI to become a significantly bigger deal, in terms of productivity growth, than it is now? I have not looked into this in detail but my understanding is that the contribution of AI to productivity growth right now is very small (and less than electricity).

If yes, what do you think causes this acceleration? It could simply be that AI is early-stage right now, akin to electricity in 1900 or earlier, and the large productivity gains arise when key innovations diffuse through society on a large scale. (However, many forms of AI are already widespread.) Or it could be that progress in AI itself accelerates, or perhaps linear progress in something like "general intelligence" translates to super-linear impact on productivity.

Comment by Tobias_Baumann on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-16T22:58:49.528Z · EA · GW

What is your overall probability that we will, in this century, see progress in artificial intelligence that is at least as transformative as the industrial revolution?

What is your probability for the more modest claim that AI will be at least as transformative as, say, electricity or railroads?

Comment by Tobias_Baumann on Space governance is important, tractable and neglected · 2020-07-10T12:17:19.376Z · EA · GW

I also recently wrote up some thoughts on this question, though I didn't reach a clear conclusion either.

Comment by Tobias_Baumann on Max_Daniel's Shortform · 2020-06-30T16:31:20.129Z · EA · GW

This could be relevant. It's not about the exact same question (it looks at the distribution of future suffering, not of impact) but some parts might be transferable.

Comment by Tobias_Baumann on Problem areas beyond 80,000 Hours' current priorities · 2020-06-29T21:03:18.047Z · EA · GW

Great stuff, thanks!

Comment by Tobias_Baumann on Representing future generations in the political process · 2020-06-27T08:08:37.741Z · EA · GW

Hi Michael,

thanks for the comment!

Could you expand on what you mean by the first part of that sentence, and what makes you say that?

I just meant that proposals to represent future non-human animals will likely gain less traction than the idea of representing future humans. But I agree that it would be perfectly possible to do it (as you say). And of course I'd be strongly in favour of having a Parliamentary Committee for all Future Sentient Beings or something like that, but again, that's not politically feasible anytime soon. So we have to find a sweet spot where a proposal is both realistic and would be a significant improvement from our perspective.

It seems we could analogously subsidize liquid prediction markets for things like the results in 2045, conditional on passing X or Y policy, of whatever our best metrics are for the welfare or preference-satisfaction of animals, or of AIs whose experiences matter but who aren't moral agents. And then people could say things like "The market expects that [proxy] will indicate in that [group of moral patients] will be better off in 2045 if pass [policy X] than if we pass [policy Y]."
Of course, coming up with such metrics is hard, but that seems like a problem we'll want to fix anyway.

I agree, and I'd be really excited about such prediction markets! However, perhaps the case of nonhuman animals differs in that it is often quite clear what policies would be better for animals (e.g. better welfare standards), whether it's current or future animals, and the bottleneck is just the lack of political will to do it. (But it would be valuable to know more about which policies would be most important - e.g. perhaps such markets would say that funding cultivated meat research is 10x as important as other reforms.)

By contrast, it seems less clear what we could do now to benefit future moral agents (seeing as they'll be able to decide for themselves what to do), so perhaps there is more of a need for prediction markets.

Comment by Tobias_Baumann on Representing future generations in the political process · 2020-06-26T22:01:05.828Z · EA · GW

Hi Tyler,

thanks for the detailed and thoughtful comment!

I find much less compelling the idea that "if there is the political will to seriously consider future generations, it’s unnecessary to set up additional institutions to do so," and "if people do not care about the long-term future," they would not agree to such measures. The main reason I find this uncompelling is just that it overgenerates in very implausible ways. Why should women have the vote? Why should discrimination be illegal?

Yeah, I agree that there are plenty of reasons why institutional reform could be valuable. I didn't mean to endorse that objection (at least not in a strong form). I like your point about how longtermist institutions may shift norms and attitudes.

I don't know if you meant to narrow in on only those reforms I mention which attempt to create literal representation of future generations or if you meant to bring into focus all attempts to ameliorate political short-termism.

I mostly had the former in mind when writing the post, though other attempts to ameliorate short-termism are also plausibly very important.

I'm glad to see CLR take something of an interest in this topic

Might just be a typo but this post is by CRS (Center for Reducing Suffering), not CLR (Center on Long-Term Risk). (It's easy to mix up because CRS is new, CLR recently re-branded, and both focus on s-risks.)

As a classical utilitarian, I'm also not particularly bothered by the philosophical problems you set out above, but some of these problems are the subject of my dissertation and I hope that I have some solutions for you soon.

Looking forward to reading it!

Comment by Tobias_Baumann on Space governance is important, tractable and neglected · 2020-06-26T11:20:53.656Z · EA · GW

Hey Jamie, thanks for the pointer! I wasn't aware of this.

Another relevant critique of whether colonisation is a good idea is Daniel Deudney's new book Dark Skies.

I myself have also written up some more thoughts on space colonisation in the meantime and have become more sceptical about the possibility of large-scale space settlement happening anytime soon.

Comment by Tobias_Baumann on Wild animal suffering video course · 2020-06-24T16:14:51.762Z · EA · GW

Great work, thanks for sharing!

Comment by Tobias_Baumann on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T16:42:08.279Z · EA · GW

Great post - I think it's extremely important to explore many different problem areas!

Some further plausible (in my opinion) candidates are shaping genetic enhancement, reducing long-term risks from malevolent actors, invertebrate welfare and space governance.

Comment by Tobias_Baumann on EA considerations regarding increasing political polarization · 2020-06-20T11:38:32.070Z · EA · GW

Great work, thanks for writing this up! I agree that excessive polarisation is an important issue and warrants more EA attention. In particular, polarisation is an important risk factor for s-risks.

Political polarization, as measured by political scientists, has clearly gone up in the last 20 years.

It is worth noting that this is a US-centric perspective and the broader picture is more mixed, with polarisation increasing in some countries and decreasing in others.

If there’s more I’m missing, feel free to provide links in the comment section.

Olaf van der Veen has written a thesis on this, analysing four possible interventions to reduce polarisation: (1) switching from FPTP to proportional representation, (2) making voting compulsory, (3) increasing the presence of public service broadcasting, and (4) creating deliberative citizen's assemblies. Olaf's takeaway (as far as I understand it) is that those interventions seem compelling and fairly tractable but the evidence of possible impacts is often not very strong.

I myself have also written about electoral reform as a possible way to reduce polarisation, and malevolent individuals in power also seem closely related to increased polarisation.

Comment by Tobias_Baumann on Timeline of the wild-animal suffering movement · 2020-06-16T12:16:35.335Z · EA · GW

Amazing work, thanks for writing this up!

Comment by Tobias_Baumann on How Much Leverage Should Altruists Use? · 2020-05-23T20:42:34.972Z · EA · GW

The drawdowns of major ETFs on this (e.g. EMB / JNK) during the corona crash or 2008 were roughly 2/3 to 3/4 of how much stocks (the S&P 500) went down. So I agree the diversification benefit is limited. The question, bracketing the point about the extra cost of leverage, is whether the positive EV of emerging markets bonds / high yield bonds is more or less than 2/3 to 3/4 of the positive EV of stocks. That's pretty hard to say - there's a lot of uncertainty on both sides. But if that is the case and one can borrow at very good rates (e.g. through futures or box spread financing), then the best portfolio should be a levered up combination of bonds & stocks rather than just stocks.
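
To show the shape of this argument, here is a back-of-the-envelope sketch; the expected returns, drawdowns and borrowing rate are illustrative assumptions, not estimates from this thread:

```python
# Back-of-the-envelope sketch: lever an equal-weight stock/bond mix until its
# assumed drawdown matches a stock-only portfolio, then compare expected returns.
# All numbers are illustrative assumptions.

stock_ev, stock_dd = 0.05, 0.50   # assumed excess return and crash drawdown of stocks
bond_ev, bond_dd = 0.04, 0.35     # assumed bond EV above its ~0.7x drawdown share
borrow_rate = 0.005               # assumed cheap financing (futures / box spreads)

mix_ev = 0.5 * stock_ev + 0.5 * bond_ev
mix_dd = 0.5 * stock_dd + 0.5 * bond_dd
leverage = stock_dd / mix_dd      # scale the mix up to the same assumed drawdown

levered_ev = leverage * mix_ev - (leverage - 1) * borrow_rate

print(stock_ev)    # 0.050 -- stocks only
print(levered_ev)  # ~0.052 -- levered mix at the same assumed crash risk
```

Under these assumptions the levered mix edges out stocks alone; if the bonds' expected return falls below their share of the drawdown, or financing is expensive, the conclusion flips.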

FWIW, I'm in a similar position regarding my personal portfolio; I've so far not invested in these asset classes but am actively considering it.

Comment by Tobias_Baumann on How Much Leverage Should Altruists Use? · 2020-05-18T08:57:18.207Z · EA · GW

What are your thoughts on high-yield corporate bonds or emerging markets bonds? This kind of bond offers non-zero interest rates but of course also entails higher risk. Also, these markets aren't (to my knowledge) distorted by the Fed buying huge amounts of bonds.

Theoretically, there should be some diversification benefit from adding this kind of bond, though it's all positively correlated. But unfortunately, ETFs on these kinds of bonds have much higher fees.

Comment by Tobias_Baumann on How should longtermists think about eating meat? · 2020-05-17T10:29:58.725Z · EA · GW

Peter's point is that it makes a lot of sense to have certain norms about not causing serious direct harm, and that one should arguably follow such norms rather than relying on some complex longtermist cost-benefit analysis.

Put differently, I think it is very important, from a longtermist perspective, to advance the idea that animals matter and that we consequently should not harm them (particularly for reasons as frivolous as eating meat).

Comment by Tobias_Baumann on Helping wild animals through vaccination: could this happen for coronaviruses like SARS-CoV-2? · 2020-05-13T11:13:37.980Z · EA · GW

Great post, thanks for writing this up!

Comment by Tobias_Baumann on Reducing long-term risks from malevolent actors · 2020-05-07T07:49:40.329Z · EA · GW

Thanks for commenting!

I agree that early detection in children is an interesting idea. If certain childhood behaviours can be shown to reliably predict malevolence, then this could be part of a manipulation-proof test. However, as you say, there are many pitfalls to be avoided.

I am not well versed in the literature but my impression is that things like torturing animals, bullying, general violence, or callous-unemotional personality traits (as assessed by others) are somewhat predictive of malevolence. But the problem is that you'll probably also get many false positives from those indicators.

Regarding environmental or developmental interventions, we write this in Appendix B:

Malevolent personality traits are plausibly exacerbated by adverse (childhood) environments—e.g. ones rife with abuse, bullying, violence or poverty (cf. Walsh & Wu, 2008). Thus, research to identify interventions to improve such environmental factors could be valuable. (However, the relevant areas appear to be very crowded. Also, the shared environment appears to have a rather small effect on personality, including personality disorders (Knopik et al., 2018, ch. 16; Johnson et al., 2008; Plomin, 2019; Torgersen, 2009).)

Perhaps improving parenting standards and childhood environments could actually be a fairly promising EA cause. For instance, early advocacy against hitting children may have been a pretty effective lever to make society more civilised and less violent in general.

Comment by Tobias_Baumann on Reducing long-term risks from malevolent actors · 2020-05-02T16:14:08.885Z · EA · GW

Thanks for the comment!

I would guess that having better tests of malevolence, or even just a better understanding of it, may help with this problem. Perhaps a takeaway is that we should not just raise awareness (which can backfire via “witch hunts”), but instead try to improve our scientific understanding and communicate that to the public, which hopefully makes it harder to falsely accuse people.

In general, I don’t know what can be done about people using any means necessary to smear political opponents. It seems that the way to address this is to have good norms favoring “clean” political discourse, and good processes to find out whether allegations are true; but it’s not clear what can be done to establish such norms.

Comment by Tobias_Baumann on What is a good donor advised fund for small UK donors? · 2020-04-29T14:11:22.008Z · EA · GW

See here for a very similar question (and answers): https://forum.effectivealtruism.org/posts/ihDhDt375xHf9wBCo/uk-donor-advised-funds

Comment by Tobias_Baumann on Adapting the ITN framework for political interventions & analysis of political polarisation · 2020-04-28T10:41:39.387Z · EA · GW

Great work, thanks for sharing! It's great to see this getting more attention in EA.

Just for those deciding whether to read the full thesis: it analyses four possible interventions to reduce polarisation: (1) switching from FPTP to proportional representation, (2) making voting compulsory, (3) increasing the presence of public service broadcasting, and (4) creating deliberative citizen's assemblies. Olaf's takeaway (as far as I understand it) is that those interventions seem compelling and fairly tractable but the evidence of possible impacts is often not very strong.

Comment by Tobias_Baumann on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-15T21:33:33.600Z · EA · GW

Well, historically, there have been quite a few pandemics that killed more than 10% of people, e.g. the Black Death or Plague of Justinian. There's been no pandemic that killed everyone.

Is your point that it's different for anthropogenic risks? Then I guess we could look at wars for historic examples. Indeed, there have been wars that killed something on the order of 10% of people, at least in the warring nations, and IMO that is a good argument to take the risk of a major war quite seriously.

But there have been far more wars that killed fewer people, and none that caused extinction. The literature usually models the number of casualties as a Pareto distribution, which means that the probability density is monotonically decreasing in the number of deaths. (For a broader reference class of atrocities, genocides, civil wars etc., I think the picture is similar.)
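
A small sketch of what the Pareto model implies for the relative frequency of 0.1%-scale versus 10%-scale death tolls; the tail exponents used are illustrative assumptions, since empirical estimates for war deaths vary:

```python
# Sketch of the Pareto-tail point: how much rarer a catastrophe killing 10% of a
# population is than one killing 0.1%, for a few assumed tail exponents alpha.

def pareto_survival(x, x_min, alpha):
    """P(deaths >= x) under a Pareto distribution with minimum size x_min."""
    return (x_min / x) ** alpha if x >= x_min else 1.0

x_min = 0.0001  # assumed minimum event size: 0.01% of the population
for alpha in (0.5, 1.0, 1.5):
    small = pareto_survival(0.001, x_min, alpha)  # kills >= 0.1%
    large = pareto_survival(0.10, x_min, alpha)   # kills >= 10%
    print(alpha, large / small)

# Heavy tails (alpha = 0.5) make 10%-scale events ~10x rarer than 0.1%-scale
# ones; thinner tails (alpha = 1.5) make them ~1000x rarer.
```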

But we don't in fact see lots of unknown risks killing even 0.1% of the population.

Smoking, lack of exercise, and unhealthy diets each kill more than 0.1% of the population each year. Coronavirus may kill 0.1% in some countries. The advent of cars in the 20th century resulted in 60 million road deaths, which is maybe 0.5% of everyone alive over that time (I haven't checked this in detail). That can be seen as an unknown risk from the perspective of someone in 1900. Granted, some of those are more gradual than the sort of catastrophe people have in mind - but actually I'm not sure why that matters.

Looking at individual nations, I'm sure you can find many examples of civil wars, famines, etc. killing 0.1% of the population of a certain country, but far fewer examples killing 10% (though there are some). I'm not claiming the latter is 100x less likely but it is clearly much less likely.

You could have made the exact same argument in 1917, in 1944, etc. and you would have been wildly wrong.

I don't understand this. What do you think the exact same argument would have been, and why was that wildly wrong?