Posts

Common ground for longtermists 2020-07-29T10:26:50.727Z · score: 50 (24 votes)
Representing future generations in the political process 2020-06-25T15:31:39.402Z · score: 33 (17 votes)
Reducing long-term risks from malevolent actors 2020-04-29T08:55:38.809Z · score: 230 (98 votes)
Thoughts on electoral reform 2020-02-18T16:23:27.829Z · score: 71 (36 votes)
Space governance is important, tractable and neglected 2020-01-07T11:24:38.136Z · score: 63 (31 votes)
How can we influence the long-term future? 2019-03-06T15:31:43.683Z · score: 9 (11 votes)
Risk factors for s-risks 2019-02-13T17:51:37.632Z · score: 31 (12 votes)
Why I expect successful (narrow) alignment 2018-12-29T15:46:04.947Z · score: 18 (17 votes)
A typology of s-risks 2018-12-21T18:23:05.249Z · score: 22 (13 votes)
Thoughts on short timelines 2018-10-23T15:59:41.415Z · score: 22 (24 votes)
S-risk FAQ 2017-09-18T08:05:39.850Z · score: 20 (17 votes)
Strategic implications of AI scenarios 2017-06-29T07:31:27.891Z · score: 6 (6 votes)

Comments

Comment by tobias_baumann on Common ground for longtermists · 2020-07-30T08:47:20.813Z · score: 3 (2 votes) · EA · GW

Thanks for the comment! I fully agree with your points.

People with and without suffering-focused ethics will agree on what to do in the present even more than would be expected from the above point alone. In particular, this is because many actions aimed at changing the long-term future in ways primarily valued by one of those groups of people will also happen to (in expectation) change the long-term future in other ways, which the other group values.

That's a good point. A key question is how fine-grained our influence over the long-term future is - that is, to what extent are there actions that only benefit specific values? For instance, if we think that there will not be a lock-in or transformative technology soon, it might be that the best lever over the long-term future is to nudge society in broadly positive directions, because more specific attempts to influence the long-term future are simply too "chaotic". (However, overall I think it's unclear if / to what extent that is true.)

Comment by tobias_baumann on Common ground for longtermists · 2020-07-30T08:33:33.174Z · score: 3 (2 votes) · EA · GW

Yeah, I meant it to be inclusive of this "portfolio approach". I agree that specialisation and comparative advantages (and perhaps also sheer motivation) can justify focusing on things that are primarily good based on one (set of) moral perspectives.

Comment by tobias_baumann on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T09:58:55.005Z · score: 1 (1 votes) · EA · GW

That seems plausible and is also consistent with Amara's law (the idea that the impact of technology is often overestimated in the short run and underestimated in the long run).

I'm curious how likely you think it is that productivity growth will be significantly higher (i.e. levels at least comparable with electricity) for any reason, not just AI. I wouldn't give this much more than 50%, as there is also some evidence that stagnation is on the cards (see e.g. 1, 2). But that would mean that you're confident that the cause of higher productivity growth, assuming that this happens, would be AI? (Rather than, say, synthetic biotechnology, or genetic engineering, or some other technological advance, or some social change resulting in more optimisation for productivity.)

While AI is perhaps the most plausible single candidate, it's still quite unclear, so I'd maybe say it's 25-30% likely that AI in particular will cause significantly higher levels of productivity growth this century.

Comment by tobias_baumann on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T22:15:51.958Z · score: 1 (1 votes) · EA · GW

I agree that it's tricky, and am quite worried about how the framings we use may bias our views on the future of AI. I like the GDP/productivity growth perspective but feel free to answer the same questions for your preferred operationalisation.

Another possible framing: given a crystal ball showing the future, how likely is it that people would generally say that AI is the most important thing that happens this century?

As one operationalization, then, suppose we were to ask an economist in 2100: "Do you think that the counterfactual contribution of AI to American productivity growth between 2010 and 2100 was at least as large as the counterfactual contribution of electricity to American productivity growth between 1900 and 1940?" I think that the economist would probably agree -- let's say, 50% < p < 75% -- but I don't have a very principled reason for thinking this and might change my mind if I thought a bit more.

Interesting. So you generally expect (well, with 50-75% probability) AI to become a significantly bigger deal, in terms of productivity growth, than it is now? I have not looked into this in detail, but my understanding is that the contribution of AI to productivity growth right now is very small (and less than that of electricity).

If yes, what do you think causes this acceleration? It could simply be that AI is early-stage right now, akin to electricity in 1900 or earlier, and the large productivity gains arise when key innovations diffuse through society on a large scale. (However, many forms of AI are already widespread.) Or it could be that progress in AI itself accelerates, or perhaps linear progress in something like "general intelligence" translates to super-linear impact on productivity.

Comment by tobias_baumann on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-16T22:58:49.528Z · score: 8 (5 votes) · EA · GW

What is your overall probability that we will, in this century, see progress in artificial intelligence that is at least as transformative as the industrial revolution?

What is your probability for the more modest claim that AI will be at least as transformative as, say, electricity or railroads?

Comment by tobias_baumann on Space governance is important, tractable and neglected · 2020-07-10T12:17:19.376Z · score: 8 (3 votes) · EA · GW

I also recently wrote up some thoughts on this question, though I didn't reach a clear conclusion either.

Comment by tobias_baumann on Max_Daniel's Shortform · 2020-06-30T16:31:20.129Z · score: 3 (2 votes) · EA · GW

This could be relevant. It's not about the exact same question (it looks at the distribution of future suffering, not of impact) but some parts might be transferable.

Comment by tobias_baumann on Problem areas beyond 80,000 Hours' current priorities · 2020-06-29T21:03:18.047Z · score: 1 (1 votes) · EA · GW

Great stuff, thanks!

Comment by tobias_baumann on Representing future generations in the political process · 2020-06-27T08:08:37.741Z · score: 3 (2 votes) · EA · GW

Hi Michael,

Thanks for the comment!

Could you expand on what you mean by the first part of that sentence, and what makes you say that?

I just meant that proposals to represent future non-human animals will likely gain less traction than the idea of representing future humans. But I agree that it would be perfectly possible to do it (as you say). And of course I'd be strongly in favour of having a Parliamentary Committee for all Future Sentient Beings or something like that, but again, that's not politically feasible anytime soon. So we have to find a sweet spot where a proposal is both realistic and would be a significant improvement from our perspective.

It seems we could analogously subsidize liquid prediction markets for things like the results in 2045, conditional on passing X or Y policy, of whatever our best metrics are for the welfare or preference-satisfaction of animals, or of AIs whose experiences matter but who aren't moral agents. And then people could say things like "The market expects that [proxy] will indicate that [group of moral patients] will be better off in 2045 if we pass [policy X] than if we pass [policy Y]."
Of course, coming up with such metrics is hard, but that seems like a problem we'll want to fix anyway.

I agree, and I'd be really excited about such prediction markets! However, perhaps the case of nonhuman animals differs in that it is often quite clear what policies would be better for animals (e.g. better welfare standards), whether it's current or future animals, and the bottleneck is just the lack of political will to do it. (But it would be valuable to know more about which policies would be most important - e.g. perhaps such markets would say that funding cultivated meat research is 10x as important as other reforms.)

By contrast, it seems less clear what we could do now to benefit future moral agents (seeing as they'll be able to decide for themselves what to do), so perhaps there is more of a need for prediction markets.

Comment by tobias_baumann on Representing future generations in the political process · 2020-06-26T22:01:05.828Z · score: 4 (3 votes) · EA · GW

Hi Tyler,

Thanks for the detailed and thoughtful comment!

I find much less compelling the idea that "if there is the political will to seriously consider future generations, it’s unnecessary to set up additional institutions to do so," and "if people do not care about the long-term future," they would not agree to such measures. The main reason I find this uncompelling is just that it overgenerates in very implausible ways. Why should women have the vote? Why should discrimination be illegal?

Yeah, I agree that there are plenty of reasons why institutional reform could be valuable. I didn't mean to endorse that objection (at least not in a strong form). I like your point about how longtermist institutions may shift norms and attitudes.

I don't know if you meant to narrow in on only those reforms I mention which attempt to create literal representation of future generations or if you meant to bring into focus all attempts to ameliorate political short-termism.

I mostly had the former in mind when writing the post, though other attempts to ameliorate short-termism are also plausibly very important.

I'm glad to see CLR take something of an interest in this topic

Might just be a typo, but this post is by CRS (Center for Reducing Suffering), not CLR (Center on Long-Term Risk). (It's easy to mix up because CRS is new, CLR recently re-branded, and both focus on s-risks.)

As a classical utilitarian, I'm also not particularly bothered by the philosophical problems you set out above, but some of these problems are the subject of my dissertation and I hope that I have some solutions for you soon.

Looking forward to reading it!

Comment by tobias_baumann on Space governance is important, tractable and neglected · 2020-06-26T11:20:53.656Z · score: 2 (2 votes) · EA · GW

Hey Jamie, thanks for the pointer! I wasn't aware of this.

Another relevant critique of whether colonisation is a good idea is Daniel Deudney's new book Dark Skies.

I myself have also written up some more thoughts on space colonisation in the meantime and have become more sceptical about the possibility of large-scale space settlement happening anytime soon.

Comment by tobias_baumann on Wild animal suffering video course · 2020-06-24T16:14:51.762Z · score: 3 (3 votes) · EA · GW

Great work, thanks for sharing!

Comment by tobias_baumann on Problem areas beyond 80,000 Hours' current priorities · 2020-06-22T16:42:08.279Z · score: 47 (22 votes) · EA · GW

Great post - I think it's extremely important to explore many different problem areas!

Some further plausible (in my opinion) candidates are shaping genetic enhancement, reducing long-term risks from malevolent actors, invertebrate welfare and space governance.

Comment by tobias_baumann on EA considerations regarding increasing political polarization · 2020-06-20T11:38:32.070Z · score: 44 (16 votes) · EA · GW

Great work, thanks for writing this up! I agree that excessive polarisation is an important issue and warrants more EA attention. In particular, polarisation is an important risk factor for s-risks.

Political polarization, as measured by political scientists, has clearly gone up in the last 20 years.

It is worth noting that this is a US-centric perspective and the broader picture is more mixed, with polarisation increasing in some countries and decreasing in others.

If there’s more I’m missing, feel free to provide links in the comment section.

Olaf van der Veen has written a thesis on this, analysing four possible interventions to reduce polarisation: (1) switching from FPTP to proportional representation, (2) making voting compulsory, (3) increasing the presence of public service broadcasting, and (4) creating deliberative citizens' assemblies. Olaf's takeaway (as far as I understand it) is that those interventions seem compelling and fairly tractable but the evidence of possible impacts is often not very strong.

I myself have also written about electoral reform as a possible way to reduce polarisation, and malevolent individuals in power also seem closely related to increased polarisation.

Comment by tobias_baumann on Timeline of the wild-animal suffering movement · 2020-06-16T12:16:35.335Z · score: 2 (2 votes) · EA · GW

Amazing work, thanks for writing this up!

Comment by tobias_baumann on How Much Leverage Should Altruists Use? · 2020-05-23T20:42:34.972Z · score: 1 (1 votes) · EA · GW

The drawdowns of major ETFs in this space (e.g. EMB / JNK) during the corona crash or 2008 were roughly 2/3 to 3/4 of the drawdown in stocks (the S&P 500). So I agree the diversification benefit is limited. The question, bracketing the point about the extra cost of leverage, is whether the positive EV of emerging-market bonds / high-yield bonds is more or less than 2/3 to 3/4 of the positive EV of stocks. That's pretty hard to say - there's a lot of uncertainty on both sides. But if that is the case and one can borrow at very good rates (e.g. through futures or box spread financing), then the best portfolio would be a levered-up combination of bonds and stocks rather than stocks alone.
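
A rough mean-variance sketch of that trade-off (all return, volatility and correlation numbers below are invented for illustration, and borrowing costs are ignored):

```python
import numpy as np

# Illustrative assumptions (not estimates): excess returns, volatilities, correlation.
mu = np.array([0.05, 0.035])      # stocks, high-yield / EM bonds (bond EV = 70% of stock EV)
vol = np.array([0.15, 0.105])     # bond drawdowns assumed ~70% of stock drawdowns
corr = 0.8
cov = np.array([[vol[0]**2, corr * vol[0] * vol[1]],
                [corr * vol[0] * vol[1], vol[1]**2]])

def levered_stats(weights, target_vol=0.15):
    """Scale a portfolio with (assumed free) leverage so it matches the volatility of 100% stocks."""
    w = np.asarray(weights)
    port_vol = np.sqrt(w @ cov @ w)
    lev = target_vol / port_vol
    return lev * (w @ mu), target_vol  # (expected excess return, volatility)

print(levered_stats([1.0, 0.0]))   # 100% stocks
print(levered_stats([0.5, 0.5]))   # levered 50/50 stocks/bonds at the same volatility
```

Under these invented numbers the levered 50/50 mix has a slightly higher expected excess return than 100% stocks at the same volatility; with a lower bond EV or realistic borrowing costs the comparison can flip, which is exactly the uncertainty described above.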

FWIW, I'm in a similar position regarding my personal portfolio; I've so far not invested in these asset classes but am actively considering it.

Comment by tobias_baumann on How Much Leverage Should Altruists Use? · 2020-05-18T08:57:18.207Z · score: 1 (1 votes) · EA · GW

What are your thoughts on high-yield corporate bonds or emerging markets bonds? This kind of bond offers non-zero interest rates but of course also entails higher risk. Also, these markets aren't (to my knowledge) distorted by the Fed buying huge amounts of bonds.

Theoretically, there should be some diversification benefit from adding this kind of bond, though it's all positively correlated. But unfortunately, ETFs on these kinds of bonds have much higher fees.

Comment by tobias_baumann on How should longtermists think about eating meat? · 2020-05-17T10:29:58.725Z · score: 33 (22 votes) · EA · GW

Peter's point is that it makes a lot of sense to have certain norms about not causing serious direct harm, and one should arguably follow such norms rather than relying on some complex longtermist cost-benefit analysis.

Put differently, I think it is very important, from a longtermist perspective, to advance the idea that animals matter and that we consequently should not harm them (particularly for reasons as frivolous as eating meat).

Comment by tobias_baumann on Helping wild animals through vaccination: could this happen for coronaviruses like SARS-CoV-2? · 2020-05-13T11:13:37.980Z · score: 4 (4 votes) · EA · GW

Great post, thanks for writing this up!

Comment by tobias_baumann on Reducing long-term risks from malevolent actors · 2020-05-07T07:49:40.329Z · score: 2 (2 votes) · EA · GW

Thanks for commenting!

I agree that early detection in children is an interesting idea. If certain childhood behaviours can be shown to reliably predict malevolence, then this could be part of a manipulation-proof test. However, as you say, there are many pitfalls to be avoided.

I am not well versed in the literature but my impression is that things like torturing animals, bullying, general violence, or callous-unemotional personality traits (as assessed by others) are somewhat predictive of malevolence. But the problem is that you'll probably also get many false positives from those indicators.

Regarding environmental or developmental interventions, we write this in Appendix B:

Malevolent personality traits are plausibly exacerbated by adverse (childhood) environments—e.g. ones rife with abuse, bullying, violence or poverty (cf. Walsh & Wu, 2008). Thus, research to identify interventions to improve such environmental factors could be valuable. (However, the relevant areas appear to be very crowded. Also, the shared environment appears to have a rather small effect on personality, including personality disorders (Knopik et al., 2018, ch. 16; Johnson et al., 2008; Plomin, 2019; Torgersen, 2009).)

Perhaps improving parenting standards and childhood environments could actually be a fairly promising EA cause. For instance, early advocacy against hitting children may have been a pretty effective lever to make society more civilised and less violent in general.

Comment by tobias_baumann on Reducing long-term risks from malevolent actors · 2020-05-02T16:14:08.885Z · score: 8 (5 votes) · EA · GW

Thanks for the comment!

I would guess that having better tests of malevolence, or even just a better understanding of it, may help with this problem. Perhaps a takeaway is that we should not just raise awareness (which can backfire via “witch hunts”), but instead try to improve our scientific understanding and communicate that to the public, which hopefully makes it harder to falsely accuse people.

In general, I don’t know what can be done about people using any means necessary to smear political opponents. It seems that the way to address this is to have good norms favoring “clean” political discourse, and good processes to find out whether allegations are true; but it’s not clear what can be done to establish such norms.

Comment by tobias_baumann on What is a good donor advised fund for small UK donors? · 2020-04-29T14:11:22.008Z · score: 11 (6 votes) · EA · GW

See here for a very similar question (and answers): https://forum.effectivealtruism.org/posts/ihDhDt375xHf9wBCo/uk-donor-advised-funds

Comment by tobias_baumann on Adapting the ITN framework for political interventions & analysis of political polarisation · 2020-04-28T10:41:39.387Z · score: 19 (10 votes) · EA · GW

Great work, thanks for sharing! It's great to see this getting more attention in EA.

Just for those deciding whether to read the full thesis: it analyses four possible interventions to reduce polarisation: (1) switching from FPTP to proportional representation, (2) making voting compulsory, (3) increasing the presence of public service broadcasting, and (4) creating deliberative citizens' assemblies. Olaf's takeaway (as far as I understand it) is that those interventions seem compelling and fairly tractable but the evidence of possible impacts is often not very strong.

Comment by tobias_baumann on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-15T21:33:33.600Z · score: 5 (4 votes) · EA · GW

Well, historically, there have been quite a few pandemics that killed more than 10% of people, e.g. the Black Death or Plague of Justinian. There's been no pandemic that killed everyone.

Is your point that it's different for anthropogenic risks? Then I guess we could look at wars for historic examples. Indeed, there have been wars that killed something on the order of 10% of people, at least in the warring nations, and IMO that is a good argument to take the risk of a major war quite seriously.

But there have been far more wars that killed fewer people, and none that caused extinction. The literature usually models the number of casualties as a Pareto distribution, which means that the probability density is monotonically decreasing in the number of deaths. (For a broader reference class of atrocities, genocides, civil wars etc., I think the picture is similar.)
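
To illustrate how heavy-tailed this is, a minimal sketch of how much rarer 10%-scale events are than 0.1%-scale events under a Pareto tail (the tail exponent is purely illustrative, not an empirical estimate):

```python
# For a Pareto distribution, P(X > x) = (x_min / x) ** alpha for x above x_min,
# so the ratio of exceedance probabilities for two thresholds is (x_large / x_small) ** alpha.
alpha = 1.5                       # illustrative tail exponent, not taken from the literature
x_small, x_large = 0.001, 0.10    # 0.1% vs 10% of a population killed

ratio = (x_large / x_small) ** alpha
print(f"Events killing at least 0.1% are ~{ratio:.0f}x more likely than events killing at least 10%")
# With alpha = 1.5 this gives a factor of ~1000: very large wars remain possible,
# but are much rarer than small ones, consistent with the decreasing density above.
```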

But we don't in fact see lots of unknown risks killing even 0.1% of the population.

Smoking, lack of exercise, and unhealthy diets each kill more than 0.1% of the population each year. Coronavirus may kill 0.1% in some countries. The advent of cars in the 20th century resulted in 60 million road deaths, which is maybe 0.5% of everyone alive over that time (I haven't checked this in detail). That can be seen as an unknown from the perspective of someone in 1900. Granted, some of those are more gradual than the sort of catastrophe people have in mind - but actually I'm not sure why that matters.

Looking at individual nations, I'm sure you can find many examples of civil wars, famines, etc. killing 0.1% of the population of a certain country, but far fewer examples killing 10% (though there are some). I'm not claiming the latter is 100x less likely but it is clearly much less likely.

You could have made the exact same argument in 1917, in 1944, etc. and you would have been wildly wrong.

I don't understand this. What do you think the exact same argument would have been, and why was that wildly wrong?

Comment by tobias_baumann on Coronavirus and non-humans: How is the pandemic affecting animals used for human consumption? · 2020-04-08T20:59:29.491Z · score: 6 (2 votes) · EA · GW

Interesting, thanks!

However, I disagree with the idea that coronavirus doesn't have anything to do with animal farming.

Yeah, I wrote this based on having read that the origins of coronavirus involved bats. After reading more, it seems not that simple because farmed animals may have enabled the virus to spread between species.

Comment by tobias_baumann on Some thoughts on Toby Ord’s existential risk estimates · 2020-04-07T13:26:28.610Z · score: 5 (9 votes) · EA · GW

I haven't looked at this in much detail, but Ord's estimates seem too high to me. It seems really hard for humanity to go extinct, considering that there are people in remote villages, people in submarines, people in mid-flight at the time a disaster strikes, and even people on the International Space Station. (And yes, there are women on the ISS, I looked that up.) I just don't see how e.g. a pandemic would plausibly kill all those people.

Also, if engineered pandemics, or "unforeseen" and "other" anthropogenic risks have a chance of 3% each of causing extinction, wouldn't you expect to see smaller versions of these risks (that kill, say, 10% of people, but don't result in extinction) much more frequently? But we don't observe that.

(I haven't read Ord's book so I don't know if he addresses these points.)

Comment by tobias_baumann on Coronavirus and non-humans: How is the pandemic affecting animals used for human consumption? · 2020-04-07T11:01:45.280Z · score: 6 (4 votes) · EA · GW

Great work, thanks for writing this up!

I'm wondering how this might affect the public debate on factory farming. Animal advocates sometimes argue that factory farms contribute to antibiotic resistance, and this point may carry much more force in the future. So perhaps one key conclusion is that advocates should emphasise this angle more in the future. (That said, AFAIK the coronavirus doesn't have anything to do with farmed animals, and my impression from a quick Google search is that the issue of antibiotic resistance is manageable with the right regulations.)

Comment by tobias_baumann on Effective Altruism and Free Riding · 2020-03-29T22:33:20.592Z · score: 20 (8 votes) · EA · GW

Interesting, thanks for writing this up!

In practice, and for the EA community in particular, I think there are some reasons why the collective action problem isn't quite as bad as it may seem. For instance, with diminishing marginal returns on causes, the most efficient allocation will be a portfolio of interventions with weights roughly proportional to how much people care on average. But something quite similar can also happen in the non-cooperative equilibrium if there is a sufficient diversity of actors who each support the cause they're most excited about. (Maybe this is similar to case D in your analysis.)
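
A minimal sketch of the first claim, with made-up caring weights and logarithmic returns standing in for "diminishing marginal returns" (it uses numpy and scipy):

```python
import numpy as np
from scipy.optimize import minimize

# Three causes; rows = actors, columns = how much each actor cares about each cause.
# These weights are purely illustrative.
care = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.3, 0.3, 0.4],
])
avg_care = care.mean(axis=0)

# Total welfare with diminishing returns: sum_i avg_care[i] * log(x[i]),
# maximised subject to the budget constraint sum(x) = 1.
def neg_welfare(x):
    return -np.sum(avg_care * np.log(x))

res = minimize(neg_welfare, x0=np.full(3, 1 / 3),
               constraints={"type": "eq", "fun": lambda x: x.sum() - 1},
               bounds=[(1e-6, 1)] * 3)

print(res.x)                       # optimal allocation across the three causes
print(avg_care / avg_care.sum())   # ~identical: weights proportional to average caring
```

With log returns the proportionality is exact; with other concave return functions it holds only approximately, which is the "roughly" in the claim above.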

Can you point to examples of concrete EA causes that you think get too much or too little resources due to these collective action problems?

Comment by tobias_baumann on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-18T09:46:21.759Z · score: 12 (5 votes) · EA · GW

How many resources do you think the EAA movement (and ACE in particular) should invest in animal causes that are less "mainstream", such as invertebrate welfare or wild animal suffering?

What would convince you that it should be more (or less) of a focus?

Comment by tobias_baumann on Harsanyi's simple “proof” of utilitarianism · 2020-02-21T13:14:41.810Z · score: 2 (2 votes) · EA · GW

You're right; I meant to refer to the violation of individual rationality. Thanks!

Comment by tobias_baumann on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T17:03:36.630Z · score: 11 (7 votes) · EA · GW

Thanks for writing this up! I agree that this result is interesting, but I find it unpersuasive as a normative argument. Why should morality be based on group decision-making principles? Why should I care about VNM rationality of the group?

Also, you suggest that this result lends support to common EA beliefs. I'm not so sure about that. First, it leads to preference utilitarianism, not hedonic utilitarianism. Second, EAs tend to value animals and future people, but they would arguably not count as part of the "group" in this framework(?). Third, I'm not sure what this tells you about the creation or non-creation of possible beings (cf. the asymmetry in population ethics).

Finally, it's worth pointing out that you could also start with different assumptions and get very different results. For instance, rather than demanding that the group is VNM rational, one could consider rational individuals in a group who bargain over what to do, and then look at bargaining solutions. And it turns out that the utilitarian approach of adding up utilities is *not* a bargaining solution, because it violates Pareto-optimality in some cases. Does that "disprove" total utilitarianism?

(Using e.g. the Nash bargaining solution with many participants probably leads to some form of prioritarianism or egalitarianism, because you'd have to ensure that everyone benefits.)
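
A toy illustration of how the utilitarian sum and the Nash bargaining solution can pick different outcomes (the disagreement point, the outcome labels and all payoff numbers are invented for this example):

```python
# Two agents, a disagreement point, and a few feasible outcomes (utility pairs).
disagreement = (3.0, 0.0)
outcomes = {
    "A": (4.0, 1.0),
    "B": (2.0, 5.0),   # highest total utility, but gives agent 1 less than her disagreement payoff
    "C": (3.6, 2.5),
}

# Utilitarian rule: maximise the sum of utilities.
utilitarian = max(outcomes, key=lambda k: sum(outcomes[k]))

# Nash bargaining: maximise the product of gains over the disagreement point,
# among outcomes where both agents get at least their disagreement payoff.
def nash_product(u):
    g1, g2 = u[0] - disagreement[0], u[1] - disagreement[1]
    return g1 * g2 if g1 >= 0 and g2 >= 0 else float("-inf")

nash = max(outcomes, key=lambda k: nash_product(outcomes[k]))

print(utilitarian)  # "B"
print(nash)         # "C"
```

In this toy case the utilitarian optimum B leaves agent 1 below her disagreement payoff, which is why it fails to qualify as a bargaining solution.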

Comment by tobias_baumann on Thoughts on electoral reform · 2020-02-19T10:28:30.224Z · score: 9 (6 votes) · EA · GW

I'm not entirely convinced that VSE is the right approach. It's theoretically appealing, but practical considerations, like perceptions of the voting process and public acceptance / "legitimacy" of the result, might be more important. Voters aren't utilitarian robots.

I was aware of the simulations you mentioned but I didn't check them in detail. I suspect that these results are very sensitive to model assumptions, such as tactical voting behaviour. But it would be interesting to see more work on VSE.

What EAs definitely shouldn't do, in my opinion, is to spend considerable resources discrediting those alternatives to one's own preferred system, as FairVote has repeatedly done with respect to approval voting. Much more is gained by displacing plurality than is lost by replacing it with a suboptimal alternative (for all reasonable alternatives to plurality).

Strongly agree with this!

Comment by tobias_baumann on Should Longtermists Mostly Think About Animals? · 2020-02-04T11:14:56.915Z · score: 11 (8 votes) · EA · GW

If you think animals on average have net-negative lives, the primary value in preventing x-risks might not be ensuring human existence for humans’ sake, but rather ensuring that humans exist into the long-term future to steward animal welfare, to reduce animal suffering, and to move all animals toward having net-positive lives.

This assumes that (future) humans will do more to help animals than to harm them. I think many would dispute that, considering how humans usually treat animals (in the past and now). It is surely possible that future humans would be much more compassionate and act to reduce animal suffering, but it's far from clear, and it's also quite possible that there will be something like factory farming on an even larger scale.

Comment by Tobias_Baumann on [deleted post] 2020-01-31T11:14:22.461Z

I don't think you've established that the 'technological transformation' is essential. If one believes that something like AI is unlikely in the foreseeable future, one can still try to shape the long-term future through other means, such as moral circle expansion, improving international cooperation, improving political processes (e.g. trying to empower future people, voting reform, reducing polarisation), and so on.

You may believe that shaping AI / the technological transformation would offer far more leverage than other interventions, but some will disagree with that, which is a strong reason to not include this in the definition.

Also, while many longtermist EAs believe that AI / a technological transformation is likely to happen this century, there are still some who don't. I for one am quite unsure about this.

Comment by tobias_baumann on UK donor-advised funds · 2020-01-22T15:12:34.138Z · score: 9 (3 votes) · EA · GW

I looked into this a while ago and ended up with a similar conclusion. The main options (to my knowledge) are NPT-UK, Prism the Gift Fund, and CAF's giving account.

Their fees all seemed too high for me to actually open a DAF (although the fees are sometimes not transparent and you're just supposed to get in touch). In particular, yearly fees eat up a significant fraction of the money if you leave it in for decades, so it seems unsuitable for such a plan. It's probably so expensive because relatively few people are interested in such accounts, and there is a lot of administrative work done by the fund (Gift Aid etc.).
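
To make "a significant fraction" concrete, a quick calculation with assumed fee levels (the 0.6% and 1% figures are hypothetical, not quotes from any of these providers):

```python
# Fraction of the initial donation consumed by an annual fee after n years,
# ignoring investment returns for simplicity.
for annual_fee in (0.006, 0.01):
    for years in (10, 30):
        lost = 1 - (1 - annual_fee) ** years
        print(f"fee {annual_fee:.1%} over {years} years -> ~{lost:.0%} of the money")
# e.g. a 1% yearly fee compounds to roughly 26% over 30 years.
```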

Comment by tobias_baumann on Improving Pest Management for Wild Insect Welfare · 2019-12-26T20:26:47.450Z · score: 5 (4 votes) · EA · GW

Great work, thanks for writing this up!

Comment by tobias_baumann on Next Steps in Invertebrate Welfare, Part 3: Understanding Attitudes and Possibilities · 2019-11-19T12:24:45.958Z · score: 2 (2 votes) · EA · GW

Thanks for writing this up!

In this regard, Michael Greger (of Nutrition Facts) argues forcefully that anti-honey advocacy hurts the vegan movement. Many people apparently have trouble ascribing morally valuable states to cows and pigs. The idea that bees might suffer (and that we should care about their suffering) strikes these people as crazy. If an average person thinks that a small part of vegan ‘ideology’ is crazy, motivated reasoning will easily allow this thought to infect their perception of the rest of the vegan worldview. Hence, the knowledge that vegans care about bees may lead many people to show less compassion toward cows and pigs than they otherwise would[5].

Is there evidence that this is a significant effect? There are many lines of motivated reasoning, and if you avoid this one, perhaps people will just find another. My impression is that people who reject an idea or ideology because of some association with something 'crazy' are actually often just opposed to the idea/ideology in general, and would still be opposed if the 'crazy' thing wasn't around.

Also, there is an effect in the opposite direction from moving the Overton window, or making others look more moderate. (Cf. https://en.wikipedia.org/wiki/Radical_flank_effect )

In sum, even if invertebrate welfare is a worthwhile cause, several factors may prevent us from considering this issue properly. Additionally, there is the worry that rushing into a direct advocacy campaign may create hard-to-reverse lock-in effects. If the initial message is suboptimal, these lock-in effects can impose substantial costs. Hence, directly advocating for invertebrate welfare at this time might be actively counterproductive, both to the invertebrate welfare cause area and effective altruism more generally[11].

While I agree that we should be very careful about publicity at this point, I feel like there might still be opportunities for thoughtful advocacy. It seems not implausible that we could find angles that are mainstream-compatible and begin to normalise concern for invertebrates - e.g. extending welfare laws to lobsters.

Comment by tobias_baumann on Next Steps in Invertebrate Welfare, Part 2: Possible Interventions · 2019-11-18T13:08:46.238Z · score: 8 (4 votes) · EA · GW

Great work - thanks for writing this up!

Comment by tobias_baumann on Institutions for Future Generations · 2019-11-12T16:07:06.017Z · score: 21 (18 votes) · EA · GW

Here's another proposal:

We give every contemporary citizen shares in a newly created security. This security settles in, say, 100 years (in 2119), and its settlement value will be based on the degree to which 2119 people approve of the actions of people in the 2019-2119 timespan, as determined by a standardised survey - say, on a scale from 0 to 10.

This gives contemporary people a direct financial incentive to do what future people would approve of, and uses market mechanisms to generate accurate judgments.

(One might think that this doesn't work because people will go "I'll be dead before this settles", but I don't think that's a real problem - Austria, for instance, has issued a bond that matures in 100 years, and that doesn't seem to be an issue.)
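
A minimal sketch of how the settlement might work; the face value, the linear payout rule and the discount rate are my own assumptions for illustration:

```python
# Hypothetical payout rule: the security settles in 2119 at a value proportional
# to the average approval score (0-10) from a standardised survey of 2119 people.
FACE_VALUE = 100.0  # assumed face value per share

def settlement_value(avg_approval_score: float) -> float:
    return FACE_VALUE * avg_approval_score / 10.0

# Today's market price is (roughly) the discounted expectation of that settlement,
# so anything that raises expected future approval raises the price holders see now.
def price_today(expected_score: float, annual_discount_rate: float, years: int = 100) -> float:
    return settlement_value(expected_score) / (1 + annual_discount_rate) ** years

print(price_today(expected_score=6.0, annual_discount_rate=0.02))
```

The incentive described above comes from the last step: actions that raise the expected 2119 approval score raise the price current holders can sell at today.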

Comment by tobias_baumann on Next Steps in Invertebrate Welfare, Part 1: Fundamental Research · 2019-11-12T15:39:49.582Z · score: 3 (3 votes) · EA · GW
A reason why it is not necessarily true that there is net suffering in nature is the hypothesis that small individuals–as invertebrates–may have less intense sentient experiences. In that scenario, small animals would experience relatively less suffering and more enjoyment than larger ones.

I don't understand how this follows. Wouldn't less intense experiences affect both suffering and pleasure equally?

Comment by tobias_baumann on Next Steps in Invertebrate Welfare, Part 1: Fundamental Research · 2019-11-12T15:37:59.700Z · score: 11 (5 votes) · EA · GW

Great work - thanks for writing this up!

The question of invertebrate sentience is surely important, but I'm not sure if further research on this is a top priority. Some relevant uncertainties:

  • Would further research significantly reduce uncertainty about invertebrate sentience? It seems that most people who thought about this have settled on something like "there is a significant chance that many invertebrate taxa are sentient, but we don't know for sure".
  • To what extent is society's lack of moral concern for invertebrates due to the belief that invertebrates are not sentient, rather than to other factors (e.g. a disgust reaction towards many invertebrates, or the difficulty of avoiding harm to insects in everyday life)?

Comment by tobias_baumann on EA Handbook 3.0: What content should I include? · 2019-10-01T09:02:26.169Z · score: 10 (11 votes) · EA · GW

I'd like to suggest including an article on reducing s-risks (e.g. https://foundational-research.org/reducing-risks-of-astronomical-suffering-a-neglected-priority/ or http://s-risks.org/intro/) as another possible perspective on longtermism, in addition to AI alignment and x-risk reduction.

Comment by tobias_baumann on Are we living at the most influential time in history? · 2019-09-10T10:16:08.654Z · score: 2 (2 votes) · EA · GW

I don't understand this. Your last comment suggests that there may be several key events (some of which may be in the past), but I read your top-level comment as assuming that there is only one, which precludes all future key events (i.e. something like lock-in or extinction). I would have interpreted your initial post as follows:

Suppose we observe 20 past centuries during which no key event happens. By Laplace's Law of Succession, we now think that the odds are 1/22 in each century. So you could say that the odds that a key event "would have occurred" over the course of 20 centuries is 1 - (1-1/22)^20 = 60.6%. However, we just said that we observed no key event, and that's what our "hazard rate" is based on, so it is moot to ask what could have been. The probability is 0.
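
A quick check of the arithmetic in that paraphrase (a minimal sketch; it only verifies the numbers, not the reasoning, which the next paragraph questions):

```python
# Laplace's rule of succession: after observing n trials with zero successes,
# the probability of a success on the next trial is 1 / (n + 2).
n_centuries = 20
p_per_century = 1 / (n_centuries + 2)  # = 1/22

# Probability that at least one key event "would have occurred" across 20 centuries,
# if each century independently had probability 1/22.
p_at_least_one = 1 - (1 - p_per_century) ** n_centuries
print(f"{p_per_century:.4f}")      # 0.0455
print(f"{p_at_least_one:.1%}")     # 60.6%
```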

This seems off, and I think the problem is equating "no key event" with "not hingy", which is too simple because one can potentially also influence key events in the distant future. (Or perhaps there aren't even any key events, or there are other ways to have a lasting impact.)

Comment by tobias_baumann on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-07T20:31:33.923Z · score: 7 (6 votes) · EA · GW

I don't understand why this question has been downvoted by some people? It is a perfectly reasonable and interesting question. (The same holds for comments by Simon Knutsson and Magnus Vinding, which to me seem informative and helpful but have been downvoted.)

Comment by tobias_baumann on Are we living at the most influential time in history? · 2019-09-07T11:02:00.992Z · score: 3 (5 votes) · EA · GW

The following is yet another perspective on which prior to use, which questions whether we should assume some kind of uniformity principle:

As has been discussed in other comments and the initial text, there are some reasons to expect later times to be hingier (e.g. better knowledge) and there are some reasons to expect earlier times to be hingier (e.g. because of smaller populations). It is plausible that these reasons skew one way or another, and this effect might outweigh other sources of variance in hinginess.

That means that the hingiest times are disproportionately likely to be either a) the earliest generation (e.g. humans in pre-historic population bottlenecks) or b) the last generation (i.e. the time just before some lock-in happens). On this view, our time is very unlikely to be the hingiest (unless you think that lock-in happens very soon). So this suggests a low prior for HoH; however, what matters is arguably comparing present hinginess to the future rather than to the past, and on this view it would not be very unlikely that our time is hingier than all future times.

In other words, rather than there being anything special about our time, it could just be the case that a) hinginess generally decreases over time and b) this effect is stronger than other sources of variance in hinginess. I'm fairly agnostic about both of these claims, and Will argued against a), but it's surely likelier than 1 in 100000 (in the absence of further evidence), and arguably likelier even than 5%. (This isn't exactly HoH because past times would be even hingier.)

Comment by tobias_baumann on Are we living at the most influential time in history? · 2019-09-05T11:04:43.795Z · score: 6 (5 votes) · EA · GW
inverse relationship between population size and hingeyness

Maybe it's a nitpick but I don't think this is always right. For instance, suppose that from now on, population size declines by 20% each century (indefinitely). I don't think that would mean that later generations are more hingy? Or, imagine a counterfactual where population levels are divided by 10 across all generations – that would mean that one controls a larger fraction of resources but can also affect fewer beings, which prima facie cancels out.
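
One toy way to formalise the "prima facie cancels out" intuition; the functional forms are assumptions for illustration, not an argument:

```python
# Suppose an individual's hingeyness is (share of resources they control) times
# (number of beings their actions affect). If those scale with population N as
# 1/N and N respectively, the product is independent of N.
def hingeyness_per_person(population: int, value_at_stake_per_capita: float = 1.0) -> float:
    share_of_resources = 1 / population
    beings_affected = population * value_at_stake_per_capita
    return share_of_resources * beings_affected

print(hingeyness_per_person(10**9), hingeyness_per_person(10**8))  # identical
```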

It seems to me that the relevant question is whether the present population size is small compared to the future, i.e. whether the present generation is a "population bottleneck". (Cf. Max Daniel's comment.) That's arguably true for our time (especially if space colonisation becomes feasible at some point) and also in the rebuilding scenario you mentioned.

Comment by tobias_baumann on Are we living at the most influential time in history? · 2019-09-04T14:50:07.082Z · score: 2 (2 votes) · EA · GW

Do you think that this effect only happens in very small populations settling new territory, or is it generally the case that a smaller population means more hinginess? If the latter, then that suggests that, all else equal, the present is hingier than the future (though the past is even hingier), if we assume that future populations are bigger (possibly by a large factor). While the current population is not small in absolute terms, it could plausibly be considered a population bottleneck relative to a future cosmic civilisation (if space colonisation becomes feasible).

Comment by tobias_baumann on Are we living at the most influential time in history? · 2019-09-03T12:02:36.463Z · score: 11 (11 votes) · EA · GW

Great post! It's great to see more thought going into these issues. Personally, I'm quite sceptical about claims that our time is especially influential, and I don't have a strong view on whether our time is more or less hingy than other times. Some additional thoughts:

I got the impression that you assume that some time (or times) are particularly hingy (and then go on to ask whether it's our time). But it is also perfectly possible that no time is hingy, so I feel that this assumption needs to be justified. Of course, there is some variation and therefore there is inevitably a most influential time, but the crux of the matter is whether there are differences by a large factor (not just 1.5x). And that is not obvious; for instance, if we look at how people in the past could have shaped 21st century societies, it is not clear to me whether any time was especially important.

I think a key question for longtermism is whether the evolution of values and power will eventually settle in some steady state (i.e. the end of history). It is plausible that hinginess increases as one gets closer to this point. (But it's not obvious, e.g. there could just be a slow convergence to a world government without any pivotal events.) By contrast, if values and influence drift indefinitely, as they did so far in human history, then I don't see strong reasons to expect certain times to be particularly hingy. So it is crucial to ask whether a (non-extinction) steady state will happen, and how far away we are from it. (See also this related post of mine.)

"I suggest that in the past, we have seen hinginess increase. I think that most longtermists I know would prefer that someone living in 1600 passed resources onto us, today, rather than attempting direct longtermist influence."

Does this take into account that there were fewer people around in 1600, and that many ways to have an influence were far less competitive? I feel that a person in 1600 could have had a significant impact, e.g. via advocacy for the "right" moral views (e.g. publishing good arguments for consequentialism, antispeciesism, etc.) or by pushing for general improvements like reducing violence and increasing cooperation. So I don't quite agree with your take on this, though I wouldn't claim the opposite either – it is not obvious to me whether hinginess increased or decreased. (By your inductive argument, that suggests that it's not clear whether the future will be more or less hingy than the present.)

"A related, but more general, argument, is that the most pivotal point in time is when we develop techniques for engineering the motivations and values of the subsequent generation (such as through AI, but also perhaps through other technology, such as genetic engineering or advanced brainwashing technology), and that we’re close to that point."

Similar to your recent point about how creating smarter-than-human intelligence has long been feasible, I'd guess that, given strong enough motivation, a lock-in would already be feasible via brainwashing, propaganda, and sufficiently ruthless oppression of opposition. (We've had these "technologies" for a long time.) The reason why this doesn't quite work in totalitarian states is that a) what you want to lock in is usually the power of an individual dictator or some group of humans, but there's no way to prevent death, and b) people are not fully aligned with the dictator even at the beginning, which limits what you can do (principal-agent problems etc.). The reason we don't do it in liberal democracies is that a) we strongly disapprove of the necessary methods, b) we value free speech and personal autonomy, and c) most people don't really mind moderate forms of value drift. So it's to a large extent a question of motivation and taboos, and it is quite possible that people will reject the use of future lock-in technologies for similar reasons.

Comment by tobias_baumann on Ask Me Anything! · 2019-08-30T10:11:10.228Z · score: 4 (4 votes) · EA · GW
There’s a lot of debate about the causes of the industrial revolution. Very few commentators point to some technological breakthrough as the cause, so it's striking that people are inclined to point to a technological breakthrough in AI as the cause of the next growth mode transition. Instead, leading theories point to some resource overhang (‘colonies and coal’), or some innovation or change in institutions (more liberal laws and norms in England, or higher wages incentivising automation) or in culture. So perhaps there’s some novel governance system that could drive a higher growth mode, and that'll be the decisive thing.

Strongly agree. I think it's helpful to think about it in terms of the degree to which social and economic structures optimise for growth and innovation. Our modern systems (capitalism, liberal democracy) do reward innovation - and maybe that's what caused the growth mode change - but we're far away from strongly optimising for it. We care about lots of other things, and whenever there are constraints, we don't sacrifice everything on the altar of productivity / growth / innovation. And, while you can make money by innovating, the incentive is more about innovations that are marketable in the near term, rather than maximising long-term technological progress. (Compare e.g. an app that lets you book taxis in a more convenient way vs. foundational neuroscience research.)

So, a growth mode change could be triggered by any social change (culture, governance, or something else) resulting in significantly stronger optimisation pressures for long-term innovation.

That said, I don't really see concrete ways in which this could happen and current trends do not seem to point in this direction. (I'm also not saying this would necessarily be a good thing.)

Comment by tobias_baumann on Ask Me Anything! · 2019-08-22T09:16:31.366Z · score: 20 (15 votes) · EA · GW

I disagree with your implicit claim that Will's views (which I mostly agree with) constitute an extreme degree of confidence. I think it's a mistake to approach these questions with a 50-50 prior. Instead, we should consider the base rate for "events that are at least as transformative as the industrial revolution".

That base rate seems pretty low. And that's not actually what we're talking about - we're talking about AGI, a specific future technology. In the absence of further evidence, a prior of <10% on "AGI takeoff this century" seems not unreasonable to me. (You could, of course, believe that there is concrete evidence on AGI to justify different credences.)

On a different note, I sometimes find the terminology of "no x-risk", "going well" etc. unhelpful. It seems more useful to me to talk about concrete outcomes and separate this from normative judgments. For instance, I believe that extinction through AI misalignment is very unlikely. However, I'm quite uncertain about whether people in 2019, if you handed them a crystal ball that shows what will happen (regarding AI), would generally think that things are "going well", e.g. because people might disapprove of value drift or influence drift. (The future will plausibly be quite alien to us in many ways.) And finally, in terms of my personal values, the top priority is to avoid risks of astronomical suffering (s-risks), which is another matter altogether. But I wouldn't equate this with things "going well", as that's a normative judgment and I think EA should be as inclusive as possible towards different moral perspectives.