Posts

The Future of Humanity & The Methods of Ethics: A discussion of Bostrom, Sidgwick and Scheffler (Thursday 22 July, 6:30pm UK) 2021-07-18T18:58:42.912Z
What should an effective altruist be committed to? 2014-12-17T13:21:14.006Z

Comments

Comment by peterhartree (Peter_Hartree) on What would you do if you had half a million dollars? · 2021-07-19T11:40:00.113Z

A thought that motivates my other comments on this thread: reviewing my GWWC donations a while ago, I realised that if I suddenly had lots of money, one of the first questions I would ask myself is "what friends and acquaintances should I fund?". To an outsider this kind of thing can look like rather non-altruistic nepotism, but from the inside it seems like betting on the opportunities that you are unusually able to see. I think it actually is the latter, at least sometimes. My impression is that for-profit investors do a lot of "nepotistic investing", but I suspect that values like altruism, impartiality and transparency (as well as the constraints of charitable legal status) make EA funders reluctant to go hard on this method.

Comment by peterhartree (Peter_Hartree) on What would you do if you had half a million dollars? · 2021-07-19T11:32:35.355Z

I would consider starting some kind of "major achievement" prize scheme.

Roughly, the idea I have in mind is to give large no-strings-attached lump sums to individuals who have:

(a) done exceptionally valuable work at non-trivial personal cost (e.g. massive salary sacrifice)

(b) a high likelihood of continuing to do extremely valuable work.

The aims would be:

(i) to help such figures become personally "set for life" in the way that successful startup founders sometimes do.

(ii) to improve the personal incentive structure faced by people considering EA careers.

This idea is very half-baked. A couple of quick comments:

  1. On (i): I'm surprised how often I meet people doing very valuable work who seem to have significant personal finance issues that (a) distract them and (b) mean that they don't buy time aggressively enough. Perhaps more importantly, I suspect that (c) personal financial security enables people to take riskier bets on their inside views, in a way that is valuably generative and/or error-correcting; also that (d) people who are doing very valuable work often have lists of good ideas for turning $$$ into good outcomes, so giving these people greater financial security would be one merit-based means of increasing the number of EA-sympathetic angel investors.

  2. On (ii): I have no idea if this would actually work out well. In theory, it'd make the personal incentives look a bit more like they do in for-profit entrepreneurship, i.e. small chance of large financial upside if you do well. In practice I could imagine a well known prize scheme causing various sorts of trouble.

  3. E.g. I see major PR risks to this kind of thing ("effective altruists conclude that the most effective use of money is to make themselves rich") and internal risk of resentment or even corruption scandals. I've not looked into how science prizes fare on this kind of thing.

  4. On (i): one possible counter is that IIRC there's some evidence for a "personal wealth sweet spot" in entrepreneurship. I think the story is supposed to be that too little financial security means you can't afford the risks, but too much security (financial or status) makes you too complacent and lazy. My guess is that the complacency thing happens for many but not all people. Maybe one can filter for this.

Comment by peterhartree (Peter_Hartree) on What would you do if you had half a million dollars? · 2021-07-19T01:47:13.855Z

I would consider allocating at least $100K to trying my own version of something like Tyler Cowen's Emergent Ventures.

Comment by peterhartree (Peter_Hartree) on All Possible Views About Humanity's Future Are Wild · 2021-07-16T12:27:01.622Z

Thanks for the post.

You give a gloss definition of "wild":

we should be doing a double take at any view that we live in such a special time

Could you say a bit more on this? I can think of many different reasons one might do a double take—my impression is that you're thinking of just a few of them, but I'm not sure exactly which.

Comment by peterhartree (Peter_Hartree) on Podcast: Sharon Hewitt Rawlette on metaethics and utilitarianism · 2021-06-03T16:58:46.219Z

Thank you for this, Gus and Sharon.

This interview presented one of the most compelling cases for a hedonistic theory of value that I've heard, shifting my credence from “quite low” to “hmm, ok, maaaaybe”.

Some bits that stood out:

  1. Pluralistic conception of positive and negative experiences, i.e. experiences differ not only in intensity but also in character (so we can recognise fundamental differences between bodily pleasure, love, laughter, understanding, etc).

  2. Hedonism can solve the epistemic problem that haunts moral realism, by saying that we directly experience value and disvalue as a phenomenal quality.

  3. We attribute intrinsic value to non-experiential states of affairs because we recognise them as direct or indirect causes of experiential value. This is a cognitive shortcut, and it works pretty well.

  4. Experience of pleasure from e.g. torture is pro tanto good, but it is not all things considered good because of the instrumental effects (i.e. lots of disvalue).

  5. The best argument against hedonistic utilitarianism is that it is too abstract: it's not actually helpful for people to think in these terms. We need nearly-absolute respect for rights; projecting intrinsic value into the world works well for us.

  6. Strong realism vs anti-realism (as in: total mind-independence vs mind-dependence) matters: only the strong realist can deeply care about self-interested perspectival bias, e.g. can think of their deepest values as perhaps radically wrong, or can worry that an AGI with idealised human values might still be an existential catastrophe.

For some reason, it hadn't occurred to me that a hedonist could do (1). It might be that I think of hedonists as aiming for a very tidy theory, and adding pluralism back in messes that up a bit (e.g. comparability and aggregation remain hard).

Anyway... "pluralistic hedonism" seems quite promising to me!

For readers: her PhD was supervised by Thomas Nagel and she thanks Parfit for input. I'm looking forward to reading it: https://www.stafforini.com/docs/Hewitt - Normative qualia and a robust moral realism.pdf

Comment by peterhartree (Peter_Hartree) on Help me find the crux between EA/XR and Progress Studies · 2021-06-03T13:47:22.001Z

  1. How do you give advice?

PS (Tyler Cowen): I think about what I believe, then I think about what it's useful for people to hear, and then I say that.

EA: I think about what I believe, and then I say that. I generally trust people to respond appropriately to what I say.

Comment by peterhartree (Peter_Hartree) on Help me find the crux between EA/XR and Progress Studies · 2021-06-03T13:21:40.967Z

So here's a list of claims, each with cartoon responses representing my impression of typical EA and PS views (insert caveats here):

  1. Some important parts of "developed world" culture are too pessimistic. It would be very valuable to blast a message of definite optimism, viz. "The human condition can be radically improved! We have done it in the past, and we can do it again. Here are some ideas we should try..."

PS: Strongly agree. The cultural norms that support and enable progress are more fragile than you think.

EA: Agree. But, as an altruist, I tend to focus on preventing bad stuff rather than making good stuff happen (not sure why...).

  2. Broadly, "progress" comes about when we develop and use our capabilities to improve the human condition, and the condition of other moral patients (~sentient beings).

PS: Agree, this gloss seems basically fine for now.

EA: Agree, but we really need to improve on this gloss.

  3. Progress comes in different kinds: technological, scientific, ethical, global coordination. At different times in history, different kinds will be more valuable. Balancing these capabilities matters: during some periods, increasing capabilities in one area (or a subfield of one area) may be disvaluable (cf. the Vulnerable World Hypothesis).

EA & PS: Seems right. Maybe we disagree on where the current margins are?

  4. Let's try not to destroy ourselves! The future could be wonderful!

EA & PS: Yeah, duh. But also eek—we recognise the dangers ahead.

  5. Markets and governments are quite functional, so there's much more low-hanging fruit in pursuing the interests of those whom these systems aren't at all built to serve (e.g. future generations, animals).

PS: Hmm, take a closer look. There are a lot of trillion-dollar bills lying around, even in areas where an optimistic EMH would say that markets and governments ought to do well.

EA: So I used to be really into the EMH. These days, I'm not so sure...

  6. Broadly promoting industrial literacy is really important.

PS: Yes!

EA: I haven't thought about this much. Quick thought is that I'm happy to see some people working on this. I doubt it's the best option for many of the people we speak to, but it could be a good option for some.

  7. We can make useful predictions about the effects of new technologies.

PS (David Deutsch): I might grudgingly accept an extremely weak formulation of this claim. At least on Fridays. And only if you don't try to explicitly assign probabilities.

EA: Yes.

  8. You might be missing a crucial consideration!

PS: What's that? Oh, I see. Yeah. Well... I'm all for thinking hard about things, and acting on the assumption that I'm probably wrong about mostly everything. In the end, I guess I'm crossing my fingers, and hoping we can learn by trial and error, without getting ourselves killed. Is there another option?

EA: I know. This gives me nightmares.

On Max Daniel's thread, I left some general comments, a longer list of questions to which PS/EA might give different answers, and links to some of the discussions that shaped my perspective on this.

Comment by peterhartree (Peter_Hartree) on Progress studies vs. longtermist EA: some differences · 2021-06-03T07:32:18.297Z

@ADS: I enjoyed your discussion of (1), but I understood the conclusion to be :shrug:. Is that where you're at?

Generally, my impression is that differential technological development is an idea that seems right in theory, but the project of figuring out how to apply it in practice seems rather... nascent. For example:

(a) Our stories about which areas we should speed up and slow down are pretty speculative, and while I'm sure we can improve them, the prospects for making them very robust seem limited. DTD does not free us from the uncomfortable position of having to "take a punt" on some extremely high-stakes issues.

(b) I'm struggling to think of examples of public discussion of how "strong" a version of DTD we should aim for in practice (pointers, anyone?).

Comment by peterhartree (Peter_Hartree) on Progress studies vs. longtermist EA: some differences · 2021-06-03T07:26:41.313Z

To your Beckstead paraphrase, I'll add Tyler's recent exchange with Joseph Walker:

Cowen: Uncertainty should not paralyse you: try to do your best, pursue maximum expected value, just avoid the moral nervousness, be a little Straussian about it. Like, here's a rule; on average it's a good rule; we're all gonna follow it. Bravo, move on to the next thing. Be a builder.

Walker: So… Get on with it?

Cowen: Yes, ultimately the nervous Nellies, they're not philosophically sophisticated, they're over-indulging their own neuroticism, when you get right down to it. So it's not like there's some brute "let's be a builder" view and then there's some deeper wisdom that the real philosophers pursue. It's: you be a builder or a nervous Nelly, you take your pick. I say be a builder.

Comment by peterhartree (Peter_Hartree) on Progress studies vs. longtermist EA: some differences · 2021-06-03T07:22:48.753Z

I've gotten several responses on this, and find them all fairly limited. As far as I can tell, the Progress Studies community just is not reasoning very well about x-risk.

Have you pressed Tyler Cowen on this?

I'm fairly confident that he has heard ~all the arguments that the effective altruism community has heard, and that he has understood them deeply. So I default to thinking that there's an interesting disagreement here, rather than a boring "hasn't heard the arguments" or "is making a basic mistake" thing going on.

In a recent note, I sketched a couple of possibilities.

(1) Stagnation is riskier than growth

In Stubborn Attachments, Tyler puts less emphasis on sustainability than other long-term thinkers like Nick Bostrom, Derek Parfit, Richard Posner, Martin Rees and Toby Ord. On the 80,000 Hours podcast, Tyler explained that existential risk was much more prominent in early drafts of the book, but that he decided to de-emphasise it after Posner and others began writing on the topic. In any case, Tyler agrees that we should put more resources into reducing existential risk at current margins. However, it seems as though he, like Peter Thiel, sees the political risk of economic stagnation as a more immediate existential concern than these other thinkers do. Speaking at one of the first effective altruism conferences, Thiel said that if the rich world continues on a path of stagnation, it's a one-way path to apocalypse. If we start innovating again, we at least have a chance of getting through, despite the grave risk of finding a black ball.

(2) Tyler is being Straussian

Tyler may have a different view about which messages are helpful to blast into the public sphere. Perhaps this is partly due to a Deutsch / Thiel-style worry about the costs of cultural pessimism about technology. Martin Rees, who sits in the UK House of Lords, claims that democratic politicians are hard to influence unless you first create a popular concern. My guess is that Tyler thinks both that politicians aren't the centre of leverage on this issue, and that there are safer, more direct ways to influence them. In any case, it's clear Tyler thinks that most people should focus on maximising the growth rate, and only a minority should focus on sustainability issues, including existential safety. It is not inconsistent to think both that growth is too slow and that sustainability is underrated. Some listeners will hear the "sustainable" in "maximise the (sustainable) growth rate" and consider making that their focus. Most will not, and that's fine.

Many more people can participate in the project of "maximise the (sustainable) rate of economic growth" than "minimise existential risk".

(3) Something else?

I have a few other ideas, but I don't want to share the half-baked thoughts just yet.

One I'll gesture at: the phrase "cone of value", his catchphrase "all thinkers are regional thinkers", Bernard Williams, and anti-realism.

A couple of relevant quotes from Tyler's interview with Dwarkesh Patel:

[If you are a space optimist you may think that we can relax more about safety once we begin spreading to the stars.] You can get rid of that obsession with safety and replace it with an obsession with settling galaxies. But that also has a weirdness that I want to avoid, because that also means that something about the world we live in does not matter very much, you get trapped in this other kind of Pascal's wager, where it is just all about space and NASA and like fuck everyone else, right? And like if that is right it is right. But my intuition is that Pascal's Wager type arguments, they both don't apply and shouldn't apply here, that we need to use something that works for humans here on earth.

On the 800 years claim:

In the Stanford Talk, I estimated in semi-joking but also semi-serious fashion, that we had 700 or 800 years left in us.

Comment by peterhartree (Peter_Hartree) on Progress studies vs. longtermist EA: some differences · 2021-06-02T07:38:49.894Z

Some questions to which I suspect key figures in Effective Altruism and Progress Studies would give different answers:

a. How much of a problem is it to have a mainstream culture that is afraid of technology, or that underrates its promise?

b. How does the rate of economic growth in the West affect the probability of political catastrophe, e.g. WWIII?

c. How fragile are Enlightenment norms of open, truth-seeking debate? (E.g. Deutsch thinks something like the Enlightenment "tried to happen" several times, and that these norms may be more fragile than we think.)

d. To what extent is existential risk something that should be quietly managed by technocrats vs a popular issue that politicians should be talking about?

e. What is the relative priority of catastrophic vs existential risk reduction, and how much do these goals converge?

f. How tractable is reducing existential risk?

g. What is most needed: more innovation, or more theory/plans/coordination?

h. What do ideal and actual human rationality look like? E.g. Bayesian, ecological, individual, social.

i. How should we act when faced with small probabilities of extremely good or extremely bad outcomes?

j. How well can we predict the future? Is it reasonable to make probability estimates about technological innovation? (I can't quickly find the strongest "you can't put probabilities" argument, but here's Anders Sandberg sub-Youtubing Deutsch)

k. How much credence should we give to moral realism?

Comment by peterhartree (Peter_Hartree) on Progress studies vs. longtermist EA: some differences · 2021-06-02T07:22:32.345Z

Some notable discussions involving key figures:

Comment by peterhartree (Peter_Hartree) on Progress studies vs. longtermist EA: some differences · 2021-06-02T07:19:48.110Z

Bear in mind that I'm more familiar with the Effective Altruism community than I am with the Progress Studies community.

Some general impressions:

  1. Superficially, key figures in Progress Studies seem a bit less interested in moral philosophy than those in Effective Altruism. But Tyler Cowen is arguably as much a philosopher as he is an economist, and he co-authored Against the Discount Rate (1992) with Derek Parfit. Patrick Collison has read Reasons and Persons, The Precipice, and so on, and is a board member of The Long Now Foundation. Peter Thiel takes philosophy and the humanities very seriously (see here and here). And David Deutsch has written a philosophical book, drawing on Karl Popper.

  2. On average, key figures in EA are more likely to have a background in academic philosophy, while PS figures are more likely to have been involved in entrepreneurship or scientific research.

  3. There seem to be some differences in disposition / sensibility / normative views around questions of risk and value. E.g. I would guess that PS figures are more likely to have ridden a motorbike, and more likely to say things like "full steam ahead".

  4. To caricature: when faced with a high-stakes uncertainty, EA says "more research is needed", while PS says "quick, let's try something and see what happens". Alternatively: "more planning/co-ordination is needed" vs "more innovation is needed".

  5. PS figures seem to put less of a premium on co-ordination and consensus-building, and more of a premium on decentralisation and speed.

  6. PS figures seem (even) more troubled by the tendency of large institutions with poor feedback loops to drift towards dysfunction.

Comment by peterhartree (Peter_Hartree) on Some quick notes on "effective altruism" · 2021-03-26T13:17:57.214Z

Thanks for writing this, Jonas.

For what it's worth:

  1. I share the concerns you mentioned.
  2. I personally find the name "effective altruism" somewhat cringe and off-putting. I've become used to it over the years but I still hear it and feel embarrassed every now and then.
  3. I find the label "effective altruist" several notches worse: that elicits a slight cringe reaction most of the time I encounter it.
  4. The names "Global priorities" and "Progress studies" don't trigger a cringe reaction for me.
  5. I have a couple of EA-inclined acquaintances who have told me they were put off by the name "effective altruism".
  6. While I don't like the name, the thought that it might be driving large and net positive selection effects does not seem crazy to me.
  7. I would be glad if someone gave this topic further thought, plausibly to the extent of conducting surveys and speaking to relevant experts.

Comment by peterhartree (Peter_Hartree) on Supportive scepticism in practice · 2015-01-25T13:35:27.308Z

Jess & Michelle: thanks for this excellent post. Three remarks I'd like to add:

1. We all need support, but individuals vary considerably in the kind of support they need in order to flourish. A kind of support that works well for one person might feel patronising, frustrating or stifling to another, or cold, distant and uncaring to a third. To be effectively supportive, we must be sensitive to individual needs.

2. Being supportive is difficult, so individuals in the community should help others support them. If the support you're getting from the community is suboptimal, it's unlikely that other individuals are entirely to blame.

3. As a community, we should create an atmosphere where it's easy for people to ask for more or different kinds of support when they need to. Admitting vulnerability and requesting support is a sign of strength and maturity, not weakness, so we should praise, encourage and reward those who do this.

Comment by peterhartree (Peter_Hartree) on EAs on RSS and Reddit! · 2015-01-01T18:43:42.457Z

Nice work. We'll hopefully add this to the 80,000 Hours blog sidebar during Q1.

Comment by peterhartree (Peter_Hartree) on What should an effective altruist be committed to? · 2014-12-23T09:11:40.110Z

I think there are two questions here:

  1. How much of my time should I allocate to altruistic endeavour?
  2. How should I use the time I’ve allocated to altruistic endeavour?

Effective altruism clearly has a lot to say about (2). It could also say some things about (1), but I don’t think it is obliged to. These look like questions that can be addressed (fairly) independently of one another.

An aside: a weakness of the unqualified phrase “do the most good” is that it blurs these two questions. If you characterise the effective altruist as someone who wants to “do the most good”, it’s easy to give the impression that they are committed to maximising both the effectiveness of their altruistic endeavour and the amount of time they allocate to altruistic endeavour.

I’m quite keen on Rob’s proposed characterisation of an effective altruist, which remains fairly quiet on (1):

Someone who believes that to be a good altruist, you should use evidence and reason to do the most good with your altruistic actions, and puts at least some time or money behind the things they therefore believe will do the most good.

This strikes me as a substantive and inclusive idea. Complementary communities or sub-groups could form around the idea of giving 10%, giving 50%, etc., and effective altruists might be encouraged - but not obliged - to join them.

Much of the discussion in this thread has focussed on the question of which characterisation of effective altruism would have the greater impact potential in the long run. In particular, whether a more demanding characterisation, likely to limit appeal, might nonetheless have a greater overall impact. I don't have much to add to what's been said, except to flag that an inclusive characterisation is likely to bring more diversity to the community - a quality it's somewhat lacking at present.

Comment by peterhartree (Peter_Hartree) on Generic good advice: do intense exercise often · 2014-12-16T21:56:50.193Z

I strongly endorse what Rob said. Intense regular exercise is by far the best productivity and general well-being hack I've ever adopted. In my experience, once you get into it, it's the opposite of a chore.

Second-best hack (for focus): the Pomodoro Technique (use Tadam as your timer; Mac only).

Third-best hack (for reducing stress): regular mindfulness meditation (about 10 minutes per day; use Headspace to learn the basics).