Posts

A discussion of Holden Karnofsky's "Most Important Century" series (Thursday 21 October, 19:00 UK) 2021-10-16T20:50:26.359Z
[Link post] Sam Scheffler: Conservatism, Temporal Bias, and Future Generations 2021-09-19T08:44:50.972Z
Nick Bostrom: An Introduction [early draft] 2021-07-31T17:04:20.991Z
The Future of Humanity & The Methods of Ethics: A discussion of Bostrom, Sidgwick and Scheffler (Thursday 22 July, 6:30pm UK) 2021-07-18T18:58:42.912Z
What should an effective altruist be committed to? 2014-12-17T13:21:14.006Z

Comments

Comment by peterhartree (Peter_Hartree) on What Are Your Software Needs? · 2021-11-21T14:28:42.088Z · EA · GW

Personally, I'm looking for someone to help me build a simple plugin for the Obsidian note-taking app.

The plugin should generate a list of links to notes that match criteria I specify.

Spec here. If you'd enjoy getting paid to make this for me, please send me a DM.
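
For a rough flavour (the linked spec is authoritative; this is just a sketch, with the command name and matching criterion invented for illustration), a minimal version built on the standard Obsidian plugin API might look like:

```typescript
import { Notice, Plugin, TFile } from "obsidian";

// Sketch only: builds a note listing links to every note that matches a
// criterion. The criterion here (a #project tag) is a stand-in for
// whatever rules the spec actually asks for.
export default class MatchingNotesPlugin extends Plugin {
  async onload() {
    this.addCommand({
      id: "generate-matching-note-list",
      name: "Generate list of matching notes",
      callback: () => this.generateList(),
    });
  }

  async generateList() {
    const matches: TFile[] = this.app.vault.getMarkdownFiles().filter((file) => {
      const cache = this.app.metadataCache.getFileCache(file);
      return cache?.tags?.some((t) => t.tag === "#project") ?? false;
    });

    const body = matches.map((f) => `- [[${f.basename}]]`).join("\n");
    // vault.create throws if the target note already exists; a real plugin
    // would handle that case (overwrite, append, or prompt).
    await this.app.vault.create("Matching notes.md", body);
    new Notice(`Linked ${matches.length} matching notes.`);
  }
}
```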

Comment by peterhartree (Peter_Hartree) on Why I am probably not a longtermist · 2021-10-01T21:44:22.236Z · EA · GW

My recent post on Scheffler discusses some of these themes:

https://forum.effectivealtruism.org/posts/NnYrFzXiTWerhTTkK/link-post-sam-scheffler-conservatism-temporal-bias-and

Comment by peterhartree (Peter_Hartree) on Utilitarianism Symbol Design Competition · 2021-08-07T06:09:00.223Z · EA · GW

Interesting, thanks Aaron. This result seems roughly in line with the fraction of EAG attendees who wear EA t-shirts.

Comment by peterhartree (Peter_Hartree) on Utilitarianism Symbol Design Competition · 2021-08-05T05:38:35.109Z · EA · GW

For what it's worth, this thread reminded me of Joshua Greene arguing that the brand of "utilitarianism" is so bad as to be a lost cause.

Greene suggests "deep pragmatism" for the rebrand.

Comment by peterhartree (Peter_Hartree) on Utilitarianism Symbol Design Competition · 2021-08-05T05:37:01.274Z · EA · GW

I didn't downvote. For what it's worth, the main negative reaction I had was:

  1. The use of the EA lightbulb as an example of a great symbol. Personally, I've always found it kind of amateurish and cringe. I think mainly because it combines two very tired clichés (a lightbulb to represent "ideas" and a heart to represent "altruism"? Really?!).

I suppose I could also complain that:

  1. The claim that "symbolism is important" is not substantiated. Generically it seems true, but it isn't obvious that utilitarianism, the philosophical idea, needs a good/better symbol and/or a flag.

  2. Granting that symbolism is important, running a prize competition on the EA Forum is probably not the best way to get a brilliant symbol. My main concern is that the format disproportionately encourages submissions from amateurs. In logo design, professional designers often encounter clients who believe that a great logo can be whipped up by more or less anyone in a couple of hours on a Sunday afternoon. But no—world-class logos usually take weeks or months of work, drawing on years of specialist training. If I had just $1K to spend, I might look for a talented young designer from a low-ish-wage EU country (e.g. Portugal), and ask them to spend a couple of days on it.

Comment by peterhartree (Peter_Hartree) on The Future of Humanity & The Methods of Ethics: A discussion of Bostrom, Sidgwick and Scheffler (Thursday 22 July, 6:30pm UK) · 2021-08-05T01:17:08.595Z · EA · GW

The salon recording is now available here: https://www.youtube.com/watch?v=E-uSDlbSXjw

A written summary is below:

We began by considering utilitarianism—particularly Sidgwick's "pleasure as desirable consciousness" hedonism—as a starting point for thinking about what matters. The value and failure modes of attempts at legibility and abstraction were discussed, as were different ideas about what makes a "meaningful" life. While accepting that utilitarian principles have, historically, supported important reforms (such as the decriminalisation of homosexuality), attendees voiced concern about what may be missing from a hedonistic theory of value. There was broad agreement that we'll face major moral and meta-ethical uncertainty for the foreseeable future, and that we need to find ways to act despite that. One participant described giving Prozac to their cat, despite their misgivings about hedonism.

Discussion then turned to Nozick's experience machine, and the idea that it reveals more about our attachment to the status quo than our commitment to "base reality". We discussed how, during a process of gradual change, each step, viewed from the previous step, may seem comprehensible and tractable to moral evaluation. Yet if we try to look directly from the present to the thousandth step down the line, we end up in trouble—facing visions of an alien future that leave us cold. Parents can just about understand their children, but grandparents often struggle to understand their great-grandchildren.

In the last hour, we focussed on the question: how to proceed? There was general agreement that we should try our best to keep options open for future generations, which, as a first cut, suggests an interest in reducing catastrophic and existential risks. Some attendees proposed relating to our best theories of value (including hedonism) as tentative yardsticks, and there was general enthusiasm for focussing on directional improvements on the margin, rather than a highly specified long-term vision. Several attendees expressed interest in the Effective Altruism and Progress Studies communities, and we discussed some challenges of building effective communities when good feedback loops are hard to construct. The forecasting community—including Metaculus, the Good Judgment Project, and Danny Hernandez's work on calibration training—was briefly mentioned. So too was the difficulty of achieving rational social responses to risk—the debacle of COVID-19 suggesting that we have roughly two modes: ignore or obsess.

In closing, we reflected on potential harms associated with exposure to big picture perspectives in general, and utilitarian ideas in particular. Several attendees described acquaintances who have developed deep anxiety over things they cannot control, and who are making big life decisions—such as deciding not to have children—for questionable, anxiety-driven reasons. It was suggested that some contemporary neuroses may be a sign of impartial perspectives taking undue prominence in our culture. If people think that agent-neutral reasons are the only reasons they can justifiably care about, they're going to have a hard time living their lives.

This brought us back to Sidgwick’s "profound problem". If we can believe something is valuable, yet not actually value it, where does this leave us? Perhaps Agnes Callard can help us: for her, aspiration is about the rational, purposive process of learning to value something you don’t already value. Perhaps we should think of "learning to aspire" as a central challenge for the present, and the future.

Comment by peterhartree (Peter_Hartree) on Betting on the best case: higher end warming is underrepresented in research · 2021-08-02T15:26:19.627Z · EA · GW

Somewhat related: Robert S. Pindyck on The Use and Misuse of Models for Climate Policy.

In short, his take (a) seems consistent with the claim that research and policy attention is being misallocated and (b) suggests a mechanism that might partly explain the misallocation.

Abstract (my emphasis):

In recent articles I have argued that integrated assessment models (IAMs) have flaws that make them close to useless as tools for policy analysis. IAM-based analyses of climate policy create a perception of knowledge and precision that is illusory and can fool policymakers into thinking that the forecasts the models generate have some kind of scientific legitimacy. However, some economists and climate scientists have claimed that we need to use some kind of model for policy analysis and that IAMs can be structured and used in ways that correct for their shortcomings. For example, it has been argued that although we know very little about key relationships in the model, we can get around this problem by attaching probability distributions to various parameters and then simulating the model using Monte Carlo methods. I argue that this would buy us nothing and that a simpler and more transparent approach to the design of climate change policy is preferable. I briefly outline what such an approach would look like.
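
To make the approach he's criticising concrete: the idea is to treat uncertain model inputs as random draws from assumed distributions, run the model many times, and report the spread of outcomes. A toy sketch (every number, distribution and functional form below is invented purely for illustration):

```typescript
// Toy Monte Carlo of the kind Pindyck critiques: attach assumed probability
// distributions to uncertain parameters, simulate repeatedly, and report
// the distribution of outcomes.

function normalSample(mean: number, sd: number): number {
  // Box-Muller transform
  const u1 = 1 - Math.random(); // avoid log(0)
  const u2 = Math.random();
  return mean + sd * Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

// A guessed damage function: fraction of output lost at a given warming level.
function damageFraction(warmingC: number, exponent: number): number {
  return Math.min(1, 0.01 * Math.pow(Math.max(0, warmingC), exponent));
}

const N = 100_000;
const losses: number[] = [];
for (let i = 0; i < N; i++) {
  const warming = normalSample(3.0, 1.0); // assumed distribution, in °C
  const curvature = normalSample(2.0, 0.5); // assumed damage curvature
  losses.push(damageFraction(warming, curvature));
}

losses.sort((a, b) => a - b);
console.log("median loss:", losses[Math.floor(N * 0.5)]);
console.log("95th percentile loss:", losses[Math.floor(N * 0.95)]);
// Pindyck's point: these output statistics inherit whatever arbitrariness
// went into the assumed input distributions.
```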

A few highlights:

I believe that we need to be much more honest and up-front about the inherent limitations of IAMs. I doubt that the developers of IAMs have any intention of using them in a misleading way. Nevertheless, overselling their validity and claiming that IAMs can be used to evaluate policies and determine the SCC can end up misleading researchers, policymakers, and the public, even if it is unintentional. If economics is indeed a science, scientific honesty is paramount.

...

Yes, the calculations I have just described constitute a “model,” but it is a model that is exceedingly simple and straightforward and involves no pretense that we know the damage function, the feedback parameters that affect climate sensitivity, or other details of the climate–economy system. And yes, some experts might base their opinions on one or more IAMs, on a more limited climate science model, or simply on their research experience and/or general knowledge of climate change and its impact.

...

Some might argue that the approach I have outlined here is insufficiently precise. But I believe that we have no choice. Building and using elaborate models might allow us to think that we are approaching the climate policy problem more scientifically, but in the end, like the Wizard of Oz, we would only be drawing a curtain around our lack of knowledge.

...

I have argued that the best we can do at this point is to come up with plausible answers to these questions, most likely by relying at least in part on numbers supplied by climate scientists and environmental economists, that is, utilize expert opinion. This kind of analysis would be simple, transparent, and easy to understand. It might not inspire the kind of awe and sense of scientific legitimacy conveyed by a large-scale IAM, but that is exactly the point.

Comment by peterhartree (Peter_Hartree) on What would you do if you had half a million dollars? · 2021-07-28T06:22:35.224Z · EA · GW

A post on this topic, discussing the Thiel Fellowship, Entrepreneur First, and other attempts: https://www.strangeloopcanon.com/p/on-medici-and-thiel

Comment by peterhartree (Peter_Hartree) on What would you do if you had half a million dollars? · 2021-07-28T06:20:49.910Z · EA · GW

  1. In some cases yes, but only when they were working on specific projects that I expected to be legible and palatable to EA funders. Are there places I should be sending people who I think are very promising, to be considered for very-low-strings personal development / freedom-to-explore type funding?

Comment by peterhartree (Peter_Hartree) on What would you do if you had half a million dollars? · 2021-07-19T11:40:00.113Z · EA · GW

A thought that motivates my other comments on this thread: reviewing my GWWC donations a while ago, I realised that if I suddenly had lots of money, one of the first questions I would ask myself is "what friends and acquaintances should I fund?". To an outsider this kind of thing can look like rather non-altruistic nepotism, but from the inside it seems like betting on the opportunities that you are unusually able to see. I think it actually is the latter, at least sometimes. My impression is that for-profit investors do a lot of "nepotistic investing", but I suspect that values like altruism and impartiality and transparency (as well as constraints of charitable legal status) make EA funders reluctant to go hard on this method.

Comment by peterhartree (Peter_Hartree) on What would you do if you had half a million dollars? · 2021-07-19T11:32:35.355Z · EA · GW

I would consider starting some kind of "major achievement" prize scheme.

Roughly, the idea I have in mind is to give large no-strings-attached lump sums to individuals who have:

(a) done exceptionally valuable work at non-trivial personal cost (e.g. massive salary sacrifice)

(b) a high likelihood of continuing to do extremely valuable work.

The aims would be:

(i) to help such figures become personally "set for life" in the way that successful startup founders sometimes are.

(ii) to improve the personal incentive structure faced by people considering EA careers.

This idea is very half-baked. A few quick comments:

  1. On (i): I'm surprised how often I meet people doing very valuable work who seem to have significant personal finance issues that (a) distract them and (b) mean that they don't buy time aggressively enough. Perhaps more importantly, I suspect that (c) personal financial security enables people to take riskier bets on their inside views, in a way that is valuably generative and/or error-correcting; also that (d) people who are doing very valuable work often have lists of good ideas for turning $$$ into good outcomes, so giving these people greater financial security would be one merit-based means of increasing the number of EA-sympathetic angel investors.

  2. On (ii): I have no idea if this would actually work out well. In theory, it'd make the personal incentives look a bit more like they do in for-profit entrepreneurship, i.e. small chance of large financial upside if you do well. In practice I could imagine a well known prize scheme causing various sorts of trouble.

  3. E.g. I see major PR risks to this kind of thing ("effective altruists conclude that the most effective use of money is to make themselves rich") and internal risks of resentment or even corruption scandals. I've not looked into how science prizes fare on these fronts.

  4. On (i): one possible counter is that IIRC there's some evidence for a "personal wealth sweet spot" in entrepreneurship. I think the story is supposed to be that too little financial security means you can't afford the risks, but too much security (both financial and status) makes you too complacent and lazy. My guess is that the complacency thing happens for many but not all people. Maybe one can filter for this.

Comment by peterhartree (Peter_Hartree) on What would you do if you had half a million dollars? · 2021-07-19T01:47:13.855Z · EA · GW

I would consider allocating at least $100K to trying my own version of something like Tyler Cowen's Emergent Ventures.

Comment by peterhartree (Peter_Hartree) on All Possible Views About Humanity's Future Are Wild · 2021-07-16T12:27:01.622Z · EA · GW

Thanks for the post.

You give a gloss definition of "wild":

we should be doing a double take at any view that we live in such a special time

Could you say a bit more on this? I can think of many different reasons one might do a double take—my impression is that you're thinking of just a few of them, but I'm not sure exactly which.

Comment by peterhartree (Peter_Hartree) on Podcast: Sharon Hewitt Rawlette on metaethics and utilitarianism · 2021-06-03T16:58:46.219Z · EA · GW

Thank you for this, Gus and Sharon.

This interview presented one of the most compelling cases for a hedonistic theory of value that I've heard, shifting my credence from “quite low” to “hmm, ok, maaaaybe”.

Some bits that stood out:

  1. Pluralistic conception of positive and negative experiences, i.e. experiences differ in intensity but also in character (so we can recognise fundamental differences between bodily pleasure, love, laughter, understanding, etc).

  2. Hedonism can solve the epistemic problem that haunts moral realism, by saying that we directly experience value and disvalue as a phenomenal quality.

  3. We attribute intrinsic value to non-experiential states of affairs because we recognise them as direct or indirect causes of experiential value. This is a cognitive shortcut, and it works pretty well.

  4. Experience of pleasure from e.g. torture is pro tanto good, but it is not all things considered good because of the instrumental effects (i.e. lots of disvalue).

  5. The best argument against hedonistic utilitarianism is that it is too abstract: it's not actually helpful for people to think in these terms. We need nearly-absolute respect for rights; projecting intrinsic value into the world works well for us.

  6. Strong Realism vs anti-realism (as in: total mind-independence vs mind-dependence) matters: only the strong realist can deeply care about self-interested perspectival bias, e.g. can think of their deepest values as perhaps radically wrong, can worry that an AGI with idealised human values might still be an existential catastrophe.

For some reason, it hadn't occurred to me that a hedonist could do (1). It might be that I think of hedonists as aiming for a very tidy theory, and adding pluralism back in messes that up a bit (e.g. comparability and aggregation remain hard).

Anyway... "pluralistic hedonism" seems quite promising to me!

For readers: her PhD was supervised by Thomas Nagel and she thanks Parfit for input. I'm looking forward to reading it: https://www.stafforini.com/docs/Hewitt - Normative qualia and a robust moral realism.pdf

Comment by peterhartree (Peter_Hartree) on Help me find the crux between EA/XR and Progress Studies · 2021-06-03T13:47:22.001Z · EA · GW

  1. How do you give advice?

PS (Tyler Cowen): I think about what I believe, then I think about what it's useful for people to hear, and then I say that.

EA: I think about what I believe, and then I say that. I generally trust people to respond appropriately to what I say.

Comment by peterhartree (Peter_Hartree) on Help me find the crux between EA/XR and Progress Studies · 2021-06-03T13:21:40.967Z · EA · GW

So here's a list of claims, each with a cartoon response representing my impression of a typical EA or PS view on things (insert caveats here):

  1. Some important parts of "developed world" culture are too pessimistic. It would be very valuable to blast a message of definite optimism, viz. "The human condition can be radically improved! We have done it in the past, and we can do it again. Here are some ideas we should try..."

PS: Strongly agree. The cultural norms that support and enable progress are more fragile than you think.

EA: Agree. But, as an altruist, I tend to focus on preventing bad stuff rather than making good stuff happen (not sure why...).

  2. Broadly, "progress" comes about when we develop and use our capabilities to improve the human condition, and the condition of other moral patients (~sentient beings).

PS: Agree, this gloss seems basically fine for now.

EA: Agree, but we really need to improve on this gloss.

  3. Progress comes in different kinds: technological, scientific, ethical, global coordination. At different times in history, different kinds will be more valuable. Balancing these capabilities matters: during some periods, increasing capabilities in one area (or a subfield of one area) may be disvaluable (cf. the Vulnerable World Hypothesis).

EA & PS: Seems right. Maybe we disagree on where the current margins are?

  4. Let's try not to destroy ourselves! The future could be wonderful!

EA & PS: Yeah, duh. But also eek—we recognise the dangers ahead.

  5. Markets and governments are quite functional, which means there's much more low-hanging fruit in pursuing the interests of those whom these systems aren't at all built to serve (e.g. future generations, animals).

PS: Hmm, take a closer look. There are a lot of trillion-dollar bills lying around, even in areas where an optimistic EMH would say that markets and government ought to do well.

EA: So I used to be really into the EMH. These days, I'm not so sure...

  6. Broadly promoting industrial literacy is really important.

PS: Yes!

EA: I haven't thought about this much. Quick thought is that I'm happy to see some people working on this. I doubt it's the best option for many of the people we speak to, but it could be a good option for some.

  7. We can make useful predictions about the effects of new technologies.

PS (David Deutsch): I might grudgingly accept an extremely weak formulation of this claim. At least on Fridays. And only if you don't try to explicitly assign probabilities.

EA: Yes.

  8. You might be missing a crucial consideration!

PS: What's that? Oh, I see. Yeah. Well... I'm all for thinking hard about things, and acting on the assumption that I'm probably wrong about mostly everything. In the end, I guess I'm crossing my fingers, and hoping we can learn by trial and error, without getting ourselves killed. Is there another option?

EA: I know. This gives me nightmares.

On Max Daniel's thread, I left some general comments, a longer list of questions to which PS/EA might give different answers, and links to some of the discussions that shaped my perspective on this.

Comment by peterhartree (Peter_Hartree) on Progress studies vs. longtermist EA: some differences · 2021-06-03T07:32:18.297Z · EA · GW

@ADS: I enjoyed your discussion of (1), but I understood the conclusion to be :shrug:. Is that where you're at?

Generally, my impression is that differential technological development is an idea that seems right in theory, but the project of figuring out how to apply it in practice seems rather... nascent. For example:

(a) Our stories about which areas we should speed up and slow down are pretty speculative, and while I'm sure we can improve them, the prospects for making them very robust seem limited. DTD does not free us from the uncomfortable position of having to "take a punt" on some extremely high stakes issues.

(b) I'm struggling to think of examples of public discussion of how "strong" a version of DTD we should aim for in practice (pointers, anyone?).

Comment by peterhartree (Peter_Hartree) on Progress studies vs. longtermist EA: some differences · 2021-06-03T07:26:41.313Z · EA · GW

To your Beckstead paraphrase, I'll add Tyler's recent exchange with Joseph Walker:

Cowen: Uncertainty should not paralyse you: try to do your best, pursue maximum expected value, just avoid the moral nervousness, be a little Straussian about it. Like: here's a rule; on average it's a good rule; we're all gonna follow it. Bravo, move on to the next thing. Be a builder.

Walker: So… Get on with it?

Cowen: Yes. Ultimately the nervous Nellies, they're not philosophically sophisticated; they're overindulging their own neuroticism, when you get right down to it. So it's not like there's some brute "let's be a builder" view and then some deeper wisdom that the real philosophers pursue. It's: you be a builder or a nervous Nelly, you take your pick. I say be a builder.

Comment by peterhartree (Peter_Hartree) on Progress studies vs. longtermist EA: some differences · 2021-06-03T07:22:48.753Z · EA · GW

I've gotten several responses on this, and find them all fairly limited. As far as I can tell, the Progress Studies community just is not reasoning very well about x-risk.

Have you pressed Tyler Cowen on this?

I'm fairly confident that he has heard ~all the arguments that the effective altruism community has heard, and that he has understood them deeply. So I default to thinking that there's an interesting disagreement here, rather than a boring "hasn't heard the arguments" or "is making a basic mistake" thing going on.

In a recent note, I sketched a couple of possibilities.

(1) Stagnation is riskier than growth

Stubborn Attachments puts less emphasis on sustainability than the work of other long-term thinkers like Nick Bostrom, Derek Parfit, Richard Posner, Martin Rees and Toby Ord. On the 80,000 Hours podcast, Tyler explained that existential risk was much more prominent in early drafts of the book, but that he decided to de-emphasise it after Posner and others began writing on the topic. In any case, Tyler agrees that we should put more resources into reducing existential risk at current margins. However, it seems as though he, like Peter Thiel, sees the political risk of economic stagnation as a more immediate and existential concern than these other long-term thinkers do. Speaking at one of the first effective altruism conferences, Thiel said that if the rich world continues on a path of stagnation, it's on a one-way path to apocalypse. If we start innovating again, we at least have a chance of getting through, despite the grave risk of finding a black ball.

(2) Tyler is being Straussian

Tyler may have a different view about what messages are helpful to blast into the public sphere. Perhaps this is partly due to a Deutsch / Thiel-style worry about the costs of cultural pessimism about technology. Martin Rees, who sits in the UK House of Lords, claims that democratic politicians are hard to influence unless you first create a popular concern. My guess is Tyler may think both that politicians aren’t the centre of leverage for this issue, and that there are safer, more direct ways to influence them on this topic. In any case, it’s clear Tyler thinks that most people should focus on maximising the growth rate, and only a minority should focus on sustainability issues, including existential safety. It is not inconsistent to think that growth is too slow and that sustainability is underrated. Some listeners will hear the "sustainable" in "maximise the (sustainable) growth rate" and consider making that their focus. Most will not, and that's fine.

Many more people can participate in the project of "maximise the (sustainable) rate of economic growth" than "minimise existential risk".

(3) Something else?

I have a few other ideas, but I don't want to share the half-baked thoughts just yet.

One I'll gesture at: the phrase "cone of value", his catchphrase "all thinkers are regional thinkers", Bernard Williams, and anti-realism.

A couple of relevant quotes from Tyler's interview with Dwarkesh Patel:

[If you are a space optimist you may think that we can relax more about safety once we begin spreading to the stars.] You can get rid of that obsession with safety and replace it with an obsession with settling galaxies. But that also has a weirdness that I want to avoid, because that also means that something about the world we live in does not matter very much, you get trapped in this other kind of Pascal's wager, where it is just all about space and NASA and like fuck everyone else, right? And like if that is right it is right. But my intuition is that Pascal's Wager type arguments, they both don't apply and shouldn't apply here, that we need to use something that works for humans here on earth.

On the 800 years claim:

In the Stanford Talk, I estimated in semi-joking but also semi-serious fashion, that we had 700 or 800 years left in us.

Comment by peterhartree (Peter_Hartree) on Progress studies vs. longtermist EA: some differences · 2021-06-02T07:38:49.894Z · EA · GW

Some questions to which I suspect key figures in Effective Altruism and Progress Studies would give different answers:

a. How much of a problem is it to have a mainstream culture that is afraid of technology, or that underrates its promise?

b. How does the rate of economic growth in the West affect the probability of political catastrophe, e.g. WWIII?

c. How fragile are Enlightenment norms of open, truth-seeking debate? (E.g. Deutsch thinks something like the Enlightenment "tried to happen" several times, and that these norms may be more fragile than we think.)

d. To what extent is existential risk something that should be quietly managed by technocrats vs a popular issue that politicians should be talking about?

e. The relative priority of catastrophic and existential risk reduction, and the level of convergence between these goals.

f. The tractability of reducing existential risk.

g. What is most needed: more innovation, or more theory/plans/coordination?

h. What does ideal and actual human rationality look like? E.g. Bayesian, ecological, individual, social.

i. How to act when faced with small probabilities of extremely good or extremely bad outcomes.

j. How well can we predict the future? Is it reasonable to make probability estimates about technological innovation? (I can't quickly find the strongest "you can't put probabilities" argument, but here's Anders Sandberg sub-YouTubing Deutsch)

k. Credence in moral realism.

Comment by peterhartree (Peter_Hartree) on Progress studies vs. longtermist EA: some differences · 2021-06-02T07:22:32.345Z · EA · GW

Some notable discussions involving key figures:

Comment by peterhartree (Peter_Hartree) on Progress studies vs. longtermist EA: some differences · 2021-06-02T07:19:48.110Z · EA · GW

Bear in mind that I'm more familiar with the Effective Altruism community than I am with the Progress Studies community.

Some general impressions:

  1. Superficially, key figures in Progress Studies seem a bit less interested in moral philosophy than those in Effective Altruism. But Tyler Cowen is arguably as much a philosopher as he is an economist, and he co-authored Against The Discount Rate (1992) with Derek Parfit. Patrick Collison has read Reasons and Persons, The Precipice, and so on, and is a board member of The Long Now Foundation. Peter Thiel takes philosophy and the humanities very seriously (see here and here). And David Deutsch has written a philosophical book, drawing on Karl Popper.

  2. On average, key figures in EA are more likely to have a background in academic philosophy, while PS figures are more likely to have been involved in entrepreneurship or scientific research.

  3. There seem to be some differences in disposition / sensibility / normative views around questions of risk and value. E.g. I would guess that PS figures are more likely to have ridden a motorbike, and more likely to say things like "full steam ahead".

  4. To caricature: when faced with a high stakes uncertainty, EA says "more research is needed", while PS says "quick, let's try something and see what happens". Alternatively: "more planning/co-ordination is needed" vs "more innovation is needed".

  5. PS figures seem to put less of a premium on co-ordination and consensus-building, and more of a premium on decentralisation and speed.

  6. PS figures seem (even) more troubled by the tendency of large institutions with poor feedback loops towards dysfunction.

Comment by peterhartree (Peter_Hartree) on Some quick notes on "effective altruism" · 2021-03-26T13:17:57.214Z · EA · GW

Thanks for writing this, Jonas.

For what it's worth:

  1. I share the concerns you mentioned.
  2. I personally find the name "effective altruism" somewhat cringe and off-putting. I've become used to it over the years but I still hear it and feel embarrassed every now and then.
  3. I find the label "effective altruist" several notches worse: that elicits a slight cringe reaction most of the time I encounter it.
  4. The names "Global priorities" and "Progress studies" don't trigger a cringe reaction for me.
  5. I have a couple of EA-inclined acquaintances who have told me they were put off by the name "effective altruism".
  6. While I don't like the name, the thought that it might be driving large and net positive selection effects does not seem crazy to me.
  7. I would be glad if someone gave this topic further thought, plausibly to the extent of conducting surveys and speaking to relevant experts.

Comment by peterhartree (Peter_Hartree) on Supportive scepticism in practice · 2015-01-25T13:35:27.308Z · EA · GW

Jess & Michelle: thanks for this excellent post. Three remarks I'd like to add:

1. We all need support, but individuals vary considerably in the kind of support they need in order to flourish. A kind of support that works well for one person might feel patronising, frustrating or stifling to another, or cold, distant and uncaring to a third. To be effectively supportive, we must be sensitive to individual needs.

2. Being supportive is difficult, so individuals in the community should help others support them. If the support you're getting from the community is suboptimal, it's unlikely that other individuals are entirely to blame.

3. As a community, we should create an atmosphere where it's easy for people to ask for more or different kinds of support when they need to. Admitting vulnerability and requesting support is a sign of strength and maturity, not weakness, so we should praise, encourage and reward those who do this.

Comment by peterhartree (Peter_Hartree) on EAs on RSS and Reddit! · 2015-01-01T18:43:42.457Z · EA · GW

Nice work. We'll hopefully add this to the 80,000 Hours blog sidebar during Q1.

Comment by peterhartree (Peter_Hartree) on What should an effective altruist be committed to? · 2014-12-23T09:11:40.110Z · EA · GW

I think there are two questions here:

  1. How much of my time should I allocate to altruistic endeavour?
  2. How should I use the time I’ve allocated to altruistic endeavour?

Effective altruism clearly has a lot to say about (2). It could also say some things about (1), but I don’t think it is obliged to. These look like questions that can be addressed (fairly) independently of one another.

An aside: a weakness of the unqualified phrase “do the most good” is that it blurs these two questions. If you characterise the effective altruist as someone who wants to “do the most good”, it’s easy to give the impression that they are committed to maximising both the effectiveness of their altruistic endeavour and the amount of time they allocate to altruistic endeavour.

I’m quite keen on Rob’s proposed characterisation of an effective altruist, which remains fairly quiet on (1):

Someone who believes that to be a good altruist, you should use evidence and reason to do the most good with your altruistic actions, and puts at least some time or money behind the things they therefore believe will do the most good.

This strikes me as a substantive and inclusive idea. Complementary communities or sub-groups could form around the idea of giving 10%, giving 50%, etc, and effective altruists might be encouraged - but not obliged - to join them.

Much of the discussion in this thread has focussed on the question of which characterisation of effective altruism would have the greater impact potential in the long-run. In particular, whether a more demanding characterisation, likely to limit appeal, might nonetheless have a greater overall impact. I don't have much to add to what's been said, except to flag that an inclusive characterisation is likely to bring more diversity to the community - a quality it's somewhat lacking at present.

Comment by peterhartree (Peter_Hartree) on Generic good advice: do intense exercise often · 2014-12-16T21:56:50.193Z · EA · GW

I strongly endorse what Rob said. Intense regular exercise is by far the best productivity and general well-being hack I've ever adopted. In my experience, once you get into it, it's the opposite of a chore.

Second-best hack (for focus): the Pomodoro Technique (use Tadam as your timer; Mac only).

Third-best hack (for reducing stress): regular mindfulness meditation (about 10 minutes / day, use Headspace to learn the basics).