Posts

AGI risk: analogies & arguments 2021-03-23T13:18:20.638Z
The academic contribution to AI safety seems large 2020-07-30T10:30:19.021Z
[Link] The option value of civilization 2019-01-06T09:58:17.919Z
Existential risk as common cause 2018-12-05T14:01:04.786Z

Comments

Comment by technicalities on What is an example of recent, tangible progress in AI safety research? · 2021-06-16T12:48:42.976Z · EA · GW

Not recent-recent, but I also really like Carey's 2017 work on CIRL. Picks a small, well-defined problem and hammers it flush into the ground. "When exactly does this toy system go bad?"

Comment by technicalities on What is an example of recent, tangible progress in AI safety research? · 2021-06-16T06:51:32.016Z · EA · GW

If we take "tangible" to mean executable:

But as Kurt Lewin once said, "there's nothing so practical as a good theory." In particular, theory scales automatically, and conceptual work can stop us from wasting effort on the wrong things.

  • CAIS (2019) pivots away from the classic agentic model, maybe for the better
  • The search for mesa-optimisers (2019) is a step forward from previous muddled thoughts on optimisation, and its authors make predictions we can test soon.
  • The Armstrong/Shah discussion of value learning changed my research direction for the better.


Also, Everitt et al. (2019) is both: a theoretical advance that comes with good software.

Comment by technicalities on A Viral License for AI Safety · 2021-06-14T04:23:57.212Z · EA · GW

I think you're right, see my reply to Ivan.

Comment by technicalities on A Viral License for AI Safety · 2021-06-14T04:22:17.505Z · EA · GW

I think I generalised too quickly in my comment; I saw "virality" and "any later version" and assumed the worst. But of course we can take into account AGPL backfiring when we design this licence!

One nice side effect of even a toothless AI Safety Licence: it puts a reminder about safety at the top of every repo. Sure, no one reads licences (and people often ignore health and safety rules when they get in the way, even at their own risk). But maybe it makes safety a bit more tangible, in the way LICENSE.md gives law a foothold in the minds of devs.

Comment by technicalities on Matsés - Are languages providing epistemic certainty of statements not of the interest of the EA community? · 2021-06-12T18:53:48.626Z · EA · GW

Seems I did this in exactly 3 posts before getting annoyed.

Comment by technicalities on Matsés - Are languages providing epistemic certainty of statements not of the interest of the EA community? · 2021-06-09T09:39:57.215Z · EA · GW

That's cool! I wonder if they suffer from the same ambiguity as epistemic adjectives in English though* (which would suggest that we should skip straight to numerical assignments: probabilities or belief functions).

Anecdotally, it's quite tiring to put credence levels on everything. When I started my blog I began by putting a probability on all major claims (and even wrote a script to hide this behind a popup to minimise aesthetic damage). But I soon stopped.

For important things (like Forum posts?) it's probably worth the effort, but even a document-level confidence statement is a norm with only spotty adoption on here.

Comment by technicalities on A Viral License for AI Safety · 2021-06-05T09:20:58.707Z · EA · GW

This is a neat idea, and unlike many safety policy ideas it has scaling built in.

However, I think the evidence from the original GPL suggests that this wouldn't work. Large companies are extremely careful simply not to use GPL software, which includes making their own closed-source implementations instead.* Things like the Skype case are the exception, and they make other companies even more careful to avoid GPL code. All of this has caused GPL licensing to fall massively over the last decade.** I can't find stats, but I predict that GPL projects get much less usage and dev activity than permissively licensed ones.

It's difficult to imagine software so good and so difficult to replicate that Google would invite our virus into their proprietary repo. Sure, AI might be different from [Yet Another Cool AGPL Parser] - but then who has a bigger data moat and more AI engineering talent than big tech, enough to just implement it for themselves?

** https://opensource.com/article/17/2/decline-gpl

Comment by technicalities on Help me find the crux between EA/XR and Progress Studies · 2021-06-03T08:19:28.517Z · EA · GW

Aschenbrenner's model strikes me as a synthesis of the two intellectual programmes, and it doesn't get enough attention.

Comment by technicalities on Why should we *not* put effort into AI safety research? · 2021-05-16T09:46:41.091Z · EA · GW

Robin Hanson is the best critic imo. He has many arguments (or one very developed one); the big pieces are:

  • Innovation in general is not very "lumpy" (discontinuous). So we should assume that AI innovation will also not be. So no one AI lab will pull far ahead of the others at AGI time. So there won't be a 'singleton', a hugely dangerous world-controlling system.
     
  • Long timelines [100+ years], plus the expectation that we'll get 'fire alarms' (clear warnings) as AGI gets closer
     
  • Opportunity cost of spending / shouting now (see the arithmetic note after this list)
    "we are far from human level AGI now, we'll get more warnings as we get closer, and by saving $ you get 2x as much to spend each 15 years you wait."
    "having so many people publicly worrying about AI risk before it is an acute problem will mean it is taken less seriously when it is, because the public will have learned to think of such concerns as erroneous fear mongering."
     
  • The automation of labour isn't accelerating (therefore current AI is not being deployed to notable effect, therefore current AI progress is not yet world-changing in one sense)
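
For concreteness, the doubling claim quoted under "opportunity cost" above corresponds to roughly the following annual real return (my arithmetic, not Hanson's):

$$2^{1/15} \approx 1.047, \text{ i.e. about a } 4.7\% \text{ annual real return.}$$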

He might not be what you had in mind: Hanson argues that we should wait to work on AGI risk, rather than that safety work is forever unnecessary or ineffective. The latter claim seems extreme to me and I'd be surprised to find a really good argument for it.

You might consider the lack of consensus about basic questions, mechanisms, solutions amongst safety researchers to be a bad sign.

Nostalgebraist (2019) sees AGI alignment as equivalent to solving large parts of philosophy: a noble but quixotic quest.

Melanie Mitchell also argues for long timelines. Her view is closer to the received view in the field (but this isn't necessarily a compliment).

Comment by technicalities on What are your favorite examples of moral heroism/altruism in movies and books? · 2021-04-26T17:03:46.435Z · EA · GW

Spoilers for Unsong:

Jalaketu identifies the worst thing in the world - hell - and sacrifices everything, including his own virtue and impartiality, to destroy it. It is the strongest depiction of the second-order consistency, second-order glory of consequentialism I know. (But also a terrible tradeoff.)

Comment by technicalities on Voting reform seems overrated · 2021-04-10T07:28:14.351Z · EA · GW

Shouldn't the title be "Proportional Representation seems overrated"?

PR is often what people mean by voting reform in the UK, but there are options without these problems, e.g. approval voting.

Comment by technicalities on What are your main reservations about identifying as an effective altruist? · 2021-03-30T15:22:20.586Z · EA · GW

I see "effective altruist" as a dodgy shorthand for the full term: "aspiring effective altruist". I'm happy to identify as the latter in writing (though it is too clunky for speech).

Comment by technicalities on I scraped all public "Effective Altruists" Goodreads reading lists · 2021-03-23T23:22:08.280Z · EA · GW

I call shotgun on "On Certainty", one of the most-wanted books. (The author and I have butted heads before. He is much better at headbutting than me.)

Comment by technicalities on AGI risk: analogies & arguments · 2021-03-23T17:55:27.950Z · EA · GW

I felt much the same writing it. I'll add that to my content note, thanks.

Comment by technicalities on AGI risk: analogies & arguments · 2021-03-23T15:38:20.935Z · EA · GW

The opposite post (reasons not to worry) could be good as well. e.g.

Comment by technicalities on EA capital allocation is an inner ring · 2021-03-19T17:51:33.228Z · EA · GW

In this one, it's that there is no main body, just a gesture off-screen. Only a small minority of readers will be familiar enough with the funding apparatus to complete your "exercise to the reader..." Maybe you're writing for that small minority, but it's fair for the rest to get annoyed.

In past ones (from memory), it's again this sense of pushing work onto the reader. Sense of "go work it out".

Comment by technicalities on EA capital allocation is an inner ring · 2021-03-19T17:15:46.218Z · EA · GW

It might be better to collate and condense your series into one post, once it's finished (or starting now). These individual posts really aren't convincing, and probably hurt your case if anything. Part of that is the Forum's conventions about content being standalone. But the rest is clarity and evidence: your chosen style is too esoteric.

I don't think it's our unwillingness to hear you out. Some of the most well-regarded posts on here are equally fundamental critiques of EA trends, but written persuasively / directly:

https://forum.effectivealtruism.org/posts/bsE5t6qhGC65fEpzN/growth-and-the-case-against-randomista-development

https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really

https://forum.effectivealtruism.org/posts/DxfpGi9hwvwLCf5iQ/objections-to-value-alignment-between-effective-altruists

https://forum.effectivealtruism.org/posts/jSPGFxLmzJTYSZTK3/reality-is-often-underpowered

Comment by technicalities on Can a Vegan Diet Be Healthy? A Literature Review · 2021-03-12T19:52:17.761Z · EA · GW

Worth noting that multivitamins are associated with very slightly increased mortality in the general population. Cochrane put this down to overdosing on vitamins A and E and beta-carotene, which I don't expect vegans to be deficient in, so the finding might transfer. (Sounds like you've done blood tests though, so ignore me if it's helping you.)

https://www.cochrane.org/CD007176/LIVER_antioxidant-supplements-for-prevention-of-mortality-in-healthy-participants-and-patients-with-various-diseases

Comment by technicalities on What are some potential coordination failures in our community? · 2020-12-12T08:24:34.725Z · EA · GW

The cycle of people coming up with ideas for how to organise people into projects, prevent redundant posts, or make the Forum more accretive, only for those ideas to be forgotten a week later. i.e. we fail to coordinate on coordination projects.

Comment by technicalities on Progress Open Thread: December 2020 · 2020-12-02T15:44:16.503Z · EA · GW

Can anyone in clean meat verify this news? The last time I checked, we were still years off market release.

Conditional on it being a real shock, hooray!

https://www.google.com/amp/s/amp.theguardian.com/environment/2020/dec/02/no-kill-lab-grown-meat-to-go-on-sale-for-first-time

Comment by technicalities on The academic contribution to AI safety seems large · 2020-11-22T12:29:30.439Z · EA · GW

A follow-up post to ARCHES, with a ranking of existing fields and lots more bibliographies.

Comment by technicalities on The Case for Space: A Longtermist Alternative to Existential Threat Reduction · 2020-11-18T16:14:02.979Z · EA · GW

Some more prior art, on Earth vs off-world "lifeboats". See also 4.2 here for a model of mining Mercury (for solar panels, not habitats).

Comment by technicalities on The academic contribution to AI safety seems large · 2020-07-31T17:15:45.460Z · EA · GW

This makes sense. I don't mean to imply that we don't need direct work.

AI strategy people have thought a lot about the capabilities:safety ratio, but it'd be interesting to think about the ratio of complementary parts of safety you mention. Ben Garfinkel notes that e.g. reward engineering work (by alignment researchers) is dual-use; it's not hard to imagine scenarios where lots of progress in reward engineering without corresponding progress in inner alignment could hurt us.

Comment by technicalities on The academic contribution to AI safety seems large · 2020-07-31T17:00:31.239Z · EA · GW

Thanks!

"research done by people who are trying to do something else will probably end up not being very helpful for some of the core problems."

Yeah, it'd be good to break AGI control down more, to see if there are classes of problem where we should expect indirect work to be much less useful. But this particular model already has enough degrees of freedom to make me nervous.

"I think that it might be easier to assign a value to the discount factor by assessing the total contributions of EA safety and non-EA safety."

That would be great! I used headcount because it's relatively easy, but value weights are clearly better. Do you know any reviews of alignment contributions?

"... This doesn't seem to mesh with your claim about their relative productivity."

Yeah, I don't claim to be systematic. The nine are just notable things I happened across, rather than an exhaustive list of academic contributions. Besides the weak evidence from the model, my optimism about there being many other academic contributions is based on my own shallow knowledge of AI: "if even I could come up with 9..."

Something like the Median insights collection, but for alignment, would be amazing, but I didn't have time.

"those senior researchers won't necessarily have useful things to say about how to do safety research"

This might be another crux: "how much do general AI research skills transfer to alignment research?" (Tacitly I was assuming medium-high transfer.)

"I think the link is to the wrong model?"

No, that's the one; I mean the 2x2 of factors which lead to '% work that is alignment relevant'. (Annoyingly, Guesstimate hides the dependencies by default; try View > Visible)

Comment by technicalities on The academic contribution to AI safety seems large · 2020-07-31T13:33:16.556Z · EA · GW

An important source of capabilities / safety overlap, via Ben Garfinkel:

Let’s say you’re trying to develop a robotic system that can clean a house as well as a human house-cleaner can... Basically, you’ll find that if you try to do this today, it’s really hard to do that. A lot of traditional techniques that people use to train these sorts of systems involve reinforcement learning with essentially a hand-specified reward function...
One issue you’ll find is that the robot is probably doing totally horrible things because you care about a lot of other stuff besides just minimizing dust. If you just do this, the robot won’t care about, let’s say throwing out valuable objects that happened to be dusty. It won’t care about, let’s say, ripping apart a couch cushion to find dust on the inside... You’ll probably find any simple line of code you write isn’t going to capture all the nuances. Probably the system will end up doing stuff that you’re not happy with.
This is essentially an alignment problem. This is a problem of giving the system the right goals. You don’t really have a way using the standard techniques of making the system even really act like it’s trying to do the thing that you want it to be doing. There are some techniques that are being worked on actually by people in the AI safety and the AI alignment community to try and basically figure out a way of getting the system to do what you want it to be doing without needing to hand-specify this reward function...
These are all things that are being developed by basically the AI safety community. I think the interesting thing about them is that it seems like until we actually develop these techniques, probably we’re not in a position to develop anything that even really looks like it’s trying to clean a house, or anything that anyone would ever really want to deploy in the real world. It seems like there’s this interesting sense in which we have the storage system we’d like to create, but until we can work out the sorts of techniques that people in the alignment community are working on, we can’t give it anything even approaching the right goals. And if we can’t give anything approaching the right goals, we probably aren’t going to go out and, let’s say, deploy systems in the world that just mess up people’s houses in order to minimize dust.
I think this is interesting, in the sense in which the processes to give things the right goals bottleneck the process of creating systems that we would regard as highly capable and that we want to put out there.

He sees this as positive: it implies massive economic incentives to do some alignment, and a block on capabilities until it's done. But it could be a liability as well, if the alignment of weak systems is correspondingly weak, and if mid-term safety work fed into a capabilities feedback loop with greater amplification.

Comment by technicalities on The academic contribution to AI safety seems large · 2020-07-31T11:39:23.633Z · EA · GW

Thanks for this, I've flagged this in the main text. Should've paid more attention to my confusion on reading their old announcement!

Comment by technicalities on The academic contribution to AI safety seems large · 2020-07-30T10:31:25.525Z · EA · GW

If the above strikes you as wrong (and not just vague), you could copy the Guesstimate, edit the parameters, and comment below.

Comment by technicalities on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T09:33:30.983Z · EA · GW

Welcome!

It's a common view. Some GiveWell staff hold this view, and indeed most of their work involves short-term effects, probably for epistemic reasons. Michael Plant has written about the EA implications of person-affecting views, and emphasises improvements to world mental health.

Here's a back-of-the-envelope estimate for why person-affecting views might still be bound to prioritise existential risk though (for the reason you give, but with some numbers for easier comparison).

Dominic Roser and I have also puzzled over Christian longtermism a bit.

Comment by technicalities on What would a pre-mortem for the long-termist project look like? · 2020-04-12T07:58:34.208Z · EA · GW

Great comment. I count only 65 percentage points - is the other third "something else happened"?

Or were you not conditioning on long-termist failure? (That would be scary.)

Comment by technicalities on (How) Could an AI become an independent economic agent? · 2020-04-04T19:40:05.401Z · EA · GW

IKEA is an interesting case: it was bequeathed entirely to a nonprofit foundation with a very loose mission and no owner(?)

https://www.investopedia.com/articles/investing/012216/how-ikea-makes-money.asp

Not a silly question IMO. I thought about Satoshi Nakamoto's bitcoin - but if they're dead, then it's owned by their heirs, or failing that by the government of whatever jurisdiction they were in. In places like Britain I think a combination of "bona vacantia" (unclaimed estates go to the government) and "treasure trove" (old treasure also) cover the edge cases. And if all else fails there's "finders keepers".

Comment by technicalities on What posts do you want someone to write? · 2020-03-29T14:08:06.017Z · EA · GW

A nice example of the second part, value dependence, is Ozy Brennan's series reviewing GiveWell charities.

Why might you donate to GiveDirectly?
  • You need a lot of warmfuzzies in order to motivate yourself to donate.
  • You think encouraging cash benchmarking is really important, and giving GiveDirectly more money will help that.
  • You want to encourage charities to do more RCTs on their programs by rewarding the charity that does that most enthusiastically.
  • You care about increasing people's happiness and don't care about saving the lives of small children, and prefer a certainty of a somewhat good outcome to a small chance of a very good outcome.
  • You believe, in principle, that we should let people make their own decisions about their lives.
  • You want an intervention that definitely has at least a small positive effect.
  • You have just looked at GDLive and are no longer responsible for your actions.

Comment by technicalities on What posts do you want someone to write? · 2020-03-24T08:40:03.140Z · EA · GW

Collating predictions made by particularly big pundits and getting calibration curves for them. Bill Gates is getting a lot of attention now for warning of a pandemic in 2015; but what's his overall track record? (He's a bad example, though, since I expect his advisors to be world-class and to totally suppress his variance.)

If this could be hosted somewhere with a lot of traffic, it could reinforce good epistemics.
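
A minimal sketch of the calibration step, assuming the predictions have already been collated into (stated probability, outcome) pairs; the pundit data below is invented purely for illustration:

```python
from collections import defaultdict

def calibration_curve(predictions, n_bins=10):
    """predictions: iterable of (stated_probability, outcome) pairs,
    where outcome is 1 if the predicted event happened, else 0.
    Returns (bin_midpoint, observed_frequency, n_predictions) per non-empty bin."""
    bins = defaultdict(list)
    for p, outcome in predictions:
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the top bin
        bins[idx].append(outcome)
    return [((idx + 0.5) / n_bins, sum(outs) / len(outs), len(outs))
            for idx, outs in sorted(bins.items())]

# Invented data: a pundit whose "90%" calls come true only ~60% of the time
pundit = [(0.9, 1), (0.9, 0), (0.9, 1), (0.9, 0), (0.9, 1),
          (0.6, 1), (0.6, 0), (0.2, 0), (0.2, 1), (0.2, 0)]
print(calibration_curve(pundit))
```

Plotting observed frequency against bin midpoint, with the diagonal for reference, gives the calibration curve; systematic gaps between the two show the pundit's over- or under-confidence.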

Comment by technicalities on What posts do you want someone to write? · 2020-03-24T08:35:03.042Z · EA · GW

A case study of the Scientific Revolution in Britain as an intervention by a small group. This bears on one of the most surprising facts: the huge gap, 1.5 centuries, between the Scientific and Industrial Revolutions. It could also shed light on the old marginal-vs-systemic argument: a synthesis is "do politics - to promote nonpolitical processes!"

https://forum.effectivealtruism.org/posts/RfKPzmtAwzSw49X9S/open-thread-46?commentId=rWn7HTvZaNHCedXNi

Comment by technicalities on What are some 1:1 meetings you'd like to arrange, and how can people find you? · 2020-03-20T14:52:38.622Z · EA · GW

Who am I?

Gavin Leech, a PhD student in AI at Bristol. I used to work in international development, official statistics, web development, data science.

Things people can talk to you about

Stats, forecasting, great books, development economics, pessimism about philosophy, aphorisms, why AI safety is eating people, fake frameworks like multi-agent mind. How to get technical after an Arts degree.

Things I'd like to talk to others about

The greatest technical books you've ever read. Research taste, and how it is transmitted. Non-opportunistic ways to do AI safety. How cluelessness and AIS interact; how hinginess and AIS interact.

Get in touch

g@gleech.org. I also like the sound of this open-letter site.

Comment by technicalities on Open Thread #46 · 2020-03-14T10:36:45.956Z · EA · GW

Suggested project for someone curious:

There are EA profiles of interesting influential (or influentially uninfluential) social movements - the Fabians, the neoliberals, the General Semanticists. But no one has written about the biggest: the scientific revolution in Britain as an intentional intervention by a neoliberal-style coterie.

A small number of the most powerful people in Britain - the Lord Chancellor, the king's physicians, the chaplain of the Elector Palatine / bishop of Chester, London's greatest architect, and so on - apparently pushed a massive philosophical change, founded some of the key institutions for the next 4 centuries, and thereby contributed to most of our subsequent achievements.

Outline:

  • Elizabethan technology and institutions before Bacon. Scholasticism and mathematical magic
  • The protagonists: "The Invisible College"
  • The impact of Gresham College and the Royal Society (sceptical empiricism revived! Peer review! Data sharing! efficient causation! elevating random uncredentialed commoners like Hooke)
  • Pre-emptive conflict management (Bacon's and Boyle's manifestos and Utopias are all deeply Christian)
  • The long gestation: it took 100 years for it to bear any fruit (e.g. Boyle's law, the shocking triumph of Newton); it took 200 years before it really transformed society. This is not that surprising measured in person-years of work, but otherwise why did it take so long?
  • Counterfactual: was Bacon overdetermined by economic or intellectual trends? If it was inevitable, how much did they speed it up?
  • Somewhat tongue-in-cheek cost-benefit estimate.

This was a nice introduction to the age.

Comment by technicalities on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-03-10T13:40:20.052Z · EA · GW

To my knowledge, most of the big names (Bentham, Sidgwick, Mill, Hare, Parfit) were anti-speciesist to some degree; the unusual contribution of Singer is the insistence on equal consideration for nonhumans. It was just not obvious to their audiences for 100+ years afterward.

My understanding of multi-level utilitarianism is that it permits not using explicit utility estimation, rather than forbidding its use (utility estimation as not the only decision procedure, and often too expensive). It makes sense to read (naive, ideal) single-level consequentialism as the converse: forbidding or discouraging not using utility estimation. Is this a straw man? Possibly; I'm not sure I've ever read anything by a strict estimate-everything single-level person.

Comment by technicalities on What are the key ongoing debates in EA? · 2020-03-09T15:00:47.738Z · EA · GW

I read it as 'getting some people who aren't economists, philosophers, or computer scientists'. (:

(Speaking as a philosophy+economics grad and a sort-of computer scientist.)

Comment by technicalities on What are the key ongoing debates in EA? · 2020-03-09T13:28:40.601Z · EA · GW

Not sure. 2017 fits the beginning of the discussion though.

Comment by technicalities on What are the key ongoing debates in EA? · 2020-03-08T17:00:16.259Z · EA · GW

I've had a few arguments about the 'worm wars', whether the bet on deworming kids, which was uncertain from the start, is undermined by the new evidence.

My interlocutor is very concerned about model error in cost-benefit analysis, about avoiding side effects (and 'double effect' in particular); and not just for the usual PR or future credibility reasons.

Comment by technicalities on What are the best arguments that AGI is on the horizon? · 2020-02-16T11:55:56.723Z · EA · GW

It can seem strange that people act decisively about speculative things. So the first piece to understand is expected value: if something would be extremely important if it happened, then you can place quite low probability on it and still have warrant to act on it. (This is sometimes accused of being a decision-theory "mugging", but it isn't: we're talking about subjective probabilities in the range of 1% - 10%, not infinitesimals like those involved in Pascal's mugging.)
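
To make the expected-value point concrete (numbers made up purely for illustration): suppose you assign probability $p = 0.05$ to the bad outcome, and acting would avert a loss you value at $V = 10^6$ units. Then

$$\mathbb{E}[\text{value of acting}] = p \times V = 0.05 \times 10^{6} = 5 \times 10^{4} \text{ units},$$

which can still dominate options whose smaller payoff is certain; no Pascalian probabilities needed.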

I think the most-defensible outside-view argument is: it could happen soon; it could be dangerous; aligning it could be very hard; and the product of these probabilities is not low enough to ignore.

1. When you survey general AI experts (not just safety or AGI people), they give a very wide distribution of predictions for when we will have human-level AI (HLAI), with a central tendency around "a 10% chance of human-level AI... in the 2020s or 2030s". (This is weak evidence, since technology forecasting is very hard and these surveys are not random samples; but it is some evidence.)


2. We don't know what the risk of HLAI being dangerous is, but we have a couple of analogous precedents:

  • the human precedent for world domination through intelligence / combinatorial generalisation / cunning
  • the human precedent for 'inner optimisers': evolution was heavily optimising for genetic fitness, but produced a system, us, which optimises for a very different objective ("fun", or "status", or "gratification", or some bundle of nonfitness things)
  • goal space is much larger than the human-friendly part of goal space (suggesting that a random objective will not be human-friendly, which, combined with assumptions about goal maximisation and instrumental drives, implies that most goals could be dangerous)
  • there's a common phenomenon of very stupid ML systems still developing "clever" unintended / hacky / dangerous behaviours


3. We don't know how hard alignment is, so we don't know how long it will take to solve. It may involve certain profound philosophical and mathematical questions, which have been worked on by some of the greatest thinkers for a long time. Here's a nice nontechnical statement of the potential difficulty. Some AI safety researchers are actually quite optimistic about our prospects for solving alignment, even without EA intervention, and work on it to cover things like the "value lock-in" case instead of the x-risk case.

Comment by technicalities on Can I do an introductory post? · 2020-02-14T07:12:19.247Z · EA · GW

Welcome! This is a fine thing - you could link to your story here, for instance:

https://forum.effectivealtruism.org/posts/FA794RppcqrNcEgTC/why-are-you-here-an-origin-stories-thread

Comment by technicalities on Growth and the case against randomista development · 2020-01-16T13:55:52.751Z · EA · GW

Great work. I'm very interested in this claim

"the top ten most prescribed medicines many work on only a third of the patients"

In which volume was this claim made?

Comment by technicalities on In praise of unhistoric heroism · 2020-01-08T11:11:10.539Z · EA · GW

Some (likely insufficient) instrumental benefits of feeling bad about yourself:

  • When I play saxophone I often feel frustration at not sounding like Coltrane or Parker; but when I sing I feel joy at just being able to make noise. I'm not sure which mindset has led to better skill growth.
  • Evaluations can compare up (to a superior reference class) or compare down. I try to do plenty of both, e.g. "Relative to the human average I've done a lot and know a lot." Comparing up is more natural to me, so I have an emotional-support Anki deck of achievements and baselines.
  • Impostor syndrome is always painful and occasionally useful. Most people can't / won't pay attention to what they're bad at, and people with impostor syndrome sometimes do, and so at least have a chance to improve. If I had the chance to completely "cure" mine I might not, instead halving the intensity. (Soares' Replacing Guilt is an example of a productive mindset which dispenses with this emotional cost though, and it might be learnable, I don't know.)
  • It's really important for EAs to be modest, if only to balance out the arrogant-seeming claim in the word "Effective".
  • My adult life was tense and confusing until I blundered into two-level utilitarianism, so endorsing doing most actions intuitively, not scoring my private life. (I was always going to do most things intuitively, because it's impossible not to, but I managed to stop feeling bad about it.) Full explicit optimisation is so expensive and fraught that it only makes sense for large or rare decisions, e.g. career, consumption habits, ideology.

Comment by technicalities on Against value drift · 2019-11-05T12:34:29.319Z · EA · GW

Sure, I agree that most people's actions have a streak of self-interest, and that posterity could serve as this even in cases of sacrificing your life. I took OP to be making a stronger claim, that it is simply wrong to say that "people have altruistic values" as well.

There's just something off about saying that these altruistic actions were caused by selfish/social incentives, when the strongest social incentive pointed the other way: ostracism, or even the death penalty, for doing it.

Comment by technicalities on Against value drift · 2019-10-30T19:05:32.213Z · EA · GW

How does this reduction account for the many historical examples of people who defied local social incentives, with little hope of gain and sometimes even destruction? (Off the top of my head: Ignaz Semmelweis, Irena Sendler, Sophie Scholl.)

We can always invent sufficiently strange posthoc preferences to "explain" any behaviour: but what do you gain in exchange for denying the seemingly simpler hypothesis "they had terminal values independent of their wellbeing"?

(Limiting this to atheists, since religious martyrs are explained well by incentives.)

Comment by technicalities on What book(s) would you want a gifted teenager to come across? · 2019-08-07T15:10:23.810Z · EA · GW

Actually I think Feynman has the same risk. (Consider his motto: "disregard others"! All very well, if you're him.)

https://stepsandleaps.wordpress.com/2017/10/17/feynmans-breakthrough-disregard-others/

Comment by technicalities on What book(s) would you want a gifted teenager to come across? · 2019-08-07T08:32:23.013Z · EA · GW

I think I would have benefitted from Hanson's 'Elephant in the Brain', since I was intensely frustrated by (what I saw as) pervasive, inexplicable, wilfully bad choices, and this frustration affected my politics and ethics.

But it's high-risk, since it's easy to misread as justifying adolescent superiority (having 'seen through' society).

Comment by technicalities on Call for beta-testers for the EA Pen Pals Project! · 2019-07-26T20:49:55.988Z · EA · GW

I suggest randomising in two blocks: people who strongly prefer video calls vs people who strongly prefer text, with abstainers assigned to either. That should prevent one obvious failure mode: people being paired over a medium that doesn't work for one of them.
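
A minimal sketch of that assignment rule (invented names; not the project's actual code):

```python
import random

def pair_within_blocks(video_pref, text_pref, either):
    """Pair participants within medium-preference blocks; people with no
    strong preference top up the smaller block first."""
    video, text = list(video_pref), list(text_pref)
    flexible = list(either)
    random.shuffle(flexible)
    for person in flexible:
        (video if len(video) <= len(text) else text).append(person)
    pairs = []
    for block in (video, text):
        random.shuffle(block)
        pairs += [(block[i], block[i + 1]) for i in range(0, len(block) - 1, 2)]
    return pairs  # anyone left over in an odd-sized block stays unpaired

print(pair_within_blocks(["Ann", "Bo"], ["Cy", "Di", "Ed"], ["Fay"]))
```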

Comment by technicalities on Who are the people that most publicly predicted we'd have AGI by now? Have they published any kind of retrospective, and updated their views? · 2019-06-29T14:29:50.169Z · EA · GW

I was sure that Kurzweil would be one, but actually he's still on track. ("Proper Turing test passed by 2029").

I wonder if the dismissive received view on him is because he states specific years (to make himself falsifiable), which people interpret as crankish overconfidence.

Comment by technicalities on Effective Altruism is an Ideology, not (just) a Question · 2019-06-29T10:50:12.682Z · EA · GW

Fair. But without tech there would be much less to fight for. So it's multiplicative.