Posts

Writing about my job: Data Scientist 2021-07-19T10:26:32.884Z
AGI risk: analogies & arguments 2021-03-23T13:18:20.638Z
The academic contribution to AI safety seems large 2020-07-30T10:30:19.021Z
[Link] The option value of civilization 2019-01-06T09:58:17.919Z
Existential risk as common cause 2018-12-05T14:01:04.786Z

Comments

Comment by technicalities on [Creative Writing Contest] [Fiction] [Referral] A Common Sense Guide to Doing the Most Good, by Alexander Wales · 2021-09-14T05:59:03.123Z · EA · GW

I'm actually pretty happy for this warning to spread; it's not a big problem now(?), but will be if growth continues. Vigilance is the way to make the critique untrue.

OTOH you don't necessarily want to foreground it as the first theme of EA, or even the main thing to worry about.

Comment by technicalities on My first PhD year · 2021-09-02T07:32:14.174Z · EA · GW

Looks like a great year Jaime!

Strongly agree that freedom to take side projects is a huge upside to PhDs. What other job lets you drop everything to work full-time for a month, on something with no connection to your job description?

Comment by technicalities on Frank Feedback Given To Very Junior Researchers · 2021-09-01T17:46:25.641Z · EA · GW

I think this is your best post this year, because it says things that are rarely said, despite these failure modes seeming omnipresent. (I fall into 'em all the time!)

Comment by technicalities on Some longtermist fiction · 2021-08-11T05:33:19.231Z · EA · GW

Yep, skip Phlebas at first - but do come back to it later, because despite being silly and railroading, it is the clearest depiction of the series' main themes: people's need for Taylorian strong evaluation; the dissatisfaction of unlimited pleasure and freedom; liberalism as an unstoppable, unanswerable assimilator.

I wrote a longtermist critique of the Culture here.

Surface Detail is about desperately trying to prevent an s-risk. Excession is the best on most axes.

Comment by technicalities on Career advice for Australian science undergrad interested in welfare biology · 2021-07-21T10:49:57.439Z · EA · GW

Not a bio guy, but in general: talk to more people! List people you think are doing good work and ask em directly.

Also generically: try to do some real work in as many of them as you can. I don't know how common undergrad research assistants are in your fields, or in Australian unis, but it should be doable (if you're handling your courseload ok).

PS: Love the username.

Comment by technicalities on Writing about my job: Data Scientist · 2021-07-19T18:22:21.512Z · EA · GW

Big old US >> UK pay gap imo. Partial explanation for that: 32 days holiday in the UK vs 10 days US. 

(My base pay was 85% of total; 100% seems pretty normal in UK tech.)

Other big factor: this was in a sorta sleepy industry that tacitly trades money for only working the contracted 37.5 h week, unlike, say, startups. Per hour it was decent, particularly given 10% study time.

If we say hustling places have a 50 h week (which is what one fancy startup actually told me they expected), then 41 looks fine.
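(Rough arithmetic behind the per-hour point, with assumed round numbers rather than exact figures:)

```python
# Holiday-adjusted annual hours: UK contracted week vs an assumed 50 h "hustle" week.
uk_hours = 37.5 * (52 - 32 / 5)   # 32 days holiday ~ 6.4 weeks off -> ~1710 h/year
us_hours = 50.0 * (52 - 10 / 5)   # 10 days holiday ~ 2 weeks off   -> 2500 h/year
print(round(us_hours / uk_hours, 2))  # ~1.46: the hustle job needs ~1.5x the salary to match per-hour pay
```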

Comment by technicalities on The case against “EA cause areas” · 2021-07-18T10:09:19.994Z · EA · GW

Agree with the spirit - there is too much herding, and I would love for Schubert's distinctions to be core concepts. However, I think the problem you describe appears in the gap between the core orgs and the community, and might be pretty hard to fix as a result.

What material implies that EA is only about ~4 things?

  • the Funds
  • semi-official intro talks and Fellowship syllabi
  • the landing page has 3 main causes and mentions 6 more
  • the revealed preferences: the distribution of what people say they're working on, and of object-level post tags

What emphasises cause divergence and personal fit?

  • 80k have their top 7 of course, but the full list of recommended ones has 23
  • Personal fit is the second thing they raise, after importance
  • New causes, independent thinking, outreach, cause X, and 'question > ideology' are major themes at every EAG and (by eye) in about a fifth of the top-voted Forum posts.

So maybe there's limited room for improvements to communication, since it's already pretty clear.

Intro material has to mention some examples, and only a couple in any depth. How should we pick examples? Impact has to come first. It could be better not to always use the same 4 examples, but instead to pick the top 3 by your own lights and then draw the rest randomly from the top 20.
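(A toy sketch of that picking rule; the cause list is a placeholder, and the number of random draws is arbitrary:)

```python
import random

top_20 = [f"cause_{i}" for i in range(1, 21)]         # placeholder names, not a real ranking
examples = top_20[:3] + random.sample(top_20[3:], 2)  # your top 3, plus 2 random draws from the rest
```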

Also, I've always thought of cause neutrality as conditional - "if you're able to pivot, and if you want to do the most good, what should you do?" and this is emphasised in plenty of places. (i.e. Personal fit and meeting people where they are by default.) But if people are taking it as an unconditional imperative then that needs attention.

Comment by technicalities on How to explain AI risk/EA concepts to family and friends? · 2021-07-12T09:36:16.857Z · EA · GW

Brian Christian is incredibly good at tying the short-term concerns everyone already knows about to the long-term concerns. He's done tons of talks and podcasts - not sure which is best, but if 3 hours of heavy content isn't a problem, the 80k one is good.

There's already a completely mainstream x-risk: nuclear weapons (and, popularly, climate change). It could be good to compare AI to these accepted handles. The second species argument can be made pretty intuitive too.

Bonus: here's what I told my mum.

AIs are getting better quite fast, and we will probably eventually get a really powerful one, much faster and better at solving problems than people. It seems really important to make sure that they share our values; otherwise, they might do crazy things that we won't be able to fix. We don't know how hard it is to give them our actual values, and to ensure that they got them right, but it seems very hard. So it's important to start now, even though we don't know when it will happen, or how dangerous it will be.

Comment by technicalities on Undergraduate Making Life-Altering Choices While Sober, Please Advise · 2021-07-10T13:52:56.602Z · EA · GW

[I don't know you, so please feel free to completely ignore any of the following.]

I personally know three EAs who simply aren't constituted to put up with the fake work and weak authoritarianism of college. I expect any of them to do great things. Two other brilliant ones are Chris Olah and Kelsey Piper. (I highly recommend Piper's writing on the topic for deep practical insights and as a way of shifting the balance of responsibility partially off yourself and onto the ruinous rigid bureaucracy you are in. She had many of the same problems as you, and things changed enormously once she found a working environment that actually suited her. Actually just read the whole blog, she is one of the greats.)

80k have some notes on effective alternatives to a degree. kbog also wrote a little guide.

In the UK a good number of professions have a non-college "apprenticeship" track, including software development and government! I don't know about the US.

This is not to say that you should not do college, just that there are first-class precedents and alternatives.

More immediately: I highly recommend coworking as a solution to ugh. Here's the best kind, Brauner-style, or here are nice group rooms on Focusmate or Complice.

You're a good writer and extremely self-aware. This is a really good start.

If you'd like to speak to some other EAs in this situation (including one in the US), DM me.

Comment by technicalities on What is an example of recent, tangible progress in AI safety research? · 2021-06-16T12:48:42.976Z · EA · GW

Not recent-recent, but I also really like Carey's 2017 work on CIRL. Picks a small, well-defined problem and hammers it flush into the ground. "When exactly does this toy system go bad?"

Comment by technicalities on What is an example of recent, tangible progress in AI safety research? · 2021-06-16T06:51:32.016Z · EA · GW

If we take "tangible" to mean executable:

But as Kurt Lewin once said, "there's nothing so practical as a good theory." In particular, theory scales automatically, and conceptual work can stop us from wasting effort on the wrong things.

  • CAIS (2019) pivots away from the classic agentic model, maybe for the better
  • The search for mesa-optimisers (2019) is a step forward from previous muddled thoughts on optimisation, and its authors make predictions we can test soon.
  • The Armstrong/Shah discussion of value learning changed my research direction for the better.


Also Everitt et al (2019) is both: a theoretical advance with good software.

Comment by technicalities on A Viral License for AI Safety · 2021-06-14T04:23:57.212Z · EA · GW

I think you're right, see my reply to Ivan.

Comment by technicalities on A Viral License for AI Safety · 2021-06-14T04:22:17.505Z · EA · GW

I think I generalised too quickly in my comment; I saw "virality" and "any later version" and assumed the worst. But of course we can take into account AGPL backfiring when we design this licence!

One nice side effect of even a toothless AI Safety Licence: it puts a reminder about safety at the top of every repo. Sure, no one reads licences (and people often ignore health and safety rules when they get in the way, even at their own risk). But maybe it makes things a bit more tangible, in the way LICENSE.md gives law a foothold in the minds of devs.

Comment by technicalities on Matsés - Are languages providing epistemic certainty of statements not of the interest of the EA community? · 2021-06-12T18:53:48.626Z · EA · GW

Seems I did this in exactly 3 posts before getting annoyed.

Comment by technicalities on Matsés - Are languages providing epistemic certainty of statements not of the interest of the EA community? · 2021-06-09T09:39:57.215Z · EA · GW

That's cool! I wonder if they suffer from the same ambiguity as epistemic adjectives in English though* (which would suggest that we should skip straight to numerical assignments: probabilities or belief functions).

Anecdotally, it's quite tiring to put credence levels on everything. When I started my blog I began by putting a probability on all major claims (and even wrote a script to hide this behind a popup to minimise aesthetic damage). But I soon stopped.

For important things (like Forum posts?) it's probably worth the effort, but even a document-level confidence statement is a norm with only spotty adoption on here.

Comment by technicalities on A Viral License for AI Safety · 2021-06-05T09:20:58.707Z · EA · GW

This is a neat idea, and unlike many safety policy ideas it has scaling built in.

However, I think the evidence from the original GPL suggests that this wouldn't work. Large companies are extremely careful simply not to use GPL software, even when that means writing their own closed-source implementations.* Things like the Skype case are the exception, and they make other companies even more careful not to use GPL things. All of this has caused GPL licensing to fall massively in the last decade.** I can't find stats, but I predict that GPL projects have much less usage and dev activity than permissively licensed ones.

It's difficult to imagine software so good and so difficult to replicate that Google would invite our virus into their proprietary repo. Sure, AI might be different from [Yet Another Cool AGPL Parser] - but then who has a bigger data moat and more AI engineering talent than big tech, to just implement it for themselves?

** https://opensource.com/article/17/2/decline-gpl

Comment by technicalities on Help me find the crux between EA/XR and Progress Studies · 2021-06-03T08:19:28.517Z · EA · GW

Aschenbrenner's model strikes me as a synthesis of the two intellectual programmes, and it doesn't get enough attention.

Comment by technicalities on Why should we *not* put effort into AI safety research? · 2021-05-16T09:46:41.091Z · EA · GW

Robin Hanson is the best critic imo. He has many arguments, or one very developed one, but big pieces are:

  • Innovation in general is not very "lumpy" (discontinuous). So we should assume that AI innovation will also not be. So no one AI lab will pull far ahead of the others at AGI time. So there won't be a 'singleton', a hugely dangerous world-controlling system.
     
  • Long timelines [100 years+] + fire alarms
     
  • Opportunity cost of spending / shouting now 
    "we are far from human level AGI now, we'll get more warnings as we get closer, and by saving $ you get 2x as much to spend each 15 years you wait."
    "having so many people publicly worrying about AI risk before it is an acute problem will mean it is taken less seriously when it is, because the public will have learned to think of such concerns as erroneous fear mongering."
     
  • The automation of labour isn't accelerating (therefore current AI is not being deployed to notable effect, therefore current AI progress is not yet world-changing in one sense)

He might not be what you had in mind: Hanson argues that we should wait to work on AGI risk, rather than that safety work is forever unnecessary or ineffective. The latter claim seems extreme to me and I'd be surprised to find a really good argument for it.

You might consider the lack of consensus about basic questions, mechanisms, solutions amongst safety researchers to be a bad sign.

Nostalgebraist (2019) sees AGI alignment as equivalent to solving large parts of philosophy: a noble but quixotic quest.

Melanie Mitchell also argues for long timelines. Her view is closer to the received view in the field (but this isn't necessarily a compliment).

Comment by technicalities on What are your favorite examples of moral heroism/altruism in movies and books? · 2021-04-26T17:03:46.435Z · EA · GW

Spoilers for Unsong:

Jalaketu identifies the worst thing in the world - hell - and sacrifices everything, including his own virtue and impartiality, to destroy it. It is the strongest depiction of the second-order consistency, second-order glory of consequentialism I know. (But also a terrible tradeoff.)

Comment by technicalities on Voting reform seems overrated · 2021-04-10T07:28:14.351Z · EA · GW

Shouldn't the title be "Proportional Representation seems overrated"?

PR is often what people mean by voting reform in the UK, but there are options without these problems, e.g. approval voting.

Comment by technicalities on What are your main reservations about identifying as an effective altruist? · 2021-03-30T15:22:20.586Z · EA · GW

I see "effective altruist" as a dodgy shorthand for the full term: "aspiring effective altruist". I'm happy to identify as the latter in writing (though it is too clunky for speech).

Comment by technicalities on I scraped all public "Effective Altruists" Goodreads reading lists · 2021-03-23T23:22:08.280Z · EA · GW

I call shotgun on "On Certainty", one of the most-wanted books. (The author and I have butted heads before. He is much better at headbutting than me.)

Comment by technicalities on AGI risk: analogies & arguments · 2021-03-23T17:55:27.950Z · EA · GW

I felt much the same writing it. I'll add that to my content note, thanks.

Comment by technicalities on AGI risk: analogies & arguments · 2021-03-23T15:38:20.935Z · EA · GW

The opposite post (reasons not to worry) could be good as well. e.g.

Comment by technicalities on [deleted post] 2021-03-19T17:51:33.228Z

In this one, it's that there is no main body, just a gesture off-screen. Only a small minority of readers will be familiar enough with the funding apparatus to complete your "exercise to the reader..." Maybe you're writing for that small minority, but it's fair for the rest to get annoyed.

In past ones (from memory), it's again this sense of pushing work onto the reader. Sense of "go work it out".

Comment by technicalities on [deleted post] 2021-03-19T17:15:46.218Z

It might be better to collate and condense your series into one post, once it's finished (or starting now). These individual posts really aren't convincing, and probably hurt your case if anything. Part of that is the Forum's conventions about content being standalone. But the rest is clarity and evidence: your chosen style is too esoteric.

I don't think it's our unwillingness to hear you out. Some of the most well-regarded posts on here are equally fundamental critiques of EA trends, but written persuasively / directly:

https://forum.effectivealtruism.org/posts/bsE5t6qhGC65fEpzN/growth-and-the-case-against-randomista-development

https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really

https://forum.effectivealtruism.org/posts/DxfpGi9hwvwLCf5iQ/objections-to-value-alignment-between-effective-altruists

https://forum.effectivealtruism.org/posts/jSPGFxLmzJTYSZTK3/reality-is-often-underpowered

Comment by technicalities on Can a Vegan Diet Be Healthy? A Literature Review · 2021-03-12T19:52:17.761Z · EA · GW

Worth noting that multivitamins are associated with very slightly increased mortality in the general population. Cochrane put this down to overdosing on vitamin A, vitamin E, and beta-carotene, which I don't expect vegans to be deficient in, so the finding might transfer. (Sounds like you've done blood tests though, so ignore me if it's helping you.)

https://www.cochrane.org/CD007176/LIVER_antioxidant-supplements-for-prevention-of-mortality-in-healthy-participants-and-patients-with-various-diseases

Comment by technicalities on What are some potential coordination failures in our community? · 2020-12-12T08:24:34.725Z · EA · GW

The cycle of people coming up with ideas about how to organise people into projects, or prevent redundant posts, or make the Forum more accretive, which are then forgotten a week later. i.e. we fail to coordinate on coordination projects.

Comment by technicalities on Progress Open Thread: December 2020 · 2020-12-02T15:44:16.503Z · EA · GW

Can anyone in clean meat verify this news? The last time I checked, we were still years off market release.

Conditional on it being a real shock, hooray!

https://www.google.com/amp/s/amp.theguardian.com/environment/2020/dec/02/no-kill-lab-grown-meat-to-go-on-sale-for-first-time

Comment by technicalities on The academic contribution to AI safety seems large · 2020-11-22T12:29:30.439Z · EA · GW

Follow-up post to ARCHES, with a ranking of existing fields and lots more bibliographies.

Comment by technicalities on The Case for Space: A Longtermist Alternative to Existential Threat Reduction · 2020-11-18T16:14:02.979Z · EA · GW

Some more prior art, on Earth vs off-world "lifeboats". See also 4.2 here for a model of mining Mercury (for solar panels, not habitats).

Comment by technicalities on The academic contribution to AI safety seems large · 2020-07-31T17:15:45.460Z · EA · GW

This makes sense. I don't mean to imply that we don't need direct work.

AI strategy people have thought a lot about the capabilities : safety ratio, but it'd be interesting to think about the ratio of complementary parts of safety you mention. Ben Garfinkel notes that e.g. reward engineering work (by alignment researchers) is dual-use; it's not hard to imagine scenarios where lots of progress in reward engineering without corresponding progress in inner alignment could hurt us.

Comment by technicalities on The academic contribution to AI safety seems large · 2020-07-31T17:00:31.239Z · EA · GW

Thanks!

"research done by people who are trying to do something else will probably end up not being very helpful for some of the core problems."

Yeah, it'd be good to break AGI control down more, to see if there are classes of problem where we should expect indirect work to be much less useful. But this particular model already has enough degrees of freedom to make me nervous.

I think that it might be easier to assign a value to the discount factor by assessing the total contributions of EA safety and non-EA safety.

That would be great! I used headcount because it's relatively easy, but value weights are clearly better. Do you know any reviews of alignment contributions?

"... This doesn't seem to mesh with your claim about their relative productivity."

Yeah, I don't claim to be systematic. The nine are just notable things I happened across, rather than an exhaustive list of academic contributions. Besides the weak evidence from the model, my optimism about there being many other academic contributions is based on my own shallow knowledge of AI: "if even I could come up with 9..."

Something like the Median insights collection, but for alignment, would be amazing, but I didn't have time.

"those senior researchers won't necessarily have useful things to say about how to do safety research"

This might be another crux: "how much do general AI research skills transfer to alignment research?" (Tacitly I was assuming medium-high transfer.)

"I think the link is to the wrong model?"

No, that's the one; I mean the 2x2 of factors which lead to '% work that is alignment relevant'. (Annoyingly, Guesstimate hides the dependencies by default; try View > Visible)

Comment by technicalities on The academic contribution to AI safety seems large · 2020-07-31T13:33:16.556Z · EA · GW

An important source of capabilities / safety overlap, via Ben Garfinkel:

Let’s say you’re trying to develop a robotic system that can clean a house as well as a human house-cleaner can... Basically, you’ll find that if you try to do this today, it’s really hard to do that. A lot of traditional techniques that people use to train these sorts of systems involve reinforcement learning with essentially a hand-specified reward function...
One issue you’ll find is that the robot is probably doing totally horrible things because you care about a lot of other stuff besides just minimizing dust. If you just do this, the robot won’t care about, let’s say throwing out valuable objects that happened to be dusty. It won’t care about, let’s say, ripping apart a couch cushion to find dust on the inside... You’ll probably find any simple line of code you write isn’t going to capture all the nuances. Probably the system will end up doing stuff that you’re not happy with.
This is essentially an alignment problem. This is a problem of giving the system the right goals. You don’t really have a way using the standard techniques of making the system even really act like it’s trying to do the thing that you want it to be doing. There are some techniques that are being worked on actually by people in the AI safety and the AI alignment community to try and basically figure out a way of getting the system to do what you want it to be doing without needing to hand-specify this reward function...
These are all things that are being developed by basically the AI safety community. I think the interesting thing about them is that it seems like until we actually develop these techniques, probably we’re not in a position to develop anything that even really looks like it’s trying to clean a house, or anything that anyone would ever really want to deploy in the real world. It seems like there’s this interesting sense in which we have the sort of system we’d like to create, but until we can work out the sorts of techniques that people in the alignment community are working on, we can’t give it anything even approaching the right goals. And if we can’t give anything approaching the right goals, we probably aren’t going to go out and, let’s say, deploy systems in the world that just mess up people’s houses in order to minimize dust.
I think this is interesting, in the sense in which the processes to give things the right goals bottleneck the process of creating systems that we would regard as highly capable and that we want to put out there.

He sees this as positive: it implies massive economic incentives to do some alignment, and a block on capabilities until it's done. But it could be a liability as well, if the alignment of weak systems is correspondingly weak, and if mid-term safety work feeds into a capabilities feedback loop with greater amplification.
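(A minimal caricature of the hand-specified reward in Garfinkel's example; purely illustrative, not from any real robotics codebase:)

```python
from dataclasses import dataclass

@dataclass
class HouseState:
    dust: float              # grams of dust left
    cushions_intact: bool    # things the designer cares about but forgot to reward
    valuables_kept: bool

def naive_reward(state: HouseState) -> float:
    return -state.dust       # "minimise dust", and nothing else

tidy    = HouseState(dust=1.0, cushions_intact=True,  valuables_kept=True)
wrecked = HouseState(dust=0.2, cushions_intact=False, valuables_kept=False)
assert naive_reward(wrecked) > naive_reward(tidy)  # the misspecified reward prefers the disaster
```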

Comment by technicalities on The academic contribution to AI safety seems large · 2020-07-31T11:39:23.633Z · EA · GW

Thanks for this, I've flagged this in the main text. Should've paid more attention to my confusion on reading their old announcement!

Comment by technicalities on The academic contribution to AI safety seems large · 2020-07-30T10:31:25.525Z · EA · GW

If the above strikes you as wrong (and not just vague), you could copy the Guesstimate, edit the parameters, and comment below.

Comment by technicalities on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T09:33:30.983Z · EA · GW

Welcome!

It's a common view: some GiveWell staff hold it, and indeed most of their work involves short-term effects, probably for epistemic reasons. Michael Plant has written about the EA implications of person-affecting views, and emphasises improvements to world mental health.

Here's a back-of-the-envelope estimate for why person-affecting views might still be bound to prioritise existential risk though (for the reason you give, but with some numbers for easier comparison).
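(A toy version of that kind of estimate, with made-up round numbers; not the linked one:)

```python
present_people = 8e9     # roughly the number of people alive today
risk_reduction = 1e-6    # assumed absolute cut in extinction probability from some intervention
print(present_people * risk_reduction)  # 8000.0 expected present lives saved, ignoring future people entirely
```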

Dominic Roser and I have also puzzled over Christian longtermism a bit.

Comment by technicalities on What would a pre-mortem for the long-termist project look like? · 2020-04-12T07:58:34.208Z · EA · GW

Great comment. I count only 65 percentage points - is the other third "something else happened"?

Or were you not conditioning on long-termist failure? (That would be scary.)

Comment by technicalities on (How) Could an AI become an independent economic agent? · 2020-04-04T19:40:05.401Z · EA · GW

IKEA is an interesting case: it was bequeathed entirely to a nonprofit foundation with a very loose mission and no owner(?)

https://www.investopedia.com/articles/investing/012216/how-ikea-makes-money.asp

Not a silly question IMO. I thought about Satoshi Nakamoto's bitcoin - but if they're dead, then it's owned by their heirs, or failing that by the government of whatever jurisdiction they were in. In places like Britain, I think a combination of "bona vacantia" (unclaimed estates go to the government) and "treasure trove" (similarly for old treasure) covers the edge cases. And if all else fails there's "finders keepers".

Comment by technicalities on What posts do you want someone to write? · 2020-03-29T14:08:06.017Z · EA · GW

A nice example of the second part, value dependence, is Ozy Brennan's series reviewing GiveWell charities.

Why might you donate to GiveDirectly?
  • You need a lot of warmfuzzies in order to motivate yourself to donate.
  • You think encouraging cash benchmarking is really important, and giving GiveDirectly more money will help that.
  • You want to encourage charities to do more RCTs on their programs by rewarding the charity that does that most enthusiastically.
  • You care about increasing people’s happiness and don’t care about saving the lives of small children, and prefer a certainty of a somewhat good outcome to a small chance of a very good outcome.
  • You believe, in principle, that we should let people make their own decisions about their lives.
  • You want an intervention that definitely has at least a small positive effect.
  • You have just looked at GDLive and are no longer responsible for your actions.

Comment by technicalities on What posts do you want someone to write? · 2020-03-24T08:40:03.140Z · EA · GW

Collating predictions made by particularly big pundits and getting calibration curves for them. Bill Gates is getting a lot of attention now for warning of a pandemic in 2015; what is his average, though? (He's a bad example, admittedly, since I expect his advisors to be world-class and to totally suppress his variance.)

If this could be hosted somewhere with a lot of traffic, it could reinforce good epistemics.
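(A rough sketch of the calibration-curve step, assuming you've hand-collected each pundit's stated probabilities and outcomes; the example numbers are made up:)

```python
import numpy as np
import matplotlib.pyplot as plt

def calibration_curve(stated_probs, outcomes, n_bins=10):
    """Bin predictions by stated probability and compare to observed frequency."""
    stated_probs = np.asarray(stated_probs, float)
    outcomes = np.asarray(outcomes, float)
    edges = np.linspace(0, 1, n_bins + 1)
    idx = np.digitize(stated_probs, edges[1:-1])   # which bin each prediction falls in
    xs, ys = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            xs.append(stated_probs[mask].mean())   # average stated confidence in the bin
            ys.append(outcomes[mask].mean())       # how often those claims came true
    return xs, ys

# one pundit's (made-up) track record:
xs, ys = calibration_curve([0.9, 0.7, 0.6, 0.95, 0.3], [1, 1, 0, 1, 0])
plt.plot([0, 1], [0, 1], "--", label="perfect calibration")
plt.scatter(xs, ys)
plt.xlabel("stated probability"); plt.ylabel("observed frequency"); plt.legend(); plt.show()
```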

Comment by technicalities on What posts do you want someone to write? · 2020-03-24T08:35:03.042Z · EA · GW

A case study of the Scientific Revolution in Britain as an intervention by a small group. This bears on one of the most surprising facts about the period: the huge gap, 1.5 centuries, between the scientific and industrial revolutions. It could also shed light on the old marginal vs systemic argument: a synthesis is "do politics - to promote nonpolitical processes!"

https://forum.effectivealtruism.org/posts/RfKPzmtAwzSw49X9S/open-thread-46?commentId=rWn7HTvZaNHCedXNi

Comment by technicalities on What are some 1:1 meetings you'd like to arrange, and how can people find you? · 2020-03-20T14:52:38.622Z · EA · GW

Who am I?

Gavin Leech, a PhD student in AI at Bristol. I used to work in international development, official statistics, web development, data science.

Things people can talk to you about

Stats, forecasting, great books, development economics, pessimism about philosophy, aphorisms, why AI safety is eating people, fake frameworks like multi-agent mind. How to get technical after an Arts degree.

Things I'd like to talk to others about

The greatest technical books you've ever read. Research taste, and how it is transmitted. Non-opportunistic ways to do AI safety. How cluelessness and AIS interact; how hinginess and AIS interact.

Get in touch

g@gleech.org . I also like the sound of this open-letter site.

Comment by technicalities on Open Thread #46 · 2020-03-14T10:36:45.956Z · EA · GW

Suggested project for someone curious:

There are EA profiles of interesting influential (or influentially uninfluential) social movements - the Fabians, the neoliberals, the General Semanticists. But no one has written about the biggest: the scientific revolution in Britain as an intentional intervention, by a neoliberal-style coterie.

A small number of the most powerful people in Britain - the Lord Chancellor, the king's physicians, the chaplain of the Elector Palatine / bishop of Chester, London's greatest architect, and so on - apparently pushed a massive philosophical change, founded some of the key institutions for the next 4 centuries, and thereby contributed to most of our subsequent achievements.

Outline:

  • Elizabethan technology and institutions before Bacon. Scholasticism and mathematical magic
  • The protagonists: "The Invisible College"
  • The impact of Gresham College and the Royal Society (sceptical empiricism revived! Peer review! Data sharing! efficient causation! elevating random uncredentialed commoners like Hooke)
  • Pre-emptive conflict management (Bacon's and Boyle's manifestos and Utopias are all deeply Christian)
  • The long gestation: it took 100 years for it to bear any fruit (e.g. Boyle's law, the shocking triumph of Newton); it took 200 years before it really transformed society. This is not that surprising measured in person-years of work, but otherwise why did it take so long?
  • Counterfactual: was Bacon overdetermined by economic or intellectual trends? If it was inevitable, how much did they speed it up?
  • Somewhat tongue in cheek cost:benefit estimate.

This was a nice introduction to the age.

Comment by technicalities on Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism · 2020-03-10T13:40:20.052Z · EA · GW

To my knowledge, most of the big names (Bentham, Sidgwick, Mill, Hare, Parfit) were anti-speciesist to some degree; the unusual contribution of Singer is the insistence on equal consideration for nonhumans. It was just not obvious to their audiences for 100+ years afterward.

My understanding of multi-level U is that it permits not using explicit utility estimation, rather than forbidding using it. (U as not the only decision procedure, since it's often too expensive.) It makes sense to read (naive, ideal) single-level consequentialism as the converse, forbidding or discouraging not using U estimation. Is this a straw man? Possibly; I'm not sure I've ever read anything by a strict estimate-everything single-level person.

Comment by technicalities on What are the key ongoing debates in EA? · 2020-03-09T15:00:47.738Z · EA · GW

I read it as 'getting some people who aren't economists, philosophers, or computer scientists'. (:

(Speaking as a philosophy+economics grad and a sort-of computer scientist.)

Comment by technicalities on What are the key ongoing debates in EA? · 2020-03-09T13:28:40.601Z · EA · GW

Not sure. 2017 fits the beginning of the discussion though.

Comment by technicalities on What are the key ongoing debates in EA? · 2020-03-08T17:00:16.259Z · EA · GW

I've had a few arguments about the 'worm wars': whether the bet on deworming kids, which was uncertain from the start, is undermined by the new evidence.

My interlocutor is very concerned about model error in cost-benefit analysis, about avoiding side effects (and 'double effect' in particular); and not just for the usual PR or future credibility reasons.

Comment by technicalities on What are the best arguments that AGI is on the horizon? · 2020-02-16T11:55:56.723Z · EA · GW

It can seem strange that people act decisively about speculative things. So the first piece to understand is expected value: if something would be extremely important if it happened, then you can place quite low probability on it and still have warrant to act on it. (This is sometimes accused of being a decision-theory "mugging", but it isn't: we're talking about subjective probabilities in the range of 1% - 10%, not infinitesimals like those involved in Pascal's mugging.)

I think the most-defensible outside-view argument is: it could happen soon; it could be dangerous; aligning it could be very hard; and the product of these probabilities is not low enough to ignore.
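(To make the product concrete, with purely illustrative numbers rather than anyone's actual estimates:)

$$P(\text{soon}) \times P(\text{dangerous by default}) \times P(\text{alignment very hard}) \approx 0.1 \times 0.3 \times 0.3 \approx 0.01$$

A ~1% chance of an extremely bad outcome is nowhere near low enough to ignore once you multiply by the stakes.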

1. When you survey general AI experts (not just safety or AGI people), they give very wide distributions for when we will have human-level AI (HLAI), with a central tendency of "10% chance of human-level AI... in the 2020s or 2030s". (This is weak evidence, since technology forecasting is very hard and these surveys are not random samples; but it is some evidence.)


2. We don't know what the risk of HLAI being dangerous is, but we have some analogous precedents and considerations:

* the human precedent for world domination through intelligence / combinatorial generalisation / cunning

* the human precedent for 'inner optimisers': evolution was heavily optimising for genetic fitness, but produced a system, us, which optimises for a very different objective ("fun", or "status", or "gratification" or some bundle of nonfitness things).

* goal space is much larger than the human-friendly part of goal space (suggesting that a random objective will not be human-friendly; combined with assumptions about goal maximisation and instrumental drives, this implies that most goals could be dangerous).

* there's a common phenomenon of very stupid ML systems still developing "clever" unintended / hacky / dangerous behaviours


3. We don't know how hard alignment is, so we don't know how long it will take to solve. It may involve certain profound philosophical and mathematical questions, which have been worked on by some of the greatest thinkers for a long time. Here's a nice nontechnical statement of the potential difficulty. Some AI safety researchers are actually quite optimistic about our prospects for solving alignment, even without EA intervention, and work on it to cover things like the "value lock-in" case instead of the x-risk case.

Comment by technicalities on Can I do an introductory post? · 2020-02-14T07:12:19.247Z · EA · GW

Welcome! This is a fine thing - you could link to your story here, for instance:

https://forum.effectivealtruism.org/posts/FA794RppcqrNcEgTC/why-are-you-here-an-origin-stories-thread