Posts

Thoughts on whether we're living at the most influential time in history 2020-11-03T04:07:52.186Z
Some thoughts on EA outreach to high schoolers 2020-09-13T22:51:24.200Z
Buck's Shortform 2020-09-13T17:29:42.117Z
Some thoughts on deference and inside-view models 2020-05-28T05:37:14.979Z
My personal cruxes for working on AI safety 2020-02-13T07:11:46.803Z
Thoughts on doing good through non-standard EA career pathways 2019-12-30T02:06:03.032Z
"EA residencies" as an outreach activity 2019-11-17T05:08:42.119Z
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA 2019-11-15T22:44:17.606Z
A way of thinking about saving vs improving lives 2015-08-08T19:57:30.985Z

Comments

Comment by Buck on Concerns with ACE's Recent Behavior · 2021-04-19T14:44:13.395Z · EA · GW

I am glad to have you around, of course.

My claim is just that I doubt you think you would have been substantially more likely to get involved with EA if the rate of posts like this had been 50% lower; I'd be very interested to hear if I'm wrong about that.

Comment by Buck on Concerns with ACE's Recent Behavior · 2021-04-19T02:50:59.951Z · EA · GW

I am not sure whether I think it's a net cost that some people will be put off from EA by posts like this, because I think that people who would bounce off EA because of posts like this aren't obviously net-positive to have in EA. (My main model here is that the behavior described in this post is pretty obviously bad, and the kind of SJ-sympathetic EAs who I expect to be net sources of value probably agree that this behavior is bad. Secondarily, I think that people who are really enthusiastic about EA are pretty likely to stick around even when they're infuriated by things EAs are saying. For example, when I was fairly new to the EA community in 2014, I felt really mad about the many EAs who dismissed the moral patienthood of animals for reasons I thought were bad, but EAs were so obviously my people that I stuck around nevertheless. If you know someone (eg yourself) who you think is a counterargument to this claim of mine, feel free to message me.)

But I think that there are some analogous topics where it is indeed costly to alienate people. For example, I think it's pretty worthwhile for me as a longtermist to be nice to people who prioritize animal welfare and global poverty, because I think that many people who prioritize those causes make EA much stronger. For different reasons, I think it's worth putting some effort into not mocking religions or political views.

In cases like these, I mostly agree with "you need to figure out the exchange rate between welcomingness and unfiltered conversations".

I think that becoming more skillful at doing both well is an important skill for a community like ours to have more of. That's ok if it's not your personal priority right now, but I would like community norms to reward learning that skill more. My view is that Will's comment was doing just that, and I upvoted it as a result. 

I guess I expect the net result of Will's comment was more to punish Hypatia than to push community norms in a healthy direction. If he wanted to just push norms without trying to harm someone who was basically just saying true and important things, I think he should have made a different top level post, and he also shouldn't have made his other top level comment.

(Not saying you disagree with the content of his comment, you said you agreed with it in fact, but in my view, demonstrated you didn't fully grok it nevertheless).

There's a difference between understanding a consideration and thinking that it's the dominant consideration in a particular situation :) 

Comment by Buck on Concerns with ACE's Recent Behavior · 2021-04-18T15:36:32.511Z · EA · GW

More generally, I think our disagreement here probably comes down to something like this:

There's a tradeoff between having a culture where true and important things are easy to say, and a culture where group X feels maximally welcome.  As you say, if we're skillful we can do both of these, by being careful about our language and always sounding charitable and not repeatedly making similar posts.

But this comes at a cost. I personally feel much less excited about writing about certain topics because I'd have to be super careful about them. And most of the EAs I know, especially those who have some amount of authority among EAs, feel much more restricted than I do. I think that this makes EA noticeably worse, because it means that it's much harder for these EAs to explain their thoughts on things.

And so I think it's noticeably costly to criticise people for not being more careful and tactful. It's worth it in some cases, but we should keep that cost in mind when we're considering pushing people to be more careful and tactful.

I personally think that "you shouldn't write criticisms of an org for doing X, even when the criticisms are accurate and X is bad, because criticising X has cultural connotations" goes too far in the direction of restricting people's ability to say true things for the sake of making people feel welcome.

(Some context here is that I wrote a Facebook post about ACE with similar content to this post last September.)

Comment by Buck on Concerns with ACE's Recent Behavior · 2021-04-18T15:36:15.573Z · EA · GW

(I'm writing these comments kind of quickly, sorry for sloppiness.)

With regard to

Where X is the bad thing ACE did, the situation is clearly far more nuanced as to how bad it is than something like sexual misconduct, which, by the time we have decided something deserves that label, is unequivocally bad.

In this particular case, Will seems to agree that X was bad and concerning, which is why my comment felt fair to me.

I would have no meta-level objection to a comment saying "I disagree that X is bad, I think it's actually fine".

Comment by Buck on Concerns with ACE's Recent Behavior · 2021-04-18T04:45:28.091Z · EA · GW

I think that this totally misses the point. The point of this post isn't to inform ACE that some of the things they've done seem bad--they are totally aware that some people think this. It's to inform other people that ACE has behaved badly, in order to pressure ACE and other orgs not to behave similarly in future, and so that other people can (if they want) trust ACE less or be less inclined to support them.

Comment by Buck on Concerns with ACE's Recent Behavior · 2021-04-18T01:43:52.260Z · EA · GW

I agree with the content of your comment, Will, but feel a bit unhappy with it anyway. Apologies for the unpleasantly political metaphor, but as an intuition pump imagine the following comment.

"On the one hand, I agree that it seems bad that this org apparently has a sexual harassment problem.  On the other hand, there have been a bunch of posts about sexual misconduct at various orgs recently, and these have drawn controversy, and I'm worried about the second-order effects of talking about this misconduct."

I guess my concern is that it seems like our top priority should be saying true and important things, and we should err on the side of not criticising people for doing so.

More generally, I am opposed to arguments of the form "we shouldn't criticise people for doing bad-seeming thing X, because that would put off people who are enthusiastic about thing X."

Another take here is that if a group of people are sad that their views aren't sufficiently represented on the EA forum, they should consider making better arguments for them. I don't think we should try to ensure that the EA forum has proportionate amounts of pro-X and anti-X content for all X. (I think we should strive to evaluate content fairly; this involves not being more or less enthusiastic about content based on the popularity of the views it expresses, except for instrumental reasons like "it's more interesting to hear arguments you haven't heard before".)

EDIT: Also, I think your comment is much better described as meta level than object level, despite its first sentence.

Comment by Buck on Why do so few EAs and Rationalists have children? · 2021-03-14T19:43:07.155Z · EA · GW

I’d be interested to see comparisons of the rate at which rationalists and EAs have children with the rate in analogous groups, controlling for factors like education, age, religiosity, and income. I think this might make the difference seem smaller.

Comment by Buck on Yale EA’s Fellowship Application Scores were not Predictive of Eventual Engagement · 2021-01-28T05:42:47.388Z · EA · GW

Great post, and interesting and surprising result.

An obvious alternative selection criterion would be something like “how good would it be if this person got really into EA”; I wonder if you would be any better at predicting that. This one takes longer to get feedback on, unfortunately.

Comment by Buck on Buck's Shortform · 2021-01-11T02:01:11.566Z · EA · GW

I know a lot of people through a shared interest in truth-seeking and epistemics. I also know a lot of people through a shared interest in trying to do good in the world.

I think I would have naively expected that the people who care less about the world would be better at having good epistemics. For example, people who care a lot about particular causes might end up getting really mindkilled by politics, or might end up strongly affiliated with groups that have false beliefs as part of their tribal identity.

But I don’t think that this prediction is true: I think that I see a weak positive correlation between how altruistic people are and how good their epistemics seem.

----

I think the main reason for this is that striving for accurate beliefs is unpleasant and unrewarding. In particular, having accurate beliefs involves doing things like trying actively to step outside the current frame you’re using, and looking for ways you might be wrong, and maintaining constant vigilance against disagreeing with people because they’re annoying and stupid.

Altruists often seem to me to do better than people who value epistemics for its own sake; I think this is because valuing epistemics instrumentally has some attractive properties compared to valuing it terminally. One reason it's better is that it means you're less likely to stop being rational when it stops being fun. For example, I find many animal rights activists very annoying, and if I didn't feel tied to them by virtue of our shared interest in the welfare of animals, I'd be tempted to sneer at them.

Another reason is that if you’re an altruist, you find yourself interested in various subjects that aren’t the subjects you would have learned about for fun, so you have less opportunity to only ever think in the ways you’d think by default. I think it might be healthy that altruists are forced by the world to learn subjects that are further from their predispositions.

----

I think it’s indeed true that altruistic people sometimes end up mindkilled. But I think that truth-seeking-enthusiasts seem to get mindkilled at around the same rate. One major mechanism here is that truth-seekers often start to really hate opinions that they regularly hear bad arguments for, and they end up rationalizing their way into dumb contrarian takes.

I think it’s common for altruists to avoid saying unpopular true things because they don’t want to get in trouble; I think that this isn’t actually that bad for epistemics.

----

I think that EAs would have much worse epistemics if EA wasn’t pretty strongly tied to the rationalist community; I’d be pretty worried about weakening those ties. I think my claim here is that being altruistic seems to make you overall a bit better at using rationality techniques, instead of it making you substantially worse.

Comment by Buck on If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant · 2020-11-24T16:36:21.278Z · EA · GW

My main objection to this post is that personal fit still seems really important when choosing what to do within a cause. I think that one of EA's main insights is "if you do explicit estimates of impact, you can find really big differences in effectiveness between cause areas, and these differences normally swamp personal fit"; that's basically what you're saying here, and it's totally correct IMO. But I think it's a mistake to try to apply the same style of reasoning within causes, because the effectiveness of different jobs within a cause is much more similar, so personal fit ends up dominating the estimate of which one will be better.

Comment by Buck on Where are you donating in 2020 and why? · 2020-11-23T22:12:25.467Z · EA · GW

I'd be curious to hear why you think that these charities are excellent; eg I'd be curious for your reply to the arguments here.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-12T19:58:55.882Z · EA · GW

Oh man, I'm so sorry, you're totally right that this edit fixes the problem I was complaining about. When I read this edit, I initially misunderstood it in such a way that it didn't address my concern. My apologies.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-10T19:02:43.628Z · EA · GW

How much of that 0.1% comes from worlds where your outside view argument is right vs worlds where your outside view argument is wrong? 

This kind of stuff is pretty complicated so I might not be making sense here, but here's what I mean: I have some distribution over what model to be using to answer the "are we at HoH" question, and each model has some probability that we're at HoH, and I derive my overall belief by adding up the credence in HoH that I get from each model (weighted by my credence in it).  It seems like your outside view model assigns approximately zero probability to HoH, and so if now is the HoH, it's probably because we shouldn't be using your model, rather than because we're in the tiny proportion of worlds in your model where now is HoH.

I think this distinction is important because it seems to me that the probability of HoH given your beliefs should be almost entirely determined by the prior and HoH-likelihood of models other than the one you proposed--if your central model is the outside-view model you proposed, and you're 80% confident in that, then I suspect that the majority of your credence on HoH should come from the other 20% of your prior, and so the question of how much your outside-view model updates based on evidence doesn't seem likely to be very important.
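To spell out the decomposition I'm gesturing at (just a sketch, with illustrative labels; the 80% figure is the one above): writing M_outside for your outside-view model and M_other for everything else,

P(HoH) = P(M_outside) × P(HoH | M_outside) + P(M_other) × P(HoH | M_other)
       ≈ 0.8 × (≈0) + 0.2 × P(HoH | M_other)
       ≈ 0.2 × P(HoH | M_other)

so nearly all of the overall credence in HoH has to flow through the models you're less confident in.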

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-10T03:07:34.881Z · EA · GW

Hmm, interesting. It seems to me that your priors cause you to think that the "naive longtermist" story, where we're in a time of perils and if we can get through it, x-risk goes basically to zero and there are no more good ways to affect similarly enormous amounts of value, has a probability which is basically zero. (This is just me musing.)

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-08T16:39:13.033Z · EA · GW

I agree with all this; thanks for the summary.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-07T20:42:14.981Z · EA · GW

Your interpretation is correct; I mean that futures with high x-risk for a long time aren't very valuable in expectation.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-07T20:39:07.338Z · EA · GW

On this set-up of the argument (which is what was in my head but I hadn’t worked through), I don’t make any claims about how likely it is that we are part of a very long future.


This does make a lot more sense than what you wrote in your post. 

Do you agree that the argument as written in your EA Forum post is quite flawed? If so, I think you should edit it to indicate more clearly that it was a mistake, given that people are still linking to it.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-05T15:58:09.314Z · EA · GW

The comment I'd be most interested in from you is whether you agree that your argument forces you to believe that x-risk is almost surely zero, or that we are almost surely not going to have a long future.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-05T15:57:07.531Z · EA · GW

“It’s not clear why you’d think that the evidence for x-risk is strong enough to think we’re one-in-a-million, but not stronger than that.” This seems pretty strange as an argument to me. Being one-in-a-thousand is a thousand times less likely than being one-in-a-million, so of course if you think the evidence pushes you to thinking that you’re one-in-a-million, it needn’t push you all the way to thinking that you’re one-in-a-thousand. This seems important to me.

So you are saying that you do think that the evidence for longtermism/x-risk is enough to push you to thinking you're at a one-in-a-million time?

EDIT: Actually I think maybe you misunderstood me? When I say "you're one-in-a-million", I mean "your x-risk is higher than 99.9999% of other centuries' x-risk"; "one in a thousand" means "higher than 99.9% of other centuries' x-risk".  So one-in-a-million is a stronger claim which means higher x-risk.

What I'm saying is that if you believe that x-risk is 0.1%, then you think we're at least one in a million. I don't understand why you're willing to accept that we're one-in-a-million; this seems to me to force you to have absurdly low x-risk estimates.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-05T15:56:01.386Z · EA · GW

My claim is that patient philanthropy automatically involves the claim that now is a time when patient philanthropy does unusually much expected good, because we're so early in history that the best giving opportunities are almost surely still ahead of us.

Comment by Buck on Thoughts on whether we're living at the most influential time in history · 2020-11-05T15:53:28.430Z · EA · GW

I've added a link to the article to the top of my post. Those changes seem reasonable.

Comment by Buck on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-02T20:56:54.735Z · EA · GW

This is indeed what I meant, thanks.

Comment by Buck on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-02T17:31:49.881Z · EA · GW

But if, as this talk suggests, it’s not obvious whether donating to near term interventions is good or bad for the world, why are you interested in whether you can pitch friends and family to donate to them?

Comment by Buck on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-02T17:30:36.153Z · EA · GW

I basically agree with the claims and conclusions here, but I think about this kind of differently.

I don’t know whether donating to AMF makes the world better or worse. But this doesn’t seem very important, because I don’t think that AMF is a particularly plausible candidate for the best way to improve the long term future anyway—it would be a reasonably surprising coincidence if the top recommended way to improve human lives right now was also the most leveraged way to improve the long term future.

So our attitude should be more like "I don’t know if AMF is good or bad, but it’s probably not nearly as impactful as the best things I’ll be able to find, and I have limited time to evaluate giving opportunities, so I should allocate my time elsewhere", rather than "I can’t tell if AMF is good or bad, so I’ll think about longtermist giving opportunities instead."

Comment by Buck on Existential Risk and Economic Growth · 2020-11-02T00:43:51.201Z · EA · GW

I think Carl Shulman makes some persuasive criticisms of this research here:

My main issue with the paper is that it treats existential risk policy as the result of a global collective utility-maximizing decision based on people's tradeoffs between consumption and danger. But that is assuming away approximately all of the problem.

If we extend that framework to determine how much society would spend on detonating nuclear bombs in war, the amount would be zero and there would be no nuclear arsenals. The world would have undertaken adequate investments in surveillance, PPE, research, and other capacities in response to data about previous coronaviruses such as SARS to stop COVID-19 in its tracks. Renewable energy research funding would be vastly higher than it is today, as would AI technical safety. As advanced AI developments brought AI catastrophic risks closer, there would be no competitive pressures to take risks with global externalities in development either by firms or nation-states.

Externalities massively reduce the returns to risk reduction, with even the largest nation-states being only a small fraction of the world, individual politicians much more concerned with their term of office and individual careers than national-level outcomes, and individual voters and donors constituting only a minute share of the affected parties. And conflict and bargaining problems are entirely responsible for war and military spending, central to the failure to overcome externalities with global climate policy, and core to the threat of AI accident catastrophe.

If those things were solved, and the risk-reward tradeoffs well understood, then we're quite clearly in a world where we can have very low existential risk and high consumption. But if they're not solved, the level of consumption is not key: spending on war and dangerous tech that risks global catastrophe can be motivated by the fear of competitive disadvantage/local catastrophe (e.g. being conquered) no matter how high consumption levels are.

I agree with Carl; I feel like other commenters are taking this research as a strong update, as opposed to a simple model which I'm glad someone's worked through the details of but which we probably shouldn't use to influence our beliefs very much.

Comment by Buck on [deleted post] 2020-10-24T22:42:03.847Z

My guess is that this feedback would be unhelpful and probably push the grantmakers towards making worse grants that were less time-consuming to justify to uninformed donors.

Comment by Buck on Evidence on correlation between making less than parents and welfare/happiness? · 2020-10-14T05:14:02.148Z · EA · GW

Inasmuch as you expect people to keep getting richer, it seems reasonable to hope that no generation has to be more frugal than the previous.

Comment by Buck on In defence of epistemic modesty · 2020-10-11T19:27:11.541Z · EA · GW

when domain experts look at the 'answer according to the rationalist community re. X', they're usually very unimpressed, even if they're sympathetic to the view themselves. I'm pretty Atheist, but I find the 'answer' to the theism question per LW or similar woefully rudimentary compared to state of the art discussion in the field. I see similar experts on animal consciousness, quantum mechanics, free will, and so on similarly be deeply unimpressed with the sophistication of argument offered.

I would love to see better evidence about this. Eg it doesn't match my experience of talking to physicists.

Comment by Buck on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-11T04:44:24.666Z · EA · GW

I think he wouldn't have thought of this as "throwing the community under the bus". I'm also pretty skeptical that this consideration is strong enough to be the main consideration here (as opposed to eg the consideration that Wayne seems way more interested in making the world better from a cosmopolitan perspective than other candidates for mayor).

Comment by Buck on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-10-11T04:42:08.887Z · EA · GW

Wayne at least sort-of identified as an EA in 2015, eg hosting EA meetups at his house. And he's been claiming to be interested in evidence-based approaches to making the world better since at least then.

Comment by Buck on EA Uni Group Forecasting Tournament! · 2020-09-20T03:45:43.384Z · EA · GW

I think this is a great idea, and I'm excited that you're doing it.

Comment by Buck on Buck's Shortform · 2020-09-19T05:00:39.146Z · EA · GW

I’ve recently been thinking about medieval alchemy as a metaphor for longtermist EA.

I think there’s a sense in which it was an extremely reasonable choice to study alchemy. The basic hope of alchemy was that by fiddling around in various ways with substances you had, you’d be able to turn them into other things which had various helpful properties. It would be a really big deal if humans were able to do this.

And it seems a priori pretty reasonable to expect that humanity could get way better at manipulating substances, because there was an established history of people figuring out ways to do useful things by fiddling around with substances in weird ways, for example metallurgy or glassmaking, and there were lots of examples of materials having different and useful properties. If you had been particularly forward-thinking, you might even have noted that it seems plausible that we'd eventually be able to do the full range of manipulations of materials that life is able to do.

So I think that alchemists deserve a lot of points for spotting a really big and important consideration about the future. (I actually have no idea if any alchemists were thinking about it this way; that’s why I billed this as a metaphor rather than an analogy.) But they weren’t really very correct about how anything worked, and so most of their work before 1650 was pretty useless. 

It’s interesting to think about whether EA is in a similar spot. I think EA has done a great job of identifying crucial and underrated considerations about how to do good and what the future will be like, eg x-risk and AI alignment. But I think our ideas for acting on these considerations seem much more tenuous. And it wouldn’t be super shocking to find out that later generations of longtermists think that our plans and ideas about the world are similarly inaccurate.

So what should you have done if you were an alchemist in the 1500s who agreed with this argument that you had some really underrated considerations but didn’t have great ideas for what to do about them? 

I think that you should probably have done some of the following things:

  • Try to establish the limits of your knowledge and be clear about the fact that you’re in possession of good questions rather than good answers.
  • Do lots of measurements, write down your experiments clearly, and disseminate the results widely, so that other alchemists could make faster progress.
  • Push for better scientific norms. (Scientific norms were in fact invented in large part by Robert Boyle for the sake of making chemistry a better field.)
  • Work on building devices which would enable people to do experiments better.

Overall I feel like the alchemists did pretty well at making the world better, and if they'd been more altruistically motivated they would have done even better.

There are some reasons to think that pushing early chemistry forward is easier than working on improving the long-term future. In particular, you might think that it's only possible to work on x-risk stuff around the time of the hinge of history.

Comment by Buck on Some thoughts on EA outreach to high schoolers · 2020-09-15T17:19:07.536Z · EA · GW

Yeah, I thought about this; it’s standard marketing terminology, and concise, which is why I ended up using it. Thanks though.

Comment by Buck on Buck's Shortform · 2020-09-13T22:45:59.196Z · EA · GW

I thought this post was really bad, basically for the reasons described by Rohin in his comment. I think it's pretty sad that that post has positive karma.

Comment by Buck on Deliberate Consumption of Emotional Content to Increase Altruistic Motivation · 2020-09-13T17:54:38.060Z · EA · GW

When I was 18 I watched a lot of videos of animal suffering, eg linked from Brian Tomasik's list of distressing videos of suffering (extremely obvious content warning: extreme suffering).  I am not sure whether I'd recommend this to others.

As a result, I felt a lot of hatred for people who were knowingly complicit in causing extreme animal suffering, which was basically everyone I knew. At the time I lived in a catered college at university, where every day I'd see people around me eating animal products; I felt deeply alienated and angry and hateful.

This was good in some ways. I think it's plausibly healthy to feel a lot of hatred for society. I think that this caused me to care even less about what people thought of me, which made it easier for me to do various weird things like dropping out of university (temporarily) and moving to America.

I told a lot of people to their faces that I thought they were contemptible. I don't feel like I'm in the wrong for saying this, but it probably didn't lead to me making many more friends than I otherwise would have. And on one occasion I was very cruel to someone who didn't deserve it; I felt worse about this than about basically anything else I'd done in my life.

I don't know whether I'd recommend this to other people. Probably some people should feel more alienated and others should feel less alienated.

Comment by Buck on Are there any other pro athlete aspiring EAs? · 2020-09-13T17:34:49.245Z · EA · GW

For what it's worth, I think that EA related outreach to heirs seems much less promising than to founders or pro poker players. 

Successful founders are often extremely smart in my experience; I expect pro poker players are also pretty smart on average.

Comment by Buck on Are there any other pro athlete aspiring EAs? · 2020-09-13T17:33:17.391Z · EA · GW

It seems likely that pro athletes are more intelligent than average, but I'd be very surprised if they were as intelligent as pro poker players on average.

Comment by Buck on Buck's Shortform · 2020-09-13T17:29:42.563Z · EA · GW

Edited to add: I think that I phrased this post misleadingly; I meant to complain mostly about low quality criticism of EA rather than eg criticism of comments. Sorry to be so unclear. I suspect most commenters misunderstood me.

I think that EAs, especially on the EA Forum, are too welcoming to low quality criticism [EDIT: of EA]. I feel like an easy way to get lots of upvotes is to make lots of vague critical comments about how EA isn’t intellectually rigorous enough, or inclusive enough, or whatever. This makes me feel less enthusiastic about engaging with the EA Forum, because it makes me feel like everything I’m saying is being read by a jeering crowd who just want excuses to call me a moron.

I’m not sure how to have a forum where people will listen to criticism open-mindedly without this leading to a bias towards low-quality criticism.

Comment by Buck on Judgement as a key need in EA · 2020-09-13T15:52:06.910Z · EA · GW

I would be pretty surprised if most of the people from the EALF survey thought that forecasting is "very closely related" to good judgement.

Comment by Buck on The academic contribution to AI safety seems large · 2020-07-31T16:20:15.536Z · EA · GW

Thanks for writing this post--it was useful to see the argument written out so I could see exactly where I agreed and disagreed. I think lots of people agree with this but I've never seen it written up clearly before.

I think I place substantial weight (30% or something) on you being roughly right about the relative contributions of EA safety and non-EA safety. But I think it's more likely that the penalty on non-EA safety work is larger than you think. 

I think the crux here is that I think AI alignment probably requires really focused attention, and research done by people who are trying to do something else will probably end up not being very helpful for some of the core problems.

It's a little hard to evaluate the counterfactuals here, but I'd much rather have the contributions from EA safety than from non EA safety over the last ten years.

I think that it might be easier to assign a value to the discount factor by assessing the total contributions of EA safety and non-EA safety. I think that EA safety does something like 70% of the value-weighted work, which suggests a much bigger discount factor than 80%.

---

Assorted minor comments:

But this is only half of the ledger. One of the big advantages of academic work is the much better distribution of senior researchers: EA Safety seems bottlenecked on people able to guide and train juniors

Yes, but those senior researchers won't necessarily have useful things to say about how to do safety research. (In fact, my impression is that most people doing safety research in academia have advisors who don't have very smart thoughts on long term AI alignment.)

None of those parameters is obvious, but I make an attempt in the model (bottom-left corner).

I think the link is to the wrong model?

A cursory check of the model

In this section you count nine safety-relevant things done by academia over two decades, and then note that there were two things from within EA safety last year that seem more important. This doesn't seem to mesh with your claim about their relative productivity.

Comment by Buck on The academic contribution to AI safety seems large · 2020-07-31T15:57:43.901Z · EA · GW

MIRI is not optimistic about prosaic AGI alignment and doesn't put much time into it.

Comment by Buck on How strong is the evidence of unaligned AI systems causing harm? · 2020-07-23T03:15:29.565Z · EA · GW

I don’t think the evidence is very good; I haven’t found it more than slightly convincing. I don’t think that the harms of current systems are a very good line of argument for potential dangers of much more powerful systems.

Comment by Buck on Intellectual Diversity in AI Safety · 2020-07-22T22:22:42.374Z · EA · GW

I'm curious what your experience was like when you started talking to AI safety people after already coming to some of your own conclusions. Eg I'm curious if you think that you missed major points that the AI safety people had spotted which felt obvious in hindsight, or if you had topics on which you disagreed with the AI safety people and think you turned out to be right.

Comment by Buck on Are there lists of causes (that seemed promising but are) known to be ineffective? · 2020-07-09T05:25:08.184Z · EA · GW

In an old post, Michael Dickens writes:

The closest thing we can make to a hedonium shockwave with current technology is a farm of many small animals that are made as happy as possible. Presumably the animals are cared for by people who know a lot about their psychology and welfare and can make sure they’re happy. One plausible species choice is rats, because rats are small (and therefore easy to take care of and don’t consume a lot of resources), definitively sentient, and we have a reasonable idea of how to make them happy.
[...]
Thus creating 1 rat QALY costs $120 per year, which is $240 per human QALY per year.
[...]
This is just a rough back-of-the-envelope calculation so it should not be taken literally, but I’m still surprised by how cost-inefficient this looks. I expected rat farms to be highly cost-effective based on the fact that most people don’t care about rats, and generally the less people care about some group, the easier it is to help that group. (It’s easier to help developing-world humans than developed-world humans, and easier still to help factory-farmed animals.) Again, I could be completely wrong about these calculations, but rat farms look less promising than I had expected.

I think this is a good example of something seeming like a plausible idea for making the world better, but which turned out to seem pretty ineffective.

Comment by Buck on Concern, and hope · 2020-07-07T16:48:26.251Z · EA · GW

What current controversy are you saying might make moderate pro-SJ EAs more wary of SSC?

Comment by Buck on Concern, and hope · 2020-07-07T16:14:34.863Z · EA · GW

I have two complaints: linking to a post which I think was made in bad faith in an attempt to harm EA, and seeming to endorse it by using it as an example of a perspective that some EAs have.

I think you shouldn't update much on what EAs think based on that post, because I think it was probably written in an attempt to harm EA by starting flamewars.

EDIT: Also, I kind of think of that post as trying to start nasty rumors about someone; I think we should generally avoid signal boosting that type of thing.

Comment by Buck on KR's Shortform · 2020-07-07T16:05:28.629Z · EA · GW

I'd be interested to see a list of what kinds of systematic mistakes previous attempts at long-term forecasting made.

Also, I think that many longtermists (eg me) think it's much more plausible to successfully influence the long run future now than in the 1920s, because of the hinge of history argument.

Comment by Buck on Concern, and hope · 2020-07-06T02:41:46.375Z · EA · GW

Many other people who are personally connected to the Chinese Cultural Revolution are the people making the comparisons, though. Eg the EA who I see posting the most about this (who I don't think would want to be named here) is Chinese.

Comment by Buck on Concern, and hope · 2020-07-06T02:40:04.579Z · EA · GW

I think that both the Cultural Revolution comparisons and the complaints about Cultural Revolution comparisons are way less bad than that post.

Comment by Buck on Concern, and hope · 2020-07-05T18:50:00.477Z · EA · GW

culminating in the Slate Star Codex controversy of the past two weeks

I don't think that the SSC kerfuffle is that related to the events that have caused people to worry about cultural revolutions. In particular, most of the complaints about the NYT plan haven't been related to the particular opinions Scott has written about.