Utopia In The Fog 2017-03-28T02:54:51.490Z


Comment by Zeke_Sherman on Critique of Superintelligence Part 4 · 2018-12-14T15:24:20.496Z · EA · GW

A lot of baggage goes into the selection of a threshold for "highly accurate" or "ensured safe" or statements of that sort. The idea is that early safety work helps even though it won't get you a guarantee. I don't see any good reason to believe that AI safety is any more or less tractable than preemptive safety for any other technology; it just happens to have greater stakes. You're right that the track record doesn't look great; however, I really haven't seen any strong reason to believe that preemptive safety is generally ineffective - it seems like it just isn't tried much.

Comment by Zeke_Sherman on Critique of Superintelligence Part 3 · 2018-12-14T15:11:44.100Z · EA · GW

For low probability of other civilizations, see

Humans don't have obviously formalized goals. But you can formalize human motivation, in which case our final goal is going to be abstract and multifaceted, and it is probably going to include a very, very broad sense of well-being. The model applies just fine.

Because it is tautologically true that agents are motivated against changing their final goals, this is just not possible to dispute. The proof is trivial; it comes from the very stipulation of what a goal is in the first place. It is just a framework for describing an agent. Now, within this framework, humans' final goals happen to be complex and difficult to discern, and maybe AI goals will be like that too. But we tend to think that AI goals will not be like that. Omohundro offers some economic arguments in his paper on the "basic AI drives", but also, it just seems clear that you can program an AI with a particular goal function and that will be all there is to it.

Yes, AI may end up with very different interpretations of its given goal but that seems to be one of the core issues in the value alignment problem that Bostrom is worried about, no?

Comment by Zeke_Sherman on Critique of Superintelligence Part 5 · 2018-12-14T14:55:54.634Z · EA · GW

The Pascal's Mugging thing has been discussed a lot around here. There isn't an equivalence between all causes and muggings because the probabilities and outcomes are distinct and still matter. It's not the case that every religion and every cause and every technology has the same tiny probability of the same large consequences, and you cannot satisfy every one of them because they have major opportunity costs. If you apply EV reasoning to cases like this then you just end up with a strong focus on one or a few of the highest impact issues (like AGI) at heavy short term cost. Unusual, but not a reductio ad absurdum.

There is no philosophical or formal system that properly describes human beliefs, because human beliefs are messy, fuzzy neurophysiological phenomena. But we may choose to have a rational system for modeling our beliefs more consistently, and if we do then we may as well go with something that doesn't give us obviously wrong implications in Dutch book cases, because a belief system that has wrong implications does not fit our picture of 'rational' (whether we encounter those cases or not).

Comment by Zeke_Sherman on Tiny Probabilities of Vast Utilities: Concluding Arguments · 2018-11-16T10:38:12.873Z · EA · GW

I think the same sheltering happens if you talk about ignoring small probabilities, even if the probability of the x-risk is in fact extremely small.

The probability that $3000 to AMF saves a life is significant. But the probability that it saves the life of any one particular individual is extremely low. We can divide up the possibility space any number of ways. To me it seems like this is a pretty damning problem for the idea of ignoring small probabilities.
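A toy model of that division of the possibility space (the per-person probability here is invented purely for illustration): the same donation can have a significant chance of saving someone while the chance of saving any named individual stays negligible.

```python
# Hypothetical: $3000 of nets reaches 1,000 people, each of whom would
# otherwise die with some small probability that the nets avert.
n_recipients = 1_000
p_save_particular_person = 0.0005  # invented for illustration

# Probability the donation saves at least one life:
p_save_someone = 1 - (1 - p_save_particular_person) ** n_recipients

print(round(p_save_someone, 2))  # ~0.39: significant in aggregate
print(p_save_particular_person)  # 0.0005: "extremely low" individually
```

So a rule that ignores small probabilities gives different answers depending on whether you carve the outcome up by "a life is saved" or by "this person's life is saved".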

We can say that the outcome of the AMF donation has lower variance than the outcome of an x-risk donation, assuming equal EV. So we could talk about preferring low variance, or being averse to having no impact. But I don't know if that will seem as intuitively reasonable when we circle our new framework back to more everyday, tangible thought experiments.

Comment by Zeke_Sherman on Crohn's disease · 2018-11-15T10:24:58.205Z · EA · GW

Only if this project is assumed to be the best available use of funds. Other things may be better.

Comment by Zeke_Sherman on Crohn's disease · 2018-11-14T22:28:20.354Z · EA · GW

>Zeke estimates the direct financial upside of a successful replication to be about 33B$/year. This is a 66000:1 ratio (33B/500K = 66000).

This is not directly relevant, because the money is being saved by other people and governments, who are not normally using their money very well. EAs' money is much more valuable, as it is used much more efficiently than Western individuals and governments usually use theirs. NB: this is also the reason why EAs should generally be considered funders of last resort.

If the study has a 0.5% (??? I have no idea) chance of leading to global approval and effective treatment, then it's 35k QALY in expectation per my estimate, which means a point estimate of $14/QALY. IIRC that's comparable to global poverty interventions, but at a much lower robustness of evidence; some other top EA efforts with a similar degree of robustness will presumably have a much higher EV. Of course the other diseases you can work on may be much worse causes.

Also, that $33B comes from a study on the impact of the disease. Just because the study replicates well doesn't mean the treatment truly works, is approved globally, etc. Hence the 0.5% number being very low.
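The arithmetic behind that point estimate can be laid out in a few lines (a sketch; it takes the 0.5% approval probability above, the 7 million QALY total from my estimate below, and a $500K study cost as given):

```python
# Rough expected-value sketch of the Crohn's replication study.
total_qaly_if_success = 7_000_000  # from the 7M QALY estimate below
p_success = 0.005                  # guessed chance of global approval
study_cost = 500_000               # dollars

expected_qaly = total_qaly_if_success * p_success  # 35,000 QALY
cost_per_qaly = study_cost / expected_qaly         # ~$14 per QALY

print(f"Expected QALYs: {expected_qaly:,.0f}")
print(f"Point estimate: ${cost_per_qaly:.2f} per QALY")
```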

Comment by Zeke_Sherman on Tiny Probabilities of Vast Utilities: Solutions · 2018-11-14T22:14:11.629Z · EA · GW

Last thread you said the problem with the funnel is that it makes the decision arbitrarily dependent upon how far you go. But to stop evaluating possibilities violates the regularity assumption. It seems like you are giving an argument against people who follow solution 1 and reject regularity; it's those people whose decisions depend hugely and arbitrarily on where they define the threshold, especially when a hard limit for p is selected. Meanwhile, the standard view in the premises here has no cutoff.

> One needs a very extreme probability function in order to make this work; the probabilities have to diminish very fast to avoid being outpaced by the utilities.

I'm not sure what you mean by 'very fast'. The implausibility of such a probability function is an intuition that I don't share. I think appendix 8.1 is really going to be the core argument at stake.

Solution #6 seems like an argument about the probability function, not an argument about the decision rule.

Comment by Zeke_Sherman on Crohn's disease · 2018-11-14T10:02:10.790Z · EA · GW

I know. It's ten years of savings, because curing is accelerated by ten years.

Comment by Zeke_Sherman on Crohn's disease · 2018-11-14T00:05:52.658Z · EA · GW

Going from moderate disease to remission seems to be an increase of about 0.25 QALY/year. If this research accelerates treatment for sufferers by an average of 10 years, then that's an impact of 5 million QALY.

Crohn's also costs $33B per year in the US plus major European countries. If we convert that at a typical Western cost-per-statistical-life-saved of $7M, and the average life saved is +25 QALY, that's another 1.2 million QALY. Maybe 2 million worldwide, because Crohn's is mostly a Western phenomenon. So that's 7 million QALY overall, which of course we discount by whatever the probability of failure is.
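The two components of that estimate can be sketched explicitly. Note that the 2 million sufferers figure below is my assumption, back-derived from the 5 million QALY total (0.25 × 10 × 2M); everything else uses the numbers above.

```python
# Health benefit: remission gain over the accelerated decade.
qaly_gain_per_year = 0.25      # moderate disease -> remission
years_accelerated = 10
sufferers = 2_000_000          # assumed; implied by the 5M QALY figure

health_qaly = qaly_gain_per_year * years_accelerated * sufferers

# Financial benefit: $33B/year (US + major European countries),
# converted at $7M per statistical life saved, +25 QALY per life,
# over the same ten years; then rounded up for the rest of the world.
financial_qaly_west = 33e9 / 7e6 * 25 * years_accelerated  # ~1.2 million
financial_qaly_world = 2_000_000

total_qaly = health_qaly + financial_qaly_world
print(f"{total_qaly:,.0f}")  # 7,000,000
```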

It's very rough but it's a step forward, don't let the perfect be the enemy of the good.

Comment by Zeke_Sherman on Crohn's disease · 2018-11-13T19:24:01.994Z · EA · GW

We need to factor in QALY or WALY benefits of health improvement in addition to the money saved by users, but we also need to discount for how many people won't get the new treatment.

Comment by Zeke_Sherman on Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem · 2018-11-13T14:31:05.948Z · EA · GW

>The shape of your action profiles depends on your probability function

Are you saying that there is no expected utility just because people have different expectations?

>and your utility function

Well, of course. That doesn't mean there is no expected utility! It's just different for different agents.

>I'm arguing that even if you ignore infinitely valuable outcomes, there's still a big problem having to do with infinitely many possible finite outcomes,

That in itself is not a problem; imagine a uniform distribution from 0 to 1 - infinitely many possible outcomes, but a perfectly well-defined expectation.

>if the profiles are funnel-shaped then what you end up doing will be highly arbitrary, determined mostly by whatever is happening at the place where you happened to draw the cutoff.

If you do something arbitrary like drawing a cutoff, then of course how you do it will have arbitrary results. I think the lesson here is not to draw cutoffs in the first place.

>That's what I'd like to think, and that's what I do think. But this argument challenges that; this argument says that the low-hanging fruit metaphor is inappropriate here: there is no lowest-hanging fruit or anything close; there is an infinite series of fruit hanging lower and lower, such that for any fruit you pick, if only you had thought about it a little longer you would have found an even lower-hanging fruit that would have been so much easier to pick that it would easily justify the cost in extra thinking time needed to identify it... moreover, you never really "pick" these fruit, in that the fruit are gambles, not outcomes; they aren't actually what you want, they are just tickets that have some chance of getting what you want. And the lower the fruit, the lower the chance...

There must be a lowest hanging fruit out of any finite set of possible actions, as long as "better intervention than" follows basic decision theoretic properties which come automatically if they have expected utility values.
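The finite-set point is just that a maximum over real-valued expected utilities is always well defined. A minimal sketch, with gambles invented for illustration:

```python
# Any finite set of gambles with real-valued expected utilities has a
# maximum, so a "lowest-hanging fruit" (best action) always exists.
gambles = {              # hypothetical (probability, payoff) pairs
    "A": (0.9, 10),      # EU = 9.0
    "B": (0.5, 25),      # EU = 12.5
    "C": (0.01, 1000),   # EU = 10.0
}

def expected_utility(p, payoff):
    return p * payoff

best = max(gambles, key=lambda g: expected_utility(*gambles[g]))
print(best)  # "B"
```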

Also, remember the conservation of expected evidence. When we think about the long run effects of a given intervention, we are updating our prior to go either up or down, not predictably making it seem more attractive.

Comment by Zeke_Sherman on Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem · 2018-11-11T16:17:35.812Z · EA · GW

>Imagine you are adding new possible outcomes to consideration, one by one. Most of the outcomes you add won't change the EV much. But occasionally you'll hit one that makes everything that came before look like a rounding error, and it might flip the sign of the EV.

But the probability of those rare things will be super low. It's not obvious that they'll change the EV as much as nearer term impacts.

This would benefit from an exercise in modeling the utilities and probabilities of a certain intervention to see what the distribution actually looks like. So far no one has bothered (or needed, perhaps) to actually enumerate the 2nd, 3rd, etc... order effects and estimate their probabilities. All this theorizing might be unnecessary if our actual expectations follow a different pattern.

>So this is a problem in theory--it means we are approximating an ideal which is both stupid and incoherent.

Are we? Expected utility is still a thing. Some actions have greater expected utility than others even if the probability distribution has huge mass across both positive and negative possibilities. If infinite utility is a problem then it's already a problem regardless of any funnel or oscillating type distribution of outcomes.

>Arguably this behavior is the predictable result of considering more and more possibilities in your EV calculations, and it doesn't represent progress in any meaningful sense--it just means that EAs have gone farther down the funnel-shaped rabbithole than everybody else.

Another way of describing this phenomenon is that we are simply seizing the low hanging fruit, and hard intellectual progress isn't even needed.

Comment by Zeke_Sherman on Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem · 2018-11-10T16:05:51.979Z · EA · GW

Good post but we shouldn't assume the "funnel" distribution to be symmetric about the line of 0 utility. We can expect that unlikely outcomes are good in expectation just as we expect that likely outcomes are good in expectation. Your last two images show actions which have an immediate expected utility of 0. But if we are talking about an action with generally good effects, we can expect the funnel (or bullet) to start at a positive number. We also might expect it to follow an upward-sloping line, rather than equally diverging to positive and negative outcomes. In other words, bed nets are more likely to please interdimensional travelers than they are to displease them, and so on.

Also, the distribution of outcomes at any level of probability should follow a roughly Gaussian distribution. Most bizarre, contorted possibilities lead to outcomes that are neither unusually good nor unusually bad. This means it's not clear that the utility is undefined; as you keep looking at sets of unlikelier outcomes, you are getting a series of tightly finite expectations rather than big broad ones that might easily turn out to be hugely positive or negative based on minor factors. Your images of the funnel and bullet should show much more density along the middle, with less density at the top and bottom. We still get an infinite series, so there is that philosophical problem for people who want a rigorous idea of utilitarianism, but it's not a big problem for practical decision making because it's easy to talk about some interventions being better than others.
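A toy simulation of that picture (all numbers invented): if the outcomes at each unlikelier tier are roughly Gaussian, then each tier contributes a tight, finite amount to the overall expectation, even as the tiers' utilities get wilder.

```python
import random

random.seed(0)

# Tier k holds outcomes of probability ~10**-k, with utilities drawn
# from a Gaussian whose spread grows with k. Each tier's contribution
# to the EV is the tier probability times the (tight) Gaussian mean.
contributions = []
for k in range(1, 8):
    p_tier = 10.0 ** -k
    outcomes = [random.gauss(0.0, 10.0 ** k) for _ in range(10_000)]
    mean_utility = sum(outcomes) / len(outcomes)
    contributions.append(p_tier * mean_utility)

# The per-tier contributions stay small and bounded rather than
# exploding, because the Gaussian tails are thin.
print(contributions)
```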

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-28T06:29:45.621Z · EA · GW

>Considering that most people would be unhappy to be told that they're more likely to be a rapist because of their race, we should have a strong prior that many Effective Altruists would feel the same way.

Well I saw statistics that suggest that I'm more likely to be a rapist since I'm a man, the post explicitly said that I have a 6% chance of being a rapist as a man in EA, and that didn't make me unhappy. And I haven't seen anyone who has actually expressed any personal discomfort at the OP nor any of my posts, leaving aside the secondhand outrage expressed by characters such as yourself. So my prior is that this is false.

>Apart from Lila's argument, this "non-white people are more likely to be rapists" is a terrible line of thinking because (IMO) it's likely to build racist modes of thought: assigning negative characteristics to minorities based on dubious evidence

Well you can actually ask rape victims what race their attacker was and then see what the statistics are, as RAINN did in the link I provided. That's not dubious evidence.

>If the evidence were incontrovertible, this might be acceptable, but it's nowhere near the required standard of proof to overcome the strong prior that humans are equally likely to commit crimes regardless of race

Huh? Why on earth would you have that prior, given the long, long history of different ethnic groups behaving differently and being treated differently throughout Western history? And we have damningly strong evidence that people of different races commit crimes at different rates, as a pure statistical fact backed up by mountains of data gathered by the Bureau of Justice Statistics and many other institutions. What you want to attribute that to is up to you, but refusing to acknowledge it is a height of denialism that even race and progressive activists don't reach. The actually well-informed progressive/leftist activists and philosophers don't grasp at concepts of rationality to throw together some skepticism about whether different races commit crimes at different rates; they just say that the cause of these differential rates is social, a result of a structurally racist society. And if you gave a whiff of charity to my posts, you would know that nothing I have said in any way assumes that the increased propensity of blacks to commit rapes relative to whites in the Western world is not a direct result of a structurally racist society.

>among other reasons, because race is largely a social construct).

This is a silly cop-out. Only uninformed right wing pundits who strawman poststructuralism think that for something to be a social construct implies that it doesn't matter and isn't real. We can define race as minimally as "the color that people check on surveys when they are asked about their race" and it is still true that people commit crimes at different rates in correspondence with how they identify. Dodging these issues by disputing how much biological reality there is or isn't associated with social race constructs blindly sweeps over enormous realities about how race, ethnicity and skin color are perceived and operate in contemporary society.

>Additionally, the long history of using false statistics and "science" to bolster white supremacy should make one more skeptical of numbers like this

Those dastardly white supremacists at the Rape, Abuse & Incest National Network! Or is it the rape victims - you think we shouldn't believe the rape victims when they tell us what race their aggressor was, is that right? I'm almost offended by that. And yet you try to lecture me about things that will "strengthen bad cognitive patterns and weaken good judgement"...

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-22T01:20:00.003Z · EA · GW

It's nice to imagine things. But I'll wait for actual EAs to tell me about what does or doesn't upset them before drawing conclusions about what they think.

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-21T22:13:25.539Z · EA · GW

I think it's pretty odd of you to try to tell me about what upsets EAs or how we feel, given that you have already left the movement. To speak as if you have some kind of personal stake or connection to this matter is rather dishonest.

>I hope you're just using this as a demonstration and not seriously suggesting that we start racially profiling people in EA.

Racial profiling is something that is conducted by law enforcement and criminal investigation, and EA does neither of those things. I would be much more bothered if EA started trying to hunt for criminals within its ranks than I would be from the mere fact that the manner in which we did this involved racial profiling.

>It should be clear why people find the following statements upsetting:

Neither of those statements is upsetting to me.

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-13T12:49:37.121Z · EA · GW

There are an estimated 276,000 annual cases of female suicide in the entire world. If, say, half of them are associated with sexual violence (a guess), and you throw males in as well, then the eventual lifesaving potential is maybe 150,000 people per year.

Most of these suicides are in SE Asia and the Western Pacific, where I believe healthcare and medication provision are not as comprehensive as they are here in the West.

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-13T02:33:31.854Z · EA · GW

>How long do you think it would take you to upgrade every single estimate to the maximum quality level?

Um I don't know, I just said I would estimate this one number. I think I was clear that I was talking about "this particular question".

Assuming 2,300 people in EA per the survey, for every 100 rape victims:

Out of the 25 rape victims who are spouses or partners of the perpetrator, 20 will be outside of EA, when the offender is in EA.

Out of the 45 rape victims who are acquaintances of the perpetrator, 30 will be outside of EA, when the offender is in EA.

Out of the 28 rape victims who are strangers to the perpetrator, 20 will be outside of EA, when the offender is in EA.

Out of the 6 victims who can't remember or are victimized by multiple people, 4 will be outside of EA, when the offender is in EA.

For the 1 victim who is a non-spouse relative, the victim will be outside of EA.

This makes a total of 30% of rape victims of EAs being in EA.

Assuming 13,000 people in EA per the FB group, for every 100 rape victims:

Out of the 25 rape victims who are spouses or partners of the perpetrator, 23 will be outside of EA, when the offender is in EA.

Out of the 45 rape victims who are acquaintances of the perpetrator, 40 will be outside of EA, when the offender is in EA.

Out of the 28 rape victims who are strangers to the perpetrator, 24 will be outside of EA, when the offender is in EA.

Out of the 6 victims who can't remember or are victimized by multiple people, 5 will be outside of EA, when the offender is in EA.

For the 1 victim who is a non-spouse relative, the victim will be outside of EA.

This makes a total of 12% of rape victims of EAs being in EA.

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-13T02:18:40.467Z · EA · GW

I think you'd get better results if you spent your time simply including things that can easily be included, rather than sparking meta-level arguments about which things are or aren't worth including. You could have accepted the race correlations and then found one or two countervailing considerations to counter the alleged bias for a more comprehensive overall view. That still would have been more productive than this.

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-12T13:41:25.574Z · EA · GW

>The way in which gender is relevant while race is not is that sexual attractions are limited by gender preferences in most humans.

Sexual violence tendencies are correlated with racial status in most humans. Why treat it differently?

>Given that most sexually violent people attack one gender but not the other, and given that our gender ratio is very seriously skewed, gender is a critical component of this sexual violence risk estimate.

And given that sexually violent people are disproportionately represented across racial categories, and given that our race ratio is very seriously skewed, race is a critical component of this sexual violence risk estimate.

>Given that you believe a race adjustment should go with gender adjustment, I don't see why you are not also advocating for all of the following:

Try and find some statistics for both EAs and sex offenders with comparable data categories on those topics and you'll see.

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-12T02:21:51.933Z · EA · GW

Well that's true. Depending on how many unscrupulous people you think there are on the EA forum :) Though you don't necessarily need to include all possible adjustments at once to avoid biased updates, you just need to select adjustments via an unbiased process.

Demographics is one of the more obvious and robust things to adjust for, though. It's a very common topic in criminology and social science, with accurate statistics available both for EA and for outside groups. It's a reasonable thing to think about as an easy initial thing to adjust for. You already included adjustment for gender statistics, so racial statistics should go along with that.

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-12T01:31:34.370Z · EA · GW

It mentions them, but does it make any points based on the assumption that there are too few of them?

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-12T01:17:11.253Z · EA · GW

Again - I'm not making any demand about putting a lot of effort into the research. I think it's totally okay to make simple, off-the-cuff estimates, as long as better information isn't easy to find.

On this particular question though, we can definitely do better than calculating as if the figure is 100%. I mean, just think about it, think about how many of EAs' social and sexual interactions involve people outside of EA. So of course it's going to be less than 100%, significantly less. Maybe 50%, maybe 75%, we can't come up with a great estimate, but at least it will be an improvement. I can do it if you want. And you didn't write that the number was 100%, but the way the calculation was written made it seem like someone (like me) could come away with the impression that it was 100% if they weren't super careful. That's all I'm suggesting.

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-12T01:08:43.008Z · EA · GW

>Are most acts of sexual violence committed by a select particularly egregious few or by the presumably more common 'casual rapist'? Answering this question is relevant for picking the strategies to focus on.

Lisak and Miller (link repeated for convenience) give decent data on the distribution: 91% of rapes/attempted rapes are from repeat offenders.

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-12T00:58:44.803Z · EA · GW

Of course that would be suboptimal, hundreds of hours calculating base rates would certainly not be worthwhile. I'm not offering to do it and I'm not demanding that anyone do it. Hundreds of hours directly studying EA would surely be more worthwhile, I agree on that. All I'm saying is that this information we have now is better than that information which we had an hour ago.

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-12T00:42:43.520Z · EA · GW

I did not see that note. But for the calculations on the productivity impact, it seemed like one might read it with the assumption that the 80,000 hours in a career are EA career hours. If we don't have enough information to make an estimate on this proportion, that's fine, but it definitely doesn't mean that we should implicitly treat it as if it is 100%; after all it is certainly less than that. What I read of the calculations just didn't make it clear, so I wanted to clarify.

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-12T00:34:16.934Z · EA · GW

Yes, I saw that part. But first, just because there are lots of unknown factors doesn't mean we should ignore the ones that we do know. Suppose we're too busy to look at anything besides demographics, that's fine, but it doesn't mean that we should deliberately ignore the information that we have about demographics. We'll have an inaccurate estimate, but it's still less inaccurate than the estimate we had before. If you don't/didn't have time to originally do this adjustment, that's fine, like I said you already did a lot of work getting a good statistical foundation here. But we have more information so let's update accordingly.

Now the statistics could be incorrect because of different rates of conviction or indictment or something of the sort. Sure, that is a different possibility, and if we have any suspicions about it then we can make some guesses in order to facilitate a better overall estimate. I would assume, from the outset, uniform priors for conviction rates. Maybe whites are under-represented due to bias in the system, or maybe they are over-represented due to the subcultures in which they live and the social independence or access to legal/judicial resources of their victims.

What are the facts? Sexual offense victims report that 57% of offenders are white, exactly in line with my other source. Only 27% report the offender as black, which is significantly less than my other source suggests, though of comparatively little consequence for EA going by statistical averages. 6% say other and 8% say unknown.

In this case you are right that it seems like there was a disparity, blacks are apparently convicted disproportionately. But here at least we have an apparently more reliable source of perpetrator demographics and it says roughly the same thing about what EA base rates would be relative to that of the broader population.

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-12T00:08:09.829Z · EA · GW

The second point is irrelevant - what statistic is changed by the prevalence of false rape accusations? The Lisak and Miller study cited for the 6% figure is a survey of self-reports among men on campus.

Comment by Zeke_Sherman on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-12T00:02:32.093Z · EA · GW

Are you assuming that crimes committed by people in EA will be towards other people in EA? According to RAINN, 34% of the time the sex offender is a family member. And most EAs have social circles which mostly comprise people who are not in EA, I would think. (This is certainly the case if you take the whole Facebook group to be the EA movement.)

I think that for all intents and purposes we should just use the survey responses as the template for the size of the EA movement, because if someone is on Facebook but is not even involved enough that we can get them to take a survey then we generally have little hope of influencing their behavior, if they even are in EA.

This seems like a well researched post with accurate statistics, but you didn't note that EA is demographically somewhat different from the rest of the population. 58% of American sexual assault offenders are white (this includes Hispanics), 40% are black, and 2% are "other". Meanwhile the EA survey showed that 89% of EAs identify as non-Hispanic white, 3.3% identify as Hispanic, 0.7% identify as black, and 7% identify as Asian (i.e. other). These stats are quite different from the base rate for the US, in a way that suggests the base rate of offenders in EA is lower than it is for the general population.

The 7.2 rapes per offender figure seems like it comes from a survey of paraphiliacs? Lisak and Miller say it is 4 rapes per offender. Maybe that is just because college students are younger.

>Encourage or host dry events and parties.

I think that should be an obvious thing to do. Alcohol already costs money and reduces the intellectual caliber of conversation, we are better off without it.

Comment by Zeke_Sherman on Is EA Growing? Some EA Growth Metrics for 2017 · 2017-09-06T16:31:17.240Z · EA · GW

You can find the stats by going to the right of the page in moderation tools and clicking "traffic stats". They only go back a year, though; another site should show you subscriber counts from before that, but not activity.

Comment by Zeke_Sherman on Is EA Growing? Some EA Growth Metrics for 2017 · 2017-09-06T14:36:39.352Z · EA · GW

The effective altruism subreddit is growing in traffic: (August figures are 2.5k and 9.5k)

The EA Wikipedia page is not changing much in pageviews:

Comment by Zeke_Sherman on Utopia In The Fog · 2017-06-04T08:28:33.037Z · EA · GW

>I don't think this is true in very many interesting cases. Do you have examples of what you have in mind? (I might be pulling a no-true-scotsman here, and I could imagine responding to your examples with "well that research was silly anyway.")

Parenthesis is probably true, e.g. most of MIRI's traditional agenda. If agents don't quickly gain decisive strategic advantages then you don't have to get AI design right the first time; you can make many agents and weed out the bad ones. So the basic design desiderata are probably important, but it's just not very useful to do research on them now. Not familiar enough with your line of work to comment on it, but just think about the degree to which a problem would no longer be a problem if you can build, test and interact with many prototype human-level and smarter-than-human agents.

Whether or not your system is rebuilding the universe, you want it to be doing what you want it to be doing. Which "multi-agent dynamics" do you think change the technical situation?

Aside from the ability to prototype as described above, there are the same dynamics which plague human society: multiple factions with good intentions end up fighting due to security concerns or tragedies of the commons, or multiple agents with different priors interpret every new piece of evidence they see differently and so go down intractably separate paths of disagreement. FAI can solve all the problems of class, politics, economics, etc., by telling everyone what to do, for better or for worse. But multi-agent systems will only be stable with strong institutions, unless they have some other kind of cooperative architecture (such as universal agreement in value functions, in which case you now have the problem of controlling everybody's AIs, but without the benefit of having an FAI to rule the world). Building these institutions and cooperative structures may have to be done right the first time, since they are effectively singletons, and they may be less corrigible or require different kinds of mechanisms to ensure corrigibility. And the dynamics of multi-agent systems mean you cannot accurately predict the long-term future merely based on value alignment, which you would (at least naively) be able to do with a single FAI.

If evolution isn't optimizing for anything, then you are left with the agents' optimization, which is precisely what we wanted.

Well it leads to agents which are optimal replicators in their given environments. That's not (necessarily) what we want.

I though you were telling a story about why a community of agents would fail to get what they collectively want. (For example, a failure to solve AI alignment is such a story, as is a situation where "anyone who wants to destroy the world has the option," as is the security dilemma, and so forth.)

That too!

Comment by Zeke_Sherman on Utopia In The Fog · 2017-06-04T08:19:46.505Z · EA · GW

Amateur question: would it help to also include back-of-the-envelope calculations to make your arguments more concrete?

Don't think so. It's too broad and speculative, with ill-defined values. It just boils down to (a) whether my scenarios are more likely than the AI-Foom scenario, and (b) whether my scenarios are more neglected. There aren't many other factors that a complicated calculation could add.

Comment by Zeke_Sherman on Utopia In The Fog · 2017-06-04T08:16:44.519Z · EA · GW

Very little research is done on advancing AI towards AGI, while a large portion of neuroscience research and also a decent amount of nanotechnology research (billions of dollars per year between the two) are clearly pushing us towards the ability to do WBE, even if that's not the reason that research is being conducted right now.

Yes, but I mean they're not trying to figure out how to do it safely and ethically. The ethics/safety worries are 90% focused around what we have today, and 10% focused on superintelligence.

Comment by Zeke_Sherman on Considering Considerateness: Why communities of do-gooders should be exceptionally considerate · 2017-06-03T18:17:31.197Z · EA · GW

Who hurt you?

Comment by Zeke_Sherman on Update on Effective Altruism Funds · 2017-04-22T04:49:06.897Z · EA · GW

given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds

This is wholly speculative. I've seen no evidence that consequentialists "feel bad" in any emotionally meaningful sense for having made donations to the wrong cause.

This is the same sort of effect people get from looking at this sort of advertising, but more subtle

Looking at that advertising slightly dulled my emotional state. Then I went on about my day. And you are worried about something that would be even more subtle? Why can't we control our feelings and not fall to pieces at the thought that we might have been responsible for injustice? The world sucks, and when one person screws up, someone else is suffering and dying at the other end. Being cognizant of this is far more important than protecting feelings.

if, say, individual EAs had information about giving opportunities that were more effective than EA Funds, but donated to EA Funds anyways out of a sense of pressure caused by the "at least as good as OPP" slogan.

I think you ought to place a bit more faith in the ability of effective altruists to make rational decisions.

Comment by Zeke_Sherman on Utopia In The Fog · 2017-03-28T17:53:25.428Z · EA · GW

Thanks for the comments.

Evolution doesn't really select against what we value, it just selects for agents that want to acquire resources and are patient. This may cut away some of our selfish values, but mostly leaves unchanged our preferences about distant generations.

Evolution favors replication. But patience and resource acquisition aren't obviously correlated with any sort of value; if anything, better resource-acquirers are destructive and competitive. The claim isn't that evolution is intrinsically "against" any particular value, it's that it's extremely unlikely to optimize for any particular value, and the failure to do so nearly perfectly is catastrophic. Furthermore, competitive dynamics lead to systematic failures. See the citation.

Shulman's post assumes that once somewhere is settled, it's permanently inhabited by the same tribe. But I don't buy that. Agents can still spread through violence or through mimicry (remember the quote on fifth-generation warfare).

It seems like you are paraphrasing a standard argument for working on AI alignment rather than arguing against it.

All I am saying is that the argument applies to this issue as well.

Over time it seems likely that society will improve our ability to make and enforce deals, to arrive at consensus about the likely consequences of conflict, to understand each others' situations, or to understand what we would believe if we viewed others' private information.

The point you are quoting is not about just any conflict, but about the security dilemma and arms races. These do not significantly change with complete information about the consequences of conflict. Better technology yields better monitoring, but also better hiding: which is easier, monitoring ICBMs in the 1970s or monitoring cyberweapons today?

One of the most critical pieces of information in these cases is intentions, which are easy to keep secret and will probably remain so for a long time.

By "don't require superintelligence to be implemented," do you mean systems of machine ethics that will work even while machines are broadly human level?

Yes, or even implementable in current systems.

I think the mandate of AI alignment easily covers the failure modes you have in mind here.

The failure modes here are a different context where the existing research is often less relevant or not relevant at all. Whatever you put under the umbrella of alignment, there is a difference between looking at a particular system with the assumption that it will rebuild the universe in accordance with its value function, and looking at how systems interact in varying numbers. If you drop the assumption that the agent will be all-powerful and far beyond human intelligence then a lot of AI safety work isn't very applicable anymore, while it increasingly needs to pay attention to multi-agent dynamics. Figuring out how to optimize large systems of agents is absolutely not a simple matter of figuring out how to build one good agent and then replicating it as much as possible.

Comment by Zeke_Sherman on Utopia In The Fog · 2017-03-28T17:26:15.126Z · EA · GW

Optimizing for a narrower set of criteria allows more optimization power to be put behind each member of the set. I think it is plausible that those who wish to do the most good should put their optimization power behind a single criterion, as that gives it some chance to actually succeed.

Only if you assume that there are high thresholds for achievements.

The best candidate afaik is right to exit, as it eliminates the largest possible number of failure modes in the minimum complexity memetic payload.

I do not understand what you are saying.

Edit: do you mean, the option to get rid of technological developments and start from scratch? I don't think there's any likelihood of that, it runs directly counter to all the pressures described in my post.

Comment by Zeke_Sherman on Concrete project lists · 2017-03-28T02:47:01.234Z · EA · GW

This is odd. Personally my reaction is that I want to get to a project before other people do. Does bad research really make it harder to find good research? This doesn't seem like a likely phenomenon to me.

Comment by Zeke_Sherman on Concrete project lists · 2017-03-28T02:45:09.491Z · EA · GW

I think we need more reading lists. There have already been one or two for AI safety, but I've not seen similar ones for poverty, animal welfare, social movements, or other topics.

Comment by Zeke_Sherman on Open Thread #36 · 2017-03-28T02:38:43.365Z · EA · GW

We all know how many problems there are with reputation and status seeking. You would lower epistemic standards, cement power users, and make it harder for outsiders and newcomers to get any traction for their ideas.

If we do something like this it should be for very specific capabilities, like reliability, skill or knowledge in a particular domain, rather than generic reputation. That would make it more useful and avoid some of the problems.

Comment by Zeke_Sherman on Concrete project lists · 2017-03-28T02:36:30.192Z · EA · GW

Pareto Fellowship was shut down? When? What happened?

Comment by Zeke_Sherman on Open Thread #36 · 2017-03-28T02:01:17.613Z · EA · GW

Has anyone thought about retiring in a foreign country where the cost of living is low? That seems like a great idea to me - all the benefits of saving money, without worrying about work opportunities.