Comment by kokotajlod on EA Forum Prize: Winners for February 2019 · 2019-04-01T02:41:05.064Z · score: 3 (3 votes) · EA · GW

Agreed.

Comment by kokotajlod on Suggestions for EA wedding vows? · 2019-03-24T15:06:06.490Z · score: 1 (1 votes) · EA · GW

Yep. Some helpful context for the quote: it was written at the start of World War Two by a French pilot, and it draws heavily on his personal experiences. For example, he and his copilot once crashed in the Sahara, were stranded there for four days, and nearly died of dehydration before being rescued by a Bedouin who happened to stumble across them.

Comment by kokotajlod on Suggestions for EA wedding vows? · 2019-03-22T16:13:41.270Z · score: 6 (5 votes) · EA · GW

Congratulations! :)

My wife and I got married last year. Here are some quotes/sayings that we like. The first one (the shortest) made it into our wedding vows:

"Take pride in noticing when you are confused, or when evidence goes against what you think. Rejoice when you change your mind. "

" There are actually two struggles between good and evil within each person. The first is the struggle to choose the right path despite all the temptations to choose the wrong path; it is the struggle to make actions match words. The second is the struggle to correctly decide which path is right and which is wrong. Many people who win one struggle lose the other. Do not lose sight of this fact or you will be one of them. "

"One who wishes to believe says, “Does the evidence permit me to believe?” One who wishes to disbelieve asks, “Does the evidence force me to believe?” Beware lest you place huge burdens of proof only on propositions you dislike, and then defend yourself by saying: “But it is good to be skeptical.” If you attend only to favorable evidence, picking and choosing from your gathered data, then the more data you gather, the less you know. If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider. "

Then we had this reading, from https://en.wikiquote.org/wiki/Antoine_de_Saint_Exup%C3%A9ry:

"Life has taught us that love does not consist in gazing at each other but in looking outward together in the same direction. There is no comradeship except through union in the same high effort. Even in our age of material well-being this must be so, else how should we explain the happiness we feel in sharing our last crust with others in the desert? No sociologist's textbook can prevail against this fact. Every pilot who has flown to the rescue of a comrade in distress knows that all joys are vain in comparison with this one. And this, it may be, is the reason why the world today is tumbling about our ears. It is precisely because this sort of fulfillment is promised each of us by his religion, that men are inflamed today. All of us, in words that contradict each other, express at bottom the same exalted impulse. What sets us against one another is not our aims — they all come to the same thing — but our methods, which are the fruit of our varied reasoning. "

Comment by kokotajlod on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2019-02-17T22:14:53.152Z · score: 2 (2 votes) · EA · GW

I agree that this was probably a factor that contributed to the accuracy gains of people who made more frequent forecasts. It may even have been doing most of the work; I'm not sure.

Comment by kokotajlod on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2019-02-16T14:17:36.854Z · score: 2 (2 votes) · EA · GW

The exact training module they used is probably not public, but they do have a training module on their website. It costs money though.

For sure, forecasters who devoted more effort to it tended to make more accurate predictions. It would be surprising if that wasn't true!

Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post

2019-02-15T19:14:41.459Z · score: 60 (23 votes)
Comment by kokotajlod on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-14T23:00:42.176Z · score: 1 (1 votes) · EA · GW

...because that AI research is useful for some other goal the AI has, such as maximizing paperclips. See the instrumental convergence thesis.

Yes, exactly.

"The argument for doom by default seems to rest on a default misunderstanding of human values as the programmer attempts to communicate them to the AI. If capability growth comes before a goal is granted, it seems less likely that misunderstanding will occur."

Eh, I could see arguments that it would be less likely and arguments that it would be more likely. Argument that it is less likely: We can use the capabilities to do something like "Do what we mean," allowing us to state our goals imprecisely & survive. Argument that it is more likely: If we mess up, we immediately have an unaligned superintelligence on our hands. At least if the goals come before the capability growth, there is a period where we might be able to contain it and test it, since it isn't capable of escaping or concealing its intentions.

Comment by kokotajlod on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-11T17:49:06.446Z · score: 11 (8 votes) · EA · GW

I think the big disanalogy between AI and the Industrial and Agricultural Revolutions is that there seems to be a serious chance that an AI accident will kill us all. (And moreover this isn't guaranteed; it's something we have leverage over, by doing safety research and influencing policy to discourage arms races and encourage more safety research.) I can't think of anything comparable for the IR or AR. Indeed, there are only two other risks on that scale in history: nuclear war and pandemics.

Comment by kokotajlod on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-11T17:35:08.613Z · score: 10 (7 votes) · EA · GW

Thanks for this talk/post--it's a good example of the sort of self-skepticism that I think we should encourage.

FWIW, I think it's a mistake to construe the classic model of AI accident catastrophe as capability gain first, then goal acquisition. I say this because (a) I never interpreted it that way when reading the classic texts, and (b) it doesn't really make sense--the original texts are very clear that the massive jump in AI capability is supposed to come from recursive self-improvement, i.e. the AI helping to do AI research. So already we have some sort of goal-directed behavior (bracketing CAIS/ToolAI objections!) leading up to and including the point of arrival at superintelligence.

I would construe the little sci-fi stories about putting goals into goal slots as not being a prediction about the architecture of AI but rather illustrations of completely different points about e.g. orthogonality of value or the dangers of unaligned superintelligences.

At any rate, though, what does it matter whether the goal is put in after the capability growth, or before/during? Obviously it matters in some ways, but it doesn't matter for the purpose of evaluating the priority of AI safety work, since in both cases the potential for accidental catastrophe exists.

Comment by kokotajlod on Which animals need the most help from the animal advocacy movement? · 2018-12-05T19:54:03.162Z · score: 3 (2 votes) · EA · GW

This research is very helpful, thanks! Two questions:

(1) Sometimes I wonder if brain size is relevant, not just to probability-of-feeling-pain but to "amount of pain felt" or something like that. So, for example, perhaps a 1kg brain feels 1000x more pain than a 1g brain, on average. Do you include this in your analysis? If not, would it change things much if you did--e.g. making cows much higher-priority?

(2) Your analysis is focused on the question of which animals should be prioritized in EA interventions. Does it also apply to the question of which animals are highest-priority to avoid eating? E.g. would it be better to be a reducetarian who eats beef but no other meats than a pescetarian?

Tiny Probabilities of Vast Utilities: Bibliography and Appendix

2018-11-20T17:34:02.854Z · score: 8 (5 votes)
Comment by kokotajlod on Tiny Probabilities of Vast Utilities: Concluding Arguments · 2018-11-20T16:05:46.038Z · score: 1 (1 votes) · EA · GW

Hmmm, good point: If we carve up the space of possibilities finely enough, then every possibility will have a too-low probability. So to make an "ignore small probabilities" solution work, we'd need to include some sort of rule for how to carve up the possibilities. And yeah, this seems like an unpromising way to go...

I think the best way to do it would be to say "We lump all possibilities together that have the same utility." The resulting profile of dots would be like a hollow bullet or funnel. If we combined that with an "ignore all possibilities below probability p" rule, it would work. It would still have problems, of course.
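To make that combined rule concrete, here is a minimal Python sketch (the function, the toy outcome list, and the cutoff value are all my own hypothetical illustrations, not anything from the posts):

    from collections import defaultdict

    def lumped_ev(possibilities, prob_floor):
        # Toy version of the combined rule: lump together possibilities that
        # share a utility, ignore any lump whose total probability falls below
        # prob_floor, then take the expected value of what remains.
        # `possibilities` is a list of (probability, utility) pairs.
        lumps = defaultdict(float)
        for prob, utility in possibilities:
            lumps[utility] += prob
        kept = {u: p for u, p in lumps.items() if p >= prob_floor}
        return sum(u * p for u, p in kept.items())

    # Hypothetical example: 2000 one-in-a-million routes to a utility of 10**9
    # survive the cutoff only because they get lumped together first
    # (total probability 0.002 >= 0.001).
    example = [(0.4, 1.0), (0.598, -1.0)] + [(1e-6, 1e9)] * 2000
    print(lumped_ev(example, prob_floor=1e-3))

The "hollow bullet or funnel" picture then corresponds to the lumped profile: one dot per utility level, carrying the total probability mass of that level.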

Comment by kokotajlod on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-15T22:20:08.355Z · score: 1 (1 votes) · EA · GW

I believe this concern is addressed by the next post in the series. The current examples implicitly only consider two possible outcomes: "No effect" and "You do blah blah blah and this saves precisely X lives..." The next post expands the model to include arbitrarily many possible outcomes of each action under consideration, and after doing so ends up reasoning in much the way you describe to defuse the initial worry.

Tiny Probabilities of Vast Utilities: Concluding Arguments

2018-11-15T21:47:58.941Z · score: 20 (11 votes)
Comment by kokotajlod on Tiny Probabilities of Vast Utilities: Solutions · 2018-11-15T20:55:31.744Z · score: 1 (1 votes) · EA · GW

Good point. I put in some links at the beginning and end, and I'll go through now and add the other links you suggest... I don't think the forum software allows me to link to a part of a post, but I can at least link to the post.

Comment by kokotajlod on Tiny Probabilities of Vast Utilities: Solutions · 2018-11-14T22:22:43.675Z · score: 2 (2 votes) · EA · GW

On solution #6: Yeah, it only works if the profiles really do cancel out. But I classified it as involving the decision rule because if your rule is simply to sum up the utility × probability of all the possibilities, then even if they are perfectly symmetric around 0, your sum will still be undefined.
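A toy illustration of that point (my own numbers, nothing from the post): suppose the possibilities come in mirrored pairs whose utilities grow exactly as fast as their probabilities shrink,

    u_n = \pm 2^{\,n}, \qquad p_n = c \, 2^{-n}
    \quad\Longrightarrow\quad p_n u_n = \pm c

with c a normalizing constant. Every term of the expected-value series then has the same magnitude c, so the partial sums never settle down; which "total" you get depends entirely on the order in which you add the possibilities up, even though the positive and negative outcomes mirror each other perfectly.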

Yep, solution #1 involves biting the bullet and rejecting regularity. It has problems, but maybe they are acceptable problems.

Solution #2 would be great if it works, but I don't think it will--I regret pushing that to the appendix, sorry!

Thanks again for all the comments, btw!

Tiny Probabilities of Vast Utilities: Solutions

2018-11-14T16:04:14.963Z · score: 18 (10 votes)
Comment by kokotajlod on Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem · 2018-11-12T15:04:07.710Z · score: 1 (1 votes) · EA · GW

"But the probability of those rare things will be super low. It's not obvious that they'll change the EV as much as nearer term impacts. ... All this theorizing might be unnecessary if our actual expectations follow a different pattern."

Yes, if the profiles are not funnel-shaped then this whole thing is moot. I argue that they are funnel-shaped, at least for many utility functions currently in use (e.g. utility functions that are linear in QALYs). I'm afraid my argument isn't up yet--it's in the appendix, sorry--but it will be up in a few days!

"Are we? Expected utility is still a thing. Some actions have greater expected utility than others even if the probability distribution has huge mass across both positive and negative possibilities. If infinite utility is a problem then it's already a problem regardless of any funnel or oscillating type distribution of outcomes."

If the profiles are funnel-shaped, expected utility is not a thing. The shape of your action profiles depends on your probability function and your utility function. Yes, infinitely valuable outcomes are a problem--but I'm arguing that even if you ignore them, there's still a big problem having to do with infinitely many possible finite outcomes. Moreover, even if you only consider finitely many outcomes of finite value, if the profiles are funnel-shaped then what you end up doing will be highly arbitrary, determined mostly by whatever is happening at the place where you happened to draw the cutoff.

"Another way of describing this phenomenon is that we are simply seizing the low hanging fruit, and hard intellectual progress isn't even needed."

That's what I'd like to think, and that's what I do think. But this argument challenges that; it says that the low-hanging fruit metaphor is inappropriate here. There is no lowest-hanging fruit or anything close; there is an infinite series of fruit hanging lower and lower, such that for any fruit you pick, if only you had thought about it a little longer you would have found an even lower-hanging fruit--one so much easier to pick that it would easily justify the cost in extra thinking time needed to identify it. Moreover, you never really "pick" these fruit: the fruit are gambles, not outcomes; they aren't actually what you want, they are just tickets that have some chance of getting what you want. And the lower the fruit, the lower the chance...

Comment by kokotajlod on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-10T19:55:18.153Z · score: 1 (1 votes) · EA · GW

Thanks! Yeah, sorry--I was thinking about putting it up all at once but decided against because that would make for a very long post. Maybe I should have anyway, so it's all in one place.

Well, I don't share your intuition, but I'd love to see it explored more. Maybe you can get an argument out of it. One way to start would be to try to find a class of at least 10^10^10^10 hypotheses that are at least as plausible as the Mugger's story.

Comment by kokotajlod on Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem · 2018-11-10T19:46:16.974Z · score: 4 (4 votes) · EA · GW

I'm not assuming it's symmetric. It probably isn't symmetric, in fact. Nevertheless, it's still true that the expected utility of every action is undefined, and that the partial sums will oscillate ever more wildly as we consider increasingly large sets of possible outcomes.

Yes, at any level of probability there should be a higher density of outcomes towards the center. That doesn't change the result, as far as I can tell. Imagine you are adding new possible outcomes to consideration, one by one. Most of the outcomes you add won't change the EV much. But occasionally you'll hit one that makes everything that came before look like a rounding error, and it might flip the sign of the EV. And this occasional occurrence will never cease; it'll always be true that if you keep considering more possibilities, the old possibilities will continue to be dwarfed and the sign will continue to flip. You can never rest easy and say "This is good enough"; there will always be more crucial considerations to uncover.
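Here is a minimal numerical sketch of that dynamic (all the numbers are made up purely for illustration): probabilities halve with each new outcome considered, but utilities grow tenfold and alternate in sign, so each new term is more than four times the combined size of everything considered before it.

    def partial_evs(n_outcomes):
        # Running expected-value estimates as ever-more-extreme outcomes are
        # added to consideration. Each new term has magnitude 5**n, more than
        # four times the combined magnitude of all earlier terms, so the
        # running total keeps flipping sign and growing. Illustrative only.
        totals = []
        running = 0.0
        for n in range(1, n_outcomes + 1):
            prob = 0.5 ** n          # probability keeps halving...
            utility = (-10.0) ** n   # ...but utility grows tenfold, alternating sign
            running += prob * utility
            totals.append(running)
        return totals

    # The partial sums swing between ever-larger positive and negative values,
    # so where you happen to stop determines the sign of your "EV".
    print(partial_evs(12))

On these toy numbers the last few partial sums climb from the millions into the hundreds of millions while alternating in sign: the "everything before looks like a rounding error" dynamic in miniature.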

So this is a problem in theory--it means we are approximating an ideal which is both stupid and incoherent--but is it a problem in practice?

Well, I'm going to argue in later posts in this series that it isn't. My argument is basically that there are a bunch of reasonably plausible ways to solve this theoretical problem without undermining long-termism.

That said, I don't think we should dismiss this problem lightly. One thing that troubles me is how superficially similar the failure mode I describe here is to the actual history of the EA movement: People say "Hey, let's actually do some expected value calculations" and they start off by finding better global poverty interventions, then they start doing this stuff with animals, then they start talking about the end of the world, then they start talking about evil robots... and some of them talk about simulations and alternate universes...

Arguably this behavior is the predictable result of considering more and more possibilities in your EV calculations, and it doesn't represent progress in any meaningful sense--it just means that EAs have gone farther down the funnel-shaped rabbit hole than everybody else. If we hang on long enough, we'll end up doing crazier and crazier things until we are diverting all our funds from x-risk prevention and betting them on some wild scheme to hack into an alternate dimension and create uncountably infinite hedonium.

Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem

2018-11-10T09:12:15.039Z · score: 21 (10 votes)
Comment by kokotajlod on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-10T09:01:53.603Z · score: 1 (1 votes) · EA · GW

Yup. Also, even if the decision-theoretic move works, it doesn't solve the more general problem. You'll just "mug yourself" by thinking up more and more ridiculous hypotheses and chasing after them.

Comment by kokotajlod on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-09T13:20:33.177Z · score: 3 (3 votes) · EA · GW

It's good to know lots of people have this intuition--I think I do too, though it's not super strong in me.

Arguably, when p is above the threshold you mention, we can make some sort of pseudo-law-of-large-numbers argument for expected utility maximization, like "If we all follow this policy, probably at least one of us will succeed." But when p is below the threshold, we can't make that argument.
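To put a rough number on that argument (my own toy figures, not anything from the original comments): if N people independently follow a policy whose individual chance of success is p, then

    \Pr(\text{at least one success}) = 1 - (1 - p)^{N}

With, say, p = 10^-3 and N = 10,000 followers this is about 1 - e^-10 ≈ 0.99995, so "probably at least one of us will succeed" goes through; with p = 10^-20, no remotely realistic N lifts it meaningfully above zero, which is roughly where the pseudo-law-of-large-numbers argument gives out.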

So the idea is: Reject expected utility maximization in general (perhaps for reasons which will be discussed in subsequent posts!), but accept some sort of "If following a policy seems like it will probably work, then do it" principle, and use that to derive expected utility maximization in ordinary cases.

All of this needs to be made more precise and explored in more detail. I'd love to see someone do that.

(BTW, upcoming posts remove the binary-outcomes assumption. Perhaps it was a mistake to post them in sequence instead of all at once...)

Comment by kokotajlod on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-08T14:52:53.447Z · score: 2 (2 votes) · EA · GW

Thanks for the comments--yeah, future posts are going to discuss these topics, though not necessarily using this terminology. In general I'm engaging at a rather foundational level: why is Knightian uncertainty importantly different from risk, and why is the distinction Yudkowsky mentions at the end a good and legitimate distinction to make?

Tiny Probabilities of Vast Utilities: A Problem for Long-Termism?

2018-11-08T10:09:59.111Z · score: 20 (13 votes)
Comment by kokotajlod on How to use the Forum · 2018-11-08T00:45:36.494Z · score: 2 (2 votes) · EA · GW

Sometimes I'd like to write on a topic that is only tangentially related to EA--for example, the simulation argument. Is this a good place to do that, or should I save that sort of thing for a different forum?

Comment by kokotajlod on Prioritization Consequences of "Formally Stating the AI Alignment Problem" · 2018-02-26T01:25:56.621Z · score: 0 (0 votes) · EA · GW

I'm skeptical of your specific views on qualia, etc. (but I haven't read your arguments yet, so I withhold judgment).

Despite that skepticism, this seems like a promising area to explore at least.

I agree with your #5.

Comment by kokotajlod on Ongoing lawsuit naming "future generations" as plaintiffs; advice sought for how to investigate · 2018-01-25T03:44:32.314Z · score: 0 (0 votes) · EA · GW

OK, thanks! This is very helpful, I'm reading through the article you cite now.

Comment by kokotajlod on Ongoing lawsuit naming "future generations" as plaintiffs; advice sought for how to investigate · 2018-01-24T15:48:21.938Z · score: 0 (0 votes) · EA · GW

I should clarify: I'm not only looking for help from lawyers. Any advice or ideas would be appreciated.

Ongoing lawsuit naming "future generations" as plaintiffs; advice sought for how to investigate

2018-01-23T22:22:08.173Z · score: 8 (8 votes)
Comment by kokotajlod on An Exploration of Sexual Violence Reduction for Effective Altruism Potential · 2017-11-12T20:17:09.743Z · score: 6 (6 votes) · EA · GW

Thanks for writing this, Kathy! You pointed out some things which I hadn't really internalized yet, plus some statistics that I found surprising. I hope this sparks a good conversation.

As I see it, the case is basically:

(1) Rape & other forms of sexual violence, harassment, etc. are common enough that we should expect them to be significantly hurting the effectiveness of the EA movement.

(2) Insofar as we think EA movement building is important (and it is), reducing sexual violence etc. in the EA movement in particular is also important. (from 1)

(3) Is it neglected? Yes; the EA movement hasn't done much so far to deal with this. You are the first to seriously research it and write it up, for example.

(4) Is it tractable? Yes; lots of effort has been put towards reducing workplace sexual violence in the wider world; presumably some best practices have been found somewhere, and with careful research we can identify and implement them. (You give many examples of possible interventions with research behind them.)

(5) So we should do it. (from 2, 3, 4)

I'd be interested to see more work in particular on (4). What are some examples of communities that made significant progress in reducing the prevalence of sexual violence? Ideally we'd find examples of communities that were similar to us and made progress, and then do what they did.

Comment by kokotajlod on Why I think the Foundational Research Institute should rethink its approach · 2017-07-22T21:47:29.152Z · score: 0 (0 votes) · EA · GW

SoerenMind: It's wayyy more than just functionalism/physicalism plus moral anti-realism. There are tons of people who hold both views, and only a tiny fraction of them are negative utilitarians or anything close. In fact I'd bet it's somewhat unusual for any sort of moral anti-realist to be any sort of utilitarian.

Comment by kokotajlod on Why I think the Foundational Research Institute should rethink its approach · 2017-07-22T21:44:44.147Z · score: 2 (2 votes) · EA · GW

Interesting. I'm a moral anti-realist who also focuses on suffering, but not to the extent that you do (e.g. not worrying that much about suffering at the level of fundamental physics.) I would have predicted that theoretical arguments were what convinced you to care about fundamental physics suffering, not any sort of visceral feeling.

Comment by kokotajlod on Four quantitative models, aggregation, and final decision - Oxford Prioritisation Project · 2017-05-22T16:40:54.433Z · score: 5 (5 votes) · EA · GW

That second quote in particular seems to be a good example of what some might call measurability bias. Understandable, of course--it's hard to give out a prize on the basis of raw hunches--but nevertheless we should work towards finding ways to avoid it.

Kudos to OPP for being so transparent in their thought process though!

Comment by kokotajlod on [deleted post] 2017-04-30T15:07:14.119Z

Thanks for this! Even within EA I think there's a need for more brainstorming of different cause areas, and you've presented a well-researched case for this one. I am tentatively convinced!

What do you think is the best counterargument? That is, what's the best reason to think that maybe this isn't as tractable/neglected/important as you think?

I think the biggest concern (for me) is whether or not the research on the matter is solid. Does physical punishment cause worse outcomes, or does it merely correlate with them? Etc. This is important both for determining how serious the problem is and for determining how tractable it is (because without research to back up our claims, it will be hard to convince anyone to change). I haven't looked into it myself, of course, but I'm glad you have.

Comment by kokotajlod on Alice and Bob on big-picture worldviews (Oxford Prioritisation Project) · 2017-03-25T16:50:54.677Z · score: 0 (0 votes) · EA · GW

Sure, sorry for the delay.

The ways that I envision suffering potentially happening in the future are these:

--People deciding that obeying the law and respecting the sovereignty of other nations is more important than preventing the suffering of people inside them

--People deciding that doing scientific research (simulations are an example of this) is well worth the suffering of the people and animals experimented on

--People deciding that the insults and microaggressions that affect some groups are not as bad as the inefficiencies that come from preventing them

--People deciding that it's better to have a few lives without suffering than many many many lives with suffering (even when the many lives are all still, all things considered, good)

--People deciding that AI systems should be designed in ways that make them suffer in their daily jobs, because it's most efficient that way.

Utilitarianism comes down pretty strongly in favor of these decisions, at least in many cases. My guess is that in post-scarcity conditions, ordinary people will be more inclined to resist these decisions than utilitarians will. The big exception is the sovereignty thing; in those cases I think utilitarians would produce less suffering than average humans would. But those cases will only happen for a decade or so and will be relatively small-scale.

Comment by kokotajlod on Alice and Bob on big-picture worldviews (Oxford Prioritisation Project) · 2017-03-21T20:53:51.230Z · score: 1 (1 votes) · EA · GW

And I think normal humans, if given command of the future, would make even less suffering than classical utilitarians.

Comment by kokotajlod on Alice and Bob on big-picture worldviews (Oxford Prioritisation Project) · 2017-03-21T20:52:15.531Z · score: 3 (3 votes) · EA · GW

"Bob: agree, to make lots of suffering, it needs pretty human-like utility functions that lead to simulations or making many sentient beings."

I'm pretty sure this is false. Superintelligent singletons that don't specifically disvalue suffering will make lots of it (relative to the current amount, i.e. one planetful) in pursuit of other ends. (They'll make ancestor simulations, for example, for a variety of reasons.) The amount of suffering they'll make will be far less than the theoretical maximum, but far more than what e.g. classical utilitarians would do.

If you disagree, I'd love to hear that you do--because I'm thinking about writing a paper on this anyway, it will help to know that people are interested in the topic.

Anyone have thoughts/response to this critique of Effective Animal Altruism?

2016-12-25T21:14:39.612Z · score: 3 (7 votes)
Comment by kokotajlod on 2016 AI Risk Literature Review and Charity Comparison · 2016-12-14T20:12:18.891Z · score: 4 (4 votes) · EA · GW

This is great. Please do it again next year.

Comment by kokotajlod on Should effective altruism have a norm against donating to employers? · 2016-11-30T01:40:52.525Z · score: 2 (4 votes) · EA · GW

I agree with Owen. I don't have anything to add to what's been said, other than a response to the strongest reason against having that norm: It only conflicts with the norm of "do what's most effective" if it truly is more effective to donate to one's own employer. But because of the signaling/weirdness reasons (and, yes, the bias) that doesn't seem to be true. We're sophisticated enough that we can have a hierarchy of norms, with "do what's most effective" at the top and "don't donate to your employer unless there's a special circumstance" as a lower norm--as a helpful heuristic/guideline.

How much money is saved from taxes by foregoing salary? If it's at least 20% of the donation then I might change my mind.

Comment by kokotajlod on EA != minimize suffering · 2016-07-22T19:06:57.104Z · score: 0 (0 votes) · EA · GW

I completely agree with you about all the flaws and biases in our moral intuitions. And I agree that when people bite the bullet, they've usually thought about the situation more carefully than people who just go with their intuition. I'm not saying people should just go with their intuition.

I'm saying that we don't have to choose between going with our initial intuitions and biting the bullet. We can keep looking for a better, more nuanced theory, which is free from bias and yet which also doesn't lead us to make dangerous simplifications and generalizations. The main thing that holds us back from this is an irrational bias in favor of simple, elegant theories. It works in physics, but we have reason to believe it won't work in ethics. (Caveat: for people who are hardcore moral realists, not just naturalists but the kind of people who think that there are extra, ontologically special moral facts--this bias is not irrational.)

Comment by kokotajlod on EA != minimize suffering · 2016-07-19T23:48:48.947Z · score: 1 (1 votes) · EA · GW

I second this! I'm one of the many people who think that maximizing happiness would be terrible. (I mean, there would be worse things you could do, but compared to what a normal, decent person would do, it's terrible.)

The reason is simple: when you maximize something, by definition that means being willing to sacrifice everything else for the sake of that thing. Depending on the situation you are in, you might not need to sacrifice anything else; in fact, depending on the situation, maximizing that one thing might lead to lots of other things as a bonus--but in principle, if you are maximizing something, then you are willing to sacrifice everything else for the sake of it. Justice. Beauty. Fairness. Equality. Friendship. Art. Wisdom. Knowledge. Adventure. The list goes on and on. If maximizing happiness required sacrificing all of those things, such that the world contained none of them, would you still think it was the right thing to do? I hope not.

(Moreover, based on the laws of physics as we currently understand them, maximizing happiness WILL require us to sacrifice all of the things mentioned above, except possibly Wisdom and Knowledge, and even they will be concentrated in one being or kind of being.)

This is a problem with utilitarianism, not EA, but EA is currently dominated by utilitarians.

Comment by kokotajlod on EA != minimize suffering · 2016-07-19T23:33:17.637Z · score: 0 (0 votes) · EA · GW

I've struggled with similar concerns. I think the things EAs push for are great, but I do think that we are more ideologically homogeneous than we should ideally be. My hope is that as more people join, it will become more "big tent" and useful to a wider range of people. (Some of it is already useful for a wide range of people, like the career advice.)

Comment by kokotajlod on EA != minimize suffering · 2016-07-19T23:25:05.702Z · score: 2 (2 votes) · EA · GW

I agree that it's dangerous to generalize from fictional evidence, BUT I think it's important not to fall into the opposite extreme, which I will now explain...

Some people, usually philosophers or scientists, invent or find a simple, neat collection of principles that seems to more or less capture/explain all of our intuitive judgments about morality. They triumphantly declare "This is what morality is!" and go on to promote it. Then, they realize that there are some edge cases where their principles endorse something intuitively abhorrent, or prohibit something intuitively good. Usually these edge cases are described via science-fiction (or perhaps normal fiction).

The danger, which I think is the opposite danger to the one you identified, is that people "bite the bullet" and say "I'm sticking with my principles. I guess what seems abhorrent isn't abhorrent after all; I guess what seems good isn't good after all."

In my mind, this is almost always a mistake. In situations like this, we should revise or extend our principles to accommodate the new evidence, so to speak. Even if this makes our total set of principles more complicated.

In science, simpler theories are believed to be better. Fine. But why should that be true in ethics? Maybe if you believe that the Laws of Morality are inscribed in the heavens somewhere, then it makes sense to think they are more likely to be simple. But if you think that morality is the way it is as a result of biology and culture, then it's almost certainly not simple enough to fit on a t-shirt.

A final, separate point: Generalizing from fictional evidence is different from using fictional evidence to reject a generalization. The former makes you subject to various biases and vulnerable to propaganda, whereas the latter is precisely the opposite. Generalizations often seem plausible only because of biases and propaganda that prevent us from noticing the cases in which they don't hold. Sometimes it takes a powerful piece of fiction to call our attention to such a case.

[Edit: Oh, and if you look at what the OP was doing with the Giver example, it wasn't generalizing based on fictional evidence, it was rejecting a generalization.]

Comment by kokotajlod on Why Poverty? · 2016-04-26T14:41:16.365Z · score: 2 (2 votes) · EA · GW

"the resulting world will be a global (2) melting pot ruled by suffering-maximizing Shariah law."

This seems extremely implausible to me. Historically, assimilation and globalization have been the norm. Also, Shariah isn't even implemented in many Islamic countries; why would it be implemented in, e.g., 2050 Britain?

"That's a worse existential risk than pandemics or climate change; in fact it would be worse than human extinction."

Hell no! Standards of living even in Saudi Arabia are probably better than they've been in most places for most of human history, and things are only going to get better.

On a more abstract level: It really seems like you are exaggerating the danger here. Since the danger you describe is a particular cultural/religious group, that exaggeration is especially insensitive & harmful.

You might say "I agree that the odds of this nightmare scenario happening are very small, but because the scenario is so bad, I think we should still be concerned about it." I think that when we start considering odds <1% of sweeping cultural change, then we ought to worry about all sorts of other contenders in that category too. Communism could revive. A new, fiery religion could appear. World War Three could happen. So many things which would be worse, and more likely, then the scenario you are considering.

Comment by kokotajlod on Expected Value Estimates You Can (Maybe) Take Literally · 2016-04-17T04:40:46.768Z · score: 0 (0 votes) · EA · GW

I for one would DEFINITELY use a quantitative model like this one. If you need incentive to think more and develop a more sophisticated model and then explain and justify it in a new post... well, I'd love it if you did that.

Comment by kokotajlod on Why do effective altruists support the causes we do? · 2015-12-31T16:29:11.419Z · score: 3 (3 votes) · EA · GW

I found this very illuminating, thanks!

Nitpick: You say "There are altruistic activities which fall outside this grouping – for example, working to improve biodiversity for its own sake. But these don’t improve anyone’s well-being, and so fall outside the scope of effective altruism," but I thought EA was defined more broadly than that. My understanding of EA is that if you really think that e.g. preserving biodiversity for its own sake is more worthwhile than the other causes, then that's fine: EA can help you find the most effective way to do that. I would have said that the reason non-well-being-improving causes aren't part of the EA Big Three is that very few people think those causes are more urgent than well-being-improving causes in the first place. Thoughts?

Comment by kokotajlod on Moral anti-realists don't have to bite bullets · 2015-12-30T21:53:28.972Z · score: 1 (1 votes) · EA · GW

If values are chosen, not discovered, then how is the choice of values made?

Do you think the choice of values is made, even partially, even implicitly, in a way that involves something that fits the loose definition of a value--like "I want my values to be elegant when described in English" or "I want my values to match my pre-theoretic intuitions about the kinds of cases that I am likely to encounter"? Or do you think that the choice of values is made in some other way?

I too think that values are chosen, but I think that the choice involves implicit appeal to "deeper" values. These deeper values are not themselves chosen, on pain of infinite regress. And I think the case can be made that these deeper values are complex, at least for most people.

Comment by kokotajlod on Permanent Societal Improvements · 2015-12-14T18:51:45.601Z · score: 0 (0 votes) · EA · GW

Yep. Any ideas about what such an additional dimension might be? (There are of course the "normal" other dimensions, like average well-being, that are included in the calculation of utilons.)

Comment by kokotajlod on Donate Your Christmas to GiveWell Charities! · 2015-12-11T15:38:29.755Z · score: 3 (3 votes) · EA · GW

I have a gift wish list and I've put donations to EA charities on it. I think this is in general a great idea, though I can see why many people might be uncomfortable with it:

Gift-giving is a sacred ritual for some people, something we do to bond as family/friends, and a little "treat yourself" moment that happens once or twice a year. There are good psychological reasons behind it, in other words, and it is not clear that giving money to charity accomplishes all the same goals. The spectre of the "altruist who is so committed that they don't have a life anymore" looms.

I think the response to this is to acknowledge the truth behind it, but then point out that we are a very long way from that extreme "don't have a life anymore" situation. The status quo is currently zero charitable donations on the holidays; surely it won't hurt much to change that a bit. Indeed, by giving something to charity at the same time that we bond and treat ourselves, we might actually improve the bonding and the treating.

Comment by kokotajlod on My Cause Selection: Thomas Mather · 2015-08-31T02:10:47.584Z · score: 2 (2 votes) · EA · GW

This is a great list, thanks! The software patent reform idea was surprising to me, but in a good way.

You say a lot about these four causes; what about the rest? You've said a bit (though not in so many words) about why you don't go in for x-risk reduction (you want to make a difference in the next few decades), but what about e.g. immigration system reform, justice system reform, and pandemic prevention?