Am I an Effective Altruist for moral reasons? 2016-02-10T18:17:53.953Z
My Coming of Age as an EA: 12 Problems with Effective Altruism 2015-11-28T09:00:06.749Z
Moral Economics - What, Why, Whom, How, When, What For? 2015-08-11T02:27:24.080Z
Direct Funding Between EAs - Moral Economics 2015-07-28T01:07:53.100Z
Moving Moral Economics Forward 2015-07-21T03:26:41.814Z
Moral Economics Concepts 2015-07-17T23:55:12.454Z
Introducing Moral Economics 2015-07-14T07:13:09.254Z
What Got Us Here Won’t Get Us There: Failure Modes on the Way to Global Cooperation 2015-06-12T19:36:19.398Z
We are living in a suboptimal blogosphere 2015-06-08T16:41:51.213Z
[Discussion] What have you found great value in not doing? 2015-06-08T15:42:34.113Z
Questions about Effective Altruism in academia (RAQ) 2015-05-27T17:06:27.056Z
Effective Altruism as an intensional movement 2015-05-25T21:49:19.888Z
Should you give your best now or later? 2015-05-12T02:55:43.734Z
Open Thread 3 2014-10-15T17:04:02.457Z
Types of planners and plans for EAs 2014-10-10T19:22:48.411Z
Your Good Deeds 2014 Thread 2014-09-30T03:25:17.885Z
On Media and Effective Altruism 2014-09-27T20:43:31.643Z


Comment by Diego_Caleiro on Accountability buddies: a proposed system · 2018-07-27T04:32:34.612Z · EA · GW

This is a little old, but it's a similar concept with a far higher level of investment:

Comment by Diego_Caleiro on EA Hotel with free accommodation and board for two years · 2018-06-18T23:10:55.012Z · EA · GW

I haven't read the whole thing. But this seems to be one of the coolest ideas in EA in 2018, if not the coolest. Glad you did it.

Good luck to everyone who goes to live or work there!

Comment by Diego_Caleiro on Should you give your best now or later? · 2018-02-05T01:49:30.295Z · EA · GW

It has been about 3 years, and only very specific talent still matters for EA now. Earning to Give to institutions is gone; only giving to individuals still makes sense.

It is possible that there will be full-scale replaceability of non-researchers in EA-related fields by 2020.

But only if, until then, we keep doing things!

Comment by Diego_Caleiro on Cognitive Science/Psychology As a Neglected Approach to AI Safety · 2017-06-08T20:01:02.756Z · EA · GW

Kaj, I tend to promote your stuff a fair amount to end the inferential silence, and it goes without saying that I agree with all else you said.

Don't give up on your ideas or approach. I am dispirited that there are so few people thinking like you do out there.

Comment by Diego_Caleiro on Should you give your best now or later? · 2017-03-03T06:22:48.540Z · EA · GW

It's been less than two years and all the gaps have either been closed or been kept open on purpose, which Ben Hoffman has been staunchly criticising.

But anyway, it has been less than 2 years and Open Phil has way more money than it knows what to do with.


Comment by Diego_Caleiro on Use "care" with care. · 2017-02-10T05:39:43.177Z · EA · GW

Amanda Askell has interesting thoughts suggestive of giving "care" a counterfactual meaning. She suggests we think of care as what you would have cared about if you were in a context where this was a thing you could potentially change. In a way, the distinction is between people who think about "care" in terms of rank ("oh, that isn't the thing I most care about") and those who think in terms of absolutes ("oh, I think the moral value of this is positive"), further complicated by the fact that some people are thinking of the expected value of the action while others are thinking of the absolute value of the object the action affects.

Semantically, if we think it is a good idea to "expand our circle of care" we should probably adopt "care" to mean the counterfactual meaning, as that broadens the scope of things we can truthfully claim to care about.

Comment by Diego_Caleiro on If tech progress might be bad, what should we tell people about it? · 2016-02-24T18:34:06.917Z · EA · GW

Also related, on facebook:

Comment by Diego_Caleiro on Am I an Effective Altruist for moral reasons? · 2016-02-17T22:20:06.357Z · EA · GW

They need not imply it, but I would like a framework where they do under ideal circumstances. In that framework - which I paraphrase from Lewis - if I know a certain moral fact, e.g., that something is one of my fundamental values, then I will value it (this wouldn't obtain if you were a hypocrite, in which case it wouldn't be knowledge).

I should X = A/The moral function connects my potential actions to set X. I think I should X = The convolution of the moral function and my prudential function take my potential actions to set X.

I'm unsure I got your notation. Does =/= mean "different"? Yes. What is the meaning of "/" in "A/The…"? The same as in person/persons; it means either.

In what sense do you mean psychopathy? I can see ways in which I would agree with you, and ways in which not.

I mean failure to exercise moral reasoning. You would be right about what you value, you would desire as you desire to desire, have all the relevant beliefs right, have no conflicting desires or values, but you would not act to serve your desires according to your beliefs. In your instance things would be more complicated given that it involves knowing a negation. Perhaps we can go about it like this. You would be right that maximizing welfare is not your fundamental value, you would have the motivation to stop solely desiring to desire welfare, you would cease to desire welfare, there would be no other desire inducing a desire for welfare, there would be no other value inducing a desire for welfare, but you would fail to pursue what serves your desires. This fits well with the empirical fact that psychopaths have low IQ and low levels of achievement. Personally, I would bet your problem is more about allowing yourself moral akrasia with the excuse of moral uncertainty.

I don't think you carved reality at the joints here, so let me do the heavy lifting: the distinction between our paradigms seems to be that I am using weightings for values and you are using binaries. Either you deem something a moral value of mine or not. I, however, think I have 100% of my future actions left to do: how do I allocate my future resources towards what I value? Part of it will be dedicated to moral goods, and other parts won't. So I do think I have moral values which I'll pay a high opportunity cost for; I just don't find them to take a load as large as the personal values, which happen to include actually implementing some sort of Max(Worldwide Welfare) up to a Brownian distance from what is maximally good. My point, overall, is that moral uncertainty is only part of the problem. The big problem is amoral uncertainty, which contains moral uncertainty as a subset.

Why just minds? What determines the moral circle? Why does the core need to be excluded from morality? I claim these are worthwhile questions.

Just minds because most of the value seems to lie in mental states; the core is excluded from morality by definition of morality. My immediate one-second self, when thinking only about itself having an experience, simply is not a participant in the moral debate. There needs to be some possibility of reflection or debate for there to be morality; it's a minimum complexity requirement (which, by the way, makes my Complexity value seem more reasonable).

If this is true, maximizing welfare cannot be the fundamental value because there is not anything that can and is epistemically accessible.

Approximate maximization under a penalty of distance from the maximally best outcome, and let your other values drift within that constraint/attractor.

Do you just mean VNM axioms? It seems to me that at least token commensurability certainly obtains. Type commensurability quite likely obtains. The problem is that people want the commensurability ratio to be linear on measure, which I see no justification for.

It is certainly true of VNM; I think it is true of a lot more of what we mean by rationality. Not sure I understood your token/type distinction, but it seems to me that token commensurability can only obtain if there is only one type. It does not matter if it is linear, exponential or whatever: if there is a common measure, it would mean this measure is the fundamental value. It might also be that the function is not continuous, which would mean rationality has a few black spots (or that value monism has, which I claim are the same thing).

I was referring to the trivial case where the states of the world are actually better or worse in the way they are (token identity), and where another world, if it has the same properties this one has (type identity), would also have the same moral rankings.

About black spots in value monism, it seems that dealing with infinities leads to paradoxes. I'm unaware of what else would be in this class.

I know a lot of reasonable philosophers who are not utilitarians; most of them are not mainstream utilitarians. I also believe the far future (e.g. Nick Beckstead) or future generations (e.g. Samuel Scheffler) is a more general concern than welfare monism, and that many utilitarians do not share this concern (I certainly know a few). I believe if you are more certain about the value of the future than about welfare being the single value, you ought to expand your horizons beyond utilitarianism. It would be hard to provide another Williams regarding convincingness, but you will find an abundance of all sorts of reasonable non-utilitarian proposals. I already mentioned Jonathan Dancy (and, e.g., my Nozick's Cube), value pluralism, and so on. Obviously, it is not advisable to let these matters depend on being pointed.

My understanding is that by valuing complexity and identity in addition to happiness I already am professing to be a moral pluralist. It also seems that I have boundary condition shadows, where the moral value of extremely small values of these things are undefined, in the same way that a color is undefined without tone, saturation and hue.

Comment by Diego_Caleiro on Am I an Effective Altruist for moral reasons? · 2016-02-16T23:38:51.664Z · EA · GW

I find the idea that there are valid reasons to act that are not moral reasons weird; I think some folks call them prudential reasons. It seems that your reason to be an EA is a moral reason if utilitarianism is right, and "just a reason" if it isn't. But if not, what is your reason for doing it?

My understanding of prudential reasons is that they are reasons of the same class as those I have to want to live when someone points a gun at me. They are reasons that relate me to my own preferences and survival, not as a recipient of the utilitarian good, but as the thing that I want. They are more like my desire for a back massage than like my desire for a better world. A function from my actions to my reasons to act would be partially a moral function, partially a prudential function.

If you are not acting like you think you should after having complete information and moral knowledge, perfect motivation and reasoning capacity, then it does not seem like you are acting on prudential reasons, it seems you are being unreasonable.

Appearances deceive here because "that I should X" does not imply "that I think I should X". I agree that if both I should X and I think I should X, then by doing Y=/=X I'm just being unreasonable. But I deny that mere knowledge that I should X implies that I think I should X. I translate:

I should X = A/The moral function connects my potential actions to set X. I think I should X = The convolution of the moral function and my prudential function take my potential actions to set X.

In your desert scenario, I think I should (convolution) defend myself, though I know I should (morality) not.
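Written out as a sketch (the notation is my own shorthand, nothing standard: M is the moral function, P the prudential function, a my potential actions, and ∗ the convolution):

```latex
% M = the moral function, P = my prudential function,
% a = my potential actions, X = a set of actions
\text{I should } X \iff M(a) \in X
\qquad
\text{I think I should } X \iff (M \ast P)(a) \in X
% Desert scenario: (M \ast P)(a) \in \{\text{defend myself}\},
% yet M(a) \notin \{\text{defend myself}\}.
```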

Hence, even if impersonal reasons are all the moral reasons there are, insofar as there are impersonal reasons for people to have personal reasons these latter are moral reasons.

We are in disagreement. My understanding is that the four quadrants can be empty or full. There can be impersonal reasons for personal reasons, personal reasons for impersonal reasons, impersonal reasons for impersonal reasons, and personal reasons for personal reasons. Of course not all people will share personal reasons, and depending on which moral theory is correct, there may well be distinctions in impersonal reasons as well.

Being an EA while fully knowing maximizing welfare is not the right thing to do seems like an instance of psychopathy (in the odd case EA is only about maximizing welfare). Of course, besides these two pathologies, you might have some form of cognitive dissonance or other accidental failures.

In what sense do you mean psychopathy? I can see ways in which I would agree with you, and ways in which not.

Perhaps you are not really that sure maximizing welfare is not the right thing to do.

Most of my probability mass is that maximizing welfare is not the right thing to do, but maximizing a combination of identity, complexity and welfare is.

I prefer this solution of sophisticating the way moral reasons behave than to claim that there are valid reasons to act that are not moral reasons; the latter looks, even more than the former, like shielding the system of morality from the real world. If there are objective moral truths, they better have something to do with what people want to want to do upon reflection.

One possibility is that morality is a function from person time slices to a set of person time slices, and the size to which you expand your moral circle is not determined a priori. This would entail that my reasons to act morally when considering only time slices that have 60%+ personal identity with me would look a lot like prudential reasons, whereas my reasons to act morally accounting for all time slices of minds in this quantum branch and its descendants would be very distinct. The root theory would be this function.

The right thing to do will always be an open question, and all moral reasoning can do is recommend certain actions over others, never to require. If there is more than one fundamental value, or if this one fundamental value is epistemically inaccessible, I see no other way out besides this solution.

Seems plausible to me.

Incommensurable fundamental values are incompatible with pure rationality in its classical form.

Do you just mean VNM axioms? It seems to me that at least token commensurability certainly obtains. Type commensurability quite likely obtains. The problem is that people want the commensurability ratio to be linear on measure, which I see no justification for.

It seems to me Williams made his point; or the point I wished him to make to you. You are saying “if this is morality, I reject it”. Good. Let’s look for one you can accept.

I would look for one I can accept if I were given sufficient (convoluted) reasons to do so. At the moment it seems to me that all reasonable people are either some type of utilitarian in practice, or are called Bernard Williams. Until I get pointed thrice to another piece that may overwhelm the sentiment I was left with, I see no reason to enter exploration stage. For the time being, the EA in me is at peace.

Comment by Diego_Caleiro on Am I an Effective Altruist for moral reasons? · 2016-02-13T01:15:43.872Z · EA · GW


Comment by Diego_Caleiro on Am I an Effective Altruist for moral reasons? · 2016-02-13T01:07:38.168Z · EA · GW

I really appreciate your point about intersubjective tractability. It raises the question of how much we should let empirical and practical considerations spill into our moral preferences (ought implies can, for example; does it also imply "in a not extremely hard to coordinate way"?)

At large I'd say that you are talking about how to be an agenty moral agent. I'm not sure morality requires being agenty, but it certainly benefits from it.

Bias dedication intensity: I meant something orthogonal to optimality. Dedicating yourself only to moral preferences, but more to some that actually don't have that great a standing, and less to others which normally do the heavy lifting (don't you love it when philosophers talk about this "heavy lifting"?). So doing it non-optimally.

Comment by Diego_Caleiro on Accomplishments Open Thread · 2016-02-13T00:16:56.725Z · EA · GW

Suggestion: Let people talk about any accomplishments, without special emphasis on the month level, or the name of the month.

Some of the moments when people most need to brag are when they need to recover a sense of identity with a self that is more than a month old, that did awesome stuff.

Example: Once upon a time 12 years ago I thought the most good I could do was fixing aging, so I found Aubrey, worked for them for a bit, and won a prize!

A thing I'm proud of is that a few days ago I gave an impromptu speech at Sproul Hall (where free speech started) at Berkeley, about technological improvement and EA, and several people came up afterwards to thank me for it.

Comment by Diego_Caleiro on Am I an Effective Altruist for moral reasons? · 2016-02-11T23:56:37.043Z · EA · GW

I frequently use surnames, but in this case since it was a call to action of sorts, first names seemed more appropriate. Thanks for the feedback though, makes sense.

Comment by Diego_Caleiro on Am I an Effective Altruist for moral reasons? · 2016-02-11T17:54:26.541Z · EA · GW

Agreed with the first 2 paragraphs.

Activities that are more moral than EA for me: at the moment I think working directly on assembling and conveying knowledge in philosophy and psychology to the AI safety community has higher expected value. I'm taking the human-compatible AI course at Berkeley with Stuart Russell, and I hang out at MIRI a lot, so in theory I'm in a good position to do that research, and some of the time I work on it. But I don't work on it all the time; I would if I got funding for our proposal.

But actually I was referring to a counterfactual world where EA activities are less aligned with what I see as morally right than this world. There's a dimension, call it "skepticism about utilitarianism" that reading Bernard Williams made me move along. If I moved more and more along that dimension, I'd still do EA activities, that's all.

Your expectation is partially correct: I assign 3% to EA activities being morally required of everyone, and I feel personally more required to do them than that (25%, because this is the dream time, I was lucky, I'm in a high-leverage position, etc.), but although I think it is right for me to do them, I don't do them because it's right, and that's my overall point.

Comment by Diego_Caleiro on Am I an Effective Altruist for moral reasons? · 2016-02-11T17:37:36.797Z · EA · GW

Telofy: Trying to figure out the direction of the inferential gap here. Let me try to explain; I don't promise to succeed.

Aggregative consequentialist utilitarianism holds that people in general should value most minds having the times of their lives, where "in general" here actually translates into a "should" operator. A moral operator. There's a distinction between me wanting X and morality suggesting, requiring, or demanding X. Even if X is the same, different things can hold a relation to it.

At the moment I hold both a personal preference relation to you having a great time and a moral one. But if the moral one were dropped (as Williams makes me drop several of my moral reasons), I'd still have the personal one, and it supersedes the moral considerations that could arise otherwise.

Moral Uncertainty: To confess, that was my bad, not disentangling uncertainty about my preferences that happen to be moral, my preferences that happen to coincide with preferences that are moral, and the preferences that morality would, say, require me to have. That was bad philosophy on my part, and I can see Lewis, Chalmers and Muehlhauser blushing at my failure.

I meant the uncertainty I have, as an empirical subject, in determining which of the reasons for action I find are moral reasons or not, and, within those, which belong to which moral perspective. For instance, I assign a high credence that breaking a promise is bad from a Kantian standpoint, times a low credence that Kant was right about what is right. So not breaking a promise has a few votes in my parliament, but not nearly as many as giving a speech about EA at UC Berkeley has, because I'm confident that a virtuous person would do that, and I'm somewhat confident it is good from a utilitarian standpoint too, so lots of votes.
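That parliamentary vote arithmetic can be sketched roughly as follows; the seat count and every credence below are invented for illustration, not my considered estimates:

```python
# Toy sketch of the "moral parliament" vote-weighting described above.
# All numbers are hypothetical illustration values.

def votes(credence_theory_is_right, credence_action_is_good, seats=100):
    """Votes an action receives from one theory's delegation in the parliament."""
    return seats * credence_theory_is_right * credence_action_is_good

# Not breaking a promise: high credence it matters on a Kantian view (0.90),
# low credence that Kantianism is right (0.10).
promise_votes = votes(0.10, 0.90)

# Giving an EA speech at Berkeley: votes from virtue ethics plus
# votes from utilitarianism.
speech_votes = votes(0.30, 0.95) + votes(0.40, 0.70)

print(round(promise_votes, 1))  # 9.0
print(round(speech_votes, 1))   # 56.5
```

The point is only that an action backed moderately by several theories can out-vote one backed strongly by a single theory you doubt.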

I disagree that optimally satisfying your moral preferences equals doing what is moral. For one thing, you are not aware of all the moral preferences that, on reflection, you would agree with; for another, you could bias your dedication intensity in a way that, even though you are acting on moral preferences, the outcome is not what is moral all things considered. Furthermore, it is not obvious to me that a human is necessarily compelled to have all the moral preferences that are "given" to them. You could flat out reject 3 preferences, act on all the others, and, in virtue of your moral gap, you would not be doing what is moral, even though you are satisfying all the preferences in your moral preference class.

Nino: I'm not sure where I stand on moral realism (leaning against, but weakly). The non-moral-realist part of me replies:

wouldn't {move the universe into a state I prefer} and {act morally} necessarily be the same because you could define your own moral values to match your preferences?

Definitely not the same. First of all, to participate in the moral discussion, there is some element of intersubjectivity that kicks in, which outright excludes defining my moral values to a priori equate to my preferences; they may a posteriori do so, but the part where they are moral values involves clashing them against something, be it someone else, a society, your future self, a state of pain, or, in the case of moral realism, the moral reality out there.

To argue that my moral values equate to all my preferences would be equivalent to universal ethical preference egoism, the hilarious position which holds that the morally right thing to do is for everyone to satisfy my preferences, which would tile the universe with whiteboards, geniuses, ecstatic dance, cuddle piles, orgasmium, freckles, and the feeling of water in your belly when bodysurfing a warm wave at 3pm, among other things. I don't see a problem with that, but I suppose you do, and that is why it is not moral.

If morality is intersubjective, there is discussion to be had. If it is fully subjective, you still need to determine in which way it is subjective, what a subject is, which operations transfer moral content between subjects if any, what legitimizes you telling me that my morality is subjective, and finally why call it morality at all if you are just talking about subjective preferences.

Comment by Diego_Caleiro on Am I an Effective Altruist for moral reasons? · 2016-02-11T09:03:44.334Z · EA · GW

It seems from your comment that you feel the moral obligation strongly. Like the Oxford student cited by Krishna, you don't want to do what you want to do, you want to do what you ought to do.

I don't experience that feeling, so let me reply to your questions:

Wouldn't virtue ethics winning be contradicted by your pulling the lever?

Not really: pulling the lever is what I would do, it is what I would think I have reason to do, but it is not what I would think I have moral reason to do. I would reason that a virtuous person (ex hypothesi) wouldn't kill someone, that the moral thing to do is to let the lever be. Then I would act on my preference, which is stronger than my preference that the moral thing be done. The only case where a contradiction would arise is if you subscribe to all reasons for action being moral reasons, or to moral reasons having the ultimate call in all action choices. I don't.

In the same spirit, you suggest I'm an ethical egoist. This is because when you simulated me in this lever conflict, you thought "morality comes first", so you dropped the altruism requirement to make my beliefs compatible with my action. When I reason, however, I think "morality is one of the things I should consider here", and it doesn't win over my preference for most minds having an exulting time. So I go with my preference even when it is against morality. This is orthogonal to Ethical Egoism, a position that I consider both despicable and naïve, to be frank. (Naïve because I know the subagents with whom I have personal identity care for themselves about more than just happiness or their preference satisfaction; and despicable because it is one thing to be a selfish prick, understandable in an unfair universe into which we are thrown with a finite life and no given meaning or sensible narrative, but it is another thing to advocate a moral position in which you want everyone to be a selfish prick, and to believe that being a selfish prick is the right thing to do. That I find preposterous at a non-philosophical level.)

If you were truly convinced that something else were morally right, you would do it. Why wouldn't you?

Because I don't always do what I should do. In fact I nearly never do what is morally best. I try hard not to stray too far from the target, but I flinch from staring into the void almost as much as the average EA Joe. I really prefer knowing what the moral thing to do is in a situation; it is very informative and helpful for assessing what I will in fact do, but it is not compelling above and beyond the other contextual considerations at hand. A practical necessity, a failure of reasoning, a little momentary selfishness, and an appreciation for aesthetic values have all been known to cause me to act for non-moral reasons at times. And of course I often did what I should do too. I often acted the moral way.

To reaffirm, we disagree on what Ethical Egoism means. I take it to be the position that individuals in general ought to be egoists (say, some of the time). You seem to be saying, furthermore, that if I use any egoistic reason to justify my action, then merely in virtue of my using it as justification I mean that everyone should be (permitted to be) doing the same. That makes sense if your conception of just-ice is contractualist and you assume just-ification has a strong connection to just-ice. From me to me, I take it to be a justification (between my selves, perhaps), but from me to you, you could take it as an explanation of my behavior, to avoid the implications you assign to the concept of justification as demanding the choice of ethical egoism.

I'm not sure what my ethical (meta-ethical) position is, but I am pretty certain it isn't, even in part, ethical egoism.

Comment by Diego_Caleiro on EA is elitist. Should it stay that way? · 2016-01-22T07:17:55.574Z · EA · GW

I'm not claiming this is optimal, but I might be claiming that what I'm about to say is better than anything else that 98% of EAs are actually doing.

There are a couple thousand billionaires on the planet. There are also about as many EAs.

Let's say 500 billionaires are EA-friendly under some set of conditions. Then it may well be that the best use of the top 500 EAs is to meticulously study individual billionaires. Understand their values, where they come from, what makes them tick. Draw their CT-chart, find out their attachment style, personality disorder, and childhood nostalgia. Then approach them to help them, and while solving many of their problems faster than they can even see it, also show them the great chance they have of helping the world.

Ready set go:

Comment by Diego_Caleiro on The EA Newsletter & Open Thread - January 2016 · 2016-01-09T21:25:14.717Z · EA · GW

As Luke and Nate would tell you, the shift from researcher to CEO is a hard one to make, even when you want to do good. As Hanson puts it, "Yes, thinking is indeed more fun."

I have directed an institute in Brazil before, and that was already somewhat of a burden.

The main reason for the high variance, though, is that setting up an institute requires substantial funding. The people most likely to fundraise would be me, Stephen Frey (who is not on the website), and Daniel, and fundraising is taxing in many ways. It would be great if we had, for instance, the REG fundraisers to aid us (Liv, Ruari, Igor, wink wink), either by fundraising for us, finding someone who will, or teaching us how.

Money speaks. And it spells high variance.

Comment by Diego_Caleiro on The EA Newsletter & Open Thread - January 2016 · 2016-01-09T09:14:00.330Z · EA · GW

1) Convergence Analysis: the idea here is to create a Berkeley-affiliated research institute that operates mainly on two fronts: 1) strategy on the long-term future, and 2) finding crucial considerations that have not been considered or researched yet. We have an interesting group of academics, and I would take a mixed position of CEO and researcher.

2) Altruism: past, present, propagation: this is a book whose table of contents I have already written; it would need further research and the spelling out of each of the 250 sections I have in mind. It is very different in nature from Will's book or Singer's book. The idea here is not to introduce EA, but to reason about the history of cooperation and altruism that led to us, and where this can be taken in the future, including by the EA movement. This would be a major intellectual undertaking, likely consuming my next three years and doubling as a PhD dissertation. Perhaps tripling as a series of blog posts, for quick feedback loops and reliable writer motivation.

3) FLI grant proposal: our proposal intended to increase our understanding of psychological theories of human morality in order to facilitate later work in formalizing moral cognition for AIs, a subset of the value loading and control problems of Artificial General Intelligence. We didn't win, so the plan here would be to try to find other funding sources for this research.

4) Accelerate the PhD: for that I need to do 3 field statements, one about the control problem in AI with Stuart, one about altruism with Deacon, and one to be determined; then only the dissertation would still be on the to-do list.

All these plans scored sufficiently high in my calculations that it is hard to decide between them. Accelerating the PhD has a major disadvantage in that it does not increase my funding. The book (via blog posts or not) has a strong advantage in that I think it will have sufficiently new material that it satisfies goal 1 best of all; it is probably the best for the world if I manage to get to the end of it and do it well. But again, it doesn't increase funding. Convergence has the advantage of co-working with very smart people, and if it takes off sufficiently well, it could solve the problems of continuing to live in Berkeley and of financial constraints all at once, putting me in a stable position to continue doing research on relevant topics almost indefinitely, instead of having to make ends meet by downsizing the EA goal substantially among my priorities. So very high stakes, but uncertain probabilities. If AI is (nearly) all that matters, then the FLI grant will be the highest impact, followed by Convergence, the book, and the acceleration.

In any event, all of these are incredible opportunities which I feel lucky to even have in my consideration space. It is a privilege to be making this choice, but it is also very hard. So, conditional on the goals I stated before: 1) making the world better by the most effective means possible; 2) continuing to live in Berkeley; 3) receiving more funding; 4) not stopping the PhD; 5) using my knowledge and background to do (1).

I am looking for some light, some perspective from the outside that will make me lean one way or another. I have been uncomfortably indecisive for months, and maybe your analysis can help.

Comment by Diego_Caleiro on The EA Newsletter & Open Thread - January 2016 · 2016-01-09T08:40:16.772Z · EA · GW

Ok, so this doubles as an open thread?

I would like some light from the EA hivemind. For a while now I have been mostly undecided about what to do with my 2016-2017 period.

Mid-2015, Roxanne and I even created a spreadsheet so I could evaluate my potential projects and drop most of them. My goals are basically an oscillating mixture of:

1) Making the world better by the most effective means possible.

2) Continuing to live in Berkeley.

3) Receive more funding.

4) Not stop the PhD.

5) Use my knowledge and background to do (1).

This has proven an extremely hard decision to make. Here are the things I dropped because they were incompatible with my time, or with goals other than 1, but which I still think other EAs who share goal 1 should carry on:

(1) Moral Economics: From when it started, Moral Econ has been an attempt to install a different mindset in individuals; my goal has always been to have other people pick it up and take it forward. I currently expect this to be done, and will go back to it only if it seems like it will fall apart.

(2) Effective Giving Pledge: This is a simple idea I applied to EA Ventures with, though I actually want someone else to do it. The idea is simply to copy the Gates Giving Pledge website for an Effective Giving Pledge, which says that the wealthy benefactors will donate according to impact, tractability, and neglectedness. If 3 or 4 signatories of the original pledge signed it, it would be the biggest shift in resource allocation from the non-EA-money pool to the EA-money pool in history.

(3) Stuart Russell AI-safety course: I was going to spend some time helping Stuart make an official Berkeley AI-safety course. His book is used in 1500+ universities, so if the trend caught on, this would be a substantial win for the AI safety community. There was a non-credit course offered last semester in which some MIRI researchers, Katja, Paul, I, and others were going to present. However, it was very poorly attended and was not official, and it seems to me that the relevant metric is the probability that this would become a trend.

(4) X-risk dominant paper: What are the things that would dominate our priority space on top of X-risk if they were true? Daniel Kokotajlo and I began examining that question, but considered it too socially costly to publish anything about it, since many scenarios are too weird and could put off non-philosophers.

These are the things I dropped for reasons other than EA goal 1. If you are interested in carrying on any of them, let me know and I'll help you if I can.

In the comment below, by contrast, are the things I am still undecided between, the ones I want help in deciding:

Comment by Diego_Caleiro on My Coming of Age as an EA: 12 Problems with Effective Altruism · 2015-12-15T20:51:53.734Z · EA · GW

That sounds about right :)

I like your sincerity. The verbosity is something I actually like, and it was quite praised in the human sciences I was raised in; I don't aim for a condensed, information-dense writing style. The narcissism I dislike and have tried to fix before, but it's hard: it's a mix of a rigid personality trait with a discomfort from having been in the EA movement since long before it was an actual thing, having spent many years giving time, resources, and attention, and seeing new EAs who don't have knowledge or competence being rewarded (especially financially) by EA organizations, which they clearly don't deserve. It also bugs me that people don't distinguish the much higher value of EAs who are not taking money from the EA sphere from those who have a salary and, to some extent, are just part of the economic engine, like anyone with a normal NGO job who is just instantiating the economy.

I don't actually see any problem with people talking about what changed their lives or whether they are more like Ross than like Chandler. I usually like hearing about transformative experiences of others because it enlarges my possibility scope. Don't you?

This particular text was written for myself, but I think the editing tips also hold for the ones I write for others, so thanks! And yes, you do write like an asshole sometimes on Facebook. But so what? If that is your thing, good for you; life isn't fair.

Comment by Diego_Caleiro on My Coming of Age as an EA: 12 Problems with Effective Altruism · 2015-12-02T03:24:49.686Z · EA · GW

My experience on Lesswrong indicates that, though well intentioned, this would be a terrible policy. The best predictor on Lesswrong of whether texts of mine would be upvoted or downvoted was whether someone, in particular the user Shminux, would give reasons for their downvote.

There is nothing I dislike or fear more, when I write on Lesswrong, than Shminux giving reasons why he's downvoting this time.

Don't get me wrong: write a whole dissertation about what in the content is wrong, or bad, or unformatted; do anything else, but don't say, for instance, "Downvoted because X".

It is a nightmare.

Having reasons for downvotes visible induces people to downvote, and we seldom do the mirror thing: writing down reasons for the upvote.

I have no doubt, none, after posting dozens of texts about all sorts of things on Lesswrong and the EA forum, that having a policy of explaining downvotes is the worst possible thing from the writer's perspective.

Remember that death is the second biggest fear people have; the first one is speaking in public. Now think about the role of posts explaining downvotes: it would be a total emotional shutdown, especially for new posters who are still getting the gist of it.

Comment by Diego_Caleiro on My Coming of Age as an EA: 12 Problems with Effective Altruism · 2015-11-30T03:00:54.569Z · EA · GW

I will write that post once I am financially secure with some institutional attachment. I think it is too important for me to write while I expect to receive funding as an individual, and I don't want people to think "he's saying that because he is not financed by an institution." Also see this.

Comment by Diego_Caleiro on My Coming of Age as an EA: 12 Problems with Effective Altruism · 2015-11-29T22:57:43.866Z · EA · GW

I think we are falling prey to the transparency fallacy, the double transparency fallacy, and that there are large inferential gaps in our conversation in both directions.

We could try to close the gaps by writing to one another here, but then both of us would end up sometimes taking a defensive stance, which could hinder the discussion's progress. My suggestion is that we do one of these:

1) We talk via Skype or Hangouts to understand each other's minds. 2) We wait organically for the inferential gaps to be filled and for both of us to grow as rationalists, and assume that we will converge more towards the future. 3) The third alternative: something I didn't think of, but that you think might be a good idea.

Comment by Diego_Caleiro on My Coming of Age as an EA: 12 Problems with Effective Altruism · 2015-11-29T11:05:46.439Z · EA · GW

9) I ended up replying on Lesswrong.

Comment by Diego_Caleiro on Direct Funding Between EAs - Moral Economics · 2015-11-29T08:23:49.265Z · EA · GW

Do you want to do this soon? You can help us get it done.

Comment by Diego_Caleiro on My Coming of Age as an EA: 12 Problems with Effective Altruism · 2015-11-29T01:44:45.623Z · EA · GW

I'll bite:

1) Transhumanism: The evidence is for the paucity of our knowledge.

2) Status: People are being valued not for the expected value they produce, but for the position they occupy.

3) Analogy: Jargon from Musk, meaning copying and tweaking someone else's idea instead of thinking of a rocket, for instance, from the ground up; follow the chef and cook link.

4) Detonator: The key phrase was "cling to"; they stick with the one they had to begin with, demonstrating lack of malleability.

5) Size: The size gives reason to doubt the value of action, because to the extent you are moved by other forces (ethical or internal), the opportunity cost rises.

6) Nature: Same as 5.

7) Uncertainty: Same here; more uncertainty, more opportunity cost.

8) Macrostrategy: As with the items before, if you value anything but aggregative consequentialism indifferent to risk, the opportunity cost rises.

9) Probabilistic reasoning: No short summary; you'd have to search for reference class tennis, reference classes, Bayes' theorem, and reasoning by analogy to get it. I agree it is unfortunate that terms sometimes stand for whole articles, books, papers, or ideologies; I said this was a direct reflection of my thinking with myself.

10) Trust in Institutions: link.

11) Delusional: Some cling to ideas, some to heroes, some cling to optimistic expectations; all of them are not letting the truth destroy what can be destroyed by it.

12) I'd be curious to hear the countervailing possibilities. Many people who are examining the movement going forwards seem to agree this is a crucial issue.

Comment by Diego_Caleiro on Systematically under explored project areas? · 2015-10-02T18:24:41.415Z · EA · GW

This post is the sort of thing I would expect Crux, the Crucial Considerations Institute we are forming in a few months, to output on a regular basis.

Comment by Diego_Caleiro on Cause selection: a flowchart [link] · 2015-09-11T12:52:23.081Z · EA · GW

I continue to think this is great work, and new EAs should always be directed to it :)

Comment by Diego_Caleiro on Is there a hedonistic utilitarian case for Cryonics? (Discuss) · 2015-08-28T20:27:20.270Z · EA · GW

Arguing for cryonics as EA seems like bottom-line reasoning to me.

I can imagine exceptions. For instance: 1) Mr. E.A. is an effective altruist who is super productive, gets the most enjoyment out of working, and rests by working even more. Expecting with high likelihood to become an emulation, Mr. E.A. opts for cryopreservation to give himself a chance of becoming an emulation coalition that would control large fractions of the em economy, and use those resources for the EA cause on which society has settled after long and careful thought.

2) Rey Cortzvail is a prominent technologist who fears death more than anything else. He has received medals from many presidents and invented many of the great technologies of the last few decades. To continue working well, it is fundamental for him to think he may get a chance to live a long life, so he signs up for cryonics to purchase peace of mind.

3) Neith Sorez wants to help fix the world's mess, and he wants it badly. He also is not on board with the whole people-dying thing, and, to be fair, much of his perception of how screwed up things are comes from the clarity with which he envisions the grim reaper and the awfulness brought by it. He's convinced that AI matters and is pushing to help make AI less dangerous and quite possibly help most people get a chance to live long. He wears the necklace, though, as a reminder to himself and others, and as a signal of allegiance to the many others who can see the scope of the horror that lies behind us, and the need to stop it from happening in the future.

4) Miss E.A. entered the EA community by going to cryonics meetings, noticing that these EA people seem to be pretty freaking awesome, and thinking there's no reason not to join the team and learn all about EA. Within a year, she is very convinced of some particular set of actions and is an active and central EA. All of this came through the cryonics community and her friends there, so she decides not to drop out of cryonics, for historical, signalling, allegiance, and friendship reasons.

These above don't seem to me to be bottom line arguments.

But arguments of the form "the best way to help the far future is to self-preserve through cryonics" should definitely be dominated by "pay for cryonics for whoever you think is doing the best job and isn't yet a cryonicist".

Comment by Diego_Caleiro on Ethical Fourier Transform · 2015-08-18T21:21:30.978Z · EA · GW

Love it.

Comment by Diego_Caleiro on Moral Economics in Practice: Musing on Acausal Payments through Donations · 2015-08-17T21:23:29.412Z · EA · GW

Thumbs up for this. Creating a labor market for people who are willing to work for causes seems high value to me.

A few years ago, before I had spent most of my money, and while Brazil was doing well, I didn't care about money, and as usual I was working and paying for my own work out of pocket.

If it had been an option then, I would have wanted to work on far-future EA, and hedge my bets by asking other people to donate to near-future causes on behalf of the work I was doing. I currently lean much more strongly towards the far future, though, so most of my eggs are in that basket. You could consider writing a post about indirect donations between EAs, in the spirit of Direct Funding Between EAs. I believe Rox Heston would be happy to help conceptually, and Matt Reyes would be happy to help with editing.

Comment by Diego_Caleiro on Direct Funding Between EAs - Moral Economics · 2015-07-28T21:39:57.381Z · EA · GW

Off the top of my head, and without consulting people:

Justin Shovelain, Oliver Habryka, Malcolm Ocean, Roxanne Heston, Miranda Dixon-Luinenburg, Steve Rayhawk, Gustavo Rosa, Stephen Frey, Gustavo Bicalho, Steven Kaas, Bastien Stern, Anne Wissemann, and many others.

If they had not received FLI funding: Kaj Sotala, Katja Grace.

If they needed to transition between institutions/countries: Most of the core EA community.

I have mentioned it as an option for a while, but personally waited until I had less of a conflict of interest to actually post about it (at the moment I'm receiving much less funding than I did before, and am receiving some funding through institutions; I wanted to make clear that my goal is not to have people disliking institutions, but to fix the labor market).

I think the greater point here is that this has not been considered so far because people did not even envision it being an option. But now they will.

Comment by Diego_Caleiro on Effective Altruism Global SF panel on AI: question submissions thread · 2015-07-21T01:20:08.236Z · EA · GW

Would it be valuable to develop a university-level course on AI safety engineering, to be implemented in the hundreds of universities that use Russell's book worldwide, in order to attract more talented minds to the field? What are the steps that would cause this to happen?

Comment by Diego_Caleiro on Introducing Moral Economics · 2015-07-16T04:51:13.905Z · EA · GW

Robin Hanson is not mainstream in any sense I can envision; he did take a look at it, though :) I asked an economist friend to review it, and an economics student reviewed it as well. Check below for the link to the complete Google Doc if you are an economist who happens to be reading this.

Comment by Diego_Caleiro on Introducing Moral Economics · 2015-07-14T19:03:11.508Z · EA · GW

If you are eager to see the other posts in the series, and would like to help by commenting, feel free to comment in this Google Doc, which contains all the posts in the series.

The posts are already finished; still, I highly encourage other EAs to create more posts in that document, or suggest changes. I'm not an economist; I was struck by this idea while writing my book on altruism, and have already spent many hours learning more economics to develop it. The goal is to have actual economists carry this on to distances I cannot.

Comment by Diego_Caleiro on Revamping Existing Charities · 2015-06-24T21:28:32.421Z · EA · GW

The question I would ask then is: if you want to influence larger organizations, why not governmental organizations, which have the largest quantities of resources that can be flipped by one individual? If you get a technical position in a public-policy-related organization, you may be responsible for substantial changes in the allocation of resources.

Comment by Diego_Caleiro on Revamping Existing Charities · 2015-06-24T17:18:29.651Z · EA · GW

At the end of the day, the metric will always be the same. If you can make the entire Red Cross more effective, it may be that each unit of your effort was worth it. But if you anticipate more and more donations going to EA-recommended charities, then making them even more effective may be more powerful.

See also DavidNash's comment.

Comment by Diego_Caleiro on Revamping Existing Charities · 2015-06-24T03:11:50.274Z · EA · GW

Except for the purposes of obtaining more epistemic information later on, the general agreement within the EA crowd is that one should put the vast majority of one's eggs in one basket: the best basket.

I just want to point out that the exact same is the case here: if someone wants to make a charity more effective, choosing Oxfam or the Red Cross would be a terrible idea, but trying to make AMF, FHI, SCI, etc. more effective would be a great idea.

Effective altruism is a winner-takes-all kind of thing, where the goal is to make the best better, not to make anyone else as good as the best.

Comment by Diego_Caleiro on What Got Us Here Won’t Get Us There: Failure Modes on the Way to Global Cooperation · 2015-06-12T19:41:49.017Z · EA · GW

This piece is a simplified version of an academic article Joao Fabiano and I are writing on the future of evolutionary forces, similar in spirit to this one. It will also be the basis of one of the early chapters of my book Altruism: Past, Present, Propagation. We welcome criticism and suggestions of other forces/constraints/conventions that may be operating to interfere with or accelerate the long-term evolution of coalitions, cooperation, and global altruistic coordination.

Comment by Diego_Caleiro on I am Nate Soares, AMA! · 2015-06-11T00:18:37.347Z · EA · GW

1) I see a trend in the way new EAs concerned about the far future think about where to donate money that seems dangerous; it goes:

I am an EA and care about impactfulness and neglectedness -> Existential risk dominates my considerations -> AI is the most important risk -> Donate to MIRI.

The last step frequently involves very little thought; it borders on a cached thought.

How would you be conceiving of donating your X-risk money at the moment if MIRI did not exist? Which other researchers or organizations should be being scrutinized by donors who are X-risk concerned, and AI persuaded?

Comment by Diego_Caleiro on I am Nate Soares, AMA! · 2015-06-11T00:09:46.060Z · EA · GW

1) What are the implicit assumptions, within MIRI's research agenda, of the form "currently we have absolutely no idea how to do this, but we are taking this assumption for the time being, hoping that in the future either a more practical version of this idea will be feasible, or that this version will be a guiding star for practical implementations"?

I mean things like

  • UDT assumes it's ok for an agent to have a policy ranging over all possible environments and environment histories

  • The notion of agent used by MIRI assumes to some extent that agents are functions, and that if you want to draw a line around the reference class of an agent, you draw it around all other entities executing that function.

  • The list of problems for which the MIRI papers need infinite computability is: X, Y, Z, etc.

  • (something else)

And so on

2) How do these assumptions diverge from how FLI, FHI, or non-MIRI people publishing on the AGI 2014 book conceive of AGI research?

3) Optional: Justify the differences in 2 and why MIRI is taking the path it is taking.

Comment by Diego_Caleiro on We are living in a suboptimal blogosphere · 2015-06-08T21:16:59.304Z · EA · GW

These are very good points, I endorse checking John's comments.

Comment by Diego_Caleiro on [Discussion] Are academic papers a terrible discussion forum for effective altruists? · 2015-06-06T01:12:32.950Z · EA · GW

Some additional related points:

1) Joao Fabiano recently looked into acceptance likelihoods for papers in the top 5 philosophy journals. It seems that 3-5% is a reasonable range; it is very hard to publish philosophy papers. It seems to be slightly harder to publish in the top philosophy journals than in Nature, Behavioral and Brain Sciences, or Science, and this is after the filter that selects PhD candidates in philosophy, 6 positions available for 300 candidates (harder than Harvard medicine or economics).

2) Bostrom arguably became very influential within academia and the Lesswrong world before Superintelligence. I have frequently found academics in philosophy of mind, ethics, bioethics, physics, and even philosophy of language who knew his name and had some idea of who he was. Bostrom is a very prolific academic in terms of paper publication, and notably he pays attention to keeping an open website with his ideas available to the public. I believe his success prior to Superintelligence is explained, besides sheer brilliance, by actually displaying his ideas online, by creating the World Transhumanist Association, and by using techniques like talking about normal-seeming topics (the Matrix, cars in the other lane) in papers, strategies that also helped make David Chalmers, another brilliant academic, more prominent.

3) Joao Fabiano's inquiry also found that David Lewis single-handedly published 6.3% of top-five journal publications. All women together published 3.6% (including Ruth Millikan, a very prolific author). To me this indicates that the power law for publications is incredibly strong, with only extremely brilliant people like Lewis making the most substantial dents. The field is also male-skewed in an awkward way.

4) For people who consider themselves intellectual potentials and intend to continue in academia, my suggestion is to create a table of contents for a book, and, instead of going ahead and writing the chapters, find the chapter that comes closest to something that could become a paper, and try to write a paper about that. If you get accepted, this develops your career and allows you to be one of the stand-outs like Ord, MacAskill, and Bostrom, who end up working at the top universities. If you are systematically rejected, you can still get around this by publishing books and becoming influential the way, say, Sam Harris or Richard Dawkins became influential. Since I find this to be the optimal strategy I'm aware of at the moment, it is the one I'm taking.

5) One behavior I have found dangerous in the past is to "save your secret amazing idea from the world," protecting it by not testing it against other minds or putting it out for publication. Idea theft may be common in academia, but the response to that should be simply to have another, better idea and carry on the work. Most ideas are too complex for non-authors to be able to steal. I think great value can come from publishing your ideas on Lesswrong (Stuart Armstrong does this) before transforming them into a paper, and eventually a book chapter. This is how I've been reasoning lately. (To put my money where my mouth is: if anyone wants to take a look at the table of contents of my academic book Altruism: Past, Present, Propagation, I'm open to that; message me privately.) I welcome any criticism of that blog → paper → chapter strategy.

6) Most of what I said applies to philosophy and the humanities, and I'd be interested to know whether people who have published in STEM fields, like Paul Christiano, Nate Soares, Benja Fallenstein, Scott Aaronson, and Tegmark, think this is a valuable alternative there as well.

Comment by Diego_Caleiro on Questions about Effective Altruism in academia (RAQ) · 2015-05-27T22:35:10.177Z · EA · GW

More important than my field no longer being philosophy (though I have two degrees in philosophy and identify as a philosopher), the question you could have asked there is: why would you want a philosophical audience to begin with? It seems to me there is more low-hanging fruit in nearly any other area in terms of people who could become EAs. Philosophers have an easier time doing that, but attracting the top people in econ, literature, the visual arts, and other fields who may enjoy reading the occasional popular-science book is much less replaceable.

Comment by Diego_Caleiro on Questions about Effective Altruism in academia (RAQ) · 2015-05-27T22:10:59.938Z · EA · GW

I've left the field of philosophy (where I was mostly so I could research what seemed interesting and not what the university wanted; as Chalmers puts it, "studying the philosophy of x," where x is what interests me at any time) and am now in biological anthropology. From my many years researching the topic, it seems that becoming a professor in non-philosophy fields is much easier than in philosophy. Also, switching fields between undergrad and grad school is easy, in case someone reading this does not know.

Comment by Diego_Caleiro on Questions about Effective Altruism in academia (RAQ) · 2015-05-27T22:05:46.243Z · EA · GW

Biological anthropology, with an adviser whose latest book is in philosophy of mind, whose next book is on information theory, whose previous book was on, of all things, biological anthropology, and most of whose career was as a semiotician and neuroscientist. My previous adviser was a physicist in the philosophy of physics who turned into a philosopher of mind. My main sources of inspiration are Bostrom and Russell, who defy field borders. So I'm basically studying whatever you convince me makes sense at the intersection of interestingly complex and useful for the world. Except for math, code, and decision theory, which are not my comparative advantage, especially not among EAs.

Comment by Diego_Caleiro on Questions about Effective Altruism in academia (RAQ) · 2015-05-27T18:00:14.702Z · EA · GW

What are the unsolved problems related to infinite ethics that might be worth tackling as an academic? Some relevant writings on this topic, to see what the field looks like:

Comment by Diego_Caleiro on Questions about Effective Altruism in academia (RAQ) · 2015-05-27T17:52:01.046Z · EA · GW

I am not considering what Bostrom/Grace/Besinger do to be philosophy stricto sensu in this question.

Now that replaceability considerations have been covered in Ben Todd's and Will MacAskill's theses at Oxford, and Nick Beckstead has made the philosophical case for the far future, is there still large marginal return to be had from doing research on something that is philosophy stricto sensu?

I ask this because my impression is that after Parfit, Singer, Unger, Ord, MacAskill, and Todd, we have run out of efforts that have great consequential impacts in philosophical discourse. Not because improvements cannot be made, but because they would be minor relative to using that time for other, less stricto sensu, endeavours.

Comment by Diego_Caleiro on Questions about Effective Altruism in academia (RAQ) · 2015-05-27T17:43:30.656Z · EA · GW

Is it worthwhile to teach a class on effective altruism as a special course for undergrads? I recently saw a GWWC pledge party where many new undergrads had decided to take the pledge after a few months of taking one, though selection effects might have been a large part of it.