Comment by cole_haus on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T23:29:47.218Z · score: 6 (5 votes) · EA · GW

Ah, I see that now. Thanks.

FWIW, I was specifically looking for a disclaimer and it didn't quickly come to my attention. It looks like a few other people in these subthreads may have also missed the disclaimer.

Comment by cole_haus on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T20:06:37.620Z · score: 10 (4 votes) · EA · GW

Yeah, I hadn't realized it was more or less deprecated. (The page itself doesn't seem to give any indication of that. Edit: Ah, it does. I missed the second paragraph of the sidenote when I quickly scanned for some disclaimer.)

Also, apparently unfortunately, it's the first sublink under the 80,000 Hours site on Google if you search for 80,000 Hours.

Comment by cole_haus on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-16T20:06:28.454Z · score: 14 (8 votes) · EA · GW

It seems quite possible to me to have a "parameterized list". That is, recommendations can take the shape "If X is true of you, Y and Z are good options." And in fact 80,000 Hours does do this to some degree (via, for example, their career quiz). While this isn't entirely personalized (it's based only on certain attributes that 80,000 Hours highlights), it's also far from a single, definitive list. So there doesn't seem to be any insoluble tension between taking account of individual differences and communicating the same message to a broad audience--you just have to rely on the audience to do some interpreting.

Comment by cole_haus on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T05:23:52.556Z · score: 1 (1 votes) · EA · GW

I don't particularly want to try to resolve the disagreement here, but I'd think value per dollar is pretty different for dollars at EA institutions and for dollars with (many) EA-aligned people [1]. It seems like the whole filtering/selection process of granting is predicated on this assumption. Maybe you believe that people at CEA are the type of people that would make very good use of money regardless of their institutional affiliation?

[1] I'd expect it to vary from person to person depending on their alignment, commitment, competence, etc.

Comment by cole_haus on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T01:55:37.185Z · score: 23 (10 votes) · EA · GW

I am not OP but as someone who also has (minor) concerns under this heading:

  • Some people judge HPMoR to be of little artistic merit/low aesthetic quality
  • Some people find the subcultural affiliations of HPMoR off-putting (fanfiction in general, copious references to other arguably low-status fandoms)

If the recipients have negative impressions of HPMoR for reasons like the above, that could result in (unnecessarily) negative impressions of rationality/EA.

Clearly, there are also many people who like HPMoR and don't have the above concerns. The key question is probably what fraction of recipients will have positive, neutral, and negative reactions.

Comment by cole_haus on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T01:44:58.006Z · score: 5 (4 votes) · EA · GW

It's not at all clear to me why the whole $150k of a counterfactual salary would be counted as a cost. The most reasonable (simple) model I can think of is something like: ($150k * .1 + $60k) * 1.5 = $112.5k where the $150k*.1 term is the amount of salary they might be expected to donate from some counterfactual role. This then gives you the total "EA dollars" that the positions cost whereas your model seems to combine "EA dollars" (CEA costs) and "personal dollars" (their total salary).
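The model above can be written out as a quick sanity check. The figures are the ones quoted in the comment; the variable names and labels are mine:

```python
# Sketch of the "EA dollars" cost model described above.
# Assumptions taken from the comment: a $150k counterfactual salary of
# which ~10% would have been donated, a $60k CEA salary, and a 1.5x
# overhead multiplier.

COUNTERFACTUAL_SALARY = 150_000
DONATION_RATE = 0.10          # share of counterfactual salary donated
CEA_SALARY = 60_000
OVERHEAD_MULTIPLIER = 1.5

# "EA dollars" cost: forgone donations plus the fully loaded CEA salary
ea_dollar_cost = (COUNTERFACTUAL_SALARY * DONATION_RATE + CEA_SALARY) * OVERHEAD_MULTIPLIER
print(ea_dollar_cost)  # 112500.0
```

This keeps "EA dollars" and "personal dollars" separate: only the forgone donations from the counterfactual salary enter the cost, not the whole salary.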

Comment by cole_haus on Long Term Future Fund: April 2019 grant decisions · 2019-04-10T01:42:28.034Z · score: 5 (4 votes) · EA · GW

I think you have some math errors:

  • $150k * 1.5 + $60k = $285k rather than $295k
  • Presumably, this should be ($150k + $60k) * 1.5 = $315k ?
Comment by cole_haus on Most important unfulfilled role in the EA ecosystem? · 2019-04-05T20:51:24.196Z · score: 20 (9 votes) · EA · GW

I have a pretty averse reaction to all the people you named, expect I would feel similarly about someone in that mold in EA, and expect many other people in EA would feel similarly. I don't think charismatic leadership fits all that well with the other elements of EA in ways both important and incidental.

Comment by cole_haus on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-04T19:11:10.759Z · score: 4 (3 votes) · EA · GW

I don't know how promising others think this is, but I quite liked Concepts for Decision Making under Severe Uncertainty with Partial Ordinal and Partial Cardinal Preferences. It tries to outline possible decision procedures once you relax some of the subjective expected utility theory assumptions you object to. For example, it talks about the possibility of having a credal set of beliefs (if one objects to the idea of assigning a single probability) and then doing maximin on this, i.e. selecting the option that has the best expected utility according to its least favorable credences.
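A minimal sketch of that maximin-over-a-credal-set rule; the actions, utilities, and candidate credences here are invented for illustration, and only the decision rule follows the paper's idea:

```python
# Utility of each action under each of two possible states of the world
utilities = {
    "action_a": [10.0, 0.0],
    "action_b": [4.0, 3.0],
}

# Credal set: several candidate probability distributions over the states,
# rather than one precise distribution
credal_set = [
    [0.9, 0.1],
    [0.5, 0.5],
    [0.1, 0.9],
]

def worst_case_eu(action_utils, credal_set):
    """Expected utility under the least favorable credence in the set."""
    return min(
        sum(p * u for p, u in zip(dist, action_utils))
        for dist in credal_set
    )

# Maximin: choose the action whose worst-case expected utility is highest
best = max(utilities, key=lambda a: worst_case_eu(utilities[a], credal_set))
print(best)  # action_b
```

Here the "safe" action_b wins even though action_a has the higher expected utility under two of the three credences, because maximin only looks at the least favorable one.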

Comment by cole_haus on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-04T19:06:53.335Z · score: 10 (8 votes) · EA · GW

There's actually a thing called the Satisficer's Curse (pdf) which is even more general:

The Satisficer’s Curse is a systematic overvaluation that occurs when any uncertain prospect is chosen because its estimate exceeds a positive threshold. It is the most general version of the three curses, all of which can be seen as statistical artefacts.
Comment by cole_haus on Is any EA organization using or considering using Buterin et al.'s mechanism for matching funds? · 2019-04-02T05:02:39.442Z · score: 1 (1 votes) · EA · GW

IIRC, the mechanism has problems with collusion/dissembling. For example, one backer with $46 and 4 backers with $1 each will get significantly better results by splitting their money into 5 contributions of $10 each. This seems like a problem that's actually moderately likely to arise in practice.
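The collusion example can be checked against the quadratic funding formula from Buterin et al., where a project's total funding is the square of the sum of the square roots of its contributions:

```python
from math import sqrt

def clr_match(contributions):
    """Total funding under the Buterin et al. quadratic (CLR) formula:
    (sum of square roots of contributions) squared."""
    return sum(sqrt(c) for c in contributions) ** 2

honest = clr_match([46, 1, 1, 1, 1])         # one big backer, four small ones
colluding = clr_match([10, 10, 10, 10, 10])  # the same $50 split evenly

print(round(honest, 1), round(colluding, 1))  # 116.3 250.0
```

The same $50 yields more than twice the total funding when presented as five equal contributions, which is exactly the incentive to dissemble.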

Comment by cole_haus on a black swan energy prize · 2019-03-30T04:25:57.222Z · score: 2 (2 votes) · EA · GW

It looks like the case you're making in the "a prize" section is that prizes are more open to "outsiders" than grants which seems generally plausible to me. On the other hand, grants can actually fund the research itself while contestants for a prize need some source of funding. If it's capital-intensive to mount a serious attempt at the prize, this creates a funding and vetting problem again (contestants will need money to bankroll their attempt).

Comment by cole_haus on a black swan energy prize · 2019-03-30T04:25:32.132Z · score: 2 (2 votes) · EA · GW

I'm not convinced that a prize is particularly helpful in this case. I think of prizes as useful for inducing investment in things like public goods where private returns are limited. That doesn't seem to be the case here; successfully creating "radically better energy generation" seems like it would be wildly remunerative. The promise of vast wealth seems like it ought to be sufficient incentive regardless of a prize.

OTOH, that's all very first-principles and the history of innovation prizes doesn't seem to really pay much attention to this line of criticism. Maybe prizes make particular problems more salient, etc.

Comment by cole_haus on How to Understand and Mitigate Risk (Crosspost from LessWrong) · 2019-03-14T18:06:23.030Z · score: 1 (1 votes) · EA · GW

This is interesting! I think it would also be useful to talk about the standard terminology in the field. Some of those terms are:

Reasons I think it's useful to talk about standard terminology:

  • Allows you to converse with others and understand their work more easily
  • Allows readers to follow up and connect with a larger body of work
  • Communicates to experts that you've seriously engaged with the field and understand it

In this particular case, I'd be interested in hearing how your categories map to the standard ones. Or, if you think they don't, it would be interesting to hear why that is. What are the inadequacies of the standard terms and categories?

Comment by cole_haus on Impact Prizes as an alternative to Certificates of Impact · 2019-02-20T07:12:09.193Z · score: 3 (2 votes) · EA · GW

This seems very related to social impact bonds: "Social Impact Bonds are a type of bond, but not the most common type. While they operate over a fixed period of time, they do not offer a fixed rate of return. Repayment to investors is contingent upon specified social outcomes being achieved."

Comment by cole_haus on What we talk about when we talk about life satisfaction · 2019-02-12T19:13:26.372Z · score: 2 (2 votes) · EA · GW

Yup. It's in Chapter 23, The Nature and Significance of Happiness.

Comment by cole_haus on What we talk about when we talk about life satisfaction · 2019-02-09T19:10:59.909Z · score: 5 (3 votes) · EA · GW

I found a passage from the book that's much more on the nose:

But here we will focus on a deeper threat to the importance of LS, one that stems from the very nature and point of LS attitudes. How satisfied you are with your life does not simply depend on how well you see your life going relative to your priorities. It also depends centrally on how high you set the bar for a “satisfactory” life: how good is “good enough?” Rosa might be satisfied with her life only when getting almost everything she wants, while Juliet is satisfied even when getting very little of what she wants—indeed, even when most of her goals are being frustrated. It can seem odd to think that satisfied Juliet, for whom every day is a new kick in the teeth, is better off than dissatisfied Rosa, who nonetheless succeeds in almost all the things she cares about but is more demanding.

More to the point, it is not clear why LS should be so important insofar as it is a matter of how high or low individuals set the bar. Suppose Rosa has a lengthy, and not inconsequential, “life list,” and will not be satisfied until she has checked off every item on the list. It is not implausible that we should care about how well Rosa achieves her priorities—e.g., whether her goals are mostly met or roundly frustrated. But should anyone regard it as a weighty matter whether she actually gets every last thing on her list, and thus is satisfied with her life? It is doubtful, indeed, that Rosa should put much stock in it.

The point here is not simply that LS can reflect unreasonable demands, but that it depends on people’s standards for a good enough life, and these bear a problematic relationship to people’s well-being, depending on various factors that have no obvious relationship to how well people’s lives are going for them. It may happen that Rosa comes to see her standards as unreasonably high and revises them downwards—not because her priorities change, but because she now finds it unseemly to be so needy. In this case, what drives her LS is, in part, the norms she takes to apply to her attitudes—how it is fitting to respond to her life. Such norms likely influence most people’s attitudes toward their lives—a wish to exhibit virtues like fortitude, toughness, strength, or exactingness, non-complacency, and so forth. How satisfied we are with our lives partly depends, in short, on the norms we accept regarding how it is appropriate to respond to our lives. Note that most of us accept a variety of such norms, pulling in different directions, and it can be somewhat arbitrary which norms we emphasize in thinking about our lives. You may value both fortitude and not being complacent, and it may not be obvious which to give more weight in assessing your life. You may, at different times, vary between them.

Similarly, LS depends on the perspective one adopts: relative to what are you more or less satisfied? Looking at Tiny Tim, you may naturally take up a perspective on your life that makes your good fortune more salient, and so you reasonably find yourself pretty satisfied with things. Then you think about George Clooney, and your life doesn’t look so good by comparison: your satisfaction drops. Worse, it is doubtful that any perspective is uniquely the right one to take: again, it is somewhat arbitrary. Unless you are like Rosa and have bizarrely—not to say childishly—determinate criteria for how good your life has to be to qualify as a satisfactory one, it will be open to you to assess your life from any of a number of vantage points, each quite reasonable and each yielding a different verdict.

Indeed, the very idea of subjecting one’s life to an all-in assessment of satisfactoriness is a bit odd. When you order a steak prepared medium and it turns up rare, its deficiencies are immediately apparent and your dissatisfaction can be given plain meaning: you send it back. Or, you don’t return to that establishment. But when your life has annoying features, what would it mean to deem it unsatisfactory? You can’t very well send it back. (Well . . .) Nor can you resolve to choose a different one next time around. It just isn’t clear what’s at stake in judging one’s life satisfactory or otherwise; lives are vastly harder to judge than steaks; and anyway, what counts as a reasonable expectation for a life is less than obvious since the price of admission is free—you’re just born, and there you are. So it is hard to know where to set the bar, and unsurprising that people can be so easily gotten by trivial influences to move it (Schwarz & Strack, 1999). You might be satisfied with your life simply because it beats being dead. The ideal of life satisfaction arguably imports a consumer’s concept, one most at home in retail environments, into an existential setting where metrics of customer satisfaction may be less than fitting. (It is an interesting question how far people spoke of life satisfaction before the postwar era got us in the habit of calling ourselves “consumers.”)

In short, LS depends heavily on where you set the bar for a “good enough” life, and this in turn depends on factors like perspectives and norms that are substantially arbitrary and have little bearing on your well-being. The worry is not that LS fails to track some objective standard of well-being, but that we should expect that it will fail to track any sane metric of well-being, including the individual’s own. To take one example: Studies suggest that dialysis patients report normal levels of LS, which might lead us to think they don’t really mind it very much. Yet when asked to state a preference, patients said they would be willing to give up half their remaining life-years to regain normal kidney function (Riis et al., 2005; Torrance, 1976; Ubel & Loewenstein, 2008). This is about as strong as a preference gets. A plausible supposition is that people don’t adjust their priorities when they get kidney disease so much as they adjust their standards for what they’ll consider a satisfactory life. LS thus obscures precisely the sort of information one might expect it to provide—not because of errors or noise, but because it is not the sort of thing that is supposed in any straightforward way to yield that information. LS is not that sort of beast.

The claim is not that LS measures never provide useful information about well-being. In fact they frequently do, because the perceived welfare information is in there somewhere, and differences in norms and perspectives may often cancel out over large populations. They may not cancel out, however, where norms and perspectives systematically differ, and this is a serious problem in many contexts, especially cross-cultural comparisons using LS (Haybron, 2007, 2008). But what the points raised in this section chiefly indicate about LS measures is that we cannot support conclusions about absolute levels of well-being with facts about LS. That people are satisfied with their lives does not so much as hint that their lives are going well relative to their priorities. If we wish reliably to assess how people see their lives going for them, we need a better yardstick than life satisfaction.
Comment by cole_haus on What we talk about when we talk about life satisfaction · 2019-02-05T23:42:15.235Z · score: 1 (1 votes) · EA · GW

Ah, yeah. I didn't mean to suggest that the philosophers have it all worked out. What I meant is that I think the philosophers seem to share your goals. In other words, I (as a non-professional in either psychology or philosophy) think if someone came up to a psychologist and said, "I've come up with these edge cases for 'life satisfaction'", they'd more or less reply, "That's regrettable. Moving on...". On the other hand, if someone came up to a philosopher and said, "I've come up with edge cases for 'eudaimonia'", they might reply, "Yes, edge cases like these are among my central concerns. Here's the existing work on the matter and here are my current attempts at a resolution."

Comment by cole_haus on How Can Donors Incentivize Good Predictions on Important but Unpopular Topics? · 2019-02-05T06:51:23.309Z · score: 5 (4 votes) · EA · GW

Subsidizing a prediction market seems like one of the more promising approaches to me. There's a write-up of what that would look like more concretely at: Subsidizing prediction markets. Unfortunately, a quick search also turns up a theoretical limitation of this approach: Subsidized Prediction Markets for Risk Averse Traders.
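One standard subsidy mechanism (not necessarily the one in the linked write-up) is Hanson's logarithmic market scoring rule, where the sponsor's subsidy is the market maker's worst-case loss. A small sketch; the liquidity parameter here is an arbitrary illustrative choice:

```python
from math import exp, log

# Liquidity parameter: larger b means a deeper market but a larger subsidy
b = 100.0

def lmsr_cost(quantities, b):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b)).

    A trader moving the outstanding shares from q to q' pays
    C(q') - C(q); the sponsor's loss is bounded.
    """
    return b * log(sum(exp(q / b) for q in quantities))

n_outcomes = 2
# The sponsor's worst-case loss (the subsidy they commit) is b * ln(n)
max_subsidy = b * log(n_outcomes)
print(round(max_subsidy, 2))  # 69.31
```

The subsidy is what pays informed traders for moving prices toward their beliefs; the risk-aversion result linked above concerns how much information such payments actually elicit.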

Comment by cole_haus on What we talk about when we talk about life satisfaction · 2019-02-05T02:30:18.641Z · score: 9 (5 votes) · EA · GW

My impression is that the term "life satisfaction" sees the heaviest use in psychology, where a full philosophical analysis of the necessary and sufficient properties of "life satisfaction" isn't especially desired or useful. As long as the term denotes a concept with some internal consistency and we all use the term in roughly compatible ways, we can usefully use it in measurements.

If you're looking for a concept that's a load-bearing part of your ethics, primarily psychological constructs like "life satisfaction" aren't a great fit. I think the discussions you'd want to look at for these more philosophical purposes are discussions around eudaimonia, hedonia, etc.

Comment by cole_haus on What we talk about when we talk about life satisfaction · 2019-02-05T02:23:26.048Z · score: 8 (4 votes) · EA · GW

I don't have a neat, definitive answer for you, but I've been reading the Oxford Handbook of Happiness lately and these are the bits that come to mind:

  • The Satisfaction with Life Scale is the most common instrument used to measure life satisfaction and may give you some sense of how they operationalize the term. Each item is rated on a 7-point Likert scale:
    • In most ways my life is close to my ideal.
    • The conditions of my life are excellent.
    • I am satisfied with my life.
    • So far I have gotten the important things I want in life.
    • If I could live my life over, I would change almost nothing.
  • Another common instrument is the Cantril ladder, which asks people to place themselves on a ladder where the bottom rung is the worst life possible and the top rung is the best life possible. This is probably closest to your "the most satisfying life imaginable".
  • One explicit definition listed in the book is:
Campbell et al. (1976) argue that satisfaction with any aspect of life reflects the gap between one’s current perceived reality and the level to which one aspires.

This sounds closest to your "the most satisfying life, in practice".

  • Another set of authors contend that general life satisfaction is actually (contrary to first impressions) more affective than cognitive:
This generalized positive view may be measured through asking “How satisfied are you with your life as a whole?,” and this question has been used in population surveys for over 35 years (Andrews & Withey, 1976). Not surprisingly, given the extraordinary generality of this question, the response that people give does not represent a cognitive evaluation of their life. Rather it reflects a deep and stable positive mood state that we initially called “core affect” (Davern et al., 2007), but which we now refer to as HPMood (Cummins, 2010).
Comment by cole_haus on Disentangling arguments for the importance of AI safety · 2019-01-23T20:49:24.844Z · score: 2 (2 votes) · EA · GW

Agreed. I think these reasons seem to fit fairly easily into the following schema: Each of A, B, C, and D is necessary for a good outcome. Different people focus on failures of A, failures of B, etc. depending on which necessary criterion seems to them most difficult to satisfy and most salient.

Comment by cole_haus on How can I internalize my most impactful negative externalities? · 2019-01-17T16:33:31.815Z · score: 5 (5 votes) · EA · GW

I actually wrote up a survey a bit ago pulling together negative externalities with estimates in the literature: https://www.col-ex.org/posts/pigouvian-compendium/. From (estimated) largest to smallest, they are:

  • Driving
  • Emitting carbon
  • Obesity
  • Drinking alcohol
  • Agriculture
  • Municipal waste
  • Smoking
  • Antibiotic use
  • Debt
  • Gun ownership
Comment by cole_haus on What is the Most Helpful Categorical Breakdown of Normative Ethics? · 2018-08-15T21:13:59.473Z · score: 4 (6 votes) · EA · GW

I think there's a certain prima facie plausibility to the traditional tripartite division. If you just think about the world in general, each of actors, actions, and states seems salient. It wouldn't take much to convince me that--appropriately defined--actors, actions, and states are mutually exclusive and collectively exhaustive in some metaphysical sense.

Once you accept the actors, actions, states division, it makes sense to have ethical theories revolving around each. These correspond to virtue ethics, deontology, and consequentialism.

Comment by cole_haus on What is the Most Helpful Categorical Breakdown of Normative Ethics? · 2018-08-15T21:06:45.371Z · score: 3 (3 votes) · EA · GW

I think you could fairly convincingly bucket virtue ethics in 'Ends' if you wanted to adopt this schema. A virtue ethicist could be someone who chooses the action that produces the best outcome in terms of personal virtue. They are (sort of) a utilitarian that optimizes for virtue rather than utility and restricts their attention to only themselves rather than the whole world.

Comment by cole_haus on EA Forum 2.0 Initial Announcement · 2018-07-24T11:47:07.806Z · score: 2 (2 votes) · EA · GW

That's great to hear!

Comment by cole_haus on EA Forum 2.0 Initial Announcement · 2018-07-22T18:08:09.020Z · score: 8 (8 votes) · EA · GW

I've not yet read it myself, but I'm curious if anyone involved in this project has read "Building Successful Online Communities: Evidence-Based Social Design" (https://mitpress.mit.edu/books/building-successful-online-communities). Seems quite relevant.

Comment by cole_haus on Doning with the devil · 2018-07-10T08:54:17.540Z · score: 1 (1 votes) · EA · GW

Yup, I hope the examples make that clear, but the other descriptions could do more to highlight that we're interested in the margin.

Comment by cole_haus on Doning with the devil · 2018-06-16T04:16:01.694Z · score: 3 (3 votes) · EA · GW

It was meant as mediocre word play on the idiom 'dining with the devil' and 'donating'.

Doning with the devil

2018-06-15T15:51:06.030Z · score: 3 (3 votes)
Comment by cole_haus on Effective Advertising and Animal Charity Evaluators · 2018-06-14T16:33:36.193Z · score: 5 (5 votes) · EA · GW

Anecdotally, we’ve found that our matching campaigns have brought in a disproportionately large number of new donors—the majority of whom were not previously involved with effective giving. [...] we were able to teach them about effective animal advocacy and to support them in effective giving elsewhere in the EA movement. The amount that these donors will give to effective charities during their lifetime is significantly higher than the donation-matching campaign that attracted them; we continue to build relationships with these new donors.

I think this might be a key part that merits more explication. I can think of two major objections that evidence here would help answer:

1) The consequentialist benefit of 'standard' marketing techniques isn't worth the deontological cost.

2) 'Standard' marketing techniques are self-defeating for EA. This relies upon a belief that those that are put off by the utilon approach and attracted by the fuzzy approach are unlikely to 'assimilate' into EA.

Can you share more information on the number of new donors and particularly their subsequent engagement with EA? Or, if you can't or aren't ready to share that data, can you at least attest that you're tracking it and working on it?

Comment by cole_haus on [deleted post] 2018-06-14T15:42:39.475Z

This seems very related to the unilateralist's curse: https://nickbostrom.com/papers/unilateralist.pdf. There, they suggest that if you're about to reveal information you're surprised others aren't talking about, take a moment and consider whether their silence is evidence you should remain silent.

Comment by cole_haus on Against prediction markets · 2018-05-30T19:17:41.757Z · score: 1 (1 votes) · EA · GW

Regarding section 1, is there a reliable way to determine who these market-beating superforecasters are? What about in new domains? Do we have to have a long series of forecasts in any new domain before we can pick out the superforecasters?

Somewhat relatedly, what guarantees do we have that the superforecasters aren't just getting lucky? Surely, some portion of them would revert to the mean if we continued to follow their forecasts.

Altogether, this seems somewhat analogous to the arguments around active vs passive investing where I think passive investing comes out on top.
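The regression-to-the-mean worry can be made concrete with a toy simulation: give every forecaster identical (zero) skill on binary questions, select the top scorers on one batch, and watch their performance on a fresh batch fall back toward the base rate. All parameters here are arbitrary:

```python
import random

random.seed(0)
N_FORECASTERS, N_QUESTIONS = 1000, 50

def score(n_questions):
    """Number of correct calls when every forecast is a coin flip."""
    return sum(random.random() < 0.5 for _ in range(n_questions))

first_round = [score(N_QUESTIONS) for _ in range(N_FORECASTERS)]

# "Superforecasters": roughly the top 5% on the first round
cutoff = sorted(first_round, reverse=True)[N_FORECASTERS // 20]
top = [i for i, s in enumerate(first_round) if s > cutoff]

# On a fresh batch, their average falls back to the base rate of ~25,
# nowhere near their first-round scores
second_round = [score(N_QUESTIONS) for _ in top]
avg_second = sum(second_round) / len(second_round)
print(round(avg_second, 1))
```

Of course, the Good Judgment Project results suggest real superforecasters retain much of their edge across rounds, so this only shows what pure luck would look like, not that observed superforecasters are lucky.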

Comment by cole_haus on Do Prof Eva Vivalt's results show 'evidence-based' development isn't all it's cut out to be? · 2018-05-30T18:51:47.456Z · score: 2 (2 votes) · EA · GW

I think Evidence-Based Policy: A Practical Guide To Doing It Better is also a good source here. The blurb:

Over the last twenty or so years, it has become standard to require policy makers to base their recommendations on evidence. That is now uncontroversial to the point of triviality--of course, policy should be based on the facts. But are the methods that policy makers rely on to gather and analyze evidence the right ones? In Evidence-Based Policy, Nancy Cartwright, an eminent scholar, and Jeremy Hardie, who has had a long and successful career in both business and economics, explain that the dominant methods which are in use now--broadly speaking, methods that imitate standard practices in medicine like randomized control trials--do not work. They fail, Cartwright and Hardie contend, because they do not enhance our ability to predict if policies will be effective.

The prevailing methods fall short not just because social science, which operates within the domain of real-world politics and deals with people, differs so much from the natural science milieu of the lab. Rather, there are principled reasons why the advice for crafting and implementing policy now on offer will lead to bad results. Current guides in use tend to rank scientific methods according to the degree of trustworthiness of the evidence they produce. That is valuable in certain respects, but such approaches offer little advice about how to think about putting such evidence to use. Evidence-Based Policy focuses on showing policymakers how to effectively use evidence, explaining what types of information are most necessary for making reliable policy, and offers lessons on how to organize that information.