Posts

2018 AI Alignment Literature Review and Charity Comparison 2018-12-18T04:48:58.945Z · score: 110 (53 votes)
2017 AI Safety Literature Review and Charity Comparison 2017-12-20T21:54:07.419Z · score: 43 (43 votes)
2016 AI Risk Literature Review and Charity Comparison 2016-12-13T04:36:48.060Z · score: 53 (55 votes)
Being a tobacco CEO is not quite as bad as it might seem 2016-01-28T03:59:15.614Z · score: 10 (12 votes)
Permanent Societal Improvements 2015-09-06T01:30:01.596Z · score: 9 (9 votes)
EA Facebook New Member Report 2015-07-26T16:35:54.894Z · score: 11 (11 votes)

Comments

Comment by larks on Assumptions about the far future and cause priority · 2019-11-11T20:47:29.455Z · score: 9 (3 votes) · EA · GW

This is a really interesting post, thanks for writing it up.

I think I have two main models for thinking about these sorts of issues:

  • The accelerating view, where we have historically seen several big speed-ups in rate of change as a result of the introduction of more powerful methods of optimisation, and the introduction of human-level AGI is likely to be another. In this case the future is both potentially very valuable (because AGI will allow very rapid growth and world-optimisation) and endangered (because the default is that new optimisation forces do not respect the values or 'values' of previous modes.)
    • Physics/Chemistry/Plate Tectonics
    • Life/Evolution
    • Humanity/Intelligence/Culture/Agriculture
    • Enlightenment/Capitalism/Industrial Revolution
    • Recursively self-improving AGI?
  • The God of Straight Lines approach, where we'll continue to see roughly 2% RGDP growth, because that is what always happens. AI will make us more productive, but not dramatically so, and at the same time previous sources of productivity growth will be exhausted, so overall trends will remain roughly intact. As such, the future is worth a lot less (perhaps we will colonise the stars, but only slowly, and growth rates won't hit 50%/year) but also less endangered (because all progress will be incremental and slow, and humanity will remain in control). I think of this as being the epistemically modest approach.

As a result, my version of Clara thinks of AI Safety work as reducing risk in the worlds that happen to matter the most. It's also possible that these are the worlds where we can have the most influence, if you thought that strong negative feedback mechanisms strongly limited action in the Straight Line world.

Note that I was originally going to describe these as the inside and outside views, but I actually think that both have decent outside-view justifications.

Comment by larks on AI policy careers in the EU · 2019-11-11T13:11:39.718Z · score: 3 (4 votes) · EA · GW

Thanks for writing this, it was very interesting.

Readers might be interested in the EU's AI Ethics guidelines, which various EA-type people tried (and apparently failed?) to influence in a productive direction.

A minor note:

the world’s largest trading bloc.

According to Google...

  • US GDP (2018): $20.5 trillion
  • EU GDP (2018): $18.8 trillion

and presumably EU GDP, and influence on AI, will fall when the UK leaves. (If you use PPP, I think China is bigger.)


Comment by larks on Centre for the Study of Existential Risk Six Month Report April - September 2019 · 2019-11-10T22:46:47.096Z · score: 2 (1 votes) · EA · GW

Thanks for writing this up, I thought it was very helpful.

Comment by larks on [updated] Global development interventions are generally more effective than Climate change interventions · 2019-10-10T02:20:27.213Z · score: 11 (4 votes) · EA · GW
[updated] Global development interventions are generally more effective than Climate change interventions
Previously titled “Climate change interventions are generally more effective than global development interventions”.  Because of an error the conclusions have significantly changed. [old version]. I have extended the analysis and now provide a more detailed spreadsheet model below.

Wow, I have never seen someone do this before! This is really impressive, excellent job being willing to reverse your conclusions (and article). Max upvote from me.

Comment by larks on What actions would obviously decrease x-risk? · 2019-10-09T14:07:48.188Z · score: 4 (2 votes) · EA · GW

When I was studying maths it was made clear to us that some things were obvious, but not obviously obvious. Furthermore, many things I thought were obvious were in fact not obvious, and some were not even true at all!

Comment by larks on FHI Report: Stable Agreements in Turbulent Times · 2019-10-05T21:19:37.835Z · score: 3 (2 votes) · EA · GW

Thanks for sharing this here.

It strikes me that making it easier to change contracts ex post could make the long run situation worse. If we develop AGI, one agent or group is likely to become dramatically more powerful in a relatively short period of time. It seems like it would be very useful if we could be confident they would abide by agreements they made beforehand, in terms of resource sharing, not harming others, respecting their values, and so on. The whole field of AI alignment could be thought of as essentially trying to achieve this inside the AI. I was wondering if you had given any thought to this?

Comment by larks on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-04T21:57:14.727Z · score: 8 (4 votes) · EA · GW

I think Stefan is basically correct, and perhaps we should distinguish between Disclaimers (where I largely agree with Robin's critique) and Disclosure (which I think is very important). For example, suppose a doctor were writing an article about how Amigdelogen can treat infection.

Disclaimers:

  • Obviously, I'm not saying Amigdelogen is the only drug that can treat infection. Also, I'm not saying it can treat cancer. And infection is not the only problem; world hunger is bad too. Also you shouldn't spend 100% of your money on Amigdelogen. And just because we have Amigdelogen doesn't mean you shouldn't be careful about washing your hands.

This is unnecessary because no reasonable person would assume you were making any of these claims. Additionally, as Robin points out, by making these disclaimers you add pressure for others to make them too.

Disclosure:

  • I received a $5,000 payment from the manufacturer of Amigdelogen for writing this article, and hope to impress their hot sales rep.

This is useful information, because readers would otherwise reasonably assume you were unbiased, and this lets them more accurately evaluate how much weight to put on your claim, given that as non-experts they do not have the expertise to directly evaluate the evidence.

Comment by larks on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-04T02:42:30.295Z · score: 32 (15 votes) · EA · GW

You're definitely right that most grant-making organisations do not make much use of such disclaimers. However, I think this is mainly because it just doesn't come up - most grantmaking occurs between people who do not know each other much socially, and are often older and married anyway.

In contrast the EA community, especially in the bay area, is extremely tight socially, and also exhibits a high level of promiscuity. As such the risk for decisions being unduly influenced by personal relationships is significantly higher. For example, back in 2016 OpenPhil revealed that they had advisors living with people they were evaluating, and evaluatees in relationships with OpenPhil staff (source). OpenPhil no longer seem to publish their conflicts of interest, but I suspect similar issues still occur. Separately, I have been told that some people in the bay area community explicitly use sexual relationships to make connections and influence the flow of funds from donors to workers and projects, which seems to raise severe concerns about objectivity and bias, as well as the potential for abuse (in both directions). I would be very concerned by either of these in the private sector, and see little reason to hold EAs to a lower standard.

Donors in general are subject to a significant information asymmetry and have few defenses against improper behaviour from organisations, especially in areas where concrete outputs are scarce. Explicit declarations that specific suspect conduct has not taken place represent a minimum level of such protection.

With regard to your bullet points, I think a good analogy would be disclaimers in financial research. Every piece of financial research comes with multiple pages of disclaimers at the end, including a promise from the authors that the piece represents their true opinions and various sections about financial conflicts of interest. Perhaps the first analysts subject to these requirements found them intrusive - however, by now they are a totally automated and unremarked-upon part of the process. I would expect the same to apply here, partly because every disclosure should ideally say the same thing: "None of the judges were in a relationship with anyone they evaluated."

Indeed, the disclosure requirements in the financial sector cover cases like these quite directly. For example the CFA's Ethical and Professional Standards (2016):

"... requires members and candidates to fully disclose to clients, potential clients and employers all actual and potential conflicts of interest"

and from 2014:

"Members and Candidates must make full and fair disclosure of all matters that could reasonably be expected to impair their independence and objectivity or interfere with respective duties to their clients, prospective clients, and employer. Members and Candidates must ensure that such disclosures are prominent, are delivered in plain language, and communicate the relevant information effectively.

In this case, donors and potential donors to an EA organisation are the equivalent of clients and potential clients of an investment firm, and I think a personal relationship with a grantee could reasonably be expected to impair judgement.

A case I personally came across involved two flatmates who both worked for different divisions in the same bank (Research and Sales&Trading). Because the bank (rightfully) took the separation of these two functions very seriously, HR applied a lot of pressure to them and they found alternative living arrangements.

Another example is lotteries, where the family members of employees are not allowed to participate at all, because their winning would risk bringing the lottery into disrepute:

In most cases the employee's immediate family and employees of lottery suppliers are also not allowed to play. In practice, there is no way that employees could alter the outcome of a game in their favor, but lottery officials generally believe that public confidence would be damaged should an employee win a large prize. (source)

This is perhaps slightly unfair, as they did not choose the employment of their family members, but this seems to be a small cost. The number of lottery family members is very small compared to the lottery-ticket-buying public, and there are other forms of gambling open to them. And the costs here should be smaller still, as all I am suggesting is disclosure, a much milder policy than prohibition.

I did appreciate that the fund's most recent write-up does take note of potential conflicts of interest, along with a wealth of other details. I could not find the sort of conflict of interest policy you suggested on their website however.

Comment by larks on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-03T21:38:23.785Z · score: 31 (15 votes) · EA · GW

Thanks for writing this up. Impressive and super-informative as ever. Especially with Oliver I feel like I get a lot of good insight into your thought process.

Comment by larks on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-03T16:56:18.710Z · score: 3 (5 votes) · EA · GW
This post has been shared within the organisation I work for and I think could do very large damage to the reputation of EA within my org.

Would you mind sharing, at least in general terms, which organisation you work for? I confess that if I ever knew, I have forgotten.


Comment by larks on Analgesics for farm animals · 2019-10-03T14:22:54.809Z · score: 9 (6 votes) · EA · GW

Interesting work, thanks for doing the research. I really appreciate these posts on new topics I had no idea existed.

Comment by larks on Is pain just a signal to enlist altruists? · 2019-10-02T17:44:09.235Z · score: 17 (6 votes) · EA · GW

Wow, this is fascinating speculation, thanks for posting.

The section on pain varying with the social environment was especially interesting. It reminded me of the (common but not uncontroversial) parenting strategy whereby babies are left to cry at night, so as to avoid positively reinforcing crying and instead train them to sleep unaided.

Would it suggest that exhortations to 'stop being a wuss' were actually effective? The nearby people are effectively precommitting to not be moved by visible suffering, which might reduce the incentive for the victim to experience pain.


Comment by larks on Candy for Nets · 2019-09-29T15:02:27.348Z · score: 20 (14 votes) · EA · GW

This is so adorable! I especially like when she volunteered to take over your job.

Comment by larks on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-27T20:39:03.617Z · score: 20 (15 votes) · EA · GW

It is admirably honest of you to highlight and address this, rather than hoping no-one notices.

I don't think any grants we've made have been to anyone who has ever been romantically involved with any of the fund members

Perhaps you could get the other judges to join you in a joint explicit declaration that you've never had any romantic or sexual relationships with any of the recipients? Would be good to put this at the bottom of the writeups.

edit: surprised people have downvoted this. To be clear, I was genuinely impressed that OP directly addressed this, even at the cost of drawing attention to it.

Comment by larks on [Link] Moral Interlude from "The Wizard and the Prophet" · 2019-09-27T19:13:02.054Z · score: 8 (5 votes) · EA · GW
At a 5 percent discount rate, the Argentine-American economist Graciela Chichilnisky has calculated, “the present value of the earth’s aggregate output discounted 200 years from now is a few hundred thousand dollars.”

  • 2019 Global GDP = around $88 trillion
  • Annual Real Growth Rate = assume 2.5%
  • Graciela's discount rate = 5%
  • Present Value of 2219 GDP = (88*10**12)*((1.025)**200)/((1.05)**200) = $710,224,969,039 (over $700 billion)
  • Present Value of 2219 and thereafter: ((88*10**12)/(0.05-0.025))*((1.025)**200)/((1.05)**200) = $28,408,998,761,567 (over $28 trillion)
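As a quick check of the arithmetic above, here is a minimal sketch in Python using the same assumed figures ($88 trillion 2019 global GDP, 2.5% real growth, 5% discount rate); the variable names are just for illustration:

```python
gdp_2019 = 88e12   # assumed 2019 global GDP in dollars
g = 0.025          # assumed annual real growth rate
r = 0.05           # Chichilnisky's discount rate

# Present value today of GDP 200 years from now.
pv_2219 = gdp_2019 * (1 + g) ** 200 / (1 + r) ** 200

# Present value of GDP from 2219 onwards, treated as a growing perpetuity
# (Gordon growth formula) and then discounted back 200 years.
pv_2219_onwards = (gdp_2019 / (r - g)) * (1 + g) ** 200 / (1 + r) ** 200

print(f"PV of 2219 GDP: ${pv_2219:,.0f}")              # roughly $710 billion
print(f"PV of 2219 onwards: ${pv_2219_onwards:,.0f}")  # roughly $28.4 trillion
```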
Comment by larks on Psychology and Climate Change: An Overview · 2019-09-27T16:58:49.547Z · score: 12 (5 votes) · EA · GW
belief in free market ideology is a significant predictor of disbelief in global warming.

The citation here refers back to Heath & Gifford (2006), which is an n=185 survey of Canadians that failed to find a relationship significant at the p=0.05 level in their main regression analysis (Table 3). Their conclusion seems to be justified by 1) this non-significant directional beta and 2) some post-hoc mediation analysis.

Comment by larks on A bunch of new GPI papers · 2019-09-25T17:57:26.583Z · score: 4 (3 votes) · EA · GW

Thanks for linking these here; they look like interesting papers.

Comment by larks on Forum Update: New Features (September 2019) · 2019-09-17T16:51:03.362Z · score: 5 (3 votes) · EA · GW

Thanks, these look like some interesting features.

Are there / should there be any social norms re: replying to someone else's shortform? They seem intuitively sort of 'private property' to me.

Comment by larks on The Long-Term Future: An Attitude Survey · 2019-09-17T01:56:11.034Z · score: 46 (18 votes) · EA · GW

Thanks for doing this work, and making it public. Similar to Max, I basically believe in the Total View, and am sympathetic to Temporal Cosmopolitanism, so consider this somewhat good news.

However, I am a little skeptical about some of the questions. To the extent you are trying to get at what people 'really' think (if they have real views on such a topic...) I worry that some of the questions were phrased in a somewhat biased manner - particularly the ones asking for agreement with the text.

When doing political polling, people generally don't ask questions like this:

Do you agree the government should spend more on law and order?

... because people's level of agreement will be exaggerated. Instead, it's often considered better practice to phrase it more like:

Which Statement do you agree with more?
1) The government should spend more on law and order, even if it means higher taxes.
2) The government should lower taxes, even if it means less spending on law and order.
Comment by larks on Existential Risk and Economic Growth · 2019-09-17T01:26:21.198Z · score: 5 (3 votes) · EA · GW

Thanks very much for writing this, I found it really interesting. I like the way you follow the formalism with many examples.

I have a very simple question, probably due to my misunderstanding - looking at your simulations, you have the fraction of workers and scientists working on consumption going asymptotically to zero, but the terminal growth rate of consumption is positive. Is this a result of consumption economies of scale growing fast enough to offset the decline in worker fraction?

Comment by larks on Cause X Guide · 2019-09-16T01:05:54.203Z · score: 6 (3 votes) · EA · GW

It's also illegal in Turkey and (de jure at least) in China.

Comment by larks on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-16T01:02:09.270Z · score: 7 (5 votes) · EA · GW
I once even wrote a research proposal on this for the CEA Summer Research Fellowship 2017. I was then invited to the programme.

Could you link to the research by any chance?

Comment by larks on A summary of Nicholas Beckstead’s writing on Bayesian Ethics · 2019-09-12T19:45:42.962Z · score: 3 (2 votes) · EA · GW

Thanks for writing this, I found it interesting and it significantly increased the likelihood I'd read the original.

Comment by larks on [Solved] Was my post about things you'd be reluctant to express in front of other EAs manually removed from the front page, and if so, why? · 2019-09-12T18:55:39.621Z · score: 2 (1 votes) · EA · GW

I think it was re-classified as 'Community', which removes it from the front page and puts it in a secondary location. People can still see it but they have to have 'Include Community Posts' ticked, which I think is unchecked by default.

Comment by larks on List of ways in which cost-effectiveness estimates can be misleading · 2019-09-12T03:08:18.657Z · score: 8 (5 votes) · EA · GW

Thanks for writing this.

Could you give an example of this one, please?

Conflating expected value estimates with effectiveness estimates. There is a difference between a 50% chance to save 10 children, and a 100% chance to save 5 children. Estimates sometimes don’t make a clear distinction.

I understand these are two different things, but am wondering exactly what problems you are seeing this equivocation causing. Is this a risk-aversion issue?

Comment by larks on What opinions that you hold would you be reluctant to express publicly to other EAs? · 2019-09-12T02:28:50.446Z · score: 6 (3 votes) · EA · GW
I'd guess that saving a fetus would need to be ~100x more important in expectation than saving a farm animal for reducing abortions to be a potential cause area; in an EA framework, what grounds are there for believing that to be true.

There are a lot of EAs who think that human lives are significantly more important than animal lives, and that future lives matter a lot, so this does not seem totally unreasonable.

The most recent piece I read on the subject was this piece from Scott, with two methodologies that suggested one human was worth 320-500 chickens. Having said that, I think he mis-analysed the data slightly - people who selected "I don't think animals have moral value commensurable with the value of a human, and will skip to the end" should have been coded as assigning really high value to humans, not dropped from the analysis. Making this adjustment gives a median estimate of each human being worth just over 1,000 chickens.

Bear in mind that half of all people have above-median estimates, so it could be very worthwhile for them. Using my alternative coding, the 75th percentile answer was each human being worth 999,999,999 chickens. So even though it might not be worthwhile for some EAs, it definitely could be for others.
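To make the recoding concrete, here is a minimal sketch (toy numbers only, not Scott's actual survey data) of the adjustment described above: treating "incommensurable" responses as an extremely high chickens-per-human value rather than dropping them, before taking the median and 75th percentile.

```python
import numpy as np

# Toy stand-in for the survey column "how many chickens equal one human";
# None marks respondents who said the values are not commensurable.
responses = [50, 100, 320, 500, 1000, None, None, 5000]  # illustrative only

INCOMMENSURABLE = 999_999_999  # code "no finite trade-off" as a very high human value
coded = np.array([INCOMMENSURABLE if r is None else r for r in responses], dtype=float)

print("median chickens per human:", np.median(coded))
print("75th percentile:", np.percentile(coded, 75))
```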

Comment by larks on Effective Pro Bono Projects · 2019-09-12T01:58:09.408Z · score: 2 (1 votes) · EA · GW
It depends on your perspective I suppose, and if you think that regulating/taxing anything is paternalistic and believe that everyone is rational, not addicted, truly knows the health effects, etc. then I agree there would be a percentage of the population where consumer surplus would be taken into account and who enjoy smoking (although this population who says they enjoy smoking today may not say that X years later).

I think you might be mistaken on several counts:

1) Not all taxes are paternalistic (e.g. Pigouvian taxes), but tobacco taxes almost certainly are. Wikipedia:

Paternalism is action that limits a person's or group's liberty or autonomy and is intended to promote their own good.

Rather, the debate is about whether or not paternalism is justified (in this instance). In both this comment and the grandparent I've implicitly assumed that paternalism can be justified, but you're right that there are libertarian arguments that paternalism is essentially always wrong.

You might also enjoy Robin's recent writing on paternalism, though it doesn't directly bear on this argument; unsurprisingly he concludes it is primarily about status.

2) People don't have to be fully rational for their decisions to carry information about welfare. If full rationality were required, then no decision would ever qualify! While it seems clear that humans are not 100% rational, there is still some logic to people's actions. Indeed, models of rational addiction have been around for decades; it is definitely not true to say that addiction invalidates any inference about consumer welfare. See for example Becker and Murphy (1988).

3) 'Truly knows the health effects' is again not required. For example, if people had noisy but unbiased estimates of the health impacts, some would over-consume and some would under-consume, and on average, tobacco taxation would not be welfare-enhancing. If lack of knowledge is the issue, the appropriate response is to provide the information, not to tax the product.

4) Consumer surplus should be taken into account for everyone regardless of whether or not on the whole smoking is optimal. It is an element in the cost-benefit calculation (probably the largest on one side of the equation). It might be larger or smaller for different people, in different circumstances, etc., but that is something that must be estimated and taken into account, not simply ignored.

Comment by larks on Effective Pro Bono Projects · 2019-09-11T14:11:20.203Z · score: 4 (3 votes) · EA · GW

Based on a quick read, it doesn't seem like you take into account the consumer surplus from smoking tobacco? This might not be a small factor:

  • Many smokers report enjoying the experience of smoking.
  • Many people choose to smoke despite knowing about the health effects.
  • Newer forms of tobacco consumption, like vaping, have significantly lower health side-effects.

Indeed, few consumer products will look positive if we ignore the consumer surplus they produce.

Comment by larks on Funding chains in the x-risk/AI safety ecosystem · 2019-09-10T16:48:58.756Z · score: 8 (5 votes) · EA · GW

Looks like the (joint) longest chain is:

  • Patrick
  • EA Funds
  • BERI
  • Survival and Flourishing
  • FLI
  • CEA
  • AI Safety Camp

I am pleased to see there do not seem to be any cycles!
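For what it's worth, here is a minimal sketch of the kind of check this involves, using only the chain quoted above (the full edge list would come from the diagram in the post; the edge list and helper function are purely illustrative):

```python
# Edges point from funder to recipient; only the chain listed above is included here.
funding_edges = [
    ("Patrick", "EA Funds"),
    ("EA Funds", "BERI"),
    ("BERI", "Survival and Flourishing"),
    ("Survival and Flourishing", "FLI"),
    ("FLI", "CEA"),
    ("CEA", "AI Safety Camp"),
]

def has_cycle(edges):
    """Depth-first search for a back-edge in the directed funding graph."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True   # back-edge found: the graph contains a cycle
        if node in done:
            return False
        visiting.add(node)
        cyclic = any(dfs(nxt) for nxt in graph.get(node, []))
        visiting.discard(node)
        done.add(node)
        return cyclic

    return any(dfs(node) for node in list(graph))

print(has_cycle(funding_edges))  # False for the chain above
```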

Comment by larks on Should I give to Our World In Data? · 2019-09-10T12:18:55.458Z · score: 18 (8 votes) · EA · GW
I haven’t seen any EAs (except Steven Pinker) donate or grant here so it might be an overlooked/undervalued opportunity. Besides Pinker and the Bill and Melinda Gates Foundation, I didn’t recognize any of the donors on their donor list.

It seems strange to use the fraction of donors who are EAs as a negative signal, holding their total fundraising constant. Surely the information contained in implicit EA cost-effectiveness estimates dominates the crowding-out effect? In equilibrium this consideration would suggest that EA giving should be equally spread among all groups!

And if this consideration were valid, you might not want EA opinions on the subject. Would you not then equally take persuasive EA arguments for giving there to be a reason to think it is not overlooked/undervalued?

Comment by larks on Consumer preferences for labgrown and plant-based meat · 2019-09-09T22:48:56.946Z · score: 4 (3 votes) · EA · GW
The result that males prefer plant-based or clean meat is surprising

Perhaps because women have stronger disgust reactions, which might reduce support for artificial meat without reducing support for animal rights.

Comment by larks on Our forthcoming AI Safety book · 2019-09-09T21:56:51.492Z · score: 5 (3 votes) · EA · GW

Thank you for changing the title; I think this is significantly better.

Comment by larks on Are we living at the most influential time in history? · 2019-09-06T02:17:23.939Z · score: 7 (4 votes) · EA · GW

Thanks for writing this all up! A few small comments:

And, for the Time of Perils view to really support HoH, it’s not quite enough to show that extinction risk is unusually high; what’s needed is that extinction risk mitigation efforts are unusually cost-effective. So part of the view must be not only that extinction risk is unusually high at this time, but also that longtermist altruists are unusually well-placed to decrease those risks — perhaps because extinction risk reduction is unusually neglected.

It could even be the case that extinction risks were unusually low right now, but this period is nonetheless unusually critical because of the tractability. For example, suppose the main risk to mankind were an asteroid impact or a supervolcano. Prior to the 20th century, there was little we could do about it - and after the 21st century we will have mature space colonies, so it will no longer be an extinction risk. Only in the interim can we do anything to reduce the probability, by researching the threats, attempting to redirect asteroids, accelerating colonization, and so on.

The primary reasons for believing (2) are that if we’re in a simulation it’s much more likely that the future is short, and that extending our future doesn’t change the total amount of lived experiences (because the simulators will just run some other simulation afterwards), and that we’re missing some crucial consideration around how to act.

I know you mention acausal decision theories elsewhere, but I think it is worthwhile bringing them up here. If we are in an ancestor simulation, it is rational for us to try to reduce existential risk, because this decision is acausally entangled with the decision of the 'original' people, whose existential risk reduction efforts causally lead to the existence of the simulation.

Similarly, I think your prior over our position needs to directly address anthropic Doomsday-type arguments.

In contrast, if you are more sympathetic to moral realism (or a more sophisticated form of subjectivism), as I am, then you’ll probably be more sympathetic to the idea that future people will have a better understanding of what’s of value than you do now, and this gives another reason for passing the baton on to future generations.

I think you might be overstating the case here. Suppose you assigned 20% credence to some sort of subjectivist/Lovecraftian parochialism that places a high value on our actual values right now, 50% to meta-ethical moral realism and predicted moral progress in the future, and 30% to other (e.g. moral realism but not moral progress). It seems this would suggest a nearly 20% credence in now being a hinge period. In contrast, according to the moral realist theory, now is not an especially important time. So for moral uncertainty reasons we should act as if now is an unusually important period.

even then you might still want to save money in a Victorian-values foundation to grant out at a later date

I suspect unfortunately the money may end up being essentially stolen and used for other purposes. There are many examples of this - a classic one is the Ford Foundation, which now promotes goals quite different from that which Henry Ford wanted.

Comment by larks on What to know before talking with journalists about EA · 2019-09-05T12:35:45.117Z · score: 4 (2 votes) · EA · GW

The link in the second paragraph doesn't work:

If you receive media inquiries, I encourage you to email me or schedule a chat. You can also refer others to me, to this post, or to our full guide: Advice for responding to journalists.

but the link in the final paragraph does:

Please see Advice for responding to journalists for more information about the recommendations we’ve received from media professionals. We hope community members will read it, leave a comment here, and/or contact us with any feedback, questions, or additional recommendations.

Strangely, the two paragraphs also feature different CEA email addresses for Sky.

Comment by larks on Why were people skeptical about RAISE? · 2019-09-05T00:06:57.995Z · score: 8 (4 votes) · EA · GW

My notes from the time suggest I thought the team was inexperienced relative to the difficulty of the project, and that their roadmap was poorly calibrated.

Comment by larks on Global basic education as a missing cause priority · 2019-08-21T19:46:16.764Z · score: 2 (1 votes) · EA · GW

If education is good because it promotes autonomy, that is an instrumental value, not an intrinsic one. It would be intrinsically valuable if people said "I care about education even though it doesn't promote autonomy or health or anything else valuable."

You might like to read this SEP article about the difference.

Comment by larks on Logarithmic Scales of Pleasure and Pain: Rating, Ranking, and Comparing Peak Experiences Suggest the Existence of Long Tails for Bliss and Suffering · 2019-08-21T01:24:22.570Z · score: 5 (2 votes) · EA · GW

Interesting article!

It's particularly noteworthy how closely related two of the key events are: the wonders of Birth of Children vs the pain of Childbirth. I wonder if this suggests that targeting childbirth would be particularly effective, as it might 'unlock' a bit more of one of the key pleasures in a way that the other key pains do not. It seems like there might be some low hanging fruit here - in particular there are a lot of medications which have not been tested for use in pregnant women, where a clinical trial might increase women's options. I could see there being cases where it doesn't make sense for pharma companies to do the trials (because the number of additional patients is small) but it would for us (because these patients matter a lot more, even though they would only pay the same amount for the drug).

Unrelatedly, another reason to expect Death of Father to be over-represented vs Death of Mother is that on average older men marry younger women.

Comment by larks on What's the most effective organisation that deals with mental health as an issue? · 2019-08-19T20:06:10.269Z · score: 12 (9 votes) · EA · GW

You might be interested in Cause Area: Mental Health, a post which won second place in the December forum prize.

Comment by larks on Ask Me Anything! · 2019-08-15T13:37:20.074Z · score: 4 (25 votes) · EA · GW

PlayPumps: overrated or underrated?

Comment by larks on 12 years of education as a missing cause priority · 2019-08-08T18:28:19.825Z · score: 5 (4 votes) · EA · GW

Most effective at the current margin!

We can generalise the argument to include other desiderata like neglectedness:

1) EA priorities in expectation maximise some objective function.

2) Relatively few things maximise each objective function, and there are relatively few objective functions.

3) Hence relatively few things will be EA priorities.

Comment by larks on 12 years of education as a missing cause priority · 2019-08-08T09:56:55.875Z · score: 8 (4 votes) · EA · GW

I realise this is not exactly the sort of answer you're looking for, but it's worth noting that EA priorities are meant to be in some sense the very most effective interventions in the world, so most things will not be EA priorities. As such I think it makes a lot more sense to place the burden of proof on people to prove that a cause is highly effective, rather than on those who do not believe in the cause.

Comment by larks on Leverage Research: reviewing the basic facts · 2019-08-03T16:58:13.338Z · score: 24 (8 votes) · EA · GW
My plan, apart from the post here, was to post something over the next month.

Did you end up posting anything on this subject?

Comment by larks on Cluster Headache Frequency Follows a Long-Tail Distribution · 2019-08-03T16:52:47.244Z · score: 8 (5 votes) · EA · GW

This is very interesting, thanks for doing this work.

I would note that members of a cluster headache subreddit are unlikely to be representative of the broader population that generated the 1/1000 figure. Presumably they experience a disproportionately large number of headaches.

Comment by larks on Four practices where EAs ought to course-correct · 2019-08-03T16:43:33.068Z · score: 4 (3 votes) · EA · GW

It seems like a big distinction between the two lies in how quickly they could be rolled out. A pre-WWII database of religion would have taken a long time to create, so pre-emptively not creating one significantly inhibited the Germans, while the US already had the census data so could intern the Japanese. But it doesn't seem likely that not using facial recognition now would make it significantly harder to use later.

Comment by larks on How urgent are extreme climate change risks? · 2019-08-01T12:27:21.179Z · score: 19 (8 votes) · EA · GW

You might enjoy Vox arguing that it is not an existential risk.

Also I would note that there are already many organisations, researchers, engineers, policymakers and lobbyists working on the issue.


Comment by larks on 'Longtermism' · 2019-07-30T22:39:20.210Z · score: 5 (4 votes) · EA · GW

It is not that easy to distinguish between these two theories! Consider three worlds:

  • Sam exists with welfare 20
  • Sam does not exist
  • Sam exists with welfare 30

If you don't value creating positive people, you end up being indifferent between the first and second worlds, and between the second and third worlds... But then by 2), you want to prefer the third to the first, suggesting a violation of transitivity.

Comment by larks on Four practices where EAs ought to course-correct · 2019-07-30T10:56:21.684Z · score: 17 (5 votes) · EA · GW

it's pretty easy for me to minimize or avoid unhealthy foods such as ... fortified cereal

Sorry for the tangent to the main point of the post, but is fortified cereal bad? I had assumed that public health authorities + food companies were adding useful nutrients than most people's diets lacked.

Comment by larks on The EA Holiday Calendar · 2019-07-30T10:44:36.082Z · score: 10 (7 votes) · EA · GW

This is a cool idea.

Perhaps William Wilberforce's birthday? The abolition of slavery is probably one of the biggest improvements in world history, and has many parallels for contemporary EA issues.

I would consider replacing Contraception Day (which might be good but is not a canonical EA cause, and is at least prima facie in conflict with the Total View) with an explicitly somber day (similar to Yom Kippur), like Holocaust Memorial Day, or the anniversary of the bombing of Hiroshima.

Possibly some space-related day could be a nice optimistic note, like the first man in space, or the moon landing.

You could also have the founding of GWWC.

Comment by larks on 'Longtermism' · 2019-07-26T01:44:38.722Z · score: 41 (23 votes) · EA · GW

Thanks for writing this; I thought it was good.

I would wonder if we might consider weakening this a little:

(i) Those who live at future times matter just as much, morally, as those who live today;

Anecdotally, it seems that many people - even people I've spoken to at EA events! - consider future generations to have zero value. Caring any amount about future people at all is already a significant divergence, and I would instinctively say that someone who cared about the indefinite future, but applied a modest discount factor, was also longtermist, in the colloquial-EA sense of the word.

Comment by larks on Age-Weighted Voting · 2019-07-16T03:46:16.379Z · score: 65 (19 votes) · EA · GW

I performed a very cursory literature review on the subject. Overall it seems the psychology research suggests that older people discount the future less than younger people, which might suggest giving their votes more weight.

Usually such a brief perusal of the literature would not give me a huge amount of confidence in the core claims; however in this case the conclusion should seem prima facie very plausible to anyone who has ever met a young boy.

In no particular order:

Age Differences in Temporal Discounting: The Role of Dispositional Affect and Anticipated Emotions:

Advanced age was associated with a lower tendency to discount the future, but this effect reached statistical significance only for the discounting of delayed gains.

Age-Related Changes in Decision Making:

Older adults are not always risk-averse, and their ability to postpone gratification tends to exceed that of younger adults.

Aging and altruism in intertemporal choice:

Research on life span changes in motivation suggests that altruistic motives become stronger with age, but no prior research has examined how altruism affects tolerance for temporal delays. Experiment 1 used a realistic financial decision making task involving choices for gains, losses, and donations. Each decision required an intertemporal choice between a smaller-immediate and a larger-later option. Participants more often chose the larger-later option in the context of donations than in the context of losses; thus, parting with more of their overall capital when the act of doing so benefited a charity. As predicted, the magnitude of this “altruism effect” was amplified in older relative to younger adults.

Following Advice Because it's Been Paid For: Age, the Sunk-Cost Fallacy, and Loss Aversion:

[Y]ounger adults commit the sunk-cost fallacy more frequently, and make normatively correct decisions less frequently, than older adults when hypothetical sunk costs are at stake ... Younger adults made fewer initial investments of money on the Calorie Estimation Task than older adults. Younger adults demonstrated the sunk-cost fallacy more frequently, overinvested more after demonstrating the fallacy, and made the normatively correct decision less frequently than older adults on the hypothetical self-report measure of the sunk-cost fallacy. Younger adults indicated that they were more averse to hypothetical monetary losses on a Tradeoff Loss Aversion Task and more averse to hypothetical monetary losses on a Delay Discounting Loss Aversion Task. ... Older adults did not demonstrate the sunk-cost fallacy on the Calorie Estimation Task by following more expensive advice more closely than less expensive advice.

Decision Making in Older Adults: A Comparison of Delay and Probability Discounting Across Ages:

[O]lder adults discounted delayed rewards less steeply than young adults

Acute stress and altruism in younger and older adults:

Even under acute psychosocial stress, older adults make more altruistic choices than younger adults.

Age-related differences in discounting future gains and losses:

Results indicated that impaired older adults discounted the future more than unimpaired older adults. Interestingly, middle-aged adults discounted future gains at a similar rate as impaired older adults, but discounted future losses less than impaired older adults (and similarly to unimpaired older adults).

Age‐related differences in delay discounting: Immediate reward, reward magnitude, and social influence:

Younger adults exhibited higher levels of delay discounting than older adults.

The only evidence I've found that older people might discount more was a Chinese study. I'm not sure why Chinese people would be different in this regard, though obviously their culture is different in many ways - or, the paper might just be wrong.

Age differences in delay discounting in Chinese adults:

The current study aimed to clarify this relationship using a relatively large sample of Chinese adults with a wide age range (viz., 18 to 86 years old). A total of 1,288 individuals completed the Monetary Choice Questionnaire. Results showed that the rate of delay discounting increased with age across adulthood, with younger participants (18–30 years) discounting less than both middle-aged participants (31–60 years) and older participants (over 60 years); and middle-aged participants discounting less than older participants. Furthermore, when the reward magnitude was large, participants were more likely to wait for delayed rewards.