Comment by larks on Activism to Make Kidney Sales Legal · 2019-04-09T21:43:50.634Z · score: 2 (1 votes) · EA · GW

My impression is that government prosecutors have a lot of discretion, so if you look too sympathetic they would simply turn a blind eye rather than suffer the negative media attention.

Comment by larks on Stefan Schubert: Psychology of Existential Risk and Long-Termism · 2019-03-25T22:08:39.790Z · score: 3 (2 votes) · EA · GW

Thanks, this was very interesting. Quick question - is there meant to be an answer to this question?

Question: Were there any differences in zebra affinity between Americans and Britons?

Comment by larks on How to Understand and Mitigate Risk (Crosspost from LessWrong) · 2019-03-14T21:40:53.038Z · score: 2 (1 votes) · EA · GW
It's quite easy to research the cost of creating a rice farm, or a power plant, as well as get a tight bounded probability distribution for the expected price you can sell your rice or electricity at after making the initial investment. These markets are very mature and there's unlikely to be wild swings or unexpected innovations that significantly change the market.

This doesn't affect your overall article much, but it's worth noting that commodity prices can be very volatile. Looking up the generic rice contract on Bloomberg, for example, and picking the more extreme years but the same month (to avoid seasonality):

  • 1998 April: 10.2
  • 2002 April: 3.6
  • 2004 April: 11.3
  • 2005 April: 7.2
  • 2008 April: 23.8
  • 2010 April: 12.6
  • 2013 April: 15.8
  • 2015 April: 10.0

You do have the ability to lock in the current implied profitability using futures, but in general commodity markets seem to be more volatile than non-commodity markets.
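
As a quick back-of-the-envelope check, here is a minimal Python sketch using only the April prices listed above (so treat the output as a rough illustration rather than a proper volatility estimate):

```python
# Rough check on the April rice prices quoted above.
prices = {1998: 10.2, 2002: 3.6, 2004: 11.3, 2005: 7.2,
          2008: 23.8, 2010: 12.6, 2013: 15.8, 2015: 10.0}

low, high = min(prices.values()), max(prices.values())
print(f"Peak vs trough: {high / low:.1f}x")  # roughly 6.6x between the 2002 low and the 2008 high

# Largest percentage move between consecutive listed observations
years = sorted(prices)
moves = [abs(prices[b] / prices[a] - 1) for a, b in zip(years, years[1:])]
print(f"Biggest move between listed dates: {max(moves):.0%}")  # ~231%, from 2005 to 2008
```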

Comment by larks on Brian Tse: Risks from Great Power Conflicts · 2019-03-14T02:27:14.805Z · score: 5 (3 votes) · EA · GW
I think one paper shows that there were almost 40 near misses, and I think that was put up by the Future of Life Institute, so some people can look up that paper, and I think that in general it seems that experts agree some of the biggest risks from nuclear would be accidental use, rather than deliberate and malicious use between countries.

Possibly you are thinking of the Global Catastrophic Risks Institute, and Baum et al.'s A Model for the Probability of Nuclear War?

Comment by larks on EA/X-risk/AI Alignment Coverage in Journalism · 2019-03-10T17:15:40.625Z · score: 8 (4 votes) · EA · GW

Thanks for highlighting this, I thought it was interesting. It does seem that, if you thought getting Vox to write about AI was good, it would be good to have an offsetting right-wing spokesman on the issue.

One related point would be that we can try to avoid excessively associating AI risk with left-wing causes; discrimination is the obvious one. The alternative would be to try to come up with right-wing causes to associate it with as well; I have one idea, but I think this strategy may be a bad idea so am loath to share it.

Comment by larks on SHIC Will Suspend Outreach Operations · 2019-03-08T03:01:56.667Z · score: 52 (27 votes) · EA · GW

This was very interesting. Retrospectives on projects that didn't work can be extremely helpful to others, but I imagine can also be tough to write, so thanks very much!

Comment by larks on Making discussions in EA groups inclusive · 2019-03-05T04:15:10.691Z · score: 25 (14 votes) · EA · GW

It takes a long time to craft a response to posts like these. Even if there are clear problems with the post, given the sensitive topic you have to spend a lot of time on nuance, checking citations, and getting the tone right. That is a very high bar, one that I don't think is reasonable to expect everyone to pass. In contrast, people who agree seem to get a pass for silently upvoting.

Comment by larks on Making discussions in EA groups inclusive · 2019-03-05T02:00:11.731Z · score: 39 (17 votes) · EA · GW

While I appreciate your saying you don't intend to ban topics, I think there is considerable risk that this sort of policy becomes a form of de facto censorship. In the same way that we should be wary of Isolated Demands for Rigour, so too should we be wary of Isolated Demands for Sensitivity.

Take for example the first item on your list - let's call it A).

Whether it is or has been right or necessary that women have less influence over intellectual debate and less economic and political power

I agree that this is not a great topic for an EA discussion. I haven't seen any arguments about the cost-effectiveness of a cause area that rely on whether A) is true or false. It seems unlikely that specifically feminist or anti-feminist causes would be the best things to work on, even if you thought A) was very true or false. If such a topic was very distracting, I can even see it making sense to essentially ban discussion of it, as LessWrong used to do in practice with regard to Politics.

My concern is that a rule/recommendation against discussing such a topic might in practice be applied very unequally. For example, I think that someone who says

As you know, women have long suffered from discrimination, resulting in a lack of political power, and their contributions being overlooked. This is unjust, and the effects are still felt today.

would not be chastised for doing so, or feel that they had violated the rule/suggestion.

However, my guess is that someone who said

As you know, the degree of discrimination against women has been greatly exaggerated, and in many areas, like conscription or homicide risk, they actually enjoy major advantages over men.

might be criticized for doing so, and might even agree (if only privately) that they had in some sense violated this rule/guideline with regard to topic A).

If this is the case, then this policy is de facto a silencing not of topics, but of opinions, which I think is much harder to justify.

As a list of verboten opinions, this list also has the undesirable attribute of being very partisan. Looking down the list, it seems that in almost every case the discouraged/forbidden opinion is, in contemporary US political parlance, the (more) Right Wing opinion, and the assumed 'default' 'acceptable' one is the (more) Left Wing opinion. In addition, my impression (though I am less sure here) is that it is also biased against opinions disproportionately held by older people.

And yet these are two groups that are dramatically under-represented in the EA movement! (source) Certainly it seems that, on a numerical basis, conservatives are more under-represented than some of the protected groups mentioned in this article. This sort of list seems likely to make older and more conservative people feel less welcome, not more. Various viewpoints they might object to have been enshrined, and other topics, whose discussion conservatives find distasteful but is nonetheless not uncommon in the EA community, are not contraindicated.

For a generally well-received article on how to partially address this, you might enjoy Ozy's piece here.

Comment by larks on Research on Effective Strategies for Equity and Inclusion in Movement-Building · 2019-03-05T00:01:07.138Z · score: 13 (5 votes) · EA · GW

Here is a recent study on the topic that I think is very relevant:

Gender, Race, and Entrepreneurship: A Randomized Field Experiment on Venture Capitalists and Angels (Gornall and Strebulaev)
We sent out 80,000 pitch emails introducing promising but fictitious start-ups to 28,000 venture capitalists and business angels. Each email was sent by a fictitious entrepreneur with a randomly selected gender (male or female) and race (Asian or White). Female entrepreneurs received an 8% higher rate of interested replies than male entrepreneurs pitching identical projects. Asian entrepreneurs received a 6% higher rate than White entrepreneurs. Our results are not consistent with discrimination against females or Asians at the initial contact stage of the investment process.
link

Though the study is about venture capital rather than EA, it does seem pretty applicable. The EA community is in many ways similar to the VC community:

  • Similar geographies: the Bay Area, London, New York etc.
  • Similar education backgrounds.
  • Both involve evaluating speculative projects with a lot of uncertainty.

Similarly to the studies discussed above, this finds that people are biased against white men.

(I have some qualms about this type of study, because it involves wasting people's time without their consent, but this doesn't affect the conclusions.)

Comment by larks on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-26T12:55:28.295Z · score: 42 (21 votes) · EA · GW

Great post. I'm sure writing this must have been tough, so thanks very much for sharing this.

Comment by larks on Impact Prizes as an alternative to Certificates of Impact · 2019-02-26T04:19:12.082Z · score: 4 (2 votes) · EA · GW

Great post; I had been thinking about writing something very similar. In many ways I think you have actually understated the potential of the idea. Additionally I think it addresses some of the concerns Owen raised last time.

Evaluation Costs
The final prize evaluations could be quite costly to produce.

I actually think the final evaluations might be cheaper than the status quo. At the moment OpenPhil (or whoever) has to do two things:

1) Judge how good an outcome is.

2) Judge how likely different outcomes are.

With this plan, 2) has been (partially) outsourced to the market, leaving them with just 1).

Cultural Risks
If Impact Prizes took off, I could imagine some actors drawing into the ecosystem who only motivated by making profits.

This is not a bug, this is a feature! There is a very large pool of people willing to predict arbitrary outcomes in return for money, which we have thus far only very indirectly been tapping into. In general, bringing in more traders improves the efficiency of a market. Even if you add noise traders, their presence improves the incentives for 'smart money' to participate. I think it's unlikely we'd reach the scale required for actual hedge funds to get involved, but I do think it's plausible we could get a lot of hedge fund guys participating in their spare time.

Legal Implications

In terms of legal status, one option I've been thinking about would be copying PredictIt. If we have to pay taxes every time a certificate is transferred, the transaction costs will be prohibitive. I am quite worried it will be hard to make this work within US law, which unfortunately is not very friendly to this sort of experimentation. At the same time, given the SEC's attitude towards non-compliant security issuance, I would not want to operate outside it!

Quick other thoughts

One issue with the idea is that it is hard for OpenPhil to add more promised funding later, because the initial investment will already have been committed at some fixed level. For example, if OpenPhil initially promises $10m and then later bumps it to $20m, projects that have already sold their tokens cannot expand to take advantage of this increase, so it is effectively pure windfall with no incentive effect. A possible solution would be cohorts: we promise $10m in 2022 for projects started in 2019, and then later add another $12m, paid in 2023, for 2020 projects.

Comment by larks on Impact Prizes as an alternative to Certificates of Impact · 2019-02-26T04:15:53.893Z · score: 10 (3 votes) · EA · GW

I think I might have been the second largest purchaser of the certificates. My experience was that we didn't attract the really high quality projects I'd want, and those we did see had very high reservation prices from the sellers, perhaps due to the endowment effect. I suspect sellers might say that they didn't see enough buyers. Possibly we just had a chicken-and-egg problem, combined with everyone involved being kind of busy.

Comment by larks on Three Biases That Made Me Believe in AI Risk · 2019-02-16T00:18:47.719Z · score: 5 (3 votes) · EA · GW

Could you go into a bit more detail about the two linguistic styles you described, perhaps using non-AI examples? My interpretation of them is basically agent-focused vs internal-mechanics-focused, but I'm not sure this is exactly what you mean.

If the above is correct, it seems like you're basically saying that internal-mechanics-focused descriptions work better for currently existing AI systems, which seems true to me for things like self-driving cars. But for something like AlphaZero, or Stockfish, I think an agentic framing is often actually quite useful:

A chess/Go AI is easy to imagine: they are smart and autonomous and you can trust the bot like you trust a human player. They can make mistakes but probably have good intent. When they encounter an unfamiliar game situation they can think about the correct way to proceed. They behave in concordance with the goal (winning the game) their creator set them and they tend to make smart decisions. If anything goes wrong then the bot is at fault.

So I think the reason this type of language doesn't work well for self-driving cars is because they aren't sufficiently agent-like. But we know genuinely agentic systems can exist - humans are an example - so it seems plausible to me that agentic language will be the best descriptor for future AIs. Certainly it is currently the best descriptor for them, given that we do not understand the internal mechanics of as-yet-uninvented AIs.

Comment by larks on Research on Effective Strategies for Equity and Inclusion in Movement-Building · 2019-02-01T03:22:36.786Z · score: 27 (16 votes) · EA · GW
In general it’s probably best not to anonymize applications. Field studies generally show no effect on interview selection, and sometimes even show a negative effect (which has also been seen in the lab). Blinding may work for musicians, randomly generated resumes, and identical expressions of interest, but in reality there seem to be subtle cues of an applicant’s background that evaluators may pick up on, and the risk of anonymization backfiring is higher for recruiting groups which are actively interested in DEI. This may be because they are unable to proactively check their biases when blind, or to proactively accommodate disadvantaged candidates at this recruitment stage, or because their staff is already more diverse and people may favor candidates they identify with demographically.

I think you are mis-describing these studies. Essentially, they found that when reviewers knew the race and sex of the applicants, they were biased in favour of women and non-whites, and against white males.

I admit I only read two of the studies you linked to, but I think these quotes from them are quite clear about the conclusions:

We find that participating firms become less likely to interview and hire minority candidates when receiving anonymous resumes.

The public servants reviewing the job applicants engaged in discrimination that favoured female applicants and disadvantaged male candidates

Affirmative action towards the Indigenous female candidate is the largest, being 22.2% more likely to be short listed on average when identified compared to the de-identified condition. On the other hand, the identified Indigenous male CV is 9.4% more likely to be shortlisted on average compared to when it is de-identified. In absolute terms most minority candidates are on average more likely to be shortlisted when named compared to the de-identified condition, but the difference for the Indigenous female candidate is the only one that is statistically significant at the 95% confidence level.

This is also supported by other papers on the subject. For example, you might enjoy reading Williams and Ceci (2015):

The underrepresentation of women in academic science is typically attributed, both in scientific literature and in the media, to sexist hiring. Here we report five hiring experiments in which faculty evaluated hypothetical female and male applicants, using systematically varied profiles disguising identical scholarship, for assistant professorships in biology, engineering, economics, and psychology. Contrary to prevailing assumptions, men and women faculty members from all four fields preferred female applicants 2:1 over identically qualified males with matching lifestyles (single, married, divorced), with the exception of male economists, who showed no gender preference. Comparing different lifestyles revealed that women preferred divorced mothers to married fathers and that men preferred mothers who took parental leaves to mothers who did not. Our findings, supported by real-world academic hiring data, suggest advantages for women launching academic science careers.

This doesn't mean that anonymizing applications is a bad idea - it appears to have successfully reduced unfair bias - rather that the bias was in the opposite direction from the one the authors expected to find.

Comment by larks on EA Forum Prize: Winners for December 2018 · 2019-01-31T02:11:16.815Z · score: 16 (10 votes) · EA · GW

Thanks very much! I think this prize is a great idea. I was definitely motivated to invest more time and effort by the hope of winning the prize (along with the satisfaction of getting front page with a lot of karma).

Comment by larks on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-28T02:26:06.780Z · score: 14 (5 votes) · EA · GW

I have definitely heard people referring to Future Perfect as 'the EA part of Vox' or similar.

Comment by larks on How can I internalize my most impactful negative externalities? · 2019-01-17T22:12:34.762Z · score: 6 (5 votes) · EA · GW

You might enjoy this post Claire wrote: Ethical Offsetting is Antithetical to EA.

Comment by larks on What Is Effective Altruism? · 2019-01-14T02:45:36.192Z · score: 1 (2 votes) · EA · GW

Thanks for writing this, I thought it was quite a good summary. However, I would like to push back on two things.

Effective altruism is egalitarian. Effective altruism values all people equally

I often think of age as being one dimension that egalitarians think should not influence how important someone is. However, despite GiveWell being one of the archetypal EA organisations (along with GWWC/CEA), they do not do this. Rather, they value middle-aged years of life more highly than years of life for babies or the elderly. See for example this page here. Perhaps EA should be egalitarian, but de facto it does not seem to be.

Effective altruism is secular. It does not recommend charities that most effectively get people into Heaven ...

This item seems rather different from the other items on the list. Most of the others seem like rational positions for virtually anyone to hold. However, if you were religious, this tenet seems very irrational - helping people get into heaven would be the most effective thing you could do! Putting this here seems akin to saying that AMF is an EA value; rather, these are conclusions, not premises.

Additionally, there is some evidence that promoting religion might be beneficial even on strictly material grounds. Have you seen the recent pre-registered RCT on Protestant evangelism?

To test the causal impact of religiosity, we conducted a randomized evaluation of an evangelical Protestant Christian values and theology education program that consisted of 15 weekly half-hour sessions. We analyze outcomes for 6,276 ultra-poor Filipino households six months after the program ended. We find significant increases in religiosity and income, no significant changes in total labor supply, assets, consumption, food security, or life satisfaction, and a significant decrease in perceived relative economic status. Exploratory analysis suggests the program may have improved hygienic practices and increased household discord, and that the income treatment effect may operate through increasing grit.

https://www.nber.org/papers/w24278.pdf

I don't have a strong view on whether or not this is actually a good thing to do, let alone the best thing. RCTs provide high-quality causal evidence, but even then most interventions do not work very well, and I'm not an expert on the impact of evangelism. But it seems strange to assume from the very beginning that it is not something EAs would ever be interested in.

Comment by larks on EA Giving Tuesday Donation Matching Initiative 2018 Retrospective · 2019-01-06T18:22:46.296Z · score: 11 (8 votes) · EA · GW

Congratulations guys, this is really impressive. Thanks for all the work you put into this.

Comment by larks on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-05T18:21:27.168Z · score: 10 (4 votes) · EA · GW

My general model is that charities get funding in two waves:

1) December

2) The rest of the year

As such, if I ask groups for their runway at the beginning of 1), and they say they have 12 months, that basically means that even if they failed to raise any money at all in the following 1) and 2) they would still survive until next December, at which point they could be bailed out.

However, I now think this is rather unfair, as in some sense I'm playing donor-of-last-resort with other December donors. So yes, I think 18 months may be a more reasonable threshold.

Comment by larks on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-05T18:18:02.793Z · score: 3 (2 votes) · EA · GW

No principled reason, other than that this is not really my field, and I ran out of time, especially for work produced outside donate-able organizations. Sorry!

Comment by larks on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-05T18:15:49.571Z · score: 2 (1 votes) · EA · GW
It's also worth noting that I believe the new managers do not have access to large pots of discretionary funding (easier to deploy than EA Funds) that they can use to fund opportunities that they find.

Good point!

Comment by larks on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-05T18:14:36.149Z · score: 9 (4 votes) · EA · GW

I'm glad you found it helpful!

I don't have a great system. I combined a few things:

1) Organisations' websites

2) Backtracking from citations in papers, especially those published very recently

3) Authors' own websites for some key authors

4) 'cited by' in Google scholar for key papers, like Concrete Problems

5) Asking organisations what else I should read - many do not have up-to-date websites.

6) Randomly coming across things on facebook, twitter, etc.

7) Rohin's excellent newsletter.

Comment by larks on What’s the Use In Physics? · 2018-12-31T15:46:32.931Z · score: 3 (2 votes) · EA · GW

Great post, thanks for collecting all these in one place.

Comment by larks on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-24T01:54:18.057Z · score: 7 (11 votes) · EA · GW

According to this article on the pledge:

While the Pledge was originally focused on global poverty, since 2014 it has been cause-neutral. Members commit to donate to the organizations they believe are most effective at improving the lives of others.

Specifically, originally the pledge did not include animal welfare groups, but was later 'amended' to include them. Is there a principled reason to include animal welfare, but not religious outreach? They seem quite similar:

1) Both ingroups have (by their lights) strong reasons to think what they are doing is literally the most important thing in the world.

2) Many/most people agree with premises that logically imply the importance of both causes (i.e. many people are religious and believe in heaven, and many people believe animal cruelty is bad)

3) Both causes are seen as somewhat weird by most people, despite 2)

4) Both causes are quite far from the original stated and de facto goals of GWWC, namely helping people in the third world.

Comment by larks on The case for taking AI seriously as a threat to humanity · 2018-12-24T01:40:43.332Z · score: 37 (13 votes) · EA · GW

Overall I think this is a great article. It seems like it could be one of the best pieces for introducing new people to the subject.

People sometimes try to gauge the overall views of an author by the relative amounts of page-space they dedicate to different topics, which is bad if you generally agree with something, but want to make a detailed objection to a minor point. I think Kelsey's article is good, and don't want the below to detract from this.

To try to counteract this effect, I have deliberately spent the top three paragraphs explaining that this article is very good, before coming to the main point of the comment.

However, I do object to this section:

When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too.

The text links to another Vox article, which ultimately linked to this ProPublica article, which argues that a specific reoffending-prediction system was bad because:

The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.

Separately it notes

When a full range of crimes were taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years.

At this point, alarm bells should be ringing in your head. "More accurate than a coin flip" is not the correct way to analyze the accuracy of a binary test for an outcome unless the actual distribution is also 50:50! If fewer than 50% of people re-offend, a coin flip will get less than 50% right on those it classifies as high risk. Using the coin flip analogy is a rhetorical sleight of hand to make readers adopt the wrong analytical framework, and make the test look significantly worse than it actually is.

Now that we've seen that the ProPublica authors perhaps cannot be entirely trusted to represent the data accurately, let's go back to the headline statement: that the false positive rate is higher for blacks than whites.

This is true, but in a trivial sense.

Blacks commit more crime than whites. This is true regardless of whether you look at arrest data, conviction data, or victimization surveys. (Even if you only asked Black people who committed crimes against them, this result still holds; it also holds when looking just at recidivism.) As a result of this base rate, any unbiased algorithm will have more false positives for blacks, even if it is equally accurate for both races at any given level of risk.

Here are some simple numbers, lifted from Chris's excellent presentation on the subject, to illustrate this point:

Simplified numbers: High risk = 60% chance of recidivism, low risk = 20%.
Black people: 60% labelled high risk * 40% chance of no recidivism = 24% chance of “labelled high risk, didn’t recidivate”.
White people: 30% labelled high risk * 40% chance of no recidivism = 12% chance of “labelled high risk, didn’t recidivate”.
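
Here is a minimal Python sketch of the same arithmetic, assuming (as in the simplified numbers above) that the test is equally well calibrated for both groups and only the share labelled high risk differs:

```python
# Mirrors the simplified numbers above: 60% of those labelled high risk recidivate,
# in both groups; the only difference is what fraction of each group gets the label.

def labelled_high_but_no_recidivism(share_high_risk, p_recidivate_given_high=0.6):
    """Fraction of the whole group that is labelled high risk AND does not recidivate."""
    return share_high_risk * (1 - p_recidivate_given_high)

for group, share_high in [("Higher-incidence group", 0.6),
                          ("Lower-incidence group", 0.3)]:
    rate = labelled_high_but_no_recidivism(share_high)
    print(f"{group}: {rate:.0%} labelled high risk but did not recidivate")
# Prints 24% and 12% - the gap appears despite identical calibration for both groups.
```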

It is a trivial statistical fact that any decent statistical test will have a higher false positive rate for subgroups with higher incidence. To avoid this, you'd have to adopt a test which included a specific "if white, increase risk" factor, and you would end up releasing more people who would reoffend, and keeping in jail people who would not. None of these seem like acceptable consequences.

Strangely however, neither the Vox article that this one linked to, nor the original ProPublica piece, mentioned this fact - I suspect due to the same political bias kbog discussed recently. There are good reasons to be concerned about the application of algorithms in areas like these. But damning the algorithms as racist for statistically misleading reasons, without explaining to readers the underlying reasons for these statistics, suggests that the authors have either failed to understand the data, or are actively trying to mislead their readers. I would recommend against linking to either article in future as evidence for the claims.

EDIT: Washington Post had a very good article explaining this also.

Comment by larks on Women's Empowerment: Founders Pledge report and recommendations · 2018-12-22T02:05:53.161Z · score: 14 (7 votes) · EA · GW

"at least 35% of women worldwide have experienced some form of physical or sexual violence."

The article uses this statistic to try to motivate why we might be interested in charities that focus specifically on women. However, we cannot evaluate this statistic in isolation: to draw this conclusion we need to compare against assault rates for men.

I wasn't able to immediately find a comparable stat for men - the source for the stat appears to be a women-specific WHO report - but I was able to find homicide data. This data is often regarded as especially reliable, because there are fewer issues about underreporting when there is a dead body. (I apologize in advance if the authors did in fact compare assault rates between sexes and just omitted this from the report).

So what does the data say? According to the UN Office on Drugs and Crime, men are dramatically more likely to be victims of homicide in virtually every country. Almost 80% of global homicide victims are male. And the small number of countries where this is not the case tend to be either in the developed world - which is not where the charities in this post focus - or very small countries where I suspect there was only one homicide that year.

So a neutral observer would conclude this was a reason to support charities that reduced violence against men, not women, if one were inclined to choose one or the other.

The fact that this article does not seem to even investigate this makes me sceptical of the quality of the rest of the work. If EAs are going to write non-cause-neutral reports, we should at least be clear at the very beginning of the report that other causes are likely to be better - rather than presenting misleading evidence to the contrary. Otherwise we are in danger of sacrificing a very important part of what makes EA distinctive.

Source: http://www.unodc.org/gsh/en/data.html

Comment by larks on Response to a Dylan Matthews article on Vox about bipartisanship · 2018-12-21T19:47:16.500Z · score: 9 (5 votes) · EA · GW

Sure, that's why I criticized Vox, not the individual author. I suspect the author did not complain about the title though.

Comment by larks on Response to a Dylan Matthews article on Vox about bipartisanship · 2018-12-20T23:38:13.557Z · score: 48 (21 votes) · EA · GW

When Vox launched I was very excited, as I thought it would be a good source of high-quality journalism, even if they did censor authors for having the wrong conclusions. However, it seems like virtually every article, even when otherwise high quality, contains some unrelated and unnecessary jibe at conservatives - an unusually direct example of Politics is the Mindkiller. Perhaps this led to their being in something of an echo chamber, where conservatives stopped reading?

Here's a recent example, to help make the above more concrete:

1) Trump signed a good law this week. Yes, really. - why does this need the snark in the title? The meaning would have been clearer, and less insulting, if they had just written "Trump signed a good law about HIV this week."

I worry about this in general with Future Perfect. This behaviour is not something the EA movement wants, but if Future Perfect ends up producing a very large volume of 'EA' articles, we risk getting tarnished by association.

Comment by larks on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-20T23:08:52.407Z · score: 6 (3 votes) · EA · GW

Thanks, I thought this article was very thoughtful.

I have one quick question about the examples you mention. While I agree that pro-life examples are a great idea, I'm not sure what you are getting at with the heaven-infinite-value example. Is the problem that people have been using this as a reductio?

2018 AI Alignment Literature Review and Charity Comparison

2018-12-18T04:48:58.945Z · score: 105 (49 votes)
Comment by larks on [Link] "Would Human Extinction Be a Tragedy?" · 2018-12-18T02:06:41.802Z · score: 9 (10 votes) · EA · GW

In response to the title question: yes.

Comment by larks on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-11-30T23:49:26.131Z · score: 5 (3 votes) · EA · GW

It seems to me that TRIA is really stretching the definition of 'equality'. Could I not equally suggest a Citizenship-Relative-Interest-Account? This would fit well with people's nationalistic intuitions. Indeed, if we look at the list of things GWWC claimed EAs do not discriminate based on, we could circumvent all of them with cunningly crafted X-Relative-Interest-Accounts.

I agree a moral discontinuity would be very perverse. But it seems there are many better options. For example, a totalist view - that people matter even before they are conceived - avoids this issue, and doesn't suffer from the various inconsistencies that person-affecting views do. Alternatively, if you thought that we should place no value on people who don't yet exist, conception provides a clear discontinuity in many ways, such that it does not seem like it would be weird if there was a moral value discontinuity there also.

But I think the biggest problem is that, even if you accept TRIA, I suspect that most people's moral intuitions would produce a very different weighting distribution. Specifically, they would be more averse to causing pain to 5 year olds than adults - especially adult men. If I have time I might look into whether there has been any empirical research on the subject; it could be a useful project.

Comment by larks on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-11-26T04:21:34.903Z · score: 7 (4 votes) · EA · GW

Thanks for writing this very detailed analysis. I especially enjoyed the arguments for why we can compare LS scores between people, like the Canadian immigrant study.

The section I found most surprising was the part on GiveWell using the Time-Relative Interest Account. I've always thought of some kind of egalitarianism as being relatively important to EA - the idea that all people are in some sense equally deserving of happiness/welfare/good outcomes. We might save a young person over an old person, but this is only because by doing this we're counterfactually saving more life-years.

For example, here is Giving What We Can:

People[2] are equal — everyone has an equal claim to being happy, healthy, fulfilled and free, whatever their circumstances. All people matter, wherever they live, however rich they are,  and whatever their ethnicity, age, gender, ability, religious views, etc. [emphasis added]

But the TRIA explicitly goes against this. It directly weighs a year of health for a 25-year-old as being inherently more valuable than a year of health for a 5-year-old - or a 50-year-old. This seems very perverse. Is it really acceptable to cause a large amount of pain to a child, in order to prevent a smaller amount of pain for an adult? I think the majority of people would not agree with this - if anything people prefer to prioritize the suffering of children over that of adults.

Comment by larks on Why we have over-rated Cool Earth · 2018-11-26T02:57:27.233Z · score: 17 (14 votes) · EA · GW

Thanks for writing this. This sort of evaluation, which has the potential to radically change the consensus view on a charity, seems significantly under-supplied in our community, even though individual instances are tractable for a lone individual to produce. It's also obviously good timing at the start of the giving season.

I think the post would be improved without the section on contraception, however. There are many simple environmental interventions we could benchmark against instead, that don't involve population ethics. Preventing a future human from being born has many impacts - they will probably have a job, some probability of inventing a new discovery, and most importantly they will probably be grateful to be alive - of which emitting some CO2 is likely to be one of the smaller impacts. Any evaluation of contraception that only looks at direct environmental impact is going to be so lacking that I suspect you'd be better off choosing a different intervention to compare to.

Comment by larks on Announcing the EA donation swap system · 2018-11-25T14:59:38.279Z · score: 5 (3 votes) · EA · GW

Thanks, this is a cool idea.

Inger from Norway wants to support the Good Food Institute (GFI) with a donation of 5000 USD. Robert from the USA wants to support the Against Malaria Foundation (AMF) with a donation of 5000 USD. AMF is tax deductible in both countries, GFI is only tax deductible in the USA. The EA donation swap system introduces Robert and Inger together and they agree to swap donations.
Inger donates 5000 USD to AMF, Robert donates 5000 USD to GFI. They both get their tax deductions at the end of the financial year.

In this example Inger gains tax deductibility, but Robert gains nothing in return for taking on the counterparty risk of the swap. Wouldn't it make sense for Robert to donate slightly less than $5000, or Inger slightly more, such that both parties benefit?
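
As a toy illustration of one way the two could share the gain - the 30% tax rate and the 50/50 split below are made-up assumptions, not part of the actual scheme:

```python
# Hypothetical split of the gain from a donation swap. Inger's marginal tax rate
# and the even split of the surplus are illustrative assumptions only.

def swap_amounts(donation=5000.0, inger_tax_rate=0.30, share_to_robert=0.5):
    """Return (Inger's donation to AMF, Robert's donation to GFI).

    Without the swap, Inger's donation to GFI gets no deduction; with it, she
    gives to AMF and saves donation * inger_tax_rate in tax. Letting Robert give
    slightly less than the full amount passes part of that saving to him as
    compensation for the counterparty risk he takes on.
    """
    surplus = donation * inger_tax_rate                    # value of Inger's new deduction
    robert_donation = donation - share_to_robert * surplus
    return donation, robert_donation

print(swap_amounts())  # (5000.0, 4250.0) under the assumed 30% tax rate
```

How the surplus should actually be divided depends on the tax rates involved and how much each party values the other's charity, so this is only a sketch of the negotiation, not a recommendation.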

This reminds me a little bit of Critch's Rent Division Calculator, which aims to produce a room-and-rent-allocation for shared houses that everyone likes as much as possible; not merely one that no-one actively dislikes.

Comment by larks on Why Do Small Donors Give Now, But Large Donors Give Later? · 2018-11-14T00:33:32.679Z · score: 2 (1 votes) · EA · GW

I like the idea, but I'm not sure it fully captures what is going on. We could be comparing the poor person not to the foundation but to the rich person who endowed it, and asking why they waited until late in life to do so rather than continually donating. The 'poor' person does indeed have a valuable asset in their future earning potential, but so does the young Bill Gates. He could have sold a bit more MSFT stock every time it went up, rather than waiting until the end.

Comment by larks on 2017 Donor Lottery Report · 2018-11-13T02:12:09.199Z · score: 14 (7 votes) · EA · GW

Thanks for writing this up, it's very interesting, and should be helpful for other donors.

Comment by larks on Pursuing infinite positive utility at any cost · 2018-11-12T15:16:33.646Z · score: 3 (4 votes) · EA · GW

Presumably this system would suggest we should encourage people to believe in a wide variety of religions, if one believer is all we need for infinite utility. Rather than converting more people to Catholicism we'd spend our time inventing/discovering new religions and converting one person to each.

Comment by larks on What's Changing With the New Forum? · 2018-11-12T02:46:36.854Z · score: 3 (2 votes) · EA · GW

The new Lesswrong also has Greaterwrong, allowing people to use the old style interface if they find that easier. Is there any way to do the same for the new EA forum?

Comment by larks on Announcing new EA Funds management teams · 2018-10-30T23:32:20.843Z · score: 2 (2 votes) · EA · GW

Thanks!

Comment by larks on Announcing new EA Funds management teams · 2018-10-28T20:15:34.685Z · score: 15 (11 votes) · EA · GW

I'm glad to see these changes; they seem like significant improvements to the structure. However, I think it would have been nice to see some official recognition that these changes seem to be largely in response to problems that were foreseen by the community a long time ago.

Comment by larks on EA Funds hands out money very infrequently - should we be worried? · 2018-10-27T16:34:03.020Z · score: 3 (2 votes) · EA · GW

Update: The funds have now committed to a regular schedule of giving. link

Comment by larks on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-10-27T16:31:25.763Z · score: 0 (0 votes) · EA · GW

Update: two months later, CEA has now updated the management teams for these funds, bringing on new managers and committing to a regular schedule of grant giving. link

Comment by larks on Thoughts on short timelines · 2018-10-25T03:02:01.271Z · score: 3 (3 votes) · EA · GW

the crazy P/E rations for google, amazon, etc. seems to imply that the market thinks something important will happen there,

Google's forward P/E is 19x, vs the S&P 500 on 15x. What's more, even this is overstated, because earnings are reduced by the expensing of R&D, which logically should be capitalised. Facebook is even cheaper at 16x, though if I recall correctly that excludes stock-based-comp expense.
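
As a minimal sketch (with made-up numbers) of why capitalising R&D raises reported earnings, and hence lowers the 'true' P/E, whenever R&D spending is growing:

```python
# Toy adjustment: instead of expensing this year's R&D, capitalise it and
# amortise prior years' spend straight-line. All figures are illustrative.

def adjusted_earnings(reported, rd_by_year, life=3):
    """rd_by_year: R&D spend for the last `life` years, oldest first (current year last)."""
    amortisation = sum(rd_by_year[-life:]) / life     # straight-line amortisation charge
    return reported + rd_by_year[-1] - amortisation   # add back the expense, deduct amortisation

reported = 30.0                     # reported earnings, with R&D already expensed
rd_by_year = [10.0, 12.0, 15.0]     # growing R&D spend
print(adjusted_earnings(reported, rd_by_year))  # ~32.7, so earnings rise and the P/E falls
```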

I agree that many other tech firms have much more priced into their valuations, and that fundamental analysts in the stock market realistically only look 0-3 years out.

Comment by larks on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T22:11:16.947Z · score: 0 (0 votes) · EA · GW

The point, presumably, is that people would feel better because of the expectation that things would improve.

Of course, the criticism is that rather than simulating someone who starts in pain and then improves gradually, you could simply simulate someone with high welfare all along. But if you could achieve identity-continuity without welfare-level-continuity this cost wouldn't apply.

Comment by larks on Fisher & Syed on Tradable Obligations to Enhance Health · 2018-08-15T03:25:55.098Z · score: 2 (2 votes) · EA · GW

What is it about a Viagra company that makes them more responsible for solving global health issues than e.g. IKEA?

Yes, for some reason the proposal combines a carbon-trading-style scheme with a decision to make pharmaceutical companies pay for it all. The latter seems to be totally separable - just distribute the credits in proportion (at a slightly lower ratio than the target) to revenues! This would also significantly help address the problem I outlined in the other comment, by reducing the incentive just to shift revenue ex-US.

Comment by larks on Fisher & Syed on Tradable Obligations to Enhance Health · 2018-08-15T03:15:13.620Z · score: 2 (2 votes) · EA · GW

The authors fail to consider what seems to me to be the obvious response firms would make.

Their policy is basically a tax on global sales for pharmaceutical companies, imposed by the US, which they would pay because of the threat of being excluded from the US market (roughly half of sales). The rational response is to sell off the international marketing rights to your drugs, either to a new international company or to an existing one. These sales are then protected from the US scheme, and the fall in the denominator of the ratio (by ~50%) should ensure the industry is compliant, without any need to alter their behaviour in other ways.

As a simple example, instead of Amgen selling Enbrel in the US and internationally, you would have AmgenUS, which has the right to sell Enbrel in the US and pays the tax, and AmgenInternational, which has the right to sell Enbrel internationally and does not pay the tax. This sort of geographic splitting of marketing rights is moderately common in the industry anyway, and doesn't seem to significantly increase overhead.

There are of course ways around this problem, but I think this shows the general problem with all such regulations - that the designers never consider all the unintended consequences, and so mis-estimate the effects of their policies.

Comment by larks on EA Forum 2.0 Initial Announcement · 2018-07-22T18:32:25.210Z · score: 2 (2 votes) · EA · GW

Many of these concerns seem to be symmetric, and would also imply we should make it harder to upvote.

Comment by larks on EA Forum 2.0 Initial Announcement · 2018-07-19T21:29:24.011Z · score: 3 (3 votes) · EA · GW

Hey, first of all, thanks for what I'm sure what must have been a lot of work behind this. Many of these ideas seem very sensible.

Am I right in assuming that the scale for the upvotes was intended to be roughly-but-not-exactly logarithmic? And do downvotes scale the same way?

Comment by larks on Impact Investing - A Viable Option for EAs? · 2018-07-11T22:55:08.575Z · score: 4 (4 votes) · EA · GW

(quoting from the open thread)

The timber is sold after 10 years, conservative return to the investor is $20k

This kind of investment would be considered high risk - this company only started this program three years ago, and the first trees haven't yet produced profit.

This sounds extremely suspect. Conservative investments do not generate 23% CAGRs, and there are plenty of investors willing to fund credible 10-year projects. Timber was a particularly fashionable asset class for a while, and 'environmental' investments are extremely fashionable right now.

[This is an opinion and is for information purposes only. It is not intended to be investment advice. You should consult a licensed financial advisor for investment advice. This is not the opinion of my firm. My firm may have positions in the discussed securities. This is not an invitation to buy or sell securities].

2017 AI Safety Literature Review and Charity Comparison

2017-12-20T21:54:07.419Z · score: 42 (42 votes)

2016 AI Risk Literature Review and Charity Comparison

2016-12-13T04:36:48.060Z · score: 51 (53 votes)

Being a tobacco CEO is not quite as bad as it might seem

2016-01-28T03:59:15.614Z · score: 10 (12 votes)

Permanent Societal Improvements

2015-09-06T01:30:01.596Z · score: 9 (9 votes)

EA Facebook New Member Report

2015-07-26T16:35:54.894Z · score: 11 (11 votes)