Comment by larks on Three Biases That Made Me Believe in AI Risk · 2019-02-16T00:18:47.719Z · score: 5 (3 votes) · EA · GW

Could you go into a bit more detail about the two linguistic styles you described, perhaps using non-AI examples? My interpretation of them is basically agent-focused vs internal-mechanics-focused, but I'm not sure this is exactly what you mean.

If the above is correct, it seems like you're basically saying that internal-mechanics-focused descriptions work better for currently existing AI systems, which seems true to me for things like self-driving cars. But for something like AlphaZero, or Stockfish, I think an agentic framing is often actually quite useful:

A chess/Go AI is easy to imagine: it is smart and autonomous and you can trust the bot like you trust a human player. It can make mistakes but probably has good intent. When it encounters an unfamiliar game situation it can think about the correct way to proceed. It behaves in accordance with the goal (winning the game) its creator set it and it tends to make smart decisions. If anything goes wrong then the bot is at fault.

So I think the reason this type of language doesn't work well for self-driving cars is that they aren't sufficiently agent-like. But we know genuinely agentic systems are possible - humans are an example - so it seems plausible to me that agentic language will be the best descriptor for some future AIs. It is certainly the best descriptor we have for them now, given that we do not understand the internal mechanics of as-yet-uninvented AIs.

Comment by larks on Research on Effective Strategies for Equity and Inclusion in Movement-Building · 2019-02-01T03:22:36.786Z · score: 21 (15 votes) · EA · GW
In general it’s probably best not to anonymize applications. Field studies generally show no effect on interview selection, and sometimes even show a negative effect (which has also been seen in the lab). Blinding may work for musicians, randomly generated resumes, and identical expressions of interest, but in reality there seem to be subtle cues of an applicant’s background that evaluators may pick up on, and the risk of anonymization backfiring is higher for recruiting groups which are actively interested in DEI. This may be because they are unable to proactively check their biases when blind, or to proactively accommodate disadvantaged candidates at this recruitment stage, or because their staff is already more diverse and people may favor candidates they identify with demographically.

I think you are mis-describing these studies. Essentially, they found that when reviewers knew the race and sex of the applicants, they were biased in favour of women and non-whites, and against white males.

I admit I only read two of the studies you linked to, but I think these quotes from them are quite clear about the conclusions:

We find that participating firms become less likely to interview and hire minority candidates when receiving anonymous resumes.

The public servants reviewing the job applicants engaged in discrimination that favoured female applicants and disadvantaged male candidates

Affirmative action towards the Indigenous female candidate is the largest, being 22.2% more likely to be short listed on average when identified compared to the de-identified condition. On the other hand, the identified Indigenous male CV is 9.4% more likely to be shortlisted on average compared to when it is de-identified. In absolute terms most minority candidates are on average more likely to be shortlisted when named compared to the de-identified condition, but the difference for the Indigenous female candidate is the only one that is statistically significant at the 95% confidence level.

This is also supported by other papers on the subject. For example, you might enjoy reading Williams and Ceci (2015):

The underrepresentation of women in academic science is typically attributed, both in scientific literature and in the media, to sexist hiring. Here we report five hiring experiments in which faculty evaluated hypothetical female and male applicants, using systematically varied profiles disguising identical scholarship, for assistant professorships in biology, engineering, economics, and psychology. Contrary to prevailing assumptions, men and women faculty members from all four fields preferred female applicants 2:1 over identically qualified males with matching lifestyles (single, married, divorced), with the exception of male economists, who showed no gender preference. Comparing different lifestyles revealed that women preferred divorced mothers to married fathers and that men preferred mothers who took parental leaves to mothers who did not. Our findings, supported by real-world academic hiring data, suggest advantages for women launching academic science careers.

This doesn't mean that anonymizing applications is a bad idea - it appears to have successfully reduced unfair bias - but rather that the bias ran in the opposite direction from the one the authors expected to find.

Comment by larks on EA Forum Prize: Winners for December 2018 · 2019-01-31T02:11:16.815Z · score: 16 (10 votes) · EA · GW

Thanks very much! I think this prize is a great idea. I was definitely motivated to invest more time and effort by the hope of winning the prize (along with the satisfaction of reaching the front page with a lot of karma).

Comment by larks on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-28T02:26:06.780Z · score: 14 (5 votes) · EA · GW

I have definitely heard people referring to Future Perfect as 'the EA part of Vox' or similar.

Comment by larks on How can I internalize my most impactful negative externalities? · 2019-01-17T22:12:34.762Z · score: 6 (5 votes) · EA · GW

You might enjoy this post Claire wrote: Ethical Offsetting is Antithetical to EA.

Comment by larks on What Is Effective Altruism? · 2019-01-14T02:45:36.192Z · score: 1 (2 votes) · EA · GW

Thanks for writing this, I thought it was quite a good summary. However, I would like to push back on two things.

Effective altruism is egalitarian. Effective altruism values all people equally

I often think of age as one dimension that egalitarians believe should not influence how important someone is. However, despite being one of the archetypal EA organisations (along with GWWC/CEA), GiveWell does not treat it that way. Rather, it values middle-aged years of life more highly than years of life for babies or the elderly. See for example this page here. Perhaps EA should be egalitarian, but de facto it does not seem to be.

Effective altruism is secular. It does not recommend charities that most effectively get people into Heaven ...

This item seems rather different from the other items on the list. Most of the others seem like rational positions for virtually anyone to hold. However, if you were religious, this tenet would seem very irrational - helping people get into heaven would be the most effective thing you could do! Putting this here seems akin to saying that AMF is an EA value; these are conclusions, not premises.

Additionally, there is some evidence that promoting religion might be beneficial even on strictly material grounds. Have you seen the recent pre-registered RCT on Protestant evangelism?

To test the causal impact of religiosity, we conducted a randomized evaluation of an evangelical Protestant Christian values and theology education program that consisted of 15 weekly half-hour sessions. We analyze outcomes for 6,276 ultra-poor Filipino households six months after the program ended. We find significant increases in religiosity and income, no significant changes in total labor supply, assets, consumption, food security, or life satisfaction, and a significant decrease in perceived relative economic status. Exploratory analysis suggests the program may have improved hygienic practices and increased household discord, and that the income treatment effect may operate through increasing grit.

https://www.nber.org/papers/w24278.pdf

I don't have a strong view on whether or not this is actually a good thing to do, let alone the best thing. RCTs provide high-quality causal evidence, but even then most interventions do not work very well, and I'm not an expert on the impact of evangelism. But it seems strange to assume from the very beginning that it is not something EAs would ever be interested in.

Comment by larks on EA Giving Tuesday Donation Matching Initiative 2018 Retrospective · 2019-01-06T18:22:46.296Z · score: 10 (7 votes) · EA · GW

Congratulations guys, this is really impressive. Thanks for all the work you put into this.

Comment by larks on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-05T18:21:27.168Z · score: 10 (4 votes) · EA · GW

My general model is that charities get funding in two waves:

1) December

2) The rest of the year

As such, if I ask groups for their runway at the beginning of 1), and they say they have 12 months, that basically means that even if they failed to raise any money at all in the following 1) and 2) they would still survive until next December, at which point they could be bailed out.

However, I now think this is rather unfair, as in some sense I'm playing donor-of-last-resort with other December donors. So yes, I think 18 months may be a more reasonable threshold.

Comment by larks on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-05T18:18:02.793Z · score: 3 (2 votes) · EA · GW

No principled reason, other than that this is not really my field, and I ran out of time, especially for work produced outside donate-able organizations. Sorry!

Comment by larks on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-05T18:15:49.571Z · score: 2 (1 votes) · EA · GW
It's also worth noting that I believe the new managers do not have access to large pots of discretionary funding (easier to deploy than EA Funds) that they can use to fund opportunities that they find.

Good point!

Comment by larks on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-05T18:14:36.149Z · score: 9 (4 votes) · EA · GW

I'm glad you found it helpful!

I don't have a great system. I combined a few things:

1) Organisations' websites

2) Backtracking from citations in papers, especially those published very recently

3) Authors' own websites, for some key authors

4) 'Cited by' lists in Google Scholar for key papers, like Concrete Problems

5) Asking organisations what else I should read - many do not have up-to-date websites.

6) Randomly coming across things on Facebook, Twitter, etc.

7) Rohin's excellent newsletter.

Comment by larks on What’s the Use In Physics? · 2018-12-31T15:46:32.931Z · score: 3 (2 votes) · EA · GW

Great post, thanks for collecting all these in one place.

Comment by larks on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-24T01:54:18.057Z · score: 4 (9 votes) · EA · GW

According to this article on the pledge:

While the Pledge was originally focused on global poverty, since 2014 it has been cause-neutral. Members commit to donate to the organizations they believe are most effective at improving the lives of others.

Specifically, the pledge originally did not include animal welfare groups, but was later 'amended' to include them. Is there a principled reason to include animal welfare, but not religious outreach? They seem quite similar:

1) Both ingroups have (by their lights) strong reasons to think what they are doing is literally the most important thing in the world.

2) Many/most people agree with premises that logically imply the importance of both causes (i.e. many people are religious and believe in heaven, and many people believe animal cruelty is bad)

3) Both causes are seen as somewhat weird by most people, despite 2)

4) Both causes are quite far from the original stated and de facto goals of GWWC, namely helping people in the third world.

Comment by larks on The case for taking AI seriously as a threat to humanity · 2018-12-24T01:40:43.332Z · score: 31 (12 votes) · EA · GW

Overall I think this is a great article. It seems like it could be one of the best pieces for introducing new people to the subject.

People sometimes try to gauge the overall views of an author by the relative amounts of page-space they dedicate to different topics, which is bad if you generally agree with something, but want to make a detailed objection to a minor point. I think Kelsey's article is good, and don't want the below to detract from this.

To try to counteract this effect, I have deliberately spent the top three paragraphs explaining that this article is very good before coming to the main point of the comment.

However, I do object to this section:

When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too.

The text links to another Vox article, which ultimately linked to this ProPublica article, which argues that a specific reoffending-prediction system was bad because:

The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.

Separately it notes

When a full range of crimes were taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years.

At this point, alarm bells should be ringing in your head. "More accurate than a coin flip" is not the correct way to analyze the accuracy of a binary test unless the actual distribution of outcomes is also 50:50! If fewer than 50% of people re-offend, a coin flip will be right less than 50% of the time on those it classifies as high risk: if, say, 40% of people re-offend, only 40% of the coin flip's 'high risk' group will re-offend, so the algorithm's 61% is a substantial improvement. Using the coin flip analogy is a rhetorical sleight of hand that pushes readers into the wrong analytical framework, and makes the test look significantly worse than it actually is.

Now that we've seen the ProPublica authors perhaps cannot be entirely trusted to represent the data accurately, let's go back to the headline statement: that the false positive rate is higher for blacks than for whites.

This is true, but in a trivial sense.

Black people commit more crime than white people. This is true whether you look at arrest data, conviction data, or victimization surveys. (It holds even if you only ask Black victims who committed the crimes against them, and it also holds if you look only at recidivism.) As a result of this base rate, any unbiased algorithm will have more false positives for Black people, even if it is equally accurate for both races at any given level of risk.

Here are some simple numbers, lifted from Chris's excellent presentation on the subject, to illustrate this point:

Simplified numbers: High risk == 60% chance of recidivism, low risk = 20%.
Black people: 60% labelled high risk * 40% chance of no recidivism = 24% chance of “labelled high risk, didn’t recidivate”.
White people: 30% labelled high risk * 40% chance of no recidivism = 12% chance of “labelled high risk, didn’t recidivate”.
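To make the implication explicit, here is a minimal Python sketch (mine, not from the presentation) that turns the same toy numbers into conventional false positive rates, i.e. P(labelled high risk | did not recidivate):

```python
def false_positive_rate(p_high, p_recid_high=0.6, p_recid_low=0.2):
    """P(labelled high risk | did not recidivate) for a group in which a
    share p_high is labelled high risk, using the toy calibration above
    (high risk -> 60% recidivism, low risk -> 20%, same for both groups)."""
    high_and_clean = p_high * (1 - p_recid_high)      # labelled high risk, didn't recidivate
    low_and_clean = (1 - p_high) * (1 - p_recid_low)  # labelled low risk, didn't recidivate
    return high_and_clean / (high_and_clean + low_and_clean)

# Identical calibration, different shares labelled high risk (the base-rate effect):
print(false_positive_rate(0.6))  # ~0.43 for the higher-incidence group
print(false_positive_rate(0.3))  # ~0.18 for the lower-incidence group
```

The test treats every individual at a given risk level identically, yet the group with the higher base rate ends up with more than double the false positive rate.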

It is a trivial statistical fact that any decent test will have a higher false positive rate for subgroups with a higher incidence of the predicted outcome. To avoid this, you'd have to adopt a test which included a specific "if white, increase risk" factor, and you would end up releasing more people who would reoffend and keeping in jail more people who would not. None of these seems like an acceptable consequence.

Strangely, however, neither the Vox article this one linked to nor the original ProPublica piece mentions this fact - I suspect due to the same political bias kbog discussed recently. There are good reasons to be concerned about the application of algorithms in areas like these. But damning the algorithms as racist for statistically misleading reasons, without explaining the underlying statistics to readers, suggests that the authors have either failed to understand the data or are actively trying to mislead their readers. I would recommend against linking to either article in future as evidence for these claims.

EDIT: The Washington Post had a very good article explaining this also.

Comment by larks on Women's Empowerment: Founders Pledge report and recommendations · 2018-12-22T02:05:53.161Z · score: 14 (7 votes) · EA · GW

"at least 35% of women worldwide have experienced some form of physical or sexual violence."

The article uses this statistic to try to motivate why we might be interested in charities that focus specifically on women. However, we cannot evaluate this statistic in isolation: to draw this conclusion we need to compare against assault rates for men.

I wasn't able to immediately find a comparable statistic for men - the source for the stat appears to be a women-specific WHO report - but I was able to find homicide data. This data is often regarded as especially reliable, because there are fewer issues with underreporting when there is a dead body. (I apologize in advance if the authors did in fact compare assault rates between the sexes and simply omitted this from the report.)

So what does the data say? According to the UN Office on Drugs and Crime, men are dramatically more likely to be victims of homicide in virtually every country: almost 80% of global homicide victims are male. The small number of countries where this is not the case tend to be either in the developed world, which is not where the charities in this post focus, or very small countries where I suspect there was only one homicide that year.

So a neutral observer would conclude this was a reason to support charities that reduced violence against men, not women, if one were inclined to choose one or the other.

The fact that this article does not seem to even investigate this makes me sceptical of the quality of the rest of the work. If EAs are going to write non-cause-neutral reports, we should at least be clear at the very beginning of the report that other causes are likely to be better - rather than presenting misleading evidence to the contrary. Otherwise we are in danger of sacrificing a very important part of what makes EA distinctive.

Source: http://www.unodc.org/gsh/en/data.html

Comment by larks on Response to a Dylan Matthews article on Vox about bipartisanship · 2018-12-21T19:47:16.500Z · score: 9 (5 votes) · EA · GW

Sure, that's why I criticized Vox, not the individual author. I suspect the author did not complain about the title though.

Comment by larks on Response to a Dylan Matthews article on Vox about bipartisanship · 2018-12-20T23:38:13.557Z · score: 48 (21 votes) · EA · GW

When Vox launched I was very excited, as I thought it would be a good source of high-quality journalism, even if they did censor authors for having the wrong conclusions. However, it seems like virtually every article, even when otherwise high quality, contains some unrelated and unnecessary jibe at conservatives - an unusually direct example of Politics is the Mindkiller. Perhaps this led to their being in something of an echo chamber, where conservatives stopped reading?

Here's a recent example, to help make the above more concrete:

1) Trump signed a good law this week. Yes, really. - why does this need the snark in the title? The meaning would have been clearer, and less insulting, if they had just written "Trump signed a good law about HIV this week."

I worry about this in general with Future Perfect. This behaviour is not something the EA movement wants, but if Future Perfect ends up producing a very large volume of 'EA' articles, we risk getting tarnished by association.

Comment by larks on How Effective Altruists Can Be Welcoming To Conservatives · 2018-12-20T23:08:52.407Z · score: 6 (3 votes) · EA · GW

Thanks, I thought this article was very thoughtful.

I have one quick question about the examples you mention. While I agree that pro-life examples are a great idea, I'm not sure what you are getting at with the heaven-infinite-value example. Is the problem that people have been using this as a reductio?

2018 AI Alignment Literature Review and Charity Comparison

2018-12-18T04:48:58.945Z · score: 105 (49 votes)
Comment by larks on [Link] "Would Human Extinction Be a Tragedy?" · 2018-12-18T02:06:41.802Z · score: 9 (10 votes) · EA · GW

In response to the title question: yes.

Comment by larks on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-11-30T23:49:26.131Z · score: 5 (3 votes) · EA · GW

It seems to me that TRIA is really stretching the definition of 'equality'. Could I not equally suggest a Citizenship-Relative-Interest-Account? This would fit well with people's nationalistic intuitions. Indeed, if we look at the list of things GWWC claimed EAs do not discriminate based on, we could circumvent all of them with cunningly crafted X-Relative-Interest-Accounts.

I agree a moral discontinuity would be very perverse. But it seems there are many better options. For example, a totalist view - that people matter even before they are conceived - avoids this issue, and doesn't suffer from the various inconsistencies that person-affecting views do. Alternatively, if you thought that we should place no value at all on people who don't yet exist, conception provides a clear discontinuity in so many ways that it would not seem weird for there to be a moral value discontinuity there also.

But I think the biggest problem is that, even if you accept TRIA, I suspect most people's moral intuitions would produce a very different weighting distribution. Specifically, they would be more averse to causing pain to 5-year-olds than to adults - especially adult men. If I have time I might look into whether there has been any empirical research on the subject; it could be a useful project.

Comment by larks on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2018-11-26T04:21:34.903Z · score: 7 (4 votes) · EA · GW

Thanks for writing this very detailed analysis. I especially enjoyed the arguments for why we can compare LS scores between people, like the Canadian immigrant study.

The section I found most surprising was the part on GiveWell using the Time-Relative Interest Account. I've always thought of some kind of egalitarianism as being relatively important to EA - the idea that all people are in some sense equally deserving of happiness/welfare/good outcomes. We might save a young person over an old person, but only because by doing so we're counterfactually saving more life-years.

For example, here is Giving What We Can:

People[2] are equal — everyone has an equal claim to being happy, healthy, fulfilled and free, whatever their circumstances. All people matter, wherever they live, however rich they are, and whatever their ethnicity, age, gender, ability, religious views, etc. [emphasis added]

But TRIA explicitly goes against this. It directly weighs a year of health for a 25-year-old as inherently more valuable than a year of health for a 5-year-old - or a 50-year-old. This seems very perverse. Is it really acceptable to cause a large amount of pain to a child in order to prevent a smaller amount of pain for an adult? I think the majority of people would not agree - if anything, people prefer to prioritize the suffering of children over that of adults.

Comment by larks on Why we have over-rated Cool Earth · 2018-11-26T02:57:27.233Z · score: 17 (14 votes) · EA · GW

Thanks for writing this. This sort of evaluation, which has the potential to radically change the consensus view on a charity, seems significantly under-supplied in our community, even though individual instances are tractable for a lone individual to produce. It's also obviously good timing at the start of the giving season.

I think the post would be improved without the section on contraception, however. There are many simple environmental interventions we could benchmark against instead, that don't involve population ethics. Preventing a future human from being born has many impacts - they will probably have a job, some probability of inventing a new discovery, and most importantly they will probably be grateful to be alive - of which emitting some CO2 is likely to be one of the smaller impacts. Any evaluation of contraception that only looks at direct environmental impact is going to be so lacking that I suspect you'd be better off choosing a different intervention to compare to.

Comment by larks on Announcing the EA donation swap system · 2018-11-25T14:59:38.279Z · score: 5 (3 votes) · EA · GW

Thanks, this is a cool idea.

Inger from Norway wants to support the Good Food Institute (GFI) with a donation of 5000 USD. Robert from the USA wants to support the Against Malaria Foundation (AMF) with a donation of 5000 USD. AMF is tax deductible in both countries, GFI is only tax deductible in the USA. The EA donation swap system introduces Robert and Inger together and they agree to swap donations.
Inger donates 5000 USD to AMF, Robert donates 5000 USD to GFI. They both get their tax deductions at the end of the financial year.

In this example Inger gains tax deductibility, but Robert gains nothing in return for taking on the counterparty risk of the swap. Wouldn't it make sense for Robert to donate slightly less than $5000, or Inger slightly more, such that both parties benefit?
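As a minimal sketch of how the gains could be shared (the tax rate is an assumption of mine; the post doesn't specify one):

```python
A = 5000.0  # headline swap size in USD
t = 0.25    # Inger's assumed marginal tax rate

# Without the swap, Inger donates $5,000 to GFI with no deduction, at a net
# cost of $5,000. The swap lets her deduct, creating a surplus of t * A.
surplus = t * A  # $1,250

# If Inger instead donates D to AMF (Robert's preferred charity), her net
# cost is (1 - t) * D. Any D between A (all surplus kept by Inger) and
# A / (1 - t) (all surplus passed to AMF) beats a plain 5000-for-5000 swap
# for both sides.
D_max = A / (1 - t)  # ~$6,667
D = (A + D_max) / 2  # ~$5,833: share the surplus

print(f"Inger's net cost: ${(1 - t) * D:,.0f} (vs ${A:,.0f} unswapped)")  # $4,375
print(f"Extra received by AMF: ${D - A:,.0f}")                            # $833
```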

This reminds me a little bit of Critch's Rent Division Calculator, which aims to produce a room-and-rent-allocation for shared houses that everyone likes as much as possible; not merely one that no-one actively dislikes.

Comment by larks on Why Do Small Donors Give Now, But Large Donors Give Later? · 2018-11-14T00:33:32.679Z · score: 2 (1 votes) · EA · GW

I like the idea, but I'm not sure it fully captures what is going on. We could be comparing the poor person not to the foundation but to the rich person who endowed it, and asking why they waited until late in life to do so rather than continually donating. The 'poor' person does indeed have a valuable asset in their future earning potential, but so does the young Bill Gates. He could have sold a bit more MSFT stock every time it went up, rather than waiting until the end.

Comment by larks on 2017 Donor Lottery Report · 2018-11-13T02:12:09.199Z · score: 14 (7 votes) · EA · GW

Thanks for writing this up, it's very interesting, and should be helpful for other donors.

Comment by larks on Pursuing infinite positive utility at any cost · 2018-11-12T15:16:33.646Z · score: 3 (4 votes) · EA · GW

Presumably this system would suggest we should encourage people to believe in a wide variety of religions, if one believer is all we need for infinite utility. Rather than converting more people to Catholicism we'd spend our time inventing/discovering new religions and converting one person.

Comment by larks on What's Changing With the New Forum? · 2018-11-12T02:46:36.854Z · score: 3 (2 votes) · EA · GW

The new LessWrong also has GreaterWrong, allowing people to use the old-style interface if they find that easier. Is there any way to do the same for the new EA Forum?

Comment by larks on Announcing new EA Funds management teams · 2018-10-30T23:32:20.843Z · score: 2 (2 votes) · EA · GW

Thanks!

Comment by larks on Announcing new EA Funds management teams · 2018-10-28T20:15:34.685Z · score: 15 (11 votes) · EA · GW

I'm glad to see these changes; they seem like significant improvements to the structure. However, it would have been nice to see some official recognition that these changes are largely in response to problems the community foresaw a long time ago.

Comment by larks on EA Funds hands out money very infrequently - should we be worried? · 2018-10-27T16:34:03.020Z · score: 3 (2 votes) · EA · GW

Update: The funds have now committed to a regular schedule of giving. link

Comment by larks on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-10-27T16:31:25.763Z · score: 0 (0 votes) · EA · GW

Update: two months later, CEA has now updated the management teams for these funds, bringing on new managers and committing to a regular schedule of grant giving. link

Comment by larks on Thoughts on short timelines · 2018-10-25T03:02:01.271Z · score: 3 (3 votes) · EA · GW

the crazy P/E ratios for google, amazon, etc. seem to imply that the market thinks something important will happen there,

Google's forward P/E is 19x, vs 15x for the S&P 500. What's more, even this overstates it, because accounting rules require R&D to be expensed when it logically should be capitalised. Facebook is even cheaper at 16x, though if I recall correctly that excludes stock-based compensation expense.
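To see the direction of that R&D adjustment, here's a toy sketch (illustrative numbers only, not Google's actual financials):

```python
# If R&D spend has been growing, capitalising it and amortising over n years
# deducts the (smaller) average of past outlays instead of this year's full
# spend, so adjusted earnings rise and the effective P/E falls.
price = 190.0         # share price implying 19x on $10 of reported EPS
reported_eps = 10.0
rd = 3.0              # this year's R&D per share, currently expensed in full
g = 0.15              # assumed historical R&D growth rate
n = 5                 # assumed amortisation period, in years

amortisation = sum(rd / (1 + g) ** k for k in range(1, n + 1)) / n
adjusted_eps = reported_eps + rd - amortisation

print(f"reported P/E: {price / reported_eps:.1f}x")  # 19.0x
print(f"adjusted P/E: {price / adjusted_eps:.1f}x")  # ~17.3x
```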

I agree that many other tech firms have much more priced into their valuations, and that fundamental analysts in the stock market realistically only look 0-3 years out.

Comment by larks on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T22:11:16.947Z · score: 0 (0 votes) · EA · GW

The point, presumably, is that people would feel better because of the expectation that things would improve.

Of course, the criticism is that rather than simulating someone who starts in pain and then improves gradually, you could simply simulate someone with high welfare all along. But if you could achieve identity-continuity without welfare-level-continuity this cost wouldn't apply.

Comment by larks on Fisher & Syed on Tradable Obligations to Enhance Health · 2018-08-15T03:25:55.098Z · score: 2 (2 votes) · EA · GW

What is it about a Viagra company that makes them more responsible for solving global health issues than e.g. IKEA?

Yes, for some reason the proposal combines a carbon-trading-style scheme with a decision to make pharmaceutical companies pay for it all. The latter seems totally separable - just distribute the credits in proportion to revenues (at a slightly lower ratio than the target)! This would also significantly help address the problem I outlined in the other comment, by reducing the incentive to simply shift revenue ex-US.

Comment by larks on Fisher & Syed on Tradable Obligations to Enhance Health · 2018-08-15T03:15:13.620Z · score: 2 (2 votes) · EA · GW

The authors fail to consider what seems to me to be the obvious response firms would make.

Their policy is basically a tax on the global sales of pharmaceutical companies, imposed by the US, which they would pay because of the threat of being excluded from the US market (roughly half of sales). The rational response is to sell off the international marketing rights to your drugs, either to a new international company or to an existing one. Those sales are then outside the US scheme, and the resulting fall in the denominator of the ratio (by ~50%) should ensure the industry is compliant, without any need to alter behaviour in other ways.

As a simple example, instead of Amgen selling Enbrel in the US and internationally, you would have AmgenUS, which sells Enbrel in the US and pays the tax, and AmgenInternational, which sells Enbrel internationally and does not. This sort of geographic splitting of marketing rights is moderately common in the industry anyway, and doesn't seem to significantly increase overhead.
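To make the denominator effect concrete, here is a toy sketch (my numbers; the paper's actual targets may differ):

```python
# Suppose the scheme demands health credits worth 2% of global sales, and a
# firm currently earns credits worth only 1.2% of them (non-compliant).
us_sales, intl_sales = 10e9, 10e9          # assumed 50/50 geographic split
credits = 0.012 * (us_sales + intl_sales)  # $240m of credits

target = 0.02
print(credits / (us_sales + intl_sales) >= target)  # False: non-compliant

# After spinning off the international marketing rights, only US sales count:
print(credits / us_sales >= target)  # True: compliant, with no change in behaviour
```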

There are of course ways around this problem, but I think this shows the general problem with all such regulations - that the designers never consider all the unintended consequences, and so mis-estimate the effects of their policies.

Comment by larks on EA Forum 2.0 Initial Announcement · 2018-07-22T18:32:25.210Z · score: 2 (2 votes) · EA · GW

Many of these concerns seem to be symmetric, and would also imply we should make it harder to upvote.

Comment by larks on EA Forum 2.0 Initial Announcement · 2018-07-19T21:29:24.011Z · score: 3 (3 votes) · EA · GW

Hey, first of all, thanks for what I'm sure must have been a lot of work behind this. Many of these ideas seem very sensible.

Am I right in assuming that the scale for the upvotes was intended to be roughly-but-not-exactly logarithmic? And do downvotes scale the same way?

Comment by larks on Impact Investing - A Viable Option for EAs? · 2018-07-11T22:55:08.575Z · score: 4 (4 votes) · EA · GW

(quoting from the open thread)

The timber is sold after 10 years, conservative return to the investor is $20k

This kind of investment would be considered high risk - this company only started this program three years ago, and the first trees haven't yet produced profit.

This sounds extremely suspect. Conservative investments do not generate 23% CAGRs, and there are plenty of investors willing to fund credible 10-year projects. Timber was a particularly fashionable asset class for a while, and 'environmental' investments are extremely fashionable right now.

[This is an opinion and is for information purposes only. It is not intended to be investment advice. You should consult a licensed financial advisor for investment advice. This is not the opinion of my firm. My firm may have positions in the discussed securities. This is not an invitation to buy or sell securities].

Comment by larks on Empirical data on value drift · 2018-04-29T17:57:46.366Z · score: 6 (6 votes) · EA · GW

I think if people promise you that they'll do something, and then they don't answer when you ask if they did it, it's quite probable they did not do the thing.

Comment by larks on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-21T02:52:20.917Z · score: 16 (12 votes) · EA · GW

Thanks for writing this, I thought it was a good article. And thanks to Greg for funding it.

My pushback would be on the cooperation and coordination point. It seems that a lot of other people, with other moral values, could make a very similar argument: that they need to promote their values now, as the stakes are very high with possible upcoming value lock-in. To people with those values, these arguments should seem roughly as important as the above argument is to you.

  • Christians could argue that, if the singularity is approaching, it is vitally important that we ensure the universe won't be filled with sinners who will go to hell.
  • Egalitarians could argue that, if the singularity is approaching, it is vitally important that we ensure the universe won't be filled with wider and wider disparities of wealth.
  • Libertarians could argue that, if the singularity is approaching, it is vitally important that we ensure the universe won't be filled with property rights violations.
  • Naturalists could argue that, if the singularity is approaching, it is vitally important that we ensure the beauty of nature won't be despoiled all over the universe.
  • Nationalists could argue that, if the singularity is approaching, it is vitally important that we ensure the universe will be filled with people who respect the flag.

But it seems that it would be very bad if everyone took this advice literally. We would all end up spending a lot of time and effort on propaganda, which would probably be great for advertising companies but not much else, as so much of it is zero sum. Even though it might make sense, by their values, for expanding-moral-circle people and pro-abortion people to have a big propaganda war over whether foetuses deserve moral consideration, it seems plausible we'd be better off if they both decided to spend the money on anti-malaria bednets.

In contrast, preventing the extinction of humanity seems to occupy a privileged position - not exactly comparable with the above agendas, though I can't quite cash out why it seems this way to me. Perhaps to devout Confucians a preoccupation with preventing extinction seems like just another distraction from the important task of expressing filial piety - though I doubt this.

(Moral Realists, of course, could argue that the situation is not really symmetric, because promoting the true values is distinctly different from promoting any other values.)

Comment by larks on What is Animal Farming in Rural Zambia Like? A Site Visit · 2018-02-19T21:47:39.340Z · score: 0 (0 votes) · EA · GW

a standard sheet of paper is about 7 square feet.

Do you mean 0.7 square feet?

Comment by larks on EA #GivingTuesday Fundraiser Matching Retrospective · 2018-01-19T03:47:55.976Z · score: 0 (0 votes) · EA · GW

I think we are much more organised than most, and hence more able to learn from our mistakes.

Comment by larks on Economics, prioritisation, and pro-rich bias   · 2018-01-04T02:32:39.309Z · score: 0 (0 votes) · EA · GW

Not only is it not necessarily true that actual willingness to pay determines consumer preference, it is not even usually true. Differences in willingness to pay are to a significant extent and in a huge range of cases driven by differences in personal wealth rather than by differences in consumer preference. Rich people tend to holiday in exotic and sunny places at much higher rates than poor people. This is entirely a product of the fact that rich people have more money, not that poor people prefer to holiday in Blackpool. I think the same holds for the vast majority of differences in market demand across different income groups.

This is probably empirically true between income groups, but I don't think it's true between individuals, even of different income levels. Most people have zero demand for most goods, due to a combination of geographic location, lack of interest and diminishing marginal utility, and this is the main determinant of differences in demand between individuals.

For example, I have zero demand for sandwiches right now - which is why sandwiches can be bought all over the world by people with incomes <1% of mine. This sort of case, where markets do correctly allocate sandwiches, strikes me as the norm in markets, rather than the exception.

(I realise this does not directly contradict your point but wanted to ensure readers did not draw an unnecessarily strong conclusion from it)

Comment by larks on 2017 AI Safety Literature Review and Charity Comparison · 2017-12-21T23:28:24.455Z · score: 1 (1 votes) · EA · GW

Thanks, made corrections.

2017 AI Safety Literature Review and Charity Comparison

2017-12-20T21:54:07.419Z · score: 42 (42 votes)
Comment by larks on Changing the Government's Approach to Catastrophic Risks · 2017-10-10T23:14:04.126Z · score: 0 (0 votes) · EA · GW

Independent organisations can be vulnerable to cuts.

Do you know of any quantitative evidence on the subject? My impression was there is a fair bit of truth to the maxim "There's nothing as permanent as a temporary government program."

Comment by larks on Effective Altruism Grants project update · 2017-10-07T15:01:36.025Z · score: 1 (1 votes) · EA · GW

Thanks for writing this up! While it's hard to evaluate externally without seeing the eventual outcomes of the projects, and the counterfactuals of who you rejected, it seems like you did a good job!

Comment by larks on Effective Altruism Grants project update · 2017-10-07T14:49:31.041Z · score: 1 (1 votes) · EA · GW

EA money is money in the hands of EAs. It is argued that this is more valuable than non-EA money, because EAs are better at turning money into EAs. As such, a policy that cost $100 of non-EA money might be more expensive than one which cost $75 of EA money.

Comment by larks on Which five books would you recommend to an 18 year old? · 2017-10-06T00:37:03.662Z · score: -1 (1 votes) · EA · GW

I don't think so. My guess is you think so because she discussed selfishness as a virtue and altruism as a vice, but she is using these words in a somewhat different sense than we do. My impression is she would not have been opposed to someone who realised that the best way to promote their values was to help others. See for example the quote below.

Where I think she is aligned with EA is in her belief that it is possible to understand the world through reason, and for individuals to act to realise their goals. This sort of heroic attitude is clearly part of EA.

Do you consider wealthy businessmen like the Fords and the Rockefellers immoral because they use their wealth to support charity? No. That is their privilege, if they want to. My views on charity are very simple. I do not consider it a major virtue and, above all, I do not consider it a moral duty. There is nothing wrong in helping other people, if and when they are worthy of the help and you can afford to help them. I regard charity as a marginal issue. What I am fighting is the idea that charity is a moral duty and a primary virtue.

source: a surprisingly good interview, given that it is in Playboy!

Comment by larks on Which five books would you recommend to an 18 year old? · 2017-09-07T02:08:17.736Z · score: 0 (2 votes) · EA · GW

Thinking of books that had a big impact on me, and that I think I would endorse:

  • Gödel, Escher, Bach, Douglas Hofstadter
  • The Sequences, Eliezer Yudkowsky
  • Atlas Shrugged, Ayn Rand
  • The Extended Phenotype, Richard Dawkins
  • Diaspora, Greg Egan

I also think the Culture novels, and the 80,000 Hours book, could be good.

Comment by larks on Does Effective Altruism Lead to the Altruistic Repugnant Conclusion? · 2017-07-29T16:13:24.088Z · score: 1 (1 votes) · EA · GW

(Note your first line seems to be missing some *'s)

Fixed, thanks.

2016 AI Risk Literature Review and Charity Comparison

2016-12-13T04:36:48.060Z · score: 51 (53 votes)

Being a tobacco CEO is not quite as bad as it might seem

2016-01-28T03:59:15.614Z · score: 10 (12 votes)

Permanent Societal Improvements

2015-09-06T01:30:01.596Z · score: 9 (9 votes)

EA Facebook New Member Report

2015-07-26T16:35:54.894Z · score: 11 (11 votes)