Posts

2019 AI Alignment Literature Review and Charity Comparison 2019-12-19T02:58:58.884Z · score: 139 (46 votes)
2018 AI Alignment Literature Review and Charity Comparison 2018-12-18T04:48:58.945Z · score: 113 (54 votes)
2017 AI Safety Literature Review and Charity Comparison 2017-12-20T21:54:07.419Z · score: 43 (43 votes)
2016 AI Risk Literature Review and Charity Comparison 2016-12-13T04:36:48.060Z · score: 53 (55 votes)
Being a tobacco CEO is not quite as bad as it might seem 2016-01-28T03:59:15.614Z · score: 10 (12 votes)
Permanent Societal Improvements 2015-09-06T01:30:01.596Z · score: 9 (9 votes)
EA Facebook New Member Report 2015-07-26T16:35:54.894Z · score: 11 (11 votes)

Comments

Comment by larks on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-16T23:24:46.670Z · score: 3 (2 votes) · EA · GW
> B.2. “The Windfall Clause will shift investment to competitive non-signatory firms.”
>
> The concern here is that, when multiple firms are competing for windfall profits, a firm bound by the Clause will be at a competitive disadvantage because unbound firms could offer higher returns on new capital. That is, investors would prefer firms that are not subject to a “tax” on their profits in the form of the Windfall Clause. This is especially bad because it could mean that more prosocial firms (i.e., ones that have signed the Clause) would be at a disadvantage to non-signatory firms, making a prosocial “winner” of an AI development race less likely.238
>
> This is a valid concern which warrants careful consideration. Our current best model for how to address this is that the Clause could commit (or at least allow for the option of) distributions of equity,* instead of cash. This could either take the form of stock options or contingent convertible bonds. This avoids the concern identified by allowing firms to, for example, issue new, preferred shares which would have superior claim to windfall profits compared to donees. This significantly diminishes the concern that the Clause would dilute the value of new shares issued in the company and allows the bound firm to raise capital unencumbered by debt owed under the Clause.† Notably, firm management would still have fiduciary duties towards stock-holding windfall donees.

I agree that the problem (that investors will prefer to invest in non-signatories, and hence it will reduce the likelihood of pro-social firms winning, if pro-social firms are more likely to sign) does seem like a credible issue. I found the description of the proposed solution rather confusing, however. Given that I worked as an equity analyst for five years, I would be surprised if many other readers could understand it!

Here are my thoughts on a couple of possible versions of what you might be getting at; apologies if you actually intended something else altogether.

1) The clause will allow the company to make the required payments in stock rather than cash.

Unfortunately this doesn't really make much difference, because it is very easy for companies to alter this balance themselves. Consider that a company which had to make a $1 billion cash payment could fund this by issuing $1 billion worth of stock; conversely a company which had to issue stock to the fund could neutralise the effect on their share count by paying cash to buy back $1 billion worth of ordinary shares. This is the same reason why dividends are essentially identical to share buybacks.
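
To make the equivalence concrete, here is a toy sketch with made-up numbers (the $10B market cap, share count and obligation are all hypothetical): existing holders bear the same $1B cost whether the firm pays cash or issues stock.

```python
# Toy numbers (all hypothetical): a $10B firm with 100M shares owes $1B
# under the clause. Existing holders are $1B worse off either way.
market_cap, shares, obligation = 10e9, 100e6, 1e9

# (a) Pay $1B in cash: firm value falls by the cash paid out.
holders_value_cash = market_cap - obligation                       # $9.0B

# (b) Issue new shares worth $1B to the donee instead. Firm value is
# unchanged, but existing holders now own a smaller slice of it.
donee_fraction = obligation / market_cap                           # 10%
new_shares = shares * donee_fraction / (1 - donee_fraction)        # ~11.1M
holders_value_stock = market_cap * shares / (shares + new_shares)  # $9.0B
```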

2) The clause will allow subsequent financing to be raised that is senior to the windfall clause claim, and thus still attractive to investors.

'Senior' does not mean 'better' - it simply means that you have priority in the event of bankruptcy. However, the clause is already junior to all other obligations (because a bankrupt firm will be making ~0% of GDP in profit and hence have no clause obligations), so this doesn't really seem like it makes much difference. The issue is dilution in scenarios when the company does well, which is when the most junior claims (typically common equity, but in this case actually the clause) perform best.

The fundamental reason these two approaches will not work is that the value of an investment is determined by the net present value of future cashflows (and their probability distribution). Given that the clause is intended to have a fixed impact on these flows (as laid out in II.A.2), the impact on firm valuation is also rather fixed, and there is relatively little that clever financial engineering can do about it.
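
As a minimal sketch of that point (the discount rate and profit path are invented for illustration): if the clause claims a fixed share of windfall cashflows, the hit to NPV is the same however the payments are structured.

```python
# Sketch: a fixed claim on cashflows implies a fixed hit to valuation.
def npv(cashflows, r=0.08):  # r: assumed discount rate
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows, 1))

windfall = [0, 0, 0, 100e9, 200e9]              # hypothetical profit path
clause_payments = [0.5 * cf for cf in windfall]  # clause takes 50% each year

value_unsigned = npv(windfall)
value_signed = value_unsigned - npv(clause_payments)  # always 50% lower here
```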

3) The clause will have claim only to profits attributable to the existing shares at the time of the signing on. Any subsequent equity will have a claim on profits unencumbered by the clause. For example, if a company with 80 shares signs on to the clause, then issues 10 more shares to the market, the maximum % of profits that would be owed is 50%*80/(80+10) = 44.4%
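
As a quick check of that arithmetic (the function name is mine):

```python
def max_clause_share(signing_shares, new_shares, cap=0.5):
    # Under reading (3), only profits attributable to shares outstanding
    # at signing are subject to the (up to 50%) clause.
    return cap * signing_shares / (signing_shares + new_shares)

max_clause_share(80, 10)  # 0.444..., i.e. at most 44.4% of profits
```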

This would indeed avoid most of the problems in attracting new capital (save only the fear that a management team willing to screw over their previous investors will do so to you in the future, which is something investors think about a lot).

However, it would also largely undermine the clause by being easy to evade due to the fungibility of capital. Consider a new startup, founded by three guys in a basement, that signs the clause. Over the next few years they will raise many rounds of VC, eventually giving up the majority of the company, all excluded from the clause. Additionally, they pay themselves and employees in stock or stock options, which are also exempt from the clause. Eventually they IPO, having successfully diluted the clause-affected shares to ~1%. In order to finish the job, they then issue some additional new equity and use the proceeds to buy back the original shares.
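
A minimal sketch of those dilution mechanics (the round sizes are invented): each clause-exempt issuance shrinks the clause-affected fraction geometrically, and the final buyback takes it to zero.

```python
# Hypothetical: each VC round, employee stock grant or offering issues
# clause-exempt shares, shrinking the clause-affected fraction geometrically.
clause_fraction = 1.0  # founders' shares at signing

for stake_sold in [0.30, 0.25, 0.25, 0.20, 0.15, 0.10, 0.10]:
    clause_fraction *= (1 - stake_sold)

print(f"{clause_fraction:.1%}")  # ~21.7% after these rounds; more rounds
                                 # push it toward ~1%, and buying back the
                                 # original shares takes it to zero
```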


One interesting point on the other side, however, is the curious tendency for tech investors to ignore dilution. Many companies will exclude stock-based compensation (SBC) from their adjusted earnings, and analysts/investors are often willing to go along with this, saying "oh but it's a non-cash expense". Furthermore, SBC is excluded from Free Cash Flow, which is the preferred metric for many tech investors. So it is possible that (for a while) investors would simply ignore it.

Comment by larks on FHI Report: The Windfall Clause: Distributing the Benefits of AI for the Common Good · 2020-02-15T03:25:08.787Z · score: 19 (5 votes) · EA · GW

Thanks very much for sharing this. It is nice to see some innovative thinking around AI governance.

I have a bunch of different thoughts, so I'll break them over multiple comments. This one mainly concerns the incentive effects.

> C.2. “The Windfall Clause operates like a progressive corporate income tax, and the ideal corporate income tax rate is 0%.”

> Some commentators argue that the ideal corporate tax rate is 0%. One common argument for this is that corporate income tax is not as progressive as its proponents think because corporate income is ultimately destined for shareholders, some of whom are wealthy, but many of whom are not. Better, then, to tax those wealthy shareholders more directly and let corporate profits flow less impeded to poorer ones. Additionally, current corporate taxes appear to burden both shareholders and, to a lesser extent, workers.

I think this is a bit of a strawman. While it is true that many people don't understand tax incidence and falsely assume the burden falls entirely on shareholders rather than workers and consumers, the main argument for the optimality of a 0% corporate tax rate is Chamley-Judd (see for example here) and related results. (There are some informal descriptions of the result here and here.) The argument is about disincentives to invest reducing long-run growth and thereby making everyone poorer, not a short-term distributional effect. (The standard counter-argument to Chamley-Judd, as far as I know, is to effectively apply lots of temporal discounting, but this is not available to longtermist EAs.)

This is sort of covered in B.1., but I do not think the responses are very persuasive. The main response is rather glib:

> Further, by capping firm obligations at 50% of marginal profits, the Clause leaves room for innovation to be invested in even at incredibly high profit levels.231

There are a lot of desirable investments which would be rendered uneconomic. The fact that some investment will continue at a reduced level does not mean that missing out on the other forgone projects is not a great cost! For example, a 20% pre-tax return on investment for a moderately risky project is highly attractive - but after ~25% corporate taxes and ~50% windfall clause, this is a mere 5% return* - almost definitely below their cost of capital, and hence society will probably miss out on the benefits. Citation 231, which seems like it should be doing most of the work here, instead references a passing comment in a pop-sci book about individual taxes:

> There's also an argument that a big part of the very high earnings of many 'superstars' are also rents. These questions turn on whether most professional athletes, CEOs, media personalities, or rock stars are genuinely motivated by the absolute level of their compensation versus the relative compensation, their fame, or their intrinsic love of their work.

But corporations are much less motivated by fame and love of their work than individuals, so this does not seem very relevant, and furthermore it does not address the inter-temporal issue which is the main objection to corporation taxes.
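
For concreteness, the arithmetic behind the 20% → 5% example above (assuming, per the footnote, that the donation is not tax-deductible, so both levies apply to the same pre-tax profit):

```python
pretax_return = 0.20
corp_tax = 0.25   # levied on pre-tax profit
clause = 0.50     # also on pre-tax profit, as the donation is non-deductible

net_return = pretax_return * (1 - corp_tax - clause)  # 0.05, i.e. 5%
```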

I also think the sub-responses are unsatisfying. You mention that the clause will be voluntary:

> Firstly, we expect firms to agree to the Clause only if it is largely in their self-interest

But this does not mean it won't reduce incentives to innovate. Firms can rationally take actions that reduce their future innovation (e.g. selling off an innovative but risky division for a good price). A firm might voluntarily sign up now, when the expected cost is low, but then see their incentives dramatically curtailed later, when the cost is large. Furthermore, firms can voluntarily but irrationally reduce their incentives to innovate - for example a CEO might sign up for the clause because he personally got a lot of positive press for doing so, even at the cost of the firm.

Additionally, by publicising this idea you are changing the landscape - a firm which might have seen no reason to sign up might now feel pressured to do so after a public campaign, even though their submission is 'voluntary'.

The report then goes on to discuss externalities:

> Secondly, unbridled incentives to innovate are not necessarily always good, particularly when many of the potential downsides of that innovation are externalized in the form of public harms. The Windfall Clause attempts to internalize some of these externalities to the signatory, which hopefully contributes to steering innovation incentives in ways that minimize these negative externalities and compensate their bearers.

Here you approvingly cite Seb's paper, but I do not think it supports your point at all. Firms have both positive and negative externalities, and causing them to internalise them requires tailored solutions - e.g. a carbon tax. 'Being very profitable' is not a negative externality, so a tax on profits is not an effective way of minimising negative externalities. Similarly, the Malicious Use paper is mainly about specific bad use cases, rather than size qua size being undesirable. Moreover, size has little to do with Seb's argument, which is about estimating the costs of specific research proposals when applying for grants.

> Finally, one must consider that under windfall scenarios the gains from innovation are already substantial, suggesting that globally it is more important to focus on distribution of gains than incentivizing additional innovation.

I strongly disagree with this non-sequitur. The fact that we have achieved some level of material success now doesn't mean that the future opportunity isn't very large. Again, Chamley-Judd is the classic result in the space, suggesting that it is never appropriate to tax investment for distributional purposes - if the latter must be done, it should be done with individual-level consumption/income taxation. This should be especially clear to EAs who are aware of the astronomical waste of potentially forgoing or delaying growth.

Elsewhere in the document you do hint at another response - namely that by adopting the clause, companies will help avoid future taxation (though I am sceptical):

> A Windfall Clause could build goodwill among the public, dampening harmful public antagonism for a small (expected) cost. Governments may be less likely to excessively tax or expropriate firms committed to providing a public good through the Windfall Clause.

and

> However, from a public and employee relations perspective, the Clause may be more appealing than taxation because the Clause is a cooperative, proactive, and supererogatory action. So, to the extent that the Windfall Clause merely replaces taxation, the Windfall Clause confers reputational benefits onto the signatory at no additional cost.

However, it seems that the document equivocates on whether or not the clause is to reduce taxes, as elsewhere in the document you deny this:

> the Windfall Clause is not intended to be a substitute for taxation schemes. We also note that, as a private contract, the Windfall Clause cannot supersede taxation. Thus, if a state wants to tax the windfall, the Clause is not intended to stop it. Indeed, taxation efforts that broadly align with the goals and design principles of the Windfall Clause are highly desirable

\* for clarity of exposition I am assuming the donation is not tax deductible, but the point is not dramatically altered if it is.

Comment by larks on Short-Term AI Alignment as a Priority Cause · 2020-02-15T02:35:13.004Z · score: 6 (3 votes) · EA · GW

Many of these advantages (e.g. aligned recommenders pushing people towards longtermism, or animal rights) seem more like aligning recommenders with our values than any neutral account of alignment. It seems that any ideology could similarly claim that aligned recommenders are important for introducing people to libertarianism/socialism/conservatism/feminism etc. In contrast, this probably isn't in the best interests of the viewer - e.g. your average omnivore probably doesn't want to be recommended videos about animal suffering.

Comment by larks on Do impact certificates help if you're not sure your work is effective? · 2020-02-13T18:07:29.306Z · score: 2 (1 votes) · EA · GW
> I'm concerned that splitting the "vote" between these two methods will do harm to the community's ability to decide what types of work are good.

Could you go into detail about why you think this would be bad? Typically when you are uncertain about something it is good to have multiple (semi-) independent indicators, as you can get a more accurate overall impression by combining the two in some way.
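
One standard way to formalise this, sketched below with made-up numbers: if the two methods give noisy, (semi-)independent estimates of a project's value, precision-weighting them yields a tighter estimate than either alone.

```python
# Inverse-variance weighting of two independent noisy estimates.
def combine(est_a, var_a, est_b, var_b):
    w_a, w_b = 1 / var_a, 1 / var_b
    combined = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return combined, 1 / (w_a + w_b)  # combined variance < both inputs

combine(10.0, 4.0, 14.0, 9.0)  # -> (~11.2, ~2.77)
```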

Comment by larks on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-13T17:54:44.095Z · score: 4 (2 votes) · EA · GW

Makes sense, thanks for clarifying!

Comment by larks on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-09T23:38:34.225Z · score: 18 (10 votes) · EA · GW

Thanks for writing this up for comment.

Quick possible oversight: I didn't see any discussion of recusal where the fund member themselves is employed by, or receives funds from, the potential grantee? Sorry if I just misread! The closest I saw was this:

> A very close friend or partner of a fund member is employed, receiving funds from, or has some kind of other directly dependent relationship to the potential grantee

Right now I assume this would mainly apply to you (CFAR) and possibly Alex.


Separately, you mentioned OpenPhil's policy of (non-) disclosure as an example to emulate. I strongly disagree with this, for two reasons.

Firstly, I think OpenPhil's policy is bad. They enacted this policy as part of their general movement towards secrecy, but the actual reasons they described for sharing fewer details about their evaluations (i.e. issues are too complex, don't want to aid hostile actors etc.) do not seem to be that relevant to not disclosing conflicts. Certainly, OpenPhil's policy of non-disclosure makes me trust their work significantly less now, as I have to assume there is a significant chance any given decision was unfairly biased.

Secondly, there are significant differences between OpenPhil and you guys. In particular, OpenPhil's main job is advising a very small number of individuals, who (I presume) have access to many private details that they do not need to make public. Additionally, those individuals have a considerable amount of influence over OpenPhil. In the case of the LTFF, however, donors are reliant on public disclosure in order to be able to evaluate the fund. It is like the difference between a private company (who have almost no public disclosure requirements) and a public one (who have a lot of disclosure requirements).

Comment by larks on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-09T23:38:12.946Z · score: 11 (6 votes) · EA · GW
> One of the things that I am most concerned about if you were to just move towards recusal, is you just end up in a situation where by necessity the other fund members just have to take the recused person's word for the grant being good (or you pass up on all the most valuable grant opportunities).

It also seems that the recusal being discussed is quite weak. In my limited experience, recusal means totally removing oneself from the decision-making process. For example, when a SCOTUS Justice recuses himself, he doesn't take any role in the debate or voting. Similarly, when moderating the EA facebook group, generally conflicted mods won't argue for a position (though this is just my description of a norm and not an explicit rule we've had). If fund members with large conflicts of interest end up de facto making the decision anyway then the COI policy doesn't seem to have achieved anything.

I would suggest instead that other fund managers research the application and make the decision. This would help avoid an unfair bias towards funding people who are 'in the community'.

Comment by larks on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-09T22:11:45.476Z · score: 8 (8 votes) · EA · GW
> If I imagine myself in the shoes of a more conservative potential donor, who is checking the fine print as part of their due diligence, I would be put off by phrases like ‘metamour’, ‘consumption of drugs’ and the repeated mentioning of sexual relationships.

The purpose of disclosure is to provide potential donors with information they consider relevant to their decision process. That some donors will be persuaded not to donate by the information is a feature, not a bug.

Comment by larks on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-08T05:30:49.545Z · score: 12 (6 votes) · EA · GW
> The people making the bet aren't, even pretty indirectly, in a position to influence the management of the tragedy or the dedication of resources to it. It doesn't actually matter all that much, in other words, if one of them is over- or under-confident about some aspect of the tragedy.

Do you think the bet would be less objectionable if Justin was able to increase the number of deaths?

Comment by larks on EA Forum Prize: Winners for December 2019 · 2020-02-01T01:44:01.876Z · score: 10 (6 votes) · EA · GW

You're welcome! It was a tough decision, as I did find it quite motivating last year, but figured it would create the appearance of a conflict of interest if it won this year.

Comment by larks on Space governance is important, tractable and neglected · 2020-01-17T01:57:49.325Z · score: 2 (1 votes) · EA · GW

One example might be that early on, colonies are extremely reliant on the home world (as the ISS is today), so a lot of central control is exerted by earth. Later on, the large distances involved make both aid and (many forms of) coercion much more difficult, so much more decentralisation and independence seem likely. From our vantage point it seems unlikely that we can do much to influence the latter, given we first have to go through the radically different former.

Comment by larks on Long-term investment fund at Founders Pledge · 2020-01-15T04:43:01.593Z · score: 2 (1 votes) · EA · GW

This comment is less well cited than my usual, but perhaps one of the ideas might spark some useful thought:

> To learn more about e.g. risks of value drift, risks of expropriation and legal structures, we’re looking into case studies of similar funds, both active now and in the past.

Unfortunately I cannot find the reference, but I seem to recall the Ford Foundation having suffered such severe value drift - far from what Henry Ford intended - that the family basically disowned it.

> Creative ideas for optimal governance of the fund

I have heard (though have not verified, so consider this very speculative) that there is a legal structure left over from the crusades (for protecting your castle while you were away in the holy land) that might be useful for cryonics - perhaps it might be of use here.

Comment by larks on Khorton's Shortform · 2020-01-14T23:15:19.898Z · score: 4 (2 votes) · EA · GW
> Unfortunately, that means that those of us with our reputations on the line are the ones who have the most skin in the game to keep people from doing stupid unilateralist things that make everyone in the community look bad.

Surely if someone doesn't identify as an EA, their actions incur less reputational risk for the movement?

Comment by larks on Dataset of Trillion Dollar figures · 2020-01-14T23:09:55.554Z · score: 5 (4 votes) · EA · GW

Interesting idea!

I notice that many of the largest numbers are derivative notionals. It is important to note that this is a totally irrelevant number; derivative notionals are essentially arbitrary up to a scalar multiple.

As an example, suppose you and I want to make a bet about overnight interest rates on the first day of 2021 - specifically we agree that I will pay you $1 for every 1% the Fed Funds overnight rate is above 2%, and you will pay me $1 for every 1% it is below 2%, capped at $2 either way. The way we would formalise this as a contract would be:

  • An interest rate swap with 2% rate, one day tenor and $36,500 notional.
  • A receiver swaption with a 4% strike, one day tenor and $36,500 notional.
  • A payer swaption with a 0% strike, one day tenor and $36,500 notional.

In total this is over $100,000 worth of notional... for a $2 bet! What matters is the economic exposure of the derivatives, but this can be hard for non-specialists to calculate, so people often substitute the easier but irrelevant question of gross notional. Unfortunately regulations sometimes do the same, which has caused a variety of problems in the market.
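
A rough sketch of the bet's economics (ACT/365 day count assumed, and the cap written directly rather than via the two swaption legs):

```python
NOTIONAL = 36_500  # chosen so 1% of rate for one day pays exactly $1

def bet_payoff(realised_rate, strike=0.02, cap=0.02):
    diff = max(-cap, min(cap, realised_rate - strike))
    return NOTIONAL * diff / 365  # $1 per 1% above/below 2%, capped at $2

gross_notional = 3 * NOTIONAL    # swap + two swaptions = $109,500...
max_exposure = bet_payoff(1.0)   # ...for at most $2 of economic exposure
```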


Separately, you list 'global debt in the non-financial sector' as $1,521 trillion, but the source provided suggests it is $152 trillion. I suspect your scraping tool may have mistaken a footnote for an order of magnitude.


Comment by larks on The Center for Election Science Year End EA Appeal · 2020-01-06T20:34:35.396Z · score: 2 (1 votes) · EA · GW

He opposes the Fed, but does he want interest rates set by politicians? My understanding is he wanted a return to the gold standard, where there is less need to directly control the money supply - the only question is whether you insist on 100% backing or accept a lower ratio, but once that is set growth in the money supply is determined by the volume of physical gold.

Similarly bitcoin people oppose the Fed, but not because they want politicians in charge - they have another external rule (e.g. fixed max quantity) that reduces democratic discretion.

Comment by larks on The Center for Election Science Year End EA Appeal · 2020-01-05T02:25:01.513Z · score: 19 (5 votes) · EA · GW

Thanks for writing this; I find approval voting an interesting and intuitively attractive system (and have used it in the past).

I was surprised by the table you show about the impacts of approval voting on the Democratic candidate selection. I generally think of approval voting as supporting moderate candidates, as you mention, but here it seems to be favouring the most extreme (Warren, Sanders) over the more moderate (Biden) - though perhaps I have misinterpreted the chart.

This makes me worry that increasing the 'democraticness' of elections might lead to worse outcomes. You sort of allude to this concern in the FAQ, but pass over it pretty quickly. There are in fact many cases where it is, I think, relatively common to believe that reducing 'democraticness' is a good move:

  • Independent central banks are much more competent, credible and technocratic than having elected politicians control the money supply. I'm not really aware of anyone who thinks this move was a bad idea.
  • Elected judges are often viewed as far more populist and generally lower quality than appointed ones.
  • Elected utility commissions are less competent than appointed ones (though perhaps I am biased on this issue).
  • Referendums and ballot initiatives are often blamed as part of the cause for the decline in Californian governance.

Elected vs Appointed isn't exactly the same as FPTP vs Approval Voting, but it seems like they have similar aspects.

Garrett Jones has a book on this. It's only on pre-order at the moment, but you can read Hanson's comments here.

Comment by larks on Genetic Enhancement as a Cause Area · 2020-01-05T01:42:48.853Z · score: 6 (3 votes) · EA · GW

I thought this was a very interesting article, but I would question how much counterfactual difference we could expect intervention to make here. My default expectation is that a lot of this is going to happen anyway due to demand from parents. My impression is that IVF and preimplantation genetic screening (PGS) both became very commonplace with little explicit policy support for exactly this reason. This doesn't apply to setting external incentives, but does apply to many of the specific technologies you mention.

I also thought this was a little strange:

> One way of alleviating the harm due to inequality is by advocating a tax on innate, unearned qualities, such as favorable genetics and inheritance. I believe that these policies will be popular once the technology comes up on the horizon, and will likely play a large role in mitigating the worst risks of inequality.

With genetic enhancement, favourable genetics are (no longer) random - they are the result of a deliberate decision that you are trying to encourage. How many parents would want to curse their child with higher taxes? It seems rather strange that we should start taxing (e.g. discouraging) this good thing precisely at the moment it becomes possible to promote it!

Finally, you might enjoy this article by Eliezer. One interesting point is that there is something of a collective action problem, because each mutation is probably bad for the individual/family with it but provides useful information for everyone else.

Comment by larks on Welfare stories: How history should be written, with an example (early history of Guam) · 2020-01-04T03:12:48.806Z · score: 7 (5 votes) · EA · GW

I enjoyed reading this; thanks for sharing.

Comment by larks on Leverage Research: reviewing the basic facts · 2019-11-21T14:29:51.880Z · score: 2 (5 votes) · EA · GW

I don't know much about it, but isn't Reserve meant to be a Stablecoin? If so any change in value seems significantly worse than for other coins.

Comment by larks on Introducing Good Policies: A new charity promoting behaviour change interventions · 2019-11-19T02:18:50.852Z · score: 13 (12 votes) · EA · GW

Hey, I was wondering if you had taken into account the consumer surplus from smoking in your estimates?

This might not be a small factor:

  • Many smokers report enjoying the experience of smoking.
  • Many people choose to smoke despite knowing about the health effects.
  • Newer forms of tobacco consumption, like vaping, have significantly lower health side-effects.
  • Rational choice is still possible in the presence of addiction - see for example Becker and Murphy (1988).

I think this is especially important because preventing people from smoking is much more coercive than most EA projects; typically we are helping people do something they either want to do anyway or are at worst indifferent to (e.g. with GiveDirectly or Against Malaria Foundation). But taxing products that people want to consume (even if they might be ill-informed or the like) is quite different.

As a concrete example, the killing of Eric Garner by the NYPD, one of the causes of the Black Lives Matter Movement, was directly caused by (among other things) high tobacco taxation.

(I previously brought up this issue here)

Comment by larks on aarongertler's Shortform · 2019-11-15T14:50:23.947Z · score: 5 (3 votes) · EA · GW

Done.

Comment by larks on Institutions for Future Generations · 2019-11-14T21:45:30.603Z · score: 7 (5 votes) · EA · GW

You might be interested in this (courtesy of Gwern):

> The Corporate Governance of Benedictine Abbeys: What can Stock Corporations Learn from Monasteries?
>
> The corporate governance structure of monasteries is analyzed to derive new insights into solving agency problems of modern corporations. In the long history of monasteries, some abbots and monks lined their own pockets and monasteries were undisciplined. Monasteries developed special systems to check these excesses and therefore were able to survive for centuries. These features are studied from an economic perspective. Benedictine monasteries in Baden-Württemberg, Bavaria and German speaking Switzerland have an average lifetime of almost 500 years and only a quarter of them broke up as a result of agency problems. We argue that this is due to an appropriate governance structure, relying strongly on the intrinsic motivation of the members and on internal control mechanisms.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1137090

Comment by larks on Institutions for Future Generations · 2019-11-14T17:44:37.413Z · score: 10 (6 votes) · EA · GW

Every financial security requires a matching liability. Who or what owes the money at maturity? If it's funded out of general taxation it's a vote on whether non-holders should pay money to the holders. Holders are incentivized to give high numbers, non-holders are incentivized to give low numbers, and accurate retrospective judgements don't seem to be relevant at all.

My guess is that the price falls rapidly to zero, like failed crypto schemes, though the game theory is not totally clear.

Comment by larks on Institutions for Future Generations · 2019-11-14T17:40:02.471Z · score: 1 (4 votes) · EA · GW

Hereditary Rule

Increasing the power of hereditary rulers (Monarchs, House of Lords) and introducing them in other places (e.g. making Senates hereditary and replacing Presidents with Monarchs) to reduce short-term incentives by extending time in government office, and taking advantage of the high level of parent-child altruism to extend this beyond an individual ruler's lifespan.


Comment by larks on Choosing effective university for donations · 2019-11-14T15:17:25.578Z · score: 9 (6 votes) · EA · GW

Some EA-ish organisations are legally part of universities. For example, FHI is part of Oxford, and CHAI is part of UC Berkeley. In both cases when I donated to these organisations in the past it was legally a restricted donation to the university, to my recollection. I assume GPI is also part of Oxford.

(To be clear, I am not arguing that you should give to these two specific organisations).

Comment by larks on What metrics may be useful to measure the health of the EA community? · 2019-11-14T13:37:46.804Z · score: 11 (5 votes) · EA · GW

Interesting question.

I think there are essentially two different angles here: how good is the EA community at achieving its stated purpose, and how healthy are the members.

For the first one, how many people are donating at least 10% of their labour income is an obvious test. The extent to which EA research breaks new ground, vs going round in circles, would be another.

For the second presumably many standard measures of social dysfunction would be relevant - e.g. depression, crime, drug addiction, or unemployment. Conversely, we would also care about positive indicators, like professional success, having children, good family relationships, etc. However, you would presumably want to think about selection effects (does EA attract healthy people) vs treatment effects (does EA make people healthy). If we (hypothetically) made some people so depressed they rapidly drop out, our depression stats could look good, despite this being clearly bad!

Another issue is judging whether someone is a member of the community. A survey could be unrepresentative if it doesn't reach enough people - or if it reaches only peripherally attached people.

Comment by larks on Assumptions about the far future and cause priority · 2019-11-11T20:47:29.455Z · score: 10 (4 votes) · EA · GW

This is a really interesting post, thanks for writing it up.

I think I have two main models for thinking about these sorts of issues:

  • The accelerating view, where we have historically seen several big speed-ups in rate of change as a result of the introduction of more powerful methods of optimisation, and the introduction of human-level AGI is likely to be another. In this case the future is both potentially very valuable (because AGI will allow very rapid growth and world-optimisation) and endangered (because the default is that new optimisation forces do not respect the values or 'values' of previous modes.)
    • Physics/Chemistry/Plate Tectonics
    • Life/Evolution
    • Humanity/Intelligence/Culture/Agriculture
    • Enlightenment/Capitalism/Industrial Revolution
    • Recursively self-improving AGI?
  • The God of Straight Lines approach, where we'll continue to see roughly 2% RGDP growth, because that is what always happens. AI will make us more productive, but not dramatically so, and at the same time previous sources of productivity growth will be exhausted, so overall trends will remain roughly intact. As such, the future is worth a lot less (perhaps we will colonise the stars, but only slowly, and growth rates won't hit 50%/year) but also less endangered (because all progress will be incremental and slow, and humanity will remain in control). I think of this as being the epistemically modest approach.

As a result, my version of Clara thinks of AI Safety work as reducing risk in the worlds that happen to matter the most. It's also possible that these are the worlds where we can have the most influence, if you thought that strong negative feedback mechanisms strongly limited action in the Straight Line world.

Note that I was originally going to describe these as the inside and outside views, but I actually think that both have decent outside-view justifications.

Comment by larks on AI policy careers in the EU · 2019-11-11T13:11:39.718Z · score: 8 (7 votes) · EA · GW

Thanks for writing this, it was very interesting.

Readers might be interested in the EU's AI Ethics guidelines, which various EA-type people tried (and apparently failed?) to influence in a productive direction.

A minor note:

> the world’s largest trading bloc.

According to Google:

  • US GDP (2018): $20.5 trillion
  • EU GDP (2018): $18.8 trillion

and presumably EU GDP, and influence on AI, will fall when the UK leaves. (If you use PPP, I think China is bigger.)


Comment by larks on Centre for the Study of Existential Risk Six Month Report April - September 2019 · 2019-11-10T22:46:47.096Z · score: 3 (2 votes) · EA · GW

Thanks for writing this up, I thought it was very helpful.

Comment by larks on [updated] Global development interventions are generally more effective than Climate change interventions · 2019-10-10T02:20:27.213Z · score: 12 (5 votes) · EA · GW
> [updated] Global development interventions are generally more effective than Climate change interventions
>
> Previously titled “Climate change interventions are generally more effective than global development interventions”. Because of an error the conclusions have significantly changed. [old version]. I have extended the analysis and now provide a more detailed spreadsheet model below.

Wow, I have never seen someone do this before! This is really impressive, excellent job being willing to reverse your conclusions (and article). Max upvote from me.

Comment by larks on What actions would obviously decrease x-risk? · 2019-10-09T14:07:48.188Z · score: 4 (2 votes) · EA · GW

When I was studying maths it was made clear to us that some things were obvious, but not obviously obvious. Furthermore, many things I thought were obvious were in fact not obvious, and some were not even true at all!

Comment by larks on FHI Report: Stable Agreements in Turbulent Times · 2019-10-05T21:19:37.835Z · score: 3 (2 votes) · EA · GW

Thanks for sharing this here.

It strikes me that making it easier to change contracts ex post could make the long run situation worse. If we develop AGI, one agent or group is likely to become dramatically more powerful in a relatively short period of time. It seems like it would be very useful if we could be confident they would abide by agreements they made beforehand, in terms of resource sharing, not harming others, respecting their values, and so on. The whole field of AI alignment could be thought of as essentially trying to achieve this inside the AI. I was wondering if you had given any thought to this?

Comment by larks on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-04T21:57:14.727Z · score: 10 (5 votes) · EA · GW

I think Stefan is basically correct, and perhaps we should distinguish between Disclaimers (where I largely agree with Robin's critique) and Disclosure (which I think is very important). For example, suppose a doctor were writing an article about how Amigdelogen can treat infection.

Disclaimers:

  • Obviously, I'm not saying Amigdelogen is the only drug that can treat infection. Also, I'm not saying it can treat cancer. And infection is not the only problem; world hunger is bad too. Also you shouldn't spend 100% of your money on Amigdelogen. And just because we have Amigdelogen doesn't mean you shouldn't be careful about washing your hands.

This is unnecessary because no reasonable person would assume you were making any of these claims. Additionally, as Robin points out, by making these disclaimers you add pressure for others to make them too.

Disclosure:

  • I received a $5,000 payment from the manufacturer of Amigdelogen for writing this article, and hope to impress their hot sales rep.

This is useful information, because readers would otherwise reasonably assume you were unbiased, and this lets them more accurately evaluate how much weight to put on your claim, given that as non-experts they do not have the expertise to directly evaluate the evidence.

Comment by larks on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-04T02:42:30.295Z · score: 35 (16 votes) · EA · GW

You're definitely right that most grant-making organisations do not make much use of such disclaimers. However, I think this is mainly because it just doesn't come up - most grantmaking occurs between people who do not know each other much socially, and are often older and married anyway.

In contrast, the EA community, especially in the bay area, is extremely tight socially, and also exhibits a high level of promiscuity. As such the risk of decisions being unduly influenced by personal relationships is significantly higher. For example, back in 2016 OpenPhil revealed that they had advisors living with people they were evaluating, and evaluatees in relationships with OpenPhil staff (source). OpenPhil no longer seem to publish their conflicts of interest, but I suspect similar issues still occur. Separately, I have been told that some people in the bay area community explicitly use sexual relationships to make connections and influence the flow of funds from donors to workers and projects, which seems to raise severe concerns about objectivity and bias, as well as the potential for abuse (in both directions). I would be very concerned by either of these in the private sector, and see little reason to hold EAs to a lower standard.

Donors in general are subject to a significant information asymmetry and have few defenses against improper behaviour from organisations, especially in areas where concrete outputs are scarce. Explicit declarations that specific suspect conduct has not taken place represents a minimum level of such protection.

With regard to your bullet points, I think a good analogy would be disclaimers in financial research. Every piece of financial research comes with multiple pages of disclaimers at the end, including a promise from the authors that the piece represents their true opinions and various sections about financial conflicts of interest. Perhaps the first analysts subject to these requirements found them intrusive - however, by now they are a totally automated and unremarked-upon part of the process. I would expect the same to apply here, partly because every disclosure should ideally say the same thing: "None of the judges were in a relationship with anyone they evaluated."

Indeed, the disclosure requirements in the financial sector cover cases like these quite directly. For example, the CFA's Ethical and Professional Standards (2016):

"... requires members and candidates to fully disclose to clients, potential clients and employers all actual and potential conflicts of interest"

and from 2014:

"Members and Candidates must make full and fair disclosure of all matters that could reasonably be expected to impair their independence and objectivity or interfere with respective duties to their clients, prospective clients, and employer. Members and Candidates must ensure that such disclosures are prominent, are delivered in plain language, and communicate the relevant information effectively.

In this case, donors and potential donors to an EA organisation are the equivalent of clients and potential clients of an investment firm, and I think a personal relationship with a grantee could reasonably be expected to impair judgement.

A case I personally came across involved two flatmates who both worked for different divisions in the same bank (Research and Sales&Trading). Because the bank (rightfully) took the separation of these two functions very seriously, HR applied a lot of pressure to them and they found alternative living arrangements.

Another example is lotteries, where the family members of employees are not allowed to participate at all, because their winning would risk bringing the lottery into disrepute:

> In most cases the employee's immediate family and employees of lottery suppliers are also not allowed to play. In practice, there is no way that employees could alter the outcome of a game in their favor, but lottery officials generally believe that public confidence would be damaged should an employee win a large prize. (source)

This is perhaps slightly unfair, as they did not choose the employment of their family members, but this seems to be a small cost. The number of lottery family members is very small compared to the lottery-ticket-buying public, and there are other forms of gambling open to them. And the costs here should be smaller still, as all I am suggesting is disclosure, a much milder policy than prohibition.

I did appreciate that the fund's most recent write-up does take note of potential conflicts of interest, along with a wealth of other details. I could not find the sort of conflict of interest policy you suggested on their website however.

Comment by larks on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-03T21:38:23.785Z · score: 31 (15 votes) · EA · GW

Thanks for writing this up. Impressive and super-informative as ever. Especially with Oliver I feel like I get a lot of good insight into your thought process.

Comment by larks on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-10-03T16:56:18.710Z · score: 3 (5 votes) · EA · GW
> This post has been shared within the organisation I work for and I think could do very large damage to the reputation of EA within my org.

Would you mind sharing, at least in general terms, which organisation you work for? I confess that if I knew I have forgotten.


Comment by larks on Analgesics for farm animals · 2019-10-03T14:22:54.809Z · score: 9 (6 votes) · EA · GW

Interesting work, thanks for doing the research. I really appreciate these posts on new topics I had no idea existed.

Comment by larks on Is pain just a signal to enlist altruists? · 2019-10-02T17:44:09.235Z · score: 17 (6 votes) · EA · GW

Wow, this is fascinating speculation, thanks for posting.

The section on pain varying with the social environment was especially interesting. It reminded me of the (common but not uncontroversial) parenting strategy whereby babies are left to cry at night, so as to avoid positively reinforcing crying and instead train them to sleep unaided.

Would it suggest that exhortations to 'stop being a wuss' were actually effective? The nearby people are effectively precommitting to not be moved by visible suffering, which might reduce the incentive for the victim to experience pain.


Comment by larks on Candy for Nets · 2019-09-29T15:02:27.348Z · score: 20 (14 votes) · EA · GW

This is so adorable! I especially like when she volunteered to take over your job.

Comment by larks on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-27T20:39:03.617Z · score: 22 (17 votes) · EA · GW

It is admirably honest of you to highlight and address this, rather than hoping no-one notices.

> I don't think any grants we've made have been to anyone who has ever been romantically involved with any of the fund members

Perhaps you could get the other judges to join you in a joint explicit declaration that you've never had any romantic or sexual relationships with any of the recipients? Would be good to put this at the bottom of the writeups.

edit: surprised people have downvoted this. To be clear, I was genuinely impressed that OP directly addressed this, even at the cost of drawing attention to it.

Comment by larks on [Link] Moral Interlude from "The Wizard and the Prophet" · 2019-09-27T19:13:02.054Z · score: 8 (5 votes) · EA · GW
> At a 5 percent discount rate, the Argentine-American economist Graciela Chichilnisky has calculated, “the present value of the earth’s aggregate output discounted 200 years from now is a few hundred thousand dollars.”

  • 2019 Global GDP = around $88 trillion
  • Annual Real Growth Rate = assume 2.5%
  • Graciela's discount rate = 5%
  • Present Value of 2219 GDP = (88*10**12)*((1.025)**200)/((1.05)**200) = $710,224,969,039 (over $700 billion)
  • Present Value of 2219 and thereafter: ((88*10**12)/(0.05-0.025))*((1.025)**200)/((1.05)**200) = $28,408,998,761,567 (over $28 trillion)
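
The same calculation as a runnable sketch (same inputs as the bullets above):

```python
gdp_2019 = 88e12             # global GDP, dollars
g, r, T = 0.025, 0.05, 200   # real growth, discount rate, years ahead

pv_2219_gdp = gdp_2019 * ((1 + g) / (1 + r)) ** T  # ~7.1e11: >$700 billion
pv_2219_onwards = pv_2219_gdp / (r - g)            # ~2.8e13: >$28 trillion
# Either way, far more than "a few hundred thousand dollars".
```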
Comment by larks on Psychology and Climate Change: An Overview · 2019-09-27T16:58:49.547Z · score: 12 (5 votes) · EA · GW
> belief in free market ideology is a significant predictor of disbelief in global warming.

The citation here refers back to Heath & Gifford (2006), which is an n=185 survey of Canadians that failed to find a statistically significant (p<0.05) relationship in their main regression analysis (Table 3). Their conclusion seems to be justified by 1) this non-significant directional beta and 2) some post-hoc mediation analysis.

Comment by larks on A bunch of new GPI papers · 2019-09-25T17:57:26.583Z · score: 4 (3 votes) · EA · GW

Thanks for linking these here; they look like interesting papers.

Comment by larks on Forum Update: New Features (September 2019) · 2019-09-17T16:51:03.362Z · score: 5 (3 votes) · EA · GW

Thanks, these look like some interesting features.

Are there / should there be any social norms re: replying to someone else's shortform? They seem intuitively sort of 'private property' to me.

Comment by larks on The Long-Term Future: An Attitude Survey · 2019-09-17T01:56:11.034Z · score: 46 (18 votes) · EA · GW

Thanks for doing this work, and making it public. Similar to Max, I basically believe in the Total View, and am sympathetic to Temporal Cosmopolitanism, so consider this somewhat good news.

However, I am a little skeptical about some of the questions. To the extent you are trying to get at what people 'really' think (if they have real views on such a topic...) I worry that some of the questions were phrased in a somewhat biased manner - particularly the ones asking for agreement with the text.

When doing political polling, people generally don't ask questions like this:

> Do you agree the government should spend more on law and order?

... because people's level of agreement will be exaggerated. Instead, it's often considered better practice to phrase it more like:

> Which statement do you agree with more?
>
> 1) The government should spend more on law and order, even if it means higher taxes.
> 2) The government should lower taxes, even if it means less spending on law and order.
Comment by larks on Existential Risk and Economic Growth · 2019-09-17T01:26:21.198Z · score: 5 (3 votes) · EA · GW

Thanks very much for writing this, I found it really interesting. I like the way you follow the formalism with many examples.

I have a very simple question, probably due to my misunderstanding - looking at your simulations, you have the fraction of workers and scientists working on consumption going asymptotically to zero, but the terminal growth rate of consumption is positive. Is this a result of consumption economies of scale growing fast enough to offset the decline in worker fraction?

Comment by larks on Cause X Guide · 2019-09-16T01:05:54.203Z · score: 6 (3 votes) · EA · GW

It's also illegal in Turkey and (de jure at least) in China.

Comment by larks on [Link] What opinions do you hold that you would be reluctant to express in front of a group of effective altruists? Anonymous form. · 2019-09-16T01:02:09.270Z · score: 10 (6 votes) · EA · GW
> I once even wrote a research proposal on this for the CEA Summer Research Fellowship 2017. I was then invited to the programme.

Could you link to the research by any chance?

Comment by larks on A summary of Nicholas Beckstead’s writing on Bayesian Ethics · 2019-09-12T19:45:42.962Z · score: 4 (3 votes) · EA · GW

Thanks for writing this, I found it interesting and it significantly increased the likelihood I'd read the original.

Comment by larks on [Solved] Was my post about things you'd be reluctant to express in front of other EAs manually removed from the front page, and if so, why? · 2019-09-12T18:55:39.621Z · score: 2 (1 votes) · EA · GW

I think it was re-classified as 'Community', which removes it from the front page and puts it in a secondary location. People can still see it but they have to have 'Include Community Posts' ticked, which I think is unchecked by default.