Posts

2020 AI Alignment Literature Review and Charity Comparison 2020-12-21T15:25:04.543Z
Avoiding Munich's Mistakes: Advice for CEA and Local Groups 2020-10-14T17:08:13.033Z
Will protests lead to thousands of coronavirus deaths? 2020-06-03T19:08:10.413Z
2019 AI Alignment Literature Review and Charity Comparison 2019-12-19T02:58:58.884Z
2018 AI Alignment Literature Review and Charity Comparison 2018-12-18T04:48:58.945Z
2017 AI Safety Literature Review and Charity Comparison 2017-12-20T21:54:07.419Z
2016 AI Risk Literature Review and Charity Comparison 2016-12-13T04:36:48.060Z
Being a tobacco CEO is not quite as bad as it might seem 2016-01-28T03:59:15.614Z
Permanent Societal Improvements 2015-09-06T01:30:01.596Z
EA Facebook New Member Report 2015-07-26T16:35:54.894Z

Comments

Comment by larks on DanielFilan's Shortform · 2021-01-22T18:31:44.599Z · EA · GW

Seems plausible. Presumably some crime is deterred by these rules, which would leave the $3bn an under-estimate of the benefit. On the other hand, without the rules we might see more innovation in financial services, which would suggest the $300bn is an under-estimate of the costs.

Unfortunately I think it is very unlikely we could make any progress in this regard, as governments do not like giving up power, and the proximate victims are not viewed sympathetically, even if the true incidence of the costs is broad.

There have been attempts at reform in the past, as these rules particularly harm poor immigrants trying to send cash home, but as far as I am aware those attempts have been almost entirely unsuccessful.

Comment by larks on The Folly of "EAs Should" · 2021-01-10T18:10:46.999Z · EA · GW

we need to stop saying "don't donate to your local theatre" ... because actually [that is] bad advice a lot of the time

I'm surprised you would say this - I would expect that not donating to a local theatre would have basically no negative effects for most people. I can see an argument for phrasing it more delicately - e.g. "I wouldn't donate to a local theatre because I don't think it will really help make the world a better place" - but I would be very surprised if it was actually bad advice. Most people who stop donating to a charity suffer essentially no negative consequences from doing so.

Comment by larks on AMA: Elizabeth Edwards-Appell, former State Representative · 2021-01-10T17:59:34.300Z · EA · GW

Do you think being an EA, believing EA things, or being identified as one, represents any disadvantage (or advantage) in running for office?

Comment by larks on EA and the Possible Decline of the US: Very Rough Thoughts · 2021-01-08T16:07:39.302Z · EA · GW

Peaceful Scenarios

Collapse need not be violent or tumultuous. For example, there could be a legal agreement to split the country into different independent countries. Although difficult to imagine, the US could also enter into a treaty with an independent country that would integrate the two, fundamentally altering each.

These seem like quite different scenarios to the others discussed. If the US agreed to let California become independent, or annexed Canada, I would not expect any threat to nuclear security, or AI lab integrity, or drastic loss of life in the process. Annexing Canada could even potentially help continue US international hegemony through increased population and GDP, though it might be bad in other ways.

Comment by larks on Why are party politics not an EA priority? · 2021-01-07T17:22:22.529Z · EA · GW

eventually had to quit because the job was effectively unpaid

That's interesting - I've seen it argued that we should massively increase pay for MPs etc. in order to attract higher quality candidates. At the moment the pay and quality of life are both significantly worse than decent candidates could get by being e.g. an executive at a medium-sized firm, and perhaps as a result many MPs are just not that bright. In contrast, Singapore pays its politicians very well and has a reputation for high competency.

Comment by Larks on [deleted post] 2021-01-07T05:27:15.326Z

Very interesting article. Two minor pieces of housekeeping:

  • The paragraph beginning "Dr. Fauci regularly appears on MSNBC" appears twice.
  • There are some sections marked with "<>" which appear to be placeholders for subsequent content.

Comment by larks on Research on Effective Strategies for Equity and Inclusion in Movement-Building · 2020-12-29T16:21:02.230Z · EA · GW

Blinding may work for musicians

The link you shared does not work, but I assume it was meant to be pointing at the classic study on orchestral auditions from 1997/2000. However, recent re-analysis of the paper (here, here, here, here) shows that, if anything, it supports the opposite conclusion:

This table unambiguously shows that men are doing comparatively better in blind auditions than in non-blind auditions. The -0.022 number is the proportion of women that are successful in the audition process minus the proportion of men that are successful. Thus a larger proportion of men than women are successful in blind auditions, the exact opposite of what is claimed.

The 'fact' that blinded auditions help women overcome bias in non-blinded auditions came from some dubious pre-replication-crisis analysis, where the authors picked a small subset (often fewer than three orchestras!) of the data to try to find the effect they were looking for:

The impact of the screen is positive and large in magnitude, but only when there is no semifinal round. Women are about 5 percentage points more likely to be hired than are men in a completely blind audition, although the effect is not statistically significant. The effect is nil, however, when there is a semifinal round, perhaps as a result of the unusual effects of the semifinal round.

... but even with this p-hacking the authors failed to achieve statistical significance. 

So on the whole this suggests that musician auditions are another case where the process was originally biased against men, and blinding helped reduce this bias.

Comment by larks on 2020 AI Alignment Literature Review and Charity Comparison · 2020-12-23T17:48:15.907Z · EA · GW

Thanks, fixed

Comment by larks on TAI Safety Bibliographic Database · 2020-12-22T17:20:07.636Z · EA · GW

I just wanted to say thanks very much to Jess and Angelica for putting all this together in addition to the analytics above, they were extremely helpful in providing me with lists of relevant papers from relevant organisations that I would have likely missed otherwise. 

Comment by larks on 2020 AI Alignment Literature Review and Charity Comparison · 2020-12-21T15:39:20.064Z · EA · GW

Thanks, fixed.

Comment by larks on Careers Questions Open Thread · 2020-12-07T18:06:50.509Z · EA · GW

I think it depends a lot on industry. In the world of startups frequently changing jobs doesn't seem that unusual at all. In finance, on the other hand, I would be very suspicious of someone who moved from one hedge fund to another every two years.

It also depends a bit on the role. A recent graduate who joins an investment bank as an analyst is basically expected to leave after two years; but if a Director leaves after two years that is a sign that something was wrong. Working as a teacher for two years and then quitting looks bad, unless it was Teach for America, in which case it is perfectly normal.

Comment by larks on ACE's Compensation Strategy · 2020-12-03T01:39:59.609Z · EA · GW

This is a cool post, thank you for laying out your thought process. I especially like the section on (not using) cost of living adjustments.

One thing I would bring up is it seems you have focused a lot on the demand side, and not so much the supply side, of the equation. In the 'type of work' section you discuss equally valuing all these different skillsets, but they might not all be equally scarce. There are some roles when you might want to hire non-EAs (e.g. accounting, IT, HR, legal), in which case you might need to pay more for rarer skills - and conversely if you wanted to hire a cleaner, or someone to do data entry, it might seem wasteful to pay them as much as your researchers and programmers. It would be a shame to have experienced staff wasting their time on low-skill work just because you couldn't justify spending an above-market wage on hiring someone.

Comment by larks on Open Philanthropy Staff: Suggestions for Individual Donors (2020) · 2020-12-02T16:59:35.790Z · EA · GW

from the comments:

The program staff responsible for our giving in x-risk, AI, and effective altruism community building chose to not make recommendations this year. In some cases this was due, at least in part, to idiosyncratic reasons (e.g. how busy they were during the window we were soliciting recommendations) and not necessarily because they had no recommendations they thought would be particularly good fits for individual donors. It’s worth noting that Nick Beckstead advises the Long-Term Future Fund, which might be an option for some individual donors interested in these causes.

Comment by larks on Introducing High Impact Athletes · 2020-11-30T17:20:28.275Z · EA · GW

Most have shied away from a percentage, asking to donate a discreet amount and maybe come in at a 1% pledge next year or the year after. 

I am imagining this conversation:

Marcus: you should donate 1% of your income

Athlete: I don't want to commit to a percentage. How about a fixed dollar value for this year, and maybe a percentage later?

Marcus: Sounds good. How much do you make?

Athlete: I make $500k a year.

Marcus: How about donating 10k then? That's a nice round number. 

Comment by larks on Oxford college choice from EA perspective? · 2020-11-24T18:17:07.405Z · EA · GW

It's been a few years since I was there, but most of the EA events were organised at the university level (aside from a small number of Balliol-specific events that were short-lived I think) and I would be surprised if that had changed.

I guess one EA-specific consideration might be proximity to the FHI office. If you can get college accommodation I guess that would favour Worcester, which is a lovely college in any case.

Comment by larks on Please Take the 2020 EA Survey · 2020-11-20T16:12:37.611Z · EA · GW

Presumably part of the idea is that it is somewhat incentivising while also being very cheap: the money goes to places CEA would like to support anyway, and doesn't really motivate non-EAs to take the survey.

A different concern is it is not clear to me how counterfactually valid the donation is.

Comment by larks on Why you should give to a donor lottery this Giving Season · 2020-11-18T05:03:59.300Z · EA · GW

Thanks for the writeup! I continue to think this is a cool idea, especially as it is so counter-intuitive to many people.

The lottery is administered by EA Funds, which is a project of the Centre for Effective Altruism (CEA). CEA can only make grants that are within its charitable objectives, and retains sole discretion over where the final grants are made. This means that we won’t make grants that run counter to broad altruistic principles, or to projects that don’t satisfy our regular due diligence requirements. 

Is there anything you can say about how often, if at all, this has been the case? I guess there have been relatively few winners in the past so I would guess the answer is 'never'?

Comment by larks on Promoting Effective Giving and Giving What We Can within EA Groups · 2020-11-10T02:45:37.552Z · EA · GW

we make promises all the time that have implied conditionality, such as the example about picking up your niece from school, and marriages which most people agree should end if that is best, but that's rarely in the vows

The niece scenario seems quite different from that of the pledge. To recap the scenario:

If you promise to pick up your niece from school and are hit by a car it’s not terrible for you to break that promise because you’re in the ICU. 

If you're in the ICU, it may well be simply impossible for you to pick up your niece! If you're on oxygen support, or have a damaged spine, or many of the other conditions that warrant ICU, attempting to drive to her school might literally kill you, leaving her still unpicked up. If you're on strong painkillers you might still be able to physically operate the car, but your judgement is so impaired that driving would impose an unacceptably large risk on third parties, violating their rights. Or you might just be in a coma and unable to do anything at all. This seems quite dissimilar to the case of people wishing to get out of their pledge commitment. My impression is these people generally have much more mundane motivations, closer to "I don't want to" than "I cannot". I think it is reasonable to infer a silent "unless it is impossible" into a promise, but not an "unless I change my mind" - that would invalidate the entire point of the pledge.

Similarly, I strongly disagree about the marriage example. The classic marriage oath clearly states that it is meant to be until death, explicitly clarifies that a long list of conditions are not sufficient grounds for its end, and brings together a huge group of witnesses.  

To have and to hold, from this day forward, for better, for worse, for richer, for poorer, in sickness and in health, until death do us part.

It's hard to imagine how people could make much clearer their intentions to enter into a permanently binding contract, as was enshrined in law for much of its history.

Nor do I agree that fidelity to promises is a problem as you imply:

In cases where someone is particularly scrupulous to a point of detriment 

The idea that someone should fulfil their commitments is not a detriment or a problem. On the contrary, being a trustworthy person yields many advantages. Being able to credibly commit yourself can give others the confidence to act in beneficial ways that they might choose not to if they were afraid you would screw them over later. It also allows us to bind ourselves, protecting ourselves from future moments of weakness. 

Comment by larks on Why I think the EA Community should write more fiction · 2020-11-05T20:27:04.955Z · EA · GW

Obligatory link to Harry Potter and the Methods of Rationality, both a great piece of literature on its own merits and also one of the leading gateways to the LW/EA community.

Comment by larks on Why we should grow One for the World chapters alongside EA student groups · 2020-11-03T15:34:58.175Z · EA · GW

The average American gives 2-3% of their earnings to charity, see what giving 1% can do!

It seems a bit strange to put a lot of effort into trying to get people to commit to a level of giving that is far lower than the average person would give anyway. With GWWC we were pushing for people to donate significantly more.

Comment by larks on Nathan Young's Shortform · 2020-10-24T17:16:28.874Z · EA · GW

Can you think of any examples of other movements which have this? I have not heard of such for e.g. the environmentalist or libertarian movements. Large companies might have whistleblowing policies, but I've not heard of any which make use of an independent organization for complaint processing.

Comment by larks on EA's abstract moral epistemology · 2020-10-20T16:19:42.443Z · EA · GW

Yeah despite having studied philosophy I also found this a little impenetrable. It keeps saying things like,

values are simultaneously woven into the fabric of reality and such that we require particular sensitivities to recognise them

and that these views came from some women philosophers at Oxford and Durham, but it never really explains what they mean.

To the extent I felt I understood it, this was only by pattern-matching to the usual criticisms of EA and utilitarianism, like 'too impersonal' and 'not left wing enough'. But this means I wasn't able to get much new from it.

Comment by larks on Thomas Kwa's Shortform · 2020-10-17T03:12:01.398Z · EA · GW

Unfortunately most cost-effectiveness estimates are calculated by focusing on the specific intervention the charity implements, a method which is a poor fit for large diversified charities.

Comment by larks on jackmalde's Shortform · 2020-10-13T14:00:43.143Z · EA · GW

he is asking you to consider how it typically feels like to listen to muzak and eat potatoes

I always found this very confusing. Potatoes are one of my favourite foods!

Comment by Larks on [deleted post] 2020-10-12T03:40:20.725Z

Can we at least have a consensus and commitment that we go back to the previous norm after this election, to prevent a slippery slope where engaging in partisan politics becomes increasingly acceptable in EA?

Unfortunately I expect that in four years time partisans will decide that 2024 is the new most important election in history and hence would renege on any such agreement.

Comment by larks on Can my self-worth compare to my instrumental value? · 2020-10-12T00:11:17.252Z · EA · GW

I wonder to what extent this springs from the fact that most pastors do not expect most of their congregants to achieve great things. Presumably if you are a successful missionary who converts multiple people, your instrumental value significantly exceeds your intrinsic value, so I wonder if they have the same feelings. An extreme case would be someone like Moses, whose intrinsic value presumably paled into insignificance compared to his instrumental value as a saviour of the Israelites and a transmitter of the Word of God.

In any case, I think there is a strong case to be made for spending resources on yourself for non-instrumental reasons. Even if you don't think you matter more than anyone else, you definitely don't matter less than them! And you have a unique advantage in spending resources to generate your own welfare: an intimate understanding of your own circumstances and preferences. When we give to help others, it can be very difficult to figure out what they want and how to best achieve that. In contrast, I know very well which things I have been fixated on!

Comment by larks on Hiring engineers and researchers to help align GPT-3 · 2020-10-10T03:56:58.515Z · EA · GW

I didn't downvote, but I could imagine someone thinking Halstead had been 'tricked' - forced into compliance with a rule that was then revoked without notifying him. If he had been notified he might have wanted to post his own job adverts in the last few years.

Personally I share your intuitions that the occasional interesting job offer is good, but I don't know how this public goods problem could be solved. No job ads might be the best solution, for all that I enjoyed this one.

Comment by larks on Best Consequentialists in Poli Sci #1 : Are Parliaments Better? · 2020-10-10T03:52:28.342Z · EA · GW

While economics is often derided as the dismal science, I believe that economists have done much to improve policymaking in the world.

In keeping with the abolitionist origins of the phrase:

Carlyle’s target was ... economists such as John Stuart Mill, who argued that it was institutions, not race, that explained why some nations were rich and others poor. Carlyle attacked Mill ... for supporting the emancipation of slaves. It was this fact—that economics assumed that people were basically all the same, and thus all entitled to liberty—that led Carlyle to label economics “the dismal science.”

Comment by larks on Some thoughts on the effectiveness of the Fraunhofer Society · 2020-10-03T04:37:52.128Z · EA · GW

It seems from your description that part of the problem is that the same body invents projects for itself to work on. Do you think things would be significantly improved if, after coming up with a research project, they had to invite external bids for the project, and only do it in-house if they won the tendering process? Perhaps this would be prohibitively hard to implement in practice.

Comment by larks on Some thoughts on the effectiveness of the Fraunhofer Society · 2020-10-01T15:09:26.494Z · EA · GW

This was a really interesting article on a subject I'd never heard of before, thanks very much. I assume similar issues affect government research organisations in other countries as well.

Comment by larks on Suggestions for Online EA Discussion Norms · 2020-09-30T03:06:30.256Z · EA · GW

When asking the person to rephrase their comment, it can be useful suggest a rewrite yourself.
Example: Someone noticed a commenter who appeared to be name calling another person. This is how they might have rewritten the comment: "I have this point of view because of this reason. I see other people with this different approach and I find it odd because it seems so much in conflict with what I've learned. I wonder how they got to that conclusion."

I found this suggestion kind of surprising upon re-reading. Do you have experience of it working well? I worry it could easily come across as somewhat patronising.

Comment by larks on Why doesn't EA Fund support Paypal? · 2020-09-26T15:13:36.048Z · EA · GW

Is there any legal reason the OP couldn't PayPal money to someone else who then makes a donation on his behalf? I agree their accepting PayPal is the ideal solution, but maybe this is an acceptable short-term workaround.

Comment by larks on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-09-24T14:14:23.332Z · EA · GW

Nobel Cause Corruption

Is this about how the Peace Prize is given out to either warmongers or ineffective activists rather than professional diplomats and international supply chain managers?

Comment by larks on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-21T02:01:42.748Z · EA · GW

I don't see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice.

It seems a lot depends on how you group together things into causes then. Is my recycling about reducing waste in my town (a small issue), preventing deforestation (a medium issue), fighting climate change (a large issue) or being a good person (the most important issue of all)? Pretty much any action can be attached to a big cause by defining an even larger, and even more inclusive problem for it to be part of.

Comment by larks on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-20T21:44:35.428Z · EA · GW

A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.
These people and groups select causes based only on perceived scale. They don't necessarily think that malaria and AI risk aren't important, they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.

I agree it would be good to have a diagnosis of the thought process that generates these sorts of articles so we can respond in a targeted manner that addresses the model behind their objections, rather than one which simply satisfies us that we have rebutted them. And this diagnosis is a very interesting one! However, I am a little sceptical, for two reasons.

EAs often break cause evaluation down into Scope, Tractability and Neglectedness, which is elegant as they correspond to three factors which can be multiplied together. You're basically saying that these critics ignore (or consider unquantifiable) Neglectedness and Tractability. However, it seems perhaps a little bit of a coincidence that the factor they are missing just happens to correspond to one of the terms in our standard decomposition. After all, there are many other possible decompositions! But maybe this decomposition just really captures something fundamental to all people's thought processes, in which case this is not so much of a surprise.
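
For reference, the decomposition I have in mind multiplies out as follows (an 80,000 Hours-style formulation; writers vary in the exact factor names):

$$
\frac{\text{good done}}{\text{extra dollar}}
= \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{Scope}}
\times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{Tractability}}
\times \underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{Neglectedness}}
$$

The intermediate terms cancel, which is the elegance referred to above; a critic who drops the last two factors is implicitly ranking causes on Scope alone.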

But more importantly I think this theory seems to give some incorrect predictions about cause focus. If Importance is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is they are not. Similarly, I would be very surprised if they were dismissive of e.g. residential recycling, or US criminal justice, as being on too small a scale to warrant much concern.

Comment by larks on Denise_Melchin's Shortform · 2020-09-19T16:51:42.321Z · EA · GW

I have some sympathy with this view, and think you could say a similar thing with regard to non-utilitarian views. But I'm not sure how one would cash out the limits on 'atrocious' views in a principled manner. To a truly committed longtermist it is plausible that any non-longtermist view is atrocious!

Comment by larks on So-Low Growth's Shortform · 2020-09-18T16:36:30.836Z · EA · GW

This is an interesting idea. You might need to change the design a bit; my impression is that the experiment focused on getting people to donate vs not donating, whereas the concern with longtermism is more about prioritisation between different donation targets. Someone's decision to keep the money wouldn't necessarily mean they were being short-termist: they might be going to invest that money, or they might simply think that the (necessarily somewhat speculative) longtermist charities being offered were unlikely to improve long-term outcomes.

Comment by larks on Long-Term Future Fund: September 2020 grants · 2020-09-18T16:30:08.905Z · EA · GW

As always, thanks very much for writing up this detailed report. I really appreciate the transparency and insight into your thought processes, especially as I realise doing this is not necessarily easy! Great job.

(It's possible that I might have some more detailed comments later, but in case I don't I didn't want to miss the chance to give you some positive feedback!)

Comment by larks on Denise_Melchin's Shortform · 2020-09-17T17:18:30.048Z · EA · GW

People do bring this up a fair bit - see for example some previous related discussion on Slatestarcodex here and the EA forum here.

I think most AI alignment people would be relatively satisfied with an outcome where our controls over AI outcomes were as strong as our current control over corporations: optimisation for a criterion that requires continual human input from a broad range of people, while keeping humans in-the-loop of decision making inside the optimisation process, and with the ability to impose additional external constraints at run-time (regulations).

Comment by larks on Tax Havens and the case for Tax Justice · 2020-09-17T03:29:52.411Z · EA · GW

Thanks for the effort that went into this post. However, I thought there was a conspicuous lack of any discussion of Optimal Taxation Theory.

Quoting from Mankiw's excellent review article, we can see why this part of economics is highly relevant to the issue: it is directly concerned with what type of tax system maximises utility:

The standard theory of optimal taxation posits that a tax system should be chosen to maximize a social welfare function subject to a set of constraints. The literature on optimal taxation typically treats the social planner as a utilitarian: that is, the social welfare function is based on the utilities of individuals in the society. ... one would not go far wrong in thinking of the social planner as a classic “linear” utilitarian.

I'm not sure I could put it better than he does, so I hope you forgive the repeated quotations. One of the main findings of this field is that taxes on capital should be zero:

Perhaps the most prominent result from dynamic models of optimal taxation is that the taxation of capital income ought to be avoided. This result, controversial from its beginning in the mid-1980s, has been modified in some subtle ways and challenged directly in others, but its strong underlying logic has made it the benchmark.

Why? There are several reasons, and I encourage you to read the whole article, but the third justification he lists should be especially appealing to longtermist EAs: capital taxation reduces investment, which makes everyone poorer in the long run, even those who do not own any capital.

A third intuition for a zero capital tax comes from elaborations of the tax problem considered by Frank Ramsey (1928). In important papers, Chamley (1986) and Judd (1985) examine optimal capital taxation in this model. They find that, in the short run, a positive capital tax may be desirable because it is a tax on old capital and, therefore, is not distortionary. In the long run, however, a zero tax on capital is optimal. In the Ramsey model, at least some households are modeled as having an infinite planning horizon (for example, they may be dynasties whose generations are altruistically connected as in Barro, 1974). Those households determine how much to save based on their discounting of the future and the return to capital in the economy. In the long-run equilibrium, their saving decisions are perfectly elastic with respect to the after-tax rate of return. Thus, any tax on capital income will leave the after-tax return to capital unchanged but raise the pre-tax return to capital, reducing the size of the capital stock and aggregate output in the economy. This distortion is so large as to make any capital income taxation suboptimal compared with labor income taxation, even from the perspective of an individual with no savings. [emphasis added]
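
To make the emphasised mechanism concrete, here is a minimal steady-state sketch (my own notation, not Mankiw's): households save until the after-tax return on capital equals their discount rate $\rho$, so with a capital tax $\tau$ and production function $f$,

$$
(1-\tau)\,f'(k^*) = \rho \quad\Longrightarrow\quad f'(k^*) = \frac{\rho}{1-\tau}.
$$

Since $f'' < 0$, a higher $\tau$ requires a higher pre-tax return and hence a smaller steady-state capital stock $k^*$; output $f(k^*)$ and the competitive wage $w = f(k^*) - k^* f'(k^*)$ fall with it, which is why even a household with no savings is made worse off.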

There has been a lot of work on the subject since then - for example here and here - but I think of Chamley-Judd as being a core result that the rest of the field is responding to. Some find that capital taxes should be positive or high, and some find that they should be negative - that we should subsidise investment - but the negative effects of capital taxes on investment, growth and aggregate welfare are clearly an important topic that cannot be dispensed with without comment!

The above is concerned with capital taxation, but corporate taxes specifically are I think even worse. They essentially function as capital taxation, but typically allow interest expense to be deducted, hence distorting financing decisions away from equity and towards debt - contributing to systemic risk. (This problem was partly addressed in the US by the 2017 tax reform.) To the extent that they only apply to legal corporations, and not other types of entity, they also distort the choice of organisational form, which is similarly harmful.

As a result, it seems that corporate taxes are harmful, and it would be better for the world (and the long term future) if they did not exist. Unfortunately they do exist - probably due to exactly the problems with institutional decision making that longtermist EAs are concerned about (e.g. short planning horizons, high discount rates, and capture by special interests). Fortunately, international tax competition provides something of a remedy, by encouraging countries to lower their corporate taxes to closer to the ideal level. Contra your suggestion that it 'damages both "winners" and losers', it acts as a beneficial check on the ability of countries to institute harmful policies. We should be supporting tax havens and praising their effects, not seeking to destroy them.

Despite having a section on 'Objections', the article does not really address this argument. You do sort of get at this issue here:

Developing Countries
Tax havens are necessary structures in encouraging investment in developing countries[35]. ...

But the response misses the point:

Response: Agreed -- developing countries need to build both legal and tax system capacity. Development Financing Institutes and other investors require developing countries to honour and enforce contracts and to refrain from arbitrary seizure of assets.

Getting rid of tax havens degrades our ability to resist arbitrary seizure of assets. This is no small deal - many of the worst disasters in history have been intimately tied with governments' seizures of assets and the resultant damage to productive capacity. If we get rid of one check on this problem, we should have something else in place that can serve a similar job. The mere threat of losing access to financial markets for a while is insufficient. There are possible alternatives - once upon a time the west used gunboat diplomacy to this effect - but we should not remove our current solution without first instituting a new one.

Indeed, I think this article actually showcases the problem to a small degree. You write:

[tax havens] cost governments worldwide at least $500B/year in lost tax revenue

It is true that current investments, if subject to a higher level of taxation, would lead to higher tax revenues for governments (in the short run). But these investments were made by individuals and companies who were expecting to pay lower taxes! If taxes had been higher, fewer of these investments would have been made. To point out now that there is a lot of capital out there that could be taxed more if we changed the rules is precisely the sort of ex post asset seizure that people are worried about.

This section also sort of hints at the problem:

Growth
Tax havens promote economic growth in high-tax countries, especially those located near tax havens. US multinationals' use of tax havens shifts tax revenue from foreign governments to the US by reducing the foreign tax credits they claim against US tax payable. As a result of the 1996 Puerto Rico tax haven phaseout mentioned above, employment by affected firms dropped not just in Puerto Rico, but in the US as a whole; affected firms reduced investment globally.[36]

But again the response misunderstands:

Response: If curbing tax havens reduces growth and taxes in developed countries for the benefit of developing countries, that is likely a trade-off many EAs would be willing to make (see below). Abbott Laboratories and other multinationals affected by the Puerto Rico phaseout may have reduced global investment, but increased investment and jobs in developing countries such as India. Given that US dollars go a lot further in less developed countries, a reduction in global investment by specific firms could also reflect better value for money.

The problem is not so much that getting rid of tax havens will reduce investment in the west specifically, but that this will result in a global increase in effective tax rates, and as such will reduce investment globally.

Comment by larks on Parenting: Things I wish I could tell my past self · 2020-09-16T15:29:55.674Z · EA · GW

Nut butters: my impression is that there’s pretty good evidence these days that kids are less likely to be allergic to things they’ve eaten regularly before age 1. Since nuts are choking hazards, we’ve been giving Leo various nut butters (peanut, cashew, almond, hazelnut).

Our paediatrician recommended this for people using bottles. It contains powdered peanut, cow's milk and egg that you can add to their bottle once a day to help prevent allergies. At the beginning you titrate up, adding one food at a time, and then the packets switch to maintenance.

Comment by larks on Here's what Should be Prioritized as the Main Threat of AI · 2020-09-14T13:59:00.366Z · EA · GW

Thanks.

In this paper, Strubell & al (2019) outline the hidden cost of machine learning (from inception to training and fine tuning) and found emissions for 1 model is about 360 tCo2.

The highest estimate they find is for Neural Architecture Search, which they estimated as emitting 313 tons of CO2 after training for over 30 GPU-years. This suggests to me that they're using an inappropriate hardware choice! Additionally, the work they reference - here - does not seem to be the sort of work you'd expect to see widely used. Cars emit a lot of CO2 because everyone has one; most people have no need to search for new transformer architectures. The answers from one search could presumably be used for many applications.

Most of the models they train produce dramatically lower estimates.

I also don't really understand how their estimates for renewable generation for the cloud companies are so low. Amazon say they were 50% renewable in 2018, but the paper only gives them 18% credit, and Google say they are CO2 neutral now. It makes sense that they should look quite efficient, given that cloud datacenters are often located near geothermal or similar power sources. This 18% is based on a Greenpeace report which I do not really trust.

Finally, I found this unintentionally very funny:

Academic researchers need equitable access to computation resources.
Recent advances in available compute come at a high price not attainable to all who desire access. ... . Limiting this style of research to industry labs hurts the NLP research community in many ways. ... This even more deeply promotes the already problematic “rich get richer” cycle of research funding, where groups that are already successful and thus well-funded tend to receive more funding due to their existing accomplishments. Third, the prohibitive start-up cost of building in-house resources forces resource-poor groups to rely on cloud compute services such as AWS, Google Cloud and Microsoft Azure.
While these services provide valuable, flexible, and often relatively environmentally friendly compute resources ...

This whole paragraph is totally different to the rest of the paper. It appears in the conclusion section, but doesn't really follow from anything in the main body - it appears the authors simply wanted to share some left wing opinions at the end. But this 'conclusion' is exactly backwards - if training models is bad for the environment, it is good to prevent too many people doing it! And if cloud computing is more environmentally friendly than buying your own GPU, it is good that people are forced into using it!

Overall this paper was not very convincing that training models will be a significant driver of climate change. And there is compelling reason to be less worried about climate change than AGI. So I am not persuaded that the main AI risk concern is this secondary effect on climate change.

Comment by larks on Buck's Shortform · 2020-09-14T01:25:49.749Z · EA · GW

I've proposed before that voting shouldn't be anonymous, and that (strong) downvotes should require explanation (either your own comment or a link to someone else's). Maybe strong upvotes should, too?

It seems this could lead to a lot of comments and a very rapid ascent through the meta hierarchy! What if I want to strong downvote your strong downvote explanation?

Comment by Larks on [deleted post] 2020-09-13T00:33:46.013Z

I think you have misrepresented Holden's argument:

Ironically, your letter disappointed me because the vitriol got in the way of good reasoning. A useful version of your letter would have tackled the question of whether it's possible to be *both* honest and kind. Your letter implicitly assumed that you can't do both, and left this assumption unchecked. I very much hope you don't allow your passion to get in the way of good analysis in the rest of your work.

I do not think that Holden assumed that nice and honest feedback are mutually exclusive at all. Reading his interlocutors (e.g. Mark Petersen), it is clear he is reacting to people saying that any public negative feedback would be too demoralising for the staff. I agree that he is suggesting that charity workers need to man up and accept tough feedback - "your life's work has been pointless" is going to hurt no matter how it's phrased - but I disagree that he implies you cannot avoid being unnecessarily nasty in doing so.

If you had written this as a rebuttal piece - perhaps 'Reasons to Avoid Unnecessarily Upsetting Crybabies' - I might have upvoted it, despite the above. But as it is this article is unnecessarily passive-aggressive. I do not think we should encourage people seeking the mantle of victimhood in order to criticise others.

Anyone with a long history of public comments is inevitably going to have some cringe material from a long time ago. I don't think it is a good principle that people should trawl through blog posts from 13 years ago, on a different website, looking for something to demand a public apology for. If we start accepting posts like this then this entire forum could end up being nothing but such articles!

This is particularly the case here because I see little reason to think this reflects Holden's current thinking; indeed his current organisation, OpenPhil, is generally extremely circumspect - to a fault, even. The context in which he was writing back in 2007 was very different. We had OvercomingBias, but this was before LessWrong, before GWWC, and before the rest of the EA movement. GiveWell was almost all there was - GiveWell, and an enormous philanthropy industry which treated any criticism as anathema. Staking out an extreme position early on can be a valuable exercise, to help people settle on a happy medium, which I think we have more or less done.

Comment by larks on Here's what Should be Prioritized as the Main Threat of AI · 2020-09-10T16:49:18.147Z · EA · GW

In this paper

I think you may have forgotten to add a hyperlink?

Comment by larks on Asking for advice · 2020-09-07T15:29:44.786Z · EA · GW

I guess it's possible some people would find being sent a calendly link off-putting for some reason, but I haven't seen indications of that so far.

I actually find it extremely annoying, though I don't know why and I don't particularly endorse this reaction. There have been cases where people have sent me calendlies with zero slots available, or failed to show up for a call I scheduled using it, but I don't think this is the reason. I have actually missed at least one call that should have taken place just because I found calendly so irrationally aversive.

Comment by larks on Will protests lead to thousands of coronavirus deaths? · 2020-09-05T21:49:04.356Z · EA · GW

I don't think this has been posted as a comment yet, so I'd like to link this study (shared with me by Hauke Hillebrandt) which estimates the impact of protests on COVID-19 spread.

Thanks. I think this paper was actually already linked in a comment by AGB here; I've also discussed it in the retrospective part of the post.

Comment by larks on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T14:06:54.373Z · EA · GW

If you're talking to a man about rape and he thinks it's not a big deal, there's some chance he'll actually rape you.

I realise you did not say this applied to Robin, but just in case anyone reading was confused and mistakenly thought it was implicit, we should make clear that Robin does not think rape is 'not a big deal'. Firstly, opposition to rape is almost universal in the west, especially among the highly educated; as such our prior should be extremely strong that he does think rape is bad. In addition to this, and despite his opposition to unnecessary disclaimers, Robin has made clear his opposition to rape on many occasions. Here are some quotations that I found easily on the first page of google and by following the links in the article EA Munich linked:

I was not at all minimizing the harm of rape when I used rape as a reference to ask if other harms might be even bigger. Just as people who accuse others of being like Hitler do not usually intend to praise Hitler, people who compare other harms to rape usually intend to emphasize how big are those other harms, not how small is rape.

https://www.overcomingbias.com/2014/11/hanson-loves-moose-caca.html

You are seriously misrepresenting my views. I'm not at all an advocate for rape. 

https://twitter.com/robinhanson/status/990762713876922368?lang=en

It is bordering on slander for you to call me "pro-rape". You have no direct evidence for that claim, and I've denied it many times.  

https://twitter.com/robinhanson/status/991069965263491072

I didn't and don't minimize rape!  

https://twitter.com/robinhanson/status/1042739542242074630

and from personal communication:

of course I’m against rape, and it is easy to see or ask.

Separately, while I don't know the base rate at which a hypothetical person who supposedly doesn't take rape sufficiently seriously would go on to rape someone at an EA event (I suspect it is very low), I think we would be relatively safe here as it would presumably be a Zoom meeting anyway due to German immigration restrictions.

Comment by larks on An argument for keeping open the option of earning to save · 2020-08-31T16:59:27.489Z · EA · GW

Thanks for writing this up. However, I am confused about the mechanism.

In my head I think of there as being three options, all of which have diminishing returns:

  • Direct Work
    • Turning money into EA outcomes.
    • Diminishing returns due to low hanging problems being solved, non-parallel workflows and running out of money.
  • Earn to Give/Spend
    • Turning market work into Direct Work.
    • Diminishing returns due to running out of good people to employ.
  • Earn to Save
    • Turning market work now into Direct Work later.
    • Diminishing returns due to running out of good people to employ in the future.

As each possibility has diminishing returns, there is an optimal ratio of Spending to Saving. But an exogenous increase in Spending volume doesn't increase the marginal returns of Saving, so it doesn't increase the attractiveness of Saving vs Direct. It does make Saving more attractive vs Spending, but both of those require basically the same skills (e.g. tech or finance skills), so the value of those skills is diminished.
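
As a toy illustration of the Saving-vs-Spending margin (my own stylised numbers and functional form, nothing from the post): give present and future spending square-root returns, and see how the optimal split responds to an exogenous increase in present funding.

```python
# Toy model: one unit of our money to split between spending now and saving.
# 'exog' is money being spent now regardless of our choice (OpenPhil etc.).
import numpy as np

def total_value(save_frac, exog):
    spend_now = 1 - save_frac               # deployed immediately
    saved = save_frac                       # deployed later
    value_now = np.sqrt(exog + spend_now)   # diminishing returns on present spending
    value_later = np.sqrt(saved)            # diminishing returns on future spending
    return value_now + value_later

fracs = np.linspace(0, 1, 10001)
for exog in [0.0, 0.5, 1.0]:
    best = fracs[np.argmax(total_value(fracs, exog))]
    print(f"exogenous present funding {exog}: optimal fraction saved = {best:.2f}")
```

With these assumptions the optimal fraction saved rises from 0.50 to 0.75 to 1.00 as exogenous funding grows - Saving gains on Spending, but nothing in the exercise changes the marginal value of the skills both options rely on.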

Separately, you might think of upcoming increases in Spending (OpenPhil, bequests, career advancement) as an artificially high level of Saving now. This would decrease the attractiveness of current Saving.

Comment by larks on More empirical data on 'value drift' · 2020-08-29T14:08:51.620Z · EA · GW

For instance, if the drop out rate for the most engaged core is:
Year 0-5: 10%
Year 5-10: 7%
Year 10-30: 15%
Then, the chance of staying involved the rest of their career is about 70%, which would mean the expected length of engagement is very roughly 20 years.

Are you assuming quite short careers? Using bucket midpoints I calculate

(20-0.1*2.5-0.07*7.5-20*0.15)/(1-0.1-0.07-0.15)

Which suggests you are using ~24 years for a full career, which seems a little low. If I substitute 40 years I get over 30 years of engagement.

0.1*2.5 + 0.07*7.5 + 0.15*20 + (1-0.1-0.07-0.15)*40

The answer does not change very much when I convert these numbers to annualised risk factors in Excel (and assume 100% drop-off at year 40).
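
For anyone wanting to check this arithmetic, a minimal sketch (bucket midpoints of 2.5, 7.5 and 20 years, as above; the 40-year career length is my substitution):

```python
# Dropout shares from the quoted figures: 10% in years 0-5, 7% in years 5-10,
# 15% in years 10-30; the remaining 68% stay for a full career.
p = [0.10, 0.07, 0.15]
midpoints = [2.5, 7.5, 20.0]
stay = 1 - sum(p)  # 0.68

# Career length implied by the quoted ~20 years of expected engagement:
career = (20 - sum(pi * m for pi, m in zip(p, midpoints))) / stay
print(f"implied full career: {career:.1f} years")  # ~23.9

# Expected engagement if a full career is 40 years instead:
expected = sum(pi * m for pi, m in zip(p, midpoints)) + stay * 40
print(f"expected engagement with 40-year career: {expected:.1f} years")  # ~31.0
```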