Posts

Avoiding Munich's Mistakes: Advice for CEA and Local Groups 2020-10-14T17:08:13.033Z · score: 162 (96 votes)
Will protests lead to thousands of coronavirus deaths? 2020-06-03T19:08:10.413Z · score: 77 (46 votes)
2019 AI Alignment Literature Review and Charity Comparison 2019-12-19T02:58:58.884Z · score: 145 (50 votes)
2018 AI Alignment Literature Review and Charity Comparison 2018-12-18T04:48:58.945Z · score: 115 (55 votes)
2017 AI Safety Literature Review and Charity Comparison 2017-12-20T21:54:07.419Z · score: 43 (43 votes)
2016 AI Risk Literature Review and Charity Comparison 2016-12-13T04:36:48.060Z · score: 53 (55 votes)
Being a tobacco CEO is not quite as bad as it might seem 2016-01-28T03:59:15.614Z · score: 10 (12 votes)
Permanent Societal Improvements 2015-09-06T01:30:01.596Z · score: 9 (9 votes)
EA Facebook New Member Report 2015-07-26T16:35:54.894Z · score: 11 (11 votes)

Comments

Comment by larks on Nathan Young's Shortform · 2020-10-24T17:16:28.874Z · score: 8 (5 votes) · EA · GW

Can you think of any examples of other movements which have this? I have not heard of such for e.g. the environmentalist or libertarian movements. Large companies might have whistleblowing policies, but I've not heard of any which make use of an independent organization for complaint processing.

Comment by larks on EA's abstract moral epistemology · 2020-10-20T16:19:42.443Z · score: 12 (7 votes) · EA · GW

Yeah, despite having studied philosophy, I also found this a little impenetrable. It keeps saying things like,

values are simultaneously woven into the fabric of reality and such that we require particular sensitivities to recognise them

and that these views came from some women philosophers at Oxford and Durham, but never really explaining what they mean.

To the extent I felt I understood it, this was only by pattern-matching to the usual criticisms of EA and utilitarianism, like 'too impersonal' and 'not left wing enough'. But this means I wasn't able to get much new from it.

Comment by larks on Thomas Kwa's Shortform · 2020-10-17T03:12:01.398Z · score: 11 (6 votes) · EA · GW

Unfortunately most cost-effectiveness estimates are calculated by focusing on the specific intervention the charity implements, a method which is a poor fit for large diversified charities.

Comment by larks on jackmalde's Shortform · 2020-10-13T14:00:43.143Z · score: 4 (2 votes) · EA · GW
he is asking you to consider how it typically feels like to listen to muzak and eat potatoes

I always found this very confusing. Potatoes are one of my favourite foods!

Comment by Larks on [deleted post] 2020-10-12T03:40:20.725Z
Can we at least have a consensus and commitment that we go back to the previous norm after this election, to prevent a slippery slope where engaging in partisan politics becomes increasingly acceptable in EA?

Unfortunately I expect that in four years' time partisans will decide that 2024 is the new most important election in history and hence renege on any such agreement.

Comment by larks on Can my self-worth compare to my instrumental value? · 2020-10-12T00:11:17.252Z · score: 6 (6 votes) · EA · GW

I wonder to what extent this springs from the fact that most pastors do not expect most of their congregants to achieve great things. Presumably if you are a successful missionary who converts multiple people, your instrumental value significantly exceeds your intrinsic value, so I wonder if they have the same feelings. An extreme case would be someone like Moses, whose intrinsic value presumably paled into insignificance compared to his instrumental value as saviour of the Israelites and transmitter of the Word of God.

In any case, I think there is a strong case to be made for spending resources on yourself for non-instrumental reasons. Even if you don't think you matter more than anyone else, you definitely don't matter less than them! And you have a unique advantage in spending resources to generate your own welfare: an intimate understanding of your own circumstances and preferences. When we give to help others, it can be very difficult to figure out what they want and how to best achieve that. In contrast, I know very well which things I have been fixated on!

Comment by larks on Hiring engineers and researchers to help align GPT-3 · 2020-10-10T03:56:58.515Z · score: 16 (8 votes) · EA · GW

I didn't downvote, but I could imagine someone thinking Halstead had been 'tricked' - forced into compliance with a rule that was then revoked without notifying him. If he had been notified he might have wanted to post his own job adverts in the last few years.

Personally I share your intuitions that the occasional interesting job offer is good, but I don't know how this public goods problem could be solved. No job ads might be the best solution, for all that I enjoyed this one.

Comment by larks on Best Consequentialists in Poli Sci #1 : Are Parliaments Better? · 2020-10-10T03:52:28.342Z · score: 19 (10 votes) · EA · GW
While economics is often derided as the dismal science, I believe that economists have done much to improve policymaking in the world.

In keeping with the abolitionist origins of the phrase:

Carlyle’s target was ... economists such as John Stuart Mill, who argued that it was institutions, not race, that explained why some nations were rich and others poor. Carlyle attacked Mill ... for supporting the emancipation of slaves. It was this fact—that economics assumed that people were basically all the same, and thus all entitled to liberty—that led Carlyle to label economics “the dismal science.”
Comment by larks on Some thoughts on the effectiveness of the Fraunhofer Society · 2020-10-03T04:37:52.128Z · score: 6 (3 votes) · EA · GW

It seems from your description that part of the problem is that the same body invents projects for itself to work on. Do you think things would be significantly improved if, after coming up with a research project, they had to invite external bids for the project, and only do it in-house if they won the tendering process? Perhaps this would be prohibitively hard to implement in practice.

Comment by larks on Some thoughts on the effectiveness of the Fraunhofer Society · 2020-10-01T15:09:26.494Z · score: 9 (5 votes) · EA · GW

This was a really interesting article on a subject I'd never heard of before, thanks very much. I assume similar issues affect government research organisations in other countries as well.

Comment by larks on Suggestions for Online EA Discussion Norms · 2020-09-30T03:06:30.256Z · score: 23 (7 votes) · EA · GW
When asking the person to rephrase their comment, it can be useful suggest a rewrite yourself.
Example: Someone noticed a commenter who appeared to be name calling another person. This is how they might have rewritten the comment: "I have this point of view because of this reason. I see other people with this different approach and I find it odd because it seems so much in conflict with what I've learned. I wonder how they got to that conclusion."

I found this suggestion kind of surprising upon re-reading. Do you have experience of it working well? I worry it could easily come across as somewhat patronising.

Comment by larks on Why doesn't EA Fund support Paypal? · 2020-09-26T15:13:36.048Z · score: 5 (3 votes) · EA · GW

Is there any legal reason the OP couldn't paypal money to someone else who then makes a donation on his behalf? I agree their accepting paypal is the ideal solution, but maybe this is an acceptable short-term workaround.

Comment by larks on What are words, phrases, or topics that you think most EAs don't know about but should? · 2020-09-24T14:14:23.332Z · score: 2 (1 votes) · EA · GW
Nobel Cause Corruption

Is this about how the Peace Prize is given out to either warmongers or ineffective activists rather than professional diplomats and international supply chain managers?

Comment by larks on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-21T02:01:42.748Z · score: 8 (4 votes) · EA · GW
I don't see any dissonance with respect to recycling and criminal justice—recycling is (nominally) about climate change, and climate change is a big deal, so recycling is important when you ignore the degree to which it can address the problem; likewise with criminal justice.

It seems a lot depends on how you group things together into causes then. Is my recycling about reducing waste in my town (a small issue), preventing deforestation (a medium issue), fighting climate change (a large issue) or being a good person (the most important issue of all)? Pretty much any action can be attached to a big cause by defining an ever larger and more inclusive problem for it to be part of.

Comment by larks on [Linkpost] Some Thoughts on Effective Altruism · 2020-09-20T21:44:35.428Z · score: 8 (4 votes) · EA · GW
A huge portion of the variation in worldview between EAs and people who think somewhat differently about doing good seems to be accounted for by a different optimization strategy. EAs, of course, tend to use expected value, and prioritize causes based on probability-weighted value. But it seems like most other organizations optimize based on value conditional on success.
These people and groups select causes based only on perceived scale. They don't necessarily think that malaria and AI risk aren't important, they just make a calculation that allots equal probabilities to their chances of averting, say, 100 malarial infections and their chances of overthrowing the global capitalist system.

I agree it would be good to have a diagnosis of the thought process that generates these sorts of articles, so we can respond in a targeted manner that addresses the model behind their objections, rather than one which simply satisfies us that we have rebutted them. And this diagnosis is a very interesting one! However, I am a little sceptical, for two reasons.

EAs often break cause evaluation down into Scope, Tractability and Neglectedness, which is elegant as they correspond to three factors which can be multiplied together. You're basically saying that these critics ignore (or consider unquantifiable) Neglectedness and Tractability. However, it seems perhaps a little bit of a coincidence that the factor they are missing just happens to correspond to one of the terms in our standard decomposition. After all, there are many other possible decompositions! But maybe this decomposition just really captures something fundamental to all people's thought processes, in which case this is not so much of a surprise.
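
For concreteness, the usual version of that decomposition (roughly the 80,000 Hours framing; the labels and units here are my paraphrase, not anything from the post under discussion) multiplies three ratios whose intermediate units cancel:

\frac{\text{good done}}{\text{extra resources}} = \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{Scope}} \times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{Tractability}} \times \underbrace{\frac{\text{\% increase in resources}}{\text{extra resources}}}_{\text{Neglectedness}}

On this framing, ranking causes only by how much good complete success would do is equivalent to dropping the last two factors.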

But more importantly, I think this theory seems to give some incorrect predictions about cause focus. If Scope is all that matters, then I would expect these critics to be very interested in existential risks, but my impression is they are not. Similarly, I would be very surprised if they were dismissive of e.g. residential recycling, or US criminal justice, as being too small-scale an issue to warrant much concern.

Comment by larks on Denise_Melchin's Shortform · 2020-09-19T16:51:42.321Z · score: 7 (4 votes) · EA · GW

I have some sympathy with this view, and think you could say a similar thing with regard to non-utilitarian views. But I'm not sure how one would cash out the limits on 'atrocious' views in a principled manner. To a truly committed longtermist it is plausible that any non-longtermist view is atrocious!

Comment by larks on So-Low Growth's Shortform · 2020-09-18T16:36:30.836Z · score: 4 (2 votes) · EA · GW

This is an interesting idea. You might need to change the design a bit; my impression is that the experiment focused on getting people to donate vs not donating, whereas the concern with longtermism is more about prioritisation between different donation targets. Someone's decision to keep the money wouldn't necessarily mean they were being short-termist: they might be going to invest that money, or they might simply think that the (necessarily somewhat speculative) longtermist charities being offered were unlikely to improve long-term outcomes.

Comment by larks on Long-Term Future Fund: September 2020 grants · 2020-09-18T16:30:08.905Z · score: 22 (14 votes) · EA · GW

As always, thanks very much for writing up this detailed report. I really appreciate the transparency and insight into your thought processes, especially as I realise doing this is not necessarily easy! Great job.

(It's possible that I might have some more detailed comments later, but in case I don't I didn't want to miss the chance to give you some positive feedback!)

Comment by larks on Denise_Melchin's Shortform · 2020-09-17T17:18:30.048Z · score: 7 (4 votes) · EA · GW

People do bring this up a fair bit - see for example some previous related discussion on Slatestarcodex here and the EA forum here.

I think most AI alignment people would be relatively satisfied with an outcome where our control over AI outcomes was as strong as our current control over corporations: optimisation for a criterion that requires continual human input from a broad range of people, while keeping humans in the loop of decision making inside the optimisation process, and with the ability to impose additional external constraints at run-time (regulations).

Comment by larks on Tax Havens and the case for Tax Justice · 2020-09-17T03:29:52.411Z · score: 29 (13 votes) · EA · GW

Thanks for the effort that went into this post. However, I thought there was a conspicuous lack of any discussion of Optimal Taxation Theory.

Quoting from Mankiw's excellent review article, we can see why this part of economics is highly relevant to the issue: it is directly concerned with what type of tax system maximises utility:

The standard theory of optimal taxation posits that a tax system should be chosen to maximize a social welfare function subject to a set of constraints. The literature on optimal taxation typically treats the social planner as a utilitarian: that is, the social welfare function is based on the utilities of individuals in the society. ... one would not go far wrong in thinking of the social planner as a classic “linear” utilitarian.

I'm not sure I could put it better than he does, so I hope you forgive the repeated quotations. One of the main findings of this field is that taxes on capital should be zero:

Perhaps the most prominent result from dynamic models of optimal taxation is that the taxation of capital income ought to be avoided. This result, controversial from its beginning in the mid-1980s, has been modified in some subtle ways and challenged directly in others, but its strong underlying logic has made it the benchmark.

Why? There are several reasons, and I encourage you to read the whole article, but the third justification he lists should be especially appealing to longtermist EAs: capital taxation reduces investment, which makes everyone poorer in the long run: even those who do not own any capital.

A third intuition for a zero capital tax comes from elaborations of the tax problem considered by Frank Ramsey (1928). In important papers, Chamley (1986) and Judd (1985) examine optimal capital taxation in this model. They find that, in the short run, a positive capital tax may be desirable because it is a tax on old capital and, therefore, is not distortionary. In the long run, however, a zero tax on capital is optimal. In the Ramsey model, at least some households are modeled as having an infinite planning horizon (for example, they may be dynasties whose generations are altruistically connected as in Barro, 1974). Those households determine how much to save based on their discounting of the future and the return to capital in the economy. In the long-run equilibrium, their saving decisions are perfectly elastic with respect to the after-tax rate of return. Thus, any tax on capital income will leave the after-tax return to capital unchanged but raise the pre-tax return to capital, reducing the size of the capital stock and aggregate output in the economy. This distortion is so large as to make any capital income taxation suboptimal compared with labor income taxation, even from the perspective of an individual with no savings. [emphasis added]

There has been a lot of work on the subject since then - for example here and here - but I think of Chamley-Judd as being a core result that the rest of the field is responding to. Some find that capital taxes should be positive or high, and some find that they should be negative - that we should subsidise investment - but the negative effects of capital taxes on investment, growth and aggregate welfare are clearly an important topic that cannot be dispensed with without comment!
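
To make that underlying logic concrete, here is a stylised sketch (my gloss on the mechanism Mankiw describes, not a full statement of Chamley-Judd): in the standard neoclassical growth model with discount rate \rho and depreciation \delta, the household's steady-state saving condition pins down the after-tax return, so a capital income tax \tau can only be accommodated by a higher pre-tax return and hence a smaller capital stock:

(1 - \tau)\left(f'(k^{*}) - \delta\right) = \rho \quad\Longrightarrow\quad f'(k^{*}) = \delta + \frac{\rho}{1 - \tau}

Since f is concave, a higher \tau raises f'(k^{*}) only by shrinking k^{*}, which lowers output and wages - which is how workers with no savings at all can still be made worse off.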

The above is concerned with capital taxation, but corporate taxes specifically are, I think, even worse. They essentially function as capital taxation, but typically allow interest expense to be deducted, hence distorting financing decisions away from equity and towards debt - contributing to systemic risk. (This problem was partly addressed in the US by the 2017 tax reform.) To the extent that they only apply to legal corporations, and not other types of entity, they also distort organisational choice, which is also bad.

As a result, it seems that corporate taxes are harmful, and it would be better for the world (and the long term future) if they did not exist. Unfortunately they do exist - probably due to exactly the problems with institutional decision making that longtermist EAs are concerned about (e.g. short planning horizons, high discount rates, and capture by special interests). Fortunately, international tax competition provides something of a remedy, by encouraging countries to lower their corporate taxes to closer to the ideal level. Contra your suggestion that it 'damages both "winners" and losers', it acts as a beneficial check on the ability of countries to institute harmful policies. We should be supporting tax havens and praising their effects, not seeking to destroy them.

Despite having a section on 'Objections', the article does not really address this argument. You do sort of get at this issue here:

Developing Countries
Tax havens are necessary structures in encouraging investment in developing countries[35]. ...

But the response misses the point:

Response: Agreed -- developing countries need to build both legal and tax system capacity. Development Financing Institutes and other investors require developing countries to honour and enforce contracts and to refrain from arbitrary seizure of assets.

Getting rid of tax havens degrades our ability to resist arbitrary seizure of assets. This is no small deal - many of the worst disasters in history have been intimately tied to governments' seizures of assets and the resultant damage to productive capacity. If we get rid of one check on this problem, we should have something else in place that can do a similar job. The mere threat of losing access to financial markets for a while is insufficient. There are possible alternatives - once upon a time the west used gunboat diplomacy to this effect - but we should not remove our current solution without first instituting a new one.

Indeed, I think this article actually showcases the problem to a small degree. You write:

[tax havens] cost governments worldwide at least $500B/year in lost tax revenue

It is true that current investments, if subject to a higher level of taxation, would lead to higher tax revenues for governments (in the short run). But these investments were made by individuals and companies who were expecting to pay lower taxes! If taxes had been higher, fewer of these investments would have been made. To point out now that there is a lot of capital out there that could be taxed more if we changed the rules is precisely the sort of ex post asset seizure that people are worried about.

This section also sort of hints at the problem:

Growth
Tax havens promote economic growth in high-tax countries, especially those located near tax havens. US multinationals' use of tax havens shifts tax revenue from foreign governments to the US by reducing the foreign tax credits they claim against US tax payable. As a result of the 1996 Puerto Rico tax haven phaseout mentioned above, employment by affected firms dropped not just in Puerto Rico, but in the US as a whole; affected firms reduced investment globally.[36]

But again the response misunderstands:

Response: If curbing tax havens reduces growth and taxes in developed countries for the benefit of developing countries, that is likely a trade-off many EAs would be willing to make (see below). Abbott Laboratories and other multinationals affected by the Puerto Rico phaseout may have reduced global investment, but increased investment and jobs in developing countries such as India. Given that US dollars go a lot further in less developed countries, a reduction in global investment by specific firms could also reflect better value for money.

The problem is not so much that getting rid of tax havens will reduce investment in the west specifically, but that this will result in a global increase in effective tax rates, and as such will reduce investment globally.

Comment by larks on Parenting: Things I wish I could tell my past self · 2020-09-16T15:29:55.674Z · score: 5 (3 votes) · EA · GW
Nut butters: my impression is that there’s pretty good evidence these days that kids are less likely to be allergic to things they’ve eaten regularly before age 1. Since nuts are choking hazards, we’ve been giving Leo various nut butters (peanut, cashew, almond, hazelnut).

Our paediatrician recommended this for people using bottles. It contains powdered peanut, cow's milk and egg that you can add to their bottle once a day to help prevent allergies. At the beginning you titrate up, adding one food at a time, and then the packets switch to maintenance.

Comment by larks on Here's what Should be Prioritized as the Main Threat of AI · 2020-09-14T13:59:00.366Z · score: 4 (3 votes) · EA · GW

Thanks.

In this paper, Strubell & al (2019) outline the hidden cost of machine learning (from inception to training and fine tuning) and found emissions for 1 model is about 360 tCo2.

The highest estimate they find is for Neural Architecture Search, which they estimated as emitting 313 tons of CO2 after training for over 30 years. This suggests to me that they're using an inappropriate hardware choice! Additionally, the work they reference - here - does not seem to be the sort of work you'd expect to see widely used. Cars emit a lot of CO2 because everyone has one; most people have no need to search for new transformer architectures. The answers from one search could presumably be used for many applications.

Most of the models they train produce dramatically lower estimates.

I also don't really understand how their estimates for renewable generation for the cloud companies are so low. Amazon say they were 50% renewable in 2018, but the paper only gives them 18% credit, and Google say they are CO2 neutral now. It makes sense that they should look quite efficient, given that cloud datacenters are often located near geothermal or similar power sources. This 18% is based on a Greenpeace report which I do not really trust.
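
For intuition on why the hardware and electricity-mix assumptions matter so much, here is a rough sketch of the kind of calculation the paper performs (power draw x time x datacentre overhead x grid carbon intensity). Every number below is an illustrative placeholder of mine, not an input taken from the paper:

# Rough sketch of a training-emissions estimate (all figures hypothetical)
def training_emissions_tonnes(gpu_hours, watts_per_gpu, pue, kg_co2_per_kwh):
    # energy in kWh = GPU-hours * kW per GPU * datacentre overhead (PUE)
    kwh = gpu_hours * (watts_per_gpu / 1000.0) * pue
    # emissions in tonnes = kWh * grid carbon intensity (kg CO2/kWh) / 1000
    return kwh * kg_co2_per_kwh / 1000.0

# A long hypothetical architecture search on power-hungry hardware, US-average grid:
print(training_emissions_tonnes(250_000, 300, 1.6, 0.45))  # ~54 tonnes
# The same GPU-hours in an efficient cloud datacentre with mostly renewable power:
print(training_emissions_tonnes(250_000, 300, 1.1, 0.05))  # ~4 tonnes

The order-of-magnitude swing between those two lines is the crux: the headline figure is mostly a statement about which hardware and which grid you assume.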

Finally, I found this unintentionally very funny:

Academic researchers need equitable access to computation resources.
Recent advances in available compute come at a high price not attainable to all who desire access. ... . Limiting this style of research to industry labs hurts the NLP research community in many ways. ... This even more deeply promotes the already problematic “rich get richer” cycle of research funding, where groups that are already successful and thus well-funded tend to receive more funding due to their existing accomplishments. Third, the prohibitive start-up cost of building in-house resources forces resource-poor groups to rely on cloud compute services such as AWS, Google Cloud and Microsoft Azure.
While these services provide valuable, flexible, and often relatively environmentally friendly compute resources ...

This whole paragraph is totally different to the rest of the paper. It appears in the conclusion section, but isn't really concluding from anything in the main body - it appears the authors simply wanted to share some left wing opinions at the end. But this 'conclusion' is exactly backwards - if training models is bad for the environment, it is good to prevent too many people doing it! And if cloud computing is more environmentally friendly than buying your own GPU, it is good that people are forced into using it!

Overall this paper was not very convincing that training models will be a significant driver of climate change. And there is compelling reason to be less worried about climate change than AGI. So I don't think this was very convincing that the main AI risk concern is the secondary effect on climate change.

Comment by larks on Buck's Shortform · 2020-09-14T01:25:49.749Z · score: 11 (4 votes) · EA · GW
I've proposed before that voting shouldn't be anonymous, and that (strong) downvotes should  require explanation (either your own comment or a link to someone else's). Maybe strong upvotes should, too?

It seems this could lead to a lot of comments and very rapid ascending through the meta hierarchy! What if I want to strong downvote your strong downvote explanation?

Comment by Larks on [deleted post] 2020-09-13T00:33:46.013Z

I think you have misrepresented Holden's argument:

Ironically, your letter disappointed me because the vitriol got in the way of good reasoning. A useful version of your letter would have tackled the question of whether it's possible to be *both* honest and kind. Your letter implicitly assumed that you can't do both, and left this assumption unchecked. I very much hope you don't allow your passion to get in the way of good analysis in the rest of your work.

I do not think that Holden assumed that nice and honest feedback are mutually exclusive at all. Reading his interlocutors (e.g. Mark Petersen), he is reacting to people saying that any public negative feedback would be too demoralising for the staff. I agree that he is suggesting that charity workers need to man up and accept tough feedback - "your life's work has been pointless" is going to hurt no matter how it's phrased - but I disagree that he implies you cannot avoid being unnecessarily nasty in doing so.


If you had written this as a rebuttal piece - perhaps 'Reasons to Avoid Unnecessarily Upsetting Crybabies' - I might have upvoted it, despite the above. But as it is this article is unnecessarily passive aggressive. I do not think we should encourage people seeking the mantle of victimhood in order to criticise others.

Anyone with a long history of public comments is inevitably going to have some cringe material from a long time ago. I don't think it is a good principle that people should trawl through blog posts from 13 years ago, on a different website, looking for something to demand a public apology for. If we start accepting posts like this then this entire forum could end up being nothing but such articles!

This is particularly the case here because I see little reason to think this reflects Holden's current thinking; indeed his current organisation, OpenPhil, is generally extremely circumspect - to a fault, even. The context in which he was writing back in 2007 was very different. We had OvercomingBias, but this was before LessWrong, before GWWC, and before the rest of the EA movement. GiveWell was almost all there was - GiveWell, and an enormous philanthropy industry which treated any criticism as anathema. Staking out an extreme position early on can be a valuable exercise, to help people settle on the happy medium that I think we have now more or less reached.

Comment by larks on Here's what Should be Prioritized as the Main Threat of AI · 2020-09-10T16:49:18.147Z · score: 2 (1 votes) · EA · GW
In this paper

I think you may have forgotten to add a hyperlink?

Comment by larks on Asking for advice · 2020-09-07T15:29:44.786Z · score: 16 (7 votes) · EA · GW
I guess it's possible some people would find being sent a calendly link off-putting for some reason, but I haven't seen indications of that so far.

I actually find it extremely annoying, though I don't know why and I don't particularly endorse this reaction. There have been cases where people have sent me calendlies with zero slots available, or failed to show up for a call I scheduled using it, but I don't think this is the reason. I have actually missed at least one call that should have taken place just because I found calendly so irrationally aversive.

Comment by larks on Will protests lead to thousands of coronavirus deaths? · 2020-09-05T21:49:04.356Z · score: 6 (3 votes) · EA · GW
I don't think this has been posted as a comment yet, so I'd like to link this study (shared with me by Hauke Hillebrandt) which estimates the impact of protests on COVID-19 spread.

Thanks. I think this paper was actually already linked in a comment by AGB here; I've also discussed it in the retrospective part of the post.

Comment by larks on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T14:06:54.373Z · score: 38 (17 votes) · EA · GW
If you're talking to a man about rape and he thinks it's not a big deal, there's some chance he'll actually rape you.

I realise you did not say this applied to Robin, but just in case anyone reading was confused and mistakenly thought it was implicit, we should make clear that Robin does not think rape is 'not a big deal'. Firstly, opposition to rape is almost universal in the west, especially among the highly educated; as such our prior should be extremely strong that he does think rape is bad. In addition to this, and despite his opposition to unnecessary disclaimers, Robin has made clear his opposition to rape on many occasions. Here are some quotations that I found easily on the first page of google and by following the links in the article EA Munich linked:

I was not at all minimizing the harm of rape when I used rape as a reference to ask if other harms might be even bigger. Just as people who accuse others of being like Hitler do not usually intend to praise Hitler, people who compare other harms to rape usually intend to emphasize how big are those other harms, not how small is rape.

https://www.overcomingbias.com/2014/11/hanson-loves-moose-caca.html

You are seriously misrepresenting my views. I'm not at all an advocate for rape. 

https://twitter.com/robinhanson/status/990762713876922368?lang=en

It is bordering on slander for you to call me "pro-rape". You have no direct evidence for that claim, and I've denied it many times.  

https://twitter.com/robinhanson/status/991069965263491072

I didn't and don't minimize rape!  

https://twitter.com/robinhanson/status/1042739542242074630

and from personal communication:

of course I’m against rape, and it is easy to see or ask.

Separately, while I don't know the base rate at which a hypothetical person who supposedly doesn't take rape sufficiently seriously would go on to rape someone at an EA event (I suspect it is very low), I think we would be relatively safe here as it would presumably be a zoom meeting anyway due to German immigration restrictions.

Comment by larks on An argument for keeping open the option of earning to save · 2020-08-31T16:59:27.489Z · score: 12 (6 votes) · EA · GW

Thanks for writing this up. However, I am confused about the mechanism.

In my head I think of there as being three options, all of which have diminishing returns:

  • Direct Work
    • Turning money into EA outcomes.
    • Diminishing returns due to low hanging problems being solved, non-parallel workflows and running out of money.
  • Earn to Give/Spend
    • Turning market work into Direct Work.
    • Diminishing returns due to running out of good people to employ.
  • Earn to Save
    • Turning market work now into Direct work later.
    • Diminishing returns due to running out of good people to employ in the future.

As each possibility has diminishing returns, there is an optimal ratio of Spending to Saving. But an exogenous increase in Spending volume doesn't increase the marginal returns of Saving, so it doesn't increase the attractiveness of Saving vs Direct. It does make Saving more attractive vs Spending, but both of those require basically the same skills (e.g. tech or finance skills), so the value of those skills is diminished.

Separately, you might think of upcoming increases in Spending (OpenPhil, bequests, career advancement) as an artificially high level of Saving now. This would decrease the attractiveness of current Saving.

Comment by larks on More empirical data on 'value drift' · 2020-08-29T14:08:51.620Z · score: 4 (2 votes) · EA · GW
For instance, if the drop out rate for the most engaged core is:
Year 0-5: 10%
Year 5-10: 7%
Year 10-30: 15%
Then, the chance of staying involved the rest of their career is about 70%, which would mean the expected length of engagement is very roughly 20 years.

Are you assuming quite short careers? Using bucket midpoints I calculate

(20-0.1*2.5-0.07*7.5-20*0.15)/(1-0.1-0.07-0.15)

Which suggests you are using ~24 years for a full career, which seems a little low. If I substitute 40 years I get over 30 years of engagement.

0.1*2.5 + 0.07*7.5 + 0.15*20 + (1-0.1-0.07-0.15)*40

The answer does not change very much when I converted these numbers to annualised risk factors in excel (and assumed 100% dropoff at year 40).
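
Making the back-of-the-envelope explicit (a quick sketch using only the figures quoted above):

# Dropout rates of 10% (years 0-5), 7% (years 5-10) and 15% (years 10-30),
# with bucket midpoints of 2.5, 7.5 and 20 years for those who drop out.
p_stay = 1 - 0.10 - 0.07 - 0.15                                 # 0.68
# Full-career length implied by a 20-year expected engagement:
print((20 - 0.10*2.5 - 0.07*7.5 - 0.15*20) / p_stay)            # ~23.9 years
# Expected engagement if a full career instead lasts 40 years:
print(0.10*2.5 + 0.07*7.5 + 0.15*20 + p_stay*40)                # ~31.0 years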

Comment by larks on Forum update: New features (August 2020) · 2020-08-28T21:24:59.823Z · score: 28 (9 votes) · EA · GW

I also find this a bit annoying. When I put a lot of effort into a time-sensitive post I would like it to be visible immediately. My 2018 AI review took four days and multiple emails to get promoted to frontpage - approximately a third of the remaining 2018 giving season.

I realise there is some risk of hypocrisy here, as I help mod the facebook group and we can be slow at times approving posts. But unlike facebook, the forum has a karma system for spam suppression; it seems you could say that any user with >1000 karma or whatever could post directly to main. Moderators could always move it away if the submitter misclassified it.

Comment by larks on Propose and vote on potential tags · 2020-08-26T18:27:33.977Z · score: 5 (3 votes) · EA · GW

Ahh yes, that covers it. I looked through the list of tags to check if there was already something on there; I guess I missed that one.

Comment by larks on Propose and vote on potential tags · 2020-08-26T15:47:53.866Z · score: 7 (4 votes) · EA · GW

I like Lists, so get me a List of Lists for my tag List.

There are a number of good posts that are basically lists of links to different articles (like this one). It would be nice to be able to easily access them.

Comment by larks on If a poverty alleviation intervention has a positive ROI, (why) isn't anyone lending money for them? · 2020-08-24T23:30:09.328Z · score: 5 (4 votes) · EA · GW
For example, why aren't banks lending money for people to pay to get themselves dewormed?

I think the question in some sense is 'why don't people pay for themselves to get dewormed?'. It's unlikely that banks in the third world would be able to make a personal loan that could only be used by that person for deworming.

Separately, an additional factor to the ones you mention is the fixed costs of making a loan. Things like KYC and AML regulations are similarly onerous for small loans as large ones, making smaller clients disproportionately expensive to deal with.

Comment by larks on The EA Meta Fund is now the EA Infrastructure Fund · 2020-08-21T19:12:20.453Z · score: 2 (1 votes) · EA · GW

Thanks. I can see why they would be concerned!

Comment by larks on The EA Meta Fund is now the EA Infrastructure Fund · 2020-08-20T16:21:51.252Z · score: 13 (8 votes) · EA · GW

Out of interest, who has the trademark? I googled the term and just got you guys.

Comment by larks on Making a Crowdaction website · 2020-08-19T16:02:49.248Z · score: 6 (3 votes) · EA · GW

You might be interested in the Free State Project, which seems like a similar idea: a large group of people all pledging to move to New Hampshire if enough other people made the same pledge. They seem to have had some success, including the election of the first strongly EA aligned politician in the US.

However, and somewhat contra to this, I recommend thinking a bit more about the examples you use. At the moment they mainly seem to be about organising protests, which seems like a very political example. The two examples on the CollAction website don't seem great either - in neither case is there any sort of threshold effect whereby the action becomes more worthwhile the more people do it.

I would think about things like coordinating where people live. Right now many people live separated from their friends, but maybe if 10 of them agreed you could re-unite the college gang in one location. Similarly, a lot of EAs live in the bay area, but with enough coordination perhaps they could move somewhere cheaper that has electricity.

Comment by larks on Link: Longtermist Institutional Reform · 2020-08-18T19:40:59.363Z · score: 2 (1 votes) · EA · GW

Hey Tyler, thanks very much for engaging, and for working on this very important topic.

I was a little surprised you didn't spend more time arguing for Citizens' Assemblies and Sortition in general. While in your comment you mention they have been used a bit, it seems they have been used for only a tiny fraction of all decisions. If they were so advantageous, we might have expected private companies to take advantage of them in decision making, or governments to make widespread use of them, but as far as I'm aware their use by both is very small. I'm not aware of any major software or engineering projects being designed by sortition, or any military using it to decide strategy and tactics. Presumably this is because a randomly chosen decision-making body will be made up of less conscientious, less knowledgeable and less intelligent people than a body specifically chosen for these traits. Given what we know about the importance of mental acuity in decision making, it seems that we should be wary of any scheme that deliberately neglects any selection on this basis.

I worry that citizens' assemblies will end up favouring the views whose partisans have the most rhetorical skill and the most fashionable beliefs. In a representative system, disengaged people can rely on highly skilled representatives to defend their position. In an assembly, those with complicated but sound arguments might be at a disadvantage compared to those with higher status or more memetically powerful slogans, even if the latter are false.

You highlight the long remaining life expectancy of the members as a motivation for them to be longtermist, but this seems quite imperfect. In particular, it causes them to be disproportionately motivated by the interests of older people the further out in time you go, with little direct reason for them to be concerned about the welfare of future cohorts at all.

In particular, the paper mentions the 2016 Irish assembly as a positive example, but it seems to actually be a counter-example. In the paper you note that future people have high moral value:

These people have the same moral value as us in the present.

This was recognised in the Irish constitution prior to the Assembly:

The State acknowledges the right to life of the unborn and, with due regard to the equal right to life of the mother, guarantees in its laws to respect, and, as far as practicable, by its laws to defend and vindicate that right.

However the Assembly recommended removing this right, reducing the protections for future generations, even in cases where no strong countervailing consideration exists. Indeed, it seems they deliberately sought input from affected members of the current generation, even though this introduces a bias, as similar input cannot be sought from future generations.

Comment by larks on Antibiotic resistance: Should animal advocates intervene? · 2020-08-18T15:06:32.708Z · score: 12 (5 votes) · EA · GW

This reminds me of a speech given by the head of Sanderson Farms a few years ago, during an industry-wide conversation about the merits of antibiotic-free farming. It stood out for me because it is rare for CEOs to take a public stance against a popular cause in this way.

Much has been written and discussed in recent weeks regarding the production and use of antibiotic-free chicken. In response to the announcement by several large users of chicken that they will move to antibiotic-free chicken over time, several processors in our industry have responded that they, too, will move to the production of antibiotic-free products.
After very deliberate, careful, and measured consideration of this issue, we informed our customers last week that we will continue our responsible use of antibiotics when prescribed by our veterinarians. This decision is based on animal welfare, environmental considerations, and food safety. First, we believe we have a moral obligation to care for the animals under our stewardship. Just as our vets do not compromise their oath to relieve the suffering of animals, our obligation to care for the animals under our care is not subject to compromise.
It is instructive to us that this discussion has revolved primarily around chickens. And no one, to our knowledge, has suggested that other species be denied care and medicine. It seems to us that if an animal is sick and its suffering would be relieved from the use of FDA-approved antibiotics, it does not matter if it is a chicken, cow, hog, or household pet. That animal should be treated.
We also have a commitment to environmental stewardship. Sick chickens do not perform well. When a chicken gets sick, it takes longer to reach market weight. It takes more feed to produce a pound of meat and it just performs poorly. Because its performance decreases, it takes more water, more feed, electricity, natural gas, and other resources to raise the bird. More feed means more acres, more water, and more fertilizer to grow grain. Given the number of animals on the ground in the United States for food production, even small changes in the performance of those animals could have a significant negative environmental impact. Simply stated, neglecting the health of our chickens is inconsistent with our environmental sustainability goals and our commitment to the judicious use of water and other natural resources.
Finally, healthy chickens are safe chickens. In our judgment and based on the experience in Europe, unhealthy chickens are more likely to carry higher loads of Salmonella, Campylobacter and E. coli. Our Company and our industry have made great strides in recent years to reduce these bacteria. And that work is in jeopardy if we neglect bird health.
Like everyone else, we understand the anxiety created by fear of antibiotic resistance caused by the misuse and overuse of antibiotics. We also are aware that there has been no credible scientific evidence that supports the notion that antibiotic resistance in humans is made more likely because of the use of antibiotics in chickens. Indeed, because of withdrawal periods mandated by the FDA, there are no antibiotic residues in chicken meat marketed in the United States. And in that sense, all chicken is antibiotic-free.
We will continue to work with our pharmaceutical suppliers to find alternatives to antibiotics important in human health. And we are committed to using alternatives when they become available. But until such alternatives are developed, we will treat the animals under our care as needed with antibiotics approved for use in chickens by the FDA.

(note this is a third-party transcript so may contain errors and differ from the actual speech in some ways)

Comment by larks on Improving local governance in fragile states - practical lessons from the field · 2020-08-12T20:29:32.227Z · score: 6 (3 votes) · EA · GW

Nice article.

At least on first read I didn't experience any confusion with the co-mingling of Lebanon and Jordan, but when I re-read looking for that specifically there were a few anecdotes I struggled to identify the country for. But I don't think that is a big issue, especially if you have citations anyone really interested could follow.

One thing I would like to see discussed is to what extent this separation between service provision and perceived legitimacy affects the motivations for donors. If the two went together then donors interested in stability would support the same policies as those interested in service consumption. But if they separate then this doesn't hold: presumably some donors will want to support e.g. roads, even if they don't increase stability, while other donors might want to support e.g. clerics who support the divine mandate, even if they don't provide services.

Incidentally one subject that might be interesting is the extent to which western corporations can be a positive influence. For example I hear that Uber is very successful in Jordan precisely because there is no need for the constant haggling and graft that many interactions require: it is all handled cleanly in the app.

Comment by larks on Max_Daniel's Shortform · 2020-08-06T18:38:12.132Z · score: 4 (2 votes) · EA · GW

Hey, yes - I would count that nuclear disarmament breakthrough as being equal to the sum of those annual world-saving instances. So you're right that the number of events isn't fixed, but their measure (as in the % of the future of humanity saved) is bounded.

Comment by larks on Max_Daniel's Shortform · 2020-08-06T03:48:57.691Z · score: 4 (2 votes) · EA · GW

Interesting post. I think I have a couple of thoughts; please forgive their unedited nature.

One issue is whether more than one person can get credit for the same event. If this is the case, then both the climber girl and the parents can get credit for her surviving the climb (after all, both their actions were sufficient). Similarly, both we and the future people can get credit for saving the world.

If not, then only one person can get the credit for every instance of world saving. Either we can harvest them now, or we can leave them for other people to get. But the latter strategy involves the risk that they will remain unharvested, leading to a reduction in the total quantity of creditworthiness mankind accrues. So from the point of view of an impartial maximiser of humanity's creditworthiness, we should seize as many as we can, leaving as little as possible for the future.

Secondly, as a new parent I see the appeal of the invisible robots of deliverance! I am keen to let the sproglet explore and stake out her own achievements, but I don't think she loses much when I keep her from dying. She can get plenty of moral achievement from ascending to new heights, even if I have sealed off the depths.

Finally, there is of course the numerical consideration that even if facing a 1% risk of extinction carried some inherent moral glory, it would also reduce the value of all subsequent things by 1% (in expectation). Unless you think the benefit from our children, rather than us, overcoming that risk is large compared to the total value of the future of humanity, it seems like we should probably deny them it.

Comment by larks on Should local EA groups support political causes? · 2020-07-28T16:43:33.170Z · score: 4 (2 votes) · EA · GW

Supporting alcohol prohibition also seems like it might have accompanied women's suffrage.

Comment by larks on Should local EA groups support political causes? · 2020-07-23T02:42:03.616Z · score: 9 (7 votes) · EA · GW
while it's not something that EAs have a lot of research on

While I understand why this is a tempting and conflict-avoiding thing to say (and is also literally true!), I think it would be a little disingenuous. The lack of EA research into many potential causes isn't simply an accident; research has been directed into areas that seem especially promising to the researcher (i.e. not just Important but also Neglected and Tractable, and ideally Quantifiable). Given the natural sympathies of many EAs towards left-wing movements, I think it is reasonable to say that the reason EAs haven't published a lot of research into BLM as a cause area is because they generally don't expect it would look attractive - and I think the same is true for HK protests to a lesser degree.

Or you could link Hong Kong democracy protests to political stability and reducing great power conflict, etc.

Assuming the other students are in favour of the HK protests, I'm not sure this is such a great approach. In general protests are not good for stability! The HK movement, by drawing attention to China's authoritarianism, seem to have increased conflict between the West and China - the US is currently introducing various new anti-CCP measures for example. Similarly the BLM protests in the US seem quite destabilising - to the extent that they literally received funding from the US's geopolitical opponents. It's of course possible that something could be destabilising and good, but that is a different argument.

Unfortunately I think there is just not that much in common between EA and causes which seem neither neglected nor tractable. Overall I think Khorton's approach is best; individual EAs are of course free to have non-EA interests, but focusing on the most important issues, rather than being caught up in contemporary issues that get a lot of attention for non-EA reasons, is a key part of the distinctive value proposition of the movement.

Comment by larks on High stakes instrumentalism and billionaire philanthropy · 2020-07-19T23:56:42.945Z · score: 18 (11 votes) · EA · GW

Thanks for writing this. I like articles that showcase examples where academic research we might be unaware of is important to an issue we care about.

Comment by larks on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2020-07-15T19:18:15.237Z · score: 10 (3 votes) · EA · GW

It always seemed strange to me that the idea was expressed as 'rounding'. Replacing a 50.4% with 50% seems relatively innocuous to me; replacing 0.6% with 1% - or worse, 0.4% with 0% - seems like a very different thing altogether!

Comment by larks on A love letter to civilian OSINT, and possibilities as a tool in EA · 2020-07-15T13:45:01.021Z · score: 8 (3 votes) · EA · GW

Awesome article, really informative, thanks!

I was reminded of it a little by this recent article, saying that almost half of all criminal cases brought for the online grooming of children in the UK were the result of organised groups of ordinary people gathering evidence.

Comment by larks on New member--essential reading and unwritten rules? · 2020-07-13T18:07:41.681Z · score: 12 (6 votes) · EA · GW

Welcome! And congratulations on your achievements, which I'm sure you are more responsible for than modesty would allow you to acknowledge.

Comment by larks on Maximizing the Long-Run Returns of Retirement Savings · 2020-07-08T03:59:09.283Z · score: 3 (2 votes) · EA · GW

My understanding is the committees generally make rules for the indices, and then apply them relatively mechanistically, though they do occasionally change the rules. I think it is hard to totally get rid of this. You need some way to judge that a company's market cap is actually representative of market trading, as opposed to being manipulated by insiders (like LFIN was). Presumably if the index committee changed it to something absurd the regulator could change their index provider for the next year's bidding, though you are at risk of small changes that do not meet the threshold for firing.

As a minor technical note, gross returns often are (very slightly) higher than the index's, because the managers can profit from stock lending. This is what allows zero-fee ETFs (though they are also somewhat a marketing ploy).

Comment by larks on Maximizing the Long-Run Returns of Retirement Savings · 2020-07-07T15:53:06.126Z · score: 2 (1 votes) · EA · GW

Ahhh, so basically the idea is that no underwriter would be willing to vouch for anything but a credible index shop. Seems plausible.