Posts

EA Boston 2018 Year in Review 2019-02-05T20:15:23.216Z

Comments

Comment by Taymon on Comments for shorter Cold Takes pieces · 2022-01-08T04:10:43.452Z · EA · GW

The Fun Theory Sequence (which is on a similar topic) had some things to say about the Culture.

Comment by Taymon on Comments for shorter Cold Takes pieces · 2022-01-08T03:07:16.168Z · EA · GW

Obligatory link to Scott Alexander's "Ambijectivity" regarding the contentiousness of defining great art.

Comment by Taymon on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2022-01-03T03:22:32.171Z · EA · GW

In the last paragraph, did you mean to write "the uncertainty surrounding the expected value of each policy option is high"?

Comment by Taymon on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2021-12-30T04:20:53.193Z · EA · GW

While true, I think most proposed EA policy projects are much too small in scope to be able to move the needle on trust, and so need to take the currently-existing level of trust as a given.

Comment by Taymon on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2021-12-29T20:12:52.621Z · EA · GW

I agree that the word 'populism' is very prone to misunderstandings, but I think the term 'technocracy' is acceptably precise. While precision is important, I think we should balance this against the benefits of using more common words, which make it easier for the reader to make connections with other arguments in favour of or against a concept.

I should clarify: I think the misunderstandings are symptoms of a deeper problem, which is that the concept of "technocracy" is too many different things rolled into one word. This isn't about jargon vs. non-jargon; substituting a more jargon-y word doesn't help. (I think this is part of why it's taken on such negative connotations, because people can easily roll anything they don't like into it; that's not itself a strong reason not to use it, but it's illustrative.)

"Technocracy" works okay-ish in contexts like this thread where we're all mostly speaking in vague generalities to begin with, but when discussing specific policies or even principles for thinking about policy, "I think this is too technocratic" just isn't helpful. More specific things like "I think this policy exposes the people executing it to too much moral hazard", or "I think this policy is too likely to have unknown-unknowns that some other group of people could have warned us about", are better. Indeed, those are very different concerns and I see no reason to believe that EA-in-general errs the same amount, or even in the same direction, for each of them. (If words like "moral hazard" are too jargon-y then you can just replace them with their plain-English definitions.)

Comment by Taymon on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2021-12-29T20:02:01.602Z · EA · GW

I also think that EAs haven't sufficiently considered populism as a tool to deal with moral uncertainty.

I agree that there hasn't been much systematic study of this question (at least not that I'm aware of), and maybe there should be. That being said, I'm deeply skeptical that it's a good idea, and I think most other EAs who've considered it are too, which is why you don't hear it proposed very often.

Some reasons for this include:

  • The public routinely endorses policies or principles that are nonsensical or would obviously result in terrible outcomes. Examples include Philip Tetlock's research on taboo tradeoffs [PDF], and this poll from Reuters (h/t Matt Yglesias): "Nearly 70 percent of Americans, including a majority of Republicans, want the United States to take 'aggressive' action to combat climate change—but only a third would support an extra tax of $100 a year to help."
  • You kind of can't ask the public what they think about complicated questions; the public is very diverse and there's a lot of inferential distance. You can do things like polls, but they're often only proxies for what you really want to know, and pollster degrees-of-freedom can cause the results to be biased.
  • When EAs look back on history and ask ourselves what we would/should have done if we'd been around then—particularly on questions (like whether slavery is good or bad) whose morally correct answers are no longer disputed—it looks like we would/should have sided with technocrats over populists, much more often than the reverse. A commonly-cited example is William Wilberforce, largely responsible for the abolition of slavery in the British Empire. Admittedly, I'd like to see some attempt to check how representative this is (though I don't expect that question to be answerable comprehensively).
Comment by Taymon on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2021-12-29T19:39:33.250Z · EA · GW

I am not convinced that there is much thinking amongst EAs about experts misusing technocracy by focusing on their own interests

In at least one particular case (AI safety), a somewhat deliberate decision was made to deemphasize this concern, because of a belief not only that it's not the most important concern, but that focus on it is actively harmful to concerns that are more important.

For example, Eliezer (who pioneered the argument for worrying about accident risk from advanced AI) contends that the founding of OpenAI was an instance of this. In his telling, DeepMind had previously had a quasi-monopoly on capacity to make progress towards transformative AI, because no other well-resourced actors were working seriously on the problem. This allowed them to have a careful culture about safety and to serve as a coordination point, so that all safety-conscious AI researchers around the world could work towards the common goal of not deploying something dangerous. Elon Musk was dissatisfied with the amount of moral hazard that this exposed DeepMind CEO Demis Hassabis to, so he founded a competing organization with the explicit goal of eliminating moral hazard from advanced AI by giving control of it to everyone (as is reflected in their name, though they later pivoted away from this around the time Musk stopped being involved). This forced both organizations to put more emphasis on development speed, lest the other one build transformative AI first and do something bad with it, and encouraged other actors to do likewise by destroying the coordination point. The result is a race to the precipice [PDF], where everyone has to compromise on safety and therefore accident risk is dramatically more likely.

More generally, politics is fun to argue about and people like to look for villains, so there's a risk that emphasis on person-vs.-person conflicts sucks up all the oxygen and accident risk doesn't get addressed. This is applicable more broadly than just AI safety, and is at least an argument for being careful about certain flavors of discourse.

One prominent dissenter from this consensus is Andrew Critch from CHAI; you can read the comments on his post for some thoughtful argument among EAs working on AI safety about this question.

I'm not sure what to think about other kinds of policies that EA cares about; I can't think of very many off the top of my head that have large amounts of the kind of moral hazard that advanced AI has. This seems to me like another kind of question that has to be answered on a case-by-case basis.

Comment by Taymon on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2021-12-29T19:01:03.428Z · EA · GW

I don't think there has been much thinking about whether equally distributed political power should or should not be an end in itself.

On the current margin, that's not really the question; the question is whether it's an end-in-itself whose weight in the consequentialist calculus should be high enough to overcome other considerations. I don't feel any qualms about adopting "no" as a working answer to that question. I do think I value this to some extent, and I think it's right and good for that to affect my views on rich-country policies where the stakes are relatively low, but in the presence of (actual or expected future) mass death or torture, as is the case in the cause areas EA prioritizes, I think these considerations have to give way. It's not impossible that something could change my mind about this, but I don't think it's likely enough that I want to wait for further evidence before going out and doing things.

Of course, there are a bunch of ways that unequally distributed political power could cause problems big enough that EAs ought to worry about them, but now you're no longer talking about it as an end-in-itself, but rather as a means to some other outcome.

Comment by Taymon on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2021-12-29T18:50:02.290Z · EA · GW

it seems fairly clear to me that more populism is preferable under higher uncertainty, and more technocracy is preferable when plausible policy options have a greater range of expected values. 

I'm sorry, I don't understand what the difference is between those things.

Comment by Taymon on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2021-12-29T18:46:45.399Z · EA · GW

I think someone should research policy changes in democratic countries which counterfactually led to the world getting a lot better or worse (under a range of different moral theories, and under public opinion), and the extent to which these changes were technocratic or populist. This would be useful to establish the track records of technocracy and populism, giving us a better reason to generally lean one way or the other.

This is exactly the kind of thing that I think won't work, because reality is underpowered.

I forgot to link this earlier, but it turns out that some such research already exists (minus the stipulation that it has to be in democratic countries, but I don't think this is necessarily a fatal problem; there are key similarities with politics in non-democratic countries). In 2009, Daron Acemoglu (a highly-respected-including-by-EAs academic who studies governance) and some other people wrote a paper [PDF] arguing that the First French Empire created a natural experiment, and examining the results. Scott reviewed it in a follow-up post to his earlier exchange with Weyl. The authors' conclusion (spoilered because Scott's post encourages readers to try to predict the results in advance) is that

 technocratic-ish policies got better results.

I consider this moderately strong evidence against heuristics in the opposite direction, but very weak evidence in favor of heuristics in the same direction. There are quite a lot of caveats, some of which Scott gets into in the post. One of these is that the broader technocracy-vs.-populism question subsumes a number of other heuristics, which, in real life, we can apply independently of that single-axis variable. (His specific example might be controversial, but I can think of others that are harder to argue with, such as (on the technocratic side) "policies have to be incentive-compatible", or (on the populist side) "don't ignore large groups of people when they tell you you've missed something".) Once we do that, the value of a general catch-all heuristic in one direction or the other will presumably be much diminished.

Also, there are really quite a lot of researcher degrees-of-freedom in a project like this, which makes it very hard to have any confidence that the conclusions were caused by the underlying ground truth and not by the authors' biases. And just on a statistical level, sample sizes are always going to be tiny compared to the size of highly multi-dimensional policyspace.

So that's why I'm pessimistic about this research program, and think we should just try to figure stuff out on a case-by-case basis instead, without waiting for generally-applicable results to come in.

Since you mentioned it, I should clarify that I have no strong opinion on whether EA should be more technocratic or more populist on the current margin. (Though it's probably fair to say that I'm basically in favor of the status quo, because arguments against it mostly consist of claims that EA has missed something important and obvious, and I tend to find these unpersuasive. I suppose one could argue this makes me pro-technocracy, if one thought the status quo was highly technocratic.) In any case, my contention is that it's not a crucial consideration.

Comment by Taymon on Technocracy vs populism (including thoughts on the democratising risk paper and its responses) · 2021-12-29T07:39:14.767Z · EA · GW

First of all, thanks for this post. The previous post on this topic (full disclosure: I haven't yet managed to read the paper in detail) poisoned the discourse pretty badly by being largely concerned with meta-debate and by throwing out associations between the authors' dispreferred policy views and various unsavory-sounding concepts. I was worried that this meant nobody would try to address these questions in a constructive manner, and I'm glad someone has.

I also agree that there's been a bit of unreflectiveness in the adoption of a technocratic-by-default baseline assumption in EA. I was mostly a populist pre-EA, and gradually became a technocrat because the people around me who shared my values were technocrats; for the most part, I don't think this was attributable to anyone convincing me that my previous viewpoint was wrong. (By contrast, while social effects/frog-boiling were probably important in eroding my resistance to adopting EA views on AI safety, the reason I was thinking about adopting such views in the first place was that I read arguments for them that I couldn't refute.) I'm guessing this has happened to other people too. This is probably worrying, and I don't think it's specific to just this issue.

That said, I didn't know what to actually do about any of this, and after reading this post, I still don't. I think my biggest disagreement is that I don't think the concept of "technocracy" is actually very helpful, even if it's pointing at a real cluster of things.

I'm reading you as advocating that your four key questions be treated as crucial considerations for EA. I don't think this is going to work, because these questions do not actually have general answers. Reality is underpowered. Social science is nowhere near being capable of providing fully-general answers to questions this huge. I don't think it's even capable of providing good heuristics, because this kind of question is what's left after all known-good heuristics have already been taken into account; that's why it keeps coming up again and again. There is just no avoiding addressing these questions on a case-by-case basis for each individual policy that comes up.

One might argue that the concept of "technocracy" is nevertheless useful for reminding people that they need to actually consider this vague cluster of potential risks and downsides when formulating or making the case for a policy, instead of just forgetting about them. My objection here is that, as far as I can tell, EAs already do this. (To give just one example, Eliezer Yudkowsky has explicitly written about moral hazard in AGI development.) If this doesn't change our minds, it's because we think all the alternatives are worse even after accounting for these risks. You can make an argument that we got the assessment wrong, but again, I think it has to be grounded in specifics.

If we don't routinely use the word "technocracy", then maybe that's just because the word tends to mean a lot of different things to a lot of different people; you've adopted a particular convention in this post, but it's far from universal. Even if the meanings are related, they're not precise, and EAs value precision in writing. Routinely describing proposed policies as "populist" or "technocratic" seems likely to result in frequent misunderstandings.

Finally, since it sounds like there are concerns about lack of existing writing in the EAsphere about these questions, I'd like to link some good ones:

  • Scott Alexander's back-and-forth with Glen Weyl (part 1, part 2; don't miss Scott's response in the comments, and I think Weyl said further things on Twitter although I don't have links). Uses the word "technocracy", and is probably the most widely-read explicit discussion of technocracy-vs.-populism in the EAsphere. I think that Scott, at least, cannot reasonably be accused of never having thought about this.
  • Scott's review of Rob Reich's book Just Giving. Doesn't use the word "technocracy", but gets into similar issues, and presumably Reich's perspective in the book comes from many of the same concerns that drove this piece, which I think is what Peter Singer was responding to in the EA Handbook post that you linked. Builds on the earlier post "Against Against Billionaire Philanthropy" (see also highlights from the comments).
  • "Against Multilateralism", by Sarah Constantin. Maybe the EAsphere post that most explicitly lays out the case for something-like-populism (though ultimately not siding with it). Argues with Weyl again, though it actually predates his engagement with Scott and EA. Ends with some promising directions that, if further explored, could maybe be our best hope currently available of making general progress on this class of questions (though I still don't think they rise to the level of crucial considerations).
Comment by Taymon on Comments for shorter Cold Takes pieces · 2021-12-29T06:21:28.351Z · EA · GW

This (often framed as being about the hard problem of consciousness) has long been a topic of argument in the rationalsphere. What I've observed is that some people have a strong intuition that they have a particular continuous subjective experience that constitutes what they think of as being "them", and other people don't. I don't think this is because the people in the former group haven't thought about it. As far as I can tell, very little progress has been made by either camp in converting the other to their preferred viewpoint, because the intuitions remain even after the arguments have been made.

Comment by Taymon on Why don't governments seem to mind that companies are explicitly trying to make AGIs? · 2021-12-24T23:01:23.165Z · EA · GW

I think SpaceX's regular non-Mars-colonization activities are in fact taken seriously by relevant governments, and the Mars colonization stuff seems like it probably won't happen and also wouldn't be that big a deal if it did (in terms of, like, national security; it would definitely affect who gets into the history books). So it doesn't seem to me like governments are necessarily acting irrationally there.

Same with cryptocurrency; its implications for investor protection, tax evasion, capital-controls evasion, and facilitating illicit transactions are indeed taken seriously, and while governments would obviously care quite a lot if it displaced fiat currency, I just don't think there's any way that's happening. If it does, that will probably be because fiat currency itself has somehow stopped working and something is needed to fill the void; if governments think this scenario is at all plausible, then presumably their attention would be on the first part, where fiat currency fails, since that's much more within their control and cryptocurrency isn't really a relevant input.

The scientific and regulatory culture around fusion power seems to be shaped, as you suggest, by the long history of failures in that domain; judging by similar situations in other fields, I wouldn't be surprised if no one wanted to admit to putting any credence in it, so that they wouldn't look stupid in case it fails again.

The state of pandemic preparedness does indeed seem like just straight-up government incompetence.

Comment by Taymon on Comments for shorter Cold Takes pieces · 2021-12-08T14:17:35.355Z · EA · GW

As far as I'm aware, the first person to explicitly address the question "why are literary utopias consistently places you wouldn't actually want to live?" was George Orwell, in "Why Socialists Don't Believe in Fun". I consider this important prior art for anyone looking at this question.

EAsphere readers may also be familiar with the Fun Theory Sequence, which Orwell was an important influence on.

On a related note, I get the impression that utopianism was not as outright intellectually discredited and unfashionable when Orwell wrote as it is today (e.g., the above essay predates Walden Two), even though most of the problems given in this piece were clearly already present and visible at that time. That seems like it does have something to do with the events of the 20th century, and their effects on the intellectual climate.

Comment by Taymon on Make a $100 donation into $200 (or more) · 2021-11-01T13:44:00.322Z · EA · GW

I maintain such a list.

Comment by Taymon on We’re discontinuing the standout charity designation · 2021-10-07T14:12:02.264Z · EA · GW

They answered this in their own comments section.

Comment by Taymon on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T22:10:46.148Z · EA · GW

I think it would be good for CEA to provide a clear explanation, that it (not LW) stands behind as an organization, of exactly what real value it views as being on the line here, and why it thinks it was worthwhile to risk that value.

Comment by Taymon on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T20:39:01.120Z · EA · GW

Correction: The annual Petrov Day celebration in Boston has never used the button.

Comment by Taymon on Forecasting transformative AI: what's the burden of proof? · 2021-08-18T00:56:10.948Z · EA · GW

Since you're (among other things) listing reference classes that people might put claims about transformative AI into, I'll note that millenarianism is a common one among skeptics. I.e., "lots of [mostly religious] groups throughout history have claimed that society is soon going to be swept away by an as-yet-unseen force and replaced with something new, and they were all deluded, so you are probably deluded too".

Comment by Taymon on Some quick notes on "effective altruism" · 2021-03-25T07:08:47.845Z · EA · GW

Reading this thread, I sort of get the impression that the crux here is between people who want EA to be more institutional (for which purpose the current name is kind of a problem) and people who want it to be more grassroots (for which purpose the current name works pretty okay).

There are other issues with the current name, like the thing where it opens us up to accusations of hypocrisy every time we fail to outperform anyone on anything, but I'm not sure that that's really what's driving the disagreement here. Partly, this is because people have tried to come up with better names over the years (though not always with a view towards driving serious adoption of them; often just as an intellectual exercise), and I don't think any of the candidates have produced widespread reactions of "oh yeah I wish we'd thought of that in 2012", even among people who see problems with the current name. So coming up with a name that's better than "effective altruism", by the lights of what the community currently is, seems like a pretty hard problem. (Obviously this is skewed somewhat by the inertia behind the current name, but I don't think that fully explains what's going on here.) When people do suggest different names, it tends to be because they think some or all of the community is emphasizing the wrong things, and want to pivot towards right ones.

"Global priorities community" definitely sounds incompatible with a grassroots direction; if I said that I was starting a one-person global priorities project in my basement, this would sound ridiculously grandiose and like I'd been severely Dunning-Krugered, whereas with an EA project this is fine.

For what it's worth, I'd prefer a name that's clearly compatible with both the institutional and the grassroots side, because it seems clear to me that both of these are in scope for the EA mandate and it's not acceptable to trade off either of them. The current name sounds a little more grassroots than I'd like, but again, I don't have any better ideas.

At one point I pitched Impartialist Maximizing Rationalist-Empiricist-Epistemological Welfarist-Axiological Ideology, or IMREEWAI for short, but for some strange reason nobody liked that idea :-P

Comment by Taymon on Where are you donating in 2020 and why? · 2020-12-05T00:56:46.590Z · EA · GW

Do you think the Biden campaign had room for more funding, i.e., that your donation made a Biden victory more likely on the margin (by enough to be worth it)? I am pretty skeptical of this; I suspect they already had more money than they were able to spend effectively. (I don't have a source for this other than Maciej Cegłowski, who has relevant experience but whom I don't agree with on everything; on the other hand, I can't recall ever hearing anyone make the case that U.S. presidential general-election campaigns do have room for more funding, and I'd be pretty surprised if there were such a case and it was strong.)

"Neglectedness" is a good heuristic for cause areas but I think that when donating to specific orgs it can wind up just confusing things and RFMF is the better thing to ask about.

I'm less certain about the Georgia campaign but still skeptical there, partly because it's a really high-profile race (since it determines control of the Senate and isn't competing for airtime with any other races) and partly because I think substantive electoral reform is likely to remain intractable even if the Democrats win. But I'd be interested to see a more thorough analysis of this.

Comment by Taymon on Where are you donating in 2020 and why? · 2020-11-25T22:38:07.372Z · EA · GW

Alcor claims on their brochure that membership dues "may be" tax-deductible. It's not clear to me how they concluded that. Somebody should probably ask them.

Comment by Taymon on Plan for Impact Certificate MVP · 2020-10-04T18:41:27.516Z · EA · GW

The second point there seems like the one that's actually relevant. It strikes me as unlikely that doing this with blockchain is less work than with conventional payment systems even if the developers have done blockchain things before, and conventional payment systems are even faster and more fungible with other assets than Ethereum. I'm reading the second point there as suggesting something like, you're hoping that funding for this will come in substantial part from people who are blockchain enthusiasts rather than EAs, and who therefore wouldn't be interested if it used conventional payment infrastructure?

(I agree that the "relics" idea is, at best, solving a different problem.)

Comment by Taymon on Factors other than ITN? · 2020-09-29T21:00:45.705Z · EA · GW

Wait, where's the N in the good-per-dollar-by-definition formula?
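For reference, the standard 80,000 Hours decomposition (a sketch of the usual framework; the formula in the post being replied to may differ) multiplies three ratios that telescope into good per dollar, with neglectedness as the last factor:

\[
\frac{\text{good done}}{\text{extra dollar}}
= \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{Importance}}
\times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{Tractability}}
\times \underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{Neglectedness}}
\]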

Comment by Taymon on The Hammer and the Dance · 2020-03-21T09:24:04.475Z · EA · GW

The post seems relatively optimistic. I'm worried that this may be motivated reasoning, and/or political reasoning (e.g., that people won't listen to anyone who isn't telling them that we can solve the crisis without doing anything too costly). Mind you, I'm not any kind of expert; I'm just suspicious-by-default, given that most other analysis I've seen seems less optimistic (note that there are probably all kinds of horrible selection biases in what I'm reading, and I have no idea what they are). Also, the author isn't an expert; they seem to have consulted experts for the post, but this still reduces my confidence in its conclusions, because those experts could have been selected for agreeing with a conclusion that the author came up with for non-expert-informed reasons.

Comment by Taymon on Advice for getting the most out of one-on-ones · 2020-03-21T09:20:07.858Z · EA · GW

I'm more likely to do this if there's a specific set of data I'm supposed to collect, so that I can write it down before I forget.

Comment by Taymon on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-10T05:43:44.287Z · EA · GW

Yeah, I should have known I'd get called out for not citing any sources. I'm honestly not sure I'd particularly believe most studies on this no matter what side they came out on; too many ways they could fail to generalize. I am pretty sure I've seen LW and SSC posts get cited as more authoritative than their epistemic-status disclaimers suggested, and that's most of why I believe this; generalizability isn't a concern here since we're talking about basically the same context. Ironically, though, I can't remember which posts. I'll keep looking for examples.

Comment by Taymon on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-08T05:00:00.112Z · EA · GW

"Breakthroughs" feel like the wrong thing to hope for from posts written by non-experts. A lot of the LW posts that the community now seems to consider most valuable weren't "breakthroughs". They were more like explaining a thing, such that each individual fact in the explanation was already known, but the synthesis of them into a single coherent explanation that made sense either hadn't previously been done, or had been done only within the context of an academic field buried in inferential distance. Put another way, it seems like it's possible to write good popularizations of a topic without being intimately familiar with the existing literature, if it's the right kind of topic. Though I imagine this wouldn't be much comfort to someone who is pessimistic about the epistemic value of popularizations in general.

The Huemer post kind of just felt like an argument for radical skepticism outside of one's own domain of narrow expertise, with everything that implies.

Comment by Taymon on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-08T04:58:58.935Z · EA · GW

It seems clear to me that epistemic-status disclaimers don't work for the purpose of mitigating the negative externalities of people saying wrong things, especially wrong things in domains where people naturally tend towards overconfidence (I have in mind anything that has political implications, broadly construed). This follows straightforwardly from the phenomenon of source amnesia, and anecdotally, there doesn't seem to be much correlation between how much, say, Scott Alexander (whom I'm using here because his blog is widely read) hedges in the disclaimer of any given post and how widely that post winds up being cited later on.

Comment by Taymon on Information security careers for GCR reduction · 2019-09-27T22:34:08.762Z · EA · GW

This post caused me to apply to a six-month internal rotation program at Google as a security engineer. I start next Tuesday.

Comment by Taymon on What would EAs most want to see from a "rationality" or similar project in the EA space? · 2019-09-15T17:26:55.489Z · EA · GW

I would like to see efforts at calibration training for people running EA projects. This would be useful for helping to push those projects in a more strategic direction, by having people lay out predictions regarding outcomes at the outset, kind of like what Open Phil does with respect to their grants.

Comment by Taymon on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-14T03:11:36.425Z · EA · GW

Can you give an example of a time when you believe that the EA community got the wrong answer to an important question as a result of not following your advice here, and how we could have gotten the right answer by following it?

Comment by Taymon on Candidate Scoring System, Second Release · 2019-03-20T17:27:47.298Z · EA · GW

Links aren't working.

Comment by Taymon on How Can Each Cause Area in EA Become Well-Represented? · 2019-03-07T20:33:36.526Z · EA · GW

Apologies if this is a silly question, but could you give examples of specific, concrete problems that you think this analysis is relevant to?

Comment by Taymon on EAs and EA Orgs Should Move Cash from Low-Interest to High-Interest Options · 2019-02-24T00:50:27.308Z · EA · GW

Does your recommendation account for the staff-time costs of doing anything other than whatever an org's current setup is? Orgs like CEA have stated that this is why they don't do financial-optimization things like this.

Comment by Taymon on EAGx Boston 2018 Postmortem · 2019-02-24T00:47:53.943Z · EA · GW

I don't think there was necessarily anything wrong with it, I'd just encourage future organizers to consider more explicitly what the goal is and how to achieve it.

Comment by Taymon on EAGx Boston 2018 Postmortem · 2019-02-07T23:46:17.672Z · EA · GW

No one on the team knew the donor, though he had donated to EA causes in the past and was acquainted with relevant people at CEA. We offered him VIP tickets and then he put $2,000 in the pay-what-you-want box in our online ticketing system. I think it was primarily thought of as defraying conference costs, and indeed we came in less than $2,000 under budget.

The organizers included Matt Reardon (OP and lead organizer) from Harvard Law School, Jen Eason and Vanessa Ruales from Harvard College, Juan Gil from MIT, Rebecca Baron from Tufts, and myself (no institutional affiliation).

When writing this postmortem, we actually did devote a section of it to a discussion of how the content was received, including individual presentations. Because most of the speakers were invited guests, this section will not be made public. I can share a few overall conclusions.

Overall, reception of the content in aggregate was positive. Some attendees were surprised by, and in a few cases critical of, the proportion of it devoted to animal welfare. This was not by design; most of the conference organizers are interested in animal welfare, but not more so than in other EA focus areas. Rather, it was determined primarily by the availability of speakers (most notably keynote speaker Bruce Friedrich). A few talks were also criticized by some attendees for being overly technical or of narrow interest.

Most of the panels were moderated by members of the organizing team; I think it would have been better to have these be moderated by people with deeper knowledge of the respective topics.

The anti-debate was an interesting idea whose specific workings we kind of just made up ad-hoc. I'd like to see it tried again, but only after further refinement of the format and clarity on how exactly it is supposed to work.

Comment by Taymon on Why we have over-rated Cool Earth · 2018-11-27T02:33:34.869Z · EA · GW

I don't think nobody delved into the Cool Earth numbers because they assumed a bunch of smart people had already done it. I think nobody delved into the Cool Earth numbers because it wasn't worth their time, because climate change charities generally aren't competitive with the standard EA donation opportunities, so the question is only relevant if you've decided for non-EA reasons that you're going to focus on climate change. (Indeed, if I understand correctly the Founders Pledge report was written primarily for non-EA donors who'd decided this.)

Whatever's been going on with global poverty and AI risk, I think it's probably a different problem.

(And yes, Doing Good Better was part of what I was referring to with respect to nuance getting lost in popularizations. It's that problem specifically that I claim is difficult, not the more general problem of groupthink within EA.)

Comment by Taymon on Why we have over-rated Cool Earth · 2018-11-27T00:20:47.772Z · EA · GW

I don't think I would call this hubris. We all knew that the Cool Earth recommendation was low-confidence. But what else were we going to do? To paraphrase Scott Alexander from another recent community controversy, our probability distribution was wide but centered around Cool Earth.

I do think that that nuance occasionally got lost when doing outreach to people not already very informed about EA, but that's a different problem. We haven't solved it, but I feel like that's because it's hard, not because nobody's thought about it.

(One could also argue that outreach to mainstream audiences about EA shouldn't discuss climate change at all, given its place in the movement, but the temptation to make those mainstream audiences more receptive by talking about something they already care about is strong.)

Comment by Taymon on Why we have over-rated Cool Earth · 2018-11-26T05:51:40.614Z · EA · GW

I suspect that it was widely recognized for quite some time that GWWC's analysis of Cool Earth was outdated enough not to be trustworthy. People donated to Cool Earth anyway because it was the only climate-change charity that we had any particular reason to believe was better than others. This, of course, has changed with the Founders Pledge report, and as such I predict that EA interest in Cool Earth will fade with time.

I looked a little to try to figure out why the criticisms of Cool Earth don't also apply to the Coalition for Rainforest Nations. It sounds like the primary reason is because CfRN influences nationwide policy, so the loggers can be displaced only to a different country, which is inconvenient enough that most would give up.

Comment by Taymon on Why we have over-rated Cool Earth · 2018-11-26T05:41:24.065Z · EA · GW

Also, the cases for contraception and female education as climate-change interventions seem much, much more speculative than the case for rainforest conservation, so much so that their respective cost-effectiveness numbers probably ought not to be directly compared.

Comment by Taymon on Getting past the DALY: different measures of "positive impact" · 2018-11-24T16:47:18.093Z · EA · GW

GiveWell doesn't directly use literal DALYs in their current cost-effectiveness estimates. They have a research page on them; the linked blog posts were originally published a long time ago, but were updated relatively recently, so they presumably still stand by them. See also this more recent post.

GiveWell's cost-effectiveness spreadsheet includes a tab on moral weights. You can make a copy of it, change the numbers to represent your preferred views on population ethics, and see what this does to the results.
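To make the exercise concrete, here is a minimal sketch of how changing moral weights propagates to a cost-effectiveness comparison. The weights, outcome names, and numbers below are entirely hypothetical illustrations, not GiveWell's actual spreadsheet structure or figures.

```python
# Hypothetical moral weights (value units per outcome); edit these to
# reflect your own views on population ethics.
moral_weights = {
    "death_averted_under_5": 100,
    "death_averted_over_5": 80,
    "doubling_of_consumption": 1,
}

# Hypothetical charity outcome profiles: outcomes produced per $100,000.
charities = {
    "Charity A": {"death_averted_under_5": 25, "doubling_of_consumption": 300},
    "Charity B": {"death_averted_over_5": 12, "doubling_of_consumption": 2000},
}

# Recompute cost-effectiveness under the chosen weights.
for name, outcomes in charities.items():
    value = sum(moral_weights[k] * v for k, v in outcomes.items())
    print(f"{name}: {value / 100_000:.4f} value units per dollar")
```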

Comment by Taymon on Towards Better EA Career Advice · 2018-11-24T03:55:50.318Z · EA · GW

I think the big problem with the narrow focus is that newbie EAs, especially if they're students, tend to get saturated with the message that the way to do good with your life is to go to 80,000 Hours and follow their career advice. Indeed, CEA's official advice for local group leaders says to heavily emphasize this. And they get this message relatively early in the sales funnel, long before they've gone through anything that would filter out the majority who aren't good candidates for 80,000 Hours's top priority paths. So it ought not to surprise anyone that a huge fraction of them come away demoralized.

There's an obvious sense in which this is still the impact-maximizing approach, in that the global utilitarian cost of demoralizing a bunch of people who weren't going to change the world anyway, is likely outweighed by the benefit of getting even one person who needed that extra push to start working on a priority program. But it still leaves a bad taste in my mouth. I feel as though, if EA is going to choose to be a community (as opposed to just a thing that some individuals happen to do), then it has at least some kind of responsibility to take care of its own, separate from its mission to maximize aggregate global utility. And there's a sense in which setting up expectations that most of us can't live up to constitutes a systematic failure to do that.

(Incidentally, I think most local group leaders don't want to send their members through the gauntlet like this. But even if they realize that there's a problem, it's still the accepted thing to do and they don't have any better ideas. EAs want to be doing something impactful, or else they wouldn't be EAs, and there aren't a lot of great alternative activities that groups of nonspecialists can do, especially now that fundraising for GiveWell top charities has (rightly) gone out of fashion.)

Comment by Taymon on Amazon Smile · 2018-11-21T03:49:24.583Z · EA · GW

I suspect that it is a bad idea to publicly advocate this (though using it is fine). I'm not worried so much about moral licensing; rather, I think the amount of money being moved in this way is so tiny, relative to the amount of attention required in order to move it, that in a genuinely impact-focused discussion of possible ways to do good it would not even come up. I fear that bringing it up in association with EA gives a misleading impression of what the EA approach to prioritization looks like.

Comment by Taymon on Additional plans for the new EA Forum · 2018-09-07T18:10:02.745Z · EA · GW

Is that form supposed to be accessible to outside CEA? Right now it's not.

Comment by Taymon on Would an EA world with limited money fund costly treatments? · 2018-03-31T19:47:35.754Z · EA · GW

Prior work on this topic [PDF]

Comment by Taymon on [deleted post] 2017-09-27T18:47:56.849Z

All of the endnote links are broken.

Comment by Taymon on EA Global 2017 Update · 2016-12-07T03:15:24.654Z · EA · GW

Is the nomination form supposed to have contact information? I just nominated a potential speaker who I'm connected to, but realized that you may have no way to get in touch with me.

Comment by Taymon on $250 donation for best EA intro essay - deadline: March 10 · 2016-02-11T20:34:32.632Z · EA · GW

So assuming you don't win, are you allowed to post your essay on your own blog? Or would this undermine CEA's ability to cannibalize bits of it?