Posts

Are there robustly good and disputable leadership practices? 2020-03-19T01:46:38.484Z · score: 11 (5 votes)
Harsanyi's simple “proof” of utilitarianism 2020-02-20T15:27:33.621Z · score: 49 (40 votes)
Quote from Strangers Drowning 2019-12-23T03:49:51.205Z · score: 43 (14 votes)
Peaceful protester/armed police pictures 2019-12-22T20:59:29.991Z · score: 17 (10 votes)
How frequently do ACE and Open Phil agree about animal charities? 2019-12-17T23:56:09.987Z · score: 62 (28 votes)
Summary of Core Feedback Collected by CEA in Spring/Summer 2019 2019-11-07T16:26:55.458Z · score: 106 (46 votes)
EA Art: Neural Style Transfer Portraits 2019-10-03T01:37:30.703Z · score: 37 (20 votes)
Is pain just a signal to enlist altruists? 2019-10-01T21:25:44.392Z · score: 60 (25 votes)
Ways Frugality Increases Productivity 2019-06-25T21:06:19.014Z · score: 69 (43 votes)
What is the Impact of Beyond Meat? 2019-05-03T23:31:40.123Z · score: 25 (10 votes)
Identifying Talent without Credentialing In EA 2019-03-11T22:33:28.070Z · score: 36 (18 votes)
Deliberate Performance in People Management 2017-11-25T14:41:00.477Z · score: 30 (26 votes)
An Argument for Why the Future May Be Good 2017-07-19T22:03:17.393Z · score: 26 (26 votes)
Vote Pairing is a Cost-Effective Political Intervention 2017-02-26T13:54:21.430Z · score: 12 (14 votes)
Living on minimum wage to maximize donations: Ben's expenses in 2016 2017-01-29T16:07:28.405Z · score: 21 (21 votes)
Voter Registration As an EA Group Meetup Activity 2016-09-16T15:28:46.898Z · score: 4 (6 votes)
You are a Lottery Ticket 2015-05-10T22:41:51.353Z · score: 10 (10 votes)
Earning to Give: Programming Language Choice 2015-04-05T15:45:49.192Z · score: 3 (3 votes)
Problems and Solutions in Infinite Ethics 2015-01-01T20:47:41.918Z · score: 6 (9 votes)
Meetup : Madison, Wisconsin 2014-10-29T18:03:47.983Z · score: 0 (0 votes)

Comments

Comment by ben_west on Climate Change Is Neglected By EA · 2020-05-27T15:57:13.139Z · score: 3 (2 votes) · EA · GW

Thanks for clarifying! I understand the intuition behind calling this "neglectedness", but it pushes in the opposite direction of how EAs usually use the term. I might suggest choosing a different term for this, as it confused me (and, I think, others).

To clarify what I mean by "the opposite direction": the original motivation behind caring about "neglectedness" was that it's a heuristic for whether low-hanging fruit exists in a field. If no one has looked into something, then it's more likely that there is low-hanging fruit, so we should probably prefer domains that are less established (all other things being equal).

The fact that many people have looked into climate change but we still have not "flattened the emissions curve" indicates that there is no low-hanging fruit remaining. So an argument that climate change is "neglected" in the sense you are using the term is actually an argument that it is not neglected in the usual sense of the term. Hence the confusion from me and others.

Comment by ben_west on Climate Change Is Neglected By EA · 2020-05-26T20:02:31.483Z · score: 3 (2 votes) · EA · GW

The same can hardly be said for AI safety, wild animal welfare, or (until this year, perhaps) pandemic prevention. - Will

Otherwise, looking at malaria interventions, to take just one example, makes no sense. Billions have and will continue to go in that direction even without GiveWell - Uri

I noticed Will listed AI safety and wild animal welfare (WAW), and you mentioned malaria. I'm curious if this is the crux – I would guess that Will agrees that (certain types of) climate change work is plausibly as good as anti-malaria work, and I wonder if you agree that the sort of person who (perhaps incorrectly) cares about WAW should consider WAW work to be more impactful than climate change work.

Comment by ben_west on Climate Change Is Neglected By EA · 2020-05-26T19:55:13.879Z · score: 7 (4 votes) · EA · GW

Thanks for sharing! This does seem like an area many people are interested in, so I'm glad to have more discussion.

I would suggest considering the opposite argument regarding neglectedness. If I had to steelman it, I would say something like: a small number of people (perhaps even a single PhD student) do solid research about existential risks from climate change -> existential risk research becomes an accepted part of mainstream climate change work -> because "mainstream climate change work" has so many resources, that small initial bit of research gets leveraged into a much larger amount of work.

(Note: I'm not sure how reasonable this argument is – I personally don't find it that compelling. But it seems more compelling to me than arguing that climate change isn't neglected, or that we should ignore neglectedness concerns.)

Comment by ben_west on Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics · 2020-05-20T22:20:29.953Z · score: 3 (5 votes) · EA · GW

This is really interesting! It seems like there's also compelling evidence for more than 2:

While there is no direct evidence that any of the 25 [18] species of Hawaiian land birds that have become extinct since the documented arrival of Culex quinquefasciatus in 1826 [19] were even susceptible to malaria and there is limited anecdotal information suggesting they were affected by birdpox [19], the observation that several remaining species only persist either on islands where there are no mosquitoes or at altitudes above those at which mosquitoes can breed and that these same species are highly susceptible to avian malaria and birdpox [18,19] is certainly very strong circumstantial evidence...

The formerly abundant endemic rats Rattus macleari and Rattus nativitas disappeared from Christmas Island in the Indian Ocean (10°29′ S 105°38′ E) around the turn of the twentieth century. Their disappearance was apparently abrupt, and shortly before the final collapse sick individuals were seen crawling along footpaths [22]. At that time, trypanosomiasis transmitted by fleas from introduced black rats R. rattus was suggested as the causative agent. Recently, Wyatt et al. [22] managed to isolate trypanosome DNA from both R. rattus and R. macleari specimens collected during the period of decline, whereas no trypanosome DNA was present in R. nativitas specimens collected before the arrival of black rats. While this is good circumstantial evidence, direct evidence that trypanosomes caused the mortality is limited

Comment by ben_west on 162 benefits of coronavirus · 2020-05-14T16:54:45.996Z · score: 3 (2 votes) · EA · GW

Yeah, even if it just leads to acceptance that higher education is about signaling, that seems like a step in the right direction to me. It at least lays the groundwork for future innovators who can optimize for signaling as opposed to "education."

Comment by ben_west on 162 benefits of coronavirus · 2020-05-13T18:41:55.921Z · score: 3 (2 votes) · EA · GW

Re-assessment of education & educational institutions

I'm curious to see what happens here. I know a lot of people who are saying "I'm paying $50,000 a year to watch the same lecture I could have watched on YouTube for free?" Of course, that was also true before quarantine, but somehow quarantine has made it more salient.

I'm not sure whether this salience will last and cause a switch towards nontraditional learning.

Comment by ben_west on 162 benefits of coronavirus · 2020-05-13T18:38:28.629Z · score: 2 (1 votes) · EA · GW

Thanks for this thorough list! Regarding:

Change of government/leader in some countries: if they did not handle pandemic well

Do you have a sense for how well correlated public opinion and government performance are? At least in the US, my impression is that Trump's approval ratings got a slight bump but are now back to normal levels, and that public opinion mostly tracks party allegiance rather than any government policy.

Comment by ben_west on Reducing long-term risks from malevolent actors · 2020-05-13T16:57:22.483Z · score: 4 (3 votes) · EA · GW

I wonder if one could find more credible signals of things like "caring for your employers", ideally in statistical form. Money invested in worker safety might be one such metric.

That seems reasonable. Another possibility is looking at employee benefits, which have grown rapidly (though there are also many confounders here).

Something which I can't easily measure but seems more robust is the fraction of "iterated games". E.g. I would expect enterprise salespeople to be less malevolent than B2C ones (at least towards their customers), because successful enterprise sales relies on building relationships over years or decades. Similarly, managers are often recruited and paid well because they have a loyal team who will go with them, so screwing over that team is not in their self-interest.

Comment by ben_west on Reducing long-term risks from malevolent actors · 2020-05-05T19:59:50.121Z · score: 2 (1 votes) · EA · GW

A minor copyediting suggestion (adding the words in bold):

Factor 1—characterized by cruelty, grandiosity, manipulativeness, and a lack of guilt—arguably represents the core personality traits of psychopathy. However, scoring highly on factor 2—characterized by impulsivity, reactive anger, and lack of realistic goals—is less problematic from our perspective. In fact, humans scoring high on factor 1 but low on factor 2 are probably more dangerous than humans scoring high on both factors (more on this below).

It's not a big deal, but it took me a minute to understand why you were saying it's both less problematic and more dangerous.

Comment by ben_west on Reducing long-term risks from malevolent actors · 2020-05-05T19:15:09.428Z · score: 14 (8 votes) · EA · GW

Thanks for this interesting article. Regarding malevolence among business leaders: my impression is that corporations have rewarded malevolence less over time.

E.g. in the early 1900s you had Frederick Taylor (arguably the most influential manager of the 20th century) describing his employees like this:

one of the very first requirements for a man who is fit to handle pig iron as a regular occupation is that he shall be so stupid and so phlegmatic that he more nearly resembles in his mental make-up the ox than any other type.

Modern executives would never say this about their staff. No doubt this is partly because what's said in the boardroom is different from what's said in public, but there is a serious sense in which credibly signaling prosocial behavior towards your employees is useful. E.g. 80 years later you have Paul O'Neill, in almost exactly the same industry as Taylor, making worker safety his key metric, because he felt that people would work harder if they felt taken care of by the company.

My guess is that corporations which rely on highly skilled workers benefit more from prosocial executives, and that it's hard to pretend to be prosocial over a decades-long career, though certainly not impossible. So possibly one hard-to-fake measure of malevolence is whether you repeatedly succeed in a corporation where success requires prosociality.

Comment by ben_west on Racial Demographics at Longtermist Organizations · 2020-05-05T00:35:18.918Z · score: 21 (9 votes) · EA · GW

Thanks for writing this. Another reference point: YC founders are ~16% black or Hispanic.

(I'm not sure if this is the best reference class; I was just curious about the comparison because the population of people who start YC companies seems somewhat similar to the population who join longtermist organizations.)

Comment by ben_west on What posts do you want someone to write? · 2020-04-30T17:34:53.740Z · score: 2 (1 votes) · EA · GW

You rock, thanks so much!

Comment by ben_west on The Case for Impact Purchase | Part 1 · 2020-04-22T22:48:53.100Z · score: 3 (2 votes) · EA · GW

You can also borrow against the future prize or impact purchase, e.g. as Goldman Sachs allows you to do (in some limited cases). This moves the risk onto diversified private investors.

Comment by ben_west on Why I'm Not Vegan · 2020-04-22T19:22:30.210Z · score: 9 (5 votes) · EA · GW

I have an intuition that this is where more of the disagreement between you and vegans lies (as opposed to having different moral weights). My guess is that one could literally prevent three chicken-years for less than $500/year,[1] and that some vegans' personal happiness is more affected by not eating chickens than by donating $500.

If that's true, then the reason vegans are vegan instead of donating is that they view it as "morality" as opposed to "axiology".

This accords with my intuition: having someone tell me they care about nonhuman animals while eating a chicken sandwich rubs me the wrong way in a way that having someone tell me they care about the developing world while wearing $100 shoes does not.


  1. As one heuristic: Beyond Meat is $4.59 for 9 ounces, so it would cost about $424 to replace all 52.9 pounds Peter says the average American eats in a year. ↩︎
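For anyone who wants to sanity-check the arithmetic in that footnote, here is a quick back-of-the-envelope sketch in Python using the figures quoted above; the small gap from the $424 figure presumably just reflects rounding or a slightly different price.

```python
# Back-of-the-envelope check of the footnote's Beyond Meat arithmetic.
# All figures are the ones quoted above; treat the result as approximate.
price_per_pack_usd = 4.59   # Beyond Meat, 9 oz pack
pack_size_oz = 9
meat_per_year_lb = 52.9     # annual per-capita consumption figure cited above
meat_per_year_oz = meat_per_year_lb * 16

annual_cost_usd = meat_per_year_oz / pack_size_oz * price_per_pack_usd
print(f"~${annual_cost_usd:.0f}/year to substitute Beyond Meat")  # roughly $430/year
```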

Comment by ben_west on Why I'm Not Vegan · 2020-04-22T19:01:59.772Z · score: 11 (4 votes) · EA · GW

Do the weights really affect the argument? I think Jeff is saying that being omnivorous results in ~6 additional animals alive at any given point. If an animal's existence on a farm is as bad as one human's existence in the developing world is good (a pretty non-speciesist weighting), then going vegan is worth something like a $600 donation.

$600 is admittedly much more than $0.43, but my guess is that Jeff would still rather donate the $600.

Comment by ben_west on COVID-19 response as XRisk intervention · 2020-04-17T23:28:27.466Z · score: 18 (6 votes) · EA · GW

I generally agree with your response, but wanted to point out one example of establishing credibility: Scott Aaronson says:

It does cause me to update in the direction of AI-risk being a serious concern. For the Bay Area rationalists have now publicly sounded the alarm about a looming crisis for the human race, well before it was socially acceptable to take that crisis too seriously (and when taking it seriously would have made a big difference), and then been 100% vindicated by events. Where previously they were 0 for 0 in predictions of that kind, they’re now 1 for 1.
...
[After Adam Scholl invites him to a workshop]: Thanks for asking! Absolutely, I’d be interested to attend an AI-risk workshop sometime. Partly just to learn about the field, partly to find out whether there’s anything that someone with my skillset could contribute.

(Note: part of what impressed Scott here was being early to raise the alarm, and that boat has already sailed, so it could be that future COVID-19 work won't do much to impress people like him.)

Comment by ben_west on willbradshaw's Shortform · 2020-04-07T23:28:45.765Z · score: 2 (1 votes) · EA · GW

This is a really interesting point. An additional consideration is that global leaders tend to be older, and hence more at risk (cf. Boris Johnson). You could imagine that their deaths are especially destabilizing.

If the longtermist argument for preventing pandemics is that they trigger destabilization which leads to, say, nuclear war, then this age effect could be an important factor.

Comment by ben_west on What posts do you want someone to write? · 2020-04-01T20:42:10.345Z · score: 2 (1 votes) · EA · GW

Awesome!

I personally would suggest a format of:
1. A one-paragraph summary that any educated layperson can easily understand
2. A one-page summary that a layperson with college-level math skills can understand
3. 2-5 pages of detail that someone with college-level math and Econ 101 skills can understand

This is just a suggestion though, I don't have a lot of confidence that it's correct.

Comment by ben_west on What are examples of EA work being reviewed by non-EA researchers? · 2020-03-30T23:27:13.335Z · score: 7 (4 votes) · EA · GW

Probably more informal than you want, but here's a Facebook thread debating AI safety involving some of the biggest names in AI.

Comment by ben_west on The Precipice is out today in US & Canada! Audiobook now available · 2020-03-26T16:57:58.540Z · score: 4 (3 votes) · EA · GW

See also Toby's AMA.

Comment by ben_west on What posts do you want someone to write? · 2020-03-24T16:54:08.791Z · score: 10 (7 votes) · EA · GW

Defining "management constraints" better.

Anecdotally, many EA organizations seem to think that they are somehow constrained by management capacity. My experience is that this term is used in different ways (for example, some places use it to mean that they need senior researchers who can mentor junior researchers; others use it to mean that they need people who can do HR really well).

It would be cool for someone to interview different organizations and get a better sense of what is actually needed here.

Comment by ben_west on What posts do you want someone to write? · 2020-03-24T15:31:39.558Z · score: 8 (5 votes) · EA · GW

More accessible summaries of technical work. Some things I would like summarized:

1. Existential risk and economic growth
2. Utilitarianism with and without expected utility

(You can see my own attempt to summarize something similar to #2 here, as one example.)

Comment by ben_west on Are there robustly good and disputable leadership practices? · 2020-03-19T04:03:59.335Z · score: 6 (5 votes) · EA · GW

If I had to suggest something which is both robustly good and disputable, I would suggest this principle:

Focus on minimizing the time between when you have an idea and when your customer benefits from that idea.

Evidence for being robustly good

This principle has a variety of names, as many different industries have rediscovered the same idea.

  1. The most famous formulation of this principle is probably as part of the Toyota Production System. Traditional assembly lines took a long time to set up, but once set up, they could pump out products incredibly fast. Toyota instead shifted their focus towards responding rapidly, e.g. setting a radical goal of being able to change each of their dies in less than 10 minutes.
  2. Toyota’s success with this and other rapid response principles inspired a just-in-time manufacturing revolution.
  3. These principles were included in lean manufacturing, which has led to a variety of derivatives like lean software development and the lean startup.
  4. Another stream of development is in the software world, notably with the publication of the Manifesto for Agile Software Development, which drew from prior methodologies like Extreme Programming.
  5. Agile project management is now common in many technical fields outside of software.

This underlying principle, as well as its accoutrements like Kanban boards, can be seen in a huge variety of successful industries, from manufacturing to IT. The principle of reducing turnaround time can be applied by single individuals to their own workflow, or by multinational conglomerates. While it is easier to do agile project management in an agile company, it's entirely possible for small teams (or even individuals) to unilaterally focus on reducing their turnaround times (meaning that this principle is not dependent on specific organizational cultures or processes).

There are also more theoretical reasons to think this principle is robustly good. The planning fallacy is a well-evidenced phenomenon, and it would plausibly lead people to underestimate how important rapid responses are (since they believe they can forecast the future more accurately than they actually can).

Evidence for being disputable

  1. Waterfall project management (the antithesis of agile project management) is still quite common.
  2. Toyota’s success was in part due to how surprising their approach was (compared to the approach taken by US and European manufacturers).
  3. Each industry seems to require discovering this principle anew. E.g. The DevOps Handbook popularized these principles in IT Operations only a few years ago. (It explicitly references lean manufacturing principles as the inspiration.)
  4. The planning fallacy and other optimism biases would predict that people underestimate how important it is to respond rapidly to changes.

Other candidates

Some other possible principles which are both robustly useful and disputable:

  1. Theory of Constraints. This seems well evidenced (the principle is almost trivial, once stated) and managers are often surprised by it. However, I’m not sure it’s really “disputable” – it is more a principle that is unequivocally true, but hard to implement in practice.
  2. "Minimize WIP" (work in progress). This principle is disputable, and my impression is that certain areas of supply chain management consider it to be gospel, but I'm not sure how solid the evidence base for it is outside of SCM. Anecdotally, it's been pretty useful in my own work, and there are theoretical reasons to think it's undervalued (e.g. lots of psychological research about how people underestimate how bad distractions are).
  3. Talk to your customers a lot. Popularized by The Four Steps to the Epiphany and then later The Lean Startup. Well regarded among tech startups, but I’m less clear how useful it is outside of that.

Appendix: Evidence From India

One of the most famous experiments in management is "Does Management Matter? Evidence from India". This involved sending highly paid management consultants to randomly selected textile firms in India. The treatment group had significant improvements relative to the control group (e.g. an 11% increase in productivity). How did they accomplish these gains? Through changes like:

  1. Putting trash outside, instead of on the factory floor
  2. Sorting and labeling excess inventory, instead of putting it in a giant heap
  3. Doing preventative maintenance on machines, instead of running them until they break down

I think the conclusion here is that “disputable” is a relative term – I doubt any US plant managers need to be convinced that they should buy garbage bins. Most of the benefits that the management consultants were able to provide were simply in encouraging adherence to (what managers in the US consider to be) “obvious” best practices. Those best practices clearly were not “obvious” to the Indian managers.

Comment by ben_west on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-18T02:31:30.395Z · score: 2 (1 votes) · EA · GW

GiveWell hired a VP of Marketing last fall. Do you have any insights from marketing GW that would be applicable to other EA organizations? Are there any surprising ways in which the marketing you are doing is different from "traditional" marketing?

Comment by ben_west on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-18T02:28:28.962Z · score: 5 (3 votes) · EA · GW

One for the World was incubated by GiveWell and received a sizable grant from the GH&D Fund.

The average American donates about 4% of their income to charity. (There is some discussion about whether this is the correct number here.) Given this, asking people to pledge 1% seems a bit odd – almost like you are asking them to decrease the amount they donate.

One benefit of OFTW is that they are pushing GiveWell-recommended charities, but this seems directly competitive with TLYCS, which generally suggests people pledge 2-5% (the scale adjusts based on your income).

It's also somewhat competitive with the Giving What We Can pledge, which is a cause-neutral 10%.

I'm curious what you see as the benefits of OFTW over these alternatives. I'm also curious whether you have visibility into your forecasts (namely, whether OFTW will move 1-2x as much money to top charities as it received in support this year)?

(This question mostly taken from here.)

Comment by ben_west on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-18T02:21:34.450Z · score: 5 (2 votes) · EA · GW

The GH&D Fund on EA Funds is unusual in that it almost exclusively gives large ($500k+) grants. The other funds regularly give $10-50k grants.

Do you think there is an opportunity for smaller funders in the GH&D space? Do you think there are economies of scale or other factors which make larger grants more useful in the GH&D space than in other cause areas?

Comment by ben_west on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-18T02:17:51.416Z · score: 4 (2 votes) · EA · GW

Most of the animal welfare organizations I know of that seem unusually effective are somehow related to EA. (E.g. I regularly see staff from ACE's top charities at EA Global.)

Are there parts of the effective animal advocacy ecosystem which don't overlap with EA? Do you have a sense for why these parts aren't involved with EA?

Comment by ben_west on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-18T02:11:27.112Z · score: 13 (6 votes) · EA · GW

To what extent do you think future reductions in the number of farmed animals will come from advocacy, as opposed to technological advancement (e.g. Beyond Meat)? Do you have a sense of the historical impact of these two approaches?

Comment by ben_west on Research on developing management and leadership expertise · 2020-03-17T01:39:23.336Z · score: 2 (1 votes) · EA · GW

To clarify, the implication is that the causal chain might be from good organisational outcomes to good evaluations on leadership evaluation instruments, rather than the other way round?

Yep.

Comment by ben_west on Research on developing management and leadership expertise · 2020-03-07T22:16:22.881Z · score: 4 (2 votes) · EA · GW

One thing I found really interesting about this research is statements like these:

Therefore, though transformational leadership has been contrasted to transactional leadership (with the former being suggested to be superior), the use of contingent reward behaviours seems similarly effective to transformational leadership.

It sounds very believable to me that ~0% of "nonobvious" leadership recommendations outperform a "placebo". (Or, as you suggest, that they are only good subject to contingencies like personal fit.)

I would be curious whether doing this review gave you a sense of what the "control group" for leadership could be.

I'm imagining something like:

  1. Your team has reasonably well defined goals
  2. Your team has the ability to make progress towards those goals
  3. Your team is not distracted from those goals by some major problem (e.g. morale, bureaucracy)

We might hypothesize that any team which meets 1-3 will not have its performance improved by "transformational" leadership etc.

Do you know if anyone has studied or hypothesized such a thing? If not, do you have a sense from your research of what this might look like?

Comment by ben_west on Research on developing management and leadership expertise · 2020-03-07T22:02:49.119Z · score: 4 (2 votes) · EA · GW

Do you know what these researchers are measuring when looking at the "results" level?

If I'm understanding correctly, they are claiming that training increases some sort of result by 0.6 standard deviations, which seems huge. E.g. if some corporate training increased quarterly revenue by 0.6 standard deviations, that would be quite shocking.

(I tried to read through the meta-analyses but I could only find their descriptions of how the four levels differ, and nothing about what the results level looks like.)

Comment by ben_west on Research on developing management and leadership expertise · 2020-03-07T21:58:57.562Z · score: 10 (3 votes) · EA · GW

Thanks so much for sharing this and doing this research!

Regarding this:

That high performance on measures of leadership effectiveness causes organisational success, rather than organisational success inspiring high performance on (or at least more positive evaluations of) measures of leadership effectiveness. Given that the research is almost exclusively correlational,[36] we cannot be confident that this assumption is correct. However, this seems to me to be intuitively likely.

The Halo Effect is a compendium of evidence to the contrary. Basically, leaders who are good at one thing (e.g. maximizing revenue) are considered to be good at everything else (e.g. being humble). It has great examples of how the exact same CEO behavior is described positively versus negatively as the company's stock price fluctuates.

I would recommend at least skimming the book – it has really helped me differentiate useful from less useful business research.

Comment by ben_west on Quotes about the long reflection · 2020-03-07T21:44:33.123Z · score: 6 (5 votes) · EA · GW

Thanks for collecting these!

Comment by ben_west on Harsanyi's simple “proof” of utilitarianism · 2020-02-24T13:49:45.703Z · score: 3 (2 votes) · EA · GW

Alastair Norcross used the term "thoroughgoing aggregation" for what seems to be linear addition of utilities in particular

Ah, my mistake – I had heard this definition before, which seems slightly different.

I just find the conclusion section really jarring.

Thanks for the suggestion – always tricky to figure out what a "straightforward" consequence is in philosophy.

I changed it to this – curious if you still find it jarring?

Total utilitarianism is a fairly controversial position. The above example can be extended to show that utilitarianism is extremely demanding, potentially requiring extreme sacrifices and inequality. It is therefore interesting that it is the only decision procedure which does not violate any of these seemingly reasonable assumptions.

Comment by ben_west on Harsanyi's simple “proof” of utilitarianism · 2020-02-23T06:12:10.889Z · score: 3 (2 votes) · EA · GW

Yeah, it doesn't (obviously) follow. See the appendix on equality. It made the proof simpler and I thought most readers would not find it objectionable, but if you have a suggestion for an alternate simple proof I would love to hear it!

Comment by ben_west on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T20:41:18.669Z · score: 4 (3 votes) · EA · GW

Thanks!

I don't think the theorem provides support for total utilitarianism, specifically, unless you add extra assumptions about how to deal with populations of different sizes or different populations generally. Average utilitarianism is still consistent with it, for example.

Well, average utilitarianism is consistent with the result because it gives the same answer as total utilitarianism (for a fixed population size). The vast majority of utility functions one can imagine (including ones also based on the original position like maximin) are ruled out by the result. I agree that the technical result is "anything isomorphic to total utilitarianism" though.

You might be interested in Teruji Thomas' paper

I had not seen that, thanks!

Comment by ben_west on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T20:31:08.932Z · score: 1 (2 votes) · EA · GW

In that case, it would IMO be better to change "total utilitarianism" to "utilitarianism" in the article. Utilitarianism is different from other forms of consequentialism in that it uses thoroughgoing aggregation. Isn't that what Harsanyi's theorem mainly shows?

Hmm, it does show that the aggregation is a linear addition of utilities (as opposed to, say, the sum of their logarithms), so I think it's stronger than just saying "thoroughgoing aggregation".
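To spell out the contrast (my own sketch of the standard statement, not a quote from Harsanyi): the assumptions force the group ranking to be representable, up to an additive constant, as a positively weighted sum of the individual utilities,

$$W(x) = \sum_{i=1}^{n} a_i \, u_i(x), \qquad a_i > 0,$$

which rules out otherwise "aggregative" rules such as $W(x) = \sum_i \log u_i(x)$ or $W(x) = \min_i u_i(x)$.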

Comment by ben_west on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T17:53:23.988Z · score: 3 (2 votes) · EA · GW

Yeah, my point was that ex-ante utility was valued equally, but I think that was confusing. I'm just going to remove that section. Thanks!

Comment by ben_west on Harsanyi's simple “proof” of utilitarianism · 2020-02-20T17:50:32.619Z · score: 3 (2 votes) · EA · GW

Thanks for the comment!

Also, you suggest that this result lends support to common EA beliefs.

Hmm, I wasn't trying to suggest that, but I might have accidentally implied something. I would be curious what you are pointing to?

First, it leads to preference utilitarianism, not hedonic utilitarianism

I used preferences about restaurants as an example because that seemed like something people can relate to easily, but that's just an example. The theorem is compatible with hedonic utilitarianism. (In that case, the theorem would just prove that the group's utility function is the sum of each individual's happiness.)

Second, EAs tend to value animals and future people, but they would arguably not count as part of the "group" in this framework(?).

I don't think that this theorem says much about who you aggregate. It simply states that if you aggregate some group of persons in a certain way, then that aggregation must take the form of addition.

Third, I'm not sure what this tells you about the creation or non-creation of possible beings (cf. the asymmetry in population ethics).

I agree it doesn't say much, see e.g. Michael's comment.

Comment by ben_west on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-02-11T01:08:39.088Z · score: 7 (4 votes) · EA · GW

I expect bets about a large number of Global Catastrophic Risks to be of great importance, and to similarly be perceived as "ghoulish" as you describe here.

The US government attempted to create a prediction market to predict terrorist attacks. It was shut down basically because it was perceived as “ghoulish”.

My impression is that experts think that shutting down the market made terrorism more likely, but I’m not super well-informed.

I see this as evidence both that 1) markets are useful and 2) some people (including influential people like senators) react pretty negatively to betting on life or death issues, despite the utility.

Comment by ben_west on More info on EA Global admissions · 2020-02-10T17:29:57.094Z · score: 2 (1 votes) · EA · GW

Thanks for the suggestions. There are some community-organized events like meetups or parties in the days around the conference. Due to some past issues (e.g. someone sending every attendee a promotional message about their organization on the event app, or confusion about who is actually present at the event to meet with), we’re wary of expanding app access beyond the actual conference attendees. (See also Ellen’s comment here, which is a somewhat similar idea.)

Comment by ben_west on Announcing A Volunteer Research Team at EA Israel! · 2020-01-30T23:16:16.577Z · score: 2 (1 votes) · EA · GW

Thanks for sharing this! It seems like a really exciting project, and I hope you continue to post updates. Very cool that you have explicit success metrics.

A semi-research thing I'm interested in is putting more information on Wikipedia. I wrote a little bit about this here. I suspect that for people who are new to research, or aren't entirely sure what subject they want to research, making existing research accessible is a similar task which is also quite useful for the world.

Comment by ben_west on More info on EA Global admissions · 2020-01-27T23:29:08.896Z · score: 5 (3 votes) · EA · GW

Thanks for asking Ozzie! The current bottlenecks limiting our ability to make a larger EA Global are not things that community members can easily help with.

That being said, we recently published a post on other types of events. I would encourage community members to read that and consider doing one-on-ones, group socials, or other events listed there. Even though EA Global in particular is not something that can be easily scaled by the community, many other types of events can be.

More involved community members may also consider doing a residency. I believe you and I first met when I stayed in the Bay for a few weeks many years ago, and to this day I’m still more closely connected with people I met on that trip than many I met at EA Global.

Comment by ben_west on More info on EA Global admissions · 2020-01-15T17:54:19.287Z · score: 10 (4 votes) · EA · GW

Thanks for the detailed thoughts Oli.

I think having common knowledge of norms, ideas and future plans is often very important, and is better achieved by having everyone in the same place. If you split up the event into multiple events, even if all the same people attend, the participants of those events can now no longer verify who else is at the event, and as such can no longer build common knowledge with those other people about the things that have been discussed.

Interesting – this doesn't fit with my experience, for two reasons: a) attendance is so far past Dunbar's number that I have a hard time knowing who attended any individual EA Global, and b) even if I know that someone attended a given EA Global, I'm not sure whether they attended any individual talk/workshop/etc. (since many people don't attend the same talks, or even any talks at all).

I’m curious if you have examples of “norms, ideas, or future plans” which were successfully shared in 2016 (when we had just the one large EA Global) that you think would not have successfully been shared if we had multiple events?

I have been to 3 EAGx events, all three of which seemed to me to be just generally much worse run than EAG, both in terms of content and operations

We have heard concerns similar to yours about logistics and content in the past, and we are providing more support for EAGx organizers this year, including creating a “playbook” to document best practices, having monthly check-in calls between the organizers and CEA’s events team, and hosting a training for the organizers (which is happening this week).

At least in recent years, a comparison of the Net Promoter Scores of EAG and EAGx events indicates that the attendees themselves are positive about EAGx, though there are obviously lots of confounding factors.

(More information about EAGx can be found here.)

The value of a conference does scale to a meaningful degree with n^2… I think there are strong increasing returns to conference size

Echoing Denise, I would be curious to see evidence here. My intuition is that marginal returns are diminishing, not increasing, and I think this is a common view (e.g. ticket prices for conferences don't seem to scale with the square of the number of attendees).
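One way to make that intuition concrete (a rough sketch of my own, where $k$ is an assumed cap on how many meaningful conversations one attendee can have over a weekend): the number of possible pairs does grow quadratically, but the number of realized connections is capped roughly linearly,

$$\text{possible pairs} = \binom{n}{2} \approx \frac{n^2}{2}, \qquad \text{realized connections} \lesssim \frac{kn}{2},$$

so unless attendees' capacity for conversations also grows with $n$, total value should grow closer to linearly than quadratically.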

Group membership is in significant parts determined by who attends EAG, and not by who attends EAGx, and I feel somewhat uncomfortable with the degree of control CEA has over that

Do you have examples of groups (events, programs, etc.) which use EA Global attendance as a “significant” membership criterion?

My impression is that many people who are highly involved in EA do not attend EA Global (some EA organization staff do not attend, for example), so I would be pretty skeptical of using it.

Meta Note

To clarify my above responses: I (and the Events team, who are currently running a retreat with the EAGx organizers) believe that more people being able to attend EA Global is good, all other things being equal. Even though I’m less positive about the specific things you are pointing to here than you are, I generally agree that you are pointing to legitimate sources of value.

Comment by ben_west on Long-Term Future Fund: November 2019 short grant writeups · 2020-01-14T04:02:16.442Z · score: 2 (1 votes) · EA · GW

Great – I appreciate your dedication to transparency even though you have so many other commitments!

Comment by ben_west on Long-Term Future Fund: November 2019 short grant writeups · 2020-01-14T02:31:44.056Z · score: 6 (4 votes) · EA · GW

Thanks for writing this up despite all your other obligations Oli! If you have time either now or when you do the more in-depth write up, I would still be curious to hear your thoughts on success conditions for fiction.

Comment by ben_west on More info on EA Global admissions · 2020-01-09T19:04:14.096Z · score: 7 (4 votes) · EA · GW

I wanted to share an update: for the past month, our events team (Amy, Barry, and Kate) has been brainstorming ways to allow more people to attend EA Global SF 2020. Our previous bottleneck was the number of seats available for lunch: even with us buying out the restaurant next to the dome (M.Y. China), we only had space for 550 people. (Tap 415, another nearby restaurant which we had used in prior years, has gone out of business.)

We have now updated our agreements with the venue and contractors and brainstormed some additional changes that will allow more attendees in sessions and at lunch. This has increased our capacity by 70 (from 550 to 620).

(As a reference point: EA Global SF had 499 attendees in 2019.)

Comment by ben_west on More info on EA Global admissions · 2020-01-03T23:45:22.272Z · score: 6 (3 votes) · EA · GW

Thanks for the feedback, Oliver. Do you have opinions on our hypothesis that we should focus on EAGx over more/bigger EA Globals?

Comment by ben_west on More info on EA Global admissions · 2020-01-03T23:45:05.864Z · score: 3 (2 votes) · EA · GW

We don’t have any current plans to split EA Global into multiple sub-conferences. We have used the fact that not everyone attends talks to increase attendance (for example, at EA Global London 2019, we accepted more attendees than could fit in the venue for the opening talk on the assumption that not all of them would attend the opening).

We will keep the sub-conference idea in mind for the future.

Comment by ben_west on More info on EA Global admissions · 2020-01-03T23:44:32.410Z · score: 2 (1 votes) · EA · GW

Thanks for the questions. We have adjusted our promotion – for example, the application page and form list who we believe EA Global to be a good fit for, and we send group leaders an email with this set of criteria and some FAQs about why group members may not be admitted. Conversely, we send emails to people we expect to accept (e.g. Community Building Grant recipients), to encourage them to apply. We try to make community members aware when applications open and convey who the event is aimed at, but we don't try to promote it as strongly as we did in some past years.

Despite this, we know that there are still many people who would be a good fit for EA Global who do not apply, and others who apply and feel disappointed when they are not accepted. We want to express our appreciation to everyone who applies.

Regarding themes: in 2017 EA Global Boston had a theme of “expanding the frontiers of EA”, EA Global London had an academic theme, and EA Global SF had a community theme and had looser admission standards than the other two. We found that people primarily applied to the conference they were geographically closest to and did not seem to have strong preferences about themes. We’ve also run smaller targeted retreats on specific topics like organizing EA groups or working in operations.