Thanks for clarifying! I understand the intuition behind calling this "neglectedness", but it pushes in the opposite direction of how EAs usually use the term. I might suggest choosing a different term for this, as it confused me (and, I think, others).
To clarify what I mean by "the opposite direction": the original motivation behind caring about "neglectedness" was that it's a heuristic for whether low-hanging fruit in the field exists. If no one has looked into something, then it's more likely that there is low-hanging fruit, so we should probably prefer domains that are less established. (All other things being equal.)
The fact that many people have looked into climate change but we still have not "flattened the emissions curve" indicates that there is not low hanging fruit remaining. So an argument that climate change is "neglected" in the sense you are using the term is actually an argument that it is not neglected in the usual sense of the term. Hence the confusion from me and others.
The same can hardly be said for AI safety, wild animal welfare, or (until this year, perhaps) pandemic prevention. - Will
Otherwise, looking at malaria interventions, to take just one example, makes no sense. Billions have gone, and will continue to go, in that direction even without GiveWell - Uri
I noticed Will listed AI safety and wild animal welfare (WAW), and you mentioned malaria. I'm curious if this is the crux – I would guess that Will agrees (certain types of) climate change work is plausibly as good as anti-malaria, and I wonder if you agree that the sort of person who (perhaps incorrectly) cares about WAW should consider that to be more impactful than climate change.
Thanks for sharing! This does seem like an area many people are interested in, so I'm glad to have more discussion.
I would suggest considering the opposite argument regarding neglectedness. If I had to steelman this, I would say something like: a small number of people (perhaps even a single PhD student) do solid research about existential risks from climate change -> existential risks research becomes an accepted part of mainstream climate change work -> because "mainstream climate change work" has so many resources, that small initial bit of research has been leveraged into a much larger amount.
(Note: I'm not sure how reasonable this argument is – I personally don't find it that compelling. But it seems more compelling to me than arguing that climate change isn't neglected, or that we should ignore neglectedness concerns.)
This is really interesting! It seems like there's also compelling evidence for more than 2:
While there is no direct evidence that any of the 25 species of Hawaiian land birds that have become extinct since the documented arrival of Culex quinquefasciatus in 1826 were even susceptible to malaria and there is limited anecdotal information suggesting they were affected by birdpox, the observation that several remaining species only persist either on islands where there are no mosquitoes or at altitudes above those at which mosquitoes can breed and that these same species are highly susceptible to avian malaria and birdpox [18,19] is certainly very strong circumstantial evidence...
The formerly abundant endemic rats Rattus macleari and Rattus nativitas disappeared from Christmas Island in the Indian Ocean (10°29′ S 105°38′ E) around the turn of the twentieth century. Their disappearance was apparently abrupt, and shortly before the final collapse sick individuals were seen crawling along footpaths. At that time, trypanosomiasis transmitted by fleas from introduced black rats R. rattus was suggested as the causative agent. Recently, Wyatt et al. managed to isolate trypanosome DNA from both R. rattus and R. macleari specimens collected during the period of decline, whereas no trypanosome DNA was present in R. nativitas specimens collected before the arrival of black rats. While this is good circumstantial evidence, direct evidence that trypanosomes caused the mortality is limited.
Yeah, even if it just leads to acceptance that higher education is about signaling, that seems like a step in the right direction to me. It at least lays the groundwork for future innovators who can optimize for signaling as opposed to "education."
Re-assessment of education & educational institutions
I'm curious to see what happens here. I know a lot of people who are saying "I'm paying $50,000 a year to watch the same lecture I could have watched on YouTube for free?" Of course, that was also true before quarantine, but somehow quarantine has made it more salient.
I'm not sure whether this salience will last and cause a switch towards nontraditional learning.
Change of government/leader in some countries: if they did not handle pandemic well
Do you have a sense for how well correlated public opinion and government performance are? At least in the US, my impression is that Trump's approval ratings got a slight bump but are now back to normal levels, and public opinion mostly tracks party allegiance rather than any government policy.
I wonder if one could find more credible signals of things like "caring for your employees", ideally in statistical form. Money invested in worker safety might be one such metric.
That seems reasonable. Another possibility is looking at benefits, which have grown rapidly (though there are also many confounders here).
Something which I can't easily measure but seems more robust is the fraction of "iterated games". E.g. I would expect enterprise salespeople to be less malevolent than B2C ones (at least towards their customers), because successful enterprise sales relies on building relationships over years or decades. Similarly managers are often recruited and paid well because they have a loyal team who will go with them, and so screwing over that team is not in their self-interest.
A minor copyediting suggestion (adding the words in bold):
Factor 1—characterized by cruelty, grandiosity, manipulativeness, and a lack of guilt—arguably represents the core personality traits of psychopathy. However, scoring highly on factor 2—characterized by impulsivity, reactive anger, and lack of realistic goals—is less problematic from our perspective. In fact, humans scoring high on factor 1 but low on factor 2 are probably more dangerous than humans scoring high on both factors (more on this below).
It's not a big deal, but it took me a minute to understand why you were saying it's both less problematic and more dangerous.
one of the very first requirements for a man who is fit to handle pig iron as a regular occupation is that he shall be so stupid and so phlegmatic that he more nearly resembles in his mental make-up the ox than any other type.
Modern executives would never say this about their staff, and no doubt this is partly because what's said in the boardroom is different from what's said in public, but there is a serious sense in which credibly signaling prosocial behaviors towards your employees is useful. E.g. 80 years later you have Paul O'Neill, in almost exactly the same industry as Taylor, putting worker safety as his key metric, because he felt that people would work harder if they felt taken care of by the company.
My guess is that corporations which rely on highly skilled workers benefit more from prosocial executives, and that it's hard to pretend to be prosocial over a decades-long career, though certainly not impossible. So possibly one hard-to-fake measure of malevolence is whether you repeatedly succeed in a corporation where success requires prosociality.
(I'm not sure if this is the best reference class, I was just curious in the comparison because the population of people who start YC companies seems somewhat similar to the population who join longtermist organizations.)
I have an intuition that this, rather than different moral weights, is the main source of the disagreement between you and vegans. My guess is that one could literally prevent three chicken-years for less than $500/year? And also that some vegans' personal happiness is more affected by not eating chickens than by donating $500.
If that's true, then the reason vegans are vegan instead of donating is because they view it as "morality" as opposed to "axiology".
This accords with my intuition: having someone tell me they care about nonhuman animals while eating a chicken sandwich rubs me the wrong way in a manner that having someone tell me they care about the developing world while wearing $100 shoes does not.
As one heuristic: Beyond Meat is $4.59 for 9 ounces. So it would cost $424 to replace all 52.9 pounds Peter says the average American eats in a year.
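To make the footnote's arithmetic explicit, here's a minimal sketch using the quoted price and consumption figures (the small gap versus the quoted $424 presumably comes from rounding in the original price or consumption numbers):

```python
# Rough sanity check of the Beyond Meat replacement-cost estimate.
price_per_pack = 4.59       # dollars per 9 oz pack (quoted retail price)
pack_oz = 9.0
annual_meat_lb = 52.9       # pounds of meat per American per year (Peter's figure)
annual_meat_oz = annual_meat_lb * 16

packs_needed = annual_meat_oz / pack_oz   # ~94 packs per year
annual_cost = packs_needed * price_per_pack
print(round(annual_cost))   # ≈ 432, in the same ballpark as the $424 in the footnote
```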
Do the weights really affect the argument? I think Jeff is saying that being omnivorous results in ~6 additional animals alive at any given point. If an animal's existence on a farm is as bad as one human in the developing world is good (a pretty non-speciesist weighting), then it's $600 to go vegan.
$600 is admittedly much more than $0.43, but my guess is that Jeff still would rather donate the $600.
I generally agree with your response, but wanted to point out one example of establishing credibility: Scott Aaronson says:
It does cause me to update in the direction of AI-risk being a serious concern. For the Bay Area rationalists have now publicly sounded the alarm about a looming crisis for the human race, well before it was socially acceptable to take that crisis too seriously (and when taking it seriously would have made a big difference), and then been 100% vindicated by events. Where previously they were 0 for 0 in predictions of that kind, they’re now 1 for 1.
[After Adam Scholl invites him to a workshop]: Thanks for asking! Absolutely, I’d be interested to attend an AI-risk workshop sometime. Partly just to learn about the field, partly to find out whether there’s anything that someone with my skillset could contribute.
(Note: part of what impressed Scott here was being early to raise the alarm, and that boat has already sailed, so it could be that future COVID-19 work won't do much to impress people like him.)
This is a really interesting point. An additional consideration is that global leaders tend to be older, and hence more at risk (cf. Boris Johnson). You could imagine that their deaths are especially destabilizing.
If the longtermist argument for preventing pandemics is that they trigger destabilization which leads to, say, nuclear war, the age impacts could be an important factor.
I personally would suggest a format of:
1. A one-paragraph summary that any educated layperson can easily understand
2. A one-page summary that a layperson with college-level math skills can understand
3. 2-5 pages of detail that someone with college-level math and Econ 101 skills can understand
This is just a suggestion though, I don't have a lot of confidence that it's correct.
Anecdotally, many EA organizations seem to think that they are somehow constrained by management capacity. My experience is that this term is used in different ways (for example, some places use it to mean that they need senior researchers who can mentor junior researchers; others use it to mean that they need people who can do HR really well).
It would be cool for someone to interview different organizations and get a better sense of what is actually needed here.
If I had to suggest something which is both robustly good and disputable, I would suggest this principle:
Focus on minimizing the time between when you have an idea and when your customer benefits from that idea.
Evidence for being robustly good
This principle has a variety of names, as many different industries have rediscovered the same idea.
The most famous formulation of this principle is probably as part of the Toyota Production System. Traditional assembly lines took a long time to set up, but once set up, they could pump out products incredibly fast. Toyota decided to change their focus instead towards responding rapidly, e.g. they set a radical goal of being able to change each of their dies in less than 10 minutes.
Agile project management is now common in many technical fields outside of software.
This underlying principle, as well as its accoutrements like Kanban boards, can be seen in a huge variety of successful industries, from manufacturing to IT. The principle of reducing turnaround time can be applied by single individuals to their own workflow, or by multinational conglomerates. While it is easier to do agile project management in an agile company, it’s entirely possible for small teams (or even individuals) to unilaterally focus on reducing their turnaround times (meaning that this principle is not dependent on specific organizational cultures or processes).

There are also more theoretical reasons to think this principle is robustly good. The planning fallacy is a well-evidenced phenomenon, and it reasonably would lead people to underestimate how important rapid responses are (since they believe they can forecast the future more accurately than they actually can).
Toyota’s success was in part due to how surprising their approach was (compared to the approach taken by US and European manufacturers).
Each industry seems to require discovering this principle anew. E.g. The DevOps Handbook popularized these principles in IT Operations only a few years ago. (It explicitly references lean manufacturing principles as the inspiration.)
The planning fallacy and other optimism biases would predict that people underestimate how important it is to respond rapidly to changes.
Some other possible principles which are both robustly useful and disputable:
Theory of Constraints. This seems well evidenced (the principle is almost trivial, once stated) and managers are often surprised by it. However, I’m not sure it’s really “disputable” – it is more a principle that is unequivocally true, but hard to implement in practice.
“Minimize WIP”. This principle is disputable, and my impression is that certain areas of supply chain management consider it to be gospel, but I'm not sure how solid the evidence base for it is outside of SCM. Anecdotally, it's been pretty useful in my own work, and there are theoretical reasons to think it's undervalued (e.g. lots of psychological research about how people underestimate how bad distractions are).
One of the most famous experiments in management is Does management matter? Evidence from India. This involved sending highly-paid management consultants to randomly selected textile firms in India. The treatment group had significant improvements relative to the control group (e.g. an 11% increase in productivity). How did they accomplish these gains? Through changes like:
Putting trash outside, instead of on the factory floor
Sorting and labeling excess inventory, instead of putting it in a giant heap
Doing preventative maintenance on machines, instead of running them until they break down
I think the conclusion here is that “disputable” is a relative term – I doubt any US plant managers need to be convinced that they should buy garbage bins. Most of the benefits that the management consultants were able to provide were simply in encouraging adherence to (what managers in the US consider to be) “obvious” best practices. Those best practices clearly were not “obvious” to the Indian managers.
GiveWell hired a VP of Marketing last fall. Do you have any insights from marketing GW that would be applicable to other EA organizations? Are there any surprising ways in which the marketing you are doing is different from "traditional" marketing?
The average American donates about 4% of their income to charity. (Some discussion about whether this is the correct number here). Given this, asking people to pledge 1% seems a bit odd – almost like you are asking them to decrease the amount they donate.
One benefit of OFTW is that they are pushing GiveWell-recommended charities, but this seems directly competitive with TLYCS, which generally suggests people pledge 2-5% (the scale adjusts based on your income).
It's also somewhat competitive with the Giving What We Can pledge, which is a cause-neutral 10%.
I'm curious what you see as the benefits of OFTW over these alternatives? I'm also curious if you have visibility into your forecasts (namely, whether you will move 1-2x as much money to top charities as you received in support this year)?
The GH&D Fund on EA Funds is unusual in that it almost exclusively gives large ($500k+) grants. The other funds regularly give $10-50k grants.
Do you think there is an opportunity for smaller funders in the GH&D space? Do you think there are economies of scale or other factors which make larger grants more useful in the GH&D space than in other cause areas?
To what extent do you think future reductions in the number of farmed animals will come from advocacy, as opposed to technological advancement (e.g. Beyond Meat)? Do you have a sense of the historical impact of these two approaches?
One thing I found really interesting about this research is statements like these:
Therefore, though transformational leadership has been contrasted to transactional leadership (with the former being suggested to be superior), the use of contingent reward behaviours seems similarly effective to transformational leadership.
It sounds very believable to me that ~0% of "nonobvious" leadership recommendations outperform a "placebo". (Or, as you suggest, that they are only good subject to contingencies like personal fit.)
I would be curious if doing this review gave you a sense of what the "control group" for leadership could be?
I'm imagining something like:
Your team has reasonably well defined goals
Your team has the ability to make progress towards those goals
Your team is not distracted from those goals by some major problem (e.g. morale, bureaucracy)
We might hypothesize that any team which meets 1-3 will not have its performance improved by "transformational" leadership etc.
Do you know if anyone has studied or hypothesized such a thing? If not, do you have a sense from your research of what this might look like?
Do you know what these researchers are measuring when looking at the "results" level?
If I'm understanding correctly, they are claiming that training increases some sort of result by 0.6 standard deviations, which seems huge. E.g. if some corporate training increased quarterly revenue by 0.6 standard deviations, that would be quite shocking.
(I tried to read through the meta-analyses but I could only find their descriptions of how the four levels differ, and nothing about what the results level looks like.)
Thanks so much for sharing this and doing this research!
That high performance on measures of leadership effectiveness causes organisational success, rather than organisational success inspiring high performance on (or at least more positive evaluations of) measures of leadership effectiveness. Given that the research is almost exclusively correlational, we cannot be confident that this assumption is correct. However, this seems to me to be intuitively likely.
The Halo Effect is a compendium of evidence to the contrary. Basically, leaders who are good at one thing (e.g. maximizing revenue) are considered to be good at everything else (e.g. being humble). It has great examples of how the exact same CEO behavior is described positively versus negatively as the company's stock price fluctuates.
I would recommend at least skimming the book – it has really helped me differentiate useful from less useful business research.
Alastair Norcross used the term "thoroughgoing aggregation" for what seems to be linear addition of utilities in particular.
Ah, my mistake – I had heard this definition before, which seems slightly different.
I just find the conclusion section really jarring.
Thanks for the suggestion – always tricky to figure out what a "straightforward" consequence is in philosophy.
I changed it to this – curious if you still find it jarring?
Total utilitarianism is a fairly controversial position. The above example, where (1,1) is ranked exactly as good as (2,0), can be extended to show that utilitarianism is extremely demanding, potentially requiring extreme sacrifices and inequality.
It is therefore interesting that it is the only decision procedure which does not violate one of these seemingly reasonable assumptions.
Yeah, it doesn't (obviously) follow. See the appendix on equality. It made the proof simpler and I thought most readers would not find it objectionable, but if you have a suggestion for an alternate simple proof I would love to hear it!
I don't think the theorem provides support for total utilitarianism, specifically, unless you add extra assumptions about how to deal with populations of different sizes or different populations generally. Average utilitarianism is still consistent with it, for example.
Well, average utilitarianism is consistent with the result because it gives the same answer as total utilitarianism (for a fixed population size). The vast majority of utility functions one can imagine (including ones also based on the original position like maximin) are ruled out by the result. I agree that the technical result is "anything isomorphic to total utilitarianism" though.
In that case, it would IMO be better to change "total utilitarianism" to "utilitarianism" in the article. Utilitarianism is different from other forms of consequentialism in that it uses thoroughgoing aggregation. Isn't that what Harsanyi's theorem mainly shows?
Hmm, it does show that it's a linear addition of utilities (as opposed to, say, the sum of their logarithms). So I think it's stronger than saying just "thoroughgoing aggregation".
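For readers who want the shape of the result: under standard statements of Harsanyi's aggregation theorem (the notation below is mine, not taken from the post), if individual preferences and the social preference all satisfy the vNM axioms plus a Pareto condition, the social utility function must be a weighted sum of individual utilities:

```latex
W(x) \;=\; \sum_{i=1}^{n} w_i \, u_i(x) + c, \qquad w_i > 0
```

With equal weights and a fixed population this is total utilitarianism up to affine transformation; nonlinear aggregations such as $\sum_i \log u_i(x)$ or $\min_i u_i(x)$ are ruled out, which is the sense in which the result is stronger than bare "thoroughgoing aggregation".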
Also, you suggest that this result lends support to common EA beliefs.
Hmm, I wasn't trying to suggest that, but I might have accidentally implied something. I would be curious what you are pointing to?
First, it leads to preference utilitarianism, not hedonic utilitarianism.
I used preferences about restaurants as an example because that seemed like something people can relate to easily, but that's just an example. The theorem is compatible with hedonic utilitarianism. (In that case, the theorem would just prove that the group's utility function is the sum of each individual's happiness.)
Second, EAs tend to value animals and future people, but they would arguably not count as part of the "group" in this framework(?).
I don't think that this theorem says much about who you aggregate. It's just simply stating that if you aggregate some group of persons in a certain way, then that aggregation must take the form of addition.
Third, I'm not sure what this tells you about the creation or non-creation of possible beings (cf. the asymmetry in population ethics).
Thanks for the suggestions. There are some community-organized events like meetups or parties in the days around the conference. Due to some past issues (e.g. someone sending every attendee a promotional message about their organization on the event app, or confusion about who is actually present at the event to meet with), we’re wary of expanding app access beyond the actual conference attendees. (See also Ellen’s comment here, which is a somewhat similar idea.)
Thanks for sharing this! It seems like a really exciting project, and I hope you continue to post updates. Very cool that you have explicit success metrics.
A semi-research thing I'm interested in is putting more information on Wikipedia. I wrote a little bit about this here. I suspect that for people who are new to research, or aren't entirely sure what subject they want to research, making existing research accessible is a similar task which is also quite useful for the world.
Thanks for asking Ozzie! The current bottlenecks limiting our ability to make a larger EA Global are not things that community members can easily help with.
That being said, we recently published a post on other types of events. I would encourage community members to read that and consider doing one-on-ones, group socials, or other events listed there. Even though EA Global in particular is not something that can be easily scaled by the community, many other types of events can be.
More involved community members may also consider doing a residency. I believe you and I first met when I stayed in the Bay for a few weeks many years ago, and to this day I’m still more closely connected with people I met on that trip than many I met at EA Global.
I think having common knowledge of norms, ideas and future plans is often very important, and is better achieved by having everyone in the same place. If you split up the event into multiple events, even if all the same people attend, the participants of those events can now no longer verify who else is at the event, and as such can no longer build common knowledge with those other people about the things that have been discussed.
Interesting, this doesn’t fit with my experience for two reasons: a) attendance is so far past Dunbar’s number that I have a hard time knowing who attended any individual EA Global and b) even if I know that someone attended a given EA Global, I’m not sure whether they attended any individual talk/workshop/etc. (since many people don’t attend the same talks, or even any talks at all).
I’m curious if you have examples of “norms, ideas, or future plans” which were successfully shared in 2016 (when we had just the one large EA Global) that you think would not have successfully been shared if we had multiple events?
I have been to 3 EAGx events, all three of which seemed to me to be just generally much worse run than EAG, both in terms of content and operations
We have heard concerns similar to yours about logistics and content in the past, and we are providing more support for EAGx organizers this year, including creating a “playbook” to document best practices, having monthly check-in calls between the organizers and CEA’s events team, and hosting a training for the organizers (which is happening this week).
At least in recent years, the comparison of the Net Promoter Scores of EAG and EAGx events indicates that the attendees themselves are positive about EAGx, though there are obviously lots of confounding factors:
The value of a conference does scale to a meaningful degree with n^2… I think there are strong increasing returns to conference size
Echoing Denise, I would be curious for evidence here. My intuition is that marginal returns are diminishing, not increasing, and I think this is a common view (e.g. ticket prices for conferences don’t seem to scale with the square of the number of attendees).
Group membership is in significant parts determined by who attends EAG, and not by who attends EAGx, and I feel somewhat uncomfortable with the degree of control CEA has over that
Do you have examples of groups (events, programs, etc.) which use EA Global attendance as a “significant” membership criterion?
My impression is that many people who are highly involved in EA do not attend EA Global (some EA organization staff do not attend, for example), so I would be pretty skeptical of using it.
To clarify my above responses: I (and the Events team, who are currently running a retreat with the EAGx organizers) believe that more people being able to attend EA Global is good, all other things being equal. Even though I’m less positive about the specific things you are pointing to here than you are, I generally agree that you are pointing to legitimate sources of value.
Thanks for writing this up despite all your other obligations Oli! If you have time either now or when you do the more in-depth write up, I would still be curious to hear your thoughts on success conditions for fiction.
I wanted to share an update: for the past month, our events team (Amy, Barry, and Kate) have been brainstorming ways to allow more people to attend EA Global SF 2020. Our previous bottleneck was the number of seats available for lunch: even with us buying out the restaurant next to the dome (M.Y. China), we only had space for 550 people. (Tap 415, another nearby restaurant which we had used in prior years, has gone out of business.)
We have now updated our agreements with the venue and contractors and brainstormed some additional changes that will allow more attendees in sessions and at lunch. This has increased our capacity by 70 (from 550 to 620).
(As a reference point: EA Global SF had 499 attendees in 2019.)
We don’t have any current plans to split EA Global into multiple sub-conferences. We have used the fact that not everyone attends talks to increase attendance (for example, at EA Global London 2019, we accepted more attendees than could fit in the venue for the opening talk on the assumption that not all of them would attend the opening).
We will keep the sub-conference idea in mind for the future.
Thanks for the questions. We have adjusted our promotion – for example: the application page and form lists who we believe EA Global to be a good fit for, and we send group leaders an email with this set of criteria and some FAQs about why group members may not be admitted. Conversely, we send emails to people we expect to accept (e.g. Community Building Grant recipients), to encourage them to apply. We try to make community members aware when applications open and convey who the event is aimed at, but we don’t try to promote it as strongly as we did in some past years.
Despite this, we know that there are still many people who would be a good fit for EA Global who do not apply, and others who apply and feel disappointed when they are not accepted. We want to express our appreciation to everyone who applies.
Regarding themes: in 2017 EA Global Boston had a theme of “expanding the frontiers of EA”, EA Global London had an academic theme, and EA Global SF had a community theme and had looser admission standards than the other two. We found that people primarily applied to the conference they were geographically closest to and did not seem to have strong preferences about themes. We’ve also run smaller targeted retreats on specific topics like organizing EA groups or working in operations.