1) One distinction one might want to make is between better versions of existing institutions and truly novel epistemic institutions. E.g. the Global Priorities Institute and the Future of Humanity Institute are examples of the former: a university research institute isn't a novel kind of institution. Other examples could be better expert surveys (which already exist), better data presentation, etc. My sense is that some people who think about better institutions are too focused on entirely new institutions, while neglecting better versions of existing ones. Building something entirely novel is often very hard, whereas building a new version of an existing institution is easier.
2) One mistake people who design new institutions often make is to overestimate the amount of work people are willing to put into their schemes. E.g. suggested new institutions like post-publication peer review and some forms of prediction institutions suffer from the fact that people don't want to invest the time they require. I think that's a key consideration that's often forgotten. This may be a particular problem for certain complex decentralised institutions, which depend on freely operating individuals (i.e. people you don't employ full-time) investing time in your institution, either voluntarily or for profit. Such decentralised institutions can be theoretically attractive, and I think there is a risk that people get nerd-sniped into putting more time into theorising about them than they're worth. By contrast, I'm generally more positive about professional institutions that employ people full-time (e.g. university departments). But obviously each suggestion should be evaluated on its own merits.
3) With regards to "norms and folkways", there is a discussion in economics and the other social sciences about the relative importance of "culture" and (formal) institutions for economic growth and other desirable developments. My view is that culture and norms are often underrated relative to formal institutions. The EA community has developed a set of epistemic norms and an epistemic culture which is by and large pretty good. In fact, we seem to have developed few formal institutions that are as valuable as those norms and that culture. That seems to me a reason to think more about how to foster better norms and a better culture, both within the EA community and outside it.
One option would be to create a separate international fund for pandemic response paid for by national-level taxes on industries with inherent disease risk—such as live animal producers and sellers, forestry and extractive industries—that could support recovery and lessen the toll of outbreaks on national economies.
International air travel may contribute to the spread of infectious diseases (cf. this suggestive tweet; though wealth may be a confounder: poor countries may have more undetected cases). That's an externality that travellers and airlines arguably should pay for via a tax, with the revenue used for defences against pandemics. Is this something that's considered in existing taxation? If there should be such a pandemic flight tax, how large should it optimally be?
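On the optimal size, the textbook Pigouvian benchmark is one natural starting point (a sketch of the standard result, not a worked estimate): set the per-flight tax equal to the marginal external cost of a flight at the efficient level of travel,

```latex
t^* = \mathrm{MEC}(q^*)
```

where $q^*$ is the efficient quantity of flights and $\mathrm{MEC}(q^*)$ is the expected pandemic-related damage caused by a marginal flight at that quantity. The hard part, of course, is estimating that damage, which presumably varies with route, season, and outbreak conditions.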
That some of donors will be persuaded not to donate by the information is a feature, not a bug.
That isn't true as a matter of definition, as you seem to imply. Some donors being persuaded not to donate by the information can be a feature, but it can also be a bug. That has to be decided on a case-by-case basis, by looking at what the disclosure statement actually says.
Sometimes the term "the Gricean maxims" (or "Grice's maxims") is used instead of "the Cooperative Principle" as the principal term. I personally find it more memorable, since "the Cooperative Principle" could mean so many things.
Point 3 was discussed here. My impression of that discussion is that many forum readers thought it important to familiarise oneself with the literature before commenting. As I say in my comment, that's certainly my view.
I agree that too many EA Forum posts fail to appropriately engage with relevant literature.
In some cases, I think people feel that they have a nuanced position that isn't captured by broad labels. That reasoning can go too far, however: if the argument is pushed far enough, no one will count as a socialist, postmodernist, effective altruist, etc. And as you imply, these kinds of broad categories are useful, even if in some respects imperfect.
Which actors do you think one should try to influence to make sure that a potential transition to a world with AGI goes well (e.g. so that it leads to widely shared benefits)? For instance, do you think one should primarily focus on influencing private companies or governments? I'd be interested in learning more about the arguments for whatever conclusions you have. Thanks!
A new newsletter of potential interest: Reasonable People, by cognitive scientist Tom Stafford.
The plan is to collect in one place things I write on human rationality, reason and persuasion, sharing links and evidence on these topics as I try and understand advertising, bias, misinformation, influence and decision making.
Comment by Stefan_Schubert on [deleted post]
Thanks for this post. I think discussions about career prioritisation often become quite emotional and personal in a way that clouds people's judgements. Sometimes I think I've observed the following dynamic.
1. It's argued, more or less explicitly, that EAs should switch career into one of a small number of causes.
2. Some EAs are either not attracted to those careers, or are (or at least believe that they are) unable to successfully pursue those careers.
3. The preceding point means that there is a painful tension between the desire to do the most good, and one's personal career prospects. There is a strong desire to resolve that tension.
4. That gives strong incentives to engage in motivated reasoning: to arrive at the conclusion that actually, this tension is illusory; one doesn't need to engage in tough trade-offs to do the most good. One can stay on doing roughly what one currently does.
5. The EAs who believe in point 1 - that EAs should switch career to other causes - are often unwilling to criticise the reasoning described in point 4. That's because these issues are rather emotional and personal, and because some may think it's insensitive to criticise people's personal career choices.
I think similar dynamics play out with regards to cause prioritisation more generally, decisions whether to fund specific projects which many feel strongly about, and so on. The key aspects of these dynamics are 1) that people often are quite emotional about their choice, and therefore reluctant to give up on it even in the face of better evidence and 2) that others are reluctant to engage in serious criticism of the former group, precisely because the issue is so clearly emotional and personal to them.
One way to mitigate these problems and to improve the level of debate on these issues is to discuss the object-level considerations in a detached, unemotional way (e.g. obviously without snark); and to do so in some detail. That's precisely what this post does.
I agree that consensus is unlikely regarding AI safety but I rather meant that it's useful when individuals make clear claims about difficult questions, and that's possible whether others agree with them or not. In AI Impacts' interview series, such claims are made (e.g. here: https://aiimpacts.org/conversation-with-adam-gleave/).
Thanks for this. I think it's valuable when well-informed EAs make easily interpretable claims about difficult questions (another such question is AI risk). This post (including the "appendices" in the comments) strikes a good balance; it is epistemically responsible, yet has clear conclusions.
You don't have to provide a complete ranking of candidates. You only have to decide which candidates to accept and which to reject within the bucket where you would prefer to randomise. And it seems to me that such decisions could in principle be made extremely quickly, particularly since you must already have assimilated some information about the candidates in order to put them in the right bucket (speed probably affects quality adversely, but I still think some signal would remain).
If time is an issue, organisers can make snap judgements. It's not clear to me that randomisation would be much faster, particularly since you have to make a first rough scoring under your approach anyway. And it seems plausible, in my view, that organisers are better than chance at picking the better applicants, even when using snap judgements, and even among applicants in the same bucket.
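The claim that even noisy snap judgements beat random selection within a bucket can be illustrated with a toy simulation. All quantities here are hypothetical: candidate quality and judgement noise are modelled as standard-normal draws, which is an assumption, not a claim about real applicant pools.

```python
import random

def expected_quality(n_candidates=100, n_picks=10, noise=1.0,
                     trials=2000, seed=0):
    """Within one 'bucket', compare picking the top candidates by a
    noisy snap judgement vs. picking the same number at random.
    Returns (avg. total true quality of snap picks, of random picks)."""
    rng = random.Random(seed)
    snap_total, random_total = 0.0, 0.0
    for _ in range(trials):
        # Hypothetical true quality of each candidate.
        quality = [rng.gauss(0, 1) for _ in range(n_candidates)]
        # Snap judgement = true quality plus independent noise.
        signal = [q + rng.gauss(0, noise) for q in quality]
        # Pick the n_picks candidates with the highest noisy signal.
        ranked = sorted(range(n_candidates),
                        key=lambda i: signal[i], reverse=True)
        snap_total += sum(quality[i] for i in ranked[:n_picks])
        # Baseline: pick n_picks candidates uniformly at random.
        random_total += sum(quality[i]
                            for i in rng.sample(range(n_candidates), n_picks))
    return snap_total / trials, random_total / trials

snap, rand_ = expected_quality()
print(f"snap judgement: {snap:.2f}, random: {rand_:.2f}")
```

As long as the judgement carries some signal (finite noise), the snap picks have higher expected quality than the random picks; raising the `noise` parameter shrinks, but doesn't eliminate, the gap.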
Abstract: Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.
...its settlement value will be based on the degree to which 2119 people approve of the actions of people in the 2019-2119 timespan, as determined by a standardised survey - say, on a scale from 0 to 10.
A potential risk is that people might not be very good at assessing whether the last century's actions/policies have, on average, been good for them or not. To study that risk one could run such surveys today, testing whether people in different countries approve of the actions of people (in their country) in the 1919-2019 time span. Then one could match those survey results against expert judgements of how well different countries have been run during that period. (The experts aren't necessarily right, but agreement or disagreement with the experts should still give some evidence.)
The “veil of ignorance” is a moral reasoning device designed to promote impartial decision making by denying decision makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here, we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across 7 experiments (n = 6,261), 4 preregistered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by anchoring, probabilistic reasoning, or generic perspective taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision makers who wish to make more impartial and/or socially beneficial choices.
One distinction one might make is that between institutions that:
a) Generate knowledge about how to help future generations effectively.
b) Give more power to people who want to help future generations, or whose task is to help future generations.
Using a belief-preference framework, one might say that a) generates true beliefs (and corrects false beliefs), whereas b) effectively makes the government's preferences more future-oriented.
An In-government Think Tank would be an example of a), and age-weighted voting an example of b). Some of the other institutions may be mixes, with both components.
Impartiality with respect to time is often compared with impartiality with respect to gender, ethnicity, etc. However, it seems to me that there is an important policy disanalogy: it's probably more difficult to know how to advance the interests of future generations than to know how to advance the interests of an underprivileged gender or ethnic group (even though the latter isn't trivial either). There's a risk that many policies that people might advocate for the sake of future generations aren't especially effective. One upshot is that when it comes to helping future generations, institutions that generate more knowledge may be unusually important.
I agree that the epistemic dynamics of discussions about the EA Hotel aren't optimal. I would guess that there are selection effects; that critics aren't heard to the same extent as supporters.
Relatedly, the amount of discussion about the EA Hotel relative to other projects may be a bit disproportionate. It's a relatively small project, but there are lots of posts about it (see OP). By contrast, there is far less discussion about larger EA orgs, large OpenPhil grants, etc. That seems a bit askew to my mind. One might wonder about the cost-effectiveness of relatively long discussions about small donations, given opportunity costs.
...deliberation often stultifies or corrupts us, that it often exacerbates our biases and leads to greater conflict.
Like Matt_Lerner, I wonder how you selected what evidence to cite, and whether the side that is more sceptical of deliberative democracy got a fair hearing.
With regards to this statement:
Empirical research shows that both politicians and average citizens have the capacity to deliberate when institutions are appropriate.
That seems to depend on what standards you have for "capacity to deliberate". At one point you use the phrase "rigorous analytic reasoning", and depending on what cut-off point one has for that, one might argue that capacity for such reasoning isn't that common.
A recent Swedish paper showed that politicians are "on average significantly smarter and better leaders than the population they represent". To the extent that that is true, politicians may be better at deliberating than the general public. I haven't looked at other countries, however.
The question "to what extent did a specific moral philosopher cause moral progress/change?" (not the exact question you pose, but close) is an instance of the more general question "to what extent have individuals influenced history?" (e.g. Luther, Napoleon, Stalin). It could be useful to look at what people have written on that more general issue, both to generate priors, and to gain insights about various methodological and conceptual issues (which I suspect can be pretty tricky).
Philosophy Contest: Write a Philosophical Argument That Convinces Research Participants to Donate to Charity
Can you write a philosophical argument that effectively convinces research participants to donate money to charity?
Prize: $1000 ($500 directly to the winner, $500 to the winner's choice of charity)
Preliminary research from Eric Schwitzgebel's laboratory suggests that abstract philosophical arguments may not be effective at convincing research participants to give a surprise bonus award to charity. In contrast, emotionally moving narratives do appear to be effective.
However, it might be possible to write a more effective argument than the arguments used in previous research. Therefore U.C. Riverside philosopher Eric Schwitzgebel and Harvard psychologist Fiery Cushman are challenging the philosophical and psychological community to design an argument that effectively convinces participants to donate bonus money to charity at rates higher than they do in a control condition.