Stefan_Schubert's Shortform

post by Stefan_Schubert · 2019-10-04T18:32:56.962Z · 23 comments


comment by Stefan_Schubert · 2019-10-14T10:21:40.362Z

The Nobel Prize in Economics was awarded to Abhijit Banerjee, Esther Duflo and Michael Kremer "for their experimental approach to alleviating global poverty".

comment by JP Addison (jpaddison) · 2019-10-14T14:05:10.342Z

Michael Kremer is a founding member of Giving What We Can 🙂

comment by Stefan_Schubert · 2020-09-15T16:06:11.470Z

I've written a blog post on naive effective altruism and conflict.


A very useful concept is naive effective altruism. The naive effective altruist fails to take some important social or psychological considerations into account. Therefore, they may end up doing harm, rather than good.

The standard examples of naive effective altruism are perhaps lying and stealing for the greater good. But there are other, less salient examples. Here I want to discuss one of them: the potential tendency to be overly conflict-oriented. There are several ways this may occur.

First, people may neglect the costs of conflict - that it’s psychologically draining for them and for others, that it reduces the potential for future collaboration, that it may harm community culture, and so on. Typically, you enter into a conflict because you think that some individual or organisation is making a poor decision - e.g. one that reduces impact. My hunch is that people often decide to enter the conflict because they focus exclusively on this (supposed) direct impact cost, and don’t consider the costs of the conflict itself.

Second, people often have unrealistic expectations of how others will react to criticism. Rightly or wrongly, people tend to feel that their projects are their own, and that others can only have so much of a say over them. They can take a certain amount of criticism, but if they feel that you’re invading their territory too much, they will typically find you abrasive. And they will react adversely.

Third, overconfidence may lead you to think that a decision is obviously flawed, where there’s actually reasonable disagreement. That can make you push more than you should.

*

These considerations don’t mean that you should never enter into a conflict. Of course you sometimes should. Exactly when to do so is a tricky problem. All I want to say is that we should be aware that there’s a risk that we enter into too many conflicts if we apply effective altruism naively.

comment by Stefan_Schubert · 2020-03-06T14:45:38.381Z

International air travel may contribute to the spread of infectious diseases (cf. this suggestive tweet; though wealth may be a confounder, since poor countries may have more undetected cases). That's an externality that travellers and airlines arguably should pay for, via a tax. The money would be used for defences against pandemics. Is this something that's considered in existing taxation? If there should be such a pandemic flight tax, how large should it optimally be?

One might also consider whether there are other behaviours that increase the risk of pandemics and that should be taxed for the same reason. Seb Farquhar, Owen Cotton-Barratt, and Andrew Snyder-Beattie have already suggested that risk externalities should be priced into research with public health risks.

comment by Stefan_Schubert · 2020-03-09T15:15:13.429Z

Foreign Affairs discussing similar ideas:

One option would be to create a separate international fund for pandemic response paid for by national-level taxes on industries with inherent disease risk—such as live animal producers and sellers, forestry and extractive industries—that could support recovery and lessen the toll of outbreaks on national economies.

comment by Stefan_Schubert · 2019-10-24T13:10:34.066Z
Philosophy Contest: Write a Philosophical Argument That Convinces Research Participants to Donate to Charity

Can you write a philosophical argument that effectively convinces research participants to donate money to charity?

Prize: $1000 ($500 directly to the winner, $500 to the winner's choice of charity)

Background

Preliminary research from Eric Schwitzgebel's laboratory suggests that abstract philosophical arguments may not be effective at convincing research participants to give a surprise bonus award to charity. In contrast, emotionally moving narratives do appear to be effective.

However, it might be possible to write a more effective argument than the arguments used in previous research. Therefore U.C. Riverside philosopher Eric Schwitzgebel and Harvard psychologist Fiery Cushman are challenging the philosophical and psychological community to design an argument that effectively convinces participants to donate bonus money to charity at rates higher than they do in a control condition.

Link

comment by Stefan_Schubert · 2019-10-12T11:04:50.356Z

Of possible interest regarding the efficiency of science: a paper finds that scientists on average spend 52 hours per year formatting papers. (Times Higher Education write-up; extensive excerpts here if you don't have access.)

comment by Habryka · 2019-10-12T19:37:42.544Z

This seems about a factor of 2 lower than I expected. My guess would be that this just includes the actual cost of fixing formatting errors, not the cost of fitting your ideas to the required format in the first place (i.e. having to write all the different sections even when it doesn't make sense, or being forced to use LaTeX at all).

(Note: I did not yet get around to reading the paper, so this is just a first impression, as well as registering a prediction)

comment by Stefan_Schubert · 2019-10-12T19:57:54.291Z

Yes, one could define broader notions of "formatting", in which case the cost would be higher. They use a narrower notion.

For the purpose of this work, formatting was defined as total time related to formatting the body of the manuscript, figures, tables, supplementary files, and references. Respondents were asked not to count time spent on statistical analysis, writing, or editing.

The authors think that there are straightforward reforms which could reduce the time spent on formatting, in this narrow sense.

[I]t is hoped that a growing number of journals will recommend no strict formatting guidelines, at least at first submission but preferably until acceptance, to alleviate the unnecessary burden on scientists. In 2012, Elsevier initiated a process like this in the journal Free Radical Biology & Medicine with “Your Paper, Your Way”, a simplified submission process with no strict formatting requirements until the paper has been accepted for publication.

It may be more difficult to get acceptance for more far-reaching reforms.

comment by Stefan_Schubert · 2020-09-19T12:13:14.944Z

On encountering global priorities research (from my blog).


People who are new to a field usually listen to experienced experts. Of course, they don’t uncritically accept whatever they’re told. But they tend to feel that they need fairly strong reasons to dismiss the existing consensus.

But people who encounter global priorities research - the study of what actions would improve the world the most - often take a different approach. Many disagree with global priorities researchers’ rankings of causes, preferring a ranking of their own.

This can happen for many reasons, and there’s some merit to several of them. First, as global priorities researchers themselves acknowledge, there is much more uncertainty in global priorities research than in most other fields. Second, global priorities research is a young and not very well-established field.

But there are other factors that may make people defer less to existing global priorities research than is warranted. I think I did, when I first encountered the field.

First, people often have unusually strong feelings about global priorities. We often feel strongly for particular causes or particular ways of improving the world, and don’t like to hear that they are ineffective. So we may not listen to rankings of causes that we disagree with.

Second, most intellectually curious people have put some thought into the questions that global priorities research studies, even if they’ve never heard of the field itself. This is especially so since most academic disciplines have some relation to global priorities research. So people typically have a fair amount of relevant knowledge. That’s good in some ways, but it can also make them overconfident in their ability to judge existing global priorities research. Identifying the most effective ways of improving the world requires much more systematic thinking than most people will have done prior to encountering the field.

Third, people may underestimate how much thinking global priorities researchers have done over the past 10-20 years, and how sophisticated that thinking is. This is to some extent understandable, given how young the field is. But if you start to truly engage with the best global priorities research, you realize that the researchers have an answer to most of your objections. And you’ll discover that they’ve come up with many important considerations that you’ve likely never thought of. This was definitely my personal experience.

For these reasons, people who are new to global priorities research may come to dismiss existing research prematurely. Of course, that’s not the only mistake you can make. You can also go too far in the other direction, and be overly deferential. It’s a tricky balance to strike. But in my experience, premature dismissal is relatively common - and maybe especially so among smart and experienced people. So it’s something to watch out for.

Thanks to Ryan Carey for comments.

comment by Denise_Melchin · 2020-09-19T12:35:33.056Z

People who are new to a field usually listen to experienced experts. Of course, they don’t uncritically accept whatever they’re told. But they tend to feel that they need fairly strong reasons to dismiss the existing consensus.

I'm not sure I agree with this, so it is not obvious to me that there is anything special about GP research. But it depends on who you mean by 'people' and what your evidence is. The reference class of research also matters - I expect people are more willing to believe physicists, but less so sociologists.

comment by Stefan_Schubert · 2020-09-19T12:40:29.278Z

Yeah, I agree that there are differences between different fields - e.g. physics and sociology - in this regard. I didn't want to go into details about that, however, since it would have been a bit of a distraction from the main subject (global priorities research).

comment by Stefan_Schubert · 2020-06-11T15:09:47.978Z

I wrote a blog post on utilitarianism and truth-seeking. Brief summary:

The Oxford Utilitarianism Scale measures the tendency to accept utilitarianism in terms of two factors: acceptance of instrumental harm for the greater good, and impartial beneficence.

But there is another question, which is subtly different, namely: what psychological features do we need to apply utilitarianism, and to do it well?

Once we turn to application, truth-seeking becomes hugely important. The utilitarian must find the best ways of doing good. You can only do that if you're a devoted truth-seeker.

comment by Stefan_Schubert · 2019-11-04T13:06:14.025Z

New paper in Personality and Individual Differences finds that:

Timegiving behaviors (i.e. caregiving, volunteering, giving support) and prosocial traits were associated with a lower mortality risk in older adults, but giving money was not.

comment by JP Addison (jpaddison) · 2019-11-05T15:25:04.692Z

When I read your posts on psychology, I get the sense that you're genuinely curious about the results, without much of any filter for them matching with the story that EA would like to tell. Nice job.

comment by Stefan_Schubert · 2019-11-05T15:38:51.226Z

Thanks!

comment by Stefan_Schubert · 2019-10-04T18:32:57.140Z

Hostile review of Stuart Russell's new book Human Compatible in Nature. (I disagree with the review.)

Russell, however, fails to convince that we will ever see the arrival of a “second intelligent species”. What he presents instead is a dizzyingly inconsistent account of “intelligence” that will leave careful readers scratching their heads. His definition of AI reduces this quality to instrumental rationality. Rational agents act intelligently, he tells us, to the degree that their actions aim to achieve their objectives, hence maximizing expected utility. This is likely to please hoary behavioural economists, with proclivities for formalization, and AI technologists squeaking reward functions onto whiteboards. But it is a blinkered characterization, and it leads Russell into absurdity when he applies it to what he calls “overly intelligent” AI.
Russell’s examples of human purpose gone awry in goal-directed superintelligent machines are bemusing. He offers scenarios such as a domestic robot that roasts the pet cat to feed a hungry child, an AI system that induces tumours in every human to quickly find an optimal cure for cancer, and a geoengineering robot that asphyxiates humanity to deacidify the oceans. One struggles to identify any intelligence here.

comment by Stefan_Schubert · 2019-10-15T13:32:48.180Z

Andrew Gelman argues that scientists’ proposals for fixing science are themselves not always very scientific.

If you’ve gone to the trouble to pick up (or click on) this volume in the first place, you’ve probably already seen, somewhere or another, most of the ideas I could possibly propose on how science should be fixed. My focus here will not be on the suggestions themselves but rather on what are our reasons for thinking these proposed innovations might be good ideas. The unfortunate paradox is that the very aspects of “junk science” that we so properly criticize—the reliance on indirect, highly variable measurements from nonrepresentative samples, open-ended data analysis, followed up by grandiose conclusions and emphatic policy recommendations drawn from questionable data— all seem to occur when we suggest our own improvements to the system. All our carefully-held principles seem to evaporate when our emotions get engaged.

comment by Stefan_Schubert · 2019-11-20T11:33:00.382Z

Marginal Revolution:

Due to a special grant, there has been a devoted tranche of Emergent Ventures to individuals, typically scholars and public intellectuals, studying the nature and causes of progress.

Nine grantees, including one working on X-risk:

Leopold Aschenbrenner, 17-year-old economics prodigy, to spend the next summer in the Bay Area and for general career development. Here is his paper on existential risk.

comment by EdoArad (edoarad) · 2019-11-20T15:16:09.804Z

The paper was also posted here on the forum.

comment by Stefan_Schubert · 2019-11-13T21:33:46.153Z

"Veil-of-ignorance reasoning favors the greater good", by Karen Huang, Joshua Greene, and Max Bazerman (all at Harvard).

The “veil of ignorance” is a moral reasoning device designed to promote impartial decision making by denying decision makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here, we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across 7 experiments (n = 6,261), 4 preregistered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by anchoring, probabilistic reasoning, or generic perspective taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision makers who wish to make more impartial and/or socially beneficial choices.

comment by Stefan_Schubert · 2019-10-04T18:40:07.119Z

Philosopher Eric Schwitzgebel argues that a good philosophical argument should be one by which the target audience ought to be moved, but that such arguments are difficult to make regarding animal consciousness, since there is little common ground.

The Common Ground Problem is this. To get an argument going, you need some common ground with your intended audience. Ideally, you start with some shared common ground, and then maybe you also introduce factual considerations from science or elsewhere that you expect they will (or ought to) accept, and then you deliver the conclusion that moves them your direction. But on the question of animal consciousness specifically, people start so far apart that finding enough common ground to reach most of the intended audience becomes a substantial problem, maybe even an insurmountable problem.

Cf. his paper Is There Something It’s Like to Be a Garden Snail?

The question “are garden snails phenomenally conscious?” or equivalently “is there something it’s like to be a garden snail?” admits of three possible answers: yes, no, and denial that the question admits of a yes-or-no answer. All three answers have some antecedent plausibility, prior to the application of theories of consciousness. All three answers retain their plausibility also after the application of theories of consciousness. This is because theories of consciousness, when applied to such a different species, are inevitably question-begging and rely partly on dubious extrapolation from the introspections and verbal reports of a single species.

comment by Stefan_Schubert · 2019-11-18T14:09:10.855Z

Eric Schwitzgebel:

We Might Soon Build AI Who Deserve Rights
Talk for Notre Dame, November 19:
Abstract: Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.