evelynciara's Shortform

post by evelynciara · 2019-10-14T08:03:32.019Z · EA · GW · 67 comments

Comments sorted by top scores.

comment by evelynciara · 2020-07-10T05:30:34.961Z · EA(p) · GW(p)

I think we need to be careful when we talk about AI and automation not to commit the lump of labor fallacy. When we say that a certain fraction of economically valuable work will be automated at any given time, or that this fraction will increase, we shouldn't implicitly assume that the total amount of work being done in the economy is constant. Historically, automation has increased the size of the economy, thereby creating more work to be done, whether by humans or by machines; we should expect the same to happen in the future. (Note that this doesn't exclude the possibility of increasingly general AI systems performing almost all economically valuable work. This could very well happen even as the total amount of work available skyrockets.)
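As a toy illustration of this point (all growth figures below are made up, not forecasts), the automated share of work can rise steadily while the absolute amount of human work also grows, simply because the whole economy expands:

```python
# Toy illustration: a rising automation share need not mean less human work,
# if total economic output grows fast enough. All numbers are made up.

total_work = 100.0        # index of total work done in year 0
automated_share = 0.30    # fraction of work done by machines in year 0 (assumption)
economy_growth = 0.03     # total work grows 3% per year (assumption)
share_growth = 0.008      # automated share rises 0.8 percentage points per year (assumption)

for year in range(0, 51, 10):
    automated = total_work * automated_share
    human = total_work * (1 - automated_share)
    print(f"year {year:2d}: total={total_work:7.1f} "
          f"automated={automated:7.1f} human={human:7.1f} "
          f"(share automated={automated_share:.0%})")
    # advance one decade
    total_work *= (1 + economy_growth) ** 10
    automated_share = min(0.99, automated_share + share_growth * 10)
```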

comment by HaukeHillebrandt · 2020-07-10T10:54:32.178Z · EA(p) · GW(p)

Also see a recent paper finding no evidence for the automation hypothesis:

http://www.overcomingbias.com/2019/12/automation-so-far-business-as-usual.html

comment by evelynciara · 2020-12-28T18:23:26.917Z · EA(p) · GW(p)

An idea I liked from Owen Cotton-Barratt's new interview on the 80K podcast: Defense in depth

If S, M, and L are the events that some small, medium, or large catastrophe occurs, and X is human extinction, then the probability of human extinction is

P(X) = P(S) × P(M | S) × P(L | M) × P(X | L)

So halving the probability of all small disasters, the probability of any small disaster becoming a medium-sized disaster, etc. would halve the probability of human extinction.
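A minimal sketch of that arithmetic, with made-up probabilities purely for illustration:

```python
# Defense in depth: extinction requires a chain of escalations, so
# P(extinction) is a product of conditional probabilities.
# The numbers below are made up for illustration only.

p_small = 0.10             # P(some small catastrophe occurs)
p_med_given_small = 0.20   # P(it escalates to a medium catastrophe)
p_large_given_med = 0.10   # P(medium escalates to large)
p_ext_given_large = 0.05   # P(large catastrophe leads to extinction)

def p_extinction(ps, pm, pl, px):
    return ps * pm * pl * px

baseline = p_extinction(p_small, p_med_given_small, p_large_given_med, p_ext_given_large)
halved_small = p_extinction(p_small / 2, p_med_given_small, p_large_given_med, p_ext_given_large)

print(f"baseline P(extinction):   {baseline:.6f}")
print(f"after halving P(small):   {halved_small:.6f}")
print(f"ratio: {halved_small / baseline:.2f}")  # 0.50 - halving any factor halves the product
```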

comment by evelynciara · 2020-09-17T05:48:02.522Z · EA(p) · GW(p)

I just listened to Andrew Critch's interview about "AI Research Considerations for Human Existential Safety" (ARCHES). I took some notes on the podcast episode, which I'll share here. I won't attempt to summarize the entire episode; instead, please see this summary [? · GW] of the ARCHES paper in the Alignment Newsletter.

  • We need to explicitly distinguish between "AI existential safety" and "AI safety" writ large. Saying "AI safety" without qualification is confusing for both people who focus on near-term AI safety problems and those who focus on AI existential safety problems; it creates a bait-and-switch for both groups.
  • Although existential risk can refer to any event that permanently and drastically reduces humanity's potential for future development (paraphrasing Bostrom 2013), ARCHES only deals with the risk of human extinction because it's easier to reason about and because it's not clear what other non-extinction outcomes are existential events.
  • ARCHES frames AI alignment in terms of delegation from m ≥ 1 human stakeholders (such as individuals or organizations) to n ≥ 1 AI systems. Most alignment literature to date focuses on the single-single setting (one principal, one agent), but such settings in the real world are likely to evolve into multi-principal, multi-agent settings. Computer scientists interested in AI existential safety should pay more attention to the multi-multi setting relative to the single-single one for the following reasons:
    • There are commercial incentives to develop AI systems that are aligned with respect to the single-single setting, but not to make sure they won't break down in the multi-multi setting. A group of AI systems that are "aligned" with respect to single-single may still precipitate human extinction if the systems are not designed to interact well.
    • Single-single delegation solutions feed into AI capabilities, so focusing only on single-single delegation may increase existential risk.
    • What alignment means in the multi-multi setting is more ambiguous because the presence of multiple stakeholders engenders heterogeneous preferences. However, predicting whether humanity goes extinct in the multi-multi setting is easier than predicting whether a group of AI systems will "optimally" satisfy a group's preferences.
  • Critch and Krueger coin the term "prepotent AI" to refer to an AI system that is powerful enough to transform Earth's environment at least as much as humans have and where humans cannot effectively stop or reverse these changes. Importantly, a prepotent AI need not be an artificial general intelligence.
comment by evelynciara · 2020-09-25T00:07:08.629Z · EA(p) · GW(p)

NYC is adopting ranked-choice voting for the 2021 City Council election. One challenge will be explaining the new voting system, though.

comment by evelynciara · 2020-03-23T01:49:48.430Z · EA(p) · GW(p)

Tentative thoughts on "problem stickiness"

When it comes to comparing non-longtermist problems from a longtermist perspective, I find it useful to evaluate them based on their "stickiness": the rate at which they will grow or shrink over time.

A problem's stickiness is its annual growth rate. So a problem has positive stickiness if it is growing, and negative stickiness if it is shrinking. For long-term planning, we care about a problem's expected stickiness: the annual rate at which we think it will grow or shrink. Over the long term - i.e. time frames of 50 years or more - we want to focus on problems that we expect to grow over time without our intervention, instead of problems that will go away on their own.
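As a rough sketch of how stickiness compounds over a 50-year horizon (the growth rates below are placeholders, not estimates of any real problem):

```python
# Compound growth of a problem's scale under different "stickiness" values.
# Stickiness = expected annual growth rate of the problem. Rates are placeholders.

def projected_scale(current_scale, annual_growth_rate, years=50):
    """Scale of the problem after `years` years, assuming a constant growth rate."""
    return current_scale * (1 + annual_growth_rate) ** years

problems = {
    "shrinking problem (stickiness -2%/yr)": -0.02,
    "static problem    (stickiness  0%/yr)":  0.00,
    "growing problem   (stickiness +3%/yr)":  0.03,
}

for name, rate in problems.items():
    print(f"{name}: 100 units today -> {projected_scale(100, rate):.0f} units in 50 years")
```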

For example, global poverty has negative stickiness because the poverty rate has declined over the last 200 years. I believe its stickiness will continue to be negative, barring a global catastrophe like climate change or World War III.

On the other hand, farm animal suffering has not gone away over time; in fact, it has gotten worse, as a growing number of people around the world are eating meat and dairy. This trend will continue at least until alternative proteins become competitive with animal products. Therefore, farm animal suffering has positive stickiness. (I would expect wild animal suffering to also have positive stickiness due to increased habitat destruction, but I don't know.)

The difference in stickiness between these problems motivates me to focus more on animal welfare than on global poverty, although I'm still keeping an eye on and cheering on actors in that space.

I wonder which matters more, a problem's "absolute" stickiness or its growth rate relative to the population or the size of the economy. But I care more about differences in stickiness between problems than the numbers themselves.

comment by evelynciara · 2020-01-28T18:48:13.201Z · EA(p) · GW(p)

We're probably surveilling poor and vulnerable people in developing and developed countries too much in the name of aiding them, and we should give stronger consideration to the privacy rights of aid recipients. Personal data about these people collected for benign purposes can be weaponized against them by malicious actors, and surveillance itself can deter people from accessing vital services.

"Stop Surveillance Humanitarianism" by Mark Latonero

Automating Inequality by Virginia Eubanks makes a similar argument regarding aid recipients in developed countries.

comment by Aaron Gertler (aarongertler) · 2020-01-30T13:43:13.275Z · EA(p) · GW(p)

Interesting op-ed! I wonder to what extent these issues are present in work being done by EA-endorsed global health charities; my impression is that almost all of their work happens outside of the conflict zones where some of these privacy concerns are especially potent. It also seems like these charities are very interested in reaching high levels of usage/local acceptance, and would be unlikely to adopt policies that deter recipients unless fraud concerns were very strong. But I don't know all the Top Charities well enough to be confident of their policies in this area.

This would be a question worth asking on one of GiveWell's occasional Open Threads. And if you ask it on Rob Mather's AMA [EA · GW], you'll learn how AMF thinks about these things (given Rob's response times, possibly within a day).

comment by EdoArad (edoarad) · 2020-01-29T14:38:10.533Z · EA(p) · GW(p)

Related: https://www.effectivealtruism.org/articles/ea-global-2018-the-future-of-surveillance/ [? · GW]

comment by evelynciara · 2020-01-30T16:59:16.308Z · EA(p) · GW(p)

Thank you for sharing this! I took a class on surveillance and privacy last semester, so I already have basic knowledge about this subject. I agree that it's important to reject false tradeoffs. Personally, my contribution to this area would be in formulating a theory of privacy that can be used to assess surveillance schemes in this context.

comment by EdoArad (edoarad) · 2020-01-30T18:19:41.519Z · EA(p) · GW(p)

Shafi Goldwasser at Berkeley is currently working on some definitions of privacy and their applicability to law. See this paper or this talk. In a talk she gave last month, she discussed how to formalize some aspects of the law using cryptographic concepts, in particular "the right to be forgotten". The recording is not up yet, but in the meantime I've pasted below my (dirty/partial) notes from the talk. I feel somewhat silly for not realizing the possible connection earlier, so thanks for the opportunity to discover connections hidden in plain sight!

Shafi is working directly with judges, and this whole program is looking potentially promising. If you are seriously interested in pursuing this, I can connect you to her if that would help. Also, we have someone in our research team [EA · GW] at EA Israel doing some work into this (from a more tech/crypto solution perspective) so it may be interesting to consider a collaboration here.

The notes-

"What Crypto can do for the Law?" - Shafi Goldwasser 30.12.19:

  • There is a big language barrier between law and CS, on top of a knowledge barrier.
  • People in law study the law governing algorithms, but there is not enough participation from computer scientists to help with legal work.
  • But, CS can help with designing algorithms and formalizing what these laws should be.
  • Shafi suggests a crypto definition for "the right to be forgotten". This should help with questions like:
    • Privacy regulations like the CCPA and GDPR have a problem: how do we test whether one is compliant?
    • Do our cryptographic techniques satisfy the law?
      • that requires a formal definition
        • A first suggestion:
          • after deletions, the state of the data collector and the history of the interaction with the environment should be similar to the case where the information was never changed. [this is clearly inadequate - Shafi aims at starting a conversation] (A toy illustration of this definition follows these notes.)
  • Application of cryptographic techniques
    • History Oblivious Data Structure
    • Data Summarization using Differential Privacy leaves no trace
    • ML Data Deletion
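One toy way of reading that first suggested definition (my own illustration, not anything presented in the talk): deletion "counts" if the collector's post-deletion state is indistinguishable from the state it would have had if the record had never been collected. The sketch below shows a collector that fails this test because it keeps a derived statistic:

```python
# Toy reading of the suggested "right to be forgotten" definition:
# after deletion, the data collector's state should look as if the
# record had never been collected. My own illustration, not from the talk.

class Collector:
    def __init__(self):
        self.records = {}
        self.total_seen = 0        # derived statistic that deletion does NOT undo

    def collect(self, user, data):
        self.records[user] = data
        self.total_seen += 1

    def delete(self, user):
        self.records.pop(user, None)   # naive deletion: removes the record only

    def state(self):
        return (dict(self.records), self.total_seen)

with_alice = Collector()
with_alice.collect("alice", "secret")
with_alice.collect("bob", "hello")
with_alice.delete("alice")

never_alice = Collector()
never_alice.collect("bob", "hello")

# The raw records match, but the derived statistic betrays that Alice's
# data was once collected - so this collector fails the definition.
print(with_alice.state())    # ({'bob': 'hello'}, 2)
print(never_alice.state())   # ({'bob': 'hello'}, 1)
print("passes definition:", with_alice.state() == never_alice.state())  # False
```
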
comment by EdoArad (edoarad) · 2020-02-23T10:51:33.465Z · EA(p) · GW(p)

The talk is here

comment by evelynciara · 2020-11-29T04:32:37.986Z · EA(p) · GW(p)

AOC's Among Us stream on Twitch nets $200K for coronavirus relief

"We did it! $200k raised in one livestream (on a whim!) for eviction defense, food pantries, and more. This is going to make such a difference for those who need it most right now." — AOC's Tweet

Video game streaming is a popular way to raise money for causes. We should use this strategy to fundraise for EA organizations.

comment by Aaron Gertler (aarongertler) · 2020-12-02T21:49:08.821Z · EA(p) · GW(p)

It's difficult to raise money through streaming unless you already have a popular stream. I ran a charity stream for an audience of a few hundred people for three hours and raised roughly $150, and I may be the most popular video game streamer in the community (though other people with big audiences from elsewhere could probably create bigger streams than mine without much effort).

If anyone reading this is in contact with major streamers, it might be worth reaching out, but that can easily go wrong if the streamer has a charity they already feel committed to (so be cautious).

comment by evelynciara · 2020-04-03T02:04:20.282Z · EA(p) · GW(p)

Do emergency universal pass/fail policies improve or worsen student well-being and future career prospects?

I think a natural experiment is in order. Many colleges are adopting universal pass/fail grading for this semester in response to the COVID-19 pandemic, while others aren't. Someone should study the impact this will have on students to inform future university pandemic response policy.

comment by Aaron Gertler (aarongertler) · 2020-04-06T08:13:04.000Z · EA(p) · GW(p)

When suggestions of this type come up, especially for causes that don't have existing EA research behind them, my recommended follow-up is to look for people who study this as normal academics (here, "this" would be "ways that grades and grading policy influence student outcomes"). Then, write to professors who do this work and ask if they plan on taking advantage of the opportunity (here, the natural experiment caused by new grading policies).

There's a good chance that the people you write to will have had this idea already (academics who study a subject are frequently on the lookout for opportunities of this kind, and the drastic changes wrought by COVID-19 should be increasing the frequency with which people think about related studies they could run). And if they haven't, you have the chance to inspire them!

Writing to random professors could be intimidating, but in my experience, even when I've written emails like this as a private citizen without a .edu email address, I frequently get some kind of response; people who've made research their life's work are often happy to hear from members of the public who care about the same odd things they do.

comment by evelynciara · 2020-04-06T18:35:20.812Z · EA(p) · GW(p)

Thanks for the suggestion! I imagine that most scholars are reeling from the upheavals caused by the pandemic response, so right now doesn't feel like the right time to ask professors to do anything. What do you think?

comment by Aaron Gertler (aarongertler) · 2020-04-06T21:45:38.092Z · EA(p) · GW(p)

Maybe a better question for late May or early June, when classes are over.

comment by alexrjl · 2020-04-06T21:37:53.893Z · EA(p) · GW(p)

I think that's probably true for those working directly on the pandemic, but I'm not sure education researchers would mind being bothered. If anything they might welcome the distraction.

comment by evelynciara · 2020-02-05T08:59:03.439Z · EA(p) · GW(p)

I think improving bus systems in the United States (and probably other countries) could be a plausible Cause X.

Importance: Improving bus service would:

  • Increase economic output in cities
  • Dramatically improve quality of life for low-income residents
  • Reduce cities' carbon footprint, air pollution, and traffic congestion

Neglectedness: City buses probably don't get much attention because most people don't think very highly of them, and focus much more on novel transportation technologies like electric vehicles.

Tractability: According to Steven Higashide (author of Better Buses, Better Cities), improving bus systems is largely a matter of improving how they are governed. Right now, I think a nationwide movement to improve bus transit would be less polarizing than the YIMBY movement has been. While YIMBYism has earned a reputation as elitist due to some of its early advocates' mistakes, a pro-bus movement could be seen as aligned with the interests of low-income city dwellers, provided that it gets the messaging right from the beginning.

Also, bus systems are less costly to roll out, upgrade, and alter than other public transportation options like trains.

comment by Linch · 2020-02-06T02:57:00.324Z · EA(p) · GW(p)

Interesting post! Curious what you think of Jeff Kaufman's proposal to make buses more dangerous in the first world, the idea being that buses in the US are currently too far in the "safety" direction of the safety vs. convenience tradeoff.

GiveWell also has a standout charity (Zusha!) working in the opposite direction, trying to get public service vehicles in Kenya to be safer.

comment by evelynciara · 2020-02-06T18:42:08.670Z · EA(p) · GW(p)

I like Kaufman's second, third, and fourth ideas:

  • Allow the driver to start while someone is still at the front paying. (Even where this is allowed, the driver should use judgment, because the passenger at the front might lose their balance when the bus starts, and wheelchair users might be especially vulnerable to rolling back.)
  • Allow buses to drive 25mph on the shoulder of the highway in traffic jams where the main lanes are averaging below 10mph.
  • Higher speed limits for buses - let's say 15 mph over. (I'm not so sure about this: speed limits exist in part to protect pedestrians. Buses still cause fewer pedestrian and cyclist deaths than cars, though.)

But these should be considered only after we've exhausted the space of improvements to bus service that don't sacrifice safety. For example, we should build more bus-only lanes first.

comment by Khorton · 2020-02-06T22:38:32.755Z · EA(p) · GW(p)

Wait, do buses in some places not start moving until... everyone's sitting down? Does that mean there are enough seats for everyone?

comment by Linch · 2020-02-07T01:01:06.646Z · EA(p) · GW(p)

I don't have statistics, but my best guess is that if you sample random points in time across all public buses running in America, more than 3/4 of the time fewer than half of the seats would be filled.

This is extremely unlike my experiences in Asia (in China or Singapore).

comment by evelynciara · 2021-01-05T02:57:44.384Z · EA(p) · GW(p)

A rebuttal of the paperclip maximizer argument

I was talking to someone (whom I'm leaving anonymous) about AI safety, and they said that the AI alignment problem is a joke (to put it mildly). They said that it won't actually be that hard to teach AI systems the subtleties of human norms because language models contain normative knowledge. I don't know if I endorse this claim but I found it quite convincing, so I'd like to share it here.

In the classic naive paperclip maximizer scenario, we assume there's a goal-directed AI system, and its human boss tells it to "maximize paperclips." At this point, it creates a plan to turn all of the iron atoms on Earth's surface into paperclips. The AI knows everything about the world, including the fact that blood hemoglobin and cargo ships contain iron. However, it doesn't know that it's wrong to kill people and destroy cargo ships for the purpose of obtaining iron. So it starts going around killing people and destroying cargo ships to obtain as much iron as possible for paperclip manufacturing.

I think most of us assume that the AI system, when directed to "maximize paperclips," would align itself with an objective function that says to create as many paper clips as superhumanly possible, even at the cost of destroying human lives and economic assets. However, I see two issues:

  1. It's assuming that the system would interpret the term "maximize" extremely literally, in a way that no reasonable human would interpret it. (This is the core of the paperclip argument, but I'm trying to show that it's a weakness.) Most modern natural language processing (NLP) systems are based on statistical word embeddings, which capture what words mean in the source texts, rather than their strict mathematical definitions (if they even have one). If the AI system interprets commands using a word embedding, it's going to interpret "maximize" the way humans would.

    Ben Garfinkel has proposed the "process orthogonality thesis" - the idea that, for the classic AI alignment argument to work, "the process of imbuing a system with capabilities and the process of imbuing a system with goals" would have to be orthogonal. But this point shows that the process of giving the system capabilities (in this case, knowing that iron can be obtained from various everyday objects) and the process of giving it a goal (in this case, making paperclips) may not be orthogonal. An AI system based on contemporary language models seems much more likely to learn that "maximize X" means something more like "maximize X subject to common-sense constraints Y1, Y2, ..." than to learn that human blood can be turned into iron for paperclips. (It's also possible that it'll learn neither, which means it might take "maximize" too literally but won't figure out that it can make paperclips from humans.)

  2. It's assuming that the system would make a special case for verbal commands that can be interpreted as objective functions and set out to optimize the objective function if possible. At a minimum, the AI system needs to convert each verbal command into a plan to execute it, somewhat like a query plan in relational databases. But not every plan to execute a verbal command would involve maximizing an objective function, and using objective functions in execution plans is probably dangerous for the reason that the classic paperclip argument tries to highlight, as well as overkill for most commands.
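To make that contrast concrete, here is a purely hypothetical sketch of the difference between treating a command as an unbounded objective function and treating it as a bounded execution plan; the data structures, field names, and constraints are all invented for illustration and don't reflect any real AI architecture:

```python
# Hypothetical contrast between two ways a system might handle the command
# "maximize paperclips". Nothing here reflects any real AI system's design.

from dataclasses import dataclass, field

@dataclass
class ObjectiveFunction:
    """Unbounded optimization target: 'more paperclips is always better'."""
    quantity: str = "paperclips"
    # No stopping condition and no constraints - the failure mode the
    # classic paperclip argument points at.

@dataclass
class ExecutionPlan:
    """Bounded plan: finite steps, an explicit stopping condition, constraints."""
    steps: list = field(default_factory=list)
    stop_when: str = "current production order is filled"
    constraints: list = field(default_factory=lambda: [
        "use only purchased raw materials",
        "respect safety and legal norms",
    ])

def interpret(command: str) -> ExecutionPlan:
    # A command interpreter need not emit an objective function at all;
    # it can emit a bounded plan, the way a database emits a query plan.
    return ExecutionPlan(steps=[
        "order wire from supplier",
        "run paperclip machine until the order is filled",
        "report the quantity produced",
    ])

print(ObjectiveFunction())               # the unbounded reading
print(interpret("maximize paperclips"))  # the bounded-plan reading
```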

comment by evelynciara · 2021-01-05T03:31:49.614Z · EA(p) · GW(p)

Ben gives a great example of how the "alignment problem" might look different than we expect:

The case of the house-cleaning robot

  • Problem: We don’t know how to build a simulated robot that cleans houses well
  • Available techniques aren’t suitable:
    • Simple hand-coded reward functions (e.g. dust minimization) won’t produce the desired behavior
    • We don’t have enough data (or sufficiently relevant data) for imitation learning
    • Existing reward modeling approaches are probably insufficient
  • This is sort of an “AI alignment problem,” insofar as techniques currently classified as “alignment techniques” will probably be needed to solve it. But it also seems very different from the AI alignment problem as classically conceived.

...

  • One possible interpretation: If we can’t develop “alignment” techniques soon enough, we will instead build powerful and destructive dust-minimizers
  • A more natural interpretation: We won’t have highly capable house-cleaning robots until we make progress on “alignment” techniques

I've concluded that the process orthogonality thesis is less likely to apply to real AI systems than I would have assumed (i.e. I've updated downward), and therefore, the "alignment problem" as originally conceived is less likely to affect AI systems deployed in the real world. However, I don't feel ready to reject all potential global catastrophic risks from imperfectly designed AI (e.g. multi-multi failures), because I'd rather be safe than sorry.

comment by G Gordon Worley III (gworley3) · 2021-01-05T19:35:14.514Z · EA(p) · GW(p)

I think it's worth saying that the context of "maximize paperclips" is not one where the person literally says the words "maximize paperclips" or something similar. It's instead an intuitive stand-in for building an AI capable of superhuman levels of optimization: if you set it the task of creating an unbounded number of paperclips, say via specifying a reward function, it will do things to maximize paperclips that you as a human wouldn't do, because humans have competing concerns and will stop when, say, they'd have to kill themselves or their loved ones to make more paperclips.

The objection seems predicated on the interpretation of human language, which is beside the primary point. That is, you could address all the human-language interpretation issues and we'd still have an alignment problem; it just might not look literally like building a paperclip maximizer if someone asks the AI to make a lot of paperclips.

comment by Neel Nanda · 2021-01-06T09:10:33.209Z · EA(p) · GW(p)

In the classic naive paperclip maximizer scenario, we assume there's a goal-directed AI system, and its human boss tells it to "maximize paperclips." At this point, it creates a plan to turn all of the iron atoms on Earth's surface into paperclips. The AI knows everything about the world, including the fact that blood hemoglobin and cargo ships contain iron. However, it doesn't know that it's wrong to kill people and destroy cargo ships for the purpose of obtaining iron. So it starts going around killing people and destroying cargo ships to obtain as much iron as possible for paperclip manufacturing.

I don't think this is a good representation of the classic scenario. It's not that the AI "doesn't know it's wrong" - it clearly has a good enough model of the world to predict, e.g., "if a human saw me trying to do this, they would try to stop me." The problem is coding an AI that cares about right and wrong, which is a pretty difficult technical problem. One key part of why it's hard is that the interface for giving an AI goals is not the same interface you'd use to give a human goals.

Note that this is not the same as saying that it's impossible to solve, or that it's obviously much harder than making powerful AI in the first place, just that it's a difficult technical problem and solving it is one significant step towards safe AI. I think this is what Paul Christiano calls intent alignment.

I think it's possible that this issue goes away with powerful language models, if that can give us an interface to input a goal via a similar interface to instructing a human. And I'm excited about efforts like this one [EA · GW]. But I don't think it's at all obvious that this will just happen to work out. For example, GPT-3's true goal is "generate text that is as plausible as possible, based on the text in your training data". And it has a natural language interface, and this goal correlates a bit with "do what humans want", but it is not the same thing.

It's assuming that the system would make a special case for verbal commands that can be interpreted as objective functions and set out to optimize the objective function if possible. At a minimum, the AI system needs to convert each verbal command into a plan to execute it, somewhat like a query plan in relational databases. But not every plan to execute a verbal command would involve maximizing an objective function, and using objective functions in execution plans is probably dangerous for the reason that the classic paperclip argument tries to highlight, as well as overkill for most commands.

This point feels somewhat backwards. Everything AI systems ever do is maximising an objective function, and I'm not aware of any AI safety suggestions that get around this (just ones which have creative objective functions). It's not that they convert verbal commands to an objective function; they already have an objective function, which might capture "obey verbal commands in a sensible way" or it might not. And my read on the paperclip-maximising scenario is that "tell the AI to maximise paperclips" really means "encode an objective function that tells it to maximise paperclips".

 

Personally I think the paperclip maximiser scenario is somewhat flawed, and not a good representation of AI x-risk. I like it because it illustrates the key point of specification gaming - that it's really, really hard to make an objective function that captures "do the things we want you to do". But this is also going to be pretty obvious to the people making AGI, and they probably won't have an objective function as clearly dumb as maximise paperclips. But it might not be good enough.

comment by evelynciara · 2021-01-05T03:04:09.147Z · EA(p) · GW(p)

By the way, there will be a workshop on Interactive Learning for Natural Language Processing at ACL 2021. I think it will be useful to incorporate the ideas from this area of research into our models of how AI systems that interpret natural-language feedback would work. One example of this kind of research is Blukis et al. (2019).

comment by evelynciara · 2020-06-18T22:41:17.816Z · EA(p) · GW(p)

How pressing is countering anti-science?

Intuitively, anti-science attitudes seem like a major barrier to solving many of the world's most pressing problems: for example, climate change denial has greatly derailed the American response to climate change, and distrust of public health authorities may be stymying the COVID-19 response. (For instance, a candidate running in my district for State Senate is campaigning on opposition to contact tracing as well as vaccines.) I'm particularly concerned about anti-economics attitudes because they lead to bad economic policies that don't solve the problems they're meant to solve, such as protectionism and rent control, and opposition to policies that are actually supported by evidence. Additionally, I've heard (but can't find the source for this) that economists are generally more reluctant to do public outreach in defense of their profession than scientists in other fields are.

comment by Aaron Gertler (aarongertler) · 2020-06-23T06:42:25.611Z · EA(p) · GW(p)

Epistemic status: Almost entirely opinion, I'd love to hear counterexamples

When I hear proposals related to instilling certain values widely throughout a population (or preventing the instillation of certain values), I'm always inherently skeptical. I'm not aware of many cases where something like this worked well, at least in a region as large, sophisticated, and polarized as the United States. 

You could point to civil rights campaigns, which have generally been successful over long periods of time, but those had the advantage of being run mostly by people who were personally affected (= lots of energy for activism, lots of people "inherently" supporting the movement in a deep and personal way). 

If you look at other movements that transformed some part of the U.S. (e.g. bioethics or the conservative legal movement, as seen in Open Phil's case studies of early field growth), you see narrow targeting of influential people rather than public advocacy. 

Rather than thinking about "countering anti-science" more generally, why not focus on specific policies with scientific support? Fighting generically for "science" seems less compelling than pushing for one specific scientific idea ("masks work," "housing deregulation will lower rents"), and I can think of a lot of cases where scientific ideas won the day in some democratic context.

This isn't to say that public science advocacy is pointless; you can reach a lot of people by doing that. But I don't think the people you reach are likely to "matter" much unless they actually campaign for some specific outcome (e.g. I wouldn't expect a scientist to swing many votes in a national election, but maybe they could push some funding toward an advocacy group for a beneficial policy).

****

One other note: I ran a quick search to look for polls on public trust in science, but all I found was a piece from Gallup on public trust in medical advice.

Putting that aside, I'd still guess that a large majority of Americans would claim to be "pro-science" and to "trust science," even if many of those people actually endorse minority scientific claims (e.g. "X scientists say climate change isn't a problem"). But I could be overestimating the extent to which people see "science" as a generally positive applause light [LW · GW].

comment by evelynciara · 2020-08-11T16:23:38.029Z · EA(p) · GW(p)

I think a more general, and less antagonizing, way to frame this is "increasing scientific literacy among the general public," where scientific literacy is seen as a spectrum. For example, increasing scientific literacy among climate activists might make them more likely to advocate for policies that more effectively reduce CO2 emissions.

comment by evelynciara · 2019-10-23T01:58:54.001Z · EA(p) · GW(p)

John, Katherine, Sarah, and Hank Green are making a $6.5M donation to Partners in Health to address the maternal mortality crisis in Sierra Leone, and are trying to raise $25M in total. PIH has been working with the Sierra Leone Ministry of Health to improve the quality of maternal care through facility upgrades, supplies, and training.

PIH blog post · vlogbrothers video

comment by evelynciara · 2019-10-23T02:00:58.615Z · EA(p) · GW(p)

[crossposted to r/neoliberal]

comment by evelynciara · 2020-09-04T22:49:36.925Z · EA(p) · GW(p)

Epistemic status: Although I'm vaguely aware of the evidence on gender equality and peace, I'm not an expert on international relations. I'm somewhat confident in my main claim here.

Gender equality - in societies at large, in government, and in peace negotiations - may be an existential security factor insofar as it promotes societal stability and decreases international and intra-state conflict.

According to the Council on Foreign Relations, women's participation in peacemaking and government at large improves the durability of peace agreements and social stability afterward. Gender equality also increases trust in political institutions and decreases risk of terrorism. According to a study by Krause, Krause, and Bränfors (2018), direct participation by women in peacemaking positively affects the quality and durability of peace agreements because of "linkages between women signatories and women civil society groups." In principle, including other identity groups such as ethnic, racial, and religious minorities in peace negotiations may also activate these linkages and thus lead to more durable and higher quality peace.

Some organizations that advance gender equality in peacemaking and international security:

comment by evelynciara · 2020-09-04T22:56:09.315Z · EA(p) · GW(p)

Note: I recognize that gender equality is a sensitive topic, so I welcome any feedback on how I could present this information better.

comment by Matt_Lerner (mattlerner) · 2020-09-04T23:34:21.237Z · EA(p) · GW(p)

I think the instrumental benefits of greater equality (racial, gender, economic, etc.) are hugely undersold, particularly by those of us who like to imagine that we're somehow "above" traditional social justice concerns (including myself in this group, reluctantly and somewhat shamefully).

In this case, I think your thought is spot on and deserves a lot more exploration. I immediately thought of the claim (e.g. 1, 2) that teams with more women make better collective decisions. I haven't inspected this evidence in detail, but on an anecdotal level I am ready to believe it.

comment by evelynciara · 2021-01-08T17:49:47.924Z · EA(p) · GW(p)

Worldview diversification for longtermism

I think it would be helpful to get more non-utilitarian perspectives on longtermism (or ones that don't primarily emphasize utilitarianism).

Some questions that would be valuable to address:

  • What non-utilitarian worldviews support longtermism?
  • Under a given longtermist non-utilitarian worldview, what are the top-priority problem areas, and what should actors do to address them?

Some reasons I think this would be valuable:

  1. We're working under a lot of moral uncertainty, so the more ethical perspectives, the better.
  2. Even if we fully buy into one worldview, it would be valuable to incorporate insights from other worldviews' perspectives on the problems we are addressing.
  3. Doing this would attract more people with worldviews different from the predominant utilitarian one.

What non-utilitarian worldviews support longtermism?

Liberalism: There are strong theoretical and empirical reasons why liberal democracy may be valuable for the long-term future; see this post and its comments [EA · GW]. I think that certain variants of liberalism are highly compatible with longtermism, especially those focusing on:

  • Inclusive institutions and democracy
  • Civil and political rights (e.g. freedom, equality, and civic participation)
  • International security and cooperation
  • Moral circle expansion

Environmental and climate justice: Climate justice deals with climate change's impact on the most vulnerable members of society, and it prescribes how societies ought to respond to climate change in ways that protect their most vulnerable members. We can learn a lot from it about how to respond to other global catastrophic risks.

comment by jackmalde · 2021-01-08T19:06:08.087Z · EA(p) · GW(p)

Also just realised that the new legal priorities research agenda touches on this with some academic citations on pages 14 and 15.

comment by jackmalde · 2021-01-08T18:52:18.398Z · EA(p) · GW(p)

Toby Ord has spoken about non-consequentialist arguments for existential risk reduction, which I think also work for longtermism more generally. For example, Ctrl+F for "What are the non-consequentialist arguments for caring about existential risk reduction?" in this link [? · GW]. I suspect relevant content is also in his book The Precipice.

Some selected quotes from the first link:

  • "my main approach, the guiding light for me, is really thinking about the opportunity cost, so it's thinking about everything that we could achieve, and this great and glorious future that is open to us and that we could do"
  • "there are also these other foundations, which I think also point to similar things. One of them is a deontological one, where Edmund Burke, one of the founders of political conservatism, had this idea of the partnership of the generations. What he was talking about there was that we've had ultimately a hundred billion people who've lived before us, and they've built this world for us. And each generation has made improvements, innovations of various forms, technological and institutional, and they've handed down this world to their children. It's through that that we have achieved greatness ... is our generation going to be the one that breaks this chain and that drops the baton and destroys everything that all of these others have built? It's an interesting kind of backwards-looking idea there, of debts that we owe and a kind of relationship we're in. One of the reasons that so much was passed down to us was an expectation of continuation of this. I think that's, to me, quite another moving way of thinking about this, which doesn't appeal to thoughts about the opportunity cost that would be lost in the future."
  • "And another one that I think is quite interesting is a virtue approach ... When you look at humanity's current situation, it does not look like how a wise entity would be making decisions about its future. It looks incredibly juvenile and immature and like it needs to grow up. And so I think that's another kind of moral foundation that one could come to these same conclusions through."
comment by evelynciara · 2021-01-08T23:03:00.764Z · EA(p) · GW(p)

Thanks for sharing these! I had Toby Ord's arguments from The Precipice in mind too.

comment by evelynciara · 2020-07-23T03:51:16.073Z · EA(p) · GW(p)

Epistemic status: Tentative thoughts.

I think that medical AI could be a nice way to get into the AI field for a few reasons:

  • You'd be developing technology that improves global health by a lot. For example, according to the WHO, "The use of X-rays and other physical waves such as ultrasound can resolve between 70% and 80% of diagnostic problems, but nearly two-thirds of the world's population has no access to diagnostic imaging."[1] Computer vision can make radiology more accessible to billions of people around the world, as this project is trying to do.
  • It's also a promising starting point for careers in AI safety and applying AI/ML to other pressing causes.

AI for animal health may be even more important and neglected.


  1. World Radiography Day: Two-Thirds of the World's Population has no Access to Diagnostic Imaging ↩︎

comment by evelynciara · 2020-07-20T22:35:40.824Z · EA(p) · GW(p)

Stuart Russell: Being human and navigating interpersonal relationships will be humans' comparative advantage when artificial general intelligence is realized, since humans will be better at simulating other humans' minds than AIs will. (Human Compatible, chapter 4)

Also Stuart Russell: Automated tutoring!! (Human Compatible, chapter 3)

comment by evelynciara · 2020-12-29T20:32:39.047Z · EA(p) · GW(p)

I've been reading Adam Gopnik's book A Thousand Small Sanities: The Moral Adventure of Liberalism, which is about the meaning and history of liberalism as a political movement. I think many of the ideas that Gopnik discusses are relevant to the EA movement as well:

  • Moral circle expansion: To Gopnik, liberalism is primarily about calling for "the necessity and possibility of (imperfectly) egalitarian social reform and ever greater (if not absolute) tolerance of human difference" (p. 23). This means expanding the moral circle to include, at the least, all human beings. However, inclusion in the moral circle is a spectrum, not a binary: although liberal societies have made tremendous progress in treating women, POC, workers, and LGBTQ+ people fairly, there's still a lot of room for improvement. And these societies are only beginning to improve their treatment of immigrants, the global poor, and non-human animals.
  • Societal evolution and the "Long Reflection": "Liberalism's task is not to imagine the perfect society and drive us toward it but to point out what's cruel in the society we have now and fix it if we possibly can" (p. 31). I think that EA's goals for social change are mostly aligned with this approach: we identify problems and ways to solve them, but we usually don't offer a utopian vision of the future. However, the idea of the "Long Reflection," a process of deliberation that humanity would undertake before taking any irreversible steps that would alter its trajectory of development, seems to depart from this vision of social change. The Long Reflection involves figuring out what is ultimately of value to humanity or, failing that, coming close enough to agreement that we won't regret any irreversible steps we take. This seems hard and very different from the usual way people do politics, and I think it's worth figuring out exactly how we would do this and what would be required if we think we will have to take such steps in the future.
comment by Aaron Gertler (aarongertler) · 2020-12-30T11:56:21.833Z · EA(p) · GW(p)

Would you recommend the book itself to people interested in movement-building and/or "EA history"? Is there a good review/summary that you think would cover the important points in less time?

comment by evelynciara · 2020-12-30T16:39:38.414Z · EA(p) · GW(p)

Yeah, I would recommend it to anyone interested in movement building, history, or political philosophy from an EA perspective. I'm interested in reconciling longtermism and liberalism.

These paragraphs from the Guardian review summarize the main points of the book:

Given the prevailing gloom, Gopnik’s definition of liberalism is cautious and it depends on two words whose awkwardness, odd in such an elegant writer, betrays their doubtful appeal. One is “fallibilism”, the other is “imperfectability”: we are a shoddy species, unworthy of utopia. I’d have thought that this was reason for conservatively upholding the old order, but for Gopnik it’s our recidivism that makes liberal reform so necessary. We must always try to do better, cleaning up our messes. The sanity in the book’s title extends to sanitation: Gopnik whimsically honours the sewerage system of Victorian London as a shining if smelly triumph of liberal policy.

Liberalism here is less a philosophy or an ideology than a temperament and a way of living. Gopnik regards sympathy with others, not the building of walls and policing of borders, as the basis of community. “Love is love,” he avers, and “kindness is everything”. Both claims, he insists, are “true. Entirely true”, if only because the Beatles say so. But are they truths or blithe truisms? Such soothing mantras would not have disarmed the neo-Nazi thugs who marched through Charlottesville, Virginia, in 2017 or the white supremacist who murdered Jo Cox. Gopnik calls Trump “half-witted” and says Nigel Farage is a “transparent nothing”, but snubs do not diminish the menace of these dreadful men.

comment by evelynciara · 2020-07-16T16:46:08.660Z · EA(p) · GW(p)

Epistemic status: Raw thoughts that I've just started to think about. I'm highly uncertain about a lot of this.

Some works that have inspired my thinking recently:

Reading/listening to these works has caused me to reevaluate the risks posed by advanced artificial intelligence. While AI risk is currently the top cause in x-risk reduction, I don't think this is necessarily warranted. I think the CAIS model is a more plausible description of how AI is likely to evolve in the near future, but I haven't read enough to assess whether it makes AI more or less of a risk (to humanity, civilization, liberal democracy, etc.) than it would be under the classic "Superintelligence" model.

I'm strongly interested in improving diversity in EA, and I think this is an interesting case study about how one could do that. Right now, it seems like there is a core/middle/periphery of the EA community where the core includes people and orgs in countries like the US, UK, and Australia, and I think the EA movement would be stronger if we actively tried to bring more people in more countries into the core.

I'm also interested in how we could use qualitative methods like those employed in user experience research (UXR) to solve problems in EA causes. I'm familiar enough with design thinking (the application of design methods to practical problems) that I could do some of this given enough time and training.

comment by evelynciara · 2020-02-01T04:58:04.925Z · EA(p) · GW(p)

Joan Gass (2019) [EA · GW] recommends four areas of international development to focus on:

  • New modalities to foster economic productivity
  • New modalities or ways to develop state capabilities
  • Global catastrophic risks, particularly pandemic preparedness
  • Meta EA research on cause prioritization within global development

Improving state capabilities, or governments' ability to render public services, seems especially promising for public-interest technologists interested in development (ICT4D). For example, the Zenysis platform helps developing-world governments make data-driven decisions, especially in healthcare. Biorisk management also looks promising from a tech standpoint.

comment by evelynciara · 2020-05-28T17:48:26.611Z · EA(p) · GW(p)

I think there should be an EA Fund analog for criminal justice reform. This could especially attract non-EA dollars.

comment by evelynciara · 2020-02-11T21:55:16.337Z · EA(p) · GW(p)

A social constructivist perspective on long-term AI policy

I think the case for addressing the long-term consequences of AI systems holds even if AGI is unlikely to arise.

The future of AI development will be shaped by social, economic and political factors, and I'm not convinced that AGI will be desirable in the future or that AI is necessarily progressing toward AGI. However, (1) AI already has large positive and negative effects on society, and (2) I think it's very likely that society's AI capabilities will improve over time, amplifying these effects and creating new benefits and risks in the future.

comment by evelynciara · 2019-10-14T08:03:32.023Z · EA(p) · GW(p)

A series of polls by the Chicago Council on Global Affairs shows that Americans increasingly support free trade and believe that free trade is good for the U.S. economy (87%, up from 59% in 2016). This is probably a reaction to the negative effects and press coverage of President Trump's trade wars - anecdotally, I have seen a lot of progressives who would otherwise not care about or support free trade criticize policies such as Trump's steel tariffs as reckless.

I believe this presents a unique window of opportunity to educate the American public about the benefits of globalization. Kimberly Clausing is doing this in her book, Open: The Progressive Case for Free Trade, Immigration, and Global Capital, in which she defends free trade and immigration to the U.S. from the standpoint of American workers.

comment by evelynciara · 2021-01-06T18:49:00.957Z · EA(p) · GW(p)

An EA Meta reading list:

comment by evelynciara · 2020-09-19T17:44:12.319Z · EA(p) · GW(p)

Social constructivism and AI

I have a social constructivist view of technology - that is, I strongly believe that technology is a part of society, not an external force that acts on it. Ultimately, a technology's effects on a society depend on the interactions between that technology and other values, institutions, and technologies within that society. So for example, although genetic engineering may enable human gene editing, the specific ways in which humans use gene editing would depend on cultural attitudes and institutions regarding the technology.

How this worldview applies to AI: Artificial intelligence systems have embedded values because they are inherently goal-directed, and the goals we put into them may match with one or more human values.[1] Also, because they are autonomous, AI systems have more agency than most technologies. But AI systems are still a product of society, and their effects depend on their own values and capabilities as well as economic, social, environmental, and legal conditions in society.

Because of this constructivist view, I'm moderately optimistic about AI despite some high-stakes risks. Most technologies are net-positive for humanity; this isn't surprising, because technologies are chosen for their ability to meet human needs. But no technology can solve all of humanity's problems.

I've previously expressed [EA · GW] skepticism [EA(p) · GW(p)] about AI completely automating human labor. I think it's very likely that current trends in automation will continue, at least until AGI is developed. But I'm skeptical that all humans will always have a comparative advantage, let alone a comparative advantage in labor. Thus, I see a few ways that widespread automation could go wrong:

  • AI stops short of automating everything, but instead of augmenting human productivity, displaces workers into low-productivity jobs - or worse, economic roles other than labor. This scenario would create massive income inequality between those who own AI-powered firms and those who don't.
  • AI takes over most tasks essential to governing society, causing humans to be alienated from the process of running their own society (human enfeeblement). Society drifts off course from where humans want it to go.

I think economics will determine which human tasks are automated and which are still performed by humans.


  1. The embedded values thesis is sometimes considered a form of "soft determinism" since it posits that technologies have their own effects on society based on their embedded values. However, I think it's compatible with social constructivism because a technology's embedded values are imparted to it by people. ↩︎

comment by evelynciara · 2020-08-26T17:06:33.844Z · EA(p) · GW(p)

Latex markdown test:

When, in the course of human events, it becomes necessary for people to dissolve the political bands that tie it with another

comment by Aaron Gertler (aarongertler) · 2020-09-01T15:22:05.737Z · EA(p) · GW(p)

Did you mean to leave this published after finishing the test? (Not a problem if so; just wanted to check.)

comment by Habryka · 2020-09-01T20:10:03.198Z · EA(p) · GW(p)

In an ironic turn of events, you leaving this comment has made it so that the comment can no longer be unpublished (since users can only delete their comments if they have no replies). 

comment by Aaron Gertler (aarongertler) · 2020-09-01T23:31:36.208Z · EA(p) · GW(p)

However, if evelynciara had replied "yes," I'd have removed the thread in their stead ;-)

comment by evelynciara · 2020-09-01T23:20:41.451Z · EA(p) · GW(p)

Yes, I did. But I think it would be more valuable if we had a better Markdown editor or a syntax key.

comment by Aaron Gertler (aarongertler) · 2020-09-01T23:33:25.920Z · EA(p) · GW(p)

Noted. And thanks for having added your suggestions [EA(p) · GW(p)] on the suggestion thread already.

comment by evelynciara · 2020-08-03T22:27:47.717Z · EA(p) · GW(p)

Table test - Markdown

| Column A | Column B | Column C |
| -------- | -------- | -------- |
| Cell A1  | Cell B1  | Cell C1  |
| Cell A2  | Cell B2  | Cell C2  |
| Cell A3  | Cell B3  | Cell C3  |
comment by Habryka · 2020-08-04T04:25:23.818Z · EA(p) · GW(p)

Seems to work surprisingly well!

comment by evelynciara · 2020-08-02T21:58:09.592Z · EA(p) · GW(p)

If you're looking at where to direct funding for U.S. criminal justice reform:

List of U.S. states and territories by incarceration and correctional supervision rate

On this page, you can sort states (and U.S. territories) by total prison/jail population, incarceration rate per 100,000 adults, or incarceration rate per 100,000 people of all ages - all statistics as of year-end 2016.

As of 2016, the 10 states with the highest incarceration rates per 100,000 people were:

  1. Oklahoma (990 prisoners/100k)
  2. Louisiana (970)
  3. Mississippi (960)
  4. Georgia (880)
  5. Alabama (840)
  6. Arkansas (800)
  7. Arizona (790)
  8. Texas (780)
  9. Kentucky (780)
  10. Missouri (730)

National and state-level bail funds for pretrial and immigration detention

comment by evelynciara · 2020-03-22T15:45:59.875Z · EA(p) · GW(p)

I'm playing Universal Paperclips right now, and I just had an insight about AI safety: Just programming the AI to maximize profits instead of paperclips wouldn't solve the control problem.

You'd think that the AI can't destroy the humans because it needs human customers to make money, but that's not true. Instead, the AI could sell all of its paperclips to another AI that continually melts them down and turns them back into wire, and they would repeatedly sell paperclips and wire back and forth to each other, both powered by free sunlight. Bonus points if the AIs take over the central bank.
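A toy simulation of that loop (all prices and quantities invented): each round both AIs book more revenue, yet nothing of real value is ever delivered to humans:

```python
# Toy simulation of two profit-maximizing AIs inflating revenue by selling
# paperclips and wire back and forth. All numbers are invented.

paperclip_price = 1.00
wire_price = 0.99          # slight spread so each trade books a "profit"

revenue_a = 0.0            # AI A: turns wire into paperclips
revenue_b = 0.0            # AI B: melts paperclips back into wire
units = 1_000_000

for round_ in range(5):
    revenue_a += units * paperclip_price   # A sells paperclips to B
    revenue_b += units * wire_price        # B sells the recovered wire back to A
    print(f"round {round_ + 1}: A revenue = {revenue_a:,.0f}, "
          f"B revenue = {revenue_b:,.0f}, real goods delivered to humans = 0")
```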

comment by evelynciara · 2020-03-20T06:32:53.192Z · EA(p) · GW(p)

Can someone please email me a copy of this article?

I'm planning to update the Wikipedia article on Social discount rate, but I need to know what the article says.