Posts

Planning my birthday fundraiser for October 2020 2020-09-12T19:26:03.888Z · score: 19 (12 votes)
Is existential risk more pressing than other ways to improve the long-term future? 2020-08-20T03:50:31.125Z · score: 23 (9 votes)
What opportunities are there to use data science in global priorities research? 2020-08-18T02:48:23.143Z · score: 15 (9 votes)
Are some SDGs more important than others? Revealed country priorities from four years of VNRs 2020-08-16T06:56:19.326Z · score: 10 (8 votes)
How strong is the evidence of unaligned AI systems causing harm? 2020-07-21T04:08:07.719Z · score: 31 (15 votes)
What norms about tagging should the EA Forum have? 2020-07-14T04:19:54.841Z · score: 11 (3 votes)
Does generality pay? GPT-3 can provide preliminary evidence. 2020-07-12T18:53:09.454Z · score: 19 (12 votes)
Which countries are most receptive to more immigration? 2020-07-06T21:46:03.732Z · score: 17 (7 votes)
Will AGI cause mass technological unemployment? 2020-06-22T20:55:00.447Z · score: 3 (1 votes)
Idea for a YouTube show about effective altruism 2020-04-24T05:00:00.853Z · score: 18 (9 votes)
How do you talk about AI safety? 2020-04-19T16:15:59.288Z · score: 10 (8 votes)
International Affairs reading lists 2020-04-08T06:11:41.620Z · score: 14 (8 votes)
How effective are financial incentives for reaching D&I goals? Should EA orgs emulate this practice? 2020-03-24T18:27:16.554Z · score: 6 (3 votes)
What are some software development needs in EA causes? 2020-03-06T05:25:50.461Z · score: 10 (8 votes)
My Charitable Giving Report 2019 2020-02-27T16:35:42.678Z · score: 24 (16 votes)
Shoot Your Shot 2020-02-18T06:39:22.964Z · score: 7 (4 votes)
Does the President Matter as Much as You Think? | Freakonomics Radio 2020-02-10T20:47:27.365Z · score: 5 (5 votes)
Prioritizing among the Sustainable Development Goals 2020-02-07T05:05:44.274Z · score: 8 (7 votes)
Open New York is Fundraising! 2020-01-16T21:45:20.506Z · score: -4 (2 votes)
What are the most pressing issues in short-term AI policy? 2020-01-14T22:05:10.537Z · score: 9 (6 votes)
Has pledging 10% made meeting other financial goals substantially more difficult? 2020-01-09T06:15:13.589Z · score: 15 (11 votes)
evelynciara's Shortform 2019-10-14T08:03:32.019Z · score: 1 (1 votes)

Comments

Comment by evelynciara on Thomas Kwa's Shortform · 2020-09-24T04:30:56.244Z · score: 4 (3 votes) · EA · GW

I agree - I'm especially worried that focusing too much on longtermism will make us seem more out of touch with the rest of humanity than other schools of EA thought do. I would support conducting a public opinion poll to learn about people's moral beliefs, particularly how important and practical they believe a focus on the long-term future to be. I hypothesize that people who support ideas such as sustainability will be more sympathetic to longtermism.

Comment by evelynciara on How have you become more (or less) engaged with EA in the last year? · 2020-09-23T03:56:31.616Z · score: 3 (2 votes) · EA · GW

I think I started out on r/EffectiveAltruism and by checking out effective altruism websites. Then someone wrote a post on the subreddit encouraging people to post on the EA Forum because that's where the action is. So now I'm mostly involved in the Forum, but also in some Facebook groups (although I try not to use FB often) and Discord.

Comment by evelynciara on evelynciara's Shortform · 2020-09-19T17:44:12.319Z · score: 1 (1 votes) · EA · GW

Social constructivism and AI

I have a social constructivist view of technology - that is, I strongly believe that technology is a part of society, not an external force that acts on it. Ultimately, a technology's effects on a society depend on the interactions between that technology and the values, institutions, and other technologies within that society. For example, although genetic engineering may enable human gene editing, the specific ways in which humans use gene editing would depend on cultural attitudes and institutions regarding the technology.

How this worldview applies to AI: Artificial intelligence systems have embedded values because they are inherently goal-directed, and the goals we put into them may align with one or more human values.[1] Also, because they are autonomous, AI systems have more agency than most technologies. But AI systems are still a product of society, and their effects depend on their own values and capabilities as well as on the economic, social, environmental, and legal conditions in society.

Because of this constructivist view, I'm moderately optimistic about AI despite some high-stakes risks. Most technologies are net-positive for humanity; this isn't surprising, because technologies are chosen for their ability to meet human needs. But no technology can solve all of humanity's problems.

I've previously expressed skepticism about AI completely automating human labor. I think it's very likely that current trends in automation will continue, at least until AGI is developed. But I'm skeptical that all humans will always have a comparative advantage, let alone a comparative advantage in labor. Thus, I see a few ways that widespread automation could go wrong:

  • AI stops short of automating everything, but instead of augmenting human productivity, displaces workers into low-productivity jobs - or worse, economic roles other than labor. This scenario would create massive income inequality between those who own AI-powered firms and those who don't.
  • AI takes over most tasks essential to governing society, causing humans to be alienated from the process of running their own society (human enfeeblement). Society drifts off course from where humans want it to go.

I think economic factors will determine which human tasks are automated and which are still performed by humans.


  1. The embedded values thesis is sometimes considered a form of "soft determinism" since it posits that technologies have their own effects on society based on their embedded values. However, I think it's compatible with social constructivism because a technology's embedded values are imparted to it by people. ↩︎

Comment by evelynciara on evelynciara's Shortform · 2020-09-17T05:48:02.522Z · score: 7 (2 votes) · EA · GW

I just listened to Andrew Critch's interview about "AI Research Considerations for Human Existential Safety" (ARCHES). I took some notes on the podcast episode, which I'll share here. I won't attempt to summarize the entire episode; instead, please see this summary of the ARCHES paper in the Alignment Newsletter.

  • We need to explicitly distinguish between "AI existential safety" and "AI safety" writ large. Saying "AI safety" without qualification is confusing for both people who focus on near-term AI safety problems and those who focus on AI existential safety problems; it creates a bait-and-switch for both groups.
  • Although existential risk can refer to any event that permanently and drastically reduces humanity's potential for future development (paraphrasing Bostrom 2013), ARCHES only deals with the risk of human extinction, because extinction is easier to reason about and because it's not clear which non-extinction outcomes would count as existential events.
  • ARCHES frames AI alignment in terms of delegation from m ≥ 1 human stakeholders (such as individuals or organizations) to n ≥ 1 AI systems. Most alignment literature to date focuses on the single-single setting (one principal, one agent), but such settings in the real world are likely to evolve into multi-principal, multi-agent settings (the four delegation regimes are sketched after this list). Computer scientists interested in AI existential safety should pay more attention to the multi-multi setting relative to the single-single one for the following reasons:
    • There are commercial incentives to develop AI systems that are aligned with respect to the single-single setting, but not to make sure they won't break down in the multi-multi setting. A group of AI systems that are "aligned" with respect to single-single may still precipitate human extinction if the systems are not designed to interact well.
    • Single-single delegation solutions feed into AI capabilities, so focusing only on single-single delegation may increase existential risk.
    • What alignment means in the multi-multi setting is more ambiguous because the presence of multiple stakeholders engenders heterogeneous preferences. However, predicting whether humanity goes extinct in the multi-multi setting is easier than predicting whether a group of AI systems will "optimally" satisfy a group's preferences.
  • Critch and Krueger coin the term "prepotent AI" to refer to an AI system that is powerful enough to transform Earth's environment at least as much as humans have and where humans cannot effectively stop or reverse these changes. Importantly, a prepotent AI need not be an artificial general intelligence.
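
For reference, here is the taxonomy of delegation regimes as I understand ARCHES to use it, indexed by the number of human stakeholders m and AI systems n (my paraphrase of the paper's framing, not a quote):

$$
\begin{array}{c|cc}
 & n = 1 & n > 1 \\
\hline
m = 1 & \text{single-single} & \text{single-multi} \\
m > 1 & \text{multi-single} & \text{multi-multi}
\end{array}
$$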

Comment by evelynciara on Some thoughts on EA outreach to high schoolers · 2020-09-16T05:20:14.844Z · score: 5 (3 votes) · EA · GW

Also, I earlier had the idea for a YouTube channel similar to existing educational channels. The more zany, TikTok-style video content could complement it.

Comment by evelynciara on Some thoughts on EA outreach to high schoolers · 2020-09-15T16:18:11.121Z · score: 3 (2 votes) · EA · GW

This reminds me of the Planet Money TikTok!

Comment by evelynciara on Foreign Affairs Piece on Land Use Reform · 2020-09-15T05:55:32.364Z · score: 3 (2 votes) · EA · GW

Thank you for sharing this! I'm sympathetic to the YIMBY movement and appreciate your piece's comparative perspective.

Comment by evelynciara on How have you become more (or less) engaged with EA in the last year? · 2020-09-14T20:24:08.472Z · score: 10 (6 votes) · EA · GW

Yeah, I agree that the tension exists. Cause prioritization is one of the core ideas of EA, so it's important for us to emphasize it - but delicately, so that we don't alienate others. Personally, I would use I-statements, such as "I care about <issue 1> too, but I've chosen to focus on <issue 2> instead because it's much more neglected," instead of you-statements that might put the listener on the defensive.

Comment by evelynciara on How have you become more (or less) engaged with EA in the last year? · 2020-09-13T23:00:17.673Z · score: 3 (2 votes) · EA · GW

The main thing that originally drove me away from the movement was people being dismissive toward causes that the EA movement doesn't focus on. At the time, I believed that conventional causes like climate change and international human rights advocacy (e.g. Amnesty International) were worth working on, and I wanted to know more about how they stack up against EA's focus areas. I heard comments like the following (paraphrased):

  • In response to my suggestion that an EA student group partner with advocacy orgs at the university: "We could, but a lot of them are probably not effective."
  • In response to my complaint that EA doesn't focus enough on climate change: "You have to prioritize among the global catastrophic risks. Climate change is the least of them all." (I think they meant to say "least neglected", but just saying "least" made it sound like they were saying climate change isn't important.)

Comment by evelynciara on How have you become more (or less) engaged with EA in the last year? · 2020-09-12T21:55:31.449Z · score: 20 (9 votes) · EA · GW

I've gotten more involved in EA since last summer. Some EA-related things I've done over the last year:

  • Attended the virtual EA Global (I didn't register, just watched it live on YouTube)
  • Read The Precipice
  • Participated in two EA mentorship programs
  • Joined Covid Watch, an organization developing an app to slow the spread of COVID-19. I'm especially involved in setting up a subteam trying to reduce global catastrophic biological risks.
  • Started posting on the EA Forum
  • Ran a birthday fundraiser for the Against Malaria Foundation. This year, I'm running another one for the Nuclear Threat Initiative.

Although I first heard of EA toward the end of high school (slightly over 4 years ago) and liked it, I had some negative interactions with the EA community early on that pushed me away from it. I spent the next 3 years exploring various social issues outside the EA community, but I had internalized EA's core principles, so I was constantly thinking about how much good I could be doing and which causes were the most important. I eventually became overwhelmed because "doing good" had become a big part of my identity but I cared about too many different issues. A friend recommended that I check out EA again, and despite some trepidation owing to my past experiences, I did. As I got involved in the EA community again, I had an overwhelmingly positive experience: the EAs I was interacting with were kind and open-minded, and they encouraged me to get involved, whereas before, I had encountered people who seemed more abrasive.

Now I'm worried about getting burned out. I check the EA Forum way too often for my own good, and I've been thinking obsessively about cause prioritization and longtermism. I talk about my current uncertainties in this post.

Comment by evelynciara on evelynciara's Shortform · 2020-09-04T22:56:09.315Z · score: 4 (3 votes) · EA · GW

Note: I recognize that gender equality is a sensitive topic, so I welcome any feedback on how I could present this information better.

Comment by evelynciara on evelynciara's Shortform · 2020-09-04T22:49:36.925Z · score: 5 (4 votes) · EA · GW

Epistemic status: Although I'm vaguely aware of the evidence on gender equality and peace, I'm not an expert on international relations. I'm somewhat confident in my main claim here.

Gender equality - in societies at large, in government, and in peace negotiations - may be an existential security factor insofar as it promotes societal stability and decreases international and intra-state conflict.

According to the Council on Foreign Relations, women's participation in peacemaking and government at large improves the durability of peace agreements and social stability afterward. Gender equality also increases trust in political institutions and decreases the risk of terrorism. According to a study by Krause, Krause, and Bränfors (2018), direct participation by women in peacemaking positively affects the quality and durability of peace agreements because of "linkages between women signatories and women civil society groups." In principle, including other identity groups, such as ethnic, racial, and religious minorities, in peace negotiations may also activate these linkages and thus lead to more durable, higher-quality peace.

Some organizations that advance gender equality in peacemaking and international security:

Comment by evelynciara on evelynciara's Shortform · 2020-09-01T23:20:41.451Z · score: 1 (1 votes) · EA · GW

Yes, I did. But I think it would be more valuable if we had a better Markdown editor or a syntax key.

Comment by evelynciara on evelynciara's Shortform · 2020-08-28T19:59:37.591Z · score: 6 (3 votes) · EA · GW

New Economist special report on dementia - As humanity ages the numbers of people with dementia will surge: The world is ill-prepared for the frightening human, economic and social implications

Comment by evelynciara on Is existential risk more pressing than other ways to improve the long-term future? · 2020-08-26T21:52:05.102Z · score: 1 (1 votes) · EA · GW

Pedro Oliboni wrote a paper that addresses one aspect of my question, the tradeoff between existential risk reduction and economic growth: On The Relative Long-Term Future Importance of Investments in Economic Growth and Global Catastrophic Risk Reduction.

Comment by evelynciara on evelynciara's Shortform · 2020-08-26T17:06:33.844Z · score: 1 (1 votes) · EA · GW

Latex markdown test:

When, in the course of human events, it becomes necessary for people to dissolve the political bands that tie them with another

Comment by evelynciara on The case of the missing cause prioritisation research · 2020-08-24T20:23:09.312Z · score: 1 (1 votes) · EA · GW

I wonder if we could create an open source library of integrated assessment models (IAMs) for researchers and EAs to use and audit.

Comment by evelynciara on Twenty Year Economic Impacts of Deworming · 2020-08-23T16:20:54.050Z · score: 2 (2 votes) · EA · GW

Thanks for sharing this!

From the FP newsletter:

Schools are canceled throughout much of the world due to coronavirus, and that means public health interventions that typically happen at schools aren’t happening at all. The new study from Kenya is just our latest reminder that that is an enormous loss, and the children affected may still be disadvantaged from it 20 years later.

Right now may be a good time to fund a mass deworming effort that doesn't depend on schools.

Comment by evelynciara on evelynciara's Shortform · 2020-08-21T03:49:15.175Z · score: 2 (3 votes) · EA · GW

Trump's dismantling of the U.S. Postal Service really concerns me.

Comment by evelynciara on Propose and vote on potential tags · 2020-08-19T01:43:53.464Z · score: 3 (2 votes) · EA · GW

How about a tag for global governance and/or providing global public goods? This is arguably one of the most pressing problems there is, because many of the problems EA works on are global coordination problems, including existential risk (since existential security is a global public good).

Comment by evelynciara on Are some SDGs more important than others? Revealed country priorities from four years of VNRs · 2020-08-16T20:44:57.596Z · score: 1 (1 votes) · EA · GW

Thanks!

Comment by evelynciara on What career advice gaps are you trying to fill? · 2020-08-16T17:27:21.245Z · score: 1 (1 votes) · EA · GW

The first link is broken - it looks like you meant to link to https://bit.ly/LCANproposal

Comment by evelynciara on The case of the missing cause prioritisation research · 2020-08-16T16:54:36.287Z · score: 11 (4 votes) · EA · GW

I agree wholeheartedly with this! Strong upvote from me.

I agree that cause prioritization research in EA focuses almost entirely on utilitarian and longtermist views. There's substantial diversity of ethical theories within this space, but I bet that most of the world's population are not longtermist utilitarians. I'd like to see more research trying to apply cause prioritization to non-utilitarian worldviews such as ones that emphasize distributive justice.

One thing I notice is that, with few exceptions, the path to change for EA folk who want to improve the long-run future is research. They work at research institutions, design AI systems, fund research, support research. Those that do not do research seem to be trying to accumulate power or wealth or CV points in the vague hope that at some point the researchers will know what needs doing.

Fully agree, but I think it's ironic (in a good way) that your proposed solution is "more global priorities research." When I see some of 80K's more recent advice, I think, "Dude, I already sank 4 years of college into studying CS and training to be a software engineer and now you expect me to shift into research or public policy jobs?" Now I know they don't expect everyone to follow their priority paths, and I'm strongly thinking about shifting into AI safety or data science anyway. But I often feel discouraged because my skill set doesn't match what the community thinks it needs most.

I think there needs to be much better research into how to make complex decisions despite high uncertainty. There is a whole field of decision making under deep uncertainty (or Knightian uncertainty) used in policy design, military decision making and climate science but rarely discussed in EA.

I wouldn't know how to assess this claim, but this is a very good point. I'm glad you're writing a paper about this.

Finally, I love the style of humor you use in this post.

Comment by evelynciara on The Case for Education · 2020-08-16T15:47:54.990Z · score: 4 (2 votes) · EA · GW

I think this take is interesting; I've given it an upvote.

I don't agree that the education system is as thoroughly broken as you say. For what it's worth, I've enjoyed my time studying CS at Cornell, and I'm starting my graduate studies there soon. Your mileage may vary - some schools do education well, others not so much. But I do think the institutions you're proposing can complement the existing education system by filling gaps in it.

Comment by evelynciara on EA Forum feature suggestion thread · 2020-08-16T04:08:59.810Z · score: 5 (3 votes) · EA · GW

I would love to have more features for the Markdown editor, since I prefer it over the WYSIWYG editor. For example, I'd like to be able to upload images while editing in Markdown (like GitHub does). Also, a syntax cheatsheet would be wonderful.

Ideally, I'd like to be able to switch between the Markdown and WYSIWYG editors while editing a document, or have a rendered preview tab in the Markdown editor.
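
To illustrate, here is a minimal sketch of the kind of cheatsheet entries I have in mind (standard Markdown syntax; whether the Forum's editor supports every one of these is an assumption on my part):

```markdown
*emphasis* and **strong emphasis**
[link text](https://example.org)
![alt text](https://example.org/image.png)
A sentence with a footnote marker.[^1]

[^1]: The footnote body.
```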

Comment by evelynciara on evelynciara's Shortform · 2020-08-11T16:23:38.029Z · score: 1 (1 votes) · EA · GW

I think a more general, and less antagonizing, way to frame this is "increasing scientific literacy among the general public," where scientific literacy is seen as a spectrum. For example, increasing scientific literacy among climate activists might make them more likely to advocate for policies that more effectively reduce CO2 emissions.

Comment by evelynciara on What posts do you want someone to write? · 2020-08-06T06:03:06.467Z · score: 3 (2 votes) · EA · GW

I'd appreciate a forum post or workshop about how to interpret empirical evidence. Jennifer Doleac gives a lot of good pointers in the recent 80,000 Hours podcast, but I think the EA and public policy communities would benefit from a more thorough treatment.

Comment by evelynciara on evelynciara's Shortform · 2020-08-03T22:27:47.717Z · score: 1 (1 votes) · EA · GW

Table test - Markdown

| Column A | Column B | Column C |
| -------- | -------- | -------- |
| Cell A1  | Cell B1  | Cell C1  |
| Cell A2  | Cell B2  | Cell C2  |
| Cell A3  | Cell B3  | Cell C3  |

Comment by evelynciara on EA Forum update: New editor! (And more) · 2020-08-03T16:12:24.026Z · score: 1 (1 votes) · EA · GW

How do you create them in Markdown?

Comment by evelynciara on evelynciara's Shortform · 2020-08-02T21:58:09.592Z · score: 1 (1 votes) · EA · GW

If you're looking at where to direct funding for U.S. criminal justice reform:

List of U.S. states and territories by incarceration and correctional supervision rate

On this page, you can sort states (and U.S. territories) by total prison/jail population, incarceration rate per 100,000 adults, or incarceration rate per 100,000 people of all ages - all statistics as of year-end 2016.

As of 2016, the 10 states with the highest incarceration rates per 100,000 people were:

  1. Oklahoma (990 prisoners/100k)
  2. Louisiana (970)
  3. Mississippi (960)
  4. Georgia (880)
  5. Alabama (840)
  6. Arkansas (800)
  7. Arizona (790)
  8. Texas (780)
  9. Kentucky (780)
  10. Missouri (730)

National and state-level bail funds for pretrial and immigration detention

Comment by evelynciara on Lukas_Gloor's Shortform · 2020-07-30T09:15:18.797Z · score: 3 (2 votes) · EA · GW

I agree that pleasure is not intrinsically good (i.e. I also deny the strong claim). I think it's likely that experiencing the full spectrum of human emotions (happiness, sadness, anger, etc.) and facing challenges are good for personal growth and therefore improve well-being in the long run. However, I think that suffering is inherently bad, though I'm not sure what distinguishes suffering from displeasure.

Comment by evelynciara on evelynciara's Shortform · 2020-07-23T03:51:16.073Z · score: 4 (3 votes) · EA · GW

Epistemic status: Tentative thoughts.

I think that medical AI could be a nice way to get into the AI field for a few reasons:

  • You'd be developing technology that improves global health by a lot. For example, according to the WHO, "The use of X-rays and other physical waves such as ultrasound can resolve between 70% and 80% of diagnostic problems, but nearly two-thirds of the world's population has no access to diagnostic imaging."[1] Computer vision can make radiology more accessible to billions of people around the world, as this project is trying to do.
  • It's also a promising starting point for careers in AI safety and applying AI/ML to other pressing causes.

AI for animal health may be even more important and neglected.


  1. World Radiography Day: Two-Thirds of the World's Population has no Access to Diagnostic Imaging ↩︎

Comment by evelynciara on evelynciara's Shortform · 2020-07-20T22:35:40.824Z · score: 4 (3 votes) · EA · GW

Stuart Russell: Being human and navigating interpersonal relationships will be humans' comparative advantage when artificial general intelligence is realized, since humans will be better at simulating other humans' minds than AIs will. (Human Compatible, chapter 4)

Also Stuart Russell: Automated tutoring!! (Human Compatible, chapter 3)

Comment by evelynciara on evelynciara's Shortform · 2020-07-16T16:46:08.660Z · score: 3 (2 votes) · EA · GW

Epistemic status: Raw thoughts that I've just started to think about. I'm highly uncertain about a lot of this.

Some works that have inspired my thinking recently:

Reading/listening to these works has caused me to reevaluate the risks posed by advanced artificial intelligence. While AI risk is currently the top cause in x-risk reduction, I don't think this is necessarily warranted. I think the CAIS model is a more plausible description of how AI is likely to evolve in the near future, but I haven't read enough to assess whether it makes AI more or less of a risk (to humanity, civilization, liberal democracy, etc.) than it would be under the classic "Superintelligence" model.

I'm strongly interested in improving diversity in EA, and I think this is an interesting case study about how one could do that. Right now, it seems like there is a core/middle/periphery of the EA community where the core includes people and orgs in countries like the US, UK, and Australia, and I think the EA movement would be stronger if we actively tried to bring more people in more countries into the core.

I'm also interested in how we could use qualitative methods like those employed in user experience research (UXR) to solve problems in EA causes. I'm familiar enough with design thinking (the application of design methods to practical problems) that I could do some of this given enough time and training.

Comment by evelynciara on Design-Jobs & (Science)Communication for EA? · 2020-07-15T17:44:37.049Z · score: 3 (2 votes) · EA · GW

Jah Ying Chung did a UX research study about how to improve communication and understanding between Western and Asian EA communities. So there's precedent for your second idea, but nothing like a fully fledged organization yet.

Comment by evelynciara on EA Forum feature suggestion thread · 2020-07-15T06:01:05.967Z · score: 3 (2 votes) · EA · GW

Post and comment previews in search results!

Comment by evelynciara on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-14T01:21:58.710Z · score: 21 (12 votes) · EA · GW

What do you think is the probability of AI causing an existential catastrophe in the next century?

Comment by evelynciara on evelynciara's Shortform · 2020-07-10T05:30:34.961Z · score: 13 (7 votes) · EA · GW

I think we need to be careful when we talk about AI and automation not to commit the lump of labor fallacy. When we say that a certain fraction of economically valuable work will be automated at any given time, or that this fraction will increase, we shouldn't implicitly assume that the total amount of work being done in the economy is constant. Historically, automation has increased the size of the economy, thereby creating more work to be done, whether by humans or by machines; we should expect the same to happen in the future. (Note that this doesn't exclude the possibility of increasingly general AI systems performing almost all economically valuable work. This could very well happen even as the total amount of work available skyrockets.)
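
As a toy illustration (the numbers here are hypothetical): let $W(t)$ be the total amount of work in the economy, $a(t)$ the share of it that is automated, and $H(t)$ the work performed by humans:

$$ H(t) = \bigl(1 - a(t)\bigr)\,W(t) $$

If $W$ grows from 100 to 200 while $a$ rises from 0.2 to 0.5, then $H$ grows from 80 to 100 - human work increases even as automation's share more than doubles.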

Comment by evelynciara on Which countries are most receptive to more immigration? · 2020-07-07T20:19:14.736Z · score: 3 (2 votes) · EA · GW

I think these concerns are valid. The website Open Borders: The Case addresses many of the main arguments against open borders, including the possibility of nativist backlash to increased immigration.

"Nativist backlash" refers to the hypothesis that a country opening its borders to all immigration would cause a significant portion of current residents to subsequently turn against immigration. The problem with this claim is that the probability of backlash depends on how a country adopts open borders in the first place. Nathan Smith writes:

The trouble with “nativist backlash” as a standalone topic, is that a nativist backlash against open borders seems to presuppose that open borders is somehow established first. But for open borders to be established, something major would have to change in the policymaking process and/or public opinion. And whatever that change was, would presumably affect the likelihood and nature of any nativist backlash.

If open borders were established based on false advertising that it wasn’t really radical and wouldn’t make that much difference, then there would doubtless be a nativist backlash. Likewise if it were established by some sort of presidential and judicial fiat without popular buy-in. But if open borders came about because large majorities were persuaded that people have a natural right to migrate and it’s unjust to imprison them in the country of their birth, then people might be willing to accept the drastic consequences of their moral epiphanies.

So any claim that “open borders will inevitably provoke a nativist backlash” just seems ill formulated. One first needs a scenario by which open borders is established. Then one could assess the probability and likely character of a nativist backlash, but it would be different for every open borders scenario.

Comment by evelynciara on Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention? · 2020-07-04T18:28:52.026Z · score: 1 (1 votes) · EA · GW

Background: I am an information science student who has taken a class on the societal aspects of surveillance.

My gut feeling is that advocating for or implementing "mass surveillance" targeted at preventing individuals from using weapons of mass destruction (WMDs) would be counterproductive.

First, were a mass surveillance system aimed at controlling WMDs to be set up, governments would lobby for it to be used for other purposes as well, such as monitoring for conventional terrorism. Pretty soon it wouldn't be minimally invasive anymore; it would just be a general-purpose mass surveillance system.

Second, a surveillance system of the scope that Bostrom has proposed ("ubiquitous real-time worldwide surveillance") would itself be an existential risk to liberal democracy. The problem is that a ubiquitous surveillance system would create the feeling that surveillees are constantly being watched. Even if it had strong technical and institutional privacy guarantees and those guarantees were communicated to the public, people would likely not be able to trust it; rumors of abuse would only make establishing trust harder. People modify their behavior when they know they are being watched or could be watched at any time, so they would be less willing to engage in behaviors that are stigmatized by society even if the Panopticon were not explicitly looking out for those behaviors. This feeling of constantly being watched would stifle risk-taking, individuality, creativity, and freedom of expression, all of which are essential to sustain human progress.

I think that a much more limited suite of targeted surveillance systems, combined with other mechanisms for arms control, would be a lot more promising while still being effective at controlling WMDs. Such limited surveillance systems are already used in gun control: for example, the U.S. federal government requires dealers to keep records of gun sales for at least 20 years, and many U.S. states and other countries keep records of who is licensed to own a gun. Some states also require gun owners to report lost or stolen guns in order to fight gun trafficking. These surveillance measures can be designed to balance gun owners' privacy interests with the public's interest in reducing gun violence. We could regulate synthetic biology a lot like we do gun control: for example, companies that create synthetic-biology products or sell desktop DNA sequencers could be required to maintain records of transactions.

However, I don't expect this targeted approach to work as well for cyber weapons. Because computers are general-purpose, cyber weapons can theoretically be developed and executed on any computer, and trying to prevent the use of cyber weapons by surveilling everyone who owns a computer would be extremely inefficient (since the vast majority of people who use computers are not creating cyber weapons) and impractical (because power users could easily uninstall any spyware planted on their machines). Also, because computers are ubiquitous and often store a lot of sensitive personal information, this form of surveillance would be extremely unpopular as well as invasive. Strengthening cyber defense seems like a more promising way to prevent harm from cyber attacks.

Comment by evelynciara on EA Updates for June 2020 · 2020-07-03T21:02:16.057Z · score: 7 (2 votes) · EA · GW

Thanks for making this post! I think it would be helpful if you linked directly to the playlist for EAGxVirtual 2020 instead of the channel.

Comment by evelynciara on evelynciara's Shortform · 2020-06-18T22:41:17.816Z · score: 6 (6 votes) · EA · GW

How pressing is countering anti-science?

Intuitively, anti-science attitudes seem like a major barrier to solving many of the world's most pressing problems: for example, climate change denial has greatly derailed the American response to climate change, and distrust of public health authorities may be stymying the COVID-19 response. (For instance, a candidate running in my district for State Senate is campaigning on opposition to contact tracing as well as vaccines.) I'm particularly concerned about anti-economics attitudes because they lead to bad economic policies - such as protectionism and rent control - that don't solve the problems they're meant to solve, and to opposition to policies that are actually supported by evidence. Additionally, I've heard (but can't find the source for this) that economists are generally more reluctant to do public outreach in defense of their profession than scientists in other fields are.

Comment by evelynciara on Forum update: Tags are live! Go use them! · 2020-06-01T21:34:48.166Z · score: 8 (5 votes) · EA · GW

Can you please add the tag directory to the sidebar?

Comment by evelynciara on evelynciara's Shortform · 2020-05-28T17:48:26.611Z · score: 2 (2 votes) · EA · GW

I think there should be an EA Fund analog for criminal justice reform. This could especially attract non-EA dollars.

Comment by evelynciara on Any good organizations fighting racism? · 2020-05-28T03:13:20.744Z · score: 3 (3 votes) · EA · GW

My understanding is that the criminal justice system plays a central role in institutional racism in the United States. For example, it is a significant contributor to the racial unemployment gap:

Mass incarceration plays a significant role in the lower labor force participation rate for African American men. African Americans are more likely to be incarcerated following an arrest than are white Americans, and formerly incarcerated individuals of all races experience difficulties in gaining employment. In spite of years of widespread agreement among researchers that incarceration is a profound factor in employment outcomes, employment statistics still do not gather data on incarceration, erasing a key structural factor. (Ajilore 2020)

Thus, criminal justice reform seems like an effective, targeted way to break the cycle.

Comment by evelynciara on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T07:56:37.645Z · score: 6 (4 votes) · EA · GW

If you think that embryos and fetuses have moral value, then abortion becomes a very important issue in terms of scale. However, it's not very neglected, and the evidence suggests that increased access to contraceptives, not restricted access to abortion services, is driving the decline in abortion rates in the U.S.

Designing medical technology to reduce miscarriages (which are spontaneous abortions) may be an especially important, neglected, and tractable way to prevent embryos/fetuses and parents from suffering. (10-50% of pregnancies end in miscarriages.)

Comment by evelynciara on How has biosecurity/pandemic preparedness philanthropy helped with coronavirus, and how might it help with similar future situations? · 2020-04-29T07:36:58.359Z · score: 2 (2 votes) · EA · GW

Unrolled for convenience

I have Twitter blocked using StayFocusd (which gives me an hour per day to view blocked websites), so reading it on a separate website allows me to take my time with it.

Comment by evelynciara on Is it a good idea to write EA blog posts for skill building and learning more about EA? · 2020-04-28T16:35:19.228Z · score: 6 (4 votes) · EA · GW

Yeah, I think that's a good idea. Most people's early creative work will not be their best work, so don't have high expectations at the beginning. I would focus on learning and having fun while you write.

Comment by evelynciara on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-27T08:15:37.392Z · score: 1 (1 votes) · EA · GW

I second this. I imagine that updating the AI problem profile must be a top priority for 80K because AI safety is a popular topic in the EA community, and it's important to have a central source for the community's current understanding of the problem.

Comment by evelynciara on Idea for a YouTube show about effective altruism · 2020-04-25T19:03:20.878Z · score: 2 (2 votes) · EA · GW

Or Complexly, though they seem to have a lot on their plate.

It shouldn't be hard to create good-quality video on a low budget, though.