## Posts

What is the marginal impact of a small donation to an EA Fund? 2020-11-23T07:09:02.934Z
Which terms should we use for "developing countries"? 2020-11-16T00:42:58.385Z
Is Technology Actually Making Things Better? – Pairagraph 2020-10-01T16:06:23.237Z
Planning my birthday fundraiser for October 2020 2020-09-12T19:26:03.888Z
Is existential risk more pressing than other ways to improve the long-term future? 2020-08-20T03:50:31.125Z
What opportunities are there to use data science in global priorities research? 2020-08-18T02:48:23.143Z
Are some SDGs more important than others? Revealed country priorities from four years of VNRs 2020-08-16T06:56:19.326Z
How strong is the evidence of unaligned AI systems causing harm? 2020-07-21T04:08:07.719Z
What norms about tagging should the EA Forum have? 2020-07-14T04:19:54.841Z
Does generality pay? GPT-3 can provide preliminary evidence. 2020-07-12T18:53:09.454Z
Which countries are most receptive to more immigration? 2020-07-06T21:46:03.732Z
Will AGI cause mass technological unemployment? 2020-06-22T20:55:00.447Z
How do you talk about AI safety? 2020-04-19T16:15:59.288Z
What are some software development needs in EA causes? 2020-03-06T05:25:50.461Z
My Charitable Giving Report 2019 2020-02-27T16:35:42.678Z
Does the President Matter as Much as You Think? | Freakonomics Radio 2020-02-10T20:47:27.365Z
Prioritizing among the Sustainable Development Goals 2020-02-07T05:05:44.274Z
Open New York is Fundraising! 2020-01-16T21:45:20.506Z
What are the most pressing issues in short-term AI policy? 2020-01-14T22:05:10.537Z
Has pledging 10% made meeting other financial goals substantially more difficult? 2020-01-09T06:15:13.589Z
evelynciara's Shortform 2019-10-14T08:03:32.019Z

Comment by evelynciara on evelynciara's Shortform · 2020-11-29T04:32:37.986Z · EA · GW

AOC's Among Us stream on Twitch nets $200K for coronavirus relief: "We did it! $200k raised in one livestream (on a whim!) for eviction defense, food pantries, and more. This is going to make such a difference for those who need it most right now." — AOC's tweet

Video game streaming is a popular way to raise money for causes. We should use this strategy to fundraise for EA organizations.

Comment by evelynciara on Where are you donating in 2020 and why? · 2020-11-26T21:45:10.574Z · EA · GW

This year, I ran a birthday fundraiser for the Nuclear Threat Initiative. I also continued donating $5/month to the ACLU. I'm still a student and I didn't have a job this summer, so I don't have much money to donate.

Comment by evelynciara on What music do you find most inspires you to use your resources (effectively) to help others? · 2020-11-22T18:52:46.011Z · EA · GW

Thank you for sharing this playlist! I added some songs from the Linkin Park album A Thousand Suns, which deals with x-risk.

Comment by evelynciara on Open and Welcome Thread: November 2020 · 2020-11-16T04:22:33.260Z · EA · GW

You've got another customer.

Comment by evelynciara on Progress Open Thread: October // Student Summit 2020 · 2020-11-01T15:50:18.730Z · EA · GW

Thank you!

Comment by evelynciara on Progress Open Thread: October // Student Summit 2020 · 2020-10-30T21:54:08.292Z · EA · GW

I won a prize at the Cornell Sustainability Hackathon! My team worked on a business idea for a waste heat recovery device for trucks to improve their fuel efficiency. We won the "Most Transitional Hack" award for positioning our product as a stepping stone to fleet electrification.

Comment by evelynciara on We're Lincoln Quirk & Ben Kuhn from Wave, AMA! · 2020-10-30T21:21:05.639Z · EA · GW

I've been looking into doing entrepreneurship in the future, and I know that mental health is a big issue for startup founders. So I have some questions related to entrepreneurship and mental health:

• Do you think it's possible to practice good work-life balance while working on a startup? If so, what might that look like?
• How many hours per week did you spend when you first started Wave? How many hours per week do you spend on it now?
• How do you take care of yourselves while running a startup?

Comment by evelynciara on How much does a vote matter? · 2020-10-30T04:50:57.214Z · EA · GW

Great post! I also think voting in state and local politics can make a greater difference if you care about local issues like zoning and occupational licensing (as I do). For example, in the 2021 NYC City Council election, several candidates are running on a pro-housing platform and several are running against more housing development; the more pro-housing candidates who win office, the more likely it is that more housing units will be built. A New Yorker's vote in the presidential election may not matter much because New York is a solid blue state, but their votes in local elections count for more.

Comment by evelynciara on When you shouldn't use EA jargon and how to avoid it · 2020-10-28T01:53:12.370Z · EA · GW

Great presentation! I wish you luck.

By the way, I get tired of saying "impact" since it's cited as a buzzword that people should avoid, especially in business. Do you recommend any synonyms for "impact" (as either a noun or a verb)?

Comment by evelynciara on When you shouldn't use EA jargon and how to avoid it · 2020-10-28T01:49:46.642Z · EA · GW

"displacing (the impact of)"?

Comment by evelynciara on Which countries are most receptive to more immigration? · 2020-10-25T04:33:08.864Z · EA · GW

According to a 2019 Pew Research Center survey of people in 12 countries, majorities of the public in Sweden (88%), the United Kingdom (85%), Canada (84%), Germany (81%), Australia (79%), and the United States (78%) support high-skilled immigration. Even among people who want fewer immigrants, support for high-skilled immigration in these countries is high.[1]

Additionally, people in many developed countries support admitting refugees even if they're not as keen on immigration in general.[2]

1. "Majority of U.S. Public Supports High-Skilled Immigration." Pew Research Center, Washington, D.C. (2019). ↩︎

2. "People around the world express more support for taking in refugees than immigrants." Pew Research Center, Washington, D.C. (2019). ↩︎

Comment by evelynciara on Is Technology Actually Making Things Better? – Pairagraph · 2020-10-12T15:28:54.618Z · EA · GW

Part 4 of this debate - the finale by Jason Crawford - has been published.

Comment by evelynciara on Hiring engineers and researchers to help align GPT-3 · 2020-10-07T22:20:53.158Z · EA · GW

Hi Paul, I messaged you privately.

Comment by evelynciara on Election scenarios · 2020-09-26T16:28:40.172Z · EA · GW

(Status: unsure) Preserving democracy in the United States is more valuable insofar as the world perceives the U.S. as the "leader" or "guarantor" of the liberal world order, particularly global democracy. But I don't think this outweighs the importance of democracy in the rest of the world, especially large democracies like India.

I think EAs' comparative advantage in promoting democracy in our own countries is the more important factor here.

Comment by evelynciara on Election scenarios · 2020-09-25T22:25:34.086Z · EA · GW

I agree. Just as the EA movement has been pushing against the bias towards philanthropy in rich countries, so we should also try to resist the urge to pay attention only to political crises in rich countries like the United States.

Comment by evelynciara on evelynciara's Shortform · 2020-09-25T00:07:08.629Z · EA · GW

NYC is adopting ranked-choice voting for the 2021 City Council election. One challenge will be explaining the new voting system, though.
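To illustrate what would need explaining, here's a minimal sketch of instant-runoff counting, the tallying rule behind NYC's ranked-choice ballots. (The function name and ballot format are my own illustration, not any official implementation.)

```python
def instant_runoff(ballots):
    """Tally ranked-choice ballots by instant runoff.

    Each ballot is a list of candidate names in order of preference.
    """
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tally = {c: 0 for c in candidates}
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tally[choice] += 1
                    break
        total = sum(tally.values())
        leader = max(tally, key=tally.get)
        # A majority of active ballots wins; otherwise eliminate the
        # last-place candidate and redistribute their ballots.
        if tally[leader] * 2 > total or len(candidates) == 1:
            return leader
        candidates.remove(min(tally, key=tally.get))
```

For example, with ballots `[["A","B"], ["A","B"], ["B","C"], ["C","B"], ["C","B"]]`, no candidate has a first-round majority, so B (fewest first choices) is eliminated and B's ballot transfers to C, who then wins 3–2. Explaining that transfer step to voters is exactly the challenge.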

Comment by evelynciara on Thomas Kwa's Shortform · 2020-09-24T04:30:56.244Z · EA · GW

I agree - I'm especially worried that focusing too much on longtermism will make us seem out of touch with the rest of humanity, relative to other schools of EA thought. I would support conducting a public opinion poll to learn about people's moral beliefs, particularly how important and practical they believe focusing on the long-term future would be. I hypothesize that people who support ideas such as sustainability will be more sympathetic to longtermism.

Comment by evelynciara on How have you become more (or less) engaged with EA in the last year? · 2020-09-23T03:56:31.616Z · EA · GW

I think I started out with r/EffectiveAltruism and checking out effective altruism websites. Then, someone wrote a post on the subreddit encouraging people to post on the EA Forum because that's where the action is. So now I'm mostly involved in the forum, but also some Facebook groups (although I try not to use FB often) and Discord.

Comment by evelynciara on evelynciara's Shortform · 2020-09-19T17:44:12.319Z · EA · GW

# Social constructivism and AI

I have a social constructivist view of technology - that is, I strongly believe that technology is a part of society, not an external force that acts on it. Ultimately, a technology's effects on a society depend on the interactions between that technology and other values, institutions, and technologies within that society. So for example, although genetic engineering may enable human gene editing, the specific ways in which humans use gene editing would depend on cultural attitudes and institutions regarding the technology.

How this worldview applies to AI: Artificial intelligence systems have embedded values because they are inherently goal-directed, and the goals we put into them may align with one or more human values.[1] Also, because they are autonomous, AI systems have more agency than most technologies. But AI systems are still a product of society, and their effects depend on their own values and capabilities as well as economic, social, environmental, and legal conditions in society.

Because of this constructivist view, I'm moderately optimistic about AI despite some high-stakes risks. Most technologies are net-positive for humanity; this isn't surprising, because technologies are chosen for their ability to meet human needs. But no technology can solve all of humanity's problems.

I've previously expressed skepticism about AI completely automating human labor. I think it's very likely that current trends in automation will continue, at least until AGI is developed. But I'm skeptical that all humans will always have a comparative advantage, let alone a comparative advantage in labor. Thus, I see a few ways that widespread automation could go wrong:

• AI stops short of automating everything, but instead of augmenting human productivity, displaces workers into low-productivity jobs - or worse, economic roles other than labor. This scenario would create massive income inequality between those who own AI-powered firms and those who don't.
• AI takes over most tasks essential to governing society, causing humans to be alienated from the process of running their own society (human enfeeblement). Society drifts off course from where humans want it to go.

I think economics will determine which human tasks are automated and which are still performed by humans.

1. The embedded values thesis is sometimes considered a form of "soft determinism" since it posits that technologies have their own effects on society based on their embedded values. However, I think it's compatible with social constructivism because a technology's embedded values are imparted to it by people. ↩︎

Comment by evelynciara on evelynciara's Shortform · 2020-09-17T05:48:02.522Z · EA · GW

I just listened to Andrew Critch's interview about "AI Research Considerations for Human Existential Safety" (ARCHES). I took some notes on the podcast episode, which I'll share here. I won't attempt to summarize the entire episode; instead, please see this summary of the ARCHES paper in the Alignment Newsletter.

• We need to explicitly distinguish between "AI existential safety" and "AI safety" writ large. Saying "AI safety" without qualification is confusing for both people who focus on near-term AI safety problems and those who focus on AI existential safety problems; it creates a bait-and-switch for both groups.
• Although existential risk can refer to any event that permanently and drastically reduces humanity's potential for future development (paraphrasing Bostrom 2013), ARCHES only deals with the risk of human extinction because it's easier to reason about and because it's not clear what other non-extinction outcomes are existential events.
• ARCHES frames AI alignment in terms of delegation from m ≥ 1 human stakeholders (such as individuals or organizations) to n ≥ 1 AI systems. Most alignment literature to date focuses on the single-single setting (one principal, one agent), but such settings in the real world are likely to evolve into multi-principal, multi-agent settings. Computer scientists interested in AI existential safety should pay more attention to the multi-multi setting relative to the single-single one for the following reasons:
• There are commercial incentives to develop AI systems that are aligned with respect to the single-single setting, but not to make sure they won't break down in the multi-multi setting. A group of AI systems that are "aligned" with respect to single-single may still precipitate human extinction if the systems are not designed to interact well.
• Single-single delegation solutions feed into AI capabilities, so focusing only on single-single delegation may increase existential risk.
• What alignment means in the multi-multi setting is more ambiguous because the presence of multiple stakeholders engenders heterogeneous preferences. However, predicting whether humanity goes extinct in the multi-multi setting is easier than predicting whether a group of AI systems will "optimally" satisfy a group's preferences.
• Critch and Krueger coin the term "prepotent AI" to refer to an AI system that is powerful enough to transform Earth's environment at least as much as humans have and where humans cannot effectively stop or reverse these changes. Importantly, a prepotent AI need not be an artificial general intelligence.

Comment by evelynciara on Some thoughts on EA outreach to high schoolers · 2020-09-16T05:20:14.844Z · EA · GW

Also, I'd earlier had the idea for a YouTube channel in the style of existing educational channels. The zanier, TikTok-style video content could complement it.

Comment by evelynciara on Some thoughts on EA outreach to high schoolers · 2020-09-15T16:18:11.121Z · EA · GW

This reminds me of the Planet Money TikTok!

Comment by evelynciara on Foreign Affairs Piece on Land Use Reform · 2020-09-15T05:55:32.364Z · EA · GW

Thank you for sharing this! I'm sympathetic to the YIMBY movement and appreciate your piece's comparative perspective.

Comment by evelynciara on How have you become more (or less) engaged with EA in the last year? · 2020-09-14T20:24:08.472Z · EA · GW

Yeah. I agree that the tension exists. Cause prioritization is one of the core ideas of EA, so it's important for us to emphasize that, but delicately so that we don't alienate others. Personally, I would use I-statements, such as "I care about <issue 1> too, but I've chosen to focus on <issue 2> instead because it's much more neglected," instead of you-statements that might put the listener on the defensive.

Comment by evelynciara on How have you become more (or less) engaged with EA in the last year? · 2020-09-13T23:00:17.673Z · EA · GW

The main thing that originally drove me away from the movement was people being dismissive toward causes that the EA movement doesn't focus on. At the time, I believed that conventional causes like climate change and international human rights advocacy (e.g. Amnesty International) are worth working on, and I wanted to know more about how they stack up against EA's focus areas. I heard comments like (paraphrased below):

• In response to my suggestion that an EA student group partner with advocacy orgs at the university: "We could, but a lot of them are probably not effective."
• In response to my complaint that EA doesn't focus enough on climate change: "You have to prioritize among the global catastrophic risks. Climate change is the least of them all." (I think they meant to say "least neglected", but just saying "least" made it sound like they were saying climate change isn't important.)

Comment by evelynciara on How have you become more (or less) engaged with EA in the last year? · 2020-09-12T21:55:31.449Z · EA · GW

I've gotten more involved in EA since last summer. Some EA-related things I've done over the last year:

• Attended the virtual EA Global (I didn't register, just watched it live on YouTube)
• Participated in two EA mentorship programs
• Joined Covid Watch, an organization developing an app to slow the spread of COVID-19. I'm especially involved in setting up a subteam trying to reduce global catastrophic biological risks.
• Started posting on the EA Forum
• Ran a birthday fundraiser for the Against Malaria Foundation. This year, I'm running another one for the Nuclear Threat Initiative.

Although I first heard of EA toward the end of high school (slightly over 4 years ago) and liked it, I had some negative interactions with the EA community early on that pushed me away from it. I spent the next 3 years exploring various social issues outside the EA community, but I had internalized EA's core principles, so I was constantly thinking about how much good I could be doing and which causes were the most important. I eventually became overwhelmed because "doing good" had become a big part of my identity but I cared about too many different issues. A friend recommended that I check out EA again, and despite some trepidation owing to my past experiences, I did. As I got involved in the EA community again, I had an overwhelmingly positive experience. The EAs I was interacting with were kind and open-minded, and they encouraged me to get involved, whereas before, I had encountered people who seemed more abrasive.

Now I'm worried about getting burned out. I check the EA Forum way too often for my own good, and I've been thinking obsessively about cause prioritization and longtermism. I talk about my current uncertainties in this post.

Comment by evelynciara on evelynciara's Shortform · 2020-09-04T22:56:09.315Z · EA · GW

Note: I recognize that gender equality is a sensitive topic, so I welcome any feedback on how I could present this information better.

Comment by evelynciara on evelynciara's Shortform · 2020-09-04T22:49:36.925Z · EA · GW

Epistemic status: Although I'm vaguely aware of the evidence on gender equality and peace, I'm not an expert on international relations. I'm somewhat confident in my main claim here.

Gender equality - in societies at large, in government, and in peace negotiations - may be an existential security factor insofar as it promotes societal stability and decreases international and intra-state conflict.

According to the Council on Foreign Relations, women's participation in peacemaking and government at large improves the durability of peace agreements and social stability afterward. Gender equality also increases trust in political institutions and decreases risk of terrorism. According to a study by Krause, Krause, and Bränfors (2018), direct participation by women in peacemaking positively affects the quality and durability of peace agreements because of "linkages between women signatories and women civil society groups." In principle, including other identity groups such as ethnic, racial, and religious minorities in peace negotiations may also activate these linkages and thus lead to more durable and higher quality peace.

Some organizations that advance gender equality in peacemaking and international security:

Comment by evelynciara on evelynciara's Shortform · 2020-09-01T23:20:41.451Z · EA · GW

Yes, I did. But I think it would be more valuable if we had a better Markdown editor or a syntax key.

Comment by evelynciara on evelynciara's Shortform · 2020-08-28T19:59:37.591Z · EA · GW

New Economist special report on dementia - As humanity ages the numbers of people with dementia will surge: The world is ill-prepared for the frightening human, economic and social implications

Comment by evelynciara on Is existential risk more pressing than other ways to improve the long-term future? · 2020-08-26T21:52:05.102Z · EA · GW

Pedro Oliboni wrote a paper that addresses one aspect of my question, the tradeoff between existential risk reduction and economic growth: On The Relative Long-Term Future Importance of Investments in Economic Growth and Global Catastrophic Risk Reduction.

Comment by evelynciara on evelynciara's Shortform · 2020-08-26T17:06:33.844Z · EA · GW

LaTeX Markdown test:

When, in the course of human events, it becomes necessary for people to dissolve the political bands that tie them with another

Comment by evelynciara on The case of the missing cause prioritisation research · 2020-08-24T20:23:09.312Z · EA · GW

I wonder if we could create an open source library of integrated assessment models (IAMs) for researchers and EAs to use and audit.

Comment by evelynciara on Twenty Year Economic Impacts of Deworming · 2020-08-23T16:20:54.050Z · EA · GW

Thanks for sharing this!

> Schools are canceled throughout much of the world due to coronavirus, and that means public health interventions that typically happen at schools aren’t happening at all. The new study from Kenya is just our latest reminder that that is an enormous loss, and the children affected may still be disadvantaged from it 20 years later.

Right now may be a good time to fund a mass deworming effort that doesn't depend on schools.

Comment by evelynciara on evelynciara's Shortform · 2020-08-21T03:49:15.175Z · EA · GW

Trump's dismantling of the U.S. Postal Service really concerns me.

Comment by evelynciara on Propose and vote on potential tags · 2020-08-19T01:43:53.464Z · EA · GW

How about a tag for global governance and/or providing global public goods? This is arguably one of the most pressing problems there is, because many of the problems EA works on are global coordination problems, including existential risk (since existential security is a global public good).

Comment by evelynciara on Are some SDGs more important than others? Revealed country priorities from four years of VNRs · 2020-08-16T20:44:57.596Z · EA · GW

Thanks!

Comment by evelynciara on What career advice gaps are you trying to fill? · 2020-08-16T17:27:21.245Z · EA · GW

The first link is broken - it looks like you meant to link to https://bit.ly/LCANproposal

Comment by evelynciara on The case of the missing cause prioritisation research · 2020-08-16T16:54:36.287Z · EA · GW

I agree wholeheartedly with this! Strong upvote from me.

I agree that cause prioritization research in EA focuses almost entirely on utilitarian and longtermist views. There's substantial diversity of ethical theories within this space, but I bet that most of the world's population are not longtermist utilitarians. I'd like to see more research trying to apply cause prioritization to non-utilitarian worldviews such as ones that emphasize distributive justice.

> One thing I notice is that, with few exceptions, the path to change for EA folk who want to improve the long-run future is research. They work at research institutions, design AI systems, fund research, support research. Those that do not do research seem to be trying to accumulate power or wealth or CV points in the vague hope that at some point the researchers will know what needs doing.

Fully agree, but I think it's ironic (in a good way) that your proposed solution is "more global priorities research." When I see some of 80K's more recent advice, I think, "Dude, I already sank 4 years of college into studying CS and training to be a software engineer and now you expect me to shift into research or public policy jobs?" Now I know they don't expect everyone to follow their priority paths, and I'm strongly thinking about shifting into AI safety or data science anyway. But I often feel discouraged because my skill set doesn't match what the community thinks it needs most.

> I think there needs to be much better research into how to make complex decisions despite high uncertainty. There is a whole field of decision making under deep uncertainty (or knightian uncertainty) used in policy design, military decision making and climate science but rarely discussed in EA.

I wouldn't know how to assess this claim, but this is a very good point. I'm glad you're writing a paper about this.

Finally, I love the style of humor you use in this post.

Comment by evelynciara on The Case for Education · 2020-08-16T15:47:54.990Z · EA · GW

I think this take is interesting; I've given it an upvote.

I don't agree that the education system is as thoroughly broken as you say. For what it's worth, I've enjoyed my time studying CS at Cornell, and I'm starting my graduate studies there soon. Your mileage may vary - some schools do education well, others not so much. But I do think the institutions you're proposing can complement the existing education system by filling gaps in it.

Comment by evelynciara on EA Forum feature suggestion thread · 2020-08-16T04:08:59.810Z · EA · GW

I would love to have more features for the Markdown editor, since I prefer it over the WYSIWYG editor. For example, I'd like to be able to upload images while editing in Markdown (like GitHub does). Also, a syntax cheatsheet would be wonderful.

Ideally, I'd like to be able to switch between the Markdown and WYSIWYG editors while editing a document, or have a rendered preview tab in the Markdown editor.

Comment by evelynciara on evelynciara's Shortform · 2020-08-11T16:23:38.029Z · EA · GW

I think a more general, and less antagonizing, way to frame this is "increasing scientific literacy among the general public," where scientific literacy is seen as a spectrum. For example, increasing scientific literacy among climate activists might make them more likely to advocate for policies that more effectively reduce CO2 emissions.

Comment by evelynciara on What posts do you want someone to write? · 2020-08-06T06:03:06.467Z · EA · GW

I'd appreciate a forum post or workshop about how to interpret empirical evidence. Jennifer Doleac gives a lot of good pointers in the recent 80,000 Hours podcast, but I think the EA and public policy communities would benefit from a more thorough treatment.

Comment by evelynciara on evelynciara's Shortform · 2020-08-03T22:27:47.717Z · EA · GW

Table test - Markdown

| Column A | Column B | Column C |
| --- | --- | --- |
| Cell A1 | Cell B1 | Cell C1 |
| Cell A2 | Cell B2 | Cell C2 |
| Cell A3 | Cell B3 | Cell C3 |

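For reference, a pipe table like this one is written in Markdown source as (standard GitHub-flavored Markdown syntax; whether the Forum's editor supports it is what this test checks):

```
| Column A | Column B | Column C |
| --- | --- | --- |
| Cell A1  | Cell B1  | Cell C1  |
```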
Comment by evelynciara on EA Forum update: New editor! (And more) · 2020-08-03T16:12:24.026Z · EA · GW

How do you create them in Markdown?

Comment by evelynciara on evelynciara's Shortform · 2020-08-02T21:58:09.592Z · EA · GW

If you're looking at where to direct funding for U.S. criminal justice reform:

List of U.S. states and territories by incarceration and correctional supervision rate

On this page, you can sort states (and U.S. territories) by total prison/jail population, incarceration rate per 100,000 adults, or incarceration rate per 100,000 people of all ages - all statistics as of year-end 2016.

As of 2016, the 10 states with the highest incarceration rates per 100,000 people were:

1. Oklahoma (990 prisoners/100k)
2. Louisiana (970)
3. Mississippi (960)
4. Georgia (880)
5. Alabama (840)
6. Arkansas (800)
7. Arizona (790)
8. Texas (780)
9. Kentucky (780)
10. Missouri (730)

National and state-level bail funds for pretrial and immigration detention

Comment by evelynciara on Lukas_Gloor's Shortform · 2020-07-30T09:15:18.797Z · EA · GW

I agree that pleasure is not intrinsically good (i.e. I also deny the strong claim). I think it's likely that experiencing the full spectrum of human emotions (happiness, sadness, anger, etc.) and facing challenges are good for personal growth and therefore improve well-being in the long run. However, I think that suffering is inherently bad, though I'm not sure what distinguishes suffering from displeasure.

Comment by evelynciara on evelynciara's Shortform · 2020-07-23T03:51:16.073Z · EA · GW

Epistemic status: Tentative thoughts.

I think that medical AI could be a nice way to get into the AI field for a few reasons:

• You'd be developing technology that improves global health by a lot. For example, according to the WHO, "The use of X-rays and other physical waves such as ultrasound can resolve between 70% and 80% of diagnostic problems, but nearly two-thirds of the world's population has no access to diagnostic imaging."[1] Computer vision can make radiology more accessible to billions of people around the world, as this project is trying to do.
• It's also a promising starting point for careers in AI safety and applying AI/ML to other pressing causes.

AI for animal health may be even more important and neglected.

Comment by evelynciara on evelynciara's Shortform · 2020-07-20T22:35:40.824Z · EA · GW

Stuart Russell: Being human and navigating interpersonal relationships will be humans' comparative advantage when artificial general intelligence is realized, since humans will be better at simulating other humans' minds than AIs will. (Human Compatible, chapter 4)

Also Stuart Russell: Automated tutoring!! (Human Compatible, chapter 3)