What norms about tagging should the EA Forum have? 2020-07-14T04:19:54.841Z · score: 11 (3 votes)
Does generality pay? GPT-3 can provide preliminary evidence. 2020-07-12T18:53:09.454Z · score: 13 (8 votes)
Which countries are most receptive to more immigration? 2020-07-06T21:46:03.732Z · score: 17 (7 votes)
Will AGI cause mass technological unemployment? 2020-06-22T20:55:00.447Z · score: 3 (1 votes)
Idea for a YouTube show about effective altruism 2020-04-24T05:00:00.853Z · score: 18 (9 votes)
How do you talk about AI safety? 2020-04-19T16:15:59.288Z · score: 10 (8 votes)
International Affairs reading lists 2020-04-08T06:11:41.620Z · score: 13 (7 votes)
How effective are financial incentives for reaching D&I goals? Should EA orgs emulate this practice? 2020-03-24T18:27:16.554Z · score: 6 (3 votes)
What are some software development needs in EA causes? 2020-03-06T05:25:50.461Z · score: 10 (8 votes)
My Charitable Giving Report 2019 2020-02-27T16:35:42.678Z · score: 24 (16 votes)
Shoot Your Shot 2020-02-18T06:39:22.964Z · score: 7 (4 votes)
Does the President Matter as Much as You Think? | Freakonomics Radio 2020-02-10T20:47:27.365Z · score: 5 (5 votes)
Prioritizing among the Sustainable Development Goals 2020-02-07T05:05:44.274Z · score: 8 (7 votes)
Open New York is Fundraising! 2020-01-16T21:45:20.506Z · score: -4 (2 votes)
What are the most pressing issues in short-term AI policy? 2020-01-14T22:05:10.537Z · score: 9 (6 votes)
Has pledging 10% made meeting other financial goals substantially more difficult? 2020-01-09T06:15:13.589Z · score: 15 (11 votes)
evelynciara's Shortform 2019-10-14T08:03:32.019Z · score: 1 (1 votes)


Comment by evelynciara on Design-Jobs & (Science)Communication for EA? · 2020-07-15T17:44:37.049Z · score: 1 (1 votes) · EA · GW

Jah Ying Chung did a UX research study about how to improve communication and understanding between Western and Asian EA communities. So there's precedent for your second idea, but nothing like a fully fledged organization yet.

Comment by evelynciara on EA Forum feature suggestion thread · 2020-07-15T06:01:05.967Z · score: 1 (1 votes) · EA · GW

Post and comment previews in search results!

Comment by evelynciara on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-14T01:21:58.710Z · score: 16 (9 votes) · EA · GW

What do you think is the probability of AI causing an existential catastrophe in the next century?

Comment by evelynciara on evelynciara's Shortform · 2020-07-10T05:30:34.961Z · score: 11 (5 votes) · EA · GW

I think we need to be careful when we talk about AI and automation not to commit the lump of labor fallacy. When we say that a certain fraction of economically valuable work will be automated at any given time, or that this fraction will increase, we shouldn't implicitly assume that the total amount of work being done in the economy is constant. Historically, automation has increased the size of the economy, thereby creating more work to be done, whether by humans or by machines; we should expect the same to happen in the future. (Note that this doesn't exclude the possibility of increasingly general AI systems performing almost all economically valuable work. This could very well happen even as the total amount of work available skyrockets.)
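A quick arithmetic sketch (all numbers made up for illustration) of why a rising automated share doesn't imply less human work:

```python
# Hypothetical numbers: the automated share of all work rises from 50% to 90%,
# but the total economy grows 10x over the same period.
total_work_now, automated_share_now = 100.0, 0.50
total_work_later, automated_share_later = 1000.0, 0.90

human_work_now = total_work_now * (1 - automated_share_now)        # 50.0
human_work_later = total_work_later * (1 - automated_share_later)  # ~100.0

# The automated share nearly doubled, yet the absolute amount of human work
# also doubled, because the pie itself grew.
assert human_work_later > human_work_now
```

The lump-of-labor fallacy is exactly the mistake of holding `total_work_later` fixed at `total_work_now` in a calculation like this.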

Comment by evelynciara on Which countries are most receptive to more immigration? · 2020-07-07T20:19:14.736Z · score: 3 (2 votes) · EA · GW

I think these concerns are valid. The website Open Borders: The Case addresses many of the main arguments against open borders, including the possibility of nativist backlash to increased immigration.

"Nativist backlash" refers to the hypothesis that a country opening its borders to all immigration would cause a significant portion of current residents to subsequently turn against immigration. The problem with this claim is that the probability of backlash depends on how a country adopts open borders in the first place. Nathan Smith writes:

The trouble with “nativist backlash” as a standalone topic, is that a nativist backlash against open borders seems to presuppose that open borders is somehow established first. But for open borders to be established, something major would have to change in the policymaking process and/or public opinion. And whatever that change was, would presumably affect the likelihood and nature of any nativist backlash.

If open borders were established based on false advertising that it wasn’t really radical and wouldn’t make that much difference, then there would doubtless be a nativist backlash. Likewise if it were established by some sort of presidential and judicial fiat without popular buy-in. But if open borders came about because large majorities were persuaded that people have a natural right to migrate and it’s unjust to imprison them in the country of their birth, then people might be willing to accept the drastic consequences of their moral epiphanies.

So any claim that “open borders will inevitably provoke a nativist backlash” just seems ill formulated. One first needs a scenario by which open borders is established. Then one could assess the probability and likely character of a nativist backlash, but it would be different for every open borders scenario.

Comment by evelynciara on Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention? · 2020-07-04T18:28:52.026Z · score: 1 (1 votes) · EA · GW

Background: I am an information science student who has taken a class on the societal aspects of surveillance.

My gut feeling is that advocating for or implementing "mass surveillance" targeted at preventing individuals from using weapons of mass destruction (WMDs) would be counterproductive.

First, were a mass surveillance system aimed at controlling WMDs to be set up, governments would lobby for it to be used for other purposes as well, such as monitoring for conventional terrorism. Pretty soon it wouldn't be minimally invasive anymore; it would just be a general-purpose mass surveillance system.

Second, a surveillance system of the scope that Bostrom has proposed ("ubiquitous real-time worldwide surveillance") would itself be an existential risk to liberal democracy. The problem is that a ubiquitous surveillance system would create the feeling that surveillees are constantly being watched. Even if it had strong technical and institutional privacy guarantees and those guarantees were communicated to the public, people would likely not be able to trust it; rumors of abuse would only make establishing trust harder. People modify their behavior when they know they are being watched or could be watched at any time, so they would be less willing to engage in behaviors that are stigmatized by society even if the Panopticon were not explicitly looking out for those behaviors. This feeling of constantly being watched would stifle risk-taking, individuality, creativity, and freedom of expression, all of which are essential to sustain human progress.

I think that a much more limited suite of targeted surveillance systems, combined with other mechanisms for arms control, would be a lot more promising while still being effective at controlling WMDs. Such limited surveillance systems are already used in gun control: for example, the U.S. federal government requires dealers to keep records of gun sales for at least 20 years, and many U.S. states and other countries keep records of who is licensed to own a gun. Some states also require gun owners to report lost or stolen guns in order to fight gun trafficking. These surveillance measures can be designed to balance gun owners' privacy interests with the public's interest in reducing gun violence. We could regulate synthetic biology a lot like we do gun control: for example, companies that produce synthetic biology products or sell desktop DNA sequencers could be required to maintain records of transactions.

However, I don't expect this targeted approach to work as well for cyber weapons. Because computers are general-purpose, cyber weapons can theoretically be developed and executed on any computer, and trying to prevent the use of cyber weapons by surveilling everyone who owns a computer would be extremely inefficient (since the vast majority of people who use computers are not creating cyber weapons) and impractical (because power users could easily uninstall any spyware planted on their machines). Also, because computers are ubiquitous and often store a lot of sensitive personal information, this form of surveillance would be extremely unpopular as well as invasive. Strengthening cyber defense seems like a more promising way to prevent harm from cyber attacks.

Comment by evelynciara on EA Updates for June 2020 · 2020-07-03T21:02:16.057Z · score: 7 (2 votes) · EA · GW

Thanks for making this post! I think it would be helpful if you linked directly to the playlist for EAGxVirtual 2020 instead of the channel.

Comment by evelynciara on evelynciara's Shortform · 2020-06-18T22:41:17.816Z · score: 6 (6 votes) · EA · GW

How pressing is countering anti-science?

Intuitively, anti-science attitudes seem like a major barrier to solving many of the world's most pressing problems: for example, climate change denial has greatly derailed the American response to climate change, and distrust of public health authorities may be stymieing the COVID-19 response. (For instance, a candidate running in my district for State Senate is campaigning on opposition to contact tracing as well as vaccines.) I'm particularly concerned about anti-economics attitudes because they lead to bad economic policies that don't solve the problems they're meant to solve, such as protectionism and rent control, and opposition to policies that are actually supported by evidence. Additionally, I've heard (but can't find the source for this) that economists are generally more reluctant to do public outreach in defense of their profession than scientists in other fields are.

Comment by evelynciara on Forum update: Tags are live! Go use them! · 2020-06-01T21:34:48.166Z · score: 8 (5 votes) · EA · GW

Can you please add the tag directory to the sidebar?

Comment by evelynciara on evelynciara's Shortform · 2020-05-28T17:48:26.611Z · score: 2 (2 votes) · EA · GW

I think there should be an EA Fund analog for criminal justice reform. This could especially attract non-EA dollars.

Comment by evelynciara on Any good organizations fighting racism? · 2020-05-28T03:13:20.744Z · score: 3 (3 votes) · EA · GW

My understanding is that the criminal justice system plays a central role in institutional racism in the United States. For example, it is a significant contributor to the racial unemployment gap:

Mass incarceration plays a significant role in the lower labor force participation rate for African American men. African Americans are more likely to be incarcerated following an arrest than are white Americans, and formerly incarcerated individuals of all races experience difficulties in gaining employment. In spite of years of widespread agreement among researchers that incarceration is a profound factor in employment outcomes, employment statistics still do not gather data on incarceration, erasing a key structural factor. (Ajilore 2020)

Thus, criminal justice reform seems like an effective, targeted way to break the cycle.

Comment by evelynciara on How can I apply person-affecting views to Effective Altruism? · 2020-04-29T07:56:37.645Z · score: 6 (4 votes) · EA · GW

If you think that embryos and fetuses have moral value, then abortion becomes a very important issue in terms of scale. However, it's not very neglected, and the evidence suggests that increased access to contraceptives, not restricted access to abortion services, is driving the decline in abortion rates in the U.S.

Designing medical technology to reduce miscarriages (which are spontaneous abortions) may be an especially important, neglected, and tractable way to prevent embryos/fetuses and parents from suffering. (10-50% of pregnancies end in miscarriages.)

Comment by evelynciara on How has biosecurity/pandemic preparedness philanthropy helped with coronavirus, and how might it help with similar future situations? · 2020-04-29T07:36:58.359Z · score: 1 (1 votes) · EA · GW

Unrolled for convenience

I have Twitter blocked using StayFocusd (which gives me an hour per day to view blocked websites), so reading it on a separate website allows me to take my time with it.

Comment by evelynciara on Is it a good idea to write EA blog posts for skill building and learning more about EA? · 2020-04-28T16:35:19.228Z · score: 6 (4 votes) · EA · GW

Yeah, I think that's a good idea. Most people's early creative work will not be their best work, so don't have high expectations at the beginning. I would focus on learning and having fun while you write.

Comment by evelynciara on What will 80,000 Hours provide (and not provide) within the effective altruism community? · 2020-04-27T08:15:37.392Z · score: 1 (1 votes) · EA · GW

I second this. I imagine that updating the AI problem profile must be a top priority for 80K because AI safety is a popular topic in the EA community, and it's important to have a central source for the community's current understanding of the problem.

Comment by evelynciara on Idea for a YouTube show about effective altruism · 2020-04-25T19:03:20.878Z · score: 2 (2 votes) · EA · GW

Or Complexly, though they seem to have a lot on their plate.

It shouldn't be hard to create good-quality video on a low budget, though.

Comment by evelynciara on Binding Fuzzies and Utilons Together · 2020-04-25T09:02:07.321Z · score: 8 (5 votes) · EA · GW

Anecdotally, The Precipice played a huge part in getting me into longtermism because it combined philosophical arguments with emotional appeal: "Trillions of lives are at stake. Save them." "We need to do right by our ancestors." "Everything you care about is at stake. Protect our future." I think it's likely that the book's emotional weight will be a major factor in its persuasiveness.

Comment by evelynciara on saulius's Shortform · 2020-04-24T16:50:57.521Z · score: 1 (1 votes) · EA · GW


Comment by evelynciara on saulius's Shortform · 2020-04-24T05:16:26.602Z · score: 1 (1 votes) · EA · GW

What's the syntax for footnotes?

Comment by evelynciara on Is preventing child abuse a plausible Cause X? · 2020-04-23T17:50:29.003Z · score: 1 (1 votes) · EA · GW

As a member of what the commenters on that post call "the Left," I will say that many leftists I know do care about mental health, recognize that child abuse often leads to long-lasting health problems for victims, and believe that society should treat children with dignity and respect.

Comment by evelynciara on How do you talk about AI safety? · 2020-04-19T22:17:44.148Z · score: 2 (2 votes) · EA · GW

note: your link is broken

Comment by evelynciara on International Affairs reading lists · 2020-04-09T15:10:46.878Z · score: 1 (1 votes) · EA · GW

You're welcome! I'm actually not an expert; I want to learn myself and someone else shared this with me.

Comment by evelynciara on evelynciara's Shortform · 2020-04-06T18:35:20.812Z · score: 1 (1 votes) · EA · GW

Thanks for the suggestion! I imagine that most scholars are reeling from the upheavals caused by the pandemic response, so right now doesn't feel like the right time to ask professors to do anything. What do you think?

Comment by evelynciara on Is running Folding@home / Rosetta@home beneficial? · 2020-04-03T03:27:02.250Z · score: 1 (1 votes) · EA · GW

I just installed Folding@home on my laptop (partially inspired by this post), and for me, the cost so far has been close to zero. FAH took me about 10 minutes to install, and my time isn't very valuable right now (spring break). It used up to 75% of my CPU capacity and I didn't notice a drop in my laptop's performance, and I don't pay for electricity at my dorm. As for the external cost of running it, I don't know what percentage of my school's electricity comes from fossil fuels, so it's hard for me to estimate my FAH instance's carbon footprint.

However, I'm worried that FAH will cause my laptop's fan to wear out more quickly because the laptop is not designed to crunch numbers 24/7. I think it's best if I have a plan for maintaining whatever hardware I run FAH on, so I'm going to stop running it for now.

[Edit: I'm more concerned about the risk of damaging my laptop now.]

Comment by evelynciara on evelynciara's Shortform · 2020-04-03T02:04:20.282Z · score: 7 (4 votes) · EA · GW

Do emergency universal pass/fail policies improve or worsen student well-being and future career prospects?

I think a natural experiment is in order. Many colleges are adopting universal pass/fail grading for this semester in response to the COVID-19 pandemic, while others aren't. Someone should study the impact this will have on students to inform future university pandemic response policy.

Comment by evelynciara on New Top EA Causes for 2020? · 2020-04-02T01:43:30.283Z · score: 3 (2 votes) · EA · GW

I detect elevated levels of malarkey in this comment :^)

Comment by evelynciara on Is Existential Risk a Useless Category? Could the Concept Be Dangerous? · 2020-04-01T16:24:51.536Z · score: 8 (7 votes) · EA · GW

I think the paper title is clickbaity and misleading, given that you argue narrowly against Bostrom's conception of existential risk rather than the broader idea of x-risk itself.

Comment by evelynciara on Is Existential Risk a Useless Category? Could the Concept Be Dangerous? · 2020-03-31T21:44:55.826Z · score: 5 (3 votes) · EA · GW

technological development proceeds from the time of this writing (in 2020) for another decade. Cures for pathologies like Alzheimer’s, diabetes, and heart disease are discovered. New strategies for preventing large-scale outbreaks of infectious disease are developed, and life expectancy around the world increases to 95 years old. The human population stabilizes at around 8 billion people... But at the end of this decade, technological progress stalls permanently: the conditions realized at the end of the decade are the conditions that hold for the next 1 billion years, at which point Earth becomes uninhabitable due to the sun’s growing luminosity. Nonetheless, many trillions and trillions of humans will come to exist in these conditions, with more opportunities for self-actualization than ever before. (pp. 13-14)

I agree that this is not an existential catastrophe, at least on timescales of less than a billion years, provided that humanity is not permanently prevented from leaving Earth. To me, an "existential catastrophe" is an event that causes humanity's welfare or the quality of its moral values to permanently fall far below present-day levels, e.g. to pre-industrial levels. At most, I'd be disappointed if technology plateaued at a level above the present day's technological progress.

However, I'd consider it an existential catastrophe if humanity permanently lost the ability to settle outer space, because that would make our eventual extinction inevitable.

Comment by evelynciara on Toby Ord’s ‘The Precipice’ is published! · 2020-03-29T04:33:20.135Z · score: 5 (2 votes) · EA · GW

I noticed that much of the Wikipedia article about this book was copied from this post. Did you give anyone authorization to write the article using this post as the source? I ask because Wikipedia is very strict about copyrights, and I need to make sure that the article is rewritten if it violates your copyright.

Comment by evelynciara on What questions could COVID-19 provide evidence on that would help guide future EA decisions? · 2020-03-28T17:24:13.937Z · score: 2 (2 votes) · EA · GW

How to get disease surveillance right: monitoring the spread of a disease effectively without infringing on civil liberties.

Also, how effectively is the Fed's expansionary response to the COVID-19 crisis mitigating the worst risks of the pandemic? (I'm not remotely a macroecon expert so I don't know what the best questions are, but I know that Open Phil is interested in this area.)

Comment by evelynciara on What posts do you want someone to write? · 2020-03-26T15:05:41.706Z · score: 2 (2 votes) · EA · GW

I care about a lot of different U.S. policy issues and would like to get a sense of their neglectedness and tractability. So I'd love it if someone could do a survey to find out how many people in the U.S. work full time on various issues and how hard it is to get bills passed on them.

Comment by evelynciara on evelynciara's Shortform · 2020-03-23T01:49:48.430Z · score: 8 (6 votes) · EA · GW

Tentative thoughts on "problem stickiness"

When it comes to comparing non-longtermist problems from a longtermist perspective, I find it useful to evaluate them based on their "stickiness": the rate at which they will grow or shrink over time.

A problem's stickiness is its annual growth rate. So a problem has positive stickiness if it is growing, and negative stickiness if it is shrinking. For long-term planning, we care about a problem's expected stickiness: the annual rate at which we think it will grow or shrink. Over the long term - i.e. time frames of 50 years or more - we want to focus on problems that we expect to grow over time without our intervention, instead of problems that will go away on their own.

For example, global poverty has negative stickiness because the poverty rate has declined over the last 200 years. I believe its stickiness will continue to be negative, barring a global catastrophe like climate change or World War III.

On the other hand, farm animal suffering has not gone away over time; in fact, it has gotten worse, as a growing number of people around the world are eating meat and dairy. This trend will continue at least until alternative proteins become competitive with animal products. Therefore, farm animal suffering has positive stickiness. (I would expect wild animal suffering to also have positive stickiness due to increased habitat destruction, but I don't know.)

The difference in stickiness between these problems motivates me to focus more on animal welfare than on global poverty, although I'm still keeping an eye on and cheering on actors in that space.

I wonder which matters more, a problem's "absolute" stickiness or its growth rate relative to the population or the size of the economy. But I care more about differences in stickiness between problems than the numbers themselves.
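As a rough sketch (all stickiness values hypothetical), expected stickiness can be compounded forward to compare problems over a long horizon:

```python
def project(size_now: float, annual_growth_rate: float, years: int) -> float:
    """Expected future size of a problem, compounding its stickiness annually."""
    return size_now * (1 + annual_growth_rate) ** years

# Hypothetical stickiness values; current problem sizes normalized to 1.0.
poverty_in_50y = project(1.0, -0.02, 50)      # shrinking ~2% per year
farm_animals_in_50y = project(1.0, 0.01, 50)  # growing ~1% per year

# Without intervention, the shrinking problem fades away on its own
# while the growing problem compounds.
assert poverty_in_50y < 1.0 < farm_animals_in_50y
```

Even modest growth rates diverge sharply over 50+ years, which is why small differences in expected stickiness can dominate a long-term prioritization.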

Comment by evelynciara on evelynciara's Shortform · 2020-03-22T15:45:59.875Z · score: 1 (1 votes) · EA · GW

I'm playing Universal Paperclips right now, and I just had an insight about AI safety: Just programming the AI to maximize profits instead of paperclips wouldn't solve the control problem.

You'd think that the AI can't destroy the humans because it needs human customers to make money, but that's not true. Instead, the AI could sell all of its paperclips to another AI that continually melts them down and turns them back into wire, and they would repeatedly sell paperclips and wire back and forth to each other, both powered by free sunlight. Bonus points if the AIs take over the central bank.

Comment by evelynciara on EA Global Live Broadcast · 2020-03-22T00:22:20.671Z · score: 4 (3 votes) · EA · GW

In the future, can we please have breaks between talks?

Comment by evelynciara on evelynciara's Shortform · 2020-03-20T06:32:53.192Z · score: 1 (1 votes) · EA · GW

Can someone please email me a copy of this article?

I'm planning to update the Wikipedia article on Social discount rate, but I need to know what the article says.

Comment by evelynciara on Phil Trammell: The case for ignoring the world’s current problems — or how becoming a ‘patient philanthropist’ could allow you to do far more good · 2020-03-18T19:06:27.086Z · score: 4 (3 votes) · EA · GW

This was a very intriguing interview!

Question: If you're an economist (or other social scientist) trying to get into the global priorities field, should you join GPI or try to start global priorities research centers at other universities?

Comment by evelynciara on Toby Ord’s ‘The Precipice’ is published! · 2020-03-17T20:50:39.964Z · score: 1 (1 votes) · EA · GW

How are you planning to advertise the book? I have suggestions....

Comment by evelynciara on EAF/FRI are now the Center on Long-Term Risk (CLR) · 2020-03-12T01:10:14.933Z · score: 6 (5 votes) · EA · GW

I like this development. I've heard a suggestion that EA and longtermism carry separate movement identities while continuing to have significant overlap, so they can develop and attract newcomers more independently. This seems to be in line with that suggestion.

Comment by evelynciara on How to persuade people who donate to charity already vs. don't donate at all? · 2020-03-01T06:09:13.874Z · score: 1 (1 votes) · EA · GW

I would try persuading them to donate to both their chosen charity and an EA charity or EA Fund. You could also help them find a charity that's in the same cause as the one they've chosen but does more effective work. The idea is to grow the pie, not make people move donations from one charity to another.

Comment by evelynciara on EA Updates for February 2020 · 2020-02-28T16:54:29.657Z · score: 14 (10 votes) · EA · GW

Thank you for doing this!

I think I read the forum too much and it's honestly information overload :( but these roundups help me get a better sense of what's going on in the community!

Comment by evelynciara on My Charitable Giving Report 2019 · 2020-02-28T04:03:07.216Z · score: 5 (3 votes) · EA · GW

I'm glad you appreciate my post! $230 doesn't feel like much to me because I perceive myself as relatively well-off (for an undergraduate student), but I still think that running a successful birthday fundraiser was an accomplishment. I'm happy that it can serve as inspiration to others.

Comment by evelynciara on 1 min Key and Peele clip on children saved per dollar · 2020-02-27T06:55:21.727Z · score: 2 (2 votes) · EA · GW

This is really funny!

Comment by evelynciara on Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI? · 2020-02-22T02:51:32.214Z · score: 1 (8 votes) · EA · GW

I think some of the cultural aspects are deeply worrying, although I'm open to some of the claims being exaggerated.

The employees work long hours and talk incessantly about their jobs through meals and social hours... more than others in the field, its employees treat AI research not as a job but as an identity.

Although I would also be excited if my work were making a difference, this is a red flag. It's been argued that encouraging people to become very emotionally invested in their work leads to burnout, which can hurt their long-term productivity. I think effective altruists are especially susceptible to this dynamic. There needs to be a special emphasis on work-life balance in this community.

I'm also confused about the documentary thing. What is that statement referring to? It makes the documentary sound like a gratuitous attempt to flex on DeepMind.

Comment by evelynciara on Shoot Your Shot · 2020-02-20T03:46:30.434Z · score: 1 (1 votes) · EA · GW

Thank you—this is really helpful feedback!

What tactics do you think could be more effective at promoting effective altruism? I've been thinking about promoting EA practices in a public interest tech context, but I'd be flying blind because I have no idea what's effective and relatively little experience in PIT overall. One possibility would be to deliver an EA talk at a tech conference such as GHC.

"trying to do things that will really help people, rather than ignoring their needs in favor of what we think will help"

is uncontroversial; I think cause prioritization would be more controversial, though. I wouldn't be surprised if people working on one cause objected to being told that they'd have more impact working on a different one.

In past Splash courses I've taught, I've noticed that some students were already familiar with the topic; for example, during my introductory machine learning class, one student asked about a type of neural network that I had heard of but was unfamiliar with. Do you think I'd be mostly preaching to converts?

Comment by evelynciara on Aligning Recommender Systems as Cause Area · 2020-02-14T21:09:54.795Z · score: 1 (1 votes) · EA · GW

There's a growing area of research on fair ranking algorithms. Where the problem you've scoped out focuses on the utility of end users, fairness in ranking aims to align recommender systems with the "utility" of the items being recommended (e.g. job applicants) and the long-term viability of the platform.

Comment by evelynciara on Prioritizing among the Sustainable Development Goals · 2020-02-14T16:48:17.698Z · score: 5 (3 votes) · EA · GW

This implies that the "experts" think that "Eliminate the most extreme poverty" is a matter of distribution of money and power via state authority (taxation).

I don't think it implies that these experts think redistribution is the best way to eliminate extreme poverty. Increasing GDP per capita is 40th out of 117 targets, and being ranked this low could mean that they value it as a means of reducing poverty but not as an end in itself.

Comment by evelynciara on Prioritizing among the Sustainable Development Goals · 2020-02-13T01:44:37.269Z · score: 3 (2 votes) · EA · GW

Thank you!

Yes; although they don't seem to have published the entire dataset of responses, they published a few here:

  • "The most important criterion in determining the order of the SDGs should be how to expand the capability set of the least advantaged members of society"
  • "Extent to which a target focuses on a system and not individual people"
  • "Looked at specific macro issues that would benefit people who could then play a more effective role in society, thus helping with the other goals"
  • "Start with the most basic human needs (food, water), then education and then the natural environment, where government has a strong role to play (including to address negative externalities)"

Comment by evelynciara on evelynciara's Shortform · 2020-02-11T21:55:16.337Z · score: 2 (2 votes) · EA · GW

A social constructivist perspective on long-term AI policy

I think the case for addressing the long-term consequences of AI systems holds even if AGI is unlikely to arise.

The future of AI development will be shaped by social, economic and political factors, and I'm not convinced that AGI will be desirable in the future or that AI is necessarily progressing toward AGI. However, (1) AI already has large positive and negative effects on society, and (2) I think it's very likely that society's AI capabilities will improve over time, amplifying these effects and creating new benefits and risks in the future.

Comment by evelynciara on Prioritizing among the Sustainable Development Goals · 2020-02-08T05:30:03.111Z · score: 2 (2 votes) · EA · GW

Well, for starters, I think any kind of policy work is a moonshot. Lobbying for pro-growth/globalist policies would have a small chance of boosting econ growth by a lot, which would in turn affect a lot of the other SDG targets.

Comment by evelynciara on Prioritizing among the Sustainable Development Goals · 2020-02-07T16:44:27.604Z · score: 1 (1 votes) · EA · GW

I wonder what the experts believed the appropriate tradeoffs between individual vs. institutions and urgency vs. process were.