Posts

Should EAs in the U.S. focus more on federal or local politics? 2021-05-05T08:33:14.691Z
If you had a large amount of money (at least $1M) to spend on philanthropy, how would you spend it? 2021-05-01T00:27:48.625Z
Why AI is Harder Than We Think - Melanie Mitchell 2021-04-28T08:19:02.842Z
To Build a Better Ballot: an interactive guide to alternative voting systems 2021-04-18T06:24:43.454Z
Moral pluralism and longtermism | Sunyshore 2021-04-17T00:14:13.114Z
What does failure look like? 2021-04-09T22:05:16.065Z
Thoughts on "trajectory changes" 2021-04-07T02:18:36.962Z
Quadratic Payments: A Primer (Vitalik Buterin, 2019) 2021-04-05T18:05:55.215Z
Please stand with the Asian diaspora 2021-03-20T01:05:39.533Z
How should EAs manage their copyrights? 2021-03-09T18:42:06.250Z
Is The YouTube Algorithm Radicalizing You? It’s Complicated. 2021-03-01T21:50:17.109Z
The link between surveillance and free expression | Sunyshore 2021-02-23T02:14:49.084Z
How can non-biologists contribute to wild animal welfare? 2021-02-17T20:58:44.034Z
[Podcast] Ajeya Cotra on worldview diversification and how big the future could be 2021-01-22T23:57:48.193Z
What I believe, part 1: Utilitarianism | Sunyshore 2021-01-10T17:58:58.513Z
What is the marginal impact of a small donation to an EA Fund? 2020-11-23T07:09:02.934Z
Which terms should we use for "developing countries"? 2020-11-16T00:42:58.385Z
Is Technology Actually Making Things Better? – Pairagraph 2020-10-01T16:06:23.237Z
Planning my birthday fundraiser for October 2020 2020-09-12T19:26:03.888Z
Is existential risk more pressing than other ways to improve the long-term future? 2020-08-20T03:50:31.125Z
What opportunities are there to use data science in global priorities research? 2020-08-18T02:48:23.143Z
Are some SDGs more important than others? Revealed country priorities from four years of VNRs 2020-08-16T06:56:19.326Z
How strong is the evidence of unaligned AI systems causing harm? 2020-07-21T04:08:07.719Z
What norms about tagging should the EA Forum have? 2020-07-14T04:19:54.841Z
Does generality pay? GPT-3 can provide preliminary evidence. 2020-07-12T18:53:09.454Z
Which countries are most receptive to more immigration? 2020-07-06T21:46:03.732Z
Will AGI cause mass technological unemployment? 2020-06-22T20:55:00.447Z
Idea for a YouTube show about effective altruism 2020-04-24T05:00:00.853Z
How do you talk about AI safety? 2020-04-19T16:15:59.288Z
International Affairs reading lists 2020-04-08T06:11:41.620Z
How effective are financial incentives for reaching D&I goals? Should EA orgs emulate this practice? 2020-03-24T18:27:16.554Z
What are some software development needs in EA causes? 2020-03-06T05:25:50.461Z
My Charitable Giving Report 2019 2020-02-27T16:35:42.678Z
Shoot Your Shot 2020-02-18T06:39:22.964Z
Does the President Matter as Much as You Think? | Freakonomics Radio 2020-02-10T20:47:27.365Z
Prioritizing among the Sustainable Development Goals 2020-02-07T05:05:44.274Z
Open New York is Fundraising! 2020-01-16T21:45:20.506Z
What are the most pressing issues in short-term AI policy? 2020-01-14T22:05:10.537Z
Has pledging 10% made meeting other financial goals substantially more difficult? 2020-01-09T06:15:13.589Z
evelynciara's Shortform 2019-10-14T08:03:32.019Z

Comments

Comment by evelynciara on Open Thread: May 2021 · 2021-05-08T17:49:09.675Z · EA · GW

Good catch!

Comment by evelynciara on Open Thread: May 2021 · 2021-05-07T19:58:06.005Z · EA · GW

Good news: IBM has created a 2nm chip in a lab

Comment by evelynciara on Open Thread: May 2021 · 2021-05-06T23:05:44.815Z · EA · GW

While in my last year of high school, I independently came up with the idea that we should try to maximize aggregate utility over time: max ∫ U(t) dt. A few weeks later, I heard about EA from a teacher.

Comment by evelynciara on Should EAs in the U.S. focus more on federal or local politics? · 2021-05-05T08:39:03.965Z · EA · GW

My own thoughts: In farmed animal welfare, I think it's possible for EAs to influence state governments to fund research and development on alternative proteins (especially through land-grant universities like Cornell University in New York) and improve regulations on animal agriculture. It may also be possible to change state and local environmental laws to improve wild animal welfare.

Comment by evelynciara on Open Thread: April 2021 · 2021-05-03T19:29:35.202Z · EA · GW

I just donated US$30 to the GiveIndia oxygen supply fundraiser, inspired by this thread.

Comment by evelynciara on Effective donations for COVID-19 in India · 2021-05-03T17:05:29.317Z · EA · GW

Jeff Coleman writes:

Back-of-napkin math shows that funding oxygen intervention for India is currently more effective than even top-rated interventions from @GiveWell...

Hat tip to Dr. Rohin Francis for identifying the specific opportunity in his excellent video on the crisis India is currently facing: https://twitter.com/MedCrisis/status/1387428737583550468

I've used numbers from https://covid.giveindia.org/healthcare-heroes/ because they were very specific and can handle int'l donations easily.

They claim to be able to deploy funds within 1-2 weeks. Where details were lacking, I checked with an MD I know who has been treating COVID in northern Canada to estimate the impact of various interventions, assuming effective triage and that each item is already a choke point.

My starting assumptions were:

Avg patient age 50 yo, giving post-survival life expectancy of ~25 years
70 ₹ to 1 USD
Avg 28 oxygen-days needed to save a patient
40 USD to add one year of life expectancy via Givewell top charities

All assumptions highly conservative. Exchange rate is pre-adjusted to cover payment processing and conversion. Patients are actually skewing younger right now. 4 weeks of oxygen to save just one patient actually assumes triage inefficiency. 40 USD is cheapest estimate given.

Next I took all listed items and calculated the number of days the equipment needs to be in usage to beat GiveWell cost effectiveness. Results:

Oxygen plant: 37 days
Oxygen concentrators: 22 days
Bipaps: 72 days
Ventilators: 600 days

For the first 3 these are rock solid.

For oxygen tanks there are recurring costs. If we assume all costs borne by the donor we get:

B type oxygen cylinder: 33 days
D type oxygen cylinder: no breakeven

However if refills are paid for locally we have to unadjust the exchange rate, giving:

B: 30 days
D: 180 days

If we were to further claim that local refill payments are unlikely to compete with charitable giving, and just remove them entirely, we get:

B type: 12 days
D type: 10 days

Note that this assumes that only one of plants or cylinders limits care, rather than both.

In summary:

Almost all interventions planned by this oxygen campaign will outperform GiveWell top charity recommendations given highly conservative assumptions about effectiveness, length of crisis, etc.

Ventilators and D type cylinders are the weakest interventions but...

...given the long service time of this equipment even these are likely to prove relatively effective in the event that all chokepoints for other interventions could be met. India also has a strong record of redeploying unneeded equipment to other countries for later crises.

Please check my math and support via ACH or credit card if you agree: https://covid.giveindia.org/healthcare-heroes/ There is ~600k USD of funding room left in this campaign. If met we can continue down Rohin's list. I welcome corrections regarding rates/assumptions/etc. My full working is below.

Here are the breakdowns for how I did each calculation. The full .ods spreadsheet can be downloaded from https://file.io/EDsyrhSJMDWA to check the underlying math. Images assume no oxygen cylinder refill cost to donors, but spreadsheet does not.

Effectiveness is hard to judge; I tried to estimate both the risk level of a patient who would need that intervention and the amount of impact the intervention would have. Ventilators fare poorly because patients on them die up to half the time in spite of multi-week treatment.

The least obvious assumption I made is probably that D type oxygen tanks would be used for high flow rate treatment exclusively, so even though they are 4 times bigger they run out 4x as fast, while only being twice as effective at saving lives as B type cylinders are.

Oxygen concentrators come out looking really good in all-cost effectiveness, but of course it probably comes down to what is actually available and how quickly it can be produced/obtained.

An important question here is what the marginal impact of donation actually is. E.g. will the Indian government step in to sufficiently fund oxygen supply to the point where it is logistically rather than financially limited? I welcome any insight that anyone can offer here.

Note that the speed of government response is a major factor. If the government can't buy all available oxygen supply at these prices within ~5 weeks, then donating will still outperform it for some of the interventions, assuming they are locally available within 2 weeks.

It's a glaring omission, but for crypto donations see @CryptoRelief_ as well! Reliable org who has already deployed 1M USD worth of funds directly towards oxygen concentrators: https://twitter.com/sandeepnailwal/status/1388813415309737986

 I'm just trying to draw in the #EffectiveAltruism community as well!

Thanks to Jacob Eliosoff for pointing me to this thread!
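
To make the quoted breakeven arithmetic concrete, here is a minimal sketch of the calculation as I understand it. Only the 40 USD/life-year benchmark, the 25-year life expectancy, and the 28 oxygen-days per patient are from the thread; the 800 USD concentrator cost is a placeholder assumption, not a figure Coleman gives.

    # Sketch of the breakeven arithmetic from the quoted thread.
    # Thread assumptions: GiveWell top charities add a year of life
    # expectancy for ~40 USD; an average saved patient gains ~25 years;
    # saving one patient takes ~28 oxygen-days.
    GIVEWELL_USD_PER_LIFE_YEAR = 40
    LIFE_YEARS_PER_PATIENT = 25
    OXYGEN_DAYS_PER_PATIENT = 28

    def breakeven_days(device_cost_usd, patients_served_at_once=1):
        """Days a device must run, fully utilized, to match GiveWell."""
        # Dollar value produced per device-day: patients saved per day,
        # times life-years per patient, times the GiveWell price per year.
        usd_value_per_day = (patients_served_at_once / OXYGEN_DAYS_PER_PATIENT
                             * LIFE_YEARS_PER_PATIENT
                             * GIVEWELL_USD_PER_LIFE_YEAR)
        return device_cost_usd / usd_value_per_day

    # A hypothetical ~800 USD concentrator serving one patient lands near
    # the thread's 22-day figure: 800 / (25/28 * 40) ~= 22.4 days.
    print(round(breakeven_days(800), 1))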

Comment by evelynciara on Effective donations for COVID-19 in India · 2021-05-01T06:54:53.468Z · EA · GW

I've seen these charities recommended on social media:

I'm not familiar with any of these charities, but I think medical interventions seem especially helpful. Also, some of these charities provide economic relief in the form of food or cash to people in India who've been affected by the COVID-19 crisis, and that seems valuable too.

Comment by evelynciara on Open Thread: April 2021 · 2021-04-30T08:12:40.899Z · EA · GW

Chart: https://imgur.com/a/BpKqWBN

(I can't embed it in a comment in rich-text mode, so here's the link)

Comment by evelynciara on Open Thread: April 2021 · 2021-04-30T08:04:37.365Z · EA · GW

I'm surprised no one on the EA Forum has been talking about the ongoing COVID-19 crisis in India. A year ago, we as a community were monitoring COVID-19 even before it hit North America and Europe. But the pandemic isn't over, and in fact, the number of new cases in India has shot up about 20x since February. (Source: Google search as of 2021-04-30)

This fundraising page has been shared with me via social media, and the interventions it supports seem very promising (e.g. supplying oxygen to patients, donating food to hungry families). I'm curious about other things EAs can do to help.

Comment by evelynciara on What are your favorite examples of moral heroism/altruism in movies and books? · 2021-04-27T19:12:59.779Z · EA · GW

Spoilers for Avatar: The Last Airbender:

  • Zuko's decision to join Team Avatar and warn them about Fire Lord Ozai's plan to destroy the Earth Kingdom may have saved millions of Earth Kingdom citizens.
  • Aang figuring out how to defeat the Fire Lord:
    • When Aang ran off during Parts 1-2 of "Sozin's Comet," he consulted the previous 4 Avatars about the moral dilemma he was facing: how to defeat Ozai without killing him. As a fellow Air Nomad, Avatar Yangchen shared Aang's belief in the sanctity of life, but persuaded him that as the Avatar, he needed to put the world first. So initially, Aang concluded that he had no choice but to kill the Fire Lord.
    • Later, Aang was about to kill Ozai but decided against it because he had a better alternative. A lion turtle had given him the power of energybending, which he used to defeat Ozai by taking away his firebending ability instead of his life.
    • At both points, Aang made the correct utilitarian decision given his abilities and knowledge at the time.

Comment by evelynciara on evelynciara's Shortform · 2021-04-23T16:24:32.726Z · EA · GW

Nope!

Comment by evelynciara on evelynciara's Shortform · 2021-04-23T05:37:57.250Z · EA · GW

Possible research/forecasting questions to understand the economic value of AGI research

A common narrative about AI research is that we are on a path to AGI, in that society will be motivated to try to create increasingly general AI systems, culminating in AGI. Since this is a core assumption of the AGI risk hypothesis, I think it's very important to understand whether this is actually the case.

Some people have predicted that AI research funding will dry up someday as the costs start to outweigh the benefits, resulting in an "AI winter." Jeff Bigham wrote in 2019 that the AI field will experience an "AI autumn," in which the AI research community will shift its focus from trying to develop human-level AI capabilities to developing socially valuable applications of narrow AI.

My view is that an AI winter is unlikely to happen anytime soon (10%), an AI autumn is likely to happen eventually (70%), and continued investment in AGI research all the way to AGI is somewhat unlikely (20%). But I think we can try to understand and predict these outcomes better. Here are some ideas for possibly testable research questions:

  • What will be the ROI on:
  • How much money will OpenAI make by licensing GPT-3?
  • How long will it take for the technology behind GPT-2 and GPT-3 (roughly, making generic language models do other language tasks without specific training) to become economically competitive, compared with how long similar technologies took after they were invented?
  • How long will it take for DeepMind and OpenAI to break even?
  • How do the growth rates of DeepMind and OpenAI's revenues and expenses compare to those of other corporate research labs throughout history?
  • Will Alphabet downsize or shut down DeepMind?
  • Will Microsoft scale back or end its partnership with OpenAI?

Notes:

  • I don't know of any other labs actively trying to create AGI.
  • I have no experience with financial analysis, so I don't know if these questions are the kind that a financial analyst would actually be able to answer. They could be nonsensical for all I know.

Comment by evelynciara on [deleted post] 2021-04-20T19:32:57.026Z

Let's merge this with AI Risks

Comment by evelynciara on What music do you find most inspires you to use your resources (effectively) to help others? · 2021-04-17T17:51:19.526Z · EA · GW

Nuclear Threat Initiative (NTI) has its own playlist: Atomic Songs

Comment by evelynciara on Concerns with ACE's Recent Behavior · 2021-04-17T07:38:10.943Z · EA · GW

I don't think the former is true either (with respect to national politics). 

Comment by evelynciara on Concerns with ACE's Recent Behavior · 2021-04-17T06:17:06.417Z · EA · GW

Is "social justice" ideology really the dominant ideology in our society now? My impression is that it's only taken seriously among young, highly-educated people.

Comment by evelynciara on Concerns with ACE's Recent Behavior · 2021-04-17T03:08:19.051Z · EA · GW

Nitpick: I really wish SJ-aligned people would clarify what they mean by "capitalism" in these contexts.

Comment by evelynciara on evelynciara's Shortform · 2021-04-15T05:24:00.331Z · EA · GW

On the difference between x-risks and x-risk factors

I suspect there isn't much of a meaningful difference between "x-risks" and "x-risk factors," for two reasons:

  1. We can treat them the same in terms of probability theory. For example, if X is an "x-risk" and Y is a "risk factor" for X, then P(X | Y) > P(X). But we can also say that P(Y | X) > P(Y), because both statements are equivalent to P(X, Y) > P(X) P(Y). We can similarly speak of the total probability of an x-risk factor because of the law of total probability (e.g. P(Y) = P(Y | X) P(X) + P(Y | ¬X) P(¬X)) like we can with an x-risk. (See the numerical check after this list.)
  2. Concretely, something can be both an x-risk and a risk factor. Climate change is often cited as an example: it could cause an existential catastrophe directly by making all of Earth unable to support complex societies, or indirectly by increasing humanity's vulnerability to other risks. Pandemics might also be an example, as a pandemic could either directly cause the collapse of civilization or expose humanity to other risks.
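
The symmetry in point 1 is easy to check numerically. A minimal sketch with a made-up joint distribution over an x-risk X and a risk factor Y (the numbers are purely illustrative):

    # Made-up joint distribution over (X = x-catastrophe, Y = risk factor).
    p = {(True, True): 0.02, (True, False): 0.01,
         (False, True): 0.10, (False, False): 0.87}

    def P(event):
        return sum(pr for outcome, pr in p.items() if event(*outcome))

    P_X = P(lambda x, y: x)          # total probability of X
    P_Y = P(lambda x, y: y)          # total probability of Y
    P_XY = P(lambda x, y: x and y)   # joint probability

    # "Y raises the probability of X" and "X raises the probability of Y"
    # are both equivalent to P(X, Y) > P(X) P(Y), so all three print True.
    print(P_XY / P_Y > P_X)    # P(X | Y) > P(X)
    print(P_XY / P_X > P_Y)    # P(Y | X) > P(Y)
    print(P_XY > P_X * P_Y)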

I think the difference is that x-risks are events that directly cause an existential catastrophe, such as extinction or civilizational collapse, whereas x-risk factors are events that don't have a direct causal pathway to x-catastrophe. But it's possible that pretty much all x-risks are risk factors and vice versa. For example, suppose that humanity is already decimated by a global pandemic, and then a war causes the permanent collapse of civilization. We usually think of pandemics as risks and wars as risk factors, but in this scenario, the war is the x-risk because it happened last... right?

One way to think about x-risks that avoids this problem is that x-risks can have both direct and indirect causal pathways to x-catastrophe.

Comment by evelynciara on [deleted post] 2021-04-14T22:48:38.265Z

If we deprecate these tags, can we keep them as wiki-only tags provided they still add value?

Comment by evelynciara on evelynciara's Shortform · 2021-04-14T22:35:21.093Z · EA · GW

"Quality-adjusted civilization years"

We should be able to compare global catastrophic risks in terms of the amount of time they make global civilization significantly worse and how much worse it gets. We might call this measure "quality-adjusted civilization years" (QACYs), or the quality-adjusted amount of civilization time that is lost.

For example, let's say that the COVID-19 pandemic reduces the quality of civilization by 50% for 2 years. Then the QACY burden of COVID-19 is 0.5 × 2 = 1 QACYs.

Another example: suppose climate change will reduce the quality of civilization by 80% for 200 years, and then things will return to normal. Then the total QACY burden of climate change over the long term will be 0.8 × 200 = 160 QACYs.

In the limit, an existential catastrophe would have a near-infinite QACY burden.
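
A minimal sketch of the measure as a function, reproducing the two examples above:

    def qacy_burden(quality_reduction, years):
        """Quality-adjusted civilization years lost to a catastrophe."""
        return quality_reduction * years

    print(qacy_burden(0.5, 2))    # COVID-19 example: 1.0 QACYs
    print(qacy_burden(0.8, 200))  # climate change example: 160.0 QACYs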

Comment by evelynciara on What does failure look like? · 2021-04-11T21:13:53.628Z · EA · GW

Yup ;)

Comment by evelynciara on Voting reform seems overrated · 2021-04-11T04:09:54.471Z · EA · GW

Coming back to this thread now having thought about it more. Speaking from my personal experience as an American citizen, I think the spoiler effect lowers voters' confidence in the electoral system, especially that of idealistic, young voters.

I was a Bernie supporter in 2016. I wasn't excited about Hillary being the Democratic nominee, but because I understood the incentive structure created by FPTP, I chose to support Hillary in the general election because I really hated Trump. But a substantial number of Bernie supporters defected from the Democratic base after Bernie lost. Mainstream Democrats seemed to be shaming them into voting for Hillary, on the grounds that:

  • if you vote for Jill Stein (the Green candidate) instead of Hillary, Trump will win.
  • if you vote for Gary Johnson (the Libertarian candidate) instead of Hillary, Trump will win.
  • if you write in Bernie, Trump will win.
  • if you don't vote, Trump will win.

This has been called vote-shaming, and I think it makes American political culture a lot more toxic because it pits ideologically similar people (like center-left and far-left progressives) against each other. Many people don't vote at all, both because of voter suppression, and because they don't feel represented by the major candidates. Eligible non-voters in 2016 were also more likely to be younger, less educated, less affluent, and non-White (source), which suggests that the system is not representing these groups as well as it could be. It is a problem that citizens of the world's oldest continuously running democracy feel disempowered - it means that the government is not as responsive to citizens' interests as it should be. Vote-shaming puts the blame on individuals for not voting, instead of the system for causing vote-splitting.

Just so you all don't think that this only happens on the left: I have a friend who didn't really like either major candidate. He leans conservative and strikes me as someone who might have preferred the Libertarian Party or Bernie Sanders. Despite not liking Trump that much, he voted for Trump in the 2016 general, because he thought Hillary was worse.

Some statistics:

  • In 2016, just 54.8% of the voting-age population (VAP) voted in the presidential election; 59.2% of the voting-eligible population (VEP) voted.
  • In 2020, this increased to 62% of the VAP and 66.7% of the VEP. (source)

I think that increasing voter turnout would make the government more responsive to citizens' interests, and I think changing the voting system we use would help with this because it would help citizens feel more empowered to vote.

Note: I'm not saying that vote-splitting, or even problems with the voting mechanism in general, is the only issue with the U.S. electoral system. I think there could be other problems introduced by a new voting system such as approval voting - practical problems that degrade the political system similarly to the way that I think vote-splitting does (since we know that no voting system is theoretically perfect).

Comment by evelynciara on Status update: Getting money out of politics and into charity · 2021-04-11T03:24:02.743Z · EA · GW

Yeah, I can see that. I would add the option to just donate your money to charity.

Also, how would it deal with minor-party candidates? Which major-party candidates would a minor-party donation cancel out, if any?

Comment by evelynciara on Open Thread: April 2021 · 2021-04-10T17:12:26.985Z · EA · GW

Welcome! I'm Evelyn, and I've been finishing up my CS MEng degree at Cornell. I've been exploring the intersection of EA and public interest tech. I'm happy to talk about it sometime if you're interested.

Comment by evelynciara on What Questions Should We Ask Speakers at the Stanford Existential Risks Conference? · 2021-04-10T08:10:45.515Z · EA · GW

  • What are some "obscure" existential risks that we should look more into?
  • What's the biggest x-risk and why is it bigger than the others?

Comment by evelynciara on Quadratic Payments: A Primer (Vitalik Buterin, 2019) · 2021-04-10T07:12:22.892Z · EA · GW

Yeah, it sounds right to me.

Comment by evelynciara on [deleted post] 2021-04-10T02:50:37.039Z

I think it's weird not to have punctuation between the author and title, as in this example:

Diabate, Abdoulaye (2019) Target Malaria proceeded with a small-scale release of genetically modified sterile male mosquitoes in Bana, a village in Burkina Faso, Target Malaria's Blog, July 1.

Pretty much all major citation styles (e.g. MLA and APA) have a period after the author's name.

Comment by evelynciara on Voting reform seems overrated · 2021-04-10T02:27:47.092Z · EA · GW

I think this piece about the center-squeeze effect might address your concern about other voting systems leading to greater prominence of extreme candidates. In short, both plurality and ranked-choice voting tend to eliminate centrist candidates early, as many voters may like a centrist candidate but prefer to vote for a more extreme candidate, either because (in RCV) they like the extreme candidate more or (in FPTP) the centrist candidate is not viable. Approval voting gives an advantage to candidates that aren't everyone's favorite but are acceptable to voters in all parts of the spectrum.

(TBH I'm pretty undecided between voting systems, but I've long held that plurality voting is a bad system. Nowadays I'm sympathetic to approval and quadratic voting.)
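
A toy electorate makes the center squeeze concrete. All of the numbers below are illustrative, and the approval-voting step assumes each voter approves their top two candidates:

    from collections import Counter

    # Toy one-dimensional electorate with honest rankings:
    # 40 left voters, 35 right voters, 25 centrists split between leans.
    ballots = ([("L", "C", "R")] * 40 + [("R", "C", "L")] * 35 +
               [("C", "L", "R")] * 13 + [("C", "R", "L")] * 12)

    # Plurality: first choices only. C is squeezed out (25 < 35 < 40).
    print(Counter(b[0] for b in ballots))

    # C beats both L and R head-to-head (a Condorcet winner), yet under
    # RCV, C has the fewest first choices and is eliminated first.
    def beats(a, b):
        return sum(bal.index(a) < bal.index(b) for bal in ballots) > len(ballots) / 2
    print(beats("C", "L"), beats("C", "R"))  # True True

    # Approval (assuming everyone approves their top two): C wins easily.
    print(Counter(c for bal in ballots for c in bal[:2]))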

Comment by evelynciara on Open Thread: April 2021 · 2021-04-09T17:20:43.932Z · EA · GW

Anyone else having issues with the tagging feature? It's very slow and unresponsive for me.

(Edit: The commenting feature is glitching out too. I think all of the "reactive" features are glitching out today.)

(Edit 2: It seems to be working better after I switch to my mobile Wi-Fi hotspot. Might just be the Wi-Fi network I was on.)

Comment by evelynciara on The EA Forum Editing Festival has begun! · 2021-04-08T05:08:52.648Z · EA · GW

For what it's worth, a lot of old posts don't have tags, or are missing tags that didn't exist when the posts were created. It'd be great to see a lot of these posts get tagged to make them easier to find.

Comment by evelynciara on The EA Forum Editing Festival has begun! · 2021-04-07T22:29:27.664Z · EA · GW

Just started by tagging this post :)

Comment by evelynciara on [deleted post] 2021-04-06T22:38:06.783Z

This is a duplicate of Long Reflection

Comment by evelynciara on Quadratic Payments: A Primer (Vitalik Buterin, 2019) · 2021-04-06T18:06:27.061Z · EA · GW

Colorado has been using QV to make some decisions, such as priorities for agencies and interagency groups.

Comment by evelynciara on The Epistemic Challenge to Longtermism (Tarsney, 2020) · 2021-04-05T06:35:16.205Z · EA · GW

Thanks for posting this! Your linkpost actually got me to watch the talk for the first time, even though I had been aware of this paper for a while.

I think some variant of the cubic growth model could be useful for figuring out whether trying to reduce x-risk is better than trying to make durable changes to the long-term "trajectory" of the social welfare curve. I spent some time a few months ago trying to address this by modeling the trajectory of humanity, so I appreciate this paper for proposing an even simpler toy model.

I have rough thoughts about how the utility from economic growth could be incorporated: Assume that each star system has a growth rate g that the residents of that star system can influence (e.g. through policy). The economy of each star system tends to grow exponentially, but GDP per capita has logarithmic utility, so the utility u(t) of the star system grows roughly linearly.

If the economy of each star system starts at a steady state with GDP per capita y_0, then grows exponentially at rate g starting at time t_0, the time at which humanity arrives at the star system, we get u(t) = log y_0 + g(t − t_0) for t ≥ t_0. If the star system's GDP is capped at y_max, then we get u(t) = min(log y_0 + g(t − t_0), log y_max).

To incorporate economic growth into the trajectory model used in the paper, we can replace the cubic settlement term (call it n(t), the number of star systems settled by time t) with the cross-correlation of u and n (this assumes that all star systems have the same growth rate). Since u is piecewise linear and n is cubic, the cross-correlation is piecewise quintic (it's the integral of a cubic function times a linear function). My gut tells me that having a piecewise quintic term in the trajectory function instead of a cubic term isn't going to change much about the implications of the model.
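
To spell out the degree-counting in that parenthetical, here is a minimal version of the integral on a single piece; g, c, and the function names are my own notation, not the paper's:

    \int_0^t u(\tau)\, n(t - \tau)\, d\tau
        \quad\text{with}\quad u(\tau) = g\tau,\ n(s) = c s^3

    = g c \int_0^t \tau\, (t - \tau)^3\, d\tau = \frac{g c}{20}\, t^5

so each piece of the result is a degree-5 polynomial in t.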

Note: I realize that by using GDP per capita, I'm leaving out the population of each star system. This would result in multiplying u by a function that models the population over time, starting at time t_0.

Comment by evelynciara on Important Between-Cause Considerations: things every EA should know about · 2021-04-05T01:45:48.199Z · EA · GW

Is anyone working on this? I don't want to overcommit but I'd like to contribute in any way I can.

Comment by evelynciara on Nathan_Barnard's Shortform · 2021-04-03T20:27:40.770Z · EA · GW

I don't know, but I think it would be great to look into.

There was a proposal to make a "Rising Powers" or "BRICS" tag, but the community was most interested in making one for China. I'd like to see more discussion of other rising powers, including the other BRICS countries.

Comment by evelynciara on evelynciara's Shortform · 2021-04-02T06:31:26.026Z · EA · GW

Vaccine hesitancy might be a cause X (no, really!)

One thing that stuck out to me in the interview between Rob Wiblin and Ezra Klein is how much of a risk vaccine hesitancy poses to the US government's public health response to COVID:

But there are other things where the conservatism is coming from the simple fact, to put this bluntly, they deal with the consequences of a failure in a way you and I don’t. You and I are sitting here, like, “Go faster. The trade-offs are obvious here.” They are saying, “Actually, no. The trade-offs are not obvious. If this goes wrong, we can have vaccine hesitancy that destroys the entire effort.” ...

I think that there is a very different kind of feedback they are getting, and a kind of thing they fear, which is not that just the vaccine will be three weeks slower than it should have been, but if they are wrong, if they did not get enough data, if they missed something, they are going to imperil the whole effort, and that will also kill a gigantic number of people.

I'm aware of PR campaigns aimed at convincing people to get vaccinated, especially populations with higher rates of vaccine hesitancy. I wonder if these efforts could lead to a permanent shift in public attitudes toward vaccines. If that happens, then maybe governments can act faster and take more high-risk, high-reward actions during future epidemics without having to worry as much about vaccine hesitancy and mistrust of public health authorities "crashing" the public health response.

Comment by evelynciara on [deleted post] 2021-04-01T18:51:48.571Z

I'm thinking we should rename this tag to "April Fools' Day" or "April Fools' Day Posts" and add a description. (Seriously.)

Comment by evelynciara on New Top EA Causes for 2021? · 2021-04-01T18:45:56.155Z · EA · GW

Reducing Existential Risk by Embracing the Absurd

As we all know, longtermists face a lot of moral cluelessness: it is impossible to predict all of the consequences of any of our actions over the very long term. This makes us especially susceptible to existential crises. As longtermists, we should reduce this existential risk by recognizing that the universe is fundamentally meaningless, and that we are the only ones who can create meaning. We should embrace the absurd.

Comment by evelynciara on New Top EA Causes for 2021? · 2021-04-01T18:33:08.973Z · EA · GW

I suggest that the names be reassigned using the Top Trading Cycles and Trains algorithm.

Comment by evelynciara on Is Democracy a Fad? · 2021-04-01T06:32:08.234Z · EA · GW

Thanks for your very thorough response! I'm going to try to articulate my reasons for being skeptical based on what I understand about AI and econ (although I'm not an expert in either). And I'll definitely read the papers you linked when I have more time.

The human brain is ultimately just a physical thing, so there's no fundamental physical reason why (at least in aggregate) human-made machines couldn't perform all of the same tasks that the brain is capable of.

I agree that it's theoretically possible to build AGI; as I like to put it, it's a no-brainer (pun very much intended).

But I think that replicating the capabilities of the human brain will be very expensive. Even if algorithmic improvements drive down the amounts of compute needed for ML training and inference, I would expect narrow AI systems to be cheaper and easier to train than more general ones at any point in time. If you wanted to automate 3 different tasks, you would train 3 separate ML systems to do each of them, because you could develop them independently from each other. Whereas if you tried to train a single AI system to do all of them, I think it would be more complicated to ensure that it reaches the same performance as the collection of narrow AI systems, and it would require more compute.

Also, if you wanted a general intelligence (whether a human or machine) to do tasks that require <insert property of general intelligence>, I think it would be cheaper to hire humans, up to a point. This is partly because, until AGI is commercially viable, the process of developing and maintaining AI systems necessarily involves human labor. Machine intelligence scales because computation does, but I think it would be unlikely to scale enough to make machine labor more cost-effective than human labor in all cases.

I do think that AGI depressing human wages to the point of mass unemployment is a tail risk that society should watch for, and that it would lead to humans losing control of society through enfeeblement, but I don't think it's a necessary outcome of further AI development.

Comment by evelynciara on evelynciara's Shortform · 2021-03-31T22:30:17.793Z · EA · GW

Some rough thoughts on cause prioritization

  • I've been tying myself up in knots about what causes to prioritize. I originally came back to effective altruism because I realized I had gotten interested in 23 different causes and needed to prioritize them. But looking at the 80K problem profile page (I am fairly aligned with their worldview), I see at least 17 relatively unexplored causes that they say could be as pressing as the top causes they've created profiles for. I've taken a stab at one of them: making surveillance compatible with privacy, civil liberties, and public oversight.
  • I'm sympathetic to this proposal for how to prioritize given cluelessness. But I'm not sure it should dominate my decision making. It also stops feeling like altruism when it's too abstracted away from the object-level problems (other than x-risk and governance).
  • I've been seriously considering just picking causes from the 80K list "at random."
    • By this, I mean could just pick a cause from the list that seems more neglected, "speaks to me" in some meaningful way, and that I have a good personal fit for. Many of the more unexplored causes on the 80K list seem more neglected, like one person worked on it just long enough to write one forum post (e.g. risks from malevolent actors).
    • It feels inherently icky because it's not really taking into account knowledge of the scale of impact, and it's the exact thing that EA tells you not to do. But: MIRI calls it quantilizing, or picking an action at random from the top x% of actions one could do (see the sketch after this list). They think it's a promising alternative to expected utility maximization for AI agents, which makes me more confident that it might be a good strategy for clueless altruists too.
    • Some analogies that I think support this line of thinking:
      • In 2013, the British newspaper The Observer ran a contest between professional investment managers and... a cat throwing a toy at a dartboard to pick stocks. The cat won. According to the efficient market hypothesis, investors are clueless about what investing opportunities will outperform the pack, so they're unlikely to outperform an index fund or a stock-picking cat. If we're similarly clueless about what's effective in the long term, then maybe the stochastic approach is fine.
      • One strategy for dimensionality reduction in machine learning and statistics is to compress a high-dimensional dataset into a lower-dimensional space that's easier to compute with by creating a random projection. Even though the random projection doesn't take into account any information in the dataset (like PCA does), it still preserves most of the information in the dataset most of the time.
  • I've also been thinking about going into EA community building activities (such as setting up an EA/public interest tech hackathon) so I can delegate, in expectation, the process of thinking about which causes are promising to other people who are better suited to doing it. If I did this, I would most likely still be thinking about cause prioritization, but it would allow me to stretch that thinking over a longer time scale than if I had to do it all at once before deciding on an object-level cause to work on.
  • Even though I think AI safety is a potentially pressing problem, I don't emphasize it as much because it doesn't seem constrained by CS talent. The EA community currently encourages people with CS skills to go into either AI technical safety or earning to give. Direct work applying CS to other pressing causes seems more neglected, and it's the path I'm exploring.
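
Since the quantilizing idea above is concrete enough to implement, here is a minimal sketch. The cause names and scores are placeholders for whatever neglectedness/fit estimates one actually has, not real figures:

    import random

    def quantilize(actions, estimated_utility, top_fraction=0.1):
        """Pick uniformly at random from the top fraction of actions,
        ranked by (noisy, possibly unreliable) utility estimates."""
        ranked = sorted(actions, key=estimated_utility, reverse=True)
        cutoff = max(1, int(len(ranked) * top_fraction))
        return random.choice(ranked[:cutoff])

    # Hypothetical scores -- placeholders, not real estimates.
    causes = {"surveillance reform": 7.1, "malevolent actors": 6.8,
              "wild animal welfare": 6.5, "voting reform": 5.9,
              "AI applied to other causes": 5.5}
    print(quantilize(list(causes), causes.get, top_fraction=0.4))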

Comment by evelynciara on Open and Welcome Thread: March 2021 · 2021-03-31T10:25:08.428Z · EA · GW

I see that someone added a card image. Thanks to whoever did that!

Comment by evelynciara on Open and Welcome Thread: March 2021 · 2021-03-30T05:06:47.916Z · EA · GW

I created a sequence for the 2019 EA Survey. Is anyone else able to see it? Also, why isn't it linked from the sequences page?

Comment by evelynciara on Is Democracy a Fad? · 2021-03-27T22:20:04.511Z · EA · GW

I think this argument is interesting. Maybe this is my neoclassical-econ bias speaking, but I'm more skeptical of automation displacing human labor (as I've said in this shortform). It's not clear to me that AI firms will have economic incentives to produce general AIs as opposed to more narrow AIs, and I think mass technological unemployment is less likely without general AI.

Comment by evelynciara on evelynciara's Shortform · 2021-03-27T17:17:20.731Z · EA · GW

Yeah, it is very similar to preference utilitarianism. I'm still undecided between hedonic and preference utilitarianism, but thinking about this made me lean more toward preference utilitarianism.

What do you think is wrong with the current definitions of liberty? I think the concept of well-being is similarly vague. I tend to use different proxies for well-being interchangeably (fulfillment of preferences, happiness minus suffering, good health as measured by QALYs or DALYs, etc.) and I think this is common practice in EA. But I still think that freedom and well-being are useful concepts: for example, most people would agree that China has less economic and political freedom than the United States.

Comment by evelynciara on evelynciara's Shortform · 2021-03-26T17:21:20.695Z · EA · GW

Effective Altruism and Freedom

I think freedom is very important as both an end and a means to the pursuit of happiness.

Economic theory posits a deep connection between freedom (both positive and negative) and well-being. When sufficiently rational people are free to make choices from a broader choice set, they can achieve greater well-being than they could with a smaller choice set. Raising people's incomes expands their choice sets, and consequently, their happiness - this is how GiveDirectly works.

I wonder what a form of effective altruism that focused on maximizing (positive and negative) freedom for all moral patients would look like. I think it would be very similar to the forms of EA focused on maximizing total or average well-being; both the freedom- and well-being-centered forms of EA would recommend actions like supporting GiveDirectly and promoting economic growth. But we know that different variants of utilitarianism have dramatically different implications in some cases. For example, the freedom-maximizing worldview would not endorse forcing people into experience machines.

We can also think of the long-term future of humanity in terms of humanity's collective freedom to choose how it develops. We want to preserve our option value - our freedom to change course - and avoid making irreversible decisions until we are sure they are right.

Comment by evelynciara on Open and Welcome Thread: March 2021 · 2021-03-24T19:22:14.333Z · EA · GW

Is anyone else having trouble logging into the EA Fellowship Weekend Grip platform? I can't get past the screen where I set my password.

Comment by evelynciara on Please stand with the Asian diaspora · 2021-03-21T17:18:21.408Z · EA · GW

PBS Newshour created this list of ways people in the US can fight racism and violence against Asian Americans. (I'll add it to the post.)

I also think that solidarity with Asians around the world includes opposing the human rights violations occurring in Asian countries, such as Myanmar, China, and India.

Comment by evelynciara on [deleted post] 2021-03-13T00:19:50.069Z

Can this tag be unlocked so that it can be added to this page?