Posts

Introducing the EA Public Interest Technologists Slack community 2021-09-08T17:08:55.698Z
Epistemic trespassing, or epistemic squatting? | Noahpinion 2021-08-25T01:50:00.748Z
Database dumps of the EA Forum 2021-07-27T19:19:15.438Z
World federalism and EA 2021-07-14T05:53:34.769Z
Why You Should Donate a Kidney - The Neoliberal Podcast 2021-06-27T04:01:11.570Z
[Podcast] Tom Moynihan on why prior generations missed some of the biggest priorities of all 2021-06-25T15:39:58.856Z
Open, rigorous and reproducible research: A practitioner’s handbook 2021-06-24T21:20:11.622Z
Exporting EA discussion norms 2021-06-01T13:35:11.840Z
Should EAs in the U.S. focus more on federal or local politics? 2021-05-05T08:33:14.691Z
If you had a large amount of money (at least $1M) to spend on philanthropy, how would you spend it? 2021-05-01T00:27:48.625Z
Why AI is Harder Than We Think - Melanie Mitchell 2021-04-28T08:19:02.842Z
To Build a Better Ballot: an interactive guide to alternative voting systems 2021-04-18T06:24:43.454Z
Moral pluralism and longtermism | Sunyshore 2021-04-17T00:14:13.114Z
What does failure look like? 2021-04-09T22:05:16.065Z
Thoughts on "trajectory changes" 2021-04-07T02:18:36.962Z
Quadratic Payments: A Primer (Vitalik Buterin, 2019) 2021-04-05T18:05:55.215Z
Please stand with the Asian diaspora 2021-03-20T01:05:39.533Z
How should EAs manage their copyrights? 2021-03-09T18:42:06.250Z
Is The YouTube Algorithm Radicalizing You? It’s Complicated. 2021-03-01T21:50:17.109Z
Surveillance and free expression | Sunyshore 2021-02-23T02:14:49.084Z
How can non-biologists contribute to wild animal welfare? 2021-02-17T20:58:44.034Z
[Podcast] Ajeya Cotra on worldview diversification and how big the future could be 2021-01-22T23:57:48.193Z
What I believe, part 1: Utilitarianism | Sunyshore 2021-01-10T17:58:58.513Z
What is the marginal impact of a small donation to an EA Fund? 2020-11-23T07:09:02.934Z
Which terms should we use for "developing countries"? 2020-11-16T00:42:58.385Z
Is Technology Actually Making Things Better? – Pairagraph 2020-10-01T16:06:23.237Z
Planning my birthday fundraiser for October 2020 2020-09-12T19:26:03.888Z
Is existential risk more pressing than other ways to improve the long-term future? 2020-08-20T03:50:31.125Z
What opportunities are there to use data science in global priorities research? 2020-08-18T02:48:23.143Z
Are some SDGs more important than others? Revealed country priorities from four years of VNRs 2020-08-16T06:56:19.326Z
How strong is the evidence of unaligned AI systems causing harm? 2020-07-21T04:08:07.719Z
What norms about tagging should the EA Forum have? 2020-07-14T04:19:54.841Z
Does generality pay? GPT-3 can provide preliminary evidence. 2020-07-12T18:53:09.454Z
Which countries are most receptive to more immigration? 2020-07-06T21:46:03.732Z
Will AGI cause mass technological unemployment? 2020-06-22T20:55:00.447Z
Idea for a YouTube show about effective altruism 2020-04-24T05:00:00.853Z
How do you talk about AI safety? 2020-04-19T16:15:59.288Z
International Affairs reading lists 2020-04-08T06:11:41.620Z
How effective are financial incentives for reaching D&I goals? Should EA orgs emulate this practice? 2020-03-24T18:27:16.554Z
What are some software development needs in EA causes? 2020-03-06T05:25:50.461Z
My Charitable Giving Report 2019 2020-02-27T16:35:42.678Z
Shoot Your Shot 2020-02-18T06:39:22.964Z
Does the President Matter as Much as You Think? | Freakonomics Radio 2020-02-10T20:47:27.365Z
Prioritizing among the Sustainable Development Goals 2020-02-07T05:05:44.274Z
Open New York is Fundraising! 2020-01-16T21:45:20.506Z
What are the most pressing issues in short-term AI policy? 2020-01-14T22:05:10.537Z
Has pledging 10% made meeting other financial goals substantially more difficult? 2020-01-09T06:15:13.589Z
evelynciara's Shortform 2019-10-14T08:03:32.019Z

Comments

Comment by evelynciara on evelynciara's Shortform · 2021-09-27T06:13:09.462Z · EA · GW

I've been thinking about AI safety again, and this is what I'm thinking:

The main argument of Stuart Russell's book focuses on reward modeling as a way to align AI systems with human preferences. But reward modeling seems more like an AI capabilities technology than an AI safety one. If it's really difficult to write a reward function for a given task Y, then it seems unlikely that AI developers would deploy a system that attempts the task with a misspecified, hand-written reward function; instead, reward modeling is what makes it feasible to build an AI system that does the task at all.

Even with reward modeling, though, AI systems will still develop the convergent instrumental drives - self-preservation, goal preservation, resource acquisition, etc. - even if their developers specified their goals well. That said, maybe corrigibility and not doing bad things can be built into the systems' goals using reward modeling.

The ways I could see reward modeling technology failing to prevent AI catastrophes (other than misuse) are:

  • An AI system is created using reward modeling, but the learned reward function still fails in a catastrophic, unexpected way. This is similar to how humans often take actions that unintentionally cause harm, such as habitat destruction, because they're not thinking about the harms that occur.
    • Possible solution: create a model garden for open source reward models that developers can use when training new systems with reward modeling. This way, developers start from a stronger baseline with better safety guarantees than they would have if they were developing reward modeling systems from scratch/with only their proprietary training data.
  • A developer cuts corners while creating an AI system (perhaps due to economic pressure) and doesn't give the system a robust enough learned reward function, and the system fails catastrophically.
    • Lots of ink has been spilled about arms race dynamics 😛
    • Possible solution: Make sure reward models can be run efficiently. For example, if reward modeling is done using a neural network that outputs a reward value, make sure it can be done well even with slimmer neural networks (fewer parameters, lower bit depth, etc.).
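For concreteness, here's a minimal sketch of the kind of learned reward model under discussion - trained from pairwise human preferences, Bradley-Terry style, as in Christiano et al. (2017). The architecture, dimensions, and names are illustrative assumptions, not anyone's actual implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Sketch of a small neural reward model: maps an observation to a scalar reward."""

    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # (batch, steps, obs_dim) -> (batch, steps): one scalar reward per step
        return self.net(obs).squeeze(-1)

def preference_loss(model: RewardModel,
                    preferred: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    # A human labeler preferred the first trajectory segment over the second.
    # Maximize log P(preferred > rejected) under a Bradley-Terry choice model.
    r_pref = model(preferred).sum(dim=-1)  # total reward of preferred segment
    r_rej = model(rejected).sum(dim=-1)    # total reward of rejected segment
    return -F.logsigmoid(r_pref - r_rej).mean()
```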
Comment by evelynciara on evelynciara's Shortform · 2021-09-26T16:40:21.276Z · EA · GW

Content warning: missing persons, violence against women, racism.

Amid the media coverage of the Gabby Petito case in the United States, there's been some discussion of how missing persons cases for women and girls of color are more neglected than those for missing White women. Some statistics:

Black girls and women go missing at high rates, but that isn't reflected in news coverage of missing persons cases. In 2020, of the 268,884 girls and women who were reported missing, 90,333, or nearly 34% of them, were Black, according to the National Crime Information Center. Meanwhile, Black girls and women account for only about 15% of the U.S. female population, according to census data. In contrast, white girls and women — which includes those who identify as Hispanic — made up 59% of the missing, while accounting for 75% of the overall female population.

[...]

In [Wyoming], more than 400 Indigenous girls and women went missing between 2011 and the fall of 2020, according to a state report. Indigenous people made up 21% of homicide victims in Wyoming between 2000 and 2020, despite being less than 3% of the state's population. The disparity can be seen in the media: Only 18% of Indigenous female victims received coverage. However, among white victims, 51% were in the news.

To be clear, Rivers explains, it's not about asking for more attention or being in "competition" with white people — it's about other groups getting the same attention as white victims and having their lives honored in the same ways.

Comment by evelynciara on evelynciara's Shortform · 2021-09-22T16:27:12.352Z · EA · GW

I think an EA career fair would be a good idea. It could include EA orgs as well as non-EA orgs that are relevant to EAs (for gaining career capital or earning to give).

Comment by evelynciara on Open Thread: September 2021 · 2021-09-18T07:00:30.406Z · EA · GW

I think someone should offer a prize for thoughtful responses to the "Most Important Century" series. I think it's important for someone to point out flaws in the arguments since many people will be relying on it as their gateway to longtermist EA.

Comment by evelynciara on evelynciara's Shortform · 2021-08-30T19:04:06.643Z · EA · GW

Wild idea: Install a small modular reactor in a charter city and make energy its biggest export!

Charter cities' advantage is their lax regulatory environment relative to their host countries. Such permissiveness could be a good environment for deploying nuclear reactors, which are controversial and regulated to death in many countries. Charter cities are good environments for experimenting with governance structures; they can also be good for experimenting with controversial technologies.

Comment by evelynciara on Open Thread: August 2021 · 2021-08-30T04:01:12.714Z · EA · GW

I found it: "Why Nations Fail" and the long-termist view of global poverty

Comment by evelynciara on All Possible Views About Humanity's Future Are Wild · 2021-08-28T05:16:42.008Z · EA · GW

I found this article enthralling. But I have a critique:

So humanity would have to go extinct in some way that leaves no other intelligent life (or intelligent machines) behind.

A few people I know think this is not a very "wild" outcome. Earth could suffer a disaster that wipes out both humanity and the digital infrastructure needed to sustain advanced AI. I think this is a distinct possibility because humanity seems resilient whereas IT infrastructure is especially brittle - it depends on electricity and communications systems of some sort.

To put some numbers on this:

  • In The Precipice, Toby Ord estimates that total existential risk over the next 100 years is 1/6, and x-risk from AI is 1/10. So the total x-risk not from AI is 1/6 - 1/10 = 1/15 in the next century. This means that such a disaster (one in which humans and AI both go extinct) is expected to happen about once every 1,500 years.
  • Given that humanity goes extinct, another intelligent species emerging on Earth and restarting civilization seems really unlikely. I'd put it at once every 100,000 years (a scientific wild-ass guess).

Another intuition that may explain people's faith in the "skeptical view": Species come and go on Earth all the time. Humans are just another species - and, at that, are "disrupting" the "natural order" of Earth's biosphere, and will eventually go extinct too.

Comment by evelynciara on This Can't Go On · 2021-08-28T02:05:07.732Z · EA · GW

Jeremy Rifkin discusses this possibility in The Zero Marginal Cost Society

Comment by evelynciara on Utilitarianism Symbol Design Competition · 2021-08-21T17:00:33.671Z · EA · GW

I like this design, but it violates the rule of tincture: the heraldic metals - yellow (or) and white (argent) - should not be placed on each other because they don't contrast enough. So does the original five-star design. I would use a different background color, like light blue.

Comment by evelynciara on EA Survey 2020: How People Get Involved in EA · 2021-08-19T15:21:39.545Z · EA · GW

Hi! You might want to start putting these in a sequence, as Vaidehi and I have done: 2017, 2018, 2019

Comment by evelynciara on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-19T03:20:59.528Z · EA · GW

I think this would be broadly useful and in particular increase the reach of mobile payment-based activities like GiveDirectly. I'd be curious about estimates of how cost-effective increasing internet penetration would be, compared to throwing more money at GD.

Comment by evelynciara on evelynciara's Shortform · 2021-08-16T04:31:04.862Z · EA · GW

Also, the US withdrawal from Afghanistan is a teaching moment for improving institutional decision making. Biden appears to have been blindsided by the speed of the Taliban's takeover:

“The jury is still out, but the likelihood there’s going to be the Taliban overrunning everything and owning the whole country is highly unlikely,” Biden said on July 8.

(I thought that it might take 30 days for the Taliban to completely take over Afghanistan, whereas it happened over a weekend.)

And in general, the media seems to think the US drawdown was botched. USA Today has called it "predictable and disastrous."

Comment by evelynciara on evelynciara's Shortform · 2021-08-16T04:19:50.132Z · EA · GW

Content warning: the current situation in Afghanistan (2021)

Is there anything people outside Afghanistan can do to address the worsening situation in Afghanistan? GiveDirectly-style aid to Afghans seems like not-an-option because the previous Taliban regime "prevented international aid from entering the country for starving civilians." (Wikipedia)

The best thing we can do is probably to help resettle Afghan refugees, whether by providing resources to NGOs that help them directly, or by petitioning governments to admit more of them. Some charities that do this:

I don't have a good sense of how much impact donations to these charities would have. The US is already scrambling to get SIV applicants out of Afghanistan and into the US and other countries where they can wait in safety for their applications to be processed. On the margins, advocacy groups can probably advocate for this process to be improved and streamlined.

Comment by evelynciara on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-14T21:25:59.714Z · EA · GW

I agree with this. As the article says, multiple funders are pulling out of nuclear arms control, not just MacArthur. So it would be a good idea for EA funders like Open Phil to come in and close the gap. But in doing so, we should understand why MacArthur and other funders are exiting this field and learn from them to figure out how to do better.

Comment by evelynciara on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-14T21:24:34.189Z · EA · GW

I misread this as "nuclear power", not "nuclear arms control"  😂

Comment by evelynciara on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-13T04:40:14.439Z · EA · GW

I definitely want to see more modeling of supervolcano and comet disasters.

Comment by evelynciara on EA Forum feature suggestion thread · 2021-08-12T22:54:44.918Z · EA · GW

Tags for tags: We should turn the "Related entries" sections of wiki pages into native tags so we can build a crowdsourced graph of links between the wiki pages. Links can be uni- or bidirectional and specify different types of relationships such as "A is related to B" or "A is a parent of B".
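A hypothetical sketch of how such typed links might be stored (the schema and names here are illustrative, not how the Forum actually works):

```python
from dataclasses import dataclass

@dataclass
class TagLink:
    """One crowdsourced link between two wiki pages."""
    source: str
    target: str
    relation: str            # e.g. "related_to", "parent_of"
    bidirectional: bool = False

links = [
    TagLink("Global health and development", "Malaria", "parent_of"),
    TagLink("Democracy", "Open society", "related_to", bidirectional=True),
]

# Build the graph of wiki pages as an adjacency map over the link list.
graph: dict[str, set[str]] = {}
for link in links:
    graph.setdefault(link.source, set()).add(link.target)
    if link.bidirectional:
        graph.setdefault(link.target, set()).add(link.source)
```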

Comment by evelynciara on Open Thread: August 2021 · 2021-08-12T01:58:45.709Z · EA · GW

Hey, I'm trying to find a recent EA Forum post about the differences between longtermist and "near-termist" EAs. It conceptualizes these groups as having clusters of traits that don't necessarily have to do with valuing the far future, such as "near-termists prefer tight feedback loops and strong evidence." I'm pretty sure it was published in the last 2 years, but I'm having a hard time searching for it. Does anyone know where it is?

Comment by evelynciara on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-12T00:18:08.033Z · EA · GW

The Sustainable Development Goals - and their predecessor, the MDGs - are like a megaproject led by the UN. Some of these are already aligned with EA priorities, such as the following:

  • Eradicating extreme poverty (Goal 1, Target 1.1)
  • Ending hunger (Goal 2, Target 2.1) and malnutrition (Target 2.2)
    • Fortify Health aims to improve health by providing fortified wheat flour
  • Good health and well-being (Goal 3)
  • Clean water and sanitation (Goal 6)
  • Ending energy poverty (Goal 7, Target 7.1)
  • Increasing the share of renewable energy (Target 7.2) and energy efficiency (Target 7.3)
  • Promoting clean energy innovation (Target 7.A)
  • Decent work and economic growth (Goal 8)

The Economist has written that Goal 1 (ending poverty) should be "at the head of a very short list." In my opinion, if we're going to do a megaproject, we should take a handful of the SDG targets (such as 1.1, ending extreme poverty) and spend billions of dollars aggressively optimizing them.

Comment by evelynciara on Towards a Weaker Longtermism · 2021-08-08T22:17:47.232Z · EA · GW

Yeah. I have this idea that the EA movement should start with short-term interventions and work its way up to interventions that operate over longer and longer timescales, as we get more comfortable understanding their long-term effects.

Comment by evelynciara on evelynciara's Shortform · 2021-08-07T23:04:26.345Z · EA · GW

My shortforms on public transportation as an EA cause area:

Comment by evelynciara on evelynciara's Shortform · 2021-08-07T20:09:07.646Z · EA · GW

Back-of-the-envelope calculations for improving efficiency of public transit spending

The cost of building and maintaining public transportation varies widely across municipalities due to inefficiencies - for example, the NYC Second Avenue Subway has cost $2.14 billion per kilometer to build, whereas a kilometer of tunnel in Spain costs an average of $80.22 million (Transit Costs Project). While many transit advocacy groups push for improving the quality of public transit service (e.g. the Straphangers Campaign in NYC), few advocate for reducing wasteful infrastructure spending.

BOTEC for operating costs

  • Uday Schultz writes: "bringing NYCT’s [the NYC subway agency] facility maintenance costs down to the national average could save $1.3 billion dollars per year."
  • With a 6% discount rate, this saving has a net present value of about $21.7 billion ($1.3 billion / 0.06, treated as a perpetuity; see the sketch after this list). So an advocacy campaign that spent $21.7 million to reduce NYCT's maintenance costs to the national average would yield a 1000x return.
  • Things that would make the cost-effectiveness of this campaign higher or lower:
    • (Higher) A lower discount rate would increase the net present value of the benefits
    • (Higher) In theory, we can reduce maintenance costs to even lower levels than the US national average; Western European levels are (I think) lower.
    • (Lower) We might not realize all of the potential efficiency gains for political reasons - e.g. if contractors and labor unions block the best possible reforms.
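A minimal sketch of the perpetuity arithmetic behind this BOTEC, using the figures cited above:

```python
# Perpetuity NPV of a constant annual saving: NPV = saving / discount rate.
annual_saving = 1.3e9   # Uday Schultz's estimated annual saving, $/year
discount_rate = 0.06

npv = annual_saving / discount_rate   # ~$21.7 billion
campaign_budget = npv / 1000          # spending this would yield a 1000x return
print(f"NPV: ${npv / 1e9:.1f}B; budget for a 1000x return: ${campaign_budget / 1e6:.1f}M")
```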

BOTEC for capital construction costs

  • The NYC Second Avenue Subway will be 13.7 km long when completed and cost over $17 billion. Phase 1 of the subway line has been completed, consists of 2.9 km of tunnel, and cost $4.45 billion.
  • So the rest of the planned subway line (yet to be built) consists of 10.8 km of tunnel and is expected to cost $12.55 billion, for an average of $1.16 billion per km of tunnel.
  • Phase 2 of the subway will be 2.4 km long and cost $6 billion, for an average of $2.5 billion per km of tunnel.
  • There will likely be cost overruns in the future, so let's take the average of these two numbers and assume that the subway will cost an average of $1.83 billion/km to build.
  • As I stated before, the average cost per km of new tunnel in Spain is $80.22 million (Transit Costs Project). If NYCT could build the rest of the Second Avenue Subway at this cost, it would save $1.75 billion per km of new tunnel, or $18.9 billion overall (since there are 10.8 km of tunnel left to build); the arithmetic is sketched below.
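A sketch of that capital-cost arithmetic, again using only the figures above:

```python
# Second Avenue Subway: remaining cost per km and potential savings
# if the rest could be built at average Spanish tunneling costs.
total_km, phase1_km = 13.7, 2.9

remaining_km = total_km - phase1_km                # 10.8 km yet to be built
remaining_cost = 12.55e9                           # expected cost of the rest
per_km_remaining = remaining_cost / remaining_km   # ~$1.16B/km

per_km_phase2 = 6e9 / 2.4                          # Phase 2: ~$2.5B/km
per_km_assumed = (per_km_remaining + per_km_phase2) / 2   # ~$1.83B/km

spain_per_km = 80.22e6                             # Transit Costs Project average
savings = (per_km_assumed - spain_per_km) * remaining_km  # ~$18.9 billion
print(f"Potential savings: ${savings / 1e9:.1f}B")
```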

N.B.: These are BOTECs for individual aspects of transit spending, and a transit spending advocacy project would benefit from economies of scope because it would be lobbying for cost reductions across all aspects of public transit, not just e.g. the Second Avenue Subway or operating costs.

See also: "So You Want to Do an Infrastructure Package," Alon Levy's whitepaper on reducing transit infrastructure costs (Niskanen Center, 2021)

Comment by evelynciara on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-02T18:19:06.496Z · EA · GW

Hi! Like Tessa, I appreciate you sharing your concerns about the EA movement. I downvoted because some of your criticisms seem off the mark to me. Specifically, in the two years I've been highly involved in EA, I haven't heard a single person say that non-white people are "biologically incapable of governing themselves." The scientific consensus is that "claims of inherent differences in intelligence between races have been broadly rejected by scientists on both theoretical and empirical grounds" (Wikipedia), so it seems like a bizarre thing for an EA to say. Do you mind telling us where you've heard someone in the EA community say this?

Comment by evelynciara on Hack4Impact Cornell is offering custom software development for non-profits · 2021-08-01T21:08:43.257Z · EA · GW

Go Big Red! Don't you mean fall 2021?

Comment by evelynciara on EA Forum feature suggestion thread · 2021-07-31T00:40:32.323Z · EA · GW

We should add the ability to convert posts to questions (or back to regular posts, but that's tricky because answers would have to be converted to regular comments).

Also, the editor should automatically suggest converting your post to a linkpost or question post if the title or body text matches certain patterns. For example, if you write "Crossposted from X" or "This is a linkpost" at the top, it can infer that your post is most likely a linkpost. I see a lot of posts from inexperienced users that are classified as regular posts even though they're intended to be linkposts or questions, so I think this would be helpful to them.
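A hypothetical sketch of such a heuristic (the patterns and function names are illustrative, not part of the Forum's actual codebase):

```python
import re

# Flag a draft as a probable linkpost if its opening lines match common patterns.
LINKPOST_PATTERNS = [
    re.compile(r"cross-?posted from", re.IGNORECASE),
    re.compile(r"this is a link-?post", re.IGNORECASE),
]

def looks_like_linkpost(body: str, first_n_lines: int = 3) -> bool:
    head = body.splitlines()[:first_n_lines]
    return any(p.search(line) for p in LINKPOST_PATTERNS for line in head)

# The editor could then prompt: "This looks like a linkpost. Convert it?"
```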

Comment by evelynciara on Decreasing populism and improving democracy, evidence-based policy, and rationality · 2021-07-29T05:25:26.489Z · EA · GW

Yeah, this makes total sense.

Comment by evelynciara on Propose and vote on potential EA Wiki entries · 2021-07-28T15:23:31.715Z · EA · GW

Yeah, maybe something broader like "democracy" or "liberal democracy." Perhaps we could rename the "direct democracy" tag to "democracy"?

Comment by evelynciara on Decreasing populism and improving democracy, evidence-based policy, and rationality · 2021-07-27T21:06:21.990Z · EA · GW

I appreciate that you did a cause report on this topic! I'm also interested in finding good ways to protect and improve liberal democracy.

I disagree with your assessment that "Generally, economic policy is hard to affect through philanthropy and as such this does not seem to be a very tractable cause." In the most recent 80,000 Hours interview, Alexander Berger cited macroeconomic stabilization as an Open Phil success story:

...we have funded for several years around macroeconomic stabilization policy. And I think the federal reserve and macroeconomic policymakers in the U.S. have really moved in our direction and adopted a lot of more expansionary policies and become more focused on increasing employment, relative to worrying about inflation. I think it’s really hard to attribute impact, so I’m not sure how much of that has to do with our funding, but it’s an area where I think the world has really moved our way, and we might’ve played a small role in that, and the stakes I think are quite high in terms of human wellbeing there. So I see that as a big win.

Also, anecdotally, I think there's been a lot of movement in the U.S. toward "free-market progressive" policies on zoning reform, occupational licensing, and non-competes. For example, several U.S. municipalities have abolished single-family zoning, and the Biden administration recently issued an executive order targeting occupational licensing and non-competes. And I think the YIMBY movement has grown a lot. All of these reforms would likely promote economic growth and reduce inequality in my opinion. So I'm a lot more optimistic about economic reforms.

I suggest looking into two organizations:

  • Niskanen Center: One of their key policy areas is defending liberal democracy (a.k.a. the open society) from authoritarian populism.
  • The Neoliberal Project / Center for New Liberalism aims to push back against left- and right-populism by building a movement around a new "neoliberal" political identity. Their theory of change is that politics is organized around group identities by default, so in order to create policy change, you need to create a new political identity around advocating for a set of policies. They have chapters around the world that do local political advocacy, like the YIMBY movement does. (Disclaimer: I've been very involved in online neoliberal politics.)
Comment by evelynciara on DeepMind: Generally capable agents emerge from open-ended play · 2021-07-27T15:17:41.480Z · EA · GW

AGI confirmed? 😬

Comment by evelynciara on Propose and vote on potential EA Wiki entries · 2021-07-26T16:36:41.021Z · EA · GW

That's a good idea.

Comment by evelynciara on Propose and vote on potential EA Wiki entries · 2021-07-26T04:51:01.256Z · EA · GW

I like this idea, sort of. I think we should create a politics and policy "mega-tag" (the tags that show up in white, like Existential risk) while keeping the others as sub-tags.

Comment by evelynciara on Propose and vote on potential EA Wiki entries · 2021-07-26T04:49:05.566Z · EA · GW

Open society

The ideal of an open society - a society with high levels of democracy and openness - is related to many EA causes and policy goals. For example, open societies are associated with long-run economic growth, and an open society is conducive to the "long reflection." This tag could host discussion about the value of open societies, the meaning of openness, and how to protect and expand open societies.

Comment by evelynciara on A Twitter Bot that regularly tweets current top posts from the EA Forum · 2021-07-26T03:01:25.375Z · EA · GW

This is cool! I stay away from Twitter, but I can see this increasing EA's profile among those who use it.

Comment by evelynciara on You should write about your job · 2021-07-20T16:29:59.735Z · EA · GW

What kind of data science are you interested in? The DSSG fellowship is focused on data science for academic research.

Comment by evelynciara on You should write about your job · 2021-07-19T04:43:32.081Z · EA · GW

I'm currently interning at the Stanford Data Science for Social Good summer fellowship! My team works on using computer vision and Google Street View data to identify physical features of buildings and urban environments that might correlate with community well-being in U.S. cities.

I think the fellowship is good for students who are interested in getting into academic research and data science, so I'm happy to talk more about it if anyone's interested.

Comment by evelynciara on What would a cheap, nutritious and enjoyable diet for the world's extreme poor people like? · 2021-07-14T07:12:40.763Z · EA · GW

I think peanuts would be a significant part of it, as peanuts are healthy and cheap to produce:

https://www.stack.com/a/are-peanuts-healthy/

Comment by evelynciara on Is Democracy a Fad? · 2021-07-07T05:50:16.214Z · EA · GW

Conversely, there's a hypothesis that the Indus Valley Civilization was more egalitarian, unlike other Bronze Age civilizations in the Near East and China that were hierarchical. See: this Twitter thread (also by Manvir Singh) and this article (by Patrick Wyman).

Comment by evelynciara on [Future Perfect] How to be a good ancestor · 2021-07-02T19:19:04.443Z · EA · GW

I'm really glad they linked longtermism to philosophies originating outside the EA movement that emphasize the needs of future generations:

Several Indigenous communities have long embraced the principle of “seventh-generation decision making,” which involves weighing how choices made today will affect a person born seven generations from now. In fact, it’s that kind of thinking that inspired Japanese economics professor Tatsuyoshi Saijo to create the Future Design movement (he learned about the concept while visiting the US and found it extraordinary).

[...]

In 2015, 21 young Americans filed a landmark case against the government — Juliana v. United States — in which they argued that its failure to confront climate change will have serious effects on both them and future generations, which constitutes a violation of their rights.

Comment by evelynciara on Open Thread: June 2021 · 2021-06-30T15:47:27.438Z · EA · GW

Thanks for letting me know! I'm interested in organizing an event soon, so this feature would be useful to me.

Comment by evelynciara on Open Thread: June 2021 · 2021-06-30T06:42:47.653Z · EA · GW

Is it still possible to create an event page on the forum?

Comment by evelynciara on EA needs consultancies · 2021-06-29T04:55:15.414Z · EA · GW

Thanks! I might use this in the future :)

Comment by evelynciara on evelynciara's Shortform · 2021-06-28T16:10:32.229Z · EA · GW

Reason to invest in cultivated meat research: we can use meat scaffolding technology to grow nervous tissue and put chemicals in the cell media that cause it to experience constant euphoria

Comment by evelynciara on What posts do you want someone to write? · 2021-06-27T21:44:18.466Z · EA · GW

I think it would be really interesting for someone to write about the intellectual history of environmental ethics and animal ethics, and probably environmentalism more broadly. The rift between them dates back at least to the 1980s, and I think it's important for EAs interested in environmentalism or (wild) animal welfare to understand how they're building on/situated in this discourse.

(Inspired by the recent 80K episode on the intellectual history of x-risk.)

Comment by evelynciara on A central directory for open research questions · 2021-06-27T16:30:16.232Z · EA · GW

I would add the GFI alternative protein solutions database

Comment by evelynciara on Open Thread: June 2021 · 2021-06-24T22:47:37.260Z · EA · GW

Great to meet you! You might be interested in some posts in the AI forecasting and Estimation of existential risk categories, such as:

I've also written a lot about AI risk on my shortform.

Comment by evelynciara on What should CEEALAR be called? · 2021-06-15T23:56:52.145Z · EA · GW

I like "EA Retreat House"

Comment by evelynciara on evelynciara's Shortform · 2021-06-15T20:59:05.257Z · EA · GW

Practical/economic reasons why companies might not want to build AGI systems

(Originally posted on the EA Corner Discord server.)

First, most companies that are using ML or data science are not using SOTA neural network models with a billion parameters, at least not directly; they're using simple models, because no competent data scientist would use a sophisticated model where a simpler one would do. Only a small number of tech companies have the resources or motivation to build large, sophisticated models (here I'm assuming, like OpenAI does, that model size correlates with "sophisticated-ness").

Second, increasing model size has diminishing returns with respect to model performance. Scaling laws usually relate model size to training loss via a power law, so every doubling of model size yields a smaller absolute improvement in training loss. And this is training performance, which is not the same as test-set performance - beyond a certain point, further gains on the training set don't translate into better real-world performance. (This is why techniques like early stopping exist - you stop training once the model's held-out validation performance stops improving.)
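A toy illustration of that power-law shape. The functional form and constants echo those reported by Kaplan et al. (2020), but they're used here purely for illustration, not as fitted values:

```python
# Toy power-law scaling of training loss with parameter count:
# L(N) = (N_c / N) ** alpha
N_C = 8.8e13
ALPHA = 0.076

def train_loss(n_params: float) -> float:
    return (N_C / n_params) ** ALPHA

prev = train_loss(1e8)
for n in [2e8, 4e8, 8e8, 1.6e9]:
    cur = train_loss(n)
    print(f"{n:.1e} params: loss {cur:.3f} (improvement {prev - cur:.3f})")
    prev = cur
# Each doubling of model size buys a smaller absolute drop in loss.
```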

(Counterpoint: Software systems typically have superstar economics - e.g. the best search engine is 100x more profitable than the second-best search engine. So there could be a non-linear relationship between model performance and profitability, such that increasing a model's performance from 97% to 98% makes a huge difference in profits whereas going from 96% to 97% does not.)

Third - and this reason only applies to AGI, not powerful narrow AIs - it's not clear to me how you would design an engineering process to ensure that an AGI system can perform multiple tasks very well and generalize to new tasks. Typically, when we design software, we create a test suite that evaluates its suitability for the tasks for which it's designed. Before releasing a new version of an AI system, we have to run the entire test suite on it and make sure it passes. It's obviously easier to design a test suite for an AI that is designed to do a few tasks well than for an AI that's supposed to be able to do any task. (On the flip side, this means that anyone seeking to design an AGI would have to design a way to test it to ensure that it's (1) actually an AGI and (2) performant.) (While generality isn't strictly necessary for AIs to be dangerous, I believe many of us would agree that AGIs are more dangerous as x-risks than narrow AIs.)

Fourth, setting out to create AGI would have a huge opportunity cost. Yes, technically, humans are probably not the absolute smartest, most capable beings that evolution could have built, but that doesn't mean that building a smarter AGI machine would be profitable. It seems to me that humans have a comparative advantage in planning etc. while "technology as a whole" will have a comparative advantage in e.g. doing machine vision at scale. So most firms ought to just hire a bunch of humans and design/purchase technological systems that complement humans' skill sets (this is a common idea about how future AI development will go, called "intelligence augmentation").

Comment by evelynciara on evelynciara's Shortform · 2021-06-15T16:53:54.069Z · EA · GW

Crazy idea: When charities apply for funding from foundations, they should be required to list 3-5 other charities they think should receive funding. Then, the grantmaker can run a statistical analysis to find orgs that are mentioned a lot and haven't applied before, reach out to those charities, and encourage them to apply. This way, the foundation can get a more diverse pool of applicants by learning about charities outside their network.
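A minimal sketch of that analysis, with hypothetical names and data:

```python
from collections import Counter

# Hypothetical data: each applicant lists 3-5 peer charities it endorses.
applications = {
    "Charity A": ["Charity B", "Charity C", "Charity D"],
    "Charity B": ["Charity C", "Charity E", "Charity F"],
    "Charity C": ["Charity E", "Charity A", "Charity G"],
}

mentions = Counter(peer for peers in applications.values() for peer in peers)
applicants = set(applications)

# Frequently endorsed orgs that haven't applied yet: candidates for outreach.
outreach = [(org, n) for org, n in mentions.most_common() if org not in applicants]
print(outreach)  # e.g. [('Charity E', 2), ('Charity D', 1), ...]
```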

Comment by evelynciara on Shouldn't 'Effective Altruism' be capitalized? · 2021-06-11T21:58:58.371Z · EA · GW

I generally don't capitalize "effective altruism" just as I wouldn't capitalize "liberalism" or "socialism" so... ¯\_(ツ)_/¯

Comment by evelynciara on Matt_Sharp's Shortform · 2021-05-23T00:48:00.998Z · EA · GW

It was funny until he insulted her appearance. Then 🤢