Posts

Why doesn't WWOTF mention the Bronze Age Collapse? 2022-09-19T06:29:47.162Z
Rethinking longtermism and global development 2022-09-02T05:28:41.059Z
What should we do about the proliferation of online EA community spaces? 2022-08-19T00:58:38.114Z
Humanity’s vast future and its implications for cause prioritization 2022-07-26T05:04:30.197Z
What kind of organization should be the first to develop AGI in a potential arms race? 2022-07-17T17:41:42.453Z
How are y'all finding places to stay for EAG San Francisco? 2022-06-08T04:39:25.894Z
What time does EAG San Francisco start and end? 2022-06-08T04:31:13.281Z
Animal welfare orgs: what are your software and data analysis needs? 2022-06-04T22:27:04.498Z
What is the difference between the San Francisco and Washington DC EAGs? 2022-06-01T23:45:19.140Z
Should U.S. donors give to an EA-focused PAC like Guarding Against Pandemics instead of individual campaigns? 2022-05-20T23:49:18.466Z
Are there good introductory materials that explain how charity evaluation works? 2022-05-06T03:03:36.236Z
What factors make EA outreach at a school valuable? 2022-04-26T06:07:16.467Z
What moral philosophies besides utilitarianism are compatible with effective altruism? 2022-04-16T19:53:05.829Z
NPR's Indicator on WHO funding and pandemic preparedness 2022-03-19T01:37:29.449Z
What are the standard terms used to describe risks in risk management? 2022-03-05T04:07:13.819Z
Is transformative AI the biggest existential risk? Why or why not? 2022-03-05T03:54:25.352Z
Stanford Data Science for Social Good is seeking student fellows and project partners 2022-01-21T01:45:25.487Z
What value would Open Phil's Global Health and Wellbeing framework put on economic growth? 2022-01-06T07:43:37.996Z
"Don't Look Up" and the cinema of existential risk | Slow Boring 2022-01-05T04:28:39.249Z
Partnerships between the EA community and GLAMs (galleries, libraries, archives, and museums) 2021-12-26T20:46:33.000Z
What holiday songs, stories, etc. do you associate with effective altruism? 2021-12-09T18:15:30.089Z
EA Public Interest Technologists Meeting #2 2021-12-05T01:54:26.161Z
Where should I donate? 2021-11-22T20:56:32.078Z
Our Criminal Justice Reform Program Is Now an Independent Organization: Just Impact 2021-11-17T16:08:50.662Z
How to Accelerate Technological Progress - New Things Under the Sun 2021-10-03T17:49:40.241Z
Introducing the EA Public Interest Technologists Slack community 2021-09-08T17:08:55.698Z
Epistemic trespassing, or epistemic squatting? | Noahpinion 2021-08-25T01:50:00.748Z
Database dumps of the EA Forum 2021-07-27T19:19:15.438Z
World federalism and EA 2021-07-14T05:53:34.769Z
Why You Should Donate a Kidney - The Neoliberal Podcast 2021-06-27T04:01:11.570Z
[Podcast] Tom Moynihan on why prior generations missed some of the biggest priorities of all 2021-06-25T15:39:58.856Z
Open, rigorous and reproducible research: A practitioner’s handbook 2021-06-24T21:20:11.622Z
Exporting EA discussion norms 2021-06-01T13:35:11.840Z
Should EAs in the U.S. focus more on federal or state/local politics? 2021-05-05T08:33:14.691Z
If you had a large amount of money (at least $1M) to spend on philanthropy, how would you spend it? 2021-05-01T00:27:48.625Z
Why AI is Harder Than We Think - Melanie Mitchell 2021-04-28T08:19:02.842Z
To Build a Better Ballot: an interactive guide to alternative voting systems 2021-04-18T06:24:43.454Z
Moral pluralism and longtermism | Sunyshore 2021-04-17T00:14:13.114Z
What does failure look like? 2021-04-09T22:05:16.065Z
Thoughts on "trajectory changes" 2021-04-07T02:18:36.962Z
Quadratic Payments: A Primer (Vitalik Buterin, 2019) 2021-04-05T18:05:55.215Z
Please stand with the Asian diaspora 2021-03-20T01:05:39.533Z
How should EAs manage their copyrights? 2021-03-09T18:42:06.250Z
Is The YouTube Algorithm Radicalizing You? It’s Complicated. 2021-03-01T21:50:17.109Z
Surveillance and free expression | Sunyshore 2021-02-23T02:14:49.084Z
How can non-biologists contribute to wild animal welfare? 2021-02-17T20:58:44.034Z
[Podcast] Ajeya Cotra on worldview diversification and how big the future could be 2021-01-22T23:57:48.193Z
What I believe, part 1: Utilitarianism | Sunyshore 2021-01-10T17:58:58.513Z
What is the marginal impact of a small donation to an EA Fund? 2020-11-23T07:09:02.934Z
Which terms should we use for "developing countries"? 2020-11-16T00:42:58.385Z

Comments

Comment by BrownHairedEevee (evelynciara) on Smart Movements Start Academic Disciplines · 2022-09-26T14:08:29.580Z · EA · GW

I like this idea. But why not set up welfare science, an interdisciplinary field including welfare economics, welfare biology, and positive psychology?

Comment by evelynciara on [deleted post] 2022-09-18T07:09:49.962Z

I was worried sick, I legit thought his plane had crashed on the way to EAG DC

Comment by BrownHairedEevee (evelynciara) on EA Forum feature suggestion thread · 2022-09-14T04:38:24.071Z · EA · GW

We should have at least one dedicated "megathread" for EAG-related questions each year, so it's easier to ask such questions in public without creating dedicated posts for each of them.

Comment by evelynciara on [deleted post] 2022-09-13T19:24:42.555Z

Point taken, thanks! My original comment was mainly responding to the OP's use of "Phil", and I wasn't aware that Émile still uses "he" pronouns, since their Twitter bio only says "they". I think the correction – using "Émile" while noting that Émile was "formerly known as Phil" to help others identify him – is satisfactory.

Comment by evelynciara on [deleted post] 2022-09-13T16:45:37.694Z

Thank you!

Comment by evelynciara on [deleted post] 2022-09-13T05:56:25.521Z

Please do not misgender Émile Torres. They may be a persona non grata in this community, but they still deserve to be called by their preferred name and pronouns like anyone else.

Comment by BrownHairedEevee (evelynciara) on Rethinking longtermism and global development · 2022-09-03T03:29:52.534Z · EA · GW

I think it depends on the countries that gain power. If South Africa or the EU becomes a great power I'd be less worried because they have liberal values. But the most likely candidates for great powers are China and India, and China is an outright dictatorship while India is slipping into one.

Comment by BrownHairedEevee (evelynciara) on Criticism of EA Criticisms: Is the real disagreement about cause prio? · 2022-09-02T15:25:07.224Z · EA · GW

Related post: "Disagreeing about what’s effective isn’t disagreeing with effective altruism"

Comment by BrownHairedEevee (evelynciara) on What Would A Longtermist Flag Look Like? · 2022-08-29T04:37:17.665Z · EA · GW

I associate it with robotics because it's a purplish metallic color.

Comment by BrownHairedEevee (evelynciara) on What Would A Longtermist Flag Look Like? · 2022-08-26T00:23:03.331Z · EA · GW

Here's another version:

The chevron is a symbol of progress, as in the South African flag and the progress pride flag. The gold, green, and lavender stripes represent humans, animals, and artificial sentience.

Comment by BrownHairedEevee (evelynciara) on What Would A Longtermist Flag Look Like? · 2022-08-24T20:07:36.713Z · EA · GW

Very late to this thread, but here's a design for a longtermist flag:

The ring of seven 7-pointed stars represents the seven classical planets, from which the seven days of the week are derived; this in turn represents the vastness of time. The dark blue field represents the vastness of outer space.

Comment by evelynciara on [deleted post] 2022-08-20T05:39:14.198Z

Hi Pablo, I noticed that you changed this page to state that MacAskill et al. merge tractability and neglectedness into a single factor called leverage. I think that's a misreading of the text: they actually split the dX/dW term (the tractability × neglectedness product in OCB's original formulation) in a different way, so that the first factor is tractability and the second factor is leverage instead of neglectedness. Here's the relevant quote from the paper:

An alternative approach is to identify 'tractability' with the overall difficulty of the problem. Let W̄ be the total amount of work that would be required to solve the problem (or to achieve some relevant benchmark), and let X̄ be the value of X that would count as a full solution. (The simplest way to think about this is that X is 'the percent solved', so that X̄ = 100%.) We can then write

dX/dW = (X̄/W̄) × [(dX/dW) / (X̄/W̄)].

The first factor on the right-hand side, X̄/W̄, measures the overall easiness of solving the problem, while the second, (dX/dW)/(X̄/W̄), measures how much easier it is to make progress at the current margin.... To avoid confusion, let us call this second factor leverage.

I think that the intent of the bolded text (emphasis mine) is to redefine tractability as "the overall easiness of solving the problem," or X̄/W̄. The factor (dX/dW)/(X̄/W̄) is what they're defining as leverage, not dX/dW as a whole.
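
To make the decomposition concrete, here's a toy illustration with made-up numbers (notation as reconstructed above): suppose a problem would take W̄ = 1,000 person-years of work to solve fully (X̄ = 100%), but a marginal person-year currently buys 0.5% of progress. Then:

```latex
\text{tractability} = \frac{\bar{X}}{\bar{W}} = \frac{100\%}{1000\ \text{py}} = 0.1\%/\text{py},
\qquad
\text{leverage} = \frac{dX/dW}{\bar{X}/\bar{W}} = \frac{0.5\%/\text{py}}{0.1\%/\text{py}} = 5
```

In other words, leverage is just the ratio of marginal to average returns, which is the sense in which it stands in for neglectedness in the decomposition.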

Comment by BrownHairedEevee (evelynciara) on Help with Upcoming NPR Interview with William MacAskill · 2022-08-18T04:02:22.652Z · EA · GW

Hey! Just saying, your "the1a.org" link is broken.

Comment by BrownHairedEevee (evelynciara) on How technical safety standards could promote TAI safety · 2022-08-15T05:33:04.530Z · EA · GW

Great post! I agree that standard setting could be useful. I think it could be especially important to set standards on how AI systems interact with animals and the natural environment, in addition to humans.

Comment by BrownHairedEevee (evelynciara) on How to Talk to Lefties in Your Intro Fellowship · 2022-08-15T01:38:51.356Z · EA · GW

I think one of the best narratives we can use with leftists/social justice types is allyship: EA is, in practice, about being allies to marginalized populations that lack the resources to improve their own welfare, such as the global poor, non-human animals, and people/beings who are yet to be born. We do this by using evidence to reason about what interventions would help these populations, and in the case of global poverty, we factor poor people's preferences about different types of outcomes into our decision-making.

Comment by BrownHairedEevee (evelynciara) on Annabella Wheatley's Shortform · 2022-08-14T19:26:49.957Z · EA · GW

"If everyone focused on working in prioritized causes then conditions in the majority of wealthy or economically stable-ish countries would rapidly deteriorate."

EA prioritization is about the best use of additional resources ("at the margin") given how existing resources are currently allocated, and priorities change as more money or workers get directed to any given cause. Once we reached the ideal number of people working on global poverty reduction or AI safety, for example, the next best thing for someone to work on would become the best thing for them to work on. Eventually, you'd reach a point where working on altruistic projects is no more beneficial for the world than taking an ordinary job. Let me know if this explanation makes sense.

Comment by BrownHairedEevee (evelynciara) on Paula Amato's Shortform · 2022-08-14T17:52:50.633Z · EA · GW

I have several acquaintances from developing countries who like EA in part because it prioritizes global poverty, although this view isn't universally shared by all people from developing countries.

Comment by BrownHairedEevee (evelynciara) on Against longtermism · 2022-08-12T04:33:48.121Z · EA · GW

Another example of long-term thinking working well is Ben Franklin's bequests to the cities of Boston and Philadelphia, which grew for 200 years before being cashed out. (Also one of the inspirations for the Patient Philanthropy Fund.)

Comment by BrownHairedEevee (evelynciara) on Data Science for Effective Good + Call for Projects + Call for Volunteers · 2022-08-10T05:16:26.180Z · EA · GW

Awesome initiative!

Edit: Didn't realize that you guys were taking over SEADS, an existing org. I think it would have been clearer had you put that information at the top of the post, where someone skimming can easily see it.

Comment by BrownHairedEevee (evelynciara) on [edited] Inequality is a (small) problem for EA and economic growth · 2022-08-09T06:58:55.226Z · EA · GW

Great work – I've been waiting for someone to use the isoelastic utility model! Are you going to submit this to the Criticism and Red Teaming Contest?

Comment by BrownHairedEevee (evelynciara) on Why does no one care about AI? · 2022-08-09T06:35:18.967Z · EA · GW

Okay, fine. I agree that it's hard to come up with an x-risk more urgent than AGI. (Though here's one: digital people being instantiated and made to suffer in large numbers would be an s-risk, and could potentially outweigh the risk of damage done by misaligned AGI over the long term.)

Comment by BrownHairedEevee (evelynciara) on Most* small probabilities aren't pascalian · 2022-08-08T01:38:04.460Z · EA · GW

This says "200 hundred". Do you mean 200 or 20,000?

Comment by BrownHairedEevee (evelynciara) on Why does no one care about AI? · 2022-08-08T00:22:33.795Z · EA · GW

I don't think we should defer too much to Ord's x-risk estimates, but since we're talking about them, here goes:

  • Ord's estimate of total natural risk is 1 in 10,000, which is roughly 1,700 times less than the total anthropogenic risk (1 in 6); see the quick check below.
  • Risk from engineered pandemics (1 in 30) is within an order of magnitude of risk from misaligned AI (1 in 10), so it's hardly a rounding error (although simeon_c recently argued that Ord "vastly overestimates" biorisk).
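
A quick arithmetic check of that ratio, using Ord's headline numbers:

```latex
\frac{1/6}{1/10{,}000} = \frac{10{,}000}{6} \approx 1{,}667
```
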
Comment by BrownHairedEevee (evelynciara) on Why does no one care about AI? · 2022-08-07T22:54:02.829Z · EA · GW

Nitpicking here, but I do not believe that AI is the most pressing problem, as opposed to a pressing one:

It's pretty much generally agreed upon in the EA community that the development of unaligned AGI is the most pressing problem

Added 2022-08-09: The original claim was that AGI is the most pressing problem from a longtermist point of view, so I've edited this comment to clarify that I mean problem, not x-risk. To prove that AGI is the most pressing problem, one needs to prove that it's more cost-effective to work on AGI safety than to work on any other x-risk and any broad intervention to improve the future. (For clarity, a "pressing" problem is one that's cost-effective to allocate resources to at current margins.)

It's far from obvious to me that this is a dominant view: in 2021, Ben Todd said that broad interventions like improving institutional decision-making and reducing great power conflict were the largest resource gap in the EA cause portfolio.

Comment by BrownHairedEevee (evelynciara) on EA is Insufficiently Value Neutral in Practice · 2022-08-07T05:42:28.767Z · EA · GW

Thank you for this. I think it's worth discussing which kinds of moral views are compatible with EA. For example, in chapter 2 of The Precipice, Toby Ord enumerates 5 moral foundations for caring about existential risk (also discussed in this presentation):

1. Our concern could be rooted in the present — the immediate toll such a catastrophe would take on everyone alive at the time it struck. (common-sense ethics)

2. It could be rooted in the future, stretching so much further than our own moment — everything that would be lost. (longtermism)

3. It could be rooted in the past, on how we would fail every generation that came before us. (Burkean "partnership of generations" conservatism)

4. We could also make a case based on virtue, on how by risking our entire future, humanity itself displays a staggering deficiency of patience, prudence, and wisdom. (virtue ethics)

5. We could make a case based on our cosmic significance, on how this might be the only place in the universe where there's intelligent life, the only chance for the universe to understand itself, on how we are the only beings who can deliberately shape the future toward what is good or just.

So I find it strange and disappointing that we make little effort to promote longtermism to people who don't share the EA mainstream's utilitarian foundations.

Similarly, I think it's worth helping conservationists figure out how to conserve biodiversity as efficiently as possible, perhaps alongside other values such as human and animal welfare, even though it is not something inherently valued by utilitarianism and seems to conflict with improving wild animal welfare. I have moral uncertainty as to the relative importance of biodiversity and WAW, so I'd like to see society try to optimize both and come to a consensus about how to navigate the tradeoffs between the two.

Comment by BrownHairedEevee (evelynciara) on The Operations team at CEA transforms · 2022-08-06T19:29:36.934Z · EA · GW

I think something with "constellation" would be a good name for the super-org.

Comment by BrownHairedEevee (evelynciara) on Open Philanthropy should fund the abundance agenda movement · 2022-08-06T05:13:03.447Z · EA · GW

What about the Center for Global Development and Charter Cities Institute?

Comment by BrownHairedEevee (evelynciara) on 30 second action you could take · 2022-08-06T03:04:54.933Z · EA · GW

To the extent that this contest is a free and fair election (and it's not; internet polls are fundamentally insecure), it is legitimate for people to influence other people's votes by giving them recommendations. Voters ultimately make their own decisions.

Admittedly I wrote the above comment in a rush, but I did give a rationale for my last recommendation ("don't vote for Kathleen Stock because she's a TERF"): Stock has advocated for beliefs that harm transgender people. Specifically, she has signed a statement that describes "the practice of transgenderism" as inherently sexist (1, 2). (For clarification, TERF stands for "trans-exclusionary radical feminist".)

Comment by BrownHairedEevee (evelynciara) on What reason is there NOT to accept Pascal's Wager? · 2022-08-05T21:36:17.375Z · EA · GW

You can get out of the infinite (+/-) payoffs by exponentially discounting future well-being. This assumes that, while in heaven or hell, you experience a finite amount of well-being at every point in time (that doesn't grow exponentially without bound), but you live for an infinite amount of time.
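
A minimal sketch of the convergence argument (notation mine, not from the original thread): if per-period well-being is bounded, |u_t| ≤ u_max, and future well-being is discounted by a factor β ∈ (0, 1), then even an infinite stay in heaven or hell has finite present value:

```latex
\left| \sum_{t=0}^{\infty} \beta^{t} u_t \right|
\le \sum_{t=0}^{\infty} \beta^{t} u_{\max}
= \frac{u_{\max}}{1-\beta} < \infty
```

With the payoffs finite, the wager's infinite-expected-utility dominance argument no longer goes through.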

Comment by BrownHairedEevee (evelynciara) on Open Philanthropy should fund the abundance agenda movement · 2022-08-05T16:01:16.914Z · EA · GW

I think this statement needs to be more precise. The Quartz article you cite states that the US "has the second-highest rate of poverty among rich countries (poverty here measured by the percentage of people earning less than half the national median income)," but this poverty threshold is still much higher than the International Poverty Line (IPL) used by the World Bank ($1.90/day), and in general, rich countries use higher poverty lines. 81% of those living in South Sudan (which is considered a least developed country) live below the IPL, whereas only 1% of Americans live below the IPL.

Also, I'm not an expert on this, but most poor people in rich countries have access to more infrastructure and services, such as electricity and healthcare, than poor people in poor countries, so their lives are qualitatively better in many ways even if this isn't reflected in their incomes.

Poverty on Native American reservations is especially dire and seems comparable to the kind of extreme poverty we see in low- and middle-income countries. For example, Allen, South Dakota, on the Pine Ridge Reservation, has a per-capita income of $1,539 per year, or about $4.21 per day, which is between the $3.20 and $5.50/day poverty lines often used in international development.

Comment by BrownHairedEevee (evelynciara) on Open Philanthropy should fund the abundance agenda movement · 2022-08-05T06:48:35.804Z · EA · GW

Although I think the supply-side progressive agenda is great (and have been involved in it in the past), I no longer prioritize it because I think that global poverty is more important. Most EAs are familiar with logarithmic utility of money, but this post by Toby Ord convinced me that the marginal utility of money likely diminishes faster than logarithmic utility.[1] So interventions to grow developed economies need to clear a very high bar to be more cost-effective than developing-world poverty reduction.

Promoting inclusive growth in rich countries has flow-through effects that might be enough for it to clear this bar, such as the ones you mentioned. For example:

  • Reducing poverty and inequality in rich countries might reduce their voters' appetite for populism, making it more likely that those countries will adopt pro-global poor policies like immigration and free trade.
  • Similarly, once developed-world governments enact policies that take care of citizens' basic needs, those citizens might move up the hierarchy of needs toward self-actualization, including global awareness and altruism.
  • Growth in rich countries could lead to more growth in poor countries, as those countries would trade with each other and trade volumes would increase.
  • Frontier growth in rich countries involves increasing innovation, which has positive externalities that flow through to poor countries. (For example, when the U.S. invests in clean energy innovation, clean energy becomes more affordable to India, Nigeria, etc.) Also, major technological breakthroughs create benefits that are not captured in GDP statistics.
  • Growth in rich countries increases their capacity to pay for public goods such as climate change mitigation, pandemic preparedness, and AGI safety, as well as subsidize the growth of poor countries. (Similarly, growth in rich countries gives EAs more disposable income, which they then spend on EA causes. 😏)

However, promoting growth in poor countries also has flow-through effects, which I don't currently have the bandwidth to enumerate.

Let's say that the second-order effects of rich-world growth make it twice as cost-effective as it would otherwise be. Assuming that neglectedness and tractability are the same, it would still be at least 16x more cost-effective to reduce developing-world poverty.[2]

Specific causes that you mention might clear the bar for cost-effectiveness, like increasing immigration and clean energy innovation. Also, increasing housing supply would increase cities' capacity to house immigrants and knowledge workers involved in high-impact innovation. What do you think?

  1. ^

    For example, if Aliya's income is $100 and Baojin's income is $1000, and they both have logarithmic utility, then giving a dollar to Aliya would be 10x as valuable as giving it to Baojin. But if they have isoelastic utility with constant elasticity η, then giving it to Aliya is 10^η times as valuable. Empirical studies show that η is likely somewhere between 1 and 2. (A numeric check appears after the footnotes.)

  2. ^

    Per Toby Ord's essay, the US poverty line (about $6000/person) is about 33x higher than the incomes of typical GiveDirectly recipients (about $180/person).
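
As a numeric check of the footnotes, here's a minimal sketch using the essay's rough figures; the η values and the 2x flow-through bonus are the assumptions discussed above, not empirical results:

```python
# Isoelastic (CRRA) marginal utility: u'(c) = c**(-eta), so the ratio of
# marginal utilities at two income levels is (c_rich / c_poor)**eta.

income_ratio = 6000 / 180  # US poverty line vs. typical GiveDirectly recipient (~33x)

for eta in (1.0, 1.5, 2.0):
    mu_ratio = income_ratio ** eta  # how much further a marginal dollar goes for the poorer person
    adjusted = mu_ratio / 2         # grant rich-world growth the assumed 2x flow-through bonus
    print(f"eta = {eta}: MU ratio ~{mu_ratio:,.0f}x; ~{adjusted:,.0f}x after the 2x adjustment")
```

At η = 1 (log utility), this reproduces the "at least 16x" figure; higher values of η only widen the gap.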

Comment by BrownHairedEevee (evelynciara) on What reason is there NOT to accept Pascal's Wager? · 2022-08-05T04:07:32.802Z · EA · GW

Askell's first response is a non sequitur. The person deciding to take Pascal's wager does so under uncertainty about which of the n gods will get them into heaven. The response, by contrast, assumes you're already in the afterlife and will definitely get into heaven if you choose door A.

However, the n-god Pascal's wager suggests that believing in any one of the possible gods (indeterminate EU) is better than believing in no god (-infinite EU). Believing in all of them is even better (+infinite EU). There's nothing in the problem statement saying that each god will send you to hell for believing in any other god (although it can be inferred from the Ten Commandments that Yahweh will do so).

Comment by BrownHairedEevee (evelynciara) on Does China have AI alignment resources/institutions? How can we prioritize creating more? · 2022-08-05T03:26:01.267Z · EA · GW

Via Fai:

Comment by BrownHairedEevee (evelynciara) on 30 second action you could take · 2022-08-04T06:08:47.345Z · EA · GW

Y'all should vote for Will, or else David Chalmers or Demis Hassabis (DeepMind CEO). I like Joy Buolamwini too. Just don't vote for Kathleen Stock because she's a TERF.

Comment by BrownHairedEevee (evelynciara) on A Database of EA Organizations & Initiatives · 2022-08-04T04:02:06.309Z · EA · GW

This is awesome! It would be really useful for linking with other datasets if you added any tax identification numbers you are aware of, such as the Employer Identification Number (EIN) in the US.

Comment by BrownHairedEevee (evelynciara) on God is a neglected cause area · 2022-08-03T06:02:32.608Z · EA · GW

Setting aside that this content appears to have been generated by GPT-3 (...?), I find the argument extremely unconvincing as it presupposes that God exists.

Comment by BrownHairedEevee (evelynciara) on Humanity’s vast future and its implications for cause prioritization · 2022-07-26T18:02:51.646Z · EA · GW

Darn it! Will get that fixed.

Comment by BrownHairedEevee (evelynciara) on Using the “executive summary” style: writing that respects your reader’s time · 2022-07-26T04:30:40.554Z · EA · GW

Thanks for normalizing this! I'm trying this for my newest blog post, and I've realized that it's much easier to use a bullet-point summary than to try to come up with a creative intro section. So this will make my writing process much less stressful 😅

Comment by BrownHairedEevee (evelynciara) on EA is becoming increasingly inaccessible, at the worst possible time · 2022-07-22T21:51:11.693Z · EA · GW

"Do you worry at all about a bait-and-switch experience that new people might have?"

I think we could mitigate this by promoting global health & wellbeing and longtermism as equal pillars of EA, depending on the audience.

Comment by BrownHairedEevee (evelynciara) on EA is becoming increasingly inaccessible, at the worst possible time · 2022-07-22T21:50:19.400Z · EA · GW

Yeah, I recently experienced the problem with longtermism and similarly weird beliefs in EA being bad on-ramps to EA. I moderate a Discord server where we just had some drama involving a heated debate between users sympathetic to and users critical of EA. Someone pointed out to me that many users on the server have probably gotten turned off from EA as a whole because of exposure to relatively weird beliefs within EA like longtermism and wild animal welfare, both of which I'm sympathetic to and have expressed on the server. Although I want to be open with others about my beliefs, it seems to me like they've been "plunged in on the deep end," rather than being allowed to get their feet wet with the likes of GiveWell.

Also, when talking to coworkers about EA, I focus on the global health and wellbeing side because it's more data-driven and less weird than longtermism, and I try to refer to EA concepts like cost-effectiveness rather than EA itself.

Comment by BrownHairedEevee (evelynciara) on Why EA needs Operations Research: the science of decision making · 2022-07-21T16:12:57.479Z · EA · GW

I think this post is on the money! I've proactively created a tag for operations research; feel free to add any information you think would be useful to have on the wiki entry.

I went to Cornell, and (as you state) operations research is one of the majors offered in the engineering school, but I never took any OR classes because I wrongly assumed they would be too dry and difficult. But in retrospect, I think that taking even a whirlwind tour of OR would have been incredibly insightful for me. We have an ML engineer on my team at work who was using mixed-integer programming (or something like that) for model selection.

I have some suggestions for next steps:

  • Compile a list of open-source OR tools (such as spark-or, which integrates with Spark, and these libraries for Python); a toy example using one such library appears after this list.
  • Do a workshop at a future EA Global or EAGx where you walk through an example OR problem related to EA, preferably with live coding.
  • Volunteer to improve open-source OR libraries (possibly for pay) and fund their development, aiming for parity with the commercial ones.
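
Following up on the first suggestion, here's a minimal sketch of an EA-flavored OR problem using the open-source PuLP library; the interventions, impact coefficients, and caps are made-up placeholders, not real cost-effectiveness estimates:

```python
from pulp import LpMaximize, LpProblem, LpVariable, lpSum

# Toy linear program: allocate a $100k budget (in $k units) across
# interventions to maximize estimated impact, with a per-intervention cap.
impact_per_k = {"bednets": 9.0, "deworming": 7.5, "cash_transfers": 5.0}

prob = LpProblem("budget_allocation", LpMaximize)
alloc = {name: LpVariable(name, lowBound=0, upBound=60) for name in impact_per_k}

prob += lpSum(impact_per_k[n] * alloc[n] for n in impact_per_k)  # objective: total impact
prob += lpSum(alloc.values()) <= 100                             # budget constraint

prob.solve()
for name, var in alloc.items():
    print(f"{name}: ${var.value():.0f}k")
```

A real workshop example would add integer variables, uncertainty, or scheduling structure, but even this little LP shows how declarative the tooling is.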
Comment by BrownHairedEevee (evelynciara) on arxiv.org - I might work there soon · 2022-07-19T04:12:52.037Z · EA · GW

Welcome to Cornell! 

Comment by BrownHairedEevee (evelynciara) on Enlightenment Values in a Vulnerable World · 2022-07-18T18:18:08.248Z · EA · GW

Thank you for writing this! I am strongly against authoritarianism, and I think liberalism is important for human welfare outside of its implications for existential risk. I appreciate that this post articulates in detail why we shouldn't give up on liberalism even in the face of existential threats from emerging technologies. The point about global police states (even ostensibly benevolent ones) having instrumentally convergent goals is also insightful and I think it would resonate with many people here.

Comment by BrownHairedEevee (evelynciara) on evelynciara's Shortform · 2022-07-18T04:09:30.362Z · EA · GW

In theory, any ethical system that provides a total order over actions - basically, a relation that says "action A is better than action B" - is compatible with the "effective" part of effective altruism. The essence of effective altruism, then, is following a decision rule that says to choose the best action available to you in any given situation.

As for the "altruism" part of EA, an ethical system would have to place value on "what's good/right for others," broadly defined. Usually that's the well-being of other individuals (as in utilitarianism), but it could also be the health of the natural environment (as in environmentalism) or violating the rights of others as little as possible (as in deontology).
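
A minimal sketch of this framing (my own illustration, with placeholder names): represent the ethical system as a scoring function that induces the total order, and the "effective" decision rule is simply a max over the available actions.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    # Stand-in for whatever the ethical system values: well-being created,
    # rights violations avoided, environmental health, etc.
    moral_value: float

def choose_best(available: list[Action]) -> Action:
    # The "effective" decision rule: pick the top of the total order.
    return max(available, key=lambda a: a.moral_value)

options = [
    Action("donate to a top charity", 10.0),
    Action("volunteer locally", 4.0),
    Action("do nothing", 0.0),
]
print(choose_best(options).name)  # -> donate to a top charity
```

Any total order over a finite set of actions can be represented by such a score, so this loses no generality for the point being made.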

Comment by BrownHairedEevee (evelynciara) on evelynciara's Shortform · 2022-07-04T17:53:06.894Z · EA · GW

Crazy idea: A vegan hot dog eating contest

Comment by BrownHairedEevee (evelynciara) on New US Senate Bill on X-Risk Mitigation [Linkpost] · 2022-07-04T02:02:10.956Z · EA · GW

A link to the bill itself would be helpful, though I haven't been able to find one by googling.

Comment by BrownHairedEevee (evelynciara) on The Future Might Not Be So Great · 2022-07-03T05:53:05.546Z · EA · GW

I downvoted this comment. While I think this discussion is important to have, I do not think that a post about longtermism should be turned into a referendum on Jacy's conduct. I think it would be better to have this discussion on a separate post or the open thread.

Comment by BrownHairedEevee (evelynciara) on Megaprojects for animals · 2022-07-02T02:18:47.270Z · EA · GW

These are interesting ideas! I think that AI systems designed with animal welfare in mind would be more reliant on computer vision and sensory data than NLP, since animals don't speak in human tongues. This blog post about using biologgers to measure animal welfare comes to mind.

Comment by BrownHairedEevee (evelynciara) on Before There Was Effective Altruism, There Was Effective Philanthropy · 2022-06-27T16:58:52.590Z · EA · GW

I don't think EP has fizzled out entirely. ImpactMatters is perhaps part of the second wave of EP. Charity Navigator acquired it in 2020 and incorporated its impact ratings into its Encompass Rating System.

Comment by BrownHairedEevee (evelynciara) on Doing good while clueless · 2022-06-27T00:23:48.533Z · EA · GW

For what it's worth, Bostrom (2013) does distinguish between insight and good values:

We thus want to reach a state in which we have (a) far greater intelligence, knowledge, and sounder judgment than we currently do; (b) far greater ability to solve global-coordination problems; (c) far greater technological capabilities and physical resources; and such that (d) our values and preferences are not corrupted in the process of getting there (but rather, if possible, improved). Factors b and c expand the option set available to humanity. Factor a increases humanity's ability to predict the outcomes of the available options and understand what each outcome would entail in terms of the realization of human values. Factor d, finally, makes humanity more likely to want to realize human values.