Posts

Is The YouTube Algorithm Radicalizing You? It’s Complicated. 2021-03-01T21:50:17.109Z
The link between surveillance and free expression | Sunyshore 2021-02-23T02:14:49.084Z
How can non-biologists contribute to wild animal welfare? 2021-02-17T20:58:44.034Z
[Podcast] Ajeya Cotra on worldview diversification and how big the future could be 2021-01-22T23:57:48.193Z
What I believe, part 1: Utilitarianism | Sunyshore 2021-01-10T17:58:58.513Z
What is the marginal impact of a small donation to an EA Fund? 2020-11-23T07:09:02.934Z
Which terms should we use for "developing countries"? 2020-11-16T00:42:58.385Z
Is Technology Actually Making Things Better? – Pairagraph 2020-10-01T16:06:23.237Z
Planning my birthday fundraiser for October 2020 2020-09-12T19:26:03.888Z
Is existential risk more pressing than other ways to improve the long-term future? 2020-08-20T03:50:31.125Z
What opportunities are there to use data science in global priorities research? 2020-08-18T02:48:23.143Z
Are some SDGs more important than others? Revealed country priorities from four years of VNRs 2020-08-16T06:56:19.326Z
How strong is the evidence of unaligned AI systems causing harm? 2020-07-21T04:08:07.719Z
What norms about tagging should the EA Forum have? 2020-07-14T04:19:54.841Z
Does generality pay? GPT-3 can provide preliminary evidence. 2020-07-12T18:53:09.454Z
Which countries are most receptive to more immigration? 2020-07-06T21:46:03.732Z
Will AGI cause mass technological unemployment? 2020-06-22T20:55:00.447Z
Idea for a YouTube show about effective altruism 2020-04-24T05:00:00.853Z
How do you talk about AI safety? 2020-04-19T16:15:59.288Z
International Affairs reading lists 2020-04-08T06:11:41.620Z
How effective are financial incentives for reaching D&I goals? Should EA orgs emulate this practice? 2020-03-24T18:27:16.554Z
What are some software development needs in EA causes? 2020-03-06T05:25:50.461Z
My Charitable Giving Report 2019 2020-02-27T16:35:42.678Z
Shoot Your Shot 2020-02-18T06:39:22.964Z
Does the President Matter as Much as You Think? | Freakonomics Radio 2020-02-10T20:47:27.365Z
Prioritizing among the Sustainable Development Goals 2020-02-07T05:05:44.274Z
Open New York is Fundraising! 2020-01-16T21:45:20.506Z
What are the most pressing issues in short-term AI policy? 2020-01-14T22:05:10.537Z
Has pledging 10% made meeting other financial goals substantially more difficult? 2020-01-09T06:15:13.589Z
evelynciara's Shortform 2019-10-14T08:03:32.019Z

Comments

Comment by evelynciara on How can non-biologists contribute to wild animal welfare? · 2021-02-24T01:02:08.773Z · EA · GW

I appreciate your comment! I'll reach out to you.

Comment by evelynciara on Open and Welcome Thread: February 2021 · 2021-02-22T05:28:49.726Z · EA · GW

Please tag your posts! I've seen several new forum posts with no tags, and I add any tags I consider relevant. But it would be better if everyone added tags when they publish new posts. Also, please add tags to any post that you think is missing them.

Comment by evelynciara on How can non-biologists contribute to wild animal welfare? · 2021-02-18T15:32:57.021Z · EA · GW

Nope, I haven't seen this yet. Thanks for the link!

Comment by evelynciara on Open and Welcome Thread: February 2021 · 2021-02-18T02:32:59.354Z · EA · GW

What's multiplier?

Comment by evelynciara on evelynciara's Shortform · 2021-02-14T07:15:13.523Z · EA · GW

Matt Yglesias gets EA wrong :(

What EAs think is that people should make decisions guided by a rigorous empirical evaluation based on consequentialist criteria.

Ummm, no. Not all EAs are consequentialists (although a large fraction of them are), and most EAs these days understand that "rigorous empirical evaluation" isn't the only way to reason about interventions.

It just gets worse from there:

In other words, effective altruists don’t think you should make charitable contributions to your church (again, relative to the mass public this is the most controversial part!) or to support the arts or to solve problems in your community. They think most of the stuff that people donate to (which, again, is largely religiously motivated) is frivolous. But beyond that, they would dismiss the bulk of the kind of problems that concern most people as literal “first world problems” that blatantly fail the cost-benefit test compared to Vitamin A supplementation in Africa.

No! We're not against supporting programs other than the global health stuff. It's just that you gotta buy your fuzzies separate from your utils. More fundamentally, EAs disagree on whether EA is mandatory or supererogatory (merely good). If EA is supererogatory, then supporting your local museum isn't wrong, it just doesn't count towards your effective giving budget.

Comment by evelynciara on Progress Open Thread: February 2021 · 2021-02-14T05:50:23.882Z · EA · GW

I've been voting on videos for Project for Awesome and it's very fun (while also making a difference)! I've put most of my effort into voting for EA charities, but I've voted for other charities as well.

Comment by evelynciara on Voting open for Project for Awesome 2021! · 2021-02-13T19:28:46.488Z · EA · GW

There's another typo: "Sinergia", not "Singergia"

Comment by evelynciara on Blameworthiness for Avoidable Psychological Harms · 2021-02-09T15:57:58.575Z · EA · GW

This is interesting! I've been thinking about emotional harms caused by social systems recently.

Robinhood is being sued for allegedly causing the suicide of Alex Kearns through negligence. How do courts address psychological harms like this?

Comment by evelynciara on Open and Welcome Thread: February 2021 · 2021-02-09T00:22:36.454Z · EA · GW

Ah... I prefer to use the Markdown editor, but I could switch to the rich text editor for this post.

Comment by evelynciara on Open and Welcome Thread: February 2021 · 2021-02-08T17:50:26.018Z · EA · GW

Can you embed a YouTube video in the EA Forum? If so, how?

Comment by evelynciara on Is GDP the right metric? · 2021-02-04T21:20:52.981Z · EA · GW

GDP growth is compounding, while leisure time is zero-sum.... if you increase your leisure time by 5% every year, at some point your life will be just leisure time

This seems like a weird comparison. GDP reflects the total value of goods that people consume, whereas leisure time is just one such good, like food. Nobody cares if the total amount of food each person can eat increases by 1% each year, because 2000 kcal is enough for most people. The amount of leisure time available to people in the industrialized world has increased over the decades, but surely the amount of leisure time we can get will plateau at some point (it's mathematically impossible for leisure time to exceed 168 hours a week, but it would probably peak below that).
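To put a rough number on when that plateau would hit (the starting point here is made up): at 5% annual growth, someone starting with 40 hours of leisure per week would reach the 168-hour ceiling in about 30 years.

```python
import math

hours_per_week = 40.0  # hypothetical starting amount of weekly leisure
growth_rate = 0.05     # 5% per year, as in the quoted claim
ceiling = 168.0        # hours in a week

# Years until compounding leisure growth hits the physical ceiling:
# hours_per_week * (1 + growth_rate) ** years = ceiling
years = math.log(ceiling / hours_per_week) / math.log(1 + growth_rate)
print(f"about {years:.0f} years")  # ~29
```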

Comment by evelynciara on evelynciara's Shortform · 2021-02-03T18:31:13.270Z · EA · GW

Prevent global anastrophe we must.

Comment by evelynciara on (Autistic) visionaries are not natural-born leaders · 2021-01-31T19:42:04.197Z · EA · GW

I'm autistic, and my problem with the title is that it implies that autistic people are bad leaders, without substantiating the claim about autism (the words "autism" and "autistic" do not appear in the piece other than in the title and hatnote). Autistic people who see this may be discouraged from becoming leaders even if they'd otherwise be competent.

Comment by evelynciara on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-28T20:51:48.796Z · EA · GW

I really appreciated your 80K episode - it was one of my favorites! I created a discussion thread for it.

Some questions - feel free to answer as many as you want:

  • How much of your day-to-day work involves coding or computer science knowledge in general? I know you created a Jupyter notebook to go with your AI timelines forecast; is there anything else?
  • What are your thoughts on the public interest tech movement?
    • More specifically, I've been thinking about starting some meta research on using public interest tech to address the most pressing problems from an EA perspective. Do you think that would be useful?
Comment by evelynciara on Can my self-worth compare to my instrumental value? · 2021-01-25T07:04:35.002Z · EA · GW

I'm what one might call a "mom friend" - I often give emotional support to my friends when they have problems, while not demanding much from my friends in return for my emotional labor. I feel like I've been approaching effective altruism in a similar way - I give a lot, emotionally and intellectually, to the world and strive to do as much good for others in the long term as possible, but I don't feel like I deserve much from the world "in return" because of the privileges my circumstances have afforded me.

Overall, I feel like it's an unhealthy way of approaching relationships and effective altruism. In terms of relationships, I want to feel free to demand more from my friends, coworkers, and employers. In terms of the world... I wouldn't demand anything from the global poor, but I want to expect more from my own society. If I could "ask" society for something, it would be more personal freedom to pursue things that would give me fulfillment. But maybe I just have to take it.

Comment by evelynciara on Reworking the Tagging System · 2021-01-24T19:36:30.092Z · EA · GW

Yeah, I agree. There are way too many tags now.

Comment by evelynciara on Creepy Crawlies (an EA poem) · 2021-01-23T22:34:03.286Z · EA · GW

Me too. Maybe I'll write some.

Comment by evelynciara on DanielFilan's Shortform · 2021-01-23T05:45:20.632Z · EA · GW

Poaching, murder, terrorism, and sex trafficking all cause more than just financial harm, although I don't know what portion of the crime prevented by AML laws is these things. Authoritarian states like the PRC, which has been systematically oppressing Muslims and Tibetans, participate in money laundering, too. Decriminalization of drugs and sex work would reduce the amount of illicit drug and sex trafficking, since legal producers would outcompete the criminal organizations, while growing the economy.

Comment by evelynciara on Propose and vote on potential tags · 2021-01-23T03:19:31.075Z · EA · GW

Yeah, this would work. A general econ tag could focus on values other than economic growth, including equity and preventing hyperinflation.

Comment by evelynciara on Propose and vote on potential tags · 2021-01-23T03:17:45.746Z · EA · GW

"Economic Policy" or "Macroeconomic Stabilization"

Pros:

  • Macroeconomic stabilization is one of the areas that Open Phil works on, but it's not frequently discussed in the EA community. This tag could be specific to macro stabilization or it could encompass all areas of economic policy (aside from economic growth, which already has a tag).
  • Land use reform already has a tag.

Cons:

Comment by evelynciara on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T01:57:13.128Z · EA · GW

I like Ajeya's metaphor of different "sects" of the EA movement being different stops on the "train to crazy town":

So, you can think about the longtermist team as trying to be the best utilitarian philosophers they can be, and trying to philosophy their way into the best goals, and win that way. Where at least moderately good execution on these goals that were identified as good (with a lot of philosophical work) is the bet they’re making, the way they’re trying to win and make their mark on the world. And then the near-termist team is trying to be the best utilitarian economists they can be, trying to be rigorous, and empirical, and quantitative, and smart. And trying to moneyball regular philanthropy, sort of. And they see their competitive advantage as being the economist-y thinking as opposed to the philosopher-y thinking.

And so when the philosopher takes you to a very weird unintuitive place — and, furthermore, wants you to give up all of the other goals that on other ways of thinking about the world that aren’t philosophical seem like they’re worth pursuing — they’re just like, stop… I sometimes think of it as a train going to crazy town, and the near-termist side is like, I’m going to get off the train before we get to the point where all we’re focusing on is existential risk because of the astronomical waste argument. And then the longtermist side stays on the train, and there may be further stops.

The idea of neartermism and longtermism reflecting economist-like and philosopher-like ways of thinking struck a chord with me. I feel as if there are conflicting parts of me that want to follow different approaches to saving the world.

Comment by evelynciara on Prabhat Soni's Shortform · 2021-01-22T23:28:41.286Z · EA · GW

What's the proposed policy change? Making understanding of elections a requirement to vote?

Comment by evelynciara on Open and Welcome Thread: January 2021 · 2021-01-18T17:47:22.721Z · EA · GW

Hi! If you're interested in CS, I suggest checking out the public interest tech movement. I've been involved in public interest tech for over 4 years, and recently I've been thinking about the intersection of EA and public interest tech.

The Civic Digital Fellowship is a 10-week tech internship in the federal government that is open to college students who are U.S. citizens. I encourage you to apply once you start college.

I also recommend checking out Impact Labs, founded by fellow EA Aaron Mayer. They run a winter fellowship and a summer internship program every year.

There are many other opportunities in public interest tech; some are more aligned with EA causes than others. I can't list them all but you can use this page as a starting point.

Comment by evelynciara on International Cooperation Against Existential Risks: Insights from International Relations Theory · 2021-01-15T00:05:04.803Z · EA · GW

I'm happy to see more international relations content on the forum! I've stated here that IR seems relevant to a wide range of EA causes due to EA's global outlook, and so have other users like David Manheim.

Comment by evelynciara on Effective Altruism Merchandise Ideas · 2021-01-13T04:36:53.741Z · EA · GW

I had a similar idea very recently. I like the idea of using EA merchandise like T-shirts and stickers to promote ideas associated with EA. I like Brian Tan's CHOICE acronym and I think it definitely belongs on a shirt.

Comment by evelynciara on Tommy Raskin: In Memoriam · 2021-01-12T23:15:43.864Z · EA · GW

My condolences.

Comment by evelynciara on Progress Open Thread: January 2021 · 2021-01-10T17:00:42.450Z · EA · GW

Yeah, I didn't really linkpost this because I didn't think that a basic introduction to utilitarianism would fit in with the kinds of stuff that people usually post, which deals with philosophical topics at a more sophisticated level. But I can post it and the rest of the series!

Comment by evelynciara on Progress Open Thread: January 2021 · 2021-01-10T01:40:13.180Z · EA · GW

I like that Ord explicitly tied x-risk management to the concept of sustainability. I've noticed that sustainability is a longtermist concept because it considers the needs of future generations, but I don't think there's been much crosstalk between advocates of sustainability and EA-style longtermism.

Comment by evelynciara on Progress Open Thread: January 2021 · 2021-01-10T01:30:43.193Z · EA · GW

I started a new "What I believe" series on my Sunyshore blog about my ethical beliefs. The first post in the series is about why I believe total utilitarianism is closest to the correct ethical theory and its implications for society. I'm also working on a post about privacy and civil liberties. In the future, I might write a post for the "What I believe" series about effective altruism.

I've already gotten valuable feedback from members of the EA community who've critiqued my use of the Harsanyi original-position thought experiment. The goal of this post wasn't to present a bulletproof argument for utilitarianism but to explain my personal reasons for supporting it. Still, I appreciate all constructive feedback.

Comment by evelynciara on evelynciara's Shortform · 2021-01-08T23:03:00.764Z · EA · GW

Thanks for sharing these! I had Toby Ord's arguments from The Precipice in mind too.

Comment by evelynciara on EA Forum feature suggestion thread · 2021-01-08T18:03:49.478Z · EA · GW

There should be a feature that points out broken links when you write posts/comments!

Comment by evelynciara on Why EA meta, and the top 3 charity ideas in the space · 2021-01-08T18:00:50.084Z · EA · GW

What is this supposed to link to?

Comment by evelynciara on evelynciara's Shortform · 2021-01-08T17:49:47.924Z · EA · GW

Worldview diversification for longtermism

I think it would be helpful to get more non-utilitarian perspectives on longtermism (or ones that don't primarily emphasize utilitarianism).

Some questions that would be valuable to address:

  • What non-utilitarian worldviews support longtermism?
  • Under a given longtermist non-utilitarian worldview, what are the top-priority problem areas, and what should actors do to address them?

Some reasons I think this would be valuable:

  1. We're working under a lot of moral uncertainty, so the more ethical perspectives, the better.
  2. Even if we fully buy into one worldview, it would be valuable to incorporate insights from other worldviews' perspectives on the problems we are addressing.
  3. Doing this would attract more people with worldviews different from the predominant utilitarian one.

What non-utilitarian worldviews support longtermism?

Liberalism: There are strong theoretical and empirical reasons why liberal democracy may be valuable for the long-term future; see this post and its comments. I think that certain variants of liberalism are highly compatible with longtermism, especially those focusing on:

  • Inclusive institutions and democracy
  • Civil and political rights (e.g. freedom, equality, and civic participation)
  • International security and cooperation
  • Moral circle expansion

Environmental and climate justice: Climate justice deals with climate change's impact on the most vulnerable members of society, and it prescribes how societies ought to respond to climate change in ways that protect their most vulnerable members. We can learn a lot from it about how to respond to other global catastrophic risks.

Comment by evelynciara on Why EA meta, and the top 3 charity ideas in the space · 2021-01-08T15:31:18.194Z · EA · GW

I think all of the meta charity ideas are great, especially the first one. Exploratory altruism would address a problem that I have: 80,000 Hours now lists 17 problems that could be as valuable to work on as the top problems that they've fully evaluated. I share much of their worldview, so I am very confused about which of these would be the best for me to work on. It would be very helpful if I had a way to compare these problems, especially tentative answers to these questions:

  1. How do these problems compare to the current top problems in terms of scale, neglectedness, and solvability?
  2. How do these problems relate to each other?
Comment by evelynciara on Legal Priorities Research: A Research Agenda · 2021-01-07T19:45:52.514Z · EA · GW

I really appreciate the work you're all doing! I'm interested in longtermism and liberal political theory, and the section on institutional design appears relevant to that.

Comment by evelynciara on Legal Priorities Research: A Research Agenda · 2021-01-07T05:06:03.945Z · EA · GW

Great work! I noticed that the first three pages of the PDF are not accessible, as it is not possible to select the text.

Comment by evelynciara on evelynciara's Shortform · 2021-01-06T18:49:00.957Z · EA · GW

An EA Meta reading list:

Comment by evelynciara on evelynciara's Shortform · 2021-01-05T03:31:49.614Z · EA · GW

Ben gives a great example of how the "alignment problem" might look different than we expect:

The case of the house-cleaning robot

  • Problem: We don’t know how to build a simulated robot that cleans houses well
  • Available techniques aren’t suitable:
    • Simple hand-coded reward functions (e.g. dust minimization) won’t produce the desired behavior
    • We don’t have enough data (or sufficiently relevant data) for imitation learning
    • Existing reward modeling approaches are probably insufficient
  • This is sort of an “AI alignment problem,” insofar as techniques currently classified as “alignment techniques” will probably be needed to solve it. But it also seems very different from the AI alignment problem as classically conceived.

...

  • One possible interpretation: If we can’t develop “alignment” techniques soon enough, we will instead build powerful and destructive dust-minimizers
  • A more natural interpretation: We won’t have highly capable house-cleaning robots until we make progress on “alignment” techniques

I've concluded that the process orthogonality thesis is less likely to apply to real AI systems than I would have assumed (i.e. I've updated downward), and therefore, the "alignment problem" as originally conceived is less likely to affect AI systems deployed in the real world. However, I don't feel ready to reject all potential global catastrophic risks from imperfectly designed AI (e.g. multi-multi failures), because I'd rather be safe than sorry.

Comment by evelynciara on evelynciara's Shortform · 2021-01-05T03:04:09.147Z · EA · GW

By the way, there will be a workshop on Interactive Learning for Natural Language Processing at ACL 2021. I think it will be useful to incorporate the ideas from this area of research into our models of how AI systems that interpret natural-language feedback would work. One example of this kind of research is Blukis et al. (2019).

Comment by evelynciara on evelynciara's Shortform · 2021-01-05T02:57:44.384Z · EA · GW

A rebuttal of the paperclip maximizer argument

I was talking to someone (whom I'm leaving anonymous) about AI safety, and they said that the AI alignment problem is a joke (to put it mildly). They said that it won't actually be that hard to teach AI systems the subtleties of human norms because language models contain normative knowledge. I don't know if I endorse this claim but I found it quite convincing, so I'd like to share it here.

In the classic naive paperclip maximizer scenario, we assume there's a goal-directed AI system, and its human boss tells it to "maximize paperclips." At this point, it creates a plan to turn all of the iron atoms on Earth's surface into paperclips. The AI knows everything about the world, including the fact that blood hemoglobin and cargo ships contain iron. However, it doesn't know that it's wrong to kill people and destroy cargo ships for the purpose of obtaining iron. So it starts going around killing people and destroying cargo ships to obtain as much iron as possible for paperclip manufacturing.

I think most of us assume that the AI system, when directed to "maximize paperclips," would align itself with an objective function that says to create as many paper clips as superhumanly possible, even at the cost of destroying human lives and economic assets. However, I see two issues:

  1. It's assuming that the system would interpret the term "maximize" extremely literally, in a way that no reasonable human would interpret it. (This is the core of the paperclip argument, but I'm trying to show that it's a weakness.) Most modern natural language processing (NLP) systems are based on statistical word embeddings, which capture what words mean in the source texts, rather than their strict mathematical definitions (if they even have one). If the AI system interprets commands using a word embedding, it's going to interpret "maximize" the way humans would. (See the quick embedding sketch after this list.)

    Ben Garfinkel has proposed the "process orthogonality thesis" - the idea that, for the classic AI alignment argument to work, "the process of imbuing a system with capabilities and the process of imbuing a system with goals" would have to be orthogonal. But this point shows that the process of giving the system capabilities (in this case, knowing that iron can be obtained from various everyday objects) and the process of giving it a goal (in this case, making paperclips) may not be orthogonal. An AI system based on contemporary language models seems much more likely to learn that "maximize X" means something more like "maximize X subject to common-sense constraints Y1, Y2, ..." than to learn that human blood can be turned into iron for paperclips. (It's also possible that it'll learn neither, which means it might take "maximize" too literally but won't figure out that it can make paperclips from humans.)

  2. It's assuming that the system would make a special case for verbal commands that can be interpreted as objective functions and set out to optimize the objective function if possible. At a minimum, the AI system needs to convert each verbal command into a plan to execute it, somewhat like a query plan in relational databases. But not every plan to execute a verbal command would involve maximizing an objective function, and using objective functions in execution plans is probably dangerous for the reason that the classic paperclip argument tries to highlight, as well as overkill for most commands.
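As a quick illustration of what usage-based embeddings actually encode (assuming you have gensim installed and are willing to download its pretrained GloVe vectors; this is only a sketch, not evidence about how a command-following system would behave):

```python
import gensim.downloader as api

# Small pretrained GloVe vectors trained on Wikipedia + Gigaword (~66 MB download).
vectors = api.load("glove-wiki-gigaword-50")

# The nearest neighbors of "maximize" reflect how the word is used in ordinary text,
# not a strict mathematical definition of maximization.
for word, similarity in vectors.most_similar("maximize", topn=10):
    print(f"{word}\t{similarity:.3f}")
```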

Comment by evelynciara on evelynciara's Shortform · 2020-12-30T16:39:38.414Z · EA · GW

Yeah, I would recommend it to anyone interested in movement building, history, or political philosophy from an EA perspective. I'm interested in reconciling longtermism and liberalism.

These paragraphs from the Guardian review summarize the main points of the book:

Given the prevailing gloom, Gopnik’s definition of liberalism is cautious and it depends on two words whose awkwardness, odd in such an elegant writer, betrays their doubtful appeal. One is “fallibilism”, the other is “imperfectability”: we are a shoddy species, unworthy of utopia. I’d have thought that this was reason for conservatively upholding the old order, but for Gopnik it’s our recidivism that makes liberal reform so necessary. We must always try to do better, cleaning up our messes. The sanity in the book’s title extends to sanitation: Gopnik whimsically honours the sewerage system of Victorian London as a shining if smelly triumph of liberal policy.

Liberalism here is less a philosophy or an ideology than a temperament and a way of living. Gopnik regards sympathy with others, not the building of walls and policing of borders, as the basis of community. “Love is love,” he avers, and “kindness is everything”. Both claims, he insists, are “true. Entirely true”, if only because the Beatles say so. But are they truths or blithe truisms? Such soothing mantras would not have disarmed the neo-Nazi thugs who marched through Charlottesville, Virginia, in 2017 or the white supremacist who murdered Jo Cox. Gopnik calls Trump “half-witted” and says Nigel Farage is a “transparent nothing”, but snubs do not diminish the menace of these dreadful men.

Comment by evelynciara on evelynciara's Shortform · 2020-12-29T20:32:39.047Z · EA · GW

I've been reading Adam Gopnik's book A Thousand Small Sanities: The Moral Adventure of Liberalism, which is about the meaning and history of liberalism as a political movement. I think many of the ideas that Gopnik discusses are relevant to the EA movement as well:

  • Moral circle expansion: To Gopnik, liberalism is primarily about calling for "the necessity and possibility of (imperfectly) egalitarian social reform and ever greater (if not absolute) tolerance of human difference" (p. 23). This means expanding the moral circle to include, at the least, all human beings. However, inclusion in the moral circle is a spectrum, not a binary: although liberal societies have made tremendous progress in treating women, POC, workers, and LGBTQ+ people fairly, there's still a lot of room for improvement. And these societies are only beginning to improve their treatment of immigrants, the global poor, and non-human animals.
  • Societal evolution and the "Long Reflection": "Liberalism's task is not to imagine the perfect society and drive us toward it but to point out what's cruel in the society we have now and fix it if we possibly can" (p. 31). I think that EA's goals for social change are mostly aligned with this approach: we identify problems and ways to solve them, but we usually don't offer a utopian vision of the future. However, the idea of the "Long Reflection," a process of deliberation that humanity would undertake before taking any irreversible steps that would alter its trajectory of development, seems to depart from this vision of social change. The Long Reflection involves figuring out what is ultimately of value to humanity or, failing that, coming close enough to agreement that we won't regret any irreversible steps we take. This seems hard and very different from the usual way people do politics, and I think it's worth figuring out exactly how we would do this and what would be required if we think we will have to take such steps in the future.
Comment by evelynciara on evelynciara's Shortform · 2020-12-28T18:23:26.917Z · EA · GW

An idea I liked from Owen Cotton-Barratt's new interview on the 80K podcast: Defense in depth

If S, M, and L are a small, medium, and large catastrophe respectively, and X is human extinction, then the probability of human extinction is roughly

P(X) = P(S) × P(M | S) × P(L | M) × P(X | L)

So halving the probability of all small disasters, or the probability of any small disaster becoming a medium-sized one, etc., would halve the probability of human extinction.
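A toy numerical version of that multiplicative structure (the per-layer probabilities below are made up):

```python
# Defense in depth: extinction requires a chain of escalations, so P(X) is a
# product of per-layer probabilities. All numbers here are illustrative.
p_small = 0.1                # P(S): some small catastrophe occurs
p_medium_given_small = 0.1   # P(M | S): it escalates to a medium catastrophe
p_large_given_medium = 0.1   # P(L | M): it escalates to a large catastrophe
p_ext_given_large = 0.1      # P(X | L): a large catastrophe causes extinction

p_extinction = p_small * p_medium_given_small * p_large_given_medium * p_ext_given_large
print(p_extinction)  # ~1e-4

# Halving any single layer halves the overall product:
p_small /= 2
print(p_small * p_medium_given_small * p_large_given_medium * p_ext_given_large)  # ~5e-5
```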

Comment by evelynciara on Progress Open Thread: December 2020 · 2020-12-28T02:49:20.405Z · EA · GW

I finished the first semester of my CS Master of Engineering (MEng) program at Cornell! All of my classes assigned final projects instead of exams this semester, so the last month of the semester was harrowing.

For my computational genetics and genomics class, I worked on using the ROLLOFF algorithm to estimate the number of generations since the admixture of Indo-European- and Dravidian-speaking peoples in South Asia. I didn't get satisfactory results with this study, but I learned a lot about population genetics, statistics, and CS while doing it!
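For context on the method: ROLLOFF-style dating exploits the fact that admixture linkage disequilibrium decays roughly as exp(-n·d) with genetic distance d (in Morgans), so fitting that decay curve gives an estimate of n, the number of generations since admixture. Here's a toy sketch of that fit on simulated data (not data from my project):

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulate an admixture-LD signal proportional to exp(-n * d), where d is
# genetic distance in Morgans and n is generations since admixture (made up).
rng = np.random.default_rng(0)
true_n = 100
d = np.linspace(0.001, 0.1, 200)
signal = 0.5 * np.exp(-true_n * d) + rng.normal(0, 0.005, d.size)

def decay(d, amplitude, n):
    return amplitude * np.exp(-n * d)

(amplitude_hat, n_hat), _ = curve_fit(decay, d, signal, p0=(0.5, 50))
print(f"Estimated generations since admixture: {n_hat:.1f}")
```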

Comment by evelynciara on Progress Open Thread: December 2020 · 2020-12-28T01:44:20.149Z · EA · GW

I'm working on my first blog posts for my new blog, Sunyshore, which will focus on effective altruism, public interest tech, and computer science. One of the posts I'm drafting is about effective giving and the spirit of Christmas. Other topics I may write about in the future include:

  • The paper "Effective Justice"
  • Opportunity cost and cause prioritization
  • Longtermism and justice/Longtermism and liberalism
  • AI fairness, accountability, and transparency

My audience thus far consists of people from the EA community, the online "neoliberal" community, and my university.

Comment by evelynciara on The case of the missing cause prioritisation research · 2020-12-27T05:48:26.181Z · EA · GW

Thanks for this list! I appreciate the Effective Justice paper because it: (1) articulates a deontological version of effective altruism and (2) shows how one could integrate the ideas of EA and justice. I've been trying to do the second thing for a while, although as a pure consequentialist I focus more on distributive justice, so this paper is inspiring for me.

Comment by evelynciara on Computational biology thesis topic suggestions · 2020-12-22T04:31:40.243Z · EA · GW

Hi! I just took a computational genetics class and I'm really interested in this topic, so I'm glad you're excited to use compgen for social good!

How about using computational methods to study environmental DNA (eDNA)?

Comment by evelynciara on Careers Questions Open Thread · 2020-12-14T04:31:58.233Z · EA · GW

Consider tech roles in government! Governments do a lot of high-impact work, especially in the areas that most EAs care about (global health and development, long-term risks), so working in government could allow you to work directly on these areas and build connections that may open the doors to higher-impact work. If you're a U.S. citizen, you can apply for the Civic Digital Fellowship (for students) or the Presidential Innovation Fellowship (for more seasoned technologists), both of which place technologists in the federal government.

Comment by evelynciara on What are some potential coordination failures in our community? · 2020-12-12T15:54:25.857Z · EA · GW

I think that's what EA Hub is trying to do

Comment by evelynciara on AMA: Jason Crawford, The Roots of Progress · 2020-12-08T15:02:13.662Z · EA · GW

Housing affordability: There are new construction technologies on the horizon, such as modular construction and mass timber; mass timber is being incorporated into new versions of the International Building Code, so it's gradually being normalized. However, my colleagues in the YIMBY movement tell me that zoning laws limit competition among construction companies, which discourages them from investing in these innovations. (Also, construction unions seem to hate modular construction.)

What makes you think there haven't been major breakthroughs in energy technology? As I understand it, there has been significant progress in making renewable energy cheap.