Posts

AI safety university groups: a promising opportunity to reduce existential risk 2022-06-30T18:37:12.554Z
Starting an EA uni group is simple and effective, thanks to intro EA programs 2022-06-17T06:34:08.714Z
AI governance student hackathon on Saturday, April 23: register now! 2022-04-12T04:39:15.222Z
Ask AI companies about what they are doing for AI safety? 2022-03-08T21:54:05.193Z
[linkpost] Peter Singer: The Hinge of History 2022-01-16T01:25:11.395Z
Free Guy, a rom-com on the moral patienthood of digital sentience 2021-12-23T07:47:46.054Z
Is it no longer hard to get a direct work job? 2021-11-25T22:03:59.412Z
How to Maximize Your Impact in a Career in CS 2021-11-02T00:53:53.935Z
Animal Advocacy Careers talk by Jamie Harris 2021-10-31T23:08:45.286Z
Weekly EA Lunch 2021-10-31T22:52:57.864Z

Comments

Comment by mic (michaelchen) on EA may look like a cult (and it's not just optics) · 2022-10-02T00:02:12.145Z · EA · GW

I'm curious whether the reason why EA may be perceived as a cult while, e.g., environmentalist and social justice activism are not, is primarily that the concerns of EA are much less mainstream.

I appreciate the suggestions on how to make EA less cultish, and I think they are valuable to implement, but I don't think they would have a significant effect on public perception of whether EA is a cult.

Comment by mic (michaelchen) on How/When Should One Introduce AI Risk Arguments to People Unfamiliar With the Idea? · 2022-09-30T02:02:22.581Z · EA · GW

I think AI Risk Intro 1: Advanced AI Might Be Very Bad is great.

Comment by mic (michaelchen) on AI alignment with humans... but with which humans? · 2022-09-09T02:43:38.593Z · EA · GW

I agree, that seems concerning. Ultimately, since the AI developers are designing the AIs, I would guess that they would try to align the AI to be helpful to the users/consumers or to the concerns of the company/government, if they succeed at aligning the AI at all. As for your suggestions "Alignment with whoever bought the AI? Whoever uses it most often? Whoever might be most positively or negatively affected by its behavior? Whoever the AI's company's legal team says would impose the highest litigation risk?" – these all seem plausible to me.

On the separate question of handling conflicting interests: there's some work on this (e.g., "Aligning with Heterogeneous Preferences for Kidney Exchange" and "Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning"), though perhaps not as much as we would like.

Comment by mic (michaelchen) on Effective altruism is no longer the right name for the movement · 2022-09-09T01:24:54.546Z · EA · GW

But I sometimes have a fear in the back of my mind that some of the attendees who are intrigued by these ideas are later going to look up effective altruism, get the impression that the movement’s focus is just about existential risks these days, and feel duped.  Since EA pitches don’t usually start with longtermist ideas, it can feel like a bait and switch.

To avoid the feeling of a bait and switch, I think one solution is to introduce existential risk in the initial pitch. For example, when introducing my student group Effective Altruism at Georgia Tech, I tend to say something like: "Effective Altruism at Georgia Tech is a student group which aims to empower students to pursue careers tackling the world's most pressing problems, such as global poverty, animal welfare, or existential risk from climate change, future pandemics, or advanced AI." It's totally fine to mention existential risk – students still seem pretty interested and happy to sign up for our mailing list.

Comment by mic (michaelchen) on AI alignment with humans... but with which humans? · 2022-09-09T01:07:06.009Z · EA · GW

I think AI alignment isn't really about designing AI to maximize for the preference satisfaction of a certain set of humans. I think an aligned AI would look more like an AI which:

  • is not trying to cause an existential catastrophe or take control of humanity
  • has had undesirable behavior trained out or adversarially filtered
  • learns from human feedback about what behavior is more or less preferable
    • In this case, we would hope the AI would be aligned to the people who are allowed to provide feedback
  • has goals which are corrigible
  • is honest, non-deceptive, and non-power-seeking
Comment by mic (michaelchen) on We need more recruiters in EA · 2022-08-23T18:44:42.324Z · EA · GW

Thanks for writing this! There's been a lot of interest in EA community building, but I think one of the most valuable parts of EA community building is basically just recruiting – e.g., notifying interested people about relevant opportunities and inspiring people to apply for impactful opportunities. A lot of potential talent isn't looped in with a local EA group or the EA community at all, however, so I think more professional recruiting could help a lot with solving organizational bottlenecks.

Comment by mic (michaelchen) on Why EA needs Operations Research: the science of decision making · 2022-07-21T01:39:06.219Z · EA · GW

I was excited to read this post! At EA at Georgia Tech, some of our members are studying industrial engineering or operations research. Should we encourage them to reach out to you if they're interested in getting involved with operations research for top causes?

Comment by mic (michaelchen) on Four questions I ask AI safety researchers · 2022-07-17T21:54:40.449Z · EA · GW

What are some common answers you hear for Question #4: "What are the qualities you look for in promising AI safety researchers? (beyond general intelligence)"

Comment by mic (michaelchen) on My Most Likely Reason to Die Young is AI X-Risk · 2022-07-05T04:31:10.802Z · EA · GW

Technical note: I think we need to be careful to note the difference in meaning between extinction and existential catastrophe. When Joseph Carlsmith talks about existential catastrophe, he doesn't necessarily mean all humans dying;  in this report, he's mainly concerned about the disempowerment of humanity. Following Toby Ord in The Precipice, Carlsmith defines an existential catastrophe as "an event that drastically reduces the value of the trajectories along which human civilization could realistically develop". It's not straightforward to translate his estimates of existential risk to estimates of extinction risk.

Of course, you don't need to rely on Joseph Carlsmith's report to believe that there's a ≥7.9% chance of human extinction conditioning on AGI.

Comment by mic (michaelchen) on $500 bounty for alignment contest ideas · 2022-07-01T07:04:45.134Z · EA · GW

Here's my proposal for a contest description. Contest problems #1 and 2 are inspired by Richard Ngo's Alignment research exercises.

AI alignment is the problem of ensuring that advanced AI systems take actions which are aligned with human values. As AI systems become more capable and approach or exceed human-level intelligence, it becomes harder to ensure that they remain within human control instead of posing unacceptable risks.

One solution to AI alignment proposed by Stuart Russell, a leading AI researcher, is the assistance game, also called a cooperative inverse reinforcement learning (CIRL) game, which follows these principles:

  1. "The machine’s only objective is to maximize the realization of human preferences.
  2. The machine is initially uncertain about what those preferences are.
  3. The ultimate source of information about human preferences is human behavior."

For a more formal specification of this proposal, please see Stuart Russell's new book on why we need to replace the standard model of AI, Cooperatively Learning Human Values, and Cooperative Inverse Reinforcement Learning.
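To make these principles concrete, here is a toy sketch (a simplified illustration of my own, not Russell's formal CIRL specification): the machine starts uncertain between two candidate reward functions, treats noisy human choices as Bayesian evidence about which one is true, and then acts only to maximize expected human reward under its posterior.

```python
import math
import random

# Hypothetical candidate reward functions over three outcomes (principle 2:
# the machine starts out uncertain which one the human actually has).
CANDIDATE_REWARDS = {
    "likes_tea":    {"tea": 1.0, "coffee": 0.0, "nothing": 0.2},
    "likes_coffee": {"tea": 0.0, "coffee": 1.0, "nothing": 0.2},
}
OUTCOMES = ["tea", "coffee", "nothing"]

def human_choice(reward, beta=3.0):
    """A Boltzmann-rational human: noisily prefers higher-reward outcomes
    (principle 3: human behavior is the source of information)."""
    weights = [math.exp(beta * reward[o]) for o in OUTCOMES]
    return random.choices(OUTCOMES, weights=weights)[0]

def update(posterior, choice, beta=3.0):
    """Bayesian update: P(reward | choice) is proportional to
    P(choice | reward) * P(reward)."""
    unnormalized = {}
    for name, reward in CANDIDATE_REWARDS.items():
        z = sum(math.exp(beta * reward[o]) for o in OUTCOMES)
        unnormalized[name] = posterior[name] * math.exp(beta * reward[choice]) / z
    total = sum(unnormalized.values())
    return {name: p / total for name, p in unnormalized.items()}

def machine_action(posterior):
    """Principle 1: the machine's only objective is expected human reward."""
    expected = {
        o: sum(posterior[name] * CANDIDATE_REWARDS[name][o] for name in posterior)
        for o in OUTCOMES
    }
    return max(expected, key=expected.get)

posterior = {name: 0.5 for name in CANDIDATE_REWARDS}   # uniform prior
true_reward = CANDIDATE_REWARDS["likes_tea"]            # unknown to the machine
for _ in range(10):
    posterior = update(posterior, human_choice(true_reward))

print(posterior)                  # concentrates on "likes_tea"
print(machine_action(posterior))  # typically "tea"
```

The Boltzmann-rational human model and the two hand-picked reward functions are stand-ins of my own; the formal CIRL game generalizes this to arbitrary reward parameters and a two-player partially observable game.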

Contest problem #1: Why are assistance games not an adequate solution to AI alignment?

  • The first link describes a few critiques; you're free to restate them in your own words and elaborate on them. However, we'd be most excited to see a detailed, original exposition of one or a few issues, which engages with the technical specification of an assistance game.

Another proposed solution to AI alignment is iterated distillation and amplification (IDA), proposed by Paul Christiano. Paul runs the Alignment Research Center and previously ran the language model alignment team at OpenAI. In IDA, a human H wants to train an AI agent, X, by repeating two steps: amplification and distillation. In the amplification step, the human uses multiple copies of X to help it solve a problem. In the distillation step, the agent X learns to reproduce the same output as the amplified system of the human + multiple copies of X. Then we go through another amplification step, then another distillation step, and so on.

You can learn more about this at Iterated Distillation and Amplification and see a simplified application of IDA in action at Summarizing Books with Human Feedback.
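To make the loop concrete, here is a minimal runnable toy (my own simplification, not Christiano's actual setup): the "agent" is just a lookup table, amplification has the human decompose a hard task (summing a list) into subtasks answered by copies of the agent, and distillation trains the agent to reproduce the amplified system's answers directly.

```python
def amplify(agent, numbers):
    """Amplification: the human splits the task (summing a list of numbers)
    into halves, consults copies of the current agent X on each half,
    and combines the sub-answers."""
    if len(numbers) == 1:
        return numbers[0]
    mid = len(numbers) // 2
    halves = [tuple(numbers[:mid]), tuple(numbers[mid:])]
    # Use the agent where it already knows the answer; otherwise the
    # (human + agent copies) system recurses on the subtask.
    sub_answers = [agent[h] if h in agent else amplify(agent, list(h)) for h in halves]
    return sum(sub_answers)

def distill(agent, tasks):
    """Distillation: the agent learns to reproduce the amplified system's
    outputs directly (here, by memorizing question -> answer pairs)."""
    return {**agent, **{tuple(t): amplify(agent, t) for t in tasks}}

agent = {}  # the initial agent X knows nothing
curriculum = [[[1, 2], [3, 4]], [[1, 2, 3, 4]]]  # harder tasks each round
for round_tasks in curriculum:
    agent = distill(agent, round_tasks)

print(agent[(1, 2, 3, 4)])  # 10 -- the distilled agent now answers the hard task directly
```

In the real proposal, the lookup table would be a learned model and distillation would be supervised training or RL on the amplified system's outputs; the toy only shows how capability can ratchet up round by round.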

Contest problem #2: Why might an AI system trained through IDA be misaligned with human values? What assumptions would be needed to prevent that?

Contest problem #3: Why is AI alignment an important problem? What are some research directions and key open problems? How can you or other students contribute to solving it through your career?

You're free to submit to one or more of these contest problems. You can write as much or as little as you feel is necessary to express your ideas concisely; as a rough guideline, feel free to write between 300 and 2000 words. For the first two contest problems, we'll be evaluating submissions based on the level of technical insight and research aptitude that you demonstrate, not necessarily the quality of writing.

I like how contest problems #1 and 2:

  • provide concrete proposals for solutions to AI alignment, so it's not an impossibly abstract problem
  • ask participants to engage with prior research and think about issues, which seems to be an important aspect of doing research
  • are approachable

Contest problem #3 here isn't a technical problem, but I think it can be helpful so that participants actually end up caring about AI alignment rather than just engaging with it on a one-time basis as part of this contest. I think it would be exciting if participants learned on their own about why AI alignment matters, formed a plan for how they could work on it as part of their career, and ended up motivated to continue thinking about AI alignment or to support AI safety field-building efforts in India.

Comment by mic (michaelchen) on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-07-01T03:55:23.171Z · EA · GW

Some quick thoughts:

  • Strong +1 to actually trying and not assuming a priori that you're not good enough.
  • If you're at all interested in empirical AI safety research, it's valuable to just try to get really good at machine learning research.
  • An IMO medalist or generic "super-genius" is not necessarily someone who would be a top-tier AI safety researcher, and vice versa.
  • For trying AI safety technical research, I'd strongly recommend How to pursue a career in technical AI alignment.
Comment by mic (michaelchen) on How to become more agentic, by GPT-EA-Forum-v1 · 2022-06-21T00:17:36.007Z · EA · GW

As a countervailing perspective, Dan Hendrycks thinks that it would be valuable to have automated moral philosophy research assistance to "help us reduce risks of value lock-in by improving our moral precedents earlier rather than later" (though I don't know if he would endorse this project). Likewise, some AI alignment researchers think it would be valuable to have automated assistance with AI alignment research. If EAs could write a nice EA Forum post just by giving GPT-EA-Forum a nice prompt and revising the resulting post, that could help EAs save time and explore a broader space of research directions. Still, I think some risks are:

  • This bot would write content similar to what the EA Forum has already written, rather than advancing EA philosophy
  • The content produced is less likely to be well-reasoned, lowering the quality of content on the EA Forum
Comment by mic (michaelchen) on Software Developers: How to have Impact? A Software Career Guide · 2022-06-19T10:23:29.509Z · EA · GW

Distributed computing seems to be a skill in high demand among AI safety organizations. Does anyone have recommendations for resources to learn about it? Would it look like using the PyTorch Distributed package or something like a microservices architecture?
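For reference, my rough impression of the first option is ordinary data-parallel training with torch.distributed and DistributedDataParallel, something like the minimal sketch below (launched with torchrun; a generic illustration, not any particular organization's setup):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(512, 512).to(local_rank)
    # DDP keeps a model replica per GPU and all-reduces gradients in backward().
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 512, device=local_rank)  # toy stand-in for real data
        loss = model(x).pow(2).mean()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
```

But I'd guess AI safety orgs also care about model- and pipeline-parallel setups and multi-node infrastructure, which go beyond this, hence the question about resources.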

Comment by mic (michaelchen) on AGI Ruin: A List of Lethalities · 2022-06-08T09:42:59.667Z · EA · GW

I feel somewhat concerned that, after reading your repeated writing saying "use your AGI to (metaphorically) burn all GPUs", someone might actually try to do so, even though their AGI isn't actually aligned or powerful enough to do so without causing catastrophic collateral damage. At the very least, the suggestion encourages AI race dynamics – because if you don't make AGI first, someone else will try to burn all your GPUs! – and makes the AI safety community seem thoroughly supervillain-y.

Points 5 and 6 suggest that soon after someone develops AGI for the first time, they must use it to perform a pivotal act as powerful as "melt all GPUs", or else we are doomed. I agree that figuring out how to align such a system seems extremely hard, especially if this is your first AGI. But aiming for such a pivotal act with your first AGI isn't our only option, and this strategy seems much riskier than if we take some more time to use our AGI to solve alignment further before attempting any pivotal acts. I think it's plausible that all major AGI companies could stick to only developing AGIs that are (probably) not power-seeking for a decent number of years. Remember, even Yann LeCun of Facebook AI Research thinks that AGI should have strong safety measures. Further, we could have compute governance and monitoring to prevent rogue actors from developing AGI, at least until we solve alignment enough to entrust more capable AGIs to develop strong guarantees against random people developing misaligned superintelligences. (There are also similar comments and responses on LessWrong.)

Perhaps a crux here is that I'm more optimistic than you about things like slow takeoffs, AGI likely being at least 20 years out, the possibility of using weaker AGI to help supervise stronger AGI, and AI safety becoming mainstream. Still, I don't think it's helpful to claim that we must or even should aim to try to "burn all GPUs" with our first AGI, instead of considering alternative strategies.

Comment by mic (michaelchen) on How to dissolve moral cluelessness about donating mosquito nets · 2022-06-08T08:16:28.011Z · EA · GW

Thanks for writing this! I've seen Hilary Greaves' video on longtermism and cluelessness in a couple university group versions of the Intro EA Program (as part of the week on critiques and debates), so it's probably been influencing some people's views. I think this post is a valuable demonstration that we don't need to be completely clueless about the long-term impact of presentist interventions.

Comment by mic (michaelchen) on Four Concerns Regarding Longtermism · 2022-06-08T07:57:50.681Z · EA · GW

I'm really sorry that my comment was harsher than I intended. I think you've written a witty and incisive critique which raises some important points, but I had raised my standards since this was submitted to the Red Teaming Contest.

Comment by mic (michaelchen) on Four Concerns Regarding Longtermism · 2022-06-07T02:22:34.912Z · EA · GW

For future submissions to the Red Teaming Contest, I'd like to see posts that are much more rigorously argued than this. I'm not concerned about whether the arguments are especially novel.

My understanding of the key claim of the post is that EA should consider reallocating some more resources from longtermist to neartermist causes. This seems plausible – perhaps some types of marginal longtermist donations are predictably ineffective, or it's bad if community members feel that longtermism unfairly has easier access to funding – but I didn't find the four reasons/arguments given in this post particularly compelling.

The section Political Capital Concern appears to claim: If EA as a movement doesn't do anything to help regular near-term causes, people will think that it's not doing anything to help people, and it could die as a movement. I agree that this is possible (though I also think a "longtermism movement" could still be reasonably successful, even if it would be unlikely to have much membership compared to EA). However, EA continues to dedicate substantial resources to near-term causes – hundreds of millions of dollars of donations each year! – and this number is only increasing, as GiveWell hopes to direct $1 billion of donations per year. EA continues to highlight its contributions to near-term causes. As a movement, EA is doing fine in this regard.

So then, if the EA movement as a whole is good in this regard, who should change their actions based on the political capital concern? I think it's more interesting to examine whether local EA groups, individuals, and organizations should have a direct positive impact on near-term causes for signalling reasons. The post only gives the following recommendation (which I find fairly vague): "Instead, the thought is: when running your utility models, factor this in however you can. Consider that utility translated from EA resources to present life, when done effectively and messaged well, [4] redounds as well on the gains to future life." However, rededicating resources from longtermism to neartermism has costs to the longtermist projects you're not supporting. How do we navigate these tradeoffs? It would have been great to see examples for this.

The "Social Capital Concern" section writes:

focusing on longterm problems is probably way more fun than present ones.[7] Longtermism projects seem inherently more big picture and academic, detached from the boring mundanities of present reality.

This might be true for some people, but I think for most EAs, concrete or near-term ways of helping people have a stronger emotional appeal, all else equal. I would find the inverse of the sentence a lot more convincing, to be honest: "focusing on near-term problems is probably way more fun than ones in the distant future. Near-term projects seem inherently more appealing and helpful, grounded in present-day realities."

But that aside, if I am correct that longtermism projects are sexier by nature, when you add communal living/organizing to EA, it can probably lead to a lot of people using flimsy models to talk and discuss and theorize and pontificate, as opposed to creating tangible utility, so that they can work on cool projects without having to get their hands too dirty, all while claiming the mantle of not just the same, but greater, do-gooding.

Longtermist projects may be cool, and their utility may be more theoretical than that of near-term projects, but I'm extremely confused about what you mean when you say they don't involve getting your hands dirty (in a way that implies near-termist work, such as GiveWell's charity effectiveness research, is more hands-on). Effective donations have historically been the main neartermist EA thing to do, and donating is quite hands-off.

So individual EA actors, given social incentives brought upon by increased communal living, will want to find reasons to engage in longtermism projects because it will increase their social capital within the community.

This seems likely, and thanks for raising this critique (especially if it hasn't been highlighted before), but what should we do about it? The red-teaming contest is looking for constructive and action-relevant critiques, and I think it wouldn't be that hard to take some time to propose suggestions. The action implied by the post is that we should consider shifting more resources to near-termism, but I don't think that would necessarily be the right move, compared to, e.g., being more thoughtful about social dynamics and making an effort to welcome neartermist perspectives.

The section on Muscle Memory Concern writes:

I think this is a reason to avoid a disproportionate emphasis on longtermism projects. Because longtermism efficacy is inherently more difficult to calculate with confidence, it can become quite easy to forget how to provide utility quickly and confidently.

I don't know – even the most meta of longtermist projects, such as longtermist community building (or, to go another meta level, support for longtermist community building), is quite grounded in metrics and has short feedback loops, such that you can tell if your activities are having an impact – if not impact on the utility across all time, then at least something tangible, such as high-impact career transitions. I think the skills would transfer fairly well over to something more near-termist, such as community organizing for animal welfare, or running organizations in general. In contrast, if you're doing charity effectiveness research, whether near-termist or longtermist, it can be hard to tell if your work is any good. Now that we have more EAs getting their hands dirty with projects instead of just earning to give, I think we have more experience as a community executing projects, whether longtermist or near-termist.

As for the final section, the discount factor concern:

Future life is less likely to exist than current life. I understand the irony here, since longtermism projects seek to make it more likely that future life exists. But inherently you just have to discount the utility of each individual future life. In the aggregate, there's no question that the utility gains are still enormous. But each individual life should have some discount based on this less-likely-to-exist factor.

I think longtermists are already accounting for the fact that we should discount future people by their likelihood to exist. That said, longtermist expected utility calculations are often more naive than they should be. For example, we often wrongly interpret reducing x-risk from one cause by 1% as reducing x-risk as a whole by 1% (if a cause accounts for a third of total x-risk, cutting that cause's risk by 1% only cuts total x-risk by roughly 0.33%), or conflate a 1% x-risk reduction this century with a 1% x-risk reduction across all time.

(I hope you found this comment informative, but I don't know if I'll respond to this comment, as I already spent an hour writing this and don't know if it was a good use of my time.)

Comment by mic (michaelchen) on What's the value of creating my own fellowship program when I can direct people to the virtual programs? · 2022-05-24T02:55:52.254Z · EA · GW

Some quick thoughts:

  • EA Virtual Programs should be fine in my opinion, especially if you think you have more promising things to do than coordinating logistics for a program or facilitating cohorts
  • The virtual Intro EA Program only has discussions in English and Spanish. If group members would much prefer to have discussions in Hungarian instead, it might be useful for you to find some Hungarian-speaking facilitators.
  • Like Jaime commented, if you're delegating EA programs to EA Virtual Programs, it's best for you to have some contact with participants, especially particularly engaged ones, so that you can have one-on-one meetings exploring their key uncertainties, share relevant opportunities with them, and so on.
  • It's rare for the EAIF to provide full-time funding for community building (see this comment)
  • I'd try to see if you could do more publicity of EA Virtual Programs, such as at Hungarian universities
Comment by mic (michaelchen) on What does the Project Management role look like in AI safety? · 2022-05-24T02:35:58.678Z · EA · GW

I see two new relevant roles on the 80,000 Hours job board right now:

Here's an excerpt from Anthropic's job posting. It's looking for basic familiarity with deep learning and mechanistic interpretability, but mostly nontechnical skills.

In this role you would:

  • Partner closely with the interpretability research lead on all things team related, from project planning to vision-setting to people development and coaching.
  • Translate a complex set of novel research ideas into tangible goals and work with the team to accomplish them.
  • Ensure that the team's prioritization and workstreams are aligned with its goals.
  • Manage day-to-day execution of the team’s work including investigating models, running experiments, developing underlying software infrastructure, and writing up and publishing research results in a variety of formats.
  • Unblock your reports when they are stuck, and help get them whatever resources they need to be successful.
  • Work with the team to uplevel their project management skills, and act as a project management leader and counselor.
  • Support your direct reports as a people manager - conducting productive 1:1s, skillfully offering feedback, running performance management, facilitating tough but needed conversations, and modeling excellent interpersonal skills.
  • Coach and develop your reports to decide how they would like to advance in their careers and help them do so.
  • Run the interpretability team’s recruiting efforts, in concert with the research lead.

You might be a good fit if you:

  • Are an experienced manager and enjoy practicing management as a discipline.
  • Are a superb listener and an excellent communicator.
  • Are an extremely strong project manager and enjoy balancing a number of competing priorities.
  • Take complete ownership over your team’s overall output and performance.
  • Naturally build strong relationships and partner equally well with stakeholders in a variety of different “directions” - reports, a co-lead, peer managers, and your own manager.
  • Enjoy recruiting for and managing a team through a period of growth.
  • Effectively balance the needs of a team with the needs of a growing organization.
  • Are interested in interpretability and excited to deepen your skills and understand more about this field.
  • Have a passion for and/or experience working with advanced AI systems, and feel strongly about ensuring these systems are developed safely.

Other requirements:

  • A minimum of 3-5 years of prior management or equivalent experience
  • Some technical or science-based knowledge or expertise
  • Basic familiarity in deep learning, AI, and circuits-style interpretability, or a desire to learn
  • Previous direct experience in machine learning is a plus, but not required
Comment by mic (michaelchen) on The real state of climate solutions - want to help? · 2022-05-23T03:39:22.724Z · EA · GW

You might want to share this project idea in the Effective Environmentalism Slack, if you haven't already done so.

Comment by mic (michaelchen) on Apply to help run EAGxIndia, Berkeley, Singapore and Future Forum! · 2022-05-23T03:37:32.592Z · EA · GW

Is the application form "EAGxBerkeley, India & Future Forum Organizing Team Expression of Interest" supposed to have questions asking about whether you're interested in organizing the Future Forum? I don't see any; I only see questions about EAGxBerkeley and EAGxIndia.

Comment by mic (michaelchen) on Most students who would agree with EA ideas haven't heard of EA yet (results of a large-scale survey) · 2022-05-20T18:15:26.480Z · EA · GW

From my experience with running EA at Georgia Tech, I think the main factors are:

  • not prioritizing high-impact causes
  • not being interested in changing their career plans
  • lack of high-impact career opportunities that fit their career interests, or not knowing about them
  • not having the skills to get high-impact internships or jobs
Comment by mic (michaelchen) on Some potential lessons from Carrick’s Congressional bid · 2022-05-20T17:49:54.509Z · EA · GW

I think I was primarily concerned that negative information about the campaign could get picked up by the media. Thinking it over now though, that motivation doesn't make sense for not posting about highly visible negative news coverage (which the media would have already been aware of) or not posting concerns on a less publicly visible EA platform, such as Slack. Other factors for why I didn't write up my concerns about Carrick's chances of being elected might have been that:

  • no other EAs seemed to be posting much negative information about the campaign, and I thought there might have been a good reason for that
  • aside from the posting of "Why Helping the Flynn Campaign is especially useful right now", there weren't any events that triggered me to consider writing up my concerns
  • the negative media coverage was obvious enough that I thought anyone considering volunteering would already know about it, and it had to already have been priced into the election odds estimates on Metaculus and PredictIt, so drawing attention to it might not have been valuable
  • time-sensitivity, as you mentioned
  • public critiques might have to be quite well-reasoned, and I might want to check in with the campaign to make sure that I didn't misunderstand anything, etc. That could be a decent amount of effort on my part and their part, and also somewhat awkward given that I was also volunteering for the campaign.

However, if someone privately asked me for my thoughts on how likely the campaign was to succeed or how valuable helping with it was, I would have been happy to share my honest opinion, including any concerns.

Comment by mic (michaelchen) on Some potential lessons from Carrick’s Congressional bid · 2022-05-20T17:26:32.055Z · EA · GW

Thanks for the suggestion, just copied the critiques of the "especially useful" post over!

Comment by mic (michaelchen) on Why Helping the Flynn Campaign is especially useful right now · 2022-05-20T17:25:47.705Z · EA · GW

Before the election was decided, I agreed with the overall point that donating, phone banking, or door-knocking for the campaign seemed quite valuable. At the same time, I want to mention a couple of critiques I have (copied from my comment on "Some potential lessons from Carrick’s Congressional bid").

  • The post claims "The race seems to be quite tight. According to this poll, Carrick is in second place among likely Democratic voters by 4% (14% of voters favor Flynn, 18% favor Salinas), with a margin of error of +/- 4 percentage points." However, it declines to mention that the same poll found that "26 percent of the district’s voters [hold] an unfavorable opinion of him, compared to only 7 percent for Salinas" (The Hill).
  • At the time the post was written, a significant fraction of voters had already voted. The claim "the campaign is especially impactful right now" seems misleading when it would have been better to help earlier on.
  • The campaign already has plenty of TV ads from the Protect the Future PAC, and there are lots of internet comments complaining about receiving mailers every other day and seeing Carrick ads all the time. (Though later I learned that PAC ads aren't able to show Carrick speaking, and I've read a few internet comments complaining about how they've never heard Carrick speak despite seeing all those ads. So campaign donations could be valuable for ads which do show him speaking.)
  • Having a lot of people coming out-of-state to volunteer could further the impression among voters that Carrick doesn't have much support from Oregonians.
  • If you can speak enthusiastically and knowledgeably about the campaign, you can do a better job of phone banking or door-knocking than the average person. However, the campaign already spent $847,000 for door-knockers. While volunteering for the campaign might have been high in expected value, the fact that other people could do door-knocking raises questions about whether it's in out-of-state EAs' comparative advantage to do so.
Comment by mic (michaelchen) on Some potential lessons from Carrick’s Congressional bid · 2022-05-19T04:51:00.164Z · EA · GW

Overall, I agree with Habryka's comment that "negative evidence on the campaign would be 'systematically filtered out'". Although I maxed out donations to the primary campaign and phone banked a bit for the campaign, I had a number of concerns about the campaign that I never saw mentioned in EA spaces. However, I didn't want to raise these concerns for fear that this would negatively affect Carrick's chances of winning the election.

Now that Carrick's campaign is over, I feel more free to write my concerns. These included:

I also have some critiques of the post Why Helping the Flynn Campaign is especially useful right now but I declined to write a comment. These include:

  • The post claims "The race seems to be quite tight. According to this poll, Carrick is in second place among likely Democratic voters by 4% (14% of voters favor Flynn, 18% favor Salinas), with a margin of error of +/- 4 percentage points." However, it declines to mention that "26 percent of the district’s voters holding an unfavorable opinion of him, compared to only 7 percent for Salinas" (The Hill).
  • At the time the post was written, a significant fraction of voters had already voted. The claim "the campaign is especially impactful right now" seems misleading when it would have been better to help earlier on.
  • The campaign already has plenty of TV ads from the Protect the Future PAC, and there are lots of internet comments complaining about receiving mailers every other day and seeing Carrick ads all the time. (Though later I learned that PAC ads aren't able to show Carrick speaking, and I've read a few internet comments complaining about how they've never heard Carrick speak despite seeing all those ads. So campaign donations could be valuable for ads which do show him speaking.)
  • Having a lot of people coming out-of-state to volunteer could further the impression among voters that Carrick doesn't have much support from Oregonians.
  • If you can speak enthusiastically and knowledgeably about the campaign, you can do a better job of phone banking or door-knocking than the average person. However, the campaign already spent $847,000 for door-knockers. While volunteering for the campaign might have been high in expected value, the fact that other people could do door-knocking raises questions about whether it's in out-of-state EAs' comparative advantage to do so.
Comment by mic (michaelchen) on Why should I care about insects? · 2022-05-19T03:29:21.058Z · EA · GW

Another introductory post about why one may want to care about insect welfare: Does Insect Suffering Bug You? - Faunalytics (Jesse Gildesgame, 2016).

Recently, activists have started campaigning against silk because they believe the production process is cruel to silkworms. Many people respond to these campaigns with skepticism: who cares about silkworms? It’s easy to feel for the chinchillas, foxes, and other furry mammals used in fur clothing. But insects like silkworms are a harder sell. It seems crazy to grant moral consideration to a bug.

Nonetheless, the idea that we should care about insect welfare has been gaining credibility among activists, scientists, and philosophers in recent years.

[…]

The question of whether insects can feel pain or have other negative subjective experiences is hotly contested among scientists. Amid the uncertainty and debate, one thing is clear: at least for now, we can’t be sure. Whether or not insects have the capacity to suffer is still very much an open question.

If insects can suffer, they probably suffer a lot. Starvation, desiccation, injury, internal organ failure, predation, infection, chemical imbalances, and other stressors are common features in a bug’s life. It’s possible that insect lives are full of suffering.

Should we be worried?

Since the science isn’t clear, we should assign a nontrivial likelihood to the hypothesis that insects suffer. However, even if you think the likelihood that insects suffer is extremely low, it’s worth keeping in mind just how many insects there are. Their sheer numbers suggest that if they do suffer, the scale of the issue would be enormous. […]

What should be done, if anything?

[…]

Even if we should prioritize vertebrate welfare, there are things we can do to mitigate the risk of insect suffering that don’t impede efforts to promote vertebrate welfare. We can replace silk with polyester or rayon. We can develop and improve standards for the humane use of insects in research. We can help farmers choose pesticides that limit possible insect suffering. We can also take care to avoid hurting insects in our daily lives in a number of ways.

It might turn out that insects don’t suffer, but until we know, it’s a risk worth taking seriously.

Comment by mic (michaelchen) on If EA is no longer funding constrained, why should *I* give? · 2022-05-17T04:59:29.750Z · EA · GW

The Qualia Research Institute might be funding-constrained but it's questionable whether it's doing good work; for example, see this comment here about its Symmetry Theory of Valence.

Comment by mic (michaelchen) on Introducing the EA Public Interest Technologists Slack community · 2022-05-15T19:46:04.615Z · EA · GW

Also relevant: the EA Software Engineering Discord

Comment by mic (michaelchen) on Bad Omens in Current Community Building · 2022-05-12T18:46:48.271Z · EA · GW

I see, I thought you were referring to reading a script about EA during a one-on-one conversation. I don't see anything wrong with presenting a standardized talk, especially if you make it clear that EA is a global movement and not just a thing at your university. I would not be surprised if a local chapter of, say, Citizens' Climate Lobby, used an introductory talk created by the national organization rather than the local chapter.

Comment by mic (michaelchen) on Bad Omens in Current Community Building · 2022-05-12T16:37:52.602Z · EA · GW

introducing people to EA by reading prepared scripts

Huh, I'm not familiar with this – can you post a link to an example script or message it to me?

I agree that reading a script verbatim is not great, and privately discussed info in a CRM seems like an invasion of privacy.

Comment by mic (michaelchen) on What are your recommendations for technical AI alignment podcasts? · 2022-05-12T00:22:13.854Z · EA · GW
  • AXRP
  • Nonlinear Library: Alignment Forum
  • Towards Data Science (the podcast has had an AI safety skew since 2020)
  • Alignment Newsletter Podcast
Comment by mic (michaelchen) on Volunteering abroad · 2022-05-07T20:53:25.639Z · EA · GW

Would you be interested in supporting EA groups abroad to recruit local talent to work on impactful causes? I'm not sure what country you're from or what languages you're fluent in. But even if you only know English, it seems like you could potentially help with EA Philippines, EA India, EA University of Cape Town, EA Nigeria, or EA groups in the UK, US, and Australia. You can browse groups and get in contact with them through the EA Forum.

To get a sense of why this could be valuable, see "Building effective altruism - 80,000 Hours" and "A huge opportunity for impact: movement building at top universities" (especially relevant to groups like EA Philippines which are focused on supporting university groups).

Comment by mic (michaelchen) on Comparative advantage does not mean doing the thing you're best at · 2022-05-03T18:47:40.274Z · EA · GW

Besides distillation, another option to look into could be the Communications Specialist or Senior Communications Specialist contractor roles at the Fund for Alignment Research.

Comment by mic (michaelchen) on There are currently more than 100 open EA-aligned tech jobs · 2022-05-01T23:14:39.466Z · EA · GW

Could 80,000 Hours make it clear on their job board which roles they think are valuable only for career capital and aren't directly impactful? It could just involve adding a quick boilerplate statement to the job details, such as:

Relevant problem area: AI safety & policy

Wondering why we’ve listed this role?

We think this role could be a great way to develop relevant career capital, although other opportunities would be better for directly making an impact.

Perhaps this suggestion is unworkable for various reasons. But I think it's easy for people to assume that, since a job is listed on the 80,000 Hours job board and seems to have some connection to social impact, it's a great way to make an impact. It's already tempting enough for people to work on AGI capabilities as long as it's "safe". And when the job description says "OpenAI […] is often perceived as one of the leading organisations working on the development of beneficial AGI," the takeaway for readers is likely that any role there is a great way to positively shape the development of AI.

What are your thoughts on Habryka's comment here?

Please don't work in AI capabilities research, and in particular don't work in labs directly trying to build AGI (e.g. OpenAI or Deepmind). There are few jobs that cause as much harm, and historically the EA community has already caused great harm here. (There are some arguments that people can make the processes at those organizations safer, but I've only heard negative things about people working in jobs that are non-safety related who tried to do this, and I don't currently think you will have much success changing organizations like that from a ground-level engineering role)

China-related AI safety and governance paths - Career review (80000hours.org) recommends working in regular AI labs and trying to build up the field of AI safety there. But how would one actually try to pivot a given company in a more safety-oriented direction?

Comment by mic (michaelchen) on Increasing Demandingness in EA · 2022-04-30T14:31:47.483Z · EA · GW

My bad, I meant to write "Part-time volunteering might not provide as much of an opportunity to build unique skills, compared to working full-time on direct work". Fixed.

Comment by mic (michaelchen) on Increasing Demandingness in EA · 2022-04-29T04:21:21.100Z · EA · GW

Is it possible to have a 10% version of pursuing a high-impact career? Instead of donating 10% of your income, you would donate a couple hours a week to high-impact volunteering. I've listed a couple opportunities here. In my opinion, many of these would count as a high-impact career if you did full-time.

  • Organizing a local EA group
    • Or in-person/remote volunteering for a university EA group, to help with managing Airtable, handling operations, designing events, facilitating discussions, etc. Although I don't know that any local EA groups currently accept remote volunteers, from my experience with running EA at Georgia Tech, I know we'd really benefit from one!
    • If you're quite knowledgeable about EA/longtermism and like talking to people about EA, being something like an EA Guides Program mentor could be a great option. One-on-one chats can be quite helpful for enabling people to develop better plans for making an impact throughout their life. I don't know whether the Global Challenges Project is looking for more mentors for its EA Guides Program at this time, but it would be valuable if it had a greater capacity.
  • Facilitating for EA programs that are constrained by the number of (good) facilitators. In Q1 2022, this included the AGI Safety Fundamentals technical alignment and governance tracks. (Edit) EA Virtual Programs is also constrained by the number of facilitators.
  • Signing up as a personal assistant for Pineapple Operations (assuming this is constrained by the number of PAs, though I have no idea whether it is)
  • Phone banking for Carrick Flynn's campaign (though this opportunity is only available through May 17)
  • Gaining experience that would be helpful for pursuing a high-impact career (e.g., by taking a MOOC on deep learning to test your fit for machine learning work for AI safety)
  • Distilling AI safety articles
  • Volunteering for Apart Research's AI safety or meta AI safety projects
  • Volunteering for projects from Impact CoLabs, perhaps
  • Running a workplace EA group, especially if you're able to foster discussion about working on pressing problems

Part-time volunteering might not provide as much of an opportunity to build unique skills, compared to working full-time on direct work, but I think it could still be pretty valuable depending on what you do.

In a way, sacrificing your time might be more demanding than sacrificing your excess income. But volunteering can help you feel more connected to the community and can feel more fulfilling than just donating money as an individual. It might not even be a sacrifice, as for some opportunities you could get paid, either directly (as in the case of Pineapple Operations) or through applying to the EA Infrastructure Fund or Long-Term Future Fund.

Comment by mic (michaelchen) on [$20K In Prizes] AI Safety Arguments Competition · 2022-04-28T21:17:03.406Z · EA · GW

most AI experts think advanced AI is much likelier to wipe out human life than climate change

I'm not sure this is true, unless you use a very restrictive definition of "AI expert". I would be surprised if most AI researchers saw AI as a greater threat than climate change.

Comment by mic (michaelchen) on [$20K In Prizes] AI Safety Arguments Competition · 2022-04-28T21:14:43.815Z · EA · GW

Meta: This post was also cross-posted to LessWrong.

Comment by mic (michaelchen) on [$20K In Prizes] AI Safety Arguments Competition · 2022-04-26T19:15:51.224Z · EA · GW

Companies and governments will find it strategically valuable to develop advanced AIs which are able to execute creative plans in pursuit of a goal, achieving real-world outcomes. Current large language models have a rich understanding of the world which generalizes to other domains, and reinforcement learning agents already achieve superhuman performance at various games. With further advancements in AI research and compute, we are likely to see the development of human-level AI this century. But for a wide variety of goals, it is often valuable to pursue instrumental goals such as acquiring resources, self-preservation, seeking power, and eliminating opposition. By default, we should expect that highly capable agents will have these unsafe instrumental objectives.

The vast majority of actors would not want to develop unsafe systems. However, there are reasons to think that alignment will be hard with modern deep learning systems, and difficulties with making large language models safe provide empirical support of this claim. Misaligned AI may seem acceptably safe and only have catastrophic consequences with further advancements in AI capabilities, and it may be unclear in advance whether a model is dangerous. In the heat of an AI race between companies or governments, proper care may not be taken to make sure that the systems being developed behave as intended.

(This is technically two paragraphs haha. You could merge them into one paragraph, but note that the second paragraph is mostly by Joshua Clymer.)

Comment by mic (michaelchen) on How I failed to form views on AI safety · 2022-04-18T10:39:09.677Z · EA · GW

But I am a bit at loss on why people in the AI safety field think it is possible to build safe AI systems in the first place. I guess as long as it is not proven that the properties of safe AI systems are contradictory with each other, you could assume it is theoretically possible. When it comes to ML, the best performance in practice is sadly often worse than the theoretical best.

To me, this belief that AI safety is hard or impossible would imply that AI x-risk is quite high. Then, I'd think that AI safety is very important but unfortunately intractable. Would you agree? Or maybe I misunderstood what you were trying to say.

I agree that x-risk from AI misuse is quite underexplored.

For what it's worth, AI safety and governance researchers do assign significant probability to x-risk from AI misuse. AI Governance Week 3 — Effective Altruism Cambridge comments:

For context on the field’s current perspectives on these questions, a 2020 survey of AI safety and governance researchers (Clarke et al., 2021) found that, on average [1], researchers currently guess there is: [2]

A 10% chance of existential catastrophe from misaligned, influence-seeking AI [3]

A 6% chance of existential catastrophe from AI-exacerbated war or AI misuse

A 7% chance of existential catastrophe from “other scenarios”

Comment by mic (michaelchen) on What is a neutral life like? · 2022-04-16T17:37:04.282Z · EA · GW

Relevant: happiness - How happy are people relative to neutral (as measured by experience sampling)? - Psychology & Neuroscience Stack Exchange

Comment by mic (michaelchen) on Free-spending EA might be a big problem for optics and epistemics · 2022-04-15T02:06:39.742Z · EA · GW

For what it's worth, even though I prioritize longtermist causes, reading

Maybe it depends on the cause area but the price I'm willing to pay to attract/retain people who can work on meta/longtermist things is just so high that it doesn't seem worth factoring in things like a few hundred pounds wasted on food.

made me fairly uncomfortable, even though I don't disagree with the substance of the comment, as well as

2) All misallocations of money within EA community building is lower than misallocations of money caused by donations that were wasted by donating to less effective cause areas (for context, Open Phil spent ~200M in criminal justice reform, more than all of their EA CB spending to date), and 

Comment by mic (michaelchen) on Free-spending EA might be a big problem for optics and epistemics · 2022-04-13T06:32:54.778Z · EA · GW

Free food and free conferences are things that are somewhat standard among various non-EA university groups. It's easy to object to whether they're an effective use of money, but I don't think they're excessive except under the EA lens of maximizing cost-effectiveness. I think if we reframe EA university groups as being about empowering students to tackle pressing global issues through their careers, and avoid mentioning effective donations and free food in the same breath, then it's less confusing why there is free stuff being offered. (Besides apparently being more appealing to students, I also genuinely think high-impact careers should be the focus of EA university groups.)

I'm in favor of making EA events and accommodation feel less fancy.

There are other expenses that I'd be more concerned about from an optics perspective than free food and conferences.

You find out that if you build a longtermist group in your university, EA orgs will pay you for your time, fly you to conferences and hubs around the world and give you all the resources you could possibly make use of. This is basically the best deal that any student society can currently offer. Given this, how much time are you going to spend critically evaluating the core claims of longtermism?

It's worth noting that these perks are available for new EA groups in general, not just for particularly longtermist EA groups. That said, I think there are plenty of additional perks to being a longtermist (career advising from 80,000 Hours, grants from the Long-Term Future Fund or the FTX Future Fund to work on projects, etc.) that you might want to be one even if you're intellectually unsure about it. I think another incentive pushing university organizers in favor of a longtermist direction is: it doesn't make sense to be spending this much money on free food and conferences from a neartermist perspective, at least in my opinion.

Comment by mic (michaelchen) on Nikola's Shortform · 2022-04-07T02:55:16.409Z · EA · GW

I've considered this before and I'm not sure I agree. If I'm at +10 utility for the next 10 years and afterwards will be at +1,000,000 utility for the following 5,000 years, I might just feel like skipping ahead to feeling +1,000,000 utility, simply from being impatient about getting to feel even better.

Comment by mic (michaelchen) on University Groups Should Do More Retreats · 2022-04-07T01:24:41.332Z · EA · GW

Got it, I'm surprised by how little time it took to organize HEA's spring retreat. What programming was involved?

Comment by mic (michaelchen) on University Groups Should Do More Retreats · 2022-04-06T22:50:01.993Z · EA · GW

For me, the main value of retreats/conferences has been forming lots of connections, but I haven't become significantly more motivated to be more productive, impactful, or ambitious. I have a couple of questions which I think would be helpful for organizers to decide whether they should be running more retreats:

  • How many hours does it take to organize a retreat?
  • To what extent can the value of a retreat be 80/20'd with a series of 1-on-1s (perhaps while taking a walk through a scenic part of campus)? Would that save organizer time?
  • Do you have estimates as to how many participants have significant plan changes after a retreat?
Comment by mic (michaelchen) on Intro to AI/ML Reading Group at EA Georgetown! · 2022-04-06T22:02:08.418Z · EA · GW

My experience with EA at Georgia Tech is that a relatively small proportion of people who complete our intro program participate in follow-up programs, so I think it's valuable to have content you think is important in your initial program instead of hoping that they'll learn it in a later program. I think plenty of Georgetown students would be interested in signing up for an AI policy/governance program, even if it includes lots of x-risk content.

Comment by mic (michaelchen) on The Vultures Are Circling · 2022-04-06T06:58:01.036Z · EA · GW
As a community that values good epidemics

good epistemics?

Thanks for posting about this; I had no idea this was happening to a significant extent.

Comment by mic (michaelchen) on Intro to AI/ML Reading Group at EA Georgetown! · 2022-04-06T06:38:52.587Z · EA · GW

To the extent that the program is meant to provide an introduction to "catastrophic and existential risk reduction in the context of AI/ML", I think it should include some more readings on the alignment problem, existential risk from misaligned AI, transformative AI or superintelligence. I think Mauricio Baker's AI Governance Program has some good readings for this.