Posts

Transitioning to Signal (Tips & Tricks) 2022-07-23T18:57:23.483Z
EAs should use Signal instead of Facebook Messenger 2022-07-21T00:38:44.224Z
A grand strategy to recruit AI capabilities researchers into AI safety research 2022-04-15T17:11:35.609Z
Peter S. Park's Shortform 2022-03-23T16:05:39.394Z

Comments

Comment by Peter S. Park on What ‘equilibrium shifts’ would you like to see in EA? · 2022-07-26T17:32:22.099Z · EA · GW

"The main bottleneck to solving existential risks is Berkeley real estate." -leader of a high-in-demand Berkeley EA coworking space

Where can EAs coordinate to move to (that is not the oversaturated San Francisco Bay Area)?

Comment by Peter S. Park on EAs should use Signal instead of Facebook Messenger · 2022-07-23T22:40:57.581Z · EA · GW

I'm looking into (informal) ways to transfer message history from mobile to desktop, and will get back to you. (It would be great to have a Signal expert/enthusiast on board for these kinds of questions!)

The solution that I'm fairly confident would work is to periodically back up the desktop Signal files to secure cloud storage. But again, I haven't tried this.
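
For concreteness, here's a minimal sketch of what such a periodic backup could look like (a sketch under the same untested assumption; the paths below are Signal Desktop's default data locations, which may differ on your install, and Signal should be fully closed while copying). Since this folder also contains the key that decrypts the message database, the backup destination really does need to be secure:

```python
import platform
import shutil
import time
from pathlib import Path

# Signal Desktop's default data locations (assumptions; check your install).
SIGNAL_DIRS = {
    "Darwin": Path.home() / "Library" / "Application Support" / "Signal",
    "Windows": Path.home() / "AppData" / "Roaming" / "Signal",
    "Linux": Path.home() / ".config" / "Signal",
}

def backup_signal(dest_root: Path) -> Path:
    """Copy the Signal Desktop data folder to a timestamped backup folder.

    Quit Signal Desktop first, so the database isn't copied mid-write.
    The folder includes the key that decrypts the message database, so
    the backup location should itself be access-controlled.
    """
    src = SIGNAL_DIRS.get(platform.system())
    if src is None or not src.exists():
        raise FileNotFoundError(f"Signal data folder not found: {src}")
    dest = dest_root / f"signal-backup-{time.strftime('%Y%m%d-%H%M%S')}"
    shutil.copytree(src, dest)  # creates dest (and any parent folders) itself
    return dest

if __name__ == "__main__":
    # e.g., a folder that your cloud-storage client syncs (hypothetical path)
    print(backup_signal(Path.home() / "Dropbox" / "signal-backups"))
```

Restoring would, in principle, just be the reverse: quit Signal and copy the backup folder back to the original location. Again, untested; please report results if you try it.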

Comment by Peter S. Park on EAs should use Signal instead of Facebook Messenger · 2022-07-23T22:20:18.746Z · EA · GW

Thanks so much, Habryka, for this really important point! I really appreciate it.

How to move message history between devices?

For moving message history from your primary (mobile) device to your new primary device: https://support.signal.org/hc/en-us/articles/360007059752-Backup-and-Restore-Messages

For moving message history from desktop to desktop, there is no official method endorsed by Signal's developers yet. But there are unofficial workarounds (which I have yet to test myself, and which would be great to test! Please let us know if you tried either method, whether successfully or unsuccessfully.)

Moving chat history from Windows to Windows: https://www.reddit.com/r/signal/comments/s93756/is_there_way_i_can_move_over_my_chat_history_from/htllhog/?context=8&depth=9

Moving chat history from Mac to Mac: https://www.reddit.com/r/signal/comments/lbnzg7/comment/hifd8tj/?utm_source=share&utm_medium=web2x&context=3

Again, I have not tried these methods. It would be great if someone tried them and shared the results with the rest of the EA community!

Comment by Peter S. Park on Transitioning to Signal (Tips & Tricks) · 2022-07-23T19:52:26.865Z · EA · GW

Thanks so much for your thoughtful comment, Pablo! I really appreciate it.

Some reasons for the EA community as a whole to use Signal over Telegram:

  1. End-to-end encryption is on by default (in Telegram, you have to enable it manually for each conversation via Secret Chats, and you might forget to do so)

  2. Signal's code is fully open-source (Telegram's server code is proprietary and thus could hide security vulnerabilities or backdoors)

  3. Signal proactively protects your metadata, and your privacy in general

Comment by Peter S. Park on EAs should use Signal instead of Facebook Messenger · 2022-07-21T20:15:48.006Z · EA · GW

This is what I do when I need to use Facebook Messenger!

Comment by Peter S. Park on EAs should use Signal instead of Facebook Messenger · 2022-07-21T07:49:55.593Z · EA · GW

Google Docs are not secure: they are not end-to-end encrypted, so their contents are visible to Google.

For the moment, I've used a publicly shared file on Dropbox for the Signal guide. But we are open to better ideas!

And we are also looking for a tech/Signal expert who would be better than myself at writing up and maintaining the EA community's Signal guide!

Comment by Peter S. Park on EAs should use Signal instead of Facebook Messenger · 2022-07-21T07:42:10.395Z · EA · GW

My understanding is that after one links one's desktop devices to one's main mobile account (via this method), all subsequent Signal messages to that account are delivered to the linked desktop devices as well. This means the following seems true:

Suppose an EA believes that most other EAs will eventually switch to Signal for all internal messages (which I think is likely due to its substantial privacy benefits). 

Then, this EA would maximize the proportion of their messages stored in their devices' Signal apps by switching to Signal sooner rather than later, and linking all their desktop devices sooner rather than later.

In other words, it is plausible that the EAs who switch to Signal as early as possible (and link all their desktop devices as early as possible) will comparatively benefit, at least when it comes to the proportion of message history stored in their devices' Signal apps. In contrast, EAs who do so later would miss out on this benefit. 

Consequently, as someone who predicts that most EAs should and will eventually switch to Signal, my enthusiastic suggestion is that EAs should make a Signal account and link all their desktop devices to it today!

Comment by Peter S. Park on EAs should use Signal instead of Facebook Messenger · 2022-07-21T06:55:33.717Z · EA · GW

All you have to do is create an invite link and send it to the person.

You can also use a QR code.

Details can be found here!

Comment by Peter S. Park on Is it still hard to get a job in EA? Insights from CEA’s recruitment data · 2022-07-16T00:10:26.312Z · EA · GW

Thank you so much for this really informative analysis! I really appreciate it.

The data seem consistent with anecdotes that there are currently many more people looking for EA jobs than there are positions.

If this pattern holds, then one way to match more effective altruists with currently scarce EA jobs could be to establish new organizations and projects in undersaturated EA hubs (underrepresented U.S. states and countries), as opposed to just in oversaturated ones (e.g., the San Francisco Bay Area).

Comment by Peter S. Park on "Tech company singularities", and steering them to reduce x-risk · 2022-05-14T22:01:33.626Z · EA · GW

Thank you so much for this extremely important and brilliant post, Andrew! I really appreciate it.

I completely agree that the degree to which autonomous general-capabilities research is outpacing alignment research needs to be reduced (most likely via recruitment and social opinion dynamics), and that this seems neglected relative to its importance.

I wrote a post on a related topic recently, and it would be really great to hear what you think! (https://forum.effectivealtruism.org/posts/juhMehg89FrLX9pTj/a-grand-strategy-to-recruit-ai-capabilities-researchers-into)

Comment by Peter S. Park on EA will likely get more attention soon · 2022-05-13T18:39:50.170Z · EA · GW

Thank you so much for this extremely important and helpful guide on EA messaging, Julia! I really appreciate it, and hope all EAs read it asap.

Social opinion dynamics seem to have the property where some action (or some inaction) can cause EA to move into a different equilibrium, with a potentially permanent increase or decrease in EA’s outreach and influence capacity. We should therefore tread carefully.

Unfortunately, social opinion dynamics are also extremely mysterious. Nobody knows precisely which actions (or inactions) pose the risk of permanently closing some doors to additional outreach and influence. Part of the system is likely inherently unpredictable, but people are almost certainly nowhere near the attainable level of knowledge about predicting such dynamics.

But perhaps EA movement-builders are already using and improving a cutting-edge model of social opinion dynamics!

Comment by Peter S. Park on Bad Omens in Current Community Building · 2022-05-12T21:19:19.345Z · EA · GW

Thanks so much for this extremely important and well-written post, Theo! I really appreciate it.

My main takeaway from this post (among many takeaways!) is that EA outreach and movement-building could be significantly better. I’m not sure yet on the clear next steps, but perhaps outreach could be even more individualized and epistemically humble.

One devil’s-advocate response to your point that “while it may be true that there are certain characteristics which predict that people are more likely to become HEAs, it does not follow that a larger EA community made up of such people would automatically be better than this one”: despite Goodhart’s Law, I think there is some definition of HEA such that maximizing the number of HEAs is the best practical strategy for cooperative movement-building. Having a lot of dedicated people in a cooperative group is very important, perhaps the most important factor in determining the group’s success. And more complicated goals/guidelines for movement-builders are harder to use, both for individuals and for group coordination.

Comment by Peter S. Park on Effective [Re]location · 2022-05-12T20:51:36.567Z · EA · GW

Thanks so much for your kind words on our post, Nick! I really appreciate it.

One of the non-governmental barriers to relocation for international folks is the general inaccessibility of relevant information. Even something as basic as finding an apartment to rent in a foreign city can present quite a high barrier (and certainly a perceived barrier) to relocation.

Comment by Peter S. Park on Transcripts of interviews with AI researchers · 2022-05-11T20:02:59.184Z · EA · GW

This is such an incredibly useful resource, Vael! Thank you so much for your hard work on this project.

I really hope this project continues to go strong!

Comment by Peter S. Park on Should You Have Children Despite Climate Change? · 2022-05-04T16:24:51.523Z · EA · GW

Thank you so much for this extremely helpful suggestion, Linch! I really appreciate it.

Comment by Peter S. Park on Should You Have Children Despite Climate Change? · 2022-05-04T15:20:01.004Z · EA · GW

A thought: Especially when enabled by technology, people are very capable. In theory, a person can easily offset the negative impact of their greenhouse gas emissions and still have a lot of time and resources left over to pursue positive impact. For example, by donating a fraction of their money to carbon-offsetting projects and avoiding an especially polluting lifestyle, the median American can have a net reducing effect on global greenhouse gas emissions over their lifetime. I also think the median person in the world can in theory achieve a net reducing effect, by devoting a fraction of their time and resources to planting trees (nature's baseline technology for carbon capture).
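
As a rough back-of-the-envelope check (ballpark figures of mine, not from any careful source): U.S. per-capita emissions are on the order of 16 tonnes of CO2 per year, and offsets are commonly priced at roughly \$10–50 per tonne, so

$$16\ \tfrac{\text{t CO}_2}{\text{yr}} \times \$10\text{–}50/\text{t} \approx \$160\text{–}800/\text{yr},$$

which is indeed a small fraction of a median American income.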

So perhaps the right framing isn't "Should you have children despite climate change?"  One alternative framing is: suppose you want to influence the next generation, who will either capably help the world or capably harm the world. Should you do it by parenting and influencing your own children, or by influencing other people's children? 

I think most EAs favor the latter option, and indeed there is a compelling argument in its favor. Humans are perhaps the only species whose primary mode of phenotypic inheritance is learning knowledge and values from group members, including not only parents but also many other people. This is why we are so adaptable and capable.

But for EAs who derive a lot of pleasure from parenting and are high-fidelity influencers as parents (i.e., have a high likelihood of passing similar values on to their children), I think parenting can be an excellent use of time and resources. Optimal parenting is a domain quite neglected by EAs, and I hope that changes moving forward.

Comment by Peter S. Park on Snakebites kill 100,000 people every year, here's what you should know · 2022-04-29T16:25:54.233Z · EA · GW

That makes sense! Shoes are probably more expensive than malaria nets.

But it might still be a better intervention point than antivenom + improved diagnosis + increased willingness to go to the hospital.

Comment by Peter S. Park on Snakebites kill 100,000 people every year, here's what you should know · 2022-04-28T20:29:34.213Z · EA · GW

What about something they can wear on their leg to prevent the snakebite? 

Comment by Peter S. Park on A grand strategy to recruit AI capabilities researchers into AI safety research · 2022-04-28T05:58:07.720Z · EA · GW

Thank you so much for your kind words, Max! I'm extremely grateful.

I completely agree that if (a big if!) we could identify and recruit AI capabilities researchers who could quickly "plug in" to the current AI safety field, and ideally could even contribute novel and promising directions for  "finding structure/good questions/useful framing", that would be extremely effective. Perhaps a maximally effective use of time and resources for many people. 

I also completely agree that experiential learning in how to talent-scout and recruit AI capabilities researchers is also likely to be helpful for recruiting for the AI safety field in general; the transfer will be quite high. (And of course, recruiting junior research talent, etc., will be "easy mode" compared to recruiting AI capabilities researchers.)

Comment by Peter S. Park on A grand strategy to recruit AI capabilities researchers into AI safety research · 2022-04-28T05:50:09.095Z · EA · GW

Thank you so much for your feedback on my post, Peter! I really appreciate it.

It seems like READI is doing some incredible and widely applicable work! I would be extremely excited to collaborate with you, READI, and people working in AI safety on movement-building. Please keep an eye out for a future forum post with some potential ideas on this front! We would love to get your feedback on them as well.

(And thank you very much for letting me know about Vael's extremely important write-up! It is brilliant, and I think everyone in AI safety should read it.)

Comment by Peter S. Park on [$20K In Prizes] AI Safety Arguments Competition · 2022-04-26T23:59:45.475Z · EA · GW

I think Elon Musk said it in a documentary about AI risks. (Is this correct?)

Comment by Peter S. Park on [$20K In Prizes] AI Safety Arguments Competition · 2022-04-26T23:58:29.892Z · EA · GW

Quoted from an EA forum post draft I'm working on:

“Humans are currently the smartest species on the planet. This means that non-human animals are completely at our mercy. Cows, pigs, and chickens live atrocious lives in factory farms because humans’ goal of eating meat is misaligned with these animals’ well-being. Saber-toothed tigers and mammoths were hunted to extinction because nearby humans’ goals were misaligned with these animals’ survival.

But what if, in the future, we were not the smartest species on the planet? AI experts predict that it’s basically a coin flip whether or not the following scenario happens by year X. The scenario is that researchers at DeepMind, Google, or Facebook accidentally create an AI system that is systematically smarter than humans. If the goal of this superintelligent, difficult-to-control AI system is accidentally misaligned with human survival, humanity will go extinct. And no AI expert has yet convinced the rest of the field that there is a way to align this superintelligent AI system’s goal in a controlled, guaranteed manner.”

Comment by Peter S. Park on A grand strategy to recruit AI capabilities researchers into AI safety research · 2022-04-23T01:55:13.936Z · EA · GW

Thank you very much for the constructive criticisms, Max! I appreciate your honest response, and agree with many of your points.

I am in the process of preparing a (hopefully) well-thought-out response to your comment.

Comment by Peter S. Park on A grand strategy to recruit AI capabilities researchers into AI safety research · 2022-04-21T14:51:26.614Z · EA · GW

Thank you so much Jay for your kind words! 

If you happen to think of any suggestions, any blind spots of the post, or any constructive criticisms, I'd be extremely excited to hear them! (Either here or in private conversation, whichever you prefer.)

Comment by Peter S. Park on Longtermist EA needs more Phase 2 work · 2022-04-21T14:44:22.995Z · EA · GW

Thanks so much for your comment, Owen! I really appreciate it.

I was under the impression (perhaps incomplete!) that your definition of "phase 2" was "an action whose upside is in its impact," and "phase 1" was "an action whose upside is in reducing uncertainty about what is the highest impact option for future actions."

I was suggesting that I think we already know that recruiting people away from AI capabilities research (especially into AI safety) has a substantially high impact, and that this impact per unit of time is likely to improve with experience. So pondering without actually trying it is worse both for optimizing impact and for reducing uncertainty.

Comment by Peter S. Park on Longtermist EA needs more Phase 2 work · 2022-04-21T02:42:53.232Z · EA · GW

The best use of time and resources (in the Phase 2 sense) is probably to recruit AI capabilities researchers into AI safety. Uncertainty is not impossible to deal with, and is extremely likely to improve from experience.

Comment by Peter S. Park on Begging, Pleading AI Orgs to Comment on NIST AI Risk Management Framework · 2022-04-16T21:39:55.907Z · EA · GW

I completely agree with the urgency and the evaluation of the problem.

In case begging and pleading don't work, a complementary method is to create a prestige differential between AI safety research and AI capabilities research (like the one between green-energy research and fossil-fuel research), with the goal of convincing people to move from the latter to the former. See my post for a grand strategy.

How do we recruit AI capabilities researchers to transition into AI safety research? It seems that "it is relatively easy to persuade people to join AI safety in 1-on-1s." I think it's most likely worth brainstorming methods to scale this.

Comment by Peter S. Park on The Vultures Are Circling · 2022-04-06T18:15:06.668Z · EA · GW

My prior is that one's degree of EA-alignment is pretty transparent. If there are any grifters, they would probably be found out pretty quickly, and we could retract funding/cooperation from that point on.

Also, people who are at a crossroads of either being EA-aligned or non-EA aligned (e.g., people who want to be a productive member of a lively and prestigious community) could be organizationally "captured" and become EA-aligned, if we maintain a high-trust, collaborative group environment.

Comment by Peter S. Park on Peter S. Park's Shortform · 2022-04-06T18:04:40.289Z · EA · GW

A general class of problems for effective altruists is the following:

In some domains, there are a finite number of positions through which high-impact good can be done. These positions tend to be prestigious (perhaps rationally, perhaps not). So, there is strong zero-sum competition for these positions. The limiting factor is that effective altruists face steep competition for these positions against other well-intentioned people who are just not perfectly aligned on one or more crucial issues. 

One common approach is to really help the effective altruists to break through this competition. But this is hard. Another common approach is to try to convince non-effective altruists who have successfully broken into these positions to be more EA-aligned. But convincing experienced people is often difficult (you can't teach an old dog new tricks, generally speaking).

A thought I can't shake is that if we could reduce the competition somehow (expand the pie, target young and high-potential people, or, more controversially, convince non-EA-aligned people to drop out of the race), it would be much more feasible.

Comment by Peter S. Park on Peter S. Park's Shortform · 2022-03-28T00:42:18.000Z · EA · GW

So one alternative is a preprint server like arXiv (where papers can be posted) that directly serves as a journal, potentially with the peer reviews also posted. Independent of making papers available to the public, this would also save researchers' time. (Instead of formatting papers to fit Elsevier's guidelines, they could be doing more research or training new researchers.)

Comment by Peter S. Park on Peter S. Park's Shortform · 2022-03-25T03:29:35.031Z · EA · GW

What is a lower bound for the maximal counterfactual impact from allocating a couple dozen billion dollars?

Comment by Peter S. Park on $100 bounty for the best ideas to red team · 2022-03-23T16:25:04.223Z · EA · GW

Reposting my post: “At what price estimate do you think Elsevier can be acquired?

Could acquiring Elsevier and reforming it to be less rent-seeking be feasible?”

Comment by Peter S. Park on Peter S. Park's Shortform · 2022-03-23T16:05:39.534Z · EA · GW

At what price estimate do you think Elsevier can be acquired?

Could acquiring Elsevier and reforming it to be less rent-seeking be feasible?

Comment by Peter S. Park on Anecdotes Can Be Strong Evidence and Bayes Theorem Proves It · 2022-03-14T02:56:33.726Z · EA · GW

I think so too! A strong anecdote can directly illustrate a cause-and-effect relationship that is consistent with a certain plausible theory of the underlying system. And correct causal understanding is essential for making externally valid predictions.

Comment by Peter S. Park on EA Projects I'd Like to See · 2022-03-14T02:49:08.334Z · EA · GW

My intuition is that the priority for funding criticism of EA/longtermism is low, because there will be a lot of smart and motivated people who (in my opinion, because of previously held ideological commitments; but the true reason doesn’t matter for the purpose of my argument) will formulate and publicize criticisms of EA/longtermism, regardless of what we do.

Comment by Peter S. Park on Anecdotes Can Be Strong Evidence and Bayes Theorem Proves It · 2022-03-13T18:11:08.247Z · EA · GW

They can be (deterministic Bayesian updating is just causal inference), but they can also fail to be (probabilistic Bayesian updating requires a large sample size; also, sampling bias is universally detrimental to accurate learning).
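
To make both halves concrete with a toy calculation (numbers mine, purely illustrative): in odds form, Bayes' theorem reads

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}.$$

A single anecdote that is 19 times likelier under hypothesis $H$ than under $\neg H$ moves even prior odds (1:1) to posterior odds of 19:1, i.e., $P(H \mid E) = 0.95$: strong evidence from a sample of one. But if the anecdote was selected for you (you would have been shown some confirming story no matter what), the effective likelihood ratio collapses toward 1 and the warranted update mostly vanishes.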

Comment by Peter S. Park on Let Russians go abroad · 2022-03-13T17:55:31.899Z · EA · GW

Just to play devil’s advocate:

For many different types of talented people, the harm to the Russian government from their emigration might be overstated (at least the short-term harm), because Russia's economy is disproportionately based on oil and gas. Taxes on citizens' economic activity are not as important.

But the strong case for open emigration does not depend on this harm being real.

Comment by Peter S. Park on The Future Fund’s Project Ideas Competition · 2022-03-10T19:53:51.669Z · EA · GW

It's plausible that, compared to a stable authoritarian nuclear state, an unstable authoritarian nuclear state (or one that has just undergone a coup) could be even worse, both in the worst-case scenario and potentially even in expected value.

For a worst-case scenario, consider that if a popular uprising were on the verge of ousting Kim Jong Un, he might desperately nuke who-knows-where or order an artillery strike on Seoul.

Also, if you believe these high-access defectors' interviews, most North Korean soldiers genuinely believe that they can win a war against the U.S. and South Korea. This means that even if there is a palace coup rather than a popular uprising, it's plausible that an irrational general rises to power and starts a nuclear war with the intent to win.

So I think it's plausible that prevention is an entirely different beast than policy regarding already existing stable, authoritarian, and armed states.

Comment by Peter S. Park on The Future Fund’s Project Ideas Competition · 2022-03-08T00:38:51.627Z · EA · GW

Research on how to minimize the risk of false alarm nuclear launches

Effective Altruism

Preventing false alarm nuclear launches (as Petrov did) via research on the relevant game theory, technological improvements, and organization theory, and disseminating and implementing this research, could potentially be very impactful.

Comment by Peter S. Park on The Future Fund’s Project Ideas Competition · 2022-03-07T04:14:51.266Z · EA · GW

Facilitate interdisciplinarity in governmental applications of social science

Values and Reflective Processes, Economic Growth

At the moment, governmental applications of social science (where, for example, economists who use the paradigm of methodological individualism are disproportionately represented) could benefit from drawing on other fields of social science that can fill potential blind spots. The theory of social norms is a particularly relevant example. Also, behavioral scientists and psychologists could potentially be very helpful in improving the judgement of high-impact decision-makers in government, and in improving predictions about policy counterfactuals by filling in informational gaps. Research and efforts to increase the consideration of diverse plausible scientific paradigms in governmental applications of social science could potentially be very impactful.

Comment by Peter S. Park on The Future Fund’s Project Ideas Competition · 2022-03-06T20:53:05.743Z · EA · GW

Increase the number of STEM-trained people, in EA and in general

Economic growth, Research that can help us improve

Research and efforts to increase the number of quantitatively skilled people in general, together with EA movement-building efforts targeted at them (e.g., for AI alignment research, biorisk research, and scientific research in general), could potentially be very impactful. Incentivizing STEM education at the school and university levels, facilitating immigration of STEM degree holders, and offering STEM-specific guidance via 80,000 Hours and other organizations could all help.

Comment by Peter S. Park on The Future Fund’s Project Ideas Competition · 2022-03-06T02:08:08.278Z · EA · GW

Incentivize researchers to prioritize paradigm shifts rather than incremental advances

Economic growth, Research That Can Help Us Improve

There's a plausible case that societal under-innovation is one of the largest causes (if not the largest cause) of people's suboptimal well-being. For example, scientific research could be less risk-averse/incremental and more pro-moonshot. Interdisciplinary research on how to achieve society's full innovation potential, and movement-building targeted at universities, scientific journals, and grant agencies to incentivize scientific moonshots, could potentially be very impactful.

Comment by Peter S. Park on The Future Fund’s Project Ideas Competition · 2022-03-04T16:42:51.185Z · EA · GW

A fast and widely used global database of pandemic prevention data

Biorisk

Speed is of the essence for pandemic prevention when emergence occurs, so a fast and widely used global database could potentially be very impactful. It would be great if events like the early discovery of potential pandemic pathogens, doctors' diagnoses of potential pandemic symptoms, etc., were regularly and automatically uploaded to the database, so that high-frequency algorithms could use it to flag potential outbreaks faster than people can.

Comment by Peter S. Park on The Future Fund’s Project Ideas Competition · 2022-03-04T03:07:46.459Z · EA · GW

Yes, I think these proposals together could be especially high-impact, since people who pass screening may develop mental-health issues down the line.

Comment by Peter S. Park on The Future Fund’s Project Ideas Competition · 2022-03-04T02:51:29.196Z · EA · GW

"find an existing youtube studio with some folks who are interested in EA"-> This sounds very doable and potentially quite impactful. I personally enjoy watching Kurzgesagt and they have done EA-relevant videos in the past (e.g., meat consumption).

"But a broader, 80K-style effort to build the EA pipeline so we can attract and absorb more media people into the movement also seems worthwhile." -> I agree!

Comment by Peter S. Park on The Future Fund’s Project Ideas Competition · 2022-03-04T02:47:07.612Z · EA · GW

Thanks so much for these suggestions! I would also really like to see these projects get implemented. There are already bootcamps for, say, pivoting into data science jobs, but having other specializations of statistics bootcamps (e.g., an accessible life-coach level bootcamp for improving individual decision-making, or a bootcamp specifically for high-impact CEOs or nonprofit heads) could be really cool as well.

Comment by Peter S. Park on The Future Fund’s Project Ideas Competition · 2022-03-04T02:42:58.454Z · EA · GW

Thanks for the great big-picture suggestions! Some of these are quite ambitious (in a good way!) and I think this is the level of out-of-the-box thinking needed on this issue. 

This idea goes hand-in-hand with a previous post "Facilitate U.S. voters' relocation to swing states." For a project aiming to facilitate relocation to well-chosen parts of the US, it could be additionally impactful to consider geographic voting power as well, depending on the scale of the project.

Comment by Peter S. Park on The Future Fund’s Project Ideas Competition · 2022-03-04T02:36:57.604Z · EA · GW

Thanks so much, Jackson!

I have never published a book, but some EAs have written quite famous and well-written books. In addition to what you suggested, I was thinking "80,000 pages" could organize mentoring relationships for other EAs who are interested in writing a book, writer's circles, a crowdsourced step-by-step guide, etc. Networking in general is very important for publishing and publicizing books, from what I can gather, so any help on getting one's foot in the door could be quite helpful.

Comment by Peter S. Park on The Future Fund’s Project Ideas Competition · 2022-03-03T19:44:31.151Z · EA · GW

Pipeline for podcasts

Effective altruism

Crowdsourced resources, networks, and grants may help facilitate EAs' and longtermists' creation of high-impact, informative podcasts.

Comment by Peter S. Park on The Future Fund’s Project Ideas Competition · 2022-03-03T18:44:26.619Z · EA · GW

Reduce meat consumption

Biorisk, Moral circle expansion

Research and efforts to broadly reduce meat consumption would help moral circle expansion, pandemic prevention, and climate change mitigation. Messaging from the pandemic-prevention angle (in addition to the climate-change and moral-circle-expansion angles) may help.