What are EA project ideas you have?

post by Mati_Roy · 2020-03-07T02:58:53.338Z · EA · GW · 1 comment

This is a question post.



The question is meant to be broad.

I invite y'all to share your ideas here, as they come to you.

Relatedly, if you see a project idea that has already been done, pointing it out as a reply would be useful!

For sharing existing project lists, I suggest doing it in the following post instead: Concrete project lists [EA · GW]


Motivation for asking: From now on, I intend to use my answer here to continuously document new ideas I come up with instead of having them logged privately in Google Docs. This is part of my goal of reducing the time between conceiving of an idea and sharing it (How much delay do you generally have between having a good new idea and sharing that idea publicly online? [LW · GW]).


answer by dominicroser · 2020-03-09T08:43:10.527Z · EA(p) · GW(p)

WHAT: A book like "Strangers Drowning", but focused on the "E" of EA rather than the "A" of EA.

WHY: Narrative can be such a tremendous force in changing people's lives. It's often more powerful than argument (even for brainy people).

There's already a lot of world literature and newspaper stories on people who have been tremendously altruistic. There is much less literature about people who have been tremendously altruistic and -- this is key -- have been motivated by their altruism to care about effectiveness and listen to the evidence.

I'd love to have a book with biographies or stories that trace -- in narrative rather than argument -- people whose love for others has pushed them to care about effectiveness, care about evidence, and generally care about a results-oriented outlook that focuses on what 'really works at the end of the day'. (Note that the book should not generally be about people who care about effectiveness and evidence -- but only about people who have deliberately chosen to do so out of altruism (rather than, say, out of nerdiness).)

Possible biographies could include: Florence Nightingale, Ignaz Semmelweis, Deng Xiaoping, figures from EA and utilitarianism, some theologians in the Second World War who pragmatically looked towards ending the killing (Bonhoeffer, Barth, etc.?), etc. I'm not vouching for this list of examples at all -- it's more to give an idea.

By the way, creating such a book could be a project for EAs with a different skillset than the cliché EAs.

answer by Mati_Roy · 2020-03-07T03:04:06.188Z · EA(p) · GW(p)

moving my answers into separate comments below this answer.

particularly useful feedback includes, but isn't limited to:

  • links to a similar project that was already done
  • connection with people interested in this project
  • analysis of the usefulness of the project

note: these are just ideas; they might not be a priority, or good at all

comment by Mati_Roy · 2020-03-30T02:06:11.894Z · EA(p) · GW(p)

The Bullshit Awards

Proposal: Give prizes to people spotting / blowing the whistle on papers that bullshit their readers, and explaining why.

Details: There could be a Bullshit Alert Prize for the one blowing the whistle, and a Bullshit Award for the one having done the bullshitting. This would be similar to the Darwin Awards in that you don't want to be the source of such an award.

Example: An analysis that could have won this is Why We Sleep — a tale of institutional failure.

Note: I'm not sure whether that's a good way to go about fixing that problem. Is shaming a useful tool?

Replies from: yhoiseth
comment by yhoiseth · 2020-03-30T16:01:46.564Z · EA(p) · GW(p)

Great idea! This sounds like a lot of fun. I'm also unsure about the net benefit. We might want to keep it as unaffiliated as possible from other EA organizations in order to avoid any spillover damage.

comment by Mati_Roy · 2020-03-11T22:19:49.324Z · EA(p) · GW(p)

Belief Network

Last updated: 2020-03-30

Category: group rationality; signal boosting

Proposal: Track people's beliefs over time, and what information gave them the biggest update.

Details: It could be done at the same time as the EA survey every year. And/or it could be a website that people continuously update.

Motivation: The goals are

1) to track which information is the most valuable so that more people consume it, and

2) see how beliefs evolve (which might be evidence in itself about which beliefs are true; although, I think most, including myself, wouldn't think this was the strongest form of evidence). It could be that most people make a similar series of paradigm shifts over time, and knowing which ones might help speed things up.

Alternative name: MindChange

What's been done so far: Post on LessWrong What are some articles that updated your beliefs a lot on an important topic? The EA survey also tracks some high-level views, notably on cause prioritization.
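A minimal sketch of what one record in such a tracker could look like, and of how the "most valuable information" could be surfaced from it (the schema and all field names are illustrative assumptions, not a spec):

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class BeliefUpdate:
    """One row of the hypothetical tracker (field names are illustrative)."""
    person: str
    topic: str
    credence: float         # subjective probability, in [0, 1]
    updated_on: date
    biggest_influence: str  # e.g. the article that most moved this belief

def top_influences(updates, n=3):
    """Rank sources by how many people cite them as their biggest update."""
    counts = Counter(u.biggest_influence for u in updates)
    return counts.most_common(n)
```

With data like this collected yearly, goal 1 (finding the highest-value information) is just a frequency count, and goal 2 (how beliefs evolve) is a time series of `credence` per person and topic.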


Just saw a similar idea I had (I think 2 years ago).

A Chrome Extension and Plug In to measure changes to one's world model and one's behaviors.

Goal: Try to find the articles that are the most likely to update our map and/or behaviors.

comment by Mati_Roy · 2020-04-18T17:27:25.827Z · EA(p) · GW(p)

Promise Prediction

Proposal: Have a prediction market on what politicians will accomplish in their next mandate.

Why: That way, it will be easier for people to know how likely each policy is to be implemented, and it will be harder for politicians to bullshit everyone.

Related: This project would nicely complement the Polimeter, which tracks the promises made by politicians and is now part of Vox Pop Labs.

Note: I think I've seen this idea somewhere else, but I don't remember where.

comment by Mati_Roy · 2020-03-07T22:41:58.909Z · EA(p) · GW(p)

Impact of the 5% payout rule

Category: meta-EA; research

Proposal: Research what would be the consequences of removing the 5% payout rule.

Motivating intuition: maybe it would help longer-termist causes (?) and it might also increase the global ratio of investing / consumption (?)

Date posted: 2020-03-06

Additional information:

  • A foundation must pay out 5% of its assets each year, while a public charity need not.
  • Donors to a public charity receive greater tax benefits than donors to a foundation.
  • A public charity must collect at least 10% of its annual expenses from the public to remain tax-exempt, while a foundation need not.

( source: Foundation (United States law) )

comment by Mati_Roy · 2020-04-14T03:30:00.386Z · EA(p) · GW(p)

Quantified Doomsday Clock


Since the Doomsday Clock from the Bulletin of the Atomic Scientists doesn't have any clear methodology for why the clock advances or recedes, I am providing the Metaculus Doomsday Clock as an alternative. Currently the way it advances is by using the Metaculus median prediction of humanity going extinct by 2100 to determine how many minutes we are from midnight. It can be improved, so make suggestions in the comments.

(source: Matthew Barnett's Facebook wall)
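A minimal sketch of how such a probability-to-clock mapping could work (the linear mapping and the 60-minute dial are illustrative assumptions, not the actual Metaculus methodology):

```python
def minutes_to_midnight(p_extinction, dial_minutes=60.0):
    """Linearly map a probability of extinction (e.g. a Metaculus median
    over some fixed horizon) to minutes remaining before midnight.
    p_extinction = 0 puts the hand at the start of the dial;
    p_extinction = 1 puts it at midnight."""
    if not 0.0 <= p_extinction <= 1.0:
        raise ValueError("p_extinction must be in [0, 1]")
    return dial_minutes * (1.0 - p_extinction)
```

A non-linear (e.g. logarithmic) mapping would be another defensible choice, since small probability changes near 0 arguably matter more than the same changes near 1.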

Proposal: IFTTT-connected Doomsday-looking Doomsday Clock

Notes: Please contact me if interested in helping commercialize this; I have a few ideas and can fund the project.

Question: What would be a good name for it? Brainstorming:

  • Quantified Doomsday Clock
  • Quantum Doomsday Clock
  • Metaculus Doomsday Clock (pro: publicity for Metaculus; but important con IMO: inflexible and not future-proof as it becomes dependent on Metaculus)

Replies from: Khorton
comment by Khorton · 2020-04-14T11:08:31.734Z · EA(p) · GW(p)

Rather than 2100, can I suggest the next century? Otherwise we'd move away from midnight as we move toward 2100 - very counterintuitive.

Replies from: Mati_Roy
comment by Mati_Roy · 2020-04-15T04:21:24.866Z · EA(p) · GW(p)

yeah good point, I agree; thanks!

comment by Mati_Roy · 2020-04-20T01:29:25.323Z · EA(p) · GW(p)

Group for collective actions

Status: done, see: https://www.facebook.com/groups/LWCoordination/

Proposal: have a group to experiment with coordinating on small projects that require coordination

Example: I just posted a proposal about improving the Cause Prioritization Wiki: if 100 major edits get committed to, then everyone makes the edits they committed to. This is useful because a wiki only becomes interesting when there are a lot of editors, so this allows the platform to get bootstrapped and avoids the chicken-and-egg problem.

Comments: There's a meta-thread to discuss the group itself in the group, so I invite you to comment there if you have any comments on the group.

comment by Mati_Roy · 2020-04-20T23:55:35.568Z · EA(p) · GW(p)

Philanthropy tax / Giving your 2 percents

Meta-proposal: Research what would be the consequences of implementing the proposal.

Proposal: Give citizens the ability to decide where X% (say 2%) of their taxes goes directly (it can be a charity or a government program)

Details: Of course, the government can rebalance the rest of its budget in such a way that there are no counterfactual changes. But maybe it would still make a difference. If not, then maybe the X% has to go to a charity. Or maybe the donations could be made to more specific governmental projects.

Reasoning: Maybe individuals have specific insights that the government doesn't have when it comes to public good, but altruism aside, individuals don't have an incentive to finance public goods. Empowering citizens to directly decide where part of their taxes goes would help with that.

Extra: Maybe there could be a way to certify some charities as efficient, but that risks going full circle, with the government once again making the decisions; still, there might be some in-between that would be superior. Maybe there should be a restriction to charities working on public goods.

Thought on impact: Maybe philanthropists would give X% less to charities given they would have this mechanism to direct money to charities they want to support. If that's true, then increasing income taxes by X% would sort of be going full circle, except now everyone would be giving X% to charity.

Name: Calling it the "philanthropy tax" might confuse the concept with "taxing philanthropy". I'm definitely open to hearing other suggestions for names.

Update: Not surprisingly, other people have had similar ideas. For example, see Robert Lee's Facebook post.

comment by Mati_Roy · 2020-06-06T02:42:42.080Z · EA(p) · GW(p)

Royalty free AI images

Created: early 2019 (or maybe before) | Originally shared on EA Work

Cause area: AI safety

Proposal: Make a collection of (royalty) free images representing the idea of AI / AI safety / AI x-risk / AI risk that aren't anthropomorphizing AI or otherwise misportraying AI (both by searching for existing images and by creating more). This could be used by the media, local AI (safety) groups, etc.

Details: I think this is less of a problem than it used to be, but still think this could be valuable. If you want funding for that, you could consider applying for a grant from the Long-Term Future Fund: https://app.effectivealtruism.org/funds/far-future.

Cross-post: https://www.facebook.com/groups/1696670373923332/permalink/2287004484889915/

comment by Mati_Roy · 2020-06-16T06:50:48.189Z · EA(p) · GW(p)

Shaking hands across the world

Category: Bringing powerful countries closer together

Idea: Handshake statue in Times Square and some equivalent place in China, where people can give each other a handshake across the world

Effectiveness: I don't know; it doesn't seem effective, but maybe such symbols are powerful and would bring the world closer together, hence increasing cooperation / reducing the risk of war

Source: Space Force TV show, s1e7 8:30

comment by Mati_Roy · 2020-03-17T10:32:03.313Z · EA(p) · GW(p)

Coronavirus: Should I go to work?

UPDATE: An EA project I'm part of might do this

summary: have an app that helps people decide whether they should go to work

context: in the last 12 hours I spent maybe about 2 hours 'empowering' someone I know by giving them more information to help them decide whether they should take sick days

problem: knowing the probability that one is infected (with the coronavirus) helps inform whether they should avoid going to work. the probability beyond which you should stay home is not the same for each type of job. at what point should one not go to work?

the 2 main sub questions are:

  • what's the probability that I'm infected?
    • there are already forms that sort of do that ex.: https://covid19.empego.ca/#/, but I would prefer a more probabilistic approach with more detailed input
  • if I'm infected, what damage am I likely to cause, in expectation? how many people am I meeting at work? how many confirmed cases are in my city? etc.
    • there's an app made by EAs that might get released in the coming days that addresses a similar question

for example: Someone told me: my partner was coughing, had a sore throat, and had X fever during the whole day, but is now feeling better; yesterday ze was okay, and we slept together, but I haven't seen zir since then. ze wasn't outside the country recently, and hasn't met anyone infected as far as ze knows. ze lives in city Y which has Z cases.

there could also be intermediary recommendations (maybe?): go to work, but take the following precautions:

  • wear a mask
  • avoid meetings
  • etc.

addendum: in countries that don't have monetary incentives for people to self-quarantine, there will be a negative externality not captured. but the tool should still improve decision making.
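the core decision above can be sketched as a toy expected-value comparison (the multiplicative model and all parameter names are illustrative assumptions, not the app's actual logic):

```python
def should_stay_home(p_infected, contacts_at_work, p_transmit_per_contact,
                     harm_per_infection, cost_of_staying_home):
    """Compare the expected harm of going to work against the cost of a
    sick day. Returns True when staying home is the better choice.
    All inputs are illustrative; a real tool would estimate p_infected
    from a symptom questionnaire and the harm term from local case data."""
    expected_harm = (p_infected * contacts_at_work
                     * p_transmit_per_contact * harm_per_infection)
    return expected_harm > cost_of_staying_home
```

the intermediary recommendations (mask, no meetings, etc.) would correspond to actions that lower `contacts_at_work` or `p_transmit_per_contact` enough to flip the comparison.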

comment by Mati_Roy · 2020-03-07T22:44:33.299Z · EA(p) · GW(p)

Forum Facebook page

Posted: 2020-03-07

Category: signal boosting

Proposal: Share the best (say >=100 karmas) posts on the EA Forum on a Facebook page called "Best of the EA Forum"

Why? So that people that naturally go on Facebook but not on the EA Forum can be exposed to that content

Note: If there's a way to get this list easily, it might facilitate the process.
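A minimal sketch of the filtering step (the record format is an illustrative assumption; the post list could come from the Forum's API, an RSS feed, or a scraper):

```python
def best_posts(posts, karma_threshold=100):
    """Filter post records down to those worth cross-posting.
    Each record is assumed to be a dict with 'title', 'url', and 'karma'
    keys; posts missing a karma field are treated as 0 and skipped."""
    return [p for p in posts if p.get("karma", 0) >= karma_threshold]
```

The remaining work is just posting each surviving title + URL to the Facebook page, which is the part a tool like Zapier can automate.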

Update: 2020-04-24

Experimental page using Zapier: https://www.facebook.com/EAForumKarma100/

x-post: https://www.facebook.com/groups/1392613437498240/permalink/2947443972015171/

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2020-03-09T20:26:54.957Z · EA(p) · GW(p)

I appreciate this idea! However, I'd prefer that people cross-post Forum content to groups that already have substantial/relevantly targeted audiences (actually, I'd really like people to do this more often), rather than creating a new group that could split off some of the Forum's readership. 

Having a Forum-focused Facebook group also seems like it would raise the chances of more discussion happening on Facebook rather than on the Forum posts themselves, which seems bad (comments harder to find later, not linked to anyone's profile, not open for karma voting, not eligible for the Comment Prize, etc.)

If the group really is just a collection of links that people can easily share in other groups, and if discussion comes back to the Forum, it could be a net positive. I'll be curious to see how it gets used.

Replies from: Mati_Roy
comment by Mati_Roy · 2020-03-09T23:41:50.855Z · EA(p) · GW(p)

thanks for your comment, I totally agree!

maybe we could ban comments? and delete the page if that doesn't end up working?

Replies from: aarongertler
comment by Aaron Gertler (aarongertler) · 2020-03-10T01:00:36.949Z · EA(p) · GW(p)

Rather than a ban, probably official discouragement + a polite reminder that people should add their comments to the Forum as well as the Facebook posts? If people really want to talk on Facebook, it seems bad to stop them, but gentle nudges go a long way!

comment by Mati_Roy · 2020-03-22T08:19:04.690Z · EA(p) · GW(p)

Altruist credits

Epistemic status: not sure if the idea works

Category: meta

Proposal: Pay someone with a 'donation gift card' or 'donation credits'

Details and rationale:

Often, when I work on a project approved by EAs, I don't necessarily want to be paid as much as I want to be able to have people work on my EA projects in the future.

Imagine you have a Donor-Advised Fund (DAF) called the Altruist Bank which issues one Altruist Credit per USD you put into it. The Altruist Credit can be spent by saying to which charity you want the DAF to send a USD. The Altruist Credit can also be given to other people directly.

My hope was that accepting payment in Altruist Credits would be a strong signal of alignment on altruism, and altruistic people might perform better at altruist projects (as their incentives are more aligned). A discounted wage might also act as a signal, although maybe it can also attract less qualified people (?)

It might also encourage a culture of more donations.

And **maybe** be simpler than everyone individually opening a DAF.

Avoiding possible problems:

  • If we can somehow make it illegal to sell, that would be useful, because otherwise anyone can sell their Altruist Credits to altruists for just slightly less than 1 USD each, at which point you're just back to USDs
  • If it became massively used, then it could start being used just as a currency (as long as everyone expects others to accept it) (although this seems unlikely to happen)

Additional note:

  • I think parallel economies, such as Simbi, are bad for basic Econ 101 reasons, but here maybe the altruistic signaling is of sufficient additional value (?)

comment by Mati_Roy · 2020-03-27T17:41:31.799Z · EA(p) · GW(p)

Moved from my short form; created on 2020-02-28

Group to discuss information hazard

Context: Sometimes I come up with ideas that are very likely information hazards, and I don't share them. Most of the time I come up with ideas that are very likely not information hazards.

Problem: But also, sometimes, I come up with ideas that are in-between, or that I can't tell whether I should share or not.

Solution hypothesis: I propose creating a group with which one can share such ideas to get external feedback on them and/or on whether they should be shared more widely or not. To reduce the risk of information leaking from that group, the group could:

  • be kept small (5 participants?)
    • note: there can always be more such groups
  • be selective
    • exam on information hazard / on Bostrom's paper on the topic
      • notably: some classes of hazard should definitely not be shared in that group, and this should be made explicit
    • questionnaire on how one handled information in the past
      • notably: secrets
    • have a designated member share a link on an applicant's Facebook wall with rewards for reporting antisocial behavior
    • pledge to treat the information with the utmost seriousness
    • commit to give feedback for each idea (to have a ratio of feedback / exposed person of 1)

Questions: What do you think of this idea? How can I improve this idea? Would you be interested in helping with or joining such a group?

Possible alternatives:

  • Info-hazard buddy: ask a trusted EA friend if they want to give you feedback on possible info-hazardy ideas
    • warning: some info-hazard ideas (/idea categories) should NOT be thought about more. some info-hazard can be personally damaging to someone (ask for clear consent before sharing them, and consider whether it's really useful to do so).
    • note: yeah I think I'm going to start with this first

EtA 2020-04-18

Ritual to become info-hazard buddy:

  • Ask the person to share their lying policy
  • Ask the person for a history of lies they've told
  • Post on their Facebook wall an anonymous form for people to report that person's trustworthiness
  • Check who that person has blocked on Facebook, and reach out to them to ask why they were blocked
  • Write a doc about a pledge to handling information with the utmost care, and sign it (maybe also sign a legal non-disclosure agreement)
    • Although some infractions should probably be pursued in some alternative court if possible, to avoid the information getting more exposure

Replies from: Khorton
comment by Khorton · 2020-03-27T18:40:59.908Z · EA(p) · GW(p)

Why wouldn't you just ask four people who you trust to review each idea in confidence? Why formalize it or insist they reciprocate it?

comment by Mati_Roy · 2020-04-22T04:38:36.507Z · EA(p) · GW(p)

Category: research

Externalities of war predictions

See: link

comment by Mati_Roy · 2020-05-20T00:21:28.267Z · EA(p) · GW(p)

Maybe summarizing the book "Who Goes First? The Story of Self-Experimentation in Medicine". Two possibly important theses:

  • self-experimentation is important
  • medical innovations are available way before they get adopted

comment by Mati_Roy · 2020-06-06T02:38:54.875Z · EA(p) · GW(p)

EA StackExchange

Created: early 2019 (or maybe before) | Originally shared on EA Work

Create a quality StackExchange site so that the EA community can build up knowledge online.

Note: The previous attempt to do so failed (see: https://area51.stackexchange.com/proposals/97583/effective-altruism).

comment by Mati_Roy · 2020-06-06T02:41:20.393Z · EA(p) · GW(p)

Decision Theory Interactive Guide

Created: early 2019 (or maybe before) | Originally shared on EA Work

Proposal: I think this could help with understanding decision theories (especially functional decision theory). There could be some scenarios where the user has to choose an action or a decision procedure and see how this affects other parts of the scenario that are logically connected to the agent the user controls. For example: playing the prisoner's dilemma with a copy of oneself, Newcomb's problem, etc. Could be done in a similar way to Nicky Case's games.
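A minimal sketch of one such scenario: a one-shot prisoner's dilemma against an exact copy, where picking a decision procedure fixes both players' moves (the payoff numbers are the standard illustrative ones):

```python
# Row player's payoffs: (my_move, their_move) -> my_payoff
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play_against_copy(decision_procedure):
    """Your opponent is an exact copy, so it runs the same procedure
    and necessarily makes the same move as you."""
    my_move = decision_procedure()
    their_move = decision_procedure()  # the copy computes the same answer
    return PAYOFFS[(my_move, their_move)]
```

Defection dominates the ordinary one-shot game, but against a copy your choice of procedure determines both moves, so a cooperating procedure scores 3 while a defecting one scores only 1 -- the kind of logical connection the interactive guide would let users explore.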

comment by Mati_Roy · 2020-06-06T02:45:40.031Z · EA(p) · GW(p)

Sober September

Created: early 2019 (or maybe before) | Originally shared on EA Work

Cause area: aging

Dry Feb is a Canadian initiative that invites people to go sober for February to raise money for the Canadian Cancer Society: https://www.dryfeb.ca/.

Imagine this idea, but worldwide and for general medical research.

I would suggest fundraising for the Methuselah Foundation for its broad approach. They fund a lot of prizes which create market pressure for medical progress, sparing donors from having to figure out which research groups are the most effective. They've also had other initiatives that help the field at large, such as conferences and roadmaps. More on them here: https://www.mfoundation.org/who-we-are/.

An idea for a name is “Sober September”.

Tangentially, reducing alcohol consumption might also be a somewhat effective intervention to increase QALYs in richer countries (ex.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=

Note: I’m not working on this, but could provide some guidance.

comment by Mati_Roy · 2020-06-06T02:58:42.366Z · EA(p) · GW(p)

Science policy think tank (or advocacy group?)

Potential problem: it might accelerate all scientific progress, which isn't relevant in the framework of differential technological progress, or possibly harmful (?) if, for example, AI research parallelizes better than AI safety research

Related: https://causeprioritization.org/Improving_science

comment by Mati_Roy · 2020-06-06T02:59:29.669Z · EA(p) · GW(p)

FDA Policy Think-tank (and/or advocacy group)

comment by Mati_Roy · 2020-06-06T03:01:13.878Z · EA(p) · GW(p)

Rationalist Olympiads

Potential funding: EA Meta Fund


answer by Mati_Roy · 2020-06-06T02:37:57.193Z · EA(p) · GW(p)

I will document ideas from others I want to signal boost in replies to this comment
