Experiment in Retroactive Funding: An EA Forum Prize Contest 2022-06-01T21:15:09.031Z


Comment by Matt Brooks on Announcing, powered by EA Funds · 2022-07-18T15:07:17.952Z · EA · GW

Hey Robbert-Jan,

Sorry, somehow I missed your comment but saw it once Simon replied and I got a notification.

We're likely staying in the web2 world for now, but there is a chance we graduate to web3/crypto in the future.

Check out our website here:
Join our Discord here:
Read (or skim) our long EA post here:

Comment by Matt Brooks on Announcing, powered by EA Funds · 2022-07-18T15:06:11.513Z · EA · GW

Hey Simon,

We've been funded by the FTX Future Fund regrantor program!

Check out our website here:
Join our Discord here:
Read (or skim) our long EA post here:

Comment by Matt Brooks on Announcing the New York Effective Altruism Hub: a coworking and event space (+AIS/longtermism office scoping) · 2022-06-13T23:24:20.926Z · EA · GW

Exciting! I just filled out the form.

Comment by Matt Brooks on Against immortality? · 2022-04-28T20:44:54.050Z · EA · GW

I think this is really difficult to truly assess because there is a huge confounder: the more you age, the worse your memory gets, the more your creativity decreases, the more your ability to focus decreases, etc.

If all of that were fixed by anti-aging, it might no longer be true that science progresses one funeral at a time, because people at the top of their game could keep producing great work instead of becoming geriatric while still holding status/power in the system.

Also, it could be a subconscious thing: "why bother truly investigating my beliefs at age 70? I'm going to die soon anyway; let me just coast on inertia until I retire."

Also, this seems possible to fix with better institutional structures/incentives. Academia is broken in many ways; this is just one of them.

Comment by Matt Brooks on Against immortality? · 2022-04-28T20:33:31.138Z · EA · GW

This is a good comment. I'd like to respond but it feels like a lot of typing... haha


but that’s not the same as seeing improvements in leaders’ quality

I just mean the world is trending towards democracies and away from totalitarianism.


It’s inherently easier to attain and keep power by any means necessary with zero ethics

Yes, but 100x easier? Probably not. And what if the great minds have 100x the numbers and resources? Network effects are strong.


There’s another asymmetry where it’s often easier to destroy/attack/kill than build something.

Same response as above


I think it’s ambiguous whether Putin supports your point. The world is in a very precarious situation now because of one tyrant.

My point is that the vast majority of the world immediately pushed back on Putin much harder than people expected. This supports the trend I'm pointing at: people are less tolerant of totalitarianism than they were 100 years ago. We are globally trying (and succeeding) to set stronger norms against inflicting violence and oppression.

Some personality pathologies like narcissism and psychopathy seem to be increasing lately, tracking urbanization rates and probably other factors.

I'm guessing it will be somewhat easier to reverse these trends in a less scarcity-based society in the future, especially when we have a better handle on mental health from all angles. And the increases are probably not enough to matter in the wider question of great minds vs dictators.


People can be “brilliant” on some cognitive dimensions but fail at defense against dark personality types. For instance, some otherwise brilliant people may be socially naive.

The great minds can outnumber the dictators in both people and resources, and network effects also compensate for individual naivety: each individual person doesn't have to succeed against dictators on their own; the whole global fight for good only has to succeed collectively.


Outside of our EA bubble, it doesn’t look like the world is particularly sane or stable.

The world definitely seems to be trending toward greater sanity and stability, though.

Comment by Matt Brooks on Against immortality? · 2022-04-28T19:20:52.933Z · EA · GW

I agree, it feels like a high-stakes decision! And I'm pretty aligned with longtermist thinking; I just think that "entire future at risk due to totalitarian lock-in due to removing death from aging" seems really unlikely to me. But I haven't really thought about it much, so I guess I'm really uncertain here, as we all seem to be.

"what year you guess it would first have been good to grant people immortality?"

I kind of reject the framing of "immortality", since that isn't the decision we're currently faced with (unless you're only interested in that specific hypothetical world). The decision we face is: do we speed up anti-aging efforts to reduce age-related death and suffering? You can still kill (or incapacitate) people who don't age; that's my whole point about great minds vs. dictators.

But to consider the risks in the past vs today:

Before the internet and modern society/technology/economy, it was much, much harder for great minds to coordinate against evil at a global scale (thinking of the Cultural Revolution you mentioned). So my "great minds counter dictators" theory doesn't hold up well in the past, but I think it does in modern times.

The population 200 years ago was 1/8 of what it is today and growing much more slowly, so the premature deaths you could have prevented per year with anti-aging would have been far fewer than today, and you'd get less benefit.

The general population's sense of morals and demand for democracy are improving, so I think the tolerance for evil/totalitarianism is dropping fairly quickly.

So you'd have to come up with an equation with at least the following:
- How many premature deaths you'd save with anti-aging
- How likely and in what numbers will people, in general, oppose totalitarianism
- If there was opposition, how easily could the global good coordinate to fight totalitarianism
- If there was coordinated opposition would their numbers/resources outweigh the numbers/resources of totalitarianism
- If the coordinated opposition was to fail, how long would this totalitarian society last (could it last forever and totally consume the future or is it unstable?)
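As a toy illustration of how those factors might combine, here is a minimal expected-value sketch in Python. All numbers are hypothetical placeholders (the 0.4% → 0.39% figures are the made-up numbers from elsewhere in this thread), not estimates:

```python
# Toy expected-value sketch: should we delay anti-aging by N years to
# shave a bit off the risk of totalitarian lock-in?
# Every number below is a hypothetical placeholder, not an estimate.

def expected_cost_of_waiting(
    years_waited: int,
    premature_deaths_per_year: float,   # deaths anti-aging would have prevented
    lock_in_risk_now: float,            # P(lock-in) if we start anti-aging today
    lock_in_risk_after_wait: float,     # P(lock-in) if we wait
    lives_lost_if_lock_in: float,       # life-equivalents a lock-in would cost
) -> float:
    """Net lives lost by waiting: deaths from the delay minus the
    expected lives saved by the reduction in lock-in risk."""
    deaths_from_delay = years_waited * premature_deaths_per_year
    risk_reduction_benefit = (
        (lock_in_risk_now - lock_in_risk_after_wait) * lives_lost_if_lock_in
    )
    return deaths_from_delay - risk_reduction_benefit

# With placeholder inputs (0.4% -> 0.39% risk over a 200-year wait,
# ~50M age-related deaths/year, a lock-in costing 100B life-equivalents),
# waiting comes out as a large net loss of lives:
net = expected_cost_of_waiting(
    years_waited=200,
    premature_deaths_per_year=50_000_000,
    lock_in_risk_now=0.004,
    lock_in_risk_after_wait=0.0039,
    lives_lost_if_lock_in=100_000_000_000,
)
print(f"{net:,.0f}")  # positive => waiting costs lives on net
```

Under these placeholder numbers, the delay costs ~10 billion premature deaths while the risk reduction buys back only ~10 million expected lives, so the lock-in would have to be vastly larger or the risk reduction vastly bigger for waiting to win.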

Comment by Matt Brooks on Against immortality? · 2022-04-28T18:35:20.722Z · EA · GW

Of course it would, but if waiting 200 years only reduces the risk of totalitarian lock-in from 0.4% to 0.39% (obviously made-up numbers), I would think that's a mistake that costs billions of lives.

Comment by Matt Brooks on Against immortality? · 2022-04-28T18:09:00.969Z · EA · GW

The thing that's hard to internalize (at least I think) is that by waiting 200 years to start anti-aging efforts you are condemning billions of people to an early death with a lifespan of ~80 years. 

You'd have to convince me that waiting 200 years would reduce the risk of totalitarian lock-in by enough to offset the billions of lives guaranteed to "prematurely end".

Totalitarian lock-in is scary to think about, while billions of people's lives ending prematurely is just text on a screen. The human brain can fairly easily simulate the everyday horror of a totalitarian world, but it's impossible for it to digest even 100,000,000 premature deaths, let alone billions and billions.

Comment by Matt Brooks on Against immortality? · 2022-04-28T17:06:50.478Z · EA · GW

But we're not debating if immortality over the last thousand years would have been better or not, we're looking at current times and then estimating forward, right? (I agree a thousand years ago immortality would have been much much riskier than starting today)

In today's economy/society great minds can instantly coordinate and outnumber the dictators by a large margin. I believe this trend will continue and that if you allow all minds to continue the great minds will outgrow the dictator minds and dominate the equation.

Dictators are much more likely to die (not from aging) than the average great mind (more than 50x?). This means that great minds will continue to multiply in numbers and resources while dictators sometimes die off (from their risky lifestyle of power-grabbing).

Once there are 10,000x as many brilliant minds with 1,000x the resources of the evil dictators, how do you expect an evil dictator to successfully grab power over a whole country, or the whole world?

Comment by Matt Brooks on Against immortality? · 2022-04-28T16:31:28.625Z · EA · GW

When thinking about the tail of dictators don't you also have to think of the tail of good people with truly great minds you would be saving from death? (People like John von Neumann, Benjamin Franklin, etc.)

Overall, dictators are in a very tough environment with power struggles and backstabbing, lots of defecting, etc. while great minds tend to cooperate, share resources, and build upon each other.

Obviously, there are a lot more great minds doing good than 'great minds' wishing to be world dictators. And it seems to be trending in the right direction: compare how many great, smart democratic leaders there are now vs. 100 years ago. Extend that line another 100 years and it seems like we'll keep improving.

In a world in which a long-tail dictator could theoretically establish an ironclad grip on their country for evil, wouldn't there be thousands of truly brilliant minds with globally coordinated resources pushing against this? (See Russia vs. Ukraine for a very simple real-world example of "1 evil guy vs. the world".)

So this long-tail dictator has to worry about intense internal struggle/pressure, but also about most of the world pressuring them externally? I don't see how the brilliant moral minds don't just outmaneuver this dictator, given 100x+ more people, resources, and coordination (in this theoretical future).

Comment by Matt Brooks on HealthCredit: a carbon credit for health · 2022-04-08T22:47:11.919Z · EA · GW

Congrats on winning the hackathon! Very impressive! I'm excited to see how this project progresses; it seems like a great opportunity to improve traditional funding and the nonprofit sector without taking huge, crazy leaps.

Comment by Matt Brooks on Toward Impact Markets · 2022-03-17T17:19:27.617Z · EA · GW

I am also confused about NPX: it cannot be that if the funding fails to make impact then the investor invests other funding principle?


Sorry, what do you mean by this?

Comment by Matt Brooks on Toward Impact Markets · 2022-03-16T00:50:34.475Z · EA · GW

Really impressive post, some great deep thinking here.  I saw earlier drafts and it has definitely grown and improved.  I'm proud to be working with you on this project, thank you for your time and effort! And thanks for 1% of the impact as well, I appreciate that.

For anyone looking for a simple real-world example, I have this WIP document on how it could have been used to kickstart climate change action in the past. I'm trying to figure out a way to easily convey the concept and benefits of an impact market to a general audience (non-EA/rationalist people).

Comment by Matt Brooks on Being an individual alignment grantmaker · 2022-02-28T17:04:20.274Z · EA · GW

Thanks for shouting out and our Discord! 

We just hacked together V1 during ETH Denver's hackathon last weekend but we're going to be iterating towards a very serious market over the next few months.

If anyone wants to stay up-to-date on impact certificates or even better, wants to help build it (or support us in any way) then feel free to join our small but growing Discord.

Comment by Matt Brooks on EA Fundraising Through Advantage Sports Betting: A Guide ($500/Hour in Select States) · 2022-02-02T01:53:44.925Z · EA · GW

Incredible. Thanks so much, I'll reply here to let you know how we made out!

Comment by Matt Brooks on EA Fundraising Through Advantage Sports Betting: A Guide ($500/Hour in Select States) · 2022-02-01T22:00:02.062Z · EA · GW

Great post! 

I'm unlikely to itemize my taxes, so is there a simpler way my GF and I can take advantage of the best bonuses? We can countertrade each other on the same book if that helps. Are there low-hanging-fruit bonuses in NJ?

Comment by Matt Brooks on Exposure to 3m Pointless viewers- what to promote? · 2021-12-10T03:19:53.699Z · EA · GW

If you're thinking about something about GiveWell a catchy line might be something like: 

"The best charities are 100 times more effective than others, GiveWell is a nonprofit that finds these charities and recommends them so your donation goes the furthest (or so your donation can save the most lives)."

Comment by Matt Brooks on Internalizing Externalities · 2021-12-09T03:19:25.786Z · EA · GW

Your post/idea reminds me of "Social Impact Bonds":

Seems sorta similar, except instead of private investors it's an open market where regular people can profit from their good policy investments.

Comment by Matt Brooks on Announcing, powered by EA Funds · 2021-12-01T04:13:52.182Z · EA · GW

This is great! 

I had a similar idea regarding the crypto community as a great potential source of EA funding but I'm thinking more for-profit than strictly donation. It might be possible to tie crypto profits to EA funding by creating an impact certificate DAO.

I have a working project proposal I've been getting feedback on, if anyone is interested in reading it and giving me their thoughts just message me and I'll send you the link!

I would love to connect with the team to see if we can bounce some ideas off or learn from each other.

Comment by Matt Brooks on Certificates of impact · 2021-11-21T20:33:42.238Z · EA · GW

Bob does lose cash off his balance sheet, but his net asset position stays the same, because he's gained an IC that he can resell.


What if no one buys it?