Posts

[Link] How effective altruists ignored risk 2023-02-06T07:40:54.954Z
[Cause Exploration Prizes] Jhana meditation 2022-08-12T05:26:58.680Z
[Link] Centre for Applied Eschatology 2022-04-01T20:39:02.346Z
New Cause Area: Programmatic Mettā 2021-04-01T12:54:18.466Z
Ben Hoffman & Holden Karnofsky 2021-03-20T04:07:33.641Z
EA capital allocation is an inner ring 2021-03-19T04:06:26.596Z
Why do so few EAs and Rationalists have children? 2021-03-14T05:05:53.675Z
Feedback from where? 2021-03-11T05:05:19.843Z
Some preliminaries and a claim 2021-02-25T05:01:15.624Z
QRI and the Symmetry Theory of Valence 2020-12-19T18:34:10.021Z
[Link] "Where are all the successful rationalists?" 2020-10-17T19:59:58.175Z
[Link] How understanding valence could help make future AIs safer 2020-10-08T18:53:59.848Z
Shifts in subjective well-being scales? 2020-08-18T18:27:21.789Z
[Link] "Will He Go?" book review (Scott Aaronson) 2020-06-12T22:10:43.100Z
[Link] "Cutting through spiritual colonialism" 2020-05-20T03:15:03.058Z
[Link] "Average utilitarianism implies solipsistic egoism" (Tarsney 2020) 2020-04-29T20:54:12.353Z
[Link] "The Origin of Consciousness Reading Companion" (Putanumonit) 2020-04-07T18:56:11.676Z
[Link] "On hiding the source of knowledge" (Jessica Taylor) 2020-01-27T05:19:10.143Z
[Link] "Moral understanding and moral illusions" 2020-01-26T10:50:44.487Z
[Link] "Evaluating Arguments One Step at a Time" (Ought) 2020-01-11T19:12:31.018Z
[Link] Moloch Hasn’t Won (Zvi) 2019-12-28T23:21:00.487Z
[Link] EA Global 2020 announced (CEA) 2019-12-03T19:31:57.242Z
What is EA's story? 2019-11-30T21:45:45.433Z
[Link] "Status in academic ethics" (Charles Foster) 2019-11-27T23:20:04.510Z
[Link] "Art as the starting point" (Autotranslucence) 2019-11-27T17:10:25.705Z
[Link] A new charity evaluator (NYTimes) 2019-11-26T22:44:23.857Z
[Link] Against "Why We Sleep" (Guzey) 2019-11-15T21:15:08.098Z
[Link] "Progress Update October 2019" (Ought) 2019-10-29T21:34:42.504Z
[Link] "One year of Future Perfect" (Vox) 2019-10-15T18:12:55.663Z
[Link] "Machine Learning Projects for IDA" (Ought) 2019-10-12T17:35:18.638Z
[Link] "State of the Qualia" (QRI) 2019-10-11T21:14:23.412Z
[Link] "How feasible is long-range forecasting?" (Open Phil) 2019-10-11T21:01:53.471Z
Should CEA buy ea.org? 2019-10-04T23:10:52.237Z
[Link] Experience Doesn’t Predict a New Hire’s Success (HBR) 2019-10-04T19:30:49.479Z
Why is the amount of child porn growing? 2019-10-02T01:09:45.207Z
[Link] Moral Interlude from "The Wizard and the Prophet" 2019-09-27T18:42:16.728Z
[Link] The Case for Charter Cities Within the EA Framework (CCI) 2019-09-23T20:08:19.947Z
[Link] "Relaxed Beliefs Under Psychedelics and the Anarchic Brain" (SSC) 2019-09-11T14:45:35.993Z
[Link] Progress Studies (Jasmine Wang) 2019-09-10T19:55:55.891Z
Campaign finance reform as an EA priority? 2019-08-30T01:46:55.222Z
[Link] BERI handing off Jaan Tallinn's grantmaking 2019-08-27T17:13:30.112Z
[Links] Tangible actions to support Hong Kong protestors from afar 2019-08-18T23:47:03.223Z
[Link] Virtue signaling annotated bibliography (Geoffrey Miller) 2019-08-14T22:41:55.592Z
[Link] Bolsonaro is cutting down the rainforest (nytimes) 2019-08-01T00:45:11.495Z
[Link] The Schelling Choice is "Rabbit", not "Stag" (LessWrong post) 2019-07-31T21:27:22.097Z
[Link] "Two Case Studies in Communist Insecurity" (The Scholar's Stage) 2019-07-25T22:17:05.968Z
[Link] Thiel on GCRs 2019-07-22T20:47:13.076Z
Debrief: "cash prizes for the best arguments against psychedelics" 2019-07-14T17:04:20.153Z
[Link] "Revisiting the Insights model" (Median Group) 2019-07-14T14:58:39.661Z
[Link] "Why Responsible AI Development Needs Cooperation on Safety" (OpenAI) 2019-07-12T01:19:39.816Z

Comments

Comment by Milan_Griffes on William_MacAskill's Shortform · 2023-03-17T17:15:13.412Z · EA · GW

When is the independent investigation expected to complete? 

Comment by Milan_Griffes on Effective altruism in the garden of ends · 2023-03-04T18:32:10.802Z · EA · GW

I wrote a thread with some reactions to this. 

(Overall I agree with Tyler's outlook and many aspects of his story resonate with my own.) 

Comment by Milan_Griffes on Milan Griffes on EA blindspots · 2023-02-24T02:16:06.109Z · EA · GW

(b) intriguing IMO and I want to hear more -- #10, #11, #16, #19

10. nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang 

See discussion in this thread 


11. EA correctly identifies improving institutional decision-making as important but hasn't yet grappled with the radical political implications of doing that 

This one feels like it requires substantial unpacking; I'll probably expand on it further at some point. 

Essentially the existing power structure is composed of organizations (mostly large bureaucracies) and all of these organizations have (formal and informal) immunological responses that activate when someone tries to change them. (Here's some flavor to pump intuition on this.) 

To improve something is to change it. There are few Pareto improvements available on the current margin, and those that exist are often not perceived as Pareto by all who would be touched by the change. So attempts to improve institutional decision-making trigger organizational immune responses by default.  

These immune responses are often opaque and informal, especially in the first volleys. And they can arise emergently: top-down coordination isn't required to generate them, only incentive gradients. 

The New York Times' assault on Scott Alexander (a) is an example to build some intuition of what this can look like: the ascendant power of Slate Star Codex began to feel threatening to the Times and so the Times moved against SSC. 


16. taking dharma seriously a la @RomeoStevens76's current research direction 

I've since realized that this would be best accomplished by generalizing (and modernizing) to a broader category, which we've taken to referring to as valence studies.


19. worldview drift of elite EA orgs (e.g. @CSETGeorgetown, @open_phil) via mimesis being real and concerning 

I'm basically saying that mimesis is a thing. 

It's hard to ground things objectively, so social structures tend to become more like the other social structures around them. 

CSET is surrounded by and interacts with DC-style think tanks, so it is becoming more like a DC-style think tank (e.g. suiting up starts to seem like a good idea). 

Open Phil interfaces with a lot of mainstream philanthropy, and it's starting to give away money in more mainstream ways.  

Comment by Milan_Griffes on Consider not sleeping around within the community · 2023-02-23T21:21:10.729Z · EA · GW

Ah, the silent majority and the vocal minority

Comment by Milan_Griffes on EV UK board statement on Owen's resignation · 2023-02-22T20:42:07.052Z · EA · GW

But I have a feeling that the community takes revenge on him for all the tension the recent events left. This is cruel. I’m honestly worried if the guy is ok. Hope he is. 

The scapegoat mechanism comes to mind: 

The key to Girard's anthropological theory is what he calls the scapegoat mechanism. Just as desires tend to converge on the same object, violence tends to converge on the same victim. The violence of all against all gives way to the violence of all against one. When the crowd vents its violence on a common scapegoat, unity is restored. Sacrificial rites the world over are rooted in this mechanism.

Comment by Milan_Griffes on Bad Actors are not the Main Issue in EA Governance · 2023-02-22T19:09:58.213Z · EA · GW

I wrote in this direction a few years ago, and I'm very glad to see you clearly stating these points here. 

From What's the best structure for optimal allocation of EA capital? – 

So EA is currently in a regime wherein the large majority of capital flows from a single source, and capital allocation is set by a small number of decision-makers.

Rough estimate: if ~60% of Open Phil grantmaking decisioning is attributable to Holden, then 47.2% of all EA capital allocation, or $157.4M, was decided by one individual in 2017. 2018 & 2019 will probably have similar proportions.

It seems like EA entered into this regime largely due to historically contingent reasons (Cari & Dustin developing a close relationship with Holden, then outsourcing a lot of their philanthropic decision-making to him & the Open Phil staff).

It's not clear that this structure will lead to optimal capital allocation...
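
Spelling out the arithmetic implied by those figures (a rough back-of-the-envelope sketch using only the numbers quoted above; nothing here is independently verified):

$$\text{total EA allocation (2017)} \approx \frac{\$157.4\text{M}}{0.472} \approx \$333\text{M}, \qquad \text{Open Phil's implied share} \approx \frac{0.472}{0.60} \approx 79\%$$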

Comment by Milan_Griffes on AGI in sight: our look at the game board · 2023-02-22T17:58:04.687Z · EA · GW

... there is a lot we can actually do. We are currently working on it quite directly at Conjecture

I was hoping this post would explain how Conjecture sees its work as contributing to the overall AI alignment project, and was surprised to see that that topic isn't addressed at all. Could you speak to it?

Comment by Milan_Griffes on Should EVF consider appointing new board members? · 2023-02-13T00:16:53.175Z · EA · GW

Isn't the point of being placed on leave in a case like this to (temporarily) remove the trustee from their duties and responsibilities while the situation is investigated, as their ability to successfully execute on their duties and responsibilities has been called into question? 

(I'm not trying to antagonize here – I'm genuinely trying to better understand the decision-making of EA leadership, as I think it's very important for us to be as transparent as possible right now, given how the opacity around past decision-making seems to have contributed to bad outcomes. 

You've certainly thought about this more than I have and I want to learn more about your models here. 

But I don't really follow how the conflict with their duties disqualifies being placed on leave as a viable option, as at first brush that sorta seems like the point!) 

Comment by Milan_Griffes on Should EVF consider appointing new board members? · 2023-02-12T03:38:11.615Z · EA · GW

Thanks, Claire. Can you comment on why Nick Beckstead and Will MacAskill were recused rather than placed on leaves of absence? 

Comment by Milan_Griffes on Should EVF consider appointing new board members? · 2023-02-12T03:37:00.721Z · EA · GW

Thanks, Nicole! It's helpful to hear updates like this from EA leadership in the midst of all these scandals. 

Can you comment on why Nick Beckstead was recused rather than placed on a leave of absence?

Comment by Milan_Griffes on Why People Use Burner Accounts: A Commentary on Blacklists, the EA "Inner Circle", and Funding · 2023-02-08T22:15:40.109Z · EA · GW

Thank you for a good description of what this feels like. But I have to ask… do you still “want to join that inner circle” after all this? Because this reads like your defense of using a burner account is that it preserves your chance to enter/remain in an inner ring which you believe to be deeply unethical.

Anonymity is not useful solely for preserving the option to join the critiqued group. It can also help buffer against reprisal from the critiqued group.  

See Ben Hoffman on this (a): 

"Ayn Rand is the only writer I've seen get both these points right jointly:

  1. There's no benefit to joining the inner ring except discovering that their insinuated benefit does not exist.
  2. Ignoring inner rings is refusing to protect oneself against a dangerous adversary."
Comment by Milan_Griffes on [Link] How effective altruists ignored risk · 2023-02-08T20:33:16.990Z · EA · GW

I don't think snark cuts against quality, and we come from a long lineage of it

Comment by Milan_Griffes on [Link] How effective altruists ignored risk · 2023-02-08T20:04:41.431Z · EA · GW

It seems like we're talking past each other here, in part because as you note we're referring to different EA subpopulations: 

  1. Elite EAs who mentored SBF & incubated FTX
  2. Random/typical EAs who Cremer would hang out with at parties 
  3. EA grant recipients 

I don't really know who knew what when; most of my critical feeling is directed at folks in category (1). Out of everyone we've mentioned here (EA or not), they had the most exposure to and knowledge about (or at least opportunity to learn about) SBF & FTX's operations. 

I think we should expect elite EAs to have done better than Sequoia et al. at noticing red flags (e.g. the reports of SBF being shitty at Alameda in 2017; e.g. no ring-fence around money earmarked for the  Future Fund) and acting on what they noticed. 

Comment by Milan_Griffes on Should EVF consider appointing new board members? · 2023-02-07T03:26:20.592Z · EA · GW

Wow "Asana Philanthropy Fund" makes the comparison so sharp. 

Comment by Milan_Griffes on [Link] How effective altruists ignored risk · 2023-02-07T03:18:40.044Z · EA · GW

Thanks. I think Cowen's point is a mix of your (a) & (b). 

I think this mixture is concerning and should prompt reflection about some foundational issues.

Comment by Milan_Griffes on [Link] How effective altruists ignored risk · 2023-02-07T03:14:21.965Z · EA · GW

…question in this space is if EAs have allocated their attention wisely. The answer seems to be "mostly yes." In case of FTX, heavyweights like Temasek, Sequoia Capital, and SoftBank with billions on the line did their due diligence but still missed what was happening. Expecting EAs to be better evaluators of FTX's health than established hedge funds is somewhat odd. 

Two things: 

  1. Sequoia et al. isn't a good benchmark – 

    (i) those funds were doing diligence in a very hot investing environment where there was a substantial tradeoff between depth of diligence and likelihood of closing the deal. Because EAs largely engaged FTX on the philanthropic side, they didn't face this pressure. 

    (ii) SBF was inspired and mentored by prominent EAs, and FTX was incubated by EA over the course of many years. So EAs had built relationships with FTX staff much deeper than what funds would have been able to establish over the course of a months-long diligence process. 
     
  2. The entire EA project is premised on the idea that it can do better at figuring things out than legacy institutions. 
Comment by Milan_Griffes on [Link] How effective altruists ignored risk · 2023-02-07T03:06:28.948Z · EA · GW

I read Cremer as gesturing in these passages to the point Tyler Cowen made here (a): 

Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.

I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.  And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated).  When it comes to existential risk, I generally prefer to invest in talent and good institutions, rather than trying to fine-tune predictions about existential risk itself.

If EA is going to do some lesson-taking, I would not want this point to be neglected. 

Comment by Milan_Griffes on [Link] How effective altruists ignored risk · 2023-02-06T20:23:45.928Z · EA · GW

In particular, the shot at Cold Takes being "incomprehensible" didn't sit right with me - Holden's blog is a really clear presentation of those concerned by the risk misaligned AI plays to the long-run future, regardless of whether you agree with it or not.

Agree that her description of Holden's thing is uncharitable, though she might be describing the fact that he self-describes his vision of the future as 'radically unfamiliar... a future galaxy-wide civilization... seem[ing] too "wild" to take seriously... we live in a wild time, and should be ready for anything... This thesis has a wacky, sci-fi feel.'

(Cremer points to this as an example of an 'often-incomprehensible fantasy about the future')

Comment by Milan_Griffes on "Status" can be corrosive; here's how I handle it · 2023-01-24T20:27:21.307Z · EA · GW

https://twitter.com/QualyThe/status/1617806572281028609 

Comment by Milan_Griffes on On being compromised · 2023-01-05T22:47:42.900Z · EA · GW

So, out of your list of 5 organizations, 4 of them were really very much quite bad for the world, by my lights, and if you were to find yourself to be on track to having a similar balance of good and evil done in your life, I really would encourage you to stop and do something less impactful on the world. 

This view is myopic (doesn't consider the nth-order effects of the projects) and ahistorical (compares them to present-day moral standards rather than the counterfactuals of the time). 

Comment by Milan_Griffes on It's okay to leave · 2022-12-25T01:04:30.058Z · EA · GW
[Meme image: "JUST WALK OUT. You can leave! ... IF IT SUCKS, HIT DA BRICKS!! Real winners quit."]
Comment by Milan_Griffes on Keep EA high-trust · 2022-12-23T20:26:08.368Z · EA · GW

Your previous post demonstrated much stronger reasons to not trust you than those you accused of being untrustworthy. 

... strikes me as "not nice" fwiw, though overall it's been cool to see how you've both engaged with this conversation.

Comment by Milan_Griffes on Bad Omens in current EA Governance · 2022-12-23T02:50:55.314Z · EA · GW

Probably Good is a reasonable counterexample to my model here (though it's not really a direct competitor – they're aiming at a different audience and consulted with 80k on how to structure the project).  

It'll be interesting to see how its relationships with 80k and Open Phil develop as we enter a funding contraction. 

Comment by Milan_Griffes on Announcing EA Survey 2022 · 2022-12-22T17:34:40.032Z · EA · GW

I'm curious to read some of the reasoning of those who disagreed with this, as I'm currently high-conviction on these recommendations but feel open to updating substantially (strong beliefs weakly held). 

Comment by Milan_Griffes on Bad Omens in current EA Governance · 2022-12-22T17:25:21.237Z · EA · GW

If you or me or anyone else wanted to start our own organisation under a new brand with similar goals to CEA or GWWC I don't think anyone would try to stop us!

My model is that no one would try to formally stop this effort (i.e. via a lawsuit), though it would receive substantial pushback in the form of: 

  • Private communication discouraging the effort 
  • Organizers of the effort excluded and/or removed from coordinating fora, such as EA slack groups 
  • Public writing suggesting that the effort be rolled into the existing EA movement 
  • Attempts (by professional EAs) to minimize the funding directed to the effort from traditional EA funders (i.e. the effort would be viewed as a competitor for funding) 
Comment by Milan_Griffes on Bad Omens in current EA Governance · 2022-12-22T17:20:27.522Z · EA · GW

I don't follow what you're pointing to with "beholden to the will of every single participant in this community."

My point is that CEA was established as a centralizing  organization to coordinate the actions and branding of the then-nascent EA community. 

Whereas Luke's phrasing suggests that CEA drove the creation of the EA community, i.e. CEA was created and then the community sprung up around it. 

Comment by Milan_Griffes on Bad Omens in current EA Governance · 2022-12-21T02:19:35.169Z · EA · GW

CEA was setup before there was an EA movement (the term "effective altruism" was invented while setting up CEA to support GWWC/80,000 Hours).

The coinage of a name for a movement is different from the establishment of that movement. 

Comment by Milan_Griffes on Bad Omens in current EA Governance · 2022-12-21T02:08:14.553Z · EA · GW

Another conflict-of-interest vector is that EVF board members could influence funding to EVF sub-orgs via other positions they hold, e.g. Open Phil (where Claire Zabel works as a senior program officer) funds CEA (a sub-org of EVF, where Claire is a board member).  

Comment by Milan_Griffes on Bad Omens in current EA Governance · 2022-12-21T02:02:14.476Z · EA · GW

Ah ha: 

https://ev.org/charity (a)

Effective Ventures Foundation is governed by a board of five trustees (Will MacAskill, Nick Beckstead, Tasha McCauley, Owen Cotton-Barratt, and Claire Zabel) (the “Board”). The Board is responsible for overall management and oversight of the charity, and where appropriate it delegates some of its functions to sub-committees and directors within the charity.

Comment by Milan_Griffes on Bad Omens in current EA Governance · 2022-12-21T01:59:43.716Z · EA · GW

Who sits on the board of the Effective Ventures Foundation? 

Comment by Milan_Griffes on Announcing EA Survey 2022 · 2022-12-20T20:39:31.024Z · EA · GW

"What actions would you like to see from EA organizations or EA leadership in the next few months?" 

  • Pausing new grant investigations 
  • Pausing public outreach and other attempts to grow the movement 
  • Something approximating a formal truth & reconciliation process 
  • More inner work (therapy, meditation, movement practices, self-directed reflection, time in nature, pursuit of Aristotelian leisure especially by working with one's hands) 
Comment by Milan_Griffes on Why do EAs have children? · 2022-11-18T05:12:53.747Z · EA · GW

I pulled it down for a while, and just reposted it

Comment by Milan_Griffes on Media attention on EA (again) · 2022-11-17T19:39:44.805Z · EA · GW

As Shakeel wrote here, the leaders of EA organizations can’t say a lot right now, and we know that’s really frustrating. 

** the leaders of EA organizations are deciding not to say a lot right now... 

Comment by Milan_Griffes on Who's at fault for FTX's wrongdoing · 2022-11-16T20:09:31.855Z · EA · GW

Here are some jumping-off points for reflecting on how one might update their moral philosophy given what we know so far. 

Comment by Milan_Griffes on Some important questions for the EA Leadership · 2022-11-16T19:51:48.405Z · EA · GW

From this July 2022 FactCheck article (a): 

Bankman-Fried has provided Protect Our Future PAC with the majority of its donations. The group has raised $28 million for the 2022 election cycle as of June 30, with $23 million from Bankman-Fried. Nishad Singh, who serves as head of engineering at FTX, has donated another $1 million

As of July 21, the PAC has spent $21.3 million on independent expenditures, exclusively in Democratic primaries for House seats. 

This level of spending makes Protect Our Future PAC the third highest among outside spenders, topped only by Club for Growth Action and United Democracy Project

...

The PAC has spent $10.5 million, about half of the group’s independent expenditures through July 21, in support of Democrat Carrick Flynn in his unsuccessful primary bid in the highly funded Oregon 6th Congressional District race. Like Protect Our Future PAC, Flynn has stated that his “‘first priority is pandemic prevention.’”

The PAC spent nearly $940,000 against Flynn’s opponent, Oregon state Rep. Andrea Salinas. These expenditures represent the only instance in which Protect our Future PAC has spent money against a Democratic candidate.  

 

From a May 2022 NPR article (a): 

The race has made for the third most expensive House Democratic primary in the country, according to the nonpartisan, nonprofit group OpenSecrets. By Monday, the Democratic race drew more than $13 million in outside money, OpenSecrets reported.

The vast majority of that — more than $10 million — was donated to Flynn's campaign by a group backed by a cryptocurrency billionaire.

Comment by Milan_Griffes on Some important questions for the EA Leadership · 2022-11-16T19:25:15.124Z · EA · GW

I mean, my primary guess here is Carrick. I don't think there was anyone besides Carrick who "decided" to make the Carrick campaign happen. 

People other than Carrick decided to fund the campaign, which wouldn't have happened without funding. 

Comment by Milan_Griffes on Who's at fault for FTX's wrongdoing · 2022-11-16T19:17:48.713Z · EA · GW

Thanks for this comment. 

I'm more interested in reflecting on the foundational issues in EA-style thinking that contributed to the FTX debacle than in ascribing wrongdoing or immorality (though I agree that the whole episode should be thoroughly investigated). 

Examples of foundational issues: 

  • FTX was an explicitly maximalist project, and maximization is perilous 
  • Following a utilitarian logic, FTX/Alameda pursued a high-leverage strategy (Caroline on leverage);  the decision to pursue this strategy didn't account for the massive externalities that resulted from its failure 
  • The Future Fund failed to identify an existential risk to its own operation, which casts doubt on their/our ability to perform risk assessment 
  • EA's inability and/or unwillingness to vet FTX's operations (lack of financial controls, lack of board oversight, no ring-fence around funds committed to the Future Fund) and SBF's history of questionable leadership point to overeager power-seeking  
  • MacAskill's attempt to broker an SBF <> Elon deal re: purchasing Twitter also points to overeager power-seeking 
  • Consequentialism straightforwardly implies that the ends justify the means at least sometimes; protesting that the ends don't justify the means is cognitive dissonance 
  • EA leadership's stance of minimal communication about their roles in the debacle points to a high weight placed on optics / face-saving (Holden's post and Oli's commenting are refreshing counterexamples though I think it's important to hear more about their involvement at some point too) 
Comment by Milan_Griffes on If Professional Investors Missed This... · 2022-11-16T18:47:22.865Z · EA · GW

The issue is, we had a lot more on the line than their investors did. 

Big +1 

FTX is like Enron exploding in the center of EA. 

Comment by Milan_Griffes on Who's at fault for FTX's wrongdoing · 2022-11-16T18:21:11.748Z · EA · GW

Here are some excerpts from Sequoia Capital's profile on SBF (published September 2022, now pulled). 

On career choice: 

Not long before interning at Jane Street, SBF had a meeting with Will MacAskill, a young Oxford-educated philosopher who was then just completing his PhD. Over lunch at the Au Bon Pain outside Harvard Square, MacAskill laid out the principles of effective altruism (EA). The math, MacAskill argued, means that if one’s goal is to optimize one’s life for doing good, often most good can be done by choosing to make the most money possible—in order to give it all away. “Earn to give,” urged MacAskill. 

... 

It was his fellow [fraternity members] who introduced SBF to EA and then to MacAskill, who was, at that point, still virtually unknown. MacAskill was visiting MIT in search of volunteers willing to sign on to his earn-to-give program. 

At a café table in Cambridge, Massachusetts, MacAskill laid out his idea as if it were a business plan: a strategic investment with a return measured in human lives. The opportunity was big, MacAskill argued, because, in the developing world, life was still unconscionably cheap. Just do the math: At $2,000 per life, a million dollars could save 500 people, a billion could save half a million, and, by extension, a trillion could theoretically save half a billion humans from a miserable death. 

MacAskill couldn’t have hoped for a better recruit. Not only was SBF raised in the Bay Area as a utilitarian, but he’d already been inspired by Peter Singer to take moral action. During his freshman year, SBF went vegan and organized a campaign against factory farming. As a junior, he was wondering what to do with his life. And MacAskill—Singer’s philosophical heir—had the answer: The best way for him to maximize good in the world would be to maximize his wealth. 

SBF listened, nodding, as MacAskill made his pitch. The earn-to-give logic was airtight. It was, SBF realized, applied utilitarianism. Knowing what he had to do, SBF simply said, “Yep. That makes sense.” But, right there, between a bright yellow sunshade and the crumb-strewn red-brick floor, SBF’s purpose in life was set: He was going to get filthy rich, for charity’s sake. All the rest was merely execution risk. 

His course established, MacAskill gave SBF one last navigational nudge to set him on his way, suggesting that SBF get an internship at Jane Street that summer. 

In 2017, everything was going great for SBF. He was killing it at Jane Street... He was giving away 50 percent of his income to his preferred charities, with the biggest donations going to the Centre for Effective Altruism and 80,000 Hours. Both charities focus on building the earn-to-give idea into a movement. (And both had been founded by Will MacAskill a few years before.) He had good friends, mostly fellow EAs. Some were even colleagues. 

... [much further down in the profile] 

So when, that next summer, MacAskill sat with SBF in Harvard Square and carefully explained, in the way only an Oxford-educated philosopher can, that the practice of effective altruism boils down to “applied utilitarianism,” Snipe’s arrow hit SBF hard. He’d found his path. He would become a maximization engine. As he wrote in his blog, “If you’ve decided that some of your time—or money—can be better spent on others than on yourself, well, then, why not more of it? Why not all of it?” 

 

On deciding what to do after leaving Jane Street: 

SBF made a list of possible options, with some notes about each:

  1. Journalism—low pay, but a massively outsized impact potential.
  2. Running for office—or maybe just being an advisor?
  3. Working for the movement—EA needs people!
  4. Starting a startup—but what, exactly?
  5. Bumming around the Bay Area for a month or so—just to see what happens. 

 

On setting up the initial Japanese Bitcoin arbitrage at Alameda: 

Fortunately, SBF had a secret weapon: the EA community. There’s a loose worldwide network of like-minded people who do each other favors and sleep on each other’s couches simply because they all belong to the same tribe. Perhaps the most important of them was a Japanese grad student, who volunteered to do the legwork in Japan. As a Japanese citizen, he was able to open an account with the one (obscure, rural) Japanese bank that was willing, for a fee, to process the transactions that SBF—newly incorporated as Alameda Research—wanted to make. 

The spread between Bitcoin in Japan and Bitcoin in the U.S. was “only” 10 percent—but it was a trade Alameda found it could make every day. With SBF’s initial $50,000 compounding at 10 percent each day, the next step was to increase the amount of capital. 

At the time, the total daily volume of crypto trading was on the order of a billion dollars. Figuring he wanted to capture 5 percent of that, SBF went looking for a $50 million loan. Again, he reached out to the EA community. Jaan Tallinn, the cofounder of Skype, put up a good chunk of that initial $50 million. 

 

On the early days at Alameda: 

The first 15 people SBF hired, all from the EA pool, were packed together in a shabby, 600-square-foot walk-up, working around the clock. The kitchen was given over to stand-up desks, the closet was reserved for sleeping, and the entire space overrun with half-eaten take-out containers. It was a royal mess. But it was also the good old days, when Alameda was just kids on a high-stakes, big-money, earn-to-give commando operation. Fifty percent of Alameda’s profits were going to EA-approved charities.

“This thing couldn’t have taken off without EA,” reminisces Singh, running his hand through a shock of thick black hair. He removes his glasses to think. They’re broken: A chopstick has been Scotch taped to one of the frame’s sides, serving as a makeshift temple. “All the employees, all the funding—everything was EA to start with.” 

 

On how he was thinking about future earnings: 

“Am I,”  [reporter asks], “talking to the world’s first trillionaire?” 

...

“Maybe let’s take a step back,” he says, only to launch into an explanation of his own, personal utility curve: “Which is to say, if you plot dollars-donated on the X axis, and Y is how-much-good-I-do-in-the-world, then what does that curve look like? It’s definitely not linear—it does tail off, but I think it tails off pretty slowly.”

His point seems to be that there is, out there somewhere, a diminishing return to charity. There’s a place where even effective altruism ceases to be effective. “But I think that, even at a trillion, there’s still really significant marginal utility to dollars donated.” 

...

“So, is five trillion all you could ever use to help the world?” 

...

“Okay, at that scale, I think the answer might be yes. Because, if your spending is on the scale of the U.S. government, it might have too weird and distortionary an impact on things.” 

... so, money spent now will be more effective at making the world a better place than money spent later. “I think there are some things that are pretty urgent,” SBF says. “There’s just a long series of crucial considerations, and all of them matter a lot—and you can’t fuck any of them up, or you miss most of the total value that you could ever get.” 

To be clear, SBF is not talking about maximizing the total value of FTX—he’s talking about maximizing the total value of the universe. And his units are not dollars: In a kind of GDP for the universe, his units are the units of a utilitarian. He’s maximizing utils, units of happiness. And not just for every living soul, but also every soul—human and animal—that will ever live in the future. Maximizing the total happiness of the future—that’s SBF’s ultimate goal. FTX is just a means to that end. 

 

On what differentiates FTX in crypto: 

The FTX competitive advantage? Ethical behavior. SBF is a Peter Singer–inspired utilitarian in a sea of Robert Nozick–inspired libertarians. He’s an ethical maximalist in an industry that’s overwhelmingly populated with ethical minimalists. I’m a Nozick man myself, but I know who I’d rather trust my money with: SBF, hands-down. And if he does end up saving the world as a side effect of being my banker, all the better.

 

On the EA community in the Bahamas that congealed around FTX: 

A cocktail party is in full swing, with about a dozen people I don’t recognize standing around. It turns out to be a mixer for the local EA community that’s been drawn to Nassau in the hopes that the FTX Foundation will fund its various altruistic ideas. The point of the party is to provide a friendly forum for the EAs who actually run EA-aligned nonprofits to meet the earn-to-give EAs at FTX who will fund them, and vice versa. The irony is that, while FTX hosts the weekly mixer—providing the venue and the beverages—it’s rare for an actual FTX employee to ever show up and mix. Presumably, they’re working too hard.

...

“Imagine nerds invented a religion or something,” says Woods, stabbing at my question with vigor, “where people get to argue all day.”

“It’s… an ideology,” counters Morrison. The argument has begun. 

Woods amiably disagrees: “EA is not an ideology, it’s a question: ‘How do I do the most good?’ And the cool thing about EA, compared to other cause areas, is that you can change your views constantly—and still be part of the movement.”

...

Woods serves up an answer to my question. (Fittingly, she’s wearing tennis whites.) “EA attracts people who really care, but who are also really smart,” she says. “If you are altruistic but not very smart, you just bounce off. And if you’re smart but not very altruistic,” she continues, “you can get nerd sniped!”

...

“This ties into the way FTX is doing its foundation,” Morrison says, helpfully knocking the ball back to my true interest. “The foundation wants to get a lot of money out there in order to try a lot of things quickly. And how can you do that effectively?” It’s a rhetorical question, a move worthy of a preppy debate champ who went to a certain finishing school in Cambridge—which is exactly what Morrison is. “Part of the answer is to give money to someone in the EA community.”

“Because EA is different from other communities,” Woods continues, picking up right where Morrison left off. “They’re like, ‘This is the ethical thing, and this is the truth.’ And we’re like, ‘What is the ethical thing? What is the truth?’” 


Following your analogy, if a fan of Novik had: 

  • been convinced by Novik to dedicate their career to the Novikian ethic 
  • been pointed by Novik to a promising first job in that career path 
  • decided to leave that promising first job on the basis of Novikian reasoning, framing the question of what to do next in Novikian terms 
  • worked with a global network of Novikians to implement an international crypto arbitrage 
  • received seed funding from a prominent Novikian to scale up this arbitrage 
  • exclusively hired Novikians to continue scaling the arbitrage once it started working 
  • thought about forward-facing professional decisions strictly in terms of the Novikian ethic 
  • used their commitment to Novikianism to garner a professional edge in their industry 
  • used a large portion of the proceeds of their business to fund Novikian projects, overseen by a foundation staffed exclusively by elite Novikians and advised by Novik herself
  • fostered a community of Novikians around their lavish corporate headquarters 

 

... then I think it would be fair to attribute some of the impact of their actions to Novikianism. 

Comment by Milan_Griffes on A personal statement on FTX · 2022-11-12T19:34:08.594Z · EA · GW

Archived version (that gets around the paywall)

Comment by Milan_Griffes on A personal statement on FTX · 2022-11-12T19:24:23.007Z · EA · GW

If you say that your business model is to hold depositor funds 1:1 and earn money from fees, but in fact you sometimes earn money via making trades with depositor funds, then you would be misrepresenting your business model. 

Comment by Milan_Griffes on The FTX Future Fund team has resigned · 2022-11-12T18:46:55.402Z · EA · GW

I asked some further questions in this direction here

Comment by Milan_Griffes on The FTX Future Fund team has resigned · 2022-11-12T18:40:12.460Z · EA · GW

Can you give some context on why Lightcone accepted an FTX Future Fund grant (a) given your view of his trustworthiness? 

Comment by Milan_Griffes on A personal statement on FTX · 2022-11-12T18:28:43.894Z · EA · GW

I think it's good practice to try to understand a project's business model and try to independently verify the implications of that model before joining the project. 

Comment by Milan_Griffes on A personal statement on FTX · 2022-11-12T18:02:10.811Z · EA · GW

It's fair enough to feel betrayed in this situation, and to speak that out. 

But given your position in the EA community, I think it's much more important to put effort towards giving context on your role in this saga. 

Some jumping-off points: 

  • Did you consider yourself to be in a mentor / mentee relationship with SBF prior to the founding of FTX? What was the depth and cadence of that relationship? 
    • e.g. from this Sequoia profile (archived as they recently pulled it from their site): 

      "The math, MacAskill argued, means that if one’s goal is to optimize one’s life for doing good, often most good can be done by choosing to make the most money possible—in order to give it all away. “Earn to give,” urged MacAskill.

      ... And MacAskill—Singer’s philosophical heir—had the answer: The best way for him to maximize good in the world would be to maximize his wealth. SBF listened, nodding, as MacAskill made his pitch. The earn-to-give logic was airtight. It was, SBF realized, applied utilitarianism. Knowing what he had to do, SBF simply said, “Yep. That makes sense.”"
       
  • What diligence did you / your team do on FTX before agreeing to join the Future Fund as an advisor? 
    • [Edited to add: Were you aware of the 2018 dispute at Alameda  re: SBF's leadership? If so, how did this context factor into your decision to join the Future Fund?] 
    • Did you have visibility into where money earmarked for Future Fund grants was being held?
    • Did you understand the mechanism by which FTX claimed to be generating revenue? Were the revenues they reported sanity-checked against a back-of-the-envelope estimate of how much their claimed mechanism would be able to generate?
       
  • What were your responsibilities at the Future Fund? How often were you in contact with SBF and other members of FTX leadership in your  role as an advisor? 
Comment by Milan_Griffes on Money Stuff: FTX Had a Death Spiral · 2022-11-11T00:33:02.682Z · EA · GW

Sequoia pulled that link; here's an archived version

Comment by Milan_Griffes on FTX Crisis. What we know and some forecasts on what will happen next · 2022-11-10T23:47:36.860Z · EA · GW

Follow-up from an independent source: https://twitter.com/AutismCapital/status/1590852094894149632

Comment by Milan_Griffes on FTX Crisis. What we know and some forecasts on what will happen next · 2022-11-10T19:23:57.798Z · EA · GW

https://twitter.com/AutismCapital/status/1590779299946442753

tl;dr – insider source says many FTX employees etc have lost their life savings; SBF had a history of pitching them to double-down on holding FTT and other assets on the exchange

Comment by Milan_Griffes on [Cause Exploration Prizes] Jhana meditation · 2022-10-29T00:06:31.822Z · EA · GW

Scott Alexander recently wrote about jhana again: https://astralcodexten.substack.com/p/nick-cammarata-on-jhana

Comment by Milan_Griffes on Just Look At The Thing! – How The Science of Consciousness Informs Ethics · 2022-10-20T18:26:52.223Z · EA · GW

More on jhana meditation as a new cause area.