I read his comment differently, but I'll stop engaging now as I don't really have time for this many follow-ups, sorry!
What if the investor decided to invest knowing there was an X% chance of being defrauded, and thought it was a good deal because there's still an at least (100-X)% chance of it being a legitimate and profitable business? For what number X do you think it's acceptable for EAs to accept money?
Fraud base rates are 1-2%; some companies end up highly profitable for their investors despite having committed fraud. Should EA accept money from YC startups? Should EA accept money from YC startups if they e.g. lied to their investors?
I think large-scale defrauding unsuspecting customers (who don't share the upside from any risky gambles) is vastly worse than defrauding professional investors who are generally well-aware of the risks (and can profit from FTX's risky gambles).
(I'm genuinely confused about this question; the main thing I'm confident in is that it's not a very black-and-white kind of thing, and so I don't want to make my bet about that.)
I think that's false; I think the FTX bankruptcy was hard to anticipate or prevent (despite warning flags), and accepting FTX money was the right judgment call ex ante.
I expect a 3-person board with a deep understanding of and commitment to the mission to do a better job selecting new board members than a 9-person board with people less committed to the mission. I also expect the 9-person board members to be less engaged on average.
(I avoid the term "value-alignment" because different people interpret it very differently.)
That was an example; I'd want it to exclude any type of fraud except for the large-scale theft from retail customers that is the primary concern with FTX.
I think 9-member boards are often a bad idea because they tend to have lots of people who are shallowly engaged, rather than a smaller number of people who are deeply engaged, tend to have more diffusion of responsibility, and tend to have much less productive meetings than smaller groups of people. While this can be mitigated somewhat with subcommittees and specialization, I think the optimal number of board members for most EA orgs is 3–6.
> no lawyers/accountants/governance experts
I have a fair amount of accounting/legal/governance knowledge, and in my board commitments I've found it a lot less relevant than a deep understanding of the relevant organization's mission and strategy (along with other, more relevant generalist skills like management, HR, etc.). Edit: Though if you're tied up in the decade's biggest bankruptcy, legal knowledge actually is really useful; that seems more like a one-off weird situation.
I would be willing to take the other side of this bet, if the definition of "fraud" is restricted to "potentially stealing customer funds" and excludes things like lying to investors.
You seem to imply that it's fine if some board members are not value-aligned as long as the median board member is. I strongly disagree: this seems like a brittle setup, because the median board member could easily become non-value-aligned if some of the more aligned board members become busy and step down, or have to recuse due to a COI (which happens frequently), or similar.
TL;DR: You're incorrectly assuming I'm into Nick mainly because of value alignment, and while that's a relevant factor, the main factor is that he has an unusually deep understanding of EA/x-risk work that competent EA-adjacent professionals lack.
I might write a longer response. For now, I'll say the following:
- I think a lot of EA work is pretty high-context, and most people don't understand it very well. E.g., when I ran EA Funds work tests for potential grantmakers (which I think is somewhat similar to being a board member), I observed that highly skilled professionals consistently failed to identify many important considerations for deciding on a grant. But after engaging with EA content at an unusual level of depth for 1–2 years, they can improve a lot (there were some examples of people improving their grantmaking skills substantially). Most such people never attain this level of engagement, so they never reach the level of competence I think would be required.
- I agree with you that too much of a focus on high status core EAs seems problematic.
- I think value-alignment in a broader sense (not tracking status, but actual altruistic commitment) matters a great deal. E.g., given the choice between personal prestige and impact, would the person reliably choose the latter? I think some high-status core EAs who were on EA boards were not value-aligned in this sense, and this seems bad.
EDIT: Relevant quote—I think this is where Nick shines as a board member:
> For example, if a nonprofit's mission is "Help animals everywhere," does this mean "Help as many animals as possible" (which might indicate a move toward focusing on farm animals) or "Help animals in the same way the nonprofit traditionally has" or something else? How does it imply the nonprofit should make tradeoffs between helping e.g. dogs, cats, elephants, chickens, fish or even insects? How a board member answers questions like this seems central to how their presence on the board is going to affect the nonprofit.
I agree with you: When I wrote "knew specifics about potential fraud", I meant it roughly in the sense you described.
To my current knowledge, Nick did not have access to evidence that the funds were likely fraudulently obtained. (Though it's not clear that I would know if it were the case.)
Overall, I think Nick did the right thing ex ante when he chose to run the Future Fund and accept SBF's money (unless he knew specifics about potential fraud).
If he should be removed from the board, I think we either need an argument of the form "we have specific evidence to doubt that he's trustworthy" or "being a board member requires not just absence of evidence of untrustworthiness, but proactively distancing yourself from any untrustworthy actors, even if collaborating with them would be beneficial". I don't buy either of these.
> I would still like an argument that they shouldn't be removed from boards, when almost any other org would. I would like the argument made and seen to be made.
Here's my tentative take:
- It's really hard to find competent board members that meet the relevant criteria.
- Nick (together with Owen) did a pretty good job turning CEA from a highly dysfunctional organization into a functional one during CEA's leadership change in 2018/2019.
- Similarly, while Nick took SBF's money, he didn't give SBF a strong platform or otherwise promote him a lot, and instead tried to independently do a good (not perfect, but good enough!) job running a philanthropic organization. While SBF may have wanted to use the philanthropy to promote the FTX/SBF brand, Nick didn't do this.
- Continuity is useful. Nick has seen lots of crises and presumably learnt from them.
So, while Will should be removed, Nick has demonstrated competence and should stay on.
(Meta note: I feel frustrated about the lack of distinction between Nick and Will on this question. People are a bit like "Will did a poor job, therefore Nick and Will should be removed from the board." Please, discuss the two people separately.)
I think it's very reasonable to remove Will, and much less clear whether to remove Nick. I would like to see some nuanced distinction between the two of them. My personal take is that Nick did an okay job and should probably stay on the relevant boards. Honestly I feel pretty frustrated by the lack of distinction between Will and Nick in this discussion.
Personally, I think it's useful if this decision is made by people who competently investigate the case and gather all the information, not by people acting primarily based on public information like this post. Even though I know Owen well, I personally find it hard to say how likely Owen is to make mistakes again; it seems plausible to me that he can learn from his mistakes and continue to be highly involved in the community without causing any further issues, and it also seems possible that he would continue to make similar mistakes. It seems to me that the main way to find out would be by seeking out conversations and investigating.
I personally think the community health team (after implementing some improvements) would be suitable for deciding his future involvement. Even though they didn't deal with this particular case well, I think overall their track record seems strong, and I think they can learn from this case. They have a lot more relevant context than external investigators.
As a friend pointed out, relying on Owen's own judgment regarding whether or when to restart mentorship, event organizing, and funding recommendations seems a really bad idea given that the problematic cases happened in the first place due to errors in Owen's judgment. I think it should go without saying that these decisions should be made by a separate body.
(I don't think these two types of judgments are perfectly correlated, but they seem somewhat correlated. Also I don't mean to take a stance on whether/how Owen should be involved in the future; I think it's good to consider the full range of options.)
Yeah, it being more pervasive and entangled with EA culture than I thought is one of my takeaways, and I've been spending some time reflecting and thinking about ways I could help improve things.
Ok, sorry in case that was a bit of a strawman!
> it was a lot less bad than what I'd have expected based on the TIME piece account
From my personal perspective: While the additional context makes the interaction itself seem less bad, the fact that it involved Owen (rather than, say, a more tangentially involved or less influential community member) makes it a lot worse than what I would have expected. In addition, this seems to be the ~~first~~ second time (after this one*) I've heard of a case that the community health team didn't address forcefully enough, which wasn't clear to me from the TIME article.
* edited based on feedback that someone sent me via DM, thank you
(edit: I think you acknowledge this elsewhere in your comment)
I'm curious why some people disagree-voted on this comment.
Hi Ammar, would love to get your application to the Atlas Fellowship! It's not part of the EA movement but discusses EA-relevant ideas and rationality topics. Applications will likely open in a few weeks. https://www.atlasfellowship.org/
EA-ish/impact-focused programs for high schoolers: LEAF, Non-Trivial
Rationality-focused programs for high schoolers: ESPR, SPARC, WARP.
FWIW, I think Habryka should probably value his time at >$1,700 per hour. Put differently, if longtermist funders could spend $3.4 million per year to get another Habryka, that seems like a good use of longtermist resources to me. I'm not totally confident in this judgment, but here are some intuitions/examples:
1. Having another Habryka could have reduced community exposure to FTX and the fallout from the FTX collapse, which could easily be worth more than $3.4 million.
2. It's generally really hard to find people who can run organizations competently.
3. If longtermism spends $250m/y plus ~3x that amount in human labor, that's roughly $1b per year, and I think it's plausible that he improves the community's culture and the allocation of those resources by more than 0.34% via useful commenting on this Forum and similar activities.
4. Other people of (in my view) similar caliber often have excellent earning-to-give opportunities worth >$5m/y in expectation.
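(For reference, here's the arithmetic behind these numbers; the ~2,000 working hours per year is my assumption, not something stated above:)

$$\$1{,}700/\text{hour} \times 2{,}000\ \text{hours/year} \approx \$3.4\text{M/year}, \qquad \frac{\$3.4\text{M}}{\$250\text{M} + 3 \times \$250\text{M}} = \frac{\$3.4\text{M}}{\$1\text{B}} = 0.34\%$$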
(That said, I agree with your other points and I personally think the coffee table is excessive.)
My general approach is to make sure to sufficiently disincentivize norm violations, but be very lenient with giving young people second/third/n-th chances, and not write them off just because they didn't understand the norms or didn't immediately follow them.
(Incidentally, we've recently been discussing how to get disagreeable people to apply and feel welcome and comfortable in the Atlas community, partly for the reasons you mentioned.)
My guess is that some people may come away thinking something like "Will, Toby, Beckstead, Owen, and other people who remain senior in EA executed things like the CEA-GWWC merger." Is this correct, or was it mainly the Leverage people (Kerry, Tyler, Larissa, and Tara)? I don't actually know, but it may be worth clarifying.
My understanding of what was going on at the time was that the board and advisors failed to prevent Leverage from gaining a lot of influence over CEA until they finally intervened in December 2018, but that they would not have endorsed what was going on if they had fully understood the situation. Whether someone was more like a victim, a bystander, or actively deceptive matters a lot, I think.
Yeah, I found this a tricky one. I am currently not planning to respond to this post because I think it caused overall a bit too much collateral damage (leaking documents, accusations against a student in our program, and outing former staff), and I don't want to incentivize that. But I do like thoughtful critiques, and am in principle pretty interested in receiving and responding to them.
I retracted my comment. I still think it would be useful for the Atlas Fellowship to know its tier, and I'd be happy for others to learn about Atlas's tier even if it was bad.
But I think people would have all kinds of incorrect interpretations of the tiers, it would produce further low-quality discussion on the Forum (where discussion quality already seems pretty low, especially as far as Open Phil critiques go), and it could be a hassle for Open Phil. Basically, I agree with this comment, and I don't trust the broader EA community to correctly interpret the tier numbers.
Actually, this particular post was drafted by a person who has been banned from the Forum, so I think it's fine that it's not published.
It's the same as with probabilities. How can probabilities be calibrated, given that they are fairly subjective? The LR can be calibrated the same way given that it's just a function of two probabilities.
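(To spell out what I mean by "a function of two probabilities", this is just the standard definition of the likelihood ratio and the odds form of Bayes' theorem:)

$$\text{LR} = \frac{P(E \mid H)}{P(E \mid \neg H)}, \qquad \underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}} = \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}} \times \text{LR}$$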
If grantee concerns are a reason against doing this, you could allow grantees to opt into having their tiers shared publicly. Even an incomplete list could be useful.
I'd personally happily opt in with the Atlas Fellowship, even if the tier wasn't very good.
If a concern is that the community would read too much into the tiers, some disclaimers and encouragement for independent thinking might help counteract that.
At this point, I think it's unfortunate that this post has not been published, a >2 month delay seems too long to me. If there's anything I can do to help get this published, please let me know.
Interesting point, agreed that this would be very interesting to analyze!
Yeah. Here's the example in more detail:
- Prior odds: 1:1
- Theoretical arguments that minimum wages increase unemployment, LR = 1:3 → posterior odds 1:3
- Someone sends an empirical paper and the abstract says it improved the situation, LR = 1.2:1 → posterior odds 1.2:3
- IGM Chicago Survey results, LR = 5:1 → posterior odds 6:3 (or 2:1)
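(Writing out the multiplication explicitly:)

$$\frac{1}{1} \times \frac{1}{3} \times \frac{1.2}{1} \times \frac{5}{1} = \frac{6}{3} = \frac{2}{1}$$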
Yeah fair, although I expect people to have more difficulty converting log odds back into probabilities.
Yeah exactly, that's part of the idea here! E.g., on Metaculus, if someone posts a source and updates their belief, they could display the LR to indicate how much it updated them.
Should we be using Likelihood Ratios in everyday conversation the same way we use probabilities?
Disclaimer: Copy-pasting some Slack messages here, so this post is less coherent or well-written than others.
I've been thinking that perhaps we should indicate likelihood ratios in everyday conversation to communicate the strength of evidence, the same way we indicate probabilities in everyday conversation to communicate beliefs; that there should be a likelihood ratio calibration game; and that we should have cached likelihood ratios for common types of evidence (e.g., experimental research papers of a given level of quality).
However, maybe this is less useful because different pieces of evidence are often correlated? Or can we just talk about the strength of the uncorrelated portion of additional evidence?
See also: Strong Evidence is Common
Example
Here's an example with made-up numbers:
Question: Are minimum wages good or bad for low-skill workers?
- Theoretical arguments that minimum wages increase unemployment, LR = 1:3
- Someone sends an empirical paper and the abstract says it improved the situation, LR = 1.2:1
- IGM Chicago Survey results, LR = 5:1
So if you start out with a 50% probability, your prior odds are 1:1; after seeing all the evidence, your posterior odds are 6:3 (or 2:1), so your posterior probability is 67%.
If another person starts out with a 20% probability, their prior odds are 1:4 and their posterior odds are 1:2, i.e., a posterior probability of 33%.
These two people agree on the strength of evidence but disagree on the prior. So the idea is that you can talk about the strength of the evidence / size of the update instead of the posterior probability (which might mainly depend on your prior).
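If it helps, here's a minimal sketch (in Python, with helper names I made up) of the bookkeeping in the example above:

```python
def prob_to_odds(p):
    """Convert a probability to odds in favor (e.g. 0.5 -> 1.0, i.e. 1:1)."""
    return p / (1 - p)

def odds_to_prob(odds):
    """Convert odds in favor back to a probability (e.g. 2.0 -> ~0.67)."""
    return odds / (1 + odds)

def update(prior_prob, likelihood_ratios):
    """Multiply the prior odds by each likelihood ratio; return the posterior probability."""
    odds = prob_to_odds(prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds_to_prob(odds)

# The three pieces of evidence from the example, as odds in favor of "minimum wages are good":
evidence = [1 / 3, 1.2 / 1, 5 / 1]

print(update(0.5, evidence))  # ~0.67: prior odds 1:1, posterior odds 2:1
print(update(0.2, evidence))  # ~0.33: prior odds 1:4, posterior odds 1:2
```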
Calibration game
A baseline calibration game proposal:
You get presented with a proposition, and submit a probability. Then you receive a piece of evidence that relates to the proposition (e.g. a sentence from a Wikipedia page about the issue, or a screenshot of a paper/abstract). You submit a likelihood ratio, which implies a certain posterior probability. Then both of these probabilities get scored using a proper scoring rule.
My guess is that you can do something more sophisticated here, but I think the baseline proposal basically works.
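For concreteness, here's a minimal sketch of how a single round could be scored, using the log score as the proper scoring rule (the function names and the choice of scoring rule are mine, not a worked-out design):

```python
import math

def log_score(p, outcome):
    """Log score: 0 is perfect; more negative is worse. `outcome` is True/False."""
    return math.log(p if outcome else 1 - p)

def score_round(prior_prob, likelihood_ratio, outcome):
    """Score both the initial probability and the posterior implied by the submitted LR."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    posterior_prob = posterior_odds / (1 + posterior_odds)
    return log_score(prior_prob, outcome), log_score(posterior_prob, outcome)

# Example round: 30% initial credence, evidence submitted as LR = 4:1 in favor,
# and the proposition turns out to be true.
print(score_round(0.3, 4.0, True))  # prior scored at 0.3, posterior at ~0.63
```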
I feel excited by this fund and the selection of grantees!
Same for the organization I run.
Yeah, it's mostly a heuristic argument, and the best you can do might be to just carefully look at the object level instead of trying to infer based on what people are saying.
Good point, I wasn't tracking that the Wytham post doesn't actually have that much Karma. I do think my claim would be correct regarding my first example (spending norms vs. asset hedges).
My claim might also be correct if your metric of choice was the sum of all the comment Karma on the respective posts.
There aren't posts about them I think, but I'd also predict that they'd get less Karma if they existed.
As far as I know, large philanthropic foundations often use DAFs to attain public charity status, getting the same tax benefits. And if they're private foundations, they're still getting a benefit of ~15%, and possibly a lot more via receiving donations of appreciated assets.
I also don't think public charity status and tax benefits are especially relevant here. I think public scrutiny is not intrinsically important; I mainly care about taking actions that maximize social impact, and public scrutiny seems much worse for this than figuring out high-impact ways to preserve/increase altruistic assets.
Yeah, I think making sure discussion of these topics (both Anthropic and Wytham) is appropriately careful seems good to me. E.g., the discussion of Wytham seemed very low-quality to me, with few contributors providing sound analysis of how to think about the counterfactuals of real estate investments.
I don't actually know the details, but as far as I know, EVF is primarily funded by private foundations/billionaires, too.
Also, some of this hedging could've been done by community members without actual ownership of Meta/Asana/crypto. Again, the lack of discussion of this seems problematic to me.
I agree that the hedges might be practically infeasible or hard. But my point is that this deserves more discussion and consideration, not that it was obviously easy to fix.
EA Forum discourse tracks actual stakes very poorly
Examples:
- There have been many posts about EA spending lots of money, but to my knowledge no posts about the failure to hedge crypto exposure against the crypto crash of the last year, or the failure to hedge Meta/Asana stock, or EA’s failure to produce more billion-dollar start-ups. EA spending norms seem responsible for $1m–$30m of 2022 expenses, but failures to preserve/increase EA assets seem responsible for $1b–$30b of 2022 financial losses, a ~1000x difference.
- People are demanding transparency about the purchase of Wytham Abbey (£15m), but they’re not discussing whether it was a good idea to invest $580m in Anthropic (HT to someone else for this example). The financial difference is ~30x, the potential impact difference seems much greater still.
Basically I think EA Forum discourse, Karma voting, and the inflation-adjusted overview of top posts completely fail to correctly track the importance of the ideas presented there. Karma seems useful for deciding which comments to read, but otherwise its use seems fairly limited.
(Here's a related post.)
Consider radical changes without freaking out
As someone running an organization, I frequently entertain crazy alternatives, such as shutting down our summer fellowship to instead launch a school, moving the organization to a different continent, or shutting down the organization so the cofounders can go work in AI policy.
I think it's important for individuals and organizations to have the ability to entertain crazy alternatives because it makes it more likely that they escape local optima and find projects/ideas that are vastly more impactful.
Entertaining crazy alternatives can be mentally stressful: it can cause you or others in your organization to be concerned that their impact, social environment, job, or financial situation is insecure. This can be addressed by pointing out why these discussions are important, by keeping a clear mental distinction between brainstorming and decision-making, and by building a shared understanding that big changes will be made carefully.
Why considering radical changes seems important
The best projects are orders of magnitude more impactful than good ones. Moving from a local optimum to a global one often involves big changes, and the path isn't always very smooth. Killing your darlings can be painful. The most successful companies and projects typically have reinvented themselves multiple times until they settled on the activity that was most successful. Having a wide mental and organizational Overton window seems crucial for being able to make pivots that can increase your impact several-fold.
When I took on leadership at CLR, we still had several other projects, such as REG, which raised $15 million for EA charities at a cost of $500k. That might sound impressive, but in the greater scheme of things raising a few million wasn't very useful, given that the best money-making opportunities could make a lot more per staff member per year, and EA wasn't funding-constrained anymore. It took me way too long to realize this, and only my successor stopped putting resources into the project after I left. There's a world where I took on leadership at CLR, realized that killing REG might be a good idea, seriously considered the idea, got input from stakeholders, and then went through with it, within a few weeks of becoming Executive Director. All the relevant information to make this judgment was available at the time.
When I took on leadership at EA Funds, I did much better: I quickly identified the tension between "raising money from a broad range of donors" and "making speculative, hits-based grants", and suggested that perhaps these two aims should be decoupled. I still didn't go through with it nearly as quickly as I could have, this time not because of limitations of my own reasoning, but more because I felt constrained by the large number of stakeholders who had expectations about what we'd be doing.
Going forward, I intend to be much more relentless about entertaining radical changes, even when they seem politically infeasible, unrealistic, or personally stressful. I also intend to discuss those with my colleagues, and make them aware of the importance of such thinking.
How not to freak out
Considering these big changes can be extremely stressful, e.g.:
- The organization moving to a different continent could mean breaking up with your life partner or losing your job.
- A staff member was excited about a summer fellowship but not a school, so discussing setting up a school made them worry there might no longer be a role at the organization matching their interests.
Despite this, I personally don't find it stressful if I or others consider radical changes, partly because I use the following strategies:
- Mentally flag that radical changes can be really valuable. Remind myself of my previous failings (listed above) and the importance of not repeating them. There's a lot of upside to this type of reasoning! Part of the reason for writing this shortform post is so I can reference it in the future to contextualize why I'm considering big changes.
- Brainstorm first, decide later (or "babble first, prune later"): During the brainstorming phase, all crazy ideas are allowed and I (and my team) aim to explore novel ideas freely. We can always still decide against going through with big changes during the decision phase that will happen later. A different way to put this is that considering crazy ideas must not be strong evidence for them actually being implemented. (For this to work, it's important that your organization actually has a sound decision procedure that actually happens later, and doesn't mix the two stages. It's also important for you to flag clearly that you're in brainstorming mode, not in decision-making mode.)
- Implement big changes carefully, and create common knowledge of that intention. Big changes should not be the result of naïve EV maximization, but should carefully take into account the full set of options (avoiding false dichotomies), the value of coordination (maximizing joint impact of the entire team, not just the decision-maker), externalities on other people/projects/communities, existing commitments, etc. Change management is hard; big changes should involve getting buy-in from the people affected by the change.
Hmm, but EA isn't an organization, it's a movement. I don't really know what it even means to say that a movement has co-founders ...
There's a lot of existing analysis and literature on how to become a billionaire startup founder (or quant trader, etc.). But there seems to be little analysis of how to turn a $1b fortune into a $100b fortune. Put differently, it's pretty clear to me how one might make $10m or $100m per year, but very unclear how one could make $10b per year, even though e.g. Gautam Adani seems to have done just that.
Do you have an overall take on whether there are any strategies that seem to work predictably, or whether it's pure luck at that point? (Perhaps it's worth looking out for strategies that require billions of dollars of capital as a barrier of entry, otherwise markets are likely to be efficient.)
I really disagree with this and think it's an incorrect representation of the actual history of the EA community.
- Whether someone was intending to grow a large movement seems much less important than whether someone actually did. (I.e., whether they made seminal contributions to the ideas and culture of the community that actually helped create the community.)
- GWWC seems pretty unimportant in the grander scheme of things compared to other organizations, books, ideas, etc. E.g., I think Will's contributions to 80K, DGB, etc. seem more important than GWWC.
Right now I don't feel compelled to write a more elaborate response, but if this false founding myth keeps coming up I might write a longer post at some point.