Comments
Me too, from inside the building. Best of luck Max!
The difference between "building effective altruism" and "community" could use some clarification.
Yes, but you've usually been arguing in favour of (or at least widening the Overton window around) elite EA views vs the views of the EA masses, have been very close to EA leadership, and are super disagreeable - you are unrepresentative on many relevant axes.
If there have really been minors raped in EA, or serious infringements by high-up EAs, then EA would be repeating, in secular form, some of the worst ills of the Catholic Church. Why would one want any part in such a community?
This is so many levels of concerning. I don't think people are really understanding this / processing it.
Yes, Vassar was more than "somewhat central" in the rationality community. When I first visited SF in 2013 or so, he was one of the main figures in the rationalist tradition, especially as transmitted face-to-face. About as many people would recommend that you go hear Michael talk as would recommend any other individual; only one or two people were more notable. I remember hearing that in the earlier days it was even more so, and that he was involved in travelling around the US to recruit the major early figures in the rationalist community.
Although I can't say for sure, I would also bet that there are dozens of unofficial rationalist events (and a few unofficial EA events) that he attended in the last five years, given that he was literally hanging out in the MIRI/CFAR reception area for hours per week, right until the time he was officially banned.
Whereas he was orders of magnitude less present in EA world (although his presence at all is still bad).
The cases I know of come disproportionately from more aspie people, and I can think of at least one case where the person didn't think that they had done anything wrong. This would make sense, because aspie people are on average less competent at judging the lines of socially acceptable behaviour.
I can think of problems like this with non-EA academics too. There was a famous medic who taught at my undergrad degree and iirc gave weird physical compliments to female students during his lectures, and I can think of at least one non-EA prof who made multiple female students uncomfortable.
Having said that, my personal hunch would be that things are worse in EA. Some of the reasons are unpopular to talk about, but they include the community being quite male, young (including minors), poly, aspie, and less professional, as well as what we are discovering can be quite a fine line between consequentialism and amorality. In some of these respects, it resembles the chess community and the atheism community, which have had significant problems.
A huge fraction of the EA community's reputational issues, DEI shortcomings, and internal strife stem from its proximity to/overlap with the rationalist community.
Generalizing a lot, it seems that "normie EAs" (IMO correctly) see glaring problems with Bostrom's statement and want this incident to serve as a teachable moment so the community can improve in some of the respects above, and "rationalist-EAs" want to debate race and IQ (or think that the issue is so minor/"wokeness-run-amok-y" that it should be ignored or censored). This predictably leads to conflict.
This is inaccurate as stated, but there is an important truth nearby. The apparent negatives you attribute to "rationalist" EAs are also true of non-rationalist old-timers in EA, who trend slightly non-woke while also keeping the rationalists at arm's length. SBF himself was not particularly rationalist, for example. What seems to attract scandals is people being consequentialist, ambitious, and intense, which are possible features of rationalists and non-rationalists alike.
Relatedly, which EA projects have shut down? I suspect it's a much smaller fraction than the ~90% of startup companies that do, and that it should be at least a bit larger than it currently is.
Totally, this is what I had in mind - something like the average over posts based on how often they are served on the frontpage.
Thanks for the response. Of the four responses to nitpicks, I agree with the first two. I broadly agree about the third, forum quality. I just think that peak post quality is at best a lagging indicator - if you have higher volume and even your best posts are not as good anymore, that would bode very poorly. Ideally, the forum team would consciously trade off between growth and average post quality, in some cases favouring the latter, e.g. performing interventions that would improve quality even if they slowed growth. And on the fourth, understatement, I don't think we disagree that much.
As for summarising the year, it's not quite that I want you to say that CEA's year was bad. In one sense, CEA's year was fine, because these events don't necessarily reflect negatively on CEA's current operations. But in another important sense, it was a terrible year for CEA, because these events have a large bearing on whether CEA's overarching goals are being reached. And this could bear on what operating activities should be performed in future. I think an adequate summary would capture both angles. In an ideal world (where you were unconstrained by legal consequences etc.), I think an annual review post would note that when such seismic events happen, the standard metrics become relatively less important, while strategy becomes more important, and the focus of discussion then rests on the actually important stuff. I can accept that in the real world, that discussion will happen (much) later, but it's important that it happens.
Several nitpicks:
- "2022 was a year of continued growth for CEA and our programs." - A bit of a misleading way to summarise CEA's year?
- "maintaining high retention and morale" - to me there did seem to be a dip in morale at the office recently
- "[EA Forum] grew by around 2.9x this year." - yes, although a bit of this was due to the FTX catastrophe
- "Overall, we think that the quality of posts and discussion is roughly flat over the year, but it’s hard to judge." - this year, a handful of people told me they felt the quality had decreased, which didn't happen in previous years, and I noticed this too.
- "Recently the community took a significant hit from the collapse of FTX and the suspected illegal and/or immoral behaviour of FTX executives." - this is a very understated way to note that a former board member of CEA committed one of the largest financial frauds of all time.
I realise there are legal and other constraints, so maybe I am being harsh, but overall, several components of this post seemed not very "real" or straightforward relative to what I would usually expect from this sort of EA org update.
This update would be more useful if it said more about the main catastrophe that EA (and CEA) is currently facing. For whatever reasons, maybe perfectly reasonable, it seems you chose the strategy of saying little new on that topic, and presenting by and large the updates on CEA's ordinary activities that you would present if the catastrophe hadn't happened. But even given that choice, it would be good to set expectations appropriately with some sort of disclaimer at the top of the doc.
Exciting!
People have sometimes raised the idea of founding an AI-focused consultancy, to do things like evaluate or certify the safety and fairness of systems. I know you've said you plan to apply, but not perform, "deep technical" work, but can you say any more about whether this space is one you've considered getting involved in?
I mostly agree with you, Jonas, but I think you're using the word "founder" in a confusing way. I think a founder is someone who is directly involved in establishing an organisation. Contributions that are indirect, like Bostrom's and Eliezer's, or that come after the organisation is started (like DGB), may be very important, but they don't make their authors founders. I would probably totally agree with you if you just said you were answering a different question: "Who caused EA to be what it is today?"
I think the FTX stuff is a bigger deal than Peter Singer's views on disability, and for me to be convinced about the England and Enlightenment examples, you'd have to draw a clearer line between the philosophy and the wrongful actions (cf. in the FTX case, we have a self-identified utilitarian doing various wrongs for stated utilitarian reasons).
I agree that every large ideology has had massive scandals, in some cases ranging up to purges, famines, wars, etc. I think the problem for us, though, is that there aren't very many people who take utilitarianism or beneficentrism seriously as an action-guiding principle - there are only ~10k effective altruists, basically. What happens if you scale that up to 100k and beyond? My claim would be that we need to tweak the product before we scale it, in order to make sure these catastrophes don't scale with the size of the movement.
You give no evidence for your claim that hardcore utilitarians commit 1/10 of the "greatest frauds". I struggle to even engage with this claim because it seems so speculative.
I mean that the dollar value of lost funds would seem to make it one of the top ten biggest frauds of all time (assuming that fraud is what happened). Perusing a list on Wikipedia, I can see only four cases where larger sums were defrauded: Madoff, Enron, WorldCom, Stanford.
This post argues against a strawman - it's not credible that utilitarianism endorses committing fraud in order to give. It's also not quite a question of whether Sam "misconstrued" utilitarianism, in that I doubt he did a rational calculus on whether fraud was +EV, and he denies doing so.
The potential problem, rather, is that naive consequentialism/act utilitarianism removes some of the ethical guardrails that would ordinarily make fraud very unlikely. As I've said: In order to avoid taking harmful actions, an act utilitarian has to remember to calculate, and then to calculate correctly. (Whereas rules are often easier to remember and to properly apply.) The way Sam tells it, he became "less grounded" or "cocky", leading to these mistakes. Would this have happened if he followed another theory? We can't know, but we should be clear-eyed about the fact that hardcore utilitarians, despite representing maybe 1/1M of the world's population, are responsible for maybe 1/10 of the greatest frauds, i.e. they're over-represented by a factor of 100k, in a direction that would be pretty expected, based on the (italicised) argument above (which must surely have been made previously by moral philosophers). For effective altruists, we can lop off maybe one order of magnitude, but it doesn't look great either.
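(A minimal sketch of the back-of-envelope arithmetic behind that over-representation factor, using the rough numbers above - these are illustrative figures, not precise estimates:)

```python
# Back-of-envelope over-representation estimate, using the rough figures above.
# All numbers are illustrative assumptions, not precise estimates.
population_share = 1 / 1_000_000   # hardcore utilitarians as ~1 in a million people
fraud_share = 1 / 10               # responsible for ~1 of the ~10 greatest frauds

over_representation = fraud_share / population_share
print(f"Over-represented by a factor of ~{over_representation:,.0f}")   # ~100,000

# Lopping off roughly an order of magnitude for effective altruists more broadly:
print(f"EA over-representation: ~{over_representation / 10:,.0f}")      # ~10,000
```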
Sam and his leadership group, primarily, created FTX, but I agree that EA & utilitarianism also deserve a lot of the blame.
One bit of clarifying info is that according to Sam, FTX wasn't just grabbing customer money after Alameda became insolvent, but had lent ~1/3 or more of the customer funds it held to Alameda. And this happened whenever users deposited funds through Alameda, something we know was already happening years ago, from the early stages of FTX.
The same gist comes across in Coffeezilla's interview: https://t.co/rMljwAqhDq
If you write it as "upvote if you think the proposal is worth polling people on, and agreevote if you think it's a good proposal", then that would match the standard usage of the voting axes.
Upvote for agreement with the general tone of the suggestion or if you think there is a good suggestion nearby.
Agreevote if you think they are well-framed.
Upvote for agreement, and agreevote for good framing? That's roughly the opposite of normal, which may confuse interpretation of the results.
Great, thanks!
I was thinking just for comments.
Most people have strong upvote strength 3-7 though. Anyway, if this is a big problem, then just cap self-upvote strength around 5?
The question is whether FTX's leadership knowingly misled for financial gain, right? We know that they said they weren't borrowing (T&C) or investing (public statements) users' digital assets, and that Alameda played by the same rules as other FTX users. They then took $8B of users' money (most of which users had not asked to be lent out) and knowingly lent it to Alameda, accepting FTT (approximately the company's own stock) as collateral, to make big crypto bets. Seems like just based on that, they might have defrauded users in 4+ different ways. I think "we were net-lenders, and seemed to have enough collateral" might (assuming it is true) be a (partially) mitigating factor for the fourth point, but not the others?
To me, that sounds like a feature, not a bug, given how the influx of users has degraded average post quality recently.
I think usually when a discussion is heated, I prefer the equilibrium where the two primary discussion partners have votes that cancel each other out, instead of an equilibrium where all the comments just end up in the negatives. This includes the case where the person you are responding to is strong-downvoting your comment; then I think it can make sense to strong-upvote your own comment, in order not to give the false impression that there is a consensus against it.
This problem won't arise if everyone strong-upvotes themselves by default.
The third proposal seems fine to me, but the fourth is complex, and still rewards users who strong-upvote their own comments as much as the rules allow.
(4) was definitely the story with Ben Goertzel and his "Cosmism". I expect some e/acc libertarian types will also go for it. But it is and will stay pretty fringe imo.
I haven't seen any evidence that FTX promised to never invest customer deposits. Does anyone have a link? My understanding is that FTX offered customers the opportunity to make leveraged trades, i.e., to bet more than the money they had in their accounts. This suggests to me that FTX was not just an exchange but a lender, which is a very different sort of financial beast (with a different risk profile). I also understand that there was a significant interest rate on the customer accounts -- 6% -- which adds weight to that conclusion. You can't get a return on investment without risk.
The issue is that as a user of FTX, you were supposed to be able to choose whether your money was being lent out or not - e.g. there was a "lend/stop lending" button in the interface. It seems totally reasonable to me that FTX loses your money if you lend it. But my current impression is that the amount lent to, and lost by, Alameda was much more than the amount that users agreed to have lent out. Agree that segregation of funds, if implemented properly, would solve the problem here.
Why do you say "Alameda (FTX trading)"? Aren't these just separate entities?
You're right - fixed.
It's reported here: https://www.axios.com/2022/11/12/ftx-terms-service-trading-customer-funds
I've heard (unverified) that customer deposits were $16B and voluntary customer lending <$4B. It would make sense to me that a significant majority of customer funds were not voluntarily lent, based on the fact that returns from lending crypto were minimal, and lending was opt-in, and not pushed hard on the website.
Exactly. The terms and conditions said that title to the digital assets would belong to the user, and would not transfer to, or be loaned to, FTX Trading - which would seem to make it impossible to loan these funds from FTX Trading on to Alameda. Whereas Sam said in today's NYT/CNBC interview that FTX allowed Alameda to take out an $8B line of credit, using, I think, money that was not given to FTX for lending. It immediately looks like he defrauded his customers.
In today's interview with NYT/CNBC Sam tried out a few defenses:
- there was another line in the T&C that allowed this (sounds dubious absent further details)
- FTX didn't have visibility into the size of Alameda's loans on its own dashboard; only Alameda knew about the loan (implausible: he was housemates with Alameda's CEO, who talked about these borrowed funds at a leaked company meeting during the collapse - and when asked about this, Sam simply said that he wouldn't be able to clarify others' comments), and
- Alameda was a small fraction of trading activity, and he paid attention to this rather than the size of the line of credit (also super implausible - how can one not be aware of a multibillion dollar line of credit?).
So I don't see how any of these defenses work. There's also the question of, if he did defraud customers, how long it was going on for. When asked when the commingling of funds began, he just talked about it getting bigger from mid-2022. That would mean at least four months, but the fact that he didn't give a straight answer at least suggests to me that it might actually have begun significantly earlier, possibly years before.
You can also look at the predictions here, here, here, here, and here, which collectively suggest that Sam committed fraud, and is likely to be criminally charged and spend years in prison. If he's not imprisoned, I would personally guess it's >50% that he avoided facing the US justice system altogether, by somehow avoiding extradition.
Updated pageview figures:
- "effective altruism": peaked at ~20x baseline. Of all views, 10.5% were in Nov 9-27
- "longtermism": peaked ~5x baseline. Of all views, 18.5% in Nov 9-27.
- "existential risk": ~2x. 0.8%.
There are apparently five films/series/documentaries coming up on SBF - these four, plus Amazon.
It's what global priorities researchers tell me is happening.
Putting things in perspective: what is and isn't the FTX crisis, for EA?
In thinking about the effect of the FTX crisis on EA, it's easy to fixate on one aspect that is really severely damaged, and then to doomscroll about that, or conversely to focus on an aspect that is more lightly affected, and therefore to think all will be fine across the board. Instead, we should realise that both of these things can be true for different facets of EA. So in this comment, I'll now list some important things that are, in my opinion, badly damaged, and some that aren't, or that might not be.
What in EA is badly damaged:
- The brand “effective altruism”, and maybe to an unrecoverable extent (but note that most new projects have not been naming themselves after EA anyway.)
- The publishability of research on effective altruism (philosophers are now more sceptical about it).
- The “innocence” of EA (EAs appear to have defrauded ~4x what they ever donated). EA, in whatever capacity it continues to exist, will be harshly criticised for this, as it should be, and will have to be much more thick-skinned in future.
- The amount of goodwill among promoters of EA (they have lost funds on FTX, regranters have been embarrassed by the disappearance of promised funds, and others have to contend with clawbacks), as well as the level of trust within EA generally.
- Abundant funding for EA projects that are merely plausibly-good.
What in EA is only damaged mildly, or not at all:
- The rough number of people who want to do good, effectively
- The social network that has been built up around doing good effectively (i.e. "the real EA community is the friends we made along the way")
- The network of major organisations that are working on EA-related problems.
- The knowledge that we have accumulated, through research and otherwise, about how to do good effectively.
- “Existential risk”, as a brand
- The “AI safety” research community in general
- The availability of good amounts of funding for clearly-good EA projects.
What in EA might be badly damaged:
- The viability of “utilitarianism” as a public philosophy, absent changes (although Sam seems to have misapplied utilitarianism, this doesn’t redeem utilitarianism as a public philosophy, because we would also expect it to be applied imperfectly in future, and it is bad that its misapplication can be so catastrophic).
- The current approach to building a community to do good, effectively (it is not clear whether a movement is even the right format for EA, going forward)
- The EA “pitch”. (Can we still promote EA in good conscience? To some of us, the phrase “effective altruism” is now revolting. Does the current pitch still ring true, that joining this community will enable one to act as a stronger force for good? I would guess that many will prefer to pitch more specific things that are of interest to them, e.g. antimalarials, AI safety, whatever.)
Given all of this, what does that say about how big of a deal the FTX crisis is for EA? Well, I think it's the biggest crisis that EA has ever had (modulo the possible issue of AI capabilities advances). What's more, I also can't think of a bigger scandal in the 223-year history of utilitarianism. On the other hand, the FTX crisis is not even the most important change in EA's funding situation so far. For me, the most important was when Moskovitz entered the fold, and the number of EA billionaires went from zero to one. When I look over the list above, I think that much more of the value of the EA community resides in its institutions and social network than in its brand. The main way that a substantial chunk of value could be lost is if enough trust or motivation were lost that it became hard to run projects, or recruit new talent. But I think that even though some goodwill and trust has been lost, it can be rebuilt, and people's motivation is intact. And I think that whatever happens to the exact strategy of outreach currently used by the EA community, we will be able to find ways to attract top talent to work on important problems. So my gut feeling would be that maybe 10% of what we've created is undone by this crisis. Or that we're set back by a couple of years, compared to where we would be if FTX had not been started. Which is bad, but it's not everything.
Update: Dustin says that the Bloomberg estimate ($11.3B) is about right, if you add on an extra $3B of foundation assets, so community wealth would be down more like 55%, not 70%.
I agree in principle, but I think EA shares some of the blame here - FTX's leadership group consisted of four EAs. It was founded for ETG reasons, with EA founders and EA investment, by Sam, an act utilitarian who had been part of EA-aligned groups for >10 years; and its foundation included a lot of EA leadership and mostly funded EAs.
SBF's views on utilitarianism
After hearing about the FTX fraud, like everyone else, I wondered why he did it. I haven't met Sam in over five years, but one thing I can do is take a look at his old Felicifia comments. Back in 2012, Sam identified as an act utilitarian, and said that he would follow rules (such as abstaining from theft) only if and when there was a real risk of getting caught. You can see this in the following pair of quotes.
Quote #1. Regarding the Parfit's Hiker thought experiment, he said:
I'm not sure I understand what the paradox is here. Fundamentally if you are going to donate the money to THL and he's going to buy lots of cigarettes with it it's clearly in an act utilitarian's interest to keep the money as long as this doesn't have consequences down the road, so you won't actually give it to him if he drives you. He might predict this and thus not give you the ride, but then your mistake was letting Paul know that you're an act utilitarian, not in being one. Perhaps this was because you've done this before, but then not giving him money the previous time was possibly not the correct decision according to act utilitarianism, because although you can do better things with the money than he can, you might run in to problems later if you keep in. Similarly, I could go around stealing money from people because I can spend the money in a more utilitarian way than they can, but that wouldn't be the utilitarian thing to do because I was leaving out of my calculation the fact that I may end up in jail if I do so.
Quote #2. Regarding act vs rule utilitarianism, he said:
I completely agree that in practice following rules can be a good idea. Even though stealing might sometimes be justified in the abstract, in practice it basically never is because it breaks a rule that society cares a lot about and so comes with lots of consequences like jail. That being said, I think that you should, in the end, be an act utilitarian, even if you often think like a rule utilitarian; here what you're doing is basically saying that society puts up disincentives for braking rules and those should be included in the act utilitarian calculation, but sometimes they're big enough that a rule utilitarian calculation approximates it pretty well in a much simpler fashion.
Act utilitarianism is notoriously a form of morality that comes without guardrails. In order to avoid taking harmful actions, an act utilitarian has to remember to calculate, and then to calculate correctly. (Whereas rules are often easier to remember and to properly apply.) The thing is that even if, for a minute, we assume act utilitarianism, it seems clear that somewhere along the line to defrauding $8B, the stakes had become large enough that he needed to do a utility calculation, and it's hard to envisage how the magnitude of destruction wrought by the (not so unlikely) FTX crisis would not render his fraudulent actions negative EV. Maybe he became deluded about his chances of success, and simply miscalculated, although this seems unlikely.
Alternatively, maybe over the course of the last decade, he became a nihilist. Some may speculate that he was corrupted by the worlds of crypto and finance. Maybe he continued to identify as utilitarian, but practiced it only occasionally - for instance, maybe when the uncertainties were large, he papered over them by choosing a myopically selfish decision. But really, it all just seems very unclear. Even in Kelsey's interview, I can't tell whether he was disavowing all of morality, or only the rule-following business-ethics variety. And one can't know when he is telling the truth anyway. So I don't feel confident that this is going to be any clearer over time, either.
It's worth considering Eric Neyman's questions: (1) are the proposed changes realistic, (2) would the changes actually have avoided the current crisis, and (3) would their benefits exceed their costs more generally?
On (1), I think David's proposals are clearly realistic. Basically, we would be less of an "undifferentiated social club", and become more like a group of academic fields and a professional network, with our conferences and groups specialising, in many cases, into particular careers.
On (2), I think part of our mistake was that we used an overly one-dimensional notion of trust. We would ask "is this person value-aligned?" as a shorthand for evaluating trustworthiness. The problem is that any self-identified utilitarian who hangs around EA for a while will then seem trustworthy, whereas objectively, an act utilitarian might be anything but. Relatedly, we thought that if our leadership trusted someone, we must trust them too, even if they were running an objectively shady business like an offshore crypto firm. This kind of deference is a classic problem for social movements as well.
Another angle on what happened is that FTX behaved like a splinter group. Being a movement means you can convince people of some things for not-fully-rational reasons - based on them liking your leadership and social scene. But this can also be used against you. Previously, the rationalist community has been burned by the Vassarites and the Zizians. There has been Leverage Research. And now, we could say that FTX had its own charismatic leadership, dogmas (about drugs, and risk-hunger), and social scene. If we were less like a movement, it might have been harder for such groups to arise.
So I do think being less communal and more professional could make things like FTX less likely.
On (3), I think this change would come with significant benefits. Fewer splinter groups. Fewer issues with sexual harassment (since it would be less of a dating scene). Fewer tensions relating to whether written materials are "representative" of people's views. Importantly, high-performing outsiders would be less likely to bounce off due to EA seeming "cultic", as Sam Harris did, per his latest podcast. Also see Matt Y's comment.
I think the main downside to the change would be if it involved giving up our impact somehow. Maybe movements attract more participants than non-movements? But do they attract the right talent? Maybe members of a movement are prepared to make larger sacrifices for one another? But this doesn't seem a major bottleneck presently.
So I think the proposal does OK on the Eric-test. There is way more to be said on all this, but FWIW, my current best guess is that David's ideas about professionalising and disaggregating by professional interest should be a big part of EA's future.
The FTX crisis through the lens of Wikipedia pageviews.
(Relevant: comparing the amounts donated and defrauded by EAs)
1. In the last two weeks, SBF has had about 2M views to his Wikipedia page. This absolutely dwarfs the number of pageviews of any major EA previously.

2. Viewing the same graph on a logarithmic scale, we can see that even before the recent crisis, SBF was the best-known EA. Second was Moskovitz, and roughly tied for third are Singer and MacAskill.

3. Since the scandal, many people will have heard about effective altruism, in a negative light. It has been accumulating pageviews at about 10x the normal rate. If pageviews are a good guide, then 2% of the people who have ever heard about effective altruism would have heard about it in the last two weeks, through the FTX implosion.

4. Interest in "longtermism" has been only weakly affected by the FTX implosion, and "existential risk" not at all.

Given this, and the fact that two books and a film are on the way, I think that "effective altruism" is more likely than not to lose all its brand value. Whereas "existential risk" is far enough removed that it is untainted by these events. "Longtermism" is somewhere in-between.
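(For anyone wanting to check or update these numbers, here is a minimal sketch of pulling them from the public Wikimedia pageviews API. It is untested; the article titles, date window, and User-Agent string are my own assumptions, and averaging over the window will give different ratios than the eyeballed peak figures above.)

```python
# Sketch: compare Wikipedia pageviews before and during the FTX-crisis window.
# Untested; article titles, dates, and the User-Agent string are illustrative.
import requests

API = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
       "en.wikipedia.org/all-access/user/{title}/daily/{start}/{end}")

def daily_views(title, start, end):
    url = API.format(title=title.replace(" ", "_"), start=start, end=end)
    resp = requests.get(url, headers={"User-Agent": "pageview-comparison-sketch"})
    resp.raise_for_status()
    # Each item has a timestamp like "2022110900" and a daily view count.
    return {item["timestamp"][:8]: item["views"] for item in resp.json()["items"]}

for title in ["Effective altruism", "Longtermism", "Existential risk"]:
    views = daily_views(title, "20220801", "20221127")
    crisis = [v for day, v in views.items() if day >= "20221109"]   # Nov 9-27
    baseline = [v for day, v in views.items() if day < "20221109"]  # Aug 1 - Nov 8
    ratio = (sum(crisis) / len(crisis)) / (sum(baseline) / len(baseline))
    print(f"{title}: crisis-window daily views ~{ratio:.1f}x the prior baseline")
```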
We should make it harder to manipulate your own comments' karma. My favoured approach would be to deactivate all voting on one's own comments. Also fine would be if by default, you strongly upvote and strongly agree with all of your own comments.
There was a good amount of agreement about this previously.
In the first example, you complain that EA neglected typical experts and "EA would have benefited from relying on more outside experts" but in the second example, you say that EA "prides itself on NOT just doing what everyone else does but using reason and evidence to be more effective", so should have realised the possible failure of FTX. These complaints seem exactly opposite to one another, so any actual errors made must be more subtle.
I think you're right - I could have avoided some confusion if I said it could lead to "multi-billion-dollar-level bad consequences". Edited to clarify.
Why require surety, when we can reason statistically? There have been maybe ten comparably-sized frauds ever, so (given, say, a 50% chance that Sam counts as one) on expectation, hardline act utilitarians like Sam have been responsible for 5% of the worst frauds, while they represent maybe 1/50M of the world's population (based on what I know of his views 5-10 years ago). So we get a risk ratio of about a million to one, more than enough to worry about.
Anyway, perhaps it's not worth arguing, since it might become clearer over time what his philosophical commitments were.
I'm not sure they would call it "recruiting", but there already are large parts of existing nonprofits that talk to current & future ultra high net worth individuals, such as Longview Philanthropy, Founders Pledge, Generation Pledge, and Open Philanthropy. But there are only a very limited number of potentially sympathetic ultra high net worth individuals, and you don't want to put them off effective giving, so it's important to do it right. As such, I definitely would not suggest starting a new crack team to try to do this work. Instead, it's better to talk to the existing groups that cater to UHNWIs, first.
I totally agree. But even if we conservatively say that it's a 50% chance that he was using act utilitarianism as his decision procedure, that's enough to consider it compromised, because it could lead to multi-billion-dollar-level bad consequences (edited).
There are also subtler issues: if you intend to be act utilitarian but aren't, and do harm, that's still an argument against intending to use that decision procedure. And if someone says they're act utilitarian but isn't, and does harm, that's an argument against trusting people who say they're act utilitarian.
I agree that most utilitarians already thought act utilitarianism as a decision procedure was bad. Still, it's important that more folks can see this, with higher confidence, so that this can be prevented from happening again.
I think I agree that the St Petersburg paradox issue is orthogonal to choice of decision procedure (unless placing the bet requires engaging in a norm-violating activity like fraud).