Posts

Moving Toward More Concrete Proposals for Reform 2023-01-27T00:36:34.186Z
Jason's Shortform 2022-12-20T00:08:43.656Z

Comments

Comment by Jason on Books: Lend, Don't Give · 2023-03-25T00:42:53.484Z · EA · GW

Given that Jeff posted this shortly after raising the possibility that he should write a book (of the sort that could easily make it onto many lending/giving tables), I admire the willingness to post against potential self-interest here.

Comment by Jason on Assessment of Happier Lives Institute’s Cost-Effectiveness Analysis of StrongMinds · 2023-03-25T00:37:37.703Z · EA · GW

I'm not Joel (nor do I work for HLI, GiveWell, SM, or any similar organization). I do have a child, though. And I do have concerns with overemphasis on whether one is a parent, especially when one's views are based (in at least significant part) on review of the relevant academic literature. Otherwise, does one need both to be a parent and to have experienced a severe depressive episode (particularly in a low-resource context where there is likely no safety net) in order to judge the tradeoffs between supporting AMF and supporting SM?

Personally -- I am skeptical that the positive effect of therapy exceeds the negative effect of losing one's young child on a parent's own well-being. I just don't think the thought experiment you proposed is a good way to cross-check the plausibility of such a view. The consideration of the welfare of one's child (independent of one's own welfare) in making decisions is just too deeply rooted for me to think we can effectively excise it in a thought experiment.

In any event -- given that SM can deliver many courses of therapy with the resources AMF needs to save one child, the two figures don't need to be close if one believes the only benefit from AMF is the prevention of parental grief. SM's effect size would only need to be greater than 1/X of the WELLBYs lost to parental grief from one child death, where X is the number of courses SM can deliver with the resources AMF needs to prevent one child death. That is the bullet that epicurean donors have to bite to choose SM over AMF.
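To make that threshold concrete, here is a minimal sketch in Python with purely hypothetical numbers -- the costs and WELLBY figure below are placeholders chosen for illustration, not HLI's or GiveWell's estimates:

```python
# Hypothetical illustration of the 1/X threshold argument above.
# All numbers are placeholders, not actual HLI or GiveWell estimates.

cost_to_avert_one_death_amf = 5000    # assumed cost for AMF to avert one child death ($)
cost_per_therapy_course_sm = 250      # assumed cost of one StrongMinds therapy course ($)
wellbys_lost_to_parental_grief = 10   # assumed WELLBYs lost per child death (parents only)

# X = number of therapy courses SM can deliver for the cost of averting one death
x = cost_to_avert_one_death_amf / cost_per_therapy_course_sm          # 20 courses

# Under a purely epicurean view (only parental grief counts against AMF),
# SM beats AMF whenever each course delivers more WELLBYs than this threshold:
threshold_per_course = wellbys_lost_to_parental_grief / x             # 0.5 WELLBYs

print(f"X = {x:.0f} courses; SM must deliver > {threshold_per_course:.2f} WELLBYs per course")
```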

Comment by Jason on Assessment of Happier Lives Institute’s Cost-Effectiveness Analysis of StrongMinds · 2023-03-24T22:10:02.621Z · EA · GW

I don't think that's the right question for three reasons.

First, the hypothetical mother will almost certainly consider the well-being of her child (under a deprivationist framework) in making that decision -- no one is suggesting that saving a life is less valuable than therapy under such an approach. Whatever the merits of an epicurean view that doesn't weigh lost years of life, we wouldn't have lasted long as a species if parents applied that logic to their own young children.

Second, the hypothetical mother would have to live with the guilt of knowing she could have saved her child but chose something for herself.

Finally, GiveWell-type recommendations often would fail the same sort of test. Many beneficiaries would choose receiving $8X (where X = bednet cost) over receiving a bednet, even where GiveWell thinks they would be better off with the latter.

Comment by Jason on Assessment of Happier Lives Institute’s Cost-Effectiveness Analysis of StrongMinds · 2023-03-23T00:50:36.400Z · EA · GW

Fair points. I'm not planning to move my giving from GiveWell All Grants to either SM or GD, and don't mean to suggest anyone else should do so either. Nor do I want to suggest we should promote all organizations over an arbitrary bar without giving potential donors any idea of how we would rank organizations within the class that clears that bar, despite meaningful differences among them.

I mainly wrote the comment because I think the temperature in other threads about SM has occasionally gotten a few degrees warmer than is optimally conducive to what we're trying to do here. So it was an attempt at a small preventive ice cube.

I think you're right that we probably mean different things by "one of." 5-10X differences are big and meaningful, but I don't think that insight is inconsistent with the idea that a point estimate somewhere around "above GiveDirectly" is the point at which an organization should be on our radar as potentially worth recommending given the right circumstances.

One potential definition for the top class would be whether a person could reasonably conclude on the evidence that it was the most effective based on moral weights or assumptions that seem plausible. Here, it's totally plausible to me that a donor's own moral weights might value reducing suffering from depression relatively more than GiveWell's analysis implies, and saving lives relatively less. GiveWell's model here makes some untestable philosophical assumptions that seem relatively favorable to AMF: "deprivationist framework and assuming a 'neutral point' of 0.5 life satisfaction points." As HLI's analysis suggests in Section 3.4 of this study, the effectiveness of AMF under a WELLBY/subjective well-being model is significantly dependent on these assumptions.

For a donor with significantly different assumptions and/or moral weights, adjusting for those could put SM over AMF even accepting the rest of GiveWell's analysis. More moderate philosophical differences could put one in a place where more optimistic empirical assumptions plus an expectation that SM will continue reducing cost-per-participant and/or effectively refine its approach as it scales up could lead to the same conclusion.

Another potential definition for the top class would be whether one would feel more-than-comfortable recommending it to a potential donor for whom there are specific reasons to choose an approach similar to the organization's. I think GiveWell's analysis suggests the answer is yes for reasons similar to the above. If you've got a potential donor who just isn't that enthusiastic about saving lives (perhaps due to emphasizing a more epicurean moral weighting) but is motivated to give to reducing human suffering, SM is a valuable organization to have in one's talking points (and may well be a better pitch than any of the GiveWell top charities under those circumstances).

Comment by Jason on Assessment of Happier Lives Institute’s Cost-Effectiveness Analysis of StrongMinds · 2023-03-22T17:52:55.815Z · EA · GW

Thanks to GiveWell for sharing this! 

It's worth emphasizing that this analysis estimates StrongMinds at about 2.3X as effective as GiveDirectly-type programs, which is itself a pretty high bar, and plausibly up to ~8X as effective (or as low as ~0.5X). If we take GD as the bar for a program being one of the most effective in the Global Health space, this conclusion suggests that StrongMinds is very likely to be a strong program (no pun intended), even if it isn't the single best use of marginal funding. I know that's obvious from reading the full post, but I think it bears some emphasis that we're talking about donor choice among a variety of programs that we have reason to believe are rather effective.

Comment by Jason on Some Comments on the Recent FTX TIME Article · 2023-03-22T01:18:53.408Z · EA · GW

I think it would be problematic if a society heaped full adoration on risk-takers when their risks worked out, but doled out negative social consequences (which I'll call "shame" to track your comment) only based on ex ante expected-value analysis when things went awry. That would overincentivize risk-taking.

To maintain proper incentives, one could argue that society should map the amount of public shame/adoration to the expected value of the decision(s) made in cases like this, whether the risk works out or not. However, it would be both difficult and burdensome to figure out all the decisions someone made, assign an EV to each, and then sum to determine how much public shame or adoration the person should get. 

By assigning shame or adoration primarily based on the observed outcome, society administers the shame/adoration incentives in a way that makes the EV of public shame/adoration at least somewhat related to the EV of the decision(s) made. Unfortunately, that approach means that people whose risks don't pan out often end up with shame that may not be morally justified.

Comment by Jason on Some Comments on the Recent FTX TIME Article · 2023-03-21T23:58:46.190Z · EA · GW

I characterized the lawsuit as a fishing expedition because I saw no specific evidence in the complaint about what the VC firms actually knew -- only assumptions based on rather general public statements from the VCs. And the complaints allege -- and I think probably have to allege -- actual knowledge of the fraudulent scheme against the depositors. The reason is that, as a general rule, the plaintiff has to establish that the defendant owed them a duty to do or refrain from doing something before negligence liability will attach.

Of course, you have to file the lawsuit in order to potentially get to discovery and start subpoenaing documents and deposing witnesses. It's not an unreasonable fishing expedition to undertake, but I think the narrative that the VCs were sloppy, rushed, or underinvested on their due diligence is much more likely than the complaint's theory that they knew about the depositor fraud and actively worked to conceal it until FTX did an IPO and they unloaded their shares.

(I certainly do not think anyone in EA knew about the fraudulent scheme against depositors either.)

Comment by Jason on Some Comments on the Recent FTX TIME Article · 2023-03-21T18:20:24.154Z · EA · GW

In general, standard corporate audits aren't intended to be intelligible by consumers but instead by investors and regulators. It's shocking that FTX's regulator in the Bahamas apparently did not require a clean audit opinion addressing internal controls, and maybe no US regulator required it for FTX US either.

At present, my #2 on who to blame (after FTX insiders in the know) is the regulators. It's plausible the auditors did what they were hired to do and issued opinion letters making it clear what their scope of work was in ways that were legible to their intended audience. I can't find any plausible excuse for the regulators.

Comment by Jason on Some Comments on the Recent FTX TIME Article · 2023-03-21T17:24:59.760Z · EA · GW

In a universe where EA leaders had a sufficiently high index of suspicion, they could have at least started publicly distancing themselves from SBF and done one or both of two things: (1) stop working with FTXFF or encouraging people to apply, and/or (2) obtain "insurance" against fraudulent collapse by enlisting some megadonors who privately agreed in advance to immediately commit to repay all monies paid out to EA-aligned grantees if fraud ended up being discovered that inflicted relevant losses.

Public whistleblowing would likely have been terrible . . . if the evidence were strong enough (which I really doubt it was) then it should have been communicated to the US Department of Justice or another appropriate government agency.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-21T16:07:57.325Z · EA · GW

The assumption that this 1/3 would come from outside the community seems to rely on an assumption that there are no lawyers/accountants/governance experts/etc. in the community. It would be more accurate, I think, to say that the 1/3 would come from outside what Jack called "high status core EAs."

Comment by Jason on Some Comments on the Recent FTX TIME Article · 2023-03-21T01:34:47.083Z · EA · GW

Thanks for sharing this. I skimmed the relevant portions of the underlying lawsuit referenced in the press release, and my overall impression is "fishing expedition." (Maybe more than that against the banks . . . but those banks just went bust and I doubt they will have any money to pay a judgment, so I didn't bother skimming that). Not that there aren't reasonable grounds for a class-action law firm to engage in a fishing expedition, but they won't have any real evidence until they (possibly) survive motions to dismiss and get to discovery.

Comment by Jason on Abuse in LessWrong and rationalist communities in Bloomberg News · 2023-03-20T18:45:41.294Z · EA · GW

Any competent outside firm would gather input from stakeholders before releasing a survey. But I hear the broader concern, and note that some sort of internal-external hybrid is possible. The minimal level of outside involvement, to me, would involve serving as a data guardian, data pre-processor, and auditor-of-sorts. This is related to the two reasons I think outside involvement is important: external credibility, and respondent assurance.

As for external credibility, I think media reports like this have the capacity to do significant harm to EA's objectives. Longtermist EA remains, on the whole, more talent-constrained and influence-constrained than funding-constrained. The adverse effect on talent joining EA could be considerable. Social influence is underrated; for example, technically solving AI safety might not actually accomplish much without the ability to socially pressure corporations to adopt effective (but profit-reducing) safety methods or convince governments to compel them to do so.

When the next article comes out down the road, here's what I think EA would be best served by being able to say if possible:

(A) According to a study overseen by a respected independent investigator, the EA community's rate of sexual misconduct is at most no greater than the base rate. 

(B) We have best-in-class systems in place for preventing sexual misconduct and supporting survivors, designed in connection with outside experts. We recognize that sexual misconduct does occur, and we have robust systems for responding to reports and taking the steps we can to protect the community. There is independent oversight over the response system.

(C) Unfortunately, there isn't that much we can do about problematic individuals who run in EA-adjacent circles but are unaffiliated with institutional EA.

(A) isn't externally credible without some independent organization vouching for the analysis in some fashion. In my view, (B) requires at least some degree of external oversight to be externally credible after the Owen situation, but that's another story. Interestingly, I think a lot of the potential responses are appropriate either as defensive measures under the "this is overblown reporting by hostile media outlets" hypothesis or under the "there is a significant problem here" hypothesis. I'd like to see at least funding and policy commitments on some of those initiatives in the near term, which would reduce the time pressure on other initiatives for which there is a good chance that further data gathering would substantially change the desirability, scope, layout, etc.

I think one has to balance the goal of external credibility against other goals. But moving the research to (say) RP as opposed to CEA wouldn't move the external-credibility needle in any appreciable fashion.

The other element here is respondent assurance. Some respondents, especially those no longer associated with EA, may be more comfortable giving responses if the initial data collection itself and any necessary de-identification is done by an outside organization. (It's plausible to me that the combination of responses in a raw survey response could be uniquely identifying.) 

Ideally, you would want to maximize the number of survivors who would be willing to confidentially name the person who committed misconduct. This would allow the outside organization to do a few things that would address methodological concerns in the Time article. First, it could identify perpetrators who had committed misconduct against multiple survivors, avoiding the incorrect impression that perpetrators were more numerous than they were. Second, it could use pre-defined criteria to determine whether the perpetrator was actually an EA, again addressing one of the issues with the Time article. Otherwise, you end up with a numerator covering all instances in which someone reports misconduct by someone they identified as an EA . . . but a denominator developed using narrower criteria, leading to an inflated figure. It would likely be legally safer for CEA to turn over its event-ban list to the outside organization under an NDA for very limited purposes than it would be to turn it over to RP. That would help address another criticism of the Time article: that it failed to address CEA's response to various incidents.
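To illustrate the numerator/denominator mismatch, here is a toy calculation; all of the counts are invented purely for illustration and do not reflect any actual survey data:

```python
# Toy illustration of how mismatched numerator/denominator criteria inflate a rate.
# All counts below are invented for illustration only.

reports_naming_anyone_called_ea = 30      # broad criterion: anyone the reporter identified as "an EA"
reports_meeting_strict_criteria = 18      # same reports, filtered by a narrow, pre-defined EA criterion
people_meeting_strict_criteria = 2000     # denominator built with the narrow criterion

inflated_rate = reports_naming_anyone_called_ea / people_meeting_strict_criteria
consistent_rate = reports_meeting_strict_criteria / people_meeting_strict_criteria

# Broad numerator over narrow denominator overstates the rate (1.5% vs. 0.9% here).
print(f"inflated: {inflated_rate:.1%}, consistent: {consistent_rate:.1%}")
```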

Contingent on budget and maybe early data gathering, I would consider polling men too about things like attitudes associated with rape culture. Surveying or focus-grouping people about deviant beliefs and behaviors (I'm using "deviant" here as sociologists do), not to mention their own harassment or misconduct, is extremely challenging to start with. You need an independent investigator with ironclad promises of confidentiality to have a chance at that kind of research. But then again, it's been almost 20 years since my somewhat limited graduate training in social science research methods, so I could be wrong on this.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-18T23:11:05.032Z · EA · GW

I think Jack's point was that having some technical expertise reduces the odds of a Bad Situation happening at a general level, not that it would have prevented exposure to the FTX bankruptcy specifically.

If one really does not want technical expertise on the board, a possible alternative is hiring someone with the right background to serve as in-house counsel, corporate secretary, or a similar role -- and then listening to that person. Of course, that costs money.

Comment by Jason on Offer an option to Muslim donors; grow effective giving · 2023-03-18T22:53:21.933Z · EA · GW

Although most of us display extreme partiality with a large portion of our spending -- e.g., I think of what I end up spending to keep my dog happy and well in an urban environment!

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-18T21:51:22.471Z · EA · GW

I don't know the acceptable risk level either. I think it is clearly below 49%, and includes at least fraud against bondholders and investors that could reasonably be expected to cause them to lose money from what they paid in.

It's not so much the status of the company as a fraud-committer that is relevant, but the risk that you are taking and distributing money under circumstances that are too close to conversion (e.g., that the monies were procured by fraud and that the investors ultimately suffer a loss). I can think of two possible safe harbors under which other actors' acceptance of a certain level of risk makes it OK for a charity to move forward:

  • In many cases, you could infer a maximum risk of fraud that the bondholders or other lenders were willing to accept from the interest rate minus inflation minus other risk of loss (see the sketch after this list) -- that will usually reveal that bondholders at least were not factoring in more than a few percent fraud risk. The risk accepted by equity holders may be greater, but usually bondholders take a haircut in these types of situations -- and the marginal dollars you're spending would counterfactually have gone to them in preference to the equity holders. However, my understanding is that FTX didn't have traditional bondholders.
  • If the investors were sophisticated, I think the percentage of fraud risk they accepted at the time of their investment is generally a safe harbor. For FTX, I don't have any reason to believe this was higher than the single digits; as you said, the base rate is pretty low and I'd expect the public discourse pre-collapse to have been different if it were believed to be significantly higher.
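
As a rough sketch of the back-of-the-envelope inference in the first bullet (all rates below are hypothetical placeholders, not FTX-specific figures):

```python
# Rough sketch of inferring the fraud risk implicitly priced in by bondholders.
# All rates are hypothetical placeholders, not FTX-specific figures.

nominal_yield = 0.06        # assumed interest rate the bondholders accepted
expected_inflation = 0.03   # assumed inflation expectation at the time of purchase
other_default_risk = 0.02   # assumed non-fraud credit/default risk premium

implied_max_fraud_risk = nominal_yield - expected_inflation - other_default_risk
print(f"Implied fraud risk priced in by bondholders: ~{implied_max_fraud_risk:.0%}")  # ~1%
```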

However, those safe harbors don't work if the charity has access to inside information (that bondholders and equity holders wouldn't have) and that inside information updates the risk of fraud over the base rates adjusted for information known to the bond/equity holders. In that instance, I don't think you can rely on the investor/bondholder acceptance of the risk as establishing that it is low enough.

There is a final wrinkle here -- for an entity as unregulated as FTX was, I don't believe it is plausible to have a relatively high risk of investor fraud and a sufficiently low risk of depositor fraud. I don't think people at high risk of cheating their investors can be deemed safe enough to take care of depositors. So in this case there is a risk of investor fraud that is per se unacceptable, and a risk of investor fraud that implies an unacceptable risk of depositor fraud. The acceptable risk of investor fraud is the lower of the two.

Exception: If you can buy insurance to ensure that no one is worse off because of your activity, there may be no maximum acceptable risk. Maybe that was the appropriate response under these circumstances -- EA buys insurance against the risk of fraud in the amount of the donations, and returns that to the injured parties if there was fraud at the time of any donation which is discovered within a six-year period (the maximum statute of limitations for fraudulent conveyance in any U.S. state to my knowledge). If you can't find someone to insure you against those losses at an acceptable rate . . . you may have just found your answer as to whether the risk is acceptable.

Comment by Jason on How my community successfully reduced sexual misconduct · 2023-03-18T21:05:19.753Z · EA · GW

Agree that it wouldn't work for every event. I could see it working for someone with a pattern of coming to shorter events -- asking someone who has become a regular attendee at events for a certificate would be appropriate. Although I suggested an hour-long class because I like the idea of everyone regularly in the community receiving training, the training for less-involved people could be 10-15 minutes.

I think the increased visibility of the process (compared to CH-event organizer checks) could be a feature. If you hand over a green cert, you are subtly reminded of the advantages of being able to produce one. If you hand over a yellow one, you are reminded that the organizers know about your yellow status and will likely be keeping a closer eye on you . . . which is a good thing, I think. Asking to see a certificate before dating or having sex with another EA shouldn't be an affirmatively encouraged use case, but some people might choose to ask -- and that would be 100% up to the person. But that might be an additional incentive for some people to keep to green-cert behavior.

Although no one should take this as legal advice, one of the possible merits of a certificate-based approach is that the lack of merit in a defamation suit should be clear very early in the litigation. The plaintiff will realize quickly that they aren't going to be able to come up with any evidence on a foundational element of the claim (a communication from the defendant to a third party about the plaintiff). With a more active check-in, you're going to have to concede that element and go into discovery on whether there was communication that included (or implied) a false statement of fact. Discovery is generally the most expensive and painful part of litigation -- and even better, a would-be plaintiff who can figure out that there was no communication will probably decide never to sue at all.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-18T18:01:54.092Z · EA · GW

Yes, the Corolla comment looks less innocent if the speaker has significant reasons to believe Sam was ethically shady. If you know someone is ethically shady but decide to work with them anyway, you need to be extra careful not to make statements that a reasonable person could read as expressing a belief in that person's good ethics.

Comment by Jason on Legal Assistance for Victims of AI · 2023-03-18T17:58:12.043Z · EA · GW

Yes. The definition of "unauthorized practice of law" is murkier and depends more on context than one might think. For instance, I personally used -- and recommend for most people without complex needs -- the Nolo/Quicken WillMaker will-writing software.

On a more serious note, if there were 25 types of small legal harm commonly caused by AI chatbots, writing 25 books on "How to Sue a Chatbot Company For Harm X, Including Sample Pleadings" is probably not going to constitute unauthorized practice.

Comment by Jason on How my community successfully reduced sexual misconduct · 2023-03-18T17:50:54.564Z · EA · GW

(not legal advice, not researched)

It seems that there would be partial workarounds here, at least in theory. Suppose that CEA or another organization offered a one-hour class called Sexual Misconduct Training for EAs that generated a green, digitally signed certificate of attendance "valid" for a year. The organization does not allow individuals who it has determined to have committed moderate-severity misconduct within the past few years to attend the one-hour class. They may, however, attend a four-hour Intensive Training class, which generates a yellow digitally-signed certificate with a validity of six months. Those known to have committed serious misconduct may only attend a class that does not generate a certificate at all.

A community organizer, party host, etc. could ask people for their certificates and take whatever action they deem appropriate if a person submits a yellow certificate or does not submit one at all. At a minimum, they would know to keep a close eye on the person, ask for references from prior EA involvement, etc. In this scenario, Organization hasn't spoken about anyone to a third party at all! (Classically, defamation at least in the US requires a false statement purporting to be fact that is published or communicated to a third person.) It has, at most, exercised its right not to speak about the person, which is generally rather protected in the US. And if the person voluntarily shows a third party the certificate, that's consent on their part.

The greater legal risk might be someone suing if a green-certificate holder commits misconduct . . . but I think that would be a tough sell. First, no one could plausibly claim reliance on the certificate for more than the proposition that Organization had not determined the individual ineligible to take the relevant class at the time the decision was made. To have a case, a plaintiff would have to show that Organization had received a report about the certificate holder, was at least negligent in issuing the certificate in light of that report, and owed them a legal duty not to issue a certificate under those circumstances. As long as Organization is clear about the limits of the certificate process, I think most courts and juries would be hesitant to issue a decision that strongly disincentivizes risk-reduction techniques deployed in good faith and at least moderate effort.

Comment by Jason on Legal Assistance for Victims of AI · 2023-03-18T16:42:29.122Z · EA · GW

Saw this morning that Eugene Volokh, a well-respected libertarian-leaning law professor who specializes in U.S. free-speech law, and others are working on a law review article about libel lawsuits against developers of LLMs. The post below explains how he asked GPT-4 about someone, got false information claiming that the person had pled guilty to a crime, and got fake quotes attributed to major media outlets:

https://reason.com/volokh/2023/03/17/large-libel-models-chatgpt-4-erroneously-reporting-supposed-felony-pleas-complete-with-made-up-media-quotes/

Comment by Jason on Does EA get the "best" people? Hypotheses + call for discussion · 2023-03-18T01:10:01.513Z · EA · GW

The mods can't realistically call different strike zones based on whether or not "expected value of the stuff [a poster says] remains high." Not only does that make them look non-impartial, it actually is non-impartial.

Plus, warnings and bans are the primary methods by which the mods give substance to the floor of what forum norms require. That educative function requires a fairly consistent floor. If a comment doesn't draw a warning, it's at least a weak signal that the comment doesn't cross the line.

I do think a history of positive contributions is relevant to the sanction.

Comment by Jason on Legal Assistance for Victims of AI · 2023-03-18T00:42:34.613Z · EA · GW

The linked article says -- persuasively, in my view -- that Section 230 generally doesn't shield companies like OpenAI for what their chatbots say. But that merely takes away a shield; you still need a sword (a theory of liability) on top of that.

My guess is that most US courts will rely significantly on analogies in the absence of legislative action. Some of those are not super-friendly to litigation. Arguably the broadest analogy is to buggy software with security holes that can be exploited and cause damage; I don't think plaintiffs have had much success with those sorts of lawsuits. If there is an intervening human actor, that also can make causation more difficult to establish. Obviously that is all at the 100,000 foot level and off the cuff! To the extent the harmed person is a user of the AI, they may have signed an agreement that limits their ability to sue (whether by waiving certain claims, limiting potential damages, or imposing onerous procedural requirements that mandate private arbitration and preclude class actions).

There are some activities at common law that are seen as superhazardous and which impose strict liability on the entity conducting them -- using explosives is the usual example. But -- I don't understand there to be a plausible case that using AI in an application right now is similarly superhazardous in a way that would justify extending those precedents to AI harm. 

Comment by Jason on Legal Assistance for Victims of AI · 2023-03-18T00:26:47.977Z · EA · GW

I think your last paragraph hits on a real risk here: litigation response is driven by fear of damages, and will drive the AI companies' interest in what they call "safety" in the direction of wherever their aggregate damages exposure is greatest and/or wherever litigation poses the largest existential risk to their company.

Comment by Jason on Legal Assistance for Victims of AI · 2023-03-18T00:23:45.952Z · EA · GW

If only there were some sort of new technology that could be harnessed to empower millions of ordinary people who will have small legitimate legal grievances against AI companies to file their own suits as self-represented litigants, with documents that are at least good enough to make it past the initial pleading stages . . . .

(not intended as a serious suggestion)

Comment by Jason on Legal Assistance for Victims of AI · 2023-03-17T23:27:13.678Z · EA · GW

There are definitely a lot of legal angles that AI will implicate, although some of the examples you provided suggest the situation is more mixed:

  • The HIPAA rules don't apply to everyone. See, e.g., 45 C.F.R. § 164.104 (stating the entities to which HIPAA Privacy Rule applies). If you tell me about your medical condition (not in my capacity as a lawyer), HIPAA doesn't stop me from telling whoever I would like. I don't see how telling a generalized version of ChatGPT is likely to be different.
  • I agree that professional-practice laws will be relevant in the AI context, although I think AI companies know that the real money is in providing services to licensed professionals to super-charge their work and not in providing advice to laypersons. I don't think you can realistically monetize a layperson-directed service without creating some rather significant liability concerns even apart from unauthorized-practice concerns.
  • The foreign law problem you describe is about as old as the global Internet. Companies can and do take steps to avoid doing business in countries where the laws are considered unfriendly. Going after a U.S. tech company in a foreign court often only makes sense if (a) the tech company has assets in the foreign jurisdiction; or (b) a court in a country where the tech company has assets will enforce the foreign court order.  For instance, no U.S. court will enforce a judgment for heresy.

More fundamentally, I don't think it will be OpenAI, etc. who are providing most of these services. They will license their technology to other companies who will actually provide the services, and those companies will not necessarily have the deep pockets. Generally, we don't hold tool manufacturers liable when someone uses their tools to break the law (e.g., Microsoft Windows, Amazon Web Services, a gun). So you'd need to find a legal theory that allowed imputing liability onto the AI company that provided an AI tool to the actual service provider. That may be possible but is not obvious in many cases.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-17T23:00:59.201Z · EA · GW

Although at that point -- at least in my view -- the bet is only about a subset of knowledge that could have rendered it ethically unacceptable to be involved with FTXFF. Handing out money which you believed more likely than not to have been obtained by defrauding investors or bondholders would also be unacceptable, albeit not as heinous as handing out money you believed more likely than not to have been stolen from depositors. (I also think the ethically acceptable risk is less than "more likely than not" but kept that in to stay consistent with Nathan's proposed bet which used "likely.")

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-17T21:33:27.097Z · EA · GW

It's clear to me that the pre-FTX collapse EVF board, at least, needed more "lawyers/accountants/governance" expertise. If someone had been there to insist on good governance norms, I don't believe that statutory inquiry would likely have been opened - at a minimum it would have been narrower. Given the very low base rate of SIs, I conclude that the external evidence suggests the EVF UK board was very weak in legal/accounting/governance etc. capabilities.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-17T20:30:29.988Z · EA · GW

So: excludes securities fraud?

Comment by Jason on Offer an option to Muslim donors; grow effective giving · 2023-03-17T19:23:21.623Z · EA · GW

I would object to a self-identified EA only giving money to help Muslims and claiming it as an EA activity. How people choose to purchase their fuzzies (as opposed to utilons) isn't really my concern.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-17T19:20:25.865Z · EA · GW

I don't agree with that characterization.

On my 6/3 model, you'd need four recusals among the heavily aligned six and zero among the other three for the median member to be other; three for the median to be between heavily aligned and other. If four of the six need to recuse on COI grounds, there are likely other problems with board composition at play.

Also, suggesting that alignment is not the "emphasis" for each and every board seat doesn't mean that you should put misaligned or truly random people in any seat. One still should expect a degree of alignment, especially in seat seven of the nine-seat model. Just like one should expect a certain level of general board-member competence in the six seats with alignment emphasis.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-17T16:31:16.160Z · EA · GW

@Michael_PJ offered a comment about "content linking Sam to EA." That last sentence is hard to read as anything but.

One should know that conversations with someone as famous and unfiltered as Elon Musk about the year's most-talked-about acquisition could go public. There are also other non-public boosts like the quote at the end of the article. But even if not, the private vouch still goes to @lilly 's point about why anyone would boost/vouch for SBF "knowing what it appears they knew then."

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-17T15:10:47.806Z · EA · GW

There is the whole vouching for SBF as prospective purchaser of Twitter:

You vouch for him?

Very much so! Very dedicated to making the long-term future of humanity go well.

Comment by Jason on Some problems in operations at EA orgs: inputs from a dozen ops staff · 2023-03-17T14:33:48.274Z · EA · GW

Regarding asking EAs to do work for which they are overqualified and that non-EAs could do, I wonder whether financial incentives come into play here.

As a general rule, charitable organizations pay their employees below-market salaries and expect that the psychological value employees get from working for an organization they are aligned with ("warm fuzzies," to save space) covers the difference. Although some might disagree, I think this is a good practice in many roles and up to a certain point -- you often want to select to some extent for the extent to which the job candidate gets warm fuzzies working for your organization vs. is just doing it for the paycheck.

To the extent an organization's general pay strategy is -- say -- 70% of market rate (expecting the other 30% in warm fuzzies), that isn't going to be competitive for people who don't value the warm fuzzies significantly.

Imagine you have three types of jobs in the world -- private-sector, Save the Puppies (StP), and opera. Alice really likes puppies but only mildly likes opera, so she values StP fuzzies but minimally values opera fuzzies. She would be equally happy with a private-sector job, a 30% haircut to receive StP fuzzies, or a 5% haircut to receive opera fuzzies. Bob has similar preferences except that he values opera fuzzies and only mildly values StP fuzzies. Claire places only mild value on all fuzzies.

Suppose StP has a job opening that needs someone with an 80K level of qualifications/experience. Alice is a more qualified candidate (private-sector market rate = 100K) than Bob or Claire (whose rate = 80K). However, she is actually cheaper for StP (will work for 70K) than Bob or Claire (will work for 76K). Thus, there is a natural incentive to hire Alice for work she is overqualified for -- plus demonstrated alignment to StP's mission probably has some value for the organization, especially if it is smaller and finds it inefficient to separate out tasks for which alignment is important.
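
Here's a small sketch of that hiring logic; the market rates mirror the hypothetical above, and the 30%/5% "fuzzy" discounts are illustrative assumptions drawn from the Alice/Bob/Claire setup rather than real figures:

```python
# Sketch of the Alice/Bob/Claire reservation-wage hypothetical above.
# Market rates and fuzzy discounts are the illustrative figures from the text.

def reservation_wage(market_rate, fuzzy_discount):
    """Lowest salary a candidate would accept from Save the Puppies (StP)."""
    return market_rate * (1 - fuzzy_discount)

candidates = {
    "Alice":  reservation_wage(100_000, 0.30),  # values StP fuzzies highly -> 70,000
    "Bob":    reservation_wage(80_000, 0.05),   # mildly values StP fuzzies -> 76,000
    "Claire": reservation_wage(80_000, 0.05),   # mildly values all fuzzies -> 76,000
}

cheapest = min(candidates, key=candidates.get)
print(candidates, "-> cheapest hire:", cheapest)  # Alice, despite being overqualified
```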

That is, of course, merely a model. But EA, both by its nature and its recruiting strategy,  generates a population of EAs who are highly qualified/capable, so the Alice/Bob/Claire hypothetical is more likely to happen in EA than in StP. Since liking puppies is generally consistent across ability levels, StP can probably find someone at the 80K level who is aligned with StP and will work for 56K.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-17T14:19:05.582Z · EA · GW

The nature of most EA funding also provides a check on misalignment. An EA organization that became significantly misaligned from its major funders would quickly find itself unfunded. As opposed to Wikimedia, which had/has a different funding structure as I understand it.

Comment by Jason on Offer an option to Muslim donors; grow effective giving · 2023-03-17T13:24:15.167Z · EA · GW

And the charity is going to poor Muslims no matter what we do -- we'd only be harming poor people by declining to offer a more effective way to do good because of our disapproval of the donor's religious restrictions.

Comment by Jason on Offer an option to Muslim donors; grow effective giving · 2023-03-17T13:04:25.220Z · EA · GW

I take it that, at least at this stage, the recipients would be GD recipients anyway and the directed giving is replacing "secular" monies that would have been given to the same people. There's some extra overhead for getting these new donors on board, but that's the nature of fundraising.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-17T11:34:40.417Z · EA · GW

Fair enough -- my attempted point was to acknowledge concerns that being too quick to replace leaders when a bad outcome happened might incentivize them to be suboptimally conservative when it comes to risk.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-16T23:12:50.357Z · EA · GW

And even if one really values "alignment," I suspect that a board's alignment is mostly that of its median member. That may have been less true at EVF where there were no CEOs, but boards are supposed to exercise their power collectively.

On the other hand, a board's level of legal, accounting, etc. knowledge is not based on the mean or median; it is mainly a function of the most knowledgeable one or two members.

So if one really values alignment on say a 9-member board, select six members with an alignment emphasis and three with a business skills emphasis. (The +1 over a bare majority is to keep an alignment majority if someone has to leave.)

Comment by Jason on Does EA get the "best" people? Hypotheses + call for discussion · 2023-03-16T20:15:05.515Z · EA · GW

The meaning of "simp" differs from place to place, but it's not particularly civil and decidedly not in this context. I support a suspension action in light of the recent warning, but given the dissimilar type of violation maybe a week or two would have been sufficient.

https://www.cnn.com/2021/02/19/health/what-is-simp-teen-slang-wellness/index.html

Comment by Jason on Abuse in LessWrong and rationalist communities in Bloomberg News · 2023-03-16T18:43:49.644Z · EA · GW

I needed to walk away from this thread due to some unrelated stressful drama at work, which seems to have resolved earlier this week. So I took today off to recover from it. :) I wanted to return to this in part to point out what I think are some potential cruxes, since I expect some of the same cruxes will continue to come up in further discussions of these topics down the road.

1. I think we may have different assumptions or beliefs about the credibility of internal data-gathering versus independent data-gathering.  Although the review into handling of the Owen situation is being handled by an outside firm, I don't believe the broader inquiry you linked is.

I generally don't update significantly on internal reporting by an organization which has an incentive to paint a rosy picture of things. That isn't anti-CEA animus; I feel the same way about religious groups, professional sports leagues, and any number of other organizations/movements.

In contrast, an outside professional firm would bring much more credibility to assessing the situation. If you want to get as close to ground truth as possible, you don't want someone with an incentive to sell more newspapers or someone hostile to EA -- but you also don't want those researching and writing the report to be favorably inclined to EA either. If the truth is the goal, those involved shouldn't be even unconsciously influenced by the potential effect of the report on EA. This counts for double after the situation with Owen wasn't managed well.

Conditional on the news articles being inaccurate and/or overstated, an internal review is a much weaker shield with which to defend EA against misrepresentations in the public sphere because the public has to decide how much to trust inside researchers/writers. An outside firm also allows people to come forward who do not want to reveal their identities to any EA organization, and brings specialized expertise in data collection on sensitive topics that is unlikely to be available in-house.

As I see it, the standard practice in situations like this is to bring in a professional, independent, and neutral third party to figure out what is going on. For example, when there were allegations of sexual misconduct in the Antarctic research community, the responsible agencies brought in an independent firm to conduct surveys, do interviews, and the like. The report is here.

Likewise, one of the churches in the group of 15-20 churches I attend discovered a sexual predator in its midst. Everyone who attended any of the 15-20 churches was given the contact information for an independent investigative firm and urged to share any information or concerns about other possible misconduct anywhere in the group. The investigative firm promised that no personally-identifiable information would be released to the church group without the reporter's permission (although declining permission would sharply limit the action the church could take against any wrongdoer). The group committed, in advance, to releasing the independent investigative report with redactions only to protect the identities of survivors and witnesses.  Those steps built credibility with me that the group of churches was taking this seriously and that the public report would be a full and accurate reflection on what the investigators found.

2. Based on crux 1, I suspect people may be trying to answer different questions based on this article. If one expects to significantly update on the CEA data gathering, a main question is whether there is enough information to warrant taking significant actions now on incomplete information rather than waiting for information to assist in making a more accurate decision. If one doesn't expect to significantly update on that data gathering, a main question is whether there is enough information to warrant pursuing independent information gathering. The quantum of evidence needed seems significantly higher for the first question than the second. (Either formulation is consistent with taking actions now that should be undertaken no matter what additional data comes in, or actions where the EV is much greater than the costs.)

Comment by Jason on Write a Book? · 2023-03-16T18:10:22.810Z · EA · GW

I wonder if you could reduce the opportunity cost by farming out some of the background labor to (for lack of a better term) a research assistant? Seems like that might be a useful investment (depending on funding) to maximize your productivity and minimize time away from your object-level job.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-16T18:04:19.916Z · EA · GW

Right -- I think a major crux between Nathan and Titotal's comments involve assumptions or beliefs about the extent to which certain leaders' long-term effectiveness has been impaired. My gut says there will ultimately be very significant impairment as applied to public-facing / high-visibility roles, less so for certain other roles.

If almost all current leaders would be better than any plausible replacement, even after a significant hit to long-term effectiveness, then I think that says something about the leadership development pipeline that is worth observing.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-16T14:07:07.363Z · EA · GW

Presumably they have interviewed Will and/or have done enough work to reliably figure out what he thinks secondhand.

Independent investigators have incentives both for and against whitewashing for whoever is paying the bills. A reputation for whitewashing causes a loss in interest by organizations who want/need outsiders to view the results as unbiased.

Comment by Jason on Offer an option to Muslim donors; grow effective giving · 2023-03-16T13:58:38.649Z · EA · GW

From the perspective of a Christian who (more or less) tithes to secular global health charities, it would be challenging to figure this out unless you asked donors outright whether they were acting in accordance with a religious teaching about tithing / believed in tithing and were counting the donation toward their tithe.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-16T13:45:27.873Z · EA · GW

If a global health organization made a mistake in judgment that caused [its] effectiveness to permanently decline by (say) 30%, and it was no longer effective in comparison to alternatives we could counterfactually fund, I suspect very few of us would support continuing to fund it. I would find it potentially concerning, from a standpoint of impartiality, if we do not apply the same standard to leaders. After all, we didn't protect the hypothetical global health organization's beneficiaries merely out of a sense of fairness.

I see the argument that applying such a standard to leaders could discourage them from making EV-positive bets. However, experiencing an adverse outcome on most EV-positive bets won't materially impact a leader's long-term future effectiveness. Moreover, it could be difficult to evaluate leaders from a 100% ex ante perspective. There's a risk of evaluating successful bets by their outcome (because outsiders may not understand that there was a significant bet + there is low incentive to evaluate the ex ante wisdom of taking a risk if all turned out well) but unsuccessful bets from an ex ante perspective. That would credit the leader with their winnings but not with most of their losses, and would overincentivize betting.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-16T02:37:34.184Z · EA · GW

I think the quote is saying that speaking out would saddle the person who spoke out about what someone else knew with significant costs. Although I think the quote overstates the risk, I don't think your reasoning holds. It's not clear to me why anyone has a duty to voluntarily burden themselves with costs to aid the litigation interests of a third party.

If the statement is actually about a senior leader's own knowledge, and their organization received significant funds from FTX/Alameda-linked sources, they are very likely going to be involved in litigation whether they speak or not.

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-16T02:29:18.106Z · EA · GW

In most cases, I think the being-dragged-into-legal-proceedings risk of a random person speaking out is considerably less than this quote would imply. First, you'd need a litigant who cared enough about what you had to say to issue a subpoena -- the FTX debtors, presumably. Even then, they would only care if they were litigating a suit for which the information would be relevant and which didn't settle quickly enough. Unless the person on the other end denied the facts, it's doubtful they would want to burn one of their limited number of depositions on a third party who heard something. And they'd likely get any relevant e-mails from the other person anyway. There are restrictions on subpoenas -- for instance, in the US federal context, they generally cannot command attendance more than 100 miles away from where the person lives, regularly conducts business, etc. [FRCP 45; FRBP 9016.] If you're in a non-US country and are not an employee or agent of a party, international process is often very, very slow to the point that it is a last resort for getting information like that.

None of that is legal advice, and people who have questions about their potential exposure for speaking out should consult with an appropriate lawyer. 

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-16T00:37:17.053Z · EA · GW

"[K]new specifics about potential fraud" seems too high a standard. Surely there is some percentage X at which "I assess the likelihood that these funds have been fraudulently obtained as X%" makes it unacceptable to serve as distributor of said funds, even without any knowledge of specifics of the potential fraud. 

I think your second paragraph hinges on the assumption that Nick merely had sufficient reason to see SBF as a mere "untrustworthy actor[]" rather than something more serious. To me, there are several gradations between "untrustworthy actor[]" and "known fraudster."

(I don't have any real basis for an opinion about what Nick in particular knew, by the way . . . I just think we need to be very clear about what levels of non-specific concern about a potential bad actor are or are not acceptable.)

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-15T23:31:27.413Z · EA · GW

"every other long-time EA involved had left because of the same concerns" is significant corroboration though (and a direct quote from an on-the-record source).

Comment by Jason on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-15T20:58:12.300Z · EA · GW

Adding: I think this article also raises my level of concern that no one seems to have been looking out for the grantees. I'd like to think that this information would have caused people to be much more careful and self-protective around SBF/FTX adjacent stuff at a minimum, like incorporating an organization to receive any grants rather than exposing oneself to liability. But did grantees know about these concerns so they could protect themselves?