Comments
Yeah. (as a note I am also a fan of the animal welfare stuff).
This is a good suggestion.
I think most of this stuff is too dry to hold my attention by itself. I would like a social environment that was engaging yet systematically directed my attention more often to things I care about. This happens naturally if I am around people who are interesting/fun but also highly engaged and motivated about a topic. As such I have focused on community and community spaces more than, for example, finding a good randomista newsletter or extracting randomista posts from the forums.
Another reason to focus on community interaction is that it is both much more fun and much more useful to help with creative problem solving. But forum posts tend to report the results of problem solving / report news. I would rather be engaging with people before that step, but I don't know of a place where one could go to participate in that aside from employment. In contrast, I do have a sense of where one could go to participate in this kind of group or community re: AI safety.
From private convos I am pretty sure that the tweet about Mike Vassar is in reference to this https://forum.effectivealtruism.org/posts/7b9ZDTAYQY9k6FZHS/abuse-in-lesswrong-and-rationalist-communities-in-bloomberg?commentId=FCcEMhiwtkmr7wS84 (which is about Mike Vassar, not Jacy)
There may or may not be other things informing it, but it's not about Jacy.
"It doesn't exist" is too strong for sure. I consider GiveWell central to the randomista part and it was my entrypoint into EA at large. Founder's Pledge was also pretty randomista back when I was applying for a job there in college. I don't know anything about HLI.
There may be a thriving community around GiveWell etc. that I am ignorant of. Or maybe if I tried to filter out non-randomista stuff from my mind then I would naturally focus more on randomista stuff when engaging with EA feeds.
The reality is that I find stuff like "people just doing AI capabilities work and calling themselves EA" to be quite emotionally triggering, and when I'm exposed to it that's what my attention goes to (if I'm not, as is more often the case, avoiding the situation entirely). Naturally this probably makes me pretty blind to other stuff going on in EA channels. There are pretty strong selection effects on my attention here.
All of that said, I do think that community building in EA looks completely different than how it would look if it were the GiveWell movement.
17. I get a lot of messages these days about people wanting me to moderate or censor various forms of discussion on LessWrong that I think seem pretty innocuous to me, and the generators of this usually seem to be reputation related. E.g. recently I've had multiple pretty influential people ping me to delete or threaten moderation action against the authors of posts and comments talking about: How OpenAI doesn't seem to take AI Alignment very seriously, why gene drives against Malaria seem like a good idea, why working on intelligence enhancement is a good idea. In all of these cases the person asking me to moderate did not leave any comment of their own trying to argue for their position, before asking me to censor the content. I find this pretty stressful, and also like, most of the relevant ideas feel like stuff that people would have just felt comfortable discussing openly on LW 7 years ago or so (not like, everyone, but there wouldn't have been so much of a chilling effect so that nobody brings up these topics).
First of all, yikes.
Second of all, I think I could always sense that things were like this (broadly speaking), but simultaneously worried I was just paranoid and deranged. I think that this dynamic has been quite bad for my mental health.
- I think Doing Good Better was already substantially misleading about the methodology that the EA community has actually historically used to find top interventions. Indeed it was very "randomista" flavored in a way that I think really set up a lot of people to be disappointed when they encountered EA and then realized that actual cause prioritization is nowhere close to this formalized and clear-cut.
I feel like I joined EA for this "randomista" flavored version of the movement. I don't really feel like the version of EA I thought I was joining exists even though, as you describe here, it gets a lot of lip service (because it's uncontroversially good and inspiring!!!!). I found it validating for you to point this out.
If it does exist, it hasn't recruited me despite my pretty concentrated efforts over several years. And I'm not sure why it wouldn't.
I don't have a problem with longtermist principles. As far as I'm concerned, maybe the best way to promote long-term good really is to take huge risks at the expense of community health / downside risks / integrity a la SBF (among others). But I don't want to spend my life participating in some scheme to ruthlessly attain power and convert it into good, and I sure as hell don't want to spend my life participating in that as a pawn. I liked the randomista + earn-to-give version of the movement because I could just do things that were definitely good to do in the company of others doing the same. I feel like that movement has been starved out by this other thing wearing it as a mask.
My critique seems resilient to this consideration. The fact that managers do not publicly criticize employees is not evidence of discomfort or awkwardness. Under the very obvious model of "how would a manager get what they want re: an employee", public criticism is not a sensible lever to want to use.
There would still be zero benefit to publicly criticize in the case you are describing.
Relatedly, there's far more public criticism from Google employees about their management than there is from their management about their employees. This plays out on a lot of levels.
The nature of A having power over B is that A doesn't need to coordinate with others in order to get what A wants with respect to B. It would be really bizarre for management to publicly criticize employees whom they can just fire. There is simply no benefit. This explains much more of the variance than anything to do with awkwardness or "punching down".
Nice try -- I like your on-the-nose username
As somebody in the industry I have to say Alameda/FTX pushing MAPS was surreal and cannot be explained as good faith investing by a competent team.
As far as I can tell there is no reason to condemn the fraud, but not the stuff SBF openly endorsed, except that the fraud actually happened and hit the "bad" outcome.
From https://conversationswithtyler.com/episodes/sam-bankman-fried/
COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing?
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually.
COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing.
BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical.
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence?
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.
One of my friends literally withdrew everything from FTX after seeing this originally, haha. Pretty sure the EV on whatever scheme occurred was higher than 51/49, so it follows....
I have to say I didn't expect "all remaining assets across ftx empire 'hacked' and apps updated to have malware" as an outcome.
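To spell out the arithmetic of the quoted hypothetical (this is just the 51/49 toy game from the transcript, not a claim about the actual odds of whatever FTX did), a quick sketch:

```python
# Repeated 51/49 double-or-nothing: expected value grows without bound,
# but the probability of having anything left goes to zero.
p_win = 0.51  # probability of doubling, from the hypothetical in the quote

for n in [1, 10, 50, 100]:
    ev_multiplier = (2 * p_win) ** n   # expected-value multiplier after n rounds
    p_survive = p_win ** n             # chance you haven't yet hit "it all disappears"
    print(f"after {n:>3} rounds: EV x{ev_multiplier:.2f}, P(anything left) = {p_survive:.2%}")
```

An EV-maximizer keeps playing every round, which is Cowen's point: the expected value keeps going up while the chance of being left with anything collapses.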
(as an aside it also seems quite unusual to apply this impartiality to the finances of EAs. If EAs were going to be financially impartial it seems like we would not really encourage trying to earn money in competitive financially zero sum ways such as a quant finance career or crypto trading)
Seriously, imagine dedicating your life to EA and then finding out you lost your life savings because one group of EAs defrauded you and the other top EAs decided you shouldn't be alerted about it for as long as possible, specifically because it might lead to you reaching safety. Of course, none of the in-the-know people decided to put up their own money to defend against a bank run; they just decided it would be best if you kept doing so.
In that situation I have to say I would just go and never look back.
Aspiring to be impartially altruistic doesn't mean we should shank each other. The so-impartial-we-will-harvest-your-organs-and-steal-your-money version of EA has no future as a grassroots movement or even room to grow as far as I can tell.
This community norm strategy works if you determine that retaining socioeconomically normal people doesn't actually matter and you just want to incubate billionaires, but I guess we have to hope the next billionaire is not so (allegedly) impartial towards their users' welfare.
I would like to be involved in the version of EA where we look after each other's basic wellness even if it's bad for FTX or other FTX depositors. I think people will find this version of EA more emotionally safe and inspiring.
To me there is just no normative difference between trying to suppress information and actively telling people they should go deposit on FTX when distress occurred (without communicating any risks involved), knowing that there was a good chance they'd get totally boned if they did so. Under your model this would be no net detriment, but it would also just be sociopathic.
Yes the version of EA where people suppress this information, rather than actively promote deposits, is safer. But both are quite cruel and not something I could earnestly suggest to a friend that they devote their lives to.
What I think: I think that FTX was insolvent such that even if the FTT price was steady, user funds were not fully backed. That is, they literally bet the money on a speculative investment and lost it, and this caused a multibillion-dollar financial hole. It is also possible that some or all of the deficit between assets and liabilities was caused by a hack that happened months ago that they did not report.
As far as I can tell, you don't think this. Well, if you really don't think that, and it turns out you were wrong, then I'd like you to update. I think probabilities are a good way to enforce that; that is my actual good-faith belief. Of course I'm also always looking for profitable trades.
Is there any bet you'd take, that doesn't rely on a legal system (which I agree adds a lot of confounders, not to mention delay), on the above claim? Could we bet on "By April 2023, evidence arises that FTX user funds were not even 95% backed before Binance's FTT selloff?" Or maybe we could bet on Nuno's belief on the backing?
BTW your chart is USDD not USDC. Idk what USDD is.
Also I've now spent like wayyy too much time chatting about this on here. Making a bet would involve further chatting. So FYI the most likely outcome is that I wake up tomorrow and pretend it was all just a dream. Sorry to disappoint and thanks for indulging me a bit in the end.
You're Agrippa! The guy with very short timelines, is Berkeley adjacent and knows that cool DxE person.
No, I do care about you! I respect you quite a bit. I was wrong and I retract what I said before in at least a few comments, and I apologize for my behavior. Also, I'll be happy to take any negative repercussions.
😳 That's nice of you, thanks.
I'm actually not a guy though I don't take any offense to the assumption, given my username.
Maybe Nuno would escrow for us.
I'm probably down for $500, would need to talk to my partner about going much higher anyway. If you are in the US we might not need escrow since suing each other is an option; if we went >5k that would be worth it.
Re SBF vs FTX/Alameda paying: Yeah I meant SBF personally. I agree it's a big difference. Jan 1st is the date but I also don't know how fast this stuff ever goes and researching it sounds annoying.
Given that you think it's likely FTX "gambled" user funds I am really not sure we disagree on anything interesting to begin with :-[
Maybe you think it's only 70% likely and I think it's a lot more than that?
Also, thanks for taking a position on both. We are on the same side of 50/50 for the "gambled deposits" question, though. I wish we could come up with something we disagree on that might also resolve sooner, I'll think on it...
Maybe we disagree on just how big FTX's financial hole is? Could we bet on "as of today, FTX liabilities - FTX assets >= 4bn"? I'd go positive on that one.
Dunno... Really can't tell what you believe. You commented that folks are being too negative yet seem to also think that FTX "gambled" user deposits, which sounds pretty negative to me (though we can disagree about whether it was good to have done this). Oh wellz.
For 50/50, I'll take negative, will not resolve affirmatively on:
- "SBF found guilty of literally anything / pays a fine of over 1M for literally any reason, by 2024".
Cool, what size bet? And, after we figure that out, any thoughts on an escrow?
:-(
I will have to insist on trusted escrow for any bets between us...
We seem to have very different ideas of what "operationalization" means...
How about "By April, will evidence come out that FTX gambled deposits rather than keeping it in reserves?" ? There's already a literal prediction market up on that one!
We could do "SBF found guilty of literally anything / pays a fine of over 1M for literally any reason, by 2024" ? If that's not operationalization I really have to give up here.
I do have a real name by the way!!
BTW I am assuming you are willing to bet in the thousands. If not, I really don't consider that a bad thing, but lmk please!
As an aside it is surprising to me that I seem at all to you like the type of person Sam might have been surrounded with. I don't think anyone remotely insider-y has ever even slightly felt that way about me.
I will take a bet like "found guilty for X/paid a fine of X", which are actual events that happen.
OK whatevs, which side of 50/50 do you want? And by what date? (and for that matter what X? Fraud???)
That said I really dunno why you don't like "FTX used user funds to make risky investments" or "FTX speculated using user funds" etc. Is there nobody we might mutually trust to neutrally resolve such a thing?
I'm sorry but I really don't understand why you think it's not adequate. "Fraud" is quite well-defined, and "loss of user funds" is also quite well-defined.
I would offer odds on like, criminal prosecution results, but that will take such a long time to resolve that I don't think it makes an attractive bet. As you point out there are also jurisdictional questions.
Is "SBF lied about the safety of user funds on FTX.com" better to you?
"FTX used user funds to make risky investments"?
"SBF mislead users about the backing of their FTX.com accounts"?
The real world event would be "FTX committed fraud that caused >1bn loss of user funds". But if it's a bet somebody has to arbitrate the outcome, you know?
I just picked EA forum users as an arbitator since like, that's the venue here. But if you have any other picks for arbitrator that would be fine. You can pick yourself but I'm not sure I'd agree to that bet. Likewise I assumed you wouldn't go for it if I picked me. And if it's the 2 of us well then we might tie.
> but do you have like an an account on a prediction market
Multiple
> Are you associated with the grantees on their prediction market projects?
I am not sure that I understand this question. I have myself received a grant to work on a prediction market project.
I think that putting up probabilities is and should be expected. I think that actual financial betting shouldn't be expected but is certainly welcome.
If I was going to dispute the first thing I would do is ask for probabilities. It seems weird to try to argue with you about whether your predictions are wrong if I don't even know what they are. For all I know we have the same predictions and just a different view of other posters' predictions.
In one year we can make a thread that asks EA forum users to vote on whether they believe, at >90% odds, that SBF fraudulently handled funds (that is, in a way that directly contradicted public representations of how funds were handled) in a way that cost FTX.com customers >1bn in losses.
If a majority of users (whose accounts existed since yesterday, to prevent shenanigans) vote yes, then the bet resolves YES. Otherwise NO.
Which side of 50/50 do you want?
If you don't want to make a single quantifiable prediction on this topic, after making claims about other people's predictions being "too negative", yes I consider that both evasive and inadequate.
If you really believe people are being "too negative" in their speculation, I thought you might be willing to put your money where your mouth is in some way. If you're not, then you're not, but it's got nothing to do with how well defined legality is, the moral meaning of illegality, etcetera.
Edit: I don't actually really think that a social expectation of financial betting is a good norm (not that betting is bad, just that declining to financial bet is fine). Please interpret "putting your money where your mouth is" as referring to reputational stake from making a concrete prediction on fraud/malfeasance/etcetera.
Can you just operationalize a few things yourself and attach numbers to them? That sounds easiest.
For example, your odds on whether SBF literally goes to prison within the next 4 years...
(Though I think there are better ways to operationalize)
If you can't come up with a way to operationalize a prediction on this topic in any straightforwardly falsifiable way then that's okay I guess, though kind of sad.
Would you be open to stating some probabilities on this topic -- for example, your probability that Sam gets convicted of fraud, is conclusively found out to have committed fraud, etcetera?
I ask because I'd potentially be interested in making some financial bets with you!
I really take issue with #2 here. Bank run exacerbation saved my friend's life savings. Expectations of collapse can save your life if, you know, there's a collapse.
It really seems insanely cruel to say we shouldn't inform people because it might be bad for FTX (namely in the event of insolvency). Where are our priorities? I'm very glad that my friends did not observe your #2 preference here.
Of course the best way to help FTX against a bank run would have been to deposit your own funds at the first sign of distress. As of writing I think it's still not too late!
There seem like two obvious models:
1) intractability model, where AGI = doom and the only safe move is not to make it
2) race / differential progress model, where safety needs to be ahead of capabilities by some amount, before capabilities reaches point X
As far as I can tell, alignment is advancing a lot slower per researcher than capabilities. So even if you contribute 1 year on capabilities and 10 on alignment, your effect under differential progress was just bad, and your effect under intractability was badder.
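As a purely illustrative toy calculation (the alignment-to-capabilities progress ratio below is an assumption I'm making up for the example, not a measured number):

```python
# Toy version of the differential-progress view: what matters is whether the
# alignment progress you add keeps pace with the capabilities progress you add.
cap_per_researcher_year = 1.0     # normalize capabilities progress to 1 unit per year
align_per_researcher_year = 0.05  # assumed alignment progress per researcher-year,
                                  # measured in "units of capabilities it can cover"

years_on_capabilities = 1
years_on_alignment = 10

capabilities_added = years_on_capabilities * cap_per_researcher_year
alignment_added = years_on_alignment * align_per_researcher_year

# Negative means the safety-vs-capabilities gap got worse on net.
net_differential = alignment_added - capabilities_added
print(f"net differential effect: {net_differential:+.2f}")  # -0.50 under these assumptions
```

Under those made-up numbers, 1 year on capabilities followed by 10 on alignment still widens the gap; under the intractability model the alignment years do nothing and only the capabilities year counts.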
I'm curious how much the "having aligned people in the room is good" theory can be assessed already. I personally am not a big buyer of it. For example, this phenomenon doesn't seem visible in the Manhattan Project or the nuclear policy that followed.
We simply have a specific bar for admissions and everyone above that bar gets admitted
A) Does this represent a change from previous years? Previous comms have gestured at a desire to get a certain mixture of credentials, including beginners. This is also consistent with private comms and my personal experience.
B) It's pretty surprising that Austin, a current founder of a startup that received 1M in EA-related funding from FTX regrants, would be below that bar!
Maybe you are saying that there is a bar above which you will get in, but below which you may or may not get in.
I think lack of clarity and mixed signals around this stuff might contribute unnecessarily to hurt feelings.
I had a pretty painful experience where I was in a pretty promising position in my career, already pretty involved in EA, and seeking direct work opportunities as a software developer and entrepreneur. I was rejected from EAG twice in a row while my partner, a newbie who just wanted to attend for fun (which I support!!!) was admitted both times. I definitely felt resentful and jealous in ways that I would say I coped with successfully but wow did it feel like the whole thing was lame and unnecessary.
I felt rejected from EA at large and yeah I do think my life plans have adjusted in response. I know there were many such cases! In the height of my involvement I was a very devoted EA, really believed in giving as much as I could bear (time etc included).
This level of devotion juxtaposed with being turned away from even hanging out with people, it's quite a shock. I think the high devotion version of my life would be quite fulfilling and beautiful, and I got into EA seeking a community for that, but never found it. EAG admissions is a pretty central example of this mismatch to me.
Relatedly to time, I wish we knew more about how much money is spent on community building. It might be very surprising! (hint hint)
Sorry I did not realize that OP doesn't solicit donations from non megadonors. I agree this recontextualizes how we should interpret transparency.
Given the lack of donor diversity, tho, I am confused why their cause areas would be so diverse.
Well this is still confusing to me
in the case of criminal justice reform, there were some key facts of the decision-making process that aren’t public and are unlikely to ever be public
Seems obviously true, and in fact a continued premise of your post is that there are key facts absent that could explain or fail to explain one decision or the other. Is this particularly true of criminal justice reform? Compared to, I don't know, orgs like AMF (which are hyper-transparent by design), maybe; compared to stuff around AI risk, I think not.
My guess is that a “highly intelligent idealized utilitarian agent” probably would have invested a fair bit less in criminal justice reform than OP did, if at all.
This is basically the same thesis as your post and does not actually convey much information (it is what I assume anyone would have already guessed Ozzie thought).
I think we can rarely fully trust the public reasons for large actions by large institutions. When a CEO leaves to “spend more time with family”, there’s almost always another good explanation. I think OP is much better than most organizations at being honest, but I’d expect that they still face this issue to an extent. As such, I think we shouldn’t be too surprised when some decisions they make seem strange when evaluating them based on their given public explanations.
Yeah I mean, no kidding. But it's called Open Philanthropy. It's easy to imagine there exists a niche for a meta-charity with high transparency and visibility. It also seems clear that Open Philanthropy advertises as a fulfillment of this niche as much as possible and that donors do want this. So when their behavior seems strange in a cause area and the amount of transparency on it is very low, I think this is notable, even if the norm among orgs is to obfuscate internal phenomena. So I don't really endorse any normative takeaway from this point about how orgs usually obfuscate information.
We are currently at around 50 ideas and will hit 100 this summer.
This seems like a great opportunity to sponsor a contest on the forum.
Also, there is an application out there for running polls where users make pairwise comparisons over items in a pool and a ranking is imputed. It's not necessary for all pairs to be compared, and the system scales to a high number of alternatives. I don't remember what it's called; it was a research project presented by a group when I was in college. I do think it could be a good way to extract a ranking from a crowd (alternative to upvotes / downvotes and other stuff). If you are super excited about this then I can spend some time at some point trying to hunt it down.
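To illustrate the general idea, here is a minimal Bradley-Terry-style sketch of imputing a ranking from incomplete pairwise votes (the actual system I saw may have worked quite differently, and the votes below are made up):

```python
# Minimal Bradley-Terry fit: estimate a score per item from sparse pairwise
# votes, then rank by score. Not every pair needs to be compared.
votes = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "A"), ("D", "B")]  # (winner, loser)
items = {x for pair in votes for x in pair}
scores = {x: 1.0 for x in items}

for _ in range(200):  # simple iterative (minorization-maximization) update
    wins = {x: 0 for x in items}
    denom = {x: 0.0 for x in items}
    for winner, loser in votes:
        wins[winner] += 1
        shared = 1.0 / (scores[winner] + scores[loser])
        denom[winner] += shared
        denom[loser] += shared
    scores = {x: wins[x] / denom[x] if denom[x] > 0 else scores[x] for x in items}
    total = sum(scores.values())
    scores = {x: s / total for x, s in scores.items()}

print(sorted(items, key=lambda x: -scores[x]))  # e.g. ['D', 'A', 'B', 'C']
```

The nice property is that a ranking falls out even when most pairs were never directly compared.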
Your approach to exploring solutions is neat. Good luck.
One idea I would suggest is trying to bring personal doomsday solutions to market that actually work super well / upgrading the best available option somehow.
It cracks me up that this is the first comment you've ever gotten posting here, it really is not the norm.
The comment is using what I call “EA rhetoric” which has sort of evolved on the forum over the years, where posts and comments are padded out with words and other devices. To the degree this is intended to evasive, this is further bad as it harms trust. These devices are perfectly visible to outsiders.
I agree that this has evolved on the forum over the years and it is driving me insane. Seems like a total race to the bottom to appear as the most thorough thinker. You're also right to point out that it is completely visible to outsiders.
It's interesting that you say that given what is in my eyes a low amount of content in this comment. What is a model or model-extracted part that you liked in this comment?
Decent discussion on Twitter, especially from @MichaelDello
https://twitter.com/brianluidog/status/1534738045483683840
To me the biggest challenge in assessing impact is the empirical question of how much any supply increase in meat or meat-like stuff leads to replacement of other meat. But this would apply as well to the accepted cause areas of meat replacers and cell culture.
Substitution is unclear. In my experience it's very clear that scallop is served as a main-course protein in contexts where the alternative is clearly fish, or most often shrimp. So insofar as substitution occurs, we'd mainly see substitution of shrimp and fish.
However, it is not clear how much substitution of meat in fact occurs at all as supply increases. People generally seem to like eating meat and meat-like stuff. I don't know data here but meat consumption is globally on the rise.
https://www.animal-ethics.org/snails-and-bivalves-a-discussion-of-possible-edge-cases-for-sentience/#:~:text=Many%20argue%20that%20because%20bivalves,bivalves%20do%20in%20fact%20swim
I found this discussion interesting. To me it seems like they feel aversion -- not sure how that is any different from suffering -- so it is just a question of "how much?".
Why not take it a step further and ask funders if you should buy yourself a laptop?
Are re-granters vetting applicants to the fund (or at least get to see them), or do they just reach out to individuals/projects they've come across elsewhere?
I don't think that their process is so defined. Some of them may solicit applications, I have no idea. In my case, we were writing an application for the main fund, solicited notes from somebody who happened to be a re-granter without us knowing (or at least without me knowing), and he ended up opting to fund it directly.
--
Still, grantmakers, including re-granters [...]
No need to restate
--
Animal advocates (including outside EA) have been trying lots of things with little success and a few types of things with substantial success, so the track record for a type of intervention can be used as a pretty strong prior.
It's definitely true that in a pre-paradigmatic context vetting is at its least valuable. Animal welfare does seem a bit pre-paradigmatic to me as well, relative to for example global health. But not as much as longtermism.
--
concretely:
It seems relevant whether regranters would echo your advice, as applied to a highly engaged EA aware of a great-seeming opportunity to disburse a small amount of funds (for example, a laptop's worth of funds). I highly doubt that they would. This post by Linch https://forum.effectivealtruism.org/posts/vPMo5dRrgubTQGj9g/some-unfun-lessons-i-learned-as-a-junior-grantmaker does not strike me as writing by somebody who would like to be asked to micromanage <20k sums of money more than the status quo.
I appreciate the praise! Very cool.
I don't agree with your analysis of the comment chain.
(and his beliefs about the specific funders you and Sapphire may not understand well as this is cause area dependent).
Your choice of him to press seems misguided, as he has no direct involvement or strong opinions on AI safety object-level issues that I think you care about.
These assertions / assumptions aren't true. He didn't limit his commentary (which was a reply / rebuttal to Sapphire) to animal welfare. If he had, it would still be irrelevant that he's done so, given that animal welfare is Sapphire's dominant cause area. In fact, his response (corrected by Sapphire) re: Rethink was misleading! So I'm not sure how this reading is supported.
I thought you ignored this reasonable explanation
I am also not really sure how this reading is supported.
Tangentially: As a matter of fact I think that EA has been quite negative for animal welfare because in large part CEA is a group of longtermists co-opting efforts to organize effective animal welfare and then neglecting it. I am a longtermist too but I think that the growth potential for effective animal welfare is much higher and should not be bottlenecked by a longtermist movement. I engage animal welfare as a cause area about equally as much as longtermism, excluding donations.
As mentioned I was/am in these circles (whatever that means). I don’t really have the heart to attack the work and object level issues to someone who is a true believer in most leftist causes, because I think that could have a chance of really hurting them.
There is really not a shortage of unspecific commentary about leftism (or any other ideological classification) on LW, EAF, Twitter, etcetera. Other people seem to like it a lot more than me. Discussion that I find valuable is overwhelmingly specific, clear, object-level. Heuristics are fine but should be clearly relevant and strong. Etcetera. Not doing so is responsible for a ton of noise, and the noise is even noisier if it's in a reply setting and superficially resembles conversation.
Wdym by "do they get to see the applicants"? (for context I am a regrant recipient) The Future Fund does one final review and possible veto over the grant, but I was told this was just to veto any major reputational risks / did not really involve effectiveness evaluation. My regranter did not seem to think it's a major filter and I'd be surprised to learn that this veto has ever been exercised (or that it will have been a year from now).
--
Still, the re-granters are grantmakers, and they've been vetted. They're probably much better informed than the average EA.
I mean, you made pretty specific arguments about the information theory of centralized grants. Once you break up across even 20 regranters, these effects you are arguing for -- the effects of also knowing about all the other applications -- become turbo diminished.
As far as I can tell none of your arguments are especially targeted at the average EA at all. You and Sapphire are both personally much better informed than the average EA.