Posts

Agrippa's Shortform 2022-02-24T05:10:52.151Z
Reducing EA job search waste 2019-04-17T00:09:00.236Z

Comments

Comment by Agrippa on Open EA Global · 2022-09-03T23:00:36.185Z · EA · GW

Thanks for clarifying 

Comment by Agrippa on Open EA Global · 2022-09-02T20:52:45.181Z · EA · GW

We simply have a specific bar for admissions and everyone above that bar gets admitted 


A) Does this represent a change from previous years? Previous comms have gestured at a desire to get a certain mixture of credentials, including beginners. This is also consistent with private comms and my personal experience. 

B) It's pretty surprising that Austin, a current founder of a startup that received $1M in EA-related funding from FTX regrants, would be below that bar!

Maybe you are saying that there is a bar above which you will get in, but below which you may or may not get in.

I think lack of clarity and mixed signals around this stuff might contribute unnecessarily to hurt feelings.

Comment by Agrippa on Open EA Global · 2022-09-02T18:56:56.953Z · EA · GW

I had a pretty painful experience where I was in a pretty promising position in my career, already pretty involved in EA, and seeking direct work opportunities as a software developer and entrepreneur. I was rejected from EAG twice in a row while my partner, a newbie who just wanted to attend for fun (which I support!!!), was admitted both times. I definitely felt resentful and jealous in ways that I would say I coped with successfully, but wow did it feel like the whole thing was lame and unnecessary.

I felt rejected from EA at large and yeah, I do think my life plans have adjusted in response. I know there were many such cases! At the height of my involvement I was a very devoted EA, really believed in giving as much as I could bear (time etc. included).

This level of devotion, juxtaposed with being turned away from even hanging out with people, is quite a shock. I think the high-devotion version of my life would be quite fulfilling and beautiful, and I got into EA seeking a community for that, but never found it. EAG admissions is a pretty central example of this mismatch to me.

Comment by Agrippa on Community Builders Spend Too Much Time Community Building · 2022-07-02T18:05:00.146Z · EA · GW

Relatedly to time, I wish we knew more about how much money is spent on community building. It might be very surprising! (hint hint)

Comment by Agrippa on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-28T05:10:22.517Z · EA · GW

Sorry, I did not realize that OP doesn't solicit donations from non-megadonors. I agree this recontextualizes how we should interpret transparency.

Given the lack of donor diversity, though, I am confused why their cause areas would be so diverse.

Comment by Agrippa on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-17T18:50:17.414Z · EA · GW

Well, this is still confusing to me.
 

in the case of criminal justice reform, there were some key facts of the decision-making process that aren’t public and are unlikely to ever be public

Seems obviously true, and in fact a continued premise of your post is that there are key facts absent that could explain or fail to explain one decision or the other. Is this particularly true in criminal justice reform? Compared to, say, orgs like AMF (which are hyper-transparent by design), maybe; compared to stuff around AI risk, I think not.

 

My guess is that a “highly intelligent idealized utilitarian agent” probably would have invested a fair bit less in criminal justice reform than OP did, if at all.

This is essentially the same thesis as your post and does not actually convey much information (I assume anyone would have already guessed that this is what Ozzie thought).

 

I think we can rarely fully trust the public reasons for large actions by large institutions. When a CEO leaves to “spend more time with family”, there’s almost always another good explanation. I think OP is much better than most organizations at being honest, but I’d expect that they still face this issue to an extent. As such, I think we shouldn’t be too surprised when some decisions they make seem strange when evaluating them based on their given public explanations.

Yeah, I mean, no kidding. But it's called Open Philanthropy. It's easy to imagine there exists a niche for a meta-charity with high transparency and visibility. It also seems clear that Open Philanthropy advertises itself as filling this niche as much as possible and that donors do want this. So when their behavior seems strange in a cause area and the amount of transparency on it is very low, I think this is notable, even if the norm among orgs is to obfuscate internal phenomena. So I don't really endorse any normative takeaway from this point about how orgs usually obfuscate information.

Comment by Agrippa on Fønix: Bioweapons shelter project launch · 2022-06-17T04:11:27.533Z · EA · GW

We are currently at around 50 ideas and will hit 100 this summer.

 

This seems like a great opportunity to sponsor a contest on the forum.

Also, there is an application out there for running polls where users make pairwise comparisons over items in a pool and a ranking is imputed. It's not necessary for all pairs to be compared; the system scales to a high number of alternatives. I don't remember what it's called; it was a research project presented by a group when I was in college. I do think it could be a good way to extract a ranking from a crowd (an alternative to upvotes / downvotes and other stuff). If you are super excited about this then I can spend some time at some point trying to hunt it down.
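I can't vouch that this is the same system, but the general idea (imputing a ranking from sparse pairwise votes) is standard, and a minimal sketch using a simple Bradley-Terry fit, with made-up idea names, might look like this:

```python
from collections import defaultdict

def bradley_terry(comparisons, n_iters=200):
    """Impute a ranking from sparse pairwise votes.

    comparisons: list of (winner, loser) pairs; not every pair needs
    to be compared. Returns items sorted strongest-first, using the
    standard Bradley-Terry MM update.
    """
    items = {x for pair in comparisons for x in pair}
    wins = defaultdict(int)      # total wins per item
    matches = defaultdict(int)   # comparison counts per unordered pair
    for w, l in comparisons:
        wins[w] += 1
        matches[frozenset((w, l))] += 1

    strength = {x: 1.0 for x in items}
    for _ in range(n_iters):
        new = {}
        for i in items:
            denom = sum(
                matches[frozenset((i, j))] / (strength[i] + strength[j])
                for j in items if j != i and matches[frozenset((i, j))]
            )
            # small floor keeps never-winning items from zeroing out the math
            new[i] = max(wins[i] / denom, 1e-9) if denom else strength[i]
        total = sum(new.values())
        strength = {x: s / total for x, s in new.items()}
    return sorted(items, key=lambda x: strength[x], reverse=True)

# Example: five votes over four ideas, with some pairs never compared
votes = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "B"), ("A", "D")]
print(bradley_terry(votes))  # "A" should come out on top
```

The actual research project may well have used something fancier (active selection of which pair to show next, Bayesian uncertainty estimates, etc.), but even something this simple recovers a sensible ordering from far fewer votes than comparing every pair.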
 

Your approach to exploring solutions is neat. Good luck.

One idea I would suggest is trying to bring personal doomsday solutions to market that actually work super well, or upgrading the best available option somehow.

Comment by Agrippa on Fønix: Bioweapons shelter project launch · 2022-06-17T03:57:41.141Z · EA · GW

It cracks me up that this is the first comment you've ever gotten posting here, it really is not the norm. 

Comment by Agrippa on Fønix: Bioweapons shelter project launch · 2022-06-17T03:52:16.041Z · EA · GW

The comment is using what I call “EA rhetoric” which has sort of evolved on the forum over the years, where posts and comments are padded out with words and other devices. To the degree this is intended to evasive, this is further bad as it harms trust. These devices are perfectly visible to outsiders.

 

I agree that this has evolved on the forum over the years and it is driving me insane. Seems like a total race to the bottom to appear as the most thorough thinker. You're also right to point out that it is completely visible to outsiders. 

Comment by Agrippa on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-17T03:30:54.510Z · EA · GW

It's interesting that you say that given what is in my eyes a low amount of content in this comment. What is a model or model-extracted part that you liked in this comment?

Comment by Agrippa on New cause area: bivalve aquaculture · 2022-06-12T16:42:35.384Z · EA · GW

Decent discussion on Twitter, especially from @MichaelDello
https://twitter.com/brianluidog/status/1534738045483683840

To me the biggest challenge in assessing impact is the empirical question of how much any supply increase in meat or meat-like stuff leads to replacement of other meat. But this would apply as well to the accepted cause areas of meat replacers and cell culture.

Comment by Agrippa on New cause area: bivalve aquaculture · 2022-06-12T16:39:13.164Z · EA · GW

Substitution is unclear. In my experience it's very clear that scallop is served as a main-course protein in contexts where the alternative is clearly fish, or most often shrimp. So insofar as substitution occurs, we'd mainly see substitution of shrimp and fish.

However, it is not clear how much substitution of meat in fact occurs at all as supply increases. People generally seem to like eating meat and meat-like stuff. I don't know the data here, but meat consumption is globally on the rise.

Comment by Agrippa on New cause area: bivalve aquaculture · 2022-06-12T16:35:53.322Z · EA · GW

https://www.animal-ethics.org/snails-and-bivalves-a-discussion-of-possible-edge-cases-for-sentience/#:~:text=Many%20argue%20that%20because%20bivalves,bivalves%20do%20in%20fact%20swim

I found this discussion interesting. To me it seems like they feel aversion -- not sure how that is any different from suffering -- so it is just a question of "how much?". 

Comment by Agrippa on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-10T22:28:04.059Z · EA · GW

Why not take it a step further and ask funders if you should buy yourself a laptop?

Comment by Agrippa on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-10T22:27:35.928Z · EA · GW

Are re-granters vetting applicants to the fund (or at least get to see them), or do they just reach out to individuals/projects they've come across elsewhere?

I don't think that their process is so defined. Some of them may solicit applications; I have no idea. In my case, we were writing an application for the main fund, solicited notes from somebody who happened to be a re-granter without us knowing (or at least without me knowing), and he ended up opting to fund it directly.

--

Still, grantmakers, including re-granters [...]

No need to restate

--

Animal advocates (including outside EA) have been trying lots of things with little success and a few types of things with substantial success, so the track record for a type of intervention can be used as a pretty strong prior.


It's definitely true that in a pre-paradigmatic context vetting is at its least valuable. Animal welfare does seem a bit pre-paradigmatic to me as well, relative to, for example, global health. But not as much as longtermism.

-- 

concretely:

It seems relevant whether regranters would echo your advice, as applied to a highly engaged EA aware of a great-seeming opportunity to disburse a small amount of funds (for example, a laptop's worth of funds). I highly doubt that they would. This post by Linch https://forum.effectivealtruism.org/posts/vPMo5dRrgubTQGj9g/some-unfun-lessons-i-learned-as-a-junior-grantmaker does not strike me as writing by somebody who would like to be asked to micromanage <$20k sums of money more than the status quo.

Comment by Agrippa on The Strange Shortage of Moral Optimizers · 2022-06-10T22:08:13.218Z · EA · GW

I appreciate the praise! Very cool.

I don't agree with your analysis of the comment chain.

 

(and his beliefs about the specific funders you and Sapphire may not understand well as this is cause area dependent).

Your choice of him to press seems misguided, as he has has no direct involvement or strong opinions on AI safety object level issues that I think you care about. 

These assertions / assumptions aren't true. He didn't limit his commentary (which was a reply / rebuttal to Sapphire) to animal welfare. If he had, it would still be irrelevant that he's done so, given that animal welfare is Sapphire's dominant cause area. In fact, his response (corrected by Sapphire) re: Rethink was misleading! So I'm not sure how this reading is supported.

I thought you ignored this reasonable explanation

I am also not really sure how this reading is supported. 

Tangentially: as a matter of fact I think that EA has been quite negative for animal welfare, because in large part CEA is a group of longtermists co-opting efforts to organize effective animal welfare and then neglecting it. I am a longtermist too, but I think that the growth potential for effective animal welfare is much higher and should not be bottlenecked by a longtermist movement. I engage with animal welfare as a cause area about as much as longtermism, excluding donations.

 

As mentioned I was/am in these circles (whatever that means). I don’t really have the heart to attack the work and object level issues to someone who is a true believer in most leftist causes, because I think that could have a chance of really hurting them.

There is really not a shortage of unspecific commentary about leftism (or any other ideological classification) on LW, EAF, Twitter, etcetera. Other people seem to like it a lot more than me. Discussion that I find valuable is overwhelmingly specific, clear, and object-level. Heuristics are fine but should be clearly relevant and strong. Etcetera. Not doing so is responsible for a ton of noise, and the noise is even noisier if it's in a reply setting and superficially resembles conversation.

Comment by Agrippa on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-09T15:27:28.455Z · EA · GW

Wdym by "do they get to see the applicants"? (For context, I am a regrant recipient.) The Future Fund does one final review and possible veto over the grant, but I was told this was just to veto any major reputational risks / did not really involve effectiveness evaluation. My regranter did not seem to think it's a major filter, and I'd be surprised to learn that this veto has ever been exercised (or that it had been in a year's time).

--

Still, the re-granters are grantmakers, and they've been vetted. They're probably much better informed than the average EA.

I mean, you made pretty specific arguments about the information theory of centralized grants. Once you break up across even 20 regranters, these effects you are arguing for -- the effects of also knowing about all the other applications -- become turbo diminished.

As far as I can tell none of your arguments are especially targeted at the average EA at all. You and sapphire are both personally much better informed than the average EA. 

Comment by Agrippa on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-09T15:17:31.777Z · EA · GW

Yes, but I expect funders/evaluators to be more informed about which undercover investigators would be best to fund, since I won't personally have the time or interest to look into their particular average cost-effectiveness, room for more funding, track record, etc., on my own,

Since we are talking about funding people within your network that you personally know, not randos, the idea is that you already know this stuff about some set of people. Like, explicitly, the case for self-funding norms is the case for utilizing informational capital that already exists rather than discarding it.

Knowing that one opportunity is really good doesn't mean there aren't far better ones doing similar work.

I think it is not that hard to keep up with what last year's best opportunities looked like and get a good sense of where the bar will be this year. Compiling the top 5 opportunities or whatever is a lot more labor-intensive than reviewing the top 5, and you already state being informed enough to know about and agree with the decisions of funders. So I disagree with the level at which we should think we are flying blind.

If the disagreement comes down to a normative or decision-theoretic one

Yes, I think this will be the most common source of disagreement, at least in your case, my case, and Sapphire's case. With respect to the things I know about being rejected, this was the case.

All of that said, I think I have updated from your posts to be more encouraging of applying for EA funding and/or making forum posts. I will not do this in a deferential manner, and to me it seems harmful to do so -- I think people should feel discouraged if you explicitly discard what you personally know about their competence etc.

Comment by Agrippa on The Strange Shortage of Moral Optimizers · 2022-06-09T14:55:20.350Z · EA · GW

Another way of looking at this: if I sincerely believe in my comments, this direct communication is immensely useful, even or especially if I’m wrong.

 

I do not think this, for lack of actual content. What would it mean for me to change my view on any topic or argument you have advanced? Or for you to change yours? I would engage in less "leftist micro activism"? I would decide DXE is probably net harmful instead of net positive? I would start believing CEA has been competently executing community building, against evidence? It cashes out to nothing except vague cultural / ideological association.

--

I agree that the concerns around "dilution" are evidence of the phenomenon you are discussing. 

  • It remains unclear how impactful you believe this phenomenon has been in this case, which I think is important to convey.
  • Obviously, if somebody thought X was good, and that EA growth has been slowed because CEA hates X, this would not in itself form an argument for anything except the existence of conflict between CEA and likers of X.

--
TLDR:

Finally, and very directly, actual incidents of real activism are extremely obvious here, and you must admit involve similar patterns of accusations of centralization, censorship, and dismissal from an out of touch, self interested central authority on causes no one cares about.

Yes, this seems to follow the format of your entire thesis:

  1. Agrippa is engaging in, or promoting, X (X is not particularly specified in the comments of Charles, so I have no idea whether or not Charles could actually accurately describe the difference between my views and the average forum poster's)
  2. X or some subset of X is often involved in the toxic and incompetent culture of toxic and incompetent leftist activism
  3. Toxic and incompetent leftist activism is bad (directly, and because CEA has intentionally funded fewer things for fear of it), so Agrippa should not engage in or promote X

At the object level, X seems to be "giving DXE as an example of people who include credible moral optimizers that don't align with EA". If X includes other posts by me, perhaps it includes "claiming that CEA has not done a good job at community building or disbursing funds" (which does not rest on any leftist principles or heuristics and does not even seem controversial among experienced EAs), and "whining that EA has ended up collaborating with, instead of opposing, AI capabilities work" (which also does not rest on anything I would consider even vaguely leftist-coded).

Comment by Agrippa on The Strange Shortage of Moral Optimizers · 2022-06-09T02:18:39.417Z · EA · GW

I think this discussion would have to be several layers less removed from the object level in order to contain insight. 

  • I see the "bycatch" from this shutting down as obstructing many good people, because basically fast growth can't be trusted.

 

There is a list of recent rejects from community building, for example, that would bring in a lot of good people in expectation, if this sort of activism wasn't a concern. 

Your explicit claim seems to be that fear of leftism / leftist activist practices is responsible for a slowing in the growth of EA, because institutions (namely CEA, I assume) are intentionally operating slower than they would if they did not have this fear. Your beliefs about the magnitude of this slowdown are unclear. (Do you think growth has been halved? Cut to a tenth?)

You seem to have strong priors that this would be true. I am not aware of any evidence that this phenomenon has occurred, and you have not pointed any out. I am aware of two community-building initiatives over the past 5 years that tried to get funding and were rejected: the EA Hotel and some other thing for training AI safety researchers. The reasons for rejection were both specific and completely removed from anything you have discussed.

--
IMO I chose the most contentful and specific part of your writing to react to. I think your commentary would be helped by containing more content per word (above zero?).

Comment by Agrippa on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-09T01:57:01.103Z · EA · GW

Thanks for the stuff about RP, that is not as bad as I had thought. 

I donated to RP in late 2019/early 2020 (my biggest donation so far), work there now and think they should continue to scale (at least for animal welfare, I don't see or pay a lot of attention to what's going on in the other cause areas, so won't comment on them)

If you are aware of a major instance where your judgement differed from that of funds, why advocate such strong priors about the efficacy of funds?

I think their undercover investigations are plausibly very good and rescuing clearly injured or sick animals from factory farms (not small farms) is good in expectation, but we can fund those elsewhere without the riskier stuff.

I agree the investigations seem really good / plausibly highest impact (and should be important even just to EAs who want to assess priorities, much less for the sake of public awareness). And you can fund them elsewhere / fund individuals to do this -- yourself! Not via funds.  

Comment by Agrippa on Power dynamics between people in EA · 2022-06-08T21:10:55.057Z · EA · GW

I would really like to participate in the version of EA where these power imbalances are not so severe. I think we could achieve this in various ways. 

I think that insofar as the EA social scene is downstream of the EA institutions, these power imbalances will be a lot more totalizing. In contrast, the EA Hotel felt like a truly safe environment.

Comment by Agrippa on The Strange Shortage of Moral Optimizers · 2022-06-08T20:56:32.632Z · EA · GW

I wouldn't really consider DXE particularly horizontalist? Paging @sapphire

I'm also not sure in what sense these quotes would be evidence of anything about DXE

Comment by Agrippa on The Strange Shortage of Moral Optimizers · 2022-06-08T20:51:48.985Z · EA · GW

Sorry, I am not sure I follow this post. I am not really commenting on how much DXE should grow; I'm not involved. However, if I was looking for those "moral optimizers" outside of EA that are surprisingly hard to find, I think that one place you can find them is DXE. It's an existence proof -- there are, IMO, sincere critics as the OP discusses.

If I were going to discuss whether DXE should grow, I would just try to list what they have accomplished and do some estimates of the costs. Heuristics about types of organization, the quality of the cultures involved, etc., would be of lower interest to me.

Comment by Agrippa on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-08T20:41:46.010Z · EA · GW

It does seem like the "not even spending the money" problem doesn't extend to spending on animal welfare as much, at least in the case of EA Funds that I know about.

ACE seems like a great example for the case against deference.

Do you also have a low opinion of Rethink? They spent a long time not getting funded. 

What about direct action? 

I know that in Sapphire's case, the two projects she is, afaik, highest on are Rethink and DXE / direct action generally. Neither has had an easy time with funders, so empirically it seems like she shouldn't expect so much alignment with funders in future either.

Comment by Agrippa on The Strange Shortage of Moral Optimizers · 2022-06-08T17:20:42.267Z · EA · GW

I do think that you can interpret DXE as a general-good, "beneficentrist" org, given that if you are not longtermism-pilled it is IMO reasonable to say that animal welfare is the highest moral priority, and I think this is their actual belief. It's an org for people to do the most important thing as they see it, not for them to just do a thing.

Comment by Agrippa on The Strange Shortage of Moral Optimizers · 2022-06-08T17:17:32.981Z · EA · GW

One major concern I have with the actually-existing wholesale criticisms of EA is that they tend to reinforce a kind of moral complacency.

 

I agree this is common and it was what I most commonly confronted in college at Cornell. Oh, I should actually just be focused on living sustainably, not being racist, and participating in democracy, and this will be an optimally ethical life? Convenient if true!

I have several friends who are members of Direct Action Everywhere. I think DXE, as I'm exposed to it, does present the sort of alt-EA that you are asking about. I think that many DXE members could non-hypocritically comment that EA is complacent / EAs are generally more complacent people than themselves.

While DXE is not focused on the general good (per se), anecdotally it seems like you can persuade DXE folks of extreme conclusions about the importance of AI safety, at least if they are also autistic. 

Comment by Agrippa on Deference Culture in EA · 2022-06-08T16:58:44.452Z · EA · GW

I like that you contrast deference with investigation, rather than unilateralism. So many discussions and posts about deference devolve into discussion about unilateralism. Example: https://forum.effectivealtruism.org/posts/Jx6ncakmergiC74kG/deference-culture-in-ea?commentId=epR5HxT6nkdSCtMCf

But arguments against unilateralism can't be applied as arguments against investigation. Investigation grows the intellectual commons. Empirically it's clear there is much to investigate. EAs generally agree that AI risk is the most important problem, yet there is no plan to move forward (aside from helping out OpenAI and hoping that this somehow turns out to be a good idea instead of an apocalyptic one).

Comment by Agrippa on Deference Culture in EA · 2022-06-08T16:42:02.169Z · EA · GW

This post seems to amount to replying "No" to Vaidehi's question since it is very long but does not include a specific example. 

> I won't be able to give you examples where I demonstrate that there was too little deference
I don't think that Vaidehi is asking you to demonstrate anything in particular about any examples given. It's just useful to give examples that illustrate your own subjective experience on the topic. It would have conveyed more information and perspective than the above post.

Comment by Agrippa on Transcript of Twitter Discussion on EA from June 2022 · 2022-06-08T16:15:13.880Z · EA · GW

This seems divorced from the empirical reality of centralized funding. Maybe you should be more specific about which orgs I should trust to direct my funds? The actual situation is that we have a huge amount of unspent money bottlenecked by evaluation. Can't find it right now, but you can look up the post-mortems of EA Funds and other CEA projects on here, which spent IIRC under 50%? of donations explicitly due to bottlenecks on evaluation.

This is why I was very glad to see that FTX set up 20? re-granters for their Future Fund. Under your theory of giving, such emphasis on re-granters should be really surprising.

Relatedly, a huge amount of the information used by granters is just network capital. It seems inefficient for everyone else to discard their network capital. It doesn't seem like a stretch to think that my #1 best opportunity that I become aware of over a few years would be better than a grantmaker's ~100th best, given that my network isn't a hundredth of the size.
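A quick toy simulation (under the obviously unrealistic assumptions that opportunity quality is an iid draw from one shared distribution, and that a network 100x the size just means seeing 100x as many draws) at least sanity-checks the arithmetic behind this:

```python
import random

def best_vs_kth(my_n=50, ratio=100, trials=10_000):
    """Fraction of trials in which my single best opportunity beats the
    grantmaker's ratio-th best, when they see ratio times as many draws.
    Purely a toy model: quality is iid uniform for everyone."""
    wins = 0
    for _ in range(trials):
        mine = max(random.random() for _ in range(my_n))
        theirs = sorted(
            (random.random() for _ in range(ratio * my_n)), reverse=True
        )
        if mine > theirs[ratio - 1]:  # the grantmaker's ~100th-best opportunity
            wins += 1
    return wins / trials

if __name__ == "__main__":
    print(best_vs_kth())
```

Under these toy assumptions the fraction comes out above one half, so the back-of-the-envelope intuition isn't crazy; real opportunity quality obviously isn't iid, and grantmakers' draws are presumably better on average, so treat this only as an illustration of the order-statistics point.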

Comment by Agrippa on Agrippa's Shortform · 2022-06-03T18:05:15.624Z · EA · GW

I don't mean to say anything pro-DeepMind, and I'm not sure there is anything positive to say re: DeepMind.

I think that once the nascent spirit of cooperation is destroyed, you can indeed take the adversarial route. It's not hard to imagine successful lobbying efforts that lead to regulation, among other things known to slow progress and hinder organizations -- most people are in fact skeptical of tech giants wielding tons of power using AI! It is beyond me why such things are so rarely discussed or considered. I'm sure that Open Phil's and 80k's open cooperation with OpenAI has a big part in shaping the narrative away from this kind of thing.

Comment by Agrippa on Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg · 2022-05-31T04:29:24.736Z · EA · GW

This is correct (what wiz posted was originally a remark they made in a private convo with me).

Comment by Agrippa on Are you really in a race? The Cautionary Tales of Szilárd and Ellsberg · 2022-05-31T04:26:55.327Z · EA · GW

I would like to contact Daniel Ellsberg and ask what he thinks about the amount of good that could have been done by an aligned Manhattan Project member using their knowledge of the project to advance safety. I expect the answer will be less than the harm caused by participating, whereas @richard_ngo thinks otherwise. Anyone have any recommendations on how I might do this? The only thing I found online was his publicist.

Comment by Agrippa on Agrippa's Shortform · 2022-05-31T04:06:48.128Z · EA · GW

It would be nice to know more about how many EAs are getting into this plan and how many end up working in safety. I don't have the sense that most of them get to the safety half. I also think it is reasonable to believe that no amount of safety research can prevent armageddon, because the outcome of the research may just be "this is not safe", as EY seems to report, and thus have no impact (the capabilities researchers don't care, or the fact that we aren't safe yet means they need to keep working on capabilities so that they can help with the safety problem).

Comment by Agrippa on RyanCarey's Shortform · 2022-05-31T03:59:35.870Z · EA · GW

It seems entirely possible that even with a 100-safety-to-1-capabilities researcher ratio, 100 capabilities researchers could kill everyone before the 10k safety researchers came up with a plan that didn't kill everyone. It does not seem like a symmetric race.

Likewise, if the output of safety research is just "this is not safe to do" (as MIRI's seems to be), capabilities will continue, or in fact they will do MORE capabilities work so they can upskill and "help" with the safety problem. 

Comment by Agrippa on Agrippa's Shortform · 2022-05-31T03:53:02.941Z · EA · GW

Has Holden written any updates on outcomes associated with the grant? 

One can also argue that EA memes re AI risk led to the creation of OpenAI, and that therefore EA is net negative (see here for details). But if this is the argument Agrippa wants to make, then I am confused why they decided to link to the 2017 grant.

I am not making this argument but certainly I am alluding to it. EA strategy (weighted by impact) has been to do things that in actuality accelerate timelines, and even cooperate with doing so under the "have a good person standing nearby" theory.

I don't think that lobbying against OpenAI, or other adversarial action, would have been that hard. But Open Phil and other EA leadership of the time decided to ally and hope for the best instead. This seems off the rails to me.

Comment by Agrippa on Agrippa's Shortform · 2022-05-31T03:44:41.609Z · EA · GW

So much for open exchange of ideas

Comment by Agrippa on Agrippa's Shortform · 2022-05-25T05:48:03.275Z · EA · GW

This post includes some great follow-up questions for the future. Has anything been posted re: these follow-up questions?

Comment by Agrippa on Agrippa's Shortform · 2022-05-25T05:40:38.621Z · EA · GW

As far as I can tell, liberal nonviolence is a very popular norm in EA. At the same time, I really cannot think of anything more mortally violent I could do than to build a doomsday machine. Even if my doomsday machine is actually a 10%-chance-of-doomsday machine or 1% or etcetera (nobody even thinks it's lower than that). How come this norm isn't kicking in? How close to completion does the 10%-chance-of-doomsday machine have to be before gentle kindness is not the prescribed reaction?

Comment by Agrippa on Agrippa's Shortform · 2022-05-25T05:30:35.465Z · EA · GW

My favorite thing about EA has always been the norm that in order to get cred for being altruistic, you actually are supposed to have helped people. This is a great property: just align incentives. But now, re: OpenAI, I so often hear people say that gentle kindness is the only way; if you are openly adversarial, then they will just do the opposite of what you want even more. So much for aligning incentives.

Comment by Agrippa on Agrippa's Shortform · 2022-05-25T05:28:07.270Z · EA · GW

https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support 

To me at this point the expected impact of the EA phenomenon as a whole is negative. Hope we can right this ship, but things really seem off the rails.

Comment by Agrippa on Free-spending EA might be a big problem for optics and epistemics · 2022-05-09T08:23:45.512Z · EA · GW

I would expect detrimental effects if nerding out became even more of a paid-attention-to signal. It's something you can do endlessly without ever helping a person. But maybe you just mean "successfully making valuable intellectual contributions", in which case I agree.

Comment by Agrippa on A retroactive grant for creating the HPMoR audiobook (Eneasz Brodski)? · 2022-05-09T07:59:48.261Z · EA · GW

Permissionless economies are much more efficient. 

For funders: Distributing cash upon results being shown is a lot easier than vetting and modelling who and what will succeed, not to mention it involves better incentives.
For doers: Barrier to entry is much lower: I just need to do the thing (which I probably enjoy), not the additional labor of getting a permissionful grant (which I will not enjoy).

For many doings, doers might require upfront payment or guarantees. However, in very many cases, probabilistic recognition and payment is sufficient.

Comment by Agrippa on The FDA demanded my employer bury inconvenient clinical trial data. What should I do? · 2022-04-16T06:09:56.638Z · EA · GW

Also, if I were you / your employer I would certainly start secretly recording any correspondence with the government.

Comment by Agrippa on The FDA demanded my employer bury inconvenient clinical trial data. What should I do? · 2022-04-16T06:08:22.514Z · EA · GW

Disclaimer: I am just a random onlooker with no particular relevant expertise.

In your place, I suspect I would do what you're doing, but maybe with less interest in my employer's perspective (not that you should follow my putative example). The FDA is being completely corrupt here, and in a completely banal way; of course it must receive punishment if at all possible.

I would be surprised if any journalist wanted to run with this story given the lack of proof (due to intentional lack of paper trail), or if this report alone caused anything in particular. However, it is still a valuable contribution to a body of evidence on the topic, and you could end up unknowingly corroborating other reports on the same topic. 

Re: journalists: Yeah, you could probably just send this to several journalists, especially watchdog-y publications like ProPublica and Mother Jones. At the very least they will be able to contact you in the future if they decide to run a story on the topic.

Re: Internal affairs: maybe just call them anonymously (VOIP?) and ask if this is indeed the kind of thing they care about?

Anyway, I'm sending you wishes of strength and wellness. Godspeed. 

Comment by Agrippa on Momentum 2022 updates (we're hiring) · 2022-03-10T02:52:34.965Z · EA · GW

That's a good point re: 1/4, haha. 
Super cool to hear. Great job.

Comment by Agrippa on Momentum 2022 updates (we're hiring) · 2022-03-03T03:30:30.947Z · EA · GW

Apologies if this was already asked.

Momentum has moved over $10M with our software from 40,000 donors. In our mobile app, 87% of donations went to our recommended charities (including several longtermist ones).

Do you have any idea how many of your users (weighted by donation) self-ID as EAs? If the number is low, obviously this is super impressive, but if you've only gotten traction among EAs then less so.

Thanks for posting, this is a really interesting idea, congrats.

Comment by Agrippa on Agrippa's Shortform · 2022-03-02T22:21:30.709Z · EA · GW

I agree "having people on the inside" seems useful. At the same time, it's  hard for me to imagine what an "aligned" researcher could have done at the Manhattan Project to lower nuclear risk. That's not meant as a total dismissal, it's just not very clear to me.

> Safety-conscious researchers and engineers have done an incredible work setting up safety teams in OpenAI and DeepMind. 

I don't know much about what successes here have looked like, I agree this is a relevant and important case study.

> I think ostracizing them would be a huge error.
My other comments better reflect my current feelings here.

Comment by Agrippa on What psychological traits predict interest in effective altruism? · 2022-03-02T22:06:50.564Z · EA · GW

That would be my assumption, but OP says
> Note that the significant correlations with education level and income held even after controlling for age.

Comment by Agrippa on Agrippa's Shortform · 2022-02-27T02:26:50.490Z · EA · GW

Yeah, this is really alarming and aligns with my least charitable possible interpretation of my feelings / data.