Posts

Long-Term Future Fund: November 2020 grant recommendations 2020-12-03T12:57:36.686Z
Long-Term Future Fund: April 2020 grants and recommendations 2020-09-18T10:28:20.555Z
Long-Term Future Fund: September 2020 grants 2020-09-18T10:25:04.859Z
Comparing Utilities 2020-09-15T03:27:42.746Z
Long Term Future Fund application is closing this Friday (June 12th) 2020-06-11T04:17:28.371Z
[U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government 2020-04-10T04:54:25.630Z
Request for Feedback: Draft of a COI policy for the Long Term Future Fund 2020-02-05T18:38:24.224Z
Long Term Future Fund Application closes tonight 2020-02-01T19:47:47.051Z
Survival and Flourishing grant applications open until March 7th ($0.8MM-$1.5MM planned for dispersal) 2020-01-28T23:35:59.575Z
AI Alignment 2018-2019 Review 2020-01-28T21:14:02.503Z
Long-Term Future Fund: November 2019 short grant writeups 2020-01-05T00:15:02.468Z
Long Term Future Fund application is closing this Friday (October 11th) 2019-10-10T00:43:28.728Z
Long-Term Future Fund: August 2019 grant recommendations 2019-10-03T18:46:40.813Z
Survival and Flourishing Fund Applications closing in 3 days 2019-10-02T00:13:32.289Z
Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal) 2019-09-09T04:14:02.083Z
Integrity and accountability are core parts of rationality [LW-Crosspost] 2019-07-23T00:14:56.417Z
Long Term Future Fund and EA Meta Fund applications open until June 28th 2019-06-10T20:37:51.048Z
Long-Term Future Fund: April 2019 grant recommendations 2019-04-23T07:00:00.000Z
Major Donation: Long Term Future Fund Application Extended 1 Week 2019-02-16T23:28:45.666Z
EA Funds: Long-Term Future fund is open to applications until Feb. 7th 2019-01-17T20:25:29.163Z
Long Term Future Fund: November grant decisions 2018-12-02T00:26:50.849Z
EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday) 2018-11-21T03:41:38.850Z

Comments

Comment by Habryka on Getting a feel for changes of karma and controversy in the EA Forum over time · 2021-04-22T18:13:05.001Z · EA · GW

Yep, it's an admin-only property. Sorry for the confusion!

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-22T17:54:20.585Z · EA · GW

Oh, yeah, that's fair. I had interpreted it as referring to Jakub's comment. I think there is a slightly stronger case to call Hypatia's post hostile than Jakub's comment, but in either case the statement feels pretty out of place. 

Comment by Habryka on CEA update: Q1 2021 · 2021-04-22T07:09:12.417Z · EA · GW

Thank you for posting this!

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-22T07:00:02.025Z · EA · GW

I agree the calculation isn't super straightforward, and there is a problem of disincentivizing glomarization here. But overall, all things considered, after having thought about situations pretty similar to this for a few dozen hours, I am pretty confident it's still decent Bayesian evidence, and I endorse treating it as Bayesian evidence (though the pre-commitment considerations do dampen the degree to which I am going to act on that information, though not anywhere close to fully).

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-22T06:41:11.984Z · EA · GW

“they chose not to respond, therefore that says bad things about them, so I'll update negatively.” I think that the latter response is not only corrosive in terms of pushing all discussion into the public sphere even when that makes it much worse, but it also hurts people's ability to feel comfortable holding onto non-public information.

This feels wrong from two perspectives: 

  1. It clearly is actual, boring, normal, Bayesian evidence that they don't have a good response. It's not overwhelming evidence, but someone declining to respond sure is screening off the worlds where they had a great, low-inferential-distance reply that was cheap to shoot off and that addressed all the concerns. Of course I am going to update on that (see the sketch after this list).
  2. I do just actually think there is a tragedy-of-the-commons scenario with public information, and for proper information flow you need some incentives to publicize information. You and I have longstanding disagreements on the right architecture here, but from my perspective, of course you want to reward organizations for being transparent and punish organizations that are exceptionally non-transparent. I definitely prefer to join social groups that have norms of information sharing among their members, where members invest substantial resources to share important information with others, and where you don't get to participate in the commons if you don't invest an adequate amount of resources into sharing important information and responding to important arguments.
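For concreteness on the first point, here is a minimal Bayes calculation. The numbers are purely illustrative assumptions of mine, not anything from the thread:

const prior = 0.5; // P(they have a good response), before observing silence
const pSilenceGivenGoodResponse = 0.3; // assumed: a cheap, decisive rebuttal usually gets posted
const pSilenceGivenNoGoodResponse = 0.8; // assumed: silence is the default when no such rebuttal exists
const posterior =
  (pSilenceGivenGoodResponse * prior) /
  (pSilenceGivenGoodResponse * prior +
    pSilenceGivenNoGoodResponse * (1 - prior));
// ≈ 0.27: silence meaningfully lowers P(good response), without being damning.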
Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-22T06:24:40.208Z · EA · GW

Are you sure that they're not available for communication? I know approximately nothing about ACE, but I'd be surprised if they wouldn't be willing to talk to you after e.g. sending them an email.

Yeah, I am really not sure. I will consider sending them an email. My guess is they are not interested in talking to me in a way that would later on allow me to write up what they said publicly, which would reduce the value of their response quite drastically to me. If they are happy to chat and allow me to write things up, then I might be able to make the time, but it does sound like a 5+ hour time-commitment and I am not sure whether I am up for that. Though I would be happy to pay $200 to anyone else who does that.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-22T06:20:26.081Z · EA · GW

I also think there's a strong tendency for goalpost-moving with this sort of objection—are you sure that, if they had said more things along those lines, you wouldn't still have objected?

I do think I would have still found it pretty sad for them to not respond, because I really care about our public discourse and this issue feels important to me, but I do think I would feel substantially less bad about it, and probably would only have weakly downvoted the comment instead of strongly downvoting it.

What I have a problem with is the notion that we should punish ACE for not responding to those accusations—I don't think they should have an obligation to respond

I mean, I do think they have a bit of an obligation to respond? I don't know exactly what you mean by obligation, and I don't think they are necessarily morally bad people, but it does cost me and others a bunch when they don't respond, and it makes coordinating overall harder.

As an example, I sometimes have to decide which organizations to invite to events that I am organizing that help people in the EA community coordinate (historically things like the EA Leaders Retreat or EA Global, now it's more informal retreats and one-off things). The things discussed here feel like decent arguments to reduce those invites some amount, since I do think it's evidence that ACE's culture isn't a good fit for events like that. I would have liked ACE to respond to these accusations, and additionally, I would have liked ACE to respond to them publicly so I don't have to justify my invite to other attendees who don't know what their response was, even if I had reached out in private. 

In a hypothetical world where we had great private communication channels, and I could just ask ACE a question in some smaller, higher-trust circle of people who go to the EA Leaders forum or tend to attend whatever retreats and events I am running, then sure, that might be fine. But we don't have those channels, and the only way I know to establish common knowledge in basically any group larger than 20 people within the EA community is to have it posted publicly. And that means relying on private communication makes a lot of stuff like this really hard.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-22T03:29:58.558Z · EA · GW

I downvoted because it called the communication hostile without any justification for that claim. The comment it is replying to doesn't seem at all hostile to me, and asserting that it is feels like it violates some pretty important norms about not escalating conflict and engaging with people charitably.

I also think I disagree that orgs should never be punished for not wanting to engage in any sort of online discussion. We have shared resources to coordinate, and as a social network without clear boundaries, it is unclear how to make progress on many of the disputes over those resources without any kind of public discussion. I do think we should be really careful to not end up in a state where you have to constantly monitor all online activity related to your org, but if the accusations are substantial enough, and the stakes high enough, I think it's pretty important for people to make themselves available for communication. 

Importantly, the above also doesn't highlight any non-public communication channels that people who are worried about the negative effects of ACE can use instead. The above is not saying "we are worried about this conversation being difficult to have in public, please reach out to us via these other channels if you think we are causing harm". Instead it just declares a broad swath of communication "hostile" and doesn't provide any path forward for concerns to be addressed. That strikes me as quite misguided, given the really substantial reputational, financial, and talent-related resources that ACE shares with the rest of the EA community.

I mean, it's fine if ACE doesn't want to coordinate with the rest of the EA community, but I do think that currently, unless something very substantial changes, ACE and the rest of EA are drawing from shared resource pools and need to coordinate somehow if we want to avoid tragedies of the commons.

Comment by Habryka on EA Forum feature suggestion thread · 2021-04-21T21:45:08.245Z · EA · GW

We no longer weight frontpage posts 10x, though we might want to reinstitute some kind of weighting again. I think the 10x was historically too much: it made it so that by far the primary determinant of how much karma someone had was how many frontpage posts they had, which felt like it undervalued comments. But it's pretty plausible (and even likely to me) that the current system is now too skewed in the other direction.

My current relationship towards karma is something like: the point of karma for comments is to provide local information in a thread about a mixture of importance, quality, and readership, and it's pretty hard to disentangle those without making the system much more complex. Overall, the karma of a post is a pretty good guess at how many people will want to read it, so it makes sense to use it for some recommendation systems, but the karma of comments feels a lot more noisy to me. As a long-term reward, I think we shouldn't really rely on karma at all and should instead use systems like the LessWrong review to establish, in a much more considered way, which posts were actually good.

We've also deemphasized how much karma someone has on the site quite a bit because I don't want to create the impression that it's at all a robust measure of the quality of someone's contributions. So, for example, we no longer have karma leaderboards.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-20T05:17:37.866Z · EA · GW

I am familiar with ACE's charity evaluation process. The hypothesis I expressed above seems compatible with everything I know about the process. So alas, this didn't really answer my question.

Comment by Habryka on [deleted post] 2021-04-20T01:16:10.239Z

For whatever it's worth, this also seems pretty constraining to me. Internal links are already specially marked via the small degree symbol, so differentiating internal and external links is pretty straightforward.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-19T22:53:41.125Z · EA · GW

Makes sense. I think the issues currently being discussed are the best evidence we have, and they do feel like pretty substantial evidence on this topic, but it doesn't seem necessary to discuss that fully here.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-19T16:33:28.106Z · EA · GW

Presumably knowing the basis of ACE's evaluations is one of the most important things to know about ACE? And knowing to what degree social justice principles are part of that evaluation (and to what degree those principles conflict with evaluating cost-effectiveness) seems like a pretty important part of that.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-19T05:33:12.891Z · EA · GW

While your words here are technically correct, putting it like this is very misleading. Without breaking confidentiality, let me state unequivocally that if an organization had employees who had really bad views on DEI, that would be, in itself, insufficient for ACE to downgrade them from top to standout charity status. This doesn't mean it isn't a factor; it is. But the actions discussed in this EA forum thread would be insufficient on their own to cause ACE to make such a downgrade.

Just to clarify, this currently sounds to me like you are saying "the actions discussed in this forum thread would be insufficient, but would likely move an organization about halfway to being demoted from top to standout charity", which presumably makes this a pretty big factor that explains a lot of the variance in how different organizations score on the total evaluation. This seems very substantial, but I want to give you the space to say it plays a much less substantial role than that.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-18T20:12:41.831Z · EA · GW

and actively campaigning against racism has nothing in common with sexual harassment.

Universal statements like this strike me as almost always wrong. Of course there are many similarities that seem relevant here, and a simple assertion that they are not doesn't seem to help the discussion.

I would really quite strongly prefer not to have comments like this on the forum, so I downvoted it. I would usually have just left it at the downvote, but I think Khorton has in the past expressed a preference for having downvotes explained, so I opted on the side of transparency.

Comment by Habryka on EA Debate Championship & Lecture Series · 2021-04-11T18:16:58.307Z · EA · GW

Finally, even after a re-read and showing your comment to two other people seeking alternative interpretations, I think you did say the thing you claim not to have said. Perhaps you meant to say something else, in which case I'd suggest editing to say whatever you meant to say. I would suggest an edit myself, but in this case I don't know what it was you meant to say.

I've edited the relevant section. The edit was simply "This is also pretty common in other debate formats (though I don't know how common in BP in particular)".

By contrast, criticisms I think mostly don't make sense:

+ Goodharting
+ Anything to the effect of 'the speakers might end up believing what they are saying', especially at top levels. Like, these people were randomly assigned positions, have probably been assigned the roughly opposite position at some point, and are not idiots.

Alas, those are indeed my primary concerns. It's of course totally OK if you are not compelled, but I have no idea how you can be so confident in dismissing them. Having talked to multiple people who have participated in high-level debate in multiple formats, those are the criticisms they level as well, including for formats very similar to BP, and for BP in particular.

I have now watched multiple videos of BP debate, and I wish I hadn't, because my guesses about what it would look like were basically right. I feel like I wasted two hours of my time watching BP debates because you insisted that, for some reason, I am not allowed to make claims from very nearby points of evidence, even though, as far as I can tell after spending those two hours, most of my concerns are on-point and BP looks just like most other forms of debate I've seen.

I knew when I wrote the above comment that BP debates look less immediately goodharted. But after engaging more deeply with the format, I would be surprised if it is actually much less goodharted. Of course, four debates at 2x speed aren't really enough to judge the whole category, but given that I've watched dozens of other debates in multiple other formats, I feel like I've located BP pretty precisely in the space of debate formats.

Of course, you can insist that only people intimately familiar with the format participate in this discussion, in which case I cannot clear that bar, and neither can almost anyone else on the forum (and this will heavily select against anyone who is critical of debate).


Let me take a step back. Look... I feel super frustrated by your comment above. I am trying to contribute a number of points to this discussion that feel important, and you now twice just kind of insinuated that I am unfairly biased or being unreasonable without really backing up your points? Like, your comments have been super stressful, and the associated downvotes have felt pretty bad. I think the arguments I've made in my comments are pretty straightforward, I stayed civil, and I don't think I am being particularly irrational about this topic. 

I've thought about it for 2-3 dozen hours over the years, and at multiple points in the last few years I have spent full-time work weeks evaluating whether we should have a debate tradition inside of EA as well, which caused me to think through a lot of the relevant considerations and investigate a substantial number of debate formats. I've talked to something like 8 people in EA with extensive debate experience, for at least an hour each, and tried to get a sense of what worked and what didn't.

And then you come along and just assert: 

Overall, I'm left with the distinct impression that you've made up your mind on this based on a bad personal experience, and that nothing is likely to change that view. 

And this... just feels really unfair? Indeed, phenomenologically, my debate experiences were great. I didn't have a random bad experience that somehow soured me on the whole sport. I was positive on it, and then thought about it for at least a dozen hours in total and came to a complicated high-level position that was overall a lot more hesitant. I have separately also thought for at least a literal 1000 by-the-clock hours about our talent funnels and the epistemic norms I want the community to have, and how the two interact.

My position also isn't categorically opposed to debate at all. Indeed, I am personally likely to cause EA and the Rationality community to have more of a debating institution internally, and I continue to feel conflicted about this project. I think it's quite plausible it's good, but I would want the organizers to think hard about how to avoid the problems that seem pretty deeply embedded in debate, how to keep those problems from damaging the social institutions of EA, and how to avoid attracting people who might otherwise cause harm.

I don't know. It's fine for you to think I am being irrational about this topic, or to categorically dismiss the kind of concern I am raising, but I don't feel like you've really justified either of those assertions, and I perceived both of them as coming with some kind of social slap-down motion that made participating in this thread much more stressful than necessary. I will disengage for now. I hope the people involved in this project make good choices.

Comment by Habryka on EA Debate Championship & Lecture Series · 2021-04-10T18:12:59.390Z · EA · GW

Following up on this, as part of trying to understand the BP format more, I watched this video, which is the most-watched WUDC video on YouTube. And... I find it terrifying. I find it in some sense more terrifying than the video where everyone talks super fast.

I encourage other people who are trying to evaluate debate as a method for truth-seeking to watch this themselves. 

There is no super-fast talking here, but all the arguments in the opening speech are terrible rhetorical arguments. The speaker leverages the laughs and engagement of the audience to dismiss the position of his opponents, and this overall felt more terrifying to me than many of the big political speeches I've seen this year.

Like, I... think I am more terrified of the effect this would have on epistemics than of the effect of the super-fast talkers in policy debate? At least in policy debate, it's somewhat obvious you are playing a game. In the above, I wouldn't be surprised if the participants actually come to believe the position they are trying to defend.

Comment by Habryka on EA Debate Championship & Lecture Series · 2021-04-10T17:58:46.461Z · EA · GW

To be clear, I think very little of my personal experience played a role in my position on this. Or at least it's very unlikely that it did in the way you seem to suggest.

A good chunk of my thoughts on this were formed talking to Buck Shlegeris and Evan Hubinger at some point, plus a number of other discussions about debating with a bunch of EAs and rationalists. I was actually pretty in favor of debate ~4-5 years ago, when I remember first discussing this with people, but I changed my mind after a bunch of people gave their perspectives and experiences and I thought more about the broader problem of how to fix it.

I also want to clarify the following 

after all, there's sadly no easy way for me to disprove your claim that a large fraction of BP debates end up not debating the topic at all.

I didn't say that. I said "This is also pretty common in other debate formats". I even explicitly said I am less familiar with BP as a debate format. It seems pretty plausible to me that BP has less of the problem of meta-debate. But I do think evidence of problems like meta-debate in other formats is evidence of BP also having problems, even if I am specifically less familiar with BP.

Comment by Habryka on EA Debate Championship & Lecture Series · 2021-04-10T05:22:25.625Z · EA · GW

A lot of my models here come from talking to Evan Hubinger about this, who has a lot of thoughts on debate and competed at the national level in Policy debate in college.

My guess is that overall, debate is really badly goodharted, in all of its different variants. One of the ways policy debate in particular is really badly goodharted is that everyone talks so fast nobody can comprehend them. But this is really far from the only way policy debate is broken. Indeed, a large fraction of policy debates end up not debating the topic at all, and instead are full of people debating the institution of debating in various ways, and making various arguments for why they should be declared the winner for instrumental reasons. This is also pretty common in other debate formats (though I don't know how common in BP in particular). Evan won a bunch of his highest-level debates by telling stories about pirates for 15 minutes, and then telling everyone that the institution of debate is so broken and useless, and wouldn't it just be better if we used this time to learn cool facts about history, like the pirate anecdotes I told you in the first 80% of the debate?

If people were just talking fast, I do think that would be a problem, but not a super big one. I can imagine a kind of hyperoptimized court system that functions fine with everyone talking at 2-3x normal speed. The problem is that it's evidence that the system at large has very few defenses against goodharting and runaway competition effects.

As far as I can tell, some other debate formats do not have the problem of everyone talking at 2-3x speed, but they have many of the other problems and, importantly, all share the fundamental attribute of having very few successful defenses against Goodhart's law. And so, even though it might differ from format to format, the actual thing the competitors are doing has very little to do with seeking the truth.

The debates I participated in in high school had nobody talking fast. But they had people doing weird meta-debate, people repeatedly abusing terrible studies (because you can basically never challenge the validity or methodology of a study), people making terrible rhetorical arguments, and people intentionally obfuscating their arguments until completing them in the last minute, so the opposition would have no time to respond.

I watched about 20 minutes of the video you linked, and I do think just from that slice it seems much less obviously broken than the video I linked, and that does seem important to recognize. 

I do also think that I can only really interface with that whole video in a healthy way by repeatedly forcing myself to take a step back and basically throw away all the information that is thrown at me, because I don't trust it, and I don't trust the process that produced it. Like, I don't think I left that video having a better understanding of seatbelt laws (the topic of debate). I learned some useful relevant facts, but I am still worried that I left that debate with worse beliefs about seatbelt laws, and very likely much worse than if I had just read the Wikipedia article on it. Of course, the point of the debate is not for me to learn about seatbelt law, but I do also think the same is probably true for the participants. 

Comment by Habryka on EA Debate Championship & Lecture Series · 2021-04-08T22:31:38.268Z · EA · GW

I would also be interested in this.

Comment by Habryka on EA Debate Championship & Lecture Series · 2021-04-08T22:30:22.712Z · EA · GW

I appreciate the response!

However, we think these issues are not inherent to the format. 

I do basically think the problems are kind of inherent to the format and pretty hard to fix. Like, I don't think it's physically impossible to fix these issues, but I am very skeptical of any effort to fix them that stays within the existing debate context.

Overall, I am not sure where the key causes of our disagreement lie. The above didn't really feel like it addressed my core concerns with the sport, and while the list of benefits is nice, it feels like a list I could basically construct for an arbitrary sport (not this specific list, but something of equal benefit), and below I give some pointers to why I think some of them don't hold, or at least don't hold with the forcefulness one might expect from your descriptions.

I think it is fair to say that many debates do not result in figuring out the truth on the topic at hand, due to the complicated nature of the policies & ideas that are discussed and the limited time per discussion.

To be clear, neither the complexity of the topics nor the limited time is anywhere close to the central reason why I think debate isn't very truth-seeking. I can totally have a meeting with a bunch of friends about a complicated issue with only an hour of time, and we can easily make good approximations and solid progress on understanding it. I am quite confident we would not if we instead spent that time in any competitive debating context. Indeed, it seems likely to me that we would leave the competitive debating context with worse beliefs on the relevant topic than we entered it with.

There are ample discussion groups for debaters that seek to deepen their knowledge, there is a lot of emphasis on inclusion, and extracurricular educational videos are a rising trend. Therefore, most debaters operate in ecosystems where exploring complexity is a virtue. 

None of these (inclusion, extracurricular education, or "exploring complexity is a virtue") really have much to do with my concerns about debate, so, at least from my perspective, describing these as trends that are gaining momentum in the debate community does very little to make me less concerned. Some of them seem mildly bad to me.

Further, since debating improves your ability to understand how arguments relate to one another, these skills can aid in figuring out which position makes more sense in complicated discussions in real life, which can be helpful in seeking the truth, or at the very least in identifying falsehoods.

I don't really think debate helps you understand how arguments relate to each other, at least not in a truth-tracking way. In most debate formats, it's usually much less about actually making good arguments and much more about abusing the way judges are told to score various arguments, in a way that has very little to do with the cognitive patterns I would encourage someone to use if they were trying to figure out whether an argument makes sense.

c) Debating forces you to engage with multiple perspectives. Positions in debating, i.e. being for or against the topic, are randomly allocated. This feature compels you to think on a topic in ways you might not have otherwise, and ultimately assists in developing a more nuanced world view.   

I think this is actually useful, and learning the skill of generating steelmanned arguments for positions you don't believe is quite valuable. Though because of the problem I pointed out above, the arguments you are generating have very little to do with actual truth-trackingness, so this benefit falls quite a bit short of the ideal you describe here.

In my experience it creates a kind of "fallacy-of-grey" mindset where you avoid having any beliefs on these issues at all, or don't really think of it as your job to actually decide which side is right, which I think is quite bad. Ultimately, the goal of understanding both sides of an argument is still to judge which side is right (or, of course, to do a more complicated synthesis between the two, though that's, I think, pretty actively discouraged in the debating format).

d) The debating community is truly global. In competitions you can hear voices that are hard to find in other places. The ability to gain the perspectives of people from around the world on a plethora of important issues has benefits for those that hold EA values.

I don't really believe this? The debating community is overall really insular and narrow, as far as I can tell, heavily selected to be full of all the standard Ivy League people we already have a ton of. I like cognitive diversity, but I don't really think the debating community is very exciting from that perspective. Indeed, it seems to have very similar selection filters to the ones the EA community already applies. I might be wrong here, but I currently don't believe that recruiting from the debate community is going to increase our cognitive diversity on almost any important dimension.

Comment by Habryka on EA Debate Championship & Lecture Series · 2021-04-08T17:33:24.424Z · EA · GW

I generally want us to use truth-seeking methods when engaging with outsiders as well. Of course, that isn't always possible, but I also really don't want us to have a reputation for using lots of rhetorical tricks to convince others (and generally think that doing so is pretty bad).

Comment by Habryka on EA Debate Championship & Lecture Series · 2021-04-07T18:24:58.060Z · EA · GW

I don't have time to respond in much depth because of a bunch of competing commitments, but I want to say that all of these are good points and I appreciate you making them.

Comment by Habryka on EA Debate Championship & Lecture Series · 2021-04-07T05:13:57.374Z · EA · GW

I... feel pretty conflicted and hesitant about this. Overall, I have a sense that debate wasn't a healthy sport for me, and is unlikely to be healthy for others. And I guess I don't really want us to end up adopting much of its culture, or using the techniques of competitive debating to convince people. So I am not sure whether I think this is a good idea.

As long as we are all just treating debate as a game that has as much relationship to figuring out the truth as playing soccer, I am of course OK with that, but I am pretty concerned that it's actually pretty hard to maintain that relationship to the sport at an event like this. 

I used to participate in debate in high school and think it taught me a lot of really bad epistemic habits that took quite a while to unlearn, and most of the American debate tradition seems even more broken than what I was used to in Germany. 

I think it's hard to overstate how far away the common practices of competitive debate are from anything that helps you better understand how the world works, or that has much to do with truth-seeking. And at the same time, my experience of the debate community is that they do think they are learning valuable skills that help them better navigate the world and communicate with each other.

Overall, my sense is that it's very hard to organize an event like this and actually have the narrative of "To be clear, please don't ever actually talk to other people in EA the way you talk to other people on stage here. I would consider that quite rude and probably actively harmful, and also, I really don't think the sport we are practicing here is helping you understand the world better, please don't take this seriously". 

Like, I would be really surprised if anything like that was said at the opening talk of this event, but I do think that is actually the right attitude to have towards debate if you don't want it to hurt you. I've talked to a bunch of people in the EA community who have a debating background, and all of the ones I've talked to thought that, overall, the habits they learned were probably bad, and the whole process really didn't have much to do with truth-seeking.

For people who are unfamiliar with the degree to which various competitive debating practices have created what are, to me, quite horrifying abominations, here is an example of a very high-level competitive debate:

I do think British Parliamentary debate style is a bit less broken than this, but, like, not that much. Overall, I think the sport doesn't have much to do with even real, normal political debate, which is already a bad thing to imitate.

This, again, doesn't make me totally confident that the above is a bad project, but I do sure feel like I would warn people against attending events like this, and am pretty worried about adopting more of the competitive debating culture with things like this. I also don't really think that, given how far the sport has deteriorated, skill in it is predictive at all of truth-seeking ability after you control for g (and my guess is it's negatively correlated after you control for g, though I am less confident of that).

Comment by Habryka on What are your main reservations about identifying as an effective altruist? · 2021-04-02T06:54:03.616Z · EA · GW

Since you achieved some internal distance from an EA identity, are there any projects you've worked on, or ideas you've discussed publicly, that fall into the category "I wouldn't have done this before, because it felt like the kind of thing that would have made people angry/raised the 'reputation damage' flag"?

I guess... almost everything I am now working on? 

My Long Term Future Fund writeups definitely fall into this category, as does a lot of the grant analysis and debating with people behind the final decisions. This is also true for my involvement with the Survival and Flourishing Fund.

Also, my work on LessWrong feels very much like the kind of thing that felt harder. LessWrong's reputation is a lot better now, but a lot of people thought I was investing in something quite harmful when I started working on LessWrong, and I was definitely quite self-conscious about it.

Comment by Habryka on What are your main reservations about identifying as an effective altruist? · 2021-03-30T17:51:15.891Z · EA · GW

The movement at large strikes me as too prestige-seeking and power-seeking, and as such has pretty strong antibodies against a lot of stuff that could potentially, hypothetically, pose a PR risk or make enemies of any kind, or seem generally off-putting to anyone. 

I have found that when I identified as an EA, I had a lot more unproductive critical voices in my head that prevented me from considering a lot of potentially good ideas, and it exposed me to a lot of people who would get angry at me if I did anything that "damaged the reputation of the movement". After many years of actively carrying EA as part of my identity, I noticed that my ability to take directed action in the world had greatly atrophied and that I was much more anxious and risk-averse, and it took me at least two years of internally distancing myself quite a lot from the EA-identity cluster before I felt like I could have novel ideas and start working on ambitious projects again.

In short, overall identifying as an EA and placing myself as a representative of the EA community made me much worse at thinking and achieving difficult things. These days I am holding the identity very much at a distance, but am of course still active in the community. I still find this pretty stressful, but mostly think the costs are worth it.

Comment by Habryka on Proposed Longtermist Flag · 2021-03-25T18:59:00.460Z · EA · GW

I at least didn't upvote because I was concerned this would increase the probability of this specific flag getting traction (which I think would be bad), but I really liked the comments and would love to see more threads like this, and upvoted a lot of stuff in the comments.

Comment by Habryka on Some quick notes on "effective altruism" · 2021-03-25T17:28:52.248Z · EA · GW

I mean, I just imagine what kind of person would be interested, and it would mostly be the kind of person who is ambitious, though not necessarily competent, and who would seek out whatever opportunities or clubs are associated with the biggest influence over the world, sound the most high-status, have the most prestige, or sound like they would be filled with the most powerful people. I have met many of those people, and a large fraction of high-status opportunities that don't also strongly select for merit seem filled with them.

Currently both EA and Rationality are weird in a way that is not immediately interesting to people who follow that algorithm, which strikes me as quite good. In universities, when I've gone to things that sounded like "Global Priorities" seminars, I mostly met lots of people with political science degrees or MBAs, really focused on how they can acquire more power, with the whole conversation being very status-oriented.

Comment by Habryka on Some quick notes on "effective altruism" · 2021-03-25T17:27:39.670Z · EA · GW

*nods* My concerns have very little to do with cultishness, so my guess is we are talking about very different concerns here.

Comment by Habryka on Some quick notes on "effective altruism" · 2021-03-25T01:14:40.839Z · EA · GW

Alas, I think that isn't actually what tends to attract the most competent manipulative people. Random social communities might attract incompetent or average-competence manipulative people, but those are much less of a risk than the competent ones. In general, professional communities, in particular ones aiming for relatively unconditional power, strike me as having a much higher density of manipulative people than random social communities.

I also think when I go into my models here, the term "manipulative" feels somewhat misleading, but it would take me a while longer to explain alternative phrasings. 

Comment by Habryka on Some quick notes on "effective altruism" · 2021-03-25T00:17:02.578Z · EA · GW

But we would also be heading toward a more action-oriented and less communal group, which could reduce the attraction to manipulative people

I don't understand this. We would be trending towards seeking more power, which would further attract power-seekers. We have already gone substantially down this path. You might have different models of what attracts manipulative people; my model is that doing visibly power-seeking and high-status work is one of the most common attractors.

Moreover, today's community is older and more BS-resistant, with some legibly-trustworthy leaders.

I think we have overall become substantially less BS-resistant as we have grown and have drastically increased the surface area of the community, though it depends a bit on the details. 

But you seem to think there would be a big and harmful net effect - can you explain?

Yep, I would be up for doing that, but alas I won't have time for it this week. It seemed better to leave a comment voicing my concerns at all, even if I don't have time to explain them in depth, and I do apologize for not being able to explain them in full.

Comment by Habryka on Some quick notes on "effective altruism" · 2021-03-24T21:55:56.063Z · EA · GW

I think a name change might be good, but am not very excited about the "Global Priorities" name. I expect it would attract mostly people interested in seeking power and "having lots of influence" and I would generally expect a community with that name to be very focused on achieving political aims, which I think would be quite catastrophic for the community.

I actually considered this specific name in 2015 while I was working at CEA US as a potential alternative name for the community, but we decided against it at the time for reasons in this space (and because changing names seems hard).

Comment by Habryka on Progress Open Thread: March 2021 · 2021-03-24T21:22:16.516Z · EA · GW

Yep, I should have definitely kept the probabilities in log-form, just to be less confusing. It wouldn't have made a huge difference to the outcome, but it seems better practice than the thing that I did.
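For readers who want the concrete version, here is a minimal sketch of what working in log form looks like. The numbers are made-up illustrations of mine, not the actual calculation from the thread:

// Convert a probability to log-odds and back.
const logOdds = (p: number) => Math.log(p / (1 - p));
const fromLogOdds = (l: number) => 1 / (1 + Math.exp(-l));

// In log-odds form, independent pieces of evidence simply add,
// which is harder to get confused by than repeated odds multiplications.
const prior = 0.2;
const likelihoodRatios = [3, 0.5, 4]; // hypothetical evidence strengths
const posteriorLogOdds = likelihoodRatios.reduce(
  (acc, lr) => acc + Math.log(lr),
  logOdds(prior)
);
const posterior = fromLogOdds(posteriorLogOdds); // ≈ 0.6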

Comment by Habryka on Progress Open Thread: March 2021 · 2021-03-24T21:21:24.844Z · EA · GW

I am unsure how to think about satisfaction data. My general model is that lots of satisfaction data is biased upwards, and I can't really imagine a negative result from that survey, so I really don't know how much to update on it. I would currently just ignore it, unless someone had a really clever study design with some other intervention that is similarly costly, carries similar social expectations, but that we know is bad for people, which we could use as a control.

And yes, I think circumstances like infertility, being a same-sex couple, and many other things like that can make adoption the best choice for people who really want to have children. But I do think the costs would still be there; you might just not have an alternative.

I also think one can reduce the costs here a lot by trying to find one of the best kids to adopt, or by doing weirder things like finding a surrogate mother, which will probably have much weaker adverse-selection effects (though I haven't thought through this case very much). My concern is much more about the naive way most people seem to handle adoptions, and I think there are ways to reduce the risk to a level where the tradeoffs become much less harsh.

Comment by Habryka on Progress Open Thread: March 2021 · 2021-03-23T22:33:43.322Z · EA · GW

So, one thing I was thinking about was that people frequently use the murder rate as a proxy for the overall crime rate, and I think I remember people doing that without any adjustment of the type you are describing here. Is there something special about the murder rate as a fraction of violent crimes, or should we actually make the same adjustments in that case?

Comment by Habryka on Progress Open Thread: March 2021 · 2021-03-23T21:18:12.384Z · EA · GW

Yeah, I think this is a totally fair critique and I updated some after reading it!  

I wrote the above after a long Slack conversation with Aaron at like 2AM, just trying to capture the rough shape of the argument without spending too much time on it. 

I do think actually chasing this argument all the way through is interesting and possibly worth it. I think it's pretty plausible it could make a 2-3x difference in the final outcome (and possibly a lot more!), and I hadn't actually thought it through all the way. And while I had some gut sense that it was important to differentiate between median and tail outcomes here, I hadn't properly thought through the exact relationship between the two, and I appreciate you doing some more of the thinking.

I currently prefer your estimate of "moving it from 20% to 38%" as something like my best guess.

Comment by Habryka on EA Funds has appointed new fund managers · 2021-03-23T19:25:55.571Z · EA · GW

I... don't know either. I think I can tell you who the guest managers are (at least, I don't think anybody told me to keep it secret), but I will wait 24 hours for someone to object before I post it here.

Comment by Habryka on Progress Open Thread: March 2021 · 2021-03-23T07:31:47.579Z · EA · GW

I agree with this, and strongly disagree with the decision by Aaron to moderate this comment (as well as with other people deciding to downvote this).  This strikes me as a totally reasonable and well-argued comment.

I've thought about this topic a good amount, and if a friend of mine were to tell me they are planning to adopt a child, I would immediately drop everything I am working on and spend at least 10 hours trying to talk them out of it, and generally be willing to spend a lot of my social capital to prevent them from making this choice.

For some calibration: the risk of drug abuse, which is a reasonable proxy for other types of violent behavior as well, is about 2-3x higher in adopted children. This is not conditioning on it being a teenage adoption, which I expect would increase the ratio to something more like 3-4x, given the additional negative selection effects.

Sibling abuse rates are something like 20% (or 80%, depending on your definition), and sibling abuse is the most frequent form of household abuse. This means that by adopting a child, you are adding something like an additional 60% chance of your other child going through at least some level of abuse (and I would estimate something like a 15% chance of serious abuse). That is a lot.
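A rough reconstruction of the arithmetic here, as a sketch. This is my guess at the implied calculation, not something stated explicitly in the comment:

// Assumed baseline rate of some level of sibling abuse (the ~20% figure above)
const baselineSiblingAbuseRate = 0.2;
// Assumed risk multiplier for adopted children (taken from the 3-4x range above)
const adoptionRiskMultiplier = 4;
const riskWithAdoptedSibling = baselineSiblingAbuseRate * adoptionRiskMultiplier; // 0.8
const additionalRisk = riskWithAdoptedSibling - baselineSiblingAbuseRate; // 0.6, i.e. "an additional 60%"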

Like, this feels to me like a likely life-destroying mistake, with very predictably bad outcomes. Given that a large fraction of household abuse is sibling abuse, this makes it more likely than not that your other children will deal with substantial abuse in their childhood. This is not a "small probabilities of large downside" situation. These are large probabilities of large downsides.

Comment by Habryka on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-10T20:32:26.592Z · EA · GW

Ah, great, that makes sense. Thank you for the clarification!

Comment by Habryka on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-10T19:08:07.089Z · EA · GW

It's a bit hard to eyeball, but it seems that the blue line is just the integral of the red line? Which would mean the blue line doesn't account for any groups that closed down.
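To spell out the point, here is a minimal sketch with made-up numbers (my own illustration, not data from the post):

// If the blue line is the running sum of the red line (groups founded
// per year), closures never get subtracted from it:
const foundedPerYear = [5, 8, 12, 10]; // hypothetical counts
const closedPerYear = [0, 1, 3, 4]; // ignored by a pure running sum
let runningTotal = 0;
const blueLine = foundedPerYear.map((f) => (runningTotal += f));
// => [5, 13, 25, 35], an overestimate of the groups actually active,
// which would be [5, 12, 21, 27] after subtracting closures.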

Comment by Habryka on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-10T05:24:37.493Z · EA · GW

These graphs are surprising to me. They seem to assume that no groups died in these years? I mean, that's plausible, since they are all pretty young, but it does seem pretty normal for groups to die a few years after being founded.

Comment by Habryka on Where I Am Donating in 2016 · 2021-02-26T01:52:28.979Z · EA · GW

Woop, thanks for following up on this! I am always very happy when long-term bets like this get resolved.

Comment by Habryka on Introducing High Impact Athletes · 2021-02-25T18:27:12.478Z · EA · GW

Yep, I agree that for some organizations, optimizing for effectiveness will at certain times also mean it's right to optimize at least partially for diversity as an instrumental goal. I think that is true. If you set diversity as a bottom line for your organization, as a terminal goal that has to be achieved independently of the specific problems you face, it will of course trade off against your other goals. But it will not necessarily do so if you just uncover it as part of optimizing for your other goals, as a useful instrumental/intermediary goal, and it can of course be useful advice to make people aware of that.

I disagree that it would be good advice for most organizations to follow, but I think we've reached the part where I no longer have definite takes, just guesses, hunches, and models with large inferential distance, such that it isn't obviously worth going into.

Comment by Habryka on Introducing High Impact Athletes · 2021-02-24T17:53:48.297Z · EA · GW

Huh, I am very surprised that you expect there is just unused time lying around for founders that would not alternatively be used for improving the organization. My sense is that any time spent on this would pretty directly trade off against time used to drive the core organizational objective forward. Of course, this might not literally hold for all founders (some might find additional pockets of motivation that they cannot use on anything related to their organizational priorities but could use for diversity-related recruiting), but I would expect it holds for the vast majority of founders to some degree.

Separately, I am even more surprised that you think a founder will so frequently overestimate their personal fit that it is better for them to hand off the project to someone else! I've literally never seen this go well, ever, even for considerations that strike me as much, much more strongly related to the success of a project, and any founder making this choice would seem to me to be making a terrible mistake.

But again, the question here is not whether diversity is important. If the project would simply succeed better if they handed it to someone else, they should choose whoever would be best suited to run it. That person's diverse network would be one consideration here, but a consideration in service of seeing the project succeed. I don't see how optimizing for diversity as such avoids trading off against whom you hand the project to: if you have one person with a slightly more diverse network but overall very low competence, and another person with a slightly less diverse network who is overall much more competent and established than you are, you should very likely choose the much more competent person.

The same argument I made above applies to this tradeoff of whom to hand the project off to, as it applies to all other organizational tradeoffs, and both of these domains strike me as ones where the argument for a direct tradeoff against organizational success is particularly strong!

Comment by Habryka on Open and Welcome Thread: February 2021 · 2021-02-18T18:20:48.777Z · EA · GW

It's sometimes 1 (for upvotes) and sometimes -1 (for downvotes). Implementing it as a free variable was a bit easier than implementing it as a boolean, so we did that.

Comment by Habryka on Apply to EA Funds now · 2021-02-14T01:59:24.794Z · EA · GW

The new funds UI actually has this specific point covered, at least for the (at least historically) larger overlap between the EAIF and the LTFF. See this section:

Comment by Habryka on Open and Welcome Thread: February 2021 · 2021-02-14T01:57:40.934Z · EA · GW

Here is the relevant section of the code: 

export const userSmallVotePower = (karma: number, multiplier: number) => {
  if (karma >= 1000) { return 2 * multiplier }
  return 1 * multiplier
}

export const userBigVotePower = (karma: number, multiplier: number) => {
  if (karma >= 500000) { return 16 * multiplier } // Thousand year old vampire
  if (karma >= 250000) { return 15 * multiplier }
  if (karma >= 175000) { return 14 * multiplier }
  if (karma >= 100000) { return 13 * multiplier }
  if (karma >= 75000) { return 12 * multiplier }
  if (karma >= 50000) { return 11 * multiplier }
  if (karma >= 25000) { return 10 * multiplier }
  if (karma >= 10000) { return 9 * multiplier }
  if (karma >= 5000) { return 8 * multiplier }
  if (karma >= 2500) { return 7 * multiplier }
  if (karma >= 1000) { return 6 * multiplier }
  if (karma >= 500) { return 5 * multiplier }
  if (karma >= 250) { return 4 * multiplier }
  if (karma >= 100) { return 3 * multiplier }
  if (karma >= 10) { return 2 * multiplier }
  return 1 * multiplier
}

In other words, you get a small-vote power of 2 at 1000 karma, and you can look at the numbers above to see the multipliers for strong votes.
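For illustration, here is how those functions behave at a few karma levels, assuming (as described in an earlier comment in this thread) that multiplier is 1 for an upvote and -1 for a downvote:

userSmallVotePower(500, 1);   // => 1  (below 1000 karma, a normal vote is worth 1)
userSmallVotePower(1000, 1);  // => 2
userBigVotePower(1000, 1);    // => 6  (a strong upvote at 1000 karma)
userBigVotePower(250, -1);    // => -4 (a strong downvote at 250 karma)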

Comment by Habryka on Long-Term Future Fund: Ask Us Anything! · 2021-02-11T06:47:17.577Z · EA · GW

I wrote a long rant that I shared internally but that was pretty far from publishable; then a lot of things changed, and I tried editing it for a bit, but more things kept changing. Enough changed that at some point I gave up on trying to keep the document up to date with the new changes, and decided to instead wait until things settle down, so I can write something that isn't going to be super confusing.

Sorry for the confusion here. At any given point, it seemed like things would soon settle down enough that I would have a more consistent opinion.

Overall, a lot of the changes have been great, and I am currently finding myself more excited about the LTFF than I have been in a long time. But a bunch of decisions are still to be made, so I will hold off on writing a bit longer. Sorry again for the delay.

Comment by Habryka on MichaelA's Shortform · 2021-02-10T02:40:06.850Z · EA · GW

I really like these types of posts. I have some vague sense that both would get more engagement and excitement on LW than on the EA Forum, so it is maybe worth also posting them there.

Comment by Habryka on In diversity lies epistemic strength · 2021-02-09T21:39:14.829Z · EA · GW

Sure, but in the above post you claim that demographic diversity is the best way to measure diversity of perspectives, which is a much stronger claim. I am not saying demographic diversity is completely irrelevant; I am just saying that it seems far from the best measure of cognitive diversity that we have.