Posts

Long-Term Future Fund: April 2020 grants and recommendations 2020-09-18T10:28:20.555Z · score: 40 (15 votes)
Long-Term Future Fund: September 2020 grants 2020-09-18T10:25:04.859Z · score: 75 (35 votes)
Comparing Utilities 2020-09-15T03:27:42.746Z · score: 20 (7 votes)
Long Term Future Fund application is closing this Friday (June 12th) 2020-06-11T04:17:28.371Z · score: 16 (4 votes)
[U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government 2020-04-10T04:54:25.630Z · score: 31 (10 votes)
Request for Feedback: Draft of a COI policy for the Long Term Future Fund 2020-02-05T18:38:24.224Z · score: 38 (20 votes)
Long Term Future Fund Application closes tonight 2020-02-01T19:47:47.051Z · score: 16 (4 votes)
Survival and Flourishing grant applications open until March 7th ($0.8MM-$1.5MM planned for dispersal) 2020-01-28T23:35:59.575Z · score: 8 (2 votes)
AI Alignment 2018-2019 Review 2020-01-28T21:14:02.503Z · score: 28 (11 votes)
Long-Term Future Fund: November 2019 short grant writeups 2020-01-05T00:15:02.468Z · score: 46 (20 votes)
Long Term Future Fund application is closing this Friday (October 11th) 2019-10-10T00:43:28.728Z · score: 13 (3 votes)
Long-Term Future Fund: August 2019 grant recommendations 2019-10-03T18:46:40.813Z · score: 79 (35 votes)
Survival and Flourishing Fund Applications closing in 3 days 2019-10-02T00:13:32.289Z · score: 11 (6 votes)
Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal) 2019-09-09T04:14:02.083Z · score: 29 (10 votes)
Integrity and accountability are core parts of rationality [LW-Crosspost] 2019-07-23T00:14:56.417Z · score: 52 (20 votes)
Long Term Future Fund and EA Meta Fund applications open until June 28th 2019-06-10T20:37:51.048Z · score: 60 (23 votes)
Long-Term Future Fund: April 2019 grant recommendations 2019-04-23T07:00:00.000Z · score: 143 (75 votes)
Major Donation: Long Term Future Fund Application Extended 1 Week 2019-02-16T23:28:45.666Z · score: 41 (19 votes)
EA Funds: Long-Term Future fund is open to applications until Feb. 7th 2019-01-17T20:25:29.163Z · score: 19 (13 votes)
Long Term Future Fund: November grant decisions 2018-12-02T00:26:50.849Z · score: 35 (29 votes)
EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday) 2018-11-21T03:41:38.850Z · score: 21 (11 votes)

Comments

Comment by habryka on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-16T17:47:09.373Z · score: 10 (6 votes) · EA · GW

I would actually be really interested in talking to someone like Baumeister at an event, or ideally someone a bit more careful. I do think I would be somewhat unhappy to see them given just a talk with Q&A, with no natural place to provide pushback and follow-up discussion, but if someone were to organize an event with Baumeister debating some EA with opinions on scientific methodology, I would love to attend that.

Comment by habryka on Sign up for the Forum's email digest · 2020-10-15T21:41:58.929Z · score: 2 (1 votes) · EA · GW

You can also subscribe to tags by going to the tag page and clicking the "Subscribe" button. For those notifications, you can also choose a frequency in the notification settings on your profile.

Comment by habryka on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-14T19:39:09.603Z · score: 27 (14 votes) · EA · GW

I cannot find any section of this article that sounds like this hypothesis, so I am pretty confident the answer is that no, that is not what the article says.  The article responds relatively directly to this: 

Of course, being a prolific producer of premium prioritisation posts doesn’t mean we should give someone a free pass for behaving immorally. For all that EAs are consequentialists, I don’t think we should ignore wrongdoing ‘for the greater good’. We can, I hope, defend the good without giving carte blanche to the bad, even when both exist within the same person. 

Comment by habryka on Open and Welcome Thread: October 2020 · 2020-10-10T18:34:08.056Z · score: 8 (4 votes) · EA · GW

Not particularly hard. My guess is half an hour of work or so, maybe another half hour to really make sure that there are no UI bugs.

Comment by habryka on Apply to EA Funds now · 2020-09-16T22:08:10.437Z · score: 2 (1 votes) · EA · GW

Yep, seems like that's the wrong link. Here is the fixed link: https://app.effectivealtruism.org/funds/far-future

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-09T04:17:26.079Z · score: 14 (5 votes) · EA · GW

Just to be clear, I don't think even most neoreactionaries would classify as white nationalists? Though maybe now we are arguing over the definition of white nationalism, which is definitely a vague term and could be interpreted many ways. I was thinking about it from the perspective of racism, though I can imagine a much broader definition that includes something more like "advocating for nations based on values historically associated with whiteness", which would obviously include neoreaction, but would also presumably be a much more tenable position in discourse. So for now I am going to assume you mean something much more straightforwardly based on racial superiority, which also appears to be the Wikipedia definition.

I've debated with a number of neoreactionaries, and I've never seen them bring up much stuff about racial superiority. Usually they are just arguing against democracy and in favor of centralized control, with various arguments derived from that, though I also don't have a ton of datapoints. There is definitely a focus on the superiority of western culture in their writing and rhetoric, much of which is flawed, and I am deeply opposed to many of the things I've seen at least some neoreactionaries propose, but my sense is that I wouldn't characterize the philosophy fundamentally as white nationalist in the racist sense of the term. Though of course the few neoreactionaries that I have debated are probably selected in various ways that reduce the likelihood of extreme opinions on these dimensions (though they are also the ones most likely to engage with EA, so I do think the sample should carry substantial weight).

Of course, some neoreactionaries are also going to be white nationalists, and being a neoreactionary will probably correlate with white nationalism at least a bit, but my guess is that at least the people adjacent to EA and Rationality that I've seen engage with that philosophy haven't been very focused on white nationalism, and I've frequently seen them actively argue against it.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-09T01:12:32.391Z · score: 20 (8 votes) · EA · GW

Describing members of Leverage as "white nationalists" strikes me as pretty extreme, to the level of dishonesty, and is not even backed up by the comment that was linked. I thought Buck's initial comment was also pretty bad; he did indeed correct it, which I appreciate, and I feel like any comment that links to it should obviously also take the correction into account.

I have interfaced a lot with people at Leverage, and while I have many issues with the organization, the claim that many white nationalists congregate there, or have congregated there in the past, just strikes me as really unlikely.

Buck's comment also says at the bottom: 

Edited to add (Oct 08 2019): I wrote "which makes me think that it's likely that Leverage at least for a while had a whole lot of really racist employees." I think this was mistaken and I'm confused by why I wrote it. I endorse the claim "I think it's plausible Leverage had like five really racist employees". I feel pretty bad about this mistake and apologize to anyone harmed by it.

I also want us to separate "really racist" from "white nationalist", which are just really not the same thing, and which appear to me to be conflated via the link above.

I also have other issues with the rest of the comment (namely that being constantly worried about communists or Nazis hiding everywhere, and generally bringing up Nazi comparisons in these discussions, tends to reliably derail things and make them harder to discuss well, since there are few conversational moves as mindkilling as accusing the other side of being Nazis or communists. It's not that there are never Nazis or communists, but if you want to have a good conversation, it's better to avoid Nazi or communist comparisons until you really have no other choice, or unless you can really commit to handling the topic in an open-minded way.)

Comment by habryka on Does Economic History Point Toward a Singularity? · 2020-09-08T22:55:15.676Z · score: 4 (4 votes) · EA · GW

To me, the graph with a summary of all trends only seems to have very few that at first glance look a bit like s-curves. But I agree one would need to go beyond eyeballing to know for sure.

Yeah, that was the one I was looking at. From very rough eyeballing, it looks like a lot of them have slopes that level off, but it is obviously super hard to tell just from eyeballing. I might try to find the data and actually check.

Comment by habryka on Does Economic History Point Toward a Singularity? · 2020-09-08T17:43:34.769Z · score: 4 (4 votes) · EA · GW

Note: Actually looking at the graphs in Farmer & Lafond (2016), many of these do sure seem pretty s-curve shaped. As do many of the diagrams in Nagy et al. (2013). I would have to run some real regressions to look at it, but in particular the ones in Farmer & Lafond seem pretty compatible with the basic s-curve model.
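
To make the "run some real regressions" part concrete, something like the following minimal sketch is what I have in mind, using made-up placeholder data rather than the actual Farmer & Lafond series:

```python
# A rough sketch: fit both a pure exponential and a logistic (s-curve)
# model to a noisy performance series and compare residuals. The data
# here is made-up placeholder data, not Farmer & Lafond's numbers.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
years = np.arange(30, dtype=float)
# Placeholder "technology performance" series that happens to level off.
observed = 1.0 / (1.0 + np.exp(-(years - 15.0) / 4.0)) + rng.normal(0.0, 0.02, years.size)

def exponential(t, a, b):
    return a * np.exp(b * t)

def logistic(t, k, t0, s):
    return k / (1.0 + np.exp(-(t - t0) / s))

exp_fit, _ = curve_fit(exponential, years, observed, p0=[0.05, 0.1], maxfev=10000)
log_fit, _ = curve_fit(logistic, years, observed, p0=[1.0, 15.0, 4.0], maxfev=10000)

# Residual sum of squares for each model; lower means a closer fit.
rss_exp = float(np.sum((observed - exponential(years, *exp_fit)) ** 2))
rss_log = float(np.sum((observed - logistic(years, *log_fit)) ** 2))
print(f"exponential RSS: {rss_exp:.4f}")
print(f"logistic RSS:    {rss_log:.4f}")
```

Whether the real series favor the logistic over the exponential is exactly the thing I would want to check.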

One of my models of technological progress (I obviously also share the model of straightforward exponential growth and assign it substantial probability) is that you have nested and overlapping s-curves, which makes it hard to just look at cost/unit output of any individual field. Overlapping s-curves are also hard to measure because there are obviously feedback effects between different industries (see my self-similarity comment above), and many of the advances in those fields are driven by exogenous factors, like their inputs getting cheaper, with no substantial improvements in their internal methodologies.

For analyzing that hypothesis it seems more useful to hold inputs constant and then look at how cost/unit develops, in order to build a model of that isolated chunk of the system (and then obviously also look at the interaction between industries and systems to get a sense of how they interact). But that's also much harder to do, given that our data is already really messy and noisy.

Comment by habryka on Does Economic History Point Toward a Singularity? · 2020-09-07T23:34:26.390Z · score: 10 (4 votes) · EA · GW

I mean something much more basic. If you have more parameters then you need to have uncertainty about every parameter. So you can't just look at how well the best "3 exponentials" hypothesis fits the data, you need to adjust for the fact that this particular "3 exponentials" model has lower prior probability. That is, even if you thought "3 exponentials" was a priori equally likely to a model with fewer parameters, every particular instance of 3 exponentials needs to be less probable than every particular model with fewer parameters.

Thanks, this was a useful clarification. I agree with this as stated. And I indeed assign substantially more probability to a statement of the form "there were some s-curve-like shifts in humanity's past that made a big difference" than to any specific "these three s-curve-like shifts are what got us to where we are today".
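
As a rough way to formalize the quoted point (my own sketch, not notation from the exchange above): write M for a model class like "3 exponentials" and θ for a particular setting of its parameters. Then

$$P(\theta, M \mid D) \propto P(D \mid \theta, M)\, P(\theta \mid M)\, P(M), \qquad \sum_{\theta \in \Theta} P(\theta \mid M) = 1.$$

Since the prior mass over instantiations has to sum to one, a larger parameter space Θ forces P(θ | M) to be smaller for the typical θ, so each particular "3 exponentials" instance starts out less probable than a particular instance of a model with fewer free parameters, even when P(M) itself is held fixed.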

As far as I can tell this is how basically all industries (and scientific domains) work---people learn by doing and talk to each other and they get continuously better, mostly by using and then improving on technologies inherited from other people.

It's not clear to me whether you are drawing a distinction between modern economic activity and historical cultural accumulation, or whether you feel like you need to see a zoomed-in version of this story for modern economic activity as well, or whether this is a more subtle point about continuous technological progress vs continuous changes in the rate of tech progress, or something else.

Hmm, I don't know, I guess that's just not really how I would characterize most growth? My model is that most industries start with fast s-curve-like growth, then plateau, then often decline. Sure, that is kind of continuous in the analytical sense, but with large positive and negative changes in the derivative of the growth.

And in my personal experience it's also less the case that I and the people I work with just get continuously better, it's more like we kind of flop around until we find something that gets us a lot of traction on something, and then we quickly get much better at the given task, and then we level off again. And it's pretty easy to get stuck in a rut somewhere and be much less effective than I was years ago, or for an organization to end up in a worse equilibrium and broadly get worse at coordinating, or produce much worse output than previously for other reasons.

Of course enough of those stories could itself give rise to a continuous growth story here, but there is a question here about where the self-similarity lies. Like, many s-curves can also give rise to one big s-curve. Just because I have many s-curves doesn't mean I get continuous hyperbolic growth. And so seeing lots of relatively discontinuous s-curves at the small scale does feel like evidence that we should also expect the macro scale to be a relatively small number of discontinuous s-curves (or more precisely, s-curves whose peak is itself heavy-tail distributed, so that if you run a filter for the s-curves that explain most of the change, you end up with just a few that really mattered).

Comment by habryka on Does Economic History Point Toward a Singularity? · 2020-09-07T19:21:07.459Z · score: 22 (10 votes) · EA · GW

I feel really confused about what the actual right priors here are supposed to be. I find the "but X has fewer parameters" argument only mildly compelling, because I feel like other evidence about similar systems that we've observed should easily give us enough evidence to overcome the difference in complexity.

This does mean that a lot of my overall judgement on this question relies on the empirical evidence we have about similar systems, and the concrete gears-level models I have for what has caused growth. AI Impacts' work on discontinuous vs. continuous progress feels somewhat relevant, and evidence from other ecological systems also seems reasonably useful.

When I try to understand at a gears-level what exactly happened in terms of growth, I feel like I tend towards more discontinuous hypotheses, because I have a bunch of very concrete, reasonably compelling-sounding stories of specific things that caused the relevant shifts, while the gears-level models I have for what would cause more continuous growth feel a lot more nebulous and vague to me, in a way that I think usually doesn't correspond to truth. The thing that on the margin would feel most compelling to me for the continuous view is something like a concrete zoomed-in story of how you get continuous growth from a bunch of humans talking to each other and working with each other over a few generations, that doesn't immediately abstract things away into high-level concepts like "knowledge" and "capital".

Comment by habryka on EricHerboso's Shortform · 2020-09-06T02:16:57.416Z · score: 11 (6 votes) · EA · GW

While I agree there is a thing going on here that's kind of messy, I think Dale is making a fine point. I would however pretty strongly prefer it if he wouldn't feign ignorance, and instead just say straightforwardly that he thinks possibly the biggest problem with the thread is not actually the people arguing against racism as a cause area, but the people violating various rules of civility in attacking those who argue against it, together with the application of (as I think he perceives it) a highly skewed double standard in the moderation of those perspectives. That is an assessment I find overall reasonably compelling.

Like, I found Dale's comment useful, while also feeling kind of annoyed by it. Overall, that means I upvoted it, but I agree with you on the general algorithm that I prefer straightforward explicit communication over feigned ignorance, even if the feigned ignorance is obviously satirical, as it is in this case.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-03T17:44:16.491Z · score: 14 (8 votes) · EA · GW

Your actual self-quote is an extremely weak version of this, since 'this might possibly actually happen' is not the same as explicitly saying 'I think this will happen'. The latter certainly does not follow from the former 'by necessity'.

Yeah, sorry, I do think the "by necessity" was too strong. 

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T19:15:12.259Z · score: 11 (8 votes) · EA · GW

I agree that the right strategy for dealing with threats is substantially different from the right strategy for dealing with warnings. I think it's a fair and important point. I am not claiming that it is obvious that absolutely clear-cut blackmail occurred, though overall, aggregating over all the evidence I have, it seems very likely (~85%-90%) to me that a situation game-theoretically similar enough to a classical blackmail scenario has played out. I do think your point that it is really important to assess whether we are dealing with a warning or a threat is one of the key pieces I would want people to model when thinking about situations like this, and so your relatively clear explanation of it is appreciated (as well as the reminder for me to keep the costs of premature retaliation in mind).

Yet you mete out much more meagre measure to others than you demand from them in turn, endorsing fervid hyperbole that paints those who expressed opposition to Munich inviting Hanson as bullies trying to blackmail them, and those sympathetic to the decision they made as selling out.

This just seems like straightforward misrepresentation? What fervid hyperbole are you referring to? I am trying my best to make relatively clear and straightforward arguments in my comments here. I am not perfect and will sometimes get some details wrong, and I am sure there are many things I could do better in my phrasing, but nothing that I wrote on this post strikes me as deserving the phrase "fervid hyperbole".

I also strongly disagree that I am applying some kind of one-sided charity to Hanson here. The only charity that I am demanding is to be open to engaging with people you disagree with, and to be hesitant to call for the cancellation of others without good cause. I am not even demanding that people engage with Hanson charitably. I am only asking that people do not deplatform others based on implicit threats by some other third party they don't agree with, and do not engage in substantial public attacks in response to long-chained associations removed from denotative meaning. I am quite confident I am not doing that here.

Of course, there are lots of smaller things that I think are good for public discourse that I am requesting in addition to this, but I think overall I am running a strategy that seems to me quite compatible with a generalizable maxim that, if followed, would result in good discourse, even with others who substantially disagree with me. Of course, that maxim might not be obvious to you, and I take concerns of one-sided charity seriously, but after having reread every comment of mine on this post in response to this comment, I can't find any place where such an accusation of one-sided charity fits my behavior well.

That said, I prefer to keep this at the object level, at least given that the above really doesn't feel like it would start a productive conversation about conversation norms. But I hope it is clear that I disagree strongly with that characterization of me.

You could still be right - despite the highlighted 'very explicit threat' which is also very plausibly not blackmail, despite the other 'threats' alluded to which seem also plausibly not blackmail and 'fair game' protests for them to make, and despite what the organisers have said (publicly) themselves, the full body of evidence should lead us to infer what really happened was bullying which was acquiesced to. But I doubt it.

That's OK. We can read the evidence in separate ways. I've been trying really hard to understand what is happening here, have talked to the organizers directly, and am trying my best to build models of what the game-theoretically right response is. I expect if we were to dig into our disagreements here more, we would find a mixture of empirical disagreements, and some deeper disagreements about when something constitutes blackmail, or something game-theoretically equivalent. I don't know which direction would be more fruitful to go into. 

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T18:41:19.868Z · score: 11 (4 votes) · EA · GW

No. How does my (3) match up to that option? The thing I am saying is not that we will lose 95% of the people; the thing I am saying is that we are going to lose a large fraction of people either way, and the world where you have tons of people who follow the strategy of distancing themselves from anyone who says things they don't like is a world where you both won't have a lot of people and will have tons of polarization and internal conflict.

How is your summary at all compatible with what I said, given that I explicitly said: 

with the second (the one where we select on tolerance) possibly actually being substantially larger

That by necessity means that I expect the strategy you are proposing to not result in a larger community, at least in the long run. We can have a separate conversation about the exact balance of tradeoffs here, but please recognize that I am not saying the thing you are summarizing me as saying. 

I am specifically challenging the assumption that this is a tradeoff of movement size, using some really straightforward logic of "if you have lots of people who have a propensity to distance themselves from others, they will distance themselves and things will splinter apart". You might doubt that such a general tendency exists, or you might doubt that the inference here is valid and that there are ways to keep such a community of people together either way, but in either case, please don't claim that I am saying something I am pretty clearly not saying.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T06:58:42.549Z · score: 21 (10 votes) · EA · GW

I find it weird that just because I think a point is poorly presented, people think I disagree with the point.

Sorry! I never meant to imply that you disagree with the point. 

My comment in this case is more: How would you have actually wanted Robin Hanson to phrase his point? I've thought about that issue a good amount, and like, I feel like it's just a really hard point to make. I am honestly curious what you would have preferred Hanson to say instead. The thing he said seemed overall pretty clear to me, and really not like an attempt to be intentionally edgy or something; it's more that the point he wanted to make kind of just had a bunch of inconvenient consequences that were difficult to explore (similarly to how utilitarianism quickly gives rise to a number of consequences that are hard to discuss and explore).

My guess is you can probably come up with something better, but that it would take you substantial time (> 10 minutes) of thinking. 

My argument here is mostly: In context, the thing that Robin said seemed fine, and I don't expect that many people who read that blogpost actually found his phrasing that problematic. The thing that I expect to have happened is that some people saw this as an opportunity to make Robin look bad, and used some of his words completely out of context, creating a narrative where he said something he definitely did not say, and that looked really bad.

And while I think the bar of "only write essays that don't really inflame lots of people and cause them to be triggered" is already a high bar to meet, though maybe a potentially reasonable one, the bar of "never write anything that when taken out of context could cause people to be really triggered" is no longer a feasible bar to meet. Indeed it is a bar that is now so high that I no longer know how to make the vast majority of important intellectual points I have to make in order to solve many of the important global problems I want us to solve in my lifetime. The way I understood your comment above, and the usual critiques of that blogpost in particular, is that they were leaning into the out-of-context phrasings of his writing, without really acknowledging the context in which the phrase was used.

I think this is an important point to make, because on a number of occasions I do think Robin has actually said things that seemed much more edgy and unnecessarily inflammatory even if you had the full context of his writing, and I think the case for those being bad is much stronger than the case for that blogpost about "gentle, silent rape" and other things in its reference class being bad. I think Twitter in particular has made some of this a lot worse, since it's much harder to provide much context that helps people comprehend the full argument, and it's much more frequent for things to be taken out of context by others.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T06:11:19.462Z · score: 16 (8 votes) · EA · GW

Because to me, phrases like "gentle, silent rape" seem obviously unnecessarily jarring even as far as twitter discussions about rape go.

I am always really confused when someone brings up this point as a point of critique. The substance of Hanson's post where he used that phrase just seemed totally solid to me. 

I feel like this phrase is always invoked to make the point that Hanson doesn't understand how bad rape is, or that he somehow thinks lots of rape is "gentle" or "silent", but that has absolutely nothing to do with the post where the phrase is used. The phrase isn't even referring to rape itself! 

When people say things like this, my feeling is that they must have not actually read the original post, where the idea of "gentle, silent rape" was used as a way to generate intuitions not about how bad rape is, but about how bad something else is (cuckoldry), and about how our legal system judges different actions in a somewhat inconsistent way. Again, nowhere in that series of posts did Hanson say that rape was in any way not bad, or not traumatic, or not something that we should obviously try to prevent with a substantial fraction of our resources. And given the relatively difficult point he tried to make, which is a good one that I appreciate him making, I feel like his word choice was overall totally fine, if one assumes that others will at the very least read what the phrase refers to, instead of totally removing it from context and using it in a way that has basically nothing to do with how he used it. That is a reasonable assumption to make in a healthy intellectual community.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T01:20:08.686Z · score: 42 (16 votes) · EA · GW

My model of this is that there is a large fraction of beliefs in the normal Overton window of both liberals and conservatives that are not within the Overton window of this community. From a charitable perspective, that makes sense: lots of beliefs that are accepted as gospel in the conservative community seem obviously wrong to me, and I am obviously going to argue against them. The same is true for many beliefs in the liberal community. Since many more members of the community are liberal, we are going to see many more "woke" views argued against, for two separate reasons:

  1. Many people assume that all spaces they inhabit are liberal spaces, the EA community is broadly liberal, and so they feel very surprised if they say something that is accepted as obvious everywhere else and suddenly get questioned here (concrete examples that I've seen in the past, and that I am happy to see questioned, are: "there do not exist substantial cognitive differences between genders", "socialized healthcare is universally good", "we should drastically increase taxes on billionaires", "racism is obviously one of the most important problems to be working on").
  2. There are simply many more liberal people, so you are going to see many more datapoints of "woke" people feeling attacked, because the base rate for conservatives is already so low.

My prediction is that if we were to actually get someone with a relatively central conservative viewpoint, their views would seem even more outlandish to people on the forum, and their perspectives would be attacked even more. Imagine talking about any of the following topics on the forum:

  1. Gay marriage and gay rights are quite bad
  2. Humans are not the result of evolution
  3. The war on drugs is a strongly positive force, and we should increase incarceration rates

(Note, I really don't hang out much in standard conservative circles, so there is a good chance the above are actually all totally outlandish and the result of stereotypes.) 

If I imagine someone bringing up these topics, the response would be absolutely universally negative, to a much larger degree than what we see when woke topics are being discussed. 

The thing that I think actually explains the data is simply that the EA and Rationality communities have a number of opinions that substantially diverge from the opinions held in basically any other large intellectual community, and so if someone comes in and just assumes that everyone shares the context from one of those other communities, they will experience substantial pushback. The most common community for which this happens is the liberal community, since we have substantial overlap, but this would happen with people from basically any community (and I've seen it happen with many people from the libertarian community who sometimes mistakenly believe all of their beliefs are shared in the EA community, and then receive massive pushback as they realize that people are actually overall quite strongly in favor of more redistribution of wealth).

And to be clear, I think this is overall quite good, and I am happy about most of these divergences from both liberal and conservative gospel, since they overall seem to point much closer to the actual truth than what those communities generally accept as true (though I wouldn't at all claim that we are infallible or that this is a uniform trend, and I think there are probably quite a few topics where the divergences point away from the truth, just that the aggregate seems broadly in the right direction to me).

Comment by habryka on evelynciara's Shortform · 2020-09-01T20:10:03.198Z · score: 4 (3 votes) · EA · GW

In an ironic turn of events, you leaving this comment has made it so that the comment can no longer be unpublished (since users can only delete their comments if they have no replies). 

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T18:07:47.348Z · score: 21 (10 votes) · EA · GW

I downvoted the above comment by Khorton (not the one asking for explanations, but the one complaining about the comparison of trolley problems and rape), and I think Larks explained part of the reason pretty well. I read it in substantial parts as an implicit accusation that Robin supports rape, and it also seemed to misunderstand Vaniver's comment in a pretty accusatory way (which meerpirat clarified below), since that comment wasn't at all emphasizing a dimension of trolley problems that would make a comparison with rape unfitting.

I agree that voting quality somewhat deteriorates in more heated debates, but I think this characterization of how voting happens is too uncharitable. I try pretty hard to vote carefully, often change my votes multiple times on a thread if I later realize I was too quick to judge something or misunderstood someone, and really spend a lot of time reconsidering and thinking about my voting behavior with the health of the broader discourse in mind, so I am quite confident that my own voting behavior is mischaracterized by the above.

I've also talked to many other people active on LessWrong and the EA Forum over the years, and a lot of them seem to put a lot of effort into how they vote, so I am reasonably confident many others also spend substantial time thinking about their voting in a way that really isn't well characterized by "roughly morphing barely restricted tribal warfare".

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T17:57:50.348Z · score: 18 (7 votes) · EA · GW

Yes! This was definitely not CEA. I don't have any more info on what organization it is (the organizers just said "an organization").

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-01T02:51:47.072Z · score: 19 (12 votes) · EA · GW

The thing that I am saying is that in order to make space for someone who tries to enforce such norms, we would have to kick many other people out of the community, and stop many others from joining. It is totally fine for people to not attend events that just happen to hit on a topic they are sensitive to, but for someone to completely disengage from a community, and avoid talking to anyone in it, because a speaker at some event had some opinions that they were sensitive to, which weren't even the topic of the announced talk, is obviously going to exert substantial pressure on what kind of discourse is possible with them.

This doesn't seem to fit nicely into the dichotomy you and Ben are proposing here, which just has two options: 

1. They are uncommon

2. They are not valuable

I am proposing a third option which is: 

3. They are common and potentially valuable on their own, but they also impose costs on others that outweigh the benefits of their participation, and that make it hard to build an intellectually diverse community out of people like that. And it's really hard to integrate them into a discourse that might come to unintuitive conclusions if they systematically avoid engaging with any individuals who have expressed, at some point in their public history, any ideas that they are particularly sensitive to.

It seems to me that the right strategy to run if you are triggered by specific topics is to simply avoid engaging with those topics (if you really have no way of overcoming your triggeredness, or if doing so is expensive), but it seems very rarely the right choice to avoid anyone who has ever said anything public about the topic that is triggering you! It seems obvious how that makes it hard for you to be part of an intellectually diverse community.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-31T17:30:32.436Z · score: 21 (14 votes) · EA · GW

No, what I am saying is that unless you want to also enforce conformity, you cannot have a large community of people with different viewpoints who also all believe that you shouldn't associate with people they think are wrong. So the real choice is not between "having all the people who think you shouldn't associate with people who think they are wrong" and "having all the weird intellectually independent people"; it is instead between "having an intellectually uniform and conformist slice of the people who don't want to be associated with others they disagree with" and "having a quite intellectually diverse crowd of people who tolerate dissenting opinions", with the second possibly actually being substantially larger, though generally I don't think size is the relevant constraint to look at here.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-30T23:58:39.195Z · score: 30 (18 votes) · EA · GW

Having participated in a debrief meeting for EA Munich, my assessment is indeed that one of the primary reasons the event was cancelled was fear of disruptors showing up at the event, similar to what has happened at some of Peter Singer's events. Indeed almost all concerns that were brought up during that meeting were concerns about external parties threatening EA Munich, or EA at large, in response to inviting Hanson. There were some minor concerns about Hanson's views qua his views alone, but basically all organizers who spoke at the debrief I was part of said that they were interested in hearing Robin's ideas and would have enjoyed participating in an event with him, and were primarily worried about how others would perceive it and react to inviting him.

As such, blackmail feels like a totally fair characterization of a substantial part of the reason for disinviting Hanson (though definitely not 100% of it).

More importantly, I am really confused why you would claim so confidently that no threats were made. The prior for actions like this being taken in response to implicit threats is really high, and talking to any person who has tried organizing events like this will show you that they have experienced implicit or explicit threats of some form or another. In this situation there was also absolutely not an "apparent absence of people pressuring Munich to 'cancel Hanson'". There was indeed an abundance of threats that were readily visible to anyone looking at the current public intellectual climate, talking to people who are trying to organize public discourse, and seeing how many other people are being actively punished on social media and other places for organizing events like this.

While I don't think this had substantial weight in this specific decision, there was also one very explicit threat made to the organizers at EA Munich, at least if I remember correctly, of an organization removing its official affiliation with them if they were to host Hanson. The organizers assured others at the debrief that this did not play a substantial role in their final decision, but it does at least show that explicit threats were made.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-30T22:44:52.021Z · score: 39 (21 votes) · EA · GW

You'd expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn't lead Speakers B and C to avoid talking to you, or people from backgrounds D and E to avoid joining your community. The people I referred to in the last section feel that some people might feel alienated and unwelcome by the presence of Robin as a speaker; they raised concerns about both his writing and his personal behavior, though the latter points were vague enough that I wound up not including them in the post.

But isn't it basically impossible to build an intellectually diverse community out of people who are unwilling to be associated with people they find offensive or substantially disagree with? It seems really clear that if Speaker B and C avoid talking to you, only because you associated with Speaker A, then they are following a strategy where they are generally not willing to engage with parties that espouse ideas they find offensive, which makes it really hard to create any high level of diversity out of people who follow that strategy (since they will either conform or splinter). 

That is why it's so important not to give in to those people's demands, because building a space where lots of interesting ideas are considered is incompatible with having lots of people who stop engaging with you whenever you believe anything they don't like. I am much more fine with losing out on a speaker who is unwilling to associate with people they disagree with than I am with losing out on a speaker who is willing to tolerate real intellectual diversity, since I actually have a chance to build an interesting community out of people of the second type, and trying to build anything interesting out of the first type seems pretty doomed.

Obviously this is oversimplified, but I think the general gist of the argument carries a lot of weight.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-28T17:33:40.128Z · score: 26 (19 votes) · EA · GW

Some of the people who have spent the most time doing the above came to the conclusion that EA should be more cautious and attentive to diversity.

Edited from earlier comment: I think I am mostly confused about what diversity has to do with this decision. It seems to me that there are many pro-diversity reasons to not deplatform Hanson. Indeed, the primary one cited, one of intellectual diversity and tolerance of weird ideas, is primarily an argument in favor of diversity. So while diversity plays some role, I think I am actually confused about why you bring it up here.

I am saying this because I wanted to argue against things in the last section, but realized that you just use really high-level language like "diversity and inclusion", which is very hard to say anything about. Of course everyone is in favor of some types of diversity, but it feels to me like the last section is trying to say something like "people who talked to a lot of people in the community tend to be more concerned about the kind of diversity that having Robin as a speaker might harm", but I don't actually know whether that's what you mean. But if you do mean it, I think that's mostly backwards, based on the evidence I have seen.

Comment by habryka on EA Forum update: New editor! (And more) · 2020-08-15T21:44:59.344Z · score: 2 (1 votes) · EA · GW

Yeah, I agree with this. Adding footnotes to the new editor is quite a high priority.

Comment by habryka on Donor Lottery Debrief · 2020-08-10T03:47:51.422Z · score: 8 (6 votes) · EA · GW

Earning potential goes down with distance from the Bay (less so in COVID times, but even then it is still true, as many companies still adjust their salaries based on cost of living), which matters because people have friends and spouses who don't want to live an ascetic EA lifestyle.

Also, many, if not most, of these projects could not be started outside of the Bay or any of the other global hubs, because they benefit from being part of an ecosystem of competent people. You could maybe pull them off in other major global cities (like New York, London, Hong Kong, Tokyo), but the rent prices won't differ that much between them, because the demand for being close to all the other good people drives prices up. The best people are in the big cities because that's where the other good people are. Not moving to one of the hubs of the world is career suicide for most people, and in general I am much more optimistic about projects and organizations that are located in one of the global talent hubs, because they get to leverage the local culture, service ecosystem, talent availability and social networks that come with those hubs, which extend far beyond what the EA and Rationality communities can provide on their own.

I know that my effectiveness would have dropped drastically had I moved out of a global hub, and my overall impact trajectory would have been much worse, so I am hesitant to recommend that anyone else do so, at least for the long term (I think temporarily moving to lower-cost places is a good strategy for many people, and many should consider it, but it's not really solving the funding problem much, since I don't really think people should do that for more than 6 months, or maybe at most a year).

Edit: Also COVID changes all of this at least a bit, though I don't really know how much and for how long. But it seems likely to me that the overall trends here are pretty robust and we will continue seeing high prices in the places where I would want people to be located.

Comment by habryka on What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent? · 2020-08-08T09:11:20.702Z · score: 33 (14 votes) · EA · GW

This was posted to a relatively large (> 100 people) but private FB group where various people who were active in EA and animal activism were talking to each other. I can confirm that it is accurate (since I am still part of the group).

Comment by habryka on The 80,000 Hours job board is the skeleton of effective altruism stripped of all misleading ideologies · 2020-08-07T20:00:31.985Z · score: 28 (24 votes) · EA · GW

Hmm, I think I would warn against this framing. In particular the job board systematically omits people working on small projects or organizations that don't really have much of a need for public hiring or recruitment rounds. Some concrete examples: 

  • None of the people the LTFF funds to do research would be represented by a slot on the job board, but I do think it's a viable path for people to take
  • I think there are very few PhD positions advertised on the job board, even though that's obviously a pretty frequent career path, and people can have quite a bit of impact through their PhDs. For example, I see no representation of places like CHAI and MILA, which have many good safety researchers working there.
  • Some projects that I know have hired people recently, but aren't on the job board, presumably because they are hiring from their networks and friends:
    • LessWrong
    • Quantified Uncertainty Research Institute
    • Epidemic Forecasting
    • The EA Hotel
    • Center for Applied Rationality
    • Centre for Effective Altruism
    • And probably many more that have recently started, or have hired, but didn't see much of a need for a public hiring round

Overall, when I look at the job board, the list of jobs feels highly unrepresentative to me (and I am also honestly not very excited about someone working in 90% of these roles, but that's probably a larger disagreement between my thoughts on cause prioritization and 80K's thoughts on cause prioritization).

Comment by habryka on EA Forum update: New editor! (And more) · 2020-08-06T17:40:56.078Z · score: 3 (2 votes) · EA · GW

It's mostly a UI issue. The comments editor has a lot less space to work with, and I haven't yet found a good way to make that UI easily available in the context of comments. In the worst case, you can copy-paste tables and images from the post editor into the comments editor, which I do recognize is annoying.

Comment by habryka on evelynciara's Shortform · 2020-08-04T04:25:23.818Z · score: 2 (1 votes) · EA · GW

Seems to work surprisingly well!

Comment by habryka on EA Forum update: New editor! (And more) · 2020-08-03T19:26:27.904Z · score: 3 (2 votes) · EA · GW

I think we allow markdown tables using this syntax, but I really haven't debugged it very much and it could totally be broken: https://www.markdownguide.org/extended-syntax/#tables
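
For reference, a minimal instance of that syntax (with made-up example content) would be:

| Fund    | Amount |
| ------- | ------ |
| LTFF    | $10k   |
| EA Meta | $5k    |

If that renders as a broken wall of pipes rather than a table, that's the bug.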

Comment by habryka on EA Forum update: New editor! (And more) · 2020-08-02T00:12:38.272Z · score: 3 (2 votes) · EA · GW

In the new editor, when you have your cursor at the beginning of a new line, a small paragraph symbol should appear on the left of the editor. Clicking on that should bring up a menu that includes a new table item.

Comment by habryka on EA Forum update: New editor! (And more) · 2020-08-02T00:11:48.360Z · score: 4 (2 votes) · EA · GW

Huh, no idea why that happens. The hover-previews are not triggered by selection events, but only by mouse-hover events, and have been that way for a long time. My guess is something must have changed in Chrome or maybe in Vimium to make that happen?

Reading through some Github issues for Vimium, it appears that Vimium does indeed send hover events when clicking on a link, so this is intended behavior as far as I can tell (why, I do not know, though I can imagine it overall resulting in a better experience on other websites). I don't currently know how to fix this without breaking it on other devices, so I would mostly treat this as a Vimium bug.

Comment by habryka on EA Forum update: New editor! (And more) · 2020-08-01T06:55:42.122Z · score: 4 (2 votes) · EA · GW

You say "They also now have the ability to edit tag descriptions in a wiki-like fashion", but when someone does something stupid on Wikipedia other people can view the article history and restore old versions. Here it looks like regular users can't do that?

My guess is that this is a temporary bug. The History page should allow users to see any previous revisions that were made, and to compare arbitrary revisions. You can see what it's supposed to look like on LessWrong. With that, restoring previous versions should be pretty easy. I expect the bug will be fixed within a week or so, and until then it probably won't be much of a problem.

Comment by habryka on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-23T04:35:23.409Z · score: 2 (4 votes) · EA · GW

Yep, that's what I was implying.

Comment by habryka on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-22T18:10:10.259Z · score: 3 (3 votes) · EA · GW

This is in contrast to a frequentist perspective, or maybe something close to a "common-sense" perspective, which tends to bucket knowledge into separate categories that aren't easily interchangeable.

Many people make a mental separation between "thinking something is true" and "thinking something is X% likely, where X is high", with one falling into the category of lived experience, and the other falling into the category of "scientific or probabilistic assessment". The first one doesn't require any externalizable evidence and is a fact about the mind; the second is part of a collaborative scientific process that has at its core repeatable experiments, or at least recurring frequencies (see e.g. the frequentist position that it is meaningless to assign probabilities to one-time events).

Under some of these other non-Bayesian interpretations of probability theory, an assignment of probabilities is not valid if you don't associate it with either an experimental setup or some recurring frequency. So under those interpretations you do have an additional obligation to provide evidence and context for your probability estimates, since otherwise they don't really form even a locally valid statement.

Comment by habryka on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T05:55:44.608Z · score: 2 (3 votes) · EA · GW
Isn't self-reported data unreliable?

Yes, but unreliability does not mean that you should just use vague words instead of explicit credences. It's a fine critique to say that people make too many arguments without giving evidence (something I also disagree with, but that isn't the subject of this thread), but you are concretely making the point that it's additionally bad for them to give explicit credences! The credences only help, compared to the vague and ambiguous terms that people would use instead.

Comment by habryka on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T16:50:51.832Z · score: 7 (6 votes) · EA · GW

I mean, very frequently it's useful to just know what someone's credence is. That's often an order of magnitude cheaper to provide, and is often itself quite a bit of evidence. This is like saying that all statements of opinion or expressions of feeling are bad unless they are accompanied by evidence, which seems like it would massively worsen communication.

Comment by habryka on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T05:02:37.889Z · score: 10 (9 votes) · EA · GW
There's also a lot of pseudo-superforecasting, like "I have 80% confidence in this", without any evidence backing up those credences.

From a Bayesian perspective there is no particular reason why you have to provide more evidence if you provide credences, and in general I think there is a lot of value in people providing credences even if they don't provide additional evidence, if only to avoid problems of ambiguous language.

Comment by habryka on EA Forum feature suggestion thread · 2020-07-15T03:58:39.721Z · score: 2 (1 votes) · EA · GW

This is also the case in the new editor! Sorry for not having this for so long!

Comment by habryka on Concern, and hope · 2020-07-09T08:02:34.173Z · score: 27 (8 votes) · EA · GW
witch hunts [...] top-down

The vast majority of witch hunts were not top-down, as far as I remember from my cursory reading on this topic. They were usually driven by mobs and bottom-up social activity, with the church and other higher institutions generally trying to avoid getting involved with them.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-29T04:04:00.294Z · score: 6 (3 votes) · EA · GW

We actually just deployed the ability for users to delete their own comments if they have no children (i.e. no replies) for LessWrong. So I expect that will also be up on the EA Forum within the next few weeks.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-28T07:57:13.744Z · score: 4 (2 votes) · EA · GW

Yeah, I agree with this. I actually think we have an admin-only version of a button that does this, but we ran into some bugs and haven't gotten around to fixing them. I do expect we will do this at some point in the next few months.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-27T04:19:22.051Z · score: 5 (3 votes) · EA · GW

Huh, you're right. I will look into it.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-26T20:23:48.241Z · score: 3 (2 votes) · EA · GW

I am reasonably confident that we use the first image that is used in a post as the preview image, so you can already mostly do this.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-20T17:46:02.013Z · score: 2 (1 votes) · EA · GW

Yeah, this is the current top priority with the new editor rework, and the inability to make this happen was one of the big reasons for why we decided to switch editors. I expect this will happen sometime in the next month or two.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-20T17:43:59.432Z · score: 12 (6 votes) · EA · GW

Alas, I don't think this is possible in the way you are suggesting it here. We can allow submission of a narrow subset of HTML, but indeed one of the most common complaints that we got on the old forum was that many posts had totally inconsistent formatting, because people were submitting all kinds of weird HTML+CSS with differing font sizes for each post, broken formatting on smaller devices, inconsistent text colors, garish formatting, floating images that broke text layout, etc.

Just a week ago I got a bug report about the formatting of your old "Why the tails come apart" post being broken on smaller devices because of the custom HTML you submitted at the time. Indeed, a very large fraction of old LW and EA Forum posts have broken formatting because of the overly permissive editor that old LessWrong and the old EA Forum both had (and I've probably spent at least 10 hours over the last years fixing posts with that kind of broken formatting).

If you want to import something from Google Docs, then exporting it to markdown and using the markdown editor is really the best we can do, and we can ensure that always works reliably. I don't think we can make arbitrary HTML submission work without frustrating tons of readers and authors.

I have also been working a lot on making the new editor work completely seamlessly with Google Docs copy-paste (and indeed there is a lot of special casing to specifically make copy-paste from Google Docs work). The only features that are missing and kind of difficult to do are internal links and footnotes, but I have not discovered any other feature we would want that runs into significant problems (there are some, like left- or right-floating images, that we don't want because they break on smaller devices). So if you ever discover any document that you can't just copy-paste, please send a bug report and I think we can likely make it work.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-18T17:13:22.973Z · score: 10 (7 votes) · EA · GW

That’s actually a lot of what the LessWrong team is currently working on! I don’t know yet whether we want to allow suggesting edits on all posts, but we are planning to allow wiki-like posts that allow people to submit changes.