Posts

Long-Term Future Fund: November 2020 grant recommendations 2020-12-03T12:57:36.686Z
Long-Term Future Fund: April 2020 grants and recommendations 2020-09-18T10:28:20.555Z
Long-Term Future Fund: September 2020 grants 2020-09-18T10:25:04.859Z
Comparing Utilities 2020-09-15T03:27:42.746Z
Long Term Future Fund application is closing this Friday (June 12th) 2020-06-11T04:17:28.371Z
[U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government 2020-04-10T04:54:25.630Z
Request for Feedback: Draft of a COI policy for the Long Term Future Fund 2020-02-05T18:38:24.224Z
Long Term Future Fund Application closes tonight 2020-02-01T19:47:47.051Z
Survival and Flourishing grant applications open until March 7th ($0.8MM-$1.5MM planned for dispersal) 2020-01-28T23:35:59.575Z
AI Alignment 2018-2019 Review 2020-01-28T21:14:02.503Z
Long-Term Future Fund: November 2019 short grant writeups 2020-01-05T00:15:02.468Z
Long Term Future Fund application is closing this Friday (October 11th) 2019-10-10T00:43:28.728Z
Long-Term Future Fund: August 2019 grant recommendations 2019-10-03T18:46:40.813Z
Survival and Flourishing Fund Applications closing in 3 days 2019-10-02T00:13:32.289Z
Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal) 2019-09-09T04:14:02.083Z
Integrity and accountability are core parts of rationality [LW-Crosspost] 2019-07-23T00:14:56.417Z
Long Term Future Fund and EA Meta Fund applications open until June 28th 2019-06-10T20:37:51.048Z
Long-Term Future Fund: April 2019 grant recommendations 2019-04-23T07:00:00.000Z
Major Donation: Long Term Future Fund Application Extended 1 Week 2019-02-16T23:28:45.666Z
EA Funds: Long-Term Future fund is open to applications until Feb. 7th 2019-01-17T20:25:29.163Z
Long Term Future Fund: November grant decisions 2018-12-02T00:26:50.849Z
EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday) 2018-11-21T03:41:38.850Z

Comments

Comment by habryka on My mistakes on the path to impact · 2021-01-17T14:05:01.103Z · EA · GW

Yeah, totally agree that we can find individual instances where the advice is bad. Just seems pretty unlikely for that average to be worse, even just by the lights of the person who is given advice (and ignoring altruistic effects, which presumably are more heavy-tailed).

Comment by habryka on The Folly of "EAs Should" · 2021-01-17T14:02:35.681Z · EA · GW

Huh, I didn't have a sense that Greg's medical degree helped much with his work, but could totally be convinced otherwise.

Thinking more about it, I think I also just fully retract Greg as an example for other reasons. I think for many other people's epistemic states the above goes through, but I wouldn't personally think that he necessarily made the right call.

Comment by habryka on My mistakes on the path to impact · 2021-01-15T20:49:21.645Z · EA · GW

I don't think any of 80k's career advice has caused much harm compared to the counterfactual of not having given that advice at all, so I feel a bit confused about how to think about this. Even the grossest misrepresentation of EtG as the only way to do good, or something like that, still strikes me as better than the current average experience a college graduate has (which is no guidance at all, with all career advice coming from companies trying to recruit you).

Comment by habryka on Things CEA is not doing · 2021-01-15T20:00:47.796Z · EA · GW

Thank you for writing this! I think this is quite helpful. 

Comment by habryka on The ten most-viewed posts of 2020 · 2021-01-15T00:15:57.234Z · EA · GW

Yep, 90% of readers on LW and the EA Forum never vote. And 90% of voters never comment. This holds empirically for lots of forums. 

Comment by habryka on The Folly of "EAs Should" · 2021-01-13T18:57:33.176Z · EA · GW

The "probably" there is just for the case of becoming an AI safety researcher. The argument for why being a doctor seems rarely the right choice does of course not just route through AI Alignment being important. It routes through a large number of alternative careers that seem more promising, many of which are analyzed and listed on 80k's website. That is what my second paragraph was trying to say.

I think if you take into account all of those alternatives, the "probably" turns into a "very likely" and conditioning on "any decent shot" no longer seems necessary to me. 

Comment by habryka on The Folly of "EAs Should" · 2021-01-13T00:02:53.366Z · EA · GW

> Some specialisations for doctors are very high earning. If someone was on the path to being a doctor and could still specialise in one of them, that is what I would suggest as an earning-to-give strategy.

Yeah, I do think this is plausible. When I last did a Fermi estimate on this, I tended to overestimate the lifetime earnings of doctors because I didn't properly account for the many years of additional education required to become one, which often cost a ton of money and of course displace other potential career paths during that same time. So my current guess is that while being a doctor is definitely high-paying, it's not actually that great for EtG.

The key difference here does seem to be whether you are already past the point of finishing your education. After you've finished med school, or maybe even have your own practice, it's pretty likely that being a doctor will be the best way for you to earn lots of money. But if you are trying to decide whether to become a doctor and haven't started med school, I think it's rarely the right choice from an impact perspective.

Comment by habryka on The Folly of "EAs Should" · 2021-01-12T06:15:34.842Z · EA · GW

I do think anyone who has any decent shot at being an AI Safety researcher should probably stop being a doctor and try doing that instead. I do think that many people don't fit that category, though some of the most prominent doctors in the community who quit their jobs (Ryan Carey and Gregory Lewis) have fit that bill, and I am exceptionally glad that they made that decision.

I don't currently know of a reliable way to actually do a lot of good as a doctor. As such, I don't know why, from an impact perspective, I should suggest that people continue being doctors. Of course there are outliers, but as career advice goes, it strikes me as one of the most reliably bad decisions I've seen people make. It also seems like a pretty reliably bad choice from a personal perspective, with depression and suicide rates among doctors far above the population average.

Comment by habryka on The Folly of "EAs Should" · 2021-01-11T02:54:07.320Z · EA · GW

Yeah, I do think the selection effects here are substantial.

I do think I can identify multiple other very similarly popular pieces of advice that turned out to be bad reasonably frequently, and caused people to regret their choices, which is evidence that the selection effects aren't completely overdetermining the outcome.

Concretely, I think I know of a good number of people who regret taking the GWWC pledge, a good number of people who regret trying to get an ML PhD, and a good number of people who regret becoming active in policy. I do think those pieces of advice are a bit more controversial than the "don't become a doctor" advice within the EA Community, so the selection effects are less strong, but I do think the selection effects are not strong enough to make reasoning from experience impossible here.

Comment by habryka on The Folly of "EAs Should" · 2021-01-09T19:11:59.449Z · EA · GW

(I don't know of a practical scenario where either of those turned out to be bad advice, and I know of multiple times when it saved someone from choosing a career that would have been much worse in terms of impact, so I don't think I understand why you think it's bad advice. At least for the people I know it seems to have been really good advice, at least the doctor part.)

Comment by habryka on EA Forum feature suggestion thread · 2021-01-08T20:13:53.349Z · EA · GW

Yeah, I like it. Does seem like a good thing to have.

Comment by habryka on vaidehi_agarwalla's Shortform · 2020-12-20T04:59:44.011Z · EA · GW

I think it's most likely if the LessWrong team decides to run a conference, and then after looking into alternatives for a bit, decides that it's best to just build our own thing. 

I think it's much more likely if LW runs a conference than if CEA runs another conference, not because I would want to prioritize a LW conference app over an EAG app, but because I expect the first version of it to be pretty janky, and I wouldn't want to inflict that on the poor CEA team without being the people who built it directly and know in which ways it might break. 

Comment by habryka on vaidehi_agarwalla's Shortform · 2020-12-19T01:55:00.912Z · EA · GW

It seems plausible, though overall not that likely, to me that maybe the LessWrong team should just build our own conference platform into the forum. We might look into that next year as we are also looking to maybe organize some conferences.

Comment by habryka on Introducing Animal Advocacy Africa · 2020-12-19T01:38:48.590Z · EA · GW

I don't think I understand? This article doesn't seem to say anything that isn't publicly available about the ProVeg grants program, and doesn't seem to claim any affiliation with it.

Comment by habryka on Long-Term Future Fund: Ask Us Anything! · 2020-12-11T03:20:57.585Z · EA · GW

It seems to me that one of the biggest problems with the world is that only a small fraction of the people who do a really large amount of good get rewarded much for it. It seems likely that this prevents many people from pursuing doing a lot of good with their lives.

My favorite way of solving this kind of issue is with Impact Certificates, an idea that has a decent amount of writing on it, and you can think of the above as just buying about $100M of impact certificates for the relevant people (in practice I expect that if you get a good impact certificate market going, which is a big if, you could productively spend substantially more than $1B).

Comment by habryka on Long-Term Future Fund: Ask Us Anything! · 2020-12-06T05:07:01.528Z · EA · GW

$1B is a lot. It also gets really hard if I don't get to distribute it to other grantmakers. Here are some really random guesses. Please don't hold me to this, I have thought about this topic some, but not under these specific constraints, so some of my ideas will probably be dumb.

My guess is I would identify the top 20 people who seem to be doing the best work around long-term-future stuff, and give each of them at least $10M, which would allow each of them to reliably build an exoskeleton around themselves and increase their output.

My guess is that I would then invest a good chunk more into scaling up LessWrong and the EA Forum, and make it so that I could distribute funds to researchers working primarily on those forums (while building a system for peer evaluation to keep researchers accountable). My guess is this could consume another $100M over the next 10 years or so. 

I expect it would take me at least a decade to distribute that much money. I would definitely continue taking in applications for organizations and projects from people and kind of just straightforwardly scale up LTFF spending of the same type, which I think could take another $40M over the next decade.

I think I would spend a substantial amount of money on prizes for people who seem to have done obviously really good things for the world. Giving $10M to Sci-Hub seems worth it. Maybe giving $5M to Daniel Ellsberg as a prize for his lifetime achievements. There are probably more people in this reference class who seem to me to have done heroic things but haven't been even remotely well enough rewarded (like, it seems obvious that I would have wanted Einstein to die with at least a few million in the bank, so righting wrongs of that reference class seems valuable, though Einstein did at least get a Nobel prize). My guess is one could spend another $100M this way.

It seems pretty plausible that one should consider buying a large newspaper with that money and optimizing it for actual careful analysis without the need for ads. This seems pretty hard, but also, I really don't like the modern news landscape, and it doesn't take that much money to run even a large newspaper like the Washington Post, so I think this is pretty doable. But I do think it has the potential to take a good chunk of the $1B, so I am pretty unsure whether I would do it, even if you were to force me to make a call right now (for reference, the Washington Post was acquired for $250M).

I would of course also just pay my fair share toward all the good organizations that currently get funded by Open Phil. My guess is that would take about $100M over the next decade.

I would probably keep a substantial chunk in reserve for worlds where some kind of quick pivotal action is needed that requires a lot of funds. Like, I don't know, a bunch of people pooling money for a last-minute acquisition of DeepMind or something to prevent an acute AI risk threat.

If I had the money right now I would probably pay someone to run a $100K-$1M study of the effects of Vitamin D on COVID. It's really embarrassing that we don't have more data on that yet, even though it has such a large effect.

Maybe I would try something crazy like getting permission to establish a new city in some U.S. state that I would try to make into a semi-libertarian utopia and get all the good people to move there? But like, that sure doesn't seem like it would straightforwardly work out. Also, it seems like it would cost substantially more than $1B.

Comment by habryka on Long-Term Future Fund: Ask Us Anything! · 2020-12-06T04:46:07.854Z · EA · GW

Thank you! 

I am planning to respond to this in more depth, but it might take me a few days longer, since I want to do a good job with it. So please forgive me if I don't get around to this before the end of the AMA.

Comment by habryka on Long-Term Future Fund: Ask Us Anything! · 2020-12-06T04:44:57.840Z · EA · GW

At least for me the answer is yes: I think the arguments for the hinge-of-history hypothesis are pretty compelling, and I have not seen any compelling counterarguments. I think the comments on Will's post (which is the only post I know of arguing against the hinge-of-history hypothesis) are basically correct and remove almost all of the basis I can see for Will's arguments. See also Buck's post on the same topic.

Comment by habryka on Long-Term Future Fund: Ask Us Anything! · 2020-12-05T21:48:30.893Z · EA · GW

I agree that both of these are among the top 5 things that I've encountered that make me unexcited about a grant.

Comment by habryka on Long-Term Future Fund: Ask Us Anything! · 2020-12-05T21:46:19.681Z · EA · GW

I agree that both of these are among our biggest mistakes.

Comment by habryka on Long-Term Future Fund: Ask Us Anything! · 2020-12-05T07:17:39.785Z · EA · GW

We've definitely written informally things like "this is what would convince me that this grant was a good idea", but we don't have a more formalized process for writing down specific objective operationalizations that we all forecast on.

Comment by habryka on Long-Term Future Fund: Ask Us Anything! · 2020-12-05T03:36:08.440Z · EA · GW

Most common is definitely that something doesn't really seem very relevant to the long-term future (concrete example: "Please fund this local charity that helps people recycle more"). This is probably driven by people applying with the same project to lots of different grant opportunities, at least that's how the applications often read. 

I would have to think a bit more about patterns that apply to the applications that pass the initial filter (i.e. are promising enough to be worth a deeper investigation).

Comment by habryka on Long-Term Future Fund: Ask Us Anything! · 2020-12-05T03:23:45.340Z · EA · GW

We haven't historically done this. As someone who has tried pretty hard to incorporate forecasting into my work at LessWrong, my sense is that it actually takes a lot of time until you can get a group of 5 relatively disagreeable people to agree on an operationalization that makes sense to everyone, and so this isn't really super feasible to do for lots of grants. I've made forecasts for LessWrong, and usually creating a set of forecasts that actually feels useful in assessing our performance takes me at least 5-10 hours.

It's possible that other people are much better at this than I am, but this makes me kind of hesitant to use at least classical forecasting methods as part of LTFF evaluation. 

Comment by habryka on My mistakes on the path to impact · 2020-12-04T23:38:47.845Z · EA · GW

Thank you! I really appreciate people writing up reflections like this.

Comment by habryka on Long-Term Future Fund: Ask Us Anything! · 2020-12-04T20:45:10.751Z · EA · GW

Ah yeah, I also think that if the opportunity presents itself we could grow into this role a good amount. Though on the margin I think it's more likely we are going to invest even more into early-stage expertise and maybe do more active early-stage grantmaking.

Comment by habryka on Long-Term Future Fund: Ask Us Anything! · 2020-12-04T19:28:43.389Z · EA · GW

Speaking just for myself on why I tend to prefer the smaller individual grants: 

Currently when I look at the funding landscape, it seems that without the LTFF there would be a pretty big hole in available funding for projects to get off the ground and for individuals to explore interesting new projects or enter new domains. Open Phil very rarely makes grants smaller than ~$300k, and many other donors don't really like giving to individuals and early-stage organizations because they often lack established charity status, which makes donations to them non-tax-deductible.

CEA has set up infrastructure to allow tax-deductible grants to individuals and organizations without charity status, and the fund itself seems well-suited to evaluating these individuals and organizations, since we all have pretty wide networks and can pretty quickly gather good references on people working on projects that don't yet have an established track record.

I think in a world without Open Phil or the Survival and Flourishing Fund, much more of our funding would go to established organizations. 

Separately, I also personally view a lot of the intellectual work to be done on the Long Term Future as quite compatible with independent researchers asking for grants for just themselves, or maybe small teams around them. This feels kind of similar to how academic funding is often distributed, and I think it makes sense for domains where a lot of people should explore a lot of different directions and where we have set up infrastructure so that researchers and distillers can make contributions without necessarily needing a whole organization around them (which I think the EA Forum enables pretty well).

In addition to both of those points, I also think evaluating organizations requires a somewhat different skillset than evaluating individuals and small team projects, and we are currently better at the second than the first (though I think we would reskill if organizational grants looked likely to become more important again).

Comment by habryka on Long-Term Future Fund: Ask Us Anything! · 2020-12-04T19:19:09.297Z · EA · GW

Yeah, I am also pretty worried about this. I don't think we've figured out a great solution to this yet. 

I think we don't really have sufficient capacity to evaluate organizations on an ongoing basis and provide good accountability. Like, if a new organization were to be funded by us and then grow to a budget of $1M a year, I don't feel like we have the capacity to evaluate their output and impact sufficiently well to justify giving them $1M each year (or even just $500k). 

Our current evaluation process feels pretty good for smaller projects, and for granting to established organizations that have other active evaluators looking into them whom we can talk to, but it doesn't feel very well-suited to larger organizations that don't have existing evaluations done on them (there is a lot of due diligence work to be done there that I think requires more staff capacity than we have).

I also think the general process of the LTFF specializing more into something like venture funding, with other funders stepping in for more established organizations, feels pretty good to me. I do think the current process has a lot of unnecessary uncertainty and risk in it, and I would like to work on that. So one thing I've been trying to get better at is predicting which projects could get long-term funding from other funders, and trying to help projects get to a place where they can receive long-term funding from more than just the LTFF.

Capital wise, I also think that we don't really have the funding to support organizations over longer periods of time. I.e. supporting 3 organizations at $500k a year would take up almost all of our budget, and I think it's not worth trading that off against the other smaller grants we've historically been making. But it is one of the most promising ways I would want to use additional funds we could get.

Comment by habryka on Introducing High Impact Athletes · 2020-12-04T18:56:35.930Z · EA · GW

Sure, the same basic argument applies to satisficing (which is just a limited form of optimizing, so it really doesn't change my argument). I find the assertion that this would not trade off at all against effectiveness highly dubious.

I think it's pretty reasonable to argue that the impact would be small, but saying that it is non-existent just seems really unlikely to me. It implies that if the founder were completely aware of the same information, but just didn't treat it as a strict target to meet, they would not be able to make any better tradeoffs favoring effectiveness in any practical situation.

> in practice I reject the premise that including demographic diversity in one's recruitment calculus will always harm team effectiveness

Maybe this is what you meant, but of course it will not always harm team effectiveness, the same way that playing the lottery will not always lose you money. In expectation it sure seems like it will, though, if only a little.

Comment by habryka on Introducing High Impact Athletes · 2020-12-04T05:50:45.638Z · EA · GW

> Of course I don't recommend sacrificing things like team cohesion or effectiveness for the sake of demographic diversity, but if that is a real tradeoff that a founder faces in practice,

It is generally really surprising if adding an additional constraint to a project does not make it harder to optimize for a specific goal. So of course those trade off against each other in practice. I hope we can preserve the specific meaning of words like this on the EA Forum. 

This doesn't mean it isn't worth optimizing for as well, but of course optimizing for demographic diversity will trade off some against effectiveness and team cohesion, and I want us to at least recognize that.

Comment by habryka on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-17T21:35:32.105Z · EA · GW

Ah, yeah, that's definitely a browser I haven't tested very much, and it would make sense for it to store less. Really sorry for that experience!

Comment by habryka on Why those who care about catastrophic and existential risk should care about autonomous weapons · 2020-11-16T18:17:46.955Z · EA · GW

We usually still save the comment in your browser's local storage, so it being gone is a bit surprising. I can look into it and see whether I can reproduce it (though it's also plausible you had some settings activated that prevented local storage from working, like incognito mode on some browsers).
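
Roughly, the mechanism is something like the sketch below (hypothetical code, not the actual Forum implementation; the function names and storage key are made up for illustration): persist the draft on every edit, and detect environments where localStorage is unavailable, which is what happens in some incognito modes.

```typescript
// Hypothetical sketch of comment-draft persistence via localStorage.
// Not the actual LessWrong/EA Forum code; names are illustrative.

function localStorageAvailable(): boolean {
  try {
    const probe = "__draft_probe__";
    window.localStorage.setItem(probe, probe);
    window.localStorage.removeItem(probe);
    return true;
  } catch {
    // Some browsers throw in private/incognito mode or when storage is disabled.
    return false;
  }
}

// Called on every edit of the comment box.
function saveDraft(postId: string, text: string): void {
  if (!localStorageAvailable()) return; // silently degrade, as described above
  window.localStorage.setItem(`comment-draft:${postId}`, text);
}

// Called when the comment box is re-opened.
function restoreDraft(postId: string): string | null {
  if (!localStorageAvailable()) return null;
  return window.localStorage.getItem(`comment-draft:${postId}`);
}
```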

Comment by habryka on Thoughts on whether we're living at the most influential time in history · 2020-11-06T00:25:35.815Z · EA · GW

(It appears you dropped a closing parenthesis in this comment)

Comment by habryka on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-16T17:47:09.373Z · EA · GW

I would actually be really interested in talking to someone like Baumeister at an event, or ideally someone a bit more careful. I do think I would be somewhat unhappy to see them given just a talk with Q&A, with no natural place to provide pushback and followup discussion, but if someone were to organize an event with Baumeister debating some EA with opinions on scientific methodology, I would love to attend that.

Comment by habryka on Sign up for the Forum's email digest · 2020-10-15T21:41:58.929Z · EA · GW

You can also subscribe to tags, by going to the tag page and clicking the "Subscribe" button. For those notifications you can also choose frequency, in the notification settings on your profile.

Comment by habryka on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-14T19:39:09.603Z · EA · GW

I cannot find any section of this article that sounds like this hypothesis, so I am pretty confident the answer is that no, that is not what the article says.  The article responds relatively directly to this: 

> Of course, being a prolific producer of premium prioritisation posts doesn’t mean we should give someone a free pass for behaving immorally. For all that EAs are consequentialists, I don’t think we should ignore wrongdoing ‘for the greater good’. We can, I hope, defend the good without giving carte blanche to the bad, even when both exist within the same person.

Comment by habryka on Open and Welcome Thread: October 2020 · 2020-10-10T18:34:08.056Z · EA · GW

Not particularly hard. My guess is half an hour of work or so, maybe another half hour to really make sure that there are no UI bugs.

Comment by habryka on Apply to EA Funds now · 2020-09-16T22:08:10.437Z · EA · GW

Yep, seems like that's the wrong link. Here is the fixed link: https://app.effectivealtruism.org/funds/far-future

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-09T04:17:26.079Z · EA · GW

Just to be clear, I don't think even most neoreactionaries would classify as white nationalists? Though maybe now we are arguing over the definition of white nationalism, which is definitely a vague term and could be interpreted many ways. I was thinking about it from the perspective of racism, though I can imagine a much broader definition that includes something more like "advocating for nations based on values historically associated with whiteness", which would obviously include neoreaction, but would also presumably be a much more tenable position in discourse. So for now I am going to assume you mean something much more straightforwardly based on racial superiority, which also appears to be the Wikipedia definition.

I've debated with a number of neoreactionaries, and I've never seen them bring up much stuff about racial superiority. Usually they are just arguing against democracy and in favor of centralized control, plus various arguments derived from that, though I also don't have a ton of datapoints. There is definitely a focus on the superiority of western culture in their writing and rhetoric, much of which is flawed, and I am deeply opposed to many of the things I've seen at least some neoreactionaries propose, but my sense is that I wouldn't characterize the philosophy as fundamentally white nationalist in the racist sense of the term. Though of course the few neoreactionaries that I have debated are probably selected in various ways that reduce the likelihood of having extreme opinions on these dimensions (though they are also the ones most likely to engage with EA, so I do think the sample should carry substantial weight).

Of course, some neoreactionaries are also going to be white nationalists, and being a neoreactionary will probably correlate with white nationalism at least a bit, but my guess is that at least the people adjacent to EA and Rationality that I've seen engage with that philosophy haven't been very focused on white nationalism, and I've frequently seen them actively argue against it.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-09T01:12:32.391Z · EA · GW

Describing members of Leverage as "white nationalists" strikes me as pretty extreme, to the level of dishonesty, and is not even backed up by the comment that was linked. I thought Buck's initial comment was also pretty bad, and he did indeed correct his comment, which is a correction that I appreciate, and I feel like any comment that links to it should obviously also take into account the correction.

I have interfaced a lot with people at Leverage, and while I have many issues with the organization, saying that many white nationalists congregate there, and have congregated in the past, just strikes me as really unlikely. 

Buck's comment also says at the bottom: 

> Edited to add (Oct 08 2019): I wrote "which makes me think that it's likely that Leverage at least for a while had a whole lot of really racist employees." I think this was mistaken and I'm confused by why I wrote it. I endorse the claim "I think it's plausible Leverage had like five really racist employees". I feel pretty bad about this mistake and apologize to anyone harmed by it.

I also want us to separate "really racist" from "white nationalist" which are just really not the same term, and which appear to me to be conflated via the link above.

I also have other issues with the rest of the comment (namely that being constantly worried about communists or Nazis hiding everywhere, and generally bringing up Nazi comparisons in these discussions, tends to reliably derail things and make these topics harder to discuss well, since there are few conversational moves as mindkilling as accusing the other side of being Nazis or communists. It's not that there are never Nazis or communists, but if you want to have a good conversation, it's better to avoid Nazi or communist comparisons until you really have no other choice, or until you can really, really commit to handling the topic in an open-minded way.)

Comment by habryka on Does Economic History Point Toward a Singularity? · 2020-09-08T22:55:15.676Z · EA · GW

> To me, the graph with a summary of all trends only seems to have very few that at first glance look a bit like s-curves. But I agree one would need to go beyond eyeballing to know for sure.

Yeah, that was the one I was looking at. From very rough eyeballing, it looks like a lot of them have slopes that level off, but it's obviously super hard to tell just by eyeballing. I might try to find the data and actually check.

Comment by habryka on Does Economic History Point Toward a Singularity? · 2020-09-08T17:43:34.769Z · EA · GW

Note: Actually looking at the graphs in Farmer & Lafond (2016), many of these sure do seem pretty S-curve shaped. As do many of the diagrams in Nagy et al. (2013). I would have to run some real regressions to look at it (something like the sketch below), but in particular the ones in Farmer & Lafond seem pretty compatible with the basic s-curve model.
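
To be concrete about what I mean by "real regressions", here is a minimal sketch that fits a logistic (S-curve) by brute-force grid search over its three parameters. The series in the sketch is synthetic, generated from a known curve purely to check that the procedure recovers the parameters; an actual check would substitute the published technology series from Farmer & Lafond (2016) or Nagy et al. (2013) and compare the S-curve fit against an exponential fit. Treat this as an illustration of the approach, not the analysis itself.

```typescript
// Sketch: fit y(t) = L / (1 + exp(-k * (t - t0))) by minimizing squared error
// over a coarse parameter grid. Illustrative only; real data would replace the
// synthetic series below.

type Point = { t: number; y: number };

function logistic(t: number, L: number, k: number, t0: number): number {
  return L / (1 + Math.exp(-k * (t - t0)));
}

function squaredError(data: Point[], L: number, k: number, t0: number): number {
  return data.reduce((acc, p) => {
    const r = p.y - logistic(p.t, L, k, t0);
    return acc + r * r;
  }, 0);
}

function fitLogistic(data: Point[]): { L: number; k: number; t0: number; error: number } {
  const yMax = Math.max(...data.map(p => p.y));
  const ts = data.map(p => p.t);
  let best = { L: yMax, k: 0.1, t0: ts[0], error: Infinity };
  // Coarse grid search: good enough for eyeballing whether an S-curve fits.
  for (let L = yMax; L <= 3 * yMax; L += 0.1 * yMax) {
    for (let k = 0.01; k <= 2; k += 0.01) {
      for (const t0 of ts) {
        const error = squaredError(data, L, k, t0);
        if (error < best.error) best = { L, k, t0, error };
      }
    }
  }
  return best;
}

// Synthetic series from a known S-curve plus a little noise, as a sanity check
// that the fit recovers roughly L = 100, k = 0.3, t0 = 25.
const synthetic: Point[] = Array.from({ length: 50 }, (_, i) => ({
  t: i,
  y: logistic(i, 100, 0.3, 25) + (Math.random() - 0.5) * 2,
}));

console.log(fitLogistic(synthetic));
```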

Overlapping S-curves are also hard to measure because obviously there are feedback effects between different industries (see my self-similarity comment above). Many of the advances in those fields are driven by exogenous factors, like their inputs getting cheaper, with no substantial improvements in their internal methodologies. One of my models of technological progress (I obviously also share the model of straightforward exponential growth and assign it substantial probability) is that you have nested and overlapping S-curves, which makes it hard to just look at cost/unit output of any individual field. 

For analyzing that hypothesis it seems more useful to hold inputs constant and then look at how cost/unit develops, in order to build a model of that isolated chunk of the system (and then obviously also look at the interaction between industries and systems to get a sense of how they interact). But that's also much harder to do, given that our data is already really messy and noisy.

Comment by habryka on Does Economic History Point Toward a Singularity? · 2020-09-07T23:34:26.390Z · EA · GW

> I mean something much more basic. If you have more parameters then you need to have uncertainty about every parameter. So you can't just look at how well the best "3 exponentials" hypothesis fits the data, you need to adjust for the fact that this particular "3 exponentials" model has lower prior probability. That is, even if you thought "3 exponentials" was a priori equally likely to a model with fewer parameters, every particular instance of 3 exponentials needs to be less probable than every particular model with fewer parameters.

Thanks, this was a useful clarification. I agree with this as stated. And I indeed assign substantially more probability to a statement of the form "there were some s-curve like shifts in humanity's past that made a big difference" than to any specific "these three specific s-curve like shifts are what got us to where we are today".

> As far as I can tell this is how basically all industries (and scientific domains) work---people learn by doing and talk to each other and they get continuously better, mostly by using and then improving on technologies inherited from other people.

> It's not clear to me whether you are drawing a distinction between modern economic activity and historical cultural accumulation, or whether you feel like you need to see a zoomed-in version of this story for modern economic activity as well, or whether this is a more subtle point about continuous technological progress vs continuous changes in the rate of tech progress, or something else.

Hmm, I don't know, I guess that's just not really how I would characterize most growth? My model is that most industries start with fast s-curve like growth, then plateau, then often decline. Sure, kind of continuously in the analytical sense, but with large positive and negative changes in the derivative of the growth. 

And in my personal experience it's also less the case that I and the people I work with just get continuously better; it's more like we kind of flop around until we find something that gets us a lot of traction, and then we quickly get much better at the given task, and then we level off again. And it's pretty easy to get stuck in a rut somewhere and be much less effective than I was years ago, or for an organization to end up in a worse equilibrium and broadly get worse at coordinating, or produce much worse output than previously for other reasons.

Of course enough of those stories could themselves give rise to a continuous growth story, but there is a question here about where the self-similarity lies. Like, many s-curves can also give rise to one big s-curve. Just because I have many s-curves doesn't mean I get continuous hyperbolic growth. And so seeing lots of relatively discontinuous s-curves at the small scale does feel like evidence that we should also expect the macro scale to be a relatively small number of discontinuous s-curves (or more precisely, s-curves whose peaks are themselves heavy-tail distributed, so that if you run a filter for the s-curves that explain most of the change, you end up with just a few that really mattered).

Comment by habryka on Does Economic History Point Toward a Singularity? · 2020-09-07T19:21:07.459Z · EA · GW

I feel really confused about what the actual right priors here are supposed to be. I find the "but X has fewer parameters" argument only mildly compelling, because I feel like other evidence about similar systems that we've observed should easily give us enough evidence to overcome the difference in complexity.

This does mean that a lot of my overall judgement on this question relies on the empirical evidence we have about similar systems, and the concrete gears-level models I have for what has caused growth. AI Impacts' work on discontinuous vs. continuous progress feels somewhat relevant, and evidence from other ecological systems also seems reasonably useful.

When I try to understand what exactly happened in terms of growth at a gears level, I feel like I tend towards more discontinuous hypotheses, because I have a bunch of very concrete, reasonably compelling-sounding stories of specific things that caused the relevant shifts, and while I have some gears-level models for what would cause more continuous growth, they feel a lot more nebulous and vague to me, in a way that I think usually doesn't correspond to truth. The thing that on the margin would feel most compelling to me for the continuous view is something like a concrete, zoomed-in story of how you get continuous growth from a bunch of humans talking to each other and working with each other over a few generations, one that doesn't immediately abstract things away into high-level concepts like "knowledge" and "capital".

Comment by habryka on EricHerboso's Shortform · 2020-09-06T02:16:57.416Z · EA · GW

While I agree there is a thing going on here that's kind of messy, I think Dale is making a fine point. I would, however, pretty strongly prefer it if he wouldn't feign ignorance and would instead just say straightforwardly that he thinks possibly the biggest problem with the thread is not actually the people arguing against racism as a cause area, but the people violating various rules of civility in attacking those who argue against it, and the application of (as I think he perceives it) a highly skewed double standard in the moderation of those perspectives, an assessment I find overall reasonably compelling.

Like, I found Dale's comment useful, while also feeling kind of annoyed by it. Overall, that means I upvoted it, but I agree with you on the general algorithm that I prefer straightforward explicit communication over feigned ignorance, even if the feigned ignorance is obviously satirical, as it is in this case.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-03T17:44:16.491Z · EA · GW

> Your actual self-quote is an extremely weak version of this, since 'this might possibly actually happen' is not the same as explicitly saying 'I think this will happen'. The latter certainly does not follow from the former 'by necessity'.

Yeah, sorry, I do think the "by necessity" was too strong. 

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T19:15:12.259Z · EA · GW

I agree that the right strategy to deal with threats is substantially different from the right strategy to deal with warnings. I think it's a fair and important point. I am not claiming that it is obvious that absolutely clear-cut blackmail occurred, though I think that overall, aggregating over all the evidence I have, it seems very likely (~85%-90%) to me that a situation game-theoretically similar enough to a classical blackmail scenario has played out. I do think your point about how important it is to assess whether we are dealing with a warning or a threat is one of the key pieces I would want people to model when thinking about situations like this, and so your relatively clear explanation of it is appreciated (as well as the reminder for me to keep the costs of premature retaliation in mind).

> Yet you mete out much more meagre measure to others than you demand from them in turn, endorsing fervid hyperbole that paints those who expressed opposition to Munich inviting Hanson as bullies trying to blackmail them, and those sympathetic to the decision they made as selling out.

This just seems like straightforward misrepresentation? What fervid hyperbole are you referring to? I am trying my best to make relatively clear and straightforward arguments in my comments here. I am not perfect and sometimes will get some details wrong, and I am sure there are many things I could do better in my phrasing, but nothing that I wrote on this post strikes me as being deserving of the phrase "fervid hyperbole". 

I also strongly disagree that I am applying some kind of one-sided charity to Hanson here. The only charity that I am demanding is to be open to engaging with people you disagree with, and to be hesitant to call for the cancellation of others without good cause. I am not even demanding that people engage with Hanson charitably. I am only asking that people do not deplatform others based on implicit threats by some other third party they don't agree with, and do not engage in substantial public attacks in response to long-chained associations removed from denotative meaning. I am quite confident I am not doing that here.

Of course, there are lots of smaller things that I think are good for public discourse that I am requesting in addition to this, but I think overall I am running a strategy that seems quite compatible with a generalizable maxim that, if followed, would result in good discourse, even with others who substantially disagree with me. Of course, that maxim might not be obvious to you, and I take concerns of one-sided charity seriously, but after having reread every comment of mine on this post in response to this comment, I can't find any place where such an accusation of one-sided charity fits my behavior well.

That said, I prefer to keep this at the object level, at least given that the above really doesn't feel like it would start a productive conversation about conversation norms. But I hope it is clear that I strongly disagree with that characterization of me.

> You could still be right - despite the highlighted 'very explicit threat' which is also very plausibly not blackmail, despite the other 'threats' alluded to which seem also plausibly not blackmail and 'fair game' protests for them to make, and despite what the organisers have said (publicly) themselves, the full body of evidence should lead us to infer what really happened was bullying which was acquiesced to. But I doubt it.

That's OK. We can read the evidence in separate ways. I've been trying really hard to understand what is happening here, have talked to the organizers directly, and am trying my best to build models of what the game-theoretically right response is. I expect if we were to dig into our disagreements here more, we would find a mixture of empirical disagreements, and some deeper disagreements about when something constitutes blackmail, or something game-theoretically equivalent. I don't know which direction would be more fruitful to go into. 

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T18:41:19.868Z · EA · GW

No. How does my (3) match up to that option? The thing I am saying is not that we will lose 95% of the people; the thing I am saying is that we are going to lose a large fraction of people either way, and the world where you have tons of people who follow the strategy of distancing themselves from anyone who says things they don't like is a world where you both won't have a lot of people and will have tons of polarization and internal conflict.

How is your summary at all compatible with what I said, given that I explicitly said: 

> with the second (the one where we select on tolerance) possibly actually being substantially larger

That by necessity means that I expect the strategy you are proposing to not result in a larger community, at least in the long run. We can have a separate conversation about the exact balance of tradeoffs here, but please recognize that I am not saying the thing you are summarizing me as saying. 

I am specifically challenging the assumption that this is a tradeoff of movement size, using some really straightforward logic of "if you have lots of people who have a propensity to distance themselves from others, they will distance themselves and things will splinter apart". You might doubt that such a general tendency exists, you might doubt that the inference here is valid and that there are ways to keep such a community of people together either way, but in either case, please don't claim that I am saying something I am pretty clearly not saying.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T06:58:42.549Z · EA · GW

> I find it weird that just because I think a point is poorly presented, people think I disagree with the point.

Sorry! I never meant to imply that you disagree with the point. 

My comment in this case is more: how would you have actually wanted Robin Hanson to phrase his point? I've thought about that issue a good amount, and I feel like it's just a really hard point to make. I am honestly curious what other thing you would have preferred Hanson to say instead. The thing he said seemed overall pretty clear to me, and really not like an attempt to be intentionally edgy or something, and more that the point he wanted to make just had a bunch of inconvenient consequences that were difficult to explore (similarly to how utilitarianism quickly gives rise to a number of consequences that are hard to discuss and explore).

My guess is you can probably come up with something better, but that it would take you substantial time (> 10 minutes) of thinking. 

My argument here is mostly: In context, the thing that Robin said seemed fine, and I don't expect that many people who read that blogpost actually found his phrasing that problematic. The thing that I expect to have happened is that some people saw this as an opportunity to make Robin look bad, and use some of the words he said completely out of context, creating a narrative where he said something he definitely did not say, and that looked really bad. 

And while I think the bar of "only write essays that don't really inflame lots of people and cause them to be triggered" is already a high bar to meet, though maybe a reasonable one, the bar of "never write anything that, when taken out of context, could cause people to be really triggered" is no longer a feasible bar to meet. Indeed it is a bar that is now so high that I no longer know how to make the vast majority of important intellectual points I have to make in order to solve many of the important global problems I want us to solve in my lifetime. The way I understood your comment above, and the usual critiques of that blogpost in particular, is that they were leaning into the out-of-context phrasings of his writing, without really acknowledging the context in which the phrase was used.

I think this is an important point to make, because on a number of occasions I do think Robin has actually said things that seemed much more edgy and unnecessarily inflammatory even if you had the full context of his writing, and I think the case for those being bad is much stronger than the case for that blogpost about "gentle, silent rape" and other things in its reference class being bad. I think Twitter in particular has made some of this a lot worse, since it's much harder to provide much context that helps people comprehend the full argument, and it's much more frequent for things to be taken out of context by others.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T06:11:19.462Z · EA · GW

> Because to me, phrases like "gentle, silent rape" seem obviously unnecessarily jarring even as far as twitter discussions about rape go.

I am always really confused when someone brings up this point as a point of critique. The substance of Hanson's post where he used that phrase just seemed totally solid to me. 

I feel like this phrase is always invoked to make the point that Hanson doesn't understand how bad rape is, or that he somehow thinks lots of rape is "gentle" or "silent", but that has absolutely nothing to do with the post where the phrase is used. The phrase isn't even referring to rape itself! 

When people say things like this, my feeling is that they must not have actually read the original post, where the idea of "gentle, silent rape" was used as a way to generate intuitions not about how bad rape is, but about how bad something else is (cuckoldry), and about how our legal system judges different actions in a somewhat inconsistent way. Again, nowhere in that series of posts did Hanson say that rape was in any way not bad, or not traumatic, or not something that we should obviously try to prevent with a substantial fraction of our resources. And given the relatively difficult point he tried to make, which is a good one and which I appreciate him making, I feel like his word choice was overall totally fine, if one assumes that others will at the very least read what the phrase refers to, instead of removing it entirely from context and using it in a way that has basically nothing to do with how he used it. I would argue that is a reasonable assumption to make in a healthy intellectual community.

Comment by habryka on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-02T01:20:08.686Z · EA · GW

My model of this is that there is a large fraction of beliefs in the normal Overton window of both liberals and conservatives that are not within the Overton window of this community. From a charitable perspective, that makes sense: lots of beliefs that are accepted as gospel in the conservative community seem obviously wrong to me, and I am obviously going to argue against them. The same is true for many beliefs in the liberal community. Since many more members of the community are liberal, we are going to see many more "woke" views argued against, for two separate reasons:

  1. Many people assume that all spaces they inhabit are liberal spaces, the EA community is broadly liberal, and so they feel very surprised if they say something that is accepted as obvious everywhere else and suddenly get questioned here (concrete examples that I've seen in the past, and that I am happy to see questioned, are: "there do not exist substantial cognitive differences between genders", "socialized healthcare is universally good", "we should drastically increase taxes on billionaires", "racism is obviously one of the most important problems to be working on").
  2. There are simply many more liberal people, so you are going to see many more datapoints of "woke" people feeling attacked, because the base rate for conservatives is already so low.

My prediction is that if we were to actually get someone with a relatively central conservative viewpoint, their views would seem even more outlandish to people on the forum, and their perspectives would get even more attacked. Imagine talking about any of the following topics on the forum: 

  1. Gay marriage and gay rights are quite bad
  2. Humans are not the result of evolution
  3. The war on drugs is a strongly positive force, and we should increase incarceration rates

(Note, I really don't hang out much in standard conservative circles, so there is a good chance the above are actually all totally outlandish and the result of stereotypes.) 

If I imagine someone bringing up these topics, the response would be absolutely universally negative, to a much larger degree than what we see when woke topics are being discussed. 

The thing that I think is actually explaining the data is simply that the EA and Rationality communities have a number of opinions that substantially diverge from the opinions held in basically any other large intellectual community, and so if someone comes in and just assumes that everyone shares the context from one of these other communities, they will experience substantial pushback. The most common community for which this happens is the liberal community, since we have substantial overlap, but this would happen with people from basically any community (and I've seen it happen with many people from the libertarian community who sometimes mistakenly believe all of their beliefs are shared in the EA community, and then receive massive pushback as they realize that people are actually overall quite strongly in favor of more redistribution of wealth).

And to be clear, I think this is overall quite good, and I am happy about most of these divergences from both liberal and conservative gospel, since they overall seem to point much closer to the actual truth than what those communities seem to generally accept as true (though I wouldn't at all claim that we are infallible or that this is a uniform trend, and I think there are probably quite a few topics where the divergences point away from the truth, just that the aggregate seems broadly in the right direction to me).