Posts

Long-Term Future Fund: November 2020 grant recommendations 2020-12-03T12:57:36.686Z
Long-Term Future Fund: April 2020 grants and recommendations 2020-09-18T10:28:20.555Z
Long-Term Future Fund: September 2020 grants 2020-09-18T10:25:04.859Z
Comparing Utilities 2020-09-15T03:27:42.746Z
Long Term Future Fund application is closing this Friday (June 12th) 2020-06-11T04:17:28.371Z
[U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government 2020-04-10T04:54:25.630Z
Request for Feedback: Draft of a COI policy for the Long Term Future Fund 2020-02-05T18:38:24.224Z
Long Term Future Fund Application closes tonight 2020-02-01T19:47:47.051Z
Survival and Flourishing grant applications open until March 7th ($0.8MM-$1.5MM planned for dispersal) 2020-01-28T23:35:59.575Z
AI Alignment 2018-2019 Review 2020-01-28T21:14:02.503Z
Long-Term Future Fund: November 2019 short grant writeups 2020-01-05T00:15:02.468Z
Long Term Future Fund application is closing this Friday (October 11th) 2019-10-10T00:43:28.728Z
Long-Term Future Fund: August 2019 grant recommendations 2019-10-03T18:46:40.813Z
Survival and Flourishing Fund Applications closing in 3 days 2019-10-02T00:13:32.289Z
Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal) 2019-09-09T04:14:02.083Z
Integrity and accountability are core parts of rationality [LW-Crosspost] 2019-07-23T00:14:56.417Z
Long Term Future Fund and EA Meta Fund applications open until June 28th 2019-06-10T20:37:51.048Z
Long-Term Future Fund: April 2019 grant recommendations 2019-04-23T07:00:00.000Z
Major Donation: Long Term Future Fund Application Extended 1 Week 2019-02-16T23:28:45.666Z
EA Funds: Long-Term Future fund is open to applications until Feb. 7th 2019-01-17T20:25:29.163Z
Long Term Future Fund: November grant decisions 2018-12-02T00:26:50.849Z
EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday) 2018-11-21T03:41:38.850Z

Comments

Comment by Habryka on DeepMind: Generally capable agents emerge from open-ended play · 2021-07-27T19:10:38.760Z · EA · GW

You probably want the link at the top of this post to go directly to the DeepMind paper page, instead of the LessWrong redirect URL. I.e., the current link is:

https://www.lesswrong.com/out?url=https%3A%2F%2Fdeepmind.com%2Fblog%2Farticle%2Fgenerally-capable-agents-emerge-from-open-ended-play

When it probably should be:

https://deepmind.com/blog/article/generally-capable-agents-emerge-from-open-ended-play

Comment by Habryka on microCOVID.org: A tool to estimate COVID risk from common activities · 2021-07-06T20:54:12.172Z · EA · GW

I found this FB post by Matt Bell surprisingly useful:  https://www.facebook.com/thismattbell/posts/10161279341706038 

Comment by Habryka on EA Infrastructure Fund: Ask us anything! · 2021-07-05T18:55:01.436Z · EA · GW

I would also be in favor of the LTFF doing this.

Comment by Habryka on Linch's Shortform · 2021-07-05T18:53:53.996Z · EA · GW

"after accounting for meta-EA effects"

I feel like the meta effects are likely to exaggerate the differences, not reduce them? I'm surprised by the line of reasoning here.

Comment by Habryka on What are some examples of successful social change? · 2021-06-24T01:31:29.239Z · EA · GW

Well, no. Whether that change was actually good, by its own lights, is the whole point. Change that looks big but doesn't actually help is not something that you should meaningfully count as a success. Magnitude of effect is not in itself good. I have no interest in emulating social movements that cause big effects in the world, in ways that don't actually help, or maybe even actively harm, my goals. I don't see at all why I should classify something that just had a big effect, without that effect actually being useful, as a "success". 

This is a really important distinction, because in my model of the world it is much, much easier to have some big effect on the world than it is to have a specifically targeted big effect on the world. So measuring social movements by just "the size of their effect" is almost purely sampling from movements that took the path of least resistance of just doing big things, which is a path that doesn't seem like it generalizes at all to helping with the things that we care about.

Comment by Habryka on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-06-24T01:30:07.014Z · EA · GW

Yeah, I think we've done that a few times, but I'm not sure. I would have to look over a bunch of records to be confident.

Comment by Habryka on What are some examples of successful social change? · 2021-06-23T00:48:44.181Z · EA · GW

Also: 

Prohibition in the United States

My sense is that the goals of the prohibition movement overall became harder to achieve after it took off; it reinforced the role of alcohol in society and made future efforts to reduce alcohol consumption harder. Again, not obviously harmful for its own goals, but also not obviously a success.

Comment by Habryka on What are some examples of successful social change? · 2021-06-23T00:47:13.318Z · EA · GW

Similarly: 

The modern environmental movement

It seems pretty plausible that, due to its opposition to nuclear power and the polarization and politicization of the whole space, the environmental movement has been overall harmful to its own goals.

I think it's a somewhat hard call to make, and don't think it's obvious whether the environmental movement was harmful by its own lights or not, but I definitely wouldn't count it as an obvious success.

Comment by Habryka on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-06-22T07:25:21.851Z · EA · GW

"Counterfactually, if grad school is 5-10x the risk of independent research, it seems like you should be 5-10x as hesitant to fund grad students compared to independent researchers."

I don't think that's an accurate estimate of the relevant risk. I don't think risk goes up linearly with time. Many people quit their PhDs when they aren't a good fit.

"Well when the LTFF funds graduate students who aren't even directly focused on improving the long-term future, just to help them advance their careers, I think that sends a strong signal that the LTFF thinks grad school should be the default path."

I mean, I don't think there is currently a great "default path" for doing work on the long-term future. I feel like we've said some things to that effect. I think grad school is a fine choice for some people, but I think we are funding many fewer people for grad school than for independent research (there are some people we are funding for an independent research project during grad school, but that really isn't the same as funding someone for grad school). I would have to make a detailed count to be totally confident of this, but I'm pretty confident it's true for my grant votes/recommendations.

Comment by Habryka on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-06-20T03:31:06.721Z · EA · GW

I am indeed even more hesitant to recommend grad school to people than independent research. See my comments here: https://forum.effectivealtruism.org/posts/5zedDETncHasvxGrr/should-you-do-a-phd-in-science?commentId=hK2tso7Jexhvmvsfb 

Comment by Habryka on What are some key numbers that (almost) every EA should know? · 2021-06-18T22:02:18.576Z · EA · GW

You can embed flashcard decks into LW and EA Forum posts: 

https://www.lesswrong.com/posts/yK8mKmMQ73TuzgCv6/you-can-now-embed-flashcard-quizzes-in-your-lesswrong-posts 

So you could consider creating one of those.

Comment by Habryka on EA Forum feature suggestion thread · 2021-06-18T18:54:01.004Z · EA · GW

I do think pseudonymity is the right way to solve this. It's plausible that we might want to make name changes easier, so that if you create a pseudonymous account, you can later take ownership of it more properly, if it turns out not to have embarrassed you.

Comment by Habryka on You can now apply to EA Funds anytime! (LTFF & EAIF only) · 2021-06-18T18:40:55.210Z · EA · GW

"that they're skeptical of funding independent researchers"

Just for the record, this is definitely not an accurate one-line summary of my stance, and I am pretty confident it's also not a very good summary of other people on the LTFF. Indeed, I know of almost no other funding body that has funded as many independent researchers as the LTFF.

The linked post just says that Adam "tends to apply a fairly high bar to long-term independent research", which I do agree implies some level of hesitation, but I don't think it implies a general stance of skepticism towards funding independent researchers. My model here is that there are certain people for whom independent research is a pretty big trap, and this does imply a certain level of hesitation in making a marginal grant. Many really great things will come out of independent research, but I do also think that for some people, trying to pursue an independent research path will be a really big waste of human capital, and potentially cause some pretty bad experiences, and I do think this implies thinking carefully through independent research grants.

Comment by Habryka on What are some key numbers that (almost) every EA should know? · 2021-06-18T16:46:29.296Z · EA · GW

Oops, that's why you don't try to do mental arithmetic that will shape the future of our lightcone at 1 AM.

Comment by Habryka on What are some key numbers that (almost) every EA should know? · 2021-06-18T04:47:17.855Z · EA · GW

Oops, yeah, good chance that I accidentally used a U.S.-only source.

Comment by Habryka on What are some key numbers that (almost) every EA should know? · 2021-06-18T03:46:43.398Z · EA · GW

I really like this idea. Here are some obvious ones: 

  • Number of galaxies in the reachable lightcone: ~10^9 (this is conveniently pretty close to the total number of humans alive, so I sometimes like thinking that if we reach the stars, I might have ownership over ~1 galaxy) (Sourced from Eternity in Six Hours)
  • Number of stars in a galaxy: ~10^10 - 10^12 (also surprisingly close to the number of humans alive)
  • People alive: ~10^10
  • Chicken deaths in a year: ~10^10
  • U.S. Philanthropic donations per year: ~$10^11 - $10^12 ($450 billion in 2019)
  • U.S. Foundation donations per year: ~$10^11 ($77 billion a year)
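As a quick sanity check, here is a minimal arithmetic sketch of how these rough figures relate to each other, using only the approximate values listed above (so the ratios are only meaningful to within a factor of a few):

```python
# Rough orders of magnitude from the list above; all values are approximate.
galaxies_reachable = 1e9         # ~10^9 galaxies in the reachable lightcone
people_alive = 1e10              # ~10^10 people alive
chicken_deaths_per_year = 1e10   # ~10^10 chicken deaths per year
us_philanthropy_2019 = 4.5e11    # ~$450 billion of U.S. philanthropic donations (2019)
us_foundation_giving = 7.7e10    # ~$77 billion a year of U.S. foundation donations

print(f"Galaxies per person alive: ~{galaxies_reachable / people_alive:.1f}")
print(f"Chicken deaths per person alive per year: ~{chicken_deaths_per_year / people_alive:.0f}")
print(f"Foundation share of total U.S. giving: ~{us_foundation_giving / us_philanthropy_2019:.0%}")
```
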
Comment by Habryka on Forum update: New features (June 2021) · 2021-06-17T21:29:29.654Z · EA · GW

Yeah, pretty plausible. At some point I expect to sit down, run some simulations, and see what final karma allocations different algorithms result in, and that's definitely one thing I would try out.
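To sketch what such a simulation might look like (a purely hypothetical toy model: the vote distributions, the 10x post multiplier, and all the numbers are made up, and this is not the actual LW/EA Forum karma code):

```python
import random
from collections import defaultdict

random.seed(0)

# Toy data: each user writes a few posts and comments, each with random net upvotes.
users = [f"user_{i}" for i in range(50)]
items = []
for u in users:
    for _ in range(random.randint(0, 3)):    # posts
        items.append((u, "post", random.randint(0, 40)))
    for _ in range(random.randint(0, 30)):   # comments
        items.append((u, "comment", random.randint(0, 10)))

def total_karma(items, post_multiplier):
    """Total karma per user under a given post multiplier (e.g. 10x vs 1x)."""
    karma = defaultdict(int)
    for user, kind, votes in items:
        karma[user] += votes * (post_multiplier if kind == "post" else 1)
    return karma

# Compare the final karma allocation under two different algorithms.
for mult in (10, 1):
    karma = total_karma(items, post_multiplier=mult)
    top = sorted(karma.items(), key=lambda kv: -kv[1])[:5]
    print(f"Post multiplier {mult}x, top 5 users: {top}")
```

The interesting output would just be how much the top of the karma distribution shifts between the two multipliers.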

Comment by Habryka on Forum update: New features (June 2021) · 2021-06-17T17:51:01.663Z · EA · GW

Yeah, agree. I think there is some opportunity to do something better here, though I think strong-upvotes did also address that a decent amount (posts tend to get a lot more karma because they also tend to get a lot more strong-votes). I don't really think this fixes the whole problem though, and incentives are definitely still off.

Comment by Habryka on Forum update: New features (June 2021) · 2021-06-17T16:23:37.042Z · EA · GW

Yeah, I think the reasoning was the same as for LW 1.0: posting requires more effort, and so rewarding it with more karma made sense. We might at some point add something like this again, but I do think the 10x was a bit extreme (and I think it might have actually reduced the degree to which people post, because it was kind of scary and you could lose a lot of karma if you got downvoted).

Comment by Habryka on Linch's Shortform · 2021-06-14T03:48:14.995Z · EA · GW

I think this is a great idea.

Comment by Habryka on Which non-EA-funded organisations did well on Covid? · 2021-06-08T16:15:07.713Z · EA · GW

1 Day Sooner was funded by Open Philanthropy: https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/1daysooner-general-support 

Comment by Habryka on Buck's Shortform · 2021-06-06T19:58:27.477Z · EA · GW

Yeah, I really like this. SSC currently has a book-review contest running, and maybe LW and the EAF could do something similar? (Probably not a contest, but something that creates a bit of momentum behind the idea of doing this.)

Comment by Habryka on Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare · 2021-06-04T23:24:30.264Z · EA · GW

Yeah, I was also confused. Maybe say "A GiveWell-style CEA spreadsheet"? That feels like it captures what it is better.

Comment by Habryka on Long-Term Future Fund: May 2021 grant recommendations · 2021-06-04T23:18:35.239Z · EA · GW

Now for a more thorough response for 2): 

This kind of work does strike me as pretty early-stage, so evaluation is difficult. The thing I expect to pay most attention to in evaluating how this grant is going is just whether the people Logan is working with seem to benefit from it, and whether they end up citing this program as a major influence on their future research practices and career choices (which seems pretty plausible to me).

In the long run, I would hope to see some set of ideas from Logan make its way "into the groundwater", so to speak. This has happened quite a bit with Gendlin's focusing, and it seems to me that a substantial fraction of AI alignment researchers I interface with have learned a bunch of focusing-adjacent techniques; if something similar happens with techniques or ideas originating from Logan, that seems like a good signal that the work was valuable.

I did have some email exchanges where I shared some LTFF internal discussion with Logan about what we hope to see out of this grant, and what would convince others on the fund that it was a good idea, which captured some of the above.

I also expect I will just watch, read, and engage with any material coming out of Logan's program, try to apply it to my own research problems, and see whether it seems helpful or a waste of time. I might also end up getting some colleagues, or some friends of mine who are active researchers, to try out some of the material and see whether they find it useful, and debate with them what parts seem to work and what parts don't.

Comment by Habryka on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-03T22:19:07.576Z · EA · GW

Yep, the new wiki/tagging system has been going decently well, I think. We are seeing active edits, and in general I am a lot less worried about it being abandoned, given how deeply it is integrated with the rest of LW (via the tagging system, the daily page and the recent discussion feed).

Comment by Habryka on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-03T06:47:43.393Z · EA · GW

Thank you for writing these! I really like these kinds of long writeups; they really help me get a sense of how other people think about making grants like this.

Comment by Habryka on Long-Term Future Fund: May 2021 grant recommendations · 2021-05-30T09:03:29.342Z · EA · GW

Re 1) I have historically not been very excited about Leverage's work, and wouldn't want to support more of it. I think it's mostly about choice of methodology. I really didn't like that Leverage was very inward-focused, and am a lot more excited about people developing techniques like this by trying to provide value to people who aren't working at the same organization as the person developing the technique (though Leverage also did that a bit in the last few years, albeit not much early on, as far as I can tell). Also, Leverage seemed to just make a bunch of assumptions about how the mind works that seemed wrong to me (as part of the whole "Connection Theory" package), and Logan seems to make fewer of those.

Logan also strikes me as a lot less insular, and I also have a bunch of specific opinions on Leverage, which would take a while to get into, that make me less excited about funding Leverage stuff.

Re 2) Will write something longer about this in a few days. I just had a quick minute in between a bunch of event and travel stuff, in which I had time to write the above.

Comment by Habryka on EA Survey 2020: Demographics · 2021-05-26T05:49:28.774Z · EA · GW

I am also definitely interested in Peter Wildeford's new update on that post, and have been awaiting it with great anticipation.

Comment by Habryka on EA Survey 2020: Demographics · 2021-05-24T08:52:16.824Z · EA · GW

Just for the record, I find the evidence that EA is shrinking or stagnating on a substantial number of important dimensions pretty convincing. Relevant metrics include traffic to many EA-adjacent websites, Google Trends for many EA-related terms, attendance at many non-student group meetups, total attendance at major EA conferences, the number of people filling out the EA survey, and a good amount of attrition among core community members I care a lot about.

In terms of pure membership, I think EA has probably been pretty stable, with some minor growth. I think it's somewhat more likely than not that the average competence of members has been going down, because new members don't seem as good as the members who I've seen leave.

It seems very clear to me that growth is much slower than it was in 2015-2017, based on basically all available metrics. The obvious explanation, that sometime around late 2016 lots of people decided we should stop pursuing super-aggressive growth, is relatively straightforward and explains the data.

Comment by Habryka on RyanCarey's Shortform · 2021-05-23T01:13:50.066Z · EA · GW

I generally think more moderation is good, but have also pushed back on a number of specific moderation decisions. In general I think we need more moderation of the type "this user seems like they are reliably making low-quality contributions that don't meet our bar" and less moderation of the type "this was rude/impolite, but its content was good", of which there have been a few instances recently.

Comment by Habryka on How much do you (actually) work? · 2021-05-22T07:24:04.222Z · EA · GW

I've tracked my time for a few years down to the minute, and usually tracked about 6 hours of "real work" in a day, working about 6 days a week, usually with an additional 2-3 hours of work on Sunday. This gets me pretty close to 40 hours.

However, to get to 6-7 hours of actual work in a day, I tend to spend something like 70 hours in the office per week. If you count time spent in meetings or work conversations as "real work" time, I also tend to get closer to 60 hours a week, though meetings are definitely not all fully focused.

Comment by Habryka on [deleted post] 2021-05-20T15:56:56.539Z

I think that's an error that's fixed in the latest version of LW? Not totally sure, but I fixed some related issues a few weeks ago.

Comment by Habryka on Should you do a PhD in science? · 2021-05-09T19:35:57.673Z · EA · GW

To give a bit more context: I've specifically seen some ML PhDs work out fine, but I feel like I've seen almost every other type of PhD work out badly, with my sense being that the person was not in a substantially better epistemic or career position after their PhD, especially compared to having worked an industry job in the direction they wanted to go, or just independently writing blog posts.

More than half of the PhDs I have heard of were aborted in the middle, with the person going through a major depressive period or something similar during it, and the level of regret afterwards being quite high.

ML PhDs seem somewhat better, in particular at places like CHAI where my sense is that people are working on stuff that's a lot more aligned with their goals. Though I think the track record is still pretty bad.

Comment by Habryka on Should you do a PhD in science? · 2021-05-09T08:50:20.380Z · EA · GW

Having talked to many people for multiple hours (>100) over the years about their career decisions, I share this assessment. 

Comment by Habryka on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-04-27T17:23:57.350Z · EA · GW

The link above has an additional "." at the end that prevents it from working properly.

Comment by Habryka on Getting a feel for changes of karma and controversy in the EA Forum over time · 2021-04-23T17:58:41.750Z · EA · GW

I'm worried about goodharting effects. I expect authors and others would start using the number of views as a quality signal and start optimizing for more views. But I, having access to that signal, am confident it really isn't a good quality signal, and if LW and the EA Forum had a gradient that incrementally pushed towards more of the posts that get a lot of views, this would really destroy a lot of the value of many posts.

Comment by Habryka on Getting a feel for changes of karma and controversy in the EA Forum over time · 2021-04-22T18:13:05.001Z · EA · GW

Yep, it's an admin-only property. Sorry for the confusion!

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-22T17:54:20.585Z · EA · GW

Oh, yeah, that's fair. I had interpreted it as referring to Jakub's comment. I think there is a slightly stronger case to call Hypatia's post hostile than Jakub's comment, but in either case the statement feels pretty out of place. 

Comment by Habryka on CEA update: Q1 2021 · 2021-04-22T07:09:12.417Z · EA · GW

Thank you for posting this!

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-22T07:00:02.025Z · EA · GW

I agree the calculation isn't super straightforward, and there is a problem of disincentivizing glomarization here, but I do think overall, all things considered, after having thought about situations pretty similar to this for a few dozen hours, I am pretty confident it's still decent Bayesian evidence, and I endorse treating it as Bayesian evidence (though I do think the pre-commitment considerations dampen the degree to which I am going to act on that information a bit, though not anywhere close to fully).

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-22T06:41:11.984Z · EA · GW

"“they chose not to respond, therefore that says bad things about them, so I'll update negatively.” I think that the latter response is not only corrosive in terms of pushing all discussion into the public sphere even when that makes it much worse, but it also hurts people's ability to feel comfortably holding onto non-public information."

This feels wrong from two perspectives: 

  1. It clearly is actual, boring, normal, Bayesian evidence that they don't have a good response. It's not overwhelming evidence, but someone declining to respond sure is screening off the worlds where they had a great, low-inferential-distance reply that was cheap to shoot off and that addressed all the concerns. Of course I am going to update on that.
  2. I do just actually think there is a tragedy-of-the-commons scenario with public information, and for proper information flow you need some incentives to publicize information. You and I have longstanding disagreements on the right architecture here, but from my perspective of course you want to reward organizations for being transparent and punish organizations if they are being exceptionally non-transparent. I definitely prefer to join social groups that have norms of information sharing among their members, where members invest substantial resources to share important information with others, and where you don't get to participate in the commons if you don't invest an adequate amount of resources into sharing important information and responding to important arguments.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-22T06:24:40.208Z · EA · GW

"Are you sure that they're not available for communication? I know approximately nothing about ACE, but I'd be surprised if they wouldn't be willing to talk to you after e.g. sending them an email."

Yeah, I am really not sure. I will consider sending them an email. My guess is they are not interested in talking to me in a way that would later allow me to write up what they said publicly, which would reduce the value of their response quite drastically to me. If they are happy to chat and allow me to write things up, then I might be able to make the time, but it does sound like a 5+ hour time commitment and I am not sure whether I am up for that. Though I would be happy to pay $200 to anyone else who does that.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-22T06:20:26.081Z · EA · GW

"I also think there's a strong tendency for goalpost-moving with this sort of objection—are you sure that, if they had said more things along those lines, you wouldn't still have objected?"

I do think I would still have found it pretty sad for them not to respond, because I really care about our public discourse and this issue feels important to me, but I would feel substantially less bad about it, and probably would only have mildly downvoted the comment instead of strong-downvoting it.

"What I have a problem with is the notion that we should punish ACE for not responding to those accusations—I don't think they should have an obligation to respond"

I mean, I do think they have a bit of an obligation to respond? I don't know exactly what you mean by obligation, and I don't think they are necessarily morally bad people, but I do think that it sure costs me and others a bunch for them not to respond, and it makes coordinating overall harder.

As an example, I sometimes have to decide which organizations to invite to events that I am organizing that help people in the EA community coordinate (historically things like the EA Leaders Retreat or EA Global, now more informal retreats and one-off things). The things discussed here feel like decent arguments to reduce those invites some amount, since I do think they are evidence that ACE's culture isn't a good fit for events like that. I would have liked ACE to respond to these accusations, and additionally, I would have liked ACE to respond to them publicly, so that I don't have to justify my invite to other attendees who don't know what their response was, which I would have to do even if I had reached out in private.

In a hypothetical world where we had great private communication channels, and I could just ask ACE a question in some smaller, higher-trust circle of people who go to the EA Leaders Forum or tend to attend whatever retreats and events I am running, then sure, that might be fine. But we don't have those channels, and the only way I know to establish common knowledge in basically any group larger than 20 people within the EA community is to have things be posted publicly. And that means relying on private communication makes a lot of stuff like this really hard.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-22T03:29:58.558Z · EA · GW

I downvoted because it called the communication hostile without any justification for that claim. The comment it is replying to doesn't seem at all hostile to me, and asserting that it is feels like it violates some pretty important norms about not escalating conflict and engaging with people charitably.

I also think I disagree that orgs should never be punished for not wanting to engage in any sort of online discussion. We have shared resources to coordinate, and as a social network without clear boundaries, it is unclear how to make progress on many of the disputes over those resources without any kind of public discussion. I do think we should be really careful to not end up in a state where you have to constantly monitor all online activity related to your org, but if the accusations are substantial enough, and the stakes high enough, I think it's pretty important for people to make themselves available for communication. 

Importantly, the above also doesn't highlight any non-public communication channels that people who are worried about the negative effects of ACE can use instead. The above is not saying "we are worried about this conversation being difficult to have in public, please reach out to us via these other channels if you think we are causing harm". Instead it just declares a broad swath of communication "hostile" and doesn't provide any path forward for concerns to be addressed. That strikes me as quite misguided, given the really substantial reputational, financial, and talent-related resources that ACE shares with the rest of the EA community.

I mean, it's fine if ACE doesn't want to coordinate with the rest of the EA community, but I do think that currently, unless something very substantial changes, ACE and the rest of EA are drawing from shared resource pools and need to coordinate somehow if we want to avoid tragedies of the commons.

Comment by Habryka on EA Forum feature suggestion thread · 2021-04-21T21:45:08.245Z · EA · GW

We no longer weight frontpage posts 10x, though we might want to reinstitute some kind of weighting again. I think the 10x was historically too much, and made it so that by far the primary determinant of how much karma someone had was how many frontpage posts they had, which felt like it undervalued comments, but it's pretty plausible (and even likely to me) that the current system is now too skewed in the other direction.

My current relationship towards karma is something like: the point of karma for comments is to provide local information in a thread about a mixture of importance, quality, and readership, and it's pretty hard to disentangle those without making the system much more complex. Overall, the karma of a post is a pretty good guess at how many people will want to read it, so it makes sense to use it for some recommendation systems, but the karma of comments feels a lot more noisy to me. As a long-term reward, I think we shouldn't really rely on karma at all, and should instead use systems like the LessWrong review to establish, in a much more considered way, which posts were actually good.

We've also deemphasized how much karma someone has on the site quite a bit because I don't want to create the impression that it's at all a robust measure of the quality of someone's contributions. So, for example, we no longer have karma leaderboards.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-20T05:17:37.866Z · EA · GW

I am familiar with ACE's charity evaluation process. The hypothesis I expressed above seems compatible with everything I know about the process. So alas, this didn't really answer my question.

Comment by Habryka on [deleted post] 2021-04-20T01:16:10.239Z

For whatever it's worth, this also seems pretty constraining to me. Internal links are already specially marked via the small degree-symbol, so differentiating internal and external links is pretty straightforward.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-19T22:53:41.125Z · EA · GW

Makes sense. I think the issues currently being discussed feel like the best evidence we have, and they do feel like pretty substantial evidence on this topic, but it doesn't seem necessary to discuss that fully here.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-19T16:33:28.106Z · EA · GW

Presumably knowing the basis of ACE's evaluations is one of the most important things to know about ACE? And knowing to what degree social justice principles are part of that evaluation (and to what degree those principles conflict with evaluating cost-effectiveness) seems like a pretty important part of that.

Comment by Habryka on Concerns with ACE's Recent Behavior · 2021-04-19T05:33:12.891Z · EA · GW

"While your words here are technically correct, putting it like this is very misleading. Without breaking confidentiality, let me state unequivocally that if an organization had employees who had really bad views on DEI, that would be, in itself, insufficient for ACE to downgrade them from top to standout charity status. This doesn't mean it isn't a factor; it is. But the actions discussed in this EA forum thread would be insufficient on their own to cause ACE to make such a downgrade."

Just to clarify, this currently sounds to me like you are saying "the actions discussed in this forum thread would be insufficient, but would likely move an organization about halfway to being demoted from top to standout charity", which presumably makes this a pretty big factor that explains a lot of the variance in how different organizations score on the total evaluation. This seems very substantial, but I want to give you the space to say it plays a much less substantial role than that.