Posts

Long Term Future Fund application is closing this Friday (June 12th) 2020-06-11T04:17:28.371Z · score: 16 (4 votes)
[U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government 2020-04-10T04:54:25.630Z · score: 31 (10 votes)
Request for Feedback: Draft of a COI policy for the Long Term Future Fund 2020-02-05T18:38:24.224Z · score: 38 (20 votes)
Long Term Future Fund Application closes tonight 2020-02-01T19:47:47.051Z · score: 16 (4 votes)
Survival and Flourishing grant applications open until March 7th ($0.8MM-$1.5MM planned for dispersal) 2020-01-28T23:35:59.575Z · score: 8 (2 votes)
AI Alignment 2018-2019 Review 2020-01-28T21:14:02.503Z · score: 28 (11 votes)
Long-Term Future Fund: November 2019 short grant writeups 2020-01-05T00:15:02.468Z · score: 46 (20 votes)
Long Term Future Fund application is closing this Friday (October 11th) 2019-10-10T00:43:28.728Z · score: 13 (3 votes)
Long-Term Future Fund: August 2019 grant recommendations 2019-10-03T18:46:40.813Z · score: 79 (35 votes)
Survival and Flourishing Fund Applications closing in 3 days 2019-10-02T00:13:32.289Z · score: 11 (6 votes)
Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal) 2019-09-09T04:14:02.083Z · score: 29 (10 votes)
Integrity and accountability are core parts of rationality [LW-Crosspost] 2019-07-23T00:14:56.417Z · score: 52 (20 votes)
Long Term Future Fund and EA Meta Fund applications open until June 28th 2019-06-10T20:37:51.048Z · score: 60 (23 votes)
Long-Term Future Fund: April 2019 grant recommendations 2019-04-23T07:00:00.000Z · score: 143 (75 votes)
Major Donation: Long Term Future Fund Application Extended 1 Week 2019-02-16T23:28:45.666Z · score: 41 (19 votes)
EA Funds: Long-Term Future fund is open to applications until Feb. 7th 2019-01-17T20:25:29.163Z · score: 19 (13 votes)
Long Term Future Fund: November grant decisions 2018-12-02T00:26:50.849Z · score: 35 (29 votes)
EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday) 2018-11-21T03:41:38.850Z · score: 21 (11 votes)

Comments

Comment by habryka on evelynciara's Shortform · 2020-08-04T04:25:23.818Z · score: 2 (1 votes) · EA · GW

Seems to work surprisingly well!

Comment by habryka on EA Forum update: New editor! (And more) · 2020-08-03T19:26:27.904Z · score: 3 (2 votes) · EA · GW

I think we allow markdown tables using this syntax, but I really haven't debugged it very much and it could totally be broken: https://www.markdownguide.org/extended-syntax/#tables
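For reference, the extended markdown table syntax that page describes looks roughly like this (a minimal sketch with placeholder column names; as noted above, whether our parser actually renders it is untested and it could be broken):

| Header A | Header B |
| -------- | -------- |
| cell 1   | cell 2   |
| cell 3   | cell 4   |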

Comment by habryka on EA Forum update: New editor! (And more) · 2020-08-02T00:12:38.272Z · score: 3 (2 votes) · EA · GW

In the new editor, when you have your cursor at the beginning of a new line, a small paragraph symbol should appear on the left of the editor. Clicking on that should bring up a menu that includes a table item.

Comment by habryka on EA Forum update: New editor! (And more) · 2020-08-02T00:11:48.360Z · score: 4 (2 votes) · EA · GW

Huh, no idea why that happens. The hover-previews are not triggered by selection events, only by mouse hover events, and have been that way for a long time. My guess is something must have changed in Chrome or maybe in Vimium to make that happen?

Reading through some Github issues for Vimium, it appears that Vimium does indeed send events when clicking on a link, so this is intended behavior as far as I can tell (why, I do not know, though I can imagine it overall resulting in a better experience on other websites). I don't currently know how to fix this without breaking it on other devices, so I would mostly treat this as a Vimium bug.

Comment by habryka on EA Forum update: New editor! (And more) · 2020-08-01T06:55:42.122Z · score: 4 (2 votes) · EA · GW

You say "They also now have the ability to edit tag descriptions in a wiki-like fashion", but when someone does something stupid on Wikipedia other people can view the article history and restore old versions. Here it looks like regular users can't do that?

My guess is that this is a temporary bug. The History page should allow you to see any previous revisions that were made, and to compare arbitrary revisions. You can see what it's supposed to look like on LessWrong. With that, restoring previous versions should be pretty easy. I expect that bug will probably be fixed within a week or so, and until then it probably won't be much of a problem.

Comment by habryka on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-23T04:35:23.409Z · score: 2 (4 votes) · EA · GW

Yep, that's what I was implying.

Comment by habryka on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-22T18:10:10.259Z · score: 3 (3 votes) · EA · GW

This is in contrast to a frequentist perspective, or maybe something close to a "common-sense" perspective, which tends to bucket knowledge into separate categories that aren't easily interchangeable.

Many people make a mental separation between "thinking something is true" and "thinking something is X% likely, where X is high", with one falling into the category of lived experience, and the other falling into the category of "scientific or probabilistic assessment". The first one doesn't require any externalizable evidence and is a fact about the mind; the second is part of a collaborative scientific process that has at its core repeatable experiments, or at least recurring frequencies (see e.g. the frequentist position that it is meaningless to assign probabilities to one-time events).

Under some of these other non-Bayesian interpretations of probability theory, an assignment of probabilities is not valid if you don't associate it with either an experimental setup or some recurring frequency. So under those interpretations you do have an additional obligation to provide evidence and context for your probability estimates, since otherwise they don't really form even a locally valid statement.

Comment by habryka on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-21T05:55:44.608Z · score: 2 (3 votes) · EA · GW
Isn't self-reported data unreliable?

Yes, but unreliability does not mean that you should just use vague words instead of explicit credences. It's a fine critique to say that people make too many arguments without giving evidence (something I also disagree with, but that isn't the subject of this thread), but you are concretely making the point that it's additionally bad for them to give explicit credences! The credences only help, compared to the vague and ambiguous terms that people would use instead.

Comment by habryka on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T16:50:51.832Z · score: 7 (6 votes) · EA · GW

I mean, very frequently it's useful to just know what someone's credence is. That's often an order of magnitude cheaper to provide, and is often itself quite a bit of evidence. This is like saying that all statements of opinion or expressions of feeling are bad unless they are accompanied by evidence, which seems like it would massively worsen communication.

Comment by habryka on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-20T05:02:37.889Z · score: 10 (9 votes) · EA · GW
There's also a lot of pseudo-superforecasting, like "I have 80% confidence in this", without any evidence backing up those credences.

From a Bayesian perspective, there is no particular reason why you have to provide more evidence if you provide credences, and in general I think there is a lot of value in people providing credences even if they don't provide additional evidence, if only to avoid the problems of ambiguous language.

Comment by habryka on EA Forum feature suggestion thread · 2020-07-15T03:58:39.721Z · score: 2 (1 votes) · EA · GW

This is also the case in the new editor! Sorry for not having this for so long!

Comment by habryka on Concern, and hope · 2020-07-09T08:02:34.173Z · score: 27 (8 votes) · EA · GW
witch hunts [...] top-down

The vast majority of witch hunts were not top-down, as far as I remember from my cursory reading on this topic. They were usually driven by mobs and bottom-up social activity, with the church or other higher institutions generally trying to avoid getting involved with them.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-29T04:04:00.294Z · score: 6 (3 votes) · EA · GW

We actually just deployed the ability for users to delete their own comments if they have no children (i.e. no replies) on LessWrong. So I expect that will also be up on the EA Forum within the next few weeks.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-28T07:57:13.744Z · score: 4 (2 votes) · EA · GW

Yeah, I agree with this. I actually think we have an admin-only version of a button that does this, but we ran into some bugs and haven't gotten around to fixing them. I do expect we will do this at some point in the next few months.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-27T04:19:22.051Z · score: 5 (3 votes) · EA · GW

Huh, you're right. I will look into it.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-26T20:23:48.241Z · score: 3 (2 votes) · EA · GW

I am reasonably confident that we use the first image that is used in a post as the preview image, so you can already mostly do this.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-20T17:46:02.013Z · score: 2 (1 votes) · EA · GW

Yeah, this is the current top priority with the new editor rework, and the inability to make this happen was one of the big reasons why we decided to switch editors. I expect this will happen sometime in the next month or two.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-20T17:43:59.432Z · score: 12 (6 votes) · EA · GW

Alas, I don't think this is possible in the way you are suggesting it here. We can allow submission of a narrow subset of HTML, but indeed one of the most common complaints we got about the old forum was that many posts had totally inconsistent formatting, because people were submitting all kinds of weird HTML+CSS with differing font sizes for each post, broken formatting on smaller devices, inconsistent text colors, garish formatting, floating images that broke text layout, etc.

Just a week ago I got a bug report about the formatting of your old "Why the tails come apart" post being broken on smaller devices because of the custom HTML you submitted at the time. A very large fraction of old LW and EA Forum posts have broken formatting because of the overly permissive editors that old LessWrong and the old EA Forum both had (and I've probably spent at least 10 hours over the last few years fixing posts with that kind of broken formatting).

If you want to import something from Google Docs, then exporting it to markdown and using the markdown editor is really the best we can do, and we can ensure that always works reliably. I don't think we can make arbitrary HTML submission work without frustrating tons of readers and authors.

I have also been working a lot on making the new editor work completely seamlessly with Google Docs copy-paste (and indeed there is a lot of special-casing specifically to make copy-paste from Google Docs work). The only features that are missing and kind of difficult to do are internal links and footnotes; I have not discovered any other feature we would want that runs into significant problems (there are some others, like left- or right-floating images, that we don't want because they break on smaller devices). So if you ever discover a document that you can't just copy-paste, please send a bug report and I think we can likely make it work.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-18T17:13:22.973Z · score: 10 (7 votes) · EA · GW

That’s actually a lot of what the LessWrong team is currently working on! I don’t know yet whether we want to allow suggesting edits on all posts, but we are planning to allow wiki-like posts that allow people to submit changes.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-18T17:11:30.000Z · score: 7 (4 votes) · EA · GW

Yep, that feature is live on LessWrong, so I expect it will go live within a few weeks on the forum.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-18T17:10:29.343Z · score: 4 (3 votes) · EA · GW

This is also supported in the new editor.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-18T17:08:56.733Z · score: 6 (4 votes) · EA · GW

The same is true for me (as the person who built the feature). On LessWrong the recommendations are randomized, but for some reason on the EA Forum the admins/devs decided to always have them be strictly ordered by the latest highest-karma posts you haven't read, so they never change, and inevitably end up in a configuration where you're not interested in any of the posts.

Comment by habryka on EA Forum feature suggestion thread · 2020-06-17T17:12:00.871Z · score: 10 (6 votes) · EA · GW

They are supported in the new editor, which LessWrong has currently shipped if you opt into beta features (and which I expect will go live for everyone by default in the next two weeks or so).

Comment by habryka on How to Fix Private Prisons and Immigration · 2020-06-13T19:31:15.805Z · score: 2 (3 votes) · EA · GW

Note: The images in your post are broken, likely for everyone but you:

[screenshot of the broken image placeholder]

Google Drive generally doesn't work very well as an image hosting service. I recommend using imgur, or any of the other dozens of image hosting services out there.

One level up, I apologize for it not being easy to just upload images to the EA Forum and LessWrong. We have a new editor in beta that I expect to go live on the EA Forum soon that would allow you to just upload images without having to worry about image-hosting services.

Comment by habryka on Will protests lead to thousands of coronavirus deaths? · 2020-06-11T04:05:00.368Z · score: 4 (2 votes) · EA · GW

The timestamp with the small link icon is a link to the shortform comment itself. Here is the link for your shortform comment that I got this way:

https://forum.effectivealtruism.org/posts/siuZkxobSGtEqtwq2/kbog-s-shortform?commentId=fHxmf3G7Qn6H6pZpb

Comment by habryka on Climate Change Is Neglected By EA · 2020-05-27T18:01:10.975Z · score: 10 (4 votes) · EA · GW

While I never considered poverty reduction a top cause, I do consider climate change work to be quite a bit more important than poverty reduction in terms of direct impact, because of GCR-ish concerns (though overall still very unimportant compared to more direct GCR-ish concerns). My guess is that this is also true of most people I work with who are also primarily concerned about GCR-type things, though the topic hasn't come up very often, so I am not very confident about this.

I do actually think there is value in poverty-reduction-like work, but that comes primarily from an epistemic perspective: poverty reduction requires making many fewer long-chained inferences about the world, in a way that seems more robustly good to me than all the GCR perspectives, and also seems like it would allow better learning about how the world works than working on climate change does. So broadly I think I am more excited about working with people who work on global poverty than people who work on climate change (since I think the epistemic effects dominate the actual impact calculation here).

Comment by habryka on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-22T23:02:36.949Z · score: 4 (2 votes) · EA · GW

Yeah, that seems fair. I do think that "LessWrong meetups" are a category that is more similar to the whole "Local Group" category, and the primary thing that is surprising to me is that there were so many people who chose LessWrong instead of Local Group and then decided to annotate that choice with a reference to their local group.

Comment by habryka on EA Survey 2019 Series: How EAs Get Involved in EA · 2020-05-22T21:11:29.865Z · score: 5 (4 votes) · EA · GW
Perhaps surprisingly, given that LessWrong is often thought of primarily as an online community and forum, the next two largest response categories mentioned an in-person event (14%) or local group (14%).

The LessWrong community has had dozens (if not hundreds in total) of active meetup groups for almost a decade now, with a large number of people now very active in the community having come through those meetups. I am really quite surprised that you would say that it is surprising that people mention LessWrong meetups. They've always played a pretty big role in the structure of the EA and Rationality communities.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-05-15T03:15:48.428Z · score: 2 (1 votes) · EA · GW

I've also outlined my reasoning quite a bit in other comments, here is one of the ones that goes into a bunch of detail: https://forum.effectivealtruism.org/posts/Hrd73RGuCoHvwpQBC/request-for-feedback-draft-of-a-coi-policy-for-the-long-term?commentId=mjJEK8y4e7WycgosN

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-05-15T03:06:08.032Z · score: 8 (4 votes) · EA · GW

The usual thing that I've seen happen in the case of recusals is that the recused person can no longer bring their expertise to the table, and de facto, when a fund member is recused from a grant and no one else has the expertise to evaluate it, that grant is much less likely to happen. This means three things:

1. Projects are now punished for establishing relationships with grantmakers and working together with grantmakers

2. Grantmakers are punished for establishing relationships with organizations and projects they are excited about

3. Funds can no longer leverage the expertise of the people with the most relevant context

In general, when someone is recused they seem to no longer argue for why a grant is important, and on a hits-based view, a lot of the time the people who have positive models for why a grant is important are also the most likely to have a social network that is strongly connected to the grant in question.

I don't expect a loosely connected committee like the LTFF or the other EA Funds to successfully extract that information from the relevant fund member, and so a conservative COI policy will reliably fail to make the most valuable grants. Maybe an organization in which people had the time to spend hundreds of hours talking to each other could afford to have someone with expertise recuse themselves and then try to download their models of why a grant is promising and evaluate it independently, but the LTFF (and I expect the other EA Funds) do not have that luxury. I have not seen a group of people navigate this successfully, and de facto I am very confident that a process that relies heavily on recusals will just tend to fail to make grants whenever the fund member with the most relevant expertise is recused.

have fewer grant evaluators per grant

Having fewer grant evaluators per grant is a choice that Open Phil made and that the EA Funds can also make; I don't see how that is an external constraint. It is at least partially a result of trusting in the hits-based giving view that generates a lot of my intuitions around recusals. Nothing is stopping the EA Funds from having fewer grant evaluators per grant (and de facto most grants are only investigated by a single person on a fund team, with the rest just providing basic oversight, which is why recusals are so costly: frequently only a single fund member even has the requisite skills and expertise necessary to investigate a grant in a reasonable amount of time).

and most of their grants fall outside the EA community such that COIs are less common.

While most grants fall outside of the EA community, many, if not most, of the grant investigators will still have COIs with the organizations they are evaluating, because that is where they will extend their social network. So the people who work at GiveWell tend to have closer social ties to organizations working in that space (often having been hired from that space), the people working on biorisk will have social ties to the existing pandemic-prevention space, etc. I do think that overall Open Phil's work is somewhat less likely to hit on COIs, but not that much. I also overall trust Open Phil's judgement a lot more in domains where they are socially embedded in the relevant network, and I think Open Phil also thinks that, and puts a lot of emphasis on understanding the specific social constraints and hierarchies in the fields they are making grants in. Again, a recusal-heavy COI policy would create really bad incentives for grantmakers here, and isolate the fund from many of the most important sources of expertise.

Comment by habryka on Ben_Snodin's Shortform · 2020-05-06T02:56:41.986Z · score: 2 (1 votes) · EA · GW

Kind of surprised that this post doesn't link at all to Paul's post on altruistic equity: https://forum.effectivealtruism.org/posts/r7vmtHZKuosJZ3Xq5/altruistic-equity-allocation

Comment by habryka on [U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government · 2020-04-14T01:42:14.272Z · score: 2 (1 votes) · EA · GW

Note: PayPal is now also accepting applications, which might be an option for many people: https://www.loanbuilder.com/ppp-loan-application

(While the link doesn't go to PayPal directly, it is linked from here, which makes me think it's legitimate)

Comment by habryka on Examples for impact of Working at EAorg instead of ETG · 2020-03-15T18:49:45.889Z · score: 7 (4 votes) · EA · GW

This also seems right to me. We roughly try to distribute all the money we have in a given year (with some flexibility between rounds), and aren't planning to hold large reserves. So through our own decisions alone, we couldn't ramp up our grantmaking if better opportunities arise.

However, I can imagine donations to us increasing if better opportunities arise, so I do expect there to be at least some effect.

Comment by habryka on Linch's Shortform · 2020-02-25T04:40:36.160Z · score: 6 (3 votes) · EA · GW
11. I gave significant amounts of money to the Long-Term Future Fund (which funded Catalyst), so I'm glad Catalyst turned out well. It's really hard to forecast the counterfactual success of long-reach plans like this one, but naively it looks like this seems like the right approach to help build out the pipeline for biosecurity.

I am glad to hear that! I sadly didn't end up having the time to go, but I've been excited about the project for a while.

Comment by habryka on Thoughts on The Weapon of Openness · 2020-02-14T22:42:45.194Z · score: 6 (3 votes) · EA · GW
though it's important to note that that technique was developed at IBM, and then given to the NSA, and not developed internally at the NSA.

So I think this is actually a really important point. I think by default the NSA can contract out various tasks to industry professionals and academics and on average get results back from them that are better than what they could have done internally. The differential cryptanalysis situation is a key example of that. IBM could instead have been contracted by some random other group and developed the technology for them, which means that the NSA had basically no lead in cryptography over IBM.

Comment by habryka on Thoughts on The Weapon of Openness · 2020-02-14T18:30:05.134Z · score: 5 (3 votes) · EA · GW

Even if all of these turn out to be quite significant, that would at most imply a lead of something like 5 years.

The elliptic curve one doesn't strike me as evidence that the NSA had a big lead. You are probably referring to this backdoor:

https://en.wikipedia.org/wiki/Dual_EC_DRBG

This backdoor was basically immediately identified by security researchers the year it was embedded in the standard. As you can read in the Wikipedia article:

Bruce Schneier concluded shortly after standardization that the "rather obvious" backdoor (along with other deficiencies) would mean that nobody would use Dual_EC_DRBG.

I can't really figure out what you mean by the DES recommended magic numbers. There were some magic numbers in DES that were used for defense against the differential cryptanalysis technique, which I do agree is probably the single strongest example we have of an NSA lead, though it's important to note that that technique was developed at IBM, and then given to the NSA, and not developed internally at the NSA.

To be clear, a 30 (!) year lead seems absolutely impossible to me. A 3-year broad lead seems maybe plausible, with a 10-year lead in some very narrow, specific subset of the field that gets relatively little attention (in the same way research groups can sometimes pull ahead in a specific subset of the field that they are investing heavily in).

I have never talked to a security researcher who would consider 30 years remotely plausible. The usual impression that I've gotten from talking to security researchers is that the NSA has some interesting techniques and probably a variety of backdoors, which they primarily installed not by technological advantage but by political maneuvering, but that in overall competence they are probably behind the academic field, and almost certainly not very far ahead.

Comment by habryka on Thoughts on The Weapon of Openness · 2020-02-13T22:37:25.331Z · score: 8 (5 votes) · EA · GW
past leaks and cases of "catching up" by public researchers that they are roughly 30 years ahead of publicly disclosed cryptography research

I have never heard this and would be extremely surprised by it. Like, willing to take a 15:1 bet on this, at least. Probably more.

Do you have a source for this?

Comment by habryka on How do you feel about the main EA facebook group? · 2020-02-13T00:45:55.714Z · score: 4 (2 votes) · EA · GW

Do you have the same feeling about comments on the EA Forum?

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-10T17:51:38.967Z · score: 2 (1 votes) · EA · GW
Separately, you mentioned OpenPhil's policy of (non-) disclosure as an example to emulate. I strongly disagree with this, for two reasons.

This sounds a bit weird to me, given that the above is erring quite far in the direction of disclosure.

The specific dimension of the Open Phil policy that I think has strong arguments going for it is being hesitant with recusals. I really want to continue to be very open about our conflicts of interest, and wouldn't currently advocate for emulating Open Phil's policy on the disclosure dimension.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-10T17:48:58.025Z · score: 4 (2 votes) · EA · GW
I didn't see any discussion of recusal because the fund member is employed by or receives funds from the potential grantee?

Yes, that should be covered by the CEA fund policy we are extending. Here are the relevant sections:

Own organization: any organization that a team member
- is currently employed by
- volunteers for
- was employed by at any time in the last 12 months
- reasonably expects to become employed by in the foreseeable future
- does not work for, but that employs a close relative or intimate partner
- is on the board of, or otherwise plays a substantially similar advisory role for
- has a substantial financial interest in

And:

- A team member may not propose a grant to their own organization
- A team member must recuse themselves from making decisions on grants to their own organizations (except where they advocate against granting to their own organization)
- A team member must recuse themselves from advocating for their own organization if another team member has proposed such a grant
- A team member may provide relevant information about their own organization in a neutral way (typically in response to questions from the team's other members).

Which covers basically that whole space.

Note that that policy is still in draft form and not yet fully approved (and there are still some incomplete sentences in it), so we might want to adjust our policy above depending on changes in the CEA fund general policy.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-08T21:58:19.306Z · score: 6 (4 votes) · EA · GW

Responding on a more object-level:

As an obviously extreme analogy, suppose that someone applying for a job decides to include information about their sexual history on their CV.

I think this depends a lot on the exact job, and on the nature of the sexual history. If you are a registered sex offender, and are open about this on your CV, then that will overall make a much better impression than if I find it out from doing independent research later on, since that is information that (depending on the role and the exact context) might be highly relevant for the job.

Obviously, including potentially embarrassing information in a CV without it having much purpose is a bad idea, and mostly signals various forms of social obliviousness, as well as distracting from the actually important parts of your CV, which pertain to your professional experience and the factors that will likely determine how well you will do at your job.

But I'm inclined to agree with Howie that the extra clarity you get from moving beyond 'high-level' categories probably isn't all that decision-relevant.

So, I do think this is probably where our actual disagreement lies. Of the most concrete conflicts of interest that have given rise to abuses of power I have observed, both within the EA community and in other communities, more than 50% were the result of romantic relationships, and were basically completely unaddressed by the high-level COI policies that the relevant institutions had in place. Most of these are in weird grey areas of confidentiality, but I would be happy to talk to you about the details of those if you send me a private message.

I think being concrete here is actually highly action relevant, and I've seen the lack of concreteness in company policies have very large and concrete negative consequences for those organizations.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-08T21:50:03.879Z · score: 5 (5 votes) · EA · GW
less concrete terms is mostly about demonstrating an expected form of professionalism.

Hmm, I think we likely have disagreements on the degree to which at least a significant chunk of professionalism norms are the result of individuals trying to limit the accountability of themselves and the people around them. I generally am not a huge fan of large fractions of professionalism norms (which is not by any means a rejection of all professionalism norms, just of specific subsets of them).

I think newspeak is a pretty real thing, and the adoption of language that is broadly designed to obfuscate and limit accountability is a real phenomenon. I think that phenomenon is pretty entangled with professionalism. I agree that there is often an expectation of professionalism, but I would argue that exactly that expectation is what often causes obfuscating language to be adopted. And I think this issue is important enough that just blindly adopting professional norms is quite dangerous and can have very large negative consequences.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-08T02:48:05.746Z · score: 3 (2 votes) · EA · GW
You could do early screening by unanimous vote against funding specific potential grantees, and, in these cases, no COI statement would have to be written at all.

Since we don't publicize rejections, or even who applied to the fund, I wasn't planning to write any COI statements for rejected applicants. That's a bit sad, since it kind of leaves a significant number of decisions without accountability, but I don't know what else to do.

The natural time for grantees to object to certain information being included would be when we run our final writeup past them. They could then request that we change our writeup, or ask us to rerun the vote with certain members excluded, which would make the COI statements unnecessary.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:53:33.076Z · score: 4 (6 votes) · EA · GW

This is a more general point that shapes my thinking here a bit, not directly responding to your comment.

If somebody clicks on a conflict of interest policy wanting to figure out if they generally trust the LTF, and they see a bunch of stuff about metamours and psychedelics, that's going to end up incredibly salient to them, and that's not necessarily making them more informed about what they actually cared about. It can actually just be a distraction.

I feel like the thing that is happening here makes me pretty uncomfortable, and I really don't want to further incentivize this kind of assessment of stuff.

A related concept in this space seems to me to be the Copenhagen Interpretation of Ethics:

The Copenhagen Interpretation of quantum mechanics says that you can have a particle spinning clockwise and counterclockwise at the same time – until you look at it, at which point it definitely becomes one or the other. The theory claims that observing reality fundamentally changes it.
The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more. Even if you don’t make the problem worse, even if you make it slightly better, the ethical burden of the problem falls on you as soon as you observe it. In particular, if you interact with a problem and benefit from it, you are a complete monster. I don’t subscribe to this school of thought, but it seems pretty popular.

I feel like there is a similar thing going on with being concrete about stuff like sexual and romantic relationships (which obviously have massive consequences in large parts of the world), and maybe more broadly with having this COI policy in the first place. My sense is that we can successfully avoid a lot of criticism by just not having any COI policy, or having a really high-level and vague one, because any policy we would have would clearly signal we have looked at the problem, and are now to blame for any consequences related to it.

More broadly, I just feel really uncomfortable with having to write all of our documents to make sense on a purely associative level. I, as a donor, would be really excited to see a COI policy as concrete as the one above, similarly to how all the concrete mistake pages on EA org websites make me really excited. I feel like making the policy less concrete trades off getting something right, and as such being quite exciting to people like me, in favor of being more broadly palatable to some large group of people, and maybe making a bit fewer enemies. But that feels like it's usually going to be the wrong strategy for a fund like ours, where I am most excited about having a small group of really dedicated donors who are really excited about what we are doing, much more than being very broadly palatable to a large audience without anyone being particularly excited about it.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:35:21.918Z · score: 4 (2 votes) · EA · GW
being personal friends with someone should require disclosure.

I think this comment highlights some of the reasons why I am hesitant to just err on the side of disclosure for personal friendships.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:21:57.373Z · score: 4 (2 votes) · EA · GW
I think the onus is on LTF to find a way of managing COIs that avoids this, while also having a suitably stringent COI policy.

I mean, these are clearly trading off against each other, given all the time constraints I explained in a different comment. Sure, you can say that we have an obligation, but that doesn't really help me balance these tradeoffs.

The above COI policy is my best guess at how to manage that tradeoff. It seems to me that moving towards recusal on any of the above axes will end up preventing at least some grants from being made, or at least I don't currently see a way forward that would not make that the case. I do think looking into some kind of COI board could be a good idea, but I continue to be quite concerned about having a profusion of boards in which no one has any real investment and no one has time to really think things through, and am currently leaning towards that being a bad idea.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:13:11.092Z · score: 6 (4 votes) · EA · GW
I can't imagine myself being able to objectively cast a vote about funding my room-mate

So, I think I agree with this in the case of small houses. However, I've been part of large group houses with 18+ people in them, where I interacted with very few of the people living there, and overall spent much less time with many of my housemates than I did with some very casual acquaintances.

Maybe we should just make that explicit? Differentiate living together with 3-4 other people from living together with 15 other people? A cutoff at something like 7 people seems potentially reasonable to me.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:09:12.903Z · score: 4 (2 votes) · EA · GW

Yeah, I am not sure how to deal with this. Currently the fund team is quite heavily geographically distributed, with me being the only person located in the Bay Area, so on that dimension we are doing pretty well.

I don't really know what to do if there are multiple COIs, which is one of the reasons I much prefer us to err on the side of disclosure instead of recusal. I expect if we were to include friendships as sufficient for recusal, we would very frequently have only one person on the fund being able to vote on a proposal, and I expect that to overall make our decision-making quite a bit worse.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T23:08:20.346Z · score: 17 (7 votes) · EA · GW

So, the problem here is that we are already dealing with a lot of time constraints, and I feel pretty doomy about having a group that has even less time than the fund already has be involved in this kind of decision-making.

I also have a more general concern: when I look at dysfunctional organizations, one of the things I often see is a profusion of boards upon boards, each of which primarily serves to spread accountability around, overall resulting in a system in which no one really has any skin in the game and in which even very simple tasks often require weeks of back-and-forth.

I think there are strong arguments in this space that should push you towards avoiding the creation of lots of specialized boards and their associated complicated hierarchies, and I think we see that in the most successful for-profit companies. I think the non-profit sector does this more, but I mostly think of this as a pathology of the non-profit sector that is causing a lot of its problems.

Comment by habryka on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T23:02:55.068Z · score: 5 (3 votes) · EA · GW

That seems good. Edited the document!