Comments

Comment by HowieL on Resources around shame and striving and EA · 2021-10-14T00:35:44.688Z · EA · GW

I haven't read the whole thing but I like this book and know another person or two who also liked it.

Comment by HowieL on Some longtermist fiction · 2021-08-10T20:42:34.748Z · EA · GW

+1 to not reading Consider Phlebas. I've been reading it because I wanted to check out the Culture series and I was compulsive about starting with the first one even though I'd heard the others were better.

I haven't gotten much out of it and think it was a mistake. 

Comment by HowieL on Open Thread: July 2021 · 2021-07-07T20:47:01.379Z · EA · GW

Welcome! Glad you found us.

Comment by HowieL on People working on x-risks: what emotionally motivates you? · 2021-07-05T15:12:18.888Z · EA · GW

My colleague Michelle wrote some related thoughts here.

https://forum.effectivealtruism.org/posts/3k4H3cyiHooTyLY6p/why-i-find-longtermism-hard-and-what-keeps-me-motivated

Comment by HowieL on RyanCarey's Shortform · 2021-06-21T14:27:29.143Z · EA · GW

Yep - agree with all that, especially that it would be cool for somebody to look into the general question.

Comment by HowieL on RyanCarey's Shortform · 2021-06-20T21:49:05.070Z · EA · GW

My impression is that a lot of her quick success was because her antitrust stuff tapped into progressive anti-Big-Tech sentiment. It's possible EAs could somehow fit into the biorisk zeitgeist, but otherwise I think it would take a lot of thought to figure out how an EA could replicate this.

Comment by HowieL on Intervention options for improving the EA-aligned research pipeline · 2021-06-11T21:18:05.303Z · EA · GW

Fair enough. I guess just depends on exactly how broad/narrow of a category Linch was gesturing at.

Comment by HowieL on Intervention options for improving the EA-aligned research pipeline · 2021-06-11T17:46:42.965Z · EA · GW

I don't think Allan's really an example of this.

 

I think I’ve always been interested in computers and artificial intelligence. I followed Kasparov and Deep Blue, and it was actually Ray Kurzweil’s Age of Spiritual Machines, which is an old book, 2001 … It had this really compelling graph. It’s sort of cheesy, and it involves a lot of simplifications, but in short, it shows basically Moore’s Law at work, extrapolated ruthlessly into the future. Then, on the second y-axis, it shows the biological equivalent of the computing capacity of the machine. It shows a dragonfly and then, I don’t know, a primate, and then a human, and then all humans.

Now, that correspondence is hugely problematic. There’s lots we could say about why that’s not a sensible thing to do, but what I think it did communicate was that the likely extrapolation of trends is such that you are going to have very powerful computers within a hundred years. Who knows exactly what that means and whether, in what sense, it’s human level or whatnot, but the fact that this trend is coming on the timescale it was, was very compelling to me. But at the time, I thought Kurzweil’s projection of the social dynamics of how extremely advanced AI would play out was unlikely. It’s very optimistic and utopian. I actually looked for a way to study this all through my undergrad. I took courses. I taught courses on technology and society, and I thought about going into science writing.

And I started a PhD program in science and technology studies at Cornell University, which sounded vague and general enough that I could study AI and humanity, but it turns out science and technology studies, especially at Cornell, means more of a social constructivist approach to science and technology.

. . . 

Okay. Anyhow, I went into political science because … Actually, I initially wanted to study AI in something, and I was going to look at labor implications of AI. Then, I became distracted, as it were, by great power politics and great power peace and war. It touched on the existential risk dimensions that I didn’t have the words for yet, but which were sort of a driving interest of mine. It’s strategic, which is interesting. Anyhow, that’s what I did my PhD on, and topics related to that, and then my early career at Yale.

I should say during all this time, I was still fascinated by AI. At social events or having a chat with a friend, the conversation would often turn to AI and the future of humanity, and I would often conclude by saying, “But don’t worry, we still have time because machines are still worse than humans at Go.” Right? Here is a game that’s well defined. It’s perfect information, two players, zero-sum. The fact that a machine can’t beat us at Go means we have some time before they’re writing better poems than us, before they’re making better investments than us, before they’re leading countries.

Well, in 2016, DeepMind revealed AlphaGo, and it was almost as if this canary in the coal mine that Go was to me, sitting sort of deep in my subconscious, keeled over and died. That sort of activated me. I realized that for a long time I’d said that post-tenure I would start working on AI. Then, with that, I realized that we couldn’t wait. I actually reached out to Nick Bostrom at the Future of Humanity Institute and began conversations and collaboration with them. It’s been exciting, and there’s been lots of work to do that we’ve been busy with ever since.

https://80000hours.org/podcast/episodes/allan-dafoe-politics-of-ai/

Comment by HowieL on A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it · 2021-06-09T20:05:23.422Z · EA · GW

Fwiw, for mental health I'm not sure whether therapy is more likely to treat the 'root causes' than medications. You could have a model where some 'chemical thingie' that can be treated by meds is the root cause of mental illness and the actual cognitive thoughts treated by therapy are the symptoms. 

In reality, I'm not sure the distinction is even meaningful given all the feedback loops involved. 

Comment by HowieL on How well did EA-funded biorisk organisations do on Covid? · 2021-06-09T14:56:32.347Z · EA · GW

"I don't think most people would consider prevention a type of preparation. EA-funded biorisk efforts presumably did not consider it that way. And more to the point, I do not want to lump prevention together with preparation because I am making an argument about preparation that is separate from prevention. So it's not about just semantics, but precision on which efforts did well or poorly."

I think it actually is common to include prevention under the umbrella of pandemic preparedness. For example, here's the Council on Foreign Relations' independent Task Force on Improving Pandemic Preparedness: "Based on the painful lessons of the current pandemic, the Task Force makes recommendations for improving U.S. and global capacities to deliver each of the three fundamentals of pandemic preparedness: prevention, detection, and response." Another example: https://www.path.org/articles/building-epidemic-preparedness-worldwide/

So it might be helpful to specify what you're referring to by preparation.

Comment by HowieL on How well did EA-funded biorisk organisations do on Covid? · 2021-06-09T14:33:21.383Z · EA · GW

I think research into novel vaccine platforms like mRNA is a top priority. It's neglected in the sense that way more resources should be going into it, but also my impression[1] is that the USG does make up a decent proportion of funding for early-stage research into that kind of thing. So that's a sense in which the U.S.'s preparedness was probably good relative to other countries, though not in an absolute sense.

Here's an article I skimmed about the importance of govt (mostly NIH) funding for the development of mRNA vaccines. https://www.scientificamerican.com/article/for-billion-dollar-covid-vaccines-basic-government-funded-science-laid-the-groundwork/

Fwiw, I think it's probably not the case that the mRNA stuff was that much of a surprise. This 2018 CHS report had self-amplifying mRNA vaccines as one of ~15 technologies to address GCBRs. https://jhsphcenterforhealthsecurity.s3.amazonaws.com/181009-gcbr-tech-report.pdf

 

[1] Though I'm rusty since I haven't worked directly on biorisk for five years and was never an expert.

Comment by HowieL on How well did EA-funded biorisk organisations do on Covid? · 2021-06-09T00:44:10.235Z · EA · GW

"effective pandemic response is not about preparation"
 

FYI - my impression is that pandemic preparedness is often defined broadly enough to include things like research into defensive technology (e.g. mRNA vaccines). It does seem like those investments were important for the response.

Comment by HowieL on Which non-EA-funded organisations did well on Covid? · 2021-06-09T00:39:37.896Z · EA · GW

Several other people who work with them are connected to EA.

Comment by HowieL on How well did EA-funded biorisk organisations do on Covid? · 2021-06-08T22:06:54.457Z · EA · GW

Note that Open Phil funded this project. https://www.nti.org/newsroom/news/nti-launch-global-health-security-index-new-grant-open-philanthropy-project/

Comment by HowieL on How well did EA-funded biorisk organisations do on Covid? · 2021-06-08T22:04:33.053Z · EA · GW

In case anybody's curious: https://coronavirus.jhu.edu/map.html

Comment by HowieL on How well did EA-funded biorisk organisations do on Covid? · 2021-06-08T13:48:22.293Z · EA · GW

I do think CHS should get some credit for arguing for taking pandemic response very seriously early on. For example, I think Tom had some tweets arguing for pulling out all the stops on manufacturing more PPE in January 2020. 

Note - I'm a bit biased since I was working on biorisk at Open Phil the first time Open Phil funded CHS.

Comment by HowieL on How well did EA-funded biorisk organisations do on Covid? · 2021-06-08T13:46:29.274Z · EA · GW

Fwiw, my vague memory is that some other people at CHS, including Tom Inglesby (the director) did better than Adalja. I think Inglesby's Twitter was generally pretty sensible though I don't have time to go back and check. I'd guess that, like most experts, he was too pessimistic about travel restrictions, though. Maybe masks, too?

Comment by HowieL on How well did EA-funded biorisk organisations do on Covid? · 2021-06-08T13:39:43.805Z · EA · GW

If you're referring to what I think you are, it was a different group at Hopkins.

Comment by HowieL on EA Infrastructure Fund: May 2021 grant recommendations · 2021-06-08T07:16:33.578Z · EA · GW

If I had to pick two parts of it, they would be 3 and 4, but fwiw I got a bunch out of 1 and 2 over the last year for reasons similar to Max's.

Comment by HowieL on Meta-EA Needs Models · 2021-04-05T23:59:05.141Z · EA · GW

Also seems relevant that both 80k and CEA went through YC (though I didn't work for 80k back then and don't know all the details).

Comment by HowieL on What are your main reservations about identifying as an effective altruist? · 2021-04-05T19:55:44.769Z · EA · GW

+1

Comment by HowieL on What Makes Outreach to Progressives Hard · 2021-03-14T14:37:25.412Z · EA · GW

"Indeed, IIRC, EAs tend to be more progressive/left-of-center than the general population. I can't find the source for this claim right now."

 

The 2019 EA Survey says:


"The majority of respondents (72%) reported identifying with the Left or Center Left politically and just over 3% were on the Right or Center Right, very similar to 2018."

https://forum.effectivealtruism.org/posts/wtQ3XCL35uxjXpwjE/ea-survey-2019-series-community-demographics-and#Politics

Comment by HowieL on Why I find longtermism hard, and what keeps me motivated · 2021-03-11T12:06:17.114Z · EA · GW

I figured some people might be interested in whether the orientation toward longtermism that Michelle describes above is common at EA orgs, so I wanted to mention that almost everything in this post could also be describing my personal experience. (I'm the director of strategy at 80,000 Hours.)

Comment by HowieL on [deleted post] 2021-02-26T06:50:17.944Z

I think this request undermines how karma systems should work on a website. 'Only people who have engaged with a long set of prerequisites can decide to make this post less visible' seems like it would systematically prevent posts people want to see less of from being downvoted.

Comment by HowieL on Resources On Mental Health And Finding A Therapist · 2021-02-24T15:01:39.143Z · EA · GW

I really like Holly Elmore's blogpost "Kicking an Addiction to Self-Loathing."

Comment by HowieL on When you shouldn't use EA jargon and how to avoid it · 2020-10-28T17:40:04.693Z · EA · GW

Most native English speakers from outside of particular nerd cultures also would have no clue what it means.

Comment by HowieL on What types of charity will be the most effective for creating a more equal society? · 2020-10-13T16:24:52.561Z · EA · GW

Fair enough.

Fwiw, the forum explicitly discourages unnecessary rudeness (and encourages kindness). I think tone is part of that and the voting system is a reasonable mechanism for setting that norm. But there's room for disagreement.

If the original poster came back and edited in response to feedback or said that the tone wasn't intentional, I'd happily remove my downvote.

Comment by HowieL on What types of charity will be the most effective for creating a more equal society? · 2020-10-13T14:01:02.028Z · EA · GW

I downvoted this. "Please, if you disagree with me, carry your precious opinion elsewhere" reads to me as more than slightly rude and effectively an intentional insult to people who disagree with the OP and would otherwise have shared their views. I think it's totally reasonable to worry in advance about a thread veering away from the topic you want to discuss and to preempt that with a request to directly answer your question [Edited slightly], and I wouldn't have downvoted without the reference to other people's "precious views."

Comment by HowieL on No More Pandemics: a grassroots group? · 2020-10-03T12:27:42.248Z · EA · GW

Lobbying v. grassroots advocacy

This is just semantics, but I think you probably don't want to call what you're proposing a "lobbying group." Lobbying usually refers to one particular form of advocacy (face-to-face meetings with legislators), and in many countries[1] it is regulated more heavily than other forms of advocacy.

(It's possible that in the UK, "lobbying group" means something more general, but in the U.S. it implies the narrower meaning.)

[1] This is true in the U.S., which I know best. Wikipedia suggests it's true in the EU but appears less true in the UK.

Who else is working on this?

Here are a couple small examples of things being done along these lines, though I agree there is little overall:

-Resolve to Save Lives claims to do some advocacy for epidemic preparedness in low-income countries in collaboration with the Global Health Advocacy Incubator. The latter group seems to be hiring an Advocacy Director, though the posting is old, so I wouldn't be surprised if it's out of date.

-PATH has done some advocacy to encourage the U.S. government to invest in global health security.

Comment by HowieL on 5,000 people have pledged to give at least 10% of their lifetime incomes to effective charities · 2020-09-29T10:11:27.011Z · EA · GW

I didn't actually become a member until after the wording of the pledge changed but I do vividly remember the first wave of press because all my friends sent me articles showing that there were some kids in Oxford who were just like me.

Learning about Giving What We Can (and, separately, Jeff and Julia) made me feel less alone in the world and I feel really grateful for that.

Comment by HowieL on 80,000 Hours user survey closes this Sunday · 2020-09-23T13:46:40.471Z · EA · GW

Hi RandomEA,

Thanks for pointing this out (and for the support).

We only update the 'Last updated' field for major updates, not small ones. I think we'll rename it 'Last major update' to make it clearer.

The edit you noticed wasn't intended to indicate that we've changed our view on the effectiveness of existential risk reduction work. That paragraph was only meant to demonstrate how it’s possible that x-risk reduction could be competitive with top charities from a present-lives-saved perspective. The author decided we could make this point better by using illustrative figures that are more conservative than 80k’s actual rough guess and made the edit. We’ve tinkered with the wording to make it clearer that they are not actual cost-effectiveness estimates.

Also, note that in both cases the paragraph was about hypothetical effectiveness if you only cared about present lives, which is very different from our actual estimate of cost effectiveness.

Hope this helps clear things up.

Comment by HowieL on Long-Term Future Fund: April 2020 grants and recommendations · 2020-09-19T20:34:22.887Z · EA · GW

Not an expert but, fwiw, my impression is that this is more common in CS than philosophy and the social science areas I know best.

Comment by HowieL on Some thoughts on EA outreach to high schoolers · 2020-09-17T10:02:38.736Z · EA · GW

I'm very worried that staff at EA orgs (myself included) seem to know very little about Gen Z social media and am really glad you're learning about this.

Comment by HowieL on Some thoughts on EA outreach to high schoolers · 2020-09-17T10:00:49.177Z · EA · GW

I think it's especially dangerous to use this word when talking about high schoolers, especially given the number of cult and near-cult groups that have arisen in communities adjacent to EA.

Comment by HowieL on MichaelA's Shortform · 2020-09-11T18:38:34.831Z · EA · GW

Seems reasonable

Comment by HowieL on MichaelA's Shortform · 2020-09-11T14:42:53.030Z · EA · GW

"People have found my summaries and collections very useful, and some people have found my original research not so useful/impressive"

I haven't read enough of your original research to know whether it applies in your case but just flagging that most original research has a much narrower target audience than the summaries/collections, so I'd expect fewer people to find it useful (and a relatively broad summary of feedback to be biased against it).

That said, as you know, I think your summaries/collections are useful and underprovided.

Comment by HowieL on Should surveys about the quality/impact of research outputs be more common? · 2020-09-11T14:38:43.074Z · EA · GW

This all seems reasonable to me though I haven't thought much about my overall take.

I think the details matter a lot for "Even among individual researchers who work independently, or whose org isn't running surveys, probably relatively few should run their own, relatively publicly advertised individual surveys"

A lot of people might get a lot of the value from a fairly small number of responses, which would minimise costs and negative externalities. I even think it's often possible to close a survey after a certain number of responses.

A counterargument is that the people who respond earliest might be unrepresentative. But for a lot of purposes, it's not obvious to me you need a representative sample. "Among the people who are making the most use of my research, how is it useful" can be pretty informative on its own.

Comment by HowieL on Should surveys about the quality/impact of research outputs be more common? · 2020-09-09T16:42:43.899Z · EA · GW

[Not meant to express an overall view.] I don't think you mention the time of the respondents as a cost of these surveys, but I think it can be one of the main costs. There's also risk of survey fatigue if EA researchers all double down on surveys.

Comment by HowieL on Asking for advice · 2020-09-09T16:36:04.955Z · EA · GW

I find it off-putting though I don't endorse my reaction and overall think the time savings mean I'm personally net better off when other people use it.

I think for me, it's about taking something that used to be a normal human interaction and automating it instead. Feels unfriendly somehow. Maybe that's a status thing?

Comment by HowieL on An argument for keeping open the option of earning to save · 2020-09-03T16:40:31.631Z · EA · GW

Though there's a bit of a tradeoff: putting the money into a DAF/trust might alleviate some of the negative effects Ben mentioned, but it would also lose out on a lot of the benefits Raemon is going for.

Comment by HowieL on An argument for keeping open the option of earning to save · 2020-09-01T11:51:06.527Z · EA · GW

[My own views here, not necessarily Ben’s or “80k’s”. I reviewed the OP before it went out but don’t share all the views expressed in it (and don’t think I’ve fully thought through all the relevant considerations).]

Thanks for the comment!

“You say you take (1) to be obvious, but I think that you’re treating the optimal percentage as kind of exogenous rather than dependent on the giving opportunities in the system.”

I mostly agree with this. The argument’s force/applicability is much weaker because of this. Indeed, if EAs are spending a higher/lower proportion of their assets at some point in the future, that’s prima facie evidence that the optimal allocation is higher/lower at that time.

(I do think a literal reading of the post is consistent with the optimal percentage varying endogenously but agree that it had an exogenous 'vibe' and that's important.)

“So the argument really feels like:
Maybe in the future the community will give to some places that are worse than this other place [=saving]. If you’re smarter than the aggregate community then it will be good if you control a larger slice of the resources so you can help to hedge against this mistake. This pushes towards earning.
I think if you don’t have reason to believe you’ll do better than the aggregate community then this shouldn’t get much weight; if you do have such reason then it’s legitimate to give it some weight. But this was already a legitimate argument before you thought about saving! It applies whenever there are multiple possible uses of capital and you worry that future people might make a mistake. I suppose whenever you think of a new possible use of capital it becomes a tiny bit stronger?”

I think this is a good point but a bit too strong, as I do think there’s more to the argument than just the above. I feel pretty uncertain whether the below holds together and would love to be corrected but I understood the post to be arguing something like:

i) For people whose assets are mostly financial, it’s pretty easy to push the portfolio toward the now/later distribution they think is best. If this were also true for labour and actors had no other constraints/incentives, then I’d expect the community’s allocation to reflect its aggregate beliefs about the optimum, so pushing away from that would constitute a claim that you know better.

ii) But, actors making up a large proportion of total financial assets may have constraints other than maximising impact, which could lead the community to spend faster than the aggregate of the community thinks is correct:

  • Large donors usually want to donate before they die (and Open Phil’s donors have pledged to do so). (Of course, it’s arguable whether this should be modeled as such a constraint or as a claim about optimal timing).

  • Other holders of financial capital may not have enough resources to realistically make up for that.

iii) In an idealised ‘perfect marketplace’ holders of human capital would “invest” their labour to make up for this. But they also face constraints:

  • Global priorities research, movement/community building, and ‘meta’ can only usefully absorb a limited amount of labour.
  • Human capital can’t be saved after you die and loses value each year as you age.
  • [I’m less sure about this one and think it’s less important.] As career capital opportunities dry up when people age, it will become more and more personally costly for them to stay in career capital mode to ‘invest’ their labour. This might lead reasonable behaviour from a self-interested standpoint to diverge from what would create a theoretically optimal portfolio for the community.

This means that for the community to maintain the allocation it thinks is optimal, people may have to convert their labour into capital so that it can be ‘saved/invested.’ But most people don’t even know that this is an option (ETA: or at least it's not a salient one) and haven’t heard of earning to save. So pointing this out may empower the community to achieve its aggregate preferences, as opposed to being a way to undermine them.

“But at present I’m worried that this isn’t really a new argument and the post risks giving inappropriate prominence to the idea of earning to save (which I think could be quite toxic for the community for reasons you mention), even given your caveats.”

I agree this is a reasonable concern and I was a bit worried about it, too, since I think this is overall a small consideration in favor of earning to save, which I agree could be quite toxic. But I do think the post tries to caveat a lot, and it overall seems good for there to be a forum where even minor considerations can be considered in a quick post, so I thought it was worth posting. (Fwiw, I think getting this reaction from you was valuable.)

I’m open to the possibility that this isn’t realistic, though. And something like “some considerations on earning to save” might have been a better title.

Comment by HowieL on The academic contribution to AI safety seems large · 2020-07-30T15:05:39.050Z · EA · GW

If you want some more examples of specific research/researchers, a bunch of the grantees from FLI's 2015 AI Safety RFP are non-EA academics who have done some research in fields potentially relevant to mid-term safety.

https://futureoflife.org/ai-safety-research/

Comment by HowieL on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-12T10:06:48.231Z · EA · GW

Fwiw, I think you're both right here. If you were to hire a reasonably good lawyer to help with this, I suspect the default is they'd say what Habryka suggests. That said, I also do think that lawyers are trained to do things like remove vagueness from policies.

Basically, I don't think it'd be useful to hire a lawyer in their capacity as a lawyer. But, to the extent there happen to be lawyers among the people you'd consider asking for advice anyway, I'd expect them to be disproportionately good at this kind of thing.

[Source: I went to two years of law school but haven't worked much with lawyers on this type of thing.]

Comment by HowieL on Long-Term Future Fund: April 2019 grant recommendations · 2020-02-10T15:28:22.258Z · EA · GW

You say no to "Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?" and say "2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years."


I feel like I'd much better understand what you mean if you were up for giving some probabilities here, even if there's a range or they're imprecise or unstable. There's a really big range within "likely" and I'd like some sense of where you are on that range.

Comment by HowieL on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:24:29.929Z · EA · GW

[Note - I endorse the idea of splitting it into two much more strongly than any of the specifics in this comment]

Agree that you shouldn't be quite as vague as the GW policy (although I do think you should put a bunch of weight on GW's precedent as well as Open Phil's).

Quick thoughts on a few benefits of staying at a higher level (none of which are necessarily conclusive):

1) It's not obviously less informative.

If somebody clicks on a conflict of interest policy wanting to figure out if they generally trust the LTF, and they see a bunch of stuff about metamours and psychedelics, that's going to end up incredibly salient to them without necessarily making them more informed about what they actually cared about. It can actually just be a distraction.

Like, let's say analogous institutions also have psychedelic-related COIs but just group them under "important social relationships" or something. Now, the LTF looks like that fund where all the staff are doing psychedelics with the grantees. I don't think anybody became more informed. (This is especially the case if the info is available *somewhere* for people who care about the details).


2) Flexibility

It's just really hard to anticipate all of the relevant cases and the principles you're using are the thing you might actually want to lock in.


3) Giving lots of detail means lack of disclosure can send a lot of signal.

If you have enough detail about exactly what level of friends someone needs to be with someone else in order to trigger a disclosure, then you end up forcing members to send all sorts of weird signals by not disclosing things (e.g. I don't actually consider my friendship with person X that important). This just gets complicated fast.

---

All that said, I think a lot of this just has to be determined by the level of disclosure and type of policy LTF donors are demanding. I've donated a bit and would be comfortable trusting something more general but also am probably not representative.

Comment by HowieL on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:06:30.216Z · EA · GW

I guess I think a private board might be helpful even with pretty minimal time input. I think you mostly want some people who seem unbiased to avoid making huge errors, as opposed to trying to get the optimal decision in every case. That said, I'm sympathetic to wanting to avoid the extra bureaucracy.

The comparison to the for-profit sector seems useful but I wouldn't emphasize it *too* much. When you can't rely on markets to hold an org accountable, it makes sense that you'll sometimes need an extra layer.

When for-profits start to need to achieve legitimacy that can't be provided by markets, they seem to start to look towards these kinds of boards, too. (E.g. FB looking into governance boards).

That said, I don't have a strong take on whether this is a good idea.

Comment by HowieL on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:00:29.580Z · EA · GW

Ah - whoops. Sorry I missed that.

Comment by HowieL on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T22:37:49.323Z · EA · GW

Having a private board for close calls also doesn't seem crazy to me.

Comment by HowieL on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T22:37:17.215Z · EA · GW

Hmm. Do you have to make it public every time someone recuses themself? If someone could nonpublicly recuse themself, that at least gives them the option to avoid biasing the result while also not having to stick their past romantic lives on the internet.

Comment by HowieL on Attempted summary of the 2019-nCoV situation — 80,000 Hours · 2020-02-06T17:05:46.306Z · EA · GW

Thanks - this is helpful.