Comments

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-12T10:06:48.231Z · score: 8 (5 votes) · EA · GW

Fwiw, I think you're both right here. If you were to hire a reasonably good lawyer to help with this, I suspect the default is they'd say what Habryka suggests. That said, I also do think that lawyers are trained to do things like remove vagueness from policies.

Basically, I don't think it'd be useful to hire a lawyer in their capacity as a lawyer. But, to the extent there happen to be lawyers among the people you'd consider asking for advice anyway, I'd expect them to be disproportionately good at this kind of thing.

[Source: I went to two years of law school but haven't worked much with lawyers on this type of thing.]

Comment by howiel on Long-Term Future Fund: April 2019 grant recommendations · 2020-02-10T15:28:22.258Z · score: 5 (3 votes) · EA · GW

You answer "no" to "Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?" but also say "2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years."

I feel like I'd much better understand what you mean if you were up for giving some probabilities here, even if there's a range or they're imprecise or unstable. There's a really big range within "likely" and I'd like some sense of where you are on that range.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:24:29.929Z · score: 14 (7 votes) · EA · GW

[Note - I endorse the idea of splitting it into two much more strongly than any of the specifics in this comment]

Agree that you shouldn't be quite as vague as the GW policy (although I do think you should put a bunch of weight on GW's precedent as well as Open Phil's).

Quick thoughts on a few benefits of staying at a higher level (none of which are necessarily conclusive):

1) It's not obviously less informative.

If somebody clicks on a conflict of interest policy wanting to figure out whether they generally trust the LTF, and they see a bunch of stuff about metamours and psychedelics, that's going to end up incredibly salient to them without necessarily making them more informed about what they actually care about. It can actually just be a distraction.

Like, let's say analogous institutions also have psychedelic-related COIs but just group them under "important social relationships" or something. Now the LTF looks like the one fund where all the staff are doing psychedelics with the grantees. I don't think anybody became more informed. (This is especially the case if the info is available *somewhere* for people who care about the details.)


2) Flexibility

It's just really hard to anticipate all of the relevant cases and the principles you're using are the thing you might actually want to lock in.


3) Giving lots of detail means lack of disclosure can send a lot of signal.

If the policy specifies exactly how close a friendship has to be to trigger a disclosure, then you end up forcing members to send all sorts of weird signals by not disclosing things (e.g. "I don't actually consider my friendship with person X that important"). This just gets complicated fast.

---

All that said, I think a lot of this just has to be determined by the level of disclosure and type of policy LTF donors are demanding. I've donated a bit and would be comfortable trusting something more general, but I'm also probably not representative.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:06:30.216Z · score: 2 (2 votes) · EA · GW

I guess I think a private board might be helpful even with pretty minimal time input. I think you mostly want some people who seem unbiased to avoid making huge errors, as opposed to trying to get the optimal decision in every case. That said, I'm sympathetic to wanting to avoid the extra bureaucracy.

The comparison to the for-profit sector seems useful but I wouldn't emphasize it *too* much. When you can't rely on markets to hold an org accountable, it makes sense that you'll sometimes need an extra layer.

When for-profits start to need legitimacy that can't be provided by markets, they seem to start looking towards these kinds of boards, too. (E.g. FB looking into governance boards.)

That said, I don't have a strong take on whether this is a good idea.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-07T18:00:29.580Z · score: 5 (3 votes) · EA · GW

Ah - whoops. Sorry I missed that.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T22:37:49.323Z · score: 4 (3 votes) · EA · GW

Having a private board for close calls also doesn't seem crazy to me.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T22:37:17.215Z · score: 6 (3 votes) · EA · GW

Hmm. Do you have to make it public every time someone recuses themself? If someone could nonpublicly recuse themself, that would at least give them the option to avoid biasing the result without having to stick their past romantic lives on the internet.

Comment by howiel on Attempted summary of the 2019-nCoV situation — 80,000 Hours · 2020-02-06T17:05:46.306Z · score: 2 (2 votes) · EA · GW

Thanks - this is helpful.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T15:17:22.442Z · score: 1 (1 votes) · EA · GW

(Note that I'm not saying that recusal would necessarily be bad)

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T14:48:28.020Z · score: 14 (9 votes) · EA · GW

Wanted to +1 this in general although I haven't thought through exactly where I think the tradeoff should be.

My best guess is that the official policy should be a bit closer to the level of detail GiveWell uses to describe their policy than to the level of detail you're currently using. If you wanted to elaborate, one possibility might be to give some examples of how you might respond to different situations in an EA Forum post separate from the official policy.

Comment by howiel on Request for Feedback: Draft of a COI policy for the Long Term Future Fund · 2020-02-06T14:42:38.492Z · score: 11 (5 votes) · EA · GW

+1 that requiring disclosure of past intimate relationships seems bad. Especially if the bar is a relationship that lasted 2 weeks.

Comment by howiel on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-31T09:17:51.516Z · score: 12 (9 votes) · EA · GW

Fwiw, the "pleasure doing business" line was the only part of your tone that struck me as off when I read the thread.

Comment by howiel on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-30T15:33:12.759Z · score: 8 (5 votes) · EA · GW

FYI - a study of outcomes as of Jan 25 for all 99 2019-nCoV patients admitted to a hospital in Wuhan between Jan 1 and Jan 20.

Many caveats apply. Only includes confirmed cases, not suspected ones. People who end up at a hospital are selected for being more severely ill. 60% of the patients have not yet been discharged so haven't experienced the full progression of the disease. Etc.

https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)30211-7/fulltext#%20

Comment by howiel on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-28T23:33:14.158Z · score: 8 (5 votes) · EA · GW

Here's a chart of odds of death by age that was tweeted by an epidemiology professor at Hopkins. I can't otherwise vouch for the reliability of the data, and the usual caveat applies that mortality data sucks this early in an epidemic. https://twitter.com/JustinLessler/status/1222108497556279297

Comment by howiel on Love seems like a high priority · 2020-01-25T15:10:57.369Z · score: 1 (1 votes) · EA · GW

[Retracted]

Comment by howiel on Where are you donating this year and why – in 2019? Open thread for discussion. · 2020-01-08T18:34:43.420Z · score: 4 (5 votes) · EA · GW

Thanks for setting such a good example here, Nicole! Taking care of yourself like this is a really important community norm and sharing your example seems like a really good way to promote it.

Comment by howiel on In praise of unhistoric heroism · 2020-01-08T15:09:24.418Z · score: 11 (9 votes) · EA · GW

This riff from Eliezer seems relevant to me:

The rules say we have to use consequentialism, but good people are deontologists, and virtue ethics is what actually works.

https://www.facebook.com/yudkowsky/posts/10154965691294228

Thinking in terms of virtue ethics on a day-to-day basis seems like a good way for some people to internalize some of the things folks have brought up in this thread, although I've never been able to do it successfully myself.

Comment by howiel on Is mindfulness good for you? · 2019-12-30T22:35:15.538Z · score: 13 (5 votes) · EA · GW

I briefly and informally looked into this several years ago and, at the time, had a few additional concerns. (Can't promise I'm remembering this perfectly and the research may have progressed since then).

1) Many of the best studies on mindfulness's effect on depression and anxiety were specifically on populations where people had other medical conditions (especially, I think, chronic pain or chronic illness) in addition to mental illness. But, most people I know who are interested in mindfulness aren't specifically interested in this population.

My impression is that Jon Kabat-Zinn initially developed Mindfulness-Based Stress Reduction (MBSR) for people with other conditions and my intuition from my experience with it is that it might be especially helpful for things like chronic pain. So I had some external validity concerns.

2) There were few studies of long-term effects and it seems pretty plausible the effects would fade over time. This is especially true if we care about intention-to-treat effects. The fixed cost of an MBSR course might only be justified if it can be amortized over a fairly long period. But it wouldn't be surprising if there are short-to-medium term benefits that fade over time as people stop practicing.

By contrast, getting a prescription for anti-depressants or anti-anxiety medication has a much lower fixed cost, and it's less costly and easier to take a pill every day (or as needed) than to keep up a meditation practice. (On the other hand, some meds have side effects for many people.)

3) You already mention that "many of those researching it seem to be true believers" but it seems worth reemphasizing this. When I looked over the studies included in a meta-analysis (I think it was the relevant Cochrane Review), I think a significant proportion of them literally had Jon Kabat-Zinn (the founder of MBSR) as a coauthor.

---

All that said, my personal subjective experience is that meditating has had a moderate but positive effect on my anxiety and possibly my depression when I've managed to keep it up.


Comment by howiel on What is EA opinion on The Bulletin of the Atomic Scientists? · 2019-12-04T08:32:47.276Z · score: 1 (1 votes) · EA · GW

Thanks!

Comment by howiel on What is EA opinion on The Bulletin of the Atomic Scientists? · 2019-12-03T13:01:31.931Z · score: 30 (16 votes) · EA · GW

I've read some useful stuff in the Bulletin as well as some stuff I really disagree with. I definitely don't think there's anything *wrong* with it.

Greg Lewis, an EA who works on biorisk at FHI, published an article I really like in the Bulletin called "Horsepox synthesis: A case of the unilateralist’s curse?" Here's a post on the Bulletin's critique of Open Phil's biorisk program.

I think there are a bunch of potential reasons the Bulletin doesn't appear much in EA discussions:

-It's a media/magazine/news organization so it mostly publishes articles on current events, which EAs tend not to focus on. [ETA: As cwgoes mentions, the journal has a longer time horizon but is still more focused on currentish stuff than most EAs. More like a policy journal than an academic one.]

-While it does have some content on biorisk and AI, the two potential x-risks EAs tend to focus on, it's still quite focused on nukes.

-EA can be a bit insular, and a lot of EAs know a lot more about GCR-relevant orgs with some connection to EA than about those without one.


Comment by howiel on ALLFED 2019 Annual Report and Fundraising Appeal · 2019-11-26T01:31:40.045Z · score: 3 (3 votes) · EA · GW

Prolific means number of papers authored?

Comment by howiel on EA Mental Health Care Navigator Pilot · 2019-11-25T17:44:52.658Z · score: 3 (2 votes) · EA · GW

Hi Danica,

Could I include your Bay Area list in a global EA mental health resource list?

Hmm. I'd like to be helpful but also want to be sensitive to the privacy of people who gave me info to add to my list (and don't have the ability to go back and check with everyone who contributed).

I'm happy to have you link to my list and list the names of the practitioners on there. I feel a little funny about taking the descriptions/commentary and adding it to a second list. Do you think that would be sufficient?

Alternatively, would you be interested in expanding the existing list to global EA-recommended mental health practitioners? Or collaborating to create a separate global list?

This sounds like a great project and I'd love to help but I unfortunately don't have the time. Good luck!

Comment by howiel on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-21T21:09:47.647Z · score: 15 (8 votes) · EA · GW

I thought this was great. Thanks, Buck!

Comment by howiel on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T23:57:07.571Z · score: 2 (2 votes) · EA · GW

"If I thought there was a <30% chance of AGI within 50 years, I'd probably not be working on AI safety."

Do you have a guess at what you would be working on?

Comment by howiel on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T23:52:38.609Z · score: 45 (16 votes) · EA · GW

Hi EarlyVelcro,

Howie from 80k here.

As Ben said in his comment, the key ideas page, which is the most current summary of 80k’s views, doesn't recommend that “EA should focus on AI alone”. We don't think the EA community's focus should be anything close to that narrow.

That said, I do see how the page might give the impression that AI dominates 80k’s recommendations since most of the other paths/problems talked about are ‘meta’ or ‘capacity building’ paths. The page mentions that “we’d be excited for people to explore [our list of problems we haven’t yet investigated] as well as other areas that could foreseeably have a positive effect on the long-term future” but it doesn’t say anything about what those problems are (other than a link to our problem profiles page, which has a list).

I think it makes sense that people end up focusing on the areas we mention directly and the page could do a better job of communicating that our priorities are more diverse.

The good news is that we’re currently putting together a more thorough list of areas that we think might be very promising but aren't among our priority paths/problems.[1] Unfortunately, it didn’t quite get done in time to add it to this version of key ideas.

More generally, I think 80k’s content was particularly heavy on AI over the last year and, while it will likely remain our top priority, I expect it will make up a smaller portion of our content over the next few years.

[1] Many of these will be areas we haven't yet investigated or areas that are too niche to highlight among our priority paths.

Comment by howiel on EA Mental Health Care Navigator Pilot · 2019-10-31T17:05:21.428Z · score: 11 (7 votes) · EA · GW

Here's a list of mental health practitioners in the Bay Area (mostly SF/East Bay) that I or someone I know has at least minimally vetted. https://docs.google.com/document/d/1KKwe1bAagI7FOInrkcnENCcQsFrKc5K3hXPG0dALWWI/edit

Unfortunately, it's increasingly out of date and I don't live in the Bay anymore.

I'd be happy to add to it if anybody has practitioners they do (or don't) like or sees one of the people on the list and wants to tell me how it went. The list is publicly viewable and I'd include as much or as little information as you'd like. Just DM me if you'd like to add someone.

Alternatively, you can make an anonymous request for me to add to the list here. https://www.admonymous.co/howie

(Also consider adding to SSC's Psychiat-list, mentioned by Milan above - https://psychiat-list.slatestarcodex.com)

Comment by howiel on Publication of Stuart Russell’s new book on AI safety - reviews needed · 2019-10-12T07:55:27.806Z · score: 1 (1 votes) · EA · GW

Huh. Ok, I think you're onto something since if I go to Audible.co.uk in Incognito, the book seems to be there. But I don't totally follow you.

You're right that my Audible/Amazon accounts were registered in the US and that I'm now in the UK. Do I need to reregister my account in the UK somehow so it's consistent with where I live? And why would that make certain audiobooks unbuyable for me but not others?

Comment by howiel on Publication of Stuart Russell’s new book on AI safety - reviews needed · 2019-10-11T23:46:44.031Z · score: 6 (2 votes) · EA · GW

That's great!

I ended up getting one from the US Audible store so it doesn't matter for me personally anymore. But just FYI, since it's at least possible the problem isn't limited to me not knowing how to use technology:

That link for the Kindle version works for me but when I search for "Human Compatible" on my phone (in the Android Kindle App's store), it doesn't appear.

When I follow the Audible link, I get the message "Title Not For Sale In This Country/Region." Same happens when I search in my phone's Audible store.


Comment by howiel on X-risk dollars -> Andrew Yang? · 2019-10-11T23:05:04.216Z · score: 10 (6 votes) · EA · GW

Cool. That's a bit more distinctive, although not more so than what Hillary Clinton said in her book:

Technologists like Elon Musk, Sam Altman, and Bill Gates, and physicists like Stephen Hawking have warned that artificial intelligence could one day pose an existential security threat. Musk has called it “the greatest risk we face as a civilization.” Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I’d start talking about “the rise of the robots” in some Iowa town hall. Maybe I should have. In any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up.

https://lukemuehlhauser.com/hillary-clinton-on-ai-risk/

Comment by howiel on X-risk dollars -> Andrew Yang? · 2019-10-11T22:59:55.625Z · score: 3 (2 votes) · EA · GW

Thanks

Comment by howiel on X-risk dollars -> Andrew Yang? · 2019-10-11T22:45:19.721Z · score: 11 (9 votes) · EA · GW

[I am not an expert on any of this.]

Is that tweet the only (public) evidence that Andrew Yang understands/cares about x-risk?

A cynical interpretation of the tweet is that we learned that Yang has one (maxed out) donor who likes Bostrom.

My impression is that: 1) it'd be very unusual for somebody to understand much about x-risk from one phone call; 2) sending out an enthusiastic tweet would be the polite/savvy thing to do after taking a call that a donor enthusiastically set up for you; 3) a lot of politicians find it cool to spend half an hour chatting with a famous Oxford philosophy professor with mind blowing ideas. I think there are a lot of influential people who'd be happy to take a call on x-risk but wouldn't understand or feel much different about it than the median person in their reference class.

I know virtually nothing about Andrew Yang in particular and that tweet is certainly *consistent* with him caring about this stuff. Just wary of updating *too* much.


Comment by howiel on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-11T13:07:46.192Z · score: 4 (3 votes) · EA · GW

Ah, sorry. Was writing quickly and that was kind of sloppy on my part. Thanks for the correction!

Edited to be clearer.

Comment by howiel on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-11T03:55:24.513Z · score: 12 (5 votes) · EA · GW

For anybody who wants to look more into CSER, Sean provided me with his quick take on a few articles he thinks are representative and that he's proud of.

https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison?commentId=cwgFuMEc55i3w3wyf

[Edited to more accurately describe the list as just Sean's quick take]

Comment by howiel on Publication of Stuart Russell’s new book on AI safety - reviews needed · 2019-10-11T03:49:34.458Z · score: 2 (2 votes) · EA · GW

Will it be available on Kindle/Audible in the UK? If so, do you know when?

Comment by howiel on Are we living at the most influential time in history? · 2019-09-05T17:39:47.489Z · score: 20 (9 votes) · EA · GW

On synbio I'm less familiar but I'm guessing someone more than a few decades ago was able to think thoughts like "Once we understand cell biology really well, seems like we might be able to engineer pathogens much more destructive than those served up by nature."

+1. I don't know the intellectual history well but the risk from engineered pathogens should have been apparent 4 decades ago in 1975 if not (more likely, IMO) earlier.

A fairly random sample of writing on the topic:

  • Jack London's 1910 short story "An Unparalleled Invasion" [CW: really racist] imagines genocide through biological warfare and the possibility that a "hybridization" between pathogens created "a new and frightfully virulent germ" (I don't think he's suggesting the hybridization was intentional but it's a bit ambiguous).
  • The possibility of engineering pathogens was seriously discussed at the Asilomar Conference in 1975.
  • There's a 1982 sci-fi book by a famous writer where a vengeful molecular biologist releases a pathogen engineered to be GCR-or-worse.
  • In 1986, a U.S. Defense Department official was quoted saying, “The technology that now makes possible so-called ‘designer drugs’ also makes possible designer BW.”
  • In 2000 (admittedly just 2 decades ago), ~x-risk from engineered pathogens was explicitly worried about in "Why the Future Doesn't Need Us."

Comment by howiel on Three Jobs for Policy-Oriented EAs · 2019-07-24T10:11:35.179Z · score: 17 (9 votes) · EA · GW

Fwiw, I think the post's title contributed to the vibe Peter described. Calling it "3 jobs for policy-oriented EAs" and making readers click through to find out the relevant cause pattern-matches to clickbait, especially because all three jobs are at the same org (where you work), so the title easily could have been something like "The Good Food Institute is hiring three people to work on regulatory and policy issues related to clean meat."

Comment by howiel on A Framework for Thinking about the EA Labor Market · 2019-05-17T01:09:13.025Z · score: 10 (6 votes) · EA · GW

Oh, sorry. DC was just meant to apply to the think tanks. I was comparing to Fed jobs in SF and NY to try to account for this.

The 75% adjustment seems too high to me - at least for someone who's renting, as most entry-level staff in both of those cities will be. The calculator you used assumed home ownership. Here's a cost of living calculator that claims the difference is more like 20% for renters (although nonproprietary cost of living data tends to be iffy in my experience). Anecdotally, I've lived in both cities and did not experience anything close to a 75% difference.

If you did use the 75% number, some DC think tank RAs would come out ahead but I don't have a precise enough sense of the range of salaries to know how many. If you use the 20% adjustment, I think my original claim will still hold.
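
To make the size of that disagreement concrete, here's a minimal sketch of the arithmetic (the $60k salary is a hypothetical figure, purely for illustration):

```python
# Hypothetical comparison: what SF salary matches a given DC salary
# under each of the two cost-of-living adjustments discussed above.
dc_salary = 60_000  # hypothetical entry-level DC research assistant salary

# Under the calculator's 75% adjustment:
sf_equivalent_75 = dc_salary * 1.75  # $105,000 needed in SF to match

# Under the ~20% renter-based adjustment:
sf_equivalent_20 = dc_salary * 1.20  # $72,000 needed in SF to match

print(f"75% adjustment: ${sf_equivalent_75:,.0f}")
print(f"20% adjustment: ${sf_equivalent_20:,.0f}")
```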

Comment by howiel on A Framework for Thinking about the EA Labor Market · 2019-05-16T17:07:23.556Z · score: 1 (1 votes) · EA · GW

Greg is right that the stat is out of date. I elaborate here: https://forum.effectivealtruism.org/posts/CkYq5vRaJqPkpfQEt/a-framework-for-thinking-about-the-ea-labor-market#osRTvpjHJHNCGWa3Q

Comment by howiel on A Framework for Thinking about the EA Labor Market · 2019-05-16T17:06:21.118Z · score: 8 (5 votes) · EA · GW

For what it's worth, I think salaries at those organisations match or exceed the salaries for similar roles at comparable nonprofits. As I think has been pointed out elsewhere in the comments, that may or may not be the right 'market rate' to look at depending on whether you think these jobs offer disproportionate non-financial benefits.

Using one industry I personally happen to know well as a comparison, I think entry level salaries for research analysts at these organisations tend to be equal to or higher than salaries for economics research assistants at places like the Federal Reserve or top think tanks in DC.

Comment by howiel on A Framework for Thinking about the EA Labor Market · 2019-05-16T16:50:45.302Z · score: 12 (7 votes) · EA · GW

Hi Jon,

Howie from 80k here. Thanks for all your thoughts in the original post as well as this thread. Just wanted to make a quick factual correction. It looks like the page you're quoting, which we published in 2017, has fallen a bit out of date - at least for a subset of EA organisations.

At the most competitive EA orgs, entry level salaries in high-cost-of-living areas now typically range from ~$50k to ~$80k. The most competitive positions at those orgs typically pay at the high end of that range. That said, pay may vary outside of that range for specific positions and at other EA organisations. I'll update the page to clarify later today.

Comment by howiel on Will splashy philanthropy cause the biosecurity field to focus on the wrong risks? · 2019-05-02T09:35:08.700Z · score: 9 (4 votes) · EA · GW

Lentzos has written elsewhere about why she thinks terrorists using synthetic bioweapons is so unlikely. I quickly summarised her view in this comment: https://forum.effectivealtruism.org/posts/Kkw8uDwGuNnBhiYHi/will-splashy-philanthropy-cause-the-biosecurity-field-to#QupzPSJLmjoF2A4pN

Comment by howiel on Will splashy philanthropy cause the biosecurity field to focus on the wrong risks? · 2019-05-02T09:33:40.182Z · score: 28 (13 votes) · EA · GW

Note that Lentzos has also been critical of Bill Gates for drawing attention to the risk of terrorists using bioweapons. She thinks that terrorists are unlikely to deploy powerful bioweapons (because they won't have the capabilities or the motivation) and that, by talking about bioterrorists, Gates might draw attention away from state actors. https://thebulletin.org/2017/07/ignore-bill-gates-where-bioweapons-focus-really-belongs/

She's written more about why she thinks concerns about terrorists using synthetic biology to create WMDs are based on myths. Her main points of disagreement with what she calls the dominant narrative:

1. Synthetic biology is not easy.

2. Do-it-yourself biology is not particularly sophisticated. 

3. Building a dangerous virus from scratch is hard. 

4. Even experts have a hard time enhancing disease pathogens.

5. Terrorists aren't interested in making WMD bioweapons.

6. There are serious technical and logistical barriers to creating a bioweapon that's a WMD.

https://thebulletin.org/2014/09/the-myths-and-realities-of-synthetic-bioweapons/

Comment by howiel on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-19T00:35:37.581Z · score: 12 (8 votes) · EA · GW

It's not as prominent as it should be. We're going to fix that.

Comment by howiel on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-18T15:59:53.863Z · score: 26 (13 votes) · EA · GW

Rob says:

Re point 2, the "clarifying talent gaps" post and the "why focus on talent gaps" article do offer different views. They were published three years apart. The older post opens with a disclaimer linking to the new one.

Just to be clear, I added the disclaimer to that page today after lexande wrote their initial comment. I don't think Rob realised that the disclaimer was new.

[Rob's now edited his post to make that clear.]

Comment by howiel on What Master's is the best preparation for an Econ PhD? · 2019-04-17T15:43:40.856Z · score: 3 (2 votes) · EA · GW

+1 on taking PhD-level economics courses (especially intro micro theory) and doing well in them. I'm not sure how much it matters where you take those courses, but I'd guess high-ranked econ programs would place more weight on courses taken at schools where they expect the PhD courses to be equally challenging.

Comment by howiel on What Master's is the best preparation for an Econ PhD? · 2019-04-17T13:34:52.148Z · score: 5 (4 votes) · EA · GW

Note that if you Google around, there used to be some pretty good guides out there on how to get into econ PhD programs. For example, this one is good and links to some other good resources: https://chrisblattman.com/about/contact/gradschool/

Comment by howiel on What Master's is the best preparation for an Econ PhD? · 2019-04-17T13:33:07.771Z · score: 6 (5 votes) · EA · GW

[My views only, not my past or present employers']

My knowledge is out of date but I thought a lot about this about 8-10 years ago when I strongly considered getting a PhD in economics and worked as an economics research assistant at the Brookings Institution. All of the below should be caveated w/ 'as of 8-10 years ago.'

In general, +1 to dmolling's comment.

As of then, the LSE Masters in Econometrics and Mathematical Economics seemed to stand out as by far the best Econ Masters program if you wanted to go on to a PhD. Note that LSE also has other Masters in Economics programs that are more geared towards being a terminal masters and going on to work in policy as opposed to getting a PhD.

Agree with dmolling that US Masters in Economics programs are generally not seen as good feeders for PhD programs: they're seen as low status among academic economists and geared towards getting a policy job immediately rather than towards getting a PhD. It's possible another exception is the MPA/ID program at Harvard, which I think makes you take most of the first-year curriculum for econ PhD students (or at least used to). Make sure I'm right about this before applying.

The main other options are Masters in Statistics, applied math, or other quantitative areas. I don't know how these compare to the LSE Econometrics program but top math/stats programs probably beat US masters in econ programs. I'd guess Computer Science is also becoming a more popular background.

In general, I think the main things econ PhD programs judge you on are: 1) how many math/stats courses you've taken, how hard they were, and how well you did; 2) recommendations from top economists whom the people on the admissions committee know and trust.

I was always told that a top priority was taking (at least) linear algebra, real analysis, and a couple semesters of statistics - hopefully differential equations too, and ideally a bunch more math than that. My guess is that taking courses like these and excelling in them is more important than whether or not you come out with a particular degree, so, as dmolling says, you could just take them without being enrolled in a grad program. A heuristic I've gotten is that if you don't get an A in real analysis, you're probably not a good candidate for a top-6 econ program (but getting an A in real analysis is far from a guarantee; most people at those programs will have taken a lot more math than that).

General advice is that you should ask a Masters program for a list of where alums of the last several years went after finishing. For an econ Masters, if few of them went on to PhD programs you'd be happy at, assume the degree won't help your admissions prospects much.

Comment by howiel on Potential funding opportunity for woman-led EA organization · 2019-03-18T11:03:57.592Z · score: 3 (2 votes) · EA · GW

I think J-PAL North America does do lots of work in the US although I doubt they're focused enough on women/children to qualify for this. https://www.povertyactionlab.org/na/about

Comment by howiel on Getting People Excited About More EA Careers: A New Community Building Challenge · 2019-03-12T15:50:51.215Z · score: 11 (4 votes) · EA · GW

Hey Sebastian,

I'm sympathetic to your comment. The fact that (I think) 80k is not making this particular mistake in its IASPC system does not imply that there's nothing to be concerned about. I think your post as well as some of the comments in other threads do a good job of laying out many of the factors pushing people toward jobs at explicitly EA orgs.

Comment by howiel on Getting People Excited About More EA Careers: A New Community Building Challenge · 2019-03-11T15:32:53.426Z · score: 12 (6 votes) · EA · GW

Thanks for the thoughts, Max. As you suggest in your parenthetical, we aren't saying that 25% of the community ought to be working at EA orgs. The distribution of the plan changes we cause is also affected by things like our network being strongest within EA. That figure is also calculated from a fairly small number of our highest impact plan changes so it could easily change a lot over time.

Personally, I agree with your take that the optimal percentage of the community working at EA orgs is less than 25%.