Comments

Comment by HowieL on Ask (Everyone) Anything — “EA 101” · 2022-10-14T11:09:18.483Z · EA · GW

The 80k podcast also has some potentially relevant episodes, though they're probably not directly what you most want.

Comment by HowieL on Ask (Everyone) Anything — “EA 101” · 2022-10-14T11:04:54.220Z · EA · GW

My guess is that Part II, on trajectory changes, will have a bunch of relevant stuff. Maybe also a bit of Part 5. But unfortunately I don't remember too clearly.

Comment by HowieL on Ask (Everyone) Anything — “EA 101” · 2022-10-13T05:55:06.570Z · EA · GW

It's been a while since I read it but Joe Carlsmith's series on expected utility might help some. 

Comment by HowieL on Ask (Everyone) Anything — “EA 101” · 2022-10-13T05:50:37.285Z · EA · GW

[My impression. I haven't worked on grantmaking for a long time.] I think this depends on the topic, size of the grant, technicality of the grant, etc. Some grantmakers are themselves experts. Some grantmakers have experts in house. For technical/complicated grants, I think non-expert grantmakers will usually talk to at least some experts before pulling the trigger but it depends on how clearcut the case for the grant is, how big the grant is, etc.

Comment by HowieL on Ask (Everyone) Anything — “EA 101” · 2022-10-13T05:48:11.555Z · EA · GW

I think parts of What We Owe the Future by Will MacAskill discuss this approach a bit.

Comment by HowieL on Ask (Everyone) Anything — “EA 101” · 2022-10-13T05:47:13.553Z · EA · GW

Others, most of which I haven't fully read and which aren't always fully on topic:

Comment by HowieL on Ask (Everyone) Anything — “EA 101” · 2022-10-13T05:39:19.952Z · EA · GW

A much narrower recommendation for nearby problems is Overcoming Perfectionism (~a CBT workbook).

I'd recommend it to some EAs who are already struggling with these feelings (and I know some who've really benefitted from it). (It's not precisely aimed at this, but I think it can be repurposed for a subset of people.)

I wouldn't recommend it to students recently exposed to EA who are worried about these feelings in future.

Comment by HowieL on Ask (Everyone) Anything — “EA 101” · 2022-10-13T05:34:28.602Z · EA · GW

If you haven't come across it, a lot of EAs have found Nate Soares' Replacing Guilt series useful for this. (I personally didn't click with it but have lots of friends who did).

I like the way some of Joe Carlsmith's essays touch on this. 

Comment by HowieL on Ask (Everyone) Anything — “EA 101” · 2022-10-13T05:27:11.898Z · EA · GW

FYI - subsamples of that survey were asked about this in other ways, which gave some evidence that "extremely bad outcome" was ~equivalent to extinction.


Explicit P(doom) = 5-10%

The levels of badness involved in that last question seemed ambiguous in retrospect, so I added two new questions about human extinction explicitly. The median respondent’s probability of x-risk from humans failing to control AI[1] was 10%, weirdly more than median chance of human extinction from AI in general,[2] at 5%. This might just be because different people got these questions and the median is quite near the divide between 5% and 10%. The most interesting thing here is probably that these are both very high—it seems the ‘extremely bad outcome’ numbers in the old question were not just catastrophizing merely disastrous AI outcomes.

 

[1] Or, ‘human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species’

[2] That is, ‘future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species’

Comment by HowieL on Samotsvety Nuclear Risk update October 2022 · 2022-10-03T19:30:10.517Z · EA · GW

Thanks for this! It was really useful and will save 80,000 Hours a lot of time.

Comment by HowieL on Open EA Global · 2022-09-11T12:18:20.756Z · EA · GW

I think the people responsible for EA Global admissions (including Amy Labenz, Eli Nathan, and others) have added a bunch of value to me over the years by making it more likely that a conversation or meeting with somebody at EA Global who I don’t already know will end up being productive. Making admissions decisions at EAG (and being the public face of an exclusive admissions policy) sounds like a really thankless job, and I know a bunch of the people involved end up having to make decisions that make them pretty sad because they think it’s best for the world. I mostly just wanted to express some appreciation for them and to mention that I’ve benefitted from it, since it feels uncomfortable to say out loud and so is probably under-expressed.


One positive effect of selective admissions that I don’t often see discussed is that it makes me more likely to take meetings with folks I don’t already know. I’d guess that this increases the accessibility of EA leaders to a bunch of folks in the community.


Fwiw, I’ve sometimes gotten overambitious with the number of meetings I take at EAG and ended up socially exhausted enough to be noticeably less productive for several days afterwards. This is a big enough cost that I’ve skipped some years. So, I think in the past I’ve probably been on the margin where if the people at EAG had not been selected for being folks I could be helpful to, I’d have been less likely to go.

Comment by HowieL on Samotsvety's AI risk forecasts · 2022-09-09T11:58:56.474Z · EA · GW

I'm curious whether there's any answer AI experts could have given that would be a reasonably big update for you.

For example is there any level of consensus against ~AGI by 2070 (or some other date) that would be strong enough to move your forecast by 10 percentage points?

Comment by HowieL on Be careful with (outsourcing) hiring · 2022-09-06T17:41:59.416Z · EA · GW

I definitely agree that takeaway would be a mistake. I think my view is more like "if the specifics of what MT says on a particular topic don't feel like they really fit your organisation, you should not feel bound to them, especially if you're a small organisation with an unusual culture or if their advice seems to clash with conventional wisdom from other sources, especially in Silicon Valley."

I'd endorse their book as useful for managers at any org. A lot of the basic takeaways (especially having consistent one-on-ones) seem pretty robust, and it would be surprising if you shouldn't do them at all.

Comment by HowieL on Selfish Reasons to Move to DC · 2022-09-06T06:28:01.925Z · EA · GW

Agree with a lot of this post. I lived in DC from 2008 to 2010, plus various short periods before and after, and overall I liked it (though I'd probably like it a bit less today and expect a lot of EAs to like it less than I did).

The features of DC that most affected me:

- DC felt like a company town. This had advantages. I liked having tons of friends who were think tank analysts or worked on the Hill and were trying to change the world (though I suspect polarization has made the vibe a bit worse). It also had disadvantages. Relative to NYC (which I knew best at the time), I knew relatively few people living in DC because they wanted to make DC great, and this meant things like a worse music scene (despite the fact that I grew up on DC punk music).
- Lots of people, especially young people, only stay for a couple of years, so it was hard to maintain a friend group. I think this was a big deal.
- DC is small relative to a place like NY. Overall this felt like a disadvantage to me, though I expect it would be a feature to some other people. DC felt like more of a bubble and there were fewer places to explore. There was a concert I'd be interested in ~once a week instead of a couple per night. On the other hand, several houses full of friends and co-workers lived within a five minute walk, which was great. That said, it's still one of the biggest metro areas in the US.
- I thought it was cool/exciting to live in a city where policy and politics were happening (though I think I'd enjoy it less today).
- I think there were some disadvantages to everybody being very networky and the culture being kind of conservative.

Comment by HowieL on Be careful with (outsourcing) hiring · 2022-09-06T05:27:44.629Z · EA · GW

"I don't think they would put out material that fails to apply to them."

I think we mostly agree, but I don't think that's necessarily true. My impression is that they mainly study what's useful to their clients, and from what I can glean from their book, those clients are mostly big and corporate. I think they themselves might fall outside of their main target audience.

+1 to Paul Graham's essays.

Comment by HowieL on Be careful with (outsourcing) hiring · 2022-09-05T12:23:33.536Z · EA · GW

[Unfortunately didn't have time to read this whole post but thought it was worth chiming in with a narrow point.]

I like Manager Tools and have recommended it, but my impression is that some of their advice is better optimized for big, somewhat corporate organizations than for small startups and small nonprofits with an unusual amount of trust among staff. I'd usually recommend pairing MT with a source of advice targeted at startups (e.g. CEO Within, though the topics only partially overlap) so you know when the advice differs and can pick between them.

Comment by HowieL on Open EA Global · 2022-09-03T11:54:57.850Z · EA · GW

Just making sure you saw Eli Nathan's comment saying that this year and next year they didn't/won't hit venue capacity, so you're not taking anybody's spot.

Comment by HowieL on Prioritizing x-risks may require caring about future people · 2022-08-15T18:57:14.677Z · EA · GW

No worries!

Comment by HowieL on Prioritizing x-risks may require caring about future people · 2022-08-14T23:59:15.880Z · EA · GW

tl;dr I wouldn't put too much weight on my tweet saying I think I probably wouldn't be working on x-risk if I knew the world would end in 1,000 years, and I don't think my (wild) guess at the tractability of x-risk mitigation is particularly pessimistic.

***

Nice post. I agree with its overall message as well as much of Ben's comment on it. In particular, I think emphasizing the significance of future generations, and not just reducing x-risk, might end up as a crux for how much you care about: a) how much an intervention reduces x-risk v. GCRs that are unlikely to (directly?) lead to existential catastrophe; b) whether civilization just manages to avoid x-risk v. ends up on track to flourish as much as possible and last a lot longer than (e.g.) the typical mammalian species.

***

That said, I mostly came here to quickly caution against putting too much weight on this:

  1. Alyssa Vance’s tweet about whether the longtermism debate is academic
    1. And Howie Lempel responding, saying he thinks he would work on global poverty or animal welfare if he knew the world was ending in 1,000 years
    2. Howie’s response is interesting to me, as it implies a fairly pessimistic assessment of tractability of x-risks given that 1,000 years would shift the calculations presented here by over an OOM (>10 generations).

That's mostly for the general reason that I put approximately one reply tweet's worth of effort into it. But here are some specific reasons not to put too much weight on it, and also why I don't think it implies a particularly pessimistic assessment of the tractability of x-risk.[1]

  1. I'm not sure I endorse the Tweet on reflection mostly because of the next point.
  2. I'm not sure if my tweet was accounting for the (expected) size of future generations. A claim I'd feel better about would be "I probably wouldn't be working on x-risk reduction if I knew there would only be ~10X more beings in the future than are alive today or if I thought the value of future generations was only ~10X more than the present." My views on the importance of the next 1,000 years depend a lot on whether generations in the coming century are order(s) of magnitude bigger than the current generation (which seems possible if there's lots of morally relevant digital minds). [2]
  3. I haven't thought hard about this but I think my estimates of the cost-effectiveness of the top non-longtermist opportunities are probably higher than implied by your table.
    1. I think I put more weight on the badness of being in a factory farm and (probably?) the significance of chickens than implied by Thomas's estimate.
    2. I think the very best global health interventions are probably more leveraged than giving to GiveWell.
  4. I find animal welfare and global poverty more intuitively motivating than working on x-risk, so the case for working on x-risk had to be pretty strong to get me to spend my career on it. (Partly for reasons I endorse, partly for reasons I don't.)
  5. I think the experience I had at the time I switched the focus of my career was probably more relevant to global health and animal welfare than x-risk reduction.
  6. My claim was about what I would in fact be doing, not about what I ought to be doing.

[1] Actual view: wildly uncertain and it's been a while since I last thought about this but something like the numbers from Ben's newsletter or what's implied by the 0.01% fund seem within the realm of plausibility to me. Note that, as Ben says, this is my guess for the marginal dollar. I'd guess the cost effectiveness of the average dollar is higher and I might say something different if you caught me on a different day. 

[2] Otoh, conditional on the world ending in 1,000 years maybe it's a lot less likely that we ended up with lots of digital minds?

Comment by HowieL on Interesting vs. Important Work - A Place EA is Prioritizing Poorly · 2022-07-28T16:36:28.922Z · EA · GW

I agree with Caleb that theoretical AIS, infinite ethics, and rationality techniques don't currently seem to be overprioritized. I don't think there are all that many people working full-time on theoretical AIS (I would have guessed less than 20). I'd guess less than 1 FTE on infinite ethics. And not a ton on rationality, either. 

Maybe your point is more about academic or theoretical research in general? I think FHI and MIRI have both gotten smaller over the last couple of years and CSER's work seems less theoretical. But you might still think there's too much overall?

My impression is that there's much more of a supply of empirical AI safety research and, maybe, theoretical AI safety research written by part-time researchers on LessWrong. My impression is that this isn't the kind of thing you're talking about though.

There's a nearby claim I agree with, which is that object level work on specific cause areas seems undervalued relative to "meta" work.

"Academic-like research into interesting areas of AI risk is far easier to get funded by many funders than direct research into, say, vaccine production pipelines."

My guess is that this has less to do with valuing theory or interestingness over practical work, and more to do with funders prioritizing AI over bio. Curious if you disagree.

Comment by HowieL on Leaning into EA Disillusionment · 2022-07-22T11:08:11.560Z · EA · GW

Know that other people have gone through the disillusionment pipeline, including (especially!) very smart, dedicated, caring, independent-minded people who felt strong affinity for EA. Including people who you may have seen give talks at EA Global or who have held prestigious jobs at EA orgs.

Also, I think even people like this who haven't gone through the disillusionment pipeline are often a lot more uncertain about many (though not all) things than most newcomers would guess. 

Comment by HowieL on Leaning into EA Disillusionment · 2022-07-22T11:00:41.844Z · EA · GW

Thanks for writing this post. I think it improved my understanding of this phenomenon and I've recommended reading it to others.

Hopefully this doesn't feel nitpicky but if you'd be up for sharing, I'd be pretty interested in roughly how many people you're thinking of:

"I know at least a handful of people who have experienced this (and I’m sure there are many more I don’t know)—people who I think are incredibly smart, thoughtful, caring, and hard-working, as well as being independent thinkers. In other words, exactly the kind of people EA needs. Typically, they throw themselves into EA, invest years of their life and tons of their energy into the movement, but gradually become disillusioned and then fade away without having the energy or motivation to articulate why."

I'm just wondering whether I should update toward this being much more prevalent than I already thought it was.

Comment by HowieL on On Deference and Yudkowsky's AI Risk Estimates · 2022-07-22T02:24:11.717Z · EA · GW

"My best guess is that I don't think we would have a strong connection to Hanson without Eliezer"

Fwiw, I found Eliezer through Robin Hanson.

Comment by HowieL on [Link] New Lancet study: Impact of the first year of COVID-19 vaccinations · 2022-06-29T23:30:14.073Z · EA · GW

111% of deaths averted?

Comment by HowieL on Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins · 2022-06-25T12:02:30.200Z · EA · GW

Agree they have a bunch of very obnoxious business practices. Just fyi, you can change a setting so nobody can see whose pages you look at.

Comment by HowieL on Idea: Pay experts to provide detailed critiques / analyses of EA ideas · 2022-06-07T22:31:59.602Z · EA · GW

I think Open Philanthropy has done some of this. For example:

The Open Philanthropy technical reports I've relied on have had significant external expert review. Machine learning researchers reviewed Bio Anchors; neuroscientists reviewed Brain Computation; economists reviewed Explosive Growth; academics focused on relevant topics in uncertainty and/or probability reviewed Semi-informative Priors. (Some of these reviews had significant points of disagreement, but none of these points seemed to be cases where the reports contradicted a clear consensus of experts or literature.)

https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand

Comment by HowieL on Response to Recent Criticisms of Longtermism · 2022-05-26T20:45:42.443Z · EA · GW

Was this in the deleted tweet? The tweet I see is just him tagging someone with an exclamation point. I don't really think it would be accurate to characterise that as "Torres supports the 'voluntary human extinction' movement".

Comment by HowieL on Deferring · 2022-05-16T22:28:28.748Z · EA · GW

Yeah that does sell me a bit more on delegating choice.

Comment by HowieL on Deferring · 2022-05-16T19:33:40.155Z · EA · GW

I think that's an improvement though "delegating" sounds a bit formal and it's usually the authority doing the delegating. Would "deferring on views" vs "deferring on decisions" get what you want?

Comment by HowieL on Deferring · 2022-05-16T18:06:55.311Z · EA · GW

Thanks for writing this post. I think it's really useful to distinguish the two types of deference and push the conversation toward the question of when to defer, as opposed to how good it is in general.

But I think "deferring to authority" is really bad branding (as you worry about below) and your definition doesn't really capture what you mean. I think it's probably worth changing, even though I haven't come up with great alternatives.

Branding. To my ear, deferring to authority has a very negative connotation. It suggests deferring to a preexisting authority because they have power over you, not deferring to a person/norm/institution/process because you're bought into the value of coordination. Relatedly, it doesn't seem like the most natural phrase to capture a lot of your central examples.

Substantive definition. I don't think "adopting someone else's view because of a social contract to do so" is really what you mean. It suggests that if someone were not to defer in one of these cases, they'd be violating a social contract (or at least a norm or expectation), whereas I think you want to include lots of instances where that's not the case (e.g. you might defer as a solution to the unilateralist's curse even if you were under no implicit contract to do so). Most of your examples also seem to be more about acting based on someone else's view or a norm/rule/process/institution and not really about adopting their view.[1] This seems important since I think you're trying to create space for people to coordinate by acting against their own view while continuing to hold that view.

I actually think the epistemics v. action distinction is a cleaner distinction so I might base your categories just on whether you're changing your views v. your actions. 

***

Brainstorm of other names for non-epistemic deferring (none are great). Pragmatic deferring. Action deferring. Praxeological deferring (eww). Deferring for coordination.

 

[1] Technically, you could say you're adopting the view that you should take some action but that seems confusing.

Comment by HowieL on Deferring · 2022-05-16T17:58:26.712Z · EA · GW

Thanks for writing this post. I think it's really useful to distinguish the two types of deference and push the conversation toward the question of when to defer, as opposed to how good it is in general.

But I think "deferring to authority" is bad branding (as you worry about below) and I'm not sure your definition totally captures what you mean. I think it's probably worth changing, even though I haven't come up with great alternatives.

Branding. To my ear, deferring to authority has a very negative connotation. It suggests deferring to a preexisting authority because they have power over you, not deferring to a person/norm/institution/process because you're bought into the value of coordination. Relatedly, it doesn't seem like the most natural phrase to capture a lot of your central examples.

Substantive definition. I don't think "adopting someone else's view because of a social contract to do so" is exactly what you mean. It suggests that if someone were not to defer in one of these cases, they'd be violating a social contract (or at least a norm or expectation), whereas I think you want to include lots of instances where that's not the case (e.g. you might defer as a solution to the unilateralist's curse even if you were under no implicit contract to do so). Most of your examples also seem to be more about acting based on someone else's view or a norm/rule/process/institution and not really about adopting their view.[1] This seems important since I think you're trying to create space for people to coordinate by acting against their own view while continuing to hold that view.

I actually think the epistemics v. action distinction is a cleaner distinction so I might base your categories just on whether you're changing your views v. your actions (though I suspect you considered this and decided against). 

***

Brainstorm of other names for non-epistemic deferring (none are great). Pragmatic deferring. Action deferring. Praxeological deferring (eww). Deferring for coordination.

(I actually suspect that you might just want to call this something other than deferring).

 

[1] Technically, you could say you're adopting the view that you should take some action but that seems confusing.

Comment by HowieL on Has anyone actually talked to conservatives* about EA? · 2022-05-06T04:47:00.132Z · EA · GW

He also talked to Rob Wiblin. https://80000hours.org/podcast/episodes/russ-roberts-effective-altruism-empirical-research-utilitarianism/

Comment by HowieL on What I learnt from attending EAGx Oxford (as someone who's new to EA) · 2022-04-03T11:42:42.794Z · EA · GW

Glad you had a great experience though wish it could have been even better! I think it's pretty counterintuitive that most of the value from many conferences comes from 1:1s so it totally makes sense that it took you by surprise.

I wouldn't expect people to have found these in advance but, for next time, there's a bunch of good "how to do EAG(X)" and "how to do 1:1s" posts on the forum. Some non-comprehensive examples:

https://forum.effectivealtruism.org/posts/sxKJckCiZQyux4kCx/ea-global-tips-networking-with-others-in-mind

https://forum.effectivealtruism.org/posts/pKbTjdopzSEApSQfc/doing-1-on-1s-better-eag-tips-part-ii

Generally the EAG and EA conferences tags seem good for finding this stuff.

https://forum.effectivealtruism.org/tag/effective-altruism-conferences

https://forum.effectivealtruism.org/tag/effective-altruism-global


I know the conference organizers have a ton of considerations when deciding how much content to blast at attendees (and it's easy for things to sink to the bottom of everybody's inbox) but some of these might be cool for them to send to future attendees.


I think going to conferences where you don't know a bunch of people already is pretty scary so I'm impressed that you went for it anyway!

Comment by HowieL on Future Matters #0: Space governance, future-proof ethics, and the launch of the Future Fund · 2022-03-24T14:27:35.090Z · EA · GW

+1. Fwiw, I was going to subscribe and then didn't when I saw how long it was.

Comment by HowieL on We should consider funding well-known think tanks to do EA policy research · 2022-02-24T15:31:18.094Z · EA · GW

Fwiw, I did some light research (hours not days) a few years ago on the differences between US and European think tanks, and the (perhaps out of date) conventional wisdom seemed to be that they play a relatively outsized role in the U.S. (there are various hypotheses for why). So that may be one reason for the US/UK difference (though funders being in the US and many other issues could also be playing a role).

Comment by HowieL on The best $5,800 I’ve ever donated (to pandemic prevention). · 2022-02-05T16:12:45.332Z · EA · GW

I also donated $5,800 (though not due to this post).

Comment by HowieL on The best $5,800 I’ve ever donated (to pandemic prevention). · 2022-02-05T14:18:50.471Z · EA · GW

"A foreign national may not direct, dictate, control or directly or indirectly participate in the decision-making process of any person (such as a corporation, labor organization, political committee or political organization) with regard to the person's federal or nonfederal election-related activities. This includes decisions concerning the making of contributions, donations, expenditures or disbursements in connection with any federal state or local election or decisions concerning the administration of a political committee."

https://www.fec.gov/help-candidates-and-committees/foreign-nationals/

I think it would be pretty hard to argue that a donation swap didn't at least involve indirectly participating in someone's decision to donate.

Comment by HowieL on Get In The Van · 2022-01-26T13:29:56.414Z · EA · GW

I couldn't resist pointing out that Get in the Van is also the title of Henry Rollins' diaries from his time touring as the singer of Black Flag, the seminal hardcore punk band.

Feels particularly appropriate given the story of how Rollins joined the band:

When Black Flag returned to the East Coast in 1981, Rollins [just a fan at the time] attended as many of their concerts as he could. At an impromptu show in a New York bar, Black Flag's vocalist Dez Cadena allowed Rollins to sing "Clocked In", a song Rollins had asked the band to play in light of the fact that he had to drive back to Washington, D.C. to begin work.[20]

Unbeknownst to Rollins, Cadena wanted to switch to guitar, and the band was looking for a new vocalist.[20] The band was impressed with Rollins's singing and stage demeanor, and the next day, after a semi-formal audition at Tu Casa Studio in New York City, they asked him to become their permanent vocalist.

https://en.wikipedia.org/wiki/Henry_Rollins#Black_Flag

Despite some doubts, he accepted, due in part to Ian MacKaye's encouragement. Rollins acted as roadie for the remainder of the tour while learning Black Flag's songs during sound checks and encores, while Cadena crafted guitar parts that meshed with Ginn's.

https://en.wikipedia.org/wiki/Black_Flag_(band)#Rollins_era_(1981%E2%80%931985)

After joining Black Flag in 1981, Rollins quit his job at Häagen-Dazs, sold his car, and moved to Los Angeles. Upon arriving in Los Angeles, Rollins got the Black Flag logo tattooed on his left biceps[18] and also on the back of his neck, and chose the stage name of Rollins, a surname he and MacKaye had used as teenagers.

https://en.wikipedia.org/wiki/Henry_Rollins#Black_Flag

Comment by HowieL on Comments for shorter Cold Takes pieces · 2022-01-05T05:58:11.908Z · EA · GW

"Some of the people who have written the most detailed pieces about "innovation stagnation" seem to believe something like the "golden age" hypothesis - but they seem to say so only in interviews and casual discussions, not their main works."

Just fyi - you mention Peter Thiel in a footnote here. It's been a while since I read it, but iirc Peter Thiel describes something you might consider a version of the golden age hypothesis in a bit of depth in the "You are not a lottery ticket" chapter of Zero to One.

Comment by HowieL on SamClarke's Shortform · 2021-12-31T00:53:03.848Z · EA · GW

I thought about this a bunch before releasing the episode (including considering various levels of anonymity). Not sure that I have much to say that's novel but I'd be happy to chat with you about it if it would help you decide whether to do this.[1]

The short answer is:

  1. Overall, I'm very glad we released my episode. It ended up getting more positive feedback than I expected and my current guess is that in expectation it'll be sufficiently beneficial to the careers of other people similar to me that any damage to my own career prospects will be clearly worth it.
  2. It was obviously a bit stressful to put basically everything I've ever been ashamed of onto the internet :P, but overall releasing the episode has not been (to my knowledge) personally costly to me so far. 
    1. My guess is that the episode didn't do much harm to my career prospects within EA orgs (though this is in part because a lot of the stuff I talked about in the episode was already semi-public knowledge w/in EA and any future EA employer would have learned about it before deciding to hire me anyway). 
    2. My guess is that if I want to work outside of EA in the future, the episode will probably make some paths less accessible. For example, I'm less sure the episode would have been a good idea if it was very important to me to keep U.S. public policy careers on the table.

[1] Email me if you want to make that happen since the Forum isn't really integrated into my workflow. 

Comment by HowieL on What Small Weird Thing Do You Fund? · 2021-11-27T22:24:26.312Z · EA · GW

I've also done this.

Comment by HowieL on Despite billions of extra funding, small donors can still have a significant impact · 2021-11-24T21:25:08.799Z · EA · GW

Linking to make it easier for anybody who wants to check these out.

Is effective altruism growing? An update on the stock of funding vs. people

What does the growth of EA mean for our priorities and level of ambition? [talk transcript]

Comment by HowieL on Takeaways on US Policy Careers (Part 2): Career Advice · 2021-11-11T12:10:42.452Z · EA · GW

Thanks for putting this all together! FYI - something seems wrong with your third footnote and the ones after that seem misnumbered.

Comment by HowieL on Takeaways on US Policy Careers (Part 2): Career Advice · 2021-11-11T12:07:09.390Z · EA · GW

"A few graduate students with relevant interests seemed to agree that a masters degree (not a PhD, and not a law degree) is typically the best option for going into think tank research."

Flagging that when I worked at Brookings ten years ago, a large majority of senior fellows in Economic Studies had PhDs. The exceptions I can think of had decades of impressive experience.

Things may have changed since then. Also, at the time this was more true at Brookings than other think tanks and more true in economics than other fields.

Comment by HowieL on Should EA Global London 2021 have been expanded? · 2021-11-10T11:50:46.879Z · EA · GW

"Regarding staff capacity, we have been very stretched as a team this year. I fully came back from parental leave in mid-April. We ran EAG Reconnect in March, the EA Picnic in July, the EA Meta Coordination Forum in September, and EAG London in October. For much of that time, I was the only full-time staff member on the Events Team: we had one other part-time employee and two contractors. COVID-related precautions added (rough guess) 20-40% additional production time. We brought on two part-time contractors to help with admissions processing in the final weeks and brought on two additional contractors to assist with production for the week of the event (and, as usual, were helped by many wonderful volunteers during the event itself). "

I didn't realise this. I'm impressed by how much you did with so little staff.

Comment by HowieL on Buck's Shortform · 2021-11-09T15:26:45.408Z · EA · GW

I'd be pretty interested in you writing this up. I think it could cause some mild changes in the way I treat my salary.

Comment by HowieL on What's something that every EA community builder should have thought about? · 2021-11-04T19:10:54.934Z · EA · GW

Thanks. This was interesting and I think I buy that this can be really important. This comment actually gave me an interesting new frame on some of the benefits I've gotten from having a history with punk music.

In some ways I think you get this for free by being old enough to have graduated from uni before EA existed. I hadn't exactly appreciated this as a way my experience differs from younger EAs'.

Comment by HowieL on What's something that every EA community builder should have thought about? · 2021-11-01T10:31:36.370Z · EA · GW

"Best handled by people firmly grounded in some other community or field or worldview."

I'd be interested in you fleshing out a bit more what being grounded in this way looks like to you. (E.g. what are some examples of communities/fields/worldviews EAs you know have successfully done this with?)

Comment by HowieL on Is there anyone working full-time on helping EAs address mental health problems? · 2021-11-01T10:26:40.920Z · EA · GW

Somebody recently suggested to me that they'd find it useful for a bunch of EAs to anonymously write up some thoughts on their experience with mental health (maybe especially at work).

Wondering how useful other people think this would be.

Comment by HowieL on List of EA funding opportunities · 2021-10-28T14:47:03.374Z · EA · GW

Eliezer gave some more color on this here:

This is your regular reminder that, if I believe there is any hope whatsoever in your work for AGI alignment, I think I can make sure you get funded. It's a high bar relative to the average work that gets proposed and pursued, and an impossible bar relative to proposals from various enthusiasts who haven't understood technical basics or where I think the difficulty lies. But if I think there's any shred of hope in your work, I am not okay with money being your blocking point. It's not as if there are better things to do with money.

https://www.facebook.com/yudkowsky/posts/10159562959764228

There might be more discussion in the thread.