EA Infrastructure Fund: September–December 2021 grant recommendations 2022-07-12T15:24:31.256Z
Apply to CLR as a researcher or summer research fellow! 2022-02-01T22:24:08.322Z
EA Infrastructure Fund: May–August 2021 grant recommendations 2021-12-24T10:42:08.969Z
Chi's Shortform 2021-01-20T22:58:54.361Z
My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda 2020-08-15T19:59:21.909Z
What organizational practices do you use (un)successfully to improve culture? 2020-08-14T22:42:26.812Z
English summaries of German Covid-19 expert podcast 2020-04-08T21:12:00.931Z


Comment by Chi on Can you recommend associations that deal with reducing suffering? · 2022-09-13T05:04:24.387Z · EA · GW

Center on Long-Term Risk (my employer) focuses on reducing s-risk (risks of astronomical suffering).

(And AFAIK coined the term, though long before my time.)

Comment by Chi on EA is too reliant on personal connections · 2022-09-02T10:40:35.694Z · EA · GW

Many large donors (and donation advisors) do not take general applications. This includes Open Philanthropy (“In general, we expect to identify most giving opportunities via proactive searching and networking”), Longview, REG, CERR, CLR, and the new Longtermism Fund.

Grant manager at CLR here - we take general applications to the CLR Fund and would love to get more of them. Note that our grantmaking is specifically s-risk focused.*

Copy pasting another comment of mine from another post over here:

If you or someone you know are seeking funding to reduce s-risk, please send me a message. If it's for a smaller amount, you can also apply directly to CLR Fund. This is true even if you want funding for a very different type of project than what we've funded in the past.

I work for CLR on s-risk community building and on our CLR Fund, which mostly does small-scale grantmaking, but I might also be able to make large-scale funding for s-risk projects ~in the tens of $ millions (per project) happen. And if you have something more ambitious than that, I'm also always keen to hear it :)



*We also fund things that aren't specifically targeted towards s-risk reduction but still seem beneficial to s-risk reduction. Some of our grants this year that we haven't published yet are such grants. That said, we are often not in the best position to evaluate applications that aren't focused on s-risk even if they would have some s-risk-reducing side effects, especially when these side effects are not clearly spelled out in the application.

Comment by Chi on EA Forum feature suggestion thread · 2022-08-16T22:23:40.673Z · EA · GW

Automatically create a bibliography with all the links in a post.

Comment by Chi on There are currently more than 100 open EA-aligned tech jobs · 2022-04-27T11:42:53.786Z · EA · GW

Not OP, but I'm guessing it's at least unclear for the non-safety positions listed at OpenAI, though it depends a lot on what a person would do in those positions. (I think they are not necessarily good "by default", so the people working in these positions would have to be more careful/more proactive to make them positive. I still think they could be great.) The same goes for many similar positions on the sheet, but I'm pointing out OpenAI since a lot of roles there are listed. For some of the roles, I don't know enough about the org to judge.

Comment by Chi on Three Reflections from 101 EA Global Conversations · 2022-04-26T17:40:06.383Z · EA · GW

Haha, no, it took me quite a bit longer to phrase what I wrote, but I didn't have dedicated non-writing thinking time. E.g. the claim about the expected ratio of future assets seems like something I could sanity-check and get a better number for with pen and paper and a few minutes, but I was too lazy to do that :)

(And I can't let false praise of me stand)

edit to also comment on the substantial part of your comment: Yes, that takeaway seems good to me!

edit edit: Although I'd caveat that s-risk is less mature than general longtermism (more "pre-paradigmatic", for people who like that word), so there might be less (obvious) work for founders/leaders to do right now, and that can be very frustrating. We still always want to hear about such people.

last edit?: And as in general longtermism, if somebody is interested in s-risk and has really high EtG potential, I might sometimes prefer that, especially given what I said above about founder/leader-type people. Something within an order of magnitude or two of FTX F for s-risk reduction would obviously be a huge win for the space, and I don't think it's crazy to think that people could achieve that.

Comment by Chi on Three Reflections from 101 EA Global Conversations · 2022-04-26T16:12:08.913Z · EA · GW

I didn't run this by anyone else in the s-risk funding space, so please don't hold others to these numbers/opinions.

Tl;dr: I think this is probably right in direction, but with lots of caveats. In particular, it's still the case that s-risk has a lot of money (~low hundreds of $m) compared to ideas/opportunities, at least right now, and possibly more so than general longtermism. I think this might change soon, since I expect s-risk money to grow less than general longtermist money.

edit: I think s-risk is ideas constrained when it comes to small grants and funding (and ideas) constrained for large grants/investments.

I'd estimate s-risk to have something in the low hundreds of $m in expected value (not time-discounted) of current assets specifically dedicated to it. Your question is slightly hard to answer since I'm guessing OpenPhil and FTX F would fund at least some s-risk projects if there were more proposals/more demand for money in s-risk. Also, a lot of funded people and projects who don't work directly on s-risk still care about s-risk; maybe that should be counted somehow. Naively not counting these people and OpenPhil/FTX F money at all, and comparing current total assets in general longtermism vs. s-risk:

In absolute terms: Yup, general longtermism definitely has much more money (~two orders of magnitude). My guess is that this ratio will grow over time, and that it will grow in expectation over time. (~70% credence for each of the claims? Again, confused about how to count OpenPhil and FTX F money and how they'll decide to spend money in the future. If I stick to not counting them as s-risk money at all, then >70% credence.)

Per person working on s-risk/general longtermism: Would still say yes although I don't have a good way to count s-risk people and general longtermist people. Could be closer to even and probably not (much) more than an order of magnitude difference. Again, quick and wild guess is that the difference will in expectation grow larger over time, but less confident in this than my guess about how the ratio of absolute money will develop. (55%?)

Per quality-adjusted idea/opportunity to spend money: Unsure. I'd (much) rather have more money-eating ideas/opportunities to reduce s-risk than more money to reduce s-risk but I'm not sure if this is more or less the case compared to general longtermism (s-risk has both fewer ideas/opportunities and less money). Also don't know how this will develop. Arguably, the ratio between money and idea/opportunity also isn't a great metric because you might care more about absolutes here. I think some people might argue that s-risk is less funding constrained compared to ideas-constrained than general longtermism. This isn't exactly what you've asked for but still seems relevant. OTOH, having less absolute money does mean that the s-risk space might struggle to fund even one really expensive project.

edit: I do think if we had significantly more money right now, we would be spending more money now-ish.

Per "how much people in the EA community care about this issue": Who knows :) I'm obviously both biased and in a position that selects for my opinion.

Funding infrastructure: Funding in s-risk is even more centralized than in general longtermism, so if you think diversification is good, more s-risk funders are good :) There are also fewer structured opportunities for funding in s-risk and I think the s-risk funding sources are generally harder to find. Although again, I assume one could easily apply with an s-risk motivated proposal to general longtermist places, so it's kind of weird to compare the s-risk funding infrastructure to the general longtermist funding infrastructure.


I wrote this off the cuff and in particular, might substantially revise my predictions with 15 minutes of thought.

Comment by Chi on Three Reflections from 101 EA Global Conversations · 2022-04-26T09:39:02.378Z · EA · GW

If you or someone you know are seeking funding to reduce s-risk, please send me a message. If it's for a smaller amount, you can also apply directly to CLR Fund. This is true even if you want funding for a very different type of project than what we've funded in the past.

I work for CLR on s-risk community building and on our CLR Fund, which mostly does small-scale grantmaking, but I might also be able to make large-scale funding for s-risk projects ~in the tens of $ millions (per project) happen. And if you have something more ambitious than that, I'm also always keen to hear it :)

Comment by Chi on Apply to CLR as a researcher or summer research fellow! · 2022-02-13T15:17:47.198Z · EA · GW

Thanks for asking! We would definitely consider later starts if people aren't available earlier, and I would be surprised if we rejected a strong candidate just because they are only available a month later. There's some chance we would shorten the default fellowship length for them (not necessarily by the same number of weeks that they would start later), but we would discuss this with them first. I think if they would only accept the fellowship if it starts later and is the original 9 weeks long, this would raise the threshold for accepting them somewhat, but again, I would be surprised if we rejected a very strong candidate on that basis alone. (I think it would only matter for edge cases.) It also depends a bit on what other applications we get: e.g. if we get many strong applications from Germans who can only start later, we would probably be much happier to accommodate all of them.

Comment by Chi on Apply to CLR as a researcher or summer research fellow! · 2022-02-03T15:04:44.587Z · EA · GW

Thanks for the question! It's unclear whether we'll run an S-Risk Intro Fellowship in this precise format again. We are fairly likely to run intro events with similar content in the future though. I think this will most likely happen on an annual or semi-annual basis.

Comment by Chi on Apply to CLR as a researcher or summer research fellow! · 2022-02-01T22:55:59.350Z · EA · GW

Some data that I didn't formally write up and put in the post (mostly for time reasons) on how past fellows evaluated the fellowship:



10 out of 14 fellows filled in the fellowship feedback survey:

  • 10 of 10 respondents answered "Are you glad that you participated in the fellowship" with 5/5 ("hell yeah")
  • 9 of 10 respondents answered "If the same program happened next year, would you recommend a friend (with similar background to you before the fellowship) to apply?" with 10/10 ("strongly yes").
    • 1 of 10 respondents rated the question 9/10.
  • 4 of 10 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "notably more valuable (3-10x the counterfactual)"
    • 3 of 10 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "Much more valuable (10-30x the counterfactual)"
    • 1 of 10 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "Far more valuable (>30x the counterfactual)"
    • 1 of 10 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "Somewhat more valuable (1-3x the counterfactual)"
    • 1 of 10 respondents did not answer the question

It's possible that the respondents were anchored by the response options for the last question: there was one option "about as valuable" and 4 options each in the more-valuable and less-valuable directions. The lowest respondents could go was "not at all valuable (<10% of counterfactual)".

The survey was not anonymous (although the name field was optional and one respondent chose not to enter their name) and several of the respondents were either in employment, on a grant, or on a trial with us at the time of responding.



7 out of 9 fellows filled in the fellowship feedback survey:

  • 6 of 7 respondents answered "Are you glad that you participated in the fellowship" with 5/5 ("hell yeah")
    • 1 of 7 respondents did not answer this question
  • 3 of 7 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "Much more valuable (10-30x the counterfactual)"
    • 3 of 7 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "notably more valuable (3-10x the counterfactual)"
    • 1 of 7 respondents answered "To what extent was the fellowship a good use of your time compared to what you would otherwise have been doing" with "Far more valuable (>30x the counterfactual)"
  • We did not ask the question of whether they would recommend the program to someone similar to them this year.

The survey was anonymous in 2020. Several of the respondents were either in employment, on a grant, or on a trial with us at the time of responding.

Comment by Chi on Why I am probably not a longtermist · 2021-10-19T10:25:16.542Z · EA · GW

Again, I haven't actually read this, but this article discusses intransitivity in asymmetric person-affecting views. I.e., I think in the language you used: the value of pleasure is contingent in the sense that creating new lives with pleasure has no value, but the disvalue of pain is not contingent in this way. I think you should be able to directly apply that to the other objective-list theories you discuss, not just hedonistic (pleasure-pain) ones.

An alternative way to deal with intransitivity is to say that not existing and any life are incomparable. This leaves you in the unfortunate situation that you can't straightforwardly compare worlds with different population sizes. I don't know enough about the literature to say how people deal with this. I think there's a long piece in the works that's trying to make this version work while also making "creating new suffering people is bad" work at the same time.

I think some people probably do think they are comparable but reject that some lives are better than neutral. I expect that's rarer, though.

Comment by Chi on Noticing the skulls, longtermism edition · 2021-10-05T21:23:15.757Z · EA · GW

person-affecting view of ethics, which longtermists reject

I'm a longtermist and I don't reject (asymmetric) person(-moment-)affecting views, at least not those that think necessary ≠ only present people. I would be very hard-pressed to give a clean formalization of necessary people though. I think it's bad if effective altruists think longtermism can only be justified with astronomical waste-style arguments and not at all if someone has person-affecting intuitions. (Staying in a broadly utilitarian framework. There are, of course, also obligation-to-ancestor-type justifications for longtermism or similar.) The person-affecting part of me just pushes me in the direction of caring more about trajectory change than extinction risk.

Since I could only ever give very handwavy defenses of person-affecting views, and even handwavier explanations of my overall moral views: here's a paper by someone who, AFAICT, is at least sympathetic to longtermism and discusses asymmetric person-affecting views. (I have to admit I never got around to reading the paper.) (Writing a paper on an asymmetric person-affecting view also doesn't necessarily mean that the author doesn't actually reject person-affecting views.)

Comment by Chi on How would you run the Petrov Day game? · 2021-09-27T00:44:01.872Z · EA · GW

Big fan of what you describe in the end or something similar.

It's still not great, and it would still be hard to distinguish the people who opted-in and received the codes but decided not to use them from the people who just decided to not receive their codes in the first place

Not sure whether you mean it's hard from the technical side to track who received their code and who didn't (which would be surprising), or whether you mean distinguishing between people who opted out and people who opted in but decided not to see the code. If the latter: any downside to just making it clear in the email that not receiving your code is treated as opting out? People who don't read the email text presumably shouldn't count anyway.

On the trust-building point, and adding to the voices in favor of making it opt-in: I like many aspects of this game, including the fact that doing the right thing is at least plausibly "do nothing and don't tell anyone you've done/you're doing the right thing." But currently, the combination of no opt-in/opt-out and the fact that it's not anonymous doesn't really make it feel like a trust-building exercise to me. It feels more like "don't push the button because people will seriously hate you if you do," and also "people will get angry even if you push the button because of an honest mistake, so it's probably best to just protect yourself from information for a day" (see last year, although maybe people were more upset about the wording of some of the messages the person who pushed sent, rather than about being tricked into pushing itself?), which isn't great. So I think the lack of opt-in/out makes lots of people upset and ruins the original purpose of this event IMO, and everyone is unhappy.

Comment by Chi on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T21:24:08.205Z · EA · GW

edit: Feature already exists, thanks Ruby!

Another feature request: Is it possible to make other people's predictions invisible by default and then reveal them if you'd like? (Similar to how blacked-out spoilers work, which you can hover over to see the text.)

I wanted to add a prediction but then noticed that I heavily anchored on the previous responses and didn't end up doing it.

Comment by Chi on Is effective altruism growing? An update on the stock of funding vs. people · 2021-07-29T22:29:52.062Z · EA · GW

edit: no longer relevant since OP has been edited since. (Thanks!)

Personally, if given the choice between finding an extra person for one of these roles who’s a good fit or someone donating $X million per year, to think the two options were similarly valuable, X would typically need to be over three, and often over 10.

(emphasis mine)

This would also mean that if you have a 10% chance of succeeding, then the expected value of the path is $300,000–$2 million (and the value of information will be very high if you can determine your fit within a couple of years).

Just to clarify, that's the EV of the path per year, right?

The funding overhang also created bottlenecks for people able to staff projects, and to work in supporting roles. [...]

I’d typically prefer someone in these roles to an additional person donating $400,000–$4 million.

I assume this is also per year?

Clarifying because I think numbers like this are likely to be quoted/vaguely remembered in the future, and it's easy to miss the per year part.

Comment by Chi on Is effective altruism growing? An update on the stock of funding vs. people · 2021-07-29T22:27:43.700Z · EA · GW

And do you have any idea how the numbers for total funding break down into different cause areas? That seems important for reasoning about this.


I think I often hear longtermists discuss funding in EA and use the ~$22 billion number from Open Philanthropy. And I think people often make an implicit mental move of assuming that's also the money dedicated to longtermism, even though my understanding is very much that not all of it is available to longtermism.

Comment by Chi on anoni's Shortform · 2021-07-28T00:32:52.038Z · EA · GW


1.1.: You might want to have a look at a group of positions in population ethics called person-affecting views, some of which include future people and some of which don't. The ones that do often don't care about increasing/decreasing the number of people in the future, but about improving the lives of future people who will exist anyway. That's compatible with longtermism: not all longtermism is about extinction risk. (See trajectory change and s-risk.)

1.2.: No, we don't just care about humans. In fact, I think it's quite likely that most of the value or disvalue will come from non-human minds. (Though I'm thinking digital minds rather than animals.) But we can't influence how the future will go if we're not around, and many x-risk scenarios would be quite bad full stop and not just bad for humans.

1.3.: You might want to have a look at cluelessness (the EA Forum and GPI website should have links) or the recent 80,000 Hours podcast with Alexander Berger. Predicting the future and how we can influence it is definitely extremely hard, but I don't think we're in such a decisively bad position that we can, with a good conscience, just throw our hands up and conclude there's definitely nothing to be done here.



2.1 + 2.2.: Don't really want to write anything on this right now

2.3.: Definite no. It just argues that trade-offs must be made, and some bads are worse even than current suffering. Or rather: the amount of bad we can avert is greater even than if we focus on current suffering.

2.4: Don't understand what you're getting at.



3.1.: Can't parse the question.

3.2.: I think many longtermists struggle with this. Michelle Hutchinson recently wrote a post on the EA Forum about what still keeps her motivated. You can find it by searching her name on the EA Forum.

3.3.: No. Longtermism per se doesn't say anything about how much to personally sacrifice. You can believe in longtermism + think that you should give away your last penny and work every waking hour in a job you don't like. You can not be a longtermist and think you should live a comfortable, expensive life because that's what's most sustainable. Some leanings on this question might correlate with whether you're a longtermist or not, but in principle, this question is orthogonal.


Sorry if the tone is brash. If so, that's unintentional, and I tend to be really slow otherwise, but I appreciate that you're thinking about this. (Also, I'm writing this as sleep procrastination, and my guilt is driving my typing speed)

Comment by Chi on COVID: How did we do? How can we know? · 2021-07-07T02:02:07.956Z · EA · GW

On Human Challenge Trials (HCTs):

Disclaimer: I have been completely out of the loop on Covid-19 stuff for over a year, am definitely not an expert on these things (anymore), and am definitely speaking for myself and not 1Day Sooner (which is more bullish on HCTs).

I worked for 1Day Sooner last year as one of the main people investigating the feasibility and usefulness of HCTs for the pandemic. At least back then (March 2020), we estimated that it would optimistically take 8 months to complete the preparations for an HCT (so not even the HCT itself). Most of this time would be used for manufacturing and approving the challenge virus, and for dose-finding studies. (You give people some of the virus and check whether it's enough to induce the disease, then repeat with a higher dose, etc.)

I think in a better world, you can probably speed up the approval of the challenge virus and massively parallelize dose-finding to make it less lengthy. I'm not sure how many months that gets you down to, but the 2.5 months for preparation plus the actual HCT that you assume seem overly optimistic to me. I still think HCTs should have been prepared, but I'm not sure how much speed that would have actually gained us. More details here in the section "PREPARATORY STEPS NEEDED FOR HUMAN CHALLENGE TRIALS" (free access)

There was also some discussion of challenge trials with natural infection (you put people together with infectious people who have Covid-19), which might get around this? But I don't know what came of that (I think it wasn't pursued further?). Not sure how logistically feasible it actually is. (I think it would at least be more difficult politically than a normal HCT.)

Don't think this changes the general thrust of your post, but wanted to push back on this part of it.

(There's some chance I missed followup work, perhaps even by 1Day Sooner itself, that corrects these numbers, in which case I stand embarrassed :) )

Comment by Chi on An animated introduction to longtermism (feat. Robert Miles) · 2021-06-23T21:53:34.330Z · EA · GW

Note: this is mostly about your earlier videos. I think this one was better done, so maybe my points are redundant. Posting this here because the writer has expressed some unhappiness with reception so far. I've watched the other videos some weeks ago and didn't rewatch them for this comment. I also didn't watch the bitcoin one.

First off, I think trying out EA content on YouTube is really cool (in the sense of potentially high value), really scary, and because of this really cool (in the sense of "cool of you to do this"). Kudos for that. I think this could be really good and valuable if you incorporate feedback and improve over time.

Some reasons why I was/am skeptical of the channel when I watched the videos:

  • For the 4 videos before this one, I didn't see how they were going to help make the world better. (I can tell some hypothetical stories for 3 of them, but I don't think they achieved that goal because of some of the things later in this comment.)
  • I found the title of the Halo effect one aversive. I'm personally fine with a lot of internet meme humour, but I also know some EAs who actually take offense at the Virgin vs. Chad meme. I think for something so outward-facing, I want to avoid controversy where it's unnecessary. (And to be clear: not avoid it where it's necessary.) It also just feels click-baity.
  • Watching the videos, I just didn't feel like I could trust the content. If I didn't already know some of the content, it would be really hard for me to tell from the video whether it was legitimate science or buzzfeed-level rigour. For example, I really didn't know how to treat the information in the cringe one and basically decided to ignore it. This is not to say that the content wasn't checked and legitimate, just that it's not obvious from the videos. Note that this wasn't true for the longtermism one.
  • I found the perceived jump in topic in the cringe video aversive, and it reinforced my impression that the videos weren't very rigorous/truth-seeking/honest. I was overall kind of confused by that video.
  • I think the above (and the titles) matter because of the kind of crowd you want to attract and retain with your videos.
  • I think the artistic choice is fine, but also contributes. I don't think that's a problem when not combined with the other things.

In general, the kind of questions I would ask myself, and the reason why I think all of the above are a concern are:

  1. Which kind of people does this video attract?
  2. Which of these people will get involved/in contact with EA because of these videos?
  3. Do we want these people to be involved in the EA project?
  4. Which kind of people does this video turn off?
  5. Which of these people will be turned off of EA in general because of these videos?
  6. Do we want these people to be involved in the EA project?

I'm somewhat concerned that the answer for too many people would be "no" for 3 and "yes" for 6. Obviously, there will always be some "no" for 3 and some "yes" for 6, especially for such a broad medium as YouTube, and balancing this is really difficult. (And it's always easier to take the skeptical stance.) But I think I would like to see more to tip the balance a bit.

Maybe one thing that's both a good indicator and important in its own right is the kind of community that forms in the comment section. I've so far been moderately positively surprised by the comment section on the longtermism video and how you're handling it, so maybe this is evidence that my concerns are misplaced. It still seems like something worth paying attention to. (Not claiming I'm telling you anything new.)

I'm not sure what your plans and goals are, but I would probably prioritise getting the overall tone and community of the channel right before trying to scale your audience.


Some comments on this video:

  • I thought it was much better in all the regards I mentioned above.
  • There were still some things I felt slightly uneasy about, but they were much, much smaller and might be idiosyncratic-taste or really-into-philosophy-or-even-specific-philosophical-positions type things. I might also have just noticed them in the context of your other videos, and might have been fine with them otherwise. I feel much less confident that they are actually bad. Examples:
    • I felt somewhat unhappy with your presentation of person-affecting views, mostly because there are versions that don't only value people presently alive. (Actually, I'm pretty confused about this. I thought your video explicitly acknowledged that, but then sounded different later. I didn't go back to check again, so feel free to discard this if it's inaccurate.) Note that I sympathise a lot with person-affecting views, so might just be biased and feel attacked.
    • I feel a bit unhappy that trajectory-change wasn't really discussed.
    • I felt somewhat uneasy about the "but what if I tell you that even this is nothing compared to what impact you could have" part when transitioning from speeding up technological progress to extinction risk reduction. It kind of felt buzzfeedy again, though it's plausible I only noticed because I had the context of your other videos. On the more substantive side, I'm not familiar with the discussion around this at all, but I can imagine that whether speeding up growth or preventing extinction risk is more important is an open question to some researchers involved? Really don't know though.


Again, I think it is really cool and potentially highly valuable that you're doing this, and I have a lot of respect for how you've handled feedback so far. I don't want to discourage you from producing further videos, just want to give an idea of what some people might be concerned about/why there's not more enthusiasm for your channel so far. As I said, I think this video is definitely in the IMO right direction and find this encouraging.


edit: Just seen the comment you left on Aaron Gertler's comment about engagement. Maybe this is a crux.

Comment by Chi on A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it · 2021-06-09T19:07:05.323Z · EA · GW

Hm, I'm a bit unhappy with the framing of symptoms vs. root causes, and am skeptical about whether it captures a real thing (when it comes to mental health and drugs vs. therapy). I'm worried that drawing the distinction between the two contributes to the problems alexrjl pointed out.

Note: I have no clinical expertise and am just spitballing. E.g. I understand the following trajectory as archetypical of what others might call "aha! First a patch, and then root causes":

[Low energy --> takes antidepressants --> then has enough energy to do therapy & changes thought patterns etc. --> becomes better long-term and afterwards doesn't need antidepressants anymore]

But even if somebody had a trajectory like this, I'm not convinced that the thought patterns should count as the root cause rather than, e.g., the physiological imbalances that gave these kinds of thought patterns a rich feeding ground in the first place (which were addressed by antidepressants, and perhaps had to be addressed first before long-term improvement was possible). This makes me think that even if there is some matter of fact, it's not particularly meaningful.

(This seems even more true to me for things like ADHD - I'm not even sure what the root causes would be here - but that wasn't central to OP.)

I think you might plausibly have a different and coherent conception of the root causes vs. symptoms thing, but I'm wary of using that distinction anyway because "root causes" is pretty normatively connoted, and people have all kinds of associations with it. (Would still be curious to hear your conceptualisation if you have one.)

I care much less/have no particular thoughts on this distinction in non-mental-health cases, which were the focus of OP.

+1 to appreciating the OP, and I'll probably try out some of the things suggested!

Comment by Chi on How much do you (actually) work? · 2021-05-27T15:37:05.600Z · EA · GW

Hah! Random somewhat fun personal anecdote: I think tracking actually helped me a bit with that. When I first started tracking I was pretty neurotic about doing it super exactly. Having to change my toggl so frequently + seeing the '2 minutes of supposed work X' at the end of the day when looking at my toggl was so embarrassing that I improved a bit over time. Now I'm either better at switching less often and less neurotic about tracking, or only the latter. It also makes me feel worse to follow some distraction if I know my time is currently being tracked as something else.

Comment by Chi on Concerns with ACE's Recent Behavior · 2021-04-17T23:33:10.008Z · EA · GW

I might be a little bit less worried about the time delay of the response. I'd be surprised if fewer than say 80% of the people who would say they find this very concerning won't end up also reading the response from ACE.

FWIW, depending on the definition of 'very concerning', I wouldn't find this surprising. I think people often read things, vaguely update, know that there's another side of the story that they don't know, have the thing they read become a lot less salient, happen to not see the follow-up because they don't check the forum much, and end up having an updated opinion (e.g. about ACE in this case) much later without really remembering why.

(e.g. I find myself very often saying things like "oh, there was this EA post that vaguely said X and maybe you should be concerned about Y because of this, although I don't know how exactly this ended in the end" when others talk about some X-or-Y-related topic, esp. when the post is a bit older. My model of others is that they then don't go check, but some of them go on to say "Oh, I think there's a post that vaguely says X, and maybe you should be concerned about Y because of this, but I didn't read it, so don't take me too seriously" etc. and this post sounds like something this could happen with.)

Maybe I'm just particularly epistemically unvirtuous and underestimate others. Maybe for the people who don't end up looking it up but just having this knowingly-shifty-somewhat-update, the information just isn't very decision-relevant and it doesn't matter much. But I generally think information that I got with lots of epistemic disclaimers, and that has lots of disclaimers attached in my head, does influence me quite a bit, and writing this makes me think I should just stop saying dubious things.

Comment by Chi on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-17T23:22:31.790Z · EA · GW

And if hours went into carefully picking the original ten episodes and deciding how to sequence them, I'd like to see modifications made via a process of re-listening to different podcasts for hours and experimenting with their effects in different orders, seeing what "arcs" they form, etc., rather than via quick EA Forum comments and happy recollections of isolated episodes.


I agree that that's how I want the eventual decision to be made. I'm not sure what exactly the intended message of this paragraph was, but at least one reading is that you want to discourage comments like Brian's or otherwise extensive discussion on the contents of the podcast list. In case anyone reads it that way, I strongly disagree.

This has some flavor of 'X at EA organisation Y probably thought about this for much longer than me/works on this professionally, so I'll defer to them', which I think EAs generally say/think/do too often. It's very easy to miss things even when you've worked on something for a while (esp. if it's in the range of some months rather than many years), and outsiders often can actually contribute something important. I think this is already surprisingly often the case with research, and much more so the case with something like an intro resource where people's reactions are explicitly part of what you're optimizing for. (Obviously what we care about are new-people's reactions, but I still think that people-within-EA-reactions are pretty informative for that. And either way, people within EA are clearly stakeholders of what 80,000 Hours does.)

As with everything, there's some risk of the opposite ('not expecting enough of professionals?'), but I think EA currently is too far on the deferry end (at least within EA, I could imagine that it's the opposite with experts outside of EA).

Meta: Rereading your comment, I think it's more likely that your comment was either meant as a message to 80,000 Hours about how you want them to make their decision eventually or something completely different, but I think it's good to leave thoughts on possible interpretations of what people write.

Comment by Chi on What material should we cross-post for the Forum's archives? · 2021-04-15T14:10:59.403Z · EA · GW
  • Some stuff from Paul Christiano's 'The sideways view'

In addition to everything that Pablo said (esp. the Tomasik stuff because AFAICT none of his stuff is on the forum?)

Comment by Chi on The EA Forum Editing Festival has begun! · 2021-04-09T15:59:45.511Z · EA · GW
  1. I found tagging buggy. I tried to tag something yesterday, and I believe it didn't get through, although it worked today. The 'S-risks' tag doesn't show up in my list to tag posts at all, although it's an article. But that might also be something about the difference between tags and articles that I don't understand? I use Firefox and didn't check on other browsers.

  2. Is there a consensus for how to use organisation tags? Specifically, is it desirable to have every output that's ever come out of an organisation tagged to them, or only e.g. organisational updates? I've seen the first partly, but scarcely, done and am not sure about my opinion. (I mean things like "This report is published on the EA forum and the person who worked on this report was at org X at the time and wrote it as part of their job".)

edit: 3) Just adding this on here...Is there a way to tag everything that has one tag with another tag? (I'm speaking of the 'economics' tag + lots of more specific tags; 'moral philosophy' and 'metaethics' etc.)

Comment by Chi on How to work with self-consciousness? · 2021-02-04T00:10:21.699Z · EA · GW

I'm not a very experienced researcher, but I think in my short research career, I've had my fair share of dealing with self-consciousness. Here are some things I find:

Note that I mostly refer to the "I'm not worth other people's time", "This/I am dumb", "This is bad, others will hate me for it" type of self-consciousness. There might be other types of self-consciousness, e.g. "I'm nervous I'm not doing the optimal thing and feel bad because then more morally horrible things will happen" in a way that's genuinely not related to self-confidence, self-esteem etc., for which my experience will not apply. This is apart from the obvious fact that different things work for different people.

Some general thoughts:

  • As my disclaimer might indicate, I see research self-consciousness not as bound to research. Before/outside of research/work/EA in general I didn't notice that I had any issues with self-confidence. But for me at least, I think research/work/EA just activated some self-esteem/self-confidence issues I had before. (That is not to say that self-esteem etc. are not domain-specific, but I still think there's some more general thing going on as well) So, I approach research self-consciousness quite holistically as character development and try to improve not strictly in the research domain, although that will hopefully also help with research self-consciousness. Because I never looked at it from a pure research-domain lens, some things might seem a bit off, but I'll try to make it relevant.
  • The goal of the things I do to improve self-consciousness is not primarily to get to a state where I think my research is great, but to get to a state where I can do research, think it's great or not, be wrong, and be okay with it. I sometimes have to remind myself of that. On the occasions on which I do and decouple my self-esteem from the research, it lowers the stakes: If my research really is crap, at least it doesn't mean I'm crap.

Things I do to improve:

  • In the moment of feeling self-conscious: I would second Jason that talking to others about the object-level is magic.

  • I also have a rule of talking to others about my research/sharing writeups whenever it feels most uncomfortable to do so. Those are often the moments when I'm most hardstuck: an anxious mind doesn't research well, exchange with others really helps, but my anxiety traps me in that bad state!

  • I do something easy to commit myself to something that seems scary but important for research progress, so in the moment of self-consciousness I can't just back out again. Examples:

  1. Social accountability is great for this. When I'm in a really self-conscious-can't-work-state, I sometimes commit myself to send someone something I haven't started, yet, in 30 minutes, no matter what state it's in.

  2. I also often find it way easier to say "yes, I'll do this talk/discussion round at date X" or messaging another person "Hey, I have this idea I wanted to discuss, can I send you a doc?" (Even though I don't have a good doc, yet, because I think the idea is crap) than to do the thing, so whenever I feel able to do the first, I do it and future Chi has to deal with it, no matter how self-conscious she is.

  3. Often, just starting is the hardest thing. At least for me, that's where feeling super self-conscious often happens and stops me from doing anything. I sometimes set a timer for 5 minutes to do work. That's short enough that it feels ridiculous not to be able to do it, and afterwards I often feel way less self-conscious and can just continue.

  • I think the above examples are partly me basically doing iterated exposure therapy with myself. (I never put it in those terms before.) It's uncomfortable and sucky, but it helps (and I get some perverse enjoyment out of it). I try to look for the thing I'm most scared of that feels related to my research self-consciousness, that seems barely doable, and try to do it, until that thing becomes easier and then I go "to the next level". E.g. maybe at some point, I want to practice sharing my opinions publicly on a platform, but can only do so if I run my opinion by a friend beforehand and then excessively qualify when writing my opinion and emphasize how dumb and wrong everything could be. And that's fine, you can "cheat" a bit. But after a while you hopefully feel comfortable enough that you can challenge yourself to cut the excessive qualifiers and stop running things by your friend. Ideally, you do that with research-related things (e.g. iterated exposure of all your crappy research stuff, then with you confidently backing them, then on some topic that scares you, then with scarier people etc. --> not suggesting that this order makes sense for everyone), but we don't always have the opportunity to iterate quickly on research because research takes time. I have the goal to improve on this generally anyway, but I think even if not, some things are good enough nearby substitutes that are relevant to research self-consciousness. Examples:
  1. For self-consciousness reasons, I struggle with saying "Yes, I think this is good and promising" about something I work on, which makes me useless at analyzing whether e.g. a cause area is promising, which is incidentally exactly my task right now. So I looked for things that felt similar and uncomfortable in the same way and settled for trying to post at least one opinion/idea a weekday in pre-specified channels. (I had to give up after a week, but I think it was really good and I want to continue once I have more breathing room.)

  2. For the same reason as above, I deliberately go through my messages and delete all anxious qualifiers. I can't always do that in all contexts because they make me too self-conscious, and I allow myself that.

  • I appreciate that the above self-exposure-therapy examples might be too difficult for some and might seem intimidating. (I've definitely been at "I'd never, ever, ever write a comment on the forum!" I'm still self-conscious about what I up- and downvote, and no one can even see that.) But you can also make progress on a lower level, just try whatever seems manageable and be gentle to yourself. (And back off if you notice you bit off too much.) However, it can still be pretty daunting and it might be that it's not always possible to do the above completely independently. (E.g. I think I only got started when I spent several weeks at a research organisation I respect a lot, felt terrible for many parts, but couldn't back out and just had to do or die, and had a really good environment. I'm not sure "sticking through" would have been possible for me without that organisational context.)

  • I personally benefited a lot from listening to other people's stories, general content on failing, self-esteem etc. I'm not sure how applicable that is to others that try to improve research self-consciousness because I never looked at it from a pure research lens, but it's motivating to have a positive ideal as well, and not just "self-consciousness is bad." I usually consume non-EA podcasts and books for this.

On positive motivation:

  • Related to the last point of positive ideals: Recently, I found it really inspiring to look at some people who just seem to have no fear of expressing their ideas, enthusiasm, think things through themselves without anyone giving them permission etc. And I think about how valuable just these traits are apart from the whole "oh, they are so smart" thing. I find that a lot more motivating than the smartness-ideal in EA, and then I get really motivated to also become cool like that!

  • I guess for me there's also a gender thing where the idea of becoming a kickass woman is doubly motivating. I think I also have the feeling that I want to make progress on this on behalf of other self-conscious people who struggle more than me. I'm not really sure why I think that benefits them, but I just somehow do. (Maybe I could investigate that intuition at some point.) And that also gives me some positive motivation.

Comment by Chi on How to discuss topics that are emotionally loaded? · 2021-02-03T01:37:45.399Z · EA · GW

Ironically, I felt somewhat upset reading OP, I think for the reason you point out. (No criticism towards OP, I was actually amused at myself when I noticed)

I think some reason-specific heterogeneity in how easily something is expressible/norms in your society also play a role:

  1. I think some reasons are just inherently fuzzier (or harder to crisply grasp), e.g. why certain language makes you feel excluded. (It's really hard to point at a concrete damage (or, in some circles, at something that can't be countered with "that's not how it's meant [, but if you want to be sensitive, we can accommodate that].")) I think that's doubly troubling because the other person often takes you less seriously and because you might take yourself less seriously. I think at least I'm more prone to be emotional when I feel like my reasons are of this type, and maybe that's similar for others?
  2. Some kinds of reasoning are more socially accepted in different circles. E.g. in some EA circles, I imagine the "anti"-vegan argument would be associated with higher social status, and in some EA circles it would be the other way around. At least in my case, I'm more prone to be emotional when I feel like I have the less socially approved opinion/reasoning process.

I guess the common thread here is feeling threatened and like one needs to defend one's opinion because it's likely to be undermined. I guess the remedy would be... Really making sure the other person feels taken seriously (including by themselves) and safe and says everything they want? (Maybe someone else can come up with something more helpful and concrete) That's obviously just the side of the non-offended person, but I feel like the ways the upset person could try to improve in such situations is even more generic and vague.

Obviously, this is just one type of being emotional during conversations. E.g. if what I say explains any meaningful variance at all, it probably does so less for 4) than for 3). (Maybe not coincidentally, since I'm not male.)

Comment by Chi on Chi's Shortform · 2021-01-21T15:08:39.130Z · EA · GW

Thanks for the reply!

Honestly, I'm confused by the relation to gender. I'm bracketing out genders that are both not-purely-female and not-purely-male because I don't know enough about the patterns of qualifiers there.

  • In general, I think anxious qualifying is more common for women. EA isn't known for having very many women, so I'm a bit confused why there's seemingly so much of it in EA.
  • (As a side: This reminds me of a topic I didn't bring into the original post: How much is just a selection effect and how much is EA increasing anxious qualifying. Intuitively, I at least think it's not purely a selection effect, but I haven't thought closely about this.)
  • Given the above, I would expect that women are also more likely to take the EA culture, and transform it into excessive use of anxious qualifiers, but that's just speculation. Maybe the percentage change of anxious qualifier use is also higher for men, just because their baseline is lower
  • I'm not sure how this affects gender diversity in EA as a whole. I can imagine that it might actually be good because underconfident people might be less scared off if the online communication doesn't seem too confident, and they feel like they can safely use their preferred lots-of-anxious-signalling communication strategy.
  • That being said, I guess that what would do the above job (at least) equally well is what I call "3" in my reply to Misha. Or, at least I'm hopeful that there are some other communication strategies that would have that benefit without encouraging anxious signalling.
  • edit: I noticed that the last bullet point doesn't make much sense because I claim elsewhere that 3 can encourage 4 because they look so similar, and I stand by that.

Interestingly, maybe not instructively, I was kind of hesitant to bring gender into my original post. Partly for good reasons, but partly also because I worried about backlash or at least that some people would take it less seriously as a result. I honestly don't know if that says much about EA/society, or solely about me. (I felt the need to include "honestly" to make it distinguishable from a random qualifier and mark it as a genuine expression of cluelessness!)

Comment by Chi on Chi's Shortform · 2021-01-21T14:53:59.930Z · EA · GW

Reply 3/3

"displaying uncertainty or lack of knowledge sometimes helps me be more relaxed"

I think there's a good version of that experience and I think that's what you're referring to, and I agree that's a good use of qualifiers. Just wanted to make a note to potential readers because I think the literal reading of that statement is a bit incomplete. So, this is not really addressed at you :)

I think displaying uncertainty or lack of knowledge always helps one be more relaxed, even when it comes from a place of anxious social signalling. (See my first reply for what exactly I mean by that and what I contrast it with.) That's why people do it. If you usually anxiously qualify and force yourself not to do it, that feels scary. I still think practicing not to do it will help with self-confidence, as in taking yourself more seriously, in the long run. (Apart from more efficient communication.)

Of course, sometimes you just need to qualify things (in the anxious social signalling sense) to get yourself in the right state of mind (e.g. to feel safe to openly change your mind later, freely speculate, or to say anything at all in the first place), or allowing yourself the habit of anxious social signalling makes things so much more efficient that you should absolutely go for it and not beat yourself up over it. Actually, an almost-ideal healthy confidence probably also includes some degree of what I call anxious social signalling, and it's unrealistic to get rid of all of it.

  • I just found one other frame for what I meant with anxious social signalling partly being rewarded in EA. Usually, that kind of signaling means others take you less seriously. I think it's great that that's not so much the case in EA, but I worry that sometimes it may look like people in EA take you more seriously when you do it. Maybe because EA actually endorses what I call 3 in my first reply, but - to say the same thing for the 100th time - I worry that it also encourages anxious social signalling.
Comment by Chi on Chi's Shortform · 2021-01-21T14:39:20.367Z · EA · GW

Reply 2/3

I like the suggestions, and they probably-not-so-incidentally are also things that I often tell myself I should do more and that I hate. One drawback with them is that they are already quite difficult, so I'm worried that it's too ambitious of an ask for many. At least for an individual, it might be more tractable to (encourage them to) change their excessive use of qualifiers as a first baby step than to jump right into quantification and betting. (Of course, what people find more or less difficult confidence-wise differs. But these things are definitely quite high on my personal "how scary are things" ranking, and I would expect that that's the case for most people.) OTOH, on the community level, the approach of encouraging more quantification etc. might well be more tractable. Community-wide communication norms are very fuzzy and seem hard to influence on the whole. (I noticed that I didn't draw the distinction quite where you drew it. E.g. "Acknowledgements that arguments changed your mind" are also about communication norms.) I am a little bit worried that it might have backfire effects. More quantification and betting could mostly encourage already confident people to do so (while underconfident people are still stuck at "wouldn't even dare to write a forum comment because that's scary"), make the online community seem more confident, and make entry for underconfident people harder, i.e. scarier. Overall, I think the reasons to encourage a culture of betting, quantification etc. are stronger than the concerns about backfiring. But I'm not sure if that's the case for other norms that could have that effect. (See also my reply to Emery.)

Comment by Chi on Chi's Shortform · 2021-01-21T14:38:59.460Z · EA · GW

Reply 1/3 Got it now, thanks! I agree there's confident and uncertain, and it's an important point. I'll spend this reply on the distinction between the two, another response on the interventions you propose, and another response on your statement that qualifiers often help you be more relaxed.

The more I think about it, the more I think that there's quite a bit for someone to unpack here conceptually. I haven't done so, but here's a start:

  1. There's stating your degree of epistemic uncertainty to inform others how much they should update based on your belief (e.g. "I'm 70% confident in my beliefs, i.e. I think it's 70% likely I'd still hold them after lots of reflection.")
  2. There's stating probabilities which looks similar, but just tells others what your belief is, not how confident you are in it ("I think event X is 70% likely to occur")
  3. There's stating epistemic uncertainty for social reasons that are not anxiety/underconfidence driven: Making a situation less adversarial; showing that you're willing to change your mind; making it easy for others to disagree; just picking up this style of talking from people around you
  4. There's stating epistemic uncertainty for social reasons that is anxiety/underconfidence driven: Showing you're willing to change your mind, so others don't think you're cocky; Saying you're not sure, so you don't look silly if you're wrong/any other worry you have because you think maybe you're saying something 'dumb'; Making a situation less adversarial because you want to avoid conflict because you don't want others to dislike you
  5. There's stating uncertainty about the value of your contribution. That can honestly be done in full confidence, because you want to help the group allocate attention optimally, so you convey information and social permission not to spend too much time on your point. I think online most of the reasons to do so do not apply (people can just ignore you), so I'm counting it mostly as anxious social signalling or, in the best case, a not-so-useful habit. An exception is if you want to help people decide whether to read a long piece of text.

I think you're mostly referring to 1 and 2. I think 1 and 2 are good things to encourage and 4 and 5 are bad things to encourage. Although I think 4/5 also have their functions and shouldn't be fully discouraged (more in my third reply). I think 3 is a mix. I like 3. I really like that EA has so much of 3. But too much can be unhelpful, esp. the "this is just a habit" kind of 3. I think 1 and 2 look quite different from 4 and 5. The main problem is that it's hard to see if something is 3 or 4 or both, and that often you can only know if you know the intention behind a sentence. Although 1 can also sometimes be hard to tell apart from 3, 4, and 5, e.g. today I said "I could be wrong", which triggered my 4-alarm, but I was actually doing 1. (This is alongside other norms, e.g. expert deference memes, that might encourage 4.)

I would love to see more expressions that are obviously 1, and less of what could be construed as any of 1, 3, 4, or 5. Otherwise, the main way I see to improve this communication norm is for people to individually ask themselves which of 1,3,4,5 is their intention behind a qualifier. edit: No idea, I really love 3

Comment by Chi on Chi's Shortform · 2021-01-21T00:45:14.761Z · EA · GW

I just wondered whether there is systematic bias in how much advice there is in EA for people who tend to be underconfident and people who tend to be appropriately or overconfident. Anecdotally, when I think of memes/norms in effective altruism that I feel at least conflicted about, that's mostly because they seem to be harmful for underconfident people to hear.

Way in which this could be true and bad: people tend to post advice that would be helpful to themselves, and underconfident people tend to not post advice/things in general.

Way in which this could be true but unclear in sign: people tend to post advice that would be helpful to themselves, and there are more appropriately confident or overconfident people in the community than underconfident ones.

Way in which this could be true but appropriate: advice that would be harmful when overconfident people internalize it tends to be more harmful than advice that's harmful to underconfident people. Hence, people post proportionally less of the first.

(I don't think the vast space of possible advice just has more advice that's harmful for underconfident people to hear than advice that's harmful for overconfident people to hear.)

Maybe memes/norms that might be helpful for underconfident people to hear, or their properties that could be harmful for underconfident people, are also just more salient to me.

Comment by Chi on Chi's Shortform · 2021-01-21T00:34:30.291Z · EA · GW

Hey Misha! Thanks for the reply and for linking the post, I enjoyed reading the conversation. I agree that there's an important difference. The point I was trying to make is that one can look like the other, and that I'm worried that a culture of epistemic uncertainty can accidentally foster a culture of anxious social signaling, esp. when people who are inclined to be underconfident can smuggle in anxious social signaling disguised (to the speaker/writer themselves) as epistemic uncertainty. And because anxious social signalling can superficially look similar to epistemic uncertainty, they see other people in their community show similar-ish behavior and see similar-ish behavior be rewarded. Not sure how to address this without harming epistemic uncertainty though. (Although I'm inclined to think the right trade-off point involves accepting more risk of losing some of the good communication of epistemic uncertainty.)

Or was your point that you disagree that they look superficially similar? And hence, one wouldn't encourage the other? And if that's indeed your point, would you independently agree or disagree that there's a lot of anxious social signaling of uncertainty in effective altruism?

Comment by Chi on Chi's Shortform · 2021-01-20T23:59:14.704Z · EA · GW

Should we interview people with high status in the effective altruism community (or make other content) featuring their (personal) story, how they have overcome challenges, and live into their values?

Background: I think it's no secret that effective altruism has some problems with community health. (This is not to belittle the great work that is done in this space.) Posts that talk about personal struggles, for example related to self-esteem and impact, usually get highly upvoted. While many people agree that we should reward dedication and that the thing that really matters is to try your best given your resources, I think that, within EA, the main thing that gives you status, that many people admire, desire, and tie their self-esteem to is being smart.

Other altruistic communities seem to do a better job at making people feel included. I think this has already been discussed a lot, and there seem to be some reasons for why this is just inherently harder for effective altruism to do. But one specific thing I noticed is what I associate with leaders of different altruistic communities.

When I think of most high status people in effective altruism, I don't think of their altruistic (or other personal) virtues, I think 'Wow, they're smart.' Not because of a lack of altruistic virtues, I assume, but because smartness is just more salient to me. On the other hand, when I think of other people, for example Michelle Obama or Melinda Gates or even Alicia Keys for that matter, I do think "Wow, these people are so badass. They really live into their values." I wouldn't want to use them as role models for how to have impact, but I do use them as role models for what kind of person I would like to be. I admire them as people, and they inspire me to work on myself to become like them in relevant respects, and they make me think it's possible. I am worried that people look at high status people in effective altruism for what kind of person they would like to be, but the main trait of those people they are presented with is smartness, which is mostly intractable to try to improve.

I don't think this difference is because these non-EAs lack any smartness or achievement that I could admire. I think it's because I have consumed content where their personal story and values were put front and centre alongside what they did and how they achieved it. Similarly, I don't think that high status people in effective altruism lack any personal virtue I could aspire to, but I'm simply not exposed to it.

I don't know if it would actually improve this aspect of community health, and whether it's overall worth the time of all people involved (although I think the answer is yes if the answer to the first is yes), but this made me wonder if we should create more content with high status people in the effective altruism community that is similar to the kind of interviews with non-EAs I mentioned. 'That kind of content' is pretty vague, and one would have to figure out how we can best celebrate the kind of virtues we want to celebrate, and whether this could work, in principle, with effective altruism. (Maybe the personal virtues we most admire in high status effective altruists just are detrimental to the self-esteem of others. I can imagine that with some presentations of impact obsession, for example.) But this might be a worthwhile idea, and I am somewhat hopeful that this could be combined with the presentation of more object-level content (the type that 80k interviews are mostly about).

Comment by Chi on Chi's Shortform · 2021-01-20T22:58:54.842Z · EA · GW

Observation about EA culture and my journey to develop self-confidence:

Today I noticed an eerie similarity between things I'm trying to work on to become more confident and effective altruism culture. For example, I am trying to reduce my excessive use of qualifiers. At the same time, qualifiers are very popular in effective altruism. It was very enlightening when a book asked me to guess whether the following piece of dialogue was from a man or woman:

'I just had a thought, I don't know if it's worth mentioning...I just had a thought about [X] on this one, and I know it might not be the right time to pop it on the table, but I just thought I'd mention it in case it's useful.'

and I just immediately thought 'No, that's an effective altruist'. I think what the community actually endorses is communicating the degree of epistemic certainty and making it easy to disagree, while the above quote is anxious social signalling. I do think the community does a lot of the latter though, and it's partly rewarded because of confounding with the former. (In the above example it's obvious, but I think anxious social signalling is also often the place where 'I'm uncertain about this', 'I haven't thought much about this', and 'I might be wrong' (of course you might be wrong) come from. That's certainly the case for me.) Tangentially, there is also a strong emphasis on deference and a somewhat conservative approach to not causing harm, esp. with new projects.

Overall, I am worried that this communication norm and the two memes I mentioned foster under-confidence, a tendency to keep yourself small, and the feeling that you need permission to work on important problems or to think through important questions. The communication norm and memes I mentioned also have upsides, esp. when targeted at overconfident people, and I haven't figured out yet what my overall take on them is. I just thought it was an interesting observation that certain things I'm trying to decrease are particularly pervasive in the effective altruism community.

(I think there are also lots of other problems related to self-esteem and effective altruism, but I wanted to focus on this particular aspect.)

Comment by Chi on Training Bottlenecks in EA (professional skills) · 2021-01-19T19:22:23.544Z · EA · GW

Thanks for the reply! I was initially just self-interestedly wondering which training you got and whether you would recommend it. But I am also happy to hear about your plans in that direction.

Given the time constraints, do you think there are any other people for whom it would make sense to take the lead on this whom you are not yet in touch with (e.g. a specific type of person rather than specific individuals)? And if so, which traits would that person need? You already mentioned that you want to work on it with help anyway, and I can imagine that it doesn't make sense for any other person to take this up right now given your expertise. Still, I wanted to ask if you think there are any sensible versions that would involve you less and would be feasible time-wise, because I also think this is a majorly important topic and would love to see something happen.

Comment by Chi on My mistakes on the path to impact · 2021-01-18T23:10:37.332Z · EA · GW

I think the comparison to "the current average experience a college graduate has" isn't quite fair, because the group of people who see 80k's advice and act on it is already quite selected for lots of traits (e.g. altruism). I would be surprised if the average person influenced by 80k's EtG advice had the average college graduate experience in terms of which careers they consider and hence, where they look for advice, e.g. they might already be more inclined to go into policy, the non-profit sector or research to do good.

(I have no opinion on how your point comes out on the whole. I wasn't around in 2015, but intuitively it would also surprise me if 80k didn't do substantially more good than bad during that time, even bracketing out community building effects (which, admittedly, is hard).)

Comment by Chi on Effektiv Spenden - Fundraising and 2021 Plans · 2021-01-18T22:59:19.643Z · EA · GW

Hey, I wanted to probe a bit into why you don't write in gender neutral language on your website.

  • (For those who are not German: in German most nouns that refer to persons are not gender neutral by default, but always refer to either male or female persons, with the male version having been the default version for a long time. In the last decade, there has been a pushback against this and people started to adopt gender neutral language, which often looks a bit clunky though.)

I saw that you justify this with better readability in your FAQ, but I didn't find the response very satisfying. On reasons not to write gender neutral:

  • Readability: My guess is that at this point, most people have gotten used to gender neutral language and don't really stumble when they read it anymore. Actually, I think there's probably a fair share of people that stumble when they read non-gender neutral language nowadays. There are also some less clunky solutions (e.g. the female version with a capitalized "I" or explicitly stating that you'll alternate gender between sections/pages). (They aren't as correct because they exclude people who are not female and male, but probably still a better alternative than not using any gender neutral language at all)
  • Appeal to target audience: You might worry that gender neutral language might not be appealing to some target audiences that would usually donate fairly large amounts of money, but would not if the website was written in gender neutral language (e.g. conservative leaning, wealthy donors). You'll know better than I, and if you have convincing arguments that this is the case (and outweighs the money you could raise from people who are repelled by non gender neutral language), I'd probably support your decision. I would be somewhat surprised by this though. To me, using gender neutral language seems fairly normal and professional and not "lefty wooi-booi student initiative" anymore (e.g. the German Federal Agency for Civic Education uses gender neutral language, at least partly).
  • The time cost of using gender neutral language seems fairly small

On the other hand:

  • I know at least one person who isn't involved in EA but is interested in effective giving who almost didn't donate via effektiv-spenden because you don't use gender neutral language. I would guess that a fair proportion of your target audience might be similarly inclined.
  • Apart from that, I also care about gender neutral language for feminist reasons, but that's not what I wanted to focus on.

Comment by Chi on Training Bottlenecks in EA (professional skills) · 2021-01-18T21:07:02.846Z · EA · GW

Hey Kathryn, this is a bit off-topic, but I was wondering what that impostor syndrome training is that Michelle mentions in the post. Asking here because I imagine more people might be interested in this.

Comment by Chi on My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda · 2020-08-17T18:11:04.680Z · EA · GW

Hey Max, thanks for your comment :)

Yeah, that's a bit confusing. I think technically, yes, IDA is iterated distillation and amplification and that Iterated Amplification is just IA. However, IIRC many people referred to Paul Christiano's research agenda as IDA even though his sequence is called Iterated amplification, so I stuck to the abbreviation that I saw more often while also sticking to the 'official' name. (I also buried a comment on this in footnote 6)

I think lately, I've mostly seen people refer to the agenda and ideas as Iterated Amplification. (And IIRC I also think the amplification is the more relevant part.)

Comment by Chi on The Case for Education · 2020-08-16T13:43:43.463Z · EA · GW

Hm, I'm not sure how easily it's reproducible/what exactly he did. I had to write essays on the topic every week and he absolutely destroyed my first essays. I think reading their essay is an exceptionally good way to find out how much the person in question misunderstands and I'm not sure how easily you can recreate this in conversation.

I guess the other thing was a combination of deep subject-matter expertise + [being very good at normal good things EAs would also do] + a willingness to assume that when I said something that didn't seem to make sense, it indeed didn't make sense, and telling me so/giving me all the possible objections to my argument; and then just feeling comfortable talking for 20 minutes (basically lecturing). I think that worked because of the formal tutor-student setting we were in and because he evidently and very obviously knew a lot more about the topic than me. I think it's harder in natural settings to realize that that's the case and confidently act on it.

What I mean by [normal good things EAs would also do]: Listening to my confused talking, paraphrasing what I was trying to say into the best steelman, making sure that that's what I meant before pointing out all the flaws.

Comment by Chi on EA Meta Fund Grants – July 2020 · 2020-08-16T12:22:01.282Z · EA · GW

Small point that's not central to your argument:

A similar thing might happen here: if there was a universal mentoring group that gave women access to both male and female mentors, why would they choose the segregated group that restricted them to a subset of mentors?

I had actually also asked WANBAM at some point whether they considered adding male mentors as well but for different reasons.

I think at least some women would still prefer female mentors. Anecdotally, it has often been my experience that other women find it easier to relate to some of my work-related struggles and that it's generally easier for me to discuss those struggles with women. This is definitely not true in every case, but the hit rate (of connections where talking about work-struggles works really well) among women is much higher than among men, and I expect this to be true for many other women as well.

Comment by Chi on EA Forum update: New editor! (And more) · 2020-08-15T19:25:42.241Z · EA · GW

Is there a way to have footnotes and tables in the same post? I tried just now and can't see a way. (You have to switch to EA forum doc [beta] editor for the tables which kills your footnotes; you have to switch to markdown for footnotes which kills your tables)

edit: I found some markdown code for tables which worked but then had trouble formatting within the table. Decided to just take pictures of the tables instead and upload them as pictures which also works. If anyone knows an easier/nicer way to do this, or if anything is planned, that would be great :)

Comment by Chi on The Case for Education · 2020-08-14T22:23:54.445Z · EA · GW

Thanks for writing this :) I certainly agree that the education system isn't optimal and maybe only useful to a handful of people. However, I'd like to provide myself as a data point of someone who actually thinks they benefit from their education. I'm worried that people might sometimes come away with the feeling that they're doing something wrong and pointless when going to uni/only doing signalling when that's not true in some cases.

I'm a bit of an outlier in that I'm actually in my second bachelor's degree and I definitely don't want to claim that that's a good idea for everyone. The first one was from a not well-known university and my current degree is at a prestigious university. After my first year at my current university I was offered a job at an EA organization and after a lot of deliberation turned it down. I'm not sure that was the right choice but I still think I got a lot of benefits from continuing my degree. Here are some examples of why:

  • I learned a lot about writing. I got a ton of practice and feedback (both 1-2x a week). I don't think this would have been possible otherwise.
  • Last term, I took a course in Philosophy of Cognitive Science: There's a good chance I would have wanted to spend some time on the same topics in my free time for EA-ish reasons. My tutor pushed back and improved my thinking a lot and in a way that I frankly don't expect most of the people in my EA circle to do. I hope this also helps me evaluate the quality of discussion and arguments in EA a bit although I'm not sure if that's a real effect.
  • I often see the argument advanced that you could just learn much more effectively in your free time. I'm slowly arriving at a point at which I think I would probably continue learning and working on my own. However, when I started studying that was certainly not the case. It's really hard to self-study (for many). I think many, and probably the majority of, people really benefit from the structure that forces you to do something. That and the tutoring I get at my uni make me think that quantitatively I actually learn a lot more at uni than I otherwise would, although it's true of course that I could direct my time to learn a lower quantity but more relevant content.
  • This is not an argument for education per se, but at the current point I'd be quite concerned about EA curricula being too one-sided if that's the only education you get. (It depends on the field and execution of course and I might be wrong.)
  • I expect the benefits are probably greater for certain graduate studies where you have more contact with mentors, but I'm not sure about this.

Admittedly, all of these reasons mainly apply to my second degree. I'm a lot more willing to concede that my first degree was mostly a waste of time, although I'm still often surprised by how much stuff I learned was actually useful (mostly stats). I also think the case is much different if you're not interested in research, or interested in e.g. ML engineering.

Comment by Chi on 2019 Ethnic Diversity Community Survey · 2020-06-22T12:05:22.792Z · EA · GW

Thanks for doing this work!

I've thought of the "Improving awareness and training of social justice" point a bit in the past when thinking about gender diversity and find it difficult. I am a bit worried that it is extremely hard or impossible without everyone investing a substantial amount of time:

My impression is that a lot of (ethnic/gender/...) diversity questions have no easy fixes that some people can think about and implement, but would rather benefit a lot from every single person trying to educate themselves more to increase their own awareness, esp. community builders and high profile people who get a lot of attention. One example that I think is hard to improve otherwise: I noticed in Toby Ord's The Precipice the following sections:

"Indeed, when I think of the unbroken chain of generations leading to our time and of everything they have built for us, I am humbled. I am overwhelmed with gratitude"

I know this doesn't detract from his overall point about what the generations of the last hundreds of thousands of years have done "for us", but I can't help but wonder how reading this must feel for some people who primarily associate history with their ancestors being fucked over by colonialism or being enslaved, and who are still paying the price for this. The Precipice actually mentions this later, but it is clearly written from the perspective of the people descended from those inflicting injustice, not those receiving it:

"Consider that some of the greatest injustices have been inflicted [...] by groups upon groups: Systematic persecution, stolen lands, genocides. We may have duties to properly acknowledge and memorialise these wrong; to confront the acts of our past. And there may yet be ways for the beneficiaries of these acts to partly remedy them or atone for them."

While I'm a POC, I'm certainly not from an ethnicity that suffered the most from the historic (and ongoing) actions of the elites in primarily white countries. But I can imagine that many people whose families are or have been on the receiving end of the injustice might be alienated by this section, which reads a bit like it's a given that readers are on the other side of the coin.

I don't want to slander the book or the person, I very much enjoy the book, and don't assume any negative intentions (and think the section can also be read more charitably, but I think it's important that it can be read in an alienating way by POC). I just think this is a good example of the problem and that it is really hard to be aware of such things when you haven't spent substantial time trying to better understand underrepresented groups that you are not part of.

While, in an ideal world, I would like everyone in the world to do so, it is a big time sink and I feel reluctant about recommending that everyone invest this time, especially when the opportunity costs are so high. I'm not sure how to remedy this; whether investing the time is clearly worth it, whether there are better ways to make progress that are less time intensive, whether we should only aim for low hanging fruit, or something else entirely. I would be very curious to hear other people's thoughts; I'd gladly notice that I'm totally off the mark and worry more than warranted :)

(I also feel wary of openly saying that investing in understanding underrepresented groups might not be worth the time, as I just did, because I think it can be very hurtful and dehumanizing.)

Comment by Chi on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-30T23:26:43.694Z · EA · GW

I respect that you are putting money behind your estimates and get the idea behind it, but I would recommend you reconsider whether you want to do this (publicly) in this context and maybe consider removing these comments. Not only because it looks quite bad from the outside, but also because I'm not sure it's appropriate on a forum about how to do good, especially if the virus should happen to kill a lot of people over the next year (also meaning that even more people would have lost someone to the virus). I personally found this quite morbid, and I have a lot more context on EA culture than a random person reading this, e.g. I can guess that the primary motivation is not "making money" or "the feeling of winning and being right" - which would be quite inappropriate in this context - but that might not be clear to others with less context.

(Maybe I'm also the only one having this reaction in which case it's probably not so problematic)

edit: I can understand if people just disagree with me because you think there's no harm done by such bets, but I'd be curious to hear from the people who downvoted whether, in addition to that, you think that comments like mine are harmful because of being bad for epistemic habits or something. I'd be grateful to hear if someone thinks comments like these shouldn't be made!

Comment by Chi on EA Survey 2018 Series: Community Demographics & Characteristics · 2018-09-21T15:25:14.989Z · EA · GW

The link to the raw data doesn't work for me and links to "Enter" instead, and I don't believe you end up where you should end up.