Posts

Chi's Shortform 2021-01-20T22:58:54.361Z
My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda 2020-08-15T19:59:21.909Z
What organizational practices do you use (un)successfully to improve culture? 2020-08-14T22:42:26.812Z
English summaries of German Covid-19 expert podcast 2020-04-08T21:12:00.931Z

Comments

Comment by Chi on Is effective altruism growing? An update on the stock of funding vs. people · 2021-07-29T22:29:52.062Z · EA · GW

edit: no longer relevant since the OP has been edited. (Thanks!)

Personally, if given the choice between finding an extra person for one of these roles who’s a good fit or someone donating $X million per year, to think the two options were similarly valuable, X would typically need to be over three, and often over 10.

(emphasis mine)

This would also mean that if you have a 10% chance of succeeding, then the expected value of the path is $300,000–$2 million (and the value of information will be very high if you can determine your fit within a couple of years).

Just to clarify, that's the EV of the path per year, right?

The funding overhang also created bottlenecks for people able to staff projects, and to work in supporting roles. [...]

I’d typically prefer someone in these roles to an additional person donating $400,000–$4 million.

I assume this is also per year?


Clarifying because I think numbers like this are likely to be quoted/vaguely remembered in the future, and it's easy to miss the per year part.

Comment by Chi on Is effective altruism growing? An update on the stock of funding vs. people · 2021-07-29T22:27:43.700Z · EA · GW

And do you have any idea how the numbers for total funding break down into different cause areas? That seems important for reasoning about this.

+1

I think I often hear longtermists discuss funding in EA and use the $22 billion number from Open Philanthropy. And I think people often make some implicit mental move thinking that's also the money dedicated to longtermism, even though my understanding is very much that that's not all available to longtermism.

Comment by Chi on anoni's Shortform · 2021-07-28T00:32:52.038Z · EA · GW

1.

1.1.: You might want to have a look at a group of positions in population ethics called person-affecting views, some of which include future people and some of which don't. The ones that do often don't care about increasing/decreasing the number of people in the future, but about improving the lives of future people who will exist anyway. That's compatible with longtermism - not all longtermism is about extinction risk. (See trajectory change and s-risk.)

1.2.: No, we don't just care about humans. In fact, I think it's quite likely that most of the value or disvalue will come from non-human minds. (Though I'm thinking digital minds rather than animals.) But we can't influence how the future will go if we're not around, and many x-risk scenarios would be quite bad full stop and not just bad for humans.

1.3.: You might want to have a look at cluelessness (the EA forum and GPI website should have links) or the recent 80,000 Hours podcast with Alexander Berger. Predicting the future and how we can influence it is definitely extremely hard, but I don't think we're decisively in a bad enough position that we can - with a good conscience - just throw our hands up and conclude there's definitely nothing to be done here.

 

2.

2.1 + 2.2.: Don't really want to write anything on this right now

2.3.: Definite no. It just argues that trade-offs must be made, and some bads are worse even than current suffering. Or rather: The amount of bad we can avert is greater even than if we focus on current suffering

2.4: Don't understand what you're getting at.

 

3.

3.1.: Can't parse the question

3.2.: I think many longtermists struggle with this. Michelle Hutchinson wrote a post on the EA forum recently on what still keeps her motivated. You can find it by searching her name on the EA forum.

3.3.: No. Longtermism per se doesn't say anything about how much to personally sacrifice. You can believe in longtermism + think that you should give away your last penny and work every waking hour in a job you don't like. You can not be a longtermist and think you should live a comfortable, expensive life because that's what's most sustainable. Some leanings on this question might correlate with whether you're a longtermist or not, but in principle, this question is orthogonal.

 

Sorry if the tone is brash. If so, that's unintentional, and I tend to be really slow otherwise, but I appreciate that you're thinking about this. (Also, I'm writing this as sleep procrastination, and my guilt is driving my typing speed)

Comment by Chi on COVID: How did we do? How can we know? · 2021-07-07T02:02:07.956Z · EA · GW

On Human Challenge Trials (HCTs):

Disclaimer: I have been completely plugged out of Covid-19 stuff for over a year, definitely not an expert on these things (anymore), and definitely speaking for myself and not 1Day Sooner (which is more bullish on HCTs)

I worked for 1Day Sooner last year as one of the main people investigating the feasibility and usefulness of HCTs for the pandemic. At least back then (March 2020), we estimated that it would optimistically take 8 months to complete the preparations for an HCT (so not even the HCT itself). Most of this time would be used for manufacturing and approving the challenge virus, and for dose-finding studies. (You give people some of the virus and check if it's enough to induce the disease, then repeat with a higher dose, etc.)

I think in a better world, you can probably speed up the approval for the challenge virus, and massively parallelize dose-finding to be less lengthy. Not sure how many months that gets you down to, but the 2.5 months for preparation + the actual HCT you assume seem overly optimistic to me. I still think HCTs should have been prepared, but I'm not sure how much speed that would have actually gained us. More details here in the section "PREPARATORY STEPS NEEDED FOR HUMAN CHALLENGE TRIALS" (free access).

There was also some discussion of challenge trials with natural infection (you put people together with infectious people who have Covid-19), which might get around this? But I don't know what came out of that (I think it wasn't pursued further?). Not sure how logistically feasible that actually is. (I think it would at least be more difficult politically than a normal HCT.)

Don't think this changes the general thrust of your post, but wanted to push back on this part of it.

(There's some chance I missed followup work, perhaps even by 1Day Sooner itself, that corrects these numbers, in which case I stand embarrassed :) )

Comment by Chi on An animated introduction to longtermism (feat. Robert Miles) · 2021-06-23T21:53:34.330Z · EA · GW

Note: this is mostly about your earlier videos. I think this one was better done, so maybe my points are redundant. Posting this here because the writer has expressed some unhappiness with reception so far. I watched the other videos some weeks ago and didn't rewatch them for this comment. I also didn't watch the bitcoin one.

First off, I think trying out EA content on YouTube is really cool (in the sense of potentially high value), really scary, and because of this really cool (in the sense of "of you to do this"). Kudos for that. I think this could be really good and valuable if you incorporate feedback and improve over time.

Some reasons why I was/am skeptical of the channel when I watched the videos:

  • For the 4 videos before this one, I didn't see how they were going to help make the world better. (I can tell some hypothetical stories for 3 of them, but I don't think they achieved that goal because of some of the things later in this comment.)
  • I found the title for the Halo effect one aversive. I'm personally fine with a lot of internet meme humour, but also know some EAs who actually take offense at the Virgin vs. Chad meme. I think for something so outward facing, I want to avoid controversy where it's unnecessary. (And to be clear: not avoid it where it's necessary.) It also just feels click-baity.
  • Watching the videos, I just didn't feel like I could trust the content. If I didn't know some of the content already, it would be really hard for me to tell from the video whether the content was legitimate science or buzzfeed-level rigour. For example, I really didn't know how to treat the information in the cringe one and basically decided to ignore it. This is not to say that the content wasn't checked and legitimate, just that it's not obvious from the videos. Note that this wasn't true for the longtermism one.
  • I found the perceived jump in topic in the cringe video aversive, and it reinforced my impression that the videos weren't very rigorous/truthseeking/honest. I was overall kind of confused by that video.
  • I think the above (and the titles) matter because of the kind of crowd you want to attract and retain with your videos.
  • I think the artistic choice is fine, but also contributes. I don't think that's a problem when not combined with the other things.

In general, the kind of questions I would ask myself, and the reason why I think all of the above are a concern are:

  1. Which kind of people does this video attract?
  2. Which of these people will get involved/in contact with EA because of these videos?
  3. Do we want these people to be involved in the EA project?
  4. Which kind of people does this video turn off?
  5. Which of these people will be turned off of EA in general because of these videos?
  6. Do we want these people to be involved in the EA project?

I'm somewhat concerned that the answer for too many people would be "no" for 3, and "yes" for 6. Obviously there will always be some "no" for 3 and some "yes" for 6, especially for such a broad medium as YouTube, and balancing this is really difficult. (And it's always easier to take the skeptical stance.) But I think I would like to see more to tip the balance a bit.

Maybe one thing that's both a good indicator but also important in its own right is the kind of community that forms in the comment section. I've so far been moderately positively surprised by the comment section on the longtermism video and how you're handling it, so maybe this is evidence that my concerns are misplaced. It still seems like something worth paying attention to. (Not claiming I'm telling you anything new.)

I'm not sure what your plans and goals are, but I would probably prioritise getting the overall tone and community of the channel right before trying to scale your audience.

 

Some comments on this video:

  • I thought it was much better in all the regards I mentioned above.
  • There were still some things I felt slightly uneasy about, but they were much, much smaller, and might be idiosyncratic taste or really-into-philosophy-or-even-specific-philosophical-positions type things. I might also have just noticed them in the context of your other videos, and might have been fine with this otherwise. I feel much less confident that they are actually bad. Examples:
    • I felt somewhat unhappy with your presentation of person-affecting views, mostly because there are versions that don't only value people presently alive. (Actually, I'm pretty confused about this. I thought your video explicitly acknowledged that, but then sounded different later. I didn't go back to check again, so feel free to discard this if it's inaccurate.) Note that I sympathise a lot with person-affecting views, so might just be biased and feel attacked.
    • I feel a bit unhappy that trajectory-change wasn't really discussed.
    • I felt somewhat uneasy about the "but what if I tell you that even this is nothing compared to what impact you could have" part when transitioning from speeding up technological progress to extinction risk reduction. It kind of felt buzzfeedy again, but I think it's plausible I only noticed because I had the context of your other videos. On the more substantive side, I'm not familiar with the discussion around this at all, but I can imagine that whether speeding up growth or preventing extinction risk is more important is an open question to some researchers involved? Really don't know though.

 

Again, I think it is really cool and potentially highly valuable that you're doing this, and I have a lot of respect for how you've handled feedback so far. I don't want to discourage you from producing further videos, just want to give an idea of what some people might be concerned about/why there's not more enthusiasm for your channel so far. As I said, I think this video is definitely in the IMO right direction and find this encouraging.

 

edit: Just seen the comment you left on Aaron Gertler's comment about engagement. Maybe this is a crux.

Comment by Chi on A bunch of reasons why you might have low energy (or other vague health problems) and what to do about it · 2021-06-09T19:07:05.323Z · EA · GW

Hm, I'm a bit unhappy with the framing of symptoms vs. root causes, and am skeptical about whether it captures a real thing (when it comes to mental health and drugs vs. therapy). I'm worried that making the difference between the two contributes to the problems alexrjl pointed out.

Note, I have no clinical expertise and am  just spitballing: e.g. I understand the following trajectory as archetypical for what others might call "aha! First a patch and then root causes":

[Low energy --> takes antidepressants --> then has enough energy to do therapy & changes thought patterns etc. --> becomes long-term better and afterwards doesn't need antidepressants anymore]

But even if somebody had a trajectory like this, I'm not convinced that the thought patterns should count as the root cause and not e.g. physiological imbalances that gave these kinds of thought patterns a rich feeding ground in the first place (which were addressed by the antidepressants, and perhaps had to be addressed first before long-term improvement was possible). This makes me think that even if there is some matter of fact, it's not particularly meaningful.

(This seems even more true to me for things like ADHD - not even sure what the root causes would be here - but these weren't central to the OP.)

I think you might plausibly have a different and coherent conception of the root causes vs. symptoms thing, but I'm worried about using that distinction anyway because 'root causes' is pretty normatively connotated, and people have all kinds of associations with it. (Would still be curious to hear your conceptualisation if you have one.)

I care much less/have no particular thoughts on this distinction in non-mental-health cases, which were the focus of OP.

+1 to appreciating the OP, and I'll probably try out some of the things suggested!

Comment by Chi on How much do you (actually) work? · 2021-05-27T15:37:05.600Z · EA · GW

Hah! Random somewhat fun personal anecdote: I think tracking actually helped me a bit with that. When I first started tracking I was pretty neurotic about doing it super exactly. Having to change my toggl so frequently + seeing the '2 minutes of supposed work X' at the end of the day when looking at my toggl was so embarrassing that I improved a bit over time. Now I'm either better at switching less often and less neurotic about tracking or only the latter. It also makes me feel worse to follow some distraction if I know my time is currently being tracked as something else.

Comment by Chi on Concerns with ACE's Recent Behavior · 2021-04-17T23:33:10.008Z · EA · GW

I might be a little bit less worried about the time delay of the response. I'd be surprised if fewer than say 80% of the people who would say they find this very concerning won't end up also reading the response from ACE.

FWIW, depending on the definition of 'very concerning', I wouldn't find this surprising. I think people often read things, vaguely update, know that there's another side of the story that they don't know, have the thing they read become a lot less salient, happen to not see the follow-up because they don't check the forum much, and end up having an updated opinion (e.g. about ACE in this case) much later without really remembering why.

(e.g. I find myself very often saying things like "oh, there was this EA post that vaguely said X and maybe you should be concerned about Y because of this, although I don't know how exactly this ended in the end" when others talk about some X-or-Y-related topic, esp. when the post is a bit older. My model of others is that they then don't go check, but some of them go on to say "Oh, I think there's a post that vaguely says X, and maybe you should be concerned about Y because of this, but I didn't read it, so don't take me too seriously" etc. and this post sounds like something this could happen with.)

Maybe I'm just particularly epistemically unvirtuous and underestimate others. Maybe for the people who don't end up looking it up but just having this knowingly-shifty-somewhat-update, the information just isn't very decision-relevant and it doesn't matter much. But I generally think information that I got with lots of epistemic disclaimers and that has lots of disclaimers attached in my head does influence me quite a bit, and writing this makes me think I should just stop saying dubious things.

Comment by Chi on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-17T23:22:31.790Z · EA · GW

And if hours went into carefully picking the original ten episodes and deciding how to sequence them, I'd like to see modifications made via a process of re-listening to different podcasts for hours and experimenting with their effects in different orders, seeing what "arcs" they form, etc., rather than via quick EA Forum comments and happy recollections of isolated episodes.

 

I agree that that's how I want the eventual decision to be made. I'm not sure what exactly the intended message of this paragraph was, but at least one reading is that you want to discourage comments like Brian's or otherwise extensive discussion on the contents of the podcast list. In case anyone reads it that way, I strongly disagree.

This has some flavor of 'X at EA organisation Y probably thought about this for much longer than me/works on this professionally, so I'll defer to them', which I think EAs generally say/think/do too often. It's very easy to miss things even when you've worked on something for a while (esp. if it's more in the some months than many years range) and outsiders often can actually contribute something important. I think this is already surprisingly often the case with research, and much more so the case with something like an intro resource where people's reactions are explicitly part of what you're optimizing for. (Obviously what we care about are new-people's reactions, but I still think that people-within-EA-reactions are pretty informative for that. And either way, people within EA are clearly stakeholders of what 80,000 Hours does.)

As with everything, there's some risk of the opposite ('not expecting enough of professionals?'), but I think EA currently is too far on the deferry end (at least within EA, I could imagine that it's the opposite with experts outside of EA).

Meta: Rereading your comment, I think it's more likely that your comment was either meant as a message to 80,000 Hours about how you want them to make their decision eventually or something completely different, but I think it's good to leave thoughts on possible interpretations of what people write.

Comment by Chi on What material should we cross-post for the Forum's archives? · 2021-04-15T14:10:59.403Z · EA · GW
  • Some stuff from Paul Christiano's 'The sideways view'

In addition to everything that Pablo said (esp. the Tomasik stuff because AFAICT none of his stuff is on the forum?)

Comment by Chi on The EA Forum Editing Festival has begun! · 2021-04-09T15:59:45.511Z · EA · GW
  1. I found tagging buggy. I tried to tag something yesterday, and I believe it didn't get through, although it worked today. The 'S-risks' tag doesn't show up in my list to tag posts at all, although it's an article. But that might also be something about the difference between tags and articles that I don't understand? I use Firefox and didn't check on other browsers.

  2. Is there a consensus for how to use organisation tags? Specifically, is it desirable to have every output that's ever come out of an organisation tagged to them, or only e.g. organisational updates? I've seen the first partly, but scarcely, done and am not sure about my opinion. (I mean things like "This report is published on the EA forum and the person who worked on this report was at org X at the time and wrote it as part of their job")

edit: 3) Just adding this on here...Is there a way to tag everything that has one tag with another tag? (I'm speaking of the 'economics' tag + lots of more specific tags; 'moral philosophy' and 'metaethics' etc.)

Comment by Chi on How to work with self-consciousness? · 2021-02-04T00:10:21.699Z · EA · GW

I'm not a very experienced researcher, but I think in my short research career, I've had my fair share of dealing with self-consciousness. Here are some things I find:

Note that I mostly refer to the "I'm not worth other people's time", "This/I am dumb", "This is bad, others will hate me for it" type of self-consciousness. There might be other types of self-consciousness, e.g. "I'm nervous I'm not doing the optimal thing and feel bad because then more morally horrible things will happen" in a way that's genuinely not related to self-confidence, self-esteem etc. for which my experience will not apply. This is apart from the obvious fact that different things work for different people

Some general thoughts:

  • As my disclaimer might indicate, I see research self-consciousness not as bound to research. Before/outside of research/work/EA in general I didn't notice that I had any issues with self-confidence. But for me at least, I think research/work/EA just activated some self-esteem/self-confidence issues I had before. (That is not to say that self-esteem etc. are not domain-specific, but I still think there's some more general thing going on as well) So, I approach research self-consciousness quite holistically as character development and try to improve not strictly in the research domain, although that will hopefully also help with research self-consciousness. Because I never looked at it from a pure research-domain lens, some things might seem a bit off, but I'll try to make it relevant.
  • The goal of the things I do to improve self-consciousness is not primarily to get to a state where I think my research is great, but to get to a state where I can do research, think it's great or not, be wrong, and be okay with it. I sometimes have to remind myself of that. On the occasions on which I do and decouple my self-esteem from the research, it lowers the stakes: If my research really is crap, at least it doesn't mean I'm crap.

Things I do to improve:

  • In the moment of feeling self-conscious: I would second Jason that talking to others about the object-level is magic

  • I also have a rule of talking to others about my research/sharing writeups whenever it feels most uncomfortable to do so. Those are often the moments when I'm most hardstuck, because an anxious mind doesn't research well, exchange with others really helps, but my anxiety traps me into staying in that bad state!

  • I do something easy to commit myself to something that seems scary but important for research progress, so in the moment of self-consciousness I can't just back out again. Examples:

  1. Social accountability is great for this. When I'm in a really self-conscious-can't-work-state, I sometimes commit myself to sending someone something I haven't started yet within 30 minutes, no matter what state it's in.

  2. I also often find it way easier to say "yes, I'll do this talk/discussion round at date X" or messaging another person "Hey, I have this idea I wanted to discuss, can I send you a doc?" (Even though I don't have a good doc, yet, because I think the idea is crap) than to do the thing, so whenever I feel able to do the first, I do it and future Chi has to deal with it, no matter how self-conscious she is.

  3. Often, just starting is the hardest thing. At least for me, that's where feeling super self-conscious often happens and stops me from doing anything. I sometimes set a timer for 5 minutes to do work. That's short enough that it feels ridiculous not to be able to do it, and afterwards I often feel way less self-conscious and can just continue.

  • I think the above examples are partly me basically doing iterated exposure therapy with myself. (I never put it in those terms before.) It's uncomfortable and sucky, but it helps (and I get some perverse enjoyment out of it). I try to look for the thing I'm most scared of that feels related to my research self-consciousness, that seems barely doable, and try to do it, until that thing becomes easier and then I go "to the next level". E.g. maybe at some point, I want to practice sharing my opinions publicly on a platform, but can only do so if I run my opinion by a friend beforehand and then excessively qualify when writing my opinion and emphasize how dumb and wrong everything could be. And that's fine, you can "cheat" a bit. But after a while you hopefully feel comfortable enough that you can challenge yourself to cut the excessive qualifiers and stop running things by your friend. Ideally, you do that with research-related things (e.g. iterated exposure of all your crappy research stuff, then with you confidently backing them, then on some topic that scares you, then with scarier people etc. --> Not suggesting that this order makes sense for everyone), but we don't always have the opportunity to iterate quickly on research because research takes time. I have the goal to improve on this generally anyway, but I think even if not, some things are good enough nearby substitutes that are relevant to research self-consciousness. Examples:
  1. For self-consciousness reasons, I struggle with saying "Yes, I think this is good and promising" about something I work on, which makes me useless at analyzing whether e.g. a cause area is promising, which is incidentally exactly my task right now. So I looked for things that felt similar and uncomfortable in the same way and settled for trying to post at least one opinion/idea a weekday in pre-specified channels. (I had to give up after a week, but I think it was really good and I want to continue once I have more breathing room.)

  2. For the same reason as above, I deliberately go through my messages and delete all anxious qualifiers. I can't always do that in all contexts because they make me too self-conscious, and I allow myself that.

  • I appreciate that the above self-exposure-therapy examples might be too difficult for some and that might seem intimidating. (I've definitely been at "I'd never, ever, ever write a comment on the forum!" I'm still self-conscious about what I up- and downvote and no one can even see that) But you can also make progress on a lower level, just try whatever seems manageable and be gentle to yourself. (And back off if you notice you bit off too much.) However, it can still be pretty daunting and it might be that it's not always possible to do the above completely independently. (E.g. I think I only got started when I spent several weeks at a research organisation I respect a lot, felt terrible for many parts, but couldn't back out and just had to do or die, and had a really good environment. I'm not sure "sticking through" would have been possible for me without that organisational context)

  • I personally benefited a lot from listening to other people's stories, general content on failing, self-esteem etc. I'm not sure how applicable that is to others that try to improve research self-consciousness because I never looked at it from a pure research lens, but it's motivating to have a positive ideal as well, and not just "self-consciousness is bad." I usually consume non-EA podcasts and books for this.

On positive motivation:

  • Related to the last point of positive ideals: Recently, I found it really inspiring to look at some people who just seem to have no fear of expressing their ideas, enthusiasm, think things through themselves without anyone giving them permission etc. And I think about how valuable just these traits are apart from the whole "oh, they are so smart" thing. I find that a lot more motivating than the smartness-ideal in EA, and then I get really motivated to also become cool like that!

  • I guess for me there's also a gender thing in where the idea of becoming a kickass woman is double motivating. I think I also have the feeling that I want to make progress on this on behalf of other self-conscious people that struggle more than me. I'm not really sure why I think that benefits them, but I just somehow do. (Maybe I could investigate that intuition at some point.) And that also gives me some positive motivation.

Comment by Chi on How to discuss topics that are emotionally loaded? · 2021-02-03T01:37:45.399Z · EA · GW

Ironically, I felt somewhat upset reading OP, I think for the reason you point out. (No criticism towards OP, I was actually amused at myself when I noticed)

I think some reason-specific heterogeneity in how easily something is expressible/norms in your society also play a role:

  1. I think some reasons are just inherently fuzzier (or harder to crisply grasp), e.g. why certain language makes you feel excluded. (It's really hard to point at concrete damage (or, in some circles, something that can't be countered with "that's not how it's meant [, but if you want to be sensitive, we can accommodate that].")) I think that's doubly troubling because the other person often takes you less seriously and because you might take yourself less seriously. I think at least I'm more prone to be emotional when I feel like my reasons are of this type, and maybe that's similar for others?
  2. Some kinds of reasoning are more socially accepted in different circles. E.g. in some EA circles, I imagine the "anti"-vegan argument would be associated with higher social status, and in some EA circles it would be the other way around. At least in my case, I'm more prone to be emotional when I feel like I have the less socially approved opinion/reasoning process.

I guess the common thread here is feeling threatened and like one needs to defend one's opinion because it's likely to be undermined. I guess the remedy would be... Really making sure the other person feels taken seriously (including by themselves) and safe and says everything they want? (Maybe someone else can come up with something more helpful and concrete) That's obviously just the side of the non-offended person, but I feel like the ways the upset person could try to improve in such situations is even more generic and vague.

Obviously, this is just one type of being emotional during conversations. E.g. if what I say explains any meaningful variance at all, it probably does so less for 4) than for 3). (Maybe not coincidentally since I'm not male)

Comment by Chi on Chi's Shortform · 2021-01-21T15:08:39.130Z · EA · GW

Thanks for the reply!

Honestly, I'm confused by the relation to gender. I'm bracketing out genders that are both not-purely-female and not-purely-male because I don't know enough about the patterns of qualifiers there.

  • In general, I think anxious qualifying is more common for women. EA isn't known for having very many women, so I'm a bit confused why there's seemingly so much of it in EA.
  • (As a side: This reminds me of a topic I didn't bring into the original post: How much is just a selection effect and how much is EA increasing anxious qualifying. Intuitively, I at least think it's not purely a selection effect, but I haven't thought closely about this.)
  • Given the above, I would expect that women are also more likely to take the EA culture, and transform it into excessive use of anxious qualifiers, but that's just speculation. Maybe the percentage change of anxious qualifier use is also higher for men, just because their baseline is lower
  • I'm not sure how this affects gender diversity in EA as a whole. I can imagine that it might actually be good because underconfident people might be less scared off if the online communication doesn't seem too confident, and they feel like they can safely use their preferred lots-of-anxious-signalling communication strategy.
  • That being said, I guess that what would do the above job (at least) equally well is what I call "3" in my reply to Misha. Or, at least I'm hopeful that there are some other communication strategies that would have that benefit without encouraging anxious signalling.
  • edit: I noticed that the last bullet point doesn't make much sense because I claim elsewhere that 3 can encourage 4 because they look so similar, and I stand by that.

Interestingly, maybe not instructively, I was kind of hesitant to bring gender into my original post. Partly for good reasons, but partly also because I worried about backlash or at least that some people would take it less seriously as a result. I honestly don't know if that says much about EA/society, or solely about me. (I felt the need to include "honestly" to make it distinguishable from a random qualifier and mark it as a genuine expression of cluelessness!)

Comment by Chi on Chi's Shortform · 2021-01-21T14:53:59.930Z · EA · GW

Reply 3/3

"displaying uncertainty or lack of knowledge sometimes helps me be more relaxed"

I think there's a good version of that experience and I think that's what you're referring to, and I agree that's a good use of qualifiers. Just wanted to make a note to potential readers because I think the literal reading of that statement is a bit incomplete. So, this is not really addressed at you :)

I think displaying uncertainty or lack of knowledge always helps you to be more relaxed, even when it comes from a place of anxious social signalling. (See my first reply for what exactly I mean by that and what I contrast it to.) That's why people do it. If you usually anxiously qualify and force yourself not to do it, that feels scary. I still think practicing not doing it will help with self-confidence, as in taking yourself more seriously, in the long run. (Apart from efficient communication)*

Of course, sometimes you just need to qualify things (in the anxious social signalling sense) to get yourself in the right state of mind (e.g. to feel safe to openly change your mind later, freely speculate, or to say anything at all in the first place), or allowing yourself the habit of anxious social signalling makes things so much more efficient that you should absolutely go for it and not beat yourself up over it. Actually, an almost-ideal healthy confidence probably also includes some degree of what I call anxious social signalling, and it's unrealistic to get rid of all of it.

  • I just found one other frame for what I meant with anxious social signalling partly being rewarded in EA. Usually, that kind of signaling means others take you less seriously. I think it's great that that's not so much the case in EA, but I worry that sometimes it may look like people in EA take you more seriously when you do it. Maybe because EA actually endorses what I call 3 in my first reply, but - to say the same thing for the 100th time - I worry that it also encourages anxious social signalling.
Comment by Chi on Chi's Shortform · 2021-01-21T14:39:20.367Z · EA · GW

Reply 2/3

I like the suggestions, and they probably-not-so-incidentally are also things that I often tell myself I should do more and that I hate. One drawback with them is that they are already quite difficult, so I'm worried that it's too ambitious of an ask for many. At least for an individual, it might be more tractable to (encourage them to) change their excessive use of qualifiers as a first baby step than to jump right into quantification and betting. (Of course, what people find more or less difficult confidence-wise differs. But these things are definitely quite high on my personal "how scary are things" ranking, and I would expect that that's the case for most people.)

OTOH, on the community level, the approach to encourage more quantification etc. might well be more tractable. Community-wide communication norms are very fuzzy and seem hard to influence on the whole. (I noticed that I didn't draw the distinction quite where you drew it. E.g. "Acknowledgements that arguments changed your mind" are also about communication norms.)

I am a little bit worried that it might have backfire effects. More quantification and betting could mostly encourage already confident people to do so (while underconfident people are still stuck at "wouldn't even dare to write a forum comment because that's scary"), make the online community seem more confident, and make entry for underconfident people harder, i.e. scarier. Overall, I think the reasons to encourage a culture of betting, quantification etc. are stronger than the concerns about backfiring. But I'm not sure if that's the case for other norms that could have that effect. (See also my reply to Emery)

Comment by Chi on Chi's Shortform · 2021-01-21T14:38:59.460Z · EA · GW

Reply 1/3 Got it now, thanks! I agree there's confident and uncertain, and it's an important point. I'll spend this reply on the distinction between the two, another response on the interventions you propose, and another response on your statement that qualifiers often help you be more relaxed.

The more I think about it, the more I think that there's quite a bit for someone to unpack here conceptually. I haven't done so, but here a start:

  1. There's stating your degree of epistemic uncertainty to inform others how much they should update based on your belief (e.g. "I'm 70% confident in my beliefs, i.e. I think it's 70% likely I'd still hold them after lots of reflection.")
  2. There's stating probabilities which looks similar, but just tells others what your belief is, not how confident you are in it ("I think event X is 70% likely to occur")
  3. There's stating epistemic uncertainty for social reasons that are not anxiety/underconfidence driven: Making a situation less adversarial; showing that you're willing to change your mind; making it easy for others to disagree; just picking up this style of talking from people around you
  4. There's stating epistemic uncertainty for social reasons that is anxiety/underconfidence driven: Showing you're willing to change your mind, so others don't think you're cocky; Saying you're not sure, so you don't look silly if you're wrong/any other worry you have because you think maybe you're saying something 'dumb'; Making a situation less adversarial because you want to avoid conflict because you don't want others to dislike you
  5. There's stating uncertainty about the value of your contribution. That can honestly be done in full confidence, because you want to help the group allocate attention optimally, so you convey information and social permission to not spend too much time on your point. I think online most of the reasons to do so do not apply (people can just ignore you), so I'm counting it mostly as anxious social signalling or, in the best case, a not so useful habit. An exception is if you want to help people decide whether to read a long piece of text.

I think you're mostly referring to 1 and 2. I think 1 and 2 are good things to encourage and 4 and 5 are bad things to encourage. Although I think 4/5 also have their functions and shouldn't be fully discouraged (more in my [third reply](https://forum.effectivealtruism.org/posts/rWSLCMyvSbN5K5kqy/chi-s-shortform?commentId=un24bc2ZcH4mrGS8f)). I think 3 is a mix. I like 3. I really like that EA has so much of 3. But too much can be unhelpful, esp. the "this is just a habit" kind of 3. I think 1 and 2 look quite different from 4 and 5. The main problem is that it's hard to see if something is 3 or 4 or both, and that often, you can only know if you know the intention behind a sentence. Although 1 can also sometimes be hard to tell apart from 3, 4, and 5, e.g. today I said "I could be wrong", which triggered my 4-alarm, but I was actually doing 1. (This is alongside other norms, e.g. expert deference memes, that might encourage 4.)

I would love to see more expressions that are obviously 1, and less of what could be construed as any of 1, 3, 4, or 5. Otherwise, the main way I see to improve this communication norm is for people to individually ask themselves which of 1,3,4,5 is their intention behind a qualifier. edit: No idea, I really love 3

Comment by Chi on Chi's Shortform · 2021-01-21T00:45:14.761Z · EA · GW

I just wondered whether there is systematic bias in how much advice there is in EA for people who tend to be underconfident and people who tend to be appropriately or overconfident. Anecdotally, when I think of memes/norms in effective altruism that I feel at least conflicted about, that's mostly because they seem to be harmful for underconfident people to hear.

Way in which this could be true and bad: people tend to post advice that would be helpful to themselves, and underconfident people tend to not post advice/things in general.

Way in which this could be true but unclear in sign: people tend to post advice that would be helpful to themselves, and there are more appropriately confident or overconfident people in the community than underconfident ones.

Way in which this could be true but appropriate: advice that would be harmful when overconfident people internalize it tends to be more harmful than advice that's harmful to underconfident people. Hence, people post proportionally less of the first.

(I don't think the vast space of possible advice just has more advice that's harmful for underconfident people to hear than advice that's harmful for overconfident people to hear.)

Maybe memes/norms that might be helpful for underconfident people to hear, or their properties that could be harmful for underconfident people, are also just more salient to me.

Comment by Chi on Chi's Shortform · 2021-01-21T00:34:30.291Z · EA · GW

Hey Misha! Thanks for the reply and for linking the post, I enjoyed reading the conversation. I agree that there's an important difference. The point I was trying to make is that one can look like the other, and that I'm worried that a culture of epistemic uncertainty can accidentally foster a culture of anxious social signaling, esp. when people who are inclined to be underconfident can smuggle anxious social signaling in disguised (to the speaker/writer themselves) as epistemic uncertainty. And because anxious social signalling can superficially look similar to epistemic uncertainty, they see other people in their community show similar-ish behavior and see similar-ish behavior be rewarded. Not sure how to address this without harming epistemic uncertainty though. (although I'm inclined to think the right trade-off point involves accepting more risk of losing some of the good communication of epistemic uncertainty)

Or was your point that you disagree that they look superficially similar? And hence, one wouldn't encourage the other? And if that's indeed your point, would you independently agree or disagree that there's a lot of anxious social signaling of uncertainty in effective altruism?

Comment by Chi on Chi's Shortform · 2021-01-20T23:59:14.704Z · EA · GW

Should we interview people with high status in the effective altruism community (or make other content) featuring their (personal) story, how they have overcome challenges, and live into their values?

Background: I think it's no secret that effective altruism has some problems with community health. (This is not to belittle the great work that is done in this space.) Posts that talk about personal struggles, for example related to self-esteem and impact, usually get highly upvoted. While many people agree that we should reward dedication and that the thing that really matters is to try your best given your resources, I think that, within EA, the main thing that gives you status, that many people admire, desire, and tie their self-esteem to is being smart.

Other altruistic communities seem to do a better job at making people feel included. I think this has already been discussed a lot, and there seem to be some reasons for why this is just inherently harder for effective altruism to do. But one specific thing I noticed is what I associate with leaders of different altruistic communities.

When I think of most high status people in effective altruism, I don't think of their altruistic (or other personal) virtues, I think 'Wow, they're smart.' Not because of a lack of altruistic virtues - I assume -, but because smartness is just more salient to me. On the other hand, when I think of other people, for example Michelle Obama or Melinda Gates or even Alicia Keys for that matter, I do think "Wow, these people are so badass. They really live into their values." I wouldn't want to use them as role models for how to have impact, but I do use them as role models for what kind of person I would like to be. I admire them as people, and they inspire me to work on myself to become like them in relevant respects, and they make me think it's possible. I am worried that people look at high status people in effective altruism for what kind of person they would like to be, but the main trait of those people they are presented with is smartness, which is mostly intractable to try to improve.

I don't think this difference is because these non-EAs lack any smartness or achievement that I could admire. I think it's because I have consumed content where their personal story and values were put front and centre alongside what they did and how they achieved it. Similarly, I don't think that high status people in effective altruism lack any personal virtue I could aspire to, but I'm simply not exposed to it.

I don't know if it would actually improve this aspect of community health, and whether it's overall worth the time of all people involved (although I think the answer is yes if the answer to the first is yes), but this made me wonder if we should create more content with high status people in the effective altruism community that is similar to the kind of interviews with non-EAs I mentioned. 'That kind of content' is pretty vague, and one would have to figure out how we can best celebrate the kind of virtues we want to celebrate, and whether this could work, in principle, with effective altruism. (Maybe the personal virtues we most admire in high status effective altruists just are detrimental to the self-esteem of others. I can imagine that with some presentations of impact obsession for example.) But this might be a worthwhile idea, and I am somewhat hopeful that this could be combined with the presentation of more object-level content (the type that 80k interviews are mostly about).

Comment by Chi on Chi's Shortform · 2021-01-20T22:58:54.842Z · EA · GW

Observation about EA culture and my journey to develop self-confidence:

Today I noticed an eerie similarity between things I'm trying to work on to become more confident and effective altruism culture. For example, I am trying to reduce my excessive use of qualifiers. At the same time, qualifiers are very popular in effective altruism. It was very enlightening when a book asked me to guess whether the following piece of dialogue was from a man or woman:

'I just had a thought, I don't know if it's worth mentioning...I just had a thought about [X] on this one, and I know it might not be the right time to pop it on the table, but I just thought I'd mention it in case it's useful.'

and I just immediately thought 'No, that's an effective altruist'. I think what the community actually endorses is communicating the degree of epistemic certainty and making it easy to disagree, while the above quote is anxious social signalling. I do think the community does a lot of the latter though, and it's partly rewarded because of confounding with the first. (In the above example it's obvious, but I think anxious social signaling is also often the place where 'I'm uncertain about this', 'I haven't thought much about this', and 'I might be wrong' (of course you might be wrong) come from. That's certainly the case for me.) Tangentially, there is also a strong emphasis on deference and a somewhat conservative approach to not causing harm, esp. with new projects.

Overall, I am worried that this communication norm and the two memes I mentioned foster under-confidence, a tendency to keep yourself small, and the feeling that you need permission to work on important problems or to think through important questions. The communication norm and memes I mentioned also have upsides, esp. when targeted at overconfident people, and I haven't figured out yet what my overall take on them is. I just thought it was an interesting observation that certain things I'm trying to decrease are particularly pervasive in the effective altruism community.

(I think there are also lots of other problems related to self-esteem and effective altruism, but I wanted to focus on this particular aspect.)

Comment by Chi on Training Bottlenecks in EA (professional skills) · 2021-01-19T19:22:23.544Z · EA · GW

Thanks for the reply! I was initially just self-interestedly wondering which training you got and whether you would recommend it. But I am also happy to hear about your plans in that direction.

Given the time constraints, do you think there are any other people for whom it would make sense to take the lead on this that you are not yet in touch with (e.g. a specific type of person rather than specific individuals)? And if so, which traits would that person need? You already mentioned that you want to work on it with help anyway, and I can imagine that it doesn't make sense for any other person to take this up right now given your expertise. Still wanted to ask if you think there are any sensible versions that would involve you less and would be feasible time-wise, because I also think this is a majorly important topic and would love to see something happen.

Comment by Chi on My mistakes on the path to impact · 2021-01-18T23:10:37.332Z · EA · GW

I think the comparison to "the current average experience a college graduate has" isn't quite fair, because the group of people who see 80k's advice and act on it is already quite selected for lots of traits (e.g. altruism). I would be surprised if the average person influenced by 80k's EtG advice had the average college graduate experience in terms of which careers they consider and hence, where they look for advice, e.g. they might already be more inclined to go into policy, the non-profit sector, or research to do good.

(I have no opinion on how your point comes out on the whole. I wasn't around in 2015, but intuitively it would also surprise me if 80k didn't do substantially more good during that time than bad, even bracketing out community building effects (which, admittedly, is hard))

Comment by Chi on Effektiv Spenden - Fundraising and 2021 Plans · 2021-01-18T22:59:19.643Z · EA · GW

Hey, I wanted to probe a bit into why you don't write in gender neutral language on your website.

  • (For those who are not German: in German most nouns that refer to persons are not gender neutral by default, but always refer to either male or female persons, with the male version having been the default version for a long time. In the last decade, there has been a pushback against this and people started to adopt gender neutral language, which often looks a bit clunky though.) -

I saw that you justify this with better readability in your FAQ, but I didn't find the response very satisfying. On reasons not to write gender neutral:

  • Readability: My guess is that at this point, most people have gotten used to gender neutral language and don't really stumble when they read it anymore. Actually, I think there's probably a fair share of people that stumble when they read non-gender-neutral language nowadays. There are also some less clunky solutions (e.g. the female version with a capitalized "I", or explicitly stating that you'll alternate gender between sections/pages). (They aren't as correct because they exclude people who are neither female nor male, but they're probably still a better alternative than not using any gender neutral language at all.)
  • Appeal to target audience: You might worry that gender neutral language might not be appealing to some target audiences that would usually donate fairly large amounts of money, but would not if the website was written in gender neutral language (e.g. conservative-leaning, wealthy donors). You'll know better than I do, and if you have convincing arguments that this is the case (and that it outweighs the money you could raise from people who are repelled by non-gender-neutral language), I'd probably support your decision. I would be somewhat surprised by this though. To me, using gender neutral language seems fairly normal and professional and not "lefty wooi-booi student initiative" anymore (e.g. the German Federal Agency for Civic Education uses gender neutral language, at least partly).
  • The time cost of using gender neutral language seems fairly small

On the other hand:

  • I know at least one person who isn't involved in EA but is interested in effective giving and who almost didn't donate via effektiv-spenden because you don't use gender neutral language. I would guess that a fair proportion of your target audience might be similarly inclined.
  • Apart from that, I also care about gender neutral language for feminist reasons, but that's not what I wanted to focus on
Comment by Chi on Training Bottlenecks in EA (professional skills) · 2021-01-18T21:07:02.846Z · EA · GW

Hey Kathryn, this is a bit off-topic, but I was wondering what that impostor syndrome training is that Michelle mentions in the post. Asking here because I imagine more people might be interested in this.

Comment by Chi on My Understanding of Paul Christiano's Iterated Amplification AI Safety Research Agenda · 2020-08-17T18:11:04.680Z · EA · GW

Hey Max, thanks for your comment :)

Yeah, that's a bit confusing. I think technically, yes, IDA is iterated distillation and amplification and that Iterated Amplification is just IA. However, IIRC many people referred to Paul Christiano's research agenda as IDA even though his sequence is called Iterated Amplification, so I stuck to the abbreviation that I saw more often while also sticking to the 'official' name. (I also buried a comment on this in footnote 6.)

I think lately, I've mostly seen people refer to the agenda and ideas as Iterated Amplification. (And IIRC I also think the amplification is the more relevant part.)

Comment by Chi on The Case for Education · 2020-08-16T13:43:43.463Z · EA · GW

Hm, I'm not sure how easily it's reproducible/what exactly he did. I had to write essays on the topic every week and he absolutely destroyed my first essays. I think reading their essay is an exceptionally good way to find out how much the person in question misunderstands and I'm not sure how easily you can recreate this in conversation.

I guess the other thing was a combination of deep subject-matter expertise + [being very good at normal good things EAs would also do] + a willingness to assume that when I said something that didn't seem to make sense, it indeed didn't make sense, and telling me so/giving me all the possible objections to my argument; and then just feeling comfortable talking for 20 minutes (basically lecturing). I think that worked because of the formal tutor-student setting we were in and because he evidently and very obviously knew a lot more about the topic than me. I think it's harder in natural settings to realize that that's the case and confidently act on it.

What I mean by [normal good things EAs would also do]: Listening to my confused talking, paraphrasing what I was trying to say into the best steelman, making sure that that's what I meant before pointing out all the flaws.

Comment by Chi on EA Meta Fund Grants – July 2020 · 2020-08-16T12:22:01.282Z · EA · GW

Small point that's not central to your argument:

A similar thing might happen here: if there was a universal mentoring group that gave women access to both male and female mentors, why would they choose the segregated group that restricted them to a subset of mentors?

I had actually also asked WANBAM at some point whether they considered adding male mentors as well but for different reasons.

I think at least some women would still prefer female mentors. Anecdotally, I've often found that it's easier for other women to relate to some of my work-related struggles and that it's generally easier for me to discuss those struggles with women. This is definitely not true in every case, but the hit rate (of connections where talking about work struggles works really well) is much higher among women than among men, and I expect this to be true for many other women as well.

Comment by Chi on EA Forum update: New editor! (And more) · 2020-08-15T19:25:42.241Z · EA · GW

Is there a way to have footnotes and tables in the same post? I tried just now and can't see a way. (You have to switch to the EA Forum doc [beta] editor for tables, which kills your footnotes; you have to switch to markdown for footnotes, which kills your tables.)


edit: I found some markdown code for tables which worked, but then I had trouble with formatting within the table. I decided to just take screenshots of the tables instead and upload them as images, which also works. If anyone knows an easier/nicer way to do this, or if anything is planned, that would be great :)
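
For anyone running into the same issue, this is roughly the kind of pipe-table markdown I mean (a minimal sketch with made-up cell contents; I'm assuming the markdown editor supports standard pipe tables, and formatting inside the cells is exactly where I had trouble):

```
| Header A | Header B |
| -------- | -------- |
| cell 1   | cell 2   |
```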

Comment by Chi on The Case for Education · 2020-08-14T22:23:54.445Z · EA · GW

Thanks for writing this :) I certainly agree that the education system isn't optimal and maybe only useful to a handful of people. However, I'd like to offer myself as a data point of someone who actually thinks they benefit from their education. I'm worried that people might sometimes come away with the feeling that going to uni is wrong and pointless, or purely signalling, when that's not true in some cases.

I'm a bit of an outlier in that I'm actually in my second bachelor's degree, and I definitely don't want to claim that that's a good idea for everyone. The first one was from a not particularly well-known university, and my current degree is at a prestigious university. After my first year at my current university I was offered a job at an EA organization and, after a lot of deliberation, turned it down. I'm not sure that was the right choice, but I still think I got a lot of benefits from continuing my degree. Here are some examples of why:

  • I learned a lot about writing. I got a ton of practice and feedback (both 1-2x a week). I don't think this would have been possible otherwise.
  • Last term, I took a course in Philosophy of Cognitive Science. There's a good chance I would have wanted to spend some time on the same topics in my free time for EA-ish reasons. My tutor pushed back and improved my thinking a lot, and in a way that I frankly don't expect most of the people in my EA circle to. I hope this also helps me evaluate the quality of discussion and arguments in EA a bit, although I'm not sure if that's a real effect.
  • I often see the argument advanced that you could just learn much more effectively in your free time. I'm slowly arriving at a point at which I think I would probably continue learning and working on my own. However, when I started studying that was certainly not the case. It's really hard to self-study (for many). I think many, and probably the majority of, people really benefit from a structure that forces you to do something. That and the tutoring I get at my uni make me think that quantitatively I actually learn a lot more at uni than I otherwise would, although it's of course true that I could direct my time towards learning a lower quantity of more relevant content.
  • This is not an argument for education per se, but at the current point I'd be quite concerned about EA curricula being too one-sided if that's the only education you get. (It depends on the field and execution of course, and I might be wrong.)
  • I expect the benefits are probably greater for certain graduate studies where you have more contact with mentors, but I'm not sure about this.

Admittedly, all of these reasons mainly apply to my second degree. I'm a lot more willing to concede that my first degree was mostly a waste of time, although I'm still often surprised by how much of the stuff I learned was actually useful (mostly stats). I also think the case is quite different if you're not interested in research, or are interested in e.g. ML engineering.

Comment by Chi on 2019 Ethnic Diversity Community Survey · 2020-06-22T12:05:22.792Z · EA · GW

Thanks for doing this work!

I've thought about the "Improving awareness and training of social justice" point a bit in the past when thinking about gender diversity, and I find it difficult. I am a bit worried that it is extremely hard or impossible without everyone investing a substantial amount of time:

My impression is that a lot of (ethnic/gender/...) diversity questions have no easy fixes that a few people can think about and implement, but would rather benefit a lot from every single person trying to educate themselves to increase their own awareness, especially community builders and high-profile people who get a lot of attention. One example that I think is hard to improve otherwise: I noticed the following sections in Toby Ord's The Precipice:

"Indeed, when I think of the unbroken chain of generations leading to our time and of everything they have built for us, I am humbled. I am overwhelmed with gratitude"

I know this doesn't detract from his overall point about what the generations of the last hundreds of thousands of years have done "for us", but I can't help but wonder how reading this must feel for some people who primarily associate history with their ancestors being fucked over by colonialism or being enslaved, and who are still paying the price for this. The Precipice actually mentions this later, but it is clearly written from the perspective of the people descended from those inflicting injustice, not those receiving it:

"Consider that some of the greatest injustices have been inflicted [...] by groups upon groups: Systematic persecution, stolen lands, genocides. We may have duties to properly acknowledge and memorialise these wrong; to confront the acts of our past. And there may yet be ways for the beneficiaries of these acts to partly remedy them or atone for them."

While I'm a POC, I'm certainly not from an ethnicity that has suffered the most from the historic (and ongoing) actions of the elites in primarily white countries. But I can imagine that many people whose families are or have been on the receiving end of the injustice might be alienated by this section, which reads a bit as if it's a given that readers are on the other side of the coin.

I don't want to slander the book or its author; I very much enjoy the book and don't assume any negative intentions (and I think the section can also be read more charitably, but I think it's important that it can be read in an alienating way by POC). I just think this is a good example of the problem, and that it is really hard to be aware of such things when you haven't spent substantial time trying to better understand underrepresented groups that you are not part of.

While, in an ideal world, I would like everyone to do so, it is a big time sink, and I feel reluctant about recommending that everyone invest this time, especially when the opportunity costs are so high. I'm not sure how to remedy this; whether investing the time is clearly worth it; whether there are better, less time-intensive ways to make progress; whether we should only aim for low-hanging fruit; or something else entirely. I would be very curious to hear other people's thoughts, and I'd gladly find out that I'm totally off the mark and worrying more than is warranted :)

(I also feel wary of openly saying that investing in understanding underrepresented groups might not be worth the time, as I just did, because I think it can be very hurtful and dehumanizing.)

Comment by Chi on Concerning the Recent 2019-Novel Coronavirus Outbreak · 2020-01-30T23:26:43.694Z · EA · GW

I respect that you are putting money behind your estimates and I get the idea behind it, but I would recommend you reconsider whether you want to do this (publicly) in this context, and maybe consider removing these comments. Not only because it looks quite bad from the outside, but also because I'm not sure it's appropriate on a forum about how to do good, especially if the virus should happen to kill a lot of people over the next year (which would also mean that even more people would have lost someone to the virus). I personally found this quite morbid, and I have a lot more context on EA culture than a random person reading this, e.g. I can guess that the primary motivation is not "making money" or "the feeling of winning and being right" (which would be quite inappropriate in this context), but that might not be clear to others with less context.


(Maybe I'm also the only one having this reaction in which case it's probably not so problematic)


edit: I can understand if people just disagree with me because they think there's no harm done by such bets, but I'd be curious to hear from the people who downvoted whether, in addition to that, they think comments like mine are harmful because they're bad for epistemic habits or something. I'd be grateful to hear if someone thinks comments like these shouldn't be made!

Comment by Chi on EA Survey 2018 Series: Community Demographics & Characteristics · 2018-09-21T15:25:14.989Z · EA · GW

The link to the raw data doesn't work for me; it points to https://github.com/peterhurford/ea-data/blob/master/data/2018/2018-ea-survey-anon-currencied-processed.csv%20for%202018. Enter https://github.com/peterhurford/ea-data/blob/master/data/2018/2018-ea-survey-anon-currencied-processed.csv instead and I believe you end up where you should end up.