Posts

A tale from Communist China 2020-10-20T23:10:57.854Z
Wei_Dai's Shortform 2019-12-11T21:07:57.056Z
How should large donors coordinate with small donors? 2019-01-08T22:47:56.661Z
Beyond Astronomical Waste 2018-12-27T09:27:26.728Z

Comments

Comment by Wei_Dai on How to engage with AI 4 Social Justice actors · 2022-04-26T16:03:14.749Z · EA · GW

Here are some of my previous thoughts (before these SJ-based critiques of EA were published) on connections between EA, social justice, and AI safety, as someone on the periphery of EA. (I have no official or unofficial role in any EA orgs, have met few EA people in person, etc.) I suspect many EA people are reluctant to speak candidly about SJ for fear of political/PR consequences.

Comment by Wei_Dai on Updating on Nuclear Power · 2022-04-26T02:23:48.272Z · EA · GW

It’s economically feasible to go all solar without firm generation, at least in places at the latitude of the US (further north it becomes impossible, you’d need to import power).

How much does this depend on the costs of solar+storage continuing to fall? (In one of your FB posts you wrote "Given 10-20 years and moderate progress on solar+storage I think it probably makes sense to use solar power for everything other than space heating".) I ask because I believe these prices have instead been going up since you wrote those posts. See this or this.

Covering 8% of the US or 30% of Japan (eventually 8-30% of all land on Earth?) with solar panels would take a huge amount of raw materials, and mining has obvious diseconomies at this kind of scale (costs increase as the lowest-cost mineral deposits are used up), so it seems premature to conclude "economically feasible" without some investigation into this aspect of the problem.

Comment by Wei_Dai on Where is the Social Justice in EA? · 2022-04-05T08:06:44.451Z · EA · GW

Taking the question literally, searching the term ‘social justice’ in EA forum reveals only 12 mentions, six within blog posts, and six comments, one full blog post supports it, three items even question its value, the remainder being neutral or unclear on value.

That can't be right. I think what may have happened is that when you do a search, the results page initially shows you only 6 each of posts and comments, and you have to click on "next" to see the rest. If I keep clicking next until I get to the last pages of posts and comments, I can count 86 blog posts and 158 comments that mention "social justice", as of now.

BTW I find it interesting that you used the phrase "even question its value", since "even" is "used to emphasize something surprising or extreme". I would consider questioning the values of things to be pretty much the core of the EA philosophy...

Comment by Wei_Dai on Modelling Great Power conflict as an existential risk factor · 2022-02-06T15:14:12.495Z · EA · GW

It seems to me that up to and including WW2, many wars were fought for economic/material reasons, e.g., gaining arable land and mineral deposits, but now, due to various changes, invading and occupying another country is almost certainly economically unfavorable (causing a net loss of resources) except in rare circumstances. Wars can still be fought for ideological ("spread democracy") and strategic ("control sea lanes, maintain buffer states") reasons (and probably others I'm not thinking of right now), but at least one big reason for war has mostly gone away for the foreseeable future?

Curious if you agree with this, and what you see as the major potential causes of war in the future.

Comment by Wei_Dai on Introducing a New Course on the Economics of AI · 2021-12-22T05:53:50.681Z · EA · GW

Not directly related to the course, but since you're an economist with an interest in AI, I'm curious what you think about "AGI will drastically increase economies of scale".

Comment by Wei_Dai on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-26T23:49:09.266Z · EA · GW

My own fantasy is that people will eventually be canceled for failing to display sufficient moral uncertainty. :)

Comment by Wei_Dai on Why AI alignment could be hard with modern deep learning · 2021-09-23T22:11:11.449Z · EA · GW

Sounds like their positions are not public, since you don't cite anyone by name? Is there any reason for that?

Comment by Wei_Dai on Why AI alignment could be hard with modern deep learning · 2021-09-23T09:19:47.504Z · EA · GW

There’s a very wide range of views on this question, from “misalignment risk is essentially made up and incoherent” to “humanity will almost certainly go extinct due to misaligned AI.” Most people’s arguments rely heavily on hard-to-articulate intuitions and assumptions.

My sense is that the disagreements are mostly driven "top-down" by general psychological biases/inclinations towards optimism vs pessimism, instead of "bottom-up" as the result of independent lower-level disagreements over specific intuitions and assumptions. The reason I think this is that there seems to be a strong correlation between concern about misalignment risk and concern about other kinds of AI risk (i.e., AI-related x-risk). In other words, if the disagreement were "bottom-up", you'd expect at least some people who are optimistic about misalignment risk to be pessimistic about other kinds of AI risk, such as what I call "human safety problems" (see examples here and here). But in fact I don't see anyone whose position is something like, "AI alignment will be easy or likely solved by default, therefore we should focus our efforts on these other kinds of AI-related x-risks that are much more worrying."

(From my limited observation, optimism/pessimism on AI risk also seems correlated with optimism/pessimism on other topics. It might be interesting to verify this through some systematic method like a survey of researchers.)
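
As a rough sketch of the kind of check such a survey would enable (all numbers and variable names below are hypothetical, for illustration only):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical survey responses (illustrative numbers only): each
# researcher gives a probability estimate for x-risk from misalignment
# and for other AI-related x-risks.
misalignment_risk = np.array([0.02, 0.05, 0.10, 0.30, 0.50, 0.70])
other_ai_risk = np.array([0.01, 0.04, 0.15, 0.25, 0.45, 0.60])

# A high rank correlation would be consistent with "top-down"
# optimism/pessimism driving both estimates; a low one would suggest
# independent "bottom-up" disagreements.
rho, p_value = spearmanr(misalignment_risk, other_ai_risk)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```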

Comment by Wei_Dai on In favor of more anthropics research · 2021-08-21T21:39:13.879Z · EA · GW

See this comment by Vladimir Slepnev and my response to it, which explain why I don't think UDT offers a full solution to anthropic reasoning.

Comment by Wei_Dai on AMA: Jason Brennan, author of "Against Democracy" and creator of a Georgetown course on EA · 2021-08-21T18:37:22.109Z · EA · GW

Do you have a place where you've addressed critiques of Against Democracy that have come out after it was published, like the ones in https://quillette.com/2020/03/22/against-democracy-a-review/ for example?

Comment by Wei_Dai on AMA: Jason Brennan, author of "Against Democracy" and creator of a Georgetown course on EA · 2021-08-21T17:37:49.867Z · EA · GW

Can you address these concerns about Open Borders?

  1. https://www.forbes.com/sites/modeledbehavior/2017/02/26/why-i-dont-support-open-borders

  2. Open borders is in some sense the default, and states had to explicitly decide to impose immigration controls. Why is it that every nation-state on Earth has decided to impose immigration controls? I suspect it may be through a process of cultural evolution in which states that failed to impose immigration controls ceased to exist. (See https://en.wikipedia.org/wiki/Second_Boer_War for one example that I happened to come across recently.) Do you have another explanation for this?

Comment by Wei_Dai on Towards a Weaker Longtermism · 2021-08-10T06:11:29.962Z · EA · GW

This is crazy, and I think it makes a lot more sense to just admit that part of you cares about galaxies and part of you cares about ice cream and say that neither of these parts are going to be suppressed and beaten down inside you.

Have you read "Is the potential astronomical waste in our universe too small to care about?", which asks: should these two parts of you make a (mutually beneficial) deal/bet while being uncertain of the size of (the reachable part of) the universe, such that the part of you that cares about galaxies gets more votes in a bigger universe, and vice versa? I have not been able to find a philosophically satisfactory answer to this question.

If you do, then one or the other part of you will end up with almost all of the votes when you find out for sure the actual size of the universe. If you don't, that seems intuitively wrong also, analogous to a group of people who don't take advantage of all possible benefits from trade. (Maybe you can even be Dutch booked, e.g. by someone making separate deals/bets with each part of you, although I haven't thought carefully about this.)
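
To make the proposed deal concrete, here is a minimal toy version (all numbers are illustrative assumptions, not from the original post). Let $p = 0.5$ be the probability that the universe is big, and suppose each part controls half the votes by default. Consider the deal: the galaxy-caring part gets 90% of the votes if the universe turns out big and 5% if it turns out small. The ice-cream part's values don't scale with universe size, so it just counts expected votes:

$$0.5 \times 0.10 + 0.5 \times 0.95 = 0.525 > 0.5.$$

The galaxy part weights votes by the value of control in each world, $V_{\text{big}} \gg V_{\text{small}}$, and gains whenever

$$0.5\,(0.9 - 0.5)\,V_{\text{big}} > 0.5\,(0.5 - 0.05)\,V_{\text{small}},$$

i.e., whenever $V_{\text{big}} > 1.125\,V_{\text{small}}$. So both parts can prefer the deal ex ante, even though ex post one part ends up with almost all of the votes.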

Comment by Wei_Dai on Draft report on existential risk from power-seeking AI · 2021-06-02T02:57:28.076Z · EA · GW

I’m focused, here, on a very specific type of worry. There are lots of other ways to be worried about AI -- and even, about existential catastrophes resulting from AI.

Can you talk about your estimate of the overall AI-related x-risk (see here for an attempt at a comprehensive list), as well as total x-risk from all sources? (If your overall AI-related x-risk is significantly higher than 5%, what do you think are the other main sources?) I think it would be a good idea for anyone discussing a specific type of x-risk to also give their more general estimates, for a few reasons:

  1. It's useful for the purpose of prioritizing between different types of x-risk.
  2. Quantification of specific risks can be sensitive to how one defines categories. For example one might push some kinds of risks out of "existential risk from misaligned AI" and into "AI-related x-risk in general" by defining the former in a narrow way, thereby reducing one's estimate of it. This would be less problematic (e.g., less likely to give the reader a false sense of security) if one also talked about more general risk estimates.
  3. Different people may be more or less optimistic in general, making it hard to compare absolute risk estimates between individuals. Relative risk levels suffer less from this problem.

Comment by Wei_Dai on Concerns with ACE's Recent Behavior · 2021-04-26T04:37:40.979Z · EA · GW

If there are lots of considerations that have to be weighed against each other, then it may easily be the case that we should decide things on a case-by-case basis, as sometimes the considerations might weigh in favor of downvoting someone for refusing to engage with criticism, and other times they weigh in the other direction. But this seems inconsistent with your original blanket statement, "I don’t think any person or group should be downvoted or otherwise shamed for not wanting to engage in any sort of online discussion".

About online versus offline, I'm confused why you think you'd be able to convey your model offline but not online, as the bandwidth difference between the two doesn't seem large enough that you could do one but not the other. Maybe it's not just the bandwidth but other differences between the two mediums, but I'm skeptical that offline/audio conversations are overall less biased than online/text conversations. If they each have their own biases, then it's not clear what it would mean if you could convince someone of some idea over one medium but not the other.

If the stakes were higher or I had a bunch of free time, I might try an offline/audio conversation with you anyway to see what happens, but it doesn't seem like a great use of our time at this point. (From your perspective, you might spend hours but at most convince one person, which would hardly make a dent if the goal is to change the Forum's norms. I feel like your best bet is still to write a post to make your case to a wider audience, perhaps putting in extra effort to overcome the bias against it if there really is one.)

I'm still pretty curious what experiences led you to think that online discussions are often terrible, if you want to just answer that. Also are there other ideas that you think are good but can't be spread through a text medium because of its inherent bias?

Comment by Wei_Dai on Concerns with ACE's Recent Behavior · 2021-04-24T23:19:09.240Z · EA · GW

(It seems that you're switching the topic from what your policy is exactly, which I'm still unclear on, to the model/motivation underlying your policy, which perhaps makes sense, as if I understood your model/motivation better perhaps I could regenerate the policy myself.)

I think I may just outright disagree with your model here, since it seems that you're not taking into account the significant positive externalities that a public argument can generate for the audience (in the form of more accurate beliefs, about the organizations involved and EA topics in general, similar to the motivation behind the DEBATE proposal for AI alignment).

Another crux may be your statement "Online discussions are very often terrible" in your original comment, which has not been my experience if we're talking about online discussions made in good faith in the rationalist/EA communities (and it seems like most people agree that the OP was written in good faith). I would be interested to hear what experiences led to your differing opinion.

But even when online discussions are "terrible", that can still generate valuable information for the audience, about the competence (e.g., reasoning abilities, PR skills) or lack thereof of the parties to the discussion, perhaps causing a downgrade of opinions about both parties.

Finally, even if your model is a good one in general, it's not clear that it's applicable to this specific situation. It doesn't seem like ACE is trying to "play private" as they have given no indication that they would be or would have been willing to discuss this issue in private with any critic. Instead it seems like they view time spent on engaging such critics as having very low value because they're extremely confident that their own conclusions are the right ones (or at least that's the public reason they're giving).

Comment by Wei_Dai on Concerns with ACE's Recent Behavior · 2021-04-24T20:42:40.823Z · EA · GW

Still pretty unclear about your policy. Why is ACE calling the OP "hostile" not considered "meta-level" and hence not updateable (according to your policy)? What if the org in question gave a more reasonable explanation of why they're not responding, but doesn't address the object-level criticism? Would you count that in their favor, compared to total silence, or compared to an unreasonable explanation? Are you making any subjective judgments here as to what to update on and what not to, or is there a mechanical policy you can write down (that anyone can follow and achieve the same results)?

Also, overall, is your policy intended to satisfy Conservation of Expected Evidence, or not?
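
(For reference, Conservation of Expected Evidence is the standard identity $P(H) = \sum_e P(E=e)\,P(H \mid E=e)$: your current credence must equal the expectation of your post-update credence. So a policy that updates downward on silence must, to be consistent, update upward by a compensating amount on a substantive response.)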

ETA: It looks like MIRI did give at least a short object-level reply to Paul's takeoff speed argument along with a meta-level explanation of why they haven't given a longer object-level reply. Would you agree to a norm that said that organizations have at least an obligation to give a reasonable meta-level explanation of why they're not responding to criticism on the object level, and silence or an unreasonable explanation on that level could be held against them?

Comment by Wei_Dai on Concerns with ACE's Recent Behavior · 2021-04-24T09:23:10.062Z · EA · GW

I would be curious to read more about your approach, perhaps in another venue. Some questions I have:

  1. Do you propose to apply this (not updating when an organization refuses to engage with public criticism) universally? For example would you really not have thought worse of MIRI (Singularity Institute at the time) if it had labeled Holden Karnofsky's public criticism "hostile" and refused to respond to it, citing that its time could be better spent elsewhere? If not, how do you decide when to apply this policy? If yes, how do you prevent bad actors from taking advantage of the norm to become immune to public criticism?
  2. Would you update in a positive direction if an organization does effectively respond to public criticism? If not, that seems extremely strange/counterintuitive, but if yes, I suspect that might lead to dynamic inconsistencies in one's decision making (although I haven't thought about this deeply).
  3. Do you update on the existence of the criticism itself, before knowing whether or how the organization has chosen to respond?

I guess in general I'm pretty confused about what your proposed policy or norm is, and would appreciate some kind of thought-out exposition.

Comment by Wei_Dai on Concerns with ACE's Recent Behavior · 2021-04-20T20:47:48.897Z · EA · GW

FWIW if I was in a position similar to ACE’s here are a few potential “compromises” I would have explored.

Inferring from the list you wrote, you seem to be under the impression that the speaker in question was going to deliver a talk at the conference, but according to Eric Herboso's top-level comment, "the facebook commenter in question would be on a panel talking about BLM". Also, the following sentence from ACE's Facebook post makes it sound like the only way ACE staff members would attend the conference was if the speaker would not be there at all, which I think rules out all of the compromise ideas you generated.

In fact, asking our staff to participate in an event where a person who had made such harmful statements would be in attendance, let alone presenting, would be a violation of our own anti-discrimination and anti-harassment policy.

Comment by Wei_Dai on Concerns with ACE's Recent Behavior · 2021-04-18T22:45:27.518Z · EA · GW

And suppose we did make introductory spaces "safe" for people who believe that certain types of speech are very harmful, but somehow managed to keep norms of open discussion in other more "advanced" spaces. How would those people feel when they find out that they can't participate in the more advanced spaces without the risk of paying a high subjective cost (i.e., encountering speech that they find intolerable)? Won't many of them think that the EA community has performed a bait-and-switch on them and potentially become hostile to EA? Have people who have proposed this type of solution actually thought things through?

I think it's important to make EA as welcoming as possible to all people, but not by compromising in the direction of safetyism, as I don't see any way that doesn't end up causing more harm than good in the long run.

Comment by Wei_Dai on Concerns with ACE's Recent Behavior · 2021-04-18T20:39:09.694Z · EA · GW

one of the ones I find most concerning are the University of California diversity statements

I'm not sure I understand what you mean here. Do you think other universities are not requiring diversity statements from job applicants, or that the University of California is especially "concerning" in how it uses them? If it's the latter, what do you think the University of California is doing that others aren't? If the former, see this article from two years ago, which states:

Many more institutions are asking her to submit a statement with her application about how her work would advance diversity, equity, and inclusion.

The requests have appeared on advertisements for jobs at all kinds of colleges, from the largest research institutions to small teaching-focused campuses

(And it seems a safe bet that the trend has continued. See this search result for a quick sense of what universities currently have formal rubrics for evaluating diversity statements. I also checked a random open position (for a chemistry professor) at a university that didn't show up in these results and found that it also requires a diversity statement: "Applicants should state in their cover letter how their teaching, research, service and/or life experiences have prepared them to advance Dartmouth’s commitment to diversity, equity and inclusion.")

Another reason I think academia has been taken over by cancel culture is that I've read many news stories, blog posts, and the like about cancel culture in academia, and often scan their comment sections for contrary opinions. I have yet to see anyone chime in to say that they're an academic and cancel culture doesn't exist at their institution (which I'd expect to see if it weren't actually widespread), aside from some saying that it doesn't exist as a way of defending it (i.e., that what's happening is just people facing reasonable consequences for their speech acts and doesn't count as cancel culture). I also tried to Google "cancel culture isn't widespread in academia" in case someone wrote an article arguing that, but all the top relevant results are articles arguing that cancel culture is widespread in academia.

Curious if you have any evidence to the contrary, or just thought that I was making too strong a claim without backing it up myself.

Comment by Wei_Dai on Concerns with ACE's Recent Behavior · 2021-04-18T16:01:20.941Z · EA · GW

Can you explain more about this part of ACE's public statement about withdrawing from the conference:

We took the initiative to contact CARE’s organizers to discuss our concern, exchanging many thoughtful messages and making significant attempts to find a compromise.

If ACE was not trying to deplatform the speaker in question, what were these messages about and what kind of compromise were you trying to reach with CARE?

Comment by Wei_Dai on Concerns with ACE's Recent Behavior · 2021-04-18T14:13:29.291Z · EA · GW

but the main EAA Facebook group does not seem like an appropriate place to have them, since it’s one of the first places people get exposed to EAA.

I might agree with you if doing this had no further consequences beyond what you've written, but... quoting an earlier comment of mine:

You know, this makes me think I know just how academia was taken over by cancel culture. They must have allowed “introductory spaces” like undergrad classes to become “safe spaces”, thinking they could continue serious open discussion in seminar rooms and journals, then those undergrads became graduate students and professors and demanded “safe spaces” everywhere they went. And how is anyone supposed to argue against “safety”, especially once its importance has been institutionalized (i.e., departments were built in part to enforce “safe spaces”, which can then easily extend their power beyond “introductory spaces”).

Comment by Wei_Dai on Concerns with ACE's Recent Behavior · 2021-04-18T13:21:38.723Z · EA · GW

It makes sense that what is most important, powerful, or influential in national politics is still highly correlated with what most people in our society sincerely believe, due to secret-ballot voting and the national scope. But in many other arenas, some arguably more important than current national politics (e.g., because they play an outsized role in the economy or in determining what future generations will believe), I think local concentration of true believers and preference falsification have caused a divergence between the two senses of "dominant".

Comment by Wei_Dai on Concerns with ACE's Recent Behavior · 2021-04-17T07:26:17.703Z · EA · GW

Maybe we're just using the word "dominant" in different ways? I meant it in the sense of "most important, powerful, or influential", and not something like "sincerely believed by the majority of people" which may be what you have in mind? (I don't believe the latter is true yet.)

Comment by Wei_Dai on Concerns with ACE's Recent Behavior · 2021-04-17T04:52:34.718Z · EA · GW

I’m sceptical that further content in this vein will have the desired effect on EA and EA-adjacent groups and individuals who are less active on the Forum, other than to alienate them and promote a split in the movement, while also exposing EA to substantial PR risk.

I've refrained from making certain posts/comments on EAF in part for these reasons. I think in the long run these outcomes will be very hard to avoid, given the vastly different epistemic approaches between the two sides, and e.g., "silence is violence", but it could be that in the short/medium term it's really important for EA to not become a major "public enemy" of the dominant ideology of our times.

ETA: If anyone disagrees with my long-run prediction (and it's not because something happens that makes the issue moot, like AIs take over), I'd be interested to read a story/scenario in which these outcomes are avoided.

Comment by Wei_Dai on Progress Open Thread: March 2021 · 2021-03-23T20:31:22.703Z · EA · GW

Thanks for this explanation. That part of Habryka's comment also struck me as very suspicious when I read it, but it wasn't immediately obvious what's wrong with it exactly.

Comment by Wei_Dai on Progress Open Thread: March 2021 · 2021-03-23T04:58:45.052Z · EA · GW

I came across an article titled "Improving Long-Term Outcomes in Adolescent Adoption" which seems potentially very useful for you, both before and after adoption. (You may have already found it through your own research, but just in case...)

Comment by Wei_Dai on Please stand with the Asian diaspora · 2021-03-22T14:44:27.378Z · EA · GW

I think you are strongly focused on a part of the conversation that is of particular importance to you (something along the lines of whether people who are not motivated or skilled at expressing sympathy will be welcome here), while Jacob is mostly focused on other aspects.

This seems clearly true to me, but I don't see how it explains the things that I'm puzzled by. I will stop here as well, as my previous comment answering your question was downvoted to negative karma, perhaps indicating that such discussion (or my specific way of discussing it) is not appropriate for this forum.

Comment by Wei_Dai on Please stand with the Asian diaspora · 2021-03-22T07:02:40.770Z · EA · GW

To the extent that this story causes an emotional toll on readers, we should partly blame the media here for trying to fit a racial narrative to an event where it is not at all clear it fits.

I just came across a great article by Andrew Sullivan about this. His conclusions:

And so it seems to me that the media’s primary role in cases like these is providing some data and perspective on what’s actually happening, to allay irrational fear. Instead they contribute to the distortion by breathlessly hyping one incident without a single provable link to any of this — and scare the bejeezus out of people unnecessarily.

The media is supposed to subject easy, convenient rush-to-judgment narratives to ruthless empirical testing. Now, for purely ideological reasons, they are rushing to promote ready-made narratives, which actually point away from the empirical facts. To run sixteen separate pieces on anti-Asian white supremacist misogynist hate based on one possibly completely unrelated incident is not journalism. It’s fanning irrational fear in the cause of ideological indoctrination. And it appears to be where all elite media is headed.

Comment by Wei_Dai on Please stand with the Asian diaspora · 2021-03-22T06:28:51.615Z · EA · GW

What do you believe needs explaining?

The series of seemingly elementary errors in Jacob's recent comments, which were puzzling to me given his obviously high level of reasoning abilities. I tried to point them out in my earlier comments and don't want to repeat them all again, but one example is his insistent defense/support of Khorton's downvote based on his own very mild interpretation of what a downvote means, when it seems clear that what matters more in judging the consequences and appropriateness of the downvote is how Khorton, Dale, and most other EAF participants are likely to understand it, and his then ignoring my arguments and evidence around this after I pointed them out to him.

Comment by Wei_Dai on Please stand with the Asian diaspora · 2021-03-21T17:21:00.166Z · EA · GW

I think your characterization of my thought process is completely false for what it’s worth. I went out of my way multiple times to say that I was not expressing disapproval of Dale’s comment.

That's certainly better news than the alternative, but I hope you find it understandable that I don't update to 100% believing your claim, given that you may not have full introspective access to all of your own cognitive processes, and what appears to me to be a series of anomalies that is otherwise hard to explain. But I'm certainly willing to grant this for the purposes of further discussion.

Edit: Maybe it’s helpful for me to clarify that I think it’s both good for Dale to write his comment, and for Khorton to write hers.

It's helpful and confusing at the same time. If you think it was good for Dale to write his comment, then the existence of Khorton's downvote and her highly upvoted (at the time) comment giving a very short explanation of the downvote serves as a clear discouragement to Dale and others from writing similar comments in the future (given what a downvote means to most EAF participants and what Khorton usually means to convey by a downvote, according to her own words). Perhaps you actually mean something like either of the following?

  1. It would have been good if Khorton just suggested that Dale be more sympathetic without the downvoting.
  2. It would have been good for Khorton to write her comment if Dale and others interpreted Khorton's downvote and comment the way you interpreted it (i.e., as merely a suggestion to do better next time, as opposed to a judgment that the overall merit of the comment isn't high enough for it to belong on the forum).

Comment by Wei_Dai on Please stand with the Asian diaspora · 2021-03-21T14:24:02.481Z · EA · GW

On further reflection, I think ultimately all this back and forth is dancing around the question of whether, if some group of people think they're being victimized or deliberately targeted for hatred, it's ok to say that maybe they're not being targeted as much as they think they are. I could be wrong, but my guess is that given today's overall political environment, your social-emotional intelligence is telling you that it's not ok to say that, and making you feel an aversion to a comment like Dale's, which does in effect say that. But consciously or unconsciously you feel like you can't say this explicitly either (it's a norm that loses much of its power if stated explicitly, and is also contrary to the spirit of EA), so you and others in a similar position end up rationalizing various "sayable" criticisms of Dale's comment that (since they're just rationalizations and not the real underlying reasons) don't really stand up on examination.

Comment by Wei_Dai on Is Democracy a Fad? · 2021-03-21T13:33:35.305Z · EA · GW

Changes will keep coming — and some of these changes might push states back toward dictatorship. I feel especially nervous about the long-run impact of automation.

This section doesn't explain why automation will lead to a dictatorship, as opposed to something like an aristocracy or plutocracy, i.e., rule by a relatively large elite who own or control the automation. My LessWrong / Alignment Forum post "AGI will drastically increase economies of scale" can perhaps help to close this gap.

Judging by some of your footnotes, it seems that you're mainly motivated by the question of whether there will be a Long Reflection that's democratic or highly inclusive. I have recently become more pessimistic about such a Long Reflection myself, and would be interested if you have any thoughts on my concerns.

Comment by Wei_Dai on Please stand with the Asian diaspora · 2021-03-21T12:29:04.836Z · EA · GW

I think my position is compatible with xccf's. Some people (like xccf) may choose to try to alleviate negative feelings by expressing sympathy, "standing with", or otherwise providing social/emotional support, while others (like Dale) may choose to do so by pointing out why some of the feelings may be excessive given the actual facts. Both of these seem reasonable strategies to me. IRL perhaps you could decide to deploy one or the other (or both) based in part on which is likely to work better on the particular person you're talking to, but that's not possible in a public forum, in which case it seems reasonable to welcome/encourage both types of commenters.

Comment by Wei_Dai on Please stand with the Asian diaspora · 2021-03-21T11:47:42.869Z · EA · GW

The OP asked people to support Asian community members who were upset, while at least the last paragraph of Dale’s post seemed to assume that OP was arguing that we should be searching for ways to reduce violence against Asians.

It seems totally reasonable to interpret the OP as arguing for the latter as well as the former:

  1. The title of the post references "the Asian diaspora" instead of just "Asian community members"
  2. The OP also wrote "As a community, we should stand against the intolerance and unnecessary suffering caused by these hate crimes" and a reasonable interpretation of this is to oppose the intolerance and suffering in concrete ways, not just performatively in front of other community members. For example, when someone writes "Biden and Harris visit Atlanta after shooting rampage, vowing to stand against racism and xenophobia" is the reader not supposed to infer that Biden and Harris will try to do some concrete things against racism and xenophobia?
  3. If Dale was trying to interpret the OP as charitably as possible, is it really more charitable to interpret it as not arguing that we should be searching for ways to reduce violence against Asians? It seems like you yourself interpret it that way; otherwise, why did you respond by asking for recommendations for organizations to support?

but the larger context is a significant increase in violent incidents against Asians

Dale actually did also address the larger context/trend, in the paragraph starting with "Despite the lack of good data, I suspect that it is indeed the case that anti-asian crimes have risen significantly this year."

The best would be “try harder to think from the perspective of the listener”, but this is of course very difficult especially when there is a large gap in experience between the speaker and the listener. If I were trying super-hard I would run the post by an Asian friend to see if they felt like it engaged with the key arguments, but I think it would be unreasonable to expect, or expend, that level of effort for forum comments.

This paragraph seems to have worse "communication mistakes" than anything I can see in Dale's comment, at least if the listener is someone like myself. (I'll avoid explaining more explicitly unless you want me to, for the same reason you mentioned.)

Comment by Wei_Dai on Please stand with the Asian diaspora · 2021-03-21T06:16:55.301Z · EA · GW

I have been assuming that EAF follows the same norm as LW with regard to downvotes, namely that it means "I’d like to see fewer comments like this one." Just in case EAF follows a different norm, I did a quick search and happened across a comment by Khorton herself (which was highly upvoted, so I think it's likely representative of the typical understanding of downvotes on EAF):

In order of frequency:

- I strong downvote spam (weekly)

- I downvote people for antisocial behaviour, like name calling (monthly)

- I sometimes downvote comments that are obviously unhelpful or wrong (I’ll usually explain why, if no one else has) (every couple of months)

- I occasionally downvote posts if I don’t think they’re the type of thing that should be on the Forum (for example, they’re very poorly written, very incorrect, or offensive) (a couple times a year)

So it seems basically the same, i.e., a downvote means that on net the voter would prefer not to see a comment like it on the forum. Given that some people may not be very good at expressing sympathy, or very motivated to do so, in connection with stating an alternative hypothesis, this seems equivalent to saying that she would prefer such people not post such alternative hypotheses on the forum.

And if Dale totally ignores this advice, the penalty is… mild social disapproval from Khorton and lots of upvotes from other people, as far as I can tell.

Sure, this seems to be the current norm, but as Khorton's comment had garnered substantial upvotes before I disagreed with it (I don't remember exactly but I think it was comparable to Dale's initial comment at that point), I was worried about her convincing others to her position and thereby moving the forum towards a new norm.

Anyway, I do agree with "I think it’s good for people to point out ways that criticism can be phrased more sympathetically" and would have no objections to any comments along those lines. I note, however, that this is not what Khorton did, and her comment in fact did not point out any specific ways that Dale's comment could have been phrased more sympathetically.

Comment by Wei_Dai on Please stand with the Asian diaspora · 2021-03-20T21:35:59.455Z · EA · GW

Thanks for clarifying your position. I think in that case, my remaining disagreement with you is that I think stating an alternative hypothesis (along with supporting evidence) is a good thing in and of itself and should not be discouraged or met with social disapproval just because its writer did not do so with sufficient sympathy. Different people have different levels of skill and/or motivation for expressing sympathy, and should all be encouraged to participate on EAF as long as their comments have sufficient merit along other dimensions.

Comment by Wei_Dai on Please stand with the Asian diaspora · 2021-03-20T19:32:26.429Z · EA · GW

I upvoted Dale's comment instead, because if one reason "this has been a difficult week for Asian Americans" is a wrong or overconfident belief that the Atlanta shooting was mainly motivated by anti-Asian bias or hatred, then pointing that out can be better than merely expressing sympathy (which others have already done), and certainly doesn't constitute an "unsympathetic" response. Of course doing this may not be a good idea in all circumstances and for all audiences, but if bringing up alternative hypotheses and evidence to support them can't even be done on EAF without risking social disapproval, I think something has gone seriously wrong.

Comment by Wei_Dai on Please stand with the Asian diaspora · 2021-03-20T06:43:02.725Z · EA · GW

What's the argument for supporting organizations in this cause area? If you're just trying to purchase fuzzies for yourself or other community members, that seems fine, but it's hard for me to see it making sense to prioritize anti-Asian violence as a cause area by the usual EA metrics.

But maybe there are other related causes that are more promising from an EA perspective, like lowering US-China tensions, or otherwise reducing the risks of a US-China war...

Comment by Wei_Dai on I'm Cullen O'Keefe, a Policy Researcher at OpenAI, AMA · 2020-11-14T09:35:07.189Z · EA · GW

Example of institutions being taken over by cancel culture and driving out their founders:

Like Andrew Sullivan, who joined Substack after parting ways with New York magazine, and Glenn Greenwald, who joined Substack after resigning from The Intercept, which he co-founded, Yglesias felt that he could no longer speak his mind without riling his colleagues. His managers wanted him to maintain a “restrained, institutional, statesmanlike voice,” he told me in a phone interview, in part because he was a co-founder of Vox. But as a relative moderate at the publication, he felt at times that it was important to challenge what he called the “dominant sensibility” in the “young-college-graduate bubble” that now sets the tone at many digital-media organizations.

Comment by Wei_Dai on Some thoughts on the EA Munich // Robin Hanson incident · 2020-10-17T19:51:03.101Z · EA · GW

I think your earlier comments make sense from the perspective of trying to convince other folks here to think about these issues and I didn’t intend for the grandparent to be pushing against that.

I think this is the crux of the issue. We have a recurring pattern: I interpret your comments (here, and with various AI safety problems) as downplaying some problem that I think is important, or as likely to have that effect in other people's minds and thereby make them less likely to work on the problem, so I push back on that; but maybe you were just trying to explain why you don't want to work on it personally, and you interpret my pushback as trying to get you to work on the problem personally, which is not my intention.

I think from my perspective the ideal solution would be if in a similar future situation, you could make it clearer from the start that you do think it's an important problem that more people should work on. So instead of "and lots of people talk about it already" which seems to suggest that enough people are working on it already, something like "I think this is a serious problem that I wish more people would work on or think about, even though my own comparative advantage probably lies elsewhere."

Curious how things look from your perspective, or a third party perspective.

Comment by Wei_Dai on Some thoughts on the EA Munich // Robin Hanson incident · 2020-10-17T13:43:37.878Z · EA · GW

To followup on this, Paul and I had an offline conversation about this, but it kind of petered out before reaching a conclusion. I don't recall all that was said, but I think a large part of my argument was that "jumping ship" or being forced off for ideological reasons was not "fine" when it happened historically, for example communists from Hollywood and conservatives from academia, but represented disasters (i.e., very large losses of influence and resources) for those causes. I'm not sure if this changed Paul's mind.

Comment by Wei_Dai on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-17T07:25:03.016Z · EA · GW

It's based on how I expect some people in the EA community to react (they would be less likely to consider me in a positive light, take my ideas seriously, be willing to lend me their cooperation when I need it, hire me, etc.), and also on the fact that I live in a very left-leaning area (as most EAs probably do) where being (or suspected of being) a Trump supporter can easily make someone socially ostracized, which would impact not just me but my family. And yes, I also expect and fear that my views will be tracked down, perhaps deliberately misinterpreted, and used against me, by someone who might hold a grudge against me in the future, or just think that's a good way to get what they want, e.g., in a policy dispute.

If you're still skeptical that people are reluctant or afraid to speak positively about Trump or Republicans in general, have you noticed that nobody has pushed back against the recent Democrat-promoting posts here on object-level grounds? I've seen the same on FB posts of prominent EA people promoting voting for Democrats, where every comment is some flavor of support. Can it really be that out of thousands of forum users and FB friends/followers, there is not one Trump or Republican supporter who might object to voting for Democrats on object-level grounds, or perhaps just someone who thinks that the authors are overstating their case for object-level reasons?

Comment by Wei_Dai on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-16T07:12:48.145Z · EA · GW

I urge those who are concerned about cancel culture to think more strategically. For instance, why has cancel culture taken over almost all intellectual and cultural institutions? What can EA do to fight it that those other institutions couldn't do, or didn't think of? Although I upvoted this post for trying to fight the good fight, I really doubt that what it suggests is going to be enough in the long run.

Although the post includes a section titled "The Nature of Cancel Culture", it seems silent on the social/political dynamics driving cancel culture's quick and widespread adoption. To make an analogy, it's like trying to defend a group of people against an infectious disease that has already become a pandemic among the wider society, without understanding its mechanism of infection, and hoping to make do with just common sense hygiene.

In one particularly striking example, I came across this article about a former head of the ACLU. It talks about how the ACLU has been retreating from its free speech principles, and includes this sentence:

But the ACLU has also waded into partisan political issues, at precisely the same time as it was retreating on First Amendment issues.

Does it not seem like EA is going down the same path, and for probably similar reasons? If even the ACLU couldn't resist the pull of contemporary leftist ideology and its attending abandonment of free speech, why do you think EA could, absent some truly creative and strategic thinking?

(To be clear, I don't have much confidence that sufficiently effective strategic ideas for defending EA against cancel culture actually exist or can be found by ordinary human minds in time to make a difference. But I see even less hope if no one tries.)

Comment by Wei_Dai on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-15T21:57:26.738Z · EA · GW

Sorry, I did not mean to imply that someone who just wrote a whole post about opposing Trump's reelection will get into trouble for saying a few positive things about him. Should I have been more clear about that? I thought it would be obvious that the risk is in being taken as a Trump supporter, or creating doubt in others' minds that one might be a Trump supporter. Or do you think a healthy debate about whether EA should oppose Trump's reelection can be had that excludes every potential participant except those who are clearly at no such risk?

Comment by Wei_Dai on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-15T18:15:54.382Z · EA · GW

but I think that the object-level case for engaging in the US election to get Donald Trump out of office is sufficiently strong that it – at the very, very least – deserves to be heard and discussed

Unfortunately I don't think "discussed" is possible in today's environment, due to reasons I wrote at ea.greaterwrong.com/posts/68TmDK6MrjfJgvA7p/introducing-landslide-coalition-an-ea-themed-giving-circle. For example I'm personally afraid to say anything that could be interpreted as being positive about Trump in public (or even in private), and I'm probably within the top percentile of EAs in terms of being less vulnerable to cancellation.

Comment by Wei_Dai on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T17:14:26.418Z · EA · GW

See also https://www.lesswrong.com/posts/2LtJ7xpxDS9Gu5NYq/open-and-welcome-thread-october-2020?commentId=YrRcRxNiJupZjfgnc

ETA: In case it's not clear, my point is that there's also an additional chilling effect from even smaller but more extreme tail risks.

Comment by Wei_Dai on When does it make sense to support/oppose political candidates on EA grounds? · 2020-10-15T15:39:28.942Z · EA · GW

I happen to believe this is misguided, but first I want to point out the irony in believing that politicization makes a movement less effective and yet fearing the awesome power of the social justice warriors.

Something can be "less effective" and "powerful" at the same time, if the power is misapplied. I find it very surprising and dispiriting that this needs to be explicitly pointed out, in a place like this.

I also stand by my previous comments, which are now hidden on EA Forum, but can still be viewed at ea.greaterwrong.com/posts/68TmDK6MrjfJgvA7p/introducing-landslide-coalition-an-ea-themed-giving-circle.

Comment by Wei_Dai on [deleted post] 2020-10-12T03:38:04.277Z

  1. In general, partisan politics is far from neglected and therefore unlikely to be the most effective use of altruistic resources.
  2. Partisan politics is very tempting for people to engage in, due to basic human nature, hence the risk of a slippery slope.
  3. It's very hard to avoid bias when thinking/talking about partisan politics, both as individuals and as a community. For example, in many social circles, defending Trump on any aspect can cause someone to be branded as a racist, to be shunned, even to lose their livelihood (or at least to lose social status/prestige). A community that is considered insufficiently opposed to Trump can come to be seen as "toxic" and shunned by other communities that it has to interact with. Under these circumstances, open and reasoned debate becomes impossible, and one can easily come to believe that "EA and partisan values happen to be in alignment" to a much higher degree than is actually the case.

Comment by Wei_Dai on [deleted post] 2020-10-12T01:13:49.470Z

I want to push back against this kind of explicit engagement in partisan politics, but I feel like that's probably a losing battle while Trump is around. Can we at least have a consensus and commitment that we go back to the previous norm after this election, to prevent a slippery slope where engaging in partisan politics becomes increasingly acceptable in EA?