Posts

[Linkpost] New Oxford Malaria Vaccine Shows ~75% Efficacy in Initial Trial with Infants 2021-04-23T23:50:20.545Z
Some EA Forum Posts I'd like to write 2021-02-23T05:27:26.992Z
RP Work Trial Output: How to Prioritize Anti-Aging Prioritization - A Light Investigation 2021-01-12T22:51:31.802Z
Some learnings I had from forecasting in 2020 2020-10-03T19:21:40.176Z
How can good generalist judgment be differentiated from skill at forecasting? 2020-08-21T23:13:12.132Z
What are some low-information priors that you find practically useful for thinking about the world? 2020-08-07T04:38:07.384Z
David Manheim: A Personal (Interim) COVID-19 Postmortem 2020-07-01T06:05:59.945Z
I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA 2020-06-30T19:35:13.376Z
Are there historical examples of excess panic during pandemics killing a lot of people? 2020-05-27T17:00:29.943Z
[Open Thread] What virtual events are you hosting that you'd like to open to the EA Forum-reading public? 2020-04-07T01:49:05.770Z
Should recent events make us more or less concerned about biorisk? 2020-03-19T00:00:57.476Z
Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar? 2020-03-12T21:19:19.565Z
All Bay Area EA events will be postponed until further notice 2020-03-06T03:19:24.587Z
Are there good EA projects for helping with COVID-19? 2020-03-03T23:55:59.259Z
How can EA local groups reduce likelihood of our members getting COVID-19 or other infectious diseases? 2020-02-26T16:16:49.234Z
What types of content creation would be useful for local/university groups, if anything? 2020-02-15T21:52:00.803Z
How much will local/university groups benefit from targeted EA content creation? 2020-02-15T21:46:49.090Z
Should EAs be more welcoming to thoughtful and aligned Republicans? 2020-01-20T02:28:12.943Z
Is learning about EA concepts in detail useful to the typical EA? 2020-01-16T07:37:30.348Z
8 things I believe about climate change 2019-12-28T03:02:33.035Z
Is there a clear writeup summarizing the arguments for why deep ecology is wrong? 2019-10-25T07:53:27.802Z
Linch's Shortform 2019-09-19T00:28:40.280Z
The Possibility of an Ongoing Moral Catastrophe (Summary) 2019-08-02T21:55:57.827Z
Outcome of GWWC Outreach Experiment 2017-02-09T02:44:42.224Z
Proposal for a Pre-registered Experiment in EA Outreach 2017-01-08T10:19:09.644Z
Tentative Summary of the Giving What We Can Pledge Event 2015/2016 2016-01-19T00:50:58.305Z
The Bystander 2016-01-10T20:16:47.673Z

Comments

Comment by Linch on Is SARS-CoV-2 a modern Greek Tragedy? · 2021-05-10T19:31:20.505Z · EA · GW

Relevant external predictions:

Bayesian analysis on Rootclaim

Metaculus questions:

Will it turn out that Covid-19 originated inside a research lab in Hubei? (Note that the resolution criteria here are weird, so I'd trust the final probabilities less than the reasoning/links people gave.)

Credible claim by 2024 that COVID-19 likely originated in a lab?

Comment by Linch on Ending The War on Drugs - A New Cause For Effective Altruists? · 2021-05-10T19:22:46.962Z · EA · GW

If you're thinking as a citizen, this question isn't so relevant: it doesn't really cost you anything to support this policy change, talk to your friends about it, etc., and it's not as if supporting this would take public money from anything else you might value - indeed, it's a revenue raiser. I leave it open how valuable it is to spend extra 'citizen time' on this vs some other policy. Drug policy reform could be one item in a potential basket of 'no-cost' policies an effective altruist might support alongside, say, improved animal welfare. (I previously posted about a policy platform back in 2019, but nothing much happened.)

This assumes (possibly correctly, but it should still be noted) that supporting policy changes, talking to friends about them, etc. is indeed close to free. I think there are some reasons to think otherwise (eg time spent supporting specific policy changes could be spent supporting other policy changes, and time spent in an "activist mindset" about policy X when talking with friends trades off both against being activisty about policy Y and against time spent in more exploratory modes of thinking, etc).

Comment by Linch on Should you do a PhD in science? · 2021-05-08T23:27:15.466Z · EA · GW

For the very specific point made at the beginning, I don't think most scientific fields are pyramid schemes, unless by pyramid scheme you mean anything where there are a lot more people at the bottom than at the top and the top is more desirable than the bottom. (Likewise, I don't think large companies like Google are pyramid schemes, unless you really stretch the term.)

Words are made by man, but I guess (removing normative judgments) my understanding of a pyramid scheme is a situation where a) many people at the bottom want to be at the top, b) the success of the people at the top is a direct result of contributions from people at the bottom, and c) most of the incentive for people at the bottom to do the work is the prospect of excess returns at the top.

In that regard, legal pyramid schemes include poker, some arts, and some humanities, as well as (more archetypally) multilevel marketing. I can also sort of see a case for professional e-sports/gaming, though I think the case is much weaker than for poker.

In contrast, undergrad education in STEM (eg chemistry or CS) and some of the social sciences is both directly useful and relevant signaling that prepares people for non-academic jobs. Depending on the subfield, this is often true for PhDs as well, though the case for postdocs might be weaker.

Comment by Linch on Are global pandemics going to be more likely or less likely over the next 100 years? · 2021-05-07T08:14:07.053Z · EA · GW

Possibly relevant prior question.

Comment by Linch on Linch's Shortform · 2021-05-05T18:33:23.942Z · EA · GW

Yep, aware of this! Solid post.

Comment by Linch on Linch's Shortform · 2021-05-04T22:46:30.025Z · EA · GW

Are there any EAA researchers carefully tracking the potential of huge cost-effectiveness gains in the ag industry from advances in genetic engineering of factory-farmed animals? Or (less plausibly) advances from better knowledge/practice/lore in classical artificial selection? As someone pretty far away from the field, a priori the massive gains made in biology/genetics in the last few decades seem like something we plausibly have not priced in. So it'd be sad if EAAs get blindsided by animal meat becoming a lot cheaper in the next few decades (if this is indeed viable, which it may not be).

Comment by Linch on Ben Garfinkel's Shortform · 2021-05-04T04:17:04.233Z · EA · GW

I also agree value drift hasn't historically driven long-run social change
 

My impression is that the difference in historical vegetarianism rates between India and China, and especially between India and southern China (where there is greater similarity of climate and crops used), is a moderate counterpoint. At the timescale of centuries, vegetarianism rates in India have been much higher than rates in China. Since factory farming is plausibly one of the larger sources of human-caused suffering today, the differences aren't exactly a rounding error.

Comment by Linch on The State of the World — and Why Monkeys are Smarter than You · 2021-05-03T16:50:54.274Z · EA · GW

I too got 11/13 (wrong on Q6 and Q11, because I expected a lot of really old people and thought that black rhinos were more endangered).

Comment by Linch on Linch's Shortform · 2021-05-03T15:55:30.126Z · EA · GW

Something that came up in a discussion with a coworker recently is that internet writers often want some (thoughtful) comments, but not too many, since too many comments can be overwhelming. Or at the very least, the marginal value of additional comments is usually lower for authors when there are more comments.

However, the incentives for commenters are very different: by default people want to comment on the most exciting/cool/wrong thing, so internet posts can easily attract either many comments or none. Very little self-policing is done (I think); if anything, a post with many comments makes it more attractive to generate secondary or tertiary comments, rather than less.

Meanwhile, internet writers who do great work often do not get the desired feedback. As evidence: for ~a month, I was the only person who commented on What Helped the Voiceless? Historical Case Studies (which later won the EA Forum Prize).

This would be less of a problem if internet communication were primarily about idle speculation and cat pictures. But of course that is not the primary way I and many others on the Forum engage with the internet. Frequently, the primary publication venue for some subset of EA research and thought is this form of semi-informal internet communication.

Academia gets around this problem by having mandated peer review for work, with a minimum number of peer reviewers who are obligated to read (or at least skim) anything that passes through an initial filter. 

It is unclear how (or perhaps if?) we ought to replicate something similar in a voluntary internet community.

On a personal level, the main consequence of this realization is that I intend to become slightly more inclined to comment on posts without many prior comments, and correspondingly less inclined to comment on posts with many comments (especially if the karma:comment ratio is low).

On a group level, I'm unsure what behavior we ought to incentivize, and/or how to get there. Perhaps this is an open question that others can answer?
 

Comment by Linch on Consciousness research as a cause? [asking for advice] · 2021-05-02T07:38:51.035Z · EA · GW

Your link redirects to something other than what was presumably intended.

Comment by Linch on What are your favorite examples of moral heroism/altruism in movies and books? · 2021-05-02T06:52:23.412Z · EA · GW

I watched the documentary today with roommates on your recommendation and enjoyed it! 

Comment by Linch on Consciousness research as a cause? [asking for advice] · 2021-05-02T00:53:14.397Z · EA · GW

Right, I guess the higher-level thing I'm getting at is that while introspective access is arguably the best tool we have to access subjective experience in ourselves right now, and stated experience is arguably the best tool we have to see it in others (well, at least in other humans), we shouldn't treat stated experiences as identical to subjective experience.

To go with the perception/UFO example: if someone (who believes themself to be truthful) reports seeing a UFO, and it later turns out that they "saw" a UFO because their friend pulled a prank on them, or because of an optical illusion, then I feel relatively comfortable saying that they actually had the subjective experience of seeing a UFO. So while external reality did not actually contain a UFO, this was an accurate qualia report.

In contrast, if their memory later undergoes falsification, and they misremember seeing a bird (which at the time they believed to be a bird) as seeing a UFO, then they only had the subjective experience of remembering seeing a UFO, not the actual subjective experience of seeing a UFO.
 
Some other examples:

1. If I were to undergo surgery, I would pay more money for a painkiller that numbs my present experience of pain than I would pay for a painkiller that removes my memory of pain (and associated trauma etc), though I would pay nonzero dollars for the latter. This is because my memory of pain is an experience of an experience, not identical with the original experience itself.

2. Many children with congenital anosmia (being born without a sense of smell) act as if they have a sense of smell until tested. While I think it's reasonable to say that they have some smell-adjacent qualia/subjective experiences, I'd be surprised if they hallucinated qualia identical to the experiences of people with a sense of smell, and it would be inaccurate to say that their subjective experience of smell is the same as that of people who have the objective ability to smell.

Comment by Linch on Consciousness research as a cause? [asking for advice] · 2021-05-01T18:33:55.638Z · EA · GW

Let me put it a different way. Suppose we simulate Bob's experiences on a computer. From a utilitarian lens, if you can run Bob on a computational substrate that goes 100x faster, there's a strong theoretical case that FastBob is 100x as valuable per minute run (or 100x as disvaluable, if Bob is suffering). But if you trick simulated Bob into thinking that he's running 100x faster (or if you otherwise distort the output channel so that it lies to you about the speed), then it's a much harder case to argue that FakeFastBob is indeed 100x faster/more valuable.

Comment by Linch on Consciousness research as a cause? [asking for advice] · 2021-04-30T20:31:56.735Z · EA · GW

(I have not read the report in question)

There are some examples of situations/interventions where I'm reasonably confident that the intervention changes qualia reports more than it changes qualia. 

The first that jumps to mind is meditation: in the relatively small number of studies I've seen, meditation dramatically changes how people think they perceive time (time feels slower, a minute feels longer, etc), but without noticeable effects on things like reaction speed, cognitive processing of various tasks, etc. 

This to me is moderate evidence that the subjective experience of the subjective experience of time has changed, but not (or at least not as much) the actual subjective experience of time.

Anecdotally, I hear similar reports for recreational drug use (time feels slower, but reaction speed doesn't go up... if anything it goes down).

This is relevant to altruists because (under many consequentialist ethical theories) extending the subjective experience of time for pleasurable experiences seems like a clear win, but the case for extending the subjective experience of the subjective experience of time is much weaker.

Comment by Linch on What are your favorite examples of moral heroism/altruism in movies and books? · 2021-04-26T17:18:32.467Z · EA · GW

See prior examples in this question.

Comment by Linch on What are your questions for World Malaria Day with Rob Mather (AMF), Maddy Marasciulo (Malaria Consortium), and Alekos Simoni (Target Malaria)? · 2021-04-23T05:23:29.035Z · EA · GW

Other than money, how can EAs best support your work? :) 

Comment by Linch on What are your questions for World Malaria Day with Rob Mather (AMF), Maddy Marasciulo (Malaria Consortium), and Alekos Simoni (Target Malaria)? · 2021-04-23T05:22:57.813Z · EA · GW

One thing that I hadn't thought much about until recently is that almost all cause-of-death statistics in low-income/lower-middle-income countries are modeled rather than aggregated from recorded causes of death. So how much should we actually trust statistics like "440,000 people died last year from malaria (or X,000 from country Y)"? Should we assume they're within a ~20% band of reasonableness, or is our actual understanding of the situation much blurrier than that?

Of course, every death from preventable disease is one too many, but it's good to have a clear/crisp/accurate understanding of the world to prioritize accordingly. 

Comment by Linch on What are your questions for World Malaria Day with Rob Mather (AMF), Maddy Marasciulo (Malaria Consortium), and Alekos Simoni (Target Malaria)? · 2021-04-23T05:19:36.211Z · EA · GW

Some global health people in my circles are claiming that interruptions to ongoing public health efforts and basic health services due to covid-19 will reverse many of the recent gains we've made against malaria and other neglected tropical diseases. How much credence do you have in such beliefs?

Concretely, what do you think is the probability that we'll have more estimated deaths from malaria in 2025 than in 2015 (around ~440,000 deaths, iirc)?

Comment by Linch on What are your questions for World Malaria Day with Rob Mather (AMF), Maddy Marasciulo (Malaria Consortium), and Alekos Simoni (Target Malaria)? · 2021-04-21T02:58:40.211Z · EA · GW

Questions for both:

How has the covid-19 pandemic affected your ability to raise money for malaria prevention from both EA and non-EA sources?  

How has it affected your ability to deliver {bednets, chemoprevention} to beneficiaries? 

Comment by Linch on Non-pharmaceutical interventions in pandemic preparedness and response · 2021-04-09T16:07:35.346Z · EA · GW

Do you have thoughts on pandemic prevention NPIs (eg vector control)? Many of these things are technically non-pharmaceutical interventions, though of course they look very different from mask mandates or social distancing orders!

Comment by Linch on Non-pharmaceutical interventions in pandemic preparedness and response · 2021-04-08T21:00:42.665Z · EA · GW

Thanks a lot for this! Like willbradshaw, I agree that this post is "well-written, thoughtful, well-linked and thorough!"

What are some objections to anything I’ve written here?

If I were to nitpick, I think my biggest objection is that your approach to tackling the problem of NPIs for pandemic preparedness and response appears extremely atheoretical. I think this is fine for a scoping study that tries to estimate the scale of the problem, and fine (perhaps even highly underrated!) for clinical studies. But I think we can get decent results at lower cost with a bit of simple theory.

I believe this because I think the human body in general, and the immune system in particular, is woefully complicated, so it makes sense that we cannot have much faith in biologically plausible mechanisms for treatments, which forces us to place correspondingly greater faith in end-to-end RCTs (and leaves us in a state of radical cluelessness otherwise). But there are other parts of epidemiology that are simpler and better understood, such that for transmission we can be reasonably confident in our ability to dice up the problem and isolate the specific confusing subcomponents.

For example, suppose we are worried about a potential respiratory disease pandemic, and we want to figure out whether intervention X (say installing MERV filters for offices) has a sufficiently large impact on an (un)desired endpoint (eg symptomatic disease, hospitalizations). One approach might just be: 

Sounds plausible, but we can't know much with confidence in the absence of end-to-end empirical results. What we can do is run an RCT where we install MERV filters in the treatment group and don't install them in the control group, with a sample size large enough to detect differences that are big enough for us to care about, and compare results after the study's natural endpoint.

I think this is good, but potentially quite expensive/time-consuming (which is really bad in a fast-moving pandemic!). One way we can potentially do better:
 

Well, disease transmission isn't magic, and we're reasonably confident in the very high-level theory of respiratory diseases. So we can at least decompose the problem into two parts: 

  1. Treat human bodies as a black-box function that takes in some combination of scary microbe-laden particles and outputs some probability of undesired endpoints.
  2. Model the world as something that sends scary microbe-laden particles, and figure out which interventions reduce such particles to a level that the modeled function in (1) should consider too low to notice.

My decomposition isn't particularly interesting, but I think it's reasonably clean. With it, we can:

Tackle 1) with human challenge trials where microbe dose/frequency/timing are varied, to understand the plausible ranges of parameters for how many droplets are needed to be harmful.

Tackle 2) with some combination of:

  1. computational fluid dynamics simulations
  2. lab experiments on how much people breathe each other's air, and how fast air needs to cycle to reduce that.
  3. field experiments on the effect of MERV filters on closely analogous particles (on the physical level)
  4. prior knowledge of the transmission patterns of other similar diseases
  5. ???

Now my decomposition is still quite high level, and I'm not sure that my suggested instrumentalizations here aren't dumb. But hopefully what I'm gesturing at makes sense?
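
To make the shape of this concrete, here is a minimal toy sketch of the decomposition in code. To be clear, everything in it (the exponential dose-response form, the parameter k, the source strength of 200, the 80% filtration efficiency) is invented for illustration rather than taken from real epidemiology:

    import math

    # Part 1: treat human bodies as a black-box dose-response function,
    # as might be estimated from human challenge trials with varied
    # dose/frequency/timing. (Exponential form and k are invented.)
    def p_endpoint(dose, k=0.01):
        """Probability of the undesired endpoint given an inhaled dose."""
        return 1 - math.exp(-k * dose)

    # Part 2: model the world as something that sends microbe-laden
    # particles, as might be estimated from CFD simulations, air-cycling
    # lab experiments, or field measurements. (Numbers invented.)
    def dose_delivered(source_strength, filtration_efficiency):
        """Expected dose reaching a susceptible office worker."""
        return source_strength * (1 - filtration_efficiency)

    # Compose the two black boxes to evaluate the MERV-filter intervention.
    baseline = p_endpoint(dose_delivered(200, filtration_efficiency=0.0))
    with_merv = p_endpoint(dose_delivered(200, filtration_efficiency=0.8))
    print(f"P(endpoint) without filters: {baseline:.2f}")   # ~0.86
    print(f"P(endpoint) with MERV filters: {with_merv:.2f}")  # ~0.33

The point of the sketch is just that each black box can be estimated by cheaper and faster studies than a full end-to-end RCT, and the two estimates can then be composed.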

Comment by Linch on A Biosecurity and Biorisk Reading+ List · 2021-04-08T10:45:15.179Z · EA · GW

Thanks a lot for this list!

This is a bit of a tangent, but one implicit assumption I find interesting in your list, and when other EA biosecurity-focused people talk about existential biosecurity (eg, this talk by Kevin Esvelt), is that there's relatively little focus on what I consider "classical epidemiology."

This seems in contrast to the implicit beliefs of both a) serious EAs who haven't thought as much about biosecurity (weak evidence here: the problem/speaker selection of 80,000 Hours podcasts) and b) public health people who are less aware of EA (weak evidence here: undergrad or grad students in public health whom I sometimes talk to or advise).

Putting numbers to this vague intuition, I would guess that your reading list here would suggest an optimal biosecurity-focused portfolio will have a focus of ~5-20% in classical epidemiology, whereas many EA students would think the weighting of epidemiology should be closer to ~30-60%.

I'm interested in whether you agree with my distinction here and consider it a fair characterization. If so, do you think it's worthwhile to have a writeup explaining why (or why not!) many EA-aligned students overweight epidemiology in their portfolio of considerations for important ways to reduce existential biorisk?

EDIT: Relevant Twitter poll.

Comment by Linch on Systemic change, global poverty eradication, and a career plan rethink: am I right? · 2021-04-05T03:08:12.920Z · EA · GW

You may be interested in Max Roser's perspective on his interactions with Hickel.

Comment by Linch on How much does performance differ between people? · 2021-04-01T15:37:38.325Z · EA · GW

This is really cool, thank you!

Comment by Linch on What are your main reservations about identifying as an effective altruist? · 2021-04-01T01:50:41.047Z · EA · GW

If I recall correctly, this was not your position several years ago, when we talked about this more (circa 2015 or 2016). Which is not too surprising -- I mean, I sure hope I changed a lot in the intervening years!

But assuming my memory of this is correct, do you recall when you made this shift, and the core reasons for it? I'm interested in whether there's a short/fast way to retrace your intellectual journey so that other people might make the relevant updates.

Comment by Linch on How much does performance differ between people? · 2021-03-29T20:35:51.604Z · EA · GW

I have indeed made that comment somewhere. It was one of the more insightful/memorable comments she made when I interviewed her, but tragically I didn't end up writing down that question in the final document (maybe due to my own lack of researcher taste? :P)

That said, human memory is fallible etc so maybe it'd be worthwhile to circle back to Liv and ask if she still endorses this, and/or ask other poker players how much they agree with it. 

Comment by Linch on How much does performance differ between people? · 2021-03-26T00:52:22.071Z · EA · GW

Thanks for this. I do think there's a bit of sloppiness in EA discussions about heavy-tailed distributions in general, and about the specific question of differences in ex ante predictable job performance in particular. So it's really good to see clearer work/thinking about this.

I have two high-level operationalization concerns here: 

  1. Whether performance is ex ante predictable seems to be a larger function of our predictive ability than of the world (see the toy simulation after this list). As an extreme example of what I mean, if you take our world on November 7, 2016 and run high-fidelity simulations 1,000,000 times, I expect 1,000,000/1,000,000 of those simulations to end up with Donald Trump winning the 2016 US presidential election. Similarly, with perfect predictive ability, I think the correlation between ex ante predicted work performance and ex post actual performance approaches 1 (up to quantum randomness). This may seem like a minor technical point, but I think it's important to be careful with the reasoning here when we ask whether claims are expected to generalize from domains with large and obvious track records and proxies (eg past paper citations to future paper citations), or even domains where the ex ante proxy may well have been defined ex post (Math Olympiad records to research mathematics), to domains of effective altruism where we're interested in something like counterfactual/Shapley impact*.
  2. There are counterfactual credit assignment issues for pretty much everything EA is concerned with, whereas if you're just interested in individual salaries or job performance in academia, a simple proxy like $s or citations is fine. Suppose Usain Bolt were 0.2 seconds slower at running 100 meters. Does anybody actually think this would result in huge differences in the popularity of sports, or in the percentage of economic output attributable to the "run really fast" fraction of the economy, never mind our probability of spreading utopia throughout the stars? But nonetheless Usain Bolt likely makes a lot more money and has a lot more prestige than the 2nd/3rd fastest runners. Similarly, academics seem to worry constantly about getting "scooped" whereas they rarely worry about scooping others, so a small edge in intelligence or connections or whatever can be leveraged into a huge difference in potential citations, while being basically irrelevant to counterfactual impact. Whereas in EA research it matters a lot whether being "first" means you're 5 years ahead of the next-best candidate or 5 days.
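
As a toy illustration of point 1 (all numbers invented): hold the "world", i.e. everyone's actual performance, fixed, and vary only the noisiness of our predictor. The measured correlation between ex ante prediction and ex post performance changes even though the world itself does not:

    import random
    import statistics

    random.seed(0)

    def corr(xs, ys):
        """Pearson correlation between two equal-length lists."""
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
        return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

    # One fixed "world": each person's actual (ex post) performance.
    performance = [random.gauss(0, 1) for _ in range(10_000)]

    # Predictors of varying quality: prediction = performance + noise.
    for predictor_noise in [0.0, 1.0, 3.0]:
        predictions = [p + random.gauss(0, predictor_noise) for p in performance]
        print(predictor_noise, round(corr(predictions, performance), 2))

    # Prints ~1.0, ~0.71, ~0.32: the measured ex ante/ex post correlation
    # falls as the predictor gets noisier, though the world never changed.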

Griping aside, I think this is a great piece and I look forward to perusing it and giving more careful comments in the coming weeks!

*ETA: In contrast, if it's the same variable(s) (eg fundamental general mental ability, integrity, etc.) that we can use to ex ante predict a variety of good outcomes of work performance across domains, then we can be relatively more confident that this will generalize to EA notions of impact.

Comment by Linch on Some global catastrophic risk estimates · 2021-03-25T21:50:16.987Z · EA · GW

I think there's a somewhat higher chance of users being alive than that, because of the big correlated stuff that EAs care about.

Comment by Linch on EA Funds has appointed new fund managers · 2021-03-23T23:11:40.046Z · EA · GW

I disagree. I think there's something wrong with that, inasmuch as global health donors can already directly defer to Givewell, and Elie's public views on within-cause prioritization do not appear to be obviously different from Givewell's.

Comment by Linch on I scraped all public "Effective Altruists" Goodreads reading lists · 2021-03-23T21:47:46.154Z · EA · GW

Some of the books are also mandatory reading in American public school education, or mandatory reading in other countries/institutions.

Comment by Linch on Please stand with the Asian diaspora · 2021-03-23T09:43:16.971Z · EA · GW

The university's defence was that on average asians had inferior personalities

Can you mention the page number(s) and/or quote the relevant sections somewhere? I tried skimming the 130-page PDF and didn't see anything that alluded to this directly.

Comment by Linch on Progress Open Thread: March 2021 · 2021-03-23T09:31:26.782Z · EA · GW

Congrats!

Comment by Linch on What are some historical examples of people and organizations who've influenced people to do more good? · 2021-03-23T09:18:56.310Z · EA · GW

Sorry I don't think I fully understand you. Can you rephrase?

Comment by Linch on Please stand with the Asian diaspora · 2021-03-23T03:12:47.706Z · EA · GW

Though I wouldn't be surprised if I found a better organization with more searching.. I'm definitely in the market for that if you have ideas.

I don't have direct ideas for the stated goal, but some brainstorming on why you are interested in Asian advocacy might be fruitful. If you are interested in things that help the Asian diaspora have better lives, a wildly flourishing future, etc., I'd bet that the same general (human-focused) cause areas that EAs are interested in (scientific advancement, reducing existential risk from synthetic biology and AI, etc) are in expectation better for Asian people than Laaunch or the other orgs above.

If you want things that disproportionately benefit Asians (eg, because "I want to show support for Asians, so I want to do basically the same thing I was planning to do anyway" is a bad look / serves poorly as an honest loyalty signal), I'd probably look into ways to improve health and other outcomes in countries with a lot of Asians, or that otherwise affect a lot of Asians. Plausible targets include outdoor pollution, smoking cessation, deworming, and lead poisoning. I'd also more speculatively suggest donating to reduce nuclear risk, since I think the majority of potential flashpoints for nuclear war are in Asia.

If you want to donate to organizations that disproportionately benefit upper-middle-class Asians in Western countries (because this is the relevant group of friends/collaborators/students/etc that you wish to express solidarity with), I'm pretty stuck on ideas, yeah. I think it's quite difficult to find charities in this space with any nontrivial, positive, tangible outcomes (even more so than is normal for charity selection).

Speaking personally, I do think the impact of racism on me is nonzero and negative. But almost all of the experiences of racism in my adult life look less like explicit and obvious racial prejudice and more like statistical disparate impact, in both my corporate and social life. In no individual case would there be obvious racism, but collectively a (murky) picture is painted. Eg, I have to apply to X jobs to get offers I'm happy with, whereas I suspect my demographic twin of a different race only has to apply to ~Y jobs to get the same number of offers; visa approvals to the US are harder for Asians than for Europeans*; stuff like that. This is only my own anecdotal experience, but I suspect it generalizes well to East Asian people** who are likely to be your friends/coworkers/etc.

It seems pretty hard to meaningfully improve on the disparate impact stuff without a clear theory of change, and I don't think there are obvious quick fixes (eg, I sure don't want to work in any job that I have to sue to get!). I'd maybe weakly endorse a variation of Dale's comment here and suggest organizations that lobby for greater standardization/legibility in admittance to universities and prestige jobs (under the assumption that illegibility and informal systems almost always disproportionately benefit people with power, and harm minorities). But I'm hesitant to recommend donating to any specific group that works on this without at least doing some due diligence on their theory of change.

Thinking farther afield, I'd also be interested in great power stuff and other things that mitigate potential future ethnic tensions.

* And people who are ethnically Asian are more likely to be from Asian countries than from European countries.

** I'm less sure about generalization to, eg, West Asians, because I can imagine a lot of anti-Arab and anti-Israeli sentiment that's more direct.

Comment by Linch on Please stand with the Asian diaspora · 2021-03-20T19:58:37.512Z · EA · GW

They (EDIT: Laaunch) seem to be doing a lot of different things and I'm confused as to what their theory of change is. 

(Tbc I only had a cursory look at their website so it's possible I missed it).

Comment by Linch on What Makes Outreach to Progressives Hard · 2021-03-18T17:17:53.746Z · EA · GW

Speaking descriptively, are most active leftists members of the working class rather than the PMC (professional-managerial class)? My impression is that while many working-class people have implicitly leftist views on economics, the demographic that leftists predominantly draw from for activism is the highly educated PMC, similar to EA.

This impression can of course be wrong due to selection bias of who I end up talking with, so I'd personally find it valuable to correct for this bias! 

Comment by Linch on The Importance of Artificial Sentience · 2021-03-16T18:53:05.405Z · EA · GW

Relevant EA Forum question and thread.

Comment by Linch on Strong Evidence is Common · 2021-03-14T05:14:36.501Z · EA · GW

I think in the real world there are many situations where (if we were to put explicit Bayesian probabilities on such beliefs, which we almost never do) beliefs with ex ante ~0 credence quickly get extraordinary updates. My favorite example is sense perception. If I woke up after sleeping on a bus and were to put explicit Bayesian probabilities on what I anticipate seeing the next time I open my eyes, the credence I'd assign to the true outcome (ignoring practical constraints like computation and my near-inability to have any visual imagery) would be ~0. Yet it's easy to get strong Bayesian updates: I just open my eyes. In most cases, this is an unremarkable (if enormous) update, and I go on my merry way.

But suppose I open my eyes and instead see people who are approximate lookalikes of dead US presidents sitting around the bus. Then at that point (even though the ex ante probability of this outcome isn't much different from that of any specific other thing I might have seen), I will correctly be surprised, and have some reason to doubt my sense perception.

Likewise, if instead of saying your name is Mark Xu, you said "Lee Kuan Yew", I at least would be pretty suspicious that your actual name is Lee Kuan Yew.

I think a lot of this confusion in intuitions can be resolved by looking at what MacAskill calls the difference between unlikelihood and fishiness:

Lots of things are a priori extremely unlikely yet we should have high credence in them: for example, the chance that you just dealt this particular (random-seeming) sequence of cards from a well-shuffled deck of 52 cards is 1 in 52! ≈ 1 in 10^68, yet you should often have high credence in claims of that form.  But the claim that we’re at an extremely special time is also fishy. That is, it’s more like the claim that you just dealt a deck of cards in perfect order (2 to Ace of clubs, then 2 to Ace of diamonds, etc) from a well-shuffled deck of cards. 

Being fishy is different than just being unlikely. The difference between unlikelihood and fishiness is the availability of alternative, not wildly improbable, alternative hypotheses, on which the outcome or evidence is reasonably likely. If I deal the random-seeming sequence of cards, I don’t have reason to question my assumption that the deck was shuffled, because there’s no alternative background assumption on which the random-seeming sequence is a likely occurrence.  If, however, I deal the deck of cards in perfect order, I do have reason to significantly update that the deck was not in fact shuffled, because the probability of getting cards in perfect order if the cards were not shuffled is reasonably high. That is: P(cards not shuffled)P(cards in perfect order | cards not shuffled) >> P(cards shuffled)P(cards in perfect order | cards shuffled), even if my prior credence was that P(cards shuffled) > P(cards not shuffled), so I should update towards the cards having not been shuffled.

Put another way, we can dissolve this by looking explicitly at Bayes' theorem:

P(H|E) = P(E|H)P(H) / P(E)

and in turn,

P(E) = P(E|H)P(H) + P(E|¬H)P(¬H)

The unlikelihood factor 1/P(E|H) is high in both the "fishy" and "non-fishy" regimes. However, P(E|¬H) (the probability of the evidence under an alternative hypothesis) is much higher for fishy hypotheses than for non-fishy hypotheses, even if the surface-level evidence looks similar!
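
To put toy numbers on this (all invented for illustration), here is a minimal sketch: the same astronomically small P(E|H) barely moves the posterior when no alternative hypothesis predicts the evidence, but demolishes it when one does:

    # H = "the deck was shuffled". All numbers are invented for illustration.
    def posterior_h(prior_h, p_e_given_h, p_e_given_not_h):
        """P(H|E) via Bayes' theorem, expanding P(E) over H and not-H."""
        p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
        return p_e_given_h * prior_h / p_e

    P_SEQUENCE = 1 / 8e67     # any specific 52-card sequence: ~1/52!
    PRIOR_SHUFFLED = 0.999    # we start out confident the deck was shuffled

    # Non-fishy: a random-looking sequence. "Not shuffled" doesn't make this
    # particular sequence any more likely, so the posterior barely moves.
    print(posterior_h(PRIOR_SHUFFLED, P_SEQUENCE, P_SEQUENCE))  # 0.999

    # Fishy: the deck comes out in perfect order. "Not shuffled" makes this
    # outcome reasonably likely (say 10%), so the posterior collapses.
    print(posterior_h(PRIOR_SHUFFLED, P_SEQUENCE, 0.10))        # ~1e-64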

Comment by Linch on Strong Evidence is Common · 2021-03-14T04:35:15.137Z · EA · GW

I’m guessing they would have happily accepted a bet at 20:1 odds that my driver’s license would say “Mark Xu” on it

Pretty minor point, but personally there are many situations where I'd be happy to accept the other side of that bet for many (most?) people named Mark Xu, if the only information I and the other person had was someone saying "Hi, I'm Mark Xu."

Comment by Linch on AMA: Tom Chivers, science writer, science editor at UnHerd · 2021-03-10T23:19:31.520Z · EA · GW

If you were given several million dollars to structure a media vertical on "news that actually matters"*, what would you do differently from Vox's Future Perfect?

*By EAish lights

Comment by Linch on A full syllabus on longtermism · 2021-03-10T23:02:27.348Z · EA · GW

Very minor note, and I doubt this matters much to Julia, but for future cases where this pops up, I think it's better if you just type your preferred name, for computational kindness reasons, since presumably your full name is more readily accessible to you than to Julia.

Comment by Linch on Response to Phil Torres’ ‘The Case Against Longtermism’ · 2021-03-10T22:47:35.849Z · EA · GW

The last time I tried to isolate the variable of intellectual dishonesty using a non-culture-war example on this forum (in that case using fairly non-controversial (to EAs) examples of intellectual dishonesty, and academic figures whom I at least don't think are unusually insightful by EA lights), commenters appeared to be against the within-EA cancellation of them, and instead opted for a position more like:

I would be somewhat unhappy to see them given just a talk with Q&A, with no natural place to provide pushback and followup discussion, but if someone were to organize an event with Baumeister debating some EA with opinions on scientific methodology, I would love to attend that.

This appears broadly analogous to how jtm presented Torres' book in his syllabus. Now, of course, a) there are nontrivial framing effects, so perhaps people might like to revise their conclusions from my comment, and b) you might have alternative reasons not to cite Torres in certain situations (eg a very high standard for quality of argument, or deciding that personal attacks on fellow movement members are verboten), but at least the triplet conjunction presented in your comment (bad opinions + intellectual dishonesty + lack of extraordinary insight) did not, at the time, seem to be a sufficient criterion in the relatively depoliticized examples I cited.

Comment by Linch on AMA: Tom Chivers, science writer, science editor at UnHerd · 2021-03-10T16:42:09.860Z · EA · GW

Since writing your previous book, do you still follow EA or rationalist content (eg this Forum or LessWrong)? If so, what do you find the most helpful?

Comment by Linch on AMA: Tom Chivers, science writer, science editor at UnHerd · 2021-03-10T16:40:35.666Z · EA · GW

From your Twitter, it appears that you think a lot about covid-19. So why is the UK response to covid-19 so bad*? 

Sometimes my American friends will blame US covid failures on US-specific factors (eg, our FDA, presidential system, Trump). But of course the UK is a (by international standards) culturally similar entity that does not share many of those factors, and still appears to have outcomes at least as bad if not worse. So why?


*I admit this is a bit of a leading question. My stance is something like With the major asterisk of vaccinations, it appears that UK outcomes of covid are quite bad by international standards. Moreover, we can trace certain gov't actions (eg "eat out to help out") as clearly bad in a way that was ex ante predictable. But feel free to instead respond "actually you're wrong and the UK response isn't so bad due to XYZ contextual factors!" :)

 

Comment by Linch on Linch's Shortform · 2021-03-09T02:00:41.066Z · EA · GW

The default rich text editor. 

The issue is that if I want to select one line and quote/unquote it, it either a) quotes (unquotes) lines before and after it, or b) creates a bunch of newlines before and after it. Deleting newlines in quote blocks also has the issue of quoting (unquoting) unintended blocks.

Perhaps I should just switch to markdown for comments, and remember to switch back to a rich text editor for copying and pasting top-level posts?

Comment by Linch on Linch's Shortform · 2021-03-09T00:43:31.104Z · EA · GW

I find it quite hard to do multiple quote blocks in the same comment on the Forum. For example, this comment took me 5-10 tries to get right.

Comment by Linch on Linch's Shortform · 2021-03-09T00:41:51.563Z · EA · GW

Could this also be simply because of a difference in the extent to which people already know your username and expect to find posts from it interesting on the two sites? Or, relatedly, a difference in how many active users on each site you know personally?

Yeah, I think this is plausible. Pretty unfortunate though.

Also, have you crossposted many things and noticed this pattern, or was it just a handful? Hmm, I have 31 comments on LW, and maybe half of them are crossposts? 

I don't ever recall having a higher karma on LW than the Forum, though I wouldn't be surprised if it happened once or twice.

Comment by Linch on The Importance of Artificial Sentience · 2021-03-03T23:19:57.376Z · EA · GW

and the staff at PETRL
 

Wow, PETRL has staff now? When did that happen? And where can I read about their upcoming work/plans? 

Comment by Linch on alexrjl's Shortform · 2021-03-03T10:40:17.138Z · EA · GW

Ah yeah. Damn, I could have sworn I did the math on this before (for this exact question) but somehow forgot the result. 😅

Comment by Linch on alexrjl's Shortform · 2021-03-03T00:57:20.542Z · EA · GW

~quadratically

Why not cubically? Because the Milky Way is flat-ish?