Posts

Is there a clear writeup summarizing the arguments for why deep ecology is wrong? 2019-10-25T07:53:27.802Z · score: 11 (6 votes)
Linch's Shortform 2019-09-19T00:28:40.280Z · score: 5 (1 votes)
The Possibility of an Ongoing Moral Catastrophe (Summary) 2019-08-02T21:55:57.827Z · score: 43 (22 votes)
Outcome of GWWC Outreach Experiment 2017-02-09T02:44:42.224Z · score: 14 (16 votes)
Proposal for a Pre-registered Experiment in EA Outreach 2017-01-08T10:19:09.644Z · score: 11 (11 votes)
Tentative Summary of the Giving What We Can Pledge Event 2015/2016 2016-01-19T00:50:58.305Z · score: 7 (7 votes)
The Bystander 2016-01-10T20:16:47.673Z · score: 5 (5 votes)

Comments

Comment by linch on What are your top papers of the 2010s? · 2019-10-25T08:00:57.769Z · score: 7 (4 votes) · EA · GW

The degree to which EA thought relies on cutting-edge* research in economics, philosophy, etc., from the last 10 years is kind of surprising if you think about it.

It's kind of weird that not just Superintelligence but also Poor Economics, Compassion by the Pound, information hazards, the unilateralist's curse, and other things we just assume to be "in the water supply" rely mostly on arguments or research that's not even a decade old!


*the less polite way to put it is "likely to be overturned" :P

Comment by linch on What are your top papers of the 2010s? · 2019-10-25T07:31:17.403Z · score: 9 (4 votes) · EA · GW

I've talked about it several times before, but the biggest one is:

The Possibility of an Ongoing Moral Catastrophe by Evan G. Williams, which I summarized here.

Other than that, in philosophy mostly stuff by Bostrom:

The Unilateralist's Curse

Information Hazards

(Also flagging Will's work on moral uncertainty, though it's unclear to me that his PhD thesis is the best presentation)

In CS:

Adversarial Examples Are Not Bugs, They Are Features by Ilyas et al. (makes clear something I suspected for a while about that topic)

World Models by Ha and Schmidhuber

(Those two papers are far from the most influential ML papers in the last decade! But I usually learn ML from video lectures/blog posts/talking to people rather than papers)

(Probably also various AI Safety stuff, though no specific paper comes to mind).

Designing Data-Intensive Applications cited a ton of papers (that I did not read).

In Economics:

The academic textbook Compassion by the Pound.

Poor Economics (whose authors won the 2019 Nobel Prize in Economics!)

Meta*:

Comment on 'The aestivation hypothesis for resolving Fermi's paradox'

Does suffering dominate enjoyment in the animal kingdom?

*(the research/arguments weren't directly decision-relevant for me, but the fact that they overturned something a lot of EAs believed to be true was a useful meta-update)


Comment by linch on What actions would obviously decrease x-risk? · 2019-10-15T06:01:12.165Z · score: 7 (3 votes) · EA · GW

This is the only answer here I'm moderately confident is correct. A pity the EV is so low!

Comment by linch on Linch's Shortform · 2019-09-19T00:28:40.458Z · score: 28 (12 votes) · EA · GW

cross-posted from Facebook.

Sometimes I hear people counseling humility say something like "this question has stumped the best philosophers for centuries/millennia. How could you possibly hope to make any progress on it?". While I concur that humility is frequently warranted and that in many specific cases that injunction is reasonable [1], I think the framing is broadly wrong.


In particular, using geologic time rather than anthropological time hides the fact that there probably weren't that many people actively thinking about these issues, especially carefully, in a sustained way, while making sure to build on the work of the past. For background, 7% of all humans who have ever lived are alive today, and living people account for 15% of total human experience [2] so far!


It would not surprise me if there are about as many living philosophers today as there were dead philosophers in all of written history.


For some specific questions that particularly interest me (eg. population ethics, moral uncertainty), the total research work done on these questions is, at a generous estimate, less than five philosopher-lifetimes. Even for classical age-old philosophical dilemmas/"grand projects" (like the hard problem of consciousness), total work spent on them is probably less than 500 philosopher-lifetimes, and quite possibly less than 100.


There are also solid outside-view reasons to believe that the best philosophers today are just much more competent [3] than the best philosophers in history, and have access to far more resources [4].


Finally, philosophy can build on progress in natural and social sciences (eg, computers, game theory).


Speculating further, it would not surprise me if, say, a particularly thorny and deeply important philosophical problem could effectively be solved in 100 more philosopher-lifetimes. Assuming 40 years of work and $200,000/year per philosopher, including overhead, this is ~$800 million, or in the same ballpark as the cost of developing a single drug [5].
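(For concreteness, here's that arithmetic as a quick sketch; the 100 lifetimes, 40 years, and $200,000/year figures are the made-up assumptions above, not real data.)

```python
# Back-of-the-envelope check of the estimate above.
# All inputs are the made-up assumptions from the text, not real data.
philosopher_lifetimes = 100
years_per_lifetime = 40          # years of work per philosopher
cost_per_year = 200_000          # $/year per philosopher, including overhead

total_cost = philosopher_lifetimes * years_per_lifetime * cost_per_year
print(f"${total_cost:,}")        # $800,000,000 -> ~$800 million
```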


Is this worth it? Hard to say (especially with such made-up numbers), but the feasibility of solving seemingly intractable problems no longer seems crazy to me.


[1] For example, intro philosophy classes will often ask students to take a strong position on questions like deontology vs. consequentialism, or determinism vs. compatibilism. Basic epistemic humility says it's unlikely that college undergrads can get those questions right in such a short time.


[2] https://eukaryotewritesblog.com/2018/10/09/the-funnel-of-human-experience/


[3] Flynn effect, education, and the education of women, among others. Also, just https://en.wikipedia.org/wiki/Athenian_democracy#Size_and_make-up_of_the_Athenian_population (roughly as many educated people in all of Athens at any given time as at a fairly large state university). Modern people (or at least peak performers) being more competent than past ones is blatantly obvious in other fields where priority matters less (eg, marathon running, chess).


[4] Eg, the internet, cheap books, widespread literacy, and the fact that the current intellectual world is practically monolingual.


[5] https://en.wikipedia.org/wiki/Cost_of_drug_development

Comment by linch on Are we living at the most influential time in history? · 2019-09-06T23:54:00.809Z · score: 10 (6 votes) · EA · GW

>> And it gets even more so when you run it in terms of persons or person years (as I believe you should). i.e. measure time with a clock that ticks as each lifetime ends, rather than one that ticks each second. e.g. about 1/20th of all people who have ever lived are alive now, so the next century it is not really 1/2,000th of human history but more like 1/20th of it.

And if you use person-years, you get something like 1/7 - 1/14! [1]

>> I doubt I can easily convince you that the prior I’ve chosen is objectively best, or even that it is better than the one you used. Prior-choice is a bit of an art, rather like choice of axioms.

I'm pretty confused about how these dramatically different priors are formed, and would really appreciate it if somebody (maybe somebody less busy than Will or Toby?) could give pointers on how to read up more on forming these sorts of priors. As you allude to, this question seems to map to anthropics, and I'm curious how much the priors here necessarily map to your views on anthropics. Eg, am I reading the post and your comment correctly that Will takes an SIA view and you take an SSA view on anthropic questions?

In general, does anybody have pointers on how best to reason about anthropic and anthropic-adjacent questions?

[1] https://eukaryotewritesblog.com/2018/10/09/the-funnel-of-human-experience/


Comment by linch on What book(s) would you want a gifted teenager to come across? · 2019-08-12T04:52:11.793Z · score: 1 (1 votes) · EA · GW

One of the most impactful purchases I've ever made! :P

Comment by linch on What book(s) would you want a gifted teenager to come across? · 2019-08-07T07:30:21.161Z · score: 10 (4 votes) · EA · GW

Ender's Game by Orson Scott Card really spoke to me as a kid, though hopefully your students are better socialized! :P

Comment by linch on What book(s) would you want a gifted teenager to come across? · 2019-08-07T07:29:06.208Z · score: 4 (3 votes) · EA · GW

The Signal and the Noise by Nate Silver (of 538 fame) is the best and most readable introduction to Bayesian statistics and Bayesian reasoning that I'm aware of.

Comment by linch on What book(s) would you want a gifted teenager to come across? · 2019-08-07T07:27:34.475Z · score: 1 (1 votes) · EA · GW

Diary of a Madman by Lu Xun was helpful for me in cultivating a strong sense of dissatisfaction with the way things are and the implicit or explicit rules that govern social reality.

I don't know if there are any good translations though.

Comment by linch on What book(s) would you want a gifted teenager to come across? · 2019-08-07T07:08:46.325Z · score: 1 (1 votes) · EA · GW

Re Poor Economics:

I still remember the experiments in (I think) India which demonstrated that even for people living in extreme poverty, where most marginal spending goes to food, increased income frequently resulted in people buying better-tasting calories, not just more calories. A+.

Comment by linch on What book(s) would you want a gifted teenager to come across? · 2019-08-07T07:03:27.238Z · score: 2 (2 votes) · EA · GW

I thought Chiang was unusually high in literary merit, but what do you think is the relevance to EA?

Comment by linch on What book(s) would you want a gifted teenager to come across? · 2019-08-07T07:02:31.681Z · score: 1 (1 votes) · EA · GW

Strongly seconded. Both had a large effect on me, especially Famine, Affluence and Morality when I was a teenager.

Comment by linch on The Possibility of an Ongoing Moral Catastrophe (Summary) · 2019-08-07T06:48:40.908Z · score: 1 (1 votes) · EA · GW

For #2, Ideological Turing Tests could be cool too.

Comment by linch on The Possibility of an Ongoing Moral Catastrophe (Summary) · 2019-08-04T07:24:53.308Z · score: 1 (1 votes) · EA · GW

You may also like our discussion sheets for this topic:

https://drive.google.com/drive/u/1/folders/0B3C8dkkHYGfqeXh1SE5kLVJPdGM

Comment by linch on The Possibility of an Ongoing Moral Catastrophe (Summary) · 2019-08-04T03:24:53.907Z · score: 1 (1 votes) · EA · GW

Sure! In general you can assume that anything I write publicly is freely available for academic purposes. I'd also be interested in seeing the syllabus if/when you end up designing it.

Comment by linch on How urgent are extreme climate change risks? · 2019-08-03T02:05:34.345Z · score: 2 (2 votes) · EA · GW

Messaged. Will share more widely if/when it's ready for prime time. :)

Comment by linch on The Possibility of an Ongoing Moral Catastrophe (Summary) · 2019-08-03T02:02:23.567Z · score: 1 (1 votes) · EA · GW
>> We absolutely welcome summaries! People getting more ways to access good material is one of the reasons the Forum exists.

Yay!

>> did you consider copying the summary into a Forum post, rather than linking it?

Yes. I did a lot of non-standard formatting tricks in Google Docs when I first wrote it (because I wasn't expecting to ever need to port it over to a different format). So when I first tried to copy it over, the whole thing looked disastrously unreadable.

Changed the title. :)


Comment by linch on 'Longtermism' · 2019-08-02T22:28:52.128Z · score: 3 (2 votes) · EA · GW

Great post!

>> In general, if I imagine ‘longtermism’ taking off as a term, I imagine it getting a lot of support if it designates the first concept, and a lot of pushback if it designates the second concept. It’s also more in line with moral ideas and social philosophies that have been successful in the past: environmentalism claims that protecting the environment is important, not that protecting the environment is (always) the most important thing; feminism claims that upholding women’s rights is important, not that doing so is (always) the most important thing. I struggle to think of examples where the philosophy makes claims about something being the most important thing, and insofar as I do (totalitarian marxism and fascism are examples that leap to mind), they aren’t the sort of philosophies I want to emulate.

Maybe this is the wrong reference class, but I can think of several others (utilitarianism, Christianity, consequentialism) where the "strong" definition is the most natural one that comes to mind.

Ie, a naive interpretation of Christian philosophy is that following the word of God is the most important thing (not just one important thing among many). Similarly, utilitarians would usually consider maximizing utility to be the most important thing, and consequentialists would probably consider consequences to be more important than other moral duties, etc.

Comment by linch on How urgent are extreme climate change risks? · 2019-08-02T09:46:39.001Z · score: 5 (5 votes) · EA · GW

If you're interested, I just wrote a draft of an article on this, happy to share and solicit feedback! :)

Comment by linch on There are *a bajillion* jobs working on plant-based foods right now · 2019-07-20T10:05:17.166Z · score: 38 (13 votes) · EA · GW

I was asked to comment here. As you know, I did a data science internship at Impossible Foods in late 2016. I'm mostly jotting down my own experiences, along with some anonymized information from talking to others.

NB: "Tech" below refers to jobs that are considered mainstream tech in Silicon Valley (software, data science, analytics, etc), while "science" refers to the food science/biochemistry/chemistry work that is Impossible's core product.

Pros:

  • Highly mission-driven. Many people were vegetarian or vegan (all the food the company served was vegan by default), and people there seemed fairly dedicated to the cause of replacing farmed animals with plants (though less so than I would expect from an EA or AR nonprofit).
  • Diversity. The gender ratio in the main office was slightly more women than men, and there was a lot of representation from different countries that I usually don't see in Silicon Valley (though this could just be because biology/biochemistry draws from a different population than CS).
  • Niceness. People were really nice to each other, and there weren't many of the assholish personalities I sometimes associate with startups.
  • Interesting problems. My subjective sense is that tech there is usually used to support scientific pursuits rather than, eg, tech as a product or business development, and is more interesting in a broad sense than most big-company or startup work.
  • Lots of opportunities to grow. People who are up for it often take on quite impressive challenges at low levels of seniority.
  • Benefits. I didn't use them much, but my impression is that the company seemed quite progressive about things like vacation days and paternity leave(?).
  • Reasonable work-life balance. This seemed true of the tech people I knew; however, the scientists seemed a little overworked and the business development people seemed a lot overworked. I don't know how this compares to other startups.
  • The CEO (Pat Brown) appeared highly competent and clearly thoughtful. From my relatively brief interactions with him, there's a reasonable chance he would have been at home in Stanford EA if he were much younger. Eg, he talks about quantitative cause prioritization and had a short rant at one point about selection bias in business advice.

Cons:

  • Low pay. I feel like there's a large mission/salary tradeoff that the company makes because it knows it can hire enough True Believers. My intern pay was substantially below market, and this seemed true of the other interns I talked to, as well as full-timers I talked to in broadly "tech" roles. I don't know if this has changed by 2019. Another caveat is that I didn't ask about equity, and Impossible's valuation ~quadrupled in the last 3 years, so it's quite possible full-timers were actually well-compensated even if they didn't perceive it that way at the time. A final caveat is that I'm comparing with other for-profit companies; maybe a better point of comparison is (EA) nonprofits or academia, in which case my guess is that Impossible pays better.
  • Subpar conflict resolution. I was pretty shielded from the politics as an intern, but I heard more bad stories from others than I would expect from a company of its size (caveat: I have a very poor understanding of the actual base rate of bad conflicts at successful companies). Possibly because of the niceness? I feel like people leave on bad terms more often than I would expect.
  • Technical mentorship. Because tech is not the main product, you'll get less senior mentorship or guidance than at a primarily tech company. (Obviously, the opposite is true if you're a food scientist or biochemist.)
  • Incrementalist work. Impossible always had a vision of eventually replacing all animal-based products; however, when I joined in 2016, it was very much at the tail end of experimentation and the beginning of being laser-focused on beef, which seems less intellectually and altruistically interesting. My impression is that this was even more true as of 2018; however, they seem to have developed pork and fish replacements recently? [1]

Neutrals:

  • The company seems fairly high-prestige in the public eye. It's extremely well-known for its size, and people are often excited to talk to me about the work there (in a way that I've never experienced before or since). This seems good for career capital and well-being; however, I want to caution against seeing this as a clear positive. It's easy to fall into prestige traps, and people should introspect about this before they apply. (Also, local prestige matters more than global prestige for most job pivots, so public opinion is a poor proxy for how much future employers care.)
  • Environmentalism. People at Impossible are much more likely to be environmentalists than animal welfare people. Personally I find Deep Ecology views to be philosophically untenable, but obviously other EAs have different philosophical views. I write this so people can make an informed decision about self-selecting in.

On balance, I don't think I'm informed enough to judge whether working at Impossible is better than a typical reader's alternatives. My gut instinct is that if you have other altruistic options that can make full use of your skillsets (clean meat seems especially exciting), then it's more impactful to do more early-stage work than to be at Impossible, but I'm very uncertain about this opinion and it's confounded by a lot of details on the ground.

Additional Note 2019/7/20: Rereading this, I think people are usually biased against applying, and I think it's still worthwhile for people who consider farmed animal welfare their top (or close to top) cause area to apply to Impossible.

[1] https://www.digitaltrends.com/cool-tech/impossible-sausage-little-caesars/




Comment by linch on What books or bodies of work, not about EA or EA cause areas, might be beneficial to EAs? · 2019-06-18T08:38:57.149Z · score: 1 (1 votes) · EA · GW

Facebook post that has a longer list (though the framing's slightly different: "potentially lifechanging" rather than useful):

https://www.facebook.com/linchuan.zhang/posts/1697471177010326

Comment by linch on What books or bodies of work, not about EA or EA cause areas, might be beneficial to EAs? · 2019-06-14T08:19:05.146Z · score: 5 (5 votes) · EA · GW

Obvious point: While EAs are special in some important ways, there are many more ways in which EAs aren't that special. So if you want to be effective at what you do, then often generally good advice/resources for your field would be helpful.

Eg, if you want to be good at accounting, the best books on accounting continue to be useful to an "EA accountant"; if you want to be good at entrepreneurship/programming/social skills/research, the generally useful resources are still good for those things.

Books I found helpful:

Decisive[1]

The Productivity Project

Code Complete

Designing Data-Intensive Applications

Anathem (fiction)

Books that have the potential to be helpful, but I did not personally find dramatically helpful:

Thinking, Fast and Slow

Deep Work

The Signal and the Noise

Crucial Conversations

The Art of Learning [2]


[1]

https://docs.google.com/document/d/14kwkA0XBHSewIjlfIHHqQH9ISo0s5d_kp-w7w594Pks/edit

[2] https://docs.google.com/document/d/1WCMQyOBo7ROx012CkIIzQ7rklBecaRzRhxusPDh3wBY/edit

Comment by linch on Can/should we define quick tests of personal skill for priority areas? · 2019-06-14T04:14:47.859Z · score: 1 (1 votes) · EA · GW

Definitely agree on "should," assuming it's tractable. As for "can," one possible approach is to hunt down the references in Hunter and Schmidt [1], or similar/more recent meta-analyses, disaggregate by career fields that are interesting to EAs, and look at what specific questions are asked in things like "work sample tests" and "structured employment interviews."

Ideally you want questions that are a) predictive, b) relatively uncorrelated with general mental ability [2], and c) reasonable to ask early on in someone's studies [3].

One reason to be cynical about this approach is that personnel selection is well-researched and would be economically really lucrative if for-profit companies could figure it out, and yet very good methods do not already exist.

One reason to be optimistic is that if we're trying to help EAs figure out their own personal skills/comparative advantages, this is less subject to adversarial effects.


[1] https://pdfs.semanticscholar.org/8f5c/b88eed2c3e9bd134b46b14b6103ebf41c93e.pdf

[2] Because if the question just tests how smart you are, it says something about absolute advantage but not comparative advantage.

[3] Otherwise this will ruin the point of cheap tests.

Comment by linch on Not getting carried away with reducing extinction risk? · 2019-06-01T23:09:58.676Z · score: 18 (8 votes) · EA · GW

I think this summarizes the core arguments for why focusing on extinction risk prevention is a good idea. https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive/



Comment by linch on 2018 AI Alignment Literature Review and Charity Comparison · 2019-05-30T06:48:19.670Z · score: 3 (2 votes) · EA · GW

Really late to the party, but thanks so much for this great post!

Minor detail: the link to Shah et al.'s Value Learning sequence should now point here:

https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc


Comment by linch on How to improve your productivity: a systematic approach to sustainably increasing work output · 2019-05-21T06:07:24.947Z · score: 4 (3 votes) · EA · GW

Object level: I assume you are already familiar with The Productivity Project?

https://www.amazon.com/Productivity-Project-Accomplishing-Managing-Attention/dp/1101904038

The author attempts a series of productivity experiments, similar to what you are planning (if not quite as pre-registered and systematic).

Comment by linch on How do we check for flaws in Effective Altruism? · 2019-05-08T07:50:31.000Z · score: 4 (3 votes) · EA · GW
>> Divergence of efforts in AI alignment could lead to an arms race

Can you be a bit more concrete about what this would look like? Is this because different approaches to alignment can also lead to insights in capabilities, or is there something else more insidious?

Naively it's easy to see why an arms race in AI capabilities is bad, but competition for AI alignment seems basically good.

Comment by linch on Value of Working in Ads? · 2019-04-11T16:47:53.753Z · score: 6 (4 votes) · EA · GW
>> Thinking counterfactually, if we assume you are purely executing a plan that others at Google created with programming skills that Google could hire other engineers to replace, the marginal impact of doing software engineering for Google Ads is essentially zero.

I don't think this is true.

1. Headcount for teams at tech companies, including Google, regularly takes 3-6 months to get filled, if not longer. So if Jeff doesn't take his job (or alternatively, if he chooses to leave now), the projects he works on get delayed by 3-6 engineering months, as a first approximation. 3-6 months is significant in an industry where people regularly change jobs every 2-3 years.

2. There is a lot of variance in engineering productivity both in general and in Google specifically. Perhaps the prior should be that your productivity is average for your job, however.

3. Even if Jeff has no say in high-level strategy or product, there are a lot of small, subtle design decisions that engineers make on a daily basis that make the product more/less usable, easier to maintain, etc. Though again, maybe you should have a neutral prior.

Comment by linch on What skills would you like 1-5 EAs to develop? · 2019-03-29T07:21:06.644Z · score: 1 (1 votes) · EA · GW

Very cool!

Comment by linch on What skills would you like 1-5 EAs to develop? · 2019-03-27T07:43:40.510Z · score: 11 (4 votes) · EA · GW

Reading up on tax law to offer targeted tax advice -- It's plausible that many EAs have financial situations more complicated than "I make $X/year in salary and I want to donate some % of it" while having much less money than Good Ventures (so keeping lawyers on retainer is not a live possibility).

  • Evidence for: Even somebody whose only job is at BigTech usually has relevant compensation in at least three different buckets (salary/bonuses, stock, 401(k) matching). I can imagine situations where a one-hour consultation (or reading a 20-minute blog post) is at least as helpful as a 2-3 hour consultation with a tax attorney who does not have the relevant EA context, with the EA-savvy advisor possibly catching things that some conventional tax attorneys would just miss.
  • Other evidence: complications from startup exits, cryptocurrency, consulting work

I can imagine that as a community we could support ~2-5 people specializing in US tax law as the movement grows, and maybe part-time people covering the tax law of other countries (probably not a bad general option for earning-to-give if it turns out your time is only needed for part of the year).

Cognitive Enhancement through genetic engineering - Plausibly very important in general, but I think 1-5 people is a good start. When South Bay EA did a meetup about this, I think we broadly concluded (note: we did not formally poll; this is just my read of the room) that both of the following statements seem to have a >50% chance of being true:

  • In an ideal world, it’s better to have human cognitive enhancement before AGI
  • If cognitive enhancement has to happen at all, it’s quite important that it’s done well.

I think it’s plausible (~30% credence, did not think about this too deeply) that human cognitive enhancement has comparable importance to bio x-risk, and I basically never hear about people going into it for EA reasons, possibly because of the social/political biases of being a mostly Western, center-left movement.

Farmed Animal Genetic Engineering - For a movement that prides itself on jokes about hedonium and rats on heroin, I don't think I know anybody who works on genetically engineering animals to suffer less. This only matters under the conjunction of a) near-term AGI doesn't happen, b) farmed animal suffering matters a lot (in both a moral and an epistemic sense), c) clean/plant-based meat will not have high adoption within a generation, d) it's technically possible to engineer animals to suffer less in a cost-effective manner, and e) there is enough leeway in the current system to let you do so. Even with that in mind, I still think a nonzero number of AR/AW people should investigate this. For d) in particular, I will personally be very surprised if you can't engineer chickens to suffer 1% less given approximately the same objective external conditions, and will not be too surprised if you can reduce chicken suffering by 50%.

I think there are obvious biases for why animal rights activists go into clean meat rather than engineering animals to feel less pain, so the fact that this path probably does not currently exist should not be surprising.

Micro-optimizations for user happiness within large tech companies. A large portion of your screen time is spent in interactions crafted by a very small number of companies (FB, Google, Apple, Netflix, etc). Related to the idea above of targeting animal happiness directly: why aren't people trying harder to target human happiness directly? It seems like a fair number of EAs are interested in mental health, but all are trying to partly cure *major problems*, rather than considering that a .002 sd change in the happiness of a billion people is a ridiculously large prize.

I know exactly one person working (very part-time) on this. I think there's a decent chance that a single person who knows how to Get Things Done within a large company could convince execs to let them lead a team to investigate this, and also a decent chance that this is doable without substantial technological or cultural changes. These large tech companies already spend hundreds of millions of dollars (if not more) on other ethics initiatives like diversity, fairness, transparency, user privacy, preventing suicides, etc. So it's not at all crazy to me that somebody could manage upwards by crafting a convincing enough pitch* to launch something like this in at least one tech company.

Involvement in various militaries - Pretty speculative. I've talked to former (American) military members who think it's not very impactful, but I still think that prima facie it'd be nice if we had very EA-sympathetic people within earshot of people high in the chain of command in, say, the militaries of permanent UN Security Council members, or technologically advanced places like the IDF.

Content creation/social media marketing. I have some volunteering experience in this, enough to know that it's a non-trivially difficult skill with large quality differences between people who are really good at it and people who are average. EA does not currently want to be a mass movement (and probably never will), but assuming that this changes in the next 5-10 years (~15-20%?), I think having 1-5 people who are good at this skill would be nice, and I'd rather we not have to buy our branding on the open market.

*Hypothetical example pitch: "we always say that we respect our users and want them to be happy. But as a data-driven firm, we can't just say this and not follow up with measurable results. Here are some suggested relevant metrics of user happiness (citations 1,2,3), and here's the pilot project that increased user happiness in this demographic by .0x standard deviations."

Comment by linch on Cross-post: Think twice before talking about ‘talent gaps’ – clarifying nine misconceptions, by 80,000 Hours. · 2018-11-18T00:01:19.582Z · score: 4 (3 votes) · EA · GW
>> it may be helpful if organisations published typical ratios of applicants to hires to let people plan accordingly

While in general I'm a big fan of more data, I worry that in this particular case it will generate more heat than light. I suspect that "ratios of applicants to hires" will be a really poor proxy for how competitive a position is. For example, Walmart in DC has something like a 2.6% acceptance rate.

Further, I don't think there's a good way to provide actually useful statistics on applicant quality in a privacy-conscious way. Eg, you can't just use the mean or the median. It doesn't matter if the median candidate has <1 year of ops job experience if the top 10 candidates all have 15+.

Comment by linch on The marketing gap and a plea for moral inclusivity · 2017-07-13T05:56:57.260Z · score: 1 (1 votes) · EA · GW

This seems like a perfectly reasonable comment to me. Not sure why it was heavily downvoted.

Comment by linch on Open Thread #36 · 2017-04-25T04:04:22.806Z · score: 1 (1 votes) · EA · GW

Do you live in the South Bay (south of San Francisco)?

Did you recently move here and want to be plugged in to what EAs around here are doing and thinking? Did you recently learn about effective altruism and want to know what the heck it's about? Well, join South Bay Effective Altruism's first fully newbie-friendly meetup!

We'll discuss cause prioritization, what cause areas YOU are interested in, and how we can help each other do the most good!

https://www.facebook.com/events/305401856547678/?active_tab=discussion https://www.meetup.com/South-Bay-Effective-Altruism/events/239444560/

The actual meetup will be this Friday at 7pm, but you can also comment here or message me at email[dot]Linch[at]gmail[dot]com to be in the loop for future events.

Comment by linch on Intuition Jousting: What It Is And Why It Should Stop · 2017-03-31T02:04:37.170Z · score: 2 (2 votes) · EA · GW

I've noticed this before, and I think it's a flawed truth-seeking device on a technical level.

Basically, I'm really leery of reductio ad absurdum arguments involving statements that are inherently probabilistic in general, but especially when it comes to ethics.

A straightforward reductio ad absurdum goes:

  1. Say we believe in P
  2. P implies Q
  3. Q is clearly wrong
  4. Therefore, not P.

However, in philosophical ethics it's more like

  1. Say we believe in P
  2. A seems reasonable
  3. B seems reasonable
  4. C seems kind of reasonable.
  5. D seems almost reasonable if you squint a little; at least it's more reasonable than P
  6. E has a >50% chance of being right.
  7. P and A and B and C and D and E implies Q
  8. Q is an absurd/unintuitive conclusion.
  9. Therefore, not P

The issue here is that most of the heavy lifting is done by the conjunction of many merely-plausible premises, and by conflating >50% probabilities with absolute truths.
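To make the conjunction point concrete, here's a toy calculation; the individual premise probabilities are hypothetical, purely for illustration:

```python
# Toy illustration of the conjunction problem described above.
# The premise probabilities are hypothetical, purely for illustration.
premise_probs = {"A": 0.9, "B": 0.85, "C": 0.7, "D": 0.55, "E": 0.6}

p_conjunction = 1.0
for premise, p in premise_probs.items():
    p_conjunction *= p

print(round(p_conjunction, 2))
# ~0.18: even if every auxiliary premise is individually "reasonable",
# their conjunction is quite unlikely, so an absurd Q is only weak
# evidence against P specifically.
```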

Comment by linch on Open Thread #36 · 2017-03-16T05:56:38.993Z · score: 5 (5 votes) · EA · GW

Apologies, rereading it again, I think my first comment was rude. :/

I do a lot of selfish and suboptimal things as well, and it would be inefficient/stressful if each of us had to always defend any deviation from universal impartiality in all conversations.

I think on the strategic level, some "arbitrariness" is fine, and perhaps even better than mostly illusory non-arbitrariness. We're all human, and I'm not certain it's even possible to cleanly delineate how much you value satisfying different urges for a meaningful and productive life.

On the tactical level, I think general advice on frugality, increasing your income, and maximizing investment returns is applicable. Off the top of my head, I can't think of any special information specific to the retirement/EA charity dichotomy. (Maybe the other commenters can think of useful resources?)

(Well, one thing that you might already be aware of is that retirement funds and charity donations are two categories that are often tax-exempt, at least in the US. Also, many companies "match" your investment into retirement accounts up to a certain %, and some match your donations. Optimizing either of those categories can probably save you (tens of) thousands of dollars a year.)

Sorry I can't be more helpful!

Comment by linch on Open Thread #36 · 2017-03-15T04:33:49.754Z · score: 4 (6 votes) · EA · GW

My personal opinion is that individuals should save enough to mitigate emergencies, job transitions, etc. (https://80000hours.org/2015/11/why-everyone-even-our-readers-should-save-enough-to-live-for-6-24-months/), but no more.

It just seems rather implausible to me that retirement money is anywhere close to being a cost-effective intervention, relative to other likely EA options.

Comment by linch on Changes in funding in the AI safety field · 2017-03-11T09:52:58.513Z · score: 0 (0 votes) · EA · GW

The quality of this intervention has already been discussed elsewhere on this forum: http://effective-altruism.com/ea/17v/ea_funds_beta_launch/ad5

Comment by linch on EA Funds Beta Launch · 2017-03-04T01:53:43.601Z · score: 7 (7 votes) · EA · GW

This intervention appears to pass my initial heuristic of "Important, Neglected and Tractable." However, do you have any non-anecdotal evidence that it works? In particular, has Dr. Trust's spellcasting gone through an RCT? If not, can you point to examples of interventions in a similar reference class that have?

Comment by linch on How many hits does hits-based giving get? A concrete study idea to find out (and a $1500 offer for implementation) · 2017-03-03T07:04:38.364Z · score: 0 (0 votes) · EA · GW

Congratulations! This is very exciting and I'm looking forward to hearing about future updates.

Comment by linch on Donating To High-Risk High-Reward Charities · 2017-02-17T10:53:27.442Z · score: 1 (1 votes) · EA · GW

An added reason not to take expected value estimates literally (which applies to some/many casual donors, but probably not to AGB or GiveWell) is if you believe that you are not capable of making reasonable expected value estimates under high uncertainty yourself, and you're leery of long causal chains because you've developed a defense mechanism against your values being Eulered or Dutch-Booked.

Apologies for the weird terminology, see: http://slatestarcodex.com/2014/08/10/getting-eulered/ and: https://en.wikipedia.org/wiki/Dutch_book

Comment by linch on GiveWell and the problem of partial funding · 2017-02-15T12:39:21.443Z · score: 3 (3 votes) · EA · GW

>> The GiveWell Top Charities are part of the Open Philanthropy Project’s optimal philanthropic portfolio, when only direct impact is considered. There’s not enough money to cover the whole thing. These are highly unlikely to both be true. Global poverty cannot plausibly be an unfillable money pit at GiveWell’s current cost-per-life-saved numbers. At least one of these three things must be true:
>> 
>> GiveWell’s cost per life saved numbers are wrong and should be changed.
>> 
>> The top charities’ interventions will reach substantially diminishing returns long before they’ve managed to massively scale up.
>> 
>> A few billion dollars can totally wipe out major categories of disease in the developing world.

I don't think I understand the trilemma you presented here.

As a sanity check, under-5 mortality is about 6 million per year worldwide. Assuming that more than 2/3 of those deaths are preventable (which I think is a reasonable assumption if you compare with developed-world numbers on under-5 mortality), this means there are 4 million+ preventable deaths (and corresponding suffering) per year. At $10,000 to prevent a death, covering even a few months of this costs more than Open Phil has. At $3,500 to prevent a death, a single year still costs more than Open Phil has.
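Here's that sanity check as a rough sketch; the ~$10 billion figure for Open Phil's available funds is my own assumption for illustration, not a number from the post:

```python
# Rough sketch of the sanity check above.
# The ~$10B figure for Open Phil's available funds is an assumption for illustration.
under_5_deaths_per_year = 6_000_000
preventable_deaths = under_5_deaths_per_year * 2 // 3   # 4 million+

for cost_per_death in (10_000, 3_500):
    annual_cost = preventable_deaths * cost_per_death
    print(f"${cost_per_death:,}/death -> ${annual_cost / 1e9:.0f}B per year")
# $10,000/death -> ~$40B per year; $3,500/death -> ~$14B per year;
# either way, far more than a ~$10B funder can sustain for long.
```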

We would expect the numbers to also be much larger if we're not prioritizing just deaths, but also prevention of suffering.

Comment by linch on Outcome of GWWC Outreach Experiment · 2017-02-10T00:08:02.537Z · score: 0 (0 votes) · EA · GW

Yeah that makes sense!

Comment by linch on Outcome of GWWC Outreach Experiment · 2017-02-09T07:21:00.653Z · score: 2 (2 votes) · EA · GW

I'm not sure how you're operationalizing the difference between unlikely and very unlikely, but I think we should not be able to make sizable updates from this data unless the prior is REALLY big.

(You probably already understand this, but other people might read your comment as suggesting something stronger than you actually mean, and this is a point that I really wanted to clarify anyway because I expect it to be a fairly common mistake.)

Roughly: Unsurprising conclusions from experiments with low sample sizes should not change your mind significantly, regardless of what your prior beliefs are.

This is true (mostly) regardless of the size of your prior. If a null result when you have a high prior wouldn't cause a large update downwards, then a null result on something when you have a low prior shouldn't cause a large shift downwards either.

[Math with made-up numbers below]

As mentioned earlier:

If your hypothesis is a 10% effect: 23% probability of getting a (null) result like ours.
If your hypothesis is a 1% effect: 87% probability.
If your hypothesis is a 5% effect: 49% probability.
If your hypothesis is a 20% effect: 4.4% probability.

Say your prior belief is that there's a 70% chance of talking to new people having no effect (or meaningfully close enough to zero that it doesn't matter), a 25% chance that it has a 1% effect, and a 5% chance that it has a 10% effect.

Then by Bayes' Theorem, your posterior probability should be: 75.3% chance it has no effect

23.4% chance it has a 1% effect

1.24% chance it has a 10% effect.

If, on the other hand, you originally believed that there's a 50% chance of it having no effect, and a 50% chance of it having a 10% effect, then your posterior should be:

81.3% chance it has no effect

18.7% chance it has a 10% effect.

Finally, if your prior is that it already has a relatively small effect, this study is far too underpowered to support basically any conclusions at all. For example, if you originally believed that there's a 70% chance of it having no effect, and a 30% chance of it having a .1% effect, then your posterior should be:

70.3% chance of no effect

29.7% chance of a .1% effect.

This is all assuming ideal conditions. Model uncertainty and uncertainty about the quality of my experiment should only decrease the size of your update, not increase it.
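Here's a minimal sketch of the Bayes' Theorem calculation above; the one extra assumption is that the likelihood of a null result under "no effect" is ~1.0:

```python
# Minimal sketch of the posterior calculation above.
# The likelihoods are the probabilities of seeing our (null) result under each
# hypothesis; the likelihood under "no effect" is assumed to be ~1.0.
def posterior(priors, likelihoods):
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

likelihoods = {"no effect": 1.0, "1% effect": 0.87, "10% effect": 0.23}
priors = {"no effect": 0.70, "1% effect": 0.25, "10% effect": 0.05}

for hypothesis, p in posterior(priors, likelihoods).items():
    print(f"{hypothesis}: {p:.1%}")
# no effect: 75.3%, 1% effect: 23.4%, 10% effect: 1.2% (matching the numbers above)
```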

Do you agree here? If so, do you think I should rephrase the original post to make this clearer?

Comment by linch on 80,000 Hours: EA and Highly Political Causes · 2017-02-02T01:04:37.344Z · score: 1 (1 votes) · EA · GW

Thanks for the edit! :) I appreciate it.

I think your model has MUCH more plausible numbers after the edit, but on a more technical level, I still think a linear model that far out is not ideal here. We would expect diminishing marginal returns well before we hit an increase in spending by a factor of 10.

Probably much better to estimate based on "cost per vote" (like you did below), and then use something like Silver's estimates for marginal probability of a vote changing an election.
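A crude version of that calculation might look like the sketch below; both the cost-per-vote and per-vote pivotality numbers are hypothetical placeholders, not Silver's actual estimates:

```python
# Crude sketch of the "cost per vote" approach suggested above.
# Both inputs are hypothetical placeholders, not actual published estimates.
additional_spending = 1_000_000_000   # $1 billion of extra spending
cost_per_vote = 1_000                 # $ per marginal vote (GOTV-style figure)
p_pivotal_per_vote = 1e-7             # chance a single marginal vote flips the election

votes_shifted = additional_spending / cost_per_vote
p_change_outcome = votes_shifted * p_pivotal_per_vote
print(f"{votes_shifted:,.0f} votes, ~{p_change_outcome:.0%} chance of changing the outcome")
# 1,000,000 votes, ~10% chance (under these made-up numbers)
```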

To be clear, I have nothing against linear models and use them regularly.

Comment by linch on 80,000 Hours: EA and Highly Political Causes · 2017-02-01T08:38:13.505Z · score: 3 (3 votes) · EA · GW

I'm really confused by both your conclusion and how you arrived at the conclusion.

I. Your analysis suggests that if Clinton had doubled her spending, her chances of winning would have increased by less than 2% (!)

This seems unlikely.

II. "Hillary outspent Trump by a factor of 2 and lost by a large margin." I think this is exaggerating things. Clinton won the popular vote by 2.1%. 538 suggests (http://fivethirtyeight.com/features/under-a-new-system-clinton-could-have-won-the-popular-vote-by-5-points-and-still-lost/) that Clinton would probably have won if she had had a 3% popular vote advantage.

First of all, I dispute that losing by less than 1-in-100 of the electoral body is a "large margin." Secondly, I don't think it's plausible that shifting on the order of 1 million votes with $1 billion in additional funding has less than a 2% chance of changing the outcome. ($1,000 per vote is well within the statistics I've seen on GOTV efforts, and actually on the seriously high end.)

III. "I mean presumably even with 10x more money or $6bn, Hillary would still have stood a reasonable chance of losing, implying that the cost of a marginal 1% change in the outcome is something like $500,000,000 - $1,000,000,000 under a reasonable pre-election probability distribution."

I don't think this is the right way to model marginal probability, to put it lightly. :)

Comment by linch on EA essay contest for <18s · 2017-01-27T00:26:05.739Z · score: 0 (0 votes) · EA · GW

I wouldn't worry too much about detecting plagiarism. There isn't THAT much content in the EA space, and some member of a group of us would likely be able to recognize content that repeats things we've seen before.

Comment by linch on EA essay contest for <18s · 2017-01-24T08:47:40.393Z · score: 2 (2 votes) · EA · GW

Once this idea is more developed, Students for High-Impact Charity would be happy to help advertise/promote it.

Comment by linch on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-24T08:22:11.783Z · score: 1 (1 votes) · EA · GW

The hitchhiker is mentioned in Chapter One of Reasons and Persons. Interestingly, Parfit was more interested in the moral implications than the decision-theoretic ones.

Comment by linch on Proposal for an Pre-registered Experiment in EA Outreach · 2017-01-16T06:22:15.050Z · score: 1 (1 votes) · EA · GW

UPDATE: I now have my needed number of volunteers, and intend to launch the experiment tomorrow evening. Please email, PM, or otherwise contact me in the next 12 hours if you're interested in participating.

Comment by linch on Tell us how to improve the forum · 2017-01-15T08:37:35.229Z · score: 2 (2 votes) · EA · GW

I often see spambots in the comments.