Posts

The Repugnant Conclusion Isn't 2022-08-23T08:09:03.583Z
Punching Utilitarians in the Face 2022-07-13T18:43:36.422Z
EA Organization Updates: March 2022 2022-03-24T17:33:06.278Z
EA Organization Updates: February 2022 2022-02-26T07:11:57.603Z
EA Organization Updates: January 2022 2022-01-25T21:10:57.053Z
EA Organization Updates: December 2021 2021-12-22T19:34:13.294Z
[Linkpost] Alexander Berger On Philanthropic Opportunities And Normal Awesome Altruism 2021-12-08T22:24:54.049Z
How should Effective Altruists think about Leftist Ethics? 2021-11-27T13:25:37.281Z
A Red-Team Against the Impact of Small Donations 2021-11-24T16:03:40.479Z
[Linkpost] Apply For An ACX Grant 2021-11-12T09:44:36.017Z
Why aren't you freaking out about OpenAI? At what point would you start? 2021-10-10T13:06:40.911Z
What is the role of public discussion for hits-based Open Philanthropy causes? 2021-08-04T20:15:28.182Z
Writing about my job: Internet Blogger 2021-07-19T20:24:31.357Z
Does Moral Philosophy Drive Moral Progress? 2021-07-02T21:22:24.111Z
Launching 60,000,000,000 Chickens: A GiveWell-Style CEA Spreadsheet for Animal Welfare 2021-06-04T21:08:11.200Z
Base Rates on United States Regime Collapse 2021-04-05T17:14:22.775Z
Responses and Testimonies on EA Growth 2021-03-10T23:22:16.613Z
Why Hasn't Effective Altruism Grown Since 2015? 2021-03-09T14:43:01.316Z

Comments

Comment by AppliedDivinityStudies on New interview with SBF on Will MacAskill, "earn to give" and EA · 2022-12-07T15:10:16.531Z · EA · GW

Does EA Forum have a policy on sharing links to your own paywalled writing? E.g. I've shared link posts to my blog, and others have shared link posts to their substacks, but I haven't seen anyone share a link post to their own paid substack before.

Comment by AppliedDivinityStudies on The Repugnant Conclusion Isn't · 2022-09-07T23:09:47.279Z · EA · GW

I think the main arguments against suicide are that it causes your loved ones a lot of harm, and that (for some people) there is a lot of uncertainty in the future. Bracketing really horrible torture scenarios, your life is an option with limited downside risk. So if you suspect your life (really the remaining years of your life) is net-negative, rather than committing suicide you should increase variance, because you can only stand to benefit.

Comment by AppliedDivinityStudies on Critique of MacAskill’s “Is It Good to Make Happy People?” · 2022-08-23T15:05:05.525Z · EA · GW

The idea that "the future might not be good" comes up on the forum every so often, but this doesn't really harm the core longtermist claims. The counter-argument is roughly:
- You still want to engage in trajectory changes (e.g. ensuring that we don't fall to the control of a stable totalitarian state)
- Since the error bars are ginormous and we're pretty uncertain about the value of the future, you still want to avoid extinction so that we can figure this out, rather than getting locked in by a vague sense we have today

Comment by AppliedDivinityStudies on The Repugnant Conclusion Isn't · 2022-08-23T10:29:40.284Z · EA · GW

Yeah, it's difficult to intuit, but I think that's pretty clearly because we're bad at imagining the aggregate harm of billions (or trillions) of mosquito bites. One way to reason around this is to think:
- I would rather get punched once in the arm than once in the gut, but I would rather get punched once in the gut than 10x in the arm
- I'm fine with disaggregating, and saying that I would prefer a world where 1 person gets punched in the gut to a world where 10 people get punched in the arm
- I'm also fine with multiplying those numbers by 10 and saying that I would prefer 10 people PiG to 100 people PiA
- It's harder to intuit this for really really big numbers, but I am happy to attribute that to a failure of my imagination, rather than some bizarre effect where total utilitarianism only holds for small populations
- I'm also fine intensifying the first harm by a little bit so long as the populations are offset (e.g. I would prefer 1 person punched in the face to 1000 people punched in the arm)
- Again, it's hard to continue to intuit this for really extreme harms and really large populations, but I am more willing to attribute that to cognitive failures and biases than to a bizarre ethical rule

Etc etc. 

Comment by AppliedDivinityStudies on The Repugnant Conclusion Isn't · 2022-08-23T10:21:23.829Z · EA · GW

Thanks for the link! I knew I had heard this term somewhere a while back, and may have been thinking about it subconsciously when I wrote this post.

Re:
> For instance, many people wouldn't want to enter solipsistic experience machines (whether they're built around eternal contentment or a more adventurous ideal life) if that means giving up on having authentic relationships with loved ones.


I just don't trust this intuition very much. I think there is a lot of anxiety around experience machines due to:
- Fear of being locked in (choosing to be in the machine permanently)
- Fear that you will no longer be able to tell what's real

And to be clear, I share the intuition that experience machines seem bad, and yet I'm often totally content to play video games all day long because it doesn't violate those two conditions.

So what I'm roughly arguing is: We have some good reasons to be wary of experience machines, but I don't think that intuition does much to generate a belief that the ethical value of a life necessarily requires some kind of nebulous thing beyond experienced utility.
 

Comment by AppliedDivinityStudies on The Repugnant Conclusion Isn't · 2022-08-23T09:47:37.802Z · EA · GW

> people alive today have negative terminal value

This seems entirely plausible to me. A couple of jokes which may help generate an intuition here (1, 2).

You could argue that suicide rates would be much higher if this were true, but there are lots of reasons people might not commit suicide despite experiencing net-negative utility over the course of their lives.

At the very least, this doesn't feel as obviously objectionable to me as the other proposed solutions to the "mere addition paradox".

 

Comment by AppliedDivinityStudies on Most* small probabilities aren't pascalian · 2022-08-09T06:17:12.332Z · EA · GW

The problem (of worrying that you're being silly and getting mugged) doesn't arise when probabilities are merely tiny; it arises when probabilities are tiny and you're highly uncertain about them. We have pretty good bounds in the three areas you listed, but I do not have good bounds on, say, the odds that "spending the next year of my life on AI Safety research" will prevent x-risk.

In the former cases, we have base rates and many trials. In the latter case, I'm just doing a very rough Fermi estimate. Say I have 5 parameters with an order of magnitude of uncertainty on each one; multiplied out, the resulting spread is just really horrendous.
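To illustrate how quickly that compounds (purely hypothetical numbers, reading "an order of magnitude" as a one-decade sigma in log space), here's a quick Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Five independent parameters, each lognormal with roughly an order of magnitude
# of uncertainty (sigma of one decade in log10 space); point estimates set to 1.
n_params, n_samples = 5, 100_000
log10_draws = rng.normal(loc=0.0, scale=1.0, size=(n_samples, n_params))
products = 10 ** log10_draws.sum(axis=1)  # product of the five parameters

low, high = np.percentile(products, [5, 95])
print(f"90% interval spans a factor of roughly {high / low:,.0f}")  # ~7 orders of magnitude
```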

Anyway, I mostly agree with what you're saying, but it's possible that you're somewhat misunderstanding where the anxieties you're responding to are coming from.


 

Comment by AppliedDivinityStudies on Reducing nightmares as a cause area · 2022-07-20T19:46:53.824Z · EA · GW

Thanks, this is interesting. I wrote a bit about my own experiences here:

https://applieddivinitystudies.com/subconscious/

Comment by AppliedDivinityStudies on Punching Utilitarians in the Face · 2022-07-14T01:16:11.493Z · EA · GW

Under mainstream conceptions of physics (as I loosely understand them), the number of possible lives in the future is unfathomably large, but not actually infinite.

Comment by AppliedDivinityStudies on Punching Utilitarians in the Face · 2022-07-14T01:12:25.746Z · EA · GW

Longtermism does mess with intuitions, but it's also not basing its legitimacy on a case from intuition. In some ways, it's the exact opposite: it seems absurd to think that every single life we see today could be nearly insignificant when compared to the vast future, and yet this is what one line of reasoning tells us.

Comment by AppliedDivinityStudies on Punching Utilitarians in the Face · 2022-07-13T18:44:16.266Z · EA · GW

I originally wrote this post for my personal blog and was asked to cross-post here. I stand by the ideas, but I apologize that the tone is a bit out of step with how I would normally write for this forum.

Comment by AppliedDivinityStudies on 300+ Flashcards to Tackle Pressing World Problems · 2022-07-11T21:01:11.862Z · EA · GW

I read the title and thought this was a really silly approach, but after reading through the list, I'm fairly surprised by how sold I am on the concept. So thanks for putting this together!

Minor nit: One concern I still have is over drilling facts into my head which won't be true in the future. For example, instead of:
> The average meat consumption per capita in China has grown 15-fold since 1961

I would prefer:
> Average meat consumption per capita in China grew 15x in the 60 years after 1961

Comment by AppliedDivinityStudies on Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies · 2022-06-24T19:12:36.213Z · EA · GW

This is great, thanks Michael. I wasn't aware of the recent 2022 paper arguing against the Stevenson/Wolfers result. A couple questions:

In this talk (starting around 6:30), Peter Favaloro from Open Phil talks about how they use a utility function that grows logarithmically with income, and how this is informed by Stevenson and Wolfers (2008). If the scaling were substantially less favorable (even in poor countries), that would have some fairly serious implications for their cost-effectiveness analysis. Is this something you've talked to them about?
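For context on why that functional form matters so much (this is just the textbook implication of a log-utility assumption, not anything from Open Phil's internal model):

```latex
U(c) = \log c \quad\Rightarrow\quad \frac{dU}{dc} = \frac{1}{c}
```

so a marginal dollar to someone with 1/100th the income buys roughly 100x the welfare, which is much of what drives the case for transfers to the very poor. If well-being scaled more weakly with income, the estimated gains from raising incomes would shrink accordingly.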

Second, just curious how the Progress Studies folk responded when you gave this talk at the Austin workshop.


 

Comment by AppliedDivinityStudies on Questions to ask Will MacAskill about 'What We Owe The Future' for 80,000 Hours Podcast (possible new audio intro to longtermism) · 2022-06-21T21:18:15.798Z · EA · GW

For some classes of meta-ethical dilemmas, Moral Uncertainty recommends using variance voting, which requires you to know the mean and variance of each theory under consideration.

How is this applied in practice? Say I give 95% weight to Total Utilitarianism and 5% weight to Average Utilitarianism, and I'm evaluating an intervention that's valued differently by each theory. Do I literally attempt to calculate values for variance? Or am I just reasoning abstractly about possible values?
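For concreteness, here's roughly how I picture the mechanics in code (a sketch of variance voting as I understand it, with made-up choiceworthiness numbers purely for illustration):

```python
import numpy as np

# Rows: options under consideration; columns: theories (Total Util, Average Util).
# These choiceworthiness values are invented purely for illustration.
choiceworthiness = np.array([
    [100.0, -2.0],   # the intervention
    [  0.0,  0.0],   # do nothing
])
credences = np.array([0.95, 0.05])

# Variance voting (as I understand it): rescale each theory so its choiceworthiness
# has equal variance across the option set, then take the credence-weighted sum.
centered = choiceworthiness - choiceworthiness.mean(axis=0)
normalized = centered / centered.std(axis=0)
scores = normalized @ credences

print(scores)  # choose the option with the highest score
```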

Comment by AppliedDivinityStudies on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-17T03:53:48.723Z · EA · GW

> If this dynamic leads you to put less “trust” in our decisions, I think that’s a good thing!


I will push back a bit on this as well. I think it's very healthy for the community to be skeptical of Open Philanthropy's reasoning ability, and to be vigilant about trying to point out errors.

On the other hand, I don't think it's great if we have a dynamic where the community is skeptical of Open Philanthropy's intentions. Basically, there's a big difference between "OP made a mistake because they over/underrated X" and "OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants."

 

Comment by AppliedDivinityStudies on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-16T20:40:11.188Z · EA · GW

In general, WSJ reporting on SF crime has been quite bad. In another article they write:

> Much of this lawlessness can be linked to Proposition 47, a California ballot initiative passed in 2014, under which theft of less than $950 in goods is treated as a nonviolent misdemeanor and rarely prosecuted.

This is just not true at all. Every state has some threshold, and California's is actually on the "tough on crime" side of the spectrum.

Shellenberger himself is an interesting guy, though not necessarily in a good way.

Comment by AppliedDivinityStudies on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-16T20:31:05.951Z · EA · GW

Thanks!

Comment by AppliedDivinityStudies on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-16T20:11:51.062Z · EA · GW

> Conversely, if sentences are reduced more than in the margin, common sense suggests that crime will increase, as observed in, for instance, San Francisco.

A bit of a nit since this is in your appendix, but there are serious issues with this reasoning and the linked evidence. Basically, this requires the claims that:
1. San Francisco reduced sentences
2. There was subsequently more crime

1. Shellenberger at the WSJ writes:

> the charging rate for theft by Mr. Boudin’s office declined from 62% in 2019 to 46% in 2021; for petty theft it fell from 58% to 35%.

He doesn't provide a citation, but I'm fairly confident he's pulling these numbers from this SF Chronicle writeup, which is actually citing a change from 2018-2019 to 2020-2021. So right off the bat Shellenberger is fudging the data.

Second, the aggregated data is misleading because there were specific pandemic effects in 2020 unrelated to Boudin's policies. If you look at the DA office's disaggregated data, there is a drop in filing rate in 2020, but it picks up dramatically in 2021. In fact, the 2021 rate is higher than the 2019 rate both for crime overall and for the larceny/theft category. So not only is Shellenberger's claim misleading, it's entirely incorrect.

You can be skeptical of the DA office's data, but note that this is the same source used by the SF Chronicle, and thus by Shellenberger as well. 

2. Despite popular anecdotes, there's really no evidence that crime was actually up in San Francisco, or that it occurred as a result of Boudin's policies.
- Actual reported shoplifting was down from 2019-2020
- Reported shoplifting in adjacent counties was down less than in California as a whole, indicating a lack of "substitution effects" where criminals go where sentences are lighter
- The store closures cited by Shellenberger can't be pinned on increased crime under Boudin because:
A) Walgreens had already announced a plan to close 200 stores back in 2019
B) Of the 8 stores that closed in 2019 and 2020, at least half closed in 2019, making the 2020 closures unexceptional
C) The 2021 store closure rate for Walgreens is actually much lower than comparable metrics, like the closures of sister company Duane Reade in NYC over the same year, or the dramatic drop in Walgreens stock price. It is also not much higher than the historical average of 3.7 store closures per year in SF.

I have a much more extensive writeup on all of this here:
https://applieddivinitystudies.com/sf-crime-2/

Finally, the problem with the "common sense" reasoning is that it goes both ways. Yes, it seems reasonable to think that less punishment would result in more crime, but we can similarly intuit that spending time in prison and losing access to legal opportunities would result in more crime. Or that having your household's primary provider incarcerated would lead to more crime. Etc etc. Yes, we are lacking in high-quality evidence, but that doesn't mean we can just pick which priors to put faith in.

Comment by AppliedDivinityStudies on Announcing a contest: EA Criticism and Red Teaming · 2022-06-03T19:23:30.838Z · EA · GW

I can't speak for everyone, but will quickly offer my own thoughts as a panelist:
1. Short and/or informally written submissions are fine. I would happily award a tweet thread if it was good enough. But I'm hesitant to say "low effort is fine", because I'm not sure what else that implies.
2. It might sound trite, but I think the point of this contest (or at least the reason I'm excited about it) is to improve EA. So if a submission is totally illegible to EA people, it is unlikely to have that impact. On "style of argument" I'll just point to my own backlog of very non-EA writing on mostly non-EA topics.
3. I wouldn't hold it against a submission as a personal matter, and wouldn't dismiss it out of hand, but it's definitely a negative if there are substantive mistakes that could have been avoided using only public information.
 

Comment by AppliedDivinityStudies on Announcing a contest: EA Criticism and Red Teaming · 2022-06-03T19:11:56.860Z · EA · GW

The crucial complementary question is "what percentage of people on the panel are neartermists?"

FWIW, I have previously written about animal ethics, interviewed Open Phil's neartermist co-CEO, and am personally donating to neartermist causes.

Comment by AppliedDivinityStudies on Open Philanthropy's Cause Exploration Prizes: $120k for written work on global health and wellbeing · 2022-06-02T06:28:32.489Z · EA · GW

Are there any limitations on the kinds of feedback we can/should get before submitting? For example, is it okay to:
- Get feedback from an OpenPhil staff member?
- Publish on the forum, get feedback, and make edits before submitting a final draft?
- Submit an unpublished piece of writing which has previously been reviewed?

If so, should reviewers be listed in order to provide clarity on input? Or omitted to avoid the impression of an "endorsement"?

Comment by AppliedDivinityStudies on Thought experiment: If you had 3 months to turn a stressed and unhappy person into a calm and happy one, what meta approach would you take? · 2022-05-09T17:51:15.577Z · EA · GW

Antidepressants do actually seem to work, and I think it's weird that people forget/neglect this. See Scott's review here and a more recent writeup. Those are both on SSRIs; there is also Wellbutrin (see Robert Wiblin's personal experience with it here) and at least a few other fairly promising pharmacological treatments.

I would also read the relevant Lorien Psych articles and classic SSC posts on depression treatments and anxiety treatments.

Since you asked for the meta-approach: I think the key is to stick with each thing long enough to see if it works, but also do actually move on and try other things. 

Comment by AppliedDivinityStudies on Covid memorial: 1ppm · 2022-04-14T20:17:04.951Z · EA · GW

Ideas are like investments: you don't just want a well-diversified portfolio, you want to intentionally hedge against other assets. In this view, the best way to develop a scout's mindset for yourself is to read a wide variety of writers, many of whom will be quite dogmatic. The goal shouldn't be to only read other reasonable people, but to read totally unreasonable people across domains and synthesize their claims into something coherent.

As you correctly note, Graeber is a model thinker in a world of incoherent anarchist/Marxist ramblings. I think our options are to either dismiss the perspective altogether (reasonable, but tragic) or take his factual claims with a grain of salt while acknowledging his works as a fountain of insight.

I would happily accept the criticism if there were any anarchist/Marxist thinker alive today reasoning more clearly than Graeber, but I don't think there is.

Comment by AppliedDivinityStudies on EA should taboo "EA should" · 2022-03-29T18:47:43.885Z · EA · GW

Strongly agree on this. It's been a pet peeve of mine to hear exactly these kinds of phrases. You're right that it's nearly a passive formulation, and frames things in a very low-agentiness way.

At the same time, I think we should recognize the phrasing as a symptom of some underlying feeling of powerlessness. Tabooing the phrase might help, but won't eradicate the condition. E.g.:
- If someone says "EA should consider funding North Korean refugees"
- You or I might respond "You should write up that analysis! You should make that case!"
- But the corresponding question is: Why didn't they feel like they could do that in the first place? Is it just because people are lazy? Or were they uncertain that their writeup would be taken seriously? Maybe they feel that EA decision making only happens through "official channels" and random EA Forum writers not employed by large EA organizations don't actually have a say?

 

Comment by AppliedDivinityStudies on Podcast: Samo Burja on the war in Ukraine, avoiding nuclear war and the longer term implications · 2022-03-11T19:42:03.736Z · EA · GW

FYI Samo's forecasts on this were pretty wrong:
https://astralcodexten.substack.com/p/ukraine-warcasting?s=r

 

Comment by AppliedDivinityStudies on What is the new EA question? · 2022-03-03T18:45:26.515Z · EA · GW

I would add that we should be trying to increase the pool of resources. This includes broad outreach like Giving What We Can and the 80k podcast, as well as convincing EAs to be more ambitious, direct outreach to very wealthy people, and so on.

 

Comment by AppliedDivinityStudies on EA Organization Updates: February 2022 · 2022-03-01T15:46:32.726Z · EA · GW

Oh man, fixed, thank you.

Comment by AppliedDivinityStudies on Some thoughts on vegetarianism and veganism · 2022-02-23T01:17:16.648Z · EA · GW

It sounds wild, but AFAIK, the cotton gin and maybe some other forms of automation actually made slavery more profitable! 

From Wikipedia:
> Whitney's gin made cotton farming more profitable, so plantation owners expanded their plantations and used more slaves to pick the cotton. Whitney never invented a machine to harvest cotton, it still had to be picked by hand. The invention has thus been identified as an inadvertent contributing factor to the outbreak of the American Civil War.

 

Comment by AppliedDivinityStudies on Future-proof ethics · 2022-02-03T23:42:09.633Z · EA · GW

> across the board the ethical trend has been an extension of rights, franchise, and dignity to widening circles of humans

 

I have two objections here.
1) If this is the historical backing for wanting to future-proof ethics, shouldn't we just do the extrapolation from there directly, instead of thinking about systematizing ethics? In other words, just extend rights to all humans now and be done with it.
2) The idea that the ethical trend has been a monotonic widening is a bit self-fulfilling, since there are agents we no longer consider to be morally important. I.e. the moral circle has narrowed to exclude ancestors, ghosts, animal worship, etc. See Gwern's argument here:
https://www.gwern.net/The-Narrowing-Circle

Comment by AppliedDivinityStudies on Idea: Red-teaming fellowships · 2022-02-03T10:07:39.512Z · EA · GW

One really useful way to execute this would be to bring in more outside non-EA experts in relevant disciplines. So have people in development econ evaluate GiveWell (great example of this here), engage people like Glen Weyl to see how EA could better incorporate market-based thinking and mechanism design, engage hardcore anti-natalist philosophers (if you can find a credible one), engage anti-capitalist theorists skeptical of welfare and billionaire philanthropy, etc.

One specific pet project I'd love to see funded is more EA history. There are plenty of good legitimate expert historians, and we should be commissioning them to write for example on the history of philanthropy (Open Phil did a bit here), better understanding the causes of past civilizations' ruin, better understanding intellectual moral history and how ideas have progressed over time, and so on. I think there's a ton to dig into here, and think history is generally underestimated as a perspective (you can't just read a couple secondary sources and call it a day).

Comment by AppliedDivinityStudies on Idea: Red-teaming fellowships · 2022-02-03T10:02:08.512Z · EA · GW

I agree that it's important to ask the meta questions about which pieces of information even have high moral value to begin with. OP gives, as an example, the moral welfare of shrimp. But who cares? EA puts so little money and effort into this already, on the assumption that they probably are valuable. Even if you demonstrated that they weren't, or forced an update in that direction, the overall amount of funding shifted would be fairly small.

You might worry that all the important questions are already so heavily scrutinized as to bear little low-hanging fruit, but I don't think that's true. EAs are easily nerd-sniped, and there isn't any kind of "efficient market" for prioritizing high-impact questions. There's also a bit of an intimidation factor here, where it feels wrong to challenge someone like MacAskill or Bostrom on really critical philosophical questions. But that's precisely where we should be focusing more attention.

Comment by AppliedDivinityStudies on Idea: Red-teaming fellowships · 2022-02-03T09:58:20.793Z · EA · GW

This is a good idea, but I think you might find that there's surprisingly little EA consensus. What's the likelihood that this is the most important century? Should we be funding near-term health treatments for the global poor, or does nothing really matter aside from AI Safety? Is the right ethics utilitarian? Person-affecting? Should you even be a moral realist?

As far as I can tell, EAs (meaning both the general population of uni club attendees and EA Forum readers, and the "EA elite" who hold positions of influence at top EA orgs) disagree substantially amongst themselves on all of these really fundamental and critical issues.

What EAs really seem to have in common is an interest in doing the most good, thinking seriously and critically about what that entails, and then actually taking those ideas seriously and executing. As Helen once put it, Effective Altruism is a question, not an ideology.

So I think this could be valuable in theory, but I don't think your off-the-cuff examples do a good job of illustrating the potential here. For pretty much everything you list, I'm pretty confident that many EAs already disagree, and that these are not actually matters of group-think or even local consensus.

Finally, I think there are questions which are tricky to red-team because of how much conversation around them is private, undocumented, or otherwise obscured. So if you were conducting this exercise, I don't think it would make sense as an entry-level thing; I think you would have to find people who are already fairly knowledgeable.

Comment by AppliedDivinityStudies on Future-proof ethics · 2022-02-02T21:54:48.639Z · EA · GW

Do you have a stronger argument for why we should want to future-proof ethics? From the perspective of a conservative Christian born hundreds of years ago, maybe today's society is very sinful. What would compel them to adopt an attitude such that it isn't?

Similarly, say in the future we have moral norms that tolerate behavior we currently see as reprehensible. Why would we want to adopt those norms? Should we assume that morality will make monotonic progress, just because we're repulsed by some past moral norms? That doesn't seem to follow. In fact, it seems plausible that morality has simply shifted. From the outside view, there's nothing to differentiate "my morality is better than past morality" from "my morality is different than past morality, but not in any way that makes it obviously superior".

You can imagine, for example, a future with sexual norms we would today consider reprehensible. Is there any reason I should want to adopt them?

Comment by AppliedDivinityStudies on Future-proof ethics · 2022-02-02T21:48:43.712Z · EA · GW

One candidate you don't mention is:

- Extrapolate from past moral progress to make educated guesses about where moral norms will be in the future.

On a somewhat generous interpretation, this is the strategy social justice advocates have been using. You look historically, see that we were wrong to treat women, minorities, etc. as less worthy of moral consideration, and try to guess which currently subjugated groups will in the future be seen as worthy of equal treatment. This gets you to feel more concern for trans people, people with different sexual preferences (including ones that are currently still taboo), poor people, disabled people, etc., and eventually maybe animals too.

Another way of phrasing that is:
- Identify which groups will be raised in moral status in the future, and work proactively to raise their status today.

Will MacAskill has an 80k podcast titled "Our descendants will probably see us as moral monsters". One way to interpret the modern social justice movement is that it advocates for adopting a speculative future ethics, such that we see each other as moral monsters today. This has led to mixed results.

 

Comment by AppliedDivinityStudies on Research idea: Evaluate the IGM economic experts panel · 2022-01-19T21:51:43.487Z · EA · GW

If you read the expert comments, very often they complain that the question is poorly phrased. It's typically about wording like "would greatly increase" where there's not even an attempt to define "greatly". So if you want to improve the panel or replicate it, that is my #1 recommendation.

...My #2 recommendation is to create a Metaculus market for every IGM question and see how it compares.

Comment by AppliedDivinityStudies on Is EA over-invested in Crypto? · 2022-01-16T16:21:24.558Z · EA · GW

> At what level of payoff is that bet worth it? Lets say the bet is a 50/50 triple-or-nothing bet. So, either EA ends up with half its money, or ends up with double. I'd guess (based on not much) that right now losing 50% of EA's money is more negative than doubling EA's money is positive.


There is an actual correct answer, at least in the abstract. According to the Kelly criterion, on a 50/50 triple-or-nothing bet, you should put down 25% of your bankroll.

Say EA is now at around 50/50 Crypto/non-Crypto: what kind of returns would justify that allocation? At 50/50 odds, there's actually no multiple that makes the math work out.

But that's just for the strict case we're discussing. See the section on "Investment formula" for what to do about partial losses.

Finally, instead of a 50/50 triple-or-nothing bet, we can model this as a 75/25 double-or-nothing bet (same EV as your bet). In that case, we get that a 50/50 allocation is optimal.

But note that the Kelly criterion is optimizing for log(wealth)! Log(wealth) approximates utility in individuals, but not in aggregate. Since EA is trying to give all its money away, the marginal returns slope off much more gradually. (See some very rough estimates here.) If you're just optimizing for wealth, you would be okay with a riskier allocation.
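For concreteness, a minimal sketch of the Kelly arithmetic above (the bet structures are the hypotheticals from this thread, not real estimates of crypto returns):

```python
def kelly_fraction(p_win: float, net_odds: float) -> float:
    """Optimal Kelly fraction for a simple win/lose bet.

    p_win: probability of winning; net_odds: net gain per unit staked on a win,
    i.e. b in f* = (b*p - q) / b.
    """
    q = 1.0 - p_win
    return (net_odds * p_win - q) / net_odds

# 50/50 triple-or-nothing: a win returns 3x the stake, i.e. a net gain of 2x.
print(kelly_fraction(0.5, 2.0))   # 0.25 -> bet 25% of the bankroll

# Same EV modelled as a 75/25 double-or-nothing bet (net gain of 1x the stake).
print(kelly_fraction(0.75, 1.0))  # 0.5  -> a 50/50 allocation is optimal

# At 50/50 odds, f* = 0.5 - 0.5/net_odds, so no finite payout reaches a 50% allocation.
```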

BTW, it's not just "over-invested in X"; you have to think about the entire portfolio. So given that almost all EA money is either Sam's or Dustin's, you have to consider the correlation between Crypto and FB stock.

I'll also add that you have to consider all future EA money in determining what % of the bankroll we're using.

It doesn't really matter though, since EA doesn't "own" or "control" Sam's wealth in any meaningful way.

Comment by AppliedDivinityStudies on Bryan Caplan on EA groups · 2022-01-16T15:39:09.416Z · EA · GW

People like to hear nice things about themselves from prominent people, and Bryan is non-EA enough to make it feel not entirely self-congratulatory. 

Comment by AppliedDivinityStudies on Is there a market for products mixing plant-based and animal protein? Is advocating for "selective omnivores" / reducitarianism / mixed diets neglected - with regards to animal welfare? · 2022-01-05T00:38:02.950Z · EA · GW

A while back I looked into using lard and/or bacon in otherwise vegan cooking. The idea being that you could use a fairly small amount of animal product to great gastronomical effect. One way to think about this is to consider whether you would prefer:
A: Rice and lentils with a tablespoon of bacon
B: Rice with 0.25lb ground beef

I did the math on this, and it works out surprisingly poorly for lard. You're consuming 1/8th as much mass, which sounds good, except that by some measures, producing pig induces 4x as much suffering as producing beef per unit of mass. So it's a modest 2x gain, but nothing revolutionary.
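Spelling out that arithmetic (the 4x pig-vs-beef suffering-per-mass figure is taken from the linked analysis):

```latex
\underbrace{\tfrac{1}{8}}_{\text{relative mass}} \times \underbrace{4}_{\text{pig vs. beef suffering per unit mass}} = \tfrac{1}{2}
```

i.e. roughly a 2x reduction in suffering, not an order of magnitude.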

On the other hand, the math works out really favorably for butter. Using that same linked analysis, if you can replace 100g beef with lentils fried in 10g butter, you're inducing ~150x less suffering.

One upshot of this is that almost all the harm averted by consuming vegan baked goods instead of conventional ones is from avoiding the eggs, rather than the butter. So I would really love to see a "veganish" bakeshop that uses butter but not eggs.
 

Comment by AppliedDivinityStudies on Bayesian Mindset · 2022-01-04T21:29:30.060Z · EA · GW

The tension between overconfidence and rigorous thinking is overrated:

Swisher: Do you take criticism to heart correctly?

Elon: Yes.

Swisher: Give me an example of something if you could.

Elon: How do you think rockets get to orbit?

Swisher: That’s a fair point.

Elon: Not easily. Physics is very demanding. If you get it wrong, the rocket will blow up. 
Cars are very demanding. If you get it wrong, a car won’t work. Truth in engineering and science is extremely important.

Swisher: Right. And therefore?

Elon: I have a strong interest in the truth.

Source and previous discussion.

Comment by AppliedDivinityStudies on World's First Octopus Farm - Linkpost · 2021-12-22T23:40:48.562Z · EA · GW

Okay sorry, maybe I'm having a stroke and don't understand. The original phrasing and new phrasing look identical to me.

Comment by AppliedDivinityStudies on World's First Octopus Farm - Linkpost · 2021-12-22T22:44:09.963Z · EA · GW

Oh wait, did you already edit the original comment? If not I might have misread it. 

Comment by AppliedDivinityStudies on World's First Octopus Farm - Linkpost · 2021-12-22T19:09:53.221Z · EA · GW

I agree that it's pretty likely octopi are morally relevant, though we should distinguish between "30% likelihood of moral relevance" and "moral weight relative to a human".

Comment by AppliedDivinityStudies on World's First Octopus Farm - Linkpost · 2021-12-22T19:02:43.298Z · EA · GW

I don't have anything substantive to add, but this is really really sad to hear. Thanks for sharing.

Comment by AppliedDivinityStudies on Bayesian Mindset · 2021-12-22T00:31:09.093Z · EA · GW

> The wrong tool for many.... Some people accomplish a lot of good by being overconfident.

But Holden, rationalists should win. If you can do good by being overconfident, then Bayesian habits can and should endorse overconfidence.

Since "The Bayesian Mindset" broadly construed is all about calibrating confidence, that might sound like a contradiction, but it shouldn't. Overconfidence is an attitude, not an epistemic state.

Comment by AppliedDivinityStudies on A Case for Improving Global Equity as Radical Longtermism · 2021-12-18T08:54:01.348Z · EA · GW

~50% of Open Phil spending is on global health, animal welfare, criminal justice reform, and other "short-termist" and egalitarian causes.

This is their recent writeup on one piece of how they think about disbursing funds now vs. later: https://www.openphilanthropy.org/blog/2021-allocation-givewell-top-charities-why-we-re-giving-more-going-forward

Comment by AppliedDivinityStudies on EA megaprojects continued · 2021-12-09T00:10:30.880Z · EA · GW

This perspective strikes me as extremely low-agentiness.

Donors aren't this wildly unreachable class of people, they read EA forum, they have public emails, etc. Anyone, including you, can take one of these ideas, scope it out more rigorously, and write up a report. It's nobody's job right now, but it could be yours.

Comment by AppliedDivinityStudies on What are some success stories of grantmakers beating the wider EA community? · 2021-12-08T11:04:17.218Z · EA · GW

Sure, but outside of OpenPhil, GiveWell is the vast majority of EA spending, right?

Not a grant-making organization, but as another example, the Rethink Priorities report on Charter Cities seemed like fairly "traditional EA"-style analysis.

Comment by AppliedDivinityStudies on What are some success stories of grantmakers beating the wider EA community? · 2021-12-08T11:00:50.888Z · EA · GW

There's a list of winners here, but I'm not sure how you would judge counterfactual impact. With a lot of these, it's difficult to demonstrate that the grantee would have been unable to do their work without the grant.

At the very least, I think Alexey was fairly poor when he received the grant and would have had to get a day job otherwise.

Comment by AppliedDivinityStudies on What are some success stories of grantmakers beating the wider EA community? · 2021-12-07T06:13:58.602Z · EA · GW

I think the framing of good grantmaking as "spotting great opportunities early" is precisely how EA gets beat.

Fast Grants seems to have been hugely impactful for a fairly small amount of money. The trick is that the grantees weren't even asking: there was no institution to give to, and no cost-effectiveness estimate to run. It's a somewhat more entrepreneurial approach to grantmaking. It's not that EA thought it wasn't very promising, it's that EA didn't even see the opportunity.

I think it's worth noting that a ton of OpenPhil's portfolio would score really poorly along conventional EA metrics. They argue as much in this piece. So of course the community collectively gets credit because OpenPhil identifies as EA, but it's worth noting that their "hits-based giving" approach diverges substantially from more conventional EA-style (quantitative QALY/cost-effectiveness) analysis, and worth asking what that should mean for the movement more generally.

Comment by AppliedDivinityStudies on Liberty in North Korea, quick cost-effectiveness estimate · 2021-12-02T01:29:53.396Z · EA · GW

Saying "I'd rather die than live like that" is distinct from "this is worse than non-existence." Can you clarify?

> Even the implication that moving a NK person to SK is better than saving 10 SK lives is sort of implausible - for both NKs and SKs alike.

I don't know what they would find implausible. To me it seems plausible.