Comments
If you are working with fast feedback loops, you can make things and then show people the things. If you're working with slow feedback loops, you have nothing to show and people don't really know what you're doing. The former intuitively seems much better if your goal is status-seeking (which is somewhat my goal in practice, even if ideally it shouldn't be).
As an example, let's say you have three interventions with that distribution, and they turn out to be perfectly distributed: you have total cost = $11,010 and total effect = 3, so, as a funder that cares about expected value, $3,670 is the value you care about.
That's true if you spend money that way, but why would you spend money that way? Why would you spend less on the interventions that are more cost-effective? It makes more sense to spend a fixed budget. Given a 1/3 chance that the cost per life saved is $10, $1000, or $10,000, and you spend $29.67, then you save 1 life in expectation (= 1/3 * (29.67 / 10 + 29.67 / 1000 + 29.67 / 10,000)).
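For concreteness, here's a quick numerical check of that fixed-budget calculation (the $29.67 budget and the three equally likely costs are the numbers above; nothing else is assumed):

```python
# Quick check of the fixed-budget expectation described above:
# three equally likely costs per life saved, and a fixed budget.
costs = [10, 1_000, 10_000]  # dollars per life saved, each with probability 1/3
budget = 29.67               # dollars spent regardless of which cost obtains

expected_lives = sum((budget / c) / len(costs) for c in costs)
print(expected_lives)  # ~1.0 life saved in expectation
```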
Not sure how useful it is as an intuition pump, but here is an even more extreme/absurd example: if there is a 0.001% chance that the cost is 0 and a 99.999% chance that the cost is $1T, mean(effect/cost) would be ∞.
That's a feature, not a bug. If something has positive value and zero cost, then you should spend zero dollars/resources to invoke the effect infinitely many times and produce infinite value (with probability 0.00001).
If opportunities have consistently diminishing returns (i.e. the second derivative is negative), then it's convex. Giving opportunities may or may not actually be convex.
I could be missing something but this sounds wrong to me. I think the actual objective is mean(effect / cost). effect / cost is the thing you care about, and if you're uncertain, you should take the expectation over the thing you care about. mean(cost / effect) can give the wrong answer because it's the reciprocal of what you care about. mean(cost) / mean(effect) is also wrong unless you have a constant cost. Consider for simplicity a case of constant effect of 1 life saved, and where the cost could be $10, $1000, or $10,000. mean(cost) / mean(effect) = $3670 per life saved, but the correct answer is 0.0337 lives saved per dollar = $29.67 per life saved.
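A minimal sketch of the comparison above, using only the numbers in the comment (a constant effect of 1 life and three equally likely costs):

```python
# Compare the two estimators discussed above.
costs = [10, 1_000, 10_000]  # dollars, each with probability 1/3
effect = 1                   # lives saved, constant

wrong = (sum(costs) / len(costs)) / effect           # mean(cost) / mean(effect) = $3,670 per life
right = sum(effect / c for c in costs) / len(costs)  # mean(effect / cost) ≈ 0.0337 lives per dollar
print(wrong, 1 / right)                              # 3670.0, ~29.67 dollars per life
```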
"the intuitively unacceptable implication that saving lives in richer countries would, other things being equal, be more valuable on the grounds that such people are richer and so better off."
FWIW my intuition is that this implication is pretty obviously correct—would I rather live 1 year of life as a wealthy person in the United States, or as a poor person in Kenya? Obviously I'd prefer the former.
In practice, the difference in welfare is almost always swamped by the difference in how much a marginal dollar can improve each person's life, so it's better to help the worse-off person. But all else equal, it would be better to extend the life of the better-off person.
I was not previously familiar with the term cash-on-cash, but it looks like you're saying you can earn a 20% return if you use ~5:1 leverage. In that case, sure, but that's a lot of leverage, and 20% is actually a pretty bad return for that much leverage. Historically, stocks at ~5:1 leverage would have returned about 40%.
I don't think you can get anything remotely close to 20% return because nothing ever reliably earns a 20% return. The real estate market in aggregate has historically performed about as well as equities with somewhat lower risk. An individual's real estate investments will be riskier than equities due to lack of diversification. For a good post on this, see https://rhsfinancial.com/2019/05/01/better-investment-stocks-real-estate/
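To make the leverage arithmetic explicit, here's a rough sketch. The ~10% unlevered equity return and the 2.5% borrowing rate are illustrative assumptions on my part, not figures from the comments above:

```python
# Rough illustration of why ~5:1 leverage on equities would historically have
# returned on the order of 40%, under assumed (not sourced) inputs.
leverage = 5.0
asset_return = 0.10  # assumed long-run unlevered equity return
borrow_rate = 0.025  # assumed cost of borrowing the levered portion

levered_return = leverage * asset_return - (leverage - 1) * borrow_rate
print(levered_return)  # 0.40, i.e. ~40%, before fees and ignoring volatility drag
```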
I have some feedback on this post that you should feel free to ignore.
In my experience, when you ask someone for feedback, there's about a 10% chance that they will bring up something really important that you missed. And you don't know who's going to notice the thing. So even if you've asked 9 people for feedback and none of them said anything too impactful, maybe the 10th will say something critically important.
Climate change is one of the most popular global concerns today. However, according to the principles of Effective Altruism, a philosophical and social movement that applies reason and evidence to philanthropy, climate change should not be our top global priority.
This phrasing makes it sound like you're saying, "Normal people would think climate change is important, but if you're a weirdo who holds these 'effective altruist' values, then you don't think it's important." But your actual claim is more like, "According to normal people's values, climate change isn't as important as they think because they're empirically wrong about how bad it will be." I would remove the reference to EA from the abstract, although I think it's fine to keep it in the first section as an explanation for why you started thinking about the impact of climate change.
I don't believe biodiversity is an important cause area, for basically two reasons:
- Species themselves are not inherently valuable. The experiences of individual conscious animals are what's valuable, and the welfare of wild animals is basically orthogonal to biodiversity, at least as far as anyone can tell—even if biodiversity and wild animal welfare are positively correlated, I've never seen a good argument to that effect, and surely increasing biodiversity isn't the best way to improve wild animal welfare.
- You could perhaps argue that loss of biodiversity poses an existential threat to humanity, which matters more for the long-run future than wild animal welfare. But it seems like a very weak x-risk compared to things like AGI or nuclear war.
Most people who prioritize biodiversity (IMO) don't seem to understand what actually matters, and they act as if a species is a unit of inherent value, when it isn't—the unit of value is an individual's conscious experience. If you wanted to argue that biodiversity should be a high priority, you'd have to claim either that (1) increasing biodiversity is a particularly effective way of improving wild animal welfare or (2) loss of biodiversity constitutes a meaningful existential risk. I've never seen a good argument for either of those positions, but an argument might exist.
(Or you could argue that biodiversity is very important for some third reason, but it seems unlikely to me that there could be any third reason that's important enough to be worth spending EA resources on.)
thinking we could reliably plan and run them when we don't even know most species involved in them
This argument seems symmetric to me. If you support decreasing biodiversity, you're claiming that we can reliably decrease it. If you support increasing biodiversity, you're claiming that we can reliably increase it. So the parent comment and OP are both making the same assumption—that it's possible in principle to reliably affect biodiversity one way or the other. (Which I think is true—we have a pretty good sense that certain activities affect biodiversity, e.g., cutting down rainforests decreases it.)
If you want to take as little risk as possible, you're right that cash is not the safest investment because it's vulnerable to inflation. It would be safer on a real basis to invest in something like Harry Browne's Permanent Portfolio, which is 25% cash, 25% stocks, 25% Treasury bonds, 25% gold. Just make sure your investments are liquid enough that you can sell them quickly if you need to.
IMO you should be prepared for the stock market to fall 50%.
It's the same as the standard notion in that you're hedging something. It's different in that the thing you're hedging isn't a security. If you wanted to, you could talk about it in terms of the beta between the hedge and the mission target.
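For concreteness, this is just the standard beta definition with the mission-relevant variable standing in for a benchmark security (the notation is mine, not from the original discussion):

```latex
\beta_{\text{hedge}} = \frac{\operatorname{Cov}(r_{\text{hedge}},\, r_{\text{target}})}{\operatorname{Var}(r_{\text{target}})}
```

where r_hedge is the hedge's return and r_target is some measure of movement in the thing the mission cares about.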
I use the LessWrong anti-kibitzer to hide names. All you have to do to make it work on the EA Forum is change the URL from lesswrong.com to forum.effectivealtruism.org.
A personal example: I wrote Should Global Poverty Donors Give Now or Later? and then later realized my approach was totally wrong.
To piggyback on this, "with the resources available to us" is tautologically true. The mission statement would have identical meaning if it were simply "Our mission is to help others as much as we can."
Taking a step back, I don't really like the concept of mission statements in general. I think they almost always communicate close to zero information, and organizations shouldn't have them.
I read this post kind of quickly, so apologies if I'm misunderstanding. It seems to me that this post's claim is basically:
- Eliezer wrote some arguments about what he believes about AI safety.
- People updated toward Eliezer's beliefs.
- Therefore, people defer too much to Eliezer.
I think this is dismissing a different (and much more likely IMO) possibility, which is that Eliezer's arguments were good, and people updated based on the strength of the arguments.
(Even if his recent posts didn't contain novel arguments, the arguments still could have been novel to many readers.)
That being said, I don't think an Iran-Israel exchange would constitute an existential risk, unless it then triggered a global nuclear war.
I wouldn't sell yourself short. IMO, any nuclear exchange would dramatically increase the probability of a global nuclear war, even if the probability is still small by non-xrisk standards.
Some people have told me (probably as a joke) that the best way to improve wild animal welfare is to invent AGI and let the AGI figure it out.
I believe this, not as a joke. But I do agree with you that this requires solving the broader alignment problem and also ensuring that the AGI cares about all sentient beings.
Viewed in this light, the absence of any competing explicitly beneficentrist movements is striking. EA seems to be the only game in town for those who are practically concerned to promote the general good in a serious, scope-sensitive, goal-directed kind of way.
Before EA, I think there were at least two such movements:
- a particular subset of the animal welfare movement that cared about effectiveness, e.g., focusing on factory farming over other animal welfare issues explicitly because it's the biggest source of harm
- AI safety
Both are now broadly considered to be part of the EA movement.
Thank you for this! I had been trying to solve this exact problem recently, and I wasn't sure if I was doing it right. And this spreadsheet is much more convenient than the way I was doing it.
The hyperlink on the word "this" (in both instances) is broken. I don't see how to get to the calculator.
Eliezer said something similar, and he seems similarly upset about it: https://twitter.com/ESYudkowsky/status/1446562238848847877
(FWIW I am also upset about it, I just don't know that I have anything constructive to say)
Looking at the Decade in Review, I feel like voters systematically over-rate cool but ultimately unimportant posts, and systematically under-rate complicated technical posts that have a reasonable probability of changing people's actual prioritization decisions.
Example: "Effective Altruism is a Question (not an ideology)", the #2 voted post, is a very cool concept and I really like it, but ultimately I don't see how it would change anyone's important life decisions, so I think it's overrated in the decade review.
"Differences in the Intensity of Valenced Experience across Species", the #35 voted post (with 1/3 as many votes as #2), has a significant probability of changing how people prioritize helping different species, which is very important, so I think it's underrated.
(I do think the winning post, "Growth and the case against randomista development", is fairly rated because if true, it suggests that all global-poverty-focused EAs should be behaving very differently.)
This pattern of voting probably happens because people tend to upvote things they like, and a post that's mildly helpful for lots of people is easier to like than a post that's very helpful for a smaller number of people.
(For the record, I enjoy reading the cool conceptual posts much more than the complicated technical posts.)
Thanks, I hadn't seen this previous post!
I will give an example of one of my own failed projects: I spent a couple months writing Should Global Poverty Donors Give Now or Later? It's an important question, and my approach was at least sort of correct, but it had some flaws that made it pretty much useless.
How quickly can campaigns spend money? Can they reasonably make use of new donations within less than 8 days?
Sounds plausible. Some data: The PhilPapers survey found that 31% of philosophers accept or lean toward consequentialism, vs. 32% deontology and 37% virtue ethics. The ratios are about the same if instead of looking at all philosophers, you look at just applied ethicists or normative ethicists.
I don't know of any surveys on normative views of philosophy-adjacent people, but I expect that (e.g.) economists lean much more consequentialist than philosophers. Not sure what other fields one would consider adjacent to philosophy. Maybe quant finance?
You could do something very similar by having one person short a liquid security with low borrowing costs (like SPY maybe) and have the other person buy it.
The buyer will tend to make more money than the short seller, so you could find a pair of securities with similar expected return (e.g., SPY and EFA) and have each person buy one and short the other.
You could also buy one security and short another without there being a second person. But I don't think this is an efficient use of capital—it's better to just buy something with good expected return.
Is it possible to do the most good while retaining current systems (especially economic)? What in these systems needs to be transformed?
This question is already pretty heavily researched by economists. There are some known answers (immigration liberalization would be very good) and some unknowns (how much is the right amount of fiscal stimulus in recessions?). For the most part, I don't think there's much low-hanging fruit in terms of questions that matter a lot but haven't been addressed yet. Global Priorities Institute does some economics research, IMO that's the best source of EA-relevant and neglected questions of this type.
As a positive example, 80,000 Hours does relatively extensive impact evaluations. The most obvious limitation is that they have to guess whether any career changes are actually improvements, but I don't see how to fix that—determining the EV of even a single person's career is an extremely hard problem. IIRC they've done some quasi-experiments but I couldn't find them from quickly skimming their impact evaluations.
A related thought: If an org is willing to delay spending (say) $500M/year due to reputational/epistemic concerns, then it should easily be willing to pay $50M to hire top PR experts to figure out the reputational effects of spending at different rates.
(I think delays in spending by big orgs are mostly due to uncertainty about where to donate, not about PR. But off the cuff, I suspect that EA orgs spend less than the optimal amount on strategic PR (as opposed to "un-strategic PR", e.g., doing whatever the CEO's gut says is best for PR).)
The link to "Unjournal" is broken, it goes to https://forum.effectivealtruism.org/posts/kftzYdmZf4nj2ExN7/bit.ly/eaunjournal instead of bit.ly/eaunjournal.
FWIW my intuition is that if you have a name for a thing, it means the opposite of that is the default. If there's a special term for "longtermist", that means people are not longtermists by default (which I think is basically true—most people are not longtermists, and longtermism is kind of a weird position (although I do happen to agree with it)). Sort of like how EAs are called EAs, but there's no word for people who aren't EAs, because being not-EA is the default.
Thanks for the heads up, it should be working again now.
FWIW I would not be offended if someone said Scott's writing is better than mine. Scott's writing is better than almost everyone's.
Your comment inspired me to work harder to make my writings more Scott-like.
Yeah, the two things are orthogonal as far as I can see. The person-affecting view is perfectly consistent with either a zero or a nonzero pure time preference.
I don't know of any EAs or philosophers with a nonzero pure time preference, but it's pretty common to believe that creating new lives is morally neutral. Someone who believes this might plausibly be a short-termist. I have a few friends who are short-termist for that reason.
In addition to what Brendon said, I'd say that finance best practices for EAs are mostly the same as best practices for anyone else. I like the Bogleheads wiki as a good resource for beginners.
IMO you can get most of the benefits of investing just by following best practices. If you want to take it further, you can follow some of the tips in the articles Brendon linked, or read my post Asset Allocation and Leverage for Altruists with Constraints, which gives my best guess as to how EAs should invest differently than most people.
The most prominent example I've seen recently is Frank Abagnale, the real-life protagonist of the supposedly-nonfiction movie Catch Me If You Can. He basically fabricated his entire life story and (AFAICT) makes a living off appearances where he tells it. He still regularly gets paid to do this, even though it's pretty well-documented that he's lying about almost everything.
Thanks for pointing this out! I updated the post.
I haven't drug-lorded personally, but I've watched Breaking Bad, and my understanding of the general process is
customers buy drugs in cash -> street dealers kick up to managers -> managers kick up to drug lords
so the drug lords end up accumulating piles of cash. Hard to convert cash into crypto so I think it would be better if CEA could directly receive cash.
Maybe a drug lord mega-donor could donate a storage unit to CEA, and that storage unit happens to be filled with cash? That's probably better than a direct cash donation, because the drug lord would have to report the cash donation on their taxes.
EA-aligned drug lord can solve this problem by donating colossal wonga to charity.
How capable are charities at accepting large cash donations? If this is an issue, maybe CEA could serve as an intermediary to redistribute drug lord cash to other charities, I know they've done similar things for e.g. helping new EA charities that aren't yet officially registered.
This isn't a particularly deep or informed take, but my perspective on it is that the "misinformation problem" is similar to what Scott called the cowpox of doubt:
What annoys me about the people who harp on moon-hoaxing and homeopathy – without any interest in the rest of medicine or space history – is that it seems like an attempt to Other irrationality.
It’s saying “Look, over here! It’s irrational people, believing things that we can instantly dismiss as dumb. Things we feel no temptation, not one bit, to believe. It must be that they are defective and we are rational.”
But to me, the rationality movement is about Self-ing irrationality.
It is about realizing that you, yes you, might be wrong about the things that you’re most certain of, and nothing can save you except maybe extreme epistemic paranoia.
10 years ago, it was popular to hate on moon-hoaxing and homeopathy, now it's popular to hate on "misinformation". Fixating on obviously-wrong beliefs is probably counterproductive to forming correct beliefs on important and hard questions.
Yeah I feel the same way, I wonder if there's a good fix for that. Given the current setup, long effortposts are usually only of interest to a small % of people, so they don't get as many upvotes.
I know it's a joke, but if you want to build status, short posts are much better than long posts.
Which is more impressive: the millionth 200-page dissertation published this year, or John Nash's 10-page dissertation?
Which is more impressive: the latest complicated math paper, or Conway & Soifer's two-word paper?
I like when writing advice is self-demonstrating.
In response to this comment, I wrote a handy primer: https://mdickens.me/2022/03/18/altruistic_investors_care_about_aggregate_altruistic_portfolio/
That's a complicated question, but in short, if you believe that there will be better donation opportunities in the future, you might use a DAF.