When you say “working with African leaders”, I worry that in many countries that means “paying bribes which prop up dictatorships and fund war.” How can we measure the extent to which money sent to NGOs in sub-Saharan Africa is redirected toward harmful causes via taxes, bribes, or corruption?
I’d like to push back a bit on that - it’s so common in the EA world to say, if you don’t believe in malaria nets, you must have an emotional problem. But there are many rational critiques of malaria nets. Malaria nets should not be this symbol where believing in them is a core part of the EA faith.
I think we should move away from messaging like “Action X only saves 100 lives. Spending money on malaria nets instead would save 10000 lives. Therefore action X sucks.” Not everyone trusts the GiveWell numbers, and saving 100 lives really is valuable by any absolute measure.
I understand why doctors might come to EA with a bad first impression given the anti-doctor sentiment. But we need doctors! We need doctors to help develop high-impact medical interventions, design new vaccines, work on anti-pandemic plans, and so many other things. When doctors ask “what is the most good I can do with my work?”, we should have an answer that is not merely asking them to donate money.
It is really annoying for Flynn to be perceived as “the crypto candidate”. Hopefully future donations encourage candidates to position themselves more explicitly as favoring EA ideas. The core logic that we should invest more money in preventing pandemics seems like it should make political sense, but I am no political expert.
Similar issues come up in poker - if you bet everything you have on one bet, you tend to lose everything too fast, even if that one bet considered alone was positive EV.
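To make that concrete, here is a minimal sketch with made-up numbers (a 60% chance to double the stake each round): each bet is positive EV on its own, but going all-in every time almost always ends in ruin.

```python
import random

def all_in_repeatedly(bankroll=100.0, p_win=0.6, rounds=50):
    """Go all-in every round: double up on a win, lose everything on a loss."""
    for _ in range(rounds):
        if random.random() < p_win:
            bankroll *= 2      # each bet has EV 1.2x the stake, so it's +EV on its own
        else:
            return 0.0         # a single loss wipes out the whole bankroll
    return bankroll

results = [all_in_repeatedly() for _ in range(10_000)]
print(sum(r == 0 for r in results) / len(results))  # ~1.0: nearly every run goes broke
```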
I think you have to consider expected value an approximation. There is some real, ideal morality out there, and we imperfect people have not found it yet. But, like Newtonian physics, we have a pretty good approximation: the expected value of utility.
Yeah, in thought experiments with 10^52 things, it sometimes seems to break down. Just like Newtonian physics breaks down when analyzing a black hole. Nevertheless, expected value is the best tool we have for analyzing moral outcomes.
Maybe we want to be maximizing log(x) here, or maybe that’s just an epicycle and someone will figure out a better moral theory. Either way, the logical principle that a human life in ten years shouldn’t be worth less than a human life today seems like a plausible foundational principle.
Another source of epistemic erosion happens whenever a community gets larger. When you’re just a few people, it’s easier to change your mind. You just tell your friends, hey I think I was wrong.
When you have hundreds of people that believe your past analysis, it gets harder to change your mind. When people’s jobs depend on you, it gets even harder. What would happen if someone working in a big EA cause area discovered that they no longer thought that cause area was effective? Would it be easy for them to go public with their doubts?
So I wonder how hard it is to retain the core value of being willing to change your mind. What is an important issue that the “EA consensus” has changed its mind on in the past year?
Another issue that makes it hard to evaluate global health interventions is the indirect effects of NGOs in countries far from the funders. For example, this book made what I found to be a compelling argument that many NGOs in Africa are essentially funding civil war, via taxes or the replacement of government expenditure:
https://www.amazon.com/Dancing-Glory-Monsters-Collapse-Africa/dp/1610391071
African politics are pretty far outside my field of expertise, but the magnitudes seem quite large. War in the Congo alone has killed millions of people over the past couple decades.
I don’t really know how to make a tradeoff here but I wish other people more knowledgeable about African politics would dig into it.
Is this forum looking to hire more people?
There is also a “startup” aspect to EA activity - it’s possible EA will be much more influential in the future, and in many cases that is the goal, so helping now can make that happen.
I feel like the net value to the world of an incremental Reddit user might be negative, even….
For one, I don’t see any intercom. (I’m on an iPhone).
For two, I wanted to report a bug: whenever I write a comment, the UI zooms in so that the comment box takes up the whole width. Then it never un-zooms.
Another bug: while writing a comment while zoomed in and scrolling left to right, the scroll bar appears in the middle of the text.
A third bug: when I get a notification that somebody has responded to my post, view it using the dropdown at the upper right, and then try to re-use that menu, the X button is hidden off the screen to the right. Seems like a similar mobile over-zoom issue.
If your interpretation of the thought experiment is that suffering cannot be mapped onto a single number, then the logical corollary is that it is meaningless to “minimize suffering”, because any ordering you can place on the different possible amounts of suffering an organism experiences implies that they can be mapped onto a single number.
Even a brief glance through posts indicates that there is relatively little discussion about global health issues like malaria nets, vitamin A deficiency, and parasitic worms, even though those are among the top EA priorities.
In some sense the idea of a separate self is an invention. Names are an invention - the idea that I can be represented as “Kevin” and I am different from other humans. The invention is so obvious nowadays that we take it for granted.
It isn’t unique to humans, though… at least parrots and dolphins also have sequences of sounds that they use to identify specific individuals. Maybe those species are much more “human-like” than we currently expect.
I wonder a lot where to draw the line for animal welfare. It’s hard to worry about planaria. But animals that have names, animals whose family calls to them by name… maybe that has something to do with where to draw the line.
To me this sort of extrapolation seems like a “reductio ad absurdum” that demonstrates that suffering is not the correct metric to minimize.
Here’s a thought experiment. Let’s say that all sentient beings were converted to algorithms, and suffering was a single number stored in memory. Various actions are chosen to minimize suffering. Now, let’s say you replaced everyone’s algorithm with a new one. In the new algorithm, whenever you would previously get suffering=x, you instead get suffering=x/2.
The total amount of global suffering is cut in half. However, nothing else about the algorithm changes, and nobody’s behavior changes.
Have you done a great thing for the world, or is it a meaningless change of units?
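As a minimal sketch of why nothing else changes (my own toy framing of the thought experiment): if the algorithm just picks whichever action has the lowest suffering number, halving every number leaves the choice untouched.

```python
def choose_action(suffering):
    """Pick the action whose resulting suffering value is smallest."""
    return min(suffering, key=suffering.get)

# Hypothetical suffering values for three actions (illustrative numbers only)
suffering = {"help": 1.0, "ignore": 3.0, "harm": 9.0}
halved = {action: s / 2 for action, s in suffering.items()}

print(choose_action(suffering))  # "help"
print(choose_action(halved))     # "help" -- same behavior, half the "total suffering"
```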
Monotonic transformations can indeed solve the infinity issue. For example the sum of 1/n doesn’t converge, but the sum of 1/n^2 converges, even though x -> x^2 is monotonic.
You could discount utilons - say there is a “meta-utilon” which is a function of utilons, like maybe meta-utilons = log(utilons). And then you could maximize expected meta-utilons rather than expected utilons. Then I think stochastic dominance is equivalent to saying “better for any non-decreasing meta-utilon function”.
But you could also pick a single meta-utilon function and I believe the outcome would at least be consistent.
Really you might as well call the meta-utilons “utilons” though. They are just not necessarily additive.
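As a minimal sketch with made-up gambles: ranking by expected utilons and ranking by expected meta-utilons (using log as the meta-utilon function) can disagree, which is the sense in which the choice of function matters.

```python
import math

# Two hypothetical gambles over utilons, as (probability, utilons) pairs
gamble_a = [(0.5, 1.0), (0.5, 100.0)]  # 50/50 between 1 and 100 utilons
gamble_b = [(1.0, 30.0)]               # a sure 30 utilons

def expected(gamble, meta=lambda u: u):
    """Expected value of meta(utilons) under the gamble's probabilities."""
    return sum(p * meta(u) for p, u in gamble)

print(expected(gamble_a), expected(gamble_b))                      # 50.5 vs 30: A wins
print(expected(gamble_a, math.log), expected(gamble_b, math.log))  # ~2.30 vs ~3.40: B wins
```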
In general, it’s a good idea to not let strangers touch your phone. Someone can easily run off with it, and worse, while it’s unlocked, take advantage of elevated access privileges.
I think you may be underestimating the value of giving blood. It seems like, according to the analysis here, a blood donation is still worth about 1/200 of a QALY. That’s still altruistic; it isn’t just warm fuzzies. If someone does not believe the EA community’s analyses of the top charities, we should still encourage them to do things like give blood.
I personally hope that EA shifts a bit more in the “big tent” direction, because I think the principles of being rational and analytical about the effectiveness of charitable activity are very important, even though some of the popular charities in the EA community do not really seem effective to me. I disagree with the analysis while agreeing on the axioms. As a result, I am still not sure whether I would consider myself an “effective altruist” or not.
I believe by your definition, lethal autonomous weapon systems already exist and are widely in use by the US military. For example, the CIWS system will fire on targets like rapidly moving nearby ships without any human intervention.
https://en.wikipedia.org/wiki/Phalanx_CIWS
It's tricky because there is no clear line between "autonomous" and "not autonomous". Is a land mine autonomous because it decides to explode without human intervention? Well, land mines could have more and more advanced heuristics slowly built into them. At what point does it become autonomous?
I'm curious what ethical norms you think should apply to a system like the CIWS, designed to autonomously engage, but within a relatively restricted area, i.e. "there's something coming fast toward our battleship, let's shoot it out of the air even though the algorithm doesn't know exactly what it is and we don't have time to get a human into the loop".
Thank you for a well written post. The fact that there are 10 quintillion insects makes it hard to care about insect welfare. At some point, when deciding whether it is effective to improve insect welfare, we have to compare to the effectiveness of other interventions, like improving human welfare. How many insect lives are worth one human life?
This is just estimating, but if the answer is one billion or less, then in aggregate I should care more about insect life than human life, which doesn’t seem right. If the answer is a quadrillion or more, it seems like any intervention will not have sufficient impact. Therefore this only makes sense with an ethical theory that places one human life somewhere between a billion and a quadrillion insect lives.
I’m not sure what the right answer here is but it seems like something that needs a good answer in order to claim effectiveness.
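To spell out the arithmetic behind that billion-to-quadrillion range (rough, order-of-magnitude numbers of my own):

```python
insects = 1e19  # roughly 10 quintillion insects
humans = 8e9    # roughly 8 billion humans

# If one human life were worth 1e9 insect lives or fewer, insects in aggregate
# would outweigh humanity:
print(insects / 1e9)   # 1e10 human-life equivalents, more than 8e9 humans

# If one human life were worth 1e15 insect lives or more, even helping every
# insect on Earth would amount to relatively little:
print(insects / 1e15)  # only 1e4 human-life equivalents in total
```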
I'd trade at least 5 high-quality introductions like the one above for a single intro from the same distribution.
Personally, when I'm recruiting for a role, I'm usually so hungry to get more leads that I'm happy to follow up with very weak references. I would take 5 high-quality introductions, I would take one super-high-quality introduction, I would like all of the above. Yeah, it's great to hire from people who have worked with a friend of yours before, but that will never be 100% of the good candidates.
This may very much depend on what sort of role you're hiring for, though. Most of my experience is in hiring software engineers, where hiring is almost always limited by how many candidates you can find who will even talk to you, rather than your ability to assess them.
Excellent, sounds like you're on it. I do in fact use an iPhone. I should have made a more specific note about where I saw overlapping text earlier; I can't seem to find it again now. I'll use the "message us" link to report any future minor UI bugs.
What's up EAers. I noticed that this website has some issues on mobile devices - the left bar links don't work, text overlaps in several places, and tapping the search icon causes an inappropriate zoom. Is someone currently working on this, and would it help if I filed a ticket or reported an issue?
Yes, this is completely correct and many people do not get the mathematics right.
One example to think of is “the odds the Earth gets hit by a huge asteroid in the date range 2000-3000”. Whatever the odds are, they will probably steadily, predictably update downwards as time passes. Every day that goes by, you learn a huge asteroid did not hit the earth that day.
Of course, it’s possible an asteroid does hit the Earth and you have to drastically update upwards! But the vast majority of the time, the update direction will be downwards.
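Here is a minimal sketch of that dynamic, under the simplifying assumption (mine, not from any real estimate) of a small, fixed per-year impact probability:

```python
q = 1e-5  # hypothetical probability of a huge impact in any given year

def p_hit_by_3000(current_year):
    """P(at least one impact in the remaining years up to 3000), given none so far."""
    years_left = 3000 - current_year
    return 1 - (1 - q) ** years_left

# Each year that passes without an impact, the estimate ticks down a little:
for year in (2000, 2100, 2500, 2999):
    print(year, p_hit_by_3000(year))
```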
Grifters are definitely a problem in large organizations. The tough thing is that many grifters don’t start out as grifters. They start out honest, working hard, doing their best. But over time, their projects don’t all succeed, and they discover they are still able to appear successful by shading the truth a bit. Little by little, the honest citizen can turn into a grifter.
Many times a grifter is not really malicious; they are just not quite good enough at their job.
Eventually there will be some EA groups or areas that are clearly “not working”. The EA movement will have to figure out how to expel these dysfunctional subgroups.