Comment by Kevin Lacker on Who will be in charge once alignment is achieved? · 2022-12-16T18:09:04.799Z · EA · GW

Some people think that, with a super-powerful AI running the world, there would be no need for traditional government. The AI can simply make all the important decisions to optimize human welfare.

This is similar to the Marxist idea of the "withering away of the state". Once perfect Communism has been achieved, there will be no more need for government.

In practice, the Soviet state didn't really wither away. Instead, Stalin gained personal control over a new organization, the Communist Party, and used it to reinforce his own dictatorship and bend the nation to his will.

If we have transformational superhuman AI, the risk of war seems quite high. But an AI powerful enough to turn the whole world into paper clips could win a war immediately, without bloodshed. Or with lots of bloodshed, if that's what it wanted.

One possible outcome of superhuman AI is a global dictatorship. Whoever controls the superhuman AI controls the world, right? The CEO of the AI company that wins the race aligns the AI to themselves and makes themselves into an immortal god-king. At first they are benevolent. Over time it becomes impossible for the god-king to retain their humanity, as they become less and less like any normal human. The sun sets on the humanist era.

But this is turning into a science fiction story. In practice, a "superhuman AI" probably won't be all-powerful like this; there will be many details of what it can and can't do that I can't predict. Or maybe the state will just wither away!

Comment by Kevin Lacker on The winners of the Change Our Mind Contest—and some reflections · 2022-12-16T17:46:03.128Z · EA · GW

The Against Malaria Foundation doesn't give a high proportion of money to evil dictatorships, but it does give some. The same goes for Deworm the World.


I was wondering about this, because I was reading a book about the DRC - Dancing in the Glory of Monsters - which was broadly opposed to NGO activity in the country as propping up the regime. And I was trying to figure out how to square this criticism with the messages from the NGOs themselves. I am not really sure, though, because the pro-NGO side of the debate (like EA) and the anti-NGO side of the debate (like that book) seem to mostly be ignoring each other.

I think there should be some kind of small negative adjustment (even if token) from GiveWell on this front.

Yeah, I don't even know if it's the sort of thing that you can adjust for. It's kind of unmeasurable, right? Or maybe you can measure something like the net QALYs of a particular country being a dictatorship instead of a democracy, and argue that supporting a dictator is less bad than the particular public health intervention is good.

I would at least like to see people from the EA NGO world engage with this line of criticism, from people who are concerned that "the NGO system in poor countries, overall, is doing more unmeasurable harm than measurable good".

Comment by Kevin Lacker on The case for transparent spending · 2022-12-15T21:57:50.272Z · EA · GW

I think the Wytham Abbey situation is a success for transparency. Due to transparency, many people became aware of the purchase and were able to give public feedback that it seemed like a big waste of money and an embarrassment to the EA cause. Now, hopefully, future EA decisionmakers will be less likely to waste money in this way.

It's too much to expect EA decisionmakers to never make mistakes. The point of transparency is to force decisionmakers to learn from their mistakes, not to prevent mistakes from ever being made.

Comment by Kevin Lacker on The winners of the Change Our Mind Contest—and some reflections · 2022-12-15T21:49:49.816Z · EA · GW

I'm glad this contest happened, but I was hoping to see some deeper reflection. To me, the most concerning criticisms of the GiveWell approach run along more fundamental lines, such as:

  1. It's more effective to focus on economic growth than on one-off improvements to public health. In the long run, no country has solved its public health problems through charity, but many have done so through economic growth.
  2. NGOs in unstable or war-torn countries are enabling bad political actors. By funding public health in a dictatorship you are indirectly funding the dictator, who can reduce their own public health spending to counteract any benefit of NGO spending.

This might be too much to expect from any self-reflection exercise, though.

Comment by Kevin Lacker on What should EA do in response to the FTX crisis? · 2022-11-10T19:48:51.597Z · EA · GW

  1. Don't rush to judgment. We don't know the full story yet.
  2. If it's fraud, give back whatever money can be given back.
  3. If it's fraud, make it clear that the EA community does not support a philosophy of "making money on criminal activity is okay if you donate it to an effective charity".

Comment by Kevin Lacker on Community support given FTX situation · 2022-11-10T17:13:34.145Z · EA · GW

I don't know how the criminal law works. But if it turns out that the money in the FTX Future Fund was obtained fraudulently, would it be ethical to keep spending it, rather than giving it back to the victims of the fraud?

Comment by Kevin Lacker on Does the US public support radical action against factory farming in the name of animal welfare? · 2022-11-09T23:32:45.672Z · EA · GW

Banning slaughterhouses is essentially a ban on eating meat, right? I can't imagine that 43% of the US public would support that, when no more than 10% of the US public is vegetarian in the first place. (Estimates vary; you say 1% in this article, and 10% is the most aggressive figure I could find.)

It seems much more likely that these surveys are invalid for some reason. Perhaps the word "slaughterhouses" confused people, or perhaps people are just answering surveys based on emotion without bothering to think through what banning slaughterhouses actually means.

Comment by Kevin Lacker on Money Stuff: FTX Had a Death Spiral · 2022-11-09T22:22:29.254Z · EA · GW

This explanation of events seems to contradict several of SBF's public statements, such as:

"FTX has enough to cover all client holdings."

"We don't invest client assets (even in treasuries)."


I guess we'll know more in the coming days. One big open question for EA is whether SBF's money was obtained through fraudulent or illegal activities. As far as I can tell, it's too soon to say.

Comment by Kevin Lacker on FTX Crisis. What we know and some forecasts on what will happen next · 2022-11-09T17:13:54.312Z · EA · GW

In the last few hours, Coindesk reported that Binance is "strongly leaning toward" not doing the FTX acquisition.

Comment by Kevin Lacker on has probably collapsed · 2022-11-09T17:09:15.477Z · EA · GW

I believe the title of this article is misleading - the company was not technically bought out by Binance; Binance only signed a non-binding letter of intent to buy it. Sometimes this is just a minor detail, but in this case it seems quite important. As of the time I am writing this comment (9 a.m. California time on November 9), Polymarket shows an 81% chance that Binance will pull out of this deal.

I am not an expert in crypto, but I think people should not assume that this acquisition will go through. It is possible that FTX will just become insolvent. See the relevant Polymarket market.

Comment by Kevin Lacker on EA's Culture and Thinking are Severely Limiting its Impact · 2022-07-26T20:40:19.517Z · EA · GW

The point about corruption is a good one and it worries me that so many EA cause areas seem to ignore corruption. When you send money to well funded NGOs in corrupt countries, you are also supporting the status quo political leadership there, and the side effects from this seem like they could be more impactful than the stated work of the NGO.

Comment by Kevin Lacker on Energy Access in Sub-Saharan Africa: Open Philanthropy Cause Exploration Prize Submission · 2022-05-29T12:49:01.734Z · EA · GW

When you say “working with African leaders”, I worry that in many countries that means “paying bribes which prop up dictatorships and fund war.” How can we measure the extent to which money sent to NGOs in sub-Saharan Africa is redirected toward harmful causes via taxes, bribes, or corruption?

Comment by Kevin Lacker on High Impact Medicine, 6 months later - Update & Key Lessons · 2022-05-29T12:41:45.550Z · EA · GW

I’d like to push back a bit on that - it’s so common in the EA world to say, if you don’t believe in malaria nets, you must have an emotional problem. But there are many rational critiques of malaria nets. Malaria nets should not be this symbol where believing in them is a core part of the EA faith.

Comment by Kevin Lacker on High Impact Medicine, 6 months later - Update & Key Lessons · 2022-05-28T20:29:18.947Z · EA · GW

I think we should move away from messaging like “Action X only saves 100 lives. Spending money on malaria nets instead would save 10,000 lives. Therefore action X sucks.” Not everyone trusts the GiveWell numbers, and saving 100 lives is valuable in absolute terms, any way you look at it.

I understand why doctors might come to EA with a bad first impression given the anti-doctor sentiment. But we need doctors! We need doctors to help develop high-impact medical interventions, design new vaccines, work on anti-pandemic plans, and so many other things. We should have an answer for doctors who are asking, what is the most good I can do with my work, that is not merely asking them to donate money.

Comment by Kevin Lacker on Yglesias on EA and politics · 2022-05-23T14:42:22.504Z · EA · GW

It is really annoying for Flynn to be perceived as “the crypto candidate”. Hopefully future donations encourage candidates to position themselves more explicitly as favoring EA ideas. The core logic that we should invest more money in preventing pandemics seems like it should make political sense, but I am no political expert.

Comment by Kevin Lacker on St. Petersburg Demon – a thought experiment that makes me doubt Longtermism · 2022-05-23T13:43:25.730Z · EA · GW

Similar issues come up in poker - if you bet everything you have on one bet, you tend to lose everything too fast, even if that one bet considered alone was positive EV.

I think you have to consider expected value an approximation. There is some real, ideal morality out there, and we imperfect people have not found it yet. But, like Newtonian physics, we have a pretty good approximation: the expected value of utility.

Yeah, in thought experiments with 10^52 things, it sometimes seems to break down. Just like Newtonian physics breaks down when analyzing a black hole. Nevertheless, expected value is the best tool we have for analyzing moral outcomes.

Maybe we want to be maximizing log(x) here, or maybe that’s just an epicycle and someone will figure out a better moral theory. Either way, the principle that a human life in ten years shouldn’t be worth less than a human life today seems like a plausible foundation.
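
To make the poker point concrete, here's a quick simulation with made-up numbers: a 60% chance to win an even-money bet, so every individual bet is positive EV. Betting the whole bankroll each round almost surely ends in ruin, while betting a fixed fraction does not.

```python
import random

def ruin_rate(fraction, rounds=100, trials=2000, p_win=0.6, seed=0):
    """Fraction of trials ending in (near-)ruin when betting `fraction`
    of the bankroll each round on an even-money bet won with
    probability p_win. Each individual bet is positive EV."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        bankroll = 1.0
        for _ in range(rounds):
            stake = bankroll * fraction
            bankroll += stake if rng.random() < p_win else -stake
        if bankroll < 1e-9:
            ruined += 1
    return ruined / trials

# All-in every round: a single loss wipes you out, and over 100 rounds
# a loss is virtually certain, so essentially every trial ends in ruin.
print(ruin_rate(1.0))   # → 1.0
# Betting 20% per round (the Kelly fraction for this game): no ruin.
print(ruin_rate(0.2))   # → 0.0
```

The 20% figure is what maximizing expected log-bankroll recommends for this particular game, which is the log(x) idea again: the log transform is what penalizes going broke.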

Comment by Kevin Lacker on EA culture is special; we should proceed with intentionality · 2022-05-22T21:14:51.907Z · EA · GW

Another source of epistemic erosion happens whenever a community gets larger. When you’re just a few people, it’s easier to change your mind. You just tell your friends, hey I think I was wrong.

When you have hundreds of people who believe your past analysis, it gets harder to change your mind. When people’s jobs depend on you, it gets even harder. What would happen if someone working in a big EA cause area discovered that they no longer thought that cause area was effective? Would it be easy for them to go public with their doubts?

So I wonder how hard it is to retain the core value of being willing to change your mind. What is an important issue that the “EA consensus” has changed its mind on in the past year?

Comment by Kevin Lacker on Impact is very complicated · 2022-05-22T14:34:16.594Z · EA · GW

Another issue that makes it hard to evaluate global health interventions is the indirect effects of NGOs in countries far from the funders. For example, this book made what I found to be a compelling argument that many NGOs in Africa are essentially funding civil war, via taxes or the replacement of government expenditure.

African politics are pretty far outside my field of expertise, but the magnitudes seem quite large. War in the Congo alone has killed millions of people over the past couple decades.

I don’t really know how to make a tradeoff here but I wish other people more knowledgeable about African politics would dig into it.

Comment by Kevin Lacker on You should join an EA organization with too many employees · 2022-05-21T21:00:39.278Z · EA · GW

Is this forum looking to hire more people?

There is also a “startup” aspect to EA activity - it’s possible EA will be much more influential in the future, and in many cases that is the goal, so helping now can make that happen.

I feel like the net value to the world of an incremental Reddit user might be negative, even….

Comment by Kevin Lacker on Contact us · 2022-05-21T20:55:50.919Z · EA · GW

For one, I don’t see any intercom. (I’m on an iPhone).

For two, I wanted to report a bug that whenever writing a comment, the UI zooms in so that the comment box takes up the whole width. Then it never un-zooms.

Another bug, while writing a comment while zoomed in and scrolling left to right, the scroll bar appears in the middle of the text.

A third bug, when I get a notification that somebody has responded to my post, and view it using the drop down at the upper right, then try to re-use that menu, the X button is hidden, off the screen to the right. Seems like a similar mobile over-zoom thing.

Comment by Kevin Lacker on The case to abolish the biology of suffering as a longtermist action · 2022-05-21T20:50:55.386Z · EA · GW

If your interpretation of the thought experiment is that suffering cannot be mapped onto a single number, then the logical corollary is that it is meaningless to “minimize suffering”: minimizing requires comparing, and any consistent ordering you can place on the different possible amounts of suffering an organism experiences effectively maps them onto a single scale.

Comment by Kevin Lacker on james.lucassen's Shortform · 2022-05-21T18:11:44.499Z · EA · GW

Even a brief glance through posts indicates that there is relatively little discussion about global health issues like malaria nets, vitamin A deficiency, and parasitic worms, even though those are among the top EA priorities.

Comment by Kevin Lacker on Wormy the Worm · 2022-05-21T18:00:39.421Z · EA · GW

In some sense the idea of a separate self is an invention. Names are an invention - the idea that I can be represented as “Kevin” and I am different from other humans. The invention is so obvious nowadays that we take it for granted.

It isn’t unique to humans, though… at least parrots and dolphins also have sequences of sounds that they use to identify specific individuals. Maybe those species are much more “human-like” than we currently expect.

I wonder a lot where to draw the line for animal welfare. It’s hard to worry about planaria. But animals that have names, animals whose family calls to them by name… maybe that has something to do with where to draw the line.

Comment by Kevin Lacker on The case to abolish the biology of suffering as a longtermist action · 2022-05-21T17:50:47.618Z · EA · GW

To me this sort of extrapolation seems like a “reductio ad absurdum” that demonstrates that suffering is not the correct metric to minimize.

Here’s a thought experiment. Let’s say that all sentient beings were converted to algorithms, and suffering was a single number stored in memory. Various actions are chosen to minimize suffering. Now, let’s say you replaced everyone’s algorithm with a new one. In the new algorithm, whenever you would previously get suffering=x, you instead get suffering=x/2.

The total amount of global suffering is cut in half. However, nothing else about the algorithm changes, and nobody’s behavior changes.

Have you done a great thing for the world, or is it a meaningless change of units?
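
A minimal sketch of the thought experiment, with made-up scores: halving every reported number halves “total suffering,” but because a minimum is unchanged by positive rescaling, every decision the algorithm makes comes out identical.

```python
# Hypothetical suffering scores the old algorithm assigns to three actions.
suffering = {"action_a": 10.0, "action_b": 4.0, "action_c": 7.0}

# The "new algorithm": wherever the old one reported x, report x / 2.
rescaled = {k: v / 2 for k, v in suffering.items()}

total_before = sum(suffering.values())   # 21.0
total_after = sum(rescaled.values())     # 10.5 -- "half the suffering"

# But every choice is unchanged: the minimizing action is the same.
best_before = min(suffering, key=suffering.get)
best_after = min(rescaled, key=rescaled.get)
assert best_before == best_after == "action_b"
```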

Comment by Kevin Lacker on NunoSempere's Shortform · 2022-05-21T17:23:01.110Z · EA · GW

Monotonic transformations can indeed solve the infinity issue. For example the sum of 1/n doesn’t converge, but the sum of 1/n^2 converges, even though x -> x^2 is monotonic.
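
The two partial sums are easy to check numerically; a sketch with a million terms:

```python
import math

N = 1_000_000
harmonic = sum(1 / n for n in range(1, N + 1))        # sum of 1/n
p_series = sum(1 / n**2 for n in range(1, N + 1))     # after x -> x^2

# The harmonic series grows like ln(N) without bound (about 14.4 here),
# while the transformed sum has nearly converged to pi^2 / 6 = 1.6449...
print(harmonic)
print(abs(p_series - math.pi**2 / 6))
```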

Comment by Kevin Lacker on NunoSempere's Shortform · 2022-05-21T00:05:18.433Z · EA · GW

You could discount utilons - say there is a “meta-utilon” which is a function of utilons, like maybe meta-utilons = log(utilons). And then you could maximize expected meta-utilons rather than expected utilons. Then I think stochastic dominance is equivalent to saying “better for any non-decreasing meta-utilon function”.

But you could also pick a single meta-utilon function, and I believe the outcome would at least be consistent.

Really you might as well call the meta-utilons “utilons”, though. They are just not necessarily additive.

Comment by Kevin Lacker on What should one do when someone insists on actively making life worse for many other people? · 2022-05-20T23:56:06.484Z · EA · GW

In general, it’s a good idea to not let strangers touch your phone. Someone can easily run off with it, and worse, while it’s unlocked, take advantage of elevated access privileges.

Comment by Kevin Lacker on "Big tent" effective altruism is very important (particularly right now) · 2022-05-20T23:12:22.412Z · EA · GW

I think you may be underestimating the value of giving blood. According to the analysis here, a blood donation is still worth about 1/200 of a QALY. That’s still altruistic; it isn’t just warm fuzzies. If someone does not believe the EA community’s analyses of the top charities, we should still encourage them to do things like give blood.

Comment by Kevin Lacker on "Big tent" effective altruism is very important (particularly right now) · 2022-05-20T22:52:03.805Z · EA · GW

I personally hope that EA shifts a bit more in the “big tent” direction, because I think the principles of being rational and analytical about the effectiveness of charitable activity are very important, even though some of the popular charities in the EA community do not really seem effective to me. Like I disagree with the analysis while agreeing on the axioms. And as a result I am still not sure whether I would consider myself an “effective altruist” or not.

Comment by Kevin Lacker on Risks from Autonomous Weapon Systems and Military AI · 2022-05-19T23:38:47.415Z · EA · GW

I believe by your definition, lethal autonomous weapon systems already exist and are widely in use by the US military. For example, the CIWS system will fire on targets like rapidly moving nearby ships without any human intervention.

It's tricky because there is no clear line between "autonomous" and "not autonomous". Is a land mine autonomous because it decides to explode without human intervention? Well, land mines could have more and more advanced heuristics slowly built into them. At what point does it become autonomous?

I'm curious what ethical norms you think should apply to a system like the CIWS, designed to autonomously engage, but within a relatively restricted area, i.e. "there's something coming fast toward our battleship, let's shoot it out of the air even though the algorithm doesn't know exactly what it is and we don't have time to get a human into the loop".

Comment by Kevin Lacker on Why should I care about insects? · 2022-05-19T02:13:36.300Z · EA · GW

Thank you for a well written post. The fact that there are 10 quintillion insects makes it hard to care about insect welfare. At some point, when deciding whether it is effective to improve insect welfare, we have to compare to the effectiveness of other interventions, like improving human welfare. How many insect lives are worth one human life?

This is just estimating, but if the answer is one billion insects or fewer, then the world’s 10 quintillion insects collectively outweigh its 8 billion humans, and I should care more about insect life than human life in aggregate, which doesn’t seem right. If the answer is a quadrillion or more, then all the insects together count for only about ten thousand humans, and it seems like no insect intervention will have sufficient impact. Therefore this only makes sense with an ethical theory that places one human life somewhere between a billion and a quadrillion insects.
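
The arithmetic behind those bounds, with illustrative numbers only (roughly 10^19 insects and 8 × 10^9 humans):

```python
INSECTS = 10**19        # ~10 quintillion insects alive at any one time
HUMANS = 8 * 10**9      # rough current human population

def insects_in_human_equivalents(insects_per_human):
    """Aggregate insect moral weight, measured in human lives, if one
    human life trades against insects_per_human insect lives."""
    return INSECTS / insects_per_human

# At a billion-to-one exchange rate, insects collectively outweigh humanity:
assert insects_in_human_equivalents(10**9) > HUMANS    # 10^10 vs 8 * 10^9

# At a quadrillion-to-one rate, all insects together weigh only about
# ten thousand humans, too little for insect interventions to compete:
assert insects_in_human_equivalents(10**15) < HUMANS   # 10^4 vs 8 * 10^9
```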

I’m not sure what the right answer here is but it seems like something that needs a good answer in order to claim effectiveness.

Comment by Kevin Lacker on Try working on something random with someone cool · 2022-05-18T16:39:06.971Z · EA · GW

I'd trade at least 5 high-quality introductions like the one above for a single intro from the same distribution. 

Personally, when I'm recruiting for a role, I'm usually so hungry to get more leads that I'm happy to follow up with very weak references. I would take 5 high-quality introductions, I would take one super-high-quality introduction, I would like all of the above. Yeah, it's great to hire from people who have worked with a friend of yours before, but that will never be 100% of the good candidates.

This may very much depend on what sort of role you're hiring for, though. Most of my experience is in hiring software engineers, where hiring is almost always limited by how many candidates you can find who will even talk to you, rather than your ability to assess them.

Comment by Kevin Lacker on Open Thread: Spring 2022 · 2022-05-18T16:28:57.424Z · EA · GW

Excellent, sounds like you're on it. I do in fact use an iPhone. I should have made a more specific note about where I saw overlapping text earlier, I can't seem to find it again now. I'll use the message us link about any future minor UI bugs.

Comment by Kevin Lacker on Open Thread: Spring 2022 · 2022-05-16T16:08:41.211Z · EA · GW

What's up EAers. I noticed that this website has some issues on mobile devices - the left bar links don't work, text overlaps in several places, and tapping the search icon causes an inappropriate zoom. Is there someone currently working on this for whom it would help if I filed a ticket or reported an issue?

Comment by Kevin Lacker on Rational predictions often update predictably* · 2022-05-16T13:33:01.149Z · EA · GW

Yes, this is completely correct and many people do not get the mathematics right.

One example to think of is “the odds the Earth gets hit by a huge asteroid in the date range 2000-3000”. Whatever the odds are, they will probably steadily, predictably update downwards as time passes. Every day that goes by, you learn a huge asteroid did not hit the earth that day.

Of course, it’s possible an asteroid does hit the Earth and you have to drastically update upwards! But the vast majority of the time, the update direction will be downwards.

Comment by Kevin Lacker on The biggest risk of free-spending EA is not optics or motivated cognition, but grift · 2022-05-16T13:25:25.378Z · EA · GW

Grifters are definitely a problem in large organizations. The tough thing is that many grifters don’t start out as grifters. They start out honest, working hard, doing their best. But over time, their projects don’t all succeed, and they discover they are still able to appear successful by shading the truth a bit. Little by little, the honest citizen can turn into a grifter.

Many times a grifter is not really malicious; they are just not quite good enough at their job.

Eventually there will be some EA groups or areas that are clearly “not working”. The EA movement will have to figure out how to expel these dysfunctional subgroups.