Maximizing long-term impact 2015-03-03T19:50:01.524Z · score: 6 (6 votes)


Comment by squark on 2017 AI Safety Literature Review and Charity Comparison · 2017-12-22T10:19:44.715Z · score: 2 (2 votes) · EA · GW

Nice review! Two comments so far:

  • Re Critch's paper, the result is actually very intuitive once you understand the underlying mechanism. Critch considers a situation of, so to speak, Aumannian disagreement. That is, two agents hold different beliefs, despite being aware of each other's beliefs, because some assumption of Aumann's theorem is false: e.g. each agent considers emself smarter than the other. For example, imagine that Alice believes the Alpha Centauri system has more than 10 planets (call it "proposition P"), Bob believes it has less than 10 planets ("proposition not-P") and each is aware of the other's belief and considers it to be foolish. In this case, an AI that benefits Alice if P is true and benefits Bob if not-P is true would seem like an excellent deal for both of them, because each will be sure the AI is in eir own favor. In a way, the AI constitutes a bet between the two agents.

Critch writes: "It is also assumed that the players have common knowledge of one another’s posterior... Future work should design solutions for facilitating the process of attaining common knowledge, or to obviate the need to assume it." Indeed, it is interesting to study what happens when each agent does not know the other's beliefs.

  • I will risk being accused of self-advertisement, but given that one of my papers appeared in the review, it doesn't seem too arrogant to point to another which IMHO is no less important, namely "Forecasting using incomplete models", a paper that builds on Logical Induction in order to develop a way to reason about complex environments that doesn't require logic/deduction. I think it would be nice if this paper were included, although of course it's your review and your judgment whether it merits it.
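To make the bet mechanism from the first point concrete, here is a toy calculation (all payoffs and probabilities below are made-up illustrative numbers, not taken from Critch's paper): each agent evaluates the same AI under eir own beliefs, and each concludes it is in eir favor.

```python
# Toy illustration of the "AI as a bet" mechanism described above.
# All payoffs and probabilities are made up for illustration.

def expected_payoff(prob_p, payoff_if_p, payoff_if_not_p):
    """Expected payoff for an agent assigning subjective probability prob_p to P."""
    return prob_p * payoff_if_p + (1 - prob_p) * payoff_if_not_p

# The AI benefits Alice if P is true and Bob if not-P is true.
alice_payoffs = (10, 0)  # (payoff if P, payoff if not-P)
bob_payoffs = (0, 10)

alice_prob_p = 0.95  # Alice is confident P holds
bob_prob_p = 0.05    # Bob is confident not-P holds

alice_ev = expected_payoff(alice_prob_p, *alice_payoffs)  # 9.5
bob_ev = expected_payoff(bob_prob_p, *bob_payoffs)        # 9.5

# Each agent expects nearly the full payoff under eir own beliefs,
# so each regards the bet-like AI as an excellent deal.
print(alice_ev, bob_ev)
```

The point of the sketch: no deception is required; the mutual attractiveness of the deal follows directly from the persistent disagreement about P.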

Comment by squark on Taking Systemic Change Seriously · 2016-10-31T06:12:12.847Z · score: -1 (1 votes) · EA · GW

I'm not sure what point you could see in continuing this conversation either way, since you clearly aren't armed with any claims which haven't already been repeated and answered over and over in the most basic of arguments over socialism and Marxist philosophy...

Indeed I no longer see any point, given that you now reduced yourself to insults. Adieu.

Comment by squark on Taking Systemic Change Seriously · 2016-10-30T06:52:27.123Z · score: 0 (2 votes) · EA · GW

You still haven't provided a reference for "materialist epistemology".

I would say this is a good essay:

So, "historical materialism" is some collection of vague philosophical ideas by Marx. Previously, you replied to my claim that "to the extent they [utopian socialism and Marxism] are based on any evidence at all, this evidence is highly subjective interpretation of history" by saying that "Marxism was derived from materialist epistemology". It is extremely misleading to say that Marxism was derived from something when that something is itself an invention of Marx! To say that historical materialism is "evidence" for Marxism is to deprive the word "evidence" of all meaning. Evidence is not just something someone says that they claim justifies something else they say. Evidence is (by definition) objective, something that all participants in the conversation will agree upon given a minimal standard of intellectual honesty. If you honestly think "historical materialism" is an objective truth that everyone is obliged to accept (even if we assume it is well defined at all, which it probably isn't), then I see no point in continuing this conversation.

What you certainly shouldn't expect is that everything be quantitative, or that everything be condensed into a meta-analysis that you can look at really quickly to save you the trouble of engaging with complicated and complex social and political issues. Sociopolitical systems are too complicated for that, which is why people who study political science and international relations do not condense everything into quantitative evidence.

Quantitative does not imply "you can look at it really quickly". Quantum field theory is very quantitative but I really want to meet someone who understood it by "looking at it really quickly." On the other hand, when something meets a high epistemic standard it makes it more worthwhile to spend time looking at it.

"Sociopolitical systems are complicated" does not imply "we should treat weak evidence as if it is strong evidence". If a question is so complicated that you cannot find any strong evidence to support an answer, it means that you should have low confidence in any answer that you can find. In other words, you should assign high probability to this answer being wrong. If some field of social sciences fails to provide strong evidence for its claims, this only means we should assign low confidence to its conclusions.

A singleminded emphasis on statistics is absolutely not what effective altruism is about. There are no meta-analyses citing data about the frequency of above-human-intelligence machines being badly aligned with values; there are no studies which quantify the sentience of cows and chickens; there are no regression tables showing whether the value of creating an active social movement is worth the expense. And yet we concern ourselves with those things anyway.

Yes, but you are ignoring two important considerations.

One is that e.g. becoming vegetarian will not cause a catastrophe if it turns out that animals lack consciousness. On the other hand, a communist revolution will (and did) cause a catastrophe if our assumptions about its consequences are misguided.

The other is that the claim that a random AI is not aligned with human values is an "antiprediction". That is, a low information prior should not assign high probability to our values among all possible values. Therefore, the burden of proof is on the claim that the AI will be aligned. On the other hand, Marxist theories make complicated detailed claims about complicated detailed social systems. Such a claim is very far from the prior and strong evidence is required to justify it.
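The antiprediction point can be made concrete with a back-of-the-envelope calculation (the size of the value space below is an arbitrary assumption, purely for illustration): under a low-information prior spread over many possible value systems, the prior probability that a random AI's values happen to be human-compatible is tiny, so the aligned-AI claim is the one needing evidence.

```python
# Toy antiprediction calculation. The space size is an arbitrary
# illustrative assumption, not an estimate from any source.

num_possible_value_systems = 10**6  # assumed size of the value space
num_human_compatible = 1            # assumed

# A low-information (uniform) prior over this space assigns
# human-compatible values only a sliver of probability mass:
prior_alignment_prob = num_human_compatible / num_possible_value_systems
print(prior_alignment_prob)  # 1e-06
```

By contrast, a detailed claim about a complicated social system is likewise far from any low-information prior, which is why it too demands strong evidence.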

If you intend to go by quantitative data then I would suggest avoiding cases with a <10 sample size and I would also suggest correcting for significant confounding variables such as "dictatorship".

I'm not saying we have a lot of data. I'm saying we don't have much data but the data we do have points in the opposite direction. Regarding dictatorship, my hypothesis is that there is a causal link communism->dictatorship, so it is hardly a confounder.

In this case the USSR had no class since there were no capital owners.

Not entirely - the USSR's economy was complicated and changed significantly throughout the decades. The more general point of course is that the USSR did not succeed in abolishing political class.

You claimed USSR didn't abolish class. I said that abolishing class is hard because "class" can exist without being coded into law. You replied by saying "class" only refers to capital owners. Now you revert to the definition I assumed.

We might, but as I said above, many of the people who seriously engage with the relevant literature find these concerns to be small and other concerns to be large, for various reasons.

And many of "the people who seriously engage" reach the diametrically opposite conclusion.

The West has been successful, yes, but it's not clear how successful it's been in distributing its goods fairly and the extent to which its rise was due to exploitation of other countries.

I'm not sure what "fairly" means or why it should be ranked so high in importance. "Exploitation" is also a word that is used so often that its meaning has become diluted (I also suspect that if all countries were liberal democracies it would be a win-win for almost everyone). If "fairness" is the main argument in favor of communist systems, then from my perspective it is paperclip maximization and there is no point in discussing it further.

There is a high standard of evidence whenever large ideas are discussed, but there's certainly no disproportionate 'burden of proof' to be placed against non capitalist ideas which haven't been tried or the non capitalist ideas which actually have been tried and have been quite successful in their own contexts.

There is a very high burden of proof for any policy proposal with potentially catastrophic consequences. The existing system (in Western-style democracies), with all its shortcomings, has already undergone significant optimization and is pretty good compared to most alternatives. You can only risk destroying it if you have very strong evidence that the risk is negligible relative to the gains.

Also, don't misread me as saying "Communist countries worked so we should look into communism." I'm better interpreted as saying "lots of people from different perspectives have traced serious problems to the private ownership of the means of production, so we should look at the various ways to change that."

Yeah, and other people traced serious problems to other things like "the state exists and imposes regulation on the market" (for the record, I suspect that both groups are wrong). Let's not privilege the hypothesis.

Comment by squark on Taking Systemic Change Seriously · 2016-10-29T06:16:16.431Z · score: -1 (1 votes) · EA · GW

What is the right way to approach things?

By combining insights from sociology, history, economics, and other domains. For instance, materialist epistemology is a method of analysis that draws upon sociology and history to understand economic developments.

You still haven't provided a reference for "materialist epistemology".

Anyone can claim to "combine insights" from anything. In fact, most political ideologies claim such insights, nevertheless reaching different, sometimes diametrically opposite, conclusions.

Sure, but not everything that counts as empirical data can be fit into a regression table and subjected to meta-analysis.

If you're proposing to overhaul the entire system of government and economics, at the very least I expect you to provide objective, quantitative evidence. This is what effective altruism is about: doing good using evidence based methods.

The Khmer Rouge abolished money.

Yes. They also killed a quarter of their population. So whether or not their economy succeeded seems to be more strongly governed by other factors.

There is a remarkable correlation between communism and killing / imprisoning large numbers of innocent people. It is unlikely to be a coincidence.

Yes, well class in the Marxist definition is about the distinction between capital owners and laborers, which is a bit different from how it's used in other contexts.

In this case the USSR had no class since there were no capital owners.

We might think that inequality of wealth is bad as it allocates goods to those who can afford them rather than those who need them; we might think that capitalist markets lead to tragedies of the commons which exacerbate resource shortages and existential risks; we might think that unequal distribution of power in society corrupts politics.

Alternatively, we might think that markets are good since they create incentives for productivity and innovation; since they make sure decisions in the economy are made in a distributed way, not prone to a single point of failure; since this distributed way naturally assigns more weight to people who have proven themselves to be competent. We might think that tragedies of the commons can be solved by controlling market incentives through taxation and regulation; that there is no need to throw the baby out with the bathwater by destroying the entire market.

All of this is speculation.

People in modern Western-style democracies (which are all capitalist) enjoy personal freedom and quality of life unrivaled in the entire history of the human race. On the other hand, virtually all attempts to implement communism led to disaster.

This is not really a good comparison, given many cases of success in socialist and communist economies (such as Cuba, which roundly beats other Latin American countries in human development standards) and many failures in capitalist economies (such as the widespread economic disaster which followed the end of the Soviet system).

Cuba is still a dictatorship with a track record of human rights violations. I wouldn't want to live there. My point is that Western-style capitalist democracy is the most successful model of government we know, and there is a high burden of proof for claiming some alternative is better.

Comment by squark on Taking Systemic Change Seriously · 2016-10-28T19:46:37.962Z · score: 1 (3 votes) · EA · GW

...asking for an empirical meta study of complex social ideologies is not the right way to approach things.

What is the right way to approach things? In order to claim that certain policies will have certain consequences, you need some kind of model. In order to know that a model is useful, you need to test it against empirical data. The broader, more unusual, and more complex the policy changes you are proposing, the more stringent the standard of evidence you need to meet.

I have seen several empirical analyses by economists showing positive economic and welfare data from Soviet countries.

My family lived in the Soviet Union for its entire history. I assure you that it was a hellhole.

Many types of socialism and communism have not been implemented. For instance, Marxism advocates a classless and moneyless society. The USSR was not classless and was not moneyless.

The Khmer Rouge abolished money. Abolishing class is much harder since class can exist without formal acknowledgement in the legal system. The real question, though, is why should we think these changes are possible or desirable.

I don't see how any of this takes away from the point it started from, namely that capitalism as an economic system has its own record of brutality as well as communism.

But the two are not on equal footing. People in modern Western-style democracies (which are all capitalist) enjoy personal freedom and quality of life unrivaled in the entire history of the human race. On the other hand, virtually all attempts to implement communism led to disaster. So, although it is theoretically possible that some implementation of communism is superior, there is a very high burden of proof involved.

I said "public ownership of the means of production", and Marxism is just one of several frameworks for doing this.

Well, Marxism was your justification for it.

More importantly, I did not suggest that the EA community embrace it. I suggested that people look into it, see if it was desirable, etc. Doing so requires serious engagement with the relevant literature and discussing it with people who can answer your questions better. If I was trying to argue for socialism or communism, of course I would be speaking much differently and with much more extensive sources and evidence.

In this case, I suggest formulating a much broader objective e.g. "alternative systems of government / economics". This might be communism, might be anarcho-capitalism, might be something else altogether. IMO, the best strategy is moving one level of "meta" up. Instead of promoting a specific political ideology, let's fund promising research into theoretical tools that enable evaluating policy proposals or government systems.

Comment by squark on Taking Systemic Change Seriously · 2016-10-28T17:09:15.574Z · score: 0 (2 votes) · EA · GW

All economic systems make certain assumptions about the way wealth and society are organized, different perspectives make different assumptions and operate on different levels of analysis, so e.g. Marxists aren't concerned with computing DSGE.

This is a generic statement that conveys little information about Marxism in particular.

I don't know what that would even look like. Can you recommend me a good survey of studies (preferably meta analyses) supporting libertarian ideas? There is no such thing.

I never claimed that implementing libertarian ideas is effective altruism! I'm sorry but the burden of proof is on you.

Materialism has many different meanings and you are referring to something completely different. I am referring not to materialism as a theory of mind but to materialist epistemology, a method of social analysis.

Do you have a reference for this? Googling "materialist epistemology" doesn't yield much. You claimed that "Marxism was derived from materialist epistemology", does materialist epistemology precede Marxism? What was "materialist epistemology" derived from?

I think most of the elites who supported Stalin are now dead. In any case this seems like a pretty strange thing to worry about, like saying we should disbelieve in evolution because of social darwinists and eugenics.

My point is that intellectual elites are untrustworthy about this sort of question and we should only believe direct evidence. Reverse stupidity is not intelligence, but stupidity is also not intelligence.

Yes, but virtually all communist countries were terrible virtually all of the time.

Not really true, and we might think that various directions in Marxism and socialism can be implemented without following the same policies that they did.

Why is that not really true? Maybe they can be implemented differently or maybe implementing them differently won't help. If your theory keeps failing the experimental test despite all sorts of tweaking, maybe you should abandon it and consider a different theory.

Also, I don't know what it means for capitalism to be "wrong". Capitalism is just what happens when you allow people to freely exchange goods and services and enter into contracts. It might be that limiting such exchange and replacing it with something state-controlled is useful, but this clearly depends on the nature of the limitations and replacement. So the question is not whether capitalism is "wrong" but whether the system you are proposing instead of capitalism is an improvement.

Harmful, immoral, etc.

It sounds like you completely ignored my explanation.

Instead of arguing with me you would probably learn more by going to serious readings such as Marx, postwar socialist theory or to communities which are specifically oriented to discuss this sort of thing, such as

That is an extremely condescending comment. You came here suggesting that the EA community embraces Marxism as an effective cause. I'm asking you for supporting evidence. You refuse to provide the evidence, or even explain the nature of the evidence, suggesting that I should read whatnot before I gain the right to talk about it. If I claimed that Mahayana Buddhism is an excellent recipe to systemic change, you would be right to demand at least an outline of supporting evidence before being sent to read the Aṣṭasāhasrikā Prajñāpāramitā Sūtra.

Comment by squark on Taking Systemic Change Seriously · 2016-10-28T07:49:45.559Z · score: -1 (1 votes) · EA · GW

Mainstream economics doesn't seek to answer the same questions that Marxian economics does...

I'm not so sure, can you spell it out in more detail? Maybe you're saying that Marxian economics is mostly prescriptivist while mainstream economics is mostly descriptivist. But then, we have welfare economics and mechanism design, which are more or less mainstream and have a prescriptivist bent.

...much of the 19th century socialist work was very mainstream and derived from the ordinary economic thought of the time.

I suspect it depends on the socialist work. For example, do you think Fourier's phalanstère is derived from the ordinary economic thought of the time? Or his prediction that the seas will become lemonade?

Needless to say, modern heterodox economists use studies quite frequently.

Can you recommend a good survey of studies (preferably meta analyses) supporting Marxist ideas?

To the extent they are based on any evidence at all, this evidence is highly subjective interpretation of history.

Marxism was derived from materialist epistemology.

Materialism is a school of philosophy. In what sense does it qualify as "evidence"? In any case, it seems perfectly consistent to be a materialist / physicalist and deny Marxism.

We might think that intellectual elites who engage with socialist thought or with Marxist thought differentiate between the various doctrines and directions within this ideological space and accept some ideas while rejecting others.

What reason do we have to think the opinions of these elites today are much more accurate than when they supported Stalin?

Or we might think that the behavior of a state doesn't make all of its policies wrong: for instance, we might dispute the idea that capitalist states' rampant imperialism demonstrates that capitalism is always wrong.

Yes, but virtually all communist countries were terrible virtually all of the time.

Also, I don't know what it means for capitalism to be "wrong". Capitalism is just what happens when you allow people to freely exchange goods and services and enter into contracts. It might be that limiting such exchange and replacing it with something state-controlled is useful, but this clearly depends on the nature of the limitations and replacement. So the question is not whether capitalism is "wrong" but whether the system you are proposing instead of capitalism is an improvement.

Bottom line, the most important question is this: what evidence do we have that implementing Marxist ideas is effective or at least beneficial?

Comment by squark on Taking Systemic Change Seriously · 2016-10-27T19:24:34.992Z · score: 3 (5 votes) · EA · GW

Well the original strands of thought mostly came from early 19th century utopian socialists and were updated by Marx and Engels. There has been a lot of post-Marxian analysis as well.

AFAICT, the strands of thought you are talking about are poorly correlated with reality. Marxist thought is largely outside of mainstream economics. They use neither studies nor mathematical models (at least they didn't in the 19th century). To the extent they are based on any evidence at all, this evidence is highly subjective interpretation of history. Finally, Marxist revolutions caused suffering and death on a massive scale.

I suspect that Marxism is popular with intellectual elites for purely political reasons that have little to do with its objective intellectual merit. The same sort of elites supported Stalin and Mao in their time. To me it seems like a massive failure to update.

Comment by squark on Taking Systemic Change Seriously · 2016-10-26T18:42:33.090Z · score: 3 (5 votes) · EA · GW

Of the examples you give here, I think #1 is the best by far.

Regarding #2, I think that world government is a great idea (assuming it's a liberal, democratic world government!) but it's highly unobvious how to get there. In particular, I am very skeptical about giving more power to the UN. The UN is a fundamentally undemocratic institution, both because each country has 1 vote regardless of size and because (more importantly) many countries are represented by undemocratic governments. I am not at all convinced removing the security council veto power would have positive consequences. IMHO the first step towards world government or any similar goal would be funding a research programme that will create a plan that is evidence based, nonpartisan and incremental / reversible.

Regarding #3, I am really not sure who these theorists are and why we should believe them.

Another potentially relevant cause area (although I'm not sure whether this is "systemic change" as you understand it) is reforming the education system: setting more well-defined goals, using evidence based methods, improving incentive mechanisms, educating for rationality.

Comment by squark on Ask MIRI Anything (AMA) · 2016-10-13T18:57:36.692Z · score: 2 (2 votes) · EA · GW

So you claim that you have values related to animals that most people don't have and you want your eccentric values to be overrepresented in the AI?

I'm asking unironically (personally I also care about wild animal suffering, but I also suspect that most people would care about it if they spent sufficient time thinking about it and looking at the evidence).

Comment by Squark on [deleted post] 2016-03-01T05:40:25.175Z

Who said we will preserve wild nature in its present form? We will re-engineer it to eliminate animal suffering while enhancing positive animal experience and wild nature's aesthetic appeal.

Comment by squark on Effective Altruism and ethical science · 2016-01-31T21:34:12.635Z · score: 0 (0 votes) · EA · GW

I completely fail to understand how your WPW example addresses my point. It is absolutely irrelevant what most humans are comfortable in saying. Truth is not a democracy, and in this case the claim is not even wrong (it is ill defined since there is no such thing as "bad" without specifying the agent from whose point of view it is bad). It is true that some preferences are nearly universal for humans but other preferences are less so.

How is the fluidity of human values a point in your favor? If anything it only makes them more subjective.

Comment by squark on Effective Altruism and ethical science · 2016-01-31T21:31:34.816Z · score: 0 (0 votes) · EA · GW

Yes, medical science has no normative force. The fact smoking leads to cancer is a claim about causal relationship between phenomena in the physical world. The fact cancer causes suffering and death is also such a relationship. The idea that suffering and death are evil is already a subjective preference (subjective not in the sense that it is undefined but in the sense that different people might have different preferences; almost all people prefer avoiding suffering and death but other preferences might have more variance).

Comment by squark on Effective Altruism and ethical science · 2016-01-28T08:50:40.136Z · score: 0 (0 votes) · EA · GW

I completely don't understand what you mean by "killing people is incorrect." I understand that "2+2=5" is "incorrect" in the sense that there is a formally verifiable proof of "not 2+2=5" from the axioms of Peano arithmetic. I understand that general relativity is "correct" in the sense that we can use it to predict results of experiments and verify our predictions (on a more fundamental level, it is "correct" in the sense that it is the simplest model that produces all previous observations; the distinction is not very important at the moment). I don't see any verification procedure for the morality of killing people, except checking whether killing people matches the preferences of a particular person or the majority in a particular group of people.

"I used to be a meat-eater, and did not care one bit about the welfare of animals... Through careful argument over a year from a friend of mine, I was finally convinced that was a morally incorrect point of view. To say that it would be impossible to convince a rational murderer who doesn't mind killing people that murder is wrong is ludicrous."

The fact you found your friend's arguments to be persuasive means there was already some foundation in your mind from which "eating meat is wrong" could be derived. The existence of such a foundation is not a logical or physical necessity. To give a radical example, imagine someone builds an artificial general intelligence programmed specifically to kill as many people as it can, unconditionally. Nothing you say to this AGI will convince it that what it's doing is wrong. In the case of humans, there are many shared values because we all have very similar DNA and most of us are part of the same memetic ecosystem, but it doesn't mean all of our values are precisely identical. It would probably be hard to find someone who has no objection to killing people deep down, although I wouldn't be surprised if extreme psychopaths like that exist. However, other more nuanced values may vary more significantly.

Comment by squark on Effective Altruism and ethical science · 2016-01-27T09:02:03.306Z · score: 0 (0 votes) · EA · GW

"I'm not entirely sure what you mean here. We don't argue that it's wrong to interfere with other cultures."

I was refuting what appeared to me as a strawman of ethical subjectivism.

"If someone claims they kill other humans because it's their moral code and it's the most good thing to do, that doesn't matter. We can rightfully say that they are wrong."

What is "wrong"? The only meaningful thing we can say is "we prefer people not to die, therefore we will try to stop this person." We can find other people who share this value and cooperate with them in stopping the murderer. But if the murderer honestly doesn't mind killing people, nothing we say will convince them, even if they are completely rational.

Comment by squark on Effective Altruism and ethical science · 2016-01-27T08:51:46.154Z · score: 1 (1 votes) · EA · GW

Thanks for replying!

"There are no moral qualities over and above the ones we can measure, either a) in the consequences of an act, or b) in the behavioural profiles or personality traits in people that reliably lead to certain acts. Both these things are physical (or, at least, material in the latter case), and therefore measurable."

The parameters you measure are physical properties to which you assign moral significance. The parameters themselves are science, the assignment of moral significance is "not science" in the sense that it depends on the entity doing the assignment.

The problem with your breatharianism example is that the claim "you can eat nothing and stay alive" is objectively wrong but the claim "dying is bad" is a moral judgement and therefore subjective. That is, the only sense in which "dying is bad" is a true claim is by interpreting it as "I prefer that people won't die."

Comment by squark on Effective Altruism and ethical science · 2016-01-26T09:59:23.590Z · score: 8 (8 votes) · EA · GW

This essay comes across as confused about the is-ought problem. Science in the classical sense studies facts about physical reality, not moral qualities. Once you already decided something is valuable, you can use science to maximize it (e.g. using medicine to maximize health). Similarly if you already decided hedonistic utilitarianism is correct you can use science to find the best strategy for maximizing hedonistic utility.

I am convinced that ethics is subjective, not in the sense that any claim about ethics is as good as any other claim, but in the sense that different people and different cultures can possess different ethics (although perhaps the differences are not very substantial) and there is no objective measure by which one is better than the other. In other words, I think there is an objective function that takes a particular intelligent agent and produces a system of ethics but it is not the constant function.

Assessing the quality of conscious experiences using neuroscience might be a good tool for helping moral judgement, but again it is only useful in light of assumptions about ethics that come from elsewhere. On the other hand neuroscience might be useful for computing the "ethics function" above.

The step from ethical subjectivism to the claim that it's wrong to interfere with other cultures seems to me completely misguided, even backwards. If according to my ethics your culture is doing something bad, then it is completely rational for me to stop your culture from doing it (at the same time it can be completely rational for you to resist). There is no universal value of "respecting other cultures" any more than any other value is universal. If my ethics happens to include the value of "respecting other cultures" then I need to find the optimal trade-off between allowing the bad thing to continue and violating "respect".

Comment by squark on Ethical offsetting is antithetical to EA · 2016-01-15T10:07:55.277Z · score: 2 (2 votes) · EA · GW

I don't think one should agonize over offsets. I think offsets are not a satisfactory solution to the problem of balancing resource spending on charitable vs. personal ends, since they don't reflect the correct considerations. If you admit X leads to mental breakdowns, then you should admit X is ruled out by purely consequentialist reasoning, without the need to bring in extra rules such as offsetting.

Comment by squark on The EA Newsletter & Open Thread - January 2016 · 2016-01-13T09:20:27.047Z · score: 1 (1 votes) · EA · GW

In the preferences page there is a box for "EA Profile Link." How does it work? That is, how do other users get from my username to the profile? I linked my LessWrong profile but it doesn't seem to have any effect...

Comment by squark on Ethical offsetting is antithetical to EA · 2016-01-13T09:06:02.854Z · score: 3 (3 votes) · EA · GW

Your reply seems to be based on the premise that EA is some sort of a deontological duty to donate 10% of your income towards buying bednets. My interpretation of EA is very different. My perspective is that EA is about investing significant effort into optimizing the positive impact of your life on the world at large, roughly in the same sense that a startup founder invests significant effort into optimizing the future worth of their company (at least if they are a founder that stands a chance).

The deviation from imaginary "perfect altruism" is either due to having values other than improving the world or due to practical limitations of humans. In neither case do moral offsets offer much help. In the former case, the deciding factor is the importance of improving the world versus the importance of helping yourself and your close circle, which offsets completely fail to reflect. In the latter case, the deciding factor is what you can actually endure without losing productivity to an extent which is more significant than the gain. Again, moral offsets don't reflect the relevant considerations.

Comment by squark on The Effective Altruism Newsletter & Open Thread - 15 December 2015 · 2015-12-19T07:01:51.071Z · score: 2 (2 votes) · EA · GW

I think downvoting as disagreement is terrible.

First, promoting content based on majority agreement is a great way to build an echo chamber. We should promote content which is high-quality (well written, well argued, thought-provoking, contains novel insights, provides a balanced perspective etc.). Hearing repetitions of what you already believe just amplifies your confirmation bias. I want to learn something new.

Second, downvoting creates a strong negative incentive against posting. Silencing people you disagree with is also a great way to build an echo chamber.

Third, downvoting based on disagreement creates a battle atmosphere. Instead of a platform for rational, well-meaning debate we risk turning into a scuffle between factions with different ideologies.

All in all I think the rules for downvoting posts should be slightly more lax than for downvoting comments. Downvoting a low-quality post is acceptable (but be very cautious before deciding something you disagree with is "low-quality"). Downvoting a comment is only acceptable when the comment is not in good faith (spam, trolling, flaming etc.). I think this is essential to maintain a healthy amicable atmosphere.

Comment by squark on Measuring QALYs from advocating a rational response to the Paris attacks and ISIS · 2015-11-26T16:45:38.136Z · score: 0 (0 votes) · EA · GW

Some simple observations.

To perform such a QALY estimate you need

  1. A credible model for predicting the consequences of possible responses
  2. An estimate of how likely your advocacy is to affect policy

Point 1 is something you need even to know what the best response is (and I'm currently not sure whether you have it).

Point 2 sounds like something that should have been researched by many people by now, but I'm far from an expert, so I have no specific suggestions.

Comment by squark on The Effective Altruism Newsletter & Open Thread - 23 November 2015 Edition · 2015-11-26T16:32:46.321Z · score: 3 (3 votes) · EA · GW

I think that most people here will tell you that we already know specific examples of such wrongdoing e.g. factory farming.

Comment by squark on The Effective Altruism Newsletter & Open Thread - 26 October 2015 Edition · 2015-10-29T21:33:11.966Z · score: 1 (1 votes) · EA · GW

I have some evidence that there are many software engineers who would gladly volunteer to code for EA causes (and some access to such engineers). What volunteering opportunities like that are available? EA organizations that need coders? Open source projects that can be classified as EA causes? Anything else?

Comment by squark on GiveBots vs. Humans · 2015-10-29T20:54:34.345Z · score: 0 (0 votes) · EA · GW

What do you mean by "accept what it really means to be Human"? To what end is it "more productive"?

Not every "human" thing is a good thing. Being susceptible to disease, old age and death is part of "being human," but it is a part I would gladly do without. On the other hand, being Human also means having the ability to use reason to find a better strategy than the one suggested by the initial emotional response.

A rational thinker should factor the limitations of their own brain into their decision making. Also, sometimes we do care about some people more than about other people (e.g. friends and family). However, certain behaviors are simply bugs (in more scientific language, cognitive biases). There is no rational reason to "accept" them if we can find a way to work around them.

Comment by squark on Charities I Would Like to See · 2015-09-22T06:33:04.281Z · score: 0 (0 votes) · EA · GW

This strikes me as a strange choice of words since e.g. I think it is good to occasionally experience sadness. But arguing over words is not very fruitful.

I'm not sure this interpretation is consistent with "filling the universe with tiny beings whose minds are specifically geared toward feeling as much pure happiness as possible."

First, "pure happiness" sounds like a raw pleasure signal rather than "things conscious beings experience that are good" but ok, maybe it's just about wording.

Second, "specifically geared" sounds like wireheading. That is, it sounds like these beings would be happy even if they witnessed the holocaust which again contradicts my understanding of "things conscious beings experience that are good." However I guess it's possible to read it charitably (from my perspective) as minds that have superior ability to have truly valuable experiences i.e. some kind of post-humans.

Third, "tiny beings" sounds like some kind of primitive minds rather than superhuman minds as I would expect. But maybe you actually mean physical size in which case I might agree: it seems much more efficient to do something like running lots of post-humans on computronium than allocating for each the material resources of a modern biological human (although at the moment I have no idea what volume of computronium is optimal for running a single post-human: on the one hand, running a modern-like human is probably possible in a very small volume, on the other hand a post-human might be much more computationally expensive).

So, for a sufficiently charitable (from my perspective) reading I agree, but I'm not sure to which extent this reading is aligned with your actual intentions.

Comment by squark on Charities I Would Like to See · 2015-09-21T19:38:36.659Z · score: 0 (2 votes) · EA · GW

Upvoted, because although I disagree with much of this on object level, I think the post is totally legit and I think we should encourage original thinking.

Perhaps we need to find a time and place to start a serious discussion of ethics. I think hedonistic utilitarianism is wrong already on the level of meta-ethics. It seems to assume the existence of universal morals, which from my point of view is a non-sequitur. Basically all discussions of universal morals are games with meaningless words, maps of non-existing territories.

The only sensible meta-ethics I know is equating ethics with preferences. It seems that there is such a thing as intelligent agents with preferences (although we have no satisfactory mathematical definition yet). Of course each agent has its own preferences, and the space of possible preferences is quite big (orthogonality thesis). Hence ethical subjectivism.

Human preferences don't seem to differ much from human to human, once you take into account that much of the differences in instrumental goals are explained by different beliefs rather than different terminal goals (=preferences). Therefore it makes sense in certain situations to use approximate models of ethics that don't explicitly mention the reference human, like utilitarianism.

On the other hand, there is no reason the precise ethics should have a simple description (complexity of value). It is a philosophical error to think ethics should be low-complexity like physical law, since ethics (=preferences) is a property of the agent and has quite a bit of complexity put in by evolution. In other words, ethics is in the same category as the shape of Africa rather than Einstein's equations. Taking simplified models which take only one value into account (e.g. pleasure) to the extreme is bound to lead to abhorrent conclusions as all other values are sacrificed.

Comment by squark on September Open Thread · 2015-09-20T07:26:45.037Z · score: 1 (1 votes) · EA · GW

I agree with Tom. I think the core values of EA have to include:

  1. Always keep looking for new creative ways to do better.
  2. Maintain an open, honest and respectful discussion with your peers.

In particular exploring new interventions and causes should always be in the EA spotlight. When you think something is an effective charity but most EAs wouldn't agree with you, in my book it's a reason to state your case loud and clear rather than self-censor.

Comment by squark on Might wireheaders turn into paperclippers? · 2015-09-14T15:15:58.600Z · score: 0 (0 votes) · EA · GW

"Hold until future orders" is one approach but it might turn out to be much more difficult than actually creating an AI with correct values. This is because the formal specification of metaethics (that is a mathematical procedure that takes humans as input and produces a utility function as output) should be of much lower complexity than specifying what it means to "protect from other AI but do nothing else."

Comment by squark on Might wireheaders turn into paperclippers? · 2015-09-14T08:14:34.451Z · score: 2 (4 votes) · EA · GW

I completely agree that many conceivable post-human futures have low value. See also the "unhumanity" scenario in my analysis. I think the term "existential risk" might be somewhat misleading, since what we're really aiming at is "existence of beings and experiences that we value" rather than just the existence of "something." That is, I view your reasoning not as an argument for caring less about existential risk but as an argument for working towards a valuable far future.

Regarding MIRI, I think their position is completely adequate since once we create a singleton which endorses our values it will guard us from all sorts of bad futures, not only from extinction.

Regarding "consciousness as similarity", I think it's a useful heuristic but it's not necessarily universally applicable. I consider certain futures in which I gradually evolve into something much more complex than my current self as positive, but one must be very careful about which trajectories to endorse. Building an FAI will save us from making irreversible mistakes, but if for some reason constructing a singleton turns out to be intractable, we will have to think of other solutions.

Comment by squark on Should people be allowed to ear-mark their taxes to specific policy areas for a price? · 2015-09-13T17:43:51.242Z · score: 0 (0 votes) · EA · GW

Interesting. One way to solve the replaceability problem is to force the government to announce a preliminary budget before the ear-marking "bids" and pledge to treat the bids as differentials with respect to the preliminary budget.

Comment by squark on On Values Spreading · 2015-09-13T17:32:12.365Z · score: 1 (1 votes) · EA · GW

The question is what is the mechanism of value spreading.

If the mechanism is having rational discussions then it is not necessarily urgent to have these discussions right now. Once we create a future in which there is no death and no economic pressures to self-modify in ways that are value destructive, we'll have plenty of time for rational discussions. Things like "experience machine" also fit into this framework, as long as the experiences are in some sense non-destructive (this rules out experiences that create addiction, for example).

If the mechanism is anything but rational discussion then

  1. It's not clear in what sense the values you're spreading are "correct" if it's impossible to convince other people through rational discussion.
  2. I would definitely consider this sort of intervention evil and would fight it rather than cooperate with it (at least assuming the effect cannot be reversed by rational discussion; I also consider hedonistic utilitarianism abhorrent except as an approximate model in very restricted contexts).

Regarding MIRI in particular, I don't think the result of their work depends on the personal opinions of its director in the way you suggest. I think that any reasonable solution to the FAI problem will be on the meta-level (defining what it means for values to be "correct") rather than the object level (hard-coding specific values like animal suffering).

Comment by squark on The Bittersweetness of Replaceability · 2015-07-12T18:23:17.255Z · score: 6 (6 votes) · EA · GW

I think the main comparative advantage (= irreplaceability) of the typical EA comes not from superior technical skill but from motivation to improve the world (rather than make money, advance one's career or feel happy). This means researching questions which are ethically important but not grant-sexy, donating to charities which are high-impact but don't yield a lot of warm-fuzzies, promoting policy which goes against tribal canon etc.

Comment by squark on Have we underestimated the risk of a NATO-Russia nuclear war? Can we do anything about it? · 2015-07-12T18:12:05.164Z · score: 0 (0 votes) · EA · GW

In the Arab Spring many of the revolutionary groups were radical Islamists rather than champions of liberal democracy. Also, I didn't say anything about revolution: in some cases a gradual transition is more likely to work.

Infiltrating an organization you hate while preserving sanity and your true values is a task few people are capable of. I'm quite certain I wouldn't make it.

I think that we need serious research + talking to people from the relevant countries to devise realistic strategies.

Comment by squark on Have we underestimated the risk of a NATO-Russia nuclear war? Can we do anything about it? · 2015-07-11T18:08:44.820Z · score: 1 (1 votes) · EA · GW

> Do you think Soviet attempts to foster communism in the US during the cold war were a stabilising influence?

Well, they might have been stabilizing if they had worked :) Although I think war between communist countries is much more likely than war between liberal democracies.

> Countries generally and rightfully take affront at foreigners trying to meddle with their internal affairs.

I mostly agree with the descriptive claim but not with the normative claim. Why "rightfully"?

> For a more recent example, look at the aftermath of the western coup in Ukraine.

"Western" coup? The revolutionaries were pro-Western to some extent, but why is it a good example of foreign meddling?

I agree that backfiring is a serious risk of such interventions but I don't think we should write them off completely. Moreover, interventions by private organizations, especially private organization whose support base is spread over many countries, seem much less likely to precipitate a diplomatic crisis than direct interventions by governments.

Comment by squark on Have we underestimated the risk of a NATO-Russia nuclear war? Can we do anything about it? · 2015-07-11T10:58:05.290Z · score: 0 (0 votes) · EA · GW

I'm not sure what the EA movement can do that will have a significant effect in the short term. In the long term we should be looking into establishing liberal democracy in countries which either possess nuclear weapons or have the capacity to develop them in the near future (Russia, China, North Korea, Pakistan, Iran...). For example, we can support the pro-liberalisation groups which already exist in these countries.

Comment by squark on Maximizing long-term impact · 2015-03-16T18:18:33.465Z · score: 1 (1 votes) · EA · GW

Hi Tom, thx for commenting!

For me, the meta-point that we should focus on steering into better scenarios was a more important goal of the post than explaining the actual scenarios. The latter serve more as examples / food for thought.

Regarding objections to Utopian scenarios, I can try to address them if you state the objections you have in mind. :)

Regarding dictatorships, I indeed focused on situations that are long-term stable since I'm discussing long-term scenarios. A global dictatorship with existing technology might be possible but I find it hard to believe it can survive for more than a couple of thousand years.

Comment by squark on Maximizing long-term impact · 2015-03-16T18:03:50.105Z · score: 0 (0 votes) · EA · GW

If your only requirement is for all sentient beings to be happy, you should be satisfied with a universe completely devoid of sentient beings. However, I suspect you wouldn't be (?)

Regarding definition of good, it's pointless to argue about definitions. We should only make sure both of us know what each word we use means. So, let's define "koodness(X)" to mean "the extent to which things X wants to happen actually happen" and "gudness" to mean "the extent to which what is happening to all beings is what they want to happen" (although the latter notion requires clarifications: how do we average between the beings? do we take non-existing beings into account? how do we define "happening to X"?)

So, by definition of kood, I want the future world to be kood(Squark). I also want the future world to be gud among other things (that is, gudness is a component of koodness(Squark)).

I disagree with Mill. It is probably better for a human being not to become a pig, in the sense that a human being prefers not becoming a pig. However, I'm not at all convinced a pig prefers to become a human being. Certainly, I wouldn't want to become a "Super-Droid" if it came at the cost of losing my essential human qualities.

Comment by squark on Maximizing long-term impact · 2015-03-09T15:16:23.951Z · score: 0 (0 votes) · EA · GW

My distribution isn't tight, I'm just saying there is a significant probability of large serial depth. You are right that much of the benefit of current work is "instrumental": interesting results will convince other people to join the effort.

Comment by squark on Maximizing long-term impact · 2015-03-09T15:03:26.571Z · score: 1 (1 votes) · EA · GW

Hi Uri, thanks for the thoughtful reply!

It is not necessarily bad for future sentients to be different. However, it is bad for them to be devoid of properties that make humans morally valuable (love, friendship, compassion, humor, curiosity, appreciation of beauty...). The only definition of "good" that makes sense to me is "things I want to happen" and I definitely don't want a universe empty of love. A random UFAI is likely to have none of the above properties.

Comment by squark on Maximizing long-term impact · 2015-03-07T16:12:43.431Z · score: 0 (0 votes) · EA · GW

I think the distance between our current understanding of AI safety and the required one is of a similar order of magnitude to the distance between the invention of the Dirac sea in 1930 and the discovery of asymptotic freedom in non-Abelian gauge theory in 1973. That is 43 years of well-funded research by the top minds of mankind. And that is without taking into account the engineering part of the project.

If the remaining time frame for solving FAI is 25 years, then:

  1. We're probably screwed anyway
  2. We need to invest all possible effort into FAI, since the tail of the probability distribution is probably fast-falling

On the other hand, my personal estimate regarding time to human level AI is about 80 years. This is still not that long.

Comment by squark on Maximizing long-term impact · 2015-03-05T19:59:59.566Z · score: 0 (0 votes) · EA · GW

Thx for the feedback and the references!

I think Ord's "coarse setting" is very close to my type II. The activities you mentioned belong to type II inasmuch as they consider specific scenarios or to type I inasmuch as they raise general awareness of the subject.

Regarding relative value vs. time: I absolutely agree! This is part of the point I was trying to make.

Btw, I was somewhat surprised by Ord's assessment of the value of current type III interventions in AI. I have a very different view. In particular, the 25-35 year time window he mentions strikes me as very short due to what Ord calls "serial depth effects". He mentions examples from the business literature on the time scale of several years, but I think that the time scale for this type of research is larger by orders of magnitude. AI safety research seems to me similar to fundamental research in science and mathematics: driven mostly by a small pool of extremely skilled individuals, with a lot of dependent steps, and thus very difficult to scale up.

Comment by squark on Long-term reasons to favour self-driving cars · 2015-03-05T19:48:13.835Z · score: 0 (0 votes) · EA · GW

In a way, the two are interchangeable: if we define "steps" as changes of given magnitude then faster change means more densely spaced steps.

There is another effect that has to be taken into account. Namely, some progress in understanding how to adapt to automation might be happening without the actual adoption of automation, that is, progress that occurs because of theoretical deliberation and broader publicity for the relevant insights. This sort of progress creates an incentive to move all adoption later in time.

Comment by squark on Long-term reasons to favour self-driving cars · 2015-02-22T21:18:09.910Z · score: 1 (1 votes) · EA · GW

Your toy model makes sense. However, if instead of considering the future automation technology X we consider some past (already adopted) automation technology Y, the conclusion would be opposite. Therefore, to complete your argument you need to show that in some sense the next significant step in automation after self-driving cars is closer in time than the previous significant step in automation.

Comment by squark on Long-term reasons to favour self-driving cars · 2015-02-19T18:15:22.142Z · score: 0 (0 votes) · EA · GW

Thx for replying!

I'm still not sure I follow your argument in full. Consider two scenarios:

  1. Self-driving cars are adopted soon. Progress in automation continues. Automation is eventually adopted in other areas as well.

  2. Self-driving cars are adopted later. Progress in automation still continues, in particular through advances in other field such as computer game AI. Eventually, self-driving cars and automation in other areas are adopted.

In each of these scenarios, we can consider the time at which a given type/level of automation is adopted. You claim that in scenario 2 these times will be spaced more densely than in scenario 1. However, a priori it is possible to imagine that in scenario 2 all of these times occur later but with the same spacing.

What am I missing?
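The two competing hypotheses above can be made concrete with a toy timeline. This is a minimal sketch, not anything from the original discussion: all dates and the `spacings` helper are invented purely for illustration.

```python
# Toy illustration (all numbers hypothetical): two ways delayed adoption of
# self-driving cars could shift the timeline of later automation milestones.

# Scenario 1: early adoption; years at which successive automation
# milestones are reached.
scenario_1 = [2025, 2035, 2045, 2055]

# Scenario 2a: pure time translation -- every milestone happens 10 years
# later, but the spacing between milestones is unchanged.
scenario_2a = [t + 10 for t in scenario_1]

# Scenario 2b: the claim under discussion -- milestones start later but are
# spaced more densely, because progress continued in other fields meanwhile.
scenario_2b = [2035, 2040, 2045, 2050]

def spacings(times):
    """Gaps (in years) between consecutive milestones."""
    return [b - a for a, b in zip(times, times[1:])]

print(spacings(scenario_1))   # [10, 10, 10]
print(spacings(scenario_2a))  # [10, 10, 10] -- same spacing, just later
print(spacings(scenario_2b))  # [5, 5, 5]    -- denser spacing
```

The question in the comment is exactly which of 2a or 2b is the right picture: the argument for self-driving cars as a long-term good goes through only under the denser-spacing hypothesis (2b), not under pure translation (2a).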

Comment by squark on Long-term reasons to favour self-driving cars · 2015-02-18T18:17:19.005Z · score: 1 (1 votes) · EA · GW

Hi Owen and Sebastian,

The assumption behind your argument seems to be that slowing (resp. accelerating) progress in automation will result in faster (resp. slower) changes in the future rather than e.g. uniform time translation. Can you explain the reasoning behind this assumption in more detail?

Comment by squark on The Value of a Life · 2015-02-17T18:49:15.847Z · score: 0 (2 votes) · EA · GW

Hi Nate, nice post!

I think you're describing the difference between instrumental value and terminal value. The market price of something is its instrumental value. A dollar is valuable because of the things you can buy with it, not because of intrinsic worth. On the other hand, human lives, happiness etc. have intrinsic worth. I think that the distinction will persist in almost any imaginable universe although the price ratios can be vastly different.

Comment by squark on On making spaces friendlier to parents · 2015-02-05T09:01:52.135Z · score: 0 (0 votes) · EA · GW

Hi Julia, thx for replying!

I don't know enough about the vegetarian community but I think that it grew so much recently that it might be considered a young movement, like EA (it is also a related movement, obviously). Political opinions definitely seem to be transmitted from parent to child, at least from my experience. It is true that there are "teenage rebellions" but I think that the opposite is more common. Academic ideas are often very narrow-field and of little interest to the wide public so a different approach is natural.

I'm not planning to put pressure on my son either. But I'm definitely planning to expose him to my own worldview. My hope is that if I provide him with the requisite intellectual tools and expose him to knowledge, there is a good chance he will adopt a large part of my worldview. After all, in a sufficiently rational mind truth should triumph over falsehood, and if my ideas are not true then let them perish.

Comment by squark on Effective Altruism and Utilitarianism · 2015-02-01T19:20:03.311Z · score: 0 (2 votes) · EA · GW

Hi Tom,

Thx for starting a discussion on moral philosophy: I find it interesting and important!

It seems to me that you're wrong when you say that assigning special importance to people closer to oneself makes one a non-consequentialist. One can measure actions by their consequences and measure the consequences in ways that are asymmetric with respect to different people.

Personally I believe that ethics is a property of the human brain and as such it

  1. Has high Kolmogorov complexity (complexity of value). In particular it is not just "maximize pleasure - pain" or something like that (even though pleasure might be a complex concept in itself).
  2. Varies from person to person and between different moments in the life of the same person.
  3. Is unlikely to assign equal value to all people, since that doesn't make much sense evolutionarily. Yes, I know we are adaptation executors rather than fitness optimizers. Nevertheless, the thing we do optimize (which is not evolutionary fitness) came about through evolution and I see no reason it would be symmetrical with respect to permutations of people.

Btw, the last point doesn't mean you shouldn't give to people you don't know. It just means you shouldn't reach the point where your own family is at subsistence level.

Comment by squark on The Privilege of Earning To Give · 2015-02-01T19:02:48.689Z · score: 2 (2 votes) · EA · GW

Well, the problem with optimizing for a specific target audience is the risk of putting off other audiences. I would say something like:

Being born with advantages isn't something to feel guilty about. Being born with advantages is something to be glad about: it gives you that much more power to improve life for everyone.