The point of my comment was that even if you're 100% sure about the eventual interest rate move (which of course nobody can be), you still have major risk from path dependency (as shown by the concrete example). You haven't even given a back-of-the-envelope calculation for the risk-adjusted return, and the "first-order approximation" you did give (which both uses leverage and ignores all risk) may be arbitrarily misleading, even for the purpose of "gives an idea of how large the possibilities are". (Because if you apply enough leverage and ignore risk, there's no limit to how large the possibilities are of any given trade.)
We welcome other criticisms to discuss, but comments like your first line are not helpful!
I thought about not writing that sentence, but figured that other readers can benefit from knowing my overall evaluation of the post (especially given that many others have upvoted it and/or written comments indicating overall approval). Would be interested to know if you still think I should not have said it, or should have said it in a different way.
I think this post contains many errors/issues (especially for a post with >300 karma). Many have been pointed out by others, but I think at least several still remain unmentioned. I only have time/motivation to point out one (chosen for being relatively easy to show concisely):
Using the 3x levered TTT with duration of 18 years, a 3 percentage point rise in rates would imply a mouth-watering cumulative return of 162%.
Levered ETFs exhibit path dependency, or "volatility drag", because they reset their leverage daily, which means you can't calculate the cumulative return without knowing the path interest rates take on the way to the 3-percentage-point rise. TTT's website acknowledges this with a very prominent disclaimer:
Important Considerations
This short ProShares ETF seeks a return that is -3x the return of its underlying benchmark (target) for a single day, as measured from one NAV calculation to the next.
Due to the compounding of daily returns, holding periods of greater than one day can result in returns that are significantly different than the target return, and ProShares' returns over periods other than one day will likely differ in amount and possibly direction from the target return for the same period. These effects may be more pronounced in funds with larger or inverse multiples and in funds with volatile benchmarks.
You can also compare 1 and 2 and note that from Jan 1, 2019 to Jan 1, 2023, the 20-year treasury rate went up ~1%, but TTT is down ~20% instead of up (ETA: and has paid negligible dividends).
A related point: The US stock market has averaged 10% annual returns over a century. If your style of reasoning worked, we should instead buy a 3x levered S&P 500 ETF, get 30% return per year, compounding to 1278% return over a decade, handily beating out 162%.
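To make the daily-reset effect concrete, here's a toy simulation (a made-up price path, nothing to do with TTT's actual history): a benchmark that see-saws ±2% a day ends the year down only about 5%, but a fund that re-levers to 3x every morning ends up down far more than 3x that.

```python
# Toy illustration of volatility drag in a daily-reset 3x fund.
# The benchmark alternates +2% / -2% each trading day (made-up numbers).
leverage = 3
benchmark = 1.0
levered = 1.0
for day in range(252):
    r = 0.02 if day % 2 == 0 else -0.02
    benchmark *= 1 + r
    levered *= 1 + leverage * r  # leverage is reset to 3x at the start of each day

print(f"benchmark total return:  {benchmark - 1:+.1%}")       # about -4.9%
print(f"naive 3x of that:        {3 * (benchmark - 1):+.1%}")  # about -14.8%
print(f"daily-reset 3x fund:     {levered - 1:+.1%}")          # about -36.5%
```

The more volatile the benchmark, the bigger the gap, which is why the direction of the rate move alone doesn't pin down TTT's return.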
Pure selfishness can't work, since if everyone is selfish, why would anyone believe anyone else's PR? I guess there has to be some amount of real altruism mixed in, just that when push comes to shove, people who will make decisions truly aligned with altruism (e.g., try hard to find flaws in one's supposedly altruistic plans, give up power after you've gained power for supposedly temporary purposes, forgo hidden bets that have positive selfish EV but negative altruistic EV) may be few and far between.
Ignaz Semmelweis
This is just a reasonable decision (from a selfish perspective) that went badly, right? I mean if you have empirical evidence that hand-washing greatly reduced mortality, it seems pretty reasonable that you might be able to convince the medical establishment of this fact, and as a result gain a great deal of status/influence (which could eventually be turned into power/money).
The other two examples seem like real altruism to me, at least at first glance.
The best you can do is “egoism, plus virtue signalling, plus plain insanity in the hard cases”.
Question is, is there a better explanation than this?
Do you know any good articles or posts exploring the phenomenon of "the road to hell is paved with good intentions"? In the absence of a thorough investigation, I'm tempted to think that "good intentions" is merely a PR front that human brains put up (not necessarily consciously), and that humans deeply aligned with altruism don't really exist, or are even rarer than it looks. See my old post A Master-Slave Model of Human Preferences for a simplistic model that should give you a sense of what I mean... On second thought, that post might be overly bleak as a model of real humans, and the truth might be closer to Shard Theory, where altruism is a shard that only or mainly gets activated in PR contexts. In any case, if this is true, there seems to be a crucial problem of how to reliably do good using a bunch of agents who are not reliably interested in doing good, which I don't see many people trying to solve or even talk about.
(Part of "not reliably interested in doing good" is that you strongly want to do things that look good to other people, but aren't very motivated to find hidden flaws in your plans/ideas that only show up in the long run, or will never be legible to people whose opinions you care about.)
But maybe I'm on the wrong track and the main root cause of "the road to hell is paved with good intentions" is something else. Interested in your thoughts or pointers.
Over time, I've come to see the top questions as:
- Is there such a thing as moral/philosophical progress? If yes, is there anything we can feasibly do to ensure continued moral/philosophical progress and maximize the chances that human(-descended) civilization can eventually reach moral/philosophical maturity where all of the major problems that currently confuse us are correctly solved?
- Is there anything we might do prior to reaching moral/philosophical maturity that would constitute a non-negligible amount of irreparable harm? (For example, perhaps creating an astronomical amount of digital/simulated suffering would qualify.) How can we minimize the chances of this?
In one of your charts you jokingly ask, "What even is philosophy?" but I'm genuinely confused why this line of thinking doesn't lead a lot more people to view metaphilosophy as a top priority, either in the technical sense of solving the problems of what philosophy is and what constitutes philosophical progress, or in the sociopolitical sense of how best to structure society for making philosophical progress. (I can't seem to find anyone else who often talks about this, even among the many philosophers in EA.)
Would be interested in your (eventual) take on the following parallels between FTX and OpenAI:
- Inspired/funded by EA
- Taking big risks with other people's lives/money
- Attempt at regulatory capture
- Large employee exodus due to safety/ethics/governance concerns
- Lack of public details of concerns due in part to non-disparagement agreements
just felt like SBF immediately became a highly visible EA figure for no good reason beyond $$$.
Not exactly. From Sam Bankman-Fried Has a Savior Complex—And Maybe You Should Too:
It was his fellow Thetans who introduced SBF to EA and then to MacAskill, who was, at that point, still virtually unknown. MacAskill was visiting MIT in search of volunteers willing to sign on to his earn-to-give program. At a café table in Cambridge, Massachusetts, MacAskill laid out his idea as if it were a business plan: a strategic investment with a return measured in human lives. The opportunity was big, MacAskill argued, because, in the developing world, life was still unconscionably cheap. Just do the math: At $2,000 per life, a million dollars could save 500 people, a billion could save half a million, and, by extension, a trillion could theoretically save half a billion humans from a miserable death.
MacAskill couldn’t have hoped for a better recruit. Not only was SBF raised in the Bay Area as a utilitarian, but he’d already been inspired by Peter Singer to take moral action. During his freshman year, SBF went vegan and organized a campaign against factory farming. As a junior, he was wondering what to do with his life. And MacAskill—Singer’s philosophical heir—had the answer: The best way for him to maximize good in the world would be to maximize his wealth.
SBF listened, nodding, as MacAskill made his pitch. The earn-to-give logic was airtight. It was, SBF realized, applied utilitarianism. Knowing what he had to do, SBF simply said, “Yep. That makes sense.” But, right there, between a bright yellow sunshade and the crumb-strewn red-brick floor, SBF’s purpose in life was set: He was going to get filthy rich, for charity’s sake. All the rest was merely execution risk.
To give some additional context, China emitted 11,680 MT of CO2 in 2020, out of 35,962 MT globally. In 2022 it plans to mine 300 MT more coal than the previous year (which itself added 220 MT of coal production), causing an additional ~600 MT of CO2 from this alone (might be a bit higher or lower depending on what kind of coal is produced). Previously, China tried to reduce its coal consumption, but that caused energy shortages and rolling blackouts, forcing the government to reverse direction.
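(The coal-to-CO2 step above implicitly uses an emission factor of roughly 2 tonnes of CO2 per tonne of coal burned, which is in the right ballpark for thermal coal; the exact factor depends on the coal's carbon content, hence "a bit higher or lower":)

$$300\ \text{MT coal} \times \sim 2\ \text{t CO}_2/\text{t coal} \approx 600\ \text{MT CO}_2$$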
Given this, it's really unclear how efforts like persuading Canadian voters to take climate change more seriously can make enough difference to be considered "effective" altruism. (Not sure if that line in your conclusions is targeted at EAs, or was originally written for a different audience.) Perhaps EAs should look into other approaches (such as geoengineering) that are potentially more neglected and/or tractable?
To take a step back, I'm not sure it makes sense to talk about the "technological feasibility" of lock-in, as opposed to, say, its expected cost: if the only feasible method of lock-in caused you to lose 99% of the potential value of the universe, that would seem like a more important piece of information than "it's technologically feasible".
(On second thought, maybe I'm being unfair in this criticism, because feasibility of lock-in is already pretty clear to me, at least if one is willing to assume extreme costs, so I'm more interested in the question of "but can it be done at more acceptable costs", but perhaps this isn't true of others.)
That aside, I guess I'm trying to understand what you're envisioning when you say "An extreme version of this would be to prevent all reasoning that could plausibly lead to value-drift, halting progress in philosophy." What kind of mechanism do you have in mind for doing this? Also, you distinguish between stopping philosophical progress vs stopping technological progress, but since technological progress often requires solving philosophical questions (e.g., related to how to safely use the new technology), do you really see much distinction between the two?
Consider a civilization that has "locked in" the value of hedonistic utilitarianism. Subsequently some AI in this civilization discovers what appears to be a convincing argument for a new, more optimal design of hedonium, which purports to be 2x more efficient at generating hedons per unit of resources consumed. Except that this argument actually exploits a flaw in the reasoning processes of the AI (which is widespread in this civilization) such that the new design is actually optimized for something different from what was intended when the "lock in" happened. The closest this post comes to addressing this scenario seems to be "An extreme version of this would be to prevent all reasoning that could plausibly lead to value-drift, halting progress in philosophy." But even if a civilization was willing to take this extreme step, I'm not sure how you'd design a filter that could reliably detect and block all "reasoning" that might exploit some flaw in your reasoning process.
Maybe in order to prevent this, the civilization tried to lock in "maximize the quantity of this specific design of hedonium" as their goal instead of hedonistic utilitarianism in the abstract. But 1) maybe the original design of hedonium is already flawed or highly suboptimal, and 2) what if (as an example) some AI discovers an argument that they should engage in acausal trade in order to maximize the quantity of hedonium in the multiverse, except that this argument is actually wrong?
This is related to the problem of metaphilosophy, and my hope that we can one day understand "correct reasoning" well enough to design AIs that we can be confident are free from flaws like these, but I don't know how to argue that this is actually feasible.
I don't have good answers to your questions, but I just want to say that I'm impressed and surprised by the decisive and comprehensive nature of the new policies. It seems that someone or some group actually thought through what would be effective policies for achieving maximum impact on the Chinese AI and semiconductor industries, while minimizing collateral damage to the wider Chinese and global economies. This contrasts strongly with other recent US federal policy-making that I've observed, such as COVID, energy, and monetary policies. Pockets of competence seem to still exist within the US government.
But two formidable new problems for humanity could also arise
I think there are other AI-related problems that are comparable in seriousness to these two, which you may be neglecting (since you don't mention them here). These posts describe a few of them, and this post tried to comprehensively list my worries about AI x-risk.
They are building their own alternatives, for example CodeGeeX is a GPT-sized language model trained entirely on Chinese GPUs.
It used Huawei Ascend 910 AI Processors, which were fabbed by TSMC, and TSMC will no longer be allowed to make such chips for China.
absent a war, China can hope to achieve parity with the West (by which I mean the countries allied with the US including South Korea and Japan) on the hardware side by buying chips from Taiwan like everyone else
Apparently this is no longer true as of Oct 2022. From https://twitter.com/jordanschnyc/status/1580889364233539584:
Summary from Lam Research, which is involved with these new sanctions:
- All Chinese advanced computing chip design companies are covered by these sanctions, and TSMC will no longer do any tape-out for them from now on;
This was apparently based on this document, which purports to be a transcript of a Q&A session with a Lam Research official. Here's the relevant part, translated from the Chinese (and consistent with the above tweet):
Q: How are tape-outs for China at TSMC/GlobalFoundries affected?
A: It will be difficult to continue taping out high-compute chips for China; advanced-process but non-high-compute chips can still be provided, and specific technical thresholds have been given for what counts as "high-compute". For the 28 companies on the entity list, no chips of any kind can be taped out.
What precautions did you take or would you recommend, as far as preventing the (related) problems of falling in with the wrong crowd and getting infected with the wrong memes?
What morality and metaethics did you try to teach your kids, and how did that work out?
(Some of my posts that may help explain my main worries about raising a kid in the current environment: 1 2 3. Would be interested in any comments you have on them, whether from a parent's perspective or not.)
If the latter, we’re not really seeking ‘AI alignment’. We’re talking about using AI systems as mass ‘moral enhancement’ technologies. AKA ‘moral conformity’ technologies, aka ‘political indoctrination’ technologies. That raises a whole other set of questions about power, do-gooding, elitism, and hubris.
I would draw a distinction between what I call "metaphilosophical paternalism" and "political indoctrination", the difference being whether we're "encouraging" what we think are good reasoning methods and good meta-level preferences (e.g., preferences about how to reason, how to form beliefs, how to interact with people with different beliefs/values), or whether we're "encouraging" object-level preferences for example about income redistribution.
My precondition for doing this though, is that we first solve metaphilosophy, in other words have a thorough understanding of what "good reasoning" (including philosophical and moral reasoning) actually consists of, or a thorough understanding of what good meta-level preferences consist of. I would be the first to admit that we seriously lack this right now. It seems a very long shot to develop such an understanding before AGI, but I have trouble seeing how to ensure a good long term outcome for future human-AI civilization unless we succeed in doing something like this.
I think in practice what we're likely to get is "political indoctrination" (given huge institutional pressure/incentive to do that), which I'm very worried about but am not sure how to prevent, aside from solving metaphilosophy and talking people into doing metaphilosophical paternalism instead.
So, we better be honest with ourselves about which type of ‘alignment’ we’re really aiming for.
I have had discussions with some alignment researchers (mainly Paul Christiano) about my concerns on this topic, and the impression I get is that they're mainly focused on "aligned with individual people’s current values as they are" and they're not hugely concerned about this leading to bad outcomes like people locking in their current beliefs/values. I think Paul said something like he doesn't think many people would actually want their AI to do that, and others are mostly just ignoring the issue? They also don't seem hugely concerned that their work will be (mis)used for "political indoctrination" (regardless of what they personally prefer).
So from my perspective, the problem is not so much alignment researchers "not being honest with themselves" about what kind of alignment we're aiming for, but rather a confusing (to me) nonchalance about potential negative outcomes of AIs aligned with religious or ideological values.
ETA: What's your own view on this? How do you see things working out in the long run if we do build AIs aligned to people's current values, which include religious values for many of them? Based on this, are you worried or not worried?
If you think the Simulation Hypothesis seems likely, but the traditional religions are idiotic
I think the key difference here is that while traditional religions claim detailed knowledge about who the gods are, what they're like, what they want, and what we should do in light of such knowledge, my position is that we currently actually have little idea who our simulators are and can't even describe our uncertainty in a clear way (such as with a probability distribution), nor how such knowledge should inform our actions. It would take a lot of research, intellectual progress, and perhaps increased intellectual capacity to change that. I'm fairly certain that any confidence in the details of gods/simulators at this point is unjustified, and people like me are simply at a better epistemic vantage point compared to traditional religionists who make such claims.
These are the human values that religious people would want the AI to align with. If we can’t develop AI systems that are aligned with these values, we haven’t solved the AI alignment problem.
I also think that the existence of religious values poses a serious difficulty for AI alignment, but I have the opposite worry, that we might develop AIs that "blindly" align with religious values (for example locking people into their current religious beliefs because they seem to value faith), thus causing a great deal of harm according to more enlightened values.
It's not clear to me what should be done with religious values though, either technically or sociopolitically. One (half-baked) idea I have is that if we can develop a good understanding of what "good reasoning" consists of, maybe aligned AI can use that to encourage people to adopt good reasoning processes that eventually cause them to abandon their false religious beliefs and the values that are based on those false beliefs, or allow the AI to talk people out of their unjustified beliefs/values based on the AI's own good reasoning.
Have you seen Problems in AI Alignment that philosophers could potentially contribute to? (See also additional suggestions in the comments.) Might give your fellows some more topics to research or think about.
ETA: Out of those problems, solving metaphilosophy is currently the highest on my wish list. See this post for my reasons why.
I really appreciate this work. I've been looking into some of the same questions recently, but like you say, everything I've been able to find up to now seems very siloed and fails to take into account all of the potentially important issues. To convince people of your thesis, though, I think it needs more of the following:
- Discussion of more energy transition scenarios and their potential obstacles. It currently focuses a lot on the impossibility of using batteries to store a month's worth of electricity, but I'm guessing that it might be much more realistic to use batteries only for daily storage, with seasonal/longer-term variations being handled by a combination of overcapacity and fossil fuel backup, or by adaptation on the demand side.
- Discussion of counterarguments to your positions. You already do some of this (e.g. "Dave finds it pessimistic, he thinks they give too much importance to land use and climate impacts, and that the model should have higher efficiency and growth of renewables.") but would appreciate more details of the counterarguments and why you still disagree with them.
- In the long run, why is it impossible to build an abundant energy system using only highly available minerals? It seems like your main argument here is that renewables have low EROI, but why can't we greatly improve that in the future? For example, if much of the current energy investment into renewables goes to spending energy on maintaining living standards that workers demand (I don't know if this is actually true or not), we could potentially lower that amount by increasing automation. What are the fundamental limits to such improvements?
Yeah, seems like we've surfaced some psychological difference here. Interesting.
I’m just talking about the undeniable datum that we do, as a general rule, care more about those we know than we do about total strangers.
There are lots of X and Y such that, as a general rule, we care more about someone in X than we do about someone in Y. Why focus on X="those we know" and Y="total strangers" when this contrast is actually very weak compared to other Xs and Ys, and explains only a tiny fraction of the variation in how much we care about different members of humanity?
(By "very weak" I mean suppose someone you know was drowning in a pond, and a total stranger was drowning in another pond that's slightly closer to you, for what fraction of the people you know, including e.g. people you know from work, would you instinctively run to save them over the total stranger? (And assume you won't see either of them again afterwards, so you don't run to save the person you know just to avoid potential subsequent social awkwardness.) Compare this with other X and Y.)
If I think about the broader variation in "how much I care", it seems it's almost all relational (e.g., relatives, people who were helpful to me in the past, strangers I happen to come across vs distant strangers). And if I ask "why?", the answers I get are like "my emotions were genetically programmed to work that way", "because of kin selection", and "it was a good way to gain friends/allies in the EEA". Intrinsic / non-relational features (either the features themselves, or how much I know or appreciate the features) just don't seem to enter much into the equation.
(Maybe you could argue that upon reflection I'd want to self-modify away all that relational stuff and just value people based on their intrinsic features. Is that what you'd argue, and if so what's the actual argument? It seems like you sort of hint in this direction in your middle parenthetical paragraph, but I'm not sure.)
I’m very confident that ~0/100 people would choose D, which is what you’re arguing for!
In my post I said there's an apparent symmetry between M and D, so I'm not arguing for choosing D but instead that we are confused and should be uncertain.
By arguing that most people’s imagined inhabitants of utopia ‘shut up and multiply’ rather than divide, I’m just saying that these utopians care a lot about strangers, and therefore that caring about strangers is something that regular people hold dear as an important human value, even though they often fail at it.
Ok, I was confused because I wasn't expecting the way you're using 'shut up and multiply'. At this point I think you have an argument for caring a lot about strangers which is different from Peter Singer's. Considering your own argument, I don't see a reason to care how altruistic other people are (including people in imagined utopias), except as a means to an end. That is, if being more altruistic helps people avoid prisoners' dilemmas and tragedies of the commons, or increases overall welfare in other ways, then I'm all for that; but ultimately my own altruism values people's welfare, not their values. So if they were not very altruistic, but, say, there was a superintelligent AI in the utopia that made it so that they had the same quality of life, then why should I care either way? Why should or do others care, if they do? (If it's just raw unexplained intuitions, then I'm not sure we should put much stock in them.)
Also, historically, people imagined all kinds of different utopias, based on their religions or ideologies. So I'm not sure we can derive strong conclusions about human values based on these imaginations anyway.
So I think that “Shut Up and Divide” only challenges the Drowning Child argument insofar as you have very strange ethical intuitions, not shared by many.
Suppose I invented a brain modification machine and asked 100 random people to choose between:
- M(ultiply): change your emotions so that you care much more in aggregate about humanity than your friends, family, and self
- D(ivide): change your emotions so that you care much less about random strangers that you happen to come across than you currently do
- S(cope insensitive): don't change anything.
Would most of them "intuitively" really choose M?
Most people’s imagined inhabitants of utopia fit the former profile much more closely.
From this, it seems that you're approaching the question differently, analogous to asking someone if they would modify everyone's brain so that everyone cares much more in aggregate about humanity (thereby establishing this utopia). But this is like the difference between unilaterally playing Cooperate in Prisoners' Dilemma, versus somehow forcing both players to play Cooperate. Asking EAs or potential EAs to care much more about humanity than they used to, and not conditional on everyone else doing the same, based on your argument, is like asking someone to unilaterally play Cooperate, while using the argument, "Wouldn't you like to live in a utopia where everyone plays Cooperate?"
I don't think I understand what your argument is.
your intuitive sense of caring evolved when your sphere of influence was small
Even in our EEA we had influence beyond the immediate tribe, e.g., into neighboring tribes, which we were evolved to care much less about, hence inter-tribal raids, warfare, etc.
So it seems like your default behavior should be extended to your new circumstances instead of extending your new circumstances to default state.
I'm just not sure what you mean here. Can you explain with some other examples? (Are Daniel Kirmani's extrapolations of your argument correct?)
It seems empirically false and theoretically unlikely (cf kin selection) that our emotions work this way. I mean, if it were true, how would you explain things like dads who care more about their own kids that they've never seen than strangers' kids, (many) married couples falling out of love and caring less about each other over time, the Cinderella effect?
So I find it very unlikely that we can "level-up" all the way to impartiality this way, but maybe there are other versions of your argument that could work (in implying not utilitarianism/impartiality but just that we should care a lot more about humanity in aggregate than many of us currently do). Before going down that route though, I'd like to better understand what you're saying. What do you mean by the "intrinsic features" of the other person that makes them awesome and worth caring about? What kind of features are you talking about?
I don’t care to defend empty “cheap talk” signals, but the best virtue signals offer some proof of their claim by being difficult to fake.
"Cheap talk" isn't the only kind of virtue signaling that can go bad. During the Cultural Revolution, "cheap talk" would have been to chant slogans with everyone else, whereas "real" virtue signals (that are difficult to fake) would have been to physically pick up a knife and stab the "reactionary" professor, or pick up a gun and shoot at the "enemies of the revolution" (e.g., other factions of Red Guards).
To me, the biggest problem with virtue signaling is the ever-present possibility of the underlying social dynamics (that drive people to virtue signal) spiraling out of control and causing more harm than good, sometimes horrendous amounts of harm as in cases like the Cultural Revolution. At the same time, I have to acknowledge (like I did in this post) that without virtue signaling, civilization itself probably wouldn't exist. Ideally, we'd study and come to understand these dynamics, and then use that understanding to keep an optimal balance where we can take advantage of the positive side effects of virtue signaling while keeping the harmful ones at bay.
It seems key to the project of "defense of Enlightenment ideas" to figure out whether the Age of Enlightenment came about mainly through argumentation and reasoning, or mainly through (cultural) variation and selection. If the former, then we might be able to defend Enlightenment ideas just by, e.g., reminding people of the arguments behind them. But if it's the latter, then we might suspect that the recent decline of Enlightenment ideas was caused by weaker selection pressure towards them (allowing "cultural drift" to happen to a greater extent), or even a change in the direction of the selection pressure. Depending on the exact nature of the changes, either of these might be much harder to reverse.
A closely related line of inquiry is: what exactly were/are the arguments behind Enlightenment ideas? Did the people who adopted them do so for the right reasons? (My shallow investigation linked above suggests that the answer is at least plausibly "no".) In either case, how sure are we that they're the right ideals/values for us? While it seems pretty clear that Enlightenment ideas historically had good consequences in terms of, e.g., raising the living standards of many people, how do we know that they'll still have net positive consequences going forward?
To try to steelman the anti-Enlightenment position:
- People in "liberal" societies "reason" themselves into harmful conclusions all the time, and are granted "freedom" to act out their conclusions.
- In an environment where everyone has easy access to worldwide multicast communication channels, "free speech" may lead to virulent memes spreading uncontrollably (and we're already seeing the beginnings of this).
- If everyone adopts Enlightenment ideas, then we face globally correlated risks of (1) people causing harm on increasingly large scales and (2) cultures evolving into things we wouldn't recognize and/or endorse.
From the mid-1960s onwards, numerous Global-South scholars have advanced the term ‘underdevelopment’ to explain how colonizing powers and extractive corporations actively produce impoverished, so-called ‘undeveloped’ conditions through their extraction of materials, labour-power, and knowledge from the Global South (e.g., Frank, 1966; Rodney, 1972; Hickel, 2017).
I'm curious, how do such scholars explain why "colonizing powers and extractive corporations" failed to produce ‘undeveloped’ conditions in places like South Korea and Japan, whereas places that definitively kept out "colonizing powers and extractive corporations" such as China (from about 1950 to 1980) and North Korea were/are nevertheless afflicted with ‘undeveloped’ conditions?
I mostly think it needs a defense of Enlightenment ideas such as reason and liberalism
Have you ever looked into how Enlightenment ideas came about and started spreading in the first place? I have but only in a very shallow way. Here's a couple of my previous comments about it:
Thanks for the response. I don't disagree with anything you say here, and to be clear, I have a lot of both empirical and moral uncertainty about this topic.
It’s also worth noting that many girls resisted being footbound in the first place.
This makes me think of another parallel: parents forcing kids to practice musical instruments, which a lot of kids also resist, and which arguably causes real suffering among the kids who hate doing it. (I'm thinking of places like China where this phenomenon is much more widespread than in the US.) How likely would a "moral campaign" to stop this be to succeed, without some economic force behind it?
Another parallel might be forcing kids to go to school and to do homework.
Was the "moral campaign" against footbinding itself actually about morality, or was it also mainly about economics and/or status? (Or maybe all these things are inextricably linked in our minds at a deep level.) At least one paper takes the latter perspective (albeit expressed in the language of "postcolonial feminism"). From its conclusions section:
Interestingly, foot-bound women were the strongest proponents of the practice in the face of the anti-footbinding movement. As Patricia Angela Sieber notes, from the perspective of these silenced Chinese women, footbinding was a “symbolic iconography of domesticity rather than [of] the deformed who hid themselves from government inspection and who reapplied binders as inspectors left.” Why did the women who were being ‘liberated’ resist these policies for liberation? Because the anti-footbinding movement was not a movement for the ‘liberation’ of feet-bound women as missionaries and reformers claimed, but was rather a product of “colonial conditions of global unevenness.”
The presence of Western missionaries and colonialists in China led to an anti-footbinding movement, which gained exposure on a global scale. Male reformers, who were already shamed by China’s military defeats, were further shamed by westerners who through anti-footbinding tactics brought forth a colonial perspective of ‘acceptable’ cultural practices, which these Chinese reformers accepted. As Kenneth Butler notes, “exploitation” occurs when an individual or group with power utilizes that power to his or her own advantage, at the expense of another individual or group without it. A review of the anti-footbinding historiography and feet-bound Chinese women’s histories reveals two acts of exploitation. The first was the colonialist imposition of morality upon the Chinese reformers and the second was the enforcement of anti-footbinding measures through imperial decrees, missionaries, and reform movements. Through the termination of practices such as footbinding, the Chinese nation did make the transition from tradition to modernity, but the end of footbinding was by no means achieved in the interest of foot-bound women whose voices and wants have since been marginalized in Chinese history.
It occurs to me that footbinding shares similarities with wearing high heels. Looks like I'm not the only one who noticed.
I posed a puzzle in Is the potential astronomical waste in our universe too small to care about?, which seems to arise in any model of moral uncertainty where the representatives of different moral theories can trade resources, power, or influence with each other, like here. (Originally my post assumed a Moral Parliament where the delegates can trade votes with each other.)
(There was some previous discussion of my puzzle on this forum, when the Greaves and Cotton-Barratt preprint came out.)
which could mean that invading Taiwan would give China a substantial advantage in any sort of AI-driven war.
My assessment is that actually the opposite is true: invading (and even successfully conquering) Taiwan would cause China to fall behind in any potential AI race. The reason is that absent a war, China can hope to achieve parity with the West (by which I mean the countries allied with the US including South Korea and Japan) on the hardware side by buying chips from Taiwan like everyone else, but if a war happened, the semiconductor foundries in Taiwan would likely be destroyed (to prevent them from falling to the Chinese government), and China lacks the technology to rebuild them without Western help. Even if the factories are not destroyed, critical supplies (such as specialty chemicals) would be cut off and the factories would become useless. Almost all of the machines and supplies that go into a semiconductor foundry are made outside Taiwan in the West, and while China is trying to develop its own domestic semiconductor supply chains, it's something like 10 years behind the state of the art in most areas and not catching up, because the enormous amount of R&D going into the industry across all of the West's supply chains is not something China can match on its own.
So my conclusion is that if China invades Taiwan, it would lose access to the most advanced semiconductor processes, while the West can rebuild the lost Taiwan foundries without too much trouble. (My knowledge about all of this came from listening to a bunch of different podcasts, but IIRC, Jon Y (Asianometry) on Semiconductor Tech and U.S.-China Competition should cover most of it.)
Here are some of my previous thoughts (before these SJ-based critiques of EA were published) on connections between EA, social justice, and AI safety, as someone on the periphery of EA. (I have no official or unofficial role in any EA orgs, have met few EA people in person, etc.) I suspect many EA people are reluctant to speak candidly about SJ for fear of political/PR consequences.
It’s economically feasible to go all solar without firm generation, at least in places at the latitude of the US (further north it becomes impossible, you’d need to import power).
How much does this depend on the costs of solar+storage continuing to fall? (In one of your FB posts you wrote "Given 10-20 years and moderate progress on solar+storage I think it probably makes sense to use solar power for everything other than space heating") Because I believe since you wrote the FB posts, these prices have been going up instead. See this or this.
Covering 8% of the US or 30% of Japan (eventually 8-30% of all land on Earth?) with solar panels would take a huge amount of raw materials, and mining has obvious diseconomies at this kind of scale (costs increase as the lowest cost mineral deposits are used up), so it seems premature to conclude "economically feasible" without some investigation into this aspect of the problem.
Taking the question literally, searching the term ‘social justice’ in EA forum reveals only 12 mentions, six within blog posts, and six comments, one full blog post supports it, three items even question its value, the remainder being neutral or unclear on value.
That can't be right. I think what may have happened is that when you do a search, the results page initially shows you only 6 each of posts and comments, and you have to click on "next" to see the rest. If I keep clicking next until I get to the last pages of posts and comments, I can count 86 blog posts and 158 comments that mention "social justice", as of now.
BTW I find it interesting that you used the phrase "even question its value", since "even" is "used to emphasize something surprising or extreme". I would consider questioning the values of things to be pretty much the core of the EA philosophy...
It seems to me that up to and including WW2, many wars were fought for economic/material reasons, e.g., gaining arable land and mineral deposits, but now, due to various changes, invading and occupying another country is almost certainly economically unfavorable (causing a net loss of resources) except in rare circumstances. Wars can still be fought for ideological ("spread democracy") and strategic ("control sea lanes, maintain buffer states") reasons (and probably others I'm not thinking of right now), but at least one big reason for war has mostly gone away at least for the foreseeable future?
Curious if you agree with this, and what you see as the major potential causes of war in the future.
Not directly related to the course, but since you're an economist with an interest in AI, I'm curious what you think about AGI will drastically increase economies of scale.
My own fantasy is that people will eventually be canceled for failing to display sufficient moral uncertainty. :)
Sounds like their positions are not public, since you don't cite anyone by name? Is there any reason for that?
There’s a very wide range of views on this question, from “misalignment risk is essentially made up and incoherent” to “humanity will almost certainly go extinct due to misaligned AI.” Most people’s arguments rely heavily on hard-to-articulate intuitions and assumptions.
My sense is that the disagreements are mostly driven "top-down" by general psychological biases/inclinations towards optimism vs. pessimism, rather than "bottom-up" as the result of independent lower-level disagreements over specific intuitions and assumptions. The reason I think this is that there seems to be a strong correlation between concern about misalignment risk and concern about other kinds of AI risk (i.e., AI-related x-risk in general). In other words, if the disagreement were "bottom-up", you'd expect at least some people who are optimistic about misalignment risk to be pessimistic about other kinds of AI risk, such as what I call "human safety problems" (see examples here and here), but in fact I don't seem to see anyone whose position is something like, "AI alignment will be easy or likely solved by default, therefore we should focus our efforts on these other kinds of AI-related x-risks that are much more worrying."
(From my limited observation, optimism/pessimism on AI risk also seems correlated to optimism/pessimism on other topics. It might be interesting to verify this through some systematic method like a survey of researchers.)
See this comment by Vladimir Slepnev and my response to it, which explain why I don't think UDT offers a full solution to anthropic reasoning.
Do you have a place where you've addressed critiques of Against Democracy that have come out after it was published, like the ones in https://quillette.com/2020/03/22/against-democracy-a-review/ for example?
Can you address these concerns about Open Borders?
- https://www.forbes.com/sites/modeledbehavior/2017/02/26/why-i-dont-support-open-borders
- Open borders is in some sense the default, and states had to explicitly decide to impose immigration controls. Why is it that every nation-state on Earth has decided to impose immigration controls? I suspect it may be through a process of cultural evolution in which states that failed to impose immigration controls ceased to exist. (See https://en.wikipedia.org/wiki/Second_Boer_War for one example that I happened to come across recently.) Do you have another explanation for this?
This is crazy, and I think it makes a lot more sense to just admit that part of you cares about galaxies and part of you cares about ice cream and say that neither of these parts are going to be suppressed and beaten down inside you.
Have you read Is the potential astronomical waste in our universe too small to care about? which asks the question, should these two parts of you make a (mutually beneficial) deal/bet while being uncertain of the size of (the reachable part of) the universe, such that the part of you that cares about galaxies gets more votes in a bigger universe, and vice versa? I have not been able to find a philosophically satisfactory answer to this question.
If you do, then one or the other part of you will end up with almost all of the votes when you find out for sure the actual size of the universe. If you don't, that seems intuitively wrong also, analogous to a group of people who don't take advantage of all possible benefits from trade. (Maybe you can even be Dutch booked, e.g. by someone making separate deals/bets with each part of you, although I haven't thought carefully about this.)
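To make the structure of such a deal concrete, here's a toy numerical sketch (all numbers made up, and not taken from the linked post): part A's values scale with the size of the universe while part B's don't, so A trades away votes in the small-universe branch for votes in the large-universe branch, and both parts come out ahead in expectation.

```python
# Toy sketch of the deal between the two parts (made-up numbers).
# Part A cares about galaxy-scale projects, so its stakes scale with universe size;
# part B cares about ice cream, so its stakes are the same in either world.
p_large = 0.5                                  # credence that the reachable universe is "large"
stakes_A = {"small": 1.0, "large": 1e10}       # how much A's values have at stake in each world

def expected_payoffs(votes_A_small, votes_A_large):
    """Expected value to each part, as a function of A's vote share in each world."""
    ev_A = ((1 - p_large) * votes_A_small * stakes_A["small"]
            + p_large * votes_A_large * stakes_A["large"])
    ev_B = (1 - p_large) * (1 - votes_A_small) + p_large * (1 - votes_A_large)
    return ev_A, ev_B

print("no deal:  ", expected_payoffs(0.5, 0.5))    # 50/50 votes in both worlds
print("with deal:", expected_payoffs(0.05, 0.9))   # A trades small-world votes for large-world votes
# Both parts gain in expectation, but once the universe's size is known,
# one part ends up with nearly all the votes -- which is the puzzle in the linked post.
```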
I’m focused, here, on a very specific type of worry. There are lots of other ways to be worried about AI -- and even, about existential catastrophes resulting from AI.
Can you talk about your estimate of the overall AI-related x-risk (see here for an attempt at a comprehensive list), as well as total x-risk from all sources? (If your overall AI-related x-risk is significantly higher than 5%, what do you think are the other main sources?) I think it would be a good idea for anyone discussing a specific type of x-risk to also give their more general estimates, for a few reasons:
- It's useful for the purpose of prioritizing between different types of x-risk.
- Quantification of specific risks can be sensitive to how one defines categories. For example one might push some kinds of risks out of "existential risk from misaligned AI" and into "AI-related x-risk in general" by defining the former in a narrow way, thereby reducing one's estimate of it. This would be less problematic (e.g., less likely to give the reader a false sense of security) if one also talked about more general risk estimates.
- Different people may be more or less optimistic in general, making it hard to compare absolute risk estimates between individuals. Relative risk levels suffer less from this problem.
If there are lots of considerations that have to be weighed against each other, then it seems easily the case that we should decide things on a case-by-case basis, as sometimes the considerations might weigh in favor of downvoting someone for refusing to engage with criticism, and other times they weigh in the other direction. But this seems inconsistent with your original blanket statement, "I don’t think any person or group should be downvoted or otherwise shamed for not wanting to engage in any sort of online discussion".
About online versus offline, I'm confused why you think you'd be able to convey your model offline but not online, as the bandwidth difference between the two doesn't seem large enough that you could do one but not the other. Maybe it's not just the bandwidth but other differences between the two mediums, but I'm skeptical that offline/audio conversations are overall less biased than online/text conversations. If they each have their own biases, then it's not clear what it would mean if you could convince someone of some idea over one medium but not the other.
If the stakes were higher or I had a bunch of free time, I might try an offline/audio conversation with you anyway to see what happens, but it doesn't seem like a great use of our time at this point. (From your perspective, you might spend hours but at most convince one person, which would hardly make a dent if the goal is to change the Forum's norms. I feel like your best bet is still to write a post to make your case to a wider audience, perhaps putting in extra effort to overcome the bias against it if there really is one.)
I'm still pretty curious what experiences led you to think that online discussions are often terrible, if you want to just answer that. Also are there other ideas that you think are good but can't be spread through a text medium because of its inherent bias?
(It seems that you're switching the topic from what your policy is exactly, which I'm still unclear on, to the model/motivation underlying your policy, which perhaps makes sense, as if I understood your model/motivation better perhaps I could regenerate the policy myself.)
I think I may just outright disagree with your model here, since it seems that you're not taking into account the significant positive externalities that a public argument can generate for the audience (in the form of more accurate beliefs, about the organizations involved and EA topics in general, similar to the motivation behind the DEBATE proposal for AI alignment).
Another crux may be your statement "Online discussions are very often terrible" in your original comment, which has not been my experience if we're talking about online discussions made in good faith in the rationalist/EA communities (and it seems like most people agree that the OP was written in good faith). I would be interested to hear what experiences led to your differing opinion.
But even when online discussions are "terrible", that can still generate valuable information for the audience, about the competence (e.g., reasoning abilities, PR skills) or lack thereof of the parties to the discussion, perhaps causing a downgrade of opinions about both parties.
Finally, even if your model is a good one in general, it's not clear that it's applicable to this specific situation. It doesn't seem like ACE is trying to "play private" as they have given no indication that they would be or would have been willing to discuss this issue in private with any critic. Instead it seems like they view time spent on engaging such critics as having very low value because they're extremely confident that their own conclusions are the right ones (or at least that's the public reason they're giving).
Still pretty unclear about your policy. Why is ACE calling the OP "hostile" not considered "meta-level" and hence not updateable (according to your policy)? What if the org in question gave a more reasonable explanation of why they're not responding, but doesn't address the object-level criticism? Would you count that in their favor, compared to total silence, or compared to an unreasonable explanation? Are you making any subjective judgments here as to what to update on and what not to, or is there a mechanical policy you can write down (that anyone can follow and achieve the same results)?
Also, overall, is your policy intended to satisfy Conservation of Expected Evidence, or not?
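(To spell out what I mean by that constraint, here's a minimal numeric sketch with made-up numbers: if observing an org respond to criticism would raise your estimate of it, then observing silence has to lower it, such that the probability-weighted average of your possible posteriors equals your prior.)

```python
# Conservation of Expected Evidence, with made-up numbers.
p_competent = 0.7                    # prior that the org is competent/trustworthy
p_respond_if_competent = 0.8         # assumed: competent orgs respond to criticism this often
p_respond_if_not = 0.4               # assumed: incompetent orgs respond this often

p_respond = p_competent * p_respond_if_competent + (1 - p_competent) * p_respond_if_not

# Posterior after each possible observation (Bayes' rule).
post_if_respond = p_competent * p_respond_if_competent / p_respond
post_if_silent = p_competent * (1 - p_respond_if_competent) / (1 - p_respond)

print(f"P(competent | responds) = {post_if_respond:.3f}")  # 0.824, up from 0.7
print(f"P(competent | silent)   = {post_if_silent:.3f}")   # 0.438, down from 0.7
print(f"expected posterior      = "
      f"{p_respond * post_if_respond + (1 - p_respond) * post_if_silent:.3f}")  # 0.700, equal to the prior
```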
ETA: It looks like MIRI did give at least a short object-level reply to Paul's takeoff speed argument along with a meta-level explanation of why they haven't given a longer object-level reply. Would you agree to a norm that said that organizations have at least an obligation to give a reasonable meta-level explanation of why they're not responding to criticism on the object level, and silence or an unreasonable explanation on that level could be held against them?