Posts

[Creative Writing Contest] [Poetry] [Referral] "The Bell-Buoys" 2021-09-14T19:07:30.666Z
[Creative Writing Contest] [Referral] [Poem] Akbar's Bridge 2021-09-14T06:59:23.030Z

Comments

Comment by WSCFriedman on [Creative Writing Contest] Counting the Living · 2021-12-26T02:21:01.113Z · EA · GW

Oh, this is good. I want more.

Thanks!

Comment by WSCFriedman on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-29T22:26:46.149Z · EA · GW

I am happy to read your arguments! Again, I do not intend to carry out a serious investigation of the topic until I have the time and energy to do it with full charity towards both sides and the ability to actually update, but I am glad to have evidence I can evaluate with more focus and in more detail when I do.

"You are assuming their lives are net util even if their lives may be miserable. (which I think is the repugnant conclusion? I've never really liked the framing of it either) Let's break this down."

Not quite. I am assuming their lives are not subjectively miserable even if they look like they are objectively miserable. That's what I mean by 'net util.' There are situations where people who look objectively happy commit suicide and situations where people who look objectively unhappy actively and strongly desire to keep living.

"Additionally, there are other negative externalities which you are not acknowledging[.]"

And there's additional positive externalities I'm not acknowledging! I would need to carry out a serious exploration of all the externalities and of the entire situation to feel comfortable making a decision on my own instead of trusting my most-trusted authorities, who eat meat.

"1. We don't care about qualia. We care about suffering."

I think it is possible for pain to exist without suffering, but I'm not sure suffering can exist without the-thing-I-am-labeling-qualia. I think that pain-without-suffering is possible either because the brain interprets pain in a non-suffering manner, or because there is nothing there to notice the pain - if I'm unconscious, there may be pain signals in my nervous system, my body may be flinching, but I do not suffer because I'm unconscious, so there's nobody there to suffer. These seem to be cheap examples that the thing is possible. I do not know whether or not it is true.

By "qualia," what I fundamentally mean is "the thing that makes pain into suffering and pleasure into joy." And I think I do require that in order to care about pain.

"The current conception of consciousness (correct me if I'm wrong any neuroscientists in the crowd) is that consciousness is the interaction of the thalamus and the cortex."

I am not a neuroscientist in the slightest and this is one of the things I would need to launch a serious investigation of when I launch a serious investigation, which I am not doing right now but which I agree is the highest launch-serious-investigation-priority once I have tried to figure out whether literally infinite positive and negative utility are relevant thanks to the existence of an afterlife.

"Some people live net negative lives and won't off themselves because they think suicide is a net neutral decision. (infinite bad and infinite good possibility after death) I don't see how them persisting is justification of net util."

And this is why I attempted to clarify (possibly in another thread?) that I feel that similar patterns persisted in classical antiquity, back before Hell and Heaven were common beliefs.

OK, back to the specific chicken welfare question:

"They are killed after birth because they are deemed worthless. I don't believe these were net util lives. I believe they were negative lives..."

I'm not sure if this helps, but I tend to think of comparing utilities across lifetimes as imagining serial reincarnation. Like, I-the-force-looking-out-from-behind-Bill-Friedman's-eyes lives through each life in turn.

But, in that case, lifespan matters. Two days of good life is two days of good life; two days of bad life is two days of bad life. Living for 2,000 days in one body seems to me equivalent to living for 2 days each in 1,000 bodies, except for how it changes the goodness or badness of those days.

But in that case - I mean, I don't actually know whether the male chicks' lives are worth living, because I haven't done the serious in-depth investigation required to know this. But if male lives were negative and female lives were positive, then since 7 years = 2556 days, each female life would outweigh 1,278 male lives.
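The day-counting comparison above is just weighted arithmetic; here is a minimal sketch with made-up per-day utilities (the ±1-per-day values are pure placeholders, not estimates of anything):

```python
# Pure illustration: assume, hypothetically, a male chick's 2-day life
# is worth -1 utility per day and a laying hen's 7-year (2556-day) life
# is worth +1 utility per day. Neither number is an empirical estimate.
male_days, male_util_per_day = 2, -1.0
female_days, female_util_per_day = 2556, +1.0

male_life = male_days * male_util_per_day        # -2.0
female_life = female_days * female_util_per_day  # +2556.0

# How many negative male lives does one positive female life offset?
offset_ratio = female_life / -male_life
print(offset_ratio)  # 1278.0
```

On these toy numbers one female life offsets 1,278 male lives; different per-day utilities would change the ratio linearly, which is why the whole question hangs on whether those lives are actually net positive or net negative.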

... But, also, I don't know if female chickens' lives are worth living! Or males! I do not know the answer and the investigation is on the queue.

Comment by WSCFriedman on EA Forum Creative Writing Contest: $22,000 in prizes for good stories · 2021-10-29T05:26:10.740Z · EA · GW

I've just submitted two stories! Hopefully they've landed properly. Thanks for the form!

Comment by WSCFriedman on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-28T00:47:28.283Z · EA · GW

I can understand that, logically speaking, but it does not suffice to convince me. This is especially true because of the percentage of people who attempt suicide, survive, and say later it was a giant mistake they regret. I could imagine a world in which people were usually or even often wrong both about committing suicide and about not committing suicide, but that seems to me like a lot of added complexity.

Comment by WSCFriedman on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-28T00:43:34.563Z · EA · GW

I agree completely.

Comment by WSCFriedman on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-27T20:10:57.879Z · EA · GW

I was absolutely implying this! That was a fundamental part of my system, which went unspoken and which I am happy to defend.

And it's why I pointed out that you don't see even semi-common mass suicide in the classical world, before the rise of Judeo-Christian beliefs about Heavenly and Hellish fates - when people thought of the afterlife as grey fuzz, if they thought there was an afterlife at all, and when the culture often considered suicide morally heroic rather than sinful. Mass suicide seems more common then, but even then it's very rare, almost always confined to cases where people had strong predictive reason to believe things were about to get much worse and were not going to get better, even though they didn't know about the hedonic treadmill.

(I think the most common case is 'we're about to be captured by an extremely cruel enemy, tortured, maybe killed, maybe worse, almost certainly enslaved if we survive' - and even then I don't think most of the population of sacked cities kills themselves first, it's just something you hear about a noticeable minority of people doing, in what is basically the worst situation that can happen.)

And evolutionary pressure against suicide is what I presume produced the hedonic treadmill. "Whatever happens, on the macro scale you will be happy slightly above the suicide rate" seems like a great thing for evolution to engineer in, and I'm not really surprised it did.

Comment by WSCFriedman on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-27T07:32:58.008Z · EA · GW

Yeah, IIRC both G.K. Chesterton and C.S. Lewis wrote about how anyone can just say "the future will agree with me," as a way of getting support for your ideas, but nobody really knows about the future and probably everyone is wrong because the future will be more complicated than anyone thinks, and so arguments from the future are bad logic and invalid. (I think that Lewis's is a bit of the Screwtape Letters and that Chesterton's essay is in "What's Wrong With The World.") So I endorse this complaint.

But I didn't include that in my description because I do in fact think veganism will take over the world once the technology gets far enough, so that wasn't my true objection to the story.

Comment by WSCFriedman on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-27T04:58:16.987Z · EA · GW

(Part 3 of 3, threaded because I want to discuss different things you bring up in different places.)

"You said you feel threatened by a piece like this which paints the current treatment of animals as something that will be viewed as horrific in the future and understand you may contribute in a small way to that? What do you make of the current treatment of animals in our society? (I'm very open to hearing your thoughts, even if they may be very different than my own)"

I appreciate it!

For clarification: I would not describe myself as "feeling threatened" in the sense of "my position is unstable," so much as I would say that I felt, as of the time I read the story some hours ago, as "threatened" in the sense that someone who was being subject to extortion might feel threatened; that is, threatened meaning having been made the subject of a threat. I do not rationally expect anyone is going to burn down my house - but that was the kind of reaction I had.

I do recognize that, as I eat meat and consume dairy products, I am engaging in a potential evil. My opinions on this are complicated, but I have not stopped doing so.

To begin with, I am a total-sum utilitarian; that is to say, I do not think the repugnant conclusion is repugnant. Creating people who would prefer to live is doing them a favor. Creating someone on condition he later die for you is ethical as long as he would agree that, yup, existing had totally been worth it, and as long as his life didn't cause enough suffering (in side effects) to counterbalance it. So for this reason, I default to non-vegetarianism.

There is still the 'factory farming is uniquely terrible' argument! I have a great deal of sympathy for this argument! However, I think the case is weaker than it seems.

First, I am not in fact convinced that animals have qualia? Like, that is kind of a weak argument, just multiply the probability that animals have qualia by the total sum of the utility conditional that they do and go on from there? But - we really don't understand where consciousness comes from or how it works and I don't really know that there's anything actually inside a chicken's skull capable of suffering. So I do want this point made before I go on with the second, more important one:

I believe in the hedonic treadmill; that is, that people vastly over- or underestimate how much their happiness will change based on predictable factors (see https://slatestarcodex.com/2016/03/23/the-price-of-glee-in-china/ for a recent extreme case). I know enough history to know that the past was really extremely horrifyingly terrible - and yet mass suicides are not a common feature of life in any period of history, even those periods when nobody believed in a morally relevant afterlife. Mass suicides did still happen occasionally, but (a) only under really extreme circumstances and (b) among people who did not know about the hedonic treadmill. So while I have no doubt that factory farming is worse for animals than conventional farming (other than the doubt of whether or not the animals are morally relevant), the question of whether it is literally worse than death is a much harder one.

You could still argue that, even if these arguments were persuasive, I should avoid eating meat anyway, just on the off chance it's a moral catastrophe. My response to that is really just that I am uncomfortable around Pascal's Mugging arguments, and while I feel that I should probably investigate them, I don't feel that I am compelled to obey every request that goes "Change your behavior or be at fault for a moral catastrophe!" I feel that being shaped like that is bad, because then anyone can extort you effortlessly. Low-probability arguments that might be important go on a queue ordered by probability, and I investigate them one at a time as I have time. Right now I'm trying to figure out which religion is true, if any. Next on the queue is a Serious Long-Term Investigation Of Animal Welfare, but I expect it will take a while to get there.

If you want me to unpack anything, I'm happy to do that! Alternatively speaking, if you'd like to provide me with material for my eventual investigation, I'd also be happy to add it to the list. But so far I don't have any specific plans for changes to my behavior.

Comment by WSCFriedman on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-27T04:37:18.848Z · EA · GW

(Part 2 of 3, threaded because I want to discuss different things you bring up in different places.)

"Also, the goal for a piece like this isn't just to convince people to go vegan. It's also to make vegans reflect about their own engagement on the issue."

I believe that the EA writing contest was established to fund the creation of art that would persuade people who are not currently EA of EA causes and make them think more highly of EA. Insofar as I am wrong, I am wrong; insofar as I am not wrong, art-for-rallying-the-base is not actually bad, but is off-topic for the contest.

Comment by WSCFriedman on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-27T04:33:43.584Z · EA · GW

I am perfectly willing to have a long, point-by-point disagreement with you! I'm going to divide it into three threads, though; one for the actual argument about veganism, one for a side note about your second-to-last paragraph, and one for the meta-argument about pointy persuasion vs nice persuasion. This post is for the last; that is, for the statement:

"There is a place for delicate and tender art, and other art should be more pointed and direct."

I'm going to disagree. I think that, in terms of 'ideological art', there is a place for art that persuades by attempting to convince someone that you are on their side, and a place for art that persuades by attempting to convince someone that they really should be on your side, and a place for art that rallies and inspires people who are already on your side, and a place for art that genuinely instructs on a basis that has nothing at all to do with persuasion.

But I don't think there's a place for 'pointed and direct' art in terms of persuading people. I think that most persuasion is marginal, and comes by a long series of individual debates at the end of each of which the person you're talking to feels "Yeah, that was a good point, you're a decent person." I think "Guided by the Beauty of our Weapons" (https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/) is instructive here, especially the quote: "First they ignore you, then they laugh at you, then they fight you, then they fight you half-heartedly, then they're neutral, then they grudgingly say you might have a point even though you're annoying, then they say on balance you're mostly right although you ignore some of the most important facets of the issue, then you win." But central to this is the step from fighting you to fighting you half-heartedly, and there is no way to get someone to take that step by offending them.

In my worldview, people largely change their minds via positive affect ("I like these people and these ideas, I want to associate with them") and negative affect ("those people are jerks; whatever they're for, I'm against"). Most argument consists of first subconsciously deciding what you want to believe, then steadily trying out arguments until you find one you can buy. It's not that you can't be truth-seeking - I try to be truth-seeking, but I try with great difficulty, aware of what I desire to be true and aware that what I desire to be true does not systematically correlate with what is true. Biases are hard, and one of them is "I am biased to dislike people who are mean to me, and to dislike everything they care about." And I think that when you get someone's back up, they become harder to persuade to the cause that offended them for quite a while, until the effect fades.

I play role-playing games, so I often think in terms of dice rolls and probabilities. One common mechanic in D&D and similar games is that each time you make a roll to resist a particular condition - some spell or poison or magical effect - you get +1 to the next roll to keep resisting it, because you've fought it before and can throw it off. Scott Alexander likes the cowpox metaphor: unpersuasive arguments for cause A inoculate you against potentially persuasive arguments for cause A, because you've already dismissed cause A once. In that context, the EA community, when picking arguments for the sake of persuading people, needs to choose not only what will persuade some of its potential audience but also what won't offend any of it, because every EA story read as 'an attack by EA on us' will make every person who has that reaction harder to persuade of EA in the future.

Comment by WSCFriedman on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-27T00:50:51.411Z · EA · GW

Ha!

Yeah, that's something where I think it would be a correct invocation of the rule if we wanted to implement the rule, but I don't think we want to implement the rule, so it's just funny. :P 

Comment by WSCFriedman on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-27T00:46:40.135Z · EA · GW

I am glad you are not unhappy with my post! I apologize if I am being too aggressive in this and I don't want to offend you.

But... I do identify with Whittaker. And I don't really feel that my opinion on how someone might view animal welfare has been altered, because - I feel threatened, and that isn't a good state to change your mind in? Insofar as I have reactions, they aren't scout-mindset I-desire-to-open-my-mind-to-the-topic; they're soldier-mindset I-am-under-intellectual-attack-and-must-defend-myself. I grant that you are probably correct that the future will condemn eating meat, but I still want - in terms of 'desire', not 'endorsed desire' - to come up with counterarguments, with the only required analysis being 'will this allow me to defend myself', not 'is it true'.

I don't think that most people operate by first feeling offended, then kindly and rationally considering the offensive argument. I think that doing that is a high-level skill that is difficult to learn, and that the more you offend someone at first, the more they're going to want to push back and the less they're going to want to listen to you. I can observe this in myself, and I theorize that it is responsible for phenomena I have observed in others. I expect a large portion (probably a large minority?) of other readers who are not already convinced you are right to become offended by your explanation, and I expect that, as a result, the story will not work well for the purpose of convincing people.

Comment by WSCFriedman on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-27T00:15:36.594Z · EA · GW

I... hmm. I'd guess the basic thing going on is irrational defensiveness of the sort where any documentary about the Israel/Palestine mess is going to get blasted by both sides because it is clearly and obviously biased in favor of the other side, regardless of how balanced it actually is? Like, writing a story about cancel culture in the future that doesn't condemn it is endorsing it? I'm trying to unpack my brain's explanation and I'm really not finding it a very convincing explanation.

I think the best I can come up with, in defensive-mode not explanation-mode, is: If this was a news article today, it would be pro-cancel-culture. It is not the style of article Scott Alexander would write, which would be an elaborate analysis with lots of graphs, it is not the kind of article a right-wing source would write, which would be scornful and mocking; it comes across in style as resembling the sort of thing that is neutral on the face of it but Really We Know What Opinion The New York Times Has About This Sort Of Thing.

This still doesn't look very convincing to me, to be clear! But I'm trying to explain my reaction. Which is not wholly reasonable but I will still defend as representative of a large portion of your target audience.

(And I don't really see the middle-of-the-road people as all that middle-of-the-road, or all that portrayed-as-unambiguously-good. Everyone back in the past wasn't all sorts of good things. If you had to put signs on all the past people of all the evils they didn't condemn, you've got 1 bit of useful information and 99 bits that could be compressed down to 'he was born in 1465 and had the standard opinions of his time and place except.' So, in that case, I did read it differently.)

(And - I sort of assume that factory farming will disappear as soon as tasty cheap synthetic meat shows up? Everyone will convert to vegetarianism when that happens. Once tasty cheap synthetic cheese and eggs and milk show up, everyone will convert to veganism. Then they will forget that veganism mattered and we will end up with Cordelia Vorkosigan, who 'doesn't eat anything but vat-protein if she can help it' and this comes up practically never because why would it? So that didn't really read to me as 'point of glory' so much as 'yup, plausible element of the future.')

Comment by WSCFriedman on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-27T00:03:38.860Z · EA · GW

That is 100% reasonable and I am probably not behaving reasonably! But I think the fact that I did freak out suggests that the way I read the story is at least plausible and that people having my reaction is a risk?

Comment by WSCFriedman on EA Forum Creative Writing Contest: Submission thread for work first published elsewhere · 2021-10-26T21:35:02.742Z · EA · GW

I'm pretty sure you already know about this old EA Scott Alexander post, but just to be sure: 

https://www.lesswrong.com/posts/pC47ZTsPNAkjavkXs/efficient-charity-do-unto-others

Comment by WSCFriedman on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-26T21:30:59.808Z · EA · GW

This story is on an issue where I do not agree with the standard EA consensus, so I feel as if my voice may be useful as an example of a 'person not yet persuaded', since that group is presumably the target audience for this fiction.

My body has completely switched over from 'relaxed, cheerful, listening to a story' to 'under threat'; sweating, faster heartbeats, soldier mindset instead of explorer, how-do-I-defend-myself-as-fast-as-I-can. I think this will not be a good story to make other people like EA more. I think it will make them like EA less.

To help clarify my position, I am starting from the position that 'cancel culture' is bad. A good deal of my horror and shock comes from seeing one of the things I like least about our current culture presented as a perpetual feature, blazed in titanic letters a hundred and fifty years in the future. But putting that aside - this does not feel to me like a persuasive argument. It feels like an attack. It isn't saying "you should join us"; it's saying "join us or we will destroy your memory." It doesn't read to me as "the future will condemn you," but as "we will make the future condemn you." I don't actually feel this story is persuasive or illustrative or teaches useful thinking habits. I think it's a threat: "You'll be shunned if you don't sign up with us."

But I - I think that the image of Good as that which can cooperate, Good as that which can be trusted, Good that you can relax and be safe and explore in the presence of because it won't try to hurt you - is worth preserving. And I don't think this story preserves it. I think reading this story will drive away people who don't already agree with you (based on my current emotional state) and fire up the people who already do (based on the upvoting), and I don't think these are good things to try to do.

Or it could just be that I'm treating this story unfairly because it pushed one of my buttons. That's also wholly possible.

Comment by WSCFriedman on [Creative Writing Contest] The Legend of the Goldseeker · 2021-10-25T22:07:16.409Z · EA · GW

I did not understand very well what the story was trying to say. It just seemed to me to be 'a series of bad things happened because of failures of effective communication and understanding'? I can read it as a criticism of overconfidence, but I feel as if there have already been a lot of criticisms of overconfidence, and at this point I'm kind of worried we need more criticisms of underconfidence. I did not end up with very strong opinions about the story either way, and I suspect that was a failure of my understanding at least as much as a failure of the story.

Comment by WSCFriedman on [Creative Writing Contest] The Rise of The Effective Shoppers · 2021-10-22T06:07:20.763Z · EA · GW

I think this is a very cute, clever story! I appreciate it and have upvoted it! I don't think I have any clever comments, though I'll let you know if I think of any.

Comment by WSCFriedman on [Creative Writing Contest] What You Do · 2021-10-19T06:26:43.087Z · EA · GW

This is a very nice little story and I definitely liked it. Thank you for writing it.

Comment by WSCFriedman on [Creative Writing Contest] Counting Beans · 2021-10-19T06:22:59.572Z · EA · GW

This absolutely amused me, in a grim way. Thanks.

Comment by WSCFriedman on [Creative Writing Contest] An Ordinary Plea · 2021-10-18T21:54:05.568Z · EA · GW

Reply-edit for clarification, to expand my response to one of your points: I think it is worth, in a lot of situations, judging based on "should it have worked" instead of "did it work." That your model predicted it shouldn't work and it did work is evidence your model is seriously flawed - just to be clear, I'm not arguing we should completely throw out the experiment and just go with our previous model. But, also, we shouldn't say "the one guy who won the lottery was right and everyone else was wrong," because everyone who bought a ticket had the same chance of winning, and ex ante the lottery was a losing bet for all of them.

(Unless the lottery was crooked but that's a side note.)
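The ex ante point can be made concrete with a toy lottery (all numbers invented for illustration): every ticket has the same negative expected value, even though ex post exactly one buyer "was right."

```python
# Invented toy lottery: 1,000,000 tickets sold at $2 each,
# one prize of $1,000,000, fair draw.
tickets = 1_000_000
ticket_price = 2.0
prize = 1_000_000.0

p_win = 1 / tickets
expected_value = p_win * prize - ticket_price  # 1.0 - 2.0 = -1.0
print(expected_value)  # -1.0
```

Ex ante, every buyer made the same losing one-dollar bet; the winner's profit is an ex post fact about luck, not evidence that the winner's decision was better than anyone else's.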

So, even if it worked, I still think the protagonist's motive was unreasonable; even if it worked, I don't feel it should have worked, statistically speaking, as opposed to him getting immediately spotted, arrested, and spending the next five years of his life in jail, in which he can do no good at all. Or someone's angry brother taking a shot at him with a firearm, causing him to die instantly after he'd donated only $8,000 to GiveWell's top charities, whereas if he'd peacefully sat back and worked a high-paying job he would have donated $800,000 over the course of his life. Or someone successfully suing in a class action to get all the donated money back, causing the Against Malaria Foundation to go bankrupt because it had already spent it all on bed nets and couldn't get a refund. Not that all of those are equally likely, but there are a lot of ways for his kind of plan to fail at levels of badness approaching these, and if it fails in one of those ways he has definitely killed people, and I don't find the assumption that he knew none of them would happen very persuasive.

Comment by WSCFriedman on [Creative Writing Contest] An Ordinary Plea · 2021-10-18T07:30:46.241Z · EA · GW

I'm glad you aren't offended! I get easily worried that I might be saying things in an offensive manner and I appreciate you reassuring me that I didn't! I am always very happy to write long and elaborate reviews of fiction and I am glad you appreciated it.

And I would agree that the protagonist is evil (indeed, he admits he is evil - he's quite clear that he enjoyed what he did) and also took a set of actions which may have had net-positive utility. I don't think we know that it did; it's possible that some vague combination of making people distrust EA-style arguments, imposing costs on people both directly (his victims) and indirectly (court costs, prison costs, stress to everyone vaguely associated, costs of additional security precautions taken because his existence is evidence of the world being less safe than you thought) and so forth and so on made it net-negative.

But I will confidently deny that he was in an epistemic position to expect his actions would be positive, let alone the optimal decision. I could theoretically imagine a world in which this was false, and he genuinely did have the knowledge required for his actions to be both ex post and ex ante optimal; but I don't think I can imagine a world in which I was in the epistemic state of knowing that he knew his actions were ex post and ex ante optimal - my mental state in such a world would be sufficiently different that I'm not sure I'd be the same person. So I'm really quite comfortable condemning him, though I'll admit I'd vote for life imprisonment instead of execution.

And Unsong is very interesting! It doesn't always succeed at what it's doing, as I mentioned I find the protagonist kind of boring, but it's trying such fascinating things and it succeeds sufficiently often to be worth reading.

Comment by WSCFriedman on [Creative Writing Contest] Communities of Rigor · 2021-10-14T07:38:42.903Z · EA · GW

I'm sorry, but I don't have anything to say about the story, because I didn't 'get' what it was saying.

Sorry. I don't know what you were trying to do because whatever you were trying to do you didn't succeed in doing it to me.

Comment by WSCFriedman on [Creative Writing Contest] [Fiction] The Fey Deal · 2021-10-13T20:01:00.881Z · EA · GW

I really like your long version, myself, but I'm already familiar with EA. :)

Comment by WSCFriedman on [Creative Writing Contest] An Ordinary Plea · 2021-10-13T19:59:42.427Z · EA · GW

Also I want to do a completely separate post in response to one of your short comments:

"What's wrong with the speaker's super-harsh utilitarianism?"

My immediate response, just automatic on reflex without engaging my brain's slow mode, is "planning fallacy / arrogance / sin of pride." What's wrong is that he assumes he's at a sufficiently strong level of knowledge, self-discipline, and self-control that he actually can pull off his ubermensch act, instead of it all going horribly wrong and blowing up in his face. That's always what's wrong with characters doing that. That's why so much EA thought focuses on the question of how to 'first, do no harm'. That's why EA takes the High Modernists so seriously as an object lesson, and why rationalist circles are the only ones I've ever been in where you can just say "Chesterton's Fence!" and the burden of proof automatically switches to the party arguing for a reform.

Writers have been asking that question and giving basically that answer for, what, a hundred and fifty years? Since "Crime and Punishment," I think, which is apparently from 1866. The most recent modern artwork that I found memorable that said it was Fate/zero, back in, IIRC, 2012. I'm not going to say you can't update timeless and eternal themes for a modern audience, that's a perfectly reasonable thing to do, but I didn't actually find that the story started interesting EA-type conversations (though, again, I thought it was very well-written - your voice and prose style are excellent), and the specific reason for that is because I didn't really see that it was doing anything new or exciting, philosophically speaking. Insofar as you intended it to have a different moral answer, that... didn't really come across? It was just asking the same question that had been answered so many times before.

Comment by WSCFriedman on [Creative Writing Contest] An Ordinary Plea · 2021-10-13T19:49:23.639Z · EA · GW

(For clarification: Step One is that the story has to work as a story, I agree with that completely, if I'm passing that over in my response, it isn't because I disagree, but because I agree too much to have anything interesting to say on it.)

Comment by WSCFriedman on [Creative Writing Contest] An Ordinary Plea · 2021-10-13T19:48:16.260Z · EA · GW

The odd thing is, I would use HPMOR as a model of how to do it right. Its main failing is that it doesn't make clear that the protagonist isn't a perfect propaganda figure and shouldn't be emulated - so the audience ends up thinking that the protagonist makes giant mistakes throughout the story because the author thinks those are the correct decisions, not because he's an eleven-year-old in over his head. But the author agrees with him enough that if you aren't paying careful attention, he comes off as an arrogant jerk the author endorses, instead of a person with many virtues and the vice of being an arrogant jerk.

(Clarification: I tried to read HPMOR twice and disliked it before I fell in love the third time. I think it is genuinely Great Art, but flawed in the way most Great Art is flawed - that of aiming for being twice as good as the best art that previously existed and ending up being an uneven mix of 160% as good and 60% as good, which may well make you throw the book at the wall during the 60% parts.)

But that side-note aside: I agree that making all EA-style figures into saints is a risk. People may well get turned off at being preached to; I know I do. Again, see my comment on "Blue Bird and Black Bird." But...

If you've ever read Scott Alexander's Unsong - I don't know if you have - the central figure in the story is the Comet King. He isn't the hero; the hero is a not-very-interesting nebbish stuck in the world the Comet King made. But the Comet King is EA, is really presented as purely good, and is a genuinely psychologically fascinating character. Every page he's on blazes with light and life and joy, and yet he isn't a Mary Sue, because by the time the main story starts, he has already lost. The book is about the aftermath of a perfectly good hero failing to save the world, the book is unquestionably pro-effective-altruist, and where the book fails, it isn't because it's being too pro-effective-altruist, it's because the protagonist of the main story is really kind of dull and uninteresting.

So I think it can work. As good examples of other fiction that work despite being ideological, I'd recommend Bujold's "Shards of Honor," Terry Pratchett's "Night Watch," and, as I said, "The Lord of the Rings."

But although I think there are problems with EA-heroes if badly written, I also think there's an equal and equivalent problem with EA-style villains, even if they are well-written. It can work, I agree, artistically speaking, to put a viewpoint you sympathize with but disagree with in the mouth of your villain, then make him take it to evil ends. That's something I've written and it's something I enjoy writing, because it allows you to have a strong ideological conflict between two good ends while still having a hero, and that has a lot of potential to work well.

But I'm not wholly comfortable with it, and this is why:

I think our subconscious or semi-conscious mind has bins labeled "traits of villains" and "traits of heroes," and when we see something in real life that we are used to thinking of as always being labeled in fiction with "trait of villain" or "trait of heroes," we apply that label in real life. When an artwork gives 'uses reason and logic to try to maximize good' as a trait of villains, especially in a culture in which everyone else is also using it as a trait of villain, it reinforces that as being part of the 'villain trait' box, since it's usually (not in your case!) paralleled with a conservative, chauvinistic anti-intellectual badass hero. I think this will genuinely cause people to immediately round off, in real life, 'tries to use reason and logic to maximize good' as something that will go horribly wrong in real life, and 'is a badass cowboy who plays by his own rules but accomplishes good things anyway because he's a FUNDAMENTALLY GOOD PERSON' as being something that will go right in real life.

In this case, if you want me to say I have less respect for people than you do, that's entirely possible; I don't think people rationally feel that they do this, I don't think people who slow down to think do this, but I think that people who operate on automatic do this; that they treat the Terminator movies as a model for how AI will go wrong in real life, and I think that this has bad consequences if everyone is saying the same thing about how 'trying to maximize good' will go horribly wrong.

Now, if you believe that EA will go horribly wrong in real life, if you really think that Engels ought to be treated as an early member of EA and as a model of how badly EA will go wrong in the future, it makes sense to write that. But I don't. And from your comments, I don't think you do.

Now, again, this isn't a reason why I think you should never do the 'charismatic villain arguing for an underappreciated cause' bit. Plenty of good stories have done it. I've done it. But if it isn't a cause you disapprove of, and most of the other writers writing about the cause have the same take you have on it (and if it's a thriving cause and the take is one of the first ones you thought of, they probably will) you're (a) doing something that is individually totally reasonable, and (b) contributing to an unreasonable aggregate. And that's not something I really approve of, and it's not something you... ex ante ought to expect members of the subculture to approve of? Or subsidize? Like, Larks' second point wasn't one I said explicitly, because it's attributing malice where malice isn't the most likely option, but I totally thought it before the obvious realization kicked in that it was statistically unlikely.

Comment by WSCFriedman on [Creative Writing Contest] An Ordinary Plea · 2021-10-12T22:25:32.328Z · EA · GW

I second everything Sophia said, but would like to raise a few other points to clarify:

First, I also dislike works-that-fit-in-my-brain's-internal-category-labeled-propaganda. (As some evidence of this, I'll offer my comment on 'blue bird and black bird'.) Nonetheless I feel that there is an enormous range for stories in which "make people reading it think more kindly of X" is a clear goal that do not fit in my brain's internal category of propaganda. It's quite clear that Lois McMaster Bujold is opposed to eternal smouldering guerilla wars of resistance and in favor of artificial wombs, and none of that makes the Vorkosigan Saga non-amazing. I can also offer basically any culture-clash story; usually those have the objective of making the audience say "both cultures have their good points and their bad points and their inhabitants are both human", but a lot of those still work effectively without making the audience zone out. None of those flip my "I am being propagandized to" switch.

Two possible explanations for what's going on.

First is the worldview emphasis. If the author "presents a world in which X is true," the audience can ask themselves, "is this world consistent? Does it resemble our world?" If it is straightforwardly true that, in the Lord of the Rings, power inherently corrupts, reading the Lord of the Rings gives us an opportunity to look at a world in which power inherently corrupts. Insofar as it seems coherent and non-self-contradictory, the audience has an example of "what could be" to compare that fictional world to their real world. Maybe it doesn't give you any useful real-world experience; maybe Tolkien is relying on factors that don't exist in his world, or are too weak to have the effects he describes. But by expanding the audience's worldview, the author gives them a new model to use to analyze reality.

Second is Yudkowsky's line: "Nonfiction conveys knowledge. Fiction conveys experience." We don't, most of us, have experience with trying to use logic and math to do good. A work of fiction in which a character tries to use logic and math to figure out how to do the most good, even if it is in a fantasy world in which you can use magic to heal people, gives us a starting model of "how to use logic and math to do the most good," lets us know that this is a thing you can do (at least in an apparently-but-not-necessarily self-consistent alternate universe), gives you a model (as previous paragraph) and thereby gets us a start on doing it in our universe. By a character giving you a model of one potential way to behave, the audience can learn that this is a potential way to behave, and thereby start wondering if it is worth trying to adapt any elements of it to their own lives, without the author ever needing to preach.

Comment by WSCFriedman on [Creative Writing Contest] An Ordinary Plea · 2021-10-11T21:20:28.709Z · EA · GW

This is a beautiful story, but I don't actually expect reading it to make people think more kindly of effective altruism. 

I could, of course, be wrong.

Comment by WSCFriedman on [Creative writing contest] Blue bird and black bird · 2021-10-09T04:39:35.443Z · EA · GW

I am commenting purely to let you know that one of the thumbs-up on your post is mine.

Comment by WSCFriedman on [Creative writing contest] Blue bird and black bird · 2021-10-09T04:39:00.867Z · EA · GW

Last I saw, "The Reset Button" was leading it by one vote.

Comment by WSCFriedman on [Creative Writing Contest] [Fiction] The Fey Deal · 2021-10-08T21:59:01.464Z · EA · GW

I'm sorry, but I have the weirdest bit of commentary to give on this: There are several places where the comma falls outside the quotation marks (if you search for a closing quote followed by a comma - ", - you'll find them), and it's making me go all twitchy-eyed. I'm about 95% confident commas are supposed to go inside the quotes, the way other punctuation does, at least in American English? Sorry about this.

Comment by WSCFriedman on [Creative writing contest] Blue bird and black bird · 2021-10-08T21:54:15.805Z · EA · GW

I could be mistaken, but I feel as if that would completely change it into a different sort of thing. I admit it would be a thing that I-personally would probably like more, but I feel it would also remove all the power the story currently possesses. I feel as if this would be removing a thing from existence and replacing it with a new and different thing, instead of improving a thing - and this is clearly a popular thing, since it's the second-highest-rated submission to the contest, so far.

Comment by WSCFriedman on [Creative Writing Contest] [Fiction] The Engine · 2021-10-08T21:49:45.911Z · EA · GW

I have a question for you, and I think the answer might help make the story clearer.

Under what context is your narrator giving this explanation? Why is he saying all this? What's the framing device for it? Because if he's trying to explain quick history to someone who doesn't know it (why doesn't the person know it? A small child? A foreigner? Just someone technologically ignorant?), he has no reason to bring up the analogue-vegetarians at all. Just "this is how cars work." If the listener then asks (possibly offpage) if this is wrong, he can explain "it's not like chickens actually matter" (and I wouldn't even say 'some people have a delusion chickens are people', we can just appeal to the perceived-as-true-to-most-of-humanity-belief that chickens have no moral value and leave it at that) "and anyway it's better than the alternatives which are all super-expensive."

But if, instead, he's bringing it up in the context of trying to argue someone out of analogue-vegetarianism, then he needs numbers. Then he would want the ability to say, "if everyone did that, that would dectuple the cost-per-mile of cars, the economy would collapse. Nobody would be able to afford to drive to work from their houses, we'd have to go back to coal power plants polluting the air, factories would close across the country, we'd be in a desperate battle for survival." It's not that these things would necessarily be true; in our world, which doesn't have the Phobic Reactor, our economy is fine. But if he's pitching his side's case, he isn't just going to say, "this side is delusional," he's going to say "and their delusions would have horrible consequences if people believed them." Otherwise he's leaving good arguments on the table.

Does this make sense? I'm not saying these are the only two possibilities, obviously; there's lots of other contexts in which he might be explaining. (An ad for the newest, super-efficient phobic-reactor-fueled-car, say - someone might explain history there, just to clarify how awesome the new product was.) But I think that thinking about the question would help with troubleshooting the story.

Comment by WSCFriedman on [Creative Writing Contest] [Fiction] The Engine · 2021-10-08T21:30:52.189Z · EA · GW

I mean, I see these as totally different things (preventing suffering in Nigeria - well, and other third-world countries - is why I'm here), but that's probably moving outside the question as posed. I wouldn't be willing to be a butcher, but that's squeamishness, not a moral decision; I wouldn't want to be a plumber, either.

But... actually no I think I'm going to move my actual advice to the 'do you have recommendations' thread just above. See you there!

Comment by WSCFriedman on [Creative Writing Contest] [Fiction] The Engine · 2021-10-07T22:38:35.665Z · EA · GW

On my second reread, I figured out what was supposed to be going on in the events, if not the meaning of the story. But while I considered factory farming as one possibility for the thing it was supposed to be equivalent to, I felt the analogy whiff, and so decided it probably wasn't what you intended.

The reason is, the story depends on your initial belief that animal suffering (specifically the suffering of chickens) is fundamentally important. But what it's trying to convince you of is that animal suffering is fundamentally important. So it's a closed loop. If you aren't a vegetarian and are a consequentialist (hi), it's saying "you know the thing you know a little about and don't really like, but don't have strong enough opinions about to change your behavior over? What if we had more of that, and less of lots of other things you DO feel strongly are bad?" My general attitude isn't that the narrator is wrong, it's that people don't talk like he does. He's talking as if he has some kind of dark and terrible secret to hide, but the secret is only dark and terrible if you start out believing he's wrong - and it isn't even a secret, because he's admitting it openly.

I feel as if, in order to write a story to make someone emotionally feel the importance of vegetarianism, you would need to say "X, which you already condemn, is morally equivalent to eating meat" in such a manner that people who read it actually agreed with you that X was morally equivalent to eating meat and that since they condemn X, they should stop eating meat, without instead having them say "But X isn't equivalent at all!" or - the trap I found this story to fall into - "why should I care about X?"

Because, conditional on chickens not having qualia, I don't care if they do have fear. The evil done in the story could be evil if we are supposed to believe that the fact that chickens do have fear proves they do have qualia, but I didn't get that idea from anywhere - so we're back to the closed loop, where you need to be a vegetarian to be convinced by the story of vegetarianism.

Comment by WSCFriedman on [Creative Writing Contest] [Fiction] The Engine · 2021-10-06T20:56:36.728Z · EA · GW

I'm sorry, but I straightforwardly don't get the story. It definitely feels like it's trying to make a grand analogy but the analogy does not, for me, connect. I don't know what it's trying to say - there's about twenty potential things I could imagine it having been written to be analogous to, all of which seem to me no more than 40%-70% fitting - and so I got no emotional charge out of reading it, only vague curiosity as to what it was meant for.

Comment by WSCFriedman on [Creative writing contest] Aging Parable · 2021-10-03T21:33:34.816Z · EA · GW

Not bad!

Comment by WSCFriedman on EA Forum Creative Writing Contest: $22,000 in prizes for good stories · 2021-10-01T21:16:29.827Z · EA · GW

Thank you! I've bookmarked it.

Comment by WSCFriedman on [Creative Writing Contest] [Fiction] Houseproud · 2021-09-28T06:00:43.972Z · EA · GW

This is a good story and I'm glad you got published, I just don't see the relevance to the EA contest. I'm glad you submitted it because it meant I got to read it, and it was totally worth reading, it's just that I don't really read it as "EA". If that makes sense?

Comment by WSCFriedman on [deleted post] 2021-09-27T21:08:53.255Z

Well, this was absolutely terrifying. Thanks for writing it.

Comment by WSCFriedman on [Creative Writing Contest] The Reset Button · 2021-09-26T07:39:39.104Z · EA · GW

I actually read the protagonist as 'probably suffering from radiation poisoning, might be about to literally die from the next bomb or the building collapsing' as of the moment before they hit the reset, so I would see such planning as irrational rather than sensible - a little information might help, but not if it risks your life (which is what you're thinking about if you're selfish) or the fate of the world (which is what you're thinking about if you're selfless).

Comment by WSCFriedman on Light Before Darkness · 2021-09-24T08:33:40.645Z · EA · GW

It makes some sense? The added thing makes everything more confusing, though.

Reading what you say feels like I'm reading words that have been translated out of a foreign language and culture, or were written in 17th-century English by a 17th-century author, or maybe you're a time traveler from the 22nd century and there's been linguistic drift since then? Or maybe you're a Zen monk and speak in koans? It isn't that I feel your culture is inconsistent or anything, it's just that you seem to be using words as if they had obvious secondary meanings and connotations that they don't have in my language.

Comment by WSCFriedman on Light Before Darkness · 2021-09-23T21:47:47.234Z · EA · GW

I'm sorry, but, having read it, I don't know what your religion is.

This is a serious statement: I don't actually know what you're trying to say, after having read it. I don't even know what you mean by writing-against or writing-towards.

I think you may be slightly understating the extent to which the illusion of transparency applies.

Comment by WSCFriedman on EA Forum Creative Writing Contest: $22,000 in prizes for good stories · 2021-09-22T22:08:39.328Z · EA · GW

Also, different comment that I'm kicking myself for not bringing up until now:

The Submission Grinder is a website that tracks places where people who write SF&F can submit stories. If you can get listed with them, that ought to bring more attention to the contest. 

Here's the link: https://thegrinder.diabolicalplots.com/

Comment by WSCFriedman on [Creative writing contest] Blue bird and black bird · 2021-09-22T08:45:25.190Z · EA · GW

Ah, but are there a thousand hacking at the branches of evil for each one who thinks they are striking at the root?

Comment by WSCFriedman on Light Before Darkness · 2021-09-22T08:43:20.509Z · EA · GW

I'm afraid my downvote wasn't articulate, but instinctive: It seemed like it wasn't actually saying anything, just being philosophical for the sake of being philosophical, or poetic for the sake of being poetic. I can't actually figure out how to translate it into what I think of as 'plain English'; I can't give a one-sentence summary of the themes, or of what you're trying to say, and it didn't reach the extraordinary (staggering) level of poetic beauty that would make me upvote it anyway, just because I enjoyed the words as music without knowing their meaning.

This isn't saying that there is no meaning! People don't usually say things that they think mean nothing. Just that I got no meaning out of it, and hence, if someone vaguely like me was going around saying "You know this EA thing? I'd kind of like some fiction to help me intuit how it works," I would not recommend it to them.

Comment by WSCFriedman on [Creative writing contest] Blue bird and black bird · 2021-09-21T22:25:28.220Z · EA · GW

See, the thing is, I can't find any improvements because the entire premise feels to me inherently propaganda-ish. I'm sorry, I can try to break it down into more detail, but I suspect that it will be unfixable for me.

I'm going to try to rank the main bullet points of my discomfort in order of how important they are, most to least.

• I feel very uncomfortable with the entire dynamic of a 'right way / wrong way' pair. Partly this is because of individual cases where people using it ticked me off, but fundamentally it is that the idea of the character 'who exists solely to be wrong' makes me uncomfortable. Even if he isn't a strawman, he makes me uncomfortable.
• • The spiritual equality of humanity is a fundamental... not religious but sub-religious belief for me? The kind of thing you make religion out of, or that determines which religions feel right to you? Starting with the statement "all people are equally valuable" implies "in representative fiction, all people ought to be equally valuable to the extent to which the work is attempting to be representative of people's mental and spiritual states." That's why, e.g., the Sharpe series will never be a favorite of mine - because there are all these horrible people who exist so the protagonist can be better than them.
• • Worse: My natural instinct is always to support the underdog. There's a part of my brain that thinks that the underdog must be right, purely because he is the underdog. And the ultimate underdog is the one where the entire universe is opposed to him; the character created to be wrong therefore has the highest level of underdog power humanly possible, far exceeding normal victims, who at least have the author's sympathies, or actual villains, who might win partial victories or get awesome scenes; the Bad Example is doomed by the nature of the universe he lives in, and is therefore the character my underdog instinct must make me support.
• • The only exception I can think of to this general principle is Yudkowsky's "Inadequate Equilibria," but that's because Simplicio gets some very good lines; he genuinely makes good arguments for his position, even though the author disagrees with it. But you, in your situation, can't give the bluebird better lines, because you are writing an extremely short, extremely simple parable, and witty dialogue or complicated back-and-forth arguments would spoil the whole effect.

• The general style of it - soft, beautiful art, accompanied by largely one-syllable words in that specific kind of style and formatting - immediately suggested something aimed at small children (ages 2-5). This, to me, implied two things - first, that it would treat its audience like children, and second, that the material would be presented for children. But:
• • I didn't like being treated like a child when I was a child, and I still feel lingering discomfort about anything that treats me like a child.
• • I dislike simple analyses of complicated topics, and by this work's nature, it has to be a simple analysis, because you're either aiming it at small children or making it look as if you are.
• • I dislike the idea of giving propaganda to small children, so I'm going to feel more negatively about any propagandistic elements that exist; this is a magnifier, not a source, but it's definitely a magnifier.

• I am not, personally, a vegetarian. I'm inside EA, but only inside the first circle; my belief in the fundamental equality of humanity, combined with historical knowledge that helps me realize how terrible the lives of people in the third world are, leads me to believe that obviously third-world charity is more important than first-world charity, and clearly other people who are worse off can use the money more than I can, I have a reasonably comfortable life even by first-world standards. And I worry about X-risk literal and metaphorical because history is terrible, and I see insufficient reason to assume it won't continue to be. But I have not yet been convinced of animal rights even to the extent of vegetarianism. I agree that I ought to carry out a serious investigation; the serious investigation is currently in the queue behind a serious investigation of what religion, if any, is true, and I am not seriously altering my actions prior to the investigation for Pascal's Mugging style reasons. But as a result of that, any attempt to preach vegetarianism to me is automatically going to trigger my brain's 'defend beliefs' module, which will run a quick check to determine if this is the kind of argument I need to take seriously or if it is (a) puffery or (b) emotional manipulation, and almost any argument that doesn't fit the 'serious analysis, making strong arguments, responding to my concerns, and logically explaining why I am wrong' pattern is going to end up in one bin or the other.
• • This wound up in the 'emotional manipulation' bin, partly because I'd already been feeling emotionally manipulated by the first few panels, partly because of the aimed-at-children style, and partly because it felt as if it was executed with too much craft to be puffery. But I dislike emotional manipulation, vegetarianism is a political cause, and emotional manipulation for a political cause is propaganda.
• • So, stacked on top of all my other issues, this resulted in my initial comment, an attempt to convey 'this strongly didn't work for me' while attempting to be as polite and informative as possible.

Again, I'm sorry. I can explain what my reactions were, I can analyze and dissect them, but I can't explain how the story could be altered to avoid triggering them because the problems seem to me to be fundamental to the nature of the artwork, and I cannot imagine an alternate design for the artwork that would not feel to me as if it shared these problems. This doesn't mean it's impossible, but it means it is beyond my level of skill to achieve. 

Comment by WSCFriedman on [Creative Writing Contest] [Fiction] [Referral] A Common Sense Guide to Doing the Most Good, by Alexander Wales · 2021-09-20T23:56:27.539Z · EA · GW

My specific worry is about people coming to the conclusion that it is "a problem with EA," or "a problem with consequentialism," instead of "a problem with organizations," and thereby making people who hadn't heard of EA becoming more negatively (instead of more positively) inclined towards it.

Comment by WSCFriedman on [Creative Writing Contest] The Reset Button · 2021-09-20T22:06:36.589Z · EA · GW

I am a writer (though not a published one) and I second his judgement. I felt brief disquiet at the line he commented on, but didn't analyze it until I read his post because the story as a whole had still worked very well for me. I think the change makes a good story better, and I thank both Steve for suggesting it and Joshua for implementing it.