Posts

EA Outreach on Omegle 2022-08-11T10:34:25.457Z
Undergraduate Making Life-Altering Choices While Sober, Please Advise 2021-07-10T08:56:07.887Z
Creating A Kickstarter for Coordinated Action 2021-02-03T04:08:37.155Z
A Case Study in Newtonian Ethics--Kindly Advise 2020-12-05T07:40:18.893Z
Lumpyproletariat's Shortform 2020-09-29T11:09:01.554Z
New member--essential reading and unwritten rules? 2020-07-13T05:54:49.488Z

Comments

Comment by Lumpyproletariat on Chaining the evil genie: why "outer" AI safety is probably easy · 2022-08-31T05:53:13.076Z · EA · GW

1. For each AGI, there will be tasks that have difficulty beyond its capabilities. 

2. You can make the task “subjugate humanity under these constraints” arbitrarily more difficult or undesirable by adding more and more constraints to a goal function. 

 

(Apologies for terseness here; I do appreciate the effort that went into writing this up.)

1. It seems to me you underestimate the capabilities of early AGI. Speed alone is sufficient for superintelligence, FOOM isn't necessary for AI to be overwhelmingly more mentally capable.

2. One can't actually make the task "subjugate humanity under these constraints" arbitrarily more difficult or undesirable by adding more constraints to the goal function. Constraints aren't uncorrelated with each other--you can't make invading medieval France arbitrarily hard by adding more pikemen, archers, cavalry, walls, trenches, sailboats. An innovative method from outside your paradigm that bypasses the pikemen also sidesteps the archers, cavalry, walls, etc. If you impose all the constraints available to you, they are correlated because you/your culture/your species came up with them. Saying that you can pile on more safeguards to drive the probability of failure toward zero is like saying that if a wall made out of red bricks is only 50% likely to be breached, building a second wall out of blue bricks will drop the probability of a breach to 25%.
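The red-wall/blue-wall point is just the arithmetic of independence. A toy calculation makes it concrete (a sketch only; the function and its 50% figures are illustrative, not estimates of anything):

```python
# Toy model: probability an attacker breaches every safeguard.
# Under (false) independence, failure odds multiply toward zero;
# under full correlation, one insight that beats a wall beats them all.

def breach_probability(p_each, n_walls, correlation):
    """Interpolate between independent walls (correlation=0)
    and fully correlated walls (correlation=1)."""
    independent = p_each ** n_walls  # naive multiplication of odds
    correlated = p_each              # one method defeats every wall at once
    return (1 - correlation) * independent + correlation * correlated

print(breach_probability(0.5, 2, 0.0))  # independent walls: 0.25
print(breach_probability(0.5, 2, 1.0))  # correlated walls: 0.5
```

The disagreement is precisely over where on that correlation axis real safeguards sit; the argument above says "near 1," so stacking walls buys much less than the naive multiplication suggests.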

Comment by Lumpyproletariat on Chaining the evil genie: why "outer" AI safety is probably easy · 2022-08-31T05:35:31.878Z · EA · GW
1. For each AGI, there will be tasks that have difficulty beyond its capabilities. 

2. You can make the task “subjugate humanity under these constraints” arbitrarily more difficult or undesirable by adding more and more constraints to a goal function. 

 

Apologies for being terse here. I think you Did A Good Thing in writing this up, even if that doesn't come across in my comment. 

My main points of disagreement:

I think you underestimate AI capabilities. 

Main disagreements with titotal: 1. Constraints aren't uncorrelated. You can't make invading a country *arbitrarily hard* by adding more pikemen, archers, cavalry, walls, trenches. Solutions to one problem solve others as well.

Comment by Lumpyproletariat on EA Outreach on Omegle · 2022-08-12T11:11:36.843Z · EA · GW

Oh, these people are certainly not bots. Chatbots aren't very, uh, good at disguising themselves. They're more likely to do things like say, unprompted, "bot? I'm not a bot. are you a bot?" in response to your saying "bot flies are nasty insects," or link you to an h-game, than to ask whether you're a Luddite, ask for college advice, or tell you how to contact them on Discord, where they send you the conversation up to that point as a text file. Humans sound like humans, bots sound like bots. (Also, these people have sleep schedules and all the other thousand tells that make one confident that someone is made of flesh and blood.)

Why, then, are Omeglers more amenable to convincing than meat people? I'm not sure. Part of it might be that, on average, there's a larger gap between how smart they are and how smart the average EA is than the gap between the average EA and the average person EAs find themselves trying to convince. I'm not sure that having good ideas was super important in how convincing I came across.

Another part is that they're somewhat preselected for hearing weird ideas out; these are, after all, people who chose to spend their time listening to utter strangers utter their politics. 

Another part could be that they're starved for good conversation. Presuming that the average EA isn't far behind the average LessWronger or Slate Star Codex reader, the average IQ is in the global top 2%. It doesn't seem outlandish that some Omeglers found me the most intelligent person they'd had an extended conversation with.

And, finally--I probably spoke with a couple hundred people on Omegle, filtering out people who weren't interesting to talk to very quickly. Median conversation length was measured in seconds; those that lasted longer went only a few minutes; highly enjoyable conversations lasted hours and ended in shared contact info maybe 25% of the time. Extricating oneself literally took only the click of a button. Four people who wanted to stay in contact does not seem like an outlandish hit rate.

None of this theorizing is particularly grounded; I have not and do not intend to spend much in the way of braincycles here. 

Comment by Lumpyproletariat on EA for dumb people? · 2022-08-04T03:45:02.486Z · EA · GW

The 100-130 IQ range contains most of the United States' senators. 

You don't need a license to be more ambitious than the people around you, and you don't need an IQ of 131 or greater to find the most important thing and do your best. I'm confident in your ability to have a tremendous outsize impact on the world, if you choose to attempt it.

Comment by Lumpyproletariat on Why EAs are skeptical about AI Safety · 2022-08-03T22:40:53.901Z · EA · GW

If you're unconvinced about AI danger and you tell me specifically what your cruxes are, I might be able to connect you with Yudkowskian short stories that address your concerns. 

The ones which come immediately to mind are:

That Alien Message

Sorting Pebbles Into Correct Heaps

Comment by Lumpyproletariat on One Million Missing Children · 2022-07-12T04:19:50.500Z · EA · GW

I can't speak for anyone but myself, but I really don't like the idea of creating humans because other people want them for something. Hearing arguments framed that way fills me with visceral horror and makes it relatively harder for me to pay attention to anything else. 

Comment by Lumpyproletariat on AGI Ruin: A List of Lethalities · 2022-06-07T22:08:13.685Z · EA · GW

If you want to catch up quickly to the front of the conversation on AI safety, you might find this YouTube channel helpful: https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg

If you prefer text to video, I'm less able to give you an information-dense resource--I haven't kept track of which introductory sources and compilations have been written in the past six years. Maybe other people in the comments could help.

If you want to learn the mindset and background knowledge that goes into thinking productively about AI (and EA in general, since this is--for many of the old hands--where it all started), this is the classic introduction: https://www.readthesequences.com/

Comment by Lumpyproletariat on The Case for Rare Chinese Tofus · 2022-02-09T05:03:42.312Z · EA · GW

Strong upvote because I think this should be at the top of the conversation and this is what I came here to say. 

Tofu has strong negative associations for many Americans; if you want to sell something which does not taste like American tofu and doesn't have its texture, I would advise you in the strongest possible language to call it anything but tofu.

Comment by Lumpyproletariat on The Culture of Fear in Effective Altruism Is Much Worse than Commonly Recognized · 2022-02-07T03:40:57.716Z · EA · GW

Criticism has become so distorted from what it should be that my intention would not even be to criticize. Yet there is no way to suggest any organization could be doing anything better without someone interpreting it as an attempt to sabotage it. It's not that I'm afraid of how others will respond. It's that so many individual actors have come to fear each other and the community itself. It's too much of a hassle to make it worthwhile to weather the barrage of hostility that comes from trying to contribute to anything.

I notice that the OP has gotten twenty upvotes (including one from me), but that I myself have never encountered the phenomenon described. My experience, like D0TheMath's, is that people who offer criticism are taken seriously. Other people in this comment section, at least so far, seem to have had similar experiences.

Could some of the people who've experienced such chilling effects give more details about it? By PM if they don't anticipate as strongly as I do that the responses on the open forum will be civil and gracious?

Comment by Lumpyproletariat on World's First Octopus Farm - Linkpost · 2021-12-23T01:28:01.957Z · EA · GW

Oh, I'm sorry for being unclear! The second phrasing emphasizes different words ("as" and "adult human") in a way I thought made the meaning of the original post clearer.

Comment by Lumpyproletariat on World's First Octopus Farm - Linkpost · 2021-12-22T22:46:56.457Z · EA · GW

I haven't edited the original comment.

Comment by Lumpyproletariat on World's First Octopus Farm - Linkpost · 2021-12-22T21:22:50.579Z · EA · GW

https://en.wikipedia.org/wiki/Cephalopod_intelligence

I am not an expert on animal intelligence, but from my limited understanding octopuses seem as intelligent as monkeys, or more so. They haven't been proven as intelligent as the great apes--none have been taught a human language that I know of--but they might have languages of their own, and considering their other feats I would be mildly surprised if there weren't at least several species which could learn a sign language that doesn't require hands.

Comment by Lumpyproletariat on World's First Octopus Farm - Linkpost · 2021-12-22T21:12:29.854Z · EA · GW

Do you think the initial post would have read better as: "I think that an octopus is ~30% likely to be as morally relevant as an adult human (with wide error bars, I don't know as much about the invertebrates as I'd like to), so this is pretty horrifying to me."?

Comment by Lumpyproletariat on World's First Octopus Farm - Linkpost · 2021-12-22T03:44:03.009Z · EA · GW

Assuming ten pound octopi, that's ~600,000 octopi farmed and killed every year. I think that an octopus is ~30% likely to be as morally relevant as an adult human (with wide error bars, I don't know as much about the invertebrates as I'd like to), so this is pretty horrifying to me.

Comment by Lumpyproletariat on Why do you find the Repugnant Conclusion repugnant? · 2021-12-18T08:30:52.858Z · EA · GW

Thank you for the explanation! 

Comment by Lumpyproletariat on Why do you find the Repugnant Conclusion repugnant? · 2021-12-17T21:53:04.074Z · EA · GW

This comment seems to me to be requesting clarification in good faith. Might someone who downvoted it explain why, if it wouldn't take too much time or effort? I'm fairly new to the forum and would like a more complete view of the customs.

Edited to add: Perhaps because it was perceived as lower effort than the parent comment, and required another high-effort post in response, which might have been avoided by a closer reading?

Comment by Lumpyproletariat on [deleted post] 2021-12-17T21:44:49.539Z

Other people might link you to philosophical background reading which is assumed by most EAs. I'll just say that even though good is a subjective thing and everyone has their own values, there is typically significant convergence. For instance, most people will say that the world is better when fewer people suffer from preventable diseases. If you would also say that, it's worth your while to think about which courses of action reduce global disease, and to work with other people who agree.

Comment by Lumpyproletariat on High School Seniors React to 80k Advice · 2021-12-16T21:04:03.057Z · EA · GW

This should probably be its own essay at some point, but here's the short and sloppy version:

Against these objections is the material in the essay itself. Close reading is hard.

I think this line touches on something which is important to understand. My college required me to take an English class, and I took it online last summer. This gave me the opportunity to read almost every essay and scrap of writing produced by the thirty-odd people of the class in the context of writing their thoughts in response to essays about better writing, a lovely bit of recursion which I found enlightening. I think I have a better model now of how slightly-above-average (if they were worse they wouldn't be in college, if they were better they'd have skipped the class) people engage with the written word.

The teacher linked an essay begging new college students not to worry about the pointless things high schoolers are graded on and instead focus on writing compellingly--those pointless things were enumerated by way of example. The essay was quite scathing, strident! Several students replied by saying that while they'd forgotten those pointless rules they were glad for the reminder--they expressed concern that they hadn't conformed to the rules in their introductory essays, and vowed to obey them in the future.

This wasn't an isolated occurrence; there were always people who read something and got exactly the opposite of the author's point. Those who didn't get the opposite point got one so utterly removed from the text that I'd have to dig through the essay to see which line they'd misread if I wanted to understand them. People who got from the essay what the instructor hoped the class would were maybe a tenth of the total (myself not among them; I learned a lot in that class, but nothing the teacher had set out to teach).

When asked to choose which of three works expressed a particular point best--there was a personal essay, an analytic essay, and an inane video--the class overwhelmingly preferred the video. Detailing why, they said that they could hear inflection and tone in the video, that they didn't have to struggle with individual words and lose their place in the sentence, that they didn't have to reread things to understand what was being said. 

My conclusion is that if something is expressed only in writing it cannot reach the absolute majority of the population, any more than a particularly well-written verse in French can permeate the Anglosphere. I think that in many cases where highly literate people think they've identified an important problem, they've instead failed to diagnose illiteracy. (I watched the course instructor struggle with that; they didn't seem any more able to understand that the class couldn't understand them than the class was able to understand them. They were always engaging with them on a level which implied they didn't realize the vast gulf of inferential distance.)

Comment by Lumpyproletariat on Seeking a Collaboration to Stop Hurricanes? · 2021-12-08T03:07:29.009Z · EA · GW

I'll tone-down my emphases - my own impulse would have been to color-code with highlighters and side-bars, but I see that's not what most people want, here :)

It's really a shame, because once I got over my own hangups with regards to how English should be done, your emphasis did make it easier to read tone. But I nearly bounced off of it entirely, so if other people are similar to me in that regard the costs outweigh the benefit.

Regarding ideas to stop hurricanes, you seem to know more than I do about weather systems. I remember from googling around after Hurricane Harvey that there's a group from Norway trying to solve the problem with a bubble curtain (a pipe run underwater that releases bubbles to bring cold water to the surface), and googling around again found them here: https://www.oceantherm.no/

I was skimming their website and it looks like they're plausibly funding-constrained; I'm going to email them about posting something on the forum and/or applying for a grant.

Comment by Lumpyproletariat on Seeking a Collaboration to Stop Hurricanes? · 2021-12-07T01:10:34.832Z · EA · GW

Strong upvote so that more people see this. Even if, after checking, this particular intervention doesn't pan out, I think that a megaproject to oppose hurricanes has a high chance of being cost-effective, and we should spend more time thinking about ways to do that.

By way of parting: I think that the bolded sentences and all-caps words might turn some people off of reading through the thing; I know that the mental voice it initially conjured in my own mind was louder than I appreciated. 

Comment by Lumpyproletariat on Can we influence the values of our descendants? · 2021-12-02T22:36:54.064Z · EA · GW

Epistemic status: just spitballing. But:

Again, emphasis on small. The causal effects identified in the papers were in the range of about 0.1 standard deviations of the outcome per standard deviation of the exposure. That is, if you worked hard to move culture in a particular direction, I would expect at most 10% of the change you bring in to persist in future generations.

It is plausible that smaller effects exist - which we would not have enough statistical power to detect. It is unlikely that stronger effects exist, since those would be easier to detect. And I would correspondingly have expected to be flooded by papers studying how your ancestors' religion explains your Netflix watching habits.

Perhaps the similarity between areas is caused less by the effects being small, and more by cultural dispersal and diffusion? I know that my ancestors, the early Christians on the Mediterranean, had a huge effect on my culture today. But since they had a similarly huge effect on the descendants of Brits and Swedes, that might not be very visible if you compare their geographic place of origin with other parts of Europe. In fact, I'm myself also descended from the British Isles; the descendants of the Classical inhabitants of Iberia are also the descendants of Classical "barbarians". I've read that it only takes ~1,500 years for someone to become the common ancestor of all Eurasians. (Link goes to where I read it, not the original source, which I haven't followed up on.) Weak observed effects of medieval culture on Italian cities may be due less to cultural effects decaying, and more to modern Italians having ancestors from every city on the peninsula. 

Comment by Lumpyproletariat on Effective Altruism, Before the Memes Started · 2021-10-13T10:18:15.047Z · EA · GW

I enjoyed reading this; the format and content agreed with me--pun unintended.

Comment by Lumpyproletariat on Johannes Ackva: An update to our thinking on climate change · 2021-10-07T03:45:29.581Z · EA · GW

I shared this video with a Discord server and the response was positive. I was worried that no one would watch it, since it was forty minutes long; but apparently many people are more willing to watch a long video than read a short article (well, that could be rational; it probably takes them less time). 

Comment by Lumpyproletariat on How do I find people who really don't care about having more money? · 2021-09-09T23:33:30.480Z · EA · GW

Have you encountered the FIRE (Financial Independence, Retire Early) movement? I think that Mr. Money Mustache's blog is the most central example, though I haven't exactly been plugged into the community.

https://www.mrmoneymustache.com/

There's also a subreddit I know near nothing about:

https://www.reddit.com/r/Fire/

They're a community characterized by turning their noses up at money and the things one can buy with it--they cut down their living expenses until they can afford to retire at a young age, and then do so.

Comment by Lumpyproletariat on Moral dilemma · 2021-09-09T20:56:13.429Z · EA · GW

I'm very glad to have helped in any way. Take care of yourself!

Comment by Lumpyproletariat on Moral dilemma · 2021-09-05T20:29:56.316Z · EA · GW

I know that people presenting Pascal's wager usually claim that the utility of being accepted to their favorite heaven or their least favorite hell is infinite--but I don't think it is. But we'll leave that aside, because as you point out other people believe that the utility of heaven is infinite and I could be mistaken.

(For the counter-argument that two people going to hell is worse than one, I would tend to think of it as some infinities being greater than others (as in mathematics).)

If different infinities are allowed to be better or worse than each other, then you shouldn't need to worry about heaven or hell! You should be focused on maximizing your odds of infinite utility by doing the things most likely to lead to such a state.

The odds of any given religion being true are very, very small--especially considering that they are all logically inconsistent. The odds of me being an extraterrestrial being of phenomenal power, able to create heavens and hells, are substantially higher than the odds of Sunni Islam or the Church of England having the right of things, because at least that idea isn't logically impossible. So making me happy with you is more important than abiding by the laws of any earthly religion. The chance of you being yourself an omnipotent alien who'll come into your power once you feel less tormented is larger than the odds of an earthly religion being true--because while it's a silly idea with no evidence backing it and the entire edifice of science flatly refuting it, at least it doesn't contradict itself.

But now that we're focusing on maximizing our odds of getting infinite utility, there are even more promising prospects than supposing impossible things about strangers or ourselves. The odds of future humans reversing entropy (or finding a way to make infinite computations using finite resources, or any other solution given trillions of years to think about it) is much higher than the odds of any of Earth's religions being true. So if we take that view, the most important thing one can do is maximize the odds of human civilization surviving and maximizing the daily positive utility of that future civilization.

Comment by Lumpyproletariat on Moral dilemma · 2021-09-04T23:30:13.370Z · EA · GW

I don't think that infinite utility or disutility is a common feature of Pascalian wagers, only a very large amount of utility or disutility. For instance, myself going to Hell isn't infinite disutility--there are worse things, such as two people going to hell. 

(Unless we consider a finite amount of utility or disutility extended perpetually to be an infinite amount, in which case everything we do is equally infinitely positive or negative utility and no good or bad deed is better or worse than any other good or bad deed. Which seems very wrong to me, though I admit I don't have a reason off the top of my head why that's the case.)

Once you've accepted Hell as a finite (though very large) disutility, you can multiply it by the (utterly minuscule) odds of a logically inconsistent religion being true and everything anyone knows about physics being wildly off base. 
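That multiplication can be sketched directly (all numbers below are purely illustrative placeholders, not estimates from the discussion):

```python
# Pascal's wager with finite stakes: once Hell is a finite (though very
# large) disutility, expected disutility is just probability times
# magnitude, and a minuscule probability can shrink even an enormous
# harm below everyday moral stakes.

p_true = 1e-20           # illustrative: odds of a logically inconsistent religion being true
hell_disutility = 1e12   # very large, but finite
expected_disutility = p_true * hell_disutility
print(expected_disutility)  # on the order of 1e-08 -- negligible
```

The whole dispute between the two framings is whether `hell_disutility` is allowed to be infinite; keep it finite and the wager collapses into ordinary expected-value reasoning.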

Comment by Lumpyproletariat on Takeaways on US Policy Careers (Part 1): Paths to Impact and Personal Fit · 2021-08-30T02:27:32.529Z · EA · GW

I'm commenting here to make this post easier for me to find in the future, and also so that I'll be reminded of it at random intervals in the future.

Comment by Lumpyproletariat on This Can't Go On · 2021-08-05T06:55:59.799Z · EA · GW

Thank you for writing this.

After I first read all the material on Cold Takes (well, I skimmed it--but only because you asked me to!) I figured that I wouldn't bother to keep up with new stuff as it comes out; what good would it do me? It was written for a general audience and I am not a general audience.

That attitude lasted until around the third time I found myself linking to or quoting from your blog as a starting point for conversations with the vastly inferentially removed. And, I've known about the blog less than a week.

Needless to say I'll be keeping up with what you write.

Comment by Lumpyproletariat on Undergraduate Making Life-Altering Choices While Sober, Please Advise · 2021-07-26T09:14:13.936Z · EA · GW

Thank you for linking me to Kelsey Piper--I haven't read Chris Olah's essay yet, but I'm sure I'd've thanked you for linking him too, had I only read it. I'm going to give Focusmate a go; I've been meaning to set it up but procrastinated doing so long enough to generate an ugh field around it. Thank you for that, too.

Comment by Lumpyproletariat on Undergraduate Making Life-Altering Choices While Sober, Please Advise · 2021-07-26T09:10:06.831Z · EA · GW

Thank you for the recommendation, I'll check it out.

Comment by Lumpyproletariat on Undergraduate Making Life-Altering Choices While Sober, Please Advise · 2021-07-26T09:09:38.684Z · EA · GW

I've had a tab on my browser open to a page of therapists who take my parents' insurance for . . . semesters, now. Thank you for giving me the impetus to email a couple of them. I'll let you know how it goes. :)

Comment by Lumpyproletariat on Undergraduate Making Life-Altering Choices While Sober, Please Advise · 2021-07-26T09:06:41.612Z · EA · GW

Thank you for the response. I've given checklists a try in the past and found them useful then; my problem has been that I don't remember/think to draw up new ones. The obvious solution to that is obvious, so I set a repeating alarm on my phone to remind me. Whenever it goes off I'll draw up a to-do list for the day, even a short one.

If I hadn't had to explain why it was to-do lists hadn't worked for me in the past, I wouldn't have thought of the obvious solution (though, time will tell how well it works--I only just now set the alarm), so many thanks for replying to me and writing up such a long and thoughtful list of things which worked for you.

Comment by Lumpyproletariat on Is EA just about population growth? · 2021-01-20T23:46:30.304Z · EA · GW

"Regarding your aside, I think that illustrates an interesting potential solution to the dilemma (?) The purpose is not to save lives (because in your case, the world where 100% of people die is less or equally bad than 50% of people dying). This is an interesting case, and perhaps there's a way to rephrase the original claim to accommodate it, though I'm not certain how."

I must have inadequately written my parenthetical aside; perhaps I inadequately wrote everything. 

The purpose is entirely to save lives. We have a world with seven billion people. If all of them died, the amount of disutility in my view would be X times seven billion, where X is the disutility from someone dying. If the world instead had fourteen billion people and seven billion of them died, the disutility would still be X times seven billion. The human race existing doesn't matter to me, only the humans. If no one had any kids and this generation was the last one, I don't think that would be a bad thing.

This isn't something which all EAs think (some of them value "humanity" as well as the humans), though it does seem to be a view overrepresented among people who responded to this thread.

"The way I see it, the people of the future 'existing' is a knob that we have the power to control (in a broad sense). It's not something that would happen 'either way.'"

I know a man who plans to have a child the traditional way. We've spoken about the topic and I've told him my views; there's not terribly much more I could do. I have very little power over whether or not that child will exist--none whatsoever, in any practical way.

That child doesn't exist yet--there's some chance they never will. I want that child to have a happy life, and to not die unless they want to. When that entity becomes existent, the odds are very good I'll be personally involved in said entity's happiness; I'll be a friend of the family. Certainly, if twelve years in the child fell into a river and started to drown, I'd muddy my jacket to save them.

But I wouldn't lift a finger to create them. Do I explain myself? 

Something analogous could be said about all the humans who do not exist, but will. We have control over the "existence knob" in such a broad sense that there's little point bringing it up at all. So, living in a world where people exist, and will continue to do so, it seems like the most important thing is to keep them alive.

Valuing the people who exist is a very different thing from valuing people existing. EA is not just about population growth--it isn't about population growth at all.

Comment by Lumpyproletariat on Is EA just about population growth? · 2021-01-17T21:46:32.799Z · EA · GW

There must be something I don't understand; I don't see a puzzle here at all. You spent a lot of time writing this up, presumably you spent a lot of time thinking about it, so I'm going to spend at least a small amount of time trying to find where our worldviews glide past each other.

Here's my take. It's a fairly simple take, as I'm a fairly simple person. 

If someone exists, one ought to be nice to them. Certainly, one ought not to let them die--to do so would be unkind, to say the least. People who exist should have good lives--if someone doesn't have a good life or will lose their good life, this is a problem one ought to fix. So far, nothing but bog-standard moral fare. 

If someone doesn't exist, they don't exist--it's impossible to be kind or cruel to someone who doesn't exist. I don't think many would disagree on that point either. 

Now here, perhaps, is where we lose each other: if someone is going to exist, and one is aware of this fact, one should probably take preemptive steps to ensure that future person will have a good life--a life happy, fulfilling, and long. This isn't because hypothetical people have moral value; it's because we are aware in advance that the problem won't always be a hypothetical one. We can realistically foresee that unless we course-correct on this destroying-the-biosphere project we've undertaken, people will come into existence and lead terrible, cruelly short lives.

I (and many others, I gather) aren't doing this so that more people will be born--we're doing this so that people who will be born either way live happily.

(Parenthetical aside: some people place value on the human species continuing to exist--I don't, personally; if everyone alive died that would be awful, but I don't think it'd be more awful than if there had been fourteen billion minds before seven billion died. That said, if we care at all about aesthetics I can see the aesthetic argument in favor of human survival, in that all aesthetics would die with us.)

This is a very different problem from educating women and predictably causing fewer people to exist in the first place. My value isn't people existing, my value is good long lives for those who do (or will).

Comment by Lumpyproletariat on Is EA just about population growth? · 2021-01-17T09:40:11.115Z · EA · GW
  • Suppose, towards a contradiction, that the goal of life is to save lives.
  • We know educating women more is good and would be done in an ideal world.
  • Increasing women's education leads to fewer lives because of declining fertility.
  • Therefore, the goal of life must not be to save lives.

 

What if one's goal is to save lives which already exist, contingent on their already existing? 

Pure utilitarianism doesn't necessarily lead to screwy answers when thinking about the future--for instance, suppose that matter is convertible to computronium, and computronium is convertible to hedonium, and that there is thus a set amount of joy in the universe; in that instance, creating more people just trades against the happiness of those who already exist, who could have used all that matter for themselves, but are now morally obligated to share.

But I tend to be of the view that potential people don't exist and thus don't have moral significance. If it's foreseeable that someone in particular will exist (and at that point have moral significance) we ought to make sure things go well for them. But I don't feel any moral obligation to bring them into existence.

Comment by Lumpyproletariat on Lumpyproletariat's Shortform · 2021-01-04T00:43:43.454Z · EA · GW

This is crossposted from the December career advice thread:

I notice that the thread has gotten long and a lot of people's questions are being buried. (One thing I intensely dislike about upvote-style forums is that it isn't trivial to scroll down to the end of the thread and see what's new. "Oh, but you can sort by new if you want to," one replies--and, sure, I guess, but unless everyone else with good opinions does too, that doesn't exactly solve the problem, now does it?) The buried questions don't seem less important than the ones posted first, and I wish I were competent to give expert advice on them, or had a way to direct the community's gaze to them.

I have a question of my own--regarding changing my undergraduate major--but I'll wait for the January thread to ask it.

Comment by Lumpyproletariat on Careers Questions Open Thread · 2021-01-04T00:42:01.900Z · EA · GW

I notice that the thread has gotten long and a lot of people's questions are being buried. (One thing I intensely dislike about upvote-style forums is that it isn't trivial to scroll down to the end of the thread and see what's new. "Oh, but you can sort by new if you want to," one replies--and, sure, I guess, but unless everyone else with good opinions does too, that doesn't exactly solve the problem, now does it?) The buried questions don't seem less important than the ones posted first, and I wish I were competent to give expert advice on them, or had a way to direct the community's gaze to them.

I have a question of my own--regarding changing my undergraduate major--but I'll wait for the January thread to ask it.

Comment by Lumpyproletariat on A Case Study in Newtonian Ethics--Kindly Advise · 2020-12-13T03:34:51.849Z · EA · GW

You are an amazing alien, a soul akin enough to mine that I feel slightly less an alien for talking to you. I really don't know why people don't live stranger lives, when ordinary lives chasing money and status are so terribly depressing. It is nice to meet a fellow denizen of planet Camazotz dancing to the beat of a drum other than Its.

(Does one still waive the apostrophe when they're referring to a possession of the proper noun It?)

Clarification clarified. If someone invaded my personal space and dark triaded at me, I imagine I would use my bigness and noise to make them leave. I'm sympathetic to people less big. 

I feel fairly negative towards upvotes myself. They make it easy to pile on someone without actually engaging with them.

Comment by Lumpyproletariat on A Case Study in Newtonian Ethics--Kindly Advise · 2020-12-09T00:34:07.761Z · EA · GW

I accidentally posted this comment four times, due largely to technical incompetence. Which is fine, I suppose; it adds emphasis!

Comment by Lumpyproletariat on A Case Study in Newtonian Ethics--Kindly Advise · 2020-12-09T00:29:41.074Z · EA · GW

Well. I'm floored. People keep upvoting this and saying such wonderfully kind things in the comments . . . Every time I got the notification that there was a new comment under this post, I internally flinched and cringed. I'd just written at length about my internal subjective experience, and I regretted writing it from before I clicked submit. It took a lot of evidence piling up to convince the socially cautious part of my brain that it was wrong.

I'm going to update hard towards writing pieces like this one, and writing more frequently. It seems like other people ought to as well; it seems like something people want to read. I imagine most of us don't have any new breakthroughs to report in the field of effective altruism. But we probably all have interesting days where we face dilemmas or win victories which would make utterly no sense to most anyone. And I guess it makes sense you'd want to hear mine, because I'd like to hear yours.

Comment by Lumpyproletariat on A Case Study in Newtonian Ethics--Kindly Advise · 2020-12-06T07:07:47.539Z · EA · GW

I put on my goggles to attempt literary analysis, and then I took them back off. Anyone else want to give it a go?

Comment by Lumpyproletariat on A Case Study in Newtonian Ethics--Kindly Advise · 2020-12-06T07:05:38.875Z · EA · GW

So, I translated one of the poems. The other two are addressed to Jesus Christ, dated as being written in November this year, and signed with something incomprehensible. This one is shorter than the other two, written on a smaller piece of paper, and isn't signed, dated, or addressed. It reads: 

 

our hopeful nation

this high should've came with a barcode

25 candles hovered above a golden mountain top

Jesus Christ

the wind

the dust

and the Holy Spirit

revising sky rays

a violent gospel

volume 13

net speed 98.b

the taste of my music on a Sunday night

there's something about the riot that makes me want to breed 

Comment by Lumpyproletariat on A Case Study in Newtonian Ethics--Kindly Advise · 2020-12-06T06:58:57.498Z · EA · GW

Hm, it's interesting to see different people's thoughts on and reactions to the same events. Every one of us is an alien. You see people optimizing for money and you feel negatively towards them for participating in a race to the bottom. I see people optimizing for money and I respect their hustle. Maybe it's a class thing. I worked as a waiter until I found a way to hurt myself doing so (such tends to be how my stints of gainful employment end--I need to land an office job or one day I'll trip on my own feet and stumble into an open grave), and I spent every minute of it optimizing for money, sometimes aggressively so. Maybe as I claw my way up the social ladder I'll come around to your point of view. Yesterday was novel; I imagine it gets old.

Comment by Lumpyproletariat on A Case Study in Newtonian Ethics--Kindly Advise · 2020-12-06T06:58:41.238Z · EA · GW

Thank you for the kind words and human connection--I don't want to reiterate word for word what I said under EdoArad's post, but I'd like to. It seems to me that separating the conversation and disordering it is a tradeoff upvote-style forums make, and I'm entirely unconvinced that it's worth it--especially for a relatively small forum where everyone reading comments is reading all the way to the bottom anyway.

My situation is a bit different than yours, I think. I don't feel a strong need to spend money on things; I don't anticipate my personal expenses ever rising above five hundred dollars a month unless I move somewhere with a higher cost of living--with the expectation that such would be a net gain. After I can consistently cover essential expenses without worry, I plan to use my money as effectively as I can (well, before that point too). In my case, spending money on anything trades directly against becoming financially independent sooner and then donating the surplus. I also imagine that if I made a habit of charitable giving at this juncture, I'd notice it financially pretty quick.

That said, your, EdoArad's, and DonyChristie's perspectives have helped me gain, well, perspective. I'll think about this more.

Comment by Lumpyproletariat on A Case Study in Newtonian Ethics--Kindly Advise · 2020-12-06T06:58:20.077Z · EA · GW

Thank you for the kind words and for the validation. If I were properly calibrated I'd find neither useful, but I'm not and I do. It's very gratifying that people seem to appreciate my having written this.

I did not set out to create something that people find beautiful, but if I did so I am happy. Seeing this upvoted, and tagged as "art", was surprising but pleasant. (Parenthetical aside: is art something created for the self, or something created to express the self to others? I've heard both stances espoused. If the latter, then expressing thoughts one finds ugly in a way others find beautiful could be regarded as a failure. Human endeavors are complicated, and I don't think art is often created for any one reason, but I still think there's an answer to something hiding in that line of questioning.)

In response to your first thought, I communicated poorly. When I referred to units of caring, I wasn't positing a finite amount of empathy, I was referencing this. I'll edit the OP to be less opaque. In response to your second thought, I think my thoughts are covered in my response to shaybenmoshe.

Comment by Lumpyproletariat on Make a $10 donation into $35 · 2020-12-05T05:14:41.638Z · EA · GW

I sent my free money towards the Malaria Consortium. Thank you to the people who spent the money, and the people who made me aware such was happening. 

I don't like having online accounts that are connected to my real info, especially financial info. No particular rationale; I just hang out in a weird part of mind-space and get anxious when I haven't cleaned up after myself. I'm having difficulties deleting my account--when I try, it wants me to choose an application to do so with. I'm using Firefox on Ubuntu, if that matters--but it seems like this is something I should be able to do entirely on the every.org website?