Posts

EA opportunity: Cryptocurrency and DeFi 2021-10-07T15:58:55.879Z
Mass social media as a tool for change 2021-10-03T14:57:45.362Z
Should you optimise for the long-term survival of people who don't care for their own long-term survival? 2021-10-03T07:18:05.384Z
Social tokens and effective altruism 2021-09-16T07:08:46.650Z

Comments

Comment by Samuel Shadrach on Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness · 2021-10-14T11:22:10.311Z · EA · GW

Cheers to initiating a hundred years of debate on the charity equivalent of the EMH.

Comment by Samuel Shadrach on Early career EA's should consider joining fast-growing startups in emerging technologies · 2021-10-13T09:25:52.449Z · EA · GW

I'll supplement this article with my post on [Cryptocurrency and DeFi](https://forum.effectivealtruism.org/posts/KPy4yuSsGk4qMwK3g/ea-opportunity-cryptocurrency-and-defi), which suggests another possible industry whose startups you could join.

Comment by Samuel Shadrach on An Argument To Prioritize "Positively Shaping the Development of Crypto-assets" · 2021-10-09T19:20:10.246Z · EA · GW

Thousands of users could collectively agree to "fork" a blockchain and mint new crypto-assets for any desired cause or public good.

There is, of course, a lot of resistance to this idea at a social level (and there should be), but the point is that this decision is, at least at a technical level, directly in the hands of users and citizens.

Contrast this with a central bank system, where there are layers upon layers of representatives standing between citizens and the actual lever that decides where printed money goes.
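To make the "directly in the hands of users" point concrete, here is a minimal sketch in Python of a toy ledger, assuming nothing about any real chain: a fork is just a copy of the existing state with a different issuance rule, and it only matters if enough users choose to run it. All names here (`Ledger`, `mint_recipient`, `public_goods_fund`) are hypothetical, purely for illustration.

```python
# Toy illustration only: a community "fork" is a copy of the ledger state
# with a new issuance rule. Nothing here models a real protocol.
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Ledger:
    balances: dict        # address -> token balance
    mint_recipient: str   # where newly minted tokens go each block

    def mint(self, amount: int) -> "Ledger":
        new_balances = dict(self.balances)
        new_balances[self.mint_recipient] = new_balances.get(self.mint_recipient, 0) + amount
        return replace(self, balances=new_balances)


def fork(ledger: Ledger, new_recipient: str) -> Ledger:
    # Anyone can copy the existing state and redirect new issuance;
    # the fork only "wins" if enough users collectively run it.
    return replace(ledger, mint_recipient=new_recipient)


original = Ledger(balances={"alice": 100}, mint_recipient="miners")
public_goods_fork = fork(original, new_recipient="public_goods_fund")
print(public_goods_fork.mint(50).balances)  # {'alice': 100, 'public_goods_fund': 50}
```

The social-level resistance mentioned above is exactly the missing piece: the code makes forking trivial, but coordinating on which fork to run is the hard part.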

Comment by Samuel Shadrach on [Creative Writing Contest] [Fiction] The Fey Deal · 2021-10-08T09:44:23.438Z · EA · GW

Agreed, it would have been simpler if it were framed as the Fey taking your money and giving it to GiveWell. Perhaps the Fey wish to test human morals.

Comment by Samuel Shadrach on Should you optimise for the long-term survival of people who don't care for their own long-term survival? · 2021-10-05T17:38:53.753Z · EA · GW

Makes sense. At a personal level, I can definitely see why acceptable and beneficial are different. I'm not sure how much the distinction matters for a society or hivemind, though: whatever seems beneficial for society is what it should enforce norms towards and also deem acceptable.

I feel like assuming utilitarianism will alienate people; it might be better to keep the societal goal and corresponding norms more loosely and broadly defined. That way every individual can evaluate whether this society enforces social norms useful enough to their own personal goals (for both themselves and society) that they find more value in accepting and further enforcing these norms than in rebelling against them. It's similar to how the Effective Altruism Forum doesn't explicitly refer to utilitarianism in its intro, even though the concepts overlap.

Comment by Samuel Shadrach on Should you optimise for the long-term survival of people who don't care for their own long-term survival? · 2021-10-05T09:08:44.545Z · EA · GW

Great answer, and I tend to agree that a 100% comprehensive ruleset may be unobtainable. I wonder if we could still get meaningful rules of thumb even if they're not 100% comprehensive. And maybe these rules of thumb for which social norms are good generalise across "whom" you're setting social norms or policy for.

Maybe the social norms that are good for "X choosing to respect or disrespect Y's autonomy" are similar whether:

 - X and Y are equal-standing members of the LW community

 - X is the parent of Y

 - X is a national law-making body and Y is its citizens

 - X is programming the goals for an AGI that is likely to end up governing Y


And as you mention, rules conditional on mental impairment or a sense of long-term wellbeing might end up on this list.


Maybe I'll also explain my motivation in wanting to come up with such general rules even though it seems hard.

I feel that we can't say for sure who will be in power (X) and who will be subjected to it (Y) in the future, but I do tend to feel power asymmetries will grow. And there is some non-trivial probability that people from certain identifiable groups (scientists in certain fields, members of the LW community, etc.) end up in those positions of power. It therefore might be worthwhile to cultivate those norms right here. It feels easier to do any form of moral advocacy on someone before they are in power than after.

I understand if you still feel my approach to the problem is not a good one, I just wanted to share my motivation anyway.

Comment by Samuel Shadrach on Should you optimise for the long-term survival of people who don't care for their own long-term survival? · 2021-10-04T09:53:37.976Z · EA · GW

Thanks for replying again.

I'm just wondering if there's a way to condense the set of rules or norms under which it is acceptable to take away someone's decision-making power, or to personally take decisions that will impact them without respecting their stated preferences.

If I try rephrasing what you've said so far:

1. People with impaired mental capabilities

Is it possible to universally define what counts as mentally impaired here? Would someone with low IQ count? Someone with a brain disorder from birth? Someone under temporary psychedelic influence? Would an AI that considers all humans stupid relative to its own intelligence count?

2. People whose actions or self-declared short-term preferences differ from _____

Should the blank be filled with "their self-declared long-term preferences" or "what you think their long-term preferences should be"? Or something else? I'm trying to understand what exactly wellbeing means here and who gets to define it.

Comment by Samuel Shadrach on Should you optimise for the long-term survival of people who don't care for their own long-term survival? · 2021-10-03T17:28:39.737Z · EA · GW

Thanks for the response.

> they develop a level of self-rationality that typically does better than .... in maximizing their own preferences

What does it mean for someone to undertake actions that are not maximising their own preferences? What does it mean to be rational when it comes to moral or personal values?

Would I be right in assuming you're using a model where people have terminal goals which they can self-determine, but are then supposed to "rationally" act in favour of those terminal goals? And that if someone is not taking decisions that will bring them closer to these goals (as decided by a rational mind), you feel it is morally acceptable (if not obligatory) to take over their decision-making power?

Comment by Samuel Shadrach on Against neutrality about creating happy lives · 2021-10-03T08:03:54.080Z · EA · GW

> But aren't our intuitions, well, intuitive, and it's just a psychological matter of fact whether we have them or not?

But intuitions can change with time. People can consciously change other people's intuitions over time via communication, and that communication can look very little like reasoning or logic. For instance, watching a movie can change your intuitions, even if the movie itself has no words and even if you use no words to reason in your head about what you learnt from it. A forum post can belong to this category of communication. (It's a different question whether this particular forum should accept such posts, or how it should judge their quality bar, etc.)

Comment by Samuel Shadrach on Why I am probably not a longtermist · 2021-09-25T19:09:12.076Z · EA · GW

Which of these represents your view more closely?

1: "I care about the future of humanity. I don't care about the long-term future of humanity as strongly  as the near-term future. Therefore I wish to work on impacting the near-term future."

2: "I care about the future of humanity. I care about the long-term future of humanity as well as the near-term future, with neither form of care strictly dominating the other. However, I do not feel I can meaningfully impact the long-term future. I still care about both. Therefore I wish to work on impacting the near-term future."

3: "I care about the future of humanity. I try to care about the long-term future of humanity as well as the near-term future, with neither form of care strictly dominating the other. However, I do not feel I can meaningfully impact the long-term future. Because I feel I cannot meaningfully impact the long-term future, I also find it hard to care as much about the long-term future. Therefore I wish to work on impacting the near-term future."

Basically I'm just trying to understand how you reason between "what you care about" and "how to work towards what you care about". Utilitarians usually assume a strict distinction between actions and (terminal) goals, with goals being axiomatic and actions being justified by goals. Your post, however, seems (in my mind) to spend time justifying terminal (not instrumental) goals based on how easy or difficult it is to act in favour of them.