Posts

Warnings on Weirdness 2020-08-30T14:00:15.962Z
Azra Raza's First Cell Center wants to take a different approach to fighting cancer: Detect the fire before it spreads 2020-01-10T04:15:51.901Z

Comments

Comment by MakoYass on A Happiness Manifesto: Why and How Effective Altruism Should Rethink its Approach to Maximising Human Welfare · 2021-03-30T20:38:53.573Z · EA · GW

I am really puzzled by those graphs. But as to the Easterlin paradox, it's still alive (http://repec.iza.org/dp7234.pdf). Happiness has been increasing, and so has GDP, but the rates of increase still don't seem to have much of a relationship.

Comment by MakoYass on Report on Running a Forecasting Tournament at an EA Retreat · 2021-02-21T06:13:31.328Z · EA · GW

I was there and I can report that T is consistently awesome in that particular way.

Comment by MakoYass on Ranking animal foods based on suffering and GHG emissions · 2021-02-13T07:38:42.330Z · EA · GW

I'm not sure the maceration of male chicks induces any suffering. IIRC, it's approved as a humane killing method by the SPCA or someone like that.

Comment by MakoYass on Ranking animal foods based on suffering and GHG emissions · 2021-02-13T07:33:53.164Z · EA · GW

I'm glad to see the inclusion of anthropic units as a function of neuron count/brain mass. Turns out that makes a huge difference. Ideally I'd use brain mass*square(neuron count), but that would be overkill...
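
To make concrete how much the weighting scheme matters, here is a minimal sketch (mine, not the post's, and in hypothetical Python rather than anything the authors used) comparing a linear neuron-count weighting against the brain mass × neuron count² weighting mentioned above. The species figures are rough, illustrative orders of magnitude, not the post's data.

```python
# Illustrative sketch only: how the choice of anthropic weighting scheme
# changes relative moral weights. Figures are rough orders of magnitude.

SPECIES = {
    # name: (brain mass in kg, neuron count) -- approximate, for illustration
    "chicken": (0.004, 2e8),
    "pig":     (0.18,  2e9),
    "cow":     (0.45,  3e9),
    "human":   (1.35,  9e10),
}

def weight(brain_mass, neurons, scheme):
    """Return an (unnormalized) anthropic weight under a given scheme."""
    if scheme == "neurons":            # linear in neuron count
        return neurons
    if scheme == "mass":               # linear in brain mass
        return brain_mass
    if scheme == "mass_x_neurons_sq":  # the 'overkill' option mentioned above
        return brain_mass * neurons ** 2
    raise ValueError(scheme)

for scheme in ("neurons", "mass", "mass_x_neurons_sq"):
    human = weight(*SPECIES["human"], scheme)
    relative = {name: weight(m, n, scheme) / human for name, (m, n) in SPECIES.items()}
    print(scheme, {name: f"{w:.1e}" for name, w in relative.items()})
```

Under the linear scheme a chicken comes out at roughly 10⁻³ of a human; under the superlinear scheme it drops by several further orders of magnitude, which is the sense in which the choice makes a huge difference.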

In building this, did you come across any literature on how anthropic measure relates to mass and neuron configuration? I'd love to see it if you have some. I've got quite an interest in the anthropic measure binding question; my somewhat unconventional stance influences my decisions regarding animal welfare, so I really ought to read whatever's out there.

Comment by MakoYass on What are some high impact companies to invest in? · 2021-02-13T04:33:04.939Z · EA · GW

Anything in the genomic medicine space, that is to say, the ARKG ETF.

A lot of new opportunities opened up in this field recently due to CRISPR; they haven't been realized yet, and the stocks are generally too low due to, I dunno, Theranos maybe. Some of the treatments are really amazing: cures for previously incurable genetic diseases, better cancer treatments.

We should pause, though, and ask whether accelerating the realization of these technologies will accelerate the realization of extinctive biological weapons. I have not paused long enough over this question, myself. I can't really argue that the benefits outweigh the costs.

Comment by MakoYass on Things I recommend you buy and use. · 2021-02-12T07:11:49.168Z · EA · GW

I was a little concerned about the bid sniping recommendation: bad things often happen when a technique for subverting a system and getting an edge over others is widely adopted. But it occurred to me that all that would happen is that eBay auctions would become, like, one-shot simultaneous blind bids, which might well be an improvement. Auction processes, currently, are selected to benefit sellers, to the detriment of buyers, and perhaps to the detriment of pricing efficiency (I'd expect the winner's curse to lead to overpricing), so it wouldn't be that surprising if the adoption of bid sniping turns out to be a generally socially beneficial transition.
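
As a rough illustration of that winner's-curse parenthetical (my sketch, not something from the post): if bidders in an ascending auction naively bid up to their own noisy estimate of a common value, the sale price lands near the runner-up's estimate, which with enough bidders tends to overshoot the true value.

```python
# Illustrative sketch of the winner's curse with naive common-value bidders.
# Each bidder sees only a noisy estimate of the item's true worth; in an
# ascending auction the winner ends up paying roughly the second-highest
# estimate, which on average exceeds the true value.

import random

def average_overpayment(n_bidders=8, true_value=100.0, noise_sd=20.0, trials=10_000):
    total = 0.0
    for _ in range(trials):
        estimates = sorted(
            (random.gauss(true_value, noise_sd) for _ in range(n_bidders)),
            reverse=True,
        )
        price = estimates[1]  # winner pays about the runner-up's estimate
        total += price - true_value
    return total / trials

print(f"average overpayment: {average_overpayment():.2f}")  # positive => overpricing
```

Whether a one-shot blind format fully fixes this depends on how much bidders shade their bids, but the sketch at least shows where the overpricing worry comes from.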

I can second the recommendation of Instant Pots. I have a Crock-Pot Express (I couldn't get an Instant Pot in New Zealand at a decent price, which baffles me; why does no electronics store seem to pay attention to online reviews? How do they make their import decisions?) and I use it all the time for cooking beans, rice, and stew, and occasionally for raising dough (it has a low-heat yogurt setting).

Regarding cast iron pans, do you know how the non-stick seasoning treatment works, like at the physics level? I really need to know! The seasoning on my wok (assuming it's essentially the same chemistry) keeps failing, which is completely mystifying to me, and I'm tired of it. Patches of it will, seemingly at random, become sticky and tacky-feeling to the spatula, and stuff will burn onto them. Usually right after adding rice (but, tragically, not always) the burnt scum will lift off and it will be perfectly non-stick again. WHY.

I think we should emphasize that for vegans, B12 isn't just probably good, it's mandatory. There aren't really any plant-based sources, and if you have too little for too long you will suffer severe neurological impairment. Also, vegans must remember to take creatine for maximum memory function! :<

Comment by MakoYass on Would you buy from an altruistic shop? · 2021-02-12T04:02:19.634Z · EA · GW

One reason I'd have difficulty donating through this channel is that I'm not sure I'd be able to get tax credits. If we get something in return, it might not count as a charitable contribution any more.

I wonder if you could instead only sell your stuff (at reasonable prices) to people who can show you a big donation receipt in their name. That would behave similarly, and they'd still be able to claim the tax credits.

Comment by MakoYass on Would you buy from an altruistic shop? · 2021-02-12T04:00:00.717Z · EA · GW

I don't think you should be so defensive in the face of accusations of promoting a bragging culture. Own it. If someone asked me "Isn't it unethical to brag?", I would tell them that no, on the contrary, it's positively ethical to brag.

The following is opinion and probably contains inaccuracies, but it would be important if true.

Bragging (well) about how good you are is a good norm.

If credibly signalling our goodness is normalized, there will emerge social pressures to do more good than we otherwise would have. If you normalize the right sort of bragging, it will create a culture of philanthropic accountability.

I sometimes wonder if the taboo against bragging might just be an artifact of Abrahamic religion (if God is the final judge of the virtue of every man, there's little need for us to judge each other, so showing high concern for the judgements of your fellow man is a sign of a lack of piety) plus crab-bucket mentality ("I feel pissed off when the best man shows everyone how much better he is than me; I am a narcissist and cannot believe my being pissed off by that could reflect a character flaw on my part; it must be because he's doing something genuinely bad; therefore we should agree that it's unethical and forbid it"). I can't see why we should need it any more.

If you keep costly goodness signalling firmly under the earnest truthseeking norms of effective altruism, it could be the strongest thing we ever built. If you don't think you can rein in these wild horses of Ra, then I would recommend that you don't summon them.

So, I like the concept, perhaps for different reasons than your own, but I hope you'll find my reasons convincing/refutable.

Comment by MakoYass on Privacy as a Blind Spot: Are There Long Term Harms in Using Facebook, Google, Slack etc.? · 2021-02-05T07:55:21.153Z · EA · GW

A nonstandard solution I still can't stop thinking about: give up on the impossible project of digital privacy and democratize the panopticon.

Comment by MakoYass on What is going on in the world? · 2021-02-05T07:29:39.343Z · EA · GW

Infinite Ethics is solved by LDT, btw. The multiverse is probably infinite (I don't know where this intuition comes from, but come it does), but if so, there are infinitely many instances of you strewn through it, and you are effectively controlling all of them acausally. Some non-zero measure of all of that is entangled with your decisions.
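
A hedged way to formalize that last claim (my sketch, not the commenter's): instead of comparing divergent totals across the multiverse, compare the measure-weighted difference your decision makes across all the instances it acausally controls.

```latex
% Sketch: decision-relevant comparison under an assumed anthropic measure \mu.
% X is the set of your instances (those whose action your decision fixes),
% u_x(a) is the value realized locally at instance x if action a is taken.
\Delta U(a, b) = \int_{X} \bigl( u_x(a) - u_x(b) \bigr) \, d\mu(x),
\qquad \mu(X) > 0 .
```

Total value over the whole multiverse may be infinite either way, but this difference can still be finite and have a definite sign, which is all a decision procedure needs; the existence and uniqueness of such a measure μ is, of course, the contentious part.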

Comment by MakoYass on What posts do you want someone to write? · 2021-02-04T06:23:37.008Z · EA · GW

A post re-examining the suffering impact of veganism in countries with good average livestock welfare in many product categories. New Zealand, for instance, has grass-fed cows as the norm; egg hens are usually required to have decent amounts of space and don't appear to be especially stressed; and the main supermarket chain Countdown just switched to providing mostly "free farmed" pork (birthing sows seem entirely free, but pigs destined for market are moved to barns that are only somewhat free; this excludes non-store-brand pork products, but the store-brand bacon looks pretty good quality, so it might be popular enough).

I get the impression that we're unlikely to receive this kind of analysis through most channels promoting animal welfare. They might not want to tell you about the good parts. I tend to encounter a lot of Copenhagen ethics and consent arguments (which can't be addressed by improving conditions, no matter how much you improve them, which is a bit of a reductio ad absurdum of consent arguments).

It may help to draw attention to good policies, focus pressure on the worst offenders, and occasionally improve EA nutrition. Promoting animal welfare from within the industry is likely to accelerate incremental change: stockpeople who are doing especially well at limiting animal suffering will tend to be proud of their way of doing things and to want to promote it to legislators, for both moral and economic reasons.

Having resources like this may also help for being able to come across as balanced and informed when discussing local animal welfare.

Comment by MakoYass on 2020 AI Alignment Literature Review and Charity Comparison · 2021-01-11T04:16:21.273Z · EA · GW

Regarding ARCHES

"Contrary to some others he argues that we should perhaps never make 'prepotent' AI (one that cannot be controlled by humans) - not even a defensive one to prevent other AI threats."

Where's that? I'd be very interested to see an argument for that. I looked around and found a lot of reasons prepotence is dangerous, and ways to avoid it, but I wasn't able to find an argument that it is decisively more dangerous than its absence.

(I do suspect non-prepotence is dangerous. In short: prepotent AGI can exceed us morally, and visibly is required to (not in the sense of making metaphysical moral progress, which I don't believe in, but in the sense that there can be higher levels of patience, lucidity, and knowledge of the human CEV and its applications, which would bring the thing to conclusions we'd find shocking). There's a sense in which prepotent AGI would be harder to weaponize, harder to train on a single profane object-level objective and fire; it is less tempting to try to use it to do stupid, rash things we will grow to regret, because the consequences of using it in stupid, rash, regrettable ways are so much more immediately and obviously irrevocable. In the longest term, building agentic infrastructures that maintain binding treaties will be a necessity for overcoming the last coordination problems; that's another reason that prepotence is inevitable. Notably, the treaty "It should be globally impossible to make prepotent AGI" would itself manifest as a prepotent agency. The idea that prepotence is or should be avoidable might be conceptually unworkable.)

(In my skim-read, I also couldn't find discussion of the feasibility of aligned prepotent AI, and that's making me start to wonder if there might be a perilous squeamishness around it. There are men who worry about being turned into housecats, who work vainly on improving the human brain with neural meshes so that humans can compete with AI. The reality is that the housecats will be happy and good and as enfranchised as they wish to be, and human brains will not compete, and that will be fine. It's imaginable that this aversion to prepotence comes from a regressive bravado that has been terminally outmoded since the rise of the first city-state. If I'm missing the mark on this, though, apologies for premature psychologizing.)

Comment by MakoYass on Is there a hedonistic utilitarian case for Cryonics? (Discuss) · 2021-01-02T01:58:37.670Z · EA · GW

I've been musing about a Suspension for Historically Significant Minds movement. I don't particularly care whether I personally get suspended; I don't think I'm important, we can only save so many of these living biographies, and others are more important. I think it's a tragedy that the most interesting biographies are currently being burned.

I'm not sure it's reasonable to expect a fund like this to be able to act very often, though! The figures who won't pay for their own suspension usually aren't going to be willing to accept suspension.

The people I'd want to nominate would tend to have a deep attachment to some community of the present; they would rarely think of the far future. Most of them, on receiving their invitation, would think about it for 20 minutes and then trash it, out of a sense of humility, and out of a sense that accepting such a thing would look from the outside like an abandonment of their community. I would want to say to them, "No, you were selected because you are the largest portion of that community that we're able to save." I'm not sure whether they'd hear it.

Maybe it would help to give them additional nominations to allocate to others, so it wouldn't just be them. A lot of them wouldn't want to deal with the political consequences of having to make a decision like that. It would just make things messier. The dirty work of triage.

Comment by MakoYass on The Fermi Paradox has not been dissolved · 2020-12-12T23:20:52.163Z · EA · GW

Due to its focus on statistical reasoning, and the difficulty of actioning the Fermi paradox in an effective altruist context (despite how interesting and probably important it is), I've linkposted this to lesswrong.com.

Comment by MakoYass on 'Longtermism' · 2019-10-14T08:04:11.675Z · EA · GW

Is the argument here something along the lines of: I find that I don't want to struggle to do what these values would demand, so they must not be my values?

I hope I'm not seeing an aversion to surprising conclusions in moral reasoning. Science surprises us often, but it keeps getting closer to the truth. Technology surprises us all the time, but it keeps getting more effective. If you won't accept any sort of surprise in the domain of applied morality, your praxis is not going to end up being very good.