Posts

Catholic theologians and priests on artificial intelligence 2022-06-14T18:53:49.115Z
Book review of "Mengzi" 2022-03-16T22:58:17.799Z

Comments

Comment by anonymous6 on The (Allegedly) Best Business Books · 2022-09-12T09:22:47.356Z · EA · GW

"Business Adventures" by John Brooks is a collection of midcentury New Yorker articles about business, obviously very old-fashioned but they are really quite good. There's something to be said for learning about:

  • a short squeeze in shares of the Piggly Wiggly supermarket chain (the founder went broke, and later tried to build something like Amazon's Just Walk Out stores with 1950s punch card technology)
  • the first insider trading lawsuit
  • the first "Big Tech" information technology company that spun off from university research and promoted liberal political causes (Xerox)
  • the earliest cases of employees being sued over noncompete/intellectual property agreements

Among many other interesting but less obviously relevant topics.

Comment by anonymous6 on Who are some less-known people like Petrov? · 2022-09-07T15:27:45.903Z · EA · GW

José Figueres Ferrer was victorious in the Costa Rican civil war, after which he appointed himself head of the provisional junta.

Sounds like trouble — but he only ruled for 18 months, during which time he abolished the army and extended the franchise to women and nonwhite people. Then he stepped down and there have been fair elections since.

Comment by anonymous6 on Why I Hope (Certain) Hedonic Utilitarians Don't Control the Long-term Future · 2022-08-08T02:55:42.811Z · EA · GW

In meditation there are the jhanas, which include states of intense physical pleasure (like a runner's high). I learned how to do this, but the pleasure gets boring -- though not less intense -- after about 10 minutes, and I feel tempted to go do something less pleasurable (and not less painful or offering greater future benefits). (And you'd think it would be habit-forming, but in fact I have a hard time keeping up my meditation habit...)

What this taught me is that I don't always want to maximize pleasure, even if I can do it with zero cost. I thus have a hard time making sense of what hedonists mean by "pleasure".

If it's just positive emotions and physical pleasure, then that means sometimes hedonists would want to force me to do things I don't want to do, with no future benefit to me, which seems odd. (I guess a real bullet-biting hedonist could say that I have a kind of akrasia, but it's not very persuasive.)

It also seems that hedonists who say "pleasure" sometimes mean some subtler, multidimensional notion of subjective experiences of human flourishing. Giving a clear definition of this seems beyond anybody now living, but in principle it seems like a safer basis for hedonistic utilitarianism, and it bothers me a lot less.

But now I'm not so sure, because I think most of your arguments here also go through even for a complicated, subtle notion of "pleasure" that reflects all our cherished human values.

Comment by anonymous6 on arxiv.org - I might work there soon · 2022-07-18T19:34:31.416Z · EA · GW

I personally think Distill just had standards that were way too high for the communication quality of the papers they wanted to publish. They also specifically wanted work that "distills" important concepts, rather than the traditional novel/beat-SOTA ML paper.

I think I get the strategic point of this -- they wanted to build enough prestige to become a recognized venue, even though they were publishing work that traditionally "doesn't count". But it seems to have failed, and they might have been better off with lower standards and/or allowing more traditional ML research.

You could still do a good ML paper with some executable code, animations, and interactive diagrams. Maybe you can get most of the way there by auto-processing a Jupyter notebook and then cleaning it up a little. It might have mediocre writing and ugly diagrams, but that's probably fine and in many cases could still be an improvement on a PDF.
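As a rough sketch of what that auto-processing step might look like (just one possible approach using the nbconvert package, with "paper.ipynb" as a hypothetical notebook name, not a claim about how Distill or any particular venue actually works):

```python
# Sketch: turn a Jupyter notebook into a standalone HTML "paper".
# Assumes the nbconvert/nbformat packages are installed and paper.ipynb exists.
# Roughly equivalent to the CLI: jupyter nbconvert --to html --execute paper.ipynb
import nbformat
from nbconvert import HTMLExporter
from nbconvert.preprocessors import ExecutePreprocessor

# Load the notebook and re-run it so outputs (plots, tables) are fresh.
nb = nbformat.read("paper.ipynb", as_version=4)
ExecutePreprocessor(timeout=600).preprocess(nb, {"metadata": {"path": "."}})

# Export to HTML; the result can then be cleaned up by hand.
body, _resources = HTMLExporter().from_notebook_node(nb)
with open("paper.html", "w", encoding="utf-8") as f:
    f.write(body)
```

The manual cleanup -- rewriting the prose, polishing the figures -- would still be where most of the effort goes.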

Comment by anonymous6 on A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform · 2022-06-16T18:13:52.402Z · EA · GW

Unlike poverty and disease, many of the harms of the criminal justice system are due to intentional cruelty. People are raped, beaten, and tortured every day in America's jails and prisons. There are smaller cruelties, too, like prohibiting detainees from seeing visitors in order to extort more money out of their families.

To most people, seeing people doing intentional evil (and even getting rich off it) seems viscerally worse than harm due to natural causes.

From a ruthless expected utility perspective, I think this probably is correct in the abstract, i.e. all else equal, murder is worse than an equivalently painful accidental death. However, I doubt that taking it into account (even being very generous about things like "illegible corrosion to the social fabric") would meaningfully change your conclusions about $/QALY in this case, because all else is not equal.

But I think the distinction is probably worth making, as it's a major difference between criminal justice reform and the two baselines for comparison.

Comment by anonymous6 on Catholic theologians and priests on artificial intelligence · 2022-06-14T20:49:44.357Z · EA · GW

Good call -- I added a little more detail about these two discussions.

Comment by anonymous6 on Responsible/fair AI vs. beneficial/safe AI? · 2022-06-10T16:51:28.289Z · EA · GW

A thought that occurred to me about some of the bad dynamics on social media:

Some well-known researchers in the AI Ethics camp have been critical of the AI Safety camp (or of associated ideas like longtermism). By contrast, AI Safety researchers seem to be neutral-to-positive on AI Ethics. So there is some asymmetry.

However, there are certainly mainstream non-safety ML researchers who are harshly (typically unfairly) critical of AI Ethics. And there are also AI-Safety/EA-adjacent popular voices (like Scott Alexander) who criticize AI Ethics. Then on top of this there are fairly vicious anonymous trolls on Twitter.

So some AI Ethics researchers reasonably feel like they're being unfairly attacked and that people socially connected to EA/AI Safety are in the mix, which may naturally lead to hostility even if it isn't completely well-directed.

Comment by anonymous6 on Responsible/fair AI vs. beneficial/safe AI? · 2022-06-02T20:48:11.186Z · EA · GW

https://facctconference.org is the major conference in the area. It's interdisciplinary -- mix of technical ML work, social/legal scholarship, and humanities-type papers.

Some big names: Moritz Hardt, Arvind Narayanan, and Solon Barocas wrote a textbook (https://fairmlbook.org), and they and many of their students are important contributors. Cynthia Dwork is another big name in fairness, and Cynthia Rudin in explainable/interpretable ML. That's a non-exhaustive list, but I think it's a decent seed for a search through coauthors.

I believe there is in fact important technical overlap between the two problem areas. For example, https://causalincentives.com is research from a group of people who see themselves as working in AI safety. Yet people in the fair ML community are also very interested in causality, and study it for similar reasons using similar tools.

I think much of the expressed animosity is only because the two research communities seem to select for people with very different preexisting political commitments (left/social justice vs. neoliberal), and they find each other threatening for that reason.

On the other hand, there are differences. An illustrative one is that fair ML people care a lot about the fairness properties of linear models, both in theory and in practice right now. Whereas it would be strange if an AI Safety person cared at all about a linear model -- they're just too small and nothing like the kind of AI that could become unsafe.

Comment by anonymous6 on Mastermind Groups: A new Peer Support Format to help EAs aim higher · 2022-06-01T03:43:49.488Z · EA · GW

My feeling about the phrase "Mastermind Group" is fairly negative. I have heard people mention it from time to time, and I knew it was from Napoleon Hill, who was kind of the inventor of the self-help/self-improvement book. The phrase is something I associate, I think reasonably, with the whole culture of self-improvement seminars and content that descends from Hill -- what used to be authors/speakers like Tony Robbins and is now also really big on YouTube. The kind of thing where someone is going to sell you a course on how to get rich, and the way to get rich is to learn to successfully sell a course on how to get rich.

Take this for what it's worth -- just one person's possibly skewed gut reaction to this phrase. I think the idea of peers meeting in a group to support each other remains sound.

Comment by anonymous6 on Complex Systems for AI Safety [Pragmatic AI Safety #3] · 2022-05-26T17:57:48.392Z · EA · GW

One way I think it's plausible to draw a line between RL and core DL is that, post-AlphaGo, a lot of people were very bullish specifically on deep networks + reinforcement learning. Part of the idea was that supervised learning required inordinately costly human labeling, whereas RL would be able to learn from cheap simulations and even improve itself online in the world. OpenAI was originally almost 100% RL-focused. That thread of research is far from dead, but it has certainly not panned out the way people hoped at the time (e.g. OpenAI has shifted heavily away from RL).

Meanwhile non-RL deep learning methods, especially generative models that kind of sidestep the labeling issue, have seen spectacular success.

Comment by anonymous6 on Try to sell me on [ EA idea ] if I'm [ person with a viewpoint ] · 2022-05-18T19:08:23.255Z · EA · GW

I gave this a shot and it ended up being an easier sell than I expected:

"AI is getting increasingly big and important. The cutting edge work is now mainly being done by large corporations, and the demographics of the people who work on it are still overwhelmingly male and exclude many disadvantaged groups.

In addition to many other potential dangers, we already know that AI systems trained on data from society can unintentionally come to reflect the unjust biases of society: many of the largest and most impressive AI systems right now have this problem to some extent. A majority of the people working on AI research are quite privileged and many are willfully oblivious to the risks and dangers.

Overall, these corporations expect huge profits and power from developing advanced AI, and they’re recklessly pushing forward in improving its capabilities without sufficiently considering the harms it might cause.

We need to massively increase the amount of work we put into making these AI systems safe. We need a better understanding of how they work, how to make them reflect just values, and how to prevent possible harm, especially since any harm is likely to fall disproportionately on disadvantaged groups. And we even need to think about making the corporations building them slow down their development until we can be sure they’re not going to cause damage to society. The more powerful these AI systems become, the more serious the danger — so we need to start right now."

I bet it would go badly if one tried to sell a social justice advocate on some kind of grand transhumanist vision of the far future, or even just on generic longtermism, but it's possible to think about AI risk without those other commitments.

Comment by anonymous6 on Thought experiment: If you had 3 months to turn a stressed and unhappy person into a calm and happy one, what meta approach would you take? · 2022-05-09T18:34:12.675Z · EA · GW

It is rare, but it does happen, that using psychedelic drugs triggers a psychotic episode. Even though it is rare, this is such a bad outcome that it's worth taking into consideration.

My layperson's understanding of the risks and tradeoffs right now is as follows: I think that used as a treatment for a concrete and difficult problem like PTSD, psychedelic drugs seem like immensely useful tools that should be used much more.

But for just general self-improvement or self-actualization, using psychedelic drugs feels to me like "picking up pennies in front of a steamroller" -- it will be fairly good for most people most of the time, with a huge tail risk.

I don't think it's well understood when, why, or how often this happens. I wish it were better understood, as I suspect it's specific people who are at risk and most people can use psychedelics safely. But from where I sit it seems like a -EV bet absent better information about your own brain.

Comment by anonymous6 on The AI Messiah · 2022-05-06T01:42:15.550Z · EA · GW

One can imagine, say, a Christian charitable organization where non-Christians see its work for the poor, the sick, the hungry, and those in prison as good, and don't really mind that some of the money also goes to funding theologians and building cathedrals.

Although Christianity kind of has it built in that you have to help the poor even if the Second Coming is tomorrow. The risk in EA is that people could become erroneously convinced of a short AI timeline and conclude that all the normal charitable stuff is now pointless.

Comment by anonymous6 on Effective altruism’s odd attitude to mental health · 2022-05-01T21:10:04.938Z · EA · GW

People (including, I think, some of the research at the Happier Lives Institute) often distinguish "serious mental illness (SMI)", which is roughly schizophrenia, bipolar I, and debilitating major depression, from "any mental illness (AMI)", which includes everything.

The term "mental health" lumps together these two categories that, despite their important commonalities, I think probably should be analyzed in very different ways.

For example, with SMI, there are often treatments with huge obvious effects. But the side effects are bad, and patients may refuse treatment for various reasons including lack of insight. Treating these diseases can have a huge impact -- the difference between someone being totally unable to work or care for themselves and then dying young by accident or suicide, vs. being able to live an independent and successful life. But they are fairly rare in the population.

Whereas with the set AMI-minus-SMI -- generalized anxiety and the like -- treatment effect sizes seem small and hard to measure. There's often so much demand for treatment that rationing is required. Impairment and suffering can be really bad, but not, I think, typically as bad as with SMI. But these diseases are much more prevalent, so even if effect sizes are smaller, maybe the total impact of an intervention is much greater.

This distinction is obvious, but I want to point it out explicitly: even though everyone kind of knows it, I think it's still underrated, and it's probably important for thinking about expected impact.

Comment by anonymous6 on What's the best machine learning newsletter? How do you keep up to date? · 2022-03-26T15:10:30.953Z · EA · GW

Wow, that certainly is more “attention” than I remember at the time.

I think filtering on that level of hype alone would still leave you reading way too many papers.

But I can see that it might be more plausible for someone with good judgment + finger on the pulse to do a decent job predicting what will matter (although then maybe that person should be doing research themselves).

Comment by anonymous6 on What's the best machine learning newsletter? How do you keep up to date? · 2022-03-26T02:52:28.896Z · EA · GW

The Transformer paper (“Attention Is All You Need”) was only a poster at NIPS 2017 (not even a spotlight, let alone an oral presentation). I don’t know if anyone at the time predicted the impact it would have.

It’s hard to imagine a newsletter that could have picked out that paper at the time as among the most important of the hundreds included. For comparison, there was probably much more hype and discussion at the time around Hinton and his students’ capsule networks (which also had a NIPS 2017 paper).

I think this is generally true of ML research. It’s usually very hard to predict impact in advance. You could probably do pretty well with 6 months to a year lag though.

I will recommend the TWIML podcast, which interviews a range of good researchers, though not only on the biggest stuff.

Comment by anonymous6 on On expected utility, part 2: Why it can be OK to predictably lose · 2022-03-18T14:36:18.093Z · EA · GW

Somehow, the 1-life-vs-1000 thought experiment made part of my brain feel like this was decision making in an emergency, where you have to make a snap judgment in seconds. And in a real emergency, I think saving 1 life might very well be the right choice just because -- how sure are you the chance is really 1%? Are you sure you didn't get it wrong? Whereas if you are 99.9% certain you can save one life, you must have some really clear, robust reason to think that.

If I imagine a much slower scenario, so that I can convince myself I really am sure the probability of saving 1000 lives is actually known to be 1%, then taking the gamble seems like a much clearer choice, since it saves 10 lives in expectation (1% of 1000) versus 1 for certain.

My brain still comes up with a lot of intuitive objections for utility maximization, but I hadn't noticed this one before.