Posts

nonn's Shortform 2021-11-14T19:52:21.658Z

Comments

Comment by nonn on Leaning into EA Disillusionment · 2022-07-29T19:48:01.711Z · EA · GW

typo, imo.

Comment by nonn on Leaning into EA Disillusionment · 2022-07-22T18:36:21.599Z · EA · GW

I'm somewhat more pessimistic that disillusioned people have useful critiques, at least on average. EA asks people to swallow a hard pill: "set X is probably the most important stuff, by a lot", where X doesn't include that many things. I think this is correct (i.e. the set will be somewhat small), but it means that a lot of people's talents & interests probably aren't as [relatively] valuable as they previously assumed.

That sucks, and it creates some obvious & strong motivated reasons to lean into not-great criticisms of set X. I don't even think this is conscious, just a vague 'this feels wrong' reaction when people say [thing I'm not the best at / dislike] is the most important. This is not to say set X doesn't have major problems.

They might more often have useful community critiques imo, e.g. they're more likely to notice social blind spots that community leaders are oblivious to.

Also, I am concerned about motivated reasoning within the community, but don't really know how to correct for this. I expect the most-upvoted critiques will be the easy-to-understand plausible-sounding ones that assuage the problem above or social feelings, but not the correct ones about our core priorities. See some points here: https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism

Comment by nonn on Leaning into EA Disillusionment · 2022-07-22T18:34:48.685Z · EA · GW

I'd add a much more boring cause of disillusionment: social stuff

It's not all that uncommon for someone to get involved with EA, make a bunch of friends, and then the friends gradually get filtered by who gets accepted to prestigious jobs or who does 'more impactful' things in the community's estimation (often genuinely more impactful!).

Then sometimes they just start hanging out with cooler people they meet at their jobs, or get genuinely busy with work, while their old EA friends are left on the periphery (+ the gender imbalance piles relationship stuff on top). This happens in normal society too, but there seem to be more norms/taboos there that blunt the impact.

Comment by nonn on EA Shouldn't Try to Exercise Direct Political Power · 2022-07-22T04:41:23.678Z · EA · GW

Your second question, "Will the potential negative press and association with Democrats be too harmful to the EA movement to be worth it?", seems to ignore that a major group EAs will be running against is Democrats in primaries.

So it's not only that you're creating large incentives for Republicans to attack EA, you're also creating them for e.g. progressive Democrats. See: Warren endorsing Flynn's opponent & somewhat attacking Flynn for crypto-billionaire-sellout stuff.

That seems potentially pretty harmful too. It'd be much harder to be an active group at top universities if progressive groups strongly disliked EA.

Which I think they would, if EAs ran against progressives enough that Warren or Bernie or AOC criticized EA more strongly. That would be in line with the incentives we're creating & the general vibe [pretty skeptical of a bunch of white men, crypto billionaires, etc.].

Comment by nonn on Punching Utilitarians in the Face · 2022-07-13T19:01:04.465Z · EA · GW

Random aside, but does the St. Petersburg paradox not just make total sense if you believe Everett & do a quantum coin flip? i.e. in 1/2 universes you die, & in 1/2 you more than double. From the perspective of all things I might care about in the multiverse, this is just "make more stuff that I care about exist in the multiverse, with certainty"

Or more intuitively, "with certainty, move your civilization to a different universe alongside another prospering civilization you value, and make both more prosperous".

Or if you repeat it, you have "move all civilizations into a few giant universes, and make them dramatically more prosperous".

Which is clearly good under most views, right?
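
As a toy check on that branch-accounting intuition, here's a minimal sketch (my own illustrative framing and numbers, not anything from the original post): each quantum flip halves the surviving measure but multiplies the survivors' value by more than 2, so the measure-weighted total across branches grows with every flip.

```python
# Toy branch accounting for a repeated quantum double-or-nothing gamble.
# Assumes each flip splits the world into two equal-measure branches: one where
# the civilization is gone, one where its value is multiplied by `gain` (> 2).

def multiverse_value(initial_value: float, gain: float, n_flips: int) -> float:
    """Measure-weighted total value across all branches after n flips."""
    surviving_measure = 0.5 ** n_flips               # fraction of branches still around
    value_in_survivors = initial_value * gain ** n_flips
    return surviving_measure * value_in_survivors

for n in range(5):
    print(n, round(multiverse_value(1.0, gain=2.1, n_flips=n), 3))
# Prints an increasing sequence (1.0, 1.05, ~1.102, ...): fewer but much richer
# branches, with a growing measure-weighted total.
```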

Comment by nonn on My bargain with the EA machine · 2022-05-02T02:14:50.225Z · EA · GW

Another complication: we want to select for people who are good fits for our problems, e.g. math kids, philosophy research kids, etc. To some degree, we're selecting for people with personal-fun functions that match the shape of the problems we're trying to solve (where what we'd want them to do is pretty aligned with their fun)

I think your point also applies to cause selection, "intervention strategy", or decisions like "moving to Berkeley". I'm confused more generally.

Comment by nonn on My bargain with the EA machine · 2022-05-02T02:10:48.666Z · EA · GW

I'm confused about how to square this with specific counterexamples. Take theoretical alignment work: P(important safety progress) probably scales with time invested, but it doesn't go up 100x when you double your work hours. Any explanations here?

Idk if this is because uncertainty/probabilistic stuff muddles the log picture. E.g. we really don't know where the hits are, so many things are 'decent shots'. Maybe after we know the outcomes, the outlier good things would be quite bad on the personal-liking front. But that doesn't sound exactly correct either.

Comment by nonn on FTX/CEA - show us your numbers! · 2022-04-19T00:34:14.380Z · EA · GW

Curious if you disagree with Jessica's key claim, which is "McKinsey << EA for impact"? I agree Jessica is overstating the case for "McKinsey <= 0", but seems like best-case for McKinsey is still order(s) of magnitude less impact than EA.

Subpoints:

  • Current market incentives don't address large risk-externalities well, or appropriately weight the well-being of very poor people, animals, or the entire future.
  • McKinsey for earn-to-learn/give could theoretically be justified, but that doesn't contradict Jessica's point of spending money to get EAs
  • Most students require a justification for any charitable group spending significant amounts of money on movement building, & "we're competing with McKinsey" reads favorably as one

Agree we should usually avoid saying poorly-justified things when it's not a necessary feature of the argument, as it could turn off smart people who would otherwise agree.

Comment by nonn on How about we don't all get COVID in London? · 2022-04-11T16:33:26.753Z · EA · GW

There were tons of cases from EAGx Boston (an area with lower covid case counts). I'm one of them. Idk exact numbers but >100 if I extrapolate from my EA friends.

Not sure whether this is good or bad tho, as IFR is a lot lower now. Presumably lower long covid too, but hard to say

Comment by nonn on Some thoughts on vegetarianism and veganism · 2022-02-16T00:14:00.565Z · EA · GW

An argument against that doesn't seem directly considered here: veganism might turn some high-potential people off without compensatory benefits, and the very high base rate of non-veganism (~99% of Western people are non-vegan, IIRC) means this may matter even if the effects are relatively marginal.

Obviously many things can be mitigated significantly by being kind/accommodating (though at some level there's a little remaining implied "you are doing bad"). But even accounting for that, a few things remain. E.g.

  • People can feel vaguely outgroupy because most core EAs in many groups are vegan, & most new people will feel slightly awkward about that, which affects their likely comfort & future involvement in EA spaces
  • On the margin, promising people may not repeatedly come to events that would expose them to EA ideas because they don't like the food (empirically this was a fairly common complaint at newbie events in my university). E.g. it may not be filling if you don't like tofu variants, which a significant fraction of the population doesn't
  • Probably more things. Diet & dinners are fairly central to people's social lives, so I'd expect other effects too.

And to be clear, there are plausible compensatory benefits that you highlight. Though they're not direct effects, so I wonder if they could be gotten in other ways without the possible downsides

Comment by nonn on nonn's Shortform · 2021-11-14T19:52:21.839Z · EA · GW

Still wondering why I never see moral circle expansion advocates make the argument I made here

That argument seems to avoid the suffering-focused problem where moral circle expansion doesn't address, or might even make worse, the worst suffering scenarios for the future (e.g. threats in multipolar futures). Namely, the argument I linked says that despite potentially increasing suffering risk, it also increases the value of good futures enough to be worth it.

TBC, I don't hold this view, because I believe we need a solid "great reflection" to achieve the best futures anyway, and such a reflection is extremely likely to produce the relevant moral circle expansion.

Comment by nonn on JP's Shortform · 2021-09-06T15:07:23.350Z · EA · GW

Yeah I agree that's pretty plausible.  That's what I was trying to make an allowance for with "I'd also distinguish vacations from...", but worth mentioning more explicitly.

Comment by nonn on JP's Shortform · 2021-09-05T15:27:06.236Z · EA · GW

For the sake of argument, I'm suspicious of some of the galaxy takes.

Excellent prioritization and execution on the most important parts. If you try to do either of those while tired, you can really fuck it up and lose most of the value

I think relatively few people advocate working to the point of sacrificing sleep; the prominent hard-work advocate (& kinda jerk) Rabois strongly pushes for sleeping enough & getting enough exercise.
Beyond that, it's not obvious that working less hard results in better prioritization or execution.  A naive look at the intellectual world might suggest the opposite afaict, but selection effects make this hard.  I think having spent more time trying hard to prioritize, or trying to learn how to do prioritization/execution well, is more likely to work.  I'd count "reading/training up on how to do good prioritization" as work.

Fresh perspective, which can turn thinking about something all the time into a liability

Agree re: the value of fresh perspective, but idk if the evidence actually supports that working less hard results in fresh perspective.  It's entirely plausible to me that what is actually needed is explicit time to take a step back - e.g. Richard Hamming Fridays - to reorient your perspective.  (Also, imo good sleep + exercise function as a better "fresh perspective" than most daily versions of "working less hard", like chilling at home.)
TBH, I wonder if working on very different projects to reset your assumptions about the previous one, or reading books/histories of other important projects, etc., is a better way of gaining fresh perspective, because it actually forces you into a different frame of mind.  I'd also distinguish vacations from "only working 9-5", which is routine enough that idk if it'd produce a particularly fresh perspective.

Real obsession, which means you can’t force yourself to do it

Real obsession definitely seems great, but absent that I still think the above points apply.  For most prominent people, I think they aren't obsessed with ~most of the work they're doing (it's too widely varied), but they are obsessed with making the project happen.  E.g. Elon says he'd prefer to be an engineer, but has to do all this business stuff to make the project happen.
Also idk how real obsession develops, but it seems more likely to result from stuffing your brain full of stuff related to the project & emptying it of unrelated stuff or especially entertainment, than from relaxing.

Of course, I don't follow my own advice.  But that's mostly because I'm weak willed or selfish, not because I don't believe working more would be more optimal

Comment by nonn on Open Philanthropy is seeking proposals for outreach projects · 2021-07-19T11:49:53.867Z · EA · GW

Minor suggestion:  Those forms should send responses after you submit, or give the option "would you like to receive a copy of your responses"

Otherwise, it may be hard to confirm whether a submission went through, or to check the details of what you submitted.

Comment by nonn on Cause Prioritization in Light of Inspirational Disasters · 2020-06-08T04:04:28.653Z · EA · GW

I think that depends a lot on framing. E.g. if this is just a prediction of future events, it sounds less objectionable to other moral systems imo b/c it's not making any moral claims (perhaps some by implication, as this forum leans utilitarian)

In the case of making predictions, I'd bias strongly toward saying things I think are true even if they end up being inconvenient, provided they are action-relevant (most controversial topics are not action-relevant, so I think people should avoid them). But this might be important for how to weigh different risks against each other! Perhaps I'm decoupling too much tho.

Aside: I don't necessarily think the post's claim is true, because I think certain other things are made worse by events like this which contributes to long-run xrisk. I'm very uncertain tho, so seems worth thinking about, though maybe not in a semi-public forum

Comment by nonn on Ask Me Anything! · 2019-08-22T20:12:02.312Z · EA · GW

Agree, tried to add more clarification below. I'll try to avoid this going forward, maybe unsuccessfully.

Tbh, I mean a bit of both definitions (Will's views are quite surprising to me, which is why I want to know more), but mostly the former (i.e. stating it's close to 0% or 100%).

Comment by nonn on Ask Me Anything! · 2019-08-22T19:56:27.882Z · EA · GW

I sometimes find the terminology of "no x-risk", "going well" etc.

Agree on "going well" being under-defined. I was mostly using that for brevity, but it's probably more confusion than it's worth. A definition I might use is "preserves the probability of getting to the best possible futures", or even better, increases that probability. Mainly because, from an EA perspective, even if people are around, locking in a substantially suboptimal moral situation means we've effectively lost most possible value - which I'd call x-risk.

The main point was fairly object-level - Will's beliefs imply either a near-1% likelihood of AGI in 100 years, or a near-99% likelihood of it "not reducing the probability of the best possible futures", or some combination like <10% likelihood of AGI in 100 years AND, even if we get it, >90% likelihood of it not negatively influencing the probability of the best possible futures. Any of these sound somewhat implausible to me, so I'm curious about the intuition behind whichever one Will believes.
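
To make that concrete, here's a minimal sketch of the decomposition (the numbers are my illustrative placeholders, not Will's): if overall risk = P(AGI this century) × P(loss of the best futures | AGI) is to stay under 1%, the conditional term has to be very small unless P(AGI) is itself very small.

```python
# Assumed decomposition (illustrative only): overall_risk = p_agi * p_loss_given_agi.
# For each assumed P(AGI), print the largest P(loss | AGI) consistent with
# keeping the overall risk under 1%.
for p_agi in (0.1, 0.3, 0.5, 0.8):
    max_loss_given_agi = 0.01 / p_agi
    print(f"P(AGI)={p_agi:.0%} -> P(loss | AGI) must be < {max_loss_given_agi:.1%}")
```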


I think it's a mistake to approach these questions with a 50-50 prior. Instead, we should consider the base rate for "events that are at least as transformative as the industrial revolution"

Def agree. Things-like-this shouldn't be approached with a 50-50 prior - throw me in another century & I think <5% likelihood of AGI, the Industrial Revolution, etc. is very reasonable on priors. I just think that probability can shift relatively quickly in response to observations. For the Industrial Revolution, that might be when you've already had the agricultural revolution (so a smallish fraction of the population can grow enough food for everyone), you get engines working well & relatively affordably, you have large-scale political stability for a while s.t. you can interact peacefully with millions of other people, you have proto-capitalism where you can produce/sell things & reasonably expect to make money doing so, etc. At that point, from an inside view, it feels like "we can use machines & spare labor to produce a lot more stuff per person, and we can make lots of money off producing a lot of stuff, so people will start doing that more" is a reasonable position. So those observations would shift me from single digits or less to at least >20% on the Industrial Revolution happening in that century, probably more but discounting for hindsight bias. (I don't know if this is a useful comparison - I'm just using it since you mentioned it, & it does seem similar in some ways: the base rate is low, but it did eventually happen.)
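
As a minimal sketch of how a low prior can move past 20% after a handful of observations (Bayes' rule in odds form; the 5% prior and the per-observation likelihood ratios below are purely illustrative numbers, not estimates from this comment):

```python
# Illustrative odds-form Bayesian update: a ~5% prior plus four observations,
# each treated as modest evidence with a likelihood ratio of 1.5.

def update(prior_prob, likelihood_ratios):
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr                    # multiply prior odds by each observation's LR
    return odds / (1 + odds)          # convert back to a probability

print(f"{update(0.05, [1.5, 1.5, 1.5, 1.5]):.0%}")  # ~21%
```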

For AI, these seem relevant: when you have a plausible physical substrate, have better predictive models for what the brain does (connectionism & refinements seem plausible & have been fairly successful over the last few decades despite being unpopular initially), start to see how comparably long-evolved mechanisms work & duplicate some of them, reach super-human performance on some tasks historically considered hard/ requiring great intelligence, have physical substrate reaching scales that seem comparable to the brain, etc.

In any case, these are getting a bit far from my original thought, which was just which of those situations w.r.t. AGI does Will believe & some intuition for why


And finally, in terms of my personal values, the top priority is to avoid risks of astronomical suffering (s-risks)

I'd usually want to modify my definition of "well" to "preserves the probability of getting to the best possible futures AND doesn't increase the probability of the worst possible futures", but that's a bit more verbose.

Comment by nonn on Ask Me Anything! · 2019-08-21T19:20:14.438Z · EA · GW

If you believe "<1% X", that implies ">99% ¬X", so you should believe that too. But if you think >99% ¬X seems too confident, then you should modus tollens and moderate your <1% X belief. When other people give e.g. 30% X, that only implies 70% ¬X, which seems more justifiable to me.

I use AGI as an example just because if it happens, it seems more obviously transformative & existential than biorisk, where it's harder to reason about whether people survive. And because Will's views seem to diverge quite strongly from average or median predictions in the ML community, not that I'd read all too much into that. Perhaps further, many people in the EA community believe there's good reason to think those predictions are too conservative if anything, and have arguments for significant probability of AGI in the next couple decades, let alone century.

Since Will's implied belief is >99% no x-risk this century, this either means AGI won't happen, or that it has a very high probability of going well (getting or preserving most of the possible value in the future, which seems the most useful definition of existential for EA purposes). That's at first glance of course - I'm not asking for the whole book, just an intuition for how you get such high confidence in ¬X, especially when it seems to me there's some plausible evidence for X.

Comment by nonn on Ask Me Anything! · 2019-08-21T05:56:47.332Z · EA · GW

This is just a first impression, but I'm curious about what seems a crucial point - that your beliefs seem to imply extremely high confidence in either general AI not happening this century, or AGI going 'well' by default. I'm very curious what guides your intuition there, or whether there's some other way that first-pass impression is wrong.

I'm curious about similar arguments that apply to bio & other plausible x-risks too, given what's implied by low x-risk credence

Comment by nonn on Why I prioritize moral circle expansion over reducing extinction risk through artificial intelligence alignment · 2018-02-23T20:36:18.162Z · EA · GW

I think there’s a significant[8] chance that the moral circle will fail to expand to reach all sentient beings, such as artificial/small/weird minds (e.g. a sophisticated computer program used to mine asteroids, but one that doesn’t have the normal features of sentient minds like facial expressions). In other words, I think there’s a significant chance that powerful beings in the far future will have low willingness to pay for the welfare of many of the small/weird minds in the future.[9]

I think it’s likely that the powerful beings in the far future (analogous to humans as the powerful beings on Earth in 2018) will use large numbers of less powerful sentient beings

So I'm curious for your thoughts. I see this concern about "incidental suffering of worker-agents" stated frequently, and it may well apply in many future scenarios. However, it doesn't seem to be a crucial consideration, specifically because I care about small/weird minds with non-complex experiences (your first consideration).

Caring about small minds seems to imply that "Opportunity Cost/Lost Risks" are the dominant consideration - if small minds have moral value comparable to large minds, then the largest-EV risk is not optimizing for small minds and instead wasting resources on large minds with complex/expensive experiences (or on something even less efficient, like biological beings, any non-total-consequentialist view, etc.). That would lose you many orders of magnitude of optimized happiness, and this loss would be worse than the other scenarios' aggregate incidental suffering. Even if this inefficient moral position merely reduced optimized happiness by 10% - far less than an order of magnitude - it would dominate incidental suffering, even if the incidental-suffering scenarios were significantly more probable. And even if you very heavily weight suffering compared to happiness, my math still suggests this conclusion survives by a significant margin.
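
A toy version of that comparison, just to show how the pieces combine (every number below is an illustrative placeholder, and the conclusion is driven entirely by the assumption that aggregate incidental suffering is small relative to the forgone optimized value, which is of course the contested input):

```python
# Illustrative expected-value comparison: opportunity cost of inefficient values
# vs. incidental suffering. All quantities are made-up placeholders, expressed in
# the same arbitrary units of value.

hedonium_value = 1.0           # value of a future optimized for small/simple minds
inefficiency_loss = 0.10       # "merely reduced optimized happiness by 10%"
incidental_suffering = 0.01    # assumed small relative to the optimized total
suffering_weight = 3.0         # weight suffering 3x happiness (suffering-leaning view)

p_inefficient_values = 0.5     # chance the future wastes value on complex/expensive minds
p_incidental_suffering = 0.9   # incidental-suffering scenarios assumed more probable

ev_opportunity_cost = p_inefficient_values * inefficiency_loss * hedonium_value
ev_suffering = p_incidental_suffering * suffering_weight * incidental_suffering

print(ev_opportunity_cost, ev_suffering)   # roughly 0.05 vs 0.027 here
```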

Also note that Moral Circle Expansion is relevant conditional on solving the alignment problem, so we're in the set of worlds where the alignment problem was actually solved in some way (humanity's values are somewhat intact). So, the risk is that whatever-we're-optimizing-the-future-for is far less efficient than ideal hedonium could have been, because we're wasting it on complex minds, experiences that require lots of material input, or other not-efficiently-value-creating things. "Oh, what might have been", etc. Note this still says values spreading might be very important, but I think this version has a slightly different flavor that implies somewhat different actions. Thoughts?