Comments
A question about this--do you work at the University of Canterbury now, or will you be supervising these students remotely?
See here: https://80000hours.org/podcast/episodes/david-wallace-many-worlds-theory-of-quantum-mechanics/
Basically, you can treat the fraction of worlds as equivalent to probability, so there is little apparent need to change anything if MWI turns out to be true.
This is not about us. A bunch of retail investors just completely lost their shirts due to, I don't know what exactly, but let's say "apparent bad behavior". If possible, we should try to provide some kind of support to them.
Is there any way that could possibly be true, given the events of the last few days?
Jesus. I hope it doesn't come to that in this case.
Or we should try to quickly move any money made in crypto into the S&P. I don't think this is about patient vs. urgent philanthropy per se.
What happens if the money was donated to a charity that is subject to clawbacks, but the charity then spent the money? Do they try to claw it back from the suppliers or employees or whoever? Can it trigger a cascade of bankruptcies?
Got any better ideas?
My understanding is Dustin has already diversified out of Meta to some large degree (though I have no insider information).
Stipulate, for the sake of the argument, that Lukas et al. actually disagree with the doomers about various points. What would follow from that?
Good point, I didn't think of that.
I strongly agree with your comment, but I want to point out in defense of this trend that nuclear weapons policy seems to be unusually insulated from public input and unusually likely to be highly sensitive/not good to discuss in public.
I think I am missing something here. Does the book purport to mention every collapse? Why does WWOTF need to mention the Bronze Age Collapse?
Maybe they should, maybe they shouldn't, but I don't think Gavin was saying such things should be encouraged. I think he was saying that there should be some kind of response if such leaks happen.
I appreciated the link to the hardscrapple frontier, which I had not heard of, FWIW.
Is the argument here that nobody should criticize effective altruism on websites that are not EA forum, because then outsiders might get a negative impression? And if so, what kind of impression would outsiders get if they knew about this proposed rule?
An interesting thought, but I think this overlooks the fact that wealth is heavy-tailed. So it is (probably) higher EV to have someone with a 10% shot at their tech startup getting huge than one person with a 100% chance of running a successful plumbing company.
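To make the expected-value comparison concrete, here is a minimal sketch; the payoff figures are illustrative assumptions, not claims about real startups or plumbing businesses:

```python
# Hypothetical payoffs purely to illustrate the heavy-tailed EV point.
p_startup, startup_payoff = 0.10, 100_000_000  # 10% shot at a huge exit (assumed)
p_plumber, plumber_payoff = 1.00, 2_000_000    # sure thing, modest business (assumed)

ev_startup = p_startup * startup_payoff  # 10,000,000
ev_plumber = p_plumber * plumber_payoff  #  2,000,000

print(f"EV(startup):  ${ev_startup:,.0f}")
print(f"EV(plumbing): ${ev_plumber:,.0f}")
```

Even with a 90% chance of total failure, the heavy-tailed upside makes the startup the higher-EV option under these assumed numbers.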
This is a great comment; you may want to consider making it a top-level post on the Forum so more people will see it.
A lot of people got into EA after reading a book, and a lot of people find new topics to investigate by reading newspaper articles.
The content of this comment seems reasonable to me. How is it "LARPing"?
I meant existential risk in the broad sense, not just extinction. The first graph is supposed to represent a specific worldview where the relevant form of existential risk is extinction, and extinction is reasonably likely. In particular I had Eliezer Yudkowsky's views about AI in mind. (But I decided to draw a graph with the transition around 50% rather than his 99% or so, because I thought it would be clearer.) One could certainly draw many more graphs, or change the descriptions of the existing graphs, without representing everyone's thoughts on the function mapping percentile performance to total realized value.
Thanks for explaining how you think about this issue; I will have to consider it more. My first thought is that I'm not utilitarian enough to say that a universe full of happy biological beings is ~0.0% as good as if they were digital, even conditional on biological embodiment being the wrong decision. But maybe I would agree on other possible disjunctive traps.
FWIW, when I first saw that I wondered "what's the difference between the A-aesthetic and the B-aesthetic?" It might be clearer to say "non-aesthetic" or just something like "no frills".
Thanks for this post. I wonder if it would be good to somehow target different subcultures outside of EA with messaging corresponding to their nearest-neighbor EA subculture. To some extent I guess this already happens, but maybe there is an advantage to explicitly thinking about it in these terms.
Why, if you don't mind me asking?
His name is Carrick Flynn, not Flynn Carrick.
Have his thoughts on the mathematical universe idea changed since he first put it forward?
A small thing, but citing a particular person seems less culty to me than saying "some well-respected figures think X because Y". Having a community orthodoxy seems like worse optics than valuing the opinions of specific named people.
What category would you put ideas like the unilateralist's curse or Bostrom's vulnerable world hypothesis in? They seem like philosophical theories to me, but not really moral theories (and I think they attract a disproportionate amount of criticism).
Can you be more specific about what this journalist wants to talk about? What do you mean by risk mitigation when traveling?
I don't share this view, and I agree that it is weird. But maybe the feeling behind it is something like: if I, personally, were in extreme poverty I would want people to prioritize getting me material help over mental health help. I imagine I would be kind of baffled and annoyed if some charity was giving me CBT books instead of food or malaria nets.
That's just a feeling though, and it doesn't rigorously answer any real cause prioritization question.
MacAskill (who I believe coined the term?) does not think that the present is the hinge of history. By contrast, I think the majority view among self-described longtermists is that it is. But the term unites everyone who cares about things that are expected to have large effects on the long-run future (including but not limited to existential risk).
I think the term's agnosticism about whether we live at the hinge of history and whether existential risk in the next few decades is high is a big reason for its popularity.
The original EA materials (at least the ones I first encountered in 2015 when I was getting into EA) promoted evidence-based charity, that is, making donations to causes with very solid evidence. But the formal definition of EA is equally or more consistent with hits-based charity: making donations with limited or equivocal evidence but large upside, with the expectation that you will eventually hit the jackpot.
I think the failure to separate and explain the difference between these things leads to a lot of understandable confusion and anger.
Thank you!