Comments
I did the summer fellowship last year and found it extremely useful for getting research experience, having space to think about x-risk questions with others who shared those interests, and making very valuable connections. I also found the fellowship very enjoyable.
Great post!
My experience with Atlas fellows (although there was substantial selection bias involved here) is that they're extremely high calibre.
I also think there's quite a lot of friction in getting LTFF funding - the main one being that it takes quite a long time to come through. I think there are quite large benefits to being able to unilaterally decide to do some project and have the funding immediately available to do it.
Yeah this seems right.
I think I don't understand the point you're making with your last sentence.
Yeah, I'm pretty sceptical of the judgement of experienced community builders on questions like the effect of different strategies on community epistemics. If I frame this as an intervention - "changing community building in x way will improve EA community epistemics" - I have a strong prior that it has no effect, because most interventions people try have no or small effects (see the famous graph of global health interventions).
I think the following are some examples of places where you'd think people would have good intuitions about what works well, but they don't:
- Parenting. We used to just systematically abuse children and think it was good for them (e.g. denying children the ability to see their parents in hospital). There's a really interesting passage in Invisible China where the authors describe loving grandparents deeply damaging the grandchildren they care for by not giving them enough stimulation as infants.
- Education. It's really, really hard to find education interventions which work in rich countries. It's also interesting that in the US there's lots of opposition from teachers to teaching phonics, despite it being one of the few rich-country education interventions with large effect sizes (although it's hard to judge how much of this is for self-interested reasons).
- I think it's unclear how well you'd expect people to do on the economics examples I gave. I probably would have expected people to do well with cash transfers, since in fact lots of people do get cash transfers (e.g. pensions, child benefits, inheritance), and to do ok with the minimum wage, since at least some fraction of people have a sense of how the place they work for hires people.
- Psychotherapy. We only got treatments that worked for specific mental health conditions (rather than to generally improve people's lives - I haven't read anything on this) other than mild-moderate depression when we started doing RCTs. I'm most familiar with OCD treatment specifically, and the current best practice was only developed in the late 60s.
I suppose I'm thinking of the example I gave, where someone I know who does selection for an important EA program didn't include questions about altruism because they thought the adverse selection effects were sufficiently bad.
Maybe - I meant to pick examples where I thought the consensus of economists was clear (in my mind it's very clearly the consensus that having a low minimum wage has no employment effects).
I completely stand by the minimum wage one - this was the standard model of how labour markets worked until something like the Shapiro-Stiglitz model (I think), and it's still the standard model for how input markets work; if you're writing a general equilibrium model you'll probably still have wage = marginal product of labour.
Meta-analyses find that the minimum wage doesn't increase unemployment until it reaches about 60% of the median wage https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/844350/impacts_of_minimum_wages_review_of_the_international_evidence_Arindrajit_Dube_web.pdf, and most economists don't agree that even a $15 an hour minimum wage would lead to substantial unemployment (although many are uncertain) https://www.igmchicago.org/surveys/15-minimum-wage/
I think one of my critiques of this is that I'm very sceptical that strong conclusions should be drawn from any individual's experiences and those of their friends. My current view is that we just have limited evidence for any model of what good and bad community building looks like, and the way to move forward is to try a wide range of stuff and do what seems to be working well.
I think I mostly disagree with your third paragraph. The assumptions I see here are:
- Not being very truth-seeking with new people will either select for people who aren't very critical or will make people who are critical into uncritical people
- This will have second-order effects on the wider community's epistemics, specifically in the direction of fewer critiques of EA ideas
I.e. it's not obvious to me that it makes EA community epistemics worse in the sense that EAs make worse decisions as a result of this.
Maybe these things are true or maybe they aren't. My experience has not been this (for context, I have been doing uni group community building for 2 years): the sorts of people who get excited about EA ideas and get involved are very smart, curious people who are very good critical thinkers.
But in the spirit of the post, what I'd want to see are some regressions - e.g. some measure of whether the average new EA at a uni group which doesn't do community building in a way that strongly promotes a kind of epistemic frankness is less critical of ideas in general than an appropriate reference class.
Like, currently I don't talk about animal welfare when first talking to people about EA because it's reliably the thing which puts the most people off. I think the first-order effect of this is very clear - more people come to stuff - and my guess is that there are ~no second-order effects. I want to see some systematic evidence that this would have bad second-order effects before I give up the clearly positive first-order one.
I agree - I think this second part isn't intuitive to most people. I was using 'intuitive' somewhat loosely to mean based on intuitions the person making the argument has.
Ok, this is me explaining part of my thought process.
I'm pretty sceptical of macroeconomic theory. I think we mostly don't understand how inflation works, DSGE models (the forefront of macroeconomic theory) mostly don't have very good predictive power, and we don't really understand how economic growth works, for instance. So even if someone shows me a new macro paper that proposes some new theory and attempts to empirically verify it with both micro and macro data, I'll shrug and think "eh, probably wrong".
But this is so radically far ahead epistemically of something like the evaporative cooling model. We have thousands of datapoints of macro data and tens (?) of millions of micro datapoints, macro models are actively used by commercial and central banks so they get actual feedback on their predictions, and they're still not very good. Even in microeconomics, where we have really a lot of data and a lot of quasi-random variation, we got something as basic as the effect of the minimum wage on unemployment wrong until we started doing good causal inference work, despite the minimum wage effect being predicted by a model which worked very well in other domains (i.e. supply and demand).
If, when I read an econ paper, I need high-quality causal inference to believe the theory it's offering me, and even thousands of datapoints aren't enough to properly specify and test a model, it's unclear to me why I should have a lower standard of evidence for other social science research. The evaporative cooling model isn't supported by:
- High-quality causal inference
- Any regression at all
- More than 4 data points
- In-depth case studies or ethnographies
- Regular application by practitioners who get good results using it
If I read a social science paper which didn't have any of these things I'd just ignore it - as it is I mostly ignore anything that doesn't have some combination of high-quality causal inference or a large literature of low-medium quality causal inference and observational studies reporting similar effects.
This is like a very hard version of this take - in practice, because we have to make decisions in the actual world, I use social science with less good empirical foundations - but I just have limited trust in that sort of thing. But man, even RCTs sometimes don't replicate or scale.
I agree pretty strongly with this. I think it especially matters since, in my cause prioritisation, the case for working on AI x-risk is much stronger than for other causes of x-risk even if the level of x-risk they posed were the same, because I'm not convinced that the expected value of the future conditional on avoiding bio and nuclear x-risk is positive. More generally, I think the things that are worth focusing on from a longtermist perspective, compared to just a "dying is bad" perspective, can look different within cause areas, especially AI. For instance, I think it makes governance work and avoiding multi-agent failures look much more important.
Strongly agree with this take. There's nothing stopping us from getting empirical data here, and I think we have no strong reason to expect our personal experiences to generalise, or that models we create that aren't theoretically or empirically grounded will be correct.
I'm extremely sceptical that the evaporative cooling model applies. As far as I'm aware its only empirical support is the three anecdotes in the original post. Almost all social science is wrong so I basically don't update at all on the model's predictions.
Anyone can apply - we can definitely accept people in the US, Canada, Ireland and EU countries - we're currently unsure about others.
This was very useful, thanks!
I think people should be very careful about promoting earning to give in light of this. It still seems true that, because capital is much more unequally distributed than income, if you're trying to earn to give you should be doing it by trying to increase the value of equity you hold in firms rather than by working a high-paying job. Wealth also seems to be distributed according to a power law, which further pushes towards a strategy of being extremely ambitious if one is earning to give.
I think it would be very bad if people who otherwise could do high-impact direct work switched to earning to give in investment banking, consulting or corporate law as a result of this. EA funding has not declined to the point where there is an immediate crisis where relatively small amounts of money from high-paying jobs are needed to keep the EA movement going - Dustin is worth somewhere between 5 and 10 billion, and Founders Pledge has 8.5bn committed (although substantially less than 100% of this will go to the highest-impact things).
Yes, thanks
Yeah, this is just about the constant-risk case. I probably should have said explicitly that it doesn't cover the time of perils, although the same mechanism with neglectedness should still apply.
Thanks! Fixed
Wow, that's really interesting - I'll look more deeply into that. It's definitely not what I've read happened, but at this point I think it's probably worth me reading the primary sources rather than relying on books.
I have no specific source saying explicitly that there wasn't a plan to use nuclear weapons in response to a tactical nuclear weapon. However, I do know what the decision-making structure for the use of nuclear weapons was. In a case where there hadn't been a decapitating strike on civilian administrators, the President was presented with plans from the SIOP (the US nuclear war plan), which were exclusively plans based around a strategy of destruction of the Communist bloc. Triggers for nuclear war weren't in the SIOP anywhere. When individual soldiers had tactical nuclear weapons their instructions weren't fixed - they could be instructed explicitly not to use tactical nukes; in general, though, the structure of the US armed forces was to let the commanding officer decide the most appropriate course of action in a given situation.
Second thing to note - tactical nukes were viewed as battlefield weapons by both sides. Neither viewed them as anything special because they were nuclear, in the sense that their use should engender an all-out attack.
So maybe I should clarify that by saying that there was no plan that required the use of tactical nuclear weapons in response to a Soviet use of them.
Probably the best single text on US nuclear war plans is The Bomb by Fred Kaplan.
Probably the best source on how tactical nukes were used is Command and Control by Eric Schlosser.
On the second one, I have a post here that gives the wider strategic context:
https://forum.effectivealtruism.org/posts/DxJSPyEAvuCMdwYWx/the-mystery-of-the-cuban-missile-crisis
But it's not clear to me how Berlin is relevant. It's relevant insofar as it's an important factor in why the crisis happened, but it's not clear to me why Berlin increased the chance of escalation into nuclear war beyond the fact that the Soviet response to a US invasion of Cuba could be to attempt to take Berlin.
Why does the China-India war matter here post Sino-Soviet split?
Thanks for your feedback! Unfortunately I am a smart junior person, so it looks like we know who'll be doing the copy editing
Yeah I think that's very reasonable
Yes!
I think three really good books are One Minute to Midnight, Nuclear Folly, and Gambling with Armageddon. Lots of other ones have shortish sections, but these three focus almost completely on the crisis.
It also deals with the issue from the same perspective I've presented here.
I think that there is something to the claim being made in the post, which is that longtermism as it currently is is mostly about increasing the number of people in the future living good lives. It seems genuinely true that most longtermists are prioritising creating happiness over reducing suffering. This is the key factor which pushes me towards longtermist s-risk work.
I think the key point here is that it is unusually easy to recruit EAs at uni compared to when they're at McKinsey. I think it's unclear a) whether going to McKinsey is among the best things for a student to do and b) how much less likely it is that an EA student goes to McKinsey. I think it's pretty unlikely that going to McKinsey is the best thing to do, but I also think that EA student groups have a relatively small effect on how often students go into elite corporate jobs (a bad thing from my perspective), at least in software engineering.
I'm obviously not speaking for Jessica here, but I think the reason the comparison is relevant is that the high spend by Goldman etc. suggests that spending a lot on recruitment at unis is effective.
If this is the case (which I think is also supported by the success of well-funded groups with full- or part-time organisers), and EA is in an adversarial relationship with these large firms (which I think is largely true), then it makes sense for EA to spend similar amounts of money trying to attract students.
The relevant comparison is then the value of the marginal student recruited versus malaria nets etc.
I'm going through this right now. There have just clearly been times, both as a group organiser and in my personal life, when I should have just spent/taken money and in hindsight would clearly have had higher impact, e.g. buying uni textbooks so I could study with less friction and get better grades.
I view India-Pakistan as the pair of nuclear-armed states most likely to have a nuclear exchange. Do you agree with this, and if so, what should this imply about our priorities in the nuclear space?
As long as China and Russia have nuclear weapons, do you think it's valuable for the US to maintain a nuclear arsenal? What about the UK and France?
So the model is more like: during the Russian revolution, for instance, there's a 50/50 chance that whichever leader came out of that is very strongly selected to have dark triad traits, but this is not the case for the contemporary CCP.
Yeah, seems plausible. 99:1 seems very, very strong. If it were 9:1, that means we're in a 1/1000 world; 1:2 means approx a 1/10^5 world. Yeah, I don't have a good enough knowledge of rulers before they gained close to absolute power to be able to evaluate that claim. Off the top of my head, Lenin and Prince Lvov (the latter led the provisional government after the February revolution) were not dark triady.
The definition of unstable also looks important here. If we count Stalin and Hitler, both of whom came to power during peacetime, then it seems like we should also count Soviet leaders who succeeded Stalin, CCP leaders who succeeded Mao, Bashar al-Assad, Pinochet, and Mussolini. A sanity check from that group makes it seem much more like 1:5 than 1:99. Deng was definitely not dark triad, nor Bashar; I don't know enough about the others, but they don't seem like it?
If we're only counting Mao, then the selection effect looks a lot stronger off the top of my head, but it should also probably be adjusted because the mean level of sadism seems likely to be much higher after a period of sustained fighting, given, for instance, the effect of prison guards becoming more sadistic over time, and violence generally being normalised.
Don't know enough about psychopathy or Machiavellianism.
It's also not completely clear to me that Stalin and Mao were in the top 10% for sadism at least. Both came from very poor peasant societies. I know that at least Russian peasant life in 1910 was unbelievably violent, and they regularly did things which we sort of can't imagine. My general knowledge of European peasant societies - e.g. crowds at public executions - makes me think that it's likely that the average Chinese peasant in 1910 would have scored very highly on sadism. If you look at the response of the Chinese police/army to the 1927 Communist insurgency, it was unbelievably cruel.
Makes screening for malicious actors seem worse and genetic selection seem better.
Apologies that this is so scattered.
I'm currently doing research on this! The big, big driver is age; income is pretty small comparatively, and the education effect goes away when you account for income and age. At least that's what I get from the raw Health Survey for England data lol.
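For what it's worth, here's a minimal sketch of the kind of comparison I mean, on synthetic data - the variable names, coding and effect sizes are invented purely for illustration and are not the actual Health Survey for England results:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data for illustration only - the real survey variables differ.
rng = np.random.default_rng(0)
n = 5_000
age = rng.uniform(18, 85, n)
education = rng.normal(13, 3, n) - 0.05 * (age - 50)           # older cohorts have less schooling
income = 20_000 + 1_500 * education + rng.normal(0, 5_000, n)   # education raises income
health = 90 - 0.4 * age + 0.0002 * income + rng.normal(0, 5, n) # no direct education effect
df = pd.DataFrame({"age": age, "education": education, "income": income, "health": health})

# Education looks protective on its own...
print(smf.ols("health ~ education", data=df).fit().params["education"])
# ...but the coefficient collapses once age and income are controlled for.
print(smf.ols("health ~ education + age + income", data=df).fit().params["education"])
```

With the real data you'd presumably also want survey weights and a more flexible treatment of age; this is just the shape of the comparison.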
It seems like a strange claim both that the atrocities committed by Hitler, Stalin and Mao were substantially more likely because they had dark triad traits, and that when doing genetic selection we're interested in removing the upper tail (in the article it was the top 1%). To take this somewhat naively, if we think that the Holocaust and Mao's and Stalin's terror-famines wouldn't have happened unless all three leaders exhibited dark tetrad traits in the top 1%, this implies we're living in a world that comes about with probability 1/10^6, i.e. 1 in a million, assuming the atrocities were independent events. This implies a need to come up with a better model.
Edit 2: this is also wrong. Assuming independence, the number of atrocities should be binomially distributed with p=1/100 and n = the number of leaders in authoritarian regimes with sufficiently high state capacity, or something. It should probably be a Markov chain model. (There's a rough sketch of the binomial version at the end of this comment.)
If we adjust the parameters to the top 10% and say that the atrocities were 10% more likely to happen if this condition is met, this implies we're living in a world that's come about with probability (p/P(Dark triad|Atrocity))^3, where p is the probability that the atrocity would have occurred without Hitler, Stalin and Mao having dark triad traits. The interpretation of P(Dark triad|Atrocity) is the probability that a leader has dark triad traits given they've committed an atrocity. If you have p as 0.25 and P(Dark|Atrocity) as 0.75, this means we're living in a 1/9 world, which is much more reasonable. But this makes the intervention look much less good.
Edit: the maths in this section is wrong because I did a 10% probability increase of p as 1.1*p rather than p having an elasticity of 0.1 with respect to the resources put into the intervention or something. I will edit this later.
Excluding 10% of the population from political power seems like a big ask. If the intervention reduced the probability of someone with dark triad traits coming to power (in a system where they could commit an atrocity) by 10%, which seems ambitious to me, this reduces the probability of an atrocity by 1% (if the above model is correct). Given that this requires excluding 10% of the population from political power, which I'd generously put at a 10% chance of happening, the EV of the intervention is reducing the probability of an atrocity by 0.1%. Although this would increase if the intervention could be used multiple times, which seems likely.
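A minimal sketch of the binomial framing from Edit 2, with p = 1/100 taken from that edit and the number of leaders made up purely for illustration:

```python
from math import comb

def prob_at_least(n_leaders: int, p: float, k: int) -> float:
    """P(at least k atrocities) if each leader independently has probability p
    of committing one - the simple binomial model, not the Markov chain version."""
    return 1 - sum(comb(n_leaders, i) * p**i * (1 - p)**(n_leaders - i) for i in range(k))

# Illustrative assumption: ~100 leaders of sufficiently high-state-capacity
# authoritarian regimes in the 20th century, each with p = 1/100.
print(prob_at_least(100, 0.01, 3))   # ≈ 0.08
```

Under these made-up numbers, seeing three atrocities isn't a particularly surprising observation, which is why the choice of model matters so much here.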
I definitely feel this as a student. I care a lot about my impact and I know intellectually that being really good at being a student is the best thing I can do for long-term impact. Emotionally, though, I find it hard knowing that the way I'm having my impact is so nebulous and also doesn't take very much work to do well.
I organise EA Warwick and we've had decent success so far with concepts workshops as an alternative to fellowships. They're much less of a time commitment for people, and after the concepts workshop people seem to be basically bought into EA and want to get involved more heavily. We've only done 3 this term so far, so we definitely don't know how this will turn out yet.
Thanks :)
Yes, I kind of did see this coming (although not in the US) and I've been working on a forum post for like a year and now I will finish it.
Yeah, I wrote it in Google Docs and then couldn't figure out how to transfer the del and suffixes to the forum.
I think this is correct and EA thinks about neglectedness wrong. I've been meaning to formalise this for a while and will do that now.
If preference utilitarianism is correct there may be no utility function that accurately describes the true value of things. This will be the case if people's preferences aren't continuous or aren't complete, for instance if they're expressed as a vector. This generalises to other forms of consequentialism that don't have a utility function baked in.
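The standard counterexample here is lexicographic preferences: they're complete and transitive but not continuous, and no real-valued utility function represents them (which is why Debreu's representation theorem needs the continuity assumption).

```latex
% Lexicographic preferences on \mathbb{R}^2:
(x_1, y_1) \succ (x_2, y_2) \iff x_1 > x_2 \ \text{or} \ \left( x_1 = x_2 \text{ and } y_1 > y_2 \right)
```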
A 6 line argument for AGI risk
(1) Sufficient intelligence has capabilities that are ultimately limited by physics and computability
(2) An AGI could be sufficiently intelligent that it's limited by physics and computability but humans can't be
(3) An AGI will come into existence
(4) If the AGI's goals aren't the same as humans', human goals will only be met for instrumental reasons and the AGI's goals will be met
(5) Meeting human goals won't be instrumentally useful in the long run for an unaligned AGI
(6) It is more morally valuable for human goals to be met than an AGI's goals
Thank you, those both look like exactly what I'm looking for
But thank you for replying - in hindsight my reply seems a bit dismissive :)
Not really, because that paper is essentially just making the consequentialist claim that axiological longtermism implies that the actions we should take are those which help the long-run future the most. The Good is still prior to the Right.
Hi Alex, the link isn't working
I'm worried about associating effective altruism and rationality closely in public. I think rationality is reasonably likely to make enemies. The existence of r/sneerclub is maybe the strongest evidence of this, but there's also the general dislike that lots of people have for Silicon Valley and ideas that have a very Silicon Valley feel to them. I'm unsure to what degree people hate Dominic Cummings because he's a rationality guy, but I think it's some evidence that rationality is good at making enemies. Similarly, the whole NY Times-Scott Alexander craziness makes me think there's the potential for lots of people to be really anti-rationality.
I think empirical claims can be discriminatory. I was struggling with how to think about this for a while, but I think I've come to two conclusions. The first way I think that empirical claims can be discriminatory is if they express discriminatory claims with no evidence, and people refuse to change their beliefs based on evidence. I think the other way they can be discriminatory is when talking about the definitions of socially constructed concepts, where we can, in some sense and in some contexts, decide what is true.
I think the relevant split is between people who have different standards and different preferences for enforcing discourse norms. The ideal-type position on the SJ side is that a significant number of claims relating to certain protected characteristics are beyond the pale and should be subject to strict social sanctions. The Facebook group seems to be on the other side of this divide.
I think using Bayesian regret misses a number of important things.
It's somewhat unclear if it means utility in the sense of a function that maps preference relations to real numbers, or utility in the axiological sense. If it's the former sense, then I think it misses a number of very important things. The first is that preferences are changed by the political process. The second is that people have stable preferences for terrible things like capital punishment.
If it means it in the axiological sense, then I don't think we have strong reason to believe that how people vote will be closely related to it, and I think we have reason to believe it will differ systematically. This also makes it vulnerable to some people having terrible outcomes.
Lots of what I'm worried about with elected leaders are negative externalities. For instance, quite plausibly the main reasons Trump was bad were his opposition to action on climate change and his rejection of democratic norms. The former mostly harms people in other countries and future generations, and the latter mostly future generations (and probably people in other countries more than Americans, although that's not obviously true).
It also doesn't account for the dynamic effects of parties changing their platforms. My claim is that the Overton window is real and important.
I think that having strong political parties which the electoral system protects is good for stopping these things in rich democracies, because I think the gatekeepers will systematically support the system that put them in power. I also think the set of policies the elite support is better in the axiological sense than those supported by the voting population. The catch here is that the US has weak political parties that are supported by the electoral system.