I do think there are some cases where there isn't a clear line between what you call "marketing" and "skilling up."
If I do the "menial operations work" of figuring out how to easily get people to go to an EA conference, is that "marketing" or "skilling up"? It depends; if my goal is to do technical research only, then it probably isn't a useful skill, but operations is a very useful skill that you can build while doing EA community building.
If I know a group organizer has done the gruntwork of operations, I know that they can handle work that may not be that intellectually stimulating (regardless of the kind of work). I know that they are highly conscientious and able to not let tasks slip through the cracks. These are extremely useful traits in anyone. Of course, group organizing isn't the only way you can get these skills, but it's a pretty good one.
I wasn't intending to single out you or any specific person when asking that question. More that the community overall seems to have responded differently (judging by up/downvotes). Because different people see different posts, it's hardly a controlled experiment, so it could have been chance who happened to see each post first and made the first impression.
Somebody writes about an issue that happens to be a popular mainstream cause and asks, "how can I be most effective at doing good, given that I want to work specifically on this cause?"
I'm not saying the two issues are remotely equivalent. Obviously, to argue "this should be an EA cause area" would require very different arguments, and one might be much stronger than the other. With Ukraine, maybe you could justify it as being adjacent to nuclear risk, but the post wasn't talking about nuclear risk. Maybe close to being about preventing great power conflict, but the post wasn't talking about that, either. So, like this post, it is outside of the "standard" EA cause areas.
This comment seems to imply that if somebody posts about a cause outside the "standard" cause areas, they need to justify why working on it would be better than working on other cause areas; they cannot "leave that exercise to the reader." The first paragraph of the comment makes a meta-level point suggesting people shouldn't even post about an issue and let readers debate it in the comments (which, in fairness, the author of this post did not do: they explicitly asked for the cause not to be debated in the comments, though only after this comment was written). Instead, the author themselves must make a case for the object-level merits of the cause.
It seems others might agree, given that this comment has more karma than the original post (edit: this may or may not be currently true, but it was true at the time of this comment). If people on the forum hold these beliefs about meta-level discussion norms, then I ask: why apply them to abortion and not to Ukraine?
I strongly suspect that the answer is that people are letting their object-level opinions of issues subtly motivate their meta-level opinions of discussion norms. I'd rather that not happen.
I think it's essential to ask some questions first:
Why do people hold these views? (Is it just their personality, or did somebody in this community do something wrong?)
Is there any truth to these views? (As can be seen here, anti-AI safety views are quite varied. For example, many are attacks on the communities that care about them rather than the object-level issues.)
Does it even matter what these particular people think? (If not, then leave them be.)
Only then should one even consider engaging in outreach or efforts to improve optics.
Wanted to make a very small comment on a very small part of this post.
An assistant professor in AI wants to have several PhDs funded. Hearing about the abundance of funding for AI safety research, he drafts a grant proposal arguing why the research topic his group would be working on anyway helps not only with AI capabilities, but also with AI alignment. In the process he convinces himself this is the case, and as a next step convinces some of his students.
Yes, this certainly might be an issue! This particular issue can be mitigated by having funders do lots of grant followups to make sure that differential progress in safety, rather than capabilities, is achieved.
X-Risk Analysis by Dan Hendrycks and Mantas Mazeika provides a good roadmap for doing this. There are also some details in this post (edit since my connection may not have been obvious: I work with Dan and I'm an author of the second post).
Curious why people are downvoting this? If it's some substantive criticism of the work I'd be interested in hearing it.
If it's just because it's not very thought through, then what do you think the "not front page" function of the forum is for? (This might sound accusatory but I mean it genuinely).
One of the reasons I posted was because I wanted to hear thoughts/criticisms of the work overall, since I felt I didn't have a good context. Or maybe to find somebody who knew it better. But downvotes don't help with this.
This reminds me of Adorno and Horkheimer's Dialectic of Enlightenment, which argues, for some of the same reasons you do, that "Enlightenment is totalitarian." A passage that feels particularly relevant:
For the Enlightenment, whatever does not conform to the rule of computation and utility is suspect.
They would probably say "alienation" rather than "externalization," but have some of the same criticisms.
(I don't endorse the Frankfurt School or critical theory. I just wanted to note the similarities.)
One thing to consider is moral and epistemic uncertainty. The EA community already does this to some extent, for instance with MacAskill's Moral Uncertainty, Ord's Moral Parliament, the unilateralist's curse, etc., but there is an argument that it could be taken more seriously.
This is a good point which I don't think I considered enough. This post describes this somewhat.
I do think the signal for which actions are best to take has to come from somewhere. You seem to be suggesting the signal can't come from the decisionmaker at all since people make decisions before thinking about them. I think that's possible, but I still think there's at least some component of people thinking clearly about their decision, even if what they're actually doing is trying to emulate what those around them would think.
We do want to generate actual signal for what is best, and maybe we can do this somewhat by seriously thinking about things, even if there is certainly a component of motivated reasoning no matter what.
A leaderboard on the forum, ranking users by (some EA organization's estimate of) their personal impact could give rise to a whole bunch of QALYs.
If this estimate is based on social evaluations, won't the people making those evaluations have the same problem with motivated reasoning? It's not clear this is a better source of signal for which actions are best for individuals.
If signal can never truly come from subjective evaluation, it seems like it wouldn't be solved by moving to social evaluation. One thing that would seem difficult would be concrete, measurable metrics, but this seems way harder in some fields than others.
Yes, people will always have motivated reasoning, for essentially every explanation of their actions they give. That being said, I expect it to be weaker for the small set of things people actually think about deeply, rather than things they're asked to explain after the fact that they didn't think about at all. Though I could be wrong about this expectation.
EA groups often get criticized by university students for "not doing anything." The answer usually given (which I think is mostly correct!) is that the vast majority of your impact will come from your career, and university is about gaining the skills you need to be able to do that. I usually say that EA will help you make an impact throughout your life, including after you leave college; the actions people usually think of as "doing things" in college (like volunteering), though they may be admirable, don't.
Which is why I find it strange that the post doesn't mention the possibility of becoming a lifeguard.
In this story, the lifeguards aren't noticing. Maybe they're complacent. Maybe they don't care about their jobs very much. Maybe they just aren't very good at noticing. Maybe they aren't actually lifeguards at all, and they just pretend to be lifeguards. Maybe the entire concept of "lifeguarding" is just a farce.
But if it's really just that they aren't noticing, and you are noticing, you should think about whether it really makes sense to jump into the water and start saving children. Yes, the children are drowning, but no, you aren't qualified to save them. You don't know how to swim that well, you don't know how to carry children out of the water, and you certainly don't know how to do CPR. If you really want to save lives, go get some lifeguard training and come back and save far more children.
But maybe the children are dying now, and this is the only time they're dying, so once you become a lifeguard it will be too late to do anything. Then go try saving children now!
Or maybe going to lifeguard school will destroy your ability to notice drowning children. In that case, maybe you should try to invent lifeguarding from scratch.
But unless all expertise is useless and worthless, which it might be in some cases, it's at least worth considering whether you should be focused on becoming a good lifeguard.
This is the third time I've seen a suggestion like this, and antitrust law is always brought up. I feel like maybe it's worth a post that just says "no, you can't coordinate salaries/hiring practices/etc., here's why" since that would be helpful for the general EA population to know.
To me, short timelines would mean the crunch in movement building was in the past.
It's also really not obvious when exactly "crunch time" would be. 10 years before AGI? 30 years?
If AGI is in five years I expect movement building among undergrads to not matter at all. If it's in ten years maybe you could say "movement building has almost run its course," but I still think "crunch time" would probably be in the past.
Edit: I'm referring to undergrad movement building here. Talking to tech executives, policymakers, existing ML researchers etc. would have a different timeline.
The terminology around AI (AI, ML, DL, RL) is a bit confused sometimes. You're correct that deep reinforcement learning does indeed use deep neural nets, so it could be considered a part of deep learning. However, colloquially deep learning is often taken to mean the parts that aren't RL (so supervised, unsupervised, and self-supervised deep learning). RL is pretty qualitatively different from those in the way it is trained, so it makes sense that there would be a different term, but it can create confusion.
Yes, but please note this on your application. In general, short periods of unavailability are fine, but we won't give any extensions for them so you will likely have to complete the material at an accelerated pace at the times when you are available.
Yes, it's possible that would be better (though I can see pros and cons to both approaches). I just saw a need and wanted to fill it, and the people I talked to about this idea beforehand seemed generally happy about it (none suggested this idea which I agree could work!).
That being said, I'm not attached to it. If you think this would be better and people on the slack seem to agree then I wouldn't be opposed to shutting down the slack.
I think it's easier than it might seem to do something net negative, even ignoring opportunity cost. For example: actively competing with some other, better project; intervening in politics or policy in the wrong way; or creating a negative culture shift in the overall ecosystem.
Besides, I don't think the attitude that our primary problem is spending down the money is prudent. This is putting the cart before the horse, and as Habryka said might lead to people asking "how can I spend money quick?" rather than "how can I ambitiously do good?" EA certainly has a lot of money, but I think people underestimate how fast $50 billion can disappear if it's mismanaged (see, for an extreme example, Enron).
I thought this comment was valuable and it's also a concern I have.
It makes me wonder if some of the "original EA norms", like donating a substantial proportion of income or becoming vegan, might still be quite important to build trust, even as they seem less important in the grand scheme of things (mostly, the increase in the proportion of people believing in longtermism). This post makes a case for signalling.
It also seems to increase the importance of vetting people in somewhat creative ways. For instance, did they demonstrate altruistic things before they knew there was lots of money in EA? I know EAs who spent a lot of their childhoods volunteering, told their families to stop giving them birthday presents and instead donate to charities, became vegan at a young age at their own initiative, were interested in utilitarianism very young, adopted certain prosocial beliefs their communities didn't have, etc. When somebody did such things long before it was "cool" or they knew there was anything in it for them, this demonstrates something, even if they didn't become involved with EA until it might help their self-interest. At least until we have Silicon Valley parents making sure their children do all the maximally effective things starting at age 8.
It's kind of useful to consider an example, and the only example I can really give on the EA forum is myself. I went to one of my first EA events partially because I wanted a job, but I didn't know that there was so much money in EA until I was somewhat involved (also this was Fall 2019, so there was somewhat less money). I did some of the things I mentioned above when I was a kid (or at least, so I claim on the EA forum)! Would I trust me immediately if I met me? Eh, a bit but not a lot, partially because I'm one of the hundreds of undergrads somewhere near AI safety technical research and not (e.g.) an animal welfare person. It would be significantly easier if I'd gotten involved in 2015 and harder if I'd gotten involved in 2021.
Part of what this means is that we can't rely on trust so much anymore. We have to rely on cold, hard, accomplishments. It's harder, it's more work, it feels less warm and fuzzy, but it seems necessary in this second phase. This means we have to be better about evaluating accomplishments in ways that don't rely on social proof. I think this is easier in some fields (e.g. earning to give, distributing bednets) than others (e.g. policy), but we should try in all fields.
We'll consider this if there's enough demand for it! But especially for the latter option, it might make sense for students to work through the last three weeks on their own (ML Safety lectures will be public by then).
It will be mostly asynchronous, with a few hours of synchronous content per week. We also expect to have sections at different times for people in different timezones so there should be one that works for you.
I completely agree! Summer plans are often solidified quite early, so promoting earlier is better. I'm no stranger to the idea of doing things early!
In this case, we saw the need for this program only a few weeks ago and we're now trying to fill it. If we do run it again next year, we'll announce it earlier, though there's definitely still some benefit to having applications open fairly late (e.g. for people who may not have gotten other positions because they lacked ML knowledge).
This is tricky, because it's really an empirical claim for which we need empirical evidence, and I don't currently have such evidence about anyone's counterfactual choices. But I think even if you zoom in on the top 10% of a skewed distribution, it's still going to be skewed. Within the top 10% (or even 1%) of researchers or nonprofits, it's likely that only a small subset are making most of the impact.
I think it's true that "the higher we aim, the higher uncertainty we have" but you make it seem as if that uncertainty always washes out. I don't think it does. I think higher uncertainty often is an indicator that you might be able to make it into the tails. Consider the monetary EV of starting a really good startup or working at a tech company. A startup has more uncertainty, but that's because it creates the possibility of tail gains.
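To make the tail-gains point concrete, here's a toy expected-value sketch (the salary, exit probability, and exit size are all my own illustrative assumptions, not real data):

```python
import random

# Toy comparison of a steady salary vs. a high-variance startup.
# All figures are illustrative assumptions, not real data.
random.seed(0)

TECH_SALARY = 300_000          # steady annual comp, essentially no variance

def startup_outcome():
    # Assume a 1% chance of a $100M exit, otherwise roughly nothing.
    return 100_000_000 if random.random() < 0.01 else 10_000

n = 200_000
ev_startup = sum(startup_outcome() for _ in range(n)) / n

# Most startup draws are ~$10k, far below the salary, yet the expected
# value is dominated by the tail: ~0.01 * $100M is about $1M per draw.
```

The point is that the extra uncertainty isn't symmetric noise; it's almost entirely upside concentrated in a rare outcome.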
Anecdotally I think that certain choices I've made have changed the EV of my work by orders of magnitude. It's important to note that I didn't necessarily know this at the time, but I think it's true retrospectively. But I do agree it's not necessarily true in all cases.
This is an interesting post! I agree with most of what you write. But when I saw the graph, I was suspicious. The graph is nice, but the world is not.
I tried to create a similar graph to yours:
In this case, fun work is pretty close to impactful toll. In fact, its impact value is only about 30% less than that of impactful toll. This is definitely sizable, and creates some of the considerations above. But mostly, everywhere on the Pareto frontier seems like a pretty reasonable place to be.
But there's a problem: why is the graph so nice? To be more specific: why are the x and y axes so similarly scaled?
Why doesn't it look like this?
Here I just replaced x in the ellipse equation with log(x). It seems pretty intuitive that our impact would be power law distributed, with a small number of possible careers making up the vast majority of our possible impact. A lot of the time when people are trying to maximize something it ends up power law distributed (money donated, citations for researchers, lives saved, etc.). Multiplicative processes, as Thomas Kwa alluded to, will also make something power law distributed. This doesn't really look power law distributed quite yet though. Maybe I'll take the log again:
Now, fun work is 100x less impactful than impactful toll. That would be unfortunate. Maybe the entire Pareto frontier doesn't look so good anymore.
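To show the kind of transformation I mean, here's a rough numeric sketch. The elliptical frontier and the number of orders of magnitude hidden in the x-axis are my own assumptions for illustration:

```python
import math

# Toy Pareto frontier between impact (x) and fun (y): x^2 + y^2 = 1,
# with both axes normalized to a maximum of 1.
# All numbers are illustrative assumptions, not taken from the post.

def plotted_impact(fun_frac):
    """Frontier x-coordinate (impact as drawn on the graph) at a given
    fun level, expressed as a fraction of maximum fun."""
    return math.sqrt(1 - fun_frac ** 2)

toll = plotted_impact(0.0)   # "impactful toll": max plotted impact, 1.0
fun = plotted_impact(0.7)    # "fun work": ~0.71, i.e. ~30% less on the graph

# If the x-axis is secretly log10(impact) spanning several orders of
# magnitude, the same two frontier positions hide a much larger gap.
def real_impact(plotted, orders=4):
    return 10 ** (orders * (plotted - 1))   # plotted = 1 maps to real = 1

ratio_linear = toll / fun                          # ~1.4x on the nice graph
ratio_log = real_impact(toll) / real_impact(fun)   # ~14x with 4 hidden orders
```

The gap grows quickly with the number of hidden orders of magnitude: at seven orders, the same two frontier points are roughly 100x apart, which is the flavor of the double-log picture.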
I think this is a fatal flaw inherent in attempts to trade off impact against other personal factors when making choices. If your other personal factors are your ability to have fun, have good friendships, etc., you now have to claim that those things are also power-law distributed, and that your best life with respect to those other values is hundreds of times better than your impact-maximizing life. If you don't make that claim, then either you have to give your other values an extremely high weight compared with impact, or you have to let impact guide every decision.
In my view, the numbers for most people are probably pretty clear that impact should be the overriding factor. But I think there can be problems with thinking that way about everything. Some of those problems are instrumental: if you think impact is all that matters, you might try to get by on the bare minimum of self-care, and that's dangerous.
I think people should think in the frame of the original graph most of the time, because the graph is nice, and a reminder that you should be nice to yourself. If you had one of the other graphs in your head, you wouldn't really have any good reason to be nice to yourself that isn't arbitrary or purely instrumental.
But every so often, when you face down a new career decision with fresh eyes, it can help to remember that the world is not so nice.
Distill was never really about distillations in the sense this post is referring to. It was a journal that focused on having very high-quality presentation/visualizations. It's also no longer active: https://distill.pub/2021/distill-hiatus/
One thing you didn't mention is grant evaluation. I personally don't mind grants being given out somewhat quickly and freely at the beginning of a project. But before somebody asks for money again, their last grant should be evaluated to see whether it accomplished anything. My sense is that this is not common (or thorough) enough, even for bigger grants. As the movement gets bigger, this seems likely to lead to a lack of accountability.
Maybe more happens behind the scenes than I realize though, and there actually is a lot more evaluation than I think.
It's very difficult to take an arbitrary project that you're excited about for other reasons, and tweak it to "make it EA".
I think it also applies here (which, by the way, is one of the most thought-provoking and useful parts of this post). I think an alternative phrasing like the one below might make the point even more self-evident:
"It's very difficult to take an arbitrary project that you're excited about for other reasons, and tweak it to make it the most maximally impactful project you could be working on."
If the community has so much money, and we believe this is such an important problem, why can't we just hire/fund world experts in AI/ML to work on it?
Food for thought: LeCun and Hinton both hold academic positions in addition to their industry positions at Meta and Google, respectively. Yoshua Bengio is still in academia entirely. Do you think that tech companies haven't tried to buy every minute of their attention? Why are the three pioneers of deep learning not all in the highest-paying industry job? Clearly, they care about something more than this.