Comments

Comment by Asa Cooper Stickland on 20 Critiques of AI Safety That I Found on Twitter · 2022-06-23T16:32:48.242Z · EA · GW

Although to be clear, it's still nice to have a bunch of different critical perspectives! This post exposed me to some people I didn't know of.

Comment by Asa Cooper Stickland on 20 Critiques of AI Safety That I Found on Twitter · 2022-06-23T16:18:09.709Z · EA · GW

"Still, these types of tweets are influential, and are widely circulated among AI capabilities researchers." I'm kind of skeptical of this.

Outside of Giada Pistilli and Talia Ringer, I don't think these tweets would appear on the typical ML researcher's timeline; they seem closer to niche rationality/EA shitposting.

Whether the typical ML person would think alignment/AI x-risk is really dumb is a different question, and I don't really know the answer to that one!

Comment by Asa Cooper Stickland on Job, skills, and career capital suggestions for 18-year-old · 2022-06-23T11:10:15.580Z · EA · GW

Would it be possible to get a job tutoring high school students (or younger students)?

Maybe you could reach out to local EA orgs and see if they have any odd jobs they could pay you for?

Also, if it's at all financially possible for you, I would recommend self-study in whatever you're interested in (e.g. programming/math/social science, blah blah blah), with the hope of getting a job offering more career capital later on, rather than stressing out too much about getting a great job right now.

Comment by Asa Cooper Stickland on [Linkpost] Towards Ineffective Altruism · 2022-05-23T19:40:33.970Z · EA · GW

Well, AI Safety is strongly recommended by 80k, gets a lot of funding, and is seen as prestigious/important by people (the last one is just in my experience). And the funding and attention given to longtermism is increasing. So I think it's fair to criticize these aspects if you disagree with them, although I guess charitable criticism would note that global poverty etc. got a lot more attention in the beginning and is still well funded and well regarded by EA.

Comment by Asa Cooper Stickland on My GWWC donations: Switching from long- to near-termist opportunities? · 2022-04-24T14:09:55.493Z · EA · GW

I'm pretty skeptical of arguments from optics, unless you're doing marketing for a big organization or whatever. I just think it's really valuable to have a norm of telling people your true beliefs rather than some different version of your beliefs designed to appeal to the person you're speaking to. That way people get a more accurate idea of what a typical EA person thinks when they talk to them, and you're likely better able to defend your own beliefs than the optics-based ones if challenged. (The argument that there's so much funding in longtermism that the best opportunities are already funded is, I think, pretty separate from the optics one, and I don't have any strong opinions there.)

For me, I would donate to wherever you think the EV is highest, and if that turns out to be longtermism, think about a clear and non-jargony way to explain that to non-EA people, i.e. say something like 'I'm concerned about existential risks from things like nuclear war, future pandemics, and risks from emerging technologies like AI, so I donate some money to a fund trying to alleviate those risks' (rather than talking about the 10^100 humans who will be living across many galaxies, etc.). A nice side effect of having to explain your beliefs might be convincing some more people to go check out this 'longtermism' stuff!

Comment by Asa Cooper Stickland on How much current animal suffering does longtermism let us ignore? · 2022-04-22T20:01:02.182Z · EA · GW

EDIT: I made this comment assuming the comment I'm replying to is a critique of longtermism, but I'm no longer convinced that's the correct reading 😅 Here's the response anyway:

Well, it's not so much that longtermists ignore such suffering; it's that anyone choosing a priority in our current broken system (so any EA, regardless of their stance on longtermism) will end up ignoring (or at least not working on alleviating) many problems.

For example, the problem of adults with cancer in the US is undoubtedly tragic, but it is well understood and reasonably well funded by the government and charitable organizations; I would argue it fails the 'neglectedness' part of the traditional EA neglectedness, tractability, importance framework. Another example, people trapped in North Korea, would I think fail on tractability, given the lack of progress over the decades. I haven't thought about either of those particularly deeply and could be totally wrong, but this is just the traditional EA framework for prioritizing among different problems, even when those problems are heartbreaking to have to set aside.

Comment by Asa Cooper Stickland on The Future Fund’s Project Ideas Competition · 2022-03-07T13:31:55.707Z · EA · GW

AI Safety Academic Conference

Technical AI Safety

The idea is to fund and provide logistical/admin support for a reasonably large AI safety conference along the lines of NeurIPS etc. Academic conferences provide several benefits: 1) potentially increasing the prestige of an area and boosting the career capital of people who get papers accepted, 2) networking and sharing ideas, and 3) providing feedback on submitted papers and highlighting important/useful ones. This conference would be unusual in that the submitted work shares approximately the same concrete goal (avoiding risks from powerful AI). While traditional conferences might focus on scientific novelty and complicated/"cool" papers, this conference could have a particular focus on things like reproducibility or correctness of empirical results, peer support and mentorship, non-traditional research mediums (e.g. blog posts/notebooks), and encouraging authors to have a plausible story for why their work actually reduces risks from AI.