Posts

Maybe AI risk shouldn't affect your life plan all that much 2022-07-22T15:30:23.010Z
If you're unhappy, consider leaving 2022-07-20T15:49:39.807Z
Impact is very complicated 2022-05-22T04:24:13.768Z
Notes From a Pledger 2022-04-30T15:35:50.555Z
Editing Advice for EA Forum Users 2022-04-22T02:45:55.738Z
[Creative Writing Contest] Counting Beans 2021-10-19T05:51:38.903Z
[Creative Writing Contest] What You Do 2021-10-18T15:22:35.603Z
Why are fund payouts filtered as Community Posts? 2019-04-18T16:08:17.750Z
EA Global Lightning Talks (San Francisco 2018) 2018-11-30T21:14:38.503Z
The Righteous Mind book review 2018-09-05T18:08:21.617Z

Comments

Comment by Justis on Maybe AI risk shouldn't affect your life plan all that much · 2022-07-22T16:02:35.707Z · EA · GW

I think it varies with the merits of the underlying argument! But at the very least we should suspect an irrational presumption toward doom: for whatever reason(s), maybe evopsych or maybe purely memetic, doomy ideas have some kind of selection advantage that's worth offsetting.

Comment by Justis on If you're unhappy, consider leaving · 2022-07-22T00:09:12.236Z · EA · GW

Seconded!

Comment by Justis on If you're unhappy, consider leaving · 2022-07-20T20:59:43.631Z · EA · GW
  1. Are there other such posts? I guess because I - erm - left for a while, I may have missed them!
  2. Might be worth thinking about, sure. In my case I don't think there'd have been much.
  3. Maybe! I dunno. A and B have been pretty easy for me. And C, yeah! But to some degree I think that's the point - if you're unhappy with a life spent trying to make EA your social life + career + sense of meaning, try making a life where it's none of those things (and donate far more money to charity than anyone else you know thinks is reasonable, sure). Then, when you're fully stable and secure, you can consider returning with a clear mind.

    I suppose all this is quite n=1 though! It's just worked well for me. But mileage may vary.

Comment by Justis on Is the time crunch for AI Safety Movement Building now? · 2022-06-08T20:01:02.986Z · EA · GW

Some other considerations I think might be relevant:

  • Are there top labs/research outfits that are eager for top technical talent, and don't care that much how up to speed that talent is on AI safety in particular? If so, it seems like you could just attract e.g. math Olympiad finalists and give them a small amount of field-specific info to get them started. But if lots of AI safety-specific onboarding is required, that's pretty bad for movement building.
  • How deep is the well of untapped potential talent in various ways/various places? It seems like there's lots and lots of outreach at top US universities right now, arguably even too much for image reasons. There's probably not enough in e.g. India - it might be really fruitful to make a concerted effort to find the AI Safety Ramanujan. But maybe he ends up at Harvard anyway.
  • Looking at current top safety researchers, were they as a rule at it for several years before producing anything useful? My impression is that a lot of them came into the field pretty strong almost right away, or after just a year or so of spinning up. It wouldn't surprise me if many sufficiently smart people don't need long at all. But maybe I'm wrong!
  • The 'scaling up' step interests me. How much does this happen? How big of a scale is necessary?
  • Retention seems maybe relevant too. Very hard to predict how many group participants will stick with the field, and for how long. Introduces a lot of risk, though maybe not relevant for timelines per se.

Comment by Justis on Impact is very complicated · 2022-05-22T14:31:21.657Z · EA · GW

Yeah! This was actually the first post I tried to write. But it petered out a few times, so I approached it from a different angle and came up with the post above instead. I definitely agree that "robustness" should be seen as a pillar of EA - boringly overdetermined interventions just seem (to me) a lot more likely to survive repeated contact with reality, and I think as we've moved away from geeking out about RCTs we've lost some of that caution as a community.

Comment by Justis on Impact is very complicated · 2022-05-22T14:29:53.846Z · EA · GW

Yes, I agree it's a confused concept. But I think that same confused concept gets smuggled into conversations about "impact" quite often. 

It's also relevant for coordination: any time you can be the 100th person to join a group of 100 that's suddenly able to save lots of lives, there must first have been 99 people who coordinated on the bet that they'd be able to get you (or someone like you). But how did they make that bet?

Comment by Justis on EA is more than longtermism · 2022-05-03T17:08:19.060Z · EA · GW

My view is that more traditional philanthropic targets make for a much easier sell, so GiveWell-style messaging is going to reach/convince way more people than longtermist or x-risk messaging.

So you'll probably have way, way more people who are interested in EA on the global poverty and health side. I still only donate my pledge money to AMF, plus $100 a month extra to animal welfare, despite being somewhat involved in longtermist/x-risk stuff professionally (and pretty warm on these projects beyond my own involvement).

That being said, for some people EA is their primary social peer group, and those people also tend to be highly ambitious. That's a recipe for people trying really hard to figure out what's most prized, and orienting toward that. So there's lots of buzz around longtermism, despite the absolute numbers in the longtermist direction (people, clicks, eyeballs, money) being lower than those for more traditional, popular interventions.

Comment by Justis on It could be useful if someone ran a copyediting service · 2022-05-01T03:35:54.176Z · EA · GW

I've found the LessWrong editing service to be a pretty exciting way to provide copyediting, proofreading, feedback, etc. to lots and lots of individuals over the last several months. Perhaps an expansion of that model could be valuable? This month there were 32 posts I did copyediting for through the service, which is more than usual but not by much. That's way more than I would have reached even actively trying to promote myself, and I haven't had to do the promoting (or handle billing with a whole bunch of individuals). If there's more money for centralized funding of edits, I at least continue to have excess capacity and find the work a lot of fun!

Comment by Justis on It could be useful if someone ran a copyediting service · 2022-05-01T03:27:57.421Z · EA · GW

Thanks! Yeah, I think right now I do ~all the feedback requests, and my goal tends to be 24h turnaround time or less (though it does sometimes get closer to 48h).

Comment by Justis on How many EAs failed in high risk, high reward projects? · 2022-04-26T14:45:26.806Z · EA · GW

I've failed a few times. My social instincts tried to get me not to post this comment, in case it makes it more likely that I fail again, and failing hurts. I suspect there's really strong survivorship bias here.

Comment by Justis on How do you, personally, experience "EA motivation"? · 2019-08-19T21:08:15.010Z · EA · GW

When I was young I felt like "Gosh! When I'm older and have a job, I really should use my power as a globally rich person to help those who are much less well off, because that's obviously morally obligatory and this Peter Singer guy makes sense."

When I read Slate Star Codex's "Everything is Commensurable" I thought "Oh right, I suppose now's the time for that, I have more money than I need, and 10% seems about right."

It felt satisfying to be doing something definitive, to have an ironclad excuse for not freaking out about whatever the issue of the day is. "I'm doing my part, anyway."

Then I learned there was a community, was dazzled by how impressive they all were, overjoyed that they wanted to welcome me, and had a strong emotional reaction to want to be a part of it. It was more excitement about the people than the projects. They felt very much like "my people."

Now I don't feel much of anything about it (maybe a touch of pride or annoyance about losing so much money), but I still give my 10% to AMF monthly, and I don't plan to stop, so I guess the earlier surges of emotions did their job.

Comment by Justis on EA Global Lightning Talks (San Francisco 2018) · 2018-12-03T18:20:05.557Z · EA · GW

I also found his talk very interesting, though I craved something in a longer format. I could tell he had heftier models for situations where things cancel out less neatly, and I want to see them to check how robust they are! Looking forward to seeing what he's working on at the Global Priorities Institute.

Comment by Justis on Towards Better EA Career Advice · 2018-11-22T09:12:25.279Z · EA · GW

Test prep tutoring and nowhere-near-the-top programming are both very good for making a living without spending much energy. The Scott Alexander post you and lexande linked has a good description of the relevant considerations for test prep tutoring.

Living in a random non-hub city, programming jobs for the state pay only about $50k/yr to start, but they're easy to get (the trial task for one was basically just "make an HTML website with maybe a button that does something") and the expectations tend to be pretty low. I worked one of these as my main source of income until enough EA volunteering became EA freelancing, which became just barely sufficient to quit the day job and see what happened. I think this route is underappreciated, and the movement's central orgs seem to have a lot more capacity to pay for specific work than to hire full-time, higher-prestige employees.

Main downside of a low-stress programming day job is that being in an extremely unambitious environment for 40 hours a week can be psychologically uncomfortable.

Comment by Justis on Near-Term Effective Altruism Discord · 2018-09-10T15:15:28.386Z · EA · GW

+1. I'm in a very similar position - I make donations to near-term orgs, and am hungry for discussion of that kind. But because I sometimes do work for explicitly long-term and x-risk orgs, it's hard for me to be certain whether I qualify under the current wording.

Comment by Justis on Which piece got you more involved in EA? · 2018-09-06T17:23:11.356Z · EA · GW

The piece that got me to take the plunge and start giving 10% was Scott Alexander's Nobody Is Perfect, Everything Is Commensurable.

It convinced me singlehandedly to Try Giving, and I went to my first EA Global and took the pledge a couple years later. Before that, I'd pretty much not heard of EA as a movement at all.

Comment by Justis on CEA on community building, representativeness, and the EA Summit · 2018-08-19T21:23:45.959Z · EA · GW

I really like the Open Philanthropy Project's way of thinking about this problem:

https://www.openphilanthropy.org/blog/update-cause-prioritization-open-philanthropy

The short version (in my understanding):

  1. Split assumptions about the world/target metrics into distinct "buckets".
  2. Do allocation as a two-step process: intra-bucket on that bucket's metric, and inter-bucket separately using other sorts of heuristics (sketched below).

(If you like watching videos rather than reading blog posts, Holden also discussed this approach in his fireside chat at EAG 2018: San Francisco.)
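
For concreteness, here's a minimal Python sketch of that two-step structure. Everything in it (the bucket names, weights, scores, and the proportional-to-score intra-bucket rule) is invented for illustration, not taken from the post or from Open Philanthropy:

```python
# Hypothetical numbers throughout - the point is only the two-step shape.

# Step 2 (inter-bucket): share of the total budget each worldview bucket gets,
# set by heuristics that live outside any single bucket's metric.
bucket_weights = {"global_health": 0.5, "animal_welfare": 0.2, "longtermism": 0.3}

# Step 1 (intra-bucket): each bucket scores its opportunities on its OWN metric,
# so scores are only comparable within a bucket, never across buckets.
bucket_scores = {
    "global_health": {"AMF": 9.0, "deworming": 7.0},
    "animal_welfare": {"corporate_campaigns": 8.0},
    "longtermism": {"ai_safety_grants": 6.0, "biosecurity": 4.0},
}

def allocate(total_budget: float) -> dict:
    """Split the budget across buckets by weight, then within each bucket
    proportionally to that bucket's own scores."""
    allocation = {}
    for bucket, weight in bucket_weights.items():
        bucket_budget = total_budget * weight
        scores = bucket_scores[bucket]
        score_sum = sum(scores.values())
        for opportunity, score in scores.items():
            allocation[opportunity] = bucket_budget * score / score_sum
    return allocation

print(allocate(100_000))
```

The design point is just that the inter-bucket weights and the intra-bucket scores come from two separate reasoning processes, so you never have to compare, say, a bednet distribution and an x-risk grant on a single shared metric.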

Comment by Justis on CEA on community building, representativeness, and the EA Summit · 2018-08-19T21:08:31.649Z · EA · GW

Disclosure: I copyedited a draft of this post, and do contract work for CEA more generally.

I don't think that longtermism is a consensus view in the movement.

The 2017 EA Survey results had more people saying poverty was the top priority than AI and non-AI far-future work combined. Similarly, AMF and GiveWell received by far the most donations in 2016, according to that same survey. While I agree that someone can be a longtermist and still think practical concerns favor near-term work for now, I don't think that's a very compelling explanation for these survey results.

As a first-pass heuristic, I think EA leadership would guess correctly about community-held views more often if they held the belief that "the modal EA-identifying person cares most about solving suffering that is happening in the world right now."