Posts

Recruiters & "Non-credentialed" job seekers: Stellarworx is a non-EA database of skilled US job seekers without traditional degrees 2023-03-28T01:53:13.340Z
A_lark's Shortform 2021-12-19T05:35:21.673Z

Comments

Comment by A_lark on Announcing “Effective Dropouts” · 2022-09-27T22:50:18.314Z · EA · GW

I just came across this non-EA org on Reddit, for non-college-grads with skillz. May be of interest to this crowd: https://opportunityatwork.org/our-mission/

Comment by A_lark on Good Deeds: Under-recognition · 2022-09-27T22:42:38.182Z · EA · GW

I enjoyed reading these examples; thanks for writing them up.

Comment by A_lark on A_lark's Shortform · 2022-09-27T21:52:36.960Z · EA · GW

Resource for Recruiters and Hiring Managers: Stellarworx is a non-EA database of skilled job seekers without traditional degrees

EA orgs are looking for top talent. EA recruiters I talk with often acknowledge that university credentials are a useful proxy for employability, but they're obviously not the only available path for skilled, altruistic people. Recruiters would prefer to be able to easily identify strong candidates who haven't gone to college as well, but this is harder to do.

FYI: I just came across a database* and a network outside EA which might serve as a useful model or node for finding such people.

Opportunity@Work is a US non-profit working to help people overcome credentialist barriers to employment. They estimate that the US labor force includes 70M underemployed “STARs”: people who are Skilled Through Alternative Routes such as community college, on-the-job training, partial college, self-teaching, etc.

They offer a talent matching database*: https://stellarworx.org

Employers can search for specific skills among candidates, post jobs, and be matched with candidates via a matching algorithm.

I don’t expect this database to have many EAs in it (yet!). However, it could be worth a look when you have a role that doesn’t require pre-existing EA knowledge and when you’re specifically looking to diversify a candidate pool. (STAR candidates are disproportionately rural, veterans, and/or people of color).

Someone may also want to replicate this database idea for EA talent in the future.

*Disclaimer: I haven’t tested this database yet. It seems a bit new and potentially buggy, hence my posting in Shortform.

Comment by A_lark on EA culture is special; we should proceed with intentionality · 2022-05-24T08:04:13.427Z · EA · GW

+1 

I think GiveWell and OP's early commitment to transparency were admirable, if unusual and time-consuming. Not all groups will go as in-depth, of course, but I think it's usually good when EA leaders and emerging leaders are brave enough to practice their reasoning skills in the real world of their projects, and to show their thinking as it develops.

Comment by A_lark on Using TikTok to indoctrinate the masses to EA · 2022-05-04T02:11:14.079Z · EA · GW

I love the talksfast approach for this content. I feel like you’re delivering as close to Forum-level density as you can in a short TikTok, which is very satisfying to my brain. Hats off!

Comment by A_lark on AI Risk is like Terminator; Stop Saying it's Not · 2022-05-04T01:19:58.760Z · EA · GW

Well, I see Matt is writing about this now! https://www.slowboring.com/p/the-case-for-terminator-analogies?mc_cid=592b38485e&mc_eid=9fb990b6bf

Comment by A_lark on I burnt out at EAG. Let's talk about it. · 2022-04-23T21:42:53.726Z · EA · GW

Something like this has happened to me too.

One bit of advice I haven’t seen here yet: Consider making it an extremely high priority to work out vigorously daily or every other day, even while at a conference. A short HIIT (high-intensity interval training) session can do wonders for stabilizing one’s energy and increasing one’s tolerance to stress.

At one conference, a group of us met up to work out together in the morning. That was lovely! It encouraged some of us to go to bed at a reasonable time the night before too, which was an added bonus.

If you invite others to work out with you, you’ll get multiple benefits: the workout itself, productive social time, helping others who might not have gotten a workout in otherwise, and additional community-building cred.

I also agree that we all need to cancel sometimes! However, I find I also have to hold myself accountable for canceling only very, very rarely, lest I slip into a pattern of overcommitting, burning out, and then canceling on people. EAs are very kind about mental health, but that kind of pattern would still cost me opportunities. I have to try really hard to focus on prevention instead.

Comment by A_lark on A_lark's Shortform · 2022-04-21T04:08:15.110Z · EA · GW

Does the Forum have an auto-save feature for drafts?

Could it?

I made the foolish mistake of drafting a post recently, within the Forum. My computer died and I lost the draft :(

Comment by A_lark on A review of Our Final Warning: Six Degrees of Climate Emergency by Mark Lynas · 2022-04-16T13:23:31.697Z · EA · GW

Thanks for this! Did the author respond to your outreach?

Comment by A_lark on Free-spending EA might be a big problem for optics and epistemics · 2022-04-14T12:50:20.987Z · EA · GW

A lot of good points here.

A few thoughts on the benefits of a frugal community:

  • Norms of frugality can help people avoid some of the consumeristic rat race of broader society. I don’t want EAs caught up in “keeping up with the Joneses.” I want EAs keeping up with good ideas and good actions.
  • I think we want a community where someone who uses careful reasoning to take an impactful role for $60k/yr feels just as welcome in EA as someone who uses careful reasoning to take an impactful role for $160k/yr.

Comment by A_lark on Free-spending EA might be a big problem for optics and epistemics · 2022-04-14T12:14:15.529Z · EA · GW

I’m uncomfortable with this too, but more comfortable than I used to be.

Privileged people have a lot of power/leverage in the world. That leverage can be squandered, used for selfish means, or used for good.

If we think EAs have uniquely good ideas for identifying and solving neglected, pressing global problems, I want people with lots of leverage to learn from EA. The counterfactual is they use their leverage to do less altruistic or less effective things. I am willing to put money toward avoiding that.

Comment by A_lark on Free-spending EA might be a big problem for optics and epistemics · 2022-04-14T11:51:05.139Z · EA · GW

I agree that it’s possible to be unthinkingly frugal. It’s also possible to be unthinkingly spendy. Both seem bad, because they are unthinking. A solution would be to encourage EA groups to practice good thinking together, and to showcase careful thinking on these topics.

I like the idea of having early EA intro materials and university groups that teach BOTECs, cost-benefit analysis, and grappling carefully with spending decisions.

This kind of training, however, trades off against time spent learning about e.g. AI safety and biosecurity.
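As a toy illustration of the kind of BOTEC such intro materials might teach (all numbers below are hypothetical placeholders, not real estimates):

```python
# Toy back-of-the-envelope calculation (BOTEC): is renting a venue
# for a group retreat worth the cost? All figures are made up
# for illustration only.

venue_cost = 2_000            # USD, hypothetical venue rental
attendees = 30                # hypothetical headcount
value_per_attendee_hour = 20  # USD-equivalent of an attendee-hour, hypothetical
extra_productive_hours = 5    # hours gained per attendee vs. no venue, hypothetical

# Estimated benefit in USD-equivalent terms
benefit = attendees * extra_productive_hours * value_per_attendee_hour

print(f"Estimated benefit: ${benefit}, cost: ${venue_cost}")
print("Looks worth it" if benefit > venue_cost else "Doesn't look worth it")
```

The point of the exercise is less the arithmetic than the habit: naming your assumptions explicitly so others can challenge each number.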

Comment by A_lark on Free-spending EA might be a big problem for optics and epistemics · 2022-04-14T11:37:49.124Z · EA · GW

These points concern me too. When you say, “I hope they will be taken seriously,” I’m unsure who you have in mind. Taken seriously by whom?

Comment by A_lark on Free-spending EA might be a big problem for optics and epistemics · 2022-04-14T11:33:05.419Z · EA · GW

This is a valuable point.

Comment by A_lark on Save the Date: EAGxMars · 2022-04-01T22:46:22.987Z · EA · GW

ALLFED is presenting on this, pretty sure. Make sure to fully fill out your Swapcard and tell people you’re interested in this and I’m sure they’ll be happy to have additional one on ones about the topic.

Comment by A_lark on Save the Date: EAGxMars · 2022-04-01T22:44:53.892Z · EA · GW

This deadpan logistical footnote is my favorite; I actually lol’d: “It’s possible that attendees will plan on staying for longer than a weekend, given the journey. We might organise some retreats around the event.”

Comment by A_lark on Announcing What The Future Owes Us · 2022-04-01T22:40:42.638Z · EA · GW

I pre-ordered this next year and fully agree with Stephen Fry. So far, future people seem more caught up in the theory. I’m disappointed that we’re not seeing a lot of direct work from them yet, but I have some hope this book will move the needle.

Comment by A_lark on EA should taboo "EA should" · 2022-03-30T01:25:17.433Z · EA · GW

This also seems relevant: Shoulding at the Universe: https://m.youtube.com/watch?v=RpXyy2RLnEU

Comment by A_lark on EA should taboo "EA should" · 2022-03-30T01:16:33.519Z · EA · GW

I think another problem with “EAs should” is that it’s a phrase that ignores trade-offs. I’d like to see things rephrased further: “EA should be more geographically diverse” becomes “here’s why geographic diversity would help EA, and why it’s worth having less funding for [other project] in order to increase funding for this one.”

When I need to think of an example of a trade-off I’m willing to make, it tends to sharpen my thinking. I realize EA feels flush with cash right now, but alas, money, people, time, and attention are not infinite! Thinking about actual trade-offs keeps that fact clear.

(For me, writing this comment traded off against time spent reading another post. I endorse that choice.)

Comment by A_lark on Erin Braid's Shortform · 2022-03-29T02:35:35.095Z · EA · GW

Thank you for writing this. I think it points to an important risk/trade-off for people who take an EA path in their careers. EA can be really interdisciplinary, in a way that may not be legible outside EA. This is tricky for career planning.

Comment by A_lark on technicalities's Shortform · 2022-03-29T02:20:37.301Z · EA · GW

Love this!

Comment by A_lark on A_lark's Shortform · 2021-12-19T03:28:50.013Z · EA · GW

Media that is not about EA but could be

I may start a running list of fun, silly, biting, earnest, and/or unexpected ways to convey EA ideas. I'll add what I think each could be useful for. YMMV.

First entry:

  • Posit Guy, by JrEg: https://www.youtube.com/watch?v=MKe1m3aqsmw
    Summary: A guy posits something; others are offended, since positing anything is arrogant.
    Possible use: a (sarcastic) response to someone who thinks it's offensive to say that some things are better than others.

I may also start a running list of EA quotes in the media that I like:

Changing up the name of a thing over time is a good way to keep the concept fresh, so people pay attention to the meaning, not just the meme they've heard or read a million times before (IMO). I really like how this journalist turned this phrase around.

  • A good resource for people who care about climate change and are new to x-risk or think x-risk is weird. Possibly a good resource for policymakers, who the author addresses directly at the end.

Summary: Yglesias takes climate change seriously as a risk to humanity, and then quite smoothly pivots to x-risks from The Precipice. There's no mention of AI; he focuses on biosecurity and uses COVID as an example. Excerpt:

"So I commend the film and despite my quibbles with the McKay-Sirota theory of climate politics, I endorse McKay’s policy prescription. Let’s do it. But I think that art can sometimes get away from artists, and in this case the message is much bigger than climate change.

There is a range of often goofy-sounding threats to humanity that don’t track well onto our partisan cleavages or culture war battles other than that addressing them invariably involves some form of concerted action of the sort that conservatives tend to disparage. And this isn’t a coincidence. If existential threats were materializing all the time, we’d be dead and not streaming satirical films on Netflix. So the threats tend to sound “weird,” and if you talk a lot about them you’ll be “weird.” They don’t fit well into the grooves of ordinary political conflict because ordinary political conflict is about stuff that happens all the time.

So read Ord’s book “The Precipice” and find out all about it. Did you know that the Biological Weapons Convention has just four employees? I do because it’s in the book. Let’s maybe give them six?

For all that, though, I am genuinely shocked that the actual real-world emergence of SARS-Cov-2 has not caused more people to care about pandemic risk...

And we’re doing very little about this. ... If you went on TV to talk about comets, people would laugh at you. But people on TV are talking about the pandemic all the time. So why can’t we talk about forward-looking pandemic policy?

There are rumors in D.C. of an effort to put together an Omicron-focused supplemental appropriation.

That’s a perfectly reasonable idea. But it’s crucial for policymakers to see that the Omicron problem isn’t just — or even necessarily primarily — about Omicron. It’s about variants and mutation in general. We have a variant right now that pairs breakthrough capability with high transmissibility, but doesn’t seem to attack the lungs nearly as aggressively as Delta did. There’s no guarantee we’ll get so lucky with the next variant, and we need to be improving our capabilities and tackling the pandemic issue in a much more serious way."