Posts

AI Alignment YouTube Playlists 2022-05-09T21:31:47.968Z

Comments

Comment by jacquesthibs (jaythibs) on Training a GPT model on EA texts: what data? · 2022-06-10T01:14:34.589Z · EA · GW

I just scraped the EA Forum for you. It contains metadata too: authors, score, votes, date_published, text (post contents), and comments.

Here’s a link: https://drive.google.com/file/d/1XA71s2K4j89_N2x4EbTdVYANJ7X3P4ow/view?usp=drivesdk
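
In case it's useful, here's a rough sketch of how you might load and filter the dump once downloaded. This assumes the export is a single JSON file with one record per post and uses the field names listed above; the filename and thresholds are placeholders, so adjust for the actual format:

```python
import json

import pandas as pd

# Load the scraped EA Forum dump (assumes a JSON file containing a list of
# post records; adjust the path/parsing if the actual export is CSV or JSONL).
with open("ea_forum_posts.json", "r", encoding="utf-8") as f:
    posts = json.load(f)

df = pd.DataFrame(posts)

# Inspect the metadata fields mentioned above.
print(df[["authors", "score", "votes", "date_published"]].head())

# Example: keep reasonably well-received posts with non-trivial text for training.
filtered = df[(df["score"] >= 10) & (df["text"].str.len() > 500)]
print(f"{len(filtered)} posts after filtering")
```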

Good luck.

Note: We just released a big dataset of AI alignment texts. If you’d like to learn more about it, check out our post here: https://www.lesswrong.com/posts/FgjcHiWvADgsocE34/a-descriptive-not-prescriptive-overview-of-current-ai

Comment by jacquesthibs (jaythibs) on EA will likely get more attention soon · 2022-05-12T14:18:54.696Z · EA · GW

Great points; here’s my impression:

Meta-point: I am not suggesting we do anything about this or that we start insulting people and losing our temper (my comment is not intended to be prescriptive). That would be bad, and it is not the culture I want within EA. I do think it is, in general, the right call to avoid fanning the flames. However, my first comment is meant to point at something that is already happening: many people uninformed about EA are not being introduced to it in a fair and balanced way, and first impressions matter. And lastly, I did not mean to imply that Torres’ stuff was the worst we can expect. I am still reading Torres’ stuff with an open mind to take away the good criticism (while keeping the entire context in consideration).

Regarding the articles: his way of writing is to tell the general story in a way that makes it obvious he knows a lot about EA and was involved in the past, but then he bends the truth as much as possible so that the reader leaves with a misrepresentation of EA and of what EAs really believe and act on. Since this is a pattern in his writings, it’s hard not to believe he might be doing this because it gives him plausible deniability: what he’s saying is often not “wrong”, but it is bent to the point that the reader ends up inferring things that are false.

To me, in the case of his latest article, you could leave with the impression that Bostrom and MacAskill (as well as the entirety of EA) think that the whole world should stop spending any money on philanthropy that helps anyone in the present (and that if you do, it should go only to those who are privileged). The uninformed reader can leave with the impression that EA doesn’t even actually care about human lives. The way he writes gives him credibility with the uninformed because it’s not just an all-out attack where his intentions are obvious to the reader.

Whatever you want to call it, this does not seem good faith to me. I welcome criticism of EA and longtermism, but this is not criticism.

*This is a response to both of your comments.

Comment by jacquesthibs (jaythibs) on What are your recommendations for technical AI alignment podcasts? · 2022-05-12T03:46:48.377Z · EA · GW

Aside from those already mentioned:

- The Inside View has a couple of alignment-relevant episodes so far.
- These two episodes of Machine Learning Street Talk.
- FLI has some stuff.

Comment by jacquesthibs (jaythibs) on EA will likely get more attention soon · 2022-05-12T03:37:53.775Z · EA · GW

One thing that may backfire with the slow rollout of talking to journalists is that people who mean to write about EA in bad faith will be the ones at the top of the search results. If you search something like “ea longtermism”, you might find the bad-faith articles many of us are familiar with. I’m concerned we are setting ourselves up to give people unaware of EA a very bad-faith introduction.

Note: when I say “bad faith” here, it may just be a matter of semantics, a difference in how some people are interpreting the term. I think I might not have the vocabulary to articulate what I mean by “bad faith.” I actually agree with pretty much everything David has said in response to this comment.

Comment by jacquesthibs (jaythibs) on AI Alignment YouTube Playlists · 2022-05-10T02:06:24.367Z · EA · GW

Saving for potential future use. Thanks!

Comment by jacquesthibs (jaythibs) on Transcripts of interviews with AI researchers · 2022-05-09T17:30:13.723Z · EA · GW

Fantastic work. And thank you for transcribing!

Comment by jacquesthibs (jaythibs) on The AI Messiah · 2022-05-05T20:04:30.431Z · EA · GW

If anything, this is a claim that people have been bringing up on Twitter recently: the parallels between EA and religion. It’s certainly something we should be aware of since, even if having “blind faith” can be good in religion, we don’t seem to actually want that within EA. I could explain why I think AI risk is different from the messiah thing, but Rob Miles explains it well here: 

Given limited information (but information nonetheless), I think AI risk could potentially lead to serious harm or to none at all, and it’s worth hedging our bets on this cause area (among others). This feels different from choosing to have blind faith in a religion, but I can see why outsiders think this. Though we can be victims of post-rationalization, I think religious folks have reasons to believe in a religion. I think some people might gravitate towards AI risk as a way to feel more meaning in their lives (or something like that), but my impression is that this is not the norm. 

At least in my case, it’s like, “Damn, we have so many serious problems in the world and I want to help with them all, but I can’t. So I’ll focus on areas of personal fit and hedge my bets even though I’m not so sure about this AI thing, and donate what I can to these other serious issues.”

Comment by jacquesthibs (jaythibs) on 2021 AI Alignment Literature Review and Charity Comparison · 2022-03-09T22:38:24.514Z · EA · GW

Avast is telling me that the following link is malicious: 

Ding's China's Growing Influence over the Rules of the Digital Road describes China's approach to influencing technology standards, and suggests some policies the US might adopt.  #Policy

Comment by jacquesthibs (jaythibs) on I’m Offering Free Coaching for Software Developers in the EA community · 2022-02-23T16:04:14.215Z · EA · GW

Who am I? Until recently, I worked as a data scientist in the NLP space. I'm currently preparing for a new role, but unsure if I want to:

  1. Work as a machine learning engineer for a few years, then either transition to alignment, found a startup/org, or continue working as an ML engineer.
  2. Try to get a role as close to alignment as possible.

When I first approached Yonatan, I told him that my goal was to become "world-class in ml within 3 years" in order to make option 1 work. My plan involved improving my software engineering skills since I felt that was something I was lacking. I told him my plan for how to improve my skills, and he basically told me I was going about it all wrong. In the end, he said I should seek mentorship from someone who has an incentive to help me improve my programming skills (via weekly code reviews) ASAP. I had subconsciously avoided this approach because my experiences with mentorship were less than stellar. I took a role with the promise that I would be mentored and, in the end, I was the one doing all the mentoring...

Anyway, after a few conversations with Yonatan, it became clear that seeking mentorship would be at least 10X more effective than my initial plan.

Besides helping me change my approach to becoming a better programmer (and to everything else in general), our chats have helped me steer my career in a better direction. Yonatan is good at helping you avoid spouting vague, bad arguments for why you want to do x.

I'm still in the middle of the job search process, so I will update this comment in a few months once the dust has settled. For now, I need to go; things have changed recently and I need to get in touch with Yonatan for feedback. :)

I highly recommend this service. It is light-years ahead of a lot of other "advice" I've found online.

Comment by jacquesthibs (jaythibs) on Potential EA NYC Coworking Space · 2021-12-08T00:10:22.751Z · EA · GW

I'd be interested in this if I moved to NYC. I'm currently at the very beginning of preparing for interviews and I'm not sure where I'll land yet, so I won't answer the survey. Definitely a great idea, though. The decently-sized EA community in NYC is one of the reasons it's my top choice for a place to move to.

Comment by jacquesthibs (jaythibs) on AGI Safety Fundamentals curriculum and application · 2021-11-29T18:51:33.562Z · EA · GW

I just want to say that this course curriculum is amazing, and I really appreciate that you've made it public. I've already gone through about a dozen articles. I'm an ML engineer who wants to learn more about AGI safety, but it's unfortunately not a priority for me at the moment. That said, I will still likely go through the curriculum on my own time; since I'm currently focusing on the more technical aspects of building ML models, I won't be applying, as I can't strongly commit to the course. Again, I appreciate you making the curriculum public. As I slowly go through it, I might send some questions for clarification along the way. I hope that's ok. Thanks!

Comment by jacquesthibs (jaythibs) on Are there good EA projects for helping with COVID-19? · 2020-03-13T03:33:33.251Z · EA · GW

Reposting what I wrote on Facebook:

Young, low-risk adults doing grocery runs and other errands for high-risk older adults.

I wonder if it is effective and if there is a way to scale it. I talked to one EA from Italy, and they said a student union is also doing this there. I am looking into how to accomplish this in Canada.

We could potentially fund an app or something so that anyone who wants to volunteer can quickly take part and accept a request.

The request could be taken via telephone, for example, and then placed in the app.

Or we could create a simple process without any apps. A Google Sheet, maybe?

Dealing with superspreaders: it’s crucial to give guidelines and make sure the young volunteers are much less likely to catch the virus than the older adults they’re helping would be. I think this is doable.