Posts

Compendium of problems with RLHF 2023-01-30T08:48:08.329Z
Easy fixing Voting 2022-10-02T17:03:20.229Z
An app to assess the degree of neglectedness of each field of study 2022-06-28T23:23:27.452Z

Comments

Comment by Raphaël S (charbel-raphael-segerie) on The Importance of AI Alignment, explained in 5 points · 2023-03-03T07:10:41.041Z · EA · GW

Excellent breakdown, thanks

Comment by Raphaël S (charbel-raphael-segerie) on Summaries: Alignment Fundamentals Curriculum · 2022-09-26T23:18:32.257Z · EA · GW

This will be very valuable to me, thanks!

Comment by Raphaël S (charbel-raphael-segerie) on EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22’) · 2022-09-05T16:25:00.844Z · EA · GW

Thank you

Comment by Raphaël S (charbel-raphael-segerie) on List of AI safety courses and resources · 2022-08-15T10:23:06.412Z · EA · GW

Here is the curriculum of ML4Good, an AGI safety camp organized by EffiSciences to train prosaic alignment researchers.

The program contains many programming exercises.

Comment by Raphaël S (charbel-raphael-segerie) on Why does no one care about AI? · 2022-08-07T22:21:56.181Z · EA · GW

Babbling without any pruning:

- It is very difficult to imagine such an AI
- Currently, computers and AI in video games are stupid
- Collective representations such as Ex Machina or Terminator reinforce the idea that it is only fiction
- Understanding the orthogonality thesis requires a fine-grained epistemology to dissociate two concepts that are often linked in everyday life
- Loss of status for academics
- It is possible that designing a nanofactory is genuinely too complicated for an AI only slightly above human level. To understand the risks, we have to step back and look at history, understand what an exponential curve is, and recognize that a superintelligence may only arrive later on

Comment by Raphaël S (charbel-raphael-segerie) on The Possibility of Microorganism Suffering · 2022-08-07T21:39:48.940Z · EA · GW

I don't have the time to elaborate, but I find this post compelling

Comment by Raphaël S (charbel-raphael-segerie) on Longtermists Should Work on AI - There is No "AI Neutral" Scenario · 2022-08-07T17:55:25.392Z · EA · GW

I think in the EA community, the bottleneck is the supply of AI-safety-related jobs/projects; there is already a very strong desire to move into AI safety. The problem is not longtermists who are already working on something else. They should generally continue to do so, because the portfolio argument is compelling. The problem is the bootstrapping problem for people who want to start working in AI safety.

Even if you only value AI safety, having a good portfolio as a community is important and makes our community attractive. AI safety is still weird. FTX was originally only vegan, and only then shifted to long-term considerations. That's the trajectory of most people here. Being diverse is cool at least for that reason.

Comment by Raphaël S (charbel-raphael-segerie) on Longtermists Should Work on AI - There is No "AI Neutral" Scenario · 2022-08-07T17:13:52.917Z · EA · GW

In your view, what proportion of longtermists should work on AI?

Comment by Raphaël S (charbel-raphael-segerie) on Why EAs are skeptical about AI Safety · 2022-07-19T11:09:25.719Z · EA · GW

I can organize a session with my AI safety novice group to build the Kialo map.

Comment by Raphaël S (charbel-raphael-segerie) on Why EAs are skeptical about AI Safety · 2022-07-19T11:06:49.939Z · EA · GW

We could use Kialo, a web app, to map those points and their counterarguments.

Comment by Raphaël S (charbel-raphael-segerie) on Some unfun lessons I learned as a junior grantmaker · 2022-05-28T21:50:22.334Z · EA · GW

I think the elephant in the room is: "Why are they part-time?"

If making more grants is so important, either hire more people or work full-time, no? This is something I do not understand about the current status quo.

Comment by Raphaël S (charbel-raphael-segerie) on Hypertension is Extremely Important, Tractable, and Neglected · 2022-05-13T19:38:36.580Z · EA · GW

How did you discover this cause area? Is there a way to automatically browse all diseases with their associated DALYs and see the research effort devoted to each disease?

Comment by Raphaël S (charbel-raphael-segerie) on The Case for Non-Technical AI Safety/Alignment Growth & Funding · 2022-04-28T16:45:15.828Z · EA · GW

I have the impression that one of the reasons for the focus on technical AI safety is that once you succeed in aligning an AI, you expect it to perform a pivotal act, e.g. burning all the GPUs on Earth. To achieve this pivotal act, it seems that going through AI governance is not really necessary?

But yes, it does seem to be a bit of a stretch

Comment by Raphaël S (charbel-raphael-segerie) on The Case for Non-Technical AI Safety/Alignment Growth & Funding · 2022-04-28T16:43:48.432Z · EA · GW

Great first post!

Do we have statistics on the number of people and organizations working in technical AI safety versus AI governance?

Comment by Raphaël S (charbel-raphael-segerie) on Consider Not Changing Your Forum Username to Your Real Name · 2022-04-28T16:28:07.690Z · EA · GW

You can also obfuscate your name to make it harder to google while still keeping it readable. For example, replace "e" with "3" or "a" with "ae". Do you know any other tricks?
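A minimal sketch of this kind of substitution in Python (the mapping here is just the two examples above; any table you like would work):

```python
# Hypothetical substitution table: any mapping that keeps the name
# readable to humans but breaks exact-match search will do.
SUBS = {"e": "3", "a": "ae"}

def obfuscate(name: str) -> str:
    """Apply character substitutions so the name is harder to google."""
    return "".join(SUBS.get(ch, ch) for ch in name)

print(obfuscate("Raphael"))  # Raephae3l
```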

Comment by Raphaël S (charbel-raphael-segerie) on Introducing Canopy Retreats · 2022-04-24T17:27:11.164Z · EA · GW

Nice new wording!