Comments
Excellent breakdown, thanks
This will be very valuable to me, thanks!
Thank you
Here is the curriculum of ML4Good, an AGI safety camp organized by EffiSciences to train prosaic alignment researchers.
The program contains many programming exercises.
Babbling without any pruning:
- It is very difficult to imagine such an AI
- Currently, computers and AI in video games are stupid
- Collective representations such as Ex Machina or Terminator reinforce the idea that it is only fiction
- Understanding the orthogonality thesis requires a fine-grained epistemology to dissociate two concepts often linked in practice in everyday life
- Loss of status for academics
- It is possible that designing a nanofactory is really too complicated for an AI only slightly above human level. To understand the risks, we have to step back and look at history, understand what an exponential curve is, and realize that a superintelligence will arrive later on
I don't have the time to elaborate, but I find this post compelling
I think in the EA community, the bottleneck is the supply of AI-safety-related jobs and projects; there is already a very strong desire to move into AI safety. The problem is not longtermists who are already working on something else: they should generally continue to do so, because the portfolio argument is compelling. The problem is the bootstrapping problem for people who want to start working in AI safety.
Even if you only value AI safety, having a community with a good portfolio is important and makes our community attractive. AI safety is still weird. FTX was originally only vegan, and only then shifted to longtermist considerations. That's the trajectory of most people here. Being diverse is at least cool for that reason.
In your view, what proportion of longtermists should work on AI?
I can organize a session with my AI safety novice group to build the Kialo
We could use Kialo, a web app, to map those points and their counterarguments
I think the elephant in the room is: "Why are they part-time?"
If making more grants is so important, either hire more people or work full-time, no? This is something I do not understand about the current status quo
How did you discover this cause area? Is there a way to automatically browse all diseases and their associated DALYs, and see the research effort associated with each disease?
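To sketch the kind of browsing I have in mind, assuming a CSV export from a burden-of-disease dataset such as the IHME GBD results tool (the file name and column names below are hypothetical):

```python
# Sketch: rank diseases by DALY burden from a hypothetical CSV export
# with one row per disease. Column names are assumptions, not a real schema.
import pandas as pd

df = pd.read_csv("gbd_dalys.csv")  # hypothetical export from a GBD-style tool
top = df.sort_values("dalys", ascending=False)[["cause_name", "dalys"]]
print(top.head(20))  # the 20 diseases with the largest DALY burden
```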
I have the impression that one of the reasons for the focus on technical AI safety is that once you succeed in aligning an AI, you expect it to perform a pivotal act, e.g. burning all the GPUs on Earth. To achieve this pivotal act, it seems that going through AI governance is not really necessary?
But yes, it does seem to be a bit of a stretch
Great first post!
Do we have statistics on the number of people and organizations in technical AI safety, and of people in AI governance?
You can also obfuscate your name to make it harder to google while keeping it readable. For example, replace "e" with "3", or "a" with "ae". Do you know any other tricks?
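For what it's worth, a minimal Python sketch of that substitution idea (the table and the example name are just illustrations, not recommendations):

```python
# Hypothetical example: make a name harder to google but still readable
# to a human, via simple character substitutions. The table is arbitrary.
SUBSTITUTIONS = {"e": "3", "a": "ae", "o": "0"}

def obfuscate(name: str) -> str:
    """Apply the substitution table character by character."""
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in name.lower())

print(obfuscate("Alexandre"))  # -> "ael3xaendr3"
```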
nice new wording!