That's just one thought that motivated me to write this question. It would be extremely valuable to introduce Chinese students and professionals to AGI safety — not only because China has a strong AI industry, but also because China has >1.4 billion people. Yet as far as I know, most AI alignment projects and organizations target English speakers. I've spent very little time researching AI alignment in China, and I could certainly be wrong.
If people want to do more research, I'd recommend the 2022 AI Index Report. Here is a (possibly misleading; again, I haven't looked into this carefully) graph from page 26:
I think it is a much higher priority (from the perspective of reducing AI x-risk) to translate AI alignment concepts, particularly the AGI Safety Fundamentals course material. It takes a lot of inferential steps to go from "I'm interested in doing good" to "I like EA ideas" to "I think AI alignment is important" to "I want to work on AI alignment — where can I start?" And even if many Mandarin speakers reach that last point through a Mandarin translation of 80,000 Hours, they will currently find very few (if any?) structured opportunities to skill up for AI alignment.
Thanks for sharing these. The Chinese Association for AGI appears to focus on advancing AI capabilities rather than AI safety. I used Google Translate to translate the lead paragraph of the website's current opening page:
Notice of the 7th China General Artificial Intelligence Annual Conference
The China General Artificial Intelligence Annual Conference has been held successfully for six consecutive years. It is an annual event for Chinese general artificial intelligence enthusiasts, spanning computer science, philosophy, logic, education, psychology, sociology, law, medicine, and other disciplines. In order to better showcase and promote research on and applications of general artificial intelligence, the 7th China General Artificial Intelligence Annual Conference in 2022 will be held at Northwest University for Nationalities in Lanzhou City.