Singapore’s Technical AI Alignment Research Career Guide, post by yiyang · 2020-08-26
1. Introduction
2. Acknowledgements
3. Is Singapore a good country to pursue a career in technical AI alignment research?
  3.1. Reasons in favour
  3.2. Reasons against
  3.3. Conclusion
4. Key career recommendations
  4.1. How to make an impact in technical AI alignment research within Singapore
  4.4. Recommended local organisations
    4.4.1. Academic Institutions
    4.4.2. Big tech companies
  4.5.4. Considerations for migration
    4.5.4.1. Non-Singapore citizens
    4.5.4.2. Singapore citizens
The original version of this career guide is on EA Singapore's website. I've removed pretty significant chunks from the guide and made some changes to it, so that it's better suited for the average EA forum audience. If you're based in Singapore, you might want to read the original version instead.
Epistemic status: 70% confident that technical AI alignment research is likely to be among the top 5 highest-impact career pathways in Singapore. I spent about 100 hours on this, roughly 40% of it on the problem profile (section 3) and the remaining 60% on the career guide (section 4). Besides some online research, I did two "formal" interviews and got "informal" feedback from local AI researchers by sending them a minimum viable product of this document. I don't have a broad picture of the individual AI researchers and their work in Singapore, which means I can't really pinpoint the best AI researchers to work with here. Furthermore, since I'm not an AI researcher myself, I don't have a good "inside view" of the space either.
AI risk is a pressing cause area we need to address urgently, but not many people are working directly on it in Singapore. This career guide is meant to clear up some uncertainties about working on AI risk, as well as inspire more people to seek a career pathway in technical AI alignment research.
In this career guide, I will (a) explain why pursuing a technical AI alignment research career in Singapore is potentially impactful, and (b) give my recommendations on how to plan for a career in this field.
Many of the original ideas that motivated me to write this career guide came from Loke Jia Yuan. I have also received many helpful insights and feedback from my discussions with: Tan Zhi-Xuan, Harold Soh, Nigel Choo, Jason Tamara Widjaja, Lin Xiaohao, Lauro Langosco, Aaron Pang, Wanyi Zeng, Simon Flint, and Pooja Chandwani. Their kind help does not imply they agree with everything I’ve written. All mistakes and opinions in this document remain my own.
3. Is Singapore a good country to pursue a career in technical AI alignment research?
3.1. Reasons in favour
- There are research organisations working on safe and beneficial AI. There are at least three research groups that are working on issues related to AI alignment, which I’ve listed below (in section 4.4). Although AI alignment research in Singapore is mostly driven by the government, there are also large tech companies with AI R&D operations in Singapore which can be influenced to conduct more AI alignment research.
- There are local funding opportunities for safe and beneficial AI research. S$150m of the S$500m AI R&D funding from the Singapore government will be invested in AI Singapore, one of the organisations that distributes grants related to safe and beneficial AI. However, corporate funded AI alignment related research in academia is still very scarce according to one AI researcher.
- The Singapore government is already building talent pipelines into AI R&D. Although Singapore already has some promising young AI researchers, the government is still pouring substantial resources into developing more AI talent, through initiatives such as the AI Apprenticeship Programme, the Defence Science and Technology Agency’s TIL AI Camp, and the various programmes offered at local public institutes of higher learning.
- The Singapore government is generally open to long-term considerations. The Centre for Strategic Futures (CSF), under the Prime Minister’s Office (PMO), has engaged with the Centre for the Study of Existential Risk (CSER) and Nick Bostrom on a few occasions regarding far-future AI risks. However, I am unsure how much these engagements have influenced research priorities.
- The Singapore government is starting to regulate the use of AI. The Monetary Authority of Singapore (MAS) and the Personal Data Protection Commission (PDPC) have published separate non-enforceable ethical guidelines for AI in recent years. However, according to the Non-Profit Working Group on AI (which includes a few Singapore-based EAs), there are parts of the PDPC’s Model Artificial Intelligence Governance Framework that could still be improved, even in its second edition. Despite that, I think it is a good starting point.
- Singapore has the potential to influence other countries. Although Singapore lacks the powerful cross-border influence of the US, China, or the EU, other countries have at times imported expertise from Singapore. Furthermore, Singapore ranks 8th among the 25 countries in the Asia Power Index, owing to its strength in creating economic interdependencies with other countries. If Singapore is able to carve out an AI expertise niche in the future (perhaps in freight management, municipal services, education, public health, or border control, according to the National AI Strategy), there is a non-trivial chance that Singapore will export this expertise in conjunction with safe and beneficial AI best practices.
3.2. Reasons against
- Singapore’s AI research capabilities are limited in quantity and quality compared to other countries’. If you can work on AI research in another country, you are likely to make a bigger impact there than in Singapore. Even if Singapore were to rank between 6th and 20th in the AI Potential Index (my best guess), there would probably still be a large gap with the highest-ranking countries, especially the US and China. Moreover, according to OECD.AI, although Singapore ranked 10th in the world by number of top-1% AI scientific publications in 2019 (16 publications), it is dwarfed by the US (1,897), the EU (314), and China (226). On the ground, one CS PhD student agrees that there is generally a lack of high-quality local publications on safe and beneficial AI.
- Singapore’s AI research is focused more on short term AI capabilities. If you think far-future AI risks (when human-level intelligence [AGI] or superintelligence emerges) outweigh present-day AI risks, you may think Singapore is not a good country for AI alignment research. However, research on short term AI capabilities is potentially impactful in the long term too, according to some AI researchers like Paul Christiano, Ian Goodfellow, and Rohin Shah.
- Singapore’s AI research is focused more on current techniques. If you think we need to have new ideas on how intelligence works to tackle AI alignment issues, then Singapore is not a good country for that. However, if you think prosaic AGI is a strong possibility, then working on AI alignment research in Singapore might be good.
- It’s difficult to recruit local CS undergraduates into PhD programmes. This is more specific to people wanting to develop talent pipelines into the AI research space, such as hiring managers or recruiters. According to a Singapore-based AI researcher, local CS undergraduates have very lucrative opportunities in the private sector, and it’s not easy for local CS PhD programmes (academia or public research sectors mostly funded by the government) to compete with such opportunities.
3.3. Conclusion
Singapore cannot directly compete with other countries on the quality and quantity of AI research, and its research is focused more on short term AI capabilities. Yet I think there are pockets of opportunity that we can leverage to contribute towards safe and beneficial AI research. There are existing organisations working on tangentially related short term AI alignment issues, and the Singapore government is already taking charge of developing the country’s AI ecosystem in terms of funding, talent, and regulation. Furthermore, the government’s core competencies in long-term foresight and in building economic influence with other countries can further amplify AI researchers’ impact.
What does this mean for you as a potential job seeker wanting to make an impact in safe and beneficial AI research? If you’re not able to move out of Singapore to pursue such a career, then your next-best option is to work at existing organisations in Singapore that address AI alignment issues (even if they focus on short term AI capabilities). It might also be good to work at the intersection of AI alignment research and an industry endorsed by the National AI Strategy. For example, NUS Ubicomp Lab, which works on AI explainability within public health, sits at that intersection. Expertise developed there may be exported to other countries in the future, increasing the impact of your research. Furthermore, while building your career capital in AI research, it is likely to be impactful to build a community of safety-aligned researchers in Singapore too.
4. Key career recommendations
4.1. How to make an impact in technical AI alignment research within Singapore
In general, there are two broad approaches to making an impact in technical AI alignment research in Singapore. First, you can enter the for-profit sector, either as a researcher, engineer, or product manager, and then move up the ladder while shaping priorities towards safe and beneficial AI R&D. However, it’s important that you stay up to date with the latest developments in the AI alignment space; this helps ensure that when you communicate AI risks and ethics to those less familiar, you do so in a way that is clear, science-based, and realistic (neither scaremongering nor naively optimistic).
The second way is to enter academia. Here, your focus should be on conducting AI alignment research. A secondary goal is to cultivate a community of safety-aligned researchers, with the aim of collaborating on research or even forming an academic research group. You could also aim to move into the intersection of AI alignment research and an industry endorsed by the National AI Strategy (freight management, municipal services, education, public health, and border control). Given the government’s history of developing economic interdependencies and exporting expertise, you might be able to extend your impact beyond Singapore.
For a high-level overview of these career paths, you can take a look at this flowchart.
4.4. Recommended local organisations
4.4.1. Academic Institutions
There are three potentially promising academic centres that work on tangentially related safe and beneficial AI research:
The Collaborative, Learning, and Adaptive Robots (CLeAR) group is led by Harold Soh, an assistant professor at NUS who researches human-robot interaction (HRI), a broad interdisciplinary field. The group currently has 3 postdoctoral researchers and 5 PhD students.
- I think this is an especially promising group. Their work is likely to contribute to specification and assurance issues in AI alignment.
- If you’re interested in joining this group, you’ll need to be technically competent in ML and decision theory, as well as have strong competencies in probability theory and linear algebra, the “lingua franca” used in this group. Having research interests in robotics and human-robot interaction is also important. It’s not enough to be interested in computer vision in general; rather, this group looks for someone interested in computer vision within the scope of human-robot interaction.
NUS Ubicomp Lab combines explainable AI and human-computer interactions to develop solutions to improve people’s health and well-being in smart cities. Their work here is also potentially useful to solve specification and assurance issues in AI alignment. This might be an especially promising organisation considering that it’s within the intersection of AI alignment research and an industry endorsed by the National AI strategy (public health). You might be able to extend your impact beyond Singapore since the Singapore government has a history of exporting expertise.
Cognitive, Human-like, Empathetic & Explainable Machine-Learning (CHEEM) is a subdivision of A*STAR. Although not technically an academic institution, A*STAR is a statutory board under the Ministry of Trade and Industry, and collaborates heavily with other researchers in academia and in industry. According to one CS PhD student, this division seems to be the most promising in A*STAR in terms of AI alignment research.
If you’re not able to work in these recommended local academic institutions, it’s probably still worth trying to build career capital in AI research now, and then shift towards AI alignment research. There are still many other opportunities for AI research in different institutions such as NUS, NTU, or A*STAR.
And if you’re a Singapore citizen, some corporate-funded scholarships (such as SenseTime-NTU, Alibaba-NTU, Salesforce-NTU/NUS, or A*STAR’s scholarships) offer a considerable monthly allowance (S$5,000), slightly above the median salary of a fresh graduate with a BSc in CS. This is a great entry point into becoming a researcher in industry, if you are very certain that you are not planning to go into academia in the long term.
4.4.2. Big tech companies
Besides university AI labs or PhD programmes partnered with tech companies, you can also apply directly to work at a company. The general advice is to aim for a big, prestigious tech company with an AI lab in Singapore, as such companies have more resources and influence over the tech landscape. However, that also means the roles are likely to be more competitive.
Furthermore, if your interests lie heavily in computer vision, it’s better to find an R&D job in the private sector than in academia. According to one PhD student, tech companies generally have more resources in this area, as deep learning at a high level can be a competition of resources.
Here are some recommended prestigious tech companies with a local AI R&D team:
- Lazada (owned by Alibaba)
- SenseTime (in partnership with NTU’s PhD programme)
- Salesforce (in partnership with NTU’s PhD programme)
- YITU Technology
- Untangle (local start-up that is trying to develop AI interpretability as a service)
You can also apply for software engineering jobs at prestigious tech companies that don’t have a local AI R&D team:
- Grab (AI division is currently undergoing restructuring)
Besides the ones I’ve listed here, there are also potentially more tech companies in the Singapore High Impact Job Board.
4.5.4. Considerations for migration
4.5.4.1. Non-Singapore citizens
Graduate programmes, research jobs, and faculty positions are very welcoming to global talent. However, for non-research government organisations, I foresee that this would require case-by-case inquiry. Organisations or sub-organisations in defence, strategy, or policy are likely to be restricted to Singapore citizens only. For example, only Singapore citizens are allowed to apply for jobs at the National AI Office.
In the private sector, this is probably easier if you have very prestigious credentials and exceptional experience. It’s also easier if you are already at a Singapore university, which will have career services for international students.
4.5.4.2. Singapore citizens
If you’re a Singapore citizen and you’re able to get a job in the US, you can take advantage of the H-1B1 visa, which has its own quota and is much less oversubscribed than the H-1B visa.