There’s too much to learn 2022-02-27T08:01:26.435Z
EA Malaysia Report (August 2021 - January 2022) 2022-02-26T02:43:22.191Z
Effective Altruism Virtual Programs Feb-Mar 2022 2022-01-12T01:24:11.859Z
Effective Altruism Virtual Programs Jan-Feb 2022 2021-12-13T20:08:28.115Z
Effective Altruism Virtual Programs Dec-Jan 2022 2021-11-12T09:35:24.904Z
Should you organise your own introductory EA program or outsource it to EA Virtual Programs? 2021-08-12T10:20:25.422Z
There will now be EA Virtual Programs every month! 2021-07-17T04:35:43.320Z
EA Malaysia Cause Prioritisation Report (2021) 2021-04-24T05:48:58.731Z
Singapore AI Policy Career Guide 2021-01-21T03:05:59.687Z
Local priorities research: what is it, who should consider doing it, and why 2020-09-06T14:57:43.228Z
Singapore’s Technical AI Alignment Research Career Guide 2020-08-26T08:09:57.841Z


Comment by Yi-Yang (yiyang) on Community Builders Spend Too Much Time Community Building · 2022-06-30T02:47:23.216Z · EA · GW

(Weakly held personal opinion) I would go further and say that you attract people like you. If what you or your core group signals most to outsiders is your community building (or marketing) qualities, you're likely to attract folks who are also keen on community building (and put off folks who are keen on the object-level work you're recruiting for). 

Here's an intuition pump I have. Imagine two EA uni group websites that are exactly the same except for one difference in their profile page:

  • Website A showcases students who have interned at orgs working on x-risk, co-published a paper on cost-effective poverty interventions, written a series of blog posts on effective animal advocacy, etc.
  • Website B showcases students who have done basically none of the above

I feel pretty confident that A will attract the right kinds of people into EA.  

I also feel somewhat confident that B will be a net negative. I could imagine each cohort of students coming into B getting worse in quality each year, until it becomes a "Ponzi scheme"-ish entity. 


Comment by Yi-Yang (yiyang) on I just found out that I missed the deadline to sign up for the online course by 1 day, is there anyone I can contact, or any chance someone can receive a late application if there is still space left? · 2022-05-24T12:16:39.605Z · EA · GW

Hi Pride, I'm Yi-Yang and I run EA VP. Unfortunately, we don't usually let folks apply late. EA VP does run programs every month so you could catch the upcoming one. The next deadline is on Sun, June 26th. 

Comment by Yi-Yang (yiyang) on Community Builder Writing Contest: $20,000 in prizes for reflections · 2022-03-12T03:03:12.581Z · EA · GW

Strong upvote 

Comment by Yi-Yang (yiyang) on The Future Fund’s Project Ideas Competition · 2022-03-10T04:02:13.190Z · EA · GW

A service/consultancy that calculates the value of information of research projects

Epistemic Institutions, Research That Can Help Us Improve

When undertaking any research or investigation, we want to know whether it's worth spending money or time on it. There are a lot of research-type projects in EA, and the best way to evaluate and prioritise them is to calculate their value of information (VoI). However, VoI calculations can be complex, so we need to build a team of experts that can form a VoI consultancy or service provider.

Examples of use cases:
1. A grantmaker wants to know whether it's worth spending 0.5 FTE investigating cause area Y vs cause area X. 
2. A think tank has generated a list of policy ideas to investigate but is uncertain which to prioritise. 
3. A research org has a list of research questions but wants to know which one has the highest VoI.

In each of these use cases, I suspect a VoI consultancy could be extremely valuable.

David Manheim has written more about VoI here.
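For intuition, here's a minimal sketch (in Python, with entirely made-up states, probabilities, and payoffs) of the simplest VoI quantity, the expected value of perfect information: the most a decision-maker should pay for research that fully resolves which option is better.

```python
# Minimal sketch of an expected value of perfect information (EVPI)
# calculation. All states, actions, probabilities, and payoffs below
# are made up for illustration.

# p_states[s] = our credence that state s is true.
p_states = {"X_better": 0.6, "Y_better": 0.4}

# payoffs[a][s] = value of taking action a if state s is true.
payoffs = {
    "fund_X": {"X_better": 100, "Y_better": 20},
    "fund_Y": {"X_better": 30, "Y_better": 90},
}

def expected_value(action):
    return sum(p * payoffs[action][s] for s, p in p_states.items())

# Without further research, we take the single best action in expectation.
ev_no_info = max(expected_value(a) for a in payoffs)  # fund_X: 68.0

# With perfect information, we take the best action in each state.
ev_perfect_info = sum(
    p * max(payoffs[a][s] for a in payoffs) for s, p in p_states.items()
)  # 0.6*100 + 0.4*90 = 96.0

# EVPI is an upper bound on what the research is worth; a real VoI
# consultancy would estimate partial-information analogues of this.
evpi = ev_perfect_info - ev_no_info  # 28.0
```

The real work, of course, is in estimating the credences and payoffs, which is exactly what the proposed consultancy would specialise in.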

I think there might be a harder meta-problem: should we even spend time and money on calculating the VoI of certain investigations? A failure mode is one where the VoI consultancy evaluates a bunch of research projects that turn out to have very low VoI.

I guess figuring out a baseline, the cost of doing VoI calculations, and a cheap heuristic for preliminary calculations could help, but I'm highly uncertain. 

Comment by Yi-Yang (yiyang) on EA Geneva’s fellowship: a fellowship model for non-university groups · 2022-03-03T11:12:01.004Z · EA · GW

Hi Naomi! Do the participants engage with any required learning materials outside of group discussions in this version of the fellowship? Something like the usual 8 week virtual programs version.

Comment by Yi-Yang (yiyang) on There’s too much to learn · 2022-02-28T14:14:35.781Z · EA · GW

Agree with this! I can definitely see that there's some fine-tuning you can do, like making it less challenging so your motivation and probability of success go up. 

Comment by Yi-Yang (yiyang) on There’s too much to learn · 2022-02-28T14:12:09.179Z · EA · GW

(1), (2) great points!

(3) Possibly, I definitely took some inspiration from 80K's career planning guide too.  

Comment by Yi-Yang (yiyang) on EA Malaysia Report (August 2021 - January 2022) · 2022-02-27T00:44:26.509Z · EA · GW

A low-energy version of this could be a co-working retreat

Oh interesting! I see a few examples of this when Googling. If you have a go-to resource for organising this, would love to check it out. 

Comment by Yi-Yang (yiyang) on EA Malaysia Report (August 2021 - January 2022) · 2022-02-27T00:42:59.352Z · EA · GW

Fewer tailor-made events and more consistent, simple meetups (socials, YT watch parties, etc.).  

Less tailor-made targeted outreach and more advertising.

Comment by Yi-Yang (yiyang) on EA Malaysia Report (August 2021 - January 2022) · 2022-02-27T00:40:00.235Z · EA · GW

An animal welfare one! But more heavily modified to an amateur philosophy audience. 

Comment by Yi-Yang (yiyang) on EA Malaysia Report (August 2021 - January 2022) · 2022-02-26T11:54:15.751Z · EA · GW


Comment by Yi-Yang (yiyang) on EA Malaysia Report (August 2021 - January 2022) · 2022-02-26T07:36:31.635Z · EA · GW

Possibly! Fingers crossed for that. :)

Comment by Yi-Yang (yiyang) on Should you organise your own introductory EA program or outsource it to EA Virtual Programs? · 2022-02-09T11:03:23.165Z · EA · GW

Small cohort size seems costly from a facilitator's point of view. And some participants found smaller group sizes more intimidating too.

EA VP has been increasing their cohort sizes recently. Attrition rates are at around 30%, so having a cohort of at least 4 participants by the end of the program seems like a good number to aim for.  
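As a rough illustration of the arithmetic (my own back-of-envelope sketch, not EA VP's official sizing rule):

```python
import math

# Back-of-envelope cohort sizing: with ~30% attrition, how many
# participants should a cohort start with to end with at least
# `target_finishers`? (Illustrative numbers only.)
attrition = 0.30
target_finishers = 4

starting_size = math.ceil(target_finishers / (1 - attrition))
# A starting cohort of about 6 leaves roughly 4 by the final week.
```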

I'm curious what the attrition rates are for the Stanford EA format, and how they're able to get so many facilitators. 

Comment by Yi-Yang (yiyang) on We Ran a "Next Steps" Retreat for Intro Fellows · 2022-02-06T01:09:05.412Z · EA · GW


Btw the Coat of Arms link is giving me an "Access denied" message. 

Comment by Yi-Yang (yiyang) on EA views on the AUKUS security pact? · 2021-10-05T04:16:12.339Z · EA · GW

So perhaps the idea behind your first bullet point from the Economist is that a balanced power dynamic reduces either side's credence that conflict will help their position?

Yes, that's right! 

And the follow up points about China's economic influence tilt the balance in China's favour, thereby raising again the chances of conflict? 

Yes, but it's more like China's economic influence has tilted the balance in China's favour for some years now (i.e. the Belt & Road Initiative). It's only recently, with AUKUS, that there's more of a balance between China and the US overall. 

However, in terms of economic influence, China still has a stronger foothold in ASEAN than the US. 

Comment by Yi-Yang (yiyang) on EA views on the AUKUS security pact? · 2021-10-01T02:06:17.287Z · EA · GW

Key takeaways from The Economist's latest briefing:

  • ASEAN members probably benefit from a balance of power between the US and China, so AUKUS tips the scale slightly towards more balance. However, there is also a short history of flip-flopping support (e.g. the Philippines preferring China at first and then the US, Malaysia disliking the nine-dash line but still kowtowing to China). 
  • "But China’s gambit makes stark the fact that America is unable to match it. And its lack of economic leadership remains, in the words of Bilahari Kausikan, Singapore’s former top diplomat, 'the big hole in American strategy'."

Although my guess is that the West's soft power is still stronger in Southeast Asia than China's, past colonial atrocities and the lack of restitution are still bottlenecks to better coordination. A message like "look, these Western imperialists are at it again; I (China) am the only hope for a fairer and richer future" is quite persuasive. For example, you get articles like this.

However, it's also not clear to me that Western soft power still has a foothold. The Chinese diaspora in Malaysia, a significant ethnic minority, is generally pro-China. 

Comment by Yi-Yang (yiyang) on Learning, Knowledge, Intelligence, Mastery, Anki - TYHTL post 2 · 2021-09-07T11:15:46.390Z · EA · GW

Hi Alexander, thanks for writing this up!

Some context. I used to use Anki for 1-2 years. Completed the "Learning How to Learn" MOOC and read the book it was based on. Taught 13-16 year olds math and English for 2 years. Conducted EA presentations in Malaysia and previously in Singapore. Currently running EA Virtual Programs (I noticed that you're in the intro program!). FYI, my opinions are mine and not CEA's.

In conjunction with "learning how to learn better", "learning how to prioritise which learning strategy works for which scenario" seems just as important. It's really hard to know:

  1. The value of information
  2. The value of easy retrieval of information beforehand. 

I think for many of us, time is likely one of the biggest bottlenecks to better learning. For example, I really, really wanted to apply a lot of the meta-learning tools when reading The Happiness Trap, but I intuitively chose to do just two things:

  1. Read and take summarised notes.
  2. Write down how I want to practice the ACT therapy techniques from the book.   

In my case, I don't think deep learning (e.g. writing notes, creating spaced repetition cards, reflecting, doing exercises, discussing, etc.) is what I need, considering how busy life is for me now. My end goal is more sustainable mental health, and I want to apply the tools I've read in the book. The value of information seems high here for achieving my goal, but the value of easy retrieval is low because I don't know how or when I'm going to use it.

But again, it's hard to know whether a given piece of information is valuable and should be easily retrievable. One failure mode is failing to make a connection with something else important because I didn't do enough deep learning. If I didn't fully understand the concept of "cognitive fusion", I might forgo a potential connection with another therapy technique that could help me more. But it's really hard to know for sure beforehand.

Applying this to EA VP, I wonder if there are certain key learning outcomes that participants should really internalise and do a lot of deep learning on; and whether there are other learning outcomes that are less important, such that reading and remembering fuzzy impressions of them is enough for most participants.  

That makes me think we should be as clear as we can about the value of information and the value of easy retrieval for most of our learning outcomes, so that participants can say, for example, "oh, EA VP says X is super important and I'll likely need it in future work, so I should do more deep learning here. And Y isn't so important, so I'll just read it."  

Besides these two factors, I wonder if there's a simpler heuristic for choosing when to prioritise deep learning versus shallow learning. Or something in the middle, which is the likelier case. 

Comment by Yi-Yang (yiyang) on Can the EA community copy Teach for America? (Looking for Task Y) · 2021-09-01T12:41:46.238Z · EA · GW

I think this sounds right! It makes me feel like we should also pay particular attention to making sure the facilitator experience is great too. 

Organising local intro EA programs can also be a great Task Y candidate. 

Comment by Yi-Yang (yiyang) on There will now be EA Virtual Programs every month! · 2021-07-19T08:22:14.702Z · EA · GW

Hi Michael!

I'm interested in running a local in-person program at my university from September to October with the virtual program as overflow capacity, in case our capacity for in-person cohorts isn't enough to accept all quality applicants. Would that setup be possible?

Yes, just direct people who are not able to join your local program to EA VP's website! And tell them to state in the application form that they want to be in a cohort with other people from the same uni.  

Also, is there a reason that the program is no longer called a fellowship?

I spoke to Emma about this, so here's what I gathered:

When we think about fellowships, we generally think about programs that are highly selective and intensive, and that have funding and various supports and opportunities (example 1, example 2). It sounds misleading when we use the term "fellowship", and that's bad for EA's reputation, so we use "programs" instead.

I didn't ask whether locally organised programs should also have the same naming conventions, so I'm still clarifying this.

Comment by Yi-Yang (yiyang) on My current impressions on career choice for longtermists · 2021-06-15T13:41:33.825Z · EA · GW

This might just be an extension of the "community building" aptitudes, but here's another potential aptitude.

"Education and training" aptitudes

Basic profile: helping people absorb crucial ideas and the right skills efficiently, so that we can reduce talent/skills bottlenecks in key areas.


Examples: introductory EA program, in-depth EA fellowship, The Precipice reading group, AI safety programmes, alternative protein programmes, operations skills retreats, various workshops organised at EAGs/EAGxs, etc.

How to try developing this aptitude:

I'll split these into three areas: (a) pedagogical knowledge, (b) content knowledge, and (c) operations.

(a) Pedagogical knowledge

This is the specific knowledge and skills you develop to teach effectively or help others learn more effectively. Examples: breaking learning objectives into digestible chunks, designing effective and engaging learning experiences, creating and presenting content, and (EDIT) measuring whether your students are actually learning.

This could be applied to classroom/workshop settings, reading and discussion groups, career guides, online courses, etc

You can pick up knowledge and skills either 
- formally: teaching courses, meta-learning courses, teaching assistant 
- or informally: helping others learn

(b) Content knowledge

This is knowledge specific to the domain you want others to learn. If you're teaching the English alphabet, you need to know what it is (symbols you can rearrange to create meanings and associations with physical or abstract things), why it's relevant (so you share a common language to learn and communicate with others), and how to apply it ("m"+"o"+"m" is mom!).

You don't always need to be an expert in the domain, but it helps a lot if you're above average at it.

(c) Operations
A big (but sometimes forgotten) part of organising classrooms, discussion groups, or workshops is that they need to run smoothly (or within expected parameters) to reduce friction in the learning experience. It also helps to understand the different trade-offs of running an education project (i.e. quality of learning vs. students' capacity vs. educator's capacity vs. financial cost).

You can pick up knowledge and skills either 
- formally: operations courses, project management courses, productivity books
- or informally: learning from "that friend who usually gets things done and is generally reliable"

On track?

It's hard to generalise since there are so many different models of how to educate/train a person (e.g. classrooms, online courses, discussion groups), and each model requires a different way of thinking. Here's my rough take: 

Level 1: you get positive feedback from others when you had to explain and teach a certain topic informally (e.g. with friends over dinner, homework group, helping students as a teaching assistant during office hours).

Level 2: you get positive feedback when facilitating discussions.

Level 3: you get positive feedback when teaching a workshop.

Level 4 (you're likely on track here): you get positive feedback when teaching and running a course, online course, or lecture series with more than 50 participants.

Comment by Yi-Yang (yiyang) on AMA: Working at the Centre for Effective Altruism · 2021-05-30T08:07:54.838Z · EA · GW

Looking at the comments, it seems like CEA has changed a lot over the years! 

This may be too broad, but of CEA's list of team values, which has CEA as a whole done well on? And which ones do you think the team wants to prioritise improving? 

Comment by Yi-Yang (yiyang) on EA Malaysia Cause Prioritisation Report (2021) · 2021-05-14T05:37:46.678Z · EA · GW

You've made some good points that I didn't get to write in our forum post, and I've made an edit to direct readers to your comment. 

Comment by Yi-Yang (yiyang) on EA Malaysia Cause Prioritisation Report (2021) · 2021-05-14T05:30:33.982Z · EA · GW

Hi Jamie!

Looking at your methodology though, it seems as if you were attempting to essentially redo EA cause prioritisation research to date from scratch in a short timeframe?

My guess of the most useful process would have been to just take some of the most commonly / widely recommended EA cause areas (and maybe a couple of other contenders) and try to clarify how they seem more or less promising in the Malaysian context specifically.

If you agree with my characterisation of your process, with the benefit of hindsight, would you recommend that other national groups follow your methodology or my suggested alternative?

Yes, I agree. I think national groups should strongly consider making their first iteration of local priorities research exactly that: taking recommended EA cause areas and conducting shallow research on them. 

That's what we did at EA Singapore, although not in a very deliberate way. Once that was out of the way, it saved a lot of time for deeper, more useful research. Here are the reasons why I think EA Malaysia chose to do this instead: (a) we wanted to test out a methodology, (b) we wanted a stronger consensus in our team when many felt some non-EA-recommended areas should be included, and (c) we wanted to be sure we didn't miss any potentially promising cause areas. 

I don't think they are good reasons per se, but I just wanted to put them out there. 

what sorts of considerations do you think differ between Malaysia and other contexts in which these questions have been considered, if any?

I'm not sure if I understood your question correctly, so please do let me know if I didn't.  

I don't think there are any significantly different considerations, since most of these considerations (and the specific methodology) come from Charity Entrepreneurship. If I were to compare CE's methodologies to those used by other organisations, I imagine they would differ significantly. 

Comment by Yi-Yang (yiyang) on EA Malaysia Cause Prioritisation Report (2021) · 2021-05-14T05:00:47.945Z · EA · GW

Hi Zeshen! I'll be answering you from my own personal capacity, so my views are not EA Malaysia's.  

I'm wondering if we have good reliable statistics on causes of deaths in the country (death being a proxy for suffering), and we could look into the categories of avoidable deaths (e.g. curable illnesses)

For health-specific statistics, I've used information from IHME. For animal consumption, I've used data from FAO.

and whether those areas are receiving enough support / funding.

It's a bit tough to find exact information about this. I did find one example from this report in The Lancet:

mental health spending (RM344·82 million or 1% of the health budget) remains below the average spending on mental health of upper-middle-income countries. 


Also, from a poverty perspective, I'm curious if we have an idea how many Malaysians live in hardcore poverty and what can be done to get them out of it.

I have only done a bit of research on poverty, but my intuition tells me that Khazanah Research Institute probably has some information about this. One of the top Google search results is this report, which I find helpful for the "where is Malaysia's poverty line" issue that you may have seen in the news. 

Also, what exactly is EA Malaysia's role as compared to EA global? I can imagine that global issues such as climate change and AI existential risks are also being heavily looked at by EA global and others, and depending on the issue, EA Malaysia's involvement could be either independent, complementary, or redundant. 

I love how you framed the outcomes of our involvement. I might even add "destructive", which is different from "redundant" - our involvement could cause more harm than good.   

Ideally, we want to be complementary when working on something that isn't our comparative advantage. For example, I would imagine top AI governance research institutions elsewhere have a better comparative advantage than Malaysia's; this would mean that a Malaysian wanting to work in this space from an EA perspective, but who still wants to be in Malaysia, would probably have the most impact localising AI governance research from elsewhere into policy recommendations. 

I don't feel confident giving specific recommendations on reducing the risk of doing redundant or destructive work and increasing the chance of doing complementary work. My only intuition here is to over-coordinate (or coordinate more than you're used to).  

Comment by Yi-Yang (yiyang) on EA Malaysia Cause Prioritisation Report (2021) · 2021-04-27T13:16:22.995Z · EA · GW

Hi Brian! Thanks for your response. I'll be using "we" (as a team) to address most of your comments, and "I" at the end to address one point. 

I think it would be a lot better though if you had "problem profiles" like 80,000 Hours's for those causes you listed, especially the top 2-4 causes.  

Yes, if there is a case for conducting further research, we are definitely considering deeper research into the top causes and producing "problem profiles". 

Or if not making full problem profiles, putting a few sentences or bullets about the scale and neglectedness of each of the causes would help.

We realised that the last point in our disclaimer didn't make clear an additional related issue, which addresses this concern of yours: we didn't detail which pieces of evidence or which arguments led us to give a certain score. Technically we did - it's probably somewhere in our meeting minutes, and it's very messy - hence we've decided not to address this issue at this time. However, if we were to conduct research like this again, we'd definitely want to be better at making explicit our assumptions, evidence, and arguments.

The 2 that I think are very questionable though are financial literacy and improving diversity and inclusion. I don't see why these two could be in the top 8 causes for Malaysia. Maybe one of you could make the case for why these two causes are very impactful to work on, especially compared to other alternatives I list below?

We actually found a huge variance in scores for the above two cause areas in both the initial ranking stage and the weighted factor model stage. So some of us on the team do agree with you that these cause areas shouldn't be in the top 8. It also might be the case that we didn't brainstorm enough cause areas that could have reached the top 8. 

As a side note, most of us on the team have strong feelings about diversity and inclusion issues in Malaysia (although some of us did give this cause area a lower score, we weren't that surprised it made the top 8). In a nutshell, issues of race and religion have often been used as a dividing force at the legislative, political, and social levels throughout much of Malaysia's modern history. 

On a personal note, I wouldn't be surprised if these two cause areas actually do drop out in the next iteration of research (unless there's really convincing evidence of a cost-effective intervention).  

Would love to check out EA PH's cause prioritisation report soon! :)

Comment by Yi-Yang (yiyang) on Singapore’s Technical AI Alignment Research Career Guide · 2020-10-16T13:04:42.473Z · EA · GW

Hi Misha, sorry for the late reply. Thanks for the heads up! I've added this feedback for a future draft.

Comment by Yi-Yang (yiyang) on Local priorities research: what is it, who should consider doing it, and why · 2020-09-09T08:48:27.621Z · EA · GW

I appreciate the feedback Peter!

Comment by Yi-Yang (yiyang) on Singapore’s Technical AI Alignment Research Career Guide · 2020-09-01T03:41:08.610Z · EA · GW

That's great! Thanks again for the feedback.

Comment by Yi-Yang (yiyang) on Singapore’s Technical AI Alignment Research Career Guide · 2020-08-27T08:23:22.399Z · EA · GW

Regarding what I meant by "short term AI capabilities", I was referring to prosaic AGI - potentially powerful AI systems that use current techniques instead of hypothetical new ideas about how intelligence works. When you mentioned "I estimated a very rough 50% chance of AGI within 20 years, and 30-40% chance that it would be using 'essentially current techniques'", I took it as prosaic AGI too, but you might have meant something else.

I've reread all the write-ups, and you're right that they don't imply that "research on short term AI capabilities is potentially impactful in the long term". I really jumped the gun there. Thanks for letting me know!

I've rephrased the problematic part to the following:

"Singapore’s AI research is focused more on current techniques. If you think we need new ideas about how intelligence works to tackle AI alignment, then Singapore is not a good country for that. However, if you think prosaic AGI [link to Paul's Medium article] is a strong possibility, then working on AI alignment research in Singapore might be good."

If you feel like this rephrasing is still problematic, please do let me know. I don't have a strong background in AI alignment research, so I might have misunderstood some parts of it.