EA Philippines 2020 Annual Report 2021-01-27T07:52:56.463Z
EA Philippines 2020 Community & Impact Survey Report 2021-01-27T06:06:45.801Z
How EA Philippines got a Community Building Grant, and how I decided to leave my job to do EA-aligned work full-time 2021-01-27T05:57:41.056Z
CHOICE - Creating a memorable acronym for EA principles 2021-01-07T07:12:10.816Z
How do EA researchers decide on which topics to write on, and how much time to spend on it? 2020-12-31T14:33:07.407Z
BrianTan's Shortform 2020-12-27T15:04:12.608Z
What are the "PlayPumps" of Climate Change? 2020-12-05T15:03:30.919Z
Can we convince people to work on AI safety without convincing them about AGI happening this century? 2020-11-26T14:46:28.241Z
If someone identifies as a longtermist, should they donate to Founders Pledge's top climate charities than to GiveWell's top charities? 2020-11-26T07:54:53.479Z
Questions for Nick Beckstead's fireside chat in EAGxAPAC this weekend 2020-11-17T15:05:24.154Z
Questions for Jaan Tallinn’s fireside chat in EAGxAPAC this weekend 2020-11-17T02:12:43.781Z
Feedback Request on EA Philippines' Career Advice Research for Technical AI Safety 2020-10-03T10:39:15.736Z
EA Philippines' Strong Progress and Learnings in 2019 2020-02-04T13:37:12.714Z


Comment by briantan on (Autistic) visionaries are not natural-born leaders · 2021-01-27T08:53:28.418Z · EA · GW

Great, thanks Guzey! There's a typo in the first sentence of the update though: "Update on the word "(Autistic)" in the title: I'm now aware of any of the people I discuss in the post being diagnosed with any autism spectrum disorders". The word "now" is supposed to be "not". :)

Comment by briantan on BrianTan's Shortform · 2021-01-27T05:46:01.471Z · EA · GW

How I learned about EA and how EA Philippines started

I’m Brian, co-founder of EA Philippines, and a recipient of a community building grant from the Centre for Effective Altruism. I think some people might be curious about how I first found out about and got interested in effective altruism, since this is a question a lot of EAs ask other EAs. So I decided to write this Shortform post. I’ll also talk below about how we started EA Philippines.

I first found out about EA through the 80,000 Hours website in 2017. I was helping organize an event called the ASES Bootcamp for Career Design, a three-day conference that aimed to give Filipino students career advice on how to be great designers, marketers, coders, and entrepreneurs.

I posted on social media about the event, and a friend of mine sent me the link to 80,000 Hours, since he said it might be relevant for me to read for organizing the bootcamp. I ended up reading the entire 80,000 Hours career guide online, and I found it really interesting. I was already drawn to making a large impact with my life before reading 80,000 Hours, but I mainly thought about doing this through creating or joining a startup, or through content creation. 

80,000 Hours helped me realize that there are other high-impact career paths out there, and that there are more pressing problems to solve than the ones I had in mind. I think the two top problems I wanted to solve at the time were helping people become more productive (i.e. through creating productivity-related tools or content), and helping people join or start startups, neither of which were in 80K's top paths or problems.

However, no one else I talked to about 80,000 Hours or EA was as interested in it as I was back then. As such, I didn't do much about my interest in effective altruism. Still, I think 80,000 Hours played a role in how I crafted a life mission for myself later that year, in 2017: to help people at scale. I made this my mission statement because I looked up to people, such as entrepreneurs and content creators, who were able to help millions or billions of people at scale. I wanted to emulate them with my life.

How we started EA Philippines

In the middle of 2018, I was about to enter the final year of my undergraduate degree at the Ateneo de Manila University, and I was thinking that I should start re-planning my career, given that I was going to graduate soon. As such, I re-read the 80,000 Hours career guide, and got interested again in effective altruism because of that.

So in around August 2018, I searched on Facebook to see if there was an Effective Altruism Philippines or Manila group, and there was none yet. When I searched again around October 2018, though, there was suddenly a Facebook page for Effective Altruism Manila (which we later renamed to Effective Altruism Philippines), with only around 40 likes.

I decided to message the page and found out that it was just one person running it - Kate Lupango. I met up with her, and she invited Tanya Quijano, someone she was told was also interested in effective altruism. Soon after, we decided to start organizing events under the name Effective Altruism Philippines, and our group has grown a lot since then. You can read about our progress in 2019 here.

In 2019, while helping run EA Philippines, I wanted to invest more time in learning about EA. I listened to a lot of 80K podcast episodes and read a lot of EA and EA-related books. I was just listening to these for fun. Little did I know that this knowledge would later be very valuable for a job.

If you want to learn more about how EA Philippines got a community building grant, you can read this post.

Comment by briantan on (Autistic) visionaries are not natural-born leaders · 2021-01-27T01:14:55.773Z · EA · GW

As I wrote in another comment, my view on whether Jobs, Musk, or Page are actually autistic is in flux as I read others' comments here, like yours, and read more about others' views on them online. I'm not that familiar with autism/Asperger's, but initially I thought that at least 2 of the 3 are not autistic. So it's interesting for me to learn that a couple of other people on this forum, like you, agree that there's good evidence of them being autistic / having Asperger's.

I also agree that in this context the term autistic isn't used in a derogatory way. I am also not claiming that the use of the word should be banned.

Initially, I thought Guzey should change the article's title. I'm changing my mind now and would be fine if he kept the title as is, but I would slightly prefer it if he added something like this in the article:

"Jobs, Musk, and Page have never been formally diagnosed as autistic, but my impression is that they exhibited a host of traits typically associated with autism/Asperger's. This is why I put the title as "(Autistic) visionaries are not natural born leaders"."

This is just so people reading this would not think that these people have been formally diagnosed as autistic, when in fact they haven't been.

Comment by briantan on (Autistic) visionaries are not natural-born leaders · 2021-01-27T01:05:24.301Z · EA · GW

Hi Dale, I agree that we should encourage people here to adopt a scout mindset, and I also agree that this does not require you to have definitive proof of something.

My view on whether Jobs, Musk, or Page are actually autistic is in flux as I read others' comments here, like yours, and read more about others' views on them online. I'm not that familiar with autism/Asperger's, but initially I thought that at least 2 of the 3 are not autistic. So it's interesting for me to learn that a couple of other people on this forum, like you, agree that there's good evidence of them being autistic / having Asperger's.

Anyway, in what you quoted, we want people to write with "Clarity about what you believe, your reasons for believing it, and what would cause you to change your mind."

Given that, I just wish that Guzey would have put a caveat near the top of his article that there's no authoritative source saying that these leaders have been formally diagnosed as autistic. I'm not saying that he should caveat all his claims and always be super clear about everything, but for a topic like autism, I think he should have been clearer about his claim in the title.

Initially, I thought Guzey should change the article's title. I'm changing my mind now and would be fine if he kept the title as is, but I would slightly prefer it if he added something like this in the article:

"Jobs, Musk, and Page have never been formally diagnosed as autistic, but my impression is that they exhibited a host of traits typically associated with autism/Asperger's. This is why I put the title as "(Autistic) visionaries are not natural born leaders"."

Comment by briantan on (Autistic) visionaries are not natural-born leaders · 2021-01-26T13:58:12.423Z · EA · GW

I don't have a problem with you writing that they were all terrible leaders or terrible at running a company. This is because I think there's enough good evidence showing that Jobs, Musk, and Page were all terrible leaders when they started out, or at least showed examples of bad leadership. And your article cites some of this good evidence.

Meanwhile, you haven't really cited any evidence of them being autistic, and I don't think there's enough good evidence that they are. And yet you hint that all of them have it, without any caveat in the article that this is your impression rather than a proper diagnosis. And from articles I've read online, there's no definitive answer as to whether they are autistic.

Also, we need to take special care with words that have been used in very discriminatory or derogatory ways, such as "autistic". Here's an article that talks about how labelling someone with autism or as autistic can be derogatory: "Whatever way he meant it, "autistic" is often used as an insult and it's insensitive to use a term that describes a disability or a condition in this way, says the National Autistic Society."

Comment by briantan on (Autistic) visionaries are not natural-born leaders · 2021-01-26T10:18:47.843Z · EA · GW

Yeah, but I think there's still something wrong with hinting that people are "(autistic)" when they haven't been diagnosed with it, or don't want to be known as such.

Comment by briantan on (Autistic) visionaries are not natural-born leaders · 2021-01-26T06:18:14.089Z · EA · GW

I browsed through this writeup and it's a comforting reminder that these 3 leaders/visionaries were not natural-born leaders. 

I have a minor comment on the title though. I did a quick Google search, and there's no definitive source saying that Steve Jobs, Elon Musk, or Larry Page are autistic. So I find the "(autistic)" part in the title unhelpful and misleading. The URL of your blog post also implies that all three of them are autistic, when none of them are definitively so. What made you choose this title? Would you consider changing it, such as to "Visionaries who are not natural-born leaders"?

Comment by briantan on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T10:16:16.913Z · EA · GW

No worries that you don't have the time to explain it, Michael! I'm glad to hear that others haven't heard of the idea before and that this is a new topic. Hopefully someone else can explain it in more depth. I think concepts featured in 80K podcast episodes or other EA content can sometimes be really hard to grasp, and maybe others can create visuals, videos, or better explanations to help.

Another example of a hard-to-grasp topic from 80K's past episodes is complex cluelessness. I think Hilary Greaves and Arden did a good/okay job of explaining it, and I kinda get the idea, but it would be hard for me to explain without looking up the paper, reading the transcript, or listening to the podcast again.

Comment by briantan on [Podcast] Ajeya Cotra on worldview diversification and how big the future could be · 2021-01-23T03:20:54.820Z · EA · GW

I'm not finished yet with the whole episode, but I didn't understand the part about fairness agreements and the veil of ignorance that Rob and Ajeya were talking about as a way to figure out how much money to allocate per worldview. This was the part from 00:27:50 to 00:41:05. I think I understood the outlier opportunities principle though.

I've re-read the transcript once to try and understand it more but I still don't. I also googled about the Veil of Ignorance, and it started to make more sense, but I still don't understand the fairness agreements part. Is there a different article that explains what Ajeya meant by that? Or can someone explain it in a better way? Thanks!

Comment by briantan on Ranking animal foods based on suffering and GHG emissions · 2021-01-21T01:59:43.699Z · EA · GW

Also, have you considered adding other farmed fish species to this tool, like tuna, milkfish, and tilapia? I'm from the Philippines where milkfish and tilapia are by far the two most consumed farmed fish species, based on this report by Fish Welfare Initiative. If anyone can point me to sources on relative welfare or climate impact between different farmed fish species, I'd be interested to read :)

Comment by briantan on Ranking animal foods based on suffering and GHG emissions · 2021-01-21T01:53:14.338Z · EA · GW

This tool is great! I thought I had read enough about animal welfare and this topic but apparently not, since I had these reactions:

  1. I'm surprised that when animal welfare is set to 99% as the relative priority, shrimp is by far the #1 harm. Is there something you can link me to that explains this more? I read through Charity Entrepreneurship's shrimp welfare charity idea report before, and they indicated that they thought there was only a 20% chance that shrimp are sentient. So I didn't think they would be #1 on this list, even when I adjust animal welfare prioritization down to 22% on your tool.
  2. When animal welfare is set to 99%, I'm surprised that eating an egg from a cage-free hen is only ~25% less harm than an egg from a caged hen. I thought it would be more, i.e. 50-80%. Can you point me to a source that says why they're still suffering a lot?
  3. When animal welfare and climate change are both set to 50%, I'm a bit surprised that a cage-free hen is only ~5% less harm. Can you help explain what extra climate harm cage-free farms/hens have?

The comments above were from when I was playing with the relative priorities, before I made any changes to other parameters. I now see that the brain size and neuron count checkboxes can also drastically affect the tool. But I assume the defaults you set are our best guess on how to determine sentience?

Also, something I'd be curious about is how much climate impact certain crops or plant-based products have, and whether they could be put on the same tool. I'd be interested to know how much better various plant-based products are, for both climate and animal welfare, compared to the various animal products.

Comment by briantan on EA Organisations: If you created a sequence about your approach to doing good, what would you write? · 2021-01-21T01:33:48.275Z · EA · GW

I'm not from Open Philanthropy but I think their writeup on "Hits-Based Giving" is a must-read article to understand their strategy. I think it's also useful for those interested in improving how they decide which charities or projects to donate to or make grants to.

I've heard of the term before, but I only recently found and read this writeup, and doing so helped me understand it and explain it better.

Comment by briantan on CHOICE - Creating a memorable acronym for EA principles · 2021-01-12T13:45:35.949Z · EA · GW

Good point. I've edited the article and added an update at the end of the summary now. Thanks!

Comment by briantan on 2020: Forecasting in Review · 2021-01-11T09:23:44.638Z · EA · GW

I really like the titles to the sections - very creative.

Comment by briantan on CHOICE - Creating a memorable acronym for EA principles · 2021-01-10T06:42:02.916Z · EA · GW

Yeah, I do see now how they're very carefully chosen. I also just saw on their website, and remembered, that a lot of organizations and people have voiced their support for these principles. I don't think it's worth changing them to fit this acronym, since that would mean running them through multiple organizations again, without a strong reason for doing so.

From the comments on this post, I realize this isn't that good of an idea anymore. I still think it was worth sharing, and other people are free to experiment in using this acronym to explain what EA is about. But I wouldn't push for it anymore to replace CEA's current guiding principles.

Comment by briantan on CHOICE - Creating a memorable acronym for EA principles · 2021-01-08T03:29:03.908Z · EA · GW

Thanks for sharing. I think all of those are valid points to raise.

  • On it being low-fidelity: I understand it is a bit low-fidelity, but I think it's more likely it would lead to good outcomes than bad outcomes. I don't expect or want people to learn these by heart anyway - it's just one way to phrase/frame what the principles are.
  • On guiding principles being changed frequently: I think this is the more valid point. 

Regarding the proposed acronym:

  • I didn't consider the pro-life/pro-choice association, although I think most people wouldn't associate it with that, so I don't think this should be an issue. 
  • I also didn't think people would associate it with the choice/obligation debate. And even if people did, my guess is they would lean toward EA being a choice rather than an obligation.

So far, the acronyms CARING and SOCIAL have been suggested. I like the acronym CARING more, so if people agree with the value of using an acronym, but dislike the word CHOICE, CARING could be used.

Comment by briantan on CHOICE - Creating a memorable acronym for EA principles · 2021-01-07T23:56:17.679Z · EA · GW

I disagree. I think these are part of the EA dogma but are not the same as having a scientific mindset.

I think there's a lot of truth in this. And you helped me realize that I could easily add back in "Scientific Mindset" if I make the acronym "CHOICES". The only downside is that having 7 principles instead of 6 makes it slightly harder and longer to explain. Because of this downside, I'm still uncertain whether to use CHOICE or CHOICES.

I guess other people can see whether they'd want to add Scientific Mindset to the principles or not, and they can use CHOICE or CHOICES depending on their preference. What do you think about that?

Comment by briantan on CHOICE - Creating a memorable acronym for EA principles · 2021-01-07T13:40:38.494Z · EA · GW

1-2 people seem to have downvoted. I'd like to know why people downvoted if you're willing to share your thoughts/feedback!

Comment by briantan on CHOICE - Creating a memorable acronym for EA principles · 2021-01-07T09:05:19.105Z · EA · GW

I like the word caring too, so this is an interesting suggestion! A couple of comments:

  1. Would "rationality" be better than "reasoning carefully"?
  2. Terms that are two words tend to be harder to remember in acronyms, which is why I'd go with "rationality" rather than "reasoning carefully". "Greatest impact" also isn't ideal, because it's two words and it isn't a phrase commonly used in the community.

Comment by briantan on vaidehi_agarwalla's Shortform · 2021-01-07T02:41:36.371Z · EA · GW

You could add this recent post to the list:

Comment by briantan on Propose and vote on potential tags · 2021-01-05T10:00:10.259Z · EA · GW

Alright, I've gone ahead and made the Philippines tag here, along with a description for it. I've also tagged all 5 past posts on this topic. The description I wrote could be a template for what other country-specific tags should look like. I felt that the description you wrote for China didn't apply as much to the Philippines tag.

If you or anyone else thinks the description needs changes, or has any other feedback on it, let me know!

Comment by briantan on BrianTan's Shortform · 2021-01-05T09:01:24.722Z · EA · GW

Hey Prabhat, yeah I'm aware of Kurzgesagt, and am happy they have videos on topics related to EA. But they haven't specifically mentioned EA or GiveWell yet. I think either of those happening could have a large effect.

Comment by briantan on BrianTan's Shortform · 2021-01-05T08:58:40.471Z · EA · GW

Yeah I'm aware of Kurzgesagt. Thanks for linking still though!

Comment by briantan on Propose and vote on potential tags · 2021-01-05T00:28:04.478Z · EA · GW

I think it's a good idea to go with a Philippines tag rather than an EA Philippines tag. The two are quite interchangeable, because 100% of past posts related to the Philippines (there are 5 of them) were also written by people in EA Philippines, and 100% of past posts by EA Philippines are related to the Philippines.

I think this will continue for quite a few years for ~80-100% of posts, since we expect only a few people to not be affiliated with EA Philippines but still be writing about the Philippines. I think that 90-100% of posts by EA Philippines will relate to the Philippines.

I also agree that for national EA groups, rather than have an EA-chapter-specific tag as well as a country-specific tag, we should just have the country-specific tag.

I don't understand how a post related specifically to an EA chapter wouldn't also be related to the country, so I think one country tag (rather than a country and a chapter tag) is enough. 

I would prefer to just have a Philippines tag already rather than a Southeast Asia tag. This is because:

  1. I think we'll hit 10 posts soon, i.e. by the midpoint of 2021
    1. We already have 5 past posts that could be tagged under Philippines
    2. I have ~3 more posts coming up (likely this month) that would also be tagged under Philippines
  2. So instead of tagging these posts under Southeast Asia and then having to move them to Philippines after we hit 10 posts, I'd rather we just tag them under the Philippines now.

I think the principle should be like "If there are 5 or more posts already for a specific country or EA national chapter, and if you would want to create a tag for easier visibility of posts related to that country/chapter, then you should create a tag for that specific country already." Let me know what you think of this principle!

Comment by briantan on Prabhat Soni's Shortform · 2021-01-04T06:16:41.983Z · EA · GW

I don't know if it would raise risks, and I haven't watched the movie (only the trailer), but I'm disappointed about this movie. Superintelligence is a really important concept, and they turned it into a romantic action comedy film, making it seem like a not-so-serious topic. The film also didn't do well amongst critics - it has an approval rating of 29% on Rotten Tomatoes.

I think there's nothing we can do about the movie at this rate though.

Comment by briantan on Propose and vote on potential tags · 2021-01-04T06:08:32.812Z · EA · GW

Can I create a tag called "EA Philippines", for posts by people related to EA Philippines, such as about our progress or research? I'd like to easily see a page compiling posts related to EA Philippines. I could create a sequence for this, but a sequence usually implies things are in a sequential order and more related to each other. Our posts will likely not be that related to each other, so a tag would likely be better.

A counterargument is that I currently don't see any tags for any EA chapter, except for EA London updates. But these aren't about EA London specifically - they're just the updates they compile on the EA movement. Adding one tag for one chapter seems harmless, but if eventually 50-100 chapters do this, things might get disorganized. Curious to hear others' thoughts on this!

Comment by briantan on How do EA researchers decide on which topics to write on, and how much time to spend on it? · 2021-01-01T12:52:25.973Z · EA · GW

Ah, I meant I've only read 1/3-1/2 of "An experiment to evaluate the value of one researcher's work". I'll try to finish reading it within the next few days.

That's cool to hear though that it takes you 0-5 mins to predict the value of a project. I may want to book a call with you too to dig deeper about your forecasting system for projects. Could you DM me on the Forum your Calendly link (if you have one)? :)

Comment by briantan on How do EA researchers decide on which topics to write on, and how much time to spend on it? · 2021-01-01T11:17:35.867Z · EA · GW

Thanks for this answer Nuno! I've read the first 1/3-1/2 of the post you linked so far, and I think it's a cool framework. I think estimating value in "microbostroms"/"milibostroms" or QARPs, and framing things as orders of magnitude away from each other, are good ideas.

I'm curious how much time you'd expect it to take to produce an initial estimate of the value of one research project. I don't have a list of topics yet, but once I do, I'm wondering how much time it would take you to predict their value.

Also, after you're done predicting (or crowdsourcing predictions) of the value of your projects, how do you then estimate how long each research question/project would take? Do you also do forecasts for that? And do you then just divide the milibostroms by the estimated number of hours to find what's most cost-effective to pursue?
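
If that last step is roughly right, a back-of-the-envelope version of it might look like this sketch (the project names and numbers here are entirely made up for illustration, not taken from Nuno's actual framework):

```python
# Hypothetical illustration: rank research projects by estimated value
# (in milibostroms) divided by estimated hours of work.
# All names and numbers below are invented for the example.

projects = {
    "Research question A": {"milibostroms": 2.0, "hours": 40},
    "Research question B": {"milibostroms": 0.5, "hours": 5},
    "Research question C": {"milibostroms": 1.0, "hours": 25},
}

def value_per_hour(p):
    # Estimated milibostroms of value produced per hour of work
    return p["milibostroms"] / p["hours"]

# Most cost-effective project first
ranked = sorted(projects, key=lambda name: value_per_hour(projects[name]), reverse=True)
```

Here the small project B comes out on top (0.1 milibostroms/hour) despite having the lowest total value, which is the kind of reordering this division step is meant to surface.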

Comment by briantan on What’s the low resolution version of effective altruism? · 2020-12-31T13:50:39.457Z · EA · GW

I wonder if a better low resolution version of EA is to highlight how some people in the EA community are willing to make large changes to their careers/career plans to solve big but neglected problems. 

An article on this could cite examples like Elie Hassenfeld and Holden Karnofsky switching from hedge fund trading to founding GiveWell, Ben Todd switching from wanting to work on climate change to founding 80,000 Hours, or Marie Gibbons switching from veterinary work to working on clean meat.

I feel like EA coverage in mainstream media focuses a bit too much on donations. I'd rather have more people optimizing their career plans to do direct work on pressing problems than have more people earning to give or just optimizing their donations.

Comment by briantan on What’s the low resolution version of effective altruism? · 2020-12-31T13:38:00.342Z · EA · GW

From another tweet on the same thread with @nonmayorpete:

"Some people get a bad "i can be more virtuous by being smarter haha" impression. it also has a rep for being very utilitarian and putting a too much weight into world-ending AI risk.

this is the first time i've seen negative vibes about it being money-only though"

I'm just showing this as an example, not because I've heard of this criticism before. This is my first time hearing the criticism about "being virtuous by being smarter", and "putting too much weight into world-ending AI risk". I've heard the latter from people within the community, but not from someone outside of it.

But I'm not from SF or the U.S., so I'm not really exposed to people who have these low resolution definitions of effective altruism. I think here in the Philippines, we thankfully don't have any negative, low resolution versions of EA circulating around yet.

Comment by briantan on What’s the low resolution version of effective altruism? · 2020-12-31T13:33:31.543Z · EA · GW

Here's one in a thread I saw on Twitter from @nonmayorpete. This tweet got 1,600 likes:

"Hey SF-based techies I wrote your resolutions for you:
- Delete ride-hailing and food-delivery apps
- Learn 3 bus lines
- Walk the Crosstown Trail
- Google who your Supervisor is
- Volunteer with your time, not your skills
- Pick a cause that is not effective altruism"

Another user replied: "What's effective altruism, and what's wrong with it?"

From @nonmayorpete: "I’ll let you look it up. It’s a completely fair topic to be interested in but it conveniently lets high-income people justify not getting their hands dirty in literally anything"

Other tweets of Pete's aren't as negative on EA as that one, and Luke Freeman from GWWC and Aaron Gertler from CEA have both responded to the thread to try to correct his view. But it still shows how low-resolution and negative people's perception of EA can be.

Comment by briantan on BrianTan's Shortform · 2020-12-28T05:43:40.621Z · EA · GW

Oh cool. Yeah I had heard that they have been experimenting and have had some success advertising on podcasts. I didn't know though that they've advertised on more than 35 already. I wonder if they're going to try sponsoring YouTube videos next. Not running YouTube ads, but paying so they can be mentioned by a YouTuber with a somewhat EA-aligned audience. I think Lex Fridman would be a good example. He interviewed Will MacAskill this year. Maybe that would be worth trying out.

Comment by briantan on BrianTan's Shortform · 2020-12-27T15:04:12.949Z · EA · GW

GiveWell is a sponsor of the latest podcast episode of Tim Ferriss, the best-selling author and podcaster. I am a fan of Tim Ferriss and his podcast, and I think GiveWell sponsoring his podcast will lead to thousands of people learning about GiveWell, and subsequently donating to GiveWell and/or their top charities based on his recommendation. 

I wouldn't be surprised if GiveWell got at least a 4x return on donations by sponsoring this podcast. I am really happy to see GiveWell mentioned on his podcast, and I think Tim gave a really personal spiel in his recommendation of them! He even mentioned that his podcast episode with William MacAskill in 2015 led to at least $100,000 of organic donations to GiveWell and/or its charities (based on donors organically attributing them to Tim Ferriss).

Comment by briantan on Stanford EA has Grown During the Pandemic; Your Group Can Too · 2020-12-25T03:41:29.740Z · EA · GW

Thanks for sharing Kuhan! EA Philippines (and our student chapter EA Blue) has grown a lot as well, and we've had a good number of people showing up to our virtual events. So I definitely see why Stanford EA has grown, and I think a few other chapters (mainly student chapters) have grown and maximized virtual activities during this pandemic.

Anyway, I have simple questions on two things you said:

1. On co-working:

On top of co-working being a good opportunity for socializing, I’ve found my and other members’ productivity goes up a lot when sharing our screens during co-working sessions (highly recommend).

Do you mean two or more people share their screens at the same time? How does that work? We share our screens during group meetings, but I've never heard of screen-sharing during co-working sessions. Also, wouldn't people feel like they're being watched (or worry that they might show something private) if they're screen-sharing while working?

2. When you said you have "around twice as much weekly programming as pre-pandemic", around how many events per week is that on average? Also, how do you know that this is the right amount (and not too little or too much)? Is it your hope that every member of Stanford EA shows up to at least 1 event per week?

Comment by briantan on vaidehi_agarwalla's Shortform · 2020-12-19T04:24:20.348Z · EA · GW

Hm, I think Swapcard is good enough for now, and I like it more than the Grip app. I think this comes down to which specific features people want in a conference app, and why those would make things easier or better.

Of course it would be good to centralize platforms in the future (i.e. maybe the EA Hub also becomes a Conference platform), but I don't see that being a particularly good use of time.

Comment by briantan on Propose and vote on potential tags · 2020-12-19T04:17:32.046Z · EA · GW

Oh cool, yeah I guess this works!

Comment by briantan on Propose and vote on potential tags · 2020-12-18T15:01:42.977Z · EA · GW

Another potential argument in favor of having a tag for Feedback Request is it might encourage EAs to share work with each other and get feedback more often, which is likely a good thing. 

At my workplace, First Circle, we have a process called "Request for Comment" (RFC): we write documents in a specific format and share them in an #rfc Slack channel, so people know we want feedback on a proposal or write-up in order to move forward with our work. This was very effective in getting people to share work, get feedback asynchronously rather than via synchronous meetings, and house all feedback requests in one place. Maybe a "Feedback Request" tag could streamline things similarly?

For example, an EA who wants to give feedback could click this tag to find posts that are looking for it.

It could also be good practice for authors of feedback requests to state a deadline for when they need the feedback. That way, people reading the post later know whether feedback is still wanted once the deadline has passed.

Comment by briantan on Propose and vote on potential tags · 2020-12-18T14:56:21.455Z · EA · GW

Should we have a tag for "Feedback Request"?

We in EA Philippines have already made two posts (with another one upcoming) specifically to request feedback from the global EA community on an external document we wrote, before posting that document for the general public. See here and here for examples from EA PH, and this other example from a different author.

I think EAs or EA orgs quite often ask for feedback on an external document or on a write-up of rough thoughts, so I think this tag is worth having.

A potential counterargument is that many (or most) authors want feedback on their posts anyway, and it's hard to separate which posts are feedback requests and which aren't. Ideally, the tag would be used for posts where authors want answers to a few specific questions, or feedback on an external document, rather than just general feedback on the post itself. Would appreciate any thoughts on this!

Comment by briantan on What are some potential coordination failures in our community? · 2020-12-14T02:50:55.770Z · EA · GW

Ah, I think that's a good and interesting suggestion. I think this could still be integrated into the EA Hub, though, so that the contacts are linked to the specific EAs who are in touch with them.

Comment by briantan on CEA's 2020 Annual Review · 2020-12-13T16:11:16.203Z · EA · GW

Got it, thanks Max!

Comment by briantan on CEA's 2020 Annual Review · 2020-12-13T16:10:56.036Z · EA · GW

Got it, thanks for the context!

I'm curious whether you have a target annual % return for this fund's investments, and what your target % return is for your personal funds earmarked for donation. I also wonder whether you think the EAs you know achieve better investment returns than the average investor.

Comment by briantan on Jamie_Harris's Shortform · 2020-12-13T10:59:27.766Z · EA · GW

Hey Jamie, thanks for doing this! I find the results interesting. Just want to point out what I think are two small typos that made it harder to understand what you wrote:

I asked a free-text response question: "Do you think that the value of graduate training would increase/compound, or decrease/discount, the got further into their career?" 4 respondents wrote that the value of graduate training would decrease/discount the got further into their career

Could you correct what you put above?

Also, I'm curious about:

1. Which Master's or Ph.D. degrees are you considering?

2. What do you think would be a good Master's or Ph.D. degree for the average "generalist" researcher at an EA / longtermist non-profit (if this is different from what you'd personally take)?


Comment by briantan on What are some potential coordination failures in our community? · 2020-12-12T15:53:43.640Z · EA · GW

I believe the EA Hub is supposed to be this, but I don't think it's at this level yet (both in terms of usage and features). Vaidehi can talk more about this.

Comment by briantan on Tips for Volunteer Management · 2020-12-11T13:04:32.058Z · EA · GW

Thanks, Naomi, for this post! I think there's some good advice here. The advice would be more persuasive, though, and would stick with me more if you cited examples from your personal experience.

I think you could cite anonymized examples (or check with the volunteers whether they're willing to have potentially de-anonymizing details included). It would be good to hear stories of volunteers feeling really good because this advice was followed, or feeling really bad because it hadn't yet been implemented at the time. Just wanted to give this feedback!

Comment by briantan on CEA's 2020 Annual Review · 2020-12-11T03:53:19.751Z · EA · GW

I’m curious to learn more about the following:

We invested funds to generate a return of $126,000 in interest and $1.8M in investment returns (for the Carl Shulman discretionary fund), while freeing up money held in old restrictions.

$1.8M in investment returns for a fund that initially started at $5M is quite high - that's roughly a 36% return in a year. How was that investment return achieved? Also, this is the first time I’m hearing about this discretionary fund. Are there any reports of payouts made by this fund, or what the plans are for it for the future?

Comment by briantan on CEA's 2020 Annual Review · 2020-12-11T03:51:57.588Z · EA · GW

Thanks for publishing this very thorough review, Max! I read most of it; the community building grants and group support sections were particularly important and useful for me, though the other parts were also useful to read.

I have a few questions, which someone at CEA may want to answer. I'll split these into separate comments so that people can reply to each question separately:

1. Regarding this line in your section on community building grants:

We judged 86 of the 145 group members to have taken significant action based on a good understanding of EA ideas, and we categorised these cases as strong, moderate, or weak based on our expectations about the counterfactual impact the group had on the individual.

I'd like to learn more about how CEA or the CB grants programme categorizes these cases as strong, moderate, or weak impact. I think there's a lot of value in community builders, especially CB grantees, better understanding what CEA considers impactful (and how you measure it). This would prevent CB grantees from being very positive about what CEA considers only a weak case of impact, or from thinking something is moderate impact when CEA thinks it's strong. It would also let community builders focus on generating more moderate or strong cases of impact, although of course they shouldn't Goodhart (i.e. optimize so hard that it hampers the group).

I also understand that examples of impact (and therefore how to evaluate them) can vary widely across group types (national, city, or university) and countries (e.g. EA Philippines vs. EA London), but I'd still like to hear more about it.

In my head, CEA should measure two things when trying to assess a group's impact on its members:

a) How high is the expected value of the action or career plan change the person has taken?

b) How counterfactual was the group's impact on the person?

These two can then be combined to classify a case as strong, moderate, or weak impact. I'd like to know whether this framing of high expected value plus degree of counterfactuality is aligned with CEA's and/or the CBG programme's thinking on evaluating these cases. If you think this information is too sensitive to share on the Forum, though, you could send it to me and/or other CB grantees privately (or let me know if Harri will release a write-up on this for community builders in 2021). Thanks!

Comment by briantan on 500 Million, But Not A Single One More · 2020-12-09T09:11:39.085Z · EA · GW

Sharing this here for those viewing this post today (Dec. 9, 2020): 

The Future of Life Institute (an EA-aligned org) is awarding its annual Future of Life Award to Viktor Zhdanov and Bill Foege in about 8 hours' time, for their critical contributions to the eradication of smallpox!

If you'd like to attend the event, learn more about it here:

The event will include the following people:
- Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases,
- Bill Gates, Microsoft co-founder and philanthropist,
- Jennifer Doudna, Berkeley Biochemist and 2020 Nobel Laureate.
Previous winners are Vasili Arkhipov, Stanislav Petrov, and Matthew Meselson, who helped prevent two nuclear wars and one bioweapon arms race; they or their family members will join the celebration. The event will also feature a panel discussion, moderated by MIT Prof. Max Tegmark, about interesting issues relevant to this year's award. You can find more information about the award here:

To attend the event, join via the following links:

Comment by briantan on Clean technology innovation as the most cost-effective climate action · 2020-12-07T01:20:25.359Z · EA · GW

Hey Michael, thanks for pointing out those two externalities of that family planning intervention! Those two make me even more positive about family planning interventions. I think $0.23/tonne of CO2 is pretty cost-effective.

Comment by briantan on What are the "PlayPumps" of Climate Change? · 2020-12-07T01:15:49.705Z · EA · GW

Hey Dan, I think this is a brilliant example. I think this and Solar Roadways are the best examples listed here. The article you cited is pretty good:

Maybe a few EAs should test out using this Carbon for Water example or Solar Roadways in a giving game (vs. Giving Green or Founders Pledge charities) next time!

Comment by briantan on What are the "PlayPumps" of Climate Change? · 2020-12-07T01:01:57.801Z · EA · GW

Thanks for pointing this out, that makes sense! I've copied the relevant part here for anyone who wants to find it:

Giving Green's goal is to use an EA-style, highly rigorous research methodology to develop recommendations that will make current climate donations much more effective and that will crowd money to effective organizations. For instance, consider the space of carbon offsets. We believe (as do the researchers at Founder's Pledge and Let’s Fund) that carbon offsets are not among the most cost-effective solutions to climate change. However, unlike Founder's Pledge, we conduct research and make recommendations in the offset space because many organizations (specifically businesses looking to make claims of carbon-neutrality) want to buy carbon offsets. We think that providing recommendations in the offset market has a good amount of social value, as purchasers in this space are unlikely to switch over to, for example, policy charities.