Posts

Why are some EAs into cryonics? 2021-03-17T07:10:05.693Z
A ranked list of all EA-relevant documentaries, movies, and TV series I've watched 2021-02-21T11:00:26.634Z
EA Philippines 2020 Annual Report 2021-01-27T07:52:56.463Z
EA Philippines 2020 Community & Impact Survey Report 2021-01-27T06:06:45.801Z
How EA Philippines got a Community Building Grant, and how I decided to leave my job to do EA-aligned work full-time 2021-01-27T05:57:41.056Z
CHOICE - Creating a memorable acronym for EA principles 2021-01-07T07:12:10.816Z
How do EA researchers decide on which topics to write on, and how much time to spend on it? 2020-12-31T14:33:07.407Z
BrianTan's Shortform 2020-12-27T15:04:12.608Z
What are the "PlayPumps" of Climate Change? 2020-12-05T15:03:30.919Z
Can we convince people to work on AI safety without convincing them about AGI happening this century? 2020-11-26T14:46:28.241Z
If someone identifies as a longtermist, should they donate to Founders Pledge's top climate charities rather than to GiveWell's top charities? 2020-11-26T07:54:53.479Z
Questions for Nick Beckstead's fireside chat in EAGxAPAC this weekend 2020-11-17T15:05:24.154Z
Questions for Jaan Tallinn’s fireside chat in EAGxAPAC this weekend 2020-11-17T02:12:43.781Z
Feedback Request on EA Philippines' Career Advice Research for Technical AI Safety 2020-10-03T10:39:15.736Z
EA Philippines' Strong Progress and Learnings in 2019 2020-02-04T13:37:12.714Z

Comments

Comment by BrianTan on Wild Animal Initiative featured in Vox · 2021-04-22T11:02:39.191Z · EA · GW

I think this post should be on the Forum and doesn't feel too much like marketing/propaganda to me. I wouldn't have known about this article if not for WAI posting about it.

I guess if we start getting way too many (i.e. more than 4 a day across different orgs?) of these positive self-posts by organizations on the forum, we should start getting concerned, but I don't think we're close to that point yet.

Comment by BrianTan on CEA update: Q1 2021 · 2021-04-22T10:28:55.606Z · EA · GW

Got it, thanks! If they could be compiled, the EA Hub Resources (such as this page) would probably be the best place to put them?

Comment by BrianTan on CEA update: Q1 2021 · 2021-04-22T07:28:37.121Z · EA · GW

On Fellowship Data:

  1. If a fellowship starts in Feb but ends in April, does that count toward Q1 data or Q2 data? 
  2. Regarding the number of participants for fellowships, I'm not sure how that data is collected, but maybe Marie Buhl reached out privately to fellowship organizers to collect it. Either way, there's a good chance that data for EA Philippines's chapters isn't accurately included in the Q1 data.

    For example, 2 of our student chapters in EA Philippines, EA UP Diliman and EA Blue, have been running intro fellowships since February and March respectively. They have a combined 67 participants (36 and 31 respectively), though a few have already dropped out or are not set to graduate. I assume both universities are non-focus universities. So if our participant data is part of your Q1 data set, that would mean 29% of fellows from non-focus universities (67/230) are from our universities. I think that's too high, so my assumption is that most or all of their fellows are not yet in the data set.
  3. Related to #2, I think it would be useful to have a public Google Sheet with rows for groups running fellowships, and columns for their country and university, when they started, when they will end, how many participants, and how many graduates (if data on that already exists). I assume most fellowship organizers would be okay with this data being public. Having this public Google Sheet would let us easily know which groups are and aren't on the list yet. It's also good for others to know which universities/cities have fellowships - maybe they can recommend friends from those universities/cities to join them.
  4. I think it would be good to include data for both participants and graduates. Would be interesting to know what the avg. drop-off rate is for these fellowships.
  5. Also, a separate Google Sheet that lists which groups are going to run fellowships or reading groups in the next 1-3 months (and whether they accept people virtually or not) could also help interested people know what fellowships or reading groups are coming up. But I think this is less important than the other things I suggested or mentioned above.

Comment by BrianTan on CEA update: Q1 2021 · 2021-04-22T07:04:55.593Z · EA · GW

This update and CEA's plans for 2021 mention the terms "highly-ranked university groups" and "focus universities" a lot. Could you clarify what you mean by one or both of those terms (e.g. are you looking at the top 40 universities globally based on a specific ranking)? Thanks!

Comment by BrianTan on CEA update: Q1 2021 · 2021-04-22T07:03:00.837Z · EA · GW

Thanks for writing and publishing this! Lots of exciting progress. I have some questions, which I'll separate into different comments:

Group model write-ups: James Aung coordinated organizers at CBG-funded university groups (Oxford, Cambridge, Harvard, Yale, Stanford, and Brown) to write up documents explaining how their groups work.

Is it possible to get links to these documents? I and other student chapter leaders in EA Philippines would be interested to read them, and I think other student group organizers would be interested too. In particular, we think a lot about what the best organizational structure is for uni groups, and what strategies to use for a) goal-setting, b) deciding what projects to run, and c) dividing roles and responsibilities.

Comment by BrianTan on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-22T02:00:26.681Z · EA · GW

Thanks for taking action on the feedback! I welcome this change and am looking forward to that new episode. Here are three people I would nominate for that episode:

Tied as my top preference:

  1. Peter Hurford - Since he has already volunteered to be interviewed anyway, and I don't think Rethink Priorities's work has been featured yet on the 80K podcast. They do research across animal welfare, global health and dev't, meta, and long-termist causes, so seems like they do a lot of thinking about cause prioritization.
  2. Joey Savoie - Since he has experience starting or helping start new charities in the near-termist space, and Charity Entrepreneurship hasn't been prominently featured yet on the 80K podcast. And Joey probably leans more towards the neartermist side of things than Peter, since Rethink does some longtermist work, while CE doesn't really yet.

2nd preference: 

  1. Neil Buddy Shah - Since he is now Managing Director at GiveWell, and has talked about animal welfare before too.

I could think of more names (i.e. the ones Peter listed), but I wanted to make a few strong recommendations like the ones I wrote above instead. I think one name missing on Peter's list of people to consider interviewing is Michael Plant.

Comment by BrianTan on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-22T01:42:55.771Z · EA · GW

Just a quick point on this:

And the leaders' forum is quite representative of highly engaged EAs, who also favour AI & longtermist causes over global poverty work by a 4:1 ratio anyway.

If it's only a 4:1 ratio, I don't think longtermist content should make up ~95% of the content on 80K's Intro to EA collection. That survey shows that some leaders and some highly engaged EAs still think global poverty or animal welfare work is the most important to work on, so we should put some weight on that, especially given our uncertainty (moral and epistemic).

As such, it makes sense for 80K to have at least 1-2 out of 10 episodes focused on neartermist priorities like GH&D and animal welfare. This is similar to how I think it makes sense for some percentage of the EA (or "priorities") community to work on neartermist work. It doesn't have to be a winner-take-all approach to cause prioritization and what content we say is "EA".

Based on their latest comment below, 80K seems to agree with putting in 1-2 episodes on neartermist work. Would you disagree with their decision?

Comment by BrianTan on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-17T08:44:25.201Z · EA · GW

I understand - thanks for the clarification!

Comment by BrianTan on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-17T08:41:52.550Z · EA · GW

Hi Ryan,

On #1: I agree that we should focus on the merits of the ideas on what causes to prioritize. However, it's not clear to me that longtermism has won the cause prioritization / worldview prioritization debate so convincingly that it should make up ~95% of an Intro to EA collection, which is what 80K's feed is. I see CEA and OpenPhil as expert organizations on the same level as 80K, and CEA and OpenPhil still prioritize GH&D and Animal Welfare substantially, so I don't see why 80K's viewpoint or the longtermist viewpoint should be given such large deference that ~95% of an Intro to EA collection is longtermist content.

On #2: My underlying argument is that I do see a lot of merit in the arguments that GH&D or animal welfare should be top causes. And I think people in the EA community are generally very smart and thoughtful, so I think it's thoughtful and smart that a lot of EAs, including some leaders, prioritize GH&D and animal welfare. And I think they would have a lot of hesitations with the EA movement being drastically more longtermist than it already currently is, since that can lessen the number of smart, thoughtful people who get interested in and work on their cause, even if their cause has strong merits to be a top priority.

Comment by BrianTan on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-17T08:25:55.004Z · EA · GW

Hi Rob, I also like the idea of "there being a wide variety of 'intro' EA resources that reflect different views of what EA causes and approaches are best, cater to different audiences, and employ different communication/pedagogy methods."

However, it's not easy for people to "make new intro resources to compete with the old one, rather than trying to make any one resource universally beloved (which can lead to mediocre or uncohesive designed-by-committee end products)." Most people do not have the brand or reach of 80,000 Hours.

It's likely that only very popular figures in the EA community would get substantial reach if they made an Intro to EA collection, and it would still likely not be as large as the reach of 80,000 Hours's. As such, 80,000 Hours's choice of what Intro to EA resources to include is quite hard to compete with, and thus should ideally be more representative of what the community thinks.

80K will somewhat solve this problem themselves since they will create their own feed that exposes people to a wider variety of problems and topics, and possibly they could create a near-termist feed aside from that too. But I still think it would be better if what 80K marketed as an "Intro to EA" feed had more global health and dev't and animal welfare content. I talk more about this here.

I do see that many hours probably went into picking the ten episodes. But it seems like 80K didn't get enough feedback from more people (or a wider variety of people) before releasing this. Hence I'm giving my feedback this way, and judging from the upvotes, quite a few people agree with me. 

Of course, I agree that more testing and re-listening could be done. But I would think that a significant % of people who get interested in EA, including quite a few people who are into longtermism, first get interested in global health and development or animal welfare, before getting interested in longtermism. And I think 80K might be losing out on these people with this feed.

Comment by BrianTan on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-17T08:09:00.425Z · EA · GW

Hi Rob and Keiran, thanks for the quick response! I agree that this is a difficult issue. Thanks for letting us know about that 2nd feed with a wider variety of things that EAs are up to. I think that's a good thing to have.

Even with that 2nd feed though, I think it would still be better if the "Effective Altruism: An Introduction Feed" had the Lewis Bollard episode and an episode on global health and dev't, whether by substituting episodes or expanding it to 12 episodes. I don't want to make this into a big debate, but I want to share my point of view below.

Because the feed is marketed as something "to help listeners quickly get up to speed on the school of thought known as effective altruism", and because of 80K's wide reach, I think some people seeing this list or listening to this feed may get a misrepresentative view of what EA is. Specifically, they might think we are more longtermist than the community really is, or feel expected to lean longtermist.

Also, all or most of the popular "Intro to EA" resources or collections out there devote a substantial part to global health and dev't and animal welfare, such as the Intro to EA on the EA website, the Intro EA Fellowship syllabus created by EA Oxford (with input from CEA and other EAs), and Will MacAskill's TED talk. And CEA still makes sure to include substantial GH&D (Global Health and Dev't) and Animal Welfare content in their EA conferences.

All of these are reflections that the community still prioritizes these two causes a lot. I know that key leaders of EA do lean longtermist, as seen in 80K's key ideas page, or some past leaders forum surveys, or how 3-4 weeks of the Intro EA Fellowship syllabus are on longtermist-related content, while only 1-2 weeks are on GH&D, and 1 week on animal welfare / moral circle expansion. 

I'm fine with the community and the resources leaning longtermist, since I do generally agree with longtermism. But I don't think "Intro to EA" resources or collections like 80K's feed should only have snippets/intros/outros of GH&D and animal welfare content, and then be ~95% longtermist content.

Of course, people consuming your feed who are interested in global health and dev't and animal welfare could listen to your episode 0/intros/outros, or find other podcast episodes that interest them through your website. But I worry about a larger trend here of GH&D and animal welfare content being drastically lessened, and people interested in these causes feeling more and more alienated from the EA community.

I think 80K has some significant power/effect in influencing the EA community and its culture. So I think when 80K decides to reshape the way effective altruism is introduced to be ~95% longtermist content, it could possibly influence the community significantly in ways that people not interested in or working on longtermism would not want, including leaders in the EA community who work on non-longtermist causes.

I'd understand if 80K still decides not to include an episode on GH&D and animal welfare into your Intro to EA feed, since you're free to do what you want to do, but I hope I laid out some arguments on why that might be a bad decision. 

It's a bit time-consuming and effortful to write these, so I hope this doesn't blow up into a huge debate or something like that. Just trying to offer my point of view, hoping that it helps!

Comment by BrianTan on Launching a new resource: 'Effective Altruism: An Introduction' · 2021-04-16T01:08:26.393Z · EA · GW

Thanks for making this podcast feed! I have a few comments about what you said here:

 Like 80,000 Hours itself, the selection leans towards a focus on longtermism, though other perspectives are covered as well. The most common objection to our selection is that we didn’t include dedicated episodes on animal welfare or global development.

We did seriously consider including episodes with Lewis Bollard and Rachel Glennerster, but i) we decided to focus on our overall worldview and way of thinking rather than specific cause areas (we also didn’t include a dedicated episode on biosecurity, one of our 'top problems'), and ii) both are covered in the first episode with Holden Karnofsky, and we prominently refer people to the Bollard and Glennerster interviews in our 'episode 0', as well as the outro to Holden's episode.

I think if you are going to call this feed "Effective Altruism: An Introduction", it doesn't make sense to skew the selection towards longtermism so heavily. Maybe you should have phrased the feed as "An Introduction to Effective Altruism & Longtermism" given the current list of episodes. 

In particular, I think it would be better if the Lewis Bollard episode was added, and one on Global Health & Dev't, such as either the episode with Rachel Glennerster or James Snowden (which I liked). 

If 80K wanted to limit the feed to 10 episodes, then that means 2 episodes would have to be taken out. As much as I like the episode with David Denkenberger, I don't think learning about ALLFED is "core" to EA, so that's one that I would have taken out. A 2nd episode to take out is a harder choice, but I would pick between taking one out among the episodes with Will MacAskill, Paul Christiano, or Hilary Greaves. I guess I would pick the one with Will, since I didn't get much value from that episode, and I'm unsure if others would.

Alternatively, an easier solution is to expand the number of episodes in the feed to 12. 12 isn't that much more than 10.

I think it is important to include an episode on animal welfare and global health and development because

  1. The EA movement does important work in these two causes
  2. Many EAs still care about or work on these two causes, and would likely want more people to continue entering them
  3. People who get pointed to this feed and don't get interested in longtermism (or aren't a fit for careers in it) might think that the EA movement is not for them, even when it could be, if they just learned more about animal welfare or global health and development.

As a broader point, when we introduce or talk about EA, especially with large reach (like 80K's reach), I think it's important to convey that the EA movement works on a variety of causes and worldviews. 

Even from a longtermist perspective, I think the EA community is better the "broader" it is and the more it also includes work on "non-longtermist" causes, such as global health and development and animal welfare. This way, the community can be bigger, and the bigger the community is, the easier it probably is to influence things for the better over the long term. For example, more people would be in government or in influential roles.

These are just my thoughts. I'm open to hearing others' thoughts too!

Comment by BrianTan on 8+ productivity tools for movement building · 2021-04-13T14:34:33.290Z · EA · GW

Thanks for this post! From your list above, we in EA Philippines use Airtable, Calendly, Google Workspace, Asana, and Slack, and we generally have good experiences with these. I've also used Zapier personally before, but we haven't made much use of it yet for EA Philippines. We should soon, though.

Other software EA PH has used: Discord, Webflow, Miro, and Otter.ai

#1: Our 3 student chapters in EA Philippines use Discord for their groups/fellowships, not so much for productivity but for community chatting. Students in the Philippines are more active on Discord than on Slack.

#2: Another honorable mention is Webflow, which we use for EA Philippines's website. Webflow's student plan only costs $19/year, and they also have a free plan (if you are okay with hosting your site on a domain ending in "webflow.io"). CEA would probably be happy to provide funding for websites on Squarespace, but Webflow's student plan is still significantly cheaper and could save your group or CEA some money.

One con about Webflow is it's a bit harder to use than Squarespace, but there are still free or paid templates that can be used. I think CEA / Catherine Low might have an EA Group website template on Webflow that could be duplicated, though I'm unsure if that's still available. I have knowledge of UI/UX design and of Webflow, so that's why I used it. And I wanted the design to be custom versus using Squarespace, which seems harder to fully customize.

#3: For brainstorming / ideation sessions, collaborative meetings, or just making diagrams, you can use the collaborative whiteboard platform Miro. We used this a lot at my previous company, and we've used it to make a theory of change diagram for EA Philippines. Miro's student plan gives you Professional access for 2 years. Google Jamboard is also an alternative, though I prefer Miro.

#4: For transcribing talks or meetings, you can use Otter.ai. We've used this before to easily transcribe interviews we've done with experts or to create transcripts of past talks EA Philippines has organized. Otter has a free plan, but you can get a free 1-month business trial, and their plans are 50% off for students and teachers.

Free Student Plans for Airtable and Asana

Something not mentioned in your article above is that student groups can get free plans for Airtable and Asana. Airtable for Education gives students free access to Airtable Pro for a maximum of 2 years. Even though Airtable has a free plan, it's easy to outgrow it, so we're thankful one of our student chapter leaders applied for an Airtable Pro workspace. EA Philippines's Airtable base is in this free Airtable Pro workspace now.

Meanwhile, Asana also gives student groups a free 6 months of Asana Pro. EA Philippines and our 3 student chapters have been using this free 6-month trial for the last 5 months, since Asana's free plan only allows 15 people, and we wanted easier collaboration between our groups. We will soon have to migrate to the EA Hub Asana though, which is going to be a headache. But at least we had a test run of Asana for 6 months, and now we're more confident about spending ~$200 USD of our grant on it.

Hope this info helps other group leaders, especially other student chapters!

Comment by BrianTan on Announcing "Naming What We Can"! · 2021-04-06T15:08:46.162Z · EA · GW

How about Confidance, since guesstimate cells look like a dance floor to me

Comment by BrianTan on Announcing "Naming What We Can"! · 2021-04-02T02:08:22.344Z · EA · GW

BERI should be renamed to BERI Good.

Comment by BrianTan on New Top EA Causes for 2021? · 2021-04-02T01:50:38.505Z · EA · GW

This is gold.

Comment by BrianTan on New Top EA Causes for 2021? · 2021-04-02T01:34:49.212Z · EA · GW

What a grEAt idEA!

Comment by BrianTan on New Top EA Causes for 2021? · 2021-04-01T09:44:40.848Z · EA · GW

Punning What WEA Can.

The acronym EA is so flexible and can be used to create so many puns. And yet there are so few puns being used or made in the EA community. So I think more EAs, on the margin, should create and use puns with the EA acronym. These can be used as names for group events, or to show how EA is already ingrained in so many concepts or causes. Here are a bunch of ideas:

Group Events

The Most Pressing Puns (using words that already have EA in them)

  1. REAding Groups
  2. RetrEAts
  3. IcebrEAkers
  4. tEA time
  5. External OutrEAch
  6. ResEArch Workshop
  7. Podcast Episode REActions Meetup
  8. BEAch Outings

Other Potentially Promising Puns

  1. FEAllowships
  2. ConfEArences
  3. LightnEAng Talks
  4. wEAtch Parties
  5. DEAbates
  6. DinnEAr Parties
  7. CarEAr Planning Workshops
  8. GathEAr Town
  9. SocEAls
  10. GEAneral AssembliEAs
  11. Speed dEAting
  12. PodcEAst Discussions
  13. BoEArd Games
  14. CowEArking
  15. One-on-OnEAs
  16. SlEAck Discussion
  17. GivEAng Games

EA Concepts / Topics

The Most Pressing Puns (using words that already have EA in them)

  1. Global HEAlth
  2. Mental HEAlth
  3. EArning to Give
  4. NuclEAr Security
  5. Global Priorities ResEArch
  6. Scientific ResEArch
  7. GrEAt Power Conflict
  8. PEAce & Conflict Studies
  9. Clean MEAt
  10. Plant-Based MEAt
  11. High School OutrEAch
  12. Using the hEAd & hEArt
  13. ReplacEAbility
  14. LEAdership
  15. ForeseEAbility
  16. NuclEAr Energy
  17. TEAching EA
  18. DisEAse Eradication
  19. DiarrhEA Eradication
  20. Pain & PlEAsure
  21. Moral REAlism
  22. MEAning CrEAtion

Other Potentially Promising Puns

  1. RationalitEA
  2. DiversitEA
  3. ForecEAsting
  4. TEAchnical AI Safety
  5. Animal WEAlfare
  6. EAconomic Growth
  7. DEAvelopment
  8. ClimEAte Change
  9. PEArsonal Fit
  10. BiosEAcurity

Comment by BrianTan on Open and Welcome Thread: March 2021 · 2021-03-30T07:40:06.285Z · EA · GW

No problem!

Comment by BrianTan on Open and Welcome Thread: March 2021 · 2021-03-26T03:39:51.774Z · EA · GW

Hey Rob, welcome to the forum!

Given your interest in Buddhism, you might be interested in joining the Buddhists in EA Facebook group. There is also an EA Meditation community that you or others interested in mindfulness can join. I believe they have 3 weekly guided meditations for people to join. Two people in my group in EA Philippines regularly join the meditations and find them valuable.

Regarding what you said about wanting to do something you could directly see the impact of, I think there are many ways you can still achieve this within EA. As you learn more about EA, you can have 1-1's or chats with people newer to EA than you, and help them learn more about EA or one of its causes/concepts, give them career or productivity advice, or even just some tips on mindfulness.

Also, being able to volunteer for an EA project can allow you to see some more direct impact other than just donating. There might be a volunteer or job opportunity that might interest you in the EA Work Club.

Lastly, regarding if anyone in EA is doing anything for communities on a local level in the developed world, I can think of a few ideas:

  1. Mental health is a big problem in both developed and developing countries, and is prioritized by some people in the EA community. For example, the charity Canopie, incubated by the EA org Charity Entrepreneurship, is developing and testing a mobile app to treat pregnancy-related mental health issues in the United States. So if you can find ways to cost-effectively improve mental health in your local community, that could still be impactful to work on.
  2. Another thing that involves local communities is if you're in the U.S., having approval voting become the voting method in your state could be high-impact, because of how approval voting can lessen polarization. The EA-aligned org called The Center for Election Science works on this.
  3. Finding ways to get involved or educated about the policymaking in your area, and seeing if you can advocate for more effective and evidence-based policies, can plausibly be high-impact too.
  4. Finally, I guess doing EA community building work, or running Rationality talks/workshops for your local community, are ways you can introduce people in your area to some relevant ideas, which can help them think of other ways they can make an impact on the world.

I hope some of these links and ideas help!

Comment by BrianTan on EA Funds has appointed new fund managers · 2021-03-26T00:42:02.486Z · EA · GW

Got it, thanks for explaining!

Comment by BrianTan on EA Funds has appointed new fund managers · 2021-03-25T11:45:20.036Z · EA · GW

Thanks for clarifying! I edited my comment to say EA Funds instead of CEA now.

Comment by BrianTan on EA Funds has appointed new fund managers · 2021-03-25T11:23:43.910Z · EA · GW

I guess as a personal example, I would be somewhat interested to apply to be a guest fund manager for the EA Infrastructure Fund. I'm not sure who within the EA Funds network would directly recommend or refer me to apply, hence an expression of interest form is the main way I can signal that I am interested and potentially a good fit for this role.

I think if you opened up public applications for the role, I would like to think I would be within the top 25-50% of applicants (I'm very uncertain here though, since I don't know who I'm applying against. Also, I'm likely to just be in the average candidate range, i.e. the 50% mark). But I think there's a small chance, maybe 5-10%, that I would be as good as the guest fund managers you would be willing to hire that apply through your private application process. And I think a 5-10% chance is good enough for me to spend up to 1 hour applying for the role.

I also want to be able to know what EA Funds looks for to gauge someone's grantmaking skill/ability, and going through an application process with questions about this would help me learn about my own grantmaking skills/abilities/fit too.

Also, an expression of interest form should hopefully take only 10 minutes to fill out, and you could contact the top 10-30 candidates (i.e. people who have the most reputable backgrounds) who fill it out to go through a longer, 1-hour application form.

I think my background is somewhat reputable within the EA community, but I don't know who in the EA Funds network would recommend me to apply to be a fund manager. And maybe there are ~10 other people in similar situations as me, a couple of whom could be a good fit to be a guest fund manager, if only they were recommended/referred, or could signal their interest through a short form!

Overall though, I understand EA Funds' viewpoint, and would understand if you continue to keep using a private application process. I guess someone who wanted to apply to be a guest fund manager or permanent fund manager could literally just email you though to signal that they are interested, so that they could be invited to apply. But having an expression of interest form could streamline that for you, and increase the chance that someone who's a good fit but isn't connected would still signal to EA Funds that they are interested.

Comment by BrianTan on How to run an effective stall (we think) · 2021-03-24T08:59:38.377Z · EA · GW

Cool and smart setup! I'm a bit surprised that cancer research and biorisk had very few votes (I thought cancer research would be a bit more popular among non-EAs).

Regarding biorisk, I think labelling it "Biosecurity and Pandemic Preparedness" would lead more people to consider voting for it, which I think would be better.

Comment by BrianTan on Proposed Longtermist Flag · 2021-03-24T06:28:19.122Z · EA · GW

I still understood it as a sun, but maybe 10-20% of people won't?

Comment by BrianTan on Proposed Longtermist Flag · 2021-03-24T06:27:20.766Z · EA · GW

I had to Google vexillological - I learned a new word today!

Comment by BrianTan on Proposed Longtermist Flag · 2021-03-24T06:21:56.775Z · EA · GW

Personally, I like the symbolism and explanations for the flag, but I like the aesthetic appeal and simplicity of the utilitarian flag a lot more.

I have two somewhat nitpicky comments on the aesthetics:

  1. Some people might not understand that what's on the flag is a sun (without seeing the explanation). It looks like a red Chinese fan to me. The sun could be made to look simpler and more sun-like, e.g. with fewer rays and as a full circle instead of a semicircle.
  2. It's hard for me to like or get used to these colors, but I think the symbolism behind them means they should stay as is.

Nevertheless, even if the two aesthetic changes I suggested above aren't made, I think this is good enough!

Comment by BrianTan on EA Funds has appointed new fund managers · 2021-03-23T23:52:29.735Z · EA · GW

I think it was sensible that EA Funds did a private appointment process rather than a public application process for this round of new fund managers. 

But for future guest fund managers, would you consider having public applications? Quite a few people might be interested in and a fit for such a role, but not closely connected with anyone who might nominate them to apply. Maybe even just an expression of interest form would be something EA Funds could have, for people to signal that they want to be either a guest fund manager or a permanent one.

Comment by BrianTan on EA Funds has appointed new fund managers · 2021-03-23T14:24:43.630Z · EA · GW

Thanks for this post and congrats to the new fund managers! I wish the outgoing fund managers well too.

One quick question: Are all the fund manager roles still on a volunteer basis?

Also, I just saw the new EA Funds homepage. It looks good, and the copy seems well-written and well thought out to me. Kudos!

Comment by BrianTan on Open and Welcome Thread: March 2021 · 2021-03-23T14:07:42.955Z · EA · GW

Welcome Sive! Have you joined an EA fellowship yet? An intro one can help you understand the general underlying concepts, and an in-depth one can also let you go deeper in them if you've finished an intro fellowship :)

Comment by BrianTan on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-23T00:44:51.150Z · EA · GW

Thanks for clarifying! I wasn't aware. 

I thought the term AI safety was shorthand for technical AI safety, and didn't really include AI policy/strategy. I personally use the term AI risk (or sometimes AI x-risk) to group together work on AI safety and AI strategy/policy/governance, i.e. work on AI risk = work on AI safety or AI strategy/policy. 

I was aware though of AI safety being referred to as AI alignment.

Comment by BrianTan on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-22T01:03:16.941Z · EA · GW

Great, I think that's a good idea actually! I'm looking forward to seeing other potentially good ideas like that from Nonlinear's research.

Comment by BrianTan on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-21T10:47:51.034Z · EA · GW

I see, makes sense!

Comment by BrianTan on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-21T10:46:28.887Z · EA · GW

Thanks for the reply Kat!

However, I'm still a bit confused. When you say "We are not intending to do technical AI safety work. We are going to focus on non-technical for the time being.", do you mean you will only be researching high leverage, non-technical AI safety interventions? Or do you mean that the research work you're doing is non-technical?

I understand that the research work you're doing is non-technical (in that you probably aren't going to directly use any ML to do your research), but I'm not that aware of what the non-technical AI Safety interventions are, aside from semi-related things like working on AI strategy and policy (i.e. FHI's GovAI, The Partnership on AI) and advocating against shorter-term AI risks (i.e. Future of Life Institute's work on Lethal Autonomous Weapons Systems). Could you elaborate on what you mean when you say you will focus on non-technical AI safety work for the time being? Maybe you could give some examples of possible non-technical AI safety interventions? Thanks!

Comment by BrianTan on Open and Welcome Thread: March 2021 · 2021-03-20T11:11:05.830Z · EA · GW

Welcome Schuyler! I haven't encountered anyone else yet in the EA community who works on energy and climate policy in a government role, so it's nice to have someone with your background. 

You might be interested to join the Effective Environmentalism Facebook group, which gathers people who want to discuss and collaborate on impactful action for the climate crisis and other sustainability challenges. If you haven't seen some of Founders Pledge's resources on their Climate Change analysis and charity recommendations, such as this executive summary, I'd encourage you to read them. I'd also love to hear your thoughts on them!

Comment by BrianTan on How to make people appreciate asynchronous written communication more? · 2021-03-20T10:25:43.395Z · EA · GW

Having an updates document that people fill in every week might be useful for you to either replace or complement your meetings? Alternatively, an agenda doc per meeting where you can transcribe whatever the other people say helps solve the problem of not being able to remember or document what other people say. I also record a few of the meetings I'm in, especially important ones (with the other person/s' permission of course), in case I want to revisit them in the future.

Comment by BrianTan on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-20T02:53:07.766Z · EA · GW

Not sure if you noticed, but your comment got cut off after "making".

Comment by BrianTan on Name for the larger EA+adjacent ecosystem? · 2021-03-19T05:41:51.473Z · EA · GW

Why not just call it the EA-adjacent ecosystem? I think there are lots of communities that intersect with EA, and it would probably be difficult to make one acronym that includes all of these communities.

Comment by BrianTan on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-19T01:50:46.545Z · EA · GW

I could imagine that the feedback loops for technical AI safety research might be long - i.e. 2 years or longer (although I'm unsure). Would you agree with this? 

Also, roughly how many months of FTE work do you expect a typical grant to cover?

Comment by BrianTan on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-19T01:49:59.335Z · EA · GW

What grant sizes do you think you will be giving for your first year of making grants?

Comment by BrianTan on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-19T01:49:42.186Z · EA · GW

Neither founder seems to have a background in technical AI safety research. Why do you think Nonlinear will be able to research and prioritize these interventions without prior experience or familiarity with technical AI safety research?

Relatedly, wouldn't the organization be stronger if it hired a full-time researcher, or had a co-founder, with a background in technical AI safety research? Is this something you're considering?

Comment by BrianTan on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-19T01:48:07.173Z · EA · GW

This organization is interesting. I have a few questions, which I'll split into different questions, so people can vote on them separately:

What made you decide to start an organization on researching high leverage AI Safety interventions?

Comment by BrianTan on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-19T01:18:34.006Z · EA · GW

Also, yes, execs can blame whoever wrote the racist or inappropriate content (if it was released without anyone else approving it) - in a big company that would be a copywriter or content designer, but in smaller companies it could be a UI/UX designer.

Oh and I don't think it's ridiculous to think that an initiative is failing because a designer decided to not make it prominent enough on the website. Making it more prominent could help, and that is something the designer has a say on.

Comment by BrianTan on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-19T01:12:33.762Z · EA · GW

I guess something I'll agree on here is that CEA contracting a UI/UX designer who isn't that familiar with CEA's goals and the EA movement might totally miss out on the need for EA being a community to be highlighted more in the website.

But that doesn't mean a UI/UX designer doesn't have to try and surface what the most important things to include in a webpage are. A lot of designers create the designs of entire websites, based on talking to users and understanding the organization's goals. Each section's copywriting and layout is an important design choice. 

Yes, the executives in an organization have a say in the website's content and design, but that doesn't mean the UI/UX designer can't have a say in that.

A related field to UI/UX design is Content Design and Copywriting, and some UI/UX designers do both. Wouldn't you at least say that choosing what content to include on a website is a content designer or copywriter's job? (Yes, other people have a say in it, but that doesn't mean the ideas can't come from a content designer too.)

Comment by BrianTan on AMA: Holden Karnofsky @ EA Global: Reconnect · 2021-03-19T01:03:51.214Z · EA · GW

Hey Charles, yeah Sean _o_ h made a similar comment. I now see that a lot of the scientific research grants are still targeted towards global health and development or biosecurity and pandemic preparedness. 

Nevertheless, I think my questions still stand - I'd still love to hear how OpenPhil decided to grant more towards scientific research, especially for global health and development. I'm also curious if there are already any "big wins" among these scientific research grants.

I also think it's worth asking him "Do you think more EAs should be looking into careers in scientific research? Why or why not?". Only a few EA groups have discussion groups about scientific research or improving science, so a related question would be whether he thinks there should be more reading or discussion groups on these topics, to increase the number of EAs interested in scientific research as a career.

Comment by BrianTan on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-18T05:19:47.012Z · EA · GW

I don't have the energy to fully engage with these, but maybe we just misunderstand each other in terms of what we define as UI/UX design. To me, and many other UI/UX designers, UI/UX design is the end-to-end experience of using a website, product, or service, so I think everything I pointed out still falls within its realm. It's not just about better interactions, and content choices and tradeoffs can still be considered part of the UI/UX design.

Regarding my ethnicity, I have lived in the Philippines for pretty much all of my life, but I am of Chinese descent. I lead community building work for EA Philippines, and have heard a few times from Filipinos that the EA and 80K websites look very Western or White and not as applicable to them. Having non-white photos doesn't fully solve that problem, but we can take steps towards solving it.

Comment by BrianTan on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-18T01:14:49.015Z · EA · GW

Hey Charles, good question.

I'll only comment on the effectivealtruism.org website, since this got 9 upvotes, and I have a few ideas on how the UI or UX could be improved. Ideally I would validate these problems and solutions more with surveys, user interviews, or usability testing, but since I don't have the time for that, I'll just come in with my assumptions. Here are a few problems I see, with a potential solution to each of them:

Problems:

  1. It would take a while for someone new to EA to learn that they can (and probably should) try to find an EA group near them. They might never know that an EA group exists near them from simply browsing the website, either. More broadly, EA being a community and not just a philosophy is not highlighted on the homepage or on most pages.
    1. More rationale: There is no mention of EA groups in the homepage, even if there are 250 of them around the world. A user would mainly find out about the existence of EA groups on the website if they happen to scroll and read #3 on the take action page. However, I think EA groups are one of the best ways people get more engaged and connected in EA, so I think it's important these are highlighted more.
    2. Solution: A prominent header or section dedicated to saying there are 250 EA groups across x no. of countries around the world, and a prominent button or link to https://eahub.org/groups/ would be something I'd recommend.
  2. Someone new to EA scrolling through the website will not see that the movement isn't just for people in the West, or for white people.
    1. Solution: This is related to #1, but having some sort of world map, where people can see how EA groups exist all over the world, will make it seem that this isn't just a movement for the West, or white people.  Maybe something like the one below that I've screenshotted and edited a bit from the EA Hub groups page.
  • Another solution: There should also be a few photos or testimonials where people of different ethnicities are shown. I think CEA's website and GWWC's website both do a good job of this.

  3. I'm uncertain here, but I don't think the average new user would find a lot of value from browsing the EA concepts page.
    1. Solution: Make the EA concepts link a lot less prominent.
  4. Lastly, I'm also uncertain here, but if we want to make EA more broadly appealing, especially to design-minded or visual-minded people, I think they would value the website more and find it more appealing if it had slightly better visual design. The community needs more design-minded people than it currently has, and I think it's fine to make the EA website more broadly appealing, at least through its visual design. Overall, I think the visual design of CEA's and GWWC's current websites is better than the EA website's. CEA's is really good currently, mainly because of its use of nice photos, especially of people.

Comment by BrianTan on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-17T06:12:12.707Z · EA · GW

Yeah I think Jobs-to-Be-Done has been getting more popular recently, and is the more useful one of the two frameworks. It's possible to mix the two too so you have one or more personas and you list their jobs-to-be-done.

Comment by BrianTan on AMA: JP Addison and Sam Deere on software engineering at CEA · 2021-03-17T06:10:37.087Z · EA · GW

Thanks for explaining - sounds like a good process! It's also cool that you two do some pair programming.