CEA's 2020 Annual Review

post by MaxDalton (Maxdalton) · 2020-12-10T23:45:51.497Z · EA · GW · 9 comments

This is a link post for https://www.centreforeffectivealtruism.org/blog/ceas-2020-annual-review/

Contents

  Summary
    Progress
    Disappointments
    Scope and strategy
    Internal
  Programs
    Community Building Grants
      Grants
      New and active Community Building Grants
      Organizer case studies
      Member case studies
          Strong impact: X
          Moderate impact: Y
          Weak impact: Z
      Improved onboarding
      Learning
    Group Support
      Group size and engagement statistics
      Fellowships
      Organizer training
      Minor grants
      Resources
      Advice
      Reflections
    Effective Altruism Forum and online content
      Forum content
      Engagement
      Individual case studies
      Activities
      EA Newsletter
    Events
      Event statistics
          Notes on metrics
        EA Global: Virtual
        EAGxVirtual
        Introductory events
        EA Student Summit
        EAGxAsia-Pacific
      Retreats
        EAGx organizers retreat
        Virtual Coordination Forum
    Community health
      PR
      Risky actors
      Early field-building
        Diversity, equity, and inclusion
      Examples of other work
      Learning
    Executive
      Activities
      Executive priorities
      Staffing
        Changes
        Job satisfaction
      People operations
      Public communications
    Operations
      Financial
      Grantmaking
      Fundraising
      Office management
      Other
  Finances

This is our review of the Centre for Effective Altruism's progress in 2020.[1] We've also posted our plans for 2021 [EA · GW].

Summary

I think that CEA has made good progress this year: we improved our programs while narrowing our scope. Nevertheless, I would have liked to see even more progress on groups.

Progress

Core programs:

Spin-off programs (these will be covered in more detail in an upcoming document):

Disappointments

Suboptimal progress on groups: Improving groups was a major focus of the year. Progress was slow in the first half of the year, partly due to strategic uncertainty. Most of our progress on groups came in the second half of the year, and I think that our recent trajectory is solid. However, progress over the year was less than I wanted.

Failure to produce high-quality intro content: The introductory sequence we produced for the EA Forum wasn’t high-quality enough to promote widely.

Scope and strategy

We narrowed our scope and developed our long-term plans:

Internal

Staff retention was 94%, compared to 83% last year and 53% in 2018. Staff are generally performing well and report high satisfaction.

We made improvements to our financial reporting, HR admin systems, governance, and grantmaking systems. These improvements will also benefit 80,000 Hours, the Forethought Foundation, GWWC, and the Longtermist Entrepreneurship Project.

The following sections provide supporting evidence for this executive summary.

Programs

Community Building Grants

Community Building Grants [? · GW] (CBGs) allow group organizers to professionally engage in local community building activities.

Cost: $1,320,000

FTEs: 1.0 (+ 0.5, Groups Manager)

Summary:

Grants

We actively ran application rounds for EA NYC and EA Cambridge, which produced three full-time organizers (Arushi Gupta, Aaron Mayer, and Dewi Erwan). We expect location-specific application rounds to become an important source of high-value grants in the future.

New and active Community Building Grants

CBG recipient table (fixed)

Organizer case studies

We think that CBGs have contributed to notable improvements in some groups (though CBGs weren’t the only factor).

For instance, before having a full-time CBG recipient, Stanford EA was run by dedicated leaders who had somewhat limited capacity. Two years ago, the group had a small executive team and one of its main activities was running a Doing Good Better discussion group. They now have a 13-person exec team, and run cause-specific programming on AI safety, governance, and biosecurity. This year, they had 195 people in their intro fellowships, including members of other groups and leaders at Berkeley and UCLA. Stanford EA members supported the development of the Stanford Existential Risk Initiative (SERI) Summer Research Fellowship, where over 250 Stanford students applied to conduct summer research related to existential risk (20 were accepted).

Kuhan (the CBG recipient) appears to be directly responsible for a lot of the improvements above, and is referenced in some of the case studies from Stanford. Kuhan seems to have been helped by two organizer retreats we ran in 2019, as well as by the funding to focus full-time on the group. However, we also think that this improvement was partly caused by support from other core EAs, a number of core EA graduate students joining, and the fact that SERI started around the time that Kuhan’s grant started.

We also think the grant provided to Emma Abele at Brown allowed her to build the group — for example, by running an intro fellowship, an in-depth fellowship, and 1:1 career calls. According to the local groups survey, the group went from having no "highly engaged"[6] members (as identified by the organizer) to 37 within a year.

Member case studies

We surveyed 145 members from 10 major 2019 grants.[7] We think that these grants were a bit less promising on average than our 2020/21 grants (but we haven't done a full evaluation of our 2020 grants yet).

We judged 86 of the 145 group members to have taken significant action based on a good understanding of EA ideas, and we categorised these cases as strong, moderate, or weak based on our expectations about the counterfactual impact the group had on the individual.

We think there were 64 weak cases of impact, 19 moderate cases of impact, and 3 strong cases of impact.

What follows are some anonymized examples:

Strong impact: X

X is a technical researcher at a longtermist organization. They performed exceptionally well in a top technical undergraduate degree, and in technical competitions. They received an internship offer from top companies in their priority cause area, and from other prestigious technical companies.

X estimates that they attended 20+ of their group’s events in the last 12 months. They report that a group retreat led them to make new friends, become more involved with their local community, and then go on to relevant technical workshops and 80,000 Hours coaching. They think that this exposure to the EA community made them think more about their career and made it more likely that they would work in this field.

Moderate impact: Y

Y is planning to pursue a two-year master’s in a technical subject at a top university. They were recently offered a grant to work on community building at that university. After their master’s, Y plans to enter a PhD program and work as a researcher on a priority area. They estimate that they have spent 40 hours per month engaging with their group over the last year.

Y is a member of a top university group. Y said that they would probably have pursued a lower-impact path if they hadn’t engaged with the group.

Weak impact: Z

Z works at an EA-adjacent organization. They report selecting their current job on the basis of EA principles and ideas, and think they wouldn’t have considered this option if they hadn’t engaged with their group. They have taken the GWWC pledge. They estimate that they spend 4 hours per month interacting with the group.

Improved onboarding

Some grant recipients said they would have liked a more thorough onboarding process at the start of their grant period. We worked with Gabriella Overroder (EA Sweden) and Eirin Evjen (EA Norway) to develop a collection of online resources for grant recipients (including strategy and operational advice and useful contact information) and set up fortnightly calls for new grantees with an experienced grantee.

Learning

We didn’t set clear enough expectations about what support we can provide. Some grantees expected and desired more post-grant support than they received. This may have prevented grantees from exploring further ways to find support or support each other.

We spent longer than we anticipated redesigning our impact survey, and we're not sure that it was a big improvement on what we had before. Fewer people filled out the survey than we had expected; we will send it to a significantly greater number of group members next year.

We raised our bar for making new grants in 2020, and we’re likely to increase it further in 2021.

Group Support

We help local group organizers by advising them, providing resources they can use, and creating online spaces where they can share resources and support each other.

Cost: $200,000

FTEs: 2.0 (we also often work with volunteers or CBG recipients)

Progress was relatively slow in the first half of the year, but has accelerated since then:

Group size and engagement statistics

From our 2020 EA groups survey:

Fellowships

Fellowships (typically reading/discussion groups with an application process, lasting a few months) are a common group activity, and one with the potential to provide a detailed and accurate overview of key ideas. We thought that there were some opportunities to improve the fellowship (e.g. by updating the readings and adding more exercises).

Improved curriculum: Joan worked with Huw Thomas, Alex Holness-Tofts, and James Aung to make improvements to the issues identified above. Most of the (external) reviewers thought that the updated fellowship was an improvement from the previous version. We are now testing this fellowship with 6 focus university groups (~200 students), and we hope to make further improvements based on organizer feedback before deciding whether to share it with all groups.

Facilitator training: In August we ran a training session, in conjunction with experienced fellowship facilitators, for 36 organizers who were planning to run a fellowship (likelihood to recommend 8.5/10). There have been 19 introductory fellowships in the second half of the year.

In total, 184 people participated in five introductory fellowships over the summer, and we anticipate that around 430 people will attend an introductory fellowship this fall (~200 of them using the new fellowship curriculum at focus universities like Yale and MIT).[9]

Additionally, 179 people participated in EA Oxford’s In Depth Fellowship over the year. We are contracting part-time with Will Payne (EA Oxford organizer) to develop it further.

Organizer training

We improved our onboarding process: all prospective organizers receive a personal email, are asked to schedule a call with Catherine Low, and are sent key resources.

At least 85 organizers attended a high-quality EA fellowship to improve their understanding of EA ideas, 60 of whom attributed their attendance to our promotion of the fellowships.[10] We think this is important because it means that they will have a good understanding of EA ideas to share with their groups.

We began a small trial of a mentorship program which matched experienced organizers with core EAs who used to run groups, as well as professional coaches. If these matches prove valuable, we plan to expand this service.

We had ~340 attendees across online training events/meetups that we held over the course of the year,[11] compared to ~90 who attended our (more intensive) retreats in 2019. Average likelihood to recommend the training was 8.3/10. 48% of organizers attended an event, almost all of whom thought it was useful, and 33% of whom thought it was very useful.

Minor grants

We provided $30,784 for non-salary expenses across 38 groups. This was much lower than usual due to COVID. Projects funded include a forecasting tournament [EA · GW] run by UChicago Effective Altruism, and retreats in Zurich, Australia, New Zealand, Yale, and Denmark.

We also revamped our funding systems, so that applicants have more information about what types of activities we are interested in funding, and so that evaluation and payment are faster.

Resources

We spent more time updating and creating resources for organizers. We created five new pages on the EA Resource Hub, and updated 15 key pages (with help from volunteers).

Particularly important were:

We had 14,000 page views on the EA Resource Hub, which includes pages written by CEA staff and pages written by volunteers, and the number of active users tripled from January to November (we started tracking this in January, so some of this might be a seasonal effect). 61% of organizers used it, and 97% of these organizers found it useful.

We use the monthly EA Groups newsletter to increase organizers’ knowledge of EA ideas and EA community building strategy. 97% of recipients find this content useful and 15% find it very useful. We increased subscriptions to the EA Groups newsletter by 15% over the year, but the click-through rate fell by around 4% relative to 2019. Most organizers found the newsletter moderately useful.

62% of organizers use our Slack channel. We have 942 total members and 170 weekly active members.

Advice

We believe we provided faster and higher-quality advice for groups via message and support calls, compared to previous years.[12]

We received and responded to 310 messages that required significant time/effort to address (plus many minor messages) from Q1-Q3. Nearly all responses were within 2 working days. 41% of organizers used this service, and 99% of those that did found it useful (30% very useful).

We had ~140 calls with 97 different organizers, weighted towards organizers we thought were particularly important to support. We had 18 calls with people planning to start new groups. When the organizer was new or the call was instigated by the organizer, we sent a feedback form afterwards. Organizers gave the calls a recommendation score of 9.4/10. However, only about 32% of organizers have used this service: we think that scaling up calls might improve organizer satisfaction and help organizers to have more impact.

Examples of support:

Reflections

Groups capacity:

In a couple of cases, organizers were dissatisfied with the level of support we gave them. In some cases, we also think we should have offered more support than we did. We also should have done more to make sure that organizers knew about what we offer (e.g. support calls), since organizers weren’t always aware of the resources they could access. We are investigating what other improvements we should make in 2021.

Effective Altruism Forum and online content

The Effective Altruism Forum (EA Forum) aims to be the central place for collaborative discussion about how to do the most good.

Cost: $340,000

FTEs: 2.5

The Forum’s key metrics grew markedly, and we think post and discussion quality improved.

Forum content

The number of high-quality posts (those the team considers especially likely to spark action or impart useful knowledge) doubled compared to 2019.[15] However, we think the Forum is getting around the same number of excellent posts.

We created a sequence of introductory articles [? · GW]. It was a mistake to try to generate original content as part of this sequence: we spent a lot of time on edits and the final material was less compelling than the other material cited. We have not felt confident enough to share the introductory sequence widely, and it has not had as many views as we hoped.

We promoted "Ask Me Anything [? · GW]" sessions with top thinkers such as Toby Ord, Owen Cotton-Barratt, and Elie Hassenfeld, and it seems that more top thinkers (e.g. Carl Shulman and Will MacAskill) are occasionally commenting on the Forum.

In our judgement, some of the most important posts of the year were:

We think that many of these posts would have been created without the Forum, but the Forum helped them to get a wider audience: many saw thousands of pageviews. You can see the top posts by karma for this year and the previous year here [? · GW].

Engagement

Monthly active users increased by 30%, pageviews are up by 90%, and daily votes grew by ~65%. The number of comments has been increasing faster than the number of posts, indicating that the average post gets more engagement than it used to.

Forum users gave a mean satisfaction score of 7.4/10. Lower-scoring users reported a mix of technical complaints (e.g. about the previous text editor, which we’ve now replaced) and concerns about the quality/subject matter of some posts and comments. We had several complaints that Forum users and moderators were too politically progressive (especially around issues of diversity and inclusion), and a similar number of complaints that they were insufficiently progressive.[17]

Individual case studies

At least 13 people report that the Forum led them to apply to new jobs or research positions (at least three of these were hired); at least eight people applied to EA Global, EAGx, or another EA event; and at least 50 people changed their minds on a subject that they consider important. These are self-reports from a series of surveys and user interviews which pulled in ~200 unique users. This represents ~10% of the Forum’s readership, but a higher percentage of our most active users.

Michael Aird has been a very active user over the past year. Michael credits the Forum [EA(p) · GW(p)] with helping him become more engaged and says that he would advise someone like his earlier self to focus on the Forum today (rather than learning about EA from many different sources), thanks to the growing number of tags and collections. He says that writing on the Forum has substantially increased his visibility within EA. In 2019, he received 2 job offers from EA organizations out of 30 applications, and in 2020 he received 4 offers from 8 applications; he thinks this may have been in part because writing on the Forum made employers more aware of his work.

Zach Groff was already engaged with animal welfare work, but has pivoted towards global priorities research:

I may have been ambivalent in the past, but as time has gone on (and social isolation has set in), I've realized [the Forum] plays a key role in the EA community and has remarkably high-quality dialogue for an online forum.

The Forum’s first EAGx talk seems to have brought in many new authors. Angela Aristizabal learned about the Forum by watching the talk:

Thanks to [the EA Forum EAGx talk], I did a post about geographic diversity in EA. Thanks to the comments, I ended up meeting some people from Brazil and then started working with them in a project for an organization called Generation Pledge.

In the initial application round, 7 of the top 21 Charity Entrepreneurship (CE) applicants came from the EA Forum; the Forum was also the most frequent source of applicants who made it to the second round. Although, due to multiple factors, none of the actual attendees of the incubation program reported hearing about it through the EA Forum, CE considers the strength of the initial application pool a strong signal, and believes that the EA Forum was an excellent source of applicants.

Activities

New features developed by, or in collaboration with, LessWrong:

Ongoing experiments to increase engagement:

EA Newsletter

We have focused most of our effort on the Forum, and we have delegated more work on the EA Newsletter [? · GW], social media content, and EAG transcripts to a contractor. The newsletter’s engagement has held relatively steady as we’ve devoted fewer total staff hours to it.

As a direct result of receiving the newsletter, six people took the Giving What We Can pledge or signed up for Try Giving, and nine people applied to an EA-related job or research position.

Events

Events enable attendees to make new connections, learn about core concepts, share and discuss new research, and coordinate on projects.

Cost: $560,000

FTEs: 3.5

In total, we hosted 4,252 attendees across six online events. Their average LTR (likelihood to recommend the event to a friend with similar interests) was 8.2/10, and attendees made an average of 4.6 new connections.[20] We estimate ~14,500 total hours of engagement (excluding EAGxAsia-Pacific).

Throughout the year, we learned how to run high-quality online events and how to make our events more goal- and user-focused.

We also ran the Virtual Coordination Forum for 30 attendees.[21] 85% said it was a better use of time than what they would have done if they hadn't attended.

Event statistics

Events table (fixed)

Notes on metrics

LTR: Average likelihood to recommend the event to a friend, out of 10

Average new connections: “As a result of [event], roughly how many new people in the EA community do you feel able to reach out to (e.g. to ask a favour)?”

Hours of content: A conservative estimate, based on data from our event platform, of how many total hours attendees spent viewing content. We used different platforms for different events, so we aren't confident in these numbers.

Estimated total new connections: Average new connections multiplied by the number of attendees.

Total hours of engagement: For 2019, we multiplied the number of total attendees by the total length of all event-related activities. (EAGx events are generally shorter.)
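As an illustration, the derived metrics above combine per-event figures in a simple way. The following sketch is hypothetical: the function names and example numbers are invented for illustration, not CEA's actual reporting code or data.

```python
# Hypothetical sketch of the derived event metrics described above.
# Function names and example figures are invented for illustration.

def estimated_total_new_connections(avg_new_connections: float, attendees: int) -> float:
    """Average new connections per attendee, multiplied by the number of attendees."""
    return avg_new_connections * attendees

def total_hours_of_engagement(attendees: int, event_length_hours: float) -> float:
    """2019 method: total attendees multiplied by the total length of event activities."""
    return attendees * event_length_hours

# Example: an event with 500 attendees averaging 4.6 new connections each
print(estimated_total_new_connections(4.6, 500))  # → 2300.0
```

Because different events ran on different platforms, real figures would carry more uncertainty than this simple multiplication suggests, as the notes above caution.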

EA Global: Virtual

We switched to an online event with less than a month’s notice. We learned a lot about the differences between live and virtual events (e.g. we needed more interactive elements).

We learned this year that 7 new Charity Entrepreneurship founders/staff were referred to the organization via this event or EA Global events in 2019.

EAGxVirtual

Collaborating with EAGx organizers from around the world, we ran the largest-ever EA conference.

We had some minor issues with our application systems, which we fixed.[22]

Quotes from attendees:

Introductory events

We collaborated with university group organizers in Europe and the USA to host two high-quality EA introductory events, featuring a talk and Q&A with Will MacAskill, Habiba Islam, and Joan Gass. The introductory talks were followed by 44 different discussion groups so that attendees (very often new to the EA community) could immediately meet with local community members.

The most common words they used to describe EA after the event were: rational, thoughtful, compassionate, effective, and analytical.

EA Student Summit

The EA Student Summit aimed to onboard students to EA and increase their engagement.

Quotes from attendees:

EAGxAsia-Pacific

EAGxAsia-Pacific, held virtually in November, evolved out of in-person conferences planned by organizers in Singapore and Australia. The Asia-Pacific region includes both established EA communities and burgeoning new communities and organizations, and we hoped to spotlight EA work in the region. Since time zones and long travel times pose a major barrier to intermingling between EAs on different sides of the world, we also aimed to promote connections between community members in the Asia-Pacific region and community members in other places. Unlike our previous virtual events, the event was scheduled for convenience to Asia-Pacific time zones. 526 people attended the event.

As the event was held recently, we do not yet have an analysis of the outcomes.

Retreats

EAGx organizers retreat

EAGx organizers felt that the retreat helped them build relationships with other organizers and learn best practices for event planning. One said: “I have learned so much and I feel fully equipped and I know I can ask if I have a question without feeling stupid.” Attendees' average LTR score was 9.6/10.

This led to better collaboration between EAGx teams through the EAGx Slack workspace, and allowed us to work together effectively on EAGxVirtual and EAGxAsia-Pacific.

Virtual Coordination Forum

30 attendees (mostly leaders and experienced staff from established EA organizations) came together to coordinate around two topics: EA’s relationship to longtermism and EA’s target audience. 85% of survey respondents said the Virtual Coordination Forum was “a better use of time, compared to what they would have done if they hadn't attended” (slightly less than the 93% last year). They also reported increased knowledge and understanding of other attendees’ views about the two focus topics.

Next time, we’ll circulate documents further in advance. We also hope to hold the next event in person, which will allow for more casual small group discussions.

Community health

The Community Health team aims to reduce key risks to the EA community’s future. Their work includes fostering a good culture, improving diversity, mitigating harm done by risky actors, reducing the harm of negative PR, and identifying risks to early field-building.

Cost: $300,000

FTEs: 2.5

The team’s activity level remained relatively stable in 2020, despite increased capacity.

While we have included summaries of all the types of work the community health team does, we’re limited in the specifics we can give because the cases often involve information that is sensitive or personal.

PR

Proactive: We provided media training to several community members (Cassidy Nelson, Will Bradshaw, Luke Freeman and Jade Leung) working in newer areas of EA, or areas where we especially value clear communication about complex subjects. This included training sessions, advice on assessing media requests, and referring some low-risk requests to them for practice. Luke is now providing spokesperson support and training for GWWC members.

Reactive: We monitored 137 potential PR cases. One of the major cases from the year:

PR metric: We learned about 78% of interviews before they took place.[23] The earlier we learn of an interview, the more proactive help we can give on mitigating risks.

Risky actors

The goals of this work are to reduce the ability of individuals to cause harm to others or to EA’s reputation, and to support individuals who have been harmed. We used to call this area “bad actors”, but moved to using “risky actors” to acknowledge that the harm is sometimes unintentional.

Of the 25 most significant risky actor cases:

Examples of cases:

Early field-building

Diversity, equity, and inclusion

Examples of other work

Learning

We added Nicole Ross to this team, but some team members had reduced capacity for much of the year, so effective capacity did not increase as we had planned. This contributed to this area remaining relatively stable, despite our hopes to improve it.

We wish we had drawn more distinction earlier between DEI (diversity, equity, and inclusion) work, PR work, and work on conflict about social justice and free speech. For example, we categorized some online conflicts or PR cases as DEI issues. In hindsight, we underestimated how much staff time and attention some cases would take. This diverted staff from more proactive work that we think would have better served the community overall, and especially EAs and potential EAs from underrepresented groups.

During a year with much tension around social justice and free speech, we think we sometimes misjudged the balance. For example, in advising the Munich group about hosting Robin Hanson as a speaker, we should have highlighted the costs of cancellation in our advice, and we didn't correctly anticipate the amount of alarm and backlash the cancellation would cause. See criticism here [EA · GW] and our response [EA · GW].

We have recently shifted to focus more on epistemics and culture, and we're considering additional focus on some tractable parts of PR, such as research on effective branding for EA. We also developed theories of change for our program work.

Executive

We aim to set and track clear goals and to recruit, support, and retain staff.

Cost: $930,000 (this also includes general expenses like office costs, office food, and online services)

FTEs: 3.25

Activities

Executive priorities

Max focused on hiring for and spinning out EA Funds and GWWC, and on developing a clearer scope and goals for CEA (as well as lots of management/reactive work). Joan split her time evenly between managing community health, managing groups work, and developing metrics and surveys to assess our impact. The results of this work are mostly covered elsewhere in this report.

Major reflections from Max:

Staffing

Changes

This year, we hired the following people full time:

New contractors include Helena Dias (grant administrator), Kashif Ahmed (tech support), and Catherine Low, Huw Thomas, James Aung, Alex Holness-Tofts, and Matt Reardon (groups).

Alex Barry (group support associate) left CEA. Our retention rate was 94%.

Job satisfaction

People operations

Hiring:

Staff support:

Public communications

We shared a series of posts on our 2019 work [EA · GW] and our plans for 2020 [EA · GW], as well as a mid-year update [EA · GW]. We also rewrote our 'Mistakes' page.

Operations

The Operations team aims to provide the finance, legal, HR administration, grantmaking, office management and fundraising support that enable CEA, 80,000 Hours, Forethought, and GWWC to run efficiently.

Cost: $620,000

FTEs: 5.25 (3 employees, 2.25 contractors)

Key metrics:

We made improvements to our financial, grantmaking, and fundraising systems. We improved our international governance structure, and we’re on track to move into a new Oxford office at the end of the year.

Financial

We overhauled financial systems to increase efficiency and clarity, and revised several of CEA's financial and accounting policies.

We invested funds, generating $126,000 in interest and $1.8M in investment returns (for the Carl Shulman discretionary fund), while freeing up money held in old restrictions.

Grantmaking

We paid out over $11M of grants in 2020, on behalf of EA Funds, Community Building Grants, and the Forethought Foundation.

We implemented a new grants management system, which has streamlined our processes, increased compliance, and reduced turnaround time. However, the first release of the system had an overly complicated grantee user interface that harmed their experience. We’ve since made the system easier to use by cutting down the number of questions grantees need to answer, improving the interface, and fixing an issue where an application form was timing out on users.

Key stats:

Fundraising

We implemented a donor management system to improve our understanding of our donors. We also managed end-of-year reporting and outreach to previous donors and promising leads.

Office management

As discussed in our last report, we closed our Berkeley office.

We are on track to move to an improved Oxford headquarters in December. This is expected to improve wellbeing, productivity, and recruitment for staff at CEA, Forethought, EA Funds, FHI, and the Global Priorities Institute (GPI).

Other

Finances

Overall, we expect to fall around 15% under our $6.02M budget. This is mainly due to the impacts of COVID on travel, events, and groups. Another contributing factor was that we made fewer hires than expected.

Finance table (fixed)

Spend varies across projects, and the biggest differences were as follows:


  1. Note that this document mostly covers work from January to October. ↩︎

  2. We weren’t tracking or providing training for this before the summer, so this doesn’t include some spring fellowships. ↩︎

  3. These figures include only calls that Alex, Catherine, and Katie had. We believe we reached more organizers this year, but had fewer calls with each organizer on average. The samples are slightly different (2019 was weighted towards CBG recipients, whereas 2020 was only for groups that requested a call and for new groups). ↩︎

  4. Those that the team considers especially likely to spark action or impart useful knowledge. ↩︎

  5. See events section for more details.

    • We had 4,252 attendees (compared to ~1,555 last year)
    • Our events had an average LTR of 8.2/10 (compared to 8.5/10 for in-person events last year)
    • Attendees reported an average of 4.6 new connections per virtual event (compared to 8.4 for in-person events last year). These new connections are relatively significant: we ask people “As a result of [event], roughly how many new people in the EA community do you feel able to reach out to (e.g. to ask a favour)?”
    ↩︎
  6. They have spent 50 hours or more engaging with effective altruism content. For example, someone who has read 50 posts or articles related to EA; listened to 10 episodes of the 80,000 Hours podcast; and participated in an introductory effective altruism fellowship.

    In addition, effective altruism ideas and principles played a major role in their doing at least one of the following:

    • Choosing where to donate
    • Developing their career plans
    • Volunteering for 2 or more hours a week on effective altruism-related projects.
    ↩︎
  7. Grants of 0.5 FTE or more for 12 months or more. ↩︎

  8. We promoted the EA Groups survey more this year, but we also think there has been an increase in the number of groups. ↩︎

  9. We think that there were also fellowships we weren’t aware of in the first half of the year, but we weren’t tracking them properly then. ↩︎

  10. 48 organizers participated in introductory fellowships (run by Yale and Stanford). 85 organizers participated in an advanced fellowship run by EA Oxford, 60 of whom could be attributed to our promotion of the fellowship. We’re not sure how much these groups overlap. ↩︎

  11. These figures don’t include talks or sessions we ran at conferences. ↩︎

  12. Improved response times and reliability for messages, and higher LTR for calls (9.4/10 vs 7.3/10). ↩︎

  13. This metric refers to pageviews from logged-in users, rather than all pageviews. We use the former because it is less sensitive to shallow engagement (e.g. an article trends briefly on Reddit, generating a lot of views from people who aren’t very interested in EA and won’t stick around). ↩︎

  14. We recorded 25,000 hours of engagement from logged-in users. Another 15,000 hours come from projecting engagement rates back to before we began to measure engagement time, and 40,000 from projecting engagement time to non-logged-in users. ↩︎

  15. Weekly average of 11.3 vs 4.7 in 2019. The average also nearly doubled from January to August 2020 (when we stopped counting).

    We identified 9 posts that we thought were “borderline good” and sent the list to several advisors and heavy Forum users. None of them strongly thought that we should have included as “good” a post that we excluded, or vice versa. They also substantially disagreed with each other about the ordering of the posts, which is some evidence that the posts we considered borderline were of similar quality.

    We found that the number of views of high-quality posts was highly correlated with total views, and marking posts (as high-quality or not) was relatively time-intensive, so we stopped marking posts in August and are instead focusing on total views. ↩︎

  16. See also their posts on promising career ideas outside their priority paths [EA · GW] and why to consider a wider range of options [EA · GW] ↩︎

  17. Sample quotes from user surveys:

    “The speed and aggressiveness with which people get downvoted for thinking along the lines of "diversity/inclusion are important" is really worrying, even though this usually rebounds a bit later.”

    “I have seen comments with polite but blunt disagreement called out by moderators when they oppose [points associated with the social justice movement], where much more extreme comments or posts on the opposite side go unchallenged.” ↩︎

  18. Visitors can customize the homepage [? · GW] by weighting how much they want to see different topics. ↩︎

  19. We also maintain weekly digest emails [EA · GW], as well as monthly open & welcome [EA · GW] and progress [EA · GW] threads. ↩︎

  20. “As a result of [event], roughly how many new people in the EA community do you feel able to reach out to (e.g. to ask a favour)?” ↩︎

  21. This replaces the Leaders Forum that we ran in previous years. ↩︎

  22. For instance, several people were frustrated by the number of questions on the application form (we’ve now removed some), and the requirement to pay by card (which is less common in some countries — we have now enabled PayPal). We also brought on a contractor for additional technical support. ↩︎

  23. We weren’t consistently tracking this last year. ↩︎

  24. Across all restrictions in the US and UK. ↩︎

9 comments


comment by EdoArad (edoarad) · 2020-12-11T05:24:33.984Z · EA(p) · GW(p)

Thanks to everyone at CEA for all the hard work you are putting in to improve our community! 🙂

comment by BrianTan · 2020-12-11T03:53:19.751Z · EA(p) · GW(p)

I’m curious to learn more about the following:

We invested funds to generate a return of $126,000 in interest and $1.8M in investment returns (for the Carl Shulman discretionary fund), while freeing up money held in old restrictions.

$1.8M in investment returns for a fund that initially started at $5M is quite high - that's roughly a 36% return in a year. How was that investment return achieved? Also, this is the first time I’m hearing about this discretionary fund. Are there any reports of payouts made by this fund, or of its plans for the future?

Replies from: CarlShulman, Maxdalton
comment by CarlShulman · 2020-12-11T17:51:33.193Z · EA(p) · GW(p)

It's invested in unleveraged index funds, but was out of the market for the pandemic crash and bought in at the bottom. Because it's held with Vanguard as a charity account it's not easy to invest as aggressively as I do my personal funds for donation, in light of lower risk-aversion for altruistic investors than those investing for personal consumption, although I am exploring options in that area.

The fund has been used to finance the CEA donor lottery, and to make grants to ALLFED and Rethink Charity (for nuclear war research). However, it should be noted that I only recommend grants for the fund that I think aren't a better fit for other funding sources I can make recommendations to, and often with special circumstances or restricted funding, and grants it has made should not be taken as recommendations from me to other donors to donate to the same things at the margin. [For the object-level grants, although using donor lotteries is generally sensible for a wide variety of donation views.] 

Replies from: BrianTan
comment by BrianTan · 2020-12-13T16:10:56.036Z · EA(p) · GW(p)

Got it, thanks for the context!

I'm curious whether you have a target % return per year for this fund, and what your target % return is for your personal funds for donation. I also wonder whether you think the EAs you know achieve better investment returns than the average investor.

comment by MaxDalton (Maxdalton) · 2020-12-11T17:24:39.122Z · EA(p) · GW(p)

Hi Brian, thanks for your question! I’m not sure how much we can comment on the investment strategy or grantmaking of this fund, but I’ll flag your questions to Carl.

comment by BrianTan · 2020-12-11T03:51:57.588Z · EA(p) · GW(p)

Thanks for publishing this very thorough review, Max! I read most of it, and the community building grants and group support sections were particularly important and useful for me to know, though the other parts were also worth reading.

I have a few questions which one of you at CEA may want to answer, and I’ll split these into different comments, so that people can reply separately to each question:

1. Regarding this line in your section on community building grants:

We judged 86 of the 145 group members to have taken significant action based on a good understanding of EA ideas, and we categorised these cases as strong, moderate, or weak based on our expectations about the counterfactual impact the group had on the individual.

I'd like to learn more about how CEA or the CB grants programme categorizes these cases into strong, moderate, or weak impact. I think there is a lot of value in community builders, especially CB grantees, having a better understanding of what CEA considers to be impactful (and how you measure it). This would prevent CB grantees from being very positive about what CEA thinks is just a weak case of impact, or from thinking that something is moderate impact when CEA thinks it's strong impact. It would also allow community builders to focus on generating more moderate or strong cases of impact, although of course they should not Goodhart (i.e. optimize too hard in a way that hampers the group).

I also understand that examples of impact (and therefore evaluating these examples) can vary widely across different group types (national, city, or university) and across different countries (e.g. EA Philippines vs. EA London), but I'd still like to hear more about it.

In my head, I think that CEA should be assessing two things when measuring the impact of a group on its members: 
a) How high is the expected value of the action or career plan change that the person has taken

b) How counterfactual the impact of the group is on the person

The two things above can then be combined to classify a case as strong, moderate, or weak impact. I'd like to know whether what I wrote above (high expected value plus the degree to which the impact is counterfactual) is aligned with CEA's and/or the CBG programme's thinking on evaluating these cases of impact. If you think this information is too sensitive to share on the Forum, though, you can just send it to me and/or other CB grantees privately (or let me know if Harri will release a writeup on this for community builders in 2021). Thanks!

Replies from: Maxdalton
comment by MaxDalton (Maxdalton) · 2020-12-11T17:35:17.342Z · EA(p) · GW(p)

Hi Brian, thanks for your question, and I’m glad the update was useful!

You’re correct about the overall approach we’re using (multiplying the expected value of the change by how much of that change is attributable to the group). I’ll flag this comment to Harri and he might follow up with some more details, publicly or privately.

Replies from: BrianTan
comment by BrianTan · 2020-12-13T16:11:16.203Z · EA(p) · GW(p)

Got it, thanks Max!

comment by MHarris · 2021-03-26T16:05:47.928Z · EA(p) · GW(p)

3 months late, but better than never: it's incredibly inspiring to see how the community has grown over the past decade.