Turning percentages back into people: personalizing quantification 2020-09-12T14:30:35.199Z
A message to community members, in light of global protests for racial justice 2020-06-08T22:19:33.517Z
sky's Shortform 2020-03-28T21:04:11.779Z
What to know before talking with journalists about EA 2019-09-04T19:59:21.578Z


Comment by sky on Why I prefer "Effective Altruism" to "Global Priorities" · 2021-03-29T00:41:04.506Z · EA · GW

I have some data that may be relevant to folks with interest in this topic*:
I work for CEA, and this quarter I did a small brand test with Rethink’s help. We asked a sample of US college students if they had heard of “effective altruism.” Some respondents were also asked to give a brief definition of EA and a Likert scale rating of how negative/positive their first impression was of “effective altruism.”

Students who had never heard of “effective altruism” before the survey still had positive associations with it. Comments suggested that they thought it sounded good: effectiveness means doing things well; altruism means kindness and helping people. (IIRC, the average Likert scale score was 4+ out of 5.) There were a small number of critiques too, but fewer than we expected. (Sorry that this is just a high-level summary; we don't have a full writeup ready yet.)

Caveats: We didn't test the name “effective altruism” against other possible names. Impressions will probably vary by audience. It could still be the case that "EA" puts off a subset of the audience we really want to reach. (E.g. if we found that highly critical/truth-seeking people in certain fields were often turned away by "EA," I'd consider that a concern. We don't have that data.)

I do think this is encouraging, but it doesn't settle the question. Testing other brands and sub-brands may still be a good idea, though testing brands within very specific sub-audiences is also harder to do. CEA is currently considering hiring someone to test and develop the EA brand, and help field media inquiries.

*I think this post may have been written after I gave Max the info that he posted on my behalf here, so I'm cross-posting.

Comment by sky on Some quick notes on "effective altruism" · 2021-03-29T00:31:25.327Z · EA · GW

Thanks for sharing that info, Max. It was an interesting first pass at some of these questions. 

Comment by sky on AMA: Tom Chivers, science writer, science editor at UnHerd · 2021-03-19T19:28:18.275Z · EA · GW

What are your thoughts on solutions journalism? Does it have much traction among science writers you know? Do you personally use it or promote it as a framework for writing?

Do you think this is a good/bad idea?:
I have a hunch that EA and solutions journalism could be a good match. E.g. EAs in journalism could join the Solutions Journalism Network and pitch solutions journalism angles to their editors. EA projects that think they would be well served by public media coverage could build relationships with strong solutions journalists and make themselves available for stories when they have something going on that those journalists are interested in. I'm not a journalist myself, and the SJN approach is still small, so I'm curious whether you see this area growing.

Comment by sky on What Makes Outreach to Progressives Hard · 2021-03-19T16:45:36.740Z · EA · GW

I haven't read this whole thread, so forgive me if I'm re-stating someone else's point. 
I think there's another explanation: they have a hypothesis about you/EAs/us that we are not disproving. 

My experience has been that people in any numerical or social minority group (e.g. Black Americans, people with disabilities, someone who is the "only" person from a given group at their workplace, etc.) are used to being met with disappointing responses if they try to share their experiences with people who don't have them (e.g. members of the numerical or social majority group that they are different from). Most of us have had this experience at least some of the time, maybe as EAs! People get blank stares, unwanted pity or admiration, or outright dismissal and invalidation (e.g. "it can't be all that bad" or "you're just playing the [race/poverty/privilege/whatever] card"). This is definitely the kind of conversation people see over and over again on the internet. So, until proven otherwise, that's what people expect: majority group members are expected to be ignorant of what life is really like for people who experience it differently. I think this is a rational expectation at least some of the time. The hypothesis then goes: EAs look like majority group members and often are, ergo anything EAs say about which problems are "most important" is assumed to be somewhat ignorant. Maybe people see it as well-meaning or callous ignorance. Regardless, ignorance is assumed as most probable, because it's true of most people. (I think EAs and progressives also have different models of when ignorance matters the most and when differences matter the most, but that's a different thread.)

I've usually taken the view that I don't get to assume people will see me as an informed, compassionate person on the progressive left until I disprove the hypothesis above. If the first thing I say is something like why local US poverty issues are "less important" than other issues, I've just reinforced the hypothesis rather than disproven it. It sounds like denying the reality that they know is true -- they've seen the real-life people impacted and/or read their stories or studied the human impact of these issues.  At least in my case, it's not true that they struggle to think of people in other countries as real people too. (My progressive friends have often lived abroad, have family in other countries, or work in immigrant communities). It's a trust issue. If they see me denying that local issues are "real/important," I must be ignorant, and worse, I must be unwilling to be bothered with the real-life experiences of people different from me. Why should they trust anything I say after that about helping people? "But Africa though!" sounds like a deflection, not a genuine consideration or a sincere, compassionate challenge of their own thinking about poverty. 

When I speak first about things we both care about and share sincere examples of the ways that I do see and care about the depth of personal stress that US poverty and racial disparities have on people I actually know, I haven't had a progressive friend respond by saying that poverty in other countries didn't matter.  I brought it up second though, and that seems to make a difference. If someone trusts that I am a caring, informed person, not a callous ignorant one, we can expand the scope of the conversation from there.

Fwiw, I can't think of a time this has led to changed actions on their part. 

Comment by sky on Why EA groups should not use “Effective Altruism” in their name. · 2021-03-01T06:46:57.305Z · EA · GW

To be clear, this also means I don't think everyone should look at PISE and think "we should definitely change our name too!" I think we don't have enough information from this one example to make a claim that strong. 

I thought this was a thoughtfully-shared example and am glad Koen wrote it up so people could share their thinking.

Comment by sky on Why EA groups should not use “Effective Altruism” in their name. · 2021-02-20T14:35:20.820Z · EA · GW

Though I like thinking about words with a skeptical lens, I am not convinced this is a large concern. The name of a new thing will produce both predictable and random reactions from humans. 

 My expectation is that rational, intelligent, self-critical, scientifically literate humans are humans, which comes with a certain degree of randomness to their behaviors. There will be variations in what they feel like doing on a given day, and a low-stakes decision like "Do I want to go to this presentation by a group I haven't heard of?" is not much evidence either way of someone's thinking skills.  If the ideas the group is presenting attract those individuals in their particular context, and they hit upon a name that helps rather than distracts from that goal, that seems solid. 

Comment by sky on Introducing LEEP: Lead Exposure Elimination Project · 2020-10-07T22:20:23.166Z · EA · GW

Congrats on the launch! This may be a stretch, but if you'd find it helpful to connect with any of these folks, or with the Data Science for Social Good team at U of Chicago to see if they have additional contacts, let me know and I can connect you.

Comment by sky on How have you become more (or less) engaged with EA in the last year? · 2020-09-12T23:23:17.528Z · EA · GW

Joey, could you say more what you mean by "concepts...that connect to impact"? I'm interested in examples you're thinking of. And whether you're looking for advances on those examples or new/different concepts?

Comment by sky on EricHerboso's Shortform · 2020-09-05T21:05:18.490Z · EA · GW

Quick meta comment: Thanks for explaining your downvote; I think that's helpful practice in general.

Comment by sky on sky's Shortform · 2020-09-05T20:48:57.903Z · EA · GW

Quick thoughts on turning percentages back into people

Occasionally, I experiment with different ways to grok probabilities and statistics for myself, starting from the basics. This also involves paying attention to my emotions, and imagining how different explanations would work for different students. (I'm often a mentor/workshop presenter for college students.) If your brain is like mine, or you like seeing how other people's brains work, this may be of interest.

One trick that has worked well for me is turning %s back into people.

Example: I think my Project X can solve a problem for more people than it's currently doing. I have a survey (N=1200) which says I'm currently solving a problem for 1% of the people impacted by Issue X. I think I can definitely make that number go up. Also, I really want that number to go up; 1% seems so paltry.

I might start with: "Ok, how likely do I think it is that 1% could go up to 5%, 10%, or 20%?"

But I think this is the wrong question to start with for me. I want to inform my intuitions about what is likely or probable, but this all feels super hypothetical. I know I'm going to want to say 20%, because I have a bunch of ideas and 20% is still low! The %s here feel too fuzzy to ground me in reality.

Alternative: Turn 1% of 1200 back into 12 people

This is 12 people who say they are positively impacted by Project X.

This helps me remember that no one is a statistic. (A post which may have inspired this idea to begin with.) So, yay, 12 people!

But going from 1% to 5% still sounds unambitious and unsatisfying. I like ambitious, tenacious, hopeful goals when it comes to people getting the solutions they're looking for. That's the whole point of the project, after all. Sometimes, I can physically feel the stress over this tension. I want this number to be 100%! I want the problem solved-solved, not kinda-solved.

At this point, maybe I could remind myself or a student that "shoulding at the universe" is a recipe for frustration. I love that concept, and sometimes it works. But often, that's just another way of shoulding at myself. The fact remains that I don't want to be less ambitious about solving problems that I know are real problems for real people.

I try the percents-to-people technique again:

  • Turn 5% of 1200 back into 60 people. Oh. That's 48 additional people. Also notice: it's only 60 people if we gain 48 additional people while losing 0 of the current 12.
  • Turn 10% back into 120 people. 108 additional people, while losing 0.
  • Turn 20% back into 240 people. 228 additional people, while losing 0.
  • So, an increase of 5% or 20% is the difference between 48 or 228 additional people reached. I know about this program because I work on it, and I know how much goes into Project X right now to reach 12 people. I'm sure there are things we could do differently, but are they different enough to reach 228+ additional people?

Now this feels different. It's humbling. But it piques my curiosity again instead of my frustration: how would we attempt that? Could we?

  • What else do I need to know, to figure out if 60 or 120 or 240 (...or 1000, or 10000) is anywhere within the realm of possibilities for me?
  • Do I have a clear idea about what my bottlenecks or mistakes are in the status quo, such that I think there are 48 more people to reach (while still reaching the 12)? What processes would need to change, and how much?
  • This immediately brings up the response, "That depends on how long I have." (Woot, now I've just grokked why it's useful to time-bound examples for comparison's sake). We could call it 1 year, or 3, or 10, etc. I personally think 1-3 years is usually easier to conceptualize and operationalize.
  • Also, whatever I do next, it's obviously going to take notable effort. I know I can only do so much work in a day. (I probably hate this truth the most. This is definitely where I remind myself not to should at the universe). Now I wonder, is this definitely the program where I want to focus my effort for a while? Why? What if there are problems upstream of this one that I could put my effort toward instead? ...aha, now my understanding of why people care about cause prioritization just got deeper and more personally intuitive. This is a topic for another post.

To return to percentages, here's one more example. Percentages can also feel daunting instead of unambitious:

  • Going from 12 to 60 people is a 400% increase. (Right? I haven't miscalculated something basic? Yes, that's right; thank you, online calculators). 400%! Is that madness?
  • Turn '400% increase' back into 4 additional people reached, for every 1 person reached now.

That may still be daunting. But it may be easier to make estimates or compare my intuitions about different action plans this way.
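For anyone who likes to poke at numbers directly, here's a minimal sketch of the percents-to-people arithmetic above, using the hypothetical N=1200 survey. (The code is my own illustration, not part of the original shortform.)

```python
TOTAL = 1200  # survey respondents impacted by Issue X

def people(pct, total=TOTAL):
    """Turn a percentage back into a count of people."""
    return round(total * pct / 100)

current = people(1)  # the 12 people reached now

for target_pct in (5, 10, 20):
    target = people(target_pct)
    print(f"{target_pct}% of {TOTAL} = {target} people "
          f"({target - current} additional, while losing 0)")

# Sanity-check the "400% increase" claim for going from 12 to 60 people:
increase_pct = (people(5) - current) / current * 100
print(f"Going from {current} to {people(5)} people is a {increase_pct:.0f}% increase")
```

Running it reproduces the 48/108/228 "additional people" figures and confirms the 400% increase, which is the kind of double-check I'd otherwise do with an online calculator.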

If you (or your students) are like me, this is a useful approach. It gets me into the headspace of imagining creative possibilities to solve problem X, while still grounding myself within some concrete parameters rather than losing myself to shoulding.

Comment by sky on sky's Shortform · 2020-09-01T22:33:53.969Z · EA · GW

Webinar tomorrow: exploring solutions journalism [for EA writers]:

If EA journalists and writers are planning to cover EA topics, I think a solutions journalism angle will usually be the most natural fit.

The Solutions Journalism Network "train[s] and connect[s] journalists to cover what’s missing in today’s news: how people are responding to problems."

The Solutions Journalism Network is having a webinar tomorrow:

Solutions journalism:

  • "Can be character-driven, but focuses in-depth on a response to a problem and how the response works in meaningful detail
  • Focuses on effectiveness, not good intentions, presenting available evidence of results
  • Discusses the limitations of the approach
  • Seeks to provide insight that others can use"

This is still a less common approach in the media. The quality of coverage will clearly still vary a lot depending on one's research, editorial input, etc., but I think this is a better fit than many other angles one could take to cover topics of interest to you in EA.

More info on this type of journalism:

Comment by sky on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-10T22:24:56.535Z · EA · GW

Definitely, I think for many people, the donations example works. And I like the firefighter example too, especially if someone has had first responder experience or has been in an emergency.

I'm curious what happens if one starts with a toy problem that arises from, or feels directly applicable to, a true conundrum in the listener's own daily life, to illustrate that prioritization between pressing problems is something we are always doing, because we are finite beings who often have pressing problems! I think when I started learning about EA via donation examples, I made the error of categorizing EA as only useful for special cases, such as when someone has 'extra' resources to donate. So GiveWell sounded like a useful source of 'the right answer' on a narrow problem like finding recommended charities, which gave me a limited view of what EA was for and didn't grab me much. I came to EA via GiveWell rather than through any of the philosophy, which probably would have helped me better understand the basis for what they were doing :).

When I was faced with real life trade-offs that I really did not want to make but knew that I must, and someone walked me through an EA analysis of it, EA suddenly seemed much more legible and useful to me.

Have you seen your students pick up on the prioritization ideas right away, or find it useful to use EA analysis on problems in their own life?

Comment by sky on EAGxVirtual Unconference (Saturday, June 20th 2020) · 2020-06-10T02:23:55.705Z · EA · GW

I'm excited about this! I actually came here to see if someone had already covered this or if I should ☺️. I'd love to see a teacher walk through this.

Here's an idea I'd been curious to try out when talking or teaching about EA, but haven't yet. I'd be curious if you've tried it or want to (very happy to see someone else take the idea off my hands). I think we often skim over a key idea too fast -- that we each have finite resources, and so does humanity. That's what makes prioritization, and willingness to name the trade-offs we're going to make, such an important tool. I know I personally nodded along at the idea of finite resources at first, but it's easy to carry along the S1 sense that there will be more X somewhere that could solve the hard trade-offs we don't want to make. I wonder if starting the conversation there would work better for many people than, e.g., starting with cost-effectiveness. Common-sense examples, like having limited hours in the day or a finite family budget and needing to choose between things that are really important to you but don't all fit, make sense to many people, and starting with this familiar building block could be a better foundation for understanding or attempting their own EA analysis.

Comment by sky on Call notes with Johns Hopkins CHS · 2020-05-21T11:13:29.551Z · EA · GW

I also found this helpful -- appreciate it

Comment by sky on Racial Demographics at Longtermist Organizations · 2020-05-18T14:26:55.948Z · EA · GW

Thanks for adding that resource, Anon.

Comment by sky on Racial Demographics at Longtermist Organizations · 2020-05-04T14:58:58.997Z · EA · GW

Thanks for doing this analysis! My project plans for 2020 (at CEA) include more efforts to analyze and address the impacts of diversity efforts in EA.

I'd be interested in being in touch with the author if they're open to it, and with others who have ideas, questions, relevant analysis, plans, concerns, etc.

I'm hopeful that EAs, like the author and commenters here, can thoughtfully identify or develop effective diversity efforts. I think we can take wise actions that avoid common pitfalls, so that EA is strong and flexible enough as a field to be a good "home base" for highly altruistic, highly analytical people from many backgrounds. I'm looking forward to continued collaboration with y'all, if you'd like to be in touch:

Comment by sky on What posts do you want someone to write? · 2020-03-29T13:18:27.270Z · EA · GW

Posts on how people came to their values, how much individuals find themselves optimizing for certain values, and how EA analysis is/isn't relevant. Bonus points for resources for talking about this with other people.

I'd like to have more "Intro to EA" convos that start with, "When I'm prioritizing values like [X, Y, Z], I've found EA really helpful. It'd be less relevant if I valued [ABC] instead, and it seems less relevant in those times when I prioritize other things. What do you value? How/When do you want to prioritize that? How would you explore that?"

I think personal stories here would be illustrative.

Comment by sky on sky's Shortform · 2020-03-28T21:04:12.017Z · EA · GW

Should reducing partisanship be a higher priority cause area (for me)?

I think political polarization in the US produces a whole heap of really bad societal/policy outcomes and makes otherwise good policy outcomes ~impossible. It has always seemed relatively important to me, because when things go wrong in the US, they often have global consequences. I haven't put that many of my actual resources here though because it's a draining cause to work on and didn't feel that tractable. I also suspected myself of motivated reasoning: I get deep joy from inter-group cooperation and am very distressed by inter-group conflict.

Then I read things like the thread below and feel like not paying more attention to this is foolish, like I've gone too far in the other direction and underweighted the importance of this barrier to global coordination. I imagine others have written about similar questions and I would be interested in more thoughts.

Comment by sky on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2020-03-24T13:01:21.399Z · EA · GW

Hi Aidan, I'm really late to this thread, but found it interesting. If you don't mind coming back in time, could you clarify this:

"I think part of what might be driving the difference of opinion here is that the type of EAs that need a 45 minute chat are not the type of EAs that 80k meets."

I imagine this is true for a lot of EA org staff. It sounded from Howie's comment like it's probably less true for coaches at 80K, though, compared to other EA org staff.

Howie's comment:

"We try to make sure that we talk to the people we think we’re best placed to help with coaching in other ways too, for example some of our advice and many of the connections we can make are particularly valuable for people who don’t already have lots of current links to other effective altruists."

I find the network-constrained hypothesis interesting and would like to explore it, so I think clarifying our models here seems useful.

Comment by sky on EA Survey 2019 Series: Community Demographics & Characteristics · 2020-03-03T14:51:12.985Z · EA · GW

I find myself navigating to this page a lot recently, thanks for publishing!

Quick UX request: could you update this post with links to subsequent posts in the series? I'm often hunting around trying to find various pieces of data, and would find that super helpful for user navigation, rather than searching on the title.

Comment by sky on The EA Hotel is now the Centre for Enabling EA Learning & Research (CEEALAR) · 2020-01-29T20:15:51.423Z · EA · GW

I think it's worth noting that the acronym for the Athena Center for EA Study is ACES! :)

Comment by sky on What to know before talking with journalists about EA · 2019-12-08T22:40:24.096Z · EA · GW

FYI: I've updated this post to show that we now have an email address for requests for media help:

Comment by sky on What to know before talking with journalists about EA · 2019-12-04T20:40:50.196Z · EA · GW

Thanks for adding this, Jonas. I just added a brief blurb that I think is related to this. (See the section about required skills, where I've added a note about being personable but willing to be "awkward"). These are the kinds of tips I'd usually discuss and rehearse with someone in an interview practice session. I notice this post is more about how to evaluate a media opportunity and self-assess readiness, rather than what to do during an actual interview. The latter is something I talk more about with people when we're rehearsing for a specific interview.

When rehearsing mock interviews with people, I've noticed that the point you raise is one of the things that most trips people up though, which I think is understandable.

If someone asks you, "Some people have said butter is blue. Do you think that's true?", it's almost a knee-jerk response to answer "Really? No, I don't think butter is blue. I believe butter is white or yellow, because....". The problem is that our natural instinct here works against us. "EAs 'don't think butter is blue'" is a much weirder and more intriguing quote than, "EAs 'think butter is white or yellow.'"

It takes practice to get out of this habit and ensure that the words you say consist only of words you want to appear in the article, without giving fodder to competing/distracting/inaccurate messages. (You might still be misrepresented or misunderstood even then, but this is one strategy to lower that risk.) The advice of interview coaches is just what you said, Jonas: start right in describing your actual beliefs, and don't repeat the question.

It can look something like this:

Q: Some people have said butter is blue. Do you think that's true?

[Take a breath, smile, omit the first part of the response that comes into your head. Say,..]

A: Actually, I think butter is white or yellow. [or]

A: Actually, I don't think that's within my area of expertise.

[Pause. Let it be awkward if needed, wait for a new question]. [or]

A: Hm, no; what I do think is true is...[(possibly unrelated) point that you want to give a good quote about in order to communicate with your readers/viewers].

The last approach can feel especially awkward, but can be very effective in avoiding clickbait quotes and providing content you actually want to be quoted.

Comment by sky on What posts you are planning on writing? · 2019-11-12T04:05:24.334Z · EA · GW

I would personally find this very useful!

Comment by sky on What to know before talking with journalists about EA · 2019-09-05T16:28:43.695Z · EA · GW

Links are fixed, thanks for flagging! We have different versions of our domain name we can use for our email addresses but I agree that can look confusing, so they're updated too.

Comment by sky on Four practices where EAs ought to course-correct · 2019-08-05T16:59:53.658Z · EA · GW

Thanks, Gordon; I've fixed the sharing permissions so that this document is public.

Comment by sky on Four practices where EAs ought to course-correct · 2019-07-31T21:07:09.037Z · EA · GW

[Note: I’m a staff member at CEA]

I have been thinking a lot about this exact issue lately and agree. I think that as EA is becoming more well-known in some circles, it’s a good time to consider if — at a community level — EA might benefit from courting positive press coverage. I appreciate the concern about this. I also think that for those of us without media training (myself included), erring on the side of caution is wise, so being media-shy by default makes sense.

I think that whether or not the community as a whole or EA orgs should be more proactive about media coverage is a good question that we should spend time thinking about. The balance of risks and rewards there is an open question.

At an individual level though, I feel like I’ve gotten a lot of clarity recently on best practices and can give a solid recommendation that aligns with Gordon’s advice here.

For the past several months, I’ve sought to get a better handle on the media landscape, and I’ve been speaking with journalists, media advisors, and PR-type folks. Most experts I’ve spoken to (including journalists and former journalists) converge on this advice: For any individual community member or professional (in any movement, organization, etc), it is very unwise to accept media engagements unless you’ve had media training and practice.

I’m now of the mind that interview skills are skills like any other, which need to be learned and practiced. Some of us may find them easier to pick up or more enjoyable than others, but very few of us should expect to be good at interviews without preparation. Training, practice, and feedback can help someone figure out their skills and comfort level, and then make informed decisions if and when media inquiries come up.

To add on to Gordon’s good advice for those interested, here is a quick summary of what I’ve learned about the knowledge and skills required for media engagements:

  • General understanding of a journalist’s role, an interviewee’s role, and journalistic ethics (what they typically will and will not do; what you can and cannot ask or expect when participating in a story)
  • An understanding of the story’s particular angle and where you do or don’t fit
  • Researching the piece and the journalist’s credibility in advance, so that you can…
    • evaluate and choose opportunities where your ideas are more likely to be understood or represented accurately versus opportunities where you’re more likely to be misrepresented; and
    • predict the kinds of questions you’re likely to be asked so that you can practice meaningful responses. (Even simple questions like “what is EA?” can be surprisingly hard to answer briefly and well).
  • Conveying key ideas in a clear, succinct way so that the most important things you want to say are more likely to be what is reported
    • This includes the tricky business of predicting the ways in which certain ideas might be misunderstood by a variety of audiences and practicing how to convey points in a way that avoids such misunderstandings
  • Clearly understanding the scope of your own expertise and only speaking about related issues, while referring questions outside your expertise to others

I think having more community members with media training could be useful, but I also think only some people will find it worth their time to do the significant amount of preparation required.

This feels very timely, because several of us at CEA have recently been working on updating our resources for media engagement. In our Advice for talking with journalists guide, we go into more depth about some of the advice we've received. I’d be happy to have people’s feedback on this resource!

Comment by sky on There are *a bajillion* jobs working on plant-based foods right now · 2019-07-18T04:13:35.294Z · EA · GW

I really like the broad range of skills presumably required for this list of jobs -- seems worth looking into further.