Considering people’s hidden motives in EA outreach

post by s_mannik · 2019-05-30T17:20:22.981Z · score: 16 (23 votes) · EA · GW · 11 comments

Contents

  EA current outreach and focusing on the why
  Why are people irrational?
  Hidden motives in practice
  How is people’s striving for status relevant to influence strategies?
  Why should EA consider people’s striving for status in its outreach activities?
  How can EA help people increase their status?
  Conclusion

I recently had a discussion with a friend about the question of why a person wearing a formal jacket during a speech might be more persuasive than a person wearing something more casual. Assuming that a person’s clothing indeed affects people’s perception of them, the question soon took us to the discussion of what the jacket might say about the speaker and why this might be relevant to the perceiver.

The relevance of the jacket seems to depend on the audience. A group of EAs, for example, will probably not be so easily manipulated by content-irrelevant symbols and will focus more on the arguments of the speaker. I think it’s safe to assume that EAs are above-average rational and more aware of their own biases. But precisely due to that, EAs should be more aware of their own as well as other people’s irrationality in the process of becoming interested and more engaged with the EA community.

EA current outreach and focusing on the why

In the conversation about outreach activities and advertising EA ideas, people talk about all the important things relevant to marketing like strategy models (e.g. the funnel model), target groups (e.g. attracting talent, attracting people who have already heard of the EA movement), effective channels (e.g. 1-on-1 discussions, books, local events) and suggest specific strategies to implement (for example telling more stories and using simple-to-follow data visualizations [EA · GW]).

But I think more focus should be drawn to the “why”. Why do people become interested in EA ideas? Why do people have conversations about these ideas? Why do people read EA books, go to conferences, pursue EA careers? Continuing this chain of whys, while reminding ourselves that people are often more irrational than they seem (that irrationality is a basic human trait), might unveil some new perspectives about why humans think and act the way they do, and help us apply these perspectives when reaching out to prospective EAs.

Why are people irrational?

When we think of irrationality as a basic human trait, the next question that arises is: why are people irrational? At first glance, irrationality doesn’t seem to be an advantage in the evolutionary sense. But digging deeper into the possible evolutionary reasons for irrationality, we might find these reasons quite logical.

According to evolutionary psychology, much of human behavior evolved as adaptations to survive and reproduce amid an array of environmental difficulties. Classical examples of these difficulties include wild predators or scarcity of food. Many of our abilities and behaviors developed specifically to overcome these environmental obstacles. Kevin Simler and Robin Hanson suggest in their book “The Elephant in the Brain” that our brains actually evolved not to compete with the environment, but primarily with other people. Wanting the same scarce resources, like the best food or the best mates, resulted in intragroup competition, where we had to convince others of our suitability as a mate, friend or ally. To suppress individual selfishness and promote collective welfare, we established social norms which were collectively enforced within the group. Violating social norms, such as being openly selfish, meant losing one’s reputation, which hindered our chances of winning in the intragroup competition.

So, we needed a way to convince others that our motives were selfless, even when they might be selfish – for instance, that we were willing to share food or help each other out because we cared for the other person’s well-being, rather than because we expected them to return the favor in the future. We needed to deceive others into thinking highly of us to maintain our good reputation. Hand in hand with this need for deception came the need to unmask deception in others, and a sort of arms race ensued between deceiving and avoiding being deceived. Simler and Hanson suggest that using self-deception and convincing ourselves that our motives are pure makes it much easier to come off as unselfish in front of others. So, self-deception became an integral part of human nature.

Many of the puzzles about the irrational part of our behavior can be explained by the idea of pursuing selfish interests while deceiving ourselves and others that we’re pursuing something more innocent. Simler and Hanson offer the example of people’s appreciation for art. We all make art – we decorate our homes, put on makeup, create music or take photographs. Evolutionarily, making art seems completely useless since it’s both costly and impractical. But art can also be explained as a way of expressing our fitness to a potential mate. Just like a male peacock shows off his tail feathers to let the female know that he’s healthy and strong enough to waste resources on growing his beautiful, yet useless tail, art is an indication of having enough time, energy or money to waste on useless things.

So, it seems that our irrationality stems from having to master the skill of pursuing selfish motives while seeming selfless to both ourselves and others, because survival and reproduction depended on our ability to effectively deceive others into thinking highly of us.

Hidden motives in practice

In the LessWrong community, the term “signaling” is used to describe behavior where we communicate our abilities, knowledge or admirable qualities - often indirectly - and it’s expected that the receiver of the signal puts two and two together. For example, making a witty comment in a conversation is a signal showing off the speaker’s intelligence, which the speaker expects the listener to note. We can be completely oblivious to signaling. For example, behavioral mimicry is a signal where we imitate others’ movements and facial expressions with the goal of affiliating with them. Mimicking is often nonconscious, unintentional and effortless, and it often feels uncontrollable. Just as the peacock signals by showing off his tail feathers to find a mate, we signal too, to increase our chances of finding mates, as well as friends and allies.

It seems that signaling often helps us achieve our selfish goals by increasing our social status. Social status can be defined as a person’s importance in relation to other people within a group. As more important people have better access to resources, it seems plausible that increasing and sustaining one’s social status is always beneficial from an evolutionary perspective. And indeed, signaling can be interpreted as a pursuit of higher social status. For example, we often brag in conversations - we talk about our accomplishments or our adventurous vacations, or name-drop important people we have spent time with. It seems plausible that the goal of bragging is to increase our social status in the eyes of the listener.

There are different aspects of ourselves we signal, like our abilities, loyalty or personality traits, and many of them seem to be aimed at improving our social status. For instance, when we’re signaling loyalty, we are showing our belonging or dedication to a group, rather than our abilities. It can be argued that being a member of a group can be a precondition to having status within that group, and that loyalty signaling aims to improve our standing within the group by saying “I am loyal, more loyal than that guy, and I deserve recognition for that loyalty”. Other kinds of signaling, like signaling personality and beliefs, can also be argued to be aimed at increasing social status, which then helps us pursue our selfish motives.

Therefore, social status could be an important goal in the pursuit of people’s selfish motives, and we often try to increase our social status through signaling. This insight may allow us to better explain people’s decision-making processes and the most effective strategies for influencing these processes.

How is people’s striving for status relevant to influence strategies?

If increasing our social status is indeed an important goal of people, one which often remains outside of our conscious awareness, then we would expect that among the empirically proven influence strategies are ones which help us increase our social status. Let us look at some research in this area to see whether it corroborates the idea that humans have hidden motives towards status.

Kahneman argues that we use two cognitive systems to make decisions: intuition (or System 1) and reasoning (System 2). System 1 is fast, automatic, effortless, and difficult to control or modify. System 2 is slower, more effortful and more likely to be consciously monitored and deliberately controlled. System 1 decision-making is outside of conscious awareness and more prone to using shortcuts or heuristics. Because the overall capacity of mental effort is limited, it makes sense for our brain to use heuristics to reduce effort. System 1 generates suggestions for System 2, such as impressions, intentions and feelings. Most of the time, System 2 adopts the suggestions of System 1 with no modification, turning impressions into beliefs and impulses into voluntary actions. When System 1 runs into difficulty, it calls on System 2 to support more detailed processing. Usually this process runs smoothly: System 1 does most of the work, and from time to time System 2 comes to help. However, because System 1 operates automatically and cannot be turned off at will, errors of intuitive thought are often difficult to prevent. System 2 may have no clue that something has been processed automatically, and therefore no inkling that an error is even possible. Because System 1 filters everything that reaches System 2, we will always be vulnerable to bias.

For influence strategies, the fact that we often make decisions through System 1’s fast and automatic processes means that we cannot always rely on content alone or on carefully constructed arguments. Sometimes we must also account for human irrationality to increase persuasion.

There are numerous examples where influence strategies attuned to people’s fast and automatic decision-making processes have resulted in better outcomes. For instance, in his TED talk “How to Motivate People to Do Good for Others”, researcher Erez Yoeli gives an example of using observability to motivate people to vote. He talks about a non-profit that tried to motivate people to vote by sending hundreds of thousands of letters every election reminding them to go to the polls. After adding the following sentence: “Someone may call you to find out about your experience at the polls”, the effect of the letter increased by 50%. The added sentence made it clear to people that someone might find out whether they had voted. Perhaps due to the fear of losing one’s status as a dutiful citizen in the eyes of the caller, people turned out at the polls.

Cialdini conducted a meta-analysis of numerous empirical studies in psychology, marketing, economics and anthropology and found that most of people’s shortcuts in decision-making can be summarized in just six principles. These six principles of influence are widely used and seem to have stood the test of time, which gives reason to believe in their effectiveness. But why do Cialdini’s principles work so well? It might be useful to see whether the hidden motives perspective might be a plausible explanation.

For example, the scarcity principle states that we want things that are low in supply. Why do we want things that are low in supply? It might be that possessing scarce things leads to a status increase, because it can show either our wealth or that we’re skillful enough to obtain them. Another of Cialdini’s principles, the authority principle, states that people follow the lead of credible and knowledgeable experts. It can be argued that adopting the views of someone with high authority will improve our prospects of being associated with high-status people and ideas by others. For instance, we might think that believing a high-status scientist rather than a low-status scientist is reasonable because a high-status scientist will have a better likelihood of having good opinions. In reality, it seems that what matters is not whether the high-status scientist has correct opinions, but that the high-status scientist will more likely be perceived by others to have good opinions. It makes sense to adopt the view of the high-status scientist, because we can present these ideas to other people and be more certain that it will raise our status as someone knowledgeable.

So, it does seem that effective influence strategies help people to increase their social status. Since outreach activities like conferences, local group events, books, articles and even private discussions can be viewed as acts of influence, we could conclude that it might be worthwhile to look into whether and how these activities could account for people’s striving for higher status.

Why should EA consider people’s striving for status in its outreach activities?

The general goal of EA outreach [EA · GW] is arguably to attract dedicated, altruistic people who use evidence and reason to figure out how to improve the world as effectively as possible. Outreach strategies are aimed at getting more dedicated people to work in EA organizations, by spreading EA ideas to prospective EAs through high-fidelity interactions and helping existing EAs to increase their contribution to the EA community. Another objective suggested by some EAs is to spread EA ideas to non-EA organizations or institutions. For instance, Michelle Hutchinson from 80,000 Hours argues for the importance of EA engaging with academia, and IARPA director Jason Matheny talks about EA’s potential to influence government decisions about hundreds of millions of funding dollars. Academia and policy are also recognized by 80,000 Hours among the five key categories of high-impact careers. It seems reasonable to conclude that for high impact, EA ideas should be adopted by intelligent, talented and dedicated people in both EA and non-EA settings.

Because people often seem to have the goal of increasing their social status, many people will be more attracted to effective altruism when it provides a way to increase their own social status. Even though many effective altruists strive to maximize their rationality (by reducing their biases or avoiding signaling) and therefore don’t consider content-irrelevant aspects an important motivational factor, this is often not the case for most people. Among the people who do consider these content-irrelevant aspects an important motivational factor are intelligent, talented and dedicated people who can contribute tremendously to EA causes.

For instance, trainer and career coach Alje van den Bosch describes different people who come to EA career workshops, among whom are what he calls ‘genetic lottery winners’ - people in the top 1% regarding intelligence, and above average in other characteristics like social skills and looks. Arguably, genetic lottery winners are very valuable to EA, both due to their intelligence and because of their potential to ‘translate’ and spread EA ideas to the general audience. Because of their social competence, genetic lottery winners have a vast social circle, and when aiming to increase their status, they must account for what is considered high-status within their social circle. Among those high-status aspects are things which the average effective altruist might find irrelevant, such as organizational prestige (how outsiders view the organization). Therefore, in order to increase their own social status, the genetic lottery winner must also account for prestige. Because EA is not the only one trying to appeal to the genetic lottery winner, EA must compete with other organizations (like Google or Facebook or a local high-prestige employer) that are more prestigious and therefore more appealing to the genetic lottery winner.

Politicians are another group of people valuable to EA, to whom high status and reputation among the general public are usually important. Because politicians’ target group is the general public, the public’s opinion is what determines the politician’s success. Because prestige matters for the general public, it also matters for the politician.

Consequently, it might be useful for EA to apply strategies that help people to increase status among the general public and not just within EA, in order to attract more valuable people.

How can EA help people increase their status?

So far, I have proposed possible reasons behind people’s irrationality, described how people try to increase their status in order to achieve their hidden goals, and argued why it might be useful to consider this pursuit of status in EA outreach activities. Lastly, let’s look at the possible ways EA has helped and could help people to increase their social status. Only a handful of ideas will be discussed in this section. I encourage readers to draw their own conclusions and find their own examples of how EA could, in practice, help people to increase their social status.

As stated in the previous section, one of the ways to help people who endorse EA to increase their status is to increase EA’s prestige. Most people agree with the main idea of EA that we should work out the most effective ways to improve the world by applying evidence and reason. But much of the criticism of EA seems to reflect a conviction that altruism should also include an emotional component. In addition to being criticized for disregarding the need for emotional connection in altruism, EA has been accused of being arrogant and of having a flawed picture of the social world and human institutions. The criticism of EA being too ‘cold and calculating’ might be reduced by how EA ideas are framed for the general audience. For instance, Dutch politician Wybren van Haga talked in a TV program about investing in birth control in African countries before focusing on vaccination or the fight against malnourishment. He argued that due to a finite amount of money, we should set priorities and an efficient order of focus areas. For saying that, he received serious backlash from the public. Kellie Liket, founder of Effective Giving NL, later appeared in a radio program discussing what the politician had said. Although she defended the same argument, her approach was more considerate of the public’s views about ethical behavior, and the reaction she received was therefore much more positive.

Besides reframing messages to make it easier for people to accept EA ideas, there are other ways EA can raise its prestige which might be useful to consider.

Organizational prestige is largely affected by symbols of success such as the visibility of the organization, the extent to which the organization has been successful in achieving its goals, and the average status level of the organization’s employees. Organizational visibility means that the organization has a publicly recognized name. Increasing organizational visibility would mean focusing more on activities that run counter to CEA’s current position, like appearing in the mass media or reaching out more to the general audience. To highlight organizational achievements, EA can emphasize the impact of EA organizations, and do so in a clear way so that the impact is evident to most people. This could mean focusing more on how and whose lives have been improved or saved, to give more meaning to the mere numbers. Assuming the average status level of the organization’s members will be raised by a few people with much higher-than-average status, attracting high-status non-EAs as well as drawing on influential EAs might be a reasonable way to improve the overall prestige of EA. This could mean having influential EAs appear in the media, having EAs collaborate with influential non-EAs, or inviting more high-status people to speak at EA events. Among EAs, raising prestige can also mean sending the signal that “not everyone gets to do this” and “it is an honor and a privilege to work in EA”.

In raising organizational prestige, smaller high-status indications may also matter. These indications include dressing well when giving talks and workshops; organizing workshops, conferences and other events in fancy locations, and investing in the quality of informal parts of events, such as catering and entertainment.

In addition to helping people raise their social status by increasing EA’s prestige among the general audience, we could consider reasons why people endorse EA ideas and their hidden motives. Among the most important reasons seem to be the wish to help the world and wanting to use rationality and reasoning in doing so. The hidden motivations that underlie these reasons might be the desire to present oneself as caring, altruistic, intelligent, and original. There are some examples where these underlying motivations have already played an important part in the spread of EA ideas.

For example, in a 2015 forum post, [EA · GW] William MacAskill talks about writing an article for Quartz about the Ice Bucket Challenge, which the magazine titled “The Cold Hard Truth about the Ice Bucket Challenge”. The article became extremely popular, receiving “about nine times as many views as (his) second most popular article, and 30 times as many as (his) median article”. In addition, the article received 16,008 Facebook shares and 1,490 tweets.

MacAskill suggests that several factors contributed to the article’s huge success: a snowball effect, the fact that it addressed a hugely popular trend at the time, its catchy title, its defense of a minority position, and its discussion of an issue rarely covered in the media. These reasons fit well into the framework of signaling behavior: sharing this article was a perfect opportunity to show off one’s independent and critical thinking. The title itself provides a signal saying “I’m different”, which means the signal can be communicated without the perceiver even having to open the article. In this example, signaling one’s intelligence seemed to account for a great deal of the article’s success.

Another example is the Giving What We Can pledge campaign at the end of 2016 and in early 2017, which resulted in 318 people taking the pledge. It can be argued that the campaign succeeded largely because its main activities helped people increase their status by being perceived as caring and altruistic. The activities considered the importance of observability and people’s desire to signal to their wider social circle (i.e. by having people publicize the pledge through social media). They also accounted for people’s tendency to be influenced by their close social circle, by using personal outreach to friends as a channel.

Another way EA could help people signal their benevolence and intelligence is to tell more emotional stories to increase word of mouth. Word of mouth is among the most valuable forms of marketing, and is especially impactful when the brand is weak and when taking action (such as buying a product or donating money) requires high involvement. For these reasons, word of mouth is arguably a valuable way to spread EA messages. According to emotional broadcaster theory, the intensity of the tellers’ own emotional experience predicts how far their stories travel across their social networks. A message must therefore evoke emotion to be shared with others. It may also be important to consider which emotions are the most effective. For example, for online content, sharing increases when content is high-arousal (causing inspiration/awe, anger or anxiety) rather than low-arousal (causing sadness). In addition, positive stories tend to be shared more.

Whichever way we choose to give people the opportunity to raise their status, it’s necessary to keep in mind that one size does not fit all. It’s useful to try to figure out the motivations of different target groups and what might increase the likelihood of them accepting EA ideas. For example, when giving presentations or workshops, emphasis should be put on the parts that are important for the specific audience. If it’s important for the audience members to show off their brightness or originality, it’s good to allow for discussions and questions. If they’re corporate officers, they might care about what the presenter wears or what the slides look like. Because different people use different ways of signaling to increase their status, it’s important to determine which ways of signaling apply to the specific target audience.

Conclusion

This forum post aims to draw attention to the possible reasons why people are not always rational, and to how EA could find value in being aware of the irrational parts of people’s brains, in addition to the rational, System 2-governed parts. In outreach activities, it might be important to consider the possible underlying motivations of people who take interest in EA, engage in EA activities or pursue EA careers. It could be useful to ask yourself: which activities on my part will help my target group increase their status? What might my target group want to signal? Could it be intelligence, originality, altruism, or something else? If they most probably want to signal intelligence, it might be useful to put more emphasis on scientific evidence in presentations. If they want to signal originality, you might present EA as a new and unique way of helping the world. If it’s altruism, make their good deeds observable. It might also be useful to notice your own signaling and how you try to achieve higher social status, because it will likely apply to some other people as well.

In an ideal world where all people are rational, the ideas mentioned in this forum post would be completely useless. The jacket-wearing speaker would be just as persuasive whether he wore the jacket or not (in that world, formal jackets might not even exist). Unfortunately, people are often much less rational than we’d like to admit. Acknowledging this might be a pragmatic way for EA to improve outreach effectiveness.

This post was written during my internship in Effective Altruism Netherlands. Credit for this post goes largely to Alje van den Bosch, as the post is a result of his guidance and discussions I’ve had with him over the last couple of months.


11 comments


comment by ishaan · 2019-06-01T21:41:14.086Z · score: 15 (8 votes) · EA · GW

You've laid out your opinions clearly. The post is well cited and has interesting and informative accompanying sources. It's a good post. However, I disagree with some portions of the underlying attitudes (even while not particularly objecting to some of the recommended methods).

In an ideal world where all people are rational, the ideas mentioned in this forum post would be completely useless.

The thing is, this is a purely inside view. It sort of presupposes effective altruist ideas are correct, and that the only barrier to widespread adoption is irrationality, rather than any sensible sort of skepticism.

While humans can be irrational in distributing status, there is such a thing as legitimately earned status. If we put on our idealist hats for just a moment and forget all the extremely silly things humans accord status to, status can represent the "outside view" - if institutions we respect seem to respect EA, that should increase our confidence in EA ideas. Not because we're status climbing apes, but because "capable of convincing me" shouldn't be a person's only bar for trusting an argument. One should sensibly understand the limited scope of one's own judgement regarding big topics.

Now, taking our idealist hats off, obviously we can't just trust what most people think, or consider all "high status" institutions as equally legitimate. We have to be discerning. But there are institutions (such as academia, in my opinion) whose approval matters because it functions as legitimate external validation. It's not just social currency, it's a well earned social currency. Not only that, it's an opportunity to send our good ideas elsewhere to develop and mutate, as well as an opportunity to allow our bad ideas to be culled.

Unfortunately, people often are much less rational than we’d like to admit. Acknowledging this might be a pragmatic way for EA to improve outreach effectiveness.

The other issue is that when one is forming a broad, high level strategy for engaging in the world, it should feel good. The words one uses should make one feel warm inside, not exasperated at the irrationality of the world and the necessity of stooping to slimy feeling methods to win. Lest anyone irrationally (/s) dismiss this as a "warm fuzzy altruism", in Bosch's linked taxonomy, let me pragmatically (/s) employ an appeal to authority: Yudkowsky has made the same point. If it feels cynical and a touch Machiavellian, it usually will not ultimately produce morally wholesome results. Personally, I think if you want to really convince people, you shouldn't use methods that would make them feel like you tricked them if they knew what you were doing.

Not to mention...it's just sort of impractical for EA to attempt "we know you are irrational and we're not above pushing your irrationality buttons" strategies. EA organizations are generally scrupulous about transparency so that we can hold each other accountable. This means that any cynical outreach attempts will be transparent as well. In general my sense is that idealist institutions can't effectively wield some of these more cynical methods.

Also as a sort of aside, I don't think there's anything irrational about appealing to emotions. The key is to appeal to emotions in a way that we bring out behavior which is a true expression of people's values. Often, when someone has a "bad" ideology, it is emotions of compassion that bring them out of it. Learning to better engage people on an emotional level is not in any way opposed to presenting logical and rational cases for things.

How can EA help people increase their status?

...in a non-cynical way?

By acquiring well-earned legitimacy! Make real positive impacts in areas other people care about. That means you can also help individual effective altruists make real measurable impacts that they can put on their resume and thereby increase their career capital. Create arguments that other intellectuals agree with and cite. Mentor other people and give them skills. Create mechanisms for people to be public about their donations and personal sacrifices they might make to further a cause in a socially graceful way (it inspires others to do the same). These are all things that the Effective Altruist community is currently doing, and it's been working regardless of whether or not people are wearing suits.

What all these methods have in common is that they work with people's rationality (and true altruistic motives), rather than work around their irrationality (and hidden selfish motives) - these are methods that encourage involvement with EA because people are convinced that being personally involved with EA will help further their (altruistic, but also other) goals. The status raising effects in these methods are secondary to real accomplishment; they put forth honest signals of competence and skill, which the larger society recognizes because it is actually valuable. The appeals to emotion work via being connected to the reality of actually accomplishing the tasks that those emotions are oriented towards.

So, I would generally agree with your call for EAs to think about more ways to gain legitimacy. I just want to strongly prioritize well-earned legitimacy, whereas this post comes off as though it's largely about gaining less legitimate forms of status. (Perhaps due to an implicit feeling that all status is illegitimate?)

comment by s_mannik · 2019-06-07T10:45:43.626Z · score: 3 (2 votes) · EA · GW

It seems that we disagree about the extent to which people’s motivation to pursue status (well-earned or not) guides our behavior. I don’t think the status-raising effects are secondary to real accomplishment; rather, I think they are an important underlying reason in our pursuit to accomplish anything at all.

I agree that some ways of receiving status are more legitimate than others, that it’s important for EA to focus on legitimate status, and that it’s more important to have a good argument than to wear a suit. But because all humans are also (and maybe even above all) status-climbing apes, I think that EA’s pursuit of achieving legitimate status is affected by content-irrelevant elements. This is why I think it might not be best to view legitimately earned status in isolation from the more irrational parts of status, but to rather see how these two interact.

You mentioned that EA could help people increase their status in a non-cynical way, like helping individual effective altruists make measurable impacts, or creating arguments that other intellectuals agree with and cite, and I agree that these are important ways people could increase their status. However, I don’t think they contradict the ways of increasing status I mentioned in the post. Different methods might differ in the extent to which they rely on content-irrelevant status-increasing elements, but in my opinion, we can never fully disregard these more irrational aspects of why people regard something as high-status. In the post I tried to emphasize that EA might consider making greater use of strategies that rely on content-irrelevant status-increasing elements. That is because I think EA right now is overly cautious about using them and as a result, might miss out on reaching people valuable to EA’s cause.

I think that finding the right kind of “packaging” for EA’s content (while not changing the content) is useful when reaching out to all audiences and that this can help make outreach messages more clear and inviting for people without having them feel like they have been tricked into believing something.

comment by aarongertler · 2019-05-31T22:26:57.816Z · score: 10 (7 votes) · EA · GW

Many individual sections of this post were well-done, but it's hard to tell what a given reader, whether or not they work for an EA organization, might draw from it re: activities they could pursue or ways in which they could change their strategy.

It's also the case that nearly every strategy mentioned here can backfire, from media engagement to leaning into the movement's natural "elitism". Splitting this post into multiple sub-posts might have created a more natural structure for addressing the pros and cons of your different suggestions.

I'm also left with a nagging feeling that EA's lack of consistent viral success may be more "random" than anything else. There are plenty of storytellers in the movement with a strong grasp of when emotion can make a point more effective (Julia Wise, Jeff Kaufman, and Rob Mather come to mind). Most of the EA presentations I've seen done for a large audience have paid attention to framing in a natural, commonsense way. I don't think we've made many large missteps in public relations over the years -- but I do think that the movement is small and that most small movements never get much attention (unless they nominate a presidential candidate or are held responsible for a catastrophic act of violence). I'm not sure a change in strategy will make much of a difference, but as someone whose work includes trying to market EA, I'd be happy to be proven wrong.

Are there elements of EA marketing you think have systematically been carried out in a way that lends itself to clear correction/improvement? Cases where you can look at an interview held with an EA figure and say: "You should have paid more attention to hidden motives by doing X"?

comment by Halffull · 2019-06-01T15:20:50.036Z · score: 11 (5 votes) · EA · GW

One consistent frame I've seen with EAs is a much higher emphasis on "How can I frame this to avoid looking bad to as many people as possible?" rather than "How can I frame this to look good and interesting to as many people as possible?"

Something the "cold hard truth about the ice bucket challenge" did (correctly, I think) was to be willing to be controversial and polarizing deliberately. This is something that in general EAs seem to avoid, and there's a general sense that these sorts of marketing framings are the "dark arts" that one should not touch.

On one hand, I see the argument for how framing the facts in the most positive light is obviously bad for an epistemic culture, and could hurt EA's reputation; on the other hand, I think EA is so allergic to this that it hurts it. I do think this is a risk aversion bias when it comes to both public perception and epistemic climate, and that EA is irrationally too far towards being cautious.

Another frequent mistake I see along this same vein (although less rare with the higher status people in the movement) is to confuse epistemic and emotional confidence. People often think that if they're unsure about an opinion, they need to appear unsure of themselves when stating an opinion.

The problem with this in the context of the above post is that appearing unsure of yourself signals low status. The antidote to this is to detach your sure-o-meter from your feeling of confidence, and be able to verbally state your confidence levels without being unsure of yourself. If you do this currently in the EA community, there can be a stigma about epistemic overconfidence that's difficult to overcome, even though this is the correct way to maximize both epistemic modesty and outside perception.

So to sum my suggestions up for concrete ways that people in organizations could start taking status effects more into account:

  • Shift more from "How can I frame the truth to avoid looking bad?" to "How can I frame the truth to look good?"
  • Work to detach your emotional and your epistemic confidence, especially in public settings.

comment by aarongertler · 2019-06-04T04:08:05.785Z · score: 2 (1 votes) · EA · GW

The problem with this in the context of the above post is that appearing unsure of yourself signals low status. The antidote to this is to detach your sure-o-meter from your feeling of confidence, and be able to verbally state your confidence levels without being unsure of yourself.

This is one of the most interesting points I've seen on the Forum in a long while. It perfectly captures the distinction I feel between certain people who I consider excellent speakers in the EA movement and people who don't give me that feeling. At first, I thought this was something like high charisma vs. low charisma, but that wasn't quite right; you don't need to be charismatic and charming to speak with confidence about your uncertainty.

comment by SiebeRozendal · 2019-06-04T08:28:06.622Z · score: 1 (1 votes) · EA · GW

Relatedly, there are the concepts of 'uncertainty' and 'insecurity'. I think there's a risk that uncertainty is perceived, and perhaps even experienced, as insecurity. Interestingly, both concepts are translated into one and the same word in Dutch! ("onzekerheid")

However, I think stating epistemic uncertainty in a very precise and confident way (e.g. "I believe X, and I am 60% certain my hypothesis is correct") can show meta-confidence and strong epistemics. I would rather learn to be convincing while still communicating uncertainties, than learn to hide my epistemic uncertainty.

Also, experts in any domain face this challenge, and useful lessons could be drawn from literature on it, such as this paper (I only read the abstract, it seems useful).

comment by s_mannik · 2019-06-07T10:05:02.429Z · score: 1 (1 votes) · EA · GW

I agree with your feedback that discussing the different suggestions of how to implement the status-increasing elements to specific marketing strategies in greater detail would have made the post more practical.

As for your thoughts on whether EA marketing strategies need improvement - I think EA’s lack of consistent viral success is not so random, I think it’s at least partly the result of abstaining from using some of the marketing strategies that have been considered belonging more to the “dark arts” category Halffull mentions. I agree with Halffull here that perhaps EA is being too cautious when trying to not appear negative, so that it might miss out on some good opportunities to appear positive, e.g. via mass media.

The caution about using mass media, for example, seems to stem from EA’s experience where the idea of Earning to Give became simplified and distorted after several articles (e.g. in the Washington Post and the Daily Mail) were written about it. I’m not sure we should conclude that we must abstain from a highly influential channel after one or a few bad experiences. Perhaps mass media was not the perfect channel for spreading the idea of Earning to Give, but that doesn’t mean the same applies to all EA ideas alike. Secondly, even though I don’t think we should be spreading inaccurate ideas of EA, I do wonder what the impact of Earning to Give going to mass media really was - perhaps it sparked interest in people who would have otherwise not heard of Effective Altruism. Perhaps this interest led them to 80,000 Hours or GiveWell, which gave them a more precise overview of the idea.

comment by Holly_Elmore · 2019-06-05T01:34:41.720Z · score: 8 (4 votes) · EA · GW

From the early sections, I thought you were going in the opposite direction -- how already-involved EAs can be mindful of their secret motives for being involved. (I think that's super-important, btw.) For outreach, I would have thought the implication was that we should balance the need to appeal to and accommodate the human need for status with the possibility that EA would get diluted by the attempt to market EA in a low-fidelity way. I agree with CEA's emphasis on the high-fidelity model: there's no point in growing EA if it stops being EA in the process.

I think there is some very low-hanging fruit EA orgs can pick re: the prestige they can offer recruits. #1 is making sure the name of the organization and the names of positions are as impressive and un-loaded as possible. Foundational Research Institute, for example, went with that title over "The Future of Suffering Institute" because they got feedback from academics that they wouldn't be able to put that name on their CVs. At Harvard EA, we have multiple named fellowships for students (the undergrad one is the "Arete Fellowship"). There is no reason we can't call our programs fellowships or name them, even though they are just student club programming. But being able to put "2016 Fellow of the Harvard College Effective Altruism Arete Fellowship" on a resume gives Harvard students the prestige they need to justify spending their time on us. There is a ton of cheap status EA can confer without it costing us anything (it just requires us to contribute to the inflation of terms for volunteering, employment, and awards -- I'm not losing any sleep).

comment by Khorton · 2019-05-31T20:39:26.129Z · score: 6 (3 votes) · EA · GW

Cross-reference this 80,000 Hours article of advice for people starting their career, especially #4, #8, and #14: https://80000hours.org/2019/04/career-advice-i-wish-id-been-given-when-i-was-young/

"Many years ago a friend suggested that for people trying to have a big impact, there might be no more cost-effective self-improvement investment than enrolling in Toastmasters and buying a gym membership. If I were more rational I’d take this advice, and in general the effective altruism community might invest too little in superficial improvements, such as appearance and charisma."

comment by Milan_Griffes · 2019-05-31T21:56:49.058Z · score: 4 (3 votes) · EA · GW

+1 to gym memberships having an excellent ROI, self-improvement-wise. (Though it's important to find a type of workout that's intrinsically motivating, otherwise you run into motivation issues that damage the ROI.)

comment by nonzerosum · 2019-05-31T19:34:20.273Z · score: 3 (3 votes) · EA · GW

This is an excellent post. I agree that status is a ridiculously powerful driver of human behavior. Based on your section of how EA can help increase status, what do you think is the single most promising strategy that EA could implement to make joining EA higher status? (Also a side note I'd add is that status *from who* is important. People outside EA don't care about within-EA status currency, yet, they care about status currency from their existing peers and people they respect. So they'd need to believe that joining EA makes them higher status outside of EA.)

Also, has anyone looked at whether EA is at the right time for scaling? In startups there's the framework that you want to solve your value hypothesis first, and then once you've done that, and only then, you should focus on your growth hypothesis. Basically get it working just how you like it with a small number of customers/users and then focus on scaling up. Is EA at the point where people want to focus on scaling? I think it probably is, but I still wanted to ask the obvious question.