The Case for Reducing EA Jargon & How to Do It

post by Akash · 2021-11-22T18:10:17.470Z · EA · GW · 5 comments



TLDR: Jargon often worsens the communication of EA ideas and makes it harder for us to update our models of the world. I think EAs should apply strategies to notice/reduce jargon, and I offer a few examples of jargon-reducing techniques.

A few weeks ago, I attended EA Global and a retreat about community building. At both events, we discussed EA-related ideas and how to most effectively communicate them.

One idea I’ve been reflecting on: It can be tempting to use jargon when explaining EA ideas, even in contexts in which jargon is not helpful.

In this post, I describe some drawbacks of EA jargon & offer some suggestions for EAs who want to reduce their use of jargon. (Note: Some of these ideas are inspired by Rob Wiblin's talk and forum post [EA · GW] about EA jargon. I have tried to avoid covering the same points he raises, but I strongly suggest checking out those resources.)

What is jargon?

Jargon is “special words or expressions that are used by a particular profession or group and are difficult for others to understand.” It can refer to any terms/phrases that are used in shorthand to communicate broader ideas. Examples include: “Epistemic humility,” “Shapley values,” “Population ethics,” “The INT framework,” and “longtermism.”

Why should we reduce jargon? 

The first two benefits focus largely on how others react to jargon; the next two focus on how reducing jargon can directly benefit the person who is communicating.

How can we reduce jargon?


Optimizing metacommunication techniques in EA ideas is difficult, especially when trying to communicate highly nuanced ideas while maintaining high-fidelity communication and strong epistemics.

In other words: Communicating about EA is hard. We discuss complicated ideas, and we want them to be discussed clearly and rigorously. 

To do this better, I suggest that we proactively notice and challenge jargon. 

This is my current model of the world, but it could very well be wrong. I welcome disagreements and feedback in the comments!

I’m grateful to Aaron Gertler, Chana Messinger, Jack Goldberg, and Liam Alexander for feedback on this post.


Comments sorted by top scores.

comment by Linch · 2021-11-22T22:10:00.967Z · EA(p) · GW(p)

I suspect people overestimate the harm of jargon for hypothetical "other people" and underestimate the value. In particular, polls I've run on social media have historically gotten results where people have consistently expressed a preference [EA(p) · GW(p)] for more jargon rather than for less jargon. 

Now, of course, these results are biased by the audience I have, rather than my "target audience," who may have different jargon preferences than the people who bother to listen to me on social media.

But if anything, I think my own target audience is more familiar with EA jargon, rather than less, compared to my actual audience. 

I think my points are less true for people in an outreach-focused position, like organizers of university groups.

comment by Lizka · 2021-11-22T21:28:15.066Z · EA(p) · GW(p)

Jargon glossaries sound like a great idea! (I'd be very excited to see them integrated with the wiki [? · GW].)

A post I quite like on the topic of jargon: 3 suggestions about jargon in EA. The tl;dr is that jargon is relatively often misused, that it's great to explain or hyperlink a particular piece of jargon the first time it's used in a post/piece of writing (if it's being used), and that we should avoid incorrectly implying that things originated in EA. 

(I especially like the second point; I love hyperlinks and appreciate it when people give me a term to Google.) 

Also, you linked Rob Wiblin's presentation (thank you!)-- the corresponding post [EA · GW] has a bunch of comments.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2021-11-23T14:03:53.210Z · EA(p) · GW(p)

I'd be very excited to see them integrated with the wiki [? · GW].

This is an idea I've considered and I'd be interested in making it happen if I continue working on the Wiki. If anyone has suggestions, feel free to leave them below or contact me privately.

comment by Charles He · 2021-11-22T23:09:32.977Z · EA(p) · GW(p)

Like Lizka said, glossaries seem to be a great idea!

Drawing on the posts and projects for software here [EA · GW], here [EA · GW], here [EA · GW], and here [EA · GW], there seems to be a concrete, accessible software project for creating a glossary procedurally. 

(Somewhat technical stuff below, I wrote this quickly and it's sort of long.)

Sketch of project

You can programmatically create an EA jargon glossary that complements, rather than replaces, a human-curated one. It could continuously refresh itself, capturing new terms as time passes.

This would involve writing a Python script or module that finds distinctive EA Forum terms and associates them with definitions.

To be concrete, here is one sketch of how to build this:

  • Essentially, the project is just counting words, filtering for ones that appear disproportionately often in EA content, and then attaching definitions to those words.
  • To get these words, all you need to do is take a set of EA content (EA Forum and LessWrong comments/posts, which are accessible through the GraphQL database [EA · GW]) and compare it against the words that appear in a normal corpus (this can come from Reddit or Wikipedia, e.g. see the Pushshift dumps here).
  • You'll want to do some standard "NLP preprocessing," plus techniques like TF-IDF (which essentially adjusts for words that appear a lot everywhere) or n-grams (which capture two-word concepts like "great reflection"). Synonym detection with word vectors, and more advanced extensions, are also possible.
  • Pairing words with definitions is harder, and human input may be required. The script could help by making dictionary calls (words like "grok," "differential," and "delta" can probably be found in normal dictionaries) and by producing snippets from recent contexts in which the words were used.
  • For the end output, as Lizka suggested [EA(p) · GW(p)], you could integrate this into the wiki, or even build some kind of "view" for the Forum, like a browser plug-in or LessWrong extension.

Because the core work is essentially word counting and the later steps can be made very sophisticated, this project would be accessible to people newer to NLP while still interesting more advanced practitioners.
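To make the word-counting core concrete, here is a minimal Python sketch of the approach the bullets describe: compare each term's relative frequency in a "target" (EA) corpus against a reference corpus, keeping terms that are disproportionately common, with bigrams standing in for two-word concepts. All names and thresholds here are illustrative, not from any existing implementation; a real version would pull posts via the Forum's GraphQL API and use proper preprocessing and TF-IDF weighting instead of this crude frequency ratio.

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercased word tokens; a real pipeline would add stemming and stopwords.
    return re.findall(r"[a-z]+(?:-[a-z]+)*", text.lower())

def bigrams(tokens):
    # Two-word concepts like "epistemic humility".
    return [" ".join(tokens[i:i + 2]) for i in range(len(tokens) - 1)]

def jargon_candidates(target_texts, reference_texts, min_count=2, ratio=2.5):
    """Return (term, count) pairs whose relative frequency in the target
    corpus exceeds their relative frequency in the reference corpus by
    at least `ratio` — a crude stand-in for TF-IDF-style weighting."""
    def count_terms(texts):
        counts = Counter()
        for text in texts:
            tokens = tokenize(text)
            counts.update(tokens)
            counts.update(bigrams(tokens))
        return counts

    tgt, ref = count_terms(target_texts), count_terms(reference_texts)
    tgt_total, ref_total = sum(tgt.values()), sum(ref.values())
    candidates = []
    for term, n in tgt.items():
        if n < min_count:
            continue
        rel_tgt = n / tgt_total
        # Add-one smoothing so terms absent from the reference corpus
        # don't divide by zero.
        rel_ref = (ref.get(term, 0) + 1) / (ref_total + 1)
        if rel_tgt / rel_ref >= ratio:
            candidates.append((term, n))
    return sorted(candidates, key=lambda pair: -pair[1])
```

With a toy target corpus mentioning "longtermism" and "epistemic humility" and a reference corpus of everyday sentences, the distinctive terms and bigrams surface while words common to both corpora are filtered out. The real inputs would be Forum/LessWrong dumps versus a Reddit or Wikipedia corpus, as described above.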


By the way, it seems like this totally could get funded with an infrastructure grant [EA · GW]. If you wanted to go in this direction, optionally: 

  • You might want to submit the grant with someone as a "lead": a sort of "project manager" who organizes people (not necessarily someone with formal or technical credentials, just someone friendly who creates collaboration among EAs).
  • There are different styles of doing this, but you could set it up as an open-source project with paid commitments, and try to tag as many EA software devs as reasonably possible.

There are a few reasons an EA infrastructure grant might help here: 

  • This could create a natural reason for collaboration and bring EAs together
  • The formal grant might encourage the project to actually get shipped (since names are on it and money has been paid)
  • It seems plausible this would give EAs useful experience for future collaborations.


Anyways, apologies for being long. I just sometimes get excited and like to write about ideas like this. Feel free to ignore me and just do it!

comment by Mauricio · 2021-11-23T04:48:51.506Z · EA(p) · GW(p)

Thanks Akash! This seems clear to me when it comes to communicating with people who are new to the community / relevant jargon. Just to clarify, would you also advocate for reducing jargon among people who are mostly already familiar with it? There, it seems like the costs (to clarity) of using jargon are lower, while the benefits (to efficiency and--as you say--sometimes to precision) are higher.

(I'd guess you're mainly talking about communication with newer people, but parts like "Ask people to call out whenever you’re using jargon" make me unsure.)

(I also suspect a lot of the costs and benefits come from how jargon affects people's sense of being in an in-group.)