Posts

My slack budget: 3 surprise problems per week. 2022-06-01T19:46:10.774Z
Mentorship, Management, and Mysterious Old Wizards 2021-02-24T21:58:52.241Z
Morality as "Coordination" vs "Altruism" 2020-12-29T02:38:32.187Z
Healthy Competition 2019-11-02T19:50:24.508Z
Raemon's EA Shortform Feed 2019-06-19T22:12:48.966Z
What's the median amount a grantmaker gives per year? 2019-05-04T00:15:57.178Z
You Have About Five Words 2019-03-07T00:57:29.273Z
Dealing with Network Constraints (My Model of EA Careers) 2019-02-28T01:34:03.571Z
Earning to Save (Give 1%, Save 10%) 2018-11-26T23:47:58.384Z
"Taking AI Risk Seriously" – Thoughts by Andrew Critch 2018-11-19T02:21:00.568Z
Earning to Give as Costly Signalling 2017-06-24T16:43:25.995Z
What Should the Average EA Do About AI Alignment? 2017-02-25T20:07:10.956Z
Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) 2017-01-11T17:45:48.394Z
Meetup : Brooklyn EA Gathering 2015-04-13T00:07:47.159Z

Comments

Comment by Raemon on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2022-08-17T01:41:06.725Z · EA · GW

I'm not sure what you're imagining in terms of an overall infrastructural update here. But here's a post that is, in some sense, a followup to this:

https://www.lesswrong.com/posts/FT9Lkoyd5DcCoPMYQ/partial-summary-of-debate-with-benquo-and-jessicata-pt-1 

Comment by Raemon on Introducing Asterisk · 2022-05-30T20:01:48.275Z · EA · GW

Where are you expecting to find your audience? (I feel surprisingly ignorant on how journal projects like this bootstrap their way into wider readership)

Comment by Raemon on [Resolved] When will "EA Forum Docs" be taken out of the beta stage and replace Markdown as the default formatting option? · 2022-05-27T19:41:52.829Z · EA · GW

You probably have your account set to use Markdown specifically. Go to your user settings, open "site customizations", and check that you don't have "use markdown" set.

Comment by Raemon on The EA movement’s values are drifting. You’re allowed to stay put. · 2022-05-24T01:58:33.860Z · EA · GW

While I agree with Vaidehi's comments on whether "value drift" is the right descriptor, I think it's true that the proportion of in-practice priorities has probably shifted.

As someone who endorses the overall shift towards longtermist priorities, I still do agree with this post. I think it's important that people think for themselves and don't get tugged along by social consensus.

Comment by Raemon on If EA is no longer funding constrained, why should *I* give? · 2022-05-14T18:15:57.017Z · EA · GW

My answer is that you should primarily be focused on saving, so that you have the financial freedom to pivot, change jobs, learn more, or found an organization. Previously, I recommended new EAs (esp. college students) give 1% and save at least 10% (so that they were building at least some concrete altruistic habits, while mostly focusing on building up slack).

I think this remains good practice in the current environment. (Giving 1% is somewhat of a symbolic gift in the first place, and I think it's still a useful forcing function to think about which organizations are valuable to you.) But also, as long as you're concretely setting aside money and thinking about your future, I think that's a pretty good starting point.

Comment by Raemon on Results from the First Decade Review · 2022-05-14T18:05:00.604Z · EA · GW

I'd find it helpful for the spreadsheet to also have people's usernames listed beside the posts.

Comment by Raemon on Greg_Colbourn's Shortform · 2022-02-28T18:36:31.401Z · EA · GW

I agree with this, and think maybe this should just be a top-level post.

Comment by Raemon on Effective Altruism: The First Decade (Forum Review) · 2021-12-15T01:58:51.681Z · EA · GW

(LW Developer here: there's a code update ready-to-ship that updates the /reviewVoting page to show the outcome. It's been a bit delayed in merging roughly because JP and I are in different timezones)

Comment by Raemon on Earning to Save (Give 1%, Save 10%) · 2021-12-08T03:04:16.597Z · EA · GW

I definitely still stand by the overall thrust of this post, which I'd summarize as:

"The default Recommended EA Action should include saving up runway. It's more important to be able to easily switch jobs, or pivot into a new career, or absorb shocks while you try risky endeavors, than to donate 10%, especially early in your career. This seems true to me regardless of whether you're primarily earning to give, or hoping to do direct work, or aren't sure."

I'm not particularly attached to my numbers here. I think people need more runway than they think, and I think 6 months of runway isn't enough for most people. But I'm not sure if it's more like 12 months or 36.

...

The world is shaped a bit differently than in 2018, though. There are more crypto-rich people around. This has some impact on the strategic landscape, but I'm not sure exactly how it shakes out.

I think it mostly points towards earning to save being more important. We are bottlenecked more on agency, and good ideas, than we are on money. There's even more money now, so the main value of your money is in giving you flexibility to pursue really high value career paths.

(This might depend somewhat on how longtermist you are. Longtermism is sort of defined as 'you think the most important things are the things with the worst feedback loops, and are most bottlenecked on knowledge'.)

...

One question is whether, if you got to pick one article to summarize this argument, you should go with my article here or 80k's similar article. It looks like they've updated their post to say "save enough for 6-24 months of runway." (The comments on this post suggest Ben Todd originally wrote "6-12". I think 6-12 is clearly too little, but 6-24 seems plausible.)

I haven't read the 80k article in detail, but suspect it is more thorough than my post here. I do also suspect it could use a better headline/catchphrase to distill the advice down.

I couldn't easily find that post on the EA Forum and am not sure how to crosspost it for the Decade Review, but it seems worth considering.

Comment by Raemon on You Have About Five Words · 2021-12-08T02:30:57.232Z · EA · GW

I wrote a fairly detailed self-review of this post on the LessWrong 2019 Review last year. Here are some highlights:

  • I've since changed the title to "You have about Five Words" on LessWrong. I just changed it here to keep it consistent. 
  • I didn't really argue for why "about 5". My actual guess for the number of words you have is "between 2 and 7." Concepts will, in the limit, end up getting compressed into a form that one person can easily/clumsily pass on to another person who's only kinda paying attention or only reads the headline. It'll hit some eventual limit, and I think that limit is determined by people's working memory capacity (about 4-7 chunks).
  • If you don't provide a deliberate way to compress the message down, it'll get compressed for you by memetic selection, and might end up distorting your message.
  • I don't actually have strong beliefs about when the 2-7-word limit kicks in. But I observed the EA movement running into problems where nuanced articles got condensed into slogans that EAs misinterpreted (e.g. "EA is Talent Constrained"), so I think the limit already applies at the scale of organization EA had in 2018.

See the rest of the review for more nuanced details.

Comment by Raemon on What Would A Longtermist Flag Look Like? · 2021-03-27T03:35:48.111Z · EA · GW

Oh man, this is pretty cool. I actually like the fact that it's sort of jagged and crazy.

Comment by Raemon on What I learned from working at GiveWell · 2021-03-10T05:28:19.066Z · EA · GW

This was among the most important things I read recently, thanks! (Mostly via reminding me "geez holy hell it's really hard to know things.")

Comment by Raemon on Mentorship, Management, and Mysterious Old Wizards · 2021-02-25T22:05:45.558Z · EA · GW

That is helpful, thanks. I've been sitting on this post for years and published it yesterday while thinking generally about "okay, but what do we do about the mentorship bottleneck? how much free energy is there?", and "make sure that starting-mentorship is frictionless" seems like an obvious mechanism to improve things.

Comment by Raemon on Dealing with Network Constraints (My Model of EA Careers) · 2021-02-24T22:08:59.091Z · EA · GW

https://forum.effectivealtruism.org/posts/JJuEKwRm3oDC3qce7/mentorship-management-and-mysterious-old-wizards

Comment by Raemon on AMA: Elizabeth Edwards-Appell, former State Representative · 2021-01-11T08:10:05.044Z · EA · GW

In another comment you mention:

(One example would be the high levels of self-censorship required.)

I'm curious what the mechanism underlying the "required-ness" is, i.e. which of the following, or others, are most at play:

  • you'd get voted out of office
  • you'd lose support from your political allies that you need to accomplish anything
  • there are costs imposed directly on you/people-close-to-you (i.e. stress)

A related thing I'm wondering is whether you considered anything like "going out with a bang", where you tried... just not self-censoring, and... probably losing the next election and some supporters in the meantime, but also heaving some rocks through the Overton window on your way out.

(I can think of a few reasons that might not actually make sense, for either political or personal reasons, but am suddenly curious why more politicians don't just say "Screw it I'm saying what I really think" shortly before retiring)

Comment by Raemon on Morality as "Coordination" vs "Altruism" · 2020-12-29T20:00:56.993Z · EA · GW

The issue isn't just the conflation, but missing a gear about how the two relate.

The mistake I was making, that I think many EAs are making, is to conflate different pieces of the moral model that have specifically different purposes.

Singer-ian ethics pushes you to take the entire world into your circle of concern. And this is quite important. But, it's also quite important that the way that the entire world is in your circle of concern is different from the way your friends and government and company and tribal groups are in your circle of concern.

In particular, I was concretely assuming "torturing people to death is generally worse than lying." But, that's specifically comparing within alike circles. It is now quite plausible to me that lying (or even mild dishonesty) among the groups of people I actually have to coordinate with might actually be worse than allowing the torture-killing of others who I don't have the ability to coordinate with. (Or, might not – it depends a lot on the weightings. But it is not the straightforward question I assumed at first)

Comment by Raemon on An argument for keeping open the option of earning to save · 2020-09-03T00:38:57.998Z · EA · GW

Just wanted to throw up my previous exploration of a similar topic. (I think I had a fairly different motivation than you – namely I want young EAs to mostly focus on financial runway so they can do risky career moves once they're better oriented).

tl;dr – I think the actual Default Action for young EAs should not be giving 10%, but giving 1% (for self-signalling), and saving 10%. 

Comment by Raemon on You have more than one goal, and that's fine · 2020-09-02T05:50:59.141Z · EA · GW

I recently chatted with someone who said they've been part of ~5 communities over their life, and that all but one of them was more "real community"-like than the rationalists. So maybe there's plenty of good stuff out there and I've just somehow filtered it out of my life.

Comment by Raemon on Dealing with Network Constraints (My Model of EA Careers) · 2020-05-10T01:52:54.910Z · EA · GW

Alas, I started writing it and then was like "geez, I should really do any research at all before just writing up a pet armchair theory about human motivation."

I wrote this Question Post to try to get a sense of the landscape of research. It didn't really work out, and since then I... just didn't get around to it.

Comment by Raemon on Dealing with Network Constraints (My Model of EA Careers) · 2020-03-10T00:10:28.985Z · EA · GW

Currently, there's only so many people who are looking to make friends, or hire at organizations, or start small-scrappy-projects together.

I think most EA orgs started out as a small scrappy project that initially hired people they knew well. (I think early-stage GiveWell, 80k, CEA, AI Impacts, MIRI, CFAR and others almost all started out that way – some of them still mostly hire people they know well within the network, some may have standardized hiring practices by now)

I personally moved to the Bay about 2 years ago and shortly thereafter joined the LessWrong team, which at the time was just two people, and is now five. I can speak more to this example. At the time, it mattered that Oliver Habryka and Ben Pace already knew me well and had a decent sense of my capabilities. I joined while it was still more like "a couple guys building something in a garage" than an official organization. By now it has some official structure.

LessWrong has hired roughly one person a year for the past 3 years.

I think "median EA" might be a bit of a misnomer. In the case of LessWrong, we're filtering a bit more on "rationalists" than on EAs (the distinction is a bit blurry in the Bay). "Median" might be selling us a bit short. LW team members might be somewhere between 60-90th percentile. (heh, I notice I feel uncomfortable pinning it down more quantitatively than that). But it's not like we're 99th or 99.9th percentile, when it comes to overall competence.

I think most of what separates LW team members (and, I predict, many other people who joined early-stage orgs when they first formed), was a) some baseline competence as working adults, and b) a lot of context about EA, rationality and how to think about the surrounding ecosystem. This involved lots of reading and discussion, but depended a lot on being able to talk to people in the network who had more experience.

Why is it rate limited?

As I said, LessWrong only hires maybe 1-2 people per year. There are only so many orgs, hiring at various rates.

There are also only so many people who are starting up new projects that seem reasonably promising. (Off the top of my head, maybe 5-30 existing EA orgs hiring 5-100 people a year).

One way to increase surface area is for newcomers to start new projects together, without relying on more experienced members. This can help them learn valuable life skills without relying on existing network-surface-area. But, a) there are only so many project ideas that are plausibly relevant, b) newcomers with less context are likely to make mistakes because they don't understand some important background information, and eventually they'll need to get some mentorship from more experienced EAs. Experienced EAs only have so much time to offer.

Comment by Raemon on Volunteering isn't free · 2020-02-05T22:30:04.443Z · EA · GW

I expect to want to link this periodically. One thing I could use is clearer survey data about how often volunteering is useful, and when it is useful almost entirely for PR reasons. People are often quite reluctant to think volunteering isn't useful, and will say "My [favorite org] says they like volunteers!". (My background assumption is that their favorite org probably does like volunteers and needs to say so publicly, but primarily because of long-term keeping-people-engaged reasons. But, I haven't actually seen reliable data here)

Comment by Raemon on Announcing the 2019-20 Donor Lottery · 2020-02-04T02:00:34.000Z · EA · GW

Congrats!

Comment by Raemon on Announcing the 2019-20 Donor Lottery · 2019-12-29T02:25:41.885Z · EA · GW

I just donated to the first lottery, but FYI I found it surprisingly hard to navigate back to it, or link others to it. It doesn't look like the lottery is linked from anywhere on the site and I had to search for this post to find the link again.

Comment by Raemon on Why and how to start a for-profit company serving emerging markets · 2019-11-12T01:40:25.224Z · EA · GW

The book The Culture Map explores these sorts of problems, comparing many cultures' norms and advising on how to bridge the differences.

In Senegal people seem less comfortable by default expressing disagreement with someone above them in the hierarchy. (As a funny example, I've had a few colleagues who I would ask yes-or-no questions and they would answer "Yes" followed by an explanation of why the answer is no.)

Some advice it gives for this particular example (at least in several 'strong hierarchy' cultures) is that, instead of a higher-ranking person asking direct questions of lower-ranking people, the boss can ask a team of lower-ranked people to work together to submit a proposal, where "who exactly criticized which thing" is a bit obfuscated.

Comment by Raemon on Does 80,000 Hours focus too much on AI risk? · 2019-11-03T23:14:43.150Z · EA · GW

Tying in a bit with Healthy Competition:

I think it makes sense (given my understanding of the folk at 80k's views) for them to focus the way they are. I expect research to go best when it follows the interests and assumptions of the researchers.

But, it seems quite reasonable if people want advice for different background assumptions to... just start doing that research, and publishing. I think career advice is a domain that can definitely benefit from having multiple people or orgs involved; it just needs someone to actually step up and do it.

Comment by Raemon on Healthy Competition · 2019-11-03T22:53:00.528Z · EA · GW

Nod. I had "more experimentation" as part of what I meant to imply by "diversity of worldviews" but yeah it's good to have that spelled out.

Comment by Raemon on The Future of Earning to Give · 2019-10-26T02:38:12.350Z · EA · GW

This certainly seems like a viable option. I agree with the pros and cons described here, and think it'd make sense for local groups to decide which one made more sense.

Comment by Raemon on The Future of Earning to Give · 2019-10-15T05:07:14.413Z · EA · GW

My intuition is that the EA Funds are usually a much better opportunity in terms of donation impact than donor lotteries and having one person do independent research themself (instead of relying almost entirely on recommendations)

My background assumption is that it's important to grow the number of people who can work fulltime on grant evaluation.

Remember that GiveWell was originally just a few folks doing research in their spare time.

Comment by Raemon on The Future of Earning to Give · 2019-10-15T05:06:01.955Z · EA · GW

My understanding (not confident) is that those people (at least Nick Beckstead) are more like advisors acting as a sanity check (or at least that they aren't the ones putting most of the time into the funds).

Comment by Raemon on The Future of Earning to Give · 2019-10-14T01:21:57.599Z · EA · GW

I also think there's some potential to re-orient the EA pipeline around this concept. If local EA meetups did a collective donor lottery, then even if only one of them ends up allocating the money, they could still solicit help from others to think about it.

My experience is that EA meetups struggle a bit with "what do we actually do to maintain community cohesiveness, given that for many of us our core action is something we do a couple times per year, mostly privately." If a local meetup did a collective donor lottery, then even if only one person wins the lottery, they could still solicit help from others to evaluate donor targets, and make it a collective group project (while being the sort of project that's okay if some people flake on).

Comment by Raemon on The Future of Earning to Give · 2019-10-14T01:21:13.738Z · EA · GW

(edit: whoops, responded to wrong comment)

Comment by Raemon on The Future of Earning to Give · 2019-10-14T01:19:44.313Z · EA · GW

My take: rank-and-file EAs (and most EA local communities) should be oriented around donor lotteries.

Background beliefs:

  • I think EA is vetting constrained
  • Much of the direct work that needs doing is network constrained (i.e. requires mentorship, in part to help people gain context they need to form good plans)
  • The Middle of the Middle of the EA community should focus on getting good at thinking.
  • There's only so much space in the movement for direct work, and it's unhealthy to set expectations that direct work is what people are "supposed to be."

I think the "default action" for most EAs should be something that is:

  • Simple, easy, and reasonably impactful
  • Provides a route for people who want to put in more effort to do so, while practicing building actual models of the EA ecosystem.

I don't think it's really worth it for someone donating a few thousand dollars to put a lot of effort into evaluating where to donate. But if 50 people each put $2000 into a donation lottery, then they collectively have $100,000, which is enough to justify at least one person's time in thinking seriously about where to put it. (It's also enough to angel-invest in a new person or org, allowing them to vet new orgs as well as existing ones)

I think it's probably more useful for one person to put serious effort into allocating $100,000, than 50 people to put token effort into allocating $2000.

This seems better than generic Earning to Give to me (except for people who make enough that donating, say, $25,000 or more is realistic).

Comment by Raemon on Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal) · 2019-10-01T06:28:56.775Z · EA · GW

I asked Critch about this today and he said it seemed fine.

Comment by Raemon on Kerry_Vaughan's Shortform · 2019-09-24T01:23:43.997Z · EA · GW

This was quite an interesting point I hadn't considered before. Looking forward to reading more.

Comment by Raemon on Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal) · 2019-09-20T21:15:47.994Z · EA · GW

My understanding is that it's currently focused on nonprofits (in large part because it's much more logistically and legally complicated to send money to individuals)

Comment by Raemon on Effective Altruism and Everyday Decisions · 2019-09-20T20:56:18.517Z · EA · GW

Believing that my time is really valuable can lead to me making more wasteful decisions. Decisions like: "It is totally fine for me to buy all these expensive ergonomic keyboards simultaneously on Amazon and try them out, then throw away whichever ones do not work for me." Or "I will buy this expensive exercise equipment on a whim to test out. Even if I only use it once and end up trashing it a year later, it does not matter."
...
The thinking in the examples above worries me. People are bad at reasoning about when to make exceptions to rules like "try to behave in non-wasteful ways", especially when the exception is personally beneficial. And I think each exception can weaken your broader narrative about what you value and who you are.

I was brought up in a family that was very pro-don't-waste, and I've had a lengthy shift towards "actually, 'not wasting' just isn't very important. It's more of a carry-over from a time when a) humanity had a lot less ability to produce stuff, b) humanity had worse landfill technology than we have now."

Insofar as we do produce too much waste, it's mostly at a corporate/organizational level, rather than something that makes sense for individuals to prioritize.

It's not that I think people should be making exceptions to rules like 'try to behave in non-wasteful ways', it's that I mostly now think that 'don't be wasteful' wasn't that useful a core-rule in the first place.

(Among my cruxes here are a belief that landfill technology has improved since the era when 'don't waste' and 'recycle' memes took off, as well as a shift towards 'thinking broadly about having a high impact is much more important than individual local decisions.'

Past me (and perhaps you) might be suspicious of the 'landfill technology is actually good enough that this isn't that big a deal' claim, perhaps rightly so, because it's a kinda suspiciously-convenient belief. I don't have arguments-at-the-ready that'd have convinced past me, so I'm mostly just laying out my current reasoning without expecting it to be that persuasive at the moment.)

Comment by Raemon on Leverage Research: reviewing the basic facts · 2019-09-19T23:01:46.909Z · EA · GW

Just wanted to say I super appreciated this writeup.

Comment by Raemon on 'Longtermism' · 2019-07-26T00:01:24.389Z · EA · GW

I suspect the goal here is less to deconfuse current EAs and more to make it easier to explain things to newcomers who don't have any context.

(It also seems like good practice to me for people in leadership positions to keep people up to date about how they're conceptualizing their thinking)

Comment by Raemon on I find this forum increasingly difficult to navigate · 2019-07-13T22:20:09.357Z · EA · GW

Quick note that if you set All Posts to "sort by new" instead of "sort by Daily" there'll be 50 posts. (The Daily view is a bit weird because it varies a lot depending on forum traffic that week)

Comment by Raemon on Extinguishing or preventing coal seam fires is a potential cause area · 2019-07-07T20:39:09.722Z · EA · GW

I don't have much to contribute but I appreciated this writeup – I like it when EAs explore cause areas like this.

Comment by Raemon on I find this forum increasingly difficult to navigate · 2019-07-05T23:36:24.562Z · EA · GW

For the record I'm someone who works on the forum and thought the OP was expressed pretty reasonably.

Comment by Raemon on I find this forum increasingly difficult to navigate · 2019-07-05T23:28:38.720Z · EA · GW

Strong upvoted mostly to make it easier to find this comment.

Comment by Raemon on Raemon's EA Shortform Feed · 2019-07-03T07:59:03.280Z · EA · GW

The Middle of the Middle of the funnel is specifically people who I expect to not yet be very good at volunteering, in part because they're either young and lacking some core "figure out how to be helpful and actually help" skills, or they're older and busier with day jobs that take a lot of the same cognitive bandwidth that EA volunteering would require.

I think the *End* of the Middle of the funnel is more where "volunteer at EA orgs" makes sense. And people in the Middle of the Middle who think they have the "figure out how to be helpful and help" property should do so if they're self-motivated to. (If they're not self-motivated, they're probably not a good volunteer)

Comment by Raemon on Raemon's EA Shortform Feed · 2019-07-03T07:56:25.875Z · EA · GW

My claim is just that "volunteer at an org" is not a scalable action that it makes sense to be a default thing EA groups do in their spare time. This isn't to say volunteers aren't valuable, or that many EAs shouldn't explore that as an option, or that better coordination tools to improve the situation shouldn't be built.

But I am a bit more pessimistic about it – the last time I checked, many of the times someone said "huh, it looks like there should be all this free labor available by passionate people, can't we connect these people with orgs that need volunteers?" and tried to build some kind of tool to help with that, it turned out that most people aren't actually very good at volunteering, and that it requires something more domain-specific and effortful to get anything done.

My impression is that getting volunteers is about as hard as hiring a regular employee (much cheaper in money, but not in time and management attention), and that hiring employees is generally pretty hard.

(Again, not arguing that ALLFED shouldn't look for volunteers or that EAs shouldn't volunteer at ALLFED, esp. if my experience doesn't match yours. I'd encourage anyone reading this who's looking for projects to give ALLFED volunteering a look.)

Comment by Raemon on Raemon's EA Shortform Feed · 2019-06-30T21:49:44.967Z · EA · GW

Membranes

A membrane is a semi-permeable barrier that things can enter and leave, but it's a bit hard to get in and a bit hard to get out. This allows them to store negentropy, which lets them do more interesting things than their surroundings.

An EA group that anyone can join and leave at a whim is going to have relatively low standards. This is fine for recruiting new people. But right now I think the most urgent EA needs have more to do with getting people from the middle-of-the-funnel to the end, rather than the beginning-of-the-funnel to the middle. And I think helping the middle requires a higher expectation of effort and knowledge.

(I think a reasonably good mixed strategy is to have public events maybe once every month or two, and then additional events that require some kind of effort on the part of members)

What happens inside the membrane?

  • First, you meet some basic standards for intelligence, good communication, etc. The basics you need in order to accomplish anything on purpose.
  • As noted elsewhere, I think EA needs to cultivate the skill of thinking (as well as gaining agency). There are a few ways to go about this, but all of them require some amount of willingness to put in extra effort and work. Having a space where people expect that everyone there is interested in putting in that effort is helpful for motivation and persistence.
  • In time, you can develop conversation norms that foster better-than-average thinking and communication (e.g. make sure that admitting you were wrong is rewarded rather than punished).

Membranes can work via two mechanisms:

  • Be more careful about who you let in, in the first place
  • Be willing to invest effort in giving feedback, or to expel people from the group.

The first option is easier. Giving feedback and expelling people is quite costly, and painful both for the person being expelled (who may have friends and roots there) and for the person doing the expelling (which may involve a stressful fight, with people second-guessing you).

If you're much more careful about who you let in, an ounce of prevention can be more valuable than a pound of cure.

On the other hand, if you put up lots of barriers, you may find your community stagnating. There may also be false positives – cases where "so-and-so seemed not super promising", but if you'd given them a chance to grow it would have been fine.

Comment by Raemon on Raemon's EA Shortform Feed · 2019-06-30T21:49:19.547Z · EA · GW

Notes from a "mini talk" I gave to a couple people at EA Global.

Local EA groups (and orgs, for that matter) need leadership, and membranes.

Membranes let you control who is part of a community, so you can cultivate a particular culture within that community. They can involve barriers to entry, or actively removing people or behaviors that harm the culture.

Leadership is necessary to give that community structure. A good leader can make a community valuable enough that it's worth people's effort to overcome the barriers to entry, and/or maintain that barrier.

Comment by Raemon on Raemon's EA Shortform Feed · 2019-06-30T21:22:56.806Z · EA · GW

Part of the problem is there are not that many volunteer spots – even if this worked, it wouldn't scale. There are communities and movements designed such that there's lots of volunteer work to be done – where you can provide 1000 volunteer jobs. But I don't think EA is one of them.

I've heard a few people from orgs express frustration that people come to them wanting to volunteer, but this feels less like the orgs receive a benefit, and more like the org is creating a training program (at cost to themselves) to provide a benefit to the volunteers.

Comment by Raemon on Raemon's EA Shortform Feed · 2019-06-27T23:05:54.487Z · EA · GW

Updated the thread to just serve as my shortform feed, since I got some value out of the ability to jot down early stage ideas.

Comment by Raemon on Raemon's EA Shortform Feed · 2019-06-27T00:24:41.741Z · EA · GW

I’m not yet sure that I’ll be doing this more than 3 months, so I think there’s a bit more value to focus more on generating value in that time.

Comment by Raemon on Raemon's EA Shortform Feed · 2019-06-23T07:38:25.449Z · EA · GW

I think the actions that EA actually needs to be involved with doing also require figuring things out and building a deep model of the world.

Meanwhile... "sufficiently advanced thinking looks like doing", or something. At the early stages, running a question hackathon requires just as much ops work and practice as running some other kind of hackathon.

I will note that the default mode where rationalists or EAs sit around talking and not doing is a problem, but often that mode, in my opinion, doesn't actually rise to the level of "thinking for real." Thinking for real is real work.