Posts

Sort forum posts by: Occlumency (Old & Upvoted) 2022-05-15T03:08:20.157Z
EAGT: Social Schelling Times (recurring) 2022-05-13T12:04:08.592Z
EAGT update: bespoke rooms for remote orgs/local groups on the EA Gather.Town 2022-05-05T12:39:30.062Z
EA coworking/lounge space on gather.town 2022-04-26T10:57:26.621Z
Emrik's Shortform 2021-09-21T20:09:20.436Z

Comments

Comment by Emrik on Emrik's Shortform · 2022-05-21T05:45:36.939Z · EA · GW

FWIW, I think personal information is very relevant to giving decisions, but I also think the meme "EA is no longer funding-constrained" perhaps lacks nuance that's especially relevant for people with values or perspectives that differ substantially from major funders.

Relevant: https://forum.effectivealtruism.org/posts/GFkzLx7uKSK8zaBE3/we-need-more-nuance-regarding-funding-gaps

Comment by Emrik on Apply to attend an EA conference! · 2022-05-21T05:41:07.419Z · EA · GW

"and I was surprised to find I had ideas and perspectives that were unique/might not have surfaced in conversation had I not been there."

I think this is one of the reasons EAG (or other ways of informally conversing with regular EAs on EA-related things) can be extremely valuable for people. It lets you get epistemic and emotional feedback on how capable you are compared to a random EAG-sampled slice of the community. People who might have been underconfident (like you) update towards thinking they might be usefwl. That said, I think you're unusually capable, and that a lot of other people will update towards feeling like they're too dumb for EA.

But the increased confidence in people like you seems more valuable than the possible harm caused by people whose confidence drops. And there are reasons to expect online EA material to be a lot more intimidating due to being way more filtered for high status (incl. smarts), so exposure to low-filtered informal conversations at EAG probably increases confidence in people who haven't had a lot of low-filtered informal exposure yet (so if that describes you, reader, you should definitely consider going). Personally, I have a history of feeling like everything I discover and learn is just a form of "catching up" to what everyone else already knows, so talking to people about my ideas has increased my confidence a lot.

Comment by Emrik on Don't Be Bycatch · 2022-05-19T00:41:43.308Z · EA · GW

I'm really sorry I downvoted... I love the tone, I love the intention, but I worry about the message. Yes, less ambition and more love would probably make us suffer less. But I would rather try to encourage ambition by emphasising love for the ambitious failures. I'm trying to be ambitious, and I want to know that I can spiritually fall back on goodwill from the community because we all know we couldn't achieve anything without people willing to risk failing.

Comment by Emrik on Deferring · 2022-05-16T22:49:10.405Z · EA · GW

Some (controversial) reasons I'm surprisingly optimistic about the community:

1) It's already geographically and social-network bubbly and explores various paradigms.

2) The social status gradient is aligned with deference at the lower levels, and differentiation at the higher levels (to some extent). And as long as testimonial evidence/deference flows downwards (where it's likely to improve opinions), and the top level tries to avoid conforming, there's a status push towards exploration and confidence in independent impressions.

3) As long as deference is mostly unidirectional (downwards in social status) there are fewer loops/information cascades (less double-counting of evidence), and epistemic bubbles are harder to form and easier to pop (from above). And social status isn't that hard to attain for conscientious smart people, I think, so smart people aren't stuck at the bottom where their opinions are under-utilised? Idk.

Probably more should go here, but I forget. The community could definitely be better, and it's worth exploring how to optimise it (any clever norms we can spread about trust functions?), so I'm not sure we disagree except you happen to look like the grumpy one because I started the chain by speaking optimistically. :3

Comment by Emrik on Deferring · 2022-05-16T21:47:14.087Z · EA · GW

Thanks<3

Well, I've been thinking about these things precisely in order to make top-level posts, but then my priorities shifted because I ended up thinking that the EA epistemic community was doing fine without my interventions, and all that remained in my toolkit was cool ideas that weren't necessarily usefwl. I might reconsider it. :p

Keep in mind that in my own framework, I'm an Explorer, not an Expert. Not safe to defer to.

Comment by Emrik on Deferring · 2022-05-16T20:20:46.696Z · EA · GW

This question is studied in veritistic social epistemology. I recommend playing around with the Laputa network epistemology simulation to get practical feedback on where it's similar and dissimilar to your model of how the real-world community behaves. (I've put a toy sketch of this kind of model right after the list below.) Here are some of my independent impressions on the topic:

  1. Distinguish between testimonial and technical evidence. The former is what you take on trust (epistemic deference, Aumann-agreement stuff), and the latter is everything else (argument, observation, math).
  2. Under certain conditions, there's a trade-off between the accuracy of crowdsourced estimates (e.g. surveys on AI risk) and the widespread availability of decision-relevant current best guesses (cf. simulations of the "Zollman effect").
  3. Personally, I think simulations plausibly underestimate the effect. Think of it like doing Monte-Carlo Tree Search over ideaspace, where we want to have a certain level of randomness to decide which branches of the tree to go down. And we arguably can't achieve that randomness if we get stuck in certain paradigms due to the Einstellung effect (sorry for jargon). Communicating paradigms can be destructive of underdeveloped paradigms.
  4. To increase the breadth of exploration over ideaspace, we can encourage "community bubbliness" among researchers (aka "small-world network"), where communication inside bubbles is high, and communication between them is limited. There's a trade-off between the speed of research progress (for any given paradigm) and the breadth and rigour of the progress. Your preference for how to make this trade-off could depend on your view of AI timelines.
  5. How much you should update on someone's testimony depends on your trust function relative to that person. Understanding trust functions is one of the most underappreciated leverage points for improving epistemic communities and "raising sanity waterlines", imo.
  6. If a community has a habit of updating trust functions naively (e.g. increase or decrease your trust towards someone based on whether they give you confirmatory testimonies), it can lead to premature convergence and polarisation of group beliefs. And on a personal level, it can indefinitely lock you out of areas in ideaspace/branches on the ideatree you could have benefited from exploring. [Laputa example] [example 2]
  7. Committing to only updating trust functions based on direct evidence of reasoning ability and sincerity, and never on object-level beliefs, can be a usefwl start. But all evidence is entangled, and personally, I'm ok with locking myself out of some areas in ideaspace because I'm sufficiently pessimistic about there being any value there. So I will use some object-level beliefs as evidence of reasoning-ability and sincerity and therefore use them to update my trust functions.
  8. Deferring to academic research can have the bandwidth problem[1] you're talking about, and this is especially a problem when the research has been optimised for non-EA relevant criteria. Holden's History is a good example: he shouldn't defer to expert historians on questions related to welfare throughout history, because most academics are optimising their expertise for entirely different things.
  9. Deferring to experts can also be a problem when experts have been selected for their beliefs to some extent. This is most likely true of experts on existential risk.
  10. Deferring to community members you think know better than you is fairly harmless if no one defers to you in turn. I think a healthy epistemic community has roles for people to play for each area of expertise.
    1. Decision-maker: If you make really high-stakes decisions, you should use all the evidence you can, testimonial or otherwise, in order to make better decisions.
    2. Expert: Your role is to be safe to defer to. You realise that crowdsourced expert beliefs provide more value to the community if you try to maintain the purity of your independent impressions, so you focus on technical evidence and you're very reluctant to update on testimonial evidence even from other experts.
    3. Explorer: If most of your contributions come from contributing with novel ideas, perhaps consider taking risks by exploring neglected areas in ideaspace at the cost of potentially making your independent impressions less accurate on average compared to the wisdom of the crowd.
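
Since points 2-4 lean on simulation results, here's a minimal sketch of the kind of bandit model behind the Zollman effect. This is my own toy illustration in Python, not Laputa itself, and every parameter and name in it is a made-up assumption:

```python
# Toy Bala-Goyal/Zollman-style model (illustrative, not Laputa): agents choose
# between a safe action with known payoff p_bad and an uncertain action that is
# actually better (p_good). They share trial results with network neighbours.
# The dense network converges faster, but an early unlucky streak can make the
# whole community abandon the better action at once; the sparse ring preserves
# diversity and recovers. All numbers are invented for illustration.
import random

def run_trial(edges, n_agents, p_good=0.55, p_bad=0.5, pulls=10, rounds=200):
    # Each agent tracks (successes, trials) for the uncertain action,
    # starting from a mildly optimistic shared prior (3/5 = 0.6).
    stats = [[3.0, 5.0] for _ in range(n_agents)]
    for _ in range(rounds):
        results = []
        for i in range(n_agents):
            # Experiment only while your estimate beats the safe action.
            if stats[i][0] / stats[i][1] > p_bad:
                wins = sum(random.random() < p_good for _ in range(pulls))
                results.append((i, wins))
        for i, wins in results:
            for j in range(n_agents):
                # Evidence reaches the experimenter and their neighbours.
                if i == j or (i, j) in edges or (j, i) in edges:
                    stats[j][0] += wins
                    stats[j][1] += pulls
    # "Success" = the whole community ends up favouring the better action.
    return all(s / t > p_bad for s, t in stats)

def accuracy(edges, n_agents, trials=100):
    return sum(run_trial(edges, n_agents) for _ in range(trials)) / trials

n = 8
ring = {(i, (i + 1) % n) for i in range(n)}                   # sparse network
clique = {(i, j) for i in range(n) for j in range(i + 1, n)}  # dense network
print("ring:  ", accuracy(ring, n))    # reliably finds the better action
print("clique:", accuracy(clique, n))  # locks in the wrong one more often
```
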

Honestly, my take on the EA community is that it's surprisingly healthy. It wouldn't be terrible if EA kept doing whatever it's doing right now. I think it ranks unreasonably high among the possible ways of arranging epistemic communities. :p

  1. ^

    I like this term for it! It's better than calling it the "Daddy-is-a-doctor problem".

Comment by Emrik on Sort forum posts by: Occlumency (Old & Upvoted) · 2022-05-16T06:23:19.572Z · EA · GW

Oh. It does mitigate most of the problem as far as I can tell. Good point Oo

Comment by Emrik on Sort forum posts by: Occlumency (Old & Upvoted) · 2022-05-15T22:49:27.703Z · EA · GW

Oh, this is wonderfwl. But to be clear, Occlumency wouldn't be the front page. It would be one of several ways to sort posts when you go to /all posts. Oldie goldies is a great idea for the frontpage, though!

Comment by Emrik on Sort forum posts by: Occlumency (Old & Upvoted) · 2022-05-15T20:34:01.786Z · EA · GW

I have no idea how feasible it is. But I made this post because I personally would like to search for posts like that to patch the most important holes in my EA Forum knowledge. Thanks for all the forum work you've done, the result is already amazing! <3

Comment by Emrik on EA Forum feature suggestion thread · 2022-05-15T20:10:19.824Z · EA · GW

  1. Add a sorting option for Occlumency so people can find the posts with the most enduring value historically (sorting by total karma doesn't do it, since the influx of new forum users means newer posts capture a sharply increasing share of karma; see the sketch after this list).
  2. Add a tag for "outdated" that people can vote up or down, so that outdated but highly upvoted past posts don't continually mislead people (e.g. based on research that failed to replicate). I can't think of any posts atm, but if you can think of any, please mark them.
  3. Consider hiding authorship and karma for posts 24 hours after publication to decrease how sensitive final karma is to slight variations in initial conditions that are amplified by information cascades. I don't actually advocate doing this, I just recommend considering it to see if it makes sense to people who could know better. My intuition is that it's not worth the cost.
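
To make suggestion 1 concrete, here's a back-of-envelope sketch of what an Occlumency score could look like. This is purely my own illustration; the posts, karma numbers, and monthly totals are all invented, and I don't know what data the forum actually exposes:

```python
# Hypothetical "Occlumency" score: a post's karma divided by the total karma
# the forum gave out in its month of publication, so old posts aren't drowned
# out by the influx of new voters. All numbers below are invented.

posts = [
    # (title, karma, month of publication)
    ("Old classic", 120, "2015-03"),
    ("Recent hit", 300, "2022-04"),
]

# Invented totals of karma given out per month.
monthly_karma = {"2015-03": 2_000, "2022-04": 40_000}

def occlumency(karma, month):
    # The share of that month's karma the post captured.
    return karma / monthly_karma[month]

for title, karma, month in sorted(posts, key=lambda p: -occlumency(p[1], p[2])):
    print(f"{occlumency(karma, month):.4f}  {title}")
# "Old classic" captured 6% of its month's karma vs 0.75% for "Recent hit",
# so it sorts first despite having less than half the raw karma.
```
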
Comment by Emrik on Sort forum posts by: Occlumency (Old & Upvoted) · 2022-05-15T19:21:03.841Z · EA · GW

The users with the highest karma come from a range of different years, and the two highest joined in 2017 and 2019. I don't think it's too much of a problem.

Comment by Emrik on Sort forum posts by: Occlumency (Old & Upvoted) · 2022-05-15T16:36:03.489Z · EA · GW

Good point! Edited the post to mention this.

Comment by Emrik on Getting a feel for changes of karma and controversy in the EA Forum over time · 2022-05-15T15:25:00.775Z · EA · GW

Not sure how much it matters, but if you weight vote balances by forum activity during the month of publication, you aren't controlling for votes cast outside the month of publication. This means that older posts that have received a second wind of upvotes will be ranked higher.
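
To illustrate with made-up numbers, here's a sketch of the difference between normalising by activity at publication versus by activity at the time each vote was cast:

```python
# A 2015 post that got a second wind of upvotes in 2022 looks inflated if all
# of its votes are divided by 2015's activity. Weighting each vote by activity
# at the time it was cast controls for that. All numbers are invented.
activity = {2015: 1_000, 2022: 20_000}  # hypothetical votes cast per year

votes = [(2015, 30), (2022, 200)]       # (year cast, karma) for a 2015 post

naive = sum(k for _, k in votes) / activity[2015]   # 230 / 1000 = 0.23
adjusted = sum(k / activity[y] for y, k in votes)   # 0.03 + 0.01 = 0.04
print(naive, adjusted)  # the second wind dominates the naive score
```
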

Comment by Emrik on Sort forum posts by: Occlumency (Old & Upvoted) · 2022-05-15T14:34:12.294Z · EA · GW

Experimental fine-tuning might be in order. But even without it, Occlumency has a different set of problems to Magic (New & Upvoted), so the option value is probably good.

As for outdated posts, there could be an "outdated" tag that anyone can add to posts and vote down or up. And anyone who uses it should be encouraged to link to the reason the post is outdated in the comments. Do you have any posts in mind?

Comment by Emrik on Emrik's Shortform · 2022-05-14T16:45:09.471Z · EA · GW

A way of reframing the idea of "we are no longer funding-constrained" is "we are bottlenecked by people who can find new cost-effective opportunities to spend money". If this is true, we should plausibly stop donating to funds that can't give out money fast enough anyway, and rather spend money on orgs/people/causes we personally estimate need more money now. Maybe we should up-adjust how relevant we think personal information is to our altruistic spending decisions.

Is this right? And are there any good public summaries of the collective wisdom fund managers have acquired over the years? If we're bottlenecked by people who can find new giving opportunities, it would be great to promote the related skills. And I want to read them.

Comment by Emrik on If EA is no longer funding constrained, why should *I* give? · 2022-05-14T16:40:30.565Z · EA · GW

Reframe the idea of "we are no longer funding-constrained" to "we are bottlenecked by people who can find new good things to spend money on". This means you should plausibly stop donating to funds that can't give out money fast enough, and rather spend money on orgs/people/causes you personally estimate need more money now.

Are there any good public summaries of the collective wisdom fund managers have acquired over the years? If we're bottlenecked by people who can find new giving opportunities, it would be great to promote the related skills. And I want to read them.

Comment by Emrik on EA can be hard: links for that · 2022-05-13T10:55:27.013Z · EA · GW

Idk, I like the attitudes found in "Pain is not the unit of effort". Summarised: Effort is a dangerous proxy variable to maximise, and for most human beings, maximising impact means trying to have plenty of mental and practical slack in your life. If you feel like you're only putting in enough effort when you're at the brink of how much pain you can handle, you should probably try to find and test ways of getting out of that trap (like acquiring SSRIs to try). :)

Comment by Emrik on Virtual Coworking · 2022-05-11T10:50:34.535Z · EA · GW

And for people who don't know what "gather town" means and wish to judge whether it could appeal to them as a place to cowork, you can read the forum post about it. :)

Comment by Emrik on EA coworking/lounge space on gather.town · 2022-05-04T16:21:18.865Z · EA · GW

I looked at it for a bit, and it seems interesting! But announcing a move would be risky, given that we might lose people in the transition, so the difference in quality of the space would have to be sufficient to overcome that risk, and I'm not sure it is.

Also, if you have a Gather Town in Germany, we could link it via a portal; or alternatively, you could copy the whole space into EAGT and link it via a door (like with EA Denmark's space). The latter option has the advantage that it benefits the larger community, encourages more intermingling between groups, and makes it easier to find EAs to cowork with; even if you're just inside your own local rooms, you still show up under "online users", which gives the space a livelier feel. I can help with either option if it sounds interesting. :)

Comment by Emrik on What makes a statement a normative statement? · 2022-04-26T16:55:45.260Z · EA · GW

Oh, I like this. Seems good to have a word for it, because it's a set of constraints that a lot of us try to fit our morality into. We don't want it to have logical contradictions. Seems icky. Though it does make me wonder what exactly I mean by 'logical contradiction'. 

Comment by Emrik on Open Thread: Spring 2022 · 2022-04-24T22:53:48.500Z · EA · GW

Exactly! Thanks a lot.

Comment by Emrik on EA coworking/lounge space on gather.town · 2022-04-23T03:34:33.652Z · EA · GW

If cost is a problem, I could definitely contribute up to $200/month. But I expect that if we get 25 concurrent users, I'm not the only one interested in funding the project. Having an online EA hub like that could be extremely valuable.

Comment by Emrik on Open Thread: Spring 2022 · 2022-04-23T00:05:46.638Z · EA · GW

Are there like some statistics on this forum? Particularly the distribution of votes over posts?

Comment by Emrik on Project: A web platform for crowdsourcing impact estimates of interventions. · 2022-04-22T21:12:55.179Z · EA · GW

I'm in favour of the project, but here's a consideration against: making people in the community more confident about what the community thinks about a subject can potentially be harmfwl.

Testimonial evidence is the stuff you get purely because you trust another reasoner (Aumann-agreement fashion), and technical evidence is everything else (observation, math, argument).

Making people more aware of testimonial evidence will also make them more likely to update on it, if they're good Bayesians. But this also reduces the relative influence that technical evidence has on their beliefs. So although you are potentially increasing the accuracy of each member's beliefs, you are also weakening the link community opinion has to technical evidence, and that leaves us more prone to information cascades and slower to update on new discoveries/arguments.

But this is mainly a problem for uniform communities where everyone assigns the same amount of trust to everyone else. If, on the other hand, we have "thought leaders" (highly trusted researchers who severely distrust others' opinions and stubbornly refuse to update on anything other than technical evidence), then their technical-evidence grounded beliefs can filter through to the rest of the community, and we get the best of both worlds.
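
Here's a toy DeGroot-style sketch of that last claim (my own illustration with invented trust weights, not anything from the post): beliefs update as a trust-weighted average of everyone's beliefs, and one stubborn expert who only updates on technical evidence ends up anchoring the whole community to it.

```python
# DeGroot-style updating: each round, every agent's belief becomes a
# trust-weighted average of everyone's beliefs. Agent 0 is the "thought
# leader" who ignores all testimony; everyone else trusts uniformly.
import numpy as np

n = 5
trust = np.full((n, n), 1.0 / n)  # row i = how much agent i trusts each agent
trust[0] = 0.0
trust[0, 0] = 1.0                 # the leader only listens to themselves

# Agent 0's belief is grounded in technical evidence; the rest start scattered.
beliefs = np.array([0.9, 0.2, 0.3, 0.6, 0.5])

for _ in range(50):
    beliefs = trust @ beliefs

print(beliefs.round(3))  # everyone converges to 0.9, the leader's belief
```
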

Comment by Emrik on Free-spending EA might be a big problem for optics and epistemics · 2022-04-17T01:48:47.039Z · EA · GW

There are two opposing arguments: 1) you get more information about your friends than you get about strangers, and 2) you are more likely to be biased in favour of your friends.

Personally, I think it would be very hard to vet potential funding prospects over just having a few talks, and the fact that I've "vetted" my friends over several years is a wealth of information that I would be foolish to ignore.

Our intuitions on this may diverge based on how likely we think it is that we've acquired exceptional friends. If you're imagining childhood friends or college buddies, then I see why you would be skeptical. If on the other hand you're imagining the friends you've acquired from activities that you think only exceptional people would engage in, then that changes things.

Comment by Emrik on Free-spending EA might be a big problem for optics and epistemics · 2022-04-15T04:08:53.389Z · EA · GW

Being friends with someone is also a great way of learning about their capabilities, motivations, and reliability, so I think it could be rational for rich funders to give grants to their friends more than to strangers.

Comment by Emrik on Free-spending EA might be a big problem for optics and epistemics · 2022-04-15T04:03:51.764Z · EA · GW

FWIW, I think it'd be pretty hard (practically and emotionally) to fake a project plan that EA funders would be willing to throw money at. So my prior is that cheating is rare and an acceptable cost of being a high-risk funder. EA is not about minimising crime, it's about maximising impact, and before we crack down on funding we should check our motivations. I don't want anyone to change their high-risk strategy based on hearsay, but I do want our top funders to be on the lookout so that they might catch a possible problem before it becomes rampant.

I like the culture-aligning suggestions for other reasons, though. I think the long-term future will benefit from the EA community remaining aligned with actually caring about people.

Comment by Emrik on EA coworking/lounge space on gather.town · 2022-04-12T19:27:20.022Z · EA · GW

Cool! I'll try to stay online when I can. If you see me online, feel free to walk up to me and chat. I'll let you know if I'm too busy to talk. I'd like to know what other EAs are up to, and talk about what I'm up to.

Comment by Emrik on [deleted post] 2022-03-15T01:01:48.739Z

Definitely, if you have ways to filter for what people are interested in, that seems like a much better way to find the kinds of people who are most ripe for EA ideas. But sometimes age is a free variable you can filter on after you have filtered for other things, which is why I believe this question is usefwl and I want to know more about it.

Comment by Emrik on [deleted post] 2022-03-14T21:32:43.290Z

I don't see the moral problem with trying to inspire young people to learn about something you believe is important. If you're trying to inspire them to learn about the wrong things, then I think that could be bad because the things are wrong. But at least if there are multiple groups doing this, then it teaches them that there are multiple perspectives on the world sooner rather than later, and that seems good.

Also, thanks for linking the post! This is very usefwl.

Comment by Emrik on Some thoughts on EA outreach to high schoolers · 2022-03-14T21:05:02.586Z · EA · GW

I worry more about how language-policing might make people in the community hesitant to do more outreach, for fear that the community will think that they're insensitive. Let's laugh at people who think using the word "convert" is insensitive, rather than making people more hesitant to do outreach. Some "reputational damage" seems good when it makes us stand out more.

If EA had seemed too professional (in the sense of policing their language, making sure they don't offend anyone, etc.) when I first heard of it, I would have been reluctant to believe they were capable of thinking for themselves or doing anything efficiently.

Comment by Emrik on Emrik's Shortform · 2021-10-28T19:16:15.093Z · EA · GW

It'd be cool if the forum had a commenting feature similar to Google Docs, where comments and subcomments are attached directly to sentences in the post. Readers would then be able to opt in to see the discussion for each point on the side while reading the main post. Users could also choose to hide the feature to reduce distractions.

For comments that directly respond to particular points in the post, this feature would be more efficient (for reading and writing) relative to the current standard since they don't have to spend words specifying what exactly they're responding to.

Comment by Emrik on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-26T21:41:45.773Z · EA · GW

I didn't read this short story as supporting cancel culture at all. To me, the good guys in this story are the people who advocate for recognising that people can have both good and bad sides. And the main point of celebration is that they're talking about factory farming as a troubling past history, just like they talk about slavery today. Did you read it differently?

Comment by Emrik on Remove An Omnivore's Statue? Debate Ensues Over The Legacy Of Factory Farming · 2021-10-26T21:31:50.804Z · EA · GW

Love this! Brilliant.

Also, "renown" -> "renowned".

Comment by Emrik on Is it crunch time yet? If so, who can help? · 2021-10-13T06:42:58.882Z · EA · GW

Yes, I think it's crunch time.

But I'd be very hesitant to advocate in general for people to sacrifice more stuff to work hard on the most urgent problems. People vastly overestimate the stability of their motivation and mental life. If you plan your life on the assumption that you'll always be as motivated as you are right now, you'll probably achieve less than if you take some precautions.

I'd say plan for at least 20 years of productivity. This means you want to build relationships with people who support you, invest in finding good down-time activities to keep you refreshed, and avoid burning yourself out. Be ambitious! Test your limits until you crash, but make sure you can recover and learn from it rather than taking permanent damage.

Do I think EAs would, on average, do better with more self-sacrifice or less? It varies, and it's important enough that I think advice should be more granular than just "do more".

Comment by Emrik on Introducing Training for Good (TFG) · 2021-10-09T04:31:43.390Z · EA · GW

What's the case for thinking that grantmaking skills are a bottleneck?

Comment by Emrik on The Cost of Rejection · 2021-10-08T22:26:29.216Z · EA · GW

There are a bunch of illegible factors involved in hiring the right person, though. If the reason for rejection is something like "we think you'd be a bad culture fit," then it seems legally risky to be honest.

Comment by Emrik on Emrik's Shortform · 2021-10-08T02:41:27.197Z · EA · GW

Forum suggestion: An option to publish your post as "anonymous" or blank, which then reverts to reveal your real forum name after a week.

This would be an opt-in feature that lets new and old authors gain less biased feedback on their posts, and lets readers read the posts with less of a bias from how they feel about the author.

At the moment, information cascades amplify the number of votes established authors get based on their reputation. This has both good (readers are more likely to read good posts) and bad (readers are less likely to read unusual perspectives, and good newbie authors have a harder time getting rewarded for their work) consequences. The anonymous posting feature would redistribute the benefits of cascades more evenly.

I don't think the net benefit is obvious in this case, but it could be worth exploring and testing.

Comment by Emrik on [deleted post] 2021-10-02T18:06:31.716Z

What about just keeping it full until you end up in a position where it might start being a problem if you're doxed? Also, if you choose to use "JackM" and you apply to EA jobs, you could always just point out that that's your name on the forum.

Comment by Emrik on What's something that every EA community builder should have thought about? · 2021-10-01T13:02:12.003Z · EA · GW

You explained what you meant, anticipated my objection, and provided a follow-up. I largely agree. Thank you!

Comment by Emrik on What's something that every EA community builder should have thought about? · 2021-09-30T23:27:20.089Z · EA · GW

"EA is an aggressive set of memes. Best handled by people firmly grounded in some other community or field or worldview."

What do you mean?

Comment by Emrik on Independent impressions · 2021-09-27T18:40:32.434Z · EA · GW

On the social-epistemological point: Yes, it varies by context.

One thing I'd add is that I think it's hard to keep inside/outside (or independent and all-things-considered) beliefs separate for a long time. And your independent beliefs are almost certainly going to be influenced by peer evidence, and vice versa.

I think this means that if you are the kind of person whose main value to the community is sharing your opinions (rather than, say, being a fund manager), you should try to cultivate a habit of mostly attending to gears-level evidence and to some extent ignore testimonial evidence. This will make your own beliefs less personally usefwl for making decisions, but will make the opinions you share more valuable to the community.

Comment by Emrik on Independent impressions · 2021-09-27T18:30:42.426Z · EA · GW

Agree on all points, but inside/outside is catchier! Might ride the inside-jargon group-belonging-signalling train into norm fixation.

Comment by Emrik on Independent impressions · 2021-09-27T16:51:32.210Z · EA · GW

I like the words inside beliefs and outside beliefs, almost-but-not-quite analogous to inside- and outside-view reasoning. The actual distinction we want to capture is "which beliefs should we report in light of social-epistemological considerations" versus "which beliefs should we use to make decisions to change the world".

Comment by Emrik on Should Grants Fund EA Projects Retrospectively? · 2021-09-23T12:56:57.803Z · EA · GW

I think retrospective funding is a really good norm to encourage, but the benefits only really start accruing once it's relatively widespread. Once it's common knowledge that one may get paid for self-initiated quality work, there's a very wide-but-weak incentive applied across the whole community. The norm-building itself, however, is a stag hunt with upfront costs.

Notice that this incentive affects the poorest of us most, and the people who don't already have jobs in EA. This means that it fulfills a funding niche that isn't already covered neatly by regular jobs and contract-based funding. (I'm especially thinking about those of us without jobs due to disability, but we'd still like to be paid for contributions during our uptime.)

I think the fact that so many EAs have independently arrived at this idea is strong evidence in favour of there being a real potential for good here.

 • I have previously independently arrived at the idea.

 • Linda Linsefors wrote about it.

 • Ben Kuhn, Paul Christiano, and Katja Grace wrote about it in the form of impact certificates.

 • Remmelt Ellen wrote about it and has a Patreon page.

 • Dony Christie has one too.

(If anyone knows any other EAs with Patreons (or similar), I'd like to know!)

I think we are the tip of the iceberg when it comes to who could potentially benefit from it being a universal expectation that if you do good work for the EA community, you can get paid for it.

Additionally, I think a norm for retrospective funding could open up occasional really high-impact giving opportunities for small-scale donors.

Comment by Emrik on [deleted post] 2021-09-23T11:28:57.128Z

I think your proposed solution is kinda dumb (not meant offensively about you! original thinking will always involve risks, and I appreciate you taking them), but I love the direction you're thinking in! I like this contribution.

Why I think your solution doesn't work: Empirically, I don't think mobs forgive people after they've served their sentences or been fined.

But I think you're so, so right about a need for a proper "path to redemption" in our culture. This is personal for me. I've been a bad person, and I'm racked with guilt about it. In April, I wrote:

I really wish we had a secular religious-like institution for Redemption where people can go and get soundly punished for what they've done wrong (e.g. flogging, torture, confinement, whatever), and then it's publicly announced that this person has *paid* their toll as judged by the institution. If anyone doubts this, you can just show them your scars. And importantly, you can't be punished by this institution against your will. You would go there voluntarily and suggest the severity of your own punishment, and they get to judge whether they see it as adequate. If you're just subjected to your punishment against your will, how's that indicative of regret? It acts as a costly signal that you understand what you've done wrong and that you want to do better. You yourself would get to decide how costly a signal you want to send, and it is not the institution's job to say 'it is too much'.

To be fair, this is also kind of a dumb solution. But the important point is that a path to redemption must be voluntary. Forcing punishment on someone doesn't signal that they've changed their ways.

Comment by Emrik on Should Grants Fund EA Projects Retrospectively? · 2021-09-23T11:06:54.446Z · EA · GW

I would love to understand this comment (I'm very interested in retrospective funding ideas), but I don't currently. Could you perhaps go back a few inferential steps or link to relevant posts?

Comment by Emrik on Emrik's Shortform · 2021-09-23T00:39:07.156Z · EA · GW

And to respond to your question about what I meant by "menial labour": I was being poetic. I just mean that I feel like EA places a lot of focus on the very most high-status jobs, and I've heard friends despairing at having to "settle" for anything less. I sense that this type of writing might not be the norm for EA shortform, but I wasn't sure.

Comment by Emrik on Emrik's Shortform · 2021-09-23T00:32:08.413Z · EA · GW

Nono, I'm not trying to point to a problem of EAs trying to make others feel unwelcome or dumb. I think EA is extremely kind, and almost universally tries hard to make people feel welcome. I'm just pointing to the existence of an unusually strong intellectual pressure, perhaps combined with lots of focus on world-saving heroes and talk about "what should talented people do?"

I think ambition is good, but I think we can find ways of encouraging ambition while also mitigating at least some of the debilitating intelligence-dysphoria many in our community suffer from.

I'm writing this in reaction to talking to three of my friends who suffer under the intellectual pressure they feel. (Note that the following are all about the intellectual pressure they get from EA, and not just in general due to academic life.)

Friend1: "EA makes me feel real dumb XD i think i feel out of place by being less intelligent"

_

Friend2: "I’m not worried that I’m not smart, but I am worried that I am not smart enough to meet a certain threshold that is required for me to do the things I want to do. ... I think I have very low odds of achieving things I deeply want to achieve. I think that is at least partially responsible for me being as extremely uncomfortable about my intelligence as I am, and not being able to snap out of it."

_

Me: "Do you ever refrain from trying to contribute intellectually because you worry about taking up more attention than it's worth?"

Friend3: "hmm, not really for that reason. because I'm afraid my contribution will be wrong or make me look stupid. wrong in a way that reflects negatively on me-- stupid errors, revealing intellectual or character weakness."

_

Some of this is a natural and unavoidable result of the large focus EA places on intellectual labour, but I think it's worse than it needs to be. I think some effort to instil some "ordinary EA dignity" into our culture wouldn't hurt. I might have a skewed sample, however.

Comment by Emrik on Emrik's Shortform · 2021-09-22T23:15:47.454Z · EA · GW

Correct me if I'm wrong, but I think in Christianity, there's a lot of respect and positive affect for the "ordinary believer". Christians who identify as "ordinary Christians" feel good about themselves for that fact. You don't have to be among the brightest stars of the community in order to feel like you belong.

I think in EA, we're extremely kind, but we somehow have less of this. Like, unless you have two PhDs by the age of 25 and you're able to hold your own in a conversation about AI alignment theory with the top researchers in the world... you sadly have to "settle" for menial labour with impact hardly worth talking about. I'm overstating it, of course, but am I wrong?

I'm not saying ambition is bad. I think shooting for the stars is a great way to learn your limits. But I also notice a lot of people suffering under intellectual pressure, and I think we could collectively be more effective (and just feel better) if we had more... room for "ordinary folk dignity"?