Posts

Winners of the EA Criticism and Red Teaming Contest 2022-10-01T01:50:09.257Z
Reasoning Transparency 2022-09-28T12:22:00.465Z
9/26 is Petrov Day 2022-09-25T23:14:32.296Z
EA Organization Updates: September 2022 2022-09-14T15:50:34.202Z
Agree/disagree voting (& other new features September 2022) 2022-09-07T11:07:45.382Z
Who are some less-known people like Petrov? 2022-09-06T13:22:11.040Z
Celebrations and gratitude thread 2022-09-02T16:25:14.053Z
Notes on the Forum today 2022-09-01T14:18:26.324Z
Epistemic status: an explainer and some thoughts 2022-08-31T13:59:14.967Z
EA Organization Updates: July-August 2022 2022-08-15T15:54:28.950Z
EA Global Meetings Can Be Short 2022-07-31T19:26:33.905Z
EA Organization Updates: June-July 2022 2022-07-14T13:04:50.809Z
Vox on cash transfers vs "graduation programs" for fighting extreme poverty 2022-07-07T15:09:43.906Z
Import your EAG(x) info to your profile & other new features (Forum update June 2022) 2022-07-01T15:46:05.925Z
Kurzgesagt - The Last Human (Longtermist video) 2022-06-28T20:16:44.930Z
Examples of someone admitting an error or changing a key conclusion 2022-06-27T15:37:00.843Z
Open Thread: June — September 2022 2022-06-22T11:24:28.949Z
You don’t have to respond to every comment 2022-06-20T16:58:00.975Z
GiveDirectly to Administer Cash Grants Pilot in Chicago 2022-06-20T12:46:14.311Z
EA Organization Updates: May-June 2022 2022-06-16T09:15:46.916Z
Follow and filter topics (& an update to the “Community” topic) 2022-06-09T14:36:27.669Z
Notes on impostor syndrome 2022-06-06T10:56:53.011Z
Announcing a contest: EA Criticism and Red Teaming 2022-06-01T18:58:55.510Z
Resource for criticisms and red teaming 2022-06-01T18:58:37.998Z
Stop scolding people for worrying about monkeypox (Vox article) 2022-05-31T08:37:26.788Z
Who wants to be hired? (May-September 2022) 2022-05-27T09:49:53.065Z
Who's hiring? (May-September 2022) 2022-05-27T09:49:35.554Z
Link-posting is an act of community service 2022-05-16T18:25:23.830Z
Against “longtermist” as an identity 2022-05-13T19:17:36.699Z
Results from the First Decade Review 2022-05-13T15:01:53.698Z
EA Organization Updates: April-May 2022 2022-05-12T14:38:04.882Z
How to use the Forum (table of contents) 2022-05-05T18:29:59.036Z
You should write on the EA Forum 2022-04-29T14:55:00.000Z
Forum user manual 2022-04-28T14:05:15.280Z
Guide to norms on the Forum 2022-04-28T13:28:08.591Z
A Bayesian framework for interpreting impact evaluations 2022-04-28T12:27:51.015Z
Forum Digest: reminder that it exists & request for feedback 2022-04-28T10:01:24.010Z
Help with the Forum; wiki editing, giving feedback, moderation, and more 2022-04-20T12:58:22.861Z
The accidental experiment that saved 700 lives (IRS & health insurance) 2022-04-19T13:28:56.242Z
EA should taboo "EA should" 2022-03-29T09:07:11.251Z
Is misinformation a serious problem, and is it tractable? 2022-03-28T13:49:48.888Z
Pre-announcing a contest for critiques and red teaming 2022-03-25T11:52:32.174Z
A Forum post can be short 2022-03-22T11:13:55.601Z
Distillation and research debt 2022-03-15T11:45:38.061Z
Hello from the new Content Specialist at CEA 2022-03-08T12:08:33.409Z
The best EA Global yet? (And other updates.) 2022-02-13T16:00:53.185Z
[Updated] EA conferences in 2022: save the dates 2022-02-09T16:36:20.221Z
Vote on posts for the Decade Review (Deadline: February 1) 2022-01-24T18:09:00.768Z
What are some artworks relevant to EA? 2022-01-17T01:54:04.361Z
Native languages in the EA community (and issues with assessing promisingness) 2021-12-27T02:01:27.929Z

Comments

Comment by Lizka on This post is a Work-In-Progress · 2022-10-03T15:55:17.744Z · EA · GW

There do seem to be bugs around this post; I'm not really sure what's going on, but I'm flagging it to the rest of the team. I marked this post as "Personal" though — I hope that works as it should! 

(Thanks for trying this!) 

Comment by Lizka on Does Economic Growth Meaningfully Improve Well-being? An Optimistic Re-Analysis of Easterlin’s Research: Founders Pledge · 2022-10-01T02:52:13.197Z · EA · GW

I'm curating this post — thank you so much for writing it. 

I agree with other commenters that replication is extremely precious, and I think this post chooses an excellent work to replicate — something that is quite influential for discussions about whether we should prioritize economic growth or more direct types of global health and wellbeing interventions. (Here's a pretty recent related piece by Lant Pritchett.) I also really appreciate that the conclusion about economic growth seems to rely on three very different but independently strong arguments (straightforward estimation of impact given Easterlin's values, noting that the conclusions are very sensitive to small tweaks in the methodology, and suggesting that GDP interventions might be a better approach to improving wellbeing even if Easterlin's interpretations are accurate). 

Re: the discussion on tractability, I want to note that most problems [seem to] fall within a 100x tractability range (assuming that effort on the problems has ~logarithmic returns, which seems very roughly reasonable for, say, research on economic growth or better global health interventions). ("For a problem to be 10x less tractable than the baseline, it would have to take 10 more doublings (1000x the resources) to solve an expected 10% of the problem. Most problems that can be solved in theory are at least as tractable as this; I think with 1000x the resources, humanity could have way better than 10% chance of starting a Mars colony, solving the Riemann hypothesis, and doing other really difficult things.") If I'm interpreting things correctly, I think this means a more plausible reason other interventions might be more impactful is if they're much more neglected (rather than much more tractable). Alternatively, we should simply not expect them to be more impactful. (Disclaimer: I read the tractability post quite a while back, didn't follow the links in this post, and didn't try very hard to understand the parts that I didn't understand after a first read. I also don't have any proper expertise in economics, so I might be getting things significantly wrong. I'm also writing quickly while tired.)
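
(For concreteness, a rough sketch of the arithmetic behind the quoted claim, assuming — as I read the linked post — that tractability is measured as the expected percentage of a problem solved per doubling of resources:

    Baseline: ~10% of the problem solved per doubling of resources.
    A problem 10x less tractable: ~1% per doubling.
    Reaching an expected 10% then takes ~10 doublings, i.e. 2^10 ≈ 1024, or roughly 1000x the resources.

Under this framing, tractability differences alone would shift cost-effectiveness by at most ~100x across most problems, which is part of why much larger differences in neglectedness seem like a more plausible driver of large differences in impact.)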

Finally, for those who like Our World in Data charts (and for those who'd appreciate a reference on what we should expect in terms of the relationship between GDP and measures of happiness) — here's a chart showing self-reported life satisfaction vs GDP per capita in different countries (note that this is different from Easterlin's approach for the paradox, which looks at differences in GDP and happiness within countries over time): 

Comment by Lizka on Lizka's Shortform · 2022-09-29T23:29:38.558Z · EA · GW

Here are slides from my "Writing on the Forum" workshop at EAGxBerlin. 

Comment by Lizka on Puzzles for Everyone · 2022-09-24T21:43:13.771Z · EA · GW

I'm curating this post. I think it helps fight a real confusion — the idea that utilitarianism (or consequentialism) is the only moral theory that needs to grapple with extremely counterintuitive (or "repugnant") conclusions. 

As the author writes, "gaps in a theory shouldn’t be mistaken for solutions." (I'm not, however, nearly as confident that consequentialism is ultimately the best answer.)

I also want to highlight:

Comment by Lizka on leopold's Shortform · 2022-09-23T16:14:35.578Z · EA · GW

Hi! We do have footnotes.[1] Are you asking for something more specific (e.g. a different way of inserting them)? 

We're also currently facing a bug that makes it impossible to add bullet points or numbered lists in footnotes. 

[Edit: I apparently had an old tab open — Linch has already answered. :) ]

  1. ^

    (Here's an example.)

Comment by Lizka on EA Forum feature suggestion thread · 2022-09-23T00:21:55.752Z · EA · GW

Thanks for sharing this! I've passed this on to the rest of the team. I agree that this would be useful. 

Comment by Lizka on My take on What We Owe the Future · 2022-09-14T17:51:32.644Z · EA · GW

I'm curating this post. [Disclaimer: writing quickly.] 

Given the amount of attention that What We Owe the Future is getting, it's important to have high-quality critical reviews of it. Here are some things I particularly liked about this one: 

  • I appreciate the focus on clarity around beliefs. [Related: Epistemic legibility.] 
  • I think the section on "What I like" is great (and I agree that the Significance/Persistence/Contingency framework is a really useful tool), and having that section was important. More broadly, the review is quite generous and collaborative. 
  • I really like that there are concrete forecasts and credences and that the criticisms of or disagreements with the book are specific. Relatedly, the post is careful to link to specific sources and cite relevant passages. 
  • The review doesn't nitpick and look for minor errors; it focuses on serious disagreements. 
  • The post is action-relevant (do you give someone The Precipice or WWOTF?).
  • I really like the discussion in the comments of this post. (And I like that commenters pointed out errors that the author then edited.)
  • There are summaries and headings, which makes the post skimmable and easier to navigate for people who want to only read a particular section. 

I should clarify that I disagree with some aspects of the review, but still think it makes lots of true and relevant claims, and appreciate it overall. (E.g. I agree with a commenter that WWOTF does a great job "bringing longtermism into the Overton window," and that this is more important than the review acknowledges it to be.) 

Comment by Lizka on [deleted post] 2022-09-13T11:19:46.931Z

A comment aimed at readers, authors, and commenters alike:

Please try not to misgender people, but also don’t assume ill intent if someone does; a correction is appreciated, but insinuations of ill intent are not. The post currently has correct pronouns, and we’re considering this topic closed in this case. 

On deadnames, I currently agree with Lukas Gloor’s comment:

In this instance, the "formerly X" seems quite relevant because of Torres's history in EA. If I was the OP, I wouldn't immediately know how to unambiguously make the point that we're talking about the person who made all these crazy bad-faith accusations against EA without something like "formerly X." (Of course, I'd see no need to mention "formerly X" if Torres was entirely new to EA or didn't have a public persona beforehand.) 

If you know of a better way to handle this issue with previous EA involvement, maybe it would be helpful for others to post a suggestion. 

Comment by Lizka on [deleted post] 2022-09-13T10:26:43.539Z

The discussion on this post is getting heated, so we'd like to remind everyone of the Forum norms. Chiefly: 

  • Be kind.
  • Stay on topic.
  • Be honest.

If you don’t think you can respect these norms consistently in the comments of this post, consider not contributing, and moving on to another post. 

We’ll investigate the issues that are brought up to the best of our ability. We’d like to remind readers that a lot of this is speculation.

Comment by Lizka on Announcing a contest: EA Criticism and Red Teaming · 2022-09-07T19:28:06.055Z · EA · GW

Just a quick update: we got more submissions than we were expecting, and a number of the panelists are low-capacity right now. We're still targeting the end-of-September deadline, but there's a chance that we'll get delayed by a week or two.  

I apologize in advance if that ends up happening. 

Comment by Lizka on Who are some less-known people like Petrov? · 2022-09-06T17:13:18.765Z · EA · GW

Thank you! 

Comment by Lizka on [deleted post] 2022-09-06T17:11:38.528Z

Just as a reminder, the Forum does not allow doxing. The user is welcome to reach out, though. You could also message them. 

I moved this post to "Personal blog" as it does not seem obviously related to effective altruism. 

Comment by Lizka on The Base Rate of Longtermism Is Bad · 2022-09-06T09:31:31.368Z · EA · GW

Related: Hero Licensing (the title of the first section is "Outperforming the outside view"). 

Comment by Lizka on Climate Change & Longtermism: new book-length report · 2022-09-05T16:35:19.951Z · EA · GW

refrain from commenting more on these threads

My comment above was vague. Just a note to clarify: by “on these threads” we meant threads involving A.C.Skraeling. In our message to John Halstead, we wrote: “refrain from commenting on the existing threads with A.C.Skraeling.”

Comment by Lizka on What happens on the average day? · 2022-09-05T11:04:30.551Z · EA · GW

I'm curating this post. It gave me a better sense of what's really going on in the world and now exists as a very useful resource I can refer to. As the author puts it:

I think of this as a bit like a cheat sheet: some information to have in the back of my mind when reading whatever regular news stories are coming at me, to ground me in something that feels a bit closer to what’s actually going on.

The post also has a lot of other properties I admire. I've listed some below — the list isn't exhaustive or ordered in any particular way. [Disclaimer: written quickly.]

  • For one thing, it notices a real gap (a scope-sensitive news provider) and fills it as well as is possible in a "small amount of time." 
    • I think this has a similar energy to some of what Michael Aird has done, as he's explained himself (my summary: "Michael is pointing out that he noticed a need that could be filled with a bit of effort, and went ahead and filled the need").
    • Relatedly, "What happens on the average day" is a great summary/collection.
  • The post is about the real world, not just effective altruism.
  • The post has a lot of useful numbers. That's the whole point. The numbers are well picked; I think most of the metrics listed here track something important about reality and the state of life on Earth. 
    • I want more of this on the Forum, and I'll take this as an example for myself, too, and try to put more numbers in my posts in the future. (If you notice yourself saying words like "big" or "a lot" or "small" — consider trying to write that down as a number or as a specific comparison.)
    • Aside: there seems to be a large variance in how comfortable people are with numbers, but I think this is surmountable, and I encourage people to go a little out of their comfort zone if they feel like numbers aren't quite for them. With Fermi estimates, for instance, I think the first few estimates are the hardest. I'd be excited to collect ways to make this sort of thing easier for people. 
  • I love the fact that lots of Forum users came in and checked the numbers in this post, and that the author corrected them in the post in response. 
  • There are images and visualizations! I love images and Our World in Data graphs (as a reminder, you can embed them into your Forum posts) and think this is a great use of them. 

Comment by Lizka on Celebrations and gratitude thread · 2022-09-05T10:20:39.608Z · EA · GW

I'm really grateful to Our World in Data and think their work is amazing. Some examples (please feel free to add other examples to this thread): 

This type of chart (and explanations of economic growth):

This chart:

This chart:

This chart (and the related "history of global conditions"):

Comment by Lizka on Celebrations and gratitude thread · 2022-09-02T16:45:01.326Z · EA · GW

I'm excited about Squiggle (and the associated experimentation prize).

Comment by Lizka on Celebrations and gratitude thread · 2022-09-02T16:31:32.342Z · EA · GW

A recent post, "The impact we achieved to date: Animal Advocacy Careers," describes progress made by Animal Advocacy Careers — seems really cool! 

Comment by Lizka on Open Thread: June — September 2022 · 2022-09-01T10:08:34.576Z · EA · GW

I agree that this is not ideal. We're hoping to clean this up and improve the structure. 

Comment by Lizka on Open Thread: June — September 2022 · 2022-08-31T15:35:01.756Z · EA · GW

Hi Nuño, thanks for this comment. I'll be trying to write comments going forward (I wrote one for one of the posts I curated). 

Re: turning it off, I'll pass that on to the rest of the Forum team -- thanks for the feedback!

Comment by Lizka on How might we align transformative AI if it’s developed very soon? · 2022-08-30T20:41:18.528Z · EA · GW

I think this post is really valuable — I'm curating it. There seems to be a lack of serious but accessible (or at least, readable to non-experts like me) discussions of AI risk and strategy, and this post helps with this problem. I list some specific elements that I liked about the post below.

Please note that I have read this post less carefully than I would have liked, and I have no experience or expertise in AI.

Assorted things I liked about this post

First, I think my mental model of "how we make AI happen safely" improved significantly. That seems like a big win, especially since most of the AI safety content that I've read has focused on laying out arguments for why AI poses a big risk. This improvement in my mental model is both broad (I have a much better overview of the situation, at least of the near-casting version) and specific (I learned a lot; e.g. I was surprised to see that the success of AI checks and balances was listed as a key question for overall success on AI, which seems like a big update for me). More generally, this post had a very high density of learning per paragraph for me. 

Second, I really appreciated this diagram[1], variations of which appear throughout the post to orient and guide the reader:

Third, I really appreciate the clarity of the post. I don't mean that it was easy to read — it really wasn't — but rather that it puts a lot of effort into making sure that readers take away the right conclusions, rather than just trying to "sound right." E.g. I think the last section makes its position clear (if not very specific). 

Fourth, there were a number of very helpful frameworks or places where the post took a difficult concept or phenomenon and broke it down. For instance: 

  • The action risk vs. inaction risk distinction seems useful. It's also discussed elsewhere (and with warnings).
  • The discussion of risk-reducing properties was helpful: breaking alignment into honesty, corrigibility, and legibility helps me place some other things I've read and work that I'm aware of, and helps me understand better how it relates to alignment. The example of legibility was also really helpful.
  • The "accurate reinforcement" section had a fair bit of content that was new to me, but which I could follow. I really appreciated the examples and types of accurate reinforcement.
  • Similarly, the section on adversarial training had useful concrete models of how we could train out undesired behaviors (and some pitfalls).
  • I really liked the example "unusual incentive" setup in the testing section (as well as the analogy).
  • The checks and balances section had content that was basically entirely new to me. I really appreciated that section and the pitfalls outlined, as well as the countermeasures listed.
  • The "high-level factors" and key questions section was great. (I wish it had a diagram.)

Finally, the post was just somewhat fun to read. It was slower to read than many posts on the Forum, but e.g. the section on "advanced collusion" was fascinating for anyone even a bit nerdy. 

  1. ^

    I think diagrams are great. Some reasons for this: 

    - I personally understand things much better when I can see a diagram (I often draw things out before I write)

    - I think diagrams can complement plain text by providing an alternate way for readers to engage with the material — which helps accommodate different types of readers and helps check comprehension (you think you understand what was written, then read through the diagram and get a different takeaway, which forces you to check again). 

    - Diagrams provide a good condensed/overview-style reference. As you read, having the diagram in mind can help you have a sense of the road map or of how different parts of the text relate to each other. 

    I also think the creation of the diagram is a good exercise to clarify your thoughts. 

Comment by Lizka on Climate Change & Longtermism: new book-length report · 2022-08-30T19:11:19.832Z · EA · GW

Accusing anonymous or pseudonymous Forum accounts of being someone in particular (or doxing anyone) goes against Forum norms. We have reached out to John Halstead to ask that he refrain from doing so and that he refrain from commenting more on these threads.

Comment by Lizka on Climate Change & Longtermism: new book-length report · 2022-08-30T17:12:06.917Z · EA · GW

However, I doubt this would go anywhere. I suspect this is simply yet another way of ignoring people who disagree with you without thinking too hard, and relying on the combination of your name-recognition and the average EA's ignorance of climate change to buy you the 'Seems like he knows what he's talking about!'-ness you want

The moderation team feels that this is unnecessarily hostile and rude, and violates Forum norms. This is a warning; please do better in the future.

Comment by Lizka on Climate Change & Longtermism: new book-length report · 2022-08-30T17:10:23.513Z · EA · GW

Thus, all criticism must be stated in the most faux-friendly, milquetoast way possible, imposing significant effort demands on the critic and allowing them to be disregarded if they ever slip up and actually straightforwardly say that a bad thing is bad.

I'm speaking for the moderation team right now. We enforce civility on the Forum, and we don't view civility as being in opposition to criticism or disagreement.

Comment by Lizka on EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22’) · 2022-08-30T10:07:17.775Z · EA · GW

I'm so excited about this. Thanks so much for working on this!

Comment by Lizka on Open Thread: June — September 2022 · 2022-08-29T17:28:28.913Z · EA · GW

The moderation team is issuing Phil Tanny a 1-year ban for repeated violation of Forum norms (even after a warning). This user repeatedly violated our norms, and we didn't see any attempt on their part to follow the Forum's norms after we first warned them by messaging them and responding to their comments. Some examples: 

Comment by Lizka on Announcing a contest: EA Criticism and Red Teaming · 2022-08-29T10:18:07.263Z · EA · GW

11:59 pm AoE on September 1st

It's BST in the announcement post, but I messed up in this comment thread (I missed the time while skimming), so I'm now committing to AoE. Apologies for the confusion, folks!

From a different comment thread

I'd be personally grateful (and grateful in my Forum role) if people didn't wait until the last minute to post their submissions (but last-minute submissions won't be penalized in the scoring). Besides other problems, posting last-minute doesn't allow wiggle room for things to go wrong. 

And as an FYI, we're not going to be accepting any late submissions. 

Comment by Lizka on Announcing a contest: EA Criticism and Red Teaming · 2022-08-29T10:16:09.356Z · EA · GW

When re-skimming the announcement post that I myself co-wrote, I missed this too, and have now committed to being as generous as possible: "Anywhere on Earth." (Here's a live clock for AoE time.) So it's 11:59 pm AoE on September 1st.

I'd be personally grateful (and grateful in my Forum role) if people didn't wait until the last minute to post their submissions (but last-minute submissions won't be penalized in the scoring). Besides other problems, posting last-minute doesn't allow wiggle room for things to go wrong. 

And as an FYI, we're not going to be accepting any late submissions. 

Comment by Lizka on Are "Bad People" Really Unwelcome in EA? · 2022-08-26T14:18:08.583Z · EA · GW

These comments are not acceptable for the Forum, as they come too close to advocating for violence, which is not allowed (you can see the guide to norms on the Forum). The moderation team has issued a warning to the poster.

Comment by Lizka on Announcing a contest: EA Criticism and Red Teaming · 2022-08-26T12:19:27.881Z · EA · GW

We didn't specify when we posted the announcement, so let's be as generous as possible and say "Anywhere on Earth." (Here's a live clock for AoE time.) 

Comment by Lizka on Forum Digest: reminder that it exists & request for feedback · 2022-08-25T11:20:42.137Z · EA · GW

I'd love to get feedback in the comments of this post. Please feel free to share! 

Comment by Lizka on The Parable of the Boy Who Cried 5% Chance of Wolf · 2022-08-17T18:56:43.128Z · EA · GW

Thanks for linking the post — I think it's really great. 

Note also this comment from the post:

There once was a shepherd boy who lived near a small town. One day, he heard rustling in the trees and thought it had a 10% chance of being a wolf, and did nothing, and nothing came of it. On another day, he once again heard rustling in the trees and thought it had a 10% chance of being a wolf, and did nothing, and nothing came of it. On a third day, he once again heard rustling in the trees and thought it had a 10% chance of being a wolf, and did nothing, and he and all of his flock were eaten by the wolf.

There once was a shepherd boy who lived near a small town. One day, he heard rustling in the trees and thought it had a 10% chance of being a wolf, and cried out “Wolf!,” and all the neighbors came to help but found no wolf. On another day, he heard rustling in the trees and thought it had a 10% chance of being a wolf, and cried out “Wolf!,” and all the neighbors came to help but found no wolf. On a third day, he once again heard rustling in the trees and thought it had a 10% chance of being a wolf, and cried out “Wolf!,” but none of his neighbors came to help because they thought there would be no wolf, and he and all of his flock were eaten by the wolf.

Comment by Lizka on Open Thread: June — September 2022 · 2022-08-04T20:56:47.524Z · EA · GW

I'm glad people are finding the new feature useful (I've fought with Forum post footnotes before and was really excited about the development). And thanks @Lorenzo for sharing the link!

Comment by Lizka on Vox on cash transfers vs "graduation programs" for fighting extreme poverty · 2022-07-07T17:14:38.965Z · EA · GW

I really appreciate this comment, thank you!

I agree with your disappointment about the lack of any quantitative aspect, and I'm adding the paper you linked to my reading list. 

I've also been planning for a while to read selected books and papers from the Further reading/References section of "Growth and the case against randomista development," but if you have other recommendations, I'd love to hear them. 

Comment by Lizka on The Future Might Not Be So Great · 2022-07-04T20:06:03.943Z · EA · GW

The moderation team feels that phrases like “called you out on it being bullshit” aren’t constructive for this discussion (or on the Forum as a general rule). Please don’t use them.

Comment by Lizka on The Future Might Not Be So Great · 2022-07-04T20:04:30.116Z · EA · GW

Some comments in this thread are uncivil and break Forum norms. The moderation team is asking John Halstead to refrain from adding more to this thread.

Comment by Lizka on The Future Might Not Be So Great · 2022-07-04T20:02:57.953Z · EA · GW

A comment from the moderation team: 

This topic is extremely difficult to discuss publicly in a productive way. First, a lot of information isn’t available to everyone — and can’t be made available — so there’s a lot of guesswork involved. Second, there are a number of reasons to be very careful; we want community spaces to be safe for everyone, and we want to make sure that issues with safety can be brought up, but we also require a high level of civility on this Forum.

We ask you to keep this in mind if you decide to contribute to this thread. If you’re not sure that you will contribute something useful, you might want to refrain from engaging. Also, please note that you can get in touch with the Community Health team at CEA if you’d like to bring up a specific concern in a less public way. 

Comment by Lizka on Examples of someone admitting an error or changing a key conclusion · 2022-06-27T19:48:49.524Z · EA · GW

Thank you!

Comment by Lizka on Examples of someone admitting an error or changing a key conclusion · 2022-06-27T19:48:26.202Z · EA · GW

Good point, thanks! I'm really impressed, seems like a very hard switch to make. 

Comment by Lizka on Examples of someone admitting an error or changing a key conclusion · 2022-06-27T19:47:59.719Z · EA · GW

Thanks a bunch for sharing this! I think this is really cool. 

Comment by Lizka on Preventing a US-China war as a policy priority · 2022-06-24T13:30:03.497Z · EA · GW

As a moderator, I think this comment is unnecessarily rude and breaks Forum norms. Please don't leave any more comments like this or you will be banned from the Forum. 

Comment by Lizka on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-23T19:54:34.323Z · EA · GW

The moderation team is issuing Charles a 3-month ban. 

Comment by Lizka on Critiques of EA that I want to read · 2022-06-23T19:50:57.284Z · EA · GW

The issue was that we were letting people upload files as submissions. If you uploaded a file, your email or name would be shared (and we had a note explaining this in the description of the question that offered the upload option). Nearly no one was using the upload option, and if you didn't upload anything, your information wasn't shared.

Unfortunately, Google's super confusing UI says: "The name and photo associated with your Google account will be recorded when you upload files and submit this form. Your email is not part of your response," which makes it seem like the form is never anonymous. (See below.)

I removed the upload option today to reduce confusion, and hope people will just create a pseudonym or fake Google account if they want to anonymously share something that isn't publicly accessible on the internet via a link.

What the form looked like:

I don't remember what the wording of the description actually was, but it was along these lines. 

Here's what the settings for the test form look like: 

Comment by Lizka on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-22T15:36:36.933Z · EA · GW

Here are some things we think break Forum norms: 

  • Rude/hostile language and condescension, especially from Charles He
  • Gwern brings in an external dispute — a thread in which Charles accuses them of doxing an anonymous critic on LessWrong. We think that bringing in external disputes interferes with good discourse; it moves the thread away from discussion of the topic in question, and more towards discussions of individual users’ characters.
  • The conversation about the external dispute gets increasingly unproductive

The mentioned thread about doxing also breaks Forum norms in multiple ways. We’ve listed them on that thread. 

The moderators are still considering a further response. We’ll also be discussing with both Gwern and Charles privately.

Comment by Lizka on EA Forum feature suggestion thread · 2022-06-22T15:27:10.224Z · EA · GW

The moderators feel that several comments in this thread break Forum norms. In particular: 

  • Charles He points out that Gwern has doxed someone on a different website, LessWrong, seemingly in response to criticism. We’re not in a position to address this because it happened outside the EA Forum and isn't about a Forum user, but we do take this seriously and wouldn’t have approved of this on the EA Forum.
  • However, we feel that Charles’s comment displays a lack of care and further doxes the user in question since the comment lists the user’s full name (which Gwern never listed). Moreover, Charles unnecessarily shares a vulnerability of LessWrong.

We’ve written to Charles about this, and we’re discussing further action. We’ve also explicitly added that doxing is not allowed or tolerated on the EA Forum, although we think this behavior was already banned or heavily discouraged as a corollary of our existing “Strong norms.”

Comment by Lizka on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-21T15:17:59.752Z · EA · GW

The moderators feel that some comments in this thread break Forum norms and are discussing what to do about it.

Comment by Lizka on How bad would nuclear winter caused by a US-Russia nuclear exchange be? · 2022-06-21T13:06:19.782Z · EA · GW

This comment is not civil, and this sort of discourse is not appropriate for the Forum. The moderation team will issue a ban on the poster if we see this activity again.

Comment by Lizka on EA Organization Updates: May-June 2022 · 2022-06-20T12:40:15.009Z · EA · GW

Thanks for asking! Unless I got things wrong when I was transferring the Google Doc to the Forum post, there wasn't anything from M-Z or from I-M. (Some organizations on the list didn't have an update this month, apparently, and also the list of organizations is pretty early-alphabet-heavy.)

Comment by Lizka on You Don't Need To Justify Everything · 2022-06-15T12:19:30.146Z · EA · GW

Thanks for posting this! I really appreciate it. 

I want to highlight some relevant posts: 

I think they're especially relevant for this section:

But possible self-defeating dynamics aren’t the only issue. Another is that pressure to justify everything can cause people to come up with justifications after the fact. What do you do if you only heard about EA six months ago, and already made some decisions beforehand? What do you do if you just didn’t think very hard about working at a random tech company rather than something “more EA”? What if you just went to the party because you wanted to?

Justifying, in these cases, is also a way to get practice... in motivated reasoning.

Comment by Lizka on Announcing a contest: EA Criticism and Red Teaming · 2022-06-15T09:44:28.839Z · EA · GW

I'd be happy to see this kind of process, and don't think it's against the rules of the contest. You might not want to tag early versions with the contest tag if you don't expect them to win and don't think panelists should bother voting on them, but tagging the early versions wouldn't count against you for the final version. 

On a different note (taking off my contest-organizer hat, putting on my Forum hat): I think people should feel free to post butterfly ideas with the intention of developing them further. The Forum exists in part for this kind of communal idea development. (Of course, this isn't the best approach for certain kinds of idea development. In particular, it might make sense to do some basic research on the Forum before posting certain questions or starting to write something long on a topic you're very unsure about.)