Posts

Charles He's Shortform 2021-10-20T04:31:23.301Z

Comments

Comment by Charles He on Why does AGI occur almost nowhere, not even just as a remark for economic/political models? · 2022-10-03T16:45:59.639Z · EA · GW

I don't usually add this, but I'm writing it because this person seems to be setting themselves up for a bit of a public-figure role in EA, and mentions credentials a bit:

Some of the content in previous comments and in this comment isn't a great update with regard to those goals, and I would tap the brakes here. In this comment, it's not the general take itself (negative takes on economics are fine, and good ones are informative), but the intellectual depth and probable quality behind these specific ideas ("Rational Man hypothesis, rapid convergence on Nash equilibrium play in complex games, importance of positional goods in advanced market economies, continuity in rates of economic growth") that pattern-matches poorly.

Comment by Charles He on Why does AGI occur almost nowhere, not even just as a remark for economic/political models? · 2022-10-03T16:42:09.117Z · EA · GW

There's a lot going on here, but this person's take on economics seems bad, and it's also a reductive take that is very common. 

To calibrate, this take is actually incredibly similar to how someone critiquing EA would say "all EA is obsessed with esoteric philosophical arguments and captured by AI and the billionaire donors"[1].

Reasons:

  • A decent chunk of economics is concerned with meta-economics and disliking economics. These ideas are held by many of the key figures.
  • In addition to these negative meta views, which are common, entire subdisciplines or alt schools are concerned with worldview changes in economics
    • See behavioral economics (which is, well, sort of not super promising, because it seems to be a repackaging of anecdotes/psychology)
    • See heterodox economics (which also does poorly basically for similar reasons as above, as well as challenges with QA/QC/talent supply, because diluting disciplines wholesale doesn't really work)
  • As a plus, economics has resisted the extremes of the culture wars and kept its eye on the work, while internally, in the view of most students and some faculty, IMO proportionately giving disadvantaged people some equity or a leg up (obviously not completely). 
    • The effort/practice given to diversity is pretty similar to the level EA orgs have chosen and I suspect that's not a coincidence.
  • Economics has avoided the replication crisis, the event which drives a lot of negative opinion about mainstream science in EA
    • To me it's obvious why economics would, but it's very hard to communicate, e.g. without showing EAs the environment in an empirical seminar (say, an argument between senior faculty in labor economics)
  • The amount of respect senior/mainstream economists give to reality, and to talking to people on the ground in empirical matters, is large, and many ideas about unobserved/quality/social models have come out of this (although these ideas themselves can be attacked as repackaging the obvious)
  • Some work in economics, like environmental economics (not the same as "ecological economics", which is one of the unpromising heterodox schools) and practical work like kidney exchange (Al Roth), is highly cherished by almost all economists
  • Health economics and development economics are basically the cornerstone of the most concrete/publicized sector of EA, that is, global health, e.g. GiveWell. 

There's (a lot) more but I have to do some work and I got tired of writing

  1. ^

    The issues with economics are similar to/literally isomorphic to/in one case identical with those of EA (the math, dissenting subcultures, and decision theory). I always wanted to write this up, but it seemed ancillary, hard to do well (for the social-reality reasons), and embarrassing in multiple ways.

Comment by Charles He on Winners of the EA Criticism and Red Teaming Contest · 2022-10-03T04:52:03.549Z · EA · GW

This post has been zapped off the forum (it's downvoted so heavily that few will see it):

I don't like the post, but I think people should know it exists (for a few days or something), so I'm posting it here.

(Commenting in this post because, why not?)

https://forum.effectivealtruism.org/posts/ALMX8BbR5HfhwbPwp/

Comment by Charles He on To those who contributed to Carrick Flynn in Oregon CD-6 - Please help now · 2022-10-03T02:07:24.809Z · EA · GW

> The choice of readers of this forum to fund Flynn in the primary plus the massive influx of money from PACs guided by EA principles made it more likely that she will lose this general election...
> 
> Unfortunately, the contributions you made plus the many millions from the PAC funded by Bankman-Fried required Salinas supporters to dig deeply in their pockets to respond to a tsunami of ads, including untrue attacks.

I think giving persuasive detail about this claim would contribute to the goals of this ask, especially given how central this idea is to this post.

  • What special actions did Salinas perform, or have performed for her, that resulted in depletion, in response to the Flynn candidacy? For example, did she spend funds or deploy favors that were earmarked for the general? 
    • If Salinas has been depleted, can you explain or illustrate this with some anecdotal information (a few sentences/paragraphs) that show why Salinas has been put in a worse place?
  • Are you suggesting that election funding resembles a fixed pot of spending each cycle? Doesn't the perceived closeness of elections and other specific factors strongly drive election spending? 


In general, can you explain how people from EA did harm, and what EA's duty of care is here as a result of those actions?

  • Let's say that EA did not donate money and instead, Flynn did well because EAs got out the vote and worked hard for him. Salinas had to spend a lot of money as a result. Would EAs need to compensate the winning candidate Salinas then? 
    • If in the scenario above, you don't think EAs need to compensate Salinas, then why would the use of money be a special case, compared to the other substantial resources used, and how fundamental money is to US political campaigns, including Salinas (exemplified by your post)?
    • In all close primaries, are the losing candidate's supporters expected to compensate the winner?
       
  • Are you asking individual EAs who made personal donations, or are you asking SBF for money? 
    • If you're asking EAs personally for money, can you explain/show why these individual donations were abnormal, and if they were abnormal, why they were "bad"? 
    • As an aside, basically no individual, ordinary EA "wanted or approved" the 8-figure donation strategy.
       
  • In the aftermath, isn't it common to believe/speculate that much of the SBF money was neutral or negative to the Flynn campaign?
    • The other candidates orchestrated a major press event, uniting against the Flynn campaign
    • The regional paper began a sustained series of hostile, critical takes on Flynn.
    • Doesn't it seem plausible that the absolute, net result of the SBF money, was a strong reaction that unified and concentrated support behind Salinas, who has a powerful narrative that embodies many traits (background, political career) that Flynn lacked in his narrative?
      • There was at least one other well-funded candidate (millions of dollars of personal wealth) who literally complained that this crowded out his strategy.
  • Didn't the House PAC give $1M, comparable to the total of the EA donation amounts? Do they need to compensate for their behavior too?
Comment by Charles He on EA Forum feature suggestion thread · 2022-10-02T04:13:29.833Z · EA · GW

??? Yeah, Reddit's design literally uses a grey background. It's darker than the EA forum.

Comment by Charles He on EA Forum feature suggestion thread · 2022-10-02T04:02:47.833Z · EA · GW

"Grey is easier" I don't think it is. Would you disagree that most publications use a white background? Could you provide at least some examples of ones that doesn't?

 

I checked two sites that you listed, FB and Stack Exchange, and they literally use a grey/off-white background. I started with these two and stopped after checking them; I suspect I'd find more. 

https://ux.stackexchange.com/questions/101876/why-not-use-darker-backgrounds-instead-of-white

The Stack Exchange site has literally answered this very question, and one answer pointed out that the site itself is off-white (although less so than grey or the EA Forum). The top answer here supports grey backgrounds: https://ux.stackexchange.com/questions/23965/is-there-a-problem-with-using-black-text-on-white-backgrounds?rq=1.

 

I already thought this opinion about the use of grey (and avoidance of high contrast) was normal/standard. Now I'm even more sure, and not knowing about it or opposing it seems sort of strange to me. 

 

"I disagree that those other sites are superior." We would have to define superior. For me, the best (most well paid) minds in UX + the most number of users are objective measures.  That doesn't mean we have to copy them, but it beckons to the familiarity factor. 

There's a lot going on here, but IMO neither of those things makes this view very promising. This is because those sites are designed for MAU/growth hacking and their audience is different (and I don't think this is some elite or niche thing). Also, since the business is multiple billions a year, you naturally get top talent. 

As an analogy, tabloids are popular and well designed for their audience, but that doesn't make them dominant design choices. I do agree that the design on average is good and things work for those sites.

 

Also, I suspect some design choices from those sites have dependencies—I think having an infinite scroll or video or picture focus would affect other design choices, such as size/position/font of text, so copying those design choices to a forum might not be appropriate without more sophistication.

 

I don't want to be disagreeable or press you too much here. Honestly I want to learn about design and different perspectives, but I don't think I am?

Some of the other things you said suggested you have strong views that seemed more personal, and also that you use some unusual color filter? This makes me speculate that you are applying a personal perspective disproportionately and ignoring "the customer", but maybe this is unfair.

Comment by Charles He on Winners of the EA Criticism and Red Teaming Contest · 2022-10-02T01:45:56.609Z · EA · GW

Thanks for the responses, they give a lot more useful context.

> (I also want to reply to your top-level comments about the evolutionary anchor, but am a bit short on time to do it right now (since for those questions I don't have cached technical answers and will have to remind myself about the context). But I'll definitely get to it next week.)
 

If it frees up your time, I don't think you need to write the above, unless you specifically want to. It seems reasonable to interpret that point on "evolutionary anchors" as a larger difference about the premise, which is not fully in the scope of the post. That difference, and the way I phrased it, is also more disagreeable/overbearing to answer, so it's less deserving of a response. 

Thanks for writing your ideas.

Comment by Charles He on About going to a hub · 2022-10-02T01:40:42.056Z · EA · GW

This seems like a really good post. It's compact, honest, and clueful about a lot of social realities. It is aware of many offsetting considerations and presents them in an appropriate way. The post must also have been easier to write than a longer, more ornate post.

Comment by Charles He on Strange Love - Developing Empathy With Intention (or: How I Learned To Stop Calculating And Love The Cause) · 2022-10-02T01:37:29.433Z · EA · GW

This post is really well written and explains a lot of ideas in an approachable way.

Comment by Charles He on EA Forum feature suggestion thread · 2022-10-02T01:30:03.632Z · EA · GW

I'm confused why the all-white background is better: grey is easier on the eyes, and the non-white color gives a natural framing to the other content. Both points seem pretty normal in design.

I disagree that those other sites are superior. Also a major issue is that they use visual/video content (reddit and FB) and have different modes of use/seeking attention. They are designed around a scrolling feed, producing a constant stream of content, showing 1-3 items at a time.

Setting the above aside, I'm uncertain why your changes reflect ideas from them. For example, your changes to text make posts much more compact than Reddit or SO.

Comment by Charles He on Winners of the EA Criticism and Red Teaming Contest · 2022-10-01T08:10:03.957Z · EA · GW

Nested inside of the above issue, another problem is that the author seems to use “proof-like" rhetoric in arguments, when she needs to provide broader illustrations that could generalize for intuition, because the proof actually isn’t there. 

Sometimes some statements don't seem to resemble how people use mathematical argumentation in disciplines like machine learning or economics.

 

To explain, the author begins with an excellent point that it’s bizarre and basically statistically impossible that a feed forward network can learn to do certain things through limited training, even though the actual execution in the model would be simple. 

One example is that it can’t learn the mechanics of addition for numbers larger than it has seen computed in training.


 

Basically, even the most "well trained"/largest feed-forward DNN trained with backprop will never add 99+1 correctly if it was only trained on adding smaller numbers like 12+17, where no training example ever totals 100. This is because, under backprop, the network literally needs to see outputs with a hundreds digit in order to build the machinery for producing one. This is despite the fact that it's simple (for a vast DNN) to "mechanically have" the capability to perform true logical addition.
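
A minimal sketch of this point, assuming the output is treated as a class/token prediction (as in digit- or token-level models) rather than a regression target; the model, dataset, and hyperparameters here are illustrative choices of mine, not anything from the review:

# Python; a sketch, not a benchmark
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Training pairs whose sums never reach 100 (both addends below 50).
X_train = rng.integers(0, 50, size=(20000, 2))
y_train = X_train.sum(axis=1)  # labels are the sums, all in 0..98

# Treat each possible sum as a class, roughly how a token-prediction model sees it.
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print(clf.predict([[12, 17]]))  # in-distribution: usually 29
print(clf.predict([[99, 1]]))   # can never be 100; that label was never seen in training

The second prediction cannot be 100 no matter how large the network is, because that class never appears in training; this is the "needs to see the hundreds digit" point, not a claim about regression-style extrapolation.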

 

Starting immediately from the above point, I think the author wants to suggest that, in the same way it's impossible to get the functionality above, this constrains what feed-forward networks would do (and these ideas should apply to deep learning, or 2020 technology, for biological anchors).

However, everything sort of changes here. The author says:

I’s not clear what is being claimed or what is being built on above.

  • What computations are foreclosed or what can’t be achieved in feed forward nets? 
    • While the author shows that addition with n+1 digits can't be achieved by training on addition with n-digit numbers, and certainly many other training-to-outcome paths are foreclosed, why would this generally rule out capability, and why would it stop other (maybe very sophisticated) training strategies/simulations from producing models that could be dangerous?
  • The author says the “upshot is that the class of solutions searched over by feedforward networks in practice seems to be (approximately) the space of linear models with all possible features” and “this is a big step up from earlier ML algorithms where one has to hand-engineer the features”. 
    • But that seems to allow general transformations of the features. If so, that is incredibly powerful, and it doesn't seem to constrain the functionality of these feed-forward networks (see the sketch after this list)?
  • Why would logic that relies on a technical proof (which I am guessing uses a "topological-like" argument requiring the smooth structure of feed-forward nets) apply even to RNNs, LSTMs, or transformers?
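
On the "linear models with all possible features" bullet, here is a rough sketch of why that class is already very expressive; the random-feature setup, target function, and sizes are my own illustrative choices, not the author's:

# Python; a sketch of a linear readout over fixed random ReLU features
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 400).reshape(-1, 1)
y = np.sin(2 * x).ravel()  # a clearly non-linear target

# Fix random first-layer (ReLU) features and fit only a linear readout on top.
W = rng.normal(size=(1, 500))
b = rng.normal(size=500)
features = np.maximum(x @ W + b, 0)

coef, *_ = lstsq(features, y, rcond=None)
pred = features @ coef
print("max abs error:", np.abs(pred - y).max())  # typically very small on this interval

If a linear readout over enough (even random) nonlinear features can fit a function like this, "linear models with all possible features" does not look like much of a constraint on what the networks can represent.
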
Comment by Charles He on Winners of the EA Criticism and Red Teaming Contest · 2022-10-01T07:56:18.810Z · EA · GW

I don’t know anything about machine learning, AI or math, but I’m really uncertain about the technical section in the paper, on “Can 2020 algorithms scale to TAI?.”

One major issue is that in places in her paper, the author expresses doubt that “2020 algorithms” can be the basis for computation for this exercise. However, she only deals with feed forward neural nets for the technical section. 

It seems really off to leave out other architectures. 

If you try using feed-forward neural nets and compare them to RNNs/LSTMs on sequence tasks like text generation, it's really clear there is a universe of difference. I think there are many situations where you can't get similar functionality in a plain DNN (or get things to converge at all) even with much more compute/parameter count. On the other hand, a plain RNN/LSTM will work fine, and these are pretty basic models today. 
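
A small structural sketch of the difference (in PyTorch, with made-up sizes; this is only meant to show the shape of the gap, not a benchmark): a plain feed-forward net is stuck with a fixed input window, while an LSTM consumes sequences of arbitrary length and carries state across steps.

import torch
import torch.nn as nn

vocab = 50
emb = nn.Embedding(vocab, 32)

# A plain feed-forward net needs a fixed-length input window.
mlp = nn.Sequential(nn.Flatten(), nn.Linear(8 * 32, 64), nn.ReLU(), nn.Linear(64, vocab))

# An LSTM consumes sequences of any length and carries hidden state across steps.
lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
head = nn.Linear(64, vocab)

tokens = torch.randint(0, vocab, (1, 8))          # the MLP only ever sees 8 tokens
logits_mlp = mlp(emb(tokens))                     # shape (1, vocab)

long_tokens = torch.randint(0, vocab, (1, 500))   # the LSTM can take 500 tokens, or any number
out, _ = lstm(emb(long_tokens))
logits_lstm = head(out[:, -1])                    # next-token logits, shape (1, vocab)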

Comment by Charles He on Winners of the EA Criticism and Red Teaming Contest · 2022-10-01T07:54:37.722Z · EA · GW

I have lots of questions about the paper “Biological Anchors external review”.

I think this paper contains a really good explanation of the biological anchors and gives good perspective in plain English.

 

But later in the paper, the author seems to present ideas that are handled only briefly, and to me they appear to be intuitions. Even if these are intuitions, they can be powerful, but I can't tell how insightful they really are with the brief explanation given, and I think they need more depth to be persuasive. 

One example is when the author critiques using compute and evolutionary anchors as an upper bound:



(Just to be clear, I haven't actually read any of the relevant papers and am just guessing the content based on the titles.) But the only way I can imagine "biological anchors" being used is as a fairly abstract model, a metaphor, really. 

People won't really simulate a physical environment and physical brains literally. In a normal environment, actual evolution rarely selects for "intelligence" (no relevant mutation/phenotype, or an environment with too many or no challenges). So you would skip a lot of features, force mutation, and construct challenges. I think a few steps along these lines mean the simulation will use abstract digital entities, and this would be wildly faster.

It seems important to know more about why the author thinks that more literal, biological brains need to be simulated. This seems pretty core to the idea of her paper, where she says a brain-like model needs to be specified:

But I think it’s implausible to expect a brain-like model to be the main candidate to emerge as dangerous AI (in the AI safety worldview) or useful as AGI for commercial reasons. The actual developed model will be different.

Another issue is that (in the AI safety worldview) specifying this "AGI model" seems dangerous by itself, and wildly backwards for the purpose of this exercise. Because of normal market forces, by the time you write up this dangerous model, someone would be running it. Requiring something close to the final model to be visible is like requiring to see an epidemic before preparing for one.

Comment by Charles He on Open Thread: June — September 2022 · 2022-10-01T03:25:45.699Z · EA · GW

The prospect of Russia's grotesque invasion of Ukraine leading to the use of beyond-conventional weapons has produced anxiety in the past. I don't think this anxiety is justified.

Recently, there have been media reports suggesting the possibility of this escalation again, after sham claims of annexing Ukrainian territory. 

Despite these media reports, the risk of this escalation is low. The report here from the Institute for the Study of War (ISW) points this out, with detailed considerations.

https://www.understandingwar.org/backgrounder/special-report-assessing-putin%E2%80%99s-implicit-nuclear-threats-after-annexation

Comment by Charles He on William MacAskill - The Daily Show · 2022-09-30T05:44:46.119Z · EA · GW

>My sense is that the original, OG EAs, were hard core and took the idea of frugality very seriously

The relevant story:

It was a sunny day in September, and they were at an apple orchard outside Boston. There were candy apples for sale, and Julia wanted one. Normally she would have told herself that she could not justify spending her money that way, but Jeff had told her that if she wanted anything he would buy it for her with his money. He had found a job as a computer programmer; Julia was still unemployed, and did not have any savings, because she had given everything she had earned in the summer to Oxfam.

That night they lay in bed and talked about money. Jeff told Julia that, inspired by her example, he was thinking of giving some percentage of his salary to charity. And Julia realised that, if Jeff was going to start giving away his earnings, then, by asking him to buy her the apple, she had spent money that might have been given. With her selfish, ridiculous desire for a candy apple, she might have deprived a family of an anti-malarial bed net or deworming medicine that might have saved the life of one of its children. The more she thought about this, the more horrific and unbearable it seemed to her, and she started to cry. She cried for a long time, and it got so bad that Jeff started to cry, too, which he almost never did. He cried because, more than anything, he wanted Julia to be happy, but how could she be happy if she went through life seeing malarial children everywhere, dying before her eyes for want of a bed net? He knew that he wanted to marry her, but he was not sure how he could cope with a life that was going to be this difficult and this sad, with no conceivable way out.

They stopped crying and talked about budgets. They realised that Julia was going to lose her mind if she spent the rest of her life weighing each purchase in terms of bed nets, so, after much discussion, they came up with a system...

https://www.theguardian.com/world/2015/sep/22/extreme-altruism-should-you-care-for-strangers-as-much-as-family

Comment by Charles He on William MacAskill - The Daily Show · 2022-09-30T02:04:47.824Z · EA · GW

I didn't downvote but my guess is that there are two distinct reasons:

  • The Daily Show is the big leagues. MacAskill's interview was really well done, which is hard for the audience and context.  I think that every phrase MacAskill said was carefully chosen to be correct and serve the intended narrative, while at the same time appearing succinct and natural. A digression, which IMO is what you're suggesting, would take up time and attention. 

    The natural place where MacAskill could insert your proposed point was inside a potentially problematic subthread about EA attracting billionaire wealth. MacAskill responded with an argument revolving around the idea that billionaires should pay more than others. 
    • To be more exact,  at "6:16" the host said "It's also interesting to see how many billionaires have signed onto your ideas."
    • IMO, this is a pretty "hingey" place in the interview and MacAskill responds perfectly by talking about how wealth doesn't matter much proportionately to the wealthy, and even lands applause with a punchy point.
    • At about 7:05 is the most natural place for your comment, and MacAskill uses it to press on the (hyper)privileged giving more. Here's a rough transcript:
  • The second issue is that, in this specific environment, I think there's a risk of having your point conflated: your point that the underprivileged should not have to give might be conflated with the idea that the underprivileged should not engage with EA
    • This conflation outcome seems bad: it’s possible that non-"elite", lower income people could make major contributions to EA by being employees, leaders or founders (which is like, your personal belief right?). Also, this idea is terrible optics.
    • I think there is some risk of this conflation and it would take care and attention to communicate this original point. Even 15 seconds would be costly and risk redirecting the flow of the conversation.
       

I'm not sure this is true; also, I'm personally a carpetbagger and chancer and I learned about EA two weeks ago. My sense is that the original, OG EAs were hardcore and took the idea of frugality very seriously. There are stories about Julia Wise crying over having a candy apple bought for her, when the money could have bought a bednet instead. MacAskill has had serious back problems, presumably because he avoided spending money on furniture or specialists for himself. 

Even among the very established "EA elite", there might be many who miss things about that old spirit. The ideas in your comment seem to go against this level of dedication and might screen out new "instances" of these people. 

 

Comment by Charles He on Why don't all EA-suggested organizations disclose salary in job descriptions? · 2022-09-29T07:48:09.495Z · EA · GW

> Inside the tech world, there's a norm of fairly transparent salaries driven by levels.fyi (and Glassdoor, to a lesser extent). I think this significantly reduces pay gaps caused by eg differential negotiating inclinations, and a similar gathering place for public EA salary metrics is one of my pet project proposals.

 

This is fairly wrong to very wrong.

  • It's somewhat true that within each "level" or "band", there's a defined salary or comp range
    • But this isn't that restrictive; the ranges can be wide, especially past the entry bands
      • Also, there can be weird sub-bands or sub-roles where you can evade the normal bands.
      • Also, there can be special cases where they go out of band, or other "compensating differentials" can be given (although the latter is small at big tech companies).
  • The band is determined after the interview process, e.g. after the interviews, your band is decided and then you get an offer; you can come in at, say, level "5" or level "6". 
    • E.g. it's very possible that for people who interview for the same position, the comp ranges from 200k to 600k, maybe even 800k
  • This is further messed up by RSUs and vesting schedules.

The result is that the comp range for each "job posting" is huge: 200k to 800k. 

Overall, for various structural reasons, salary isn't transparent at almost any very successful for-profit company, except for junior roles. Tech is a particularly terrible example to use because the pay can be so different for the same role. 

Using large ranges would result in the various problems described in this comment, which is excellent; I strongly support that view, because the pragmatic truth here is an "anti-meme".

 

For better or worse, the end result of transparency is probably a limited improvement over the current state (which is fairly good), for a number of reasons, which I think are given in Morrison's comment. 

Another reason is optics: at least one EA org pays a salary of over $1,000,000, and that's not including equity.

Comment by Charles He on Offering FAANG-style mock interviews · 2022-09-29T06:05:08.567Z · EA · GW

Thanks for the answers, this makes a lot of sense.

Can you be specific about #1? For example, what format of programming tests would you prefer to give to a generalist engineer? 

By the way, do you mean something special or "hands on" for ML programming or design questions? 

For ML programming, it seems bad to rely on ML or design questions in the sense of a verbal question and answer. I think actually designing/choosing among ML scientific approaches is a tiny part of the job, so many ML knowledge questions would be unnatural (rewarding memorization of standard ML books, selecting for "enthusiasts" who read up on recent libraries, and screening out strong talent who have solved a lot of hard real-world problems). 

Comment by Charles He on What Do AI Safety Pitches Not Get About Your Field? · 2022-09-29T05:51:07.378Z · EA · GW

> whether intelligence is even the limiting factor in scientific and technological progress. 
> 
> My personal, limited experience is that better algorithms are rarely the bottleneck.

 

Yeah, in some sense everything else you said might be true or correct.

But I suspect that by "better algorithms", you're thinking along the lines of "What's going to work as a classifier, is this gradient booster with these parameters going to work robustly for this dataset?", "More layers to reduce false negatives has huge diminishing returns, we need better coverage and identification in the data." or "Yeah, this clustering algorithm sucks for parsing out material with this quality." 

Is the above right?

The above isn't what the AI safety worldview sees as "intelligence". In that worldview, the "AI" competency would basically start working up the org chart and taking over a lot of roles progressively: starting with the decisions in the paragraph above of model selection, then doing data cleaning, data selection over accessible datasets, calling and interfacing with external data providers, then understanding the relevant material science and how that might relate to the relevant "spaces" of the business model. 

So this is the would-be "intelligence". In theory, solving all those problems above seems like a formidable "algorithm". 

Comment by Charles He on Offering FAANG-style mock interviews · 2022-09-29T02:49:06.030Z · EA · GW

Interested in answering these questions?

Comment by Charles He on Offering FAANG-style mock interviews · 2022-09-29T02:48:29.287Z · EA · GW

While not directly related, this post gives an opportunity to ask questions that are interesting and might benefit others:

How would an aligned EA, working on something approximately EA, recruit and judge other EA SWEs for a software role?

  • Would you use leetcode or a similar puzzle style of coding, or pair programming or some kind of assignment based project?
  • What if I wanted to recruit someone better or more senior than me (by a moderate margin)? How would I go about this? 

What have you found surprising about recruiting or building a team for a small project? 

  • (I think this is hard or a big topic. For example, SWE is a lot about architecture/vision/design decisions that often take on a social/cultural aspect.)
     

One reason I'm asking is that I’m assuming the answers might be different than normal, because there might be differences/advantages in having an EA recruit an EA. 

(On the other hand, there's probably reasons and examples that are evidence against this). 
 

Comment by Charles He on A direct way to reduce the catch of wild fish · 2022-09-28T22:03:36.419Z · EA · GW

Thanks for this thoughtful reply and your work on animal welfare.

So I think my previous comment is to express my belief/knowledge/informed guesses that chronic suffering is very large, and on average, focusing on interventions that reduce highly unnatural, chronic suffering is more impactful.  I'm uncertain that ~1-2 hours of slaughter, even of many millions of fish, is the top priority if it shifts resources from other priorities[1]. I think that a general, uncommitted audience, should take these views into account.


All the reasons you mention seem great to me and are also useful, interesting ideas in themselves.

I think a project reducing slaughter suffering is really valuable. I think if you're saying that you or someone else is committed to working on this, have a lead here, or even have a personal preference to work on these projects, that seems really good to do.

I think it's really bad if EA-aligned people thoughtfully working on interventions feel they have to "win the extra game of being approved on the EA Forum", especially if the quality of discussants is low or the views are unnecessarily disagreeable.

 

  1. ^

    Further ways I know of communicating this are basically massive violations of the intent of "CW"/"safe spaces". I don't know if anyone wants to read this right now.

Comment by Charles He on Optimism, AI risk, and EA blind spots · 2022-09-28T21:54:12.532Z · EA · GW

double comment

Comment by Charles He on Project Idea: The cost of Coccidiosis on Chicken farming and if AI can help · 2022-09-27T16:59:44.992Z · EA · GW

Thanks a lot for the thoughtful replies. I'm really glad it seems like you identified useful interventions that someone like Max can work on.

Comment by Charles He on Project Idea: The cost of Coccidiosis on Chicken farming and if AI can help · 2022-09-27T06:17:47.524Z · EA · GW

I'm not just saying the probability that this post will be discovered or clicked on is very low.

In my above comment, I'm saying a computer vision project, like I think the OP proposed, has a very low risk of harm, so this danger is outweighed by the value from a viable, well executed EA project along these lines (assuming that such a promising project exists).

I have these beliefs for reasons that include the following:

  • Someone I know has knowledge of, and working experience in, "AI"/"machine learning" and leading experimental projects, probably above median EA knowledge in those areas. 
  • Similar to my comment above, here is the Google Scholar search "computer vision chicken broilers". There are a lot of papers, including some from 2016 and 2017. I think much of this work is in the same vein as the project. Here's page 10 or so:


Also, I think a well aligned and executed project could be more valuable than it seems from just discussion. This is because the actual implementation of a system (and welfare improving projects that don't include "AI") is really important. Differences in execution, which can be subtle, small, and hard to communicate, can affect welfare, more than "desk research" or non-expert discussion suggests. Assuming they can execute it, an EA can take advantage of this, and supplant/compete with alternative systems.

It's possible that this is wrong, but I'm not immediately sure it's possible to communicate this.

I think it's good to write this comment, because well, EA forum discussion is like, possibly stopping potential actual EA projects, and that's bad.

Comment by Charles He on Project Idea: The cost of Coccidiosis on Chicken farming and if AI can help · 2022-09-27T00:52:11.988Z · EA · GW

It's really impressive you changed your mind so quickly.

I don't want to jump on discussions, and answering this fully is hard, but you did write you were stopping entirely, so I'm writing the below quickly in support.

It's unlikely that promising, thoughtful work by EAs is going to be harmful/be captured/enhance harm from factory farming, and it's likely a net positive.

  1. The grandparent comment dramatically understates how much the industry already actively works on, and crowds out, related profit-maximizing research. Basically, in the same way that the ITN framework works, this reduces the risk that even talented animal welfare EAs, working on the most successful project, would significantly help the industry exploit more animals.
    1. There's not just one, but multiple journals and subdisciplines on industrial farming of broilers and chickens.
    2. I mean, go to this link and keep clicking next; I'm on page 46 and there are still articles.
      1. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Coccidiosis+chicken&btnG=
  2. There are likely "wedges" where animal welfare and increasing factory-farming profit diverge. 
    1. One would be alleviating the chronic suffering of ailing animals, who get slaughtered anyway and whose reduced suffering isn't cost-effective at producing more meat. Because factory farms don't price in pain, it's possible that making chickens happier in many situations could produce major welfare improvements without changing farming incentives.
      1. Point #1 above suggests why there would be a wedge, because profit seeking is extensive.

 

I think there's a big, giant thesis here about "welfarism", talking about theories of change that involve collaborating/working with some farmers. These projects have been done viably and successfully by many EAs in the past, but  I'm unsure anyone wants to read this right now, and I want to just write this comment quickly.

Comment by Charles He on Switzerland fails to ban factory farming – lessons for the pursuit of EA-inspired policies? · 2022-09-27T00:40:11.148Z · EA · GW

> Separately, I think the effective animal activism community should be much clearer on a long-term strategy to inform their prioritization. By when do we expect to get meat alternatives that are competitive on taste and price? At that point, how many people do we expect to go vegetarian? Is there a date by which we expect >50% of the developed-world population to go vegetarian? To what degree are policies shaped by precedents from other countries?

 

These thoughts seem both really important and quite deep and thoughtful.

I don’t know the answer at all, but I have a few questions that might be useful (they might advance the discussion or your intent). Please feel free to answer if it makes sense.

  • Is this related to what people call a “theory of victory” or vision? 
    • If so, I have questions about the use of “theory of victory”. I’m uncertain, in the sense I want to learn more, about the value of a “theory of victory” in farm animal welfare. 
      • If we reduced animal suffering by 50-90% in a fairly short time, that seems really good and productive. What does a theory of victory contribute  in addition to that?
        • Maybe it provides a "focal point" or "tipping point", or is useful for morale, rhetoric, or getting further allies or resources? 
        • Maybe it has coordination value. For example, if we knew at "year Y" that meat alternatives would be at “cost parity”, coordinating many other campaigns and activities at the same time would be useful
  • Are you imagining this "long-term strategy" to come from the EA community (maybe in the sense of EA farm animal leaders agreeing, or people brainstorming more generally, or Rethink Priorities spinning up a project), or do you think it would come from a more external source?
  • I'm not certain, but I'm probably not very optimistic about large-scale shifts from meat consumption in a short time frame[1]. I’m interested in facts or even just a formidable narrative that could change this non-optimistic view. Do you or anyone else know any thoughts about this?
     

 

  1. ^

    The people I have met in the past who advance the idea of a major upcoming shift seem to rely on narratives focused on personal dietary change. Upon examination, their views seem really inconsistent with data showing that the % of the population that is vegan/vegetarian has been flat over decades. 

    To me, some groups or initiatives seem to be communicating mainly with subcultures that are historically receptive to animal welfare. It seems the related /consequent information environment could unduly influence their judgement.

Comment by Charles He on Switzerland fails to ban factory farming – lessons for the pursuit of EA-inspired policies? · 2022-09-27T00:31:04.185Z · EA · GW

> I was one of the people who helped draft the constitutional amendment and launch the initiative. 
 

This project seems hard and important. Your work seems impactful; as you mentioned, getting 37% support for banning a harmful industry seems notable and impressive.

As Mo Putera also suggested, would it make sense to further write up what you and others did on this project? 

  • Maybe a write up would add depth, communicate the challenges faced, publicize the project, as well as give some sense of how to get started on similar object level work? 
    • For example, I have no idea how to get a “constitutional initiative” started, or “how connected” someone would need to be (which could be easier because of EA networks/EA aligned people). 

Useful work on animal welfare policy seems like it would apply to other initiatives. 

(I understand you are busy. Maybe it makes sense to hire an assistant or use some other service to help write this up?)

Comment by Charles He on Switzerland fails to ban factory farming – lessons for the pursuit of EA-inspired policies? · 2022-09-26T19:38:39.564Z · EA · GW

I'm skeptical/confused by this comment. 

So, I'm not that familiar with this legislation, but I think the (key) purpose behind banning factory farms would be a major political, legislative win that has strategic value.

 

But the comment's main arguments are:

  1. Factory farming is minimal in Switzerland, so this legislation doesn't do much
  2. There are big negative consequences to this legislation banning of farms (price increases, food security). 

Don't these two points conflict with each other? Also, neither undermines the main purpose of the legislation mentioned above.

 

Other comments:

  • I'm confused about what "ivory tower" and "consequentialism" add here. I'm sure EA has a big consequentialist streak, but I'm not sure how relevant that is to reducing torture on factory farms, or reducing huge mortality from diseases like malaria. 
    • Similarly, whether or not I agree about unfairness, I'm unsure what "moral axes in human psychology according to Haidt’s moral foundations theory" adds.
  • RE: Regressiveness, I think it's possible to model price increases due to policy changes, and I would expect to see numbers if this was significant.
    • Pigovian taxes and progressive taxes have been used to deal with regressiveness for a long time, and seem to be tools we can use.
  • "claims like 99% of meat being factory farmed", but I can't find this claim on this post or on the website. Where did you get this and what was the context it was used in?
Comment by Charles He on EA Forum feature suggestion thread · 2022-09-26T01:04:31.607Z · EA · GW

Update:

I was wrong, this entire thread about a "leak", my last two comments, was wrong/noise.

What happened

I thought I was obtaining private information through an API endpoint. Upon more examination, the information was not private, the data had no additional information, and I misinterpreted the meaning of the information.

Comment by Charles He on EA Forum feature suggestion thread · 2022-09-26T00:24:32.422Z · EA · GW

TL;DR: Unfortunately, I think I am asking for a somewhat clearer statement from a senior source that CEA won't take such action. To be clear, this might be a clear, good-faith statement from someone like Max Dalton or Habryka that they will make best efforts not to restrict or adjust the open vision of API use as a result of this leak. Because the EA Forum's technical development is closely intertwined with LW's, this statement should include the consent of the LW team, such as Habryka.

 

I believe the leak is substantial (it's not an emergency but there is some chance it's embarrassing). 

Because of the moderate severity of the leak, I think something like the following scenario could occur:

Two weeks after being notified of the leak, JP, Ben West, Max Dalton and the CEA board have a routine, private meeting about CEA's online programs, like most organizations do. 

In this routine meeting, one of the conclusions was the suspension of further development of EA Forum features in favor of another technical project. The leak had a large influence on this decision.

Three months after the private meeting, a message was posted "We're restricting use of [specific forum/API] use because of limited developer attention. Unfortunately, we decided to turn it off because of the maintenance demands". 

While no connection with the leak is stated, and other factors, including actually limited developer time, played a role, the truth is that the leaks/headaches were the causal reason the feature stopped being developed.

 

Note that the above does not require bad faith on the part of CEA. I actually don't think anyone wants the above to happen. The above scenario is just the logical, rational way of doing things if you're heading an entity that has a lot of projects and limited developer time. 

Like, this is a confession, this is what I would do.

Another way of seeing this is that the forces are just the normal forces of being a public entity.

By this request, I'm trying to proportionately "tie" your hands (to the degree that a public good faith statement would) so that these forces can't act to deteriorate access.

Comment by Charles He on EA Forum feature suggestion thread · 2022-09-25T23:47:01.935Z · EA · GW

Thank you for this information, this allowed me to resolve the issue.

 

Now, separately, and in a more formal/precise tone.

I want to inform you that there is an unintended leak of EA Forum or LessWrong private user information, and that it is substantial.

To calibrate on severity: this is something you want to change soon. It is not extremely sensitive (less sensitive than private messages), but in my opinion it is at least as sensitive as leaking email addresses.

I propose the following:

  1. Please agree that the EA forum team and the LessWrong team, will not remove, restrict, or allow to deteriorate forum access and/or API functionality, as a consequence of my disclosure of this leak. 
    1.  Specifically, I am worried that the team might turn off access or functionality of the EA forum or the LW forum to reduce the likelihood of future leaks by reducing the amount of code that needs to be maintained (it's not that the access/functionality is a security  concern by itself, but it's easier not to have leaks if you don't have websites/API endpoints).
    2. Note that my disclosure is in good faith, I do not directly benefit from the disclosure. I believe my "use" of the leak or knowledge of its existence would be undetectable without me notifying you.
  2. If you agree on the point above, please provide an email where I can send information about the leak. 
    1. Note that I may email Habryka and other people in addition to the email you provide.

Note that the leak should not be difficult to correct.

Comment by Charles He on EA Forum feature suggestion thread · 2022-09-25T22:50:03.936Z · EA · GW

The LessWrong API does not seem to work using HTTP requests from a remote host (my machine).

To be specific, the following Python code shows an HTTP request for the GraphQL API.

# Python 3.9 code 
import requests

query_text = """
{
  comments {
    results {
      _id
    }
  }
}
"""

headers = {'Content-Type': 'application/json'}

url = 'https://forum.effectivealtruism.org/graphql'
requests.post(url, json={'query': query_text}, headers=headers)
# <Response [200]>

url = 'https://www.lesswrong.com/graphql'
requests.post(url, json={'query': query_text}, headers=headers)
#  <Response [403]>

The code demonstrates that the request returns the information correctly for the EA forum, but the same code does not work for LessWrong (see the response 403 versus response 200).

  • Can the LessWrong team confirm that there are differences, or a current access restriction to using the GraphQL API? 
  • If this is unintended, what is the current method to access the LessWrong API using remote HTTP requests?

 

Note that the "visual" GraphQL API for LessWrong works fine, which suggests there is something wrong/unintended about the above behavior.

Comment by Charles He on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-25T00:26:52.885Z · EA · GW

> One example is the presence of staff that monitor all interactions in order to enforce certain norms. I've heard that they can seem a bit intimidating at times.
> 
> I agree that transparency to the public is really lacking. I happen to know there is an internal justification for this opaqueness, but still believe that there are a lot more details they could be making public without jeopardizing their objectives. 

The content in this comment seems really false to me, both in the actual statements and in the "color" the comment has. It seems like it could mislead others who are less familiar with actual EAG events and other EA activities. 

Below is object level content pushing back on the above thoughts.

 

Basically,  it's almost physically impossible to monitor a large number of interactions, much less all interactions at EAG: 

  • Most meetings are 1-on-1s that are privately arranged, and there are many thousands of these meetings at every conference. Some meetings occur in scheduled events (e.g. speed meetings for people interested in a certain topic).
    • It's not possible that CEA staff could physically hover over all in-person meetings; I don't think there are enough staff to cover even all centrally organized events (trained volunteers are used instead).
    • Also, if someone tried to eavesdrop in this way, it would be immediately obvious (and seem sort of clownishly absurd). 
  • In all venues, there is "great diversity" in the physical environments where people can meet.
    • This includes large, open standing areas, rooms of small or medium size, booths, courtyards. 
    • This includes the ability to walk the streets surrounding the venue (which can be useful for sensitive conversations). 
    • By the way, providing this diversity is intentionally done by the organizers.
  • CEA staff do not control/own the conference venue (they rent and deal with venue staff, who generally are present constantly).
  • It seems absurd to write this, but covert monitoring of private conversations is illegal, and there are literally hundreds of technical people at EA conferences; I don't think this would go undetected for long.

 

While less direct, here are anecdotes about EAG or CEA that seem to suggest an open, normal culture, or something like it:

  • At one EAGx, the literal conference organizers and leader(s) of the country/city EA group were longtime EAs, who actively expressed dislike of CEA, due to its bad "pre-Dalton era" existence (before 2019)
    • The fact that they communicated their views openly and still lead an EAGx and enjoy large amounts of CEA funding/support seems healthy and open.
  • Someone I know has been approached multiple times at EA conferences  by people who are basically "intra-EA activists", for example, who want different financing and organizing structures, and are trying to build momentum. 
    • The way they approached seemed pretty open, e.g. the place they wanted to meet  was public and they spoke reasonably loudly and directly
    • By the way, some of these people are employed by the canonical EA organizations or think tanks, e.g. they literally have physical offices not far from some of the major, major EA figures. 
      • These people shared many details and anecdotes, some of which are hilarious.
      • Everything about these interactions and the existence of these people suggests openness in EA in general
  • On various matters, CEA staff don't agree with other CEA staff, like at all normal, healthy organizations with productive activities and capable staff. The fact that these disputes exist sort of "interrogates the contours" of the culture at CEA and seems healthy. 
Comment by Charles He on EA forum content might be declining in quality. Here are some possible mechanisms. · 2022-09-24T22:47:14.576Z · EA · GW

  • It might be possible and useful to quantify the decline in forum quality (measurement is hard, but it seems plausible to use engagement from promising or established users, and certain voting patterns might be a mark of quality; see the sketch after this list).
  • I think the forum team should basically create/find content, for example by inviting guest writers. The resulting content would be good in itself and in turn might draw in high-quality discussion. This would be particularly useful in generating high-quality object-level discussion.
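
For the first bullet, here is a rough sketch of one cheap measurement, reusing the GraphQL endpoint shown in the API code elsewhere in these comments; the field names (baseScore, postedAt) and the default result set are assumptions about the forum's schema and may need adjusting:

import requests
from collections import defaultdict

# Assumed query shape and fields; adjust against the forum's GraphQL schema if needed.
query_text = """
{
  comments {
    results {
      baseScore
      postedAt
    }
  }
}
"""

resp = requests.post(
    "https://forum.effectivealtruism.org/graphql",
    json={"query": query_text},
    headers={"Content-Type": "application/json"},
)
results = resp.json()["data"]["comments"]["results"]

# Average comment karma per month, as one crude quality proxy.
by_month = defaultdict(list)
for c in results:
    if c.get("baseScore") is not None and c.get("postedAt"):
        by_month[c["postedAt"][:7]].append(c["baseScore"])

for month in sorted(by_month):
    scores = by_month[month]
    print(month, round(sum(scores) / len(scores), 1), len(scores))

Karma isn't the same as quality, but a per-month series like this would at least make "declining" a checkable claim.
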
Comment by Charles He on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T22:00:39.180Z · EA · GW

Yes, Amy's comment is where I got my information/conclusion from. 

Yes, you are right, the OP has commented to say she is open to EAGx, and based on this, my comment above about not liking EAGx does not apply.

Comment by Charles He on AGI Battle Royale: Why “slow takeover” scenarios devolve into a chaotic multi-AGI fight to the death · 2022-09-23T20:30:16.782Z · EA · GW

This seems basic and wrong. 

In the same way that two human super powers can't simply make a contract to guarantee world peace, two AI powers could not do so either. 

(Assuming an AI safety worldview and the standard unaligned, agentic AIs) in the general case, each AI will always weigh/consider/scheme about getting the other's share of control, and expect the other to be doing the same.

 

> based on their relative power and initial utility functions

It's possible that peace/agreement might come from some sort of  "MAD" or game theory sort of situation. But it doesn't mean anything to say it will come from "relative power". 

Also, I would be cautious about being too specific about utility functions. I think an AI's "utility function" generally isn't a literal, concrete thing, like a Python function that returns comparisons, but might be far more abstract and might only appear through emergent behavior. So it may not be something that you can rely on to contract/compare/negotiate. 

Comment by Charles He on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T19:47:17.533Z · EA · GW

I think the emotional cost of rejection is real and important. I think the post is about feeling like a member of a community, as opposed to acceptance at EAG itself.

 

It seems the OP didn't want to go to EAGx conferences. This wasn't mentioned in her OP.

Presumably, one reason the OP didn't want to go to EAGx, was that they view these events as diluted, or not having the same value as an EAG[1]

But that view seems contrary to wanting to expand beyond "elite", highly filtered EAGs. Instead, their choices suggest the issue is a personal one about fairness/meeting the bar for EAG.

 

The grandparent comment opens a thread criticizing eliteness or filtered EAG/CEA events. But that doesn't seem to be consistent with the above.

  1. ^

    BTW, I think views where EAGx is "lesser" are disappointing, because in some ways EAGx conferences have greater opportunities for counterfactual impact (there are more liminal or nascent EAs).

Comment by Charles He on Case Study of EA Global Rejection + Criticisms/Solutions · 2022-09-23T19:31:47.063Z · EA · GW

EAG conference activity has grown dramatically, with EAGs now going over 1,500 people, and more EAG and EAGx conferences. Expenses and staff have all increased to support many more attendees.

The very CEA people who are responding here (and actively recruiting more people to get more/larger conferences), presided over this growth in conferences. 

I can imagine that the increased size of EAGs faced some opposition. It's plausible to me that the CEA people here, actively fought for the larger sizes (and increased management/risk). 

In at least a few views, this seems opposite to "eliteness" and seems important to notice/mention.

Comment by Charles He on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-23T02:22:51.782Z · EA · GW

This is useful and thoughtful. I will read and will try to update on this (in general life, if not the forum?) Please continue as you wish!

I want to notify you and others, that I don't expect such discussion to materially affect any resulting moderator action, see this comment describing my views on my ban.

Below that comment, I wrote some general thoughts on EA. It would be great if people considered or debated the ideas there.

Comment by Charles He on EA Forum feature suggestion thread · 2022-06-23T02:15:14.148Z · EA · GW

EA Common Application seems like a good idea

  • I think a common application seems good and  to my knowledge, no one I know is working on a very high end, institutional version
  • See something written up here
     

EA forum investment seems robustly good

  • This is one example ("very high quality focus posts")
    • This content empowers the moderator to explore any relevant idea, and to cause thousands of people to learn and update on key EA thought and develop object-level views of the landscape. They can stay grounded. 
    • This can justify a substantial service team, such as editors and artists, who can illustrate posts or produce other design
Comment by Charles He on EA Forum feature suggestion thread · 2022-06-23T02:11:46.020Z · EA · GW

AI Safety

Dangers from AI is real, moderate timelines are real

  • AI alignment is a serious issue, AIs can be unaligned and dominate humans, for the reasons most EA AI safety people say it does
  • One major objection, that severe AI danger correlates highly with intractability, is powerful
    • Some vehement neartermists actually believe in AI risk but don’t engage because of tractability
  • Another major objection to AI safety concerns, which seems very poorly addressed, is AI competence in the real world. This is touched on here and here.
    • This seems important, but relying on a guess that AGI can’t navigate the world, is bad risk management
  • Several lock-in scenarios fully justify neartermist work.
    • Some considerations in AI safety may even heavily favor neartermist work (if AI alignment tractability is low and lock in is likely and this can occur fairly soon)

 

There is no substance behind narratives based on "nanotech" or an "intelligence explosion in hours"

  • These are good as theories/considerations/speculations, but their central place is very hard to justify
  • They expose the field to criticism and dismissal by any number of scientists (skeptics and hostile critics outnumber senior EA AI safety people, which is bad, and recent trends are unpromising)
    • This slows progress. It's really bad that these suboptimal viewpoints have persisted for so long, and they damage the rest of EA

 

It is remarkably bad that there hasn’t been any effective effort to recruit applied math talent from academia (even good students from top 200 schools would be formidable talent)

  • This requires relationship building and institutional knowledge (relationships with field leaders, departments, and established professors in applied math, computer science, math, economics, and other fields)
  • Taste and presentation matter a lot
    • The current choice of math and theories around AI safety and LessWrong is probably quaint and basically academic poison
      • For example, acausal work is probably pretty bad
      • (Some chance they are actually good, tastes can be weird)
    • Fixation on current internal culture is really bad for general recruitment 
  • The talent pool may be 5x to 50x greater with an effective program

 

A major implementation of AI safety is the creation of very highly funded new EA orgs, and this is close to an existential issue for some parts of EA

  • Note that (not yet stated) critiques of these organizations, like "spending EA money" or "conflict of interest", aren't valid
    • They are even counterproductive, for example because closely knit EA leaders are the best talent, and they can actually return profit to EA (though this produces another issue)
  • It's fair to speculate that they will exhibit two key traits/activities, probably not detailed in their usually limited public communications:
    • They will often attempt to produce AGI outright, or explore the conditions related to it
    • They will always attempt to produce profit
    • (These are generally prosocial)
  • Because these orgs are principled, they will hire EAs for positions whenever possible, with extremely high compensation, agency, and quality of culture
    • This has major effects on all EA orgs
  • A concern is that they will achieve neither AI safety nor AGI, and the situation becomes one where EA gets caught up creating rather prosaic tech companies
    • This could result in a bad ecosystem and bad environment (think of a new season of The Wire, where ossified patterns of EA jargon cover up pretty regular shenanigans)
    • So things just dilute down to profit-seeking tech companies. This seems bad:
      • In one case, an AI safety person I was speaking with casually mentioned donations from their org. The donation amount was large compared to EA grants.
        • It's problematic if ancillary donations by EA orgs are larger than those made by EA grantmakers.
      • The "constant job interview" culture of some EA events and interactions will be made worse
    • Leadership talent from one cause area may gravitate to middle management in another, which would be very bad
  • All these effects can actually be positive, and these orgs and cultures can strengthen EA.
     

I think this can be addressed by monitoring talent flows, funding, and new organizations

  • E.g. a dedicated FTE, with a multi-year endowment, monitors talent in EA and hiring activity
    • I think this person should be friendly (pro AI safety), not a critic
      • They might release reports showing how good the orgs are and how happy employees are
    • This person can monitor and tabulate grants as well, which seems useful.
    • Sort of a census taker or statistician
Comment by Charles He on EA Forum feature suggestion thread · 2022-06-23T01:52:46.139Z · EA · GW

Animal welfare 

There is a lack of forum discussion on effective animal welfare

  • This can be improved with the presence of people from the main larger EA animal welfare orgs

Welfarism isn’t communicated well

  • Welfarism observes the fact that suffering is enormously unequal among farmed animals, with some experiencing very bad lives
  • It can be very effective to target and reduce this suffering, compared to focusing on removing all animal products at once
  • This idea is well understood and agreed upon by animal welfare EAs
  • While welfarism may need critique (which it will withstand, as it's as substantive as impartialism), its omission distorts and wastes thinking, in the same way that omitting impartialism would
    • Anthropomorphism is common (discussions contain emotionally salient points that differ from what fish and land-animal welfare experts focus on)
    • Reasoning about prolonged, agonizing experiences is absent (it's understandably very difficult), yet these experiences are probably the main source of suffering.
       

Patterns of communication in wild animal welfare and other areas aren’t ideal.

  • It should be pointed out that this work involves important foundational background research. Addressing just the relevant animals in human-affected environments could be enormously valuable.
  • In difficult or contentious conversations with otherwise altruistic people, it might be useful to be aware of the underlying sentiment that people feel pressured or are having their morality challenged.
    • Moderation of views and open exploration are good, as is pointing out one's personal history in more conventional animal advocacy and other altruistic work.
    • Sometimes it may be useful to avoid heavy use of jargon, or applied math that might be seen as undue or overbearing.
  • A consistent set of content helps (web pages seem to be a good medium).
    • Showing upcoming work in wild animal welfare would be good, such as explaining the foundational scientific work

 

Weighting suffering by neuron count is not scientific; resolving this might be EA cause X

  • EAs often weight by neuron count as a way to calculate suffering. This has no basis in science. There are reasons (unfortunately not settled or concrete) to think smaller animals (mammals and birds) can have similar levels of pain or suffering to humans. (A sketch of the arithmetic at stake appears after this list.)
  • To calibrate, I think most or all animal welfare EAs, as well as many welfare scientists, would agree that simple neuron-count weighting is primitive or wrong
  • Weighting by neuron count has been necessary because it’s very difficult to deal with the consequences of not weighting
  • Weighting by neuron counts is almost codified; its use turns up casually, probably because omitting it is impractical (emotionally abhorrent)
    • Because the issue is blocked for unprincipled reasons, this could probably be "cause X"
    • The alleviation of suffering may be tremendously greater if we remove this artificial and possibly false modifier, and take appropriate action in light of the true experiences of the sentient beings.
  • The considerations about communication and overburdening people apply, and a conservative approach would be good
    • Maybe driving this issue starting from prosaic, well known animals is a useful tactic
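
To make the arithmetic concrete, here is a minimal sketch in Python of what the weighting choice does to a back-of-the-envelope suffering estimate. This is not from any published EA model; the species, counts, and intensity values are hypothetical placeholders I've chosen purely for illustration, and the only point is how much the weighting function moves the totals.

```python
# A minimal sketch of how the weighting choice changes a back-of-the-envelope
# suffering estimate. All counts and intensity values are hypothetical
# placeholders for illustration only, not figures from any EA analysis.

HUMAN_NEURONS = 8.6e10  # rough published estimate for the human brain

# name: (individuals affected per year, neuron count, assumed suffering intensity 0-1)
animals = {
    "broiler chicken": (7e10, 2.2e8, 0.6),   # placeholder values
    "farmed fish":     (1e11, 1.0e7, 0.5),   # placeholder values
}

def total_suffering(weight_by_neurons: bool) -> dict:
    """Sum count * intensity * weight per group, with or without neuron scaling."""
    totals = {}
    for name, (count, neurons, intensity) in animals.items():
        weight = neurons / HUMAN_NEURONS if weight_by_neurons else 1.0
        totals[name] = count * intensity * weight
    return totals

print("neuron-weighted:", total_suffering(weight_by_neurons=True))
print("unweighted:     ", total_suffering(weight_by_neurons=False))
```

With these placeholder numbers, the neuron-count weight shrinks the fish term by roughly four orders of magnitude relative to the unweighted version; that gap is the whole prioritization dispute in miniature.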

 

Many new institutions should be built in EA animal welfare, an area that has languished from lack of attention.

  • (There are no bullet points here)
Comment by Charles He on EA Forum feature suggestion thread · 2022-06-23T01:40:50.609Z · EA · GW

Instead of talking about me or this ban any more, while you are here I really want to encourage consideration of some ideas that I wrote in the following comments:

 

Global health and poverty should have stories and media that showcase the work and the EA talent involved

  • The sentiment that “bednets are boring” is common.
  • This is unnecessary, as the work in these areas is fascinating and involves great skill and unique experiences that can be exciting and motivating.
  • These stories have educational value to EAs and others.
    • They can cover skills and work like convincing stakeholders and governments, and carrying out complex logistical and scientific implementations in many different countries or jurisdictions.
    • They express skills not currently present or visible in most EA communications.
  • This helps the communication and presentation of EA
  • To be clear, this would be something like an EA journalist continually creating stories about these interventions: 80,000 Hours, but with a different style or approach.

Examples of stories (found in a few seconds)

(These don't have 80K-style long-form, in-depth content, or cover the founders' perspectives, which seems valuable.)

Comment by Charles He on EA Forum feature suggestion thread · 2022-06-23T01:37:26.256Z · EA · GW

I agree with this comment; it seems I should be banned, and I encourage you to apply the maximum ban. This is because:

  1. The moderator comment above is correct
  2. Additionally, in the comment that initiated this issue, I claimed I was protecting an individual. Yet, as the moderator pointed out, I seemed to be "further doxxing" him. So it seems my claims were a lie, or hypocritical. I think this is a severe fault.
  3. In the above and other incidents, it seems like I am the causal factor: without me, the incidents wouldn't exist.

Also, this has taken up a lot of time:

  1. For this event, at least one moderator meeting has occurred and several messages have been sent notifying me (which seems like a lot of effort).
    1. I have gotten warnings in the past, such as from two previous bans (!)
  2. One moderately senior EA moderator has now reached out for a call.

I think this use of time (including from very senior EAs) is generous. While I'm not confident I understand the nature of the proposed call, I'm unsure my behavior or choices will change. Since the net result may not be valuable to these EAs, I declined the call.

I do not promise to remedy my behavior, and I won’t engage with these generous efforts at communication. 

So, in a way requiring the least amount of further effort or discussion, you should apply a ban, maybe a very long or permanent one. 

Comment by Charles He on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-22T23:22:39.680Z · EA · GW

I agree with your first paragraph for sure.

Comment by Charles He on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-22T23:07:44.266Z · EA · GW

I don't disagree with your judgement about banning, but I'd point out that there's no ban for quality; you must be very frustrated with the content.

To get a sense of this: for the specific issue in the dispute, where I suggested the person or institution in question caused a four-year delay in funding, are you saying it's an objectively bad read, even limited to just the actual document cited? I don't see how that is.

Or is it wrong in a way that requires additional context or knowledge to see?

Comment by Charles He on A Quick List of Some Problems in AI Alignment As A Field · 2022-06-22T22:53:14.679Z · EA · GW

I think what you said makes sense.

(As a presumptuous comment) I don't have a positive view of the work, based on strong circumstantial evidence. However, to play a sort of devil's advocate:

There are very few good theories of change for very short timelines, and one of them is to build it yourself. So I don't see how that's good to share.

Alignment might be entangled with this, to the degree that sharing even alignment work might amount to sharing capabilities research.

The above might be awful beliefs, but I don't see how they're wrong.

By the way, just to calibrate, so people can judge whether I'm crazy:

It reads like MIRI or closely related people have tried to build AGI, or to find the requisite knowledge, many times over the years. The negative results seem to have been an update to their beliefs.

Comment by Charles He on A Quick List of Some Problems in AI Alignment As A Field · 2022-06-22T22:38:00.462Z · EA · GW

Basically, I don't know. I think it's good to start off by emphatically stating that I don't have any real knowledge of MIRI.

A consideration is that the beliefs at MIRI still assume very short timelines. A guess is that, because of the nature of some work relevant to short timelines, some projects could have bad consequences if made public (or just never make sense to make public).

Again, this is presumptuous, but my instinct is not to presume to instruct org policy in a situation like this, because of dependencies we don't see. (Just so this doesn't read as a claim that nothing can ever change: I guess the change here would be a new org or new leaders, which is obviously hard.)

Also, to be clear, this accepts MIRI's premise. IMO one should take the premise of shorter timelines seriously; it's a valid belief. Under this premise, the issue here is really bad execution, actively bad.

If your comment was alluding to shifting beliefs away from short timelines, that seems like a really different discussion.

Comment by Charles He on Mental support for EA partners? · 2022-06-22T19:39:20.360Z · EA · GW

You seem really thoughtful and considerate.

This isn't really that deep, but it seems like EAs should accommodate the needs of their partners, with good communication and investment appropriate to the relationship they want with each other.

I don’t think this is news to anyone. I think I’m trying to say your feelings and views are valid.