I don't usually add this, but I'm writing it because this person seems to be setting themselves up for a bit of a public-figure role in EA, and mentions credentials a bit:
Some of the content in previous comments and in this comment isn't a great update with regard to those goals, and I would tap the brakes here. In this comment, the issue isn't the general take itself (negative takes on economics are fine, and good ones are informative), but that the intellectual depth/probable quality of the context of these specific ideas ("Rational Man hypothesis, rapid convergence on Nash equilibrium play in complex games, importance of positional goods in advanced market economies, continuity in rates of economic growth") pattern-matches as not great.
There's a lot going on here, but this person's take on economics seems bad, and it's also a reductive take that is very common.
To calibrate, this take is very similar to how someone critiquing EA might say "all of EA is obsessed with esoteric philosophical arguments and captured by AI and the billionaire donors".
A decent chunk of economics is concerned with meta-economics and critiques of economics itself. These ideas are held by many of the field's key figures.
In addition to these negative meta-views, which are common, entire subdisciplines or alternative schools are concerned with worldview changes in economics:
See behavioral economics (which is, well, sort of not super promising, because it seems to be a repackaging of anecdotes/psychology).
See heterodox economics (which also does poorly, basically for similar reasons as above, as well as challenges with QA/QC and talent supply; diluting disciplines wholesale doesn't really work).
As a plus, economics has resisted the extremes of the culture wars and kept its eye on the work, while internally, as most students and some faculty would recognize, IMO proportionately giving disadvantaged people some equity or a leg up (obviously not completely).
The effort/practice given to diversity is pretty similar to the level EA orgs have chosen and I suspect that's not a coincidence.
Economics has avoided the replication crisis, the event that drives a lot of negative opinion about mainstream science in EA.
To me, it's obvious economics would... it's very hard to communicate why, short of, e.g., showing EAs the environment in an empirical seminar (an argument in labor economics between senior faculty).
The amount of respect senior/mainstream economists give to reality, and to talking to people on the ground in empirical matters, is large, and many ideas about unobservables/quality/social models have come out of this (although these ideas themselves can be attacked as repackaging the obvious).
Some work in economics, like environmental economics (not the same as "ecological economics", which is one of the unpromising heterodox schools), and practical work like kidney exchange (Al Roth), is highly cherished by almost all economists.
Health economics and development economics are basically the entire cornerstone of the most concrete/publicized sector of EA, global health (e.g. GiveWell).
GiveDirectly was literally founded and driven by economists; its entire methodology/worldview is an economics one.
The issues with economics are similar to/literally isomorphic with/in one case identical to those in EA (the math, dissenting subcultures, and decision theory). I've always wanted to write this up, but it seemed ancillary, hard to do well (for the social-reality reasons), and embarrassing in multiple ways.
> The choice of readers of this forum to fund Flynn in the primary plus the massive influx of money from PACs guided by EA principles made it more likely that she will lose this general election...
> Unfortunately, the contributions you made plus the many millions from the PAC funded by Bankman-Fried required Salinas supporters to dig deeply in their pockets to respond to a tsunami of ads, including untrue attacks.
I think giving persuasive detail about this claim would contribute to the goals of this ask, especially given how central the idea is to this post.
What special actions did Salinas perform, or have performed for her, that resulted in depletion, in response to the Flynn candidacy? For example, did she spend or deploy favors that were earmarked for the general?
If Salinas has been depleted, can you explain or illustrate this with some anecdotal information (a few sentences/paragraphs) showing why Salinas has been put in a worse place?
Are you suggesting that election funding resembles a fixed pot of spending each cycle? Doesn't the perceived closeness of elections and other specific factors strongly drive election spending?
In general, can you explain how people from EA did harm, and what EA's duty of care is here as a result of those actions?
Let's say that EAs did not donate money, and instead Flynn did well because EAs got out the vote and worked hard for him. Salinas had to spend a lot of money as a result. Would EAs need to compensate the winning candidate Salinas then?
If, in the scenario above, you don't think EAs need to compensate Salinas, then why would the use of money be a special case, compared to the other substantial resources used, given how fundamental money is to US political campaigns, including Salinas's (exemplified by your post)?
In all close primaries, are the losing candidate's supporters expected to compensate the winner?
Are you asking individual EAs who made personal donations, or are you asking SBF for money?
If you're asking EAs personally for money, can you explain/show why these individual donations were abnormal, and if they were abnormal, why they were "bad"?
As an aside, basically no individual normal EA "wanted or approved" the 8-figure donation strategy.
In the aftermath, isn't it common to believe/speculate that much of the SBF money was neutral or negative to the Flynn campaign?
The other candidates orchestrated a major press event, uniting against the Flynn campaign.
The regional paper began a sustained series of hostile, critical takes on Flynn.
Doesn't it seem plausible that the absolute, net result of the SBF money was a strong reaction that unified and concentrated support behind Salinas, who has a powerful narrative embodying many traits (background, political career) that Flynn's narrative lacked?
There was at least one other well-funded candidate (millions of dollars of personal wealth) who literally complained that this crowded out his strategy.
Didn't the House PAC give $1M, comparable to the total of the EA donation amounts? Do they need to compensate for their behavior too?
I thought this opinion about the use of grey (and avoidance of high contrast) was normal/standard before. Now I'm even more sure, and not knowing about it, or opposing it, seems sort of strange to me.

> "I disagree that those other sites are superior." We would have to define superior. For me, the best (most well paid) minds in UX plus the largest number of users are objective measures. That doesn't mean we have to copy them, but it points to the familiarity factor.

There's a lot going on here, but IMO neither of those things makes this view very promising. This is because those sites are designed for MAU/growth hacking, and the audience is different (and I don't think this is some elite or niche thing). Also, since the business is multiple billions a year, you naturally get top talent.
As an analogy, tabloids are popular and well designed for their audience, but that doesn't make them dominant design choices. I do agree that the design on average is good and things work for those sites.
Also, I suspect some design choices from those sites have dependencies—I think having an infinite scroll or video or picture focus would affect other design choices, such as size/position/font of text, so copying those design choices to a forum might not be appropriate without more sophistication.
I don't want to be disagreeable or press too much on you here. Honestly, I want to learn about design and different perspectives, but I don't think I'm being disagreeable?
Some of the other things you said suggest you have strong views that seem more personal, and also that you use some unusual color filter? This makes me speculate that you are applying a personal perspective disproportionately and ignoring "the customer", but maybe this is unfair.
Thanks for the responses, they give a lot more useful context.
> (I also want to reply to your top-level comments about the evolutionary anchor, but am a bit short on time to do it right now (since for those questions I don't have cached technical answers and will have to remind myself about the context). But I'll definitely get to it next week.)
If it frees up your time, I don't think you need to write the above, unless you specifically want to. It seems reasonable to interpret that point on "evolutionary anchors" as a larger difference on the premise, one not fully in scope of the post. That difference, and its phrasing, is more disagreeable/overbearing to answer, so it's also less worthy of a response.
This seems like a really good post. It's compact, honest, and clued in to a lot of social realities. It is aware of many offsetting considerations and presents them in an appropriate way. The post must also have been easier to write than a longer, more ornate one.
I'm confused why an all-white background is better; grey is easier on the eyes, and the non-white color gives a natural frame to the other content. Both points seem pretty standard in design.
I disagree that those other sites are superior. A major issue is that they use visual/video content (Reddit and FB) and have different modes of use/seeking attention. They are designed around a scrolling feed producing a constant stream of content, showing 1-3 items at a time.
Setting the above aside, I'm uncertain why your changes reflect ideas from them. For example, your changes to text make posts much more compact than Reddit or SO.
Nested inside the above issue, another problem is that the author seems to use "proof-like" rhetoric in her arguments, when she needs to provide broader illustrations that could generalize for intuition, because the proof actually isn't there.
Some statements don't seem to resemble how people use mathematical argumentation in disciplines like machine learning or economics.
To explain: the author begins with an excellent point that it's bizarre and basically statistically impossible for a feedforward network to learn to do certain things through limited training, even though the actual execution in the model would be simple.
One example is that it can't learn the mechanics of addition for numbers larger than it has seen computed in training.
Basically, even the most "well trained"/largest feedforward DNN trained with backprop will never add 99+1 correctly if it was only trained on adding smaller numbers like 12+17, where the totals never reach 100. This is because, under backprop, the network literally needs to see, and build processes for, the hundreds digit. This is despite the fact that it's simple (for a vast DNN) to "mechanically have" the capability to perform true logical addition.
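A toy sketch of this structural point (my own hypothetical setup, not from the paper): if a model's output layer is defined over only the totals that appeared in training (0-99), then no choice of weights, however well trained, can ever emit 100.

```python
import random

random.seed(0)

# Hypothetical toy model: a linear scoring head over the 100 totals (0..99)
# that appeared in training. The architecture itself fixes the output space.
W = [[random.gauss(0, 1) for _ in range(100)] for _ in range(2)]

def predict(a, b):
    # Score each of the 100 candidate totals, return the best-scoring one.
    logits = [a * W[0][k] + b * W[1][k] for k in range(100)]
    return max(range(100), key=lambda k: logits[k])  # always in 0..99

# Whatever the weights are, 99 + 1 = 100 is structurally unreachable.
print(all(predict(a, b) <= 99 for a, b in [(99, 1), (50, 50), (12, 17)]))
```

The point the sketch makes is narrow: the unreachability is about the output space fixed by the training setup, which is separate from whether the network could "mechanically" represent true addition.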
Starting immediately from the above point, I think the author wants to suggest that, in the same way it's impossible to get this functionality, this constrains what feedforward networks can do (and these ideas should apply to deep learning, or 2020 technology, for biological anchors).
However, everything sort of changes here. The author says:
It's not clear what is being claimed or what is being built on above.
What computations are foreclosed, or what can't be achieved, in feedforward nets?
While the author shows that addition with n+1 digits can't be achieved by training on addition with n-digit numbers, and certainly many other training-to-outcome paths are prevented, why would this generally rule out capability, and why would it stop other (maybe very sophisticated) training strategies/simulations from producing models that could be dangerous?
The author says the “upshot is that the class of solutions searched over by feedforward networks in practice seems to be (approximately) the space of linear models with all possible features” and “this is a big step up from earlier ML algorithms where one has to hand-engineer the features”.
But that seems to allow general transformations on the features. If so, that is incredibly powerful. It doesn't seem to constrain the functionality of these feedforward networks?
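For intuition on how powerful "linear in the features" can be, here's a hedged sketch (my own illustration, not the author's argument) of the random-features view: freeze a wide random hidden layer and fit only a linear readout on top of it. The model is linear in its features, yet fits a curved target closely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random-features sketch: a frozen random hidden layer supplies nonlinear
# features; training fits only a *linear* readout over those features.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0])                      # a clearly nonlinear target

H = np.tanh(X @ rng.normal(size=(1, 500)) + rng.normal(size=500))
w, *_ = np.linalg.lstsq(H, y, rcond=None)      # linear model over the features

mse = float(np.mean((H @ w - y) ** 2))
print(mse < 1e-2)  # the linear-in-features model fits the nonlinearity well
```

This is why "approximately a linear model with all possible features" reads as a very weak constraint: the features themselves can carry arbitrary nonlinear transformations.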
Why would this logic, which relies on a technical proof (which I am guessing rests on a "topological-like" argument that requires the smooth structure of feedforward neural nets), apply even to RNNs or LSTMs, or to transformers?
I don't know anything about machine learning, AI, or math, but I'm really uncertain about the technical section in the paper, "Can 2020 algorithms scale to TAI?"
One major issue is that in places in her paper, the author expresses doubt that "2020 algorithms" can be the basis of computation for this exercise. However, she only deals with feedforward neural nets in the technical section.
It seems really off to leave out other architectures.
If you try using feedforward neural nets and compare them to RNNs/LSTMs on sequence tasks like text generation, it's really clear there is a universe of difference between them. I think there are many situations where you can't get similar functionality from a feedforward DNN (or get things to converge at all) even with much more "compute"/parameter size, while a plain RNN/LSTM will work fine, and these are pretty basic models today.
I have lots of questions about the paper “Biological Anchors external review”.
I think this paper contains a really good explanation of the biological anchors and gives good perspective in plain English.
But later in the paper, the author presents ideas that seem only briefly handled and, to me, appear to be intuitions. Even if they are intuitions, they can be powerful, but I can't tell how insightful they really are from the brief explanation given, and I think they need more depth to be persuasive.
One example is when the author critiques using compute and evolutionary anchors as an upper bound:
(Just to be clear, I haven't actually read any of the relevant papers, and I'm just guessing the content from the titles.) But the only way I can imagine "biological anchors" being used is as a fairly abstract model, a metaphor really.
People won't literally simulate a physical environment and physical brains. In a normal environment, actual evolution rarely selects for "intelligence" (no suitable mutation/phenotype; the environment has too many challenges, or none). So you would skip a lot of features, force mutations, and construct challenges. I think a few steps along these lines means the simulation will use abstract digital entities, and this would be wildly faster.
It seems important to know more about why the author thinks that more literal, biological brains need to be simulated. This seems pretty core to the idea of her paper, where she says a brain-like model needs to be specified:
But I think it’s implausible to expect a brain-like model to be the main candidate to emerge as dangerous AI (in the AI safety worldview) or useful as AGI for commercial reasons. The actual developed model will be different.
Another issue is that (in the AI-safety worldview) specifying this "AGI model" seems dangerous in itself, and wildly backwards for the purpose of this exercise. Because of normal market forces, by the time you wrote up this dangerous model, someone would already be running it. Requiring that we see something close to the final model is like requiring that we see an epidemic before preparing for one.
>My sense is that the original, OG EAs, were hard core and took the idea of frugality very seriously
The relevant story:
> It was a sunny day in September, and they were at an apple orchard outside Boston. There were candy apples for sale, and Julia wanted one. Normally she would have told herself that she could not justify spending her money that way, but Jeff had told her that if she wanted anything he would buy it for her with his money. He had found a job as a computer programmer; Julia was still unemployed, and did not have any savings, because she had given everything she had earned in the summer to Oxfam.
> That night they lay in bed and talked about money. Jeff told Julia that, inspired by her example, he was thinking of giving some percentage of his salary to charity. And Julia realised that, if Jeff was going to start giving away his earnings, then, by asking him to buy her the apple, she had spent money that might have been given. With her selfish, ridiculous desire for a candy apple, she might have deprived a family of an anti-malarial bed net or deworming medicine that might have saved the life of one of its children. The more she thought about this, the more horrific and unbearable it seemed to her, and she started to cry. She cried for a long time, and it got so bad that Jeff started to cry, too, which he almost never did. He cried because, more than anything, he wanted Julia to be happy, but how could she be happy if she went through life seeing malarial children everywhere, dying before her eyes for want of a bed net? He knew that he wanted to marry her, but he was not sure how he could cope with a life that was going to be this difficult and this sad, with no conceivable way out.
> They stopped crying and talked about budgets. They realised that Julia was going to lose her mind if she spent the rest of her life weighing each purchase in terms of bed nets, so, after much discussion, they came up with a system...
I didn't downvote but my guess is that there are two distinct reasons:
The Daily Show is the big leagues. MacAskill's interview was really well done, which is hard for the audience and context. I think that every phrase MacAskill said was carefully chosen to be correct and serve the intended narrative, while at the same time appearing succinct and natural. A digression, which IMO is what you're suggesting, would take up time and attention.
The natural place where MacAskill could have inserted your proposed point was inside a potentially problematic subthread about EA attracting billionaire wealth. MacAskill responded with an argument revolving around the idea that billionaires should pay more than others.
To be more exact, at "6:16" the host said "It's also interesting to see how many billionaires have signed onto your ideas."
IMO, this is a pretty "hingey" place in the interview and MacAskill responds perfectly by talking about how wealth doesn't matter much proportionately to the wealthy, and even lands applause with a punchy point.
At about 7:05 is the most natural place for your comment, and MacAskill uses it to press on the (hyper)privileged giving more. Here's a rough transcript:
The second issue is that, in this specific environment, I think there's a risk of your point being conflated: your point that the underprivileged should not have to give might be conflated with the idea that the underprivileged should not engage with EA.
This conflation would be bad: it's possible that non-"elite", lower-income people could make major contributions to EA as employees, leaders, or founders (which is, like, your personal belief, right?). Also, the conflated idea is terrible optics.
I think there is some risk of this conflation and it would take care and attention to communicate this original point. Even 15 seconds would be costly and risk redirecting the flow of the conversation.
I'm not sure this is true (also, I'm personally a carpetbagger and chancer who learned about EA two weeks ago): My sense is that the original, OG EAs were hardcore and took the idea of frugality very seriously. There are stories about Julia Wise crying over having candy bought for her when the money could have bought a bednet instead. MacAskill has had serious back problems, presumably because he avoided spending money on furniture or specialists for himself.
Even among the very established "EA elite", there might be many who miss things about that old spirit. The ideas in your comment seem to cut against this level of dedication and might screen out new "instances" of such people.
> Inside the tech world, there's a norm of fairly transparent salaries driven by levels.fyi (and Glassdoor, to a lesser extent). I think this significantly reduces pay gaps caused by eg differential negotiating inclinations, and a similar gathering place for public EA salary metrics is one of my pet project proposals.
This is fairly wrong to very wrong.
It's somewhat true that within each "level" or "band" there's a defined salary or comp range.
But this isn't that restrictive; the ranges can be wide, especially past the entry bands.
Also, there can be weird sub-bands or sub-roles that let you evade the normal bands.
Also, there can be special cases where companies go out of band, or other "compensating differentials" can be given (although the latter are small at big tech companies).
The band is determined after the interview process: after the interviews, your band is decided and then you get an offer; you can come in at, say, level "5" or level "6".
E.g., it's very possible that for people who interview for the same position, comp ranges from 200k to 600k, maybe even 800k.
This is further complicated by RSUs and vesting schedules.
The result is that the comp range for each "job posting" is huge: 200k to 800k.
Overall, for various structural reasons, salary isn't transparent at almost all very successful for-profit companies, except for junior roles. Tech is a particularly terrible example to use because the pay can be so different for the same role.
Using large ranges would produce the various problems described in this comment, which is excellent; I strongly support its view, because saying the pragmatic truth here is an "anti-meme".
For better or worse, the end result of transparency is probably a limited improvement over the current state (which is fairly good), for a number of reasons, which I think are given in Morrison's comment.
Another reason is optics: at least one EA org pays a salary of over $1,000,000, and that's not including equity.
Thanks for the answers, this makes a lot of sense.
Can you be specific about #1? For example, what format of programming tests would you prefer to give to a generalist engineer?
By the way, do you mean something special or "hands on" for ML programming or design questions?
For ML programming, it seems bad to rely on ML or design questions in the sense of verbal question-and-answer? I think actually designing/choosing based on ML scientific knowledge is a tiny part of the job, so many ML knowledge questions would be unnatural (rewarding memorization of standard ML books, selecting for "enthusiasts" who read up on recent libraries, and blowing out strong talent who have solved a lot of hard real-world problems).
> whether intelligence is even the limiting factor in scientific and technological progress.
My personal, limited experience is that better algorithms are rarely the bottleneck.
Yeah, in some sense everything else you said might be true or correct.
But by "better algorithms", I suspect you are thinking along the lines of "What's going to work as a classifier: is this gradient booster with these parameters going to work robustly for this dataset?", "More layers to reduce false negatives has huge diminishing returns; we need better coverage and identification in the data", or "Yeah, this clustering algorithm sucks for parsing out material of this quality."
Is the above right?
The above isn't what the AI-safety worldview means by "intelligence". In that worldview, the "AI" competency would basically start working up the org chart and progressively take over roles: starting with the model-selection decisions in the paragraph above, then doing data cleaning and data selection over accessible datasets, calling and interfacing with external data providers, then understanding the relevant materials science and how it might relate to the relevant "spaces" of the business model.
So this is the would-be "intelligence". In theory, solving all those problems above seems like a formidable "algorithm".
Thanks for this thoughtful reply and your work on animal welfare.
So my previous comment was meant to express my belief/knowledge/informed guesses that chronic suffering is very large, and that, on average, focusing on interventions that reduce highly unnatural, chronic suffering is more impactful. I'm uncertain that ~1-2 hours of slaughter, even of many millions of fish, is the top priority if it shifts resources from other priorities. I think a general, uncommitted audience should take these views into account.
All the reasons you mention seem great to me, and they are also useful, interesting ideas in themselves.
I think a project reducing slaughter suffering is really valuable. I think if you're saying that you or someone else is committed to working on this, have a lead here, or even have a personal preference to work on these projects, that seems really good to do.
I think it's really bad if EA-aligned people thoughtfully working on interventions feel they have to "win the extra game of being approved on the EA Forum", especially if the quality of discussants is low or the views are unnecessarily disagreeable.
I'm not just saying that the probability this post will be discovered or clicked on is very low.
In my above comment, I'm saying a computer vision project, like I think the OP proposed, has a very low risk of harm, so this danger is outweighed by the value from a viable, well executed EA project along these lines (assuming that such a promising project exists).
I have these beliefs for reasons that include the following:
Someone I know works in, or has knowledge of and working experience in, "AI"/"machine learning" and leading experimental projects, probably above the median EA's knowledge in those areas.
Also, I think a well-aligned and well-executed project could be more valuable than it seems from discussion alone. This is because the actual implementation of a system (including welfare-improving projects that don't involve "AI") is really important. Differences in execution, which can be subtle, small, and hard to communicate, can affect welfare more than "desk research" or non-expert discussion suggests. Assuming they can execute, an EA can take advantage of this and supplant/compete with alternative systems.
It's possible that this is wrong, but I'm not immediately sure it's possible to communicate this.
I think it's good to write this comment because, well, EA Forum discussion is possibly stopping potential actual EA projects, and that's bad.
It's really impressive you changed your mind so quickly.
I don't want to jump into discussions, and answering this fully is hard, but you did write that you were stopping entirely, so I'm writing the below quickly in support.
It's unlikely that promising, thoughtful work by EAs is going to be harmful/be captured/enhance harm by factory farming, and it's likely a net positive.
The grandparent comment dramatically understates how much the industry already actively works on, and crowds out, related profit-maximizing research. Basically, in the same way the ITN framework works, this reduces the risk that even talented animal-welfare EAs working on the most successful project would significantly help the industry exploit more animals.
There are not just one but multiple journals and subdisciplines on the industrial farming of broilers and chickens.
I mean, go to this link, and keep clicking next, I'm on page 46 and there's still articles.
There are likely "wedges" where animal welfare and increasing factory-farming profit diverge.
One would be alleviating the chronic suffering of malingering animals, who get slaughtered anyway and whose relief from suffering wouldn't cost-effectively produce more meat. Because factory farms don't price in pain, it's possible that making chickens happier in many situations could yield major welfare improvements without changing farming incentives.
Point #1 above suggests why there would be a wedge, because profit seeking is extensive.
I think there's a big, giant thesis here about "welfarism", covering theories of change that involve collaborating/working with some farmers. These projects have been carried out viably and successfully by many EAs in the past, but I'm unsure anyone wants to read that right now, and I want to just write this comment quickly.
> Separately, I think the effective animal activism community should be much clearer on a long-term strategy to inform their prioritization. By when do we expect to get meat alternatives that are competitive on taste and price? At that point, how many people do we expect to go vegetarian? Is there a date by which we expect >50% of the developed-world population to go vegetarian? To what degree are policies shaped by precedents from other countries?
These thoughts seem both really important and quite deep and thoughtful.
I don't know the answer at all, but I have a few questions that might be useful (they might advance the discussion/intent). Please feel free to answer if it makes sense.
Is this related to what people call a “theory of victory” or vision?
If so, I have questions about the use of “theory of victory”. I’m uncertain, in the sense I want to learn more, about the value of a “theory of victory” in farm animal welfare.
If we reduced animal suffering by 50-90% in a fairly short time, that seems really good and productive. What does a theory of victory contribute in addition to that?
Maybe it provides a "focal point" or "tipping point", or is useful for morale, rhetoric, or getting further allies or resources?
Maybe it has coordination value. For example, if we knew in "year Y" that meat alternatives would reach "cost parity", coordinating many other campaigns and activities for the same time would be useful.
Are you imagining this "long-term strategy" to come from the EA community (maybe in the sense of EA farm animal leaders agreeing, or people brainstorming more generally, or Rethink Priorities spinning up a project), or do you think it would come from a more external source?
I'm not certain, but I'm probably not very optimistic about large-scale shifts away from meat consumption in a short time frame. I'm interested in facts, or even just a formidable narrative, that could change this non-optimistic view. Do you or anyone else have any thoughts about this?
The people I have met in the past who advance the idea of a major upcoming shift seem to rely on narratives focused on personal dietary change. Upon examination, their views seem really inconsistent with data showing that the percentage of the population that is vegan/vegetarian has been flat for decades.
To me, some groups or initiatives seem to communicate mainly with subcultures that are historically receptive to animal welfare. The resulting information environment could unduly influence their judgment.
So, I'm not that familiar with this legislation, but I think the key purpose behind banning factory farms would be a major political, legislative win that has strategic value.
But the comment's main arguments are:
Factory farming is minimal in Switzerland, so this legislation doesn't do much.
There are big negative consequences to this legislation's ban on farms (price increases, food security).
Don't these two points conflict with each other? Also, neither undermines the main purpose of the legislation mentioned above.
I'm confused what "ivory tower" and "consequentialism" add here—I'm sure EA has a big consequentialist streak, but I'm not sure how relevant that is to reducing torture on factory farms, or reduce huge mortality from diseases like malaria.
Similarly, whether I agree or don't agree with unfairness or not, I'm unsure what "moral axes in human psychology according to Haidt’s moral foundations theory" adds.
RE: Regressiveness, I think it's possible to model price increases due to policy changes, and I would expect to see numbers if this were significant.
For dealing with regressiveness, Pigouvian taxes and progressive taxes have been around for a long time and seem to be tools we can use.
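The kind of modeling mentioned above can be sketched in a few lines. This is a toy illustration with entirely made-up numbers (the incomes, meat spending, and price increase are assumptions, not real data); it only shows how recycling the revenue from a price increase as a flat per-capita rebate can offset its regressive incidence.

```python
# Toy model: burden of a 10% meat price increase across income quintiles,
# and how a flat per-capita rebate funded by the revenue offsets
# regressiveness. All numbers are illustrative, not real data.

incomes = [15_000, 30_000, 50_000, 80_000, 150_000]   # quintile incomes
meat_spend = [900, 1_100, 1_300, 1_500, 1_800]        # annual meat spending

price_increase = 0.10
extra_cost = [s * price_increase for s in meat_spend]

# Revenue recycled as an equal per-capita rebate
rebate = sum(extra_cost) / len(extra_cost)

net_burden = [c - rebate for c in extra_cost]          # negative = net gain
net_burden_pct = [100 * b / y for b, y in zip(net_burden, incomes)]
```

Under these made-up numbers, the lowest quintile comes out ahead (its extra cost is below the rebate) while the top quintile pays a small net amount, which is the standard argument that a Pigouvian tax need not be regressive once the revenue is recycled.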
"claims like 99% of meat being factory farmed", but I can't find this claim on this post or on the website. Where did you get this and what was the context it was used in?
I was wrong: this entire thread about a "leak", including my last two comments, was wrong/noise.
I thought I was obtaining private information through an API endpoint. Upon further examination, the information was not private, the data contained no additional information, and I misinterpreted its meaning.
TL;DR: Unfortunately, I think I am asking for a somewhat clearer statement from a senior source that CEA won't take such action. To be clear, this could be a clear, good-faith statement from someone like Max Dalton or Habryka that they will make best efforts not to restrict or adjust the open vision of API use as a result of this leak. Because EA Forum technical development is closely intertwined with LW, this statement should include the consent of the LW team, such as Habryka.
I believe the leak is substantial (it's not an emergency but there is some chance it's embarrassing).
Because of the moderate severity of the leak, I think something like the following scenario could occur:
Two weeks after being notified of the leak, JP, Ben West, Max Dalton and the CEA board have a routine, private meeting about CEA's online programs, like most organizations do.
In this routine meeting, one of the conclusions is the suspension of further development of EA Forum features in favor of another technical project. The leak has a large influence on this decision.
Three months after the private meeting, a message is posted: "We're restricting [specific forum/API] use because of limited developer attention. Unfortunately, we decided to turn it off because of the maintenance demands."
While no connection with the leak is stated, and other factors (including genuinely limited developer time) played a role, the truth is that leaks/headaches were the causal reason the feature stopped being developed.
Note that the above does not require bad faith on the part of CEA. I actually don't think anyone wants the above to happen. The above scenario is just the logical, rational way of doing things if you're heading an entity that has a lot of projects and limited developer time.
Like, this is a confession, this is what I would do.
Another way of seeing this is that the forces are just the normal forces of being a public entity.
By this request, I'm trying to proportionately "tie" your hands (to the degree that a public good faith statement would) so that these forces can't act to deteriorate access.
Thank you for this information, this allowed me to resolve the issue.
Now, separately, and in a more formal/precise tone.
I want to inform you that there is an unintended leak of EA Forum or LessWrong private user information, and it is substantial.
To calibrate on severity: this is something you want to change soon. It is not extremely sensitive (less sensitive than private messages), but in my opinion it is at least as sensitive as leaking email addresses.
I propose the following:
Please agree that the EA Forum team and the LessWrong team will not remove or restrict forum access and/or API functionality, or allow it to deteriorate, as a consequence of my disclosure of this leak.
Specifically, I am worried that the team might turn off access or functionality of the EA forum or the LW forum to reduce the likelihood of future leaks by reducing the amount of code that needs to be maintained (it's not that the access/functionality is a security concern by itself, but it's easier not to have leaks if you don't have websites/API endpoints).
Note that my disclosure is in good faith, I do not directly benefit from the disclosure. I believe my "use" of the leak or knowledge of its existence would be undetectable without me notifying you.
If you agree on the point above, please provide an email where I can send information about the leak.
Note that I may email Habryka and other people in addition to the email you provide.
Note that the leak should not be difficult to correct.
One example is the presence of staff that monitor all interactions in order to enforce certain norms. I've heard that they can seem a bit intimidating at times.
I agree that transparency to the public is really lacking. I happen to know there is an internal justification for this opaqueness, but still believe that there are a lot more details they could be making public without jeopardizing their objectives.
The content in this comment seems really false to me, both in the actual statements and in the "color" the comment has. It seems like it could mislead others who are less familiar with actual EAG events and other EA activities.
Below is object level content pushing back on the above thoughts.
Basically, it's almost physically impossible to monitor a large number of interactions, much less all interactions at EAG:
Most meetings are 1on1s that are privately arranged, and there's many thousands of these meetings at every conference. Some meetings occur in scheduled events (e.g. speed meetings for people interested in a certain topic).
It's not possible for CEA staff to physically hover over all in-person meetings, and I don't think there's enough staff to cover even all centrally organized events (trained volunteers are used instead).
Also, if someone tried to eavesdrop in this way, it would be immediately obvious (and seem sort of clownishly absurd).
In all venues, there is "great diversity" in the physical environments where people can meet.
This includes large, open standing areas, rooms of small or medium size, booths, courtyards.
This includes the ability to walk the streets surrounding the venue (which can be useful for sensitive conversations).
By the way, providing this diversity is intentionally done by the organizers.
CEA staff do not control/own the conference venue (they rent and deal with venue staff, who generally are present constantly).
It seems absurd to write this, but covert monitoring of private conversations is illegal, and there are literally hundreds of technical people at EA conferences; I don't think this would go undetected for long.
While less direct, here are anecdotes about EAG or CEA that seem to suggest an open, normal culture, or something like it:
At one EAGx, the literal conference organizers and leader(s) of the country/city EA group were longtime EAs, who actively expressed dislike of CEA, due to its bad "pre-Dalton era" existence (before 2019)
The fact that they communicated their views openly and still lead an EAGx and enjoy large amounts of CEA funding/support seems healthy and open.
Someone I know has been approached multiple times at EA conferences by people who are basically "intra-EA activists", for example, who want different financing and organizing structures, and are trying to build momentum.
The way they approached seemed pretty open, e.g. the place they wanted to meet was public and they spoke reasonably loudly and directly
By the way, some of these people are employed by the canonical EA organizations or think tanks, e.g. they literally have physical offices not far from some of the major, major EA figures.
These people shared many details and anecdotes, some of which are hilarious.
Everything about these interactions and the existence of these people suggests openness in EA in general
On various matters, CEA staff don't agree with other CEA staff, like in all normal, healthy organizations with productive activities and capable staff. The fact that these disputes exist sort of "interrogates the contours" of the culture at CEA and seems healthy.
It might be possible and useful to quantify decline in forum quality (measurement is hard, but it seems plausible to use engagement from promising or established users, and certain voting patterns might be a marker of quality).
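As one hypothetical way to operationalize the parenthetical above, here is a sketch of an "established-user engagement" proxy: the share of votes on a post cast by accounts above a karma threshold. The data structure, threshold, and all numbers are invented for illustration.

```python
# Hypothetical quality proxy: share of votes on a post that come from
# "established" accounts (karma above a threshold). All data is invented.

posts = [
    {"id": 1, "votes": [("alice", 2500), ("bob", 40), ("carol", 3100)]},
    {"id": 2, "votes": [("dan", 10), ("erin", 55)]},
]

ESTABLISHED_KARMA = 1000  # arbitrary threshold for "established"

def established_vote_share(post):
    """Fraction of a post's votes cast by accounts above the karma threshold."""
    votes = post["votes"]
    if not votes:
        return 0.0
    established = sum(1 for _, karma in votes if karma >= ESTABLISHED_KARMA)
    return established / len(votes)

shares = [established_vote_share(p) for p in posts]
```

Tracking a statistic like this over time (rather than raw vote counts) is one way a decline in quality could show up even while total engagement rises.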
In the same way that two human super powers can't simply make a contract to guarantee world peace, two AI powers could not do so either.
(Assuming an AI safety worldview and the standard, unaligned, agentic AIs) in the general case, each AI will always weigh/consider/scheme at getting the other's proportion of control, and expect the other is doing the same.
based on their relative power and initial utility functions
It's possible that peace/agreement might come from some sort of "MAD" or game theory sort of situation. But it doesn't mean anything to say it will come from "relative power".
Also, I would be cautious about being too specific about utility functions. I think an AI's "utility function" generally isn't a literal, concrete thing, like a Python function that gives comparisons, but might be far more abstract, and might only appear through emergent behavior. So it may not be something that you can rely on to contract/compare/negotiate.
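To make the contrast concrete, here is what a literal, inspectable utility function would look like: the kind of explicit object two parties could actually reference in a contract. The point above is that real learned systems generally have nothing this concrete. The quantities ("paperclips", "energy") and weights are arbitrary illustrations.

```python
# A "literal" utility function: concrete, inspectable, and comparable.
# Real learned systems generally have no explicit object like this.

def utility(world_state: dict) -> float:
    """Toy preferences: values paperclips, penalizes energy use."""
    return 3.0 * world_state.get("paperclips", 0) - world_state.get("energy", 0)

def prefers(state_a: dict, state_b: dict) -> bool:
    """True if state_a is preferred to state_b under this utility."""
    return utility(state_a) > utility(state_b)
```

If both sides had functions like these, comparing and negotiating over them would be mechanical; the argument above is that emergent preferences admit no such inspection, so a contract has nothing stable to bind to.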
This content empowers the moderator to explore any relevant idea, and lets thousands of people learn and update on key EA thought and develop object-level views of the landscape. They can stay grounded.
This can justify a substantial service team, such as editors and artists, who can illustrate posts or produce other design work.
There is a lack of forum discussion on effective animal welfare
This can be improved with the presence of people from the main larger EA animal welfare orgs
Welfarism isn’t communicated well.
Welfarism observes the fact that suffering is enormously unequal among farmed animals, with some experiencing very bad lives
It can be very effective to alter this and reduce suffering, compared to focusing on removing all animal products at once
This idea is well understood and agreed upon by animal welfare EAs
While welfarism may need critique (which it will withstand, as it's as substantive as impartialism), its omission is distorting and wasting thinking, in the same way the omission of impartialism would.
Anthropomorphism is common (discussions contain emotionally salient points that are different from what fish and land animal welfare experts focus on).
Reasoning about prolonged, agonizing experiences is absent (it’s understandably very difficult), yet is probably the main source of suffering.
Patterns of communication in wild animal welfare and other areas aren’t ideal.
It should be pointed out that this work involves important foundational background research. Addressing just the relevant animals in human affected environments could be enormously valuable.
In conversations that are difficult or contentious with otherwise altruistic people, it might be useful to be aware of the underlying sentiment where people feel pressured or are having their morality challenged.
Moderation of views and exploration is good, and pointing out one's personal history in more regular animal advocacy and other altruistic work is good.
Sometimes it may be useful to avoid heavy use of jargon, or applied math that might be seen as undue or overbearing.
A consistent set of content (web pages seem to be good).
Showing upcoming work would be good in Wild Animal Welfare, such as explaining foundational scientific work
Weighting suffering by neuron count is not scientific; resolving this might be EA cause X.
EAs often weight by neuron count as a way to calculate suffering. This has no basis in science. There are reasons (not settled or concrete, unfortunately) to think smaller animals (mammals and birds) can have similar levels of pain or suffering as humans.
To calibrate, I think most or all animal welfare EAs, as well as many welfare scientists would agree that simple neuron count weighting is primitive or wrong
Weighting by neuron count has been necessary because it’s very difficult to deal with the consequences of not weighting
Weighting by neuron counts is almost codified; its use turns up casually, probably because omitting it is impractical (the implications are emotionally abhorrent).
Because it's blocked for unprincipled reasons, this could probably be “cause X”
The alleviation of suffering may be tremendously greater if we remove this artificial and maybe false modifier, and take appropriate action with consideration of the true experiences of the sentient beings.
The considerations about communication and overburdening people apply, and a conservative approach would be good
Maybe driving this issue starting from prosaic, well known animals is a useful tactic
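To make the weighting issue concrete, here is a toy calculation. The population and brain-neuron figures are rough public estimates, and the "suffering scores" are pure placeholders; the sketch only shows how dividing by a neuron-count ratio shrinks the estimated totals for small animals by orders of magnitude.

```python
# Toy sketch: effect of neuron-count weighting on aggregate suffering
# estimates. Population and neuron figures are rough public estimates;
# the suffering scores (0-1) are pure placeholders.

animals = {
    # name: (individuals farmed per year, approx. brain neurons, score 0-1)
    "chickens": (70e9, 220e6, 0.8),
    "pigs": (1.5e9, 2.2e9, 0.7),
}

HUMAN_NEURONS = 86e9  # rough estimate for a human brain

def total_suffering(weight_by_neurons):
    """Sum population * score, optionally scaled by neurons / human neurons."""
    out = {}
    for name, (population, neurons, score) in animals.items():
        weight = neurons / HUMAN_NEURONS if weight_by_neurons else 1.0
        out[name] = population * score * weight
    return out
```

Under these placeholder numbers, applying the neuron-count weight shrinks the chicken total by a factor of several hundred, which is why the choice of weighting (rather than the underlying welfare data) can dominate prioritization conclusions.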
Many new institutions in EA animal welfare, which have languished from lack of attention, should be built.
I agree with this comment and it seems I should be banned, and I encourage you to apply the maximum ban. This is because:
The moderator comment above is correct
Additionally, in the comment that initiated this issue, I claimed I was protecting an individual. Yet, as the moderator pointed out, I seemed to be “further doxxing” him. So my claims seem dishonest or hypocritical. I think this is a severe fault.
In the above and other incidents, it seems like I am the causal factor: without me, the incidents wouldn't exist.
Also, this has taken up a lot of time:
For this event, at least one moderator meeting has occurred and several messages notifying me (which seems a lot of effort).
I have gotten warnings in the past, such as from two previous bans (!)
One moderately senior EA moderator has now reached out for a call.
I think this use of time (including very senior EAs) is generous. While I’m not confident I understand the nature of the proposed call, I’m unsure my behavior or choices will change. Since the net results may not be valuable to these EAs, I declined this call.
I do not promise to remedy my behavior, and I won’t engage with these generous efforts at communication.
So, in a way requiring the least amount of further effort or discussion, you should apply a ban, maybe a very long or permanent one.
I don't disagree with your judgement of banning, but I'd point out there's no ban for quality; you must be very frustrated with the content.
To get a sense of this, for the specific issue in the dispute, where I suggested the person or institution in question caused a 4-year delay in funding: are you saying it's an objectively bad read, even limited to just the actual document cited? I don't see how that is.
Or is it wrong only with additional context or knowledge?
Basically, I don't know. I think it's good to start off by emphatically stating I don't have any real knowledge of MIRI.
A consideration is that beliefs at MIRI still favor very short timelines. A guess is that, because of the nature of some work relevant to short timelines, maybe some projects could have bad consequences if made public (or just don't make sense to ever make public).
Again, this is presumptuous, but my instinct is not to have attitudes of instructing org policy in a situation like this, because of dependencies we don’t see. (Just so this doesn’t read like a statement that nothing can ever change: I guess the change here would be a new org or new leaders, obviously this is hard).
Also, to be clear, this is accepting the premise of MIRI. IMO one should take seriously the premise of shorter timelines, like, it’s a valid belief. Under this premise, the issue here is really bad execution, like actively bad.
If your comment was alluding to shifting of beliefs away from short timelines, that seems like a really different discussion.
This isn’t really that deep, but it seems like EAs should accommodate the needs of their partners, with good communication, and investment appropriate to the relationship that they want for each other.
I don’t think this is news to anyone. I think I’m trying to say your feelings and views are valid.