The other alternative is that there was some coordination about releasing LLMs. Plenty of people argue that they should somehow coordinate, so it would not be surprising if they actually did.
There's the claim that GPT-4 is better at staying within its guardrails and that Bing runs on GPT-4. How does that fit with Bing's behavior?
This definitely indicates a mishandling of the situation that leaves room for improvement.
I agree with that and do think that having a better system to share information would be good.
With regard to the latter, if someone were triggering psychotic breaks in my community, I would feel no shame in kicking them out, even if it was unintentional.
If Vassar tells someone about how the organization for which they are working is corrupt and the person Vassar is talking with considers his arguments persuasive, that's going to be bad for their mental health.
Anna Salamon wrote that post because she believes that some arguments made about how CFAR was corrupt were reasonable arguments.
To the extent that the rationalist ideal makes sense, it includes not ostracising people for speaking uncomfortable truths, even if those uncomfortable truths are bad for the mental health of some people.
We already know he's been using his connection with Yud (via HPMOR) to try and seduce people.
The seduction here is "Look, I'm bad in a way that I served as a template for the evil villain".
While "X is a bad boy" can be attractive to some women, it should be a very clear sign that he's poor relationship material. It also shouldn't be surprising for anyone when he's actually a bad boy in that relationship.
A woman who wants a relationship with a bad boy can find one, and it feels a bit paternalistic to say that a woman who wants that shouldn't get any opportunity to have it.
I do think there are good reasons not to have him at meetups but it's a complex decision.
Neither Scott's banning of Vassar nor the REACH banning was quiet. It's just that there's no process by which the people who organize Slate Star Codex meetups are made aware.
It turns out that plenty of people who organize Slate Star Codex meetups are not in touch with Bay Area community drama. The person who organized that SSC online meetup was from Israel.
Even in the comments here where some very harsh allegations are made against him
That's because some of the harsh allegations don't seem to hold up. Scott Alexander spent a significant amount of time investigating and came up with:
While I disagree with Jessica's interpretations of a lot of things, I generally agree with her facts (about the Vassar stuff which I have been researching; I know nothing about the climate at MIRI). I think this post gives most of the relevant information mine would give. I agree with (my model of) Jessica that proximity to Michael's ideas (and psychedelics) was not the single unique cause of her problems but may have contributed.
There are different concerns when it comes to Authentic Revolution and the EA community. Authentic Revolution hosts events where people become emotionally vulnerable, which calls for rules that prevent that vulnerable state from being abused by the people leading the events.
In the EA community, a lot of concerns about power abuse are about help with professional connections. Waiting three months reduces the emotional impact of an Authentic Revolution event, but it changes little about the power a person in a leadership role has to help someone get a job at an EA org.
Well, you're right that signaling intelligence, creativity, wisdom, and moral virtues is sexually and romantically attractive.
Signaling intelligence, creativity, wisdom, and moral virtues is not the same as signaling social power by leading events.
To the extent that people have power through their roles, that's not directly about signaling intelligence, creativity, wisdom, and moral virtues.
Next, there are four other occasions where something a bit like this has happened. How many of these happened after the main events described here? I guess 2 or 3. So even after upsetting someone like this, the pattern continued. This does make me question Owen's judgment.
To me, Owen's post reads like he didn't notice at the time that he upset her. Owen writes: "She was in a structural position where it was (I now believe) unreasonable to expect honesty about her experience".
It's unclear how long it took for Owen to realize how uncomfortable he made her.
The great thing about monetary prices is that there are market mechanisms that keep the numbers honest.
If you want to measure your TEMS value, you don't have information about a lot of the factors that matter.
By forcing people to collect those values, you force them to spend a lot of work accounting for those values and trying to get the accounting to look the way they want it to.
To raise $4.1 trillion in total taxes, the bureaucratic work cost around $313 billion. If you force people to report TEMS values, all of those terms are likely to be similarly complex. The question of how the numbers are determined is also very complex, so you will need a lot of lobbyists fighting over the values, and lawyers to litigate cases where people cheated on their numbers.
If you hire a plumber, the plumber has to know the TEMS values for all of his equipment and the rules for how much of that applies to the job you hire him for.
When thinking about the plastic bottle, we care not only about TEMS but also about how many of those plastic bottles will end up in the ocean and what effects they have there.
We care about the health effects of the substances in plastic, both those that are already scientifically known and the health impacts we haven't yet researched.
In Ohio, a train derailment that might very well have been part of the supply chain for producing water bottles caused a lot of problems.
If you focus on TEMS you are going to ignore such effects. Ideally, there are taxes that price all the externalities into the price of the plastic bottle. It's unclear to me why it would be good for actors to focus more on TEMS values.
You seem to assume that there's a linear relationship between the intervention and the effect. This might be the case for cash transfers but it's not the case for many other interventions.
If you give someone half a bednet, they are not 50% as protected.
When it comes to medical treatments, it might be that certain side effects only appear at a given dose, and as a result you have to run your clinical trial with the dose that you actually want to put into the pill you sell.
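As a minimal sketch of why this matters, here is a toy comparison between a linear extrapolation and a threshold-like dose-response. The logistic shape and all the parameters are my own assumptions for illustration, not data about any real intervention:

```python
import math

def linear_effect(dose, full_dose=1.0, full_effect=1.0):
    # Naive assumption: effect scales proportionally with dose.
    return full_effect * dose / full_dose

def threshold_effect(dose, full_dose=1.0, full_effect=1.0,
                     steepness=10.0, midpoint=0.7):
    # Hypothetical logistic dose-response: little effect well below the
    # full dose, most of the effect only near the full dose.
    return full_effect / (1 + math.exp(-steepness * (dose / full_dose - midpoint)))

for dose in (0.25, 0.5, 1.0):
    print(f"dose={dose:.2f}  linear={linear_effect(dose):.2f}  "
          f"threshold={threshold_effect(dose):.2f}")
```

Under these assumed numbers, the linear model says half the dose gives half the effect, while the threshold model gives only a small fraction of it. That is the kind of nonlinearity that makes it misleading to study a scaled-down version of an intervention.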
I was newly open to polyamory, and newly exposed to circling and saw something powerful and good about speaking truths even when they were uncomfortable.
From what you describe, it sounds to me like you didn't really express truths when they were uncomfortable.
The truth was that you felt shame. It's easier to be edgy and say "I have to masturbate before I see you" than to say "I feel ashamed of the attraction I have for you. I think I should masturbate so that I don't get aroused by your presence when I see you." Saying "I feel ashamed of the attraction I have for you" would be showing vulnerability. It probably would have made sense to share that feeling of shame about your attraction to her even earlier, as by this point in time even the full version would have been too much.
The problem is that you neither followed the societal standards nor the standards of being radically honest about your experience.
I think we have good reason to believe the article is broadly right, even if some of the specific anecdotes don't do a good job of proving this.
If someone invests a lot of effort into searching for good evidence and comes up empty, that's a signal about how much good evidence is available.
But it's just hard to present evidence that conclusively proves
That leaves the question of why it's hard. In plenty of communities, it's easy to find a lot of women who were sexually touched without their consent.
The fact that the article suggests this is very hard to find in the EA community suggests that something is going right.
"#MeToo urged society to 'believe women;' EAs tend to be a bit more skeptical." (also seems true)
This makes it sound like society on average updated toward 'believe women' due to #MeToo and EAs didn't. In reality, most of society didn't update. I would expect that on average EAs might even lean more toward 'believe women' than the average person.
partly because there are many polyamorous people in the EA/rationalist communities; this creates an environment in which sexual misconduct may be addressed suboptimally." This strikes me as totally plausible.
One thing that distinguishes polyamorous people is that they are more willing to talk about the sexual conduct of other people than most other groups. That results in some people feeling bad because they hear about sexual conduct, but it also helps to police bad conduct.
What is the total existential risk stemming from pandemics this century? This is a key number in my guesstimate model and I feel like my estimate is an area where I could make quick improvements.
This sounds like a number that could be well-sourced via Metaculus.
If we had better PPE, hospitals would likely also use it outside of pandemics.
If we have easier-to-use PPE, on the margin more researchers doing dangerous research on pathogens are going to wear PPE.
Both of those can help with pandemic prevention and are thus valuable for biorisk, but they aren't in the model.
“ethically utilitarian and politically centrist; an atheist, but culturally protestant. He studied analytic philosophy, mathematics, computer science, or economics at an elite university in the US or UK. He is neurodivergent.”.
safe distance from his middle-class existence
People at elite universities usually don't have a middle-class existence. Being at an elite university is a sign of being upper class.
I was active on LessWrong at that time and am mostly going by my memory, and memory of something that happened eight years ago isn't perfect.
https://yudkowsky.tumblr.com/post/81447230971/my-april-fools-day-confession was, to my memory, also posted to LessWrong, and the LessWrong version of that post has been deleted.
Doing a Google search on LessWrong for that timeframe doesn't bring up any mention of Dath Ilan.
Is your memory that Dath Ilan was just never talked about on LessWrong when Eliezer wrote that post?
I think that's the post. As far as my memory goes, the criticism led to Eliezer deleting it from LessWrong.
The bigger discussion from maybe 7 years ago that Habryka refers to was, as far as my memory goes, his April 1st post in 2014 about Dath Ilan. The resulting discussion was critical enough of EY that from that point on most of EY's writing was published on Facebook/Twitter and not on LessWrong anymore. On his Facebook feed he can simply ban people he finds annoying, but on LessWrong he couldn't.
- Why the rationalist community seems to treat race/IQ as an area where one should defer to "the scientific consensus" but is quick to question the scientific community and attribute biases to it on a range of other topics like ivermectin/COVID generally, AI safety, etc.
With ivermectin, there was a time when the best meta-analyses were pro-ivermectin but the scientific establishment was against ivermectin. Trusting meta-analyses published in reputable peer-reviewed journals is poorly described as "not deferring to the scientific consensus". Scott also wrote a deep dive on ivermectin and the evidence for it in the scientific literature.
You might ask yourself, "Why doesn't Scott Alexander write a deep dive on the literature of IQ and race?" Why don't other rationalists on LessWrong write deep dives on the literature of IQ and race and on the question of which hypotheses are supported by the literature and which aren't?
From a truth-seeking perspective, it would be nice to have such literature deep dives. From a practical point of view, writing deep dives on the literature of IQ and race and having in-depth discussions about it has a high likelihood of offending people. The effort and risks that come with it are high enough that Scott is very unlikely to write such a post.
One view I hold, though, is something like "the optimal amount of self-censorship, by which I mean not always saying things that you think are true/useful, in part because you're considering the [personal/community-level] social implications thereof, is non-zero."
I think there's broad agreement on this, and that self-censorship is one of the core reasons why rationalists are not engaging as deeply with the literature around IQ and race as we did with ivermectin or COVID.
On the other hand, there are situations where there are reasons to actually speak about an issue, and people still express their views even if they would prefer to just avoid the topic.
To the extent that writing on the AI forum matters to EA decision-making, it has high stakes. Opinions that would actually allow the EA community to course-correct have stakes that are worth millions of dollars.
Banning what looks like a throwaway account for one month is basically a choice to forbid the person from engaging publicly with the criticism their comment got, while doing little else.
The way this post is already being downvoted clearly signals that the community doesn't like it; moderator action to send that signal isn't really needed.
There's already a large amount of democratized funding. It's gathered via taxes and spent by bodies that are backed by democratic processes.
In EA, there's a belief that the dollars spent by EA orgs are spent more efficiently than those spent by the government. Choosing EAs as the electorate would be a choice intended to avoid regressing to the average effectiveness of dollars in our government budgets.
Compared to the budgets of our governments, and even of African governments, the budget of EA is very tiny.
Yes, when it comes to judging people for what they said it's useful to focus on what they actually said.
Generally, if you have to focus on things that a person didn't say to fuel your own outrage, that should be taken as a sign that what they actually said isn't as problematic as your first instinctive response suggests.
In the self-evaluation of their mistakes, the US intelligence community came to the conclusion that the lack of quantification of the likelihood that Saddam didn't have WMDs was one of the reasons they messed up.
This led to forecasting tournaments, which in turn led to Tetlock's superforecasting work. I think the orthodox view in EA is that Tetlock's work is valuable and we should apply its insights.
You generally read books to understand a thesis in more detail. If there were a few examples of notable organizations that used democratic decision-making to great effect and someone wanted to learn from them, reading a book that gives more detail would be a great idea. Reading a book just to see whether a thesis deserves more attention, on the other hand, makes less sense.
This would require either membership fees or recording attendance at EA events, so there would be a lot of complexity in making this work.
Given that EA Global already has an application process that does some filtering, you could likely use the attendance lists.
Believing that democracy is a good way to run a country is a different view from believing that it's an effective way to run an NGO. The idea that NGOs whose main funding comes from donors rather than membership dues should be run democratically seems like a fringe political idea, one found in certain left-wing circles.
When it comes to extreme views, it's worth noting that what's extreme depends a lot on the context.
A view like "homosexuality should be criminalized" is extreme in Silicon Valley but not in Uganda, where it's a mainstream political opinion. In my time as a forum moderator, I had to deal with a user from Uganda voicing those views, and in cases like that you have to make a choice about how inclusive you want to be of people expressing very different political ideologies.
In many cases where the political views of people in Ghana or Uganda substantially differ from those common in the US, they are going to be perceived as highly extreme.
The idea that you can be accepting of the political ideologies of a place like Ghana, where the political discussion is about "Yes, we have already forbidden homosexuality, but the punishment seems too low to discourage that behavior" vs. "The current laws against homosexuality are enough", while at the same time shunning highly extreme views, seems to me very unrealistic.
You might find people who are from Ghana and who have adopted woke values, but those aren't giving you deep diversity in political viewpoints.
For all the talk about decolonization, Silicon Valley liberals always seem very eager to deny people from Ghana or Uganda the ability to express mainstream political opinions from their home countries.
"Diversity" is always a very interesting word, and it's interesting that the call for more of it comes after two of the three scandals mentioned in the opening post were about EA being diverse along an axis that many EAs disagree with.
Similarly, it's very strange that a post that talks a lot about the problems of EAs caring too much about whether other people are value-aligned afterward recommends more scrutiny of whether funders are aligned with certain ethical values.
This gives me the impression that the main thesis of the post is that EA values differ from woke values and should be changed to be more aligned with woke values.
The post doesn't seem to have any self-awareness about pushing along different axes. If your goal is to convince people to think differently about diversity or about the importance of value alignment, it would make sense to make arguments that are more self-aware.
When Stuart Russell argues that AI could pose an existential threat to humanity, he is held up as someone worth listening to –”He wrote the book on AI, you know!” However, if someone of comparable standing in Climatology or Earth-Systems Science, e.g. Tim Lenton or Johan Rockström, says the same for their field, they are ignored, or even pilloried.
To me, this looks like it mistakes why people hold the views that they do and strawmans people.
Saying "Stuart Russell" is worth listening to because of his book boils down to "If you actually want to understand why AI is an existential threat to humanity, read his book it's likely to convince you". On the other hand, Tim Lenton or Johan Rockström have not written books that make arguments for the importance of climate change that seem convincing to many EAs.
Quantification
When it comes to the topic of quantification, this post seems to criticize EAs both for quantifying everything and for not quantifying the value of paying community organizers relatively high salaries.
EAs seem to me very willing to do a lot of things, especially in the field of longtermism, without quantification being central. Generally, EA thought leaders don't tend to hold naive positions on topics like diversity or quantification but complex ones. If you want to change views, you need to be clearer about cruxes and about how you think about the underlying tradeoffs.
There's plenty of real estate investment that does not depend on the real estate being rented out. That's why laws get passed that require some real estate to be rented out.
One of the attributes of real estate is that it's a lot less liquid than stocks and economic theory suggests that market participants should pay a premium for liquidity.
Finally, it's wrong to say that anything with lower expected returns than stocks is not an investment. People invest money in treasury bonds with lower expected returns all the time.
Investing money into the stock market and investing money into real estate are similar. In both cases, the value of your capital can rise or fall over time.
Okay, it's good to see that it's finally there, as it wasn't the last time I publicly complained about it. At the time, it seemed like apologizing deep in a comment thread was the only action CEA felt was warranted.
When talking about Sam Bankman-Fried, I have read the claim a bunch of times that EA failed because it didn't put sufficient effort into checking his background. It might be worthwhile to fund a new organization, ideally as independent as possible from other orgs, whose sole reason for existence is to look into the powerful people in EA and criticize them when warranted.
While it might be great if CEA were able to fill that role, they happen to be an org that in the past didn't honor a confidentiality promise when people came to them with criticism of powerful people in EA, and they don't think this was enough of a problem to list it on their mistakes page.
The potential harms of these technologies come from their unbounded scope
Previous technologies also had quite unbounded scopes. This does not seem to me different from the technology of film; the example of film in the post you were replying to also has an unbounded scope.
This can therefore inform the kinds of models / training techniques that are more dangerous: e.g. that for which the scope is the widest
Technologies with a broad scope are more likely to be dangerous, but they are also more likely to be valuable.
If you look at the scope of Photoshop, it can already be used by people to make deepfake porn. It can also be used by people to print fake money.
Forbidding the deployment of broad-scope technologies would likely have prevented most of the progress of the last century and would put a huge damper on future progress as well.
When it comes to gene editing, our society decided to regulate its application but is quite open to the idea that developing the underlying technology is valuable.
The analogy to how we treat gene editing would be to pass laws regulating image creation. The fact that deepfake porn is currently not heavily criminalized is a legislative choice. We could pass laws to regulate it like other sexual assaults.
Instead of regulating at the point of technology creation, you could focus on regulating technology use. To the extent that we are doing a bad job at that currently, you could build a think tank that lobbies for laws addressing problems like deepfake porn creation and that constantly analyzes new problems and lobbies for them to be regulated.
When it comes to the issue of deepfake porn, it's also worth looking at why it's not criminalized. When Googling, I found https://inforrm.org/2022/07/19/deepfake-porn-and-the-law-commissions-final-report-on-intimate-image-abuse-some-initial-thoughts-colette-allen/ which makes the case that it should be regulated but cites a government report suggesting that creating deepfake porn should be legal while sharing it shouldn't be. I would support making both illegal, but I think approaching the problem from the usage point of view seems like the right strategy.
When these factors are combined with the high population growth predicted in hotter countries, one report finds that 3.0 degrees celsius of averaged global warming translates to an average temperature increase as felt per individual of 7.5 degrees celsius.1 The same report estimates that 30% of the world’s predicted population will then be living in areas with an average temperature equal to or above the hottest parts of the Sahara desert by 2070.
It's very unclear to me how someone can, on the one hand, expect this kind of damage to be caused by climate change and, on the other hand, expect that people in the future won't do geoengineering that leads to different temperatures, or expect that this isn't supposed to be an important element of any model that predicts the future of global warming.
farmed animals likely have it worse than animals used in research
Why do you believe that farmed animals have it worse?
Farmed animals usually get killed in a way that's designed to be quick and minimize suffering. I would expect that research animals that die from being infected with illnesses or from toxicity tests generally die more painful deaths.
Just because someone tried products for free and then posted about them doesn't mean that they haven't been paid to post about them.
When I say that I know the German YouTuber, I mean that I privately talked with him about how that industry works.
The people who make the most money in that industry do it through paid product placement.
Andrea Salinas got 36% of the vote while Carrick Flynn got 18%. I think it's pretty clear that Flynn would have gotten more votes if he hadn't been perceived by the press as being funded by ill-intentioned corporate money.
Whether that would have been enough to double his vote count is unclear, but I don't think the available data suggest it's outside the realm of what would have been possible.
Which Squad-esque members did POF support?
I agree that Protect Our Future should be a lot more explicit about its agenda. While the Valerie Foushee campaign was successful, the Carrick Flynn campaign failed, and likely failed for reasons like distrust of PAC money.
It's unclear to me why Protect Our Future made the strategic decision to be this opaque. Given the amount of money they spend, it was likely to get some public attention, and the lack of transparency made it look a bit shady.
Clear public-facing criteria would likely be helpful. They would make it clear to the media how Protect Our Future chooses its candidates, and make it clear to other politicians what they would need to do to get its support. Given that politicians spend a lot of time on fundraising, making it clearer to them what they would need to do to get support from Protect Our Future would likely be good.
Having a blog that publishes posts about the candidates they decided to support, and about legislative movements they care about, would likely be a good move while costing very little compared to the amount of money Protect Our Future spends.
I don’t think people who are anti-soy are racist – but convincing a swath of Americans that being anti-soy is culturally insensitive could be one way to reduce stigma.
Such a campaign might also significantly increase the stigma. It could turn soy into a culture war topic.
If you tell a bodybuilder that he should be less anti-soy because it's culturally insensitive, I would expect that to reinforce anti-soy attitudes for most bodybuilders.
Out-of-house R&D. Instead of hiring a chef, we could inspire food bloggers, restaurant chefs, or CPG brands to develop their own recipes. If we mailed ten $100 tofu boxes to ten different food bloggers, I'd guess that at least one or two would try out, like them, and create a video or two on their channel.
I know one German YouTuber who has a cooking channel. He makes most of his money via paid product placement. I would expect the same to be true for popular US food bloggers as well.
My model would be that those food bloggers generally don't promote products you send them for free just because they tried them and liked them.
If you manage to be one of the only online shops that sell those new tofus, I would expect that you could negotiate affiliate deals with many of the popular vegan food influencers.
Maybe you could also find a food influencer who already sells their own products and pitch them on selling the tofus.
It's relatively easy for Western governments to gather the quantity of blood that's needed. Increasing the rewards for blood donations straightforwardly increases the amount of blood that's given.
Ensuring the quality of blood donations is harder. If you paid out strong financial rewards, you would get a lot of homeless people donating blood, and their blood is more likely to be contaminated with viruses than that of people who donate blood for altruistic rather than financial reasons.
As far as I remember, the accepted rate of HIV infection is somewhere between 1 in 1 million and 1 in 10 million blood transfusions at the moment. A lot of minor viruses like adenoviruses, for which we don't test, could also be passed on via blood transfusions without us understanding well how often that currently happens.
Policy-wise, it seems that the existing amber alert motivates enough people to donate. It would probably make sense for an amber alert to automatically raise the financial rewards for donating blood.
If you are criticizing an EA organization because they did something wrong, a journalist who writes the next "EA is bad" article might quote you. They might add something to the critique, but it's likely not what you want.
Because people fear saying things that are bad for EA's PR, they engage in less public criticism of the actions of EA organizations.
If you are talking about a topic that touches partisan politics but want to discuss it from an EA perspective, you don't want your post to turn into a magnet for partisan political discussion. If there were a lot of partisan political discussion, that would be bad for the EA forum.
Criminal Justice Reform, Immigration Policy, Land Use Reform, and Macroeconomic Stabilization Policy are currently cause areas for OpenPhil. I see relatively little discussion on the EA forum about those cause areas. I would expect that one of the major reasons for this is that people fear those discussions would be too partisan.
I think it would be good if there were more EA exchange over those cause areas, given that EA money is flowing into them. At the same time, I believe it's good to have those discussions in a way that doesn't make it easy for new users who browse the EA forum and who like partisan political discussions to just jump in.
If you want to protect yourself personally, then writing anonymously is a way to go. If you care about protecting EA's PR, it helps little.
But I really hope the vast majority of posts don't require any kind of "protective" measures.
Practically, that results in people not publicly sharing certain information. I personally would like it if most of what people are willing to say privately at an event like an EA meetup, they would also be willing to say publicly on the EA forum.
Why is it not as inclusive as such a platform could be?
Both exclusivity and inclusivity have advantages. In recent times, there have been plenty of complaints that the quality of the EA forum has gone down.
When writing it's important to decide who you want to reach. The first decision you should make when writing a post is whether you would want to reach as many people as possible or whether you want to reach a more narrow audience.
When writing publicly about topics that can easily be taken out of context by the media, jargon can be protective. If you write about wild animal suffering, maybe you don't want everyone to read your post, and that's okay.
Dominic Cummings wrote about how it's hard to get UK politicians to say the things that voters want to hear according to polling data and thus hard to get them to maximize their chances of winning.
Getting them to give up part of their power over law-making would be much harder.
I'm an American. I don't understand UK politics. All I know is, there was once a PM named Liz Truss. Liz did something the markets didn't like. Now PM Liz is no more.
It's not only that the markets didn't like it. Many people don't like taxes on the rich getting cut.
Collective actions: persuasion, policy, and energy systems. I am omitting some more specialized opportunities, such as regarding cement and refrigerants, because they’re arguably less relevant for a general audience and because I just don’t know much about them, my apologies—but see the links if you’re interested.
Why is cement less relevant for a general audience than policy on how the energy supply works? From an EA perspective, the topic produces significant emissions and gets little attention from a general audience. That suggests neglectedness, and thus that it's more important to talk about it than about solar power.
The catastrophic harms are those studied by people interested in climate change as a GCR/Xrisk. The research is limited (1, 2, 3, 4, 5, 6, 7), but for now, the basic picture seems to be of some chance of climate change contributing to global/existential catastrophe. The size of the effect is a point of debate. My sense is that it’s being underestimated, but it’s difficult to pin down.
That paragraph tells me nothing about what order of magnitude of chance we are talking about. If you want to draw any conclusions, it's important to talk about the likelihood, or at least its ballpark.
Being too vague to be wrong is bad.
The software on which this forum runs is created by Lightcone Infrastructure. It's possible to convince them to add new features, but I would expect that requires more than saying "it would be cool if".