Some people are worried that this will come off as "crying wolf".
Some people have criticised the timing. I think there's some validity to this, but the trigger has been pulled and cannot be unpulled. You might say that we could try to write another similar letter a bit further down the track, but it's hard to get people to do the same thing twice and even harder to get them to pay attention.
So I guess we really have the choice to get behind this or not. I think we should get behind it, as I see this letter as really opening up the Overton window. I think it would be a mistake to wait for a theoretically perfectly timed letter to sign, as opposed to signing what we have in front of us.
Thanks for gathering these comments!
“But I’m not sure how the AI would come to understand ‘smart’ human goals without acquiring those goals”
The easiest way to see the flaw in this reasoning is to note that by inserting a negative sign in the objective function we can make the AI aim for the exact opposite of what it would otherwise do. In other words, having x in the training data doesn't show that the AI will seek x rather than avoid x. It can also ignore x: imagine an AI trained on lots of colour data that is trying to identify the shape of dark objects on a white background. In this case, if the objective function only rewards correct guesses and punishes incorrect ones, there's no incentive for the network to learn to represent colour (as opposed to darkness), assuming colour is uncorrelated with shape.
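To make the sign-flip point concrete, here's a minimal sketch (a hypothetical PyTorch toy setup of my own, not anything from the discussion I'm replying to). The data and architecture are identical in both cases; a single negation in the objective determines whether the model moves toward the targets or away from them:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # toy network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

# Stand-in training batch: the "x in the training data".
x, y = torch.randn(32, 10), torch.randn(32, 1)

pred = model(x)
loss = criterion(pred, y)     # minimise error: the model is pulled toward y
# loss = -criterion(pred, y)  # one sign flip: the same data now pushes the model away from y

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Nothing about the data dictates which of the two objectives is being optimised, which is the point: the data alone doesn't tell you whether the system will seek, avoid, or ignore what it contains.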
Sounds like an individual attendee might have done this. I don't see this as a big deal. I don't think that we should be so concerned about possible bad PR that we kill off any sense of fun in the community. I suspect that doing so will cost us members rather than gain us members.
That doesn’t create the same pressure as a public statement which signals “this is the narrative”.
I'm guessing that the worry is that if Will said he thinks X then that might create pressure for the independent investigation to conclude X since the independent investigators are being paid by CEA and presumably want to be hired by other companies in the future.
I really appreciated this post. I think that there are some things here that are very difficult to have an honest conversation about and so I appreciated you sharing your perspective.
I'd absolutely love to know about how this conference came about.
I don't think it makes sense to say that the group is "preoccupied with making money". I expect that there's been less focus on this in EA than in other groups, not necessarily due to any virtue, but rather because of how lucky we have been in having access to funding.
I think there's a distinction between people you meet at EA events and people you've already connected with outside. Otherwise, this could very easily become unworkable: you connect with someone outside EA, you mention that you're interested in EA and so they come to an event, and then any dating momentum is broken because you're not supposed to flirt with them for a while. If that happens enough, it could easily stifle someone's dating life.
(It's worth noting that AR events are a lot more intense than EA ones, so this policy might make more sense there.)
Hmm… part of me worries that this might be a bit too contentless/applause-light to provide useful information?
I wasn’t really a fan of framing this as a “rot”. I worry that this tilts people towards engaging with this topic more emotionally rather than rationally.
I thought you made some good points. However: regarding peer review, I expect that one of the major cruxes here is timelines, and whether engaging more with the peer-review system would slow down work too much. Regarding hiring value-aligned people, I thought that you didn't engage much with the reasons why people tend to think this is important (the ability to set people ill-defined tasks which you can't easily evaluate, plus worries about mission drift over longer periods of time).
I downvoted this post because it felt rambling and not very coherent (no offence). You can fix it though :-).
I would also be in favour of having more information on their plan.
The EA Corner Discord might be a better location for posts that are very raw and unfiltered. I often post things to a more casual location first, then post an improved version either here or on Less Wrong. For example, I often use Facebook or Twitter for this purpose.
What was the plan for SOL?
Swapcard seems to have significantly improved since the last EAG. You can now view your one-on-ones and the events you're attending all in one place. I suspect that if we keep submitting feedback, they'll eventually fix the flaws.
I will admit to having strong-downvoted a number of critical posts, while having upvoted others, in order to create an incentive gradient towards producing better criticism.
If we start getting less criticism, then I'll default towards upvoting criticism more.
“The way I see it the ‘woke takeover’ is really just movements growing up and learning to regulate some of their sharper edges in exchange for more social acceptance and political power.”
I think there is some truth in movements often “growing up” over time and I agree that in some circumstances people can confuse this with “woke takeover”, but I think it’s important to have a notion of some takeover/entryism as well.
In terms of the difference: to what extent did people in the movement naturally change their views vs. to what extent was it compelled?
I suppose protest can have its place in fixing a system, but at a certain hard-to-identify point, it essentially becomes blackmail.
I would like to see the forum team:
a) Figure out the most important conversations that aren't happening
b) Make them happen
The two biggest things happening at the moment are:
a) EA community drama
b) Dramatic AI progress
Lots of discussions have occurred regarding a), but there may be meta-questions that we aren't asking.
There are periods when I feel kind of drained from all the community drama, plus the terrible state of the AI gameboard, but at other times I feel hopeful or optimistic.
Thanks for posting this. It seems like a useful exercise!
Could you clarify?
If CEA hires someone for this activity, it should be someone they have absolute confidence in, given its sensitive nature. I think it's reasonable for them to not hire someone even if they have 80% confidence in them. So it's possible that you're both doing a good job and it's reasonable not to hire you, which would be painful, but unfortunately that's how reality is sometimes. Anyway, regardless of what they decide, I hope things work out for you.
I upvoted this post due to this comment. I don’t see a good reason for this to have negative karma either.
After reading this, I was still confused about what Core Topics are, how they differ from tags, and what problem they're supposed to address. Is it that a post can be included in a core topic by having one of a number of relevant tags?
I was accepted this year, but I’ve been rejected for a past conference, so I certainly understand the sting. Question: Would it make sense to organise some kind of online event to lessen the sting for those who were rejected? Obviously, this wouldn’t be comparable to EAG, but it would be something.
Oh, sorry, I missed the part about "agree/disagree votes", my bad. That seems more reasonable.
Withdrawn: I misread the question.
I'm generally in favour of tests, but I'm not sold on this test because:
• The rest of the internet looks like this, so it seems like we should be able to predict the results without running this test ourselves.
• If we did test it, I wouldn't want to test it forum-wide but in a few specific conversations, and this post doesn't suggest any ways in which we could test it without turning it on for the whole site.
• I would like to see a specific issue that this would be aimed at addressing. If your worry is that dissenting posts don't get upvoted, well, I can see why someone might have been worried about this before, but it doesn't seem to have been a problem over the last few weeks. If anything, I'm becoming worried that the easiest way to gain karma at the moment is to write criticism, even if it doesn't really bother to engage with any of the counterarguments.
I wish I lived in a world where I could support this. I am definitely worried about how recent events may have harmed minorities and women and made it harder for them to trust the movement.
However, coming out of a few years where the world essentially went crazy with canceling people, sometimes for the most absurd reasons, I’m naturally wary of anything in the social justice vein, even whilst I respect the people proposing/signing it and believe that most of them are acting in good faith and attempting to address real harms.
Before the world went crazy for a few years, I would have happily signed such a statement and encouraged others to sign it as well, since I support my particular understanding of those words. Now, though, I find myself agreeing with Duncan that there are real costs to signing a statement if doing so allows other people to use your signature as support for an interpretation that doesn't match your beliefs. And I think it's pretty clear to anyone who has been following online discourse that terms can be stretched surprisingly far.
This comment is more political than I'd like it to be. However, I think it is justified given that the standard position within social justice is that political neutrality is fake and an attempt to impose values whilst pretending that you aren't.
Maybe it's unfair to attribute possible beliefs to a group of people who haven't made that claim, but this has to be balanced against reasoning transparency, which feels particularly important to me when I suspect that this is many people's true rejection. And maybe it makes sense in the current environment, when people are leaning more towards sharing.
I wish we lived in a different world, but in this world, there are certain nice things that we don't get to have. That all said, there have definitely been times when I've failed to properly account for the needs or perspectives of people with other backgrounds, and I certainly intend to become as good at navigating these situations as I can, because I really don't want to offend or be unfair to anyone.
I don't know if the framing of it "creating barriers" completely captures the dynamic. I would suggest that there is already a barrier (to opportunities to exchange ideas and network with like-minded people), and the main effect of starting a group house is to lower that barrier for the people who end up joining. There is then perhaps a secondary effect where some of these people become less accessible than they would be otherwise, since they have less need to connect with people outside the house, but this seems secondary. And I guess I see conflating the two effects as increasing the chance that people talk past each other.
"The whole point of having "neutral" EA entities like CEA and 80000 is to avoid this line of thinking" - Hmm... describing this as the "whole point" seems a bit strong?
I agree that sometimes there's value in adopting a stance of neutrality. I'm still not entirely sure why I feel this way, but I have an intuition that CEA should lean more toward neutrality than 80,000 Hours. Perhaps it's because I see CEA as more focused on community building and as taking responsibility for the community overall. Even then, I wouldn't insist that CEA be purely neutral, but rather that it strike a balance between its own views and those of the wider EA community.
One area where I agree though is that organisations should be transparent in terms of what they represent.
I would be tempted to add something about being truth-seeking as well. So, is someone interested in updating their beliefs about what is more effective, or is this the last thing that they would want?
Yes, there's a chance it could be discouraging and if there are ways to improve it without sacrificing accuracy, I'd like to see that happen.
On the other hand, if you have strong reason to believe that some cause areas have orders of magnitude more impact than others, then you will often achieve more impact by slightly increasing the number of people working on these priority areas than by greatly increasing the number of people working on less impactful areas. In other words, you can often have more impact by accurately representing your beliefs, because it's hard for the benefits of serving a broader audience to outweigh the impact of persuading more people to focus on something important.
Could you clarify the meaning of "Shorism" here? I assume you're referring to David Shor?
I gave the OP a weak downvote, although this comment almost convinced me to make it a strong downvote. I probably wouldn't have downvoted if doing so would have taken the post into the negative, but I'm starting to become worried about the incentives if posts get strongly upvoted merely for being critical, regardless of their other attributes. I guess I would have preferred for the post to be honest that it's attempting an exposé rather than pretending to "just be asking questions".
This is an exciting project!
I'm in favor of running the experiment.
I would suggest providing people with a week or two of notice before implementing this change so that they can get any last community posts out. Otherwise, it might lead to frustration for people who are currently working on posts.
What did you think worked so well about these unconferences?
I would love to see this happen. Having run an unconference at an AI Safety Retreat and then another unconference in person, I believe that unconferences rate pretty highly in terms of reward per effort.
Agreed that a hits-based approach doesn't mean throwing money at everything. On the other hand, "lack of prior expertise" seems (at least in my books) to be the second-strongest critique after the alleged misrepresentation.
So, while I concede it doesn't really address the strongest argument against this grant, I don't see addressing the second-strongest argument against the grant as beside the point.
I would love to know why it was downvoted as well. I provided a strong upvote as I can't see the reason why this post should be downvoted, although I might change this if I'm persuaded there's a good reason. However, I would be extremely surprised if there were any such reason.
I think it's valuable to write critiques of grants that you believe to have mistakes, as I'm sure some of Open Philanthropy's grants will turn out to be mistakes in retrospect and you've raised some quite reasonable concerns.
On the other hand, I was disappointed to read the following sentence: "Henry drops out of school because he thinks he is exceptionally smarter and better equipped to solve 'our problems'". I guess when I read sentences like that, I apply some (small) level of discounting towards the other claims made, because it sounds like a less than completely objective analysis. To be clear, I think it is valid to write a critique of whether people are biting off more than they can chew, but I still think my point stands.
I also found this quote interesting: "What personal relationships or conflicts of interest are there between the two organizations?" since it makes it sound like there are personal relationships or conflicts of interest without actually claiming this is the case. There might be such conflicts or this implication may not be intentional, but I thought it was worth noting.
Regarding this grant in particular: if you view it from the original, highly evidence-based philanthropy end of EA, then it isn't the kind of grant that would rate highly. On the other hand, if you view it from the perspective of hits-based giving (thinking about philanthropy as a VC would), then it looks like a much more reasonable investment[1]: Mark Zuckerberg famously dropped out of college to start Facebook, for instance. Similarly, most start-ups have some degree of self-aggrandizement, and I suspect that it might actually be functional in terms of pushing them toward greater ambition.
That said, if Open Philanthropy is pursuing this grant under a hits-based approach, it might be less controversial if they were to acknowledge this.
[1] Though of course, if the grant was made on the basis of details that were misrepresented (I haven't looked into those claims), then this would undercut the point.
I would suggest that new paradigms are most likely to establish themselves among the young because they are still in the part of their life where they are figuring out their views.
Great question, I would love to have clarity on this!
Volunteering. Effective Altruism doesn't have as strong a culture of volunteering as other community groups. When we had access to massive amounts of funding we were able to substitute paying people for volunteering, but I think we're going to have to address this situation in the new funding environment.
I do wonder if the causality could run in reverse: crypto-hostile politicians might not have wanted to accept donations from a fund so heavily associated with crypto.
Are you planning to augment the sections so that they engage further with counter-arguments? I recognise that this would take significant effort, so it's completely understandable if you don't have the time, but I would love to see it happen if that's at all possible. Even if you leave it as is, splitting up the sections will still aid discussion, so it's still worthwhile.
Agreed. It takes quite a bit of context to recognise the difference between deep critiques and shallow ones, whilst everyone sees their own critique as a deep one.
Interesting, well that’s an even worse wording in terms of leaving them vulnerable to PR.