Posts

Metaculus Predicts Weak AGI in 2 Years and AGI in 10 2023-03-24T19:43:18.275Z
Lessons learned and review of the AI Safety Nudge Competition 2023-01-17T17:13:23.163Z
How should EA navigate the divide between those who prioritise epistemics vs. social capital? 2023-01-16T06:31:57.194Z
Should the forum be structured such that the drama of the day doesn't occur on the front page? 2023-01-13T11:58:43.182Z
Infinite Ethics - Sketch Part 1: Explorations with Surreals 2023-01-09T00:44:12.186Z
Have your timelines changed as a result of ChatGPT? 2022-12-05T15:03:18.466Z
FTX Open Thread 2022-11-18T01:00:27.959Z
Winners of the AI Safety Nudge Competition 2022-11-15T01:06:27.618Z
AI Safety Microgrant Round 2022-11-14T04:25:17.266Z
Idea: Provide Keen EAGx'ers with Follow-Up Programming 2022-10-09T15:17:53.336Z
Announcing the AI Safety Nudge Competition to Help Beat Procrastination 2022-10-01T01:49:22.976Z
What I'm doing 2022-07-19T11:31:09.302Z
AI Safety Melbourne - Launch 2022-06-09T05:01:24.288Z
Is the time crunch for AI Safety Movement Building now? 2022-06-08T12:19:33.146Z
6 Year Decrease of Metaculus AGI Prediction 2022-04-12T05:36:14.143Z
Sydney AI Safety Fellowship Review 2022-03-28T02:39:05.445Z
Community Building: Micro vs. Macro 2022-03-21T07:02:33.103Z
Should EA be explicitly long-termist or uncommitted? 2022-01-11T08:56:39.015Z
Sydney AI Safety Fellowship 2021-12-02T07:35:00.188Z
List of AI safety courses and resources 2021-09-06T14:26:42.397Z
The Sense-Making Web 2021-01-04T23:21:57.226Z
Effective Altruism and Rationalist Philosophy Discussion Group 2020-09-16T02:46:19.168Z
Mike Huemer on The Case for Tyranny 2020-07-16T09:57:13.701Z
Is some kind of minimally-invasive mass surveillance required for catastrophic risk prevention? 2020-07-01T23:32:22.016Z
Making Impact Purchases Viable 2020-04-17T23:01:53.273Z
The World According to Dominic Cummings 2020-04-14T23:52:37.334Z
The Hammer and the Dance 2020-03-20T19:45:45.706Z
Inward vs. Outward Focused Altruism 2020-03-04T02:05:01.848Z
EA should wargame Coronavirus 2020-02-12T04:32:02.608Z
Why were people skeptical about RAISE? 2019-09-04T08:26:52.654Z
casebash's Shortform 2019-08-21T11:17:32.878Z
Rationality, EA and being a movement 2019-06-22T05:22:42.623Z
Most important unfulfilled role in the EA ecosystem? 2019-04-05T11:37:00.294Z
A List of Things For People To Do 2019-03-08T11:34:43.164Z
What has Effective Altruism actually done? 2019-01-14T14:07:50.062Z
If You’re Young, Don’t Give To Charity 2018-12-24T11:55:42.798Z
Rationality as an EA Cause Area 2018-11-13T14:48:25.011Z
Three levels of cause prioritisation 2018-05-28T07:26:32.333Z
Viewing Effective Altruism as a System 2017-12-28T10:09:43.004Z
EA should beware concessions 2017-06-14T01:58:47.207Z
Reasons for EA Meetups to Exist 2016-07-20T06:22:39.675Z
Population ethics: In favour of total utilitarianism over average 2015-12-22T22:34:53.087Z

Comments

Comment by Chris Leong (casebash) on FLI open letter: Pause giant AI experiments · 2023-03-30T17:07:03.311Z · EA · GW

Some people are worried that this will come off as "crying wolf".

Comment by Chris Leong (casebash) on FLI open letter: Pause giant AI experiments · 2023-03-29T09:07:29.542Z · EA · GW

Some people have criticised the timing. I think there's some validity to this, but the trigger has been pulled and cannot be unpulled. You might say that we could try to write another similar letter a bit further down the track, but it's hard to get people to do the same thing twice and even harder to get people to pay attention.

So I guess we really have the choice of whether to get behind this or not. I think we should, as I see this letter as really opening up the Overton window. It would be a mistake to wait for a theoretical, perfectly timed letter to sign rather than signing what we have in front of us.

Comment by Chris Leong (casebash) on Holden Karnofsky’s recent comments on FTX · 2023-03-24T14:10:29.017Z · EA · GW

Thanks for gathering these comments!

Comment by Chris Leong (casebash) on Where I'm at with AI risk: convinced of danger but not (yet) of doom · 2023-03-22T10:13:41.183Z · EA · GW

“But I’m not sure how the AI would come to understand ‘smart’ human goals without acquiring those goals”

The easiest way to see the flaw in this reasoning is to note that by inserting a negative sign in the objective function, we can make the AI aim for the exact opposite of what it would otherwise do. In other words, having x in the training data doesn't show that the AI will seek x rather than avoid x. It can also ignore x: imagine an AI with lots of colour data trying to identify the shape of dark objects on a white background. If the objective function only rewards correct guesses and punishes incorrect ones, there's no incentive for the network to learn to represent colour as opposed to darkness, assuming colour is uncorrelated with shape.
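(To make the sign-flip point concrete, here's a minimal toy sketch of my own; the target value, learning rate and step count are all made up for illustration. It shows that negating the objective makes gradient descent drive a parameter away from the very same value that appears in the "data".)

```python
# Toy sketch: identical "training signal", but the objective is negated in the second run.
target = 2.0           # the feature "x" that is present in the data either way
learning_rate = 0.1

for sign in (+1.0, -1.0):
    w = 0.0
    for _ in range(50):
        # objective: sign * (w - target)^2, whose gradient w.r.t. w is:
        grad = sign * 2.0 * (w - target)
        w -= learning_rate * grad   # plain gradient descent on the (possibly negated) objective
    print(sign, round(w, 2))
# sign = +1.0 -> w ends up near 2.0 (it "seeks" the target)
# sign = -1.0 -> w is driven far away from 2.0 (it "avoids" the target)
```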

Comment by Chris Leong (casebash) on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-19T05:16:31.971Z · EA · GW

Sounds like an individual attendee might have done this. I don't see this as a big deal. I don't think that we should be so concerned about possible bad PR that we kill off any sense of fun in the community. I suspect that doing so will cost us members rather than gain us members.

Comment by Chris Leong (casebash) on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-16T22:22:30.590Z · EA · GW

That doesn’t create the same pressure as a public statement which signals “this is the narrative”.

Comment by Chris Leong (casebash) on Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" · 2023-03-16T08:00:17.848Z · EA · GW

I'm guessing that the worry is that if Will said he thinks X then that might create pressure for the independent investigation to conclude X since the independent investigators are being paid by CEA and presumably want to be hired by other companies in the future.

Comment by Chris Leong (casebash) on It's not all that simple · 2023-03-13T08:33:44.751Z · EA · GW

I really appreciated this post. I think that there are some things here that are very difficult to have an honest conversation about and so I appreciated you sharing your perspective.

Comment by Chris Leong (casebash) on Japan AI Alignment Conference · 2023-03-10T22:58:15.353Z · EA · GW

I'd absolutely love to know about how this conference came about.

Comment by Chris Leong (casebash) on Nathan Young's Shortform · 2023-03-10T00:31:38.638Z · EA · GW

I don't think it makes sense to say that the group is "preoccupied with making money". I expect that there's been less focus on this in EA than in other groups, although not necessarily due to any virtue, but rather because of how lucky we have been in having access to funding.

Comment by Chris Leong (casebash) on Suggestion: A workable romantic non-escalation policy for EA community builders · 2023-03-09T15:11:15.103Z · EA · GW

I think there’s a distinction between people you meet at EA events and people you’ve already connected with outside of them. Otherwise, this could very easily become unworkable: you connect with someone outside EA, you mention that you’re interested in EA and so they come to an event, and then any dating momentum is broken because you’re not supposed to flirt with them for a while. If it happens enough, this could easily stifle someone’s dating life.

(It’s worth noting that AR events are a lot more intense than EA ones, so this policy might make more sense there.)

Comment by Chris Leong (casebash) on FTX Poll Post - What do you think about the FTX crisis, given some time? · 2023-03-09T14:56:15.545Z · EA · GW

Hmm… part of me worries that this might be a bit too contentless/applause-light to provide useful information?

Comment by Chris Leong (casebash) on Against EA-Community-Received-Wisdom on Practical Sociological Questions · 2023-03-09T14:40:36.586Z · EA · GW

I wasn’t really a fan of framing this as a “rot”. I worry that this tilts people towards engaging with this topic more emotionally rather than rationally.

That said, I thought you made some good points. Regarding peer review, I expect that one of the major cruxes here is timelines and whether engaging more with the peer-review system would slow down work too much. Regarding hiring value-aligned people, I thought that you didn’t engage much with the reasons why people tend to think this is important (the ability to set people ill-defined tasks which you can’t easily evaluate, plus worries about mission drift over longer periods of time).

Comment by Chris Leong (casebash) on Call to demand answers from Anthropic about joining the AI race · 2023-03-02T21:35:14.234Z · EA · GW

I downvoted this post because it felt rambling and not very coherent (no offence). You can fix it though :-).

I would also be in favour of having more information on their plan.

The EA Corner Discord might be a better location to post things like this that are very raw and unfiltered. I often post things to a more casual location first, then post an improved version either here or on Less Wrong. For example, I often use Facebook or Twitter for this purpose.

Comment by Chris Leong (casebash) on Conference on EA hubs and offices, expression of interest · 2023-02-28T19:04:02.429Z · EA · GW

What was the plan for SOL?

Comment by Chris Leong (casebash) on EA Global in 2022 and plans for 2023 · 2023-02-24T20:43:35.951Z · EA · GW

Swapcard seems to have significantly improved since the last EAG. You can now view your one on ones and events you’re attending all in the one place. I suspect that if we keep submitting feedback, they’ll eventually fix the flaws.

Comment by Chris Leong (casebash) on EA is too New & Important to Schism · 2023-02-23T19:44:30.406Z · EA · GW

I will admit to having strong downvoted a number of critical posts, while having upvoted others, in order to create an incentive gradient to produce better criticism.

If we start getting less criticism, then I’ll default towards upvoting criticism more.

Comment by Chris Leong (casebash) on EA is too New & Important to Schism · 2023-02-23T19:35:01.776Z · EA · GW

“The way I see it the ‘woke takeover’ is really just movements growing up and learning to regulate some of their sharper edges in exchange for more social acceptance and political power.”

I think there is some truth in movements often “growing up” over time and I agree that in some circumstances people can confuse this with “woke takeover”, but I think it’s important to have a notion of some takeover/entryism as well.

In terms of the difference: to what extent did people in the movement naturally change their views vs. to what extent was it compelled?

I suppose protest can have its place in fixing a system, but at a certain hard-to-identify point, it essentially becomes blackmail.

Comment by Chris Leong (casebash) on How can we improve discussions on the Forum? · 2023-02-23T02:17:15.523Z · EA · GW

I would like to see the forum team:

a) Figure out the most important conversations that aren't happening

b) Make them happen

The two biggest things happening at the moment are:

a) EA community drama

b) Dramatic AI progress

Lots of discussions have occurred regarding a), but there may be meta-questions that we aren't asking. 

Comment by Chris Leong (casebash) on How do you feel? No discourse allowed · 2023-02-23T02:10:02.490Z · EA · GW

There are periods when I feel kind of drained from all the community drama, plus the terrible state of the AI gameboard, but at other times I feel hopeful or optimistic.

Comment by Chris Leong (casebash) on How do you feel? No discourse allowed · 2023-02-23T02:08:21.806Z · EA · GW

Thanks for posting this. It seems like a useful exercise!

Comment by Chris Leong (casebash) on Nathan Young's Shortform · 2023-02-19T03:23:37.478Z · EA · GW

Could you clarify?

Comment by Chris Leong (casebash) on EA, 30 + 14 Rapes, and My Not-So-Good Experience with EA. · 2023-02-18T23:33:22.104Z · EA · GW

If CEA hires someone for this activity, it should be someone they have absolute confidence in, given its sensitive nature. I think it’s reasonable for them to not hire someone even if they have 80% confidence in them. So it’s possible that you’re doing a good job and that it’s still reasonable not to hire you, which would be painful, but unfortunately that’s how reality is sometimes. Anyway, regardless of what they decide, I hope things work out for you.

Comment by Chris Leong (casebash) on We are incredibly homogenous · 2023-02-17T06:24:11.490Z · EA · GW

I upvoted this post due to this comment. I don’t see a good reason for this to have negative karma either.

Comment by Chris Leong (casebash) on “Community” posts have their own section, subforums are closing, and more (Forum update February 2023) · 2023-02-13T07:23:37.095Z · EA · GW

After reading this, I was still confused about what Core Topics are, how they differ from tags, and what problem they are supposed to address. Is it that a post can be included in a core topic by having one of a number of relevant tags?

Comment by Chris Leong (casebash) on Solidarity for those Rejected from EA Global · 2023-02-11T02:45:37.346Z · EA · GW

I was accepted this year, but I’ve been rejected for a past conference, so I certainly understand the sting. Question: Would it make sense to organise some kind of online event to lessen the sting for those who were rejected? Obviously, this wouldn’t be comparable to EAG, but it would be something.

Comment by Chris Leong (casebash) on The forum should test agree/disagreevotes only counting for 1 and removing strong votes (not up/downvotes!) · 2023-02-11T02:20:23.124Z · EA · GW

Oh, sorry, I missed the “agree/disagree votes” part, my bad. That seems more reasonable.

Comment by Chris Leong (casebash) on The forum should test agree/disagreevotes only counting for 1 and removing strong votes (not up/downvotes!) · 2023-02-11T01:21:10.291Z · EA · GW

Withdrawn: I misread the question.

I'm generally in favour of tests, but I'm not sold on this test because:

• The rest of the internet looks like this, so it seems like we should be able to predict the results without having to run this test ourselves.
• If we did test it, I wouldn't want to test it forum-wide, but rather in a few specific conversations, and this post doesn't suggest any ways in which we could test it without turning it on for the whole site.
• I would like to see a specific issue that this would be aimed to address. If your worry is that dissenting posts don't get upvoted, well I can see why someone might have been worried about this before, but this doesn't seem to have been a problem over the last few weeks. If anything, I'm becoming worried that the easiest way to gain karma is currently to write some criticism, even if it doesn't really bother to engage with any of the counterarguments.

Comment by Chris Leong (casebash) on EA Community Builders’ Commitment to Anti-Racism & Anti-Sexism · 2023-02-10T12:01:57.107Z · EA · GW

I wish I lived in a world where I could support this. I am definitely worried about how recent events may have harmed minorities and women and made it harder for them to trust the movement.

However, coming out of a few years where the world essentially went crazy with canceling people, sometimes for the most absurd reasons, I’m naturally wary of anything in the social justice vein, even whilst I respect the people proposing/signing it and believe that most of them are acting in good faith and attempting to address real harms.

Before the world went crazy for a few years, I would have happily signed such a statement and encouraged people to sign it as well, since I support my particular understanding of those words. However, I now find myself agreeing with Duncan that there are real costs to signing a statement if doing so allows other people to use your signature as support for an interpretation that doesn’t match your beliefs. And I think it’s pretty clear to anyone who has been following online discourse that terms can be stretched surprisingly far.

This comment is more political than I’d like it to be; however, I think it is justified given that the standard position within social justice is that political neutrality is fake and an attempt to impose values whilst pretending that you aren’t.

Maybe it’s unfair to attribute possible beliefs to a group of people who haven’t made that claim, but this has to be balanced against reasoning transparency, which feels particularly important to me when I suspect that this is many people’s true rejection. And maybe it makes sense in the current environment, when people are leaning more towards sharing.

I wish we lived in a different world, but in this world there are certain nice things that we don’t get to have. That all said, there have definitely been times when I’ve failed to properly account for the needs or perspectives of people with other backgrounds, and I certainly intend to become as good at navigating these situations as I can, because I really don’t want to offend or be unfair to anyone.

Comment by Chris Leong (casebash) on In (mild) defence of the social/professional overlap in EA · 2023-02-09T22:23:33.281Z · EA · GW

I don't know if the framing of it "creating barriers" completely captures the dynamic. I would suggest that there is already a barrier (opportunities to exchange ideas and network with like-minded people), and the main effect of starting a group house is to lower that barrier for the people who end up joining. There is perhaps a secondary effect where some of these people become less accessible than they would otherwise be, since they have less need to connect with people outside the house, but this seems secondary. I guess I see conflating the two as increasing the chance that people talk past each other.

Comment by casebash on [deleted post] 2023-02-09T16:10:19.621Z

"The whole point of having "neutral" EA entities like CEA and 80000 is to avoid this line of thinking" - Hmm... describing this as the "whole point" seems a bit strong?

I agree that sometimes there's value in adopting a stance of neutrality. I'm still not entirely sure why I feel this way, but I have an intuition that CEA should lean more toward neutrality than 80,000 Hours. Perhaps it's because I see CEA as more focused on community building and as taking responsibility for the community overall. Even then, I wouldn't insist that CEA be purely neutral, but rather that it strike a balance between its own views and those of the wider EA community.

One area where I agree though is that organisations should be transparent in terms of what they represent.

Comment by Chris Leong (casebash) on What actually is “value-alignment”? · 2023-02-09T15:40:49.981Z · EA · GW

I would be tempted to add something about being truth-seeking as well. So, is someone interested in updating their beliefs about what is more effective, or is this the last thing that they would want?

Comment by casebash on [deleted post] 2023-02-09T15:22:24.838Z

Yes, there's a chance it could be discouraging and if there are ways to improve it without sacrificing accuracy, I'd like to see that happen.

On the other hand, if you have strong reason to believe that some cause areas have orders of magnitude more impact than others, then you will often achieve more impact by slightly increasing the number of people working on these priority areas than by greatly increasing the number of people working on less impactful areas. (For instance, if one area produces 100 units of impact per additional person and another produces one, adding two people to the first beats adding a hundred to the second.) In other words, you can often have more impact by accurately representing your beliefs, because it is hard for the benefits of serving a broader audience to outweigh the impact of persuading more people to focus on something important.

Comment by Chris Leong (casebash) on Smuggled assumptions in "declining epistemic quality" · 2023-02-09T04:56:41.241Z · EA · GW

Could you clarify the meaning of "Shorism" here? I assume you're referring to David Shor?

Comment by Chris Leong (casebash) on [Atlas Fellowship] Why do 100 high-schoolers need $50k each from Open Philanthropy? · 2023-02-09T04:16:13.054Z · EA · GW

I gave the OP a weak downvote, although this comment almost convinced me to make it a strong downvote. I probably wouldn't have downvoted if it would have taken the post into the negative, but I'm starting to become worried about the incentives if posts get strongly upvoted merely for being critical, regardless of their other attributes. I guess I would have preferred for the post to be honest that it's attempting an exposé rather than pretending to "just be asking questions".

Comment by Chris Leong (casebash) on Help writing a History of Effective Altruism? · 2023-02-09T02:53:36.777Z · EA · GW

This is an exciting project!

Comment by Chris Leong (casebash) on Moving community discussion to a separate tab (a test we might run) · 2023-02-07T00:38:51.358Z · EA · GW

I'm in favor of running the experiment.

I would suggest giving people a week or two's notice before implementing this change so that they can get any last community posts out. Otherwise, it might lead to frustration for people who are currently working on posts.

Comment by Chris Leong (casebash) on Project Idea: Lots of Cause-area-specific Online Unconferences · 2023-02-06T15:38:35.286Z · EA · GW

What did you think worked so well about these unconferences?

Comment by Chris Leong (casebash) on Project Idea: Lots of Cause-area-specific Online Unconferences · 2023-02-06T11:27:16.930Z · EA · GW

I would love to see this happen. Having run an unconference at an AI Safety Retreat and then another unconference in person, I believe that unconferences rate pretty highly in terms of reward per effort.

Comment by Chris Leong (casebash) on Questions about OP grant to Helena · 2023-02-02T13:22:22.728Z · EA · GW

Agreed that a hits-based approach doesn't mean throwing money at everything. On the other hand, "lack of prior expertise" seems (at least in my books) to be the second strongest critique after the alleged misrepresentation.

So, while I concede it doesn't really address the strongest argument against this grant, I don't see addressing the second strongest argument against the grant as being beside the point.

Comment by Chris Leong (casebash) on What advice would you give someone who wants to avoid doxing themselves here? · 2023-02-02T10:19:14.237Z · EA · GW

I would love to know why it was downvoted as well. I provided a strong upvote as I can't see the reason why this post should be downvoted, although I might change this if I'm persuaded there's a good reason. However, I would be extremely surprised if there were any such reason.

Comment by Chris Leong (casebash) on Questions about OP grant to Helena · 2023-02-02T09:53:35.625Z · EA · GW

I think it's valuable to write critiques of grants that you believe to have mistakes, as I'm sure some of Open Philanthropy's grants will turn out to be mistakes in retrospect and you've raised some quite reasonable concerns.

On the other hand, I was disappointed to read the following sentence "Henry drops out of school because he thinks he is exceptionally smarter and better equipped to solve 'our problems".  I guess when I read sentences like that I apply some (small) level of discounting towards the other claims made, because it sounds like a less than completely objective analysis. To be clear, I think it is valid to write a critique of whether people are biting off more than they can chew, but I still think my point stands.

I also found this quote interesting: "What personal relationships or conflicts of interest are there between the two organizations?" since it makes it sound like there are personal relationships or conflicts of interest without actually claiming this is the case. There might be such conflicts or this implication may not be intentional, but I thought it was worth noting.

Regarding this grant in particular: if you view it from the original EA highly evidence-based philanthropy end of the spectrum, then it isn't the kind of grant that would rate highly in that framework. On the other hand, if you view it from the perspective of hits-based giving (thinking about philanthropy as a VC would), then it looks like a much more reasonable investment[1]; for instance, Mark Zuckerberg famously dropped out of college to start Facebook. Similarly, most start-ups have some degree of self-aggrandizement, and I suspect that it might actually be functional in terms of pushing them toward greater ambition.

That said, if Open Philanthropy is pursuing this grant under a hits-based approach, it might be less controversial if they were to acknowledge this.

[1] Though of course, if the grant was made on the basis of details that were misrepresented (I haven't looked into those claims), then this would undercut this.

Comment by Chris Leong (casebash) on Questions about AI that bother me · 2023-02-01T05:07:08.845Z · EA · GW

I would suggest that new paradigms are most likely to establish themselves among the young because they are still in the part of their life where they are figuring out their views.

Comment by Chris Leong (casebash) on Who owns "Effective Altruism"? · 2023-02-01T03:10:25.441Z · EA · GW

Great question, I would love to have clarity on this!

Comment by Chris Leong (casebash) on What do you think the effective altruism movement sucks at? · 2023-01-31T13:31:54.296Z · EA · GW

Volunteering. Effective Altruism doesn't have as strong a culture of volunteering as other community groups. When we had access to massive amounts of funding we were able to substitute paying people for volunteering, but I think we're going to have to address this situation in the new funding environment.

Comment by Chris Leong (casebash) on Protect Our Future's Crypto Politics · 2023-01-31T01:12:29.161Z · EA · GW

I do wonder if there could be a reverse correlation: crypto-hostile politicians might not have wanted to accept donations from a fund associated so heavily with crypto.

Comment by Chris Leong (casebash) on Doing EA Better: Preamble, Summary, and Introduction · 2023-01-26T05:26:31.742Z · EA · GW

Are you planning to augment the sections so that they engage further with counter-arguments? I recognise that this would take significant effort, so it’s completely understandable if you don’t have the time, but I would love to see this happen if that’s at all possible. Even if you leave it as is, splitting up the sections will still aid discussion, so is still worthwhile.

Comment by Chris Leong (casebash) on Doing EA Better: Preamble, Summary, and Introduction · 2023-01-26T05:19:21.420Z · EA · GW

Agreed. It takes quite a bit of context to recognise the difference between deep critiques and shallow ones, whilst everyone will see their critique as a deep critique.

Comment by Chris Leong (casebash) on FLI FAQ on the rejected grant proposal controversy · 2023-01-20T23:32:47.243Z · EA · GW

Interesting, well that’s an even worse wording in terms of leaving them vulnerable to PR.