Posts

Jay Bailey's Shortform 2022-08-11T03:31:03.593Z
Does EA have a model for scientific impact? 2022-04-08T01:34:59.807Z
Beware Premature Introspection 2022-03-30T01:22:07.905Z

Comments

Comment by Jay Bailey on Lauren Maria's Shortform · 2022-09-27T10:26:14.701Z · EA · GW

I like the idea of trying this as an optional feature, either on a user opt-in level, or a thread opt-in level similar to how agreement voting started out. I think that would provide a lot of the value of upvotes in a comment thread and potentially remove a lot of the downsides, and I think that's definitely worth exploring - you should submit the idea! (Maybe after figuring out whether user-level or thread-level would fit better)

I do think the downvote is useful as well. While you can again get cascading effects (something gets downvoted, people are predisposed to think negatively of it, people become more likely to downvote), I think it's a good thing for people to be able to downvote things they disagree with. One major benefit of this is that newcomers to the forum can see when an idea doesn't match the EA consensus on an issue. This is a good thing, for non-groupthink reasons. Some ideas, for instance, are just really bad internet-crackpot takes, and I wouldn't want someone new to the forum to think that we agreed with them.

Other ideas are not internet-crackpot tier, but EAs generally disagree with them - I think it's helpful for people to know that too, so they can understand what EAs generally believe, whether they then agree with those views or not. (If not, it would be a good signal that better arguments for this idea would be a worthy use of time!)

That said, I think there should be a general norm of explaining a downvote if you're downvoting something which doesn't already have one attached, so people don't just get downvoted with no idea why. I think EA does better about this than most places but is not perfect.

Comment by Jay Bailey on Lauren Maria's Shortform · 2022-09-27T03:21:14.410Z · EA · GW

Regardless of deeper issues, I don't think it would be a good idea to remove votes on comments. Many popular EA posts can have dozens or hundreds of comments, and despite the risk of groupthink, I do believe it's helpful for highly-upvoted comments to rise to the top of comment threads. It might not be maximally helpful, but it seems more helpful to read the top N comments of a popular post than a subsection of its newest comments, provided those top N comments are at least mostly correlated with what they should be.

Comment by Jay Bailey on ‘Where are your revolutionaries?’ Making EA congenial to the social justice warrior. · 2022-09-26T03:33:35.561Z · EA · GW

"Unless EA changes its positioning soon, it is so obvious to me that this well-meaning platform will remain a sparring ground of ideas, of ivory towers, and not of grassroots or picket lines."

Yes, there are a lot of abstract arguments in EA, but we've also achieved significant things (see https://www.effectivealtruism.org/impact). Thus, it doesn't seem fair to imply that EA is currently merely a sparring ground of ideas and ivory towers. EA's ground-level work doesn't look like picket lines, but it's very much there. One of the unfortunate baked-in problems of the EA Forum is that the most impactful things often aren't talked about much, because there's not a lot of new information to share about them. I don't know a good solution to this, and I don't think it's anyone's fault, but it does mean that a lot of discussion on the EA Forum will be more the ivory-tower type - stuff that's largely settled, like the Against Malaria Foundation, doesn't get much debate.

I'm also curious how you reconcile certain parts of this essay.

First, you've written this:

"The world of social justice is not so easily swayed as Silicon Valley, we do not iterate, and we certainly do not ‘fail fast’. Such concessions cost lives. This doggedness is something EA sorely lacks at present: its principles are more fluid, more congenial to the power structures that cause the existential threats it rails against. Such flexibility may make us more effective collaborators, but not necessarily more effective influencers. The direction provided by political or ideological weathervanes does not hold its ground in changing winds. Instead, by appealing to the social justice warrior, EA targets become non-negotiable and our politic more steadfast and demanding. 

But EA Principles do little to endear themselves to social justice. Strict rationalism and Pascal-Wager-like calculations for doing good feel false, contrived, and a far cry from the wildfire of activism. Longtermism is abhorrent to the advocate."

I accept this as largely true. The one suggestion I'd make is that I don't think EA's principles are more fluid - it is our methods that are more fluid, and more congenial to existing power structures.

That said, for the most part, this is entirely accurate. You seem to have very accurately hit upon several major differences in the way EA and social justice operate, in a way that indicates to me that you understand both pretty well. But then, you write later:

"Firstly, it is essential that we find common ground to work from, reconciling our theory and our language with the real-world experience of advocates. We are not so different, and we need to prove that. We need to demonstrate how what can come off as fixed narratives and frameworks are completely complimentary to the goals and methods of social justice."

My question is - ARE our frameworks completely complimentary to the goals and methods of social justice? This doesn't seem obviously false to me, but it also doesn't seem obviously true either. Iteration, rationalism, remaining open to changing paths and changing our minds, and rejection of ideology are all pretty big in the EA movement. However, you then take it as a given that social justice and EA are compatible. I'd love to hear a more fleshed-out argument for why this is the case. 

Comment by Jay Bailey on Grantees: how do you structure your finances & career? · 2022-09-25T00:30:27.400Z · EA · GW

That's excellent advice! I just looked up Australia specifically (https://www.ato.gov.au/Individuals/Income-and-deductions/In-detail/Income/Scholarship-payments-and-tax) and it appears that:

For a scholarship payment to be exempt income it can't:

  • be an excluded government payment (Austudy, Youth Allowance or ABSTUDY)
  • come with a requirement for you to do work (either as an employee or contract for labour, now or in the future).

You must also meet both of the following conditions:

The key point here is the third one. So, if you're a uni student being funded to do a Masters or PhD, your grant is tax-exempt. If you're like me, and you're upskilling independently, tax does need to be paid on it.

That said, this took me almost no time and could have potentially saved the LTFF tens of thousands of dollars, so this was a very high EV thing to check.

Comment by Jay Bailey on Announcing the Future Fund's AI Worldview Prize · 2022-09-24T01:19:33.838Z · EA · GW

It's worth noting that money like this is absolutely capable of shifting people's beliefs through motivated reasoning. Specifically, I might be tempted to argue for a probability outside the Future Fund's threshold, and any research I do might be biased towards updating in that direction. Thus, my suggested strategy would be to figure out your own beliefs before looking at the contest, then check the contest to see whether you disagree with the Future Fund.

The questions are:

• “P(misalignment x-risk|AGI)”: Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI.

• AGI will be developed by January 1, 2043.

• AGI will be developed by January 1, 2100.

Each is to be answered as a percentage.

Comment by Jay Bailey on Why Wasting EA Money is Bad · 2022-09-23T10:32:33.798Z · EA · GW

$4,500 is the cost to save a life, whereas $200 is the quote for saving one year of life. Saving a life produces, IIRC, somewhere around 25-30 QALYs. So, $200/year would be correct, accounting for rounding, if GiveWell's estimates are trustworthy.
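To spell out the arithmetic (all of these inputs are rough GiveWell-style estimates, so treat the numbers as illustrative):

```python
# Rough sanity check; all inputs are approximate estimates, not exact figures.
cost_per_life = 4_500                  # dollars to save one life
for years_per_life in (22.5, 30):      # roughly how many years of life one saved life represents
    print(f"{years_per_life} years -> ~${cost_per_life / years_per_life:.0f} per year of life")
# 22.5 years -> ~$200 per year of life
# 30 years   -> ~$150 per year of life
```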

Comment by Jay Bailey on Why Wasting EA Money is Bad · 2022-09-22T14:31:49.090Z · EA · GW

"So if a $50 Uber ride saves me half an hour, my half an hour must be more valuable than a three months of someone else’s life. That’s a pretty big claim."

That line hit hard. Something about reducing it to such a small scale made it really hit home - I can actually viscerally understand why there are people who agonise over every purchase and struggle so much with guilt. I've always been able to remain emotionally distant - to donate my 10%, save lives each year, and yet somehow be okay with not donating more, even though I could. Thinking of it in terms of a single purchase and weeks or months of someone's life makes it feel so much more real all of a sudden, and my justifications of Schelling points and sustainable giving feel much more hollow.

Comment by Jay Bailey on Glo, an ethical stablecoin: model, potential impact, and roadmap · 2022-09-16T00:44:46.469Z · EA · GW

Does the Impact Fund not take a small percentage to support GiveWell's overhead? I just always assumed they did.

Comment by Jay Bailey on Glo, an ethical stablecoin: model, potential impact, and roadmap · 2022-09-15T16:26:31.470Z · EA · GW

This looks a lot more promising than the original post, so I'm very impressed at the continued evolution of this idea!

So, if I understand correctly, the current setup (or, the setup in a month or two) is roughly equivalent to the idea of - I give you money, you invest that money in a very low-risk investment, that profit goes to GiveDirectly, and if I need the money back, you give it to me. The reason it's a cryptocurrency is that there are plans to eventually allow GLO to be used as cash for various things. This is important because GLO is designed to be held in checking accounts, savings accounts, and emergency funds, not long-term investments - it doesn't compete in yield with the stock market, but that's not the intention.

Have I got that right?
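If I've got it right, the per-dollar economics would look roughly like the sketch below - note the yield figure here is purely my assumption, not Glo's:

```python
# Minimal sketch of the flow as I understand it; the 3% yield is an assumed figure.
holdings_usd = 10_000     # dollars a user parks in GLO (checking / savings / emergency fund)
tbill_yield = 0.03        # assumed annual yield on the T-bills backing the stablecoin

annual_donation = holdings_usd * tbill_yield
print(f"~${annual_donation:.0f}/year to GiveDirectly, while the holder keeps the principal")
# ~$300/year, redeemable back to USD whenever the holder wants out
```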

Some additional questions:

How quickly, and at what cost, will I be able to exchange a currency (whether USD or non-USD) for GLO, and back again?

Is there a long-term plan to extract some amount of the T-bond interest for operational expenses? Do you see yourself being donor-funded indefinitely? 

 

Comment by Jay Bailey on I’ve written a Fantasy Novel to Promote Effective Altruism · 2022-09-13T05:35:27.406Z · EA · GW

I quite liked it! I left some comments, but I found it an engaging novel overall. I liked the different perspectives given by the characters who opposed Isaac's views in pretty reasonable ways, and how EA views were mentioned without getting too preachy.

Plus, I liked how the novel evoked the overall essence or vibe of cultivation novels without getting too lost in the weeds, as well as the well-developed military theory of cultivation warfare the characters had. Overall I quite enjoyed the novel both as an intro to EA concepts and on its own merits.

It could certainly use another editing pass or two for grammar, but I think it has fantastic potential!

Comment by Jay Bailey on Levelling Up in AI Safety Research Engineering · 2022-09-02T14:15:35.477Z · EA · GW

Interesting. Do you have any good examples?

Comment by Jay Bailey on Levelling Up in AI Safety Research Engineering · 2022-09-02T05:42:05.787Z · EA · GW

This is a fantastic resource, and I'm really glad to have it! 

My own path has been a little more haphazard - I completed Level 2 (Software Engineering) years ago, and am currently working on AI safety (1), mathematics (3) and research engineering ability (4) simultaneously. Having just completed the last goal of 4 (completing 1-3 RL projects), I was planning to jump right into 6, since transformers haven't yet appeared in my RL studies, but I'm now rethinking those plans based on this document - perhaps I should learn about transformers first.

All in all, the first four levels (The ones I feel qualified to write about, having gone through some or all of them) seem extremely good. 

The thing that most surprised me about the rest of the document was Level 6. Specifically, the part about being able to reimplement a paper's work in 10-20 hours. This seems pretty fast compared to other resources I've seen out there, though most of those resources are RL-focused - for instance, this post estimates 220 hours. A DeepMind post about job vacancies a few months ago also says:

"As a rough test for the Research Engineer role, if you can reproduce a typical ML paper in a few hundred hours and your interests align with ours, we’re probably interested in interviewing you."

Thus, I don't think it's necessary to be able to replicate a paper in 10-20 hours. Replicating papers is a great idea according to my own research, but I think that one can be considerably slower than that and still be at a useful standard.

If you have other sources that suggest otherwise I'd be very interested to read them - it's always good to improve my idea of where I'm heading towards! 

 

Comment by Jay Bailey on EA vs. FIRE – reconciling these two movements · 2022-09-01T10:04:59.380Z · EA · GW

Good piece! Upon seeing the title, I immediately wished I had thought to write something like it.

I was personally involved in FIRE before I got involved in EA. Even now, I donate 10% of my income and save most of what's left. Because of my decision to try and perform direct work to improve the world, I'm no longer planning the RE part of FIRE. I've also become a little less frugal as a result and willingly taken a pay cut to skill up for direct work - what does it matter if it takes an extra year or two to reach FI if I'm planning to perform direct work post-FI anyway?

So, I guess for me, these ideas are in conflict somewhat, in the sense that I can't simultaneously maximise both. But I agree there is a core to both of these movements that aligns very well. Mr. Money Mustache, whether he identifies as an EA or not, has donated significant amounts to GiveWell in the past. It makes complete sense that a person who wants to optimise their finances would also want to optimise their charitable giving in a similar fashion, so I think EA ideas will find fruitful soil in the FIRE movement.

Comment by Jay Bailey on Community Builder Writing Contest: $20,000 in prizes for reflections · 2022-08-31T15:20:46.969Z · EA · GW

I was keen to check out the winning entries to this contest, but I'm wondering if I missed the announcement, and I can't seem to find it anywhere. Have the entries been made public somewhere?

Comment by Jay Bailey on Effective altruism is no longer the right name for the movement · 2022-08-31T07:07:25.626Z · EA · GW

The point about global poverty and longtermism being very different causes is a good one, and the idea of these things being more separate is interesting.

That said, I disagree with the idea that working to prevent existential catastrophe within one's own lifetime is selfish rather than altruistic. I suppose it's possible someone could work on x-risk out of purely selfish motivations, but it doesn't make much sense to me.

From a social perspective, people who work on climate change are considered altruistic even if they are doomy on climate change. People who perform activism on behalf of marginalised groups are considered altruistic even if they're part of that marginalised group themselves and thus even more clearly acting in their own self-interest.

From a mathematical perspective, consider AI alignment. What are the chances of me making the difference between "world saved" and "world ends" if I go into this field? Let's call it around one in a million, as a back-of-the-envelope figure. (Assuming AI risk at 10% this century, the AI safety field reducing it by 10%, and my performing 1/10,000th of the field's total output)

This is still sufficient to save 7,000 lives in expected value, so it seems a worthy bet. By contrast, what if, for some reason, misaligned AI would kill me and only me? Well, now I could devote my entire career to AI alignment and only reduce my chance of death by one micromort - by contrast, my Covid vaccine cost me three micromorts all by itself, and 20 minutes of moderate exercise gives a couple of micromorts back. Thus, working on AI alignment is a really dumb idea if I care only about my own life. I would have to go up to at least 1% (10,000x better odds) to even consider doing this for myself.
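For reference, here's the arithmetic behind those two framings, with every input being one of the rough guesses above rather than a measured figure:

```python
# Every input here is one of the rough guesses above, not a measurement.
p_ai_xrisk = 0.10          # chance of AI-caused catastrophe this century
field_reduction = 0.10     # fraction of that risk the safety field removes
my_share = 1 / 10_000      # my fraction of the field's total output

p_tip_the_outcome = p_ai_xrisk * field_reduction * my_share   # ~1 in a million
world_population = 7_000_000_000

print(f"P(I tip the outcome): {p_tip_the_outcome:.0e}")                       # ~1e-06
print(f"Expected lives saved: {p_tip_the_outcome * world_population:,.0f}")   # ~7,000

# Purely selfish framing: the same career buys me roughly one micromort of personal risk reduction.
print(f"Personal risk reduction: {p_tip_the_outcome * 1_000_000:.0f} micromort")   # ~1
```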

Comment by Jay Bailey on Jay Bailey's Shortform · 2022-08-30T08:46:02.387Z · EA · GW

A criticism of EA I see all the time: "Buying bednets for people in the third world is paternalistic, elevates the giver over the receiver, etc."

A criticism of EA I never see: "Donating to political candidates in the US is paternalistic, elevates the giver over the receiver, etc."

I find this strange because the latter seems at least as paternalistic as the former - using money to try to impact the political process for the good of the world says "I know what's good for you better than you do". This may be true in some cases, but still - why do people critique the first, and not the second?

Theories, in rough order of most likely to least likely:

- Critics don't yet realise that EA's involvement in politics is growing. Most paternalism critiques of EA come from people outside EA, so we should expect them to be more familiar with highly visible stuff that's been going on for years.

- Paternalism critiques of EA come from the left, and when EA tries to impact politics they pick Democrat candidates. People don't think of "paternalism" except when it comes to things they disagree with.

- People have mentioned this criticism somewhere, and I just haven't seen it.

- EA donors tend to be Western, so they're from the same culture they aim to influence, which makes it okay.

Curious about people's thoughts on this.

Comment by Jay Bailey on Reducing long-term risks from malevolent actors · 2022-08-28T00:49:19.110Z · EA · GW

The marriage between violent men and an ever accelerating knowledge explosion is unsustainable.   One of them has to go.

I think our core crux here is that, if this is true, I would rather tackle it from the "ever accelerating knowledge explosion" side or the "violent" side than from the "men" side.

Good luck with your ideas, man. You've certainly given me a new idea to think about (knowledge explosion) and I hope I've done the same. 

Comment by Jay Bailey on Reducing long-term risks from malevolent actors · 2022-08-27T17:20:41.828Z · EA · GW

That's true.  But it is that radical idea that engaged your interest, so sometimes it works.  I'm not claiming to have the perfect engagement system etc, again, I'm just doing what I know how to do.

The idea that engaged my interest was that of the exponential knowledge explosion. I thought there was a good idea there, and I replied since I had seen a lot of your posts recently and nobody else was replying to them. I replied in spite of, not because of, the proposed solution. I imagine it's quite likely that others decided not to reply because of the proposed solution as well.

I apologise for misleading you there - I'll focus just on the exponential knowledge explosion side of things to help correct that.

Me: In the real world, I cannot see any sort of realistic path towards your solution,

You: How long have you been trying?  Today and yesterday?

The above line by me was an invitation for you to share a path. You're right - my inability to see a path after five minutes isn't a strong indication that no such path exists. On the other hand, you've had this idea for years - if you don't have a path forward, that's much stronger evidence for the idea's untenability. If you do have a path forward, this would be a good thing to add to your argument.

You seem to have the idea that people are disagreeing with you because your ideas are strange and outside the box.

This sentence was referring entirely to the EA Forum - people tend to be more tolerant of weird ideas here than in most places on the internet.

Comment by Jay Bailey on Reducing long-term risks from malevolent actors · 2022-08-27T14:53:40.282Z · EA · GW

The reasoning here is simple.   If ideas within the group consensus could solve this existential threat, the problem would likely already be solved.

First off - this isn't necessarily true. There are ideas within the group consensus that could at least take steps towards solving nuclear war (advocating for nuclear disarmament), mitigating climate change (advancing the state of carbon capture technology), and reducing global poverty (basically everything GiveWell does). The reason we haven't solved these yet is that executing the ideas is still very hard - there are problems around logistics and co-ordination, and they simply require a hell of a lot of work.

That said, I do agree with you that looking at the root cause of exponentially increasing x-risk capable technologies is not an idea that is currently discussed much. But that should actually move you towards thinking inside the box - if people aren't thinking about it, there's low-hanging fruit to be picked, and you don't need to go straight to the most radical idea you can think of. Maybe there are some good ideas within the Overton window that haven't been popularised yet, just waiting to be found!

This is an understandable critique, which is welcomed.  But to be more precise, what is really being said is that I've shied away from solving the problem in the  manner preferred by those who make their living serving an out of control knowledge explosion.   If such folks can solve the problem in the manner which they prefer, that's great, I'm all for solving the problem.  Let's hear their solution...

No, that's not what's being said. What I am saying is that you've shied away from solving the problem. "Solving the problem" means coming up with a solution that has a chance of actually working given the constraints of the real world. In the real world, I cannot see any sort of realistic path towards your solution, and coming up with such a path is a requirement for a valid solution. It is not enough to describe a possible state of the world that solves a problem - you have to have some path to get from our current world to that world.

I also gave you two ideas for the limitation of knowledge development - ban all research in any areas deemed to be prone to existential risk, and find a way to remove the tendency for highly aggressive men to seek power in world-threatening ways. Don't get me wrong, these still don't count as solutions in my definition above, but they seem at least as plausible as removing the male gender entirely. They seem "probably impossible" rather than "definitely impossible", and limiting research even suggests some possible next steps, like banning biological gain-of-function research (even though these are still very hard, and I've heard people make decent arguments for why it can't be done, largely around the difficulty of international co-operation - but that is also an argument that applies equally well to your proposal).

Yes, the argument I'm making seems strange to you.  It seems strange to most people.   That's what I like about it.  We're exploring outside the limits of a group consensus  which has a proven record of failing to successfully address these existential threats.  

Being outside the box doesn't guarantee success, agree completely there.  But where else would you have me look?  

You seem to have the idea that people are disagreeing with you because your ideas are strange and outside the box. This is from the same forum that has wild animal suffering as a small but real cause area. When I said your argument seemed strange, I was being polite. What I should have said was that your argument seems bad. Not because it's outside the box. Not because I disagree with it morally. Because I don't see how it could actually work, and there seem to me to be options that are simultaneously less radical and also closer to the set of things I would describe as possible solutions. The reason to think outside the box is that nothing else has worked - you should be aiming for the least radical solution necessary to solve the problem.

Finally, while I don't think this point is necessary for my core argument, I'll address it anyway:

Also, why should men resist the idea of their being fewer men???  The fewer men there are, the more valuable the remaining men become in the dating pool.   This is another obvious point critics always ignore in their rush to rejection.

Firstly - a society that wants to eliminate the male gender probably isn't going to be very nice for men to live in. You can't have a society dedicated to the extinction of men that also respects, appreciates, and is kind to them. Any attitudes society adopted that would allow it to even consider such a radical act are going to be opposed by men for precisely this reason - which, by the way, is an extremely good reason and has my full sympathy. There are people right now who advocate for the extinction of men, and they are not the kind of people I would feel safe around, let alone being ruled by.

Secondly - let's say I don't care about the future of men, and I don't care what people think of me, and I'm certain that society's newfound attitudes allowing for the intentional extinction of men definitely won't turn into violence, and I'm willing to throw away my entire gender to increase my chance of getting laid. Even then, I'm thirty years old now. How does this help me? If you're not going to commit violence, which you've previously claimed, the only way you can reduce the number of men is to stop new ones from being born. That means that even if you could start having statistically relevant effects a mere ten years from now, I'm still going to be in my sixties by the time the dating pool starts shifting in any sort of meaningful way, and there are still just as many 30+ year olds as there ever were. That doesn't seem like it benefits me.

Finally, if most humans were willing to consider twenty years in the future when deciding on what policies they support, we might not be in this mess in the first place.  

Comment by Jay Bailey on Reducing long-term risks from malevolent actors · 2022-08-27T04:17:43.404Z · EA · GW

The argument you're making seems strange to me. I'm going to talk purely about the logistics here. You say that no society in history has figured out how to keep peaceful men without the violent ones, and therefore this means it probably can't be done. Presumably, the same argument applies to banning scientific progress.

But the same argument can be applied to a society without men at all! No society in history has managed to do it. In fact, the obstacles are far greater - men are still biologically necessary, there are billions of men already on the planet, and most of them wouldn't agree with you that their gender should be phased out. It's easy to say you're not calling for violence, but I don't see any way you could adopt this without violence, even if you had the technology to allow for it.

Finally, it occurs to me that if your ideal-but-untenable solution is to keep peaceful men and not violent men, then your solution is anti-correlated with what you want - what kind of man would willingly permit his own gender's extinction for the greater good of humanity? Certainly not a violent one.

It seems to me like you've correctly identified a large problem that lurks behind several EA causes (The exponentially advancing state of knowledge resulting in existential risk) and a compelling analogy for it (The ever-increasing speed of a conveyor belt) which is a very useful question worth exploring. But then, having successfully identified this as a hard problem, you've then shied away from the difficulty of the problem, which is actually solving the problem. If you propose an unworkable solution, and then people ignore it, you at least get to say that you tried, and other people are to blame for denying your vision. In other words, you are Trying To Try.

In short - you've proposed a solution that would almost certainly fail in real life. It is in fact much more difficult than lesser impossibilities like "Remove the tendency for violence in men/humans" or "Halt all scientific progress in any field that is deemed unsafe". It might not even solve the problem if it succeeded. And it also happens to be outside the Overton window to boot. And then, when people downvote your posts, you assume that they disagree with the solution because it's outside the Overton window and not because it's just...not going to work. And even if it could work, the solutions I mentioned above are better across every possible dimension. They're slightly less impossible, morally more reasonable, societally more tenable, and equally likely to solve the problem if you could somehow pull them off. And even then, those aren't good solutions because they STILL don't bring the problem into a state where a solution can realistically be seen.

The problem you mention is real, but you've cut your search for a solution short way too fast.

 

Comment by Jay Bailey on [deleted post] 2022-08-19T14:44:07.217Z

A couple of bits of feedback:

"It is a paid program ($150~$250USD/person) with free spots available for exceptional and autistic individuals." is a bit ambivalent. While the literal definition of this sentence is "People who are both exceptional and autistic",  a colloquial use might be "People who are either exceptional or autistic".

I'd recommend changing it to "Free spots available for exceptional autistic individuals" (if logical AND) or "Free spots available for exceptional and/or autistic individuals" (if logical OR)

Secondly - when I think "principles" I usually think moral principles, but it's not entirely clear what principles means in this post. It sounds like the bootcamp is about strategies to better achieve the things that are important to you, and those things are your "principles", but this isn't common usage (though it may be a common and well-defined term within a certain field that I'm ignorant of)

Comment by Jay Bailey on Announcing the Distillation for Alignment Practicum (DAP) · 2022-08-19T03:01:35.772Z · EA · GW

I've put in an application! I'm currently doing a distillation of how Deep Q Networks work for an AGISF project, and distillation is my Plan B if research engineering doesn't work out, so this would be a great course to do in parallel with my current upskilling! I am in Australia, so the timezones might be difficult, but I'll definitely try to arrange a group to do it with if not selected. 

Comment by Jay Bailey on Paula Amato's Shortform · 2022-08-13T01:44:18.459Z · EA · GW

It would be nice to have some specific examples of these things. This particular criticism, in my view, is just an attempt to associate EA with Bad Things so that people also think of EA as a Bad Thing. There are no actual arguments in this statement - there are no specific claims to oppose. (Except that EA is incredibly well-funded - which is true, but also not inherently good or bad, and therefore does not need to be defended.)

If I'm being charitable - many arguments are like this, especially when you only have 140 characters. This is a bad argument, but it's far from a uniquely bad argument. The burden of proof is on Timnit to provide evidence for these accusations, but they may have done this somewhere else, just not in this tweet. (I assume it's a tweet because of its length, and, let's face it, its dismissiveness. Twitter is known for such things.)

If I'm not being charitable - the point of a vague argument such as the above is that it places the burden of proof on the accused. The defense being asked for is for EA's to present specific examples of actions that EA is taking that prove they aren't "colonial" or "white savior"-esque. This is a losing game from the start, because the terms are vague enough that you can always argue that a given action proves nothing or isn't good enough, and that someone could be doing more to decolonise their thoughts and actions. The only winning game is not to play.

Which interpretation is correct? I don't know enough about Timnit Gebru to say. I'd say that if Timnit is known for presenting nuanced, concrete arguments on other mediums or on other topics, this argument is probably a casualty of Twitter, and the charitable approach is appropriate here.

Comment by Jay Bailey on Book a chat with an EA professional · 2022-08-11T08:05:02.342Z · EA · GW

"<topic> 101" generally means beginner or introductory questions, taken from some universities where a class like MATH101 would be the first and most basic mathematics class in a degree. So, "EA 101 questions" here means basic or introductory EA questions.

Comment by Jay Bailey on Jay Bailey's Shortform · 2022-08-11T03:31:03.754Z · EA · GW

I notice some parallels between the old essay "Transhumanism as Simplified Humanism" (https://www.lesswrong.com/posts/Aud7CL7uhz55KL8jG/transhumanism-as-simplified-humanism) and current criticisms of EA - that the idea of "doing the most good possible" is obvious and has been thought of many times before. Really, in a way, this is just common sense. And yet:

Then why have a complicated special name like “transhumanism” ? For the same reason that “scientific method” or “secular humanism” have complicated special names. If you take common sense and rigorously apply it, through multiple inferential steps, to areas outside everyday experience, successfully avoiding many possible distractions and tempting mistakes along the way, then it often ends up as a minority position and people give it a special name.

I feel that EA is like this. If you take a common sense idea like "Do the most good possible", and actually really think about how to do that, and actively compare different things you could be doing - not just the immediate Overton window of what your friends or your colleagues are doing - and then make a serious commitment of resources to make that answer happen, then it ends up as a minority position and people give it a special name. 

Comment by Jay Bailey on By how much should Meta's BlenderBot being really bad cause me to update on how justifiable it is for OpenAI and DeepMind to be making significant progress on AI capabilities? · 2022-08-10T07:49:41.278Z · EA · GW

Not quite a direct answer to your question, but it is worth noting - not everyone in EA believes that about AI capabilities work. I, for one, believe that working on AI capabilities, especially at a top lab like OpenAI and DeepMind, is a terrible idea and should be front and center on our "List of unethical careers". Working in safety positions in those labs is still highly useful and impactful imo.

Comment by Jay Bailey on [deleted post] 2022-08-10T01:10:25.240Z

I don't agree with most of these points, though I appreciate you writing them up. Here are my thoughts on each of them, in turn:

Altruism implies a naive model of human cognition. I feel like this argument proves too much. If "altruism" is not a good concept because humans are inconsistent, why would "self-interest" be any less vulnerable to this criticism? It seems that you could even-handedly apply this criticism to any concept we might want to maximise, which ends up bringing everything back to neutral anyway.

Altruism as emergent from reward-seeking. This brings up a good point in my opinion, though perhaps not the same point you were making. Specifically, I think altruism is often poorly defined. On some level it's obvious that people are altruistic because of self-interest. But it also seems to me that if your view of what you want the world to look like includes other people's preferences, and you make non-trivial sacrifices (E.g, donating 10%) to meet those preferences, that should certainly count as altruism, even if you're doing it because you want to. 

Need for self / other distinction. I'm not actually following this one, so I won't comment on it.

Information asymmetry. Perfectly true - if all humans were roughly equally well-off, the optimal thing to do would be to focus on yourself. However, this is not the case. I may understand more about my preferences than I understand about the preferences of someone in Bangladesh earning $2/day, but I can reasonably predict that a marginal $20 would help them more than it would help me. Thus, it seems totally reasonable that there are ways you can help others even with less information on their internal states.

Game-theoretic perspective. This argument is just confusing to me. Your first sentence says that self-interested agents can co-operate for everyone's benefit, and your second sentence says that altruistic groups may behave suboptimally. Well...so might self-interested agents! "Can" does not mean "will". You've done some sleight of hand here where you say that self-interested agents can sometimes co-ordinate optimally, then you say that altruistic groups do not always co-ordinate optimally, and then used that to imply that self-interested groups are better. You haven't actually shown that self-interested groups are more effective in general, merely that it's possible, in some cases (1 in 10? 1 in 100? 1 in 1000?) for a self-interested group to outperform an altruistic one.

Human nature. Humans aren't hardwired to care about spreadsheets, or to build rockets, or to program computers. One of the greatest things about humans, in my mind, is our ability to transcend our nature. I view evolutionary psychology as a useful field in the same way that sitcoms are useful for romance advice - they give solid advice on what not to do, or what pitfalls to watch out for. I am naturally wired for self-interest...so I should watch out for that.

Also...I'm not sure if we can do this and still keep what makes the movement great. In the end, effective altruism is about trying to improve the world, and that requires thinking beyond oneself, even if that's hard and we're wired to do otherwise. I don't think I'm likely to be convinced that donating 10% of my income to people I'll never see is actually in my own self-interest, and yet I do it anyway. There are absolutely positives to being part of the movement from the point of view of self-interest, and those are good to smuggle along to get your monkey-brain on board. Nevertheless - if you're focused on self-interest, that limits a lot of what you can do to improve the world compared to having that goal directly. So I think altruism is still very important.

Comment by Jay Bailey on EA is Insufficiently Value Neutral in Practice · 2022-08-05T02:27:07.468Z · EA · GW

Agreed entirely. There is a large difference between "We should coexist alongside not maximally effective causes" and "We should coexist across causes we actively oppose." I think a good test for this would be:

You have one million dollars, and you can only do one of two things with it - you can donate it to Cause A, or you can set it on fire. Which would you prefer to do?

I think we should be happy to coexist with (And encourage effectiveness for) any cause for which we would choose to donate the money. A longtermist would obviously prefer a million dollars go to animal welfare than be wasted. Given this choice, I'd rather a million dollars go to supporting the arts, feeding local homeless people, or improving my local churches even though I'm not religious. But I wouldn't donate this money to the Effective Nazism idea that other people have mentioned - I'd rather it just be destroyed. Every dollar donated to them would be a net bad for the world in my opinion. 

Comment by Jay Bailey on Reflection - Growth and the case against randomista development - EA Forum · 2022-07-30T09:03:12.329Z · EA · GW

In addition to raising several further problems, I don't see how this solution actually solves any of the problems I brought up in my previous comment.

Comment by Jay Bailey on Reflection - Growth and the case against randomista development - EA Forum · 2022-07-29T00:38:20.511Z · EA · GW

(NOTE: I wrote this response when the post was much shorter, and ended at "I do not care about evidence when people are dying.")

 

First off - a linkpost is a link to the exact same post that has been written somewhere else, rather than an inspiration or a source like the original "Against RCT" post. That's a small thing.

Secondly - people did think about the kids in the PlayPump story. With the benefit of hindsight, we now know the PlayPumps were a bad idea, but that's not how it seemed at the time. It seemed like the kids would get to play (hence the name) and the village would naturally get water as a result. That's a win-win! No need to take kids out of school, and providing access to clean water would have been a great thing. It didn't work out that way, but the narrative was compelling - evidence about how it actually works is the thing that was missing.

Thirdly, it seems strange to say that you don't care about evidence. You claim:

"With common sense it is obvious that we have to invest billions to build a water public company in Africa and to build the infrastructure to allow every citizen to get fresh and clean water.(this is solve root problem with common sense)"

How would we work out how to achieve this, without using evidence? For that matter, how do we know people in Africa need clean water at all? Sure, it's common knowledge now, but how did the people who originally reported on it find out? Did they close their eyes and think really hard, and then open their eyes and say "I bet there's a country called Africa, and people live there, and they need clean water", or did people actually ask Africans or look at conditions in Africa, and find out what was going on?

Less facetiously, there's a whole bunch of questions that would need to be asked in order to complete this project. Questions like:

Would these countries allow this company to be built?
Who should be in charge of it?
Can we actually provide this infrastructure?
How maintainable is the infrastructure? 
What will the expected costs and benefits actually be?

The lesson of the PlayPumps is that you can't answer all these questions by telling a nice story - you have to actually go out and do the research about how things might go in the real world, and then at least you have a chance of getting it right. The world is complicated - things that seem compelling aren't always possible or useful. The only method we know of that can even somewhat reliably tell the difference is evidence, ideally as empirical (i.e., as close to the source of what's really happening) as possible.

The key insight from this post I am trying to convey is not "You can't criticise these things", but rather - if you're going to criticise these things, you need to present a counterargument against the actual reasons EA believes in these things. Why do the benefits of evidence not apply here? What method can we use, other than evidence-gathering, to be sure that this project is the best project we could be doing and will actually work as intended? 

Comment by Jay Bailey on GLO, a UBI-generating stablecoin that donates all yields to GiveDirectly · 2022-07-20T00:49:00.646Z · EA · GW

Considering that the scale we're talking about probably involves reaching out to non-crypto people, I feel like my question isn't too basic:

How fast/cheap is "making a crypto transaction" currently? I've heard bad things about how expensive it is but have no idea if that's actually true.

Comment by Jay Bailey on One Million Missing Children · 2022-07-12T03:07:19.880Z · EA · GW

I imagine the financial claim isn't that offering financial support doesn't work, but a claim more like - there aren't enough resources to offer enough financial support to enough people to meaningfully alter the US fertility rate on the basis of this alone.

Like - how much does it take to raise a child? I've heard 250k, so let's go with that. You don't need to offer the entire amount as financial support, but something like 5k/year seems reasonable. Across 18 years, that's still $90,000. That means that if you give a billion dollars away as financial support, with zero overheads, you've supported the birth of ~11,000 children. This is a rounding error compared to the size of the issue, so I wouldn't see it as "directly moving the needle". To directly move the needle at a cost of 90k/child, you'd need to invest hundreds of billions of dollars. It would probably work effectively, but the resources just aren't there in private philanthropy.
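Spelling out those rough numbers (the $5k/year subsidy, 18-year duration, and zero overhead are all assumptions on my part):

```python
# Back-of-the-envelope; every input here is an assumption, not a measured figure.
subsidy_per_year = 5_000
years = 18
cost_per_child = subsidy_per_year * years       # $90,000 per supported birth

budget = 1_000_000_000                          # one billion dollars of philanthropy
print(f"~{budget // cost_per_child:,} births supported per $1B")   # ~11,111

# Closing a gap measured in millions of missing births at ~$90k each
# runs well into the hundreds of billions of dollars.
print(f"2 million extra births would cost ~${2_000_000 * cost_per_child / 1e9:.0f}B")   # ~$180B
```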

By contrast, political advocacy actually could work on the scales that we're talking about.

Comment by Jay Bailey on My EA Failure Story · 2022-07-11T12:58:29.076Z · EA · GW

Thanks for sharing this, semicycle! 

One thing I would also like to point out is that relativity is the enemy here. Compared to being a billionaire, making a "mere" six figures as a successful engineer and donating 10% doesn't seem like much, but let's take a step back and look at it objectively. Donating 10% of that is saving 3+ lives every single year. Across a career, that could easily save a HUNDRED PEOPLE. That's like two school buses full of children! This is incredibly valuable, regardless of what anybody else is doing.
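To make the rough numbers behind that explicit (salary, cost per life, and career length are all assumed, illustrative figures):

```python
# Illustration only; the salary, cost-per-life and career length are assumed figures.
salary = 150_000                 # a successful engineer's income
donation = salary * 0.10         # giving 10%
cost_per_life = 4_500            # GiveWell-style estimate

lives_per_year = donation / cost_per_life
career_years = 30
print(f"~{lives_per_year:.1f} lives/year, ~{lives_per_year * career_years:.0f} over a career")
# ~3.3 lives/year, ~100 over a 30-year career
```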

If you save three people, then as far as I'm concerned you've made a positive contribution with your life, as long as you're a somewhat decent person the rest of the time, and nobody can tell you otherwise. You're in a position to do that every year you have an engineering job, even if it's not in EA!

Everyone who signs that pledge (or donates the equivalent) is doing incredible work. The child you saved doesn't care if someone else saved ten or not, and every life is precious.

Comment by Jay Bailey on (Even) More Early-Career EAs Should Try AI Safety Technical Research · 2022-07-04T01:48:23.119Z · EA · GW

You can essentially think of it as two separate problems:

Problem 1: Conditional on us having a technical solution to AI alignment, how do we ensure the first AGI built implements it?

Problem 2: Conditional on us having a technical solution to AI alignment, how do we ensure no AGI is ever built that does NOT implement it, or some other equivalent solution?

I feel like you are talking about Problem 1, and Locke is talking about Problem 2. I agree with the MIRI-type view that Problem 1 is easy to solve, and that the hard part is having the technical solution in the first place. I do believe the existing labs working on AGI would implement a solution to AI alignment if we had one. That still leaves Problem 2 to be solved - though at least if we're facing Problem 2, we do have an aligned AGI to help with the problem.

Comment by Jay Bailey on AI Safety Brisbane Launch · 2022-06-28T08:05:52.509Z · EA · GW

I'd recommend moving it a week because of EAGx. I would also like to attend, but can't due to EAGx Australia.

Also, the time says 4 am AEST - I assume that means the actual time is Friday 6 pm AEST, and it was put in as Friday 6 pm GMT?

Finally, the event says "Following the launch events in Melbourne and Sydney, there will be an AI Safety Brisbane Launch in May." I am assuming this isn't meant to be in May.

Comment by Jay Bailey on Puggy Knudson's Shortform · 2022-06-25T01:02:30.713Z · EA · GW

EA's greatest strength, in my mind, is our epistemic ability - our willingness to weigh the evidence and carefully think through problems. All of the billions of dollars and thousands of people working on the world's most pressing problems came from that, and we should continue to have that as our top priority.

Thus, I'm not comfortable with sentences like "Proposal: change the framing from “Computers might choose to kill us” to “Humans will use computers to kill us” regardless of whether either potential outcome is more likely than the other." We shouldn't be misleading people, including by misrepresenting our beliefs. Plus, remember - if you tell one lie, the truth is forever after your enemy. What if I'm a new EA engaging with AI safety arguments, and you use that argument on me, and I push back? Maybe I say something like "Well, if the problem is that humans will use computers to kill us, why not give the computer enough agency that, if the humans tell it to kill us, the computer tells us to shove it?"

This would obviously be a TERRIBLE idea, but it's not obvious how you could argue against it within the framework you've just constructed, where humans are the real danger. Every good argument against it comes from the idea that agentic AIs are super dangerous, which contradicts the claim you just made. If the danger is humans using these weapons to kill each other, giving the AIs more agency might be a good idea. If the danger is computers choosing to kill humans, giving the AIs more agency is a terrible idea. I'm sure you could come up with a way of reconciling these examples, but you'll notice that it sounds a bit forced, and I bet there are more sophisticated arguments I couldn't come up with in two minutes that would further separate these two worlds.

We have to be able to think clearly about these problems to solve them, especially AI alignment, which is such a difficult problem to even properly comprehend. I feel like this would be both counterproductive and just not the direction EA should be going. Accuracy is super important - it's what brought EA from a few people wanting to find the world's best charities to what we have today.

Comment by Jay Bailey on Fill out this census of everyone who could ever see themselves doing longtermist work — it’ll only take a few mins · 2022-06-22T00:24:53.537Z · EA · GW

A good call to action, I feel, should be about the upper bound rather than the lower bound. I too assumed that was "<= 3 mins" purely because ">= X time" is very unusual to put in a title. Perhaps changing it to something like "<= 15 mins" would be a good idea.

Comment by Jay Bailey on AUTHORITY · 2022-06-13T08:43:27.003Z · EA · GW

This argument is extraordinarily difficult to follow because it doesn't appear to be written very legibly. It really needs several more passes, as I can barely follow what the point is meant to be. The argument seems to be something like "Equality is comprised of multiple things, and EAs only focus on one (equality of socioeconomic opportunity), which is inconsistent with the full meaning of equality." Is this right?

You also mention this in the first paragraph: "Many effective altruists would prefer to learn that someone is poor, than to be robbed by someone who is poor. The revealed preference in this thought experiment is that equality of authority is valued over socioeconomic equality."

...What? This makes absolutely no sense. There is no abuse of authority involved in robbing someone, because the thief does not have authority to begin with. If you believe that me robbing you is an example of "inequality of authority", because I have the ability to take something from you and you can't stop me because I have a gun and you don't, then...I don't really understand the point the rest of the post is making at all. Do you think EAs are okay with people robbing each other? If not, then clearly EAs do care about "inequality of authority" by your definition of the term, and so does almost everyone, so it isn't clear to me what the problem actually is that you're pointing out.

This definitely needs more editing. Most of the post is not coherent enough for me to follow the argument in detail. The one part of the post I think I understand (the first paragraph) appears so blatantly wrong that I feel like I must be misunderstanding it too.

 

Comment by Jay Bailey on Community Builder Writing Contest: $20,000 in prizes for reflections · 2022-06-08T01:37:37.627Z · EA · GW

Do we have a date for the announcement of the winners at this point? I've attempted to reach out to Akash via the forum and via email, and haven't gotten a response after a few weeks. Has someone else taken over the organisation of this contest? 

Comment by Jay Bailey on Unflattering reasons why I'm attracted to EA · 2022-06-07T02:49:19.755Z · EA · GW

I think the correct steelmanning of dotsam's point is:

1. As a member of <group>, I have a great deal of privilege.
2. In order to remove this privilege, we need sweeping societal changes that upend the current power structures.
3. EA does not focus on upending current power structures in a radical way.
4. EA makes me feel less guilty about my privilege despite this.
5. Therefore, EA allows me to maintain my privilege by relieving my guilt through actions that don't actually require overthrowing current power structures, i.e., the actions that would affect me personally the most.

Under this set of assumptions, most people find ways to maintain their privilege not by actively reinforcing power structures, but by avoiding the moral imperative to overthrow them. EAs are at least slightly more principled, because their price for this is something like "Donate 10% of your income" instead of "Attend a protest", "Sign a petition", or "Decide that you're inherently worthy of what you have and privilege doesn't exist."

Personally, I don't agree with this chain of logic because I disagree with Point 2 above, but I think the chain of logic holds if you agree with points 1 and 2. (And I suppose you also need to add the assumptions that one can tractably work on upending these power structures, and that doing so won't cause more harm than good.)
 

Comment by Jay Bailey on nananana.nananana.heyhey.anon's Shortform · 2022-06-05T01:51:24.932Z · EA · GW

I've definitely thought about this. EA is a relatively young movement. Its momentum is massive at the moment, but even so, creating a career out of something like EA community building is far from certain, even for people who can reasonably easily secure funding for a few months or years. 

I think that a good thing to do would be to ask "What would happen if EA ceased to exist in ten years?" when making career plans. If the answer is "Well, I would have been better off had I sought traditional career capital, but I think I'll land on my feet anyway" that's a fine answer - it would be unreasonable to expect that devoting years of your life to a niche movement has zero costs. If the answer is "I'd be completely screwed, I have no useful skills outside of this ecosystem and would still need to work for a living", I would be more concerned and suggest people alter plans accordingly.

That said, I think for many or most EAs, this will not be the case. Many EA cause areas require highly valuable skills such as software engineering, research ability, or operations/management skills that are useful in the private or public sector outside of effective altruism. I also feel like this mainly applies to very early-career individuals. For instance, I have a few years of SWE experience and want to move into AI safety. If EA disbanded in ten years...well, I'd still want to work on the problem, but what if we solved the alignment problem, or proved it actually wasn't a major cause area somehow, and EA said "Okay, thanks for all your hard work, but we don't really need AI alignment experts any more"? I would be okay - I could go back to SWE work. I'd be worse off than if I'd spent ten years working for strong non-EA tech companies, but I would hardly be destitute.

It's not that hard to have a backup plan in place, but we should encourage people to have one. This may also help with mental health - leaving a line of retreat from EA should it be too overwhelming for some people is useful, and you don't have much of a line of retreat if you're dependent on EA for income.

Comment by Jay Bailey on Michael Nielsen's "Notes on effective altruism" · 2022-06-03T10:27:27.471Z · EA · GW

Passage 5 seems to prove too much, in the sense of "If you take X philosophy literally, it becomes bad for you" being applicable to most philosophies, but I very much like Passage 4, the EA judo one.

While it is very much true that disagreeing over the object-level causes shouldn't disqualify one from EA, I do agree that it is not completely separate from EA - that EA is not defined purely by its choice of causes, but neither does it stand fully apart from them. EA is, in a sense, both a question and an ideology, and trying to make sure the ideology part doesn't jump too far ahead of the question part is important. 

"Again: if your social movement "works in principle" but practical implementation has too many problems, then it's not really working in principle, either. The quality "we are able to do this effectively in practice" is an important (implicit) in-principle quality."

I think this is a very key thing that many movements, including EA, should keep in mind. I think that what EA should be aiming for is "EA has some very good answers to the question of how we can do the most good, and we think they're the best answers humanity has yet come up with to answer the question. That's different from thinking our answers are objectively true, or that we have all the best answers and there are none left to find." We can have the humility to question ourselves, but still have the confidence to suggest our answers are good ones.

I dream of a world where EA is to doing good as science is to human knowledge. Science isn't always right, and science has been proven wrong again and again in the past, but science is collectively humanity's best guess. I would like for EA to be humanity's best guess at how to do the most good. EA is very young compared to science, so I'm not surprised we don't have that same level of mastery over our field as science does, but I think that's the target.

Comment by Jay Bailey on My list of effective altruism ideas that seem to be underexplored · 2022-06-01T07:02:07.529Z · EA · GW

Not OP, but it seems reasonable that if you perform an action to help someone, and that person then agrees in retrospect that they preferred this to happen, that can be seen as "fulfilling a preference". 

For a mundane example, imagine I'm ambivalent about mini-golfing. But you know me, and you suspect I'll love it, so you take me mini-golfing. Afterwards, I enthusiastically agree that you were right, and I loved mini-golfing. I see this as pretty similar to me saying beforehand "I love mini-golfing, I wish someone would go with me", and you fulfilling my preference by taking me. In both cases, the end result is the same, even though I didn't actually have a preference for mini-golfing before. 

Similarly, even though it is impossible for a dead person to have a preference, I think that if you bring someone back to life and they then agree that this was a fantastic idea and they're thrilled to be alive, that would be morally equivalent to fulfilling an active preference to live.

Comment by Jay Bailey on You Understand AI Alignment and How to Make Soup · 2022-05-28T11:23:32.729Z · EA · GW

Do you think this is a useful tool for AGI alignment? I can certainly see it being valuable for current models and as a research tool, but I'm not sure whether it's expected to scale. It'd still be useful either way - I'm just curious about the scope and limitations of the dataset.

Comment by Jay Bailey on You Understand AI Alignment and How to Make Soup · 2022-05-28T11:20:47.251Z · EA · GW

Had to go digging into the paper to find a link, so I figured I'd add it to the comments: https://github.com/hendrycks/ethics

Comment by Jay Bailey on What is the journey to caring more about 1) others and 2) what is really true even if it is inconvenient? · 2022-05-28T08:36:58.077Z · EA · GW

For me, I feel like the big difference was around taking action, more than the other two. I heard about EA years ago, but only took action when I had already developed the habit of doing a good deed, however small or unimpactful, each day. Acting on a moral impulse had, for me, become habitual. So when I revisited EA, I decided to actually start donating, because the move from "Someone should do something" -> "I should do something" -> doing something had become much more a matter of habit for me.

I guess the lesson for this is that for people like me, something like Try Giving and committing just 1% of income or something small would have been a solid entry point, getting me into the habit of doing good.

Comment by Jay Bailey on I just found out that I missed the deadline to sign up for the online course by 1 day, is there anyone I can contact, or any chance someone can receive a late application if there is still space left? · 2022-05-24T06:27:25.022Z · EA · GW

Which course?

Comment by Jay Bailey on Complex Systems for AI Safety [Pragmatic AI Safety #3] · 2022-05-24T00:54:36.176Z · EA · GW

Possibly a newbie question: I noticed I was confused about the paragraph around deep learning vs. reinforcement learning. 

"One example of obviously suboptimal resource allocation is that the AI safety community spent a very large fraction of its resources on reinforcement learning until relatively recently. While reinforcement learning might have seemed like the most promising area for progress towards AGI to a few of the initial safety researchers, this strategy meant that not many were working on deep learning."

I thought that reinforcement learning was a type of deep learning. My own understanding is that deep learning is any form of ML using multilayered neural networks, and that reinforcement learning today uses multilayered neural networks - and thus could be called "deep reinforcement learning", but is generally just called RL for short. If that were true, it would mean RL research was also DL research.

Am I misunderstanding some of the terminology?

Comment by Jay Bailey on Against “longtermist” as an identity · 2022-05-14T00:28:15.114Z · EA · GW

One thing I'm curious about - how do you effectively communicate the concept of EA without identifying as an effective altruist?

Comment by Jay Bailey on Fermi estimation of the impact you might have working on AI safety · 2022-05-14T00:19:00.934Z · EA · GW

I've discovered something that is either a bug in the code, or a parameter that isn't explained super well. 

Under "How likely is it to work" I assume "it" refers to AGI safety. If so, this parameter is reversed - the more likely I say AGI safety is to work, the higher the x-risk becomes. If I set it to 0%, the program reliably tells me there's no chance the world ends.