Posts

[U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government 2020-04-10T04:54:25.630Z · score: 31 (10 votes)
Double Crux prompts for Effective Altruists 2018-10-12T23:12:19.707Z · score: 12 (10 votes)

Comments

Comment by elityre on EricHerboso's Shortform · 2020-09-16T06:47:12.065Z · score: 27 (6 votes) · EA · GW
I know that I was wrong because people of the global majority continuously speak in safe spaces about how they feel unsafe in EA spaces. They speak about how they feel harmed by the kinds of things discussed in EA spaces. And they speak about how there are some people — not everyone, but some people — who don’t seem to be just participating in open debate in order to get at the truth, but rather seem to be using the ideals of open discussion to be a cloak that can hide their intent to do harm.

I'm not sure what to say to this.

Again, just because someone claims to feel harmed by some thread of discourse, that can't be sufficient grounds to establish a social rule against it. But I am most baffled by this...

And they speak about how there are some people — not everyone, but some people — who don’t seem to be just participating in open debate in order to get at the truth, but rather seem to be using the ideals of open discussion to be a cloak that can hide their intent to do harm.

Um. Yes? Of course? It's pretty rare that people are in good faith and sincerely truth-seeking. And of course there are some bad actors, in every group. And of course those people will be pretending to have good intentions. Is the claim that in order to feel safe, people need to know that there are no bad actors? (I think that is not a good paraphrase of you.)

We need a diverse set of people to at least feel safe in our community.

Yeah. So the details here matter a lot, and if we operationalize, I might change my stance here. But on the face of this, I disagree. I think that we want people to be safe in our community, and that we should obviously take steps to ensure that. But it seems to be asking too much to ensure that people feel safe. People can have all kinds of standards regarding what they need to feel safe, and I don't think that we are obligated to cater to them just because they are on the list of things that some segment of people need to feel safe.

Especially if one of the things on that list is "don't openly discuss some topics that are relevant to improving the world." That is what we do. That's what we're here to do. We should sacrifice pretty much none of the core point of the group to be more inclusive.

"How much systemic racism is there, what forms does it take, and how does it impact people?" are actually important questions for understanding and improving the world. We want to know if there is anything we can do about it, and how it stacks up against other interventions. Curtailing that discussion is not a small or trivial ask.

(In contrast, if using people's preferred pronouns, or serving vegan meals at events, or not swearing, or not making loud noises, etc. helped people feel safe and/or comfortable, and they are otherwise up for our discourse standards, I feel much more willing to accommodate them. Because none of those compromise the core point of the EA community.)


...Oh. I guess one thing that seems likely to be a crux:

...if we are to succeed in truly achieving effective altruism at scale...

I am not excited about scaling EA. If I thought that trying to do EA at scale was a good idea, then I would be much more interested in having different kinds of discussions in push and pull media.

Comment by elityre on EricHerboso's Shortform · 2020-09-16T06:33:56.553Z · score: 16 (5 votes) · EA · GW
Some speech is harmful. Even speech that seems relatively harmless to you might be horribly upsetting for others. I know this firsthand because I’ve seen it myself.

I want to distinguish between "harmful" and "upsetting". It seems to me that there is a big difference between shouting 'FIRE' in a crowded theater or "commanding others to do direct harm" on the one hand, and "being unable to focus for hours" after reading a Facebook thread or being exhausted from fielding questions on the other.

My intuitive grasp of these things has it that the "harm" of the first category is larger than that of the second. But even if that isn't true, and the harm of reading racist stuff is as bad as literal physical torture, there are a number of important differences.

For one thing, the speech acts in the first category have physical, externally legible bad consequences. This matters, because it means we can have rules around those kinds of consequences that can be socially enforced without those rules being extremely exploitable. If we adopt a set of discourse rules that say "we will ban any speech act that produces significant emotional harm", then anyone not acting in good faith can shut down any discourse that they don't like by claiming to be emotionally harmed by it. Indeed, they don't even need to be consciously malicious (though of course there will be some explicitly manipulative bad actors): this creates a subconscious incentive to be and act more upset than you would otherwise be by some speech acts, because if you are sufficiently upset, the people saying things you don't like will stop.

Second, I note that both of the examples in the second category are much easier to avoid than those in the first category. If there are Facebook threads that drain your ability to focus for hours, it seems pretty reasonable to avoid those threads. Most of us have some kind of political topics that we find triggering, and a lot of us find that browsing Facebook at all saps our motivation. So we have workarounds to avoid that stuff. These workarounds aren't perfect, and occasionally you'll encounter material that triggers you. But it seems way better to have that responsibility be on the individual. Hence the idea of safe spaces in the first place.

Furthermore, there are lots of things that are upsetting (for instance, that there are people dying of preventable malaria in the third world right now, and that this, in principle, could be stopped if enough people in the first world knew and cared about it, or that the extinction of humanity is plausibly imminent), which are nevertheless pretty important to talk about.

Comment by elityre on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-15T20:22:46.276Z · score: 3 (2 votes) · EA · GW

I think this comment says what I was getting at in my own reply, though more strongly.

Comment by elityre on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-15T20:19:42.534Z · score: 9 (5 votes) · EA · GW

First of all, I took this comment to be sincere and in the spirit of dialog. Thank you and salutations.

[Everything that I say in this comment is tentative, and I may change my mind.]

Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for us to tell our fellow EAs to cut it out.

If that were actually happening, I would want to think more about the specific case (and talk directly to the people involved), but I'm inclined to bite the bullet of allowing that sort of conversation.

The main reason is that (I would guess, though you can say more about your state of mind) there is an implicit premise underlying the stance that we shouldn't allow that kind of talk: namely, that "the Holocaust happened, and Holocaust denial is false".

Now, my understanding is that there is an overwhelming historical consensus that the Holocaust happened. But the more I learn about the world, the more I discover that claims that I would have thought were absurd are basically correct, especially in politicized areas.

I am not so confident that the Holocaust happened, and especially that the Holocaust happened the way it is said to have happened, that I am willing to sweep aside any discussion to the contrary.

If they are making strong arguments for a false conclusion, then they should be countered with arguments, not social censure.

This is the case even if none of the EAs talking about it actually believe it. Even if they are just steel-manning devil’s advocates...

In the situation where EAs are making such arguments not out of honest truth-seeking, but as playing edge-lord / trying to get attention / etc., then I feel a lot less sympathetic. I would be more inclined to just tell them to cut it out in that case. (Basically, I would make the argument that they are doing damage for no gain.)

But mostly, I would say that if any people in an EA group were threatening violence, racially motivated or otherwise, we should have a zero-tolerance policy. That is where I draw the line. (I agree that there is a bit of a grey area in the cases where someone is politely advocating for violent action down the line, e.g. the Marxist who has never personally threatened anyone but is advocating for a violent revolution.)

...

Q1: Do you agree that this is a question of degree, not kind? If not, then the rest of this comment doesn't really apply.

I think so. I expect that any rigid rule is going to have edge cases that are bad enough that you should treat them differently. But I don't think we're on the same page about what the relevant scalar is.

If it became standard for undergraduate colleges to disallow certain forms of racist speech to protect students, would you be okay with copying those norms over to EA?

It depends entirely on what is meant by "certain forms", but on the face of it, I would not be okay with that. I expect that a lot of ideas and behaviors would get marked as "racist", because that is a convenient and unarguable way to attack those ideas.

I would again draw the line at the threat of violence: if a student group got together to discuss how to harass some racial minority, even just as a hypothetical (they weren't actually going to do anything), Eli-University would come down on them hard.

If a student group came together to discuss the idea of a white ethno-state, and the benefits of racial and cultural homogeneity, Eli-University would consider this acceptable behavior, especially if the epistemic norms of such a group are set high. (However, if I had past experience that such reading groups tended to lead to violence, I might watch them extra carefully.)

The ethno-state reading group is racist, and is certainly going to make some people feel uncomfortable, and maybe make them feel unsafe. But I don't know enough about the world to rule out discussion of that line of thinking entirely.

...

I will report here that a large number of people I see talking in private Facebook groups, on private slack channels, in PMs, emails, and even phone calls behind closed doors are continuously saying that they do not feel safe in EA spaces.

I would love to hear more about the details there. In what ways do people not feel safe?

(Is it things like this comment?)

I’m extremely privileged, so it’s hard for me to empathize here. I cannot imagine being harmed by mere speech in this way. But I can report from direct experience watching private Facebook chats and slack threads of EAs who aren’t willing to publicly talk about this stuff that these speech acts are causing real harm.

Yeah. I want to know more about this. What kind of harm?

My default stance is something like, "look, we're here to make intellectual progress, and we gotta be able to discuss all kinds of things to do that. If people are 'harmed' by speech-acts, I'm sorry for you, but tough nuggets. I guess you shouldn't participate in this discourse. "

That said, if I had a better sense of what kind of harms are resulting, I might have a different view, or it might be more obvious where there are cheap tradeoffs to be made.

Is the harm small enough to warrant just having these potential EAs bounce off? Or would we benefit from pushing such speech acts to smaller portions of EA so that newer, more diverse EAs can come in and contribute to our movement? I hope that you'll agree that these are questions of degree, not of kind.

Yep. I think I do, though I think that the indifference curve is extremely lopsided, for EA in particular.

...

I agree that one of the things that makes EA great is the quality of its epistemic discourse. I don’t want my words here to be construed that I think we should lower it unthinkingly. But I do think that a counterbalancing force does exist: being so open to discussion of any kind that we completely alienate a section of people who otherwise would be participating in this space.

I'm tentatively suggesting that we should pay close to no attention to the possibility of alienating people, and just try to do our best to actually make progress on the intellectual project.

It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.

Comment by elityre on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-15T19:31:20.738Z · score: 3 (2 votes) · EA · GW

I don't follow how what you're saying is a response to what I was saying.

I think a model by which people gradually "warm up" to "more advanced" discourse norms is false.

I wasn't saying "the point of different discourse norms in different EA spaces is that it will gradually train people into more advanced discourse norms." I was saying that if I was mistaken about that "warming up effect", it would cause me to reconsider my view here.

In the comment above, I am only saying that I think it is a mistake to have different discourse norms at the core vs. the periphery of the movement.

Comment by elityre on Some thoughts on the EA Munich // Robin Hanson incident · 2020-09-07T09:15:25.008Z · score: 32 (14 votes) · EA · GW

I think there is a lot of detail and complexity here and I don't think that this comment is going to do it justice, but I want to signal that I'm open to dialog about these things.

For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.

On the face of it, this seems like a bad idea to me. I don't want "introductory" EA spaces to have different norms than advanced EA spaces, because I only want people to join the EA movement to the extent that they have very high epistemic standards. If people wouldn't like the discourse norms in the central EA spaces, I don't want them to feel comfortable in the more peripheral EA spaces. I would prefer that they bounce off.

To say it another way, I think it is a mistake to have "advanced" and "introductory" EA spaces, at all.

I am intending to make a pretty strong claim here.

[One operationalization I generated, but want to think more about before I fully endorse it: "I would turn away billions of dollars of funding to EA causes, if that was purchased at the cost of 'EA's discourse norms are as good as those in academia.'"]

Some cruxes:

  • I think what is valuable about the EA movement is the quality of the epistemic discourse in the EA movement, and almost nothing else matters (and to the extent that other factors matter, the indifference curve heavily favors better epistemology). If I changed my mind about that, it would change my view about a lot of things, including the answer to this question.
  • I think a model by which people gradually "warm up" to "more advanced" discourse norms is false. I predict that people will mostly stay in their comfort zone, and people who like discussion at the "less advanced" level will prefer to stay at that level. If I were wrong about that, I would substantially reconsider my view.
  • Large numbers of people at the fringes of a movement tend to influence the direction of the movement, and significantly shape the flow of talent to the core of the movement. If I thought that you could have 90% of the people identifying as EAs have somewhat worse discourse norms than we have on the forum without meaningfully impacting the discourse or action of the people at the core of the movement, I think I might change my mind about this.

Comment by elityre on Dominic Cummings - An 'Odyssean' Education [review] · 2020-06-05T22:38:25.268Z · score: 2 (2 votes) · EA · GW
On the most crucial topics, and in capturing the nuance and complexity of the real world, this piece fails again and again: epistemic overconfidence plus uncharitable disdain for the work of others, spread thinly over as many topics as possible.

Interestingly, this reminds me of Nassim Nicholas Taleb.

Comment by elityre on [U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government · 2020-05-09T21:57:40.324Z · score: 1 (1 votes) · EA · GW

Another thing for people to keep in mind:

Apparently, if you want loan forgiveness, you can only spend 8 weeks' worth of the money on payroll.

From here,

If you’re a sole proprietor, you can have eight weeks of the loan forgiven as a replacement for lost profit. But you’ll need to provide documentation for the remaining two weeks worth of cash flow, proving you spent it on mortgage interest, rent, lease, and utility payments.

So if at some point you need to check boxes saying what you're applying to use this loan for, and you can check more than one box, you should check all of them, or at least payroll + something else. If you can only check one box, I guess check payroll.

I doubt that that box checking actually matters, but it seems prudent to do this, just in case it does.

Comment by elityre on [U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government · 2020-05-09T04:19:52.220Z · score: 9 (2 votes) · EA · GW

I recommend that everyone who is eligible apply through US Bank ASAP.

Other lenders might still work, but US Bank was by far the fastest. A person that I was coaching through this process and I both received our loans within 4 days of initially filling out their application (I say "initially" because there were several steps where they needed additional info).

Also, we now know that the correct answer to how many employees you have is "0 employees, it's just me", not "1 employee, because I employ myself."

Comment by elityre on [U.S. Specific] Free money (~$5k-$30k) for Independent Contractors and grant recipients from U.S. government · 2020-04-10T19:39:35.333Z · score: 3 (2 votes) · EA · GW

An email I received from Bench reads: "If your bank isn’t participating, your next best option is to apply through Fundera—they will match you with the best lender."

However, when I tried to fill out their application, they asked me to upload...

  • a business bank statement,
  • a copy of my drivers license,
  • proof of payroll (IRS Form 941),
  • and a voided business check,

...of which I have only one out of three.

Comment by elityre on What is EA opinion on The Bulletin of the Atomic Scientists? · 2019-12-10T19:53:59.680Z · score: 1 (1 votes) · EA · GW

Wat?

Comment by elityre on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-22T00:47:29.587Z · score: 20 (9 votes) · EA · GW

I think a lot of this is right and important, but I especially love:

Don't let the fact that Bill Gates saved a million lives keep you from saving one.

We're all doing the best we can with the privileges we were blessed with.

Comment by elityre on Bottlenecks and Solutions for the X-Risk Ecosystem · 2019-11-14T02:27:41.973Z · score: 1 (1 votes) · EA · GW

I like the breakdown of those two bullet points, a lot, and I want to think more about them.

Both of these I think are fairly easily measurable from looking at someone's past work and talking to them, though.

I bet that you could do that, yes. But that seems like a different question than making a scalable system that can do it.

In any case, Ben articulates, above, the view that generated my comment.

Comment by elityre on Raemon's EA Shortform Feed · 2019-11-13T22:37:49.247Z · score: 5 (3 votes) · EA · GW

What about Paul's Integrity for Consequentialists?

Comment by elityre on X-risk dollars -> Andrew Yang? · 2019-10-15T15:25:54.584Z · score: 25 (7 votes) · EA · GW
[Edit: it'd be very strange if we end up preferring candidates who hadn't thought about AI at all to candidates who had thought some about AI but don't have specific plans for it.]

That doesn't seem that strange to me. It seems to mostly be a matter of timing.

Yes, eventually we'll be in an endgame where the great powers are making substantial choices about how powerful AI systems will be deployed. And at that point I want the relevant decision makers to have sophisticated views about AI risk and astronomical stakes.

But in the decades before that final period, I probably prefer that governmental actors not really think about powerful AI at all because...

1. There's not much that those governmental actors can usefully do at this time.

2. The more discussion of powerful AI there is in the halls of government, the more likely someone is to take action.

Given that there's not much that can be usefully done, it's almost a tautology that any action taken is likely to be net-negative.

Additionally, there are specific reasons to think that governmental action is likely to be more bad than good.

  • Politicization:
    • As Ben says above, this incurs a risk of politicizing the issue, which prevents good discourse in the future and traps the problem in a basically tribal-political frame. (Much as global climate change, a technical problem with consequences for everyone on planet earth, has been squashed into a frame of "liberal vs. conservative.")
  • Swamping the field:
    • If the president of the United States openly says that AI alignment is a high priority for our generation, that makes AI alignment (or rather, things called "AI alignment") high status, sexy, and probably a source of funding. This incentivizes many folks either to rationalize the work that they were already doing as "AI alignment" or to more genuinely try to switch into doing AI alignment work.
    • But the field of AI alignment is young and fragile: it doesn't yet have standard methods or approaches, and it is unlike most technical fields in that there is possibly a lot of foundational philosophical work to be done. The field does not yet have clear standards of what kind of work is good and helpful, and which problems are actually relevant. These standards are growing, slowly. For instance, Stuart Russell's new textbook is a very clear step in this direction (though I don't know if it is any good or not).
    • If we added 100 or 1000x more people to the field of AI alignment without having slowly built that infrastructure, the field would be swamped: there would be a lot of people trying to do work in the area, using a bunch of different methods, most of which would not be attacking the core problem (that's a crux for me). The signal-to-noise ratio would collapse. This would inhibit building a robust, legible paradigm that tracks the important part of the problem.
      • Elaborating: currently, the people working on AI alignment are unusually ideologically motivated (i.e. they're EAs), and the proportion of people working in the field who have deep inside-view models of what work needs to be done and why is relatively high.
      • If we incentivized working on AI alignment, via status or funding, more of the work in the area would be motivated by people seeking status or funding, instead of by a desire to solve the core problem. I expect that this would warp the direction of the field, such that most of the work done under the heading of "AI alignment" would be relatively useless.
      • (My impression is that this is exactly what happened with the field of nanotechnology: there was a relatively specific set of problems, leading up to specific technologies. The term "nanotech" became popular and sexy, and a lot of funding was available for "nanotech." The funders couldn't really distinguish between people trying to solve the core problems that were originally outlined and people doing other vaguely related work (see Paul Graham on "the Design Paradox"; the link to the full post is here). The people doing vaguely related work that they called nanotech got the funding and the prestige. The few people trying to solve the original problems were left out in the cold, and more importantly, the people who might have eventually been attracted to working on those problems were instead diverted to working on things called "nanotech." And now, in 2019, we don't have a healthy field building towards Atomically Precise Manufacturing.)
    • We do want 100x the number of people working on the problem, eventually, but it is very important to grow the field in a way that allows the formation of good open problems and standards.

My overall crux here is point #1, above. If I thought that there were concrete helpful things that governments could do today, I might very well think that the benefits outweighed the risks that I outline above.


Comment by elityre on Which scientific discovery was most ahead of its time? · 2019-05-19T18:16:43.303Z · score: 3 (2 votes) · EA · GW

This is a quote from somewhere? From where?

Comment by elityre on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-12T15:55:00.726Z · score: 16 (10 votes) · EA · GW

At the moment, not really.

There's the classic Double Crux post. Also, here's a post I wrote that touches on one sub-skill (out of something like 50 to 70 sub-skills that I currently know). Maybe it helps give the flavor.

If I were to say what I'm trying to do in a sentence: "Help the participants actually understand each other." Most people generally underestimate how hard this is, which is a large part of the problem.

The good thing that I'm aiming for in a conversation is when "that absurd / confused thing that X-person was saying clicks into place, and it doesn't just seem reasonable, it seems like a natural way to think about the situation".

Another frame is, "Everything you need to do to make Double Crux actually work."

A quick list of things conversational facilitation, as I do it, involves:

  • Tracking the state of mind of the participants. Tracking what's at stake for each person.
  • Noticing when Double Illusion of Transparency, or talking past each other, is happening, and having the participants paraphrase or operationalize. Or in the harder cases, getting each view myself, and then acting as an intermediary.
  • Identifying Double Cruxes.
  • Helping the participants to track what's happening in the conversation and how this thread connects to the higher level goals. Cleaving to the query.
  • Keeping track of conversational threads, and promising conversational tacks.
  • Drawing out and helping to clarify a person's inarticulate objections, when they don't buy an argument but can't say why.
  • Ontological translation: getting each participant's conceptual vocabulary to make natural sense to you, and then porting models and arguments back and forth between the differing conceptual vocabularies.

I don't know if that helps. (I have some unpublished drafts on these topics. Eventually they're to go on LessWrong, but I'm likely to publish rough versions on my musings and rough drafts blog, first.)

Comment by elityre on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-11T01:08:29.228Z · score: 10 (9 votes) · EA · GW
I really admire the level of detail and transparency going into these descriptions, especially those written by Oliver Habryka

Hear, hear.

I feel proud of the commitment to epistemic integrity that I see here.

Comment by elityre on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-11T01:03:48.296Z · score: 0 (2 votes) · EA · GW

[Are there ways to delete a comment? I started to write a comment here, and then added a bit to the top-level instead. Now I can't make this comment go away?]

Comment by elityre on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-11T00:52:04.294Z · score: 24 (12 votes) · EA · GW

A small correction:

Facilitating conversations between top people in AI alignment (I’ve in particular heard very good things about the 3-day conversation between Eric Drexler and Scott Garrabrant that Eli facilitated)

I do indeed facilitate conversations between high level people in AI alignment. I have a standing offer to help with difficult conversations / intractable disagreements, between people working on x-risk or other EA causes.

(I'm aiming to develop methods for resolving the most intractable disagreements in the space. The more direct experience I have trying my existing methods against hard, "real" conversations, the faster that development process can go. So, at least for the moment, it actively helps me when people request my facilitation. And also, a number of people, including Eric and Scott, have found it to be helpful for the immediate conversation.)

However, I co-facilitated that particular conversation between Eric and Scott. The other facilitators were Eliana Lorch, Anna Salamon, and Owen Cotton-Barratt.

Comment by elityre on My current thoughts on MIRI's "highly reliable agent design" work · 2018-11-13T19:08:15.961Z · score: 5 (4 votes) · EA · GW

(Eli's personal notes, mostly for his own understanding. Feel free to respond if you want.)

1. It seems pretty likely that early advanced AI systems won't be understandable in terms of HRAD's formalisms, in which case HRAD won't be useful as a description of how these systems should reason and make decisions.

My current guess is that the finalized HRAD formalisms would be general enough that they will provide meaningful insight into early advanced AI systems (even supposing that the development of those early systems is not influenced by HRAD ideas), in much the same way that Pearlean causality and Bayes nets give (a little) insight into what neural nets are doing.

Comment by elityre on Double Crux prompts for Effective Altruists · 2018-10-23T15:25:59.240Z · score: 0 (0 votes) · EA · GW

I'm not sure I follow. The question asks what the participants think is most important, which may or may not be diversity of perspectives. At least some people think that diversity of perspectives is a misguided goal that erodes core values.

Are you saying that this implies that "EA wants more of the same" because some new EA (call him Alex) will be paired with a partner (Barbra) who gives one of the above answers, and then Alex will presume that what Barbra said was the "party line" or "the EA answer" or "what everyone thinks"?

Comment by elityre on Double Crux prompts for Effective Altruists · 2018-10-22T23:31:45.792Z · score: 2 (2 votes) · EA · GW

I like these modified questions.

The reason why the original formulations are what they are is to get out of the trap of everyone agreeing that "good things are good", and to draw out specific disagreements.

The intention is that each of these has some sort of crisp "yes or no" or "we should or shouldn't prioritize X". But also the crisp "yes or no" is rooted in a detailed, and potentially original, model.

Comment by elityre on Double Crux prompts for Effective Altruists · 2018-10-22T22:25:17.118Z · score: 1 (1 votes) · EA · GW

What sort of discussions does this question generate?

Here are demographics that I've heard people list.

  • AI researchers (because of relevance to x-risk)
  • Teachers (for spreading the movement)
  • Hedge fund people (who are rich and analytical)
  • Startup founders (who are ambitious and agenty)
  • Young people/ college students (because they're the only people that can be sold on weird ideas like EA)
  • Ops people (because 80k and CEA said that's what EA needs)

All of these have very different implications about what is most important on the margin in EA.

Comment by elityre on Double Crux prompts for Effective Altruists · 2018-10-22T22:14:33.277Z · score: 1 (1 votes) · EA · GW

I think using EA examples in the double crux game may be a bad idea because it will inadvertently lead EAs to come away with a more simplistic impression of these issues than they should.

I mostly teach Double Crux and related material at CFAR workshops (the mainline, and speciality / alumni workshops). I've taught it at EAG 4 times (twice in 2017), and I can only observe a few participants in a session. So my n is small, and I'm very unsure.

But it seems to me that using EA examples mostly has the effect of fleshing out understanding of other EAs' views, more than flattening and simplifying. People are sometimes surprised by what their partner's cruxes are, at least (which suggests places where a straw model is getting updated).

But, participants could also be coming away with too much of an either-or perspective on these questions.

Comment by elityre on Double Crux prompts for Effective Altruists · 2018-10-22T22:05:53.891Z · score: 2 (2 votes) · EA · GW

I strongly agree that more EAs doing independent thinking is really important, and I'm very interested in interventions that push in that direction. In my capacity as a CFAR instructor and curriculum developer, figuring out ways to do this is close to my main goal.

I think many individual EAs should be challenged to generate less confused models on these topics, and from there between models is when deliberation like double crux should start.

Strongly agree.

I don't think in the span of only a couple minutes either side of a double crux game will generate an excellent but controversial hypothesis worth challenging.

I think this misses the point a little. People at EAG have some implicit model that they're operating from, even if it isn't well-considered. The point of the exercise in this context is not to get all the way to the correct belief, but rather to engage with what one thinks and what would cause them to change their mind.

This Double Crux exercise is part of the de-confusion and model-building process.

Comment by elityre on A Research Framework to Improve Real-World Giving Behavior · 2018-10-08T16:36:07.437Z · score: 6 (6 votes) · EA · GW

I'm not sure how much having a "watered down" version of EA ideas in the zeitgeist helps, because I don't have a clear sense of how effective most charities are.

If the difference between the median charity and the most impactful charity is 4 orders of magnitude ($1 to the most impactful charities does as much good as $10,000 to the median charity), then even a 100x improvement from the median charity is not very impactful. It's still only 1% as good as donating to the best charity. If that were the case, it's probably more efficient to just aim to get more people to adopt the whole EA mindset.

On the other hand, if the variation is much smaller, it might be the case that a 100x improvement gets you to about half of the impact per dollar of the best charities.

Which world we're living in matters a lot for whether we should invest in this strategy.
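
To make that arithmetic concrete, here is a minimal back-of-envelope sketch in Python (the numbers are purely illustrative, and `relative_value` is a hypothetical helper written for this example, not anything from an existing library):

```python
def relative_value(improvement_factor: float, best_vs_median_factor: float) -> float:
    """Fraction of the best charity's per-dollar impact captured by a donor who
    improves on the median charity by `improvement_factor`, given that the best
    charity is `best_vs_median_factor` times better than the median."""
    return improvement_factor / best_vs_median_factor

# Scenario A: the best charity is 4 orders of magnitude better than the median.
print(relative_value(100, 10_000))  # 0.01 -> a 100x improvement captures only 1%

# Scenario B: the best charity is only ~200x better than the median.
print(relative_value(100, 200))     # 0.5  -> a 100x improvement captures about half
```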

That said, promotion of EA principles, like cost effectiveness and EV estimates, separate from the EA brand, seems almost universally good, and extends far beyond people's choice of charities.

Comment by elityre on Bottlenecks and Solutions for the X-Risk Ecosystem · 2018-10-08T15:09:44.601Z · score: 13 (11 votes) · EA · GW

It seems to me that in many cases the specific skills that are needed are both extremely rare and not well captured by the standard categories.

For instance, Paul Christiano seems to me to be an enormous asset to solving the core problems of AI safety. If "we didn't have a Paul" I would be willing to trade huge amounts of EA resources to have him working on AI safety, and I would similarly trade huge resources to get another Paul-equivalent working on the problem.

But it doesn't seem like Paul's skillset is one that I can easily select for. He's knowledgeable about ML, but there are many people with ML knowledge (about 100 new ML PhDs each year). That isn't the thing that distinguishes him.

Nevertheless, Paul has some qualities, above and beyond his technical familiarity, that allow him to do original and insightful thinking about AI safety. I don't understand what those qualities are, or know how to assess them, but they seem to me to be much more critical than having object level knowledge.

I have close to no idea how to recruit more people that can do the sort of work that Paul can do. (I wish I did. As I said, I would give up way more than my left arm to get more Pauls).

But, I'm afraid there's a tendency here to goodhart on the easily measurable virtues, like technical skill or credentials.

Comment by elityre on Bottlenecks and Solutions for the X-Risk Ecosystem · 2018-10-08T14:57:26.738Z · score: 12 (8 votes) · EA · GW

In the short term, senior hires are most likely to come from finding and onboarding people who already have the required skills, experience, credentials and intrinsic motivation to reduce x-risks.

Can you be more specific about what the required skills and experience are?

Skimming the report, you say "All senior hires require exceptionally good judgement and decision-making." Can you be more specific about what that means and how it can be assessed?

Comment by elityre on Leverage Research: reviewing the basic facts · 2018-10-08T14:18:33.806Z · score: 5 (5 votes) · EA · GW

Intellectual contributions to the rationality community: including CFAR’s class on goal factoring

Just a note. I think this might be a bit misleading. Geoff and other members of Leverage Research taught a version of goal factoring at some early CFAR workshops. And Leverage did develop a version of goal factoring inspired by CT. But my understanding is that CFAR staff independently developed goal factoring (starting from an attempt to teach applied consequentialism), and this is an instance of parallel development.

[I work for CFAR, though I had not yet joined the EA or rationality community in those early days. I am reporting what other longstanding CFAR staff told me.]