Comments

Comment by moses on Do research organisations make theory of change diagrams? Should they? · 2020-07-22T07:58:35.303Z · score: 9 (5 votes) · EA · GW

I haven’t actually seen any examples of ToC diagrams from research orgs except the two shown above.

A good example of a ToC diagram is this old Leverage Research plan.

Comment by moses on Dealing with Network Constraints (My Model of EA Careers) · 2020-05-08T09:15:39.853Z · score: 1 (1 votes) · EA · GW

Has this post happened anywhere?

Comment by moses on EA Organization Updates: March 2020 · 2020-04-18T06:02:01.706Z · score: 1 (1 votes) · EA · GW

FHI staff were asked to give advice at the highest level of government in the U.K. and the Czech Republic

Is there more info anywhere on the connection between FHI and the Czech govt?

Comment by moses on Countering imposter syndrome · 2019-08-29T18:54:11.894Z · score: 1 (1 votes) · EA · GW

I think the first step, if you believe you're less competent than your colleagues believe you to be, is to find out who's wrong—you, them, or both? And are you wrong about your assessment of yourself, or about what your colleagues think of you, or both? Think about what questions you could ask or what metrics you could measure to answer these questions.

If it's your colleagues who are wrong, is it worth correcting them? They understand the risks; they know that recruitment is hit and miss. Is it your responsibility to protect them? You can live in fear of the moment when you'll be found out, or you can cherish the days when you are allowed to do the job, and accept your fate with equanimity. You're not getting your head cut off; you can choose how you feel about this.

Comment by moses on casebash's Shortform · 2019-08-22T14:16:42.694Z · score: 1 (1 votes) · EA · GW

Oh, I would've sworn that was already the case (with the understanding that, as you say, there is less volunteering involved: the "inner" movement is smaller, more selective, and has tighter, more personal relationships, so there is much less friction in the movement of money, whether in the form of employment contracts or grants).

Comment by moses on If physics is many-worlds, does ethics matter? · 2019-07-11T13:16:18.752Z · score: 1 (1 votes) · EA · GW

So, to simplify your problem: I help someone, but somewhere else there is someone else who I wasn't able to help. Wat do?

You're in this precise situation regardless of quantum physics; I guarantee you won't be able to save everyone in your personal future light cone either. So I think that should simplify your question a bunch.

Why would this change your metaethical position? The reason you'd want to help someone else shouldn't change if I make you aware of some additional people somewhere whom you're not capable of helping.

Comment by moses on I find this forum increasingly difficult to navigate · 2019-07-05T16:07:20.814Z · score: 28 (11 votes) · EA · GW

Both here and on LW, I have /allPosts bookmarked, "Sorted by Daily"; that helps. I haven't used the front page in ages.

Comment by moses on I find this forum increasingly difficult to navigate · 2019-07-05T16:05:48.644Z · score: 10 (4 votes) · EA · GW

Just as a data point, I didn't read OP as an attack at all.

I also don't think that if your overall feedback is negative, you necessarily have to come up with some good things to say as well, just to balance things out and "be nice". OP said what they wanted to say, and it reads to me like valuable feedback, including the subtle undertone of frustration.

As a data point on the object level, I think that magic sorting makes sense on a website with intense traffic (HN, reddit), not on a site with a few posts a day.

Comment by moses on What new EA project or org would you like to see created in the next 3 years? · 2019-06-26T07:06:34.016Z · score: 1 (1 votes) · EA · GW

Ah, got it.

Comment by moses on What new EA project or org would you like to see created in the next 3 years? · 2019-06-24T16:22:36.065Z · score: 2 (2 votes) · EA · GW

Oh, I thought you were referring to some kind of legal costs. You mean the costs of vetting. Right. As has been noted: EA is vetting constrained, EA is network constrained.

But this is the case with employees as well, isn't it? It's just about vetting people in general.

One thing I notice, looking at the 80k job board, is that not that many EA(-adjacent) orgs are interested in remote workers.

Comment by moses on What new EA project or org would you like to see created in the next 3 years? · 2019-06-24T14:50:58.491Z · score: 1 (1 votes) · EA · GW

The costs to set up contractor relationships are considerable

I'm curious, how does that work in the US? Why is contract work different in this regard from receiving services from any other type of supplier?

Comment by moses on Raemon's EA Shortform Feed · 2019-06-23T12:31:02.706Z · score: 2 (2 votes) · EA · GW

Hmm, it's not so much the classic rationalist trait of overthinking that I'm concerned about. It's more like…

First, when you do X, the brain has a pesky tendency to learn exactly X. If you set out to practice thinking, the brain improves at the activity of "practicing thinking". If you set out to achieve something that will require serious thinking, you improve at serious thinking in the process. Trying to try and all that. So yes, practicing thinking, but you can't let your brain know that that's what you're trying to achieve.

Second, "thinking for real" sure is work, but the next question is, is this work worth doing? When you start with some tangible end goal and make plans by working your way backwards to where you are now, that informs you what thinking works needs to be done, decreasing the chance that you'll waste time on producing research which looks nice and impressive and all that, but in the end doesn't help anyone improve the world.

I guess if you come up with technology that allows people to plug into the world-saving-machine at the level of "doing research-assistant-kind-of-work for other people who know what they're doing" and gradually work their way up to "being one of the people who know what they're doing", that would make this work.

You wouldn't be "practicing thinking"; you could easily convince your brain that you're actually trying to achieve something in the real world, because you could clearly follow some chain of sub-sub-agendas to sub-agendas to agendas and see that what you're working on is for real.

And, by the same token, you'd be working on something that (someone believes) needs to be done. And maybe sometimes you'd realize that, no, actually, this whole line of reasoning can be cut out or de-prioritized, here's why, etc.—and that's how you'd gradually grow to be one of the people who know what they're doing.

So, yeah, proceed on that, I guess.

Comment by moses on Raemon's EA Shortform Feed · 2019-06-22T22:14:17.194Z · score: 4 (3 votes) · EA · GW

Ah.

An important facet of the Middle of the Middle is that people don't yet have the agency or context needed to figure out what's actually worth doing, and a lot of the obvious choices are wrong.

This seems to me like two different problems:

Some people lack, as you say, agency. This is what I was talking about—they're looking for someone to manage them.

Other people are happy to do things on their own, but they don't have the necessary skills and experience, so they will end up doing something that's useless in the best case and actively harmful in the worst case. This is a problem which I missed before but now acknowledge.

Normally I would encourage practicing doing (or, ideally, you know, doing) rather than practicing thinking, but when doing carries the risk of harm, thinking starts to seem like a sensible option. Fair enough.

Comment by moses on Raemon's EA Shortform Feed · 2019-06-20T19:10:41.710Z · score: 7 (5 votes) · EA · GW

I think a big problem for EA is not having a clear sense of what mid-level EAs are supposed to do.

Funny—I think a big problem for EA is mid-level EAs looking over their shoulders for someone else to tell them what they're supposed to do.

Comment by moses on Raemon's EA Shortform Feed · 2019-06-20T19:08:44.552Z · score: 1 (1 votes) · EA · GW

I'll take your invitation to treat this as an open thread (I'm not going to EAG).

before you're ready to tackle anything real ambitious... what should you do?

Why not tackle less ambitious goals?

Comment by moses on Not getting carried away with reducing extinction risk? · 2019-06-04T17:58:32.593Z · score: 1 (1 votes) · EA · GW

I'm going to speak for myself again:

I view our current situation as a fork in the road. Either very bad outcomes or very good ones. There is no slowing down. There is no scenario where we linger before the fork for decades or centuries.

As far as very bad outcomes go, I'm not worried about extinction that much; dead people cannot suffer, at least. What I'm most concerned about is locking ourselves into a state of perpetual hell (e.g. undefeatable totalitarianism, or something like Christiano's first tale of doom) and then spreading that hell across the universe.

The very good outcomes would mean that we're recognizably beyond the point where bad things could happen: we've built a superintelligence, it's well-aligned, and it's clear to everyone that there are no risks anymore. The superintelligence will prevent wars, pandemics, asteroids, supervolcanoes, disease, death, poverty, suffering, you name it. There will be no such thing as "existential risk".

Of course, I'm keeping an eye on the developments and I'm ready to reconsider this position at any time; but right now this is the way I see the world.

Comment by moses on Not getting carried away with reducing extinction risk? · 2019-06-01T17:51:33.672Z · score: 10 (10 votes) · EA · GW

If humanity wipes itself out, those wild animals are going to continue suffering forever.

If we only partially destroy civilization, we're going to set back the solution to problems like wild animal suffering until (and if) we rebuild civilization. (And in the meantime, we will suffer as our ancestors suffered.)

If we nuke the entire planet down to bedrock or turn the universe into paperclips, that might be a better scenario than the first one in terms of suffering, but then all of the anthropic measure is confined to the past, where it suffers, and we're forgoing the creation of an immeasurably larger measure of extremely positive experiences to balance things out.

On the other hand, if we just manage to pass through the imminent bottleneck of potential destruction and emerge victorious on the other side—where we have solved coordination and AI—we will have the capacity to solve problems like wild animal suffering, global poverty, or climate change with a snap of our fingers, so to speak.

That is to say, problems like wild animal suffering will either be solved with trivial effort a few decades from now, or we will have much, much bigger problems. Either way—and this is my personal view, not necessarily that of other "long-termists"—current work on these issues will be mostly in vain.

Comment by moses on Why do you downvote EA Forum posts & comments? · 2019-05-30T07:19:27.969Z · score: 8 (6 votes) · EA · GW

Most often I downvote a post when I'm reasonably confident that it would be a waste of time for others to open and read it (confused, off-topic, rambling, trivial, etc.)—my goal with voting is to make recommendations to others.

I rarely downvote comments, typically only when someone's not playing nice, but that's more on LW than here.

Comment by moses on Meditation and Effective Altruism · 2019-04-23T16:00:53.782Z · score: 4 (3 votes) · EA · GW

I think it's more than a matter of the quantity of thinking; I think there's a qualitative difference in whether the underlying motive for even starting the train of thought is "I intend to do X, so I have to plan the steps that constitute X", or whether it's "X scares the fuck out of me and I have to avoid doing X in a way that System 2 can rationalize to itself, so it's either (1) go stare in the fridge, (2) masturbate, (3) deep-clean the bathroom, or (4) start a google doc brainstorming all the concerns I should take into account when prioritizing the various sub-tasks of X. Hmm, 4 sounds like something System 2 would eat up, the absolute dumbass."

Comment by moses on Meditation and Effective Altruism · 2019-04-23T13:56:09.407Z · score: 15 (8 votes) · EA · GW

Re: productivity—from personal experience, meditation also seems to help with overthinking. I think that Rationalists in particular have the nasty habit of endless intellectualizing about how to beat akrasia and get themselves to do X; it seems that as you meditate, the addiction to this mental movement fades and then it's not appealing anymore, so you go do X instead.

Comment by moses on Meditation and Effective Altruism · 2019-04-23T13:49:49.861Z · score: 12 (7 votes) · EA · GW

Nice summary of the benefits, thanks.

To new practitioners, I would strongly suggest following much more detailed instruction than that given here; for example, I follow the meditation guide The Mind Illuminated, which I can wholeheartedly recommend. It will make your meditation more productive and more enjoyable.

Comment by moses on What are people's objections to earning-to-give? · 2019-04-14T14:05:18.854Z · score: 31 (18 votes) · EA · GW

I'm not in a position where EtG would seem reasonable, but I can imagine the psychological obstacles which would arise if I were in that position. E.g.:

If you're one of the x-risk-oriented people (like me), rather than, say, global-poverty-oriented, your money wouldn't typically go to people who are much worse off than you, in Africa and elsewhere. It would typically go to support people like AI and generalist researchers, content creators, event organizers, and their support staff—people who are notably better off than you. They spend their days doing work which feels meaningful and enjoyable, often they live (and pay rent!) in the Bay Area, surrounded by fellow EAs and Rationalists, and they enjoy the high social status that the EA community assigns to people who do direct work.

Meanwhile, you spend 8 hours a day doing… well, a job. The people there might be nice enough, but probably not exactly your kind. You're probably working on something that (roughly speaking) doesn't matter. And your future prospects are gloomy: if you really give away a significant portion of your income, rather than saving up, you'll keep toiling as a wage slave for deeecaaadeees before you can afford to retire.

This is indeed something that might make rational sense (if you're somehow particularly ill-equipped for direct work), but it just feels… unfair?

Comment by moses on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-12T16:24:35.017Z · score: 5 (4 votes) · EA · GW

Yes, that helps, thanks. "Mediating" might be a word which would convey the idea better.

Comment by moses on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-11T18:34:00.403Z · score: 3 (2 votes) · EA · GW

Is there any resource (e.g. a blog post) for people curious about what "facilitating conversations" involves?

Comment by moses on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-28T18:34:52.332Z · score: 23 (16 votes) · EA · GW

I agree with Brendon that the Hotel should charge the tenants, and the tenants should seek their own funding.

If I were contemplating donating to the Hotel, the decision would hinge almost entirely on who is at the hotel and what they are working on. Moreover, I expect I would almost certainly want to tie my donation to a specific tenant or group of tenants, because I wouldn't a priori expect all of them to be good donation targets.

At this point, why would I not just fund the specific person directly? Better yet, why would I not donate to the EA Funds/CEA and let professional grant-makers sift through the tenants' personal applications?

When I look at the current guest list, all it offers is very short, general introductory paragraphs. Surely you wouldn't expect a grant-maker to make a funding decision based on these.

The Hotel itself is a cool idea which makes sense: create a transient EA hub somewhere where land is cheap. I love it. I am in fact one of those people who were excited about the project when it was first announced.

But in the current model, you cannot separate funding the Hotel from funding the specific people who stay there, and potential donors just don't have enough information about those people to confidently fund them.

Comment by moses on Severe Depression and Effective Altruism · 2019-03-26T14:26:53.870Z · score: 0 (2 votes) · EA · GW

I have several thoughts on this, but I only have time for one right now:

I'm not a psychiatrist, but I would suggest that the thoughts we have when we're mentally healthy are the valid ones, and the thoughts we have when we're depressed are the twisted, irrational ones.

I know that when you're depressed, it seems that you're seeing things more clearly, but I think that a psychiatrist would tell you that's not the case.

So if your healthy self feels okay about not performing up to your depressed self's standards, I would strongly suggest deferring to the healthy self (by postponing all decisions until you're healthy again).

Comment by moses on SHOW: A framework for shaping your talent for direct work · 2019-03-14T13:13:30.740Z · score: 9 (6 votes) · EA · GW

It's been said that EA is vetting constrained, but in some deep sense it's more like that EA (and the world) is constrained on the amount of people that don't need to be told what to do.

Great, I feel less crazy when other people have the same thoughts as me. From my comment a week ago:

The high-profile EA orgs are not bottlenecked on "structure" or "network"; they're bottlenecked because there's a hundred people requiring management for every one person willing to manage others.

Comment by moses on SHOW: A framework for shaping your talent for direct work · 2019-03-14T10:30:36.032Z · score: 3 (3 votes) · EA · GW

Yes, makes sense.

EA should try to make people feel relevant if and only if they're doing good.

I would even say something like "iff they're making an honest attempt at doing good", because the kids are suffering from enough crippling anxiety as it is :)

Comment by moses on SHOW: A framework for shaping your talent for direct work · 2019-03-13T19:54:40.486Z · score: 21 (13 votes) · EA · GW

achieved their prominence

Aha! This made it click for me. I was confused by this whole issue where people can't get jobs at prestigious EA orgs. Something felt backwards about it.

Let's say you want to solve some problem in the world and you conclude that the most effective way for you to push on the problem is to take the open research position at organization X.

But you find out that there's someone even better for that position than you who will take it. Splendid! Now your hands are free to take the only slightly less effective position at organization Y! It's as if you got a clone for free—now we're surely getting closer to the solution of the problem than you originally expected!

But again, you find out someone better suited will be taking the position instead of you. Marvelous! So many people are working on the problem; as someone who just wants the problem solved (right?), you couldn't wish for anything better! Off to the next task on the to-do list—hopefully someone is already taking care of that one as well!

…But, weirdly enough, as people get rejected from position after position, they get more and more frustrated and sullen. How so?

I think it makes more sense to me if, instead of "how can I maximize the amount of progress made on the most important problem", I model people as asking "how can I achieve prominence in the EA community?" Then, of course, if it's someone else achieving prominence instead of you, you're going to get frustrated instead of delighted.

Does this make sense to anyone else or have I read too much Robin Hanson?

Comment by moses on What to do with people? · 2019-03-06T21:18:36.708Z · score: 10 (7 votes) · EA · GW

I feel you could come to the same conclusions/prescriptions with a much simpler underlying framework:

In order to utilize human effort, someone must come up with some valuable activity to pipe that effort into. A manager/employer, roughly speaking.

Some people manage/employ themselves; they find something to pipe their efforts into on their own. Maybe they start a project, a charity, a startup, organize a local group or an event, what have you.

Some people are even willing to manage/employ other people: they come up with so many ideas of what to do that they can keep multiple people busy.

Other people require external management/employment; they look for pre-defined jobs to slot themselves into.

[Rest of comment edited for clarity:]

The practical suggestions seem to fall into two categories:

"Be more self-managing, stop looking for a job and come up with your own idea what you can do"—e.g., organize events, do research on your own.

"Delegate"—e.g. distill the 80k know-how and delegate coaching. But the people at 80k don't have the time to actively orchestrate this. Again, there will need to be people who actively step up and make this happen.

So I think you could take out all the hierarchy stuff, radically simplifying the idea, and still make roughly the same suggestions:

Stop looking for other people to manage you. If you show up looking for a job, requiring management from other people who are already busy managing themselves or others, you're adding to their burden, not easing it. The high-profile EA orgs are not bottlenecked on "structure" or "network"; they're bottlenecked because there's a hundred people requiring management for every one person willing to manage others. Create your own research agenda, start your own EA org, organize your own event, find out on your own how some aspect of the EA community could be improved, propose a solution, implement it.

Comment by moses on So you want to do operations [Part two] - how to acquire and test for relevant skills · 2019-03-04T15:53:53.289Z · score: 1 (1 votes) · EA · GW

Yes, I don't. The result page is broken (the previous pages work fine).

Comment by moses on So you want to do operations [Part two] - how to acquire and test for relevant skills · 2018-12-17T11:41:20.134Z · score: 1 (1 votes) · EA · GW

Just a heads up regarding the HEXACO personality test website that was mentioned: it seems to be broken right now, so instead of results, you get a bunch of lines like this: Notice: Undefined offset: 3 in /home/hexaco/domains/hexaco.org/public_html/classes/Statistics.php on line 35

I didn't find any other HEXACO test online; did anyone else? (Or has the official website worked for anyone else?)