Posts

Jack R's Shortform 2022-08-09T06:16:30.576Z
[Link-post] Beware of Other-Optimizing 2022-08-06T22:57:22.940Z
Consequentialists (in society) should self-modify to have side constraints 2022-08-03T22:47:48.681Z
If you ever feel bad about EA social status, try this 2022-04-10T00:15:13.364Z
Community builders should learn product development models 2022-04-01T00:07:37.287Z
23 career choice heuristics 2022-02-26T01:14:08.932Z
Have you noticed costs of being anticipatory? 2022-02-14T19:16:30.810Z
Ideas for avoiding optimizing the wrong things day-to-day? 2022-01-26T10:38:58.817Z
Redwood Research is hiring for several roles 2021-11-29T00:18:38.205Z
How would you gauge random undergrads' "EA potential"? 2021-09-03T06:51:03.260Z

Comments

Comment by Jack R (JackRyan) on Yale EA got an office. How did it go? · 2022-08-16T05:40:38.379Z · EA · GW

Of course feel free not to share, but I'd be curious to see a photo of the inside of the office! Partly I am curious because I imagine that how nice a place it is (e.g. whether there is a fridge) could make a big difference to how much people tend to hang out there.

Comment by Jack R (JackRyan) on The Parable of the Boy Who Cried 5% Chance of Wolf · 2022-08-15T19:22:50.808Z · EA · GW

Relatedly: Heuristics That Almost Always Work

Comment by Jack R (JackRyan) on Jack R's Shortform · 2022-08-11T06:16:58.164Z · EA · GW

Concept-shaped holes are such a useful concept; from what I can tell, it seems like a huge amount of miscommunication happens because people have somewhat different understandings of the same word.

I think I interpret people's advice and opinions pretty differently now that I'm aware of concept-shaped holes.

Comment by Jack R (JackRyan) on Are "Bad People" Really Unwelcome in EA? · 2022-08-09T20:59:22.584Z · EA · GW

It seems possible to me that you have a concept-shaped hole for the concept "bad people"

Comment by Jack R (JackRyan) on Jack R's Shortform · 2022-08-09T06:16:30.794Z · EA · GW

I have found it useful and interesting to build a habit of noticing an intuition and then thinking of arguments for why that intuition is worth listening to. It has caused me to find some pretty interesting dynamics that it seems like naive consequentialists/utilitarians aren't aware of.

One concern about this is that you might be able to find arguments for any conclusion you seek out arguments for; the counter to this is that your intuition doesn't give random answers and is actually fairly reliably correct, hence explicit arguments that explain your intuition are somewhat more likely than random to correspond to reality, making these arguments useful to discover.

This definitely goes better if you are aware of the systematic errors your intuition can make (i.e. cognitive biases).

Comment by Jack R (JackRyan) on Consequentialists (in society) should self-modify to have side constraints · 2022-08-08T23:28:11.844Z · EA · GW

I'm noticing two ways of interpreting/reacting to this argument:

  • "This is incredibly off-putting; these consequentialists aren't unlike charismatic sociopaths who will try to match my behavior to achieve hidden goals that I find abhorrent" (see e.g. Andy Bernard from The Office; currently, this is the interpretation that feels most salient to me)
  • "This is like a value handshake between consequentialists and the rest of society: consequentialists may have different values than many other people (perhaps really only at the tail ends of morality), but it's worth putting aside our differences and working together to solve the problems we all care about rather than fighting battles that result in predictable loss"

Comment by Jack R (JackRyan) on [AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships · 2022-08-07T04:40:01.007Z · EA · GW

Makes sense - thanks Asya!

Comment by Jack R (JackRyan) on Consequentialists (in society) should self-modify to have side constraints · 2022-08-04T00:31:07.060Z · EA · GW

This is good to know - thank you for making this connection!

Comment by Jack R (JackRyan) on [AMA] Announcing Open Phil’s University Group Organizer and Century Fellowships · 2022-08-01T12:12:14.094Z · EA · GW

Notably, (and I think I may feel more strongly about this than others in the space), I’m generally less excited about organizers who are ambitious or entrepreneurial, but less truth-seeking, or have a weak understanding of the content that their group covers.

Do you feel that you'd rather have the existing population of community builders be a bit more ambitious or a bit more truth-seeking? Or: if you could suggest an improvement in only one of these virtues to community builders, which would you choose? ETA: Does the answer feel obvious to you, or is it a close call?

Comment by Jack R (JackRyan) on Interesting vs. Important Work - A Place EA is Prioritizing Poorly · 2022-07-29T03:46:33.069Z · EA · GW

"Interesting" is subjective, but there can still be areas that a population tends to find interesting. I find David's proposals of what the EA population tends to find interesting plausible, though ultimately the question could be resolved with a survey

Comment by Jack R (JackRyan) on A summary of every "Highlights from the Sequences" post · 2022-07-16T05:09:11.945Z · EA · GW

Thanks for this! I enjoyed the refresher + summaries of some of the posts I hadn't yet read.

Comment by Jack R (JackRyan) on Doom Circles · 2022-07-08T21:13:24.482Z · EA · GW

I'm not familiar with the opposite type of circle format

Me neither really - I meant to refer to a hypothetical activity.

And thanks for the examples!

Comment by Jack R (JackRyan) on Doom Circles · 2022-07-08T20:03:36.280Z · EA · GW

Does anyone have an idea why doom circles have been so successful compared to the opposite type of circle where people say nice things about each other that they wouldn't normally say?

Relatedly, I have a hypothesis that the EA/rationalist communities are making mistakes that they wouldn't make if they had more psychology expertise. For instance, my impression is that many versions of positivity measurably improve performance/productivity and many versions of negativity worsen performance (though these impressions aren't based on much research), and I suspect if people knew this, they would be more interested in trying the opposite of a doom circle.

Comment by Jack R (JackRyan) on You should join an EA organization with too many employees · 2022-05-22T19:57:27.505Z · EA · GW

Ah I see — thanks!

Comment by Jack R (JackRyan) on You should join an EA organization with too many employees · 2022-05-22T19:52:41.393Z · EA · GW

Thanks!

Is it correct that this assumes that the marginal cost of supporting a user doesn’t change with the firm’s scale? It seems like some amount of the 50x difference between the EAF and Reddit could be explained by the EAF having fewer benefits of scale, since it is a smaller forum (though should this be counterbalanced by it being a higher-quality forum?)

Continuing the discussion since I am pretty curious how significant the 50x is, in case there is a powerful predictive model here

Comment by Jack R (JackRyan) on You should join an EA organization with too many employees · 2022-05-21T21:57:31.303Z · EA · GW

Could someone show the economic line of reasoning one would use to predict ex ante from the Nordhaus research that the Forum would have 50x more employees per user? (FYI, I might end up working it out myself.)

Comment by Jack R (JackRyan) on Some potential lessons from Carrick’s Congressional bid · 2022-05-18T07:50:13.121Z · EA · GW

Maybe someone should user-interview or survey Oregonians to see what made people not want to vote for Carrick

Comment by JackRyan on [deleted post] 2022-05-18T07:44:08.327Z

No worries! Seemed mostly coherent to me, and please feel free to respond later.

I think the thing I am hung up on here is what counts as "happiness" and "suffering" in this framing.

Comment by JackRyan on [deleted post] 2022-05-18T06:02:48.386Z

Could you try to clarify what you mean by the AI (or an agent in general) being "better off"?

Comment by JackRyan on [deleted post] 2022-05-18T04:21:35.254Z

I’m actually a bit confused here, because I'm not settled on a meta-ethics: why isn't it the case that a large part of human values is about satisfying the preferences of moral patients, and that human values consider any or most advanced AIs as non-trivial moral patients?

I don't put much weight on this currently, but I haven't ruled it out.

Comment by Jack R (JackRyan) on Choosing causes re Flynn for Oregon · 2022-05-18T04:15:31.899Z · EA · GW

If you had to do it yourself, how would you go about a back-of-the-envelope calculation for estimating the impact of a Flynn donation?

Asking this question because I suspect that other people in the community won't actually do this, and because you are maybe one of the best-positioned people to do it, given that you seem interested in it.

Comment by Jack R (JackRyan) on Rational predictions often update predictably* · 2022-05-16T21:27:51.439Z · EA · GW

Yeah, I had to look this up

Comment by Jack R (JackRyan) on Rational predictions often update predictably* · 2022-05-16T03:37:19.500Z · EA · GW

e.g. from P(X) = 0.8, I may think in a week I will - most of the time - have notched this forecast slightly upwards, but less of the time notching it further downwards, and this averages out to E[P(X) [next week]] = 0.8.

I wish you had said this in the BLUF -- it is the key insight, and the one that made me go from "Greg sounds totally wrong" to "Ohhh, he is totally right"

ETA: you did actually say this, but you said it in less simple language, which is why I missed it
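In symbols (my restatement, assuming the forecaster updates by conditioning on whatever evidence arrives between now and next week): the current forecast is the expectation of the future forecast,

$$\mathbb{E}\big[P_{\text{next week}}(X) \mid \text{what I know now}\big] = P_{\text{now}}(X) = 0.8,$$

so individual updates can be predictable in direction (small notches up most of the time) as long as the rarer, larger moves downward balance them out.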

Comment by Jack R (JackRyan) on Against “longtermist” as an identity · 2022-05-13T22:35:46.326Z · EA · GW

I really like your drawings in section 2 -- they convey the idea surprisingly succinctly.

Comment by Jack R (JackRyan) on A retroactive grant for creating the HPMoR audiobook (Eneasz Brodski)? · 2022-05-10T22:59:27.320Z · EA · GW

Ha!

Comment by Jack R (JackRyan) on [8] Meditations on Moloch (Alexander, 2014) · 2022-05-09T07:39:44.736Z · EA · GW

  • Note to self: I should really, really try to avoid speaking like this when facilitating in the EA intro fellowship

Hah!

Comment by Jack R (JackRyan) on Most problems fall within a 100x tractability range (under certain assumptions) · 2022-05-09T01:31:23.954Z · EA · GW

The entire time I've been thinking about this, I've been thinking of utility curves as logarithmic, so you don't have to sell me on that. I think my original comment here is another way of understanding why tractability perhaps doesn't vary much between problems, not within a problem.

Comment by Jack R (JackRyan) on Most problems fall within a 100x tractability range (under certain assumptions) · 2022-05-09T01:19:30.168Z · EA · GW

Ah, I see now that within a problem, tractability shouldn't change as the problem gets less neglected if you assume that u(r) is logarithmic, since then the derivative scales like 1/R, making tractability roughly constant (proportional to 1/u_total).
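Spelling that step out (a sketch, assuming u(r) = k·ln(r) for some constant k, and using the Tractability = u'(R)·R / u_total formalization from the comment below):

$$u'(R)\cdot R = \frac{k}{R}\cdot R = k \quad\Rightarrow\quad \text{Tractability} = \frac{u'(R)\cdot R}{u_{\text{total}}} = \frac{k}{u_{\text{total}}},$$

which has no dependence on R, i.e. it stays fixed as the problem becomes less neglected.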

Comment by Jack R (JackRyan) on Most problems fall within a 100x tractability range (under certain assumptions) · 2022-05-09T01:13:01.903Z · EA · GW

But why is tractability roughly constant with neglectedness in practice? Equivalently, why are there logarithmic returns to many problems?

I don't yet see why logarithmic utility holds if and only if tractability doesn't change with neglectedness.

Comment by Jack R (JackRyan) on Most problems fall within a 100x tractability range (under certain assumptions) · 2022-05-09T00:40:06.660Z · EA · GW

There was an inference there -- you need tractability to balance with neglectedness for the cost-effectiveness to come out equal.

Comment by Jack R (JackRyan) on Most problems fall within a 100x tractability range (under certain assumptions) · 2022-05-08T23:36:27.737Z · EA · GW

I don't know if I understand why tractability doesn't vary much. It seems like it should be able to vary just as much as cost-effectiveness can vary. 

For example, imagine two problems with the same cost-effectiveness, the same importance, but one problem has 1000x fewer resources invested in it. Then the tractability of that problem should be 1000x higher [ETA: so that the cost-effectiveness can still be the same, even given the difference in neglectedness.]

Another example:  suppose an AI safety researcher solved AI alignment after 20 years of research. Then the two problems "solve the sub-problem which will have been solved by tomorrow" and "solve AI alignment" have the same local cost-effectiveness (since they are locally the same actions), the same amount of resources invested into each, but potentially massively different importances. This means the tractabilities must also be massively different.

These two examples lead me to believe that in as much as tractability doesn't vary much, it's because of a combination of two things:

  1. The world isn't dumb enough to massively underinvest in really cost-effective and important problems
  2. The things we tend to think of as problems are "similarly sized" or something like that

I'm still not fully convinced, though, and am confused for instance about what "similarly sized" might actually mean.

 

Comment by Jack R (JackRyan) on Most problems fall within a 100x tractability range (under certain assumptions) · 2022-05-08T23:24:23.591Z · EA · GW

When I formalize "tractability" it turns out to be directly related to neglectedness. If R is the number of resources currently invested in a problem, u(r) is the difference in world utility from investing 0 vs. r resources into the problem, and u_total is u(r) once the problem is solved, then tractability turns out to be:

Tractability = u'(R) * R / u_total

So I'm not sure I really understand yet why tractability wouldn't change much with neglectedness. I have preliminary understanding, though, which I'm writing up in another comment.
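For reference, here is a sketch of where that expression comes from, reading tractability as "fraction of the problem solved per fractional increase in resources" (my phrasing of the usual "% solved per % increase in resources" definition):

$$\text{Tractability} = \frac{u'(R)\,\Delta R / u_{\text{total}}}{\Delta R / R} = \frac{u'(R)\cdot R}{u_{\text{total}}}$$

The factor of R in the numerator is the inverse of neglectedness, which is why the two terms are entangled unless u'(R) falls off like 1/R.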

Comment by Jack R (JackRyan) on Most problems fall within a 100x tractability range (under certain assumptions) · 2022-05-04T08:34:31.175Z · EA · GW

each additional doubling will solve a similar fraction of the problem, in expectation

Aren't you assuming the conclusion here?

Comment by Jack R (JackRyan) on The future is good "in expectation" · 2022-04-22T00:00:33.823Z · EA · GW

As a note, it's only ever the case that something is good "in expectation" from a particular person's point of view or from a particular epistemic state. It's possible for someone to disagree with me because they know different facts about the world, and so for instance think that different futures are more or less likely. 

In other words, the expected value referred to by the term "expectation" is subtly an expected value conditioned on a particular set of beliefs.
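In symbols (a rough sketch; p is the speaker's credence distribution over possible futures w, and U their value function), the relevant quantity is

$$\mathbb{E}_{w \sim p}\left[U(w)\right],$$

and the p is doing real work: someone with a different p computes a different expectation over the very same set of possible futures.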

Comment by Jack R (JackRyan) on FTX/CEA - show us your numbers! · 2022-04-19T10:59:51.343Z · EA · GW

I disagree with your reasons for downvoting the post, since I generally judge posts on their content, but I do appreciate your transparency here and found it interesting to see that you disliked a post for these reasons. I’m tempted to upvote your comment, though that feels weird since I disagree with it

Comment by Jack R (JackRyan) on Free-spending EA might be a big problem for optics and epistemics · 2022-04-16T15:47:17.236Z · EA · GW

Because of Evan's comment, I think that the signaling consideration here is another example of the following pattern:

Someone suggests we stop (or limit) doing X because of what we might signal by doing X, even though we think X is correct. But this person is somewhat blind to the negative signaling effects of not living up to our own stated ideals (i.e. having integrity). It turns out that some more rationalist-type people report that they would be put off by this lack of honesty and integrity (speculation: perhaps because these types have an automatic norm of honesty).

The other primary example of this I can think of is with veganism and the signaling benefits (and usually unrecognized costs).

A solution is, when you find yourself saying “X will put off audience Y”, to ask yourself “but what audience does X help attract, and who is put off by my alternative to X?”

Comment by Jack R (JackRyan) on Free-spending EA might be a big problem for optics and epistemics · 2022-04-16T15:43:14.489Z · EA · GW

[ETA: this is mostly directed at the OP, not you Evan]

Because of your comment, I think that this is another example of a thing I’ve seen before with EA community prescriptions:

Someone suggests we stop (or limit) doing X because of what we might signal by doing X, even though we think X is correct. But this person is somewhat blind to the negative signaling effects of not living up to our own stated ideals (i.e. having integrity). It turns out that some more rationalist-type people report that they would be put off by this lack of honesty and integrity (speculation: perhaps because these types have an automatic norm of honesty).

The other primary example of this I can think of is with veganism and the signaling benefits (and under-stated costs).

A solution is, when you find yourself saying “X will put off audience Y”, to ask yourself “but what audience does X help attract, and who is put off by my alternative to X?”

Comment by Jack R (JackRyan) on How to become an AI safety researcher · 2022-04-13T07:24:43.815Z · EA · GW

Maybe someone should compile a bunch of exercises that train the muscle of formalizing intuitions

Comment by Jack R (JackRyan) on Free-spending EA might be a big problem for optics and epistemics · 2022-04-13T07:04:48.923Z · EA · GW

FWIW, Chris didn't say what you seem to be claiming he said

Comment by Jack R (JackRyan) on If you ever feel bad about EA social status, try this · 2022-04-11T01:48:11.386Z · EA · GW

Oh, interesting, thanks for this.

I think before assuming you made a mistake you could add the question of "if someone did that thing to me, could I easily forgive them?" If the answer is yes, then maybe don't sweat it because generally we think of ourselves way more than we think others do[1]

I really like this advice, and I just realized I use this trick sometimes.

Comment by Jack R (JackRyan) on The Vultures Are Circling · 2022-04-08T03:18:33.673Z · EA · GW

I might make it clearer that your bullet points are what you recommend people not do. I was skimming and at first was close to taking away the opposite of what you intended.

Comment by Jack R (JackRyan) on Good practices for changing minds · 2022-04-07T19:27:13.326Z · EA · GW

I might add something along the lines of "have them lead the conversation by letting their questions and vague feelings do the steering"

Comment by Jack R (JackRyan) on Community builders should learn product development models · 2022-04-01T20:36:43.812Z · EA · GW

Thank you Peter! Definitely taking a look at the books and resources. Also, I now link your comment in the tldr of the post :)

Comment by Jack R (JackRyan) on I feel anxious that there is all this money around. Let's talk about it · 2022-04-01T03:12:44.246Z · EA · GW

I have seen little evidence that FTX Future Fund (FFF) or EA Infrastructure Fund (EAIF) have lowered their standards for mainline grants

FFF is new, so that shouldn't be a surprise.

Comment by Jack R (JackRyan) on Companies with the most EAs and those with the biggest potential for new Workplace Groups · 2022-03-12T21:55:19.049Z · EA · GW

I’d be curious to see how many people each of these companies employ + the % of employees who are EAs

Comment by Jack R (JackRyan) on 23 career choice heuristics · 2022-03-04T02:07:23.862Z · EA · GW

[First comment was written without reading the rest of your comment. This is in reply to the rest.]

Re: whether a company adds intrinsic value, I agree, it isn't necessarily counterfactually good, but that's sort of the point of a heuristic -- most likely you can think of cases where all of these heuristics fail. By prescribing a heuristic, I don't mean to say the heuristic always holds, just that using the heuristic vs. not happens to, on average, lead to better outcomes.

Serial entrepreneur seems to also be a decent heuristic.

 

Comment by Jack R (JackRyan) on 23 career choice heuristics · 2022-03-04T02:02:18.260Z · EA · GW

I haven't thought about it deeply, but the main thing I was thinking here was that I think founders get the plurality of credit for the output of a company, partly because I just intuitively believe this, and partly because, apparently, not many people found things. This is an empirical claim, and it could be false e.g. in worlds where everyone tries to be a founder, and companies never grow, but my guess is that the EA community is not in that world. So this heuristic tracks (to some degree) high counterfactual impact/neglectedness.

Comment by Jack R (JackRyan) on 23 career choice heuristics · 2022-02-27T17:46:27.735Z · EA · GW

This heuristic is meant to be a way of finding good opportunities to learn (which is a way to invest in yourself to improve your future impact) and it’s not meant to be perfect.

Comment by Jack R (JackRyan) on Some thoughts on vegetarianism and veganism · 2022-02-18T21:59:36.578Z · EA · GW

I'm still not very convinced of your original point, though -- when I simulate myself becoming non-vegan, I don't imagine this counterfactually causing me to lose my concern for animals, nor does it seem like it would harm my epistemics (though I'm not sure I trust my inner sim here). If anything, it seems like going non-vegan would help my epistemics, since in my case being vegan costs enough time that it is harmful to future generations for me to be vegan, and by continuing to be vegan I am choosing to ignore that fact.

Comment by Jack R (JackRyan) on Some thoughts on vegetarianism and veganism · 2022-02-18T21:56:22.166Z · EA · GW

it would make me deeply sad and upset

That makes sense, yeah. And I could see this being costly enough that it's best to continue avoiding meat.