Posts

Relationship Advice Repository 2022-06-20T15:38:11.940Z
Career Suggestion: Earning to Skill 2022-04-14T00:46:39.584Z
Ruby's Shortform 2019-09-24T17:58:35.445Z

Comments

Comment by Ruby on Relationship Advice Repository · 2022-06-21T19:13:56.382Z · EA · GW

Again, thanks for taking the time to engage.

I think this post is maybe a format that the EA Forum hasn't done before: it's intended to be a crowd-sourced repository of advice. That's also maybe not obvious because I "seeded" it with a lot of content I thought was worth sharing (and also to make it less sad if it didn't get many contributions – so far there have been a few).

As I wrote:

I've seeded this post with a mix of advice, experience, and resources from myself and a few friends, plus various good content I found on LessWrong through the Relationships tag. The starting content is probably not the very best content possible (if it was, why make this a thread?), but I wanted to launch with something. Don't anchor too hard on what I thought to include! Likely as better stuff is submitted, I'll move some stuff out of the post text and into comments to keep the post from becoming absurdly long.

I also solicit disagreement:

Please disagree with advice you think is wrong! (It probably makes sense to add notes/links about differing views next to advice items in the main text, so worth the effort to call out stuff you disagree with.)

If you're okay with it, I will add your points of disagreement into the main post.

It is definitely not comprehensive! I put this together within a few hours over the weekend; I did not aim to start off with everything that's relevant. (Somehow it still reached 10k words.) If someone has good content on childbirth, pregnancy, etc., I think that would be great to add. On reflection, I'm in favor of it being a behemoth, with people hunting for the sections relevant to them and/or later distillation.

it's not clear to me that a lot of this is actually very good advice for a lot of people.

I agree, because giving universal advice is extremely hard. The approach I'd advise is for people to read it and, if something seems like a good idea for them, try it. But also "consider reversing all advice you hear".

I'm also not sure why you linked to a list of 'negative expectation value' 'infohazard' questions that you don't recommend people do?

Because it's funny and fun. Note that I didn't write the text around it; like most of the text, it's stuff I copied in. Also, it's not a major world-ending infohazard, and it's clearly marked. And as a commenter wrote on LW, it's only an infohazard if your relationship is bad (I think his bar for good relationships is too high, but I agree that healthier relationships/people aren't at as much risk).

And finally, most bizarrely... why is 50% of the 'sex' advice section a survey on what it is like to have sex with one particular guy? 

Going back to how I was just seeding the crowd-sourced post, that survey was something I had on hand. I didn't have other material and didn't feel like going hunting for advice, but thought it'd be good if other people had things they wanted to recommend adding to that section. As I write in that section:

How to have good and healthy sex is beyond the intended scope for this thread, but I welcome people to add links to external resources here (or submit them via comments with spoiler text/warning, or the Google Form).

I agree that it'd be much better if that one link were not 50% of the list! But I actually think it's a helpful read for people who don't find it TMI.

Comment by Ruby on Relationship Advice Repository · 2022-06-21T18:56:44.324Z · EA · GW

Hi Larks, thanks for taking the time to engage.

I'm not sure how relevant this is to the EA forum?

I personally think that for Effective Altruists to be effective, they need to be healthy/well-adjusted/flourishing humans, and therefore something as crucial as good relationship advice ought to be shared on the EA Forum (much the same as productivity, agency, or motivation advice).

I didn't mention it in the post, but part of the impetus for this post came from Julia's recent Power Dynamics between people in EA post, which discusses relationships, and it seemed like collecting broader advice on that would make for a healthier community overall. Mm, that's a point I'd emphasize – healthy relationships between individuals make for a healthy community, especially when the individuals are working within and across EA orgs.

Comment by Ruby on Lifeguards · 2022-06-11T05:59:26.931Z · EA · GW

In terms of thinking about why solutions haven't been attempted, I'll plug Inadequate Equilibria, though it probably provides a better explanation for why problems in the broader world haven't been addressed. I don't think the EA world is yet in an equilibrium, so things don't get done because {it's genuinely a bad idea, it seems like the kind of thing you shouldn't do unilaterally and no one has built consensus, sheer lack of time}.

Comment by Ruby on Lifeguards · 2022-06-11T05:53:15.643Z · EA · GW

Good comment!!


Most ideas for solving problems are bad, so your prior should be that if you have an idea, and it's not being tried, probably the idea is bad;


A key thing here is being able to accurately judge whether the idea would be harmful if tried. "Prior is bad idea != EV is negative". If the idea is a random research direction, it probably won't hurt anyone if you try it. On the other hand, certain kinds of community coordination attempts, for example, deplete a common resource and interfere with other attempts, so the fact that no one else is acting is a reason to hesitate.

Going to people who you think maybe ought to be acting and asking them why they're not doing a thing is probably a thing that should be encouraged and welcomed? I expect in most cases the answer will be "lack of time" rather than anything more substantial. 

Comment by Ruby on Sort forum posts by: Occlumency (Old & Upvoted) · 2022-05-15T20:27:29.261Z · EA · GW

For LessWrong, we've thought about some kind of "karma over views" metric for a while. We experimented a few years ago, but it proved to be a hard UI design challenge to make it work well. Recently we've thought about having another crack at it.
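
To make the idea concrete, here's a minimal sketch of one way such a metric could be computed; the smoothing constant and exact formula are my own illustrative assumptions, not what LessWrong actually tried.

    # A minimal sketch, assuming a smoothed karma-per-view ratio; this is an
    # illustration, not LessWrong's actual experiment.
    def karma_over_views(karma: int, views: int, prior_views: int = 100) -> float:
        """Score a post by karma per view, damped so posts with very few
        views don't dominate the ranking off a lucky early upvote."""
        return karma / (views + prior_views)

    # A well-tested post outranks a barely-viewed one with a "perfect" raw ratio.
    print(karma_over_views(120, 400))  # 0.24
    print(karma_over_views(3, 3))      # ~0.029

A damping term like this is one way to keep brand-new posts with a handful of views from floating to the top of a karma-per-view ranking.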

Comment by Ruby on Why CEA Online doesn’t outsource more work to non-EA freelancers · 2022-05-07T17:55:12.421Z · EA · GW

Yes! This. Thank you for writing.

I often get asked why LessWrong doesn't hire contractors in the meantime while we're hiring, and this is the answer. In particular, getting contractors to do good work would require all of the onboarding that getting a team member to do good work requires.

Comment by Ruby on Did Peter Thiel give "the keynote address at an EA conference"? · 2022-05-07T17:25:51.603Z · EA · GW

He also gave a talk at the EA Summit 2014.

Comment by Ruby on EAG is over, but don't delete Swapcard · 2022-05-01T04:59:40.359Z · EA · GW

I don't mean that I expect EA Forum software to replace Swapcard for EAG itself (probably not), just that the goal is to provide similar functionality all year round.

Comment by Ruby on EAG is over, but don't delete Swapcard · 2022-04-24T07:08:21.580Z · EA · GW

My understanding (which could be wrong, and I hope they don't mind me mentioning it on their behalf) is that the EA Forum dev team is working to build Swapcard functionality into the forum, including the ability to import your Swapcard data.

In the meantime, I agree with the OP.

Comment by Ruby on Career Suggestion: Earning to Skill · 2022-04-14T17:07:00.545Z · EA · GW

I bet that if they are impressive to you (and your judgment is reasonable), you can convince grantmakers at present.

Comment by Ruby on Career Suggestion: Earning to Skill · 2022-04-14T10:17:54.818Z · EA · GW

But there already is from the major funders.

Comment by Ruby on Career Suggestion: Earning to Skill · 2022-04-14T04:58:57.001Z · EA · GW

Thank you for the detailed reply!

I agree that Earning to Give may make sense if you're neartermist or don't share the full moral framework. This is why my next sentence begins "if you'd be donating to longtermist/x-risk causes." I could have emphasized these caveats more.

I will say that if a path is not producing value, I very much want to demotivate people from pursuing that path. They should do something else! One should only be motivated by things that deserve motivation.

I've looked at the posts you shared and I don't find them compelling. 

I think the best previous argument for Earning to Give is that you as a small donor might be able to fund things that the major funders won't or can't, but my current sense is that their bar is sufficiently low that it is now very hard to find such opportunities (within the x-risk/longtermist space and framework, at least). Things that seem remotely like a good idea get funding now.

I think that the reason we're not hiring more people isn't for lack of money, as discussed on that post.

There might be crazy future scenarios where EA suddenly needs a tremendous amount of money, more than all the funders currently have (or will have), in which case additional funds might be useful. But it seems that if we really thought this was the case, the big funders should raise the bar and not fund as many things as generously as they do.

Comment by Ruby on Career Suggestion: Earning to Skill · 2022-04-14T03:43:05.576Z · EA · GW

Earn to Learn

Well dang.

Comment by Ruby on How might a herd of interns help with AI or biosecurity research tasks/questions? · 2022-03-21T05:20:59.701Z · EA · GW

Maybe the Visible Thoughts Project.

Comment by Ruby on EA Forum feature suggestion thread · 2022-02-16T17:16:31.776Z · EA · GW

[Speaking from LessWrong here:] based on our experiments so far, I think there's a fair amount more work to be done before we'd want to widely roll out a new voting system. Unfortunately for this feature, development is paused while we work on some other stuff.

Comment by Ruby on Petrov Day Retrospective: 2021 · 2021-10-21T21:57:26.389Z · EA · GW

LessWrong Petrov Day Retrospective, just published.

Comment by Ruby on Clarifying the Petrov Day Exercise · 2021-09-28T02:01:52.378Z · EA · GW

 I also see that a lot of the issues were predictable from last year's comments but were not addressed.

This is my fault. I was the lead organizer for Petrov Day this year, though I wasn't an organizer in previous years. I recalled that there were issues with ambiguity last year, which I attempted to address (albeit quite unsuccessfully). However, I didn't go through and read/re-read all of the comments from last year; if I had done so, I might have corrected more of the design.

I'm sorry for the negative experience you had due to the poor design. I do think it's bad for people to find themselves threatened by social consequences over something they weren't given proper context for.

If I'm involved in next year's Petrov Day, I plan on there being consent mechanisms, as you suggest.

Comment by Ruby on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T03:15:17.340Z · EA · GW

LessWrong mod speaking here. Just wanted to confirm that everything written here is correct. 

To be clear, only the identity of the account that enters a valid code will be shared.

Comment by Ruby on Clarifying the Petrov Day Exercise · 2021-09-27T00:53:57.086Z · EA · GW

Seems good to me too.

Comment by Ruby on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T21:32:33.698Z · EA · GW

There's a user setting that lets you do this. 

Comment by Ruby on Our plans for hosting an EA wiki on the Forum · 2021-04-14T16:20:44.877Z · EA · GW

There is already a (clunky) feature that enables this.

If you hyperlink text with a tag URL that includes the URL parameter ?useTagName=true, the hyperlinked text will be replaced by whatever the current name of the tag is.

E.g., if the tag is called "Global dystopia" and I hyperlink text in a post or another tag with the URL /global-dystopia?useTagName=true, and the tag then gets renamed to "Dystopia":

  1. The old URL will still work
  2. The text "Global dystopia" will be replaced with the current name "Dystopia"
     

See: https://www.lesswrong.com/posts/E6CF8JCQAWqqhg7ZA/wiki-tag-faq#What_are_the_secret_fancy_URL_parameters_for_linking_to_tags_
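
For intuition, here is a minimal sketch (my own illustration, not LessWrong's implementation) of the behavior described above: the link stores the tag's stable slug, and when the query string carries useTagName=true, the anchor text is swapped at render time for the tag's current name.

    # Hypothetical slug -> current-name lookup; the real data lives in the
    # site's database. Here the tag was renamed "Global dystopia" -> "Dystopia".
    TAG_NAMES = {"global-dystopia": "Dystopia"}

    def render_tag_link(href: str, original_text: str) -> str:
        """Render a tag link, substituting the tag's current name when the
        URL carries useTagName=true."""
        slug, _, query = href.lstrip("/").partition("?")
        use_current = "useTagName=true" in query
        text = TAG_NAMES.get(slug, original_text) if use_current else original_text
        return f'<a href="/{slug}">{text}</a>'

    print(render_tag_link("/global-dystopia?useTagName=true", "Global dystopia"))
    # -> <a href="/global-dystopia">Dystopia</a>

This is also why the old URL keeps working: the slug resolves regardless of the tag's display name, while the visible text stays up to date.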

Comment by Ruby on LessWrong/EA New Year's Ultra Party · 2021-01-01T22:23:31.902Z · EA · GW

Ah, sorry, we meant to fix that. Should have all been CET.

Comment by Ruby on LessWrong/EA New Year's Ultra Party · 2020-12-30T23:59:34.341Z · EA · GW

Collaborative calendar/schedule for the event is now live! https://docs.google.com/spreadsheets/d/1xUToQ-Wu6w-Uaow7q8Bo5s61beWWRJhIh9P-DNAvx4Q/edit?usp=sharing

Please add any events or activities you'd like to run. Comment here or in the doc if you have questions, e.g. about good places to host your session.

Comment by Ruby on LessWrong/EA New Year's Ultra Party · 2020-12-30T07:43:18.388Z · EA · GW

Upvote suggestions from others if you like them too.

Comment by Ruby on LessWrong/EA New Year's Ultra Party · 2020-12-30T07:40:23.104Z · EA · GW

The Ultra Party Radio (TM) has been constructed (bottom right of attached image). We'll be streaming tunes to the entire Garden from our own server, but the music will be optimized for the ballroom dancefloor.

What music would you like to hear? Please comment with:

  1. Genres
  2. Specific song requests
  3. Playlists you might like us to use or borrow from
Comment by Ruby on LessWrong/EA New Year's Ultra Party · 2020-12-29T06:39:00.374Z · EA · GW

We've now got a rough map of the venue. 

Comment by Ruby on LessWrong/EA New Year's Ultra Party · 2020-12-28T21:52:19.158Z · EA · GW

Some images of the party locations to pump the imagination:

Early testing of the ballroom
Here fishy, fishy
Meet new people in the Violet Study
Comment by Ruby on LessWrong/EA New Year's Ultra Party · 2020-12-28T21:48:58.881Z · EA · GW

We've now designated activities for many different regions of the Walled Garden. If you're interested in hosting or attending a specific activity, please comment. The organizers can help you set it up and put it on the Official Party Schedule.

The following are running throughout the party, but it seems great to have more specific things scheduled for people with shared interests to join.

Ballroom: dancing, toasts & roasts, countdown
Violet Study: meet new people
Moloch Maze: games, e.g., poker, Among Us
Great Library (1st floor): deep philosophical conversations
Orrery: make and discuss predictions for 2021
Map Room: reflect on where you've been and where you're going
Hell: complain about 2020
Heaven:....
etc!

Comment by Ruby on Ruby's Shortform · 2019-09-24T17:58:35.565Z · EA · GW

Just a thought: there's the common advice that fighting all out with the utmost desperation makes sense for very brief periods, a few weeks or months, but doing so for longer leads to burnout. So you get sayings like "it's a marathon, not a sprint." But I wonder whether the length of the "fight"/"war" is really the only variable in sustainable effort. Other key ones might be the degree of ongoing feedback and certainty about the cause.

Though I expect a multiyear war that is an existential threat to your home and family to be extremely taxing, I imagine soldiers experience less burnout than people investing similar effort in a far-mode cause – say, global warming, which might be happening but is slow, with your contributions to preventing it unclear. (Actual soldiers may correct me on this, and I can believe war is very traumatizing, though I will still ask how much they believed in the war they were fighting.)

(Perhaps the relevant variable here is something like Hanson's Near vs. Far mode thinking, where hard effort for a far-mode cause more readily leads to burnout than for a near-mode one, even when sustained for long periods.)

Then of course there's EA and X-risk generally, where burnout is common. Is this just because of the time scales involved, or is it because working on x-risk is subject to so much uncertainty and paucity of feedback? Who knows if you're making a positive difference? Contrast this with a Mario character toiling for years to rescue the princess he is certain is locked in a castle waiting: fighting enemy after enemy, sleeping on cold stone night after night, eating scraps. I suspect Mario, with his certainty and much more concrete sense of progress, might be able to expend much more effort and endure much more hardship for much longer than is sustainable in the EA/X-risk space.

Related: On Doing the Improbable

Comment by Ruby on Suggestions for EA wedding vows? · 2019-03-22T16:15:21.338Z · EA · GW

Holly's answer reminded me of some of the passages we used in a 3rd-anniversary mini-ceremony last year. The mini-ceremony included a couple of posts from the Sequences, one of which appears to be mostly an expansion of that point from Origin.

Comment by Ruby on Suggestions for EA wedding vows? · 2019-03-22T16:04:32.991Z · EA · GW

Congratulations!! Marriage between the right people is wonderful.

Miranda Dixon-Luinenburg and I had EA themes throughout our wedding ceremony. You're welcome to read and borrow from our ceremony text. (Eventually I'll post the audio recordings too, but they need some significant audio clean-up.)

Context: we had our wedding in a planetarium and had our friends write speeches, each according to a particular theme, combining into an overall arc. Each speech was read while a matching starscape was projected on the dome.

Comment by Ruby on Empathic communication and strategy for Effective Altruism, Part 1 · 2015-09-25T23:32:29.370Z · EA · GW

Firstly, thanks for the post above! These are important questions to consider.

I think the main point of your post is that the misperception of EAs as cold is preventing growth, and that's why we'd want to correct it. Habryka replied that what really matters is "are we growing the EA ecosystem in the right way?" In your response to him, you say that you argue for warmer language because it corrects a false perception of us and because it's a common point of criticism.

But to reply to Habryka, a clearer argument is needed for why those things matter. It could be the case that we are warm and our language falsely makes us seem cold, yet this isn't a problem because it doesn't adversely affect growing the EA ecosystem in a healthy way, even if some people are turned off by it.

Also, it might not be practical or worthwhile to defend against every criticism based on a misrepresentation. This critique might not even be the critic's true rejection of EA: defend against this one, and they'll generate new critiques based on other distortions of who we are. Is this a critique we particularly need to defend against because it's damaging us more than the next critique they'll focus on?