Posts

How would you run the Petrov Day game? 2021-09-26T23:37:10.878Z
If You're So Smart, Why Aren't You Governor Of California? (Scott Alexander: Astral Codex Ten) 2021-08-26T11:08:59.801Z
A Case for Better Feeds 2021-08-24T16:28:37.743Z
What is the closest thing you know to EA that isn't EA? 2021-08-14T13:31:08.442Z
What EA online chat spaces exist? (link them here) 2021-08-14T12:57:03.027Z
What things did you do to gain experience related to EA? 2021-08-07T18:54:05.565Z
What EA projects could grow to become megaprojects, eventually spending $100m per year? 2021-08-06T11:24:31.775Z
Ways to impact public policy. In what situations are different options better? Are any better or worse in general? 2021-07-23T16:26:20.321Z
Should 80,000 hours run a tiktok account? 2021-06-12T12:41:31.743Z
EA Twitter Job Bots and more 2021-06-04T09:55:12.594Z
Heywood Foundation Public Policy Prize Entry - Existential Risk and Prediction Markets 2021-03-16T18:18:57.506Z
Where would you look for an EA job applications support group? 2021-03-16T12:28:23.708Z
How many hits do the hits of different EA sites get each year? 2021-03-04T16:34:00.664Z
What is something the reader should do? 2020-09-24T16:08:47.422Z
Economist: "What's the worst that could happen". A positive, sharable but vague article on Existential Risk 2020-07-08T10:37:08.017Z
What EA questions do you get asked most often? 2020-06-23T17:12:09.583Z
What are good software tools? What about general productivity/wellbeing tips? 2020-06-15T11:47:30.417Z
What are your favourite readings for weddings/funerals/important community events? 2020-05-28T11:14:57.508Z
What question would you want a prediction market about? 2020-01-19T14:16:57.356Z
What content do people recommend looking at when trying to find jobs? 2019-12-19T09:50:21.395Z
UK General Election: Where would you look for impact analysis of policy platforms? 2019-11-14T17:13:17.459Z
A rationalism discussion with some useful lessons on culture 2019-11-05T09:29:30.150Z
Nathan Young's Shortform 2019-11-05T09:11:05.415Z
Ineffective Altruism: Are there ideologies which generally cause their adherents to have worse impacts? 2019-10-17T09:24:22.687Z
Who should I put a funder of a $100m fund in touch with? 2019-10-16T10:22:51.502Z
How to do EA Global well? How can one act during EA Global so as to make the most effective changes afterwards? 2019-10-16T10:21:03.573Z
I Estimate Joining UK Charity Boards is worth £500/hour 2019-09-16T14:40:14.359Z
EU AI policy and OPSI Consultation 2019-08-08T09:51:15.589Z
What are your thoughts on my career change options? (AI public policy) 2019-07-19T16:50:08.813Z
Please May I Have Reading Suggestions on Consistency in Ethical Frameworks 2019-07-08T09:35:18.198Z
How does one live/do community as an Effective Altruist? 2019-05-15T21:20:02.296Z
This website is really functional and attractive (to me) 2019-05-08T08:56:19.942Z
Is EA unscalable central planning? 2019-05-07T07:25:07.387Z
If this forum/EA has ethnographic biases, here are some suggestions 2019-05-06T11:20:51.105Z
How do we check for flaws in Effective Altruism? 2019-05-06T10:59:41.441Z

Comments

Comment by Nathan Young (nathan) on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-27T11:50:15.205Z · EA · GW

I think this just gets back to what the game is.

If it's a game, I think what Peter did was fun and cool.

If it's a ritual, then yeah, maybe it was irresponsible (maybe not, I don't know).

Personally, it made me think about precommitments, which seems good, so I'm glad he did it.

Comment by Nathan Young (nathan) on How would you run the Petrov Day game? · 2021-09-27T06:53:46.888Z · EA · GW

Possible features:

  • Some way to be unilateralist in a defensive way
  • Sometimes it falsely reports that the other site has launched

Comment by Nathan Young (nathan) on How would you run the Petrov Day game? · 2021-09-27T00:06:43.008Z · EA · GW

If someone thinks it should be a community building ritual, I suggest they write an answer, for balance.

Comment by Nathan Young (nathan) on How would you run the Petrov Day game? · 2021-09-27T00:05:43.826Z · EA · GW

Jchan suggests that users could raise money for an effective charity, which would then be publicly wasted if the button gets pressed. That's a fun idea.

Comment by Nathan Young (nathan) on Clarifying the Petrov Day Exercise · 2021-09-26T23:53:00.800Z · EA · GW

If anyone wants to suggest what they'd like to see from Petrov Day, I've written a question here.

Comment by Nathan Young (nathan) on How would you run the Petrov Day game? · 2021-09-26T23:45:44.286Z · EA · GW

I'd make it clearly a lighthearted game.

  • It would be clearly stated that this is a game and that while the subject is serious, this game is not. No one will face social opprobrium for defecting.
  • The button pusher would be anonymous.
  • The sites would be blocked but links would still function. It would be a little inconvenient, but only for regular users.

I think some people have Petrov Days as serious rituals, but I think the EA Forum is too big for that. So why not embrace a little chaos and create a space for thinking about Petrov? I've thought about him a lot today, and I don't think that would be hurt by not taking the game too seriously.

Comment by Nathan Young (nathan) on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T23:18:03.089Z · EA · GW

This has been suggested to me.

Comment by Nathan Young (nathan) on Clarifying the Petrov Day Exercise · 2021-09-26T23:06:04.579Z · EA · GW

Thanks for writing this. I've been thinking about this a lot today.

Another purpose could be raising awareness of nuclear risk and unilateralism. And that could either be as a game or as a community building ritual. But I agree it should be clear which of those it is.

My guess is that it is meant to be a community-building exercise, but to add some jeopardy, it's been gamified. If it were opt-in, I reckon there would be little risk of the button being pressed, which would be less interesting. As Khorton says, this is pretty confusing and potentially harmful for those who misread the situation.

Comment by Nathan Young (nathan) on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T19:55:08.620Z · EA · GW

I'm coming to the conclusion that a private Petrov Day game is good, but a public one without community buy-in leads to a lot of tense disagreements about what the game means. In some ways that's a nice analogy for the human condition; in other ways it feels like afterwards we should have some kind of group therapy.

I think I'm softly in favour, but I'm glad this only happens once a year. Also, I'm 1% worried this is going to end in reputational damage to the community.

Comment by Nathan Young (nathan) on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T17:29:58.140Z · EA · GW

Yeah, 20k seemed about right to me also.

Comment by Nathan Young (nathan) on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T17:26:52.035Z · EA · GW

I think I'd ask for the community here to agree first, but if someone suggested an amount that got half the upvotes of the total of this page, I'd probably push it. That seems like the ethical choice.

Comment by Nathan Young (nathan) on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T16:47:45.557Z · EA · GW

I mean, it depends how much the terrorists are going to pay compared to the value of the damage. In the real world that value is unthinkably high; here, not so much.

Comment by Nathan Young (nathan) on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T16:35:29.486Z · EA · GW

Hmm, I guess I find it strange. To me, asking this question is part of taking this ritual seriously, i.e. how valuable is this ritual to maintain?

Comment by Nathan Young (nathan) on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T16:18:28.870Z · EA · GW

Ouch. It was a serious question: if someone were to pay $20k to the malaria coalition, that's 4 lives in expectation. That seems reasonable compensation for the loss to our community.

Could someone explain why asking how much we should charge to press the button is taboo?

Comment by Nathan Young (nathan) on Honoring Petrov Day on the EA Forum: 2021 · 2021-09-26T16:09:52.777Z · EA · GW

I have a code. How much should I charge in counterfactual donations to effective charities to push it? How much do we think it's worth to "win" this year's Petrov Day game?

Comment by Nathan Young (nathan) on EA Forum Creative Writing Contest: $10,000 in prizes for good stories · 2021-09-21T11:02:35.955Z · EA · GW

Yeah, I think your behaviour here is fine. But imagine I spent $10,000 advertising a post I wrote. Would that be okay? The question interested me.

Comment by Nathan Young (nathan) on EA Forum Creative Writing Contest: $10,000 in prizes for good stories · 2021-09-20T12:58:31.320Z · EA · GW

I suggest there should be an honourable mention for the most upvoted in each category if that entry doesn't win a prize. If the community thinks something is great and it doesn't get anything, I think that would be a shame.

Comment by Nathan Young (nathan) on EA Forum Creative Writing Contest: $10,000 in prizes for good stories · 2021-09-20T12:54:12.712Z · EA · GW

People have suggested story competitions before, but this has gotten way more upvotes. Why?

I guess it's because there is a 10k prize and because Aaron suggested it. 

If we were getting too biased toward the person writing forum articles rather than the quality of those articles, how would we know?

Comment by Nathan Young (nathan) on Working at a (DC) policy think tank: Why you might want to do it, what it’s like, and how to get a job · 2021-08-31T22:34:58.853Z · EA · GW

I weakly suggest that Twitter is also underrated by EAs interested in policy. I'm happy to write this up fully if people upvote this comment.

The sketch of my argument is that lots of top staffers are on Twitter and it's not hard to get access to them. Many EAs are followed by big academics, public figures, etc. I've had several chats with movers and shakers (Noah Smith, David Shor, Tyler Cowen). I imagine someone who sought to talk to staffers in a particular area could do so after about 6 months of posting for 3 hours a week.

Comment by Nathan Young (nathan) on Working at a (DC) policy think tank: Why you might want to do it, what it’s like, and how to get a job · 2021-08-31T22:28:17.063Z · EA · GW

I assume this makes much more sense for US folks than others. If this isn't the case, correct me.

Comment by Nathan Young (nathan) on Gifted $1 million. What to do? (Not hypothetical) · 2021-08-30T22:12:21.363Z · EA · GW

I like that you suggest that people should give what they are comfortable giving. 

I think that's advice I'd want someone to give my friend and I think it's wiser in the long term.

Comment by Nathan Young (nathan) on Gifted $1 million. What to do? (Not hypothetical) · 2021-08-30T22:00:25.362Z · EA · GW

I agree with the other answers and will avoid repeating them.

The only thing I'd add is that I suggest giving an amount of money you'll be glad you gave.

You don't have to give any of this money.

If you do choose to give away any %, that's wonderful! 

This is the advice I'd want someone to give to someone I care about. Also, it's more sustainable in the long term if we give away what we are comfortable with.

Comment by Nathan Young (nathan) on Announcing the Open Philanthropy Undergraduate Scholarship · 2021-08-29T11:43:07.352Z · EA · GW

I think that ETH Zurich is a particularly good example since it's one place below Imperial. Not sure I'd be convinced of this argument if it were #15, say.

Comment by Nathan Young (nathan) on Announcing the Open Philanthropy Undergraduate Scholarship · 2021-08-29T11:41:07.492Z · EA · GW

Also, what about people who "ought" to apply but don't know it yet? Could there be an EA scholarship program for developing nations? A one-page application, a short test, then anyone above a certain bar gets supported in a full funding application and, if successful, helped to apply to a university.

If I had to guess I'd think there were many barriers in the way of students applying to foreign universities. 

Comment by Nathan Young (nathan) on What are examples of technologies which would be a big deal if they scaled but never ended up scaling? · 2021-08-28T21:34:56.623Z · EA · GW

Really interesting question.

Comment by Nathan Young (nathan) on Summary and Takeaways: Hanson's “Shall We Vote on Values, But Bet on Beliefs?” · 2021-08-28T17:37:16.613Z · EA · GW

A large question I have is "Why haven't any EA orgs used futarchy?" 

It seems to me that it's easiest to implement in a small organisation. The fact that it is seen nowhere (to my knowledge) suggests it can't be as clear-cut as it first seems.

Some suggestions for how it would be used if people were convinced:

  • Forecast GiveWell scores, investigate charities with high ones
  • Forecast fund evaluations
  • Forecast community metrics under different large scale community decisions
  • Set up an EA org that runs on a futarchy

In my mind, the fact that these aren't happening despite being possible suggests there must be more flaws than those raised in this piece.

If I had to guess, it's that:

  • Futarchy seems too weird
  • EAs (like everyone else) are concerned that such an inflexible system would lead to unpredictable badness, and so push away from creating it.

Comment by Nathan Young (nathan) on Summary and Takeaways: Hanson's “Shall We Vote on Values, But Bet on Beliefs?” · 2021-08-28T17:26:24.914Z · EA · GW
  1. Maintaining a careful and aligned measure of welfare is likely to be extremely difficult. It is hard to capture everything we value as a society (especially on different levels, like cities and states), and it would also be very difficult to avoid manipulations. Hanson notes this issue (in objections 13-15, 22-23), but does not treat it with the seriousness it deserves. Additionally, Hanson occasionally proposes modifying the measure of welfare to fix other issues, and this is an added complication.
    1. A simpler measure of welfare might, for instance, prompt blind maximization of something that is not quite aligned with our values. If we try to compensate by adding everything we value, however, we may encounter issues of corruption in the measurement processes for certain parameters, encode policies in our measure of welfare (an oversimplified example of this is adding miles of roads built to the measure of welfare), or create a more messy system by attempting to solve other problems (e.g. on page 24, Hanson mentions the possibility of agreeing, by treaty, to give welfare weight to other nations’ welfare).


I'm not sure that it would be any harder than in a society without futarchy. In some ways I think it's quite neat that Hanson acknowledges this would be the problem of the legislator and that people could vote for politicians they thought would derive a good function.

All the bad situations I can think of here apply to current societies so it seems harsh to judge futarchy by those standards.

Comment by Nathan Young (nathan) on Summary and Takeaways: Hanson's “Shall We Vote on Values, But Bet on Beliefs?” · 2021-08-28T17:21:18.013Z · EA · GW
  1. Thin markets, or markets where there are few buyers and sellers, will be less accurate. Thin markets are often more volatile (prices shift rapidly) and less efficient or accurate than liquid markets, where there are many buyers and sellers. Policy-oriented prediction markets could become much thinner for policies that are complicated or which do not affect the interests of sufficient numbers of people or of sufficiently rich people.[12]
    1. For instance, if a policy is technical and affects only, say, the agricultural practices of a specific area, there may not be enough natural interest in it, and all but a few people may believe that it is not worth their time to learn the details of how the policy would affect welfare. As a result, the final prices would be based on very little information: the best guesses of a few traders.


I'm not sure this argument holds up:

  1. If there is an agricultural practice that will be affected, the people in that area can bet on the market. Unlike voting in a general political campaign, they will have a big impact on the result. They will have good incentives to bet in order to change the result, or to offset losses in case it goes badly.
  2. This doesn't seem worse than the status quo. Decisions are already made on the basis of a few people's opinions, often without any sense of track record of the kind that profit and loss would provide here.

Comment by Nathan Young (nathan) on Summary and Takeaways: Hanson's “Shall We Vote on Values, But Bet on Beliefs?” · 2021-08-28T17:16:56.583Z · EA · GW

Thanks for writing this. I enjoyed it and I'd like to look more at the paper.

  1. Causality might diverge from conditionality in the case of advisory/indirect markets.[10] Traders are sometimes rewarded for guessing at hidden info about the world—information that is revealed by the fact that a policy decision was made—instead of causal relationships between the policy and outcomes.[11]
    1. For instance, suppose a company is running a market to decide whether to keep an unpopular CEO, and they ask if, say, stocks conditional on the CEO remaining would be higher than stocks conditional on the CEO not staying. Then traders might think that, if it is the case that the CEO stayed, it is likely that the Board found out something really great about the CEO, which would increase expectations that the CEO would perform very well (and stocks would rise). So the market would seem to imply that the CEO is good for the company even if they were actually terrible.

I don't understand the point here. If the market is the only chooser, the traders would be stupid to assume that some other reason made the board choose. If the board chose based on the market and other features, then yes, the market would be predicting given that choice. This seems like a restatement of the theory rather than an issue.

Am I misunderstanding?

Comment by Nathan Young (nathan) on Announcing the Open Philanthropy Undergraduate Scholarship · 2021-08-28T16:59:16.974Z · EA · GW

If I were a student applying, I might want to know whether I was going to get funding before going through a complex application process in another country. Is there anything that could be said to someone who feels like this - success chances, some kind of prescreening, etc.?

Comment by Nathan Young (nathan) on If You're So Smart, Why Aren't You Governor Of California? (Scott Alexander: Astral Codex Ten) · 2021-08-28T16:11:20.765Z · EA · GW

Nah your post is fine. Welcome to the forum.

I think you're right that it's hard to run for positions, but I don't think the cost should be borne by individuals. I think if it cost $1 million to have a 5% chance of becoming governor of California, then I would call that effective spending.

I'd note that I've not yet heard why an EA org shouldn't have been willing to spend a million here.

Comment by Nathan Young (nathan) on If You're So Smart, Why Aren't You Governor Of California? (Scott Alexander: Astral Codex Ten) · 2021-08-28T16:06:31.059Z · EA · GW

I'm not so convinced that forecasting is anticorrelated with selling oneself. I get the sense that Galef argues this in The Scout Mindset, but I've not read it yet.

It feels like we are talking about different things if you think the forecasts here are disheartening. If I were a Democratic person with some connections in California, I'd be kicking myself right now.

Comment by Nathan Young (nathan) on If You're So Smart, Why Aren't You Governor Of California? (Scott Alexander: Astral Codex Ten) · 2021-08-26T15:50:31.408Z · EA · GW

Happy to read over it.

Comment by Nathan Young (nathan) on Nathan Young's Shortform · 2021-08-26T12:04:08.708Z · EA · GW

EAs, please post your job postings to Twitter

Please post your jobs to Twitter and reply with @effective_jobs. It takes 5 minutes, and the jobs I've posted and then tweeted have got thousands of impressions.

Or just DM me on Twitter (@nathanpmyoung) and I'll do it. I think it's a really cheap way of getting EAs to look at your jobs. This applies to impactful roles in and outside EA.

Here is an example of some text:

-tweet 1

Founder's Pledge Growth Director

@FoundersPledge are looking for someone to lead their efforts in growing the amount that tech entrepreneurs give to effective charities when they IPO. 

Salary: $135 - $150k 
Location: San Francisco

https://founders-pledge.jobs.personio.de/job/378212

-tweet 2, in reply

@effective_jobs

-end

I suggest it should be automated, but that's for a different post.
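
To give a sense of what that automation could look like, here is a minimal sketch in Python, assuming the tweepy library and a Twitter developer account. The credentials are placeholders and the job text is just the example above - an illustration, not a finished bot.

    import tweepy

    # Placeholder credentials - replace with real keys from a Twitter developer account.
    client = tweepy.Client(
        consumer_key="CONSUMER_KEY",
        consumer_secret="CONSUMER_SECRET",
        access_token="ACCESS_TOKEN",
        access_token_secret="ACCESS_TOKEN_SECRET",
    )

    # Tweet 1: the job posting itself (text taken from the example above).
    job_text = (
        "Founder's Pledge Growth Director\n\n"
        "@FoundersPledge are looking for someone to lead their efforts in growing "
        "the amount that tech entrepreneurs give to effective charities when they IPO.\n\n"
        "Salary: $135 - $150k\n"
        "Location: San Francisco\n\n"
        "https://founders-pledge.jobs.personio.de/job/378212"
    )
    job_tweet = client.create_tweet(text=job_text)

    # Tweet 2: reply to the first tweet, tagging @effective_jobs.
    client.create_tweet(
        text="@effective_jobs",
        in_reply_to_tweet_id=job_tweet.data["id"],
    )

Wrapped in a small script or scheduled job, this would make the post-and-reply step a one-liner for any org.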

Comment by Nathan Young (nathan) on Founders Pledge Seeks Growth Director (San Francisco)! · 2021-08-26T11:54:58.181Z · EA · GW

Okay, I tweeted it. Will delete and replace if you write one instead: https://twitter.com/NathanpmYoung/status/1430861177693818881?s=20

Comment by Nathan Young (nathan) on If You're So Smart, Why Aren't You Governor Of California? (Scott Alexander: Astral Codex Ten) · 2021-08-26T11:24:05.154Z · EA · GW

I was tempted to intro Scott as "Scott Alexander (Slate Star Codex, Astral Codex Ten, occasional EA forum poster)"

Comment by Nathan Young (nathan) on EA Forum feature suggestion thread · 2021-08-26T10:31:58.583Z · EA · GW

I think if people come to this thread and see 8 of one person's suggestions as the first 8, they will probably grow to resent that person.

Also, I was using Grammarly on this page and it was really slowing down typing speed. FYI.

Comment by Nathan Young (nathan) on Who do intellectual prizewinners follow on Twitter? · 2021-08-26T07:37:49.772Z · EA · GW

It says something that Cowen has around a quarter of the followers but more IPW followers.

Comment by Nathan Young (nathan) on Who do intellectual prizewinners follow on Twitter? · 2021-08-26T00:23:20.901Z · EA · GW

Yeah, I thought this. Pretty quickly we're gonna struggle with working out who is in and who is out. If we're counting Cowen, are we counting Patrick Collison, Ezra Klein, Peter Thiel? Hard problem.

Comment by Nathan Young (nathan) on Madagascar · 2021-08-26T00:19:57.454Z · EA · GW

Sorry, I don't know any more than you do. But yes, it's awful.

Comment by Nathan Young (nathan) on Who do intellectual prizewinners follow on Twitter? · 2021-08-25T18:00:01.175Z · EA · GW

 Thanks for doing this research.

I've stared at this article a bit, and I have some hazy thoughts. Don't take them too seriously or spend too long responding unless you want to.

- EAs are generally young and so haven't had the time to build big Twitter followings
- I'm not that surprised that Gates, who has a massive account, is the most effective EA-adjacent influencer
- EA doesn't favour unilateralism - what would this look like if it were about sharing 80k articles or similar?
- I'm not sure that Musk can be so easily dismissed. I don't really have a horse, but just because he shitposts doesn't mean he can't be aligned
- Nate Silver isn't (to my knowledge) personally aligned, but I'd see his content as similar in tone

Comment by Nathan Young (nathan) on Who do intellectual prizewinners follow on Twitter? · 2021-08-25T17:49:53.858Z · EA · GW

Richard Ngo, so good his account is listed twice.

Comment by Nathan Young (nathan) on Epistemic trespassing, or epistemic squatting? | Noahpinion · 2021-08-25T16:53:27.944Z · EA · GW

Yeah, I thought this piece was spot on - thanks for crossposting.

The only thing I'd add is that I wish there were a clearer public forecasting track record. If there were, then for certain areas (e.g. covid and geopolitics) we would be able to judge who gave good forecasts.

(Though forecasting is hard and revealing, and the people at the top of those hierarchies wouldn't want to have to do it, I guess, but it would serve up-and-comers well.)

Comment by Nathan Young (nathan) on Open call for EAs with passion for meta-learning <3 · 2021-08-25T16:33:07.019Z · EA · GW

I'm interested. I'll PM you. Also, a google doc with a roadmap 😍 Sign me up. 

Comment by Nathan Young (nathan) on Founders Pledge Seeks Growth Director (San Francisco)! · 2021-08-25T16:26:08.399Z · EA · GW

Could you please post this on your Twitter (I've looked and can't see it) and then reply with @effective_jobs?

Comment by Nathan Young (nathan) on A Case for Better Feeds · 2021-08-25T14:56:53.195Z · EA · GW

Hey :) I don't think being critical is bad, and I want people's comments both in support and critical. Thanks for clarifying though, that's kind of you.

I would label your comment as disagreeing with the thesis of the post, and since it has more upvotes than the post, it suggests others agree with you. Now, since we agree that the situations are a bit different, I don't quite know what people's specific disagreement is. And that's okay, people can do what they like, but it is disheartening to write an article, receive relatively little uptake, have people like a disagreeing comment more than that article, and then not know why. Right?

Comment by Nathan Young (nathan) on What are the EA movement's most notable accomplishments? · 2021-08-25T13:18:49.761Z · EA · GW

Good framing of the question.

Comment by Nathan Young (nathan) on EA Forum feature suggestion thread · 2021-08-25T12:27:39.181Z · EA · GW

Aaron, can we write forum PR FAQs too?

Pros:
Nice format

Cons:
Would dilute the legitimacy of current ones

Solution:
"Unofficial PR FAQ"

But if you're okay with this, could you explicitly say so? If you don't, I think me writing one will feel like I'm freeriding on the current legitimacy of the concept.

Comment by Nathan Young (nathan) on EA Forum feature suggestion thread · 2021-08-25T12:24:34.525Z · EA · GW

Yeah, now I come here and all my posts are at the top, and that feels bad.

Comment by Nathan Young (nathan) on How EA Blue grew in our 2nd Year in the Ateneo de Manila University in the Philippines - Our achievements & learnings · 2021-08-25T11:03:19.301Z · EA · GW

Thanks for writing such a detailed report.

I particularly liked the issues you faced and how you dealt with them. It seems like a good article for people starting student groups to read.

Maybe I missed it, but I saw less about how you could be supported better.

Are there any ways the readers of this post could help you? Are there any connections you need?