Posts

What is something the reader should do? 2020-09-24T16:08:47.422Z · score: 8 (3 votes)
Economist: "What's the worst that could happen". A positive, sharable but vague article on Existential Risk 2020-07-08T10:37:08.017Z · score: 11 (8 votes)
What EA questions do you get asked most often? 2020-06-23T17:12:09.583Z · score: 7 (2 votes)
What are good software tools? What about general productivity/wellbeing tips? 2020-06-15T11:47:30.417Z · score: 7 (5 votes)
What are your favourite readings for weddings/funerals/important community events? 2020-05-28T11:14:57.508Z · score: 5 (2 votes)
What question would you want a prediction market about? 2020-01-19T14:16:57.356Z · score: 4 (1 votes)
What content do people recommend looking at when trying to find jobs? 2019-12-19T09:50:21.395Z · score: 11 (3 votes)
UK General Election: Where would you look for impact analysis of policy platforms? 2019-11-14T17:13:17.459Z · score: 3 (2 votes)
A rationalism discussion with some useful lessons on culture 2019-11-05T09:29:30.150Z · score: 11 (6 votes)
Nathan Young's Shortform 2019-11-05T09:11:05.415Z · score: 3 (1 votes)
Ineffective Altruism: Are there ideologies which generally cause their adherents to have worse impacts? 2019-10-17T09:24:22.687Z · score: 1 (2 votes)
Who should I put a funder of a $100m fund in touch with? 2019-10-16T10:22:51.502Z · score: 6 (3 votes)
How to do EA Global well? How can one act during EA Global so as to make the most effective changes afterwards? 2019-10-16T10:21:03.573Z · score: 13 (5 votes)
I Estimate Joining UK Charity Boards is worth £500/hour 2019-09-16T14:40:14.359Z · score: 11 (9 votes)
EU AI policy and OPSI Consultation 2019-08-08T09:51:15.589Z · score: 19 (6 votes)
What are your thoughts on my career change options? (AI public policy) 2019-07-19T16:50:08.813Z · score: 9 (6 votes)
Please May I Have Reading Suggestions on Consistency in Ethical Frameworks 2019-07-08T09:35:18.198Z · score: 10 (4 votes)
How does one live/do community as an Effective Altruist? 2019-05-15T21:20:02.296Z · score: 23 (17 votes)
This website is really functional and attractive (to me) 2019-05-08T08:56:19.942Z · score: 0 (5 votes)
Is EA unscalable central planning? 2019-05-07T07:25:07.387Z · score: 9 (6 votes)
If this forum/EA has ethnographic biases, here are some suggestions 2019-05-06T11:20:51.105Z · score: -2 (16 votes)
How do we check for flaws in Effective Altruism? 2019-05-06T10:59:41.441Z · score: 39 (23 votes)

Comments

Comment by nathan on How much does a vote matter? · 2020-10-31T16:17:36.864Z · score: 1 (1 votes) · EA · GW

Thank you for writing this. The previous version was one of my most shared 80k links. 

The post was long and detailed. That is what a certain audience wants (i.e. this forum).

I wonder if there is an audience for a trimmed-down, lighter piece (e.g. in the style of Vox's Future Perfect). I think the content is good enough to share among people who prefer shorter, lighter articles.

Comment by nathan on Nathan Young's Shortform · 2020-10-31T11:30:45.376Z · score: 1 (1 votes) · EA · GW

It would be good to easily be able to export jobs from the EA job board.

Comment by nathan on Nathan Young's Shortform · 2020-10-31T11:30:23.063Z · score: 1 (1 votes) · EA · GW

I suggest that at some stage, up- and downvoting of jobs would be useful.

Comment by nathan on Nathan Young's Shortform · 2020-10-31T11:25:25.001Z · score: 5 (2 votes) · EA · GW

Rather than using Facebook as a way to collect EA jobs, we should use an Airtable form.

1) Individuals finding jobs could put all the details in, saving time for whoever at 80k would otherwise have to do this.

2) Airtable can post directly to Facebook, so everyone would still see it: https://community.airtable.com/t/posting-to-social-media-automatically/20987

3) Some people would find it quicker. Personally, I'd prefer an Airtable form to inputting jobs into Facebook manually every time.

Ideally we should also find websites which often publish useful jobs and then scrape them regularly.
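A minimal sketch of what such a scraper could look like, using only the Python standard library. The page markup, class name, and job titles are invented for illustration; a real careers page would need its own selectors (and its terms of use checked):

```python
from html.parser import HTMLParser

class JobTitleParser(HTMLParser):
    """Collects the text of <a class="job-title"> links (assumed markup)."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.jobs = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "a" and ("class", "job-title") in attrs:
            self._in_title = True

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.jobs.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_title = False

# Illustrative HTML standing in for a fetched careers page
sample_page = (
    '<ul>'
    '<li><a class="job-title" href="/jobs/1">Research Analyst</a></li>'
    '<li><a class="job-title" href="/jobs/2">Operations Lead</a></li>'
    '</ul>'
)

parser = JobTitleParser()
parser.feed(sample_page)
print(parser.jobs)  # ['Research Analyst', 'Operations Lead']
```

In practice the extracted titles could then be written to the Airtable base via its form or API, so scraped and hand-submitted jobs end up in the same place.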

Comment by nathan on Nathan Young's Shortform · 2020-10-27T16:11:28.930Z · score: 2 (2 votes) · EA · GW

Has Rethink Priorities ever thought of doing a survey of non-EAs? Perhaps paying for a poll? I'd be interested in questions like "What do you think of Effective Altruism? What do you think of Effective Altruists?"

Only asking questions of those who are currently here introduces survivorship bias. Likewise, we could try to find people who left and ask why.

Comment by nathan on Nathan Young's Shortform · 2020-10-27T09:09:11.961Z · score: 1 (1 votes) · EA · GW

The UK police does.

It seems to me if you wanted to avoid a huge scandal you'd want to empower and incentivise an organisation to find small ones.

Comment by nathan on Nathan Young's Shortform · 2020-10-24T12:19:12.052Z · score: 2 (4 votes) · EA · GW

At what size of the EA movement should there be an independent EA whistleblowing organisation, which investigates allegations of corruption?

Comment by nathan on Nathan Young's Shortform · 2020-10-14T17:30:17.352Z · score: 6 (4 votes) · EA · GW

The UK government will pay for organisations to hire currently unemployed 18-24 year olds for 6 months. This includes minimum wage and national insurance.

I imagine many EA orgs are people-constrained rather than funding-constrained, but it might be worth it.

And here is a data science org which will train them as well: https://twitter.com/John_Sandall/status/1315702046440534017

Note: applications have to be for 30 jobs, but you can apply across a number of organisations, or alongside a local authority etc.

https://www.gov.uk/government/collections/kickstart-scheme

Comment by nathan on Nathan Young's Shortform · 2020-10-12T16:16:28.994Z · score: 1 (1 votes) · EA · GW

Seems that the ebook/audiobook is free. Is that correct?

I imagine being able to give a free physical copy would have more impact.

Comment by nathan on Nathan Young's Shortform · 2020-10-10T16:02:31.487Z · score: 8 (3 votes) · EA · GW

Is there a way to sort shortform posts?

Comment by nathan on Nathan Young's Shortform · 2020-10-10T15:58:05.955Z · score: 1 (1 votes) · EA · GW

Do you have any thoughts as to what the next step would be? It's not obvious to me what you'd do to research the impact of this.

Perhaps have a questionnaire asking people how many people they'd give books to at different prices. Do we know the likelihood of people reading a book they are given?

Comment by nathan on If you like a post, tell the author! · 2020-10-07T11:57:44.686Z · score: 13 (6 votes) · EA · GW

I think one or two positive posts are fine. I'd agree if every post were like that. But that's true of all post types.

I guess I think there is new information to the author, which is: "someone liked my post enough to specifically say so". You could argue that's included in the post karma, but emotionally I don't think they are the same.

Comment by nathan on SiebeRozendal's Shortform · 2020-10-07T11:47:47.588Z · score: 8 (4 votes) · EA · GW

I think I call this "the wrong frame".

"I think you are framing that incorrectly etc"

e.g. in the UK there is often discussion of whether LGBT lifestyles should be taught in school, and at what age. This framing makes them seem weird and risky. But it is the wrong frame - LGBT lifestyles are typical behaviour (for instance, there are more LGBT people than adherents of many major world religions). Instead the question is: at what age should you discuss, say, relationships in school? There is already an answer here - I guess children learn about "mummies and daddies" almost immediately. Hence, at the same time you talk about mummies and daddies, you talk about mummies and mummies, and single dads, and everything else.

By framing the question differently, the answer becomes much clearer. In many cases I think the issue with bad frames (or models) is a category error.

Comment by nathan on Nathan Young's Shortform · 2020-10-07T11:43:12.138Z · score: 7 (6 votes) · EA · GW

EA Book discount codes.

tl;dr EA books have a positive externality. The response should be to subsidise them.

If EA thinks that certain books (Doing Good Better, The Precipice) have greater benefits than they seem, it could subsidise them.

There could be an EA website which has Amazon coupons for EA books, so that you can get them more cheaply when buying for a friend, or advertise said coupons to your friends to encourage them to buy the book.

From 5 minutes of research, the current best way would be for a group to buy EA books and sell them at the list price but provide coupons, as here - https://www.passionintopaychecks.com/how-to-create-single-use-amazon-coupons-promo-codes/

Alternatively, you could just sell them at the coupon price.

Comment by nathan on Can the EA community copy Teach for America? (Looking for Task Y) · 2020-10-06T00:07:22.842Z · score: 3 (3 votes) · EA · GW

Task Y seems analogous to GiveDirectly - less effective than the top option, but with the ability to take on many more resources.

Some ideas for task Y. I'm mainly spitballing, so don't take them too seriously:

Trying to convince friends and family of good policy ideas - EAs could be encouraged to learn and spread good policy ideas. I agree that politics is polarising, but policy need not be zero-sum.

Forecasting effective causes - I think there should be a board for forecasting the impact of different causes. People could vote, and then whenever GiveWell gets round to checking, you could test accuracy. Low cost, and the accuracy might be better than random.

Testing decision-making interventions at work - if there is an effective decision-making strategy, EAs could test it at their workplaces. This would lead to lots of case studies, and adoption if effective.

Comment by nathan on Can the EA community copy Teach for America? (Looking for Task Y) · 2020-10-06T00:02:08.690Z · score: 2 (2 votes) · EA · GW

Is Teach For America impactful?

Or does it just use up two years at the start of people's careers? Caplan's "The Case Against Education" puts forward a strong case that a large chunk of education's value (certainly at university) is signalling. I think some of this applies to education for 5-18 year olds too. Certainly there are flaws in the current education system that it would be better without, but fixing them might just be tinkering around the edges. In that sense, is it a better use of people's time than the counterfactual?

I know it's just an analogy, but the analogy carries across. We wouldn't want to encourage people to sink time into something which turns out to be ineffective. That would likely discredit EA among people who had seen it.

Comment by nathan on Nathan Young's Shortform · 2020-09-25T22:37:21.101Z · score: 4 (3 votes) · EA · GW

The address (in the link) is humbling and shows someone making a positive change for good reasons. He is clear and coherent.

Good on him.

Comment by nathan on Nathan Young's Shortform · 2020-09-25T22:36:07.270Z · score: 6 (4 votes) · EA · GW

Harris is a marmite figure - in my experience people love him or hate him. 

It is good that he has done this. 

Newswise, it seems to me this is more likely to impact the behaviour of his listeners, who are likely to be well-disposed towards him. This is a significant but currently low-profile announcement, as the courses on his app will be.

I don't think I'd go spreading this around more generally. Many don't like Harris, and for those who don't, it could be easy to see EA as more of the same (callous, superior progressivism).

In the low-probability (5%?) event that EA gains traction in that space of the web (generally called the Intellectual Dark Web - don't blame me, I don't make the rules), I would urge caution for EA speakers, who might be pulled into polarising discussions that would leave some groups feeling EA ideas are "not for them".

Comment by nathan on Nathan Young's Shortform · 2020-09-25T22:28:06.125Z · score: 33 (12 votes) · EA · GW

Sam Harris takes Giving What We Can pledge for himself and for his meditation company "Waking Up"

Harris references MacAskill and Ord as having been central to his thinking, and talks about Effective Altruism and existential risk. He publicly pledges 10% of his own income and 10% of the profit from Waking Up. He will also create a series of lessons on his meditation and education app around altruism and effectiveness.

Harris has 1.4M Twitter followers and is a famed humanist and New Atheist. The Waking Up app has over 500k downloads on Android, so I guess over 1 million overall.

https://dynamic.wakingup.com/course/D8D148

I like letting personal thoughts be up or downvoted, so I've put them in the comments.

Comment by nathan on What is something the reader should do? · 2020-09-24T16:13:54.959Z · score: 9 (3 votes) · EA · GW

Read 80,000 hours' Key Ideas

https://80000hours.org/key-ideas/

Comment by nathan on What is something the reader should do? · 2020-09-24T16:11:56.162Z · score: 7 (2 votes) · EA · GW

Have good sleep health.

Wake up at the same time each day. Wear earplugs and an eyemask. Get a good quality mattress.

Comment by nathan on What is something the reader should do? · 2020-09-24T16:09:46.111Z · score: 7 (2 votes) · EA · GW

Read Doing Good Better by William MacAskill

https://www.amazon.co.uk/Doing-Good-Better-Effective-Difference/dp/1592409105

Comment by nathan on Parenting: Things I wish I could tell my past self · 2020-09-23T20:54:42.531Z · score: 5 (3 votes) · EA · GW

116 karma? This is a very successful post. Perhaps parenting advice is undersupplied in EA? Or there is a disconnect between how much people want kids and how acceptable it feels to talk about them.

Note: I want to have children.

This suggests to me that if there were more effective ways to maintain one's job while having children etc., then child-raising advice could be a pretty effective cause. 5% more output from the median EA parents while their kids are young would be a huge win.

(As an aside, I'm currently reading Caplan's "Selfish Reasons to Have More Kids", which I would recommend.)

Comment by nathan on Nathan Young's Shortform · 2020-09-18T15:21:26.101Z · score: 10 (7 votes) · EA · GW

EA short story competition?

Has anyone ever run a competition for EA related short stories?

Why would this be a good idea?
* Narratives resonate with people and have been used to convey ideas for thousands of years
* It would be low cost and fun
* Using voting on this forum there is the same risk of "bad posts" as for any other post

How could it work?
* Stories submitted under a tag on the EA forum.
* Rated by upvotes
* Max 5000 words (I made this up, dispute it in the comments)
* If someone wants to give a reward, then there could be a prize for the highest rated
* If there is a lot of interest/quality they could be collated and even published
* Since it would be measured by upvotes it seems unlikely a destructive story would be highly rated (or as likely as any other destructive post on the forum)

Upvote if you think it's a good idea. If it gets more than 40 karma I'll write one. 

Comment by nathan on Nathan Young's Shortform · 2020-09-18T12:13:58.117Z · score: 6 (4 votes) · EA · GW

This perception-gap site would be a good format for learning and could be used in altruism. It reframes correcting biases as a fun prediction game.

https://perceptiongap.us/

It's a site which gets you to guess what other political groups (republicans and democrats) think about issues.

Why is it good:

1) It gets people thinking and predicting. They are asked a clear question about other groups and have to answer it.
2) It updates views in a non-patronising way - it turns out Democrats and Republicans are much less polarised than most people think (the stat they give is that people predict 50% of Republicans hold extreme views, when actually it's 30%). But rather than yelling this, or writing an annoying listicle, it gets people's consent and teaches them something.
3) It builds consensus. If we are actually closer to those we disagree with than we think, perhaps we could work with them.
4) It gives quick feedback. People learn best when given feedback close to the action. In this case, people are rapidly rewarded for thoughts like "most of group X are more similar to me than I first thought".
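The quick-feedback mechanic described above can be sketched in a few lines. The function name is my own; the 50%-guess versus 30%-actual figures are the stat quoted in the post:

```python
def perception_gap_feedback(guess_pct, actual_pct):
    """Compare a guess about another group to the actual surveyed figure."""
    gap = guess_pct - actual_pct
    if gap == 0:
        return "Spot on."
    direction = "overestimated" if gap > 0 else "underestimated"
    return f"You {direction} by {abs(gap)} percentage points (actual: {actual_pct}%)."

# The stat from the post: people predict 50% of Republicans hold
# extreme views, when the surveyed figure is 30%.
print(perception_gap_feedback(50, 30))
```

Because the feedback arrives immediately after the guess, the correction lands as part of the game rather than as a lecture.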

Imagine:

What percentage of neocons want institutional reform?
What % of libertarians want an end to factory farming?
What % of socialists want an increase in foreign direct aid?

Conclusion

If you want to change people's minds, don't tell them stuff - get them to guess trustworthy values as a cutesy game.

Comment by nathan on EA Forum feature suggestion thread · 2020-08-11T08:12:46.497Z · score: 1 (1 votes) · EA · GW

It seems there is iteration possible here. Are there more users here or on the EA Hub? If the former, it might be worth using EA Forum logins for the EA Hub.

Comment by nathan on Nathan Young's Shortform · 2020-07-25T12:12:27.305Z · score: 3 (2 votes) · EA · GW

I would choose your statement over the current one.

I think the sentiment lands pretty well even with a very toned-down statement. The movement is called "effective altruism". I think in-groups are often worried that out-groups will not get their core differences, when generally that's all out-groups know about them.

I don't think anyone who visits that website will fail to see that effectiveness is a core feature. And I don't think we need to be patronising (as EAs are caricatured as being in conversations I have) in order to make known something that everyone already knows.

Comment by nathan on Nathan Young's Shortform · 2020-07-23T13:25:53.266Z · score: 14 (9 votes) · EA · GW

I strongly dislike the following sentence on effectivealtruism.org:

"Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on."

It reads to me as arrogant, and epitomises the worst caricatures my friends make of EAs. Read it in a snarky voice (such as one might use if they struggled with the movement and were looking to do research): "Rather than just doing what feels right..."

I suggest it gets changed to one of the following:

  • "We use evidence and careful analysis to find the very best causes to work on."
  • "It's great when anyone does a kind action no matter how small or effective. We have found value in using evidence and careful analysis to find the very best causes to work on."

I am genuinely sure whoever wrote it meant well, so thank you for your hard work.

Comment by nathan on Economist: "What's the worst that could happen". A positive, sharable but vague article on Existential Risk · 2020-07-08T10:40:39.917Z · score: 1 (1 votes) · EA · GW

If you can't access the article on The Economist, I'd recommend Blendle, which is like Spotify, but for news.

https://blendle.com/i/economist/whats-the-worst-that-could-happen/bnl-economist-20200626-2fda187ce08?utm_campaign=social-share&utm_source=blendle&utm_content=blendletrending-android&sharer=eyJ2ZXJzaW9uIjoiMSIsInVpZCI6Im5hdGhhbnBteW91bmciLCJpdGVtX2lkIjoiYm5sLWVjb25vbWlzdC0yMDIwMDYyNi0yZmRhMTg3Y2UwOCJ9&utm_medium=external

Comment by nathan on Economist: "What's the worst that could happen". A positive, sharable but vague article on Existential Risk · 2020-07-08T10:40:00.093Z · score: 4 (4 votes) · EA · GW

The article is positive, easy to understand, and communicates EA ideas well. It comes from a high-reputation source (The Economist is well respected) and offers easy-to-understand examples (volcanoes and solar storms). This is good.

Comment by nathan on Economist: "What's the worst that could happen". A positive, sharable but vague article on Existential Risk · 2020-07-08T10:38:58.338Z · score: 1 (1 votes) · EA · GW

The article seemed overly concerned with the risk of solar storms. There are many higher-priority issues it could have focused on.

Comment by nathan on EA Forum feature suggestion thread · 2020-06-29T12:49:48.996Z · score: 1 (1 votes) · EA · GW

I think we should upvote features we'd like and let the tech team decide what is possible to implement.

It might be hard, it might not.

Comment by nathan on EA Forum feature suggestion thread · 2020-06-29T12:48:34.107Z · score: 1 (1 votes) · EA · GW

I think EA wikis have been tried in the past.

For what it's worth, I think rather than storing information you want to store connections and allow for easy error checking. I suggest this is the non-obvious value of Wikipedia.

In that regard, I think a Roam board would be better than a wiki.

Comment by nathan on EA Forum feature suggestion thread · 2020-06-29T11:42:00.466Z · score: 1 (1 votes) · EA · GW

It has taken me a long time to find the EA online events calendar (thanks @EdoArad). Could this be displayed more prominently?

https://calendar.google.com/calendar/embed?src=ie5uop71imftf4ut2htbv789v8@group.calendar.google.com

Comment by nathan on EA Forum feature suggestion thread · 2020-06-29T11:34:32.421Z · score: 2 (2 votes) · EA · GW

Sure but you could reduce the friction on that. And ideally make it more trackable.

Comment by nathan on EA Forum feature suggestion thread · 2020-06-29T11:23:58.509Z · score: 4 (1 votes) · EA · GW

For what it's worth I think you want this to have the minimal friction but that maybe suggestions are hidden as standard.

Comment by nathan on EA Forum feature suggestion thread · 2020-06-26T14:11:49.545Z · score: 1 (1 votes) · EA · GW

Is there an equivalent post on lesswrong for this discussion?

Comment by nathan on EA Forum feature suggestion thread · 2020-06-26T14:11:27.783Z · score: 1 (1 votes) · EA · GW

It says who has edited it and how. I think you'd be careful when using it, and if someone abused it, it would be clear to everyone that they had done so.

Comment by nathan on EA Forum feature suggestion thread · 2020-06-26T14:08:14.302Z · score: 2 (2 votes) · EA · GW

I don't think so. As I commented, perhaps these start invisible (or with little markers you can mouse over). I find it works on Google Docs.

what do you think?

Comment by nathan on EA is risk-constrained · 2020-06-25T12:11:46.834Z · score: 1 (1 votes) · EA · GW

I think I'd go further. If an EA organisation or some other EAs aren't willing to support you in running your project, then should you be doing it as your main job?

As a side project, sure, but no funding means no one else is convinced of your impact. This seems like a good reason to choose a different path.

Comment by nathan on Patrick Collison on Effective Altruism · 2020-06-24T09:20:15.465Z · score: 1 (1 votes) · EA · GW

Let's model it. Currently it seems a very vague risk. If it's a significant one, it seems worth considering in a way that would let us find out if we were wrong.

I'd also say things like:

  • EAs do a lot of projects, many of which are outlandish or not obviously impactful - how does this compare to the counterfactual?
Comment by nathan on Patrick Collison on Effective Altruism · 2020-06-23T23:13:01.559Z · score: 2 (2 votes) · EA · GW

Fun post. Thanks for adding it and to Patrick and Jason.

"And similarly, as we have a look at the things that in hindsight seem like very good things to have happen in the world, it's often unclear to me how an EA oriented intuition might have caused somebody to do so"

I think this point is a good one, but it doesn't hold up. This post has 40 upvotes and no negative comments. Seemingly everyone agrees that it's good for people to follow non-standard paths. This is literally how "an EA oriented intuition might have caused somebody to do so".

Does someone want to send Patrick his membership details?

Comment by nathan on What EA questions do you get asked most often? · 2020-06-23T22:58:31.482Z · score: 1 (1 votes) · EA · GW

Any thoughts on how you would answer this question?

Comment by nathan on What EA questions do you get asked most often? · 2020-06-23T17:23:29.263Z · score: 0 (0 votes) · EA · GW

https://www.effectivealtruism.org/grants/

Comment by nathan on What EA questions do you get asked most often? · 2020-06-23T17:22:01.705Z · score: 1 (1 votes) · EA · GW

Write a post on the forum about it. Write that you're considering looking for funding. If it gets positive feedback, then take people's recommendations. If it gets negative feedback or little uptake, improve and try again.

Comment by nathan on What EA questions do you get asked most often? · 2020-06-23T17:18:47.827Z · score: 1 (1 votes) · EA · GW

https://80000hours.org/

Comment by nathan on What EA questions do you get asked most often? · 2020-06-23T17:18:27.336Z · score: 1 (1 votes) · EA · GW

How can I find an EA job?

Comment by nathan on What EA questions do you get asked most often? · 2020-06-23T17:18:11.950Z · score: 1 (1 votes) · EA · GW

How can I get grant funding for my project?

Comment by nathan on What EA questions do you get asked most often? · 2020-06-23T17:13:52.628Z · score: 1 (1 votes) · EA · GW

https://www.givedirectly.org/

Comment by nathan on What EA questions do you get asked most often? · 2020-06-23T17:13:30.653Z · score: 1 (1 votes) · EA · GW

https://www.givewell.org/