Comment by nathan on What are your thoughts on my career change options? (AI public policy) · 2019-07-19T20:25:28.691Z · score: 1 (1 votes) · EA · GW

I suppose I'm not putting much weight on it, other than what is required to keep me working at a problem for the long term. The issue there is that I don't know what working at many of these jobs will be like...

In terms of desires, I would like most of all to have a legitimate ethical system. I value that more than my own wellbeing and my own desires, so I don't really care what I want other than instrumentally. I do things I *want* on my own time, whereas in my career I'd like to maximise as much as I can.

At least I think so - it's hard to know what you really want, right?

Perhaps I'll end up justifying what I want to do anyway. I suppose this process at least stops me making significantly non-maximal choices.

What are your thoughts on my career change options? (AI public policy)

2019-07-19T16:50:08.813Z · score: 7 (4 votes)
Comment by nathan on Want to Save the World? Enter the Priesthood · 2019-07-09T09:29:39.134Z · score: 9 (6 votes) · EA · GW

Interesting take. I like your willingness to think about this and I agree that religion offers a lot and there are many lessons to learn. Some thoughts:

  • What will one say when their congregation asks what they believe and why?
  • Generally it seems EAs advise against joining harmful organisations to "change them". Have I misunderstood, or why doesn't that apply here? Perhaps you don't think religious orgs are on balance harmful.
  • In my understanding the fastest-growing religious sects are conservative ones, which will be strongly opposed to maximising wellbeing over what the Bible/Qur'an says.
  • I think seeking the growth of more liberal branches is an interesting idea, but what if that gives more conservative groups the "oxygen" they need? What if this ends up aiding the growth of conservative dogmatism?

Please May I Have Reading Suggestions on Consistency in Ethical Frameworks

2019-07-08T09:35:18.198Z · score: 10 (4 votes)
Comment by nathan on Announcing the launch of the Happier Lives Institute · 2019-06-28T19:20:31.703Z · score: 3 (2 votes) · EA · GW

Yeah, fair points. :)

Comment by nathan on Open Thread #45 · 2019-06-21T14:17:04.851Z · score: 1 (1 votes) · EA · GW

Article I wrote about the recent Tory Leadership debates:

Comment by nathan on Memetic Tribes and Culture War 2.0 (Article Discussion) · 2019-06-21T14:13:52.815Z · score: 1 (1 votes) · EA · GW

I was nearly gonna post this article myself, so I'm glad you already have. I think it provides an interesting framework for understanding people's worldviews (telos, existential threat, etc.). I have found it useful in discussing people's views with them: "What do you think is the purpose of life? What do you fear?" etc.

Comment by nathan on Announcing the launch of the Happier Lives Institute · 2019-06-21T13:53:45.326Z · score: 9 (4 votes) · EA · GW

Thank you for your work. It seems like a really important thing to study. Thank you for taking the time to lay out your plans so clearly.

Do you think your work will at any point touch on how individuals could live in ways that would make them happier or give them greater wellbeing? I think there is room for publishing a kind of workflow/lifehacks guide to help people know how their lives could be better. I acknowledge that's not what you speak about here, but it seems adjacent. Perhaps another reader could point me in the direction of this.

We think well-being consists in happiness, defined as a positive balance of enjoyment over suffering. Understood this way, this means that when we reduce misery, we increase happiness.

Sure, though there are some kinds of misery you don't want to reduce. I could choose not to attend my father's funeral, and that would reduce misery. Do you have any idea how you will account for "good sadness"? If you will avoid those kinds of interventions, how will you choose your interventions, and how will you avoid bias in doing so?

Comment by nathan on Open Thread #45 · 2019-05-30T07:01:42.405Z · score: 2 (2 votes) · EA · GW

Non EA Parody Video I made

I made a Brexit parody of Remix to Ignition. You folks are a community I'm part of, and I think sharing what you are proud of (or what is uniquely you) is a great part of community life.

(As an aside, I'd like to do some EA rap if I could think of a good way to do it. Alternatively, if you want rap marketing for an EA organisation, or rap at an event, then we can talk.)

Comment by nathan on How does one live/do community as an Effective Altruist? · 2019-05-17T10:56:57.965Z · score: 2 (2 votes) · EA · GW

Regarding there being answers: that's good to know. I guess I will search for them. Also, I've just found LessWrong, which is useful.

I'll check out those links.

You might be right about a broad discussion. If it turns out that issues haven't been covered I might come back and write a more specific piece.

I have not spent any time in local EA communities. I'd like to, though that will involve working out where I'm going to live next.

Thanks for your time.

Comment by nathan on How does one live/do community as an Effective Altruist? · 2019-05-16T12:01:41.481Z · score: 1 (1 votes) · EA · GW

Yeah, I wonder if there is any home-finding app in the EA community. I'd love to live with some people with similar views. (I am equally wary of going from one strict ideology to another, but there we are.)

Comment by nathan on Is preventing child abuse a plausible Cause X? · 2019-05-16T08:17:14.978Z · score: 2 (2 votes) · EA · GW

In this sense I think the government should create appropriate incentives for long-term committed relationships where children are concerned. Perhaps something like a no-claims bonus (in the UK, an increasing yearly discount for each year you don't crash your car) for each year that parents with children stay together, up until their last child turns 18?

Comment by nathan on How does one live/do community as an Effective Altruist? · 2019-05-16T07:58:28.810Z · score: 1 (1 votes) · EA · GW

It's fair to remove comments one no longer supports, but if someone did say this, I'd agree. :P I guess it stands out a mile.

How does one live/do community as an Effective Altruist?

2019-05-15T21:20:02.296Z · score: 23 (17 votes)
Comment by nathan on Does climate change deserve more attention within EA? · 2019-05-13T10:28:59.630Z · score: 1 (1 votes) · EA · GW

Hey thanks for replying,

Sure, it's a question of maximising effect. I don't know what is best. 80k say it's not the most effective; I suppose you'd have to ask them how that reasoning works.

Certainly it's better than, say, building bombs, but as to whether it's as good as AI policy, 80k says no.

What do you think?

Comment by nathan on Is EA unscalable central planning? · 2019-05-10T10:26:20.701Z · score: 1 (1 votes) · EA · GW

A core question for me is still: "Is EA's main aim to grow to affect government policy?" That would let it deal with the problems EA organisations work on at the incentives level, such that non-EAs would be properly motivated to solve problems that affect all our wellbeing.

In that sense, correcting an externality is better than lobbying firms/consumers to ignore it (which is roughly what we currently do). Am I wrong here? If growth isn't EA's main aim, why not? Something doesn't add up.

I suppose the best answer I can expect is "we don't know that's more effective"; thanks to Aaron, who showed me how GiveWell is starting to look at this. But at some level this will stop being true: if EA had 51% support, then we could just vote through the measures we wanted (with some ethical nuances).

So the secondary question is: do we have any idea when this shift from lobbying individuals to lobbying/participating in government ought to take place? How many EAs should exist in a country before they make a concerted effort to lobby directly? That seems a fairly crucial detail.

Comment by nathan on Is EA unscalable central planning? · 2019-05-10T08:37:31.089Z · score: 3 (2 votes) · EA · GW
Is there a particular article or statement from an organization that made you think influencing legislation isn't one of the movement's aims?

I suppose from what I've read I get the sense it's mainly about careers and philanthropy rather than lobbying/activism, though that may be a case of what you later describe. Also, @anonymous_EA's post does suggest this idea:

EA growth itself is much less prioritized now than it was a few years ago.

Thanks for your time, I'll look into the influencing policy stuff.

Comment by nathan on Benefits of EA engaging with mainstream (addressed) cause areas · 2019-05-10T08:35:34.822Z · score: 1 (1 votes) · EA · GW

I'd like someone to research and plot the graph fully and do some tests. Let's see, I guess.

Comment by nathan on Why we should be less productive. · 2019-05-09T14:45:48.450Z · score: 2 (2 votes) · EA · GW

Thanks for writing this.

I think we should seek to maximise both our own and everyone else's wellbeing, and that probably means productivity is good for others while self-care/things we enjoy are good for us. I'm not quite sure whether you agree or disagree.

I think we need to learn to be satisfied with good but also strive for better. That's a hard balance, though it's worth remembering that if we have interesting, satisfying jobs, disposable income, a few hours of free time each day, and safety for ourselves and those we love, we are doing really well worldwide. So it's worth working both for the good of others who are less well off and for our own benefit.

Comment by nathan on Benefits of EA engaging with mainstream (addressed) cause areas · 2019-05-09T12:48:08.597Z · score: 4 (3 votes) · EA · GW

Interesting post. Thank you for writing it. Attractive graphs.

I wonder if there could be a kind of "TripAdvisor"-style badge showing how well charities/interventions are doing, in a way that encourages them to improve.

You mention it, but a key strength and issue is that EA is exclusive. It only wants to do the most good, so it only recommends the best charities, but it therefore doesn't encourage middling charities/interventions to be better.

There is a hard question here: does EA want those charities to get better, or does it want them to end? Do we look down on individuals and organisations backing or using inefficient approaches? Have we become something akin to a purity cult? That might be unreasonable, since refusing to engage with successful, middle-efficiency, highly backed approaches could be a failure to improve them and do more good.

The real kicker, I think, is: do you get more good per $ by raising the high end or by shifting the whole graph to the right? Has anyone done any research on this? Either way, it seems useful not to become sneery/superior towards middle-efficiency approaches, and it doesn't cost much (I think, though perhaps I'm wrong) to be gracious to those we think are doing some good but not as much as they could be.

Comment by nathan on How do we check for flaws in Effective Altruism? · 2019-05-09T06:58:08.395Z · score: 3 (2 votes) · EA · GW

How can one incentivise the right kind of behaviour here? This isn't a zero-sum game: we can all win, and we can all lose. How do we inculcate the market with that knowledge, such that the belief that only one of us can win doesn't make us all more likely to lose?

Off the top of my head:

Some sort of share trading scheme.

Some guarantee from different AI companies that whichever one reaches AI first will employ people from the others.

Comment by nathan on Is EA unscalable central planning? · 2019-05-08T17:38:13.104Z · score: 3 (2 votes) · EA · GW

I suppose I don't understand why the aim isn't to grow the movement more to eventually influence legislation.

Likewise, if that will one day be the aim, at what point will the switch come?

This website is really functional and attractive (to me)

2019-05-08T08:56:19.942Z · score: 0 (5 votes)
Comment by nathan on If this forum/EA has ethnographic biases, here are some suggestions · 2019-05-08T07:07:37.288Z · score: 8 (3 votes) · EA · GW
I hate that I made you feel that way.

No need to apologise; I didn't mean my original comment individually, more as a kind of "gee whiz" at how much the post bombed in general. But as I say, that's okay: no one was unkind, they just didn't like what I wrote. I think it can be easy for communities like this to be very dog-eat-dog, so a little vulnerability/honesty might go a long way. Recently I have learned that when I am insecure enough to be tempted to "man up", it's often better to show vulnerability.

What is the issue with weighting votes by demographics, if I might try to understand it? Let's say tall EAs want a slightly different thing than short EAs. Scaling votes by height as if the forum were a survey means that if we have fewer tall EAs than is representative, their votes would be weighted more heavily. The top posts/comments would then be more likely to contain things that appealed to them, since (if they comprised half the representative population) they would control half the weighted votes. So new tall EAs would visit a site closer in tone/culture to what they would enjoy.

I don't see why this would result in a less rational site, but if it turned out certain issues were culturally more important to short EAs, it would be good to notice that, rather than attributing the difference to rationality.

Frankly, most counter-positions seem to amount to "representative voting control by minority groups would lead to a worse site", and I don't understand why that would be the case. If it led to increased growth of EA among minority groups, that seems a good thing.
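To make the analogy concrete, the reweighting I have in mind can be sketched in a few lines. This is purely a hypothetical illustration — the forum implements nothing like it, and the group labels, target shares, and `weighted_score` helper are all invented for the example — but it follows standard survey-reweighting practice: each vote is scaled by (target share of its group) / (observed share of that group among actual voters).

```python
# Hypothetical sketch of demographic-weighted voting, in the style of
# survey reweighting: votes from under-represented groups count for more.
from collections import Counter

def weighted_score(votes, target_shares):
    """votes: list of (group, vote_value); target_shares: group -> population share."""
    counts = Counter(group for group, _ in votes)
    total = len(votes)
    score = 0.0
    for group, value in votes:
        observed_share = counts[group] / total
        # Weight each vote so the group's total influence matches its
        # share of the reference population, not its share of voters.
        score += (target_shares[group] / observed_share) * value
    return score

# Tall EAs are 50% of the reference population but only 1 of 4 voters,
# so the tall vote is weighted 0.5/0.25 = 2x and each short vote 2/3x.
votes = [("tall", 1), ("short", 1), ("short", 1), ("short", -1)]
print(weighted_score(votes, {"tall": 0.5, "short": 0.5}))
```

The weighted score here comes out around 2.67 versus a raw score of 2, because the single tall upvote counts double while each short vote counts two-thirds.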

So my slightly clunky analogy aside, what do you think?

Comment by nathan on Is EA unscalable central planning? · 2019-05-07T12:21:34.984Z · score: 1 (1 votes) · EA · GW

Thanks :)

Do we acknowledge our activities will change as we grow? Are we transparent about our mission?

Comment by nathan on You Should Write a Forum Bio · 2019-05-07T12:06:54.085Z · score: 3 (2 votes) · EA · GW

I am about to add a bio as a result of this.

Comment by nathan on How do we check for flaws in Effective Altruism? · 2019-05-07T11:58:47.435Z · score: 5 (4 votes) · EA · GW
In fact, I think you would quickly be incentivized not to say anything you're uncertain about. At best, it would lead to excessive hedging which would make you appear less confident and likely hurt your career. At worst, you'd be so loathe to make a mistake that you wouldn't speak up on a topic you're uncertain about, even if your contributions could help someone.

I think you make solid points, though I think you could limit it to certain types of important post and certain types of claim, e.g. "only when I state something as true in my blog posts".

Likewise, I often think declaring our uncertainty would be better for us as a species. Learning to shout "I don't know" as loudly as the yesses and nos would, I think, be helpful in most debates.

Comment by nathan on If this forum/EA has ethnographic biases, here are some suggestions · 2019-05-07T07:46:43.378Z · score: 2 (2 votes) · EA · GW

First of all, youch, people did not like this post. That's okay.

I think we need to be careful not to get swept up in the diversity moment of popular culture or think that imperfect representation is an automatic condemnation.

I think we agree. I would prefer to understand problems rather than kneejerk solve them. Perhaps I was unclear.

However, I do think that changing voting to match demographic profiles is an interesting option, to see whether it would change the kind of content that bubbled up.

Comment by nathan on If this forum/EA has ethnographic biases, here are some suggestions · 2019-05-07T07:40:55.078Z · score: 1 (1 votes) · EA · GW

Yes, you are correct.

Comment by nathan on How do we check for flaws in Effective Altruism? · 2019-05-07T07:36:45.611Z · score: 5 (3 votes) · EA · GW

Thanks for responding (and for encouraging me, I think, to write this in the first place).

Only change your life by an amount proportionate with your trust in the change you plan to make.

Sure. The point I am trying to make is that I would pay to have some of that work done for me, and if enough people would, then you could pay someone to do it. I don't think we disagree about the thinking that needs to be done, but I am less inclined to do it myself, and less trusting that I will do it well, so I would prefer an infrastructural solution.

Comment by nathan on How do we check for flaws in Effective Altruism? · 2019-05-07T07:31:13.529Z · score: 5 (3 votes) · EA · GW

This is a clever/fun idea.

$2 via Paypal to the first person who convinces me this is a bad idea.

How much money do you have? How often are you wrong? To what extent do you want people to try and correct you all the time?

Actually I think it's a really good idea and if you try it, let me know how it works out.

Is EA unscalable central planning?

2019-05-07T07:25:07.387Z · score: 9 (6 votes)
Comment by nathan on I want an ethnography of EA · 2019-05-06T13:34:59.193Z · score: 1 (1 votes) · EA · GW

What should we do about ethnographic biases, some suggestions.

Comment by nathan on I want an ethnography of EA · 2019-05-06T13:33:40.402Z · score: 1 (1 votes) · EA · GW

Thanks, I wrote a post here

If this forum/EA has ethnographic biases, here are some suggestions

2019-05-06T11:20:51.105Z · score: -3 (15 votes)

How do we check for flaws in Effective Altruism?

2019-05-06T10:59:41.441Z · score: 39 (23 votes)
Comment by nathan on Can my filmmaking/songwriting skills be used more effectively in EA? · 2019-05-03T12:11:36.346Z · score: 1 (1 votes) · EA · GW

I have thought about this too. I am a parody rapper and wondered whether I could write raps about important topics (EA, AI, antibiotic resistance, etc.) to both educate and entertain. I've often wondered about setting up a philosophical, non-zero-sum rap battle night (i.e. not to "beat" the other party but to discuss the issues through rap).

If you'd like to get in touch, I'd love to chat.

My content is 5/10 at the moment, but I'm hoping to film some new stuff soon.

Prepared songs:

Improvised Livestreams:

Comment by nathan on Does climate change deserve more attention within EA? · 2019-05-03T09:47:27.460Z · score: 4 (3 votes) · EA · GW

Thanks for your hard work in writing this. I was impressed by the depth of thought and how well it was linked to other articles.


I agree climate change is not an x-risk, and though EA shouldn't focus on it, we should probably discuss it a bit more and bring our critical thinking to help solve the problem more efficiently. Not discussing it seems like at best an oversight, and at worst, harmful.

This seems like a wise comment. We should all be open to criticism, particularly if we are disagreeing or taking a nuanced view on something the majority of people think is really important or that they expect us to care about.

2. Overall, it seems reasonable to me that EA resources might be ill spent on climate change, since there is already so much money and so many people going into that work. Several of my non-EA friends are deciding to fight climate change of their own accord. It would be better if they used an EA methodology, and better still if the people at the top of climate orgs did, but much of what you describe seems answered by the fact that additional EAs in that community may (or may not) do much additional good compared to intervening elsewhere. I don't know how to judge that.

Perhaps, in light of this, it would be better if, rather than getting jobs in these spheres, EAs were encouraged to vote wisely, donate wisely, engage with their communities, and be informed about climate change. Maybe this is what you had in mind anyway.

Comment by nathan on Scrupulosity: my EAGxBoston 2019 lightning talk · 2019-05-03T06:47:49.207Z · score: 6 (3 votes) · EA · GW

I can see a lot of this in myself. I care about EA partly because it seems like a great way to help the world and partly because it assuages my need for certainty and good decision making.

In my case, scrupulous symptoms are related to feelings of worthlessness, like I alone have to live up to this perfect moral standard because somehow I can’t afford to be as immoral as a normal person

This in particular rings true. Having come from a strongly religious background, I often feel perfection is the only aim of life.

I believe that EA selects for scrupulous people (like me), which concentrates these tendencies in a very connected community.

My suggestion is that we affirm the truths which run opposite to these tendencies, i.e. if you don't care for yourself, you will be less effective and less able to enjoy the good you are doing, because your own wellbeing will suffer. We should affirm "unspoken truths", because in not doing so we might accidentally forget to enact them.

Thanks for this good piece. Quite challenging.

Comment by nathan on I want an ethnography of EA · 2019-05-03T06:33:49.859Z · score: 4 (3 votes) · EA · GW

To what extent should we fund a counter-organisation to, say, 80,000 Hours to re-research its decisions: an independent watchdog, so to speak?