What’s the low resolution version of effective altruism? 2020-12-31T09:33:45.120Z
Evidence, cluelessness, and the long term - Hilary Greaves 2020-11-01T17:25:47.589Z
EARadio - more EA podcasts! 2020-10-26T14:32:41.264Z
Prospecting for Gold - EAGxOxford 2016 - edited transcript 2020-09-14T11:11:21.242Z
The Moral Value of Information - edited transcript 2020-07-02T21:02:30.392Z
Differential technological development 2020-06-25T10:54:53.776Z
Heuristics from Running Harvard and Oxford EA Groups 2018-04-24T10:03:24.686Z


Comment by james_aung on [link] Centre for the Governance of AI 2020 Annual Report · 2021-01-14T11:06:30.071Z · EA · GW

I think you might have accidentally linked to the 2019 report. The 2020 report seems to be here

Comment by james_aung on What’s the low resolution version of effective altruism? · 2020-12-31T14:45:13.529Z · EA · GW

(rough note) This seems to have strands of: 'rich people focused', 'rich people are more moral', 'E2G focus'

Comment by james_aung on What’s the low resolution version of effective altruism? · 2020-12-31T14:40:29.617Z · EA · GW

Nice! Could you do a version which is 70% lower resolution? 😁

Comment by james_aung on What’s the low resolution version of effective altruism? · 2020-12-31T14:39:40.197Z · EA · GW

It might be that SF has more people who are kinda into EA, such that they donate 10% to GiveWell, diluting the people who are representative of more extreme self-sacrifice

Comment by james_aung on What’s the low resolution version of effective altruism? · 2020-12-31T14:38:10.797Z · EA · GW

Interesting point about the idea that EA lets people off the moral hook easily: 'I'm rich, so I just donate, and I've done my moral duty and get to virtue signal'

It's interesting how that criticism applies to people who are wealthy, work a conventional job, and donate 10% to charities, but it doesn't seem valid against those who donate way more, like 50%+. That normally seems to be met with the response "wow, that's impressive self-sacrifice!" Same with those who might drastically shift their career.

Comment by james_aung on What’s the low resolution version of effective altruism? · 2020-12-31T14:35:17.946Z · EA · GW

'Charity for nerds' doesn't sound like an awful low res version compared to others suggested like 'moral hand-washing for rich people'.

'Charity for nerds' has nice properties like:

  • it's okay if you're not into EA (maybe you're just not nerdy enough), compared to framings where you're evil if you don't agree with EA
  • selects for nerdy people, who are willing to think hard about their work
Comment by james_aung on Responses to effective altruism critics · 2020-12-31T11:59:52.272Z · EA · GW

Effective Altruism and its Critics, Iason Gabriel, Journal of Applied Philosophy 

Comment by james_aung on What’s the low resolution version of effective altruism? · 2020-12-31T09:39:41.304Z · EA · GW

Tyler Cowen's low resolution version: "COWEN: A lot of giving is not very rational. Whether that’s good or bad, it’s a fact. And if you try to make it too rational in a particular way, a very culturally specific way, you’ll simply end up with less giving. And then also, a lot of the particular targets of effective altruism, I’m not sure, are bad ideas. So somewhere like Harvard, it has a huge endowment, it’s super non- or even anti-egalitarian. But it’s nonetheless a self-replicating cluster of creativity. And if you’re a rich person, Harvard was your alma mater, and you give them a million dollars, is that a bad idea? I don’t know, but effective altruists tend to be quite sure it’s a bad idea." from

Seems mostly focused on the idea that 'EA tries to shift existing philanthropy to be given using more rational decision-making procedures'.

Comment by james_aung on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T10:31:46.109Z · EA · GW

Thanks for the reply and taking the time to explain your view to me :)

I'm curious: My friend has been trying to estimate the likelihood of nuclear war before 2100. It seems like this is a question that is hard to get data on, or to run tests on. I'd be interested to know what you'd recommend they do.

Is there a way I can tell them to approach the question such that it relies on 'subjective estimates' less and 'estimates derived from actual data' more?

Or is it that you think they should drop the research question and do something else with their time, since any approach to the question would rely on subjective probability estimates that are basically useless?

Comment by james_aung on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-26T20:32:16.413Z · EA · GW

Thanks for taking the time to write this :)

In your post you say "Of course, it is impossible to know whether $1bn of well-targeted grants could reduce the probability of existential risk, let alone by such a precise amount. The “probability” in this case thus refers to someone’s (entirely subjective) probability estimate — “credence” — a number with no basis in reality and based on some ad-hoc amalgamation of beliefs."

I just wanted to understand better: Do you think it's ever reasonable to make subjective probability estimates (have 'credences') over things? If so, in what scenarios is it reasonable to have such subjective probability estimates, and what makes those scenarios different from the scenario of forming a subjective probability estimate of what $1bn in well-targeted grants could do to reduce existential risk?

Comment by james_aung on What areas are the most promising to start new EA meta charities - A survey of 40 EAs · 2020-12-23T20:54:36.900Z · EA · GW

No worries, thanks!

Comment by james_aung on What areas are the most promising to start new EA meta charities - A survey of 40 EAs · 2020-12-23T18:11:26.242Z · EA · GW

Thanks for this write-up! I'm excited about CE looking into this area. I was wondering whether you were able to share information about the breakdown of which organisations the 40 EAs you surveyed came from and/or which chapters were interviewed, or whether that data is anonymous?

Comment by james_aung on Open and Welcome Thread: December 2020 · 2020-12-20T15:19:41.580Z · EA · GW

Welcome, Roger! 😊 Congrats on moving towards a vegetarian diet, even though you previously thought you wouldn't have attempted it 👏

Comment by james_aung on Guerrilla Foundation Response to EA Forum Discussion · 2020-12-15T23:51:24.574Z · EA · GW

A quick guess of something that might be underpinning a worldview difference here is a differing conception of what counts as "harm". In the original post, the author suggests that a wealthy donor should try and pay reparations to reverse or prevent further harm in the specific sector in which the wealth was generated.

But I think most EAs have an unusual (but philosophically defensible) conception of harm which not only includes direct harm but also indirect harm caused by a failure to act.

So for an EA, if a wealthy donor is faced with a choice between

  1. paying reparations in the specific sector in which their wealth was generated
  2. donating to an intervention which would have a greater benefit than (1)

then choosing (1) over (2) would actually cause more harm (which is the point I believe you're trying to draw attention to in your comment). I think many EAs probably feel quite psychologically guilty about the harm they are causing in failing to do the best thing.

But I would say that most people don't conceptualise harm in this way. And so for most people, a failure to do (2) when it's better than (1) wouldn't be considered a 'harm'.

Comment by james_aung on EA Forum Prize: Winners for October 2020 · 2020-12-11T10:03:34.440Z · EA · GW

Thanks for the info!

Comment by james_aung on EA Forum Prize: Winners for October 2020 · 2020-12-11T08:52:52.353Z · EA · GW

Is the prize paid out to the recipient or is the prize a donation to a charity at the recipient’s choosing?

Comment by james_aung on Careers Questions Open Thread · 2020-12-06T09:20:13.731Z · EA · GW

Hey Will! Would you be able to say anything more about why you didn't like the 2 years of college that you did? What sort of college degrees are you looking into right now? :)

Comment by james_aung on How to best address Repetitive Strain Injury (RSI)? · 2020-11-22T21:08:17.580Z · EA · GW

I found using voice dictation on my phone and iPad pretty good; often now I just send emails and messages using my phone instead of my computer.

I find the Google speech recognition on the Google keyboard for Android pretty good, as well as Apple's speech recognition on iOS devices.

Comment by james_aung on Please Take the 2020 EA Survey · 2020-11-12T21:45:34.349Z · EA · GW

Thanks for organising this! I think the survey is very valuable! I was wondering if you could say more on why you "will not be making an anonymised data set available to the community"? That initially seems to me like an interesting and useful thing for community members to have, and I was wondering whether it was just a lack of resources/it being difficult that meant you weren't doing this anymore.

Comment by james_aung on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-02T19:48:28.207Z · EA · GW

Thanks I'll check it out!

Comment by james_aung on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-01T17:33:49.366Z · EA · GW

On October 25th, 2020, Hilary Greaves gave a talk on ‘Cluelessness in effective altruism’  at the EA Student Summit 2020. I found the talk so valuable that I wanted to transcribe it.

I made the transcript with the help of an AI speech-to-text platform, which I highly recommend. Thank you to Julia Karbing for help with editing.

If you'd like to make suggestions to the transcript, you may do so here: 

Comment by james_aung on EARadio - more EA podcasts! · 2020-10-26T14:36:51.370Z · EA · GW

I wanted to share EARadio on the Forum again. Although this project has been going for a long time, I think a lot of people probably aren't aware of its existence.

I know a lot of my EA friends often want to watch EA Global lectures but never get round to actually doing so. I think EARadio provides a great service in allowing people to consume this content in an easy and accessible way.

Comment by james_aung on Making More Sequences · 2020-10-22T23:19:14.021Z · EA · GW

I also really value sequences! I’m working on an (extremely janky) web app to read sequences of content, as a way to learn web development.

I hope to eventually make it into a nice app that people can use to easily make their own sequences of EA content from around the web, and for people to discover and read this content.

You can check it out here (doesn't work great on mobile yet, unfort) :

I’m keen to work on it more once I stop having RSI 😅, so if people do have comments and feedback, I’d love to hear them.

Comment by james_aung on I Want To Do Good - an EA puppet mini-musical! · 2020-10-08T11:37:58.455Z · EA · GW

I just saw this now and loved it, super excited for more content in the future!

Comment by james_aung on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-09-25T12:08:10.999Z · EA · GW

I believe you can resize images on old posts by dragging their bottom border down when in edit mode

Comment by james_aung on Prospecting for Gold - EAGxOxford 2016 - edited transcript · 2020-09-17T10:00:24.699Z · EA · GW

I've now changed that section to:

"On the right is a factorisation which is mathematically trivial and looks like it just makes things more complicated. I've taken the expression on the left and added in a load of things which cancel each other out. But I hope I can justify this decomposition by virtue of it being easier to interpret and measure. So I'm going to present the case for why I think it is."

Do let me know if you'd prefer something different to that :)

Comment by james_aung on Prospecting for Gold - EAGxOxford 2016 - edited transcript · 2020-09-16T19:35:48.557Z · EA · GW

Thanks! I'll change that :)

Comment by james_aung on Prospecting for Gold - EAGxOxford 2016 - edited transcript · 2020-09-14T11:15:39.620Z · EA · GW

This is a heavily edited transcript of the popular talk "Prospecting for Gold". We created this edited version because we found it hard to follow the transcripts provided by CEA and thought there could be some value in condensing, clarifying, and cleaning up the transcript.

You can compare this version with CEA's version here. We'd love for you to comment suggestions on ways this can be improved further.

You can also read a transcript of Amanda Askell's talk 'The Moral Value of Information' here:

Comment by james_aung on [deleted post] 2020-08-15T22:13:09.186Z

Not a cookbook, but you might find interesting. It shows 'How many hours did animals have to live on factory farms to produce various food products?'

Comment by james_aung on Defining Effective Altruism · 2020-08-15T17:12:14.291Z · EA · GW

Is there a way to read the finalised (instead of penultimate) article without purchasing the book? Perhaps, Will, you have a PDF copy you own?

Comment by james_aung on Center for Global Development: The UK as an Effective Altruist · 2020-08-10T22:49:09.436Z · EA · GW

The title of the CGD article is "The UK as an Effective Altruist"

Comment by james_aung on New member--essential reading and unwritten rules? · 2020-07-13T19:26:01.057Z · EA · GW

I like the book suggestions in this comment in another EA forum post

Comment by james_aung on New member--essential reading and unwritten rules? · 2020-07-13T19:18:49.174Z · EA · GW

Welcome to the community! And congratulations on your achievements so far!

It could be worth learning study skills so that you can do better in your degree and/or get your coursework done in less time, freeing up your time to learn other things, explore EA, or just have fun.

I was surprised when coming to university how much people’s study skills differed, and I don’t think it’s unreasonable to say that you can free up weeks (months?) of your time and save yourself a lot of stress through good study skills.

I’d recommend the Coursera course called Learning How to Learn.

Beyond this, university is a great time to try new things, try out new lifestyles and habits, and do self-improvement. Going through the things in this list would get you off to a flying start, I reckon. I’d also recommend trying out the societies and clubs available at your university, in case you find something interesting, useful, or fun.

Comment by james_aung on Differential technological development · 2020-06-29T11:18:00.855Z · EA · GW

Indeed. Although there is an upper limit still, since there surely is some limit to how much value we can extract from a resource and there are only a finite number of atoms in the universe.

Comment by james_aung on AI Governance Reading Group Guide · 2020-06-25T11:11:34.345Z · EA · GW

Do you have a template of the shared document that you used? Or was it a quite unstructured blank document?

Comment by james_aung on Differential technological development · 2020-06-25T10:58:17.181Z · EA · GW

I wrote this up because I wanted a single resource I could send to people that explained differential technological development.

I made it quite quickly in about 1 hour, so I'm sure it's quite lacking and would appreciate any comments and suggestions people may have to improve it. You can also comment on a GDoc version of this here:

Comment by james_aung on Ask Me Anything! · 2019-08-21T12:43:03.923Z · EA · GW

I enjoy reading Phil's blog here:

Comment by james_aung on Problems with EA representativeness and how to solve it · 2018-08-05T20:53:41.741Z · EA · GW

Just wanted to say that I'd be really excited to read more of your thoughts on this. As mentioned above, I think many considerations and counter-considerations against x-risk work deserve more attention and exposure in the community.

I encourage you to write up your thoughts in the near-term rather than far future! :P

Comment by james_aung on Heuristics from Running Harvard and Oxford EA Groups · 2018-05-03T16:52:32.997Z · EA · GW

I think that makes sense and I agree with you. We also have run the sort of things you describe in Oxford.

Maybe 'don't teach' can be understood as 'prefer using resources as a way of conveying ideas, rather than teaching people yourself'.

I agree that we should aim to do 'outreach' in '(on-topic) introductory' EA talks, and don't disagree here.

Comment by james_aung on Heuristics from Running Harvard and Oxford EA Groups · 2018-05-03T16:46:29.939Z · EA · GW
I think there are easy ways to make it not weird. Some tips:

1) Emailing from an official email account, rather than a personal one, if you've never met the person before.

2) Mention explicitly that this is 'something you do' and that, for newcomers, you'd like to welcome them into the community. This makes it less strange that you're reaching out to them personally.

3) Mention explicitly that you'll be talking about EA, and not other stuff.

4) It's useful to meet people in real life at an event first and say hello and introduce yourself there.

5) Don't feel like you have an agenda or anything; keep it informal. Treat it as if you were getting to know a friend better and have an enjoyable time.

6) Absolutely don't pressure people; just reach out and offer to meet up if they'd find it useful.

Comment by james_aung on Heuristics from Running Harvard and Oxford EA Groups · 2018-04-25T16:00:54.491Z · EA · GW

Thanks for the comment JoshP!

I've spoken a lot with the Cambridge lot about this. I guess the cruxes of my disagreement with their approach are:

1) I think their committee model selects more for willingness to do menial tasks for the prestige of being on the committee than for actual enthusiasm for effective altruism. So something like what you described happens, where "a section become more high-fidelity later, and it ends up not making that much difference", as people who aren't actually interested drop out. But it comes at the cost of more engaged people spending time on management.

2) From my understanding, Cambridge viewed the 1-year roles as a way of being able to 'lock in' people to engage with EA for a year and create a norm of committee members attending events. But my model of someone who ends up being very engaged in EA is that excitement about the content drives most of the motivation, rather than external commitment devices. So I suppose roles play only a limited part in committing people to engage, but come at the cost of people spending X hours on admin when they could have spent X hours on learning more about EA.

It's worth noting that I think Cambridge have recently been thinking hard about this, and also I expect their models for how their committee provides value to be much more nuanced than I present. Nevertheless, I think (1) and (2) capture useful points of disagreement I've had with them in the past.

Comment by james_aung on Heuristics from Running Harvard and Oxford EA Groups · 2018-04-25T15:50:07.833Z · EA · GW

Hey! Thanks for the comment.

I think it captures a few different notions. I'll try and spell out a few salient ones:

1) Pushes back against the idea that an outreach talk needs to cover all aspects of EA. e.g. I think some 45-minute intro EA talks end up being really unsatisfactory, as they only have time to skim lightly across loads of different concepts and cause areas. Instead, I think it could be OK and even better to do outreach talks that don't introduce all of EA but do demonstrate a cool and interesting facet of EA epistemology. e.g. I could imagine a talk on differential vs absolute technological progress as a way to attract new people.

2) Pushes back against running introductory discussion groups. Sometimes it feels like you need to guide someone through the basics, but I've found that often you can just lend people books or send them articles and they'll be able to pick up the same stuff without it taking up your time.

3) Reframes particular community niches, such as a technical AI safety paper reading group, as also a potential entry-point into the broader community. e.g. People find out about the AI group since they study computer science and find it interesting and then get introduced to EA.