Prospecting for Gold - EAGxOxford 2016 - edited transcript 2020-09-14T11:11:21.242Z · score: 26 (8 votes)
The Moral Value of Information - edited transcript 2020-07-02T21:02:30.392Z · score: 15 (9 votes)
Differential technological development 2020-06-25T10:54:53.776Z · score: 27 (14 votes)
Heuristics from Running Harvard and Oxford EA Groups 2018-04-24T10:03:24.686Z · score: 38 (33 votes)


Comment by james_aung on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-09-25T12:08:10.999Z · score: 3 (2 votes) · EA · GW

I believe you can resize images on old posts by dragging their bottom border down while in edit mode.

Comment by james_aung on Prospecting for Gold - EAGxOxford 2016 - edited transcript · 2020-09-17T10:00:24.699Z · score: 3 (2 votes) · EA · GW

I've now changed that section to:

"On the right is a factorisation which is mathematically trivial and looks like it just makes things more complicated. I've taken the expression on the left and added in a load of things which cancel each other out. But I hope I can justify this decomposition by virtue of it being easier to interpret and measure. So I'm going to present the case for why I think it is."

Do let me know if you'd prefer something different to that :)

Comment by james_aung on Prospecting for Gold - EAGxOxford 2016 - edited transcript · 2020-09-16T19:35:48.557Z · score: 3 (2 votes) · EA · GW

Thanks! I'll change that :)

Comment by james_aung on Prospecting for Gold - EAGxOxford 2016 - edited transcript · 2020-09-14T11:15:39.620Z · score: 5 (4 votes) · EA · GW

This is a heavily edited transcript of the popular talk "Prospecting for Gold". We created this edited version because we found the transcripts provided by CEA hard to follow, and thought there could be some value in condensing, clarifying, and cleaning up the transcript.

You can compare this version with CEA's version here. We'd love for you to comment with suggestions on how it could be improved further.

You can also read a transcript of Amanda Askell's talk 'The Moral Value of Information' here:

Comment by james_aung on EA cookbook? · 2020-08-15T22:13:09.186Z · score: 1 (1 votes) · EA · GW

Not a cookbook, but you might find this interesting. It shows 'How many hours did animals have to live on factory farms to produce various food products?'

Comment by james_aung on Defining Effective Altruism · 2020-08-15T17:12:14.291Z · score: 1 (1 votes) · EA · GW

Is there a way to read the finalised (rather than penultimate) article without purchasing the book? Perhaps you have your own PDF copy, Will?

Comment by james_aung on Center for Global Development: The UK as an Effective Altruist · 2020-08-10T22:49:09.436Z · score: 12 (5 votes) · EA · GW

The title of the CGD article is "The UK as an Effective Altruist"

Comment by james_aung on New member--essential reading and unwritten rules? · 2020-07-13T19:26:01.057Z · score: 1 (1 votes) · EA · GW

I like the book suggestions in this comment in another EA forum post

Comment by james_aung on New member--essential reading and unwritten rules? · 2020-07-13T19:18:49.174Z · score: 11 (7 votes) · EA · GW

Welcome to the community! And congratulations on your achievements so far!

It could be worth learning study skills so that you can do better in your degree and/or get your coursework done in less time, freeing up your time to learn other things, explore EA, or just have fun.

I was surprised when coming to university by how much people's study skills differed, and I don't think it's unreasonable to say that you can free up weeks (months?) of your time and save yourself a lot of stress through good study skills.

I’d recommend the Coursera course called Learning How to Learn.

Beyond this, university is a great time to try new things, experiment with new lifestyles and habits, and work on self-improvement. Going through the things in this list would get you off to a flying start, I reckon. I'd also recommend trying out the societies and clubs available at your university, in case you find something interesting, useful, or fun.

Comment by james_aung on Differential technological development · 2020-06-29T11:18:00.855Z · score: 1 (1 votes) · EA · GW

Indeed. Although there is still an upper limit, since there is surely some limit to how much value we can extract from a resource, and there are only a finite number of atoms in the universe.

Comment by james_aung on AI Governance Reading Group Guide · 2020-06-25T11:11:34.345Z · score: 1 (1 votes) · EA · GW

Do you have a template of the shared document that you used? Or was it just a fairly unstructured blank document?

Comment by james_aung on Differential technological development · 2020-06-25T10:58:17.181Z · score: 14 (6 votes) · EA · GW

I wrote this up because I wanted a single resource I could send to people that explained differential technological development.

I wrote it quite quickly, in about an hour, so I'm sure it's lacking in places, and I would appreciate any comments and suggestions people may have to improve it. You can also comment on a GDoc version of this here:

Comment by james_aung on Ask Me Anything! · 2019-08-21T12:43:03.923Z · score: 23 (15 votes) · EA · GW

I enjoy reading Phil's blog here:

Comment by james_aung on Problems with EA representativeness and how to solve it · 2018-08-05T20:53:41.741Z · score: 9 (9 votes) · EA · GW

Just wanted to say that I'd be really excited to read more of your thoughts on this. As mentioned above, I think many considerations and counter-considerations against x-risk work deserve more attention and exposure in the community.

I encourage you to write up your thoughts in the near-term rather than far future! :P

Comment by james_aung on Heuristics from Running Harvard and Oxford EA Groups · 2018-05-03T16:52:32.997Z · score: 2 (2 votes) · EA · GW

I think that makes sense and I agree with you. We also have run the sort of things you describe in Oxford.

Maybe 'don't teach' can be understood as 'prefer using resources to convey ideas, rather than teaching them yourself'.

I agree that we should aim to do outreach in (on-topic) introductory EA talks, so no disagreement here.

Comment by james_aung on Heuristics from Running Harvard and Oxford EA Groups · 2018-05-03T16:46:29.939Z · score: 3 (3 votes) · EA · GW


I think there are easy ways to make it not weird. Some tips:

1) Emailing from an official email account, rather than a personal one, if you've never met the person before.

2) Mention explicitly that this is 'something you do' and that, for newcomers, you'd like to welcome them into the community. This makes it less strange that you're reaching out to them personally.

3) Mention explicitly that you'll be talking about EA, and not other stuff.

4) It's useful to meet people in real life at an event first and say hello and introduce yourself there.

5) Don't feel like you have an agenda or anything; keep it informal. Treat it as if you were getting to know a friend better and have an enjoyable time.

6) Absolutely don't pressure people; just reach out and offer to meet up if they'd find it useful.

Comment by james_aung on Heuristics from Running Harvard and Oxford EA Groups · 2018-04-25T16:00:54.491Z · score: 5 (5 votes) · EA · GW

Thanks for the comment JoshP!

I've spoken a lot with the Cambridge lot about this. I guess the cruxes of my disagreement with their approach are:

1) I think their committee model selects more for willingness to do menial tasks for the prestige of being in the committee, rather than actual enthusiasm for effective altruism. So something like what you described happens where "a section become more high-fidelity later, and it ends up not making that much difference", as people who aren't actually interested drop out. But it comes at the cost of more engaged people spending time on management.

2) From my understanding, Cambridge viewed the 1-year roles as a way to 'lock in' people to engage with EA for a year and to create a norm of committee members attending events. But my model of someone who ends up being very engaged in EA is that excitement about the content drives most of the motivation, rather than external commitment devices. So I suppose roles play only a limited part in committing people to engage, but come at the cost of people spending X hours on admin when they could have spent X hours learning more about EA.

It's worth noting that I think Cambridge have recently been thinking hard about this, and also I expect their models for how their committee provides value to be much more nuanced than I present. Nevertheless, I think (1) and (2) capture useful points of disagreement I've had with them in the past.

Comment by james_aung on Heuristics from Running Harvard and Oxford EA Groups · 2018-04-25T15:50:07.833Z · score: 7 (7 votes) · EA · GW

Hey! Thanks for the comment.

I think it captures a few different notions. I'll try to spell out a few salient ones:

1) It pushes back against the idea that an outreach talk needs to cover all aspects of EA. For example, I think some 45-minute intro EA talks end up being really unsatisfactory because they only have time to lightly skim across loads of different concepts and cause areas. Instead, I think it can be OK, and even better, to do outreach talks that don't introduce all of EA but do demonstrate a cool and interesting facet of EA epistemology. For example, I could imagine a talk on differential vs absolute technological progress as a way to attract new people.

2) Pushes back against running introductory discussion groups. Sometimes it feels like you need to guide someone through the basics, but I've found that often you can just lend people books or send them articles and they'll be able to pick up the same stuff without it taking up your time.

3) Reframes particular community niches, such as a technical AI safety paper reading group, as also a potential entry-point into the broader community. e.g. People find out about the AI group since they study computer science and find it interesting and then get introduced to EA.