Astronomical Waste: The Opportunity Cost of Delayed Technological Development - Nick Bostrom (2003) 2021-06-10T21:21:28.240Z
8+ productivity tools for movement building 2021-04-13T13:15:03.041Z
Evidence, cluelessness, and the long term - Hilary Greaves 2020-11-01T17:25:47.589Z
EARadio - more EA podcasts! 2020-10-26T14:32:41.264Z
Expected Value 2020-07-31T13:59:54.861Z
The Moral Value of Information - edited transcript 2020-07-02T21:02:30.392Z
Differential technological development 2020-06-25T10:54:53.776Z
Heuristics from Running Harvard and Oxford EA Groups 2018-04-24T10:03:24.686Z


Comment by velutvulpes (james_aung) on EARadio - more EA podcasts! · 2021-06-11T06:06:27.832Z · EA · GW

Done! Thanks for working on this! Do the other links still work fine?

Comment by velutvulpes (james_aung) on Should EA Buy Distribution Rights for Foundational Books? · 2021-06-10T15:15:22.701Z · EA · GW

I've set up a system for buying books for people on request. If people are interested in using it you can read more and express interest here:

Comment by velutvulpes (james_aung) on How much do you (actually) work? · 2021-05-21T15:58:09.212Z · EA · GW

I track my time (using a time-tracking app) and try to be quite strict about only counting 'sit down work time'. I can normally do around 3.5-4h of work a day. I normally start at 10am and finish around 5pm.

This matches my experience at college, where I found I could normally do around 4 hours of studying before feeling tired out.

It's easier for me to 'clock more hours' when I have more meetings. But I try to avoid meetings.

I find that I can get most of my things done within this time and would consider myself a quite productive person.

Comment by velutvulpes (james_aung) on CEA update: Q1 2021 · 2021-04-25T20:43:11.243Z · EA · GW

Thanks for explaining your view! I don’t really have super strong views here, so I don’t want to labour the point, but I thought I’d share my intuition for where I’m coming from. For me it makes sense to have thresholds at those places because they actually carve up the buckets of reactions better than the linear scale suggests.

For example, some people feel weird rating something really low and so they “express dislike” by rating it 6/10. So to me the lowest scorers and the 6/10ers probably have more similar experiences than their linear scores suggest. I claim this is driven by weird habits/something psychological about how people are used to rating things.

I think there’s a similar thing at the 7/8/9 distinction. When people think something is “okay” they just rate it 7/10. But when someone is actually impressed by something they rate it 9/10, which is only 2 points more but captures a quite different sentiment. From experience I’ve also noticed that some people use 9/10 in place of 10/10 because they just never give anything 10/10 (e.g. they understand what it means for something to be 10/10 differently from others).

The short of it is that I claim people don’t seem to use the linear scale as an actual linear scale, so it makes sense to normalise things with the thresholds; and I claim that the thresholds are in the right place mostly just from my (very limited) experience.

Comment by velutvulpes (james_aung) on CEA update: Q1 2021 · 2021-04-24T18:02:08.303Z · EA · GW

Thanks! I guess I think NPS is useful precisely because of those threshold effects, but agree not sure that it handles the discrimination between 6 and 1 well. Histograms seem great!

Comment by velutvulpes (james_aung) on CEA update: Q1 2021 · 2021-04-22T17:20:13.511Z · EA · GW

Would you be able to provide a Net Promoter Score analysis of your Likelihood to Recommend metrics? I find NPS yields different, interesting information from an averaged LTR, and it should be very straightforward to compute.
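In case it's useful, here's a minimal Python sketch of the standard NPS calculation (the sample ratings below are hypothetical, just for illustration):

```python
def nps(ratings):
    """Net Promoter Score from 0-10 'likelihood to recommend' ratings.

    Standard NPS buckets: promoters rate 9-10, passives 7-8,
    detractors 0-6. The score is %promoters minus %detractors,
    so it ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical LTR responses, just for illustration:
scores = [10, 9, 9, 8, 7, 7, 6, 3]
print(nps(scores))  # 3 promoters, 2 detractors out of 8 -> 12.5
```

Because passives drop out of the numerator entirely, this captures the threshold effects discussed above in a way a plain mean doesn't.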

Comment by velutvulpes (james_aung) on CEA update: Q1 2021 · 2021-04-22T09:32:36.912Z · EA · GW

Hey Brian. I'd have to ask the individuals who wrote up their docs, but the plan is definitely to eventually share more of these types of group writeups widely. They weren't written with a broad audience in mind, but I feel like several leaders would be keen to share their writeups more publicly after cleaning them up a bit. I'll nudge people on this and ask if they're keen.

Comment by velutvulpes (james_aung) on 8+ productivity tools for movement building · 2021-04-13T17:14:12.767Z · EA · GW

Thanks for the feedback! Will add some costs in

Comment by velutvulpes (james_aung) on 8+ productivity tools for movement building · 2021-04-13T15:06:24.015Z · EA · GW

Thanks for the great comment and suggestions!

Comment by velutvulpes (james_aung) on EA Debate Championship & Lecture Series · 2021-04-09T00:20:15.792Z · EA · GW

Some more discussion on competitive debating and EA 

Comment by velutvulpes (james_aung) on How much does performance differ between people? · 2021-03-26T11:30:28.932Z · EA · GW

Minor typo: "it’s often to reasonable to act on the assumption" probably should be "it’s often reasonable to act on the assumption"

Comment by velutvulpes (james_aung) on Some quick notes on "effective altruism" · 2021-03-25T18:20:55.045Z · EA · GW

A small and simple change that CEA could do is to un-bold the 'Effective' in their 'Effective Altruism' logo, which is used on EAG t-shirts among other places.

I find the bold comes across as unnecessarily smug emphasis in Effective Altruism.

Comment by velutvulpes (james_aung) on [link] Centre for the Governance of AI 2020 Annual Report · 2021-01-14T11:06:30.071Z · EA · GW

I think you might have accidentally linked to the 2019 report. The 2020 report seems to be here

Comment by james_aung on [deleted post] 2020-12-31T14:45:13.529Z

(rough note) This seems to have strands of: 'rich people focused' 'rich people are more moral' 'E2G focus'

Comment by james_aung on [deleted post] 2020-12-31T14:40:29.617Z

Nice! Could you do a version which is 70% lower resolution? 😁

Comment by james_aung on [deleted post] 2020-12-31T14:39:40.197Z

It might be that SF has more people who are kinda into EA, such that they donate 10% to GiveWell, diluting out the people who are representative of more extreme self-sacrifice.

Comment by james_aung on [deleted post] 2020-12-31T14:38:10.797Z

Interesting idea that EA lets people off the moral hook easily: 'I'm rich so I just donate, and I've done my moral duty and get to virtue signal.'

It's interesting how that applies to people who are wealthy, work a conventional job, and donate 10% to charities, but doesn't seem like a valid criticism of those who donate far more, like 50%+. That normally seems to be met with the response "wow, that's impressive self-sacrifice!" Same with those who might drastically shift their career.

Comment by james_aung on [deleted post] 2020-12-31T14:35:17.946Z

'Charity for nerds' doesn't sound like an awful low res version compared to others suggested like 'moral hand-washing for rich people'.

'Charity for nerds' has nice properties like:

  • it's okay if you're not into EA (maybe you're just not nerdy enough), compared to 'EA thinks you're evil if you don't agree with EA'
  • selects for nerdy people, who are willing to think hard about their work

Comment by velutvulpes (james_aung) on Responses to effective altruism critics · 2020-12-31T11:59:52.272Z · EA · GW

Effective Altruism and its Critics, Iason Gabriel, Journal of Applied Philosophy 

Comment by james_aung on [deleted post] 2020-12-31T09:39:41.304Z

Tyler Cowen's low resolution version: "COWEN: A lot of giving is not very rational. Whether that’s good or bad, it’s a fact. And if you try to make it too rational in a particular way, a very culturally specific way, you’ll simply end up with less giving. And then also, a lot of the particular targets of effective altruism, I’m not sure, are bad ideas. So somewhere like Harvard, it has a huge endowment, it’s super non- or even anti-egalitarian. But it’s nonetheless a self-replicating cluster of creativity. And if you’re a rich person, Harvard was your alma mater, and you give them a million dollars, is that a bad idea? I don’t know, but effective altruists tend to be quite sure it’s a bad idea."

Seems mostly focused on the idea of 'EA tries to shift existing philanthropy to be given using more rational decision making procedures' 

Comment by velutvulpes (james_aung) on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-27T10:31:46.109Z · EA · GW

Thanks for the reply and taking the time to explain your view to me :)

I'm curious: my friend has been trying to estimate the likelihood of nuclear war before 2100. It seems like this is a question that is hard to get data on, or to run tests on. I'd be interested to know what you'd recommend they do.

Is there a way I can tell them to approach the question such that it relies less on 'subjective estimates' and more on 'estimates derived from actual data'?

Or is it that you think they should drop the research question and do something else with their time, since any approach to the question would rely on subjective probability estimates that are basically useless?

Comment by velutvulpes (james_aung) on Strong Longtermism, Irrefutability, and Moral Progress · 2020-12-26T20:32:16.413Z · EA · GW

Thanks for taking the time to write this :)

In your post you say "Of course, it is impossible to know whether $1bn of well-targeted grants could reduce the probability of existential risk, let alone by such a precise amount. The “probability” in this case thus refers to someone’s (entirely subjective) probability estimate — “credence” — a number with no basis in reality and based on some ad-hoc amalgamation of beliefs."

I just wanted to understand better: do you think it's ever reasonable to make subjective probability estimates (have 'credences') over things? If so, in what scenarios is it reasonable to have such subjective probability estimates, and what makes those scenarios different from forming a subjective probability estimate of what $1bn in well-targeted grants could do to reduce existential risk?

Comment by velutvulpes (james_aung) on What areas are the most promising to start new EA meta charities - A survey of 40 EAs · 2020-12-23T20:54:36.900Z · EA · GW

No worries, thanks!

Comment by velutvulpes (james_aung) on What areas are the most promising to start new EA meta charities - A survey of 40 EAs · 2020-12-23T18:11:26.242Z · EA · GW

Thanks for this write-up! I'm excited about CE looking into this area. I was wondering whether you were able to share information about the breakdown of which organisations the 40 EAs you surveyed came from and/or which chapters were interviewed, or whether that data is anonymous?

Comment by velutvulpes (james_aung) on Open and Welcome Thread: December 2020 · 2020-12-20T15:19:41.580Z · EA · GW

Welcome, Roger! 😊 Congrats on moving towards a vegetarian diet, even though you previously thought you wouldn't have attempted it 👏

Comment by velutvulpes (james_aung) on Guerrilla Foundation Response to EA Forum Discussion · 2020-12-15T23:51:24.574Z · EA · GW

A quick guess of something that might be underpinning a worldview difference here is a differing conception of what counts as "harm". In the original post, the author suggests that a wealthy donor should try and pay reparations to reverse or prevent further harm in the specific sector in which the wealth was generated.

But I think most EAs have an unusual (but philosophically defensible) conception of harm which not only includes direct harm but also indirect harm caused by a failure to act.

So for an EA, if a wealthy donor is faced with a choice between

  1. paying reparations in the specific sector in which their wealth was generated
  2. donating to an intervention which would have a greater benefit than (1)

then choosing (1) over (2) would actually cause more harm (which is the point I believe you're trying to draw attention to in your comment). I think many EAs probably feel quite psychologically guilty about the harm they cause by failing to do the best thing.

But I would say that most people don't conceptualise harm in this way. And so for most people a failure to do (2), if it's better than (1), wouldn't be considered a 'harm'.

Comment by velutvulpes (james_aung) on EA Forum Prize: Winners for October 2020 · 2020-12-11T10:03:34.440Z · EA · GW

Thanks for the info!

Comment by velutvulpes (james_aung) on EA Forum Prize: Winners for October 2020 · 2020-12-11T08:52:52.353Z · EA · GW

Is the prize paid out to the recipient or is the prize a donation to a charity at the recipient’s choosing?

Comment by velutvulpes (james_aung) on Careers Questions Open Thread · 2020-12-06T09:20:13.731Z · EA · GW

Hey Will! Would you be able to say anything more about why you didn't like the 2 years of college that you did? What sort of college degrees are you looking into right now? :)

Comment by velutvulpes (james_aung) on How to best address Repetitive Strain Injury (RSI)? · 2020-11-22T21:08:17.580Z · EA · GW

I've found using voice dictation on my phone and iPad pretty good; these days I often just send emails and messages from my phone instead of my computer.

I find the Google speech recognition on the Google keyboard for Android pretty good, as well as Apple's speech recognition on iOS devices.

Comment by velutvulpes (james_aung) on Please Take the 2020 EA Survey · 2020-11-12T21:45:34.349Z · EA · GW

Thanks for organising this! I think the survey is very valuable. I was wondering if you could say more on why you "will not be making an anonymised data set available to the community"? That seems to me like an interesting and useful thing for community members to have, and I was wondering whether it was just a lack of resources/it being difficult that meant you weren't doing this anymore.

Comment by velutvulpes (james_aung) on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-02T19:48:28.207Z · EA · GW

Thanks I'll check it out!

Comment by velutvulpes (james_aung) on Evidence, cluelessness, and the long term - Hilary Greaves · 2020-11-01T17:33:49.366Z · EA · GW

On October 25th, 2020, Hilary Greaves gave a talk on ‘Cluelessness in effective altruism’  at the EA Student Summit 2020. I found the talk so valuable that I wanted to transcribe it.

I made the transcript with the help of an AI speech-to-text platform, which I highly recommend. Thank you to Julia Karbing for help with editing.

Comment by velutvulpes (james_aung) on EARadio - more EA podcasts! · 2020-10-26T14:36:51.370Z · EA · GW

I wanted to share EARadio on the Forum again. Although this project has been going for a long time, I think a lot of people probably aren't aware of its existence.

I know a lot of my EA friends often want to watch EA Global lectures but never get round to actually doing so. I think EARadio provides a great service in allowing people to consume this content in an easy and accessible way.

Comment by velutvulpes (james_aung) on Making More Sequences · 2020-10-22T23:19:14.021Z · EA · GW

I also really value sequences! I’m working on an (extremely janky) web app to read sequences of content, as a way to learn web development.

I hope to eventually make it into a nice app that people can use to easily make their own sequences of EA content from around the web, and for people to discover and read this content.

You can check it out here (it doesn't work great on mobile yet, unfortunately):

I’m keen to work on it more once I stop having RSI 😅, so if people do have comments and feedback I'd love to hear them.

Comment by velutvulpes (james_aung) on I Want To Do Good - an EA puppet mini-musical! · 2020-10-08T11:37:58.455Z · EA · GW

I just saw this now and loved it, super excited for more content in the future!

Comment by velutvulpes (james_aung) on Correlations Between Cause Prioritization and the Big Five Personality Traits · 2020-09-25T12:08:10.999Z · EA · GW

I believe you can edit the image size of images on old posts by dragging their bottom border down when in edit mode

Comment by velutvulpes (james_aung) on Prospecting for Gold (Owen Cotton-Barratt) · 2020-09-17T10:00:24.699Z · EA · GW

I've now changed that section to:

"On the right is a factorisation which is mathematically trivial and looks like it just makes things more complicated. I've taken the expression on the left and added in a load of things which cancel each other out. But I hope I can justify this decomposition by virtue of it being easier to interpret and measure. So I'm going to present the case for why I think it is."

Do let me know if you'd prefer something different to that :)

Comment by velutvulpes (james_aung) on Prospecting for Gold (Owen Cotton-Barratt) · 2020-09-16T19:35:48.557Z · EA · GW

Thanks! I'll change that :)

Comment by velutvulpes (james_aung) on Prospecting for Gold (Owen Cotton-Barratt) · 2020-09-14T11:15:39.620Z · EA · GW

This is a heavily edited transcript of the popular talk "Prospecting for Gold". We created this edited version because we found it hard to follow the transcripts provided by CEA and thought there could be some value in condensing, clarifying, and cleaning up the transcript.

You can also read a transcript of Amanda Askell's talk 'The Moral Value of Information' here:

Comment by james_aung on [deleted post] 2020-08-15T22:13:09.186Z

Not a cookbook, but you might find this interesting. It shows 'How many hours did animals have to live on factory farms to produce various food products?'

Comment by velutvulpes (james_aung) on Defining Effective Altruism · 2020-08-15T17:12:14.291Z · EA · GW

Is there a way to read the finalised (instead of penultimate) article without purchasing the book? Perhaps, Will, you have a PDF copy you own?

Comment by velutvulpes (james_aung) on Center for Global Development: The UK as an Effective Altruist · 2020-08-10T22:49:09.436Z · EA · GW

The title of the CGD article is "The UK as an Effective Altruist"

Comment by velutvulpes (james_aung) on New member--essential reading and unwritten rules? · 2020-07-13T19:26:01.057Z · EA · GW

I like the book suggestions in this comment in another EA forum post

Comment by velutvulpes (james_aung) on New member--essential reading and unwritten rules? · 2020-07-13T19:18:49.174Z · EA · GW

Welcome to the community! And congratulations on your achievements so far!

It could be worth learning study skills so that you can do better in your degree and/or get your coursework done in less time, freeing up your time to learn other things, explore EA, or just have fun.

I was surprised when coming to university how much people's study skills differed, and I don’t think it’s unreasonable to say that you can free up weeks (months?) of your time and save yourself a lot of stress through good study skills.

I’d recommend the Coursera course called Learning How to Learn.

Beyond this, university is a great time to try new things, try out new lifestyles and habits, and do self-improvement. Going through the things in this list would get you off to a flying start, I reckon. I’d also just recommend trying out the societies and clubs available at your university, in case you find something interesting, useful, or fun.

Comment by velutvulpes (james_aung) on Differential technological development · 2020-06-29T11:18:00.855Z · EA · GW

Indeed. Although there is still an upper limit, since there is surely some limit to how much value we can extract from a resource, and there are only a finite number of atoms in the universe.

Comment by velutvulpes (james_aung) on AI Governance Reading Group Guide · 2020-06-25T11:11:34.345Z · EA · GW

Do you have a template of the shared document that you used? Or was it a quite unstructured blank document?

Comment by velutvulpes (james_aung) on Differential technological development · 2020-06-25T10:58:17.181Z · EA · GW

I wrote this up because I wanted a single resource I could send to people that explained differential technological development.

I made it quite quickly in about 1 hour, so I'm sure it's quite lacking and would appreciate any comments and suggestions people may have to improve it. You can also comment on a GDoc version of this here:

Comment by velutvulpes (james_aung) on Ask Me Anything! · 2019-08-21T12:43:03.923Z · EA · GW

I enjoy reading Phil's blog here:

Comment by velutvulpes (james_aung) on Problems with EA representativeness and how to solve it · 2018-08-05T20:53:41.741Z · EA · GW

Just wanted to say that I'd be really excited to read more of your thoughts on this. As mentioned above, I think many considerations and counter-considerations against x-risk work deserve more attention and exposure in the community.

I encourage you to write up your thoughts in the near-term rather than far future! :P