Skill up in ML for AI safety with the Intro to ML Safety course (Spring 2023) 2023-01-05T11:02:34.215Z
Transcript of a talk on The non-identity problem by Derek Parfit at EAGxOxford 2016 2022-06-20T21:34:05.194Z
Advice on how to get a remote personal/executive assistant 2022-05-15T19:21:18.985Z
How to set up a UK organisation (Limited Company version) 2022-04-22T11:44:27.275Z
Explore sequences of EA content on the Global Challenges Library 2022-03-03T15:35:30.714Z
What are some resources (articles, videos) that show off what the current state of the art in AI is? (for a layperson who doesn't know much about AI) 2021-12-06T21:06:47.177Z
What’s the low resolution version of effective altruism? 2021-08-29T14:59:32.517Z
Shelly Kagan - readings for Ethics and the Future seminar (spring 2021) 2021-06-29T09:59:33.645Z
Astronomical Waste: The Opportunity Cost of Delayed Technological Development - Nick Bostrom (2003) 2021-06-10T21:21:28.240Z
Get funding for your student group to buy productivity software 2021-04-13T13:15:03.041Z
Evidence, cluelessness, and the long term - Hilary Greaves 2020-11-01T17:25:47.589Z
EARadio - more EA podcasts! 2020-10-26T14:32:41.264Z
Expected Value 2020-07-31T13:59:54.861Z
The Moral Value of Information - edited transcript 2020-07-02T21:02:30.392Z
Differential technological development 2020-06-25T10:54:53.776Z
Heuristics from Running Harvard and Oxford EA Groups 2018-04-24T10:03:24.686Z


Comment by james (james_aung) on Announcing the EA Merch Store! · 2022-12-30T15:01:15.494Z · EA · GW

I thought the team behind the EAGx designs were really great and I loved them. Have you considered reaching out to them to make designs for your store?

Comment by james (james_aung) on Announcing the EA Merch Store! · 2022-12-30T14:57:57.187Z · EA · GW

I also think the website design seems a bit off to me

Comment by james (james_aung) on How to set up a UK organisation (Limited Company version) · 2022-10-17T10:59:36.744Z · EA · GW

Yeah, I think Wise could actually just work on its own.

Comment by james (james_aung) on How might we align transformative AI if it’s developed very soon? · 2022-09-02T03:21:37.464Z · EA · GW

It also appears that the link to ELK in this section is incorrect

  • Making use of an AI’s internal state,2 not just its outputs. For example, giving positive reinforcement to an AI when it seems likely to be “honest” based on an examination of its internal state (and negative reinforcement when it seems likely not to be). Eliciting Latent Knowledge provides some sketches of how this might look.
Comment by james (james_aung) on How might we align transformative AI if it’s developed very soon? · 2022-09-02T03:14:25.441Z · EA · GW

The link to ELK in this bullet point is broken.

  • It’s not currently clear how to find training procedures that train “giving non-deceptive answers to questions” as opposed to “giving answers to questions that appear non-deceptive to the most sophisticated human arbiters” (more at Eliciting Latent Knowledge).

It may be intended to point here: 

Comment by james (james_aung) on What happens on the average day? · 2022-08-17T15:27:28.920Z · EA · GW

This is cool, thanks for writing it!

Comment by james (james_aung) on Advice on how to get a remote personal/executive assistant · 2022-06-17T14:10:43.316Z · EA · GW

I also recommend for full-time remote executive assistants.

Comment by james (james_aung) on You Don't Need To Justify Everything · 2022-06-13T07:24:18.508Z · EA · GW

Also see

For more on criterion of rightness vs decision procedure

Comment by james (james_aung) on Advice on how to get a remote personal/executive assistant · 2022-05-25T06:46:35.541Z · EA · GW

I don't think my particular VAs have more capacity, but I believe Virtalent has other VAs ready to match with clients.

It is unclear to me whether I’ve just gotten lucky. But with Virtalent you can switch VAs and the minimum commitment is very low, which is why I think the best strategy is just to try.

Comment by james (james_aung) on Should we call them something other than retreats? · 2022-05-18T07:46:01.522Z · EA · GW

I like the term "Summit"

Comment by james (james_aung) on Bad Omens in Current Community Building · 2022-05-15T11:21:34.024Z · EA · GW

Hey Theo - I’m James from the Global Challenges Project :)

Thanks so much for taking the time to write this - we need to think hard about how to do movement building right, and it’s great for people like you to flag what you think is going wrong and what you see as pushing people away.

Here’s my attempt to respond to your worries with my thoughts on what’s happening!

First of all, just to check my understanding, this is my attempt to summarise the main points in your post:

My summary of your main points

We’re missing out on great people as a result of how community building is going at student groups. A stronger version of this claim would be that current CB may be selecting against people who could most contribute to current talent bottlenecks. You mention 4 patterns that are pushing people away:

  1. EA comes across as totalising and too demanding, which pushes away people who could nevertheless contribute to pressing cause areas. (Part 1.1)
  2. Organisers come across as trying to push particular conclusions to complex questions in a way that is disingenuous and also epistemically unjustified. (Part 1.2)
  3. EA comes across as cult-like, primarily through appearing to try too hard to be persuasive, pattern-matching to religious groups, and coming across as disingenuously friendly (Part 1.3, your experience)
  4. There aren’t as many ways for neartermist-interested EAs to get involved in the community, despite them being able to contribute to EA cause areas (Part 1.4)

My understanding is that you find patterns (2) and (3) especially concerning. So to elaborate on them, you’re worried about:

  • EA outreach is over-optimising on persuasion/conversion in a way that makes epistemically rigorous and skeptical people extremely averse to EA outreach. You feel like student group leaders are trying to persuade people into certain conclusions rather than letting people decide for themselves.
  • EA student group leaders are generally unaware and out-of-the-loop on how they are coming across poorly to other people.
  • EA student group leaders are often themselves pretty new to EA, yet are getting funded to do EA outreach. This is bad because, being so new, they won’t really know how best to do outreach.

You think these worrying patterns are being driven upstream by a strategic mistake of over-optimising for a metric of “highly engaged EAs”. This is a poor choice of metric because:

  • A large fraction of people who could excel in an EA career won’t get engaged in EA quickly, but will be slow to arrive at EA conclusions due to their desire to reason carefully and skeptically. Thus you worry that these people will be ignored by EA outreach because they don’t come across as a “highly engaged EA”.

You then suggest some possible changes that student group leaders could make (here I’m just focusing on changes that SG leaders could do):

  1. Don’t think in terms of producing “highly engaged EAs”; in general beware of over-optimising on getting people who quickly agree with EA ideas.
  2. Try and get outside perspectives on whether what you’re doing might be off-putting to others.
  3. Actively seek out criticisms and opinions of people who might have been put off by EA.
  4. Seek to improve your epistemics; do the hard and virtuous thing of being open to criticism even though it’s naturally aversive.
  5. Beware social dynamics that incentivise people to agree with conclusions in return for social approval.

Sorry that was such a long summary (and if I missed out key parts, please do let me know)! I think you’re making many great points.

Here are some of my thoughts in reply:

My thoughts in reply

Over-optimising on HEAs

  • I agree with all of your specific pieces of advice in your final section. I think they're great heuristics that every person doing EA outreach should try and adopt.
  • My overall impression is that many student group leaders also agree with the direction of your advice, but find it hard to implement in real life because in general it’s hard to do stuff right. My impression is that most student group leaders are super overstretched, have lots of university work going on, and are only able to spend several hours per week doing EA outreach work (and generally find it stressful and difficult to stay on top of things).
  • I think the core failure mode of “getting the people who already initially express the most interest/agreement in EA” does go on, but I think that what drives it is a general tendency to do what’s easier (which is true of any activity) instead of necessarily over-optimising on an explicit metric. Since group leaders are so time-constrained, it’s easier for them to talk to and engage with people who already agree because they don’t have the time or patience to grapple with people who initially disagree.
    • If group leaders were feeling a lot of pressure to get HEAs from funding bodies, this would be super bad. I’m not sure to what extent this is really going on: CEA’s HEA metric is kinda vague and I haven’t got the impression from group leaders I’ve talked to that people are trying to optimise super hard on it (would love to hear contrary anecdotes). In general I find most student groups to be small, somewhat chaotically run, and so not very good at optimising for anything in particular.
    • If this claim is true, then I think that would be an argument for investing more resources into student groups to get them to a state where they have more capacity to make better decisions and spend time engaging with Alice-types.

Here are some of my thoughts on EA coming across as cult-like:

  • I agree that EA can come off as weird and cult-like at times. I think this is because: (i) there’s a lot of focus on outreach, (ii) EA is an intense idea that people take very seriously in their lives.
  • I think it’s such a shame that EA comes across this way. At its core I think it’s because it’s so unusual to have communities that are this serious about things. To give a personal anecdote, when I was at university I felt pretty disillusioned and distant from my peers. I felt that things were so messed up in the world and it made me sad that many of my friends didn’t seem to notice or care. When I first met EAs I found it so inspiring how they were so serious about taking personal responsibility for making the world better, no matter what society’s default expectations were.
  • When I was first getting into EA I was really fervent about doing outreach, and I think I did a pretty bad job. It seemed so important to me that everyone should agree with EA ideas because of the huge amount of suffering that was going on in the world. I found it confusing and disheartening when many of those I talked to simply didn’t agree with EA, or seemed to agree but then didn’t do anything about it. I would argue back in an unconvincing way, which made little progress. Because EA conclusions seemed obvious to me, I didn’t get how people didn’t immediately also agree.
  • With all that in mind, here is a quick guess of additional heuristics (beyond your suggestions) that student leaders could bear in mind:
    • It’s not your job to make someone an EA: I think a better framing is to view your responsibility as making sure that people have the opportunity to hear about and engage with EA ideas. But at the end of the day, if they don’t agree it’s not your job to make them agree. There’s a somewhat paradoxical subtlety to it - through coming to peace with the fact that some people won’t agree, you can better approach conversations with a genuine desire to help people make up their own minds.
    • Look at things from an outsider's perspective: I don't have immediate thoughts on tactical decisions like how to use CRMs (although I do find the business jargon quite ugh) or book giveaways. It seems to me that there are good ways and bad ways to do these sorts of things. But your suggestion of checking in with non-EAs about whether they'd find it weird seems great, and so I just wanted to doubly reiterate it here!
    • Embrace the virtue of patience: I think it’s important to approach EA outreach conversations with a virtue of patience. It can be difficult to embrace because for EAs outreach feels so high-stakes and valuable. However, if you don’t have patience then you’ll be tempted to do outreach in a hurry, which leads to sloppy epistemics or, at worst, deceitfulness. A patient EA carefully explains ideas; a hurried EA aims to persuade.
    • I think it would be a shame if we lost the good qualities of EA that make it so unique in the world - that it’s a community of people who are unusually serious about doing the most good they can in their lives. But I think we can do better as a community at not coming across as cult-like by being more balanced in our outreach efforts, being mindful of the bad effects of aiming for persuasion, and coming to peace with the idea that some people just won’t be that into EA, and that’s okay (and that doesn’t make them a bad person).

Other strategy suggestions which I think could improve the status quo:

  • I’d be excited to see more EA adjacent and cause specific outreach. I think having lots of different brands and sub-communities broadens the appeal of EA ideas to different sorts of audiences, and lets people get involved in EA stuff to different extents (so that EA isn’t as all-or-nothing). I’d be keen to see people restart effective giving groups, rationality groups, EA outreach focused on entrepreneurs, and cause specific groups like animal welfare and AI alignment.

Thanks again for taking the time to write the post - it seems like it's generated great discussion and that it's something that a lot of people agree with :)

Comment by james (james_aung) on How to set up a UK organisation (Limited Company version) · 2022-04-27T12:13:25.932Z · EA · GW

UK lawyers

Comment by james (james_aung) on How to set up a UK organisation (Limited Company version) · 2022-04-26T14:44:54.753Z · EA · GW

Cool! Have you considered turning those notes into a post? Could be a great way for more people to see them.

Comment by james (james_aung) on How to set up a UK organisation (Limited Company version) · 2022-04-24T09:10:46.205Z · EA · GW

I'm not familiar with CIOs unfortunately, so don't know :(

Comment by james (james_aung) on How to set up a UK organisation (Limited Company version) · 2022-04-23T10:03:14.016Z · EA · GW

Great suggestions. I didn't know that Osome would also do company formations, thanks for the tip. I just listed the process I used.

Also excited to check out Starling and Free Agent. Thanks for the recommendations.

I agree that this post would be substantially enhanced with summaries of key responsibilities. I'd love for you to contribute to that and perhaps we can update this post and add you as a coauthor?

I'm probably not going to work on this post myself more, as I just wanted to spend a few minutes writing a quick post. But if you feel excited about it I think it could be valuable for you to draft extensions to this post :)

Comment by james (james_aung) on How to set up a UK organisation (Limited Company version) · 2022-04-22T13:57:06.689Z · EA · GW

USA recommendations

I recommend Stripe Atlas to set up a US for-profit entity. It costs $500 for them to set up your entity.

I also recommend for US banking

Comment by james (james_aung) on How to set up a UK organisation (Limited Company version) · 2022-04-22T13:51:50.534Z · EA · GW


Comment by james (james_aung) on University Groups Should Do More Retreats · 2022-04-07T10:39:02.469Z · EA · GW

Global Challenges Project have just released their Guide on Running a Retreat here: 

Comment by james (james_aung) on Get funding for your student group to buy productivity software · 2022-03-15T15:22:39.888Z · EA · GW

Ah thanks for letting me know. Yup it's still up here

Comment by james (james_aung) on Michael Page: Embracing the intellectual challenge of effective altruism · 2022-02-23T11:48:30.268Z · EA · GW


Good afternoon. I'm going to talk about embracing the intellectual challenge of effective altruism. This has been a theme of the conference, so I don't think this will come as a surprise to many of you, but I think it's an important direction for us to think about. So I think of it as being a dirty secret of effective altruism: that it's hard. And I'm going to say a bit more about what I mean by hard in a second, because I'm using that term in a somewhat unconventional way.

Now being hard could be concerning. One concern is, if effective altruism is perceived as hard, it might be less appealing. I call this the liability framing. I don't think the liability framing is necessary or correct. My view is that the qualities of effective altruism that make it hard also provide a rich source of opportunity. I call this the opportunity framing. But to take advantage of the opportunity framing, we have to build a community that aggressively embraces the intellectual challenge.

Okay. What do I mean by hard? Again, I'm using this term a bit unconventionally. I mean three basic things. One, we have a lot to learn about how to do the most good. Two, making progress on that question is intellectually demanding work. And three, because of those first two our views about how to change the world for the better are likely to change, probably quite a lot, even in the short term.

Effective altruism is hard for two primary reasons. One, it's new. It hasn't been around for very long and comparatively few resources have been devoted to understanding its implications. Here are a few dates just to give you some perspective. Even within these organizations, much of the research has gone to thinking about how to do the most good within a particular domain on a particular problem. Many of the questions that transcend problems, that transcend cause areas are only beginning to be explored even today.

A second reason why effective altruism is hard: its scope is enormous. To answer the question, what is the best action we can undertake, we need to know all of the following. One, the way the entire world is. This question is breathtakingly large in scope. It includes everything from the impact of distributing bed nets in Africa, to how different sentient beings experience pleasure and pain, to the timeline that certain technological advancements will follow.

Two, the actions available to us. Well, what can we do to make the world a better place? We can donate today to a charity. We can put money in a donor-advised fund, let it collect interest and donate one year from now or five years from now or 30 years from now. We can go work for a charity. We can do research to help others know where they should donate or where they should work. Maybe we can go into politics and try to move vast sums to address the most important problems in the world.

Third, what everyone else is doing and will do. We don't act in a vacuum. The decisions we make about where to donate or where to work, impact the decisions of those around us. What might seem most impactful if we consider just our own impact might not be when we consider this broader perspective.

And lastly, what we should value. How do we compare the interests of different species? How do we think about the interests of people alive today versus people that will be alive or might be alive in the future? What do we do with ideals like justice, democracy, equality? Do those have intrinsic value or are those rules of thumb for how we can make others generally better off? To understate the point, this is complicated.

A couple implications of the fact that effective altruism is hard: one, we should expect disagreement. Now, disagreement is often unproductive because somebody is uninformed or maybe self-interested, but because of the scope of effective altruism there's an incredible amount of space for informed, legitimate disagreement, where people who generally share the same values are reaching quite different conclusions about how to make the world a better place. And this is good. Informed disagreement staves off intellectual stagnation. The hope is that the better ideas will rise to the top. And by experimenting with different ideas, we can get information about those ideas that will allow us to update our beliefs going forward.

A second implication of the fact that effective altruism is hard is that we should expect to be wrong. And by wrong, I don't mean subjectively wrong. It could be that when you made a certain choice, given the information available to you, you made the right choice. But then the world changes in unexpected ways and with the benefit of hindsight, maybe we would've done more good if we'd made a different choice. And this is often a difficult pill to swallow because it means, again, with the benefit of hindsight, the decision we made in the past, wasn't the optimal decision.

So imagine if you'd known five years ago about developments in meat substitution research, or AlphaGo, or the establishment of Good Ventures, or CRISPR, or the success of the effective altruism movement. I expect many of us would've made pretty different decisions. I'd actually like to take a minute and if everybody could just, this is silly, but indulge me, literally think about something that has happened in the world that if you had known it one year ago or three years ago, whenever it makes sense for you, you would've made different decisions about how to make the world a better place. Please actually do this. I think it'll be interesting.

Okay. I know I didn't give you very much time. Obviously we can't change the past, but I think indulging in exercises like that does allow us to think about how new information can change our decisions. And that will give us better processes for changing our minds going forward.

Okay. What is the point? The crux of the issue is: is the fact that effective altruism is hard a good thing or a bad thing? One framing, what I'm calling the liability framing, is that it's a bad thing. It's bad because highlighting the intellectual challenge might make effective altruism less appealing, and this could happen in a couple ways. It could happen because it negatively impacts growth. People hear about how difficult it is to know what will actually help the world, and they don't engage. It could also happen because of decision paralysis. People who've already nominally signed up for effective altruism might be paralyzed by the number of choices before them, by the complexity of the world and not even act.

Here's an example of the way the message could be tweaked. If you donate $3,500 to the Against Malaria Foundation, you can save a child's life. It's a pretty compelling message. Now contrast that with this message. If you donate $3,500 to the Against Malaria Foundation, you can save a child's life, but you can't take these numbers literally, there might be more cost-effective ways to save lives from malaria, malaria might not be the most important problem in the world, and the long-run effects of saving lives from malaria are potentially significant and are poorly understood. One might be concerned that the second message would be off-putting and therefore would be tempted to gloss over everything after the 'but'. So the liability framing creates an incentive to oversimplify.

I don't think the liability framing is necessary or correct. I believe the liability framing misses two of effective altruism's most powerful qualities. And I'm going to put these in the context of what I'm calling the opportunity framing. And by the way, bear in mind, all of this is a gross oversimplification. So it's a bit ironic, but hopefully there's some useful message here. And if not, my apologies.

Opportunity framing: First quality - Effective altruism is cause and means neutral, meaning we strive to do the most good period. We don't strive to do the most good within a particular domain or to solve a particular problem. We don't strive to do the most good with a particular tool or method. Two - we're truth-seeking. We take it upon ourselves to figure out how to do the most good. And lastly, because effective altruism is new, because it's so big in scope, because our minds are so likely to change going forward, it's likely that the problems and interventions we've already identified are just the tip of the iceberg.

All right, let's consider how the two framings actually interact in a couple concrete contexts. One example is long-run or indirect effects. So the problem: an action might look promising, but its long-term effects are unknown. And I'll give you a simplified example, maybe cash transfers to a poor family in Uganda. And let's assume we know that that cash transfer will help this family. Let's also assume we know nothing about the longer term effects of that cash transfer. It could be good for the local economy, for example, it could be bad for the local economy, but we know nothing.

The liability framing: you might say "if I can't know whether I'll do good, why act?". Under the opportunity framing you look at what you know. Giving money to this family will help them. Fantastic. That's it. You act based upon that information. Meanwhile, you do research, you do experiments, you run trials. You figure out what the longer term effects would be. You figure out what type of cash transfer programs actually help the local economy, which ones might harm the local economy. Maybe cash transfer programs aren't a good idea in the first place. You look into that. You then design the programs going forward in a way that's more likely to help the local economy or have positive long-run effects. In so doing you're turning unknowns into knowns, and that allows you to substantially increase the amount of good we can do.

Another example: new problems or cause areas. So here's a silly example. You've spent the last few years earning to give, so you can donate to movement building organizations. And then at this conference, you meet somebody who tells you about this incredible opportunity to work on emerging technologies, policy, and government. And it would be more impactful. You're convinced. All right. So under one framing, liability framing, you just wasted years of your life working on the wrong problem. Under the opportunity framing: "wow, I can do even more good than I previously thought". And it sounds hokey, but the first framing is pretty natural.

Okay. Quick sidebar on decision paralysis. And I'm going to really oversimplify this. The landscape is complex and the decisions before you are numerous, so what do you actually do? So this is a rough and ready taxonomy of some options. Don't quote me on any of this. Okay. And what category you think we're in is going to depend on where you come down on the way the world is and what values you have in other areas of legitimate disagreement. So one category, known knowns, problems we can effectively address now. This might include eradicating certain tropical diseases or factory farming. What can you do? Do or fund direct work now.

Another category, known unknowns. These might include problems that are on our radar, but where we think the most effective way to address them will be available several years from now. Perhaps we need more research to actually have the technology to address this problem. So what can you do? You can do or fund problem-specific research now. You can donate to a donor-advised fund and then fund that direct work a few years from now, when the research is ready. Or you can develop useful skills, maybe develop the research skills you need to be able to work on that problem.

And the third category: unknown unknowns. These are important problems that aren't even on our radar yet. The Cause X that I believe Will mentioned in the introduction. If you think this is the most important problem, the one we don't even know about, well, you can do or fund foundational research to try to identify new problems, to completely reorient the way we think about these problems. Or you can put yourself in a position to develop the skills, to work on these problems in the future. Or you can maybe strengthen the effective altruism community, believing that is one of the more robust strategies to create a world in which people are ready and able to work on these problems in the future.

Okay. For the opportunity framing to actually be useful, we need to develop a community that can actually make progress on these problems, what I'm calling generally a truth-seeking community. And there are two general components to this. One is social norms. We need to be informed. That means everybody needs to be informed. If you're not informed, you talk about effective altruism in an oversimplified way. And that creates a community that's not well suited to make progress on these problems. So read, read widely, discuss. I also want to plug a website that we launched a few days ago. This is the seed of a much bigger project, but it's going to be a great repository for information that will help you stay informed.

Calibrate your confidence; meaning don't be overconfident, don't oversimplify, but also don't be underconfident. Contribute to the marketplace of ideas. If you see perspectives that are being underrepresented, speak up, know when you add value. And embrace disagreement and criticism. It's hard, but recall that there's an enormous legitimate space for disagreement and criticism is one of the most useful ways to improve your beliefs.

The other component to developing an effective truth-seeking community is to develop the body of knowledge that's relevant to making the world a better place. One way of doing this is to make effective altruism an academic discipline. Some of my colleagues are actually working on developing an institute at Oxford tentatively called the Institute for Effective Altruism or the Oxford Institute for Effective Altruism.

So to develop the body of knowledge that's relevant to doing good better, we need to invest in foundational research. Foundational research is a broad label that I think can include anything that is likely to show that we're wrong in a significant way. So foundational research might be looking for new problems or looking for information that might mean we are way undervaluing one problem relative to another, or that there's an entire way of thinking about addressing certain problems that we've neglected or that we've overinvested in.

Identify and draw upon research in other fields. Other established disciplines like statistics, economics, psychology have a lot to say about how to make the world a better place. And there's no need for us to reinvent the wheel. And organize and build on our own research. Much of the best research on how to make the world a better place has appeared on someone's blog and then been forgotten. We need to find a way to organize this research so we can develop the idea going forward.

All right, I want to close with a shameless plug. The Center for Effective Altruism is looking for highly talented, highly motivated people to work on some of these foundational problems. If you think you might be a candidate, send me an email. Thank you.

Transcript by

Comment by james_aung on [deleted post] 2022-02-11T16:11:47.921Z

Thank you to all who made submissions!

Our top bounty winner was Jackson Wagner

Our 2nd and 3rd prizes went to ludwigbald and Jay Bailey

Comment by james_aung on [deleted post] 2022-02-10T16:12:30.336Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:11:45.081Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:11:08.828Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:10:52.323Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:10:31.873Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:10:16.587Z

Thanks for your submission Ramiro :)

Comment by james_aung on [deleted post] 2022-02-10T16:08:00.790Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:07:39.237Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:05:18.796Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:04:25.199Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:03:47.977Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:03:21.700Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:03:01.290Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:02:09.490Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:01:54.368Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:01:24.622Z

Thanks for your submission!

Comment by james_aung on [deleted post] 2022-02-10T16:01:06.802Z

Thanks for your submission Jackson :)

Comment by james_aung on [deleted post] 2022-02-10T16:00:46.956Z

Thanks for your submission Pablo :)

Comment by james_aung on [deleted post] 2022-02-10T15:59:44.884Z

Seems right, I agree. Thanks for the feedback!

Comment by james_aung on [deleted post] 2022-02-09T09:50:07.125Z

That's reasonable - thanks for sharing! We might try and shake it up if we do a future round; will need to think about it.

Comment by james (james_aung) on Making More Sequences · 2022-02-08T16:55:23.180Z · EA · GW

More and better sequences here: 

Comment by james_aung on [deleted post] 2022-02-03T07:29:23.513Z

I have now made some small clarifications to the original post. If we decide to continue with the bounty program then I'll try and do more clarifications to our aims and why we're doing it this way :)

Comment by james_aung on [deleted post] 2022-02-03T07:24:21.279Z

Heavily relying on preexisting content is okay! I expect a good answer might just come from reviewing the existing literature and mashing together the content

Comment by james_aung on [deleted post] 2022-02-03T07:17:57.403Z

Good question! Yes these sorts of replies are allowed and I would be excited to see them!

Comment by james_aung on [deleted post] 2022-02-03T07:11:09.504Z

Good question!

If I were an onlooker I might be thinking "hmm looks like these people are trying to settle difficult EA questions in certain positions and are going to advertise those as the correct answers when there is still a lot of unsettled debate"

I think a good answer to the prompt would acknowledge the debate in EA and that people have different views.

I ought to clarify: for the purposes we'll be using our FAQ for, we want to be outlining and defending our urgent longtermist view. That's why in the prompt I'm looking for answers that fall on one particular side of the debate, i.e. the side that best represents the views and goals of our organisation, which are urgent longtermist. (If I weren't running this bounty I would just be writing an answer on that side myself; I'm looking to outsource that work here.)

I think this is a very different set of goals and views than those of the EA movement as a whole, and we're not trying to represent those - sorry for any confusion! I should have specified more clearly what our use case for the FAQ is. For example, I think this would probably be bad as a FAQ on

I also think that a lot of these questions will be unsettled. Nevertheless, for this bounty I want people to be able to indicate their tentative best-guess answer to the question in a decision-relevant way, without getting caught in the failure mode of just providing a survey of different views.

I think that the valuable discussion and debate over the answers to the question should continue elsewhere :)

Comment by james_aung on [deleted post] 2022-01-09T04:50:56.614Z

I think our service is easier to use. Group leaders should feel free to use whichever service they like.

Comment by james (james_aung) on Evidence, cluelessness, and the long term - Hilary Greaves · 2021-12-23T19:21:40.494Z · EA · GW

My understanding is that "complex cluelessness" is not essentially identical to "deep uncertainty", although "deep uncertainty" could mean a few things and I'm not sure exactly what you have in mind.

My understanding is also that the term is not essentially identical to "uncertainty", "Knightian uncertainty", "wicked problems", "extreme model uncertainty", or "fragile credences".

I do however think that EAs often use the term "cluelessness" incorrectly, in a way that makes it more similar to these other terms. I think this is because cluelessness is a confusing topic to wrap one's head around correctly.

Comment by james_aung on [deleted post] 2021-12-15T15:46:40.099Z

I've recently been enjoying using to schedule my day. You tell it all your tasks and when they're due, connect your calendars, and it automatically schedules all your tasks and blocks them out in your calendar (intelligently ordering them so that they all get done on time, and automatically resolving conflicts when meetings pop up).

Comment by james (james_aung) on We need alternatives to Intro EA Fellowships · 2021-11-19T03:16:00.972Z · EA · GW

This is great! I agree we need more experimentation beyond long intro 'fellowships'. I like all 4 of your suggested alternatives and hope you and others try them out and share your learnings.