Posts

We Did AGISF’s 8-week Course in 3 Days. Here’s How it Went 2022-07-24T16:46:30.261Z
[Linkpost] Eric Schwitzgebel: Against Longtermism 2022-01-06T14:15:50.439Z
Has there been much work on figuring out the impact of plant-based foods? 2021-01-24T19:12:42.026Z
ag4000's Shortform 2021-01-11T20:13:41.421Z

Comments

Comment by ag4000 on Some updates in EA communications · 2022-08-04T22:34:48.797Z · EA · GW

I enjoyed the new intro article, especially the focus on solutions.  Some nitpicks:

  • I'm not sure that it's good to use 1DaySooner as the second example of positive EA interventions.  I agree that challenge trials are good, but in my experience (admittedly a convenience sample), a lot of people I talk to are very wary of challenge trials.  I worry that including it in an intro article could create needless controversy/turn people away.
  • I also think that some of the solutions in the biodefense section are too vague.  For example, what exactly did the Johns Hopkins Center for Health Security do to qualify as important?  It's great that the Apollo Programme for Biodefense has billions in funding, but what are they doing with that money? 
  • I don't think it makes sense to include longtermism without explanation in the AI section.  Right now it's unexplained jargon.  If I were to edit this, I'd replace that sentence with a quick explanation of why such a huge effect on future generations matters, or delete the sentence entirely.
Comment by ag4000 on We Did AGISF’s 8-week Course in 3 Days. Here’s How it Went · 2022-07-25T20:02:00.208Z · EA · GW

Thanks for writing this up so concisely -- I think this is a nice list of pros and cons.  I agree that the weekly/seminar model works better for virtual reading groups.  I certainly would not want to spend 6+ continuous hours on Zoom for a reading group.

Comment by ag4000 on We Did AGISF’s 8-week Course in 3 Days. Here’s How it Went · 2022-07-25T19:54:47.519Z · EA · GW

I'm not sure what all of the participants' motivations were for joining (I should've gathered that info).  As background, we mostly publicized the intensive to members of MIT EA interested in AI safety and to members of Harvard EA.  These are the main motivations I noticed:

  • Considering pursuing AI safety technical research as a career, and thus wanting to develop a foundation/overview (~2 participants);
  • Wanting to learn about an important EA cause area to get a more well-rounded view of EA, or to help with work in an adjacent cause area like AI governance (~2 participants);
  • Shoring up/filling in gaps in knowledge about AI safety, already planning to work in AI safety (~2 participants).
Comment by ag4000 on We Did AGISF’s 8-week Course in 3 Days. Here’s How it Went · 2022-07-25T19:50:29.375Z · EA · GW

Good luck!  I'm excited to hear how this goes.

Comment by ag4000 on EAs should use Signal instead of Facebook Messenger · 2022-07-21T18:55:52.785Z · EA · GW

Agreed, although it's possible to use Messenger with a deactivated Facebook account, which seems to solve this issue.

Comment by ag4000 on Should we buy coal mines? · 2022-05-05T15:46:49.083Z · EA · GW

Back of the envelope calculation

Comment by ag4000 on What readings should we include in a "sequence" on global health and development · 2022-04-01T15:19:58.740Z · EA · GW

As an alternative to "Famine, Affluence, and Morality," there is Peter Unger's Living High and Letting Die, of which Chapter 2 is particularly relevant.  It's more philosophical (this could be a bad thing) and much more comprehensive than Singer's article.

This is the first of our cases:

The Vintage Sedan. Not truly rich, your one luxury in life is a vintage Mercedes sedan that, with much time, attention and money, you've restored to mint condition. In particular, you're pleased by the auto's fine leather seating. One day, you stop at the intersection of two small country roads, both lightly travelled. Hearing a voice screaming for help, you get out and see a man who's wounded and covered with a lot of his blood. Assuring you that his wound's confined to one of his legs, the man also informs you that he was a medical student for two full years. And, despite his expulsion for cheating on his second year final exams, which explains his indigent status since, he's knowledgeably tied his shirt near the wound so as to stop the flow. So, there's no urgent danger of losing his life, you're informed, but there's great danger of losing his limb. This can be prevented, however, if you drive him to a rural hospital fifty miles away. “How did the wound occur?” you ask. An avid bird‐watcher, he admits that he trespassed on a nearby field and, in carelessly leaving, cut himself on rusty barbed wire. Now, if you'd aid this trespasser, you must lay him across your fine back seat. But, then, your fine upholstery will be soaked through with blood, and restoring the car will cost over five thousand dollars. So, you drive away. Picked up the next day by another driver, he survives but loses the wounded leg.

Except for your behavior, the example's as realistic as it's simple.

Even including the specification of your behavior, our other case is pretty realistic and extremely simple; for convenience, I'll again display it:

The Envelope. In your mailbox, there's something from (the U.S. Committee for) UNICEF. After reading it through, you correctly believe that, unless you soon send in a check for $100, then, instead of each living many more years, over thirty more children will die soon. But, you throw the material in your trash basket, including the convenient return envelope provided, you send nothing, and, instead of living many years, over thirty more children soon die than would have had you sent in the requested $100.

Taken together, these contrast cases will promote the chapter's primary puzzle.

Comment by ag4000 on Bayes' Theorem explained · 2022-03-28T01:29:45.841Z · EA · GW

Thanks for sharing this!  I agree that learning about Bayes' Theorem is important for EAs, and really for anyone.  Small typo: it is Bayes' Theorem, not Baye's Theorem, as it's named after Thomas Bayes.
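For reference, a minimal statement of the theorem, where $H$ is a hypothesis and $E$ is the evidence:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$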

Comment by ag4000 on A New Book to Introduce People to Ethical Vegetarianism · 2022-02-10T05:41:53.384Z · EA · GW

I absolutely LOVE these dialogues; they're my go-to introduction to why I think animal welfare and veganism are so important.  I especially like to have people read them one day at a time, discussing each day with them after they've read it.  The dialogues are engaging and far more comprehensive for their length than anything else I know.

One criticism I have is that the dialogues don't say much about the conditions in which animals on factory farms live.  I find that one bottleneck is that people don't always believe factory farming is a big deal until they learn about the severity of the suffering on those farms.  I therefore plan to supplement reading the Dialogues with some other sources.

By the way, if you want a free, legal copy of the book, a previous draft was published in Between the Species.  You can find it here.

Comment by ag4000 on [deleted post] 2022-01-22T21:51:46.403Z

Does the short causal pitch not run the risk of limiting EA's scope too much to philanthropy?  To me, it seems to miss the core of EA: figuring out how best to improve the world, given the resources we have.

Comment by ag4000 on What self-help topics would you like better research/ resources on? · 2022-01-19T17:27:05.643Z · EA · GW

This is sort of vague, but I'd like to see more about whether/how to induce mindset shifts.  For example, for decreasing procrastination, there are sort of "quick fix" methods (e.g., blocking websites, creating routines) and others that try to get you to change your mindset or motivations (e.g., Nate Soares's Replacing Guilt).  I'm not sure whether there is any research on how these two broad methods of self-help compare, but I'd be interested to hear.  For example, to what extent are these approaches complementary?  In the procrastination example, does blocking websites effectively decrease people's urges to find distractions, inducing a mindset shift, or does it simply cause them to find new distractions?

Comment by ag4000 on What are some artworks relevant to EA? · 2022-01-17T15:24:03.549Z · EA · GW

Ted Chiang's "The Lifecycle of Software Objects" (included in one of his collections of stories, Exhalation) is a fascinating exploration of digital sentience.

Apuleius's The Golden Ass is an ancient novel (the only complete surviving Roman novel!) in which the protagonist accidentally turns into an ass.  Although I haven't read the novel, Peter Singer seems to think that it is a good vehicle for conveying empathy towards other animals.

J.M. Coetzee's The Lives of Animals is a peculiar story of a novelist (much like Coetzee himself) delivering a set of lectures on humans' treatment of the other animals, along with surrounding tensions and encounters. 

Comment by ag4000 on Please complete a survey to influence EU animal protection policies · 2022-01-07T17:16:18.726Z · EA · GW

Sorry if this is a very dumb question -- can non-EU people fill out the survey/will it make any difference if they do?  For example, I see that a small number of people from the US filled out the survey.  Are those just people from NGOs/consumer organizations or food business operators?

Comment by ag4000 on EA outreach to high school competitors · 2021-12-23T16:44:13.284Z · EA · GW

Unfortunately, at this point I have relatively limited contact with current LDers -- there are some I know, but not very well.  I do know some people who are important within the LD community (e.g., run debate camps or major tournaments), but I am not very involved in LD anymore.

Comment by ag4000 on EA outreach to high school competitors · 2021-12-18T13:48:18.992Z · EA · GW

I also wanted to chime in about debate.  For context, I did Lincoln-Douglas debate (LD) competitively throughout high school.  

I think many LDers could be good targets for outreach.  Many ideas from EA come up extensively in LD.  In particular: different moral theories and arguments for/against them, cost/benefit analysis, moral hedging to deal with moral uncertainty, arguments for existential risk reduction, and focus on existential risks.  Note that debaters bastardize many of these arguments and concepts, but I think this introduction is useful nonetheless.  LD was certainly where I first heard names like Bostrom, MacAskill, Singer, and Parfit. More generally, I think LD inculcates many attitudes and skills that can be useful for EAs.  Debating LD well requires extensive research of policies, thinking hard about how to apply moral theories to concrete problems, and thinking through both sides of issues. 

I should note a major caveat to what I said above.  Much of the discussion within the LD community is not the sort of EA-relevant debate I described.  There is a lot of sophistry and arguing over rules.  Moreover, the LD community leans fairly far left politically (at least based on the arguments many people read), so I imagine there could be some pushback to outreach efforts.

If anyone is interested in learning more about LD, or US high school debate more generally, I'm happy to talk about it! 

Comment by ag4000 on How Should Free Will Theories Impact Effective Altruism? · 2021-06-15T02:44:15.677Z · EA · GW

I'm no expert in this topic and haven't read Sam Harris's argument, but there are a couple of things I usually bear in mind:

1. If you're uncertain whether hard determinism is true (that is, the probability you assign to it is less than 1), then it seems you should still act as though you are not determined.  We can apply reasoning like Pascal's Wager: if determinism is false, then sadistic torture is terrible; if it's true, then we are indifferent.  Either way, acting as though morality has bearing comes out ahead (see the sketch below).

2. A more compelling response (although still contentious) is compatibilism.  I leave you to explore it here.
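To spell out the dominance reasoning in (1), here's a minimal sketch of the expected-value comparison (my framing, not Harris's): let $p > 0$ be the probability you assign to your choices not being determined (so that morality has bearing), and compare acting morally with acting immorally:

$$\mathbb{E}[\text{moral}] - \mathbb{E}[\text{immoral}] = p \cdot \underbrace{\left(V_{\text{moral}} - V_{\text{immoral}}\right)}_{\text{large if morality has bearing}} + (1 - p) \cdot 0 > 0.$$

So for any nonzero credence that morality applies, acting morally is the better bet in expectation.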

Comment by ag4000 on ag4000's Shortform · 2021-04-12T23:57:20.766Z · EA · GW

I was planning to donate some money to a climate cause a few months ago, and I decided to give some to Giving Green (this was after the post here recommending GG).  There were some problems with the money going through (unrelated to GG), but in any case I can still decide to send it elsewhere.  I'm thinking about giving it elsewhere because of the big post criticizing GG.  However, I still think GG is probably a good giving opportunity, given that it's at an important stage of its growth and seems to have gotten a lot of publicity.  Should I consider giving someplace else and doing more research, or should I stick with my plan of giving to GG?  (Sorry if this is vague -- let me know if I can fill in any details!)

Comment by ag4000 on ag4000's Shortform · 2021-01-17T22:04:58.712Z · EA · GW

Thanks so much! I've been doing some stuff related to GTD, but haven't read the whole book -- will do so.

Comment by ag4000 on ag4000's Shortform · 2021-01-11T19:23:16.937Z · EA · GW

Sorry if this isn't directly related to EA.  What is a good way to measure one's own productivity?  I tend to track the amount of time I spend on productive activities, but the discussion here seems to make a convincing case that measuring hours worked isn't the best approach.