Comments

Comment by michaelchen on The expected value of extinction risk reduction is positive · 2021-09-27T17:12:13.876Z · EA · GW

What are your thoughts on A longtermist critique of “The expected value of extinction risk reduction is positive”?

Comment by michaelchen on Avoiding Groupthink in Intro Fellowships (and Diversifying Longtermism) · 2021-09-27T03:21:22.123Z · EA · GW

Reading your suggested required readings, S-risks: An introduction and S-risk FAQ, I don't get a clear sense of why s-risks are plausible or why the suggested interventions are useful. I like S-risks: Why they are the worst existential risks, and how to prevent them (EAG Boston 2017) a bit more for illustrating why they are plausible, and I've added it as an optional reading in the uni chapter intro program I'm running. Unfortunately, it doesn't give more than a cursory overview of how s-risks could be reduced. I'd be hesitant about making an s-risk reading required, though, since I don't know of high-quality intro-level material where participants could learn more about s-risks. I also expect that s-risks might provoke a lot of discussion, so we would want to make sure discussion facilitators have the resources to be well-informed about the issues. Right now, I wouldn't feel confident discussing s-risks with intro program participants.

By the way, if you want to make your suggestions as easy as possible to add to the curriculum, you should also suggest discussion questions that facilitators can ask about the topic.

(Note: I'm not in charge of the EA Virtual Programs syllabus.)

Comment by michaelchen on GCRs mitigation: the missing Sustainable Development Goal · 2021-09-27T01:43:14.982Z · EA · GW

I see https://www.povertyactionlab.org/initiative/crime-and-violence-initiative and https://www.poverty-action.org/topics/crime, but based on a quick examination, I have no idea how cost-effective these interventions are. Does anyone have links providing an estimate of the cost-effectiveness of violence prevention?

Comment by michaelchen on Annotated List of Project Ideas & Volunteering Resources · 2021-09-27T01:06:29.398Z · EA · GW

I can't comment on the Google Doc version of this post. Can you add Impact CoLabs?

Comment by michaelchen on How to make the best of the most important century? · 2021-09-27T00:02:23.418Z · EA · GW

Figuring out how to stop AI systems from making extremely bad judgments on images designed to fool them, and other work focused on helping avoid the "worst case" behaviors of AI systems.

I haven't seen much about adversarial examples for AI alignment. Besides https://www.alignmentforum.org/tag/adversarial-examples (which only has four articles tagged), https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment, and https://cset.georgetown.edu/publication/key-concepts-in-ai-safety-robustness-and-adversarial-examples/, are there other good articles about this topic?

Comment by michaelchen on Fighting Climate Change with Progressive Activism in the US: CEA · 2021-09-26T01:08:27.604Z · EA · GW

Is progressive activism for the climate more cost-effective than nonpartisan activism (such as by Citizens' Climate Lobby)?

Comment by michaelchen on Introducing 'Playpumps Productivity System' · 2021-09-25T00:02:03.377Z · EA · GW

At least for me, I think daily accountability—and having to write a reflection if you fail to meet your goals—is a much stronger incentive than the threat of donating to PlayPumps a few months down the line.

Comment by michaelchen on The importance of optimizing the first few weeks of uni for EA groups · 2021-09-21T22:22:26.418Z · EA · GW

I'm surprised that retreats are low-effort to plan! What sorts of sessions do you run? What draws people in to attend?

Comment by michaelchen on Introducing EA to policymakers/researchers at the ECB · 2021-09-18T01:21:32.501Z · EA · GW

I think you'd get a lot more answers if you ask your question in the EA Groups Slack: https://efctv.org/groupslack

Giving What We Can's guide to talking about effective altruism has some good tips.

Inviting people to come to a nearby EA meetup or to apply to a locally hosted EA Fellowship sounds good.

One thing you could try to set up is a presentation about EA to whoever is in charge of philanthropy at the organization. I know someone who recently gave a presentation like that at the company where he interned, though he wasn't successful in persuading them to give to effective charities. It might still be worth a shot! The EA Hub has resources on introductory presentations, and you could talk to Jack Lewars from One for the World, since he has experience talking to corporations about effective giving.

Comment by michaelchen on Announcing riesgoscatastroficosglobales.com · 2021-09-14T20:45:15.606Z · EA · GW

Looks good! Some minor suggestions:

  • Remove "Made with Squarespace" in the footer
  • Add a favicon to the website

Comment by michaelchen on Lessons from Running Stanford EA and SERI · 2021-09-04T23:48:43.553Z · EA · GW

Hey Markus, I'm only getting started with organizing an EA group, but here are my thoughts:

  • I think 6 hours per week is enough time to sustain a reasonable amount of growth for a group, though I don't have enough personal experience to know. If you think funding would enable you to spend more time on community building, you can apply for a grant from the EA Infrastructure Fund. And you can always get Group Support Funding to cover expenses for things you think would help, such as snacks, flyers, and books.
  • I think the Intro EA Program is a surprisingly effective way for newcomers to learn about EA to a reasonably deep level, so I would prioritize running a fellowship. You can advertise the fellowship broadly across various mailing lists, Facebook groups, and group chats. You can find templates, including advertising templates, on the EA Hub's "Advertising Your EA Programs" page, and Yale EA's fellowship page lists some of the benefits for participants. If students are on campus and you have the time to facilitate discussions, it would be better for engagement to run a fellowship on campus instead of referring everyone to EA Virtual Programs. Facilitating takes about 1 hour per week per cohort if you have already done the readings in the Intro EA Program, and an extra 1.5 hours per week if you have not.
  • Marketing widely is probably quite helpful. Stanford EA and Brown EA have docs on marketing advice which I can also send you. I think the Intro EA Program is better for outreach than regular weekly discussions. I'm currently thinking of using weekly discussions for people who have already completed the Intro EA Program but aren't planning on committing to another fellowship like the In-Depth EA Program.
  • I believe GDPR only applies to businesses collecting data, not private individuals like you.
  • I think you shouldn't be hesitant about inviting people to do things, highlighting the benefits so they can feel motivated, etc., but you can't push people to do things.
  • Let me know if you'd like to set up a call and I can message you on the EA Forum.

Comment by michaelchen on You should write about your job · 2021-08-08T03:49:57.750Z · EA · GW

While I think a write-up of my experience as a web development intern wouldn't add much value compared to the existing web developer post, I'd be interested in writing a guide to getting a (top) software engineering internship or new-grad position as a university student. (Not saying my past internships are top-tier!) I'm planning on giving an overview of (or at least linking to resources about) how to write a great resume, prepare behavioral interview answers, prepare for technical interviews with LeetCode-style or system design questions, and so on. A lot has already been written about the topic on the general internet, so I would link heavily to those resources rather than reinvent the wheel, but I think it would be useful to have some practical job-seeking advice on the EA Forum to help with each other's career success. Does that sound like it would be on-topic for the EA Forum?

Comment by michaelchen on Phil Torres' article: "The Dangerous Ideas of 'Longtermism' and 'Existential Risk'" · 2021-08-06T22:45:00.106Z · EA · GW

Phil Torres's tendency to misrepresent things aside, I think we need to take his article as an example of the severe criticism that longtermism, as currently framed, is liable to attract, and reflect on how we can present it differently. It's not hard to read this sentence on the first page of (EDIT: the original version of) "The Case for Strong Longtermism":

The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.

and conclude, as Phil Torres does, that longtermism means we can justify causing present-day atrocities for a slight, let's say 0.1%, increase in the subjective probability of a valuable long-term future. Thinking rationally, atrocities do not improve the long-term future, and longtermists care a lot about stability. But with the framing given by "The Case for Strong Longtermism", there is a small risk, higher than it needs to be, that future longtermists could be persuaded that atrocities are justified, especially given how subjective these probabilities are. How can we reframe or redefine longtermism so that we, firstly, reduce the risk of longtermism being used to justify atrocities, and, secondly (and I think more pressingly), reduce the risk that longtermism is generally seen as something that justifies atrocities?

It seems like this framing of longtermism is a far greater reputational risk to EA than, say, 80,000 Hours' over-emphasis on earning to give, which is something 80,000 Hours apparently seriously regrets. I think "The Case for Strong Longtermism" should be revised to not say things like "we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years" without detailing significant caveats. It's just a working paper—it shouldn't be too hard for Greaves and MacAskill to revise. (EDIT: this has already happened, as Aleks_K has pointed out below.) If many more articles like Phil Torres's are written in other media in the near future, I would be very hesitant about using the term "longtermism". Phil Torres is someone who is sympathetic to effective altruism and to existential risk reduction, someone who believes "you ought to care equally about people no matter when they exist"; now imagine if the article were written by someone who isn't as sympathetic to EA.

(This really shouldn't affect my argument, but I do generally agree with longtermism.)

Comment by michaelchen on Narration: The case against “EA cause areas” · 2021-07-25T04:49:01.074Z · EA · GW

I listen to a good number of podcasts using the Pocket Casts app (or at least I did for a couple of years, up until a few weeks ago when I realized that I find YouTube explanations of tech topics a lot more informative). But when I'm browsing the EA Forum, I'm not really interested in listening to podcasts, especially podcast versions of posts I've already read and could easily re-read on the EA Forum. I think this is a cool project, but after the first couple of audio narration posts, which were good for generating awareness of the podcast, I don't think it's necessary to continue these top-level posts. It would still be worthwhile to experiment with not posting some episodes on the front page and seeing how that affects the number of listens.

Comment by michaelchen on Studies / reports on increased wellbeing for extreme poor · 2021-07-20T04:55:30.221Z · EA · GW

Somewhat low-effort answer:

I know Doing Good Better mentions the logarithmic relationship between income and wellbeing in one of its earlier chapters.

Relevant posts/articles:

Comment by michaelchen on There will now be EA Virtual Programs every month! · 2021-07-20T04:22:21.200Z · EA · GW

It seems inconvenient if applicants potentially have to fill out the Virtual Programs application form too and receive a second acceptance/rejection decision—could we have just one application form for them to fill out and one acceptance/rejection decision notification? I was thinking that hopefully we could have something like the following process:

  • Have applicants apply through the EA Virtual Programs form, or have a form specific to our chapter which puts data into the EA Virtual Programs application database. (I don't know enough about Airtable to know whether this is possible or unrealistic).
  • Include a multiple-choice application question about whether they prefer in-person or virtual. I think we can assume by default that Georgia Tech applicants prefer to be with other Georgia Tech students—or at least that should help with building the community at Effective Altruism at Georgia Tech.
  • Tell EA Virtual Programs how many in-person cohorts we could have and the availability of the in-person facilitators. Perhaps the facilitators could fill out the regular EA Virtual Programs facilitator form but with some info about whether or not they can facilitate on-campus.
  • EA Virtual Programs assigns people to in-person or virtual cohorts.
  • Something extra that might be nice: If they were rejected due to limited capacity and their application answers were not bad, automatically offer them an option to be considered for the next round (for EA Georgia Tech, I'm thinking we'd have rounds in September–October and February–March).

If it is true that people who are rejected tend not to reapply or engage with the EA group because they might feel discouraged, then it seems important to try to minimize how many people get a rejection from the Intro EA Program.

When we think about fellowships, we generally think about programs that are highly selective, are intensive, has funding, has various supports and opportunities (example 1, example 2).

Interesting, I didn't realize "fellowship" had those connotations before to such an extent! I mainly associated "fellowship" with its meaning in Christianity haha, where it isn't selective or prestigious, just religious.

Comment by michaelchen on Why I prioritize moral circle expansion over artificial intelligence alignment · 2021-07-18T04:09:40.398Z · EA · GW

In Human Compatible (2019), Stuart Russell advocates for AGI to follow preference utilitarianism, maximally satisfying the values of humans. As for animal interests, he seems to think they are sufficiently represented, since he writes that the AI will value them insofar as humans care about them. Reading this from Stuart Russell shifted me toward thinking that moral circle expansion probably does matter for the long-term future. It seems quite plausible (likely?) that AGI will follow this kind of value function, which does not directly care about animals, rather than broadly anti-speciesist values, since AI researchers are not generally anti-speciesist. In that case, moral circle expansion across the general population would be essential.

(Another factor is that Russell's reward modeling depends on receiving feedback occasionally from humans to learn their preferences, which is much more difficult to do with animals. Thus, under an approach similar to reward modeling, AGI developers probably won't bother to directly include animal preferences, when that involves all the extra work of figuring out how to get the AI to discern animal preferences. And how many AI researchers want to risk, say, mosquito interests overwhelming human interests?)

In comparison, if an AGI were designed to care only about the interests of people in, say, Western countries, that would instantly be widely decried as racist (at least in today's Western societies) and likely not be developed. So while moral circle expansion encompasses caring about people in other countries, I'm less concerned that large groups of humans will not have their interests represented in the AGI's values than I am about nonhuman animals.

It may be more cost-effective to take a targeted approach of increasing anti-speciesism among AI researchers and doing anti-speciesist AI alignment philosophy/research (e.g., more detail on how an AI following preference utilitarianism could also intrinsically care about animal preferences, or how to account for the preferences of digital sentience given that they can easily replicate and dominate preference calculations), but anti-speciesism among the general population still seems to be an important component of reducing the risk of a bad far future.

Comment by michaelchen on EA cause areas are just areas where great interventions should be easier to find · 2021-07-18T03:23:12.498Z · EA · GW

I see this sentence as suggesting capitalizing on the (relative) popularity of anti-racism movements and trying to use society's interest in anti-racism toward genocide prevention.

Comment by michaelchen on There will now be EA Virtual Programs every month! · 2021-07-17T21:40:47.503Z · EA · GW

The Introductory EA Virtual Program has been invaluable for getting enough people engaged in EA for me to be able to start a group at my university, and for that I'm extremely grateful to those who have helped organize it and develop the curriculum. If you're in the same position I was a few months ago, reasonably interested in starting an EA group but having difficulty finding enough people interested in EA, I'd highly recommend advertising the Introductory EA Program!

I'm interested in running a local in-person program at my university from September to October with the virtual program as overflow capacity, in case our capacity for in-person cohorts isn't enough to accept all quality applicants. Would that setup be possible?

Also, is there a reason that the program is no longer called a fellowship?

Comment by michaelchen on You are allowed to edit Wikipedia · 2021-07-06T18:02:46.472Z · EA · GW

I agree that people should edit a Wikipedia article directly or discuss it on the talk page instead of complaining about it elsewhere. Leaving a comment on the talk page can be a quick way of helping shift the consensus on a controversial topic. In my experience, though, unless it's a very popular page, it's often the case that when someone leaves a comment on the talk page describing overall changes that they want made, no one responds and no changes are made. Or someone responds with an agreement or disagreement, and nothing happens. Thus, especially if it's not a controversial change, be bold and edit the page directly yourself. The Visual Editor makes it very easy to make edits—you don't need to know the wiki markup syntax. Make sure that your edits are balanced (so the article has a neutral point of view) and that all claims in your edit are backed by citations to reliable sources. Watch the page so you get notified of further changes, such as if your edits are reverted in part or in whole in the coming days. If they are reverted and you disagree with those changes, politely discuss it on the talk page and ping the other editor with {{Reply to|username}}. If it's a semi-protected or extended-confirmed-protected page that newbie editors don't have permission to edit, you can leave an edit request on the talk page detailing the concrete changes that you want made.

Comment by michaelchen on Matsés - Are languages providing epistemic certainty of statements not of the interest of the EA community? · 2021-06-23T16:04:30.869Z · EA · GW

Something similar to explicit credence levels on claims is how Arbital has inline images of probability distributions. Users can vote on a certain probability and contribute to the probability distribution.

Comment by michaelchen on What should CEEALAR be called? · 2021-06-16T15:23:04.931Z · EA · GW

Effective Altruism Lodge

Comment by michaelchen on What should CEEALAR be called? · 2021-06-16T15:22:49.031Z · EA · GW

Effective Altruism House

Comment by michaelchen on Shouldn't 'Effective Altruism' be capitalized? · 2021-06-12T00:13:11.923Z · EA · GW

I've seen a mix of some people capitalizing effective altruism, maybe more often in communications with a more general audience, and some people not capitalizing it. I generally try to leave it uncapitalized, following CEA policy, but sometimes I capitalize it when that makes it clearer that effective altruism is an actual Thing, not just altruism that is effective, and not a generic made-up compound term like, say, "efficient humanitarianism" or "impactful lovingkindness". For example, if I write in a self-introduction "I'm passionate about effective altruism", lowercase effective altruism doesn't read like a term that you could google, at least to someone who has never heard of effective altruism before, so I would probably capitalize it there. If it's in a context where it's clear to the readers that effective altruism is a specific thing, I would leave it lowercase. Minor note: somehow, some Hack4Impact folks have described Effective Altruism as an organization, in some Slack messages and in a social impact talk they wrote, and capitalizing the term may have contributed to that misconception.

Comment by michaelchen on Matsés - Are languages providing epistemic certainty of statements not of the interest of the EA community? · 2021-06-11T23:29:50.445Z · EA · GW

Anecdotally, it's quite tiring to put credence levels on everything. When I started my blog I began by putting a probability on all major claims (and even wrote a script to hide this behind a popup to minimise aesthetic damage). But I soon stopped.

Interesting! Could you provide links to some of these blog posts?

Comment by michaelchen on gavintaylor's Shortform · 2021-06-11T23:22:17.122Z · EA · GW

According to Fleck's thesis, Matsés has nine past tense conjugations, each of which expresses the source of information (direct experience, inference, or conjecture) as well as how far in the past the event was (recent past, distant past, or remote past). Hearsay and history/mythology are also marked in a distinctive way. For expressing certainty, Matsés has a particle ada/-da and a verb suffix -chit, which both mean something like "perhaps", and another particle, ba, which means something like "I doubt that..." Unfortunately for us, this doesn't seem more expressive than what English speakers typically say. I've only read a small fraction of Fleck's 1279-page thesis, so it's possible that I missed something. I wrote a lengthier description of the evidential and epistemic modality system in Matsés at https://forum.effectivealtruism.org/posts/MYCbguxHAZkNGtG2B/matses-are-languages-providing-epistemic-certainty-of?commentId=yYtEWoHQEFuWCehWt.

Comment by michaelchen on Matsés - Are languages providing epistemic certainty of statements not of the interest of the EA community? · 2021-06-11T21:50:30.304Z · EA · GW

I'm pretty interested in linguistics, so after reading gavintaylor's comment, which you linked to, I decided to read part of David Fleck's 2003 doctoral thesis on the Matsés language. For distinguishing levels of certainty, Matsés has a particle ada/-da and a verb suffix -chit, which both mean something like "perhaps", and another particle, ba, which means something like "I doubt that..."; English speakers already naturally express these distinctions of certainty. Something distinctive that Matsés speakers do, though, is always mark statements about the past with an evidentiality suffix showing whether the proposition is known by direct experience, inference, or conjecture. By the way, it is not true that Matsés "obliges the speakers to indicate the certainty of information provided in a sentence".

The constructed language Lojban has a richer set of evidentials, derived from a variety of American Indian languages. It might be useful to think about including phrases like these in one's writing.

  • ja'o [jalge] I conclude
  • ca'e I define
  • ba'a [balvi] I expect / I experience / I remember
  • su'a [sucta] I generalize / I particularize
  • ti'e [tirna] I hear (hearsay)
  • ka'u [kulnu] I know by cultural means
  • se'o [senva] I know by internal experience
  • za'a [zgana] I observe
  • pe'i [pensi] I opine
  • ru'a [sruma] I postulate
  • ju'a [jufra] I state

DataPacRat from Less Wrong proposed Lojban particles to express subjective probabilities:

  • ni'uci'ibei'e: 0%
  • ni'upabei'e: 44.3%
  • ni'ubei'e: <50%
  • nobei'e: 50%
  • ma'ubei'e: >50%
  • pabei'e: 55.7%
  • etc.
  • ci'ibei'e: 100%
  • xobei'e: asking listener their level of belief

These words are based on logarithmic probability, which I don't really understand. I personally would find it a lot easier to just write a linear probability or percentage, such as "80% confidence". It might be useful to include subjective probabilities for various sentences in a document, like Technicalities used to do (according to their comment on this post), although they later quit.
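My best guess (this is my own interpretation, not something spelled out in DataPacRat's post) is that the digit in each word encodes log-odds in decibels, i.e. 10·log10(p/(1−p)), so −1 corresponds to about 44.3% and +1 to about 55.7%, matching the glosses above. A minimal Python sketch of that conversion, with function names that are just mine for illustration:

    import math

    def decibels_to_probability(db: float) -> float:
        """Convert decibels of log-odds, 10*log10(p/(1-p)), back to a probability p."""
        odds = 10 ** (db / 10)
        return odds / (1 + odds)

    def probability_to_decibels(p: float) -> float:
        """Convert a probability p to decibels of log-odds."""
        return 10 * math.log10(p / (1 - p))

    print(round(decibels_to_probability(-1), 3))   # 0.443, matching ni'upabei'e above
    print(round(decibels_to_probability(1), 3))    # 0.557, matching pabei'e above
    print(round(probability_to_decibels(0.8), 1))  # 6.0, i.e. "80% confidence" is about +6

On this reading, 0% and 100% sit at negative and positive infinity, which is presumably why the list uses ni'uci'i (negative infinity) and ci'i (infinity) for them.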

Matses language evidentials and epistemic modality

Here, I've summarized and quoted the key sections of Fleck's thesis that are relevant to our discussion of epistemic certainty. It's possible that there are important parts of the thesis that I've missed.

There's also a more recent paper by Fleck from 2007 specifically focusing on evidentials: https://www.jstor.org/stable/40070903. I haven't read this.

The remainder of this post is fairly long, so if you're not interested in learning more about Matses, don't feel like you have to read this.

5.5.10 Derivational Suffix -chit 'Uncertainty' (page 388, page 410 in the PDF):

This suffix expresses uncertainty, and cannot be used with the uncertainty particle ada/-da. Some example sentences:

  • It is possibly in tree cavities that the porcupine gives birth.
  • But others probably drink mashed fruit drink.
  • Perhaps he died.

5.6.1 Past tenses and evidentiality - page 397 (page 419 in the PDF):

Evidential devices in languages often code multiple types of information with respect to the reliability of the information being related by the speaker. These include: i) source of knowledge (e.g., hearsay); ii) degree of precision or truth, iii) probability of the information being true, and iv) the speaker's expectations concerning the probability of the statement.

What is different about Matses is that it codes evidentiality (source of knowledge) in one set of verbal inflectional suffixes, and epistemic modality (certainty/validation/commitment to truth of statement) by various other means, such as a verbal suffix that comprises a different position class, an uncertainty particle, and a dubitative particle (inflectional suffixes only code uncertainty in nonpast and future tenses). Thus, there is no need for Matses speakers to extend the function of evidentials to code epistemic modality because there are other markers that do this, which are outside the Matses evidential system. (page 398, 420 in the PDF)

Evidentiality is coded only in verbal suffixes that also mark past tense. Nine different tense-evidentiality inflectional suffixes mark a combination of one of three evidential distinctions (direct experience, inference, and conjecture) and one of three past tense distinctions (recent past, distant past and remote past). There are no past tense inflections besides these nine suffixes, so every time a speaker reports a past event, he must reveal the source of knowledge. This contrasts with epistemic modality, which is not an obligatory category in any tense (e.g., the absence of -chit 'Uncertainty' does not necessarily imply certainty; section 5.5.10). There is no morphological means of marking hearsay--it must be reported via direct quotation with the quotative verb inflected with one of the nine tense-evidentiality suffixes. Mythical and historical past is marked with the recent past inferential suffix, often reported in a quotative sentence.

Experiential suffixes

The three experiential suffixes are:

  • -o 'Recent Past: Experiential': immediate past to about one month ago
  • -onda 'Distant Past: Experiential': about one month to about 50 years ago
  • -denne 'Remote Past: Experiential': about 50 to about 100 years ago (approximate maximum human life span)

experiential refers to a situation where the speaker detects the occurrence of an event at the time that it transpires (or a state at the time that it holds true).

Inferential suffixes

The four inferential suffixes are (page 406, page 428 in the PDF):

  • -ac 'Recent Past: Inferential': immediate past to about one month ago
  • -nëdac 'Distant Past: Inferential': about one month ago to speaker's infancy
  • -ampic 'Remote Past: Inferential': before speaker's infancy
  • -nëdampic 'Remote Past: Inferential': before speaker's infancy. It's unclear if this differs from -ampic or is used for events even further in the past.

Some examples of inferential statements (page 406, 428 in the PDF):

  • Davy (evidently) left to go to sleep.
  • Davy (evidently) left, probably because he got bored.

with first-person forms, narrative and mythical past, ancient habitual activities, etc., where evidential distinctions are difficult or not relevant, inferential suffixes are used as a default. (page 424, page 446 in the PDF)

Conjecture suffixes

The two conjecture suffixes are (page 417, page 439 in the PDF):

  • -ash 'Recent Past Conjecture': immediate past to about one month ago
  • -nëdash 'Distant Past Conjecture': more than about one month ago

What is meant by "conjecture" here is that the speaker wishes to report the occurrence of an event or state that he did not witness, did not hear about from somebody else, and for which there is no resulting evidence. For example, if a dog is missing, the owner might conjecture that a snake bit it or a jaguar ate it.

Examples of conjectural statements:

  • Davy probably slept.
  • I suppose Davy got bored.

History/mythology

Statements about history/mythology from the "inaccessible past" use the suffix (-pa)-ac (cadennec) (page 424, page 446 in the PDF).

Hearsay

Hearsay is marked by a direct quote + quotative (ca or que) + any experiential or conjecture suffix (page 424, page 446 in the PDF).

9.4 Clause-level particles - page 724 (page 745 in the PDF):

  • ada/-da: a clause-level particle to express uncertainty. E.g., "I doubt if you're going to come here".
  • ba: a clause-level particle to express doubt. E.g., "ada debi cho-pa-ash ba" "I'm not sure if Davy has arrived."

Comment by michaelchen on Tan Zhi Xuan: AI alignment, philosophical pluralism, and the relevance of non-Western philosophy · 2021-06-11T20:35:38.015Z · EA · GW

An extended transcript of the talk is available at https://www.alignmentforum.org/posts/jS2iiDPqMvZ2tnik2/ai-alignment-philosophical-pluralism-and-the-relevance-of. There's also a lot more discussion there.

Comment by michaelchen on Launching 60,000,000,000 Chickens: A Give Well-Style CEA Spreadsheet for Animal Welfare · 2021-06-06T17:39:55.191Z · EA · GW

I initially thought that CEA here stood for Centre for Effective Altruism, and only later did I realize that it stood for cost-effectiveness analysis.

Comment by michaelchen on Introducing Rational Animations · 2021-06-06T01:24:20.538Z · EA · GW

I upvoted, but here are some comments I have. Looking at the titles of the first three videos, it wasn't clear how they related to rationality.

  • How Does Bitcoin Work? Simple and Precise 5…
  • Why Your Life Is Harder When You Are Ugly | The… (didn't notice the "Halo Effect" in the thumbnail at first)
  • Why You STRUGGLE to Finish on Time | The…

So perhaps people downvoted based on first impressions that it doesn't seem that related to rationality?

I enjoyed the Bitcoin, halo effect, and planning fallacy videos, but I didn't think that the video "If You Want to Find Truth You Need to Step Into Cringe" made a solid rational argument for the opening statement that "If you honestly seek truth, and if you decide to tell the truth, at some point, you will accept to appear cringe to the eyes of most people." The only evidence given was that telling other people the truth that you are a weeaboo is cringe, but that "truth" (which is about you yourself) is quite disanalogous to truths about the external world. Since the channel is branded as "rational", and you also include "Effective Altruism" in the channel description, I would like all the videos to have rigorous rational arguments. For this video, I would also have preferred a somewhat more serious overall tone in the script instead of leaning so heavily into internet culture, perhaps so that the videos seem more respectable, but I don't have a strong preference about this.

Regarding this YouTube comment of yours: "internet dynamics are probably relatable to a YT audience. Should I worry that using them as examples would make my videos cringe in the eyes of a sophisticated audience? Well, maybe but... I prefer to defy social rules rather than accept them. Sometimes it works, sometimes it backfires, but I observed that when it works, the payoff is usually very high. Plus, it's fun." Make sure to take downside risk seriously! I don't think any of your videos so far have been risky, and I don't think talking about internet culture is risky, but it's something to keep in mind. It would be good to avoid inadvertently damaging the reputation of rationality and/or effective altruism. Ben Todd's "Why not to rush to translate effective altruism into other languages" says "any kind of broad based outreach is risky because it’s hard to reverse. Once your message is out there, it tends to stick around for years, so if you get the message wrong, you’ve harmed years of future efforts. We call this the risk of ‘lock in’." I don't agree with that much of Ben Todd's post, but I think it's important to be thoughtful about the messaging that you're putting forth (not saying that you aren't). Before making the videos, do you get the scripts reviewed by other people and ask for honest feedback? Ideally, you'd get feedback from rationalists and effective altruists as well as people who are neither. If not, I would recommend doing so, especially since these videos are targeted toward the general public.

It could also help to have official subtitles. YouTube makes it pretty easy to add subtitles—you just paste in your script and then it automatically aligns it to the right time.

I think the animations are really awesome, not only in art quality but also in elucidating the subject of the video visually. It might be nice to have some non-white characters too (besides nonhuman animals like doge lol) for some more diversity.

Comment by michaelchen on Please Test/Share my New Vegan Video Game · 2021-05-30T20:44:27.525Z · EA · GW

How long does it take to play through this game?

Comment by michaelchen on Introducing Rational Animations · 2021-05-30T20:37:41.748Z · EA · GW

What topics are you thinking of making videos for in the future? (Or is this information reserved for Patreon supporters?)

You say "first three videos", but I can also see the following videos:

  • redstone neuron in minecraft!
  • minecraft 24 hour digital clock
  • etc.

I don't think that's a big deal though—you can keep them up and maybe they'll attract some more viewers to your channel.

Also, one month ago on your YouTube channel, you posted a link to your Patreon, but it actually links to https://www.patreon.com/rationalanima (which is dead) instead of https://www.patreon.com/rationalanimations.

Comment by michaelchen on Problems and Solutions in Infinite Ethics · 2021-05-06T03:23:40.193Z · EA · GW

The blog post links "Ridiculous math things which ethics shouldn't depend on but does" and "Kill the young people" are dead. You can find archived versions of the posts at the following links:

Comment by michaelchen on On Sleep Procrastination: Going To Bed At A Reasonable Hour · 2021-04-20T18:23:53.647Z · EA · GW

Some things I do:

  • Set my computer to automatically shut down at certain times, such as 12:10am, 12:30am, and 1am. I chose 12:10am because assignments are generally due at 11:59pm, so there's no need to stay up later. Since I'm using Windows, I can do this with Task Scheduler (see the sketch after this list): https://www.techrepublic.com/article/how-to-schedule-a-windows-10-shutdown-for-a-specific-date-and-time/
  • Set an alarm on my computer at 12am to remind me that my computer is going to shut down soon. This alarm plays only when the computer is awake (i.e., when I might be using the computer).
  • On my iPad and Android, I use website blockers (1Blocker Legacy for iPad, LeechBlock in Firefox for Android) to block all sites except a handful on a whitelist. I don't have any other apps like YouTube, Facebook, Reddit, etc. where I could get sucked down an infinite rabbit hole. So then all that's left to do is read some boring PDFs of textbooks.
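For the scheduled shutdown, here's a minimal sketch of how it could look driven from Python on Windows, using the built-in schtasks and shutdown commands; the task name and time below are just illustrative, and the Task Scheduler GUI in the linked article accomplishes the same thing:

    import subprocess

    # Register a daily Windows scheduled task that shuts the machine down at
    # 12:10 am, after a 60-second warning.
    subprocess.run(
        [
            "schtasks", "/Create",
            "/TN", "NightlyShutdown",    # hypothetical task name
            "/TR", "shutdown /s /t 60",  # shut down with a 60-second grace period
            "/SC", "DAILY",
            "/ST", "00:10",
        ],
        check=True,
    )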

Comment by michaelchen on Summer Research Internship: Stanford Existential Risks Initiative, Deadline April 21st · 2021-04-20T00:30:16.762Z · EA · GW

Nitpick: The first link leads to https://forum.effectivealtruism.org/posts/3mnLZtbeBSWZqQCLb/cisac.fsi.stanford.edu/stanford-existential-risks-initiative, which is a dead link. I think you mean to link to cisac.fsi.stanford.edu/stanford-existential-risks-initiative.

Comment by michaelchen on Concerns with ACE's Recent Behavior · 2021-04-16T21:04:25.739Z · EA · GW

I think this point from the Black VegFest 7 points of allyship (for the white vegan community) is reasonably straightforward:

White vegans/ARs will respect the sanctity of Black space and will not enter unless their presence is necessary. Black space is for the growth and betterment of Black people. Allyship and being accomplices begins with white people learning to respect Black space.

My understanding is that there can be spaces for only Black people to discuss, though white people can participate if necessary (presumably, if they are invited). Part of allyship is allowing for these spaces to exist.

That said, I’m still very confused by the second sentence of this quote:

There is no such thing as an equal playing field under white supremacist imperialist capitalist patriarchy. In the current system, white people have the power to usurp anything Black lives create simply by being white.

EDIT: oh okay the last one makes slightly more sense in context:

Recognize that where and how you live and work affects everything around you. There is no such thing as an equal playing field under white supremacist imperialist capitalist patriarchy. In the current system, white people have the power to usurp anything Black lives create simply by being white. White people deny loans to Black people in neighborhoods they ignore and in neighborhoods they take over. The tiny loans white people do offer Black businesses are often predatory and insufficient and still create a dependency on whiteness to be kind or agreeable. This is all Redlining. It continues today despite being made illegal in 1968. White people deny grants to us and redistribute them to large 100 year-old white organizations due to the grandfathering of a racist system that has modified its tone but not changed its actions. When the excuse is not financial stability, small white organizations who seemingly appear out of nowhere “win” resources because they knew someone white.

Comment by michaelchen on Concerns with ACE's Recent Behavior · 2021-04-16T20:52:25.174Z · EA · GW

I found this post to be quite refreshing compared to the previous one criticizing Effective Altruism Munich for uninviting Robin Hanson to speak. I’m not against “cancel culture” when it’s cancelling speakers for particularly offensive statements they’ve made in the past (e.g., Robin Hanson in my opinion, but let’s not discuss Robin Hanson much further since that’s not the topic of this post). Sometimes though, cancelling happens in response to fairly innocuous statements, and it looks like that’s what ACE has done with the CARE incident.

Comment by michaelchen on EA Debate Championship & Lecture Series · 2021-04-08T17:09:36.284Z · EA · GW

Sure, debate may involve a lot of techniques that are counterproductive to truth-seeking, and I wouldn't want people to write on the EA Forum like it's a debate, for example. However, I think there are many places where it would help to be able to convey more convincing arguments even if being more convincing doesn't improve truth-seeking—speaking with non-EAs about EA, for example.

Comment by michaelchen on EA Debate Championship & Lecture Series · 2021-04-08T16:58:33.780Z · EA · GW

Are there recordings of the debates? I'd be interested in watching them.

Comment by michaelchen on EA for Jews - Proposal and Request for Comment · 2021-04-07T02:20:03.583Z · EA · GW

If necessary, it might be good to frame the arguments from religious texts as connecting with traditional Jewish thought, not in a way that demands a belief (or lack of belief) in the literal accuracy of the Talmud—basically what (my understanding of) Reform Judaism does. It might be good to intersperse religious arguments with secular arguments as well.

Comment by michaelchen on Contact with reality · 2021-02-18T17:05:56.277Z · EA · GW

Out there, though, you can meet real people, with their own rich and complex lives; you can make real friends, and be part of real relationships, communities, and institutions. You can wander cities with real history; you can hear stories about things that really happened, and tell them; you can stand under real skies, and feel the heat of a real sun. People out there are doing real science, and discovering real things. They’re barely beginning to understand the story they’re a part of, but they can understand. You can understand, too; you can be a part of that story, too. No one knows, yet, what’s going to happen.

This is beautifully written, and it helps me feel more appreciative of our world. I think I'd still prefer the experience machine in this scenario though, just due to the hedonic difference.

Comment by michaelchen on EA Birthday Posts: An Alternative to Fundraisers · 2021-02-06T19:27:23.785Z · EA · GW

Yeah sure!

Comment by michaelchen on EA Birthday Posts: An Alternative to Fundraisers · 2020-12-29T02:29:37.499Z · EA · GW

I ended up making a post based on Kuhan's. It's somewhat shorter and has less of a focus on careers. I got 14 reactions (for reference, I have a bit over 600 friends on Facebook). I wonder if accompanying the post with a new profile picture would have been a good way to get more engagement haha.

My birthday is today! For the past two years, I did birthday fundraisers, but this year, instead of gifts or donations, I’m asking for just five minutes of your attention.

In eighth grade, I discovered a social impact–oriented career site called 80000hours.org and learned about a new social movement called effective altruism.

Previously, I knew about various issues like environmental degradation or extreme global poverty, but never really knew what to do about them. I had learned a bit from UNICEF posters around school about how cheap it was to provide essential goods and medicine in parts of the developing world, but never saw any examples of ordinary people taking that opportunity seriously.

I had learned about income inequality of the United States, but not the much more extreme inequality on a global scale. It turns out that someone making $70,000 per year is in the top 1% of the global income distribution, while 700 million people live on less than $2 per day (~$700 per year, and this is adjusted for lower cost of living in poorer countries). Millions of people—often children—die each year of easily preventable ailments such as malaria and vitamin A deficiency.

Similarly, all I knew of animal farming was idyllic illustrations on milk cartons and meat packaging; I never learned how horrific the conditions in which farmed animals are raised truly are, or the terrifying scale of the cruelty. Over 40 billion land animals and 40 billion fish from factory farms were killed for food in 2019, but the issue gets little attention compared to other well-known causes.

And then there’s the issue of safeguarding our future. Existential risks to civilization are estimated by philosopher Toby Ord and others to be more than 10% likely in the coming century, partly due to potential misuse of emerging technologies such as synthetic biology or advanced artificial intelligence. But if all goes well, human history is just beginning. Humanity could survive for incalculable years, reaching heights of flourishing unimaginable today.

So in tenth grade, I decided to take the Giving What We Can pledge to donate 10% of my income in my future career* (1% of my 红包🧧 money while a student lol) to high-impact charities. As I struggled with depression through high school due to chronic sleep deprivation and academic stress, the commitment to trying to make the biggest positive impact on the world that I could was something that kept me going.

*Dear siblings: yes, I will make sure to build up months of savings and pay off student debt first, etc.

If these ideas sound interesting to you, I’d appreciate it if you could read https://80000hours.org/key-ideas/ to learn more about the most effective problems to work on and how you might use your career to address them, directly or indirectly. If you prefer to read it in batches, you can subscribe to the newsletter at https://80000hours.org/community/. And if you’re interested in checking out high-impact, vetted charities, check out the charity evaluation research at https://www.givewell.org/charities/top-charities and https://founderspledge.com/research.

Comment by michaelchen on Introducing Probably Good: A New Career Guidance Organization · 2020-12-23T04:56:10.470Z · EA · GW

I'm not a fan of the name "Probably Good" because:

  • if it's describing the advice, it seems like the advice might be pretty low-effort and not worth paying attention to
  • if it's describing the careers, it sounds like the careers recommended have a significant chance of having a negative impact, so again, not worth reading about

Comment by michaelchen on Taking Self-Determination Seriously · 2020-12-02T05:37:59.180Z · EA · GW

I don't have much knowledge in this area, but my main concern is that there's a significant risk of a new country's governance being unstable and frankly terrible—e.g., South Sudan.

Comment by michaelchen on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-12-02T05:11:29.303Z · EA · GW

Minor comment regarding the case of Greg Patton: As someone who heard about the story in early September and was shocked at the fallout, it was heartening to read the aftermath in https://www.lamag.com/citythinkblog/usc-professor-slur/ and https://poetsandquants.com/2020/09/26/usc-marshall-finds-students-were-sincere-but-prof-did-no-wrong-in-racial-flap/ and see that the university eventually “concluded there was no ill intent on Patton’s part and that ‘the use of the Mandarin term had a legitimate pedagogical purpose.’”

Comment by michaelchen on What are some ways to practice self-care in a job hunt? · 2020-11-27T22:56:40.970Z · EA · GW

For context, can you explain what kind of job you are looking for? How time-consuming does each application tend to be?

Comment by michaelchen on Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was · 2020-11-26T04:11:54.565Z · EA · GW

Has the list been made public, or has the app been made open-source yet?

Comment by michaelchen on Planning my birthday fundraiser for October 2020 · 2020-11-26T04:00:43.072Z · EA · GW

Can you share the text of your birthday fundraiser post here?

Comment by michaelchen on EA Birthday Posts: An Alternative to Fundraisers · 2020-11-26T03:57:15.135Z · EA · GW

If anyone else makes a birthday post along these lines, I think it would be valuable if you could share it here as well.