Posts

Misha_Yagudin's Shortform 2020-01-08T14:23:19.002Z · score: 3 (1 votes)

Comments

Comment by misha_yagudin on Misha_Yagudin's Shortform · 2020-04-30T15:27:53.817Z · score: 10 (6 votes) · EA · GW

I noticed that Effective Altruism: Philosophical Issues is available at Library Genesis.

Comment by misha_yagudin on English as a dominant language in the movement: challenges and solutions · 2020-04-10T17:23:22.460Z · score: 1 (1 votes) · EA · GW

It may just be me, but the popular Grammarly app works terribly with comments: I can't apply its suggestions, and I can't navigate the comments with a mouse/trackpad. Because of that I have to use the web version, which takes, say, 10–30 seconds each time and is annoying.

I am on:

  • Google Chrome: Version 80.0.3987.163 (Official Build) (64-bit)
  • macOS Mojave: 10.14.5 (18F2059)

Comment by misha_yagudin on Why I'm Not Vegan · 2020-04-09T19:56:43.803Z · score: 2 (4 votes) · EA · GW

One argument against is that being vegan adds weirdness points, which might make it harder for someone to do workplace activism or might slow one's career in more conservative fields/countries.

Comment by misha_yagudin on Why I'm Not Vegan · 2020-04-09T19:47:18.084Z · score: 22 (11 votes) · EA · GW

This is odd to me. I see how committing to being vegan can strengthen one's belief in the importance of animal suffering. But my not-very-educated guess is that the effect is more akin to how buying an iPhone or an Android strengthens one's belief in the superiority of one over the other. I don't see how it would help one understand or consider animal experiences and needs.

I haven't read the paper in depth but searched for relevant keywords and found:

Additionally, a sequence of five studies from Jonas Kunst and Sigrid Hohle demonstrates that processing meat, beheading a whole roasted pig, watching a meat advertisement without a live animal versus one with a live animal, describing meat production as “harvesting” versus “killing” or “slaughtering,” and describing meat as “beef/pork” rather than “cow/pig” all decreased empathy for the animal in question and, in several cases, significantly increased willingness to eat meat rather than an alternative vegetarian dish.[33]
Psychologists involved in these and several other studies believe that these phenomena[34] occur because people recognize an incongruity between eating animals and seeing them as beings with mental life and moral status, so they are motivated to resolve this cognitive dissonance by lowering their estimation of animal sentience and moral status. Since these affective attitudes influence the decisions we make, eating meat and embracing the idea of animals as food negatively influences our individual and social treatment of nonhuman animals.

The cited papers (33, 34) do not provide much evidence for your claim among people who already spend significant time reflecting on the welfare of animals.

Comment by Misha_Yagudin on [deleted post] 2020-03-19T06:30:19.307Z

Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood (2017)

Ask MIRI Anything (AMA) (2016)

Comment by misha_yagudin on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-19T06:23:57.087Z · score: 3 (2 votes) · EA · GW

This is a valid convergence test, but I think it's easier to reason about $\sum_i p_i < \infty$. See math.SE for a proof.

Comment by misha_yagudin on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T17:26:56.659Z · score: 2 (2 votes) · EA · GW

re: 3 — to be more precise, one can show that $\prod_i (1 - p_i) > 0$ iff $\sum_i p_i < \infty$, where $p_i \in [0, 1)$ is the probability of extinction in a given year.
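
For completeness, a minimal sketch of why the equivalence holds (my own reconstruction; the math.SE link gives a full proof). For $p \in [0, 1/2]$ we have $-2p \le \log(1 - p) \le -p$, and

$\log \prod_i (1 - p_i) = \sum_i \log(1 - p_i),$

so the product is positive iff the log-sum is finite. If $\sum_i p_i = \infty$, the upper bound forces the log-sum to $-\infty$; if $\sum_i p_i < \infty$, then $p_i \to 0$, the lower bound eventually applies, and the log-sum converges.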

Comment by misha_yagudin on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2020-03-17T21:29:33.690Z · score: 1 (1 votes) · EA · GW

What are some of your favourite theorems, proofs, algorithms, data structures, and programming languages?

Comment by misha_yagudin on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T21:27:41.164Z · score: 1 (1 votes) · EA · GW

What are some of your favourite theorems, proofs, algorithms, and data structures?

Comment by misha_yagudin on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-17T21:19:53.282Z · score: 5 (3 votes) · EA · GW

What do you think an EAG attendee is likely getting wrong about animal advocacy? Effective animal advocacy?

Comment by misha_yagudin on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-17T21:17:56.747Z · score: 5 (3 votes) · EA · GW

What do you think an EAG attendee is likely getting wrong about global health and poverty?

Comment by misha_yagudin on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-17T21:16:52.363Z · score: 5 (3 votes) · EA · GW

What do you think an EAG attendee is likely getting wrong about GiveWell?

Comment by misha_yagudin on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-17T21:16:29.132Z · score: 11 (5 votes) · EA · GW

What do you think an EAG attendee is likely getting wrong about ACE?

Comment by misha_yagudin on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-17T09:03:20.307Z · score: 8 (6 votes) · EA · GW

What have you changed your mind on recently?

Comment by misha_yagudin on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-17T09:03:01.994Z · score: 7 (3 votes) · EA · GW

What have you changed your mind on recently?

Comment by misha_yagudin on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T09:02:46.889Z · score: 18 (11 votes) · EA · GW

What have you changed your mind on recently?

Comment by misha_yagudin on EA Updates for February 2020 · 2020-02-29T09:58:02.730Z · score: 5 (2 votes) · EA · GW

Great; thank you!

On climate: I enjoyed the write-up on negative emissions technologies by Ryan Orbuch, Stripe's climate product manager.

Comment by misha_yagudin on EA syllabi and teaching materials · 2020-02-18T19:51:33.255Z · score: 3 (2 votes) · EA · GW

If MIT's alignment reading group is here, this course also belongs on the list: AGI Safety @ UC Berkeley (Fall 2018).

Comment by misha_yagudin on Misha_Yagudin's Shortform · 2020-02-18T17:56:08.148Z · score: 1 (3 votes) · EA · GW

The three subscales of the Light Triad Scale are conceptualized as follows:

Faith in Humanity, or the belief that, generally speaking, humans are good.

Sample item: I think people are mostly good.

Humanism, or the belief that humans across all backgrounds are deserving of respect and appreciation.

Sample item: I enjoy listening to people from all walks of life.

Kantianism, or the belief that others should be treated as ends in and of themselves, and not as pawns in one’s own game.

Sample item: When I talk to people, I am rarely thinking about what I want from them.

![](https://www.frontiersin.org/files/Articles/438704/fpsyg-10-00467-HTML/image_m/fpsyg-10-00467-g001.jpg)

via https://www.psychologytoday.com/us/blog/darwins-subterranean-world/201903/the-light-triad-personality

Comment by misha_yagudin on Responsible Biorisk Reduction Workshop, Oxford May 2020 · 2020-02-07T11:22:04.903Z · score: 1 (1 votes) · EA · GW

Nice! Am I remembering correctly that this workshop was seeded during a conversation at the last EAG in London?

Comment by misha_yagudin on Misha_Yagudin's Shortform · 2020-01-31T03:49:05.185Z · score: 1 (1 votes) · EA · GW

Karma of EA Survey Series as of today:

29 EA Survey 2019 Series: Geographic Distribution of EAs
43 EA Survey 2019 Series: Careers and Skills
74 EA Survey 2019 Series: Cause Prioritization
64 EA Survey 2019 Series: Community Demographics & Characteristics

38 EA Survey 2018 Series: Community Demographics & Characteristics
15 EA Survey 2018 Series: Distribution & Analysis Methodology
50 EA Survey 2018 Series: How do people get involved in EA?
30 EA Survey 2018 Series: Subscribers and Identifiers
82 EA Survey 2018 Series: Donation Data
68 EA Survey 2018 Series: Cause Selection
34 EA Survey 2018 Series: EA Group Membership
34 EA Survey 2018 Series: Where People First Hear About EA and Higher Levels of Involvement
68 EA Survey 2018 Series: Geographic Differences in EA
50 EA Survey 2018 Series: How Welcoming is EA?
50 EA Survey 2018 Series: How Long Do EAs Stay in EA?
75 EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge?

Comment by misha_yagudin on Misha_Yagudin's Shortform · 2020-01-08T14:23:19.135Z · score: 2 (2 votes) · EA · GW

Morgan Kelly, The Standard Errors of Persistence

A large literature on persistence finds that many modern outcomes strongly reflect characteristics of the same places in the distant past. However, alongside unusually high t statistics, these regressions display severe spatial autocorrelation in residuals, and the purpose of this paper is to examine whether these two properties might be connected. We start by running artificial regressions where both variables are spatial noise and find that, even for modest ranges of spatial correlation between points, t statistics become severely inflated leading to significance levels that are in error by several orders of magnitude. We analyse 27 persistence studies in leading journals and find that in most cases if we replace the main explanatory variable with spatial noise the fit of the regression commonly improves; and if we replace the dependent variable with spatial noise, the persistence variable can still explain it at high significance levels. We can predict in advance which persistence results might be the outcome of fitting spatial noise from the degree of spatial autocorrelation in their residuals measured by a standard Moran statistic. Our findings suggest that the results of persistence studies, and of spatial regressions more generally, might be treated with some caution in the absence of reported Moran statistics and noise simulations.
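
For intuition, here is a minimal sketch (my own, not from the paper) of the Moran statistic the abstract refers to: a standard measure of spatial autocorrelation in regression residuals.

```python
import numpy as np

def morans_i(x: np.ndarray, w: np.ndarray) -> float:
    """Moran's I for values x and a spatial weight matrix w (zero diagonal)."""
    n = len(x)
    z = x - x.mean()                      # deviations from the mean
    num = n * (w * np.outer(z, z)).sum()  # weighted cross-products of neighbours
    den = w.sum() * (z ** 2).sum()
    return num / den

# Example: residuals that vary smoothly along a line, adjacent points weighted 1.
resid = np.array([1.0, 0.8, 0.5, -0.2, -0.6, -0.9])
w = np.zeros((6, 6))
for i in range(5):
    w[i, i + 1] = w[i + 1, i] = 1.0
print(morans_i(resid, w))  # ~0.67: strong positive spatial autocorrelation
```

Values well above the statistic's slightly negative expectation under no autocorrelation (about $-1/(n-1)$) are the warning sign the paper looks for.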

Comment by misha_yagudin on What grants has Carl Shulman's discretionary fund made? · 2019-12-29T10:30:34.673Z · score: 13 (3 votes) · EA · GW

Hey Sam, any updates?

Comment by misha_yagudin on 2019 AI Alignment Literature Review and Charity Comparison · 2019-12-27T15:24:03.659Z · score: 3 (2 votes) · EA · GW

Thank you, Peter. If you are curious, Anna Salamon connected various types of activities to CFAR's mission in a recent Q&A.

Comment by misha_yagudin on 2019 AI Alignment Literature Review and Charity Comparison · 2019-12-24T18:29:14.737Z · score: 1 (1 votes) · EA · GW

Peter, thank you! I am slightly confused by your phrasing.

To benchmark, would you say that

  • (a) CFAR mainline workshops are aimed to train [...] "people who are likely to have important impacts on AI";
  • (b) AIRCS workshops are aimed at the same audience;
  • (c) MSFP is aimed at the same audience?

Comment by misha_yagudin on Effective Altruism Funds Project Updates · 2019-12-22T20:54:48.726Z · score: 18 (8 votes) · EA · GW

Hey Sam, I am curious about your estimates of (a) CEA's overhead, (b) grantmakers' overhead for an average grant.

For context, in April Oliver Habryka of the EA LTFF wrote:

A rough fermi I made a few days ago suggests that each grant we make comes with about $2000 of overhead from CEA for making the grants in terms of labor cost plus some other risks (this is my own number, not CEA's estimate).

Comment by misha_yagudin on Effective Altruism Funds Project Updates · 2019-12-21T15:31:35.902Z · score: 14 (6 votes) · EA · GW

My not-very-informed guess is that only a minority of fund managers are primarily financially constrained. I think (a) giving detailed feedback is demanding [especially negative feedback]; and (b) most fund managers are just very busy.

Comment by misha_yagudin on 2019 AI Alignment Literature Review and Charity Comparison · 2019-12-20T14:05:38.413Z · score: 37 (12 votes) · EA · GW

A bit of a tangent: I am confused by SFF's grant to OAK (Optimizing Awakening and Kindness). Could any recommender comment on its purpose, or at least briefly describe what OAK is about? The hyperlink is not very informative.

Comment by misha_yagudin on Announcing our 2019 charity recommendations · 2019-12-09T13:09:07.793Z · score: 4 (3 votes) · EA · GW

Thanks! I really like that you compensated charities for working with you. I think engaging with ACE might by itself promote better norms within the organizations, as they reflect on ACE's criteria, which go well beyond marginal cost-effectiveness.

Comment by misha_yagudin on Could we solve this email mess if we all moved to paid emails? · 2019-09-24T13:37:51.398Z · score: 1 (1 votes) · EA · GW

Great suggestions! Recently, I found it helpful to unsubscribe from newsletters and route them into my RSS reader with the help of https://kill-the-newsletter.com.

Comment by misha_yagudin on Forum Update: New Features (September 2019) · 2019-09-20T17:09:22.012Z · score: 4 (3 votes) · EA · GW

Before reading the details I was surprised that 3 out of 5 community favourite posts are about invertebrate welfare. Seems like I'm missing out on something :)

Comment by misha_yagudin on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2019-06-17T10:50:09.535Z · score: 1 (1 votes) · EA · GW

Idea: local group organisers might use something like spaced repetition to invite busy community members [say, people pursuing a demanding job to build career capital] to social events.

Anki's "Again", "Hard", "Good", "Easy" might map to "1-on-1 over coffee in a few weeks", "invite to the upcoming event and pay more attention to the person", "invite the person to a social event in 3 months", and "invite the person to an event in 6 months or to EAG" (a toy sketch of this mapping follows below).
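
A toy sketch of that mapping (the actions and intervals here are my own illustrative guesses, not an established scheme):

```python
from datetime import date, timedelta

# Map Anki-style responses to a suggested next outreach and its delay.
NEXT_TOUCH = {
    "again": ("1-on-1 over coffee", timedelta(weeks=3)),
    "hard": ("invite to the upcoming event, pay extra attention", timedelta(weeks=6)),
    "good": ("invite to a social event", timedelta(days=90)),
    "easy": ("invite to an event or EAG", timedelta(days=180)),
}

def schedule_next_invite(response, today=None):
    """Return the suggested action and date for an Anki-style response."""
    today = today or date.today()
    action, gap = NEXT_TOUCH[response]
    return action, today + gap

print(schedule_next_invite("again", date(2019, 6, 17)))
# ('1-on-1 over coffee', datetime.date(2019, 7, 8))
```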

Comment by misha_yagudin on What new EA project or org would you like to see created in the next 3 years? · 2019-06-17T10:37:37.700Z · score: 10 (7 votes) · EA · GW

EA longitudinal studies

I think the movement might benefit from uncovering predictors of value drift and EA-specific causes of mental health issues. It would also be interesting to see how ideas propagate from the leaders to the followers.

Also, delegating all the surveys to a central planner might make them better thought out and might make it easier to integrate the results of different surveys into overall conclusions.

Comment by misha_yagudin on What books or bodies of work, not about EA or EA cause areas, might be beneficial to EAs? · 2019-06-13T15:47:20.706Z · score: 4 (3 votes) · EA · GW

80K's All the evidence-based advice we found on how to be successful in any job (link).

Comment by misha_yagudin on EA Mental Health Survey: Results and Analysis. · 2019-06-13T15:45:00.313Z · score: 10 (4 votes) · EA · GW

re: cheap resources

Some EAs are working on UpLift.app, a CBT web/mobile app for depression. Also, some friends recommended woebot.io to me, a CBT chatbot app for depression. These apps are very cheap compared to talking therapy with a trained professional, and AFAIK self-studying CBT is almost as effective as working with a therapist.

Comment by misha_yagudin on Aligning Recommender Systems as Cause Area · 2019-05-08T11:26:34.690Z · score: 12 (8 votes) · EA · GW

re: How You Can Contribute

Center for Humane Technology is hiring for 5 positions: Managing Director, Head of Humane Design Programs, Manager of Culture & Talent, Head of Policy, Research Intelligence Manager.

Comment by misha_yagudin on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-11T17:40:05.977Z · score: 11 (5 votes) · EA · GW

Oliver, Rob, and others, thank you for your thoughts.
1. I don't think that experimenting with the variants is an option for EGMO [severe time constraints].
2. For IMO we have more than enough time, and I will incorporate the feedback and considerations into my decision-making.

Comment by misha_yagudin on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T22:55:51.100Z · score: 22 (10 votes) · EA · GW

Dear Morgan,

In this comment I want to address the following paragraph (related to #2).

If the goal is to encourage Math Olympiad winners to join the Effective Altruism community, why are they being given a book that has little explicitly to do with Effective Altruism? The Life You Can Save, Doing Good Better, and _80,000 Hours_ are three books much more relevant to Effective Altruism than Harry Potter and the Methods of Rationality. Furthermore, they are much cheaper than the $43 per copy of HPMOR. Even if one is to make the argument that HPMOR is more effective at encouraging Effective Altruism — which I doubt and is substantiated nowhere — one also has to go further and provide evidence that the difference in cost of each copy of HPMOR relative to any of the other books I mentioned is justified. It is quite possible that sending the Math Olympiad winners a link to Peter Singer’s TED Talk, “The why and how of effective altruism”, is more effective than HPMOR in encouraging effective altruism. It is also free!

a. While I agree that the books you've mentioned are more directly related to EA than HPMoR, I don't think it would have been possible to give them as a prize. The fact that the organizers whom we contacted had read HPMoR contributed significantly to the possibility of giving anything at all.

b. I share your concern about HPMoR not being EA enough. We hope to mitigate this via a leaflet plus SPARC/ESPR.

Comment by misha_yagudin on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T22:22:19.473Z · score: 36 (13 votes) · EA · GW

Dear Morgan,

In this comment I want to address the following paragraph (#3).

I also want to point out that the fact that EA Russia has made oral agreements to give copies of the book before securing funding is deeply unsettling, if I understand the situation correctly. Why are promises being made in advance of having funding secured? This is not how a well-run organization or movement operates. If EA Russia did have funding to buy the books and this grant is displacing that funding, then what will EA Russia spend the original $28,000 on? This information is necessary to evaluate the effectiveness of this grant and should not be absent.

I think this was a miscommunication on my side.

EA Russia has the oral agreements with [the organizers of math olympiads]...

We contacted the organizers of math olympiads and asked them whether they would like to have copies of HPMoR as a prize (conditional on us finding a sponsor). We didn't promise them anything, and they do not expect anything from us. Also, I would like to note that we did not approach them as EAs (as I am mindful of the reputational risks).

Comment by misha_yagudin on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T17:12:24.665Z · score: 17 (12 votes) · EA · GW

A bit of a tangent to #3: it seems to me that solving AI Alignment requires breakthroughs, and the demographic we are targeting is potentially very well equipped to produce them.

According to “Invisible Geniuses: Could the Knowledge Frontier Advance Faster?” (Agarwal & Gaule 2018), IMO gold medalists are 50x more likely to win a Fields Medal than PhD graduates of US top-10 math programs. (h/t Gwern)

Comment by misha_yagudin on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T16:50:27.120Z · score: 22 (11 votes) · EA · GW

Hi Matthew,

1. $43/unit is an upper bound. When submitting the application, I was uncertain about the price of on-demand printing. My current best guess is that the EGMO book sets will cost $34–40. I expect the printing cost for IMO to be lower (economies of scale).

2. HPMoR is quite long (~2,007 pages according to Goodreads). Each EGMO book set consists of 4 hardcover books.

3. There is an opportunity to trade off money for prestige by printing only the first few chapters.

Comment by misha_yagudin on Gauging Interest in an EA Weekend Workshop · 2019-03-29T12:31:50.724Z · score: 2 (2 votes) · EA · GW

Done!

Comment by misha_yagudin on Ben Garfinkel: How sure are we about this AI stuff? · 2019-02-10T17:04:19.931Z · score: 23 (13 votes) · EA · GW

I was confused by the headline. "Ben Garfinkel: How Sure are we about this AI Stuff?" would make it clear that it is not some kind of official statement from CEA. Changing the author to EA Global, or even to joint authorship by EA Global and Ben Garfinkel, would help as well.

Comment by misha_yagudin on You Should Write a Forum Bio · 2019-02-02T11:47:38.184Z · score: 2 (2 votes) · EA · GW

Done. I think it is a good social norm for the forum.