Posts

Misha_Yagudin's Shortform 2020-01-08T14:23:19.002Z · score: 3 (1 votes)

Comments

Comment by misha_yagudin on vaidehi_agarwalla's Shortform · 2020-08-07T04:30:51.476Z · score: 1 (1 votes) · EA · GW

If longtermism is one of the latest stages of moral circle development, then your anecdotal data suffers from major selection effects.

Anecdotally seems true from a number of EAs I've spoken to who've updated to longtermism over time.

Comment by misha_yagudin on Recommendations for increasing empathy? · 2020-08-02T09:17:14.686Z · score: 8 (4 votes) · EA · GW

You might want to read some essays from the Effective Altruism Handbook: Motivation Series. I especially like 500 Million, But Not A Single One More; it is short and powerful.

Comment by misha_yagudin on The academic contribution to AI safety seems large · 2020-08-01T08:49:53.584Z · score: 1 (1 votes) · EA · GW

Thanks; fixed.

Comment by misha_yagudin on The academic contribution to AI safety seems large · 2020-07-31T11:22:35.233Z · score: 5 (3 votes) · EA · GW

On the other hand, in its 2018 review MIRI wrote about new research directions, one of which feels ML-adjacent. But from a few paragraphs, it doesn't seem that the direction is relevant to prosaic AI alignment.

Seeking entirely new low-level foundations for optimization, designed for transparency and alignability from the get-go, as an alternative to gradient-descent-style machine learning foundations.

Comment by misha_yagudin on The academic contribution to AI safety seems large · 2020-07-31T11:17:33.577Z · score: 14 (4 votes) · EA · GW

Indeed, Why I am not currently working on the AAMLS agenda is a year-later write-up by the lead researcher. Moreover, they write:

That is, though I was officially lead on AAMLS, I mostly did other things in that time period.

Comment by misha_yagudin on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-19T22:49:16.322Z · score: 9 (4 votes) · EA · GW

Oh, I meant pessimistic. A reason for a weak update might be similar to the Gell-Mann amnesia effect. After putting effort into the classical arguments, you noticed some important flaws. The fact that they have not been articulated before suggests that collective EA epistemology is weaker than expected. Because of that, one might become less certain about the quality of arguments in other EA domains.

So, in short, the Gell-Mann Amnesia effect is when experts forget how badly their own subject is treated in media and believe that subjects they don't know much about are treated more competently by the same media.

Comment by misha_yagudin on Misha_Yagudin's Shortform · 2020-07-18T15:14:46.894Z · score: 4 (4 votes) · EA · GW

Estimates from The Precipice.

| Natural risk                  | Chance within the next 100 years |
|-------------------------------|----------------------------------|
| Stellar explosion             | 1 in 1,000,000,000               |
| Asteroid or comet impact      | 1 in 1,000,000                   |
| Supervolcanic eruption        | 1 in 10,000                      |
| “Naturally” arising pandemics | 1 in 10,000                      |
| **Total natural risk**        | **1 in 10,000**                  |

| Anthropogenic risk                | Chance within the next 100 years |
|-----------------------------------|----------------------------------|
| Nuclear war                       | 1 in 1,000                       |
| Climate change                    | 1 in 1,000                       |
| Other environmental damage        | 1 in 1,000                       |
| Engineered pandemics              | 1 in 30                          |
| Unaligned artificial intelligence | 1 in 10                          |
| Unforeseen anthropogenic risks    | 1 in 30                          |
| Other anthropogenic risks         | 1 in 50                          |
| **Total anthropogenic risk**      | **1 in 6**                       |

**Total existential risk: 1 in 6**

Comment by misha_yagudin on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-18T15:12:57.560Z · score: 1 (1 votes) · EA · GW

Have you become more uncertain/optimistic about the arguments in favour of the importance of other x-risks as a result of scrutinising AI risk?

Comment by misha_yagudin on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-18T15:11:05.886Z · score: 11 (6 votes) · EA · GW

I am curious whether you are, in general, more optimistic about x-risks [say, than Toby Ord]. What are your estimates of total and unforeseen anthropogenic risks in the next century?

Comment by misha_yagudin on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T21:49:05.073Z · score: 6 (6 votes) · EA · GW

On a scale from 1 to 10 what would you rate The Boss Baby? :)

Comment by misha_yagudin on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T21:42:06.304Z · score: 22 (10 votes) · EA · GW

What have you changed your mind about recently?

Comment by misha_yagudin on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T21:39:31.726Z · score: 9 (3 votes) · EA · GW

How confident are you in the brief arguments for rapid and general progress outlined in section 1.1 of GovAI's research agenda? Have the arguments been developed further?

Comment by misha_yagudin on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T21:34:25.009Z · score: 2 (2 votes) · EA · GW

What do you think about hardware-based forecasts for human-substitute AI?

Comment by misha_yagudin on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T21:21:55.681Z · score: 7 (3 votes) · EA · GW

What priorities for TAI strategy does your skepticism towards the classical work dictate? Some have argued that we have greater leverage over scenarios with discrete/discontinuous deployment.

Comment by misha_yagudin on AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher · 2020-07-13T20:19:05.542Z · score: 4 (3 votes) · EA · GW

Wow, I am quite surprised it took a year to produce. @80K, does it always take so long?

Comment by misha_yagudin on New EA International Innovation Fellowship · 2020-06-28T13:33:56.471Z · score: 1 (1 votes) · EA · GW

Could you say more about the timeline of the fellowship?

Comment by misha_yagudin on Misha_Yagudin's Shortform · 2020-04-30T15:27:53.817Z · score: 10 (6 votes) · EA · GW

I noticed that Effective Altruism: Philosophical Issues is available at Library Genesis.

Comment by misha_yagudin on English as a dominant language in the movement: challenges and solutions · 2020-04-10T17:23:22.460Z · score: 1 (1 votes) · EA · GW

It may be just me, but the popular Grammarly app works terribly with comments: I can't use its suggestions, and I can't navigate the comments with a mouse/trackpad. Because of that, I need to use the web version, which takes, say, 10–30 seconds and is annoying.

I am on:

  • Google Chrome: Version 80.0.3987.163 (Official Build) (64-bit)
  • MacOS Mojave: 10.14.5 (18F2059)

Comment by misha_yagudin on Why I'm Not Vegan · 2020-04-09T19:56:43.803Z · score: 2 (4 votes) · EA · GW

One argument against is that being vegan adds weirdness points, which might make it harder for someone to do workplace activism or might slow one's career in more conservative fields/countries.

Comment by misha_yagudin on Why I'm Not Vegan · 2020-04-09T19:47:18.084Z · score: 22 (11 votes) · EA · GW

This is odd to me. I see how committing to being vegan can strengthen one's belief in the importance of animal suffering. But my not-very-educated guess is that the effect is more akin to how buying an iPhone/Android would strengthen your belief in the superiority of one over the other. And I don't see how it would help one to understand/consider animal experiences and needs.

I haven't read the paper in depth but searched for relevant keywords and found:

Additionally, a sequence of five studies from Jonas Kunst and Sigrid Hohle demonstrates that processing meat, beheading a whole roasted pig, watching a meat advertisement without a live animal versus one with a live animal, describing meat production as “harvesting” versus “killing” or “slaughtering,” and describing meat as “beef/pork” rather than “cow/pig” all decreased empathy for the animal in question and, in several cases, significantly increased willingness to eat meat rather than an alternative vegetarian dish. [33]
Psychologists involved in these and several other studies believe that these phenomena [34] occur because people recognize an incongruity between eating animals and seeing them as beings with mental life and moral status, so they are motivated to resolve this cognitive dissonance by lowering their estimation of animal sentience and moral status. Since these affective attitudes influence the decisions we make, eating meat and embracing the idea of animals as food negatively influences our individual and social treatment of nonhuman animals.

The cited papers (33, 34) do not provide much evidence to support your claim among people who spend significant time reflecting on the welfare of animals.

Comment by Misha_Yagudin on [deleted post] 2020-03-19T06:30:19.307Z

Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood (2017)

Ask MIRI Anything (AMA) (2016)

Comment by misha_yagudin on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-19T06:23:57.087Z · score: 3 (2 votes) · EA · GW

This is a valid convergence test. But I think it's easier to reason about $\sum_i p_i < \infty$. See math.SE for a proof.

Comment by misha_yagudin on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-18T17:26:56.659Z · score: 2 (2 votes) · EA · GW

re: 3 — to be more precise, one can show that $\prod_i (1 - p_i) > 0$ iff $\sum_i p_i < \infty$, where $p_i \in [0, 1)$ is the probability of extinction in year $i$.
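
A minimal sketch of the standard argument (my own filling-in; the math.SE link above gives a fuller proof):

```latex
% Taking logarithms turns the infinite product into a series:
%   \prod_i (1 - p_i) > 0  \iff  \sum_i -\ln(1 - p_i) < \infty.
% Two elementary bounds close the loop:
%   p \le -\ln(1 - p)    for p \in [0, 1), and
%   -\ln(1 - p) \le 2p   for p \in [0, 1/2].
% If \sum_i p_i < \infty, then p_i \to 0, so eventually -\ln(1 - p_i) \le 2 p_i
% and the log-series converges, making the product positive. Conversely, if the
% product is positive, the log-series converges, and p_i \le -\ln(1 - p_i)
% forces \sum_i p_i < \infty.
\prod_{i=1}^{\infty} (1 - p_i) > 0
  \quad\Longleftrightarrow\quad
  \sum_{i=1}^{\infty} -\ln(1 - p_i) < \infty
  \quad\Longleftrightarrow\quad
  \sum_{i=1}^{\infty} p_i < \infty
```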

Comment by misha_yagudin on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2020-03-17T21:29:33.690Z · score: 1 (1 votes) · EA · GW

What are some of your favourite theorems, proofs, algorithms, data structures, and programming languages?

Comment by misha_yagudin on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T21:27:41.164Z · score: 1 (1 votes) · EA · GW

What are some of your favourite theorems, proofs, algorithms, and data structures?

Comment by misha_yagudin on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-17T21:19:53.282Z · score: 5 (3 votes) · EA · GW

What do you think an EAG attendee is likely getting wrong about animal advocacy? Effective animal advocacy?

Comment by misha_yagudin on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-17T21:17:56.747Z · score: 5 (3 votes) · EA · GW

What do you think an EAG attendee is likely getting wrong about global health and poverty?

Comment by misha_yagudin on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-17T21:16:52.363Z · score: 5 (3 votes) · EA · GW

What do you think an EAG attendee is likely getting wrong about GiveWell?

Comment by misha_yagudin on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-17T21:16:29.132Z · score: 11 (5 votes) · EA · GW

What do you think an EAG attendee is likely getting wrong about ACE?

Comment by misha_yagudin on AMA: Elie Hassenfeld, co-founder and CEO of GiveWell · 2020-03-17T09:03:20.307Z · score: 8 (6 votes) · EA · GW

What have you changed your mind on recently?

Comment by misha_yagudin on AMA: Leah Edgerton, Executive Director of Animal Charity Evaluators · 2020-03-17T09:03:01.994Z · score: 7 (3 votes) · EA · GW

What have you changed your mind on recently?

Comment by misha_yagudin on AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement · 2020-03-17T09:02:46.889Z · score: 18 (11 votes) · EA · GW

What have you changed your mind on recently?

Comment by misha_yagudin on EA Updates for February 2020 · 2020-02-29T09:58:02.730Z · score: 5 (2 votes) · EA · GW

Great; thank you!


On climate: I enjoyed the write-up on negative emissions technologies by Ryan Orbuch, Stripe's climate product manager.

Comment by misha_yagudin on EA syllabi and teaching materials · 2020-02-18T19:51:33.255Z · score: 3 (2 votes) · EA · GW

If MIT's alignment reading group is here, this course also belongs on the list: AGI Safety @ UC Berkeley (Fall 2018).

Comment by misha_yagudin on Misha_Yagudin's Shortform · 2020-02-18T17:56:08.148Z · score: 1 (3 votes) · EA · GW

The three subscales of the Light Triad Scale are conceptualized as follows:

Faith in Humanity—or the belief that, generally speaking, humans are good.

Sample item: I think people are mostly good.

Humanism—or the belief that humans across all backgrounds are deserving of respect and appreciation.

Sample Item: I enjoy listening to people from all walks of life.

Kantianism—or the belief that others should be treated as ends in and of themselves, and not as pawns in one’s own game.

Sample item: When I talk to people, I am rarely thinking about what I want from them.

![](https://www.frontiersin.org/files/Articles/438704/fpsyg-10-00467-HTML/image_m/fpsyg-10-00467-g001.jpg)

via https://www.psychologytoday.com/us/blog/darwins-subterranean-world/201903/the-light-triad-personality

Comment by misha_yagudin on Responsible Biorisk Reduction Workshop, Oxford May 2020 · 2020-02-07T11:22:04.903Z · score: 1 (1 votes) · EA · GW

Nice! Am I remembering correctly that this workshop was seeded during a conversation at the last EAG in London?

Comment by misha_yagudin on Misha_Yagudin's Shortform · 2020-01-31T03:49:05.185Z · score: 1 (1 votes) · EA · GW

Karma of EA Survey Series as of today:

29 EA Survey 2019 Series: Geographic Distribution of EAs
43 EA Survey 2019 Series: Careers and Skills
74 EA Survey 2019 Series: Cause Prioritization
64 EA Survey 2019 Series: Community Demographics & Characteristics

38 EA Survey 2018 Series: Community Demographics & Characteristics
15 EA Survey 2018 Series: Distribution & Analysis Methodology
50 EA Survey 2018 Series: How do people get involved in EA?
30 EA Survey 2018 Series: Subscribers and Identifiers
82 EA Survey 2018 Series: Donation Data
68 EA Survey 2018 Series: Cause Selection
34 EA Survey 2018 Series: EA Group Membership
34 EA Survey 2018 Series: Where People First Hear About EA and Higher Levels of Involvement
68 EA Survey 2018 Series: Geographic Differences in EA
50 EA Survey 2018 Series: How Welcoming is EA?
50 EA Survey 2018 Series: How Long Do EAs Stay in EA?
75 EA Survey 2018 Series: Do EA Survey Takers Keep Their GWWC Pledge?

Comment by misha_yagudin on Misha_Yagudin's Shortform · 2020-01-08T14:23:19.135Z · score: 2 (2 votes) · EA · GW

Morgan Kelly, The Standard Errors of Persistence

A large literature on persistence finds that many modern outcomes strongly reflect characteristics of the same places in the distant past. However, alongside unusually high t statistics, these regressions display severe spatial autocorrelation in residuals, and the purpose of this paper is to examine whether these two properties might be connected. We start by running artificial regressions where both variables are spatial noise and find that, even for modest ranges of spatial correlation between points, t statistics become severely inflated leading to significance levels that are in error by several orders of magnitude. We analyse 27 persistence studies in leading journals and find that in most cases if we replace the main explanatory variable with spatial noise the fit of the regression commonly improves; and if we replace the dependent variable with spatial noise, the persistence variable can still explain it at high significance levels. We can predict in advance which persistence results might be the outcome of fitting spatial noise from the degree of spatial autocorrelation in their residuals measured by a standard Moran statistic. Our findings suggest that the results of persistence studies, and of spatial regressions more generally, might be treated with some caution in the absence of reported Moran statistics and noise simulations.
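
A rough sketch of the placebo exercise the abstract describes (not code from the paper; it assumes numpy and statsmodels, and the kernel and range parameter are arbitrary choices):

```python
# Both y and x are independent spatially correlated noise fields, yet naive OLS
# t-statistics on x reject "no effect" far more often than the nominal 5%.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
coords = rng.uniform(0.0, 1.0, size=(n, 2))   # random locations on the unit square

# Exponential spatial covariance with an assumed range parameter `ell`.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
ell = 0.2
cov = np.exp(-dist / ell)
chol = np.linalg.cholesky(cov + 1e-6 * np.eye(n))

def spatial_noise() -> np.ndarray:
    """One draw of a smooth, spatially autocorrelated field."""
    return chol @ rng.standard_normal(n)

t_stats = []
for _ in range(200):
    y = spatial_noise()          # generated independently of x
    x = spatial_noise()
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    t_stats.append(fit.tvalues[1])

# With i.i.d. noise ~5% of |t| would exceed 1.96; spatial noise typically
# inflates this rejection rate substantially.
print("share of |t| > 1.96:", np.mean(np.abs(t_stats) > 1.96))
```

As the abstract notes, reporting a Moran statistic on the residuals (and running noise simulations like this one) is the suggested sanity check.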

Comment by misha_yagudin on What grants has Carl Shulman's discretionary fund made? · 2019-12-29T10:30:34.673Z · score: 13 (3 votes) · EA · GW

Hey Sam, any updates?

Comment by misha_yagudin on 2019 AI Alignment Literature Review and Charity Comparison · 2019-12-27T15:24:03.659Z · score: 3 (2 votes) · EA · GW

Thank you, Peter. If you are curious, Anna Salamon connected various types of activities with CFAR's mission in a recent Q&A.

Comment by misha_yagudin on 2019 AI Alignment Literature Review and Charity Comparison · 2019-12-24T18:29:14.737Z · score: 1 (1 votes) · EA · GW

Peter, thank you! I am slightly confused by your phrasing.


To benchmark, would you say that

  • (a) CFAR mainline workshops aim to train [...] "people who are likely to have important impacts on AI";
  • (b) AIRCS workshops are aimed at the same audience;
  • (c) MSFP is aimed at the same audience?

Comment by misha_yagudin on Effective Altruism Funds Project Updates · 2019-12-22T20:54:48.726Z · score: 18 (8 votes) · EA · GW

Hey Sam, I am curious about your estimates of (a) CEA's overhead, (b) grantmakers' overhead for an average grant.

For context, in April Oliver Habryka of the EA LTFF wrote:

A rough fermi I made a few days ago suggests that each grant we make comes with about $2000 of overhead from CEA for making the grants in terms of labor cost plus some other risks (this is my own number, not CEAs estimate).

Comment by misha_yagudin on Effective Altruism Funds Project Updates · 2019-12-21T15:31:35.902Z · score: 14 (6 votes) · EA · GW

My not-very-informed guess is that only a minority of fund managers are primarily financially constrained. I think (a) giving detailed feedback is demanding [especially negative feedback], and (b) most of the fund managers are just very busy.

Comment by misha_yagudin on 2019 AI Alignment Literature Review and Charity Comparison · 2019-12-20T14:05:38.413Z · score: 37 (12 votes) · EA · GW

A bit of a tangent. I am confused by SFF's grant to OAK (Optimizing Awakening and Kindness). Could any recommender comment on its purpose, or at least briefly describe what OAK is about, as the hyperlink is not very informative?

Comment by misha_yagudin on Announcing our 2019 charity recommendations · 2019-12-09T13:09:07.793Z · score: 4 (3 votes) · EA · GW

Thanks! I really like that you compensated charities for working with you. I think engaging with ACE might by itself promote better norms within the organizations (as they reflect on ACE's criteria, which extend well beyond marginal cost-effectiveness).

Comment by misha_yagudin on Could we solve this email mess if we all moved to paid emails? · 2019-09-24T13:37:51.398Z · score: 1 (1 votes) · EA · GW

Great suggestions! Recently, I found it helpful to unsubscribe from newsletters in email and read them in my RSS reader instead, with the help of https://kill-the-newsletter.com.

Comment by misha_yagudin on Forum Update: New Features (September 2019) · 2019-09-20T17:09:22.012Z · score: 4 (3 votes) · EA · GW

Before reading the details I was surprised that 3 out of 5 community favourite posts are about invertebrate welfare. Seems like I'm missing out on something :)

Comment by misha_yagudin on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2019-06-17T10:50:09.535Z · score: 1 (1 votes) · EA · GW

Idea: local group organisers might use something like spaced repetition to invite busy community members [say, people pursuing a demanding job to increase their career capital] to social events.

Anki's "Again", "Hard", "Good", "Easy" might map to "1-on-1 over coffee in a few weeks", "Invite to the upcoming event and pay more attention to the person", "Invite person to the social event in 3mo", "Invite person to the event in 6mo or to the EAG".

Comment by misha_yagudin on What new EA project or org would you like to see created in the next 3 years? · 2019-06-17T10:37:37.700Z · score: 10 (7 votes) · EA · GW

EA longitudinal studies

I think the movement might benefit from uncovering predictors of value drift / EA-specific causes of mental health issues. It would be interesting to see how ideas propagate from leaders to followers.

Also, delegating all the surveys to a central planner might make them better thought out and make it easier to integrate the results of different surveys into conclusions.

Comment by misha_yagudin on What books or bodies of work, not about EA or EA cause areas, might be beneficial to EAs? · 2019-06-13T15:47:20.706Z · score: 4 (3 votes) · EA · GW

80K's All the evidence-based advice we found on how to be successful in any job (link).