Posts

Modelling the odds of recovery from civilizational collapse 2020-09-17T11:58:41.412Z · score: 23 (9 votes)
Should surveys about the quality/impact of research outputs be more common? 2020-09-08T09:10:03.215Z · score: 21 (9 votes)
Please take a survey on the quality/impact of things I've written 2020-09-01T10:34:53.661Z · score: 17 (5 votes)
What is existential security? 2020-09-01T09:40:54.048Z · score: 24 (7 votes)
Risks from Atomically Precise Manufacturing 2020-08-25T09:53:52.763Z · score: 26 (15 votes)
Crucial questions about optimal timing of work and donations 2020-08-14T08:43:28.710Z · score: 37 (12 votes)
How valuable would more academic research on forecasting be? What questions should be researched? 2020-08-12T07:19:18.243Z · score: 21 (8 votes)
Quantifying the probability of existential catastrophe: A reply to Beard et al. 2020-08-10T05:56:04.978Z · score: 19 (7 votes)
Propose and vote on potential tags 2020-08-04T23:49:47.992Z · score: 36 (10 votes)
Extinction risk reduction and moral circle expansion: Speculating suspicious convergence 2020-08-04T11:38:48.816Z · score: 10 (4 votes)
Crucial questions for longtermists 2020-07-29T09:39:17.144Z · score: 72 (28 votes)
Moral circles: Degrees, dimensions, visuals 2020-07-24T04:04:02.017Z · score: 49 (23 votes)
Do research organisations make theory of change diagrams? Should they? 2020-07-22T04:58:41.263Z · score: 36 (13 votes)
Improving the future by influencing actors' benevolence, intelligence, and power 2020-07-20T10:00:31.424Z · score: 56 (30 votes)
Venn diagrams of existential, global, and suffering catastrophes 2020-07-15T12:28:12.651Z · score: 57 (24 votes)
Some history topics it might be very valuable to investigate 2020-07-08T02:40:17.734Z · score: 72 (33 votes)
3 suggestions about jargon in EA 2020-07-05T03:37:29.053Z · score: 89 (44 votes)
Civilization Re-Emerging After a Catastrophic Collapse 2020-06-27T03:22:43.226Z · score: 30 (13 votes)
I knew a bit about misinformation and fact-checking in 2017. AMA, if you're really desperate. 2020-05-11T09:35:22.543Z · score: 16 (7 votes)
Existential risks are not just about humanity 2020-04-28T00:09:55.247Z · score: 15 (8 votes)
Differential progress / intellectual progress / technological development 2020-04-24T14:08:52.369Z · score: 30 (17 votes)
Clarifying existential risks and existential catastrophes 2020-04-24T13:27:43.966Z · score: 22 (10 votes)
A central directory for open research questions 2020-04-19T23:47:12.003Z · score: 57 (26 votes)
Database of existential risk estimates 2020-04-15T12:43:07.541Z · score: 76 (32 votes)
Some thoughts on Toby Ord’s existential risk estimates 2020-04-07T02:19:31.217Z · score: 50 (25 votes)
My open-for-feedback donation plans 2020-04-04T12:47:21.582Z · score: 25 (15 votes)
What questions could COVID-19 provide evidence on that would help guide future EA decisions? 2020-03-27T05:51:25.107Z · score: 7 (2 votes)
What's the best platform/app/approach for fundraising for things that aren't registered nonprofits? 2020-03-27T03:05:46.791Z · score: 5 (1 votes)
Fundraising for the Center for Health Security: My personal plan and open questions 2020-03-26T16:53:45.549Z · score: 14 (7 votes)
Will the coronavirus pandemic advance or hinder the spread of longtermist-style values/thinking? 2020-03-19T06:07:03.834Z · score: 11 (6 votes)
[Link and commentary] Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society 2020-03-14T09:04:10.955Z · score: 14 (5 votes)
Suggestion: EAs should post more summaries and collections 2020-03-09T10:04:01.629Z · score: 41 (19 votes)
Quotes about the long reflection 2020-03-05T07:48:36.639Z · score: 51 (24 votes)
Where to find EA-related videos 2020-03-02T13:40:18.971Z · score: 20 (12 votes)
Causal diagrams of the paths to existential catastrophe 2020-03-01T14:08:45.344Z · score: 34 (16 votes)
Morality vs related concepts 2020-02-10T08:02:10.570Z · score: 14 (9 votes)
What are information hazards? 2020-02-05T20:50:25.882Z · score: 11 (10 votes)
Four components of strategy research 2020-01-30T19:08:37.244Z · score: 19 (13 votes)
When to post here, vs to LessWrong, vs to both? 2020-01-27T09:31:37.099Z · score: 12 (6 votes)
Potential downsides of using explicit probabilities 2020-01-20T02:14:22.150Z · score: 28 (14 votes)
[Link] Charity Election 2020-01-19T08:02:09.114Z · score: 8 (5 votes)
Making decisions when both morally and empirically uncertain 2020-01-02T07:08:26.681Z · score: 17 (6 votes)
Making decisions under moral uncertainty 2020-01-01T13:02:19.511Z · score: 41 (15 votes)
MichaelA's Shortform 2019-12-22T05:35:17.473Z · score: 10 (4 votes)
Are there other events in the UK before/after EAG London? 2019-08-11T06:38:12.163Z · score: 9 (7 votes)

Comments

Comment by michaela on AMA: Markus Anderljung (PM at GovAI, FHI) · 2020-10-01T18:23:33.693Z · score: 2 (1 votes) · EA · GW

In addition to Markus' suggestion that you could consider applying to the GovAI fellowship, you could also consider applying for a researcher role at GovAI. The deadline is October 19th.

(I don't mean to imply that the only way to do this is to be at FHI. I don't believe that that's the case. I just wanted to mention that option, since Markus had mentioned a different position but not that one.)

Comment by michaela on Crucial questions for longtermists · 2020-09-30T06:24:48.591Z · score: 4 (2 votes) · EA · GW

[Unstructured, quickly written collection of reactions]

I agree that those two things would be valuable, largely for the reason you mention. Improving our neuroimaging capabilities could also be useful for some interventions to reduce long-term risks from malevolence.

Though there could also be some downsides to each of those things; e.g., better neuroimaging could perhaps be used for purposes that make totalitarianism or dystopias more likely/worse in expectation. (See "Which technological changes could increase or decrease existential risks from totalitarianism and dystopia? By how much? What other effects would those political changes have on the long-term future?")

---

I think the main reason I didn't already include a question directly about consciousness is what's captured here:

This post can be seen as collecting questions relevant to the “strategy” level.

One could imagine a version of this post that “zooms out” to discuss crucial questions on the “values” level, or questions about cause prioritisation as a whole. This might involve more emphasis on questions about, for example, population ethics, the moral status of nonhuman animals, and the effectiveness of currently available global health interventions. But here we instead (a) mostly set questions about morality aside, and (b) take longtermism as a starting assumption.

Though I acknowledge that this division is somewhat arbitrary, and also that consciousness is at least arguably/largely/somewhat an empirical rather than "values"/"moral" matter. (One reason I'm implicitly putting it partly in the "moral" bucket is that we might be most interested in something like "consciousness of a morally relevant sort", such that our moral views influence which features we're interested in investigating.)

---

After reading your comment, I skimmed again through the list of questions to see which of the things I already had were closest, and where those points might "fit". Here are the questions I saw that seemed related (though they don't directly address our understanding of consciousness):

What is the possible quality of the human-influenced future?

  • How does the “difficulty” or “cost” of creating pleasure vs. pain compare?

Can and will we expand into space? In what ways, and to what extent? What are the implications? 

  • Will we populate colonies with (some) nonhuman animals, e.g. through terraforming? [it's the implications of terraforming that make this relevant]

Can and will we create sentient digital beings? To what extent? What are the implications?

  • Would their experiences matter morally?
  • Will some be created accidentally?

[...]

  • How close to the appropriate size should we expect influential agents’ moral circles to be “by default”?

Comment by michaela on Has anyone gone into the 'High-Impact PA' path? · 2020-09-28T19:23:17.421Z · score: 5 (4 votes) · EA · GW

Interesting comment, thanks!

Tanya at FHI first took the position of executive assistant to Nick Bostrom. She explained in the 80,000 Hours podcast how very, very valuable this has been for Nick Bostrom's research - and after that, for FHI operations.

For people who don't know the latest chapter of that story: Tanya is now the Director of Strategy and Operations at FHI. 

Comment by michaela on Crucial questions for longtermists · 2020-09-28T07:30:18.122Z · score: 2 (1 votes) · EA · GW

Yeah, that sounds right. Those factors were left out just because I didn't think of including them (because I don't know very much about these frameworks from population and conservation biology), rather than because I explicitly decided to exclude them, and I'd guess you're right that attending to those factors and using those frameworks would be useful. So thanks for highlighting this :)

There are probably also various other "crucial questions" people could highlight, as well as questions that would fit under these questions and get more into the fine-grained details, and I'd encourage people to comment here, comment in the google doc, or create their own documents to highlight those things. (I say this partly because this post has a very broad scope, so a vast array of fields will have relevant knowledge, and I of course have very limited knowledge of most of those fields.)

Comment by michaela on Database of existential risk estimates · 2020-09-24T07:24:41.723Z · score: 2 (1 votes) · EA · GW

There's currently an active thread on LessWrong for making forecasts of existential risk across the time period from now to 2120. Activity on that thread is already looking interesting, and I'm hoping more people add their forecasts, their reasoning, and questions/comments to probe and improve other people's reasoning. 

I plan to later somehow add those forecasts, an aggregate of them, and/or relevant links to this database.

Comment by michaela on How have you become more (or less) engaged with EA in the last year? · 2020-09-24T06:34:49.470Z · score: 2 (1 votes) · EA · GW

(Btw, I've just updated my original answer, as it overlooked the time spent on audiobooks, podcasts, and video.)

Comment by michaela on Forecasting Thread: Existential Risk · 2020-09-23T16:03:35.535Z · score: 8 (5 votes) · EA · GW

(Just want to mention that I'm guessing it's best if people centralise their forecasts and comments on the LW thread, and just use this link post as a pointer to that. Though Amanda can of course say if she disagrees :) )

The one thing I will say here, just in case anyone sees my example forecast here but doesn't follow the link, is that I'd give very little weight to both my forecast and my reasoning. Reasons for that include that:

  • I'm not an experienced forecaster
  • I don't have deep knowledge on relevant specifics (e.g., AI paradigms, state-of-the-art in biotech)
  • I didn't spend a huge amount of time on my forecast, and used pretty quick-and-dirty methods
  • I drew on existing forecasts to some extent (in particular, the LessWrong Elicit AI timelines thread and Ord's x-risk estimates). So if you updated on those forecasts and then also updated on my forecast as if it was independent of them, you'd be double-counting some views and evidence

So I'm mostly just very excited to see other people's forecasts, and even more excited to see how they reason about and break down the question!

Comment by michaela on How have you become more (or less) engaged with EA in the last year? · 2020-09-23T15:01:32.750Z · score: 18 (7 votes) · EA · GW

tl;dr: Duration: Maybe ~12 months. Hours of EA-related video per week during that time: Maybe 4? Hours of EA-related audiobooks and podcasts per week: Maybe 10-15. Hours of all other EA-related learning per week: Maybe ~5-15? 

So maybe ~1400 hours total. (What!? That sounds like a lot!) Or 520 hours if we don't count video and audio, since those didn't actually take time out of my day (see below).
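(Rough arithmetic behind those totals, taking midpoints of my guesses and ~12 months ≈ 52 weeks: (4 + ~12.5 + ~10) hours/week × 52 weeks ≈ ~1,400 hours; and the "all other learning" midpoint alone is ~10 hours/week × 52 weeks ≈ 520 hours.)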

Duration

I learned about EA around September 2018, and started actively trying to "get up to speed" around October 2018. It's less clear what "end point" to use - i.e., when was I actually "up to speed"?

Two possible "end points" are when I wrote my first proper forum post and when I was offered an EA researcher job. Both of those things happened around the end of December 2019, suggesting this was a ~14 months process.

But maybe a better "end point" would be around August 2019. By around then, I was running an EA-based club at my school and organising and presenting at local EA events. And in September, I attended EAGxAustralia, and felt - to my surprise! - like I was unusually familiar with EA ideas, among the people there. So that suggests this was a ~10 month process.

Hours of video per week 

I watched EAG, EAGx, and other EA-related videos only while on an exercise bike or while eating. So it didn't really cut into my schedule, except in that it meant I wasn't watching other things at that time (e.g., random history lectures, Netflix). I'd guess this amounted to roughly 4 hours per week.

Hours of audio per week

I listen to audiobooks and podcasts while commuting, doing housework, donating plasma, or doing other tasks that don't require much focus but also don't allow me to be on my laptop. This seems to amount to roughly 1-2.5 hours per day. As with the video, this doesn't really cut into my schedule except by displacing other audio things (and also by making me extra helpful with housework when I've got a really good book/podcast!).

(I also listen at 1.5-2x speed, but skip back often, so the 1-2.5 clock hours are probably ~1.5-3.5 content hours.)

Hours per week ignoring video and audio

During these 10-14 months, I was also teaching at 0.8 FTE and doing a Masters of Teaching (but with a lower course-load than I expect most Masters have, as it was integrated with my actual teaching). This was part of the Teach For Australia program, which people tend to find very busy and intense by itself. So I crammed my "EA study" into weekends, after-work hours, and (teacher) holidays, alongside the (limited and pretty easy) Masters coursework. 

So it wasn't a huge number of hours per week, simply as I had few available. On the other hand, I think I'm happy with working - and tend to work - more hours than is average. And I also just found learning EA-relevant things very interesting, so that didn't drain me at all - it was more like the carrot I dangled in front of myself to get myself to do my other, actual work more efficiently!

And the matter of hours per week is further complicated by the fact that (a) teachers get long holidays, but (b) I had a lot of Masters work and teacher prep work to do during holidays.

So I'd pretty unconfidently guess I spent 5-15 hours per week on this, averaging out across that whole period (including both the work weeks and holiday weeks).

[My original answer ignored the video and audio time, since I'd been trying to remember how much time I allocated to EA-related stuff, and the video and audio didn't really require allocating special time so I overlooked it.]

Comment by michaela on MichaelA's Shortform · 2020-09-23T08:34:09.160Z · score: 6 (4 votes) · EA · GW

Here I list all the EA-relevant books I've read (well, mainly listened to as audiobooks) since learning about EA, in roughly descending order of how useful I perceive/remember them being to me. I share this in case others might find it useful, as a supplement to other book recommendation lists. (I found Rob Wiblin, Nick Beckstead, and Luke Muehlhauser's lists very useful.) 

That said, this isn't exactly a recommendation list, because some of the factors making these books more/less useful to me won't generalise to most other people, and because I'm including all relevant books I've read (not just the top picks).

Google Doc version here. Let me know if you want more info on why I found something useful or not so useful, where you can find the book, etc.

See also this list of EA-related podcasts and this list of sources of EA-related videos.

  1. The Precipice
    • Superintelligence may have influenced me more, but that’s just due to the fact that I read it very soon after getting into EA, whereas I read The Precipice after already learning a lot. I’d now recommend The Precipice first.
  2. Superforecasting
  3. How to Measure Anything
  4. Rationality: From AI to Zombies
    • I.e., “the sequences”
  5. Superintelligence
    • Maybe this would've been a little further down the list if I’d already read The Precipice.
  6. Expert Political Judgment
    • I read this after Superforecasting and still found it very useful.
  7. Normative Uncertainty
    • This is MacAskill’s thesis, rather than a book
    • I’d now instead recommend the book by him and others on the same topic
  8. Secret of Our Success by Henrich
  9. Human-Compatible
  10. The Book of Why
  11. Blueprint
    • This is useful primarily in relation to some specific research I’m doing, rather than more generically.
  12. Moral Tribes
  13. Algorithms to Live By
  14. The Better Angels of Our Nature
  15. Thinking, Fast and Slow
    • This might be the most useful of all these books for people who have little prior familiarity with the ideas, but I happened to already know a decent portion of what was covered.
  16. Against the Grain
    • I read this after Sapiens and thought the content would overlap a lot, but it actually provided a lot of independent value.
  17. Sapiens
  18. Destined for War
  19. The Dictator’s Handbook
  20. Age of Ambition
  21. Moral Mazes
  22. The Myth of the Rational Voter
  23. The Hungry Brain
    • If I recall correctly, I found this surprisingly useful for purposes unrelated to the topics of weight, hunger, etc.; e.g., it gave me a better understanding of the liking-wanting distinction.
  24. The Quest: Energy, Security, and the Remaking of the Modern World
  25. Harry Potter and the Methods of Rationality
    • Fiction
    • I also just found this very enjoyable (I was somewhat amused and embarrassed by how enjoyable and thought-provoking I found this, to be honest)
    • This overlaps in many ways with Rationality: AI to Zombies, so it would be more valuable to someone who hadn't already read those sequences (but then I'd recommend such a person read most of those sequences)
    • Within the 2 hours before I go to sleep, I try not to stimulate my brain too much; e.g. I'd avoid listening to most of the books on this list during that time. But I found that I could listen to this during that time without it keeping my brain too active. This is a perk, as that period of my day is less crowded with other things to do.
      • Same goes for Steve Jobs, Power Broker, Animal Farm, and Consider the Lobster.
  26. Steve Jobs by Walter Isaacson
    • Surprisingly useful, given I don’t plan to at all emulate Jobs’ life and don’t work in relevant industries.
  27. Enlightenment Now
  28. The Undercover Economist Strikes Back
  29. Inadequate Equilibria
    • Halfway between a book and a series of posts
  30. Radical Markets
  31. Command and Control
  32. How to Be a Dictator: The Cult of Personality in the Twentieth Century
  33. Climate Matters: Ethics in a Warming World by John Broome
  34. The Power Broker
    • Very interesting, but very long and probably not super useful.
  35. Science in the Twentieth Century
  36. Animal Farm
    • Fiction
  37. Consider the Lobster
    • To be honest, I'm not sure why Wiblin recommended this. But I benefitted from many of his other recommendations.

(Hat tip to Aaron Gertler for sort-of prompting me to post this list.)

Comment by michaela on How have you become more (or less) engaged with EA in the last year? · 2020-09-23T07:26:18.478Z · score: 19 (8 votes) · EA · GW

Those are good questions. I can't remember in great detail what I did (and especially the order and causal attributions). But here's my rough guess as to what I did, which is probably similar to what I'd recommend to others who are willing/keen to invest a bunch of time to "get up to speed" quite thoroughly:

  • I started mainly with the 80k career guide (now the "old career guide"), problem profiles, career profiles, and other 80k articles I found via links (including their older blog posts)
    • I'd now recommend the Key Ideas article rather than the career guide
  • I listened to every episode of the 80k podcast
  • I started going through the sequences (Rationality: AI to Zombies) on LessWrong, mainly via the "unofficial" podcast version
    • But I only finished this around February this year, after getting a job at an EA research org, so the latter parts probably weren't key to my journey
    • But I'd still definitely recommend reading at least a substantial chunk of the sequences
  • I watched on YouTube basically all the EA Global talks since 2016, as well as a bunch of other EA-related videos (see here for where to find such videos)
  • I started listening to some audiobooks recommended by Wiblin, Beckstead, and/or Muehlhauser
    • I selected these based on how relevant they seemed to me, how highly the people recommended them, and how many of those 3 people recommended the same book
    • I've now listened to/read 30-38 (depending on what you count) EA-relevant books since learning about EA, most of which were recommended by one of those people. I should probably share my list in a shortform comment soon.
  • I read a lot of EA Forum and LessWrong posts
    • I think I basically bookmarked or read anything that seemed relevant and that I was linked to from elsewhere or heard mentioned, and then gradually worked through those bookmarks and (separately) the list of most upvoted posts based on what seemed most relevant or interesting
  • I looked at most major EA orgs' sites and read at least some stuff there, I guess to "get a lay of the land"
    • E.g., FHI, Center on Long-Term Risk (then FRI), GPI, Charity Entrepreneurship, Animal Charity Evaluators ...
  • I started listening to some other podcasts I'd heard recommended, such as Slate Star Codex, EconTalk, and Rationally Speaking
    • I found the first of those most useful, and Rationally Speaking not super useful/interesting, personally
    • See also this list of podcasts
  • I subscribed to the main EA Newsletter
    • I now also subscribe to the EA London newsletter, and find it useful
  • I read everything on Conceptually
  • I read some stuff on the EA Concepts site
  • I applied for lots of jobs, and through the process learned more about what jobs are available and what they involve (e.g., by doing work tests)
  • Probably other things I'm forgetting

I think this process would now be easier, for a few reasons. One that stands out is that the tagging system makes it easier to find posts relevant to a particular topic. Another is that a bunch of people have made more collections and summaries of various sorts than there previously were (indeed, I made an effort to contribute to that so that others could get up to speed more efficiently and effectively than I did; see also). 

So I'd probably recommend people who want to replicate something like what I did use the EA Forum more centrally than I did, both by: 

  1. reading good posts on the forum (which are now more numerous and much easier to find)
  2. finding on the forum curated lists of links to the large body of other sources that are scattered around elsewhere

(I expect more sequences on the EA Forum will also help with this.)

Comment by michaela on AMA: Markus Anderljung (PM at GovAI, FHI) · 2020-09-22T13:46:25.340Z · score: 4 (3 votes) · EA · GW

I found this answer very interesting - thanks!

On feedback, I also liked and would recommend these two recent posts:

Comment by michaela on Is anyone coordinating amateurs to do cause prioritization research in groups? · 2020-09-22T12:21:00.492Z · score: 4 (2 votes) · EA · GW

I'm not aware of anyone doing precisely this. But here are a couple quick pointers to potentially relevant things:

  • ALLFED coordinate volunteers to do research, and I believe some/many of these volunteers are "amateurs", and I'd guess that some of this research occurs in groups and could be called "cause prioritisation" (though I'm not certain)
    • There's some info on their volunteer program here
    • I'd guess ALLFED would be happy to talk to people about what they've learned from this
  • There's a group called READI coordinating a mix of amateur and professional researchers to do EA research, though not necessarily precisely "cause prioritisation research"
    • Here's their site
    • One of the coordinators is Peter Slattery, and I expect he'd be happy to talk to people about their work
  • I think Edo Arad has been involved in some relevant efforts
    • See their comments here
  • The various EA research coordination efforts related to COVID might provide useful evidence on how successful this sort of thing is in general, and how best to do it
    • This wouldn't be about cause prioritisation, but some lessons might generalise
  • I think someone is planning to provide, or is already providing, country-specific EA career advice in France, similar to what Brian Tan describes for the Philippines
    • If people want to hear more, I can message the person who I think was planning/doing this and see if they're happy to give some public update or get in touch

Comment by michaela on EA Relationship Status · 2020-09-21T07:20:57.763Z · score: 2 (1 votes) · EA · GW

Backing up to clarify where I'm coming from

Again, a reasonable question. I don't think we disagree substantially. 

Also, again, I think my views are actually less driven by a perceived distinction between "for life" vs "till death do us part", and more driven by: 

  • the idea that it seems ok to make promises even if there's some chance that unforeseen circumstances will make fulfilling them impossible/unwise - as long as the promise really was "taken seriously", and ideally the promise-receiver has the same understanding of how "binding" the promise is
  • having had many explicit conversations on these matters with my partner

Finally, I'd also guess that I'm far from alone in simultaneously (a) being aware that a large portion of marriages end in divorce, (b) being aware that many of those divorces probably began with the couple feeling very confident their marriage wouldn't end in divorce, and (c) having a wedding in which a phrase like "for life" or "till death do us part" was used. 

And I think it would be odd to see all such people as having behaved poorly by making a promise they may well not keep and know in advance they may not keep, at least if the partners had discussed their shared understanding of what they were promising. (I'm not necessarily saying you're saying we should see those people that way.) One reason for this view is that people extremely often mean something other than exactly the literal meaning of what they've said, and this seems ok in most contexts, as long as people mutually understand what's actually meant.

(I think a reasonable argument can be made that marriages aren't among those "most contexts", given their unusually serious and legal nature. But it also seems worth noting that this is about what the celebrant said, not our vows or what we signed.)

Direct response, which is sort-of getting in the weeds on something I haven't really thought about in detail before, to be honest

What do you think the "for life" adds to the pledge if not "for the rest of your lives"?

One could likewise ask what "He spent his life working to end malaria" means that's different from "He spent some time working to end malaria". There, I'd say it adds the idea that this was a very major focus for perhaps at least 2 decades, probably more than 3 decades. Whereas "some time" could mean it wasn't a major priority for him at any point, or only for e.g. 10 years. 

It seems to me perhaps reasonable to think of "entered into for life" as meaning "entered into as one of the core parts of one's life for at least a few decades, and perhaps/ideally till the very end of one's life". Whereas "till death do us part" is very explicitly until the very end of one's life.

Out of curiosity, I've now looked up what dictionaries say "for life" means. The first two results I found said "for the whole of one's life : for the rest of one's life" (source) and "for the rest of a person's life" (source). This pushes against my (tentative) view, and in favour of your view. 

However, I'd tentatively argue that 2 of the 5 examples those dictionaries give actually seem to me to at least arguably fit my (tentative) view:

  • "She may have been scarred for life."
    • Obviously, people can say this as an exaggeration. But I think they can also say it in a more serious way, that people wouldn't perceive as an exaggeration, even if they actually just mean something like "scarred in a substantial way that resurfaces semi-regularly for at least 2 decades". (That's still a lot more than just "scarred" or "scarred for a while".)
  • "There can be no jobs for life."
    • Another dictionary tells me "job for life" means (as I'd expect) "a job that you can stay in all your working life"; not till the actual end of your life.

Two of the other examples are about being sentenced to prison for life; I think that also arguably fits my view, given how life sentences actually tend to work (as far as I'm aware). The fifth example - "They met in college and have remained friends for life" - could go either way.

(And again, I think it's common for people to not actually mean the dictionary definitions of what they say, and that this can be ok, as long as they understand each other.)

Comment by michaela on EA Relationship Status · 2020-09-21T06:56:32.997Z · score: 3 (2 votes) · EA · GW

(I'll indeed allow the little joke, and will furthermore add a link to the hilarious post which I think originated that phrase, for anyone who hasn't had the pleasure of encountering it yet.)

Comment by michaela on EA Relationship Status · 2020-09-20T12:59:14.563Z · score: 2 (3 votes) · EA · GW

I think that's reasonable. Here's one example to illustrate what might be making my intuitions differ a bit; I feel like you could say "He has spent his life working to end malaria" when someone is alive and fairly young, and also that you could say "He spent his life working to end malaria" even if really he worked on that from 30-60 and then retired. (Whereas I don't think this is true if you explicitly say "He worked to end malaria till the day he died".) In a similar way, I have a weak sense we can "enter into a union for life" without this literally extending for 100% of the rest of our lives. 

But maybe my intuition is being driven more by it being a present-tense matter of us currently voluntarily entering into this union. Analogously, I think people would usually feel it's reasonable for promises to not always be upheld if unusual and hard-to-foresee circumstances arose, the foreseeing of which would've made the promise-maker decide not to make the promise to begin with. (But this does get complicated if reference class forecasting suggests an e.g. 50% chance of some relevant circumstance arising, and it's just that any particular circumstance arising is hard to foresee, as it was in many of those 50% of cases.)

In any case, I guess I really think that whether and how partners explicitly discussed their respective understandings of their arrangement, in advance, probably matters more than the precise words the celebrant said.

Comment by michaela on EA Relationship Status · 2020-09-20T10:27:58.121Z · score: 6 (3 votes) · EA · GW

That all sounds reasonable. And yeah, I wasn't interpreting your comment as actually intended as an argument against marriage (just a hypothesis as to why EAs may tend to be less inclined to get married).

One thing I'd note is that I'm not sure "till death do us part" is actually required or default. The celebrant for our wedding just said:

I am to remind you of the solemn and binding nature of the relationship into which you are about to enter. Marriage, according to law in Australia, is the union of two people to the exclusion of all others, voluntarily entered into for life.

(And this was just her default; we didn't have to request a move away from "till death do us part". Note that this was a non-religious ceremony and celebrant.) 

Maybe that has the same literal meaning as "till death do us part"; I'm not sure. But I feel like I'd naturally interpret the phrasing my celebrant used as meaning that the two parties have thought really seriously about this, and do presently intend for this to last for life - without it necessarily meaning they totally commit to sticking with it till death or that they predict a 100% chance of that. 

(My partner and I also had more explicit conversations about this sort of thing.)

Comment by michaela on EA Relationship Status · 2020-09-20T07:38:59.446Z · score: 6 (4 votes) · EA · GW

I'd also be interested to find out what proportion of EA marriages are to non-EAs, and what proportion are relationships that began before either party discovered EA. I feel like that'd affect what the best explanation of this trend would be.

For one data point, I got married this year (~1.5 years after learning of EA), my relationship began before I discovered EA, and my partner is not an EA. 

And I'm 23, so apparently I might be ~25% of the married EA cohort in my age group. (Though I wasn't married in 2018 and may not have taken the survey then, so I'm not actually one of the four married 18-24 year olds shown there. Also, I'd guess that EA has grown and that this will slightly increase the size of each cohort.) So perhaps my one data point can be extrapolated from to a greater extent than one would intuitively assume.

Comment by michaela on EA Relationship Status · 2020-09-20T07:29:52.798Z · score: 8 (4 votes) · EA · GW

[Disclaimer: I notice that I felt weird about your comment to an extent that may not be reasonable, so my own comment here may be odd/have an odd tone. Also, I'm recently married, so maybe somehow I'm feeling defensive, but I really don't know why that'd be.]

My knee-jerk reaction is that that mindset, at least as phrased, would be quite naive consequentialism, for two main reasons: 

  • Getting married may itself change how much happiness a relationship provides and how long one wants to stay in it.
    • One part of what I have in mind is analogous to "burning one's boats", and the related notion in game theory that one can sometimes improve one's payoffs by cutting off some of one's own options.
  • Regularly running the decision procedure "explicitly try to work out what personal life decisions would maximise utility" will not necessarily be the best way to actually maximise utility.
    • Relatedly, Askell writes: "As many utilitarians have pointed out, the act utilitarian claim that you should ‘act such that you maximize the aggregate wellbeing’ is best thought of as a criterion of rightness and not as a decision procedure. In fact, trying to use this criterion as a decision procedure will often fail to maximize the aggregate wellbeing. In such cases, utilitarianism will actually say that agents are forbidden to use the utilitarian criterion when they make decisions."
    • That said, whether to get married is a large decision that doesn't arise often, so it's plausible that that's the sort of case where it is worth thinking as a consequentialist explicitly and in detail.

(That said, even if this way of thinking would indeed be quite naive consequentialism, that doesn't rule out the possibility that many EAs think this way, so your comment could still be onto something.)

Comment by michaela on EA Relationship Status · 2020-09-20T07:16:28.900Z · score: 4 (2 votes) · EA · GW

1. Anecdotally, conditional upon marriage, the rate of divorce in my EA friends seem much higher than among my non-EA friends of similar ages. So it is not the case that EAs are careful/slow to marry because they are less willing with making long-term commitments, or because they are more okay with pre-marital cohabitation.

I'm not sure I understand why the observation in the first sentence supports the claims in the second sentence? Couldn't EAs tend to be less willing to make long-term commitments, or be more ok with pre-marital cohabitation, but then there's also some other factor (e.g., not feeling bound by conventions, regularly changing lifestyles such as by moving, disagreeableness) meaning that if EAs get married they're more likely to get divorced? Or couldn't it be that EAs tend to have those two features, but the EAs who get married are ones who deviate from those tendencies?

My inside view is that if you don't update on the observed data and just consider which characteristics will make EAs more or less likely to be married, I think there are a bunch of factors that push EAs towards "more" as opposed to less.

This seems true to me as well.

Comment by michaela on Crucial questions for longtermists · 2020-09-20T06:39:13.984Z · score: 3 (2 votes) · EA · GW

Very interesting, thanks! Strong upvoted.

in reality, the population seems more likely to go extinct because of poor environmental conditions, random environmental fluctuations, loss of cultural knowledge (which, like genetic variation, goes down in small populations), or lack of physical goods and technology, none of which have much to do with genetic variation.

This matches what I had tentatively believed before seeing your comment - i.e., I had suspected that genetic diversity wasn't among the very most important considerations when modelling odds of recovery from collapse. So I've now updated to more confidence in that view. 

I raised MVP (from a genetic perspective) just as one of many considerations, and primarily because I'd seen it mentioned in The Precipice. (Well, Ord doesn't make it 100% clear that he's just talking about MVP from a genetic perspective, but the surrounding text suggests he is. Hanson also devotes two paragraphs to the topic, again alongside other considerations.)

Perhaps we should keep the term "minimum viable population size" but use a broader definition based on likelihood to survive, period. I see that Wikipedia uses a broad definition that includes extinction due to demographic and environmental stochasticity, but often MVP is used as in the OP to refer just to extinction due to genetic reasons, so it is important to clarify terms.

I'd agree that clarifying what one means is important. This is why I explicitly noted that here I was using MVP in a sense focused only on genetic diversity. To touch on the other "aspects" of MVP, I also have "What population size is required for economic specialisation, technological development, etc.?" 

It seems fine to me for people to also use MVP in a sense referring to all-things-considered ability to survive, or in a sense focused only on e.g. economic specialisation, as long as they make it clear that that's what they're doing. Indeed, I do the latter myself here: I write there that a seemingly important parameter for modelling odds of recovery is "Minimum viable population for sufficient specialisation to maintain industrialised societies, scientific progress, etc."

Another way in which the concept of a MVP is too simplistic...

I wasn't aware of these points; thanks for sharing them :)

Comment by michaela on How have you become more (or less) engaged with EA in the last year? · 2020-09-19T14:47:55.264Z · score: 4 (2 votes) · EA · GW

Nice to hear that your engagement has increased, and that you've started donating more and feel more inclined towards altruism now!

Out of interest, do you think it's more like engaging more with EA made your disposition towards altruism change, or the other way around, or both? Relatedly, do you think EA played an important role in you starting to give away 10%, or that you would've started doing so around the same time without EA?

Comment by michaela on EA Relationship Status · 2020-09-19T10:14:41.665Z · score: 7 (4 votes) · EA · GW

Epistemic status: Very anecdotal and probably unimportant.

I previously took part in a program called Teach For Australia (TFA), similar to the better known Teach First and Teach For America programs. I found the people in this program much more "like me" in a bunch of ways than any other group I'd encountered until then. I then discovered EA, and found EAs even more "like me", but in similar ways. And I have a loose impression that EA and TFA both disproportionately draw from fairly similar types of people. (E.g., ambitious, impact-oriented, career-prioritising, critical thinking, privileged young graduates of prestigious universities. Probably also more often non-religious than is typical - though by a smaller margin than is the case in EA - which is relevant in light of RyanCarey's comment.)

It also seemed to me that people in the TFA program were quite surprisingly often married or engaged, despite their young average age. I didn't do any systematic data collection, but of the group of me and the 3 other TFAs I was closest with, the average age is ~27, and 75% are married or would've had their wedding by now if not for COVID (and also started their current relationships 5-10 years ago, so there wasn't even much time in the "single" category).

This makes this data somewhat more surprising to me, as it seems to weakly suggest that some of the differences between EAs and society at large may increase marriage rates and reduce single rates, and that other differences are having to push hard to offset that. (Though I guess that that claim, as stated, should be fairly obvious.)

Comment by michaela on MichaelA's Shortform · 2020-09-18T07:01:28.123Z · score: 3 (2 votes) · EA · GW

Suggested by a member of the History and Effective Altruism Facebook group:

Comment by michaela on Modelling the odds of recovery from civilizational collapse · 2020-09-18T06:56:14.381Z · score: 5 (3 votes) · EA · GW

Also, if you're aware of Rethink Priorities/Luisa Rodriguez's work on modelling the odds and impacts of nuclear war (e.g., here), I'd be interested to hear whether you think making parameter estimates was worthwhile in that case. (And perhaps, if so, whether you think you'd have predicted that beforehand, vs being surprised that there ended up being a useful product.)

This is because that seems like the most similar existing piece of work I'm aware of (in methodology rather than topic). And to me it seems like that project was probably worthwhile, including the parameter estimates, and that it provided outputs that are perhaps more useful and less massively uncertain than I would've predicted. And that seems like weak evidence that parameter estimates could be worthwhile in this case as well.

Comment by michaela on Modelling the odds of recovery from civilizational collapse · 2020-09-18T06:48:48.795Z · score: 4 (2 votes) · EA · GW

Thanks for the comment. That seems reasonable. I myself had been wondering if estimating the parameters of the model(s) (the third step) might be: 

  • the most time-consuming step (if a relatively thorough/rigorous approach is attempted)
  • the least insight-providing step (since uncertainty would likely remain very large)

If that's the case, this would also reduce the extent to which this model could "plausibly inform our point estimates" and "narrow our uncertainty". Though the model might still capture the other two benefits (indicating what further research would be most valuable and suggesting points for intervention).

That said, if one goes to the effort of building a model of this, it seems to me like it's likely at least worth doing something like: 

  1. surveying 5 GCR researchers or other relevant experts on what parameter estimates (or confidence intervals or probability distributions for parameters[1]) seem reasonable to them
  2. inputting those estimates
  3. seeing what outputs that suggests, and more importantly performing sensitivity analyses
  4. thereby gaining knowledge of what the cruxes of disagreement appear to be and which parameters most warrant further research, further breakdown, and/or more experts' views

And then perhaps this project could stop there, or perhaps it could then involve somewhat deeper/more rigorous investigation of the parameters where that seems most valuable.
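To make steps 2-4 a bit more concrete, here's a minimal sketch (in Python) of the kind of thing I have in mind. The toy model structure, parameter names, and ranges below are purely illustrative placeholders I've made up for this comment, not an actual proposal for the model:

import numpy as np

# Hypothetical parameters for a toy "odds of recovery from collapse" model, each
# with a (low, high) range meant to span several surveyed experts' estimates.
params = {
    "p_enough_survivors":     (0.70, 0.99),  # enough people survive the collapse itself
    "p_retain_key_knowledge": (0.30, 0.95),  # critical knowledge/skills are retained
    "p_rebuild_agriculture":  (0.50, 0.98),  # food production is re-established
    "p_reindustrialise":      (0.20, 0.90),  # industry re-emerges before other risks bite
}

def toy_model(x):
    # Toy structure: recovery requires every step to succeed. (Far too simple,
    # but enough to demonstrate the sensitivity-analysis step.)
    return np.prod([x[k] for k in params])

rng = np.random.default_rng(0)

# Step 3a: Monte Carlo over uniform distributions between each parameter's bounds.
draws = np.column_stack([rng.uniform(lo, hi, 10_000) for lo, hi in params.values()])
outputs = draws.prod(axis=1)
print(f"P(recovery): median {np.median(outputs):.2f}, "
      f"90% interval {np.quantile(outputs, 0.05):.2f}-{np.quantile(outputs, 0.95):.2f}")

# Step 3b: one-at-a-time sensitivity - swing each parameter across its expert range
# while holding the others at their midpoints, and see how much the output moves.
midpoints = {k: (lo + hi) / 2 for k, (lo, hi) in params.items()}
for k, (lo, hi) in params.items():
    swing = toy_model({**midpoints, k: hi}) - toy_model({**midpoints, k: lo})
    print(f"{k}: output swings by {swing:.2f} across the surveyed range")

The real value would then be step 4: whichever parameters produce the largest swings (or the most disagreement between experts) would be the ones most worth further research or further decomposition.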

Any thoughts on whether that seems worthwhile?

[1] Perhaps this step could benefit from use of Elicit; I should think about that if I pursue this idea further.

Comment by michaela on Modelling the odds of recovery from civilizational collapse · 2020-09-17T15:19:29.384Z · score: 3 (2 votes) · EA · GW

Thanks, I've sent you a PM :)

ETA: Turns out I was aware of the work Peter had in mind; I think it's relevant, but not so similar as to strongly reduce the marginal value this project could provide.

Comment by michaela on Risks from Atomically Precise Manufacturing · 2020-09-15T09:23:06.324Z · score: 5 (3 votes) · EA · GW

It looks like FHI now want to start looking into nanotechnology/APM more, and to build more capacity in that area. They're hiring researchers in a bunch of areas, one of which is:

Nanotechnology: analysing roadmaps to atomically precise manufacturing and related technologies, including possible intersections with advances in artificial intelligence, and potential impacts and strategic implications of progress in these areas.

Comment by michaela on How have you become more (or less) engaged with EA in the last year? · 2020-09-15T07:23:29.072Z · score: 6 (4 votes) · EA · GW

That makes sense to me. 

It also reminds me of the idea - which I've either heard before or said before - of talking about taking the Giving What We Can pledge by telling the story of what led one to take it, rather than as an argument for why one should take it. A good thing about that is that you can still present the arguments for taking it, as they probably played a role in the story, and if other arguments played a role in other people's stories you can talk about that too. But it probably feels less pushy or preachy that way, compared to framing it more explicitly as a set of arguments.

(These two pages may also be relevant: 1, 2.)

Comment by michaela on How have you become more (or less) engaged with EA in the last year? · 2020-09-14T07:35:09.814Z · score: 6 (4 votes) · EA · GW

Thanks for sharing :)

Do you think you wouldn't have found it as negative/abrasive if the people still basically argued against a focus on those causes or an engagement with other advocacy orgs or the like, but did so in a way that felt less like a quick, pre-loaded answer, and more like they: 

  • were really explaining their reasoning
  • were open to seeing if you had new arguments for your position
  • were just questioning neglectedness/tractability, rather than importance

I ask because I think there'll be a near-inevitable tension at times between being welcoming to people's current cause prioritisation views and staying focused on what does seem most worth prioritising.[1] So perhaps the ideal would be a bit more genuine open-mindedness to alternative views, but mainly a more welcoming and less dismissive-seeming way of explaining "our" views. I'd hope that that would be sufficient to avoid seeming arrogant or abrasive or driving people away, but I don't know.

(Something else may instead be the ideal. This could include spending more time helping people think about the most effective approaches to causes that don't actually seem to be worth prioritising. But I suspect that that's not ideal in many cases.)

[1] I'm not sure this tension is strong for climate change, as I do think there are decent arguments for prioritising (neglected aspects of) climate change (e.g., nuclear power, research into low-probability extreme risks). But I think this tension probably exists for human rights advocacy and various other issues many people care about.

Comment by michaela on How have you become more (or less) engaged with EA in the last year? · 2020-09-13T19:18:54.738Z · score: 4 (3 votes) · EA · GW

Very glad your second bout of experiences with EA has been more positive! And sorry to hear that your earlier experiences were negative/abrasive. I'd be interested to hear more about that, though that also feels like the sort of thing that might be personal or hard to capture in writing. But if you do feel comfortable sharing, I'd be interested :)

Additionally/alternatively, I'd be interested in whether you have any thoughts on more general trends that could be tweaked, or general approaches that could be adopted, to avoid EA pushing people away like it did the first time you engaged. (Even if those thoughts are very tentative, they could perhaps be pooled with other tentative thoughts to form a clearer picture of what the community could do better.)

Comment by michaela on How have you become more (or less) engaged with EA in the last year? · 2020-09-13T19:03:45.781Z · score: 9 (2 votes) · EA · GW

I've also changed the style/pace of my engagement somewhat, in a way that feels a little hard to describe. 

It's sort-of like, when I first encountered EA, I was approaching it as a sprint: there were all these amazing things to learn, all these important career paths to pursue, and all these massive problems to solve, and I had to go fast. I actually found this exciting rather than stressful, but it meant I wasn't spending enough time with my (non-EA) partner, was too constantly talking about EA things with her, etc. (I think this is more about my personality than about EA specifically, given that a similar thing occurred when I first started teaching in 2018.)

Whereas now it's more like I'm approaching EA as a marathon. By that I mean I'm: 

  • Spending a little less time on "work and/or EA stuff" and a little more time with my partner
    • My work is now itself EA stuff, so I actually increased my time spent on EA stuff compared to when I was a teacher. But I didn't increase it as much as I would've if still in "sprint mode".
  • Making an effort to more often talk about non-EA things with my partner
  • Reducing how much I "sweat the small stuff"; being more willing to make some frivolous expenditures (which are actually small compared to what I'm donating and will donate in future) for things like nice days out, and to not think carefully each time about whether to do that

I think the factors that led me to switch to marathon mode are roughly that:

  • It seemed best for my partner and my relationship
  • I've come to see my relationship itself in a more marathon-y and mature way (or something like that; it's hard to describe), I think due to the fact that I got married this year
    • This seems to have made ideas about compromise and long time horizons more salient to me
    • (I mean this all in a good way, despite how "seeing my relationship as a marathon" might sound!)
  • My career transition worked! So now I feel a bit less like there's a mad dash to get onto a high impact path, and a bit more like I just need to work well and sustainably
    • But this change was only moderate, for reasons including that I remain uncertain about which path I should really be on
  • Getting an EA research job means I can now scratch my itch for learning, discussing, and writing about interesting and important ideas during my work hours, and therefore don't feel an unmet intellectual "need" if I spend my free hours on other things
    • In contrast, when I was a teacher, I mostly had to get my fill of interesting and important ideas outside of work time, biting into the time I spent with my partner

Comment by michaela on How have you become more (or less) engaged with EA in the last year? · 2020-09-13T18:54:08.087Z · score: 10 (6 votes) · EA · GW

I've become much more engaged in the last year. I think this was just a continuation of a fairly steady upward trend in my engagement since I learned about EA in late 2018. And I think this trend hasn't been about increased inclination to engage (because I was already very sold on EA shortly after encountering it), but rather about increased ability to engage, resulting from me: 

  • catching up on EA's excellent back-catalogue of ideas
  • gradually having more success with job applications 

Ways my engagement increased over the past ~12 months include that I:

  • Continued applying to a bunch of EA-aligned jobs, internships, etc.
    • Over 2019 as a whole, I applied to ~30 roles
    • Perhaps ~10 were with non-EA orgs
  • Attended my first EAGx (Australia) and EAG (London)
  • Made my first 10% donation
    • This was to the EA Long-Term Future Fund
    • This was also my first donation after I took the GWWC Pledge in early 2019
  • Started posting to the EA Forum, as well as commenting much more
  • Was offered two roles at EA orgs and accepted one
  • Stayed at the EA Hotel
  • Mostly moved from vegetarianism to veganism
    • This was influenced by my stay at the EA Hotel, as basically all the food there was vegan, and I realised I was pretty happy with it
  • Was later offered a fellowship at a different EA org and accepted it
  • Made a bunch of EA friends

Overall, I've really enjoyed this process, and I'm very glad I found EA. 

I've found some EAs or EA-adjacent people rude or arrogant, especially on Facebook groups and LessWrong (both of which I value a lot overall!). But for some reason this hasn't really left me with a bad taste in my mouth, or a reduced inclination to engage with EA as a whole. And I've much more often had positive experiences (including on Facebook groups and LessWrong).

Comment by michaela on Propose and vote on potential tags · 2020-09-13T07:22:51.680Z · score: 4 (2 votes) · EA · GW

Personally, I think those two tags have sufficiently large and separate scopes for it to make sense for the forum to have both tags. (I didn't create either tag, by the way.) 

But the Longtermism (Philosophy) tag has perhaps been used too liberally, including for posts that should've only been given tags like Long-Term Future or Existential Risk. Perhaps this is because the Longtermism (Philosophy) tag was around before Long-Term Future was created (not sure if that's true), and/or because the first two sentences of the Longtermism (Philosophy) tag didn't explicitly indicate that its scope was limited to philosophical aspects of longtermism only. Inspired by your comment, I've now edited the tag description to hopefully help a bit with that. 

The tag description used to be:

Longtermism is the idea that we can maximize our impact by working to ensure that the long-run future goes well (because it may contain an enormous number of people whose lives we may be able to improve).

This is a relatively new idea, and people in the EA movement currently work on a wide range of open questions related to different facets of longtermism. 

This tag is meant for discussion of longtermist philosophy, rather than specific longtermist cause areas (there are other tags for those, like Existential Risk).

The tag description is now:

The Longtermism (Philosophy) tag is for posts about philosophical matters relevant to longtermism, meaning, roughly, "an ethical view that is particularly concerned with ensuring long-run outcomes go well" (MacAskill). Longtermism is a relatively new idea, and people in the EA movement currently work on a wide range of open questions related to different facets of longtermism. 

For posts about what the long-term future might actually look like, see Long-Term Future. For posts about specific longtermist cause areas, see other tags such as Existential Risk.

(The second sentence could perhaps be cut.)

For comparison, the tag description of Long-Term Future is:

The Long-Term Future tag is meant for discussion of what the long-term future might actually look like. This doesn't necessarily overlap with the Longtermism (Philosophy) tag, because a post attempting to e.g. model the future of space travel won't necessarily discuss the philosophical implications of its model.

Comment by michaela on Propose and vote on potential tags · 2020-09-13T07:04:35.042Z · score: 2 (1 votes) · EA · GW

Agreed. I think people creating tags should probably always add those descriptions/definitions.

One thing I'd note is that anyone can add descriptions/definitions for tags, even if they didn't create them. This could be hard if you're not sure what the scope was meant to be, but if you think you know what the scope was meant to be, you could consider adding a description/definition yourself.

Comment by michaela on Should surveys about the quality/impact of research outputs be more common? · 2020-09-11T17:52:36.984Z · score: 3 (2 votes) · EA · GW

A lot of people might get a lot of the value from a fairly small number of responses, which would minimise costs and negative externalities.

Agreed.

This sort of thing is part of why I wrote "relatively publicly advertised", and added "And maybe it doesn't hold for surveys sent out in a more targeted manner." But good point that someone could run a relatively publicly advertised survey and then just close it after a small-ish number of responses; I hadn't considered that option.

Comment by michaela on MichaelA's Shortform · 2020-09-11T17:50:23.376Z · score: 2 (1 votes) · EA · GW

Good point. 

Though I guess I suspect that, if the reason a person finds my original research not so useful is just because they aren't the target audience, they'd be more likely to either not explicitly comment on it or to say something about it not seeming relevant to them. (Rather than making a generic comment about it not seeming useful.) 

But I guess this seems less likely in cases where: 

  • the person doesn't realise that the key reason it wasn't useful is that they weren't the target audience, or
  • the person feels that what they're focused on is substantially more important than anything else (because then they'll perceive "useful to them" as meaning a very similar thing to "useful")

In any case, I'm definitely just taking this survey as providing weak (though useful) evidence, and combining it with various other sources of evidence.

Comment by michaela on A central directory for open research questions · 2020-09-10T18:20:04.403Z · score: 3 (2 votes) · EA · GW

Update: Effective Thesis have now basically done both of the things you suggested (you can see the changes here). So thanks for the suggestions!

Comment by michaela on Asking for advice · 2020-09-10T07:09:45.880Z · score: 2 (1 votes) · EA · GW

I think I might be happier if there was an explicit and expected part of the process where the other person  confirms they are aware of the meeting and will show up, either by emailing to say "I'll see you at <time>!" or if they have to click "going" to the calendar invitation and I would get a notification saying "They confirmed", and only then was it 'officially happening'.

Yeah, even as an unabashed Calendly-lover I think these things would definitely be improvements. I've thought before that it seems weird that the person whose Calendly it is gets set to "going" by default, which means the person who booked the time will by default only know that the other person received an email, not that they saw it or plan to be there. 

For this reason, when people book a slot with me, I try to always send a message like "I'll see you at <time>!" But I think it'd be better to have a stronger norm around this, and/or have the person not be set to "going" until they actively click "going".

(It also looks like your comment has gotten a downvote, which seems surprising to me. My small plug for Calendly has turned into a much larger and spicier thread than expected.)

Comment by michaela on Making decisions under moral uncertainty · 2020-09-09T19:56:52.951Z · score: 2 (1 votes) · EA · GW

(2) how should one assign probabilities to moral theories?

(I'll again just provide some thoughts rather than actual, direct answers.)

Here I'd again say that I think an analogous question can be asked in the empirical context, and I think it's decently thorny in that context too. In practice, I think we often do a decent job of assigning probabilities to many empirical claims. But I don't know if we have a rigorous theoretical understanding of how we do that, or of why that's reasonable, or at least of how to do it in general. (I'm not an expert there, though.)

And I think there are some types of empirical claims where it's pretty hard to say how we should do this.[1] Here are some examples I discussed in another post:

  • What are the odds that “an all-powerful god” exists?
  • What are the odds that “ghosts” exist? 
  • What are the odds that “magic” exists? 

What process do we use to assign probabilities to these claims? Is it a reasonable process, with good outputs? (I do think we can use a decent process here, as I discuss in that post; I'm just saying it doesn't seem immediately obvious how one does this.)

I do think this is all harder in the moral context, but some of the same basic principles may still apply.

In practice, I think people often do something like arriving at an intuitive sense of the likelihood of the different theories (or maybe how appealing they are). And this in turn may be based on reading, discussion, and reflection. People also sometimes/often update on what other people believe. 

I'm not sure if this is how one should do it, but I think it's a common approach, and it's roughly what I've done myself.

[1] People sometimes use terms like Knightian uncertainty, uncertainty as opposed to risk, or deep uncertainty for those sorts of cases. My independent impression is that those terms often imply a sharp binary where reality is more continuous, and it's better to instead talk about degrees of robustness/resilience/trustworthiness of one's probabilities. Very rough sketch: sometimes I might be very confident that there's a 0.2 probability of something, whereas other times my best guess about the probability might be 0.2, but I might be super unsure about that and could easily change my mind given new evidence.
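
To make that last point a bit more concrete, here's a minimal sketch of what I mean by differing resilience. The Beta-distribution framing and all the numbers are my own illustrative assumptions, not anything from the post: two credences can have the same point value but respond very differently to new evidence.

```python
# Two "0.2 beliefs" that differ in resilience, represented (purely as an
# illustration) as Beta(a, b) distributions over the true chance.
# Both have mean a / (a + b) = 0.2, but one is backed by far more evidence.

def beta_mean(a, b):
    return a / (a + b)

fragile   = (2, 8)      # mean 0.2, based on very little evidence
resilient = (200, 800)  # mean 0.2, based on a lot of evidence

# Observe the event occur in 5 out of 5 new trials, and do the standard
# conjugate update: a -> a + successes, b -> b + failures.
for name, (a, b) in [("fragile", fragile), ("resilient", resilient)]:
    print(f"{name}: before={beta_mean(a, b):.3f}, after={beta_mean(a + 5, b):.3f}")

# fragile:   before=0.200, after=0.467   (the estimate moves a lot)
# resilient: before=0.200, after=0.204   (the estimate barely moves)
```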

Comment by michaela on Making decisions under moral uncertainty · 2020-09-09T19:43:29.167Z · score: 2 (1 votes) · EA · GW

Glad you found the post useful :)

Yeah, I think those are both very thorny and important questions. I'd guess that no one would have amazing answers to them, but that various other EAs would have somewhat better answers than me. So I'll just make a couple quick comments.

(1) how should one select which moral theories to use in ones evaluation of the expected choice worthiness of a given action?

I think we could ask an analogous question about how to select which hypotheses about the world/future to use in one's evaluation of the expected value of a given action, or just in evaluating what will happen in future in general. (I.e., in the empirical context, rather than the moral/normative context.)

For example, if I want to predict the expected number of readers of an article, I could think about how many readers it'll get if X happens and how many it'll get if Y happens, and then think about how likely X and Y seem. X and Y could be things like "Some unrelated major news event happens to happen on the day of publication, drawing readers away", or "Some major news event that's somewhat related to the topic of the article happens soon-ish after publication, boosting attention", or "The article is featured in some newsletter/roundup."
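
As a toy version of that sort of calculation (the hypotheses, probabilities, and readership figures below are all invented purely for illustration):

```python
# Toy expected-value calculation over a few hand-picked hypotheses.
# All probabilities and readership figures are invented for illustration.
hypotheses = [
    ("baseline: nothing unusual happens",             0.85,  1_000),
    ("unrelated major news event draws readers away", 0.10,    400),
    ("related major news event boosts attention",     0.04,  5_000),
    ("article featured in some newsletter/roundup",   0.01, 20_000),
]

expected_readers = sum(p * readers for _, p, readers in hypotheses)
print(f"Expected readers: {expected_readers:.0f}")  # 1290
```

The question I'm gesturing at is then essentially which rows belong in that list, and how far into the tail of unlikely hypotheses one should go.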

But how many hypotheses should I consider? What about pretty unlikely stuff, like Obama mentioning the article on TV? What about really outlandish stuff that we still can't really assign a probability of precisely 0, like a new religion forming with that article as one of its sacred texts?

Now, that response doesn't actually answer the question at all! I don't know how this problem is addressed in the empirical context. But I imagine people have written and thought a bunch about it in that context, and that what they've said could probably be ported over into the moral context.

(It's also possible that the analogy breaks down for some reason I haven't considered.)

Comment by michaela on Asking for advice · 2020-09-09T19:34:05.696Z · score: 3 (2 votes) · EA · GW

I'm particularly annoyed if someone asks me for a favour and then send a calendly with only a couple slots, or slots that don't make sense in my time zone. I'm very very annoyed if I say "How's Monday at 8?" and they say, "I think that should be fine, can you check my Calendly?"

Yeah, I think I'd find both of those annoying as well, the second especially: it seems an entirely unnecessary use of Calendly anyway, and does seem to fairly strongly signal "Your time is worth less than mine".

Instead of starting from my preferences, I'm put in the position of picking which of their preferences is the least inconvenient for me. It's perfectly functional, but I don't get to be the star.

Interesting. I guess I'd assumed people would instead see it more like me offering them a massive menu that they can pick from with ease and at their convenience. (Well, not really like that, but something more like that than like them having to work around me in a way that puts me first.)

Stefan wrote in another comment:

Maybe one option would be to both send the Calendly and write a more standard email? E.g.:

"When would suit you? How about Tuesday 3pm or Wednesday 4pm? Alternatively, you could check my Calendly, if you prefer."

Do you think that that option would alleviate this feeling for you? 

Comment by michaela on Asking for advice · 2020-09-09T19:28:18.443Z · score: 4 (2 votes) · EA · GW

Yeah, I think roughly that sort of message is what I'll use from now on, as a result of the (rather unexpected!) data this thread has provided. It still seems to me that Calendly (at least given my flexible schedule) will very likely tend to save both parties time and effectively give them more choice over timings, but I'll provide some particular option alongside the link from now on. 

I think this would also help in cases where the person I'm talking to would feel it's easier to make a decision if one or two options are singled out for them (e.g. Lukas, based on his comment).

Comment by michaela on Should surveys about the quality/impact of research outputs be more common? · 2020-09-09T19:18:37.514Z · score: 3 (2 votes) · EA · GW

Strong upvote for two good points that, in retrospect, I feel should've been obvious to me! 

In light of those points, as well as what I mentioned above, my new, quickly adjusted, bottom-line view would be that:

  • People considering running these surveys should take into account that cost and that risk which you mention.
  • I probably still think most EA research organisations should run such a survey at least once.
    • In many cases, it may make the most sense to just send it to some particular group of people, or to post it somewhere more targeted at the intended audience than the EA Forum as a whole. This would somewhat reduce the risk of survey fatigue, since not all of these surveys would then be publicised to basically all EAs.
    • In many cases, it may make sense for the survey to be even shorter than my one.
    • In many cases, it may make sense to run the survey only once, rather than something like annually.
  • Probably no/very few individual researchers who are working at organisations that are themselves running surveys should run their own, relatively publicly advertised individual surveys (even if it's at a different time to the org's survey).
    • This is because those individuals' surveys would probably provide relatively little marginal value, while still having roughly the same time costs and survey-fatigue risk.
    • But maybe this doesn't hold if the org only does a survey once, and the researcher is considering running a survey more than a year later.
    • And maybe it doesn't hold for surveys sent out in a more targeted manner.
  • Even among individual researchers who work independently, or whose org isn't running surveys, probably relatively few should run their own, relatively publicly advertised individual surveys.
    • The exceptions may tend to be those who wrote a large number of outputs, on a wide range of topics, for relatively broad audiences. (For the reasons alluded to in my parent comment.)

I could definitely imagine shifting my views on this again, though. 

Comment by michaela on Asking for advice · 2020-09-09T15:52:37.939Z · score: 2 (1 votes) · EA · GW

These comments are all useful data for me, but I also find them somewhat confusing. Are you referring to cases where the person's Calendly is quite full, so you're forced into a narrow range of options?

My Calendly is usually quite empty, as my schedule is quite flexible. So I'd hope this comes across to people as being very considerate of their schedule, since they can choose from a very wide range of times and dates. 

Or maybe you find it annoying either way, and it's more like getting sent a calendly link just feels less considerate of your schedule than being explicitly asked when's good for you?

Comment by michaela on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-09T06:46:05.097Z · score: 2 (1 votes) · EA · GW

Thanks. 

Those answers make sense to me. But I notice that the answer to question 1 sounds like an outcome you want to bring about, yet not one I'd be much more surprised to observe in a world where CRS doesn't exist or has no impact than in one where it does. This is because it could be brought about by the actions of others (e.g., CLR). 

So I guess I'd be curious about things like:

  • Whether and how you think that that desired world-state will look different if CRS succeeds than if CRS accomplishes very little but other groups with somewhat similar goals succeed
  • How you might disentangle the contribution of CRS to this desired outcome from the contributions of others

I guess this connects to the question of quality/impact assessment as well. 

I also think this dilemma is far from unique to CRS. In fact, it's probably weaker for CRS than for non-suffering-focused longtermists (e.g. much of FHI), because there are currently more of the latter (or at least they control more resources), so there are more plausible alternative candidates for the causes of non-suffering-focused longtermist impacts.

Also, do you think it might make sense for CRS to run a (small) survey about the quality & impact of its outputs?

Comment by michaela on AMA: Tobias Baumann, Center for Reducing Suffering · 2020-09-08T17:13:54.254Z · score: 2 (1 votes) · EA · GW

That makes sense, thanks.

Do you have a sense of who you want to take up that project, or who you want to catalyse it among? E.g., academics vs EA researchers, and what type/field? 

And does this influence what you work on and how you communicate/disseminate your work?

Comment by michaela on Propose and vote on potential tags · 2020-09-08T09:23:04.648Z · score: 2 (1 votes) · EA · GW

In addition to the three tags mentioned as “See also”, this tag would perhaps overlap a bit with the tags:

  • Forecasting
  • Org Update
  • Cause Prioritization
  • Community Projects
  • Criticism (EA Cause Areas)
  • Criticism (EA Movement)
  • Criticism (EA Orgs)
  • Data (EA Community)
  • EA Funding

Comment by michaela on Propose and vote on potential tags · 2020-09-08T09:22:48.642Z · score: 2 (1 votes) · EA · GW

(Update: I've now made this tag.)

Impact Assessment (or maybe something like Impact Measurement or Measuring Impact)

Proposed rough description: 

The Impact Assessment tag is for posts relevant to "measuring the effectiveness of organisational activities and judging the significance of changes brought about by those activities" (source). This could include posts which include impact assessments; discuss pros, cons, and best practices for impact assessment; or discuss theories of change or forecasts of impacts, against which measured impacts could later be compared. 

See also Org Strategy, Statistical Methods, and Research Methods.

A handful of the many posts that this tag would fit: 

Comment by michaela on Should surveys about the quality/impact of research outputs be more common? · 2020-09-08T09:11:27.582Z · score: 6 (3 votes) · EA · GW

On Q3 

If an org/individual wants to run such a survey, I’d probably suggest they read Rethink’s post on their survey, Rethink’s survey itself, my survey, 80,000 Hours’ survey, and maybe 80,000 Hours’ annual review (I haven’t read the full version of that annual review myself). 

I’d also suggest reflecting on one’s theory of change for one’s research. (Though I’d also suggest this even if one isn’t planning to run this sort of survey.)

I also made quick predictions about what my survey results would be, and what would be “surprisingly” good or bad results. This was to give myself some sort of baseline to compare results against, and help me know/remember how surprised I should be. I think this was worthwhile, and I would do it again.

Finally, some small things I’ll change about my own survey if I run it again:

  • I should’ve been clearer about whether I wanted feedback on my forum comments, in addition to my posts
  • I should’ve added a box at the end asking if respondents were comfortable with their comments being included verbatim in a public write-up

Comment by michaela on Should surveys about the quality/impact of research outputs be more common? · 2020-09-08T09:10:56.303Z · score: 2 (1 votes) · EA · GW

On Q2 

Overall, my independent impression is that more/most/all EA research orgs should run such surveys, and that it might be worth individual EA researchers experimenting with doing so as well. I also expect I’ll run another survey of this kind next year. This is all partly due to the potential upsides, but also partly about the seemingly low costs, as I think these surveys should usually take little time to run and reflect on. More details and caveats follow.

Indications of other people’s views

Rethink’s post about their survey says:

Two of our founding values at Rethink Priorities are transparency and impact assessment. Here we present the results of our stated intention to annually run a formal survey to discover if one of our target audiences, decision-makers and donors in the areas which we investigate, has read our work and if it has influenced their decision-making. Due to the small sample of 47 respondents, the disproportionate importance of some of these respondents, and the ability to highlight comments from only those who opted to share their responses publicly, the precise results should not be taken too seriously. It is also very important to note that this is just one of the many ways we are assessing our impact. Nevertheless, we will present the overall results and the results by cause area.

Max Daniel commented on that post:

Thanks for posting this! I'd really like to see more organizations evaluate their impact, and publish about their analysis.

(Though note that impact could also be evaluated without a survey of this kind.)

On the first page of their currently running annual impact survey, 80,000 Hours writes:

Your survey responses are extremely useful to us.

They help us understand whether we're doing any good, and if so what part of our work is actually helping you.

That means we can focus on doing more of the things that are having an impact, and deprioritise those that aren't valuable, or are even doing harm. 

My views before running my survey

My views before running my survey were similar to my current view, though even more tentative and even less fleshed out. My basic reasoning was as follows: 

First, I think that getting clear feedback on how well one is doing, and how much one is progressing, tends to be somewhat hard in general, but especially when it comes to:

  • Research
  • Actually improving the world compared to the counterfactual
    • Rather than, e.g., getting students’ test scores up, meeting an organisation’s KPIs, or publishing a certain number of papers

(I also think this applies especially to relatively big-picture/abstract research, rather than applied research, and to longtermism. This was relevant to my case, but isn’t central to the following points.)

Second, I think some of the best metrics by which to judge research are whether people:

  • are bothering to pay attention to it
  • think it’s interesting
  • think it’s high-quality/rigorous/well-reasoned
  • think it addresses important topics
  • think it provides important insights
  • think they’ve actually changed their beliefs, decisions, or plans based on that research
  • etc.

I think this data is most useful if these people have relevant expertise, are in positions to make especially relevant and important decisions, etc. But anyone can at least provide input on things like how well-written or well-reasoned some work seems to have been. And whoever the respondents are, whether the research influenced them probably provides at least weak evidence regarding whether the research influenced some other set of people (or whether it could, if that set of people were to read it).

Third, impact surveys are one way to gather data on these metrics. Such surveys aren’t the only way to do that, and these metrics aren’t the only ones that matter. But I expect it to tend to be useful to gather more data than people would by default, and to gather data from a more diverse set of sources (each with their own, different limitations).

Fourth, a lot of the data I’d gotten was from people actively reaching out to me, unprompted and non-anonymously. I expect this data to be biased towards positive feedback, because:

  • people who like my work are more likely to reach out to me
  • a lack of anonymity may bias people towards being friendly / avoiding being “rude” / avoiding hurting my feelings.

Surveys face similar sampling and response biases, but perhaps to a smaller extent, because: 

  • people are at least prompted to participate (though they still choose at that point to opt in or out)
  • respondents are anonymous.
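
To make the direction and rough size of that difference concrete, here's a toy selection-bias sketch. The distribution of "true" ratings and all the response rates are entirely made-up assumptions, chosen only to illustrate how unprompted feedback and survey responses can both overstate quality, with the former overstating it more:

```python
# Toy illustration of selection bias in feedback on research outputs.
# Suppose readers' true ratings (1-5) are distributed as below, and that
# happier readers are much more likely to reach out unprompted, but only
# somewhat more likely to answer an anonymous survey.
ratings       = [1, 2, 3, 4, 5]
true_shares   = [0.10, 0.15, 0.30, 0.30, 0.15]   # hypothetical true distribution
reachout_rate = [0.01, 0.01, 0.02, 0.05, 0.15]   # chance of unprompted feedback
survey_rate   = [0.20, 0.20, 0.25, 0.30, 0.35]   # chance of answering a survey

def observed_mean(response_rates):
    # Mean rating among those who actually respond, weighted by response rate.
    weights = [s * r for s, r in zip(true_shares, response_rates)]
    return sum(rating * w for rating, w in zip(ratings, weights)) / sum(weights)

true_mean = sum(r * s for r, s in zip(ratings, true_shares))
print(f"true mean:           {true_mean:.2f}")                      # 3.25
print(f"unprompted feedback: {observed_mean(reachout_rate):.2f}")   # ~4.23 (strongly biased up)
print(f"survey respondents:  {observed_mean(survey_rate):.2f}")     # ~3.47 (less biased)
```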

With my survey in particular, I wanted to get additional inputs into my thinking about: 

  1. whether EA-aligned research and/or writing is my comparative advantage (as I’m also actively considering a range of alternative pathways)
  2. which topics, methodologies, etc. within research and/or writing are my comparative advantage
  3. specific things I could improve about my research and/or writing (e.g., topic choice, how rigorous vs rapid-fire my approach should be, how concise I should be)

How I’ve updated my views based on further thought

Potential relevance of one’s theory of change

I’d guess that key components of Rethink Priorities’ and 80,000 Hours’ theories of change involve relatively directly influencing key decisions that are (a) made by people outside their organisations, and (b) not just about further research. This could include things like career and donation decisions.

There may be many research orgs for which that is not the case. For example, some orgs may view their research as being almost entirely intended to lay the groundwork for further research done by themselves or by others. (I expect Rethink and 80k also have this as one goal for their research, but that for them this goal isn’t as dominant.) Orgs in this category may include MIRI and most academic institutes (including GPI and FHI).

If this is true, it might mean that orgs like Rethink and 80k have an unusually large and hard-to-pin-down key audience for their work. Perhaps many other orgs can be satisfied simply with things like: 

  • seeing how many citations their papers get, and how those papers build on their papers
  • getting a sense of their reputation among the other few dozen relevant researchers at a conference for their field

Potential relevance of how diverse one’s topic choices are

Similarly, if an org/individual writes only on a handful of relatively narrow areas, it may be easier for them to identify a small set of particularly relevant people and get their feedback without running a survey. In contrast, if an org/individual’s writings span many areas, it may be more valuable to publicly post a survey in fora where their writings are read.

Potential relevance of one’s speed of output

Perhaps if an org/individual produces something like 1-5 outputs per year, it makes sense to just solicit input on individual pieces. In contrast, a larger amount of output per unit of time might increase the value of a survey gathering views on all of this output, including on which pieces were most widely read, seemed most useful, etc.

Potential reputational relevance of “EA vs not” and “academic vs not”

Perhaps outside of EA, and perhaps in academia, running this sort of survey would seem weird and somehow tarnish one’s reputation? (I have no real reason for believing this; it just seems plausible.)

However...

I think all of those points except the last one would merely somewhat reduce how useful surveys are, rather than making them useless. It still seems to me that surveys would often provide relevant and useful data, and data with different limitations to data from other sources (which is useful because it means findings that come up consistently are more likely to accurately reflect reality). 

And I think surveys could be made, promoted, analysed, and reflected on: 

  • In 2 hours if one really wants to go fast
  • In fewer than 10 hours in most cases
    • Exceptions would be cases where one constructs a particularly large survey, gets text responses from a particularly large sample, or wants to reflect particularly rigorously/extensively on the results. E.g., I expect the process for 80,000 Hours’ impact survey this year will end up having taken substantially more than 10 hours.

So it seems to me that the expected value of a research org running such a survey will tend to offset the costs, given that it:

  • will probably usually provide at least slightly useful info
  • will probably have a nontrivial chance of providing very useful info
  • will probably not take much time

I’m unsure if this is true for individual researchers, as they’ll tend to have less output and write on a smaller set of topics. But I do think it was worthwhile for me to run my survey. (Though note that I’ve written unusually many outputs, on many different areas. This is in turn partly because I’ve been writing posts rather than papers.)

How I’ve updated my views based on the survey

I spent ~1 hour 10 minutes creating my survey; writing a post and comment to promote it and explain my rationale behind it; publishing that to the EA Forum and LessWrong; and later also publishing shortform comments promoting the survey. Replicating these steps next year would likely take closer to 20 minutes. 

I spent ~2 hours analysing and reflecting on the results. I expect I could do this about twice as fast next year, though I may also get more responses, which would cause the reflection process to take longer.

I spent around 5 hours writing up my reflections publicly, as well as this post and comment. I think I was inefficient in how I did this. But in any case, other orgs/researchers could skip this step, or do a much smaller version of it. (The main reasons I did this step the way I did were that I’m interested in the question of whether and how others should run similar surveys, and that I think seeing my data, reflections, and thoughts might be useful for others.)

I think I benefitted noticeably, but not incredibly much, from the survey data. For details, see my reflections