Post results

Long-Term Future Fund: April 2019 grant recommendations by Habryka (habryka) · 2019-04-23T07:00:00.000Z
*   The first way HPMOR teaches science is that the reader is given many examples of the inside of someone’s mind when they are thinking with the goal of actually understanding the world, reasoning with the scientific and quantitative understanding humanity has developed. HPMOR is a fictional work containing a highly detailed world, characters whose experiences a reader empathises with, and storylines that evoke emotional responses. The characters in HPMOR demonstrate the core skills of quantitative, scientific reasoning: forming a hypothesis, making a prediction, throwing out the hypothesis when the prediction does not match reality, and otherwise updating probabilistically when they don’t yet have decisive evidence.
*   The second way HPMOR teaches science is that key scientific results and mechanisms are woven into the narrative of the book. Studies in the heuristics and biases literature, genetic selection, programming loops, Bayesian reasoning, and more are all explained in an unusually natural manner. They aren’t just added on top of the narrative in order for there to be science in the book; instead, the story’s universe is in fact constrained by these theories in such a way that they are naturally brought up by characters attempting to figure out what they should do.
*   This contributes to the third way HPMOR helps teach scientific thinking: HPMOR is specifically designed so that its central mysteries can be solved before the end of the book, and many readers have used the thinking tools taught in the book to do just that. One of the key bottlenecks in individuals’ ability to affect the long-term future is the ability to deal with the universe as though it is understandable in principle, and HPMOR creates a universe where this is so and includes characters doing their best to understand it. This sort of understanding is necessary for being able to take actions that will have large, intended effects on important and difficult problems 10^n years down the line.
Impact purchase first round results by Katja_Grace (katja_grace) · 2015-04-10T03:43:37.056Z
##### Oliver Habryka's organization of HPMOR wrap parties
Long-Term Future Fund: August 2019 grant recommendations by Habryka (habryka) · 2019-10-03T18:46:40.813Z
*   Harry Potter and the Methods of Rationality (HPMOR) was instrumental in the growth and development of both the EA and Rationality communities. It is very likely the single most important recruitment mechanism for productive AI alignment researchers, and has also drawn many other people to work on the broader aims of the EA and Rationality communities.
*   Fiction was a core part of the strategy of the neoliberal movement; fiction writers were among the groups referred to by Hayek as “secondhand dealers in ideas.” Ayn Rand is an example of someone whose fiction played a large role both in the rise of neoliberalism and in its eventual spread.
*   Almost every major religion, culture and nation-state is built on shared myths and stories, usually fictional (though the stories are often held to be true by the groups in question, making this data point a bit more confusing).
*   Francis Bacon’s (unfinished) utopian novel “The New Atlantis” is often cited as the primary inspiration for the founding of the Royal Society, which may have been the single institution with the greatest influence on the progress of the scientific revolution.
A Guide to Early Stage EA Group-Building at Liberal Arts Colleges by vaidehi_agarwalla · 2019-07-02T12:53:23.752Z
*   If possible, find EA-adjacent people (e.g. LessWrongers, HPMOR readers, people with some exposure to 80K/Doing Good Better, people interested in AI Safety) to engage with EA. This can be done through activities like checking out local Less Wrong meetups, starting an HPMOR reading group, or informally through word of mouth.
Direct Funding Between EAs - Moral Economics by Diego_Caleiro (diego_caleiro) · 2015-07-28T01:07:53.100Z
Direct Requests for Events: Some events like the HPMOR parties were arranged directly by asking donors to finance the event.
Stories and altruism by (robm_73-hotmail-com) · 2019-05-20T08:37:38.528Z
[Harry Potter and the Methods of Rationality – Eliezer Yudkowsky](
Why do social movements fail: Two concrete examples. by NunoSempere (nunosempere) · 2019-10-04T19:56:02.028Z
Anyways, there doesn't seem to be that clear a connection between their fiction and their actual work, unlike in Ayn Rand's Atlas Shrugged, or in Yudkowsky's HPMOR. Interestingly enough, the EA movement doesn't yet have such fiction, that I know of. 
Modelers and Indexers by Denis Drescher (telofy) · 2020-05-12T12:01:14.768Z
I found a more complex and noncooperative demonstration of this reasoning problem in the scene in Dumbledore’s office in [chapter 18 of Harry Potter and the Methods of Rationality]( (but this chapter does not make sense in isolation) and later in the book in the complexities of the three-army game. It’s difficult to imagine that people have been in all possible permutations of these situations and so have meaningful archetypes stored.
A Case Study in Newtonian Ethics--Kindly Advise by Lumpyproletariat (lumpyproletariat) · 2020-12-05T07:40:18.893Z
I don’t have any coins on my person, I’m well aware. I don’t carry any coins. What I do carry is thirty-ish dollars in small bills in my wallet, because Harry from HPMoR said in one of the early chapters that money was something you might need a lot of in a hurry, and I thought the odds of someone robbing me were low enough that the expected utility of having cash on hand outweighed the odds of losing thirty dollars in one go. The man asks me if I can *look* for coins, so I get my wallet out of my pocket.
EA Hotel with free accommodation and board for two years by Greg_Colbourn (greg_colbourn) · 2018-06-04T18:09:09.845Z
What about the name and branding of the hotel? Something straightforward like “The EA Hotel”, or “The EA Hotel: Blackpool”? Or maybe “The Bentham Hotel”? Or something with more in-group appeal? “The Phoenix’ Nest” (H/T Ryan Christopher Augustine Thomas) has associations with incubation, [HPMoR](, altruism, immortality, and also the English pub aesthetic, and the trope of adventurers meeting at an inn. However, explicitly associating it with EA might not be ideal when factoring in reputational concerns, given that the initial funding comes from risky crypto investments and there is a possibility of a failure to deliver in terms of impact. A recent [poll]( on the Facebook group for this project now has “The EA Hotel: Blackpool” in second place, with the “Athena” Hotel (what it's currently called) in the lead. This suggests a community preference for straightforward naming, and some caution around attaching the moniker “EA” to projects.
Why earn to give? (transcript) by TopherHallquist (topherhallquist) · 2014-09-19T20:50:56.620Z
There's still a shortage of people. It hasn't been like all the smart kids have flooded into it the way that's happened with medicine and law. It's also something that's easy to switch into once you're already out of school. That's what I did. I had no idea this was even possible two years ago. Then I heard from people like Eliezer Yudkowsky, who did a [PSA about this]( in HPMOR.
Impacts of rational fiction? by vn256 · 2020-06-24T16:25:20.671Z
Hi everyone! I've been reading rational fiction for a while, and it was an important part of how I found the EA community. Currently I'm working on a podcast about how rational fiction and EA interact, and came across several grants and writeups about the effects and processes that rational fiction entails (see [here]( and [here]( It was also great to see the discussions about connecting EA and art through the EAGxVirtual Slack and Unconference these past weekends. I am wondering what experiences with rational fiction that people on this forum have (creating or discussing or reading), and whether people would be willing to share their stories in an audio format. In particular, what do people think about the following:
Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift by Darius_Meissner (darius_meissner) · 2018-05-08T09:50:14.302Z
> “And Harry remembered what Professor Quirrell had said beneath the starlight: Sometimes, when this flawed world seems unusually hateful, I wonder whether there might be some other place, far away, where I should have been…  
> And Harry couldn’t understand Professor Quirrell’s words, it might have been an alien that had spoken, (...) something built along such different lines from Harry that his brain couldn’t be forced to operate in that mode. You couldn’t leave your home planet while it still contained a place like Azkaban. You had to stay and fight.”  
> – [Harry Potter and the Methods of Rationality](  
Against Modest Epistemology by EliezerYudkowsky (eliezeryudkowsky) · 2017-11-14T21:26:48.198Z
1.  See Cowen and Hanson, “[Are Disagreements Honest?](” [↩](#footnote-1-return)
2.  This doesn’t mean the net estimate of who’s wrong comes out 50-50. It means that if you rationalized last Tuesday then you expect yourself to rationalize this Tuesday, if you would expect the same thing of someone else after seeing the same evidence. [↩](#footnote-2-return)
3.  And then the recursion stops here, first because we already went in a loop, and second because in practice nothing novel happens after the third level of any infinite recursion. [↩](#footnote-3-return)
4.  Chapter 22 of my _Harry Potter_ fanfiction, _[Harry Potter and the Methods of Rationality](, was written after I learned this lesson. [↩](#footnote-4-return)
Some cruxes on impactful alternatives to AI policy work by richard_ngo · 2018-11-22T13:43:40.684Z
*   People can affect incredibly large numbers of other people worldwide. The Internet is an example of a revolutionary development which allows this to happen very quickly.
*   Startups are becoming unicorns unprecedentedly quickly, and their valuations are very heavily skewed.
*   The impact of global health interventions is heavy-tail distributed. So is funding raised by Effective Altruism: two donors have contributed more money than everyone else combined.
*   Google and Wikipedia qualitatively changed how people access knowledge; people don't need to argue about verifiable facts any more.
*   Facebook qualitatively changed how people interact with each other (e.g. FB events is a crucial tool for most local EA groups), and can swing elections.
*   It's not just that we got more extreme versions of the same things, but rather that we can get unforeseen types of outcomes.
*   The books _HPMOR_ and _Superintelligence_ both led to mass changes in plans towards more effective ends via the efforts of individuals and small groups.
How to increase your odds of starting a career in charity entrepreneurship by katherinesavoie · 2019-12-03T17:40:22.411Z
*   [The Life You Can Save](
*   [Doing Good Better](
*   [Poor Economics](
*   [Animal Liberation](
*   [Failing in the Field](
*   [Future Babble](
*   [Black Swan](
*   [Harry Potter and the Methods of Rationality](
*   [How to Measure Anything](
*   [Grit](
April Fool's Day Is Very Serious Business by John_Maxwell (john_maxwell) · 2020-03-13T09:16:37.023Z
Twenty Year Economic Impacts of Deworming by SamiM (samim) · 2020-08-22T00:40:00.135Z
EA-aligned podcast with Spencer Greenberg by Garrison (garrison) · 2019-07-23T17:20:18.556Z
"Music we lack the ears to hear" by Louis_Dixon (bdixon) · 2020-04-19T14:23:07.346Z
Blood Donation: (Generally) Not That Effective on the Margin by Grue_Slinky (grue_slinky) · 2017-08-05T03:56:12.114Z
Results of the Effective Altruism Outreach Survey by Denis Drescher (telofy) · 2015-07-26T11:41:48.500Z
Would a reduction in the number of owned cats outdoors in Canada and the US increase animal welfare? by kcudding · 2019-10-25T19:14:20.996Z
Wireheading as a Possible Contributor to Civilizational Decline by avturchin · 2018-11-12T19:48:45.759Z
Book recommendation: Loonshots by nonzerosum · 2019-05-03T18:30:12.254Z
Debate and Effective Altruism: Friends or Foes? by TenaThau (tenathau) · 2018-11-10T18:33:02.738Z
Maximizing long-term impact by Squark (squark) · 2015-03-03T19:50:01.524Z
How can EA local groups reduce likelihood of our members getting COVID-19 or other infectious diseases? by Linch (linch) · 2020-02-26T16:16:49.234Z
The economy of weirdness by Katja_Grace (katja_grace) · 2015-03-09T06:00:52.754Z
Russian x-risks newsletter, fall 2019 by avturchin · 2019-12-03T17:01:23.705Z

Comment results

comment by aarongertler on Which piece got you more involved in EA? · 2018-09-19T10:37:10.711Z
HPMOR and [Privileging the Question]( got me into Less Wrong, and started me thinking about the idea that the problems I'd been hearing about weren't necessarily the problems that would be best to work on.


From there, [Money: The Unit of Caring]( and [Efficient Charity: Do Unto Others]( helped me get interested in GiveWell.


I can't think of a particular GiveWell article that pushed me further toward EA, though [Excited Altruism]( helped me frame the way I was feeling about all of the ideas. Mostly, as I read their charity evaluations (and their past history of seeing certain charities, like VillageReach, underperform), I realized that glib assertions about impact were often wrong, and that deciding this sort of thing *correctly* -- in the absence of a functioning market -- was going to be difficult and require that I rely on outside experts to some extent.


The last big step to get me fully enmeshed in the community was starting a student group. The articles with the most influence on that decision were [Ben Kuhn's reflections on starting the Harvard EA group](
comment by iarwain on I am Nate Soares, AMA! · 2015-06-11T14:00:12.975Z
I know that in the past LessWrong, HPMOR, and similar community-oriented publications have been a significant source of recruitment for areas that MIRI is interested in, such as rationality, EA, awareness of the AI problem, and actual research associates (including yourself, I think). What, if anything, are you planning to do to further support community engagement of this sort? Specifically, as a LW member I'm interested to know if you have any plans to help LW in some way.
comment by igor-terzic on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T21:16:27.317Z
I'd like to challenge the downside estimate re: HPMoR distribution funding.

> So I felt comfortable recommending this grant, especially given its relatively limited downside

I think that funding this project comes with potentially significant PR and reputational risk, especially considering the goals for the fund. It seems like it might be a much better fit for the Meta fund, rather than for the fund that aims to: "support organizations that work on improving long-term outcomes for humanity".
comment by williamkiely on New edition of "Rationality: From AI to Zombies" · 2018-12-17T05:49:52.374Z
I know someone who I think would enjoy HPMOR but refuses to read any book-length text in anything but paper book format. Other than going through the effort of printing out a PDF myself, does anyone know of any way I can get a hard copy? I'd be willing to make a counter-factual donation to MIRI or wherever for the trouble if that would help.
comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T05:36:44.176Z
It's a pretty large number of books, from the application:

> Giving HPMoRs out would allow EA or Rationalist communities to establish initial contact with about 650 gifted students (~200 for EGMO and ~450 for IMO)
comment by larks on Why I think the EA Community should write more fiction · 2020-11-05T20:27:04.955Z
Obligatory link to [Harry Potter and the Methods of Rationality](, both a great piece of literature on its own merits and also one of the leading gateways to the LW/EA community.
comment by reallyeli on Please use art to convey EA! · 2019-05-25T22:57:00.188Z
Have you heard of [Harry Potter and the Methods of Rationality]( and/or []( ? I think they serve some of this role for the community already.

It's interesting they are both long-form web fiction; we don't have EA tv shows or rock bands that I know of.
comment by morganlawless on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T20:36:18.083Z
Mr. Habryka,

I do not believe the $28,000 grant to buy copies of _HPMOR_ meets the evidential standard demanded by effective altruism.  “Effective altruism is about answering one simple question: how can we use our resources to help others the most? Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on.” With all due respect, it seems to me that this grant feels right but lacks evidence and careful analysis.

The Effective Altruism Funds are "for maximizing the effectiveness of your donations" according to the homepage.  This grant's claim that buying copies of _HPMOR_ is among the most effective ways to donate $28,000 by way of improving the long-term future rightly demands a high standard of evidence.

You make two principal arguments in justifying the grant.  First, the books will encourage the Math Olympiad winners to join the EA community.  Second, the books will teach the Math Olympiad winners important reasoning skills.

If the goal is to encourage Math Olympiad winners to join the Effective Altruism community, why are they being given a book that has little explicitly to do with Effective Altruism?  _The Life You Can Save_, _Doing Good Better_, and _80,000 Hours_ are three books much more relevant to Effective Altruism than _Harry Potter and the Methods of Rationality_.  Furthermore, they are much cheaper than the $43 per copy of _HPMOR_.  Even if one is to make the argument that _HPMOR_ is more effective at encouraging Effective Altruism — which I doubt and is substantiated nowhere — one also has to go further and provide evidence that the difference in cost of each copy of _HPMOR_ relative to any of the other books I mentioned is justified.  It is quite possible that sending the Math Olympiad winners a link to Peter Singer’s TED Talk, “The why and how of effective altruism”, is more effective than _HPMOR_ in encouraging effective altruism.  It is also free!

If the goal is to teach Math Olympi
comment by maswiebe on Why to Optimize Earth? (post 1/3) · 2017-10-22T17:16:14.758Z
It's obviously the case that "do the most good" is equivalent to "optimize the Earth". HPMOR readers will remember: "World domination is such an ugly phrase. I prefer to call it world optimisation."

But given that they're equivalent, I don't see that changing the label offers any benefits. For example, the theoretical framework linked to "do the most good" already gives us a way to think about how to choose causes while taking into account inter-cause spillovers (corresponding to 1(iv)).
comment by ben-pace on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-21T21:02:42.605Z
A high quality podcast has been made (for free, by the excellent fanbase). It’s at
comment by cole_haus on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T01:55:37.185Z
I am not OP but as someone who also has (minor) concerns under this heading:

*   Some people judge HPMoR to be of little artistic merit/low aesthetic quality
*   Some people find the subcultural affiliations of HPMoR off-putting (fanfiction in general, copious references to other arguably low-status fandoms)

If the recipients have negative impressions of HPMoR for reasons like the above, that could result in (unnecessarily) negative impressions of rationality/EA.

Clearly, there are also many people who like HPMoR and don't have the above concerns. The key question is probably what fraction of recipients will have positive, neutral, and negative reactions.
comment by casebash on Rationality as an EA Cause Area · 2018-11-14T13:15:05.633Z
"In terms of web-traffic and general intellectual influence among the intellectual elite, the sequences as well as HPMOR and Scott Alexander's writing have attracted significant attention and readership, and mostly continue doing so" - I was talking more about academia than the  blogosphere. Here, only AI safety has had reasonable penetration. EA has had several heavyweights in philosophy, plus FHI for a while and also now GPI.
comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T17:12:51.391Z
The thing that makes me more optimistic here is that the organizers of IMO and EGMO themselves have read HPMoR, and that the books are (as far as I understand it) handed out as part of the prize-package of IMO and EGMO.

I think this makes it more natural to award a large significant-seeming prize, and also comes with a strong encouragement to actually give the books a try.

My model is that awarding only the first book would feel a lot less significant. My current models of human psychology also suggest that while some people will feel intimidated by the length of the book, the combined effect of being given a much smaller-seeming gift, plus the inconvenience of having to send an email, fill out a form, or go to a website to continue reading, is larger than the effect of the size of the book being overwhelming.

The other thing that having full physical copies enables is book-lending. I printed a full copy of HPMoR a few years ago and have lent it out to at least 5 people, maybe one of whom would have read the book if I had just sent them a link or lent them the first few chapters (I have given out the small booklets and generally had less success with that than with lending parts of my whole printed book series).

However, I am not super confident of this, and the tradeoff strikes me as relatively close. Yesterday I also had a longer conversation about this on the EA-Corner Discord; after chatting with me for a while, a lot of people seemed to think that giving out the whole book was a better idea, but it did take a while, which is some evidence of inferential distance.
comment by wei_dai on Which five books would you recommend to an 18 year old? · 2017-09-07T08:10:51.575Z
This is a bit tangential, but do you know if anyone has done an assessment of the impact of HPMoR? Cousin_it (Vladimir Slepnev) [recently wrote](

> The question then becomes, how do we set up a status economy that will encourage research? Peer review is one way, because publications and citations are a status badge desired by many people. Participating in a forum like LW when it's "hot" and frequented by high status folks is another way, but unfortunately we don't have that anymore. From that perspective it's easy to see why the massively popular HPMOR didn't attract many new researchers to AI risk, but attracted people to HPMOR speculation and rational fic writing. People do follow their interests sometimes, but mostly they try to find venues to show off.

Taking this one step further, it seems to me that HPMoR may have done harm by directing people's attentions (including Eliezer's own) away from doing the hard work of making philosophical and practical progress in AI alignment and rationality, towards discussion/speculation of the book and rational fic writing, thereby contributing to the decline of LW. Of course it also helped bring new people into the rationalist/EA communities. What would be a fair assessment of its net impact?
comment by liam_donovan on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-14T17:10:51.984Z
Maybe the most successful recruitment books directly target people 1-2 stages away in the recruitment funnel? In the case of HPMOR/Crystal Society, that would be quantitatively minded people who enjoy LW-style rationality rather than those who are already interested in AI alignment specifically.
comment by ryancarey on Which five books would you recommend to an 18 year old? · 2017-09-05T23:14:20.559Z
But honorable mentions for Superintelligence, the Oxford Handbook of Science Writing, all of Dennett's other books, Oliver Sacks, HPMOR, Wolfram MathWorld, Wikipedia, ...
comment by paul_christiano on Impact purchase round 3 · 2015-06-15T21:02:42.443Z
Submission: Oliver Habryka's organization of wrap parties for the conclusion of Harry Potter and the Methods of Rationality, summarized [here](
comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T02:50:36.200Z
Hmm, so my model is that the books are given out without significant EA affiliation, together with a pamphlet for SPARC and ESPR. I also know that HPMoR is already relatively widely known among math olympiad participants. Those together suggest that it's unlikely this would cause much reputational damage to the EA community, given that none of this contains an explicit reference to the EA community (and shouldn't, as I have argued below).

The outcome might be that some people start disliking HPMoR, but that doesn't seem super bad and carries relatively little downside. Maybe some people will start disliking CFAR, though I think CFAR on net benefits a lot more from having additional people who are highly enthusiastic about it than it suffers from people who kind-of dislike it.

I have some vague feeling that there might be some more weird downstream effects of this, but I don't think I have any concrete models of how they might happen, and would be interested in hearing more of people's concerns.
comment by lukeprog on Which five books would you recommend to an 18 year old? · 2017-09-09T22:14:14.859Z
Back in ~2014, I remember doing a survey of top-contributing MIRI donors over the previous 3 years and a substantial fraction (1/4th?) had first encountered MIRI or EA or whatever through HPMoR. Malo might have the actual stats. It might even be in a MIRI blog post footnote somewhere.

But w.r.t. research impact, someone could make a list of the 25 most useful EA researchers, or the 15 most useful "AI safety" researchers, or whatever kind of research you most care about, and find out what fraction of them were introduced to x-risk/EA/rationality/whatever through HPMoR.

I don't have a good sense of what the net impact is.
comment by larks on Impact purchase round 3 · 2015-06-17T01:59:37.647Z
I think this case was probably my biggest disagreement with Paul; I thought this project had quite high value. As it happened I didn't end up purchasing any, presumably because the gap between

*   My evaluation of the HPMOR wrap parties
*   The seller's evaluation of the HPMOR wrap parties

was larger than the gap between

*   My evaluation of the thesis
*   The seller's evaluation of the thesis

but I would like to signal that I am willing in general to bid non-trivial amounts for academic work on X-risk and value drift.

The technique I used to value it was to estimate how long it would take MIRI to produce such a piece, how valuable it was compared to MIRI's typical output, and how much it costs MIRI to hire researchers.

The fact it was a doctoral thesis did cause me to assign a significant discount to my valuation. I'm not really sure how to think about this.
comment by ryancarey on I am Samwise [link] · 2015-01-08T19:06:07.716Z
A hero means roughly what you'd expect - someone who takes personal responsibility for solving world problems. Kind of like an effective altruist. A sidekick doesn't have any specific jargon meaning.

For a bit more flavour, here's a description from hpmor:

> “You could call it heroic responsibility, maybe,” Harry Potter said. “Not like the usual sort. It means that whatever happens, no matter what, it’s always your fault. Even if you tell Professor McGonagall, she’s not responsible for what happens, you are. Following the school rules isn’t an excuse, someone else being in charge isn’t an excuse, even trying your best isn’t an excuse. There just aren’t any excuses, you’ve got to get the job done no matter what.” Harry’s face tightened. “That’s why I say you’re not thinking responsibly, Hermione. Thinking that your job is done when you tell Professor McGonagall—that isn’t heroine thinking. Like Hannah being beat up is okay then, because it isn’t your fault anymore. Being a heroine means your job isn’t finished until you’ve done whatever it takes to protect the other girls, permanently.” In Harry’s voice was a touch of the steel he had acquired since the day Fawkes had been on his shoulder. “You can’t think as if just following the rules means you’ve done your duty.” – HPMOR, chapter 75
comment by bryonymc on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-21T19:19:34.359Z
Food for thought: just in thinking how to maximize the value of experimenting with distribution; an alternative approach would be to print the first book and distribute to the math olympiads then invest the rest of the money into converting HPMOR into a podcast/audiobook that can be shared more widely and outlining a “next steps” resource to guide readers. If distributing the books fails (depending on your definition of distribution being a “success”) you avoid sinking $28k into books sitting on shelves at home and now have a widely available podcast (to access for free or a small donation) that can increase HPMOR’s reach over time. (FYI the funds raised through small donations for access could be used to sponsor future printings for youth competitions). 

A podcast or a revamped online version becomes a renewable resource, whereas once those books are distributed, they (and the money) are gone. For those interested, the model that comes to mind is [HP and the Sacred Text]( Using Harry Potter to convey certain ideas or messages is not uncommon given its global reach. HPST is using it for different reasons obviously but HOW they are distributing the idea might be worth pursuing with HPMOR too. [HP Alliance]( is another group using HP to convey a message (their focus is on political and social activism). HPMOR could have greater value long-term if there were alternative methods for accessing it beyond a 2000 page series.
comment by ben-pace on Which five books would you recommend to an 18 year old? · 2017-09-05T21:19:44.241Z
I don't think the idea Anna suggests is to pick books you think young people should read, but to actually ask the best people what books they read that influenced them a lot.

Things that come to my mind include GEB, HPMOR, The Phantom Tollbooth, Feynman. Also, which surprises me but is empirically true for many people, Sam Harris's "The Moral Landscape" seems to have been the first book a number of top people I know read on their journey to doing useful things.

But either way I'd want more empirical data.
comment by lukeprog on Which five books would you recommend to an 18 year old? · 2017-09-07T04:00:23.177Z
We got close to doing this when I was at MIRI but just didn't have the outreach capacity to do it. The closest we got was to print a bunch of paperback copies of (the first 17 chapters of) just one book, _HPMoR_, and we shipped copies of that to contacts at various universities etc. I think we distributed 1000-2000 copies, not sure if more happened after I left.
comment by riceissa on Which five books would you recommend to an 18 year old? · 2017-09-13T17:39:32.993Z
Re top MIRI donors, there is a [2013 in review post]( that talks about a survey of "(nearly) every donor who gave more than $3,000 in 2013" with four out of approximately 35 coming into contact via HPMoR. (Not to imply that this is the survey mentioned above, as several details differ.)
comment by misha_yagudin on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T16:50:27.120Z
Hi Matthew,

1.  $43/unit is an upper bound. While submitting the application, I was uncertain about the price of on-demand printing. My current best guess is that EGMO book sets will cost $34–40. I expect printing costs for IMO to be lower (economies of scale).

2\. HPMOR is quite long (~2007 pages according to Goodreads). Each EGMO book set consists of 4 hardcover books.

3\. There is an opportunity to trade off money for prestige by printing only the first few chapters.
comment by habryka on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-10T00:08:23.949Z
Sorry for the delay; others have given a lot of good responses in the meantime, but here is my current summary of my thoughts on those concerns:

> 1\. Ideally, yes. If there is a lack of externally transparent evidence, there should be strong reasoning in favor of the grant.

By word count, the HPMOR writeup is (I think) among the three longest writeups that I produced for this round of grants. I think my reasoning is sufficiently strong, though it is obviously difficult for me to comprehensively explain all of my background models and reasoning in a way that allows you to verify it.

The core arguments that I provided in the writeup above seem sufficiently strong to me. They won't necessarily convince a completely independent observer, but for someone with context about community building and general work on the long-term future, I expect them to successfully communicate the actual reasons why I think the grant is a good idea.

I generally think grantmakers should give grants to whatever interventions they think are likely to be most effective, without constraining themselves to evidence that is easily communicable to other people. They should then also invest significant resources into communicating whatever can be communicated about their reasons and intuitions, and actively seek out counterarguments and additional evidence that would change their mind.

> 2\. I think that there is no evidence that using $28k to purchase copies of HPMOR is the most cost-effective way to encourage Math Olympiad participants to work on the long-term future or engage with the existing community. I don't make the claim that it won't be effective _at all_. Simply that there is little reason to believe it will be more effective, either in an absolute sense or in a cost-effectiveness sense, than other resources.

This one has mostly been answered by other people in the thread, but here is my rough summary of my thoughts on this objection:

*   I don't th
comment by jan_kulveit on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T23:16:52.995Z
I don't think anyone should be trying to persuade IMO participants to join the EA community, and I also don't think giving them "much more directly EA content" is a good idea.

I would prefer Math Olympiad winners to think about the long term, think better, and think independently, rather than to "join the EA community". HPMoR seems ok because it is not a book trying to convince you to join a community, but mostly a book about how to think, and a good read.

(If the readers eventually become EAs after reasoning independently, that's likely good; if they, for example, come to the conclusion that there are major flaws in EA and that it's better to engage with the movement critically, that's also good.)
comment by igor-terzic on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-08T21:05:51.656Z
I don't think that 2) really captures the objection as I read it. It seems that, on the margin, there are much more cost-effective ways of engaging Math Olympiad participants, and that the content distributed could be much more directly EA/AI-related, at lower cost than distributing 2,000 pages of hard-copy HPMoR.
comment by ben-pace on I am Nate Soares, AMA! · 2015-06-11T23:35:12.254Z
1) What was the length of time between you reading the sequences and doing research on the value alignment problem?

2) What portion of your time will now be spent on technical research? Also, what is Eliezer Yudkowsky spending most of his work-time on? Is he still writing up introductory stuff like he said in the HPMOR author notes?

3) What are any unstated prerequisites for researching the value alignment problem that aren't in MIRI's research guide? E.g., these could include real analysis or particular types of programming ability.