Should you familiarize yourself with the literature before writing an EA Forum post? 2019-10-06T23:17:09.317Z · score: 31 (14 votes)
[Link] How to contribute to the psychedelics ecosystem 2019-09-28T01:55:14.267Z · score: 10 (6 votes)
How to Make Billions of Dollars Reducing Loneliness 2019-08-24T01:49:45.629Z · score: 26 (17 votes)
How Flying Cars Will Solve Global Poverty 2019-04-01T20:56:47.829Z · score: 21 (12 votes)
Open Thread #43 2018-12-08T05:39:37.672Z · score: 8 (4 votes)
Open Thread #41 2018-09-03T02:21:51.927Z · score: 4 (4 votes)
Five books to make you super effective 2015-04-02T02:31:48.509Z · score: 6 (6 votes)


Comment by john_maxwell_iv on EA Hotel Fundraiser 5: Out of runway! · 2019-11-25T08:42:30.156Z · score: 16 (6 votes) · EA · GW

The reversal test doesn't mean 'if you don't think a charity for X is promising, you should be in favour of more ¬X'. I may not find homeless shelters, education, or climate change charities promising, yet not want to move in the direction of greater homelessness, illiteracy, or pollution.

Suppose you're the newly appointed director of a large charitable foundation which has allocated its charitable giving in a somewhat random way. If you're able to resist status quo bias, then usually, you will not find yourself keeping the amount allocated for a particular cause at exactly the level it was at originally. For example, if the foundation is currently giving to education charities, and you don't think those charities are very effective, then you'll reduce their funding. If you think those charities are very effective, then you'll increase their funding.

Now consider "having EAs live alone in apartments in expensive cities" as a cause area. Currently, the amount we're spending on this area has been set in a somewhat random way. Therefore, if we're able to resist status quo bias, we should probably either be moving it up or moving it down. We could move it up by creating a charity that pays EAs to live alone, or move it down by encouraging EAs to move to the EA Hotel. (Maybe creating a charity that pays EAs to live alone would be impractical or create perverse incentives or something; this is more of an "in principle" intuition pump sort of argument.)

Edit: With regard to the professionalism thing, my personal feelings on this are something like the last paragraph in this comment -- I think it'd be good for some of us to be more professional in certain respects (e.g. I'm supportive of EAs working to gain institutional legitimacy for EA cause areas), but the Hotel culture I observed feels mostly acceptable to me. This is probably due to some mixture of not seeing much interpersonal drama while I was there, and expecting that Hotel residents will continue to be fairly young people who don't occupy positions of power (grad student housing comes to mind). FWIW, my personal experience is that the value of professionalism comes up more often in Blackpool EA conversations than Bay Area EA conversations. With the Bay Area, you may very well be paying more rent for a less professional culture. Just my anecdotal impressions.

Comment by john_maxwell_iv on EA Hotel Fundraiser 5: Out of runway! · 2019-11-25T01:05:25.267Z · score: 14 (8 votes) · EA · GW

I'm not convinced community health issues are uniquely problematic when you have people living together. I feel like one could argue just as easily that conferences are risky for community health. If something awkward happens at EA Global, you'll have an entire year to chew on that before running into the person next year. (Pretty sure that past EA Global conferences have arranged shared housing in e.g. dormitories for participants, by the way.) And there is less shared context at a conference because it happens over a brief period of time. One could also argue that having the community be mostly online runs risks for community health (for obvious reasons), and it's critical for us to spend lots of time in person to build stronger bonds. And one could argue that not having much community at all, neither online nor in person, runs risks for community health due to value drift. Seems like there are risks everywhere.

If people really think there are significant community health risks with EA roommates, then they could start a charity which pays EAs who currently live with EA roommates to live alone. To my knowledge, no one has proposed a charity like that. It doesn't seem like a very promising charity to me. If you agree, then by the reversal test, it follows that as a community we should want to move a bit further in the direction of EAs saving money by living together.

Comment by john_maxwell_iv on Institutions for Future Generations · 2019-11-22T20:39:10.016Z · score: 4 (2 votes) · EA · GW

Interesting point re: savings rate. It wouldn't surprise me if economists have done research into what factors cause an increase in the savings rate. (If no research has been done so far, it seems like such research would fill a valuable gap in the literature.) Anyway, it seems plausible to me that some things which cause an increase in the savings rate also increase longtermism more generally. (This is another topic which we could gather information about by checking to see if people who save a lot of money are more longtermist generally.) My personal guess would be that economic and political stability predicts savings rate better than equality does. I suspect drastic efforts to mitigate present-day inequality would probably decrease the savings rate if anything. What's the point in saving money if the government might randomly take it at some point in the future? [Edit: If you replace "reducing inequality" with "ensuring more people have the lower levels of Maslow's hierarchy met" then I'd be more convinced.]

More broadly, I'd be interested to see people tackling longtermism as a psychological rather than a political project--what are the correlates of longtermist outlook that could feasibly be affected through interventions?

Comment by john_maxwell_iv on Institutions for Future Generations · 2019-11-22T20:24:41.498Z · score: 3 (2 votes) · EA · GW

Can I sell my security? Why not just sell right before doing whatever it is I want to do that is going to screw the future over?

Comment by john_maxwell_iv on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-22T05:17:26.451Z · score: 12 (4 votes) · EA · GW

Thanks for the aggregate position summary! I'd be interested to hear more about the motivation behind that wish, as it seems likely to me that doing shallow investigations of very speculative causes would actually be the comparative advantage of people who aren't employed at existing EA organizations. I'm especially curious given the high probability people assigned to the existence of a Cause X that should be getting lots of resources. It seems like having people who don't work at existing EA organizations (and are thus relatively unaffected by existing blind spots) do shallow investigations of very speculative causes would be just the thing for discovering Cause X.

For a while now I've been thinking that the crowdsourcing of alternate perspectives ("breadth-first" rather than "depth-first" exploration of idea space) is one of the internet's greatest strengths. (I also suspect "breadth-first" idea exploration is underrated in general.) On the flip side, I'd say one of the internet's greatest weaknesses is the ease with which disagreements become unnecessarily dramatic. So I think if someone were to do a meta-analysis of recent literature on, say, whether remittances are actually good for developing economies in the long run (critiquing GiveDirectly -- btw, I couldn't find any reference to academic research on the impact of remittances on GiveWell's current GiveDirectly profile -- maybe they just didn't think to look it up -- case study in the value of an alternate perspective?), or whether usage of malaria bed nets for fishing is increasing or not (critiquing AMF), there's a sense in which we'd be playing against the strengths of the medium. Anyway, if organizations wanted critical feedback on their work, they could easily request that critical feedback publicly (solicited critical feedback is less likely to cause drama / bad feelings than unsolicited critical feedback), or even offer cash prizes for the best critiques, and I see few cases of organizations doing that.

Maybe part of what's going on is that shallow investigations of very speculative causes only rarely amount to something? See this previous comment of mine for more discussion.

Comment by john_maxwell_iv on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-19T22:14:44.584Z · score: 18 (7 votes) · EA · GW

Not Buck, but one possibility is that people pursuing different high-level agendas have different intuitions about what's valuable, and those kind of disagreements are relatively difficult to resolve, and the best way to resolve them is to gather more "object-level" data.

Maybe people have already spent a fair amount of time having in-person discussions trying to resolve their disagreements, and haven't made progress, and this discourages them from writing up their thoughts because they think it won't be a good use of time. However, this line of reasoning might be mistaken -- it seems plausible to me that people entering the field of AI safety are relatively impartial judges of which intuitions do/don't seem valid, and the question of where new people in the field of AI safety should focus is an important one, and having more public disagreement would improve human capital allocation.

Comment by john_maxwell_iv on EA syllabi and teaching materials · 2019-11-19T03:36:09.018Z · score: 2 (1 votes) · EA · GW

Here's another one:

Comment by john_maxwell_iv on What areas of maths are useful across disciplines? · 2019-11-19T03:28:18.032Z · score: 2 (1 votes) · EA · GW

multiobjective optimization theory

Can you say something about why you feel this is especially useful?

Comment by john_maxwell_iv on How to find EA documents on a particular topic · 2019-11-19T02:56:43.091Z · score: 9 (7 votes) · EA · GW

I assembled a huge list of domains like this and created a custom search engine using this tool from Google. Unfortunately despite it being Google, the search results are really terrible, so I never posted it. (Example: a search for "capacity-building" returns 5 results, none of which are this page. I know it's picking up because when I search for "moral uncertainty" the #2 result is from BTW, I included quite a number of domains in the search engine, not all the results are necessarily EA-related.) is a nice little tool which makes use of the site:A OR site:B mechanism, but unfortunately I believe Google caps the number of distinct domains you can search using that trick? But maybe we could use multiple searchstacks for different EA subtopics. I think if there are search companies that actually do a good job of allowing you to create a custom search engine, that would be the ideal solution, even if it requires paying a monthly fee. If someone else wants to take initiative on this, I'd love to collaborate.
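
The site:A OR site:B mechanism mentioned above is easy to script. Here's a minimal sketch of building such a query (the domain list is purely illustrative, and note Google's cap on distinct domains per query is exactly the limitation described above):

```python
# Hypothetical list of EA-related domains (illustrative only).
domains = [
    "forum.effectivealtruism.org",
    "80000hours.org",
    "givewell.org",
]

def site_restricted_query(terms: str, sites: list) -> str:
    """Build a Google query restricted to the given domains via site: operators."""
    site_clause = " OR ".join("site:" + d for d in sites)
    return f"{terms} ({site_clause})"

print(site_restricted_query("moral uncertainty", domains))
# moral uncertainty (site:forum.effectivealtruism.org OR site:80000hours.org OR site:givewell.org)
```

In practice you'd want to split a long domain list into several queries (or several "searchstacks") to stay under the per-query domain cap.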

It'd be especially cool if a search engine could search Facebook group archives, since there's so much EA discussion in those.

Comment by john_maxwell_iv on Applying EA to climate change · 2019-11-18T23:14:14.205Z · score: 2 (1 votes) · EA · GW

Surely some food emits much more carbon than other food. Maybe we could just tax food based on how much carbon it emits? Then people won't want to throw it away because they don't want to waste their money. (And they'll also substitute low-emission food for high-emission food.)

Comment by john_maxwell_iv on Applying EA to climate change · 2019-11-18T07:06:49.871Z · score: 7 (2 votes) · EA · GW

35% of food is thrown away in high-income economies.

That number seems pretty high. I wonder where most of the waste happens? Somewhat contrived scenario here, but suppose the drug store buys a new food product. Customers aren't buying it, so the store throws it away. But then due to this awareness campaign, next time the store keeps it on the shelf--which means it doesn't have room for something customers do want to buy, so the customers drive to a different store, cancelling out the alleged food waste benefit. Again, contrived; I just feel like we should know why the waste is happening before working to stop it. There's a clear financial incentive not to waste food. Maybe it's mostly food with a short shelf life, like fresh vegetables, that people intend to eat but never do?

Instead of a public campaign against food waste, maybe a public campaign that shows the decarbonization benefit of everyday lifestyle changes. Which is better from an individual perspective: stop driving and take the bus to work, or cut food waste from 35% to 0%?

Comment by john_maxwell_iv on EA Leaders Forum: Survey on EA priorities (data and analysis) · 2019-11-12T07:57:59.332Z · score: 33 (17 votes) · EA · GW

Thanks for this post!

One thing I noticed is that EA leaders seem to be concerned with both excessive intellectual weirdness and also excessive intellectual centralization. Was this a matter of leaders disagreeing with one another, or did some leaders express both positions?

There isn't necessarily a contradiction in expressing both positions. For example, perhaps there's an intellectual center and it's too weird. (Though, if the weirdness comes in the form of "People saying crazy stuff online", this explanation seems less likely.) You could also argue that we are open to weird ideas, just not the right weird ideas.

But I think it could be interesting to try & make this tradeoff more explicit in future surveys. It seems plausible that the de facto result of announcing survey results such as these is to move us in some direction along a single coarse intellectual centralization/decentralization dimension. (As I said, there might be a way to square this circle, but if so I think you want a longer post explaining how, not a survey like this.)

Another thought is that EA leaders are going to suggest nudges based on the position they perceive us to be occupying along a particular dimension--but perceptions may differ. Maybe one leader says "we need more talk and less action", and another leader says "we need less talk and more action", but they both agree on the ideal talk/action balance, they just disagree about the current balance (because they've made different observations about the current balance).

One way to address this problem in general for some dimension X is to have a rubric with 5 written descriptions of levels of X the community could aim for, and ask each leader to select the level of X that seems optimal to them. Another advantage of this scheme: if there's a fair amount of community variation in levels of X, the community could be below the optimal level of X on average, yet if leaders publicly announce that levels of X should move up (without specifying a target level), people who are already above the ideal level of X might move even further above it. Asking leaders to pick a target level avoids this.

Comment by john_maxwell_iv on Assumptions about the far future and cause priority · 2019-11-12T06:06:28.884Z · score: 3 (2 votes) · EA · GW

It seems extremely unlikely to me that we will come remotely close to discovering the utility-maximizing pattern of matter that can be formed even just here on Earth. There are about 10^50 atoms on Earth. In how many different ways could these atoms be organized in space? To keep things simple, suppose that we just want to delete some of them to form a more harmonious pattern, and otherwise do not move anything. Then there are already 2^(10^50) possible patterns for us to explore.

One direction you could take this: It's probably not actually necessary for us to explore 2^(10^50) patterns in a brute-force manner. For example, once I've tried brussels sprouts once, I can be reasonably confident that I still won't like them if you move a few atoms over microscopically. A Friendly AI programmed to maximize a human utility function it has uncertainty about might offer incentives for humans to try new matter configurations that it believes offer high value of information. For example, before trying a dance performance which lasts millions of years, it might first run an experimental dance performance which lasts only one year and see how humans like it. I suspect a superintelligent Friendly AI would hit diminishing returns on experiments of this type within the first thousand years.

Comment by john_maxwell_iv on EA Hotel Fundraiser 5: Out of runway! · 2019-11-01T04:13:48.518Z · score: 36 (14 votes) · EA · GW

The fact that there are only 18 total donations totaling less than $10k is concerning

If you are well-funded, they'll say: "You don't need my money. You're already well-funded." If you aren't well-funded, they'll say: "You aren't well-funded. That seems concerning."

Comment by john_maxwell_iv on EA Hotel Fundraiser 5: Out of runway! · 2019-11-01T04:12:49.077Z · score: 27 (11 votes) · EA · GW

This seems like a disagreement that goes deeper than the EA Hotel. If your focus is on rigorous, measurable, proven causes, great. I'm very supportive if you want to donate to causes like that. However, there are those of us who are interested in more speculative causes which are less measurable, but which we think could have higher expected value, or at the very least might help the EA movement gather valuable experimental data. That's why GiveWell spun out GiveWell Labs, which eventually became the Open Philanthropy Project. It's why CEA started the EA Funds to fund more speculative early-stage EA projects. Lots of EA projects, from cultured meat to x-risk reduction to global priorities research, are speculative and hard to rigorously measure or forecast. As a quick concrete example, the Open Philanthropy Project gave $30 million to OpenAI, much more money than the EA Hotel has received, with much less public justification than has been put forth for the EA Hotel, and without much in the way of numerical measurements or forecasts.

If you really want to discuss this topic, I suggest you create a separate post laying out your position - but be warned, this seems to be a fairly deep philosophical divide within the movement which has been relatively hard to bridge. I think you'll want to spend a lot of time reading EA archive posts before tackling this particular topic. The fact that you seem to believe EAs think contributing to the sort of relatively undirected, "unsafe" AI research that DeepMind is famous for should be a major priority suggests to me that there's a fair amount you don't know about positions & thinking that are common to the EA movement.

Here are some misc links which could be relevant to the topic of measurability:

And here's a list of lists of EA resources more generally speaking:

Comment by john_maxwell_iv on Notes on 'Atomic Obsession' (2009) · 2019-10-27T20:56:37.811Z · score: 2 (1 votes) · EA · GW

Seems like some form of Pascal's Wager is valid in this case -- it's hard to know for sure what the impact of nukes will be, especially without the benefit of hindsight, so it's better to err on the side of caution.

Comment by john_maxwell_iv on Resource Generation: Inheriting-to-give, for systemic change · 2019-10-15T15:50:28.931Z · score: 13 (5 votes) · EA · GW

Do they have thoughts on GiveDirectly? Looks like although they mention the Global South, they're asking members to donate to first world political advocacy groups. Was Giridharadas one of the critics who says rich people have too much influence on US public policy?

BTW this group recently got a grant from the EA Meta fund:

Comment by john_maxwell_iv on [Link] "How feasible is long-range forecasting?" (Open Phil) · 2019-10-12T06:01:14.743Z · score: 14 (5 votes) · EA · GW

I'd be interested to know how people think long-range forecasting is likely to differ from short-range forecasting, and to what degree we can apply findings from short-range forecasting to long-range forecasting. Could it be possible to, for example, ask forecasters to forecast at a variety of short-range timescales, fit a curve to their accuracy as a function of time (or otherwise try to mathematically model the "half-life" of the knowledge powering the forecast--I don't know what methodologies could be useful here, maybe survival analysis?) and extrapolate this model to long-range timescales?
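
To make the curve-fitting idea concrete, here's a rough sketch. Everything here is invented for illustration: the exponential-decay model of forecast skill and the synthetic accuracy numbers are my assumptions, not anything from actual forecasting data.

```python
import numpy as np

# Hypothetical: a forecaster's accuracy measured at several short horizons
# (in months), expressed relative to a 0.5 chance baseline.
horizons = np.array([1, 3, 6, 12, 24])           # months ahead
accuracy = np.array([0.90, 0.84, 0.78, 0.70, 0.62])
edge = accuracy - 0.5                             # skill above chance

# Model the edge as exponential decay: edge(t) = a * exp(-t / tau).
# A linear fit to log(edge) recovers the decay timescale, i.e. the
# "half-life" of the knowledge powering the forecast.
slope, intercept = np.polyfit(horizons, np.log(edge), 1)
tau = -1.0 / slope                                # decay timescale, months
half_life = tau * np.log(2)

# Extrapolate (cautiously!) to a long-range horizon, e.g. 10 years.
edge_10y = np.exp(intercept) * np.exp(-120.0 / tau)

print(f"half-life of forecast skill: {half_life:.1f} months")
print(f"predicted edge at 10 years:  {edge_10y:.4f}")
```

The big caveat, of course, is whether the decay form fitted at short ranges remains valid at long ranges--which is precisely the question a survival-analysis style treatment would need to address.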

I'm also curious why there isn't more interest in presenting people with historical scenarios and asking them to forecast what will happen next in the historical scenario. Obviously if they already know about that period of history this won't work, but that seems possible to overcome.

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-12T05:43:08.805Z · score: 2 (1 votes) · EA · GW

Another thought is that even if the original post had a weak epistemic status, if the original post becomes popular and gets the chance to receive widespread scrutiny, which it survives, it could be reasonable to believe its "de facto" epistemic status is higher than what's posted at the top. But yes, I guess in that case there's the risk that none of the people who scrutinized it had familiarity with relevant literature that contradicted the post.

Maybe the solution is to hire someone to do lit reviews to carefully examine posts with epistemic status disclaimers that nonetheless became popular and seem decision relevant.

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-09T20:28:31.097Z · score: 4 (2 votes) · EA · GW

Interesting thought, upvoted!

Is there particular evidence for source amnesia you have in mind? The abstract for the first Wikipedia citation says:

Experiment 2 demonstrated that when normal subjects' level of item recall was equivalent to that of amnesics, they exhibited significantly less source amnesia: Normals rarely failed to recollect that a retrieved item derived from either of the two sources, although they often forgot which of the two experimenters was the correct source. The results are discussed in terms of their implications for theories of normal and abnormal memory.

So I guess the question is whether the epistemic status disclaimer falls into the category of source info that people will remember ("an experimenter told me X") or source info that people often forget ("Experimenter A told me X"). (Or whether it even makes sense to analyze epistemic status in the paradigm of source info at all--for example, including an epistemic status could cause readers to think "OK, these are just ideas to play with, not solid facts" when they read the post, and have the memory encoded that way, even if they aren't able to explicitly recall a post's epistemic status. And this might hold true regardless of how widespread a post is shared. Like, for all we know, certain posts get shared more because people like playing with new ideas more than they like reading established facts, but they're pretty good at knowing that playing with new ideas is what they're doing.)

I think if you fully buy into the source amnesia idea, that could be considered an argument for posting anything to the EA Forum which is above average quality relative to a typical EA information diet for that topic area--if you really believe this source amnesia thing, people end up taking Facebook posts just as seriously as papers they read on Google Scholar.

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-08T01:57:27.352Z · score: 6 (4 votes) · EA · GW

on Google has been working OK for me.

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-08T01:36:59.291Z · score: 6 (3 votes) · EA · GW

Clever :)

However, I'm not sure that post follows its own advice, as it appears to be essentially a collection of anecdotes. And it's possible to marshal anecdotes on both sides, e.g. here is Claude Shannon's take:

...very frequently someone who is quite green to a problem will sometimes come in and look at it and find the solution like that, while you have been laboring for months over it. You’ve got set into some ruts here of mental thinking and someone else comes in and sees it from a fresh viewpoint.

[Edit: I just read that Shannon and Hamming, another person I cited in this thread, apparently shared an office at Bell Labs, so their opinions may not be 100% independent pieces of evidence. They also researched similar topics.]

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-08T01:31:37.344Z · score: 15 (6 votes) · EA · GW

One possible synthesis comes from Turing award winner Richard Hamming's book The Art of Doing Science and Engineering. He's got chapters at the end on Creativity and Experts. The chapters are somewhat rambly and I've quoted passages below. My attempt to summarize Hamming's position: Having a deep intellectual toolkit is valuable, but experts are often overconfident and resistant to new ideas.

Chapter 25: Creativity

...Do not be too hasty [in refining a problem], as you are likely to put the problem in the conventional form and find only the conventional solution...


...Wide acquaintance with various fields of knowledge is thus a help—provided you have the knowledge filed away so it is available when needed, rather than to be found only when led directly to it. This flexible access to pieces of knowledge seems to come from looking at knowledge while you are acquiring it from many different angles, turning over any new idea to see its many sides before filing it away. This implies effort on your part not to take the easy, immediately useful “memorizing the material” path, but prepare your mind for the future.


Over the years of watching and working with John Tukey I found many times he recalled the relevant information and I did not, until he pointed it out to me. Clearly his information retrieval system had many more “hooks” than mine did. At least more useful ones! How could this be? Probably because he was more in the habit than I was of turning over new information again and again so his “hooks” for retrieval were more numerous and significantly better than mine were. Hence wishing I could similarly do what he did, I started to mull over new ideas, trying to make significant “hooks” to relevant information so when later I went fishing for an idea I had a better chance of finding an analogy. I can only advise you to do what I tried to do—when you learn something new think of other applications of it—ones which have not arisen in your past but which might in your future. How easy to say, but how hard to do! Yet, what else can I say about how to organize your mind so useful things will be recalled readily at the right time?


...Without self-confidence you are not likely to create great, new things. There is a thin line between having enough self-confidence and being over-confident. I suppose the difference is whether you succeed or fail; when you win you are strong willed, and when you lose you are stubborn!...

Chapter 26: Experts


In an argument between a specialist and a generalist the expert usually wins by simply: (1) using unintelligible jargon, and (2) citing their specialist results which are often completely irrelevant to the discussion. The expert is, therefore, a potent factor to be reckoned with in our society. Since experts are both necessary, and also at times do great harm in blocking significant progress, they need to be examined closely. All too often the expert misunderstands the problem at hand, but the generalist cannot carry through their side to completion. The person who thinks they understand the problem and does not is usually more of a curse (blockage) than the person who knows they do not understand the problem.


Experts in looking at something new always bring their expertise with them as well as their particular way of looking at things. Whatever does not fit into their frame of reference is dismissed, not seen, or forced to fit into their beliefs. Thus really new ideas seldom arise from the experts in the field. You can not blame them too much since it is more economical to try the old, successful ways before trying to find new ways of looking and thinking.

All things which are proved to be impossible must obviously rest on some assumptions, and when one or more of these assumptions are not true then the impossibility proof fails—but the expert seldom remembers to carefully inspect the assumptions before making their “impossible” statements. There is an old statement which covers this aspect of the expert. It goes as follows: “If an expert says something can be done he is probably correct, but if he says it is impossible then consider getting another opinion.”


...It appears most of the great innovations come from outside the field, and not from the insiders... examples occur in most fields of work, but the text books seldom, if ever, discuss this aspect.

...the expert faces the following dilemma. Outside the field there are a large number of genuine crackpots with their crazy ideas, but among them may also be the crackpot with the new, innovative idea which is going to triumph. What is a rational strategy for the expert to adopt? Most decide they will ignore, as best they can, all crackpots, thus ensuring they will not be part of the new paradigm, if and when it comes.

Those experts who do look for the possible innovative crackpot are likely to spend their lives in the futile pursuit of the elusive, rare crackpot with the right idea, the only idea which really matters in the long run. Obviously the strategy for you to adopt depends on how much you are willing to be merely one of those who served to advance things, vs. the desire to be one of the few who in the long run really matter. I cannot tell you which you should choose that is your choice. But I do say you should be conscious of making the choice as you pursue your career. Do not just drift along; think of what you want to be and how to get there. Do not automatically reject every crazy idea, the moment you hear of it, especially when it comes from outside the official circle of the insiders—it may be the great new approach which will change the paradigm of the field! But also you cannot afford to pursue every “crackpot” idea you hear about. I have been talking about paradigms of Science, but so far as I know the same applies to most fields of human thought, though I have not investigated them closely. And it probably happens for about the same reasons; the insiders are too sure of themselves, have too much invested in the accepted approaches, and are plain mentally lazy. Think of the history of modern technology you know!


...In some respects the expert is the curse of our society with their assurance they know everything, and without the decent humility to consider they might be wrong. Where the question looms so important I suggested to you long ago to use in an argument, “What would you accept as evidence you are wrong?” Ask yourself regularly, “Why do I believe whatever I do”. Especially in the areas where you are so sure you know; the area of the paradigms of your field.

Hamming shares a number of stories from the history of science to support his claims. He also says he has more stories which he didn't include in the chapter, and that he looked for stories which went against his position too.

A couple takeaways:

  • Survivorship bias regarding stories of successful contrarians - most apparent crackpots actually are crackpots.

  • Paradigm shifts - if an apparent crackpot is not actually a crackpot, their idea has the potential to be extremely important. So shutting down all the apparent crackpots could have quite a high cost even if most are full of nonsense. As Jerome Friedman put it regarding the invention of bagging (coincidentally mentioned in the main post):

The first time I saw this-- when would that have been, maybe the mid '90s-- I knew a lot about the bootstrap. Actually, I was a student of Brad Efron, who invented the bootstrap. And Brad and I wrote a book together on the bootstrap in the early '90s. And then when I saw the bag idea from Leo, I thought this looks really crazy. Usually the bootstrap is used to get the idea of standard errors or bias, but Leo wants to use bootstrap to produce a whole bunch of trees and to average them, which sounded really crazy to me. And it was a reminder to me that you see an idea that looks really crazy, it's got a reasonable chance of actually being really good. If things look very familiar, they're not likely to be big steps forward. This was a big step forward, and took me and others a long time to realize that.

However, even if one accepts the premise that apparent crackpots deliver surprisingly high expected value, it's still not obvious how many we want on the Forum!

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-07T02:11:14.029Z · score: 4 (3 votes) · EA · GW

More thoughts re: the wisdom of the crowds: I suppose the wisdom of the crowds works best when each crowd member is in some sense an "unbiased estimator" of the quantity to be estimated. For example, suppose we ask a crowd to estimate the weight of a large object, but only a few "experts" in the crowd know that the object is hollow inside. In this case, the estimate of a randomly chosen expert could beat the average estimate of the rest of the crowd. I'm not sure how to translate this into a more general-purpose recommendation though.
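The hollow-object scenario can be sketched as a toy simulation. All the numbers here are made up purely for illustration (a 600 kg hollow object that looks like it should weigh 1000 kg, 95 non-experts, 5 experts): the point is just that when most of the crowd shares the same bias, averaging doesn't remove it.

```python
import random

random.seed(0)
TRUE_WEIGHT = 600    # kg; the object is hollow, so lighter than it looks
SOLID_GUESS = 1000   # kg; what a solid object of that size would weigh

# 95 non-experts anchor on the solid appearance; 5 experts know it's hollow.
crowd = [random.gauss(SOLID_GUESS, 100) for _ in range(95)]
experts = [random.gauss(TRUE_WEIGHT, 100) for _ in range(5)]

everyone = crowd + experts
crowd_average = sum(everyone) / len(everyone)    # the "wisdom of the crowd"
expert_average = sum(experts) / len(experts)     # what the informed few say
```

Here the crowd average lands near 980 kg regardless of how many guesses you collect, while a randomly chosen expert is far closer to the truth, because the crowd's errors are correlated rather than independent.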

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-07T02:01:15.782Z · score: 5 (5 votes) · EA · GW


Maybe it's possible to develop more specific guidelines here. For example, your comment implies that you think it's essential to know all the key considerations. OK... but I don't see why ignorance of known key considerations would prevent someone from pointing out a new key consideration. And if we discourage them from making their post, that could be very harmful, because as you say, it's important to know all the key considerations.

In other words, maybe it's worth differentiating between the act of generating intellectual raw material and the act of drawing conclusions.

Comment by john_maxwell_iv on Long-term Donation Bunching? · 2019-09-28T13:21:24.496Z · score: 3 (2 votes) · EA · GW

Another argument against extreme donation bunching: Because marginal tax rates get higher as your income increases, being able to deduct $40K is not necessarily twice as valuable as being able to deduct $20K.
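A quick worked example with purely illustrative brackets (10%/20%/30%; not real tax law) shows the effect: bunching $40K into one year saves less tax than deducting $20K in each of two years, because the larger deduction spills into lower brackets.

```python
def tax(income, brackets=((40_000, 0.10), (80_000, 0.20), (float("inf"), 0.30))):
    """Progressive tax owed under hypothetical brackets: (upper bound, rate)."""
    owed, lower = 0.0, 0.0
    for upper, rate in brackets:
        owed += max(0.0, min(income, upper) - lower) * rate
        lower = upper
    return owed

income = 100_000
# Tax saved by deducting $40K once vs. $20K in each of two years:
bunched = tax(income) - tax(income - 40_000)
spread = 2 * (tax(income) - tax(income - 20_000))
```

With these brackets the bunched deduction saves $10,000 while the spread deductions save $12,000, since the second half of the $40K deduction only offsets income taxed at 20% rather than 30%.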

Comment by john_maxwell_iv on Some personal thoughts on EA and systemic change · 2019-09-27T02:47:40.215Z · score: 43 (20 votes) · EA · GW

I wish the systemic change discussion was less focused on cost-effectiveness and more focused on uncertainty regarding the results of our actions. For example, in 2013 Scott Alexander wrote this post on how military strikes are an extremely cheap way to help foreigners ("at least potentially"). I'm glad he included the disclaimer, because although Scott's article works off the premise that "life is ~10% better in Libya after Gaddafi was overthrown", Libya isn't looking too hot right now - Obama says Libya is the biggest regret of his presidency. Scott also failed to mention that American intervention in Libya may have reduced North Korea's willingness to negotiate regarding its nuclear weapons program.

To me, uncertainty means it's valuable to research systemic changes well in advance of trying to make them. If systemic changes aren't cost-effective now, but might be cost-effective in the future, we should consider starting to theorize, debate, and run increasingly large experiments now anyway. (Disclaimer: Having productive disagreements about systemic changes is in itself a largely unsolved institution design problem, I'd argue! Maybe we should start by trying to solve that.)

Comment by john_maxwell_iv on [Link] The Case for Charter Cities Within the EA Framework (CCI) · 2019-09-27T02:05:16.083Z · score: 3 (2 votes) · EA · GW

Maybe it'd be helpful to build the charter city somewhere like here?

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-21T22:02:07.285Z · score: 2 (1 votes) · EA · GW

Thanks to everyone who entered this contest! I decided to split the prize money evenly between the four entries. Winners, please check your private messages for payment details!

Comment by john_maxwell_iv on How much EA analysis of AI safety as a cause area exists? · 2019-09-19T02:49:22.869Z · score: 3 (2 votes) · EA · GW

This critique is quite lengthy :-) Is there a summary available?


Comment by john_maxwell_iv on What things do you wish you discovered earlier? · 2019-09-19T01:33:00.410Z · score: 3 (2 votes) · EA · GW

[Link] saved my career from a disabling repetitive strain injury. I'll never get back the 1-2 years of misery I went through before finding that website.

Comment by john_maxwell_iv on How to Make Billions of Dollars Reducing Loneliness · 2019-09-17T01:05:25.243Z · score: 2 (1 votes) · EA · GW

It looks like this report is from 2018, and doesn't incorporate the 2019 YouGov research I linked. (I doubt pre-2004 data will give us insight into modern loneliness. Facebook and Twitter didn't exist back then, for instance.) This bit is interesting though:

More recently, some media outlets have misinterpreted the results of a 2018 Cigna survey to argue that loneliness has increased. The survey indicated that loneliness was higher for younger Americans than for older ones. A mistaken interpretation of this finding would be that older Americans were less likely to be lonely when they were younger than today's younger Americans are. This interprets life-course changes in loneliness as reflecting a change over time for Americans whatever their stage in the life course. While USA Today reported the age-based results as "surprising," the research on the relationship between age and loneliness suggests that the "[p]revalence and intensity of lonely feelings are greater in adolescence and young adulthood (i.e., 16-25 years of age)," decline with age, and then increase again in the very old.33 The Cigna survey does not support the claim that loneliness has increased over time, nor is the increased loneliness of adolescents a new revelation.

It's not clear to me how to reconcile this with e.g. the research YouGov cites to attribute loneliness among current youth to social media use. I guess a natural first step would be to see whether the magnitude of historical effects in the Handbook of Individual Differences in Social Behavior can explain what YouGov saw. I think you'd have to analyze data carefully to figure out if it supports the hypothesis "young people just tend to be lonelier" or the hypothesis "social ties get weaker with every passing generation + elderly people get lonely as their friends die".

In any case, I think loneliness could be a problem worth tackling even if it isn't rising. (And you will notice I didn't technically claim it was rising :P) The point is also somewhat moot as only one person expressed interest as a result of me posting here.

Comment by john_maxwell_iv on Does any thorough discussion of moral parliaments exist? · 2019-09-13T03:15:04.479Z · score: 2 (1 votes) · EA · GW

How about fixing the discount rate for all the parliament members? Or treating the discount rate question as orthogonal to the altruism/egoism question, and having 4 agents with each combination of altruism/egoism and high/low discount rates? I suppose analogous problems could appear in a non-discount-rate form somehow?

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-13T02:47:08.763Z · score: 2 (1 votes) · EA · GW


Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-13T02:46:41.693Z · score: 3 (2 votes) · EA · GW

Thanks, interesting points!

there is no incentive for the organization to pick the most scathing criticisms, when it could just as well pick only moderate ones.

If a particular criticism gets a lot of upvotes on the forum, but CEA ignores it and doesn't give it a prize, that looks a little suspicious.

Even if you solve the incentive problem somehow, there is a danger to public criticism campaigns like that: that they will provide a negative impression of the organization to outside people that do not read about the positive aspects of the organization/movement.

You could be right. However, I haven't seen anyone get in this kind of trouble for having a "mistakes" page. It seems possible to me that these kinds of measures can proactively defuse the discontent that can lead to real drama if suppressed long enough. Note that the thing that stuck in your head was not any particular criticism of CEA, but rather just the notion that criticism might be being suppressed--I wonder if that is what leads to real drama! But you could have a good point; maybe CEA is too important an organization to be the first to experiment with doing this kind of thing.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-13T02:42:00.460Z · score: 3 (2 votes) · EA · GW

Thanks for the feedback, these are points worth considering.

bs-ing and overstating certain things and omitting other considerations to write those most compelling criticism they can

Hm, my thought was that CEA would be the ones choosing the winners, and presumably CEA's definition of a "compelling" criticism could be based on how insightful or accurate CEA perceives the criticism to be rather than how negative it is.

It's like reading a study written by someone with a conflict of interest – it's very easy to dismiss it out of hand.

An alternative analogy is making sure that someone accused of a crime gets a defense lawyer. We want people who are paid to tell both sides of the story.

In any case, the point is not whether we should overall be pro/con CEA. The point is what CEA should do to improve. People could have conflicts of interest regarding specific changes they'd like to see CEA make, but the contest prize seems a bit orthogonal to those conflicts, and indeed could surface suggestions that are valuable precisely because no one currently has an incentive to make them.

If CEA were to offer a financial incentive for critiques, then all critiques of CEA become less trustworthy.

I don't see how critiques which aren't offered in the context of the contest would be affected.

I think it would be more productive to encourage people to offer the most thoughtful suggestions on how to improve, even if that means scaling up certain things because they were successful, and not criticism per se.

Maybe you're right and this is a better scheme. I guess part of my thinking was that there are social incentives which discourage criticism, and cash could counteract those, and additionally people who are pessimistic about your organization could have some of the most valuable feedback to offer, but because they're pessimistic they will by default focus on other things and might only be motivated by a cash incentive. But I don't know.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-03T21:20:10.202Z · score: 4 (3 votes) · EA · GW

Upvoted for relevant evidence.

However, I don't think you're representing that blog post accurately. You write that Givewell "stopped [soliciting external feedback] because it found that it generally wasn't useful", but at the top of the blog post, it says Givewell stopped because "The challenges of external evaluation are significant" and "The level of in-depth scrutiny of our work has increased greatly". Later it says "We continue to believe that it is important to ensure that our work is subjected to in-depth scrutiny."

I also don't think we can generalize from Givewell to CEA easily. Compare the number of EAs who carefully read Givewell's reports (not that many?) with the number of EAs who are familiar with various aspects of CEA's work (lots). Since CEA's work is the EA community itself, we should expect a lot of relevant local knowledge to reside in the EA community--knowledge which CEA could try & gather in a proactive way.

Check out the "Improvements in informal evaluation" section for some of the things Givewell is experimenting with in terms of critical feedback. When I read this section, I get the impression of an organization which is eager to gather critical feedback and experiment with different means for doing so. It doesn't seem like CEA is trying as many things here as Givewell is--despite the fact that I expect external feedback would be more useful for it.

if your bottleneck is not on raw material but instead on which of multiple competing narratives to trust, you're not necessarily gaining anything by hearing more copies of each.

I would say just the opposite. If you're hearing multiple copies of a particular narrative, especially from a range of different individuals, that's evidence you should trust it.

If you're worried about feedback not being actionable, you could tell people that if they offer concrete suggestions, that will increase their chance of winning the prize.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-02T23:42:51.222Z · score: 9 (3 votes) · EA · GW

These are good points, upvoted. However, I don't think they undermine the fundamental point: even if this is all true, CEA could publish a list of their known weaknesses and what they plan to do to fix them, and offer prizes for either improved understanding of their weaknesses (e.g. issues they weren't aware of), or feedback on their plans to fix them. I would guess they would get their money's worth.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-02T22:30:21.483Z · score: 23 (9 votes) · EA · GW

I'm suggesting that the revealed preferences of most organizations, including CEA, indicate they aren't actually very self-critical. Hence the "Not to rag on CEA specifically" bit.

I think we're mostly in agreement that CEA isn't less self-critical than the average organization. Even one of the Glassdoor reviewers wrote: "Not terribly open to honest self-assessment, but no more so than the average charity." (emphasis mine) However, aarongertler's reply made it sound like he thought CEA was very self-critical... so I think it's reasonable to ask why less than 0.01% of CEA's cash budget goes to self-criticism, if someone makes that claim.

How meaningful is an organization's commitment to self-criticism, exactly? I think the fraction of their cash budget devoted to self-criticism gives us a rough upper bound.

I agree that the norm I'm implicitly promoting, that organizations should offer cash prizes for the best criticisms of what they're doing, is an unusual one. So to put my money where my mouth is, I'll offer $20 (more than 0.01% of my annual budget!) for the best arguments for why this norm should not be promoted or at least experimented with. Enter by replying to this comment. (Even if you previously appeared to express support for this idea, you're definitely still allowed to enter!) I'll judge the contest at some point between Sept 20 and the end of the month, splitting $20 among some number of entries which I will determine while judging. Please promote this contest wherever you feel is appropriate. I'll set up a reminder for myself to do judging, but I appreciate reminders from others also.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-02T20:39:55.608Z · score: 5 (11 votes) · EA · GW

It does seem a bit weird to me for an organization to claim to be self-critical but put relatively little effort into soliciting external critical feedback. Like, CEA has a budget of $5M. To my knowledge, not even 0.01% of that budget is going into cash prizes for the best arguments that CEA is on the wrong track with any of its activities. This suggests either (a) an absurd level of confidence, on the order of 99.99%, that all the knowledge + ideas CEA needs are in the heads of current employees or (b) a preference for preserving the organization's image over actual effectiveness. Not to rag on CEA specifically--just saying if an organization claims to be self-critical, maybe we should check to see if they're putting their money where their mouth is.

(One possible counterpoint is that EAs are already willing to provide external critical feedback. However, Will recently said he thought EA was suffering too much from deference/information cascades. Prizes for criticism seem like they could be an effective way to counteract that.)

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-01T00:31:46.631Z · score: 4 (2 votes) · EA · GW

my experience is that CEA circa late 2019 is intensely self-reflective; I'm prompted multiple times in the average week to put serious thought into ways we can improve our processes and public communication.

Glad to hear it!

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-08-30T01:09:19.916Z · score: 2 (1 votes) · EA · GW

I guess a practical way to measure creativity could be to give candidates a take-home problem which is a description of one of the organization's current challenges :P I suspect take-home problems are in general a better way to measure creativity, because if it's administered in a conversational interview context, I imagine it'd be more of a test of whether someone can be relaxed & creative under pressure.

BTW, another point related to creativity and exclusivity is that outsiders often have a fresh perspective which brings important new ideas.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-08-30T01:04:50.880Z · score: 2 (1 votes) · EA · GW

Oh interesting, I was thinking it would be bad to correct for measurement error in the work sample (since measurement error is a practical concern when it comes to how predictive it is.) But I guess you're right that it would be reasonable to correct for measurement error in the measure of employee performance.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-08-29T06:00:51.208Z · score: 2 (1 votes) · EA · GW

Ah, thanks! So as a practical matter it seems like we probably shouldn't correct for attenuation in this context and lean towards the correlation coefficient being more like 0.26? Honestly that seems a bit implausibly low. Not sure how much stock to put in this paper even if it is a meta-analysis. Maybe better to read it before taking it too seriously.
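For what it's worth, "correcting for attenuation" here presumably refers to Spearman's standard correction: an observed correlation is divided by the square root of the product of the two measures' reliabilities, estimating what the correlation would be if both were measured without error. A minimal sketch (the 0.62 reliability figure below is an assumption chosen only to show how an observed 0.26 could correct to roughly the 0.33 cited, not a number from the paper):

```python
import math

def disattenuate(r_obs, rel_x, rel_y):
    """Spearman's correction for attenuation:
    estimated true-score correlation from an observed correlation
    and the reliabilities of the two measures."""
    return r_obs / math.sqrt(rel_x * rel_y)

# If the work sample were perfectly reliable but performance ratings
# had reliability 0.62 (hypothetical), observed r = 0.26 corrects to ~0.33.
r_corrected = disattenuate(0.26, 1.0, 0.62)
```

So the 0.26 vs. 0.33 gap plausibly reflects whether the authors adjusted for measurement error in the criterion; for predicting actual hiring outcomes, the uncorrected figure is arguably the relevant one, as you say.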

Comment by john_maxwell_iv on How to Make Billions of Dollars Reducing Loneliness · 2019-08-28T07:26:31.998Z · score: 4 (2 votes) · EA · GW

It could work. However, Tinder works well because people can quickly guess whether they want to date someone based on physical attraction. I don't think there is a single easy-to-evaluate factor which predicts roommate compatibility. Also, moving in with someone is a bigger commitment than going on a date with them.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-08-28T04:23:25.993Z · score: 28 (13 votes) · EA · GW

Nice post!

Re: sequestration, OpenPhil has written about the difficulty of getting honest, critical feedback as a grantmaker. This seems like something all grantmakers should keep in mind. The danger seems especially high for an organization like OpenPhil or CEA, which is grantmaking all over the EA movement with EA Grants and EA Funds. Unfortunately, some reports from ex-employees of CEA on Glassdoor give me the impression CEA is not as proactive in its self-skepticism as OpenPhil:

Not terribly open to honest self-assessment, but no more so than the average charity.


As another reviewer mentioned, ironically hostile to honest self-assessment, let alone internal concerns about effectiveness - I saw and heard of some people who'd got significant grief for this. Groupthink and back-patting was more rewarded.

I've also heard an additional anecdote about CEA, independent of Glassdoor, which is compatible with this impression.

The question of whether and how much to prioritize those who appear most talented is tricky. I get the impression there has been a gradual but substantial update away from mass outreach over the past few years (though some answers in Will's AMA make me wonder if he and maybe others are trying to push back against what they see as excessive "hero worship" etc.) Anyway, some thoughts on this:

  • I think it's not always obvious how much of the work attributed to one famous person should really be credited to a much larger team. For example, one friend of mine cited the massive amount of money Bill Gates made as evidence that impact is highly disproportionate. However, I would guess in many cases, successful entrepreneurs at the $100M+ scale are distinguished by their ability to identify & attract great people to work for their company. I think maybe there is some quirk of our society where we want to credit just a few individuals with an impressive accomplishment even when the "correct" assignment of credit doesn't actually follow a power law distribution. [For a concrete example where we have data available, I think claims about Wikipedia editor contributions following a power law distribution have been refuted.]

  • Even in cases where individual impact will be power law distributed, that doesn't mean we can reliably identify the people at the top of the distribution in advance. For example, this paper apparently found that work sample tests only correlated with job performance at around 0.26-0.33! (Not sure what "attenuation" means in this context.) Anyway, maybe we could do some analysis: If you have applicant pool with N applicants, and you're going to hire the top K applicants based on a work sample test which correlates with job performance at 0.3, what does K need to be for you to have a 90% chance of hiring the best applicant? (I'd actually argue that the premise of this question is flawed, because the hypothetical 10x applicant is probably going to achieve 10x performance through some creative insights which the work sample test predicts even less well, but I'd still be interested in seeing the results of the analysis. Actually, speaking of creativity, have any EA organizations experimented with using tests of creative ability in their hiring?)

  • Finally, I think it could be useful to differentiate between "elitism" and "exclusivity". For example, I once did some napkin math suggesting that less than 0.01% of the people who watch Peter Singer's TED talk later become EAs. So arguably, becoming an EA via that route is actually a pretty strong signal of dedication & willingness to take ideas seriously, compared to, say, someone who was persuaded to become an EA through an element of peer pressure after several friends became interested. But the second person is probably going to be better connected within EA. So if the movement becomes more "exclusive", in the sense of using someone's position in the social scene as a proxy for their importance, I suspect we'd be getting it wrong. When I think of the EAs who seem very dedicated to making an impact, people I'm excited about, they're often people who came to EA on their own and in some cases still aren't very well-connected.
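The analysis proposed in the second bullet can be sketched as a quick Monte Carlo simulation. This is a rough sketch under assumed conditions (performance and test scores jointly normal with correlation r; all parameter values illustrative), not a claim about any real hiring process:

```python
import random

def p_hire_best(n_applicants, k, r, trials=10_000, seed=0):
    """Monte Carlo estimate of the chance that the single best performer
    is among the top-k applicants ranked by a test correlating r with
    true performance."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        perf = [rng.gauss(0, 1) for _ in range(n_applicants)]
        # Test score = r * performance + noise, so corr(test, perf) = r.
        test = [r * p + (1 - r**2) ** 0.5 * rng.gauss(0, 1) for p in perf]
        best = max(range(n_applicants), key=lambda i: perf[i])
        hired = sorted(range(n_applicants), key=lambda i: test[i], reverse=True)[:k]
        hits += best in hired
    return hits / trials

# e.g. chance of catching the best of 20 applicants when hiring the top 5 by test:
chance = p_hire_best(n_applicants=20, k=5, r=0.3)
```

Under this model, a test with r = 0.3 beats random selection, but not by a huge margin; K has to be a substantial fraction of N before the best applicant is reliably included, which supports the bullet's point that we can't confidently identify the top of the distribution in advance.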

Comment by john_maxwell_iv on How to Make Billions of Dollars Reducing Loneliness · 2019-08-27T04:48:57.346Z · score: 3 (2 votes) · EA · GW

OKCupid was huge before Tinder came along in the US. And as I mentioned, RoomieMatch is already pretty big. That said, it's possible there wouldn't be as much of a market for this in Germany. One approach is to start in a city with lots of early adopters who like trying weird new stuff (San Francisco is traditional) and gradually expand as the product concept is normalized. But sometimes things don't go much beyond early adopters.

Comment by john_maxwell_iv on How to Make Billions of Dollars Reducing Loneliness · 2019-08-26T20:35:48.719Z · score: 3 (4 votes) · EA · GW

Facebook and Google have an incentive to track their users because they sell targeted advertising. The user isn't the customer, they are the product. This is an atypical business model.

One thing about the real estate business is because so much money is changing hands, there's a big incentive to cut out the middleman. (Winning Through Intimidation is a fascinating book about this.) I would highly recommend you avoid actions which run the slightest risk of pissing your customers off, lest they cut a deal with the property owner directly. Airbnb will ban anyone who exchanges money outside their platform, but that's less of a threat here because people don't change homes frequently. With the amount of money you're making per customer, you should be able to afford an army of customer service people in order to provide a high-touch customer experience.

There are a few reasons I think for-profit is generally preferable to non-profit when possible:

  • It's easier to achieve scale as a for-profit.
  • For-profit businesses are accountable to their customers. They usually only stay in business if customers are satisfied with the service they provide. Non-profits are accountable to their donors. The impressions of donors correlate imperfectly with the extent to which real needs are being served.
  • First worlders usually aren't poor and don't need charity.
  • You can donate the money you make to effective charities.

Comment by john_maxwell_iv on How to Make Billions of Dollars Reducing Loneliness · 2019-08-26T20:19:24.066Z · score: 8 (5 votes) · EA · GW

I have a lot more ideas than I know what to do with. So I try to prioritize ruthlessly. I feel like I've got a comparative advantage working on AI stuff and a comparative disadvantage starting a company like this one. I'm experimenting with posting some of my ideas to the EA Forum to see if they can be useful to other people, e.g. folks who wanted to get a job at an EA organization but weren't successful.

Comment by john_maxwell_iv on How to generate research proposals · 2019-08-03T06:07:31.716Z · score: 3 (2 votes) · EA · GW

Generating and prioritizing research proposals seems to be a critical part of strategic research, and my informal impression is that systematic approaches are quite underexplored.

This is also my impression.

I guess some people will probably be just as interested in pursuing the research ideas of others as pursuing their own. For those people, maybe creating a thread here or on Facebook with a message like "I'm looking for research ideas on Topic X" would work?

Personally, I've been noting down research ideas on EA topics that interest me (AI safety and improved institutions would be the two big ones I guess) for quite a while, and I'm pursuing the ideas at a much lower rate than I'm noting them down! So maybe it'd be good for me to connect with people who are hunting for ideas somehow?