Posts

Should you familiarize yourself with the literature before writing an EA Forum post? 2019-10-06T23:17:09.317Z · score: 29 (12 votes)
[Link] How to contribute to the psychedelics ecosystem 2019-09-28T01:55:14.267Z · score: 10 (6 votes)
How to Make Billions of Dollars Reducing Loneliness 2019-08-24T01:49:45.629Z · score: 26 (17 votes)
How Flying Cars Will Solve Global Poverty 2019-04-01T20:56:47.829Z · score: 21 (12 votes)
Open Thread #43 2018-12-08T05:39:37.672Z · score: 8 (4 votes)
Open Thread #41 2018-09-03T02:21:51.927Z · score: 4 (4 votes)
Five books to make you super effective 2015-04-02T02:31:48.509Z · score: 6 (6 votes)

Comments

Comment by john_maxwell_iv on Resource Generation: Inheriting-to-give, for systemic change · 2019-10-15T15:50:28.931Z · score: 13 (5 votes) · EA · GW

Do they have thoughts on GiveDirectly? It looks like, although they mention the Global South, they're asking members to donate to first-world political advocacy groups. Was Giridharadas one of the critics who says rich people have too much influence on US public policy?

BTW this group recently got a grant from the EA Meta fund: https://generationpledge.org

Comment by john_maxwell_iv on [Link] "How feasible is long-range forecasting?" (Open Phil) · 2019-10-12T06:01:14.743Z · score: 14 (5 votes) · EA · GW

I'd be interested to know how people think long-range forecasting is likely to differ from short-range forecasting, and to what degree we can apply findings from short-range forecasting to long-range forecasting. Could it be possible to, for example, ask forecasters to forecast at a variety of short-range timescales, fit a curve to their accuracy as a function of time (or otherwise try to mathematically model the "half-life" of the knowledge powering the forecast--I don't know what methodologies could be useful here, maybe survival analysis?) and extrapolate this model to long-range timescales?
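
To make the curve-fitting idea concrete, here is a minimal sketch in Python. The accuracy numbers and the exponential "knowledge half-life" form are illustrative assumptions, not findings from any real forecasting dataset:

```python
# Fit an exponential skill-decay model to hypothetical short-range forecasting
# accuracy, then extrapolate to a long-range horizon. All data points and the
# functional form are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical forecasting skill (vs. a naive baseline) at short horizons
horizons_years = np.array([0.25, 0.5, 1.0, 2.0, 3.0])
skill = np.array([0.40, 0.33, 0.25, 0.15, 0.10])

def skill_decay(t, s0, half_life):
    """Skill decays exponentially; half_life is the 'half-life' of the
    knowledge powering the forecast."""
    return s0 * 0.5 ** (t / half_life)

(s0, half_life), _ = curve_fit(skill_decay, horizons_years, skill, p0=[0.4, 1.5])
print(f"fitted knowledge half-life: {half_life:.2f} years")
print(f"extrapolated skill at 20 years: {skill_decay(20, s0, half_life):.4f}")
```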

I'm also curious why there isn't more interest in presenting people with historical scenarios and asking them to forecast what will happen next in the historical scenario. Obviously if they already know about that period of history this won't work, but that seems possible to overcome.

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-12T05:43:08.805Z · score: 2 (1 votes) · EA · GW

Another thought is that even if the original post had a weak epistemic status, if the original post becomes popular and gets the chance to receive widespread scrutiny, which it survives, it could be reasonable to believe its "de facto" epistemic status is higher than what's posted at the top. But yes, I guess in that case there's the risk that none of the people who scrutinized it had familiarity with relevant literature that contradicted the post.

Maybe the solution is to hire someone to do lit reviews to carefully examine posts with epistemic status disclaimers that nonetheless became popular and seem decision-relevant.

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-09T20:28:31.097Z · score: 4 (2 votes) · EA · GW

Interesting thought, upvoted!

Is there particular evidence for source amnesia you have in mind? The abstract for the first Wikipedia citation says:

Experiment 2 demonstrated that when normal subjects' level of item recall was equivalent to that of amnesics, they exhibited significantly less source amnesia: Normals rarely failed to recollect that a retrieved item derived from either of the two sources, although they often forgot which of the two experimenters was the correct source. The results are discussed in terms of their implications for theories of normal and abnormal memory.

So I guess the question is whether the epistemic status disclaimer falls into the category of source info that people will remember ("an experimenter told me X") or source info that people often forget ("Experimenter A told me X"). (Or whether it even makes sense to analyze epistemic status in the paradigm of source info at all--for example, including an epistemic status could cause readers to think "OK, these are just ideas to play with, not solid facts" when they read the post, and have the memory encoded that way, even if they aren't able to explicitly recall a post's epistemic status. And this might hold true regardless of how widespread a post is shared. Like, for all we know, certain posts get shared more because people like playing with new ideas more than they like reading established facts, but they're pretty good at knowing that playing with new ideas is what they're doing.)

I think if you fully buy into the source amnesia idea, that could be considered an argument for posting anything to the EA Forum which is above average quality relative to a typical EA information diet for that topic area--if you really believe this source amnesia thing, people end up taking Facebook posts just as seriously as papers they read on Google Scholar.

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-08T01:57:27.352Z · score: 5 (3 votes) · EA · GW

site:forum.effectivealtruism.org on Google has been working OK for me.

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-08T01:36:59.291Z · score: 4 (2 votes) · EA · GW

Clever :)

However, I'm not sure that post follows its own advice, as it appears to be essentially a collection of anecdotes. And it's possible to marshal anecdotes on both sides, e.g. here is Claude Shannon's take:

...very frequently someone who is quite green to a problem will sometimes come in and look at it and find the solution like that, while you have been laboring for months over it. You’ve got set into some ruts here of mental thinking and someone else comes in and sees it from a fresh viewpoint.

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-08T01:31:37.344Z · score: 11 (4 votes) · EA · GW

One possible synthesis comes from Turing award winner Richard Hamming's book The Art of Doing Science and Engineering. He's got chapters at the end on Creativity and Experts. The chapters are somewhat rambly and I've quoted passages below. My attempt to summarize Hamming's position: Having a deep intellectual toolkit is valuable, but experts are often overconfident and resistant to new ideas.

Chapter 25: Creativity

...Do not be too hasty [in refining a problem], as you are likely to put the problem in the conventional form and find only the conventional solution...

...

...Wide acquaintance with various fields of knowledge is thus a help—provided you have the knowledge filed away so it is available when needed, rather than to be found only when led directly to it. This flexible access to pieces of knowledge seems to come from looking at knowledge while you are acquiring it from many different angles, turning over any new idea to see its many sides before filing it away. This implies effort on your part not to take the easy, immediately useful “memorizing the material” path, but prepare your mind for the future.

...

Over the years of watching and working with John Tukey I found many times he recalled the relevant information and I did not, until he pointed it out to me. Clearly his information retrieval system had many more “hooks” than mine did. At least more useful ones! How could this be? Probably because he was more in the habit than I was of turning over new information again and again so his “hooks” for retrieval were more numerous and significantly better than mine were. Hence wishing I could similarly do what he did, I started to mull over new ideas, trying to make significant “hooks” to relevant information so when later I went fishing for an idea I had a better chance of finding an analogy. I can only advise you to do what I tried to do—when you learn something new think of other applications of it—ones which have not arisen in your past but which might in your future. How easy to say, but how hard to do! Yet, what else can I say about how to organize your mind so useful things will be recalled readily at the right time?

...

...Without self-confidence you are not likely to create great, new things. There is a thin line between having enough self-confidence and being over-confident. I suppose the difference is whether you succeed or fail; when you win you are strong willed, and when you lose you are stubborn!...

Chapter 26: Experts

...

In an argument between a specialist and a generalist the expert usually wins by simply: (1) using unintelligible jargon, and (2) citing their specialist results which are often completely irrelevant to the discussion. The expert is, therefore, a potent factor to be reckoned with in our society. Since experts are both necessary, and also at times do great harm in blocking significant progress, they need to be examined closely. All too often the expert misunderstands the problem at hand, but the generalist cannot carry through their side to completion. The person who thinks they understand the problem and does not is usually more of a curse (blockage) than the person who knows they do not understand the problem.

...

Experts in looking at something new always bring their expertise with them as well as their particular way of looking at things. Whatever does not fit into their frame of reference is dismissed, not seen, or forced to fit into their beliefs. Thus really new ideas seldom arise from the experts in the field. You can not blame them too much since it is more economical to try the old, successful ways before trying to find new ways of looking and thinking.

All things which are proved to be impossible must obviously rest on some assumptions, and when one or more of these assumptions are not true then the impossibility proof fails—but the expert seldom remembers to carefully inspect the assumptions before making their “impossible” statements. There is an old statement which covers this aspect of the expert. It goes as follows: “If an expert says something can be done he is probably correct, but if he says it is impossible then consider getting another opinion.”

...

...It appears most of the great innovations come from outside the field, and not from the insiders... examples occur in most fields of work, but the text books seldom, if ever, discuss this aspect.

...the expert faces the following dilemma. Outside the field there are a large number of genuine crackpots with their crazy ideas, but among them may also be the crackpot with the new, innovative idea which is going to triumph. What is a rational strategy for the expert to adopt? Most decide they will ignore, as best they can, all crackpots, thus ensuring they will not be part of the new paradigm, if and when it comes.

Those experts who do look for the possible innovative crackpot are likely to spend their lives in the futile pursuit of the elusive, rare crackpot with the right idea, the only idea which really matters in the long run. Obviously the strategy for you to adopt depends on how much you are willing to be merely one of those who served to advance things, vs. the desire to be one of the few who in the long run really matter. I cannot tell you which you should choose; that is your choice. But I do say you should be conscious of making the choice as you pursue your career. Do not just drift along; think of what you want to be and how to get there. Do not automatically reject every crazy idea, the moment you hear of it, especially when it comes from outside the official circle of the insiders—it may be the great new approach which will change the paradigm of the field! But also you cannot afford to pursue every "crackpot" idea you hear about. I have been talking about paradigms of Science, but so far as I know the same applies to most fields of human thought, though I have not investigated them closely. And it probably happens for about the same reasons; the insiders are too sure of themselves, have too much invested in the accepted approaches, and are plain mentally lazy. Think of the history of modern technology you know!

...

...In some respects the expert is the curse of our society with their assurance they know everything, and without the decent humility to consider they might be wrong. Where the question looms so important I suggested to you long ago to use in an argument, “What would you accept as evidence you are wrong?” Ask yourself regularly, “Why do I believe whatever I do”. Especially in the areas where you are so sure you know; the area of the paradigms of your field.

Hamming shares a number of stories from the history of science to support his claims. He also says he has more stories which he didn't include in the chapter, and that he looked for stories which went against his position too.

A couple takeaways:

  • Survivorship bias regarding stories of successful contrarians - most apparent crackpots actually are crackpots.

  • Paradigm shifts - if an apparent crackpot is not actually a crackpot, their idea has the potential to be extremely important. So shutting down all the apparent crackpots could have quite a high cost even if most are full of nonsense. As Jerome Friedman put it regarding the invention of bagging (coincidentally mentioned in the main post):

The first time I saw this-- when would that have been, maybe the mid '90s-- I knew a lot about the bootstrap. Actually, I was a student of Brad Efron, who invented the bootstrap. And Brad and I wrote a book together on the bootstrap in the early '90s. And then when I saw the bag idea from Leo, I thought this looks really crazy. Usually the bootstrap is used to get the idea of standard errors or bias, but Leo wants to use bootstrap to produce a whole bunch of trees and to average them, which sounded really crazy to me. And it was a reminder to me that you see an idea that looks really crazy, it's got a reasonable chance of actually being really good. If things look very familiar, they're not likely to be big steps forward. This was a big step forward, and took me and others a long time to realize that.

However, even if one accepts the premise that apparent crackpots deliver surprisingly high expected value, it's still not obvious how many we want on the Forum!

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-07T02:11:14.029Z · score: 4 (3 votes) · EA · GW

More thoughts re: the wisdom of the crowds: I suppose the wisdom of the crowds works best when each crowd member is in some sense an "unbiased estimator" of the quantity to be estimated. For example, suppose we ask a crowd to estimate the weight of a large object, but only a few "experts" in the crowd know that the object is hollow inside. In this case, the estimate of a randomly chosen expert could beat the average estimate of the rest of the crowd. I'm not sure how to translate this into a more general-purpose recommendation though.
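
Here is a minimal simulation of the hollow-object example (all numbers made up): the crowd is unbiased around the wrong "solid" weight, while a few experts are unbiased around the true weight:

```python
# The crowd shares a common bias (they assume the object is solid), while a
# few experts know it is hollow. Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(0)
true_weight = 60.0            # the object is hollow
assumed_solid_weight = 100.0  # what it would weigh if solid

crowd = rng.normal(assumed_solid_weight, 15.0, size=1000)  # unbiased around the wrong value
experts = rng.normal(true_weight, 15.0, size=5)            # unbiased around the true value

print(f"crowd-average error:  {abs(crowd.mean() - true_weight):.1f}")
print(f"random-expert error:  {abs(rng.choice(experts) - true_weight):.1f}")
```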

Comment by john_maxwell_iv on Should you familiarize yourself with the literature before writing an EA Forum post? · 2019-10-07T02:01:15.782Z · score: 3 (4 votes) · EA · GW

(Upvoted)

Maybe it's possible to develop more specific guidelines here. For example, your comment implies that you think it's essential to know all the key considerations. OK... but I don't see why ignorance of known key considerations would prevent someone from pointing out a new key consideration. And if we discourage them from making that post, that could be very harmful, because as you say, it's important to know all the key considerations.

In other words, maybe it's worth differentiating the act of generating intellectual raw material, and the act of drawing conclusions.

Comment by john_maxwell_iv on Long-term Donation Bunching? · 2019-09-28T13:21:24.496Z · score: 3 (2 votes) · EA · GW

Another argument against extreme donation bunching: Because marginal tax rates get higher as your income increases, being able to deduct $40K is not necessarily twice as valuable as being able to deduct $20K.
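
A toy illustration with hypothetical brackets (not real tax law): under progressive rates, the second $20K of a deduction falls into a lower bracket than the first $20K, so it saves less tax.

```python
# Hypothetical marginal brackets, purely for illustration.
def tax(income, brackets=((0, 0.10), (40_000, 0.25), (80_000, 0.35))):
    """Tax owed under simple progressive brackets: each (threshold, rate)
    pair applies its rate to income above the threshold, up to the next one."""
    thresholds = [b[0] for b in brackets[1:]] + [float("inf")]
    return sum(rate * max(0.0, min(income, nxt) - lo)
               for (lo, rate), nxt in zip(brackets, thresholds))

income = 100_000
first_20k = tax(income) - tax(income - 20_000)
second_20k = tax(income - 20_000) - tax(income - 40_000)
print(f"tax saved by the first $20K deducted:  ${first_20k:,.0f}")   # all at the 35% rate
print(f"tax saved by the second $20K deducted: ${second_20k:,.0f}")  # all at the 25% rate
```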

Comment by john_maxwell_iv on Some personal thoughts on EA and systemic change · 2019-09-27T02:47:40.215Z · score: 43 (20 votes) · EA · GW

I wish the systemic change discussion was less focused on cost-effectiveness and more focused on uncertainty regarding the results of our actions. For example, in 2013 Scott Alexander wrote this post on how military strikes are an extremely cheap way to help foreigners ("at least potentially"). I'm glad he included the disclaimer, because although Scott's article works off the premise that "life is ~10% better in Libya after Gaddafi was overthrown", Libya isn't looking too hot right now - Obama says Libya is the biggest regret of his presidency. Scott also failed to mention that American intervention in Libya may have reduced North Korea's willingness to negotiate regarding its nuclear weapons program.

To me, uncertainty means it's valuable to research systemic changes well in advance of trying to make them. If systemic changes aren't cost-effective now, but might be cost-effective in the future, we should consider starting to theorize, debate, and run increasingly large experiments now anyway. (Disclaimer: Having productive disagreements about systemic changes is in itself a largely unsolved institution design problem, I'd argue! Maybe we should start by trying to solve that.)

Comment by john_maxwell_iv on [Link] The Case for Charter Cities Within the EA Framework (CCI) · 2019-09-27T02:05:16.083Z · score: 3 (2 votes) · EA · GW

Maybe it'd be helpful to build the charter city somewhere like here?

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-21T22:02:07.285Z · score: 2 (1 votes) · EA · GW

Thanks to everyone who entered this contest! I decided to split the prize money evenly between the four entries. Winners, please check your private messages for payment details!

Comment by john_maxwell_iv on How much EA analysis of AI safety as a cause area exists? · 2019-09-19T02:49:22.869Z · score: 3 (2 votes) · EA · GW

This critique is quite lengthy :-) Is there a summary available?

Comment by john_maxwell_iv on What things do you wish you discovered earlier? · 2019-09-19T01:33:00.410Z · score: 3 (2 votes) · EA · GW

http://painscience.com saved my career from a disabling repetitive strain injury. I'll never get back the 1-2 years of misery I went through before finding that website.

Comment by john_maxwell_iv on How to Make Billions of Dollars Reducing Loneliness · 2019-09-17T01:05:25.243Z · score: 2 (1 votes) · EA · GW

It looks like this report is from 2018, and doesn't incorporate the 2019 YouGov research I linked. (I doubt pre-2004 data will give us insight into modern loneliness. Facebook and Twitter didn't exist back then, for instance.) This bit is interesting though:

More recently, some media outlets have misinterpreted the results of a 2018 Cigna survey to argue that loneliness has increased. The survey indicated that loneliness was higher for younger Americans than for older ones. A mistaken interpretation of this finding would be that older Americans were less likely to be lonely when they were younger than today's younger Americans are. This interprets life-course changes in loneliness as reflecting a change over time for Americans whatever their stage in the life course. While USA Today reported the age-based results as "surprising," the research on the relationship between age and loneliness suggests that the "[p]revalence and intensity of lonely feelings are greater in adolescence and young adulthood (i.e., 16-25 years of age)," decline with age, and then increase again in the very old.33 The Cigna survey does not support the claim that loneliness has increased over time, nor is the increased loneliness of adolescents a new revelation.

It's not clear to me how to reconcile this with e.g. the research YouGov cites to attribute loneliness among current youth to social media use. I guess a natural first step would be to see whether the magnitude of historical effects in the Handbook of Individual Differences in Social Behavior can explain what YouGov saw. I think you'd have to analyze data carefully to figure out if it supports the hypothesis "young people just tend to be lonelier" or the hypothesis "social ties get weaker with every passing generation + elderly people get lonely as their friends die".

In any case, I think loneliness could be a problem worth tackling even if it isn't rising. (And you will notice I didn't technically claim it was rising :P) The point is also somewhat moot as only one person expressed interest as a result of me posting here.

Comment by john_maxwell_iv on Does any thorough discussion of moral parliaments exist? · 2019-09-13T03:15:04.479Z · score: 2 (1 votes) · EA · GW

How about fixing the discount rate for all the parliament members? Or treating the discount rate question as orthogonal to the altruism/egoism question, and having 4 agents with each combination of altruism/egoism and high/low discount rates? I suppose analogous problems could appear in a non-discount-rate form somehow?

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-13T02:47:08.763Z · score: 2 (1 votes) · EA · GW

Nice!

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-13T02:46:41.693Z · score: 3 (2 votes) · EA · GW

Thanks, interesting points!

there is no incentive for the organization to pick the most scathing criticisms, when it could just as well pick only moderate ones.

If a particular criticism gets a lot of upvotes on the forum, but CEA ignores it and doesn't give it a prize, that looks a little suspicious.

Even if you solve the incentive problem somehow, there is a danger to public criticism campaigns like that: that they will provide a negative impression of the organization to outside people that do not read about the positive aspects of the organization/movement.

You could be right. However, I haven't seen anyone get in this kind of trouble for having a "mistakes" page. It seems possible to me that these kinds of measures can proactively defuse the discontent that can lead to real drama if suppressed long enough. Note that the thing that stuck in your head was not any particular criticism of CEA, but rather just the notion that criticism might be being suppressed--I wonder if that is what leads to real drama! But you could have a good point; maybe CEA is too important of an organization to be the first ones to experiment with doing this kind of thing.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-13T02:42:00.460Z · score: 3 (2 votes) · EA · GW

Thanks for the feedback, these are points worth considering.

bs-ing and overstating certain things and omitting other considerations to write the most compelling criticism they can

Hm, my thought was that CEA would be the ones choosing the winners, and presumably CEA's definition of a "compelling" criticism could be based on how insightful or accurate CEA perceives the criticism to be rather than how negative it is.

It's like reading a study written by someone with a conflict of interest – it's very easy to dismiss it out of hand.

An alternative analogy is making sure that someone accused of a crime gets a defense lawyer. We want people who are paid to tell both sides of the story.

In any case, the point is not whether we should overall be pro/con CEA. The point is what CEA should do to improve. People could have conflicts of interest regarding specific changes they'd like to see CEA make, but the contest prize seems a bit orthogonal to those conflicts, and indeed could surface suggestions that are valuable precisely because no one currently has an incentive to make them.

If CEA were to offer a financial incentive for critiques, then all critiques of CEA become less trustworthy.

I don't see how critiques which aren't offered in the context of the contest would be affected.

I think it would be more productive to encourage people to offer the most thoughtful suggestions on how to improve, even if that means scaling up certain things because they were successful, and not criticism per se.

Maybe you're right and this is a better scheme. I guess part of my thinking was that there are social incentives which discourage criticism, and cash could counteract those. Additionally, people who are pessimistic about your organization could have some of the most valuable feedback to offer, but because they're pessimistic they will by default focus on other things and might only be motivated by a cash incentive. But I don't know.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-03T21:20:10.202Z · score: 4 (3 votes) · EA · GW

Upvoted for relevant evidence.

However, I don't think you're representing that blog post accurately. You write that GiveWell "stopped [soliciting external feedback] because it found that it generally wasn't useful", but at the top of the blog post, it says GiveWell stopped because "The challenges of external evaluation are significant" and "The level of in-depth scrutiny of our work has increased greatly". Later it says "We continue to believe that it is important to ensure that our work is subjected to in-depth scrutiny."

I also don't think we can generalize from GiveWell to CEA easily. Compare the number of EAs who carefully read GiveWell's reports (not that many?) with the number of EAs who are familiar with various aspects of CEA's work (lots). Since CEA's work is the EA community, we should expect a lot of relevant local knowledge to reside in the EA community--knowledge which CEA could try & gather in a proactive way.

Check out the "Improvements in informal evaluation" section for some of the things GiveWell is experimenting with in terms of critical feedback. When I read this section, I get the impression of an organization which is eager to gather critical feedback and experiment with different means of doing so. It doesn't seem like CEA is trying as many things here as GiveWell is--despite the fact that I expect external feedback would be more useful for it.

if your bottleneck is not on raw material but instead on which of multiple competing narratives to trust, you're not necessarily gaining anything by hearing more copies of each.

I would say just the opposite. If you're hearing multiple copies of a particular narrative, especially from a range of different individuals, that's evidence you should trust it.

If you're worried about feedback not being actionable, you could tell people that if they offer concrete suggestions, that will increase their chance of winning the prize.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-02T23:42:51.222Z · score: 9 (3 votes) · EA · GW

These are good points, upvoted. However, I don't think they undermine the fundamental point: even if this is all true, CEA could publish a list of their known weaknesses and what they plan to do to fix them, and offer prizes for either improved understanding of their weaknesses (e.g. issues they weren't aware of), or feedback on their plans to fix them. I would guess they would get their money's worth.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-02T22:30:21.483Z · score: 23 (9 votes) · EA · GW

I'm suggesting that the revealed preferences of most organizations, including CEA, indicate they aren't actually very self-critical. Hence the "Not to rag on CEA specifically" bit.

I think we're mostly in agreement that CEA isn't less self-critical than the average organization. Even one of the Glassdoor reviewers wrote: "Not terribly open to honest self-assessment, but no more so than the average charity." (emphasis mine) However, aarongertler's reply made it sound like he thought CEA was very self-critical... so I think it's reasonable to ask why less than 0.01% of CEA's cash budget goes to self-criticism, if someone makes that claim.

How meaningful is an organization's commitment to self-criticism, exactly? I think the fraction of their cash budget devoted to self-criticism gives us a rough upper bound.

I agree that the norm I'm implicitly promoting, that organizations should offer cash prizes for the best criticisms of what they're doing, is an unusual one. So to put my money where my mouth is, I'll offer $20 (more than 0.01% of my annual budget!) for the best arguments for why this norm should not be promoted or at least experimented with. Enter by replying to this comment. (Even if you previously appeared to express support for this idea, you're definitely still allowed to enter!) I'll judge the contest at some point between Sept 20 and the end of the month, splitting $20 among some number of entries which I will determine while judging. Please promote this contest wherever you feel is appropriate. I'll set up a reminder for myself to do judging, but I appreciate reminders from others also.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-02T20:39:55.608Z · score: 5 (11 votes) · EA · GW

It does seem a bit weird to me for an organization to claim to be self-critical but put relatively little effort into soliciting external critical feedback. Like, CEA has a budget of $5M. To my knowledge, not even 0.01% of that budget is going into cash prizes for the best arguments that CEA is on the wrong track with any of its activities. This suggests either (a) an absurd level of confidence, on the order of 99.99%, that all the knowledge + ideas CEA needs are in the heads of current employees or (b) a preference for preserving the organization's image over actual effectiveness. Not to rag on CEA specifically--just saying if an organization claims to be self-critical, maybe we should check to see if they're putting their money where their mouth is.

(One possible counterpoint is that EAs are already willing to provide external critical feedback. However, Will recently said he thought EA was suffering too much from deference/information cascades. Prizes for criticism seem like they could be an effective way to counteract that.)

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-09-01T00:31:46.631Z · score: 4 (2 votes) · EA · GW

my experience is that CEA circa late 2019 is intensely self-reflective; I'm prompted multiple times in the average week to put serious thought into ways we can improve our processes and public communication.

Glad to hear it!

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-08-30T01:09:19.916Z · score: 2 (1 votes) · EA · GW

I guess a practical way to measure creativity could be to give candidates a take-home problem which is a description of one of the organization's current challenges :P I suspect take-home problems are in general a better way to measure creativity, because if it's administered in a conversational interview context, I imagine it'd be more of a test of whether someone can be relaxed & creative under pressure.

BTW, another point related to creativity and exclusivity is that outsiders often have a fresh perspective which brings important new ideas.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-08-30T01:04:50.880Z · score: 2 (1 votes) · EA · GW

Oh interesting, I was thinking it would be bad to correct for measurement error in the work sample (since measurement error is a practical concern when it comes to how predictive it is). But I guess you're right that it would be reasonable to correct for measurement error in the measure of employee performance.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-08-29T06:00:51.208Z · score: 2 (1 votes) · EA · GW

Ah, thanks! So as a practical matter it seems like we probably shouldn't correct for attenuation in this context and lean towards the correlation coefficient being more like 0.26? Honestly that seems a bit implausibly low. Not sure how much stock to put in this paper even if it is a meta-analysis. Maybe better to read it before taking it too seriously.
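
For reference, Spearman's correction for attenuation divides the observed correlation by the square root of the product of the two measures' reliabilities. A minimal sketch with made-up reliability numbers (the actual reliabilities in the meta-analysis may differ):

```python
# Spearman's correction for attenuation; reliabilities here are hypothetical.
def disattenuate(r_observed, reliability_x, reliability_y):
    return r_observed / (reliability_x * reliability_y) ** 0.5

print(disattenuate(0.26, reliability_x=0.8, reliability_y=0.6))  # ~0.38
```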

Comment by john_maxwell_iv on How to Make Billions of Dollars Reducing Loneliness · 2019-08-28T07:26:31.998Z · score: 4 (2 votes) · EA · GW

It could work. However, Tinder works well because people can quickly guess whether they want to date someone based on physical attraction. I don't think there is a single easy-to-evaluate factor which predicts roommate compatibility. Also, moving in with someone is a bigger commitment than going on a date with them.

Comment by john_maxwell_iv on Movement Collapse Scenarios · 2019-08-28T04:23:25.993Z · score: 28 (13 votes) · EA · GW

Nice post!

Re: sequestration, OpenPhil has written about the difficulty of getting honest, critical feedback as a grantmaker. This seems like something all grantmakers should keep in mind. The danger seems especially high for organizations like OpenPhil and CEA, which are making grants all over the EA movement through EA Grants and EA Funds. Unfortunately, some reports from ex-employees of CEA on Glassdoor give me the impression CEA is not as proactive in its self-skepticism as OpenPhil:

Not terribly open to honest self-assessment, but no more so than the average charity.

...

As another reviewer mentioned, ironically hostile to honest self-assessment, let alone internal concerns about effectiveness - I saw and heard of some people who'd got significant grief for this. Groupthink and back-patting was more rewarded.

I've also heard an additional anecdote about CEA, independent of Glassdoor, which is compatible with this impression.

The question of whether and how much to prioritize those who appear most talented is tricky. I get the impression there has been a gradual but substantial update away from mass outreach over the past few years (though some answers in Will's AMA make me wonder if he and maybe others are trying to push back against what they see as excessive "hero worship" etc.). Anyway, some thoughts on this:

  • I think it's not always obvious how much of the work attributed to one famous person should really be credited to a much larger team. For example, one friend of mine cited the massive amount of money Bill Gates made as evidence that impact is highly disproportionate. However, I would guess in many cases, successful entrepreneurs at the $100M+ scale are distinguished by their ability to identify & attract great people to work for their company. I think maybe there is some quirk of our society where we want to credit just a few individuals with an impressive accomplishment even when the "correct" assignment of credit doesn't actually follow a power law distribution. [For a concrete example where we have data available, I think claims about Wikipedia editor contributions following a power law distribution have been refuted.]

  • Even in cases where individual impact will be power law distributed, that doesn't mean we can reliably identify the people at the top of the distribution in advance. For example, this paper apparently found that work sample tests only correlated with job performance at around 0.26-0.33! (Not sure what "attenuation" means in this context.) Anyway, maybe we could do some analysis: If you have an applicant pool of N applicants, and you're going to hire the top K applicants based on a work sample test which correlates with job performance at 0.3, what does K need to be for you to have a 90% chance of hiring the best applicant? (A simulation sketch appears after this list. I'd actually argue that the premise of this question is flawed, because the hypothetical 10x applicant is probably going to achieve 10x performance through some creative insights which the work sample test predicts even less well, but I'd still be interested in seeing the results of the analysis. Actually, speaking of creativity, have any EA organizations experimented with using tests of creative ability in their hiring?)

  • Finally, I think it could be useful to differentiate between "elitism" and "exclusivity". For example, I once did some napkin math suggesting that less than 0.01% of the people who watch Peter Singer's TED talk later become EAs. So arguably, this is actually a pretty strong signal of dedication & willingness to take ideas seriously compared to, say, someone who was persuaded to become an EA through an element of peer pressure after several friends became interested. But the second person is probably going to be better connected within EA. So if the movement becomes more "exclusive", in the sense of using someone's position in the social scene as a proxy for their importance, I suspect we'd be getting it wrong. When I think of the EAs who seem very dedicated to making an impact, people I'm excited about, they're often people who came to EA on their own and in some cases still aren't very well-connected.
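
Here is a minimal Monte Carlo sketch of the hiring analysis proposed above. The pool size N and the bivariate-normal model are my assumptions, not something from the cited paper:

```python
# Draw applicants' true performance and work-sample scores as correlated
# normals (r = 0.3), then estimate how often the truly best applicant lands
# in the top K by test score.
import numpy as np

rng = np.random.default_rng(0)
N, r, trials = 100, 0.3, 20_000
Ks = (5, 10, 25, 50)
hits = {K: 0 for K in Ks}
for _ in range(trials):
    performance = rng.standard_normal(N)
    score = r * performance + np.sqrt(1 - r**2) * rng.standard_normal(N)
    best = np.argmax(performance)
    rank_of_best = (score > score[best]).sum()  # 0 = the true best also topped the test
    for K in Ks:
        hits[K] += rank_of_best < K
for K in Ks:
    print(f"P(true best applicant in top {K} of {N} by test score) ≈ {hits[K] / trials:.2f}")
```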

Comment by john_maxwell_iv on How to Make Billions of Dollars Reducing Loneliness · 2019-08-27T04:48:57.346Z · score: 3 (2 votes) · EA · GW

OKCupid was huge before Tinder came along in the US. And as I mentioned, RoomieMatch is already pretty big. That said, it's possible there wouldn't be as much of a market for this in Germany. One approach is to start in a city with lots of early adopters who like trying weird new stuff (San Francisco is traditional) and gradually expand as the product concept is normalized. But sometimes things don't go much beyond early adopters.

Comment by john_maxwell_iv on How to Make Billions of Dollars Reducing Loneliness · 2019-08-26T20:35:48.719Z · score: 3 (4 votes) · EA · GW

Facebook and Google have an incentive to track their users because they sell targeted advertising. The user isn't the customer, they are the product. This is an atypical business model.

One thing about the real estate business is that because so much money is changing hands, there's a big incentive to cut out the middleman. (Winning Through Intimidation is a fascinating book about this.) I would highly recommend you avoid actions which run the slightest risk of pissing your customers off, lest they cut a deal with the property owner directly. Airbnb will ban anyone who exchanges money outside their platform, but that's less of a threat here because people don't change homes frequently. With the amount of money you're making per customer, you should be able to afford an army of customer service people in order to provide a high-touch customer experience.

There are a few reasons I think for-profit is generally preferable to non-profit when possible:

  • It's easier to achieve scale as a for-profit.
  • For-profit businesses are accountable to their customers. They usually only stay in business if customers are satisfied with the service they provide. Non-profits are accountable to their donors. The impressions of donors correlate imperfectly with the extent to which real needs are being served.
  • First worlders usually aren't poor and don't need charity.
  • You can donate the money you make to effective charities.

Comment by john_maxwell_iv on How to Make Billions of Dollars Reducing Loneliness · 2019-08-26T20:19:24.066Z · score: 8 (5 votes) · EA · GW

I have a lot more ideas than I know what to do with. So I try to prioritize ruthlessly. I feel like I've got a comparative advantage working on AI stuff and a comparative disadvantage starting a company like this one. I'm experimenting with posting some of my ideas to the EA Forum to see if they can be useful to other people, e.g. folks who wanted to get a job at an EA organization but weren't successful.

Comment by john_maxwell_iv on How to generate research proposals · 2019-08-03T06:07:31.716Z · score: 3 (2 votes) · EA · GW

Generating and prioritizing research proposals seems to be a critical part of strategic research, and my informal impression is that systematic approaches are quite underexplored.

This is also my impression.

I guess some people will probably be just as interested in pursuing the research ideas of others as pursuing their own. For those people, maybe creating a thread here or on Facebook with a message like "I'm looking for research ideas on Topic X" would work?

Personally, I've been noting down research ideas on EA topics that interest me (AI safety and improved institutions would be the two big ones I guess) for quite a while, and I'm pursuing the ideas at a much lower rate than I'm noting them down! So maybe it'd be good for me to connect with people who are hunting for ideas somehow?

Comment by john_maxwell_iv on Four practices where EAs ought to course-correct · 2019-08-03T05:53:52.453Z · score: 18 (9 votes) · EA · GW

Keep in mind that Twitter users are a non-representative sample of the population... Please don't accept kbog's proposed deal with the devil in order to become popular in Twitter's malign memetic ecosystem.

Comment by john_maxwell_iv on Four practices where EAs ought to course-correct · 2019-08-03T05:48:52.562Z · score: 42 (16 votes) · EA · GW

Given that ruthlessness has downside risks, maybe we should brainstorm a number of new ideas for movement growth (assuming movement growth is, in fact, valuable) instead of jumping straight to ruthlessness?

In today's world, people don't care how "ethical" or "nice" you are if you are on the wrong team, and people who don't have a team won't be motivated to action unless you give them one.

This is a terrible incentive gradient. I would much rather we make an EA project out of changing or mitigating this incentive gradient than give in to it.

Yes, we could have a large number of people who call themselves "EAs", and all they care about is whether you are on the right team... but would it be an EA movement worth the name?

Please read this post: https://www.effectivealtruism.org/articles/hard-to-reverse-decisions-destroy-option-value/

Comment by john_maxwell_iv on The EA Forum is a News Feed · 2019-08-03T05:28:53.678Z · score: 2 (1 votes) · EA · GW

Another option is to use topic modeling software to automatically infer & assign tags. I might be interested in working on this. An advantage of using software is that it doesn't require continuous volunteer commitment.
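
A minimal auto-tagging sketch using LDA from scikit-learn; the post texts and topic count here are placeholders:

```python
# Infer topics from post text with Latent Dirichlet Allocation, then tag
# each post with its highest-probability topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [  # placeholder post texts
    "global poverty cash transfers givedirectly evidence",
    "ai safety alignment machine learning risk",
    "animal welfare corporate campaigns chicken",
]

X = CountVectorizer().fit_transform(posts)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
print(lda.transform(X).argmax(axis=1))  # topic tag per post
```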

A recommendation system which displays related posts could also be helpful for discovery.

Comment by john_maxwell_iv on Extinguishing or preventing coal seam fires is a potential cause area · 2019-07-30T07:33:25.945Z · score: 2 (1 votes) · EA · GW

There are also many companies that sell carbon credits to commercial and individual customers who are interested in lowering their carbon footprint on a voluntary basis.

Wikipedia

Comment by john_maxwell_iv on “Just take the expected value” – a possible reply to concerns about cluelessness · 2019-07-30T06:54:42.513Z · score: 2 (1 votes) · EA · GW

I think this is the jargon: https://en.wikipedia.org/wiki/Posterior_predictive_distribution
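
For reference, the posterior predictive distribution averages the likelihood of new data $\tilde{y}$ over the posterior for the parameters $\theta$:

```latex
p(\tilde{y} \mid y) = \int p(\tilde{y} \mid \theta)\, p(\theta \mid y)\, d\theta
```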

Comment by john_maxwell_iv on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-17T05:25:35.603Z · score: 4 (3 votes) · EA · GW

Oh I thought you were talking about popularity contest dynamics for arguments, not causes.

Sounds like you are positing a Matthew Effect where causes which many people are already working on will tend to have greater awareness (due to greater visibility) and also greater credibility ("so many people are working on this cause, they must be on to something!"). Newcomers to EA will probably be especially tempted by causes which many people are already working on, since they won't feel they are in a position to evaluate causes for themselves.

If true, an unfortunate side effect would be that neglected causes tend to remain neglected.

I think in practice how things work nowadays is that there are a few organizations in the community (OpenPhil, 80K) which have a lot of credibility and do their own in-depth evaluation of causes, and EA resources end up getting directed based on their evaluations. I'm not sure this is such a bad setup overall.

Comment by john_maxwell_iv on Age-Weighted Voting · 2019-07-15T06:15:22.663Z · score: 2 (1 votes) · EA · GW

This is an exciting idea. My guess is that public buy-in would be easier than you might think; my impression is that the horse race aspect of betting markets appeals to the public and creates TV coverage etc. However, I think the surveys could be an issue. I suspect many people responding to surveys about events which happened 10-30 years ago would be doing so with the aim of influencing the betting markets which affect near future policy. There might end up being a meta-game regarding who will answer surveys 10-30 years down the line and what agenda they will have in mind.

Comment by john_maxwell_iv on Age-Weighted Voting · 2019-07-15T05:44:10.988Z · score: 16 (7 votes) · EA · GW

I would at least suggest that 18-25 yo voters not have a multiplier.

Yes. As a reductio ad absurdum of Will's idea, why not give toddlers an extreme multiplier? Well, we know toddlers don't make good judgments. But it's not like your ability to make good judgments suddenly turns a corner on your 18th birthday. So as long as we're refactoring voting weights for different ages, we should also fix the 18th-birthday step-function issue, and create a scheme which gradually accounts for a person's increased wisdom as they age.

[Edit: A countervailing consideration is that if you make your scheme too wonky, it may not gather broad support.]
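
A minimal sketch of what one such gradual weighting could look like; the logistic shape and its parameters are arbitrary choices for illustration, not a concrete proposal:

```python
# Smooth ramp replacing the 18th-birthday step function; all parameters
# are arbitrary illustrative assumptions.
import math

def vote_weight(age, midpoint=25, steepness=0.4):
    """Logistic ramp from ~0 toward 1 as age increases."""
    return 1 / (1 + math.exp(-steepness * (age - midpoint)))

for age in (16, 18, 25, 40, 70):
    print(age, round(vote_weight(age), 2))
```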

(I also think randomly selecting a small number of voters jury selection-style, to address the public goods problem inherent in becoming an informed & thoughtful voter, would probably be a higher-leverage improvement... but that's another discussion.)

Comment by john_maxwell_iv on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-15T05:31:35.621Z · score: 24 (7 votes) · EA · GW

Nice post!

Why are popularity-contest dynamics harmful, precisely? I suppose one argument is: If you are looking for the best new argument against psychedelics, popularity-contest dynamics are likely to get you the argument that resonates with the most people, or perhaps the argument that the most people can understand, or the argument that the most people had in their head already. These could still be useful to learn about, though.

For judging, you could always get a third party to judge. I'm also curious about a prize format like "$X to anyone who's able to change my mind substantially about Y". (This might be the closest thing I've seen to that so far.) Or a prize format which attempts to measure & reward novelty/variety among the responses somehow.

You mentioned status quo bias. It's interesting that all 3 of the prizes you link at the top are cases where people presented a new EA initiative and paid the community for the best available critiques. One idea for evening things out is to offer prizes for the best arguments against established EA donation targets! I do think you're right that more outsider-y causes are asked to meet a higher standard of support.

  • For example, this recent post on EA as an ideology did very little to critique global poverty, but there's a provocative argument that our focus on global poverty is one of the most ideological aspects of EA: It is easily the most popular EA cause area, but my impression is that less has been written to justify a focus on global poverty than on other cause areas--it seems to have been "grandfathered in" due to the drowning child argument.

  • Similarly, we could turn the tables on the EA Hotel discussion by asking mainstream EA orgs to justify why they pay their employees such high salaries to live in high cost of living areas. I've also heard tales through the grapevine about the perverse incentives created by the need to fundraise for projects in EA, and my perception is that this is a big issue in the cause area I'm most excited about (AI safety). (Here is a recent LW thread on this topic.)

Comment by john_maxwell_iv on [Link] "The AI Timelines Scam" · 2019-07-12T05:18:31.912Z · score: 19 (9 votes) · EA · GW

This seems like selective presentation of the evidence. You haven't talked about AlphaZero or generative adversarial networks, for instance.

Not just in how any data-based algorithm engineering is 80% data cleaning while everyone pretends the power is in having clever algorithms

80% by what metric? Is your claim that Facebook could find your face in a photo using logistic regression if it had enough clean data? (If so, can you show me a peer-reviewed paper supporting this claim?)

Presumably you are saying something like: "80% of the human labor which goes into making these systems is data cleaning labor". First, I don't know if this is true. It seems like a hard claim to substantiate, because you'd have to get internal time usage data from a random sample of different organizations doing ML work. Anecdotes from social media are likely to lead us astray in this area, because "humans do most of the work that 'AI' is supposedly doing" is more of a "man bites dog" story and more likely to go viral.

But second... even if 80% of the hours spent are data cleaning hours, it's not totally clear how this is relevant. This could just as easily be a story about how general-purpose and easy-to-use machine learning libraries are, because "once you plug them in and press go, most of the time is spent giving the system examples of what you want it to do. (A child could do it!)"

startups use human labor to pretend they have advanced AI

A friend of mine started a software startup which did not pretend to use any advanced AI whatsoever. However, he still did most email interactions with users by hand in the early days, because he wanted a deep understanding of how people were using his product. The existence of companies not using AI to power their products in no way refutes the existence of companies that do! And if you read the links in your post, their takes are significantly more nuanced than yours (Woebot does in fact use AI, '“Everything was perfect,” Mr. Park said in an interview after conversing with the Google bot. “It’s like a real person talking.”')

I think a common opinion is that current deep learning tech will not get us to AGI, but we have recently acquired important new abilities we didn't have before, we are using those abilities to do cool stuff we couldn't previously do, and it's possible we'll have AGI after acquiring some number of additional key insights.

Even if deep learning is a local maximum which has just gotten us a few more puzzle pieces--my personal view--it's possible that renewed interest in this area will end up producing AGI through some other means. I suspect that hype cycles in AI cause us to be overoptimistic about the ease of AGI during periods with lots of hype, and underoptimistic during periods of little hype. (From an EA perspective, the best outcome might be if the hype dies down but EAs keep working on it, to increase the probability that AGI is built by an EA team.) But at the end of the day, throwing research hours at problems tends to result in progress, and right now lots of research hours are being thrown at AI problems. I also think researchers tend to make more breakthroughs when they are feeling excited and audacious. That's when I get my best ideas, at least.

Comment by john_maxwell_iv on Extinguishing or preventing coal seam fires is a potential cause area · 2019-07-09T04:27:25.480Z · score: 9 (6 votes) · EA · GW

Could someone start a business putting these fires out and make money selling carbon credits?

Comment by john_maxwell_iv on Please May I Have Reading Suggestions on Consistency in Ethical Frameworks · 2019-07-09T04:23:25.639Z · score: 7 (4 votes) · EA · GW

You might separately wonder about how much weight should be given to our judgements about particular cases vs our judgements about general principles on a spectrum from hyper-particularism to hyper-methodism.

Could this be considered similar to the bias/variance tradeoff in machine learning?

Comment by john_maxwell_iv on Corporate campaigns affect 9 to 120 years of chicken life per dollar spent · 2019-07-09T04:08:15.105Z · score: 11 (4 votes) · EA · GW

With regard to the follow-through rate... my assumption is that improving welfare will raise costs, and higher costs will cause customers to switch providers. Are you at all worried about companies that follow through going out of business?

I wonder if companies that follow through would be interested in sponsoring legislation that forces their competitors to also improve welfare? That could help solve this problem maybe?

In any case, this might be an argument for people interested in farm animal welfare to concentrate their efforts on improving welfare for one animal product in one country at a time. (Or, if you're acting as an individual, try to figure out which animal product is currently getting the most pressure from activists and add to that pressure through your individual actions.) If a particular market is an oligopoly, and all the firms in the oligopoly can be persuaded to raise welfare standards simultaneously, it seems like they face less risk of going out of business. Note that what's important is the animal product, not the animal itself. My guess is that eggs are an easier target than chicken meat, for instance, because if you target chicken meat, people will probably substitute chicken meat with beef & pork to some degree as chicken prices rise, putting the chicken companies at risk. Additionally, it might make sense to concentrate on particular industries, e.g. hotels, high-end restaurants, fast food restaurants, etc. Presumably McDonald's is more worried about being undercut by Burger King than by Marriott. From a game theory point of view, I think this could be considered a prisoner's dilemma for the companies, so ideally there would be some enforcement mechanism for cooperation, i.e. contracts that companies sign such that they have to pay their competitors if they don't follow through on their commitments. It might be worth studying the parallels to cartel formation in oligopolistic competition.
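
A toy payoff matrix makes the prisoner's dilemma structure concrete; all payoffs are made-up illustrative numbers:

```python
# Each firm does better by defecting no matter what the other does, even
# though mutual commitment beats mutual defection -- hence the value of an
# enforcement mechanism. Payoffs are hypothetical profits.
payoffs = {  # (firm A's move, firm B's move) -> (A's payoff, B's payoff)
    ("raise", "raise"):   (10, 10),  # both raise: higher costs, but no lost share and campaigns end
    ("raise", "defect"):  (2, 12),   # A raises alone and gets undercut by B
    ("defect", "raise"):  (12, 2),
    ("defect", "defect"): (8, 8),    # status quo: ongoing activist pressure on both
}
for moves, (a, b) in payoffs.items():
    print(moves, "->", "A:", a, "B:", b)
```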

Comment by john_maxwell_iv on New study in Science implies that tree planting is the cheapest climate change solution · 2019-07-06T18:09:52.625Z · score: 8 (6 votes) · EA · GW

Some companies are trying to make reforestation cheaper using drones:

https://www.youtube.com/watch?v=EkNdrTZ7CG4

Working at a company like this could be high-impact, and could also be a good way to build career capital in AI/robotics/machine learning.

Comment by john_maxwell_iv on Effective Altruism is an Ideology, not (just) a Question · 2019-06-29T09:22:49.721Z · score: 19 (10 votes) · EA · GW

I don't see how you can accuse only other ideologies of being full of groupthink and having the right politics, even though most posts on the EA forum that don't agree with the ideological tenets listed in the OP tend to get heavily downvoted.

This post of yours is at +28. The most upvoted comment is a request to see more stuff from you. If EA was an ideology, I would expect to see your post at a 0 or negative score.

There's no shortage of subreddits where stuff that goes against community beliefs rarely scores above 0. I would guess most subreddits devoted to feminism & libertarianism have this property, for instance.

Comment by john_maxwell_iv on Effective Altruism is an Ideology, not (just) a Question · 2019-06-29T05:48:22.122Z · score: 20 (13 votes) · EA · GW

I see Helen's post as being more prescriptive than descriptive. It's something to aspire to, and declaring that "Effective Altruism is an Ideology" feels like giving up. Instead of "defending" against "competing" ideological perspectives, why not adopt the best of what they have to offer?

I also think you're being a little unfair. Time & attention for evaluating ideas & publishing analysis is limited, and in several cases there is work you don't seem aware of.

I'll grant that EA may have an essentially consequentialist outlook (though even on this point, I'd argue many EAs are too open to other moral philosophies to qualify for the adjective "ideological"; see e.g. the discussion of non-consequentialist ethics in this podcast with EA co-founder Will MacAskill).

But some of your other claims feel too strong to me. For example, even if it's true that no EA organization has ever made use of ethnography, I don't think that's because we're ideologically opposed to ethnography in the way that, say, libertarians are ideologically opposed to government coercion. As anonymous_ea points out, ethnography was just recently a topic of interest here on the forum. It seems plausible to me that we're talking about and making use of ethnography at about the same rate as the research world at large (that is to say, not very much).

Similarly, using phenomenology to determine the value of different types of life sounds like Qualia Research Institute, and I believe CEA has examined historical case studies related to social movements. Just because you aren't personally aware of it doesn't mean someone in EA isn't doing it, and it certainly doesn't mean EA is ideologically opposed to it.

With regard to "devising and implementing alternatives to global capitalism", 80k did a podcast on that. This is the sort of podcast I'd expect to see in the world where EA is a question, and 80k is always talking to experts in different areas, exploring new possible cause areas for EA. Here's a post on socialism you might be interested in.

Similarly, there is an effective environmentalism group with hundreds of members in it. Here is a post on an EA blog attempting to address more or less exactly the issue you outline ("serious evidence-based research into the specific questions I present is highly neglected, even if the broader areas are not") with regard to environmentalism. And at a recent EA conference, I attended a presentation which argued that global warming should be a higher priority for EAs.

It doesn't feel to me like EAs are ideologically opposed to environmentalism with anything like the vigor with which feminists and libertarians ideologically oppose things. Instead it seems like EAs investigate environmentalism, and some folks argue for it and work on it, but those arguments haven't been strong enough to make environmentalism the primary focus of most EAs. 80k places global warming under the category of "areas that are especially important but somewhat less neglected".

Anyway, an argument that uniquely picks out AI safety is: If we can solve AI safety and create a superintelligent FAI, it can solve all the other problems on your list. I don't think this argument is original to me; I suspect it came up when FHI did research on which existential risks to focus on many years ago. A quick look at the table of contents of this book shows FHI spent plenty of time considering existential risks unrelated to new technologies. I think OpenPhil did their own broad research and ended up coming to conclusions similar to FHI's.

With regard to the Global Priorities Institute, and the importance of x-risk, longtermism has received a fair amount of discussion. Nick Beckstead wrote an entire PhD thesis on it.

Regarding the claim that emerging technologies are EA's main focus, I want to highlight these results from the EA Survey. Note that the fourth most popular cause is cause prioritization. You write: "My point is not that the candidate causes I have presented actually are good causes for EAs to work on". However, if we're trying to figure out whether we should devote even more resources to investigating unexplored causes to do the most good, the ease of finding good causes which are currently ignored seems like an important factor.

In addition to being a question, EA is also a community and a memeplex. It's important to listen to people outside the community in case people are self-selecting in or out based on incidental factors. And I believe in upvoting uncommon perspectives on this forum to encourage a diversity of opinions. But let's not give up and start calling ourselves an ideology. I would rather have an ecosystem of competing ideas than a body of doctrine--and luckily, I think we're already closer to an ecosystem, so let's keep it that way.