Posts

How Flying Cars Will Solve Global Poverty 2019-04-01T20:56:47.829Z · score: 21 (12 votes)
Open Thread #43 2018-12-08T05:39:37.672Z · score: 8 (4 votes)
Open Thread #41 2018-09-03T02:21:51.927Z · score: 4 (4 votes)
Five books to make you super effective 2015-04-02T02:31:48.509Z · score: 6 (6 votes)

Comments

Comment by john_maxwell_iv on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-17T05:25:35.603Z · score: 4 (3 votes) · EA · GW

Oh, I thought you were talking about popularity-contest dynamics for arguments, not causes.

Sounds like you are positing a Matthew effect, where causes which many people are already working on will tend to have greater awareness (due to greater visibility) and also greater credibility ("so many people are working on this cause, they must be on to something!"). Newcomers to EA will probably be especially tempted by causes which many people are already working on, since they won't feel they are in a position to evaluate causes for themselves.

If true, an unfortunate side effect would be that neglected causes tend to remain neglected.

I think in practice how things work nowadays is that there are a few organizations in the community (OpenPhil, 80K) which have a lot of credibility and do their own in-depth evaluation of causes, and EA resources end up getting directed based on their evaluations. I'm not sure this is such a bad setup overall.

Comment by john_maxwell_iv on Age-Weighted Voting · 2019-07-15T06:15:22.663Z · score: 2 (1 votes) · EA · GW

This is an exciting idea. My guess is that public buy-in would be easier than you might think; my impression is that the horse-race aspect of betting markets appeals to the public and generates TV coverage, etc. However, I think the surveys could be an issue. I suspect many people responding to surveys about events which happened 10-30 years ago would be doing so with the aim of influencing the betting markets which affect near-future policy. There might end up being a meta-game around who will answer surveys 10-30 years down the line and what agenda they will have in mind.

Comment by john_maxwell_iv on Age-Weighted Voting · 2019-07-15T05:44:10.988Z · score: 15 (6 votes) · EA · GW

I would at least suggest that 18-25 yo voters not have a multiplier.

Yes. As a reductio ad absurdum of Will's idea, why not give toddlers an extreme multiplier? Well, we know toddlers don't make good judgments. But it's not like your ability to make good judgments suddenly turns a corner on your 18th birthday. So as long as we're refactoring voting weights for different ages, we should also fix the 18th-birthday step-function issue and create a scheme which gradually accounts for a person's increased wisdom as they age.
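As a sketch of what I mean (the logistic shape and all the numbers are entirely made up, purely to illustrate "gradual ramp instead of a step at 18", not a proposal for actual weights):

```python
# Hedged sketch: a vote weight that ramps up smoothly with age instead of
# jumping from 0 to 1 on the 18th birthday. Midpoint and steepness are
# arbitrary illustrative choices.
import math

def vote_weight(age, midpoint=21.0, steepness=0.4, max_weight=1.0):
    """Smoothly increasing weight in (0, max_weight); no discontinuity at 18."""
    return max_weight / (1.0 + math.exp(-steepness * (age - midpoint)))

for age in (12, 16, 18, 21, 25, 40, 70):
    print(age, round(vote_weight(age), 2))
```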

[Edit: A countervailing consideration is that if you make your scheme too wonky, it may not gather broad support.]

(I also think randomly selecting a small number of voters jury selection-style, to address the public goods problem inherent in becoming an informed & thoughtful voter, would probably be a higher-leverage improvement... but that's another discussion.)

Comment by john_maxwell_iv on Debrief: "cash prizes for the best arguments against psychedelics" · 2019-07-15T05:31:35.621Z · score: 24 (7 votes) · EA · GW

Nice post!

Why are popularity-contest dynamics harmful, precisely? I suppose one argument is: If you are looking for the best new argument against psychedelics, popularity-contest dynamics are likely to get you the argument that resonates with the most people, or perhaps the argument that the most people can understand, or the argument that the most people had in their head already. These could still be useful to learn about, though.

For judging, you could always get a third party to judge. I'm also curious about a prize format like "$X to anyone who's able to change my mind substantially about Y". (This might be the closest thing I've seen to that so far.) Or a prize format which attempts to measure & reward novelty/variety among the responses somehow.

You mentioned status quo bias. It's interesting that all 3 of the prizes you link at the top are cases where people presented a new EA initiative and paid the community for the best available critiques. One idea for evening things out is to offer prizes for the best arguments against established EA donation targets! I do think you're right that more outsider-y causes are asked to meet a higher standard of support.

  • For example, this recent post on EA as an ideology did very little to critique global poverty, but there's a provocative argument that our focus on global poverty is one of the most ideological aspects of EA: it is easily the most popular EA cause area, but my impression is that less has been written to justify a focus on global poverty than on other cause areas--it seems to have been "grandfathered in" due to the drowning child argument.

  • Similarly, we could turn the tables on the EA Hotel discussion by asking mainstream EA orgs to justify why they pay their employees such high salaries to live in high cost of living areas. I've also heard tales through the grapevine about the perverse incentives created by the need to fundraise for projects in EA, and my perception is that this is a big issue in the cause area I'm most excited about (AI safety). (Here is a recent LW thread on this topic.)

Comment by john_maxwell_iv on [Link] "The AI Timelines Scam" · 2019-07-12T05:18:31.912Z · score: 19 (9 votes) · EA · GW

This seems like selective presentation of the evidence. You haven't talked about AlphaZero or generative adversarial networks, for instance.

Not just in how any data-based algorithm engineering is 80% data cleaning while everyone pretends the power is in having clever algorithms

80% by what metric? Is your claim that Facebook could find your face in a photo using logistic regression if it had enough clean data? (If so, can you show me a peer-reviewed paper supporting this claim?)

Presumably you are saying something like: "80% of the human labor which goes into making these systems is data cleaning labor". First, I don't know if this is true. It seems like a hard claim to substantiate, because you'd have to get internal time usage data from a random sample of different organizations doing ML work. Anecdotes from social media are likely to lead us astray in this area, because "humans do most of the work that 'AI' is supposedly doing" is more of a "man bites dog" story and more likely to go viral.

But second... even if 80% of the hours spent are data cleaning hours, it's not totally clear how this is relevant. This could just as easily be a story about how general-purpose and easy-to-use machine learning libraries are, because "once you plug them in and press go, most of the time is spent giving the system examples of what you want it to do. (A child could do it!)"

startups use human labor to pretend they have advanced AI

A friend of mine started a software startup which did not pretend to use any advanced AI whatsoever. However, he still did most email interactions with users by hand in the early days, because he wanted a deep understanding of how people were using his product. The existence of companies not using AI to power their products in no way refutes the existence of companies that do! And if you read the links in your post, their takes are significantly more nuanced than yours (Woebot does in fact use AI, and one of the linked articles quotes: '"Everything was perfect," Mr. Park said in an interview after conversing with the Google bot. "It's like a real person talking."').

I think a common opinion is that current deep learning tech will not get us to AGI, but we have recently acquired important new abilities we didn't have before, we are using those abilities to do cool stuff we couldn't previously do, and it's possible we'll have AGI after acquiring some number of additional key insights.

Even if deep learning is a local maximum which has just gotten us a few more puzzle pieces--my personal view--it's possible that renewed interest in this area will end up producing AGI through some other means. I suspect that hype cycles in AI cause us to be overoptimistic about the ease of AGI during periods with lots of hype, and overly pessimistic during periods with little hype. (From an EA perspective, the best outcome might be if the hype dies down but EAs keep working on it, to increase the probability that AGI is built by an EA team.) But at the end of the day, throwing research hours at problems tends to result in progress, and right now lots of research hours are being thrown at AI problems. I also think researchers tend to make more breakthroughs when they are feeling excited and audacious. That's when I get my best ideas, at least.

Comment by john_maxwell_iv on Extinguishing or preventing coal seam fires is a potential cause area · 2019-07-09T04:27:25.480Z · score: 9 (6 votes) · EA · GW

Could someone start a business putting these fires out and make money selling carbon credits?

Comment by john_maxwell_iv on Please May I Have Reading Suggestions on Consistency in Ethical Frameworks · 2019-07-09T04:23:25.639Z · score: 5 (3 votes) · EA · GW
You might separately wonder about how much weight should be given to our judgements about particular cases vs our judgements about general principles on a spectrum from hyper-particularism to hyper-methodism.

Could this be considered similar to the bias/variance tradeoff in machine learning?
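For readers who want the tradeoff itself spelled out, here is a minimal sketch (toy data, numpy only; the mapping to methodism vs particularism is of course only loose): an inflexible model has high bias and low variance, while a very flexible one has the reverse.

```python
# Hedged illustration of the bias/variance tradeoff: fit polynomials of
# different degree to many resampled noisy datasets drawn from sin(2*pi*x),
# then measure squared bias and variance of the predictions on a test grid.
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: np.sin(2 * np.pi * x)
x_test = np.linspace(0.05, 0.95, 200)

def fit_and_predict(degree, n_points=15):
    x = rng.uniform(0, 1, n_points)
    y = true_f(x) + rng.normal(0, 0.3, n_points)
    return np.polyval(np.polyfit(x, y, degree), x_test)

for degree in (1, 3, 9):
    preds = np.array([fit_and_predict(degree) for _ in range(200)])
    bias_sq = np.mean((preds.mean(axis=0) - true_f(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    print(f"degree {degree}: bias^2 = {bias_sq:.3f}, variance = {variance:.3f}")
```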

Comment by john_maxwell_iv on Corporate campaigns affect 9 to 120 years of chicken life per dollar spent · 2019-07-09T04:08:15.105Z · score: 11 (4 votes) · EA · GW

With regard to the follow-through rate... my assumption is that improving welfare will raise costs, and higher costs will cause customers to switch providers. Are you at all worried about companies that follow through going out of business?

I wonder if companies that follow through would be interested in sponsoring legislation that forces their competitors to also improve welfare? Maybe that could help solve this problem.

In any case, this might be an argument for people interested in farm animal welfare to concentrate their efforts on improving welfare for one animal product in one country at a time. (Or, if you're acting as an individual, try to figure out which animal product is currently getting the most pressure from activists and add to that pressure through your individual actions.) If a particular market is an oligopoly, and all the firms in the oligopoly can be persuaded to raise welfare standards simultaneously, they face less risk of going out of business.

Note that what's important is the animal product, not the animal itself. My guess is that eggs are an easier target than chicken meat, for instance, because if you target chicken meat, people will probably substitute beef & pork for chicken to some degree as chicken prices rise, putting the chicken companies at risk. It might also make sense to concentrate on particular industries, e.g. hotels, high-end restaurants, fast food restaurants, etc. Presumably McDonald's is more worried about being undercut by Burger King than by Marriott.

From a game theory point of view, I think this could be considered a prisoner's dilemma for the companies, so ideally there would be some enforcement mechanism for cooperation, e.g. contracts that companies sign such that they owe their competitors money if they don't follow through on their commitments. It might be worth studying the parallels to cartel formation in oligopolistic competition.
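To make the prisoner's dilemma point concrete, here is a minimal sketch with made-up payoff numbers (purely illustrative, not estimates of anything): without a penalty clause, defecting on the welfare commitment is each firm's best response, and a large enough contractual penalty flips that.

```python
# Hedged toy model: two firms that pledged higher welfare standards decide
# whether to honor or defect. Payoffs are hypothetical profits. A breach
# penalty, paid to the competitor, can make honoring the best response.
PAYOFFS = {
    # (firm_a_action, firm_b_action): (firm_a_profit, firm_b_profit)
    ("honor", "honor"):   (8, 8),    # both raise costs; neither is undercut
    ("honor", "defect"):  (2, 12),   # A raises costs and gets undercut by B
    ("defect", "honor"):  (12, 2),
    ("defect", "defect"): (5, 5),    # status quo
}
ACTIONS = ["honor", "defect"]

def adjusted(a, b, penalty):
    """Payoffs after applying a contractual penalty for defecting."""
    pa, pb = PAYOFFS[(a, b)]
    if a == "defect":
        pa, pb = pa - penalty, pb + penalty
    if b == "defect":
        pb, pa = pb - penalty, pa + penalty
    return pa, pb

def best_responses(penalty):
    """Firm A's best action against each possible action of firm B."""
    return {b: max(ACTIONS, key=lambda a: adjusted(a, b, penalty)[0])
            for b in ACTIONS}

print(best_responses(penalty=0))  # {'honor': 'defect', 'defect': 'defect'}
print(best_responses(penalty=6))  # {'honor': 'honor', 'defect': 'honor'}
```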

Comment by john_maxwell_iv on New study in Science implies that tree planting is the cheapest climate change solution · 2019-07-06T18:09:52.625Z · score: 8 (6 votes) · EA · GW

Some companies are trying to make reforestation cheaper using drones:

https://www.youtube.com/watch?v=EkNdrTZ7CG4

Working at a company like this could be high-impact, and could also be a good way to build career capital in AI/robotics/machine learning.

Comment by john_maxwell_iv on Effective Altruism is an Ideology, not (just) a Question · 2019-06-29T09:22:49.721Z · score: 19 (10 votes) · EA · GW
I don't see how you can accuse only other ideologies of being full of groupthink and having the right politics, even though most posts on the EA forum that don't agree with the ideological tenets listed in the OP tend to get heavily downvoted.

This post of yours is at +28. The most upvoted comment is a request to see more stuff from you. If EA were an ideology, I would expect to see your post at a score of 0 or below.

There's no shortage of subreddits where stuff that goes against community beliefs rarely scores above 0. I would guess most subreddits devoted to feminism & libertarianism have this property, for instance.

Comment by john_maxwell_iv on Effective Altruism is an Ideology, not (just) a Question · 2019-06-29T05:48:22.122Z · score: 18 (11 votes) · EA · GW

I see Helen's post as being more prescriptive than descriptive. It's something to aspire to, and declaring that "Effective Altruism is an Ideology" feels like giving up. Instead of "defending" against "competing" ideological perspectives, why not adopt the best of what they have to offer?

I also think you're being a little unfair. Time & attention for evaluating ideas & publishing analysis is limited, and in several cases there is work you don't seem aware of.

I'll grant that EA may have an essentially consequentialist outlook (though even on this point, I'd argue many EAs are too open to other moral philosophies to qualify for the adjective "ideological"; see e.g. the discussion of non-consequentialist ethics in this podcast with EA co-founder Will MacAskill).

But some of your other claims feel too strong to me. For example, even if it's true that no EA organization has ever made use of ethnography, I don't think that's because we're ideologically opposed to ethnography in the way that, say, libertarians are ideologically opposed to government coercion. As anonymous_ea points out, ethnography was just recently a topic of interest here on the forum. It seems plausible to me that we're talking about and making use of ethnography at about the same rate as the research world at large (that is to say, not very much).

Similarly, using phenomenology to determine the value of different types of life sounds like Qualia Research Institute, and I believe CEA has examined historical case studies related to social movements. Just because you aren't personally aware of it doesn't mean someone in EA isn't doing it, and it certainly doesn't mean EA is ideologically opposed to it.

With regard to "devising and implementing alternatives to global capitalism", 80k did a podcast on that. This is the sort of podcast I'd expect to see in the world where EA is a question, and 80k is always talking to experts in different areas, exploring new possible cause areas for EA. Here's a post on socialism you might be interested in.

Similarly, there is an effective environmentalism group with hundreds of members. Here is a post on an EA blog attempting to address more or less exactly the issue you outline ("serious evidence-based research into the specific questions I present is highly neglected, even if the broader areas are not") with regard to environmentalism. And at a recent EA conference, I attended a presentation which argued that global warming should be a higher priority for EAs.

It doesn't feel to me like EAs are ideologically opposed to environmentalism with anything like the vigor with which feminists and libertarians ideologically oppose things. Instead it seems like EAs investigate environmentalism, and some folks argue for it and work on it, but those arguments haven't been strong enough to make environmentalism the primary focus of most EAs. 80k places global warming under the category of "areas that are especially important but somewhat less neglected".

Anyway, an argument that uniquely picks out AI safety is: If we can solve AI safety and create a superintelligent FAI, it can solve all the other problems on your list. I don't think this argument is original to me; I suspect it came up when FHI did research on which existential risks to focus on many years ago. A quick look at the table of contents of this book shows FHI spent plenty of time considering existential risks unrelated to new technologies. I think OpenPhil did their own broad research and ended up coming to conclusions similar to FHI's.

With regard to the Global Priorities Institute, and the importance of x-risk, longtermism has received a fair amount of discussion. Nick Beckstead wrote an entire PhD thesis on it.

Regarding the claim that emerging technologies are EA's main focus, I want to highlight these results from the EA Survey. Note that the fourth most popular cause is cause prioritization. You write: "My point is not that the candidate causes I have presented actually are good causes for EAs to work on". However, if we're trying to figure out whether we should devote even more resources to investigating unexplored causes to do the most good, the ease of finding good causes which are currently ignored seems like an important factor.

In addition to being a question, EA is also a community and a memeplex. It's important to listen to people outside the community in case people are self-selecting in or out based on incidental factors. And I believe in upvoting uncommon perspectives on this forum to encourage a diversity of opinions. But let's not give up and start calling ourselves an ideology. I would rather have an ecosystem of competing ideas than a body of doctrine--and luckily, I think we're already closer to an ecosystem, so let's keep it that way.

Comment by john_maxwell_iv on What new EA project or org would you like to see created in the next 3 years? · 2019-06-14T20:57:42.566Z · score: 2 (1 votes) · EA · GW

There's also this post plus comments.

Comment by john_maxwell_iv on EA Forum Prize: Winners for April 2019 · 2019-06-05T17:40:29.358Z · score: 3 (2 votes) · EA · GW

Cool! Here are a few that might be worth including. Perhaps searching the Forum for "prize" or "incentivize" would give more interesting results. Also, I think if you look through Paul Christiano's LW submissions, there may be a few more like this.

Comment by john_maxwell_iv on Ingredients for creating disruptive research teams · 2019-05-30T05:18:01.412Z · score: 25 (11 votes) · EA · GW

I asked my father, who has spent the past 40 years at Xerox PARC and worked with Bob Taylor, what he thought of this post. He wrote:

That all seems reasonable to me. My guess is that the most important factors are great people and a great leader. One of my co-workers, who was involved with starting a research center in France, said "A people hire A people. B people hire C people". So, the first few people that you hire are really important.

I think that the main job of the leader is to keep people happy and focused. Most of my managers have been really good leaders.

I also think that being co-located is very important. When I am out of touch with my co-workers, I tend to lose motivation.
...
BTW, one of the reasons that the best leaders usually have a technical background is that it is hard to identify the very best people without it. That is why non-technical companies have trouble hiring good programmers, and conversely why the best tech companies were founded by people with a technical background.

Another thing I remember him once mentioning to me is that PARC bought its researchers very expensive, cutting-edge equipment to do research with, on the assumption that Moore's Law would eventually drive down the price of such equipment to the point where it was affordable to the mainstream.

He's willing to answer questions.

Comment by john_maxwell_iv on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-24T22:59:18.823Z · score: 4 (2 votes) · EA · GW
This is exactly what p-values are designed for, so you are probably better off looking at p-values rather than effect size if that's the scenario you're trying to avoid.

Yes, this is a better idea.

Comment by john_maxwell_iv on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-23T22:43:15.161Z · score: 16 (7 votes) · EA · GW

From what I understand, effect size is one of the better ways to predict whether a study will replicate. For example, this paper found that 77% of replication effect sizes reported were within a 95% prediction interval based on the original effect size.
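For context, a common way such a prediction interval is constructed (a normal-approximation sketch; I'm not certain it's the exact formula the paper uses) is:

```latex
% 95% prediction interval for a replication effect estimate, given the
% original estimate and the standard errors of both studies (normal approx.).
\[
  \hat{d}_{\mathrm{orig}} \;\pm\; 1.96\,\sqrt{SE_{\mathrm{orig}}^{2} + SE_{\mathrm{rep}}^{2}}
\]
```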

As a spot check, you say that brain training has massive purported effects. I looked at the research page of Lumosity, a company which sells brain training software. I expect their estimates of the effectiveness of brain training to be among the most optimistic, but their highlighted effect size is only d = 0.255.

A caveat is that if an effect size seems implausibly large, it might have arisen due to methodological error. (The one brain training study I found with a large effect size has been subject to methodological criticism.) Here is a blog post by Daniel Lakens where he discusses a study which found that judges hand out much harsher sentences before lunch:

If hunger had an effect on our mental resources of this magnitude, our society would fall into minor chaos every day at 11:45. Or at the very least, our society would have organized itself around this incredibly strong effect of mental depletion... we would stop teaching in the time before lunch, doctors would not schedule surgery, and driving before lunch would be illegal.

However, I think psychedelic drugs arguably do pass this test. During the 60s, before they became illegal, a lot of people were, in a sense, talking about how society would reorganize itself around them. And forget about performing surgery or driving while you are tripping.

The way I see it, if you want to argue that an effect isn't real, there are two ways to do it. You can argue that the supposed effect arose through random chance/p-hacking/etc., or you can argue that it arose through methodological error.

  • The random chance argument is harder to make if the studies have large effect sizes. If the true effect is 0, it's unlikely we'll observe a large effect by chance. If researchers are trying to publish papers based on noise, you'd expect p-values to cluster just below the p < 0.05 threshold (see p-curve analysis, and the simulation sketch below)... they're essentially going to publish the smallest effect size they can get away with.
  • The methodological error argument could be valid for a large effect size, but if this is the case, confirmatory research is not necessarily going to help, because confirmatory research could have the same issue. So at that point your time is best spent trying to pinpoint the actual methodological flaw.
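Here is a rough simulation of the point in the first bullet (toy numbers, my own sketch rather than anything from the p-curve literature): with a true effect of zero plus optional-stopping p-hacking, the "significant" p-values bunch up near 0.05 and the published effect sizes stay modest, whereas a genuinely large effect produces mostly very small p-values and large observed effects.

```python
# Hedged illustration: compare "null effect + peeking until p < 0.05" against
# "large true effect, fixed sample size", looking at the significant results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_study(true_d, peek=False, start_n=20, max_n=100, step=5):
    """One two-group study; with peek=True, stop as soon as p < 0.05."""
    a = list(rng.normal(0.0, 1.0, start_n))
    b = list(rng.normal(true_d, 1.0, start_n))
    while True:
        t, p = stats.ttest_ind(b, a)
        done = (not peek) or p < 0.05 or len(a) >= max_n
        if done:
            pooled_sd = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
            return p, (np.mean(b) - np.mean(a)) / pooled_sd
        a.extend(rng.normal(0.0, 1.0, step))
        b.extend(rng.normal(true_d, 1.0, step))

def summarize(label, results):
    p = np.array([r[0] for r in results])
    d = np.array([r[1] for r in results])
    sig = p < 0.05
    print(f"{label}: {sig.mean():.0%} 'significant'; of those, "
          f"{np.mean(p[sig] > 0.025):.0%} have p in (0.025, 0.05), "
          f"mean |d| = {np.abs(d[sig]).mean():.2f}")

summarize("null effect + optional stopping", [run_study(0.0, peek=True) for _ in range(2000)])
summarize("true d = 0.8, fixed n = 20", [run_study(0.8, peek=False) for _ in range(2000)])
```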
Comment by john_maxwell_iv on Why do EA events attract more men than women? Focus group data · 2019-05-23T20:55:52.174Z · score: 20 (10 votes) · EA · GW

This is the only comment this user has ever written, and their profile looks very spammy. I wonder if spammers have discovered that posting flamebait is a good way to get people to visit their website...

Comment by john_maxwell_iv on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-11T23:40:18.207Z · score: 20 (8 votes) · EA · GW

I don't have data either way, but "knacks" for psychotherapy feel more plausible to me than "knacks" for producing the effects in Many Labs 2 (just skimming over the list of effects here). Like, the strongest version of this claim is that no one is more skilled than anyone else at anything, which seems obviously false.

Suppose we conduct a study of the Feynman problem-solving algorithm: "1. Write down the problem. 2. Think real hard. 3. Write down the solution." An n=1 study of Richard Feynman finds the algorithm works great, but it fails to replicate on a larger sample. What is your conclusion: that the n=1 result was spurious, or that Feynman has useful things to teach us but the 3-step algorithm didn't capture them?

I haven't read enough studies on psychedelics to know how much room there is in the typical procedure for a skilled therapist to make a difference though.

Comment by john_maxwell_iv on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-11T01:39:45.098Z · score: 12 (5 votes) · EA · GW
1.3) (Owed to Scott Alexander's recent post). The psychedelic literature mainly comprises small studies generally conducted by 'true believers' in psychedelics and often (but not always) on self-selected and motivated participants. This seems well within the territory of scientific work vulnerable to replication crises.

I think small studies are also more vulnerable to publication bias.

On the flip side, it may be possible that the "true believers" actually are on to something, but they have a hard time formalizing their procedure into something that can be replicated on a massive scale. So if larger studies fail to replicate the results from the small studies, this may be the reason why.

Comment by john_maxwell_iv on [Link] "Radical Consequence and Heretical Knots" – an ethnography of the London EA community · 2019-05-09T17:58:52.300Z · score: 13 (5 votes) · EA · GW

Maybe someone could read and summarize the core points of this? I read the first chapter and didn't get a lot out of it, and wasn't able to parse passages such as

Technologies of the self anchor these reflective practises; data in this sense forms a bridge between the actual and the virtual, as the creation of the self spills over into the negotiated co-creation of worlds. Empathy and emotion are not in conflict, but complex mediation and configuration.
...
The meaning of those practises, the positions they occupy and the selves they created were problematic rather than the practises in and of themselves. It is in this sense that ‘ethics’ as a dimension of social life is is here distinguished from pre-theorised systematics; it shapes the connotations for how selves are formed, others are engaged and worlds are envisioned. Yet the perceived Heresy is a deeply personal thing, made through enrolment into this ‘moral assemblage’. As the ethical subject develops, its possibilities for further self-reflection and development are also changed. Whilst the others ‘clicked’ into this assemblage, for Sarah the strands of relational fabric become tangled... The ‘Heresy’ of the EAs for many isn’t in any single thing they do; no one practise is the cause of offense, but it the complex relational possibilities of specific lived encounters in which the self is so profoundly involved. At the moment of ‘ethical breakdown’, reflection fails and revelation is rejected.
Comment by john_maxwell_iv on Aligning Recommender Systems as Cause Area · 2019-05-09T07:23:39.049Z · score: 6 (4 votes) · EA · GW

Good post!

I have a hunch that a big part of the issue here is institutional momentum around maximizing key performance indicators such as daily active users, time spent on platform, etc. Perhaps it will be important to persuade decisionmakers that although optimizing for these metrics helps the bottom line in the short run, in the long run optimizing them to the exclusion of all else hurts the brand, increases the probability of regulatory action or negative "black swan" type events, and risks having users abandon the product.

(I understand that the longer a culture is exposed to alcohol, the more it develops "cultural antibodies" to alcohol's negative effects, which allow it to mitigate the harms. Decisionmakers should worry that if users don't endorse the time they spend with the product, this hurts the long-term viability of the platform; imagine the formation of a group like Alcoholics Anonymous but for social media, for instance.)

I think it'd be good if decisionmakers also started optimizing for key performance indicators like whether users think the product is a benefit to their life personally, whether the product makes society healthier/better off, etc. Or even more specific stuff, like whether users who engage in disagreements tend to come to a consensus vs. walking away even angrier than when they started.

With regard to risks, here are some thoughts of mine related to scenarios in which users self-select in their use of these tools. I think maybe what I describe in this comment has already happened though.

Comment by john_maxwell_iv on How do we check for flaws in Effective Altruism? · 2019-05-07T06:32:27.142Z · score: 30 (12 votes) · EA · GW

Donald Knuth is a Stanford professor and world-renowned computer scientist. For years he offered cash prizes to anyone who could find an error in any of his books. The amount of money was only a few dollars, but there's a lot of status associated with receiving a Knuth check. People would frame them instead of cashing them.

Why don't more people do this? It's like having a bug bounty program, but for your beliefs. Offer some cash and public recognition to anyone who can correct a factual error you've made or convince you that you're wrong about something. Donald Freakin' Knuth has cut over two thousand reward checks, and we mortals probably make mistakes at a higher rate than he does.

Everyone could do this: organizations, textbooks, newspapers, individuals. If you care about having correct beliefs, create an incentive for others to help you out.

$2 via Paypal to the first person who convinces me this practice is harmful.

Comment by john_maxwell_iv on Does climate change deserve more attention within EA? · 2019-04-20T17:13:07.189Z · score: 10 (4 votes) · EA · GW

This is an interesting post by Ramez Naam. He argues that too much attention is given to transportation & energy emissions and not enough to agriculture & industry emissions. Naam thinks that renewable tech will continue to drop in cost, and he's optimistic that that part of the equation will solve itself. He says the highest-leverage action is the development of new tech to address agriculture & industry emissions.

Comment by john_maxwell_iv on Legal psychedelic retreats launching in Jamaica · 2019-04-17T20:40:13.200Z · score: 14 (10 votes) · EA · GW

Maybe we could have a classified ads thread every once in a while? (More thoughts here.)

Comment by john_maxwell_iv on Should EA grantmaking be subject to independent audit? · 2019-04-17T19:30:20.651Z · score: 5 (3 votes) · EA · GW

It feels inefficient to second-guess a decision which has already been finalized. I think you could argue that something like a grant decisions thread should get posted before money gets disbursed, in case commenters surface important considerations overlooked by the grantmakers. There might also be value in auditing a while after money gets disbursed, to understand what the money actually did. Auditing right after money gets disbursed seems like the worst of both worlds.

Comment by john_maxwell_iv on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-17T19:25:28.267Z · score: 4 (2 votes) · EA · GW
So, for a respective cause area, an EA Fund functions as like an index fund that incentivizes the launch of nascent projects, organizations, and research in the EA community.

You mean it functions like a venture capital fund or angel investor?

Comment by john_maxwell_iv on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T19:36:53.187Z · score: 5 (3 votes) · EA · GW

Good to know!

Comment by john_maxwell_iv on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-09T07:10:17.447Z · score: 9 (15 votes) · EA · GW
This in particular strikes me as understandable but very unfortunate. I'd strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant. Are there any plans or any way progress can be made on this issue?

I agree this creates unfortunate incentives for EAs to burn resources living in high cost-of-living areas (perhaps even while doing independent research which could in theory be done from anywhere!). However, if I were a grantmaker, I can see why this arrangement would be preferable: evaluating grants feels like work and costs emotional energy. Talking to people at parties feels like play and creates emotional energy. For many grantmakers, I imagine getting to know people in a casual environment is effectively costless, and re-using that knowledge in the service of grantmaking allows more grants to be made.

I suspect there's low-hanging fruit in having the grantmaking team be geographically distributed. To my knowledge, at least 3 of these 4 grantmakers live in the Bay Area, which means they probably have a lot of overlap in their social network. If the goal is to select the minimum number of supernetworkers to cover as much of the EA social network as possible, I think you'd want each person to be located in a different geographic EA hub. (Perhaps you'd want supernetworkers covering disparate online communities devoted to EA as well.)
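As a toy illustration of this "cover the social network" framing (hypothetical names and reach sets, not real data about any grantmakers), picking supernetworkers greedily by marginal coverage naturally spreads the picks across hubs:

```python
# Hedged sketch: selecting supernetworkers is roughly a maximum-coverage
# problem, for which a greedy pick works reasonably well. All data made up.
reach = {  # hypothetical: who each candidate grantmaker already knows
    "bay_area_person_1": {"A", "B", "C", "D"},
    "bay_area_person_2": {"B", "C", "D", "E"},
    "london_person":     {"F", "G", "H"},
    "berlin_person":     {"H", "I", "J"},
    "online_person":     {"D", "J", "K", "L"},
}

def greedy_cover(reach, k):
    """Pick k candidates, each time taking whoever adds the most new people."""
    covered, picks = set(), []
    for _ in range(k):
        best = max(reach, key=lambda name: len(reach[name] - covered))
        picks.append(best)
        covered |= reach[best]
    return picks, covered

picks, covered = greedy_cover(reach, k=3)
print(picks, f"covers {len(covered)} of {len(set().union(*reach.values()))} people")
```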

This also provides an interesting reframing of all the recent EA Hotel discussion: Instead of "Fund the EA Hotel", maybe the key intervention is "Locate grantmakers in low cost-of-living locations. Where grant money goes, EAs will follow, and everyone can save on living expenses." (BTW, the EA Hotel is actually a pretty good place to be if you're an aspiring EA supernetworker. I met many more EAs during the 6 months I spent there than my previous 6 months in the Bay Area. There are always people passing through for brief stays.)

Comment by john_maxwell_iv on Announcing EA Hub 2.0 · 2019-04-09T06:19:37.501Z · score: 16 (7 votes) · EA · GW

Congratulations on the launch!

Can anyone think of good places to link EA Hub from now that it's been revamped? I'm worried that people will forget about it in a few weeks once this post falls off the EA Forum homepage.

One strategy: Brainstorm use cases, then figure out where people are currently going for those use cases, then put links to the EA Hub in those places with an explanation of how EA Hub solves the use case. For example (rot13'd so you can think of your own before being primed by mine), one possible use case is crbcyr zrrgvat sryybj RNf juvyr geniryvat. Fb jr pbhyq qebc n yvax gb gur RN Uho va gur RN Pbhpufhesvat Snprobbx tebhc qrfpevcgvba naq fhttrfg gung crbcyr svaq ybpny tebhcf be fraq crefbany zrffntrf gb ybpny RNf vaivgvat gurz sbe pbssrr juvyr geniryvat. (Nffhzvat gung'f pbafvqrerq na npprcgnoyr hfr bs gur crefbany zrffntr srngher—V qba'g frr jul vg jbhyqa'g or gubhtu.)

Comment by john_maxwell_iv on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-08T05:05:38.484Z · score: 6 (4 votes) · EA · GW

We could just start calling it the Athena Hotel. That also disambiguates if additional hotels are opened in the future.

Comment by john_maxwell_iv on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-06T00:12:07.519Z · score: 4 (2 votes) · EA · GW

Do you have any thoughts on Tetlock's work which recommends the use of probabilistic reasoning and breaking questions down to make accurate forecasts?

Comment by john_maxwell_iv on Salary Negotiation for Earning to Give · 2019-04-05T19:16:28.631Z · score: 2 (1 votes) · EA · GW

A friend of mine recommends the book Bargaining for Advantage.

Comment by john_maxwell_iv on Open Thread #44 · 2019-04-03T19:15:59.227Z · score: 5 (3 votes) · EA · GW

You might also try using Google's Search Console to better understand how Google is scraping the site and what users are searching for (if you aren't already using it).

Comment by john_maxwell_iv on How Flying Cars Will Solve Global Poverty · 2019-04-01T23:58:29.601Z · score: 9 (3 votes) · EA · GW

You just have to build a propeller which produces relaxing brown noise.

Other naysayers like to complain that "most battery technology right now isn’t ready for anything other than short hops". The solution to that is also simple: Put battery replacement/charging stations on top of every building. You'd make a series of hops from one battery station to another on flying buses. The entire thing would be run by Lyft, naturally.

Comment by john_maxwell_iv on Why is the EA Hotel having trouble fundraising? · 2019-04-01T07:29:41.414Z · score: 5 (4 votes) · EA · GW
Many of them are working on very different projects from each other, and their peers are incentivized to be nice - it's not the kind of relationship a student has with a teacher or an employee has with a manager.

This is a good point. Maybe the hotel should have events where people anonymously write down the strongest criticisms they can think of for a particular person's project, then someone reads the criticisms aloud and they get discussed.

Comment by john_maxwell_iv on The Case for the EA Hotel · 2019-04-01T06:38:26.623Z · score: 11 (6 votes) · EA · GW

I burned out a couple of times. Taking time off allowed me to recover, but overall I updated in the direction that I should self-fund my EA projects, because I put too much pressure on myself if someone else is funding me. If I stay at the hotel again, I think I'll pay the £10/day "EA on vacation" fee. Then I can always remind myself that technically, I'm on vacation.

I also updated in the direction that a vegan diet is not the best for me physiologically. If I stay at the hotel again, I'll be more shameless about buying and eating my own non-vegan food.

When I was at the hotel, there was a culture of doing recreational stuff together on the weekends. I know I was usually taking it easy on the weekends. But maybe things have changed since I left.

Comment by john_maxwell_iv on What consequences? · 2019-03-31T22:50:20.643Z · score: 3 (2 votes) · EA · GW

I'd use what I've read about history to try to think of pivotal historical events that share important similarities with the action in question, and also try to estimate the base rate of historical people taking actions similar to the action in question, in order to have an estimate for the denominator.

If I was trying to improve my ability in this area, I might read books by Peter Turchin, Yuval Noah Harari, Niall Ferguson, Will and Ariel Durant, and people working on Big History. Maybe this book too. Some EA-adjacent discussion of this topic: 1, 2, 3, 4.

Comment by john_maxwell_iv on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-31T21:15:35.428Z · score: 21 (8 votes) · EA · GW

Startup founders are one possible reference class, but another possible reference class is researchers. People have proposed random funding for research proposals above a certain quality threshold:

Science is expensive, and since we can’t fund every scientist, we need some way of deciding whose research deserves a chance. So, how do we pick? At the moment, expert reviewers spend a lot of time allocating grant money by trying to identify the best work. But the truth is that they’re not very good at it, and that the process is a huge waste of time. It would be better to do away with the search for excellence, and to fund science by lottery.

People like Nick Bostrom and Eric Drexler are late in their careers, and they've had a lot of time to earn your respect and accumulate professional accolades. They find it easy to get funding and paying high rent is not a big issue for them. Given the amount of influence they have, it's probably worthwhile for them to live in a major intellectual hub and take advantage of the networking opportunities that come with it.

I think a focus on funding established researchers can impede progress. Max Planck said that science advances one funeral at a time. I happen to think Nick Bostrom is wrong about some important stuff, but I'm not nearly as established as Bostrom and I don't have the stature for people to take me as seriously when I make that claim.

Also, if donors fund any charity that has a good idea, I'm a bit concerned that that will attract a larger number of low-quality projects, much like the quality of startups declined near the peak of the dot-com bubble, when investors threw money at startups without much regard for competence.

Throwing small amounts of money at loads of startups is Y Combinator's business model.

I think part of why Y Combinator is so successful is because funding so many startups has allowed them to build a big dataset for what factors do & don't predict success. Maybe this could become part of the EA Hotel's mission as well.

Comment by john_maxwell_iv on What consequences? · 2019-03-31T20:43:02.174Z · score: 1 (1 votes) · EA · GW

Is it similar to the sorts of past actions that I believe had a large impact on the future?

Comment by john_maxwell_iv on “Just take the expected value” – a possible reply to concerns about cluelessness · 2019-03-31T20:42:14.699Z · score: 1 (1 votes) · EA · GW

Do you think it's an acceptable conversational move for me to give you pointers to a literature which I believe addresses issues you're working on even if I don't have a deep familiarity with that literature?

Comment by john_maxwell_iv on “Just take the expected value” – a possible reply to concerns about cluelessness · 2019-03-30T08:55:46.087Z · score: 1 (1 votes) · EA · GW

Sorry, I'm not sure what the official jargon for the thing I'm trying to refer to is. In the limit of trying to be more accessible, I'm basically teaching a class in Bayesian statistics, and that's not something I'm qualified to do. (I don't even remember the jargon!) But the point is there are theoretically well-developed methods for talking about these issues, and maybe you shouldn't reinvent the wheel. Also, I'm almost certain they work fine with expected value.

Comment by john_maxwell_iv on What consequences? · 2019-03-30T08:45:10.007Z · score: 1 (1 votes) · EA · GW
I think part of the trouble is that it's very hard to tell prospectively whether an action is going to have a large impact on the far future.

I'm not convinced of that.

Comment by john_maxwell_iv on What consequences? · 2019-03-30T08:40:39.085Z · score: 1 (1 votes) · EA · GW
I haven't yet figured out how to allot the proportions of such a congress in a way that feels principled. Do you know of any work on this?

Not offhand, but I would probably use some kind of Bayesian approach.

Comment by john_maxwell_iv on What open source projects should effective altruists contribute to? · 2019-03-29T02:31:04.286Z · score: 6 (4 votes) · EA · GW

Here are posts from the LessWrong developers which might answer some of these questions. From 2017, so possibly outdated at this point...

https://www.lesswrong.com/posts/HJDbyFFKf72F52edp/welcome-to-lesswrong-2-0

https://www.lesswrong.com/posts/6XZLexLJgc5ShT4in/lesswrong-2-0-feature-roadmap-and-feature-suggestions

https://www.lesswrong.com/posts/rEHLk9nC5TtrNoAKT/lw-2-0-strategic-overview

More recent discussions here:

https://www.lesswrong.com/meta

Comment by john_maxwell_iv on Request for comments: EA Projects evaluation platform · 2019-03-27T06:11:22.559Z · score: 9 (3 votes) · EA · GW

As a concrete example of this "same project ideas over and over with little awareness of what has been proposed or attempted in the past" thing, https://lets-fund.org is a fairly recent push in the "fund fledgling EA projects" area which seems to have a decent amount of momentum behind it relative to the typical volunteer-led EA project. What are the important differences between Let's Fund and what Jan is working on? I'm not sure. But Let's Fund hasn't hit the $75k target for their first project, even though it's been ~5 months since their launch.

The EA Hotel is another recent push in the "fund fledgling EA projects" area which is struggling to fundraise. Again, loads of momentum relative to the typical grassroots EA project--they've bought a property and it's full of EAs. What are the relative advantages & disadvantages of the EA Hotel, Let's Fund, and Jan's thing? How about compared with EA Funds? Again, I'm not sure. But I do wonder if we'd be better off with "more wood behind fewer arrows", so to speak.

Comment by john_maxwell_iv on Why doesn't the EA forum have curated posts or sequences? · 2019-03-26T20:14:30.585Z · score: 6 (3 votes) · EA · GW

Another situation where it can be valuable for a post to spend more time on the frontpage: This essay argues it's important to have 4 layers of intellectual conversation. The number 4 seems arbitrary to me, but I agree with the overall point that back-and-forth is valuable and necessary. But if a post falls off the frontpage partway through that back-and-forth, people are less motivated to continue the back-and-forth because the audience is smaller.

Comment by john_maxwell_iv on Severe Depression and Effective Altruism · 2019-03-26T19:11:03.323Z · score: 4 (4 votes) · EA · GW

What works for me in situations like these is to find a compromise position that both parts of me are OK with. For you, maybe this would look like: bringing up effective altruism with your parents just to see how they feel about it. Or purchasing a house for yourself in a lower cost of living area and donating the rest. Or using the money to retire early somewhere inexpensive and spend your time working on EA projects.

Comment by john_maxwell_iv on Request for comments: EA Projects evaluation platform · 2019-03-22T05:56:43.279Z · score: 6 (3 votes) · EA · GW
I actually stated my opinion in writing in a response to you two days ago which seems to deviate highly from your interpretation of my opinion.

I think I've seen forum discussions where language has been an unacknowledged barrier to understanding in the past, so it might be worth flagging that Jan is from the Czech Republic and likely does not speak English as his mother tongue.

Comment by john_maxwell_iv on Request for comments: EA Projects evaluation platform · 2019-03-22T05:31:32.917Z · score: 11 (4 votes) · EA · GW

It seems like Jan is getting a lot of critical feedback, so I just want to say, big ups to you Jan for spearheading this. Perhaps it'd be useful to schedule a Skype call with Habryka, RyanCarey, or others to try & hash out points of disagreement.

The point of a pilot project is to gather information, but if information already exists in the heads of community members, a pilot could just be an expensive way of re-gathering that info. The ideal pilot might be something that is controversial among the most knowledgeable people in the community, with some optimistic and some pessimistic, because that way we're gathering informative experimental data.

Comment by john_maxwell_iv on Request for comments: EA Projects evaluation platform · 2019-03-22T05:22:28.741Z · score: 4 (3 votes) · EA · GW
I consider having people giving feedback to have 'skin in the game' to be important for the accuracy of the feedback. Most people don't enjoy discouraging others they have social ties with. Often reviewers without sufficient skin in the game might be tempted to not be as openly negative about proposals as they should be.

Maybe anonymity would be helpful here, the same way scientists do anonymous peer review?