Comment by john_maxwell_iv on Does climate change deserve more attention within EA? · 2019-04-20T17:13:07.189Z · score: 9 (3 votes) · EA · GW

This is an interesting post by Ramez Naam. He argues that too much attention is given to transportation & energy emissions and not enough to agriculture & industry emissions. Naam thinks renewable tech will continue to drop in cost, and he's optimistic that side of the equation will solve itself. He says the highest-leverage action is the development of new tech to address agriculture & industry emissions.

Comment by john_maxwell_iv on Legal psychedelic retreats launching in Jamaica · 2019-04-17T20:40:13.200Z · score: 15 (9 votes) · EA · GW

Maybe we could have a classified ads thread every once in a while? (More thoughts here.)

Comment by john_maxwell_iv on Should EA grantmaking be subject to independent audit? · 2019-04-17T19:30:20.651Z · score: 5 (3 votes) · EA · GW

It feels inefficient to second-guess a decision which has already been finalized. I think you could argue that something like a grant decisions thread should get posted before money gets disbursed, in case commenters surface important considerations overlooked by the grantmakers. There might also be value in auditing a while after money gets disbursed, to understand what the money actually did. Auditing right after money gets disbursed seems like the worst of both worlds.

Comment by john_maxwell_iv on Long Term Future Fund: April 2019 grant decisions · 2019-04-17T19:25:28.267Z · score: 4 (2 votes) · EA · GW
So, for a respective cause area, an EA Fund functions as like an index fund that incentivizes the launch of nascent projects, organizations, and research in the EA community.

You mean it functions like a venture capital fund or angel investor?

Comment by john_maxwell_iv on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T19:36:53.187Z · score: 5 (3 votes) · EA · GW

Good to know!

Comment by john_maxwell_iv on Long Term Future Fund: April 2019 grant decisions · 2019-04-09T07:10:17.447Z · score: 10 (14 votes) · EA · GW
This in particular strikes me as understandable but very unfortunate. I'd strongly prefer a fund where happening to live near or otherwise know a grantmaker is not a key part of getting a grant. Are there any plans or any way progress can be made on this issue?

I agree this creates unfortunate incentives for EAs to burn resources living in high cost-of-living areas (perhaps even while doing independent research which could in theory be done from anywhere!). However, if I were a grantmaker, I can see why this arrangement would be preferable: evaluating grants feels like work and costs emotional energy; talking to people at parties feels like play and creates it. For many grantmakers, I imagine getting to know people in a casual environment is effectively costless, and re-using that knowledge in the service of grantmaking allows more grants to be made.

I suspect there's low-hanging fruit in having the grantmaking team be geographically distributed. To my knowledge, at least 3 of these 4 grantmakers live in the Bay Area, which means they probably have a lot of overlap in their social networks. If the goal is to select the minimum number of supernetworkers to cover as much of the EA social network as possible, I think you'd want each person to be located in a different geographic EA hub. (Perhaps you'd want supernetworkers covering disparate online communities devoted to EA as well.)

This also provides an interesting reframing of all the recent EA Hotel discussion: Instead of "Fund the EA Hotel", maybe the key intervention is "Locate grantmakers in low cost-of-living locations. Where grant money goes, EAs will follow, and everyone can save on living expenses." (BTW, the EA Hotel is actually a pretty good place to be if you're an aspiring EA supernetworker. I met many more EAs during the 6 months I spent there than my previous 6 months in the Bay Area. There are always people passing through for brief stays.)

Comment by john_maxwell_iv on Announcing EA Hub 2.0 · 2019-04-09T06:19:37.501Z · score: 16 (7 votes) · EA · GW

Congratulations on the launch!

Can anyone think of good places to link EA Hub from now that it's been revamped? I'm worried that people will forget about it in a few weeks once this post falls off the EA Forum homepage.

One strategy: Brainstorm use cases, then figure out where people are currently going for those use cases, then put links to the EA Hub in those places with an explanation of how EA Hub solves the use case. For example (rot13'd so you can think of your own before being primed by mine), one possible use case is crbcyr zrrgvat sryybj RNf juvyr geniryvat. Fb jr pbhyq qebc n yvax gb gur RN Uho va gur RN Pbhpufhesvat Snprobbx tebhc qrfpevcgvba naq fhttrfg gung crbcyr svaq ybpny tebhcf be fraq crefbany zrffntrf gb ybpny RNf vaivgvat gurz sbe pbssrr juvyr geniryvat. (Nffhzvat gung'f pbafvqrerq na npprcgnoyr hfr bs gur crefbany zrffntr srngher—V qba'g frr jul vg jbhyqa'g or gubhtu.)
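(Once you've brainstormed your own, a quick way to decode the rot13 above: Python's standard codecs module ships a rot13 codec. The sample string below is a generic placeholder rather than the text above, to avoid spoiling it.)

```python
import codecs

# rot13 is its own inverse, so the same transform encodes and decodes.
print(codecs.decode("Uryyb, Jbeyq!", "rot13"))  # -> "Hello, World!"
```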

Comment by john_maxwell_iv on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-08T05:05:38.484Z · score: 6 (4 votes) · EA · GW

We could just start calling it the Athena Hotel. That also disambiguates if additional hotels are opened in the future.

Comment by john_maxwell_iv on [Link] The Optimizer's Curse & Wrong-Way Reductions · 2019-04-06T00:12:07.519Z · score: 4 (2 votes) · EA · GW

Do you have any thoughts on Tetlock's work which recommends the use of probabilistic reasoning and breaking questions down to make accurate forecasts?

Comment by john_maxwell_iv on Salary Negotiation for Earning to Give · 2019-04-05T19:16:28.631Z · score: 2 (1 votes) · EA · GW

A friend of mine recommends the book Bargaining for Advantage.

Comment by john_maxwell_iv on Open Thread #44 · 2019-04-03T19:15:59.227Z · score: 5 (3 votes) · EA · GW

You might also try using Google's Search Console to better understand how Google is scraping the site and what users are searching for (if you aren't already using it).

Comment by john_maxwell_iv on How Flying Cars Will Solve Global Poverty · 2019-04-01T23:58:29.601Z · score: 9 (3 votes) · EA · GW

You just have to build a propeller which produces relaxing brown noise.

Other naysayers like to complain that "most battery technology right now isn’t ready for anything other than short hops". The solution to that is also simple: Put battery replacement/charging stations on top of every building. You'd make a series of hops from one battery station to another on flying buses. The entire thing would be run by Lyft, naturally.

How Flying Cars Will Solve Global Poverty

2019-04-01T20:56:47.829Z · score: 21 (12 votes)
Comment by john_maxwell_iv on Why is the EA Hotel having trouble fundraising? · 2019-04-01T07:29:41.414Z · score: 5 (4 votes) · EA · GW
Many of them are working on very different projects from each other, and their peers are incentivized to be nice - it's not the kind of relationship a student has with a teacher or an employee has with a manager.

This is a good point. Maybe the hotel should have events where people anonymously write down the strongest criticisms they can think of for a particular person's project, then someone reads the criticisms aloud and they get discussed.

Comment by john_maxwell_iv on The Case for the EA Hotel · 2019-04-01T06:38:26.623Z · score: 11 (6 votes) · EA · GW

I burned out a couple of times. Taking time off allowed me to recover, but overall I updated in the direction that I should self-fund my EA projects, because I put too much pressure on myself if someone else is funding me. If I stay at the hotel again, I think I'll pay the £10/day "EA on vacation" fee. Then I can always remind myself that technically, I'm on vacation.

I also updated in the direction that a vegan diet is not the best for me physiologically. If I stay at the hotel again, I'll be more shameless about buying and eating my own non-vegan food.

When I was at the hotel, there was a culture of doing recreational stuff together on the weekends. I know I was usually taking it easy on the weekends. But maybe things have changed since I left.

Comment by john_maxwell_iv on What consequences? · 2019-03-31T22:50:20.643Z · score: 3 (2 votes) · EA · GW

Use what I've read about history to identify events I consider pivotal that share important similarities with the action in question, and also try to estimate the base rate at which historical people took similar actions, to get an estimate for the denominator.

If I were trying to improve my ability in this area, I might read books by Peter Turchin, Yuval Noah Harari, Niall Ferguson, Will and Ariel Durant, and people working on Big History. Maybe this book too. Some EA-adjacent discussion of this topic: 1, 2, 3, 4.

Comment by john_maxwell_iv on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-31T21:15:35.428Z · score: 21 (8 votes) · EA · GW

Startup founders are one possible reference class, but another possible reference class is researchers. People have proposed random funding for research proposals above a certain quality threshold:

Science is expensive, and since we can’t fund every scientist, we need some way of deciding whose research deserves a chance. So, how do we pick? At the moment, expert reviewers spend a lot of time allocating grant money by trying to identify the best work. But the truth is that they’re not very good at it, and that the process is a huge waste of time. It would be better to do away with the search for excellence, and to fund science by lottery.

People like Nick Bostrom and Eric Drexler are late in their careers, and they've had a lot of time to earn your respect and accumulate professional accolades. They find it easy to get funding, and paying high rent is not a big issue for them. Given the amount of influence they have, it's probably worthwhile for them to live in a major intellectual hub and take advantage of the networking opportunities that come with it.

I think a focus on funding established researchers can impede progress. Max Planck said that science advances one funeral at a time. I happen to think Nick Bostrom is wrong about some important stuff, but I'm not nearly as established as Bostrom and I don't have the stature for people to take me as seriously when I make that claim.

Also, if donors fund any charity that has a good idea, I'm a bit concerned that that will attract a larger number of low-quality projects, much like the quality of startups declined near the peak of the dot-com bubble, when investors threw money at startups without much regard for competence.

Throwing small amounts of money at loads of startups is Y Combinator's business model.

I think part of why Y Combinator is so successful is because funding so many startups has allowed them to build a big dataset for what factors do & don't predict success. Maybe this could become part of the EA Hotel's mission as well.

Comment by john_maxwell_iv on What consequences? · 2019-03-31T20:43:02.174Z · score: 1 (1 votes) · EA · GW

Is it similar to the sorts of past actions that I believe had a large impact on the future?

Comment by john_maxwell_iv on “Just take the expected value” – a possible reply to concerns about cluelessness · 2019-03-31T20:42:14.699Z · score: 1 (1 votes) · EA · GW

Do you think it's an acceptable conversational move for me to give you pointers to a literature which I believe addresses issues you're working on even if I don't have a deep familiarity with that literature?

Comment by john_maxwell_iv on “Just take the expected value” – a possible reply to concerns about cluelessness · 2019-03-30T08:55:46.087Z · score: 1 (1 votes) · EA · GW

Sorry, I'm not sure what the official jargon is for the thing I'm trying to refer to. In the limit of trying to be more accessible, I'm basically teaching a class in Bayesian statistics, and that's not something I'm qualified to do. (I don't even remember the jargon!) But the point is that there are theoretically well-developed methods for talking about these issues, and maybe you shouldn't reinvent the wheel. Also, I'm almost certain they work fine with expected value.

Comment by john_maxwell_iv on What consequences? · 2019-03-30T08:45:10.007Z · score: 1 (1 votes) · EA · GW
I think part of the trouble is that it's very hard to tell prospectively whether an action is going to have a large impact on the far future.

I'm not convinced of that.

Comment by john_maxwell_iv on What consequences? · 2019-03-30T08:40:39.085Z · score: 1 (1 votes) · EA · GW
I haven't yet figured out how to allot the proportions of such a congress in a way that feels principled. Do you know of any work on this?

Not offhand, but I would probably use some kind of Bayesian approach.
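To gesture at one naive version (a minimal sketch; the value systems and numbers are invented for illustration, not a worked-out proposal): seats could simply track your posterior credence in each view.

```python
# A minimal sketch: allot congress seats in proportion to credence.
# The value systems and numbers are purely illustrative.
credences = {
    "total utilitarianism": 0.40,
    "deontology": 0.25,
    "virtue ethics": 0.20,
    "suffering-focused ethics": 0.15,
}

TOTAL_SEATS = 100
seats = {view: round(TOTAL_SEATS * p) for view, p in credences.items()}
print(seats)
# {'total utilitarianism': 40, 'deontology': 25,
#  'virtue ethics': 20, 'suffering-focused ethics': 15}
```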

Comment by john_maxwell_iv on What open source projects should effective altruists contribute to? · 2019-03-29T02:31:04.286Z · score: 6 (4 votes) · EA · GW

Here are posts from the LessWrong developers which might answer some of these questions. From 2017, so possibly outdated at this point...

https://www.lesswrong.com/posts/HJDbyFFKf72F52edp/welcome-to-lesswrong-2-0

https://www.lesswrong.com/posts/6XZLexLJgc5ShT4in/lesswrong-2-0-feature-roadmap-and-feature-suggestions

https://www.lesswrong.com/posts/rEHLk9nC5TtrNoAKT/lw-2-0-strategic-overview

More recent discussions here:

https://www.lesswrong.com/meta

Comment by john_maxwell_iv on Request for comments: EA Projects evaluation platform · 2019-03-27T06:11:22.559Z · score: 9 (3 votes) · EA · GW

As a concrete example of this "same project ideas over and over with little awareness of what has been proposed or attempted in the past" thing, https://lets-fund.org is a fairly recent push in the "fund fledgling EA projects" area which seems to have a decent amount of momentum behind it relative to the typical volunteer-led EA project. What are the important differences between Let's Fund and what Jan is working on? I'm not sure. But Let's Fund hasn't hit the $75k target for their first project, even though it's been ~5 months since their launch.

The EA Hotel is another recent push in the "fund fledgling EA projects" area which is struggling to fundraise. Again, loads of momentum relative to the typical grassroots EA project--they've bought a property and it's full of EAs. What are the relative advantages & disadvantages of the EA Hotel, Let's Fund, and Jan's thing? How about compared with EA Funds? Again, I'm not sure. But I do wonder if we'd be better off with "more wood behind fewer arrows", so to speak.

Comment by john_maxwell_iv on Why doesn't the EA forum have curated posts or sequences? · 2019-03-26T20:14:30.585Z · score: 6 (3 votes) · EA · GW

Another situation where it can be valuable for a post to spend more time on the frontpage: This essay argues it's important to have 4 layers of intellectual conversation. The number 4 seems arbitrary to me, but I agree with the overall point that back-and-forth is valuable and necessary. But if a post falls off the frontpage partway through that back-and-forth, people are less motivated to continue it because the audience has shrunk.

Comment by john_maxwell_iv on Severe Depression and Effective Altruism · 2019-03-26T19:11:03.323Z · score: 4 (4 votes) · EA · GW

What works for me in situations like these is to find a compromise position that both parts of me are OK with. For you, maybe this would look like: bringing up effective altruism with your parents just to see how they feel about it. Or purchasing a house for yourself in a lower cost of living area and donating the rest. Or using the money to retire early somewhere inexpensive and spend your time working on EA projects.

Comment by john_maxwell_iv on Request for comments: EA Projects evaluation platform · 2019-03-22T05:56:43.279Z · score: 6 (3 votes) · EA · GW
I actually stated my opinion in writing in a response to you two days ago which seems to deviate highly from your interpretation of my opinion.

I think I've seen forum discussions in the past where language was an unacknowledged barrier to understanding, so it might be worth flagging that Jan is from the Czech Republic and likely does not speak English as his mother tongue.

Comment by john_maxwell_iv on Request for comments: EA Projects evaluation platform · 2019-03-22T05:31:32.917Z · score: 11 (4 votes) · EA · GW

It seems like Jan is getting a lot of critical feedback, so I just want to say, big ups to you Jan for spearheading this. Perhaps it'd be useful to schedule a Skype call with Habryka, RyanCarey, or others to try & hash out points of disagreement.

The point of a pilot project is to gather information, but if information already exists in the heads of community members, a pilot could just be an expensive way of re-gathering that info. The ideal pilot might be something that is controversial among the most knowledgeable people in the community, with some optimistic and some pessimistic, because that way we're gathering informative experimental data.

Comment by john_maxwell_iv on Request for comments: EA Projects evaluation platform · 2019-03-22T05:22:28.741Z · score: 4 (3 votes) · EA · GW
I consider having people giving feedback to have 'skin in the game' to be important for the accuracy of the feedback. Most people don't enjoy discouraging others they have social ties with. Often reviewers without sufficient skin in the game might be tempted to not be as openly negative about proposals as they should be.

Maybe anonymity would be helpful here, the same way scientists do anonymous peer review?

Comment by john_maxwell_iv on Request for comments: EA Projects evaluation platform · 2019-03-22T05:13:26.959Z · score: 13 (6 votes) · EA · GW
Since I view this as an important idea, I think it's important to get the plan to the strongest point that it can be.

It's also important not to let the perfect be the enemy of the good. Seems to me like people are always proposing volunteer-led projects like this and most of them never get off the ground. Remember this is just a pilot.

I think this is the sort of project where if you do it badly, it might dissuade others from trying the same.

The empirical reality of the EA project landscape seems to be that EAs keep stumbling on the same project ideas over and over with little awareness of what has been proposed or attempted in the past. If this post goes like the typical project proposal post, nothing will come of it, it will soon be forgotten, and 6 months later someone will independently come up with a similar idea and write a similar post (which will meet a similar fate).

Comment by john_maxwell_iv on The career and the community · 2019-03-22T04:50:02.731Z · score: 7 (4 votes) · EA · GW

This is a good post. But since we hear so much about the value of career capital, I thought it'd be useful to link this old post which encouraged people to deprioritize it, just for the sake of an alternate perspective.

Given that there’s always going to be a social bias towards working at EA orgs

I'm not sure this is true. Just a few years ago, it seemed like there was a social bias against working at EA orgs. The "prioritize talent gaps" meme was meant to address this. (I feel like there might be other historical cases of the EA movement overcorrecting in this manner, but no specific instances are coming to mind.)

Comment by john_maxwell_iv on Why doesn't the EA forum have curated posts or sequences? · 2019-03-21T23:03:53.793Z · score: 1 (1 votes) · EA · GW

Related thought: I haven't had a chance to look closely at this yet, but it appears to be the nth proposal for something like this I've seen on the EA forum. This might also represent a failure of the "newsfeed of posts sorted by date" model.

What if there were a box to check for "this is a proposal", and then once that box was checked, we got "bump"-style forum mechanics where every time someone left a comment, the post went back to the top of the front page? That way a proposal could stay on the front page long enough for details to get hashed out, and eventually the proposal might actually get implemented.
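Here's a minimal sketch of the ordering rule I have in mind (the field names and records are hypothetical, not the forum's actual schema):

```python
# Hypothetical post records: proposals rank by their latest comment,
# ordinary posts by creation time, so active proposals keep resurfacing.
posts = [
    {"title": "Proposal: curated sequences", "is_proposal": True,  "created": 100, "last_comment": 250},
    {"title": "Conference trip report",      "is_proposal": False, "created": 200, "last_comment": 210},
]

def front_page_key(post):
    # A proposal's rank refreshes whenever it receives a comment.
    return post["last_comment"] if post["is_proposal"] else post["created"]

for post in sorted(posts, key=front_page_key, reverse=True):
    print(post["title"])
# Proposal: curated sequences
# Conference trip report
```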

Proposals could also sort comments by "new" by default, so you could write a comment summarizing all of the discussion so far & suggesting a way to synthesize it, and that comment would be the first comment to get displayed. Perhaps there could also be something to nudge the user towards reading most or all of the comments in a proposal thread before leaving a comment of their own?

(Are there other categories of posts that would benefit from additional time on the front page? BTW, another advantage of the bumping mechanic is it makes it easier to have an influence on the discussion even if you only check the forum occasionally. Which is likely going to be true for important people who have a lot of other responsibilities.)

Comment by john_maxwell_iv on Request for comments: EA Projects evaluation platform · 2019-03-21T22:51:22.613Z · score: 10 (4 votes) · EA · GW

This is true, but "possible negative consequences" and "possible benefits" are good brainstorming prompts. Especially if someone has a bias towards one or the other, telling them to use both prompts can help even things out.

Comment by john_maxwell_iv on The career coordination problem · 2019-03-19T00:26:55.592Z · score: 1 (1 votes) · EA · GW

I had some thoughts on how to use the survey in this comment.

Comment by john_maxwell_iv on The career coordination problem · 2019-03-18T03:28:17.448Z · score: 3 (3 votes) · EA · GW

I actually don't think that would help a ton, because 80K already prioritizes careers based on their perceived delta between supply and demand. The coordination problem comes because it can take years to generate additional supply, and 80K has only limited visibility into that supply as it's being generated.

Comment by john_maxwell_iv on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-13T01:08:40.776Z · score: 1 (1 votes) · EA · GW

I wrote some comments on your sequence:

Unexpected outcomes will largely fall into two categories: those we think we should have anticipated, and those we don't think we reasonably could have anticipated. For the first category, I think we could do better at brainstorming unusual reasons why our plans might fail. I have a draft post on how to do this. For the second category, I don't think there is much to do. Maybe there will be a blizzard during midsummer all over California this year, and I will hold Californian authorities blameless for their failure to prepare for that blizzard.

I stumbled across this today; haven't had a chance to read it but it looks relevant.

Comment by john_maxwell_iv on SHOW: A framework for shaping your talent for direct work · 2019-03-12T20:31:17.555Z · score: 3 (2 votes) · EA · GW

Good post.

Seems like writing blog posts is another possibility. A good fraction of the most prominent EAs appear to have achieved their prominence through writing.

Comment by john_maxwell_iv on Doing good while clueless · 2019-03-12T16:58:00.676Z · score: 9 (3 votes) · EA · GW

Metaculus is an EA project worth a mention in the "improving foresight" area. I'm also excited by what the Less Wrong 2 team is doing. And Clearer Thinking is cool.

I think steering capacity is valuable, but there has to be a balance between building steering capacity and taking object-level action. In many cases, object-level actions are likely to be time-sensitive. Delaying object-level action only makes sense insofar as we can usefully resolve our cluelessness. (But as you say, we tend to become less clueless about things as they move from the far future to the near future. So object-level actions which destroy option value can be bad.)

Remember also that acting in the world is sometimes the best way to gather information (which can help resolve cluelessness).

Comment by john_maxwell_iv on How tractable is cluelessness? · 2019-03-12T16:46:58.040Z · score: 1 (1 votes) · EA · GW

Bostrom defines a "crucial consideration" as one that would overturn a conclusion or reveal the need for a major change of direction. By this definition, something may or may not be a "crucial consideration" depending on our current set of conclusions and our current direction. The definition sneaks in a connotation that important new insights will tend to reveal the need for a major change of direction. But it's also possible that important new insights will reaffirm our current direction. See conservation of expected evidence.
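(For reference, the identity behind conservation of expected evidence is just the law of total probability: your current credence is already the expectation of your future credence, so you can't expect inquiry to predictably push you in one direction.)

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E)$$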

Regarding the precautionary principle, consider a reversibility test: Suppose there is some parameter of the world p which is gradually increasing, and you have the opportunity to interfere and stop this increase for no cost. By the precautionary principle, you should not interfere. Now suppose p is currently static, and you have the opportunity to interfere and trigger a gradual increase for no cost. Again, by the precautionary principle, you should not interfere.

For someone like me, who does not believe in the act/omission distinction and believes in fighting status quo bias, this seems a little silly. I think the best arguments for a policy of non-interference in both scenarios are:

  • In the real world, actions typically have costs.
  • It's possible that our interference isn't reversible, and by thinking more, we can better determine whether interference is the correct course of action. In other words, value of information is high. But this argument depends on cluelessness being tractable! If our current guess is as good as our guess will ever be, we might as well act on it.

I'm sympathetic to the idea that value of information is high, and I think cluelessness is tractable. I support EA groups like the Future of Humanity Institute which are trying to work out the best course of action. But at a certain point, the low-hanging information fruit will get picked, and then it's likely time to act. If we aren't going to take action under any circumstances, gathering information is a waste of time.

Comment by john_maxwell_iv on “Just take the expected value” – a possible reply to concerns about cluelessness · 2019-03-12T16:11:53.290Z · score: 1 (1 votes) · EA · GW

Good points, but this seems to point to a weakness in the way we do modeling, not a weakness in expected value.

Comment by john_maxwell_iv on “Just take the expected value” – a possible reply to concerns about cluelessness · 2019-03-12T16:10:33.271Z · score: 1 (1 votes) · EA · GW

This business with multiple possible probabilities sounds like you are partway through reinventing Bayesian model uncertainty. Seems like "representor" corresponds to "posterior distribution over possible models". From a Bayesian perspective, you can solve this problem by using the full posterior for inference, and summing out the model.
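As a minimal numeric sketch of "summing out the model" (all numbers invented for illustration): each candidate model gives its own probability for an event, you weight those by your credence in each model, and you're back to a single number an expected-value calculation can use.

```python
# Bayesian model averaging, minimally: P(event) = sum_m P(event | m) * P(m).
# Three candidate models disagree about the event; we hold credences over
# which model is right, and sum the model out.
p_event_given_model = [0.1, 0.5, 0.9]  # each model's estimate of P(event)
p_model = [0.2, 0.5, 0.3]              # posterior credence in each model

p_event = sum(pe * pm for pe, pm in zip(p_event_given_model, p_model))
print(p_event)  # 0.54 -- one number, despite the model uncertainty
```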

Answers like this indicate that the estimator doesn’t have visibility into the process by which they’re arriving at their estimate.

"It is better to be approximately right than to be precisely wrong." - Warren Buffett

"Anything you need to quantify can be measured in some way that is superior to not measuring it at all." - Gilb's Law

Comment by john_maxwell_iv on What consequences? · 2019-03-12T15:55:52.400Z · score: 4 (3 votes) · EA · GW
in reality the bulk of an intervention’s impact is composed of indirect & long-run effects which are difficult to observe and difficult to estimate.

Robin Hanson has some posts which are skeptical. I think there's probably a power law distribution of impact on the far future, and most actions are relatively unimpactful. You could argue that the scale of the universe is big enough in time & space that even a small relative impact on the far future will be large in absolute terms. But to compromise with near-future-focused value systems, maybe we should still be focused on the near-term effects of interventions which seem relatively unimpactful in the long run.

BTW, your typology neglects work to prevent s-risks.

Comment by john_maxwell_iv on EA is vetting-constrained · 2019-03-11T22:08:27.716Z · score: 9 (4 votes) · EA · GW
grantmakers fall back on prestige because they don’t always have the resources to properly evaluate ideas

It seems like this recent post describes the opposite pattern, of someone with a highly prestigious resume spending a lot of resources getting evaluated, and getting rejected despite their resume. I wonder why the pattern would be different between hiring and grantmaking.

Anyway, one idea for helping address the bottleneck is to maintain a shared open-source grantmaking algorithm. The algorithm could include forecasting best practices, a list of ways projects can cause harm, etc. Every time a project fails despite our hopes, or succeeds despite our concerns, we could update the algorithm with our learnings. It could be shared between established EA grantmakers, donor lottery winners, independent angels, etc.

I don't think such an algorithm would eliminate the need for domain expertise. But it might make it less of a bottleneck. The ideal audience might be an EA who is EtG and thinking of donating to a friend's project. They can vouch for their friend and they have a limited amount of domain expertise in the area of their friend's project. They could do some fraction of the algorithm on their own, then maybe step 7 would be: "Find a domain expert in the EA community. Have them glance over everything you've done so far to evaluate this project and let you know what you're missing." (Arguably the biggest weakness of amateurs relative to experts is amateurs don't know what they don't know. Plausibly it's also valuable to involve at least one person who is not friends with the project leader to fight social desirability bias etc. Another way to help address the unknown unknowns problem is making a post to this forum and paying for critical feedback. OpenPhil has a relevant essay re: what they aim for in their writeups.)

Comment by john_maxwell_iv on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-02T10:37:22.676Z · score: 7 (5 votes) · EA · GW

I feel for you :(

It would really suck if this is just a temporary supply/demand imbalance. I could even imagine us having the opposite problem in a few years' time, if EA organizations grow exponentially and find that the EA talent pool gets used up (due to a mixture of people getting hired and people getting discouraged). After all, only ~3 years ago 80k was emphasizing we should focus more on talent gaps and less on funding gaps, and now we have stories like yours.

Comment by john_maxwell_iv on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-01T08:04:39.848Z · score: 3 (2 votes) · EA · GW

I've thought a lot about cluelessness, and I could give you feedback on something you're thinking of writing.

Comment by john_maxwell_iv on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T06:36:58.382Z · score: 36 (25 votes) · EA · GW

Meta note: I believe this is now the most upvoted EA forum post of all time by a wide margin. Seems like it struck a chord with a lot of people. It's probably worthwhile for people to write follow-up posts exploring issues related to human capital allocation, since it is a pretty central challenge for the movement. Example prompts:

  • Brainstorming Task Y for someone with an absurdly impressive resume.
  • Does the high difficulty of getting a job at an EA organization mean we should stop promoting EA? (What are the EA movement's current bottlenecks?)
  • Consequentialist cluelessness and how it relates to funding speculative projects and early-stage organizations (some previous discussion here).

Comment by john_maxwell_iv on Can the EA community copy Teach for America? (Looking for Task Y) · 2019-02-26T10:26:29.312Z · score: 3 (2 votes) · EA · GW

That makes sense.

The podcast with Rob Wiblin and Nick Beckstead is about whether EA should "aim to be a very broad movement that appeals to potentially hundreds of millions of people". I initially read your post as addressing that question. Maybe a good answer to that question is "not before we've found useful things for the people already interested in EA to do". My point about branding is most relevant in the case where we've found a useful thing and we want to scale it up beyond the existing EA community.

By the way, this new post is interesting, from a guy with a ridiculous resume who got rejected for 20 different EA positions.

Comment by john_maxwell_iv on Can the EA community copy Teach for America? (Looking for Task Y) · 2019-02-22T06:29:23.411Z · score: 8 (6 votes) · EA · GW

Another question related to Task Y: supposing Task Y does exist, would you rather people working on Task Y think of themselves as "Soft EAs", or as people who are part of the "Task Y community"? For example, if eating a vegan diet is Task Y, would you like vegans to start thinking of themselves as EAs due to their veganism? If veganism didn't exist already, and it was an idea that originated from within the EA community, would it be best to spin it off or keep it internal?

I can think of arguments on both sides:

  • Maybe there's already a large audience of people who have heard about EA and think it's really cool but don't know how to contribute. If these people already exist, we might as well figure out the best things for them to do. This isn't necessarily an argument for expansion of EA, however. (It's also not totally clear which direction this consideration points in.)
  • If Task Y is a task where the argument for positive impact is abstruse & hard to follow, then maybe a "Task Y Movement" isn't ever going to get off the ground because it lacks popular appeal. Maybe the EA movement has more popular appeal, and the EA movement's popular appeal can be directed into Task Y.
  • Some find the EA movement uninviting in its elitism. Even on this forum, reportedly the most elitist EA discussion venue, a highly upvoted post says: "Many of my friends report that reading 80,000 Hours’ site usually makes them feel demoralized, alienated, and hopeless." There have been gripes about the difficulty of getting grant money for EA projects from grantmaking organizations after it became known that "EA is no longer funding-limited". (I might be guilty of this griping myself.) Do we want average Janes and Joes reading EA career advice that Google software engineers find "very depressing"? How will they feel after learning that some EAs are considered 1000x as impactful as them?
  • Expansion of the EA movement itself could be hard to reverse, destroying option value.

Comment by john_maxwell_iv on Three Biases That Made Me Believe in AI Risk · 2019-02-16T04:53:51.414Z · score: 10 (5 votes) · EA · GW

If people are biased towards believing their actions have cosmic significance, does this also imply that people without math & CS skills will be biased against AI safety as a cause area?

Comment by john_maxwell_iv on The Need for and Viability of an Effective Altruism Academy · 2019-02-15T23:29:38.809Z · score: 13 (5 votes) · EA · GW

The EA Hotel hosted an EA Retreat which sounds a bit similar. Here's a report from a Czech EA retreat.

Comment by john_maxwell_iv on The Need for and Viability of an Effective Altruism Academy · 2019-02-15T23:27:55.256Z · score: 14 (5 votes) · EA · GW

The Pareto Fellowship even more so for me. Here CEA explains why they discontinued it.

Open Thread #43

2018-12-08T05:39:37.672Z · score: 8 (4 votes)

Open Thread #41

2018-09-03T02:21:51.927Z · score: 4 (4 votes)

Five books to make you super effective

2015-04-02T02:31:48.509Z · score: 6 (6 votes)