Posts

New Top EA Causes for 2020? 2020-04-01T07:39:59.687Z
April Fool's Day Is Very Serious Business 2020-03-13T09:16:37.023Z
Open Thread #46 2020-03-13T08:01:31.342Z
Should you familiarize yourself with the literature before writing an EA Forum post? 2019-10-06T23:17:09.317Z
[Link] How to contribute to the psychedelics ecosystem 2019-09-28T01:55:14.267Z
How to Make Billions of Dollars Reducing Loneliness 2019-08-24T01:49:45.629Z
New Top EA Cause: Flying Cars 2019-04-01T20:56:47.829Z
Open Thread #43 2018-12-08T05:39:37.672Z
Open Thread #41 2018-09-03T02:21:51.927Z
Five books to make you super effective 2015-04-02T02:31:48.509Z

Comments

Comment by John_Maxwell (John_Maxwell_IV) on The Mistakes of Focusing on Student Outreach in EA · 2022-09-25T20:52:23.776Z · EA · GW

Has anyone thought about outreach to retirees? They're no longer locked into a career, and could help fill the experience gap with part-time mentorship, consulting, or advisory work. I imagine many would be excited to work with young people & give back.

Could try cold messaging retirees on LinkedIn who seem to have relevant skills/expertise...

Comment by John_Maxwell (John_Maxwell_IV) on Thomas Kwa's Shortform · 2022-09-20T17:48:32.616Z · EA · GW

Thanks for the correction!

BTW, I hope it didn't seem like I was picking on you -- it just occurred to me that I could do the math for Rethink Priorities because your salaries are public. I have no reason to believe a cost-per-public-report estimate would differ, in either direction, for any other randomly chosen EA research organization. And of course most EA organizations correctly focus on making a positive impact rather than maximizing publication count.

Comment by John_Maxwell (John_Maxwell_IV) on Agree/disagree voting (& other new features September 2022) · 2022-09-17T09:18:33.093Z · EA · GW

The curation discussion made me think of this recent shortform post: "EA forum content might be declining in quality. Here are some possible mechanisms: [...]"

It seems like there has been an effort to get people less intimidated about posting to the Forum. I think this is probably good -- intimidation seems like a somewhat bad way to achieve quality control. However, with less intimidation and higher post volumes, we're leaning harder on upvotes & downvotes to direct attention and achieve quality control. Since our system is kind of like Reddit's (I believe Reddit is the only major social media site that's primarily driven by upvotes and downvotes rather than followings and/or recommendations), the obvious problems to fear would be the ones you see when subreddits get larger:

  • People who disagree with the current consensus get dogpiled with downvotes and self-select out of the community

  • Memes get more upvotes than in-depth content since they are more accessible and easier to consume

(My sense is that these are the 2 big mechanisms behind the common advice to seek out niche subreddits for high-quality discussion -- let me know if you're a redditor and you can think of other considerations.)

Anyway, this leaves me feeling positive about two-factor voting, including on top-level posts. It seems like a good way to push back on the "self-selection for agreement" problem.

It also leaves me feeling positive about curation as a way to push back on the "popcorn content" problem. In fact, I might take curation even further. Brainstorming follows...

Imagine I am a forum user thinking about investing several weeks or months writing an in-depth report on some topic. Ian David Moss wrote:

...it's pretty demotivating when a post that reflects five months and hundreds of hours of work is on the front page for less than a day. I feel like there's something wrong with the system when I can spend five minutes putting together a linkpost instead and earn a greater level of engagement.

Curation as described in the OP helps a bit, because there's a chance someone will notice my post while it's on the frontpage and suggest it for curation. But imagine I could submit an abstract/TLDR to a curator asking them to rate their interest in curating a post on my chosen topic. After I finish writing my post, I could "apply for curation" and maybe have some back-and-forth with a curator to get my post good enough. Essentially making curation on the forum work a bit like publication in an academic journal. While I'm dreaming, maybe someone could be paid to fact-check/red team my post before it goes live (possibly reflected in a separate quality badge, or maybe this should actually be a prereq for curation).

I think academic journals and online forums have distinct advantages. Academic journals seem good at incentivizing people to iron out boring details. But they lack the exciting social nature of an online forum which gets people learning and discussing things for fun in their spare time. Maybe there's a way to combine the advantages of both, and have an exciting social experience that also gets boring details right. (Of course, it would be good to avoid academic publishing problems too -- I don't know too much about that though.)

Another question is the role of Facebook. I don't use it, and I know it has obvious disadvantages, but even so it seems like there's an argument for making relevant Facebook groups the designated place for less rigorous posts.

Comment by John_Maxwell (John_Maxwell_IV) on Thomas Kwa's Shortform · 2022-09-17T08:03:06.274Z · EA · GW

Another possible mechanism is forum leadership encouraging people to be less intimidated and write more off-the-cuff posts -- see e.g. this or this.

Side note: It seems like a small amount of prize money goes a long way.

So napkin math suggests that the per-post cost of a contest post is something like 1% of the per-post cost of an RP publication. A typical RP publication is probably much higher quality. But maybe sometimes getting a lot of shallow explorations quickly is what's desired. (Disclaimer: I haven't been reading the forum much, didn't read many contest posts, and don't have an opinion about their quality. But I did notice the organizers of the ELK contest were "surprised by the number and quality of submissions".)

A related point re: quality is that smaller prize pools presumably select for people with lower opportunity costs. If I'm a talented professional who commands a high hourly rate, I might do the expected value math on e.g. the criticism prize and decide it's not worthwhile to enter.

It's also not clear if the large number of entries will persist in the longer term. Not winning can be pretty demoralizing. Suppose a talented professional goes against their better judgement, puts a lot of time into their entry, then loses and has no idea why. Will they enter the next contest they see? Probably not. They're liable to interpret the lack of a prize as a signal that entering wasn't worth their time.

Comment by John_Maxwell (John_Maxwell_IV) on New cause area: bivalve aquaculture · 2022-06-21T05:33:40.941Z · EA · GW

Supposing bivalves are in fact capable of suffering, might it still be economical to farm them in a way that causes almost no suffering? Presumably they don't suffer from confinement the way most animals do...

Comment by John_Maxwell (John_Maxwell_IV) on New cause area: bivalve aquaculture · 2022-06-21T05:29:18.463Z · EA · GW

One idea for reducing the cost is to automate the culling and grading process.

Comment by John_Maxwell (John_Maxwell_IV) on The biggest risk of free-spending EA is not optics or motivated cognition, but grift · 2022-05-15T11:59:01.550Z · EA · GW

Seems to me that scarcity can also be grift-inducing: e.g. if a tech company only hires the very top performers in its interviews, it might find that most hires are people who looked up the questions beforehand and rehearsed the answers. But if the company hires any solid performer, that doesn't induce a rehearsal arms race -- if it's possible to get hired without rehearsing, some people will value their integrity enough to do so.

The CEEALAR model is interesting because it combines a high admission rate with low salaries. You're living with EAs in an undesirable city, eating vegan food, and getting paid peanuts. This seems unattractive to professional grifters, but it might be attractive to deadbeat grifters. Deadbeat grifters seem like a better problem to have since they're less sophisticated and less ambitious on average.

Another CEEALAR thing: living with someone helps you get to know them. It's easier to put up a facade for a funder than for your roommates.

...three conditions that sociologists since the 1950s have considered crucial to making close friends: proximity; repeated, unplanned interactions; and a setting that encourages people to let their guard down and confide in each other, said Rebecca G. Adams, a professor of sociology and gerontology at the University of North Carolina at Greensboro. This is why so many people meet their lifelong friends in college, she added.

Source. When I was at CEEALAR, it seemed to me like the "college dorm" atmosphere was generating a lot of social capital for the EA movement.

I don't think CEEALAR is perfect (and I also left years ago so it may have changed). But the overall idea seems good to iterate on. People have objected in the past because of PR weirdness, but maybe that's what we need to dissuade the most dangerous sort of grifter.

Comment by John_Maxwell (John_Maxwell_IV) on Why Helping the Flynn Campaign is especially useful right now · 2022-05-14T00:12:50.151Z · EA · GW

I think there is no harm in setting up an alert in case there are more threads about him. The earlier you arrive in a thread, the greater the opportunity to influence the discussion. If people are going to be reading a negative comment anyways, I don't think there is much harm in replying, at least on Reddit -- Reddit doesn't tend to generate more views for a thread with more activity, the way Twitter can. In fact, replying to the older threads on Reddit could be a good way to test out messaging: almost no one is reading at this point, and you might get replies from people who left negative comments and learn how to change their minds. I've had success arguing for minority positions on my local subreddit by being friendly, respectful, and factual.

Beyond that I'm really not sure; creating new threads could be a high-risk/high-reward strategy to use if he's falling in the polls. Maybe get him to do an AMA?

My local subreddit's subscriber count is about 20% of the population of the city, and I've never seen a political candidate post there, even though there is lots of politics discussion. I think making an AMA saying what you've learned from talking to voters, and asking users what issues are most important to them, early in a campaign could be a really powerful strategy (edit: esp. if prearranged w/ subreddit moderators). I don't know if there is a comparable subreddit for District 6 though, e.g. this subreddit only has about 1% of the city population according to Wikipedia, and it's mostly pretty pictures right now so they might not like it if you started talking about politics.

Comment by John_Maxwell (John_Maxwell_IV) on Why Helping the Flynn Campaign is especially useful right now · 2022-05-13T06:17:31.394Z · EA · GW

Have you thought about crossposting this to some local subreddits? I searched for Carrick's name on Reddit and he seems to be very unpopular there. People are tired of his ads and think he's gonna be a shill for the crypto industry. Maybe you could make a post like "Why all of the Flynn ads? An explanation from a campaign volunteer"

Comment by John_Maxwell (John_Maxwell_IV) on Some clarifications on the Future Fund's approach to grantmaking · 2022-05-13T05:01:27.657Z · EA · GW

A model that I heard TripleByte used sounds interesting to me.

I wrote a comment about TripleByte's feedback process here; this blog post is great too. In our experience, the fear of lawsuits and PR disasters from giving feedback to rejected candidates was much overblown, even at a massive scale. (We gave every candidate feedback regardless of how well they performed on our interview.)

Something I didn't mention in my comment is that much of TripleByte's feedback email was composed of prewritten text blocks carefully optimized to be helpful and non-offensive. While interviewing a candidate, I would check boxes for things like "this candidate used their debugger poorly", and then their feedback email would automatically include a prewritten spiel with links on how to use a debugger well (or whatever). I think this model could make a lot of sense for the fund (a rough sketch of the mechanics follows the list below):

  • It makes giving feedback way more scalable. There's a one-time setup cost of prewriting some text blocks, and probably a minor ongoing cost of gradually improving your blocks over time, but the marginal cost of giving a candidate feedback is just 30 seconds of checking some boxes. (IIRC our approach was to tell candidates "here are some things we think it might be helpful for you to read" and then when in doubt, err on the side of checking more boxes. For funding, I'd probably take it a step further, and rank or score the text blocks according to their importance to your decision. At TripleByte, we would score the candidate on different facets of their interview performance and send them their scores -- if you're already scoring applications according to different facets, this could be a cheap way to provide feedback.)

  • It minimizes lawsuit risk. It's not that costly to have a lawyer vet a few pages of prewritten text that will get reused over and over. (We didn't have a lawyer look over our feedback emails, and it turned out fine, so this is a conservative recommendation.)

  • It minimizes PR risk. Someone who posts their email to Twitter can expect bored replies like "yeah, they wrote the exact same thing in my email." (Again, PR risk didn't seem to be an issue in practice despite giving lots of freeform feedback along with the prewritten blocks, so this seems like a conservative approach to me.)
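
To make the checkbox-to-text-block mechanics concrete, here's a minimal sketch in Python. The block names, wording, and email structure are all hypothetical -- this is just one way the model could work, not TripleByte's actual implementation:

```python
# Hypothetical prewritten feedback blocks, keyed by checkbox ID.
# Real blocks would be carefully written (and optionally vetted by a lawyer)
# once, then reused for every rejection.
FEEDBACK_BLOCKS = {
    "unclear_theory_of_change": (
        "The application didn't spell out a clear theory of change. "
        "Reviewers generally look for ..."
    ),
    "budget_detail": (
        "The budget was less detailed than we typically expect. ..."
    ),
    "weak_track_record_evidence": (
        "We had trouble verifying the track record described. ..."
    ),
}

def build_feedback_email(applicant_name: str, checked_boxes: list[str]) -> str:
    """Assemble a feedback email from the boxes a reviewer ticked."""
    intro = (
        f"Hi {applicant_name},\n\n"
        "Thanks for applying. Our process is noisy and we know we sometimes "
        "turn down strong applicants, but here are some areas we think it "
        "might be helpful for you to look at:\n\n"
    )
    body = "\n\n".join(
        FEEDBACK_BLOCKS[box] for box in checked_boxes if box in FEEDBACK_BLOCKS
    )
    return intro + body

# Marginal cost per rejection is roughly "tick a few boxes":
# print(build_feedback_email("Sally", ["unclear_theory_of_change", "budget_detail"]))
```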

If I were you, I think I'd experiment with hiring one of the writers of the TripleByte feedback emails as a contractor or consultant. Happy to make an intro.

A few final thoughts:

  • Without feedback, a rejectee is likely to come up with their own theory of why they were rejected. You have no way to observe this theory or vet its quality. So I think it's a mistake to hold yourself to a high bar. You just have to beat the rejectee's theory. (BTW, most of the EA rejectee theories I've heard have been very cynical.)

  • You might look into liability insurance if you don't have it already; it probably makes sense to get it for other reasons anyway. I'd be curious how the cost of insurance changes depending on the feedback you're giving.

Comment by John_Maxwell (John_Maxwell_IV) on Bad Omens in Current Community Building · 2022-05-13T03:55:16.744Z · EA · GW

Assume that people find you more authoritative, important, and hard-to-criticise than you think you are. It’s usually not enough to be open to criticism - you have to actually seek it out or visibly reward it in front of other potential critics.

Chapter 7 in this book had a number of good insights on encouraging dissent from subordinates, in the context of disaster prevention.

Comment by John_Maxwell (John_Maxwell_IV) on Why not offer a multi-million / billion dollar prize for solving the Alignment Problem? · 2022-04-18T07:55:43.422Z · EA · GW

My solution to this problem (originally posted here) is to run builder/breaker tournaments:

  • People sign up to play the role of "builder", "breaker", and/or "judge".
  • During each round of the tournament, triples of (builder, breaker, judge) are generated. The builder makes a proposal for how to build Friendly AI. The breaker tries to show that the proposal wouldn't work. ("Builder/breaker" terminology from this report.) The judge moderates the discussion.
    • Discussion could happen over video chat, in a Google Doc, in a Slack channel, or whatever. Personally I'd do text: anonymity helps judges stay impartial, and makes it less intimidating to enter because no one will know if you fail. Plus, having text records of discussions could be handy, e.g. for fine-tuning a language model to do alignment work.
  • Each judge observes multiple proposals during a round. At the end of the round, they rank all the builders they observed, and separately rank all the breakers they observed. (To clarify, builders are really competing against other builders, and breakers are really competing against other breakers, even though there is no direct interaction.)
  • Scores from different judges are aggregated. The top scoring builders and breakers proceed to the next round.
  • Prizes go to the top-ranked builders and breakers at the end of the tournament.

The hope is that by running these tournaments repeatedly, we'd incentivize alignment progress, and useful insights would emerge from the meta-game:

  • "Most proposals lack a good story for Problem X, and all the breakers have started mentioning it -- if you come up with a good story for it, you have an excellent shot at the top prize"
  • "Almost all the top proposals were variations on Proposal Z, but Proposal Y is an interesting new idea that people are having trouble breaking"
  • "All the top-ranked competitors in the recent tournament spent hours refining their ideas by playing with a language model fine-tuned on earlier tournaments plus the Alignment Forum archive"

I think if I was organizing this tournament, I would try to convince top alignment researchers to serve as judges, at least in the later rounds. The contest will have more legitimacy if prizes are awarded by experts. If you had enough judging capacity, you might even be able to have a panel of judges observe each proposal. If you had too little, you could require contestants to judge some matches they weren't participating in as a condition of entry. [Edit: This might not be the best idea because of perverse incentives. So simply offering judges cash compensation is probably a better idea.]

[Edit 2: One way things could be unfair is if e.g. Builder A happens to be matched with a strong Breaker A, and Builder B happens to be matched with a weaker Breaker B, it might be hard for a judge who observes both proposals to figure out which is stronger. To address this, maybe the judge could observe 4 pairings: Builder A with Breaker A, Builder A with Breaker B, Builder B with Breaker A, and Builder B with Breaker B. That way they'd get to see Builder A and Builder B face the same 2 adversaries, allowing for a more apples-to-apples comparison.]
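
To make the tournament mechanics concrete, here's a minimal sketch of one judge's pool, assuming the full builder-by-breaker pairing scheme from Edit 2 and simple average-rank aggregation (both are just illustrative choices, not a worked-out design):

```python
import itertools
from collections import defaultdict
from statistics import mean

def make_pairings(builders, breakers):
    """Every builder faces every breaker in a judge's pool, so the judge sees
    each builder against the same set of adversaries (the Edit 2 idea)."""
    return list(itertools.product(builders, breakers))

def aggregate_rankings(per_judge_rankings):
    """per_judge_rankings: one dict per judge mapping competitor -> rank (1 = best).
    Returns competitors sorted by mean rank across judges (lower is better)."""
    ranks = defaultdict(list)
    for ranking in per_judge_rankings:
        for competitor, rank in ranking.items():
            ranks[competitor].append(rank)
    return sorted(ranks, key=lambda competitor: mean(ranks[competitor]))

pairings = make_pairings(["Builder A", "Builder B"], ["Breaker A", "Breaker B"])
# -> 4 discussions for this judge to observe

judge_1 = {"Builder A": 1, "Builder B": 2}  # each judge ranks the builders they saw
judge_2 = {"Builder A": 1, "Builder B": 2}  # (breakers are ranked separately, the same way)
print(aggregate_rankings([judge_1, judge_2]))  # ['Builder A', 'Builder B']
```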

Comment by John_Maxwell (John_Maxwell_IV) on Milan Griffes on EA blindspots · 2022-03-19T21:52:48.436Z · EA · GW

To be frank, I think most of these criticisms are nonsense and I am happy that the EA community is not spending its time engaging with whatever the 'metaphysical implications of the psychedelic experience' are.

...

If the EA community has not thought sufficiently about a problem, anyone is very welcome to spend time thinking about it and do a write-up of what they learned... I would even wager that if someone wrote a convincing case for why we should be 'taking dharma seriously', then many would start taking it seriously.

These two bits seem fairly contradictory to me.

If you think a position is "nonsense" and you're "happy that the EA community is not spending its time engaging with" it, is someone actually "very welcome" to do a write-up about it on the EA Forum?

In a world where a convincing case can be written for a weird view, should we really expect EAs to take that view seriously, if they're starting from your stated position that the view is nonsense and not worth the time to engage with? (Can you describe the process by which a hypothetical weird-but-correct view would see widespread adoption?)

And, who would take the time to try & write up such a case? Milan said he thinks EA "basically can't hear other flavors of important feedback", suggesting a sense in which he agrees with your first paragraph -- EAs tend to think these views are nonsense and not worth engaging with, therefore there is no point in defending them at length because no one is listening.

I'm reminded of this post which stated:

We were told by some that our critique is invalid because the community is already very cognitively diverse and in fact welcomes criticism... It was these same people that then tried to prevent this paper from being published.

Comment by John_Maxwell (John_Maxwell_IV) on Rhetorical Abusability is a Poor Counterargument · 2022-01-09T17:43:35.995Z · EA · GW

Another different and perhaps more relevant question is whether popularizing belief in consequentialism has net bad consequences on the margin.

Comment by John_Maxwell (John_Maxwell_IV) on EA/Rationalist Safety Nets: Promising, but Arduous · 2022-01-01T20:53:16.031Z · EA · GW

Good point. However, since Howie was employed at an EA organization, he might be eligible for the idea described here. One approach is to implement several overlapping ideas, and if there's an individual for whom none of the ideas work, they could go through the process Ozzie described in the OP (with the associated unfortunate downsides).

Comment by John_Maxwell (John_Maxwell_IV) on Democratising Risk - or how EA deals with critics · 2021-12-30T06:37:40.164Z · EA · GW

I believe these are authors already working at EA orgs, not "brave lone researchers" per se.

Comment by John_Maxwell (John_Maxwell_IV) on EA/Rationalist Safety Nets: Promising, but Arduous · 2021-12-30T06:28:44.499Z · EA · GW

An idea for addressing the challenges is to make the safety net something that only a "genuine EA" would find attractive. For example, you get free room and board in a house with other EAs in a low-prestige + low-rent location, with mandatory EA volunteer hours (perhaps spent helping other inhabitants of the house with their issues?). Only vegan food is served, and the length of your stay is capped at N years. I'm not sure it's necessary to be 100% resistant to outsiders with sob stories; I'd say the important thing is that outsiders with sob stories should be able to market those stories elsewhere & get more of what they want. Also, even if they fake their way into an EA support group like what I described, they might find they absorb EA values and identify as an EA at the end... lol.

Comment by John_Maxwell (John_Maxwell_IV) on EA/Rationalist Safety Nets: Promising, but Arduous · 2021-12-30T06:18:54.114Z · EA · GW

Another idea is a safety net which estimates the opportunity cost associated with taking a low-paying EA role and caps the financial support at said opportunity cost. Potentially a much cheaper way to achieve the same end result.

The best approach might be to have people register for this safety net as soon as they get an EA role, so they can argue for a particular opportunity cost at that time and know how much "insurance" they're getting.

Comment by John_Maxwell (John_Maxwell_IV) on Democratising Risk - or how EA deals with critics · 2021-12-29T15:20:45.220Z · EA · GW

IMO we should seek out and listen to the most persuasive advocates for a lot of different worldviews. It doesn't seem epistemically justified to penalize a worldview because it gets a lot of obtuse advocacy.

Comment by John_Maxwell (John_Maxwell_IV) on Democratising Risk - or how EA deals with critics · 2021-12-29T15:06:16.728Z · EA · GW

I think people don't appreciate how much upvotes and especially downvotes can encourage conformity.

Suppose a forum user has drafted "Comment C", and they estimate a 90% chance that it will be upvoted to +4, and a 10% chance it will be downvoted to -1.

Do we want them to post the comment? I'd say we do -- if we take score as a proxy for utility, the expected utility is positive.

However, I submit that for most people, the 10% chance of being downvoted to -1 is much more salient in their mind -- the associated rejection/humiliation of -1 is a bigger social punishment than +4 is a social reward, and people take those silly "karma" numbers surprisingly seriously.
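
As a toy illustration of that asymmetry (the "felt payoff" numbers are invented purely to show the shape of the problem):

```python
p_up, p_down = 0.9, 0.1

# Expected karma if the comment is posted: clearly positive.
expected_karma = p_up * 4 + p_down * (-1)  # = 3.5

# But suppose +4 karma feels like a small win while being voted to -1 feels
# like a large social punishment (made-up payoffs, just for illustration):
felt_reward, felt_punishment = 1, -20
expected_felt_payoff = p_up * felt_reward + p_down * felt_punishment  # = -1.1

# A comment that is positive in expected karma can still be negative in
# expected felt payoff -- so it never gets posted.
print(expected_karma, expected_felt_payoff)
```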

It seems to me that there are a lot of users on this forum who have almost no comments voted below 0, suggesting a revealed preference to leave things like "Comment C" unposted (or even worse, they don't think the thoughts that would lead to "Comment C" in the first place). People (including me) just don't seem very willing to be unpopular. And as a result, we aren't just losing stuff that would be voted to -1. We're losing stuff which people thought might be voted to -1.

(I also don't think karma is a great proxy for utility. People are more willing to find fault with & downvote comments that argue for unpopular views, but I'd say arguments for unpopular views have higher value-of-information and are therefore more valuable to post.)

In terms of solutions... downvoting less is an obvious one. I like how Hacker News hides comment scores. Another idea is to disable scores on a thread-specific basis, e.g. in shortform.

Comment by John_Maxwell (John_Maxwell_IV) on Democratising Risk - or how EA deals with critics · 2021-12-29T14:40:12.862Z · EA · GW

I think offering financial incentives specifically for red teaming makes sense. I tend to think red teaming is systematically undersupplied because people are concerned (often correctly in my experience with EA) that it will cost them social capital, and financial capital can offset that.

I'm a fan of the CEEALAR funding model -- giving small amounts to dedicated EAs, with less scrutiny and less prestige distribution. IMO it is less incentive-distorting than more popular EA funding models.

Comment by John_Maxwell (John_Maxwell_IV) on Good news on climate change · 2021-11-05T03:52:18.814Z · EA · GW

I think it may be a mistake to look at fast progress in renewables and infer that countries will be able to meet their emissions targets without significant difficulty. Ramez Naam writes:

Our hardest climate problems – the ones that are both large and lack obvious solutions – are agriculture (and deforestation – its major side effect) and industry. Together these are 45% of global carbon emissions. And solutions are scarce.

Agriculture and land use account for 24% of all human emissions. That’s nearly as much as electricity, and twice as much as all the world’s passenger cars combined.

Industry – steel, cement, and manufacturing – account for 21% of human emissions – one and a half times as much as all the world’s cars, trucks, ships, trains, and planes combined.

Add industry, agriculture, and land use together and you have a very sticky, very difficult-to-improve 45% of carbon emissions.

By contrast, electricity and transportation are 39% of global emissions – nearly as big. The good news is that in electricity and transportation, we have momentum.

We do NOT have momentum in reducing the carbon emissions of industry and agriculture.

Comment by John_Maxwell (John_Maxwell_IV) on Why aren't you freaking out about OpenAI? At what point would you start? · 2021-10-14T00:05:16.767Z · EA · GW

I also noticed this post. It could be that OpenAI is more safety-conscious than the ML mainstream. That might not be safety-conscious enough. But it seems like something to be mindful of if we're tempted to criticize them more than we criticize the less-safety-conscious ML mainstream (e.g. does Google Brain have any sort of safety team at all? Last I checked they publish way more papers than OpenAI. Then again, I suppose Google Brain doesn't brand themselves as trying to discover AGI--but I'm also not sure how correlated a "trying to discover AGI" brand is likely to be with actually discovering AGI?)

Comment by John_Maxwell (John_Maxwell_IV) on The Cost of Rejection · 2021-10-13T23:58:12.587Z · EA · GW

It sounds like you're saying that there are many EAs investing tons of time in doing things that are mostly only useful for getting particular roles at 1-2 orgs. I didn't realize that.

I don't know that. But it seems like a possibility. [EDIT: Sally's story was inspired by cases I'm familiar with, although it's not an exact match.] And even if it isn't happening very much, it seems like we might want it to happen -- we might prefer EAs branch out and become specialists in a diverse set of areas instead of the movement being an army of generalists.

Comment by John_Maxwell (John_Maxwell_IV) on The Cost of Rejection · 2021-10-13T22:28:00.744Z · EA · GW

I think part of our disagreement might be that I see Wave as being in a different situation relative to some other EA organizations. There are a lot of software engineer jobs out there, and I'm guessing most people who are rejected by Wave would be fairly happy at some other software engineer job.

By contrast, I could imagine that stories like the following happening fairly frequently with other EA jobs:

  • Sally discovers the 80K website and gets excited about effective altruism. She spends hours reading the site and planning her career.

  • Sally converges on a particular career path she is really excited about. She goes to graduate school to get a related degree, possibly paying significant opportunity cost in earnings etc.

  • After graduating, Sally realizes there are actually about 3-4 organizations doing EA work in her selected area, and of those only 2 are hiring. She applies to both, but never hears back, possibly due to factors like:

    • She didn't do a great job of selling herself on her resume.

    • She's not actually applying for the role her degree+resume best suit her for.

    • It so happens that a lot of other people reading the 80K website got excited about the same thing Sally did around the same time, and the role is unexpectedly competitive.

    • The organization has learned more about what they're looking for in this role, and they no longer consider Sally's degree to be as useful/relevant.

    • Her resume just falls through the cracks.

At this point, Sally's only contact with the community so far is reading the 80K website and then not hearing back after putting significant effort into getting an EA career. Can we really blame her if she gives up on EA at this point, or at the very least starts thinking of herself as playing on "single player" mode?

My point here is that we should distinguish between "effort the candidate expended on your hiring process" and "effort the candidate expended to get a job at your org". The former may be far bigger than the latter, but this isn't necessarily visible.

The same visibility point applies to costs to the org -- Sally may complain bitterly to her friends about how elitist the org is in their hiring / how elitist EA is in general, which might count as a cost.

Anyway, I think total cost for giving feedback to everyone is probably the wrong number here -- really you should be looking at benefits relative to costs for an individual applicant.

I also think it'd be worth trying experiments like:

  • Ask candidates who want feedback to check a box that says "I promise not to complain or cause trouble if I don't like the feedback"

  • Instead of saying "we can't hire you because you don't have X", spend less time making sure you're understanding the resume correctly, and more time asking questions like "it looks like your resume doesn't have X, we were hoping to find someone with X for this role". If they've got something to say in response to that, that's evidence that they really want the job -- and it might be worth letting them progress to the next stage as a way of validating your resume screen.

Comment by John_Maxwell (John_Maxwell_IV) on The Cost of Rejection · 2021-10-11T11:32:48.714Z · EA · GW

Candidates haven't interacted with a human yet, so are more likely to be upset or have an overall bad experience with the org; this is also exacerbated by having to make the feedback generic due to scale

...

Candidates are more likely to feel that the rejection didn't give them a fair chance (because they feel that they'd do a better job than their resume suggests) and dispute the decision; reducing the risk of this (by communicating more effectively + empathetically) requires an even larger time investment per rejection

Are you speaking from experience on these points? They don't seem obvious to me. In my experience, having my resume go down a black hole for a job I really want is incredibly demoralizing. I'd much rather get a bit of general feedback on where it needs to be stronger. And since I'm getting rejected at the resume stage either way, it seems like the "frustration that my resume underrates my skills" factor would be constant.

I'm also wondering if there is a measurement issue here -- giving feedback could greatly increase the probability that you will learn that a candidate is frustrated, conditional on them feeling frustrated. It's interesting that the author of the original post works as a therapist, i.e. someone paid to hear private thoughts we don't share with others. This issue could be much bigger than EA hiring managers realize.

Comment by John_Maxwell (John_Maxwell_IV) on The Cost of Rejection · 2021-10-11T11:04:17.770Z · EA · GW

On the topic of feedback... At Triplebyte, where I used to work as an interviewer, we would give feedback to every candidate who went through our technical phone screen. I wasn't directly involved in this, but I can share my observations -- I know some other EAs who worked at Triplebyte were more heavily involved, and maybe they can fill in details that I'm missing. My overall take is that offering feedback is a very good idea and EA orgs should at least experiment with it.

  • Offering feedback was a key selling point that allowed us to attract more applicants.

  • As an interviewer, I was supposed to be totally candid in my interview notes, and also completely avoid any feedback during the screening call itself. Someone else in the company (who wasn't necessarily a programmer) would lightly edit those notes before emailing them -- they wanted me to be 100% focused on making an accurate assessment, and leave the diplomacy to others. My takeaway is that giving feedback can likely be "outsourced" -- you can have a contractor / ops person / comms person / intern / junior employee take notes on hiring discussions, then formulate diplomatic but accurate feedback for candidates.

  • My boss told me that the vast majority of candidates appreciated our feedback. I never heard of any candidate suing us, even though we were offering feedback on an industrial scale. I think occasionally candidates got upset, but they mostly insulated me from that unless they thought it would be valuable for me to hear -- they wanted my notes to stay candid.

  • Jan writes: "when evaluating hundreds of applications, it is basically certain some errors are made, some credentials misunderstood, experiences not counted as they should, etc. - but even if the error rate is low, some people will rightfully complain, making hiring processes even more costly." I think insofar as you have low confidence in your hiring pipeline, you should definitely be communicating this to candidates, so they don't over-update on rejection. At Triplebyte, we had way more data to validate our process than I imagine any EA org has. But I believe that "our process is noisy and we know we're rejecting good candidates" was part of the standard apologetic preamble to our feedback emails. (One of the worst parts of my job was constant anxiety that I was making the wrong call and unfairly harming a good candidate's career.)

  • Relatedly... I'm in favor of orgs taking the time to give good feedback. It seems likely worthwhile as an investment in the human capital of the rejectee, the social capital of the community as a whole, and improved community retention. But I don't think feedback needs to be good to be appreciated -- especially if you make it clear if your feedback is low confidence. As a candidate, I'm often asking the question of which hoops I need to jump through in order to get a particular sort of job. If part of hoop-jumping means dealing with imperfect interviewers who aren't getting an accurate impression of my skills, I want to know that so I can demonstrate my skills better.

  • But I also think that practices that help you give good feedback are quite similar to practices that make you a good interviewer in general. If your process doesn't give candidates a solid chance to demonstrate their skills, that is something you should fix if you want to hire the best people! (And hearing from candidates whose skills were, in fact, judged inaccurately will help you fix it! BTW, I predict if you acknowledge your mistake and apologize, the candidate will get way less upset, even if you don't end up hiring them.) A few more examples to demonstrate the point that interviewing and giving feedback are similar competencies:

    • Concrete examples are very useful for feedback. And I was trained to always have at least one concrete example to back up any given assessment, to avoid collecting fuzzy overall impressions that might be due to subconscious bias. (BTW, I only saw a candidate's resume at the very end of the interview, which I think was helpful.)

    • Recording the interview (with the candidate's consent), so you can review it as needed later, is another thing that helps with both objectives. (The vast majority of Triplebyte candidates were happy to have their interview recorded.)

    • Using objective, quantifiable metrics (or standard rubrics) makes your process better, and can also give candidates valuable info on their relative strengths and weaknesses. (Obviously you want to be diplomatic, e.g. if a candidate really struggled somewhere, I think we described their skills in that area as "developing" or something. We'd also give them links to resources to help them level up on that.)

  • At Triplebyte, we offered feedback to every candidate regardless of whether they asked for it. I once suggested to my boss that we should make it opt-in, because that would decrease the time cost on our side and also avoid offending candidates who didn't actually want feedback. IIRC my boss didn't really object to that thought. It wasn't deemed a high-priority change, but I would suggest organizations creating a process from scratch make feedback opt-in.

BTW if any EA hiring managers have questions for me I'm happy to answer here, via direct message, or on a video call. I interviewed both generalist software engineers (tilted towards backend web development) and machine learning engineers.

Comment by John_Maxwell (John_Maxwell_IV) on What Makes Outreach to Progressives Hard · 2021-05-23T05:15:21.440Z · EA · GW

Just for reference, there's a group kinda like Resource Generation called Generation Pledge that got a grant from the EA Meta Fund. I think they've got a bit more of an EA emphasis.

Comment by John_Maxwell (John_Maxwell_IV) on Insights into mentoring from WANBAM · 2021-05-18T18:06:42.861Z · EA · GW

We are currently actively exploring how we can scale and provide mentoring support, in addition to WANBAM, to our community (those who are interested in/ inspired by Effective Altruism) more broadly.

You probably thought of this, but I suppose you could move in more of an 80K-ish direction by asking mentees to take notes on the best generalizable advice they get in their mentoring conversations, then periodically publishing compilations of this (perhaps organized by topic or something). If I was a mentor, I think I'd be more willing to spend time mentoring if my advice was going to scale beyond a single person.

Comment by John_Maxwell (John_Maxwell_IV) on EA is a Career Endpoint · 2021-05-18T11:47:54.556Z · EA · GW

My sense is that Triplebyte focuses on "can this person think like an engineer" and "which specific math/programming skills do they have, and how strong are they?" Then companies do a second round of interviews where they evaluate Triplebyte candidates for company culture. Triplebyte handles the general, companies handle the idiosyncratic.

I used to work as an interviewer for TripleByte. Most companies using TripleByte put TripleByte-certified candidates through their standard technical onsite. From what I was able to gather, the value prop for companies working with TripleByte is mostly about 1. expanding their sourcing pipeline to include more quality candidates and 2. cutting down on the amount of time their engineers spend administering screens to candidates who aren't very good.

Some of your comments make it sound like a TripleByte-like service for EA has to be a lot better than what EA orgs are currently doing to screen candidates. Personally, I suspect there's a lot of labor-saving value to capture if it is merely as good as (or even a bit worse than) current screens. It might also help organizations consider a broader range of people.

Comment by John_Maxwell (John_Maxwell_IV) on Introducing High Impact Athletes · 2021-03-31T09:00:12.023Z · EA · GW

Ryan Carey suggests that athletes could have an impact by giving EA presentations to high schoolers.

Comment by John_Maxwell (John_Maxwell_IV) on Geographic diversity in EA · 2021-03-31T04:54:09.521Z · EA · GW

But it's not easy to visit or live in an EA hub city like London or San Francisco, for most of the global population (financially, legally, for family reasons) ... Fewer like-minded people around you means you have to put in a lot more effort to stay engaged and informed

EA Anywhere might help :-)

Comment by John_Maxwell (John_Maxwell_IV) on RyanCarey's Shortform · 2021-03-31T03:51:59.565Z · EA · GW

Relevant

Comment by John_Maxwell (John_Maxwell_IV) on How much does performance differ between people? · 2021-03-30T23:29:51.791Z · EA · GW

It might be worth discussing the larger question which is being asked. For example, your IMO paper seems to be work by researchers who advocate looser immigration policies for talented youth who want to move to developed countries. The larger question is "What is the expected scientific impact of letting a marginal IMO medalist type person from Honduras immigrate to the US?"

These quotes from great mathematicians all downplay the importance of math competitions. I think this is partially because the larger question they're interested in is different, something like: "How many people need to go into math for us to reap most of the mathematical breakthroughs that this generation is capable of?"

Comment by John_Maxwell (John_Maxwell_IV) on How much does performance differ between people? · 2021-03-30T22:40:40.384Z · EA · GW

YC having a low acceptance rate could mean they are highly confident in their ability to predict ex ante outcomes. It could also mean that they get a lot of unserious applications. Essays such as this one by Paul Graham bemoaning the difficulty of predicting ex ante outcomes make me think it is more the latter. ("it's mostly luck once you get down to the top 1-5%" makes it sound to me like ultra-successful startups should have elite founders, but my take on Graham's essay is that ultra-successful startups tend to be unusual, often in a way that makes them look non-elite according to traditional metrics -- I tend to suspect this is true of exceptionally innovative people more generally)

Comment by John_Maxwell (John_Maxwell_IV) on Apply now for EA Global: Reconnect (March 20-21) · 2021-03-20T00:11:17.636Z · EA · GW

Glad I could help :D

Comment by John_Maxwell (John_Maxwell_IV) on Apply now for EA Global: Reconnect (March 20-21) · 2021-03-13T00:53:05.095Z · EA · GW

They drifted away from the community, but are they still working towards EA goals?

  • If they have stopped working towards EA goals, going to this event could be an opportunity to explore whether this is a decision they [still] endorse.

  • If they have continued to work towards EA goals on their own, going to this event could be a good opportunity to learn & share the kind of things that are most readily learned & shared through face-to-face chitchat. (A fairly large set of things, in my experience.) Additionally, making new face-to-face connections with people lets you trade favors and establish collaborative relationships that are harder to establish through e.g. sending them a cold email. (See: EA is vetting-constrained.) I expect the benefit here will be high variance. There's a high probability you have a weekend full of friendly-but-useless video calls (which will hopefully help with quarantine blues at least!) There's a small probability that you end up learning or sharing something that makes a big difference for you or someone else, or making an important new connection. (If someone hasn't been interacting with the community as much, I expect this probability to be higher, since the backlog of conversations they haven't had and new people they haven't met is gonna be larger.)

Might be worth noting the conventional wisdom in the business world, that networking is really important. As EAs we might have a bias towards things which are more measurable and legible, and I don't think the benefits of networking are always like that.

I noticed the following facts about people who work with the door open or the door closed. I notice that if you have the door to your office closed, you get more work done today and tomorrow, and you are more productive than most. But 10 years later somehow you don't know quite what problems are worth working on; all the hard work you do is sort of tangential in importance. He who works with the door open gets all kinds of interruptions, but he also occasionally gets clues as to what the world is and what might be important. Now I cannot prove the cause and effect sequence because you might say, "The closed door is symbolic of a closed mind." I don't know. But I can say there is a pretty good correlation between those who work with the doors open and those who ultimately do important things, although people who work with doors closed often work harder. Somehow they seem to work on slightly the wrong thing - not much, but enough that they miss fame.

Richard Hamming, Turing award winner, on what he observed at Bell Labs

Comment by John_Maxwell (John_Maxwell_IV) on Why Hasn't Effective Altruism Grown Since 2015? · 2021-03-10T08:38:35.482Z · EA · GW

Would it be better for people to describe their "conversion" experiences on a Forum thread?

I suspect the EA Survey is the ideal place to ask this sort of question because selection effects will be lowest that way. The best approach might be to gather some qualitative written responses, try to identify clusters in the responses or dimensions along which the responses vary, then formulate quantitative survey questions based on the clusters/dimensions identified.
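
If it helps to picture the "identify clusters" step, here's a minimal sketch using off-the-shelf text clustering. The method (TF-IDF plus k-means) and the example responses are just assumptions for illustration, not a claim about how the EA Survey is actually analyzed:

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "I found EA through the 80,000 Hours podcast",
    "A friend at university invited me to a local group",
    "I read Doing Good Better and got hooked",
    "My campus group ran an intro fellowship",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Inspect which responses land in each cluster, then write one quantitative
# survey question per recurring theme (e.g. "Which of the following best
# describes how you first encountered EA?").
for label, text in zip(labels, responses):
    print(label, text)
```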

Comment by John_Maxwell_IV on [deleted post] 2021-03-08T11:16:13.886Z

Will MacAskill wrote an article which discussed funding cannibalism here. Unfortunately the article seems to be behind a registration wall now; I don't know how deep his investigation was.

Comment by John_Maxwell (John_Maxwell_IV) on Why do content blockers still suck? · 2021-01-22T13:51:05.675Z · EA · GW

Sorry to hear about that.

I don't think there is anything on the market which blocks things by default

Not sure if this is helpful, but I turn my internet blocker on every night before bed, and only turn it off the next day after a self-imposed mandatory waiting period.

Comment by John_Maxwell (John_Maxwell_IV) on Why do content blockers still suck? · 2021-01-19T21:54:14.524Z · EA · GW

Most content blockers are free, right? Maybe what's going on is: there aren't incentives to make a free offering really good, but the existence of free offerings will discourage people from creating paid offerings.

https://freedom.to is a paid offering that looks like it might address some of your complaints.

Comment by John_Maxwell (John_Maxwell_IV) on AMA: Elizabeth Edwards-Appell, former State Representative · 2021-01-11T12:25:35.209Z · EA · GW

Now that you have firsthand experience of the incentives that public office-holders (and candidates for public office) face, how do you think those incentives could be improved? Trying to take a meta approach here ;-)

Comment by John_Maxwell (John_Maxwell_IV) on Can I have impact if I’m average? · 2021-01-05T11:39:23.428Z · EA · GW

A couple reasons to be skeptical of the "top 1%" idea:

  • It does seem true that some people are much more famous than others, but I don't think we can trust the distribution of fame to accurately reflect the distribution of contributions. The famous CEO may get all the credit, but maybe they couldn't have done it without a whole host of key employees.

  • Even if the distribution of actual contributions is skewed, that doesn't mean we can reliably predict the big contributors in advance. I found this paper which says work sample tests used in hiring ("suggested to be among the most valid predictors") only weakly correlate with job performance. Speaking for myself, a few years ago some EAs I respected told me "John, I don't think you are cut out for X." That sounded plausible to me at the time, but I decided to take a shot at X anyways, and I now believe their assessment was incorrect.

Longer exposition here.

But at the end of the day, constantly comparing yourself to others is not a good mental habit. Better to compare yourself with yourself. Which version of you will do more good: the version that wallows in despair, or the version that identifies people you think are doing great stuff and asks "Is there something I can do to help?" Last I checked, we have long lists of EA project ideas which aren't getting worked on.

Comment by John_Maxwell (John_Maxwell_IV) on My mistakes on the path to impact · 2020-12-07T06:08:34.474Z · EA · GW

See also answers here mentioning that EA feels "intellectually stale". A friend says he thinks a lot of impressive people have left the EA movement because of this :(

I feel bad, because I think maybe I was one of the first people to push the "avoid accidental harm" thing.

Comment by John_Maxwell (John_Maxwell_IV) on Should marginal longtermist donations support fundamental or intervention research? · 2020-12-01T04:43:53.409Z · EA · GW

I suspect you want a mix of both: fundamental research helps inform what kind of intervention research is useful, and intervention research helps inform what kind of fundamental research is useful. Given a long-term effect, you can try to find a lever which achieves that effect; given a big lever that's available for pulling, you can try to figure out what its long-term effect is likely to be.

Comment by John_Maxwell (John_Maxwell_IV) on A Case and Model for Aggressively Funding Effective Charities · 2020-09-28T02:00:03.261Z · EA · GW

Sure, but if you only award prizes for the latter, I think people will gradually recognize the difference.

Maybe your point is that the opinions of loudmouths like myself will be overrepresented in such a scheme? Allowing for private submissions could help address that.

Comment by John_Maxwell (John_Maxwell_IV) on A Case and Model for Aggressively Funding Effective Charities · 2020-09-25T09:42:32.166Z · EA · GW

In terms of hearing diverse perspectives, I suspect there are more effective ways to accomplish that goal than having diverse funders. For example, a funder could require that a nonprofit lay their thinking out publicly in detail, and offer prizes for the best critiques other people write in response to their thinking. That way you're optimizing for hearing from people who think they have something to add.

Comment by John_Maxwell (John_Maxwell_IV) on Deliberate Consumption of Emotional Content to Increase Altruistic Motivation · 2020-09-14T11:53:28.269Z · EA · GW

I thought this recent Netflix documentary which talks a lot about Bill Gates' charity work was fairly inspiring (and informative). I haven't tried watching videos of suffering... I doubt it would be very motivating for the sort of study/brainstorm/write EA work I most want myself to do.

Comment by John_Maxwell (John_Maxwell_IV) on AMA: Owen Cotton-Barratt, RSP Director · 2020-09-03T09:37:22.886Z · EA · GW

Why not just have the people who need mentorship serve as "research personal assistants" to improve the productivity of people who are qualified to provide mentorship? (This describes something which occurs between professors and graduate students, right?)

Comment by John_Maxwell (John_Maxwell_IV) on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015? · 2020-08-15T06:23:28.329Z · EA · GW

At time T0, someone suggests X as a joke.

Telling jokes as an EA cause.