Notes on hiring a copyeditor for CEA

post by Aaron Gertler (aarongertler) · 2020-01-09T12:56:37.126Z · score: 89 (45 votes) · EA · GW · 11 comments


  Statistics on the hiring process
  The job description
    Some thoughts on how I handled this process, and what I wish I’d changed:
  The initial work test
    Scoring rubric:
    Score distribution:
  Grading the applications
    Scoring rubric:
    Score distribution:
  Second-round decisions
  Follow-up and feedback
  Keeping in touch with candidates (want to hire someone?)
  Any questions?

Over the last year, there’s been a lot of discussion about the “EA job market” and how to build an effective career in an EA field.

A few months ago, I went through my first hiring process from the employer side when I found a part-time copyeditor to work on CEA’s social media posts (and a few other tasks). I thought my process might be interesting to people who've been following the aforementioned discussion, so I’m writing up my notes in this post.

Meta: It can be legally tricky to write applications, conduct job interviews, and provide feedback to applicants. I recommend consulting your local HR expert before attempting to hire.


Statistics on the hiring process

The initial application was meant to give me information about applicants’ editing skills and experience, as well as their familiarity with EA (which I felt would be helpful for the role, given the material they’d be editing and sometimes writing). 

Applicants were asked to edit half of a transcript of an EA Global talk generated by a transcription service, which contained many errors. They were given a two-hour limit to make it as clean and readable as they could. Only a few applicants didn’t finish the full edit, some because they went over the time limit and others because they applied very soon before the deadline. It’s possible that some people ignored the limit; I spot-checked the edit history of some of the best transcripts and didn’t see this, but I didn’t check all transcripts in this way.

Applicants were also asked to provide some information about themselves; see the next section for a link to the full job description.

Number of applicants who completed the initial work trial: 183

Number of applicants who scored at least “1” on each of two 0-3 scales (one for EA/editing experience, one for editing skill): 147

(I describe my system in more detail below, but you can think of this as “the number of people who followed the instructions, seemed to be fluent in English, and indicated a genuine interest in the position.”)

Number of applicants who reached the interview stage: 21

Number of applicants who reached the “final work trial” stage: 8


The job description

Here’s the description I used to advertise the position. I shared it through the EA Newsletter, the 80,000 Hours job board, the EA Job Postings Facebook group, and EA Work Club. I didn’t try to track how many candidates came through each source. The position was posted in the first week of June, and was open until the last day of June.

Some thoughts on how I handled this process, and what I wish I’d changed:

I didn’t have a good sense for how many people would apply, so I erred on the side of having a more “open” description: I described an “ideal” candidate, but didn’t set out many strict requirements.

On the one hand, this led to many more people applying than I had expected, and quite a lot of applicant time being invested in a work test for minor (if any) benefit to applicants. This makes me wish I’d done more research beforehand to better understand how many people tend to apply for these positions — for example, by asking GiveWell about their past experience hiring people to write up conversation notes (a position with similarly loose qualifications).

On the other hand, had I screened for professional editing experience or professional EA experience, I might have missed out on some of my best candidates, perhaps including the person I eventually hired.

I could have saved even more applicant time by asking contacts in the EA community for references; several candidates I chose to interview were people I expect I’d have found through this method. However, since I didn’t think the position would require a strong EA background to be done well, I wanted to make the process more open and give chances to people looking for their first EA-aligned work experience.


The initial work test

The most important skill for the position was basic copyediting: Even if someone had a strong resume/background and a lot of passion for EA, I still needed to be able to trust their edits and minimize the time I spent reviewing their work.

Things I think went well with this test:

Things I wish I’d reconsidered:

Scoring rubric:

The difference between, say, a 2 and a 2.5 was highly subjective. It’s likely that I gave slightly higher scores to candidates whose natural “flow” in editing sentences was similar to mine, even if other candidates’ work was perfectly grammatical/smooth. This was subconscious, but I think I endorse it; if I’m going to look over a lot of someone’s work, it helps if we have a similar sense for how sentences should sound. 

Score distribution:

Feedback from the person I eventually hired:

“One of the reasons I applied was that your initial screen/first step was a test — not an interview. It made me respect CEA's attempts to avoid hiring bias (plus it backed up the org's claim that you are evidence-based).”

Grading the applications

In addition to the writing task, I asked applicants to send some information about themselves. I graded this on a 0-3 scale, as well. I’m satisfied with the information I asked for, and I can’t easily think of any questions I wish I’d included in hindsight.

Scoring rubric:

I cared more about performance on the work trial than on an applicant’s EA background; I think it takes less time and effort to become familiar with EA (assuming at least a basic inclination toward its ideas) than to become a very polished copywriter/editor from a baseline of having only moderate skill. 

Within the application score, knowing about someone’s editing background was helpful, but I put less weight on that than on their EA background, because I had access to their editing task already. (For what it’s worth, editing experience did correlate quite positively with performance on the task.)

Score distribution:


Second-round decisions

I calculated a “total” score by multiplying the 0-3 editing score by two, then adding the 0-3 application score. Nine candidates scored 7 or above, with many more between 6 and 7. I selected some but not all candidates from the 6-7 range, based in part on the following factors:

This isn’t an exhaustive list of factors that mattered (there may have been a dozen elements in any given application that made me more or less inclined to move a candidate to the next round), but it covers all the most important points.
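The score combination described above is simple arithmetic, but a minimal sketch may make the weighting explicit (the function name and the example inputs are illustrative, not from the original post):

```python
def total_score(editing: float, application: float) -> float:
    """Combine the two 0-3 scores, weighting the editing
    work test twice as heavily as the application."""
    return 2 * editing + application

# A perfect work test with a middling application still clears
# the 7-point interview threshold; the reverse does not.
assert total_score(3.0, 1.5) == 7.5
assert total_score(1.5, 3.0) == 6.0
```

The 2:1 weighting reflects the judgment, stated above, that editing skill was harder to train than EA familiarity.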


Follow-up and feedback

Of the candidates I did not interview, most got the following email:

Thank you for applying to the freelance copyeditor position at CEA. I appreciate your taking the time to complete the initial work test, and your interest in our mission.

After careful consideration, I've decided not to move forward with your application.

Please don’t take this as a sign that you aren't a capable editor, or that you shouldn’t apply for positions with other organizations connected to effective altruism. More than 180 people applied for the position, and many people with strong qualifications didn’t pass the first stage. 

If you have any further questions, please let me know; I'm open to providing individual feedback, though it may be brief. And I wish you the best of luck with any other editing jobs for which you may apply!

However, 28 non-interviewed candidates demonstrated strong editing skills and had work trial scores competitive with some of the candidates I interviewed. I expected these candidates to be competitive applicants for other writing/editing jobs at EA orgs that might open up in the future. They got the following email:

Thank you for applying to the copyeditor position at CEA. I appreciate your taking the time to complete the work test, and your interest in our mission.

More than 180 people applied for the position, and many people with strong qualifications didn’t pass the first stage. After careful consideration, I’ve decided not to move forward with your application.

However, you were one of a small number of applicants who, despite not passing to the second round, made unusually strong edits. Based on your work test, I think you might be a good candidate for other jobs that involve writing or editing for organizations connected to effective altruism — most of which don’t have nearly as many applicants. I encourage you to apply to those positions in the future (if they interest you).

If you have any further questions, please let me know; I wish you the best of luck with your future applications!

(Note that I didn't include the "feedback" note here: In retrospect, this was a mistake. While high-scoring applicants may not have needed editing advice, many people also asked for my thoughts on their resumes/intro emails, and this seemed valuable to provide; I wish I'd gotten more requests.)

Candidates I did interview got an email which included the following language, meant to help them understand the process and decide whether they wanted to keep investing time in the position. (For example, someone might have been willing to stay in the running for a freelance gig if they were competing with two other people, but not with twenty.)

For context, this is what I have planned for the rest of the application process:

  1. Interviews with candidates who passed the first round (21 out of 183 applicants)
  2. A second work test for approximately 10 of those candidates, based on their interviews and a closer examination of their original work tests.
  3. One person chosen to take the position — though it’s possible that work might be split among multiple people, or that other applicants might be asked to take the position if the first person hired becomes unable to continue the work.

I strongly expect to make my final decision by the end of August. 

Please let me know if you’d require an earlier decision in order to be able to take the job.

Nearly two dozen applicants (of those I didn’t interview) asked for feedback. I responded to them with specific notes on their particular applications, including editing mistakes and areas where I felt uncertain about their experience, as well as positive feedback (many of these candidates did make strong edits, or wrote excellent application emails). 

Feedback on my feedback (when I received it) was highly positive; it seems as though people really appreciated hearing back from a “hiring manager”. Sending those notes took a fair amount of time, but I’m glad I did it; it seems to have been helpful to some of the applicants, and I hope that it also made them feel more positive about CEA, and about EA in general. I’d cautiously recommend that other organizations do the same if they can spare the time and trouble (again, legal trickiness).



Every candidate offered an interview chose to schedule one. The interviews had less structure than I’d have liked; while I asked each candidate the same set of initial questions, alongside specific questions about their application, I didn’t have a scoring rubric in mind. 

I wound up giving each interview a score “out of 10” (actual scores ranged from 6 to 9) after I finished, which made it hard to directly compare candidates later on. However, the candidate with the strongest interview, whom I eventually hired, also had among the strongest trial tasks in both rounds, so I didn’t need to think too hard about these comparisons.

How I selected candidates for the second work trial (factors ordered from most to least important):

  1. The strength of their initial application (still a major factor, and weighted more heavily than the interview)
  2. How certain I was that they’d be available for the position for the right number of hours, and for a long time to come, despite its part-time nature (discussing this was a part of the interview)
  3. How engaged and curious they were during the interview. Did they ask questions that showed they were seriously thinking about how the position would work out for them? Did they seem to be thinking carefully about my questions before they answered?


The eight strongest candidates received this task as the final stage of the application.

I’m happy with the first two tasks (I got a great sense for how the candidates thought about social media, plus a lot of useful suggestions for improvements to the EA Newsletter). But I don’t think the third task wound up mattering much; it’s possible that I should have skipped it to save the candidates’ time.

The most important factors in my evaluation of this test (in no particular order):

Note: The set of “known” typos/oddities consisted of all the different issues that candidates found; I didn't re-copyedit my own newsletter for this task.

Three of the eight candidates had especially strong tests (particularly their Newsletter advice). I informed the top candidate that I wanted to offer her the position, and let the other two know that I was strongly considering them if the top candidate did not accept.

After thinking about the initial offer and negotiating briefly for a higher rate, she did accept the position, and is currently working on several CEA projects. (Her new rate was still much lower than those requested by candidates I excluded for their high rate requirements.)

She requested that I not use her name, but gave me permission to talk a bit about her background and application:


Keeping in touch with candidates (want to hire someone?)

Hiring for this position took dozens of hours of my time, and hundreds of hours of candidates’ time. I want to squeeze as much value as I can from the process.

So, in addition to hiring a candidate, I’ve also kept a record of the other applicants who most impressed me, so that I can let them know if I hear about promising opportunities. I’ve already referred a few candidates for different part-time roles at other EA orgs, and I anticipate more chances to come.

(If you’re looking to hire someone for writing and/or editing, let me know!)


Any questions?

I’d be happy to respond to questions about the hiring process or anything else I’ve mentioned in this post. Please leave a comment or send me an email.


Comments sorted by top scores.

comment by Khorton · 2020-01-09T22:43:20.794Z · score: 9 (6 votes) · EA(p) · GW(p)

Great post, Aaron! I appreciate the detail you included.

comment by agent18 · 2020-03-03T20:12:25.019Z · score: 3 (2 votes) · EA(p) · GW(p)

Hi Aaron, can you also answer the following for me, please?

So, in addition to hiring a candidate, I’ve also kept a record of the other applicants who most impressed me, so that I can let them know if I hear about promising opportunities. I’ve already referred a few candidates for different part-time roles at other EA orgs, and I anticipate more chances to come.

  1. How many people "most impressed you"?

  2. How many people have you already referred for different part-time roles at other EA orgs?

  3. How many people do you think EA orgs are hiring in this job type currently or within the last year?

  4. How many people on your list who didn't get hired do you expect to get hired elsewhere at EA orgs? (Gut feeling, guess, based on past experience, anything.)

comment by Ula · 2020-01-14T20:12:45.911Z · score: 3 (2 votes) · EA(p) · GW(p)

This is super useful; we're just about to go through a similar process (hiring a full-time editor). Thanks for sharing!

comment by Aaron Gertler (aarongertler) · 2020-01-14T23:46:04.220Z · score: 3 (2 votes) · EA(p) · GW(p)

Could you share the job listing with me? I'd love to forward it on to some of the candidates!

comment by Ula · 2020-01-17T14:41:15.504Z · score: 1 (1 votes) · EA(p) · GW(p)

Probably Joey has already sent it, but if not:

comment by agent18 · 2020-03-03T20:07:28.552Z · score: 1 (1 votes) · EA(p) · GW(p)

And maybe this is a bit much, but do you have the distribution of where you got your candidates from? Here is an example from EAF's hiring round [EA · GW]

comment by Aaron Gertler (aarongertler) · 2020-03-03T20:08:16.643Z · score: 3 (2 votes) · EA(p) · GW(p)

I didn't collect information on where people heard about the position, though that would have been a good idea!

comment by agent18 · 2020-02-29T15:05:35.798Z · score: 1 (1 votes) · EA(p) · GW(p)

Nice Post Aaron! I have the following questions:

1. I was wondering if you could also provide the scoring rubric and distribution for the interview round and the final work trial round?

2. Within what time frame did you receive the 180+ applications?

comment by Aaron Gertler (aarongertler) · 2020-03-02T12:17:54.785Z · score: 2 (1 votes) · EA(p) · GW(p)

  1. There was no formal written rubric for either round, and submissions for the final work trial weren't given numerical scores. As I noted in my post: I wound up giving each interview a score “out of 10” (actual scores ranged from 6 to 9) after I finished. (However, these scores were fairly subjective.)
  2. I began to post the job listing roughly a month before applications were due. I received the first few applications within a day or two, and the last few on the day of the deadline.

comment by agent18 · 2020-03-03T20:07:09.516Z · score: 1 (1 votes) · EA(p) · GW(p)

Thank you very much, Aaron. Are you able to share the distribution of scores for the interview (21 people) and the final work trial (8 people)? I understand they are subjective; nevertheless, they were scores out of 10.

comment by Aaron Gertler (aarongertler) · 2020-03-03T20:10:03.897Z · score: 2 (1 votes) · EA(p) · GW(p)

No, I'm not going to share that information. I don't think there's any value to it given the subjectivity, and I think that anyone trying to analyze it will be wasting their time.

(Also, the final work trials were not scored.)