+1
Though I suspect it will be difficult to get to a sufficient threshold of EAs using LinkedIn as their social network without something similar to a marketing campaign. Any takers?
I agree with Owen's comments and the others. The basic message of my post, however, seems to be something like, "Make sure you compare your plans to reality" while emphasizing the failure mode I see more often in EA (that people overestimate the difficulty of launching their own project).
Would it be correct to say that your comments don't disagree with the underlying message, but rather believe that my framing will have net harmful effects because you predict that many people reading this forum will be incited to take unwise actions?
Agreed. This updates my view.
Fascinating - this ranks as both my most downvoted and most shared post of all time.
Yup, this is an important thing to keep in the background of expert assessment.
I'm glad you think it's nonsense, since - in some strange state of affairs - a certain unnamed person has been crushing it on the communal Pom sheet lately. =P
Well-observed! Here's my guess on where I rank on the various conditions above:
- P - Process: Medium. I think my explicit process is still fairly decent, but my implicit processes still need work. E.g., I might perform well at identifying an expert if you gave me a decent amount of time to check markers with my framework, but I'm not fluent enough in my explicit models to do expertise assessments on the fly very well, Sherlock Holmes-style.
- I - Interaction: Medium. I've spent dozens of hours interacting with expertise assessment tasks, as mentioned in the article. However, for much of this interaction with the data, I did not have strong explicit models (I only developed the expert assessment framework last month.) Since my interaction with the data was not very model-guided for the majority of the time, it's likely that I often didn't pay attention to the right features of the data. So I may have been rather like Bob above:
"Bob, a graphic design novice, pays no attention to the signs and advertisements along the side of the street, even though they are within his field of vision."

It may have been that lots of data relating to expertise was literally and metaphorically in my field of vision, but that I wasn't focusing on it very well, or wasn't focusing on the proper features.
- F - Feedback: Low. Since I've only had well-developed explicit models for about a month, I still have only gotten minor feedback on my predictive power. I have run a few predictive exercises - they went well, but the n is still small. My primary feedback method has been to generate lots of examples of people I am confident have expertise and check whether each marker can be found in all the examples. I also did the opposite: generate lots of examples of people I am confident lack expertise, and check whether each marker is absent from all the examples (a rough sketch of this check follows the list below). I also used normal proxy methods that one can apply to check the robustness of theories without knowing much about them. (E.g., are there logical contradictions?) I used a couple of other methods (e.g., running simulations and checking whether my system 1 yielded error signals), but I'd need to write a full-length article about them for these to make sense. For now, I will just say that they were weak but useful feedback processes. Overall, I looked for correlation between the various feedback methods.
- T - Time: Low-medium. I have probably spent more time than most people in the world training specifically in domain-general expertise assessment. But this is not saying much, since domain-general expertise assessment is not a thriving or even recognized field, as far as I can tell. Also, I have spent only a small amount of time on the skill relative to the amount of training required to become skilled in domains falling into a similar reference class. (E.g., I think expertise assessment could be its own scientific discipline, and people spend years gaining sufficient expertise in scientific disciplines.)
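To make the marker-checking feedback method in the Feedback item above concrete, here is a minimal sketch in Python; the markers and example people are invented purely for illustration, not taken from my actual list:

```python
# Invented markers and examples, purely for illustration.
markers = ["rich implicit models", "history of costly tradeoffs", "verifiable results"]

# People I'm confident have expertise, with the markers I observed in them.
confident_experts = {
    "Expert A": {"rich implicit models", "history of costly tradeoffs", "verifiable results"},
    "Expert B": {"rich implicit models", "history of costly tradeoffs", "verifiable results"},
}

# People I'm confident lack expertise.
confident_non_experts = {
    "Novice C": {"history of costly tradeoffs"},
    "Novice D": set(),
}

# A marker passes the check if it shows up in every expert and in no non-expert.
for marker in markers:
    in_all_experts = all(marker in observed for observed in confident_experts.values())
    in_no_non_experts = all(marker not in observed for observed in confident_non_experts.values())
    verdict = "keep" if (in_all_experts and in_no_non_experts) else "revisit"
    print(f"{marker}: {verdict}")
```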
Potential improvement: Rather than a binary pass/fail for experts, we would like a metric that grades the material they present.
Agreed. I tried to make it binary for the sake of generating good examples, but the world is much more messy. In the spreadsheet version I use, I try to assign each marker a rating from "none" to "high."
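For anyone who prefers code to a spreadsheet, here is a minimal sketch of that kind of graded scoring; the numeric mapping and the equal weighting are arbitrary choices for illustration, not the scheme I actually use:

```python
# Ordinal scale from the spreadsheet, mapped to numbers (the mapping is an arbitrary choice).
SCALE = {"none": 0, "low": 1, "medium": 2, "high": 3}

# Hypothetical ratings for one candidate on the four markers above.
ratings = {
    "process": "medium",
    "interaction": "high",
    "feedback": "low",
    "time": "medium",
}

# Simple unweighted average, expressed as a fraction of the maximum possible score.
score = sum(SCALE[r] for r in ratings.values()) / (len(ratings) * max(SCALE.values()))
print(f"Overall rating: {score:.0%} of maximum")
```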
The Cambridge Handbook of Expertise
How worthwhile do you think it would be for someone to read the handbook?
Issue: It seems like the model might have trouble filtering people who have detailed but wrong models.
100%. The model above is only good for assessing necessary conditions, not sufficient ones. I.e., someone can pass all four conditions above and still not be an expert.
I imagine there is another class of experts who have decades of experience, rich implicit models and impressive achievements, but who would struggle to present concise, detailed answers if you asked them to share their wisdom. I suspect that quiet observation of such a person in their work environment, rather than asking them questions, would yield a better measure of their level of expertise, but this requires considerable skill on the part of the observer.
Indeed: tacit experts. The way I assess this now is basically by looking at indirect signs around the potential tacit expert (e.g., achievements are a good one, as is evidence that they have made costly tradeoffs in the past to develop their expertise, though that is a weaker sign). If anyone develops tools for directly assessing tacit experts, please let me know.
I'd also be very interested if anyone has ideas for how to learn the skills of tacit experts, once you've identified them.
I tested my predictions against the experts by rating applications for the top 5 candidates myself, then getting the domain expert to rank them and compare scores, watching them as they did so.
Ah! This sounds like a great feedback mechanism for one's expert assessment abilities. I'm going to steal this. =)
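If anyone wants to quantify this kind of check, here is a minimal sketch of one way to do it, comparing your own scores against the domain expert's with a rank correlation; the candidate names and scores are made up, and it assumes SciPy is available:

```python
from scipy.stats import spearmanr

# Hypothetical scores for the same five candidates, on any consistent scale.
my_scores     = {"A": 8, "B": 6, "C": 9, "D": 4, "E": 7}
expert_scores = {"A": 9, "B": 5, "C": 8, "D": 3, "E": 6}

candidates = sorted(my_scores)
mine = [my_scores[c] for c in candidates]
theirs = [expert_scores[c] for c in candidates]

# Spearman's rho measures how closely the two rankings agree (1.0 = identical ordering).
rho, p_value = spearmanr(mine, theirs)
print(f"Rank agreement with the domain expert: rho = {rho:.2f}")
```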
Tyler’s model seems somewhat helpful here, and adding the components from John’s model improves it again.
+1 - you definitely want to use more signs than the ones I mentioned above to be confident that you have identified sufficient markers of expertise. The ones listed above are only intended to be necessary markers. A good way of generating markers beyond the necessary ones: think about a few people who you can confidently say are experts. What do they have in common? (Please send me any cool markers you've come up with! My own list has over 30 now, and it doesn't seem like the ceiling has been hit.)
While it seems possible to make some progress on the problem of independently assessing expertise, I want to stress that we should still expect to fail if we proceed to do so entirely independently, without consulting a domain expert.
Right, I should have mentioned this. Your job is much, much easier if you can identify a solid "seed" expert in the domain with a few caveats:
- If the seed expert becomes your primary input to expertise identification, you should be confident that their expertise checks are good. I'm tempted to think that the skill of domain-specific expertise identification correlates strongly with expertise in that domain, but not perfectly. This will be especially true in fields where there are lots of persuaders who have learned how to mimic signs of expertise.
- Keep domain-specific expertise base-rates in mind, as mentioned above. In domains where the expertise base-rate is low (e.g., sociology), you will need to run many more expertise checks on the seed expert than usual, and will have a harder time finding a passable expert in the first place.
- In fields where results are not easily verifiable (e.g., sociology again), it will be more difficult to identify a seed expert. Also, these seed experts will often have a hard time identifying revolutionary forms of expertise, since the people who have them might look like crackpots. (As opposed to, say, math, where there are cases of people who prima facie looked like crackpots nonetheless being hired as professors, since their results are reliably verifiable.)
- In fields with high variance, even a passable seed expert may not be able to consistently identify experts who are much, much better than they are.
- In fields with poorly networked knowledge, seed experts will be much less helpful. I can imagine this being the case for fields like massage therapy, where I expect there to be fewer journals and conferences.
"Check to see whether the field has tangible external accomplishments."
This is a good one. I think you can decently hone your expertise assessment by taking an outside view which incorporates the base rate of strong expertise in the field amongst average practitioners, as well as the variance. (Say that five times fast.) For example (a rough worked sketch follows the list):
- Forecasters: very low base rate, high variance
- Doctors: high base rate, low-medium variance
- Normal car repairpeople: medium base rate, low-medium variance (In this case, there is a more salient and practical ceiling to expertise. While a boxer might continuously improve her ability to box until she wins all possible matches (a really high ceiling), a repairperson can't make a car dramatically "more repaired" than others. Though I suppose she might improve her speed at the process.)
- Users of forks, people who walk, people who can recognize faces: high base rate, low variance
- Mealsquares founders: enormously high base rate, extremely low variance =)
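Here is the rough worked sketch mentioned above of how such base rates could be combined with marker evidence via a simple Bayes update. All the numbers (the base rates, and the true/false positive rates of a "passes the marker checks" test) are invented for illustration only:

```python
def posterior_expert(base_rate, p_pass_given_expert=0.9, p_pass_given_not=0.2):
    """Bayes' rule: P(strong expert | passes the marker checks).

    All three inputs are made-up illustrative numbers, not measured values.
    """
    numerator = p_pass_given_expert * base_rate
    denominator = numerator + p_pass_given_not * (1 - base_rate)
    return numerator / denominator

# Illustrative base rates only, loosely echoing the list above.
for field, base_rate in [("forecasters", 0.02), ("doctors", 0.60), ("car repairpeople", 0.30)]:
    print(f"{field}: passing the checks moves you from {base_rate:.0%} to {posterior_expert(base_rate):.0%}")
```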
We plan to!
It will come from CEA's EA Outreach budget. Winners may choose to re-donate to CEA if they think that we're the best target of funds, or donate somewhere else they think is a better target. That said, we think the main reason someone would be motivated to enter the contest would be to ensure that the thousands of future people introduced to EA are introduced by the best content.
Just changed it to a Creative Commons Attribution 4.0 International License, so posting it elsewhere is fine (or even encouraged).
Very much support the thrust of this post. Oliver Habryka on the EA Outreach team is currently chatting with the Good Judgment Project team about implementing a prediction market in EA.
What about the following simple argument? "If you look at many, many (most?) movements or organizations, you see mission creep or Goodharting."
Do you think there is anything that puts us in a different reference class?
Hi Julia - I wholeheartedly agree with your semantic point: the words "hardcore" and "softcore" seem potentially harmful.
However, I wonder if the stronger thesis is true: "Having strictly defined categories of involvement doesn’t seem likely to help."
It seems plausible, but I can think of worlds in which categories of involvement actually do play an important role. (For instance, there is a reason galas will do things like sort donors into silver, gold, and platinum levels based on their contributions.) Since one could see strong arguments for both sides, it seems like the sort of hypothesis that would benefit from positing a mechanism, as talked about in my last post: http://effective-altruism.com/ea/sn/a_call_for_mechanistic_thinking_in/
My guess is that, for example, the distinction between priests and parishioners does play a socially useful function. Since the labels are non-normative (unlike "hardcore" and "softcore"), they seem to establish healthy attractors at two different levels of dedication. On the macro level, I wouldn't be surprised if this distinction contributed to Christianity being able to maintain relative social equilibrium for many centuries. It seems like EA is going to need a similar degree of social equilibrium to achieve its most ambitious goals - e.g., a stable piece of culture that helps us continue to figure out what to do, and then do it, for many, many years.
What do you think? =)
I was chatting with Julia Wise about this post. It seems plausible that the question of which types of people we prioritize recruiting isn't such a black-and-white issue. For instance, it seems likely that EA can better take advantage of network effects with some mass-movement-style tactics.
That said, it seems likely that there might be a lot of neglected low-hanging fruit in terms of outreach to people with extreme influence, talent or net worth.
+1 this is awesome
EA Ventures would be very interested in hearing ideas for donor coordination. Feel free to email us about it at tyler@centreforeffectivealtruism.org.
It's a pretty tricky problem that probably requires the team solving it to have a good understanding of social dynamics from having solved similar issues in the past, so the ideal solution would factor this in.
+1 I'd avoid over-associating EA with just effective giving. E.g., startup-founding, political advocacy, and scientific research can all be undertaken with EA ideas in mind.
I would place quite a bit of emphasis on epistemic tools, since valuing (and ideally exercising) reason and evidence is the primary thing which differentiates EA and unites people across different causes.
Things to be covered might include:
Prioritization
Building models about relevant parts of the world
Epistemic humility (being open to changing your mind, steelmanning other people's arguments, etc)
People to contact for these things:
Oliver Habryka (panisnecis@gmail.com) - he runs an undergrad course at Berkeley
Cat Lavigne (cat.m.lavigne@gmail.com) - currently developing a model-building workshop called Shift
People at CFAR (obviously). Namely Anna (anna@appliedrationality.org)
Owen Cotton-Barratt (owen.cotton-barratt@maths.ox.ac.uk), Nick Beckstead (nbeckstead@gmail.com), and Geoff Anders (geoffrey.anders@gmail.com) - for material on prioritization.
Thanks for the comments, all! I pretty much agree with the bulk of them so far, and have added an edit to the post above.
Thoughts on how favorably or unfavorably pursuing movement-building compares to other EA career paths?
Yearly salary range (helpful for getting sponsorships for future EA events if the average yearly salary turns out to be high)
The difference between this and vegan flyering is that you're targeting groups that have already self-selected for one aspect of EA. That said, I could definitely see a much lower than .1% rate being the case. Though the cost-effectiveness still seems competitive even at a conversion rate of .01% or even .001%. That's 10 days and 100 days, respectively, of work for a year of earn-to-give (rough arithmetic sketched below).
That said, as Peter alluded to, earn-to-give still seems competitive if, e.g., you're funding that much more of this work. Unless, by doing the work, you're recruiting EtGers who will fund the work. Unless... [mind explodes]
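For what it's worth, here is the back-of-the-envelope arithmetic behind those day counts, assuming roughly 1,000 people reached per day of outreach work; that per-day figure is my own assumption, chosen only so the numbers above come out, not a measured rate:

```python
contacts_per_day = 1_000  # assumed for illustration; not a measured figure

# Conversion rates of 0.1%, 0.01%, and 0.001%.
for conversion_rate in (0.001, 0.0001, 0.00001):
    converts_per_day = contacts_per_day * conversion_rate
    days_per_convert = 1 / converts_per_day
    print(f"{conversion_rate:.3%} conversion -> {days_per_convert:,.0f} day(s) of work per new earn-to-giver")
```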
Peter Buckley attempted to hire some virtual assistants from ODesk. They were way too slow. My guess would be that EAs have a much better sense of what types of groups to look for and where to find them. The task also requires a decent amount of research, which is a comparative advantage of many EAs.
Would love to get tons of VAs on this though if you can think of a better way to use them.
Mass-scraping is great when you've already identified the webpages to scrape from. Identifying these webpages, however, is half the battle. (We've already combined THINK's list with ours, but thanks for the heads up!)
If you know someone at SER, I'd love to chat with them about what their strategy was.
This sounds awesome, and perhaps even the sort of thing we could use to assess the applications we get for EA Ventures (eaventures.org). I imagine the tough part will be acquiring and sustaining a user base of reviewers. Toward this end, you might first recruit an official board of dedicated reviewers while still allowing anyone to leave impact estimates.
The next couple weeks are going to be serious crunch time on EA Global, but feel free to ping me about this in ~2 weeks if you're interested in a potential EAV integration: tyler@centreforeffectivealtruism.org
Just signed up and left a review on Amazon. Awesome idea.
What are GCRI's current plans or thinking around reducing synthetic biology risk? Frighteningly, there seems to be underinvestment in this area.
Also, with regard to the research project on altruism, my shoot-from-the-hip intuition is that you'll find somewhat different paths into effective altruism than into other altruistic activities. Many folks I know now involved in EA were convinced by philosophical arguments from people like Peter Singer. I believe Tom Ash (tog.ash@gmail.com) embedded questions about EA genesis stories in the census he and a few others conducted.
As for more general altruistic involvement, one promising body of work is on the role social groups play. Based on some of the research I did for Reducetarian message-framing, it seems like the best predictor of whether someone becomes a vegetarian is whether their friends also engage in vegetarianism (this accounts for more of the variance than self-reported interest in animal welfare or health benefits). The same was true of the civil rights movement: the best predictor of whether students went down South to sign African Americans up to vote was whether they were part of a group that participated in this very activity.
Buzzwords here to aid in the search: social proof, peer pressure, normative social influence, conformity, social contagion.
Literature to look into:
- Sandy Pentland's "social physics" work: http://socialphysics.media.mit.edu/papers/
- Chapter 4 ("Social proof") of Cialdini's Influence: Science and Practice: http://www.amazon.com/Influence-Science-Practice-5th-Edition/dp/0205609996
- McKenzie-Mohr's book on Community-Based Social Marketing: http://www.cbsm.com/pages/guide/preface/
Cool. Is the site targeted at people new to EA?

Yup!
Maybe you could link to the EA Forum and the EA Job Board? Have a news feed containing original content, news articles, blog posts, or .impact hackpad posts? Have or link to a page of open research questions?

Soon we hope to revise the "Get Involved" section to incorporate much of this.
Hi Daniel, for further reach, the X-Risk comm channels on this spreadsheet might help: https://docs.google.com/spreadsheets/d/1_EH3cpHUJw052iXNI1Q_b-FgHBBNuXe_a4ZjM6uqzpU/edit?usp=sharing
Hey Jonathan, right now I'm chatting with the founder of The Feast (http://feastongood.com) to set up an international network of EA dinners. Personally, I've had a lot of success in using dinners as a mechanism for community building. Between EA Ventures, EA Global, and other EA outreach work, though, I'm a bit too much at capacity to get a lot of momentum going on the partnership myself. Would you be interested in an introduction?
Here's some of the reasoning I sent Ben Todd back in the day on the potential effectiveness of dinners: https://docs.google.com/document/d/1PGfQF9R5nJtygF_O2M6E-2Iu8an9VNhkX37xizOs1uM/edit?usp=sharing
Also, I plan to strategize soon with Google's global corporate social responsibility lead (she self-identifies as an EA) on shifting corporate philanthropy in a top-down way. (Example: sell corporate decision-makers on EA.) So let me know if you dig up anything in this arena.
GOOD/Corps (www.goodcorps.com) is another nice resource. I've been in contact with them in case anyone wants an intro.
A growing body of evidence seems to suggest that aerobic exercise is best for improving cognitive fitness.
See:
http://well.blogs.nytimes.com/2009/09/16/what-sort-of-exercise-can-make-you-smarter/?_r=0
etc.