Posts

COVID-19 response as XRisk intervention 2020-04-10T06:16:33.051Z
The Multiple Stage Fallacy 2016-03-16T23:55:17.748Z
Should you start your own project now rather than later? 2016-02-25T02:22:12.034Z
Why and how to assess expertise 2016-02-14T01:43:49.457Z
"Allkind" 2016-02-14T01:24:15.939Z
$250 donation for best EA intro essay - deadline: March 10 2016-02-11T18:44:35.653Z
A call for mechanistic thinking in movement-building 2016-01-21T00:54:49.499Z
New EA Global: Oxford program now live 2015-07-16T02:54:02.206Z
Should EAs influence corporate giving? 2015-07-13T02:28:04.883Z
[Discussion] What does winning look like? 2015-06-07T19:52:26.201Z
How to save more lives today than in a year of earn-to-give 2015-05-13T05:36:47.859Z
Announcing EffectiveAltruism.org 2015-02-02T04:06:26.617Z

Comments

Comment by tyleralterman on Why more effective altruists should use LinkedIn · 2016-06-03T17:35:40.712Z · EA · GW

+1

Though I suspect it will be difficult to get to a sufficient threshold of EAs using LinkedIn as their social network without something similar to a marketing campaign. Any takers?

Comment by tyleralterman on Should you start your own project now rather than later? · 2016-02-27T23:07:19.159Z · EA · GW

I agree with Owen's comments and the others. The basic message of my post, however, seems to be something like, "Make sure you compare your plans to reality" while emphasizing the failure mode I see more often in EA (that people overestimate the difficulty of launching their own project).

Would it be correct to say that your comments don't disagree with the underlying message, but rather believe that my framing will have net harmful effects because you predict that many people reading this forum will be incited to take unwise actions?

Comment by tyleralterman on Should you start your own project now rather than later? · 2016-02-26T03:36:20.365Z · EA · GW

Agreed. This updates my view.

Comment by tyleralterman on Should you start your own project now rather than later? · 2016-02-26T03:35:28.146Z · EA · GW

Fascinating - this ranks as both my most downvoted and most shared post of all time.

Comment by tyleralterman on Why and how to assess expertise · 2016-02-14T18:58:41.871Z · EA · GW

Yup, this is an important thing to keep in the background of expert assessment.

Comment by tyleralterman on Why and how to assess expertise · 2016-02-14T18:57:51.214Z · EA · GW

I'm glad you think it's nonsense, since - in some strange state of affairs - a certain unnamed person has been crushing on the communal Pom sheet lately. =P

Comment by tyleralterman on Why and how to assess expertise · 2016-02-14T18:54:47.549Z · EA · GW

Well-observed! Here's my guess on where I rank on the various conditions above:

  • P - Process: Medium. I think my explicit process is still fairly decent, but my implicit processes still need work. E.g., I might perform well at identifying an expert if you gave me a decent amount of time to check markers with my framework, but I'm not fluent enough in my explicit models to do expertise assessments on the fly very well, Sherlock Holmes-style.
  • I - Interaction: Medium. I've spent dozens of hours interacting with expertise assessment tasks, as mentioned in the article. However, for much of this interaction with the data, I did not have strong explicit models (I only developed the expert assessment framework last month). Since my interaction with the data was not very model-guided for the majority of the time, it's likely that I often didn't pay attention to the right features of the data. So I may have been rather like Bob above:

    "Bob, a graphic design novice, pays no attention to the signs and advertisements along the side of the street, even though they are within his field of vision."

    It may have been that lots of data relating to expertise was literally and metaphorically in my field of vision, but that I wasn't focusing on it very well, or wasn't focusing on the proper features.

  • F - Feedback: Low. Since I've only had well-developed explicit models for about a month, I have still gotten only limited feedback on my predictive power. I have run a few predictive exercises - they went well, but the n is still small. My primary feedback method has been to generate lots of examples of people I am confident have expertise and check whether each marker can be found in all the examples. I also did the opposite: generate lots of examples of people I am confident lack expertise, and check whether each marker is absent from all the examples. (A minimal sketch of this check appears after this list.) I also used normal proxy methods that one can apply to check the robustness of theories without knowing much about them. (E.g., are there logical contradictions?) I used a couple other methods (e.g., running simulations and checking whether my system 1 yielded error signals), but I'd need to write a full-length article about them for these to make sense. For now, I will just say that they were weak feedback processes, but useful ones. Overall, I looked for correlation between the various feedback methods.
  • T - Time: Low-medium. I have probably spent more time than most people in the world training specifically in domain-general expertise assessment. But this is not saying much, since domain-general expertise assessment is not a thriving or even recognized field, as far as I can tell. Also, I have spent only a small amount of time on the skill relative to the amount of training required to become skilled in domains falling into a similar reference class. (E.g., I think expertise assessment could be its own scientific discipline, and people spend years to gain sufficient expertise in scientific disciplines.)
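
A minimal sketch of the example-checking feedback method above, assuming a hypothetical table of binary marker ratings (the people and markers here are invented for illustration, not taken from my actual spreadsheet). The idea is just: a marker should be present for everyone I'm confident is an expert and absent for everyone I'm confident isn't.

```python
# Hypothetical marker table: ratings for people I'm confident are experts
# and people I'm confident are not. Names and markers are made up.
EXPERTS = {
    "expert_a": {"detailed_models": True, "fast_feedback_loops": True, "track_record": True},
    "expert_b": {"detailed_models": True, "fast_feedback_loops": True, "track_record": True},
}
NON_EXPERTS = {
    "novice_a": {"detailed_models": False, "fast_feedback_loops": False, "track_record": False},
    "novice_b": {"detailed_models": True, "fast_feedback_loops": False, "track_record": False},
}

def marker_consistency(marker: str) -> float:
    """Fraction of cases the marker classifies correctly:
    present for the confident experts, absent for the confident non-experts."""
    hits = sum(person[marker] for person in EXPERTS.values())
    hits += sum(not person[marker] for person in NON_EXPERTS.values())
    return hits / (len(EXPERTS) + len(NON_EXPERTS))

for marker in ["detailed_models", "fast_feedback_loops", "track_record"]:
    print(f"{marker}: {marker_consistency(marker):.0%} consistent")
```

Even a marker that scores well on a check like this is still only evidence for a necessary condition, not a sufficient one (see the point about detailed-but-wrong models in the comments below).
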
Comment by tyleralterman on Why and how to assess expertise · 2016-02-14T18:30:07.282Z · EA · GW

Potential improvement: Rather than a binary pass/fail for experts, we should like a metric that grades the material they present.

Agreed. I tried to make it binary for the sake of generating good examples, but the world is much more messy. In the spreadsheet version I use, I try to assign each marker a rating from "none" to "high."

Comment by tyleralterman on Why and how to assess expertise · 2016-02-14T18:27:54.725Z · EA · GW

The Cambridge Handbook of Expertise

How worthwhile do you think it would be for someone to read the handbook?

Comment by tyleralterman on Why and how to assess expertise · 2016-02-14T18:27:42.915Z · EA · GW

Issue: It seems like the model might have trouble filtering people who have detailed but wrong models.

100%. The model above is only good for assessing necessary conditions, not sufficient ones. I.e., someone can pass all four conditions above and still not be an expert.

Comment by tyleralterman on Why and how to assess expertise · 2016-02-14T18:23:14.747Z · EA · GW

I imagine there is another class of experts who have decades of experience, rich implicit models and impressive achievements, but who would struggle to present concise, detailed answers if you asked them to share their wisdom. I suspect that quiet observation of such a person in their work environment, rather than asking them questions, would yield a better measure of their level of expertise, but this requires considerable skill on the part of the observer.

Indeed: tacit experts. The way I assess this now is basically by looking at indirect signs around the potential tacit expert (e.g., achievements are a good one, as is evidence that they have made costly tradeoffs in the past to develop their expertise, though that is a weaker sign). If anyone develops tools for directly assessing tacit experts, please let me know.

I'd also be very interested if anyone has ideas for how to learn the skills of tacit experts, once you've identified them.

Comment by tyleralterman on Why and how to assess expertise · 2016-02-14T18:19:31.710Z · EA · GW

I tested my predictions against the experts by rating applications for the top 5 candidates myself, then getting the domain expert to rank them and compare scores, watching them doing so.

Ah! This sounds like a great feedback mechanism for one's expert assessment abilities. I'm going to steal this. =)

Comment by tyleralterman on Why and how to assess expertise · 2016-02-14T18:16:23.798Z · EA · GW

Tyler’s model seems somewhat helpful here, and adding the components from John’s model improves it again.

+1 - you definitely want to use more signs than the ones I mentioned above to be confident that you have identified sufficient markers of expertise. The ones listed above are only intended to be necessary markers. A good way of generating markers beyond the necessary ones: think about a few people who you can confidently say are experts. What do they have in common? (Please send me any cool markers you've come up with! My own list has over 30 now, and it doesn't seem like the ceiling has been hit.)

Comment by tyleralterman on Why and how to assess expertise · 2016-02-14T18:11:41.351Z · EA · GW

While it seems possible to make some progress on the problem of independently assessing expertise, I want to stress that we should still expect to fail if we proceed to do so entirely independently, without consulting a domain expert

Right, I should have mentioned this. Your job is much, much easier if you can identify a solid "seed" expert in the domain with a few caveats:

  • If the seed expert becomes your primary input to expertise identification, you should be confident that their expertise checks are good. I'm tempted to think that the skill of domain-specific expertise identification correlates strongly with expertise in that domain, but not perfectly. This will be especially true in fields where there are lots of persuaders who have learned how to mimic signs of expertise.
  • Keep domain-specific expertise base-rates in mind, as mentioned above. In domains where the expertise base-rate is low (e.g., sociology), you will need to run many more expertise checks on the seed expert than usual, and will have a harder time finding a passable expert in the first place.
  • In fields where results are not easily verifiable (e.g., sociology again), it will be more difficult to identify a seed expert. Also, these seed experts will often have a hard time identifying revolutionary forms of expertise, since they might look like crackpots. (As opposed to, say, math, where there are cases of people who prima facie look like crackpots being nonetheless hired as professors, since their results are reliably verifiable.)
  • In fields with high variance, you may only be able to find a passable seed expert who cannot consistently identify experts who are much, much better than they are.
  • In fields with poorly networked knowledge, seed experts will be much less helpful. I can imagine this being the case for fields like massage therapy, where I expect there to be fewer journals and conferences.

Comment by tyleralterman on Why and how to assess expertise · 2016-02-14T17:50:58.957Z · EA · GW

"Check to see whether the field has tangible external accomplishments."

This is a good one. I think you can decently hone your expertise assessment by taking an outside view which incorporates base rates of strong expertise in the field amongst average practitioners, as well as the variance. (Say that five times fast.) For example:

  • Forecasters: very low baserate, high variance
  • Doctors: high baserate, low-medium variance
  • Normal car repairpeople: medium baserate, low-medium variance (In this case, there is a more salient and practical ceiling to expertise. While a boxer might continuously improve her ability to box until she wins all possible matches (a really high ceiling), a repairperson can't make a car dramatically "more repaired" than others. Though I suppose she might improve her speed at the process.)
  • Users of forks, people who walk, people who can recognize faces: high baserate, low variance
  • Mealsquares founders: enormously high baserate, extremely low variance =)
Comment by tyleralterman on $250 donation for best EA intro essay - deadline: March 10 · 2016-02-12T03:42:46.009Z · EA · GW

We plan to!

Comment by tyleralterman on $250 donation for best EA intro essay - deadline: March 10 · 2016-02-12T00:36:18.533Z · EA · GW

It will come from CEA's EA Outreach budget. Winners may choose to re-donate to CEA if they think that we're the best target of funds, or donate somewhere else they think is a better target. That said, we think the main reason someone would be motivated to enter the contest is to ensure that the thousands of future people introduced to EA are introduced through the best content.

Comment by tyleralterman on $250 donation for best EA intro essay - deadline: March 10 · 2016-02-11T22:37:22.069Z · EA · GW

Just changed it to a Creative Commons Attribution 4.0 International License, so posting it elsewhere is fine (or even encouraged).

Comment by tyleralterman on Effective Altruism Prediction Registry · 2016-01-30T05:23:20.060Z · EA · GW

Very much support the thrust of this post. Oliver Habryka on the EA Outreach team is currently chatting with the Good Judgment Project team about implementing a prediction market in EA.

Comment by tyleralterman on Notice what arguments aren't made (but don't necessarily go and make them) · 2016-01-25T21:19:31.159Z · EA · GW

What about the following simple argument? "If you look at many many (most?) movements or organizations, you see mission creep or Goodharting."

Do you think there is anything that puts us in a different reference class?

Comment by tyleralterman on Against segregating EAs · 2016-01-21T21:21:40.942Z · EA · GW

Hi Julia - I wholeheartedly agree with your semantic point: the words "hardcore" and "softcore" seem potentially harmful.

However, I wonder if the stronger thesis is true: "Having strictly defined categories of involvement doesn’t seem likely to help."

It seems plausible, but I can think of worlds in which categories of involvement actually do play an important role. (For instance, there is a reason galas will do things like sort donors into silver, gold, and platinum levels based on their level of contribution.) Since one could see strong arguments for both sides, it seems like the sort of hypothesis that would benefit from a mechanism posit, as talked about in my last post: http://effective-altruism.com/ea/sn/a_call_for_mechanistic_thinking_in/

My guess is that, for example, the distinction between priests and parishioners does play a socially useful function. Since the labels are non-normative (unlike "hardcore" and "softcore"), they seem to establish healthy attractors at two different levels of dedication. On the macro level, I wouldn't be surprised if this distinction contributed to Christianity being able to maintain relative social equilibrium for many centuries. It seems like EA is going to need a similar degree of social equilibrium to achieve its most ambitious goals - e.g., a stable piece of culture that helps us continue to figure out what to do and then do it for many, many years.

What do you think? =)

Comment by tyleralterman on EA is elitist. Should it stay that way? · 2016-01-20T23:32:48.687Z · EA · GW

I was chatting with Julia Wise about this post. It seems plausible that which types of people we prioritize recruiting isn't such a black-and-white issue. For instance, it seems likely that EA can better take advantage of network effects with some mass-movement-style tactics.

That said, it seems likely that there might be a lot of neglected low-hanging fruit in terms of outreach to people with extreme influence, talent or net worth.

Comment by tyleralterman on Guesstimate: An app for making decisions with confidence (intervals) · 2015-12-30T21:40:32.417Z · EA · GW

+1 this is awesome

Comment by tyleralterman on Even More Reasons for Donor Coordination · 2015-10-30T23:08:19.182Z · EA · GW

EA Ventures would be very interested in hearing ideas for donor coordination. Feel free to email us about it at tyler@centreforeffectivealtruism.org.

It's a pretty tricky problem, one that probably requires the team solving it to have a good understanding of social dynamics from having solved similar issues in the past, so the ideal solution would factor this in.

Comment by tyleralterman on A minimal definition of Effective Altruism: · 2015-09-20T23:19:07.980Z · EA · GW

+1 I'd avoid over-associating EA with just effective giving. E.g., startup-founding, political advocacy, and scientific research can all be undertaken with EA ideas in mind.

Comment by tyleralterman on EA introduction course and YouTube playlists · 2015-08-14T16:08:28.484Z · EA · GW

I would place quite a bit of emphasis on epistemic tools, since valuing (and ideally exercising) reason and evidence is the primary thing which differentiates EA and unites people across different causes.

Things to be covered might include:

  • Prioritization

  • Building models about relevant parts of the world

  • Epistemic humility (being open to changing your mind, steelmanning other people's arguments, etc)

People to contact for these things:

Comment by tyleralterman on Should EAs influence corporate giving? · 2015-07-14T19:02:38.517Z · EA · GW

Thanks for the comments, all! I pretty much agree with the bulk of them so far, and have added an edit to the post above.

Comment by tyleralterman on How valuable is movement growth? · 2015-05-17T03:38:02.421Z · EA · GW

Thoughts on how favorably or unfavorably pursuing movement-building compares to other EA career paths?

Comment by tyleralterman on Suggestions thread for questions for the 2015 EA Survey · 2015-05-15T22:04:39.092Z · EA · GW

Yearly salary range (helpful for getting sponsorships for EA events in the future if the average yearly salary turns out to be high)

Comment by tyleralterman on How to save more lives today than in a year of earn-to-give · 2015-05-13T17:50:12.747Z · EA · GW

The difference between this and vegan flyering is that you're targeting groups that have already self-selected for one aspect of EA. That said, I could definitely see a much lower than .1% rate being the case. Though the cost-effectiveness still seems competitive even at a conversion rate of .01% or even .001%. That's 10 days and 100 days, respectively, of work for a year of earn-to-give.

That said, as Peter alluded to, earn-to-give still seems competitive if, e.g., your funding makes that much more of this work happen. Unless, by doing the work, you're recruiting EtGers who will fund the work. Unless... [mind explodes]

Comment by tyleralterman on How to save more lives today than in a year of earn-to-give · 2015-05-13T17:40:55.450Z · EA · GW

Peter Buckley attempted to hire some virtual assistants from ODesk. They were way too slow. My guess would be that EAs have a much better sense of what types of groups to look for and where to find them. The task also requires a decent amount of research, which is a comparative advantage of many EAs.

Would love to get tons of VAs on this though if you can think of a better way to use them.

Comment by tyleralterman on How to save more lives today than in a year of earn-to-give · 2015-05-13T09:05:14.019Z · EA · GW

Mass-scraping is great when you've already identified the webpages to scrape from. Identifying these webpages, however, is half the battle. (We've already combined THINK's list with ours, but thanks for the heads up!)

If you know someone at SER, I'd love to chat with them about what their strategy was.

Comment by tyleralterman on Expected Utility Auctions · 2015-05-09T02:13:25.787Z · EA · GW

This sounds awesome, and perhaps even the sort of thing we could use to assess the applications we get for EA Ventures (eaventures.org). I imagine the tough part will be acquiring and sustaining a user base of reviewers. Toward this end, you might first recruit an official board of dedicated reviewers while still allowing anyone to leave impact estimates.

The next couple weeks are going to be serious crunch time on EA Global, but feel free to ping me about this in ~2 weeks if you're interested in a potential EAV integration: tyler@centreforeffectivealtruism.org

Comment by tyleralterman on EA Advocates announcement · 2015-03-26T00:17:16.530Z · EA · GW

Just signed up and left a review on Amazon. Awesome idea.

Comment by tyleralterman on I am Seth Baum, AMA! · 2015-03-04T00:12:01.613Z · EA · GW

What are GCRI's current plans or thinking around reducing synthetic biology risk? Frighteningly, there seems to be underinvestment in this area.

Also, with regard to the research project on altruism, my shoot-from-the-hip intuition is that you'll find somewhat different paths into effective altruism than into other altruistic activities. Many folks I know now involved in EA were convinced by philosophical arguments from people like Peter Singer. I believe Tom Ash (tog.ash@gmail.com) embedded Qs about EA genesis stories in the census he and a few others conducted.

As for more general altruistic involvement, one promising body of work is on the role social groups play. Based on some of the research I did for Reducetarian message-framing, it seems like the best predictor of whether someone becomes a vegetarian is whether their friends also engage in vegetarianism (this accounts for more of the variance than self-reported interest in animal welfare or health benefits). The same was true of the civil rights movement: the best predictor of whether students went down South to sign African Americans up to vote was whether they were part of a group that participated in this very activity.

Buzz words here to aid in the search: social proof, peer pressure, normative social influence, conformity, social contagion.

Literature to look into:

Comment by tyleralterman on Announcing EffectiveAltruism.org · 2015-02-07T04:19:25.039Z · EA · GW

Cool. Is the site targeted at people new to EA?

Yup!

Maybe you could link to the EA Forum and the EA Job Board? Have a news feed containing original content, news articles, blog posts, or .impact hackpad posts? Have or link to a page of open research questions?

Soon we hope to revise the "Get Involved" section to incorporate much of this.

Comment by tyleralterman on Request for proposals for Musk/FLI AI research grants · 2015-02-06T22:17:30.166Z · EA · GW

Hi Daniel, for further reach, the X-Risk comm channels on this spreadsheet might help: https://docs.google.com/spreadsheets/d/1_EH3cpHUJw052iXNI1Q_b-FgHBBNuXe_a4ZjM6uqzpU/edit?usp=sharing

Comment by tyleralterman on What do I do as the Director of Community for Giving What We Can? · 2015-01-28T23:53:45.273Z · EA · GW

Hey Jonathan, right now I'm chatting with the founder of The Feast (http://feastongood.com) to set up an international network of EA dinners. Personally, I've had a lot of success in using dinners as a mechanism for community building. I'm a bit at capacity between EA Ventures, EA Global, and other EA outreach work, so I can't get a lot of momentum going on the partnership myself. However, would you be interested in an introduction?

Here's some of the reasoning I sent Ben Todd back in the day on the potential effectiveness of dinners: https://docs.google.com/document/d/1PGfQF9R5nJtygF_O2M6E-2Iu8an9VNhkX37xizOs1uM/edit?usp=sharing

Comment by tyleralterman on Initial research into corporate fundraising · 2015-01-28T23:42:05.054Z · EA · GW

Also, I plan to strategize soon with Google's global corporate social responsibility lead (she self-identifies as EA) on shifting corporate philanthropy in a top-down way. (Example: sell corporate decision-makers on EA.) So let me know if you dig up anything in this arena.

Comment by tyleralterman on Initial research into corporate fundraising · 2015-01-28T23:38:14.813Z · EA · GW

GOOD/Corps (www.goodcorps.com) is another nice resource. I've been in contact with them in case anyone wants an intro.

Comment by tyleralterman on Generic good advice: do intense exercise often · 2014-12-15T04:49:26.277Z · EA · GW

A growing body of evidence seems to suggest that aerobic exercise is best for improving cognitive fitness.

See:

http://well.blogs.nytimes.com/2009/09/16/what-sort-of-exercise-can-make-you-smarter/?_r=0

http://scholar.google.com/scholar?q=aerobic+exercise+cognition&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ei=o2eOVOS6CtLyoASd3YFo&ved=0CBsQgQMwAA

etc