comment by Gregory_Lewis ·
2014-11-14T00:31:13.658Z
Per Bernadette, getting good data from these sorts of projects requires significant expertise (if your university is as bad as mine, you can get student media attention for attention-grabbing but methodologically suspect survey data, but I doubt you would get much more). I'm reluctant to offer advice beyond 'find an expert'. But I will add a collection of problems that surveys run by amateurs fall into, both as pitfalls to avoid and as further evidence that expertise is imperative.
1: Plan more, trial less
A lot of emphasis in EA is on trialling things instead of spending a lot of time planning them: lean startups, no plan survives first contact, VoI, etc. But lean trial design hasn't taken off in the way lean start-ups have. Your data can be poisoned to the point of uselessness in innumerable ways, and (usually) little can be done about this post hoc: many problems revealed in analysis could only have been fixed in the original design.
1a: Especially plan analysis
Gathering data and then deciding how to analyse it is always suspect: one can wonder whether the investigators have massaged the analysis to fit their own preconceptions or prejudices. The usual means of avoiding this is specifying in advance the analysis you will perform: the analysis might be ill-conceived, but at least it won't be data-dredging. It is hard to plan in advance what sort of hypotheses the data would inspire you to inspect, so seek expert help.
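To make the data-dredging worry concrete, here is a minimal simulation of my own (not from the comment): the outcome is pure noise, yet if an analyst slices respondents into many post-hoc subgroups and keeps whichever split looks most impressive, the "best" comparison will look much stronger than any single pre-specified one. All names and numbers are illustrative assumptions.

```python
import random
import statistics

random.seed(0)

def effect_size(a, b):
    """Crude standardised mean difference (illustrative, not a real t-test)."""
    pooled_sd = statistics.pstdev(a + b) or 1.0
    return abs(statistics.mean(a) - statistics.mean(b)) / pooled_sd

# 200 respondents; the outcome is pure noise, plus 8 arbitrary binary traits
# (hypothetical subgroup variables the analyst could split on after the fact).
n = 200
outcome = [random.gauss(0, 1) for _ in range(n)]
traits = [[random.random() < 0.5 for _ in range(n)] for _ in range(8)]

# Post-hoc dredging: compare the outcome across every trait split, keep the best.
stats = []
for trait in traits:
    yes = [o for o, t in zip(outcome, trait) if t]
    no = [o for o, t in zip(outcome, trait) if not t]
    stats.append(effect_size(yes, no))

print(f"largest of {len(stats)} null comparisons: {max(stats):.3f}")
print(f"a single pre-specified comparison:       {stats[0]:.3f}")
```

The point is structural: the maximum over many null comparisons is systematically larger than any one pre-specified comparison, which is why an analysis plan written before seeing the data is protective.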
2: Care about sampling
With 'true' random sampling, the error in your estimates falls as your sample size increases. The problem with bias (directional error) is that its magnitude doesn't shrink as your sample grows.
Perfect probabilistic sampling is probably a platonic ideal - especially with voluntary surveys, the factors that make someone take the survey will probably shift the sample away from the population of interest along axes that aren't perfectly orthogonal to your responses. It remains an ideal worth striving for: significant sampling bias makes your results all-but-uninterpretable (modulo very advanced ML techniques, and not always even then). It is worth thinking long and hard about the population you are actually interested in, the sampling frame you will use to try and capture them, etc.
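A small simulation (mine, under assumed numbers) of the error-versus-bias point above: with unbiased random sampling, the error of the sample mean shrinks as n grows, but a biased sampling frame leaves a fixed offset no matter how many responses you collect.

```python
import random

random.seed(1)
population = [random.gauss(50, 10) for _ in range(100_000)]
true_mean = sum(population) / len(population)

# Hypothetical selection effect: people with higher values are more likely
# to respond (e.g. enthusiasts opting in to a voluntary survey).
def biased_sample(pop, n):
    out = []
    while len(out) < n:
        x = random.choice(pop)
        if random.random() < (x / 100):  # response propensity rises with x
            out.append(x)
    return out

for n in (100, 1_000, 10_000):
    unbiased = random.sample(population, n)
    biased = biased_sample(population, n)
    print(f"n={n:>6}: unbiased error={abs(sum(unbiased)/n - true_mean):6.3f}, "
          f"biased error={abs(sum(biased)/n - true_mean):6.3f}")
```

As n rises, the unbiased column drifts towards zero while the biased column settles on a stubborn non-zero offset - more data does not fix a bad sampling frame.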
3: Questions can be surprisingly hard to ask right
Even with a perfect sample, it still might not provide good data, depending on the questions you use. There are a few subtle pitfalls besides the more obvious ones of forgetting to include the questions you wanted to ask or lapses of wording: allowing people to select multiple options for an item and then wondering how to aggregate the results, having a 'choose one' item with too many options for the average person to read, or subdividing it inappropriately ("Is your favourite food Spaghetti, Tortellini, Tagliatelle, Fusilli, or Pizza?").
Again, people who make a living designing surveys do things to limit these problems - item pools, pilots where they try different questions and see which yield the best data, etc.
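On the multiple-selection pitfall specifically, one standard fix is to decide the encoding at design time rather than after collection: turn each "select all that apply" item into indicator (one-hot) columns. The responses and option list below are hypothetical, reusing the pasta example.

```python
# Hypothetical raw multi-select answers, as a survey tool might export them.
raw_responses = [
    "Spaghetti; Pizza",
    "Pizza",
    "Tagliatelle; Spaghetti; Pizza",
    "",  # respondent skipped the item
]

options = ["Spaghetti", "Tortellini", "Tagliatelle", "Fusilli", "Pizza"]

# One row of 0/1 indicators per respondent: easy to count and cross-tabulate.
encoded = [
    {opt: int(opt in resp.split("; ")) for opt in options}
    for resp in raw_responses
]

counts = {opt: sum(row[opt] for row in encoded) for opt in options}
print(counts)
# → {'Spaghetti': 2, 'Tortellini': 0, 'Tagliatelle': 1, 'Fusilli': 0, 'Pizza': 3}
```

With indicator columns the aggregation question ("how do I count a respondent who ticked three boxes?") is answered before any data arrives, which is exactly the kind of decision that is painful to retrofit afterwards.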
3a. Too many columns in the database
There's a tendency towards a 'kitchen sink' approach of asking questions - if in doubt, add it in, as more data can only help, right? The problem is that false positives become increasingly likely if you just fish for interesting correlations, as the number of possible comparisons grows quadratically with the number of questions. There are ways of overcoming this (dimension reduction, family-wise or false-discovery error control), but they aren't straightforward.
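A sketch of my own (assuming numeric survey columns) of the kitchen-sink problem and one remedy the comment names, family-wise error control. With k pure-noise columns there are k*(k-1)/2 pairwise correlations to fish through, so some will clear a naive significance threshold by chance; a Bonferroni correction tightens the threshold to compensate. The critical |r| values are approximations for n = 100.

```python
import itertools
import random
import statistics

random.seed(2)
k, n = 20, 100  # 20 noise "survey questions", 100 respondents
columns = [[random.gauss(0, 1) for _ in range(n)] for _ in range(k)]

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

pairs = list(itertools.combinations(range(k), 2))  # 190 comparisons
rs = [abs(corr(columns[i], columns[j])) for i, j in pairs]

# For n=100, |r| > ~0.197 corresponds roughly to p < 0.05 (two-sided).
naive_hits = sum(r > 0.197 for r in rs)
# Bonferroni: alpha/190 ~ 0.00026, i.e. roughly |r| > ~0.36 for n=100.
corrected_hits = sum(r > 0.36 for r in rs)
print(f"{len(pairs)} comparisons, naive 'hits': {naive_hits}, "
      f"after Bonferroni: {corrected_hits}")
```

Bonferroni is the bluntest such correction (false-discovery-rate methods are less conservative), but even this sketch shows why "just add another question" quietly multiplies the comparisons you must account for.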
There are probably many more I've forgotten. But tl;dr: it is tricky to do this right!