Comment by pablo_stafforini on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2019-02-17T11:09:42.362Z · score: 15 (9 votes) · EA · GW

In case it helps others decide whether or not to take the Superforecasting Fundamentals course, I'm reposting a brief message I sent to the CEA Slack workspace back in August 2017:

I took it a year or so ago. The course is very good, but also very basic: I clearly wasn’t the target audience, since I was already quite familiar with most of the content. I wouldn't recommend it unless you don’t know anything about forecasting.
Comment by pablo_stafforini on Near-term focus, robustness, and flow-through effects · 2019-02-05T22:12:35.762Z · score: 1 (1 votes) · EA · GW

I see. Thanks.

Comment by pablo_stafforini on Near-term focus, robustness, and flow-through effects · 2019-02-05T19:20:24.681Z · score: 2 (2 votes) · EA · GW
Another object-level point, due to AGB

Would you mind linking to the comment left by that user, rather than to the user who left the comment? Thanks.

Comment by pablo_stafforini on What are some lists of open questions in effective altruism? · 2019-02-05T11:27:04.765Z · score: 13 (6 votes) · EA · GW

This post compiles lists of important questions and problems.

Comment by pablo_stafforini on Cost-Effectiveness of Aging Research · 2019-01-31T11:46:05.924Z · score: 2 (5 votes) · EA · GW

Owen's last name is 'Cotton-Barratt'.

Comment by pablo_stafforini on High-priority policy: towards a co-ordinated platform? · 2019-01-15T13:29:42.731Z · score: 2 (2 votes) · EA · GW
What would an EA policy platform look like?

You may want to expand your list to include some of the proposals here:

Comment by pablo_stafforini on Climate Change Is, In General, Not An Existential Risk · 2019-01-12T15:18:07.770Z · score: 3 (4 votes) · EA · GW

Beware brittle arguments.

Comment by pablo_stafforini on Rationality as an EA Cause Area · 2018-11-14T16:55:58.718Z · score: 8 (7 votes) · EA · GW

Then I would suggest changing the title of the post. 'Rationality as a cause area' can mean many things besides 'growing the rationality community'.

Furthermore, some of the considerations you list in support of the claim that rationality is a promising cause area do not clearly support, and may even undermine, the claim that one should grow the rationality community. Your remarks about epistemic standards, in particular, suggest that one should approach growth very carefully, and that one may want to deprioritize growth in favour of other forms of community building.

Comment by pablo_stafforini on Against prediction markets · 2018-05-15T17:10:08.920Z · score: 2 (2 votes) · EA · GW

Feel free to ignore if you don't think this is sufficiently important, but I don't understand the contrast you draw between accuracy and outside world manipulation. I thought manipulation of prediction markets was concerning precisely because it reduces their accuracy. Assuming you accept Robin's point that manipulation increases accuracy on balance, what's your residual concern?

Comment by pablo_stafforini on Against prediction markets · 2018-05-13T21:02:20.440Z · score: 2 (2 votes) · EA · GW

I find it questionable whether blatant attempts at voter manipulation through prediction markets are worth the cost. This is a big price to pay even if prediction markets did a bit better than polls or pundits.

Robin's position is that manipulators can actually improve the accuracy of prediction markets, by increasing the rewards to informed trading. On this view, the possibility of market manipulation is not in itself a consideration that favors non-market alternatives, such as polls or pundits.

Comment by pablo_stafforini on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-22T12:31:49.974Z · score: 4 (10 votes) · EA · GW

The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society.

Wild animal advocacy is far more neglected than farmed animal advocacy, and it involves even larger numbers of sentient beings ignored by most of society. If the superiority of farmed animal advocacy over global poverty along these two dimensions is a sufficient reason for not working on global poverty, why isn't the superiority of wild animal advocacy over farmed animal advocacy along those same dimensions also a sufficient reason for not working on farmed animal advocacy?

Comment by pablo_stafforini on New Effective Altruism course syllabus · 2018-01-29T16:49:39.491Z · score: 3 (3 votes) · EA · GW

Thanks for creating this. I've added your course to this list.

Comment by pablo_stafforini on Finding and managing literature on EA topics · 2017-11-13T19:33:48.346Z · score: 4 (4 votes) · EA · GW

Thank you for writing this! The images under 'What are you going to search for?' are not loading.

Comment by pablo_stafforini on In defence of epistemic modesty · 2017-11-01T21:27:56.091Z · score: 6 (6 votes) · EA · GW

Thanks for drawing our attention to that early Overcoming Bias post. But please note that it was written by Hal Finney, not Robin Hanson. It took me a few minutes to realize this, so it seemed worth highlighting lest others fail to appreciate it.

Incidentally, I've been re-reading Finney's posts over the past couple of days and have been very impressed. What a shame that such a fine thinker is no longer with us.

ETA: Though one hopes this is temporary.

Comment by pablo_stafforini on In defence of epistemic modesty · 2017-10-31T21:28:24.957Z · score: 2 (4 votes) · EA · GW

Okay, thank you for the clarification.

[In the original version, your comment said that the quote was pulled out of context, hence my interpretation.]

Comment by pablo_stafforini on In defence of epistemic modesty · 2017-10-31T20:59:50.130Z · score: 2 (2 votes) · EA · GW

In that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn't mean one can't look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero.

I offered that quote to cast doubt on Rob's assertion that Eliezer has "a really strong epistemic track record, and that this is good evidence that modesty is a bad idea." I didn't mean to deny that Eliezer had some successes, or that one shouldn't "look closely and form much better estimates of the likelihood of good invisible reasons" or that "the base rate of dysfunction is anywhere near zero", and I didn't offer the quote to dispute those claims.

Readers can read the original comment and judge for themselves whether the quote was in fact pulled out of context.

Comment by pablo_stafforini on Inadequacy and Modesty · 2017-10-31T20:39:17.060Z · score: 0 (0 votes) · EA · GW

why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth?

Maybe because these people have been surprisingly accurate? In addition, it's not that Eliezer disputes that general presumption: he routinely relies on results in the natural and social sciences without feeling the need to justify in each case why we should trust e.g. computer scientists, economists, neuroscientists, game theorists, and so on.

Comment by pablo_stafforini on In defence of epistemic modesty · 2017-10-31T18:41:17.893Z · score: 3 (3 votes) · EA · GW

I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea.

Really? My sense is that the opposite is the case. Eliezer himself acknowledges that he has an "amazing bet-losing capability" and my sense is that he tends to bet against scientific consensus (while Caplan, who almost always takes the consensus view, has won virtually all his bets). Carl Shulman notes that Eliezer's approach "has lead [him] astray repeatedly, but I haven't seen as many successes."

Comment by pablo_stafforini on Inadequacy and Modesty · 2017-10-31T16:09:29.520Z · score: 1 (1 votes) · EA · GW

I also feel comfortable having lower probability in the existence of God than the average physicist does; and "physicists are the wrong kind of authority to defer to about God" isn't the reasoning I go through to reach that conclusion.

Out of curiosity, what is the reasoning you would go through to reach that conclusion?

Comment by pablo_stafforini on Inadequacy and Modesty · 2017-10-31T14:46:42.374Z · score: 2 (2 votes) · EA · GW

A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I'm here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn't itself take a position on the issue by relying primarily on the inside view.

Let’s consider a concrete example. When challenged to justify his extremely high confidence in MWI, despite the absence of a strong consensus among physicists, Eliezer tells people to “read the QM sequence”. But suppose I read the sequence and become persuaded. So what? Physicists are just as divided now as they were before I raised the challenge. By hypothesis, Eliezer was unjustified in being so confident in MWI despite the fact that it seemed to him that this interpretation was correct, because the relevant experts did not share that subjective impression. If upon reading the sequence I come to agree with Eliezer, that just puts me in the same epistemic predicament as Eliezer was originally: just like him, I too need to justify the decision to rely on my own impressions instead of deferring to expert opinion.

To persuade me, Greg, and other skeptics, what Eliezer needs to do is to persuade the physicists. Short of that, he can persuade a small random sample of members of this expert class. If, upon being exposed to the relevant sequence, a representative group of quantum physicists change their views significantly in Eliezer’s direction, this would be good evidence that the larger population of physicists would update similarly after reading those writings. Has Eliezer tried to do this?

Update (2017-10-28): I just realized that the kind of challenge I'm raising here has been carried out, in the form of a "natural experiment", for Eliezer's views on decision theory. Years ago, David Chalmers spontaneously sent half a dozen leading decision theorists copies of Eliezer's TDT paper. If memory serves, Chalmers reported that none of these experts had been impressed (let alone persuaded).

Update (2018-01-20): Note the parallels between what Scott Alexander says here and what I write above (emphasis added):

I admit I don’t know as much about economics as some of you, but I am working off of a poll of the country’s best economists who came down pretty heavily on the side of this not significantly increasing growth. If you want to tell me that it would, your job isn’t to explain Economics 101 theories to me even louder, it’s to explain how the country’s best economists are getting it wrong.

Comment by pablo_stafforini on Inadequacy and Modesty · 2017-10-30T12:35:07.519Z · score: 1 (1 votes) · EA · GW

I think the main two factual disagreements here might be "how often, and to what extent, do top institutions and authorities fail in large and easy-to-spot ways?" and "for epistemic and instrumental purposes, to what extent should people like you and Eliezer trust your own inside-view reasoning about your (and authorities') competency, epistemic rationality, meta-rationality, etc.?"

Thank you, this is extremely clear, and captures the essence of much of what's going between Eliezer and his critics in this area.

Could you say more about what you have in mind by "confident pronouncements [about] AI timelines"? I usually think of Eliezer as very non-confident about timelines.

I had in mind forecasts Eliezer made many years ago that didn't come to pass as well as his most recent bet with Bryan Caplan. But it's a stretch to call these 'confident pronouncements', so I've edited my post and removed 'AI timelines' from the list of examples.

Comment by pablo_stafforini on Inadequacy and Modesty · 2017-10-29T20:41:57.857Z · score: 1 (1 votes) · EA · GW

I never claimed that this is what Eliezer was doing in that particular case, or in other cases. (I'm not even sure I understand Eliezer's position.) I was responding to the previous comment, and drawing a parallel between "beating the market" in that and other contexts. I'm sorry if this was unclear.

To address your substantive point: If the claim is that we shouldn't give much weight to the views of individuals and institutions that we shouldn't expect to be good at tracking the truth, despite their status or prominence in society, this is something that hardly any rationalist or EA would dispute. Nor does this vindicate various confident pronouncements Eliezer has made in the past—about nutrition, animal consciousness, philosophical zombies, population ethics, and quantum mechanics, to name a few—that deviate significantly from expert opinion, unless this is conjoined with credible arguments for thinking that warranted skepticism extends to each of those expert communities. To my knowledge, no persuasive arguments of this sort have been provided.

Comment by pablo_stafforini on Inadequacy and Modesty · 2017-10-29T11:32:55.022Z · score: 7 (9 votes) · EA · GW

The reason people aren't doing this is probably that it isn't profitable once you account for import duties, value added tax and customs clearance fees, as well as the time costs of transacting in the black market. I'm from Argentina and have investigated this in the past for other electronics, so my default assumption is that these reasons generalize to this particular case.

I think this discussion provides a good illustration of the following principle: you should usually be skeptical of your ability to "beat the market" even if you are able to come up with a plausible explanation of the phenomenon in question from which it follows that your circumstances are unique.

Similarly, I think one should generally distrust one's ability to "beat elite common sense" even if one thinks one can accurately diagnose why members of this reference class are wrong in this particular instance.

Very rarely, you may be able to do better than the market or the experts, but knowing that this is one of those cases takes much more than saying "I have a story that implies I can do this, and this story looks plausible to me."

[link] 'Crucial Considerations and Wise Philanthropy', by Nick Bostrom

2017-03-17T06:48:47.986Z · score: 14 (14 votes)
Comment by pablo_stafforini on Report -- Allocating risk mitigation across time · 2017-03-14T22:02:31.902Z · score: 0 (0 votes) · EA · GW

Updated link: https://www.fhi.ox.ac.uk/reports/2015-2.pdf

Comment by pablo_stafforini on A Different Take on President Trump · 2016-12-16T20:01:05.556Z · score: 3 (5 votes) · EA · GW

an audience full of people who can't tell whether or not to trust my perspective.

Statements like "There is a growing risk that European countries will fall into civil war" are very implausible to many folks here. So if you want people to take you seriously, you should at least show us that you sincerely believe this, by being willing to turn those statements into testable predictions. Your refusal to do this is part of the reason some of us don't trust your perspective.

Comment by pablo_stafforini on Should I be vegan? · 2016-12-12T11:29:31.738Z · score: 2 (2 votes) · EA · GW

Upon reflection, I agree with you. I haven't been using the "lactovegetarian" label much, both because few people know what it means and because there isn't much need to use it. But I won't be using it at all from now on.

Comment by pablo_stafforini on A Different Take on President Trump · 2016-12-08T13:52:37.891Z · score: 8 (10 votes) · EA · GW

Europe is a morass of ethnic conflict, terrorism, sexual violence, rising nationalist militias, and jihadism. There is a growing risk that European countries will fall into civil war. Civil war in Europe would be a catastrophic risk that could go global.

  1. What is your credence that at least one European country will fall into civil war in 2017?
  2. How do you define the global catastrophe that you believe could result from civil war in Europe? In particular, how many people would need to be killed for such an event to count as a global catastrophe in your sense?
Comment by pablo_stafforini on CEA is Fundraising! (Winter 2016) · 2016-12-07T22:08:39.549Z · score: 4 (4 votes) · EA · GW

I agree that, other things equal, we want to encourage critics to be constructive. All things considered, however, I'm not sure we should hold criticism to a higher standard, as we seem to be doing. This would result in higher quality criticism, but also in less total criticism.

In addition, the standard to which criticism is held is often influenced by irrelevant considerations, like the status of the person or organization being criticized. So in practice I would expect such a norm to stifle certain types of criticism more than others, over and above reducing criticism in general.

Comment by pablo_stafforini on Donor lotteries: demonstration and FAQ · 2016-12-07T20:16:38.572Z · score: 3 (3 votes) · EA · GW

I have been put in touch with other donors that are each contributing less than $5k, but you can just team up with us. Email me at MyFirstName at MyLastName, followed by the most common domain extension.

Ideally there should be a better procedure for doing this; the associated trivial inconvenience may be discouraging some people from joining.

Comment by pablo_stafforini on Donor lotteries: demonstration and FAQ · 2016-12-07T13:47:26.837Z · score: 11 (11 votes) · EA · GW

Cool. I'm in with $2k.

Comment by pablo_stafforini on Are You Sure You Want To Donate To The Against Malaria Foundation? · 2016-12-07T13:33:19.645Z · score: 6 (6 votes) · EA · GW

Firstly I think many people give to GiveWell recommended charities because they believe, rightly or wrongly, that a healthier population will spur economic growth, or political reform, or whatever else, which will improve the welfare of present and future generations of people in the country.

That argument, however, is vulnerable to the "suspicious convergence" objection.

Comment by pablo_stafforini on CEA is Fundraising! (Winter 2016) · 2016-12-07T13:12:07.425Z · score: 9 (9 votes) · EA · GW

While I disagree with Michael and don't think we should discourage EA orgs from posting fundraising documents,* I'm disappointed that his comment has so far received 100% downvotes. This seems to be part of a disturbing larger phenomenon whereby criticism of prominent EA orgs or people tends to attract significantly more downvotes than other posts or comments of comparable quality, especially posts or comments that praise such orgs or people.

__

(*) I work for CEA, so there's a potential conflict of interest that may bias my thinking about this issue.

Comment by pablo_stafforini on Contra the Giving What We Can pledge · 2016-12-05T17:25:32.687Z · score: 1 (1 votes) · EA · GW

The principle you outline does not apply to the pledge because many people (citation) don't think the pledge is obviously bad.

AlyssaVance isn't outlining a principle. AGB made a general claim about criticism being useless without a counterfactual. AlyssaVance's mention of firebombing was meant as a counterexample to that generalization.

Comment by pablo_stafforini on How valuable is movement growth? · 2016-12-04T14:42:30.935Z · score: 3 (3 votes) · EA · GW

I am concerned that we are reinventing the wheel, and ignoring a substantial body of empirical and theoretical work that has already been done on the subject.

I share this concern, and believe that EAs are often guilty of ignoring existing fields of research from which they could learn a lot. I'm not sure whether this concern applies in this particular case, however. I spent several days looking into the sociological literature on social movements and didn't find much of value. Have you stumbled across any writings that you would recommend?

Comment by pablo_stafforini on Should effective altruism have a norm against donating to employers? · 2016-12-02T12:04:57.387Z · score: 2 (4 votes) · EA · GW

Greg's point is that the case against donating to one's employer is part of a larger argument for increased professionalization of EA orgs. The situation he describes in the paragraph you quote illustrates what can go wrong when an organization lacks the level of professionalism he thinks orgs should have.

Comment by pablo_stafforini on Should effective altruism have a norm against donating to employers? · 2016-12-02T11:45:11.081Z · score: 0 (2 votes) · EA · GW

I think the claim should be that there is a prima facie reason for donating to one's employer. If the reason was pro tanto, one would have reason for donating even after learning that one's employer e.g. has no room for more funding.

I agree with the claim so interpreted. If you believe working for some organization is the best use of your time, there's a presumption that donating to this organization is the best use of your money. So I now see that my original comment was uncharitable.

At present, I don't have a good sense of how strong this presumption should be. So it's unclear to me how much weight I should give to arguments that appeal to this presumption.

Comment by pablo_stafforini on Should effective altruism have a norm against donating to employers? · 2016-11-30T17:45:07.778Z · score: 5 (7 votes) · EA · GW

The claim that it's natural to donate to one's employer given one's prior decision to become an employee assumes that EAs—or at least those working for EA orgs—should spend all their altruistic resources (i.e. time and money) in the same way. But this assumption is clearly false: it can be perfectly reasonable for me to believe that I should spend my time working for some organization, and that I should spend my money supporting some other organization. Obviously, this will be the case if the organization I work for, but not the one I support, lacks room for more funding. But it can also be the case in many other situations, depending on the relative funding and talent constraints of both the organization I work for and the organizations I could financially support.

Comment by pablo_stafforini on Should you switch away from earning to give? Some considerations. · 2016-08-26T12:10:56.541Z · score: 3 (3 votes) · EA · GW

If many of those people aren't earning to give, then either fewer EAs are earning to give than is generally assumed, or the EA survey is not a representative sample of the EA population.

Alternatively, we may question the antecedent of that conditional, and either downgrade our confidence in our ability to infer whether someone is earning to give from information about how much they give, or lower the threshold for inferring that a person who fails to give at least that much is likely not earning to give.

Comment by pablo_stafforini on The most persuasive writing neutrally surveys both sides of an argument · 2016-02-18T11:40:21.942Z · score: 4 (4 votes) · EA · GW

What are the best arguments against writing in this way?

Kudos for acting on your own advice!

Comment by pablo_stafforini on We care about WALYs not QALYs · 2015-11-16T18:57:30.359Z · score: 0 (0 votes) · EA · GW

I often see media coverage of effective altruism that says "effective altruists want to maximise the number of QALYs in the world." (e.g. London Review of Books).

The specific example you mention is particularly puzzling: it is a review of Doing Good Better, which makes this point very clearly (pp. 39-40):

the same methods that were used to create the QALY could be used to measure the costs and benefits of pretty much anything. We could use these methods to estimate the degree to which your well-being is affected by stubbing your toe, or by going through a divorce, or by losing your job. We could call them well-being-adjusted life years instead. The idea would be that being dead is at 0 percent well-being; being as well off as you realistically can be is at 100 percent well-being. You can compare the impact of different activities in terms of how much and for how long they increase people’s well-being. In chapter one we saw that doubling someone’s income gives a 5 percentage points increase in reported subjective well-being. On this measure, doubling someone’s income for twenty years would provide one WALY.

Thinking in terms of well-being improvements allows us to compare very different outcomes, at least in principle. For example, suppose you were unsure about whether to donate to the United Way of New York City or to Guide Dogs of America. You find out that it costs Guide Dogs of America approximately $50,000 to train and provide one guide dog for one blind person. Which is a better use of fifty dollars: providing five books, or a 1/1,000th contribution to a guide dog? It might initially seem like such a comparison is impossible, but if we knew the impact of each of these activities on people’s well-being, then we could compare them.

Suppose, hypothetically, that we found out that providing one guide dog (at a cost of $50,000) would give a 10 percentage points increase in reported well-being for one person’s life over nine years (the working life of the dog). That would be 0.9 WALYs. And suppose that providing five thousand books (at a cost of $50,000) provided a 0.001 percentage point increase in quality of life for five hundred people for forty years. That would be two WALYs. If we knew this, then we’d know that spending $50,000 on schoolbooks provided a greater benefit than spending $50,000 on one guide dog.

The difficulty of comparing different sorts of altruistic activity is therefore ultimately due to a lack of knowledge about what will happen as a result of that activity, or a lack of knowledge about how different activities translate into improvements to people’s lives. It’s not that different sorts of benefits are in principle incomparable.

Comment by pablo_stafforini on Is there a hedonistic utilitarian case for Cryonics? (Discuss) · 2015-08-29T13:46:20.150Z · score: 15 (15 votes) · EA · GW

Some folks argue that cryonics is or may be justified on EA grounds. Among these people, some go ahead and pay for a cryonics subscription. However, I have yet to find a single person in that group who has paid for someone else's subscription, rather than his or her own. If there was indeed an EA justification for cryonics, this would be an extraordinary coincidence. The hypothesis that these decisions were motivated by self-interest and later rationalized as justified on EA grounds seems much more plausible.

Comment by pablo_stafforini on Rich-country policy changes that could greatly benefit poor countries · 2015-08-21T02:59:48.799Z · score: 1 (1 votes) · EA · GW

As the quote you provided shows, labor mobility was rated as "not very tractable", not "intractable". Moreover, labor mobility was given that rating because it was judged to be politically infeasible, in light of the low popularity of even modest migration reform proposals, and not because we lack evidence from RCTs, or due to blindness to the mechanics of political change. So I think you are mischaracterizing what was said in the book.

Comment by pablo_stafforini on Peter Hurford thinks that a large proportion of people should earn to give long term · 2015-08-18T08:25:35.913Z · score: 1 (1 votes) · EA · GW

A neglected but important related question is: 'What proportion of people doing 'money moving' should earn to give?'

Comment by pablo_stafforini on Should I be vegan? · 2015-05-17T20:18:44.162Z · score: 7 (9 votes) · EA · GW

A neglected consideration in favor of veganism is that it gives one greater signalling flexibility over most other diets. Depending on one's audience, one can honestly describe oneself as a "vegetarian", a "lacto-vegetarian", a "reducetarian", etc. as well as a "vegan". The importance of this consideration will depend on the relative impact of signalling versus direct effects, the benefits of sending different signals in different contexts, the intrinsic and instrumental value of honesty, and other factors.

Comment by pablo_stafforini on Saving administration costs or saving lives? · 2015-04-01T00:39:38.779Z · score: 0 (0 votes) · EA · GW

If you have limited time and are only donating small amounts, it's potentially not worth the effort to look up detailed information, so evaluating based on what's available may be better than nothing. (At least you reduce your chance of donating to a scam that way.)

It is worse than nothing. Donor emphasis on overhead ratios causes charities to spend less on overhead than they consider optimal. So if anything one should assume charities with higher overheads to be less willing to yield to this pressure, and hence more cost-effective (at least as long as overhead expenses do not exceed certain upper bounds beyond which one could reasonably suspect corruption or incompetence).

Comment by pablo_stafforini on The 2014 Survey of Effective Altruists: Results and Analysis · 2015-03-19T00:52:43.308Z · score: 1 (1 votes) · EA · GW

What makes it particularly problematic is that it is very hard to estimate the ‘size’ of this bias

One approach would be to identify a representative sample of the EA population and circulate among folks in that sample a short survey with a few questions randomly sampled from the original survey. By measuring response discrepancies between surveys (beyond what one would expect if both surveys were representative), one could estimate the size of the sampling bias in the original survey.

ETA: I now see that a proposal along these lines is discussed in the subsection 'Comparison of the EA Facebook Group to a Random Sample' of the Appendix. In a follow-up study, the authors of the survey randomly sampled members of the EA Facebook group and compared their responses to those of members of that group in the original survey. However, if one regards the EA Facebook group as a representative sample of the EA population (which seems reasonable to me), one could also compare the responses in the follow-up survey to all responses in the original survey. Although the authors of the survey don't make this comparison, it could be made easily using the data already collected (though given the small sample size, practically significant differences may not turn out to be statistically significant).

Comment by pablo_stafforini on The 2014 Survey of Effective Altruists: Results and Analysis · 2015-03-19T00:29:57.441Z · score: 1 (1 votes) · EA · GW

I agree with the spirit of this criticism, though it seems that the problem is not significance testing as such, but a failure to define the null hypothesis adequately.

Comment by pablo_stafforini on Results from a survey of people's views on donation matching · 2015-03-01T08:28:38.923Z · score: 1 (3 votes) · EA · GW

When donation challenges become the new high-status thing in the EA community, please remember to credit Ben Kuhn.

Really cool post (I just read it on Ben's blog, but am commenting here because he wants to consolidate discussion in a single place).

Comment by pablo_stafforini on EA housemates: a good idea? Plus how to find them · 2015-01-26T00:51:53.373Z · score: 0 (0 votes) · EA · GW

Given the low costs involved in creating such a spreadsheet, my advice would be to go ahead and just try it.

Comment by pablo_stafforini on The Privilege of Earning To Give · 2015-01-16T08:11:32.016Z · score: 2 (2 votes) · EA · GW

Scott describes himself in the post you link to as "97% on board [with feminism]".

A clarification. The author of the post is Scott Alexander. The subject of the post is Scott Aaronson. Alexander doesn't describe himself as 97% on board with feminism; Aaronson does.

Effective Altruism Blogs

2014-11-28T17:26:05.861Z · score: 4 (4 votes)

Meetup : .impact's 26th project meeting

2014-09-19T18:50:57.302Z · score: 0 (0 votes)

[link] The Economist on "extreme altruism"

2014-09-18T19:53:52.287Z · score: 4 (4 votes)

Effective altruism quotes

2014-09-17T06:47:27.140Z · score: 5 (5 votes)