[link] 'Crucial Considerations and Wise Philanthropy', by Nick Bostrom 2017-03-17T06:48:47.986Z · score: 14 (14 votes)
Effective Altruism Blogs 2014-11-28T17:26:05.861Z · score: 4 (4 votes)
Meetup : .impact's 26th project meeting 2014-09-19T18:50:57.302Z · score: 0 (0 votes)
[link] The Economist on "extreme altruism" 2014-09-18T19:53:52.287Z · score: 4 (4 votes)
Effective altruism quotes 2014-09-17T06:47:27.140Z · score: 5 (5 votes)


Comment by pablo_stafforini on Ask Me Anything! · 2019-08-19T11:16:06.393Z · score: 8 (5 votes) · EA · GW


Comment by pablo_stafforini on Apology · 2019-03-24T22:05:18.230Z · score: 27 (11 votes) · EA · GW
There are no Real Apologies, it is naive to think otherwise and toxic to demand otherwise. Of course he is acknowledging wrongdoing, and he is acknowledging wrongdoing because he is being pressured to acknowledge wrongdoing.

What are you talking about? There's a clear difference between apologizing because one sincerely believes one acted wrongly, and apologizing only because one thinks the consequences will be graver if one fails to apologize. I am puzzled by your apparent failure to recognize this difference.

Comment by pablo_stafforini on Apology · 2019-03-24T20:11:55.687Z · score: 18 (10 votes) · EA · GW

Thanks for agreeing to state your credences explicitly (and strongly upvoted for that reason).

I thought it was important to get more precision given the evidence showing that qualifiers such as 'possible', 'likely', etc. are compatible with a wide range of values. Before your subsequent clarification, I interpreted your 'quite plausible' as expressing a probability of ~60%.

Comment by pablo_stafforini on Apology · 2019-03-24T19:16:07.822Z · score: 18 (8 votes) · EA · GW

"Quite plausible"? What's your actual credence?

Comment by pablo_stafforini on Apology · 2019-03-24T13:14:08.587Z · score: 24 (18 votes) · EA · GW
it is not at all clear to me that the accusations that are being discussed here are separate from the accusations that appear to have caused his apology. I agree that if they were from separate disconnected communities, then that would be significant evidence

In his apology, Jacy says that he "know[s] very little of the details of these allegations." But he clearly knows the Brown allegations very well. So even ignoring the other evidence cited by Halstead, the allegations for which he is apologizing clearly can't include the Brown allegations.

EDIT: I now see it's also possible that Jacy was presented with so little information that he wouldn't be able to determine if the allegations CEA was concerned with included the Brown allegations, however well he knew the latter. My reasoning above ignores this possibility. Personally, I think the evidence Halstead offered is pretty conclusive, so I don't think this makes a practical difference, but it still seemed something worth mentioning.

Comment by pablo_stafforini on Candidate Scoring System, First Release · 2019-03-13T15:59:59.399Z · score: 4 (4 votes) · EA · GW

Thanks for doing this. Maybe add Andrew Yang? From a recent Vox article by Dylan Matthews:

Yang, a startup veteran and founder of the nonprofit Venture for America who has never run for elected office before, has made a $12,000-per-year basic income for all American adults the centerpiece of his campaign. He averages 0 to 1 percent in public opinion polls, but as of this writing, he’s surged on prediction markets, with bettors giving him slightly worse odds than Warren, Booker, and Klobuchar, and better odds than Tulsi Gabbard, Kirsten Gillibrand, or Julián Castro.
...successful or not, Yang is a fascinating cultural phenomenon. He blends a traditionally left-wing platform (a mass expansion of the safety net and a big new value-added tax, or VAT, to pay for it) with massive appeal to the young, predominantly male, and, in their unique way, socially conservative audiences of people like Joe Rogan and Sam Harris.
Comment by pablo_stafforini on What skills would you like 1-5 EAs to develop? · 2019-03-07T16:06:41.778Z · score: 3 (2 votes) · EA · GW


To those interested in becoming better forecasters: I strongly recommend the list of prediction resources that Metaculus has put together.

Comment by pablo_stafforini on Making discussions in EA groups inclusive · 2019-03-05T01:03:04.040Z · score: 27 (9 votes) · EA · GW

If the topics to avoid are irrelevant to EA, it seems preferable to argue that these topics shouldn't be discussed because they are irrelevant than to argue that they shouldn't be discussed because they are offensive. In general, justifications for limiting discourse that appeal to epistemic considerations (such as bans on off-topic discussions) appear to generate less division and polarization than justifications that appeal to moral considerations.

Comment by pablo_stafforini on Making discussions in EA groups inclusive · 2019-03-04T20:33:21.219Z · score: 8 (3 votes) · EA · GW

That makes sense.

Comment by pablo_stafforini on Making discussions in EA groups inclusive · 2019-03-04T20:22:04.211Z · score: 6 (5 votes) · EA · GW
People who are down-voting, can you please explain why? To just down-vote seems unproductive.

Are you implying that every time someone downvotes a post they should provide an accompanying explanation of their decision? If not, what makes this post different from others?

Comment by pablo_stafforini on Do you have any suggestions for resources on the following research topics on successful social and intellectual movements similar to EA? · 2019-02-24T12:48:12.189Z · score: 7 (4 votes) · EA · GW

In connection to (1), a while ago I compiled a list of EA-relevant fields and movements, with associated EA and academic references. As I point out in the document, the list is incomplete, but it's already at a stage where others may perhaps find it useful. You can access the Google Doc here.

Comment by pablo_stafforini on Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post · 2019-02-17T11:09:42.362Z · score: 17 (10 votes) · EA · GW

In case it helps others decide whether or not to take the Superforecasting Fundamentals course, I'm reposting a brief message I sent to the CEA Slack workspace back in August 2017:

I took it a year or so ago. The course is very good, but also very basic: I clearly wasn’t the target audience, since I was already quite familiar with most of the content. I wouldn't recommend it unless you don’t know anything about forecasting.
Comment by pablo_stafforini on Near-term focus, robustness, and flow-through effects · 2019-02-05T22:12:35.762Z · score: 1 (1 votes) · EA · GW

I see. Thanks.

Comment by pablo_stafforini on Near-term focus, robustness, and flow-through effects · 2019-02-05T19:20:24.681Z · score: 2 (2 votes) · EA · GW
Another object-level point, due to AGB

Would you mind linking to the comment left by that user, rather than to the user who left the comment? Thanks.

Comment by pablo_stafforini on What are some lists of open questions in effective altruism? · 2019-02-05T11:27:04.765Z · score: 13 (6 votes) · EA · GW

This post compiles lists of important questions and problems.

Comment by pablo_stafforini on Cost-Effectiveness of Aging Research · 2019-01-31T11:46:05.924Z · score: 2 (5 votes) · EA · GW

Owen's last name is 'Cotton-Barratt'.

Comment by pablo_stafforini on High-priority policy: towards a co-ordinated platform? · 2019-01-15T13:29:42.731Z · score: 2 (2 votes) · EA · GW
What would an EA policy platform look like?

You may want to expand your list to include some of the proposals here:

Comment by pablo_stafforini on Climate Change Is, In General, Not An Existential Risk · 2019-01-12T15:18:07.770Z · score: 3 (4 votes) · EA · GW

Beware brittle arguments.

Comment by pablo_stafforini on Rationality as an EA Cause Area · 2018-11-14T16:55:58.718Z · score: 8 (7 votes) · EA · GW

Then I would suggest changing the title of the post. 'Rationality as a cause area' can mean many things besides 'growing the rationality community'.

Furthermore, some of the considerations you list in support of the claim that rationality is a promising cause area do not clearly support, and may even undermine, the claim that one should grow the rationality community. Your remarks about epistemic standards, in particular, suggest that one should approach growth very carefully, and that one may want to deprioritize growth in favour of other forms of community building.

Comment by pablo_stafforini on Against prediction markets · 2018-05-15T17:10:08.920Z · score: 2 (2 votes) · EA · GW

Feel free to ignore if you don't think this is sufficiently important, but I don't understand the contrast you draw between accuracy and outside world manipulation. I thought manipulation of prediction markets was concerning precisely because it reduces their accuracy. Assuming you accept Robin's point that manipulation increases accuracy on balance, what's your residual concern?

Comment by pablo_stafforini on Against prediction markets · 2018-05-13T21:02:20.440Z · score: 2 (2 votes) · EA · GW

I find it questionable whether blatant attempts at voter manipulation through prediction markets are worth the cost. This is a big price to pay even if prediction markets did a bit better than polls or pundits.

Robin's position is that manipulators can actually improve the accuracy of prediction markets, by increasing the rewards to informed trading. On this view, the possibility of market manipulation is not in itself a consideration that favors non-market alternatives, such as polls or pundits.

Comment by pablo_stafforini on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-22T12:31:49.974Z · score: 4 (10 votes) · EA · GW

The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society.

Wild animal advocacy is far more neglected than farmed animal advocacy, and it involves even larger numbers of sentient beings ignored by most of society. If the superiority of farmed animal advocacy over global poverty along these two dimensions is a sufficient reason for not working on global poverty, why isn't the superiority of wild animal advocacy over farmed animal advocacy along those same dimensions also a sufficient reason for not working on farmed animal advocacy?

Comment by pablo_stafforini on New Effective Altruism course syllabus · 2018-01-29T16:49:39.491Z · score: 3 (3 votes) · EA · GW

Thanks for creating this. I've added your course to this list.

Comment by pablo_stafforini on Finding and managing literature on EA topics · 2017-11-13T19:33:48.346Z · score: 4 (4 votes) · EA · GW

Thank you for writing this! The images under 'What are you going to search for?' are not loading.

Comment by pablo_stafforini on In defence of epistemic modesty · 2017-11-01T21:27:56.091Z · score: 6 (6 votes) · EA · GW

Thanks for drawing our attention to that early Overcoming Bias post. But please note that it was written by Hal Finney, not Robin Hanson. It took me a few minutes to realize this, so it seemed worth highlighting lest others fail to appreciate it.

Incidentally, I've been re-reading Finney's posts over the past couple of days and have been very impressed. What a shame that such a fine thinker is no longer with us.

ETA: Though one hopes this is temporary.

Comment by pablo_stafforini on In defence of epistemic modesty · 2017-10-31T21:28:24.957Z · score: 2 (4 votes) · EA · GW

Okay, thank you for the clarification.

[In the original version, your comment said that the quote was pulled out of context, hence my interpretation.]

Comment by pablo_stafforini on In defence of epistemic modesty · 2017-10-31T20:59:50.130Z · score: 2 (2 votes) · EA · GW

In that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn't mean one can't look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero.

I offered that quote to cast doubt on Rob's assertion that Eliezer has "a really strong epistemic track record, and that this is good evidence that modesty is a bad idea." I didn't mean to deny that Eliezer had some successes, or that one shouldn't "look closely and form much better estimates of the likelihood of good invisible reasons" or that "the base rate of dysfunction is anywhere near zero", and I didn't offer the quote to dispute those claims.

Readers can read the original comment and judge for themselves whether the quote was in fact pulled out of context.

Comment by pablo_stafforini on Inadequacy and Modesty · 2017-10-31T20:39:17.060Z · score: 0 (0 votes) · EA · GW

why in general should we presume groups of people with academic qualifications have their strongest incentives towards truth?

Maybe because these people have been surprisingly accurate? In addition, it's not that Eliezer disputes that general presumption: he routinely relies on results in the natural and social sciences without feeling the need to justify in each case why we should trust e.g. computer scientists, economists, neuroscientists, game theorists, and so on.

Comment by pablo_stafforini on In defence of epistemic modesty · 2017-10-31T18:41:17.893Z · score: 3 (3 votes) · EA · GW

I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea.

Really? My sense is that the opposite is the case. Eliezer himself acknowledges that he has an "amazing bet-losing capability" and my sense is that he tends to bet against scientific consensus (while Caplan, who almost always takes the consensus view, has won virtually all his bets). Carl Shulman notes that Eliezer's approach "has lead [him] astray repeatedly, but I haven't seen as many successes."

Comment by pablo_stafforini on Inadequacy and Modesty · 2017-10-31T16:09:29.520Z · score: 1 (1 votes) · EA · GW

I also feel comfortable having lower probability in the existence of God than the average physicist does; and "physicists are the wrong kind of authority to defer to about God" isn't the reasoning I go through to reach that conclusion.

Out of curiosity, what is the reasoning you would go through to reach that conclusion?

Comment by pablo_stafforini on Inadequacy and Modesty · 2017-10-31T14:46:42.374Z · score: 2 (2 votes) · EA · GW

A discussion about the merits of each of the views Eliezer holds on these issues would itself exemplify the immodest approach I'm here criticizing. What you would need to do to change my mind is to show me why Eliezer is justified in giving so little weight to the views of each of those expert communities, in a way that doesn't itself take a position on the issue by relying primarily on the inside view.

Let's consider a concrete example. When challenged to justify his extremely high confidence in MWI, despite the absence of a strong consensus among physicists, Eliezer tells people to "read the QM sequence". But suppose I read the sequence and become persuaded. So what? Physicists are just as divided now as they were before I raised the challenge. By hypothesis, Eliezer was unjustified in being so confident in MWI despite the fact that it seemed to him that this interpretation was correct, because the relevant experts did not share that subjective impression. If upon reading the sequence I come to agree with Eliezer, that just puts me in the same epistemic predicament as Eliezer was originally: just like him, I too need to justify the decision to rely on my own impressions instead of deferring to expert opinion.

To persuade me, Greg, and other skeptics, what Eliezer needs to do is to persuade the physicists. Short of that, he can persuade a small random sample of members of this expert class. If, upon being exposed to the relevant sequence, a representative group of quantum physicists change their views significantly in Eliezer’s direction, this would be good evidence that the larger population of physicists would update similarly after reading those writings. Has Eliezer tried to do this?

Update (2017-10-28): I just realized that the kind of challenge I'm raising here has been carried out, in the form of a "natural experiment", for Eliezer's views on decision theory. Years ago, David Chalmers spontaneously sent half a dozen leading decision theorists copies of Eliezer's TDT paper. If memory serves, Chalmers reported that none of these experts had been impressed (let alone persuaded).

Update (2018-01-20): Note the parallels between what Scott Alexander says here and what I write above (emphasis added):

I admit I don’t know as much about economics as some of you, but I am working off of a poll of the country’s best economists who came down pretty heavily on the side of this not significantly increasing growth. If you want to tell me that it would, your job isn’t to explain Economics 101 theories to me even louder, it’s to explain how the country’s best economists are getting it wrong.

Comment by pablo_stafforini on Inadequacy and Modesty · 2017-10-30T12:35:07.519Z · score: 1 (1 votes) · EA · GW

I think the main two factual disagreements here might be "how often, and to what extent, do top institutions and authorities fail in large and easy-to-spot ways?" and "for epistemic and instrumental purposes, to what extent should people like you and Eliezer trust your own inside-view reasoning about your (and authorities') competency, epistemic rationality, meta-rationality, etc.?"

Thank you, this is extremely clear, and captures the essence of much of what's going between Eliezer and his critics in this area.

Could you say more about what you have in mind by "confident pronouncements [about] AI timelines"? I usually think of Eliezer as very non-confident about timelines.

I had in mind forecasts Eliezer made many years ago that didn't come to pass as well as his most recent bet with Bryan Caplan. But it's a stretch to call these 'confident pronouncements', so I've edited my post and removed 'AI timelines' from the list of examples.

Comment by pablo_stafforini on Inadequacy and Modesty · 2017-10-29T20:41:57.857Z · score: 1 (1 votes) · EA · GW

I never claimed that this is what Eliezer was doing in that particular case, or in other cases. (I'm not even sure I understand Eliezer's position.) I was responding to the previous comment, and drawing a parallel between "beating the market" in that and other contexts. I'm sorry if this was unclear.

To address your substantive point: If the claim is that we shouldn't give much weight to the views of individuals and institutions that we shouldn't expect to be good at tracking the truth, despite their status or prominence in society, this is something that hardly any rationalist or EA would dispute. Nor does this vindicate various confident pronouncements Eliezer has made in the past—about nutrition, animal consciousness, philosophical zombies, population ethics, and quantum mechanics, to name a few—that deviate significantly from expert opinion, unless this is conjoined with credible arguments for thinking that warranted skepticism extends to each of those expert communities. To my knowledge, no persuasive arguments of this sort have been provided.

Comment by pablo_stafforini on Inadequacy and Modesty · 2017-10-29T11:32:55.022Z · score: 7 (9 votes) · EA · GW

The reason people aren't doing this is probably that it isn't profitable once you account for import duties, value added tax and customs clearance fees, as well as the time costs of transacting in the black market. I'm from Argentina and have investigated this in the past for other electronics, so my default assumption is that these reasons generalize to this particular case.

I think this discussion provides a good illustration of the following principle: you should usually be skeptical of your ability to "beat the market" even if you are able to come up with a plausible explanation of the phenomenon in question from which it follows that your circumstances are unique.

Similarly, I think one should generally distrust one's ability to "beat elite common sense" even if one thinks one can accurately diagnose why members of this reference class are wrong in this particular instance.

Very rarely, you may be able to do better than the market or the experts, but knowing that this is one of those cases takes much more than saying "I have a story that implies I can do this, and this story looks plausible to me."

Comment by pablo_stafforini on Report -- Allocating risk mitigation across time · 2017-03-14T22:02:31.902Z · score: 0 (0 votes) · EA · GW

Updated link:

Comment by pablo_stafforini on A Different Take on President Trump · 2016-12-16T20:01:05.556Z · score: 4 (6 votes) · EA · GW

an audience full of people who can't tell whether or not to trust my perspective.

Statements like "There is a growing risk that European countries will fall into civil war" are very implausible to many folks here. So if you want people to take you seriously, you should at least show us that you sincerely believe this, by being willing to turn those statements into testable predictions. Your refusal to do this is part of the reason some of us don't trust your perspective.

Comment by pablo_stafforini on Should I be vegan? · 2016-12-12T11:29:31.738Z · score: 2 (2 votes) · EA · GW

Upon reflection, I agree with you. I haven't been using the "lactovegetarian" label much, both because few people know what it means and because there isn't much need to use it. But I won't be using it at all from now on.

Comment by pablo_stafforini on A Different Take on President Trump · 2016-12-08T13:52:37.891Z · score: 8 (10 votes) · EA · GW

Europe is a morass of ethnic conflict, terrorism, sexual violence, rising nationalist militias, and jihadism. There is a growing risk that European countries will fall into civil war. Civil war in Europe would be a catastrophic risk that could go global.

  1. What is your credence that at least one European country will fall into civil war in 2017?
  2. How do you define the global catastrophe that you believe could result from civil war in Europe? In particular, how many people would need to be killed for such an event to count as a global catastrophe in your sense?
Comment by pablo_stafforini on CEA is Fundraising! (Winter 2016) · 2016-12-07T22:08:39.549Z · score: 4 (4 votes) · EA · GW

I agree that, other things equal, we want to encourage critics to be constructive. All things considered, however, I'm not sure we should hold criticism to a higher standard, as we seem to be doing. This would result in higher quality criticism, but also in less total criticism.

In addition, the standard to which criticism is held is often influenced by irrelevant considerations, like the status of the person or organization being criticized. So in practice I would expect such a norm to stifle certain types of criticism more than others, over and above reducing criticism in general.

Comment by pablo_stafforini on Donor lotteries: demonstration and FAQ · 2016-12-07T20:16:38.572Z · score: 3 (3 votes) · EA · GW

I have been put in touch with other donors that are each contributing less than $5k, but you can just team up with us. Email me at MyFrstName at MyLastName, followed by the most common domain extension.

Ideally there should be a better procedure for doing this; the associated trivial inconvenience may be discouraging some people from joining.

Comment by pablo_stafforini on Donor lotteries: demonstration and FAQ · 2016-12-07T13:47:26.837Z · score: 11 (11 votes) · EA · GW

Cool. I'm in with $2k.

Comment by pablo_stafforini on Are You Sure You Want To Donate To The Against Malaria Foundation? · 2016-12-07T13:33:19.645Z · score: 6 (6 votes) · EA · GW

Firstly I think many people give to GiveWell recommended charities because they believe, rightly or wrongly, that a healthier population will spur economic growth, or political reform, or whatever else, which will improve the welfare of present and future generations of people in the country.

That argument, however, is vulnerable to the "suspicious convergence" objection.

Comment by pablo_stafforini on CEA is Fundraising! (Winter 2016) · 2016-12-07T13:12:07.425Z · score: 9 (9 votes) · EA · GW

While I disagree with Michael and don't think we should discourage EA orgs from posting fundraising documents,* I'm disappointed that his comment has so far received 100% downvotes. This seems to be part of a disturbing larger phenomenon whereby criticism of prominent EA orgs or people tends to attract significantly more downvotes than other posts or comments of comparable quality, especially posts or comments that praise such orgs or people.


(*) I work for CEA, so there's a potential conflict of interest that may bias my thinking about this issue.

Comment by pablo_stafforini on Contra the Giving What We Can pledge · 2016-12-05T17:25:32.687Z · score: 1 (1 votes) · EA · GW

The principle you outline does not apply to the pledge because many people (citation) don't think the pledge is obviously bad.

AlyssaVance isn't outlining a principle. AGB made a general claim about criticism being useless without a counterfactual. AlyssaVance's mention of firebombing was meant as a counterexample to that generalization.

Comment by pablo_stafforini on How valuable is movement growth? · 2016-12-04T14:42:30.935Z · score: 3 (3 votes) · EA · GW

I am concerned that we are reinventing the wheel, and ignoring a substantial body of empirical and theoretical work that has already been done on the subject.

I share this concern, and believe that EAs are often guilty of ignoring existing fields of research from which they could learn a lot. I'm not sure whether this concern applies in this particular case, however. I spent several days looking into the sociological literature on social movements and didn't find much of value. Have you stumbled across any writings that you would recommend?

Comment by pablo_stafforini on Should effective altruism have a norm against donating to employers? · 2016-12-02T12:04:57.387Z · score: 2 (4 votes) · EA · GW

Greg's point is that the case against donating to one's employer is part of a larger argument for increased professionalization of EA orgs. The situation he describes in the paragraph you quote illustrates what can go wrong when an organization lacks the level of professionalism he thinks orgs should have.

Comment by pablo_stafforini on Should effective altruism have a norm against donating to employers? · 2016-12-02T11:45:11.081Z · score: 0 (2 votes) · EA · GW

I think the claim should be that there is a prima facie reason for donating to one's employer. If the reason were pro tanto, one would have reason for donating even after learning that one's employer e.g. has no room for more funding.

I agree with the claim so interpreted. If you believe working for some organization is the best use of your time, there's a presumption that donating to this organization is the best use of your money. So I now see that my original comment was uncharitable.

At present, I don't have a good sense of how strong this presumption should be. So it's unclear to me how much weight I should give to arguments that appeal to this presumption.

Comment by pablo_stafforini on Should effective altruism have a norm against donating to employers? · 2016-11-30T17:45:07.778Z · score: 5 (7 votes) · EA · GW

The claim that it's natural to donate to one's employer given one's prior decision to become an employee assumes that EAs—or at least those working for EA orgs—should spend all their altruistic resources (i.e. time and money) in the same way. But this assumption is clearly false: it can be perfectly reasonable for me to believe that I should spend my time working for some organization, and that I should spend my money supporting some other organization. Obviously, this will be the case if the organization I work for, but not the one I support, lacks room for more funding. But it can also be the case in many other situations, depending on the relative funding and talent constraints of both the organization I work for and the organizations I could financially support.

Comment by pablo_stafforini on Should you switch away from earning to give? Some considerations. · 2016-08-26T12:10:56.541Z · score: 3 (3 votes) · EA · GW

If many of those people aren't earning to give, then either fewer EAs are earning to give than is generally assumed, or the EA survey is not a representative sample of the EA population.

Alternatively, we may question the antecedent of that conditional, and either downgrade our confidence in our ability to infer whether someone is earning to give from information about how much they give, or lower the threshold for inferring that a person who fails to give at least that much is likely not earning to give.

Comment by pablo_stafforini on The most persuasive writing neutrally surveys both sides of an argument · 2016-02-18T11:40:21.942Z · score: 4 (4 votes) · EA · GW

What are the best arguments against writing in this way?

Kudos for acting on your own advice!