Posts

Peter Singer no-platformed by pro-disability protestors at Canadian university 2017-03-08T17:53:10.260Z
80,000 Hours: EA and Highly Political Causes 2017-01-28T18:42:26.069Z

Comments

Comment by the_jaded_one on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-29T16:57:31.132Z · EA · GW

The idea of introducing social justice into an existing movement has already been tried, and I think it's worth going over the failures and problems that social justice has caused in the atheist movement before jumping headlong into it in the EA movement. This reddit page about why Atheism+ failed makes for interesting reading: https://www.reddit.com/r/atheism/comments/2ygiwh/so_why_did_atheism_plus_fail/

Unfortunately, the people who ended up in charge of the movement cared much more about perpetuating their radical ideologies, their cults of personality, and their easy paycheques than any of these issues. ... No matter how noble your cause, someone who practices dishonesty, censorship, intimidation, and harassment in alleged pursuit of that cause is not your friend.

See also this: https://athefist.wordpress.com/2013/09/04/the-atheism-plusftb-problem/

Attempts to interfere with harassment policies and promoting the (unevidenced) idea that atheist meetings are hotbeds of sexual exploitation have slashed female attendance at events like TAM which were practically on 50/50 parity. Rather than trying to invest in the future of the movement or seek the best and most effective speakers there’s an insistence on the basis of gender rather than expertise. Not that there aren’t good speakers of all genders but when you pass over expert male speakers to include sub-par ones with axes to grind rather than progress to make that’s an issue

it continues:

There’s also something peculiar in claiming to be atheists and skeptics while suspending skepticism when it comes to certain claims – like the highly questionable 1-in-4 rape statistic or broader concepts like patriarchy and rape culture. Skepticism or demands for evidence in these arenas is treated as hostility.

The post by Kelly that I am responding to seems to contain several red flags indicating that EA+SJ is falling into the same traps that Atheism+SJ fell into:

  • suspending healthy skepticism of questionable claims,
  • advocating identity categories over competence,
  • supporting the silencing of dissenting opinions and abandoning free speech

As I said in another comment: don't say you weren't warned if this goes badly.

Comment by the_jaded_one on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-29T16:32:59.108Z · EA · GW

Came to say this as well.

See, for example:

https://www.reddit.com/r/atheism/comments/2ygiwh/so_why_did_atheism_plus_fail/

The atheists even started to disinvite their intellectual founders, e.g. Richard Dawkins. Will EA eventually go down the same path - will they end up disinviting e.g. Bostrom for not being a sufficiently zealous social justice advocate?

All I'm saying is that there is a precedent here. If SJW-flavored EA ends up going down this path, please don't say you were not warned.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-10-26T21:04:10.657Z · EA · GW

Thanks for the info on the worm wars, will look into it.

Comment by the_jaded_one on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T21:01:28.668Z · EA · GW

This smells a lot like a Social Justice Warrior takeover of effective altruism. The idea of restricting free speech is particularly worrying. I would write a full rebuttal, but it might not be worth my time or that of others - the movement might already be unsalvageable (does anyone agree/disagree with that?)

EDIT: Replying to XCCF below: I don't think there's much in this post that doesn't qualify as generic SJW ideology and talking points.

EDIT: Regarding the noncentral fallacy: I think this is a pretty central example of an SJW takeover from a pretty central SJW, but I'm open to new information.

Comment by the_jaded_one on Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood · 2017-06-30T06:41:44.053Z · EA · GW

https://en.m.wikipedia.org/wiki/Explanatory_gap

Comment by the_jaded_one on Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood · 2017-06-29T21:07:14.980Z · EA · GW

I think that functionalism is incorrect and that we are super-confused about this issue.

Specifically, there is merit to the "Explanatory gap" argument. See https://en.m.wikipedia.org/wiki/Explanatory_gap

I also sort of think I know what the missing thing is. It's that the input is connected to the algorithm that constitutes you.

If this is true, there is no objective fact-of-the-matter about which entities are conscious (in the sense of having qualia). From my point of view only I am conscious. From your point of view, only you are. Neither of us are wrong.

Comment by the_jaded_one on [deleted post] 2017-04-30T12:06:55.569Z

I do think indeed every physical punishment, however "mild" or "reasonable", is child abuse

I think this claim is a bit problematic...

  • moral claim masquerading as factual via reification of moral categories (there is no objective fact of the matter about whether something is or is not child abuse)
  • supporting a deontological claim with consequentialist evidence of harm that (presumably) arises from only a subset of violations, namely the more extreme ones,
  • never physically punishing children is a much less defensible, less persuasive position than doing so in a limited set of circumstances

Comment by the_jaded_one on How accurately does anyone know the global distribution of income? · 2017-04-28T08:17:35.731Z · EA · GW

If a shanty town opens down the road from me, giving me the option to live like the global poor, I become richer relative to my neighbors, but I don't become richer in absolute terms. Even if a shanty town opened, I'd buy the same stuff as before, so my quality of life would be exactly the same.

I think this is incorrect. Right now I am looking for accommodation. The cheapest option I can find (which doesn't have a working washing machine and is a single small room with shared facilities) costs €5400 per year. It would be very useful for me to have the option to live in a room of quality and price at the level of the 75th or 80th percentile in India. Eyeballing the graph above, the 80th percentile in India is on $1500 or so. They can't be paying more than $700 for their accommodation.

I have to go live in this room - the alternatives are even more expensive, or being homeless and losing my job.

I agree that the shanty town wouldn't help me - I would stink of feces and quickly lose my job on personal hygiene grounds - but the 50th and 75th and 80th percentiles in India do not live in 'shanty towns'. Or am I completely misinformed here?

The reason the super cheap goods the global poor buy don't exist in the West is because no-one wants them.

that's false - they don't exist because the government bans or taxes them, or because of cost disease. In almost every relevant category (cars, accommodation, food, household goods), the government bans you from buying the cheap options.

E.g.

  • A bike helmet made in China for $3 sells for €40 (compulsory safety testing to ludicrously high standards, tax, overheads).
  • Want to buy a second-hand toaster? Nope, banned in many countries because it might be hazardous. Pay for a new one, including all the tax and overheads, and then when you've finished, throw the old one away.
  • Want to learn to drive in the UK as a new young male driver? That'll be £1500 for lessons, plus £3800 for insurance. Why? Do people in India or Brazil need to pay that much? You tell me!
  • Want to buy a cheap new car like the Tata Nano? Nope, your government has banned it because it's 99.999% safe rather than 99.9999% safe.
  • Surely a speeding fine will be inconsequential to someone in the top 1% of the global income distribution, because the harm from speeding on a road is a fixed quantity? No, the government wisely decided to make it scale with your income!
  • Want to buy/rent a very small house/flat/room? Nope! It has to have a bunch of amenities and features that you don't need, by law.
  • Want to buy a simple product like milk at market price? Nope! The government, media and farming lobby are getting together to make sure that consumers subsidize unprofitable dairy farms.

I think the key here is that once you have some money, the government finds many ways to take that money away from you, and those ways tend to scale as a percentage of your income for people in the range that we are talking about ($1000-$100,000). Being able to afford a place to live has a minimum threshold which depends on the average income in your country.

If you (in the west) fall behind in this race to earn enough money so that, once the government has thieved from you at both ends, you can still afford a room to live in, you fall into the homelessness trap, which is very hard to escape from and is actually worse than the life of the median person in India.

Comment by the_jaded_one on How accurately does anyone know the global distribution of income? · 2017-04-26T17:27:42.917Z · EA · GW

it would indeed be harder to live in the west on $2/day, because the low-quality goods that the global poor use are not available to buy. I think the relevant comparison is more like "if there were lots of people living on $2/day in the west, what quality of living would you get?". It's artificial to imagine one person living in extreme poverty without a market and community around them.

OK, so maybe appeals to donate money based on a factor-of-100 wealth difference should be limited to people who actually have a third-world price/quality market for (food, accommodation, shelter) available to them. Hmmmm, OK, that would be no-one at all.

Then we come to this:

You could, of course, doubt the existing estimates. My general policy is to go with the expert view when it comes to issues that have been thoroughly researched,

...

The PPP adjustments are meant as a hypothetical "what you could buy on $2 per day if the same goods were sold in stores, or if there were lots of other poor people in the country".

So they've thoroughly researched a question which is completely different from the one I care about: what I can actually buy and do.

As an inhabitant of a rich country, you get to consume lots of extra public goods that aren't fully included in the post-tax income figures in these data-sets e.g. safety, clean air, beautiful buildings, being surrounded by lots of educated people.

But these things are mostly not worth the massive amount of tax money I have to pay. And that's partly because that tax money is not being spent on me (I have looked at government spending, and the part of the pie that is spent on "things that childless, healthy 30-year-olds want" is extremely small), partly because taxation is progressive and so punishes people who earn well, and partly because others in the west have different preferences about how much to spend reducing various risks (such as the risk of a $50,000 car being damaged in a collision with my $1000 old banger).

I would contend that I am not (on $60k) 100 times richer than the average Indian, at least not in the same way that someone on $6M is 100 times richer than me; the way that they really can buy my entire life 90 times over and still be way better off than me.

Anyway, thanks for responding to me, best regards, Jaded.

Comment by the_jaded_one on How accurately does anyone know the global distribution of income? · 2017-04-26T15:57:19.054Z · EA · GW

I think that on £1.53/day you could easily die, depending on your location (esp. cold locations). No food in the bins for a while, police evict you from your tent or destroy your shelter, you get drenched with water and then really cold, you get an injury or infection.

Are these kinds of things (dying from exposure or hunger, the police bulldozing your house) actually happening all the time to the median person in India on $700? I don't think so. I don't imagine it's easy to be the median Indian, but I expect that you would have a shack, and food, and not freeze to death.

Also, there is a larger issue here. Being "100 times richer" than someone hides a lot of important assumptions. Let me grant the point that I could survive as a barely-alive hobo for £1.53/day. Well, that existence is not compatible with having a job in the west. There are amounts of money that I have to spend whether I like it or not, if I am to continue to earn any money at all, let alone my current salary.

Others have argued that that restriction is irrelevant since it isn't binding, but actually I would quite like the option to live in a really small room sharing facilities with 4 people if it was 4x cheaper. The option just isn't on the market. I am looking for accommodation right now, and plenty of people want to sell me 50m^2 for $1000/month with a 3 month penalty clause ($3000!) if I want to leave in less than 3 years, but no-one is selling 12m^2 for $250/month. In fact, the government in my country (western Europe) has passed laws that prevent me from living in a student house (closer to what I want to pay), because I am not a student; they have deliberately split the housing market into two and made people with jobs ineligible for the cheaper half. I would like to pay for insurance that only pays out to a maximum of $1000, because that is all my car is worth, but other people on the roads drive $50,000 cars and that impacts my premium.

Then there's the social aspect of wealth. If I lived with the same quality of stuff that an Indian person at the 75th percentile of wealth lives on, and took a girl back to my shack, she would urgently have something else to do; whereas I can imagine a female from the 50th percentile in pretty much any place being impressed by a male from the 75th percentile.

With all these things in mind, I would say that I am not 100 times richer than a guy in India earning $700. If I were earning $70,000 IN INDIA, I would say it would be closer to the truth (though some of the same problems would apply, especially having to spend money to hold down a white collar job). For starters, I could easily afford a servant or three on that kind of money in India, which paints a picture that is more intuitively commensurate with the factor of 100 that the numbers imply.

Anyway, thanks for responding, I realize your time is valuable!

Comment by the_jaded_one on How accurately does anyone know the global distribution of income? · 2017-04-26T15:37:54.804Z · EA · GW

I didn't bring up the $70k figure or the $200k figure

that may be true, but they are figures that have been brought up

FWIW I doubt this is actually true.

Maybe. But the promotional materials certainly seem to frame it that way.

Comment by the_jaded_one on How accurately does anyone know the global distribution of income? · 2017-04-09T11:41:11.484Z · EA · GW

If minimum standards rise to $90,000 and I'm earning $100,000, I would argue they do probably affect me substantially and my original premise of 'minimum standards that basically don't affect me' no longer holds.

And I think the reality of the situation facing many people in the intended audience of the original graph is at least somewhat like that.

As this debate has progressed, the amount of income corresponding to the targeted person has gradually moved upwards from $70k gross in an expensive area of The West (Bay Area, Oxford UK, NYC, London) to $200k net in an average-cost area (Ohio). I feel like there is something of a motte-and-bailey defence going on here:

  • the "motte" is the position that the superstar lawyer earning $200k after tax who inexplicably lives in Cleveland, Ohio could pretty reasonably be said to be roughly 200 times richer than the person in the developing world on $1000. Not quite true (because it still fails the division test), but close. Problem with the "motte": If you literally told young EA recruitment targets that they all earned $200k post tax and lived in Cleveland, Ohio where living costs are average, they would unanimously object that that's nowhere near their situation in life.

  • the "bailey" is position that young potential EA recruits earning $70k net in an expensive area are literally 100 times richer than an average Indian earning $700. Advantage of the "bailey": makes people feel extremely guilty and more likely to donate money or sign the pledge. Disdvantage of the "bailey": as always, it's not actually a defensible position. We can see this by the fact that as I push on it, a retreat is happening.

Comment by the_jaded_one on How accurately does anyone know the global distribution of income? · 2017-04-08T00:38:14.499Z · EA · GW

why do increases in minimum standards that basically don't affect me (I was already buying higher-than-minimum-quality things) and don't at all affect the median Indian make me much poorer relative to the median Indian?

Well, this itself may prove too much.

Suppose that the minimum to survive in the west is raised to $90,000, and if you have less than that you are thrown out onto the streets and made homeless.

If the minimum to not be homeless is $90,000 and you earn $100,000, are you REALLY 100 times richer than someone on $1000 who has a shack to live in and food to eat?

and the now-much-richer countries have a safety net that enables everyone to reach this.

that's a nice fantasy, but in reality the way the west works is that if you are a single young male with less than enough money to afford rent, there is no safety net in many places, especially the USA and the UK. You are thrown into the homelessness trap.

Comment by the_jaded_one on How accurately does anyone know the global distribution of income? · 2017-04-06T17:37:09.706Z · EA · GW

these bottom lines remain in every estimate of the global income distribution I’ve seen so far... Many people in the world live in serious absolute poverty, surviving on as little as one hundredth the income of the upper-middle class in the US.

But is this bottom line really approximately true?

A salary of $70,000 could be considered upper-middle-class. 1/100th of $70,000 is $700.

According to the chart, that is slightly greater than the income of the median Indian, adjusted for PPP.

Since these figures have been adjusted, that should mean that $700 in Western Europe or the US will afford you the same quality of life as the median Indian person, without you getting any additional resources such as extra meals from sympathetic passers-by or free accommodation in a shelter (because otherwise, to be 100 times richer you would have to have 100 units per day of these additional resources - i.e. $70,000 plus 100 meals/day plus owning low-quality accommodation for 100 people).

However, $700/year (= $1.91/day, = €1.80/day, = £1.53/day), without gifts or handouts, is not a sufficient amount of money to stay alive in the west. You would be homeless. You would starve to death. In many places, you would die of exposure in the winter without shelter. Clearly, the median person in India is better off than a dead person.

A realistic minimum amount of money to not die in the west is probably $2000-$5000/year, again without gifts or handouts, implying that to be 100 times richer than the average Indian, you have to be earning at least $200,000-$500,000 net of tax (or at least net of that portion of tax which isn't spent on things that benefit you - which at that level is almost all of it, unless you are somehow getting huge amounts of government money spent on you in particular).

The reality is that a PPP conversion factor is trying to represent a nonlinear mapping with a single straight line, and it fails badly at the extremes. But the extremes are exactly where one is getting this (misleading) factor of 100 from.
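
To make the "division test" behind the $200,000-$500,000 figure explicit, here is a minimal sketch (Python); the numbers are the illustrative assumptions from the paragraphs above, not data:

```python
# Minimal sketch of the "division test" from the comment above.
# All dollar figures are the comment's illustrative assumptions, not data.

median_indian_income = 700            # $/year, PPP-adjusted, from the chart
western_survival_cost = (2000, 5000)  # $/year, rough minimum to stay alive in the west

# The naive claim: a $70,000 earner is 100 times richer than the median Indian.
naive_ratio = 70_000 / median_indian_income
print(f"naive ratio: {naive_ratio:.0f}x")   # 100x

# The counter-claim: $700 buys survival in India, but survival in the west
# costs $2,000-$5,000, so being "100 times richer" should mean 100x the
# local cost of that same baseline.
for cost in western_survival_cost:
    print(f"if survival costs ${cost}/yr, 100x richer means ${100 * cost:,}/yr net of tax")
# -> $200,000 and $500,000, the range given above
```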

Comment by the_jaded_one on Introducing CEA's Guiding Principles · 2017-03-16T17:32:30.129Z · EA · GW

Gets almost no upvotes

Actually you got 7 upvotes and 6 downvotes; I can tell from hovering over the '1 point'.

Comment by the_jaded_one on Peter Singer no-platformed by pro-disability protestors at Canadian university · 2017-03-14T19:23:06.850Z · EA · GW

you are effectively "bundling" a high-quality post with additional content, which grants this extra content with undue attention.

A post which simply quotes a news source could be criticized as not containing anything original and therefore not worth posting. Someone has already complained that this post is superfluous since a discussion already exists on Facebook.

Actually if I had to criticize my own post I would say its weakness is that it lacks in-depth analysis and research. Unfortunately, in-depth analysis takes a lot of time...

Comment by the_jaded_one on Introducing CEA's Guiding Principles · 2017-03-11T15:24:00.951Z · EA · GW

Also, I am somewhat concerned that this comment has been downvoted so much. It's the only really substantive criticism of the article (admittedly it isn't great), and it is at -3, right at the bottom.

Near the top are several comments at +5 or something that are effectively just applause.

Comment by the_jaded_one on Introducing CEA's Guiding Principles · 2017-03-11T15:21:43.984Z · EA · GW

dangerous ideas of mass-termination of human and non-human life,

Specifically?

Comment by the_jaded_one on Peter Singer no-platformed by pro-disability protestors at Canadian university · 2017-03-08T21:39:45.633Z · EA · GW

Facebook requires that you give your real name to post an opinion, be part of the group, etc. That is certainly a serious limitation on open discussion, and this topic in particular exacerbates that problem.

Not everyone will necessarily want to comment on this issue under their real name.

Also, I presume this forum exists because someone decided that something other than Facebook is required. Are we questioning this logic in general? Or are we making a special case of this issue? Why?

But if you would be so kind as to post anything you see as particularly relevant, I would appreciate it.

Comment by the_jaded_one on Vote Pairing is a Cost-Effective Political Intervention · 2017-03-03T17:35:58.822Z · EA · GW

there's a difference between "politics is hard to predict perfectly" and "politics is impossible to predict at all".

I think there's a lot of improvement to be had in the area of "refining which direction we are pushing in".

Was there ever a well-prosecuted debate about whether EA should support Clinton over Trump, or did we just sort of stumble into it because the correct side is so obvious?

Comment by the_jaded_one on Vote Pairing is a Cost-Effective Political Intervention · 2017-03-02T21:43:57.473Z · EA · GW

2016 only one candidate had any sort of policy at all about farmed animals, so it didn't require a very extensive policy analysis to figure out who is preferable.

Beware of unintended consequences, though. The path from "Nice things are written about X on a candidate's promotional materials" to "Overall, X improved" is a very circuitous one in human politics.

The same is true for other EA focus areas.

A lot of people in EA seem to assume, without a thorough argument, that direct support for certain political tribes is good for all EA causes. I would like to see some effort put into something like a quasi-realistic simulation of human political processes to back up claims like this. (Not that I am demanding specific evidence before I will believe these claims - just that it would be a good idea.) Real-world human politicking seems to be full of crucial considerations.

I also feel like when we talk about human political issues, we lack an understanding of, or don't bother to think about, the causal dynamics behind how politics works in humans. I am specifically talking about things like signalling.

Comment by the_jaded_one on The Moral Obligation to Organize · 2017-02-23T20:01:05.225Z · EA · GW

I think we can push issues towards being less political by reframing them and persuading others to reframe them.

Abortion, gun control, tax rate - these issues are so central to the left-right political divide that they will never be depoliticized.

Climate change is not like them IMO. I think it can be pushed away from the political left-right axis if it can be reframed so that doing something about climate change is no longer seen as supporting left-wing ideas about big government. There is an angle about efficiency, fairness & cutting red tape (carbon tax) and another angle about innovation and industry (e.g. Tesla). I think we should be pushing those very hard.

Comment by the_jaded_one on The Moral Obligation to Organize · 2017-02-19T10:39:42.700Z · EA · GW

Political organizing is a highly accessible way for many EAs to have a potentially high impact. Many of us are doing it already. We propose that as a community we recognize it more formally as way to do good within an EA framework

I agree that EAs should look much more broadly at ways to do good, but I feel like doing political stuff to do good is a trap, or at least is full of traps.

Why do humans have politics? Why don't we just fire all the politicians and have a professional civil service that just does what's good?

  • Because people have different goals or values, and if a powerful group ends up in control of the apparatus of the state and pushes its agenda very hard and pisses a lot of people off, it is better to have that group ousted in an election than in a civil war.

But the takeaway is that politics is the arena in which we discuss ideas where different people in our societies disagree on what counts as good, and as a result it is a somewhat toxic arena with relatively poor intellectual standards. It strongly resists good decision-making and good quality debate, and strongly encourages rhetoric. EA needs to take sides in this like I need more holes in my head.

I think it would be fruitful for EA to get involved in politics, but not by taking sides; I get the impression that the best thing EAs can do is to try to find Pareto improvements that help both sides, and to make political issues nonpolitical by de-ideologizing them and finding solutions that make everyone happy and make the world a better place.

Take a leaf out of Elon Musk's book. The right wing in the USA is engaging in some pretty crazy irrationality and science denial about global warming. Many people might see this as an opportunity to score points against the right, but global warming will not be solved by political hot air; it will be solved by making fossil fuels economically marginal or nonviable in most applications. In particular, we need to reduce car-related emissions to near zero. So Musk goes and builds fast, sexy, macho cars in factories in the USA which provide tens of thousands of manufacturing jobs for blue-collar US workers, and emphasizes them as innovative, forward-looking and pro-US. Our new right-wing president is lapping it up. This is what effective altruism in politics looks like: the rhetoric ("look at these sexy, innovative US-made cars!") is in service of the goal (eliminating gasoline cars and therefore eventually CO2 emissions), not the other way around.

And if you want to see the opposite, go look at this. People are cancelling their Tesla orders because Musk is "acting as a conduit to the rise of white nationalism and fascism in the United States". Musk has an actual solution to a serious problem, and people on the political left want to destroy it because it doesn't conform perfectly to their political ideology. Did these people stop to think about whether this nascent boycott makes sense from a consequentialist perspective? As in, "let's delay the solution to a pressing global problem in order to mildly inconvenience our political enemy"?

Collaborating with existing social justice movements

I would personally like to see EA become more like Elon Musk and less like Buzzfeed. The Trump administration and movement is a bit like a screaming toddler; it's much easier to deal with by distracting it with its favorite toys ("Macho! Innovative! Made in the US!") than by trying to start an argument with it. How can we find ways to persuade the Trump administration - or any other popular right-wing regime - that doing good is in its interest and conforms to its ideology? How can we sound right-wing enough that the political right (who currently hold all the legislative power in the US) practically think they thought of our ideas themselves?

Comment by the_jaded_one on The Moral Obligation to Organize · 2017-02-18T18:32:05.526Z · EA · GW

I think we should align with the left on climate change, for example.

re: climate change, it would be really nice if we could persuade the political right (and left) that climate change is apolitical and that tackling it is just generally sensible, in the way that building roads is apolitical and just generally sensible.

Technology is on our side here: electric cars are going mainstream, and wind and solar are getting better. I believe that we have now entered a regime where climate change will fix itself as humanity naturally switches over to clean energy, and the best thing that politics can do is get out of the way.

Comment by the_jaded_one on Collaborators Wanted: Could war disrupt EA orgs in the US or UK in the next 10 years? · 2017-02-01T21:43:37.985Z · EA · GW

What disruptions are EAs especially well placed to mitigate?

I like this one. If you plan to do good in an uncertain future, it makes sense to take advantage of altruism's risk neutrality and put a lot of effort into scenarios that are reasonably likely but also favour your own impact.

In the event of a major disruption or catastrophe, such as a war or negative political event in the EA heartland, global health work would suddenly become pretty useless - no-one would have the will or means to help people who are distant in space. But we would suddenly have much more leverage to help people who are distant in time, by trying to positively affect any recovery of civilisation. That could be by making it happen sooner, or by giving it some form of aid that is cheap for us. Robust preservation of information is a good idea. If there were a major disaster that destroyed the internet and most servers, and then a long period of civilisational downtime, it might make sense to try to save and distribute key information - for example Wikipedia, certain key books, sites, courses, etc.

There might also be attempts to distort history in a very thorough way. Perhaps steps can be taken against this.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-02-01T17:41:25.136Z · EA · GW

some of OpenPhil are probably reading it

...

The fix is to email them a link, and to try to give arguments that you think they would appreciate as input for how they could improve their activities.

Those arguments are in the post.

I am writing under a pseudonym so I don't have an easy way of emailing them without it going to their spam folder. I have sent an email pointing them to the post, though.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-02-01T17:33:24.127Z · EA · GW

If the issue is that the charities are in fact ineffective, then you haven't provided any direct evidence of this, only the indirect point that political charities are often ineffective.

Where is the direct evidence that Cosecha is highly effective?

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-02-01T17:13:26.413Z · EA · GW

I don't think this is the right way to model marginal probability, to put it lightly. :)

Well really you're trying to look at d/dx P(Hillary Win|spend x), and one way to do that is to model that as a linear function. More realistically it is something like a sigmoid.

For some numbers, see this

So if we assume:

P(Hillary Win | total spend $300M) = 25%
P(Hillary Win | total spend $3Bn) = 75%

Then the average value of d/dx P(Hillary Win | spend x) over that range is going to be $2700M/0.5 = $5.4Bn per unit of probability. Most likely the value of the derivative at the actual spending level isn't too far off that average.

This isn't too far from $1000/vote x 3 million votes = $3Bn.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-02-01T12:01:51.584Z · EA · GW

Well if we go with $1000 per vote and we need to shift 3 million votes, that's $3bn. Now let's map $3bn to, say, a 25% increased probability of winning, under a reasonable pre-election distribution.

Then you can think of the election as costing $12bn for a benefit of $4tn, which is a factor of roughly 330.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-02-01T07:59:08.769Z · EA · GW

Hillary outspent Trump by a factor of 2 and lost by a large margin, so it's something of a questionable decision.

EDIT: I think a more realistic model might go something like this; you can tweak the figures to shift a factor of 2-3 but not much more:

P(Hillary Win | total spend $300M) = 25%
P(Hillary Win | total spend $3Bn) = 75%

Then the average value of d/dx P(Hillary Win | spend x) over that range is going to be $2700M/0.5 = $5.4Bn per unit of probability. Most likely the value of the derivative at the actual spending level isn't too far off that average.

This isn't too far from $1000/vote x 3 million votes = $3Bn.

So we could look at something like $5Bn/unit probability at the margin, or each $1 increasing the probability of Hillary winning by 1/5,000,000,000

You could probably do a very similar analysis for any political election at roughly this level of existing funding.

We can take a first approximation to the expected disutility of a very bad Trump presidency at $4Tn, or one full year's GDP. This implies a very confident belief in an extremely negative outcome from a Trump presidency.

Is it competitive with global poverty? Well, it seems to be on a fairly similar level, since for $5000 you can save a life that is typically valued at something like $5-$25M, which is a similar "rate of return" to paying $5Bn for $4Tn via the Clinton campaign.

Is this competitive with MIRI or the other AI risk orgs? Probably not, but your beliefs about AI risk factor into this quite a lot.
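
For concreteness, here is a minimal sketch of the comparison above (Python); every figure in it is one of the assumptions stated in this comment, not data:

```python
# Rough sketch of the back-of-the-envelope comparison above.
# Every input is an assumption from the comment, not measured data.

spend_low, p_low = 300e6, 0.25   # assumed P(Hillary win | total spend $300M)
spend_high, p_high = 3e9, 0.75   # assumed P(Hillary win | total spend $3Bn)

# Average marginal cost of one unit of win-probability, treating the
# probability as roughly linear in spend between those two points.
cost_per_unit_prob = (spend_high - spend_low) / (p_high - p_low)
print(f"~${cost_per_unit_prob / 1e9:.1f}Bn per unit of probability")          # ~$5.4Bn

# Assumed disutility of a very bad presidency: $4Tn (one year of US GDP).
disutility = 4e12
print(f"political donation: ~{disutility / cost_per_unit_prob:.0f}x return")  # roughly 740x

# Comparison point from the comment: ~$5,000 to save a life valued at $5-25M.
print(f"global health: ~{5e6 / 5000:.0f}x to {25e6 / 5000:.0f}x return")      # 1,000x to 5,000x
```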

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T20:17:39.594Z · EA · GW

I don't think that can function as an argument that the recommendation shouldn't have been made in the first place

I agree, and I didn't mention that document or my degree of trust in it.

I feel your overall engagement here hasn't been very productive.

I suppose it depends on what you want to produce. If debates were predictably productive, I presume people would just update without even having to have a debate.

it feels like you're reaching for whatever counterarguments you can think of, without considering whether someone who disagreed with you would have an immediate response

What counterarguments is one supposed to make, other than the ones one thinks of? I suppose the alternative is to not make a counterargument, or start a debate with all possible lines of play fully worked out and prepared? A high standard, to be sure. Sometimes one doesn't correctly anticipate the actual responses. Is there some tax on number of comments or responses? I mean this is valid to an extent, if someone is making really dumb arguments, but then again sometimes one has to ask the emperor why he isn't wearing any clothes.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T19:57:56.659Z · EA · GW

Can you elaborate?

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T18:22:06.755Z · EA · GW

Comments from anyone involved in Open Philanthropy are welcome here.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T18:18:23.219Z · EA · GW

policy expertise in a particular field

What is policy expertise in the field of deciding that it is a good idea to encourage illegal immigration? I feel like we are (mis)using words here to make some extremely dodgy inferences. Chloe worked for the ACLU and a law firm, focusing on litigating police misconduct and aiming to reduce incarceration, and then for Open Phil. This doesn't IMO qualify her to decide that increasing legal and illegal immigration is a good idea, and doesn't endow her with expertise on that question.

Is your claim that Chloe Cockburn has failed to consider policy ideas associated with the right-wing, and thus has not done her due diligence to know that what she recommends is actually the best course? If so, what is your evidence for this claim?

Well what is your evidence that she has done her due diligence to know that what she recommends is actually the best course?

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T17:53:47.967Z · EA · GW

I am confused. If you took it as given, why bother talking about whether Alliance for Safety and Justice and Cosecha are good charities?

Well, I am free to both assert that it is a sensible background assumption that it is not usually good for EA to do highly political things, and also argue a few relevant special cases of highly political EA things that aren't good, without taking on the bigger task of specifying and defending my assumption. But I offer Robin Hanson's post as some degree of defence.

I expect that they would become culture-war issues as soon as they become more prominent. Do you disagree?

I disagree strongly for synthetic meat: it will be an open-and-shut case once its quality surpasses real meat. I think wild animal suffering is emotive and will generate debate, but I don't think it will split left-right, mostly because I can't even decide which of {left, right} maps to {wild-suffering-bad, wild-suffering-OK}.

Or do you think that the appropriate role of EA is to elevate issues into culture-war prominence and then step aside?

Well, hopefully EA can elevate issues that are approximately Pareto improvements from irrelevance to broad consensus, skipping any kind of war.

that's a tribal war between economists and epidemiologists?

What?

Or do you mean that they shouldn't take sides in issues associated with the American left and right, even if they sincerely believe that one of those issues is the best way to improve the world?

yes, this. And if they do believe that one particular side of the US/EU culture war is the most important cause, then they should provide rock-solid evidence that it is - evidence that deals with the best arguments from the other side, as well as with the argument from the marginal utility of extra effort, which is critically missing in the OP.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T16:48:02.151Z · EA · GW

More generally, you keep trying to frame your points as politically neutral "meta" considerations but it definitely feels like you have an axe to grind against the activist left which motivates a lot of what you're saying.

Well if EA is funding the activist left, justifying it by saying that a "trusted expert" (who just happens to be a leftist activist!) said it was a good idea, what exactly do you expect me to do?

And if people who disagree with leftist activism aren't allowed to bring up "meta" considerations when those considerations are inconvenient for leftist activism, then who is going to do it?

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T16:38:12.892Z · EA · GW

if your argument were taken to its endpoint, we ought not trust GiveWell because its employees sometimes talk about how great malaria nets and deworming are on social media.

I don't trust them; to the extent that I endorse these causes, I trust their arguments (having read them) and data, and I trust the implicit critical process that has failed to come up with reasons why deworming isn't that good (to the extent that it hasn't).

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T16:35:12.173Z · EA · GW

reducing deportations of undocumented immigrants would reduce incarceration (through reducing the number of people in ICE detention)

That is true, but it is a politicized inference. You could also reduce the number of people in ICE detention at any given time by deporting them much more quickly. Or you could reduce the number of undocumented immigrants by making it harder for them to get in in the first place, for example by building a large wall on the southern US border.

So I would characterize this as a politically biased opinion first and foremost. It's not even an opinion that requires being informed - it's obvious that you could reduce incarceration by releasing people from detention and just letting them have whatever they were trying to illegally take. You don't need a law degree to make this inference, but you do need a political slant to claim that it's a good idea.

And the totality of policies espoused by people such as Chloe Cockburn would be to flood the US with even more immigrants from poorer countries, not just to grant legal status to existing ones. This is entryism, and it is a highly political move that many people are deeply opposed to because they see it as part of a plan to wipe them and their culture out. I don't think that's a good fit for an EA cause - even if you think it's a good idea, it makes sense to separate it from EA.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T16:18:32.175Z · EA · GW

when you just as easily could have addressed it to OpenPhil

This is true - and I would say that a lot of the same questions could be directed to OpenPhil.

process that minimised the influence of my personal opinions

But there should be some ultimate sanity checking on that process; if some process ends up recommending something that isn't really a good recommendation, then is it a good process?

it can save you from wasting time going down rabbit holes.

Yes, that's true, and I would consider it a pro - but one that is outweighed by other factors.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T15:55:42.780Z · EA · GW

I think dividing these three claims more clearly would make it easier for me to follow your argument: effective altruist charity suggestion lists should not endorse political charities.

This is a rather large topic; I don't think it would be wise to try to specify and defend that abstract claim in the same post as talking about a specific situation. I take it as given, at least here. Perhaps I will do a follow-up, but I think it would be hard to do the topic justice in, say, 5-10 hours, which is what I realistically have.

Of course, an identical critique applies to animal welfare charities: many, many traditionalists/conservatives/non-social-justice-people are turned off by animal welfare activism.

Animal welfare activism is controversial, but it hasn't been subsumed into the culture war in the way immigration, race and social justice have. Some parts of animal welfare activism, such as veganism, are left-associated, but other parts, like wild animal suffering and synthetic meat, most certainly are not. So in my mind, animal welfare activism is suitable for EA involvement.

And xrisk charities tend to turn off, to a first approximation, everyone.

The idea that AI risk is off-putting is becoming less true over time, but EA should not be aiming to appeal to everyone. Rather, I think that EA should be aiming not to take sides in tribal wars.

Is your belief that it is morally wrong to ever specifically help one group because you believe they are worse off than other groups?

No, but in the specific case of the US culture war I think it is a bad idea to move in the "Black lives matter" direction. In the case of the tradeoff between incarceration and public safety, I don't think there is any good reason to make it into a race issue, because that immediately sends the signal that you are interested in raising the status and outcomes of your "favorite" race at the cost of other races. This is a tradeoff situation where benefits targeted at a specific group will harm people who are not from that group in a fairly direct way.

On the other hand if GiveDirectly gives cash to women in some third world country, and that cash comes from voluntary payments in the west, it is going to be an improvement for everyone in the receiving community as their local economy is stimulated.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T15:03:51.253Z · EA · GW

Informed opinions can still be biased, and we are being asked to "trust" her.

I am uncertain why someone would choose to figure out what other people's area of expertise is from Twitter.

Well I am worried about political bias in EA. Her political opinions are supremely relevant.

On a strictly legal question such as "In situation X, does law Y apply?" I would definitely trust her more than I would trust myself. But that is not the question being asked. The question being asked is "Will the action of funding Cosecha reduce incarceration while maintaining public safety?", with the follow-up question of "Or is this about increasing illegal immigration by making it harder to deport illegals, opposing Trump and generally supporting left-wing causes?"

I don't think that she can claim special knowledge or lack of bias in answering those questions. I think it's hard for anyone to.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T13:36:47.633Z · EA · GW

One way to resolve our initial skepticism would be to have a trusted expert in the field

And in what field is Chloe Cockburn a "trusted expert"?

If we go by her Twitter, we might say something like "she is an expert left-wing, highly political, anti-Trump, pro-immigration activist"

Does that seem like a reasonable characterization of Chloe Cockburn's expertise to you?

Characterizing her as "Trusted" seems pretty dishonest in this context. Imagine someone who has problems with EA and Cosecha, for example because they were worried about political bias in EA. Now imagine that they got to know her opinions and leanings, e.g. on her twitter. They wouldn't "trust" her to make accurate calls about the effectiveness of donations to a left-wing, anti-Trump activist cause, because she is a left-wing anti-Trump activist. She is almost as biased as it is possible to be on this issue, the exact opposite of the kind of person whose opinion you should trust. Of course, she may have good arguments since those can come from biased people, but she is being touted as a "trusted expert", not a biased expert with strong arguments, so that is what I am responding to.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T13:10:59.358Z · EA · GW

suggesting that they provide some additional disclaimers about the nature of the recommendation.

I most certainly wouldn't suggest that. I would suggest that they cease recommending both of these organisations, with the caveat that Cosecha is the worse of the two and first in line to be dropped.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T12:59:45.601Z · EA · GW

it seems you could get the same results by emailing the 80K team

Given that the response given by 80,000 Hours here is

[we] don't really have independent views or goals on any of these things. We're just syndicating content

I am extremely glad that I didn't email them and try to keep this private. I believe that 80,000 Hours should take responsibility for recommendations that appear on its site, with the unavoidable implicit seal of approval that that confers.

Comment by the_jaded_one on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T10:37:59.244Z · EA · GW

based on a misconception about how we produced the list and our motivations.

I would disagree; to me it seems irrelevant whether 80,000 Hours is "just syndicating content", or whether your organisation has a "direct view or goal".

It's on your website, as a recommendation. If it's a bad recommendation, it's your problem.

Comment by the_jaded_one on EAG 2017 Boston Update: moved to June · 2017-01-25T21:14:34.134Z · EA · GW

I would like to post an article but I only have 2 karma, this website requires that I have 5 karma in order to post an article. I have been a member for almost a year, though I mostly lurk. I have an account on LessWrong as The_Jaded_One where I post more frequently.

So... can anyone be altruistic and spare a few upvotes?

Comment by the_jaded_one on Rational Politics Project · 2017-01-09T17:50:57.074Z · EA · GW

You claim this is non-partisan, yet you make highly partisan claims,

I made a similar point on the LW version of this post. I think it is going to be hard to fix politics, and the links between the object level and the meta level, which are especially strong in politics, are close to the root cause of why politics is so hard to be rational about.

But I feel like it might be useful to poke around a bit at that link.

Comment by the_jaded_one on Two Strange Things About AI Safety Policy · 2016-10-15T12:13:15.277Z · EA · GW

I have heard about retreats and closed conferences/workshops to get people together; I would imagine something like that would be better from the point of view that Eliezer is coming from.

In order for people to have useful conversations where genuine reasoning and thinking is done, they have to actually meet each other.

Comment by the_jaded_one on A review of the safety & efficacy of genetically engineered mosquitoes · 2016-02-19T11:18:51.476Z · EA · GW

How feasible is it to use a gene drive coupled with a "genetic time bomb" to completely wipe out a mosquito species? By a "genetic time bomb", I mean some gene that kills only after, e.g., 10 generations.

If you could assign a very high probability to completely wiping out a species (or all species) of mosquito, then worries about reduced acquired immunity could be put aside.