Comments

Comment by cflexman on Two Strange Things About AI Safety Policy · 2016-09-28T22:30:29.342Z · score: 13 (12 votes) · EA · GW

I don't think the issue is that we don't have any people willing to be radicals and lose credibility. I think the issue is that radicals on a certain issue tend to also mar the reputations of their more level-headed counterparts. Weak men are superweapons, and groups like PETA and Greenpeace and Westboro Baptist Church seem to have attached lasting stigma to their causes because people's pattern-matching minds associate their entire movement with the worst example.

Since, as you point out, researchers specifically grow resentful, it seems really important to make sure radicals don't tip the balance backward just as the field of AI safety is starting to grow more respectable in the minds of policymakers and researchers.

Comment by cflexman on Thread for discussing critical review of Doing Good Better in the London Review of Books · 2015-09-21T05:50:47.729Z · score: 10 (10 votes) · EA · GW

I really want to pull good insights out of this to improve the movement. However, the only thing I'm really getting is that we should think more about systemic change, which a) already seems to be the direction we're moving in and b) doesn't seem amenable to much more focus than we're already liable to give it, i.e., we should devote some resources but not very many. My first reaction was that maybe Doing Good Better should have spent a little time explaining why systemic change is difficult, but it's a book and had to make sacrifices when choosing what to focus on, so I don't think that's even a possible improvement. I think the best thing to come from this is your realization of potential coordination problems.

While I encourage well-thought-out criticism of the movement and different viewpoints for us to build on, I can't help but echo kbog's sentiment that this seems a bit too continental to learn from. The feeling I get is that this is one of many critiques I've encountered whose authors are vaguely uncomfortable with our notions and then paint a gestalt that can be slowly and assiduously associated with various negatives. There's a lot of interplay between forest and trees here, but it's really difficult to communicate when one side is trying to work with concrete claims and the other with associations.

In sum, I think on most of these points (individualism, demandingness, systemic change, x-risk) we are pretty aware of the risky edges we walk along, and can't really improve our safety margins much without violating our own tenets.

Comment by cflexman on A response to Matthews on AI Risk · 2015-08-13T16:08:54.824Z · score: 5 (5 votes) · EA · GW

I think it's very good that Matthews brought this point up, so the movement can make sure we remain tolerant and inclusive of people who are mostly on our side but differ on a few small points. This applies especially to those focused on x-risk, whom he finds most aggressive, but really I think it should apply to all of us.

That being said, I wish he had himself refrained from being divisive with allegations that x-risk is self-serving for those in CS. Your point about CS concentrators being "damned if you do, damned if you don't" is great. Similarly, the point (which you made on Facebook?) about many people converting from other areas into computer science once they realize the risk is a VERY strong counterargument to his. But more generally, he seems to be applying asymmetric standards here. The x-risk crowd no more deserves his label of biased and self-serving than the animal rights crowd or the global poverty crowd; many of the people in those subsets also began there, so if we wanted, we could just as easily label them self-serving for promoting their favored causes. Ad hominem is a dangerous road to go down, and I wish he would refrain from critiquing the people and stick to critiquing the arguments (which actually promotes good discussion from people like you and Scott Alexander regarding his pseudo-probability calculation, even if we've been down this road before).

Comment by cflexman on [cross-post] Does donation matching work? · 2015-04-10T08:43:12.145Z · score: 0 (0 votes) · EA · GW

If big donors feel better and donate more, I'm not convinced that is a neutral thing. If running a matching-donation drive doesn't get more donations from the matchees but does pull more money from the matchers, that may have a fairly large effect. I have certainly considered donating more money than I otherwise would have on hearing it could be used to run a matching fundraiser. If matching drives truly don't attract more matchee funds, then I suppose it is epistemically unvirtuous to ask matchers to donate, since the ask implies matching has an effect. Nonetheless, a mechanism like this for getting matchers to donate more seems not too different from the original deal, where it seems like the matchees are kind of being deluded into giving more anyway.

Comment by cflexman on Motte-and-Bailey Explanations of EA · 2015-02-21T19:41:07.559Z · score: 2 (2 votes) · EA · GW

I find another motte-and-bailey situation more striking: the motte of "make your donations count by going to the most effective place" and the bailey of "also give all your money!"

I personally know a lot of people who have been turned off of effective altruism by the bailey here; some also seem to disagree with the motte, but they are far fewer. In discussing how to present EA to those we know, I think in many circumstances I'd recommend sticking with the motte, at least until you know they are fully on board with it, and perhaps letting them come up with the bailey on their own.

Comment by cflexman on Preventing human extinction · 2015-01-10T19:29:39.551Z · score: 0 (0 votes) · EA · GW

Has anyone done an EA evaluation of the expected value of the Sentinel Mission (formerly B612)?

Comment by cflexman on You have a set amount of "weirdness points". Spend them wisely. · 2014-11-28T06:32:38.886Z · score: 0 (0 votes) · EA · GW

I also find that it's frequently most helpful to be only a little weird in public, but once you have someone's confidence you can start being significantly more weird with them, because at that point they can't just write you off. You get most of the best of both worlds.

Comment by cflexman on Introduce Yourself · 2014-11-17T02:17:56.676Z · score: 0 (0 votes) · EA · GW

I'm a physics undergrad who is very interested in quantum computing. I'd be interested to hear thoughts on it from someone who is a rationalist; if you would email me at Connor_Flexman AT brown DOT edu, it would be wildly helpful.

Comment by cflexman on The Economist on "extreme altruism" · 2014-09-27T01:00:51.337Z · score: 2 (2 votes) · EA · GW

I've heard from several of my friends that EA is frequently introduced to them in a way that seems elitist and moralizing. I was wondering whether there is any data on how many people learned about EA through which sources. One possibility that came up was running TV/radio/internet ads for it (in a gentler, non-elitist manner), in the hope that the outreach and newly recruited donors would more than pay back the original cost. Thoughts?