Comment by telofy on Is visiting North Korea effective? · 2019-04-06T12:48:39.522Z · score: 11 (4 votes) · EA · GW

I’ve written a comparative article on plausible interventions for human rights in North Korea. The activists I interviewed had already considered running campaigns to discourage travel to North Korea because tourism is an important source of foreign currency for the government. (It can force its citizens to stage North Korean life for tourists while paying them in their worthless national currency, so it makes a large profit on tourism.)

To my knowledge, these activists never pursued that strategy because it may be an attention hazard and thus actually increase tourism, and because it might strain relationships with organizations that think tourists may show North Koreans that other ways of life are possible. But I find that implausible: almost no one is allowed to travel within North Korea (and tourists are even more tightly controlled and restricted), so it is always the same few, most loyal North Koreans who come into contact with tourists.

But I discuss other, more promising interventions in the article. For more detailed, reliable, and up-to-date information, you can get in touch with, e.g., Saram, as I’m not myself active in the space.

Comment by telofy on Tool recommendation: Polar personal knowledge repository · 2019-04-02T12:12:38.241Z · score: 3 (2 votes) · EA · GW

Very promising! They have plans to create a mobile client, and maybe the web version will also eventually support HTML and ebook formats. Looking forward to that!

Comment by telofy on Potential funding opportunity for woman-led EA organization · 2019-03-15T17:25:16.938Z · score: 2 (1 votes) · EA · GW

CFAR and Encompass (https://encompassmovement.org/) might also fit the bill? Maybe also some (other) EA meta-charities whose current team configuration I don't remember well enough.

Comment by telofy on Three Biases That Made Me Believe in AI Risk · 2019-02-22T12:17:40.725Z · score: 3 (2 votes) · EA · GW

I’d particularly appreciate an updated version of “Astronomical waste, astronomical schmaste” that disentangles the astronomical waste argument from arguments for the importance of AI safety. The current version is hard for me to engage with because I don’t go along with the astronomical waste argument at all but am still convinced that a lot of projects under the umbrella of AI safety are top priorities – because extinction is considered bad by a wide variety of moral systems irrespective of astronomical waste, and particularly in order to avert s-risks, which are also considered bad by all moral systems I have a grasp on.

Comment by telofy on Small animals have enormous brains for their size · 2019-02-22T12:07:06.020Z · score: 5 (2 votes) · EA · GW

This is fascinating! I’ve heard (though it may well be bunk) that intelligence in humans is somewhat correlated with brain size but that brain size is limited by the size of the birth canal. (Which made me think that C-sections should lead to smarter people in the long run.) But if there’s still so much room for optimization left without changing brain size, does that merely indicate that the changes would take too many mutations to be likely to happen (sort of like why we still have our weird eye architecture while other animals have more straightforward eyes), or that a lot of human thinking happens at a lower abstraction level than that of the neuron, so that, e.g., whole brain emulation at the neuronal level would be destined to fail?

Comment by telofy on are values a potential obstacle in scaling the EA movement? · 2019-01-04T10:09:21.850Z · score: 2 (1 votes) · EA · GW

Seminal for me has been Owen Cotton-Barratt’s paper “How valuable is movement growth?” I therefore welcome the shift toward very careful growth, if any, that has happened over the past years. Today I think of the EA community as a startup of sorts that tries to hire slowly and selects staff carefully based on culture fit, character, commitment, etc.

Comment by telofy on Announcing the EA donation swap system · 2018-12-22T13:00:44.731Z · score: 3 (2 votes) · EA · GW

Hi! Thank you! That sounds good (“Charity X” would be a free text field?), but I don’t know whether there are other problems it doesn’t address. To guard against that, an FAQ entry explaining the problem would be best. Generally that’ll be needed because this is probably unintuitive for many people (like me), so even if they have the information about the swap counterfactual, they may not be able to use it optimally without an explanation.

Comment by telofy on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-22T12:40:08.482Z · score: 5 (5 votes) · EA · GW

Hmm, yeah, curious as well. Maybe it’s because I link long essays without summarizing them, so people are left wondering whether the essays are relevant enough to be worth reading.

But apart from the link to Simon’s reply, Kaj’s comment is much better than mine anyway.

Comment by telofy on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-18T15:00:08.955Z · score: 4 (17 votes) · EA · GW

Wow! Thank you again for another amazing overview! :-D

With regard to the FRI section: Here is a reply to Toby Ord by Simon Knutsson and another piece that seems related. (And by “suffering focus,” people are referring to something much broader than NU, which may be true of some CUs too.)

Comment by telofy on EA Global Lightning Talks (San Francisco 2018) · 2018-12-03T17:36:31.293Z · score: 2 (1 votes) · EA · GW

Very interesting talks – thank you! For me, especially Philip Trammell’s talk.

Comment by telofy on Announcing the EA donation swap system · 2018-12-02T12:18:15.217Z · score: 3 (2 votes) · EA · GW

Thank you for creating this! I want to understand some possible risks to my value system better. So here’s one scenario that I’ve been thinking about.

I realize that it’s a trust system, but if Donor A trusts Donor B on something that Donor A doesn’t understand clearly enough to even ask about, and that is so unremarkable to Donor B that they see no more reason to tell Donor A about it than about their espresso preferences, then no one is really at fault if they miscommunicate.

Say Donor A:

  1. is neutral between Rethink Priorities (RP) and the Against Malaria Foundation (AMF) (but Donor B doesn’t know this),
  2. can get tax exemption for a donation to RP but not AMF, and
  3. wants to donate $2k.

And Donor B:

  1. values a dollar to RP more than 100 times as highly as one to AMF (but Donor A doesn’t know this),
  2. can get tax exemption for a donation to AMF but not RP, and
  3. wants to donate $1k.

Without donation swap:

  1. Donor A is perfectly happy, and donates $2k to RP because of the tax exemption as tie breaker (but even if they split the donation 50:50 or donate with 50% probability, this case is still problematic).
  2. Donor B is a bit sad but donates, say, $850 to RP – which, without the tax break, costs them the same as the full $1k would have with it.
  3. In effect: RP gains $2,850 and AMF gains $0. Both donors are reasonably happy with this result, the only wrinkle being the taxes.

But with donation swap:

  1. Donor A loves helping their fellow EAs and so offers a swap even though they don’t personally need it.
  2. Donor B enthusiastically takes them up on the offer to save the taxes, donates $1k to AMF, and Donor A donates $1k to RP. Later, Donor A donates their remaining $1k to RP.
  3. In effect: RP gains $2k and AMF gains $1k. Slightly positive for Donor A but big loss for Donor B.

This seems like a plausible scenario to me but there are other scenarios that are less extreme but still very detrimental to one side and possibly even harder to spot.

So am I overlooking something that alleviates this worry or do donors have to know (commit) and be transparent about where they will donate if no swap happens, in order for the other party to know whether they should take them up on the offer?
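
To make the scenario’s arithmetic concrete, here’s a minimal sketch (hypothetical by construction): the 15% tax rate is my assumption, reverse-engineered from the $850 figure, and Donor B’s 100:1 valuation is taken literally from the example above.

```python
# Toy model of the scenario above. TAX_RATE is an assumption: it's what
# makes $850 without the exemption cost Donor B the same as $1k with it.
TAX_RATE = 0.15

def outcome(rp, amf):
    """Totals received plus each donor's subjective value of the result:
    Donor A is neutral between RP and AMF; Donor B values a dollar to RP
    100 times as highly as a dollar to AMF."""
    return {"RP": rp, "AMF": amf, "A_value": rp + amf, "B_value": 100 * rp + amf}

# Without swap: A gives their $2k to RP; B scales their $1k down to $850
# because their RP donation isn't tax-deductible.
no_swap = outcome(rp=2000 + 1000 * (1 - TAX_RATE), amf=0)

# With swap: B gives $1k to AMF (tax-exempt), A gives $1k to RP in exchange,
# and A's remaining $1k also goes to RP.
swap = outcome(rp=2000, amf=1000)

print(no_swap)  # {'RP': 2850.0, 'AMF': 0, 'A_value': 2850.0, 'B_value': 285000.0}
print(swap)     # {'RP': 2000, 'AMF': 1000, 'A_value': 3000, 'B_value': 201000}
```

By Donor B’s lights, the swap turns 285,000 value points into 201,000 – the big loss above – while Donor A only gains the redirected $150 of tax savings.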

Comment by telofy on Which World Gets Saved · 2018-11-11T13:05:10.047Z · score: 5 (4 votes) · EA · GW

Very interesting! This strikes me as a particular type of mission hedging, right?

Comment by telofy on Announcing the EA Angel Group - Seeking EAs with the Time and Money to Evaluate and Fund Early-Stage Grants · 2018-10-18T19:24:19.019Z · score: 2 (2 votes) · EA · GW

I'd love to subscribe to a blog where you publish what grants you've recommended. Are you planning to run something like that?

Comment by telofy on Current Thinking on Prioritization 2018 · 2018-09-02T16:27:50.387Z · score: 0 (0 votes) · EA · GW

Oh, cool! I'm reading that study at the moment. I'll be able to say more once I'm through. Then I'll turn to your article. Sounds interesting!

Comment by telofy on A Critical Perspective on Maximizing Happiness · 2018-08-02T06:05:37.754Z · score: 7 (7 votes) · EA · GW

Thank you for starting that discussion. Some resources that come to mind that should be relevant here are:

  • Lukas Gloor’s concept of Tranquilism,
  • different types of happiness (I think I heard them explained in a talk by Michael Plant), and
  • the case for the relatively greater moral urgency and robustness of suffering minimization over happiness maximization, i.e., a bit of a focus on suffering.

Comment by telofy on Announcing PriorityWiki: A Cause Prioritization Wiki · 2018-06-20T11:02:22.218Z · score: 2 (2 votes) · EA · GW

I’m against it. ;-)

Just kidding. I think monopolies and competition are bundles of advantages and disadvantages that we can also combine differently. Competition comes with duplication of effort, sometimes with sabotaging the other rather than improving oneself, and with some other problems. A monopoly would come with the local-optima problem you mentioned. But we can also acknowledge (as we do in many other fields) that we don’t know how to run the best wiki and have different projects try out different plausible strategies – not out of self-interest but for the value of information from the experiment. The projects could then work together, automatically synchronize any content that can be synchronized, etc. First, though, we’d need meaningful differences between the projects that are worth testing, e.g., restrictive vs. open access.

Comment by telofy on Announcing PriorityWiki: A Cause Prioritization Wiki · 2018-06-20T10:52:50.979Z · score: 0 (0 votes) · EA · GW

That would be an immensely valuable meta problem to solve!

Then maybe we can have a wiki which reaches "meme status".

On a potentially less serious note, I wonder if one could make sure that a wiki remains popular by adding a closed section to it that documents particular achievements from OMFCT the way Know Your Meme does. xD

Comment by telofy on Announcing PriorityWiki: A Cause Prioritization Wiki · 2018-06-19T15:33:59.753Z · score: 16 (15 votes) · EA · GW

Sweet! I hope it’ll become a great resource! Are you planning to merge it with https://causeprioritization.org/? If there are too many wikis, we’d just run into the same problem with fragmented bits of information again.

Comment by telofy on Lessons for estimating cost-effectiveness (of vaccines) more effectively · 2018-06-08T09:59:21.506Z · score: 1 (1 votes) · EA · GW

Thank you! I suspect this is going to be very helpful for me.

Comment by telofy on Introducing Charity Entrepreneurship: an Incubation and Research Program for New Charities · 2018-06-08T08:00:30.167Z · score: 1 (1 votes) · EA · GW

Awesome! Do you also have plans to assist EA founders of for-profit social enterprises (e.g., Wave)?

Comment by telofy on Current Thinking on Prioritization 2018 · 2018-03-30T08:30:39.866Z · score: 0 (0 votes) · EA · GW

Awesome, thank you!

Comment by telofy on Is Effective Altruism fundamentally flawed? · 2018-03-25T15:05:07.194Z · score: 1 (1 votes) · EA · GW

Hi Jeff!

To just briefly answer your question, “Are you concluding from this that there is not actually a single subject-of-experience”: I don’t have an intuition for what a subject-of-experience is – if it is something defined along the lines of the three characteristics of continuous person moments from my previous message, then I feel that it is meaningful but not morally relevant, but if it is defined along the lines of some sort of person essentialism then I don’t believe it exists on Occam’s razor grounds. (For the same reason, I also think that reincarnation is metaphysically meaningless because I think there is no essence to a person or a person moment besides their physical body* until shown otherwise.)

* This is imprecise but I hope it’s clear what I mean. People are also defined by their environment, culture, and whatnot.

Comment by telofy on Current Thinking on Prioritization 2018 · 2018-03-25T09:49:59.130Z · score: 0 (0 votes) · EA · GW

Cool, thank you! Have you written about direct chemical synthesis of food or can you recommend some resources to me?

Comment by telofy on Is Effective Altruism fundamentally flawed? · 2018-03-16T22:43:32.380Z · score: 1 (1 votes) · EA · GW

Argh, sorry, I haven’t had time to read through the other conversation yet, but to clarify, my prior was the other one: not that there is something linking the experiences of the five people, but that there is very little – and nothing that seems very morally relevant – that links the experiences of the one person. Generally, people talk about continuity, intentions, and memories linking the person moments of a person, such that we think of them as the same one even though all the atoms of their bodies may’ve been exchanged for different ones.

In your first reply to Michael, you indicate that the third one, memories, is important to you, but I don’t feel that they in themselves confer moral importance in this sense. What you may mean, though, is that five repeated headaches are more than five times as bad as one because of some sort of exhaustion or exasperation that sets in. I certainly feel that – in my case especially with itches – and I think I’ve read that some estimates of DALY disability weights also take that into account.

But I model that as some sort of ability of a person to “bear” some suffering, which gets worn down over time by repeated suffering without sufficient recovery in between or by too extreme suffering. That leads to a threshold that makes suffering below and above seem morally very different to me. (But I recognize several such thresholds in my moral intuitions, so I seem to be some sort of multilevel prioritarian.)

So when I imagine what it is like to suffer headaches as bad as five people suffering one headache each, I imagine them far apart with plenty of time to recover, no regularity to them, etc. I’ve had more than five headaches in my life but no connection and nothing pathological, so I don’t even need to rely on my imagination. (Having five attacks of a frequently recurring migraine must be noticeably worse.)

Comment by telofy on Is Effective Altruism fundamentally flawed? · 2018-03-15T12:32:26.563Z · score: 0 (0 votes) · EA · GW

Okay, curious. What is to you a “clear experiential sense” is just as clear or unclear to me no matter whether I think about the person moments of the same person or of different people.

It would be interesting if there’s some systematic correlation between cultural aspects and someone’s moral intuitions on this issue – say, a more collectivist culture leading to more strongly discounted aggregation and a more individualist culture leading to more linear aggregation… or something of the sort. The other person I know who has this intuition is from an Eastern European country, hence that hypothesis.

Current Thinking on Prioritization 2018

2018-03-13T19:22:20.654Z · score: 9 (9 votes)

Comment by telofy on Is Effective Altruism fundamentally flawed? · 2018-03-13T19:20:08.838Z · score: 3 (3 votes) · EA · GW

I think Brian Tomasik has addressed this briefly and Nick Bostrom at greater length.

What I’ve found most convincing (quoting myself in response to a case that hinged on the similarity of the two or many experiences):

If you don’t care much more about several very similar beings suffering than one of them suffering, then you would also not care more about them when they’re your own person moments, right? You’re extremely similar to your version from a month or several months ago, probably more similar than you are to any other person in the whole world. So if you’re suffering for just a moment, it would be no better than suffering for an hour, a day, a month, or any longer multiple of that moment. And if you’ve been happy for just a moment sufficiently recently, then close to nothing more can be done for you for a long time.

I imagine that fundamental things like that are up to the subjectivity of moral feelings – so close to the axioms, it’s hard to argue with even more fundamental axioms. But I for one have trouble empathizing with a nonaggregative axiology at least.

Comment by telofy on Cause prioritization for downside-focused value systems · 2018-02-07T14:02:59.976Z · score: 2 (2 votes) · EA · GW

Just FYI, Simon Knutsson has responded to Toby Ord.

Comment by Telofy on [deleted post] 2018-01-12T12:07:41.713Z

Thanks!

Comment by telofy on Four Organizations EAs Should Fully Fund for 2018 · 2018-01-12T12:01:32.089Z · score: 1 (1 votes) · EA · GW

Sophie and Meret will know more, but from what I’ve heard, they’re pretty much on board with it because it will shift demand toward them. I can point Sophie to this thread if you’d like a more detailed or reliable answer than mine. ;-)

Comment by Telofy on [deleted post] 2018-01-04T08:16:45.733Z

What happened to this post? Is there another place where it is being discussed? It sounds very interesting. Thanks!

Comment by telofy on Four Organizations EAs Should Fully Fund for 2018 · 2017-12-12T12:16:32.606Z · score: 11 (11 votes) · EA · GW

That’s an awesome selection! I’m also planning to support WASR in 2018 and perhaps longer, and I’m about to donate CHF 5k from my 2018 budget (for tax reasons) to their fundraiser.

I’m particularly optimistic about the field of welfare biology because it can draw on enormous resources in terms of institutions, biology and ecology research, and scientific methodology to generate breakthroughs in an area that has been greatly neglected so far. The situation may be similar to that of medicine in its early days (the 1800s or so) when the foundations for systematic inquiry into health had finally been laid and then just needed to be applied to generate invaluable new insights.

Surely many animals in the wild have net positive lives, but so do many humans around the world. I think it’s valuable to research how we can improve the well-being of humans who suffer – perhaps even to the point of having net negative lives, but not necessarily – and so I value the same even more for wild animals who are so much more numerous and still live under worse conditions at much higher rates.

There’s also a Sentience Politics initiative going on in Switzerland (automatic translation) that has a shot at banning factory farming in the whole country via a popular vote. I see this in the same reference class as, for example, the ban on battery cages in California, though on a smaller scale because of the lower population size. Import of factory-farmed products may be more difficult than in the case of California, though, which is a big plus for the initiative. And they’re also far short of their fundraising goals.

Comment by telofy on Volunteering for a non-EA charity - a write up · 2017-12-03T20:17:20.315Z · score: 0 (0 votes) · EA · GW

I'm impressed by how perceptive you are even in such an unaccustomed environment!

Comment by telofy on Cause Area: Human Rights in North Korea · 2017-12-02T21:44:47.997Z · score: 0 (0 votes) · EA · GW

I know little about it, and what I know may or may not still apply to whoever makes the decisions at the DoD today. The way I understood it, it was some sort of trade-off between reducing suffering and keeping the political situation stable (war, refugees, a complicated network of international allegiances, or the like). The activist I talked to had a particular example in mind, but for this public article we decided on just the wording you quoted. I’d rather connect you to him directly if you’d like to know more, since I don’t have a good sense of how sensitive which information is, and he of course knows much more.

Cause Area: Human Rights in North Korea

2017-11-26T14:58:10.490Z · score: 15 (15 votes)

Comment by telofy on Introducing Improving autonomy · 2017-08-14T20:21:02.267Z · score: 0 (0 votes) · EA · GW

Someone downvoted your comment without explaining it, so I upvoted to balance out. (But I suspect it was just a practical joke.)

Comment by Telofy on [deleted post] 2017-08-07T04:26:49.297Z

Awesome! Does it make a difference with which Swiss bank I’ll have a bank account? Because I haven’t signed up for one yet.

Comment by Telofy on [deleted post] 2017-08-05T15:58:34.018Z

Is there any way (any not prohibitively inefficient way) for me to use the service from Switzerland with my prospective Swiss Francs? Or for anyone outside the US for that matter? Thankies!

Comment by telofy on Fact checking comparison between trachoma surgeries and guide dogs · 2017-06-08T11:14:49.899Z · score: 0 (0 votes) · EA · GW

Thanks! In response to which point is that? I think points 5 and 6 should answer your objection, but tell me if they don’t. Truth is not at issue here (if we ignore the parenthetical at the very end, which isn’t meant to be part of my argument). I’d even say that Peter Singer deals in concepts of unusual importance and predictive power. But I think it’s important to make sure that we’re not being misunderstood in dangerous ways by valuable potential allies.

Comment by Telofy on [deleted post] 2017-06-04T21:31:12.539Z

True, the Gates Foundation is a good example of someone speeding up things that would’ve happened eventually with high probability.

I’ll ask around at FRI whether there’ve been any papers comparing these scenarios. We’ve long been discussing them, but I can’t quite point to the one paper that summarizes it all. This is also something I would love to see cowritten by a historian. The only historian EA I know is not doing any history, unfortunately.

Comment by Telofy on [deleted post] 2017-05-30T16:11:32.888Z

I’d still say that the original point stands. Developed countries had no problem taking care of their own malaria problems. Back then, they still had a few more chemicals that don’t work anymore today, but if we still had malaria in Germany today, the government would surely find a way to eliminate it. The trajectory of developing countries indicates that many will cease to be developing countries within half a century or so, and then the same logic will apply to them.

Cost-effectiveness in this cause area means getting the money in early to speed up positive developments that would likely happen anyway, just a few decades later. That saves a few decades of suffering, which is valuable, but it probably doesn’t compare to trajectory-changing interventions. Rather than increase the speed of a development that is already heading in the right direction, trajectory-changing interventions can potentially affect very long periods of the future.

That’s probably the comparison Arunbharatula is drawing where international development doesn’t look as strong as some x-risk or values spreading interventions.

Comment by Telofy on [deleted post] 2017-05-29T11:41:28.866Z

The best explanation I’ve found so far (pages 133–135).

Equity is meant in the justice sense, not in the sense of shares. It’s something that a lot of people care about inherently, and it has strong effects relevant at least to non-negative utilitarians as well, as for example in the GiveDirectly case mentioned above. (This consideration has also often cropped up in the context of village-level vs. household-level randomization in RCTs in my experience.) But I think there are more considerations of this sort that should also be included once we refine the model (and make it less simple), such as the creation or destruction of option value and value of information. (Option in the sense of choice rather than stock option.) Moral Foundations Theory may provide some inspiration for further considerations.

Comment by Telofy on [deleted post] 2017-05-28T10:51:44.453Z

Quick question: The paper on the PAHO-HANLON approach doesn't include the term equity. What does it mean in this context, and where did you find an expansion of it that might be helpful for me as well? Positioning is very relevant to something I'm writing at the moment, and so might be this equity thing.

I love having so many directions of improvement summarized in one post. In some cases my critiques would go in the opposite direction: not too little evidence for or against something but too much risk-aversion, perhaps as a concession to donors who are biased by considerations from the area of private investments, which don't carry over to commons.

Sorry, very compressed because I'm typing on my phone.

Comment by telofy on Fact checking comparison between trachoma surgeries and guide dogs · 2017-05-17T14:26:43.711Z · score: 2 (2 votes) · EA · GW

Here’s what I usually found most unfortunate about the comparison, but I don’t mean to compete with anyone who thinks that the math is more unfortunate or anything else.

  1. The decision to sacrifice the well-being of one person for that of others (even many others) should be hard. If we want to be trusted (and the whole point of GiveWell is that people don’t have the time to double-check all research no matter how accessible it is – plus, even just following a link to GiveWell after watching a TED Talk requires that someone trusts us with their time), we need to signal clearly that we don’t make such decisions lightly. It is honest signaling too, since the whole point of EA is to put a whole lot more effort into the decision than usual. Many people I talk to are so “conscientious” about such decisions that they shy away from them completely (implicitly making very bad decisions). It’s probably impossible to show just how much effort and diligence has gone into such a difficult decision in just a short talk, so I rather focus on cases where I am, or each listener is, the one at whose detriment we make the prioritization decision, just like in the Child in the Pond case. Few people would no-platform me because they think it’s evil of me to ruin my own suit.
  2. Sacrificing oneself, or rather some trivial luxury of oneself, also avoids the common objection why a discriminated against minority should have to pay when there are [insert all the commonly cited bad things like tax cuts for the most wealthy, military spending, inefficient health system, etc.]. It streamlines the communication a lot more.
  3. The group at whose detriment we need to decide should never be a known, discriminated-against minority in such examples, because these people are used to being discriminated against and their allies are used to seeing them being discriminated against, so when someone seems to be saying that they shouldn’t receive some form of assistance, they have a huge prior for assuming that it’s just another discriminatory attack. I think their heuristic more or less fails in this case, but that is not to say that it’s not a very valid heuristic. I’ve been abroad in a country where pedestrian crosswalks are generally ignored by car drivers. I’m not going to just blindly walk onto the street there even if the driver of the only car coming toward me is actually one who would’ve stopped for me if I did. My heuristic fails in that case, but it generally keeps me safe.
  4. Discriminated-against minority groups are super few, especially the ones the audience will be aware of. Some people may be able to come up with a dozen or so, some with several dozen. But in my actual prioritization decisions for the Your Siblings charity, I had to decide between groups with reference classes so fuzzy that there must be basically arbitrarily many such groups. Street children vs. people at risk of malaria vs. farmed animals? Or street children in Kampala vs. people at risk of malaria in the southern DRC vs. chickens farmed for eggs in Spain? Or street children of the lost generation in the suburbs of Kampala who were abducted for child sacrifice but freed by the police and delivered to the orphanage we’re cooperating with vs. …. You get the idea. If we’re unbiased, then what are the odds that we’ll draw a discriminated-against group from the countless potential examples in this urn? This should heavily update a listener toward thinking that there’s some bias against the minority group at work here. Surely, the real explanation is something about salience on our minds or ease of communication and not about discrimination, but they’d have to know us very well to have so much trust in our intentions.
  5. People with disability probably have distance “bias” at the same rates as anyone else, so they’ll perceive the blind person with the guide dog as in-group, the blind people suffering from cataracts in developing countries as completely neutral foreign group, and us as attacking them, making us the out-group. Such controversy is completely avoidable and highly dangerous, as Owen Cotton-Barratt describes in more detail in his paper on movement growth. Controversy breeds an opposition (and one that is not willing to engage in moral trade with us) that destroys option value particularly by depriving us of the highly promising option to draw on the democratic process to push for the most uncontroversial implications of effective altruism that we can find. Scott Alexander has written about it under the title “Toxoplasma of Rage.” I don’t think publicity is worth sacrificing the political power of EA for it, but that is just a great simplification of Owen Cotton-Barratt’s differentiated points on the topic.
  6. Communication is by necessity cooperative. If we say something, however true it may be, and important members of the audience understand it as something false or something else entirely (that may not have propositional nature), then we failed to communicate. When this happens, we can’t just stamp our collective foot on the ground and be like, “But it’s true! Look at the numbers!” or “It’s your fault you didn’t understand me because you don’t know where I’m coming from!” That’s not the point of communication. We need to adapt our messaging or make sure that people at least don’t misunderstand us in dangerous ways.

(I feel like you may disagree on some of these points for similar reasons that The Point of View of the Universe seemed to me to argue for a non-naturalist type of moral realism while I “only” try to assume some form of non-cognitivist moral antirealism, maybe emotivism, which seems more parsimonious to me. Maybe you feel like or have good reasons to think that there is a true language (albeit in a non-naturalist sense) so that it makes sense to say “Yes, you misunderstood me, but what I said is true, because …,” while I’m unsure. I might say, “Yes, you misunderstood me, but what I meant was something you’d probably agree with. Let me try again.”)

Comment by telofy on Why you should consider going to EA Global · 2017-05-17T07:00:50.385Z · score: 1 (1 votes) · EA · GW

Love your writing style!

Comment by telofy on Effective altruism is self-recommending · 2017-04-26T11:44:59.353Z · score: 3 (3 votes) · EA · GW

Surely your comment would’ve been very informative on its own.

Welcome to the forum! :-D

Comment by telofy on Hard-to-reverse decisions destroy option value · 2017-03-27T14:14:15.688Z · score: 4 (4 votes) · EA · GW

While enough people are skeptical about rapid growth and no one (I think) wants to sacrifice integrity, the warning to be careful about the politicization of EA is a timely and controversial one, because well-known EAs have put a lot of might behind Hillary’s election campaign and the prevention of Brexit, to the point that the lines between private efforts and EA efforts may blur.

Comment by telofy on Some Thoughts on Public Discourse · 2017-02-26T20:13:00.684Z · score: 0 (0 votes) · EA · GW

Namely, making no claims but only distinctions

Or taxonomies. Hence: The Taxoplasma of Ra.

(Sorry, I should post this in DEAM, not here. I don’t even understand this Ra thing.)

But I really like this concept!

Comment by telofy on Why I left EA · 2017-02-25T08:37:51.522Z · score: 0 (2 votes) · EA · GW

We’ve been communicating so badly that I would’ve thought you’d be one to reject an article like the one you linked. Establishing the sort of movement that Eliezer is talking about was the central motivation for making my suggestion in the first place.

If you think you can use a cooperative type of discourse in a private conversation where there is no audience that you need to address at the same time, then I’d like to remember that for the next time when I think we can learn something from each other on some topic.

Comment by telofy on Why I left EA · 2017-02-23T07:50:54.114Z · score: 2 (4 votes) · EA · GW

You get way too riled up over this. I started out being like “Uh, cloudy outside. Should we all pack umbrellas?” I’m not interested in an adversarial debate over the merits of packing umbrellas, one where there is winning and losing and all that nonsense. I’m not backing down; I was never interested in that format to begin with. It would incentivize me to exaggerate my confidence in the merits of packing umbrellas, which has been low all along; incentivize me not to be transparent about my epistemic status, as it were, my suspected biases and such; and so would incentivize an uncooperative setup for the discussion. The same probably applies to you.

I’m updating down from 70% for packing umbrellas to 50% for packing umbrellas. So I guess I won’t pack one unless it happens to be in the bag already. But I’m worried I’m over-updating because of everything I don’t know about why you never assumed what ended up as “my position” in this thread.

Comment by telofy on Why I left EA · 2017-02-22T16:34:44.130Z · score: 0 (2 votes) · EA · GW

The Wikipedia article on EA, the books by MacAskill and Singer, the EA Handbook, all seem to be a pretty good overview of what we do and stand for.

Lila has probably read those. I think Singer’s book contained something to the effect that the book is probably not meant for anyone who wouldn’t pull out the child. MacAskill’s book is more of a how-to; such a meta question would feel out of place there, but I’m not sure; it’s been a while since I read it.

Especially texts that appeal to moral obligation (which I share) signal that the reader needs to find an objective flaw in them to be able to reject them. That, I’m afraid, leads to people attacking EA for all sorts of made-up or not-actually-evil reasons. That can result in toxoplasma and opposition. If they could just feel like they can ignore us without attacking us first, we could avoid that.

If you want to prevent oppositions and toxoplasma, narrowing who is invited in accomplishes very little. The smaller your ideological circle, the finer the factions become.

A lot of your objections take the form of likely-sounding counternarratives to my narratives. They don’t make me feel like my narratives are less likely than yours, but I increasingly feel like this discussion is not going to go anywhere unless someone jumps in with solid knowledge of history or organizational culture, historical precedents and empirical studies to cite, etc.

So how many good donors and leaders would you want to ignore for the ability to keep one insufficiently likeminded person from joining? Since most EAs don't leave, at least not in any bad way, it's going to be >1.

That’s a good way to approach the question! We shouldn’t only count those that join the movement for a while and then part ways with it again but also those that hear about it and ignore it, publish a nonconstructive critique of it, tell friends why EA is bad, etc. With small rhetorical tweaks of the type that I’m proposing, we can probably increase the number of those that ignore it solely at the expense of the numbers who would’ve smeared it and not at the expense of the numbers who would’ve joined. Once we exhaust our options for such tweaks, the problem becomes as hairy as you put it.

I haven’t really dared to take a stab at how such an improvement should be worded. I’d rather base this on a bit of survey data among people who feel that EA values are immoral from their perspective. The positive appeals may stay the same but be joined by something to the effect that if they think they can’t come to terms with values X and Y, EA may not be for them. They’ll probably already have known that (and the differences may be too subtle to have helped Lila), but saying it will communicate that they can ignore EA without first finding fault with it or attacking it.

And no other social movement has had this level of obsessive theorizing about movement dynamics.

Oh dear, yeah! We should both be writing our little five-hour research summaries on possible cause areas rather than starting yet another marketing discussion. I know someone at CEA who’d get cross with me if he saw me doing this again. xD

It’s quite possible that I’m overly sensitive to being attacked (by outside critics) and should just ignore it and carry on doing my EA things, but I don’t think I overestimate this threat to the extent that further investment of our time into this discussion would be proportional.

Comment by telofy on Why I left EA · 2017-02-22T09:34:17.345Z · score: 2 (2 votes) · EA · GW

Thank you. <3

The Attribution Moloch

2016-04-28T06:43:10.413Z · score: 9 (9 votes)

Even More Reasons for Donor Coordination

2015-10-27T05:30:37.899Z · score: 4 (6 votes)

The Redundancy of Quantity

2015-09-03T17:47:20.230Z · score: 2 (4 votes)

My Cause Selection: Denis Drescher

2015-09-02T11:28:51.383Z · score: 6 (6 votes)

Results of the Effective Altruism Outreach Survey

2015-07-26T11:41:48.500Z · score: 3 (5 votes)

Dissociation for Altruists

2015-05-14T11:27:21.834Z · score: 5 (9 votes)

Meetup : Effective Altruism Berlin Meetup #3

2015-05-10T19:40:40.990Z · score: 0 (0 votes)

Incentivizing Charity Cooperation

2015-05-10T11:02:46.433Z · score: 6 (6 votes)

Expected Utility Auctions

2015-05-02T16:22:28.948Z · score: 4 (4 votes)

Telofy’s Effective Altruism 101

2015-03-29T18:50:56.188Z · score: 3 (3 votes)

Meetup : EA Berlin #2

2015-03-26T16:55:04.882Z · score: 0 (0 votes)

Common Misconceptions about Effective Altruism

2015-03-23T09:25:36.304Z · score: 8 (8 votes)

Precise Altruism

2015-03-21T20:55:14.834Z · score: 6 (6 votes)

Telofy’s Introduction to Effective Altruism

2015-01-21T16:46:18.527Z · score: 7 (9 votes)