Comments

Comment by itaibn on Are there historical examples of excess panic during pandemics killing a lot of people? · 2020-05-29T00:10:55.120Z · EA · GW
historical cases are earlier than would be relevant directly

Practically all previous pandemics were far enough back in history that their applicability is unclear. I think it's unfair to discount your example on those grounds, since every other positive or negative example can be discounted the same way.

Comment by itaibn on Which scientific discovery was most ahead of its time? · 2019-05-17T23:12:04.528Z · EA · GW

I've just examined the two Wikipedia articles you link to, and I don't think this is an independent discovery. The race between Einstein and Hilbert was to find the Einstein field equations, which put general relativity in its finalized form. However, the original impetus for developing general relativity was Einstein's proposed equivalence principle in 1907, and in 1913 he and Grossmann published the proposal that it would involve spacetime being curved (with a pseudo-Riemannian metric). Certainly after 1913 general relativity was inevitable, and perhaps it was inevitable after 1907, but it all still depended on Einstein's first ideas.

That's a far cry from saying that these ideas wouldn't have been discovered until the 1970s, a claim which I'm basing mainly on hearsay and which I confess is much more dubious.

Comment by itaibn on Which scientific discovery was most ahead of its time? · 2019-05-16T14:24:02.506Z · EA · GW

I don't recall the source, but I remember hearing from a physicist that if Einstein hadn't discovered the theory of special relativity it would surely have been discovered by other scientists at the time, but if he hadn't discovered the theory of general relativity it wouldn't have been discovered until the 1970s. More specifically, general relativity has an approximation known as linearized gravity which suffices to explain most of the experimental anomalies of Newtonian gravity but doesn't contain the concept that spacetime is curved, and that could have been discovered instead.
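A minimal sketch of what linearized gravity looks like, in standard notation that is not part of the original comment (sign and unit conventions vary): the metric is written as a small perturbation of flat spacetime,

\[
g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1,
\]

and, defining $\bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\eta_{\mu\nu}h$ and imposing the Lorenz gauge $\partial^{\mu}\bar{h}_{\mu\nu} = 0$, the Einstein field equations reduce to a wave equation on a fixed flat background,

\[
\Box \bar{h}_{\mu\nu} = -16\pi G\, T_{\mu\nu},
\]

so gravity is treated as a tensor field propagating on flat spacetime rather than as curvature of spacetime itself.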

Comment by itaibn on Interview with Jon Mallatt about invertebrate consciousness · 2019-05-02T18:55:28.951Z · EA · GW

I'm puzzled by Mallatt's response to the last question, about consciousness in computer systems. It appears to me that he and Feinberg are applying a double standard when judging the consciousness of computer programs. I don't know what he has in mind when he talks about the enormous complexity of consciousness, but based on other parts of the interview we can see some of the diagnostic criteria Mallatt uses to judge consciousness in practice. These include behavioral tests, such as returning to places where an animal previously saw food, tending wounds, and hiding when injured, as well as structural tests, such as multiple levels of intermediate processing between sensory input and motor output. Existing AIs already pass the structural test I listed, and I believe they could pass the behavioral tests given a simple virtual environment and reward function. I don't see a principled way of including the simplest types of animal consciousness while excluding any form of computer consciousness.

Comment by itaibn on Debate and Effective Altruism: Friends or Foes? · 2018-11-12T15:05:15.500Z · EA · GW

On the second paragraph: making your point succinctly is a valuable skill that is also important for anti-debates. A key part of this skill is understanding which parts of your argument are crucial for your conclusion and which merit less attention. The bias towards quick arguments and the bandwagon effect also exist in natural conversation, and I'm not sure they're any worse in competitive debating. I have little experience with competitive debating, so I cannot make the comparison and am just arguing from how this should work in principle.

On the other hand, in natural conversation you want to minimize the use of both the audience's time and its cognitive resources, whereas competitive debate weights minimizing time more heavily, which distorts how people learn succinctness from it. Also, the time constraint in competitive debate might be much more severe than the mental-resource constraint in the most productive natural conversations, so many important skills that are only exercised in long-form conversation are not practiced at all.

Comment by itaibn on Curing past sufferings and preventing s-risks via indexical uncertainty · 2018-09-27T22:24:57.926Z · EA · GW

You should consider whether something has gone terribly wrong if your method for preventing s-risks is to simulate individuals suffering intensely in huge quantities.

Comment by itaibn on Empirical data on value drift · 2018-04-25T00:07:57.086Z · EA · GW

A particular word choice that made me uneasy is calling "dating a non-EA" "dangerous" without properly qualifying that word. It is more precise to say that something is "good" or "bad" for a particular purpose than to just call it "good" or "bad"; the same goes for "dangerous". If you call something "dangerous" without qualification or other context, this leaves the implicit assumption that the underlying purpose is universal and unquestioned, or almost so, in the community you're speaking to. In many cases it's fine to assume EA values in these sorts of statements -- this is an EA forum, after all. Doing so for statements about value drift, however, appears to support the norm that people here should want to stay with EA values forever, a norm which I oppose.

Comment by itaibn on Comparative advantage in the talent market · 2018-04-12T12:32:32.702Z · EA · GW

It seems to me that you're in favor of unilateral talent trading, that is, that someone should work on a cause they don't think is critical but where they have a comparative advantage, because they believe this will induce other people to work on their preferred causes. I disagree with this. When someone works on a cause, this also increases the amount of attention and perceived value it is given in the EA community as a whole. As such, I expect the primary effect of unilateral talent trading would be to increase the cliquishness of the EA community -- people working on what's popular in the EA community rather than what's right. Also, what are commonly considered EA priorities could differ significantly from the actual average opinion, and unilateral trading would wrongly shift the latter in the direction of the former, especially as the former is more easily gamed by advertising and the like. On the whole, I discourage working on a cause you don't think is important unless you are confident this won't decrease the total amount of attention given to your preferred cause. That is, only accept explicit bilateral trades with favorable terms.

Comment by itaibn on How to improve EA Funds · 2018-04-05T12:13:20.606Z · EA · GW

On this very website, clicking the link "New to Effective Altruism?" and a little browsing quickly leads to recommendations to give to EA Funds. If EA Funds really is intended to be a high-trust option, CEA should change that recommendation.

Comment by itaibn on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-03-10T23:38:30.260Z · EA · GW

I haven't responded to you for so long firstly because I felt we had reached the point in the discussion where it's difficult to get across anything new and I wanted to be careful about what I say, and then because, after a while without writing anything, I became disinclined to continue. The conversation may close soon.

Some quick points:

  • My whole point in my previous comment was that the conceptual structure of physics is not what you make it out to be, and so your analogy to physics is invalid. If you want to say that my arguments against consciousness apply equally well to physics, you will need to explain the analogy.

  • My views on consciousness that I mentioned earlier but did not elaborate on are becoming more relevant. It would be a good idea for me to explain them in more detail.

  • I read your linked piece on quantifying bliss and I am unimpressed. I concur with the last paragraph of this comment.

Comment by itaibn on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-03-02T00:23:36.996Z · EA · GW

Do you think we should move the conversation to private messages? I don't want to clutter a discussion thread that's mostly on a different topic, and I'm not sure whether the average reader of the comments benefits or is distracted by long conversations on a narrow subtopic.

Your comment appears just to reframe the point I made in your own words, and then to affirm that you believe the notion of qualia generalizes to all possible arrangements of matter. This doesn't answer the question: why do you believe this?

By the way, although there is no evidence for this, physicists commonly speculate that the laws of physics allow multiple metastable vacuum states, that the observable universe occupies only one such vacuum, and that near different vacua there are different fields and forces. If this is true, then the electromagnetic field and other parts of the Standard Model are not much different from my earlier example of the alignment of an ice crystal. One reason this view is considered plausible is simply the fact that it's possible: it's not considered so unusual for a quantum field theory to have multiple vacuum states, and if the entire observable universe is close to one vacuum, then none of our experiments give us any evidence about what other vacuum states are like or whether they exist.

This example is meant to illustrate a broader point: I think that making a binary distinction between contextual concepts and universal concepts is oversimplified. Rather, here's how I would put it: many phenomena generalize beyond the context in which they were originally observed. Taking advantage of this, physicists deliberately seek out the phenomena that generalize as far as possible, and over history they have broadened their grasp very far. Nonetheless, they avoid thinking of any concept as "universal"; often, when they do think a concept generalizes, they have a specific explanation for why it should, and if there's a clear alternative to the concept generalizing, they keep an open mind.

So again: Why do you think that qualia and emotional valence generalize to all possible arrangements of matter?

Comment by itaibn on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-27T23:05:45.290Z · EA · GW

It wasn't clear to me from your comment, but based on your link I am presuming that by "crisp" you mean "amenable to generalizable scientific theories" (rather than "ontologically basic"). I was using "pleasure/pain" as a catch-all term and would not mind substituting "emotional valence".

It's worth emphasizing that just because a particular feature is crisp does not imply that it generalizes to any particular domain in any particular way. For example, a single ice crystal has a set of directions in which the molecular bonds are oriented, which is the same throughout the crystal, and this surely qualifies as a "crisp" feature. Nonetheless, when the ice melts, this feature becomes undefined -- no direction is distinguished from any other direction in water. When figuring out whether a concept from one domain extends to a new domain, positing that there's a crisp theory describing the concept does not answer the question without some information on what that theory looks like.

So even if there existed a theory describing qualia and emotional valence as they exist on Earth, it need not extend to describing every physically possible arrangement of matter, and I see no reason to expect it to. Since a far-future civilization is likely to approach the physical limits of matter in many ways, we should not assume that it will not be one of those arrangements of matter to which the notion of qualia is inapplicable.

Comment by itaibn on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-26T01:50:05.743Z · EA · GW

Thanks for the link. I didn't think to look at what other posts you have published and now I understand your position better.

As I now see it, there are two critical questions for distinguishing the different positions on the table:

  1. Does our intuitive notion of pleasure/suffering have an objective, precisely defined fundamental concept underlying it?
  2. In practice, is it a useful approach to look for computational structures exhibiting pleasure/suffering in the distant future as a means to judge possible outcomes?

Brian Tomasik answers these questions "No/Yes", and a supporter of the Sentience Institute would probably answer "Yes" to the second question. Your answers are "Yes/No", and so you prefer to work on finding the underlying theory of pleasure/suffering. My answers are "No/No", and so I am at a loss.

I see two reasons why a person might think that the pleasure/pain of conscious entities is a solid enough concept to answer "Yes" to either of these questions (not counting conservative opinions about what futures are possible, for question 2). The first is a confusion caused by subtle implicit assumptions in the way we talk about consciousness, which makes a sort of conscious experience that includes pleasure and pain within it seem more ontologically basic than it really is. I won't elaborate on this in this comment, but for now you can round me off as an eliminativist.

The second is what I was calling "a sort of wishful thinking" in argument #4: these people have moral intuitions that tell them to care about others' pleasure and pain, which implies not fooling themselves about how much pleasure and pain others experience. On the other hand, there are many situations where their intuition does not give them a clear answer, but also tells them that picking an answer arbitrarily is like fooling themselves. They resolve this tension by telling themselves, "There is a 'correct answer' to this dilemma, but I don't know what it is. I should act to best approximate this 'correct answer' with the information I have." People then treat these "correct answers" like other things they are ignorant about, and in particular imagine that a scientific theory might be able to answer these questions in the same way science answered other things we used to be ignorant about.

However, this expectation infers something external, the existence of a certain kind of scientific theory, from evidence that is internal, their own cognitive tensions. This seems fallacious to me.

Comment by itaibn on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-25T12:13:38.751Z · EA · GW

Thanks for reminding me that I was implicitly assuming computationalism. Nonetheless, I don't think physicalism substantially affects the situation. My arguments #2 and #4 stand unaffected; you have not backed up your claim that qualia is a natural kind under physicalism. While it's true that physicalism gives clear answers for the value of two identical systems or of a system simulated with homomorphic encryption, it may still be possible to have quantum computations involving physically instantiated conscious beings, by isolating the physical environment of such a being and running the CPT reversal of that physical system after an output has been extracted, in order to maintain coherence. Finally, physicalism adds its own questions: given a collection of physical systems whose behavior all appears conscious, which ones are actually conscious and which are not? If I understood you correctly, physicalism as a statement about consciousness is primarily a negative statement, "the computational behavior of a system is not sufficient to determine what sort of conscious activity occurs there", which doesn't by itself tell you what sort of conscious activity does occur.

Comment by itaibn on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-25T00:12:38.810Z · EA · GW

My current position is that the amount of pleasure/suffering that conscious entities will experience in a far-future technological civilization will not be well-defined. Some arguments for this:

  1. Generally, utility functions and reward functions are invariant under affine transformations (with suitable rescaling of the learning rate for reward functions). Therefore they cannot be compared between different intelligent agents as a measure of pleasure (see the sketch after this list).

  2. The clean separation of our civilization into many different individuals is an artifact of how evolution operates. I don't expect a far-future civilization to have a similar division of its internal processes into agents. Therefore the method of counting conscious entities with different levels of pleasure is inapplicable.

  3. Theoretical computer science gives many ways to embed one computational process within another so that it is unclear whether or how many times the inner process "occurs", such as running identical copies of the same program, using a quantum computer to run the same program with many inputs in superposition, and homomorphic encryption. Similar methods we don't know about will likely be discovered in the future.

  4. Our notions of pleasure and suffering are mostly defined extensionally, with examples from the present and the past. I see no reason for such an extensionally derived concept to have a natural definition that applies to extremely different situations. Uncharitably, it seems like the main reason people assume this is a sort of wishful thinking due to their normal moral reasoning breaking down if they allow pleasure/suffering to be undefined.
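A minimal worked version of the affine-invariance point in argument 1, in notation that is not in the original comment: if a utility function $U$ represents an agent's preferences over lotteries $p$ and $q$, then any positively rescaled and shifted version represents exactly the same preferences,

\[
U'(x) = a\,U(x) + b \quad (a > 0)
\qquad\Longrightarrow\qquad
\mathbb{E}_{p}[U'] \ge \mathbb{E}_{q}[U']
\;\iff\;
\mathbb{E}_{p}[U] \ge \mathbb{E}_{q}[U].
\]

Since $a$ and $b$ are arbitrary, the numerical scale of $U$ carries no behavioral information, so reading raw utility values across different agents as absolute amounts of pleasure is not meaningful.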

I'm currently uncertain about how to make decisions relating to the far future in light of the above arguments. My current favorite approach is to try to understand the far future well enough that I find something I have strong moral intuitions about.

Comment by itaibn on We Could Move $80 Million to Effective Charities, Pineapples Included · 2017-12-14T18:19:31.703Z · EA · GW

Indeed, maybe I should have made the point more harshly. To be clear, that comment is not about something people might do; it's about what's already present in the top post, which I see as breaking the Reddit rules.

I used soft language because I was worried about EA discussions breaking into arguments whenever someone suggests a good thing to do, and was worried that I might have erred too much in the other direction in other contexts. I still don't feel I have a good intuition on how confrontational I should be.

Comment by itaibn on We Could Move $80 Million to Effective Charities, Pineapples Included · 2017-12-14T16:54:50.222Z · EA · GW

I've spent some time thinking and investigating what the current state of affairs is, and here's my conclusions:

I've been reading through PineappleFund's comments. Many are responses to solicitations for specific charities, with them endorsing those charities as possibilities. One of these was for the SENS Foundation. Matthew_Barnett suggested that this is evidence that they particularly care about long-term future causes, but given the diversity of other causes they endorsed, I think it is pretty weak evidence.

They haven't yet commented on any of the subthreads specifically discussing EA. However, these subthreads are ranked high by Reddit's sorting algorithm and have many comments endorsing EA. This is already a good position and is difficult to improve on: they either like what they see or they don't. It might be better if the top-level comments explicitly described and linked to a specific charity, since that is what they responded well to in other comments, but I am cautious about making such surface-level generalizations, which might have more to do with the distribution of existing comments than with PineappleFund's tendencies.

Keep in mind that soliciting upvotes for a comment is explicitly against Reddit rules. I understand if you think that the stakes of this situation are more important than these rules, but be sure you are consciously aware of the judgment you have made.

Comment by itaibn on Anti-tribalism and positive mental health as high-value cause areas · 2017-10-18T18:16:54.138Z · EA · GW

First, I consider our knowledge of psychology today to be roughly equivalent to that of alchemists when alchemy was popular. As with alchemy, our main advantage over previous generations is that we're doing lots of experiments and starting to notice vague patterns, but we still don't have any systematic or reliable knowledge of what is actually going on. It is premature to seriously expect to change human nature.

Improving our knowledge of psychology to the point where we can actually figure things out could have a major positive effect on society. The same could be said for other branches of science. I think basic science is a potentially high-value cause, but I don't see why psychology should be singled out.

Second, this cause is not neglected. It is one of the major issues intellectuals have been grappling with for centuries or more. Framing the issue in terms of "tribalism" may be a novelty, but I don't see it as an improvement.

Finally, I'm not saying that there's nothing the effective altruism community can do about tribalism. I'm saying I don't see how this post is helping.

edit: As an aside, I'm now wondering if I might be expressing the point too rudely, especially the last paragraph. I hope we manage to communicate effectively in spite of any mistakes on my part.

Comment by itaibn on Anti-tribalism and positive mental health as high-value cause areas · 2017-10-18T11:35:55.193Z · EA · GW

I don't see any high-value interventions here. Simply pointing out a problem people have been aware of for millennia will not help anyone.

Comment by itaibn on [deleted post] 2017-10-14T13:33:34.670Z

I don't think the people of this forum are qualified to discuss this. Nobody in the post or comments (as of the time I posted my comment, and I am including myself) gives me the impression of having detailed knowledge of the process and trade-offs involved in creating a new government agency, or of any other type of major governmental action on x-risk. As laymen, I believe we should not be proposing or judging any particular policy, but rather recognizing and supporting people with genuine expertise who are interested in existential risk policy.

Comment by itaibn on Which five books would you recommend to an 18 year old? · 2017-09-13T13:58:00.378Z · EA · GW

Before you get too excited about this idea, I want you to recall your days at school and how well it turned out when the last generation of thinkers tried this.

Comment by itaibn on Which five books would you recommend to an 18 year old? · 2017-09-09T10:16:28.894Z · EA · GW

While I couldn't quickly find the source for this, I'm pretty sure Eliezer read the Lectures on Physics as well. Again, I think Surely You're Joking is good, I just think the Lectures on Physics is better. Both are reasonable candidates for the list.

Comment by itaibn on Ten new 80,000 Hours articles made for the effective altruist community · 2017-09-08T01:00:00.524Z · EA · GW

The article on machine learning doesn't discuss the possibility that more people pursuing machine learning jobs could have a net negative effect. It's true your venue will generally encourage people who are more considerate of the long-term and altruistic effects of their research, and who will therefore likely have a more positive effect than the average entrant to the field; but if accelerating the development of strong AI is a net negative, that could outweigh the benefit of the average researcher being more altruistic.

Comment by itaibn on Which five books would you recommend to an 18 year old? · 2017-09-08T00:38:24.403Z · EA · GW

What do you mean by Feynman? I endorse his Lectures on Physics as something that had a big effect on my own intellectual development, but I worry many people won't be able to get that much out of it. While his more accessible works are good, I don't rate them as highly.

Comment by itaibn on Looking at how Superforecasting might improve some EA projects response to Superintelligence · 2017-08-30T12:01:38.127Z · EA · GW

This post is a bait-and-switch: it starts off with a discussion of the Good Judgement Project and what lessons it teaches us about forecasting superintelligence. However, starting with the section "What lessons should we learn?", you switch from a general discussion of these techniques to making a narrow point about which areas of expertise forecasters should rely on, an opinion which I suspect you arrived at through means not strongly motivated by the Good Judgement Project.

While I also suspect the Good Judgement Project could have valuable lessons for superintelligence forecasting, I think that taking verbal descriptions of how superforecasters make good predictions and citing them in arguments about loosely related specific policies is a poor way to do that. As a comparison, I don't think that giving a forecaster this list of suggestions and asking them to make predictions with those suggestions in mind would lead to performance similar to that of a superforecaster. In my opinion, the best way to draw lessons from the Good Judgement Project is to rely directly on existing forecasting teams, or on new forecasting teams trained and tested in the same manner, to give us their predictions on potential superintelligence, and to give the appropriate weight to their expertise.

Moreover, among the list of suggestions in the section "What they found to work", you almost entirely focus on the second one, "Looking at a problem from multiple different view points and synthesising them?", to make your argument. You can also be said to be relying on the last suggestion to the extent that it says essentially the same thing, that we should rely on multiple points of view. The only exception is that you rely on the fifth suggestion, "Striving to distinguish as many degrees of doubt as possible - be as precise in your estimates as you can", when you argue their strategy documents should have more explicit probability estimates. In response to that, keep in mind that these forecasters are specifically tested on giving well-calibrated probabilistic predictions. Therefore I expect that this overestimates the importance of precise probability estimates in other contexts. My hunch is that giving numerically precise subjective probability estimates is useful in discussions among people already trained to have a good subjective impression of what these probabilities mean, but that among people without such training the effect of using precise probabilities is neutral or harmful. However, I have no evidence for this hunch.

I disapprove of this bait-and-switch. I think it deceptively builds a case for diversity in intelligence forecasting, and adds confusion to both the topics it discusses.

Comment by itaibn on Peter Singer no-platformed by pro-disability protestors at Canadian university · 2017-03-12T14:34:05.138Z · EA · GW

Suggestion: The author should have omitted the "Thoughts" section of this post and put the same content in a comment, and, in general, news posts should avoid subjective commentary in the main post.

Reasoning: The main content of this post is its report of EA-related news. This by itself is enough to make it worth posting. Discussion of and opinions on this news can happen in the comments. By adding commentary, you are effectively "bundling" a high-quality post with additional content, which grants this extra content undue attention.

Note: This comment was not prompted by any particular objection to the views discussed in this post. I also approve of the way you clearly separated the news from your thoughts on it. I don't think the post goes outside the EA Forum's community norms. Rather, I want to discuss whether shifting those community norms is a good idea.

Comment by itaibn on Some Thoughts on Public Discourse · 2017-03-08T04:25:06.800Z · EA · GW

The following is entirely a "local" criticism: It responds only to a single statement you made, and has essentially no effect on the validity of the rest of what you say.

I always run content by (a sample of) the people whose views I am addressing and the people I am directly naming/commenting on... I see essentially no case against this practice.

I found this statement surprising, because it seems to me that this practice has a high cost. It increases the amount of effort it takes to make a criticism. Increasing the cost of making criticisms can also make you less likely to consider making one in the first place. There is also a fixed cost in making this into a habit.

Seeing the situation you're in, as you describe it in the rest of your post, and specifically that you put a lot of effort into your comments in any case, I can see this practice working well for you. However, that is far from there being "no case" against it, especially for people who aren't public figures.