Posts

[Linkpost]: German judges order government to strengthen legislation before end of year to protect future generations 2021-04-29T16:47:18.396Z

Comments

Comment by Flodorner on AMA: The new Open Philanthropy Technology Policy Fellowship · 2021-07-29T09:39:36.603Z · EA · GW

Relatedly, what is the likelihood that future iterations of the fellowship might be less US-centric or include visa sponsorship?

Comment by Flodorner on Apply to the new Open Philanthropy Technology Policy Fellowship! · 2021-07-23T07:25:20.941Z · EA · GW

The job posting states: 

"All participants must be eligible to work in the United States and willing to live in Washington, DC, for the duration of their fellowship. We are not able to sponsor US employment visas for participants; US permanent residents (green card holders) are eligible to apply, but fellows who are not US citizens may be ineligible for placements that require a security clearance."

So my impression would be that it would be pretty difficult for non-US citizens who do not already live in the US to participate.

Comment by Flodorner on What previous work has been done on factors that affect the pace of technological development? · 2021-04-28T08:04:28.489Z · EA · GW

https://en.wikipedia.org/wiki/Technological_transitions might be relevant.

The Geels book cited in the article (Geels, F.W., 2005. Technological transitions and system innovations. Cheltenham: Edward Elgar Publishing.) contains a bunch of interesting case studies I read a while ago, as well as a framework for technological change that I believe is popular, but I am not sure the framework is sufficiently precise to be very predictive (and thus empirically validatable).

I don't have any particular sources on this, but the economic literature on the effects of regulation might be quite relevant. In particular, I do remember attending a lecture arguing that limited liability played an important role in innovation during the Industrial Revolution.

Comment by Flodorner on Is there evidence that recommender systems are changing users' preferences? · 2021-04-16T17:19:35.562Z · EA · GW

Facebook has at least experimented with using deep reinforcement learning to adjust its notifications, according to https://arxiv.org/pdf/1811.00260.pdf . Depending on which exact features they used for the state space (i.e. whether they are causally connected to preferences), the trained agent would at least theoretically have an incentive to change users' preferences.

The fact that they use DQN rather than a bandit algorithm seems to suggest that what they are doing involves at least some short term planning, but the paper does not seem to analyze the experiments in much detail, so it is unclear whether they could have used a myopic bandit algorithm instead. Either way, seeing this made me update quite a bit towards being more concerned about the effect of recommender systems on preferences. 
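To make the myopia point concrete, here is a minimal tabular sketch (not the paper's actual setup; the state and action sets are made up) of how the only difference between a bandit-style and a DQN-style update is the bootstrapped future-value term:

```python
import numpy as np

def q_update(q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step; gamma=0 recovers a myopic, bandit-style update."""
    target = r + gamma * np.max(q[s_next])
    q[s, a] += alpha * (target - q[s, a])
    return q

q = np.zeros((5, 2))  # 5 hypothetical user states, 2 actions (send / don't send a notification)
q = q_update(q, s=0, a=1, r=1.0, s_next=3, gamma=0.0)   # myopic: only the immediate reward matters
q = q_update(q, s=0, a=1, r=1.0, s_next=3, gamma=0.99)  # non-myopic: future user states matter too
```

With gamma = 0 the agent only cares about the immediate reward of a notification; with gamma > 0 it is also rewarded for steering users into states that yield more future reward, which is where an incentive to change preferences could in principle enter.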

Comment by Flodorner on Objectives of longtermist policy making · 2021-02-12T16:59:58.944Z · EA · GW

Interesting writeup!

Depending on your intended audience, it might make sense to add more details for some of the proposals. For example, why is scenario planning a good idea compared to other methods of decision making? Is there a compelling story or strong empirical evidence for its efficacy?

Some small nitpicks: 

There seems to be a mistake here: 

"Bostrom argues in The Fragile World Hypothesis that continuous technological development will increase systemic fragility, which can be a source of catastrophic or existential risk. In the Precipice, he estimates the chances of existential catastrophe within the next 100 years at one in six."

I also find this passage a bit odd: 

"One example of moral cluelessness is the repugnant conclusion, which assumes that by adding more people to the world, and proportionally staying above a given average in happiness, one can reach a state of minimal happiness for an infinitely large population."

The repugnant conclusion might motivate someone to think about cluelessness, but it does not really seem to be an example of cluelessness (the question of whether we should accept it might or might not be).

Comment by Flodorner on 13 Recent Publications on Existential Risk (Jan 2021 update) · 2021-02-12T09:33:50.819Z · EA · GW

Most of the links to the papers seem to be broken.

Comment by Flodorner on Even Allocation Strategy under High Model Ambiguity · 2021-01-01T11:09:49.078Z · EA · GW

So for the maximin we are minimizing over all joint distributions that are r-close to our initial guess?

"One intuitive way to think about this might be considering circles of radius  centered around fixed points, representing your first guesses for your options, in the plane. As  becomes very large, the intersection of the interiors of these circles will approach 100% of their interiors. The distance between the centres becomes small relative to their radii. Basically, you can't tell the options apart anymore for huge . (I might edit this post with a picture...)"

If I can't tell the options apart any more, how is the 1/n strategy better than just investing everything into a random option? Is it just about variance reduction? Or is the distance metric designed such that shifting the distributions into "bad territories" for more than one of the options requires more movement? 

Comment by Flodorner on A case against strong longtermism · 2020-12-22T12:16:10.843Z · EA · GW

I wrote up my understanding of Popper's argument on the impossibility of predicting one's own knowledge (Chapter 22 of The Open Universe) that came up in one of the comment threads. I am still a bit confused about it and would appreciate people pointing out my misunderstandings.

Consider a predictor:

A1: Given a sufficiently explicit prediction task, the predictor predicts correctly

A2: Given any such prediction task, the predictor takes time to predict and issue its reply (the task is only completed once the reply is issued).

T1: A1, A2 => Given a self-prediction task, the predictor can only produce a reply after (or at the same time as) the predicted event

T2: A1, A2 => The predictor cannot predict future growth in its own knowledge

A3: The predictor takes longer to produce a reply, the longer the reply is

A4: All replies consist of a description of a physical system and use the same (standard) language.

A1 establishes implicit knowledge of the predictor about the task. A2, A3 and A4 are there to account for the fact that the machine needs to make its prediction explicit.

A5: Now, consider two identical predictors, Tell and Told. At t=0, give Tell the task to predict Told's state (including its physically issued reply) at t=1 from Told's state at t=0. Give Told the task to predict a third predictor's state (this seems to later be interpreted as Tell's state) at t=1 from that predictor's state at t=0 (such that Tell and Told will be in the exact same state at t=0).

  • If I understand correctly, this implies that Tell and Told will be in the same state all the time, as future states are just a function of the task and the initial state.

T3: If Told has not started issuing its reply at t=1, Tell won't have completed its task at t=1

  • Argument: Tell must issue its reply to complete the task, but Tell has to go through the same states as Told in equal periods of time, so it cannot have started issuing its reply.

T4: If Told has completed its task at t=1, Tell will complete its task at t=1.

  • Argument: Tell and Told are identical machines

T5: Tell cannot predict its own future growth in knowledge

  • Argument: Completing the prediction would take until the knowledge is actually obtained.

A6: The description of the physical state of another description (that is for example written on a punch card) cannot be shorter than said other description.

T6: If Told has completed its task at t=1, Tell must have taken longer to complete its task

  • This is because its reply is longer than Told's, given that it needs to describe Told's reply.

T6 contradicts T4, so some of the assumptions must be wrong.

  • A5 and A1 are some of the most shaky assumptions. If A1 fails, we cannot predict the future. If A5 fails, there is a problem with self-referential predictions.

Initial thoughts: 

This seems to establish too little, as it is about deterministic predictions. Also, the argument does not seem to preclude partial predictions about certain aspects of the world's state (for example, predictions that are not concerned with the other predictor's physical output might go through). Less relevantly, the argument heavily relies on (pseudo) self-references, and Popper distinguishes between explicit and implicit knowledge, with only explicit knowledge seeming to be affected by the argument. It is not clear to me that making an explicit prediction about the future necessarily requires me to make all of the knowledge gains I have until then explicit (if we are talking about deterministic predictions of the whole world's state, I might have to, though, especially if I predict state-by-state).

Then, even if all of my criticism were invalid and the argument were true, I don't see how we could predict anything about the future at all (like the sun's existence or the coin flips that were discussed in other comments). Where is the qualitative difference between short- and long-term predictions? (I agree that there is a quantitative one, and it seems quite plausible that some longtermists are undervaluing that.)

I am also slightly discounting the proof, as it uses a lot of words that can be interpreted in different ways. It seems like it is often easier to overlook problems and implicit assumptions in that kind of proof as opposed to a more formal/symbolic proof. 

Popper's ideas seem to have interesting overlap with MIRI's work. 

Comment by Flodorner on A case against strong longtermism · 2020-12-22T10:05:39.996Z · EA · GW

They are, but I don't think that the correlation is strong enough to invalidate my statement. P(sun will exist|AI risk is a big deal) seems quite large to me. Obviously, this is not operationalized very well...

Comment by Flodorner on A case against strong longtermism · 2020-12-21T09:39:13.574Z · EA · GW

It seems like the proof critically hinges on assertion 2) which is not proven in your link. Can you point me to the pages of the book that contain the proof?

I agree that proofs are logical, but since we're talking about probabilistic predictions, I'd be very skeptical of the relevance of a proof that does not involve mathematical reasoning.

Comment by Flodorner on A case against strong longtermism · 2020-12-20T13:53:54.835Z · EA · GW

I don't think I buy the impossibility proof, as predicting future knowledge in a probabilistic manner is possible (most simply, I can predict that if I flip a coin now, there's a 50/50 chance I'll know the coin landed on heads/tails in a minute). I think there is some important true point behind your intuition about how knowledge (especially of more complex forms than knowledge about a coin flip) is hard to predict, but I am almost certain you won't be able to find any rigorous mathematical proof for this intuition because reality is very fuzzy (in a mathematical sense, what exactly is the difference between the coin flip and knowledge about future technology?), so I'd be a lot more excited about other types of arguments (which will likely only support weaker claims).

Comment by Flodorner on A case against strong longtermism · 2020-12-18T21:24:36.834Z · EA · GW

Ok, makes sense. I think that our ability to make predictions about the future steeply declines with increasing time horizons, but find it somewhat implausible that it would become entirely uncorrelated with what is actually going to happen in finite time. And it does not seem to be the case that data supporting long-term predictions is impossible to come by: while it might be pretty hard to predict whether AI risk is going to be a big deal by whatever measure, I can still be fairly certain that the sun will exist in 1000 years, in part due to a lot of data collection and hypothesis testing done by physicists.

Comment by Flodorner on A case against strong longtermism · 2020-12-18T19:44:18.303Z · EA · GW

"The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. "

This claim seems confused, as every nonempty set allows for the definition of a probability measure on it, and measures on function spaces exist ( https://en.wikipedia.org/wiki/Dirac_measure , https://encyclopediaofmath.org/wiki/Wiener_measure ). To obtain non-existence, further properties of the measure, such as translation-invariance, need to be required (https://aalexan3.math.ncsu.edu/articles/infdim_meas.pdf), and it is not obvious to me that we would necessarily require such properties.
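To spell out why existence alone is cheap: for any nonempty set of futures $\Omega$ equipped with a $\sigma$-algebra $\mathcal{F}$, picking any single future $\omega_0 \in \Omega$ already gives the Dirac measure

```latex
\delta_{\omega_0}(A) =
\begin{cases}
1 & \text{if } \omega_0 \in A,\\
0 & \text{otherwise,}
\end{cases}
\qquad A \in \mathcal{F},
```

which is a perfectly well-defined probability measure. The real question is whether a measure satisfying additional desiderata (like translation-invariance) exists, and whether we have grounds to single one out.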

Comment by Flodorner on A case against strong longtermism · 2020-12-18T19:34:38.559Z · EA · GW

I am confused about the precise claim made regarding the Hilbert Hotel and measure theory.  When you say "we have no  measure over the set of all possible futures",  do you mean that no such measures exist (which would be incorrect without further requirements:  https://en.wikipedia.org/wiki/Dirac_measure , https://encyclopediaofmath.org/wiki/Wiener_measure ), or that we don't have a way of choosing the right measure?  If it is the latter,  I agree that this is an important challenge, but I'd like to highlight that the situation is not too different from the finite case in which there is still an infinitude of possible measures for a given set to choose from. 

Comment by Flodorner on Challenges in evaluating forecaster performance · 2020-09-12T08:21:56.547Z · EA · GW

I'm also not sure I follow your exact argument here. But frequency clearly matters whenever the forecast is essentially resolved before the official resolution date, or when the best forecast based on evidence at time t behaves monotonically (think of questions of the type "will event x, which has an (approximately) small fixed probability of happening each day, happen before day y?", where each day passing without x happening should reduce your credence).
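To make the second case concrete (with a made-up fixed daily probability $p$): for a question of the form "will x happen before day $y$?", the best forecast at day $t$, given that x has not yet happened, is

```latex
P(\text{x happens by day } y \mid \text{no x by day } t) = 1 - (1-p)^{\,y-t},
```

which declines every day x fails to occur, so forecasts that are not refreshed frequently become systematically stale.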

Comment by Flodorner on Challenges in evaluating forecaster performance · 2020-09-11T06:18:03.827Z · EA · GW

I guess you're right (I read this before and interpreted "active forecast" as "forecast made very recently").

If they also used this way of scoring things for the results in Superforecasting, this seems like an important caveat for forecasting advice derived from the book: for example, the efficacy of updating your beliefs might mostly be explained by this. I previously thought that the results meant that a person who forecasts a question daily will make better forecasts on Sundays than a person who only forecasts on Sundays.

Comment by Flodorner on Challenges in evaluating forecaster performance · 2020-09-10T19:02:00.290Z · EA · GW

Do you have a source for the "carrying forward" on gjopen? I usually don't take the time to update my forecasts if I don't think I'd be able to beat the current median but might want to adjust my strategy in light of this.

Comment by Flodorner on EA considerations regarding increasing political polarization · 2020-06-27T18:16:30.063Z · EA · GW

Claims that people are "unabashed racists and sexists" should at least be backed up with actual examples. As it stands, I cannot know whether you have good reasons for that belief that I don't see (at the very least not in all of the cases), or whether we have the same information but fundamentally disagree about what constitutes "unabashed racism".

I agree with the feeling that the post undersells concerns about the right wing, but I don't think you will convince anybody without any arguments except for a weakly supported claim that the concern about the left is overblown. I also agree that "both sides are equal" is rarely true, but again, just claiming that does not show anyone that the side you prefer is better (see that comment where someone essentially argues the same for the other side; imagine I haven't thought about this topic before, how am I supposed to choose which of you two to listen to?).

"If you would like to avoid being deplatformed or called out, perhaps the best advice is to simply not make bigoted statements. That certainly seems easier than fleeing to another country." The author seems to be arguing that it might make sense to be prepared to flee the country if things become a lot worse than deplatforming. While I think that the likelihood of this happening is fairly small (although this course of action would be equally advisable if things got a lot worse on the right wing), they are clearly not advocating to leave the country in order to avoid being "called out".

Lastly, I sincerely hope that all of the downvotes are for failing to comply with the commenting guidelines of "Aim to explain, not persuade, Try to be clear, on-topic, and kind; and Approach disagreements with curiosity" and not because of your opinions.

Comment by Flodorner on EA considerations regarding increasing political polarization · 2020-06-27T17:32:21.109Z · EA · GW

"While Trump’s policies are in some ways more moderate than the traditional Republican platform". I do not find this claim self-evident (potentially due to biased media reporting affecting my views) and find it strange that no source or evidence for it is provided, especially given the commendable general amount of links and sources in the text.

Relatedly, I noticed a gut feeling that the text seems more charitable to the right-wing perspective than to the left (specific "evidence" included the statement from the previous paragraph, the use of the word "mob", the use of concrete examples for the wrongdoings of the left while mostly talking about hypotheticals for the right, and the focus on the Cultural Revolution without providing arguments for why parallels to previous right-wing takeovers [especially against the backdrop of a perceived left-wing threat] are not adequate). The recommendation of Eastern Europe as a good destination for migration seems to push in a similar direction, given recent drifts towards right-wing authoritarianism in states like Poland and Hungary.

I would be curious if others (especially people whose political instincts don't kick in when thinking about the discussion around deplatforming) share this impression to get a better sense of how much politics distorts how I viscerally weigh evidence.

I am also confused whether pieces that can easily be read as explicitly anti-left-wing (if I, who am quite sceptical of deplatforming but might not see it as a huge threat, can do this, imagine someone who is further to the left), rather than as mostly orthogonal to politics (with the occasional statement that can be misconstrued as right-wing), might make it even easier for EA to "get labelled as right-wing or counter-revolutionary and lose status among left-wing academia and media outlets.". If that was the case, one would have to carefully weigh the likelihood that these texts will prevent extreme political outcomes against the added risk of getting caught in the crossfire. (Of course, there are also second-order effects, like the effect of potential self-censorship, that might very well play a relevant role).

Similar considerations go for mass-downvoting comments pushing against texts like this [in a way that most likely violates community norms but is unlikely to be trolling], without anyone explaining why.

Comment by Flodorner on EA considerations regarding increasing political polarization · 2020-06-27T15:40:16.270Z · EA · GW

If you go by GDP per capita, most of Europe is behind the US but ahead of most of Asia: https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)_per_capita (growth rates in Asia are higher, though, so this might change at some point in the future).

In terms of the Human Development Index https://en.wikipedia.org/wiki/List_of_countries_by_Human_Development_Index (which seems like a better measure of "success" than just GDP), some countries (including large ones like Germany and the UK) score above the US, but others score lower. Most of Asia (except for Singapore, Hong Kong and Japan) scores lower.

For the military aspect, it kind of depends on what you mean by "failed". Europe is clearly not as militarily capable as the US, but it also seems quite questionable whether spending as much as the US on military capabilities is a good choice, especially for allies of the US that either possess nuclear deterrents themselves or are strongly connected with countries that do.

Comment by Flodorner on Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics · 2020-05-15T18:02:52.543Z · EA · GW

While I am unsure about how good of an idea it is to map out more plausible scenarios for existential risk from pathogens, I agree with the sentiment that the top-level post seems to focus too narrowly on a specific scenario.

Comment by Flodorner on Biases in our estimates of Scale, Neglectedness and Solvability? · 2020-02-27T16:38:25.164Z · EA · GW

Re bonus section: Note that we are (hopefully) taking expectations over our estimates for importance, neglectedness and tractability, such that general correlations between the factors across causes do not necessarily cause a problem. However, it seems quite plausible that our estimation errors are often correlated because of things like the halo effect.

Edit: I do not fully endorse this comment any more, but I still believe that the way we model the estimation procedure matters here. Will edit again once I am less confused.

Comment by Flodorner on Implications of Quantum Computing for Artificial Intelligence alignment research (ABRIDGED) · 2019-09-06T15:06:59.721Z · EA · GW

Maybe having a good understanding of quantum computing and how it could be leveraged in different paradigms of ML might help with forecasting AI timelines as well as dominant paradigms, to some extent?

If that was true, while not necessarily helpful for a single agenda, knowledge about quantum computing would help with the correct prioritization of different agendas.

Comment by Flodorner on Three Biases That Made Me Believe in AI Risk · 2019-02-15T17:18:20.014Z · EA · GW

"The combination of these vastly different expressions of scale together with anchoring makes that we should expect people to over-estimate the probability of unlikely risks and hence to over-estimate the expected utility of x-risk prevention measures. "

I am not entirely sure whether I understand this point. Is the argument that the anchoring effect would cause an overestimation because the "perceived distance" from an anchor grows faster per added zero than per increase of one in the exponent?

Comment by Flodorner on Critique of Superintelligence Part 2 · 2018-12-15T20:28:24.371Z · EA · GW

Directly relevant quotes from the articles for easier reference:

Paul Christiano:

"This story seems consistent with the historical record. Things are usually preceded by worse versions, even in cases where there are weak reasons to expect a discontinuous jump. The best counterexample is probably nuclear weapons. But in that case there were several very strong reasons for discontinuity: physics has an inherent gap between chemical and nuclear energy density, nuclear chain reactions require a large minimum scale, and the dynamics of war are very sensitive to energy density."

"I’m not aware of many historical examples of this phenomenon (and no really good examples)—to the extent that there have been “key insights” needed to make something important work, the first version of the insight has almost always either been discovered long before it was needed, or discovered in a preliminary and weak version which is then iteratively improved over a long time period. "

"Over the course of training, ML systems typically go quite quickly from “really lame” to “really awesome”—over the timescale of days, not months or years.

But the training curve seems almost irrelevant to takeoff speeds. The question is: how much better is your AGI than the AGI that you were able to train 6 months ago?"

AIImpacts:

"Discontinuities larger than around ten years of past progress in one advance seem to be rare in technological progress on natural and desirable metrics. We have verified around five examples, and know of several other likely cases, though have not completed this investigation. "

"Supposing that AlphaZero did represent discontinuity on playing multiple games using the same system, there remains a question of whether that is a metric of sufficient interest to anyone that effort has been put into it. We have not investigated this.

Whether or not this case represents a large discontinuity, if it is the only one among recent progress on a large number of fronts, it is not clear that this raises the expectation of discontinuities in AI very much, and in particular does not seem to suggest discontinuity should be expected in any other specific place."

"We have not investigated the claims this argument is premised on, or examined other AI progress especially closely for discontinuities."

Comment by Flodorner on Critique of Superintelligence Part 2 · 2018-12-14T10:55:54.757Z · EA · GW

Another point against the content overhang argument: While more data is definitely useful, it is not clear whether raw data about a world without a particular agent in it will be similarly useful to this agent as data obtained from its own (or sufficiently similar agents') interaction with the world. Depending on the actual implementation of a possible superintelligence, this raw data might be marginally helpful but far from being the most relevant bottleneck.

"Bostrom is simply making an assumption that such rapid rates of progress could occur. His intelligence spectrum argument can only ever show that the relative distance in intelligence space is small; it is silent with respect to likely development timespans. "

It is not completely silent. I would expect any meaningful measure for distance in intelligence space to at least somewhat correlate with timespans necessary to bridge that distance. So while the argument is not a decisive one regarding time spans, it also seems far from saying nothing.

"As such it seems patently absurd to argue that developments of this magnitude could be made on the timespan of days or weeks. We simply see no examples of anything like this from history, and Bostrom cannot argue that the existence of superintelligence would make historical parallels irrelevant, since we are precisely talking about the development of superintelligence in the context of it not already being in existence. "

Note that the argument from historical parallels is extremely sensitive to the reference class. While it seems like there has not been "anything like this" in science or engineering (although progress seems to have been quite discontinuous (but not self-reinforcing) by some metrics at times) or related to general intelligence (here it would be interesting to explore whether or not the evolution of human intelligence happened a lot faster than an outside observer would have expected from looking at the evolution of other animals, since hours and weeks seem like a somewhat anthropocentric frame of reference), narrow AI has gone from sub- to superhuman level in quite small time spans a lot recently (this is once again very sensitive to framing, so take it more as a point for the complexity of arguments from historical parallels than as a direct argument for fast take-offs being likely).

"not consistent either with the slow but steady rate of progress in artificial intelligence research over the past 60 years"

Could you elaborate? I'm not extremely familiar with the history of artificial intelligence, but my impression was that progress was quite jumpy at times, rather than slow and steady.

Comment by Flodorner on Critique of Superintelligence Part 1 · 2018-12-13T23:50:20.087Z · EA · GW

Thanks for writing this!

I think you are pointing out some important imprecisions, but I think that some of your arguments aren't as conclusive as you seem to present them to be:

"Bostrom therefore faces a dilemma. If intelligence is a mix of a wide range of distinct abilities as in Intelligence(1), there is no reason to think it can be ‘increased’ in the rapidly self-reinforcing way Bostrom speaks about (in mathematical terms, there is no single variable  which we can differentiate and plug into the differential equation, as Bostrom does in his example on pages 75-76). "

Those variables could be reinforcing each other, as one could argue they have done in the evolution of human intelligence (in mathematical terms, there is a runaway dynamic similar to the one-dimensional case for a linear vector-valued differential equation, as long as all eigenvalues are positive).
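A sketch of the multi-dimensional analogue I have in mind (with the caveat that treating "intelligence" as a vector $x$ obeying linear dynamics is a strong simplification):

```latex
\dot{x} = A x \;\Longrightarrow\; x(t) = e^{At} x(0),
```

and if all eigenvalues of $A$ are positive (or, more generally, have positive real part), $\lVert x(t) \rVert$ grows exponentially for essentially every nonzero starting point, mirroring the one-dimensional runaway case $\dot{x} = a x$ with $a > 0$.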

"This should become clear if one considers that ‘essentially all human cognitive abilities’ includes such activities as pondering moral dilemmas, reflecting on the meaning of life, analysing and producing sophisticated literature, formulating arguments about what constitutes a ‘good life’, interpreting and writing poetry, forming social connections with others, and critically introspecting upon one’s own goals and desires. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tilling the universe with paperclips. To me it seems extraordinarily unlikely that any agent capable of performing all these tasks with a high degree of proficiency would simultaneously stand firm in its conviction that the only goal it had reasons to pursue was tilling the universe with paperclips."

Why does it seem unlikely? Also, do you mean unlikely as in "agents emerging in a world similar to ours as it is now will probably not have this property" or as in "given that someone figured out how to construct a great variety of superintelligent agents, she would still have trouble constructing an agent with this property"?

Comment by Flodorner on When should EAs allocate funding randomly? An inconclusive literature review. · 2018-11-23T18:59:58.891Z · EA · GW

Yes, exactly. When first reading your summary, I interpreted it as the "for all" claim.

Comment by Flodorner on When should EAs allocate funding randomly? An inconclusive literature review. · 2018-11-22T16:27:42.931Z · EA · GW

Very interesting!

In your literature review you summarize the Smith and Winkler (2006) paper as "Prove that nonrandom, non-Bayesian decision strategies systematically overestimate the value of the selected option."

At first sight, this claim seems like it might be stronger than the claim I have taken away from the paper (which is similar to what you write later in the text): if your decision strategy is to just choose the option you (naively) expect to be best, you will systematically overestimate the value of the selected option.

If you think the first claim is implied by the second (or by something in the paper I missed) in some sense, I'd love to learn about your arguments!

"In fact, I believe that choosing the winning option does maximize expected value if all measurements are unbiased and their reliability doesn’t vary too much."

I think you are basically right, but the number of available options also plays a role here. If you consider a lot of non-optimal options for which your measurements are only slightly noisier than for the best option, you can still systematically underselect the best option. (For example, simulations suggest that with 99 N(0,1.1) and 1 N(0.1,1) variables, the last one will be maximal among the 100 only about 0.7% of the time, despite having the highest expected value).
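A minimal Monte Carlo sketch of that simulation (the 0.7% figure should be roughly reproducible with something like this):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000

# 99 options with true mean 0 but noisier measurements (sd 1.1),
# plus 1 option with the highest true mean (0.1) and less noisy measurements (sd 1).
others = rng.normal(0.0, 1.1, size=(n_trials, 99))
best = rng.normal(0.1, 1.0, size=(n_trials, 1))
measurements = np.hstack([others, best])

# Fraction of trials in which the option with the highest true mean wins the comparison.
print(np.mean(np.argmax(measurements, axis=1) == 99))  # well below the naive 1/100
```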

In this case, randomly taking one option would in fact have a higher expected value. (But it still seems very unclear how one would identify similar situations in reality, even if they existed).

Some combination of moderately varying noise and lots of options seems like the most plausible condition under which not taking the winning option might be better for some real-world decisions.

Comment by Flodorner on Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem · 2018-11-11T13:11:52.905Z · EA · GW

I think that the assumption of the existence of a funnel-shaped distribution with undefined expected value of things we care about is quite a bit stronger than assuming that there are infinitely many possible outcomes.

But even if we restrict ourselves to distributions with finite expected value, our estimates can still fluctuate wildly until we have gathered huge amounts of evidence.

So while I am sceptical of the assumption that there exists a sequence of world states with utilities tending to infinity, and even more sceptical of extremely high/low utility world states being reachable with sufficient probability for there to be undefined expected value (the absolute value of the utility of our action would have to have infinite expected value, and I'm sceptical of believing this without something at least close to "infinite evidence"), I still think your post is quite valuable for starting a debate on how to deal with low-probability events, crucial considerations, and our decision making when expected values fluctuate a lot.

Also, even if my intuition about the impossibility of infinite utilities was true (I'm not exactly sure what that would actually mean, though), the problems you mentioned would still apply to anyone who does not share this intuition.

Comment by Flodorner on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-11T12:12:19.725Z · EA · GW

I think the argument is that additional information showing that a cause has high marginal impact might divert resources towards it and away from causes with less marginal impact. And getting this kind of information does seem more likely for causes without a track record that allows for a somewhat robust estimation of their (marginal) impact.

Comment by Flodorner on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-11T12:10:29.833Z · EA · GW

Comment moved.

Comment by Flodorner on Is Neglectedness a Strong Predictor of Marginal Impact? · 2018-11-11T11:30:30.036Z · EA · GW

For clarification: (PIT_i + u_i) is the "real" tractability and importance?

The text seems to make more sense that way, but reading "u_i is the unknown (to you) importance and tractability of the cause.", I interpreted it as u_i being the "real" tractability and importance instead of just a noise term at first.

Comment by Flodorner on Debate and Effective Altruism: Friends or Foes? · 2018-11-11T10:23:09.425Z · EA · GW

Relatedly, the impromptu nature of some debating formats could also help with getting comfortable formulating answers to nontrivial questions under (time) pressure. Apart from being generally helpful, this might be especially valuable in some types of job interviews.

I've been considering investing some time into competitive debating, mostly in order to improve that skill, so if someone has data (even anecdotal) on that, please share :)

Comment by Flodorner on Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? · 2018-11-09T16:36:09.043Z · EA · GW

Interesting post!

I am quite interested in your other arguments for why EV calculations won't work for Pascal's mugging and why they might extend to x-risks. I would probably have preferred a post already including all the arguments for your case.

About the argument from hypothetical updates: My intuition is that if you assign a probability of a lot more than 0.1^10^10^10 to the mugger actually being able to follow through, this might create other problems (like probabilities of distinct events adding to something higher than 1, or priors inconsistent with Occam's razor). If that intuition (and your argument) were true (my intuition might very well be wrong and seems at least slightly influenced by motivated reasoning), one would basically have to conclude that Bayesian EV reasoning fails as soon as it involves combinations of extreme utilities and minuscule probabilities.

However, I don't think the credences for being able to influence x-risks are so low that updating becomes impossible, and therefore your first argument does not convince me not to use EV to evaluate them. I'm quite eager to see the other arguments, though.

Comment by Flodorner on Reducing existential risks or wild animal suffering? · 2018-11-01T20:19:03.447Z · EA · GW

What exactly do you mean by utility here? The Quasi-negative utilitarian framework seems to correspond to a shift of everyone's personal utility, such that the shifted utility for each person is 0 whenever this person's life is neither worth living nor not worth living.

It seems to me like a reasonable notion of utility would have this property anyway (but I might just use the word differently than other people; please tell me if there is some widely used definition contradicting this!). This reframes the discussion into one about where the zero point of utility functions should lie, which seems easier to grasp. In particular, from this point of view, Quasi-negative utilitarianism still gives rise to some form of the sadistic-repugnant conclusion.

On a broader point, I suspect that the repugnance of repugnant conclusions usually stems from confusion/disagreement about what "a life worth living" means. However, as in your article, entertaining this conclusion still seems useful in order to sharpen our intuition about which lives are actually worth living.

Comment by Flodorner on Additional plans for the new EA Forum · 2018-09-10T08:59:08.121Z · EA · GW

Are any ways of making content easier to filter (such as tags) planned?

I am rather new to the community, and there have been multiple occasions where I randomly stumbled upon old articles I hadn't read, concerned with topics I was interested in and had previously made an effort to find articles about. This seems rather inefficient.

Comment by Flodorner on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-25T07:25:24.170Z · EA · GW

"to prove this argument I would have to present general information which may be regarded as having informational hazard"

Is there any way to assess the credibility of statements like this (or whether this is actually an argument worth considering in a given specific context)? It seems like you could use this as a general purpose argument for almost everything.

Comment by Flodorner on Doning with the devil · 2018-06-16T06:48:16.781Z · EA · GW

I am not sure whether your usage of economies of scale already covers this, but it seems to make sense to highlight that what matters is the difference in the marginal value of the money for you and your adversary. If doing evil is a lot more efficient at low scales (think of distributing highly addictive drugs among vulnerable populations vs. distributing malaria nets), your adversary could be hitting diminishing returns already while your marginal returns increase, and the lottery might still not be worth it.

Comment by Flodorner on Animal Equality showed that advocating for diet change works. But is it cost-effective? · 2018-06-11T08:55:19.488Z · EA · GW

Are you talking about the individual level, or the mean? My estimate would be that for the median individual, the effect will have faded out after at most 6 months. However, the mean might be influenced by the tails quite strongly.

Thinking about it for a bit longer, a mean effect of 12 years does seem quite implausible, though. In the limiting case where only the tails matter, this would be equivalent to convincing around 25% of the initially influenced students to stop eating pork for the rest of their lives.
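(Rough arithmetic behind that equivalence, assuming the affected students have on the order of 50 years of consumption ahead of them, which is my own ballpark figure:)

```latex
0.25 \times 50\,\text{years} + 0.75 \times 0\,\text{years} \approx 12\,\text{years of mean reduction.}
```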

The upper bound for my 90% confidence interval for the mean seems to be around 3 years, while the lower bound is at 3 months. The probability mass within the interval is mostly centered to the left.

Comment by Flodorner on Animal Equality showed that advocating for diet change works. But is it cost-effective? · 2018-06-11T07:49:32.589Z · EA · GW

The claim does not seem to be exactly that there is a 10% chance of an animal advocacy video affecting consumption decisions after 12 years for a given individual.

I'd interpret it as: there is a 5% chance that the mean duration of reduction, conditional on participants reporting that they changed their behaviour based on the video, is higher than 12 years.

This could, for example, also be achieved by having a very long-term impact on very few participants. This interpretation seems a lot more plausible, although I am not at all certain whether that claim is correct. Long-term follow-up data would certainly be very helpful.

Comment by Flodorner on The counterfactual impact of agents acting in concert · 2018-05-29T20:51:16.530Z · EA · GW

At this point, I think that to analyze the $1bn case correctly, you'd have to subtract everyone's opportunity cost in the calculation of the Shapley value (if you want to use it here). This way, the example should yield what we expect.

I might do a more general writeup about Shapley values, their advantages and disadvantages, and when it makes sense to use them, if I find the time to read a bit more about the topic first.
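In the meantime, here is a minimal sketch of the kind of computation I have in mind (the players and coalition values are made up for illustration and are not taken from the $1bn example; opportunity costs could be handled by subtracting them from the coalition values):

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all orderings of the players."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = []
        for p in order:
            before = value(frozenset(coalition))
            coalition.append(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orderings) for p, t in totals.items()}

# Toy example: a project worth 10 happens only if both the funder and the founder join.
def v(coalition):
    return 10.0 if {"funder", "founder"} <= coalition else 0.0

print(shapley_values(["funder", "founder"], v))  # {'funder': 5.0, 'founder': 5.0}
```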

Comment by Flodorner on Expected cost per life saved of the TAME trial · 2018-05-29T12:52:26.586Z · EA · GW

I think it might be best to just report confidence intervals for your final estimates (Guesstimate should give you those). Then everyone can combine your estimates with their own priors on the effectiveness of interventions in general and thereby potentially correct for the high levels of uncertainty (at least in a crude way, by estimating the variance from the confidence intervals).

The variance of X can be defined as E[X^2] - E[X]^2, which should not be hard to implement in Guesstimate. However, I am not sure whether having the variance leads to more accurate updating than having a confidence interval. Optimally, you'd have the full distribution, but I am not sure whether anyone will actually do the maths to update from there. (But they could get it roughly from your Guesstimate model).
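As a rough illustration of both points (assuming, only for the interval-based shortcut, that the estimate is approximately normal, so that a central 90% interval spans about 3.29 standard deviations):

```python
import numpy as np

def variance_from_samples(samples):
    """Var(X) = E[X^2] - E[X]^2, computable from Guesstimate-style Monte Carlo samples."""
    samples = np.asarray(samples, dtype=float)
    return np.mean(samples ** 2) - np.mean(samples) ** 2

def sd_from_90ci(lower, upper):
    """Crude standard deviation estimate from a 90% confidence interval, assuming normality."""
    return (upper - lower) / 3.29  # 2 * 1.645 standard deviations span a central 90% interval

print(variance_from_samples(np.random.default_rng(0).normal(10, 2, 100_000)))  # close to 4
print(sd_from_90ci(6.7, 13.3))                                                 # close to 2
```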

I might comment more on some details and the moral assumptions if I find the time for it soon.

Comment by Flodorner on Three levels of cause prioritisation · 2018-05-28T14:35:33.267Z · EA · GW

I disagree. If we are fairly certain that the average intervention in Cause X is 10 times more effective than the average intervention in Cause Y (for comparison, 80,000 Hours currently believes that AI safety work is 1000 times as effective as global health), it seems like we should strongly prioritize Cause X. Even if there are some interventions in Cause Y that are more effective than the average intervention in Cause X, finding them is probably as costly as finding the most effective interventions in Cause X (unless there is a specific reason why evaluating cost-effectiveness in Cause X is especially costly, or the distributions of intervention effectiveness are radically different between the two causes). Depending on how much we can improve on our current comparative estimates of cause effectiveness, the potential impact of doing so could be quite high, since it essentially multiplies the effects of our lower-level prioritization. Therefore, high- to medium-level prioritization in combination with low-level prioritization restricted to the best causes seems like the way to go. On the other hand, it seems at least plausible that we cannot improve our high-level prioritization significantly at the moment and should therefore focus on the lower level within the most effective causes.

Comment by Flodorner on The counterfactual impact of agents acting in concert · 2018-05-28T14:09:06.284Z · EA · GW

"The alternative approach (which I argue is wrong) is to say that each of the n A voters is counterfactually responsible for 1/n of the $10bn benefit. Suppose there are 10m A voters. Then each A voter’s counterfactual social impact is 1/10m$10bn = $1000. But on this approach the common EA view that it is rational for individuals to vote as long as the probability of being decisive is not too small, is wrong. Suppose the ex ante chance of being decisive is 1/1m. Then the expected value of Emma voting is a mere 1/1m$1000 = $0.001. On the correct approach, the expected value of Emma voting is 1/10m*$10bn = $1000. If voting takes 5 minutes, this is obviously a worthwhile investment for the benevolent voter, as per common EA wisdom."

I am not sure whether anyone is arguing for discounting twice. The alternative approach using the Shapley value would divide the potential impact amongst the contributors, but not additionally account for the probability. Therefore, in this example both approaches seem to assign the same counterfactual impact.

More generally, it seems like most disagreements in this thread could be resolved by a more charitable interpretation of the other side (from both sides, as the validity of your argument against rohinmshah's counterexample seems to show).

Right now, a comment from someone more proficient with the Shapley value arguing against

"Also consider the $1bn benefits case outlined above. Suppose that the situation is as described above but my action costs $2 and I take one billionth of the credit for the success of the project. In that case, the Shapely-adjusted benefits of my action would be $1 and the costs $2, so my action would not be worthwhile. I would therefore leave $1bn of value on the table."

might be helpful for a better understanding.

Comment by Flodorner on Expected cost per life saved of the TAME trial · 2018-05-28T09:06:17.650Z · EA · GW

Interesting analysis! Since you already have confidence intervals for a lot of your model's factors, using the Guesstimate web tool to get a more detailed idea of the uncertainty in the final estimate might be helpful, since some Bayesian discounting based on the estimate's uncertainty might be a sensible thing to do (https://www.lesswrong.com/posts/5gQLrJr2yhPzMCcni/the-optimizer-s-curse-and-how-to-beat-it).
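A minimal sketch of the kind of Bayesian discounting described in the linked post (the prior parameters are made up, and both the prior and the estimate are assumed to be normal on whatever scale the cost-effectiveness is expressed, e.g. log cost per life saved):

```python
def shrunk_estimate(prior_mean, prior_sd, estimate, estimate_sd):
    """Precision-weighted posterior mean for a normal prior and a normal, unbiased noisy estimate."""
    prior_precision = 1.0 / prior_sd ** 2
    estimate_precision = 1.0 / estimate_sd ** 2
    return (prior_precision * prior_mean + estimate_precision * estimate) / (
        prior_precision + estimate_precision
    )

# A very uncertain but optimistic estimate gets pulled strongly towards the prior over
# typical intervention effectiveness; a tight estimate barely moves.
print(shrunk_estimate(prior_mean=0.0, prior_sd=1.0, estimate=5.0, estimate_sd=3.0))  # 0.5
print(shrunk_estimate(prior_mean=0.0, prior_sd=1.0, estimate=5.0, estimate_sd=0.5))  # 4.0
```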

It might also make sense to make your ethical assumptions more explicit in the beginning (https://www.givewell.org/how-we-work/our-criteria/cost-effectiveness/comparing-moral-weights), especially since the case against aging seems to be less intuitive than most of GiveWell's interventions.