Posts

[Meta] Is it legitimate to ask people to upvote posts on this forum? 2021-06-29T07:42:57.439Z
Book review: Architects of Intelligence by Martin Ford (2018) 2020-08-11T17:24:16.833Z
ofer's Shortform 2020-02-19T06:53:16.647Z

Comments

Comment by ofer on A mesa-optimization perspective on AI valence and moral patienthood · 2021-09-16T09:27:49.953Z · EA · GW

(I don't know/remember the details of AlphaGo, but if the setup involves a value network that is trained to predict the outcome of MCTS-guided gameplay, that seems to make it more likely that the value network is doing some sort of search during inference.)

Comment by ofer on A mesa-optimization perspective on AI valence and moral patienthood · 2021-09-14T16:24:47.346Z · EA · GW

Essentially the more the setup factors out valence-relevant computation (e.g. by separating out a module, or by accessing an oracle as in your example) the less likely it is for valenced processing to happen within the agent.

I think the analogy to humans suggests otherwise. Suppose a human feels pain in their hand due to touching something hot. We can regard all the relevant mechanisms in their body outside the brain—those that cause the brain to receive the relevant signal—as mechanisms that have been "factored out from the brain". And yet those mechanisms are involved in morally relevant pain. In contrast, suppose a human touches a radioactive material until they realize it's dangerous. Here there are no relevant mechanisms that have been "factored out from the brain" (the brain needs to use ~general reasoning); and there is no morally relevant pain in this scenario.

Though generally if "factoring out stuff" means that smaller/less-capable neural networks are used, then maybe it can reduce morally relevant valence risks.

Comment by ofer on A mesa-optimization perspective on AI valence and moral patienthood · 2021-09-13T22:13:10.681Z · EA · GW

GPT-3 is of that form, but AlphaGo/MuZero isn't (I would argue).

I don't see why. The NNs in AlphaGo and MuZero were trained using some SGD variant (right?), and SGD variants can theoretically yield mesa-optimizers.

Comment by ofer on A mesa-optimization perspective on AI valence and moral patienthood · 2021-09-11T18:17:35.841Z · EA · GW

Re comment 1: Yes, sorry, this was just meant to point at a potential parallel, not to work out the parallel in detail. I think it'd be valuable to work out the potential parallel between the DM agent's predicate predictor module (Fig12/pg14) and my factored-noxiousness-object-detector idea. I just took a brief look at the paper to refresh my memory, but if I'm understanding this correctly, it seems to me that this module predicts which parts of the state prevent goal realization.

I guess what I don't understand is how the "predicate predictor" thing can make it so that the setup is less likely to yield models that support morally relevant valence (if you indeed think that). Suppose the environment is modified such that the observation that the agent gets in each time step includes the value of every predicate in the reward specification. That would make the "predicate predictor" useless (I think; just from a quick look at the paper). Would that new setup be more likely than the original to yield models that have morally relevant valence?

Comment by ofer on A mesa-optimization perspective on AI valence and moral patienthood · 2021-09-10T22:29:41.512Z · EA · GW

This topic seems to me both extremely important and neglected. (Maybe it's neglected because it ~requires some combination of ML and philosophy backgrounds that people rarely have).

My interpretation of the core hypothesis in this post is something like the following: A mesa-optimizer may receive evaluative signals that are computed by some subnetwork within the model (a subnetwork that was optimized by the base optimizer to give "useful" evaluative signals w.r.t. the base objective). Those evaluative signals can constitute morally relevant valenced experience. This hypothesis seems plausible to me.

Some further comments:

  1. Re:

    For instance, the DeepMind agent discussed in section 4 pre-processes an evaluation of its goal achievement. This evaluative signal is factored/non-integrated in a sense, and so it may not be interacting in the right way with the downstream, abstract processes to reach conscious processing.

I don't follow. I'm not closely familiar with the Open-Ended Learning paper, but from a quick look my impression is that it's basically standard RL in multi-agent environments, with more diversity in the training environments than in most works. I don't understand what you mean when you say that the agent "pre-processes an evaluation of its goal achievement" (and why the analogy to humans & evolution is less salient here, if you think that).

  2. Re:

    Returning to Tomasik’s assumption, “RL operations are relevant to an agent’s welfare”, the functionalist must disagree. At best we can say that RL operations can be instrumentally valuable by (positively) modifying the valence system.

    (I assume that an "RL operation" refers to things like an update of the weights of a policy network.) I'm not sure what you mean by "positively" here. An update to the weights of the policy network can negatively affect an evaluative signal.

[EDIT: Also, re: "Compare now to mesa-optimizers in which the reward signal is definitionally internal to the system". I don't think that the definition of mesa-optimizers involves a reward signal. (It's possible that a mesa-optimizer will never receive any evidence about "how well it's doing".)]

Comment by ofer on Investigating how technology-focused academic fields become self-sustaining · 2021-09-07T12:41:53.538Z · EA · GW

We define a self-sustaining field as “an academic research field that is capable of attracting the necessary funds and expertise for future work without reliance on a particular small group of people or funding sources” (see the next subsection for more on this definition).

I think another important aspect to consider here is the goals of the funders. For example, an academic field may get huge amounts of funding from several industry actors that try to influence/bias researchers in certain ways (e.g. for the purpose of avoiding regulation). Such an academic field may satisfy the criterion above for being "self-sustaining" while having lower EV than it would have if it had only a small group of EA-aligned funders.

Comment by ofer on Impact Certificates on a Blockchain · 2021-08-11T17:42:25.762Z · EA · GW

I think "oppositional work" can't always serve as a way to mitigate the harm of a net-negative projects (e.g. it doesn't seem obvious what the "oppositional work" is for a net-negative outreach intervention).

Simply shorting shares doesn't seem to me like a solution either. Suppose traders anticipate that the price of the share will be very high at some point in the future (due to the chance that the project ends up being very beneficial). Shorting the share will not substantially affect its price if the amount of money that participating traders can invest is sufficiently large.

Comment by ofer on Impact Certificates on a Blockchain · 2021-08-10T21:37:28.670Z · EA · GW

Here's a concrete example: Suppose there's 50% chance that next month a certain certificate share will be worth $10, because the project turns out to be beneficial; and there's 50% chance that the share will be worth $0, because the project turns out to be extremely harmful. The price of the share today would be ~$5, even though the EV of the underlying project is negative. The market treats the possibility that "the project turns out to be extremely harmful" as if it were "the project turns out to be neutral".
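To make the asymmetry concrete, here is a minimal sketch (my own illustration, not part of the original comment; the 50%/$10/$0 figures are from the example above, while the social-value numbers are arbitrary assumptions). It shows how the expected share price stays around $5 because the price is floored at $0, even though the project's expected value is negative:

```python
# Minimal sketch of the example above. The probabilities and share values come
# from the comment; the social-value figures are arbitrary assumptions chosen
# only to make the project net-negative in expectation.
p_beneficial = 0.5
share_value_if_beneficial = 10.0   # share worth $10 if the project turns out well
share_value_if_harmful = 0.0       # share price cannot go below $0
social_value_if_beneficial = 10.0  # assumed upside of the project
social_value_if_harmful = -100.0   # assumed (large) harm if it turns out badly

expected_share_price = (p_beneficial * share_value_if_beneficial
                        + (1 - p_beneficial) * share_value_if_harmful)
project_expected_value = (p_beneficial * social_value_if_beneficial
                          + (1 - p_beneficial) * social_value_if_harmful)

print(expected_share_price)    # 5.0   -> the market is happy to hold the share
print(project_expected_value)  # -45.0 -> even though the project is net-negative in EV
```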

How do you define “net-negative” if not in terms of expected value? Stochastic dominance? Or do you mean that the ex ante expected value of an intervention can be great even though its value is net-negative ex post?

These seem like very important questions. I guess the concern I raised is an argument in favor of using ex ante expected value. (Though I don't know with respect to what exact point in time. The "IPO" of the project?) And then there can indeed be a situation where shares of a project that is already known to be a failure that caused harm are traded at a high price (because the project was really a good idea ex ante).

  1. Investors should be able to short just as easily as they can long, and their profits from correctly predicting downside should be just as unbounded as their profits from correctly predicting upside. This can be approximated with borrowing and lending or with a perpetual future. The first approach requires that someone offers the tokens for lending and that the short trader is ready to pay interest on them. The second only has minor issues that I’m aware of (e.g., see Calstud29’s comment), so that, I think, would be a great feature. Then again the ability to lend tokens that you have would create an incentive to buy and hold.

I don't see how letting traders bet against a certificate by shorting its shares solves the issue that I raised. Re-using my example above: Suppose there's 50% chance that next month a share will be worth $10 (after the project turns out to be beneficial) and 50% chance that the share will be worth $0 (after the project turns out to be extremely harmful). The price of the share today would be ~$5. Why would anyone short these shares if they currently trade at $5? Doing so will result in losing money in expectation.

Perhaps the mechanism you have in mind here is more like the one suggested by MichaelStJules (see my reply to his comment).

  1. Profit-oriented investors only care about profits that they make in futures in which they can spend them.

That's a great point. This also applies to traders who go long on a share (potentially making them give less weight to the downside risks of the project).

getting sophisticated altruists to vote on what projects to include in intervention shares can serve to include only fairly robust projects (to the best of our current knowledge).

I think something like this can potentially be a great solution. Though there may be a risk that such a market will cause other crypto enthusiasts to create competing markets that don't have this mechanism ("our market is truly decentralized, not like that other one!").

If you were just referring to the ex ante vs. ex post distinction, then I think it’s fair that people can get lucky by betting on risky projects (risky in the sense of downside risks)

My concern here is about net-negative projects, not risky projects in general (risky projects can be net-positive).

Comment by ofer on Impact Certificates on a Blockchain · 2021-08-10T21:04:00.971Z · EA · GW

That's an interesting line of thought. Some potential problems:

  1. There may be a wealthy actor that doesn't like a certain net-positive intervention (e.g. because they are a company that tries to avoid the regulation that the intervention aims to impose). Such an actor can attack the "positive shares" by buying all the "negative shares" and then artificially making their price arbitrarily high (by trading with themselves).

  2. A more speculative concern (not specific to your idea): Suppose most traders believe that, conditional on an existential catastrophe happening, owning the right certificate shares matters less. This may cause traders to give less weight to downside risks when making their decisions. (RowanBDonovan mentioned this issue here.)

Comment by ofer on Most research/advocacy charities are not scalable · 2021-08-07T18:28:32.780Z · EA · GW

Other than regranting, GiveWell's largest expense in 2020 was staff salaries - they spent just over $3 million on salaries. In total, they spent about $6 million (excluding regranting). GiveWell would have to grow to 20x the size in order to become a $100 million 'megaproject' [1].

I don't see why we should treat the funds they regrant differently from their salary expenses (in this context). GiveWell is a good counterexample to the claim that "It is very hard for a charity to scale to more than $100 million per year without delivering a physical product." GiveWell could easily use an additional $100M (e.g. by simply regranting it to GiveDirectly).

Comment by ofer on Impact Certificates on a Blockchain · 2021-08-07T11:53:36.542Z · EA · GW

Re the shorting related ideas: maybe you're thinking about mechanisms that I'm not familiar with, but I don't currently see how these approaches can help here. Certificate shares for a risky, net-negative intervention can have a very high value according to a correct fundamental analysis (due to the chance that the intervention will end up being very beneficial). In such cases traders who would "bet against the certificate" will lose money in expectation.

Comment by ofer on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T08:39:26.849Z · EA · GW

Promise that our purchasing decisions will be based on our estimate of how much total impact the action represented by the NFT will have.

It may be critical that the purchasing decisions will somehow account for historical risks (even ones that did not materialize and are no longer relevant), otherwise this approach may fund/incentivize net-negative interventions that are extremely risky (and have some chance of being very beneficial). I elaborated some more on this here.

Comment by ofer on Impact Certificates on a Blockchain · 2021-08-05T12:13:47.411Z · EA · GW

In particular, do you see downside risks that I’ve overlooked, i.e. risks that are not merely like a failure of the project but create net harm?

Related to what you wrote under the "General Use" section, I think we should consider the risks from funding "very risky altruistic projects" that are actually net-negative, even though they have a chance of ending up being extremely beneficial. The root of the problem here is that certificate shares can never have a negative market price, even if the underlying charity/project/intervention ends up being extremely harmful. So from the perspective of a certificate trader, their financial risk from their purchase is limited to the amount they invest, while their upside is unlimited. In other words, the expected future price of a certificate share (and thus its price today) can be high even if everyone thinks that the underlying charity/project/intervention has a very negative expected value.

Is it possible to make it so that the estimation of the share value, from the perspective of certificate traders, will somehow account for the historical downside risks of the charity/project/intervention? (Even if by now the downside risks no longer exist and the charity/project/intervention ended up being extremely beneficial.)

Comment by ofer on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-04T17:24:18.700Z · EA · GW

But maybe you don't consider this much evidence, if you posit that Nick Bostrom specifically has unusually high discernment, specifically enough to donate to things in the band of activities that are "speculative/weird/non-legible" from the perspective of the relevant donors, but not speculative/weird/non-legible enough that the donor lottery administration won't permit this.

My reasoning here is indeed based specifically on the track record of Nick Bostrom. (Also, I'm imagining here a theoretical donor lottery where the winner has 100% control over the money that they won.)

I guess my rejoinder here is just an intuitive sense of disbelief? Several (say >=3?) orders of magnitude above 1 million gets you >1B, and as can be deduced in the figures in the post above, this is already well over the annual long-termist spending every year. If we believe that Nick Bostrom can literally accomplish much more good with 1 million than money allocated by the rest of the longtermist EA movement combined (including all money sent to FHI, where he works), isn't this really wild?

I was not comparing $1M in the hands of Bostrom to $1B in the hands of a random longtermism-aligned person. (The $1B would plausibly be split across many grants, and it's plausible that Bostrom would end up controlling way more than $1M out of it.)

As an aside, without thinking about it much, it seems to me that the EV from the publication of the book Superintelligence is plausibly much higher than the total EV from everything else that was accomplished by the rest of the longtermist EA movement so far. (I can easily imagine myself updating away from that if I try to enumerate the things that were accomplished by the longtermist EA movement).

Also why aren't we sending more money to Nick Bostrom to regrant?

To answer this I think that the word "we" should be replaced with something more specific. Why don't grantmakers at longtermism-aligned grantmaking orgs send more money to Bostrom to regrant? One response is that there is probably nothing analogous to the efficient-market hypothesis for EA grantmaking (see the last paragraph here). Also, the grantmakers are in implicit competition with each other over influence on future grant funds. A grantmaker who makes grants that are speculative, weird, non-legible, or have a high probability of failing may tend to lose influence over future grant funds, and perhaps reduce the amount of future longtermist funding that their org can give.

Imagine that Bostrom uses the additional $1M to hire another assistant, or some manager for FHI, which simply results in Bostrom being a bit more productive. Looking at this through the lens of the grantmakers' incentives, how would that $1M grant compare to the average LTFF grant?

I'm confused about what you're saying here. P(B| do A) is not evidence against P(A|B), except in very rare circumstances.

If we estimate P(A|B) based on a correlation that we observe between A and B then the existence of a causal relationship from A to B is indeed evidence that should update our estimate of P(A|B) towards a lower value.

Comment by ofer on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-03T16:33:20.265Z · EA · GW

If Bostrom, a very high-status figure within longtermist EA, has really good donation opportunities to the tune of 1 million, I doubt it'd be unfunded.

Even 'very high-status figures within longtermist EA' can only control a limited amount of funding, especially for requests that are speculative/weird/non-legible from the perspective of the relevant donors. I don't know what the bar is for "really good donation opportunities", but the relevant thing here is to compare the EV of that $1M in the hands of Bostrom to the EV of that $1M in the hands of other longtermism-aligned people.

Less importantly, you rely here on the assumption that being "a very high-status figure within longtermist EA" means you can influence a lot of funding, but the causal relationship may mostly be going in the other direction. Bostrom (for example) probably got his high status in longtermist EA mostly from his influential work, and not from being able to influence a lot of funding.

I also feel like there are similar analogous experiments made in the past where relatively low oversight grantmaking power was given to certain high-prestige longtermist EA figures (e.g. here and here). You can judge for yourself whether impact "several orders of magnitude higher" sounds right, personally I very much doubt it.

To be clear, I don't think my reasoning here applies generally to "high-prestige longtermist EA figures". Though this conversation with you made me think about this some more and my above claim now seems to me too strong (I added an EDIT block).

Comment by ofer on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-03T10:06:16.655Z · EA · GW

This seems like a fairly surprising claim to me, do you have a real or hypothetical example in mind?

Imagine that all the longtermism ~aligned people in the world participate in a "longtermism donor lottery" that will win one of them $1M. My estimate is that the EV of that $1M, conditional on person X winning, is several orders of magnitude larger for X=[Nick Bostrom] than for almost any other value of X.

[EDIT: following the conversation here with Linch I thought about this some more, and I think the above claim is too strong. My estimate of the EV for many values of X is very non-robust, and I haven't tried to estimate the EV for all the relevant values of X. Also, maybe potential interventions that cause there to be more longtermism-aligned funding should change my reasoning here.]

EDIT: Also I feel like in many such situations, such people should almost certainly become grantmakers!

Why? Do you believe in something analogous to the efficient-market hypothesis for EA grantmaking? What mechanism causes that? Do grantmakers who make grants with higher-than-average EV tend to gain more and more influence over future grant funds at the expense of other grantmakers? Do people who appoint such high-EV grantmakers tend to gain more and more influence over future grantmaker-appointments at the expense of other people who appoint grantmakers?

Comment by ofer on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-01T13:08:34.735Z · EA · GW

Personally, if given the choice between finding an extra person for one of these roles who’s a good fit or someone donating $X million per year, to think the two options were similarly valuable, X would typically need to be over three, and often over 10 (where this hugely depends on fit and the circumstances).

(Maybe you already think so, but...) it probably also depends a lot on the identity of that "someone" who is donating the $X (even if we restrict the discussion to, say, potential donors who are longtermism-aligned). Some people may have a comparative advantage with respect to their ability to donate effectively such that the EV from their donation would be several orders of magnitude larger than the "average EV" from a donation of that amount.

Comment by ofer on All Possible Views About Humanity's Future Are Wild · 2021-07-15T21:58:32.578Z · EA · GW

The three critical probabilities here are Pr(Someone makes an epistemic mistake when thinking about their place in history), Pr(Someone believes they live at the HoH|They haven’t made an epistemic mistake), and Pr(Someone believes they live at the HoH|They’ve made an epistemic mistake).

I think the more decision-relevant probabilities involve "Someone believes they should act as if they live at the HoH" rather than "Someone believes they live at the HoH". Our actions may be much less important if 'this is all a dream/simulation' (for example). We should make our decisions in the way we wish everyone-similar-to-us-across-the-multiverse would make their decisions.

As an analogy, suppose Alice finds herself getting elected as the president of the US. Let's imagine there is an astronomically large number of citizens in the US. So Alice reasons that it's way more likely that she is delusional than that she is actually the president of the US. Should she act as if she is the president of the US anyway, or rather spend her time trying to regain her grip on reality? The citizens want everyone in her situation to choose the former. It is critical to have a functioning president. And it does not matter if there are many delusional citizens who act as if they are the president. Their "mistake" does not matter. What matters is how the real president acts.

Comment by ofer on The Centre for the Governance of AI is becoming a nonprofit · 2021-07-09T12:01:55.713Z · EA · GW

Related to the concern that I raised here: I recommend that interested readers listen to (or read the transcript of) this FLI podcast episode with Mohamed Abdalla about their paper: "The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity".

Comment by ofer on The Centre for the Governance of AI is becoming a nonprofit · 2021-07-09T11:37:37.575Z · EA · GW

Will GovAI in its new form continue to deal with the topic of regulation (i.e. regulation of AI companies by states)?

DeepMind is owned by Alphabet (Google). Many interventions that are related to AI regulation can affect the stock price of Alphabet, which Alphabet is legally obligated to try to maximize (regardless of the good intentions that many senior executives there may have). If GovAI is co-led by an employee of DeepMind, there is seemingly a severe conflict-of-interest issue regarding anything that GovAI does (or avoids doing) with respect to the topic of regulating AI companies.

GovAI's research agenda (which is currently linked to from their 'placeholder website') includes the following:

[...] At what point would and should the state be involved? What are the legal and other tools that the state could employ (or are employing) to close and exert control over AI companies? With what probability, and under what circumstances, could AI research and development be securitized--i.e., treated as a matter of national security--at or before the point that transformative capabilities are developed? How might this happen and what would be the strategic implications? How are particular private companies likely to regard the involvement of their host government, and what policy options are available to them to navigate the process of state influence? [...]

How will this part of the research agenda be influenced by GovAI being co-led by a DeepMind employee?

Comment by ofer on [Meta] Is it legitimate to ask people to upvote posts on this forum? · 2021-06-29T07:47:39.576Z · EA · GW

I think this method of "promoting a post" should be discouraged in the EA community.

The community's attention is a limited resource. Gaining more upvotes in an "artificial" way is roughly a zero-sum game with other writers on this forum and it adds noise to the useful signal that the karma score provides. It also seems counterproductive in terms of fostering good coordination norms within the community.

Comment by ofer on EA Funds has appointed new fund managers · 2021-03-30T09:35:34.111Z · EA · GW

Committee members recused themselves from some discussions and decisions in accordance with our conflict of interest policy.

Is that policy public?

Comment by ofer on Introducing The Nonlinear Fund: AI Safety research, incubation, and funding · 2021-03-22T16:13:19.337Z · EA · GW

I'm not that aware of what the non-technical AI Safety interventions are, aside from semi-related things like working on AI strategy and policy (i.e. FHI's GovAI, The Partnership on AI) and advocating against shorter-term AI risks (i.e. Future of Life Institute's work on Lethal Autonomous Weapons Systems).

Just wanted to quickly flag: I think the more popular interpretation of the term AI safety points to a wide landscape that includes AI policy/strategy as well as technical AI safety (which is also often referred to by the term AI alignment).

Comment by ofer on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-01T17:48:59.625Z · EA · GW

Apart from the biological anchors approach, what efforts in AI timelines or takeoff dynamics forecasting—both inside and outside Open Phil—are you most excited about?

Comment by ofer on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-01T17:48:37.531Z · EA · GW

Imagine you win $10B in a donor lottery. What sort of interventions—that are unlikely to be funded by Open Phil in the near future—might you fund with that money?

Comment by ofer on Promoting Effective Giving and Giving What We Can within EA Groups · 2020-11-10T13:16:50.924Z · EA · GW

Regarding the first potential change: it seems helpful to me (consider also "inclined" -> "inclined/able"). Regarding the second one, I was not sure at first that "resign" here means ceasing to follow through after having taken the pledge.

For both changes, consider wording it such that it's clear that we should consider the runway / financial situation factors over a person's entire life (rather than just their current plans and financial situation) and the substantial uncertainties that are involved.

Comment by ofer on Promoting Effective Giving and Giving What We Can within EA Groups · 2020-11-09T07:24:45.807Z · EA · GW

Hi Luke,

I recommend expanding the discussion in the "Things to be careful of" section. In particular, it seems worthwhile to estimate the impact of people in EA not having as much runway as they could have.

You mentioned that some people took The Pledge and did not follow through. It's important to also consider the downsides in situations where people do follow through despite regretting having taken The Pledge. People in EA are selected for scrupulousness which probably correlates strongly with pledge-keeping. As an aside, maybe it's worth adding to The Pledge (or The Pledge 2.0?) some text such that the obligation is conditional on some things (e.g. no unanticipated developments that would make the person regret taking the pledge).

Comment by ofer on How much does a vote matter? · 2020-11-07T14:20:37.531Z · EA · GW

When one assumes that the number of people that are similar to them (roughly speaking) is sufficiently small, I agree.

Comment by ofer on How much does a vote matter? · 2020-11-07T14:11:35.889Z · EA · GW

The costs are higher for people who value the time of people that are correlated with them, while the benefits are not.

Comment by ofer on How much does a vote matter? · 2020-11-05T18:31:27.370Z · EA · GW

Wikipedia's entry on superrationality probably explains the main idea here better than me.

Comment by ofer on Thoughts on whether we're living at the most influential time in history · 2020-11-05T04:15:37.244Z · EA · GW

I don’t make any claims about how likely it is that we are part of a very long future. Only that, a priori, the probability that we’re *both* in a very large future *and* one of the most influential people ever is very low. For that reason, there aren’t any implications from that argument to claims about the magnitude of extinction risk this century.

I don't understand why there are implications from that argument to claims about the magnitude of our influentialness either.

As an analogy, suppose Alice bought a lottery ticket that will win her $100,000,000 with an extremely small probability. The lottery is over, and she is now looking at the winning numbers on her phone, comparing them one by one to the numbers on her ticket. Her excitement grows as she finds more and more of the winning numbers on her ticket. She managed to verify that she got 7 numbers right (amazing!), but before she finished comparing the rest of the numbers, her battery died. She tries to find a charger, and in the meantime she's considering whether to donate the money to FHI if she wins. It occurs to her that the probability that *both* [a given person wins the lottery] *and* [donating $100,000,000 to FHI will reduce existential risk] is extremely small. She reasons that, sure, there are some plausible arguments that donating $100,000,000 to FHI will have a huge positive impact, but are those arguments strong enough considering her extremely small prior probability in the above conjunction?

Comment by ofer on Thoughts on whether we're living at the most influential time in history · 2020-11-03T15:53:44.717Z · EA · GW

This topic seems extremely important and I strongly agree with your core argument.

As Will notes, following Brian Tomasik and others, the simulation argument dampens enthusiasm for influencing the far future.

There is no reason for longtermists to care specifically about "the far future" (interpreted as our future light cone, or whatever spacetime we can causally affect). Most longtermists probably intrinsically care about all of spacetime across Reality. Even if the probability that we are not in a short simulation is 1e-50, longtermists still have strong reasons to strive for existential security. One of those reasons is that striving for existential security would make all civilizations that are similar to us (i.e. civilizations whose behavior is correlated with ours) more likely to successfully use their cosmic endowment in beneficial ways.

Regarding the part about the outside view argument being also an argument against patient philanthropy: this seems to depend on some non-obvious assumptions. If the population size in some future year X is similar to today's population size, and the fraction of wealth generated until X but not inherited by people living in X is sufficiently small, then a random person living in X will be able to donate an amount that is similar (in expectation) to the worth of a patient-philanthropy-fund that was donated by a random person living today.

Comment by ofer on How much does a vote matter? · 2020-11-03T07:22:51.843Z · EA · GW

If I'm in the voting booth, and I suddenly decide to leave the ballot blank, how does that affect anyone else?

It doesn't affect anyone else in a causal sense, but it does affect people similar to you in a decision-relevant-to-you sense.

Imagine that while you're in the voting booth, in another identical voting booth there is another person who is an atom-by-atom copy of you (and assume our world is deterministic). In this extreme case, it is clear that you're not deciding just for yourself. When we're talking about people who are similar to you rather than copies of you, a probabilistic version of this idea applies.

Comment by ofer on How much does a vote matter? · 2020-10-31T18:38:16.908Z · EA · GW

What I mean to say is that, roughly speaking, one should compare the world where people like them vote to the world where people like them don't vote, and choose the better world. That can yield a different decision than when one decides without considering the fact that they're not deciding just for themselves.

Comment by ofer on How much does a vote matter? · 2020-10-31T18:37:47.777Z · EA · GW

What I wrote is indeed aligned with evidential decision theory (EDT). The objections to EDT that you mentioned don't seem to apply here. When you decide whether to vote you don't decide just for yourself, but rather you decide (roughly speaking) for everyone who is similar to you. The world will become better or worse depending on whether it's good or bad that everyone-who-is-similar-to-you decides to vote/not-vote.

Comment by ofer on How much does a vote matter? · 2020-10-31T09:35:40.069Z · EA · GW

I don't think that the chance of the election hinging on a single vote is the right thing to look at. One should decide based on the fact that other people similar to them are likely to act similarly. E.g. a person reading this post might decide whether to vote by asking themselves whether they want 300 people on the EA forum to each spend an hour (+ face COVID-19 risk?) on voting. (Of course, this reasoning neglects a much larger group of people that are also correlated with them.)

Comment by ofer on N-95 For All: A Covid-19 Policy Proposal · 2020-10-28T23:49:54.189Z · EA · GW

I agree that the issue I raised does not interfere with this proposed intervention (sorry for not making this clear).

Re availability, googling for the term [buy n95 masks] gives some relevant pointers within the first 2 result pages. There are probably many counterfeit respirators out there and these sellers don't seem well-known, but one may still want to bet on them if the manufacturer's website offers a way to authenticate the validity of some unique ids on the respirators etc. (3M has something like this). Note: I'm not recommending the above google search as a way to buy respirators; people may have better alternatives depending on where they live (e.g. in Israel one can buy n95 respirators from a well-known retailer).

Comment by ofer on N-95 For All: A Covid-19 Policy Proposal · 2020-10-28T13:24:24.202Z · EA · GW

Other than governments' willingness to pay, I think another important factor here is the popular stance that it would be immoral for manufacturers to sell respirators at a price that is substantially above marginal cost (regardless of market price). Maybe if manufacturers were "allowed" to sell respirators for $3 (without a major PR/regulatory risk) their marginal profit would be 20x larger than normal, and would draw major investments and efforts into manufacturing respirators.

[EDIT: In support of this, consider the following (from 3M's website, dated 2020-03-31): "3M has not changed the prices we charge for 3M respirators as a result of the COVID-19 outbreak."]

It is now 7 months later, and you still generally cannot buy N-95 masks.

I'm not sure what you mean by "cannot buy". The question is at what price it is feasible to buy good respirators. (I think at ~8-11 USD per respirator it's probably possible to buy good respirators, at least in the US and Israel).

Comment by ofer on Avoiding Munich's Mistakes: Advice for CEA and Local Groups · 2020-10-15T16:33:18.024Z · EA · GW

Thank you for writing this important post Larks!

I would add that the harm from cancel culture's chilling effect may be a lot more severe than what people tend to imagine. The chilling effect does not only prevent people from writing things that would actually get them "canceled". Rather, it can prevent people from writing things that they merely have a non-negligible credence (e.g. 0.1%) of getting them canceled (at some point in the future), which is probably a much larger and more important set of things/ideas that we silently lose.

Comment by ofer on ofer's Shortform · 2020-10-04T16:21:26.566Z · EA · GW

[Certificates of Impact]

To implement certificates of impact we need to decide how we want projects to be evaluated. The following is a consideration that seems to me potentially important (and I haven't seen it mentioned yet):

If the evaluation of a project ignores substantial downside risks that the project once had but no longer has (because fortunately things turned out well), the certificate market might incentivize people to carry out risky net-negative projects: if things turn out great, the project's certificates will be worth a lot, and thus when the project is just starting, the expected future value of its certificates is large. (Impact certificates can never have a negative market price, even if the project's impact turns out to be horrible.)

Comment by ofer on Are social media algorithms an existential risk? · 2020-09-16T12:55:01.333Z · EA · GW

Perhaps not one that "threatens the premature extinction of Earth-originating intelligent life" (Bostrom, 2012)

I just want to flag that the full sentence from that paper is: "An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom 2002)."

Comment by ofer on Are social media algorithms an existential risk? · 2020-09-16T12:39:24.914Z · EA · GW

From an AI safety perspective, the algorithms that create the feeds that social media users see do have some properties that make them potentially more concerning than most AI applications:

  1. The top capabilities are likely to be concentrated rather than distributed. For example, very few actors in the near future are likely to invest resources in such algorithms on a scale similar to Facebook's.
  2. The feed-creation-solution (or policy, in reinforcement learning terminology) being searched for has a very rich real-world action space (e.g. showing some post X to some user Y, where Y is any person from a set of 3 billion FB users).
  3. The social media company is incentivized to find a policy that maximizes users' time-spent over a long time horizon (rather than using a very small discount factor).
  4. Early failures/deception-attempts may be very hard to detect, especially if the social media company itself is not on the lookout for such failures.

These properties seem to make it less likely that relevant people would see sufficiently alarming small-scale failures before the point where some AI systems pose existential risks.

Comment by ofer on Challenges in evaluating forecaster performance · 2020-09-12T14:19:28.881Z · EA · GW

This makes Alice a better forecaster

As long as we keep asking Alice and Bob questions via the same platform, and their incentives don't change, I agree. But if we now need to decide whether to hire Alice and/or Bob to do some forecasting for us, comparing their average daily Brier score is problematic. If Bob just wasn't motivated enough to update his forecast every day like Alice did, his lack of motivation can be fixed by paying him.

Comment by ofer on Challenges in evaluating forecaster performance · 2020-09-12T14:13:14.132Z · EA · GW

Thanks for the explanation!

I don't think this formal argument conflicts with the claim that we should expect the forecasting frequency to affect the average daily Brier score. In the example that Flodorner gave, where the forecast is essentially resolved before the official resolution date, Alice will have perfect daily Brier scores (a score of 0) on any of those days, while on those days Bob's Brier scores will be imperfect (strictly greater than 0).

Comment by ofer on Challenges in evaluating forecaster performance · 2020-09-12T05:29:44.309Z · EA · GW

The long-term solution here is to allow forecasters to predict functions rather than just static values. This solves problems of things like people needing to update for time left.

Do these functions map events to conditional probabilities? (I.e. mapping an event to the probability of something conditioned on that event happening)? What would this look like for the example of forecasting an election result?

In terms of the specific example though, I think if a significant new poll comes out and Alice updates and Bob doesn't, Alice is a better forecaster and deserves more reward than Bob.

Suppose Alice encountered the important poll result because she was looking for it (as part of her effort to come up with a new forecast). At the end of the day what we really care about is how much weight we should place on any given forecast made by Alice/Bob. We don't directly care about the average daily Brier score (which may be affected by the forecasting frequency). [EDIT: this isn't true if the forecasting platform and the forecasters' incentives are the same when we evaluate the forecasters and when we ask the questions we care about.]

Comment by ofer on Challenges in evaluating forecaster performance · 2020-09-12T05:26:12.215Z · EA · GW

I didn't follow that last sentence.

Notice that in the limit it's obvious we should expect the forecasting frequency to affect the average daily Brier score: Suppose Alice makes a new forecast every day while Bob only makes a single forecast (which is equivalent to him making an initial forecast and then blindly making the same forecast every day until the question closes).

Comment by ofer on Challenges in evaluating forecaster performance · 2020-09-11T19:43:16.444Z · EA · GW

After thinking for a few more minutes, it seems that forecasting more often but at random moments shouldn't impact the expected Brier score.

In my toy example (where the forecasting moments are predetermined), Alice's Brier score for day X will be based on a "fresh" prediction made on that day (perhaps influenced by a new surprising poll result), while Bob's Brier score for that day may be based on a prediction he made 3 weeks earlier (not taking into account the new poll result). So we should expect that the average daily Brier score will be affected by the forecasting frequency (even if the forecasting moments are uniformly sampled).
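To illustrate the effect, here is a minimal simulation sketch (my own construction, not from the original comments): a toy "election" whose latent margin drifts each day, with both forecasters reporting the true conditional probability whenever they forecast. The function and parameter names (e.g. `bob_update_every`) and the Gaussian random-walk model are assumptions made purely for illustration.

```python
# Toy model: an "election" decided by whether a latent margin ends up positive.
# Alice re-forecasts the true conditional probability every day; Bob reports the
# same quantity but only refreshes it every `bob_update_every` days. The staleness
# alone gives Bob a worse (higher) average daily Brier score.
import math
import random

def simulate(num_days=60, bob_update_every=21, daily_sigma=1.0):
    margin = 0.0
    alice, bob, bob_current = [], [], None
    for day in range(num_days):
        margin += random.gauss(0, daily_sigma)  # new information arrives (e.g. a poll)
        days_left = num_days - day - 1
        if days_left > 0:
            sigma_left = daily_sigma * math.sqrt(days_left)
            # P(final margin > 0 | information available today)
            p_today = 0.5 * (1 + math.erf(margin / (sigma_left * math.sqrt(2))))
        else:
            p_today = 1.0 if margin > 0 else 0.0
        alice.append(p_today)                   # Alice forecasts every day
        if day % bob_update_every == 0:
            bob_current = p_today               # Bob refreshes only occasionally
        bob.append(bob_current)
    outcome = 1.0 if margin > 0 else 0.0

    def brier(forecasts):
        return sum((f - outcome) ** 2 for f in forecasts) / len(forecasts)

    return brier(alice), brier(bob)

random.seed(0)
results = [simulate() for _ in range(2000)]
print(sum(a for a, _ in results) / len(results))  # Alice's mean daily Brier score
print(sum(b for _, b in results) / len(results))  # Bob's mean daily Brier score (higher)
```

The random walk is only there to make "the true probability on a given day" well-defined; any setting in which relevant information keeps arriving between Bob's updates should produce the same qualitative gap.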

In this toy example the best solution seems to be using the average Brier score over the set of days in which both Alice and Bob made a forecast. If in practice this tends to leave us with too few data points, a more sophisticated solution is called for. (Maybe partitioning days into bins and sampling a random forecast from each bin? [EDIT: this mechanism can be gamed.])

Comment by ofer on Challenges in evaluating forecaster performance · 2020-09-11T14:41:08.669Z · EA · GW

The rewarding-more-active-forecasters problem seems severe and I'm surprised it's not getting more attention. If Alice and Bob both forecast the result of an election, but Alice updates her forecast every day (based on the latest polls) while Bob only updates his forecast every month, it doesn't make sense to compare their average daily Brier score.

Comment by ofer on Some thoughts on the EA Munich // Robin Hanson incident · 2020-08-29T12:41:25.902Z · EA · GW

You'd expect having a wider range of speakers to increase intellectual diversity — but only as long as hosting Speaker A doesn't lead Speakers B and C to avoid talking to you

As an aside, if hosting Speaker A is a substantial personal risk to the people who need to decide whether to host Speaker A, I expect the decision process to be biased against hosting Speaker A (relative to an ideal EA-aligned decision process).

Comment by ofer on "Good judgement" and its components · 2020-08-20T14:03:26.552Z · EA · GW

Thank you for the thoughtful comment!

As an aside, when I wrote "we usually need to have a good understanding ..." I was thinking about explicit heuristics. Trying to understand the implications of our implicit heuristics (which may be hard to influence) seems somewhat less promising. Some of our implicit heuristics may be evolved mechanisms (including game-theoretical mechanisms) that are very useful for us today, even if we don't have the capacity to understand why.