Posts

Growth of prediction markets over time? 2021-09-02T13:43:09.820Z
What 2026 looks like (Daniel's median future) 2021-08-07T05:14:35.718Z
DeepMind: Generally capable agents emerge from open-ended play 2021-07-27T19:35:08.662Z
Taboo "Outside View" 2021-06-17T09:39:12.385Z
Vignettes Workshop (AI Impacts) 2021-06-15T11:02:04.064Z
Fun with +12 OOMs of Compute 2021-03-01T21:04:16.532Z
Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain 2021-01-18T12:39:30.132Z
Against GDP as a metric for timelines and takeoff speeds 2020-12-29T17:50:04.176Z
Incentivizing forecasting via social media 2020-12-16T12:11:33.789Z
Is this a good way to bet on short timelines? 2020-11-28T14:31:46.235Z
Persuasion Tools: AI takeover without AGI or agency? 2020-11-20T16:56:52.687Z
How Roodman's GWP model translates to TAI timelines 2020-11-16T14:11:38.809Z
How can I bet on short timelines? 2020-11-07T12:45:46.192Z
What considerations influence whether I have more influence over short or long timelines? 2020-11-05T19:57:16.172Z
AI risk hub in Singapore? 2020-10-29T11:51:49.741Z
Relevant pre-AGI possibilities 2020-06-20T13:15:29.008Z
Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post 2019-02-15T19:14:41.459Z
Tiny Probabilities of Vast Utilities: Bibliography and Appendix 2018-11-20T17:34:02.854Z
Tiny Probabilities of Vast Utilities: Concluding Arguments 2018-11-15T21:47:58.941Z
Tiny Probabilities of Vast Utilities: Solutions 2018-11-14T16:04:14.963Z
Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem 2018-11-10T09:12:15.039Z
Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? 2018-11-08T10:09:59.111Z
Ongoing lawsuit naming "future generations" as plaintiffs; advice sought for how to investigate 2018-01-23T22:22:08.173Z
Anyone have thoughts/response to this critique of Effective Animal Altruism? 2016-12-25T21:14:39.612Z

Comments

Comment by kokotajlod on Fanaticism in AI: SERI Project · 2021-09-24T12:51:02.797Z · EA · GW

Nice work!

However, imposing a bounded utility function on any decision involving lives saved or happy lives instantiated seems unpalatable, as it suggests that life diminishes in value. Thus, in decisions surrounding human lives and other unbounded utility values it seems that an instrumentally rational agent will maximize expected utility and reach a fanatical verdict. Therefore, if an agent is instrumentally rational, she will reach fanatical verdicts through maximizing expected utility.

I've only skimmed it so maybe this is answered in the paper somewhere, but: I think this is the part I'd disagree with. I don't think bounded utility functions are that bad, compared to the alternatives (such as fanaticism! And worse, paralysis! See my sequence.)

More importantly though, if we are trying to predict how superintelligent AIs will behave, we can't assume that they'll share our intuitions about the unpalatability of unbounded utility functions! I feel like the conclusion should be: Probably superintelligent AIs will either have bounded utility functions or be fanatical.

Comment by kokotajlod on Why AI alignment could be hard with modern deep learning · 2021-09-22T00:21:53.653Z · EA · GW

This is my favorite post on Cold Takes so far! I think it's going to be one of my go-to things to link people to from now on. Well done! 

Comment by kokotajlod on It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link] · 2021-09-17T21:27:22.679Z · EA · GW

Now I think we are on the same page. Nice! I agree that this is weak Bayesian evidence for the reason you mention; if the experiment had discovered that one artificial neuron could adequately simulate one biological neuron, that would basically put an upper bound on things for purposes of the bio anchors framework (cutting off approximately the top half of Ajeya's distribution over required size of artificial neural net). Instead they found that you need thousands. But (I would say) this is only weak evidence because prior to hearing about this experiment I would have predicted that it would be difficult to accurately simulate a neuron, just as it's difficult to accurately simulate a falling leaf. Pretty much everything that happens in biology is complicated and hard to simulate.

Comment by kokotajlod on It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link] · 2021-09-17T14:10:34.804Z · EA · GW

What I meant by the falling leaf thing:
If we wanted to accurately simulate where a leaf would land when dropped from a certain height and angle, it would require a ton of complex computation. But (one can imagine) it's not necessary for us to do this; for any practical purpose we can just simplify it to a random distribution centered directly below the leaf with variance v.

Similarly (perhaps), if we want to accurately simulate the input-output behavior of a neuron, maybe we need 8 layers of artificial neurons. But maybe in practice, if we just simplified it to "It sums up the strength of all the neurons that fired at it in the last period, and then fires with probability p, where p is an s-curve function of the strength sum..." that would work fine for practical purposes -- NOT for purposes of accurately reproducing the human brain's behavior, but for purposes of building an approximately brain-sized artificial neural net that is able to learn and excel at the same tasks.
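
To make that simplified model concrete, here's a minimal sketch in Python. The logistic "s-curve", the weights, and all the numbers are illustrative assumptions for the sake of the example, not a claim about how real neurons work:

```python
import numpy as np

def simple_neuron(fired_last_period, weights, bias=0.0):
    """Toy point-neuron: sum the weighted inputs that fired in the last period,
    then fire with probability p, where p is an s-curve (logistic function)
    of that strength sum. Purely illustrative."""
    strength = np.dot(fired_last_period, weights) + bias
    p = 1.0 / (1.0 + np.exp(-strength))  # s-curve of the strength sum
    return np.random.rand() < p

# Example: 100 upstream neurons with random synapse weights, half of which fired.
rng = np.random.default_rng(0)
weights = rng.normal(size=100)
fired = (rng.random(100) < 0.5).astype(float)
print(simple_neuron(fired, weights))
```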

My original point no. 1 was basically that I don't see how the experiment conducted in this paper is much evidence against the "simplified model would work fine for practical purposes" hypothesis.

Comment by kokotajlod on It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link] · 2021-09-16T14:36:18.761Z · EA · GW

I think I get it, thanks! (What follows is my understanding, please correct if wrong!) The idea is something like: A falling leaf is not a computer; it can't be repurposed to perform many different useful computations. But a neuron is: depending on the weights of its synapses it can be an AND gate, an OR gate, or various more complicated things. And the paper in the OP is evidence that the range of more complicated useful computations it can do is quite large, which is reason to think that maybe, in the relevant sense, a lot of the brain's skills have to involve fancy calculations within neurons. (Just because they can doesn't mean they have to, but if neurons are general-purpose computers capable of doing lots of computations, that seems like more evidence than if neurons were more like falling leaves.)

I still haven't read the paper -- does the experiment distinguish between the "it's a tiny computer" hypothesis vs. the "it's like a falling leaf -- hard to simulate, but not in an interesting way" hypothesis?

Comment by kokotajlod on It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link] · 2021-09-15T19:44:22.883Z · EA · GW

What does it mean to say a biological neuron is more computationally powerful than an artificial one? If all it means is that it takes more computation to fully simulate its behavior, then by that standard a leaf falling from a tree is more computationally powerful than my laptop.
(This is a genuine question, not a rhetorical one. I do have some sense of what you are saying but it's fuzzy in my head and I'm wondering if you have a more precise definition that isn't just "computation required to simulate." I suspect that the Carlsmith report I linked may have already answered this question and I forgot what it said.)

Comment by kokotajlod on It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link] · 2021-09-15T15:28:00.023Z · EA · GW

My own quick takeaway is that it takes 5-8 layers with about 1000 neurons in total in an artificial neural network to simulate a single biological neuron of a certain kind, and before taking this into account, we'd likely underestimate the computational power of animal brains relative to artificial neural networks, possibly up to about 1000x.

This does not seem right to me. I haven't read the paper yet, so maybe I'm totally misunderstanding things, but...

  1. The bio anchors framework does not envision us achieving AGI/TAI/etc. by simulating the brain, or even by simulating neurons. Instead, it tries to guesstimate how many artificial neurons or parameters we'd need to achieve similar capabilities to the brain, by looking at how many biological neurons or synapses are used in the brain, and then adding a few orders of magnitude of error bars. See the Carlsmith report, especially the conclusion summary diagram. Obviously if we actually wanted to simulate the brain we'd need to do something more sophisticated than just use 1 artificial neuron per biological neuron. For a related post, see this. Anyhow, the point is, this paper seems almost completely irrelevant to the bio anchors framework, because we knew already (and other papers had shown) that if we wanted to simulate a neuron it would take more than just one artificial neuron.
  2. Assuming I'm wrong about point #1, I think the calculation would be more complex than just "1000 artificial neurons needed per biological neuron, so +3 OOMs to bio anchors framework." Most of the computation in the bio anchors calculation comes from synapses, not neurons. Here's an attempt at how the revised calculation might go (the arithmetic is sketched in code after the list):
    1. Currently Carlsmith's median estimate is 10^15 flop per second. Ajeya's report guesses that artificial stuff is crappier than bio stuff and so uses 10^16 flop per second as the median instead, IIRC.
    2. There are 10^11 neurons in the brain, and 10^14-10^15 synapses.
    3. If we assume each neuron requires an 8-layer convolutional DNN with 1000 neurons... how many parameters is that? Let's say it's 100,000, correct me if I'm wrong.
    4. So then that would be 100,000 flop per period of neuron-simulation.
    5. I can't access the paper itself, but one of the diagrams says something about one ms of input. So maybe the period length is 1 ms, which means 1000 periods a second, which means 100,000,000 flop per second of neuron-simulation.
    6. This would be a lot more than the cost of simulating the synapses, so we don't have to bother calculating that.
    7. So our total cost is 10^8 flop per second per neuron times 10^11 neurons = 10^19 flop per second to simulate the brain.
    8. So this means a loose upper bound for the bio anchors framework should be around 10^19, whereas currently Ajeya uses a median of 10^16 with a few OOMs of uncertainty on either side. It also means, insofar as you think my point #1 is wrong and that this paper is the last word on the subject, that the median should maybe be substantially higher as well, though that's less clear. (Plausibly we'll be able to find more efficient ways to simulate neurons than the dumb 8-layer NN they tried in this paper, shaving an OOM or two off the cost and bringing us back down somewhat closer to 10^16...)
    9. It's unclear whether this would lengthen or shorten timelines, I'd like to see the calculations for that. My wild guess is that it would lower the probability of <15 year timelines and also lower the probability of >30 year timelines.
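
Here is the same arithmetic as a minimal Python sketch, using the guessed numbers from the list above (every figure is an assumption carried over from the steps, not an authoritative estimate):

```python
# Back-of-the-envelope version of steps 2-7 above.
params_per_neuron_model = 1e5              # guessed parameter count of the 8-layer DNN (step 3)
flop_per_period = params_per_neuron_model  # ~1 flop per parameter per forward pass (step 4, very rough)
periods_per_second = 1_000                 # if one period corresponds to 1 ms of input (step 5)
flop_per_neuron_second = flop_per_period * periods_per_second  # 1e8 flop/s per simulated neuron
neurons_in_brain = 1e11                    # step 2
total_flop_per_second = flop_per_neuron_second * neurons_in_brain
print(f"{total_flop_per_second:.0e} flop/s to simulate the brain")  # ~1e+19
```
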
Comment by kokotajlod on Growth of prediction markets over time? · 2021-09-02T19:38:34.614Z · EA · GW

Thanks! Gosh, it's disappointing to learn that prediction markets wax and wane in popularity over time instead of steadily exponentially increasing as I had assumed. (I'm thinking about the 'world events' kind, not the sports or stock market kind) This makes me pessimistic that they'll ever get big enough to raise the sanity waterline.

Comment by kokotajlod on Forecasting transformative AI: the "biological anchors" method in a nutshell · 2021-09-01T14:08:44.571Z · EA · GW

Another nice post! I think it massively overstates the case for Bio Anchors being too aggressive:

More broadly, Bio Anchors could be too aggressive due to its assumption that "computing power is the bottleneck":

  • It assumes that if one could pay for all the computing power to do the brute-force "training" described above for the key tasks (e.g., automating scientific work), this would be enough to develop transformative AI.
  • But in fact, training an AI model doesn't just require purchasing computing power. It requires hiring researchers, running experiments, and perhaps most importantly, finding a way to set up the "trial and error" process so that the AI can get a huge number of "tries" at the key task. It may turn out that doing so is prohibitively difficult.

The assumption of the Bio Anchors framework is that compute is the bottleneck, not that compute is all you need (but your phrasing gives the opposite impression). 

In the bio anchors framework we pretty quickly (by 2025 I think?) get to a regime where people are willing to spend a billion+ dollars on the compute for a single training run. It's pretty darn plausible that by the time you are spending billions of dollars on compute, you'll also be able to afford the associated data collection, researcher salaries, etc. for some transformative task. 

(There are many candidate transformative tasks, some of which would be directly transformative and others indirectly by rapidly leading to the creation of AIs that can do the directly transformative things. So for compute to not be the bottleneck, it has to be that we are data-collection-limited, or researcher-salary limited, or whatever, for all of these tasks.)

(Also, I don't think "transformative AI" should be our milestone anyway. AI-induced point of no return is, and that probably comes earlier.)

Comment by kokotajlod on What are examples of technologies which would be a big deal if they scaled but never ended up scaling? · 2021-08-27T12:53:09.368Z · EA · GW

Also technically Alchemy will in fact be cheaply scaled in the future, probably. When we are disassembling entire stars to fund galaxy-wide megaprojects, presumably some amount of alchemy will be done as well, and that amount will be many orders of magnitude bigger than the original alchemists imagined, and it will be done many orders of magnitude more cheaply (in 2021 dollars, after adjusting for inflation) as well. EDIT: Nevermind I no longer endorse this comment, I think I was assuming alignment success for some reason.

Comment by kokotajlod on What are examples of technologies which would be a big deal if they scaled but never ended up scaling? · 2021-08-27T12:50:58.133Z · EA · GW

I think there's an ambiguity in "it'd eventually be very cheap and scalable."

Consider alchemy. It's cheaper to do now than it was when we first did it, in part because the price of energy has dropped. It's also possible to do it on much bigger scales. However, nobody bothers because people have better things to do. So for something to count as cheaper and scalable, does it need to actually be scaled up, or is it enough that we could do it if we wanted to? If the latter, then alchemy isn't even an example of the sort of thing you want. If the former, then there are tons of examples, examples all over the place!

Comment by kokotajlod on What are examples of technologies which would be a big deal if they scaled but never ended up scaling? · 2021-08-27T12:47:00.906Z · EA · GW

I'm pretty confident that if loads more money and talent had been thrown at space exploration, going to the moon would be substantially cheaper and more common today. SpaceX is good evidence of this, for example.

As for fusion power, I guess I've got a lot less evidence for that. Perhaps I am wrong. But it seems similar to me.  We could also talk about fusion power on the metric of "actually producing more energy than it takes in, sustainably" in which case my understanding is that we haven't got there at all yet.

Comment by kokotajlod on What are examples of technologies which would be a big deal if they scaled but never ended up scaling? · 2021-08-27T09:20:11.066Z · EA · GW

Going to the moon.

Fusion power?

Nuclear power more generally?

...I guess the problem with these examples is that they totally are scalable, they just didn't scale for political/cultural reasons.

Comment by kokotajlod on Taboo "Outside View" · 2021-08-27T09:16:02.504Z · EA · GW

Yeah, I probably shouldn't have said "bogus" there, since while I do think it's overrated, it's not the worst method. (Though arguably things can be bogus even if they aren't the worst?) 

Partitioning by any X lets you decide how much weight you give to X vs. not-X. My claim is that the bag of things people refer to as "outside view" isn't importantly different from the other bag of things, at least not more importantly different than various other categorizations one might make. 

I do think that people who are experts should behave differently than people who are non-experts. I just don't think we should summarize that as "Prefer to use outside-view methods" where outside view = the things on the First Big List. I think instead we could say:
--Use deference more
--Use reference classes more if you have good ones (but if you are a non-expert and your reference classes are more like analogies, they are probably leading you astray)
--Trust your models less
--Trust your intuition less
--Trust your priors less
...etc. 

Comment by kokotajlod on Are we "trending toward" transformative AI? (How would we know?) · 2021-08-26T15:08:10.593Z · EA · GW

See also this piece for a bit of a more fleshed out argument along these lines, which I don't agree with fully as stated (I don't think it presents a strong case for transformative AI soon), but which I think gives a good sense of my intuitions about in-principle feasibility.

I'd be interested to hear your disagreements sometime! To clarify, the point of my post was not to present a strong case for transformative AI soon, but rather to undermine a class of common arguments against that hypothesis.

Comment by kokotajlod on Taboo "Outside View" · 2021-08-26T06:28:08.775Z · EA · GW

Thanks!

Re: Inadequate Equilibria: I mean, that was my opinionated interpretation, I guess. :) But Yudkowsky was definitely arguing something was bogus. (This is a jab at his polemical style.) To say a bit more: Yudkowsky argues that the justifications for heavy reliance on various things called "outside view" don't hold up to scrutiny, and that what's really going on is that people are overly focused on matters of who has how much status, which topics are in whose areas of expertise, whether I am being appropriately humble, and stuff like that, and that (unconsciously) this is what's really driving people's use of "outside view" methods rather than the stated justifications. I am not sure whether I agree with him or not, but I do find it somewhat plausible at least. I do think the stated justifications often (usually?) don't hold up to scrutiny.

To be clear, I don't think "weighted sum of 'inside views' and 'outside views'" is the gold standard or something. I just think it's an okay approach sometimes (maybe especially when you want to do something "quick and dirty").

If you strongly disagree (which I think you do), I'd love for you to change my mind! :)

I mean, depending on what you mean by "an okay approach sometimes... especially when you want to do something quick and dirty" I may agree with you! What I said was:

This is not Tetlock’s advice, nor is it the lesson from the forecasting tournaments, especially if we use the nebulous modern definition of “outside view” instead of the original definition.

And it seems you agree with me on that. What I would say is: Consider the following list of methods:
1. Intuition-weighted sum of "inside view" and "outside view" methods (where those terms refer to the Big Lists summarized in this post)
2. Intuition-weighted sum of "Type X" and "Type Y" methods (where those terms refer to any other partition of the things in the Big Lists summarized in this post)
3. Intuition
4. The method Tetlock recommends (as interpreted by me in the passage of my blog post you quoted)

My opinion is that 1 and 2 are probably typically better than 3 and that 4 is probably typically better than 1 and 2 and that 1 and 2 are probably about the same. I am not confident in this of course, but the reasoning is: Method 4 has some empirical evidence supporting it, plus plausible arguments/models.* So it's the best. Methods 1 & 2 are like method 3 except that they force you to think more and learn more about the case (incl. relevant arguments about it) before calling on your intuition, which hopefully results in a better-calibrated intuitive judgment. As for comparing 1 & 2, I think we have basically zero evidence that partitioning into "Outside view" and "Inside view" is more effective than any other random partition of the things on the list. It's still better than pure intuition though, probably, for reasons mentioned. 

The view I was arguing against in the OP was the view that method 1 is the best, supported by the evidence from Tetlock, etc. I think I slipped into holding this view myself over the past year or so, despite having done all this research on Tetlock et al earlier! It's easy to slip into because a lot of people in our community seem to be holding it, and when you squint it's sorta similar to what Tetlock said. (e.g. it involves aggregating different things, it involves using something called inside view and something called outside view.) 

*The margins of this comment are too small to contain the full argument; I was going to write a post on this some day...

Comment by kokotajlod on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-13T10:28:45.297Z · EA · GW

Announce $100M/year in prizes for AI interpretability/transparency research. Explicitly state that the metric is "How much closer does this research take us towards, one day when we build human-level AGI, being able to read said AI's mind, understand what it is thinking and why, what its goals and desires are, etc., ideally in an automated way that doesn't involve millions of person-hours?"
(Could possibly do it as NFTs, like my other suggestion.)

Comment by kokotajlod on Outline of Galef's "Scout Mindset" · 2021-08-12T12:24:13.817Z · EA · GW

I think I agree with this take. Thanks!

Comment by kokotajlod on Outline of Galef's "Scout Mindset" · 2021-08-11T08:53:55.805Z · EA · GW

Thanks! I notice that 1, 4, and 5 are examples where in some sense it's clear what you need to do, and the difficulty is just actually doing it. IIRC Julia says somewhere in the book (perhaps in discussing the rock climbing example?) that this is where the soldier mindset performs relatively well. I think I tentatively agree with this take, meaning that I agree with you that in some cases soldier mindset is probably better.

Comment by kokotajlod on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-11T08:13:20.757Z · EA · GW

That's a succinct way of putting it, nice!

Comment by kokotajlod on Outline of Galef's "Scout Mindset" · 2021-08-10T15:59:52.204Z · EA · GW

 I am currently very much on the fence about whether to agree with you or not. I'm very keen to hear your views on situations in which soldier mindset is better than scout mindset--can you elaborate?

Comment by kokotajlod on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-10T08:00:40.245Z · EA · GW

Once it's established that you will be giving $100M a year to buy impact certificates, that will motivate lots of people already doing good to mint impact certificates, and probably also motivate lots of people to do good (so that they can mint the certificate and later get money for it).

By buying the certificate rather than paying the person who did the good, you enable flexibility -- the person who did the good can sell the certificate to speculators and get money immediately rather than waiting for your judgment. Then the speculators can sell it back and forth to each other as new evidence comes in about the impact of the original act, and the conversations the speculators have about your predicted evaluation can then help you actually make the evaluation, thanks to e.g. facts and evidence the speculators uncover. So it saves you effort as well.

Comment by kokotajlod on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-08T09:24:13.669Z · EA · GW

Is this something we could purchase for a few hundred million in a few years?

Comment by kokotajlod on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-08T09:23:37.036Z · EA · GW

Yes.

Comment by kokotajlod on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-08T05:44:16.800Z · EA · GW

This is my impression based on (a) talking to a bunch of people and hearing things like "Yeah, our security is unacceptably weak," "I don't think we are in danger yet, we probably aren't on anyone's radar," and "Yeah, we are taking it very seriously, we are looking to hire someone. It's just really hard to find a good security person." These are basically the ONLY three things I hear when I raise security concerns, and they are collectively NOT reassuring. I haven't talked to every org and every person, so maybe my experience is misleading. Also (b) on priors, it seems that people in general don't take security seriously until there's actually a breach. And (c) I've talked to some people who are also worried about this, and they told me there basically isn't any professional security person in the EA community willing to work full time on this.

Comment by kokotajlod on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-08T05:33:35.046Z · EA · GW

Think of it like a grants program, except that instead of evaluating someone's pitch for what they intend to do, you are evaluating what they actually did, with the benefit of hindsight. Presumably your evaluations will be significantly more accurate this way. (Also, the fact that it's NFT-based means that you can recruit the "wisdom of the efficient market" to help you  in various ways, e.g. lots of non-EAs will be buying and selling these NFTs trying to predict what you will think of them, and thus producing lots of research you can use.)

I don't think it should replace our regular grants programs. But it might be a nice complement to them.

I don't see what you mean by centralization here, or how it's a problem. As for reliable guarantees the money will be used cost effectively, hell no, the whole point of impact certificates is that the evaluation happens after the event, not before. People can do whatever they want with the money, because they've already done the thing for which they are getting paid.

Comment by kokotajlod on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T05:33:12.866Z · EA · GW

Finally get acceptable information security by throwing money at the problem.

Spend $100M/year to hire, say, 10 world-class security experts and get them everything they need to build the right infrastructure for us, and for e.g. Anthropic.

Comment by kokotajlod on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T05:29:49.179Z · EA · GW

Impact certificates. Announce that we will purchase NFTs representing altruistic acts, minted by one of the actors involved. (Starting now, but with a one-year delay, such that we can't purchase an NFT unless it's at least a year old.) Commit to buying $100M/year of these NFTs, occasionally reselling them and using the proceeds to buy even more. Promise that our purchasing decisions will be based on our estimate of how much total impact the action represented by the NFT will have.
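
A toy sketch of the purchase rule being proposed; the field names and the one-year eligibility check are just illustrative:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImpactCertificate:
    act_description: str      # the altruistic act the NFT represents
    minted_on: date           # when the certificate was created
    estimated_total_impact: float = 0.0  # the funder's own estimate, made later

def eligible_to_purchase(cert: ImpactCertificate, today: date) -> bool:
    """Only buy certificates that are at least a year old, per the proposal."""
    return today - cert.minted_on >= timedelta(days=365)

cert = ImpactCertificate("ran a vaccination drive", date(2021, 8, 7))
print(eligible_to_purchase(cert, date(2022, 9, 1)))  # True
```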

Comment by kokotajlod on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-03T20:17:35.175Z · EA · GW

Interesting! The secretary problem does seem relevant as a model, thanks!

Given that EA has only been around for about a decade, you would have to think that extinction is imminent for a decade to count for ~37% of our total future. 

FWIW, many of us do think that. I do, for example.

Comment by kokotajlod on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-03T19:56:04.587Z · EA · GW

Interesting! How does it work when you are uncertain about the date of extinction but think there's a 50% chance that it's within 10 years? (For concreteness, suppose that every decade there's a 50% chance of extinction.) (This is more or less what I think)

Comment by kokotajlod on Is effective altruism growing? An update on the stock of funding vs. people · 2021-08-02T08:58:22.967Z · EA · GW

Thanks Wayne, will read!

Comment by kokotajlod on Is effective altruism growing? An update on the stock of funding vs. people · 2021-07-29T16:14:36.385Z · EA · GW

Will do, thanks!

Comment by kokotajlod on Is effective altruism growing? An update on the stock of funding vs. people · 2021-07-29T14:30:43.887Z · EA · GW

I hadn't even taken into account future donors; if you take that into account then yeah we should be doing even more now. Huh. Maybe it should be like 20% or so. Then there's also the discount rate to think about... various risks of our money being confiscated, or controlled by unaligned people, or some random other catastrophe killing most of our impact, etc.... (Historically, foundations seem to pretty typically diverge from the original vision/mission laid out by their founders.)

I've read the hinge of history argument before, and was thoroughly unconvinced (for reasons other people explained in the comments).

One quick thing is that I think high interest rates are overall an argument for giving later rather than sooner!

Hmmm, toy model time: Suppose that our overall impact is log(what we spend in year 2021) + log(what we spend in year 2022) + log(what we spend in year 2023) + ... etc., up until some year when existential safety is reached or the x-risk point of no return is passed.
Then is it still the case that going from e.g. a 10% interest rate to a 20% interest rate means we should spend less in 2021? Idk, but I'll go find out! (Since I take this toy model to be reasonably representative of our situation)
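
Here's a quick numerical check of that toy model under simplifying assumptions (fixed, known horizon; no discounting; illustrative numbers), just to see how the optimal first-year spend responds to the interest rate:

```python
# Maximize sum of log(spend_t) over a fixed horizon T, where unspent money
# grows at interest rate r (equivalently, a present-value budget constraint).
import numpy as np
from scipy.optimize import minimize

def optimal_first_year_spend(r, T=10, wealth=100.0):
    discount = (1.0 + r) ** -np.arange(T)  # present value of a dollar spent in year t
    objective = lambda c: -np.sum(np.log(c))
    budget = {"type": "eq", "fun": lambda c: wealth - np.dot(c, discount)}
    res = minimize(objective, x0=np.full(T, wealth / T), method="SLSQP",
                   bounds=[(1e-6, None)] * T, constraints=[budget])
    return res.x[0]

for r in (0.10, 0.20):
    print(f"r = {r:.0%}: optimal year-1 spend = {optimal_first_year_spend(r):.1f}")
# Under these assumptions the year-1 spend comes out around wealth/T for both
# rates; the interest rate mostly changes how much the later years get.
```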

Comment by kokotajlod on Is effective altruism growing? An update on the stock of funding vs. people · 2021-07-29T13:13:58.119Z · EA · GW

Thanks, this data is really helpful -- and it also is reassuring to know that people in the EA community are on top of this stuff. I would be disappointed if no one was.

I'm curious as to how the 3% per year number could be justified (via models, rather than by aggregating survey answers). It seems to me that it should be substantially higher.

Suppose you have my timelines (median 2030). Then, intuitively, I feel like we should be spending something like 10% per year. If you have 2055 as your median, then maybe 3% per year makes sense...

EXCEPT that this doesn't take into account interest rates! Even if we spent 10% per year, we should still expect our total pot of money to grow, leaving us with an embarrassingly large amount of money going to waste at the end. (Sure, sure,  it wouldn't literally go to waste--we'd probably blow it all on last-ditch megaprojects to try to turn things around--but these would probably be significantly less effective per dollar compared to a world in which we had spread out our spending more, taking more opportunities on the margin over many years.) And if we spent 3%...

Idk. I'm new to this whole question. I'd love for people to explain more about how to think about this.

Comment by kokotajlod on Digital People Would Be An Even Bigger Deal · 2021-07-28T11:39:41.969Z · EA · GW

Digital people that become less economically, militarily, and politically powerful--e.g. as a result of reward hacking making them less interested in the 'rat race'--will be outcompeted by those that don't,  unless there are mechanisms in place to prevent this, e.g. all power centralized in one authority that decides not to let that happen, or strong effective regulations that are universally enforced.

Comment by kokotajlod on DeepMind: Generally capable agents emerge from open-ended play · 2021-07-28T08:12:02.182Z · EA · GW

My take is that indeed, we now have AGI -- but it's really shitty AGI, not even close to human-level. (GPT-3 was another example of this; pretty general, but not human-level.) It seems that we now have the know-how to train a system that combines all the abilities and knowledge of GPT-3 with all the abilities and knowledge of these game-playing agents. Such a system would qualify as AGI, but not human-level AGI. The question is how long it'll take, and how much money (to make it bigger, train for longer) to get to human-level or something dangerously powerful at least.

Comment by kokotajlod on DeepMind: Generally capable agents emerge from open-ended play · 2021-07-27T21:14:51.278Z · EA · GW

I would love to know! If anyone finds out how many PF-DAYs or operations or whatever were used to train this stuff, I'd love to hear it. (Alternatively: How much money was spent on the compute, or the hardware.)

Comment by kokotajlod on DeepMind: Generally capable agents emerge from open-ended play · 2021-07-27T19:26:36.012Z · EA · GW

Oops, sorry thanks!

Comment by kokotajlod on DeepMind: Generally capable agents emerge from open-ended play · 2021-07-27T19:26:01.867Z · EA · GW

I did say it was a hot take. :D If I think of more sophisticated things to say I'll say them. 

Comment by kokotajlod on Shallow evaluations of longtermist organizations · 2021-07-08T07:41:42.074Z · EA · GW

Ah, OK, that makes sense. 

Comment by kokotajlod on Shallow evaluations of longtermist organizations · 2021-07-05T19:56:36.228Z · EA · GW

Hmmm, this surprises me a bit because doesn't it apply to pretty much all of your evaluations on this list? Presumably for each of them, the leadership of the org has somewhat different opinions than your independent impression, and your overall view should be an average of the two. I didn't get the impression that you were averaging your impression with those of other org's leadership.

Comment by kokotajlod on Taboo "Outside View" · 2021-07-02T06:22:31.830Z · EA · GW

I am sometimes happy making pretty broad and sloppy statements. For example: "People making political predictions typically don't make enough use of 'outside view' perspectives" feels fine to me, as a claim, despite some ambiguity around the edges. (Which perspectives should they use? How exactly should they use them? Etc.)

I guess we can just agree to disagree on that for now. The example statement you gave would feel fine to me if it used the original meaning of "outside view" but not the new meaning, and since many people don't know (or sometimes forget) the original meaning...

A good conversation would focus specifically on the conditions under which it makes sense to defer heavily to experts, whether those conditions apply in this particular case, etc. Some general Tetlock stuff might come into the conversation, like: "Tetlock's work suggests it's easy to trip yourself up if you try to use your own detailed/causal model of the world to make predictions, so you shouldn't be so confident that your own 'inside view' prediction will be very good either." But mostly you should be more specific.

100% agreement here, including on the bolded bit.

I think some parts of the community lean too much on things in the bag (the example you give at the top of the post is an extreme example). I also think that some parts of the community lean too little on things in the bag, in part because (in my view) they're overconfident in their own abilities to reason causally/deductively in certain domains. I'm not sure which is overall more problematic, at the moment, in part because I'm not sure how people actually should be integrating different considerations in domains like AI forecasting.

Also agree here, but again I don't really care which one is overall more problematic because I think we have more precise concepts we can use and it's more helpful to use them instead of these big bags. 

There also seem to be biases that cut in both directions. I think the 'baseline bias' is pretty strongly toward causal/deductive reasoning, since it's more impressive-seeming, can suggest that you have something uniquely valuable to bring to the table (if you can draw on lots of specific knowledge or ideas that it's rare to possess), is probably typically more interesting and emotionally satisfying, and doesn't as strongly force you to confront or admit the limits of your predictive powers. The EA community has definitely introduced an (unusual?) bias in the opposite direction, by giving a lot of social credit to people who show certain signs of 'epistemic virtue.' I guess the pro-causal/deductive bias often feels more salient to me, but I don't really want to make any confident claim here that it actually is more powerful.

I think I agree with all this as well, noting that this causal/deductive reasoning definition of inside view isn't necessarily what other people mean by inside view, and also isn't necessarily what Tetlock meant. I encourage you to use the term "causal/deductive reasoning" instead of "inside view," as you did here, it was helpful (e.g. if you had instead used "inside view" I would not have agreed with the claim about baseline bias)

Comment by kokotajlod on Shallow evaluations of longtermist organizations · 2021-06-28T11:44:45.607Z · EA · GW

Hey! Thanks for doing this, strong-upvoted.

I just wrote a post about the terms "outside view" and "inside view" and I figured I'd apply my own advice and see where it leads me. I noticed you used the term here:

CSER

Epistemic status for this section: Unmitigated inside view.

and so I thought I'd try my hand at saying what I think you meant, but using less ambiguous terms. You probably didn't just mean "I'm not using reference classes in this section," because that's true of most sections I'd guess. You also probably didn't mean that you are using a gears-level model, though arguably you are using a model of some sort? Idk, could also classify it as intuition. My guess is that you meant "This is just how things seem to me," i.e. this section doesn't attempt to defer to anyone else or correct for any biases you might have.  How does all this sound? What would you say you meant?

Comment by kokotajlod on Taboo "Outside View" · 2021-06-23T17:03:20.548Z · EA · GW

Wow, that's an impressive amount of charitable reading + attempting-to-ITT you did just there, my hat goes off to you sir!

I think that summary of my view is roughly correct. I think it over-emphasizes the applause light aspect compared to other things I was complaining about; in particular, there was my second point in the "this expansion of meaning is bad" section, about how people seem to think that it is important to have an outside view and an inside view (but only an inside view if you feel like you are an expert) which is, IMO, totally not the lesson one should draw from Tetlock's studies etc., especially not with the modern, expanded definition of these terms. I also think that while I am mostly complaining about what's happened to "outside view," I also think similar things apply to "inside view" and thus I recommend tabooing it also. 

In general, the taboo solution feels right to me; when I imagine re-doing various conversations I've had, except without that phrase, and people instead using more specific terms, I feel like things would just be better. I shudder at the prospect of having a discussion about "Outside view vs inside view: which is better? Which is overrated and which is underrated?" (and I've worried that this thread may be tending in that direction) but I would really look forward to having a discussion about "let's look at Daniel's list of techniques and talk about which ones are overrated and underrated and in what circumstances each is appropriate."

Now I'll try to say what I think your position is:

1. If people were using "outside view" without explaining more specifically what they mean, that would be bad and it should be tabood, but you don't see that in your experience
2. If the things in the first Big List were indeed super diverse and disconnected from the evidence in Tetlock's studies etc., then there would indeed be no good reason to bundle them together under one term. But in fact this isn't the case; most of the things on the list are special cases of reference-class / statistical reasoning, which is what Tetlock's studies are about. So rather than taboo "outside view" we should continue to use the term but mildly prune the list.
3. There may be a general bias in this community towards using the things on the first Big List, but (a) in your opinion the opposite seems more true, and (b) at any rate even if this is true the right response is to argue for that directly rather than advocating the tabooing of the term.

How does that sound?

Comment by kokotajlod on Taboo "Outside View" · 2021-06-22T21:07:25.285Z · EA · GW

As I said in the post, I'm a fan of reference classes. I feel like you think I'm not? I am! I'm also a fan of analogies. And I love trend extrapolation. I admit I'm not a fan of the anti-weirdness heuristic, but even it has its uses. In general, most of what you are saying in this thread is stuff I agree with, which makes me wonder if we are talking past each other. (Example 1: your second small comment, about reference class tennis. Example 2: your first small comment, if we interpret instances of "outside view" as meaning "reference classes" in the strict sense, though not if we use the broader definition you favor. Example 3: your points a, b, c, and e; point d, again, depends on what you mean by "outside view," and also on what counts as often.)

My problem is with the term "Outside view." (And "inside view" too!) I don't think you've done much to argue in favor of it in this thread. You have said that in your experience it doesn't seem harmful; fair enough, point taken. In mine it does. You've also given two rough definitions of the term, which seem quite different to me, and also quite fuzzy. (e.g. if by "reference class forecasting" you mean the stuff Tetlock's studies are about, then it really shouldn't include the anti-weirdness heuristic, but it seems like you are saying it does?) I found myself repeatedly thinking "but what does he mean by outside view? I agree or don't agree depending on what he means..." even though you had defined it earlier. You've said that you think the practices you call "outside view" are underrated and deserve positive reinforcement; I totally agree that some of them are, but I maintain that some of them are overrated, and would like to discuss each of them on a case by case basis instead of lumping them all together under one name. Of course you are free to use whatever terms you like, but I intend to continue to ask people to be more precise when I hear "outside view" or "inside view." :)


Comment by kokotajlod on Taboo "Outside View" · 2021-06-21T08:02:53.371Z · EA · GW

But I think the best intervention, in this case, is probably just to push the ideas "outside views are often given too much weight" or "heavily reliance on outside views shouldn't be seen as praiseworthy" or "the correct way to integrate outside views with more inside-view reasoning is X." Tabooing the term itself somehow feels a little roundabout to me, like a linguistic solution to a methodological disagreement.

On the contrary; tabooing the term is more helpful, I think. I've tried to explain why in the post. I'm not against the things "outside view" has come to mean; I'm just against them being conflated with / associated with each other, which is what the term does. If my point was simply that the first Big List was overrated and the second Big List was underrated, I would have written a very different post!

I'm pretty confident that the average intellectual doesn't pay enough attention to "outside views" -- and I think that, absent positive reinforcement from people in your community, it actually does take some degree of discipline to take outside views sufficiently seriously.

By what definition of "outside view?" There is some evidence that in some circumstances people don't take reference class forecasting seriously enough; that's what the original term "outside view" meant. What evidence is there that the things on the Big List O' Things People Describe as Outside View are systematically underrated by the average intellectual?

Comment by kokotajlod on Taboo "Outside View" · 2021-06-19T06:00:00.197Z · EA · GW

Thanks for this thoughtful pushback. I agree that YMMV; I'm reporting how these terms seem to be used in my experience but my experience is limited.

I think opacity is only part of the problem; illicitly justifying sloppy reasoning is most of it. (My second and third points in "this expansion of meaning is bad" section.) There is an aura of goodness surrounding the words "outside view" because of the various studies showing how it is superior to the inside view in various circumstances, and because of e.g. Tetlock's advice to start with the outside view and then adjust. (And a related idea that we should only use inside view stuff if we are experts... For more on the problems I'm complaining about, see the meme, or Eliezer's comment.) This is all well and good if we use those words to describe what was actually talked about by the studies, by Tetlock, etc. but if instead we have the much broader meaning of the term, we are motte-and-bailey-ing ourselves.

Comment by kokotajlod on What are some key numbers that (almost) every EA should know? · 2021-06-18T08:23:25.504Z · EA · GW

Median household income (worldwide, not in USA) is the thing that sticks with me the most and seems most eye-opening... Looking it up now, it seems that it is $15,900 per year. Imagine your entire household bringing in that much, and then think: that's what life would be like if we were right in the middle.

Comment by kokotajlod on Taboo "Outside View" · 2021-06-17T21:05:33.215Z · EA · GW

Good point, I'll add analogy to the list. Much that is called reference class forecasting is really just analogy, and often not even a good analogy.

I really think we should taboo "outside view." If people are forced to use the term "reference class" to describe what they are doing, it'll be more obvious when they are doing epistemically shitty things, because the term "reference class" invites the obvious next questions: 1. What reference class? 2. Why is that the best reference class to use?

Comment by kokotajlod on Taboo "Outside View" · 2021-06-17T18:32:09.007Z · EA · GW

I agree it's hard to police how people use a word; thus, I figured it would be better to just taboo the word entirely. 

I totally agree that it's hard to use reference classes correctly, because of the reference class tennis problem. I figured it was outside the scope of this post to explain this, but I was thinking about making a follow-up... At any rate, I'm optimistic that if people actually use the words "reference class" instead of "outside view," this will remind them to notice that there is more than one reference class available, that it's important to argue that the one you are using is the best, etc.