Posts

Taboo "Outside View" 2021-06-17T09:39:12.385Z
Vignettes Workshop (AI Impacts) 2021-06-15T11:02:04.064Z
Fun with +12 OOMs of Compute 2021-03-01T21:04:16.532Z
Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain 2021-01-18T12:39:30.132Z
Against GDP as a metric for timelines and takeoff speeds 2020-12-29T17:50:04.176Z
Incentivizing forecasting via social media 2020-12-16T12:11:33.789Z
Is this a good way to bet on short timelines? 2020-11-28T14:31:46.235Z
Persuasion Tools: AI takeover without AGI or agency? 2020-11-20T16:56:52.687Z
How Roodman's GWP model translates to TAI timelines 2020-11-16T14:11:38.809Z
How can I bet on short timelines? 2020-11-07T12:45:46.192Z
What considerations influence whether I have more influence over short or long timelines? 2020-11-05T19:57:16.172Z
AI risk hub in Singapore? 2020-10-29T11:51:49.741Z
Relevant pre-AGI possibilities 2020-06-20T13:15:29.008Z
Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post 2019-02-15T19:14:41.459Z
Tiny Probabilities of Vast Utilities: Bibliography and Appendix 2018-11-20T17:34:02.854Z
Tiny Probabilities of Vast Utilities: Concluding Arguments 2018-11-15T21:47:58.941Z
Tiny Probabilities of Vast Utilities: Solutions 2018-11-14T16:04:14.963Z
Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem 2018-11-10T09:12:15.039Z
Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? 2018-11-08T10:09:59.111Z
Ongoing lawsuit naming "future generations" as plaintiffs; advice sought for how to investigate 2018-01-23T22:22:08.173Z
Anyone have thoughts/response to this critique of Effective Animal Altruism? 2016-12-25T21:14:39.612Z

Comments

Comment by kokotajlod on What are some key numbers that (almost) every EA should know? · 2021-06-18T08:23:25.504Z · EA · GW

Median household income (worldwide, not in the USA) is the thing that sticks with me the most and seems most eye-opening... Looking it up now, it seems that it is $15,900 per year. Imagine your entire household bringing in that much, and then think: that's what life would be like if we were right in the middle.

Comment by kokotajlod on Taboo "Outside View" · 2021-06-17T21:05:33.215Z · EA · GW

Good point, I'll add analogy to the list. Much that is called reference class forecasting is really just analogy, and often not even a good analogy.

I really think we should taboo "outside view." If people are forced to use the term "reference class" to describe what they are doing, it'll be more obvious when they are doing epistemically shitty things, because the term "reference class" invites the obvious next questions: 1. What reference class? 2. Why is that the best reference class to use?

Comment by kokotajlod on Taboo "Outside View" · 2021-06-17T18:32:09.007Z · EA · GW

I agree it's hard to police how people use a word; thus, I figured it would be better to just taboo the word entirely. 

I totally agree that it's hard to use reference classes correctly, because of the reference class tennis problem. I figured it was outside the scope of this post to explain this, but I was thinking about making a follow-up... At any rate, I'm optimistic that if people actually use the term "reference class" instead of "outside view," it will remind them to notice that more than one reference class is available, that it's important to argue that the one they are using is the best, etc.

Comment by kokotajlod on How many times would nuclear weapons have been used if every state had them since 1950? · 2021-05-05T10:01:23.702Z · EA · GW

OK, you've convinced me! Nice!

Comment by kokotajlod on How many times would nuclear weapons have been used if every state had them since 1950? · 2021-05-05T04:43:58.162Z · EA · GW

I'm surprised you don't mention what seems to me to be the most likely scenario, scenario 0: mutually assured destruction, nuclear winter, etc. The world looks like scenario 1 or 2 up until some series of accidents and mistakes causes sufficiently many nukes to be fired that we end up in nuclear winter.

(Think about the history of Cold War nuclear close calls. Now imagine that sort of thing happening not just between two countries but everywhere. Surely there would be accidental escalations to full-on nuclear combat at least sometimes, and when two countries are going at it with nukes, that probably raises the chances of other countries getting involved, on purpose or by accident.)

Comment by kokotajlod on Consciousness research as a cause? [asking for advice] · 2021-05-02T10:21:31.027Z · EA · GW

Oops thanks!

Comment by kokotajlod on Qualia Research Institute: History & 2021 Strategy · 2021-04-06T13:07:02.258Z · EA · GW

Sorry for the delayed reply! Didn't notice this until now.

Sure, I'd be happy to see your slides, thanks! Looking at your post on FAI and valence, it looks like reasons no. 3, 4, 5, and 9 are somewhat plausible to me. I also agree that there might be philosophical path-dependencies in AI development and that doing some of the initial work ourselves might help to discover them--but I feel like QRI isn't aimed at this directly and could achieve this much better if it were; if it happens, it'll be a side effect of QRI's research.

For your flipped criticism: 

--I think bolstering the EA community and AI risk communities is a good idea
--I think "blue sky" research on global priorities, ethics, metaphilosophy, etc. is also a good idea if people seem likely to make progress on it
--Obviously I think AI safety, AI governance, etc. are valuable
--There are various other things that seem valuable because they support those things, e.g. trying to forecast the decline of collective epistemology and/or prevent it.

--There are various other things that don't impact AI safety but independently have a decently strong case that they are similarly important, e.g. ALLFED or pandemic preparedness.

--I'm probably missing a few things
--My metaphysical uncertainty... If you mean how uncertain am I about various philosophical questions like what is happiness, what is consciousness, etc., then the answer is "very uncertain." But I think the best thing to do is not try to think about it directly now, but rather to try to stabilize the world and get to the Long Reflection so we can think about it longer and better later.
 

Comment by kokotajlod on Qualia Research Institute: History & 2021 Strategy · 2021-04-01T09:49:23.107Z · EA · GW

Thanks for the detailed engagement!

Yep, that's roughly correct as a statement of my position. Thanks. I guess I'd put it slightly differently in some respects -- I'd say something like "A good test for whether to do some EA project is how likely it is that it's within a few orders of magnitude of being as good as AI safety work. There will be several projects for which we can tell a not-too-implausible story for how they are close to as good as, or better than, AI safety work, and then we can let tractability/neglectedness/fit considerations convince us to do them. But if we can't even tell such a story in the first place, that's a pretty bad sign." The general thought is: AI safety is the "gold standard" to compare against, since it's currently the No. 1 priority in my book. (If something else were No. 1, it would be my gold standard.)

I think QRI actually can tell such a story, I just haven't heard it yet. In the comments it seems that a story like this was sketched. I would be interested to hear it in more detail. I don't think the very abstract story of "we are trying to make good experiences but we don't know what experiences are" is plausible enough as a story for why this is close to as good as AI safety. (But I might be wrong about that too.)

re: A: Hmmm, fair enough that you disagree, but I have the opposite intuition.

re: B: Yeah, I think even the EA community underweights AI safety. I have loads of respect for people doing animal welfare stuff and global poverty stuff, but it just doesn't seem nearly as important as preventing everyone from being killed or worse in the near future. It also seems much less neglected--most of the quality-adjusted AI safety work is being done by EA-adjacent people, whereas that's not true (I think?) for animal welfare or global poverty stuff. As for tractability, I'm less sure how to make the comparison--it's obviously much more tractable to make SOME improvement to animal welfare or the lives of the global poor, but if we compare helping ALL the animals / ALL the global poor to AI safety, it actually seems less tractable (while still being less important and less neglected). There's a lot more to say about this topic obviously, and I worry I come across as callous or ignorant of various nuances... so let me just say I'd love to discuss with you further and hear your thoughts.

re: D: I'm certainly pretty uncertain about the improving-collective-sanity thing. One reason I'm more optimistic about it than about QRI is that I see how it plugs into AI safety: if we improve collective sanity, that massively helps with AI safety, whereas if we succeed at understanding consciousness better, how does that help with AI safety? (QRI seems to think it does, I just don't see it yet.) Therefore sanity-improvement can be thought of as similarly important to AI safety (or alternatively as a kind of AI safety intervention), and the remaining question is how tractable and neglected it is. I'm unsure, but one thing that makes me optimistic about tractability is that we don't need to improve the sanity of the entire world, just a few small parts of it--most importantly, our community, but also certain AI companies and (maybe) governments. And even if all we do is improve the sanity of our own community, that has a substantially positive effect on AI safety already, since so much of AI safety work comes from our community. As for neglectedness, yeah, IDK. Within our community there is a lot of focus on good epistemology and stuff already, so maybe the low-hanging fruit has been picked. But subjectively I get the impression that there are still good things to be doing--e.g. trying to forecast how collective epistemology in the relevant communities could change in the coming years, or building up new tools (such as Guesstimate or Metaculus)...

Comment by kokotajlod on Qualia Research Institute: History & 2021 Strategy · 2021-03-26T10:14:20.086Z · EA · GW

Good question.  Here are my answers:

  1. I don't think I would say the same thing to every project discussed on the EA forum. I think for every non-AI-focused project I'd say something similar (why not focus instead on AI?), but the bit about how I didn't find QRI's positive pitch compelling was specific to QRI. (I'm a philosopher, I love thinking about what things mean, but I think we've got to have a better story than "We are trying to make more good and less bad experiences, therefore we should try to objectively quantify and measure experience." Compare: Suppose it were WW2, 1939, and we were thinking of various ways to help the Allied war effort. An institute designed to study "What does war even mean anyway? What does it mean to win a war? Let's try to objectively quantify this so we can measure how much we are winning and optimize that metric" is not obviously a good idea. Like, it's definitely not harmful, but it wouldn't be top priority, especially if there are various other projects that seem super important, tractable, and neglected, such as preventing the Axis from getting atom bombs. I think of the EA community's position with respect to AI as analogous to the position re atom bombs held by the small cohort of people in 1939 "in the know" about the possibility. It would be silly for someone who knew about atom bombs in 1939 to instead focus on objectively defining war and winning.)
  2. But yeah, I would say to every non-AI-related project something like "Will your project be useful for making AI go well? How?" And I think that insofar as one could do good work on both AI safety stuff and something else, one should probably choose AI safety stuff. This isn't because I think AI safety stuff is DEFINITELY the most important, merely that I think it probably is. (Also I think it's more neglected AND tractable than many, though not all, of the alternatives people typically consider)
  3. Some projects I think are still worth pursuing even if they don't help make AI go well. For example, bio risk, preventing nuclear war, improving collective sanity/rationality/decision-making, ... (lots of other things could be added; it all depends on tractability + neglectedness + personal fit.) After all, maybe AI won't happen for many decades or even centuries. Or maybe one of those other risks is more likely to happen soon than it appears.
  4. Anyhow, to sum it all up: I agree that we shouldn't be super confident that AI is the most important thing. Depending on how broadly you define AI, I'm probably about 80-90% confident. And I agree that this means our community should explore a portfolio of ideas rather than just one. Nevertheless, I think even our community is currently less focused on AI than it should be, and I think AI is the "gold standard," so to speak, that projects should compare themselves to, and moreover I think QRI in particular has not done much to argue for its case. (Compare with, say, ALLFED, which has a pretty good case IMO: there's at least a 1% chance of some sort of global agricultural shortfall prior to AI getting crazy, and by default this will mean terrible collapse and famine, but if we prepare for this possibility it could instead mean much better things (people and institutions surviving, maybe learning)).
  5. My criticism is not directly of QRI but of their argument as presented here. I expect that if I talked with them and heard more of their views, I'd hear a better, more expanded version of the argument that would be much more convincing. In fact I'd say 40% chance QRI ends up seeming better than ALLFED to me after such a conversation. For example, I myself used to think that consciousness research was really important for making AI go well. It might not be so hard to convince me to switch back to that old position.

Comment by kokotajlod on AMA: Tom Chivers, science writer, science editor at UnHerd · 2021-03-19T08:25:09.961Z · EA · GW

Yes, thanks!  Some follow-ups:

1. To what extent do some journalists use the Chinese Robber Fallacy deliberately -- they know that they have a wide range of even-worse, even-bigger tragedies and scandals to report on, but they choose to report on the ones that let them push their overall ideology or political agenda? (And they choose not to report on the ones that seem to undermine or distract from their ideology/agenda)
2. Do you agree with the "The parity inverse of a meme is the same meme at a different point in its life cycle" idea? In other words, do you agree with the "Toxoplasma of Rage" thesis?
 

Comment by kokotajlod on Consciousness research as a cause? [asking for advice] · 2021-03-11T09:00:31.265Z · EA · GW

I currently think consciousness research is less important/tractable/neglected than AI safety, AI governance, and a few other things. The main reason is that it totally seems to me to be something we can "punt to the future" or "defer to more capable successors" to a large extent. However, I might be wrong about this. I haven't talked to QRI at length sufficient to truly evaluate their arguments. (See this exchange, which is about all I've got.)

Comment by kokotajlod on AMA: Tom Chivers, science writer, science editor at UnHerd · 2021-03-10T16:14:26.531Z · EA · GW

Thanks for doing this -- I'm a big fan of your book!

I'm interested to hear what you think this post about how media works gets right and gets wrong. In particular: (1)

A common misconception about propaganda is the idea it comes from deliberate lies (on the part of media outlets) or from money changing hands. In my personal experience colluding with the media no money changes hands and no (deliberate) lies are told by the media itself. ... Most media bias actually takes the form of selective reporting. ... Combine the Chinese Robbers Fallacy with a large pool of uncurated data and you can find facts to support any plausible thesis.

and (2)

Even when a news outlet is broadcasting a lie, their government is unlikely to prosecute them for promoting official government policy. Newspapers abnegate responsibility for truth by quoting official sources. You get away (legally) straight-up lying about medical facts if you are quoting the CDC.

News outlets' unquestioning reliance on official sources comes from the economics of their situation. It is cheaper to republish official statements without questioning them. The news outlet which produces the cheapest news outcompetes outlets with higher expenditure.

and (3)

The parity inverse of a meme is the same meme—at a different phase in its lifecycle. Two-sided conflicts are extremely virulent memes because they co-opt potential enemies.

and (4)

Media bias is not a game of science. It is a game of memetics. Memetics isn't about truth. It is about attention. Ask yourself "What are you thinking about and why are you thinking about it?"

Comment by kokotajlod on How many hits do the hits of different EA sites get each year? · 2021-03-05T11:06:21.816Z · EA · GW

Whoa, LessWrong beats SSC? That surprises me. 

Comment by kokotajlod on AMA: Ajeya Cotra, researcher at Open Phil · 2021-03-02T09:36:15.165Z · EA · GW

Update: The draft I mentioned is now a post! 

Comment by kokotajlod on Fun with +12 OOMs of Compute · 2021-03-02T08:38:50.275Z · EA · GW

I wonder if you think the EA community is too slow to update their strategies here. It feels like what is coming is easily among the most difficult things humanity ever has to get right and we could be doing much more if we all took current TAI forecasts more into account.

You guessed it -- I believe that most of EA's best and brightest will end up having approximately zero impact (compared to what they could have had) because they are planning for business-as-usual. The twenties are going to take a lot of people by surprise, I think. Hopefully EAs working their way up the academic hierarchy will at least be able to redirect prestige/status towards those who have been building up expertise in AI safety and AI governance, when the time comes.

Comment by kokotajlod on [Link post] Are we approaching the singularity? · 2021-02-14T11:48:18.955Z · EA · GW

I think that if I were going on outside-view economic arguments I'd probably be <50% singularity by 2100.

To what extent is this a repudiation of Roodman's outside-view projection? My guess is you'd say something like "This new paper is more detailed and trustworthy than Roodman's simple model, so I'm assigning it more weight, but still putting a decent amount of weight on Roodman's being roughly correct and that's why I said <50% instead of <10%."

Comment by kokotajlod on AMA: We Work in Operations at EA-aligned organizations. Ask Us Anything. · 2021-02-08T09:19:12.453Z · EA · GW

Thanks! How can an org give ops staff more freedom and involvement-if-they-want-it? What are some classic mistakes to avoid?

Comment by kokotajlod on AMA: We Work in Operations at EA-aligned organizations. Ask Us Anything. · 2021-02-08T09:18:20.744Z · EA · GW

Thanks! I wonder if some sort of two-tiered system would work, where there's a value-aligned staff member who is part of the core team and has lots of money and flexibility and so forth, and then they have a blank check to hire contractors who aren't value-aligned to do various things. That might help keep the value-aligned staff member from becoming overworked. Idk though, I have no idea what I'm talking about. What do you think?

Comment by kokotajlod on AMA: We Work in Operations at EA-aligned organizations. Ask Us Anything. · 2021-02-05T08:03:33.394Z · EA · GW

Do you think, on the margin, that EA orgs could get more and better ops work/people by paying substantially larger salaries?

Comment by kokotajlod on Books on authoritarianism, Russia, China, NK, democratic backsliding, etc.? · 2021-02-02T09:18:07.533Z · EA · GW

One point in favor of 1984 and Animal Farm is that Orwell was intimately familiar with real-life totalitarian regimes, having fought for the communists in Spain, etc. His writing is more credible IMO because he's criticizing the side he fought for rather than the side he fought against. (I mean, he's criticizing both, for sure--his critiques apply equally to fascism--but most authors who warn us of dystopian futures are warning us against their outgroup, so to speak, whereas Orwell is warning us against what used to be his ingroup.)

Comment by kokotajlod on AMA: Ajeya Cotra, researcher at Open Phil · 2021-02-02T09:08:32.382Z · EA · GW

Thanks, this was a surprisingly helpful answer, and I had high expectations!

This is updating me somewhat towards doing more blog posts of the sort that I've been doing. As it happens, I have a draft of one that is very much Category 3, let me know if you are interested in giving comments!

Your sense of why we disagree is pretty accurate, I think. The only thing I'd add is that I do think we should update downwards on low-end compute scenarios because of market efficiency considerations, just not as strongly as you do, perhaps; moreover, I also think that we should update upwards for various reasons (the surprising recent successes of deep learning, the fact that big corporations are investing heavily-by-historical-standards in AI, the fact that various experts think they are close to achieving AGI), and the upwards update mostly cancels out the downwards update IMO.

Comment by kokotajlod on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-29T11:34:00.064Z · EA · GW

Yep, my current median is something like 2032. It fluctuates depending on how I estimate it; sometimes I adjust it up or down a bit based on how I'm feeling in the moment, recent updates, etc.

Comment by kokotajlod on AMA: Ajeya Cotra, researcher at Open Phil · 2021-01-29T07:55:02.195Z · EA · GW

Hi Ajeya! I'm a huge fan of your timelines report; it's by far the best thing out there on the topic as far as I know. Whenever people ask me to explain my timelines, I say "It's like Ajeya's, except..."

My question is: how important do you think it is for someone like me to do timelines research, compared to other kinds of research (e.g. takeoff speeds, alignment, acausal trade...)?

I sometimes think that even if I managed to convince everyone to shift from median 2050 to median 2032 (an obviously unlikely scenario!), it still wouldn't matter much because people's decisions about what to work on are mostly driven by considerations of tractability, neglectedness, personal fit, importance, etc. and even that timelines difference would be a relatively minor consideration. On the other hand, intuitively it does feel like the difference between 2050 and 2032 is a big deal and that people who believe one when the other is true will probably make big strategic mistakes.

 

Bonus question: Murphyjitsu: Conditional on TAI being built in 2025, what happened? (i.e. how was it built, what parts of your model were wrong, what do the next 5 years look like, what do the 5 years after 2025 look like?)

 

Comment by kokotajlod on Qualia Research Institute: History & 2021 Strategy · 2021-01-28T07:19:10.805Z · EA · GW

Well said. I agree that that is a path to impact for the sort of work QRI is doing; it just seems lower-priority to me than other things like working on AI alignment or AI governance. Not to mention the tractability/neglectedness concerns (philosophy is famously intractable, and there's an entire academic discipline for it already).

Comment by kokotajlod on Qualia Research Institute: History & 2021 Strategy · 2021-01-27T08:58:28.038Z · EA · GW

Is emotional valence a particularly confused and particularly high-leverage topic, and one that might plausibly be particularly conductive getting clarity on? I think it would be hard to argue in the negative on the first two questions. Resolving the third question might be harder, but I’d point to our outputs and increasing momentum. I.e. one can levy your skepticism on literally any cause, and I think we hold up excellently in a relative sense. We may have to jump to the object-level to say more.

I don't think I follow.  Getting more clarity on emotional valence does not seem particularly high-leverage to me. What's the argument that it is?

To your second concern, I think a lot about AI and ‘order of operations’. ...  But might there be path-dependencies here such that the best futures happen if we gain more clarity on consciousness, emotional valence, the human nervous system, the nature of human preferences, and so on, before we reach certain critical thresholds in superintelligence development and capacity? Also — certainly.

Certainly? I'm much less sure. I actually used to think something like this; in particular, I thought that if we didn't program our AI to be good at philosophy, it would come to some wrong philosophical view about what consciousness is (e.g. physicalism, which I think is probably wrong) and then kill us all while thinking it was doing us a favor by uploading us (for example).

But now I think that programming our AI to be good at philosophy should be tackled directly, rather than indirectly by first solving philosophical problems ourselves and then programming the AI to know the solutions. For one thing, it's really hard to solve millennia-old philosophical problems in a decade or two. For another, there are many such problems to solve. Finally, our AI safety schemes probably won't involve feeding answers into the AI, so much as trying to get the AI to learn our reasoning methods and so forth, e.g. by imitating us.

Widening the lens a bit, qualia research is many things, and one of these things is an investment in the human-improvement ecosystem, which I think is a lot harder to invest effectively in (yet also arguably more default-safe) than the AI improvement ecosystem. Another ‘thing’ qualia research can be thought of as being is an investment in Schelling point exploration, and this is a particularly valuable thing for AI coordination.

I don't buy these claims yet. I guess I buy that qualia research might help improve humanity, but so would a lot of other things, e.g. exercise and nutrition. As for the Schelling point exploration thing, what does that mean in this context?

I’m confident that, even if we grant that the majority of humanity's future trajectory will be determined by AGI trajectory — which seems plausible to me — I think it’s also reasonable to argue that qualia research is one of the highest-leverage areas for positively influencing AGI trajectory and/or the overall AGI safety landscape.

I'm interested to hear those arguments!

Comment by kokotajlod on Qualia Research Institute: History & 2021 Strategy · 2021-01-26T08:24:22.355Z · EA · GW

Thanks for this detailed and well-written report! As a philosopher (and fan of the cyberpunk aesthetic :) ), I find your project really interesting and exciting, and I hope I get to meet you one day and learn more. However, I currently don't see the case for prioritising your project:

Isn’t it perplexing that we’re trying to reduce the amount of suffering and increase the amount of happiness in the world, yet we don’t have a precise definition for either suffering or happiness?

As a human collective, we want to create good futures. This encompasses helping humans have happier lives, preventing intense suffering wherever it may exist, creating safe AI, and improving animals’ lives too, both in farms and in the wild.

But what is happiness? And what is suffering?

Until we can talk about these things objectively, let alone measure and quantify them reliably, we’ll always be standing in murky water.

It seems like you could make this argument about pretty much any major philosophical question, e.g. "We're trying to reduce the amount of suffering and increase the amount of happiness in the world, yet we don't have a precise definition of 'the world,' or of 'we,' or of 'trying,' and we haven't rigorously established that this is what we should be doing anyway--and what does 'should' mean anyway?"

Meanwhile, here's my argument for why QRI's project shouldn't be prioritized:

--Crazy AI stuff will probably be happening in the next few decades, and if it doesn't go well, the impact of QRI's research will be (relatively) small or even negative.
--If it does go well, QRI's impact will still be small, because the sort of research QRI is doing would have been done anyway after AI stuff goes well. If other people don't do it, the current QRI researchers could do it, and probably do it even better thanks to advanced AI assistance.

 

Comment by kokotajlod on AMA: Elizabeth Edwards-Appell, former State Representative · 2021-01-19T08:37:54.433Z · EA · GW

Thanks! Makes sense. (To be clear, I wasn't saying that tight control by a single political faction would be a good thing... only that it would fix the polarization problem.) I think the Civil War era was probably more polarized than today, but that's not very comforting given what happened then. Ideally we'd be able to point to an era with greater-than-today polarization that didn't lead to mass bloodshed. I don't know much about the Jefferson-Adams thing but I'd be surprised if it was as bad as today.

Comment by kokotajlod on Lessons from my time in Effective Altruism · 2021-01-17T07:26:10.196Z · EA · GW

For personal fit stuff: I agree that for intellectual work, personal fit is very important. It's just that I have discovered, almost by accident, that I have more personal fit than I realized for things I wasn't trained in. (You may have made a similar discovery?) Had I prioritized personal fit less early on, I would have explored more. I still wonder what sorts of things I could be doing by now if I had tried to reskill instead of continuing in philosophy. Yeah, maybe I would have discovered that I didn't like it and gone back to philosophy, but maybe I would have discovered that I loved it. I guess this isn't against prioritizing personal fit per se, but against how past-me interpreted the advice to prioritize personal fit.

For engaging with people outside EA: I went to a philosophy PhD program and climbed the conventional academic hierarchy for a few years. I learned a bunch of useful stuff, but I also learned a bunch of useless stuff, and a bunch of stuff which is useful but plausibly not as useful as what I would have learned working for an EA org. When I look back on what I accomplished over the last five years, almost all of the best stuff seems to be things I did on the side, extracurricular to my academic work (e.g. doing internships at CEA etc.). I also made a bunch of friends outside EA, which I agree is nice in several ways (e.g. the ones you mention), but to my dismay I found it really hard to get people to lift a finger in the direction of helping the world, even if I could intellectually convince them that e.g. AI risk is worth taking seriously, or that the critiques and stereotypes of EA they heard were incorrect. As a counterpoint, I did have interactions with probably several dozen people, and maybe I caused more positive change than I could see, especially since the world's not over yet and there is still time for the effects of my conversations to grow. Still, though: I missed out on several years' worth of EA work and learning by going to grad school; that's a high opportunity cost.
As for learning things myself: I heard a lot of critiques of EA, learned a lot about other perspectives on the world, etc. but ultimately I don't think I would be any worse off in this regard if I had just gone into an EA org for the past five years instead of grad school.

 

Comment by kokotajlod on Lessons from my time in Effective Altruism · 2021-01-16T00:47:34.903Z · EA · GW

Thanks for this! I think my own experience has led to different lessons in some cases (e.g. I think I should have prioritised personal fit less and engaged less with people outside the EA community), but I nevertheless very much approve of this sort of public reflection.

Comment by kokotajlod on The ten most-viewed posts of 2020 · 2021-01-15T08:51:49.959Z · EA · GW

Good question. Yeah, how about views of the average post from 2020 in 2020? And ditto for 90th percentile.

Comment by kokotajlod on The ten most-viewed posts of 2020 · 2021-01-14T16:08:27.501Z · EA · GW

Out of curiosity, how many views does the average post get? What about the 90th-percentile post? 

Comment by kokotajlod on AMA: Elizabeth Edwards-Appell, former State Representative · 2021-01-10T10:45:44.445Z · EA · GW

going up against consensus in a deliberative body, be that my Committee or the General Assembly, and convincing my fellow Representatives to reverse course and vote the opposite way they had intended.

It's great to hear that this is not only possible but possible for one person to achieve multiple times in two years. Do you think you were able to do it significantly more often than the average representative? (e.g. because the average representative cares more about conforming to the pack than you and so tries to do this less often?)
 

Comment by kokotajlod on AMA: Elizabeth Edwards-Appell, former State Representative · 2021-01-10T10:42:35.343Z · EA · GW

What's your model for what's driving political polarization in the US? My model is basically that the internet + a few other technologies is allowing people to sort themselves into filter bubbles, and also toxoplasma of rage stuff is making the bubbles fight each other instead of ignore each other. On this model, things aren't going to get significantly less polarized until our media is tightly controlled by a single political faction.
 

Comment by kokotajlod on Can I have impact if I’m average? · 2021-01-03T14:10:33.282Z · EA · GW

I think I basically agree with you here. I don't have much to say by way of positive proposals, but maybe this blog post is helpful: http://mindingourway.com/the-value-of-a-life/ Basically, the value of a life should be measured in stars (or something even bigger!), even though the price of a life should be measured in dollars or work-hours. Thus if you do something impactful but less-than-maximally impactful, you should still feel proud, because e.g. the life you contributed to saving is immensely, astronomically valuable.

Comment by kokotajlod on Good altruistic decision-making as a deep basin of attraction in meme-space · 2021-01-03T02:32:01.533Z · EA · GW

Interesting post! I'm excited to see more thinking about memetics, for reasons sketched here and here. Some thoughts:

--In my words, what you've done is point out that approximate-consequentialism + large-scale preferences is an attractor. People with small-scale preferences (such as just caring about what happens to their village, or their family, or themselves, or a particular business) don't have much to gain by spreading their memeplex to others. And people who aren't anywhere close to being consequentialists might intellectually agree that spreading their memeplex to others would result in their preferences being satisfied to a greater extent, but this isn't particularly likely to motivate them to do it. But people who are approximately consequentialist and who have large-scale preferences will be strongly motivated to spread their memeplex, because doing so is a convergent instrumental goal for people with large-scale preferences. Does this seem like a fair summary to you? 

--I guess it leaves out the "truth-seeking" bit, maybe that should be bundled up with consequentialism. But I think that's not super necessary. It's not hard for people to come to believe that spreading their memeplex will be good by their lights; that is, you don't have to be a rationalist to come to believe this. It's pretty obvious.

--I think it's not obvious this is the strongest attractor, in a world full of memetic attractors. Most major religions are memetic attractors, and they often rely on things other than convergent instrumental goals to motivate their members to spread the memeplex. And they've been extremely successful, far more so than "truth-seeking self-aware altruistic decision-making," even though that memeplex has been around for millennia too.

--On the other hand, maybe truth-seeking self-aware altruistic decision-making has actually been even more successful than every major religion and ideology, and we just don't realize it because, as a result of being truth-seeking, the memeplex morphs constantly and thus isn't recognized as a single memeplex. (By contrast with religions and ideologies, which enforce conformity and dogma and thus maintain obvious continuity over many years and much territory.)

Comment by kokotajlod on [deleted post] 2021-01-02T11:05:09.191Z

Sounds good.

Comment by kokotajlod on [Crosspost] Relativistic Colonization · 2021-01-01T01:48:50.868Z · EA · GW

Mmm, good point. Perhaps the way to salvage the concept of a singleton is to define it as the opposite of Moloch, i.e. a future is ruled by a singleton to the extent that it doesn't have Moloch-like forces causing drift towards outcomes that nobody wants, money being left on the table, etc. Or maybe we could just say a singleton is where outcomes are on or close to the Pareto frontier. Idk.

Comment by kokotajlod on [Crosspost] Relativistic Colonization · 2020-12-31T19:17:39.231Z · EA · GW

Agreed on all counts except that I like the concept of a singleton. I'd be interested to hear why you don't, if you wish to discuss it.

Comment by kokotajlod on [deleted post] 2020-12-31T17:35:02.015Z

Thanks! How about these: 

"Effective altruists believe you'll 1000x more good if you prioritize impact"
"Effective altruists believe you'll 1000x more good if you actually try to do the most good you can."
"Effective altruists believe you'll do 1000x more good if you shut up and calculate"

"Effective altruists believe you'll do 1000x more good if you take cost-effectiveness calculations seriously"

 

I think the third one is my favorite, haha, but the second one is what I think would actually be best.

Comment by kokotajlod on Against GDP as a metric for timelines and takeoff speeds · 2020-12-31T17:26:21.541Z · EA · GW

Thanks! Yes, I think stock in AI companies is a significantly better metric than world GDP. I still think it's not a great metric, because some of the arguments/reasons I gave above still apply. But others don't.

I think forecasting platforms are definitely something to take seriously. I reserve the right to disagree with them sometimes though. :)

As for additional stuff we care about regarding takeoff speeds... Yeah, your comment and others are increasingly convincing me that my list wasn't exhaustive. There are a bunch of variables we care about, and there's lots of intellectual work to be done thinking about how they correlate and interact. 

Comment by kokotajlod on [Crosspost] Relativistic Colonization · 2020-12-31T14:06:36.547Z · EA · GW

Am I right in thinking the conclusion is something like this:

If we get a singleton on Earth, which then has a monopoly on space colonization forever, it does the Armstrong-Sandberg method and colonizes the whole universe extremely efficiently. If instead we have some sort of competitive multipolar scenario, where Moloch reigns, most of the cosmic commons gets burnt up in competition between probes on the hardscrapple frontier?

If so, that seems like a reasonably big deal. It's an argument that we should try to avoid scenarios in which powerful space tech is developed prior to a singleton forming. Perhaps this means we should hope for a fast takeoff rather than a slow takeoff, for example.

 

Comment by kokotajlod on [deleted post] 2020-12-31T13:57:50.232Z

Here's what I wish the low-resolution version was:

"Effective altruists believe that if you actually try to do as much good as you can with your money or time, you'll do thousands of times more good than if you donate in the usual ways. They also think that you should do this."

Comment by kokotajlod on Against GDP as a metric for timelines and takeoff speeds · 2020-12-30T00:42:12.113Z · EA · GW

OK, thanks. I'm not sure how you calculated that, but I'll take your word for it. My hypothetical observer is seeming pretty silly then -- I guess I had been thinking that the growth prior to 1700 was fast but not much faster than it had been at various times in the past, and in fact much slower than it had been in 1350 (I had discounted that, but if we don't, then that supports my point), so a hypothetical observer would be licensed to discount the growth prior to 1700 as maybe just catch-up + noise. But then by the time the data for 1700 comes in, it's clear a fundamental change has happened. I guess the modern-day parallel would be if a pandemic or economic crisis depresses growth for a bit, and then there's a sustained period of growth afterwards in which the economy doubles in 7 years, and there's all sorts of new technology involved, but it's still respectable for economists to say it's just catch-up growth + noise, at least until year 5 or so of the 7-year doubling. Is this fair?
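
(For concreteness, a quick sketch of the arithmetic behind that 7-year doubling, assuming steady compound growth:)

```python
# Annualized growth rate implied by an economy that doubles every 7 years,
# assuming steady compound growth: solve (1 + r)**7 == 2 for r.
doubling_time_years = 7
implied_annual_growth = 2 ** (1 / doubling_time_years) - 1
print(f"{implied_annual_growth:.1%}")  # ~10.4% per year, vs. the fractions of a percent typical of pre-modern growth
```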

There definitely wasn't 0.14% growth over 5000 years. But according to my data there was 0.12% in 700, 0.23% in 900, 0.11% in 1000 and 1100, 0.47% in 1350, and 0.21% in 1400. So 0.14% fits right in; 0.14% over a 500-year period is indeed more impressive, but not that impressive when there are multiple 100-year periods with higher growth than that worldwide (and thus presumably longer periods with higher growth, in cherry-picked locations around the world).

Anyhow, the important thing is how much we disagree, and maybe it's not much. I certainly think the scenario you sketch is plausible, but I think "faster" scenarios, and scenarios with more of a disconnect between GWP and PONR, are also plausible. Thanks to you, I am updating towards thinking the historical case of the Industrial Revolution provides less support for that second bit than I thought.

Comment by kokotajlod on Against GDP as a metric for timelines and takeoff speeds · 2020-12-29T20:15:23.060Z · EA · GW

Thanks for the reply -- Yeah, I totally agree that GDP of the most advanced countries is a better metric than GWP, since presumably GDP will accelerate first in a few countries before it accelerates in the world as a whole. I think most of the points made in my post still work, however, even against the more reasonable metric of GDP-of-the-most-technologically-advanced-country.

Moreover, I think even the point you were specifically critiquing still stands: If AI will be like the Industrial Revolution but faster, then crazy stuff will be happening pretty early on in the curve.

Here's the data I got from Wikipedia a while back on world GDP growth rates. The columns are: year, years before 2020, GWP (in billions), and the extrapolated annual growth rate.
 

Year    Years before 2020    GWP (billions)    Annual growth rate
1700    320                  99.8              0.40%
1650    370                  81.74             0.12%
1600    420                  77.01             0.27%
1500    520                  58.67             0.27%
1400    620                  44.92             0.21%
1350    670                  40.5              0.47%
1300    720                  32.09             -0.21%
1250    770                  35.58             -0.10%
1200    820                  37.44             -0.06%
1100    920                  39.6              0.11%
1000    1020                 35.31             0.11%
900     1120                 31.68             0.23%
800     1220                 25.23             0.07%
700     1320                 23.44             0.12%
600     1420                 20.86             0.05%
500     1520                 19.92             0.08%
400     1620                 18.44             0.06%
350     1670                 17.93             -0.02%
200     1820                 18.54             0.03%
14      2006                 17.5              -0.43%
1       2019                 18.5              0.04%
-200    2220                 17                0.03%
-400    2420                 16.02             0.16%
-500    2520                 13.72             0.12%
-800    2820                 9.72              0.21%
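
(A quick way to reproduce the right-hand column -- a sketch, assuming each listed rate is simply the annualized growth since the previous data point; that assumption matches the column above to within rounding for nearly every row:)

```python
# Reconstruct the extrapolated annual growth rate column from the GWP figures,
# treating each rate as the annualized (geometric-average) growth since the previous data point.
data = [  # (year, GWP in billions), oldest first
    (-800, 9.72), (-500, 13.72), (-400, 16.02), (-200, 17.0), (1, 18.5),
    (14, 17.5), (200, 18.54), (350, 17.93), (400, 18.44), (500, 19.92),
    (600, 20.86), (700, 23.44), (800, 25.23), (900, 31.68), (1000, 35.31),
    (1100, 39.6), (1200, 37.44), (1250, 35.58), (1300, 32.09), (1350, 40.5),
    (1400, 44.92), (1500, 58.67), (1600, 77.01), (1650, 81.74), (1700, 99.8),
]

for (y0, gwp0), (y1, gwp1) in zip(data, data[1:]):
    rate = (gwp1 / gwp0) ** (1 / (y1 - y0)) - 1  # annualized growth over the interval
    print(f"{y1:>5}: {rate:+.2%}")  # e.g. 1700: +0.40%, 1350: +0.47%, 1300: -0.21%
```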

On this data at least, 1700 is the first time an observer would say "OK yeah maybe we are transitioning to a new faster growth mode" (assuming you discount 1350 as I do as an artefact of recovering from various disasters). Moreover, it seems to contradict your claim that 0.14% growth was already high by historical standards. (Your data was for population whereas mine is for GWP, maybe that accounts for the discrepancy.)

EDIT: Also, I picked 1700 as precisely the time when "Things seem to be blowing up" first became true. My point was that the point of no return was already past by then. 

To be fair, maybe my data is shitty.

Comment by kokotajlod on 2020 AI Alignment Literature Review and Charity Comparison · 2020-12-21T15:34:32.980Z · EA · GW

Typo: You say 2020 when you should say 2019 at the beginning.

Comment by kokotajlod on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-12-16T13:00:47.490Z · EA · GW

I made it up, but it's inspired by reading this short story. (I have a stash of quotes I find inspirational, and sometimes I make up stuff to put in the stash. Having to come up with wedding vows was part of my motivation.)

Comment by kokotajlod on Idea: "Change the World University" · 2020-12-07T09:31:44.697Z · EA · GW

I've seen graduation and commencement speeches for about four different universities. I think every university presents itself as helping its students change the world. Your proposal is to make this even more explicit than it already is.

I don't think jadedness really captures most of what's going on. I think people correctly realize that the world is more complicated and confusing and hard to change than they thought, and full of grey areas they don't understand rather than black and white, good guys and bad guys, etc. But to say that jadedness stopped them from trying to change the world feels off to me; rather, they naively thought it would be easy and simple and then got confused and lost interest when they realized it wasn't. 

If they were actually trying to change the world -- if they were actually strongly motivated to make the world a better place, etc. -- the stuff they learn in college wouldn't stop them.

Comment by kokotajlod on Donating against Short Term AI risks · 2020-12-04T12:24:24.920Z · EA · GW

Not yet, thanks for introducing it to me!

Comment by kokotajlod on Is this a good way to bet on short timelines? · 2020-12-02T19:51:06.374Z · EA · GW

Yes. As I explained in my previous post, it's not money I'm after, but rather knowledge and help.

Comment by kokotajlod on Is this a good way to bet on short timelines? · 2020-12-02T11:36:00.898Z · EA · GW

OK, cool, yes, let's talk sometime! Will send a PM.