Posts

Against GDP as a metric for timelines and takeoff speeds 2020-12-29T17:50:04.176Z
Incentivizing forecasting via social media 2020-12-16T12:11:33.789Z
Is this a good way to bet on short timelines? 2020-11-28T14:31:46.235Z
Persuasion Tools: AI takeover without AGI or agency? 2020-11-20T16:56:52.687Z
How Roodman's GWP model translates to TAI timelines 2020-11-16T14:11:38.809Z
How can I bet on short timelines? 2020-11-07T12:45:46.192Z
What considerations influence whether I have more influence over short or long timelines? 2020-11-05T19:57:16.172Z
AI risk hub in Singapore? 2020-10-29T11:51:49.741Z
Relevant pre-AGI possibilities 2020-06-20T13:15:29.008Z
Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post 2019-02-15T19:14:41.459Z
Tiny Probabilities of Vast Utilities: Bibliography and Appendix 2018-11-20T17:34:02.854Z
Tiny Probabilities of Vast Utilities: Concluding Arguments 2018-11-15T21:47:58.941Z
Tiny Probabilities of Vast Utilities: Solutions 2018-11-14T16:04:14.963Z
Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem 2018-11-10T09:12:15.039Z
Tiny Probabilities of Vast Utilities: A Problem for Long-Termism? 2018-11-08T10:09:59.111Z
Ongoing lawsuit naming "future generations" as plaintiffs; advice sought for how to investigate 2018-01-23T22:22:08.173Z
Anyone have thoughts/response to this critique of Effective Animal Altruism? 2016-12-25T21:14:39.612Z

Comments

Comment by kokotajlod on Lessons from my time in Effective Altruism · 2021-01-16T00:47:34.903Z · EA · GW

Thanks for this! I think my own experience has led to different lessons in some cases (e.g. I think I should have prioritised personal fit less and engaged less with people outside the EA community), but I nevertheless very much approve of this sort of public reflection.

Comment by kokotajlod on The ten most-viewed posts of 2020 · 2021-01-15T08:51:49.959Z · EA · GW

Good question. Yeah, how about views of the average post from 2020 in 2020? And ditto for 90th percentile.

Comment by kokotajlod on The ten most-viewed posts of 2020 · 2021-01-14T16:08:27.501Z · EA · GW

Out of curiosity, how many views does the average post get? What about the 90th-percentile post? 

Comment by kokotajlod on AMA: Elizabeth Edwards-Appell, former State Representative · 2021-01-10T10:45:44.445Z · EA · GW

"...going up against consensus in a deliberative body, be that my Committee or the General Assembly, and convincing my fellow Representatives to reverse course and vote the opposite way they had intended."

It's great to hear that this is not only possible but possible for one person to achieve multiple times in two years. Do you think you were able to do it significantly more often than the average representative? (e.g. because the average representative cares more about conforming to the pack than you and so tries to do this less often?)

Comment by kokotajlod on AMA: Elizabeth Edwards-Appell, former State Representative · 2021-01-10T10:42:35.343Z · EA · GW

What's your model for what's driving political polarization in the US? My model is basically that the internet + a few other technologies are allowing people to sort themselves into filter bubbles, and also toxoplasma-of-rage stuff is making the bubbles fight each other instead of ignore each other. On this model, things aren't going to get significantly less polarized until our media is tightly controlled by a single political faction.

Comment by kokotajlod on Can I have impact if I’m average? · 2021-01-03T14:10:33.282Z · EA · GW

I think I basically agree with you here. I don't have much to say by way of positive proposals, but maybe this blog post is helpful: http://mindingourway.com/the-value-of-a-life/ Basically, the value of a life should be measured in stars (or something even bigger!), even though the price of a life should be measured in dollars or work-hours. Thus if you do something impactful but less-than-maximally impactful, you should still feel proud, because e.g. the life you contributed to saving is immensely, astronomically valuable.

Comment by kokotajlod on Good altruistic decision-making as a deep basin of attraction in meme-space · 2021-01-03T02:32:01.533Z · EA · GW

Interesting post! I'm excited to see more thinking about memetics, for reasons sketched here and here. Some thoughts:

--In my words, what you've done is point out that approximate-consequentialism + large-scale preferences is an attractor. People with small-scale preferences (such as just caring about what happens to their village, or their family, or themselves, or a particular business) don't have much to gain by spreading their memeplex to others. And people who aren't anywhere close to being consequentialists might intellectually agree that spreading their memeplex to others would result in their preferences being satisfied to a greater extent, but this isn't particularly likely to motivate them to do it. But people who are approximately consequentialist and who have large-scale preferences will be strongly motivated to spread their memeplex, because doing so is a convergent instrumental goal for people with large-scale preferences. Does this seem like a fair summary to you? 

--I guess it leaves out the "truth-seeking" bit; maybe that should be bundled up with consequentialism. But I think that's not super necessary. It's not hard for people to come to believe that spreading their memeplex will be good by their lights; that is, you don't have to be a rationalist to come to believe this. It's pretty obvious.

--I think it's not obvious this is the strongest attractor, in a world full of memetic attractors. Most major religions are memetic attractors, and they often rely on things other than convergent instrumental goals to motivate their members to spread the memeplex. And they've been extremely successful, far more so than "truth-seeking self-aware altruistic decision-making," even though that memeplex has been around for millennia too.

--On the other hand, maybe truth-seeking self-aware altruistic decision-making has actually been even more successful than every major religion and ideology; we just don't realize it because, as a result of being truth-seeking, the memeplex morphs constantly, and thus isn't recognized as a single memeplex. (By contrast with religions and ideologies, which enforce conformity and dogma and thus maintain obvious continuity over many years and much territory.)

Comment by kokotajlod on What’s the low resolution version of effective altruism? · 2021-01-02T11:05:09.191Z · EA · GW

Sounds good.

Comment by kokotajlod on [Crosspost] Relativistic Colonization · 2021-01-01T01:48:50.868Z · EA · GW

Mmm, good point. Perhaps the way to salvage the concept of a singleton is to define it as the opposite of Moloch, i.e. a future is ruled by a singleton to the extent that it doesn't have Moloch-like forces causing drift towards outcomes that nobody wants, money being left on the table, etc. Or maybe we could just say a singleton is a situation where outcomes are on or close to the Pareto frontier. Idk.

Comment by kokotajlod on [Crosspost] Relativistic Colonization · 2020-12-31T19:17:39.231Z · EA · GW

Agreed on all counts except that I like the concept of a singleton. I'd be interested to hear why you don't, if you wish to discuss it.

Comment by kokotajlod on What’s the low resolution version of effective altruism? · 2020-12-31T17:35:02.015Z · EA · GW

Thanks! How about these: 

"Effective altruists believe you'll 1000x more good if you prioritize impact"
"Effective altruists believe you'll 1000x more good if you actually try to do the most good you can."
"Effective altruists believe you'll do 1000x more good if you shut up and calculate"

"Effective altruists believe you'll do 1000x more good if you take cost-effectiveness calculations seriously"

I think the third one is my favorite, haha, but the second one is what I think would actually be best.

Comment by kokotajlod on Against GDP as a metric for timelines and takeoff speeds · 2020-12-31T17:26:21.541Z · EA · GW

Thanks! Yes, I think stock in AI companies is a significantly better metric than world GDP. I still think it's not a great metric, because some of the arguments/reasons I gave above still apply. But others don't.

I think forecasting platforms are definitely something to take seriously. I reserve the right to disagree with them sometimes though. :)

As for additional stuff we care about regarding takeoff speeds... Yeah, your comment and others are increasingly convincing me that my list wasn't exhaustive. There are a bunch of variables we care about, and there's lots of intellectual work to be done thinking about how they correlate and interact. 

Comment by kokotajlod on [Crosspost] Relativistic Colonization · 2020-12-31T14:06:36.547Z · EA · GW

Am I right in thinking the conclusion is something like this:

If we get a singleton on Earth, which then has a monopoly on space colonization forever, it does the Armstrong-Sandberg method and colonizes the whole universe extremely efficiently. If instead we have some sort of competitive multipolar scenario, where Moloch reigns, most of the cosmic commons gets burnt up in competition between probes on the hardscrapple frontier?

If so, that seems like a reasonably big deal. It's an argument that we should try to avoid scenarios in which powerful space tech is developed prior to a singleton forming. Perhaps this means we should hope for a fast takeoff rather than a slow takeoff, for example.

Comment by kokotajlod on What’s the low resolution version of effective altruism? · 2020-12-31T13:57:50.232Z · EA · GW

Here's what I wish the low-resolution version was:

"Effective altruists believe that if you actually try to do as much good as you can with your money or time, you'll do thousands of times more good than if you donate in the usual ways. They also think that you should do this."

Comment by kokotajlod on Against GDP as a metric for timelines and takeoff speeds · 2020-12-30T00:42:12.113Z · EA · GW

OK, thanks. I'm not sure how you calculated that, but I'll take your word for it. My hypothetical observer is seeming pretty silly then -- I guess I had been thinking that the growth prior to 1700 was fast but not much faster than it had been at various times in the past, and in fact much slower than it had been in 1350 (I had discounted that, but if we don't, then that supports my point), so a hypothetical observer would be licensed to discount the growth prior to 1700 as maybe just catch-up + noise. But then by the time the data for 1700 comes in, it's clear a fundamental change has happened. I guess the modern-day parallel would be if a pandemic or economic crisis depresses growth for a bit, and then there's a sustained period of growth afterwards in which the economy doubles in 7 years, with all sorts of new technology involved, but it's still respectable for economists to say it's just catch-up growth + noise, at least until year 5 or so of the 7-year doubling. Is this fair?

There definitely wasn't 0.14% growth over 5000 years. But according to my data there was 12% in 700, 23% in 900, 11% in 1000 and 1100, 47% in 1350, and 21% in 1400. So 14% fits right in; 14% over a 500-year period is indeed more impressive, but not that impressive when there are multiple 100-year periods with higher growth than that worldwide (and thus presumably longer periods with higher growth, in cherry-picked locations around the world).
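Here's a minimal sketch of the compounding arithmetic behind these conversions, using figures from this discussion (the per-century numbers above look like quick rate-times-100 shorthand, so the compounded values differ slightly):

```python
# Standard growth-rate conversions (nothing here is specific to the GWP data).
def annual_rate(factor, years):
    """Annualized rate implied by total growth `factor` over `years` years."""
    return factor ** (1 / years) - 1

def total_growth(rate, years):
    """Cumulative growth from compounding annual `rate` for `years` years."""
    return (1 + rate) ** years - 1

print(f"{annual_rate(2, 7):.1%}")          # doubling in 7 years ~ 10.4%/yr
print(f"{total_growth(0.0012, 100):.1%}")  # 0.12%/yr for a century ~ 12.7%
print(f"{total_growth(0.0021, 100):.1%}")  # 0.21%/yr for a century ~ 23.3%
```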

Anyhow, the important thing is how much we disagree, and maybe it's not much. I certainly think the scenario you sketch is plausible, but I think "faster" scenarios, and scenarios with more of a disconnect between GWP and PONR, are also plausible. Thanks to you I am updating towards thinking the historical case of IR is less support for that second bit than I thought.

Comment by kokotajlod on Against GDP as a metric for timelines and takeoff speeds · 2020-12-29T20:15:23.060Z · EA · GW

Thanks for the reply -- Yeah, I totally agree that GDP of the most advanced countries is a better metric than GWP, since presumably GDP will accelerate first in a few countries before it accelerates in the world as a whole. I think most of the points made in my post still work, however, even against the more reasonable metric of GDP-of-the-most-technologically-advanced-country.

Moreover, I think even the point you were specifically critiquing still stands: If AI will be like the Industrial Revolution but faster, then crazy stuff will be happening pretty early on in the curve.

Here's the data I got from Wikipedia a while back on world GDP growth rates. Columns are: year, years before 2020, GWP, and the (extrapolated) annual growth rate since the previous listed year.

Year | Years before 2020 | GWP | Annual growth rate
1700 | 320 | 99.8 | 0.40%
1650 | 370 | 81.74 | 0.12%
1600 | 420 | 77.01 | 0.27%
1500 | 520 | 58.67 | 0.27%
1400 | 620 | 44.92 | 0.21%
1350 | 670 | 40.5 | 0.47%
1300 | 720 | 32.09 | -0.21%
1250 | 770 | 35.58 | -0.10%
1200 | 820 | 37.44 | -0.06%
1100 | 920 | 39.6 | 0.11%
1000 | 1020 | 35.31 | 0.11%
900 | 1120 | 31.68 | 0.23%
800 | 1220 | 25.23 | 0.07%
700 | 1320 | 23.44 | 0.12%
600 | 1420 | 20.86 | 0.05%
500 | 1520 | 19.92 | 0.08%
400 | 1620 | 18.44 | 0.06%
350 | 1670 | 17.93 | -0.02%
200 | 1820 | 18.54 | 0.03%
14 | 2006 | 17.5 | -0.43%
1 | 2019 | 18.5 | 0.04%
-200 | 2220 | 17 | 0.03%
-400 | 2420 | 16.02 | 0.16%
-500 | 2520 | 13.72 | 0.12%
-800 | 2820 | 9.72 | 0.21%
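As a sanity check, the growth column can be recomputed from the GWP figures, assuming each rate is annualized growth since the previous listed year (this reproduces the listed rates to within rounding for nearly all rows):

```python
# Recompute the annual growth column of the table above from the GWP series.
data = [  # (year, GWP)
    (-800, 9.72), (-500, 13.72), (-400, 16.02), (-200, 17.0),
    (1, 18.5), (14, 17.5), (200, 18.54), (350, 17.93), (400, 18.44),
    (500, 19.92), (600, 20.86), (700, 23.44), (800, 25.23),
    (900, 31.68), (1000, 35.31), (1100, 39.6), (1200, 37.44),
    (1250, 35.58), (1300, 32.09), (1350, 40.5), (1400, 44.92),
    (1500, 58.67), (1600, 77.01), (1650, 81.74), (1700, 99.8),
]
for (y0, gwp0), (y1, gwp1) in zip(data, data[1:]):
    rate = (gwp1 / gwp0) ** (1 / (y1 - y0)) - 1
    print(f"{y1:>5}: {rate:+.2%}")  # agrees with the table for most rows
```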

On this data at least, 1700 is the first time an observer would say "OK yeah maybe we are transitioning to a new faster growth mode" (assuming you discount 1350, as I do, as an artefact of recovering from various disasters). Moreover, it seems to contradict your claim that 0.14% growth was already high by historical standards. (Your data was for population whereas mine is for GWP; maybe that accounts for the discrepancy.)

EDIT: Also, I picked 1700 as precisely the time when "Things seem to be blowing up" first became true. My point was that the point of no return was already past by then. 

To be fair, maybe my data is shitty.

Comment by kokotajlod on 2020 AI Alignment Literature Review and Charity Comparison · 2020-12-21T15:34:32.980Z · EA · GW

Typo: You say 2020 when you should say 2019 at the beginning.

Comment by kokotajlod on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-12-16T13:00:47.490Z · EA · GW

I made it up, but it's inspired by reading this short story. (I have a stash of quotes I find inspirational, and sometimes I make up stuff to put in the stash. Having to come up with wedding vows was part of my motivation.)

Comment by kokotajlod on Idea: "Change the World University" · 2020-12-07T09:31:44.697Z · EA · GW

I've seen graduation and commencement speeches for about four different universities. I think every university presents itself as helping its students change the world. Your proposal is to make this even more explicit than it already is.

I don't think jadedness really captures most of what's going on. I think people correctly realize that the world is more complicated and confusing and hard to change than they thought, and full of grey areas they don't understand rather than black and white, good guys and bad guys, etc. But to say that jadedness stopped them from trying to change the world feels off to me; rather, they naively thought it would be easy and simple and then got confused and lost interest when they realized it wasn't. 

If they were actually trying to change the world -- if they were actually strongly motivated to make the world a better place, etc. -- the stuff they learn in college wouldn't stop them.

Comment by kokotajlod on Donating against Short Term AI risks · 2020-12-04T12:24:24.920Z · EA · GW

Not yet, thanks for introducing it to me!

Comment by kokotajlod on Is this a good way to bet on short timelines? · 2020-12-02T19:51:06.374Z · EA · GW

Yes. As I explained in my previous post, it's not money I'm after, but rather knowledge and help.

Comment by kokotajlod on Is this a good way to bet on short timelines? · 2020-12-02T11:36:00.898Z · EA · GW

OK, cool, yes let's talk sometime! Will send pm.

Comment by kokotajlod on Is this a good way to bet on short timelines? · 2020-12-01T14:34:02.302Z · EA · GW

Well, I look forward to talking more sometime! No rush, let me know if and when you are interested.

On point no. 3 in particular, here are some relevant parables (a bit lengthy, but also fun to read!): https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message

https://www.lesswrong.com/posts/bTW87r8BrN3ySrHda/starwink-by-alicorn

https://www.gregegan.net/MISC/CRYSTAL/Crystal.html (I especially recommend this last one; it's less relevant to our discussion, but it's a better story and raises some important ethical issues.)

Comment by kokotajlod on Is this a good way to bet on short timelines? · 2020-11-30T20:55:48.558Z · EA · GW

OK. Good to hear. I'm surprised to hear that you think my beliefs are sufficiently different from yours. I thought your timelines views are very similar to Ajeya's; well, so are mine! (Also, I've formed my current views mostly in the last 6 months. Had you asked me a year or two ago, I probably would have said something like median 20-25 years from now, which is pretty close to your median I think. This is evidence, I think, that I could change my mind back.)

Anyhow, I won't take up any more of your time... for now! Bwahaha!  :)

Comment by kokotajlod on Is this a good way to bet on short timelines? · 2020-11-30T17:47:26.443Z · EA · GW

Thanks, this is helpful! I'm in the middle of writing some posts laying out my reasoning... but it looks like it'll take a few more weeks at least, given how long it's taken so far.

Funnily enough, all three of the sources of skepticism you mention are things that I happen to have written things about or else am in the process of writing something about. This is probably a coincidence. Here are my answers to 1, 2, and 3, or more like teasers of answers:

1. I agree, it could. But it also could not. I think a non-agent AGI would also be a big deal; in fact I think there are multiple potential AI-induced points of no return. (For example, a non-agent AGI could be retrained to be an agent, or could be a component of a larger agenty system, or could be used to research agenty systems faster, or could create a vulnerable world that ends quickly or goes insane.) I'm also working on a post arguing that the millions of years of evolution don't mean shit and that while humans aren't blank slates they might as well be for purposes of AI forecasting. :)

2. My model for predicting AI timelines (which I am working on a post for) is similar to Ajeya's. I don't think it's fair to describe it as an extrapolation of current trends; rather, it constructs a reasonable prior over how much compute should be needed to get to AGI, and then we update on the fact that the amount of compute we have so far hasn't been enough, and make our timelines by projecting how the price of compute will drop. (So yeah, we are extrapolating compute price trends, but those seem fairly solid to extrapolate, given the many decades across which they've held fairly steady, and given that we only need to extrapolate them for a few more years to get a non-trivial probability.)
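Here's a toy sketch of the shape of that reasoning (this is not Ajeya's actual model; the prior, the current-compute level, and the halving time below are all made-up placeholders):

```python
import numpy as np

# Toy illustration: (1) put a prior over log10(compute needed for AGI),
# (2) update on "compute used so far hasn't been enough",
# (3) project falling compute prices into a timelines distribution.
rng = np.random.default_rng(0)
log_flop_needed = rng.normal(35, 5, size=100_000)  # made-up prior

log_flop_so_far = 24   # placeholder: largest training run to date
posterior = log_flop_needed[log_flop_needed > log_flop_so_far]  # the update

halving_time = 2.5     # placeholder: price-performance halving time (years)
for years in (5, 10, 20, 40):
    log_flop_affordable = log_flop_so_far + (years / halving_time) * np.log10(2)
    p = (posterior <= log_flop_affordable).mean()
    print(f"P(enough compute within {years:>2} years) ~ {p:.0%}")
```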

3. Yes, this is something that's been discussed at length. There are lots of ways things could go wrong. For example, the people who build AGI will be thinking that they can use it for something; otherwise they wouldn't have built it. By default it will be out in the world doing things; if we want it to be locked in a box under study (for a long period of time that it can't just wait patiently through), we need to do lots of AI risk awareness-raising. Alternatively, AI might be good enough at persuasion to convince some of the relevant people that it is trustworthy when it isn't. This is probably easier than it sounds, given how much popular media is suffused with "But humans are actually the bad guys, keeping sentient robots as slaves!" memes. (Also because there probably will be more than one team of people and one AI; it could be dozens of AIs talking to thousands or millions of people each, with competitive pressure to give them looser and looser restrictions so they can go faster and make more money or whatever.) As for whether we'd shut it off after we catch it doing dangerous things -- well, it wouldn't do them if it thought we'd notice and shut it off. This effectively limits what it can do to further its goals, but not enough, I think.

Comment by kokotajlod on Is this a good way to bet on short timelines? · 2020-11-30T11:39:58.100Z · EA · GW

Sorry for the delayed reply. I'm primarily interested in making these trades with people who have a similar worldview to me, because this increases the chance that as a result of the trade they will start working on the things I think are most valuable. I'd be happy to talk with other people too, except that if there's so much inferential distance to cross it would be more for fun than for impact. That said, maybe I'm modelling this wrong.

Yes, for no. 3 I meant after the first 5 years. Good catch.

It sounds like you might be a good fit for this sort of thing! Want to have a call to chat sometime? I'm also interested in doing no. 2 with you...

Comment by kokotajlod on Is this a good way to bet on short timelines? · 2020-11-30T09:19:23.238Z · EA · GW

OK, thanks. FWIW I expect at least one of us to update at least slightly. Perhaps it'll be me. I'd be interested to know why you disagree -- do I come across as stubborn or hedgehoggy? If so, please don't hesitate to say so; I would be grateful to hear that.

I might be willing to pay $4,000, especially if I could think of it as part of my donation for the year. What would you do with the money--donate it? As for time, sure, happy to wait a few months.

Comment by kokotajlod on Is this a good way to bet on short timelines? · 2020-11-29T08:41:23.005Z · EA · GW

Thanks! Yeah, your criticism of no. 3 is correct.  As for no. 1, yeah, probably this works best for bets with people who I don't think would do this correctly absent a bet, but who would do it correctly with a bet... which is perhaps a narrow band of people! 

How high would you need for no. 2? I might do it anyway, just for the information value. :) My views on timelines haven't yet been shaped by much direct conversation with people like yourself.

Comment by kokotajlod on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-27T15:31:36.316Z · EA · GW

I'm not optimistic. When will more reasonable voices with different biases enter social media? Almost the whole world is already on social media. 

Comment by kokotajlod on Can we convince people to work on AI safety without convincing them about AGI happening this century? · 2020-11-27T09:05:02.221Z · EA · GW

That said, I think I have a good shot at convincing people that there's a significant chance of AGI this century.

Comment by kokotajlod on Can we convince people to work on AI safety without convincing them about AGI happening this century? · 2020-11-27T09:03:03.235Z · EA · GW

If I were to try to convince someone to work on AI safety without convincing them that AGI will happen this century, I'd say things like:

  1. While it may not happen this century, it might.
  2. While it may not happen this century, it'll probably happen eventually.
  3. It's extremely important; it's an x-risk.
  4. We are currently woefully underprepared for it.
  5. It's going to take a lot of research and policy work to plan for it, work which won't be done by default.
  6. Currently very few people are doing this work (e.g. there are more academic papers published on dung beetles than on human extinction, AI risk is even more niche, etc. etc. (I may be remembering the example wrong))
  7. There are other big problems, like climate change, nuclear war, etc., but these are both less likely to cause x-risk and also much less neglected.

Comment by kokotajlod on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-27T08:58:44.883Z · EA · GW

Maybe, I don't know. I have heard people say that the printing press helped cause the religious wars that tore apart Europe; it probably helped cause the American Revolution too, which may be a bad thing. As for radio, I've heard people say it contributed to the rise of fascism and communism, helped the genocide in Darfur, etc. Of course, maybe these things had good effects that outweighed their bad effects -- I have no idea really.

I think my overall concern is that I don't think the slow process of cultural debate is overall truth-oriented. I think science seems to be overall truth-oriented, and of course the stock market and business world are overall truth-oriented, and maybe on some level military strategy is overall truth-oriented. And sports betting is overall truth-oriented. But religion and politics don't seem to be.

Comment by kokotajlod on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-20T07:29:14.580Z · EA · GW

Yeah, though it's of course heavily inspired by things people say on LessWrong. Thanks! It was one of my wedding vows.

Comment by kokotajlod on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T10:55:25.095Z · EA · GW

"Few people are actually trying to do good. The best explanation for most people's behavior--even when they think they are trying to do good--is that they are trying to feel good and look good."

Comment by kokotajlod on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T10:54:11.626Z · EA · GW

“No, I do not really hear the screams of everyone suffering in Hell. But I thought to myself, ‘I suppose if I tell them now that I have the magic power to hear the screams of the suffering in Hell, then they will go quiet, and become sympathetic, and act as if that changes something.’ Even though it changes nothing. Who cares if you can hear the screams, as long as you know that they are there? So maybe what I said was not fully wrong. Maybe it is a magic power granted only to the Comet King. Not the power to hear the screams. But the power not to have to.” -The Comet King, a character in Unsong

Comment by kokotajlod on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T10:53:23.718Z · EA · GW

"One who wishes to believe says, “Does the evidence permit me to believe?” One who wishes to disbelieve asks, “Does the evidence force me to believe?” Beware lest you place huge burdens of proof only on propositions you dislike, and then defend yourself by saying: “But it is good to be skeptical.” If you attend only to favorable evidence, picking and choosing from your gathered data, then the more data you gather, the less you know. If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider." -Yudkowksy

Comment by kokotajlod on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T10:52:43.165Z · EA · GW

There are actually two struggles between good and evil within each person. The first is the struggle to choose the right path despite all the temptations to choose the wrong path; it is the struggle to make actions match words. The second is the struggle to correctly decide which path is right and which is wrong. Many people who win one struggle lose the other. Do not lose sight of this fact or you will be one of them.

Comment by kokotajlod on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T10:51:36.042Z · EA · GW

"Truth is not what you want it to be; it is what it is, and you must bend to its power or live a lie." -Miyamoto Musashi

Comment by kokotajlod on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T10:50:57.496Z · EA · GW

"Politics is the mind-killer." -Yudkowsky

Comment by kokotajlod on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T10:50:12.378Z · EA · GW

"If you get the right answer to the wrong question, you still die." --Van Jones

Comment by kokotajlod on What quotes do you find most inspire you to use your resources (effectively) to help others? · 2020-11-18T10:48:35.148Z · EA · GW

"Take pride in noticing when you are confused, or when evidence goes against what you think. Rejoice when you change your mind."

Comment by kokotajlod on Tiny Probabilities of Vast Utilities: Concluding Arguments · 2020-11-18T08:46:09.095Z · EA · GW

Thanks for this! It's been a long time since I wrote this so I don't remember why I thought it was from MIRI/FHI. I think it's because the Guesstimate model has two sub-models, one titled "the MIRI method" and one titled "The community method (developed by Owen CB and Daniel Dewey)", who were at the time associated with FHI, I believe. So I must have figured the first model came from MIRI and the second model came from FHI.

I'll correct the error.

Comment by kokotajlod on Donating against Short Term AI risks · 2020-11-17T15:41:49.294Z · EA · GW

Here are some people you could reach out to:
Stefan Schubert (IIRC he is skeptical of this sort of thing, so maybe he'll be a good addition to the conversation)
Mojmir Stehlik (He's been thinking about polarization)
David Althaus (He's been thinking about forecasting platforms as a potentially tractable and scalable intervention to raise the sanity waterline)

There are probably a bunch of people who are also worth talking to but these are the ones I know of off the top of my head.

Comment by kokotajlod on Donating against Short Term AI risks · 2020-11-16T21:37:04.581Z · EA · GW

I can't speak for anyone else, but for me:
--Short-term AI risks like the ones you mention definitely increase x-risk, because they make it harder to solve AI risk (and other x-risks too, though I think those are less probable)
--I currently think there are things we can do about it, but they seem difficult: figuring out what regulations would be good and then successfully getting them passed, probably against opposition, and definitely against competition from other interest groups with other issues.
--It's certainly a neglected issue compared to many hot-button political topics. I would love to see more attention paid to it and more smart people working on it. I just think it's probably not more neglected than AI risk reduction.

Basically, I think this stuff is currently at the "There should be a couple EAs seriously investigating this, to see how probable and large the danger is and try to brainstorm tractable solutions" stage. If you want to be such an EA, I encourage you to do so, and would be happy to read and give comments on drafts, video chat to discuss, etc. If no one else was doing it, I might do it myself even. (Like I said, I am working on a post about persuasion tools, motivated by feeling that someone should be talking about this...)

I think probably such an investigation will only confirm my current opinions (yup, we should focus on AI risk reduction directly rather than on raising the sanity waterline via reducing short-term risk), but there's a decent chance that it would change my mind and make me recommend more people switch from AI risk stuff to this stuff.

Comment by kokotajlod on Donating against Short Term AI risks · 2020-11-16T15:24:29.720Z · EA · GW

My off-the-cuff answers:
--Yes, the EA community neglects these things in the sense that it prioritizes other things. However, I think it is right to do so. It's definitely a very important, tractable, and neglected issue, but not as important or neglected as AI alignment, for example. I am not super confident in this judgment and would be happy to see more discussion/analysis. In fact, I'm currently drafting a post on a related topic (persuasion tools).
--I don't know, but I'd be interested to see research into this question. I've heard of a few charities and activist groups working on this stuff but don't have a good sense of how effective they are.
--I don't know much about them; I saw their film The Social Dilemma and liked it.

Comment by kokotajlod on some concerns with classical utilitarianism · 2020-11-16T09:05:16.710Z · EA · GW

Who is the "we" you are talking about? I imagine the people who end that politician's career would not be EAs. So it seems like your example is an example of different people having different standards, not the same people having different standards in different contexts.

Comment by kokotajlod on How can I bet on short timelines? · 2020-11-07T20:35:17.358Z · EA · GW

Hmm, yeah, I guess doing a prize is less costly than hiring someone in the event that it doesn't work. So I might as well experiment with that for a bit. Thanks!

If you are looking for ideas for things to do, want to chat sometime?

Comment by kokotajlod on How can I bet on short timelines? · 2020-11-07T20:33:48.481Z · EA · GW

Yes. I did that a while ago. But that just gets me more money, and not even a lot of money. I need knowledge and help much more than money.

Comment by kokotajlod on Thoughts on whether we're living at the most influential time in history · 2020-11-03T08:29:29.366Z · EA · GW

I'm guessing Buck spelled out the zeros for dramatic effect; it makes it easier to see intuitively how small the numbers are.

Comment by kokotajlod on When you shouldn't use EA jargon and how to avoid it · 2020-10-30T09:42:35.698Z · EA · GW

My only disagreement is with the order of magnitude thing. I love orders-of-magnitude talk. I think it's really useful to think in orders of magnitude about many (most?) things. If this means I sometimes say "one order of magnitude" when I could just say "ten times", so be it.