Posts

Narration: We are in triage every second of every day 2021-07-23T20:59:56.419Z
Narration: Reducing long-term risks from malevolent actors 2021-07-15T16:26:47.420Z
Narration: Report on Running a Forecasting Tournament at an EA Retreat, part 2 2021-07-14T19:41:42.035Z
Narration: Report on Running a Forecasting Tournament at an EA Retreat, part 1 2021-07-13T16:21:45.703Z
Narration: "Against neutrality about creating happy lives" 2021-07-10T19:13:28.112Z
Narration: "[New org] Canning What We Give" 2021-07-09T17:57:18.614Z
[linkpost] EA Forum Podcast: Narration of "Why EA groups should not use 'Effective Altruism' in their name." 2021-07-08T22:43:21.469Z
[linkpost] EA Forum Podcast: Narration of "How to run a high-energy reading group" 2021-07-07T20:21:05.916Z
[Podcast] EA Forum Podcast: Narration of "How much does performance differ between people?" 2021-07-06T20:48:14.069Z
The EA Forum Podcast is up and running 2021-07-05T01:42:03.377Z
Which EA forum posts would you most like narrated? 2021-07-01T22:05:20.829Z
[Repost] A poem on struggling with personal cause prioritization 2021-05-25T01:30:37.104Z
How many small EA orgs in need of workers are there? 2021-05-20T18:19:38.866Z

Comments

Comment by D0TheMath on Narration: We are in triage every second of every day · 2021-07-24T02:05:39.712Z · EA · GW

Yes, it’s a linkpost to my podcast here, where others and I have been narrating selected forum posts.

Comment by D0TheMath on Which EA forum posts would you most like narrated? · 2021-07-23T20:54:30.496Z · EA · GW

A narration of the newsletter, or the posts linked in the newsletter?

Comment by D0TheMath on Which EA forum posts would you most like narrated? · 2021-07-21T22:01:54.057Z · EA · GW

That's a great idea!

Comment by D0TheMath on How do you communicate that you like to optimise processes without people assuming you like tricks / hacks / shortcuts? · 2021-07-15T16:33:38.839Z · EA · GW

Following Slate Star Codex's "Style Guide: Not Sounding Like An Evil Robot", instead of saying "I want to optimize X", say "I want to find the best way to do X".

Comment by D0TheMath on How to explain AI risk/EA concepts to family and friends? · 2021-07-12T15:21:16.900Z · EA · GW

Perhaps try explaining by analogy, or providing examples of ways we’re already messing up.

Like the YouTube algorithm. Its only objective is to maximize the amount of time people spend on the platform, because (charitably) Google thought that would be a useful proxy for the quality of the content it provides. But instead, it ended up figuring out that if it showed people videos which convinced them of extreme political ideologies, it became easier to find videos that provoked anger, happiness, sadness, and other addictive emotions that kept them on the platform.

This particular problem has since been fixed, but it took quite a while to figure out what was going on, and more time to figure out how to fix it. Maybe use analogies of genies who, if you imperfectly specify your wish, will find some way to technically satisfy it, but screw you over in the process.

One thing which stops me from explaining things well to my parents is the fear of looking weird, which usually doesn’t stop me (to a fault) when talking with anyone else, but somehow does with my parents. You can avert this via ye-olde Appeal to Authority: tell them the idea was popularized, in part, by Professor Stuart Russell, author of the world’s foremost textbook on artificial intelligence, in his book Human Compatible, and that he currently runs the Center for Human-Compatible AI (CHAI) at Berkeley to tackle this very problem.

edit: Also, be sure to note it’s not just CHAI working on this problem. There are also MIRI, DeepMind, Anthropic, and other organizations.

Comment by D0TheMath on Call for participants to test a pilot forecasting training program · 2021-07-12T05:06:45.763Z · EA · GW

When should those who sign up expect to receive their acceptance/rejection?

Comment by D0TheMath on [deleted post] 2021-07-10T05:37:51.172Z

I am testing comment functionality on Linux Mint OS, using the Firefox browser.

edit: seems like I can edit too.

Comment by D0TheMath on [linkpost] EA Forum Podcast: Narration of "Why EA groups should not use 'Effective Altruism' in their name." · 2021-07-09T14:51:09.673Z · EA · GW

Thanks! I’ll implement these suggestions the next chance I get.

Comment by D0TheMath on The EA Forum Podcast is up and running · 2021-07-06T01:10:16.173Z · EA · GW

We’ve talked in private, but I figure I should publicly thank you for your offer to help.

edit: this is the thank you.

Comment by D0TheMath on The EA Forum Podcast is up and running · 2021-07-05T14:25:10.431Z · EA · GW

Anchor submits the podcast to the other podcast platforms to get it listed there; they say this takes a few business days to complete. In the meantime, you can use Ben Schifman's method.

Comment by D0TheMath on [Link] Reading the EA Forum; audio content · 2021-07-05T05:27:58.237Z · EA · GW

"For example, in the first month we launched them (July 2020), across the 3 different profiles, the detailed versions averaged 62% the number of downloads as the short versions, and the audio versions averaged 6% of the number of downloads of the short versions."

This changes my estimate of how useful the EA forum podcast will be, so thanks for sharing your experience.

Comment by D0TheMath on Which EA forum posts would you most like narrated? · 2021-07-03T05:11:32.742Z · EA · GW

Thanks for the correction. I read the intro to the first prize post (May 2020) on the tag page, and thought it meant it was the last one that would be published.

I thought more had been published between May 2020 and now, but for the last year time has felt pretty weird, so I figured I was misremembering.

Comment by D0TheMath on Which EA forum posts would you most like narrated? · 2021-07-03T04:15:49.528Z · EA · GW

That’s a great idea! I was disappointed, though, that they stopped doing these a year ago. I tried to think of similar ‘best-of’ lists I know of, remembered the EA Forum Digest exists, and will probably read posts from that too.

Comment by D0TheMath on [Link] Reading the EA Forum; audio content · 2021-07-01T21:41:56.813Z · EA · GW

Doing so now.

Comment by D0TheMath on [Link] Reading the EA Forum; audio content · 2021-07-01T21:41:34.672Z · EA · GW

I am not on the EA Global Slack.

Comment by D0TheMath on [Link] Reading the EA Forum; audio content · 2021-07-01T03:46:10.259Z · EA · GW

Nice! Do you have any preferences for what we use to coordinate (Slack, Discord, Twitter, WhatsApp, …)? If not, we can default to WhatsApp.

Comment by D0TheMath on [Link] Reading the EA Forum; audio content · 2021-06-30T16:25:39.960Z · EA · GW

Yes!!! Thank you for this! I absorb audio info much more easily & quickly than text, so this will be very helpful.

Edit: also, now that you mention it, I could probably record forum posts myself as well, as there are likely many others like me. Do you want to partner up to coordinate on which posts to read & add a measure of social enforcement?

I don’t have a lot of time in the next few days, but I should be much freer after the 5th.

Comment by D0TheMath on Please Test/Share my New Vegan Video Game · 2021-05-30T21:13:26.968Z · EA · GW

Oh, this seems fun! I'll certainly be playtesting it in the coming days (it's also been added to my wish-list).

Comment by D0TheMath on [deleted post] 2021-05-25T01:19:54.080Z

Ok. I'll do that.

Comment by D0TheMath on [deleted post] 2021-05-22T23:51:48.302Z

That's annoying. The formatting is fixed (I had to transfer over to the WYSIWYG editor, since the <br> solution for markdown didn't work). I also don't know how to transfer it over to a regular post. Thanks for telling me about these problems.

Comment by D0TheMath on Getting money out of politics and into charity · 2020-10-11T16:08:21.706Z · EA · GW

"If we successfully built this platform, would you consider using it? If your answer is 'it depends', what does it depend on?"

I wouldn't use it, since I don't donate to campaigns, but I would certainly push all my more political friends and family members to use it.

Comment by D0TheMath on What questions would you like to see forecasts on from the Metaculus community? · 2020-07-26T18:27:06.536Z · EA · GW

In The Precipice, Toby Ord gives estimated chances of various existential risks happening within the next 100 years. It'd be cool if we could get estimates from Metaculus as well, although it may be impossible to implement, as Tachyons would only be awarded when the world doesn't blow up.

Comment by D0TheMath on The 80,000 Hours podcast should host debates · 2020-07-17T22:43:01.797Z · EA · GW

I like the idea of having people with different opinions discuss their disagreements, but I don't think they should be marketed as debates. That term doesn't have positive connotations, and seems to imply that there will be a winner/loser. Even if there is no official winner/loser, it puts the audience and the participants in a zero-sum mentality.

I think something more like an adversarial collaboration would be healthier, and I like that term more because it's not as loaded, and it's more up front about what we actually want the participants to do.

Comment by D0TheMath on How do i know a charity is actually effective · 2020-07-17T21:21:30.869Z · EA · GW

Thanks for the correction. Idk why I thought it was Toby Ord.

Comment by D0TheMath on How do i know a charity is actually effective · 2020-07-17T18:47:37.585Z · EA · GW

I haven't read Will's book, so I'm not entirely sure what your background knowledge is.

Are you unsure about how to compare two different cause areas? For instance, do you accept that it's better to save the lives of 10 children than to fund a $30,000 art museum renovation project, but are unsure whether saving the lives of 10 children or de-worming 4,500 children is better?

In this case I suggest looking at QALYs and DALYs, which try to quantify the number of healthy years of life saved, given estimates of how bad various diseases and disabilities are. GiveWell has a few reservations about DALYs, and uses their own weighting/cost-effectiveness model. On the linked page, you can look at the spreadsheet they use to analyze different charities and interventions, and change the weights to fit your own moral weights, although I would recommend doing some research before you just choose a number out of a hat.
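For concreteness, here is a minimal sketch of the kind of arithmetic such a spreadsheet does. All of the intervention names, costs, DALY figures, and the moral weight below are invented for illustration only; they are not GiveWell's actual numbers.

```python
# Toy cost-effectiveness comparison in the spirit of GiveWell's spreadsheets.
# Every figure here is made up for illustration; plug in your own moral
# weights and the charities' actual estimates before drawing conclusions.

interventions = {
    # name: (total cost in USD, estimated DALYs averted) -- hypothetical values
    "bednets (hypothetical)": (30_000, 900),
    "deworming (hypothetical)": (30_000, 450),
    "museum renovation (hypothetical)": (30_000, 1),
}

# Your moral weight per DALY averted, relative to some baseline outcome.
# GiveWell's spreadsheet lets you adjust weights like this yourself.
moral_weight_per_daly = 1.0

for name, (cost, dalys) in interventions.items():
    weighted_value = dalys * moral_weight_per_daly
    print(f"{name}: ${cost / weighted_value:,.2f} per weighted DALY averted")
```

The point is just that once outcomes are expressed in a common weighted unit, "cost per unit of good" can be compared across very different cause areas with simple division.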

If it's more similar to "Deworm the World says its cost-effectiveness is $0.45 per child dewormed. How do I know this is actually an accurate estimate?", then we can just go to GiveWell and see their documentation. The reason GiveWell is so useful is that they are both transparent and very evidence-focused. In this case, GiveWell provides a summary of the sources for their review, and in-depth information on exactly what those sources gave them, including transcripts of interviews/conversations with staff and general notes on site visits. All of this can be found via a series of links from their main charity page. This heavy transparency means they can likely be trusted on the facts. See the above paragraph for information sources on their analysis.

If your confusion is more along the lines of "Ok, I understand intellectually that it's better to save the lives of 10 children than to give $30,000 for a kid's wish via the Make-A-Wish Foundation, but my gut disagrees, and I am unable to emotionally conceptualize that saving the 10 children is at least 10x better than fulfilling one child's wish", then understand that this is a pretty common experience, and you are not alone. It takes a lot of empathy and a lot of experience with numbers to even get close to Derek Parfit levels of caring about abstract suffering [1]. Tackling this problem will be different for everyone, so I can't give any advice except to say that while your gut is good for fast and simple decisions (for instance, swerving out of the way before you crash into an old lady while driving your car), it is not so good for figuring out complex decisions.

It is easy to aim and throw a baseball using only your gut, but it is near impossible to land a rocket on the moon using only your gut; we need theories of gravity to figure that out. Some smart people who have spent their entire adult lives immersed in astrophysics (or have played Kerbal Space Program) will be able to understand theories of gravitation intuitively, but even they will still revert to numbers when given the option. In the same way, it's easy to understand at a gut level that you should save a kid from drowning, but much harder to understand at a gut level that saving the lives of 10 children is better than making one very happy. But we can set down moral theories to help us, and we can try to get an intuitive feel for why we should listen to those theories.

Personally, I gained a lot of gut understanding from the "Mere Goodness" Sequences: Fake Preferences, Value Theory, and Quantified Humanism. But not everyone likes the Sequences, and they may require some additional background if you haven't read the preceding sequences.


[1] Derek Parfit reportedly broke down in tears in the middle of an interview for seemingly no reason. When asked why, he said it was the very idea of suffering that made him cry.

I originally thought this was Toby Ord, but Thomas Kwa corrected me in the comment below.

Comment by D0TheMath on FLI AI Alignment podcast: Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI · 2020-07-03T01:14:09.376Z · EA · GW

This was a particularly informative podcast, and you helped me get a better understanding of inner alignment issues, which I really appreciate.

To check that I understand: the issue with inner alignment is that when an agent gets optimized against a reward/cost function on a training distribution, and doing well requires a world model good enough to tell that it is (or could be) undergoing training, then if the training ends up creating an optimizer, it's much more likely that the optimizer's objective is bad or a proxy; and if it's sufficiently intelligent, it will reason that it should figure out what you want it to do, and do that. This is because there are many different bad objectives an inner optimizer can have, but only one that you want it to actually have, and an optimizer with any of those bad objectives will pretend to have the good one.

Although the badly-aligned agents seem like they'd at least be optimizing for proxies of what you actually want, since early (dumber) agents with unrelated utility functions wouldn't do as well as alternative agents with approximately aligned utility functions.

Correct me on any mistakes please.

Also, because this depends on the agents being at least a little generally intelligent, I'm guessing there are no contemporary examples of such inner optimizers attempting deception.

Comment by D0TheMath on How do you talk about AI safety? · 2020-04-19T22:54:54.592Z · EA · GW

While I haven't read the book, Slate Star Codex has a great review of Human Compatible. Scott says it discusses AI safety, especially in the long-term future, in a very professional-sounding and not-weird way. So I suggest reading that book, or that review.


You could also list several smaller-scale AI-misalignment problems, such as the problems surrounding Zuckerberg and Facebook. You could say something like "You know how Facebook's AI is programmed to keep you on as long as possible, so it often shows you controversial content in order to rile you up, and gets everyone yelling at everyone else so you never leave the platform? Yeah, I make sure that won't happen with smarter and more influential AIs." If all you're going for is an elevator speech, or explaining to family what it is you do, I'd stop here. Otherwise, follow the first part with something like "By my estimation, this seems fairly important: incentives are aligned for companies and countries to use the best AI possible, and better AI means more influential AI, so even a really good but slightly sociopathic AI is likely to be used anyway. And if, in a few decades, we get to the point where we have a smarter-than-human but still sociopathic AI, it's possible we've just made an immortal Hitler-Einstein combination. Which, needless to say, would be very bad, possibly even extinction-level bad. So if the job is very hard, and the result if the job doesn't get done is very bad, then the job is very very important (that's very)."

I've never tried using these statements, but they seem like they'd work.

Comment by D0TheMath on Terrorism, Tylenol, and dangerous information · 2019-04-06T23:38:00.558Z · EA · GW

Not entirely applicable to the discussion, but I just like talking about things like this and I finally found something tangentially related. Feel free to disregard.

"if you look at a period of sustained effort in staying on the military cutting edge, i.e. the Cold War, you won't see as many of these mistakes and you'll instead find fairly continuous progress"

The Cold War wasn't peacetime, though; there was continuous fighting by both sides: the Americans and Chinese in Korea, the Americans in Vietnam, and the Russians in Afghanistan.

One can argue that these conflicts don't scale to the kind of military techniques and science that a World War 3 scenario would require. But that kind of war has never occurred with modern technology (specifically, hydrogen bombs). How do we know that all of the ideas dreamed up by generals and military experts wouldn't get tossed out the window the moment they turned out to be inapplicable to a nuclear war?