Posts

List of important ways we may be wrong 2022-01-08T16:30:36.241Z
D0TheMath's Shortform 2021-11-11T13:57:29.946Z
Narration: Improving the EA-aligned research pipeline: Sequence introduction 2021-07-27T00:56:24.921Z
Narration: The case against “EA cause areas” 2021-07-24T20:39:52.632Z
Narration: We are in triage every second of every day 2021-07-23T20:59:56.419Z
Narration: Reducing long-term risks from malevolent actors 2021-07-15T16:26:47.420Z
Narration: Report on Running a Forecasting Tournament at an EA Retreat, part 2 2021-07-14T19:41:42.035Z
Narration: Report on Running a Forecasting Tournament at an EA Retreat, part 1 2021-07-13T16:21:45.703Z
Narration: "Against neutrality about creating happy lives" 2021-07-10T19:13:28.112Z
Narration: "[New org] Canning What We Give" 2021-07-09T17:57:18.614Z
[linkpost] EA Forum Podcast: Narration of "Why EA groups should not use 'Effective Altruism' in their name." 2021-07-08T22:43:21.469Z
[linkpost] EA Forum Podcast: Narration of "How to run a high-energy reading group" 2021-07-07T20:21:05.916Z
[Podcast] EA Forum Podcast: Narration of "How much does performance differ between people?" 2021-07-06T20:48:14.069Z
The EA Forum Podcast is up and running 2021-07-05T01:42:03.377Z
Which EA forum posts would you most like narrated? 2021-07-01T22:05:20.829Z
[Repost] A poem on struggling with personal cause prioritization 2021-05-25T01:30:37.104Z
How many small EA orgs in need of workers are there? 2021-05-20T18:19:38.866Z

Comments

Comment by D0TheMath on FLI launches Worldbuilding Contest with $100,000 in prizes · 2022-01-18T22:12:00.701Z · EA · GW

I posted this to r/rational (the subreddit for rational & rationalist fiction), in case anyone would like to see the response there.

Comment by D0TheMath on Is EA over-invested in Crypto? · 2022-01-18T03:57:56.675Z · EA · GW

Your reasoning seems sound in the absence of evidence. I don't know how you were trying to signal that it was a question (other than the question mark in the title, which almost never indicates an intent simply to provoke discussion, and more often means "here is the question I explored in a research project, the details & conclusions of which will follow"). Instead, I think you should have included an epistemic status disclaimer near the beginning. Something like:

Epistemic status: I don't actually know whether EA is over-invested in crypto. This post is intended to spark discussion on the topic.

Comment by D0TheMath on EA Librarian: CEA wants to help answer your EA questions! · 2022-01-17T20:36:13.473Z · EA · GW

This sounds like a very cool & useful service, and I hope enough people take advantage of it to justify its costs! I will certainly direct fellows to it.

Comment by D0TheMath on Is EA over-invested in Crypto? · 2022-01-16T14:50:11.761Z · EA · GW

I think you should have made this post a question. It being a post made me think you actually had an answer, so I read it, and was disappointed you didn’t actually conclude anything.

Comment by D0TheMath on Forecast procedure competitions · 2022-01-10T03:47:46.514Z · EA · GW

This sounds interesting. Alternatively, you could have the procedure-makers not know which questions will be forecast, and give their procedures to people or teams with some stake in getting the forecasts right (perhaps they are paid in proportion to their log-odds calibration score; a rough sketch of one such payout rule is below).

After doing enough trials, we should get some idea about what kinds of advice result in better forecasts.
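As a rough illustration of that payout idea, here is a minimal sketch in Python, with made-up constants and a hypothetical `log_score` helper. It shows one way a log-scoring-based payment could be computed, not a claim about how any actual competition would be run.

```python
import math

def log_score(prob: float, outcome: bool) -> float:
    # Log of the probability assigned to what actually happened:
    # 0 for a confident correct forecast, very negative for a confident wrong one.
    p = prob if outcome else 1.0 - prob
    return math.log(p)

def payout(forecasts, base_pay: float = 100.0, scale: float = 50.0) -> float:
    # Pay a flat base amount plus a bonus proportional to the average log score,
    # floored at zero so badly calibrated teams aren't charged money.
    avg = sum(log_score(p, o) for p, o in forecasts) / len(forecasts)
    return max(0.0, base_pay + scale * avg)

# Example: one team's forecasts as (probability assigned, what actually happened) pairs.
team_forecasts = [(0.8, True), (0.3, False), (0.6, True)]
print(round(payout(team_forecasts), 2))  # roughly 81.8 with these made-up numbers
```

The constants here are arbitrary; the point is just that a strictly proper scoring rule like the log score rewards well-calibrated probabilities, so teams have an incentive to follow whichever procedure actually helps.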

Comment by D0TheMath on How big are the intra-household spillovers for cash transfers and psychotherapy? Contribute your prediction for our analysis. · 2022-01-07T04:36:52.266Z · EA · GW

Question 7 is a bit confusing. The answer format implies cash transfers have both a 10% and 40% impact, and makes it impossible for (say) cash & psychotherapy to both have a 10% impact.

Comment by D0TheMath on How To Raise Others’ Aspirations in 17 Easy Steps · 2022-01-06T21:47:41.484Z · EA · GW

With a few modifications, all of these are great questions to ask yourself as well.

Comment by D0TheMath on Prioritization when size matters · 2021-12-28T18:14:08.073Z · EA · GW

This post has been narrated by The Effective Altruism Forum Podcast.

Comment by D0TheMath on Sasha Chapin on bad social norms in EA · 2021-11-19T12:16:09.030Z · EA · GW

What makes you think it isn't? To me it seems both like a reasonable interpretation of the quote (private guts are precisely the kinds of positions you can't necessarily justify, and it's talking about having beliefs you can't justify) and like a dynamic I recognize as having been occasionally present in the community.

Because it also mentions woo, so I think it's talking about a broader class of unjustified beliefs than you think.

Even if this interpretation wasn't actually the author's intent, choosing to steelman the claim in that way turns the essay into a pretty solid one, so we might as well engage with the strongest interpretation of it.

I agree, but in that case you should make it clear how your interpretation differs from the author's. If you don't, then it looks like a motte-and-bailey is happening (where the bailey is “rationalists should be more accepting of woo & other unjustified beliefs”, and the motte is “oh no! I/they really just mean you shouldn't completely ignore gut judgements, and occasionally models can be wrong in known ways but still useful”), or you may miss reasons why the post as written doesn't require your reformulation to be correct.

Comment by D0TheMath on Sasha Chapin on bad social norms in EA · 2021-11-19T02:07:48.227Z · EA · GW

If this is what the line was saying, I agree. But it's not: having intuitions plus a track record (or some other reason to believe) that those intuitions correlate with reality, and useful but not-strictly-true models of the world, is a far cry from having unjustified beliefs & believing in woo, and it's the lack of the latter that the post actually claims is the toxic social norm in rationality.

Comment by D0TheMath on Sasha Chapin on bad social norms in EA · 2021-11-18T13:36:52.880Z · EA · GW

Sure, but that isn’t what the quoted text is saying. Trusting your gut or following social norms are not even on the same level as woo, or adopting beliefs with no justification.

If the harmful social norms Sasha actually had in mind were not trusting your gut & violating social norms with no gain, then I’d agree these actions are bad, and possibly a result of social norms in the rationality community. Another alternative is that the community’s made up of a bunch of socially awkward nerds, who are known for their social ineptness and inability to trust their gut.

But as it stands, this doesn’t seem to be what’s being argued, as the quoted text is tangential to what you said at best.

Comment by D0TheMath on Sasha Chapin on bad social norms in EA · 2021-11-18T12:56:29.474Z · EA · GW

you must reject beliefs that you can’t justify, sentiments that don’t seem rational, and woo things.

This isn’t a toxic social norm. This is the point of rationality, is it not?

Comment by D0TheMath on What stops you doing more forecasting? · 2021-11-17T01:00:45.594Z · EA · GW

Overthinking forecasts, which makes writing them down & tracking them diligently too much mental overhead for me to bother with.

Comment by D0TheMath on What are the bad EA memes? How could we reframe them? · 2021-11-16T22:33:46.819Z · EA · GW

When I introduce AI risk to someone, I generally start by talking about how we don't actually know what's going on inside our ML systems, how we're bad at making their goals match what we actually want, and how we have no way of trusting that the systems actually have the goals we're telling them to optimize for.

Next I say this is a problem because as the state of the art of AI progresses, we're going to be giving more and more power to these systems to make decisions for us, and if they are optimizing for goals different from ours this could have terrible effects.

I then note that we've already seen this happen with YouTube's algorithm a few years ago: they told it to maximize the time spent on the platform, thinking it would just show people videos they liked. But in reality it learned that there were a few videos it could show people which would radicalize them toward a political extreme, and once it had done this it was far easier to judge which videos would keep them on the platform the longest: those which showed people they agreed with doing good things & being right, and people they disagreed with doing bad things & being wrong. This has since been fixed, but the point is that we thought we were telling it to do one thing, and then it did something we really didn't want it to do. If this system had more power (for instance, running drone swarms or acting as the CEO-equivalent of a business), it would be far harder both to understand what it was doing wrong and to actually change its code.

I then say the situation becomes even worse if the AI is smarter than the typical human. There are many people who have malicious goals, and are only as smart as the average person, but who are able to stay in positions of power by politically outmaneuvering their rivals. If the AI is better than these people at manipulating humans (which seems very likely, given that the thing AIs are best known for nowadays is manipulating humans into doing what the company they serve wants), then it is hopeless to attempt to remove it from power.

Comment by D0TheMath on D0TheMath's Shortform · 2021-11-16T22:06:40.999Z · EA · GW

I don't know what the standard approach would be. I haven't read any books on evolutionary biology. I did listen to a bit of this online lecture series: https://www.youtube.com/watch?v=NNnIGh9g6fA&list=PL848F2368C90DDC3D and it seems fun & informative.

Comment by D0TheMath on EA Online Learning Buddy Platform · 2021-11-16T09:54:06.536Z · EA · GW

Great idea! Note also the existence of The University of Bayes on Discord. It doesn't focus specifically on EA-aligned subject areas, but it is doing something similar to your proposal: you can freely join classes and learn topics like Bayesian statistics, calculus, and machine learning with other members of the Discord.

Comment by D0TheMath on D0TheMath's Shortform · 2021-11-16T09:41:28.984Z · EA · GW

During this discussion I've been modeling evolution with the models I've been learning for understanding the problems associated with inner alignment, since evolution is a stochastic gradient descent process, so many of the arguments for properties that trained models should have can also be applied to evolutionary processes.

So I guess you can start with Hubinger et al's Risks from Learned Optimization? But this seems like a nonstandard approach to learning evolutionary biology.

Comment by D0TheMath on D0TheMath's Shortform · 2021-11-16T02:17:30.485Z · EA · GW

Do you feel it is possible for evolution to select for beings who care about their copies in Everett branches, over beings that don't? For the purposes of this question let's say we ignore the "simplicity" complication of the previous point, and assume both species have been created, if that is possible.

It likely depends on what it means for evolution to select for something, and for a species to care about its copies in other Everett branches. It's plausible to imagine a very low-amplitude Everett branch which has a species that uses quantum mechanical bits to make many of its decisions, which decreases its chances of reproducing in most Everett branches but increases its chances of reproducing in very, very few.

But in order for something to care about its copies in other Everett branches, the species would need to be able to model how quantum mechanics works, as well as how acausal trade works if you want it to be able to be selected for caring how its decision-making process will affect non-causally-reachable Everett branches. I can't think of any pathways for how a species could increase its inclusive genetic fitness by making acausal trades with its counterparts in non-causally-reachable Everett branches, but I also can't think of any proof of why it's impossible. Thus, I only think it's unlikely.

For the case where we only care about selecting for caring about future Everett branches, note that if we find ourselves in the situation I described in the original post, and the proposal succeeds, then evolution has just made a minor update towards species which care about their future Everett selves.

Comment by D0TheMath on D0TheMath's Shortform · 2021-11-15T18:43:19.433Z · EA · GW

Evolution doesn't select for that, but it's also important to note that such tendencies are not selected against, and the value "care about yourself, and others" is simpler than the value "care about yourself, and others except those in other Everett branches", so we should expect people to generalize "others" as including those in Everett branches, in the same way that they generalize "others" as including those in the far future.

Also, while you cannot meaningfully influence Everett branches which have split off in the past, you can influence Everett branches that will split off some time in the future.

Comment by D0TheMath on D0TheMath's Shortform · 2021-11-15T12:31:48.906Z · EA · GW

I’m not certain. I’m tempted to say I care about them in proportion to their “probabilities” of occurring, but if I knew I was on a very low-“probability” branch & there was a way to influence a higher “probability” branch at some cost to this branch, then I’m pretty sure I’d weight the two equally.

Comment by D0TheMath on D0TheMath's Shortform · 2021-11-15T02:23:50.796Z · EA · GW

Are there any obvious reasons why this line of argument is wrong:

Suppose the Everett interpretation of QM is true, and an x-risk curtailing humanity's future is >99% certain, with no leads on a solution to it. Then, given a QM bit generator which generates some large number of bits, for any particular combination of bits there exists a universe in which that combination was generated. In particular, the combination of bits encoding actions one can take to solve the x-risk is generated in some world. Thus, one should use such a QM bit generator to generate a plan to stop the x-risk. Even though you will likely see a bunch of random letters, there will exist a version of you with a good plan, and the world will not end.

One may argue that the chances of finding a plan which produces an s-risk are just as high as those of finding one which curtails the x-risk. This only seems plausible to me if the solution produced is some optimization process, or induces some optimization process. These scenarios should not be discounted.

Comment by D0TheMath on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-11T20:57:04.157Z · EA · GW

Perhaps its best strategy would be to play nice for the time being so that humans would voluntarily give it more compute and control over the world.

This is essentially the thesis of the Deceptive Alignment section of Hubinger et al's Risks from Learned Optimization paper, and related work on inner alignment.

Hm, if an agent is consequentialist, then it will have convergent instrumental subgoals. But what if the agent isn't consequentialist to begin with? For example, if we imagine that GPT-7 is human-level AGI, this AGI might have human-type common sense. If you asked it to get you coffee, it might try to do so in a somewhat common-sense way, without scheming about how to take over the world in the process, because humans usually don't scheme about taking over the world or preserving their utility functions at all costs? But I don't know if that's right; I wonder what AI-safety experts think.

You may be interested in reading more about myopic training: https://www.alignmentforum.org/posts/GqxuDtZvfgL2bEQ5v/arguments-against-myopic-training

Comment by D0TheMath on D0TheMath's Shortform · 2021-11-11T13:57:30.107Z · EA · GW

I saw this comment on LessWrong:

This seems noncrazy on reflection.

10 million dollars will probably have very small impact on Terry Tao's decision to work on the problem. 

OTOH, setting up an open invitation for all world-class mathematicians/physicists/theoretical computer science to work on AGI safety through some sort of sabbatical system may be very impactful.

Many academics, especially in theoretical areas where funding for even the very best can be scarce, would jump at the opportunity of a no-strings-attached sabbatical. The no-strings-attached is crucial to my mind. Despite LW/Rationalist dogma equating IQ with weirdo-points, the vast majority of brilliant (mathematical) minds are fairly conventional - see Tao, Euler, Gauss. 

EA cause area?

Thoughts? 

Comment by D0TheMath on Why Undergrads Should Take History Classes · 2021-11-07T17:20:25.338Z · EA · GW

I'm skeptical of your claim that primary sources are better than secondary books. In particular, the insight-to-effort ratio of primary sources seems very small: given a secondary book which comes recommended by people knowledgeable in the field, you can get approximately the same insights as from the primary source, but with far, far less effort.

Can you expand on why you think the fidelity of the insight transfer from primary to secondary sources is low, or why I'm overestimating the difficulty of reading primary sources (or give some other reason you think I should care more about primary sources which I haven't thought of)?

Comment by D0TheMath on [Creative writing contest] Blue bird and black bird · 2021-09-19T16:12:15.812Z · EA · GW

Death of the author interpretation: currently there are few large EA-aligned organizations which were created by EAs. Much of the funding for EA-aligned projects just supports smart people who happen to be doing effective altruism.

The blue bird represents the EA community going to smart people, symbolized by the black bird, and asking why they’re working on what they’re working on. If the answer is a good one, the community / blue bird will pitch in and help.

Comment by D0TheMath on [Creative writing contest] Blue bird and black bird · 2021-09-17T13:46:03.131Z · EA · GW

I felt some cognitive dissonance at the small tree / lumberjack scene. Black Bird could have helped fight the lumberjack, then cut down the sprout. So it doesn’t map very well to actual catastrophic risk tradeoffs. I don’t know how to fix it though.

Comment by D0TheMath on Needed: Input on testing fit for your career · 2021-08-16T16:54:58.988Z · EA · GW

This seems like it could be a very valuable resource, and I will totally use it.

Comment by D0TheMath on In favor of more anthropics research · 2021-08-16T01:43:15.831Z · EA · GW

Ah, thanks. It was a while ago, so I guess I was misremembering.

Comment by D0TheMath on In favor of more anthropics research · 2021-08-15T21:16:04.401Z · EA · GW

I haven't done significant research into the Doomsday argument, but I do remember thinking it seemed intuitively plausible when I first heard of it. Then I listened to this 80,000 Hours podcast, and the discussion of the Doomsday argument, if I remember correctly, convinced me it's a non-issue. But you may want to relisten to make sure I'm remembering correctly. Correction: I was not remembering correctly. They came away with the conclusion that more funding & research is needed in this space.

There may be good work to be done on formalizing the puzzle, and proving beyond a doubt that the logic doesn't hold.

Comment by D0TheMath on [PR FAQ] Banner highlighting valuable EA resources · 2021-08-09T14:40:21.119Z · EA · GW

This seems cool. I think I’d learn quite a bit about what orgs & resources exist if this was implemented, but also worry it may take up too much space, and I’ll decide to turn it off out of annoyance.

Comment by D0TheMath on 500 Million, But Not A Single One More · 2021-08-04T20:34:14.564Z · EA · GW

The Effective Altruism Forum Podcast has created an audio version of this post here: https://anchor.fm/ea-forum-podcast/episodes/500-Million--But-Not-A-Single-One-More-e15ff61

Comment by D0TheMath on Is effective altruism growing? An update on the stock of funding vs. people · 2021-07-31T01:07:43.675Z · EA · GW

The EA Forum podcast has recorded an audio version of this post here: https://anchor.fm/ea-forum-podcast/episodes/Is-effective-altruism-growing--An-update-on-the-stock-of-funding-vs--people-e158mta

Comment by D0TheMath on [3-hour podcast]: Joseph Carlsmith on longtermism, utopia, the computational power of the brain, meta-ethics, illusionism and meditation · 2021-07-28T03:23:09.664Z · EA · GW

I did not know Utilitarian Podcast existed until now, and have just subscribed.

Comment by D0TheMath on Narration: Improving the EA-aligned research pipeline: Sequence introduction · 2021-07-27T00:57:42.605Z · EA · GW

I said I wasn't going to publish this as a frontpage post, but I misclicked a button during the posting process. Sorry. It'd be nice if a moderator could take it off the frontpage.

Comment by D0TheMath on What novels, poetry, comics have EA themes, plots or characters? · 2021-07-25T05:55:02.664Z · EA · GW

The only work I know of which is explicitly effective altruist is A Common Sense Guide to Doing the Most Good, but many works on r/rational share a similar philosophy as many EAs.

Comment by D0TheMath on Narration: The case against “EA cause areas” · 2021-07-25T05:46:25.250Z · EA · GW

Thanks! This is really good feedback. One person saying something could mean anything, but two people saying the same thing is a much stronger signal that that thing is a good idea.

Comment by D0TheMath on Narration: The case against “EA cause areas” · 2021-07-25T02:13:38.551Z · EA · GW

Noted. I was worried it would get annoying, so thanks for confirming that worry. I’ll experiment with posting some not on the front-page, and see if they get significantly fewer listens.

Comment by D0TheMath on Research into people's willingness to change cause *areas*? · 2021-07-24T15:39:15.574Z · EA · GW

Rethink Priorities' analysis of the 2019 EA survey concluded that 42% of EAs changed their cause area after joining the movement, that 57% of those changes were away from global poverty, and that 54% were towards the long-term future / catastrophic and existential risk reduction.

Rethink Priorities and Faunalytics also have a lot of content on how to do effective animal advocacy, which would likely be useful for your purposes.

This is probably not the extent of research that Rethink Priorities has on this issue, but it's what I could remember reading about.

Comment by D0TheMath on Narration: We are in triage every second of every day · 2021-07-24T02:05:39.712Z · EA · GW

Yes, it’s a linkpost to my podcast here, where myself and others have been narrating selected forum posts.

Comment by D0TheMath on Which EA forum posts would you most like narrated? · 2021-07-23T20:54:30.496Z · EA · GW

A narration of the newsletter, or the posts linked in the newsletter?

Comment by D0TheMath on Which EA forum posts would you most like narrated? · 2021-07-21T22:01:54.057Z · EA · GW

That's a great idea!

Comment by D0TheMath on How do you communicate that you like to optimise processes without people assuming you like tricks / hacks / shortcuts? · 2021-07-15T16:33:38.839Z · EA · GW

Per Slate Star Codex's "Style Guide: Not Sounding Like An Evil Robot": instead of saying "I want to optimize X", you should say "I want to find the best way to do X".

Comment by D0TheMath on How to explain AI risk/EA concepts to family and friends? · 2021-07-12T15:21:16.900Z · EA · GW

Perhaps try explaining by analogy, or providing examples of ways we’re already messing up.

Like the YouTube algorithm. It maximizes only the amount of time people spend on the platform, because (charitably) Google thought that'd be a useful metric for the quality of the content it provides. But instead, it ended up figuring out that if it showed people videos which convinced them of extreme political ideologies, it would then be easier to find videos which would make them feel angry/happy/sad/other addictive emotions that would keep them on the platform.

This particular problem has since been fixed, but it took quite a while to figure out what was going on, and more time to figure out how to fix it. Maybe use analogies of genies who, if you imperfectly specify your wish, will find some way to technically satisfy it, but screw you over in the process.

One thing which stops me from explaining things well to my parents is the fear of looking weird, which usually doesn't stop me (to a fault) when talking with anyone else, but apparently does with my parents. You can avert this via ye olde Appeal to Authority. Tell them the idea was popularized, in part, by Professor Stuart Russell, the writer of the world's foremost textbook on artificial intelligence, in his book Human Compatible; he currently runs the organization HCAI at Berkeley to tackle this very problem.

edit: Also, be sure to note it’s not just HCAI who’s working on this problem. There’s also MIRI, DeepMind, Anthropic, and other organizations.

Comment by D0TheMath on [deleted post] 2021-07-12T05:06:45.763Z

When should those who sign up expect to receive their acceptance/rejection?

Comment by D0TheMath on [deleted post] 2021-07-10T05:37:51.172Z

I am testing comment functionality on Linux Mint OS, using the Firefox browser.

edit: seems like I can edit too.

Comment by D0TheMath on [linkpost] EA Forum Podcast: Narration of "Why EA groups should not use 'Effective Altruism' in their name." · 2021-07-09T14:51:09.673Z · EA · GW

Thanks! I’ll implement these suggestions the next chance I get.

Comment by D0TheMath on The EA Forum Podcast is up and running · 2021-07-06T01:10:16.173Z · EA · GW

We’ve talked in private, but I figure I should publicly thank you for your offer for help.

edit: this is the thank you.

Comment by D0TheMath on The EA Forum Podcast is up and running · 2021-07-05T14:25:10.431Z · EA · GW

Anchor sends messages to podcast platforms to get the podcast on them. They say this takes a few business days to complete. In the meantime, you can use Ben Schifman's method.

Comment by D0TheMath on [Link] Reading the EA Forum; audio content · 2021-07-05T05:27:58.237Z · EA · GW

For example, in the first month we launched them (July 2020), across the 3 different profiles, the detailed versions averaged 62% of the number of downloads of the short versions, and the audio versions averaged 6% of the number of downloads of the short versions.

This changes my estimate of how useful the EA forum podcast will be, so thanks for sharing your experience.

Comment by D0TheMath on Which EA forum posts would you most like narrated? · 2021-07-03T05:11:32.742Z · EA · GW

Thanks for the correction. I read the intro to the first prize post (May, 2020) on the tag page, and thought it meant it was the last one that would be published.

I thought there were more published between May 2020 and now, but for the last year time has felt pretty weird, so I figured I was misremembering.