Posts

The FTX Situation: Wait for more information before proposing solutions 2022-11-13T20:28:29.502Z
Every moment of an electron's existence is suffering 2022-04-01T14:35:23.593Z
Global poverty questions you'd like answered? 2022-01-24T01:03:39.568Z
Global development research questions? 2022-01-24T00:59:30.941Z
List of important ways we may be wrong 2022-01-08T16:30:36.241Z
D0TheMath's Shortform 2021-11-11T13:57:29.946Z
Narration: Improving the EA-aligned research pipeline: Sequence introduction 2021-07-27T00:56:24.921Z
Narration: The case against “EA cause areas” 2021-07-24T20:39:52.632Z
Narration: We are in triage every second of every day 2021-07-23T20:59:56.419Z
Narration: Reducing long-term risks from malevolent actors 2021-07-15T16:26:47.420Z
Narration: Report on Running a Forecasting Tournament at an EA Retreat, part 2 2021-07-14T19:41:42.035Z
Narration: Report on Running a Forecasting Tournament at an EA Retreat, part 1 2021-07-13T16:21:45.703Z
Narration: "Against neutrality about creating happy lives" 2021-07-10T19:13:28.112Z
Narration: "[New org] Canning What We Give" 2021-07-09T17:57:18.614Z
[linkpost] EA Forum Podcast: Narration of "Why EA groups should not use 'Effective Altruism' in their name." 2021-07-08T22:43:21.469Z
[linkpost] EA Forum Podcast: Narration of "How to run a high-energy reading group" 2021-07-07T20:21:05.916Z
[Podcast] EA Forum Podcast: Narration of "How much does performance differ between people?" 2021-07-06T20:48:14.069Z
The EA Forum Podcast is up and running 2021-07-05T01:42:03.377Z
Which EA forum posts would you most like narrated? 2021-07-01T22:05:20.829Z
[Repost] A poem on struggling with personal cause prioritization 2021-05-25T01:30:37.104Z
How many small EA orgs in need of workers are there? 2021-05-20T18:19:38.866Z

Comments

Comment by D0TheMath on How bad a future do ML researchers expect? · 2023-03-15T18:04:07.601Z · EA · GW

Public sentiment is already mostly against AI, when the public has an opinion at all. Though it's not a major political issue (yet), so people may not be thinking about it. If it turns into a major political issue (there are ways of regulating AI without turning it into one, and you probably want to do so), then it will probably become 50/50, due to what politics does to everything.

Comment by D0TheMath on There are no coherence theorems · 2023-02-21T03:12:28.477Z · EA · GW

Ah, ok. Why don't you just respond with markets then!

Comment by D0TheMath on There are no coherence theorems · 2023-02-21T03:02:47.076Z · EA · GW

You can argue that the theorems are wrong, or that the explicit assumptions of the theorems don't hold, which many people have done, but like, there are still coherence theorems, and IMO completeness seems quite reasonable to me and the argument here seems very weak (and I would urge the author to create an actual concrete situation that doesn't seem very dumb in which a highly intelligent, powerful and economically useful system has non-complete preferences).

If you want to see an example of this, I suggest John's post here.

Comment by D0TheMath on There are no coherence theorems · 2023-02-21T03:01:13.068Z · EA · GW

Working on it.

Spoiler (don't read if you want to work on a fun puzzle or test your alignment mettle).

Comment by D0TheMath on DAO#2: Details of My Plan to Raise $1 Billion for Effective Altruism Using a DAO (Decentralized Autonomous Organization) · 2023-02-20T22:01:20.603Z · EA · GW

This effectively reads as “I think EA is good at being a company, so my company is going to be a company”. Nobody gives you $1B for being a company. People generally give you money for doing economically valuable things. What economically valuable thing do you imagine doing?

Comment by D0TheMath on DAO#2: Details of My Plan to Raise $1 Billion for Effective Altruism Using a DAO (Decentralized Autonomous Organization) · 2023-02-20T21:08:16.743Z · EA · GW

I’m not assuming it’s a scam, and it seems unlikely it’d damage the reputation of EA. It seems like a person who got super enthusiastic about a particular governance idea they had, and had a few too many conversations about how to pitch it well.

Comment by D0TheMath on DAO#2: Details of My Plan to Raise $1 Billion for Effective Altruism Using a DAO (Decentralized Autonomous Organization) · 2023-02-20T19:54:02.064Z · EA · GW

I would recommend, when making a startup, that you have a clear idea of what your startup would actually do, one which takes into account your own & your company’s strengths, weaknesses, & comparative advantage. Many want to make money; those who succeed usually have some understanding of how (even if they later end up radically pivoting to something else).

Comment by D0TheMath on [deleted post] 2023-02-06T19:31:51.529Z

I know for one that computer system security and consensus mechanisms for crypto rely on proofs and theorems to guide them. It is common, when you want a highly secure computer system, to provably verify its security, and consensus mechanisms lean heavily on mechanism design. Similarly for counter-intelligence: cryptography is invaluable in this area.

Comment by D0TheMath on Doing EA Better: Preamble, Summary, and Introduction · 2023-01-27T00:32:34.826Z · EA · GW

I agree with this, except the part where you tell me I was eliding the question (and, of course, the part where you tell me I was misattributing blame). I was giving a summary of my position, not an analysis deep enough to convince all skeptics.

Comment by D0TheMath on Doing EA Better: Preamble, Summary, and Introduction · 2023-01-26T23:34:06.677Z · EA · GW

A mass Gell-Mann amnesia effect because, say, I may look at others talking about my work or work I know closely and think "wow! That's wrong", but look at others talking about work I don't know closely and think "wow! That implies DOOM!" (like dreadfully wrong corruptions of the orthogonality thesis), and so decide to work on things that seem relevant to that DOOM?

Comment by D0TheMath on Doing EA Better: Preamble, Summary, and Introduction · 2023-01-26T23:16:15.480Z · EA · GW

Do you disagree, assuming my writeup provides little information or context to you?

Comment by D0TheMath on Doing EA Better: Preamble, Summary, and Introduction · 2023-01-25T23:23:08.825Z · EA · GW

Basically, there are simple arguments along the lines of 'they are an AGI capabilities organization, so obviously they're bad', more complicated arguments along the lines of 'but they say they want to do alignment work', and then even more complicated arguments on top of those going 'well, actually their alignment work doesn't seem all that good, and their capabilities work is pushing capabilities and still making it difficult for AGI companies to coordinate to not build AGI, so in fact the simple arguments were correct'. Getting more into depth would require a writeup of my current picture of alignment, which I am writing, but which is difficult to convey via a quick comment.

Comment by D0TheMath on Doing EA Better: Preamble, Summary, and Introduction · 2023-01-25T21:17:25.021Z · EA · GW

I could list my current theories about how these problems are interrelated, but I fear such a listing would anchor me to the wrong one, and too many claims in a statement produce more discussion around minor sub-claims than around the major points (an example of a shallow criticism of EA discussion norms).

Comment by D0TheMath on Doing EA Better: Preamble, Summary, and Introduction · 2023-01-25T21:10:22.373Z · EA · GW

The decisions which caused the FTX catastrophe, the fact that EA is counterfactually responsible for the three primary AGI labs, Anthropic being entirely run by EAs yet still doing net-negative work, and the funding of mostly capabilities-oriented ML work with vague alignment justifications (and potentially similar dynamics in biotech, which are more speculative for me right now), with the creation of GPT[1] and RLHF as particular examples of this.


  1. I recently found out that GPT was not in fact developed for alignment work. I had gotten confused by some rhetoric used by OpenAI and its employees in the early days, which turned out to be entirely independent of modern alignment considerations. ↩︎

Comment by D0TheMath on Doing EA Better: Preamble, Summary, and Introduction · 2023-01-25T21:02:11.594Z · EA · GW

EAs should read more deep critiques of EA, especially external ones

  • For instance this blog and this forthcoming book

The blog post and book linked do not seem likely to me to discuss "deep" critiques of EA. In particular, I don't think the problems with the most harmful parts of EA are caused by racism or sexism or insufficient wokeism.

In general, I don't think many EAs, especially very new EAs with little context or knowledge about the community, are capable of distinguishing "deep" from "shallow" criticisms. I also expect them to be overly optimistic about the shallow criticisms they preach, and to confuse "deep & unpopular" with "speculative & wrong".

Comment by D0TheMath on My highly personal skepticism braindump on existential risk from artificial intelligence. · 2023-01-25T17:48:58.164Z · EA · GW

Eh, I don’t think this is a priors game. Quintin has lots of information, and I have lots of information, so if we were both acting optimally according to differing priors, our opinions likely would have converged.

In general I’m skeptical of explanations of disagreement which reduce things to differing priors. It’s just not physically or predictively correct, and it only feels nice because then you no longer have an epistemological duty to go and see why relevant people have differing opinions.

Comment by D0TheMath on My highly personal skepticism braindump on existential risk from artificial intelligence. · 2023-01-24T06:46:11.701Z · EA · GW

Yeah, he’s working on it, but it’s not his no. 1 priority. He developed shard theory.

Comment by D0TheMath on "Status" can be corrosive; here's how I handle it · 2023-01-24T01:36:20.020Z · EA · GW

Totally agree with everything in here!

I also like the framing: status-focused thinking was likely very strongly selected for in the ancestral environment, so when your brain comes up with status-focused justifications for various plans, you should be pretty skeptical about whether it is actually treating status as an instrumental goal toward your intrinsic goals, or as an intrinsic goal in itself. Similar to how you would be skeptical of your brain coming up with justifications for why it's actually a really good idea to hire that really sexy girl/guy interviewing for a position who, analyzed objectively, is a doofus.

Comment by D0TheMath on Forum + LW relationship: What is the effect? · 2023-01-24T01:16:33.783Z · EA · GW

Scared as in, like, 10-15% in the next 50 years assuming we don't all die.

Comment by D0TheMath on Forum + LW relationship: What is the effect? · 2023-01-24T01:15:49.388Z · EA · GW

I think the current arm's-length community interaction is good, but mostly because I'm scared EAs are going to do something crazy which destroys the movement, and that Lesswrongers will then be necessary to start another spinoff movement which fills the altruistic gap. If Lesswrong is too close to EA, then EA may take down Lesswrong with it.

Lesswrongers seem far less liable to play with metaphorical fire than EAs, given less funding, better epistemics, less overall agency, and fewer participants.

Comment by D0TheMath on Forum + LW relationship: What is the effect? · 2023-01-24T01:12:04.001Z · EA · GW

I disagree-voted.

I think pure open dialogue is often good for communities. You will find evidence for this if you look at most any social movement, the FTX fiasco, and immoral mazes.

Most long pieces of independent research that I see are made by open-phil, and I see far more EAs deferring to open-phil's opinion on a variety of subjects than Lesswrongers. Examples that come to mind from you would be helpful.

It was originally EAs who used such explicit expected value calculations, back in the GiveWell days, and I don't think I've ever seen an EV calculation done on LessWrong.

I think the more-karma-more-votes system is mostly good, but not perfect. In particular, it seems likely to reduce the impact of posts which are popular outside EA but not particularly relevant to EAs, a problem many subreddits have.

Comment by D0TheMath on Forum + LW relationship: What is the effect? · 2023-01-24T00:55:52.936Z · EA · GW

I strong downvoted this because I don't like online discussions that devolve into labeling things as cringe or based. I usually replace such words with low/high status, and EA already has enough of that noise.

Comment by D0TheMath on My highly personal skepticism braindump on existential risk from artificial intelligence. · 2023-01-24T00:52:33.345Z · EA · GW

I like this, and think it's healthy. I recommend talking to Quintin Pope, a smart person who has thought a lot about alignment and came to the informed, inside-view conclusion that we have a 5% chance of doom (or just reading his posts or comments). He has updated me downwards on doom a lot.

Hopefully that gets you to a point where you're able to update more on evidence that I think is evidence, because you'll have a better picture of what the best arguments against doom would be.

Comment by D0TheMath on FLI FAQ on the rejected grant proposal controversy · 2023-01-21T10:21:47.048Z · EA · GW

I find myself disliking this comment, and I think it’s mostly because it sounds like you 1) agree that many of the things Rob points out were blunders, yet 2) don’t seem to have learned anything from your mistake here? I don’t think many do or should blame you, and I’m personally concerned about repeated similar blunders on your part costing EA a lot of outside reputation and internal trust.

Like, do you think that the issue was that you were responding in heat, and if so, will you make a future policy of not responding in heat in future similar situations?

I feel like there are deeper problems here that won’t be corrected by such a policy, and your lack of concreteness is an impediment to communicating such concerns about your approach to CEA comms (and is itself a repeated issue that won’t be corrected by such a policy).

Comment by D0TheMath on Air-gapping evaluation and support · 2022-12-27T18:07:57.004Z · EA · GW

It would not surprise me if most HR departments are set up as the result of lots of political pressure from various special interests within orgs, and are mostly useless at their “support” role.

With more confidence, I’d guess a smart person could think of a far better way to do support that looks nothing like an HR department.

I think MATS would be far better served by ignoring the HR frame and just trying to rederive from scratch the properties of an org that does support well. The above post looks like a good start, but it’d be a shame if you all just went with a typical human resources department. Traditional companies do not in fact seem like they would be good at the thing you are talking about here.

Unless there are some weird incentives I know nothing about, effective community support is the kind of thing you should expect to do better than all of civilization at, if you are willing to think about it from first principles for 10 minutes.

Comment by D0TheMath on Be less trusting of intuitive arguments about social phenomena · 2022-12-18T18:23:19.793Z · EA · GW

Seems like that is just a bad argument, and can be answered by saying “well, that’s obviously wrong for obvious, commonsense reasons”. And if they really want to, they can make a spreadsheet, fill it in with the selection pressures they think they’re causing, and see for themselves that it is indeed wrong.
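
To make that concrete, here is a minimal, made-up sketch (mine, not from this discussion) of the kind of spreadsheet check I have in mind: pick a starting trait frequency, guess how strongly each round of community turnover selects for the trait, and see how far the claimed effect actually moves things. All numbers are hypothetical.

```python
# Toy replicator-style update; every number here is invented for illustration.
def trait_frequency_after(generations, p0=0.10, relative_fitness=1.05):
    """Trait frequency after `generations` rounds of community turnover,
    assuming carriers are retained/recruited `relative_fitness` times as
    often as non-carriers."""
    p = p0
    for _ in range(generations):
        p = p * relative_fitness / (p * relative_fitness + (1 - p))
    return p

if __name__ == "__main__":
    for gens in (1, 5, 20):
        print(f"after {gens:2d} turnover cycles: {trait_frequency_after(gens):.3f}")
    # A 5% per-cycle advantage moves a trait from 10% to only ~23% after
    # 20 cycles, so the selection pressure has to be huge (or act over many
    # generations) before it explains much about a community's makeup.
```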

The argument I’m making is that for most of the examples you gave, I thought “that’s a dumb argument”. And if people are consistently making transparently dumb selection arguments, that seems different from people making subtly dumb selection arguments, the way economists do.

If you have subtly dumb selection arguments, you should go out and test which are true; if you’re making transparently dumb ones, you should figure out how to formulate better hypotheses. Chances are you’re not yet even oriented in the vague direction of reality in the domain you’re attempting to reason about.

Comment by D0TheMath on Be less trusting of intuitive arguments about social phenomena · 2022-12-18T17:50:28.770Z · EA · GW

I don’t buy any of the arguments you listed at the top of the post, except for toxoplasma of rage (with lowish probability) and evaporative cooling. But both of these (to me) seem like descriptions of an aspect of a social dynamic, not the aspect. And currently not very decision-relevant.

Like, obviously they’re false. But are they useful? I think so!

I’d be interested in different mistakes you often see that are more interesting, more decision-relevant, or less obvious.

Comment by D0TheMath on You Don’t Have to Call Yourself an Effective Altruist or Fraternize With Effective Altruists or Support Longtermism, Just Please, for the Love of God, Help the Global Poor · 2022-12-17T08:05:03.586Z · EA · GW

I feel like you may be preaching to the choir here, but I agree with the sentiment (modulo thinking people should do more of whatever is the best thing on the margin).

Never mind, I see it's a crosspost.

Comment by D0TheMath on I went to the Progress Summit. Here’s What I Learned. · 2022-12-15T01:48:15.139Z · EA · GW

Overall, I think the Progress studies community seems decently aligned with what EAs care about, and could become more-so in the coming years. The event had decent epistemics and was less intimidating than an EA conference. I think many people who feel that EA is too intense, cares too much about longtermism, or uses too much jargon could find progress studies as a suitable alternative. If the movement known as EA dissolved (God forbid) I think progress studies could absorb many of the folks.

I'm curious about how you think this will develop. It seems like Progress studies often takes the stance that for all technologies, progress in that technology is good. This seems relatively central to their shtick.

Maybe their views will start to shift towards thinking in terms of what we would call differential technological development, where they can maintain their view that progress is good, but with the added caveat that progress is only good if certain technologies get developed sooner than others. Perhaps this is the perspective they already have on many technologies; I don't know enough about the community to tell.

Comment by D0TheMath on You *should* factor optics into EV calculations · 2022-12-13T14:35:35.510Z · EA · GW

Hm. I mostly don’t think people are good at doing that kind of reasoning. Generally when I see it in the wild, it seems very naive.

I’d like to know whether you, factoring optics into your EV calcs, see any optics mistakes EA is currently making which haven’t already blown up, and that (say) Rob Bensinger probably can’t see, given he’s not directly factoring optics into his EV calcs.

Comment by D0TheMath on You *should* factor optics into EV calculations · 2022-12-13T00:51:21.762Z · EA · GW

I think optics concerns are corrosive in the same way that PR concerns are. I quite like Rob Bensinger's perspective on this, as well as Anna's "PR" is corrosive, reputation is not.

I'd like to know what you think of these strategies. Notably, I think they defend against SBF, but not against Wytham Abbey type stuff, and conditional on Wytham Abbey being an object-level smart purchase, I think that's a good thing.

Comment by D0TheMath on Kurzgesagt's most recent video promoting the introducing of wild life to other planets is unethical and irresponsible · 2022-12-12T21:40:28.549Z · EA · GW

I wouldn’t advocate for engineering species to be sapient (in the sense of having valenced experiences), but for those that already are, it seems sad that they don’t have higher ceilings on their mental capabilities. It’s like having many people condemned to never develop past toddlerhood.

edit: also, this is a long-term goal. Not something I think makes sense to make happen now.

Comment by D0TheMath on Kurzgesagt's most recent video promoting the introducing of wild life to other planets is unethical and irresponsible · 2022-12-12T21:24:08.983Z · EA · GW

I wish people would stop optimizing their titles for what they think would be engaging to click on. I usually downvote such posts once I realize what was done.

I ended up upvoting this one because I think it makes an important point.

Comment by D0TheMath on Kurzgesagt's most recent video promoting the introducing of wild life to other planets is unethical and irresponsible · 2022-12-12T21:21:48.261Z · EA · GW

I interpreted “eliminate natural ecosystems” as more like eliminating global poverty in the human analogy. It seems bad to do a mass killing of all animals, and better to just make their lives very good and give them the ability to mentally develop past mental ages of 3-7.

Comment by D0TheMath on Kurzgesagt's most recent video promoting the introducing of wild life to other planets is unethical and irresponsible · 2022-12-11T22:59:01.727Z · EA · GW

If done immediately, this seems like it’d severely curtail humanity’s potential. But at some point in the future, this seems like a good idea.

Comment by D0TheMath on Questions about AI that bother me · 2022-12-11T17:30:07.033Z · EA · GW

You should make Manifold markets predicting what you’ll think of these questions in a year or in 5 years.

Comment by D0TheMath on [deleted post] 2022-12-08T05:04:51.784Z

Didn't see the second part there.

If you would not trade $10 billion for 3 weeks that could be because:

  • I'm more optimistic about empirical research / think the time iterating at the end when we have the systems is significantly more important than the time now when we can only try to reason about them.
  • you think money will be much less useful than I expect it to be

I wouldn't trade $10 billion, but I think empirical research is good. It just seems like we can already afford a bunch of the stuff we want, and I expect we will continue to get lots of money without needing to sacrifice 3 weeks.

I also think people are generally bad consequentialists on questions like these. There is an obvious loss and a speculative gain. The speculative gain looks very shiny because you make lots of money and end up doing something cool. The obvious loss does not seem very important because it's not immediately world-destroying, and is somewhat boring.

Comment by D0TheMath on [deleted post] 2022-12-08T04:56:23.890Z

the amount of expected serial time a successful (let's say $10 billion dollar) AI startup is likely to counterfactually burn. In the post I claimed that this seems unlikely to be more than a few weeks. Would you agree with this?

No, see my comment above. It's the difference between a super duper AGI and an only super-human AGI, which could be years or months (but very, very critical months!). Plus whatever you add to the hype, plus worlds where you somehow make $10 billion from this are also worlds where you've had an inordinate impact, which makes me more suspicious that the $10 billion-company world is one where someone decided to just make the company another AGI lab.

the relative value of serial time to money (which is exchangeable with parallel time). If you agree with the first statement, would you trade $10 billion dollars for 3 weeks of serial time at the current margin?

Definitely not! Alignment is currently talent- and time-constrained, and very much not funding-constrained. I don't even know what we'd buy that'd be worth $10 billion. Maybe some people have some good ideas. Perhaps we could buy lots of compute? But we can already buy lots of compute. I don't know why we aren't, but I doubt it's because we can't afford it.

Maybe I'd trade a day for $10 billion? I don't think I'd trade 2 days for $20 billion though. Maybe I'm just not imaginative enough. Any ideas yourself?

Comment by D0TheMath on [deleted post] 2022-12-08T04:46:13.577Z

I think it has a large chance of accelerating timelines by a small amount, and a small chance of accelerating timelines by a large amount. You can definitely increase capabilities even if the startup isn't doing research directly aimed at increasing the size of our language models. Figuring out how to milk language models for all the capabilities they have, finding the limits of such milking, and making highly capable APIs easy for language models to use are all things which shorten timelines. You go from needing a super duper AGI to take over the world to needing only a barely super-human AGI, if all it can do is output text.

Relatedly, contributing to the AGI hype shortens timelines too.

I also think the above assumes monetary or prestige pressures won't cause organizational value drift. I think it's quite likely whoever starts this will see pressure from funders, staff, and others to turn it into an AGI firm. You need good reason to believe your firm is not going to cave, and I see nothing addressing this concern in the original post.

Comment by D0TheMath on [deleted post] 2022-12-08T03:27:42.278Z

Also, from what I've heard, you cannot in fact use ungodly amounts of money to move talent. Generally, if top researchers were swayable that way, they'd be working in industry. Mostly, they just like working on their research, and don't care much about how much they're paid.

Comment by D0TheMath on [deleted post] 2022-12-08T03:24:17.701Z

In general, it is a bad idea to trade an increased probability that the world ends for money if your goal is to decrease the probability that the world ends. People are usually bad at this kind of consequentialism, and this definitely trips my 'galaxy-brained take' detector.

And to the "but we'll do it safer than the others" or "we'll use our increased capabilities for alignment!" responses, I refer you to Nate's excellent post rebutting that line of thought.

Comment by D0TheMath on What specific changes should we as a community make to the effective altruism community? [Stage 1] · 2022-12-07T21:43:49.793Z · EA · GW

Most suggestions I see for alternative community norms to the ones we currently have seem to throw out many of the upsides of the community norms they're trying to replace.

Comment by D0TheMath on What specific changes should we as a community make to the effective altruism community? [Stage 1] · 2022-12-07T21:43:11.033Z · EA · GW

When trying to replace community norms, we should try to preserve the upsides of having the previous community norms.

Comment by D0TheMath on What specific changes should we as a community make to the effective altruism community? [Stage 1] · 2022-12-07T21:42:40.212Z · EA · GW

Almost all community norms we currently have have many upsides we should try to maintain.

Comment by D0TheMath on What specific changes should we as a community make to the effective altruism community? [Stage 1] · 2022-12-07T21:41:32.276Z · EA · GW

Pick your poison:

Comment by D0TheMath on What specific changes should we as a community make to the effective altruism community? [Stage 1] · 2022-12-07T21:40:05.940Z · EA · GW

A significant fraction of the focus going into reevaluating community norms feels misplaced.

Comment by D0TheMath on What specific changes should we as a community make to the effective altruism community? [Stage 1] · 2022-12-07T21:39:48.322Z · EA · GW

Most of the focus going into reevaluating community norms feels misplaced.

Comment by D0TheMath on What specific changes should we as a community make to the effective altruism community? [Stage 1] · 2022-12-06T22:46:56.501Z · EA · GW

The EA community is on the margin too excited to hear about criticism.

Comment by D0TheMath on What specific changes should we as a community make to the effective altruism community? [Stage 1] · 2022-12-06T22:45:41.027Z · EA · GW

All the focus going into reevaluating community norms feels misplaced.

Comment by D0TheMath on Mass Good · 2022-11-30T06:29:10.599Z · EA · GW

I really dislike the term Mass Good, but I like the speculation on what alternative names for EA there could be.

Mass Good sounds very clunky, with lots of back-of-the-mouth vowels I dislike hearing. It also originally made me think of "mass" as in a Catholic mass, then "mass" as in matter (making me click on the article, thinking I'd see a funny post about a cause area devoted to maximizing the amount of mass in the universe or something).