Some potential lessons from Carrick’s Congressional bid 2022-05-18T05:58:22.539Z
Daniel_Eth's Shortform 2021-11-09T10:34:00.308Z
Great-Filter Hard-Step Math, Explained Intuitively 2021-11-01T22:53:24.625Z
[Link post] Paths To High-Level Machine Intelligence 2021-09-22T02:43:24.637Z
The Technological Landscape Affecting Artificial General Intelligence and the Importance of Nanoscale Neural Probes 2018-01-12T01:10:07.056Z
Donating To High-Risk High-Reward Charities 2017-02-14T04:20:31.664Z


Comment by Daniel_Eth on New Cause Area: Baby Longtermism – Child Rights and Future Children · 2022-09-17T00:14:09.231Z · EA · GW

Meta-point – I think it would be better if this was called something other than "baby longtermism", as I found this confusing. Specifically, I initially thought you were going to be writing a post about a baby (i.e., "dumbed-down") version of longtermism.

Comment by Daniel_Eth on Are "Bad People" Really Unwelcome in EA? · 2022-08-13T14:52:30.683Z · EA · GW

"That said, when I started the 10% thing, I did so under the impression that it was what the sacrifice I needed to make to gain acceptance in EA"

If this sentiment is at all widespread among people on the periphery of EA or who might become EA at some point, then I find that VERY concerning. We'd lose a lot of great people if everyone assumed they couldn't join without making that kind of sacrifice.

Comment by Daniel_Eth on [link post] The Case for Longtermism in The New York Times · 2022-08-06T05:59:35.036Z · EA · GW

Hmm, I don't read it that way. My read of this passage is: the risk of WWIII by 2070 might be as high as somewhat over 20% (but that estimate is probably picked from the higher end of serious estimates), WWIII may or may not lead to all-out nuclear war, all-out nuclear war has some unknown chance of leading to the collapse of civilization, and if that happened then there would also be some further unknown chance of never recovering. So all in all, I'd read this as Will thinking that X-risk from nuclear war in the next 50 years was well below 20%.

I also don't think NYT readers have particularly clear prejudices about nuclear war (they probably have larger prejudices about things like overpopulation), so this would be a weird place to make a concession, in my mind.

Comment by Daniel_Eth on Passing Up Pay · 2022-07-13T23:47:50.754Z · EA · GW

My personal view is that targeted small-dollar political donations (which large donors cannot simply fill, due to campaign finance laws) are likely to be vastly higher value on the margin than correspondingly sized (equivalent size plus tax savings) non-political donations to organizations that large donors can fill, insofar as such targeted political opportunities arise. So if I were in the situation you're describing, I'd accept the higher salary with the intention of donating to such political opportunities when they arose. Of course, this logic is specific to a particular kind of donation opportunity, and won't generalize to most areas that EAs currently donate to.

Comment by Daniel_Eth on New US Senate Bill on X-Risk Mitigation [Linkpost] · 2022-07-06T02:02:46.017Z · EA · GW

It's interesting the term 'abused' was used with respect to AI. It makes me wonder if the authors have misalignment risks in mind at all or only misuse risks.


A separate press release says, "It is important that the federal government prepare for unlikely, yet catastrophic events like AI systems gone awry" (emphasis added), so my sense is they have misalignment risks in mind.

Comment by Daniel_Eth on Digital people could make AI safer · 2022-06-11T05:14:12.799Z · EA · GW

You might be interested in my paper on this topic, where I also come to the conclusion that achieving WBE before de novo AI would be good:

Comment by Daniel_Eth on My first effective altruism conference: 10 learnings, my 121s and next steps · 2022-05-23T08:06:23.088Z · EA · GW

Go to EA conferences even if you don't think you are a good fit or 100% bought in to EA. It sparked my interest, sprouted ideas and I was able to tangibly help and share my experiences with others. I underestimated the value of my perspectives for others in different walks of life.


This resonated with me. For my first EA Global (back in 2016), I applied on a whim, attracted by a couple of the speakers and the fact that the conference was close to my hometown, but hesitant due to a few negative misperceptions I had about EA at the time. While there, I felt very much at home, and I've been heavily involved in EA ever since. Of course, not everyone will have the same experience, but my sense is there's a pretty wide range of surprising upsides from going to these sorts of conferences, and it's often worth going to at least one if you're uncertain.

Comment by Daniel_Eth on Death to 1 on 1s · 2022-05-23T07:47:02.568Z · EA · GW

I've also found going for walks during 1-on-1s to be nice, to the point that I do this for the majority of my 1-on-1s (this also has the side benefit of reducing covid risk).

Comment by Daniel_Eth on Replicating and extending the grabby aliens model · 2022-05-03T02:54:58.398Z · EA · GW


The possibility of try-once steps allows one to reject the existence of hard try-try steps, but suppose very hard try-once steps.

  • I'm not seeing why this is. Why is that the case?


Because if (say) only 1/10^30 stars has a planet with just the right initial conditions to allow for the evolution of intelligent life, then that fully explains the Great Filter, and we don't need to posit that any of the try-try steps are hard (of course, they still could be).
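The arithmetic behind this point can be sketched in a few lines. This is my own illustrative calculation, not from the original comment; both numbers are assumptions (roughly 10^22 stars in the observable universe, and the hypothetical 1/10^30 try-once probability from above):

```python
# Back-of-the-envelope check that a single very hard try-once step can,
# on its own, account for the Great Filter.

NUM_STARS = 1e22            # rough star count in the observable universe (assumption)
P_RIGHT_CONDITIONS = 1e-30  # hypothetical chance a star's planet has the right initial conditions

expected_origins = NUM_STARS * P_RIGHT_CONDITIONS
print(f"Expected planets passing the try-once step: {expected_origins:.0e}")
# With these numbers the expectation is far below 1, so even if every
# subsequent try-try step were easy, we'd expect to observe no one else.
```

With these (hypothetical) numbers, the expected number of planets clearing the step is around 10^-8, which is why no hard try-try steps need to be posited.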

Comment by Daniel_Eth on A new media outlet focused in part on philanthropy · 2022-03-13T02:18:17.281Z · EA · GW

FWIW, I found the interview with SBF to be quite fair, and imho it presented Sam in a neutral-to-positive light (though perhaps a bit quirky). Teddy's more recent reporting/tweets about Sam also strike me as both fair and neutral to positive.

Comment by Daniel_Eth on evelynciara's Shortform · 2021-11-14T00:48:08.864Z · EA · GW

Hmm, culturally YIMBYism seems much harder to do in suburbs/rural areas. I wouldn't be too surprised if the easiest ToC here is to pass YIMBY-energy policies on the state level, with most of the support coming from urbanites. 

But sure, still probably worth trying.

Comment by Daniel_Eth on evelynciara's Shortform · 2021-11-12T21:59:25.011Z · EA · GW

I thought YIMBYs were generally pretty in favor of this already? (Though not generally as high a priority for them as housing.) My guess is it would be easier to push the already existing YIMBY movement to focus on energy more, as opposed to creating a new movement from scratch.

Comment by Daniel_Eth on Daniel_Eth's Shortform · 2021-11-11T10:48:03.123Z · EA · GW

Not just EA funds – I think (almost?) all random, uninformed EA donations would be much better than donations to an index fund considering all charities on Earth.

Comment by Daniel_Eth on A Model of Patient Spending and Movement Building · 2021-11-11T10:19:17.223Z · EA · GW

if one wants longtermism to get a few big wins to increase its movement building appeal, it would surprise me if the way to do this was through more earning to give, rather than by spending down longtermism's big pot of money and using some of its labor for direct work

I agree – I think the practical implication is more "this consideration updates us towards funding/allocating labor towards direct work over explicit movement building" and less "this consideration updates us towards E2G over direct work/movement building".

Comment by Daniel_Eth on A Model of Patient Spending and Movement Building · 2021-11-11T10:15:22.488Z · EA · GW

because of scope insensitivity, I don't think potential movement participants would be substantially more impressed by $2*N billions of GiveDirectly-equivalents of good per year vs just $N billions

Agree (though potential EAs may be more likely to be impressed with that stuff than most people), but I think qualitative things that we could accomplish would be impressive. For instance, if we funded a cure for malaria (or cancer, or ...) I think that would be more impressive than if we funded some people trying to cure those diseases but none of the people we funded succeeded. I also think that people are more likely to be attracted to AI safety if it seems like we're making real headway on the problem.

Comment by Daniel_Eth on Daniel_Eth's Shortform · 2021-11-09T23:14:38.292Z · EA · GW

I think you answered your own question? The index fund would just allocate in proportion to current donations, reducing both overhead for fund managers and the necessity to trust the managers' judgement (other than for deciding which charities do/don't qualify to begin with). I'd imagine the value of the index fund might increase as EA grows and the number of manager-directed funds increases (as many individual donors wouldn't know which direct fund to give to, and the index fund would track donations as a whole, including to direct funds).

Comment by Daniel_Eth on A Model of Patient Spending and Movement Building · 2021-11-09T19:36:19.007Z · EA · GW

This looks good! One possible modification that I think would enhance the model would be an arrow from "direct work" or "good in the world" to "movement building" – I'd imagine that the movement will be much more successful in attracting new members if we're seen as doing valuable things in the world.

Comment by Daniel_Eth on Daniel_Eth's Shortform · 2021-11-09T19:09:38.738Z · EA · GW

Presumably someone (or a group) would have to create a list (potentially after creating an explicit set of criteria), and then the list would be updated periodically (say, yearly). 

Comment by Daniel_Eth on Daniel_Eth's Shortform · 2021-11-09T10:34:00.574Z · EA · GW

Should there be an "EA Donation Index Fund" that allows people to simply "donate the market" (similar to how index funds like the S&P500 allow for simply buying the market)? This fund could allocate donations to EA orgs in proportion to the total donations that those funds receive (from EA sources?) over the year (it would perhaps make sense for there to be a few such funds – such as one for EA as a whole, one for longtermism, one for global health and development, etc).

I see a few potential benefits:
• People who want to donate effectively (especially if they want to diversify their donations) but lack the knowledge/expertise/time/etc, and who for whatever reason don't necessarily trust EA funds to donate appropriately on their behalf, would have a simple way to do so. I expect there may be many people holding back from donating now for lack of a sense of how to donate best (including people on the periphery of EA), so this might increase donations. I'd further expect the quality of donations from less knowledgeable donors to increase if they simply donated the market.
• Could be lower overhead and more scalable compared to other funds.
• Aesthetically, I'd imagine this sort of setup might appeal to finance people, and finance people have a lot of money, so it may widen the pool of donors to EA.
• Index fund donations would effectively function as matching donations. If, for instance, half of all EA donations flowed through an EA index fund, then a direct donation to a specific charity would be matched by the index fund moving money towards that charity as well (of course, at the expense of other charities in the fund). This would arguably give direct donors greater incentive to donate more – at least insofar as they thought they knew more than, or had better values than, the market, but that would be their revealed preference from choosing to donate directly instead of through the index fund.
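The allocation rule I have in mind is simple to state in code. This is a minimal sketch with hypothetical charity names and numbers of my own (not an existing fund or API): each charity receives a share of the fund proportional to the direct donations it received over the year.

```python
# Minimal sketch of "donating the market": split the index fund's pot
# across charities in proportion to the direct donations each received.

def allocate_index_fund(direct_donations: dict[str, float],
                        fund_total: float) -> dict[str, float]:
    """Return each charity's share of fund_total, proportional to direct donations."""
    total_direct = sum(direct_donations.values())
    return {charity: fund_total * amount / total_direct
            for charity, amount in direct_donations.items()}

# Hypothetical year: $60M of tracked direct donations, $10M in the index fund.
direct = {"Charity A": 30e6, "Charity B": 20e6, "Charity C": 10e6}
allocation = allocate_index_fund(direct, fund_total=10e6)
# Charity A received half of direct donations, so it gets half the fund;
# a marginal direct donation shifts this split, producing the matching effect.
```

Note the flip side baked into the formula: any direct donation to one charity mechanically reduces every other charity's share of the fund.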

Comment by Daniel_Eth on How do EAs deal with having a "weird" appearance? · 2021-11-09T00:51:13.222Z · EA · GW

FWIW, I don't think there's a cost in academia for looking a little bit different if doing so makes you look a bit better (at least if we're talking about within the US – other countries may be different). Yes, an unkempt, big bushy beard would presumably be a negative (though less so in academia than in other professions), but stylish hairstyles like Afro buns or cornrows might even be a slight positive.

Comment by Daniel_Eth on Complexity Science, Economics and the Art of Zen. · 2021-11-07T06:57:45.023Z · EA · GW

Lysenkoism was used by central planners to attempt to improve Soviet agricultural output, and, unsurprisingly, exacerbated famines. This is just one example of how dumb Soviet central planners were on critical issues. I doubt the Soviet space program would have worked as well as it did if the thinking of their rocket scientists was at a similar level to that of those running their economy.

Comment by Daniel_Eth on There's a role for small EA donors in campaign finance · 2021-11-06T02:11:44.109Z · EA · GW

The DSCC's goal is just to elect Democrats – they don't consider, for instance, how different Democrats differ on EA criteria such as biosecurity. Donating to particularly aligned candidates (especially in primaries) is probably higher value than donating to existing (non-EA) funds.

Comment by Daniel_Eth on There's a role for small EA donors in campaign finance · 2021-11-06T01:55:37.157Z · EA · GW

I agree more nuance in the headline would have been better (eg., if it included the word "potentially" to say "There's potentially a role for small EA donors in campaign finance"), but note that's effectively what the body of the piece says, such as here: "consider that election campaign contributions might be a way in which you can have a substantial impact as a small donor" (emphasis added).

Comment by Daniel_Eth on Complexity Science, Economics and the Art of Zen. · 2021-11-06T01:23:29.658Z · EA · GW

“Economics can be harder than rocket science: the Soviet Union was great at rocket science”

This is a good quote, but it seems a little unfair. The Soviets' rocket scientists were brilliant scientific thinkers, while their economic planners really were not. I don't think we have clear evidence one way or the other regarding how well central planning would work if the central planners were particularly smart people with good epistemic hygiene.

Comment by Daniel_Eth on What community norm would you most like to see added, removed or acknolwedged within EA? · 2021-11-05T22:24:03.748Z · EA · GW

"Hey, I think I'm going to mingle some. [Optional: This was interesting/Thanks for telling me about XYZ, I'll look into it/Good luck with ABC/whatever makes sense given the context]"

Comment by Daniel_Eth on Nathan Young's Shortform · 2021-11-05T22:12:11.683Z · EA · GW

Yeah, I think the community response to the NYT piece was counterproductive, and I've also been dismayed at how much people in the community feel the need to respond to smaller hit pieces, effectively signal boosting them, instead of just ignoring them. I generally think people shouldn't engage with public attacks unless they have training in comms (and even then, sometimes the best response is just ignoring).

Comment by Daniel_Eth on Can EA leverage an Elon-vs-world-hunger news cycle? · 2021-11-05T20:34:22.468Z · EA · GW

Hmm, thinking personally, my tweets are definitely more off the cuff and don't live up to the same standard of rigor as my academic papers. I think this is reasonable, since that's what people are expecting from tweets vs academic papers, so I expect the audience will update differently based on them. Also, it's probably good for society/the marketplace of ideas for there to be different venues with different standards (eg., op-eds vs news articles; preprints vs peer-reviewed papers, etc). The case here seems potentially* somewhat similar (let's say, hypothetically, that we're 75% sure that Koch is acting in bad faith; I wouldn't want CNN then saying that he's probably acting in bad faith, but it seems reasonable for a piece in CA to do so).

*note I haven't actually read the piece in question, but I think the general point stands

Comment by Daniel_Eth on Can EA leverage an Elon-vs-world-hunger news cycle? · 2021-11-03T01:41:15.681Z · EA · GW

"the EPA has ranked us either number one or two of US companies in pollution reduction initiatives"

This kinda makes me laugh, because the only way to be the company that reduces their pollution the most is to be polluting a ton in the first place. This is like saying "I know I'm a hero, because in the past year I've reduced the annual number of people I've killed more than anyone else".

Comment by Daniel_Eth on What are your favourite ways to buy time? · 2021-11-02T22:07:47.789Z · EA · GW
  • Delivery: for groceries (from Instacart), restaurant food (from ubereats), convenience store stuff (from Amazon), etc
  • Automation/outsourcing: more prepared food (either from restaurants or grocery stores) instead of cooking
  • Redundancy (so I'm not caught off guard and in need of doing an errand): for batteries, chargers, extra food, etc; also if you need something to be productive and you're not sure which version you need, just buy a few versions at once and eat the costs instead of buying one at a time
  • Comfort (so I can work harder/longer without getting distracted): good chair with back support, good mattress, large desk that can convert between sit/stand, acuvue oasys daily contacts
  • Home workout equipment (so I don't need to waste time driving to the gym)
  • Upgrades to avoid wait times/ads: Youtube premium, ubereats premium, etc
  • Not searching for great deals but instead just buying quickly and eating the difference in cost
  • The big one: a place that has the following features:
    • no roommates (fewer distractions and easier to sleep)
    • in a quiet area
    • large-ish/multiple rooms (less claustrophobic feeling/easier to focus, and also easier to compartmentalize by room)
    • close to a park/other green space or blue space (I like going for walks in these sorts of areas, and closer means less time to get there)
Comment by Daniel_Eth on Buck's Shortform · 2021-11-01T20:41:53.671Z · EA · GW

I regret taking the pledge

I feel like you should be able to "unpledge" in that case, and further I don't think you should feel shame or face stigma for this. There's a few reasons I think this:

  • You're working for an EA org. If you think your org is ~as effective as where you'd donate, it doesn't make sense for them to pay you money that you then donate (unless you felt there was some psychological benefit to this, but clearly you feel the reverse)
  • The community has a LOT of money now. I'm not sure what your salary is, but I'd guess it's lower than optimal given community resources, so you donating money to the community pot is probably the reverse of what I'd want.
  • I don't want the community to be making people feel psychologically worse, and insofar as it is, I want an easy out for them. Therefore, I want people in your situation in general to unpledge and not feel shame or face stigma. My guess is that if you did so, you'd be sending a signal to others that doing so is acceptable. 
  • You signed the pledge under a set of assumptions which appear to no longer hold (eg., about how you'd feel about the pledge years out, how much money the community would have, etc)
  • I'm generally pro-[people being able to "break contract" and similar without facing large penalties] (other than paying damages, but damages here would be zero since presumably no org made specific plans on the assumption that you'd continue to follow the pledge) – this reduces friction in making contracts to begin with and allows for more dynamism. Yes, a "pledge" in some ways has more meaning than a contract, but seeing as you (apparently) made the pledge relatively hastily (and perhaps under pressure from others? I find this unclear from your post), it doesn't seem like it was appropriate for you to have been making a lifelong commitment to the pledge, and I think we as a community should recognize that and adjust our response accordingly.
Comment by Daniel_Eth on Should Effective Altruists Sign Up for Oxford’s COVID Challenge Study? · 2021-10-31T08:29:35.275Z · EA · GW

the additional risk to a healthy young person is probably a much smaller sacrifice than 10% of one's lifetime earnings

FWIW, I'm also against people saying "EAs should give at least 10% of their income to charity" – this makes people who don't want to make that sort of commitment feel unwelcome, and my sense is that rhetoric along those lines has hurt movement growth.

Comment by Daniel_Eth on Should Effective Altruists Sign Up for Oxford’s COVID Challenge Study? · 2021-10-30T23:44:04.449Z · EA · GW

Pedantic, but I'm somewhat uncomfortable with the rhetoric of whether EAs "should" sign up for this (as in, they have an obligation to do so, which they are failing to live up to if they don't), given the personal risks involved. (I think it's reasonable to have a discussion on the object-level question of whether signing up scores well by EA lights – I'm not objecting to that – though I don't personally have a formed opinion on this question either way.)

Comment by Daniel_Eth on Linch's Shortform · 2021-10-30T00:34:39.864Z · EA · GW

I think a better framing might be projects that Open Phil and other funders would be inclined to fund at ~$X (for some large X, not necessarily 100M), and have cost-effectiveness similar to their current last dollar in the relevant causes or better.

I think I disagree and would prefer Linch's original idea; there may be things that are much more cost-effective than OPP's current last dollar (to the point that they'd provide >>$100M of value for <<$100M to OPP), but which can't absorb $X (or which OPP wouldn't pay $X for, due to other reasons).

Comment by Daniel_Eth on Linch's Shortform · 2021-10-30T00:31:08.265Z · EA · GW

Michael Dickens has already done a bunch of work on this

Can you link to this work?

Comment by Daniel_Eth on Who has done the most good? · 2021-10-29T02:42:57.097Z · EA · GW

Arguably yes. Early British abolitionists were clearly influenced by American abolitionists, and abolitionism in Britain (and to a lesser degree America) was a major factor in the success of abolitionism in other countries. The big uncertainties here are: 1) how deterministic vs stochastic was the success of abolitionism, and 2) even if it was very stochastic/we got "lucky", how important was Lay in particular for tipping success over the edge.

The other thing I'll say is that it's worth reading Will MacAskill's book on longtermism (What We Owe the Future) when it comes out, which makes a pretty good case that abolitionism's success wasn't predetermined, and also does a good job talking about how important Benjamin Lay in particular was for abolitionism (though Will doesn't argue that abolitionism's success was dependent on Lay; I'm not sure what odds Will would put on P(much slavery in 2021 | world without Benjamin Lay) - P(much slavery in 2021 | world with Benjamin Lay)).

Comment by Daniel_Eth on Good news on climate change · 2021-10-28T23:30:05.232Z · EA · GW

Some more good news: it looks like the US is going to be spending $555B over the next 10 years to combat climate change. Hopefully a decent chunk of this will be spent somewhat effectively.

Comment by Daniel_Eth on Who has done the most good? · 2021-10-28T23:21:52.646Z · EA · GW

Benjamin Lay. Probably did more than anyone else to kick off the abolitionist movement. There's a not-too-crazy story under which if not for him, slavery might still be common throughout the world today. (And under the same world model, the further rights advances/moral circle expansion that followed abolitionism – e.g., women's rights, gay rights, animal rights, etc – likely wouldn't have occurred either.)

Comment by Daniel_Eth on An update in favor of trying to make tens of billions of dollars · 2021-10-21T09:42:05.334Z · EA · GW

I think the update is less about attempting to become a multi-billionaire vs direct work, and more about attempting to become a multi-billionaire over other E2G work.

Comment by Daniel_Eth on Future Funding/Talent/Capacity Constraints Matter, Too · 2021-10-21T08:36:15.708Z · EA · GW

I think one large argument against what you're saying is that spending/direct work attracts more people to the movement (some of whom will do E2G), and might even have a higher ROI just looking at the movement's financials than investing/E2G (this argument comes from Owen here).

Also, since there are so few people now in a position to do direct work, it seems like the value of a marginal person doing so is quite high, and much higher than the equivalent labor of the marginal person to do EA-funded work in the future once we've figured out how to scale up our spending to billions of dollars per year.

Comment by Daniel_Eth on [deleted post] 2021-10-18T03:12:24.519Z

I like Bostrom and Shulman's compromise proposal (below) – turn 99.99% of the reachable resources in the universe into hedonium, while leaving 0.01% for (post-)humanity to play with.

Comment by Daniel_Eth on Peter Wildeford's Shortform · 2021-10-15T22:13:52.467Z · EA · GW

Some people at FHI have had random conversations about this, but I don't think any serious work has been done to address the question.

Comment by Daniel_Eth on What is the EU AI Act and why should you care about it? · 2021-10-09T21:34:59.582Z · EA · GW

"If/When the monitoring of transformative AI systems becomes necessary, the AI Act ensures that the European Union will have institutions with plenty of practice."

It's true that setting up institutions earlier allows for more practice, and I suspect the act is probably good on the whole, but it's also worth considering potential negative aspects of setting up institutions earlier. For example:

  • potential for more institutional sclerosis
  • institutional inertia may ~lock in features now, despite having a less clear-eyed view than we'll likely have in the future
Comment by Daniel_Eth on AMA: Jeremiah Johnson, Director/Founder of the Neoliberal Project · 2021-10-03T10:06:45.121Z · EA · GW

"I often read that we should be wary of backlash in case anti immigrant parties get into power, but if that's stopping us pass immigration measures those parties are getting what they want anyway."

This assumes that the only negative aspect of anti-immigrant parties is their anti-immigrant stance. If they're also worse on other metrics as well, then the logic doesn't necessarily hold.

Comment by Daniel_Eth on AMA: Jeremiah Johnson, Director/Founder of the Neoliberal Project · 2021-10-03T09:35:04.656Z · EA · GW

Hmm, I'm not sure if that's true. People really like animals, people find emerging technology/futurism interesting, and even some of the weirder ideas (eg., philosophy of mind, aliens) are captivating to people (at least when dumbed down somewhat). Contrast these ideas with wonky political ideas like monetary policy or open borders, and I'd guess that EA issues come out ahead of neoliberal issues on interest.

Comment by Daniel_Eth on Nathan Young's Shortform · 2021-10-03T08:33:26.057Z · EA · GW

Personal anecdote possibly relevant for 2): EA Global 2016 was my first EA event. Before going, I had lukewarm-ish feelings towards EA, due mostly to a combination of negative misconceptions and positive true-conceptions; I decided to go anyway somewhat on a whim, since it was right next to my hometown, and I noticed that Robin Hanson and Ed Boyden were speaking there (and I liked their academic work). The event was a huge positive update for me towards the movement, and I quickly became involved – and now I do direct EA work.

I'm not sure that a different introduction would have led to a similar outcome. The conversations and talks at EAG are just (as a general rule) much better than at local events, and reading books or online material also doesn't strike me as naturally leading to being part of a community in the same way.

It's possible my situation doesn't generalize to others (perhaps I'm unusual in some way, or perhaps 2021 is different from 2016 in a crucial way such that the "EAG-first" strategy used to make sense but doesn't anymore), and there may be other costs with having more newcomers at EAG (eg diluting the population of people more familiar with EA concepts), but I also think it's possible my situation does generalize and that we'd be better off nudging more newcomers to come to EAG.

Comment by Daniel_Eth on World Climate Legionnaires · 2021-10-03T07:03:53.795Z · EA · GW

"War seems to be the only endeavor Americans feel good about"

As an American, I found this statement to be unnecessarily hostile. I know you're being hyperbolic, but I think the forum would be better if it didn't have language like this.

Comment by Daniel_Eth on Has anyone found an effective way to scrub indoor CO2? · 2021-06-28T17:57:46.512Z · EA · GW

Also the cost of noise, and possibly outside pollution (though that can be addressed with HEPA filters & ozone filters)

Comment by Daniel_Eth on 2018-2019 Long-Term Future Fund Grantees: How did they do? · 2021-06-21T04:12:16.213Z · EA · GW

"There is a part of me which finds the outcome (a 30 to 40% success rate) intuitively disappointing"

Not only do I somewhat disagree with this conclusion, but I don't think this is the right way to frame it. If we discard the "Very little information" group, then there's basically a three-way tie between "surprisingly successful", "unsurprisingly successful", and "surprisingly unsuccessful". If a similar amount of grants are surprisingly successful and surprisingly unsuccessful, the main takeaway to me is good calibration about how successful funded grants are likely to be.

Comment by Daniel_Eth on Kardashev for Kindness · 2021-06-16T22:57:41.261Z · EA · GW

"I definitely don't think that a world without suffering would necessarily be a state of hedonic neutral, or result in meaninglessness"

Right, it wouldn't necessarily be neutral – my point was that your definition of Type III allowed for a neutral world, not that it required it. I think it makes more sense for the highest classification to be reserved specifically for a very positive world, as opposed to something that could be anywhere from neutral to very positive.

Comment by Daniel_Eth on Event-driven mission correlated investing and the 2020 US election · 2021-06-16T17:05:56.482Z · EA · GW

Good points.