Posts

Being the person who doesn't launch nukes: new EA cause? 2022-08-06T03:44:44.339Z
What are some high-EV but failed EA projects? 2022-05-13T16:01:22.286Z
The Risk of Concentrating Wealth in a Single Asset 2022-04-29T17:58:01.475Z
A Preliminary Model of Mission-Correlated Investing 2022-04-04T18:50:00.738Z
A Comparison of Donor-Advised Fund Providers 2022-03-09T18:53:35.422Z
Should Earners-to-Give Work at Startups Instead of Big Companies? 2021-11-12T22:55:20.332Z
Future Funding/Talent/Capacity Constraints Matter, Too 2021-10-18T22:19:53.532Z
Low-Hanging (Monetary) Fruit for Wealthy EAs 2021-10-16T15:43:14.078Z
Mission Hedgers Want to Hedge Quantity, Not Price 2021-08-18T20:32:36.896Z
How Do AI Timelines Affect Giving Now vs. Later? 2021-08-03T03:36:43.356Z
Metaculus Questions Suggest Money Will Do More Good in the Future 2021-07-22T01:56:48.028Z
Reverse-Engineering the Philanthropic Discount Rate 2021-07-09T18:32:15.150Z
Asset Allocation and Leverage for Altruists with Constraints 2020-12-14T20:48:26.789Z
Uncorrelated Investments for Altruists 2020-11-23T23:03:23.933Z
If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant 2020-11-23T22:47:32.514Z
Donor-Advised Funds vs. Taxable Accounts for Patient Donors 2020-10-19T20:38:23.801Z
MichaelDickens's Shortform 2020-09-24T00:01:24.005Z
"Disappointing Futures" Might Be As Important As Existential Risks 2020-09-03T01:15:50.466Z
Giving Now vs. Later for Existential Risk: An Initial Approach 2020-08-29T01:04:34.488Z
Should We Prioritize Long-Term Existential Risk? 2020-08-20T02:23:43.393Z
The Importance of Unknown Existential Risks 2020-07-23T19:09:56.031Z
Estimating the Philanthropic Discount Rate 2020-07-03T16:58:54.771Z
How Much Leverage Should Altruists Use? 2020-01-07T04:25:31.492Z
How Can Donors Incentivize Good Predictions on Important but Unpopular Topics? 2019-02-03T01:11:09.991Z
Should Global Poverty Donors Give Now or Later? An In-Depth Analysis 2019-01-22T04:45:56.500Z
Why Do Small Donors Give Now, But Large Donors Give Later? 2018-10-28T01:51:56.710Z
Where Some People Donated in 2017 2018-02-11T21:55:09.730Z
Where I Am Donating in 2016 2016-11-01T04:10:02.389Z
Dedicated Donors May Not Want to Sign the Giving What We Can Pledge 2016-10-30T03:26:44.215Z
Altruistic Organizations Should Consider Counterfactuals When Hiring 2016-09-11T04:19:39.164Z
Why the Open Philanthropy Project Should Prioritize Wild Animal Suffering 2016-08-26T02:08:53.190Z
Evaluation Frameworks (or: When Importance / Neglectedness / Tractability Doesn't Apply) 2016-06-10T21:35:50.236Z
A Complete Quantitative Model for Cause Selection 2016-05-18T02:17:28.769Z
Quantifying the Far Future Effects of Interventions 2016-05-18T02:15:07.240Z
GiveWell's Charity Recommendations Require Taking a Controversial Stance on Population Ethics 2016-05-17T01:51:15.218Z
On Priors 2016-04-26T22:35:14.359Z
How Should a Large Donor Prioritize Cause Areas? 2016-04-25T20:46:38.304Z
Expected Value Estimates You Can (Maybe) Take Literally 2016-04-06T15:11:59.359Z
Are GiveWell Top Charities Too Speculative? 2015-12-21T04:05:07.675Z
More on REG's Room for More Funding 2015-11-16T17:31:40.493Z
Cause Selection Blogging Carnival Conclusion 2015-10-05T20:16:43.945Z
Charities I Would Like to See 2015-09-20T15:22:43.083Z
My Cause Selection: Michael Dickens 2015-09-15T23:29:40.701Z
On Values Spreading 2015-09-11T03:57:55.148Z
Some Writings on Cause Selection 2015-09-08T21:56:01.033Z
EA Blogging Carnival: My Cause Selection 2015-08-16T01:07:22.005Z
Why Effective Altruists Should Use a Robo-Advisor 2015-08-04T03:37:13.789Z
Stanford EA History and Lessons Learned 2015-07-02T03:36:56.688Z
How We Run Discussions at Stanford EA 2015-04-14T16:36:05.363Z
Meetup : Stanford THINK 2014-10-23T02:10:42.641Z

Comments

Comment by MichaelDickens on Interesting vs. Important Work - A Place EA is Prioritizing Poorly · 2022-07-29T04:22:43.315Z · EA · GW

If you are working with fast feedback loops, you can make things and then show people the things. If you're working with slow feedback loops, you have nothing to show and people don't really know what you're doing. The former intuitively seems much better if your goal is status-seeking (which is somewhat my goal in practice, even if ideally it shouldn't be).

Comment by MichaelDickens on Probability distributions of Cost-Effectiveness can be misleading · 2022-07-19T15:46:32.085Z · EA · GW

As an example, let's say you have three interventions with that distribution, and they turn out to be perfectly distributed, you have total cost=$11,010 and total effect=3 so, as a funder that cares about expected value, $3670 is the value you care about.

That's true if you spend money that way, but why would you spend money that way? Why would you spend less on the interventions that are more cost-effective? It makes more sense to spend a fixed budget. Given a 1/3 chance that the cost per life saved is $10, $1000, or $10,000, and you spend $29.67, then you save 1 life in expectation (= 1/3 * (29.67 / 10 + 29.67 / 1000 + 29.67 / 10,000)).

Not sure how useful it is as an intuition pump, but here is an even more extreme/absurd example: if there is a 0.001% chance that the cost is 0 and a 99.999% chance that the cost is $1T, mean(effect/cost) would be ∞

That's a feature, not a bug. If something has positive value and zero cost, then you should spend zero dollars/resources to invoke the effect infinitely many times and produce infinite value (with probability 0.00001).

Comment by MichaelDickens on Probability distributions of Cost-Effectiveness can be misleading · 2022-07-19T03:34:46.158Z · EA · GW

If opportunities have consistently diminishing returns (i.e., the second derivative of the returns function is negative), then the optimization problem is convex. Giving opportunities may or may not actually have diminishing returns.

Comment by MichaelDickens on Probability distributions of Cost-Effectiveness can be misleading · 2022-07-19T03:29:07.696Z · EA · GW

I could be missing something but this sounds wrong to me. I think the actual objective is mean(effect / cost). effect / cost is the thing you care about, and if you're uncertain, you should take the expectation over the thing you care about. mean(cost / effect) can give the wrong answer because it's the reciprocal of what you care about.

mean(cost) / mean(effect) is also wrong unless you have a constant cost. Consider for simplicity a case of constant effect of 1 life saved, and where the cost could be $10, $1000, or $10,000. mean(cost) / mean(effect) = $3670 per life saved, but the correct answer is 0.0337 lives saved per dollar = $29.67 per life saved.
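A minimal sketch of the arithmetic, using the same numbers as above (the only assumption is a uniform 1/3 probability on each cost):

    # Three equally likely costs (in dollars) to save one life.
    costs = [10, 1_000, 10_000]
    p = 1 / len(costs)

    # Misleading: average the costs, then divide -- gives $3,670 per life.
    mean_cost_per_life = sum(p * c for c in costs)  # 3670.0

    # What a fixed-budget funder cares about: expected lives per dollar.
    lives_per_dollar = sum(p / c for c in costs)  # ~0.0337
    cost_per_life = 1 / lives_per_dollar          # ~29.67

    # Sanity check: a fixed budget of $29.67 saves ~1 life in expectation.
    budget = cost_per_life
    expected_lives = sum(p * budget / c for c in costs)  # ~1.0
    print(mean_cost_per_life, cost_per_life, expected_lives)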

Comment by MichaelDickens on A philosophical review of Open Philanthropy’s Cause Prioritisation Framework · 2022-07-15T14:07:44.232Z · EA · GW

"the intuitively unacceptable implication that saving lives in richer countries would, other things being equal, be more valuable on the grounds that such people are richer and so better off."

FWIW my intuition is that this implication is pretty obviously correct—would I rather live 1 year of life as a wealthy person in the United States, or as a poor person in Kenya? Obviously I'd prefer the former.

The difference in welfare is almost always swamped by the difference in ability to improve people's lives, hence it's better to help the worse-off person. But all else equal, it would be better to extend the life of the better-off person.

Comment by MichaelDickens on CalmCobra's Shortform · 2022-07-14T16:46:58.522Z · EA · GW

I was not previously familiar with the term cash-on-cash, but it looks like you're saying you can earn a 20% return if you use ~5:1 leverage. In that case, sure, but that's a lot of leverage, and 20% is actually a pretty bad return at that much leverage. Historically, stocks at 5:1 leverage would have returned about 40%.
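For reference, a rough sketch of where a number like that comes from; the ~9% unlevered equity return and ~2% borrowing cost are illustrative assumptions, not figures from the thread:

    # Return on equity at leverage L, borrowing at borrow_rate:
    # levered = borrow_rate + L * (asset_return - borrow_rate)
    asset_return = 0.09  # assumed historical equity return
    borrow_rate = 0.02   # assumed cost of borrowing
    leverage = 5

    levered = borrow_rate + leverage * (asset_return - borrow_rate)
    print(f"{levered:.0%}")  # ~37%, in the ballpark of the 40% above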

Comment by MichaelDickens on CalmCobra's Shortform · 2022-07-12T21:16:38.868Z · EA · GW

I don't think you can get anything remotely close to 20% return because nothing ever reliably earns a 20% return. The real estate market in aggregate has historically performed about as well as equities with somewhat lower risk. An individual's real estate investments will be riskier than equities due to lack of diversification. For a good post on this, see https://rhsfinancial.com/2019/05/01/better-investment-stocks-real-estate/

Comment by MichaelDickens on Stephen Clare's Shortform · 2022-07-12T21:13:01.849Z · EA · GW

I have some feedback on this post that you should feel free to ignore.

In my experience, when you ask someone for feedback, there's about a 10% chance that they will bring up something really important that you missed. And you don't know who's going to notice the thing. So even if you've asked 9 people for feedback and none of them said anything too impactful, maybe the 10th will say something critically important.
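To put a number on that: if each reviewer independently has a ~10% chance of catching something important, the chance that at least one of n reviewers catches it grows quickly with n (independence is an assumption here):

    # P(at least one of n reviewers catches the issue),
    # assuming each catches it independently with probability 0.10.
    p_catch = 0.10
    for n in (1, 5, 10):
        print(n, round(1 - (1 - p_catch) ** n, 2))
    # 1 -> 0.1, 5 -> 0.41, 10 -> 0.65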

Comment by MichaelDickens on The Threat of Climate Change Is Exaggerated · 2022-07-11T16:36:39.546Z · EA · GW

Climate change is one of the most popular global concerns today. However, according to the principles of Effective Altruism, a philosophical and social movement that applies reason and evidence to philanthropy, climate change should not be our top global priority.

This phrasing makes it sound like you're saying, "Normal people would think climate change is important, but if you're a weirdo who holds these 'effective altruist' values, then you don't think it's important." But your actual claim is more like, "According to normal people's values, climate change isn't as important as they think because they're empirically wrong about how bad it will be." I would remove the reference to EA from the abstract, although I think it's fine to keep it in the first section as an explanation for why you started thinking about the impact of climate change.

Comment by MichaelDickens on Does biodiversity loss warrant being it’s own cause area? · 2022-07-11T16:23:36.521Z · EA · GW

I don't believe biodiversity is an important cause area, for basically two reasons:

  1. Species themselves are not inherently valuable. The experiences of individual conscious animals are what's valuable, and the welfare of wild animals is basically orthogonal to biodiversity, at least as far as anyone can tell. Biodiversity and wild animal welfare might be positively correlated, but I've never seen a good argument to that effect, and surely increasing biodiversity isn't the best way to improve wild animal welfare.
  2. You could perhaps argue that loss of biodiversity poses an existential threat to humanity, which matters more for the long-run future than wild animal welfare. But it seems like a very weak x-risk compared to things like AGI or nuclear war.

Most people who prioritize biodiversity (IMO) don't seem to understand what actually matters, and they act as if a species is a unit of inherent value, when it isn't—the unit of value is an individual's conscious experience. If you wanted to argue that biodiversity should be a high priority, you'd have to claim either that (1) increasing biodiversity is a particularly effective way of improving wild animal welfare or (2) loss of biodiversity constitutes a meaningful existential risk. I've never seen a good argument for either of those positions, but an argument might exist.

(Or you could argue that biodiversity is very important for some third reason, but it seems unlikely to me that there could be any third reason that's important enough to be worth spending EA resources on.)

Comment by MichaelDickens on Does biodiversity loss warrant being it’s own cause area? · 2022-07-11T16:16:51.777Z · EA · GW

thinking we could reliably plan and run them when we don't even know most species involved in them

This argument seems symmetric to me. If you support decreasing biodiversity, you're claiming that we can reliably decrease it. If you support increasing diversity, you're claiming that we can reliably increase it. So the parent comment and OP are both making the same assumption—that it's possible in principle to reliably affect biodiversity one way or the other. (Which I think is true—we have a pretty good sense that certain activities affect biodiversity, eg cutting down rainforests decreases it.)

Comment by MichaelDickens on Should I invest my runway? · 2022-07-09T18:26:27.313Z · EA · GW

If you want to take as little risk as possible, you're right that cash is not the safest investment because it's vulnerable to inflation. It would be safer on a real basis to invest in something like Harry Browne's Permanent Portfolio, which is 25% cash, 25% stocks, 25% Treasury bonds, 25% gold. Just make sure your investments are liquid enough that you can sell them quickly if you need to.
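A minimal sketch of maintaining that allocation; the dollar amounts are hypothetical, and this ignores taxes and transaction costs:

    # Target weights for Harry Browne's Permanent Portfolio.
    targets = {"cash": 0.25, "stocks": 0.25, "treasuries": 0.25, "gold": 0.25}

    # Hypothetical current holdings, in dollars.
    holdings = {"cash": 20_000, "stocks": 35_000, "treasuries": 22_000, "gold": 23_000}

    total = sum(holdings.values())
    # Dollar trade per sleeve to get back to 25% each (positive = buy).
    trades = {k: targets[k] * total - v for k, v in holdings.items()}
    print(trades)  # {'cash': 5000.0, 'stocks': -10000.0, 'treasuries': 3000.0, 'gold': 2000.0}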

Comment by MichaelDickens on Should I invest my runway? · 2022-07-09T18:23:40.964Z · EA · GW

IMO you should be prepared for the stock market to fall 50%.

Comment by MichaelDickens on A Preliminary Model of Mission-Correlated Investing · 2022-07-02T21:45:39.413Z · EA · GW

It's the same as the standard notion in that you're hedging something. It's different in that the thing you're hedging isn't a security. If you wanted to, you could talk about it in terms of the beta between the hedge and the mission target.

Comment by MichaelDickens on The Forum should consider anonymizing names · 2022-06-29T15:10:29.449Z · EA · GW

I use the LessWrong anti-kibitzer to hide names. All you have to do to make it work on the EA Forum is change the URL from lesswrong.com to forum.effectivealtruism.org.

Comment by MichaelDickens on Examples of someone admitting an error or changing a key conclusion · 2022-06-27T17:40:11.289Z · EA · GW

A personal example: I wrote Should Global Poverty Donors Give Now or Later? and then later realized my approach was totally wrong.

Comment by MichaelDickens on Announcing the launch of Open Phil's new website · 2022-06-23T20:37:43.895Z · EA · GW

To piggyback on this, "with the resources available to us" is tautologically true. The mission statement would have identical meaning if it were simply "Our mission is to help others as much as we can."

Taking a step back, I don't really like the concept of mission statements in general. I think they almost always communicate close to zero information, and organizations shouldn't have them.

Comment by MichaelDickens on On Deference and Yudkowsky's AI Risk Estimates · 2022-06-23T19:27:14.286Z · EA · GW

I read this post kind of quickly, so apologies if I'm misunderstanding. It seems to me that this post's claim is basically:

  1. Eliezer wrote some arguments about what he believes about AI safety.
  2. People updated toward Eliezer's beliefs.
  3. Therefore, people defer too much to Eliezer.

I think this is dismissing a different (and much more likely IMO) possibility, which is that Eliezer's arguments were good, and people updated based on the strength of the arguments.

(Even if his recent posts didn't contain novel arguments, the arguments still could have been novel to many readers.)

Comment by MichaelDickens on What are EA's biggest legible achievements in x-risk? · 2022-06-15T17:31:39.696Z · EA · GW

That being said, I don't think an Iran-Israel exchange would constitute an existential risk, unless it then triggered a global nuclear war.

I wouldn't sell yourself short. IMO, any nuclear exchange would dramatically increase the probability of a global nuclear war, even if the probability is still small by non-xrisk standards.

Comment by MichaelDickens on Steering AI to care for animals, and soon · 2022-06-14T18:16:27.592Z · EA · GW

Some people have told me (probably as a joke) that the best way to improve wild animal welfare is to invent AGI and let the AGI figure it out.

I believe this, not as a joke. But I do agree with you that this requires solving the broader alignment problem and also ensuring that the AGI cares about all sentient beings.

Comment by MichaelDickens on The Strange Shortage of Moral Optimizers · 2022-06-07T16:24:53.656Z · EA · GW

Viewed in this light, the absence of any competing explicitly beneficentrist movements is striking. EA seems to be the only game in town for those who are practically concerned to promote the general good in a serious, scope-sensitive, goal-directed kind of way.

Before EA, I think there were at least two such movements:

  1. a particular subset of the animal welfare movement that cared about effectiveness, e.g., focusing on factory farming over other animal welfare issues explicitly because it's the biggest source of harm
  2. AI safety

Both are now broadly considered to be part of the EA movement.

Comment by MichaelDickens on How to determine distribution parameters from quantiles · 2022-05-31T16:04:55.947Z · EA · GW

Thank you for this! I had been trying to solve this exact problem recently, and I wasn't sure if I was doing it right. And this spreadsheet is much more convenient than the way I was doing it.

Comment by MichaelDickens on How to determine distribution parameters from quantiles · 2022-05-30T22:27:13.610Z · EA · GW

The hyperlink on the word "this" (in both instances) is broken. I don't see how to get to the calculator.

Comment by MichaelDickens on Agrippa's Shortform · 2022-05-25T22:48:39.465Z · EA · GW

Eliezer said something similar, and he seems similarly upset about it: https://twitter.com/ESYudkowsky/status/1446562238848847877

(FWIW I am also upset about it, I just don't know that I have anything constructive to say)

Comment by MichaelDickens on MichaelDickens's Shortform · 2022-05-15T15:09:04.984Z · EA · GW

Looking at the Decade in Review, I feel like voters systematically over-rate cool but ultimately unimportant posts, and systematically under-rate complicated technical posts that have a reasonable probability of changing people's actual prioritization decisions.

Example: "Effective Altruism is a Question (not an ideology)", the #2 voted post, is a very cool concept and I really like it, but ultimately I don't see how it would change anyone's important life decisions, so I think it's overrated in the decade review.

"Differences in the Intensity of Valenced Experience across Species", the #35 voted post (with 1/3 as many votes as #2), has a significant probability of changing how people prioritize helping different species, which is very important, so I think it's underrated.

(I do think the winning post, "Growth and the case against randomista development", is fairly rated because if true, it suggests that all global-poverty-focused EAs should be behaving very differently.)

This pattern of voting probably happens because people tend to upvote things they like, and a post that's mildly helpful for lots of people is easier to like than a post that's very helpful for a smaller number of people.

(For the record, I enjoy reading the cool conceptual posts much more than the complicated technical posts.)

Comment by MichaelDickens on What are some high-EV but failed EA projects? · 2022-05-13T22:22:41.749Z · EA · GW

Thanks, I hadn't seen this previous post!

Comment by MichaelDickens on What are some high-EV but failed EA projects? · 2022-05-13T16:10:19.936Z · EA · GW

I will give an example of one of my own failed projects: I spent a couple of months writing Should Global Poverty Donors Give Now or Later? It's an important question, and my approach was at least sort of correct, but it had some flaws that made it pretty much useless.

Comment by MichaelDickens on Why Helping the Flynn Campaign is especially useful right now · 2022-05-10T00:16:00.940Z · EA · GW

How quickly can campaigns spend money? Can they reasonably make use of new donations within less than 8 days?

Comment by MichaelDickens on New substack on utilitarian ethics: Good Thoughts · 2022-05-09T22:31:32.735Z · EA · GW

Sounds plausible. Some data: The PhilPapers survey found that 31% of philosophers accept or lean toward consequentialism, vs. 32% deontology and 37% virtue ethics. The ratios are about the same if instead of looking at all philosophers, you look at just applied ethicists or normative ethicists.

I don't know of any surveys on normative views of philosophy-adjacent people, but I expect that (e.g.) economists lean much more consequentialist than philosophers. Not sure what other fields one would consider adjacent to philosophy. Maybe quant finance?

Comment by MichaelDickens on How to optimize your taxes as a donor in the US: donate appreciated securities, make a donor-advised fund, and bunch your donations · 2022-05-02T17:33:51.059Z · EA · GW

You could do something very similar by having one person short a liquid security with low borrowing costs (like SPY maybe) and have the other person buy it.

The buyer will tend to make more money than the short seller, so instead you could find a pair of securities with similar expected returns (e.g., SPY and EFA) and have each person buy one and short the other.

You could also buy one security and short another without there being a second person. But I don't think this is an efficient use of capital—it's better to just buy something with good expected return.
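To illustrate the exposures in that pairing (the returns here are made-up numbers, just to show the symmetry):

    # Person A buys SPY and shorts EFA; person B does the reverse.
    spy_return = 0.08  # hypothetical
    efa_return = 0.05  # hypothetical

    person_a = spy_return - efa_return  # long SPY, short EFA: +3%
    person_b = efa_return - spy_return  # long EFA, short SPY: -3%

    # The pair nets to zero between the two people, and if the two
    # securities have similar expected returns, each person's expected
    # return is roughly zero.
    print(person_a, person_b, person_a + person_b)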

Comment by MichaelDickens on Kyle Lucchese's Shortform · 2022-04-28T15:44:14.545Z · EA · GW

Is it possible to do the most good while retaining current systems (especially economic)? What in these systems needs to be transformed?

This question is already pretty heavily researched by economists. There are some known answers (immigration liberalization would be very good) and some unknowns (how much is the right amount of fiscal stimulus in recessions?). For the most part, I don't think there's much low-hanging fruit in terms of questions that matter a lot but haven't been addressed yet. Global Priorities Institute does some economics research, IMO that's the best source of EA-relevant and neglected questions of this type.

Comment by MichaelDickens on FTX/CEA - show us your numbers! · 2022-04-20T20:26:55.494Z · EA · GW

As a positive example, 80,000 Hours does relatively extensive impact evaluations. The most obvious limitation is that they have to guess whether any career changes are actually improvements, but I don't see how to fix that—determining the EV of even a single person's career is an extremely hard problem. IIRC they've done some quasi-experiments but I couldn't find them from quickly skimming their impact evaluations.

Comment by MichaelDickens on FTX/CEA - show us your numbers! · 2022-04-20T20:04:36.841Z · EA · GW

A related thought: If an org is willing to delay spending (say) $500M/year due to reputational/epistemic concerns, then it should easily be willing to pay $50M to hire top PR experts to figure out the reputational effects of spending at different rates.

(I think delays in spending by big orgs are mostly due to uncertainty about where to donate, not about PR. But off the cuff, I suspect that EA orgs spend less than the optimal amount on strategic PR (as opposed to "un-strategic PR", e.g., doing whatever the CEO's gut says is best for PR).)

Comment by MichaelDickens on What “pivotal” and useful research ... would you like to see assessed? (Bounty for suggestions) · 2022-04-20T19:08:59.067Z · EA · GW

The link to "Unjournal" is broken, it goes to https://forum.effectivealtruism.org/posts/kftzYdmZf4nj2ExN7/bit.ly/eaunjournal instead of bit.ly/eaunjournal.

Comment by MichaelDickens on Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"? · 2022-04-20T17:48:27.767Z · EA · GW

FWIW my intuition is that if you have a name for a thing, it means the opposite of that is the default. If there's a special term for "longtermist", that means people are not longtermists by default (which I think is basically true—most people are not longtermists, and longtermism is kind of a weird position (although I do happen to agree with it)). Sort of like how EAs are called EAs, but there's no word for people who aren't EAs, because being not-EA is the default.

Comment by MichaelDickens on A Complete Quantitative Model for Cause Selection · 2022-04-16T21:26:33.209Z · EA · GW

Thanks for the heads up, it should be working again now.

Comment by MichaelDickens on "Long-Termism" vs. "Existential Risk" · 2022-04-08T19:57:37.008Z · EA · GW

FWIW I would not be offended if someone said Scott's writing is better than mine. Scott's writing is better than almost everyone's.

Your comment inspired me to work harder to make my writings more Scott-like.

Comment by MichaelDickens on "Long-Termism" vs. "Existential Risk" · 2022-04-08T19:51:40.886Z · EA · GW

Yeah, the two things are orthogonal as far as I can see. The person-affecting view is perfectly consistent with either a zero or a nonzero pure time preference.

Comment by MichaelDickens on "Long-Termism" vs. "Existential Risk" · 2022-04-08T17:49:09.857Z · EA · GW

I don't know of any EAs or philosophers with a nonzero pure time preference, but it's pretty common to believe that creating new lives is morally neutral. Someone who believes this might plausibly be a short-termist. I have a few friends who are short-termist for that reason.

Comment by MichaelDickens on What general financial advising advice would you give to EAs? · 2022-04-08T16:48:51.019Z · EA · GW

In addition to what Brendon said, I'd say that finance best practices for EAs are mostly the same as best practices for anyone else. I like the Bogleheads wiki as a good resource for beginners.

IMO you can get most of the benefits of investing just by following best practices. If you want to take it further, you can follow some of the tips in the articles Brendon linked, or read my post Asset Allocation and Leverage for Altruists with Constraints, which gives my best guess as to how EAs should invest differently than most people.

Comment by MichaelDickens on Liars · 2022-04-06T23:55:04.410Z · EA · GW

The most prominent example I've seen recently is Frank Abagnale, the real-life protagonist of the supposedly nonfiction movie Catch Me If You Can. He basically fabricated his entire life story, and (AFAICT) he still makes a living off paid appearances where he tells that story, even though it's pretty well documented that he's lying about almost everything.

Comment by MichaelDickens on A Comparison of Donor-Advised Fund Providers · 2022-04-04T15:58:22.692Z · EA · GW

Thanks for pointing this out! I updated the post.

Comment by MichaelDickens on Why 80 000 hours should recommend more people become drug lords · 2022-04-01T15:12:56.997Z · EA · GW

I haven't drug-lorded personally, but I've watched Breaking Bad, and my understanding of the general process is

customers buy drugs in cash -> street dealers kick up to managers -> managers kick up to drug lords

so the drug lords end up accumulating piles of cash. Hard to convert cash into crypto so I think it would be better if CEA could directly receive cash.

Maybe a drug lord mega-donor could donate a storage unit to CEA, and that storage unit happens to be filled with cash? That's probably better than a direct cash donation, because the drug lord would have to report the cash donation on their taxes.

Comment by MichaelDickens on Why 80 000 hours should recommend more people become drug lords · 2022-04-01T02:00:21.608Z · EA · GW

EA-aligned drug lord can solve this problem by donating colossal wonga to charity.

How capable are charities at accepting large cash donations? If this is an issue, maybe CEA could serve as an intermediary to redistribute drug lord cash to other charities, I know they've done similar things for e.g. helping new EA charities that aren't yet officially registered.

Comment by MichaelDickens on Is misinformation a serious problem, and is it tractable? · 2022-03-28T16:19:48.688Z · EA · GW

This isn't a particularly deep or informed take, but my perspective on it is that the "misinformation problem" is similar to what Scott called the cowpox of doubt:

What annoys me about the people who harp on moon-hoaxing and homeopathy – without any interest in the rest of medicine or space history – is that it seems like an attempt to Other irrationality.

It’s saying “Look, over here! It’s irrational people, believing things that we can instantly dismiss as dumb. Things we feel no temptation, not one bit, to believe. It must be that they are defective and we are rational.”

But to me, the rationality movement is about Self-ing irrationality.

It is about realizing that you, yes you, might be wrong about the things that you’re most certain of, and nothing can save you except maybe extreme epistemic paranoia.

10 years ago, it was popular to hate on moon-hoaxing and homeopathy, now it's popular to hate on "misinformation". Fixating on obviously-wrong beliefs is probably counterproductive to forming correct beliefs on important and hard questions.

Comment by MichaelDickens on A Forum post can be short · 2022-03-25T02:46:53.192Z · EA · GW

Yeah I feel the same way, I wonder if there's a good fix for that. Given the current setup, long effortposts are usually only of interest to a small % of people, so they don't get as many upvotes.

Comment by MichaelDickens on A Forum post can be short · 2022-03-23T16:10:10.913Z · EA · GW

I know it's a joke, but if you want to build status, short posts are much better than long posts.

Which is more impressive: the millionth 200-page dissertation published this year, or John Nash's 10-page dissertation?

Which is more impressive: the latest complicated math paper, or Conway & Soifer's two-word paper?

Comment by MichaelDickens on A Forum post can be short · 2022-03-22T15:55:11.576Z · EA · GW

I like when writing advice is self-demonstrating.

Comment by MichaelDickens on Seeking feedback on new EA-aligned economics paper · 2022-03-19T04:47:14.816Z · EA · GW

In response to this comment, I wrote a handy primer: https://mdickens.me/2022/03/18/altruistic_investors_care_about_aggregate_altruistic_portfolio/

Comment by MichaelDickens on A Comparison of Donor-Advised Fund Providers · 2022-03-10T19:08:35.099Z · EA · GW

That's a complicated question, but in short, if you believe that there will be better donation opportunities in the future, you might use a DAF.