Posts

The Future of Earning to Give 2019-10-13T21:28:54.012Z · score: 64 (32 votes)

Comments

Comment by petermccluskey on The Future of Earning to Give · 2019-10-25T00:14:02.294Z · score: 1 (1 votes) · EA · GW

With almost all of those proposed intermediate goals, it's substantially harder to evaluate whether the goal will produce much value. In most cases, it will be tempting to define the intermediate goal in a way that is easy to measure, even when doing so weakens the connection between the goal and health.

E.g. good biomarkers of aging would be very valuable if they measure what we hope they measure. But your XPrize link suggests that people will be tempted to use expert acceptance in place of hard data. The benefits of biomarkers have been frequently overstated.

It's clear that most donors want prizes to have a high likelihood of being awarded fairly soon. But I see that desire as generally unrelated to a desire for maximizing health benefits. I'm guessing it indicates that donors prefer quick results over high-value results, and/or that they overestimate their knowledge of which intermediate steps are valuable.

A $10 million aging prize from an unknown charity might have serious credibility problems, but I expect that a $5 billion prize from the Gates Foundation or OpenPhil would be fairly credible - they wouldn't actually offer the prize without first getting some competent researchers to support it, and they'd likely first try out some smaller prizes in easier domains.

Comment by petermccluskey on The Future of Earning to Give · 2019-10-14T18:39:00.317Z · score: 17 (5 votes) · EA · GW

I agree with most of your comment.

>Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.

That seems like almost the opposite of what the 80k post says. It says the people who get hired are not very replaceable. But it also appears to say that applicants whom EA orgs evaluate as average are 2 or more standard deviations less productive than the ones who get hired, which seems to imply that they're pretty replaceable.

Comment by petermccluskey on The Future of Earning to Give · 2019-10-14T18:04:20.332Z · score: 3 (2 votes) · EA · GW

Yes, large donors more often reach diminishing returns on each recipient than do small donors. The one-charity heuristic is mainly appropriate for people who are donating $50k per year or less.

Comment by petermccluskey on The Future of Earning to Give · 2019-10-14T18:02:43.281Z · score: 3 (2 votes) · EA · GW

Yes. The post Drowning children are rare seemed to be saying that OPP was capable of making most EA donations unimportant. I'm arguing that we should reject that conclusion, even if many of that post's points are correct.

Comment by petermccluskey on X-risk dollars -> Andrew Yang? · 2019-10-13T02:52:02.261Z · score: 7 (2 votes) · EA · GW

They may not have budged climate scientists, but there are other ways they may have influenced policy. Did they (or other partisans) alter the outcomes of Washington Initiative 1631 or 732? That seems hard to evaluate.

Comment by petermccluskey on [Link] The Case for Charter Cities Within the EA Framework (CCI) · 2019-09-24T19:12:39.146Z · score: 7 (6 votes) · EA · GW

Most of their analysis looks right.

But they implicitly assume a 100% chance of generating a charter city with better institutions than the host country, given a certain amount of effort on their part. I'd be eagerly donating to them if I believed that. But I expect most countries have political problems which will cause them to reject any charter city effort that comes from outside their country.

I'll estimate less than a 5% chance that any US-based charity will catalyze the creation of a charter city in another country, and if such a charter city is created, I'll estimate maybe a 50% chance of it having better institutions than the host country. So I'm dividing their expected impact estimates by about 50 or 100.
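Making that discount explicit (a rough sketch; the probabilities are my guesses, not CCI's):

```latex
\[
E[\text{impact}] \approx p_{\text{city}} \cdot p_{\text{better}} \cdot E[\text{impact} \mid \text{success}]
\]
% With p_city ~ 0.02-0.04 and p_better ~ 0.5, the joint probability
% is ~0.01-0.02, so estimates that implicitly assume both
% probabilities are 1 get divided by roughly 50-100.
```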

Comment by petermccluskey on Why were people skeptical about RAISE? · 2019-09-06T12:12:48.087Z · score: 3 (3 votes) · EA · GW

I meant something like "good enough to look like a MIRI researcher, but unlikely to turn out to be more productive than the average MIRI researcher". I guess when I wrote that I was feeling somewhat pessimistic about MIRI's hiring process. Under optimistic assumptions about how well MIRI distinguishes good from bad job applicants, I'd expect that MIRI wouldn't hire RAISE graduates.

Comment by petermccluskey on Are we living at the most influential time in history? · 2019-09-05T18:46:20.403Z · score: 6 (4 votes) · EA · GW

I agree with most of your reasoning, but disagree significantly about this:

>The case for focusing on AI safety and existential risk reduction is much weaker if you live in a simulation than if you don’t.

It's true that a pure utilitarian would expect about an order of magnitude less utility from x-risk reduction if we have a 90% chance of being in a simulation compared to a zero chance of being in a simulation. But the pure utilitarian case for x-risk reduction isn't very sensitive to an order of magnitude change in utility, since the expected utility seems many orders of magnitude larger than what's needed to convince a pure utilitarian to focus on x-risks.
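To spell out that order-of-magnitude claim (a minimal sketch, assuming x-risk reduction inside a simulation produces negligible utility):

```latex
\[
E[U] = P(\neg \text{sim}) \cdot U_{\text{x-risk}} + P(\text{sim}) \cdot U_{\text{sim}}
\approx (1 - 0.9) \cdot U_{\text{x-risk}} = 0.1 \, U_{\text{x-risk}}
\]
% One order of magnitude below the P(sim) = 0 case -- small compared
% to the many orders of magnitude of headroom in the utilitarian
% case for x-risk reduction.
```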

From a more selfish perspective, being in a simulation increases my desire to be involved in events that are interesting to the simulators, in case such people get simulated in more detail.

I'm somewhat concerned that being influenced much by the simulation hypothesis increases the risk that the simulation will be shut down, which seems like weak evidence for being cautious about altering my behavior much in response to it.

For these reasons, and WilliamKiely's comments about priors, I want to treat HoH as more than 1% likely.

Comment by petermccluskey on Why were people skeptical about RAISE? · 2019-09-04T13:29:09.535Z · score: 18 (8 votes) · EA · GW

>anyone capable of significantly contributing wouldn't need an on-ramp

That's approximately why I was skeptical, although I want to frame it a bit differently. I expect that the most valuable contributions to AI safety will involve generating new paradigms, asking questions that nobody has yet thought to ask, or something like that. It's hard to teach the skills that are valuable for that.

I got the impression that RAISE was mostly oriented toward producing people who become typical MIRI researchers. Even if MIRI's paradigm is the right one, I expect that MIRI needs atypically good researchers, and would only get minor benefits from someone who is struggling to become a typical MIRI researcher.


Comment by petermccluskey on Age-Weighted Voting · 2019-07-12T19:25:35.245Z · score: 33 (18 votes) · EA · GW

War is more likely when the population has a higher fraction of young men (e.g. see Angry Young Men Are Making the World Less Stable). That doesn't quite say that young men vote more for war, but it's suggestive.

More war could easily overwhelm any benefits from weighted voting.

Comment by petermccluskey on Increase Impact by Waiting for a Recession to Donate or Invest in a Cause. · 2019-06-23T15:43:40.085Z · score: 3 (2 votes) · EA · GW

IPOs are strongly dependent on an expanding economy. Cryptocurrency bubbles are somewhat more likely in an expanding economy.

The impact of IPOs and Bitcoin on other markets is much smaller than the impact of the economy on IPOs and Bitcoin.

Comment by petermccluskey on Increase Impact by Waiting for a Recession to Donate or Invest in a Cause. · 2019-06-21T15:40:05.954Z · score: 6 (5 votes) · EA · GW

I'll guess that EA giving is a bit more sensitive to the economy than other giving, because a disproportionate amount of EA giving comes from IPO-related wealth and cryptocurrency bubbles.

Comment by petermccluskey on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-25T21:43:11.839Z · score: 3 (2 votes) · EA · GW

No, I expected that no rigorous research had been done on NLP as of 2014, and I don't know how rigorous the more recent research has been.

Comment by petermccluskey on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-25T01:34:18.444Z · score: 3 (2 votes) · EA · GW

I don't know whether it has been published. I heard it from Rick Schwall (http://shfhs.org/aboutus.html).

Comment by petermccluskey on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-12T21:24:27.926Z · score: 12 (6 votes) · EA · GW

I've contributed small amounts of money to MAPS, but I haven't been thinking of those as EA donations.

My doubts overlap a fair amount with those of Scott Alexander, but I'll focus on somewhat different reasoning which led me there.

It sounds like MAPS has been getting impressive results, and MAPS would likely qualify as an EA charity if FDA approval were the main obstacle to extending those results to the typical person who seeks help with PTSD. However, I suspect there are other important obstacles.


I know a couple of people, who I think consider themselves EAs, who have been trying to promote an NLP-based approach to treating PTSD, which reportedly has a higher success rate than MAPS has reported. The basic idea behind it has been around for years, without spreading very widely, and without much interest from mainstream science.

Maybe the reports I hear involve an improved version of the basic technique, and it will take off as soon as the studies based on the new version are published.

Or maybe the glowing reports are based on studies that attracted both therapists and patients who were unusually well suited for NLP, and don't generalize to random therapists and random PTSD patients. And maybe the MAPS study has similar problems.

Whatever the case is there, the ease with which I was able to stumble across an alternative to psychedelics that sounds about equally promising is some sort of evidence against the hypothesis that there's a shortage of promising techniques to treat PTSD.

I suspect there are important institutional problems in getting mental health professionals to adopt techniques that provide quick fixes. I doubt it's a complete coincidence that the number of visits required for successful therapy happens to resemble a number that maximizes revenue per patient.

If that were simply a conspiracy of medical professionals, and patients were eager to work around them, I'd be vaguely hopeful of finding a way to do so. But I'm under the impression that patients have a weak tendency to contribute to the problem, by being more likely to recommend to their friends a therapist whom they see for a long time than a therapist whom they stop seeing after a month because they were cured that fast. And I don't see lots of demand for alternative routes to finding therapists who have good track records.

None of these reasons for doubt is quite sufficient by itself to decide that MAPS isn't an EA charity, but they outline at least half of my intuitions for feeling somewhat pessimistic about this cause area.

Comment by petermccluskey on Aligning Recommender Systems as Cause Area · 2019-05-10T23:48:51.872Z · score: 16 (7 votes) · EA · GW

I suspect that principal–agent problems are the biggest single obstacle to alignment. That leads me to suspect it's less tractable than you indicate.

I'm interested in what happened with Netflix. Ten years ago their recommendation system seemed focused almost exclusively on maximizing user ratings of movies. That dramatically improved my ability to find good movies.

Yet I didn't notice many people paying attention to those benefits. Netflix has since then shifted toward less aligned metrics. I'm less satisfied with Netflix now, but I'm unclear what other users think of the changes.

Comment by petermccluskey on Should we consider the sleep loss epidemic an urgent global issue? · 2019-05-06T15:27:31.067Z · score: 7 (4 votes) · EA · GW

Sleep loss is an important problem, but it's unclear whether any charity should focus on it directly.

The problem of driving while sleep-deprived will likely be solved by robocars more than by any altruistic efforts.

The rest of the problem seems better tackled by focusing more on the stresses that cause sleep problems, and by relatively decentralized efforts to shift our cultures to be more sleep-friendly.

Sleep is something to keep in mind when asking whether EAs should donate to mental health charities or to meditation charities such as Monastic Academy. I'm very uncertain whether these charities should be considered effective enough to be EA causes.

Comment by petermccluskey on Why does EA use QALYs instead of experience sampling? · 2019-04-25T14:30:33.244Z · score: 16 (6 votes) · EA · GW

>For anyone who's had some experience with depression or anxiety, as well as with "some problems walking about," it should be obvious that moderate depression or anxiety are (much) worse than moderate mobility problems, pound for pound.

That's obvious for rich people, but not at all obvious for someone who risks hunger as a result of mobility problems.

Comment by petermccluskey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-11T16:37:52.375Z · score: 14 (6 votes) · EA · GW

I assume that by "cash-flow positive", you mean supported by fees from workshop participants?

I don't consider that to be a desirable goal for CFAR.

Habryka's analysis focuses on CFAR's track record. But CFAR's expected value comes mainly from possible results that aren't measured by that track record.

My main reason for donating to CFAR is the potential for improving the rationality of people who might influence x-risks. That includes mainstream AI researchers who aren't interested in the EA and rationality communities. The ability to offer them free workshops seems important to attracting the most influential people.

Comment by petermccluskey on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-01T20:01:50.476Z · score: 8 (5 votes) · EA · GW

>which means that what everyone else is doing doesn't matter all that much

Earning to give still matters a moderate amount. That's mostly what I'm doing. I'm saying that the average EA should start with the outside view that they can't do better than earning to give, and then attempt some more difficult analysis to figure out how they compare to the average.

And it's presumably possible to matter more than the average earning to give EA, by devoting above-average thought to vetting new charities.

Comment by petermccluskey on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-01T19:51:43.167Z · score: 21 (9 votes) · EA · GW

I'm unimpressed by the arguments for random funding of research proposals. The problems with research funding are mostly due to poor incentives, rather than people being unable to do much better than random guessing. EA organizations don't have ideal incentives, and may be on the path to unreasonable risk-aversion, but they still have a fairly sophisticated set of donors setting their incentives, and don't yet appear to be particularly risk-averse or credential-oriented.

Unless something has changed in the last few years, there are still plenty of startups with plausible ideas that don't get funded by Y Combinator or anything similar. Y Combinator clearly evaluates a lot more startups than I'm willing or able to evaluate, but it's not obvious that they're being less selective than I am about which ones they fund.

I mentioned Nick Bostrom and Eric Drexler because they're widely recognized as competent. I didn't mean to imply that we should focus more funding on people who are that well known - they do not seem to be funding constrained now.

Let me add some examples of funding I've done that better characterize what I'm aiming for in charitable donations (at the cost of being harder for many people to evaluate):

My largest donations so far have been to CFAR, starting in early 2013, when their track record was rather weak, and almost unknown outside of people who had attended their workshops. That was based largely on impressions of Anna Salamon that I got by interacting with her (for reasons that were only marginally related to EA goals).

Another example is Aubrey de Grey. I donated to the Methuselah Mouse Prize for several years starting in 2003, when Aubrey had approximately no relevant credentials beyond having given a good speech at the Foresight Institute and posted a similar paper on his little-known website.

Also, I respected Nick Bostrom and Eric Drexler fairly early in their careers. Not enough to donate to their charitable organizations at their very beginning (I wasn't actively looking for effective charities before I heard of GiveWell). But enough that I bought and read their first books, primarily because I expected them to be thoughtful writers.

Comment by petermccluskey on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-31T17:35:27.126Z · score: 40 (14 votes) · EA · GW

Speaking for why I haven't donated, this is close to the key question:

>Then the question is (roughly) whether, given £60,000, it makes more sense to fund 1 researcher who's cleared the EA hiring bar, or 10 who haven't (and are in D).

My intuition has been that if those 10 are chosen at random, then I'm moderately confident that it's better to fund the 1 well-vetted researcher.

EA is talent-constrained in the sense that it needs more people like Nick Bostrom or Eric Drexler, but much less in the sense of needing more people who are average EAs to do direct EA work.

I've done some angel investing in startups. I initially took an approach of trying to fund anyone who has a good idea. But that worked poorly, and I've shifted, as good VCs advise, to looking for signs of unusual competence in founders. (Alas, I still don't have much reason to think I'm good at angel investing.) And evaluating founders' competence feels harder than evaluating a business idea, so I'm not willing to do it very often.

I use a similar approach with donating to early-stage charities, expecting to see many teams with decent ideas, but expecting the top 5% to be more than 10 times as valuable as the average. And I'm reluctant to evaluate more pre-track-record projects than I'm already doing.

With the hotel, I see a bunch of little hints that it's not worth my time to attempt an in-depth evaluation of the hotel's leaders. E.g. the focus on low rent, which seems like a popular meme among average and below-average EAs in the Bay Area, yet the EAs whose judgment I most respect act as if rent is a relatively small issue.

I can imagine that the hotel attracts better than random EAs, but it's also easy to imagine that it selects mainly for people who aren't good enough to belong at a top EA organization.

Halffull has produced a better argument for the EA Hotel, but I find it somewhat odd that he starts with arguments that seem weak to me, and only in the middle gets around to claims that are relevant to whether the hotel is better than a random group of EAs.

Also, if donors fund any charity that has a good idea, I'm a bit concerned that this will attract a larger number of low-quality projects, much like the quality of startups declined near the peak of the dot-com bubble, when investors threw money at startups without much regard for competence.

Comment by petermccluskey on Bayesian Investor proposes you can predictably beat the market by ~3% following a simple and easy strategy · 2019-03-24T17:32:32.840Z · score: 5 (4 votes) · EA · GW

Here are a few examples of strategies that look (or looked) equally plausible, from the usually thoughtful blog of my fellow LessWronger Colby Davis.

This blog post recommends:
- emerging markets, which overlaps a fair amount with my advice
- put-writing, which sounds reasonable to me, but he managed to pick a bad time to advocate it
- preferred stock, which looks appropriate today for more risk-averse investors, but which looked overpriced when I wrote my post.

This post describes one of his failures. Buying XIV was almost a great idea. It was a lot like shorting VXX, and shorting VXX is in fact a good idea for experts who are cautious enough not to short too much (alas, the right amount of caution is harder to know than most people expect). I expect the rewards in this area to go only to those who accept hard-to-evaluate risks.

This post has some strategies that require more frequent trading. I suspect they're good, but I haven't given them enough thought to be confident.

Comment by petermccluskey on Bayesian Investor proposes you can predictably beat the market by ~3% following a simple and easy strategy · 2019-03-17T17:32:00.175Z · score: 3 (3 votes) · EA · GW

Hi, I'm Bayesian Investor.

I doubt that following my advice would be riskier than the S&P 500 - the low volatility funds reduce the risk in important ways (mainly by moving less in bear markets) that roughly offset the features which increase risk.

It's rational for most people to ignore my advice, because there's lots of other (somewhat conflicting) advice out there that sounds equally plausible to most people.

I've got lots of evidence about my abilities (I started investing as a hobby in 1980, and it's been my main source of income for 20 years). But I don't have an easy way to provide much evidence of my abilities in a single blog post.

Comment by petermccluskey on -0.16 ROI for Weight Loss Interventions (Optimistic) · 2019-03-17T16:35:42.379Z · score: 1 (1 votes) · EA · GW

I'm a little confused by this reply. Did you think I was complaining that you overestimated the costs of weight loss? Let me emphasize that I was complaining about the actual resources devoted to weight loss, not your estimates of them. I'll guess that you underestimated those costs, by focusing on money spent rather than trying to evaluate the psychological costs.

My main point is that we should focus more on getting people to switch from typical weight loss approaches to ones that are easier and more effective.

I'm unsure what to infer from your weight satisfaction evidence. It might mean that some people notice that obesity is harming them (via sleep apnea? romantic problems?) and that's what causes them to worry. Or it might mean they're just more responsive to peer pressure, and it's the peer pressure, not the obesity, that's harmful.

Comment by petermccluskey on -0.16 ROI for Weight Loss Interventions (Optimistic) · 2019-03-11T19:25:00.385Z · score: 4 (3 votes) · EA · GW

I suspect you underestimate the cost of obesity.

But there's something seriously wrong with the cost of the typical weight loss approach, and your ROI estimate might be close to the right answer for that.

I believe it's possible to adopt a much better than average approach to weight loss, by focusing more on switching to healthier foods (based on the Satiety Index, or on high fiber content), and/or some form of intermittent fasting.

Comment by petermccluskey on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T16:52:26.188Z · score: 5 (3 votes) · EA · GW

I expect that good software engineers are more likely to figure out for themselves how to be more efficient than they are to figure out how to increase their work quality. So it's not obvious what to infer from "it's harder for an employer to train people to work faster" - does it just mean that the employer has less need to train the slow, high quality worker?

Comment by petermccluskey on How Can Donors Incentivize Good Predictions on Important but Unpopular Topics? · 2019-02-06T17:57:10.039Z · score: 1 (1 votes) · EA · GW

Regulations shouldn't be much of a problem for subsidized prediction markets. The regulations are designed to protect people from losing their investments. You can avoid that by not taking investments - i.e. give every trader a free account. Just make sure any one trader can't create many accounts.

Alas, it's quite hard to predict how much it will cost to generate good predictions, regardless of what approach you take.

Comment by petermccluskey on Disentangling arguments for the importance of AI safety · 2019-01-24T05:58:34.145Z · score: 3 (3 votes) · EA · GW

Drexler would disagree with some of Richard's phrasing, but he seems to agree that most (possibly all) of (somewhat modified versions of) those 6 reasons should cause us to be somewhat worried. In particular, he's pretty clear that powerful utility maximisers are possible and would be dangerous.

Comment by petermccluskey on Pursuing infinite positive utility at any cost · 2018-12-12T02:00:06.230Z · score: 6 (3 votes) · EA · GW

I think it's more appropriate to use Bostrom's Moral Parliament to deal with conflicting moral theories.

Your approach might be right if the theories you're comparing used the same concept of utility, and merely disagreed about what people would experience.

But I expect that the concept of utility which best matches human interests will say that "infinite utility" doesn't make sense. Therefore I treat the word utility as referring to different phenomena in different theories, and I object to combining them as if they were the same.


Similarly, I use a dealist approach to morality. If you show me an argument that there's an objective morality which requires me to increase the probability of infinite utility, I'll still ask what would motivate me to obey that morality, and I expect any resolution of that will involve something more like Bostrom's parliament than like your approach.

Comment by petermccluskey on Pursuing infinite positive utility at any cost · 2018-11-15T00:26:03.939Z · score: 6 (3 votes) · EA · GW

>For all actions have a non-zero chance of resulting in infinite positive utility.

Human utility functions seem clearly inconsistent with infinite utility. See Alex Mennen's Against the Linear Utility Hypothesis and the Leverage Penalty for arguments.

I don't identify 100% with future versions of myself, and I'm somewhat selfish, so I discount experiences that will happen in the distant future. I don't expect any set of possible experiences to add up to something I'd evaluate as infinite utility.

Comment by petermccluskey on Thoughts on short timelines · 2018-10-24T18:06:59.250Z · score: 4 (4 votes) · EA · GW

I disagree with your analysis of "are we that ignorant?".

For things like nuclear war or financial meltdown, we've got lots of relevant data, and not too much reason to expect new risks. For advanced nanotechnology, I think we are ignorant enough that a 10% chance sounds right (I'm guessing it will take something like $1 billion in focused funding).

With AGI, ML researchers can be influenced to change their forecast by 75 years by subtle changes in how the question is worded. That suggests unusual uncertainty.

We can see from Moore's law and from ML progress that we're on track for something at least as unusual as the industrial revolution.

The stock and bond markets do provide some evidence of predictability, but I'm unsure how good they are at evaluating events that happen much less than once per century.

Comment by petermccluskey on A model of the Machine Intelligence Research Institute - Oxford Prioritisation Project · 2018-09-24T15:30:40.619Z · score: 1 (1 votes) · EA · GW

I'm a little unclear on what you are asking.

How strictly do you mean when you say "provably safe"? That seems like an area where all AI safety researchers are hesitant to say how high they're aiming.

And by "have it implemented", do you mean fully develop it own their own, or do you include scenarios where they convey keys insights to Google, and thereby cause Google to do something safer?

Comment by petermccluskey on Open Thread #40 · 2018-07-17T15:13:02.230Z · score: 3 (3 votes) · EA · GW

I don't trust the author (Lomborg), based on the exaggerations I found in his book Cool It.

I reviewed that book here.

Comment by petermccluskey on Open Thread #39 · 2018-05-30T01:09:45.099Z · score: 5 (1 votes) · EA · GW

I suggest starting with MAPS.

Comment by petermccluskey on Against prediction markets · 2018-05-13T16:36:47.071Z · score: 1 (1 votes) · EA · GW

I think markets that have at least 20 people trading on any given question will on average be at least as good as any alternative.

Your comments about superforecasters suggest that you think what matters is hiring the right people. What I think matters is the incentives the people are given. Most organizations produce bad forecasts because they have goals which distract people from the truth. The biggest gains from prediction markets are due to replacing bad incentives with incentives that are closely connected with accurate predictions.

There are multiple ways to produce good incentives, and for internal office predictions, there's usually something simpler than prediction markets that works well enough.

Comment by petermccluskey on A case for developing Aldehyde Stabilized Cryopreservation into a medical procedure (1/4) · 2018-05-12T19:37:02.910Z · score: 1 (1 votes) · EA · GW

I object to the idea that early-stage Alzheimer's is incurable. See the book The End of Alzheimer's.

Comment by petermccluskey on Against prediction markets · 2018-05-12T16:59:59.436Z · score: 7 (7 votes) · EA · GW

Who are you arguing against? The three links in your first paragraph go to articles that don't clearly disagree with you.

>I’d also be curious about a prediction market in which only superforecasters trade.

I'd guess that there would be fewer trades than otherwise, and this would often offset any benefits that come from the high quality of the participants.

Comment by petermccluskey on [Draft] Fighting Aging as an Effective Altruism Cause · 2018-04-18T16:54:40.836Z · score: 2 (2 votes) · EA · GW

>It is even known to extend the life of diabetics so they live longer than healthy people.

No, it is known to correlate with living longer. But some or all of that correlation seems to be due to the sickest diabetics being switched from metformin to other drugs.

Comment by petermccluskey on On funding medical research · 2018-02-23T23:04:43.098Z · score: 1 (1 votes) · EA · GW

It does seem like there are important areas where medical research is inadequate. I'll suggest that part of the problem is inadequate effort devoted to treatments that aren't protected by patents.

It looks like some unknown fraction of ME/CFS is caused by low thyroid hormone levels. "Subclinical" hypothyroidism has symptoms that are pretty similar to those of ME/CFS. They are usually distinguished by TSH tests. [TSH is the standard measure of thyroid levels; there are a number of other options, none of which are ideal].

Here's speculation that we should distrust TSH results. (There's a more detailed and very verbose version of that speculation here).

There's plenty of confusion about when it's wise to increase a patient's thyroid hormone. E.g. this small RCT, which gave a standard T4 dose, rather than adjusting the dose to achieve some measure of optimal hormone levels. The reported TSH levels of 0.66 in patients receiving T4 suggest that many patients got more than the optimal dose, and/or didn't convert T4 to T3 well.

In contrast, two smaller uncontrolled studies (here and here) reported good results from T3 treatment for treatment-resistant depression (H/T Sarah Constantin). Plus there are lots of anecdotal reports of benefits (see mine here).

There are real dangers from overdoses, and it's unclear how well researchers have measured the benefits, so it's easy to imagine that most doctors are erring on the side of inaction.

My intuition says that there's plenty of room for making protocols that more safely determine the optimal dose. I don't have enough expertise to estimate how tractable that is.

Another area where EAs might possibly provide an important benefit is Alzheimer's. There have been some recent claims that there are strategies which substantially prevent Alzheimer's or reverse it in early stages. As far as I can tell, these claims aren't prompting as much research as they deserve.

Some parts of those strategies are backed by small RCTs published in 2013 and 2012, and yet the first Google search result for Alzheimer's is still a page that says Alzheimer's "cannot be prevented, cured or even slowed".

I expect good research about Alzheimer's to be too expensive for EAs to fund directly, but it seems like we should be able to do something to nudge existing research funding into better directions.

Comment by petermccluskey on A model of the Machine Intelligence Research Institute - Oxford Prioritisation Project · 2017-05-23T19:11:03.918Z · score: 3 (3 votes) · EA · GW

>colonisation of the Supercluster could have a very low probability.

What do you mean by very low probability? If you mean a one in a million chance, that's not improbable enough to answer Bostrom. If you mean something that would actually answer Bostrom, then please respond to the SlateStarCodex post Stop adding zeroes.

I think Bostrom is on the right track, and that any analysis which follows your approach should use at least a 0.1% chance of more than 10^50 human life-years.
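Even at that conservative probability, the expected value swamps the rest of the model's uncertainty:

```latex
\[
E[\text{life-years}] \geq 0.001 \times 10^{50} = 10^{47}
\]
% The other disputed parameters in the model span far fewer orders
% of magnitude than this term.
```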

Comment by petermccluskey on A model of the Machine Intelligence Research Institute - Oxford Prioritisation Project · 2017-05-22T14:41:16.122Z · score: 5 (5 votes) · EA · GW

Can you explain your expected far future population size? It looks like your upper bound is something like 10 orders of magnitude lower than Bostrom's most conservative estimates.

That disagreement makes all the other uncertainty look extremely trivial in comparison.

Comment by petermccluskey on Rational Politics Project · 2017-01-08T19:32:58.225Z · score: 6 (6 votes) · EA · GW

You claim this is non-partisan, yet you make highly partisan claims, such as "conservatives have relied much more on lies" (you cite Trump's lies, but treating Trump as a conservative is objectionable to many conservatives).

Comment by petermccluskey on Voter Registration As an EA Group Meetup Activity · 2016-09-18T17:06:34.412Z · score: 3 (3 votes) · EA · GW

Measurability doesn't sound quite adequate to describe what this proposal is missing.

FHI and MIRI have major problems with measurability, yet have somewhat plausible claims to fit EA principles.

Voter registration has similar problems with estimating how it affects goals such as lives saved, but seems to be missing an analysis of why the expected number of lives saved is positive or negative.

Comment by petermccluskey on Voter Registration As an EA Group Meetup Activity · 2016-09-17T16:38:08.154Z · score: 4 (4 votes) · EA · GW

The obvious objection is that voters who would otherwise not vote are likely to be less informed than the average voter, so your effort causes election results to be less well informed.

You sound more concerned with whether your actions are socially approved than you are with evaluating the results.

Comment by petermccluskey on Political initiative: Fundamental rights for primates · 2016-08-12T23:31:42.329Z · score: -1 (1 votes) · EA · GW

I'll guess that the most important effects of this would be to influence which species get uploaded when, reducing the chances that the world will be ruled by uploaded bonobos, and increasing the chance of nonprimates ruling.

Comment by petermccluskey on Why don't many effective altruists work on natural resource scarcity? · 2016-02-22T17:11:16.871Z · score: 1 (1 votes) · EA · GW

On the Nymex, they currently go out to Dec 2024. That contract appears to trade less than once a week.

There might be occasional contracts for more distant years traded between institutional investors that don't get publicly reported, but the low volume on publicly traded contracts suggests people just aren't interested in trading such contracts.

Comment by petermccluskey on Investment opportunity for the risk neutral · 2016-01-25T20:06:32.111Z · score: 5 (5 votes) · EA · GW

Your use of the phrase "fair market value" is a large red flag.

I've been speculating in stocks for 35 years. One of the hardest lessons I needed to learn was to not believe that last year's prices were fairer than today's prices.

Betting on mean reversion occasionally makes sense, but I've learned to only do it after careful analysis of the fundamentals (earnings, book value, etc).

Comment by petermccluskey on Direct Funding Between EAs - Moral Economics · 2015-08-01T15:42:26.807Z · score: 0 (0 votes) · EA · GW

The goal of avoiding groupthink has the potential to be a very important reason for preferring direct funding. If the direct funding ends up substituting for donations to large, entrenched institutions, then I expect it to be valuable. But I expect that any groupthink associated with young charities that have a handful of employees comes from a broader community, not the specific institution.

Comment by petermccluskey on Direct Funding Between EAs - Moral Economics · 2015-08-01T15:20:16.503Z · score: 0 (0 votes) · EA · GW

One difference in cost comes from institutions such as Oxford needing to pay their employees prestige-level wages. That makes the $75k average misleading. More obscure charities can hire employees much more cheaply.