The Future of Earning to Give 2019-10-13T21:28:54.012Z


Comment by PeterMcCluskey on Keynesian Altruism · 2020-09-15T02:31:36.122Z · EA · GW

10 years' worth of cash sounds pretty unusual, at least for an EA charity.

But part of my point is that when stocks are low, the charity won't have enough of a cushion to do any investing, so it won't achieve the kind of returns that you'd expect from buying stocks at a no-worse-than-random time. E.g. I'd expect that a charity that tries to buy stocks would have bought around 2000 when the S&P was around 1400, sold some of that in 2003 when the S&P was around 1100 to make up for a shortfall in donations, bought again in 2007 at 1450, then sold again in 2009 at 1100. With patterns like that, it's easy to get negative returns.
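The round-trip losses in that hypothetical can be tallied directly. Here's a toy sketch using the S&P levels from my example (one share per trade; dividends and position sizing ignored):

```python
# Hypothetical buy-high/sell-low pattern for a charity forced to sell
# reserves in bear markets. Prices are rough S&P 500 levels from the
# comment above; this ignores dividends and assumes one share per trade.

trades = [
    ("buy", 2000, 1400),   # donations plentiful, stocks near a peak
    ("sell", 2003, 1100),  # donation shortfall forces selling low
    ("buy", 2007, 1450),   # flush again, buys back near another peak
    ("sell", 2009, 1100),  # next downturn forces selling low again
]

cash = 0.0
for action, year, price in trades:
    cash += price if action == "sell" else -price

print(cash)  # prints -650.0: each forced round trip loses money
```

Despite the index ending roughly where it started, the forced timing turns a flat market into a loss.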

Individual investors often underperform markets for the same reason. They can avoid that by investing only what they're saving for retirement. However, charities generally shouldn't have anything equivalent to saving for retirement.

Comment by PeterMcCluskey on Keynesian Altruism · 2020-09-13T22:47:51.013Z · EA · GW
>1. Cash sitting in a charity bank account costs money, so if you have lots of it, invest some;

But the obvious ways to invest (i.e. stocks) work poorly when combined with countercyclical spending. Charities are normally risk-averse about investments because they have plenty of money to invest when stocks are high, but need to draw down reserves when stocks are low.

Comment by PeterMcCluskey on How to Fix Private Prisons and Immigration · 2020-06-13T17:46:06.768Z · EA · GW

>When I tell people that prisons and immigration should use a similar mechanism, they sometimes give me a look of concern. This concern is based on a misconception

I'll suggest that some people's concerns are due to an accurate intuition that your proposal will make it harder to hide the resemblance between prisons and immigration restrictions. Preventing people from immigrating looks to me fairly similar to imprisoning them in their current country.

Comment by PeterMcCluskey on Idea: statements on behalf of the general EA community · 2020-06-11T20:02:28.596Z · EA · GW

It would be much easier to make a single, more generic policy statement. Something like:

When in doubt, assume that most EAs agree with whatever opinions are popular in London, Berkeley, and San Francisco.

Or maybe:

When in doubt, assume that most EAs agree with the views expressed by the most prestigious academics.

Reaffirming this individually for every controversy would redirect attention (of whatever EAs are involved in the decision) away from core EA priorities.

Comment by PeterMcCluskey on Will protests lead to thousands of coronavirus deaths? · 2020-06-04T15:41:53.201Z · EA · GW

Another risk is that increased distrust impairs the ability of authorities to do test and trace in low-income neighborhoods, which seem to now be key areas where the pandemic is hardest to control.

Comment by PeterMcCluskey on Climate Change Is Neglected By EA · 2020-06-02T15:56:24.622Z · EA · GW

>EA is in danger of making itself a niche cause by loudly focusing on topics like x-risk

EA has been a niche cause, and changing that seems harder than solving climate change. Increased popularity would be useful, but shouldn't become a goal in and of itself.

If EAs should focus on climate change, my guess is that it should be a niche area within climate change. Maybe altering the albedo of buildings?

Comment by PeterMcCluskey on Policy idea: Incentivizing COVID-19 tracking app use with lottery tickets · 2020-04-24T18:05:04.088Z · EA · GW

How about having many locations that are open only to people who are running a tracking app?

I'm imagining that places such as restaurants, gyms, and airplanes could require that people use tracking apps in order to enter. Maybe the law should require that as a default for many locations, with the owners able to opt out if they post a conspicuous warning?

How hard would this be to enforce?

Comment by PeterMcCluskey on How Much Leverage Should Altruists Use? · 2020-01-11T20:50:55.788Z · EA · GW

Hmm. Maybe you're right. I guess I was thinking there was an important difference between "constant leverage" and infrequent rebalancing. But I guess that's a more complicated subject.

Comment by PeterMcCluskey on How Much Leverage Should Altruists Use? · 2020-01-08T23:41:48.805Z · EA · GW

See Colby Davis on the problems with leveraged ETFs.

Comment by PeterMcCluskey on How Much Leverage Should Altruists Use? · 2020-01-08T23:33:13.120Z · EA · GW

I like this post a good deal.

However, I think you overstate the benefits.

I like the idea of shorting the S&P and buying global ex-US stocks, but beware that past correlations between markets only provide a rough guess about future correlations.

I'm skeptical that managed futures will continue to do as well as backtesting suggests. Futures are new enough that there's likely been a moderate amount of learning among institutional investors that has been going on over the past couple of decades, so those markets are likely more efficient now than history suggests. Returns also depend on recognizing good managers, which tends to be harder than most people expect.

Startups might be good for some people, but it's generally hard to tell. Are you able to find startups before they apply to Y Combinator? Or do startups only come to you if they've been rejected by Y Combinator? Those are likely to have large effects on your expected returns. I've invested in about 10 early-stage startups over a period of 20 years, and I still have little idea of what returns to expect from my future startup investments.

I'm skeptical that momentum funds work well. Momentum strategies work if implemented really well, but a fund that tries to automate the strategy via simple rules is likely to lose the benefits to transaction costs and to other traders who anticipate the fund's trades. If it instead avoids simple rules, most investors won't be able to tell whether it's a good fund. And if the strategy becomes too popular, that can easily cause returns to become significantly negative (whereas with value strategies, popularity will more likely drive returns to approximately the same as the overall market).

Comment by PeterMcCluskey on 2019 AI Alignment Literature Review and Charity Comparison · 2019-12-26T17:28:17.673Z · EA · GW

Nearly all of CFAR's activity is motivated by their effects on people who are likely to impact AI. As a donor, I don't distinguish much between the various types of workshops.

There are many ways that people can impact AI, and I presume the different types of workshop are slightly optimized for different strategies and different skills, and differ a bit in how strongly they're selecting for people who have a high probability of doing AI-relevant things. CFAR likely doesn't have a good prediction in advance about whether any individual person will prioritize AI, and we shouldn't expect them to try to admit only those with high probabilities of working on AI-related tasks.

Comment by PeterMcCluskey on 2019 AI Alignment Literature Review and Charity Comparison · 2019-12-24T00:49:55.367Z · EA · GW

OAK intends to train people who are likely to have important impacts on AI, to help them be kinder or something like that. So I see a good deal of overlap with the reasons why CFAR is valuable.

I attended a 2-day OAK retreat. It was run in a professional manner that suggests they'll provide a good deal of benefit to the people they train. But my intuition is that the impact will mainly be to make those people happier, and I expect OAK to have less effect on people's behavior than CFAR has.

I considered donating to OAK as an EA charity, but have decided it isn't quite effective enough for me to treat it that way.

I believe that the person who promoted that grant at SFF has more experience with OAK than I do.

I'm surprised that SFF gave more to OAK than to ALLFED.

Comment by PeterMcCluskey on The Future of Earning to Give · 2019-10-25T00:14:02.294Z · EA · GW

With almost all of those proposed intermediate goals, it's substantially harder to evaluate whether the goal will produce much value. In most cases, it will be tempting to define the intermediate goal in a way that is easy to measure, even when doing so weakens the connection between the goal and health.

E.g. good biomarkers of aging would be very valuable if they measure what we hope they measure. But your XPrize link suggests that people will be tempted to use expert acceptance in place of hard data. The benefits of biomarkers have been frequently overstated.

It's clear that most donors want prizes to have a high likelihood of being awarded fairly soon. But I see that desire as generally unrelated to a desire for maximizing health benefits. I'm guessing it indicates that donors prefer quick results over high-value results, and/or that they overestimate their knowledge of which intermediate steps are valuable.

A $10 million aging prize from an unknown charity might have serious credibility problems, but I expect that a $5 billion prize from the Gates Foundation or OpenPhil would be fairly credible - they wouldn't actually offer the prize without first getting some competent researchers to support it, and they'd likely first try out some smaller prizes in easier domains.

Comment by PeterMcCluskey on The Future of Earning to Give · 2019-10-14T18:39:00.317Z · EA · GW

I agree with most of your comment.

>Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.

That seems like almost the opposite of what the 80k post says. It says the people who get hired are not very replaceable. But it also appears to say that people who get evaluated as average by EA orgs are 2 or more standard deviations less productive, which seems to imply that they're pretty replaceable.

Comment by PeterMcCluskey on The Future of Earning to Give · 2019-10-14T18:04:20.332Z · EA · GW

Yes, large donors more often reach diminishing returns on each recipient than do small donors. The one charity heuristic is mainly appropriate for people who are donating $50k per year or less.

Comment by PeterMcCluskey on The Future of Earning to Give · 2019-10-14T18:02:43.281Z · EA · GW

Yes. The post Drowning children are rare seemed to be saying that OPP was capable of making most EA donations unimportant. I'm arguing that we should reject that conclusion, even if many of that post's points are correct.

Comment by PeterMcCluskey on X-risk dollars -> Andrew Yang? · 2019-10-13T02:52:02.261Z · EA · GW

They may not have budged climate scientists, but there are other ways they may have influenced policy. Did they (or other partisans) alter the outcomes of Washington Initiative 1631 or 732? That seems hard to evaluate.

Comment by PeterMcCluskey on [Link] The Case for Charter Cities Within the EA Framework (CCI) · 2019-09-24T19:12:39.146Z · EA · GW

Most of their analysis looks right.

But they implicitly assume a 100% chance of generating a charter city with better institutions than the host country, given a certain amount of effort on their part. I'd be eagerly donating to them if I believed that. But I expect most countries have political problems which will cause them to reject any charter city effort that comes from outside their country.

I'll estimate a less than 5% chance that any US based charity will catalyze the creation of a charter city in another country, and if such a charter city is created, I'll estimate maybe a 50% chance of it having better institutions than the host country. So I'm dividing their expected impact estimates by about 50 or 100.
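As a rough sketch of that arithmetic (the 2% figure below is one illustrative value consistent with my "less than 5%" estimate, not a precise claim):

```python
# Rough expected-value discount for charter-city impact estimates,
# using the probabilities from this comment. The 2% figure is an
# illustrative value within the "less than 5%" range.

p_city_created = 0.02        # chance a US charity catalyzes a charter city
p_better_institutions = 0.5  # chance the city beats the host country's institutions

p_success = p_city_created * p_better_institutions
discount = 1 / p_success

print(discount)  # about 100: divide their impact estimates by roughly 100
```

Plugging in 4% instead of 2% for the first probability gives a discount closer to 50, which is where the "about 50 or 100" range comes from.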

Comment by PeterMcCluskey on Why were people skeptical about RAISE? · 2019-09-06T12:12:48.087Z · EA · GW

I meant something like "good enough to look like a MIRI researcher, but unlikely to turn out to be more productive than the average MIRI researcher". I guess when I wrote that I was feeling somewhat pessimistic about MIRI's hiring process. Given optimistic assumptions about how well MIRI distinguishes good from bad job applicants, I'd expect that MIRI wouldn't hire RAISE graduates.

Comment by PeterMcCluskey on Are we living at the most influential time in history? · 2019-09-05T18:46:20.403Z · EA · GW

I agree with most of your reasoning, but disagree significantly about this:

>The case for focusing on AI safety and existential risk reduction is much weaker if you live in a simulation than if you don’t.

It's true that a pure utilitarian would expect about an order of magnitude less utility from x-risk reduction if we have a 90% chance of being in a simulation compared to a zero chance of being in a simulation. But the pure utilitarian case for x-risk reduction isn't very sensitive to an order of magnitude change in utility, since the expected utility seems many orders of magnitude larger than what's needed to convince a pure utilitarian to focus on x-risks.

From a more selfish perspective, being in a simulation increases my desire to be involved in events that are interesting to the simulators, in case such people get simulated in more detail.

I'm somewhat concerned that being influenced much by the simulation hypothesis increases the risk that the simulation will be shut down, which seems like weak evidence for caution about altering my behavior much in response to the simulation hypothesis.

For these reasons, and WilliamKiely's comments about priors, I want to treat HoH as more than 1% likely.

Comment by PeterMcCluskey on Why were people skeptical about RAISE? · 2019-09-04T13:29:09.535Z · EA · GW

>anyone capable of significantly contributing wouldn't need an on-ramp

That's approximately why I was skeptical, although I want to frame it a bit differently. I expect that the most valuable contributions to AI safety will involve generating new paradigms, asking questions that nobody has yet thought to ask, or something like that. It's hard to teach the skills that are valuable for that.

I got the impression that RAISE was mostly oriented toward producing people who become typical MIRI researchers. Even if MIRI's paradigm is the right one, I expect that MIRI needs atypically good researchers, and would only get minor benefits from someone who is struggling to become a typical MIRI researcher.

Comment by PeterMcCluskey on Age-Weighted Voting · 2019-07-12T19:25:35.245Z · EA · GW

War is more likely when the population has a higher fraction of young men (e.g. see Angry Young Men Are Making the World Less Stable). That doesn't quite say that young men vote more for war, but it's suggestive.

More war could easily overwhelm any benefits from weighted voting.

Comment by PeterMcCluskey on Increase Impact by Waiting for a Recession to Donate or Invest in a Cause. · 2019-06-23T15:43:40.085Z · EA · GW

IPOs are strongly dependent on an expanding economy. Cryptocurrency bubbles are somewhat more likely in an expanding economy.

The impact of IPOs and Bitcoin on other markets is much smaller than the impact of the economy on IPOs and Bitcoin.

Comment by PeterMcCluskey on Increase Impact by Waiting for a Recession to Donate or Invest in a Cause. · 2019-06-21T15:40:05.954Z · EA · GW

I'll guess that EA giving is a bit more sensitive to the economy than other giving, because a disproportionate amount of EA giving comes from IPO-related wealth and cryptocurrency bubbles.

Comment by PeterMcCluskey on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-25T21:43:11.839Z · EA · GW

No, I expected that no rigorous research had been done on NLP as of 2014, and I don't know how rigorous the more recent research has been.

Comment by PeterMcCluskey on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-25T01:34:18.444Z · EA · GW

I don't know whether it has been published. I heard it from Rick Schwall (

Comment by PeterMcCluskey on Cash prizes for the best arguments against psychedelics being an EA cause area · 2019-05-12T21:24:27.926Z · EA · GW

I've contributed small amounts of money to MAPS, but I haven't been thinking of those as EA donations.

My doubts overlap a fair amount with those of Scott Alexander, but I'll focus on somewhat different reasoning which led me there.

It sounds like MAPS has been getting impressive results, and MAPS would likely qualify as an EA charity if FDA approval were the main obstacle to extending those results to the typical person who seeks help with PTSD. However, I suspect there are other important obstacles.

I know a couple of people, who I think consider themselves EAs, who have been trying to promote an NLP-based approach to treating PTSD, which reportedly has a higher success rate than MAPS has reported. The basic idea behind it has been around for years, without spreading very widely, and without much interest from mainstream science.

Maybe the reports I hear involve an improved version of the basic technique, and it will take off as soon as the studies based on the new version are published.

Or maybe the glowing reports are based on studies that attracted both therapists and patients who were unusually well suited for NLP, and don't generalize to random therapists and random PTSD patients. And maybe the MAPS study has similar problems.

Whatever the case is there, the ease with which I was able to stumble across an alternative to psychedelics that sounds about equally promising is some sort of evidence against the hypothesis that there's a shortage of promising techniques to treat PTSD.

I suspect there are important institutional problems in getting mental health professionals to adopt techniques that provide quick fixes. I doubt it's a complete coincidence that the number of visits required for successful therapy happens to resemble a number that maximizes revenue per patient.

If that were simply a conspiracy of medical professionals, and patients were eager to work around them, I'd be vaguely hopeful of finding a way to do so. But I'm under the impression that patients have a weak tendency to contribute to the problem: they're more likely to recommend to their friends a therapist they've seen for a long time than one they stopped seeing after a month because they were cured that fast. And I don't see much demand for alternative routes to finding therapists with good track records.

None of these reasons for doubt is quite sufficient by itself to decide that MAPS isn't an EA charity, but they outline at least half of my intuitions for feeling somewhat pessimistic about this cause area.

Comment by PeterMcCluskey on Aligning Recommender Systems as Cause Area · 2019-05-10T23:48:51.872Z · EA · GW

I suspect that principal–agent problems are the biggest single obstacle to alignment. That leads me to suspect it's less tractable than you indicate.

I'm interested in what happened with Netflix. Ten years ago their recommendation system seemed focused almost exclusively on maximizing user ratings of movies. That dramatically improved my ability to find good movies.

Yet I didn't notice many people paying attention to those benefits. Netflix has since then shifted toward less aligned metrics. I'm less satisfied with Netflix now, but I'm unclear what other users think of the changes.

Comment by PeterMcCluskey on Should we consider the sleep loss epidemic an urgent global issue? · 2019-05-06T15:27:31.067Z · EA · GW

Sleep loss is an important problem, but it's unclear whether any charity should focus on it directly.

The problem of driving while sleep-deprived will likely be solved by robocars more than by any altruistic efforts.

The rest of the problem seems better tackled by focusing more on the stresses that cause sleep problems, and by relatively decentralized efforts to shift our cultures to be more sleep-friendly.

Sleep is something to keep in mind when asking whether EAs should donate to mental health charities or to meditation charities such as Monastic Academy. I'm very uncertain whether these charities should be considered effective enough to be EA causes.

Comment by PeterMcCluskey on Why does EA use QALYs instead of experience sampling? · 2019-04-25T14:30:33.244Z · EA · GW

>For anyone who's had some experience with depression or anxiety, as well as with "some problems walking about," it should be obvious that moderate depression or anxiety are (much) worse than moderate mobility problems, pound for pound.

That's obvious for rich people, but not at all obvious for someone who risks hunger as a result of mobility problems.

Comment by PeterMcCluskey on Long-Term Future Fund: April 2019 grant recommendations · 2019-04-11T16:37:52.375Z · EA · GW

I assume that by "cash-flow positive", you mean supported by fees from workshop participants?

I don't consider that to be a desirable goal for CFAR.

Habryka's analysis focuses on CFAR's track record. But CFAR's expected value comes mainly from possible results that aren't measured by that track record.

My main reason for donating to CFAR is the potential for improving the rationality of people who might influence x-risks. That includes mainstream AI researchers who aren't interested in the EA and rationality communities. The ability to offer them free workshops seems important to attracting the most influential people.

Comment by PeterMcCluskey on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-01T20:01:50.476Z · EA · GW

>which means that what everyone else is doing doesn't matter all that much

Earning to give still matters a moderate amount. That's mostly what I'm doing. I'm saying that the average EA should start with the outside view that they can't do better than earning to give, and then attempt some more difficult analysis to figure out how they compare to the average.

And it's presumably possible to matter more than the average earning to give EA, by devoting above-average thought to vetting new charities.

Comment by PeterMcCluskey on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-04-01T19:51:43.167Z · EA · GW

I'm unimpressed by the arguments for random funding of research proposals. The problems with research funding are mostly due to poor incentives, rather than people being unable to do much better than random guessing. EA organizations don't have ideal incentives, and may be on the path to unreasonable risk-aversion, but they still have a fairly sophisticated set of donors setting their incentives, and don't yet appear to be particularly risk-averse or credential-oriented.

Unless something has changed in the last few years, there are still plenty of startups with plausible ideas that don't get funded by Y Combinator or anything similar. Y Combinator clearly evaluates a lot more startups than I'm willing or able to evaluate, but it's not obvious that they're being less selective than I am about which ones they fund.

I mentioned Nick Bostrom and Eric Drexler because they're widely recognized as competent. I didn't mean to imply that we should focus more funding on people who are that well known - they do not seem to be funding constrained now.

Let me add some examples of funding I've done that better characterize what I'm aiming for in charitable donations (at the cost of being harder for many people to evaluate):

My largest donations so far have been to CFAR, starting in early 2013, when their track record was rather weak, and almost unknown outside of people who had attended their workshops. That was based largely on impressions of Anna Salamon that I got by interacting with her (for reasons that were only marginally related to EA goals).

Another example is Aubrey de Grey. I donated to the Methuselah Mouse Prize for several years starting in 2003, when Aubrey had approximately no relevant credentials beyond having given a good speech at the Foresight Institute and a similar paper on his little-known website.

Also, I respected Nick Bostrom and Eric Drexler fairly early in their careers. Not enough to donate to their charitable organizations at their very beginning (I wasn't actively looking for effective charities before I heard of GiveWell). But enough that I bought and read their first books, primarily because I expected them to be thoughtful writers.

Comment by PeterMcCluskey on $100 Prize to Best Argument Against Donating to the EA Hotel · 2019-03-31T17:35:27.126Z · EA · GW

Speaking for why I haven't donated, this is close to the key question:

>Then the question is (roughly) whether, given £60,000, it makes more sense to fund 1 researcher who's cleared the EA hiring bar, or 10 who haven't (and are in D).

My intuition has been that if those 10 are chosen at random, then I'm moderately confident that it's better to fund the 1 well-vetted researcher.

EA is talent-constrained in the sense that it needs more people like Nick Bostrom or Eric Drexler, but much less in the sense of needing more people who are average EAs to do direct EA work.

I've done some angel investing in startups. I initially took an approach of trying to fund anyone who had a good idea. But that worked poorly, and I've shifted, as good VCs advise, to looking for signs of unusual competence in founders. (Alas, I still don't have much reason to think I'm good at angel investing). And evaluating founders' competence feels harder than evaluating a business idea, so I'm not willing to do it very often.

I use a similar approach with donating to early-stage charities, expecting to see many teams with decent ideas, but expecting the top 5% to be more than 10 times as valuable as the average. And I'm reluctant to evaluate more pre-track-record projects than I'm already doing.

With the hotel, I see a bunch of little hints that it's not worth my time to attempt an in-depth evaluation of the hotel's leaders. E.g. the focus on low rent, which seems like a popular meme among average and below average EAs in the bay area, yet the EAs whose judgment I most respect act as if rent is a relatively small issue.

I can imagine that the hotel attracts better than random EAs, but it's also easy to imagine that it selects mainly for people who aren't good enough to belong at a top EA organization.

Halffull has produced a better argument for the EA Hotel, but I find it somewhat odd that he starts with arguments that seem weak to me, and only in the middle did he get around to claims that are relevant to whether the hotel is better than a random group of EAs.

Also, if donors fund any charity that has a good idea, I'm a bit concerned that that will attract a larger number of low-quality projects, much like the quality of startups declined near the peak of the dot-com bubble, when investors threw money at startups without much regard for competence.

Comment by PeterMcCluskey on Bayesian Investor proposes you can predictably beat the market by ~3% following a simple and easy strategy · 2019-03-24T17:32:32.840Z · EA · GW

Here are a few examples of strategies that look (or looked) equally plausible, from the usually thoughtful blog of my fellow LessWronger Colby Davis.

This blog post recommends:
- emerging markets, which overlaps a fair amount with my advice
- put-writing, which sounds reasonable to me, but he managed to pick a bad time to advocate it
- preferred stock, which looks appropriate today for more risk-averse investors, but which looked overpriced when I wrote my post.

This post describes one of his failures. Buying XIV was almost a great idea. It was a lot like shorting VXX, and shorting VXX is in fact a good idea for experts who are cautious enough not to short too much (alas, the right amount of caution is harder to know than most people expect). I expect the rewards in this area to go only to those who accept hard-to-evaluate risks.

This post has some strategies that require more frequent trading. I suspect they're good, but I haven't given them enough thought to be confident.

Comment by PeterMcCluskey on Bayesian Investor proposes you can predictably beat the market by ~3% following a simple and easy strategy · 2019-03-17T17:32:00.175Z · EA · GW

Hi, I'm Bayesian Investor.

I doubt that following my advice would be riskier than the S&P 500 - the low volatility funds reduce the risk in important ways (mainly by moving less in bear markets) that roughly offset the features which increase risk.

It's rational for most people to ignore my advice, because there's lots of other (somewhat conflicting) advice out there that sounds equally plausible to most people.

I've got lots of evidence about my abilities (I started investing as a hobby in 1980, and it's been my main source of income for 20 years). But I don't have an easy way to provide much evidence of my abilities in a single blog post.

Comment by PeterMcCluskey on -0.16 ROI for Weight Loss Interventions (Optimistic) · 2019-03-17T16:35:42.379Z · EA · GW

I'm a little confused by this reply. Did you think I was complaining that you over-estimated the costs of weight loss? Let me emphasize that I was complaining about the actual resources devoted to weight loss, not your estimates of it. I'll guess that you under-estimated those costs, by focusing on money spent, rather than trying to evaluate the psychological costs.

My main point is that we should focus more on getting people to switch from typical weight loss approaches to ones that are easier and more effective.

I'm unsure what to infer from your weight satisfaction evidence. It might mean that some people notice that obesity is harming them (via sleep apnea? romantic problems?) and that's what causes them to worry. Or it might mean they're just more responsive to peer pressure, and it's the peer pressure, not the obesity, that's harmful.

Comment by PeterMcCluskey on -0.16 ROI for Weight Loss Interventions (Optimistic) · 2019-03-11T19:25:00.385Z · EA · GW

I suspect you underestimate the cost of obesity.

But there's something seriously wrong with the cost of the typical weight loss approach, and your ROI estimate might be close to the right answer for that.

I believe it's possible to adopt a much better than average approach to weight loss, by focusing more on switching to healthier foods (based on the Satiety Index, or on high fiber content), and/or some form of intermittent fasting.

Comment by PeterMcCluskey on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-28T16:52:26.188Z · EA · GW

I expect that good software engineers are more likely to figure out for themselves how to be more efficient than they are to figure out how to increase their work quality. So it's not obvious what to infer from "it's harder for an employer to train people to work faster" - does it just mean that the employer has less need to train the slow, high quality worker?

Comment by PeterMcCluskey on How Can Donors Incentivize Good Predictions on Important but Unpopular Topics? · 2019-02-06T17:57:10.039Z · EA · GW

Regulations shouldn't be much of a problem for subsidized prediction markets. The regulations are designed to protect people from losing their investments. You can avoid that by not taking investments - i.e. give every trader a free account. Just make sure any one trader can't create many accounts.

Alas, it's quite hard to predict how much it will cost to generate good predictions, regardless of what approach you take.

Comment by PeterMcCluskey on Disentangling arguments for the importance of AI safety · 2019-01-24T05:58:34.145Z · EA · GW

Drexler would disagree with some of Richard's phrasing, but he seems to agree that most (possibly all) of (somewhat modified versions of) those 6 reasons should cause us to be somewhat worried. In particular, he's pretty clear that powerful utility maximisers are possible and would be dangerous.

Comment by PeterMcCluskey on Pursuing infinite positive utility at any cost · 2018-12-12T02:00:06.230Z · EA · GW

I think it's more appropriate to use Bostrom's Moral Parliament to deal with conflicting moral theories.

Your approach might be right if the theories you're comparing used the same concept of utility, and merely disagreed about what people would experience.

But I expect that the concept of utility which best matches human interests will say that "infinite utility" doesn't make sense. Therefore I treat the word utility as referring to different phenomena in different theories, and I object to combining them as if they were the same.

Similarly, I use a dealist approach to morality. If you show me an argument that there's an objective morality which requires me to increase the probability of infinite utility, I'll still ask what would motivate me to obey that morality, and I expect any resolution of that will involve something more like Bostrom's parliament than like your approach.

Comment by PeterMcCluskey on Pursuing infinite positive utility at any cost · 2018-11-15T00:26:03.939Z · EA · GW

>For all actions have a non-zero chance of resulting in infinite positive utility.

Human utility functions seem clearly inconsistent with infinite utility. See Alex Mennen's Against the Linear Utility Hypothesis and the Leverage Penalty for arguments.

I don't identify 100% with future versions of myself, and I'm somewhat selfish, so I discount experiences that will happen in the distant future. I don't expect any set of possible experiences to add up to something I'd evaluate as infinite utility.
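The discounting point can be made precise: with per-period utility bounded by some $U$ and a constant discount factor $\delta < 1$ (the symbols are my notation, not the comment's), the total is necessarily finite:

$$\sum_{t=0}^{\infty} \delta^t u_t \;\le\; \sum_{t=0}^{\infty} \delta^t U \;=\; \frac{U}{1-\delta} \;<\; \infty.$$

So any agent whose valuation of the future is both bounded per period and discounted cannot assign infinite utility to any stream of experiences.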

Comment by PeterMcCluskey on Thoughts on short timelines · 2018-10-24T18:06:59.250Z · EA · GW

I disagree with your analysis of "are we that ignorant?".

For things like nuclear war or financial meltdown, we've got lots of relevant data, and not too much reason to expect new risks. For advanced nanotechnology, I think we are ignorant enough that a 10% chance sounds right (I'm guessing it will take something like $1 billion in focused funding).

With AGI, ML researchers can be influenced to change their forecast by 75 years by subtle changes in how the question is worded. That suggests unusual uncertainty.

We can see from Moore's law and from ML progress that we're on track for something at least as unusual as the industrial revolution.

The stock and bond markets do provide some evidence of predictability, but I'm unsure how good they are at evaluating events that happen much less than once per century.

Comment by PeterMcCluskey on [deleted post] 2018-09-24T15:30:40.619Z

I'm a little unclear on what you are asking.

How strictly do you mean when you say "provably safe"? That seems like an area where all AI safety researchers are hesitant to say how high they're aiming.

And by "have it implemented", do you mean fully develop it on their own, or do you include scenarios where they convey key insights to Google, and thereby cause Google to do something safer?

Comment by PeterMcCluskey on Open Thread #40 · 2018-07-17T15:13:02.230Z · EA · GW

I don't trust the author (Lomborg), based on the exaggerations I found in his book Cool It.

I reviewed that book here.

Comment by PeterMcCluskey on Open Thread #39 · 2018-05-30T01:09:45.099Z · EA · GW

I suggest starting with MAPS.

Comment by PeterMcCluskey on Against prediction markets · 2018-05-13T16:36:47.071Z · EA · GW

I think markets that have at least 20 people trading on any given question will on average be at least as good as any alternative.

Your comments about superforecasters suggest that you think what matters is hiring the right people. What I think matters is the incentives the people are given. Most organizations produce bad forecasts because they have goals which distract people from the truth. The biggest gains from prediction markets are due to replacing bad incentives with incentives that are closely connected with accurate predictions.

There are multiple ways to produce good incentives, and for internal office predictions, there's usually something simpler than prediction markets that works well enough.

Comment by PeterMcCluskey on A case for developing Aldehyde Stabilized Cryopreservation into a medical procedure (1/4) · 2018-05-12T19:37:02.910Z · EA · GW

I object to the idea that early stage Alzheimer's is incurable. See the book The End of Alzheimer's.

Comment by PeterMcCluskey on Against prediction markets · 2018-05-12T16:59:59.436Z · EA · GW

Who are you arguing against? The three links in your first paragraph go to articles that don't clearly disagree with you.

>I'd also be curious about a prediction market in which only superforecasters trade.

I'd guess that there would be fewer trades than otherwise, and this would often offset any benefits that come from the high quality of the participants.