Posts

The ESG Alignment Problem 2022-06-12T21:45:30.749Z
The Future of Earning to Give 2019-10-13T21:28:54.012Z

Comments

Comment by PeterMcCluskey on Stress Externalities More in AI Safety Pitches · 2022-09-27T15:45:59.320Z · EA · GW

It's risky to connect AI safety to one side of an ideological conflict.

Comment by PeterMcCluskey on The Next EA Global Should Have Safe Air · 2022-09-19T17:31:37.400Z · EA · GW

Convincing a venue to implement it well (or rewarding one that has already done that) will have benefits that last more than three days.

Comment by PeterMcCluskey on A Critique of AI Takeover Scenarios · 2022-08-31T19:02:37.339Z · EA · GW

I agree about the difficulty of developing major new technologies in secret. But you seem to be mostly overstating the problems with accelerating science. E.g.:

These passages seem to imply that the rate of scientific progress is primarily limited by the number and intelligence level of those working on scientific research. Here it sounds like you're imagining that the AI would only speed up the job functions that get classified as "science", whereas people are suggesting the AI would speed up a wide variety of tasks including gathering evidence, building tools, etc.

Comment by PeterMcCluskey on New Cause: Radio Ads Against Cousin Marriage in LMIC · 2022-08-15T18:54:33.873Z · EA · GW

My understanding of Henrich's model says that reducing cousin marriage is a necessary but hardly sufficient condition to replicate WEIRD affluence.

European culture likely had other features which enabled cooperation on larger-than-kin-network scales. Without those features, a society that stops cousin marriage could easily end up with only cooperation within smaller kin networks. We shouldn't be confident that we understand what the most important features are, much less that we can cause LMICs to have them.

Successful societies ought to be risk-averse about this kind of change. If this cause area is worth pursuing, it should focus on the least successful societies. But those are also the societies that are least willing to listen to WEIRD ideas.

Also, the idea that reduced cousin marriage was due to some random church edict seems to be the most suspicious part of Henrich's book. See The Explanation of Ideology for some claims that the nuclear family was normal in northwest Europe well before Christianity.

Comment by PeterMcCluskey on Resilience & Biodiversity · 2022-08-12T19:40:20.009Z · EA · GW

Resilience seems to matter for human safety mainly via food supply risks. I'm not too concerned about that, because the world is producing a good deal more food than is needed to support our current population. See my more detailed analysis here.

It's harder to evaluate the effects on other species. I see a significant chance that technological changes will make current biodiversity efforts irrelevant. So to the limited extent that I'm worried about wild animals, I'm focused more on ensuring that technological change develops so as to keep as many options open as possible.

Comment by PeterMcCluskey on Cause area: Short-sleeper genes · 2022-08-10T14:36:45.015Z · EA · GW

Why has this depended on NIH? Why aren't some for-profit companies eager to pursue this?

Comment by PeterMcCluskey on Changing the world through slack & hobbies · 2022-07-23T03:10:10.677Z · EA · GW

This seems to nudge people in a generally good direction.

But the emphasis on slack seems somewhat overdone.

My impression is that people who accomplish the most typically have had small to moderate amounts of slack. They made good use of their time by prioritizing their exploration of neglected questions well. That might create the impression of much slack, but I don't see slack as a good description of the cause.

One of my earliest memories of Eliezer is him writing something to the effect that he didn't have time to be a teenager (probably on the Extropians list, but I haven't found it).

I don't like the way you classify your approach as an alternative to direct work. I prefer to think of it as a typical way to get into direct work.

I've heard a couple of people mention recently that AI safety is constrained by the shortage of mentors for PhD theses. That seems wrong. I hope people don't treat a PhD as a standard path to direct work.

I also endorse Anna's related comments here.

Comment by PeterMcCluskey on Global health is important for the epistemic foundations of EA, even for longtermists · 2022-06-14T16:15:39.917Z · EA · GW

This seems mostly right, but it still doesn't seem like the main reason that we ought to talk about global health.

There are lots of investors visibly trying to do things that we ought to expect will make the stock market more efficient. There are still big differences between companies in returns on R&D or returns on capital expenditures. Those returns go mainly to people who can found a Moderna or Tesla, not to ordinary investors.

There are not (yet?) many philanthropists who try to make the altruistic market more efficient. But even if there were, there'd be big differences in who can accomplish what kinds of philanthropy.

Introductory EA materials ought to reflect that: instead of one strategy being optimal for everyone who wants to be an EA, the average person ought to focus on easy-to-evaluate philanthropy such as global health. A much smaller fraction of the population with unusual skills ought to focus on existential risks, much as a small fraction of the population ought to focus on founding companies like Moderna and Tesla.

Comment by PeterMcCluskey on The biggest risk of free-spending EA is not optics or motivated cognition, but grift · 2022-05-14T14:51:22.354Z · EA · GW

Can you give any examples of AI safety organizations that became less able to get funding due to lack of results?

Comment by PeterMcCluskey on The biggest risk of free-spending EA is not optics or motivated cognition, but grift · 2022-05-14T14:40:03.063Z · EA · GW

Worrying about the percent of spending misses the main problems, e.g. donors who notice the increasing grift become less willing to trust the claims of new organizations, thereby missing some of the best opportunities.

Comment by PeterMcCluskey on My thoughts on nanotechnology strategy research as an EA cause area · 2022-05-04T17:25:33.957Z · EA · GW

I have some relevant knowledge. I was involved in a relevant startup 20 years ago, but haven't paid much attention to this area recently.

My guess is that Drexlerian nanotech could probably be achieved in less than 10 years, but would need on the order of a billion dollars spent on an organization that's at least as competent as the Apollo program. As long as research is being done by a few labs that have just a couple of researchers, progress will likely continue to be too slow to need much attention.

It's unclear what would trigger that kind of spending and that kind of collection of experts.

Profit motives aren't doing much here, due to a combination of the long time to profitability and a low probability that whoever produces the first usable assembler will also produce one that's good enough for a large market share. I expect that the first usable assembler will be fairly hard to use, and that anyone who can get a copy will use it to produce better versions. That means any company that sells assemblers will have many customers who experiment with ways to compete.

Maybe some of the new crypto or Tesla billionaires will be willing to put up with those risks, or maybe they'll be deterred by the risks of nanotech causing a catastrophe.

Could a new cold war cause militaries to accelerate development? This seems like a medium-sized reason for concern.

What kind of nanotech safety efforts are needed?

I'm guessing the main need is for better think-tanks to advise politicians on military and political issues. That requires rather different skills than I or most EAs have.

There may be some need for technical knowledge on how to enforce arms control treaties.

There's some need for more research into grey goo risks. I don't think much has happened there since the ecophagy paper. Here's some old discussion about that paper: Hal Finney, Eliezer, me, Hal Finney

Comment by PeterMcCluskey on How Many People Are In The Invisible Graveyard? · 2022-04-22T20:29:08.793Z · EA · GW

>Acting without information on the relative effectiveness of the vaccine candidates was not a feasible strategy for mitigating the pandemic.

I'm pretty sure that with a sufficiently bad virus, it's safer to vaccinate before effectiveness is known. We ought to plan ahead for how to make such a decision.

Comment by PeterMcCluskey on How Many People Are In The Invisible Graveyard? · 2022-04-22T20:15:40.852Z · EA · GW

>This was the fastest vaccine rollout ever

Huh? 40 million doses of the 1957 flu vaccine were delivered within about 6 months of getting a virus sample to the US. Does that not count due to its similarity to existing vaccines?

Comment by PeterMcCluskey on Critique of OpenPhil's macroeconomic policy advocacy · 2022-03-25T17:20:36.032Z · EA · GW

Here are some of my reasons for disliking high inflation, which I think are similar to the reasons of most economists:

Inflation makes long-term agreements harder, since they become less useful unless indexed for inflation.

Inflation imposes costs on holding wealth in safe, liquid forms such as bank accounts, or dollar bills. That leads people to hold more wealth in inflation-proof forms such as real estate, and less in bank accounts, reducing their ability to handle emergencies.

Inflation creates a wide variety of transaction costs: stores need to change their price displays more often, consumers need to observe prices more frequently, people use ATMs more frequently, etc.

Inflation transfers wealth from people who stay in one job for a long time to people who frequently switch jobs.

When inflation is close to zero, these costs are offset by the effects of inflation on unemployment. Those employment effects are only important when wage increases are near zero, whereas the costs of inflation increase in proportion to the inflation rate.

Comment by PeterMcCluskey on Brain preservation to prevent involuntary death: a possible cause area · 2022-03-23T03:52:55.693Z · EA · GW

I don't see high value ways to donate money for this. The history of cryonics suggests that it's pretty hard to get more people to sign up. Cryonics seems to grow mainly from peer pressure, not research or marketing.

Comment by PeterMcCluskey on CE Research Report: Road Traffic Safety · 2022-03-11T18:40:02.603Z · EA · GW

I expect speed limits to hinder the adoption of robocars, without improving any robocar-related safety.

There's a simple way to make robocars err in the direction of excessive caution: hold the software company responsible for any crash it's involved in, unless it can prove someone else was unusually reckless. I expect some rule resembling that will be used.

Having speed limits on top of that will cause problems: robocars will have to drive slower than humans actually drive (annoying both their passengers and other drivers), even when it would sometimes be safe for them to drive faster than humans do. I'm unsure how important this effect will be.

Ideally, robocars will be programmed to have more complex rules about maximum speed than current laws are designed to handle.

Comment by PeterMcCluskey on CE Research Report: Road Traffic Safety · 2022-03-08T18:13:51.161Z · EA · GW

How much of this will become irrelevant when robocars replace human drivers? I suspect the most important impact of safety rules will be how they affect the timing of that transition. Additional rules might slow that down a bit.

Comment by PeterMcCluskey on Prediction Bank: A way around current prediction market regulations? · 2022-01-28T04:29:56.769Z · EA · GW

CFTC regulations have been at least as much of an obstacle as gambling laws. It's not obvious whether the CFTC would allow this strategy.

Comment by PeterMcCluskey on Two tentative concerns about OpenPhil's Macroeconomic Stabilization Policy work · 2022-01-04T00:09:38.742Z · EA · GW

You're mostly right. But I have some important caveats.

The Fed acted for several decades as if it were subject to political pressure to reduce inflation. Economists mostly agree that the optimal inflation rate is around 2%. Yet from 2008 to about 2019, the Fed acted as if that were an upper bound, not a target.

But that doesn't mean that we always need more political pressure for inflation. In the 1960s and 1970s, there was a fair amount of political pressure to increase monetary stimulus by whatever it took to reduce unemployment. That worked well when inflation was creeping up around 2 or 3%, but as it got higher it reduced economic stability without doing much for unemployment. So I don't want EAs to support unconditional increases in inflation. To the extent that we can do something valuable, it should be to focus more attention on achieving a goal such as 2% inflation or 4% NGDP growth.

I don't see signs that the pressure to keep inflation below 2% came from the rich. Rich people and companies mostly know how to do well in an inflationary environment. The pressure seems to be coming from fairly average voters who are focused on the prices of gas and meat, and from people who live on fixed pensions.

Economic theory doesn't lend much support to the idea that it's risky to have unusually large increases in the money supply. Most of the concern seems to come from people who assume the velocity of money is pretty stable. That assumption has often worked okay, but it was pretty far off in 2008 and 2020.

It's not clear why there would be much risk, as long as the Fed adjusts the money supply to maintain an inflation or NGDP target. You're correct to worry that the inflation of 2021 provides some reasons for concern about whether the Fed will do that. My impression is that the main problem was that the Fed committed in 2020 to a particular path of interest rates over the next few years, when its commitments ought to be focused on a target such as inflation or NGDP. This is an area where economists still have some important disagreements.

It's pretty clear that both unusually high and unusually low inflation cause important damage. Yet too many people worry about only one of these risks.

For more on this subject, read Sumner's book The Money Illusion (which I reviewed here).

Comment by PeterMcCluskey on Issues with Futarchy · 2021-10-10T21:54:38.274Z · EA · GW

Hanson reports estimates that under our current system, elites have about 16 times as much influence as the median person.

My guess is that under futarchy, the wealthy would have somewhere between 2 and 10 times as much influence on outcomes that are determined via trading.

You seem to disagree with at least one of those estimates. Can you clarify where you disagree?

Comment by PeterMcCluskey on The motivated reasoning critique of effective altruism · 2021-09-28T02:36:40.455Z · EA · GW

The original approach was rather erratic about finding high-value choices, and was weak at identifying the root causes of the biggest mistakes.

So participants would become more rational about flossing regularly, but rarely noticed that they weren't accomplishing much when they argued at length with people who were wrong on the internet. Noticing the latter often required asking embarrassing questions about their own motives, and sometimes realizing that they were less virtuous than they had assumed. People will, by default, tend to keep their attention away from questions like that.

The original approach reflected trends in academia to prioritize attention on behaviors that were most provably irrational, rather than on those that caused the most harm. Part of the reason that CFAR hasn't documented their successes well is that they've prioritized hard-to-measure changes.

Comment by PeterMcCluskey on The motivated reasoning critique of effective altruism · 2021-09-16T03:33:32.828Z · EA · GW

>To the best of my knowledge, internal CEAs rarely if ever turn up negative.

Here's one example of an EA org analyzing the effectiveness of their work, and concluding the impact sucked:

CFAR in 2012 focused on teaching EAs to be fluent in Bayesian reasoning, and more generally to follow the advice from the Sequences. CFAR observed that this had little impact, and after much trial and error abandoned large parts of that curriculum.

This wasn't a quantitative cost-effectiveness analysis. It was more a subjective impression of "we're not getting good enough results to save the world, we can do better". CFAR did do an RCT which showed disappointing results, but I doubt this was CFAR's main reason for change.

These lessons percolated out to LessWrong blogging, which now focuses less on Bayes' theorem and the Sequences, but without calling a lot of attention to the lessons.

I expect that most EAs who learned about CFAR after about 2014 underestimate the extent to which CFAR's initial strategies were wrong, and therefore underestimate the evidence that initial approaches to EA work are mistaken.

Comment by PeterMcCluskey on Decreasing populism and improving democracy, evidence-based policy, and rationality · 2021-08-01T03:14:34.841Z · EA · GW

It seems strange to call populism anti-democratic.

My understanding is that populists usually want more direct voter control over policy. The populist positions on immigration and international trade seem like stereotypical examples of conflicts where populists side with the average voter more than do the technocrats who they oppose.

Please don't equate anti-democratic with bad. It seems mostly good to have democratic control over the goals of public policy, but let's aim for less democratic control over factual claims.

Comment by PeterMcCluskey on What would a cheap, nutritious and enjoyable diet for the world's extreme poor people like? · 2021-07-13T22:13:08.591Z · EA · GW

I doubt that that study was able to tell whether the dietary changes improved nutrition. They don't appear to have looked at many nutrients, or figured out which nutrients the subjects were most deficient in. Even if they had quantified all important nutrients in the diet, nutrients in seeds are less bioavailable than nutrients in animal products (and that varies depending on how the seeds are prepared).

There's lots of somewhat relevant research, but it's hard to tell which of it is important, and maybe hard for the poor to figure out whether they ought to trust the information that comes from foreigners who claim to be trying to help.

I'll guess that more sweet potatoes ought to be high on any list of cheap improvements, and also suggest that small increases in fruit and seafood are usually valuable. But there will be lots of local variation in what's best.

Comment by PeterMcCluskey on Maybe Antivirals aren’t a Useful Priority for Pandemics? · 2021-06-20T16:29:07.990Z · EA · GW

Could much of the problem be due to the difficulty of starting treatment soon enough after infection?

Comment by PeterMcCluskey on A Viral License for AI Safety · 2021-06-12T03:13:19.379Z · EA · GW

I see some important promise in this idea, but it looks really hard to convert the broad principles into something that's both useful, and clear enough that a lawyer could decide whether his employer was obeying it.

Comment by PeterMcCluskey on Keynesian Altruism · 2020-09-15T02:31:36.122Z · EA · GW

10 years worth of cash sounds pretty unusual, at least for an EA charity.

But part of my point is that when stocks are low, the charity won't have enough of a cushion to do any investing, so it won't achieve the kind of returns that you'd expect from buying stocks at a no-worse-than-random time. E.g. I'd expect that a charity that tries to buy stocks would have bought around 2000 when the S&P was around 1400, sold some of that in 2003 when the S&P was around 1100 to make up for a shortfall in donations, bought again in 2007 at 1450, then sold again in 2009 at 1100. With patterns like that, it's easy to get negative returns.
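
To make that arithmetic concrete, here's a minimal sketch (in Python, using the approximate S&P levels from the example above) of the returns this forced buy-high/sell-low cycle produces:

```python
# A minimal sketch of the forced buy-high/sell-low pattern described above.
# S&P 500 levels are the approximate figures from the example.
trades = [
    ("buy", 2000, 1400),   # invest surplus donations near the peak
    ("sell", 2003, 1100),  # draw down reserves during a donation shortfall
    ("buy", 2007, 1450),   # reinvest once donations recover
    ("sell", 2009, 1100),  # draw down again in the next downturn
]

# Return of each forced round trip (buy, then sell):
for (_, y_buy, p_buy), (_, y_sell, p_sell) in zip(trades[::2], trades[1::2]):
    print(f"{y_buy}->{y_sell}: {p_sell / p_buy - 1:+.1%}")
# 2000->2003: -21.4%
# 2007->2009: -24.1%
```

Both forced round trips lose more than 20%; the charity's cash-flow pattern, not any failure of market timing skill, is what drives the negative returns.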

Individual investors often underperform markets for the same reason. They can avoid that by investing only what they're saving for retirement. However, charities generally shouldn't have anything equivalent to saving for retirement.

Comment by PeterMcCluskey on Keynesian Altruism · 2020-09-13T22:47:51.013Z · EA · GW
>1. Cash sitting in a charity bank account costs money, so if you have lots of it, invest some;

But the obvious ways to invest (i.e. stocks) work poorly when combined with countercyclical spending. Charities are normally risk-averse about investments because they have plenty of money to invest when stocks are high, but need to draw down reserves when stocks are low.

Comment by PeterMcCluskey on How to Fix Private Prisons and Immigration · 2020-06-13T17:46:06.768Z · EA · GW

>When I tell people that prisons and immigration should use a similar mechanism, they sometimes give me a look of concern. This concern is based on a misconception

I'll suggest that some people's concerns are due to an accurate intuition that your proposal will make it harder to hide the resemblance between prisons and immigration restrictions. Preventing people from immigrating looks to me fairly similar to imprisoning them in their current country.

Comment by PeterMcCluskey on Idea: statements on behalf of the general EA community · 2020-06-11T20:02:28.596Z · EA · GW

It would be much easier to make a single, more generic policy statement. Something like:

>When in doubt, assume that most EAs agree with whatever opinions are popular in London, Berkeley, and San Francisco.

Or maybe:

>When in doubt, assume that most EAs agree with the views expressed by the most prestigious academics.

Reaffirming this individually for every controversy would redirect attention (of whatever EAs are involved in the decision) away from core EA priorities.

Comment by PeterMcCluskey on Will protests lead to thousands of coronavirus deaths? · 2020-06-04T15:41:53.201Z · EA · GW

Another risk is that increased distrust impairs the ability of authorities to do test and trace in low-income neighborhoods, which seem to now be key areas where the pandemic is hardest to control.

Comment by PeterMcCluskey on Climate Change Is Neglected By EA · 2020-06-02T15:56:24.622Z · EA · GW

>EA is in danger of making itself a niche cause by loudly focusing on topics like x-risk

EA has been a niche cause, and changing that seems harder than solving climate change. Increased popularity would be useful, but shouldn't become a goal in and of itself.

If EAs should focus on climate change, my guess is that it should be a niche area within climate change. Maybe altering the albedo of buildings?

Comment by PeterMcCluskey on Policy idea: Incentivizing COVID-19 tracking app use with lottery tickets · 2020-04-24T18:05:04.088Z · EA · GW

How about having many locations that are open only to people who are running a tracking app?

I'm imagining that places such as restaurants, gyms, and airplanes could require that people use tracking apps in order to enter. Maybe the law should require that as a default for many locations, with the owners able to opt out if they post a conspicuous warning?

How hard would this be to enforce?

Comment by PeterMcCluskey on How Much Leverage Should Altruists Use? · 2020-01-11T20:50:55.788Z · EA · GW

Hmm. Maybe you're right. I guess I was thinking there was an important difference between "constant leverage" and infrequent rebalancing. But I guess that's a more complicated subject.

Comment by PeterMcCluskey on How Much Leverage Should Altruists Use? · 2020-01-08T23:41:48.805Z · EA · GW

See Colby Davis on the problems with leveraged ETFs.

Comment by PeterMcCluskey on How Much Leverage Should Altruists Use? · 2020-01-08T23:33:13.120Z · EA · GW

I like this post a good deal.

However, I think you overstate the benefits.

I like the idea of shorting the S&P and buying global ex-US stocks, but beware that past correlations between markets only provide a rough guess about future correlations.

I'm skeptical that managed futures will continue to do as well as backtesting suggests. Futures are new enough that there's likely been a moderate amount of learning among institutional investors that has been going on over the past couple of decades, so those markets are likely more efficient now than history suggests. Returns also depend on recognizing good managers, which tends to be harder than most people expect.

Startups might be good for some people, but it's generally hard to tell. Are you able to find startups before they apply to Y Combinator? Or do startups only come to you if they've been rejected by Y Combinator? Those are likely to have large effects on your expected returns. I've invested in about 10 early-stage startups over a period of 20 years, and I still have little idea of what returns to expect from my future startup investments.

I'm skeptical that momentum funds work well. Momentum strategies work if implemented really well, but a fund that tries to automate the strategy via simple rules is likely to lose the benefits to transaction costs and to other traders who anticipate the fund's trades. Or if it operates without simple rules, most investors won't be able to tell whether it's a good fund. And if the strategy becomes too popular, that can easily cause returns to become significantly negative (whereas with value strategies, popularity will more likely drive returns to approximately the same as the overall market).

Comment by PeterMcCluskey on 2019 AI Alignment Literature Review and Charity Comparison · 2019-12-26T17:28:17.673Z · EA · GW

Nearly all of CFAR's activity is motivated by its effects on people who are likely to impact AI. As a donor, I don't distinguish much between the various types of workshops.

There are many ways that people can impact AI, and I presume the different types of workshop are slightly optimized for different strategies and different skills, and differ a bit in how strongly they're selecting for people who have a high probability of doing AI-relevant things. CFAR likely doesn't have a good prediction in advance about whether any individual person will prioritize AI, and we shouldn't expect them to try to admit only those with high probabilities of working on AI-related tasks.

Comment by PeterMcCluskey on 2019 AI Alignment Literature Review and Charity Comparison · 2019-12-24T00:49:55.367Z · EA · GW

OAK intends to train people who are likely to have important impacts on AI, to help them be kinder or something like that. So I see a good deal of overlap with the reasons why CFAR is valuable.

I attended a 2-day OAK retreat. It was run in a professional manner that suggests they'll provide a good deal of benefit to people who they train. But my intuition is that the impact will be mainly to make those people happier, and I expect that OAK's impact will have less effect on peoples' behavior than CFAR has.

I considered donating to OAK as an EA charity, but have decided it isn't quite effective enough for me to treat it that way.

I believe that the person who promoted that grant at SFF has more experience with OAK than I do.

I'm surprised that SFF gave more to OAK than to ALLFED.

Comment by PeterMcCluskey on The Future of Earning to Give · 2019-10-25T00:14:02.294Z · EA · GW

With almost all of those proposed intermediate goals, it's substantially harder to evaluate whether the goal will produce much value. In most cases, it will be tempting to define the intermediate goal in a way that is easy to measure, even when doing so weakens the connection between the goal and health.

E.g. good biomarkers of aging would be very valuable if they measure what we hope they measure. But your XPrize link suggests that people will be tempted to use expert acceptance in place of hard data. The benefits of biomarkers have been frequently overstated.

It's clear that most donors want prizes to have a high likelihood of being awarded fairly soon. But I see that desire as generally unrelated to a desire for maximizing health benefits. I'm guessing it indicates that donors prefer quick results over high-value results, and/or that they overestimate their knowledge of which intermediate steps are valuable.

A $10 million aging prize from an unknown charity might have serious credibility problems, but I expect that a $5 billion prize from the Gates Foundation or OpenPhil would be fairly credible - they wouldn't actually offer the prize without first getting some competent researchers to support it, and they'd likely first try out some smaller prizes in easier domains.

Comment by PeterMcCluskey on The Future of Earning to Give · 2019-10-14T18:39:00.317Z · EA · GW

I agree with most of your comment.

>Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.

That seems like almost the opposite of what the 80k post says. It says the people who get hired are not very replaceable. But it also appears to say that people who get evaluated as average by EA orgs are 2 or more standard deviations less productive, which seems to imply that they're pretty replaceable.

Comment by PeterMcCluskey on The Future of Earning to Give · 2019-10-14T18:04:20.332Z · EA · GW

Yes, large donors more often reach diminishing returns on each recipient than do small donors. The one charity heuristic is mainly appropriate for people who are donating $50k per year or less.

Comment by PeterMcCluskey on The Future of Earning to Give · 2019-10-14T18:02:43.281Z · EA · GW

Yes. The post Drowning children are rare seemed to be saying that OPP was capable of making most EA donations unimportant. I'm arguing that we should reject that conclusion, even if many of that post's points are correct.

Comment by PeterMcCluskey on X-risk dollars -> Andrew Yang? · 2019-10-13T02:52:02.261Z · EA · GW

They may not have budged climate scientists, but there are other ways they may have influenced policy. Did they (or other partisans) alter the outcomes of Washington Initiative 1631 or 732? That seems hard to evaluate.

Comment by PeterMcCluskey on [Link] The Case for Charter Cities Within the EA Framework (CCI) · 2019-09-24T19:12:39.146Z · EA · GW

Most of their analysis looks right.

But they implicitly assume a 100% chance of generating a charter city with better institutions than the host country, given a certain amount of effort on their part. I'd be eagerly donating to them if I believed that. But I expect most countries have political problems which will cause them to reject any charter city effort that comes from outside their country.

I'll estimate a less than 5% chance that any US-based charity will catalyze the creation of a charter city in another country; if such a charter city is created, I'll estimate maybe a 50% chance of it having better institutions than the host country. Multiplying those gives at most a 2.5% chance of success, so I'm dividing their expected impact estimates by about 50 or 100.

Comment by PeterMcCluskey on Why were people skeptical about RAISE? · 2019-09-06T12:12:48.087Z · EA · GW

I meant something like "good enough to look like a MIRI researcher, but unlikely to turn out to be more productive than the average MIRI researcher". I guess when I wrote that I was feeling somewhat pessimistic about MIRI's hiring process. Given optimistic assumptions about how well MIRI distinguishes good from bad job applicants, I'd expect that MIRI wouldn't hire RAISE graduates.

Comment by PeterMcCluskey on Are we living at the most influential time in history? · 2019-09-05T18:46:20.403Z · EA · GW

I agree with most of your reasoning, but disagree significantly about this:

>The case for focusing on AI safety and existential risk reduction is much weaker if you live in a simulation than if you don’t.

It's true that a pure utilitarian would expect about an order of magnitude less utility from x-risk reduction if we have a 90% chance of being in a simulation compared to a zero chance of being in a simulation. But the pure utilitarian case for x-risk reduction isn't very sensitive to an order of magnitude change in utility, since the expected utility seems many orders of magnitude larger than what's needed to convince a pure utilitarian to focus on x-risks.
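
A rough sketch of that arithmetic, assuming (as the order-of-magnitude claim implies) that x-risk reduction is worth approximately nothing inside a simulation:

$$
\frac{EU_{\,P(\text{sim})=0.9}}{EU_{\,P(\text{sim})=0}}
= \frac{0.9 \cdot 0 + 0.1 \cdot V}{1 \cdot V}
= 0.1
$$

where $V$ is the utility of x-risk reduction conditional on being in base reality.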

From a more selfish perspective, being in a simulation increases my desire to be involved in events that are interesting to the simulators, in case such people get simulated in more detail.

I'm somewhat concerned that being influenced much by the simulation hypothesis increases the risk that the simulation will be shut down, which seems like weak evidence for caution about altering my behavior much in response to the simulation hypothesis.

For these reasons, and WilliamKiely's comments about priors, I want to treat HoH as more than 1% likely.

Comment by PeterMcCluskey on Why were people skeptical about RAISE? · 2019-09-04T13:29:09.535Z · EA · GW

>anyone capable of significantly contributing wouldn't need an on-ramp

That's approximately why I was skeptical, although I want to frame it a bit differently. I expect that the most valuable contributions to AI safety will involve generating new paradigms, asking questions that nobody has yet thought to ask, or something like that. It's hard to teach the skills that are valuable for that.

I got the impression that RAISE was mostly oriented toward producing people who become typical MIRI researchers. Even if MIRI's paradigm is the right one, I expect that MIRI needs atypically good researchers, and would only get minor benefits from someone who is struggling to become a typical MIRI researcher.


Comment by PeterMcCluskey on Age-Weighted Voting · 2019-07-12T19:25:35.245Z · EA · GW

War is more likely when the population has a higher fraction of young men (e.g. see Angry Young Men Are Making the World Less Stable). That doesn't quite say that young men vote more for war, but it's suggestive.

More war could easily overwhelm any benefits from weighted voting.

Comment by PeterMcCluskey on Increase Impact by Waiting for a Recession to Donate or Invest in a Cause. · 2019-06-23T15:43:40.085Z · EA · GW

IPOs are strongly dependent on an expanding economy. Cryptocurrency bubbles are somewhat more likely in an expanding economy.

The impact of IPOs and Bitcoin on other markets is much smaller than the impact of the economy on IPOs and Bitcoin.

Comment by PeterMcCluskey on Increase Impact by Waiting for a Recession to Donate or Invest in a Cause. · 2019-06-21T15:40:05.954Z · EA · GW

I'll guess that EA giving is a bit more sensitive to the economy than other giving, because a disproportionate amount of EA giving comes from IPO-related wealth and cryptocurrency bubbles.