Posts

Open Thread #39 2017-10-23T20:26:23.277Z · score: 4 (4 votes)
Looking at how Superforecasting might improve some EA projects response to Superintelligence 2017-08-29T22:22:26.206Z · score: 2 (4 votes)
Open Thread #38 2017-08-22T10:01:15.557Z · score: 5 (5 votes)
Introducing Improving autonomy 2017-08-10T08:00:04.207Z · score: 4 (8 votes)
Towards a measure of Autonomy and what it means for EA 2017-07-21T10:35:11.952Z · score: 1 (1 votes)

Comments

Comment by willpearson on 8 things I believe about climate change · 2020-01-12T23:00:56.594Z · score: 1 (1 votes) · EA · GW

I found this report on adaptation, which suggests that adaptation with some forethought will be better than waiting for problems to get worse. It talks about things other than crops too. The headlines:

  • Without adaptation, climate change may depress growth in global agriculture yields up to 30 percent by 2050. The 500 million small farms around the world will be most affected.
  • The number of people who may lack sufficient water, at least one month per year, will soar from 3.6 billion today to more than 5 billion by 2050.
  • Rising seas and greater storm surges could force hundreds of millions of people in coastal cities from their homes, with a total cost to coastal urban areas of more than $1 trillion each year by 2050.
  • Climate change could push more than 100 million people within developing countries below the poverty line by 2030. The costs of climate change on people and the economy are clear. The toll on human life is irrefutable. The question is how will the world respond: Will we delay and pay more or plan ahead and prosper?
Comment by willpearson on On Collapse Risk (C-Risk) · 2020-01-02T18:02:30.881Z · score: 1 (1 votes) · EA · GW

I've been thinking for a while that civilisational collapse scenarios affect some of the common assumptions about the expected value of movement building or saving for effective altruism. This has knock-on implications for when things are most hingey.

Comment by willpearson on 8 things I believe about climate change · 2019-12-29T13:46:42.317Z · score: 4 (3 votes) · EA · GW
That said, I personally would be quite surprised if worldwide crop yields actually ended up decreasing by 10-30%. (Not an informed opinion, just vague intuitions about econ).

I hope they won't either, if we manage to develop the changes we need before we need them. Economics isn't magic.

But I wanted to point out that adaptation will probably also have costs: the cost of preventing the deaths that food shortages would otherwise cause. Are those costs bigger or smaller than those of mitigation by reducing CO2 output, or of geoengineering?

To my knowledge this case hasn't been made either way, and making it could help allocate resources effectively.

Comment by willpearson on 8 things I believe about climate change · 2019-12-29T12:38:32.451Z · score: 2 (2 votes) · EA · GW

Are there any states that have committed to doing geoengineering, or even experimenting with geoengineering, if mitigation fails?

Having some publicly stated sufficient strategy would convince me that this was not a neglected area.

Comment by willpearson on 8 things I believe about climate change · 2019-12-28T21:14:15.350Z · score: 4 (3 votes) · EA · GW

I'm expecting the richer nations to adapt more easily, so I'm expecting a swing away from food production in the less rich nations, as poorer farmers would have a harder time adapting as their farms get less productive (and they have less food to sell). Also, farmers with now-unproductive land would struggle to buy food on the open market.

I'd be happy to be pointed to the people thinking about this and planning to have funding for solving this problem. Who will fund teaching subsistence rice farmers (of all nationalities) how to farm different crops they are not used to, and providing tools and processing equipment for the new crops? Most people interested in climate change whom I have met are still in the hopeful mitigation phase, and if they are thinking about adaptation it is about their own localities.

This might not be a pressing problem now[1], but it could be worth having charities learning in the space about how to do it well (or how to help with migration if land becomes uninhabitable).

[1] https://blogs.ei.columbia.edu/2018/07/25/climate-change-food-agriculture/ suggests that some rice producing regions might have problems soon

Comment by willpearson on 8 things I believe about climate change · 2019-12-28T12:44:18.213Z · score: 7 (5 votes) · EA · GW

On 1): I'm not able to read the full text of the Impact Lab report, but it seems they model only the link between heat and mortality, not the impact of heat on crop production and the knock-on health problems that would cause. E.g. http://dels.nas.edu/resources/static-assets/materials-based-on-reports/booklets/warming_world_final.pdf suggests that each degree of warming would reduce current crop yields by 5-15%. So 4 degrees of warming (the baseline according to https://climateactiontracker.org/global/temperatures/) would mean a 20-60% reduction in world food supply.

If governments stick to their policies (which they have been notoriously bad at so far), the reduction would only be 10-30%. I'd expect even a 10% decrease to have massive knock-on effects on the nutrition and mortality of the world. I expect that is not included in the Impact Lab report because it is very hard for papers to encompass the entire scope of the climate crisis.
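
A minimal sketch of the arithmetic behind the two figures above, assuming (as the comment implicitly does) that the 5-15% per-degree yield loss scales linearly with warming; the linked report doesn't state that scaling, so treat this as a back-of-the-envelope check only:

```python
def yield_reduction(degrees_warming, loss_per_degree):
    """Linear extrapolation of crop-yield loss with warming (the comment's implicit assumption)."""
    return degrees_warming * loss_per_degree

for degrees in (4.0, 2.0):  # the two warming levels implicit in the figures above
    low, high = yield_reduction(degrees, 0.05), yield_reduction(degrees, 0.15)
    print(f"{degrees:.0f} C warming: {low:.0%}-{high:.0%} yield reduction")
# 4 C -> 20%-60%; 2 C -> 10%-30%. The 10-30% "policies" figure corresponds
# to 2 C of warming under this linear assumption.
```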

Of course there could be a lot of changes to how and where we grow crops to avoid these problems, but making sure that we manage this transition well, so that people in the global south can adopt the appropriate crops for whatever their climate becomes, seems like something that could use some detailed analysis. It seems neglected as far as I can tell, and there may be simple things we can do to help. It is not mainstream climate change mitigation though, so it might fit your bill?



Comment by willpearson on Are we living at the most influential time in history? · 2019-10-17T13:12:24.663Z · score: 1 (1 votes) · EA · GW

As currently defined, longtermists have two possible choices:

  1. Direct work to reduce X-risk
  2. Investing for the future (by saving or movement building) to then spend on reduction of x-risk at a later date

There are, however, other actions that may be more beneficial.

Let us look again at the definition of influential:

a time ti is more influential (from a longtermist perspective) than a time tj iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at ti rather than to a longtermist altruist living at tj.

While direct work is not formally defined, here it mainly refers to near-term existential risk mitigation.

The most obvious implication, however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building.

What happens if the answer is neither option? What other levers do we have on the future? One is that we might be able to take actions that change the expected rate of return. Perhaps the expected rate of return on investments is very bad, but there are actions you can take to increase it to more normal levels. Or there is low-hanging fruit for vastly increasing the expected rate of return on investments in the long term.

So let us introduce another type of influentialness (and another version of hingeyness): influential-i, where the most influential actions at a time are ones that attempt to affect investment rates.

a time ti is more influential-i (from a longtermist perspective) than a time tj iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work to alter investment rates (rather than normal investment itself or X-risk reduction), to a longtermist altruist living at ti rather than to a longtermist altruist living at tj.

So what would increase the rate of return on investment? New energy sources with a high energy return on energy invested could do it. For example, if you manage to help invent nuclear fusion, you would increase the amount of cleaner energy available to civilisation, giving more resources to future altruists to use to solve problems.
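
A minimal illustration (my own, with made-up numbers) of why acting on the rate of return can matter so much from a longtermist perspective: resources compound, so a small change in the rate eventually swamps a larger change in the initial stake:

```python
def future_resources(initial, annual_return, years):
    """Resources available to future altruists after compounding."""
    return initial * (1 + annual_return) ** years

HORIZON = 200  # years -- an arbitrary long-termist horizon

base = future_resources(1.0, 0.02, HORIZON)
higher_rate = future_resources(1.0, 0.03, HORIZON)  # return raised by one percentage point
bigger_pot = future_resources(2.0, 0.02, HORIZON)   # initial resources doubled instead

print(round(base, 1), round(higher_rate, 1), round(bigger_pot, 1))
# ~52.5 vs ~369.4 vs ~105.0: over long horizons, a small change in the rate
# of return swamps even a large change in the resources invested.
```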

Avoiding vast decreases in the rate of return would mean actions that manage to stave off civilisational collapse. Civilisational collapse should be a lot higher on the radar for longtermist altruists, as it is:

  1. More likely than existential risk (considering civilisations have collapsed in the past and there are inside views for it happening in the future)
  2. Likely to cause a collapse of the effective altruism movement as well, as people focus lower on Maslow's hierarchy of needs.
  3. Likely to cause hyperinflation (at the minimum), wiping out savings.
  4. Likely to lower or destroy existing existential risk mitigation efforts (e.g. meteorite monitoring programs).

The long-termist community would do well to look at these options when thinking about the time frame of hundreds of years.

Comment by willpearson on Critique of Superintelligence Part 2 · 2018-12-19T11:22:32.122Z · score: 2 (2 votes) · EA · GW
Let's say they only mail you as much protein as one full human genome.

This doesn't make sense. Do you mean proteome? There is not a 1-1 mapping between genome and proteome. There are at least 20,000 different proteins in the human proteome; it might be quite noticeable (and tie up the expensive protein-producing machines) if there were 20,000 orders in a day. I don't know the size of the market, so I may be off about that.

I will be impressed if the AI manages to make a biological nanotech that is not immediately eaten up or accidentally sabotaged by the soup of hostile nanotech that we swim in all the time.

There is a lot of uranium in the sea only because there is a lot of sea. From the pages I have found, there are only 3 micrograms of uranium per litre, and 0.72 percent of that is U-235. To get the U-235 required for a single bomb (50 kg at 80% enrichment) you would need to process roughly 18 km³ of sea water, or 1.8 × 10^13 litres.

This would be pretty noticeable if done on a short time scale (you might also have trouble with depleting the sea locally if you couldn't wait for diffusion to even out the concentrations globally).

To build 1 million nukes you would need more sea water than is in the Mediterranean (3.75 million km³).
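
A rough check of the seawater arithmetic, using the figures in the comment plus an overall recovery-efficiency assumption of my own (the comment doesn't state one): with perfect recovery the answer is roughly 2 km³ per bomb, and a recovery of about 10% reproduces the ~18 km³ figure above.

```python
U_CONC_UG_PER_L = 3.0   # micrograms of uranium per litre of seawater
U235_FRACTION = 0.0072  # fraction of natural uranium that is U-235
BOMB_KG = 50.0          # mass of weapon material
ENRICHMENT = 0.80       # fraction of that mass that is U-235
RECOVERY = 0.10         # assumed overall extraction/enrichment efficiency (my assumption)

u235_needed_ug = BOMB_KG * ENRICHMENT * 1e9        # kg -> micrograms
natural_u_ug = u235_needed_ug / U235_FRACTION / RECOVERY
litres = natural_u_ug / U_CONC_UG_PER_L
km3 = litres / 1e12                                # 1 km^3 = 1e12 litres

print(f"{litres:.2e} litres ~ {km3:.1f} km^3 of seawater per bomb")
# ~1.9e13 litres, ~19 km^3 with the 10% recovery assumption;
# ~1.9 km^3 if recovery were perfect.
```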

Comment by willpearson on Impact Investing - A Viable Option for EAs? · 2018-07-23T01:00:24.885Z · score: 2 (2 votes) · EA · GW

There might be a further consideration: people might not start or fund impactful startups if there weren't a good chance of getting investment. The initial investors (if not impact-oriented) might still be counting on impact-oriented people to buy the investment. So while each individual impact investor is not doing much in isolation, collectively they are creating a market for things that might not get funded otherwise. How you account for that I'm not sure.

Comment by willpearson on Open Thread #40 · 2018-07-15T18:55:10.208Z · score: 1 (1 votes) · EA · GW

It might be worth looking at the domains where it might be less worthwhile (formal chaotic systems, or systems with many sign flipping crucial considerations). If you can show that trying to make cost-effectiveness based decisions in such environments is not worth it, that might strengthen your case.

Comment by willpearson on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-07-04T14:17:49.657Z · score: 0 (2 votes) · EA · GW

Hi Gregory,

A couple of musings generated by your comment.

2: I don’t think there’s a neat distinction between ‘technical dangerous information’ and ‘broader ideas about possible risks’, with the latter being generally safe to publicise and discuss.

I have this idea of independent infrastructure: trying to make infrastructure (electricity/water/food/computing) that is on a smaller scale than current infrastructure. This is for a number of reasons, one of which is mitigating risks. How should I build broad-scale support for my ideas without talking about the risks I am mitigating?

4.1: In addition to the considerations around the unilateralist’s curse offered by Brian Wang (I have written a bit about this in the context of biotechnology here) there is also an asymmetry in the sense that it is much easier to disclose previously-secret information than make previously-disclosed information secret. The irreversibility of disclosure warrants further caution in cases of uncertainty like this.

Although in some scenarios non-disclosure is irreversible as well, as conditions change. Consider if someone had had the idea of hacking a computer early on and had managed to convince the designers of C to create more secure array indexing, and also convinced everyone not to use other insecure languages. We would not now be fighting the network effect of all the bad C code when trying to get people to program computers securely.

This irreversibility of non-disclosure seems to occur only if something is not a huge threat right now but may become more so as the technology develops and gets more widely used and locked in. It is not really relevant to the biotech arena, at least not in any way I can think of immediately. But it is an interesting scenario nonetheless.

Comment by willpearson on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-27T19:50:21.842Z · score: 0 (0 votes) · EA · GW

For people outside of EA, I think those who are in possession of info hazard-y content are much more likely to be embedded in some sort of larger institution (e.g., a research scientist or a journal editor looking to publish something), where perhaps the best leverage is setting up certain policies, rather than trying to teach everyone the unilateralist's curse.

There is a growing movement of makers and citizen scientists who are working on new technologies. It might be worth targeting them somewhat (although again probably without the math). I think the approaches for EA/non-EA seem sensible.

You're right, strict consensus is the wrong prescription. A vote is probably better. I wonder if there's mathematical modeling that you could do that would determine what fraction of votes is optimal, in order to minimize the harms of the standard unilateralist's curse and the curse in reverse? Is it a majority vote? A 2/3s vote? I suspect this will depend on what the "true sign" of releasing the potentially dangerous info is likely to be; the more likely it is to be negative, the higher bar you should be expected to clear before releasing.

I would also like to weigh the downside of not releasing the information. If you don't release information, you are making everyone make marginally worse decisions (if you think someone will release it anyway later). In the nuclear fusion example, you would think that everyone currently building new nuclear fission stations is wasting their time, that people training to manage coal plants should be training on something else, etc.
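
A sketch of the kind of modelling the quoted comment asks about, extended to include the cost of wrongly withholding raised above. It assumes five discussants with independent, normally distributed errors in their estimates of the information's value; all parameters are illustrative, not drawn from the original discussion:

```python
import random

def release_error_rate(true_value, n_voters, votes_needed, error_sd=1.0, trials=20000):
    """Fraction of trials in which a k-of-N vote gets the release decision wrong."""
    wrong = 0
    for _ in range(trials):
        yes_votes = sum(
            random.gauss(true_value, error_sd) > 0 for _ in range(n_voters)
        )
        released = yes_votes >= votes_needed
        should_release = true_value > 0
        wrong += released != should_release
    return wrong / trials

for threshold in (1, 3, 5):  # 1 = any single discussant, 3 = majority of 5, 5 = unanimity
    harmful = release_error_rate(true_value=-0.5, n_voters=5, votes_needed=threshold)
    beneficial = release_error_rate(true_value=+0.5, n_voters=5, votes_needed=threshold)
    print(f"threshold {threshold}: wrongly released {harmful:.2f}, wrongly withheld {beneficial:.2f}")
# Low thresholds release harmful info too often; unanimity withholds beneficial
# info too often. The best threshold depends on the expected sign of the info
# and the relative costs of the two kinds of mistake.
```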

I also have another consideration, which is possibly more controversial. I think we need some bias towards action, because it seems like we can't go on as we are for too much longer (another 1,000 years might be pushing it). The level of resources and coordination towards global problems fielded by the status quo seems insufficient, so the default outcome is bad.

With this consideration, going back to the fusion pioneers, they might try and find people to tell so that they could increase the bus factor (the number of people that would have to die to lose the knowledge). They wouldn't want the knowledge to get lost (as it would be needed in the long term) and they would want to make sure that whoever they told understood the import and potential downsides of the technology.

Edit: Knowing the sign of an intervention is hard, even after the fact. Consider the invention and spread of the knowledge about nuclear chain reactions. Without it we would probably be burning a lot more fossil fuels, however with it we have the existential risk associated with it. If that risk never pays out, then it may have been a spur towards greater coordination and peace.

I'll try to formalise these thoughts at some point, but I am a bit work-impaired for a while.

Comment by willpearson on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-25T19:12:53.517Z · score: 2 (2 votes) · EA · GW

Ah right. I suppose the unilateralist's curse is only a problem insofar as there are a number of other actors also capable of releasing the information; if you are a single actor then the curse doesn't really apply. Although one wrinkle might be considering the unilateralist's curse with regards to different actors through time (i.e., erring on the side of caution with the expectation that other actors in the future will gain access to and might release the information), but coordination in this case might be more challenging.

Interesting idea. This may be worth trying to develop more fully?

Probably it's best to discuss privately with a number of other trusted individuals first, who also understand the unilateralist's curse,

I'm still coming at this from a lens of "actionable advice for people not in EA". It might be that the person doesn't know many other trusted individuals; what should the advice be then? It would probably also be worth giving advice on how to have the conversation. The original article gives some advice on what happens if consensus can't be reached (voting and such like).

As I understand it, you shouldn't wait for consensus, else you have the unilateralist's curse in reverse: someone pessimistic about an intervention can block the deployment of an intervention needed to avoid disaster (this seems very possible if you consider crucial considerations flipping signs, rather than just random noise in beliefs about desirability).

Would you suggest discussion and vote (assuming no other courses of action can be agreed upon)? Do you see the need to correct for status quo bias in any way?

This seems very important to get right. I'll think about this some more.

Comment by willpearson on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-24T14:13:53.140Z · score: 2 (2 votes) · EA · GW

My understanding is that it applies regardless of whether or not you expect others to have the same information. All it requires is a number of actors making independent decisions, with randomly distributed error, with a unilaterally made decision having potentially negative consequences for all.

Information determines the decisions that can be made. For example you can't spread the knowledge of how to create effective nuclear fusion without the information on how to make it.

If there is a single person with the knowledge of how to create safe, efficient nuclear fusion, they cannot expect other people to release it on their behalf. They may expect it to be net positive, but they also expect some downsides and are unsure whether it will be net good or not. To give a potential downside of nuclear fusion, let us say they are worried about wide-scale deployment creating more excess heat than the earth can dissipate (even if it fixes global warming due to trapping solar energy, it might cause another heat-related problem). I forget the technical term for this, unfortunately.

The fusion expert(s) cannot expect other people to release this information for them, for as far as they know they are the only people making that exact decision.

I'm also not sure how the strategy of "preemptively release, but mitigate" would work in practice. Does this mean release potentially dangerous information, but with the most dangerous parts redacted? Release with lots of safety caveats inserted? How does this preclude the further release of the unmitigated info?

What the researcher can do is try to build consensus and lobby for a collective decision-making body on the internal climate heating (ICH) problem, planning to release the information when they are satisfied that there is going to be a solution in time to fix the problem when it occurs.

If they find a greater than expected number of people lobbying for solutions to the ICH problem, then they can expect they are in a unilateralist's curse scenario. And they may want to hold off on releasing information even when they are satisfied with the way things are going (in case there is some other issue they have not thought of).

They can look to see what the other people who have been helping with ICH are doing, and see if there are other initiatives they are starting that may or may not be to do with the advent of nuclear fusion.

I think I am also objecting to the expected payoff being thought of as a fixed quantity. You can either learn more about the world to alter your knowledge of the payoff, or try to introduce things/institutions into the world to alter the expected payoff. Building useful institutions may rely on releasing some knowledge, which is where things become more hairy.

I've not had the best luck reaching out to talk to people about my ideas. I expect that the majority of new ideas will come from people not heavily inside the group and thus less influenced by groupthink. So you might want to think of solutions that take that into consideration.

I'm not sure I'm fully understanding you here. If you're saying that the majority of potentially dangerous ideas will originate in those who don't know what the unilateralist's curse is, then I agree –– but I think this is just all the more reason to try to spread norms of consensus.

I was suggesting that more norm spreading should be done outwards, keeping it simple and avoiding too much jargon. Is there a presentation of the unilateralist's curse aimed at microbiologists, for example?

Also, as the unilateralist's curse paper suggests, discussing with other people so that they can undertake the information release sometimes increases the expectation of a bad outcome. How should consensus be reached in those situations?

Increasing the number of agents capable of undertaking the initiative also exacerbates the problem: as N grows, the likelihood of someone proceeding incorrectly increases monotonically towards 1. The magnitude of this effect can be quite large even for relatively small number of agents. For example, with the same error assumptions as above, if the true value of the initiative V* = -1 (the initiative is undesirable), then the probability of erroneously undertaking the initiative grows rapidly with N, passing 50% for just 4 agents.
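
A quick check of the quoted figure, assuming independent, normally distributed estimation errors with standard deviation 1 around the true value (my reading of "the same error assumptions as above"); under that assumption the probability is already roughly 50% at N = 4:

```python
from math import erf, sqrt

def p_erroneous_release(n_agents, true_value=-1.0, error_sd=1.0):
    """Probability that at least one of N agents misjudges a net-negative
    initiative as positive, given independent normal estimation errors."""
    p_single = 0.5 * (1 - erf((0 - true_value) / (error_sd * sqrt(2))))
    return 1 - (1 - p_single) ** n_agents

for n in (1, 2, 4, 8, 16):
    print(n, round(p_erroneous_release(n), 3))
# 1: 0.159, 2: 0.292, 4: 0.499, 8: 0.749, 16: 0.937 -- rising towards 1 as N grows.
```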

Comment by willpearson on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-23T21:12:57.343Z · score: 0 (2 votes) · EA · GW

The unilateralist's curse only applies if you expect other people to have the same information as you, right?

You can figure out whether they have the same information as you by seeing whether they are concerned about the same things you are, i.e. by looking at the mitigations people are attempting. Altruists in a unilateralist's curse position should be attempting mitigations, because they should expect someone less cautious than them to unleash the information, or because they want to unleash the information themselves and are mitigating the downsides until they think it is safe.

At the very least, you should privately discuss with several others and see if you can reach a consensus.

I've not had the best luck reaching out to talk to people about my ideas. I expect that the majority of new ideas will come from people not heavily inside the group and thus less influenced by groupthink. So you might want to think of solutions that take that into consideration.

Comment by willpearson on Ineffective entrepreneurship: post-mortem of Hippo, the happiness app that never quite was · 2018-05-23T21:46:58.347Z · score: 2 (2 votes) · EA · GW

Thanks for writing this up! I've forwarded it to a friend who was interested in the happiness app space a while back.

I would add to the advice, from my experience: pick something not too far out of people's comfort zones for a startup or research idea. There seems to be a horizon beyond which you don't get feedback or help at all.

Comment by willpearson on An Argument To Prioritize "Positively Shaping the Development of Crypto-assets" · 2018-04-05T10:39:08.401Z · score: 0 (0 votes) · EA · GW

I think it is possible that blockchain can help us solve some coordination problems. However, it also introduces new ones (e.g. which fork of a chain or version of the protocol you should go with).

So I am torn. It would be good to see one successful use, or solid proposal, of the technology for solving our real-world coordination problems using Ethereum.

Something I am keeping an eye on is the economic space agency

Comment by willpearson on Doing good while clueless · 2018-02-15T23:12:16.231Z · score: 0 (0 votes) · EA · GW

I would add something like "Sensitivity" to the list of attributes needed to navigate the world.

This is different from Predictive Power. You can imagine two ships with the exact same compute power and Predictive Power, one with cameras on the outside and long-range sensors, and one blind without them. You'd expect the first to do a lot better moving about the world.

In Effective Altruism's case I suspect this would be things like the basic empirical research about the state of the world and the things important to their goals.

Comment by willpearson on Open Thread #39 · 2018-02-12T21:15:54.806Z · score: 2 (2 votes) · EA · GW

I'm thinking about radically more secure computer architectures as a cause area.

  1. Radical architecture changes are neglected because it is hard to change computer architecture.
  2. Bad computer security costs a fair amount at the moment.
  3. Having a computer architecture that is insecure is making it hard to adopt more useful technology like the Internet of Things.

I'd be interested in doing an analysis of whether it is an effective altruist cause. I'm just doing it as a hobby at the moment. Does anyone interested in the same region want to collaborate?

Comment by willpearson on Could I have some more systemic change, please, sir? · 2018-01-23T20:00:40.293Z · score: 1 (1 votes) · EA · GW

There are some systemic reforms that seem easier to reason about than others. Getting governments to agree a tax scheme such that the Googles and Facebooks of the world can't hide their profits seems like a pretty good idea. Their money piles suggest that they aren't hurting for cash to invest in innovation. It is hard to see the downside.

The upside is going to be less in the developing world than the developed world (due to more profits occurring in the developed world), so it may not be ideal. The Tax Justice Network is something I want to follow more. They had a conversation with GiveWell.

Comment by willpearson on Open Thread #39 · 2017-12-02T22:28:18.561Z · score: 0 (0 votes) · EA · GW

I'm thinking about funding an analysis of the link between autonomy and happiness.

I have seen papers like

https://academic.oup.com/heapro/article/28/2/166/661129

and http://www.apa.org/pubs/journals/releases/psp-101-1-164.pdf

I am interested in how reproducible and reliable they are and I was wondering if I could convert money into an analysis of the methodology used in (some of) these papers.

As I respect EAs' analytical skills (and hope there is a shared interest in happiness and truth), I thought I would ask here.

Comment by willpearson on Inadequacy and Modesty · 2017-10-31T16:41:30.063Z · score: 0 (0 votes) · EA · GW

In the context of the measurement problem: If the idea is that we may be able to explain the Born rule by revising our understanding of what the QM formalism corresponds to in reality (e.g., by saying that some hidden-variables theory is true and therefore the wave function may not be the whole story, may not be the kind of thing we'd naively think it is, etc.), then I'd be interested to hear more details.

Heh, I'm in danger of getting nerd-sniped into physics land, which would be a multi-year journey. I found myself trying to figure out whether the stories in this paper count as real macroscopic worlds or not (or hidden variables). And then I tried to figure out whether it matters or not.

I'm going to bow out here. I mainly wanted to point out that there are more possibilities than just believe in Copenhagen and believe in Everett.

Comment by willpearson on Inadequacy and Modesty · 2017-10-31T14:49:53.474Z · score: 0 (0 votes) · EA · GW

Ah, it has been a while since I engaged with this stuff. That makes sense. I think we are talking past each other a bit, though. I've adopted a moderately modest approach to QM since I've not touched it in a while, and I expect the debate has moved on.

We started from a criticism of a particular position (the Copenhagen interpretation), which I think is a fair thing to do for both the modest and the immodest. The modest person might misunderstand a position and be able to update themselves better if they criticise it and get a better explanation.

The question is what happens when you criticize it and don't get a better explanation. What should you do? Strongly adopt a partial solution to the problem, continue to look for other solutions or trust the specialists to figure it out?

I'm curious what you think about partial non-reality of wavefunctions (as described by TheAncientGeek here and seeming to correspond to the QIT interpretation on the wiki page of interpretations, which fits with probabilities being in the mind).

Comment by willpearson on Inadequacy and Modesty · 2017-10-31T12:59:54.261Z · score: 0 (0 votes) · EA · GW

and Eliezer hasn't endorsed any solution either, to my knowledge)

Huh, he seemed fairly confident about endorsing MWI in his sequence here

Comment by willpearson on Inadequacy and Modesty · 2017-10-31T08:51:24.501Z · score: 1 (1 votes) · EA · GW

Concerning QM: I think Eliezer's correct that Copenhagen-associated views like "objective collapse" and "quantum non-realism" are wrong, and that the traditional arguments for these views are variously confused or mistaken, often due to misunderstandings of principles like Ockham's razor. I'm happy to talk more about this too; I think the object-level discussions are important here.

I don't think the modest view (at least as presented by Gregory) would endorse any of the particular interpretations, as there is still significant debate.

The informed modest person would go, "You have object-level reasons to dislike these interpretations. Other people have object-level reasons to dislike your interpretations. Call me when you have hashed it out or done an experiment to pick a side." They would go on and do QM without worrying too much about what it all means.

Comment by willpearson on In defence of epistemic modesty · 2017-10-30T09:23:34.615Z · score: 1 (1 votes) · EA · GW

Is there any data on how likely EAs think explosive progress after HLMI is? I would have thought it was more than 10%.

I would also have expected more debate about explosive progress, more than just the recent Hanson-Yudkowsky flare-up, if there were as much doubt in the community as that survey suggests.

Comment by willpearson on In defence of epistemic modesty · 2017-10-29T20:27:30.287Z · score: 1 (1 votes) · EA · GW

Another reason to not have too much modesty within society is that it makes expert opinion very appealing to subvert. I wrote a bit about that here.

Note that I don't think my views about the things I believe to be subverted/unmoored would necessarily be correct, but that the first order of business would be to try to build a set of experts with better incentives.

Comment by willpearson on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T19:53:44.431Z · score: 4 (4 votes) · EA · GW

Since I've not seen it mentioned here, unconferences seem like an inclusive type of event as described above. I'm not sure how EAG compares.

Comment by willpearson on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T19:44:03.083Z · score: 4 (4 votes) · EA · GW

Yes, there are a few EA leftists whose main priority is to systemically reform capitalism, but not significantly more than there were in the first place, and they are a tiny group in comparison to the liberals, the conservatives, the vegans, the x-risk people, and so on. As far as I can tell, the impact of all these articles and comments in bringing leftists into active participation with EA was totally nonexistent.

I'm not sure whether I count or not. My work on autonomy can be seen as investigating systemic change. I've been to a couple of meetups and hung around this forum a bit, and I can tell you why the community is not very enticing or inviting from my point of view, if you are interested.

Edit to add:

I can only talk about EA London, where I went to a couple of the meetups. To preface things: I had generally good interactions with people; they were nice and we chatted a bit about non-systemic EA interests (which I am also interested in). There was lots of conversation and not too much holding forth.

I was mainly trying to find people interested in discussing AI/future things as any systemic change has to take this into consideration and there is lots of uncertainty. I was asked what I was interested in by organisers and asked if anyone knew people primarily interested in AI, and I didn't get any useful responses. At the time I didn't know enough about EA to ask about systemic change (and wasn't as clear on what I exactly wanted).

This slightly rambling point is to illustrate that it is hard to connect with people on niche topics (which AI seems to be in London). There probably needs to be a critical mass of people joining at once for a locality to support a topic.

I've joined a London EA facebook group focused on the future so I have my hopes.

That is pretty benign, a problem but not a large one. More could be done, but more could always be done.

The second, which I think might be more exclusionary, is EAG. I applied for tickets and to volunteer but I've heard nothing so far. I'm unsure why there is even selection on tickets.

I suspect I don't look like lots of EAs on an application form: I don't earn to give, but have taken a pay cut to work part time on my project, which I hope will help everyone in the long run. I may not have quite the same chipper enthusiasm.

I suspect other people interested in systemic change will look similarly different from lots of EAs, and the curation of EAG might be biased against them. If it is, then I probably have not lost out much by not going!

I mainly wrote this comment to try and give some possible reasons for the lack of a significant group interested in systemic change (despite articles/comments to the contrary). I'm not expecting EA to change, you can't be a group for everyone and you do interesting and good things. But it is good to know some of the potential reasons why things are how they are.

Edit2: I got a polite email from Julia Wise telling me that the reason I didn't get an invite was that London was a smaller event and that people were selected on the basis of "those who will benefit most from attending EA Global London." It would be nicer if these things were a little more transparent, e.g. "you are applicant #X and we can only accept #Y applicants", to give you a better idea of the chances. From my own perspective, for people who are interested in currently niche EA topics it is important to be able to potentially meet other people from around the world interested in their topics. EAG might not be the place for that, though.

Comment by willpearson on 5 Types of Systems Change Causes with the Potential for Exceptionally High Impact (post 3/3) · 2017-10-23T21:02:34.568Z · score: 0 (0 votes) · EA · GW

As a data point for how useful ITN is for trying to think about systemic change, I shall talk about my attempt.

My attempt is here.

So the intervention was to try to start a movement of people who share technological developments with each other to help them live (food/construction/computing), with the goal of creating autonomous communities (being capable of space faring or living on other planets would be great, but is not the focus). The main difference between it and the normal open-source movement is that it would focus on intelligence augmentation to start with, for economic reasons.

The main problem I came across was arguing that it would have a positive long-term impact with respect to existential risk and war risks, compared to singletons (I think it would, because I am skeptical of the stability of singletons).

It will take a long time to form a convincing argument about this one way or another, and it will probably take a significant amount more expertise in economics, sociology and game theory as well.

If a fully justified ITN report is needed before EAs start to be interested in it, then it is likely that it and other ideas like it will be neglected. Some ideas will need significant resources put into determining their tractability and also the sign of their impact.

Comment by willpearson on Open Thread #39 · 2017-10-23T20:28:06.497Z · score: 2 (2 votes) · EA · GW

I've posted about an approach to AGI estimation.

I would love to find collaborators on this kind of thing. In London would be great.

Comment by willpearson on An intervention to shape policy dialogue, communication, and AI research norms for AI safety · 2017-10-04T19:13:34.985Z · score: 0 (0 votes) · EA · GW

I take the point. This is a potential outcome, and I see the apprehension, but I think it's probably a low risk that users will grow to mistake robotics and hardware accidents for AI accidents (and work that mitigates each) - sufficiently low that I'd argue expected value favours the accident frame. Of course, I recognize that I'm probably invested in that direction.

I would do some research into how well sciences that have suffered brand dilution fare.

As far as I understand it, research institutions have high incentives to:

  1. Find funding
  2. Pump out tractable, digestible papers

See this kind of article for other worries about this kind of thing.

You have to frame things with that in mind, give incentives so that people do the hard stuff and can be recognized for doing the hard stuff.

Nanotech is a classic case of a diluted research path. If you have contacts, maybe try to talk to Eric Drexler; he is interested in AI safety, so he might be interested in how AI safety research is framed.

I think this steers close to an older debate on AI “safety” vs “control” vs “alignment”. I wasn't a member of that discussion so am hesitant to reenact concluded debates (I've found it difficult to find resources on that topic other than what I've linked - I'd be grateful to be directed to more). I personally disfavour 'motivation' on grounds of risk of anthropomorphism.

Fair enough, I'm not wedded to "motivation" (I see animals as having motivation as well, so it is not strictly human). It doesn't seem to cover phototaxis, which seems like the simplest thing we want to worry about, so that is an argument against "motivation". I'm worded out at the moment; I'll see if my brain thinks of anything better in a bit.

Comment by willpearson on An intervention to shape policy dialogue, communication, and AI research norms for AI safety · 2017-10-03T18:26:27.862Z · score: 0 (0 votes) · EA · GW

I agree it is worth reconsidering the terms!

The AGI/narrow AI distinction is beside the point a bit; I'm happy to drop it. I also have an AI/IA bugbear, so I'm used to not liking how things are talked about.

Part of the trouble is that we have lost the marketing war before it even began: every vaguely advanced technology we currently have is marketing itself as AI, and that leaves no space for anything else.

AI accidents brings to my mind trying to prevent robots crashing into things. 90% of robotics work could be classed as AI accident prevention because they are always crashing into things.

It is not just funding confusion that might be a problem. If I'm reading a journal on AI safety or taking a class on AI safety what should I expect? Robot mishaps or the alignment problem? How will we make sure the next generation of people can find the worthwhile papers/courses?

"AI risks" is not perfect, but at least it is not that.

Perhaps we should take a hard left and say that we are looking at studying Artificial Intelligence Motivation? People know that an incorrectly motivated person is bad and that figuring out how to motivate AIs might be important. It covers the alignment problem and the control problem.

Most AI doesn't look like it has any form of motivation and is harder to rebrand as such, so it is easier to steer funding to the right people and tell people what research to read.

It doesn't cover my IA gripe, which briefly is: AI makes people think of separate entities with their own goals/moral worth. I think we want to avoid that as much as possible. General intelligence augmentation requires its own motivation work, but of a kind where the motivation of the human is inherited by the computer that human is augmenting. I think my best hope is that AGI work might move in that direction.

Comment by willpearson on An intervention to shape policy dialogue, communication, and AI research norms for AI safety · 2017-10-02T21:30:42.137Z · score: 0 (0 votes) · EA · GW

So what are the risks of this verbal change?

Potentially money gets misallocated: just like all chemistry got rebranded as nanotech during that phase in the 2000s, if there is money in AI Safety, computer science departments will rebrand their research as AI Safety that prevents AI accidents. This might be a problem when governments start to try to fund AI Safety.

I personally want to be able to differentiate different types of work, between AI Safety and AGI Safety. Both are valuable: we are going to be living in a world of AI for a while, and it may cause catastrophic problems (including problems that distract us from AGI Safety), and learning to mitigate them might help us with AGI Safety. I want us to be able to continue to look at both as potentially separate things, because AI Safety may not help much with AGI Safety.

Comment by willpearson on Personal thoughts on careers in AI policy and strategy · 2017-09-29T11:27:02.061Z · score: 1 (3 votes) · EA · GW

I think an important thing for AI strategy is to figure out how to fund empirical studies into questions that impinge on crucial considerations.

For example funding studies into the nature of IQ. I'll post an article on that later but wanted to flag it here as well.

Comment by willpearson on Personal thoughts on careers in AI policy and strategy · 2017-09-28T11:03:12.025Z · score: 4 (4 votes) · EA · GW

I agree that creativity is key.

I would point out that you may need discipline to do experiments based upon your creative thoughts (if the information you need is not available). If you can't check your original reasoning against the world, you are adrift in a sea of possibilities.

Comment by willpearson on Personal thoughts on careers in AI policy and strategy · 2017-09-28T09:09:54.010Z · score: 1 (1 votes) · EA · GW

I think it is important to note that in the political world there is the vision of two phases of AI development, narrow AI and general AI.

Narrow AI is happening now. The predictions of 30+% job losses in the next 20 years are all about narrow AI. This is what people in the political sphere are preparing for, from my exposure to it.

General AI is conveniently predicted to be more than 20 years away, so people aren't thinking about it, because they don't know what it will look like and they have problems today to deal with.

Getting the policy response to narrow AI right does have a large impact. Large-scale unemployment could destabilise countries, causing economic woes and potentially war.

So perhaps people interested in general AI policy should get involved with narrow AI policy, but make it clear that this is the first battle in a war, not the whole thing. This would place them well and they could build up reputations etc. They could be in contact with the disentanglers so that, when the general AI picture is clearer, they can make policy recommendations.

I'd love it if the narrow-general AI split was reflected in all types of AI work.

Comment by willpearson on S-risk FAQ · 2017-09-27T20:44:26.127Z · score: 0 (0 votes) · EA · GW

How do you feel about the mere addition paradox? These questions are not simple.

Comment by willpearson on Personal thoughts on careers in AI policy and strategy · 2017-09-27T20:01:33.349Z · score: 6 (6 votes) · EA · GW

I would broadly agree. I think this is an important post and I agree with most of the ways to prepare. I think we are not there yet for large scale AI policy/strategy.

There are a few things that I would highlight as additions. 1) We need to cultivate the skills of disentanglement. Different people might be differently suited, but like all skills it is one that works better with practice and people to practice with. LessWrong is trying to place itself as that kind of place. It is having a little resurgence with the new website www.lesserwrong.com. For example, there has been lots of interesting discussion on the problems of Goodhart's law, which will need to be at least somewhat solved if we are to get AI safety groups that actually do AI safety research and don't just optimise some research output metric to get funding.

I am not sure if LessWrong is the correct place, but we do need places for disentanglers to grow.

2) I would also like to highlight the fact that we don't understand intelligence, and that there are lots of people who have been studying it for a long time (psychologists etc.) whom I don't think we do enough to bring into discussions of artificial versions of the thing they have studied. Lots of work on the policy side of AI safety models the AI as a utility-maximising agent in the economic style. I am pretty skeptical that this is a good model of humans or of the AIs we will create. Figuring out what better models might be is at the top of my personal priority list.

Edited to add: 3) It seems like a sensible policy would be to fund a competition in the style of the superforecasting tournaments, aimed at AI and related technologies. This should give you some idea of the accuracy of people's views on technology development/forecasting.

I would caution that we are also in the space of wicked problems, so it may be that there is never complete certainty about the way we should move.

Comment by willpearson on Capitalism and Selfishness · 2017-09-26T13:11:21.765Z · score: 0 (0 votes) · EA · GW

There is still disagreement about how to best donate (to do most good) among individuals which gives support to the argument that profits should be paid out even among altruistic investor base

True, but if I put myself in the shoes of a perfectly altruistic company owner, I would really want to delegate the allocation of my charitable giving, because I am too busy running my company to have much good information about whom to donate to.

If I happen to come into some information about what good charitable giving is, I should be able to take that information to whoever I have delegated the giving to, and they should incorporate it (being altruists wanting to do the most good as well).

It seems that only when you distrust other agents, either morally or in their ability to update on information, should you allocate it yourself.

Does that explain my intuitions?

Comment by willpearson on EA Survey 2017 Series: Cause Area Preferences · 2017-09-22T20:06:50.834Z · score: 0 (0 votes) · EA · GW

Michelle, thanks. Yes very interesting!

Comment by willpearson on Capitalism and Selfishness · 2017-09-19T19:50:25.342Z · score: 0 (0 votes) · EA · GW

I don't understand this. Do you suggest that all companies should be trying to fulfill (all) the needs of some collective. It is very useful for companies to specialize.

I expect all benevolent companies to fulfil the needs of others with their profits (if they are not reinvesting them in expansion). For that is the definition of benevolence right? People have an ethos of benevolence insofar as they pursue the interests of others.

There are two aspects of ownership of the means of production:

  1. Control over the operations
  2. Control of the profits

I would expect a benevolent person/company to give away control of the profits to an external entity. Why? Comparative advantage: it is unlikely that the person who specialises in controlling the operations of a company will be better than some group that specialises in getting the maximum charitable return (as long as the external group is also benevolent). So you'd expect all the profits to go to something like Open Phil directly, so as to reduce costs from friction.

So who owns the company in this case? The person who controls operations or the group that controls the profits?

I can't see a benevolent person arguing for needing private control over the profits (this might not mean public control, it might mean charitable control).

So I was trying to break down the concept of ownership some more and arguing that in a benevolent world private ownership might only mean keeping control over operations.

We don't live in a world of complete benevolence, so it is almost irrelevant. But it struck me as interesting as a thought about how benevolence and capitalism would interact, and how different things would look.

Comment by willpearson on Capitalism and Selfishness · 2017-09-17T10:35:49.996Z · score: 0 (2 votes) · EA · GW

I'm having trouble seeing how an individual benevolent person would hold ownership of a company (unless they were the only benevolent person). It would require the owner to think that they knew best how to distribute the fruits of the company.

This seems unlikely: as they are trying to help lots of people, they need lots of data about what people need to meet their interests. This data collection seems like it would best be done in a collective manner (as getting more data from more people about what is needed should give a more accurate view of the needs, and this data should be shared for efficiency).

So why wouldn't the benevolent individual give their share of the company to whatever collective system determined the needs of the world? They could still be CEO, so that they could manage the company better (as they have good data about that). It seems like the capitalist system would morph into either socialism or charity-sector-owned means of production, if everyone were benevolent.

I do however agree that socialism is not inherently selfless (nor is the system where charities own the means of production).

There are lots of other potential systems as well apart from the charity-ism I described above. I'm interested in what happens when sufficiently advanced micro-manufacturing enables everyone to own the means of production (this is also not inherently self-less or selfish). You could look at systems where people only rent the means of production from the state.

Comment by willpearson on EA Survey 2017 Series: Cause Area Preferences · 2017-09-12T08:24:06.733Z · score: 1 (1 votes) · EA · GW

My personal idea of it is a broad church. The systems that govern our lives, government and the economy, distribute resources in a certain way. These can have a huge impact on the world. They are neglected because changing them involves fighting an uphill struggle against vested interests.

Someone in a monarchy campaigning for democracy would be an example of someone who is aiming for systemic change. Someone who has an idea to strengthen the UN so that it could help co-ordinate regulation/taxes better between countries (so that companies don't just move to low tax, low worker protection, low environmental regulation areas) is aiming for systemic change.

Comment by willpearson on EA Survey 2017 Series: Cause Area Preferences · 2017-09-09T19:20:34.030Z · score: 0 (2 votes) · EA · GW

I'm not sure if there are many EAs interested in it, because of potential low tractability. But I am interested in "systemic change" as a cause area.

Comment by willpearson on EA Survey 2017 Series: Cause Area Preferences · 2017-09-09T18:59:32.484Z · score: 0 (0 votes) · EA · GW

Just a heads up: "technological risks" ignores all the non-anthropogenic catastrophic risks. "Global catastrophic risks" seems good.

Comment by willpearson on Ideological engineering and social control: A neglected topic in AI safety research? · 2017-09-01T21:59:52.855Z · score: 1 (1 votes) · EA · GW

I think this is part of the backdrop to my investigation into the normal computer control problem. People don't have control over their own computers. The bad actors who do get control could be criminals or a malicious state (or AIs).

Comment by willpearson on Ideological engineering and social control: A neglected topic in AI safety research? · 2017-09-01T21:47:02.495Z · score: 0 (0 votes) · EA · GW

The increasing docility could be a stealth existential-risk increaser, in that people would be less willing to challenge other people's ideas and so would slow, or stop entirely, the technological progress we need to save ourselves from supervolcanoes and other environmental threats.

Comment by willpearson on Looking at how Superforecasting might improve some EA projects response to Superintelligence · 2017-08-31T08:12:40.675Z · score: 0 (0 votes) · EA · GW

Thanks for the links. It would have been nice to have got them when I emailed OPP a few days ago with a draft of this article.

I look forward to seeing the fruits of "Making Conversations Smarter, Faster"

I'm going to dig into the AI timeline stuff, but from what I have seen of similar things, there is an inferential step missing. The question is "Will HLMI (of any technology) happen with probability X by Y?" and the action is then "we should invest most of the money in a community of machine learning people and people working on AI safety for machine learning". It is worth asking the question "Do you expect HLMI to come from X technology?" if you want to invest lots in that class of technology.

Rodney Brooks has an interesting blog about the future of robotics and AI. He is worth keeping an eye on as a dissenter, and might be an example of someone who has said we will have intelligent agents by 2050 but doesn't think they will come from current ML.

Comment by willpearson on Looking at how Superforecasting might improve some EA projects response to Superintelligence · 2017-08-30T19:27:00.511Z · score: 0 (0 votes) · EA · GW

Sorry if you felt I was being deceptive. The list of areas of expertise I mentioned in the 80,000 Hours section was relatively broad and not meant to be exhaustive; I could add physics and economics off the top of my head, and I'm sure there are many more. I was considering each AGI team as having to do small amounts of forecasting about the likely success and usefulness of their projects. I think building the superforecasting mindset into all levels of endeavour could be valuable, without having to rely on explicit superforecasters for every decision.

In my opinion, the best way to draw lessons from the Good Judgement Project is to directly rely on existing forecasting teams, or new forecasting teams trained and tested in the same manner, to give us their predictions on potential superintelligence, and to give the appropriate weight to their expertise.

It would be great to have a full team of forecasters working on intelligence in general (so they would have something to correlate their answers on superintelligence with). I was being moderate in my demands about how much the Open Philanthropy Project should change how they make forecasts about what is good to do; I just wanted it to be directionally correct.

As a comparison, I don't think that giving a forecaster this list of suggestions and asking them to make predictions with those suggestions in mind would lead to performance similar to that of a superforecaster

There was a simple thing people could do to improve their predictions.

From the book:

One result that particularly surprised me was the effect of a tutorial covering some basic concepts that we'll explore in this book and are summarized in the Ten Commandments appendix. It took only sixty minutes to read and yet it improved accuracy by roughly 10% through the entire tournament year.

The Ten Commandments appendix is where I got the list of things to do. I figure if I managed to get the Open Philanthropy Project to try to follow them, things would improve. But I agree that them getting good forecasters somehow would be a lot better.

Does that clear up where I was coming from?