Posts

An intervention to shape policy dialogue, communication, and AI research norms for AI safety 2017-10-01T18:29:15.685Z
Increasing Access to Pain Relief in Developing Countries - An EA Perspective 2017-01-31T16:13:20.165Z

Comments

Comment by lee_sharkey on An intervention to shape policy dialogue, communication, and AI research norms for AI safety · 2017-10-15T18:30:21.090Z · EA · GW

Yes. Thanks. Link has been amended. The author was in fact Luke Muehlhauser, so labeling it 'WEF' is only partially accurate.

Comment by lee_sharkey on An intervention to shape policy dialogue, communication, and AI research norms for AI safety · 2017-10-06T12:32:52.480Z · EA · GW

I agree that more of both is needed. Both need to be instantiated in actual code, though. And both are useless if researchers don't care to implement them.

I admit I would benefit from some clarification on your point - are you arguing that the article assumes a bug-free AI won't cause AI accidents? Is it the case that this arose from Amodei et al.'s definition?: “unintended and harmful behavior that may emerge from poor design of real-world AI systems”. Poor design of real-world AI systems isn't limited to bugs, but I can see why this might have caused confusion.

Comment by lee_sharkey on An intervention to shape policy dialogue, communication, and AI research norms for AI safety · 2017-10-04T00:24:48.405Z · EA · GW

I don't think it's an implausible risk, but I also don't think that it's one that should prevent the goal of a better framing.

Comment by lee_sharkey on An intervention to shape policy dialogue, communication, and AI research norms for AI safety · 2017-10-04T00:16:04.381Z · EA · GW

AI accidents brings to my mind trying to prevent robots crashing into things. 90% of robotics work could be classed as AI accident prevention because they are always crashing into things.

It is not just funding confusion that might be a problem. If I'm reading a journal on AI safety or taking a class on AI safety what should I expect? Robot mishaps or the alignment problem? How will we make sure the next generation of people can find the worthwhile papers/courses?

I take the point. This is a potential outcome, and I see the apprehension, but I think it's probably a low risk that users will grow to mistake robotics and hardware accidents for AI accidents (and work that mitigates each) - sufficiently low that I'd argue expected value favours the accident frame. Of course, I recognize that I'm probably invested in that direction.

Perhaps we should take a hard left and say that we are looking at studying Artificial Intelligence Motivation? People know that an incorrectly motivated person is bad and that figuring out how to motivate AIs might be important. It covers the alignment problem and the control problem.

Most AI doesn't look like it has any form of motivation and is harder to rebrand as such, so it is easier to steer funding to the right people and tell people what research to read.

I think this steers close to an older debate on AI “safety” vs “control” vs “alignment”. I wasn't a member of that discussion so am hesitant to reenact concluded debates (I've found it difficult to find resources on that topic other than what I've linked - I'd be grateful to be directed to more). I personally disfavour 'motivation' on grounds of the risk of anthropomorphism.

Comment by lee_sharkey on An intervention to shape policy dialogue, communication, and AI research norms for AI safety · 2017-10-03T14:04:07.374Z · EA · GW

"permanent loss of control of a hostile AI system" - This seems especially facilitative of the science-fiction interpretation to me.

I agree with the rest.

Comment by lee_sharkey on An intervention to shape policy dialogue, communication, and AI research norms for AI safety · 2017-10-03T13:54:41.285Z · EA · GW

I think this proposition could do with some refinement. AI safety should be a superset of both AGI safety and narrow-AI safety. Then we don't run into problematic sentences like "AI safety may not help much with AGI Safety", which contradicts how we currently use 'AI safety'.

To address the point on these terms, then:

I don't think AI safety runs the risk of being so attractive that misallocation becomes a big problem. Even if we consider the risk of funding misallocation significant, 'AI risk' seems like a worse term for permitting conflation of work areas.

Yes, it's of course useful to have two different concepts for these two types of work, but this conceptual distinction doesn't go away with a shift toward 'AI accidents' as the subject of these two fields. I don't think a move toward 'AI accidents' awkwardly merges all AI safety work.

But if it did: The outcome we want to avoid is AGI safety getting too little funding. This outcome seems more likely in a world that makes two fields of N-AI safety and AGI safety, given the common dispreference for work on AGI safety. Overflow seems more likely in the N-AI safety -> AGI safety direction when they are treated as the same category than when they are treated as different. It doesn't seem beneficial for AGI safety to market the two as separate types of work.

Ultimately, though, I place more weight on the other reasons why I think it's worth reconsidering the terms.

Comment by lee_sharkey on An intervention to shape policy dialogue, communication, and AI research norms for AI safety · 2017-10-03T12:58:42.625Z · EA · GW

What do you have in mind? If it can't be fixed with better programming, how will they be fixed?

Comment by lee_sharkey on Personal thoughts on careers in AI policy and strategy · 2017-09-27T20:10:30.525Z · EA · GW

Hi Carrick,

Thanks for your thoughts on this. I found this really helpful and I think 80,000 Hours could maybe consider linking to it on the AI policy guide.

Disentanglement research feels like a valid concept, and it's great to see it laid out here. But given how much weight pivots on the idea and how much uncertainty surrounds identifying these skills, it seems like disentanglement research is a subject that is itself asking for further disentanglement! Perhaps it could be a trial question for any prospective disentanglers out there.

You've given examples of some entangled and under-defined questions in AI policy and provided the example of Bostrom as exhibiting strong disentanglement skills; Ben has detailed an example of an AI strategy question that seems to require some sort of "detangling" skill; Jade has given an illuminative abstract picture. These are each very helpful. But so far, the examples are either exclusively AI-strategy-related or entirely abstract. The process of identifying the general attributes of good disentanglers and disentanglement research might be assisted by a broader range of examples, including instances of disentanglement research outside the field of AI strategy. Both answered and unanswered research questions of this sort might be useful. (I admit to being unable to think of any good examples right now.)

Moving away from disentanglement, I've been interested for some time by your fourth, tentative suggestion for existing policy-type recommendations to

fund joint intergovernmental research projects located in relatively geopolitically neutral countries with open membership and a strong commitment to a common good principle.

This is a subject that I haven't been able to find much written material on - if you're aware of any, I'd be very interested to know about it. It isn't completely clear whether or how to push for an idea like this. Additionally, based on the lack of literature, it feels like this hasn't received as much thought as it should, even in an exploratory sense (but being outside of a strategy research cluster, I could be wrong on this). You mention that race dynamics are easier to start than stop; meanwhile, early intergovernmental initiatives are one of the few tools that can plausibly prevent/slow/stop international races of this sort. These lead me to believe that this 'recommendation' is actually more of a high-priority research area. Exploring this area appears robustly positive in expectation. I'd be interested to hear other perspectives on this subject and to know whether any groups or individuals are currently working/thinking about it, and if not, how research on it might best be started, if indeed it should be.

Comment by lee_sharkey on Nothing Wrong With AI Weapons · 2017-08-28T18:18:19.753Z · EA · GW

Hey kbog, thanks for this. I think this is well argued. If I may, I'd like to pick some holes. I'm not sure if they are sufficient to swing the argument the other way, but I don't think they're trivial either.

I'm going to use 'autonomy in weapons systems' rather than 'LAWs', for reasons argued here (see Takeaway 1).

As far as I can tell, almost all the considerations you give concern inter-state conflict. The intra-state consequences are not explored, and I think they deserve to be. Fully autonomous weapons systems potentially obviate the need for a mutually beneficial social contract between the regimes in control of the weapons and the populations over which they rule. All dissent becomes easy to crush. This is patently bad in itself, but it also has consequences for inter-state conflict; with less approval needed to go to war, inter-state conflict may increase.

The introduction of weapons systems with high degrees of autonomy poses an arguably serious risk of geopolitical turbulence: it is not clear that all states will develop the capability to produce highly autonomous weapons systems. Those that do not will have to purchase them from technologically more advanced allies willing to sell them. States that find themselves outside of such alliances will be highly vulnerable to attack. This may motivate a nontrivial reshuffling of global military alliances, the outcomes of which are hard to predict. For those without access to these new powerful weapons, one risk-mitigation strategy is to develop nuclear weapons, potentially motivating nuclear proliferation.

On your point:

The logic here is a little bit gross, since it's saying that we should make sure that ordinary soldiers like me die for the sake of the greater good of manipulating the political system and it also implies that things like body armor and medics should be banned from the battlefield, but I won't worry about that here because this is a forum full of consequentialists and I honestly think that consequentialist arguments are valid anyway.

My argument here isn't hugely important, but I take some issue with the analogies. I prefer thinking in terms of both actors agreeing on an acceptable level of vulnerability in order to reduce the risk of conflict. In this case, a better analogy is to the Cold War agreement not to build comprehensive ICBM defenses, an analogy which would come out in favour of limiting autonomy in weapons systems. But neither of us is placing much importance on this point overall.

I'd like to unpack this point a little bit:

Third, you might say that LAWs will prompt an arms race in AI, reducing safety. But faster AI development will help us avoid other kinds of risks unrelated to AI, and it will expedite humanity's progress and expansion towards a future with exponentially growing value. Moreover, there is already substantial AI development in civilian sectors as well as non-battlefield military use, and all of these things have competitive dynamics. AGI would have such broad applications that restricting its use in one or two domains is unlikely to make a large difference; after all, economic power is the source of all military power, and international public opinion has nontrivial importance in international relations, and AI can help nations beat their competitors at both.

I believe discourse on AI risks often conflates 'AI arms race' with 'race to the finish'. While these races are certainly linked, and the conflation therefore justified in some senses, I think it trips up the argument in this case. In an AI arms race, we should be concerned about the safety of non-AGI systems, which may be neglected in an arms-race scenario. This weakens the argument that highly autonomous weapons systems might lead to fewer civilian casualties, as such safety measures are exactly the sort likely to be neglected when racing to develop weapons systems capable of outdoing the ever more capable weapons of one's rival.

The second sentence only holds if the safety issue is solved, so I don't accept the argument that it will help humanity reach a future exponentially growing in value (at least insofar as we're talking about the long-run future, as there may be some exponential progress in the near term).

It could simply be my reading, but I'm not entirely clear on the point made across the third and fourth sentences, and I don't think they give a compelling case that we shouldn't try to avoid military application or avoid exacerbating race dynamics.

Lastly, while I think you've given a strong case to soften opposition to advancing autonomy in weapons systems, the argument against any regulation of these weapons hasn't been made. Not all actors seek outright bans, and I think it'd be worth acknowledging that (contrary to the title) there are some undesirable things about highly autonomous weapons systems, and that we should like to impose some regulations on them, such as minimum safety requirements that help reduce civilian casualties.

Overall, I think the first point I made should give serious pause, and it's the largest single reason I don't agree with your overall argument, as many good points as you make here.

(And to avoid any suspicions: despite arguing on his side, coming from the same city, and having the same rare surname, I am of no known relation to Noel Sharkey of the Stop Killer Robots Campaign, though I confess a pet goal to meet him for a pint one day.)

Comment by lee_sharkey on Open Thread #38 · 2017-08-25T22:02:19.326Z · EA · GW

Not sure if it's just me but the board_setup.jpg wouldn't load. I'm not sure why, so I'm not expecting a fix, just FYI. Cards look fun though!

Comment by lee_sharkey on High Time For Drug Policy Reform. Part 4/4: Estimating Cost-Effectiveness vs Other Causes; What EA Should Do Next · 2017-08-16T20:30:07.283Z · EA · GW

Hey Denkenberger, thanks for your comment. I too tend to weight the future heavily, and I think there are some reasons to believe that DPR could have nontrivial benefits given this set of preferences. This was in fact why, as Michael mentions above:

"FWIW, I think the mental health impact of DPR is about 80% of it's value, but when I asked Lee the same question (before telling him my view) I think he said it was about 30% (we were potentially using different moral philosophies)."

I gave the lower figure because I think DPR's effects on the far future could be the source of most of its expected value.

DPR sits at the juncture between international development & economic growth, global & mental health, national & international crime, terrorism, conflict & security, and human rights. I think we should expect solving the world drug problem to improve some or all of these issues, as Michael argued in the series.

I think it could be easy to overlook the expected benefits that significant reductions in the funding of and motivation for crime, corruption, terrorism, and conflict would have for fostering a stable, trusting global system. My weak conjecture is that such reductions would bring an array of global benefits composed of reduced out-group fear (on community and international levels), stronger institutions, and richer societies.

DPR might thus offer a step in the right direction towards solving issues of global coordination, which in turn may increase our expectations for solving the coordination problem for AI and, thence, the long-term future. I admit this is a fairly hand-wavy notion and that the causal chains are undesirably long and uncertain, relying on assumptions about unpredictable factors (such as the timing of an intelligence takeoff compared with the length of time it would take to observe the international social benefits, for a start). My confidence intervals are therefore commensurately wide, but still I struggle to think of ways in which it could be net negative for global coordination, so almost all of my probability weight is on the positive side. Multiplied by humanity's cosmic endowment, I weigh this relatively heavily. Of course, there may be other, more certain activities that we can do to improve the EV of humanity's future, and I think there are, but I don't think DPR is obviously a waste of time if that's what we care about.

Comment by lee_sharkey on High Time For Drug Policy Reform. Part 2/4: Six Ways It Could Do Good And Anticipating The Objections · 2017-08-16T17:46:19.894Z · EA · GW

(note: Lee wrote the pain section but we both did editing, so I'm unsure whether to use 'I' or 'we' here)

I align myself with Michael's comment.

Comment by Lee_Sharkey on [deleted post] 2017-02-16T09:29:23.554Z

Really enjoying the Oxford Prioritisation Project!

One of my favourite comments from the Anonymous EA comments was the wish that EAs would post "little 5-hour research overviews of the best causes within almost-random cause areas and preliminary bad suggested donation targets." (http://effective-altruism.com/ea/16g/anonymous_comments/)

I expect the average OPP post takes over 5 hours, and 5 hours might be an underestimate of the time a useful overview would take without prior subject knowledge. But both that comment and the OPP seem to be in the same spirit, and it's great to see all this information shared through an EA lens.

Comment by Lee_Sharkey on [deleted post] 2017-02-16T09:11:18.522Z

I'd second that - it's not the most wieldy text editor. Not sure how easy it would be to remedy. Going into the HTML gets you what you want in the end, but it's undue effort.

Comment by lee_sharkey on Increasing Access to Pain Relief in Developing Countries - An EA Perspective · 2017-02-07T09:27:23.405Z · EA · GW

Hi Tom,

Great to hear that it's been suggested. By the looks of it, it may be an area better suited to an Open Philanthropy Project-style approach, being primarily a question of policy, with a sparser evidence base and difficulties in defining impact. I modelled my analysis on OPP's approach (with some obvious shortcomings on my part).

I could have done better in the analysis to distinguish between the various types of pain. As you say, they are not trivial distinctions, especially when it comes to treatment with opioids.

I'd be interested to hear your take on the impact of pain control on the nature of medicine and the doctor-patient dynamic. What trends are you concerned about hastening exactly?

Comment by lee_sharkey on Increasing Access to Pain Relief in Developing Countries - An EA Perspective · 2017-02-07T09:08:36.520Z · EA · GW

Thanks for those links. It's troubling to hear about some of the promotional techniques described, though I can't say it's surprising.

While US regulations were developed decades before their equivalents in many developing countries, that head start isn't necessarily a mark of quality. In the article I refer to less desirable idiosyncrasies of the US health system (i.e. aspects of the consumer-based model; pain as a fifth vital sign), which have exacerbated the crisis there and will not necessarily exist in some developing countries. Yet, while I hesitate to paint all developing countries with the same skeptical brush when it comes to developing adequate regulations, I agree with you more than I disagree. I say that a small number of adverse outcomes is almost inevitable, and it's really difficult to judge where the positives outweigh the negatives.

I still think expanding access should be part of the strategy. The approach promoted by the WHO, UNODC, and INCB is to aim for 'balance' in policies on controlled substances. The trouble is that countries are all too keen to control the downsides of using narcotic drugs at the expense of the upsides. So I think that what you're suggesting may already be the approach being taken, but the emphasis needs to compensate for states' existing imbalance.

And what you're doing sounds interesting! Feel free to post links.

Comment by lee_sharkey on Increasing Access to Pain Relief in Developing Countries - An EA Perspective · 2017-02-04T16:19:53.730Z · EA · GW

Hi Austen,

Just to clarify, I'm not trying to promote or demote the cause. I'm aware that the cause is of interest to some EAs, and as someone in a good position to inform them, I thought something like this would help them make their own judgement :) I'm just sharing info and trying to be impartial.

Sorry if my comments gave the impression that I thought it was low priority and financially inefficient. To reiterate, I've withheld strong judgement on its priority, and I said I haven't looked into its financial efficiency compared with other interventions. Because its importance/effectiveness depends heavily on ethical value preferences, both of these questions are hard for me to take strong stances on.

My apologies for seeming contrary here, but I'm not taking an anti-corporate stance either. I made those points because the way you had originally put it made it seem like you believed that access to pain relief was unique in that corporate influence didn't carry much risk compared with other causes. Unfortunately, it isn't so. Of course pharma involvement is essential, yet the history of this very cause illustrates the risks. I'd agree with you that lack of corporate involvement is the missing link in some aspects of increasing access, but we should both be specific about the sectors we're talking about to avoid appearing broadly pro-corporate or anti-corporate, which we both agree is unhelpful.

I haven't got a wide enough grasp of the palliative care movement to say if it suffers from an anti-corporate agenda. 'Global health' in general tends to be pretty anti-pharma, and it's hard to argue that the short-term externalities of the existing capitalistic model of drug development and production favour the 'Global health' agenda over the agenda of 'health in the developed world'. So Global health's culture of being anti-pharma is at least understandable, even if it relies on discounting the potentially positive long-term externalities of the capitalistic model. It's hard to say if access to pain relief/palliative care is more antagonistic to pharma than the rest of Global health. If the movement is suspicious of opioid manufacturers being involved in other aspects such as policy then, without being too SJW, I think it actually has good reason to be, given the history.

Comment by lee_sharkey on Increasing Access to Pain Relief in Developing Countries - An EA Perspective · 2017-02-03T01:20:09.947Z · EA · GW

Hi Austen,

Thanks for all your interest!

I would have to disagree on your point about corporate influence. Pharma has been implicated heavily in the current opioid epidemic in the States and elsewhere. See the John Oliver exposé for a light introduction (link above). In this area, if anything, there is even more reason to be wary of pharma influence, because the product is so addictive when misused. Pharma does do some positive work - I'm aware of a BMS-funded training hospice in Romania (Casa Sperantei), and I've only heard good things about it.

You've hit on an accepted strategy for promoting pain relief access/palliative care. One only knows one has succeeded in making an MoH care about the area when it does something about it, such as developing a policy. The 'public health approach' to increasing access to pain relief/palliative care, supported by the WHO, recognizes policy as the foundation on which other progress can be built. Without it, success in the other areas of the approach (namely medicine availability, education, and implementation) is much less likely. Kathy Foley and colleagues introduce the public health approach here: http://www.jpsmjournal.com/article/S0885-3924(07)00122-4/pdf

Regarding tractability:

The issue is likely to be more tractable in some countries than in others, and so it's hard for me to give anything but a range.

I'm adding retrospective justification for my choice of low-moderate tractability here, but compare this cause to similar ones assessed by 80k. The scores given to them according to their scoring matrix are: Smoking in the Developing World - 3/6; Health in poor countries - 5/6; Land Use Reform - 3/6.

(Where 3 is "Some possible ways to make progress, with significant controversy; Significant uncertainty about how to approach, solution at least a decade off; many relevant people don’t care, or some supportive but significant opposition from status quo.")

Judging by the rest of the scoring matrix, I think a range of 2 - 3.5 in most countries is appropriate, which roughly corresponds to low-moderate in my book.

So I think I would stand by my choice of low-moderate. I probably have a proclivity for pessimism, so perhaps I'm not being generous enough about its solvability here. The problem may be highly tractable in some countries, but I feel that reflecting that in the range would misrepresent the issue. As for Wisconsin, I would hesitate to proclaim its effectiveness before more specific analysis. So even if they only spend 15% of their time on it, that may not mean much in terms of tractability or neglectedness. It does seem promising though.

Other funding: There are reasons other than politics that PEPFAR may not have chosen to fund palliative care measures. Preventive measures may just be way more cost-effective in the long run. I haven't looked closely into it.

An area where palliative care is of growing interest is multidrug-resistant TB.

Comment by lee_sharkey on Increasing Access to Pain Relief in Developing Countries - An EA Perspective · 2017-02-02T22:40:58.135Z · EA · GW

Hi Elizabeth,

I focus on opioid medications for the same reasons that I don't focus on cannabinoids:

  • There isn't strong expert consensus on the effectiveness of cannabinoids. This may change as the search for alternative drugs, particularly for chronic pain, intensifies. While there are some areas that will likely see their use increase (you justly highlight neuropathic pain), my understanding is that current evidence doesn't reliably indicate their effectiveness for severe pain. All this said, there are good reasons to believe they are understudied, both as single interventions and as adjuvants. I should perhaps have elaborated on this and similar research avenues in the article. Thank you for bringing attention to this issue.

  • Opioid medications, although controlled and functionally inaccessible, are legal medicines in all countries. With few well-evidenced cannabinoid medications approved for use, and only in a handful of countries, it's unlikely that fighting to approve members of a controversial drug class of questionable efficacy for many medical indications is the best way to bring pain relief to patients in developing countries. (It could be incredibly effective if generating widespread acceptance of cannabinoid medications, through a long causal chain, ended up driving more rational controlled-substances policies. But this is far from a neglected and tractable cause.)

For the above two reasons, the movement to increase access to opioid medications has historical precedent on its side and solid expert consensus on their efficacy (even if their dangers are debated). Opioids seem to comprise an essential component of the best solution (however imperfect) to the gross deficiency of analgesia in the majority of contexts globally. But you're correct to highlight what may be the least explored part of the analysis.

Comment by lee_sharkey on Increasing Access to Pain Relief in Developing Countries - An EA Perspective · 2017-02-02T21:06:33.048Z · EA · GW

Thanks Julia! Glad to have the chance to share

Comment by lee_sharkey on Increasing Access to Pain Relief in Developing Countries - An EA Perspective · 2017-02-01T21:28:21.956Z · EA · GW

Thanks Austen!

Yes, it's actually very large. So large, in fact, that it seems to be taken for granted by many people in those countries with low access.

I've withheld strong judgement on whether it should be a cause area that other EAs should act on. I think it could be a particularly attractive area for EAs with certain ethical preferences.

Before funding programmes such as PPSG's, further analyses of the cause and the programme(s) are warranted. I'd be open to suggestions on how to carry those out from anyone with experience, or I'd be happy to discuss the matter with anyone interested in taking it forward themselves.

Comment by Lee_Sharkey on [deleted post] 2017-01-31T13:45:02.166Z

Unfortunately, this link has expired. Is there anything CEA/the forum could do to collate existing translations?

Comment by lee_sharkey on EA essay contest for <18s · 2017-01-31T12:39:12.583Z · EA · GW

Indeed. And essay competitions are not like examinations; plagiarism only needs to be detected in potential winners, and that can be done by googling fragments of the essays.