Comments

Comment by william_s on Why I prioritize moral circle expansion over artificial intelligence alignment · 2018-02-26T03:50:12.437Z · EA · GW

I don't think CEV or similar reflection processes reliably lead to wide moral circles. I think they can still be heavily influenced by their initial set-up (e.g. what the values of humanity are when reflection begins).

Why do you think this is the case? Do you think there is an alternative reflection process (whether implemented by an AI, by a human society, or a combination of both) that could be defined that would reliably lead to wide moral circles? Do you have any thoughts on what it would look like?

If we go through some kind of reflection process to determine our values, I would much rather have a reflection process that wasn't dependent on whether or not MCE occurred beforehand, and I think not leading to a wide moral circle should be considered a serious bug in any definition of a reflection process. It seems to me that working on producing this would be a plausible alternative, or at least a parallel path, to directly performing MCE.

Comment by william_s on Introducing Canada’s first political advocacy group on AI Safety and Technological Unemployment · 2017-11-06T18:29:44.177Z · EA · GW

I've talked to Wyatt and David, and afterwards I am more optimistic that they'll think about downside risks and be responsive to feedback on their plans. I wasn't convinced that the plan laid out here is a useful direction, but we didn't dig into it in enough depth for me to be certain.

Comment by william_s on Introducing Canada’s first political advocacy group on AI Safety and Technological Unemployment · 2017-10-31T20:50:01.972Z · EA · GW

Seems like the main argument here is: "The general public will eventually clue in to the stakes around ASI and AI safety, and the best we can do is get in early in the debate, frame it as constructively as possible, and provide people with tools (petitions, campaigns) that will be an effective outlet for their concerns."

One concern about this is that "getting in early in the debate" might move up the time that the debate happens or becomes serious, which could be harmful.

An alternative approach would be to simply build latent capacity: work on issues that are already in the political domain (I think basic income as a solution for technological unemployment is something that is already out there in Canada), but avoid raising new issues until other groups move into that space too. While doing that, you could build latent capacity (skills, networks) and learn how to advocate effectively in spaces that don't carry the same risk of prematurely politicizing AI-related issues. Then, when something related to AI becomes a clear goal for policy advocacy, you could move onto it at the right time.

Comment by william_s on Open Thread #38 · 2017-08-28T17:00:44.526Z · EA · GW

Thanks for the Nicky Case links

Comment by william_s on Open Thread #38 · 2017-08-23T17:21:43.804Z · EA · GW

Any thoughts on individual-level political de-polarization in the United States as a cause area? It seems important, because a functional US government helps with a lot of things, including x-risk. I don't know whether there are tractable/neglected approaches in the space. It seems possible that interventions on individuals intended to reduce polarization and promote understanding of other perspectives, as opposed to pushing a particular viewpoint or lobbying politicians, could be neglected. Broockman and Kalla's canvassing experiment (http://web.stanford.edu/~dbroock/published%20paper%20PDFs/broockman_kalla_transphobia_canvassing_experiment.pdf) seems like a useful study in this area (it seems possible that this approach could be used for issues on the other side of the political spectrum).

Comment by william_s on The Map of Global Warming Prevention · 2016-08-12T00:33:53.155Z · EA · GW

I'm not saying these mean we shouldn't do geoengineering, that they can't be solved, or that they will happen by default; just that these are additional risks (possibly unlikely but high-impact) that you ought to include in your assessment and that we ought to make sure we avoid.

Re coordination problems not being bad: it's true that they might work out, but there's significant tail risk. Just imagine that, say, the US unilaterally decides to do geoengineering, but it screws up food production and the economy in China. This probably increases the chances of nuclear war (even more so than if climate change does it indirectly, as there will be a more specific, attributable event). It's worth thinking about how to prevent this scenario.

Comment by william_s on The Map of Global Warming Prevention · 2016-08-11T20:51:33.710Z · EA · GW

Extra risks from geoengineering:

Causing additional climate problems (i.e. it doesn't just uniformly cool the planet; I recall seeing a simulation somewhere in which climate change + geoengineering did not equal no change, but instead significantly changed rainfall patterns).

Global coordination problems (who decides how much geoengineering to do, compensation for downsides, etc.). This could cause a significant increase in international tensions, plausibly war.

Climate Wars by Gwynne Dyer has some specific negative scenarios (for climate change + geoengineering) https://www.amazon.com/Climate-Wars-Fight-Survival-Overheats/dp/1851688145

Comment by william_s on Announcing the Good Technology Project · 2016-01-16T18:59:52.005Z · EA · GW

It might be useful to suggest Technology for Good as, e.g., a place where companies with that focus could send job postings and have them seen by people who are interested in working on such projects.

Comment by william_s on Announcing the Good Technology Project · 2016-01-16T18:58:12.217Z · EA · GW

This is probably not answerable until you've made some significant progress on your current focus, but it would be nice to get a sense of how well the pool of people available to work on technology-for-good projects lines up with the skills those projects require (for example, are there a lot of machine learning experts who are willing to work on these problems, but not many projects where that is the right solution? Is there a shortage of, say, front-end web developers who are willing to work on these kinds of projects?).

Comment by william_s on EA risks falling into a "meta trap". But we can avoid it. · 2015-08-25T21:06:16.034Z · EA · GW

Another way of thinking about this: in an overdetermined environment, there would seem to be a point at which the impact of EA movement building becomes "causing a person to join EA sooner" rather than "adding another person to EA" (the latter being the current basis for evaluating movement-building impact), and the former is much less valuable.
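
As a toy illustration (all numbers hypothetical), the gap between the two effects can be large:

```python
# Toy comparison (all numbers hypothetical): "adding a person to EA"
# vs. merely "causing an inevitable joiner to join N years sooner".
annual_value = 1.0   # impact per EA-year, in arbitrary units
career_years = 40    # years a genuinely-added person contributes
years_sooner = 2     # how much earlier outreach makes an inevitable joiner join

value_added_person = annual_value * career_years  # 40.0
value_earlier_join = annual_value * years_sooner  # 2.0

print(value_earlier_join / value_added_person)    # 0.05, i.e. 20x less valuable
```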

Comment by william_s on EA risks falling into a "meta trap". But we can avoid it. · 2015-08-25T19:25:05.285Z · EA · GW

What sort of feedback signals would we get if EA were currently falling into a meta-trap? What is the current state of those signals?

Comment by william_s on How we can make it easier to change your mind about cause areas · 2015-08-24T20:12:16.046Z · EA · GW

In response to this article, I followed the advice in 1) and thought about where I'd donate in the animal suffering cause area, and ended up donating $20 to New Harvest.

Comment by william_s on How to get more EAs to connect in person and share expertise? · 2015-08-20T01:43:31.495Z · EA · GW

Idea: allow people to sign up to a list. Then, every (week/2 weeks/month) randomly pair up all people on the list and suggest they have a short Skype conversation with the person they are paired with.
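
A minimal sketch of the pairing step, assuming a plain list of names and a hypothetical pair_up helper:

```python
import random

def pair_up(people):
    """Randomly pair everyone on the list; an odd person out joins the last pair."""
    shuffled = people[:]
    random.shuffle(shuffled)
    pairs = [shuffled[i:i + 2] for i in range(0, len(shuffled), 2)]
    if len(pairs) > 1 and len(pairs[-1]) == 1:
        pairs[-2].append(pairs.pop()[0])  # fold the leftover person into a trio
    return pairs

# Run every week/2 weeks/month and email each pair an introduction.
print(pair_up(["Alice", "Bob", "Carol", "Dan", "Eve"]))
```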

Comment by william_s on The career questions thread · 2015-06-21T12:52:05.424Z · EA · GW

80k now has career profiles on Software Engineering, Data Science, and doing a Computer Science PhD. I'm in a position where I could plausibly pursue any of these. What is the ratio of effective altruists currently pursuing each of these options, and where do you think adding an additional EA is of most value? (Having this information on the career profiles might be a nice touch.)

Comment by william_s on I am Nate Soares, AMA! · 2015-06-11T23:52:47.936Z · EA · GW

Are there any areas of the current software industry that developing expertise in might be useful to MIRI's research agenda in the future?

Comment by william_s on Solving donation coordination problems · 2015-05-28T04:10:34.229Z · EA · GW

I wonder if delaying donations might play a role as a crude comparison of room for more funding between different EA organizations, or reflect a desire to keep all current EA organizations afloat. A donor who wants to support EA organizations but is uncertain about which provides the most value might choose the heuristic "donate to the EA organization that is farthest from its fundraising target at the end of its fundraiser". If this is the case, providing better information for comparing EA organizations might help. Or an "EA Meta-Organization Fund" could be created that individual donors could fund, and which would then fund the individual organizations (according to room for more funding, avoiding organizations collapsing due to lack of funds, or according to an impact evaluation of the individual organizations).

Comment by william_s on Solving donation coordination problems · 2015-05-28T04:04:13.231Z · EA · GW

Would it work to run shorter fundraisers? If it's the case that most donation money is tied up in this dynamic, then running a shorter fundraiser wouldn't significantly reduce the amount of money raised (though, of course, that might not be true).

Comment by william_s on Solving donation coordination problems · 2015-05-28T04:02:18.396Z · EA · GW

Maybe price in the cost of staff time spent on the fundraiser - that is, if everyone donates immediately, it takes $X to fill the fundraiser, but if everyone donates at the end, it takes $X + $Y, where $Y is the cost of the additional staff time spent on the fundraiser.
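
A worked version of that adjustment, with all figures made up:

```python
# Hypothetical figures: price staff time into the fundraising target.
base_target = 100_000       # $X: enough if everyone donates immediately
weekly_staff_cost = 2_000   # cost of staff time spent running the fundraiser
extra_weeks = 6             # extra duration if donors all wait until the end

adjusted_target = base_target + weekly_staff_cost * extra_weeks  # $X + $Y
print(adjusted_target)  # 112000
```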

Comment by william_s on Log-normal lamentations · 2015-05-22T00:26:17.883Z · EA · GW

I wonder if there's a large amount of impact to be had by people outside the tail trying to enhance the effectiveness of people in the tail (this might look like being someone's personal assistant or sidekick, introducing someone in the tail to someone cool outside the EA movement, being a solid employee for someone who founds an EA startup, etc.). Being able to improve the impact of someone in the tail (even if you can't quantify what you accomplished) might avert the social-comparison aspect, as one would feel able to take at least partial credit for the accomplishments of the EA superstars.

Comment by william_s on Log-normal lamentations · 2015-05-22T00:19:04.743Z · EA · GW

One approach to this could be tying your self-esteem to something other than your personal impact. You might try setting your goal to "be an effective altruist" or "be a member of the effective altruist tribe". There are reasonable and achievable criteria for this (e.g. the GWWC pledge), and the performance of people in the tail in no way affects your ability to meet them. And while trying to improve one's own impact is a thing that effective altruists do, no specific level of success is necessary to satisfy the self-esteem criteria. A useful supplement to this attitude is a feeling of excitement about where effective altruism is going, a feeling that is actually enhanced by the achievements of the tail. ("I can't wait to see what these amazing people are going to accomplish!")

Comment by william_s on Log-normal lamentations · 2015-05-21T23:54:50.139Z · EA · GW

Maybe the status issues in the "lottery ticket" fields could be partially alleviated by a formal mechanism for redistributing credit for success according to the ex-ante probabilities. For the malaria vaccine example, you could create something like impact certificates covering the output of all EAs working in the area, and distribute them according to an ex-ante estimate of each researcher's usefulness, or some other agreed-on distribution. In that case, you would end up with a certificate saying you own x% of the discovery of the malaria vaccine, which would be pretty cool to have (and valuable, if the impact certificate market takes off).
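
A minimal sketch of that distribution step, with hypothetical researchers and made-up ex-ante estimates:

```python
# Split impact certificates for a joint research effort in proportion to
# agreed ex-ante estimates of each researcher's usefulness (all made up).
ex_ante_usefulness = {"alice": 5.0, "bob": 3.0, "carol": 2.0}

total = sum(ex_ante_usefulness.values())
shares = {name: u / total for name, u in ex_ante_usefulness.items()}

print(shares)  # {'alice': 0.5, 'bob': 0.3, 'carol': 0.2}
# If the vaccine is later discovered, alice holds a certificate for 50% of the
# discovery, regardless of who happened to make the final breakthrough.
```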

Comment by william_s on Log-normal lamentations · 2015-05-21T23:48:29.289Z · EA · GW

If anyone is ever at a point where they are significantly discouraged by thoughts along these lines (as I've been at times), there's an effective altruist self-help group where you can find other EAs to talk to about how you're feeling (and it really does help!). The group is hidden, but if you message me, I can point you in the right direction (or you can find information about it in the sidebar of the Effective Altruist Facebook group).

Comment by william_s on The effectiveness-alone strategy and evidence-based policy · 2015-05-08T19:08:41.081Z · EA · GW

I haven't heard of anything like this. To most EAs it might feel less important than identifying/supporting top charities, and actually providing value might require expertise both in the charity's area and in EA. It could be a good fit for someone with, say, a commitment to an existing organization but an interest in EA.

Comment by william_s on The effectiveness-alone strategy and evidence-based policy · 2015-05-07T17:51:35.747Z · EA · GW

Another application of the Effectiveness-alone strategy might be to create an EA organization aiming to improve the effectiveness of charities by applying EA ideas (as opposed to evaluating charities to find the best ones).

Comment by william_s on March Open Thread · 2015-03-20T21:14:25.323Z · EA · GW

When considering working for a startup/company with significant positive externalities, would it be far off to estimate your share of impact as (estimate of the company's total impact vs. the world where it did not exist) * (your equity share of the company)?

This seems easier to estimate than your impact on the company as a whole, and matches up with something like the impact certificate model (equity share seems like the best estimate we have of what an impact certificate division might look like). It's also possible that there are distortions in the allocation of money that would lead to an underestimate of true impact.

On the downside, it doesn't fully account for replaceability, and I'm not sure it meshes with the assessment that "negative externalities don't matter too much in most cases because someone else would take your job", which seems to be the typical EA position.
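
A rough sketch of the proposed estimate, with all figures hypothetical:

```python
# share of impact ≈ (company's total counterfactual impact) * (your equity share)
total_counterfactual_impact = 1_000_000  # company's impact vs. a world without it (arbitrary units)
equity_share = 0.001                     # e.g. 0.1% as an early-ish employee

your_impact = total_counterfactual_impact * equity_share
print(your_impact)  # 1000.0, before any replaceability adjustment
```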

Comment by william_s on Tech job Q&A · 2015-03-20T21:04:23.226Z · EA · GW

For people who have worked in the technology sector, what form has the most useful learning come in (e.g. learning from school, learning while working on a problem independently, learning while collaborating with people, learning from reading previous work/existing codebases)?

Comment by william_s on Tech job Q&A · 2015-03-20T19:55:11.405Z · EA · GW

It seems like the way to make the most money from working in tech jobs would be to identify startups/companies that are likely to do well in the future, work for them, and make money from the equity you get. For example, Dustin Moskovitz suggests that you can get a better return from trying to be employee #100 at the next Facebook or Dropbox than by being an entrepreneur. Any thoughts on how to identify startups/companies likely to do well/be valuable to work for, or at least how to rule out ones likely to fail? (It seems like this problem is well investigated, and hard, from an investor standpoint, but the employee standpoint is different.)

It seems like the correct approach would be to make predictions about the future performance of a bunch of startups and track the results, in order to calibrate your predictive model, but one would need time to build up a prediction history. Short of this, there might be heuristics that are somewhat helpful; e.g. I'd guess that startups with more funding or more employees are more likely to succeed, since more people have confidence in them and they have already survived for some period of time, but this also indicates that you are likely to get less equity.
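
A minimal sketch of tracking such predictions, scored here with a Brier score (all entries hypothetical):

```python
# Record probabilistic predictions about startups, then score calibration once
# outcomes are known; lower Brier score = better (always guessing 0.5 scores 0.25).
predictions = [
    ("startup_a", 0.7, 1),  # (name, assigned P(success), outcome: 1 = succeeded)
    ("startup_b", 0.4, 0),
    ("startup_c", 0.9, 0),
]

brier = sum((p - outcome) ** 2 for _, p, outcome in predictions) / len(predictions)
print(round(brier, 3))  # 0.353
```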

Comment by william_s on Tech job Q&A · 2015-03-19T22:07:14.995Z · EA · GW

What skills/experience do you think will be useful to have in 3-5 years, either in general or for EA projects?

Comment by william_s on January Open Thread · 2015-01-25T16:29:54.846Z · EA · GW

I also have had negative experiences with career-search stuff (more around making decisions). My suggestion, which I'm also going to try, is to find someone else who can support you through the career-search process: someone you can talk over decisions with, who can look over your applications, and who can maybe talk you through the time you spend feeling useless before applying. This could also help keep you from settling for an inferior job, since you would have to justify it to someone else.

I would also suggest, from experience, avoiding committing to a job at a time when you feel really down about yourself - I've done that before, and it would have been better to just wait. At least try to wait a few days, talk to some people about it, etc.

(Also, there's a Facebook group for EAs to help each other with personal issues, and it's the sort of place where you can post this stuff and get advice - messages are only visible to group members. Message me if you're interested and not already in it, and I can add you.)

Comment by william_s on The Outside Critics of Effective Altruism · 2015-01-05T23:33:00.352Z · EA · GW

I wonder what you would get if you offered a cash prize to whoever wrote the "best" criticism of EA, according to some criterion such as the opinion of a panel of specific EAs, or online voting on a forum. Obviously, this has a large potential for selection effects, but it might produce something interesting (either in the winner, or in other submissions that don't get selected because they are too good).

Comment by william_s on The perspectives on effective altruism we don't hear · 2015-01-02T18:27:59.812Z · EA · GW

I would like to note (although I don't quite know what to do with this information) that the proposed method of gathering feedback leaves out the at least 3 billion people who don't have internet access. In practice, it's probably also limited to gathering information from countries/languages with at least some EA presence already (and mainly English-speaking ones). Now, from an "optimize the spread of EA ideas" perspective, it might be reasonable to focus on wealthier countries to reach people with more leverage (i.e. higher expected earnings), but there are reasons to pay attention to this:

1) It could be very useful to have a population of EAs with background/lived experience in developing countries, to aid in idea generation for new international development programs.

2) EA might end up not spreading very much to people living in countries like China/India, which will become more economically important in the future.

3) We might end up making a mistake on some philosophically important issue due to biases in the background of most people in the EA movement. (I don't have a good example of what this looks like, but there might be, say, system 1 factors arising from the culture where you grow up that influence your position on issues of population ethics or something.)

I also don't know how to go about this on the object level, or whether it's the best place for marginal EA investment right now. (I also think that EA orgs involved in international development will have access to more of these diverse perspectives; my point is that those perspectives aren't present in the meta-level discussions.)

Comment by william_s on The perspectives on effective altruism we don't hear · 2015-01-02T18:12:43.348Z · EA · GW

Object-level suggestion for collecting diverse opinions (for a specific person to look through, to make it easier to see trends): have something like a Google Form where people can report the characteristics of an attempt to bring up EA ideas with a person or audience, along with comments on how the ideas were received. (This thread is a Schelling point now, but won't remain so in the future.)

Comment by william_s on Blind Spots: Compartmentalizing · 2015-01-02T17:56:02.114Z · EA · GW

When considering a controversial political issue, an EA should also think about whether there are positions to take that differ from those typically presented in the mainstream media. There might be alternatives that EA reasoning opens up which people traditionally avoid because they, for example, stick to deontological reasoning, believe that an act is either right or wrong in all cases, and hold that these restrictions should be codified into law.

For the object-level example raised in the article, the traditional framing is "abortion should be legal" vs. "abortion should be illegal". Other alternatives might be, for example, performing social interventions aimed at reducing the number of abortions within a framework where abortion is legal (e.g. increasing the social support offered to single mothers, so that fewer people choose to have an abortion).

Comment by william_s on Blind Spots: Compartmentalizing · 2015-01-02T17:08:24.497Z · EA · GW

I think if you want people to think about the meta-level, you would be better off with a post that says "suppose you have an argument for abortion" or "suppose you believe this simple argument X for abortion is correct" (where X is obviously a strawman, raised as a hypothetical), and asks "what ought you to do, assuming this belief is true". There may be a less controversial topic to use in this case.

If you want to start an object-level discussion on abortion (which, if you believe this argument is true, it seems you ought to), it might be helpful to circulate the article you want to use to start the discussion to a few EAs with varying positions on the topic for feedback before posting, because it is on a topic likely to trigger political buttons.

Comment by william_s on Figuring Good Out - Launch Thread · 2014-12-24T02:50:38.376Z · EA · GW

While I don't think I would actually write a whole post for this, I might have a couple of quick ideas to throw into a comments section. I'd suggest explicitly asking for comments and half-formed ideas in the summary post, to see if it produces anything interesting.

Comment by william_s on Christmas 2014 Open Thread (Open Thread 7) · 2014-12-15T23:22:37.248Z · EA · GW

As a consideration in favour: there may be behaviours in the founder-VC relationship that negatively impact the founders (this comes up in http://paulgraham.com/fr.html), such as trying to hold off committing for as long as possible. EA VCs could try to bypass these to improve the odds of startup success.

Comment by william_s on Christmas 2014 Open Thread (Open Thread 7) · 2014-12-15T23:16:48.906Z · EA · GW

As a consideration against: the halo effect might cloud EA investors' judgement of the odds of success for EA entrepreneurs.

Comment by william_s on Christmas 2014 Open Thread (Open Thread 7) · 2014-12-15T23:14:06.096Z · EA · GW

Something in developing world entrepreneurship that gives you a good position to spot opportunities for/carry out other developing world entrepreneurship.

Comment by william_s on Anti Publication Bias Registry · 2014-12-13T18:34:22.189Z · EA · GW

If this turns out to be something people find useful, it might also be useful to have people who watch the wiki and provide feedback/advice on the proposed study designs, or who can help people who are less familiar with study design and statistics to produce something useful. This would provide an additional service along with the preregistration, so it isn't just an extra onerous task. (I'd be willing to do this if it seems useful.)

I'm somewhat doubtful that this experiment registry will attract a lot of use, but +1 for setting it up to try it out.

Comment by william_s on Spitballing EA career ideas · 2014-12-02T21:22:27.385Z · EA · GW

I know someone who would be interested in looking through a list of organizations like this right now (hoping to find places to work).

Comment by william_s on Spitballing EA career ideas · 2014-12-02T21:15:24.416Z · EA · GW

A couple of examples I've run across: DataWind (http://en.wikipedia.org/wiki/DataWind), which is now at a more mature stage; I went to a talk by one of the founders recently. They made a really cheap tablet and internet services that work over 2G, which opens up the market of the large sections of India currently without internet access. I think they could end up being quite successful.

An early-stage example is EyeCheck (http://www.eyechecksolutions.com/), started by a couple of engineers out of undergrad. They're developing a tool to improve the diagnosis of vision problems and increase the efficiency of providing glasses (I think they're starting by working with NGOs running vision camps).