Posts

Do impact certificates help if you're not sure your work is effective? 2020-02-12T14:13:25.689Z · score: 19 (9 votes)
What analysis has been done of space colonization as a cause area? 2019-10-09T20:33:27.473Z · score: 12 (8 votes)
What actions would obviously decrease x-risk? 2019-10-06T21:00:24.025Z · score: 21 (11 votes)
How effective is household recycling? 2019-08-29T06:13:46.296Z · score: 7 (5 votes)
What is the current best estimate of the cumulative elasticity of chicken? 2019-05-03T03:27:57.603Z · score: 22 (10 votes)
Confused about AI research as a means of addressing AI risk 2019-02-21T00:07:36.390Z · score: 31 (10 votes)
[Offer, Paid] Help me estimate the social impact of the startup I work for. 2019-01-03T05:16:48.710Z · score: 7 (3 votes)

Comments

Comment by reallyeli on My personal cruxes for working on AI safety · 2020-02-15T23:51:46.456Z · score: 1 (1 votes) · EA · GW

I'm saying we need to specify more than, "The chance that the full stack of individual propositions evaluates as true in the relevant direction." I'm not sure if we're disagreeing, or ... ?

Comment by reallyeli on My personal cruxes for working on AI safety · 2020-02-15T02:48:09.832Z · score: 1 (1 votes) · EA · GW

Suppose you're in the future and you can tell how it all worked out. How do you know if it was right to work on AI safety or not?

There are a few different operationalizations of that. For example, you could ask whether your work obviously directly saved the world, or you could ask whether, if you could go back and do it over again with what you know now, you would still work in AI safety.

The percentage would be different depending on what you mean. I suspect Gordon and Buck might have different operationalizations in mind, and I suspect that's why Buck's number seems crazy high to Gordon.

Comment by reallyeli on My personal cruxes for working on AI safety · 2020-02-13T22:40:48.527Z · score: 13 (5 votes) · EA · GW

I agree with this intuition. I suspect the question that needs to be asked is "14% chance of what?"

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-13T22:34:08.242Z · score: 3 (2 votes) · EA · GW

I'm deciding whether organization A is effective. I see some respectable people working there, so I assume they must think work at A is effective, and I update in favor of A being effective. But unbeknownst to me, those people don't actually think work at A is effective; they trade their impact certificates to other folks who do. I don't know these other folks.

Based on the theory that it's important to know who you're trusting, this is bad.

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-13T22:29:45.121Z · score: 1 (1 votes) · EA · GW

"The sense in which employees are deferring to their employer's views on what to do" sounds fine to me, that's all I meant to say.

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-13T17:00:03.593Z · score: 1 (1 votes) · EA · GW

Sure, I agree that if they're anonymous forever you can't do much. But that was just the generating context; I'm not arguing only against anonymity.

I'm arguing against impact certificate trading as a *wholesale replacement* for attempting to update each other. If you are trading certificates with someone, you are deferring to their views on what to do, which is fine, but it's important to know you're doing that and to have a decent understanding of why you differ.

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-13T00:28:18.286Z · score: 1 (1 votes) · EA · GW

I agree with this. I wasn't trying to make a hard distinction between empirical and moral worldviews. (Not sure if there are better words than 'means' and 'ends' here.)

I think you've clarified it for me. It seems to me that impact certificate trades have little downside when there is persistent, intractable disagreement. But in other cases, deciding to trade rather than to attempt to update each other may leave updates on the table. That's the situation I'm concerned about.

For context, I was imagining a trade with an anonymous partner, in a situation where you have reason to believe you have more information about org A than they do (because you work there).

Comment by reallyeli on Short-Term AI Alignment as a Priority Cause · 2020-02-12T16:55:39.871Z · score: 3 (2 votes) · EA · GW

I think this is an interesting topic. However, I downvoted because if you're going to claim something is the "greatest priority cause," which is quite a claim, I would at least want to see an analysis of how it fares against other causes on scale, tractability, and neglectedness.

(Basically I agree with MichaelStJules's comment, except I think the analysis need not be quantitative.)

Comment by reallyeli on Do impact certificates help if you're not sure your work is effective? · 2020-02-12T15:59:08.749Z · score: 1 (1 votes) · EA · GW

Hmm, your first paragraph is indeed a different perspective than the one I had. Thanks! I remain unconvinced though.

Casting it as moral trade gives me the impression that impact certificates are for people who disagree about ends, not for people who agree about ends but disagree about means. In the case where my buyer and I both have the same goals (e.g. chicken deaths prevented), why would I trust their assessment of chicken-welfare org A more than I trust my own? (Especially since presumably I work there and have access to more information about it than they do.)

Some reasons I can imagine:

- I might think that the buyer is wiser than me and want to defer to them on this point. In this case I'd want to be clear that I'm deferring.

- I might think that no individual buyer is wiser than me, but the market aggregates information in a way that makes it wiser than me. In this case I'd want a robust market, probably better than PredictIt.

Comment by reallyeli on [AN #80]: Why AI risk might be solved without additional intervention from longtermists · 2020-01-18T16:21:25.190Z · score: 3 (3 votes) · EA · GW

I had the same reaction (checking in my head that a 10% chance still merited action).

However, I really think we ought to be able to discuss guesses about what's true merely on the level of what's true, without thinking about secondary messages being sent by some statement or another. It seems to me that if we're unable to do so, the already difficult task of finding truth will become even harder.

Comment by reallyeli on In praise of unhistoric heroism · 2020-01-08T15:38:19.845Z · score: 4 (4 votes) · EA · GW

Ha, no I am an unrelated Eli.

Comment by reallyeli on In praise of unhistoric heroism · 2020-01-07T20:41:42.449Z · score: 5 (5 votes) · EA · GW

I like this point

But you are way more likely to end up being Dorothea.

because it emphasizes that the reason to have this mindset is a fact about the world. Sometimes, when I encounter statements like this, it can be easy for them to "bounce off" because I object "oh, of course it's adaptive to think that way... but that doesn't mean it's actually true." It was hard for this post to "bounce off" me because of the force of this point.

Comment by reallyeli on EA Hotel Fundraiser 5: Out of runway! · 2019-10-28T21:37:11.806Z · score: 18 (13 votes) · EA · GW

(I think the tone of this comment is the reason it is being downvoted. Since we all presumably believe that EA should be evidential, rational and objective, stating it again reads as a strong attack, as if you were trying to point out that no assessment of impact had been done, even though the original post links to some.)

Comment by reallyeli on EA Hotel Fundraiser 5: Out of runway! · 2019-10-28T18:41:40.154Z · score: 52 (23 votes) · EA · GW

I donated $1000 since it seems to me that something like the EA Hotel really ought to exist, and it would be really sad if it went under.

I'm posting this here so that, if you're debating donating, you have the additional data point of knowing that others are doing so.

Comment by reallyeli on What analysis has been done of space colonization as a cause area? · 2019-10-12T16:01:42.937Z · score: 4 (3 votes) · EA · GW

FWIW, I don't find it at all surprising when people's moral preferences contradict themselves (in terms of likely implications, as you say). I myself have many contradictory moral preferences.

Comment by reallyeli on What analysis has been done of space colonization as a cause area? · 2019-10-11T21:36:28.185Z · score: 1 (1 votes) · EA · GW

Awesome, I should have checked Kelsey Piper first. Thank you!

Comment by reallyeli on Off-Earth Governance · 2019-10-11T21:35:40.362Z · score: 1 (1 votes) · EA · GW

A fictional treatment of these issues that you might be interested in is the book https://en.wikipedia.org/wiki/2312_(novel) by Kim Stanley Robinson. Spacefarers are genetically distinct from Earth-dwelling humans; each planet is its own political entity.

To me, determining what will happen in the future seems less and less possible the farther we go out, to the point where I think there are no arguments that would give me a high degree of confidence in a statement about the far future like the one you put up here. For any story that supports a particular outcome, it seems there is an equally compelling story that argues against it. :)

Comment by reallyeli on What analysis has been done of space colonization as a cause area? · 2019-10-11T21:24:50.955Z · score: 4 (3 votes) · EA · GW

Thanks for the perspective on dissenting views!

Comment by reallyeli on Andreas Mogensen's "Maximal Cluelessness" · 2019-10-06T21:29:04.254Z · score: 3 (2 votes) · EA · GW

By accumulating resources for the future, we give increased power to whatever future decision-makers we bequeath these resources to. (Whether those decision-makers are us in 20 years, or our descendants in 200 years.)

In a clueless world, why do we think that increasing their power is good? What if those future decision makers make a bad decision, and the increased resources we've given them mean the impact is worse?

In other words, if we are clueless today, why will we be less clueless in the future? One might hope that cluelessness decreases monotonically over time as we learn more, but the risk of a large mistake grows along with the resources we hand over.

Comment by reallyeli on How do most utilitarians feel about "replacement" thought experiments? · 2019-09-26T03:40:18.072Z · score: 1 (1 votes) · EA · GW

I found this very helpful.

Comment by reallyeli on Arguments for moral indefinability · 2019-08-30T19:24:47.013Z · score: 4 (3 votes) · EA · GW

This post really turned on a lightbulb for me, and I have thought about it consistently in the months since I read it.

Comment by reallyeli on How effective is household recycling? · 2019-08-29T18:19:01.891Z · score: 1 (1 votes) · EA · GW

Thank you!

Comment by reallyeli on How do you, personally, experience "EA motivation"? · 2019-08-29T05:17:48.578Z · score: 5 (3 votes) · EA · GW

I'm very happy to see this being discussed, and have enjoyed reading others' answers.

Upon reflection, I seem to have a few different motivations; this was a surprise to me, as I expected to find a single overarching one.

a) Imagining another person's experience, leading to imagining what it is like to experience some particular suffering that I can see they are experiencing. Imagining "what it is like" involves focusing on details of the experience and rejecting generalities (not "I have cancer" but "I am trying to reach down in the shower in the morning but can't and the water is too hot"). Soon my train of thought goes to a more objective or detached place, and I think about how there is no real difference between me and the other person, and that except for blind circumstance there is no reason they should suffer when I do not.

There is an erasure of self involved. I imagine the core of my consciousness, the experiencing self, inhabiting the other person's body and mind. From this one example I generalize; of course I should treat another person's suffering the same as my own, because in the final analysis there is no difference between me and other people. That's the altruism; desire for effectiveness is secondary and instrumental, not terminal.

b) Zooming out and imagining the whole of the world leads to imagining all the evil in the world. (Where "evil" is a broad term including suffering due to carelessness, due to misaligned incentives, due to lack of coordination, due to accident, etc.) It's overwhelming; there's a sense of perverse wonder. "The works of Moloch are as many and burn as cruelly as the white-hot stars." This leads to a powerful feeling of being "fed-up" with the bad things. The desire for them to stop is like a very strong version of the desire to clean up an untidy room. It's abstract and not connected to any one person's suffering. This tends to be a stronger motivating force than a); if a) is empathy, this is anger.

Eliezer's fiction is particularly good at conjuring this mind-state for me: for example, the "Make it stop" scene in http://yudkowsky.net/other/fiction/the-sword-of-good .

This mind-state seems more inherently connected to effectiveness than a), though effectiveness is still instrumental and not terminal. I want us to be making a strong/effective stand against the various bad things; when we're not doing that, I am frustrated. I am less willing to tolerate "weakness"/ineffectiveness because I conceptualize us as in a struggle with high stakes.

Comment by reallyeli on Concrete Ways to Reduce Risks of Value Drift and Lifestyle Drift · 2019-08-14T22:01:10.330Z · score: 3 (3 votes) · EA · GW

This strikes me as incredibly good advice.

Comment by reallyeli on Key points from The Dead Hand, David E. Hoffman · 2019-08-10T00:56:16.741Z · score: 7 (6 votes) · EA · GW

Just wanted to say I thought this post was great and really appreciate you writing it! I have a hard-to-feed hunger to know what the real situation with nuclear weapons is like, and this is one of the only things to touch it in the past few years. Any other resources you'd recommend?

I'm surprised and heartened to hear some evidence against the "Petrov singlehandedly saved the world" narrative. Is there somewhere I can learn about the other nuclear 'close calls' described in the book? (should I just read the book?)

Comment by reallyeli on Four practices where EAs ought to course-correct · 2019-07-31T21:24:36.857Z · score: 0 (2 votes) · EA · GW

Thanks for the response. That theory seems interesting and reasonable, but to my mind it doesn't constitute strong evidence for the claim. The claim is about a very complex system (international politics) and requires a huge weight of evidence.

I think we may be starting from different positions: if I imagine believing that the U.S. military is basically a force for good in the world, what you're saying sounds more intuitively appealing. However, I do not believe (nor disbelieve) this.

Comment by reallyeli on Four practices where EAs ought to course-correct · 2019-07-31T04:41:11.204Z · score: 5 (5 votes) · EA · GW

Although I think this post says some important things, I downvoted because some conclusions appear to be reached very quickly, without what to my mind is the right level of consideration.

For example, "True, there is moral hazard involved in giving better tools for politicians to commit to bad policies, but on my intuition that seems unlikely to outright outweigh the benefits of success - it would just partially counterbalance them." My intuition says the opposite of this. I don't think it's at all clear (whether increasing the capability of the U.S. military is a good or bad thing).

I agree that object-level progress is to be preferred over meta-level progress on methodology.

Comment by reallyeli on Effective Altruism is an Ideology, not (just) a Question · 2019-07-06T15:40:13.921Z · score: 4 (4 votes) · EA · GW

I gave this post a strong upvote. It articulated something which I feel but have not articulated myself. Thank you for the clarity of writing which is on display here.

That said, I have some reservations which I would be interested in your thoughts on. When we argue about whether something is an ideology or not, we are assuming that the word "ideology" is applied to some things and not others, and that whether or not it is applied tells us useful things about the things it is applied to.

I am convinced that on the spectrum of movements, we should put effective altruism closer to libertarianism and feminism than the article you're responding to would indicate. But what is on the other end of this spectrum? Is there a movement/"ism" you can point to that you'd say we should put on the other side of where we've put EA -- **less** ideological than it?

Comment by reallyeli on Effective Altruism is an Ideology, not (just) a Question · 2019-07-06T15:22:18.195Z · score: 1 (1 votes) · EA · GW

I wish I could triple-upvote this post.

You can! :P. Click-and-hold for "strong upvote."

Comment by reallyeli on Doing good while clueless · 2019-06-02T22:53:07.307Z · score: 1 (1 votes) · EA · GW

Therefore, we ought to prioritize interventions that improve the wisdom, capability, and coordination of future actors.

If we operate under the "ethical precautionary principle" you laid out in the previous post (always behave as if there was another crucial consideration yet to discover), how do we do this? We might think that some intervention will increase the wisdom of future actors, based on our best analysis of the situation. But we fear a lurking crucial consideration that will someday pounce and reveal that actually the intervention did nothing, or did the opposite.

In other words, don't we need to be *somewhat* clueful already in order to bootstrap our way into more cluefulness?

Comment by reallyeli on How tractable is cluelessness? · 2019-06-02T19:13:07.701Z · score: 4 (3 votes) · EA · GW

Thank you for this series — I think this is an enormously important consideration when trying to do good, and I wish it were talked about more.

I am rereading this, and find myself nodding along vigorously to this paragraph:

I think this implies operating under an ethical precautionary principle: acting as if there were always an unknown crucial consideration that would strongly affect our decision-making, if only we knew it (i.e. always acting as if we are in the “no we can’t become clueful enough” category).

But not the following one:

Does always following this precautionary principle imply analysis paralysis, such that we never take any action at all? I don’t think so. We find ourselves in the middle of a process that’s underway, and devoting all of our resources to analysis & contemplation is itself a decision (“If you choose not to decide, you still have made a choice”).

Perhaps we indeed should move towards "analysis paralysis" and reject actions whose long-term effects we are not highly certain of. Given the maxim that we should always act as if we are in the "no we can't become clueful enough" category, this approach would reject actions that we anticipate will have large long-term effects (e.g. radically changing government policy, founding a company that becomes very large). But it's not clear to me that it would reject all actions. Intuitively, P(cooking myself this fried egg will have large long-term effects) is low.

We can ask ourselves whether we are always in the position of the physician treating baby Hitler: every day when we go into work, we face many seemingly inconsequential decisions that are actually very consequential. i.e. P(cooking myself this fried egg will have large long-term effects) is actually high. But this doesn't seem self-evident.

In other words, it might be tractable to minimize the number of very consequential decisions that the world makes, and this might be a way out of extreme consequentialist cluelessness. For example, imagine a world made up of many populated islands, where overseas travel is impossible and so the islands are causally separated. In such a world, the possible effects of any one action end at the island where it started, so the consequences of any one action are capped in a way they are not in our world.

It seems to me that this approach would imply an EA that looks very different than the current one (and recommendations that look different than the ones you make in the next post). But it may also be a sub-consideration of the general considerations you lay out in your next post. What do you think?

Comment by reallyeli on Please use art to convey EA! · 2019-05-25T22:57:00.188Z · score: 16 (7 votes) · EA · GW

Have you heard of Harry Potter and the Methods of Rationality (http://www.hpmor.com/) and/or http://unsongbook.com ? I think they serve some of this role for the community already.

It's interesting they are both long-form web fiction; we don't have EA tv shows or rock bands that I know of.

Comment by reallyeli on Stories and altruism · 2019-05-25T05:09:55.912Z · score: 1 (1 votes) · EA · GW

Thanks for posting about this! The experiences I've had with art feel like a big part of what motivates my altruism.

One of the ways art can encourage altruism is by rendering real the life of another person, making you experience their suffering or joy as your own. Many pieces of art have this effect on me, too many to name -- indeed I think of it as a defining quality of good art.

Another way art can encourage altruism is by taking a zoomed-out perspective and engaging with moral ideals in the abstract. This you might call "humanistic". I've mostly listed works of this second type below, as art of the first type is too numerous to name.

Books

- The Dispossessed by Ursula K. LeGuin is very meaningful to me as a vision of what a society where we cared "sufficiently" about others might look like.

- All Kurt Vonnegut, a very humanistic writer. God Bless You, Mr. Rosewater is explicitly about a philosophically-minded billionaire who decides to give his wealth away to the poor, and the consequences of that decision.

- George Saunders, another very humanistic writer. Tenth of December is great. https://www.newyorker.com/magazine/2012/10/15/the-semplica-girl-diaries is a great one of his about the banality of evil.

Poems

- https://www.pw.org/content/akhmatova_by_matthew_dickman

- https://www.newyorker.com/magazine/2008/08/11/trouble-poem-matthew-dickman (Content warning: suicide)

- https://www.poetryfoundation.org/poems/52173/what-work-is

Movies

- https://en.wikipedia.org/wiki/In_Jackson_Heights (a long, quiet, slice-of-life documentary that jumps between people)

- https://en.wikipedia.org/wiki/Death_by_Hanging (the Japanese police botch an execution, causing the criminal to lose all his memories of the crime; the police, panicking, try to jog his memory so they can execute him like they're supposed to)

Comment by reallyeli on How does one live/do community as an Effective Altruist? · 2019-05-16T01:36:24.107Z · score: 6 (3 votes) · EA · GW

Hi!

You write "I don't know how much of our time this is worth", but to me it seems clear that this is worth a *lot* of our time.

I have a model of human motivation. One aspect of my model is that it is very hard for most people (myself very included) to remain motivated to do something that does not get them any social rewards from the people around them.

Others on this forum have written about "values drift" (https://forum.effectivealtruism.org/posts/eRo5A7scsxdArxMCt/concrete-ways-to-reduce-risks-of-value-drift) and the role community plays in it.

Comment by reallyeli on What is the current best estimate of the cumulative elasticity of chicken? · 2019-05-07T03:58:16.716Z · score: 1 (1 votes) · EA · GW

I like the idea of using food scares as a proxy! Very cool.

It sounds like you are saying that knowing "how will kg of chicken sold change given a change in price" will let you answer "how will kg of chicken sold change given me not buying chicken." I don't quite see how to do this; could you give me a pointer? (For concreteness, what does the paper's estimate of the elasticity of poultry at 0.68 mean for "kg of chicken sold given I don't buy the chicken"?)

Perhaps more importantly, it sounds like you might disagree that one person abstaining from eating chicken has a meaningful impact on the number of chickens raised + killed. If so I'm quite interested, since sources like https://reducing-suffering.org/does-vegetarianism-make-a-difference/ have convinced me of the opposite.

My current model is that if I buy the meat of one chicken at a supermarket, that *in expectation* causes about one chicken to be raised + killed.
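
To make that model concrete, here is a rough back-of-the-envelope sketch of the standard equilibrium-displacement approximation I have in mind. The elasticity numbers are illustrative assumptions on my part, not figures from the paper:

```python
# Back-of-the-envelope sketch: expected reduction in chickens raised when
# one buyer forgoes one chicken. Under a linear approximation, the fraction
# of forgone demand that shows up as reduced production is roughly
#   supply_elasticity / (supply_elasticity + |demand_elasticity|),
# since the small price drop induces other buyers to pick up some slack.
# The elasticity values used below are illustrative assumptions only.

def expected_chickens_prevented(demand_elasticity: float,
                                supply_elasticity: float,
                                chickens_forgone: float = 1.0) -> float:
    factor = supply_elasticity / (supply_elasticity + abs(demand_elasticity))
    return factor * chickens_forgone

print(expected_chickens_prevented(demand_elasticity=-0.7, supply_elasticity=0.3))
# ~0.3 chickens per chicken forgone, under these assumed elasticities
```

If a factor like this is well below 1, my "about one chicken per chicken" model would need revising downward, which is exactly the kind of correction I'm trying to pin down.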

Comment by reallyeli on What is the current best estimate of the cumulative elasticity of chicken? · 2019-05-06T17:31:02.493Z · score: 1 (1 votes) · EA · GW

Thanks for finding this paper. But I think they are answering the question "If I change price, what happens to demand?", while I am asking "If demand drops (me not buying any chicken), what happens to total quantity sold?"

Comment by reallyeli on What is the current best estimate of the cumulative elasticity of chicken? · 2019-05-06T17:28:39.057Z · score: 2 (2 votes) · EA · GW

It doesn't seem consistent to me to say "I'm too small of an actor to affect price, but not to affect quantity sold."

Thank you for the small education in economics of consideration 2, though. I've read the Wikipedia article and found it helpful, although I have further questions. Are there goods that economists think do work like what my friend is describing? Is there a name for goods like this?

Comment by reallyeli on What is the current best estimate of the cumulative elasticity of chicken? · 2019-05-06T17:20:12.819Z · score: 2 (2 votes) · EA · GW

Thanks, Samara. I found the paper you're talking about here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2804646/pdf/216.pdf

I'm out of my depth here, but it looks like the paper is answering the question "if the price of chicken changes from $X/kg to $(X + Y)/kg, how will kg of chicken sold change?", while the question I'm asking is "if I don't buy chicken, how will kg of chicken sold change?"

Comment by reallyeli on Burnout: What is it and how to Treat it. · 2019-04-19T02:19:00.370Z · score: 1 (1 votes) · EA · GW

This is how it feels to me to be mentally fatigued.

Comment by reallyeli on Who in EA enjoys managing people? · 2019-04-13T02:12:57.277Z · score: 3 (2 votes) · EA · GW

I've done management (of software engineers in a startup) and decided to move away from it for now, but can see a future in which I do more of it.

Comment by reallyeli on Workshop: Strategy, ideas, and life paths for reducing existential risks · 2019-03-29T03:59:34.969Z · score: 1 (1 votes) · EA · GW

I am quite interested but am in Boston. Do you know of similar events in my area or in the US?

Comment by reallyeli on Suggestions for EA wedding vows? · 2019-03-23T15:10:12.605Z · score: 5 (4 votes) · EA · GW

I love "What sets us against one another..." and feel this is the best expression of an idea which is powerful to me. I had not found such a short expression of it before. Thank you for it.

Comment by reallyeli on Confused about AI research as a means of addressing AI risk · 2019-03-17T01:32:42.243Z · score: 1 (1 votes) · EA · GW

Thanks, this and particularly the Medium post were helpful.

So to restate what I think your model around this is, it's "the efficiency gap determines how tractable social solutions will be (if < 10% they seem much more tractable), and technical safety work can change the efficiency gap."

Comment by reallyeli on Confused about AI research as a means of addressing AI risk · 2019-02-24T16:07:58.584Z · score: 5 (3 votes) · EA · GW

Thanks for the link. So I guess I should amend what Paul and OpenAI's goal seems like to me, to "create AGI, make sure it's aligned, and make sure it's competitive enough to become widespread."

Comment by reallyeli on Confused about AI research as a means of addressing AI risk · 2019-02-21T05:07:46.146Z · score: 10 (4 votes) · EA · GW

OK, this is what I modeled AI alignment folks as believing. But doesn't the idea that whoever builds AGI first wins decisively rely on a "hard takeoff" scenario? This is a view I associate with Eliezer. But Paul in the podcast says that he thinks a gradual takeoff is more likely, and envisions a smooth gradient of AI capability such that human-level AI comes into existence in a world where slightly stupider AIs already exist.

The relevant passage:

and in particular, when someone develops human level AI, it’s not going to emerge in a world like the world of today where we can say that indeed, having human level AI today would give you a decisive strategic advantage. Instead, it will emerge in a world which is already much, much crazier than the world of today, where having a human AI gives you some more modest advantage.

So I get why you would drop everything and race to be the first to build an aligned AGI if you're Eliezer. But if you're Paul, I'm not sure why you would do this, since you think it will only give you a modest advantage.

(Also, if the idea is to build your AGI first and then use it to stop everyone else from building their AGIs -- I feel like that second part of the plan should be fronted a bit more! "I'm doing research to ensure AI does what we tell it to" is quite a different proposition from "I'm doing research to ensure AI does what we tell it to, so that I can build an AI and tell it to conquer the world for me.")

Comment by reallyeli on [Offer, Paid] Help me estimate the social impact of the startup I work for. · 2019-01-26T06:58:22.665Z · score: 1 (1 votes) · EA · GW

Thanks Ozzie, this is helpful!

Comment by reallyeli on [Offer, Paid] Help me estimate the social impact of the startup I work for. · 2019-01-26T06:56:55.178Z · score: 2 (2 votes) · EA · GW

The former. To your other comment -- yes, I've gotten a number of emails! :)

Comment by reallyeli on [Offer, Paid] Help me estimate the social impact of the startup I work for. · 2019-01-21T15:47:03.067Z · score: 2 (2 votes) · EA · GW

Thanks very much for the comment, Ozzie.

I share the idea that U.S. educational issues are not the most efficient ones to be working on, all else equal. My question arises because it's not obvious to me that all else is equal in my case. (Though I think the burden of proof should be on me here.) For example, I have a pretty senior role in the organization, and therefore presumably have higher leverage. How should I factor considerations like that in? (Or is it misguided to do so?)

I'm curious also about your statement that it's hard to have much counterfactual impact in the for-profit world. I've been struggling with similar questions. Why do you think so?
