Posts

Individual Project Fund: Further Details 2016-12-31T22:16:11.995Z · score: 9 (9 votes)
My Donations for 2016 2016-12-28T17:03:22.239Z · score: 15 (15 votes)

Comments

Comment by jsteinhardt on Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team · 2019-12-03T05:59:48.137Z · score: 12 (5 votes) · EA · GW

Okay, thanks for the clarification. I now see where the list comes from, although I personally am bearish on this type of weighting. For one, it ignores many people who are motivated to make AI beneficial for society but don't happen to frequent certain web forums or communities. For another, in my opinion it underrates the benefit of extremely competent peers and overrates the benefit of like-minded peers.

While it's hard to give generic advice, I would advocate for going to the school that is best at the research topic one is interested in pursuing, or where there is otherwise a good fit with a strong PI (though basing the decision on a single PI, rather than on one's top two or three, can sometimes backfire). If one's interests are not developed enough to give a good sense of topic or PI, then I would go with the general strength of the program.

Comment by jsteinhardt on Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team · 2019-12-01T20:12:39.259Z · score: 2 (7 votes) · EA · GW

I'm not sure what the metric for the "good schools" list is, but the ranking seemed off to me. Berkeley, Stanford, MIT, CMU, and UW are generally considered the top CS (and ML) schools. Toronto is also top-10 in CS and particularly strong in ML. All of these rankings are of course a bit silly, but I still find it hard to justify the given list unless being located in the UK is somehow counted as a large bonus.

Comment by jsteinhardt on Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team · 2019-12-01T20:03:08.779Z · score: 8 (5 votes) · EA · GW

I intended the document to be broader than a research agenda. For instance, I describe many topics that I'm not personally excited about but that other people are, and where the excitement seems defensible. I also go into a lot of detail on the reasons that people are interested in different directions. It's not a literature review in the sense that the references are far from exhaustive, but I personally don't know of any better resource for learning about what's going on in the field. Of course, as the author, I'm biased.

Comment by jsteinhardt on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-01T15:20:56.065Z · score: 7 (7 votes) · EA · GW

Given that Nick has a PhD in Philosophy, and that OpenPhil has funded a large amount of academic research, this explanation seems unlikely.

Disclosure: I am working at OpenPhil over the summer. (I don't have any particular private information; both of the above facts are publicly available.)

EDIT: I don't intend to make any statement about whether EA as a whole has an anti-academic bias, just that this particular situation seems unlikely to reflect that.

Comment by jsteinhardt on Comparative advantage in the talent market · 2018-04-17T00:48:32.763Z · score: 1 (1 votes) · EA · GW

If we think of the community as needing one ops person and one research person, the marginal value in each area drops to zero once that role is filled.

Yes, but these effects only show up when the number of jobs is small. In particular: If there are already 99 ops people and we are looking at having 99 vs. 100 ops people, the marginal value isn't going to drop to zero. Going from 99 to 100 ops people means that mission-critical ops tasks will be done slightly better, and that some non-critical tasks will get done that wouldn't have otherwise. Going from 100 to 101 will have a similar effect.
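
As a toy numerical illustration (a sketch with made-up value functions, not anything from the original discussion): the "one role to fill" picture gives a marginal value that drops to zero after the first hire, whereas any smoothly diminishing value function keeps the 99-to-100 and 100-to-101 marginal values essentially equal and well above zero.

```python
import math

# Toy models (made-up numbers) of the total value of n ops hires.
def step_value(n):
    return 10 * min(n, 1)        # "one role to fill": value saturates after the first hire

def smooth_value(n):
    return 10 * math.sqrt(n)     # smoothly diminishing, never-zero marginal returns

for value in (step_value, smooth_value):
    marginals = [round(value(n + 1) - value(n), 2) for n in (0, 1, 99, 100)]
    print(value.__name__, marginals)
# step_value   [10, 0, 0, 0]        -> marginal value is zero once the single role is filled
# smooth_value [10, 4.14, 0.5, 0.5] -> the 99->100 and 100->101 hires are worth about the same
```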

In contrast, in the traditional comparative advantage setting, there remain gains-from-coordination/gains-from-trade even when the total pool of jobs/goods is quite large.

The fact that gains-from-coordination only show up in the small-N regime here, whereas they show up even in the large-N regime traditionally, seems like a crucial difference that makes it inappropriate to apply standard intuition about comparative advantage in the present setting.

If we want to analyze this more from first principles, we could pick one of the standard justifications for considering comparative advantage and I could try to show why it breaks down here. The one I'm most familiar with is the one by David Ricardo (https://en.wikipedia.org/wiki/Comparative_advantage#Ricardo's_example).
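
For concreteness, here is a minimal sketch of Ricardo's example, using the standard labor-hour numbers from the linked article (the code is purely illustrative of the classical trade setting, not of the talent-market setting above):

```python
# Hours of labor needed to produce one unit of each good (Ricardo's classic numbers).
hours = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},
}

# Portugal has an absolute advantage in both goods, but the opportunity costs differ:
for country, h in hours.items():
    print(country, "opportunity cost of 1 cloth =", round(h["cloth"] / h["wine"], 3), "wine")
# England  ~0.833 wine per cloth -> comparative advantage in cloth
# Portugal  1.125 wine per cloth -> comparative advantage in wine

# Autarky baseline: each country uses its labor to make 1 cloth + 1 wine.
labor = {c: h["cloth"] + h["wine"] for c, h in hours.items()}  # England 220, Portugal 170

# Full specialization along comparative advantage yields more of both goods in total,
# which is the gain-from-trade that persists even when quantities are large.
cloth = labor["England"] / hours["England"]["cloth"]   # 2.2 units of cloth (vs. 2 in autarky)
wine = labor["Portugal"] / hours["Portugal"]["wine"]   # 2.125 units of wine (vs. 2 in autarky)
print(cloth, wine)
```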

Comment by jsteinhardt on Comparative advantage in the talent market · 2018-04-13T02:34:44.699Z · score: 4 (4 votes) · EA · GW

I'm worried that you're misapplying the concept of comparative advantage here. In particular, if agents A and B both have the same values and are pursuing altruistic ends, comparative advantage should not play a role: both agents should just do whatever they have an absolute advantage at (taking into account marginal effects, though in a large population these should often not matter).

For example: suppose that EA has a "shortage of operations people" but person A determines that they would have higher impact doing direct research rather than doing ops. Then in fact the best thing is for person A to work on direct research, even if there are already many other people doing research and few people doing ops. (Of course, person A could be mistaken about which choice has higher impact, but that is different from the trade considerations that comparative advantage is based on.)

I agree with the heuristic "if a type of work seems to have few people working on it, all else equal you should update towards that work being more neglected and hence higher impact", but the justification for that again doesn't require any considerations of trading with other people. In general, if A and B can trade in a mutually beneficial way, then either A and B have different values or one of them was making a mistake.
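
To make this concrete, here is a toy sketch (made-up impact numbers; total impact assumed additive, i.e. constant marginal value per role, as in the large-population case mentioned above). Brute-force search over all role assignments simply recovers "each person takes the role where their own absolute impact is higher", with no role for trade:

```python
from itertools import product

# Made-up numbers: each person's impact in each role; total impact is additive.
impact = {
    "A": {"ops": 2, "research": 5},
    "B": {"ops": 4, "research": 3},
    "C": {"ops": 1, "research": 2},
}
people, roles = list(impact), ["ops", "research"]

# Exhaustively try every assignment of people to roles and keep the best one.
best = max(product(roles, repeat=len(people)),
           key=lambda assignment: sum(impact[p][r] for p, r in zip(people, assignment)))
print(dict(zip(people, best)))
# -> {'A': 'research', 'B': 'ops', 'C': 'research'}: each person simply does the role
#    where their own (absolute) impact is higher; comparative advantage never enters.
```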

Comment by jsteinhardt on Talent gaps from the perspective of a talent limited organization. · 2017-11-06T21:23:04.790Z · score: 5 (5 votes) · EA · GW

FWIW, 50k seems really low to me (but I live in a major U.S. city, so maybe it's different elsewhere?). Specifically, I would be hesitant to take a job at that salary, if for no other reason than that I would think the organization was either dramatically undervaluing my skills or so cash-constrained that I would be pretty unsure whether it would exist in a couple of years.

A rough comparison: if I were doing a commissioned project for a non-profit that I felt was well-run and value-aligned, my rate would be in the vicinity of $50 USD/hour. I'd currently be willing to go down to $25 USD/hour for a project that I basically would have done anyway. Once I get my PhD I think my going rate would be higher, and for a senior-level position I would probably expect more than either of these numbers, unless it was a small, start-up-y organization that I felt was one of the most promising organizations in existence.

EDIT: So that people don't have to convert to per-year salaries in their heads, the above rates, if annualized (at roughly 2,000 working hours per year), come to $100k USD/year and $50k USD/year.

Comment by jsteinhardt on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-13T15:16:20.208Z · score: 12 (6 votes) · EA · GW

(Speaking for myself, not OpenPhil, which I wouldn't be able to speak for anyway.)

For what it's worth, I'm pretty critical of deep learning, which is the approach OpenAI wants to take, and still think the grant to OpenAI was a pretty good idea; and I can't really think of anyone more familiar with MIRI's work than Paul who isn't already at MIRI (note that Paul started out pursuing MIRI's approach and shifted in an ML direction over time).

That being said, I agree that the public write-up on the OpenAI grant doesn't reflect that well on OpenPhil, and it seems correct for people like you to demand better moving forward (although I'm not sure that adding HRAD researchers as TAs is the solution; also note that OPP does consult regularly with MIRI staff, though I don't know if they did for the OpenAI grant).

Comment by jsteinhardt on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-11T15:59:21.000Z · score: 6 (6 votes) · EA · GW

I think the argument along these lines that I'm most sympathetic to is that Paul's agenda fits more into the paradigm of typical ML research, and so is more likely to fail for reasons that are in many people's collective blind spot (because we're all blinded by the same paradigm).

Comment by jsteinhardt on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-11T15:55:45.625Z · score: 5 (5 votes) · EA · GW

This doesn't match my experience of why I find Paul's justifications easier to understand. In particular, I've been following MIRI since 2011, and my experience has been that I didn't find MIRI's arguments (about specific research directions) convincing in 2011*, and since then I have had a lot of people try to convince me from a lot of different angles. I think pretty much all of the objections I have are ones I generated myself, or would have generated myself. That said, the one major objection I didn't generate myself is the one that I feel most applies to Paul's agenda.

(* There was a brief period, shortly after reading the sequences, when I found them extremely convincing, but I think I was much more credulous then than I am now.)

Comment by jsteinhardt on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-10T03:44:18.459Z · score: 9 (9 votes) · EA · GW

Shouldn't this cut both ways? Paul has also spent far fewer words justifying his approach to others, compared to MIRI.

Personally, I feel like I understand Paul's approach better than I understand MIRI's approach, despite having spent more time on the latter. I actually do have some objections to it, but I feel it is likely to be significantly useful even if (as I obviously expect) my objections end up having teeth.

Comment by jsteinhardt on What Should the Average EA Do About AI Alignment? · 2017-02-28T06:48:15.061Z · score: 11 (11 votes) · EA · GW

I already mentioned this in my response to kbog above, but I think EAs should approach this cautiously; AI safety is already an area with a lot of noise, with a reputation for being dominated by outsiders who don't understand much about AI. I think outreach by non-experts could end up being net negative.

Comment by jsteinhardt on What Should the Average EA Do About AI Alignment? · 2017-02-28T06:46:01.510Z · score: 13 (13 votes) · EA · GW

In general I think this sort of activism has a high potential for being net negative: AI safety already has a reputation as something mainly being pushed by outsiders who don't understand much about AI. Since I assume this advice is targeted at the "average EA" (who presumably doesn't know much about AI), this would only exacerbate the issue.

Comment by jsteinhardt on 80,000 Hours: EA and Highly Political Causes · 2017-01-30T06:08:29.596Z · score: 1 (1 votes) · EA · GW

Thanks for clarifying; your position seems reasonable to me.

Comment by jsteinhardt on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T19:18:44.489Z · score: 6 (6 votes) · EA · GW

OpenPhil published an extensive write-up on their decision to hire Chloe here: http://blog.givewell.org/2015/09/03/the-process-of-hiring-our-first-cause-specific-program-officer/. Presumably after reading that you have enough information to decide whether to trust her recommendations (taking into account also whatever degree of trust you have in OpenPhil). If you decide you don't trust them, that's fine, but I don't think that can function as an argument that the recommendation shouldn't have been made in the first place (many people, myself included, do trust her and got substantial value out of the recommendation and out of reading what Chloe has to say in general).

I feel your overall engagement here hasn't been very productive. You're mostly repeating the same point, and to the extent that you make other points, it feels like you're reaching for whatever counterarguments you can think of, without considering whether someone who disagreed with you would have an immediate response. The fact that you and Larks are responsible for 20 of the 32 comments on the thread is a further negative sign to me (you could probably convey the same or more information in fewer, better-thought-out comments than you are currently making).

Comment by jsteinhardt on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T03:35:26.260Z · score: 10 (10 votes) · EA · GW

Instead of writing this like some kind of expose, it seems you could get the same results by emailing the 80K team, noting the political sensitivity of the topic, and suggesting that they provide some additional disclaimers about the nature of the recommendation.

I don't agree with the_jaded_one's conclusions or think his post is particularly well-thought-out, but I don't think raising the bar on criticism like this is very productive if you care about getting good criticism. (If you think the_jaded_one's criticism is bad criticism, then I think it makes sense to just argue for that rather than saying that they should have made it privately.)

My reasons are very similar to Benjamin Hoffman's reasons here.

Comment by jsteinhardt on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-14T19:24:21.851Z · score: 4 (4 votes) · EA · GW

In my post, I said

anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

I would expect that, conditional on spending a large amount of time writing the criticism carefully, it would be met with significant praise. (This is backed up, at least in upvotes, by past examples of my own writing, e.g. Another Critique of Effective Altruism, The Power of Noise, and A Fervent Defense of Frequentist Statistics.)

Comment by jsteinhardt on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-13T08:30:50.847Z · score: 5 (5 votes) · EA · GW

I think parts of academia do this well (although other parts do it poorly, and I think it's been getting worse over time). In particular, if you present ideas at a seminar, essentially arbitrarily harsh criticism is fair game. Of course, this is different from the public internet, but it's still a group of people, many of whom do not know each other personally, where pretty strong criticism is the norm.

My impression is that criticism has traditionally been a strong part of Jewish culture, but I'm not culturally Jewish, so I can't speak to it directly.

I heard that Bridgewater did a bunch of stuff related to feedback/criticism, but again I don't know a ton about it.

Of course, none of these examples address the fact that much of the criticism of EA happens over the internet, but I do feel that some of the barriers to criticism online also carry over in person (though others don't).

Comment by jsteinhardt on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-12T19:19:44.203Z · score: 25 (23 votes) · EA · GW

I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.

There are definitely ways that Sarah could have improved her post. But that is basically always going to be true of any blog post unless one spends 20+ hours writing it.

I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

While I'm sympathetic to the fact that there's also a lot of low-quality / lazy criticism of EA, I don't think responses that involve setting a high quality bar for criticism are the right way to go.

(Note that I don't think that EA is worse than is typical in terms of accepting criticism, though I do think that there are other groups / organizations that substantially outperform EA, which provides an existence proof that one can do much better.)

Comment by jsteinhardt on My Donations for 2016 · 2016-12-30T20:46:07.061Z · score: 2 (2 votes) · EA · GW

Thanks. I think my reasons are basically the same as those in this post: http://effective-altruism.com/ea/14d/donor_lotteries_demonstration_and_faq/.

Comment by jsteinhardt on We Must Reassess What Makes a Charity Effective · 2016-12-28T17:07:03.051Z · score: 1 (1 votes) · EA · GW

So jobs don't go away, they are just created in other areas.

This isn't really true. Yes, there is probably some job replacement, so the jobs don't literally disappear 1-for-1. But there will probably be fewer jobs overall, and I don't think it's easy to say (without doing some research) whether it's 0.1, 0.5, or 0.9 fewer jobs for each malaria-net maker that goes away.

Comment by jsteinhardt on How many hits does hits-based giving get? A concrete study idea to find out (and a $1500 offer for implementation) · 2016-12-11T10:46:41.256Z · score: 4 (4 votes) · EA · GW

I like this idea. One danger (in both directions) with comparing to VC is that my impression is that venture capital is much more focused on prestige and connections than charity funding is. In particular, if you can successfully become a prestigious, well-connected VC firm, then all of the Stanford/MIT students (for instance) will want you to fund their start-ups, and picking from among that group with only minimal due diligence is likely to already be fairly profitable. [Disclaimer: I'm only tangentially connected to the VC world, so this could be completely wrong; feel free to correct me.]

If this is true, what should we expect to see? We should expect that (1) VCs put in less research than OpenPhil (or similar organizations) when making investments, and (2) hits-based investing is very successful for VC firms conditional on their having a strong, established reputation. I would guess that both of these are true, though I'm unsure of the implications.

Comment by jsteinhardt on Why I'm donating to MIRI this year · 2016-12-02T05:54:02.862Z · score: 2 (2 votes) · EA · GW

Also, I realized it might not be clear why I thought the quotes above are relevant to whether the reviews addressed the "theory-building" aspect. The point is that the quoted parts of the reviews seem to me to be directly engaging with whether the definitions make sense and whether the results are meaningful, which is a question about the adequacy of the theory for addressing the claimed questions, not about its technical impressiveness. (I could imagine you don't feel this addresses what you meant by theory-building, but in that case you'll have to be more specific for me to understand what you have in mind.)

Comment by jsteinhardt on Why I'm donating to MIRI this year · 2016-12-02T04:39:08.224Z · score: 3 (3 votes) · EA · GW

I feel like I care a lot about theory-building, and at least some of the other internal and external reviewers care a lot about it as well. As an example, consider External Review #1 of Paper #3 (particularly the section starting "How significant do you feel these results are for that?"). Here are some snippets (link to document here):

The first paragraph suggests that this problem is motivated by the concern of assigning probabilities to computations. This can be viewed as an instance of the more general problems of (a) modeling a resource-bounded decision maker computing probabilities and (b) finding techniques to help a resource-bounded decision maker compute probabilities. I find both of these problems very interesting. But I think that the model here is not that useful for either of these problems. Here are some reasons why:

It’s not clear why the properties of uniform coherence are the “right” ones to focus on. Uniform coherence does imply that, for any fixed formula, the probability converges to some number, which is certainly a requirement that we would want. This is implied by the second property of uniform coherence. But that property considers not just constant sequences of formulas, but sequence where the nth formula implies the (n+1)st. Why do we care about such sequences? [...]

The issue of computational complexity is not discussed in the paper, but it is clearly highly relevant. [...]

Several more points are raised, followed by (emphasis mine):

I see no obvious modification of uniformly coherent schemes that would address these concerns. Even worse, despite the initial motivation, the authors do not seem to be thinking about these motivational issues.

For another example, see External Review #1 of Paper #4. (I'm avoiding commenting on the internal reviews because I don't want to risk breaking anonymity.)

On the website, it is promised that this paper makes a step towards figuring out how to come up with “logically non-omniscient reasoners”. [...]

This surely sounds impressive, but there is the question whether this is a correct interpretation of Theorem 5. In particular, one could imagine two cases: a) we are predicting a single type of computation, and b) we are predicting several types of computations. In case (a), why would the delays matter in asymptotic convergence in the first place? [...] In case (b), the setting that is studied is not a good abstraction: in this case there should be some “contextual information” available to the learner, otherwise the only way to distinguish between two types of computations will be based on temporal relation, which is a very limiting assumption here.

To end with some thoughts of my own: in general, when theory-building, I think it is very important to consider both the relevance of the theoretical definitions to the original problem of interest and the richness of what can actually be said. I don't think that definitions can be assessed independently of the theory that can be built from them. At the risk of self-promotion, I think that my own work here, which makes both definitional and theoretical contributions relevant to ML + security, does a good job of putting forth definitions and justifying them (by showing that we can get unexpectedly strong results in the setting considered, via a nice and fairly general algorithm, and that these results have unexpected and important implications for initially unrelated-seeming problems). I also claim that this work is relevant to AI safety, but perhaps others will disagree.

Comment by jsteinhardt on Why I'm donating to MIRI this year · 2016-12-01T04:14:37.014Z · score: 10 (10 votes) · EA · GW

(including the bizarre-ness of OpenPhil's analysis of number-of-papers-written, which is not how one measures progress of fundamentals research.)

What in the grant write-up makes you think the focus was on number-of-papers-written? I was one of the reviewers and that was definitely not our process.

(Disclaimer: I'm a scientific advisor for OpenPhil, all opinions here are my own.)