Posts

Individual Project Fund: Further Details 2016-12-31T22:16:11.995Z
My Donations for 2016 2016-12-28T17:03:22.239Z

Comments

Comment by jsteinhardt on EA Debate Championship & Lecture Series · 2021-04-07T16:43:26.034Z · EA · GW

I just don't think this is very relevant to whether outreach to debaters is good. A better metric would be to look at life outcomes of top debaters in high school. I don't have hard statistics on this but the two very successful debaters I know personally are both now researchers at the top of their respective fields, and certainly well above average in truth-seeking.

I also think the above arguments are common tropes in the "maths vs fuzzies" culture war, and given EA's current dispositions I suspect we're systematically more likely to hear and be receptive to anti-debate than to pro-debate talking points. (I say this as someone who loved to hate on debate in high school, especially as it was one of the main competitors with math team for recruiting smart students. But with hindsight from seeing my classmates' life outcomes I think most of the arguments I made were overrated.)

Comment by jsteinhardt on Please stand with the Asian diaspora · 2021-04-03T01:44:21.180Z · EA · GW

Thanks, and sorry for not responding to this earlier (was on vacation at the time). I really appreciated this and agree with willbradshaw's comment below :).

Comment by jsteinhardt on Please stand with the Asian diaspora · 2021-03-21T20:08:07.588Z · EA · GW

I think we just disagree about what a downvote means, but I'm not really that excited to argue about something that meta :).

As another data point, I appreciated Dicentra's comment elsewhere in the thread. I haven't decided whether I agree with it, but I thought it demonstrated empathy for all sides of a difficult issue even while disagreeing with the OP, and articulated an important perspective.

Comment by jsteinhardt on Please stand with the Asian diaspora · 2021-03-21T16:04:07.126Z · EA · GW

I think your characterization of my thought process is completely false for what it's worth. I went out of my way multiple times to say that I was not expressing disapproval of Dale's comment.

Edit: Maybe it's helpful for me to clarify that I think it's both good for Dale to write his comment, and for Khorton to write hers.

Comment by jsteinhardt on Please stand with the Asian diaspora · 2021-03-21T07:20:53.487Z · EA · GW

I didn't downvote Dale, nor do I wish to express social disapproval of his post (I worry that the length of this thread might lead Dale to feel otherwise, so I want to be explicit that I don't feel that way).

To your question, if I were writing a post similar to Dale's, what I would do differently is be more careful to make sure I was responding to the actual content of the post. The OP asked people to support Asian community members who were upset, while at least the last paragraph of Dale's post seemed to assume that the OP was arguing that we should be searching for ways to reduce violence against Asians. Whenever I engage on an emotionally charged topic, I re-read the original post and my draft response to make sure that I actually understood the original post's argument, and I think this is good practice.

Another mistake I think Dale's post makes is assuming that whether the Atlanta attacks were racially motivated is a crux for most people's emotional response. I think Dale's claim may well be correct (I could see both arguments), but the larger context is a significant increase in violent incidents against Asians, at least some of which seem obviously racially motivated (the increase is also larger than for other races). These have taken a constant emotional toll on Asians for a while now, and the Atlanta shootings are simply the first instance that has actually penetrated the broader public consciousness.

I can't think of an easy-to-implement rule that would avoid this mistake. The best would be "try harder to think from the perspective of the listener", but this is of course very difficult especially when there is a large gap in experience between the speaker and the listener. If I were trying super-hard I would run the post by an Asian friend to see if they felt like it engaged with the key arguments, but I think it would be unreasonable to expect, or expend, that level of effort for forum comments.

Again, I think people make communication mistakes like this all the time and do not find them particularly blameworthy and would normally not bother to comment on them. I am only pointing them out in detail because you asked me to.

Comment by jsteinhardt on Please stand with the Asian diaspora · 2021-03-21T04:44:17.586Z · EA · GW

I think it's good for people to point out ways that criticism can be phrased more sympathetically; doing so is even aligned with your goal of encouraging more critical discussion (which I am also in favor of). As someone who often gives criticism, sometimes unpopular criticism, I both appreciate it when people point out ways I could phrase it better and strongly desire that people be forgiving when I fail to do so. If no one took the time to point these out to me, I would be less capable of offering effective criticism.

Along these lines, my guess is that you and Khorton are interpreting downvotes differently? I didn't take Khorton's downvote to be claiming "You should not be posting this on the forum" but instead "Next time you post something like this I wish you'd spend a bit more effort exercising empathy". And if Dale totally ignores this advice, the penalty is... mild social disapproval from Khorton and lots of upvotes from other people, as far as I can tell.

Comment by jsteinhardt on Please stand with the Asian diaspora · 2021-03-21T04:29:13.053Z · EA · GW

They being Laaunch? I agree they do a lot of different things. Hate is a Virus seemed to be doing even more scattered things, some of which didn't make sense to me. Everything Laaunch was doing seemed at least plausibly reasonable to me, and some, like the studies and movement-building, seemed pretty exciting.

 

My guess is that even within Asian advocacy, Laaunch is not going to look as mission-focused and impact-driven as say AMF. But my guess is no such organization exists--it's a niche cause compared to global poverty, so there's less professionalization--though I wouldn't be surprised if I found a better organization with more searching. I'm definitely in the market for that if you have ideas.

Comment by jsteinhardt on Please stand with the Asian diaspora · 2021-03-20T17:04:08.824Z · EA · GW

Thanks. I'm currently planning to donate to Laaunch as they seem the most disciplined and organized of the groups. I couldn't actually tell what Hate is a Virus wants to do from their website--for instance a lot of it seems to be about getting Asians to advocate for other racial minorities, but I'm specifically looking for something that will help Asians. Laaunch seems more focused on this while still trying to build alliances with other racial advocacy groups.

Comment by jsteinhardt on Please stand with the Asian diaspora · 2021-03-20T07:28:29.984Z · EA · GW

For me personally, it's symbolically important to make some sort of donation as a form of solidarity. It's not coming out of my EA budget, but I'd still rather spend the money as effectively as possible. It seems to me that practicing the virtue of effective spending in one domain will only help in other domains.

Comment by jsteinhardt on Please stand with the Asian diaspora · 2021-03-20T05:39:07.723Z · EA · GW

I think one concrete action people could take is to try to listen to the experiences of their Asian friends and colleagues. There is a lot of discrimination that isn't violence. Understanding and solidarity can go a long way, and can also prevent or reduce discrimination.

For Chinese immigrants in particular there are also a lot of issues related to immigration and to U.S.-China tensions.

Neither of these is directly related to the Atlanta shootings, but I think it can be a good symbolic moment to better understand others, especially since discrimination against Asians is often ignored (indeed, my experience is that even when I bring it up with people it tends to get brushed aside).

Incidentally, I think we obsess too much over the particular question of whether the Atlanta shooting was a hate crime or racially motivated. My personal views at least do not really hinge on this--I think we have much better evidence for both the increase in crime directed at Asians and the ongoing discrimination faced by Asians than this particular instance provides.

Comment by jsteinhardt on Please stand with the Asian diaspora · 2021-03-20T01:50:23.469Z · EA · GW

Thanks for this. I have been trying to think about what organizations I can support that would be most effective here. I'm still thinking through it myself, but if you have particular thoughts, let me know.

Comment by jsteinhardt on (Autistic) visionaries are not natural-born leaders · 2021-01-28T04:30:33.530Z · EA · GW

I think I'd just note that the post, in my opinion, helps combat some of these issues. For instance, it suggests that autistic people are able to learn how to interact with neurotypical people successfully, given sufficient effort--i.e., the "mask".

Comment by jsteinhardt on TAI Safety Bibliographic Database · 2020-12-23T19:14:28.711Z · EA · GW

Thanks, that's helpful. If you're saying that the stricter criterion would also apply to DM/CHAI/etc. papers then I'm not as worried about bias against younger researchers.

Regarding your 4 criteria, I think they don't really delineate how to make the sort of judgment calls we're discussing here, so it really seems like there should be a 5th criterion that does delineate that. I'm not sure yet how to formulate one that is time-efficient, so I'm going to bracket that for now (recognizing that might be less useful for you), since I think we actually disagree in principle about which papers are building towards TAI safety.

To elaborate, let's take verification as an example (since it's relevant to the Wong & Kolter paper). Lots of people think verification is helpful for TAI safety--MIRI has talked about it in the past, and very long-termist people like Paul Christiano are excited about it as a current direction afaik. If a small group of researchers at MIRI were trying to do work on verification but not getting much traction in the academic community, my intuition is that their papers would reliably meet your criteria. Now the reality is that verification does have lots of traction in the academic community, but why is that? It's because Wong & Kolter and Raghunathan et al. wrote two early papers that provided promising paths forward on neural net verification, which many other people are now trying to expand on. This seems strictly better to me than the MIRI example, so it seems like either:

-The hypothetical MIRI work shouldn't have made the cut.

-There are actually two types of verification work (call them VerA and VerB), such that the hypothetical MIRI group was working on VerA, which is relevant, while the above papers are VerB, which is not.

-Papers should make the cut on factors other than actual impact; e.g., perhaps the MIRI papers should be included because they're from MIRI, or you should want to highlight them more because they didn't get traction.

-Something else I'm missing?

I definitely agree that you shouldn't just include every paper on robustness or verification, but perhaps at least early work that led to an important/productive/TAI-relevant line should be included (e.g. I think the initial adversarial examples papers by Szegedy and Goodfellow should be included on similar grounds).

Comment by jsteinhardt on TAI Safety Bibliographic Database · 2020-12-23T06:00:13.734Z · EA · GW

Also in terms of alternatives, I'm not sure how time-expensive this is, but some ideas for discovering additional work:

-Following citation trails (esp. to highly-cited papers)

-Going to the personal webpages of authors of relevant papers, to see if there's more (also similarly for faculty webpages)

Comment by jsteinhardt on TAI Safety Bibliographic Database · 2020-12-23T05:48:22.920Z · EA · GW

Well,  it's biased toward safety organizations, not large organizations.

Yeah, good point. I agree it's more about organizations (although I do think that DeepMind is benefiting a lot here, e.g. you're including a fairly comprehensive list of their adversarial robustness work while explicitly ignoring that work at large, and it's not super-clear on what grounds: for instance, if you think Wong and Cohen should be dropped, then about half of the DeepMind papers should be too, since they're on almost identical topics and some are even follow-ups to the Wong paper).

Not because it's not high quality work, but just because I think it still happens in a world where no research is motivated by the safety of transformative AI; maybe that's wrong?

That seems wrong to me, but maybe that's a longer conversation. (I agree that similar papers would probably have come out within the next 3 years, but asking for that level of counterfactual irreplaceability seems kind of unreasonable imo.) I also think that the majority of the CHAI and DeepMind papers included wouldn't pass that test (tbc I think they're great papers! I just don't really see what basis you're using to separate them).

I think focusing on motivation rather than results can also lead to problems, and perhaps contributes to organization bias (by relying on branding to assess motivation). I do agree that counterfactual impact is a good metric, i.e. you should be less excited about a paper that was likely to happen soon anyway; maybe that's what you're saying? But that doesn't have much to do with motivation.

Also let me be clear that I'm very glad this database exists, and please interpret this as constructive feedback rather than a complaint.

Comment by jsteinhardt on TAI Safety Bibliographic Database · 2020-12-22T23:11:30.394Z · EA · GW

Thanks for curating this! You sort of acknowledge this already, but one bias in this list is that it's very tilted towards large organizations like DeepMind, CHAI, etc. One way to see this is that you have AugMix by Hendrycks et al., but not the Common Corruptions and Perturbations paper, which has the same first author and publication year and 4x the number of citations (in fact it would top the 2019 list by a wide margin). The main difference is that AugMix had DeepMind co-authors while Common Corruptions did not.

I mainly bring this up because this bias probably particularly falls against junior PhD students, many of whom are doing great work that we should seek to recognize. For instance (and I'm obviously biased here), Aditi Raghunathan and Dan Hendrycks would be at or near the top of your citation count for most years if you included all of their safety-relevant work.

In that vein, the verification work from Zico Kolter's group should probably be included, e.g. the convex outer polytope [by Eric Wong] and randomized smoothing [by Jeremy Cohen] papers (at least, it's not clear why you would include Aditi's SDP work with me and Percy, but not those).

I recognize it might not be feasible to really address this issue entirely, given your resource constraints. But it seems worth thinking about if there are cheap ways to ameliorate this.

Also, in case it's helpful, here's a review I wrote in 2019: AI Alignment Research Overview.

Comment by jsteinhardt on Ask Rethink Priorities Anything (AMA) · 2020-12-17T18:59:43.956Z · EA · GW

I didn't mean to imply that laziness was the main part of your reply, I was more pointing to "high personal costs of public posting" as an important dynamic that was left out of your list. I'd guess that we probably disagree about how high those are / how much effort it takes to mitigate them, and about how reasonable it is to expect people to be selfless in this regard, but I don't think we disagree on the overall list of considerations.

Comment by jsteinhardt on Ask Rethink Priorities Anything (AMA) · 2020-12-17T12:53:40.646Z · EA · GW

I think the reason people don't post stuff publicly isn't laziness, but that there's lots of downside risk, e.g. of someone misinterpreting you and getting upset, and not much upside relative to sharing in smaller circles.

Comment by jsteinhardt on 80k hrs #88 - Response to criticism · 2020-12-13T17:58:48.977Z · EA · GW

Thanks for writing this and for your research in this area. Based on my own read of the literature, it seems broadly correct to me, and I wish that more people had an accurate impression of polarization on social media vs mainstream news and their relative effects.

While I think your position is much more correct than the conventional one, I did want to point to an interesting paper by Ro'ee Levy, which has some very good descriptive and causal statistics on polarization on Facebook: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3653388. It suggests (among many other interesting findings) that Facebook probably is somewhat more slanted than mainstream news and that this may drive a small but meaningful increase in affective polarization. That being said, it's unlikely to be the primary driver of US trends.

Comment by jsteinhardt on How to give advice · 2020-11-01T04:44:27.844Z · EA · GW

You also sort of touch on this, but I think it's helpful to convey when you have genuine uncertainty (though not at the cost of needless hedging and underconfidence), and to say when you think someone else (whom they have access to) would be likely to have more informed advice on a particular question.

Comment by jsteinhardt on How to give advice · 2020-11-01T04:41:28.804Z · EA · GW

I like your guidelines. Some others that come to mind:

-Some people are not just looking for advice but also looking to avoid the responsibility of choosing for themselves (they want someone else to tell them what the right answer is). I think it's important to resist this and remind people that ultimately it's their responsibility to make the decision.

-If someone seems to be making a decision out of fear or anxiety, I try to address this and de-dramatize the different options. People rarely make their best decisions if they're afraid of the outcomes.

-I try to show my work and give the considerations behind different pieces of advice. That way if they get new evidence later they can integrate it with the considerations rather than starting from scratch.

Comment by jsteinhardt on Are there any other pro athlete aspiring EAs? · 2020-09-13T19:11:56.102Z · EA · GW

Thanks! 1 seems believable to me, at least for EA as it currently presents. 2 seems believable on average but I'd expect a lot of heterogeneity (I personally know athletes who have gone on to be very good researchers). It also seems like donations are pretty accessible to everyone, as you can piggyback on other people's research.

Comment by jsteinhardt on Are there any other pro athlete aspiring EAs? · 2020-09-13T19:06:08.695Z · EA · GW

I personally wouldn't pay that much attention to the particular language people use--it's more highly correlated with their local culture than with abilities or interests. I'd personally be extra excited to talk to someone with a strong track record of handling uncertainty well who had a completely different vocabulary than me, although I'd also expect it to take more effort to get to the payoff.

Comment by jsteinhardt on Are there any other pro athlete aspiring EAs? · 2020-09-13T06:18:04.502Z · EA · GW

This is a bit tangential, but I expect that pro athletes would be able to provide a lot of valuable mentorship to ambitious younger people in EA--my general experience has been that about 30% of the most valuable growth habits I have are imported from sports (and also not commonly found elsewhere). E.g. "The Inner Game of Tennis" was gold and I encourage all my PhD students to read it.

Comment by jsteinhardt on Are there any other pro athlete aspiring EAs? · 2020-09-13T06:11:07.122Z · EA · GW

I didn't downvote, but the analysis seems incorrect to me: most pro athletes are highly intelligent, and in terms of single attributes that predict success in subsequent difficult endeavors I can't think of much better; I'd probably take it over successful startup CEO even. It also seems like the sort of error that's particularly costly to make for reasons of overall social dynamics and biases.

Comment by jsteinhardt on [deleted post] 2020-09-13T06:01:32.103Z

Niceness and honesty are both things that take work, and can be especially hard when trying to achieve both at once. I think it's often possible to achieve both, but this often requires either substantial emotional labor or unusual skill on the part of the person giving feedback. Under realistic constraints on time and opportunity cost, niceness and honesty do trade off against each other.

This isn't an argument to not care about niceness, but I think it's important to realize that there is an actual trade-off. I personally prefer people to err strongly on the honesty side when giving me feedback. In the most blunt cases it can ruin my day but I still prefer overall to get the feedback even then.

Comment by jsteinhardt on Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team · 2019-12-03T05:59:48.137Z · EA · GW

Okay, thanks for the clarification. I now see where the list comes from, although I personally am bearish on this type of weighting. For one, it ignores many people who are motivated to make AI beneficial for society but don't happen to frequent certain web forums or communities. For another, in my opinion it underrates the benefit of extremely competent peers and overrates the benefit of like-minded peers.

While it's hard to give generic advice, I would advocate for going to the school that is best at the research topic one is interested in pursuing, or where there is otherwise a good fit with a strong PI (though basing the choice on a single PI rather than one's top 2 or 3 can sometimes backfire). If one's interests are not developed enough to have a good sense of topic or PI, then I would go with the general strength of the program.

Comment by jsteinhardt on Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team · 2019-12-01T20:12:39.259Z · EA · GW

I'm not sure what the metric for the "good schools" list is but the ranking seemed off to me. Berkeley, Stanford, MIT, CMU, and UW are generally considered the top CS (and ML) schools. Toronto is also top-10 in CS and particularly strong in ML. All of these rankings are of course a bit silly but I still find it hard to justify the given list unless being located in the UK is somehow considered a large bonus.

Comment by jsteinhardt on Ryan Carey on how to transition from being a software engineer to a research engineer at an AI safety team · 2019-12-01T20:03:08.779Z · EA · GW

I intended the document to be broader than a research agenda. For instance I describe many topics that I'm not personally excited about but that other people are and where the excitement seems defensible. I also go into a lot of detail on the reasons that people are interested in different directions. It's not a literature review in the sense that the references are far from exhaustive but I personally don't know of any better resource for learning about what's going on in the field. Of course as the author I'm biased.

Comment by jsteinhardt on The EA Community and Long-Term Future Funds Lack Transparency and Accountability · 2018-08-01T15:20:56.065Z · EA · GW

Given that Nick has a PhD in Philosophy, and that OpenPhil has funded a large amount of academic research, this explanation seems unlikely.

Disclosure: I am working at OpenPhil over the summer. (I don't have any particular private information, both of the above facts are publicly available.)

EDIT: I don't intend to make any statement about whether EA as a whole has an anti-academic bias, just that this particular situation seems unlikely to reflect that.

Comment by jsteinhardt on Comparative advantage in the talent market · 2018-04-17T00:48:32.763Z · EA · GW

If we think of the community as needing one ops person and one research person, the marginal value in each area drops to zero once that role is filled.

Yes, but these effects only show up when the number of jobs is small. In particular: If there are already 99 ops people and we are looking at having 99 vs. 100 ops people, the marginal value isn't going to drop to zero. Going from 99 to 100 ops people means that mission-critical ops tasks will be done slightly better, and that some non-critical tasks will get done that wouldn't have otherwise. Going from 100 to 101 will have a similar effect.

In contrast, in the traditional comparative advantage setting, there remain gains-from-coordination/gains-from-trade even when the total pool of jobs/goods is quite large.

The fact that gains-from-coordination only show up in the small-N regime here, whereas they show up even in the large-N regime traditionally, seems like a crucial difference that makes it inappropriate to apply standard intuition about comparative advantage in the present setting.

If we want to analyze this more from first principles, we could pick one of the standard justifications for considering comparative advantage and I could try to show why it breaks down here. The one I'm most familiar with is the one by David Ricardo (https://en.wikipedia.org/wiki/Comparative_advantage#Ricardo's_example).

Comment by jsteinhardt on Comparative advantage in the talent market · 2018-04-13T02:34:44.699Z · EA · GW

I'm worried that you're mis-applying the concept of comparative advantage here. In particular, if agents A and B both have the same values and are pursuing altruistic ends, comparative advantage should not play a role---both agents should just do whatever they have an absolute advantage at (taking into account marginal effects, but in a large population this should often not matter).

For example: suppose that EA has a "shortage of operations people" but person A determines that they would have higher impact doing direct research rather than doing ops. Then in fact the best thing is for person A to work on direct research, even if there are already many other people doing research and few people doing ops. (Of course, person A could be mistaken about which choice has higher impact, but that is different from the trade considerations that comparative advantage is based on.)

I agree with the heuristic "if a type of work seems to have few people working on it, all else equal you should update towards that work being more neglected and hence higher impact" but the justification for that again doesn't require any considerations of trading with other people. In general, if A and B can trade in a mutually beneficial way, then either A and B have different values or one of them was making a mistake.
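To make the contrast concrete, here is a minimal sketch with made-up numbers (mine, not from the comments above): it assumes impact is additive and non-diminishing, and shows that a comparative-advantage-style split can yield less total impact than everyone simply doing whatever they individually have the highest impact at. Once diminishing returns or a small fixed number of roles enter the picture, the marginal calculus changes, which is the point of the 99-vs-100 ops example above.

```python
# Illustrative sketch only: hypothetical impact numbers (arbitrary units per
# person per year), assuming additive, non-diminishing impact and shared values.
impact = {
    "A": {"research": 3.0, "ops": 2.0},
    "B": {"research": 2.0, "ops": 1.0},
}

def total_impact(assignment):
    """Sum of impact when each person works on their assigned area."""
    return sum(impact[person][area] for person, area in assignment.items())

# B has the comparative advantage in research (2/1 > 3/2), so a Ricardo-style
# split would put A on ops and B on research.
comparative_split = {"A": "ops", "B": "research"}

# With shared values, each person just picks their own highest-impact option.
absolute_best = {p: max(impact[p], key=impact[p].get) for p in impact}

print(total_impact(comparative_split))  # 2.0 + 2.0 = 4.0
print(total_impact(absolute_best))      # 3.0 + 2.0 = 5.0 -- higher total impact
```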

Comment by jsteinhardt on Talent gaps from the perspective of a talent limited organization. · 2017-11-06T21:23:04.790Z · EA · GW

FWIW, 50k seems really low to me (but I live in the U.S. in a major city, so maybe it's different elsewhere?). Specifically, I would be hesitant to take a job at that salary, if for no other reason than that it would make me think the organization was either dramatically undervaluing my skills, or so cash-constrained that I would be pretty unsure whether they would exist in a couple of years.

A rough comparison: if I were doing a commissioned project for a non-profit that I felt was well-run and value-aligned, my rate would be in the vicinity of $50USD/hour. I'd currently be willing to go down to $25USD/hour for a project that is something I basically would have done anyways. Once I get my PhD I think my going rates would be higher, and for a senior-level position I would probably expect more than either of these numbers, unless it was a small start-up-y organization that I felt was one of the most promising organizations in existence.

EDIT: So that people don't have to convert to per-year salaries in their heads, the above numbers if annualized would be $100k USD/year and $50k USD/year.

Comment by jsteinhardt on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-13T15:16:20.208Z · EA · GW

(Speaking for myself, not OpenPhil, who I wouldn't be able to speak for anyways.)

For what it's worth, I'm pretty critical of deep learning, which is the approach OpenAI wants to take, and still think the grant to OpenAI was a pretty good idea; and I can't really think of anyone more familiar with MIRI's work than Paul who isn't already at MIRI (note that Paul started out pursuing MIRI's approach and shifted in an ML direction over time).

That being said, I agree that the public write-up on the OpenAI grant doesn't reflect that well on OpenPhil, and it seems correct for people like you to demand better moving forward (although I'm not sure that adding HRAD researchers as TAs is the solution; also note that OPP does consult regularly with MIRI staff, though I don't know if they did for the OpenAI grant).

Comment by jsteinhardt on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-11T15:59:21.000Z · EA · GW

I think the argument along these lines that I'm most sympathetic to is that Paul's agenda fits more into the paradigm of typical ML research, and so is more likely to fail for reasons that are in many people's collective blind spot (because we're all blinded by the same paradigm).

Comment by jsteinhardt on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-11T15:55:45.625Z · EA · GW

This doesn't match my experience of why I find Paul's justifications easier to understand. In particular, I've been following MIRI since 2011, and my experience has been that I didn't find MIRI's arguments (about specific research directions) convincing in 2011*, and since then have had a lot of people try to convince me from a lot of different angles. I think pretty much all of the objections I have are ones I generated myself, or would have generated myself. Although, the one major objection I didn't generate myself is the one that I feel most applies to Paul's agenda.

(*There was a brief period shortly after reading the sequences when I found them extremely convincing, but I think I was much more credulous then than I am now.)

Comment by jsteinhardt on My current thoughts on MIRI's "highly reliable agent design" work · 2017-07-10T03:44:18.459Z · EA · GW

Shouldn't this cut both ways? Paul has also spent far fewer words justifying his approach to others, compared to MIRI.

Personally, I feel like I understand Paul's approach better than I understand MIRI's approach, despite having spent more time on the latter. I actually do have some objections to it, but I feel it is likely to be significantly useful even if (as I, obviously, expect) my objections end up having teeth.

Comment by jsteinhardt on What Should the Average EA Do About AI Alignment? · 2017-02-28T06:48:15.061Z · EA · GW

I already mention this in my response to kbog above, but I think EAs should approach this cautiously; AI safety is already an area with a lot of noise, with a reputation for being dominated by outsiders who don't understand much about AI. I think outreach by non-experts could end up being net-negative.

Comment by jsteinhardt on What Should the Average EA Do About AI Alignment? · 2017-02-28T06:46:01.510Z · EA · GW

In general I think this sort of activism has a high potential for being net negative --- AI safety already has a reputation as something mainly being pushed by outsiders who don't understand much about AI. Since I assume this advice is targeted at the "average EA" (who presumably doesn't know much about AI), this would only exacerbate the issue.

Comment by jsteinhardt on 80,000 Hours: EA and Highly Political Causes · 2017-01-30T06:08:29.596Z · EA · GW

Thanks for clarifying; your position seems reasonable to me.

Comment by jsteinhardt on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T19:18:44.489Z · EA · GW

OpenPhil made an extensive write-up on their decision to hire Chloe here: http://blog.givewell.org/2015/09/03/the-process-of-hiring-our-first-cause-specific-program-officer/. Presumably after reading that you have enough information to decide whether to trust her recommendations (taking into account also whatever degree of trust you have in OpenPhil). If you decide you don't trust it then that's fine, but I don't think that can function as an argument that the recommendation shouldn't have been made in the first place (many people such as myself do trust it and got substantial value out of the recommendation and of reading what Chloe has to say in general).

I feel your overall engagement here hasn't been very productive. You're mostly repeating the same point, and to the extent you make other points it feels like you're reaching for whatever counterarguments you can think of, without considering whether someone who disagreed with you would have an immediate response. The fact that you and Larks are responsible for 20 of the 32 comments on the thread is a further negative sign to me (you could probably condense the same or more information into fewer better-thought-out comments than you are currently making).

Comment by jsteinhardt on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T03:35:26.260Z · EA · GW

Instead of writing this like some kind of expose, it seems you could get the same results by emailing the 80K team, noting the political sensitivity of the topic, and suggesting that they provide some additional disclaimers about the nature of the recommendation.

I don't agree with the_jaded_one's conclusions or think his post is particularly well-thought-out, but I don't think raising the bar on criticism like this is very productive if you care about getting good criticism. (If you think the_jaded_one's criticism is bad criticism, then I think it makes sense to just argue for that rather than saying that they should have made it privately.)

My reasons are very similar to Benjamin Hoffman's reasons here.

Comment by jsteinhardt on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-14T19:24:21.851Z · EA · GW

In my post, I said

anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

I would expect that conditioned on spending a large amount of time to write the criticism carefully, it would be met with significant praise. (This is backed up at least in upvotes by past examples of my own writing, e.g. Another Critique of Effective Altruism, The Power of Noise, and A Fervent Defense of Frequentist Statistics.)

Comment by jsteinhardt on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-13T08:30:50.847Z · EA · GW

I think parts of academia do this well (although other parts do it poorly, and I think it's been getting worse over time). In particular, if you present ideas at a seminar, essentially arbitrarily harsh criticism is fair game. Of course, this is different from the public internet, but it's still a group of people, many of whom do not know each other personally, where pretty strong criticism is the norm.

My impression is that criticism has traditionally been a strong part of Jewish culture, but I'm not culturally Jewish so can't speak directly.

I heard that Bridgewater did a bunch of stuff related to feedback/criticism but again don't know a ton about it.

Of course, none of these examples address the fact that much of the criticism of EA happens over the internet, but I do feel that some of the barriers to criticism online also carry over in person (though others don't).

Comment by jsteinhardt on Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things) · 2017-01-12T19:19:44.203Z · EA · GW

I strongly agree with the points Ben Hoffman has been making (mostly in the other threads) about the epistemic problems caused by holding criticism to a higher standard than praise. I also think that we should be fairly mindful that providing public criticism can have a high social cost to the person making the criticism, even though they are providing a public service.

There are definitely ways that Sarah could have improved her post. But that is basically always going to be true of any blog post unless one spends 20+ hours writing it.

I personally have a number of criticisms of EA (despite overall being a strong proponent of the movement) that I am fairly unlikely to share publicly, due to the following dynamic: anything I write that wouldn't incur unacceptably high social costs would have to be a highly watered-down version of the original point, and/or involve so much of my time to write carefully that it wouldn't be worthwhile.

While I'm sympathetic to the fact that there's also a lot of low-quality / lazy criticism of EA, I don't think responses that involve setting a high bar for high-quality criticism are the right way to go.

(Note that I don't think that EA is worse than is typical in terms of accepting criticism, though I do think that there are other groups / organizations that substantially outperform EA, which provides an existence proof that one can do much better.)

Comment by jsteinhardt on My Donations for 2016 · 2016-12-30T20:46:07.061Z · EA · GW

Thanks. I think my reasons are basically the same as those in this post: http://effective-altruism.com/ea/14d/donor_lotteries_demonstration_and_faq/.

Comment by jsteinhardt on We Must Reassess What Makes a Charity Effective · 2016-12-28T17:07:03.051Z · EA · GW

So jobs don't go away, they are just created in other areas.

This isn't really true. Yes, probably there is some job replacement so that the jobs don't literally disappear 1-for-1. But there will probably be fewer jobs, and I don't think it's easy to say (without doing some research) whether it's 0.1 or 0.5 or 0.9 fewer jobs for each malaria net maker that goes away.

Comment by jsteinhardt on How many hits does hits-based giving get? A concrete study idea to find out (and a $1500 offer for implementation) · 2016-12-11T10:46:41.256Z · EA · GW

I like this idea. One danger (in both directions) with comparing to VC is that my impression is that venture capital is way more focused on prestige and connections than funding charities is. In particular, if you can successfully become a prestigious, well-connected VC firm, then all of the Stanford/MIT students (for instance) will want you to fund their start-up, and picking with only minimal due diligence from among that group is likely to already be fairly profitable. [Disclaimer: I'm only tangentially connected to the VC world so this could be completely wrong, feel free to correct me.]

If this is true, what should we expect to see? We should expect that (1) VCs put in less research than OpenPhil (or similar organizations) when making investments, and (2) hits-based investing is very successful for VC firms conditional on having a strong established reputation. I would guess that both of these are true, though I'm unsure of the implications.

Comment by jsteinhardt on Why I'm donating to MIRI this year · 2016-12-02T05:54:02.862Z · EA · GW

Also, I realized it might not be clear why I thought the quotes above are relevant to whether the reviews addressed the "theory-building" aspect. The point is it seems to me that the quoted parts of the reviews are directly engaging with whether the definitions make sense / the results are meaningful, which is a question about the adequacy of the theory for addressing the claimed questions, and not of its technical impressiveness. (I could imagine you don't feel this addresses what you meant by theory-building, but in that case you'll have to be more specific for me to understand what you have in mind.)

Comment by jsteinhardt on Why I'm donating to MIRI this year · 2016-12-02T04:39:08.224Z · EA · GW

I feel like I care a lot about theory-building, and at least some of the other internal and external reviewers care a lot about it as well. As an example, consider External Review #1 of Paper #3 (particularly the section starting "How significant do you feel these results are for that?"). Here are some snippets (link to document here):

The first paragraph suggests that this problem is motivated by the concern of assigning probabilities to computations. This can be viewed as an instance of the more general problems of (a) modeling a resource-bounded decision maker computing probabilities and (b) finding techniques to help a resource-bounded decision maker compute probabilities. I find both of these problems very interesting. But I think that the model here is not that useful for either of these problems. Here are some reasons why:

It’s not clear why the properties of uniform coherence are the “right” ones to focus on. Uniform coherence does imply that, for any fixed formula, the probability converges to some number, which is certainly a requirement that we would want. This is implied by the second property of uniform coherence. But that property considers not just constant sequences of formulas, but sequence where the nth formula implies the (n+1)st. Why do we care about such sequences? [...]

The issue of computational complexity is not discussed in the paper, but it is clearly highly relevant. [...]

Several more points are raised, followed by (emphasis mine):

I see no obvious modification of uniformly coherent schemes that would address these concerns. Even worse, despite the initial motivation, the authors do not seem to be thinking about these motivational issues.

For another example, see External Review #1 of Paper #4 (I'm avoiding commenting on internal reviews because I want to be sensitive to breaking anonymity).

On the website, it is promised that this paper makes a step towards figuring out how to come up with “logically non-omniscient reasoners”. [...]

This surely sounds impressive, but there is the question whether this is a correct interpretation of Theorem 5. In particular, one could imagine two cases: a) we are predicting a single type of computation, and b) we are predicting several types of computations. In case (a), why would the delays matter in asymptotic convergence in the first place? [...] In case (b), the setting that is studied is not a good abstraction: in this case there should be some “contextual information” available to the learner, otherwise the only way to distinguish between two types of computations will be based on temporal relation, which is a very limiting assumption here.

To end with some thoughts of my own: in general, when theory-building I think it is very important to consider both the relevance of the theoretical definitions to the original problem of interest, and the richness of what can actually be said. I don't think that definitions can be assessed independently of the theory that can be built from them. At the risk of self-promotion, I think that my own work here, which makes both definitional and theoretical contributions relevant to ML + security, does a good job of putting forth definitions and justifying them (by showing that we can get unexpectedly strong results in the setting considered, via a nice and fairly general algorithm, and that these results have unexpected and important implications for initially unrelated-seeming problems). I also claim that this work is relevant to AI safety, but perhaps others will disagree.