Posts

Announcing: Mechanism Design for AI Safety - Reading Group 2022-08-09T04:25:15.824Z

Comments

Comment by Rubi on Red-teaming contest: demographics and power structures in EA · 2022-08-31T20:02:27.420Z · EA · GW

This comment is object-level and perhaps nitpicky; I quite like your post at a high level.

Saving a life via, say, malaria nets gets you two benefits:

1. The person saved doesn't die, meeting their preference for continuing to exist.

2. The positive externalities of that person continuing to live, such as the grief averted for their family and community.

I don't think it's too controversial to say that the majority of the benefit from saving a life goes to the person whose life is saved, rather than to the people who would grieve their death. But the IDinsight survey only provides information about the latter.

Consider what would happen if a future beneficiary survey reached the opposite conclusion: that the community surveyed did not care at all about the deaths of children under the age of 9. It would be ridiculous and immoral to defer to that preference and withhold life-saving aid from those children. The reason is that the surveyed community is not the primary beneficiary of aid to its children; the children themselves are, so the community's preferences account for only a small fraction of the aid's value. But this cuts the other way too: if the surveyed community overweights the lives of its children, that isn't a reason for major deference either, especially when stated preferences contradict revealed preferences, as they often do.

Comment by Rubi on What ‘equilibrium shifts’ would you like to see in EA? · 2022-07-26T18:16:01.448Z · EA · GW

There are lots of advantages to being based in the Bay Area. It seems both easier and higher-upside to solve the Berkeley real estate issue than to coordinate a move away from the Bay Area.

Comment by Rubi on Co-Creation of the Library of Effective Altruism [Information Design] (1/2) · 2022-07-11T14:56:23.702Z · EA · GW

I love the idea of a Library of EA! It would be helpful to eventually augment it with auxiliary and meta-information, probably through crowdsourcing among EAs. Each book could be associated with short and medium-length summaries of the key arguments and takeaways, plus warnings about which sections were later disproven or are controversial (or a warning that the whole book is a partial story or misleading). There's also a lot of overlap and supersession among the books (especially within the rationality and epistemology section), so it would be good to say "If you've read X, you don't need to read Y." It would also be great to have a "Summary of Y for people who have already read X" that just covers the key information.

I do strongly feel that a smaller library would be better. While there are advantages to being comprehensive, a smaller library is better at directing people to the most important books. It is really valuable to tell someone to start with a particular book on a subject, rather than leaving them to make an uninformed choice from a list. Parsimony in recommendations, at least on a personal level, also matters for conveying the weight of the recommendations you do make. It feels a bit as though you weren't confident enough to cut a book recommended by some subgroup, even when better options were available.

There's a Pareto principle at play here, where reading 20% of the books provides 80% of the value, and applying it twice gives a repeated Pareto principle where 4% of the books (20% of 20%) provide 64% of the value (80% of 80%). I think you could genuinely pick four or five books from this list that between them provide two-thirds of the EA value of the entire list. My picks would be The Most Good You Can Do, The Precipice, Reasons and Persons, and Scout Mindset. Curious what others would pick.

Comment by Rubi on Announcing Future Forum - Apply Now · 2022-07-08T01:34:48.406Z · EA · GW

> In addition to EAG SF, there are some other major events and a general concentration of EAs happening in this 2-week time span in the Bay Area, so it might be generally good to come to the Bay around this time.

Which other events are happening around that time? 

Comment by Rubi on A Quick List of Some Problems in AI Alignment As A Field · 2022-06-21T23:43:05.912Z · EA · GW

> their approaches are correlated with each other. They all relate to things like corrigibility, the current ML paradigm, IDA, and other approaches that e.g. Paul Christiano would be interested in.

You need to explain better how these approaches are correlated, and what an uncorrelated approach might look like. It seems to me that, for example, MIRI's agent foundations and Anthropic's prosaic interpretability approaches are wildly different!

> By the time you get good enough to get a grant, you have to have spent a lot of time studying this stuff. Unpaid, mind you, and likely with another job/school/whatever taking up your brain cycles.

I think you are wildly underestimating how easy it is for broadly competent people with an interest in AI alignment but no experience to get funding to skill up. I'd go so far as to say it's a strength of the field.

Comment by Rubi on Digital people could make AI safer · 2022-06-11T00:09:10.764Z · EA · GW

I think your "digital people lead to AI" argument is spot on, and basically invalidates the entire approach. I think getting whole brain emulation working before AGI is such a longshot that the main effect of investing in it is advancing AI capabilities faster.

Comment by Rubi on Introducing EAecon: Community-Building Project · 2022-05-31T00:20:42.807Z · EA · GW

Hopefully one day they grow big enough to hire an executive assistant.

Comment by Rubi on Hiring: How to do it better · 2022-05-25T06:47:12.678Z · EA · GW

While I'm familiar with the literature on hiring, particularly on unstructured interviews, I think EA organizations should give serious consideration to the possibility that they can do better than average. In particular, the literature is correlational rather than causal, suffers from major selection biases, and is certainly not as broadly applicable as its authors claim.

From Cowen and Gross's book Talent, which I think captures the point I'm trying to make well:
> Most importantly, many of the research studies pessimistic about interviewing focus on unstructured interviews performed by relatively unskilled interviewers for relatively uninteresting, entry-level jobs. You can do better. Even if it were true that interviews do not on average improve candidate selection, that is a statement about averages, not about what is possible. You still would have the power, if properly talented and intellectually equipped, to beat the market averages. In fact, the worse a job the world as a whole is at doing interviews, the more reason to believe there are highly talented candidates just waiting to be found by you.

The fact that EA organizations are looking for specific, unusual qualities, and the fact that EAs are generally smarter and more perceptive than the average hiring committee, are both strong reasons to think that EA can beat the average results from research that tells only a partial story.

Comment by Rubi on EA and the current funding situation · 2022-05-10T19:27:39.346Z · EA · GW

One of the key things you hit on is "Treating expenditure with the moral seriousness it deserves. Even offhand or joking comments that take a flippant attitude to spending will often be seen as in bad taste, and apt to turn people off."

However, I wouldn't characterize this as an easy win, even if it would be an unqualified positive. Calling out such comments when they appear is straightforward enough, but that's a slow process that might only modestly reduce them. I'd be interested in hearing ideas for how to change attitudes more thoroughly and quickly, because I'm drawing a blank.

Comment by Rubi on Aaron Gertler's Shortform · 2022-05-03T20:02:47.905Z · EA · GW

Cool, thanks!

Comment by Rubi on Aaron Gertler's Shortform · 2022-05-02T07:46:51.515Z · EA · GW

I like many books on the list, but I think you're doing a disservice by trying to recommend too many books at once. Cutting it down to 2-3 in each category would give people a better starting point.

Comment by Rubi on Big EA and Its Infinite Money Printer · 2022-04-30T17:32:40.321Z · EA · GW

It's a bit surprising, but not THAT surprising. 50 more technical AI safety researchers would represent somewhere between a 50% and 100% increase in the total number, which could be a justifiable use of 10% of OpenPhil's budget.

Comment by Rubi on Big EA and Its Infinite Money Printer · 2022-04-30T17:29:44.632Z · EA · GW

Thanks for the update!

Comment by Rubi on Big EA and Its Infinite Money Printer · 2022-04-30T02:43:48.774Z · EA · GW

Great writeup! 

Is there an OpenPhil source for "OpenPhil values a switch to an AI safety research career as +$20M in expected value"? It would help me a lot in addressing some concerns that have been brought up in local group discussions.

Comment by Rubi on Free-spending EA might be a big problem for optics and epistemics · 2022-04-13T20:57:29.658Z · EA · GW

Even before a cost-benefit analysis, I'd like to see an ordinal ranking of priorities. For organizations like CEA, what would they do with a 20% budget increase? What would they cut if they had to reduce their budget by 20%? The same goes for specific events, like EAGs. For a student campus club, what would they do with $500 in funding? $2,000? $10,000? I think this type of analysis would help determine whether some of the spending that appears most frivolous is actually the least important.

Comment by Rubi on Democratising Risk - or how EA deals with critics · 2022-01-02T22:26:18.670Z · EA · GW

To clear up my identity: I am not Seán and do not know him. I go by Rubi in real life, although it is a nickname rather than my given name. I did not mean for my account to be an anonymous throwaway, and I intend to keep using this account on the EA Forum. I can understand how that would not be obvious, as this was my first post, but the timing is coincidental: the original post generated a lot of controversy, which is why I saw it and decided to comment.

> You spoke to 20+ reviewers, half of whom were sought out to disagree with you, and not a single one could provide a case for differential technology?

I would have genuinely liked an answer to this. If none of the reviewers made the case, that is useful information about how the reviewers were selected. If some reviewers did make the case but were ignored, then it reflects negatively on the authors to leave it unaddressed and simply assert that the case for differential technology is unclear.

Comment by Rubi on Democratising Risk - or how EA deals with critics · 2022-01-02T22:00:33.681Z · EA · GW

Hi Carla,

Thanks for taking the time to engage with my reply. I'd like to engage with a few of the points you made.

First of all, my point prefaced with 'speaking abstractly' was genuinely abstract. I thought your paper was poorly argued, but certainly not so poorly argued that it should result in withdrawn funding. Over a long enough timeframe, everybody puts out some duds, and your organizations certainly have a track record of producing excellent work. My point was about avoiding an overcorrection, where consistently low-quality work is guaranteed some share of scarce funding merely out of fear that withdrawing it would be seen as censorship. It's a sign of healthy epistemics (in a dimension orthogonal to the criticisms of your post) for a community to be able to jump from a specific discussion to the general case, but I'm sorry you saw my abstraction as a personal attack.

You saw "we do not argue against the TUA, but point out the unanswered questions we observed. .. but highlight assumptions that may be incorrect or smuggle in values".  Pointing out unanswered questions and incorrect assumptions is how you argue against something! What makes your paper polemical is that you do not sufficiently check whether the questions  really are unanswered, or if the assumptions really are incorrect. There is no tension between calling your paper polemical and saying you do not sufficiently critique the TUA.  A more thorough critique that took counterarguments seriously and tried to address them would not be a polemic, as it would more clearly be driven by truth-seeking than hostility.

I was not "asking that we [you] articulate and address every hypothetical counterargument", I was asking that you address any, especially the most obvious ones. Don't just state "it is unclear why" they are believed to skip over a counterargument.

I am disappointed that you used my original post to further attack the epistemics of this community, and doubly so for claiming it failed to articulate clear, specific criticisms. The post was clear that the main failing I saw in your paper was a lack of engagement with counterarguments, specifically the case for technological differentiation and the case for avoiding the disenfranchisement of future generations through a limited democracy. Nor do I believe my criticism that the paper jumps around too much rather than engaging deeply with fewer issues was ambiguous. Ignoring these clear, specific criticisms and using the post as evidence of poor epistemics in the EA community makes me think you may be interpreting any disagreement as evidence for your point.

Comment by Rubi on Democratising Risk - or how EA deals with critics · 2021-12-29T10:30:50.886Z · EA · GW

> it's important that we don't condition funding on agreement with the funders' views.

Surely we can condition funding on the quality of the researcher's past work though? Freedom of speech and freedom of research are both important, but taking a heterodox approach shouldn't guarantee a sinecure either. 

If you completely disagree with the idea that people who consistently produce bad work should not be allocated scarce funds, I'm not sure we can have a productive conversation.

Comment by Rubi on Democratising Risk - or how EA deals with critics · 2021-12-29T09:19:24.553Z · EA · GW

> That puts a huge and dictatorial responsibility on funders in ways that are exactly what the paper argued are inappropriate.

If not the funders, do you believe anyone should be responsible for ensuring that harmful and wrong ideas are not widely circulated? I can certainly see the case that even wrong, harmful ideas should only be addressed by counterargument. However, I'm not saying that resources should be spent censoring wrong ideas harmful to EA, just that resources should not be spent actively promoting them. Funding is a privilege; consistently making bad arguments should eventually lead to its withdrawal, and if on top of that those bad arguments are harmful to EA causes, that should expedite the decision.

To be clear, that is absolutely not to say that publishing Democratising Risk was justification for firing anyone or cutting funding; I am still very much speaking abstractly.

Comment by Rubi on Democratising Risk - or how EA deals with critics · 2021-12-29T03:56:53.170Z · EA · GW

I thought the paper itself was poorly argued, largely as a result of biting off too much at once. Several times the case against the TUA was not actually argued, merely asserted to exist, along with one or two citations for which it is hard to evaluate whether they represent a consensus. Then, while I thought the original description of the TUA was accurate, the TUA response to criticisms was entirely ignored. Statements like "it is unclear why a precise slowing and speeding up of different technologies...across the world is more feasible or effective than the simpler approach of outright bans and moratoriums" were egregious, and made it seem like you did not do your research. You spoke to 20+ reviewers, half of whom were sought out to disagree with you, and not a single one could provide a case for differential technology? Not a single mention of the difficulty of incorporating future generations into the democratic process?

Ultimately, I think the paper would have been better served by focusing on a single section and leaving the rest for future work. The style of asserting rather than arguing, and of skipping over potential responses, comes across as more polemical than evidence-seeking. I believe that was the major contributor to the blowback you have received.

I agree that more diversity in funders would be beneficial. It is harmful to all researchers if access to future funding depends on the results of their work. Overall, though, the actual extent of the blowback is unclear from your post. What does "tried to prevent the paper being published" mean? Is the threat of withdrawn funding real or imagined? Were the authors whose work was criticized angry, and did they take any actions to retaliate?

Finally, I would like to abstract away from this specific paper. If criticism of the dominant paradigm limits a researcher's future funding and career opportunities, that is a sign of terrible epistemics in a field. However, if poor criticism of the dominant paradigm limits future funding and career opportunities, that is completely valid. The one line you wrote that I think all EAs would agree with is "This is not a game. Fucking it up could end really badly". If the wrong arguments would cause harm when believed, it is not only the right but the responsibility of funders to reduce their reach. Of course, the difficulty is in distinguishing criticisms that are wrong from criticisms that merely go against the current paradigm, while judging from within that paradigm. The responsibility of the researcher is to make their case as bulletproof as possible and to design it to convince believers in the current paradigm. Otherwise, even if their claims are correct, they won't make an impact. The "Effective" part of EA includes making the right arguments to convince the right people, rather than the arguments that are cathartic to unleash.

Comment by Rubi on Democratising Risk - or how EA deals with critics · 2021-12-29T02:54:26.483Z · EA · GW

Priors should matter! For example, early rationalists were (rightfully) criticized for being too open to arguments from white nationalists, believing they should only look at the argument itself rather than its source. It isn't good epistemics to ignore the source of an argument and its potential biases (though of course it isn't good epistemics to dismiss an argument out of hand based on its source either).