Comment by aidan-o-gara on Confused about AI research as a means of addressing AI risk · 2019-02-21T00:37:57.379Z · score: 3 (3 votes) · EA · GW

There are probably people who can answer this better, but here's my crack at it (from most to least important):

1. If people who care about AI safety also happen to be the best at making AI, then they'll try to align the AI they make. (This is already turning out to be a pretty successful strategy: OpenAI is an industry leader that cares a lot about risks.)

2. If somebody figures out how to align AI, other people can use their methods. They'd probably want to, if they buy that misaligned AI is dangerous to them, but this could fail if aligned methods are less powerful or more difficult than not-necessarily-aligned methods.

3. Credibility and public platform: People listen to Paul Christiano because he's a serious AI researcher. He can convince important people to care about AI risk.

Comment by aidan-o-gara on A system for scoring political candidates. RFC (request for comments) on methodology and positions · 2019-02-14T02:56:53.214Z · score: 9 (6 votes) · EA · GW

Really cool idea! Two possibilities:

1. I think rating candidates on a few niche EA issues is more likely to gain traction than trying to formalize the entire voting process. If you invest time figuring out which candidates are likely to promote good animal welfare and foreign aid policies, every EA has good reason to listen to you. But the weight you place on e.g. a candidate's health has nothing to do with the fact that you're an EA; they'd be just as well off listening to any other trusted pundit. I'm not sure if popularity is really your goal, but I think people would be primarily interested in the EA side of this.

2. It might be a good idea to stick to issues where any EA would agree: animal welfare, foreign aid. On other topics (military intervention, healthcare, education), values are often not the reason people disagree--they disagree for empirical reasons. If you stick to something where it's mostly a values question, people might trust your judgements more.

Comment by aidan-o-gara on EA grants available to individuals (crosspost from LessWrong) · 2019-02-13T00:14:23.099Z · score: 2 (2 votes) · EA · GW

Check out Tyler Cowen's Emergent Ventures.

We want to jumpstart high-reward ideas—moonshots in many cases—that advance prosperity, opportunity, liberty, and well-being. We welcome the unusual and the unorthodox.
Projects will either be fellowships or grants: fellowships involve time in residence at the Mercatus Center in Northern Virginia; grants are one-time or slightly staggered payments to support a project.
Think of the goal of Emergent Ventures as supporting new ideas and projects that are too difficult, too hard to measure, too unusual, too foreign, too small, or…too something to make their way through the usual foundation and philanthropic process.

Here's the first cohort of grant recipients. I think your project would fit what they're looking for, and it's a pretty low cost to apply.

Comment by aidan-o-gara on Will companies meet their animal welfare commitments? · 2019-02-03T23:55:13.018Z · score: 5 (4 votes) · EA · GW

Agreed on both: an article along the lines of "The world's biggest pork producer just broke their animal welfare commitment" seems very valuable and possibly effective as shaming, while "Corporate animal welfare campaigning often fails to deliver" would definitely be counterproductive.

Comment by aidan-o-gara on Will companies meet their animal welfare commitments? · 2019-02-03T18:08:47.305Z · score: 3 (3 votes) · EA · GW

I think Vox's Future Perfect could be a good platform for this--either one of you writing a guest article, or giving Vox the information and letting them write it. These broken commitments make for an interesting news story, Vox's readership is already fairly interested in animal rights, and they could build it into an ongoing series of articles tracking progress. Maybe consider reaching out directly to Kelsey Piper, Dylan Matthews, or Vox?

Comment by aidan-o-gara on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-29T23:47:52.209Z · score: 7 (7 votes) · EA · GW

I think I'd challenge this goal. If we're choosing between trying to improve Vox vs trying to discredit Vox, I think EA goals are served better by the former.

1. Vox seems at least somewhat open to change: Matthews and Klein seem genuinely pretty EA, they went out on a limb to hire Piper, and they've sacrificed some readership to maintain EA fidelity. Even if they place less-than-ideal priority on EA goals vs. progressivism, profit, etc., they still clearly place some weight on pure EA.

2. We're unlikely to convince Future Perfect's readers that Future Perfect is bad/wrong and we in EA are right. We can convince core EAs to discredit Vox, but that's unnecessary--if you read the EA Forum, your primary source of EA info is not Vox.

Bottom line: non-EAs will continue to read Future Perfect no matter what. So let's make Future Perfect more EA, not less.

Comment by aidan-o-gara on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-29T23:04:07.428Z · score: 3 (3 votes) · EA · GW

Agreed. If you accept the premise that EA should enter popular discourse, most generally informed people should be aware of it, etc., then I think you should like Vox. But if you think EA should be a small elite academic group, not a mass movement, that's another discussion entirely, and maybe you shouldn't like Vox.

Comment by aidan-o-gara on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-28T06:18:08.795Z · score: 17 (12 votes) · EA · GW

3. I have no personal or inside info on Future Perfect, Vox, Dylan Matthews, Ezra Klein, etc. But it seems like they've got a fair bit of respect for the EA movement--they actually care about impact, and they're not trying to discredit or overtake more traditional EA figureheads like MacAskill and Singer.

Therefore I think we should be very respectful towards Vox, and treat them like ingroup members. We have great norms in the EA blogosphere about epistemic modesty, avoiding ad hominem attacks, viewing opposition charitably, etc. that allow us to have much more productive discussions. I think we can extend that relationship to Vox.

Using this piece as an example: if you were criticizing Rob Wiblin's podcasting instead of Vox's writing, I think people might ask you to be more charitable. We're not anti-criticism--we're absolutely committed to truth and honesty, which means seeking good criticism--but we also have well-justified trust in the community. We share a common goal, and that makes it really easy to cooperate.

Let's trust Vox like that. It'll make our cooperation more effective, we can help each other achieve our common goal, and, if necessary, we can always take back our trust later.

Comment by aidan-o-gara on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-28T06:05:48.846Z · score: 7 (4 votes) · EA · GW

2. Just throwing it out there: Should EA embrace being apolitical? As in, a possible official core virtue of the EA movement proper: Effective Altruism doesn't take sides on controversial political issues, though of course individual EAs are free to.

Robin Hanson's "pulling the rope sideways" analogy has always struck me: in society's great tug-of-war debates on abortion, immigration, and taxes, it's rarely effective to pick a side and pull. First, you're one of many, facing plenty of opposition, which makes your goal difficult to accomplish. Second, if half the country thinks your goal is bad, it very well might be. On the other hand, pulling sideways is easy: nobody's going to filibuster to prevent you from handing out malaria nets--everybody thinks it's a good idea.

(This doesn't mean staying out of politics entirely. 80k writes about improving political decision-making and becoming a congressional staffer--both nonpartisan ways to do good in politics.)

If EA were officially apolitical like this, we would benefit by Hanson's logic: we can more easily achieve our goals without enemies, and we're more likely to be right. But we could also gain credibility and influence in the long run by refusing to enter the political fray.

I think part of EA's success comes from its being an identity label, almost a third party, an ingroup for people who dislike the Red/Blue identity divide. I'd say most EAs (and certainly the EAs who do the most good) identify much more strongly with EA than with any political ideology. That keeps us more dedicated to the ingroup.

But I could imagine an EA failure mode where, a decade from now, Vox is the most popular "EA" platform and the average EA is liberal first, effective altruist second. This happens if EA becomes synonymous with other, more powerful identity labels--kinda like how animal rights and environmentalism could be their own identities, but have mostly been absorbed into the political left.

If apolitical were an official EA virtue, we could easily disown German Lopez on marijuana or Kamala Harris on criminal justice--improving epistemic standards and avoiding making enemies at the same time. Should we adopt it?

Comment by aidan-o-gara on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-28T05:34:37.914Z · score: 9 (9 votes) · EA · GW

Really valuable post, particularly because EA should be paying more attention to Future Perfect--it's some of EA's biggest mainstream exposure. Some thoughts in different threads:

1. Writing for a general audience is really hard, and I don't think we can expect Vox to maintain the fidelity standards EA is used to. It has to be entertaining, every article has to be accessible to new readers (meaning you can't build up reader expectations over time, the way a sequence of blog posts or a book can), and Vox has to write for the audience they have rather than wait for the audience we'd like.

In that light, look at, say, the baby Hitler article. It has to be connected to the average Vox reader's existing interests, hence the Ben Shapiro intro. It has to be entertaining, so Matthews digresses onto time travel and The Matrix. Then it has to provide valuable informational content: an intro to moral cluelessness and expected value.

It's pretty tough for one article to do all that, AND seriously critique Great Man history, AND explain the history of the Nazi Party. To me, dropping those isn't shoddy journalism--it's a sensible choice to engage the readers Vox actually has rather than the ideal reader.

Bottom line: people who took the 2018 EA Survey are twice as likely as the average American to hold a bachelor's degree, and seven times as likely to hold a Ph.D. That's why Robin Hanson and GiveWell have been great reading resources so far. But if we actually want EA to go mainstream, we can't rely on econbloggers and think tanks to reach most people. We need easier explanations, and I think Vox provides that well.

...

(P.S. Small matter: Matthews does not say that it's "totally impossible" to act in the face of cluelessness, as you implied--he says the opposite. And then: "If we know the near-term effects of foiling a nuclear terrorism plot are that millions of people don't die, and don't know what the long-term effects will be, that's still a good reason to foil the plot." That's a great informal explanation. Could you edit to correct that?)

Comment by aidan-o-gara on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-24T19:26:12.201Z · score: 2 (2 votes) · EA · GW

Fantastic, I completely agree, so I don't think we have any substantive disagreement.

I guess my only remaining question would then be: should your AI predictions ever influence your investing vs donating behavior? I'd say absolutely not, because you should have an incredibly strong prior against beating the market. If your AI predictions imply that the market is wrong, that's just a mark against your AI predictions.

You seem inclined to agree: The only relevant factor for someone considering donation vs investment is expected future returns. You agree that we shouldn't expect AI companies to generate higher-than-average returns in the long run. Therefore, your choice to invest or donate should be completely independent of your AI beliefs, because no matter your AI predictions, you don't expect AI companies to have higher-than-average future returns.

Would you agree with that?

Comment by aidan-o-gara on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-24T02:21:11.241Z · score: 2 (2 votes) · EA · GW

I think the background assumptions are probably doing a lot of work here. You'd have to go really far into the weeds of AI forecasting to get a good sense of what factors push which directions, but I can come up with a million possible considerations.

Maybe slow takeoff is shortly followed by the end of material need, making any money earned in a slow takeoff scenario far less valuable. Maybe the government nationalizes valuable AI companies. Maybe slow takeoff doesn't really begin for another 50 years. Maybe the profits of AI will genuinely be broadly distributed. Maybe current companies won't be the ones to develop transformative AI. Maybe investing in AI research increases AI x-risks, by speeding up individual companies or causing a profit-driven race dynamic.

It's hard to predict when AI will happen, and it's worlds harder to translate that into present-day stock-picking advice. If you've got a world-class understanding of the issues and spend a lot of time on it, then you might reasonably believe you can outpredict the market. But beating the market is the only way to generate higher-than-average returns in the long run.

Comment by aidan-o-gara on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-24T02:09:32.228Z · score: 3 (3 votes) · EA · GW

The implicit argument here seems to be that, even if you think typical investment returns are too low to justify saving over donating, you should still consider investing in AI because it has higher growth potential.

I totally might be misunderstanding your point, but here's the contradiction as I see it. If you believe (A) the S&P500 doesn't give high enough returns to justify investing instead of donations, and (B) AI research companies are not currently undervalued (i.e., they have roughly the same net expected future returns as any other company), then you cannot believe that (C) AI stock is a better investment opportunity than any other.

I completely agree that many slow-takeoff scenarios would make tech stocks skyrocket. But unless you're hoping to predict the future of AI better than the market, I'd say the expected value of AI is already reflected in tech stock prices.

To invest in AI companies but not the S&P500 for altruistic reasons, I think you have to believe AI companies are currently undervalued.
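To make that arithmetic concrete, here's a toy sketch (the payoffs, probabilities, 7% average return, and the `market_price` / `my_expected_return` helpers are all made up purely for illustration): if the market already prices in the chance of an AI windfall, you only expect above-average returns when your probability estimate disagrees with the market's--which is exactly the "beating the market" claim.

```python
# Toy model: a stock pays 1000 if slow-takeoff AI makes the company hugely
# profitable, and 100 otherwise. All numbers are invented for illustration.

def market_price(p_market, payoff_ai=1000.0, payoff_no_ai=100.0, avg_return=0.07):
    """Price at which the market's expected payoff earns only the average return."""
    expected_payoff = p_market * payoff_ai + (1 - p_market) * payoff_no_ai
    return expected_payoff / (1 + avg_return)

def my_expected_return(p_mine, p_market):
    """My expected return if I buy at the market's price but hold my own probability."""
    price = market_price(p_market)
    my_expected_payoff = p_mine * 1000.0 + (1 - p_mine) * 100.0
    return my_expected_payoff / price - 1

# If I accept the market's probability (claim B), I expect only the average return:
print(my_expected_return(p_mine=0.2, p_market=0.2))  # ~0.07

# I only expect above-average returns by claiming the market's probability is wrong:
print(my_expected_return(p_mine=0.4, p_market=0.2))  # ~0.76
```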

Comment by aidan-o-gara on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-24T00:35:32.857Z · score: 13 (8 votes) · EA · GW

I like the general idea that AI timelines matter for all altruists, but I really don't think it's a good idea to try to "beat the market" like this. The current price of these companies is already determined by cutthroat competition between hyper-informed investors. If Warren Buffett or Goldman Sachs thinks the market is undervaluing these AI companies, then they'll spend billions bidding up the stock price until they're no longer undervalued.

Thinking that Google and Co are going to outperform the S&P500 over the next few decades might not sound like a super bold belief--but it should. It assumes that you're capable of making better predictions than the aggregate stock market. Don't bet on beating markets.