Comment by aidan-o-gara on High School EA Outreach · 2019-05-09T00:35:07.727Z · score: 1 (1 votes) · EA · GW

That's very interesting. Seems like evidence that EA might not be inherently more appealing to students at top schools, but rather that EA's current composition is a product of circumstance and chance.

Comment by aidan-o-gara on High School EA Outreach · 2019-05-09T00:34:03.848Z · score: 7 (3 votes) · EA · GW

Hm. Definitely more a personal impression, and I should've qualified that as "it seems to me". But I'd also bet on it being true.

Data point #1, people who took the 2018 EA Survey are twice as likely as the average American to hold a bachelor's degree, and 7x more likely to hold a Ph.D. Maybe they're getting these degrees from less competitive schools, but that seems less likely than the alternative.

Data point #2, a quick Google search reveals that all of the Ivy League universities have EA clubs. On the other hand, at the 5 most populous US universities, each with 5-10x more students than the average Ivy, only Ohio State and Texas A&M have any online indication of an EA club, and both of these online pages have zero content posted on them.

Anecdotally, I go to an unranked state university with >50k students. There's no EA club and I haven't met anyone that's ever heard of EA.

I think there's a lot of potential for EA to become much more mainstream, but in its current state, where top recommended careers include Machine Learning PhDs, Economics PhDs, and quant trading, it's very hard for it to appeal to the vast majority of people.

Comment by aidan-o-gara on High School EA Outreach · 2019-05-07T05:33:51.817Z · score: 8 (4 votes) · EA · GW

What kinds of high schools did you generally target? Did you specifically target your efforts at schools that are feeders for top universities?

Though I wish EA were more diverse, it's simply true that students at top universities have far more interest in EA than the general population does. I'd imagine this holds true in high schools: the kids who end up running Berkeley EA are the ones who'd love to read Peter Singer in high school.

Comment by aidan-o-gara on Thoughts on 80,000 Hours’ research that might help with job-search frustrations · 2019-04-19T03:32:01.382Z · score: 7 (6 votes) · EA · GW

An interesting data point is that the current Director of Operations at Open Philanthropy, Beth Jones, was previously the Chief Operating Officer of the Hillary Clinton 2016 campaign.

On the other hand, the four operations associates most recently hired by OpenPhil have impressive but not overwhelmingly intimidating backgrounds. I'd like to know how many applied for those four positions.

Comment by aidan-o-gara on Who in EA enjoys managing people? · 2019-04-11T05:45:06.722Z · score: 8 (7 votes) · EA · GW

Could you clarify a bit what you mean by "who"? As in, are you looking for organizations, names of individuals, personality types, or backgrounds of people who'd be more interested in management, or something else?

Comment by aidan-o-gara on Activism to Make Kidney Sales Legal · 2019-04-07T01:37:23.677Z · score: 1 (1 votes) · EA · GW

Wouldn't the prosecutor drop the charge?

Comment by aidan-o-gara on Does EA need an underpinning philosophy? Could sentientism be that philosophy? · 2019-03-28T02:00:45.597Z · score: 4 (4 votes) · EA · GW

I wouldn't say I'm opposed to the idea of sentientism; I agree with basically all of its claims and conclusions. But I don't think it'd be a good idea to strongly associate EA with sentientism, and I don't think it adds much to discussions of ethics.

On the first point, I agree pretty strongly with the framing that effective altruism is a question, not an ideology, so I don't want to prescribe the ethics that someone must agree with in order to care about effective altruism.

Second, as I currently understand it (which is not super well), sentientism seems to take only one ethical stance: conscious experience is the source of all moral value. This is definitely different from a stance that gods or humans or carbon-based life are the only sources of moral value, so kudos for having a position. But it takes no stance on most of the most important ethical questions: deontology vs consequentialism vs others, realism vs non-realism, internalism vs externalism, moral uncertainty. Even assuming a utilitarian starting point, it takes no stance on person-affecting views, time discounting, preference vs hedonic utilitarianism, etc. Sentientism is my favorite answer to the question it's trying to answer, but it's hardly a comprehensive moral system.

[Meta: I'm still glad you posted this. We need people to think about new ideas, even though we're not going to agree with most of them.]

Comment by aidan-o-gara on Severe Depression and Effective Altruism · 2019-03-26T19:36:12.064Z · score: 7 (4 votes) · EA · GW

Thank you for this; there are plenty of others who feel the same way. While I never experienced these feelings in an overwhelming or depressing way, I've felt the same guilt about taking care of myself before engaging in altruism.

This SlateStarCodex post convinced me that my view was simply incorrect. To be an effective altruist is to do the most good possible, and feeling guilty, or shaming others for doing only some of the good rather than all of it, is counterproductive to EA goals - it hurts you, it hurts EA as a movement, and ultimately it hurts the people you're trying to help in the first place. There is no "correct" line for how much to give, so to help us help others without feeling guilty, EA/GWWC has decided to draw that line at 10%. Feel free to go above it, but it's absolutely not an obligation.

Of course, knowing you shouldn't feel guilty is easier than escaping the emotion of guilt, and nobody can blame you for the feeling. But I genuinely believe on an intellectual level that I ought not feel guilty for most of the good I don't do, and it helps.

Comment by aidan-o-gara on Terrorism, Tylenol, and dangerous information · 2019-03-25T20:27:34.537Z · score: 1 (1 votes) · EA · GW

Good point, I hadn't considered that. If I were to try to fit this into my model, I would say that nobody is really looking to produce the best military technology and tactics in between wars. But if you look at a period of sustained effort to stay on the military cutting edge, e.g. the Cold War, you won't see as many of these mistakes; instead you'll find fairly continuous progress, with both sides continuously using the best available military technology. I'm not sure this is actually a good interpretation, but it seems possible. (I'd be interested in where you think we're failing today!)

But even if this is true, your original claim still holds: if it takes a Cold War level of vigilance to stay on the cutting edge, then terrorists probably aren't deploying the best available weaponry, simply because they don't know about it.

So maybe an exceptional effort can keep you on the cutting edge, but terrorist groups aren't at that cutting edge?

Comment by aidan-o-gara on Terrorism, Tylenol, and dangerous information · 2019-03-23T17:08:57.046Z · score: 5 (5 votes) · EA · GW

The clearest explanation seems to be that extremely few people, terrorists included, are seriously trying to figure out the most effective ways to kill strangers - if they were, they'd be doing a better job of it.

AI Impacts' discontinuous progress investigation finds that it's really hard to make sudden progress on metrics that anyone cares about, because the low hanging fruit will already be gone. I doubt national militaries routinely miss effective ways to conduct war - when they make a serious effort, they find the best weapons.

If terrorists aren't noticing the most effective ways to maximize their damage, it could be good evidence that they're not seriously trying. (So +1 to Gwern's theory)

Comment by aidan-o-gara on Concept: EA Donor List. To enable EAs that are starting new projects to find seed donors, especially for people that aren’t well connected · 2019-03-19T00:58:52.564Z · score: 20 (8 votes) · EA · GW

Hey, saw your other post, so I just wanted to give some feedback. FWIW I think this is a good idea and a good post. It builds on a concept that's already been somewhat discussed, does a good job brainstorming pros, cons, challenges, and ideas, and overall is a very good conversation starter and continuer.

As for the negative feedback, one possibility is that people disliked your "hard to abandon" concept. There's a fair bit of focus in EA on not causing harm when trying to do good, and one of the most advocated ways to avoid doing harm is to be cautious before taking irreversible actions. I could see someone arguing that a poor implementation of this idea is worse than none at all (because it would undermine possible future attempts, or lower the reputation of startup EA projects). I'd personally agree that a poor rollout could well be worse than none, and that the general mindset here should probably be to do it right or not at all, though I don't see that as reason enough to downvote.

Also, as another newcomer who feels self-conscious/nervous beginning to post on here, just my encouragement to stick with it. It seems very likely that our input is valuable and valued, even when it feels ignored.

Comment by aidan-o-gara on I Am The 1% · 2019-03-14T08:43:33.034Z · score: 6 (5 votes) · EA · GW

Thanks for this! I think there should be a lot more introduction material to effective altruism, and this is a great step.

One stat I'd nitpick: I think GiveWell and other charity evaluators would pretty strongly disagree with the statement that someone can save a life with $586.

First, $586 is on the very low end of GiveWell's estimates for the cost of saving a life. From their website: "As of November 2016, the median estimate of our top charities' cost-effectiveness ranged from ~$900 to ~$7,000 per equivalent life saved."

Second, that's not literally $ per life saved, it's $ per "equivalent life saved". GiveWell does moral weight conversions, meaning e.g. that if an intervention increases consumption by 25% for 100 people for one year, then under their moral weight system that would be equivalent to saving 0.685 lives. It's tough to make conversions like that, but it's essential in a world with unavoidable tradeoffs; we should just be transparent about when we're doing them. (I'm actually not sure whether this is an important factor in the fistula case, it's more of a general warning.)
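To make the mechanics concrete, here's a minimal sketch of that kind of conversion. The weights and the helper function below are hypothetical placeholders I made up for illustration, not GiveWell's actual moral weights (theirs are documented in their cost-effectiveness analyses and have changed over time).

```python
import math

# Hypothetical moral weights (placeholders, NOT GiveWell's actual numbers):
VALUE_PER_LN_CONSUMPTION_UNIT = 1.0   # value of raising ln(consumption) by 1 for one person for one year
VALUE_PER_LIFE_SAVED = 32.0           # value of averting one death, in the same units (assumed)

def equivalent_lives_saved(consumption_increase, people, years):
    """Convert a consumption benefit into 'equivalent lives saved' via moral weights."""
    ln_units = math.log(1 + consumption_increase) * people * years
    return ln_units * VALUE_PER_LN_CONSUMPTION_UNIT / VALUE_PER_LIFE_SAVED

# The example from above: +25% consumption for 100 people for one year.
print(equivalent_lives_saved(0.25, 100, 1))  # ~0.70 with these placeholder weights
```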

Third, GiveWell seems to strongly believe that "we can't take expected value estimates literally, even when they're unbiased", because experience shows that exceptionally effective charities are simply rare. An example: if a high school physics student collects some experimental data that disproves F=ma, do you believe him? No, because this new evidence is much weaker than our prior belief. Similarly, if a new charity comes out with an estimate that says it can save a life for $1, do we believe it? Probably not - not because the study was flawed or biased or malicious or anything like that, but because the odds are far better that the study was somehow wrong than that they can actually save lives for $1.
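For the curious, the statistical idea behind that post is a Bayesian adjustment: shrink a noisy cost-effectiveness estimate toward a prior over how effective charities usually are. Here's a minimal sketch of the normal-normal version with made-up numbers (GiveWell's own writeup works through this kind of calculation in far more detail):

```python
def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    """Posterior mean for a normal prior combined with a normal, noisy measurement."""
    precision_prior = 1 / prior_var
    precision_est = 1 / estimate_var
    return (prior_mean * precision_prior + estimate * precision_est) / (precision_prior + precision_est)

# Work on a log10($ per life saved) scale so multiplicative uncertainty is roughly normal.
prior_mean, prior_var = 3.3, 0.25   # prior (assumed): ~$2,000 per life, sd of half an order of magnitude
estimate, estimate_var = 0.0, 4.0   # new study (assumed): $1 per life, but sd of two orders of magnitude

adjusted = posterior_mean(prior_mean, prior_var, estimate, estimate_var)
print(f"Adjusted estimate: ~${10 ** adjusted:,.0f} per life saved")  # lands near the prior, not near $1
```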

One of the toughest parts of introducing people to EA is dealing with numbers like these. They've been debated in the context of Giving What We Can and Will MacAskill's Doing Good Better. It's tempting and effective to give a jarring headline like "This campaign saved X lives today", but all in all, I think it's the right move not to oversell, and to be honest about our uncertainty.

(But seriously, really cool project)

Comment by aidan-o-gara on Survey to Promote EA Mental Health · 2019-03-14T08:14:33.599Z · score: 2 (2 votes) · EA · GW

Good point. I think you could reframe it so it still works: if the goal is to treat mental health issues in EA, the subset of people you could actually reach with treatment is probably fairly similar to the subset that would answer this poll, i.e. people who use the Forum.

It probably can't deliver accurate numbers on prevalence, but it can profile the demographics and desires of the people it's targeting.

Comment by aidan-o-gara on The Importance of Truth-Oriented Discussions in EA · 2019-03-14T02:51:57.396Z · score: 56 (25 votes) · EA · GW

My 2 cents: Nobody's going to solve the question of social justice here, the path forward is to agree on whatever common ground is possible, and make sure that disagreements are (a) clearly defined, avoiding big vague words, (b) narrow enough to have a thorough discussion, and (c) relevant to EA. Otherwise, it's too easy to disagree on the overall "thumbs up or down to social justice" question, and not notice that you in fact do agree on most of the important operational questions of what EA should do.

So "When introducing EA to newcomers, we generally shouldn't discuss income and IQ, because it's unnecessary and could make people feel unwelcome at first" would be a good claim to disagree on, because it's important to EA, and because the disagreement is narrow enough to actually sort out.

Other examples of narrow and EA-relevant claims that therefore could be useful to discuss: "EA orgs should actively encourage minority applicants to apply to positions"; "On the EA Forum, no claim or topic should be forbidden for diversity reasons, as long as it's relevant to EA"; or "In public discussions, EAs should make minority voices welcome, but not single out members of minority groups and explicitly ask for their opinions/experiences, because this puts them in a potentially stressful situation."

On the other hand, I think this conversation has lots of claims that are (a) too vague to be true or false, (b) too broad to be effectively discussed, or (c) not relevant to EA goals. Questions like this would include "Are women oppressed?", "Is truth more important than inclusivity?", or "Is EA exclusionary?" It's not obvious what it would really mean for these to be true or false, you're unlikely to change anyone's mind in a reasonable amount of time, and their significance to EA is unclear.

My guess is that we all probably agree a lot on specific operationalized questions relevant to EA, and disagree much more when we abstract to overarching social justice debates. If we stick to specific, EA-relevant questions, there's probably a lot more common ground here than there seems to be.

Comment by aidan-o-gara on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-03T05:35:56.800Z · score: 15 (7 votes) · EA · GW

Strongly agreed. I really like Raemon's analysis of why it's so hard to get an EA job: we're network-constrained. [This isn't exactly how he frames it; it's more my take on his idea.]

Right now, EA operates very informally, relying heavily on the fact that the several hundred people working at explicitly EA orgs are all socially networked together to some degree. This social group was significantly inherited from LessWrong and Bay Area rationalism, and EA has had great success in co-opting it for EA goals.

But as EA grows beyond its roots, more people want in, and you can't have a social network of ten thousand, let alone a million. So we have two options: (a) increase the bandwidth of the social network, or (b) stop relying so much on the social network.

(a) increasing bandwidth looks like exactly what you're talking about: creating ways for newcomers to EA to make EA friends, develop professional relationships with EAs, etc., through better online platforms and in-person groups.

(b) not relying on personal relationships looks like becoming more corporate, relying on traditional credentials, scaling up until people actually stand a strong chance of landing jobs via open application, etc.

(a) seems to have clear benefits with no obvious harms, as long as it can be done, so it seems very much worth it for us to try.

Comment by aidan-o-gara on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-03-02T08:24:39.974Z · score: 14 (9 votes) · EA · GW

I think part of what might be driving the difference of opinion here is that the type of EAs who need a 45-minute chat are not the type of EAs that 80k meets. If you work at 80k, you and most of the EAs you know probably have dozens of EA friends, have casual conversations about EA, pick up informal knowledge easily, and can talk out your EA ideas with people who can engage. But the majority of people who call themselves EAs probably don't have many, if any, friends who work at EA organizations, donate a lot, provide informal knowledge of EA, or can seriously help them figure out how to have a high-impact career.

A 45-minute discussion can therefore do a lot more good for someone outside the EA social circle than for someone who has friends who can have that conversation with them.

Comment by aidan-o-gara on Review of Education Interventions and Charities in Sub-Saharan Africa · 2019-02-27T03:21:05.395Z · score: 4 (4 votes) · EA · GW

Good point, I wasn't fully considering that. I think Michael Plant's recent investigation into mental health as a cause area is a perfect example of the value of independent research - mental health isn't something GiveWell has focused on. While I still think it's going to be extremely difficult to beat GiveWell at, e.g., evaluating which deworming charity is most effective, or which health intervention tends to be most effective, I do think independent researchers can make important contributions in identifying GiveWell's "blind spots".

Mental health and education both could be good examples. At this point, GiveWell doesn't recommend either. But they're not areas that GiveWell has spent years building expertise in. So it's reasonable to expect that, in these areas, a dedicated newcomer can produce research that rivals GiveWell's in quality.

So I'd revise my stance to: Do your own research if there's an upstream question (like the moral value of mental suffering, the validity of life satisfaction surveys, or the intrinsic value of education) that you think GiveWell might be wrong about. Often, you'll conclude that they were right, but the value of uncovering their occasional mistakes is high. Still, trust GiveWell if you agree with their initial assumptions on what matters.

Comment by aidan-o-gara on After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation · 2019-02-27T02:02:14.996Z · score: 20 (13 votes) · EA · GW

Just a thank you for sharing, it can be scary to share your personal background like this but it's extremely helpful for people looking into EA careers.

Comment by aidan-o-gara on Review of Education Interventions and Charities in Sub-Saharan Africa · 2019-02-26T00:12:12.392Z · score: 8 (6 votes) · EA · GW

I really like the education review, it seems like a great introduction to the literature on effective education interventions. And it's even better that you'll be reviewing health interventions soon, given that they seem generally more effective than education, both in terms of certainty and overall impact.

But I would still have strong confidence that GiveWell's top charities all have significantly higher expected value than the results of this investigation, for two reasons.

First, GiveWell has access to the internal workings of charities, allowing them to recommend charities that do a better job of achieving their intervention. This goes as far as GiveWell making almost a dozen site visits over the past five years to directly observe these charities in action. There's just no way to replicate this without close, prolonged contact with all the relevant charities.

Second, GiveWell simply has more experience and expertise in development evaluations than someone doing this in their free time. It's fantastic that you all are working with these donors, and your actions seem likely to have a strong impact. But GiveWell has 25 staff, a decade of experience in the area, and access to any relevant experts and insider information. It's very difficult to replicate the quality of recommendations that come from that process. Doing the research yourself has other benefits: it increases engagement with the cause, it teaches a valuable skill, etc. But when there's a million dollars to be donated, it might be best to trust GiveWell.

If the donors want an intervention that's both certain and transformative, GiveDirectly seems like an obvious choice.

Comment by aidan-o-gara on Has your "EA worldview" changed over time? How and why? · 2019-02-25T15:59:54.495Z · score: 4 (3 votes) · EA · GW

Really cool thought, this is persuasive to me.

If I can try to rephrase your view: economic rationality tells us that tradeoffs do in fact exist, and therefore rational agents must be able to make comparisons in every case. There has to be some amount of every value that you'd trade for some amount of every other value; otherwise you end up paralyzed and decisionless.

You're saying that, although we'd like to have this coherent total utility function, realistically it's impossible. We run into the theoretical problems you mention, and more fundamentally, some of our goals simply are not maximizing goals, and there is no rule that can accurately describe the relationship between them. Do we end up paralyzed and decisionless, with no principled way to trade off between the different goals? Yes, that's unavoidable.

And one clarification: would you say that this non-comparability is a feature more of human preferences, where we biologically have desires that aren't integrated into a single utility function, or of morality, where there are independent goals with independent moral worth?

Comment by aidan-o-gara on Can the EA community copy Teach for America? (Looking for Task Y) · 2019-02-24T22:52:35.694Z · score: 7 (5 votes) · EA · GW

As a college student, I volunteer a few hours a week at Faunalytics, an EA-aligned animal welfare advocacy/research group. I think volunteering with Faunalytics is a good candidate for a small-scale Task Y.

I started off by editing their old article archives and updating them to fit their new article formatting. It was pretty boring, but it was useful for Faunalytics because it let them publish their archived research summaries, and it let me show Faunalytics that I was committed and could be trusted with responsibility.

Sometimes I'd rewrite old articles that seemed poorly done, so after a few months my supervisor liked my writing enough to move me up to doing my own research summaries. Each week, I'd be assigned a paper about something relevant to animal or environmental advocacy. I'd write an 800-word summary in the style of a blog post, and Faunalytics would publish it to their library. Here's some of what I wrote (the tagging system is buggy; it doesn't list a lot of my articles).

I recently stopped doing research summaries for time reasons, but I'm now working with their research team on analyzing data from their annual Animal Tracker survey poll.

The parts I've really enjoyed about the work are:

  • The papers could be interesting, and I learned a bit about animal topics
  • I think most of what I wrote was informative and would be useful to e.g. animal activists who wanted to better understand a particular question. Examples: Does ecotourism help or harm local wildlife? What's the relationship between domestic violence and animal abuse? (But, see below: informative and useful to some people is not necessarily the same as effective in doing good)
  • Writing research summaries is very engaging work, just the right level of difficulty, and my writing skills markedly improved
  • It can lead to other opportunities: They now trust me enough to let me do their data analysis project, which is really fun, educational, and (given that I'm a student) will probably be the most legitimate thing I've published once it's done. I'd also be comfortable asking my supervisor for a recommendation letter for a job, and if I wanted to get more involved in EA animal rights, I think I'd be able to make connections through Faunalytics.

The parts that weren't so great are:

  • On the whole, I'm not sure I've had much impact. If I were convinced that the majority of causes within animal welfare are effective, then I would probably think I've had a good positive impact. But I don't think e.g. the environmental impacts of ecotourism are very important from an altruistic standpoint, which really decreases the value of my work.
  • Being a low-commitment volunteer is simply a bad arrangement in a lot of ways. At least for me, doing something a few hours a week often slides into doing it zero hours a week, especially in a volunteer relationship where you've made very little firm commitment and there are no consequences for being late or failing to deliver. I think I combatted this pretty well by forcing myself to stick to deadlines, but I totally understand the GiveWell position of not accepting volunteers because they're not committed enough.

On the whole, for anyone looking to explore working in EA more broadly, I think volunteering at Faunalytics is a great idea: the possibility of direct impact, mostly engaging work, and a strong opportunity to prove yourself and make connections that can lead to future opportunities. Check it out here if you're interested, and feel free to message me with questions.

(Anybody have input on whether I should write a full post about my experience/advertising the opportunity?)

Comment by aidan-o-gara on EA Forum Prize: Winners for January 2019 · 2019-02-23T23:59:22.958Z · score: 15 (11 votes) · EA · GW

The prize definitely seems useful for encouraging deeper, better content. One question: would a smaller, more frequent set of prizes be more effective? Maybe a prize every two weeks?

My intuition says a single $1,000 top prize won't generate twice as much impact as a $500 top prize awarded every two weeks - thinking along the lines of prospect theory, where a win is a win and winning $500 is worth a lot more than half of winning $1,000; or prison reform literature, where a higher chance of a smaller punishment deters crime more effectively than a small chance of a big punishment.

These prize posts probably create buzz and motivate people to begin, improve, and finish their posts; doubling their frequency and halving their payout could be more effective at the same cost.

(Counterargument: the biggest cost isn't money, it's time, and a two week turnaround is a lot for moderators. Not sure how to handle that.)

Comment by aidan-o-gara on Rationality as an EA Cause Area · 2019-02-23T23:45:36.488Z · score: 2 (2 votes) · EA · GW

I agree that LW has been a big part of keeping EA epistemically strong, but I think most of that is selection rather than education. It's not that reading LW makes you think much more clearly or care much more about truth; it's that only people who are already that way decide to read LW, and they then get channeled to EA.

If that's true, it doesn't necessarily discredit rationality as an EA cause area, it just changes the mechanism and the focus: maybe the goal shouldn't be making everybody LW-rational, it should be finding the people that already fit the mold, hopefully teaching them some LW-rationality, and then channeling them to EA.

Comment by aidan-o-gara on Confused about AI research as a means of addressing AI risk · 2019-02-21T00:37:57.379Z · score: 3 (3 votes) · EA · GW

There are probably people who can answer this better, but here's my crack at it (from most to least important):

1. If people who care about AI safety also happen to be the best at making AI, then they'll try to align the AI they make. (This is already turning out to be a pretty successful strategy: OpenAI is an industry leader that cares a lot about risks.)

2. If somebody figures out how to align AI, other people can use their methods. They'd probably want to, if they buy that misaligned AI is dangerous to them, but this could fail if aligned methods are less powerful or more difficult than not-necessarily-aligned methods.

3. Credibility and public platform: People listen to Paul Christiano because he's a serious AI researcher. He can convince important people to care about AI risk.

Comment by aidan-o-gara on A system for scoring political candidates. RFC (request for comments) on methodology and positions · 2019-02-14T02:56:53.214Z · score: 9 (6 votes) · EA · GW

Really cool idea! Two possibilities:

1. I think rating candidates on a few niche EA issues is more likely to gain traction than trying to formalize the entire voting process. If you invest time figuring out which candidates are likely to promote good animal welfare and foreign aid policies, every EA has good reason to listen to you. But the weight you place on e.g. a candidate's health has nothing to do with the fact that you're an EA; people would do just as well listening to any other trusted pundit. I'm not sure if popularity is really your goal, but I think people would be primarily interested in the EA side of this.

2. It might be a good idea to stick to issues where any EA would agree: animal welfare, foreign aid. On other topics (military intervention, healthcare, education), values are often not the reason people disagree--they disagree for empirical reasons. If you stick to something where it's mostly a values question, people might trust your judgements more.

Comment by aidan-o-gara on EA grants available to individuals (crosspost from LessWrong) · 2019-02-13T00:14:23.099Z · score: 2 (2 votes) · EA · GW

Check out Tyler Cowen's Emergent Ventures.

We want to jumpstart high-reward ideas—moonshots in many cases—that advance prosperity, opportunity, liberty, and well-being. We welcome the unusual and the unorthodox.
Projects will either be fellowships or grants: fellowships involve time in residence at the Mercatus Center in Northern Virginia; grants are one-time or slightly staggered payments to support a project.
Think of the goal of Emergent Ventures as supporting new ideas and projects that are too difficult, too hard to measure, too unusual, too foreign, too small, or…too something to make their way through the usual foundation and philanthropic process.

Here's the first cohort of grant recipients. I think your project would fit what they're looking for, and it's a pretty low cost to apply.

Comment by aidan-o-gara on Will companies meet their animal welfare commitments? · 2019-02-03T23:55:13.018Z · score: 6 (5 votes) · EA · GW

Agreed on both, an article along the lines of "The world's biggest pork producer just broke their animal welfare commitment" seems very valuable and possibly effective as shaming, while "Corporate animal welfare campaigning often fails to deliver" would definitely be counterproductive.

Comment by aidan-o-gara on Will companies meet their animal welfare commitments? · 2019-02-03T18:08:47.305Z · score: 3 (3 votes) · EA · GW

I think Vox's Future Perfect could be a good platform for this--either one of you writing a guest article, or giving Vox the information and letting them write it up. These broken commitments are an interesting news story to cover, Vox's readership is already fairly interested in animal rights, and they could build it into an ongoing series of articles tracking progress. Maybe consider reaching out directly to Kelsey Piper/Dylan Matthews/Vox?

Comment by aidan-o-gara on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-29T23:47:52.209Z · score: 7 (7 votes) · EA · GW

I think I'd challenge this goal. If we're choosing between trying to improve Vox vs trying to discredit Vox, I think EA goals are served better by the former.

1. Vox seems at least somewhat open to change: Dylan Matthews and Ezra Klein seem genuinely pretty EA, they went out on a limb to hire Piper, and they've sacrificed some readership to maintain EA fidelity. Even if they place less-than-ideal priority on EA goals vs. progressivism, profit, etc., they still clearly place some weight on pure EA.

2. We're unlikely to convince Future Perfect's readers that Future Perfect is bad/wrong and we in EA are right. We can convince core EAs to discredit Vox, but that's unnecessary--if you read the EA Forum, your primary source of EA info is not Vox.

Bottom line: non-EAs will continue to read Future Perfect no matter what. So let's make Future Perfect more EA, not less.

Comment by aidan-o-gara on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-29T23:04:07.428Z · score: 3 (3 votes) · EA · GW

Agreed. If you accept the premise that EA should enter popular discourse, most generally informed people should be aware of it, etc., then I think you should like Vox. But if you think EA should be a small elite academic group, not a mass movement, that's another discussion entirely, and maybe you shouldn't like Vox.

Comment by aidan-o-gara on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-28T06:18:08.795Z · score: 17 (12 votes) · EA · GW

3. I have no personal or inside info on Future Perfect, Vox, Dylan Matthews, Ezra Klein, etc. But it seems like they've got a fair bit of respect for the EA movement--they actually care about impact, and they're not trying to discredit or overtake more traditional EA figureheads like MacAskill and Singer.

Therefore I think we should be very respectful towards Vox, and treat them like ingroup members. We have great norms in the EA blogosphere about epistemic modesty, avoiding ad hominem attacks, viewing opposition charitably, etc. that allow us to have much more productive discussions. I think we can extend that relationship to Vox.

Using this piece as an example, if you were criticizing Rob Wiblin's podcasting instead of Vox's writing, I think people might ask you to be more charitable. We're not anti-criticism -- we're absolutely committed to truth and honesty, which means seeking good criticism -- but we also have well-justified trust in the community. We share a common goal, and that makes it really easy to cooperate.

Let's trust Vox like that. It'll make our cooperation more effective, we can help each other achieve our common goal, and, if necessary, we can always take back our trust later.

Comment by aidan-o-gara on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-28T06:05:48.846Z · score: 7 (4 votes) · EA · GW

2. Just throwing it out there: Should EA embrace being apolitical? As in, possible official core virtue of the EA movement proper: Effective Altruism doesn't take sides on controversial political issues, though of course individual EAs are free to.

Robin Hanson's "pulling the rope sideways" analogy has always struck me: In the great society tug-of-war debates on abortion, immigration, and taxes, it's rarely effective to pick a side and pull. First, you're one of many, facing plenty of opposition, making your goal difficult to accomplish. But second, if half the country thinks your goal is bad, it very well might be. On the other hand, pushing sideways is easy: nobody's going to filibuster to prevent you from handing out malaria nets-- everybody thinks it's a good idea.

(This doesn't mean not involving yourself in politics. 80k writes on improving political decision making or becoming a congressional staffer--they're both nonpartisan ways to do good in politics.)

If EA were officially apolitical like this, we would benefit by Hanson's logic: we could more easily achieve our goals without making enemies, and we'd be more likely to be right. But we could also gain credibility and influence in the long run by refusing to enter the political fray.

I think part of EA's success comes from its being an identity label, almost a third party, an ingroup for people who dislike the Red/Blue identity divide. I'd say most EAs (and certainly the EAs who do the most good) identify much more strongly with EA than with any political ideology. That keeps us more dedicated to the ingroup.

But I could imagine an EA failure mode where, a decade from now, Vox is the most popular "EA" platform and the average EA is liberal first, effective altruist second. This happens if EA becomes synonymous with other, more powerful identity labels--kinda how animal rights and environmentalism could be their own identities, but they've mostly been absorbed into the political left.

If apolitical were an official EA virtue, we could easily disown German Lopez on marijuana or Kamala Harris and criminal justice--improving epistemic standards and avoiding making enemies at the same time. Should we adopt it?

Comment by aidan-o-gara on Vox's "Future Perfect" column frequently has flawed journalism · 2019-01-28T05:34:37.914Z · score: 9 (9 votes) · EA · GW

Really valuable post, particularly because EA should be paying more attention to Future Perfect--it's some of EA's biggest mainstream exposure. Some thoughts in different threads:

1. Writing for a general audience is really hard, and I don't think we can expect Vox to maintain the fidelity standards EA is used to. It has to be entertaining, every article has to be accessible to new readers (meaning you can't build up reader expectations over time the way a sequence of blog posts or a book would), and Vox has to write for the audience they have rather than wait for the audience we'd like.

In that light, look at, say, the baby Hitler article. It has to connect to the average Vox reader's existing interests, hence the Ben Shapiro intro. It has to be entertaining, so Matthews digresses onto time travel and The Matrix. Then it has to provide valuable informational content: an intro to moral cluelessness and expected value.

It's pretty tough for one article to do all that, AND seriously critique Great Man history, AND explain the history of the Nazi Party. To me, dropping those isn't shoddy journalism; it's a valuable insight into how to engage your actual readers, not your ideal reader.

Bottom line: people who took the 2018 EA Survey are twice as likely as the average American to hold a bachelor's degree, and 7x more likely to hold a Ph.D. That's why Robin Hanson and GiveWell have been great reading resources so far. But if we actually want EA to go mainstream, we can't rely on econbloggers and think tanks to reach most people. We need easier explanations, and I think Vox provides that well.

...

(P.S. Small matter: Matthews does not say that it's "totally impossible" to act in the face of cluelessness, contrary to what you implied--he says the opposite. And then: "If we know the near-term effects of foiling a nuclear terrorism plot are that millions of people don't die, and don't know what the long-term effects will be, that's still a good reason to foil the plot." That's a great informal explanation. Edit to correct that?)

Comment by aidan-o-gara on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-24T19:26:12.201Z · score: 2 (2 votes) · EA · GW

Fantastic, I completely agree, so I don't think we have any substantive disagreement.

I guess my only remaining question would then be: should your AI predictions ever influence your investing vs. donating behavior? I'd say absolutely not, because you should have an incredibly strong prior that you can't beat the market. If your AI predictions imply that the market is wrong, that's just a mark against your AI predictions.

You seem inclined to agree: The only relevant factor for someone considering donation vs investment is expected future returns. You agree that we shouldn't expect AI companies to generate higher-than-average returns in the long run. Therefore, your choice to invest or donate should be completely independent of your AI beliefs, because no matter your AI predictions, you don't expect AI companies to have higher-than-average future returns.

Would you agree with that?

Comment by aidan-o-gara on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-24T02:21:11.241Z · score: 2 (2 votes) · EA · GW

I think the background assumptions are probably doing a lot of work here. You'd have to go really far into the weeds of AI forecasting to get a good sense of what factors push which directions, but I can come up with a million possible considerations.

Maybe slow takeoff is shortly followed by the end of material need, making any money earned in a slow takeoff scenario far less valuable. Maybe the government nationalizes valuable AI companies. Maybe slow takeoff doesn't really begin for another 50 years. Maybe the profits of AI will genuinely be broadly distributed. Maybe current companies won't be the ones to develop transformative AI. Maybe investing in AI research increases AI x-risks, by speeding up individual companies or causing a profit-driven race dynamic.

It's hard to predict when AI will happen, and it's far harder to translate that into present-day stock-picking advice. If you've got a world-class understanding of the issues and spend a lot of time on it, then you might reasonably believe you can out-predict the market. But beating the market is the only way to generate higher-than-average returns in the long run.

Comment by aidan-o-gara on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-24T02:09:32.228Z · score: 3 (3 votes) · EA · GW

The implicit argument here seems to be that, even if you think typical investment returns are too low to justify saving over donating, you should still consider investing in AI because it has higher growth potential.

I totally might be misunderstanding your point, but here's the contradiction as I see it. If you believe (A) the S&P500 doesn't give high enough returns to justify investing instead of donations, and (B) AI research companies are not currently undervalued (i.e., they have roughly the same net expected future returns as any other company), then you cannot believe that (C) AI stock is a better investment opportunity than any other.

I completely agree that many slow-takeoff scenarios would make tech stocks skyrocket. But unless you're hoping to predict the future of AI better than the market, I'd say the expected value of AI is already reflected in tech stock prices.

To invest in AI companies but not the S&P500 for altruistic reasons, I think you have to believe AI companies are currently undervalued.
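To make the "already reflected in prices" point concrete, here's a toy sketch with made-up numbers: once the market's probability of a takeoff scenario is baked into today's price, buying the stock only earns excess expected returns if your probability estimate is better than the market's.

```python
# Toy two-scenario pricing example (all numbers are assumptions for illustration).
p_takeoff = 0.30          # market-implied probability of the high-payoff AI scenario
value_if_takeoff = 500.0  # payoff per share in that scenario
value_otherwise = 100.0   # payoff per share otherwise
required_return = 0.07    # normal return the market demands

expected_payoff = p_takeoff * value_if_takeoff + (1 - p_takeoff) * value_otherwise
price_today = expected_payoff / (1 + required_return)
print(f"Price today: {price_today:.2f}")
print(f"Expected return at the market's beliefs: {expected_payoff / price_today - 1:.1%}")  # 7.0%

# You only expect to beat the market if your probability differs from the market's.
my_p = 0.50  # assumed personal forecast
my_expected_payoff = my_p * value_if_takeoff + (1 - my_p) * value_otherwise
print(f"Expected return under your forecast: {my_expected_payoff / price_today - 1:.1%}")  # ~45.9%
```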

Comment by aidan-o-gara on If slow-takeoff AGI is somewhat likely, don't give now · 2019-01-24T00:35:32.857Z · score: 13 (8 votes) · EA · GW

I like the general idea that AI timelines matter for all altruists, but I really don't think it's a good idea to try to "beat the market" like this. The current price of these companies is already determined by cutthroat competition between hyper-informed investors. If Warren Buffett or Goldman Sachs thinks the market is undervaluing these AI companies, then they'll spend billions bidding up the stock price until they're no longer undervalued.

Thinking that Google and Co are going to outperform the S&P500 over the next few decades might not sound like a super bold belief--but it should. It assumes that you're capable of making better predictions than the aggregate stock market. Don't bet on beating markets.