Posts

What If You Could Save the World in Your Free Time? 2015-07-07T01:14:58.162Z
EA Handbook Hard Copies for Local Hubs 2015-04-29T17:03:41.561Z
America’s Provincialism Ratio 2015-04-19T20:12:50.399Z

Comments

Comment by mhpage on Ongoing lawsuit naming "future generations" as plaintiffs; advice sought for how to investigate · 2018-01-25T03:24:43.549Z · EA · GW

Related (and perhaps of interest to EAs looking for rhetorical hooks): there are a bunch of constitutions (not the US) that recognize the rights of future generations. I believe they're primarily modeled after South Africa's constitution (see http://www.fdsd.org/ideas/the-south-african-constitution-gives-people-the-right-to-sustainable-development/ & https://en.wikipedia.org/wiki/Constitution_of_South_Africa).

Comment by mhpage on Ongoing lawsuit naming "future generations" as plaintiffs; advice sought for how to investigate · 2018-01-25T03:21:02.971Z · EA · GW

I haven't read about this case, but some context: This has been an issue in environmental cases for a while. It can manifest in different ways, including "standing," i.e., who has the ability to bring lawsuits, and what types of injuries are actionable. If you google some combination of "environmental law" & standing & future generations you'll find references to this literature, e.g.: https://scholarship.law.uc.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=1272&context=fac_pubs

Last I checked, this was the key case in which a court (from the Philippines) actually recognized a right of future generations: http://heinonline.org/HOL/LandingPage?handle=hein.journals/gintenlr6&div=29&id=&page=

Also, people often list parties as plaintiffs for PR reasons, even though there's basically no chance that a court would recognize that the named party has legal standing.

Comment by mhpage on How to get a new cause into EA · 2018-01-10T12:04:57.853Z · EA · GW

This comment is not directly related to your post: I don't think the long-run future should be viewed as a cause area. It's simply where most sentient beings live (or might live), and therefore it's a potential treasure trove of cause areas (or problems) that should be mined. Misaligned AI leading to an existential catastrophe is an example of a problem that impacts the long-run future, but there are so, so many more. Pandemic risk is a distinct problem. Indeed, there are so many more problems even if you're just thinking about the possible impacts of AI.

Comment by mhpage on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-27T02:24:45.697Z · EA · GW

Variant on this idea: I'd encourage a high status person and a low status person, both of whom regularly post on the EA Forum, to trade accounts for a period of time and see how that impacts their likes/dislikes.

Variant on that idea: No one should actually do this, but several people should talk about it, thereby making everyone paranoid about whether they're a part of a social experiment (and of course the response of the paranoid person would be to actually vote based on the content of the article).

Comment by mhpage on Why & How to Make Progress on Diversity & Inclusion in EA · 2017-10-26T22:51:34.001Z · EA · GW

I strongly agree. Put another way, I suspect we, as a community, are bad at assessing talent. If true, that manifests as both a diversity problem and a suboptimal distribution of talent, but the latter might not be as visible to us.

My guess re the mechanism: Because we don't have formal credentials that reflect relevant ability, we rely heavily on reputation and intuition. Both sources of evidence allow lots of biases to creep in.

My advice would be:

  1. When assessing someone's talent, focus on the content of what they're saying/writing, not the general feeling you get from them.

  2. When discussing how talented someone is, always explain the basis of your view (e.g., I read a paper they wrote; or Bob told me).

Comment by mhpage on EA Survey 2017 Series: Have EA Priorities Changed Over Time? · 2017-10-06T23:46:00.900Z · EA · GW

Thanks for doing these analyses. I find them very interesting.

Two relatively minor points, which I'm making here only because they refer to something I've seen a number of times, and I worry it reflects a more-fundamental misunderstanding within the EA community:

  1. I don't think AI is a "cause area."
  2. I don't think there will be a non-AI far future.

Re the first point, people use "cause area" differently, but I don't think AI -- in its entirety -- fits any of the usages. The alignment/control problem does: it's a problem we can make progress on, like climate change or pandemic risk. But that's not all of what EAs are doing (or should be doing) with respect to AI.

This relates to the second point: I think AI will impact nearly every aspect of the long-run future. Accordingly, anyone who cares about positively impacting the long-run future should, to some extent, care about AI.

So although there are one or two distinct global risks relating to AI, my preferred framing of AI generally is as an unusually powerful and tractable lever on the shape of the long-term future. I actually think there's a LOT of low-hanging fruit (or near-surface root vegetables) involving AI and the long-term future, and I'd love to see more EAs foraging those carrots.

Comment by mhpage on EAGx Relaunch · 2017-07-26T17:49:07.440Z · EA · GW

Max's point can be generalized to mean that the "talent" vs. "funding" constraint framing misses the real bottleneck, which is institutions that can effectively put more money and talent to work. We of course need good people to run those institutions, but if you gave me a room full of good people, I couldn't just put them to work.

Comment by mhpage on Some Thoughts on Public Discourse · 2017-02-24T10:59:22.036Z · EA · GW

"...and I wonder how the next generation of highly informed, engaged critics (alluded to above) is supposed to develop if all substantive conversations are happening offline."

This is my concern (which is not to say it's Open Phil's responsibility to solve it).

Comment by mhpage on CEA is Fundraising! (Winter 2016) · 2016-12-09T10:44:00.535Z · EA · GW

Hey Josh,

As a preliminary matter, I assume you read the fundraising document linked in this post, but for those reading this comment who haven’t, I think it’s a good indication of the level of transparency and self-evaluation we intend to have going forward. I also think it addresses some of the concerns you raise.

I agree with much of what you say, but as you note, I think we’ve already taken steps toward correcting many of these problems. Regarding metrics on the effective altruism community, you are correct that we need to do more here, and we intend to. Before the reorganization, this responsibility didn’t fall squarely within any team’s jurisdiction, which was part of the problem. (For example, Giving What We Can collected a lot of this data for a subset of the effective altruism community.) This is a priority for us.

Regarding measuring CEA activities, internally, we test and measure everything (particularly with respect to community and outreach activities). We measure user engagement with our content (including the cause prioritization tool), the newsletter, Doing Good Better, Facebook marketing, etc., trying to identify where we can most cost-effectively get people most deeply engaged. As we recently did with EAG and EAGx, we’ll then periodically share our findings with the effective altruism community. We will soon share our review of the Pareto Fellowship, for example.

Regarding transparency, our monthly updates, project evaluations (e.g., for EAG and EAGx, and the forthcoming evaluation of the Pareto Fellowship), and the fundraising document linked in this post are indicative of the approach we intend to take going forward. Creating all of this content is costly, and so while I agree that transparency is important, it’s not trivially true that more is always better. We’re trying to strike the right balance and will be very interested in others’ views about whether we’ve succeeded.

Lastly, regarding centralized decision-making, that was the primary purpose of the reorganization. As we note in the fundraising document, we’re still in the process of evaluating current projects. I don’t think the EA Concepts project is to the contrary: that was simply an output of the research team, which it put together in a few weeks, rather than a new project like Giving What We Can or the Pareto Fellowship (the confusion might be the result of using "project" in different ways). Whether we invest much more in that project going forward will depend on the reception and use of this minimum version.

Regards, Michael

Comment by mhpage on CEA is Fundraising! (Winter 2016) · 2016-12-07T20:55:38.573Z · EA · GW

This document is effectively CEA's year-end review and plans for next year (which I would expect to be relevant to people who visit this forum). We could literally delete a few sentences, and it would cease to be a fundraising document at all.

Comment by mhpage on A new reference site: Effective Altruism Concepts · 2016-12-06T09:01:25.247Z · EA · GW

Fixed, at least with respect to adding and referencing the Hurford post (more might also be needed). Please keep such suggestions forthcoming.

Comment by mhpage on Should effective altruism have a norm against donating to employers? · 2016-12-05T21:59:06.790Z · EA · GW

This came out of my pleasure budget.

Comment by mhpage on Should donors make commitments about future donations? · 2016-08-31T09:30:50.647Z · EA · GW

As you explain, the key tradeoff is organizational stability vs. donor flexibility to chase high-impact opportunities. There are a couple of different ways to strike the right balance. For example, organizations can try to secure long-term commitments sufficient to cover a set percentage of their projected budget but no more, e.g., 100% one year out; 50% two years out; 25% three years out [disclaimer: these numbers are illustrative, not carefully considered].
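
To make the arithmetic concrete, here is a minimal sketch (my own illustration, with hypothetical budget figures and the illustrative percentages above, assuming a current year of 2016) of what such a staggered coverage schedule would imply for an organization's advance-commitment targets:

```python
# Illustrative sketch only: how much of each future year's projected budget an
# organization would seek to lock in via long-term donor commitments, under a
# staggered schedule (100% one year out, 50% two years out, 25% three years out).
# All figures are hypothetical.

projected_budget = {2017: 1_000_000, 2018: 1_100_000, 2019: 1_200_000}

# Fraction of each future year's budget to cover with advance commitments.
coverage_schedule = {1: 1.00, 2: 0.50, 3: 0.25}  # years out -> fraction

current_year = 2016
for year, budget in sorted(projected_budget.items()):
    years_out = year - current_year
    fraction = coverage_schedule.get(years_out, 0.0)
    target = budget * fraction
    print(f"{year}: seek ${target:,.0f} in advance commitments "
          f"({fraction:.0%} of projected ${budget:,.0f})")
```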

Another possibility is for donors to commit to donating a certain amount in the future but not to where. For example, imagine EA organizations x, y, and z are funded in significant part by donors a, b, and c. The uncertainty for each organization comes from both (i) how much a, b, and c will donate in the future (e.g., for how long do they plan to earn to give?), and (ii) to which organization (x, y, or z) will they donate. The option value for the donors comes primarily* from (ii): the flexibility to donate more to x, y, or z depending on how good they look relative to the others. And I suspect much (if not most) of the uncertainty for x, y, and z comes from (i): not knowing how much "EA money" there will be in the future. If that's the case, we can get most of the good with little of the bad via general commitments to donate, without naming the beneficiary. One way to accomplish this would be an EA fund.

* I say "primarily" because there is option value in being able to switch from earning to give to direct work, for example.

Comment by mhpage on .impact's pivot to focus projects · 2016-04-30T12:56:33.811Z · EA · GW

I'm looking into this on behalf of CEA/GWWC. Anyone else working on something similar should definitely message me (michael.page@centreforeffectivealtruism.org).

Comment by mhpage on Causality in altruism · 2016-03-04T15:39:03.992Z · EA · GW

If the reason we want to track impact is to guide/assess behavior, then I think counting foreseeable/intended counterfactual impact is the right approach. I'm not bothered by the fact that we can't add up everyone's impact. Is there any reason that would be important to do?

In the off-chance it's helpful, here's some legal jargon that deals with this issue: If a result would not have occurred without Person X's action, then Person X is the "but for" cause of the result. That is so even if the result also would not have occurred without Person Y's action. Under these circumstances, either Person X or Person Y can (usually) be sued and held liable for the full amount of damages (although that person might be able later to sue the other and force them to share in the costs).

Because "but for" causation chains can be traced out indefinitely, we generally only hold one accountable for the reasonably foreseeable results of their actions. This limitation on liability is called "proximate cause." So Person X proximately caused the result if their actions were the but-for cause of the result, and the result was reasonably foreseeable.
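
To make the structure of that test explicit, here is a minimal sketch (my own illustration, not part of the legal doctrine's phrasing and not legal advice) of the two conditions as simple predicates:

```python
# Illustrative sketch of the two-step test described above: an actor proximately
# caused a result if (1) their action was a but-for cause of it, and (2) the
# result was reasonably foreseeable.

def is_but_for_cause(result_with_action: bool, result_without_action: bool) -> bool:
    """The action is a but-for cause if the result would not have occurred without it."""
    return result_with_action and not result_without_action

def is_proximate_cause(but_for: bool, reasonably_foreseeable: bool) -> bool:
    """Liability generally attaches only when both conditions hold."""
    return but_for and reasonably_foreseeable

# Example: Person X's action was necessary for the harm, and the harm was a
# reasonably foreseeable consequence, so X can (usually) be held liable.
print(is_proximate_cause(is_but_for_cause(True, False), True))  # True
```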

I think the policy reasons underlying this approach (to guide behavior) probably apply here as well.

Comment by mhpage on Some considerations for different ways to reduce x-risk · 2016-02-05T17:48:46.250Z · EA · GW

The downvoting throughout this thread looks funny. Absent comments, I'd view it as a weak signal.

Comment by mhpage on Against segregating EAs · 2016-01-22T15:12:43.565Z · EA · GW

Agreed. Someone earning to give doesn't meet the literal characterization of a "full-time" EA.

How about fully aligned and partially aligned (and any other modifier to "aligned" that might be descriptive)?

Comment by mhpage on Against segregating EAs · 2016-01-21T21:06:19.381Z · EA · GW

In thinking about terminology, it might be useful to distinguish (i) magnitude of impact and (ii) value alignment. There are a lot of wealthy individuals who've had an enormous impact (and should be applauded for it), but who correctly are not described as "EA." And there are individuals who are extremely value aligned with the imaginary prototypical EA (or range of prototypical EAs) but whose impact might be quite small, through no fault of their own. Incidentally, I think those in the latter category are better community leaders than those in the former.

Edit: I'm not suggesting that either group should be termed anything; just that the current terminology seems to elide these groups.

Comment by mhpage on Accomplishments Open Thread · 2016-01-09T21:30:06.182Z · EA · GW

I'll embrace the awkwardness of doing this (and this is more than the past month):

1) I printed and distributed about 1050 EA Handbooks to about a dozen different countries.

2) I believe I am the but-for cause of about five new EAs, one of whom is a professional poker player with a significant social media following who has been donating a percentage of her major tournament wins.

3) I donated $195k this calendar year.

Comment by mhpage on The Effective Altruism Newsletter & Open Thread - 15 December 2015 · 2015-12-17T21:18:29.061Z · EA · GW

I'm curious as a descriptive matter whether people have been downvoting due to disagreement or something else. Why, for example, do so many fundraising announcements get downvotes? I'm not certain we need a must-comment policy, but the mere fact that I don't know what a downvote means certainly impacts its signalling value.

Comment by mhpage on Promoting Effective Giving Using List-Style Articles · 2015-12-16T23:49:01.414Z · EA · GW

I see the downvoting trend as a symptom of some potentially problematic community dynamics. I think this warrants a top-level post so we can hash out what the purpose, value, and risks are of downvotes.

Comment by mhpage on Burnout and self-care · 2015-10-23T17:32:25.299Z · EA · GW

Thanks, Julia. You make an important point here that I think is often lost in discussion of the "how much is enough" issue. The issue often is framed in terms of a conflict between one's own interests and the world's interests (e.g., ice cream for me or a bednet for someone else). But when viewed in terms of burnout/sustainability, the conflict disappears: allowing oneself to eat ice cream every so often might actually be in the world's best interest. Even a means machine requires oil.

Comment by mhpage on Effective Altruism Merchandise Ideas · 2015-10-21T16:58:32.697Z · EA · GW

The people who ask me about my shirt generally have never heard of effective altruism, but they are sufficiently interested in what "effective altruism" literally suggests to want more information.

Comment by mhpage on Effective Altruism Merchandise Ideas · 2015-10-21T13:01:41.148Z · EA · GW

I wear the t-shirt from EA Global (San Francisco) all the time. I love the design and actually find it to be a pretty effective way to start a conversation about EA, presumably because only those with interest in the idea ask me about it. I think a more-involved logo might be viewed as more confrontational and therefore less likely to elicit inquiries.

Comment by mhpage on A Note on Framing Criticisms of Effective Altruism · 2015-07-24T19:44:09.901Z · EA · GW

I don't get that criticism. I can always donate to help you do direct work. I don't see any way to criticize donating per se other than through non-consequentialist reasoning.

Edit: Unless they're criticizing the ratio of direct work to donations.

Comment by mhpage on What If You Could Save the World in Your Free Time? · 2015-07-07T21:24:10.202Z · EA · GW

I appreciate the feedback. I also shoot down most of my ideas, but I thought this one was worth sharing. I don't want to be in the position of "defending" the viability of the idea, but I will at least attempt to clarify it:

I did not imagine this ultimately catering primarily to the EA community, which is why I didn't think of .impact or impact certificates as alternatives. I imagined a widely used site like Craigslist on which people advertised random skills and needs. I didn't imagine an explicit "EA angle" other than that the goal was to get the most out of people's time and encourage them to direct the proceeds to the best charities.

The idea behind creating a new site was twofold. First, my perception (which might well be wrong) was that there is not presently a market for people to sell a few hours of their time a week. There is certainly a market for people who want to sell their talents as a closer-to-full-time profession. And there might be a market for people who want to sell a few hours on the weekend for certain services. But I didn't think what I envisioned existed. Again, I could be wrong (and it might be that voolla.org is trying to do exactly that).

Second, I thought it at least possible that the site might develop momentum as a consequence of the charity angle. In other words, if the same service is offered for the same price on a regular commercial site and on this site, why not use this site and help the world at the same time? Relatedly, users of the site would be able to signal their charitable work.

Comment by mhpage on What If You Could Save the World in Your Free Time? · 2015-07-07T13:05:17.775Z · EA · GW

Yes, I meant to include a shout-out to .impact. Consider this a belated one.

Comment by mhpage on What If You Could Save the World in Your Free Time? · 2015-07-07T13:03:56.690Z · EA · GW

This IS quite similar! Thanks. Will look further into it.

Comment by mhpage on Revamping Existing Charities · 2015-06-24T17:43:51.206Z · EA · GW

Of course. But as I understand it, the hypothesis here is that, given (i) the amount of money that will invariably go to sub-optimal charities and (ii) the likely room for substantial improvements in sub-optimal charities (see DavidNash's comment), one (arguably) might get more bang for one's buck trying to fix sub-optimal charities. I think it's a plausible hypothesis.

I'm doubtful that one can make GiveWell charities substantially more effective. Those charities are already using the EA lens. It's the ones that aren't using the EA lens for which big improvements might be made at low cost.

EDIT: I suppose I'm assuming that's the OP's hypothesis. I could be wrong.

Comment by mhpage on Revamping Existing Charities · 2015-06-24T03:35:50.229Z · EA · GW

This is true with respect to where a rational, EA-inclined person chooses to donate, but I think you're taking it too far here. Even in the best case scenario, there will be MANY people who donate for non-EA reasons. Many of those people will donate to existing, well-known charities such as the Red Cross. If we can make the Red Cross more effective, I can't see how that would not be a net good.

Comment by mhpage on Revamping Existing Charities · 2015-06-24T01:25:20.484Z · EA · GW

I am very intrigued by the potential upside of this idea. As I see it, one can change charity culture by changing consumer demand (generally what GiveWell does), which will eventually lead to a change in product. Alternatively, one can change charity culture by changing the product directly, on the assumption that many consumers care more about the brand than the product.

Would the service be free to the nonprofits? Would it help nonprofits conduct studies to assess their impact?

Anecdata: I have a friend who works at a big-name nonprofit who has been trying to find exactly this service.

Comment by mhpage on The career questions thread · 2015-06-20T17:34:46.204Z · EA · GW

I've been thinking about how to weigh the direct impact of one's career (e.g., donations) against the impact of being a model for others. For example, imagine option A is a conventional, high-paying salaried job, and option B is something less conventional (e.g., a startup) with a higher expected (direct) impact value. It's not obvious to me that option B has a higher expected impact value when one takes into account the potential to be a model for others. In other words, I think there might be a unique kind of value in doing good in a way that others can emulate. I'm curious whether you agree with this, and if so, how one might factor it into the analysis.

Comment by mhpage on [Discussion] What have you found great value in not doing? · 2015-06-14T21:56:30.329Z · EA · GW

Haha, don't be silly, I stopped eating solid food a long time ago.

[Was just joking about vegetables.]

Comment by mhpage on [Discussion] What have you found great value in not doing? · 2015-06-14T17:12:46.683Z · EA · GW

I didn't derive sufficient immediate pleasure from reading the news. But like eating one's vegetables, I thought it was justified by long-term returns.

(Hoping someone now provides a reason I don't have to eat my vegetables.)

Comment by mhpage on I am Nate Soares, AMA! · 2015-06-11T22:23:31.605Z · EA · GW

Indeed, that is what I meant.

I was assuming that MIRI's position is that it presently is the most-effective recipient of funds, but that assumption might not be correct (which would itself be quite interesting).

Comment by mhpage on I am Nate Soares, AMA! · 2015-06-11T01:43:16.922Z · EA · GW

A modified version of this question: Assuming MIRI's goal is saving the world (and not MIRI), at what funding level would MIRI recommend giving elsewhere, and where would it recommend giving?

Comment by mhpage on [Discussion] What have you found great value in not doing? · 2015-06-08T23:40:48.440Z · EA · GW

Thanks, Ryan, but years of reading the news have left me unable to process such a long, thoughtful piece about how years of reading the news will leave me unable to process long, thoughtful pieces.

Comment by mhpage on [Discussion] What have you found great value in not doing? · 2015-06-08T20:08:48.182Z · EA · GW

I love it when reason points in a direction I already wanted to go but mistakenly thought it unreasonable. Thanks.

Comment by mhpage on [Discussion] What have you found great value in not doing? · 2015-06-08T19:44:12.767Z · EA · GW

What's the argument for not consuming news? I don't necessarily disagree, but it's not self-evident to me.

Comment by mhpage on You Could be the Warren Buffett of Social Investing · 2015-06-05T15:58:41.977Z · EA · GW

Here's an EA forum post on the second (Harvard Law) article: http://effective-altruism.com/ea/8f/lawyering_to_give/

Although it is well-intentioned, I think the Harvard Law article is dangerous. The legal community is potentially pretty low-hanging fruit for EA recruitment: it contains a lot of people who make a lot of money and who generally make misguided but well-intentioned charitable decisions, both regarding how to donate their money and how to use their talents.

Changing the culture of this community will be complicated, however. Early missteps could be extremely costly to the extent they give the community the wrong initial perception of EA-style thinking. In short, the stakes are high, and although I commend those who want to try to make inroads into the community, I suggest treading cautiously.

Comment by mhpage on Lawyering to Give · 2015-05-29T01:12:58.659Z · EA · GW

Once again, I am quite late to the party, but for posterity's sake, I just want to add a few points: First, this is exactly what I do, and it's just not that hard. Second, I was formerly a public interest lawyer (doing impact litigation) and believe the skill set required for that job is very similar to the skill set required for my current job (commercial litigation). Lastly, I am doing what I am doing on the belief that it does the most good -- I've considered the alternatives! If anyone seriously believes I'm mistaken, I'd very much like to hear from them.

Comment by mhpage on Effective Altruism as an intensional movement · 2015-05-26T16:17:09.847Z · EA · GW

I've noticed that what "EA is" seems to vary depending on the audience and, specifically, why it is that the audience is not already on board. For example, if one's objection to EA is that one values local lives over non-local lives, or that effects don't matter (or are trumped by other considerations), then EA is an ethical framework. But many people are on board with the basic ethical precepts but simply don't act in accordance with them. For those people, EA seems to be a support group for rejecting cognitive dissonance.

Comment by mhpage on How important is marginal earning to give? · 2015-05-23T01:41:43.866Z · EA · GW

Thanks, Ryan. That's all very helpful.

(And the MIRI reference was a superintelligent AI joke.)

Comment by mhpage on How important is marginal earning to give? · 2015-05-22T18:18:19.236Z · EA · GW

I'm thinking more along the lines of mentors for the mentors, and I think one solution would be a platform on which to crowdsource ideas for individuals' ten-year strategic plans. In a perfect world, one would be able to donate one's talents (in addition to one's money) to the EA cause, which could then be strategically deployed by an all-seeing EA director. Maybe MIRI could work on that.

Comment by mhpage on How important is marginal earning to give? · 2015-05-22T14:14:59.277Z · EA · GW

Absolutely re personal factors. "Outsource" is an overstatement.

And no, I don't mean decisions like whether to be a vegetarian (which, as I've noted elsewhere, presents a false dichotomy) or whether to floss, which can be generically answered.

I mean a personalized version of what 80,000 hours does for people mid-career. Imagine several people in their mid-30s to -40s--a USAID political appointee; a law firm partner; a data scientist working in the healthcare field--who have decided they are willing to make significant lifestyle changes to better the world. What should they do? This seems to be a very different inquiry than it is for an undergrad. And for some people, a lot turns on it--millions of dollars. Given the amount at stake, it seems like a decision that should be taken just as seriously by the EA community as how an EA organization should spend millions of dollars.

Comment by mhpage on How important is marginal earning to give? · 2015-05-21T20:50:51.639Z · EA · GW

I love the idea of outsourcing my donation decisions to someone who is much more knowledgeable than I am about how to be most effective. An individual might be preferable to an organization for reasons of flexibility. Is anyone actually doing this -- e.g., accepting others' EtG money?

In fact, I'd outsource all kinds of decisions to the smartest, most well-informed, most value-aligned person I could find. Why on earth would I trust myself to make major life decisions if I'm primarily motivated by altruistic considerations?

Comment by mhpage on Should I be vegan? · 2015-05-19T17:32:13.507Z · EA · GW

The trade-off argument is right as far as it goes, but that might not be as far as we think: the metaphor of the "willpower points" seems problematic. As MichaelDickens and Jess note, many lifestyle changes have initial start-up costs but no ongoing costs. And many things we think will have ongoing costs do not (see, e.g., studies showing more money and more things don't on average make us happier; conversely, less money and fewer things might not make us less happy). An earning-to-give investment banker might use the trade-off logic to explain why she is not selling her sports car for a Honda Civic, and while that might be right in some cases, I think more often it would be wrong. Point being, it would be a shame if we used the trade-off argument to avoid trying lifestyle changes that, long term, might have no (or small) ongoing costs to our quality of life.

More generally, diet is not a binary choice. Avoid animal products when it's convenient; don't when it's inconvenient. Over time, you might learn it's not as inconvenient as you thought.

Comment by mhpage on Should I be vegan? · 2015-05-17T20:34:13.191Z · EA · GW

I use the recycling analogy when talking to people about this issue. I consider myself to be one-who-recycles, but if I have a bottle in my hand and there's nowhere convenient to recycle it, I'll throw it away. Holding onto that bottle all day because I've decided I'm a categorical recycler seems kind of silly. I treat food the same way.

Regarding your broader point re consistency, my guess is that we way over-emphasize the effect of diet over other relatively cost-less things we can do to make the world a better place -- in large part because there are organized social movements around diet. That of course doesn't necessarily mean we should eat more animal products but rather that we should try to identify other low-hanging-fruit means of improving the world.

Comment by mhpage on Should I be vegan? · 2015-05-17T16:35:50.236Z · EA · GW

Wonderful essay. Thanks, Jess. A few responses:

(i) It's not clear to me that the vegan-vegetarian distinction makes sense, as I believe, for example, that consuming eggs or milk can be more harmful (in terms of animal suffering) than certain forms of meat consumption.

(ii) Related to (i) (and to Paul_Christiano's point re "other ways to make your life worse to make the world better"), other than for signalling/heuristic reasons, I don't think being categorically vegan/vegetarian is all that important. I believe that reducing animal products in my diet is always a good thing. I also believe that not buying coffee at coffee shops and, instead, donating the money to an animal-welfare organization is always a good thing. But I don't make the latter a categorical life philosophy. For that reason, I treat my diet just like every other facet of my life: I try to understand the consequences of my actions, identify the ethically ideal direction, and move in that direction wherever I reasonably can, recognizing that I am a deeply imperfect ethical actor.

(iii) Soylent is the solution to all!! It's now vegan, good for you, cheap, etc. I'd consume it in place of most meals even if I had no regard for animal welfare.

Comment by mhpage on You are a Lottery Ticket · 2015-05-11T14:38:33.396Z · EA · GW

Very interesting, Ben. Thanks for posting.

Here's an idea indirectly related to your article: The EA community has an incredible amount of intellectual talent. And it is unusual as far as communities go in that everyone's motive to make money is selfless. For that reason, I am indifferent to whether I make a million dollars all by myself or whether I make it with the help of 40 other people (aside from differences in the initial investment). Given that, isn't the EA community uniquely positioned to crowdsource a business idea, fund that idea with an EA-friendly VC, hire EA types to run the business, and then give the vast majority of the returns (if any) to EA causes? Would it be a good investment for EA Ventures, for example, to organize an entrepreneurial think tank?