Posts

Cornelius's Shortform 2020-06-19T17:19:03.368Z

Comments

Comment by cornelius on Cornelius's Shortform · 2020-06-19T17:19:03.678Z · EA · GW

Are there any sites set up to gamify your donations? I rather liked how the old GWWC site had little token pictures next to the organizations you donated to (it vaguely felt like a "collect them all" game), along with the pie-chart breakdown and other nifty visualizations. The new pledge dashboard over at effectivealtruism.org lacks all that and, for me, has reduced the pleasure I was taking in organizing, tracking, and thinking about my donation strategies. I can understand that some people prefer the simplification, but I don't, so are there any alternatives that people like me, who prefer a more gamified, visualization-rich approach, can use?

Comment by cornelius on [Linkpost] - Mitigation versus Supression for COVID-19 · 2020-03-18T18:52:18.487Z · EA · GW

Worth pointing out that some academics think the parameters used in the Imperial model were too negative, based on the real-world data we have. See Bill Gates's take on it:

Fortunately it appears the parameters used in that model were too negative. The experience in China is the most critical data we have. They did their "shut down" and were able to reduce the number of cases. They are testing widely so they see rebounds immediately and so far there have not been a lot. They avoided widespread infection. The Imperial model does not match this experience. Models are only as good as the assumptions put into them. People are working on models that match what we are seeing more closely and they will become a key tool. A group called Institute for Disease Modelling that I fund is one of the groups working with others on this. ~ Bill Gates from his Reddit AMA

Comment by cornelius on Empirical data on value drift · 2018-08-17T21:37:07.718Z · EA · GW

To flip this one on its head: I think that, counterfactually, for most EAs it could actually be "better" for the world at large to date non-EAs, because of the drastic increase in impact that can typically be expected if you convince your lover of EA - which, on balance, seems more likely to me than value drift from dating a non-EA, if you are in fact a committed EA. However, I think that once a relationship exceeds 2 years, value drift becomes far more of an issue:

  • < 2 year relationship. Value drift potential = low. Convert lover to EA potential = very high
  • 2 year relationship. Value drift potential = medium. Convert lover to EA potential = very low if it didn't happen in the first 2 years
  • 5 year relationship. Value drift potential = high. Convert lover to EA potential = extremely low if it didn't happen in the first 5 years

Suffice it to say, my current girlfriend is now much more EA-minded, and my ex has messaged me to say she still eats less meat even after we stopped dating (I'll take her word for it). I know my own behaviour has been very strongly influenced by the people I've dated, so there's no reason to assume the reverse doesn't happen.

Fun fact: I use this as an excuse to argue with my girlfriend that clearly I should be dating many, many girls short-term, for obvious EA reasons.

Comment by cornelius on 80,000 Hours: EA and Highly Political Causes · 2017-07-28T21:29:06.508Z · EA · GW

that's a tribal war between economists and epidemiologists?

What?

I guess you aren't up to speed with the worm wars. Things have gotten pretty tribal here, with Twitter wars between respected academics (made worse by a viral BuzzFeed article that arguably politicized the issue...), but nobody (to date) would argue EAs should stay out of deworming altogether because of that.

On the contrary, precisely because of all this shit, I'd think we need more EAs working on deworming.

Of course, in the case of deworming it seems clearer that throwing in EAs will lead to a better outcome. This isn't nearly as clear when it comes to politics, so I am with you that EAs should be more wary when it comes to recommending political/politicized work. Either way, I think ozymandias's point was that just as we don't tell EAs in deworming to leave the sinking ship, it also seems absurd to have a blanket ban on EA political/politicized recommendations. You don't want a blanket ban and don't mind EA endorsing political charities, because, as you've said, you don't mind your favourite immigration charity being recommended. So the argument between you and ozymandias seems to mostly be about "to what degree."

And neither of you has actually operationalized your stance on "to what degree," which, in my view, is why the argument between the two of you dwindled into the void.

Comment by cornelius on If you want to disagree with effective altruism, you need to disagree one of these three claims · 2017-07-13T09:04:20.662Z · EA · GW

I see every day the devastating economic harm that organizations like the Against Malaria Foundation wreak on communities.

Then make a series of videos about that instead, if it's so prevalent. It would serve to undermine GiveWell far more and strengthen your credibility.

Your video against GiveWell does not address or debunk any of GiveWell's evidence. It's a philosophical treatise on GiveWell's methods, not an evidence-based one. Arguing by analogy from your own experience is not evidence. I've been robbed 3 times living in Vancouver and yet zero times in Africa, despite living in Namibia/South Africa for most of my life. This does not, however, entail that Vancouver is more dangerous; I in fact have near-zero evidence to back up the claim that Vancouver is more dangerous.

All of your methodological objections (and far stronger anti-EA arguments) were systematically raised in Iason Gabriel's piece on criticisms of effective altruism. And all of these criticisms were systematically responded to, and found lacking, in Halstead et al.'s defense paper.

I'd highly recommend reading both of these. They are both pretty badass.

Comment by cornelius on Getting to the Mainstream · 2017-06-27T23:09:34.588Z · EA · GW

For a long time I've seen things this way:

  • GiveWell: emphasizes effectiveness: the logic pull
  • TLYCS: emphasizes altruism: the emotion pull
  • GWWC: emphasizes the pledge: the act that unifies us as a common movement (or I think+feel it does)

One cute EA family.

Comment by Cornelius on [deleted post] 2017-05-19T20:09:36.966Z

We have found this exceptionally difficult due to the diversity of GFI’s activities and the particularly unclear counterfactuals.

Perhaps I am not understanding, but isn't it possible to simplify your model by homing in on one particular thing GFI is doing and pretending that a donation goes towards only that? Oxfam's impact is notoriously difficult to model (too big, too many counterfactuals), but as soon as you look only at their disaster management programs (where they've done RCTs to showcase effectiveness), suddenly we have far better cost-effectiveness assurance. This approach wouldn't grant a cost-effectiveness figure for all of GFI, but it would for at least one of their initiatives. Doing this should also drastically simplify your counterfactuals.

I've read the full report on GFI by ACE. Both it and this post suggest to me that a broad capture-everything approach is being taken by both ACE and OPP. I don't understand. Why do I not see a systematic list of all of GFI's projects and activities, both on ACE's website and here, followed by an incremental systematic review of each one in isolation? I realize I likely sound like an obnoxious physicist encountering a new subject, so do note that I am just confused. This is far from my area of expertise.

However, this approach is a bit silly because it does not model the acceleration of research: If there are no other donors in the field, then our donation is futile because £10,000 will not fund the entire effort required.

Could you explain this to me more clearly, please? With some stats as an example it'll likely be much clearer. Looking at the development of the Impossible Burger seems a fair phenomenon to base GFI's model on, at least for now, and at least insofar as it is being used to model a GFI donation's counterfactual impact in supporting similar products GFI is trying to push to market. I don't understand why the approach is silly just because $10,000 wouldn't fund the entire effort, or how this is tied to the acceleration of research.

Regarding acceleration dynamics, then: isn't it best to just model based on the most pessimistic, conservative curve? It makes sense to me that this would be the diminishing-returns one. This also falls in line with what I know about clean meat. If we eventually do need to simulate all elements of meat (we might as well assume we do, for the sake of being conservative), we'll have to go beyond merely the scaffolding and growth-medium problems and also include an artificial blood circulation system for the meat being grown. No such system yet exists, and it seems reasonable to suspect that the more precisely we want to simulate meat, the more our scientific problems rise exponentially. So a diminishing-returns curve is expected for GFI's impact - at least insofar as its work on clean meat is concerned.

Comment by cornelius on Fact checking comparison between trachoma surgeries and guide dogs · 2017-05-13T21:21:08.783Z · EA · GW

It's pretty much like you said in this comment and I completely agree with you and am putting it here because of how well I think you've driven home the point:

...I myself once mocked a co-worker for taking an effort to recycle when the same effort could do so much more impact for people in Africa. That's wrong in any case, but I was probably wrong in my reasoning too because of numbers.

Also, I'm afraid that some doctor will stand up during an EA presentation and say

You kids pretend to be visionaries, but in reality you don't have the slightest idea what you are talking about. Firstly, it's impossible to cure trachoma induced blindness. Secondly [...] You should go back to play in your sandboxes instead of preaching adults how to solve real world problems

Also, I'm afraid that the doctor might be partially right

Also, my experience has persistently been that the blindness vs. trachoma example is quite off-putting, in a "now this person who might have gotten into EA is going to avoid it" kind of way. So if we want more EAs, this example seems miserably inept at getting people into EA. I myself have stopped using the example in introductory EA talks altogether. I might be an outlier, though, and will start using it again if given a good argument that it works well, but I suspect I'm not the only one who has seen better results introducing people to EA by not bringing up this example at all. With all the uncertainty around it, it would seem that both emotions and numbers argue against the EA community using this example in introductory talks? Save it for the in-depth discussions that happen after an intro instead?

Comment by cornelius on Fact checking comparison between trachoma surgeries and guide dogs · 2017-05-11T00:43:46.767Z · EA · GW

This is a great post and I thank you for taking the time to write it up.

I ran an EA club at my university and held a workshop where we covered all the philosophical objections to Effective Altruism. All the objections were fairly straightforward to address except for one, which - in addressing it - seemed to upend how many participants viewed EA, given the image of it they had so far. That objection is: Effective Altruism is not that effective.

There is a lot to be said for this objection, and I highly, highly recommend anyone who calls themselves an EA to read up on it here and here. None of the other objections to EA seem to me to have nearly as much moral urgency as this one. If we call this thing we do EA and it is not E, I see a moral problem. If you donate to deworming charities and have never heard of the worm wars, I also recommend taking a look at this, which is a good-faith attempt to track the entire "deworming-isn't-that-effective" controversy.

Disclaimer: I donate to SCI and rank it near the top of my priorities, just below AMF currently. I even donate to less certain charities like ACE's recommendations. So with this comment I certainly don't mean to dissuade anyone from donating. Reasoning under uncertainty is a thing, and you can see these two recent posts if you desire insight into how an EA might try to go about it effectively.

The take-home of this, though, is the same as the three main points raised by OP. If it had been made clear to us from the get-go what mechanisms are at play in determining how much impact an individual has with their donation to an EA-recommended charity, then this "EA is not E" objection would have been as innocuous as the rest. Instead, after addressing this concern and setting straight how things actually work (I still don't completely understand it; it's complicated), participants felt their initial exposure to EA (such as through the guide dog example and other over-simplified EA infographics that strongly imply it's as simple and obvious as "donation = lives saved") contained false advertising. The words "slight disillusionment" come to mind, given these were all dedicated EAs going into the workshop.

So yes, I bow down to the almighty points bestowed by OP:

  • many of us were overstating the point that money goes further in poor countries
  • many of us don't do enough fact checking, especially before making public claims
  • many of us should communicate uncertainty better

Btw, the "scope insensitive" link does not seem to work, I'm afraid. (Update: Thanks for fixing!)

Comment by cornelius on Why you should consider going to EA Global · 2017-05-10T23:30:40.741Z · EA · GW

Everyone is warm (±37°C, ideally), open-minded, reasonable and curious.

You, sir, will be thoroughly quoted and requoted on this gem, lol. I commend this heartfelt post.

Comment by cornelius on Why I left EA · 2017-05-10T20:46:00.583Z · EA · GW

One thing I'm unclear on is:

Is s/he leaving the EA community and retaining the EA philosophy, rejecting the EA philosophy and staying in the EA community, or leaving both?

What EAs do and what EA is are two different things, after all. I'm going to guess leaving the EA community, given that, yes, most EAs are utilitarians, and this seems to be foundational to Lila's reasons for leaving. However, the EA philosophy is not utilitarian per se, so you'd expect there to be many non-utilitarian EAs. I've commented on this before here. Many of us are not utilitarian - 44% of us, in fact, according to the 2015 survey. The linked survey results argue that this sample accurately estimates the actual EA population. 44% is a lot of non-utilitarian EAs. I imagine many of them aren't as engaged in the EA community as the utilitarian EAs, despite self-identifying as EAs.

If s/he is just leaving the community then, to me, this is only disheartening insofar as s/he doesn't interact with the community from this point on. So I do hope Lila continues to be an EA outside of the EA community, where s/he can spread goodness in the world using her/his non-utilitarian prioritarian ethics (prioritizing victims of violence) with the EA philosophy as a guide.

The "movement isn't diverse enough" is a legitimate complaint and a sound reason to leave a movement if you don't feel like you fit in. So s/he might well do much better for the world elsewhere in some other movement that has a better personal fit. And as long as she stays in touch with EA then we can have some good 'ol moral trade for the benefit of all. This trade could conceivably be much more beneficial for EA and for Lila if s/he is no longer in the EA community.

Comment by cornelius on Scientific Charity Movement · 2017-05-07T10:29:23.835Z · EA · GW

The movement started around 1870 and still appears to have been active around 1894 (the latest handbook in the OP). WW1 was 1914-1918 and WW2 1939-1945. I'd like to know if it survived to 1945. If it did, that is its cut-off, since my guess is that it died very quickly after WW2, when eugenics rapidly spread through the world's collective consciousness as an unspeakable evil. I imagine the movement couldn't adapt quickly enough to the bad PR and silently faded or rebranded itself. For instance, the Charity Organization Society of Denver, Colorado, is the forerunner of the modern United Way of America.

So I imagine the lesson for EA is to beware the rapid and irreversible effects of having EA tied implicitly to something everyone everywhere has suddenly started to hate in the strongest possible terms. This is probably why it is a good idea for EA to stay out of politics. Once you associate a movement with something political, good luck disassociating yourself when some major bad stuff happens. Or maybe the lesson is just that EA should beware WW3. Who knows.

Comment by cornelius on Effective altruism: an elucidation and a defence · 2017-05-06T18:57:22.341Z · EA · GW

Update: Nir Eyal very much appears to self-identify as an effective altruist despite being a non-utilitarian. See this interview with Harvard EA, specifically about non-utilitarian effective altruism, and this article on Effective Altruism from 2015. Wikipedia even mentions him as a "leader in Effective Altruism."

Comment by cornelius on Effective altruism: an elucidation and a defence · 2017-05-03T20:04:05.844Z · EA · GW

No one has made any concerted effort to map the values of people who are not utilitarians, to come up with metrics that may represent what such people care about and evaluate charities on these metrics.

This appears to be demonstrably false - and in very strong terms, given how strong a claim you've made and how I only need to find one person to prove it wrong. We have many non-utilitarian egalitarian luminaries making a concerted effort to come up with exactly the metrics that would tell us, based on egalitarian/prioritarian principles, which charities/interventions we should prioritize:

  • Adam Swift: Political theorist, sociologist, specializes in liberal egalitarian ethics, family values, communitarianism, school choice, social justice.

  • Ole Norheim: Harvard physician and medical ethics prof., working on distributive theories of justice and fair priority setting in low- and high-income countries. Is the head of the Priority Setting in Global Health (2012-2017) research project, which is aiming to do exactly what you claimed nobody is working on.

  • Alex Voorhoeve: Egalitarian theorist, member of Priority Setting in Global Health project, featured on BBC, has co-authored with Norheim unsurprisingly

  • Nir Eyal: Harvard Global Health and Social Medicine Prof., specializes in population-level bioethics. Is currently working on a book that defends an egalitarian consequentialist (i.e. instrumental egalitarianism) framework for evaluating questions in bioethics and political theory.

All of these folks are mentioned in the paper.

I don't want to call these individuals Effective Altruists without having personally seen/heard them self-identify as such, but they have all publicly pledged 10% of their lifetime income to effective charities via Giving What We Can.

So if the old adage "actions speak louder than words" still rings true, then these non-utilitarians are far "more EA" than any number of utilitarians who publicly proclaim that they are part of effective altruism but then do nothing.

And none of this should be surprising. The 2015 EA Survey shows that only 56% of respondents identify as utilitarian. The linked survey results argue that this sample accurately estimates the actual EA population. This would mean that ~44% of all EAs are non-utilitarian. That's a lot. So even if utilitarians are the single largest group, of course the rest of us non-utilitarian EAs aren't just lounging around.

Comment by cornelius on The 2017 Effective Altruism Survey - Please Take! · 2017-04-29T20:48:22.788Z · EA · GW

I think that joint donations not only with kin or via couples, but with friends in an extended community, may become more common if EA becomes more prevalent in collectivist cultures. Right now EA is focused primarily in the UK, Netherlands, Germany, Switzerland, Australia and America, which are all pretty much your archetypal individualist cultures.

I mention this because I consistently notice the trend of the EA community focusing on advertising what the individual can accomplish with their donation. This may not be best if EA is to achieve broad appeal in pretty much any country in Asia, where an appeal to what a community can accomplish with a collective donation might resonate drastically more.

I'm no expert on this topic though.

Comment by cornelius on Why I left EA · 2017-04-11T07:40:46.329Z · EA · GW

I'm confused and your 4 points only make me feel I'm missing something embarrassingly obvious.

Where did I suggest that valuing saving overall good lives means we are failing to achieve a shared goal of negative utilitarianism? In the first paragraph of my post - the part you seem to think is misleading - I thought I specifically suggested exactly the opposite.

And yes, negative utilitarianism is a useful ethical theory that many EAs and philosophers will nonetheless reject given particular real-world circumstances. And I wholeheartedly agree. This is a whole different topic, though, so I feel like you're getting at something others think is obvious that I'm clearly missing.

Comment by cornelius on Effective altruism: an elucidation and a defence · 2017-03-26T19:02:49.558Z · EA · GW

Put this way, I change my mind and agree it is unclear. However, to make your paper stronger, I would have included something akin to what you just wrote, to make it clear why you think Gabriel's use of "iteration effects" is unclear and not the same as his usage in the 'priority' section.

I'm not sure how important clarifying something like this is for philosophical argumentation, but for me, this was the one nagging kink in what is otherwise fast becoming one of my favourite "EA-defense" papers.

Comment by cornelius on Effective altruism: an elucidation and a defence · 2017-03-26T06:14:25.732Z · EA · GW

I notice this in your paper:

He also mentions that cost-effectiveness analysis ignores the significance of ‘iteration effects’ (page 12)

Gabriel uses "iterate" in his ultra-poverty example, so I'm fairly certain that usage is what he was referring to here:

Therefore, they would choose the program that supports literate men. When this pattern of reasoning is iterated many times, it leads to the systematic neglect of those at the very bottom, a trend exemplified by how EAs systematically neglect focusing on the very bottom in the first world. This is unjust (with my edits)

So it's the same with using the DALY to assess cost-effectiveness. He is concerned that if you scale up or replicate a program that is cost-effective according to DALY calculations, you would ignore iteration effects whereby a subset of those receiving the treatment might be systematically neglected - and that this goes against principles of justice and equality. Therefore, using cost-effectiveness as a means of deciding what is good or which charity to fund is on morally shaky ground (according to Gabriel). This is how I understood him.

Comment by cornelius on Effective altruism: an elucidation and a defence · 2017-03-26T02:04:44.840Z · EA · GW

Perhaps "systemic change bias" needs to be coined, or something to that effect, to be used in further debates.

Might be useful in elucidating why people criticizing EAs always mischaracterize us as not caring about systemic change or harder-to-quantify causes.

Comment by cornelius on Effective altruism: an elucidation and a defence · 2017-03-23T21:36:17.555Z · EA · GW

Couldn't you just counter that if EA were around back then, having just started out trying to figure out how to do the most good, it would not have supported the abolitionist movement because of difficult EV calculations and because it was spending resources elsewhere? However, if the EA community had existed back then and had matured a bit, to the stage that something like OpenPhil existed as well (OpenPhil of course being an EA org, for those reading who don't know), then it would very likely have supported cost-effective campaigns in support of the abolitionist movement.

The EA community, like all entities, is an entity in flux. I don't like hearing "If it existed back then, it wouldn't support the abolitionist movement, and therefore it has problems, which may implicitly imply it is bad because it is thinking in a bad, quantification-biased, naughty way." This sounds like an unfair mischaracterization to me - especially given that you can just cherry-pick what the EA community was like at a particular time (how much it knew) and how many resources it had, specifically so that it wouldn't support the abolitionist movement, and then claim the reason is quantification bias.

What's better is "if EA existed back then as it existed in 2012/2050/20xy with x resources, then it would not support the abolitionist movement" - and now the factors of time and resources might well be a much better explanation than quantification bias for why EA wouldn't have supported the abolitionist movement.

Consider the EA community of 2050, which would have decades' worth of accumulated knowledge on how to deal with harder-to-quantify causes.

I suspect that if the EA community of 2050 had the resources of YMCA or United Way and existed in the 18th Century, it would have supported the hell out of the abolitionist movement.

Comment by cornelius on Why I left EA · 2017-03-06T05:11:44.020Z · EA · GW

Yes, precisely. Although - there are so many variants of negative utilitarianism that "precisely" is probably a misnomer.

Comment by cornelius on Why I left EA · 2017-03-03T22:56:38.579Z · EA · GW

Yea, as a two-level consequentialist moral anti-realist, I actually am pretty tired of EA's insistence on "how many lives we can save" instead of emphasizing how much "life fulfillment and happiness" you can spread. I always thought this was not only a PR mistake but also a utilitarian mistake. We're trying to prevent suffering, so obviously preventing instances where a single person goes through more suffering on the road to death is more morally relevant, utils-wise, than preventing a death with less suffering.

Nonetheless, this is the first I've heard that violence and exploitation are under-valued by EAs. It always seemed to me that EAs generally weep and feel angsty feelings in their gut when they read about the violence and exploitation of their fellow man. But what can we do? Regions of violence are notoriously difficult places to set up tractable interventions. As such, it always seemed to me that we should focus on what we know works, since lifting people out of disease and poverty empowers them to address issues of violence and exploitation themselves. And giving someone their own agency back in this way is, in my view, something worth putting a lot of moral weight on due to its long-term (albeit hard-to-measure) consequences.

And now I'm going to say something that I feel some people probably won't like.

I consistently feel that a lot of the critique of EA has to do with how others perceive EAs rather than what they are really like - i.e. prejudice. I mentioned above that I generally feel EAs are legitimately moved to tears (or whatever is a significant feeling for them) regarding issues of violence. But I find that as soon as such a person spends most of his/her time in the public space talking about math and weird utilitarian expected-value calculations, this person is suddenly viewed as no longer having a heart, or "the right heart." The amount of compassion and empathy a person has is not tied to what weird mathematical arguments they push out but to what they do and feel inside (this is how I operationalize "compassion," at any rate: an internal state leading to external consequences. Yes I know, that's a pretty virtue ethics way to look at it, so sue me).

Anyway, maybe part of this is because I know what it feels like to be the high-school nerd who secretly cries when he sees someone getting bullied at break time, but who then talks to people about, and develops extensively researched, weird ideas like transhumanism as a means of optimizing human flourishing (instead of, say, caring to go to the anti-bullying event that everyone thinks I should be going to if I really cared about bullying). It makes sense to me that many people think I have my priorities wrong. But it certainly isn't due to a lack of compassion and concern for my fellow man. It's not too hard to go from this analogy to arguing that the same holds for EAs more broadly.

This is perhaps what I absolutely love about the EA community. I've finally found a community of nerds where I can be myself and go in depth with uber-weird (any and all) ideas without being looked at as any less compassionate <3.

When people talk about ending violence and exploitation by doing something that will change the system that keeps these problems in place, I get upset. This "system" is often invisible and amorphous, and a product of ideology rather than, say, cost-effectiveness calculations. Why this gets me upset is that I often find it means people are willing to sacrifice giving someone their agency back - when it is clear you can do so by donating to proven disease and poverty alleviation interventions - to instead donate to or support a cause against violence and exploitation because it aligns with their ideology. This essentially seems to me a way of making donation about yourself - trying to make sure you feel content in your own ethical worldview, because specifically not doing anything about that violence and exploitation makes you feel bad - rather than making it about the individuals on the receiving end of the donation.

Yea, I know, my past virtue ethics predilections are showing again. Even if someone like the person I've described above supports an anti-violence cause that, though difficult to measure for effectiveness, is nonetheless doing a lot of good in the world we can't measure, I still don't like it. I'm caring what people think and arguing that certain self-serving thoughts appear morally problematic independent of the end result they cause. So let me show I'm also strongly opposed to forms of anti-realist virtue ethics: it's not enough to merely be aligned with the right way of thinking/ideology etc. and then have good things come from that. The end result - the actual people on the receiving end - is what actually matters. And this is why I find a "mostly" utilitarian perspective so much more humanizing than the views of the many people who get uncomfortable with its extreme conclusions and then reject the whole thing. A more utilitarian perspective forces you to make it about the receiver.

Whatever the case, writing this has made me sad. I'm sad to see you go; you seem highly intelligent and a likely asset to the movement, and as someone on the front line of EA and PR I take this as a personal failure, but I wish you the best. Does anyone know of any EA-vetted charities working on violence and exploitation prevention? Even ones that are a stretch tractability-wise would be good. I'd like to donate - it always makes me feel better.

Comment by cornelius on How We Run Discussions at Stanford EA · 2015-08-31T04:25:53.477Z · EA · GW

I can also vouch for the success of "What's one good thing and one bad thing that has happened to you this week/month/since last time?" Each person picks one of each and talks about it. Naturally, some people may bring up things related to EA very easily with this question if they are involved with it.