Announcing new EA Funds management teams
post by MarekDuda · 2018-10-27T08:48:11.774Z · 23 comments
As discussed in the EA Funds update post in August, CEA has been spending time over the last quarter creating a new management structure for several of the EA Funds. We are pleased to have concluded the process and to announce new management teams for the EA Community Fund (now renamed the EA Meta Fund) and the Long-Term Future Fund, as well as the addition of an expert team to the management of the Animal Welfare Fund (which continues to be chaired by Lewis Bollard).
We believe that involving a larger number of individuals in the management of the funds will have a number of benefits:
- It should result in more total effort going into the sourcing, vetting, and decision-making for each grant, which we expect to increase the quality and range of grants.
- It should increase the capacity available for publishing more detailed grant write-ups and other supporting content.
- It should increase the diversity of perspectives represented in funding decisions, as well as the combined coverage of the grantmakers' personal networks, allowing grant sourcing to go wider and deeper into the community than a single fund manager could.
- It is an opportunity for more individuals to develop grantmaking experience, thus growing the number of people with this skillset available to the community.
New Fund Management Teams:
EA Meta Fund (formerly EA Community Fund)
Chair: Luke Ding
Team: Denise Melchin, Matt Wage, Alex Foster, Tara MacAulay
Advisor: Nick Beckstead
The new management team for this fund have decided to rename the fund to better reflect the kind of grants they envision making. This is not a substantial change from how the fund has been run previously, but one change of scope is that the fund is unlikely to make further grants to local groups. The EA Community Building Grants project, run by CEA, focuses exclusively on funding local groups, and this fund does not expect to be granting to that space. Instead, the fund expects to support projects that are broadly referred to as ‘meta’ initiatives in the EA community, as well as to groups researching priority cause areas. The fund has historically granted most of its money to these types of organizations and has previously made one grant to a local group.
The fund managers expect to make grants to a relatively broad range of organizations in terms of maturity, and specifically plan to make some grants to new projects. In this way, there may be some overlap in scope with EA Grants, which is also run by CEA.
The fund will be chaired by Luke Ding, one of the first major donors from the early days of effective altruism. He has spent 50% of his time on EA-related philanthropy over the past 7 years and has donated millions of dollars to EA organizations during this time. Luke’s early donations played an important role in the rapid growth of the Centre for Effective Altruism, 80,000 Hours and Founders Pledge.
Nick Beckstead has agreed to stay on in an advisory capacity, providing continuity to donors and the benefit of his expertise to the new team.
Read more and see bios for the new team on the EA Meta Fund page at EA Funds.
Long-Term Future Fund
Chair: Matt Fallshaw
Team: Helen Toner, Oliver Habryka, Matt Wage, Alex Zhu
Advisors: Nick Beckstead, Jonas Vollmer
The fund will be chaired by Matt Fallshaw, cofounder of Bellroy and founder of Trike Apps. Matt has been involved in growing the EA movement since 2012, helping develop and host the original LessWrong and EA Forum websites, providing regular advice on organizational management and implementation to EA teams, supporting the growth of organizations including CEA, CFAR, and MIRI, and securing some of the early support for other EA projects. Matt currently splits his time between MIRI in Berkeley and his home in Melbourne, Australia.
The fund managers will consider grants to any causes focused on improving the long term future, but expect to favour activities such as research into possible existential risks and their mitigation, and especially work aimed at ensuring that advanced artificial intelligence systems are robust and beneficial.
Nick Beckstead has agreed to stay on in an advisory capacity for this fund also, and is joined by Jonas Vollmer from the EA Foundation in providing further expertise to the team.
Read more and see bios for the new team on the Long-Term Future Fund page at EA Funds.
Animal Welfare Fund
Chair: Lewis Bollard
Team: Jamie Spurgeon, Natalie Cargill, Toni Adleberg
Lewis has opted to use this opportunity to take on an expert team to assist with the management of this fund. He sees three primary benefits to this for the fund’s operation. First, the fund will now draw on a wider range of views and expertise from the animal welfare space, which is especially important given the dominant role that Open Philanthropy Project already plays in the space. Second, the fund will now draw on a deeper resource of time and experience, which will hopefully help identify more unique grant opportunities. Third, the fund will now have the capacity to better monitor the impact of grants to date, which will hopefully result in more learning for the fund managers and lessons that can be shared with the EA community.
The focus will remain on identifying the most cost-effective opportunities to reduce animal suffering over the long run. These opportunities will likely continue to mostly focus on factory farming, but may also support work on animal ethics, wild animal suffering, animal cognition, movement building, or other related fields.
Read more and see bios for the new team on the Animal Welfare Fund page at EA Funds.
Global Development Fund
The Global Development Fund, managed by Elie Hassenfeld, will continue to operate with Elie as the sole manager, and will serve as something of a control group to the changes we are making on the other three funds. Should the team-managed funds experiment prove to be successful, Elie will consider putting together a team to work with him on managing this fund from some time in mid-2019.
How we expect this to work
First of all, we ought to note that we see these changes as currently being in a pilot period. Our best guess is that these changes will improve the EA Funds platform and produce good results, including increased donor satisfaction and donation effectiveness; however, we will be ready to adjust any aspects that do not work as expected as we move forward.
New granting schedule
All of the funds will now move onto a fixed granting schedule, recommending and paying out grants three times per year: in November, February, and June. The exception is the Global Development Fund, which will follow a schedule of granting in December, March, and July, in order to stay synced with when GiveWell decides on and grants its discretionary funds.
We will revisit the question of whether this granting schedule is optimal after the June/July 2019 round of grants. We do not expect to reduce the frequency below twice a year after that, and expect to maintain this cadence unless it becomes clear during the pilot that it is sub-optimal.
Collective Decision Making Mechanisms
The largest risk of this approach is plausibly that teams fail to agree on which grants to make, or fall into sub-optimal group dynamics. Each team has therefore been thinking about how best to formalise its collective decision-making process, to minimise any such risks.
Each of the groups has settled on a slightly different approach, and this is another aspect which is part of the experimental nature of this new approach to fund management. As part of the broader review after the June 2019 set of grants, we will work with the teams to see if any of the approaches piloted in this period seem to have performed better or worse, and if there is a clear winner we may wish to standardise the process and have all teams adopt that approach.
Fans of approval voting will be pleased to hear that the Animal Welfare and Long-Term Future teams are considering utilising this in their processes.
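The post doesn't specify how the teams would apply approval voting, but the mechanism itself is simple to sketch: each fund manager approves any number of candidate grants, and candidates are ranked by how many managers approve of them. A minimal illustration in Python follows; the grant names and ballots are hypothetical, purely to show the tallying.

```python
from collections import Counter

def approval_tally(ballots):
    """Rank candidates by total approvals.

    Each ballot is the set of candidates that one fund manager
    approves of; a manager can approve any number of candidates.
    Returns (candidate, approval_count) pairs, most approvals first.
    """
    counts = Counter()
    for ballot in ballots:
        # A set guarantees each manager approves a candidate at most once.
        counts.update(set(ballot))
    return counts.most_common()

# Hypothetical ballots from four fund managers over three candidate grants.
ballots = [
    {"Grant A", "Grant B"},
    {"Grant B"},
    {"Grant B", "Grant C"},
    {"Grant A", "Grant C"},
]
print(approval_tally(ballots))
```

Here "Grant B" wins with three approvals; how a team would break ties or set an approval threshold is left open in the post.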
How to find out more and decide if you want to donate to the funds
Each of the Funds’ pages has been updated to reflect the new management teams, so please read through those for more information specific to each Fund. (Long-Term Future / Animal Welfare / EA Meta)
The new teams will be making their first set of grant recommendations in November and publishing their reasoning to their respective Fund’s page - this will allow donors to get further data on what kinds of grants each group is likely to make and how they reason about the grant making process, so that donors can take this information into account when making their decisions during giving season.
Furthermore, each of the new management teams will also be doing an AMA on the EA Forum before mid-December; the exact dates for these will be announced in the coming weeks.
Finally, for those attending EA Global London this weekend, representatives from the EA Meta Fund and Animal Welfare Fund management teams, as well as CEA, will be available to answer questions during an ‘office hours’ session on Sunday at 1:30pm.
Hopefully you’re as excited as we are about this new phase for EA Funds! We hope these changes will make it an overall better donation platform for the community, and increase the effectiveness of the average donation.
Comments sorted by top scores.
comment by Larks · 2018-10-28T20:15:34.685Z
I'm glad to see these changes; they seem like significant improvements to the structure. However, I think it would have been nice to see some official recognition that these changes are largely in response to problems that were foreseen by the community a long time ago.
↑ comment by MarekDuda · 2018-10-30T16:05:08.641Z
Thank you for your comment. I am perfectly happy to acknowledge that the changes announced in this post are to a large degree inspired by feedback from donors and the community. We see this project as a collaborative effort, and our plan is to continue to improve the platform, utilising such feedback in the process.
comment by John G. Halstead (Halstead) · 2018-10-29T17:01:09.141Z
Given the extensive and ongoing concerns about ACE research that have been raised by Harrison Nathan and myself, I am very surprised to see ACE researchers being selected for the EA Animal Fund. This suggests a lack of concern for accountability in the procedure for selecting people to the funds. What is the explanation for this?
↑ comment by Milan_Griffes · 2018-10-29T18:40:34.442Z
I'm also curious for more info about how teams were selected.
↑ comment by LewisBollard · 2018-11-01T04:38:21.611Z
Thanks for the feedback and questions, Halstead. I made the decision to include two ACE researchers (ACE decided which two staff to include), while I personally picked Natalie. I chose to include ACE because (a) while I've had concerns about their past research, I think their work at identifying giving opportunities has been very good, (b) several EA donors told me it would increase their faith in the fund, and its value to them, to have ACE's expertise and viewpoint represented, and (c) I've been personally impressed by all the ACE researchers I've met, especially regarding their intelligence, open-mindedness, and EA values alignment. I thought some of your (Halstead's) critiques of ACE were valid, but I don't view them as especially relevant to Toni and Jamie's ability to make outstanding giving recommendations via the fund.
I'm traveling in Asia so will be slow replying, but will try to ultimately reply to all messages here about the animal welfare fund (if only once I'm back in the US next week). Thanks for engaging with this!
↑ comment by John G. Halstead (Halstead) · 2018-11-01T12:41:45.011Z
Hi Lewis, thanks for this.
Is your view that they might happen to arrive at decent recommendations, or that the research method they use to arrive at those recommendations is good? I think the first is perhaps true but definitely not the second, and this should be sufficient disqualification. I'm loath to have to go over this again, but unfortunately it is necessary given this decision.
ACE have been around for six years and as of today have only two intervention reports on their website which they actually stand by: on leafleting and on protests. (The leafleting report shows that leafleting doesn't work.) They kept several long intervention reports, which were by their own admission poor, on their website for years until I published my critique. They only took their old leafleting report down around a year after Harrison Nathan pointed out how bad it was. They kept their grossly inaccurate 'impact calculator' on their website for a year after Nathan published his critique. Until only last year, their cost-effectiveness analyses contained various absurd figures, such as that the digital reach of their charities was in the billions. ACE does not even try to check whether the charities they assess played any role in claimed successful corporate campaigns, and until I published my critique, they relied on a paper on the welfare effects of hen systems which Open Phil explained was mistaken more than a year ago. They don't favour meat alternative research over charities doing corporate campaigns and the like, because counting long-term effects would be "unfair" to the latter.
Which piece of their research do you think is good, aside from the recent reports on leafleting and protests, and do you not think this is an adequate outcome after six years of operation?
Their response to criticism in both my case and in Harrison Nathan's has been to suggest that critics have 'misunderstood' their research and have presented their responses as opportunities for clarification. In fact, what we both pointed out was that there were and are extensive flaws with their research. This is not genuine accountability and makes me seriously concerned that they will not actually improve. Again, I didn't want to have to express my true views on this, and I thought I wouldn't have to as they would be left alone with time to improve rather than being given control over millions of dollars by CEA.
For these reasons, I don't see how my critiques could not be highly relevant to whether they should be involved in the fund. Do you think the consistent publication of low quality research over the course of years is irrelevant to the ability to do research in the future? Or do you think that their research has actually been better than I have suggested? If so, I would be interested in which parts you think are indeed better.
↑ comment by LewisBollard · 2018-11-17T02:47:22.261Z
Thanks for your feedback and questions, and thanks for your patience while I was traveling. On reflection, I think I made a mistake in delegating two seats on the Fund to ACE, rather than picking Toni and Jamie independently. My intention was to increase the Fund’s ideological diversity (ACE researchers have a range of viewpoints, and I wanted to avoid the natural bias to pick those who shared mine). But I now think this benefit is outweighed by the harm that the Fund could be misperceived as reflecting ACE’s organizational views or being based on ACE research.
Otherwise, I worry we’re talking past each other. I agree with several, though not all, of your criticisms of ACE's historical performance. But I also think ACE's charity recommendations have created substantial value by driving donations toward higher-impact activities (though I don't always agree with them). I believe this more because of my independent view of the activities and groups involved than because of ACE's public writing.
More importantly, I don’t think your criticisms of ACE reflect on Toni and Jamie’s ability to help the Fund accomplish the goals we established: a wider range of views, a deeper resource of time, and more capacity to monitor impact. Both are smart, have different ideas on how to most effectively fund animal groups within an EA framework, and have much more time than I do to identify new giving opportunities. And both have an open-mindedness and commitment to truth that I think is critical for objectively assessing impact.
Thanks again for engaging with this decision, and the Fund, so thoughtfully. We look forward to sharing updates on the Fund’s donations in the coming months. And thank you, as always, to everyone for your support of effective animal advocacy — whether via the Fund or directly.
↑ comment by Dunja · 2018-11-01T09:12:59.335Z
Thanks for the explanation, Lewis. In order to make the team as robust as possible to criticism, and as reliable as possible, wouldn't it be better to have a diverse team, consisting also of critics of ACE? That would send the right message to donors, as well as to anyone taking a closer look at EA organizations. I think it would also benefit ACE, since their researchers would have an opportunity to work directly with their critics.
↑ comment by LewisBollard · 2018-11-17T02:49:25.729Z
Thanks for your feedback and question, Dunja, and thanks for your patience while I was traveling. I agree that the Fund benefits from having a diverse team, but disagree that criticism of ACE is the right kind of ideological diversity. Both Toni and Jamie bring quite different perspectives on how to most cost-effectively help animals within an EA framework (see, for instance, the charities they’re excited about here). The Fund won’t be funding ACE now that they’re on board, and my guess is that we’ll continue to mostly fund smaller unique opportunities, rather than ACE top or standout charities. So I don’t think people’s views on ACE will be especially relevant to our giving picks here. I see less value in bringing in critics of EA (as many, though not all, of ACE’s critics are), since we'd have trouble reaching a consensus on funding decisions. Instead, I encourage those who are skeptical of EA views or the groups we fund to donate directly to effective animal groups they prefer.
↑ comment by Jeff Kaufman (Jeff_Kaufman) · 2018-10-31T15:01:32.063Z
Who would you have recommended for these spots?
My not-that-informed view is something like "there are a bunch of problems with ACE, but I'm not sure there's anyone better right now". But if you have people in mind who would have been better for this role, that would be really helpful to know!
↑ comment by John G. Halstead (Halstead) · 2018-10-31T17:51:59.552Z
I would have asked Harrison Nathan, as he has done some high quality research on the area, and really knows what is going on (though maybe he wouldn't have agreed). Aside from that, I'm not all that familiar with which other researchers there are, but there must be other viable options, and I think having a two person committee of Natalie and Lewis only would have been strongly preferable.
I think ACE researchers might well recommend some good stuff, but I'm troubled by the principle at play here. It suggests that documented past performance is irrelevant to whether the community allows you to make important decisions about millions of dollars. Imagine how this would look to non-EAs: it takes an outsider to review and criticise ACE's poor previous research, which still contains extensive and serious flaws today. The community then responds by giving ACE researchers control over a multi-million dollar fund. The incentives here are perverse, to say the least.
As I said in my post, I hope their research will improve in the future, but this is a hope not a guarantee and certainly does not justify the trust signalled by putting them in charge of millions of dollars.
comment by Dunja · 2018-10-30T09:47:33.334Z
I'd be curious to hear an explanation of how the team for the Long-Term Future Fund was selected. If they are expected to evaluate grants, including research grants, how do they plan to do that, what qualifies them for this job, and in case they are not qualified, which experts do they plan to invite on such occasions?
From their bio page I don't see which of them should count as an expert in the field of research (and in view of which track record), which is why I am asking. Thanks!
↑ comment by matt · 2018-10-31T00:59:27.086Z
Hi Dunja, I'm Matt Fallshaw, Chair of the fund. This response is an attempt to be helpful, but I'm not entirely sure what, in answer to your question, would qualify as a qualification. Perhaps it's relevant that I've been following the field for over 10 years, that I've been an advisor to MIRI (I joined their Board of Directors in 2014, a position I recently had to give up, and currently spend approaching half of my time working on MIRI projects), and that I'm an advisor to BERI. I chose the expert team (in consultation with Marek Duda), and I chose them for, among other things, their intelligence, knowledge, and connections (to both advisors and likely grantee orgs or individuals). We absolutely do intend to consult with experts (including Nick and Jonas, our listed advisors, and outside experts) when we don't feel that we have enough knowledge ourselves to properly assess a grant. Our connections span multiple continents, and when we don't feel qualified ourselves we will choose advisors relevant to each grant we consider. I'm not sure whether that response is going to be satisfying, so feel free to clarify your question and I'll try again.
↑ comment by Dunja · 2018-10-31T09:29:22.421Z
Hi Matt, thanks a lot for the reply! I appreciate your approach, but I do have worries, which Jonas, for instance, is very well aware of (I have been a strong critic of EAF policy and implementation of research grants, including those directed at MIRI and FRI).
My main worry is that evaluating grants aimed at research cannot be done without having them assessed by expert researchers in the given domain, that is, people who have a proven track-record in the given field of research. I think the best way to see why this matters is to take any other scientific domain: medicine, physics, etc. If we wanted to evaluate whether a certain research grant in medicine should be funded (e.g. a discovery of an important vaccine), it wouldn't be enough to just like the objective of the grant. We would have to assess:
- Methodological feasibility of the grant: are the announced methods conducive to the given goals? How will the project react to possible obstacles, and which alternative methods will in such cases be employed?
- Fit of the project within the state of the art: how well is the grant informed by the relevant research in the given domain (e.g. are some important methods and insights overlooked; is another research team already working on a related topic, where combining insights would increase the efficiency of the current project; etc.)?
Clearly, these questions cannot be answered by anyone who is not an expert in medicine. My point is that the same goes for research in any other scientific domain, from philosophy to AI. Hence, if your team consists of people who are enthusiastic about the topic, who have experience in reading about it, or who have experience in managing EA grants and non-profit organizations, that is not adequate expertise for evaluating research grants. The same goes for your advisors: Nick has a PhD in philosophy, but that's not enough to be an expert in, e.g., AI (it's not enough to be an expert in many domains of philosophy either, unless he has a track record of continuous research in the given domain). Jonas has a background in medicine, economics, and charity evaluations, but that has nothing to do with active engagement in research.
Inviting expert-researchers to evaluate each of the submitted projects is the only way to award research grants responsibly. That's precisely what both academic and non-academic funding institutions do. Otherwise, how can we possibly argue that the given funded research is promising and that we have done the best we can to estimate its effectiveness? This is important not only to assure the quality of the given research, but also to handle the donors' contributions responsibly, according to the values of EA in general.
My impression is that so far the main criteria employed when assessing the feasibility of grants are how trustworthy the team proposing the grant is, how enthusiastic they are about the topic, and how much effort they are willing to put into it. But we wouldn't take those criteria to be enough when it comes to the discovery of vaccines. We'd also want to see the track record of the given researchers in the field of vaccination, we'd want to hear what their peers think of the methods they wish to employ, etc. And the very same holds for research on the far future. While some may reply that the academic world is insufficiently engaged in some of these topics, or biased against them, that still doesn't mean there are no expert researchers competent to evaluate the given grants (moreover, requests for expert evaluations can be formulated in such a way as to target specific methodological questions and minimize the effect of bias). At the end of the day, if research should have an impact, it will have to gain the attention of that same academic world, in which case it is important to engage with its opinions and inform projects of possible objections early on. I could say more about these dangers of bias in the case of reviews and how to mitigate the given risks, so we can come back to this topic if anyone's interested.
Finally, I hope we can continue this conversation without prematurely closing it. I have tried to do the same with EAF and their research-related policy, but unfortunately, they have never provided any explanation for why expert reviewers are not asked to evaluate the research projects which they fund (I plan to do a separate longer post on that as soon as I catch some free time, but I'd be happy to provide further background in the meantime if anyone is interested).
↑ comment by Dunja · 2018-11-03T18:02:40.176Z
Update: this is all the more important in view of common ways one may accidentally cause harm by trying to do good, which I've just learned about through DavidNash's post. As the article points out, having the informed opinion of experts, and a dense network with them, can decrease the chances of harmful impacts, such as reputational harm or locking in on suboptimal choices.
↑ comment by Evan_Gaensbauer · 2018-10-31T17:04:22.606Z
What would you say qualifies as expertise in these fields? It's ambiguous, because it's not like universities are offering PhDs in 'Safeguarding the Long-Term Future.'
↑ comment by Dunja · 2018-10-31T18:05:06.978Z
That should always depend on the project at hand: if the project is primarily in a specific domain of AI research, then you need reviewers working precisely in that particular domain of AI; if it's in ethics, then you need experts working in ethics; if it's interdisciplinary, then you try to get reviewers from the respective fields. This also shows that it will be rather difficult (if not impossible) to have an expert team competent to evaluate each candidate project. Instead, the team should be competent in selecting the adequate expert reviewers (similarly to journal editors who invite expert reviewers for individual papers submitted to the journal). Of course, the team can do the pre-selection of projects, determining which are worthy of sending for expert review, but for that, it's usually useful to have at least some experience with research in one of the relevant domains, as well as with research proposals.
comment by Evan_Gaensbauer · 2018-10-28T10:46:31.733Z
Thank you for this. This satisfies virtually all the changes I suggested to the EA Funds in my post from July. I think the EA Funds in their prior form would have benefited from major donors to the funds being more proactive in informing the fund managers what kinds of projects they'd generally like to see receive grants. That is something that is up to donors themselves, and not something the CEA can directly change. But it appears the CEA is facilitating that as much as they can.
While the Funds were predicated on the notion of many donors independently trying to evaluate the best projects or organizations within entire focus areas, this neglects the fact that, in the history of EA, some of the biggest donors to various causes are themselves also the best evaluators of those causes. However, it's clear the CEA understands this, having put individuals like Matt Wage and Luke Ding on the new fund management teams. Across the teams of each of the funds, it appears the fund managers will be in frequent contact with a large and diverse pool of donors to each of these focus areas.
comment by KevinWatkinson · 2018-10-28T20:20:48.362Z
I would like to know a bit more about the reasoning behind bringing in people from ACE and Sentience Politics to contribute to the Animal Welfare Fund.
From my point of view ACE is already heavily represented in terms of decision making in relation to animal organisations, particularly distributing funds to organisations affiliated to the "pragmatic" ideology favoured by most utilitarians in EAA.
Bringing more people on board to the Animal Welfare Fund is a good idea, and it seemed to offer an opportunity to take on a variety of perspectives to inform decision making (from people who hold those perspectives) and to be more representative in terms of theory. Instead, however, it seems to bolster a fairly narrow view associated with EAA. This is at least indicated by the track record of ACE and associated EAA organisations, which have historically marginalised organisations and perspectives by not accounting for or valuing them, particularly in relation to rights theory / ecofeminism.
I look forward to seeing how this develops, particularly if there is direction in terms of funding grassroots organisations and projects aligned to EA principles but working from the ground up.* Meanwhile, I presume donations to ACE will now shift back to the Open Philanthropy Project rather than be directed through EA Funds.
*In relation to this, I would like to see funders active in the animal movement space jointly allocate resources to convene a conference representing neglected views, from people who hold them, with the particular aim of assessing the impact of EAA funding on the broader animal movement and exploring possibilities and limitations.
↑ comment by Peter Wildeford (Peter_Hurford) · 2018-10-28T21:18:14.558Z
people from ACE and Sentience Politics to contribute to the Animal Welfare Fund
Worth noting that no one from Sentience Politics is on the Animal Welfare Fund. Lewis is from OpenPhil, Natalie is from Effective Giving, and Toni/Jamie are from ACE.
↑ comment by KevinWatkinson · 2018-10-29T07:50:09.364Z
I appreciate the clarification of where people are presently working. More information is available in the bios.