Posts

How do you decide between upvoting and strong upvoting? 2019-08-25T18:19:11.107Z
Explaining the Open Philanthropy Project's $17.5m bet on Sherlock Biosciences’ Innovations in Viral Diagnostics 2019-06-11T17:23:37.349Z
The case for taking AI seriously as a threat to humanity 2018-12-23T01:00:08.314Z
Pandemic Risk: How Large are the Expected Losses? Fan, Jamison, & Summers (2018) 2018-11-21T15:58:31.856Z

Comments

Comment by anonymous_ea on A Primer on the Symmetry Theory of Valence · 2021-09-10T02:15:42.079Z · EA · GW

Greg, I want to bring two comments that have been posted since your comment above to your attention:

  1. Abby said the following to Mike:

Your responses here are much more satisfying and comprehensible than your previous statements, it's a bit of a shame we can't reset the conversation.

2. Another anonymous commentator (thanks to Linch for posting) highlights that Abby's line of questioning regarding EEGs ultimately resulted in a response she found satisfactory but didn't have the expertise to evaluate further: 

if they had given the response that they gave in one of the final comments in the discussion, right at the beginning (assuming Abby would have responded similarly) the response to their exchange might have been very different i.e. I think people would have concluded that they gave a sensible response and were talking about things that Abby didn't have expertise to comment on:

_______


Abby Hoskin: If your answer relies on something about how modularism/functionalism is bad: why is source localization critical for your main neuroimaging analysis of interest? If source localization is not necessary: why can't you use EEG to measure synchrony of neural oscillations?

Mike Johnson: The harmonic analysis we’re most interested in depends on accurately modeling the active harmonics (eigenmodes) of the brain. EEG doesn’t directly model eigenmodes; to infer eigenmodes we’d need fairly accurate source localization. It could be there are alternative ways to test STV without modeling brain eigenmodes, and that EEG could give us. I hope that’s the case, and I hope we find it, since EEG is certainly a lot easier to work with than fMRI.

Abby Hoskin: Ok, I appreciate this concrete response. I don't know enough about calculating eigenmodes with EEG data to predict how tractable it is.

Comment by anonymous_ea on AI Timelines: Where the Arguments, and the "Experts," Stand · 2021-09-08T13:26:34.579Z · EA · GW

I appreciate you posting this picture, which I had not seen before. I just want to add that this was compiled in 2014, and some of the people in the picture have likely shifted in their views since then. 

Comment by anonymous_ea on Towards a Weaker Longtermism · 2021-08-09T19:46:54.828Z · EA · GW

Phil Trammell's point in Which World Gets Saved is also relevant: 

It seems to me that there is another important consideration which complicates the case for x-risk reduction efforts, which people currently neglect. The consideration is that, even if we think the value of the future is positive and large, the value of the future conditional on the fact that we marginally averted a given x-risk may not be.

...

Once we start thinking along these lines, we open various cans of worms. If our x-risk reduction effort starts far "upstream", e.g. with an effort to make people more cooperative and peace-loving in general, to what extent should we take the success of the intermediate steps (which must succeed for the x-risk reduction effort to succeed) as evidence that the saved world would go on to a great future? Should we incorporate the fact of our own choice to pursue x-risk reduction itself into our estimate of the expected value of the future, as recommended by evidential decision theory, or should we exclude it, as recommended by causal? How should we generate all these conditional expected values, anyway?

Some of these questions may be worth the time to answer carefully, and some may not. My goal here is just to raise the broad conditional-value consideration which, though obvious once stated, so far seems to have received too little attention. (For reference: on discussing this consideration with Will MacAskill and Toby Ord, both said that they had not thought of it, and thought that it was a good point.) In short, "The utilitarian imperative 'Maximize expected aggregate utility!'" might not really, as Bostrom (2002) puts it, "be simplified to the maxim 'Minimize existential risk'".

Comment by anonymous_ea on What EA projects could grow to become megaprojects, eventually spending $100m per year? · 2021-08-07T20:59:04.492Z · EA · GW

I like this idea in general, but would it ever really be able to employ $100m+ annually? For comparison, GiveWell spends about $6 million/year, and CSET was set up with $55m over 5 years (about $11m/year).
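
To make the scale gap concrete, here is a quick back-of-the-envelope comparison using the figures above (a rough sketch only; the "GiveWell-sized operations" framing is just my illustration):

```python
# Rough scale comparison using the figures quoted in this comment.
target_annual_spend = 100e6      # hypothetical megaproject budget, $/year
givewell_ops        = 6e6        # GiveWell's own operating spend, $/year
cset_annualized     = 55e6 / 5   # CSET's $55m over 5 years -> $11m/year

print(target_annual_spend / givewell_ops)     # ~16.7 GiveWell-sized operations
print(target_annual_spend / cset_annualized)  # ~9.1 CSET-sized centers
```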

Comment by anonymous_ea on Linch's Shortform · 2021-07-15T23:36:41.580Z · EA · GW

Thanks. Going back to your original impact estimate, I think the bigger difficulty I have in swallowing it and the claims related to it (e.g. "the ultimate weight of small decisions you make is measured not in dollars or relative status, but in stars") is not the probabilities of AI or space expansion, but what seems to me to be a pretty big jump from the potential stakes of a cause area (or the value possible in a future without any existential catastrophes) to the impact that researchers working on that cause area might have.

Joe Carlsmith has a small paragraph articulating some of my worries along these lines elsewhere on the forum:

Of course, the possibly immense value at stake in the long-term future is not, in itself, enough to get various practically-relevant forms of longtermism off the ground. Such a future also needs to be adequately large in expectation (e.g., once one accounts for ongoing risk of events like extinction), and it needs to be possible for us to have a foreseeably positive and sufficiently long-lasting influence on it. There are lots of open questions about this, which I won’t attempt to address here.

Comment by anonymous_ea on Linch's Shortform · 2021-07-05T21:09:47.563Z · EA · GW

So is the basic idea that transformative AI not ending in an existential catastrophe is the major bottleneck on a vastly positive future for humanity? 

Comment by anonymous_ea on Linch's Shortform · 2021-07-05T16:50:03.495Z · EA · GW

Conditioning upon us buying the importance of work at MIRI (and if you don't buy it, replace what I said with CEA or Open Phil or CHAI or FHI or your favorite organization of choice), I think the work of someone sweeping the floors of MIRI is just phenomenally, astronomically important, in ways that is hard to comprehend intuitively. 

(Some point estimates with made-up numbers: Suppose EA work in the next few decades can reduce existential risk from AI by 1%. Assume that MIRI is 1% of the solution, and that there are less than 100 employees of MIRI. Suppose variance in how good a job someone can do in cleanliness of MIRI affects research output by 10^-4 as much as an average researcher.* Then we're already at 10^-2 x 10^-2 x 10^-2 x 10^-4 = 10^-10 the impact of the far future. Meanwhile there are 5 x 10^22 stars in the visible universe)

Can you spell out the impact estimation you are doing in more detail? It seems to me that you first estimate how much a janitor at an org might impact the research productivity of that org, and then there's some multiplication related to the (entire?) value of the far future. Are you assuming that AI will essentially solve all issues and lead to positive space colonization, or something along those lines? 
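
For concreteness, here is the multiplication from the quoted estimate spelled out as a minimal sketch. All numbers are the made-up point estimates from the quote; reading the product as a "fraction of far-future value" is my interpretation, not necessarily how you intended it:

```python
# A minimal sketch of the quoted point estimate, using its made-up numbers.
xrisk_reduction   = 1e-2   # EA work reduces existential risk from AI by 1%
miri_share        = 1e-2   # MIRI is 1% of that solution
employee_share    = 1e-2   # one of fewer than 100 MIRI employees
cleanliness_ratio = 1e-4   # cleanliness affects research output 10^-4 as much
                           # as an average researcher

fraction_of_far_future = (xrisk_reduction * miri_share
                          * employee_share * cleanliness_ratio)
stars_in_visible_universe = 5e22

print(fraction_of_far_future)                              # 1e-10
print(fraction_of_far_future * stars_in_visible_universe)  # 5e+12 "stars' worth"
```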

Comment by anonymous_ea on Looking for more 'PlayPumps' like examples · 2021-05-28T16:57:08.808Z · EA · GW

I'm not sure Make-A-Wish is a good example given the existence of this study. Quoting Dylan Matthews from Future Perfect on it (emphasis added):

The average wish costs $10,130 to fulfill. Given that Malaria Consortium can save the life of a child under 5 for roughly $2,000 (getting a precise figure is, of course, tough, but it’s around that), you could probably save four or five children’s lives in sub-Saharan Africa for the cost of providing a nice experience for a single child in the US. For the cost of the heartwarming Batkid stunt — $105,000 — you could save the lives of some 50-odd kids.

So that’s why I’ve been hard on Make-A-Wish in the past, and why effective altruists like Peter Singer have criticized the group as well.

But now I’m reconsidering. A new study in the journal Pediatric Research, comparing 496 patients at the Nationwide Children’s Hospital in Columbus, Ohio, who got their wishes granted to 496 “control” patients with similar ages, gender, and diseases, found that the patients who got their wishes granted went to the emergency room less, and were less likely to be readmitted to the hospital (outside of planned readmissions).

In a number of cases, this reduction in hospital admissions and emergency room visits resulted in a cost savings in excess of $10,130, the cost of the average wish. In other words, Make-A-Wish helped, and helped in a cost-effective way.

Comment by anonymous_ea on Draft report on existential risk from power-seeking AI · 2021-05-08T21:02:21.863Z · EA · GW

your other comment

This links to A Sketch of Good Communication, not whichever comment you were intending to link :)

Comment by anonymous_ea on Concerns with ACE's Recent Behavior · 2021-04-18T17:59:00.991Z · EA · GW

You know, this makes me think I know just how academia was taken over by cancel culture. 

It's a very strong statement that academia has been taken over by cancel culture. I definitely agree that there are some very concerning elements (one of the ones I find most concerning is the University of California diversity statements), but academia as a whole is quite big, and you may be jumping the gun quite a bit. 

Comment by anonymous_ea on Please stand with the Asian diaspora · 2021-03-23T01:02:16.341Z · EA · GW

I guess I can't resist one last comment - please feel free to not reply any further. 

This seems clearly true to me, but I don't see how it explains the things that I'm puzzled by. 

To put it in rough Bayesian terms - I think your priors about what other people are saying, and why, are firing too strongly. This makes it hard to understand people who are coming from a different place, and it is what throws up the apparent elementary reasoning errors and anomalies you see. I wonder if you've previously encountered EAs or similar types of people saying the kinds of things jsteinhardt is saying here and meaning them sincerely, not performatively. I think some people, especially more online people, haven't. 

Comment by anonymous_ea on Open and Welcome Thread: March 2021 · 2021-03-22T16:50:54.335Z · EA · GW

Welcome Sive!

Comment by anonymous_ea on Please stand with the Asian diaspora · 2021-03-22T13:55:01.636Z · EA · GW

Thanks for explaining. I don't wish to engage further here [feel free to reply though of course], but FWIW I don't agree that there are any reasoning errors in Jacob's post or any anomalies to explain. I think you are strongly focused on a part of the conversation that is of particular importance to you (something along the lines of whether people who are not motivated or skilled at expressing sympathy will be welcome here), while Jacob is mostly focused on other aspects. 

Comment by anonymous_ea on Please stand with the Asian diaspora · 2021-03-21T22:33:12.693Z · EA · GW

what appears to me to be a series of anomalies that is otherwise hard to explain

What do you believe needs explaining? 

Comment by anonymous_ea on Please stand with the Asian diaspora · 2021-03-21T17:15:18.516Z · EA · GW

This might be a minor point, but personally I think it's better to avoid making generalizations of how an entire community must be feeling. Some members of the Asian community are unaware of recent events, while others may not be particularly affected by them. Perhaps something more along the lines of "I understand many people in the Asian community are feeling hurt right now" would be generally better. 

Comment by anonymous_ea on Please stand with the Asian diaspora · 2021-03-21T03:14:39.570Z · EA · GW

I'm curious how xccf's comment elsewhere on this thread fits in with your position as expressed here. 

Comment by anonymous_ea on [deleted post] 2021-03-20T15:44:10.817Z

Ben Hoffman's GiveWell and the problem of partial funding was also posted here on the forum, with replies from Open Phil and GiveWell staff. 

Comment by anonymous_ea on [deleted post] 2021-03-19T19:09:45.330Z

I don't have any advice to offer, but as a datapoint for you: I applaud your goal and am even sympathetic to many of your points, but even I found this post actively annoying (unlike your previous ones in this series). It feels like you're writing a series of posts for your own benefit without actually engaging with your audience or interlocutors.  I think this is fine for a personal blog, but does not fit on this forum. 

Comment by anonymous_ea on Religious Texts and EA: What Can We Learn and What Can We Inform? · 2021-01-30T16:08:26.958Z · EA · GW

There's a Buddhists in Effective Altruism group as well. 

Comment by anonymous_ea on CEA update: Q4 2020 · 2021-01-15T22:54:59.758Z · EA · GW

Thanks for writing this!

Comment by anonymous_ea on Strong Longtermism, Irrefutability, and Moral Progress · 2021-01-08T21:43:19.459Z · EA · GW

It has, however, succumbed to a third — mathematical authority. Firmly grounded in Bayesian epistemology, the community is losing its ability to step away from the numbers when appropriate, and has forgotten that its favourite tools — expected value calculations, Bayes theorem, and mathematical models — are precisely that: tools. They are not in and of themselves a window onto truth, and they are not always applicable. Rather than respect the limit of their scope, however, EA seems to be adopting the dogma captured by the charming epithet shut up and multiply.

 

I wonder if this old post on GiveWell's blog (written by Open Phil's ED) about expected value calculations assuages your fears a bit: Why we can’t take expected value estimates literally (even when they’re unbiased)

Personally I think equating strong longtermism with longtermism is not really correct. Longtermism is a much weaker claim. I highly doubt most longtermists are in danger of being convinced that strong longtermism is true, although I don't have any real data on it. 

Comment by anonymous_ea on Long-Term Future Fund: Ask Us Anything! · 2020-12-11T02:53:51.389Z · EA · GW

I think I would spend a substantial amount of money on prizes for people who seem to have done obviously really good things for the world. Giving $10M to scihub seems worth it. Maybe giving $5M to Daniel Ellsberg as a prize for his lifetime achievements. There are probably more people in this reference class of people who seem to me to have done heroic things, but haven't even been remotely well enough rewarded (like, it seems obvious that I would have wanted Einstein to die having at least a few millions in the bank, so righting wrongs of that reference class seems valuable, though Einstein did at least get a Nobel prize). My guess is one could spend another $100M this way.

 

I'm really surprised by this; I think things like the Future of Life Award are good, but if I got $1B I would definitely not think about spending potentially $100m on similar awards as an EA endeavor. Can you say more about this? Why do you think this is so valuable? 

Comment by anonymous_ea on Long-Term Future Fund: Ask Us Anything! · 2020-12-05T23:29:57.533Z · EA · GW

Regardless of what happens, I've benefited greatly from all the effort you've put into your public writing on the fund, Oliver. 

Comment by anonymous_ea on How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs · 2020-09-05T23:58:39.024Z · EA · GW

I learned from stories 1 and 2 - thanks for the information!

Story 3 feels like it suffers from a lack of familiarity with EA and argues against a straw version. E.g. you write (emphasis added):

As the community grew it spread into new areas – Animal Charity Evaluators was founded in 2012 looking at animal welfare – the community also connected to the rationalist community that was worried about AI and to academics at FHI thinking about the long term future. Throughout all of this expected value calculations remained the gold star for making decisions on how to do good. The idea was to shut up and multiply. Even as effective altruism decision makers spread into areas of greater and greater uncertainty they (as far as I can tell) have mostly continued to use the same decision making tools (expected value calculations), without questioning if these were the best tools.

By 2011 GiveWell had already published Why we can’t take expected value estimates literally (even when they’re unbiased), arguing against, well, taking expected value calculations literally, critiquing GWWC's work on that basis, and discussing how their solution avoided Pascal's Mugging. There was a healthy discussion in the comments and the cross-post on LessWrong got 100 upvotes and 250 comments.

Comment by anonymous_ea on KevinO's Shortform · 2019-12-06T19:09:58.021Z · EA · GW

I just voted for the GFI, AMF, and GD videos because of your comment!

Comment by anonymous_ea on I'm Michelle Hutchinson, head of advising at 80,000 Hours, AMA · 2019-12-05T17:38:45.605Z · EA · GW

Even if that's not what edoard meant, I would be interested in hearing the answer to 'what are things you would say if you didn't need to be risk averse?'!

Comment by anonymous_ea on [Link] A new charity evaluator (NYTimes) · 2019-11-27T20:14:20.562Z · EA · GW

I hope ImpactMatters does well!

Comment by anonymous_ea on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-21T18:32:47.698Z · EA · GW

Meta: A big thank you to Buck for doing this and putting so much effort into it! This was very interesting and will hopefully encourage more public dissemination of knowledge and opinions.

Comment by anonymous_ea on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-20T01:48:37.339Z · EA · GW

I agree with Issa about the costs of not giving reasons. My guess is that over the long run, giving reasons why you believe what you believe will be a better strategy to avoid convincing people of false things. Saying you believed X and now believe ~X seems like it's likely to convince people of ~X even more strongly.

Comment by anonymous_ea on I'm Buck Shlegeris, I do research and outreach at MIRI, AMA · 2019-11-16T21:59:58.824Z · EA · GW

What other crazy ideas do you have about EA outreach?

Comment by anonymous_ea on Summary of Core Feedback Collected by CEA in Spring/Summer 2019 · 2019-11-12T18:58:13.248Z · EA · GW
I think there may be a misunderstanding – the title of this post is “Feedback Collected by CEA”, not “for” CEA.

This is fair, but I want to give some examples of why I thought this document was about feedback about CEA, with the hope of helping with communication around this in the future. Even after your clarification, the document still gives a strong impression to me of the feedback being about CEA, rather than about the community in general. Below are some quotes that make it sound that way to me, with emphasis added:

Summary of Core Feedback Collected by CEA in Spring/Summer 2019

The title doesn't mention what the feedback is about. I think most people would assume that it refers to feedback about CEA, rather than the community overall. That's what I assumed.

CEA collects feedback from community members in a variety of ways (see “CEA’s Feedback Process” below). In the spring and summer of 2019, we reached out to about a dozen people who work in senior positions in EA-aligned organizations to solicit their feedback. We were particularly interested to get their take on execution, communication, and branding issues in EA. Despite this focus, the interviews were open-ended and tended to cover the areas each person felt was important.
This document is a summary of their feedback. The feedback is presented “as is,” without any endorsement by CEA.

It's not clearly stated what the feedback is about ("CEA collects feedback" and "solicit their feedback" appear without elaboration). The closest the document comes to specifying the scope is the mention that CEA was particularly interested in feedback on execution, communication, and branding issues in EA. This is still fairly vague, and "branding" to me implies that the feedback is about CEA. It does say "...issues in EA", but I didn't attach much importance to that.

This post is the first in a series of upcoming posts where we aim to share summaries of the feedback we have received.

In general, I assume that feedback to an organization is about the organization itself.

CEA has, historically, been much better at collecting feedback than at publishing the results of what we collect.

While it's again unclear what "feedback" refers to, in general I would expect this to mean feedback about CEA.

As some examples of other sources of feedback CEA has collected this year:
We have received about 2,000 questions, comments and suggestions via Intercom (a chat widget on many of CEA’s websites) so far this year
We hosted a group leaders retreat (27 attendees), a community builders retreat (33 attendees), and had calls with organizers from 20 EA groups asking about what’s currently going on in their groups and how CEA can be helpful
Calls with 18 of our most prolific EA Forum users, to ask how the Forum can be made better.
A “medium-term events” survey, where we asked everyone who had attended an Individual Outreach retreat how the retreat impacted them 6-12 months later. (53 responses)
EA Global has an advisory board of ~25 people who are asked for opinions about content, conference size, format, etc., and we receive 200-400 responses to the EA Global survey from attendees each time.

All of these are examples of feedback about CEA or its events and activities. There are no examples of feedback about the community.

I think the confusion comes from the lack of clear elaboration in the title and/or beginning of the document of what the scope of the feedback was. Clarifying this in the future should eliminate this problem.

Comment by anonymous_ea on Summary of Core Feedback Collected by CEA in Spring/Summer 2019 · 2019-11-09T21:13:14.894Z · EA · GW

Note: The comment you and Ben replied to seems to have disappeared.

Comment by anonymous_ea on Forum update: New features (November 2019) · 2019-11-09T21:10:56.607Z · EA · GW

I'm really excited about subscribing and bookmarking! Pingbacks also seem useful.

Comment by anonymous_ea on Effective Altruism and International Trade · 2019-11-07T20:56:27.860Z · EA · GW
EA headlining money and health as a cause priority while dropping education. + spending no money on education is straight out saying a lot about the priorities of EA.
EA gives zero value to education, and that is fundamentally wrong.

I don't think the last sentence follows from the ones before it. EA is fundamentally about doing the most good possible, not about doing good in every area that is valuable. EA will (hopefully) always be about focusing on the relatively few areas where we can do the most good. Not funding almost everything in the world doesn't mean that EA thinks almost everything in the world has zero value. It is true that education for the sake of education is not a priority for EAs, but that doesn't mean EAs think education isn't important. In fact, EA is very disproportionately composed of highly educated people - presumably at least some of them value education highly.

Comment by anonymous_ea on Opinion: Estimating Invertebrate Sentience · 2019-11-07T20:44:00.030Z · EA · GW

I've been impressed by the work being produced by Rethink Priorities over the past several months. I appreciate the thought and nuance that went into this. Great job again!

Comment by anonymous_ea on Summary of Core Feedback Collected by CEA in Spring/Summer 2019 · 2019-11-07T20:32:51.485Z · EA · GW

I want to echo this. I would love to see CEA talk more about what they see as their mistakes and achievements, but this felt like a confusing mixture of feedback about some aspects of CEA (mostly EA Global, the EA Forum, and the Community Health team) and general feedback about the EA community that CEA only has partial control over. While CEA occupies an important position in EA, there are many factors beyond CEA that contribute to whether EA community members are smart and thoughtful, or not welcoming enough.

Comment by anonymous_ea on Formalizing the cause prioritization framework · 2019-11-06T17:37:31.826Z · EA · GW

Update: The pictures load for me now

Comment by anonymous_ea on Formalizing the cause prioritization framework · 2019-11-05T20:59:00.910Z · EA · GW

None of the images display for me either. This is what it looks like for me:


Let's see how this works graphically. First, we start with tractability as a function of dollars (crowdedness), as in Figure 1. With diminishing marginal returns, "% solved/$" is decreasing in resources.

Next, we multiply tractability by importance to obtain MU/$ as a function of resources, in Figure 2. Assuming that Importance = "utility gained/% solved" is a constant[2], all this does is change the units on the y-axis, since we're multiplying a function by a constant.

Now we can clearly see the amount of good done for an additional dollar, for every level of resources invested. To decide whether we should invest more in a cause, we calculate the current level of resources invested, then evaluate the MU/$ function at that level of resources. We do this for all causes, and allocate resources to the highest MU/$ causes, ultimately equalizing MU/$ across all causes as diminishing returns take effect. (Note the similarity to the utility maximization problem from intermediate microeconomics, where you choose consumption of goods to maximize utility, given their prices and subject to a budget constraint.)
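
For readers who also can't see the figures, here is a minimal formalization of the framework as I read the quoted text (the notation is mine; R denotes resources already invested in the cause):

```latex
% Notation is mine, reconstructed from the quoted text; R = resources invested.
% Importance is treated as a constant; Tractability is decreasing in R
% (diminishing marginal returns).
\[
\frac{MU}{\$}(R)
  \;=\;
\underbrace{\frac{\text{utility gained}}{\%\text{ solved}}}_{\text{Importance (constant)}}
  \times
\underbrace{\frac{\%\text{ solved}}{\$}(R)}_{\text{Tractability, decreasing in } R}
\]
% Allocation rule from the quote: fund whichever cause currently has the highest
% MU/$, which as diminishing returns take effect equalizes MU/$ across causes:
\[
\frac{MU_i}{\$}(R_i) \;=\; \frac{MU_j}{\$}(R_j)
\quad \text{for all funded causes } i, j.
\]
```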

Comment by anonymous_ea on EA Forum Prize: Winners for September 2019 · 2019-11-04T23:47:39.936Z · EA · GW

It might be good to have a small number of runner-up posts without cash prizes. That would certainly help motivate me to post more.

Comment by anonymous_ea on What are your top papers of the 2010s? · 2019-10-23T18:25:28.184Z · EA · GW

Can you expand on how this influenced you?

Comment by anonymous_ea on Ineffective Altruism: Are there ideologies which generally cause there adherents to have worse impacts? · 2019-10-18T19:31:50.047Z · EA · GW

While I think that was a valuable post, the definition of ideology in it is so broad that even things like science and the study of climate change would be ideologies (as kbog points out in the comments). I'm not sure what system or way of thinking wouldn't qualify as an ideology based on the definition used.

Comment by anonymous_ea on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-11T01:26:07.557Z · EA · GW

Datapoint for Hauke: I am also very interested in this topic and Hauke's thoughts on it, but found that the formatting made it difficult for me to read the post fully.

Comment by anonymous_ea on What actions would obviously decrease x-risk? · 2019-10-08T23:05:56.829Z · EA · GW

A general comment about this thread rather than a reply to Khorton in particular: The original post didn't suggest that this should be a brainstorming thread, and I didn't interpret it like that. I interpreted it as a question looking for answers that the posters believe, rather than only hypothesis generation/brainstorming.

Comment by anonymous_ea on Long-Term Future Fund: August 2019 grant recommendations · 2019-10-08T03:16:57.646Z · EA · GW

I'm sorry to see the strong downvotes, especially when you've put more effort into explaining your thinking and genuinely engaging with critiques than perhaps all other EA Funds grantmakers put together. I want you to know that I found your explanations very helpful and thought-provoking, and I really like how you've engaged with criticisms both in this thread and in the last one.

Comment by anonymous_ea on EA Meta Fund and Long-Term Future Fund are looking for applications again until October 11th · 2019-09-20T04:57:06.668Z · EA · GW
We will likely not be able to make the following types of grants:
Self-development that is not directly related to community benefit
In order to make grants the public benefit needs to be greater than the private benefit to any individual. So we cannot make grants that focus on helping a single individual in a way that isn’t directly connected to public benefit.

Is this in response to some of the criticisms of the April 2019 Long Term Future grants in this thread or elsewhere?


Comment by anonymous_ea on Who would you like to see do an AMA on this forum? · 2019-08-27T17:46:48.467Z · EA · GW

I'm a single person using a single anonymous account :) Other anonymous accounts use other names.

Comment by anonymous_ea on Who would you like to see do an AMA on this forum? · 2019-08-26T16:00:29.572Z · EA · GW

Are people with disabilities or (especially) people who are intersex or non-binary under-represented in EA? My intuition is that there may be a greater proportion of them in EA than in the general population, but I haven't checked this. Or did you mean that their opinions, experiences, and perspectives might not be very visible?

Comment by anonymous_ea on Who would you like to see do an AMA on this forum? · 2019-08-26T05:56:01.773Z · EA · GW

A list of people I can think of right now whose AMA I would at least consider asking a question on (non-exhaustive):

General categories: Employees at EA orgs, people with deep domain expertise in specific areas, EAs who've spent a long time researching a particular topic, EAs who want to do an AMA, non-EAs who want to do an AMA in good faith

Specific people: Toby Ord, Nick Beckstead, Nick Bostrom, Rob Wiblin, Ben Todd, Holden Karnofsky, Elie Hassenfeld, Kelsey Piper, Dylan Matthews, Nate Soares, Oliver Habryka, Julia Wise, Jeff Kaufman, Buck Shlegeris, Claire Zabel, Khorton, Larks, Jason Matheny, Eric Drexler, Rachel Glennerster, Michael Kremer, Peter Singer, Michelle Hutchinson, Holly Elmore, Kit Harris, kbog, Phil Trammell, Peter Hurford, Ozzie Gooen, Hilary Greaves, Julia Galef, Anna Salamon, Carl Shulman, Hauke Hillebrandt, Brian Tomasik, Luke Muehlhauser, Helen Toner, Scott Alexander, Simon Beard, Kaj Sotala, Tom Sittler, etc.

Comment by anonymous_ea on Ask Me Anything! · 2019-08-19T19:09:30.550Z · EA · GW

I don't understand why this question has attracted so many downvotes. It seems like a reasonable, if underspecified, question to me.

Edit: When I commented, this comment was at -2 with perhaps 15 votes.

Comment by anonymous_ea on Ask Me Anything! · 2019-08-19T19:05:26.842Z · EA · GW

I'd be super interested in hearing you elaborate more on most of the points! Especially the first two.